Multi-Dimensional Calibration of Impact Models

Written By

Lucas G. Horta, Mercedes C. Reaves, Martin S. Annett and Karen E. Jackson

Submitted: 30 March 2011 Published: 12 September 2011

DOI: 10.5772/21636

From the Edited Volume

Aeronautics and Astronautics

Edited by Max Mulder


1. Introduction

As computational capabilities continue to improve and the costs associated with test programs continue to increase, certification of future rotorcraft will rely more on computational tools along with strategic testing of critical components. Today, military standards (MIL-STD-1290A (AV), 1988) encourage designers of rotary-wing vehicles to demonstrate compliance with the certification requirements for impact velocity and volume loss by analysis. Reliance on computational tools, however, will only come after rigorous demonstration of the predictive capabilities of existing computational tools. NASA, under the Subsonic Rotary Wing Program, is sponsoring the development and validation of such tools. Jackson et al. (2006) discussed detailed requirements and challenges associated with certification by analysis. Fundamental to the certification effort is the demonstration of verification, validation, and calibration of algorithms for this class of problems. Work in this chapter deals with model calibration of systems undergoing impact loads.

The process of model calibration, which follows the verification and validation phases, involves reconciling differences between test and analysis. Most calibration efforts combine both heuristics and quantitative methods to assess model deficiencies, to consider uncertainty, to evaluate parameter importance, and to compute required model changes. Calibration of rotorcraft structural models presents particular challenges because the computational time, often measured in hours, limits the number of solutions obtainable in a timely manner. Oftentimes, efforts are focused on predicting responses at critical locations as opposed to assessing the overall adequacy of the model. For example, Kamat (1976) conducted a survey of what were, at the time, the most popular finite element analysis codes and of validation efforts comparing impact responses from a UH-1H helicopter drop test. Similarly, Wittlin and Gamon (1975) used the KRASH analysis program for data correlation of the UH-1H helicopter. Another excellent example of a rotary-wing calibration effort is that of Cronkhite and Mazza (1988), who compared results from a U.S. Army composite helicopter crash test with simulation data from the KRASH analysis program. Recently, Tabiei, Lawrence, and Fasanella (2009) reported on a validation effort using anthropomorphic test dummy data from crash tests to validate an LS-DYNA (Hallquist, 2006) finite element model. Common to all these calibration efforts is the use of scalar deterministic metrics.

One complication with calibration efforts of nonlinear models is the lack of universally accepted metrics to judge model adequacy. Work by Oberkampf and Barone (2006) and later by Schwer (2007) are two noteworthy efforts that provide users with metrics to evaluate nonlinear time histories. Unfortunately, seldom does one see them used to assess model adequacy. In addition, the metrics as stated in (Oberkampf and Barone, 2006) and (Schwer, 2007) do not consider the multi-dimensional aspect of the problem explicitly. A more suitable metric for multi-dimensional calibration exploits the concept of impact shapes, as proposed by Anderson et al. (1998) and demonstrated by Horta et al. (2003). Aside from the metrics themselves, the verification, validation, and calibration elements, as described by (Roache, 1998; Oberkampf et al., 2003; Thacker, 2005; Atamturktur et al., 2010), must be adapted to rotorcraft problems. Because most applications in this area use commercially available codes, it is assumed that code verification and validation have been addressed elsewhere. Thus, this work concentrates on calibration elements only; in particular, on deterministic input parameter calibration of nonlinear finite element models. For non-deterministic input parameter calibration approaches, the reader is referred to (Kennedy and O'Hagan, 2001; McFarland et al., 2008).

Fundamental to the success of the model calibration effort is a clear understanding of the ability of a particular model to predict the observed behavior in the presence of modeling uncertainty. The approach proposed in this chapter is focused primarily on model calibration using parameter uncertainty propagation and quantification, as opposed to a search for a reconciling solution. The process set forth follows a three-step approach. First, analysis of variance (ANOVA), as described in work by (Sobol et al., 2007; Mullershon and Liebsher, 2008; Homma and Saltelli, 1996; Sudret, 2008), is used for parameter selection and sensitivity. To reduce the computational burden associated with variance-based sensitivity estimates, response surface models are created and used to estimate time histories. In our application, the Extended Radial Basis Functions (ERBF) response surface method, as described by Mullur and Messac (2005, 2006), has been implemented and used. Second, after ANOVA estimates are completed, uncertainty propagation is conducted to evaluate uncertainty bounds and to gauge the ability of the model to explain the observed behavior by comparing the statistics of the 2-norm of the response vector between analysis and test. If the model is reconcilable according to the metric, the third step seeks a parameter set that reconciles test with analysis by minimizing the prediction error using the optimization scheme proposed by Regis and Shoemaker (2005). To concentrate on the methodology development, simulated experimental data have been generated by perturbing an existing model. Data from the perturbed model are used as the target set for model calibration. To keep from biasing this study, changes to the perturbed model were not revealed until the study was completed.

In this chapter, the basic model calibration elements are described first, followed by an example using a helicopter model. These elements include time and spatial multi-dimensional metrics, parameter selection, sensitivity using analysis of variance, and an optimization strategy for model reconciliation. Other supporting topics discussed are sensor placement to assure proper evaluation of the multi-dimensional orthogonality metric, prediction of unmeasured responses from measured data, and the use of surrogates for computational efficiency. Finally, results for the calibrated helicopter model are presented and, at the end, the actual perturbations made to the original model are revealed for a quick assessment.

2. Problem formulation

Calibration of models is a process that requires analysts to integrate different methodologies in order to achieve the desired end goal: reconciling predictions with observations. Although in the literature the word “model” is used to mean many different forms of mathematical representations of physical phenomena, for our purposes the word refers to a finite element representation of the system. Starting with an analytical model that incorporates the physical attributes of the test article, this model is initially judged based on some pre-established calibration metrics. Although there are no universally accepted metrics, the work in this chapter uses two: one that addresses the predictive capability of time responses and a second that addresses the multi-dimensional spatial correlation of sensors for both test and analysis data. After calibration metrics are established, the next step in the calibration process involves parameter selection and uncertainty estimates using engineering judgment and available data. With parameters selected and uncertainty models prescribed, the effect of parameter variations on the response of interest must be established. If parameter variations are found to significantly affect the response of interest, then calibration of the model can proceed to determine a parameter set to reconcile the model. These steps are described in more detail, as follows.

2.1. Time domain calibration metrics

Calibration metrics provide a mathematical construct to assess the fitness of a model in a quantitative manner. Work by Oberkampf and Barone (2006) and Schwer (2007) set forth scalar statistical metrics ideally suited for use with time history data. Metrics in terms of mean, variance, and confidence intervals facilitate assessment of experimental data, particularly when probability statements are sought. For our problem, instead of using response predictions at a particular point, a vector 2-norm (the magnitude of the vector) of the system response is used as a function of time. An important benefit of using this metric is that it provides a direct measure of multi-dimensional closeness of two models. In addition, when tracked as a function of time, closeness is quantified at each time step.

Because parameter values are uncertain, statistical measures of the metric need to be used to conduct assessments. With limited information about parameter uncertainty, a uniform distribution function, which is the least informative distribution function, is the most appropriate representation to model parameter uncertainty. This uncertainty model is used to create a family of N equally probable parameter vectors, where N is arbitrarily selected. From the perspective of a user, it is important to know the probability of being able to reconcile measured data with predictions, given a particular model for the structure and parameter uncertainty. To this end, let $Q(t,p) = \|v(t,p)\|_2$ be a scalar time-varying function, in which the response vector $v$ is used to compute the 2-norm of the response at time $t$, using parameter vector $p$. Furthermore, let $\underline{\sigma}(t) = \min_p Q(t,p)$ be the minimum value over all parameter variations, and let $\bar{\sigma}(t) = \max_p Q(t,p)$ be the maximum value. Using these definitions and N LS-DYNA solutions corresponding to equally probable parameter vectors, a calibration metric can be established to bound the probability of test values falling outside the analysis bounds as:

$$M_1 = \mathrm{Prob}\left( Q_e(t) < \underline{\sigma}(t) \;\cup\; Q_e(t) > \bar{\sigma}(t) \right) < 1/N \tag{1}$$

where $Q_e(t)$ is the 2-norm of responses from the experiment. Note that N controls the tightness of the bounds and also the number of LS-DYNA solutions required.
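
To make the bound construction concrete, the following minimal sketch (in Python with NumPy, purely illustrative; the chapter's implementation post-processes LS-DYNA output) assembles $Q(t,p)$ for N runs, forms the min/max bounds, and flags time steps where the experimental norm falls outside them. The array shapes and the function name `m1_check` are assumptions for illustration.

```python
import numpy as np

def m1_check(Q_analysis, Q_test):
    """Evaluate the M1 bound check of Eq. (1).

    Q_analysis : (N, L) array; Q(t, p_j) = ||v(t, p_j)||_2 for N equally
                 probable parameter vectors, sampled at L time steps.
    Q_test     : (L,) array; Q_e(t) from the experiment.

    Returns a boolean mask of time steps where Q_e(t) falls outside the
    min/max analysis bounds; with N runs this should occur with
    probability less than 1/N if the model is reconcilable.
    """
    sig_lo = Q_analysis.min(axis=0)            # lower bound, sigma_(t)
    sig_hi = Q_analysis.max(axis=0)            # upper bound, sigma_bar(t)
    outside = (Q_test < sig_lo) | (Q_test > sig_hi)
    return outside, sig_lo, sig_hi

# Illustrative usage with synthetic data: N = 150 runs, 41 sensors, 500 steps.
rng = np.random.default_rng(0)
v_runs = rng.standard_normal((150, 500, 41))   # velocity histories v(t, p)
Q_runs = np.linalg.norm(v_runs, axis=2)        # 2-norm over the sensor axis
Q_exp = np.linalg.norm(rng.standard_normal((500, 41)), axis=1)
outside, lo, hi = m1_check(Q_runs, Q_exp)
```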

The use of norms, although convenient, tends to hide the spatial relationships that exist between responses at different locations in the model. In order to study this spatial multi-dimensional dependency explicitly, a different metric must be established.

2.2. Spatial multi-dimensional calibration metric

Spatial multi-dimensional dependency of models has been studied in classical linear dynamic problems in terms of mode shapes or eigenvectors resulting from the solution of an eigenvalue problem. Unfortunately, the nonlinear nature of impact problems precludes the use of any simple eigenvalue solution scheme. Alternatively, an efficient and compact way to study the spatial relationship is with a set of orthogonal impact shape basis vectors. Impact shapes, proposed by Anderson et al. (1998) and later used by Horta et al. (2003), are computed by decomposing the time histories using orthogonal decomposition. For example, time histories from analysis or experiments can be decomposed using the singular value decomposition as

$$y(x,t) = \sum_{i=1}^{n} \sigma_i\, \phi_i(x)\, g_i(t) \tag{2}$$

In this form, the impact shape vector $\phi_i$, sized m x 1, contains the spatial distribution information for m sensors, $g_i(t)$ contains the time modulation information, $\sigma_i$ is a scalar shape participation factor, and n is the number of impact shapes included in the decomposition, often truncated based on allowable reconstruction error. Although Eq. (2) is written in continuous-time form, for most applications time is sampled at fixed intervals such that $t = k\Delta T$, where the integer $k = 0,\ldots,L$ and $\Delta T$ is the sample time. From Eq. (2), the fractional contribution of the i-th impact shape to the total response is proportional to $\delta_i$, defined as:

$$\delta_i = \sigma_i \Big/ \sum_{l=1}^{n} \sigma_l \tag{3}$$

Mimicking the approach used in classical dynamic problems, impact shapes can now be used to compare models using orthogonality. Orthogonality, computed as the dot product of vectors (or matrices), quantifies the projection of one vector onto another. If the projection is zero, the vectors are orthogonal, i.e., uncorrelated. This same idea applies when comparing test and analysis impact shapes. Numerically, the orthogonality metric is computed as:

$$M_2 = \Phi^{T}\Psi \tag{4}$$

where $\Phi$ is sized m x l, with l measured impact shapes at m locations, and $\Psi$, sized m x l, contains shapes computed using simulation data. Note that both $\Phi$ and $\Psi$ are normalized such that $\Phi^{T}\Phi = I$ and $\Psi^{T}\Psi = I$. Because individual impact shape vectors are stacked column-wise, metric $M_2$ is a matrix sized l x l with diagonal values corresponding to the vector projection numerical value. If two vectors are identical, then their projection equals 1. Consequently, when evaluating models, multi-dimensional closeness with experiment is judged based on similarity of impact shapes and shape contributions. Two direct benefits of using impact shapes are discussed in the next two sections.
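
As a concrete illustration of Eqs. (2)-(4), the sketch below extracts impact shapes from a sensors-by-time matrix via the SVD and forms the orthogonality matrix $M_2$. It is a minimal Python/NumPy rendering of the decomposition described above, not the authors' code; the function names and array sizes are illustrative assumptions.

```python
import numpy as np

def impact_shapes(Y, n):
    """Decompose m x L time histories Y (m sensors, L time steps) into n
    impact shapes via the SVD, per Eq. (2): Y ~ Phi @ diag(sigma) @ G."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    Phi = U[:, :n]                      # orthonormal impact shapes (m x n)
    delta = s[:n] / s[:n].sum()         # fractional contributions, Eq. (3)
    return Phi, s[:n], delta, Vt[:n, :]

def orthogonality(Phi_test, Phi_analysis):
    """Metric M2 of Eq. (4); diagonal entries near 1 indicate matching
    test/analysis impact shapes (SVD shapes are already orthonormal)."""
    return np.abs(Phi_test.T @ Phi_analysis)

# Illustrative usage with synthetic 41-sensor, 600-step histories.
rng = np.random.default_rng(1)
Phi_t, _, delta_t, _ = impact_shapes(rng.standard_normal((41, 600)), 5)
Phi_a, _, delta_a, _ = impact_shapes(rng.standard_normal((41, 600)), 5)
M2 = orthogonality(Phi_t, Phi_a)        # 5 x 5 projection matrix
```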

2.2.1. Algorithm for response interpolation

Adopting impact shapes as a means to compare models has two advantages. First, it allows for interpolation of unmeasured response points, and second, it provides a metric to conduct optimal sensor placement. During most test programs, the number of sensors used is often limited by the availability of transducers and the data acquisition system. Although photogrammetry and videogrammetry measurements provide significantly more data, even these techniques are limited to those regions in the field of view of the cameras. At times, the inability to view responses over the full structure can mislead analysts as to its proper behavior. For this purpose, a hybrid approach has been developed to combine measured data with physics-based models to provide more insight into the full system response. Although the idea is perhaps new in the impact dynamics area, this approach is used routinely in modal tests, where a limited number of measurements is augmented with predictions using the analytical stiffness matrix. The approach takes advantage of the inherent stiffness that relates the motion at different locations on the structure. Because the stiffness matrix in impact dynamics problems is likely to be time varying, implementing a similar approach directly is difficult. An alternative is to use impact shapes as a means to combine information from physics-based models with experimental data. Specifically, responses at unmeasured locations are related to measured locations through impact shapes. To justify the approach, Eq. (2) is rewritten as:

$$\begin{pmatrix} y(t) \\ y_e(t) \end{pmatrix} = \sum_{i=1}^{n} \begin{bmatrix} \phi_i \\ \kappa_i \end{bmatrix} \sigma_i\, g_i(t) = \begin{bmatrix} \Phi \\ \mathrm{K} \end{bmatrix} S\, G(t) \tag{5}$$

where the matrix partitions are defined as

$$\begin{bmatrix} \Phi \\ \mathrm{K} \end{bmatrix} \equiv \begin{bmatrix} \phi_1 & \cdots & \phi_n \\ \kappa_1 & \cdots & \kappa_n \end{bmatrix}; \qquad S \equiv \begin{bmatrix} \sigma_1 & & 0 \\ & \ddots & \\ 0 & & \sigma_n \end{bmatrix}; \qquad G(t) \equiv \begin{bmatrix} g_1(t) \\ \vdots \\ g_n(t) \end{bmatrix} \tag{6}$$

In contrast to Eq. (2), Eq. (5) explicitly shows the responses at an augmented set of locations, named $y_e(t)$, constructed using impact shapes $\kappa_i$ at q unmeasured locations.

Using Eq. (5) with experimental data, the time dependency of the response can be computed as

$$S\,G(t) = \left(\Phi^{T}\Phi\right)^{-1}\Phi^{T}\,y(t) \tag{7}$$

Consequently, predictions for unmeasured locations can now be computed as

$$y_e(t) = \mathrm{K}\left(\Phi^{T}\Phi\right)^{-1}\Phi^{T}\,y(t) \tag{8}$$

Although Eq. (7) requires a matrix inversion, the rank of this matrix is controlled by sensor placement. Hence, judicious pretest sensor placement must be an integral part of this process. Fortunately, because the impact shapes are computed using the singular value decomposition, they form an orthonormal set of basis vectors, i.e., $(\Phi^{T}\Phi)^{-1} \approx I$. It is important to note that measured data are used to compute the impact shapes (at sensor locations) and the time-dependent part of the response, whereas data from the analytical model are used to compute impact shapes at all unmeasured locations.
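
A minimal sketch of the expansion in Eqs. (7)-(8) follows, assuming the measured shapes $\Phi$ come from test data and $\mathrm{K}$ (named `Kappa` below, a hypothetical identifier) comes from the analytical model evaluated at the unmeasured locations.

```python
import numpy as np

def expand_to_unmeasured(Y_meas, Phi, Kappa):
    """Predict responses at unmeasured locations via Eqs. (7)-(8).

    Y_meas : (m, L) measured time histories.
    Phi    : (m, n) impact shapes at the measured DOFs (from test data).
    Kappa  : (q, n) impact shapes at the q unmeasured DOFs, taken from
             the analytical (e.g., LS-DYNA) model.
    """
    # SG(t) = (Phi^T Phi)^{-1} Phi^T y(t); for SVD-derived shapes the
    # Gram matrix is ~identity, but solving keeps the sketch general.
    SG = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y_meas)
    return Kappa @ SG                  # y_e(t), sized q x L
```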

2.2.2. Optimum sensor placement for impact problems

Optimal sensor placement must be driven by the ultimate goals of the test. If model calibration is the goal, sensor placement must focus on providing information to properly evaluate the established metrics. In multi-dimensional calibration efforts using the orthogonality metric, sensor placement is critical: if sensors are not strategically placed, it is impossible to distinguish between impact shapes. Fortunately, the use of impact shapes enables the application of well-established sensor placement algorithms routinely used in modal tests. Placement for our example used the approach developed by Kammer (1991), in which sensors are placed to ensure proper numerical conditioning of the orthogonality matrix.
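
For readers unfamiliar with Kammer's Effective Independence (EfI) method, the sketch below shows its basic backward-elimination form: candidate degrees of freedom are deleted one at a time according to their contribution to the linear independence of the target shapes. This is a textbook rendering under assumed array shapes, not the placement code used for the example.

```python
import numpy as np

def efi_sensor_placement(Phi_cand, n_sensors):
    """Effective Independence placement in the spirit of Kammer (1991).

    Phi_cand  : (n_candidates, n_shapes) impact shapes from the model.
    n_sensors : number of sensor DOFs to retain.
    Returns indices (into the candidate set) of the retained DOFs.
    """
    keep = np.arange(Phi_cand.shape[0])
    Phi = Phi_cand.copy()
    while Phi.shape[0] > n_sensors:
        A_inv = np.linalg.inv(Phi.T @ Phi)
        # EfI value per DOF: diag(Phi @ A_inv @ Phi.T); small values
        # contribute least to the independence of the shapes.
        efi = np.einsum('ij,jk,ik->i', Phi, A_inv, Phi)
        worst = int(np.argmin(efi))
        Phi = np.delete(Phi, worst, axis=0)
        keep = np.delete(keep, worst)
    return keep
```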

2.3. Parameter selection

The parameter selection process (parameters being, in this case, material properties, structural dimensions, etc.) relies heavily on the analyst's knowledge of and familiarity with the model and its assumptions. Formal approaches like the Phenomena Identification and Ranking Table (PIRT), discussed by Wilson and Boyack (1998), provide users with a systematic method for ranking parameters for a wide class of problems. Elements of this approach are used for the initial parameter selection. After an initial parameter selection is made, parameter uncertainty must be quantified, empirically if data are available; oftentimes, engineering judgment is ultimately used. With an initial parameter set and an uncertainty model at hand, parameter importance is assessed using uncertainty propagation. That is, the LS-DYNA model is exercised with parameter values created using the Halton (1960) deterministic sampling technique. Time history results are processed to compute the metrics and to assess variability. A by-product of this step is a set of variance-based sensitivity results, which are used to rank the parameters. In the end, the adequacy of the parameter set is judged based on the probability of being able to reconcile test with analysis. If the probability is zero, as will be shown later in the example, the parameter selection must be revisited.
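
As an illustration of the sampling step, the following sketch draws 150 parameter vectors over the Table 1 bounds with a Halton sequence, using SciPy's quasi-Monte Carlo module (an assumption for illustration; the chapter does not state which Halton implementation was used).

```python
import numpy as np
from scipy.stats import qmc  # requires SciPy >= 1.7

# Bounds from Table 1 (five moduli E, three tangent moduli Et, lbs/in^2).
lower = np.array([8.0e6, 8.0e6, 7.904e6, 8.0e6, 8.0e6,
                  107_360, 107_360, 107_360])
upper = np.array([12.0e6, 12.0e6, 11.856e6, 12.0e6, 12.0e6,
                  161_040, 161_040, 161_040])

sampler = qmc.Halton(d=lower.size, scramble=False)  # deterministic sequence
unit = sampler.random(n=150)                        # points in [0, 1)^d
param_sets = qmc.scale(unit, lower, upper)          # 150 x 8 parameter sets
# Each row would then be written into an LS-DYNA input deck and executed.
```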

2.4. Optimization strategy

With an adequate set of parameters selected, the next step is to use an optimization procedure to determine values that reconcile test with analysis. A difficulty with using classical optimization tools in this step is the computational time it takes to obtain LS-DYNA solutions. Although in the helicopter example the execution time was optimized to be less than seven minutes, the full model execution time is measured in days. For this reason, optimization tools for this step should ideally take advantage of all LS-DYNA solutions at hand. To address this issue, optimization tools that use surrogate models in addition to new LS-DYNA solutions are ideal. For the present application, the Constrained Optimization using Response Surfaces (CORS) algorithm, developed by Regis and Shoemaker (2005), has been implemented in MATLAB for reconciliation. Specifically, the algorithm starts by looking for parameter values away from the initial set of LS-DYNA solutions, then slowly steps closer to known solutions by solving a series of local constrained optimization problems. This optimization process will produce a global optimum if enough steps are taken. Of course, the user controls the number of steps and therefore the accuracy and computational expense of the optimization. In cases where the predictive capability of the surrogate model is poor, CORS adds solutions in the needed areas. Because parameter uncertainty is not used explicitly in the optimization, this approach is considered deterministic. If a probabilistic approach were used instead (see Kennedy and O'Hagan, 2001; McFarland et al., 2008), in addition to a reconciling set, the user should also be able to determine the probability that the parameter set found is correct. Lack of credible parameter uncertainty data precludes the use of probabilistic optimization methods at this time, but future work could use the same computational framework.
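
The sketch below shows one cycle of a CORS-style search in deliberately simplified form: an RBF surrogate of the prediction error is minimized subject to a minimum-distance requirement from all previously evaluated points, enforced here with a crude penalty. It is a loose paraphrase of Regis and Shoemaker's idea for illustration, not their published implementation; the optimizer choice and function names are assumptions. CORS steps the distance fraction `beta` from large values toward zero across a cycle, moving from global exploration to local refinement.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

def cors_style_step(X, y, bounds, beta):
    """Pick the next parameter set to evaluate with LS-DYNA.

    X      : (N, d) previously evaluated parameter sets.
    y      : (N,) corresponding prediction errors.
    bounds : list of (low, high) tuples, one per parameter.
    beta   : fraction of the domain size to keep away from known points.
    """
    surrogate = RBFInterpolator(X, y)              # fit to known runs
    d_max = np.max(np.ptp(np.asarray(bounds), axis=1))

    def penalized(p):
        dist = np.linalg.norm(X - p, axis=1).min() # distance to known set
        # Hard penalty stands in for the distance constraint.
        penalty = 1e9 if dist < beta * d_max else 0.0
        return float(surrogate(p[None, :])[0]) + penalty

    res = differential_evolution(penalized, bounds, seed=0)
    return res.x
```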

2.5. Analysis of variance

Parameter sensitivity in most engineering fields is often associated with derivative calculations at specific parameter values. However, for analysis of systems with uncertainties, sensitivity studies are often conducted using ANOVA. In classical ANOVA studies, data are collected from multiple experiments while varying all parameters (factors) and also while varying one parameter at a time. These results are then used to quantify the output response variance due to variations of a particular parameter, as compared to the total output variance when varying all parameters simultaneously. The ratio of these two variance contributions is a direct measure of the parameter importance. Sobol et al. (2007) and others (Mullershon and Liebsher, 2008; Homma and Saltelli, 1996; Sudret, 2008) have studied the problem as a means to obtain global sensitivity estimates using variance-based methods. To compute sensitivity with these methods, one must be able to compute many response predictions as parameters are varied. In our implementation, after a suitable set of LS-DYNA solutions is obtained, response surface surrogates are used to estimate additional solutions.
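
A common way to compute such variance ratios is the pick-freeze (Sobol'/Saltelli-type) estimator sketched below. The chapter does not specify its exact estimator, so this is an illustrative stand-in in which `model` would be the cheap response-surface surrogate evaluated at one time step of $Q(t,p)$.

```python
import numpy as np

def first_order_sobol(model, lower, upper, n=1000, seed=0):
    """Estimate first-order variance-based sensitivities
    S_i = Var(E[Y | p_i]) / Var(Y) with a pick-freeze estimator.

    `model` maps an (n, d) batch of parameter sets to n scalar outputs;
    in practice it is a surrogate, not LS-DYNA itself.
    """
    rng = np.random.default_rng(seed)
    d = len(lower)
    A = rng.uniform(lower, upper, size=(n, d))
    B = rng.uniform(lower, upper, size=(n, d))
    yA, yB = model(A), model(B)
    total_var = np.var(yA)
    S = np.empty(d)
    for i in range(d):
        AB_i = B.copy()
        AB_i[:, i] = A[:, i]       # freeze parameter i, resample the rest
        S[i] = np.mean(yA * (model(AB_i) - yB)) / total_var
    return S
```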

2.6. Response surface methodology

A response surface (RS) model is simply a mathematical representation that relates input variables (parameters that the user controls) and output variables (response quantities of interest), often used in place of computationally expensive solutions. Many papers have been published on response surface techniques; see, for example, (Myers and Montgomery, 2002). The method adopted here is the Extended Radial Basis Functions (ERBF) method, as described by Mullur and Messac (2005, 2006). In this adaptive response surface approach, the total number of RS parameters computed equals $N(3n_p + 1)$, where $n_p$ is the number of parameters and N is the number of LS-DYNA solutions. The user must also prescribe two additional parameters: 1) the order of a local polynomial (set to 4 in the present case), and 2) a smoothness parameter (set to 0.15 here). Finally, the radial basis function is chosen to be an exponentially decaying function, $e^{-(p-p_i)^2/2r_c^2}$, with characteristic radius $r_c$ set to 0.15. A distinguishing feature of this RS implementation is that ERBF is used to predict full time histories, as opposed to just extreme values. In addition, ERBF is able to match the responses used to create the surrogate with prediction errors of less than $10^{-10}$.
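
The sketch below implements only the plain Gaussian-RBF core of such a surrogate, interpolating full time histories across parameter sets; the actual ERBF method adds the local polynomial and smoothness machinery described above (the $N(3n_p+1)$ coefficients), which is omitted here for brevity. Parameters are assumed scaled to the unit cube so that $r_c = 0.15$ is meaningful.

```python
import numpy as np

class GaussianRBFSurrogate:
    """Plain Gaussian-kernel RBF surrogate over full time histories; a
    reduced stand-in for the ERBF method of Mullur and Messac."""

    def __init__(self, r_c=0.15):
        self.r_c = r_c

    def fit(self, P, Y):
        # P: (N, n_p) parameter sets scaled to [0, 1]; Y: (N, L) histories.
        self.P = P
        K = self._kernel(P, P)                    # (N, N) kernel matrix
        self.W = np.linalg.solve(K, Y)            # interpolation weights
        return self

    def predict(self, P_query):
        # Interpolates: predictions reproduce the training histories
        # (near) exactly at the N LS-DYNA runs.
        return self._kernel(P_query, self.P) @ self.W

    def _kernel(self, A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * self.r_c ** 2))
```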

3. Description of helicopter test article

A full-scale crash test of an MD-500 helicopter, as described by Annett and Polanco (2010), was conducted at the Landing and Impact Research (LandIR) Facility at NASA Langley Research Center (LaRC). Figure 1a shows a photograph of the test article while it was being prepared for test, including an experimental dynamic energy-absorbing honeycomb structure underneath the fuselage designed by Kellas and Jackson (2007). The airframe, provided by the US Army's Mission Enhanced Little Bird (MELB) program, has been used for civilian and military applications for more than 40 years. NASA Langley is spearheading efforts to develop analytical models capable of predicting the impact response of such systems.

Figure 1.

MD-500 helicopter model.

4. LS-DYNA model description

To predict the behavior of the MD-500 helicopter during a crash test, an LS-DYNA (Hallquist, 2006) finite element model (FEM) of the fuselage, as shown in Figure 1b, was developed and reported in (Annett and Polanco, 2010). The element count for the fuselage was targeted to not exceed 500,000 elements, including seats and occupants, with 320,000 of those used to represent the energy-absorbing honeycomb and skid gear. Shell elements were used to model the airframe skins, ribs, and stiffeners. Similarly, the lifting and pullback fixtures and the platform supporting the data acquisition system (mounted in the tail) were modeled using rigid shells. Ballast used in the helicopter to represent the rotor, tail section, and fuel was modeled as concentrated masses. For materials, the fuselage section is modeled using Aluminum 2024-T3 with elastic-plastic properties, whereas the nose is fiberglass and the engine fairing is Kevlar fabric. Instead of using the complete “as-tested” FEM, this study uses a simplified model created by removing the energy-absorbing honeycomb, skid gear, anthropomorphic dummies, data acquisition system, and lifting/pull-back fixtures. After these changes, the resulting simplified model is shown in Figure 2. Even with all these components removed, the simplified model had 27,000 elements, comprised primarily of shell elements representing the airframe skins, ribs, and stiffeners. The analytical test case used for calibration simulates a helicopter crash onto a hard surface with vertical and horizontal speeds of 26 ft/sec and 40 ft/sec, respectively. For illustration, Figure 3 shows four frames from an LS-DYNA simulation as the helicopter impacts the hard surface.

Figure 2.

Simplified finite element model.

5. Example results

Results described here are derived from the simplified LS-DYNA model, as shown in Figure 2. This simplified model reduced the computational time from days to less than seven minutes and allowed for timely debugging of the software and demonstration of the methodology, which is the main focus of the chapter. Nonetheless, the same approach can be applied to the complete “as-tested” FEM model without modifications.

Figure 3.

Four frames of the LS-DYNA simulation as the helicopter impacts the hard surface.

For evaluation purposes, simulated data are used in lieu of experimental data. Because analytical model predictions more often than not disagree with measured data, the simplified model was arbitrarily perturbed. Knowledge of the perturbations and the areas affected was withheld until the entire calibration process was completed. Data from this model, referred to as the perturbed model, take the place of experimental data. In this study, no test uncertainty is considered; therefore, only one data set is used for the test.

Figure 4.

Helicopter wireframe for (a) the simplified model and (b) the simulated-test sensor placement and numbering.

Figure 4(a) depicts a wireframe of the simplified model showing only 405 nodes. Superimposed is a second wireframe with connections to 34 nodes identified by an optimal sensor placement algorithm. At each node there can be up to three translational measurements; however, the placement algorithm was instructed to place only 41 sensors. Figure 4(b) shows the locations of the 41 sensors. The optimal sensor placement located 8 sensors along the x direction, 10 sensors along the y direction, and 23 sensors along the z direction.

5.1. Initial parameter selection

Calibration efforts begin by selecting model parameters thought to be uncertain. Selecting these parameters is perhaps the most difficult step. Not knowing what had been changed in the perturbed model, the initial study considered displacements, stress contours, and plastic strain results at different locations on the structure before selecting the modulus of elasticity and tangent modulus at various locations. The parameters and uncertainty ranges selected are shown in Table 1. Without additional information about parameter uncertainty, the upper and lower bounds were selected using engineering judgment with the understanding that values anywhere between the bounds were equally likely.

| No. | Parameter description | Nominal | Lower bound | Upper bound |
| --- | --- | --- | --- | --- |
| 1 | E, back panel (lbs/in²) | 10,000,000 | 8,000,000 | 12,000,000 |
| 2 | E, subfloor ribs (lbs/in²) | 10,000,000 | 8,000,000 | 12,000,000 |
| 3 | E, keel beam web (lbs/in²) | 9,880,000 | 7,904,000 | 11,856,000 |
| 4 | E, stinger upper tail (lbs/in²) | 10,000,000 | 8,000,000 | 12,000,000 |
| 5 | E, stinger lower tail (lbs/in²) | 10,000,000 | 8,000,000 | 12,000,000 |
| 6 | Et, subfloor ribs (lbs/in²) | 134,200 | 107,360 | 161,040 |
| 7 | Et, keel beam web (lbs/in²) | 134,200 | 107,360 | 161,040 |
| 8 | Et, lower tail stinger (lbs/in²) | 134,200 | 107,360 | 161,040 |

Table 1.

Initial parameter set description

With the parameter uncertainty definition in Table 1, LS-DYNA models can be created and executed to study the calibration metrics described earlier. As an example, 150 LS-DYNA runs with the simplified model were completed while varying parameters over the ranges shown in Table 1. To construct the uncertainty bounds, $Q(t,p)$ is computed for each of the 150 runs from velocities at the 41 sensors (see Figure 4) and plotted in Figure 5 as a function of time: analysis in dashed blue and the simulated test in solid red. With this sample size, the probability of being able to reconcile test with analysis during times when test results are outside the analysis bounds is less than 1/150 (recall that the simulated test data come from the perturbed model). Figure 5 shows that during the time interval between 0.01 and 0.02 seconds, the analysis bounds are above the test. Therefore, it is unlikely that one would be able to find parameter values within the selected set to reconcile analysis with test. This finding prompted another look at parameter selection and uncertainty models to determine a more suitable set.

5.2. Revised parameter selection

A second search for a revised parameter set involved conversations with the model developer and additional runs while varying parameter bounds to see their effect on $M_1$. The second set of parameters selected, after considering several intermediate sets, consisted of thicknesses at various locations on the structure. A concern with varying thickness is its effect on structural mass. However, because 80% of the helicopter model consists of non-structural masses, thickness changes had little impact on the total mass. Table 2 shows the revised parameter set and the ranges selected for the second study.

Advertisement

6. Evaluation of calibration metrics with revised parameter set

Results for metric $M_1$ using the revised parameter set are shown in Figure 6; the solid red line is $Q_e(t)$ from the simulated test data, and the dotted blue lines are the analysis bounds using 50 LS-DYNA runs. With 50 runs, the probability that LS-DYNA would produce results outside these bounds is less than 1/50. Consequently, if the test results are outside these bounds, the probability of reconciling the model with test is also less than 1/50. Even though Figure 6 shows that, in certain areas, the test results are very close to the analysis bounds, this new parameter set provides enough freedom to proceed with the calibration process.

| No. | Parameter description | Nominal | Lower bound | Upper bound | Calibrated value |
| --- | --- | --- | --- | --- | --- |
| 1 | Keel beam stiffener thickness (in) | 0.020 | 0.015 | 0.025 | 0.0161 |
| 2 | Belly panel thickness (in) | 0.090 | 0.080 | 0.135 | 0.1008 |
| 3 | Keel beam thickness (in) | 0.040 | 0.035 | 0.045 | 0.0358 |
| 4 | Lower tail thickness (in) | 0.040 | 0.035 | 0.045 | 0.0414 |
| 5 | Back panel thickness (in) | 0.020 | 0.015 | 0.025 | 0.0166 |
| 6 | Upper tail thickness (in) | 0.020 | 0.015 | 0.025 | 0.0168 |

Table 2.

Revised parameter description

Figure 5.

Velocity vector 2-norm for analysis (with 150 LS-DYNA runs) and for simulated test.

Figure 6.

Velocity vector 2-norm for analysis (with 50 LS-DYNA runs) and for simulated test.

Thus far in this study, metric $M_1$ has been used exclusively to evaluate parameter adequacy and uncertainty bounds. What is missing from this evaluation is how well the model predicts the response at all locations. Considering that impact shapes provide a spatial multi-dimensional relationship among different locations, two models with similar impact shapes, all else being equal, should exhibit similar responses at all sensor locations. With this in mind, orthogonality results for the simplified model versus “test,” i.e., the perturbed model, are shown in Figure 7. Essentially, the matrix $M_2$, as defined in Eq. (4), is plotted with analysis along the ordinate and test along the abscissa.

Figure 7.

Orthogonality results using impact shapes from the simulated test and baseline model.

Colors represent the numerical value of the vector projections; e.g., a value of 1 (black) indicates perfect matching between test and analysis. Listed on the labels are the corresponding shape contributions to the response for both analysis (ordinate) and simulated test (top axis). For example, the first impact shape for analysis (bottom left) contributes 0.40 of the total response, as compared to 0.39 for test. It is apparent that the initial impact shape matching is poor at best, with the exception of the first two shapes. An example of an impact shape is provided in Figure 8, which shows a sequence of 8 frames of test impact shape number 2 (contribution $\delta_2 = 0.18$), expanded to 405 nodes. Motion of the tail and floor section of the helicopter dominates.

Figure 8.

Test impact shape number 2 ($\delta_2 = 0.18$) animation sequence.

6.1.1. Sensitivity with revised parameter set

Another important aspect of the calibration process is understanding how parameter variations affect the norm metric $Q(t,p)$. This information is used as the basis to remove or retain parameters during the calibration process. As mentioned earlier, sensitivity results in this study examine the ratio of the single-parameter variance to the total variance of $Q(t,p)$. This ratio is plotted in Figure 9 for each of the six parameters considered (as defined in Table 2). The abscissa is time in seconds and the ordinate shows the contribution to variance. Colors denote individual parameter contributions; the total sum should approach 1 when no parameter interaction exists. In addition, $Q(t,p)$ is shown across the top for reference. Because only 50 LS-DYNA runs are executed, an ERBF surrogate model is used to estimate responses for 1000 parameter sets for the variance estimates. From the results in Figure 9, note that parameter contributions vary significantly over time, but for simulation times greater than 0.04 sec the upper tail thickness clearly dominates.

Figure 9.

Norm metric $Q(t,p)$ and sensitivity for thickness parameters.

6.2. Optimization for calibration parameters

The optimization is set up to find parameter values that minimize the natural log of the prediction error using CORS. Starting with only 50 LS-DYNA solutions, CORS is allowed to compute 60 additional LS-DYNA runs. After completing the additional runs, the calibrated parameter set in Table 2 produced an overall prediction error reduction of 8%. Because the number of LS-DYNA runs is set by the user, the calibrated parameter set in Table 2 is not a converged set but rather an intermediate one; nonetheless, it is referred to as the calibrated parameter set. The optimization process is purposely set up this way to control the computational expense of computing new LS-DYNA solutions. To show the impact of the parameter changes on the vector norm $Q(t,p)$, Figure 10 compares results from the simulated test (solid blue), the baseline model using nominal values (dotted red), and the calibrated model (dash-dot black).

Figure 10.

Comparison of the velocity 2-norm using simulated test, baseline, and calibrated model data.

Comparing vector norms provides an overall quantitative assessment of the model's predictive capability. However, users may prefer to evaluate improvements at specific sensor locations. For example, Figure 11 shows the velocity for the simulated test, the baseline model, and the calibrated model at location 34 (see Figure 4) in the z-direction. Improvements like those shown in Figure 11 are common for most locations, but not all. Furthermore, after calibration, significant improvements are also apparent in the orthogonality results, as shown in Figure 12. As in Figure 7, results from the calibrated model are along the ordinate and the abscissa contains the simulated test impact shapes. When comparing these results against the baseline model (see Figure 7), it is clear that the impact shape predictions have improved significantly, i.e., responses from the calibrated model and test are closer.

Figure 11.

Comparison of velocity response for the simulated test, baseline, and calibrated model data.

Figure 12.

Orthogonality results using impact shapes from the simulated test and calibrated model.

6.3. Revealing the correct answer

After completing the calibration process, it is instructive to examine the actual changes made to the simplified model that resulted in the perturbed model. Because data from the perturbed model served as our target test data set, revealing the changes effectively gives the reader the true answer to the problem. Figure 13a depicts all the sections altered in the perturbed model. Alterations consisted of ±20% changes in thickness at these locations. In contrast, Figure 13b shows the sections updated during the calibration process. It is interesting to see that although the exact changes were not identified, the regions where changes needed to be made were identified.

Figure 13.

Sections altered to create the perturbed model and changes suggested during calibration.

7. Concluding remarks

An approach to conduct model calibration of nonlinear models has been developed and demonstrated using data from a simulated crash test of a helicopter. Fundamental to the approach is the definition and application of two calibration metrics: 1) metric $M_1$ compares the statistical bounds of the 2-norm of the velocity response from analysis versus test, and 2) metric $M_2$ evaluates the orthogonality between test and analysis impact shapes. The ability to reconcile analysis with test, assessed using metric $M_1$, is evaluated by computing the system response when parameters are varied to establish response bounds. Once the adequacy of the model is established, the process of reconciling model and test proceeds using constrained optimization to minimize the natural log of the prediction error. The optimization approach takes advantage of surrogate models to reduce the computational time associated with executing hundreds of LS-DYNA runs. Because the computational time is significant, users must trade the number of LS-DYNA solutions allowed against prediction error.

For the simulated helicopter example studied in this chapter, a flawed parameter set was evaluated and found to be inadequate using metric $M_1$. After determining a revised set of parameters, calibration results show an overall prediction error reduction of 8%. Although this overall reduction may appear relatively small, improvements at specific sensor locations can be significantly better. Furthermore, improvements in orthogonality values after calibration resulted in matching 7 impact shapes with orthogonality values greater than 0.9, as compared to the initial model, which matched only 1 impact shape. Improvements in orthogonality produced multi-dimensional improvements in the overall model predictive capability. Finally, because the calibration process was demonstrated using simulated experimental data, it could be shown that although the exact changes were not identified, the regions where changes needed to be made were identified.

Acknowledgments

The authors would like to thank Susan Gorton, Manager of the Subsonic Rotary Wing Project of NASA’s Fundamental Aeronautics Program, for her support of this activity.

References

1. Anderson, M. C., Gan, W., & Haselman, T. K. (1998). Statistical Analysis of Modeling Uncertainty and Predictive Accuracy for Nonlinear Finite Element Models. Proceedings of the 69th Shock and Vibration Symposium, Minneapolis/St. Paul, MN, USA.
2. Anonymous (1988). Military Standard MIL-STD-1290A (AV), Light Fixed and Rotary-Wing Aircraft Crash Resistance. Department of Defense, Washington, DC, USA, September 1988.
3. Annett, M. S., & Polanco, M. A. (2010). System-Integrated Finite Element Analysis of a Full-Scale Helicopter Crash Test with Deployable Energy Absorbers. Proceedings of the 66th American Helicopter Society Annual Forum, Phoenix, AZ, USA, May 2010.
4. Atamturktur, S., Hemez, F., & Unal, C. (2010). Calibration Under Uncertainty for Finite Element Models of Masonry Monuments. Los Alamos National Laboratory Technical Report LA-14414, February 2010.
5. Cronkhite, J. D., & Mazza, L. T. (1988). KRASH Analysis Correlation with the Bell ACAP Full-Scale Aircraft Crash Test. Proceedings of the National Technical Specialists' Meeting on Advanced Rotorcraft Structures: Requirements vs. Opportunities, a Look at the Future, VA, USA, October 1988.
6. Hallquist, J. O. (2006). LS-DYNA Keyword User's Manual, Version 971. Livermore Software Technology Company, Livermore, CA, USA.
7. Halton, J. H. (1960). On the Efficiency of Certain Quasi-Random Sequences of Points in Evaluating Multi-Dimensional Integrals. Numerische Mathematik, 2, 84-90.
8. Homma, T., & Saltelli, A. (1996). Importance Measures in Global Sensitivity Analysis of Nonlinear Models. Reliability Engineering and System Safety, 52, 1-17, April 1996.
9. Horta, L. G., Lyle, K. H., & Lessard, W. B. (2003). Evaluation of Singular Value Decomposition Approach for Impact Dynamic Data Correlation. NASA TM-2003-212657, October 2003.
10. Jackson, K. E., Fasanella, E. L., & Lyle, K. H. (2006). Crash Certification by Analysis - Are We There Yet? Proceedings of the 62nd Annual Forum of the American Helicopter Society, Phoenix, AZ, USA, May 2006.
11. Kamat, M. P. (1976). Survey of Computer Programs for Prediction of Crash Response and of Its Experimental Validation. In: Measurement and Prediction of Structural and Biodynamic Crash-Impact Response, Proceedings of the Winter Annual Meeting of the ASME, 33-48, December 1976.
12. Kammer, D. C. (1991). Sensor Placement for On-Orbit Modal Identification and Correlation of Large Space Structures. Journal of Guidance, Control, and Dynamics, 14(2), 251-259, April 1991.
13. Kellas, S., & Jackson, K. E. (2007). Deployable System for Crash-Load Attenuation. Proceedings of the 63rd American Helicopter Society Annual Forum, Virginia Beach, VA, USA, May 2007.
14. Kennedy, M. C., & O'Hagan, A. (2001). Bayesian Calibration of Computer Models. Journal of the Royal Statistical Society, Series B, 63(3), 425-464.
15. McFarland, J., Mahadevan, S., Romero, V., & Swiler, L. (2008). Calibration and Uncertainty Analysis for Computer Simulations with Multivariate Output. AIAA Journal, 46(5), 1253-1265, May 2008.
16. Mullershon, H., & Liebsher, M. (2008). Statistics and Non-Linear Sensitivity Analysis with LS-OPT and DSPEX. Presented at the 10th International LS-DYNA Users Conference, Dearborn, MI, USA, June 2008.
17. Mullur, A., & Messac, A. (2005). Extended Radial Basis Functions: More Flexible and Effective Metamodeling. AIAA Journal, 43(6).
18. Mullur, A., & Messac, A. (2006). Metamodeling Using Extended Radial Basis Functions: A Comparative Approach. Engineering with Computers, 21, 203-217.
19. Myers, R. H., & Montgomery, D. C. (2002). Response Surface Methodology, 2nd Edition. John Wiley and Sons, Inc., USA.
20. Oberkampf, W. L., & Barone, M. F. (2006). Measures of Agreement Between Computation and Experiment: Validation Metrics. Journal of Computational Physics, 217, June 2006.
21. Oberkampf, W. L., Trucano, T. G., & Hirsch, C. (2003). Verification, Validation, and Predictive Capability in Computational Engineering and Physics. SAND2003-3769, February 2003.
22. Regis, R. G., & Shoemaker, C. A. (2005). Constrained Global Optimization of Expensive Black Box Functions Using Radial Basis Functions. Journal of Global Optimization, 31, 153-171, September 2005.
23. Roache, P. J. (1998). Verification and Validation in Computational Science and Engineering. Hermosa Publishers, Albuquerque, NM, USA.
24. Schwer, L. E. (2007). Validation Metrics for Response Histories: Perspectives and Case Studies. Engineering with Computers, 23(4), 295-309, June 2007.
25. Sobol, I. M., Tarantola, S., Gatelli, D., Kucherenko, S. S., & Mauntz, W. (2007). Estimating Approximation Error When Fixing Unessential Factors in Global Sensitivity Analysis. Reliability Engineering and System Safety, 92, 957-960, July 2007.
26. Sudret, B. (2008). Global Sensitivity Analysis Using Polynomial Chaos Expansions. Reliability Engineering and System Safety, 93, 964-979, July 2008.
27. Tabiei, A. L., Lawrence, C., & Fasanella, E. L. (2009). Validation of Finite Element Crash Test Dummy Models for Predicting Orion Crew Member Injuries During a Simulated Vehicle Landing. NASA TM-2009-215476, March 2009.
28. Thacker, B. H. (2005). The Role of Non-determinism in Computational Model Verification and Validation. Proceedings of the 46th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Austin, TX, USA, AIAA 2005-1902, April 2005.
29. Wittlin, G., & Gamon, M. A. (1975). Experimentally Verified Analytical Techniques for Predicting Vehicle Crash Responses. Proceedings of the AIAA 11th Annual Meeting and Technical Display, AIAA Paper 75-273, February 1975.
30. Wilson, G. E., & Boyack, B. E. (1998). The Role of PIRT in Experiments, Code Development and Code Applications Associated with Reactor Safety Assessment. Nuclear Engineering and Design, 186, 23-37, November 1998.
