1. Introduction
As computational capabilities continue to improve and the costs associated with test programs continue to increase, certification of future rotorcraft will rely more on computational tools along with strategic testing of critical components. Today, military standards (MIL-STD-1290A (AV), 1988) encourage designers of rotary-wing vehicles to demonstrate compliance with the certification requirements for impact velocity and volume loss by analysis. Reliance on computational tools, however, will only come after rigorous demonstration of the predictive capabilities of existing computational tools. NASA, under the Subsonic Rotary Wing Program, is sponsoring the development and validation of such tools. Jackson et al. (2006) discussed detailed requirements and challenges associated with certification by analysis. Fundamental to the certification effort is the demonstration of verification, validation, and calibration of the computational tools and algorithms used for this class of problems. The work in this chapter deals with model calibration of systems undergoing impact loads.
The process of model calibration, which follows the verification and validation phases, involves reconciling differences between test and analysis. Most calibration efforts combine both heuristics and quantitative methods to assess model deficiencies, to account for uncertainty, to evaluate parameter importance, and to compute required model changes. Calibration of rotorcraft structural models presents particular challenges because the computational time, often measured in hours, limits the number of solutions obtainable in a timely manner. Oftentimes, efforts are focused on predicting responses at critical locations as opposed to assessing the overall adequacy of the model. For example, Kamat (1976) conducted a survey of what were at the time the most popular finite element analysis codes and of validation efforts comparing impact responses from a UH-1H helicopter drop test. Similarly, Wittlin and Gamon (1975) used the KRASH analysis program for data correlation of the UH-1H helicopter. Another excellent example of a rotary-wing calibration effort is that of Cronkhite and Mazza (1988), comparing results from a U.S. Army composite helicopter with simulation data from the KRASH analysis program. Recently, Tabiei, Lawrence, and Fasanella (2009) reported on a validation effort using anthropomorphic test dummy data from crash tests to validate an LS-DYNA (Hallquist, 2006) finite element model. Common to all these calibration efforts is the use of scalar deterministic metrics.
One complication with calibration efforts of nonlinear models is the lack of universally accepted metrics to judge model adequacy. Work by Oberkampf and Barone (2006), and later by Schwer (2007), addressed this need by proposing statistical validation metrics for comparing response time histories.
Fundamental to the success of the model calibration effort is a clear understanding of the ability of a particular model to predict the observed behavior in the presence of modeling uncertainty. The approach proposed in this chapter is focused primarily on model calibration using parameter uncertainty propagation and quantification, as opposed to a search for a reconciling solution. The process set forth follows a three-step approach. First, Analysis of Variance (ANOVA), as described in work by Sobol et al. (2007), Mullershon and Liebsher (2008), Homma and Saltelli (1996), and Sudret (2008), is used for parameter selection and sensitivity. To reduce the computational burden associated with variance-based sensitivity estimates, response surface models are created and used to estimate time histories. In our application, the Extended Radial Basis Functions (ERBF) response surface method, as described by Mullur and Messac (2005, 2006), has been implemented and used. Second, after ANOVA estimates are completed, uncertainty propagation is conducted to evaluate uncertainty bounds and to gauge the ability of the model to explain the observed behavior by comparing the statistics of the 2-norm of the response vector between analysis and test. If the model is reconcilable according to the metric, the third step seeks a parameter set that reconciles test with analysis by minimizing the prediction error using the optimization scheme proposed by Regis and Shoemaker (2005). To concentrate on the methodology development, simulated experimental data were generated by perturbing an existing model. Data from the perturbed model are used as the target set for model calibration. To keep from biasing this study, changes to the perturbed model were not revealed until the study was completed.
In this chapter, basic model calibration elements are described first, followed by an example using a helicopter model. These elements include time and spatial multi-dimensional metrics, parameter selection, sensitivity using analysis of variance, and an optimization strategy for model reconciliation. Other supporting topics discussed are sensor placement to assure proper evaluation of multi-dimensional orthogonality metrics, prediction of unmeasured responses from measured data, and the use of surrogates for computational efficiency. Finally, results for the calibrated helicopter model are presented and, at the end, the actual perturbations made to the original model are revealed for a quick assessment.
2. Problem formulation
Calibration of models is a process that requires analysts to integrate different methodologies in order to achieve the desired end goal, which is to reconcile predictions with observations. Although in the literature the word “model” is used to mean many different forms of mathematical representations of physical phenomena, for our purposes the word model refers to a finite element representation of the system. Starting with an analytical model that incorporates the physical attributes of the test article, this model is initially judged based on some pre-established calibration metrics. Although there are no universally accepted metrics, the work in this chapter uses two: one that addresses the predictive capability of time responses and a second that addresses multi-dimensional spatial correlation of sensors for both test and analysis data. After calibration metrics are established, the next step in the calibration process involves parameter selection and uncertainty estimates using engineering judgment and available data. With parameters selected and uncertainty models prescribed, the effect of parameter variations on the response of interest must be established. If parameter variations are found to significantly affect the response of interest, then calibration of the model can proceed to determine a parameter set to reconcile the model. These steps are described in more detail, as follows.
2.1. Time domain calibration metrics
Calibration metrics provide a mathematical construct to assess the fitness of a model in a quantitative manner. Work by Oberkampf and Barone (2006) and Schwer (2007) set forth scalar statistical metrics ideally suited for use with time history data. Metrics in terms of mean, variance, and confidence intervals facilitate assessment of experimental data, particularly when probability statements are sought. For our problem, instead of using response predictions at a particular point, a vector 2-norm (magnitude of the response vector) of the system response is used as a function of time. An important benefit of using this metric is that it provides a direct measure of multi-dimensional closeness of two models. In addition, when tracked as a function of time, closeness is quantified at each time step.
Because parameter values are uncertain, statistical measures of the metric need to be used to conduct assessments. With limited information about parameter uncertainty, a uniform distribution function, which is the least informative distribution function, is the most appropriate representation to model parameter uncertainty. This uncertainty model is used to create a family of equally probable parameter vectors, each of which yields an LS-DYNA solution and a corresponding norm time history

$$Q(t, p) = \left\| y(t, p) \right\|_2$$

where $y(t, p)$ is the vector of system responses at time $t$ for the parameter vector $p$. Statistics of $Q(t, p)$ over the family of solutions, such as its mean and uncertainty bounds, constitute the time domain calibration metric.
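To make the computation concrete, the following MATLAB sketch evaluates the norm metric and its statistics over a family of runs. The array name Yset and the 95% percentile bounds are illustrative assumptions, and prctile requires the Statistics and Machine Learning Toolbox.

```matlab
% Sketch: Yset is assumed to be an m-by-n-by-N array of m sensor
% responses at n time steps for each of N LS-DYNA runs sampled from
% the uniform parameter distributions.
[~, n, N] = size(Yset);
Qset = zeros(N, n);
for k = 1:N
    Qset(k, :) = sqrt(sum(Yset(:, :, k).^2, 1));  % Q(t,p): 2-norm over sensors
end
Qmean   = mean(Qset, 1);                 % mean of the metric vs. time
Qbounds = prctile(Qset, [2.5 97.5], 1);  % e.g., 95% uncertainty bounds vs. time
```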
The use of norms, although convenient, tends to hide the spatial relationships that exist between responses at different locations in the model. In order to study this spatial multi-dimensional dependency explicitly, a different metric must be established.
2.2. Spatial multi-dimensional calibration metric
Spatial multi-dimensional dependency of models has been studied in classical linear dynamic problems in terms of mode shapes or eigenvectors resulting from a solution to an eigenvalue problem. Unfortunately, the nonlinear nature of impact problems precludes the use of any simple eigenvalue solution scheme. Alternatively, an efficient and compact way to study the spatial relationship is with a set of orthogonal impact shape basis vectors. Impact shapes, proposed by Anderson et al. (1998) and later by Horta et al. (2003), are computed by decomposing the time histories using orthogonal decomposition. For example, time histories from analysis or experiments can be decomposed using singular value decomposition as

$$Y = U \Sigma V^T \qquad (2)$$

where $Y$ is an $m \times n$ matrix holding the $m$ measured (or computed) responses at $n$ time steps, the columns of $U$ are the orthonormal impact shapes, $\Sigma$ is a diagonal matrix of singular values, and the columns of $V$ describe the time modulation of each shape. In this form, the impact shape vector associated with the largest singular value captures the dominant spatial pattern of the impact response, and the singular values quantify each shape's contribution.
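A minimal sketch of the decomposition follows, assuming Y holds the time histories as defined above; the 95% energy truncation threshold is an illustrative choice, not prescribed by the chapter.

```matlab
% Sketch: Y is an m-by-n matrix of m responses at n time steps.
[U, S, V] = svd(Y, 'econ');                   % Y = U*S*V'
sigma     = diag(S);
contrib   = sigma / sum(sigma);               % relative contribution of each shape
r         = find(cumsum(contrib) >= 0.95, 1); % keep shapes capturing ~95% (assumed)
Phi       = U(:, 1:r);                        % impact shapes (orthonormal columns)
timeMod   = S(1:r, 1:r) * V(:, 1:r)';         % time modulation of each shape
```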
Mimicking the approach used in classical dynamic problems, impact shapes can now be used to compare models using orthogonality. Orthogonality, computed as the dot product operation of vectors (or matrices), quantifies the projection of one vector onto another. If the projection is zero, the vectors are orthogonal, i.e., uncorrelated. This same idea applies when comparing test and analysis impact shapes. Numerically, the orthogonality metric is computed as

$$O = \Phi_a^T \, \Phi_t$$

where $\Phi_a$ and $\Phi_t$ are matrices whose columns are the analysis and test impact shapes, respectively, evaluated at common sensor locations. Because the impact shapes are orthonormal, entries of $O$ vary between 0 and 1, with values near 1 indicating well-correlated test and analysis shape pairs.
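In code, the metric is a single matrix product; the names Phi_a and Phi_t follow the definitions above.

```matlab
% Sketch: Phi_a (analysis) and Phi_t (test) are m-by-r matrices of
% impact shapes evaluated at the same m sensor locations.
O = abs(Phi_a' * Phi_t);   % r-by-r orthogonality matrix, entries in [0,1]
% Diagonal entries near 1 indicate matching test/analysis shape pairs.
```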
2.2.1. Algorithm for response interpolation
Adopting impact shapes as a means to compare models has two advantages. First, it allows for interpolation of unmeasured response points, and second, it provides a metric to conduct optimal sensor placement. During most test programs, the number of sensors used is often limited by the availability of transducers and the data acquisition system. Although photogrammetry and videogrammetry measurements provide significantly more data, even these techniques are limited to only those regions in the field of view of the cameras. At times, the inability to view responses over the full structure can mislead analysts as to their proper behavior. For this purpose, a hybrid approach has been developed to combine measured data with physics-based models to provide more insight into the full system response. Although the idea is perhaps new in the impact dynamics area, this approach is used routinely in modal tests, where a limited number of measurements is augmented with predictions using the analytical stiffness matrix. This approach takes advantage of the inherent stiffness that relates the motion at different locations on the structure. Because in impact dynamic problems the stiffness matrix is likely to be time varying, implementation of a similar approach is difficult. An alternative is to use impact shapes as a means to combine information from physics-based models with experimental data. Specifically, responses at unmeasured locations are related to measured locations through impact shapes. To justify the approach, Eq. (2) is re-written as

$$Y_a = \begin{bmatrix} Y_m \\ Y_u \end{bmatrix} = \begin{bmatrix} \Phi_m \\ \Phi_u \end{bmatrix} \Sigma V^T \qquad (5)$$

where the matrix partitions $\Phi_m$ and $\Phi_u$ contain the rows of the impact shape matrix corresponding to measured and unmeasured locations, respectively. In contrast to Eq. (2), Eq. (5) shows explicitly responses at an augmented set of locations, $Y_a$, which contains both the measured responses $Y_m$ and the unmeasured responses $Y_u$.
Using Eq. (5) with experimental data, the time dependency of the response can be computed in a least-squares sense as

$$\Sigma V^T \approx \left( \Phi_m^T \Phi_m \right)^{-1} \Phi_m^T Y_m \qquad (6)$$

Consequently, predictions for unmeasured locations can now be computed as

$$Y_u = \Phi_u \left( \Phi_m^T \Phi_m \right)^{-1} \Phi_m^T Y_m \qquad (7)$$
Although Eq. (7) requires a matrix inversion, the rank of this matrix is controlled by sensor placement. Hence, judicious pretest sensor placement must be an integral part of this process. Fortunately, because the impact shapes are computed using singular value decomposition, they form an orthonormal set of basis vectors, i.e., $\Phi^T \Phi = I$, so rank deficiency can arise only from a poorly chosen measured partition $\Phi_m$.
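A sketch of Eqs. (6) and (7) follows; Phi_m and Phi_u are the measured and unmeasured row partitions of the model impact shapes, and Ym is assumed to hold the measured test time histories.

```matlab
% Sketch: expand measured test data Ym to unmeasured locations.
G  = (Phi_m' * Phi_m) \ (Phi_m' * Ym);  % time dependency, Eq. (6)
Yu = Phi_u * G;                         % unmeasured responses, Eq. (7)
% cond(Phi_m' * Phi_m) gauges whether the chosen sensor set supports
% the inversion; sensor placement controls this rank.
```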
2.2.2. Optimum sensor placement for impact problems
Optimal sensor placement must be driven by the ultimate goals of the test. If model calibration is the goal, sensor placement must focus on providing information to properly evaluate the established metrics. In multi-dimensional calibration efforts using the orthogonality metric, sensor placement is critical because, if sensors are not strategically placed, it is impossible to distinguish between impact shapes. Fortunately, the use of impact shapes enables the application of well-established sensor placement algorithms routinely used in modal tests. Placement for our example used the approach developed by Kammer (1991). Using this approach, sensors are placed to ensure proper numerical conditioning of the orthogonality matrix.
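The sketch below shows one common form of Kammer's Effective Independence procedure, iteratively discarding the candidate location that contributes least to the independence of the shapes; applying it to impact shapes rather than mode shapes is our assumption here.

```matlab
% Sketch: Phi is the impact shape matrix over all candidate locations;
% keep ns sensors (ns must be at least the number of retained shapes).
keep = 1:size(Phi, 1);
while numel(keep) > ns
    P  = Phi(keep, :);
    Ed = sum((P / (P' * P)) .* P, 2);  % effective independence: diag(P*inv(P'*P)*P')
    [~, imin] = min(Ed);               % least informative candidate
    keep(imin) = [];                   % remove it and iterate
end
```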
2.3. Parameter selection
The parameter selection process (parameters being, in this case, material properties, structural dimensions, etc.) relies heavily on the analyst's knowledge of and familiarity with the model and its assumptions. Formal approaches like the Phenomena Identification and Ranking Table (PIRT), discussed by Wilson and Boyack (1998), provide users with a systematic method for ranking parameters for a wide class of problems. Elements of this approach are used for the initial parameter selection. After an initial parameter selection is made, parameter uncertainty must be quantified, empirically if data are available; oftentimes, engineering judgment is ultimately used. With an initial parameter set and an uncertainty model at hand, parameter importance is assessed using uncertainty propagation. That is, the LS-DYNA model is exercised with parameter values created using the Halton (1960) deterministic sampling technique. Time history results are processed to compute the metrics and to assess variability. A by-product of this step is a set of variance-based sensitivity results, which are used to rank the parameters. In the end, the adequacy of the parameter set is judged based on the probability of being able to reconcile test with analysis. If the probability is zero, as will be shown later in the example, the parameter selection must be revisited.
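A sketch of the sampling step follows, with hypothetical nominal values and ±20% uniform bounds of the kind used later in Table 1; haltonset is part of the Statistics and Machine Learning Toolbox.

```matlab
% Sketch: generate Ns parameter vectors over uniform bounds using a
% Halton sequence; each row of P defines one LS-DYNA run.
np  = 8;                                   % number of uncertain parameters
nom = ones(1, np);                         % placeholder nominal values (hypothetical)
lb  = 0.8 * nom;                           % -20% lower bounds
ub  = 1.2 * nom;                           % +20% upper bounds
Ns  = 150;                                 % number of runs
H   = net(haltonset(np, 'Skip', 1), Ns);   % Ns-by-np samples in [0,1]
P   = lb + H .* (ub - lb);                 % scale to the parameter ranges
```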
2.4. Optimization strategy
With an adequate set of parameters selected, the next step is to use an optimization procedure to determine values that reconcile test with analysis. A difficulty with using classical optimization tools in this step is the computational time it takes to obtain LS-DYNA solutions. Although in the helicopter example the execution time was optimized to be less than seven minutes, the full model execution time is measured in days. For this reason, optimization tools for this step should ideally take advantage of all LS-DYNA solutions at hand. To address this issue, optimization tools that use surrogate models in addition to new LS-DYNA solutions are ideal. For the present application, the Constrained Optimization using Response Surface (CORS) algorithm, developed by Regis and Shoemaker (2005), has been implemented in MATLAB for reconciliation. Specifically, the algorithm starts by looking for parameter values away from the initial set of LS-DYNA solutions, then slowly steps closer to known solutions by solving a series of local constrained optimization problems. This optimization process will produce a global optimum if enough steps are taken. Of course, the user controls the number of steps and therefore the accuracy and computational expense of the optimization. In cases where the predictive capability of the surrogate model is poor, CORS adds solutions in needed areas. Because parameter uncertainty is not used explicitly in the optimization, this approach is considered to be deterministic. If a probabilistic approach were used instead (see Kennedy and O'Hagan, 2001; McFarland et al., 2008), in addition to a reconciling set, the user would also be able to determine the probability that the parameter set found is correct. Lack of credible parameter uncertainty data precludes the use of probabilistic optimization methods at this time, but future work could use the same computational framework.
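The loop below is a highly simplified sketch of the CORS idea under stated assumptions: fitSurrogate, evalSurrogate, and runLSDYNA are hypothetical user-supplied functions; X and F hold the previously evaluated parameter sets and their prediction errors; and pdist, pdist2, and fmincon come from the Statistics and Optimization Toolboxes.

```matlab
% Sketch: cycle a distance requirement from exploratory to local; each
% pass minimizes the surrogate while staying dmin away from known points.
for beta = [0.9 0.75 0.5 0.25 0.05 0]
    s    = fitSurrogate(X, F);               % refit surrogate to all data so far
    dmin = beta * max(pdist(X));             % required spacing from known points
    con  = @(x) deal(dmin - min(pdist2(x, X), [], 2), []);  % c <= 0 constraint
    x0   = lb + rand(1, np) .* (ub - lb);    % random feasible start
    xnew = fmincon(@(x) evalSurrogate(s, x), x0, ...
                   [], [], [], [], lb, ub, con);
    X = [X; xnew];                           % add the new point
    F = [F; runLSDYNA(xnew)];                % one expensive LS-DYNA evaluation
end
```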
2.5. Analysis of variance
Parameter sensitivity in most engineering fields is often associated with derivative calculations at specific parameter values. However, for the analysis of systems with uncertainties, sensitivity studies are often conducted using ANOVA. In classical ANOVA studies, data are collected from multiple experiments while varying all parameters (factors) and also while varying one parameter at a time. These results are then used to quantify the output response variance due to variations of a particular parameter, as compared to the total output variance when varying all parameters simultaneously. The ratio of these two variance contributions is a direct measure of the parameter importance. Sobol et al. (2007) and others (Mullershon and Liebsher, 2008; Homma and Saltelli, 1996; Sudret, 2008) have studied the problem as a means to obtain global sensitivity estimates using variance-based methods. To compute sensitivity using these variance-based methods, one must be able to compute many response predictions as parameters are varied. In our implementation, after a suitable set of LS-DYNA solutions is obtained, response surface surrogates are used to estimate additional solutions.
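A sketch of a first-order variance-based sensitivity estimate follows, using a binning approximation of the conditional mean; surrogateQ is an assumed surrogate of the metric (for example, the ERBF model of the next section), and the sample and bin counts are illustrative.

```matlab
% Sketch: estimate S_i = Var(E[Q|p_i]) / Var(Q) for parameter i by
% slicing the i-th parameter axis of a cheap Monte Carlo sample.
Nmc = 1e4;  nb = 20;  i = 1;
Ps  = lb + rand(Nmc, np) .* (ub - lb);           % uniform samples
Qs  = surrogateQ(Ps);                            % Nmc-by-1 metric values
edges    = linspace(lb(i), ub(i), nb + 1);
bins     = discretize(Ps(:, i), edges);          % bin index along p_i
condMean = accumarray(bins, Qs, [nb 1], @mean);  % E[Q | p_i] per bin
Si = var(condMean) / var(Qs);                    % fraction of variance from p_i
```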
2.6. Response surface methodology
A response surface (RS) model is simply a mathematical representation that relates input variables (parameters that the user controls) and output variables (response quantities of interest), often used in place of computationally expensive solutions. Many papers have been published on response surface techniques; see, for example, (Myers and Montgomery, 2002). The approach adopted here is the Extended Radial Basis Functions (ERBF) method described by Mullur and Messac (2005, 2006). In this adaptive response surface approach, each sample point contributes radial basis terms plus additional non-radial polynomial terms, so the total number of RS parameters grows with both the number of sample points and the number of input dimensions; the coefficients are determined by solving a linear system at the sampled LS-DYNA solutions.
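As a stand-in sketch, the following fits a plain Gaussian RBF surrogate; the actual ERBF formulation of Mullur and Messac augments this with the non-radial polynomial terms per sample point, which are omitted here for brevity.

```matlab
% Sketch: X is Ns-by-np sampled parameters, F is Ns-by-1 metric values.
sq = @(A, B) max(sum(A.^2, 2) + sum(B.^2, 2)' - 2 * (A * B'), 0);  % squared distances
c  = mean(pdist(X))^2;                        % kernel width heuristic (assumed)
K  = exp(-sq(X, X) / (2 * c));                % Gram matrix at the samples
w  = K \ F(:);                                % interpolation weights
surrogateQ = @(Xq) exp(-sq(Xq, X) / (2 * c)) * w;   % cheap predictor handle
```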
3. Description of helicopter test article
A full-scale crash test of an MD-500 helicopter, as described by Annett and Polanco (2010), was conducted at the Landing and Impact Research (LandIR) Facility at NASA Langley Research Center (LaRC). Figure 1a shows a photograph of the test article while it was being prepared for the test, including an experimental dynamic energy-absorbing honeycomb structure, designed by Kellas (2007), underneath the fuselage. The airframe, provided by the US Army's Mission Enhanced Little Bird (MELB) program, has been used for civilian and military applications for more than 40 years. NASA Langley is spearheading efforts to develop analytical models capable of predicting the impact response of such systems.
4. LS-DYNA model description
To predict the behavior of the MD-500 helicopter during a crash test, an LS-DYNA (Hallquist, 2006) finite element model (FEM) of the fuselage, as shown in Figure 1b, was developed and reported in (Annett and Polanco, 2010). The element count for the fuselage was targeted not to exceed 500,000 elements, including seats and occupants, with 320,000 used to represent the energy-absorbing honeycomb and skid gear. Shell elements were used to model the airframe skins, ribs, and stiffeners. Similarly, the lifting and pullback fixtures and the platform supporting the data acquisition system (mounted in the tail) were modeled using rigid shells. Ballast used in the helicopter to represent the rotor, tail section, and fuel was modeled as concentrated masses. For materials, the fuselage section is modeled using Aluminum 2024-T3 with elastic-plastic properties, whereas the nose is fiberglass and the engine fairing is Kevlar fabric. Instead of using the complete “as-tested” FEM, this study uses a simplified model created by removing the energy-absorbing honeycomb, skid gear, anthropomorphic dummies, data acquisition system, and lifting/pull-back fixtures. The resulting simplified model is shown in Figure 2. Even with all these components removed, the simplified model had 27,000 elements, primarily shell elements representing the airframe skins, ribs, and stiffeners. The analytical test case used for calibration simulates a helicopter crash onto a hard surface with vertical and horizontal speeds of 26 ft/sec and 40 ft/sec, respectively. For illustration, Figure 3 shows four frames from an LS-DYNA simulation as the helicopter impacts the hard surface.
5. Example results
Results described here are derived from the simplified LS-DYNA model, as shown in Figure 2. This simplified model reduced the computational time from days to less than seven minutes and allowed for timely debugging of the software and demonstration of the methodology, which is the main focus of the chapter. Nonetheless, the same approach can be applied to the complete “as-tested” FEM model without modifications.
For evaluation purposes, simulated data are used in lieu of experimental data. Because, more often than not, analytical model predictions do not agree with measured data, the simplified model was arbitrarily perturbed. Knowledge of the perturbations and the areas affected was withheld until the entire calibration process was completed. Data from this model, referred to as the perturbed model, take the place of experimental data. In this study, no test uncertainty is considered; therefore, only one data set is used as the test set.
Figure 4(a) depicts a wireframe of the simplified model showing only 405 nodes. Superimposed is a second wireframe with connections to 34 nodes identified by an optimal sensor placement algorithm. Each node can provide up to 3 translational measurements; however, here the placement algorithm was instructed to place only 41 sensors. Figure 4(b) shows the locations of the 41 sensors. Results from the optimal sensor placement located 8 sensors along the
5.1. Initial parameter selection
Calibration efforts begin by selecting model parameters thought to be uncertain. Selecting these parameters is perhaps the most difficult step. Not knowing what had been changed in the perturbed model, the initial study considered displacements, stress contours, and plastic strain results at different locations on the structure before selecting the modulus of elasticity and tangent modulus at various locations. The parameters and uncertainty ranges selected are shown in Table 1. Without additional information about parameter uncertainty, the upper and lower bounds were selected using engineering judgment with the understanding that values anywhere between the bounds were equally likely.
| Parameter | Nominal value (psi) | Lower bound (psi) | Upper bound (psi) |
| Modulus of elasticity, location 1 | 10,000,000 | 8,000,000 | 12,000,000 |
| Modulus of elasticity, location 2 | 10,000,000 | 8,000,000 | 12,000,000 |
| Modulus of elasticity, location 3 | 9,880,000 | 7,904,000 | 11,856,000 |
| Modulus of elasticity, location 4 | 10,000,000 | 8,000,000 | 12,000,000 |
| Modulus of elasticity, location 5 | 10,000,000 | 8,000,000 | 12,000,000 |
| Tangent modulus, location 1 | 134,200 | 107,360 | 161,040 |
| Tangent modulus, location 2 | 134,200 | 107,360 | 161,040 |
| Tangent modulus, location 3 | 134,200 | 107,360 | 161,040 |
With the parameter uncertainty definition in Table 1, LS-DYNA models can be created and executed to study the calibration metrics described earlier. As an example, 150 LS-DYNA runs with the simplified model were completed while varying parameters over the ranges shown in Table 1. The norm time history $Q(t, p)$ was then computed for each of the 150 runs, and its statistics across the runs were used to construct the uncertainty bounds.
5.2. Revised parameter selection
A second search for a revised parameter set involved conversations with the model developer and additional runs while varying parameter bounds to see their effect on the norm metric $Q$.
6. Evaluation of calibration metrics with revised parameter set
Results for metric $Q$ with the revised parameter set are summarized next. Table 2 lists the revised parameters along with their nominal values, lower and upper bounds, and the calibrated values obtained at the end of the process.
| Parameter | Nominal value | Lower bound | Upper bound | Calibrated value |
| Keel beam stiffener thickness (in) | 0.020 | 0.015 | 0.025 | 0.0161 |
| Belly panel thickness (in) | 0.090 | 0.08 | 0.135 | 0.1008 |
| Keel beam thickness (in) | 0.040 | 0.035 | 0.045 | 0.0358 |
| Lower tail thickness (in) | 0.040 | 0.035 | 0.045 | 0.0414 |
| Back panel thickness (in) | 0.020 | 0.015 | 0.025 | 0.0166 |
| Upper tail thickness (in) | 0.020 | 0.015 | 0.025 | 0.0168 |
Thus far in this study, metric $Q$ has been used to assess the overall closeness of the response norms; the orthogonality metric provides the complementary spatial assessment. Figure 7 shows the orthogonality results for the baseline model, with analysis impact shapes along the ordinate and test along the abscissa. Colors represent the numerical value of the vector projections, e.g., a value of 1 (black) indicates perfect matching between test and analysis. Listed on the labels are the corresponding shape contributions to the response for both analysis (ordinate) and simulated test (top axis). For example, the first impact shape for analysis (bottom left) contributes 0.4 of the total response, as compared to 0.39 for test. It is apparent that the initial impact shape matching is poor at best, with the exception of the first two shapes. An example of an impact shape is provided in Figure 8, where test impact shape number 2 is shown as a sequence of 8 frames.
6.1.1. Sensitivity with revised parameter set
Another important aspect of the calibration process is understanding how parameter variations affect the norm metric $Q$. Variance-based sensitivity estimates, computed with the ERBF surrogates in place of full LS-DYNA solutions, provide this information and are used to rank the revised parameters.
6.2. Optimization for calibration parameters
The optimization problem is set up to find parameter values that minimize the natural log of the prediction error using CORS. Starting with only 50 LS-DYNA solutions, CORS is allowed to compute 60 additional LS-DYNA runs. After completing the additional runs, the calibrated parameter set in Table 2 produced an overall prediction error reduction of 8%. Because the number of LS-DYNA runs is set by the user, the calibrated parameter set in Table 2 is not a converged set but rather an intermediate set. Nonetheless, it is referred to as the calibrated parameter set. The optimization process is purposely set up this way to control the computational expense of computing new LS-DYNA solutions. To show the impact of the parameter changes on the vector norm $Q$, a comparison is presented of results with data from the simulated test (solid blue), the baseline model using nominal values (dotted red), and the calibrated model (dashed-dot black). When comparing vector norms, an overall assessment of the model predictive capability is obtained. However, users may prefer to evaluate improvements at specific sensor locations. For example, Figure 11 shows the velocity for the simulated test, baseline model, and calibrated model at location 34 (see Figure 1) in the z-direction. Improvements like those shown in Figure 11 are common for most locations, but not all. Furthermore, after calibration, significant improvements are also apparent in the orthogonality results, as shown in Figure 12. As in Figure 7, results from the calibrated model are along the ordinate and the abscissa contains simulated test impact shapes. When comparing results against the baseline model (see Figure 7), it is clear that impact shape predictions have improved significantly, i.e., responses from the calibrated model and test are closer.
6.3. Revealing the correct answer
After completing the calibration process, it is instructive to examine the actual changes made to the simplified model that resulted in the perturbed model. Because data from the perturbed model served as our target test data set, revealing the changes effectively gives the reader the true answer to the problem. Figure 13a depicts all the sections altered in the perturbed model. Alterations consisted of ±20% changes in thickness at these locations. In contrast, Figure 13b shows the sections updated during the calibration process. It is interesting to see that, although the exact changes were not identified, the regions where changes needed to be made were identified.
7. Concluding remarks
An approach to conduct model calibration of nonlinear models has been developed and demonstrated using data from a simulated crash test of a helicopter. Fundamental to the approach is the definition and application of two calibration metrics: 1) a time domain metric based on the 2-norm of the system response vector, evaluated statistically over the parameter uncertainty; and 2) an orthogonality metric that compares multi-dimensional impact shapes computed from test and analysis data.
For the simulated helicopter example studied in this chapter, a flawed parameter set was first evaluated and found to be inadequate using the norm metric $Q$. A revised parameter set was then identified and, after optimization with CORS, reduced the prediction error and significantly improved the orthogonality results, correctly highlighting the regions of the model where changes were needed.
Acknowledgments
The authors would like to thank Susan Gorton, Manager of the Subsonic Rotary Wing Project of NASA’s Fundamental Aeronautics Program, for her support of this activity.
References
- 1. Anderson, M.C., Gan, W., and Haselman, T.K. (1998). Statistical Analysis of Modeling Uncertainty and Predictive Accuracy for Nonlinear Finite Element Models. Proceedings of the 69th Shock and Vibration Symposium, Minneapolis/St. Paul, MN, USA.
- 2. Anonymous (1988). Military Standard MIL-STD-1290A (AV), Light Fixed and Rotary-Wing Aircraft Crash Resistance. Department of Defense, Washington, DC, USA, September 1988.
- 3. Annett, M.S. and Polanco, M.A. (2010). System-Integrated Finite Element Analysis of a Full-Scale Helicopter Crash Test with Deployable Energy Absorbers. Proceedings of the 66th American Helicopter Society Annual Forum, Phoenix, AZ, USA, May 2010.
- 4. Atamturktur, S., Hemez, F., and Unal, C. (2010). Calibration Under Uncertainty for Finite Element Models of Masonry Monuments. Los Alamos National Laboratory Report LA-14414, February 2010.
- 5. Cronkhite, J.D. and Mazza, L.T. (1988). KRASH Analysis Correlation with the Bell ACAP Full-Scale Aircraft Crash Test. A Look at the Future, VA, USA, October 1988.
- 6. Hallquist, J.O. (2006). LS-DYNA Theory Manual. Livermore Software Technology Corporation, Livermore, CA, USA.
- 7. Halton, J.H. (1960). On the Efficiency of Certain Quasi-Random Sequences of Points in Evaluating Multi-Dimensional Integrals. Numerische Mathematik, Vol. 2, pp. 84-90.
- 8. Homma, T. and Saltelli, A. (1996). Importance Measures in Global Sensitivity Analysis of Nonlinear Models. Reliability Engineering & System Safety, Vol. 52, No. 1, pp. 1-17, April 1996.
- 9. Horta, L.G., Lyle, K.H., and Lessard, W.B. (2003). Evaluation of Singular Value Decomposition Approach for Impact Dynamic Data Correlation. NASA TM-2003-212657, October 2003.
- 10. Jackson, K.E., Fasanella, E.L., and Lyle, K.H. (2006). Crash Certification by Analysis - Are We There Yet? Proceedings of the 62nd American Helicopter Society Annual Forum, Phoenix, AZ, USA, May 2006.
- 11. Kamat, M.P. (1976). Survey of Computer Programs for Prediction of Crash Response and of Its Experimental Validation. Measurement and Prediction of Structural and Biodynamic Crash-Impact Response, pp. 33-48, December 1976.
- 12. Kammer, D.C. (1991). Sensor Placement for On-Orbit Modal Identification and Correlation of Large Space Structures. Journal of Guidance, Control, and Dynamics, Vol. 14, No. 2, pp. 251-259, April 1991.
- 13. Kellas, S. and Jackson, K.E. (2007). Deployable System for Crash-Load Attenuation. Proceedings of the 63rd American Helicopter Society Annual Forum, Virginia Beach, VA, USA, May 2007.
- 14. Kennedy, M.C. and O'Hagan, A. (2001). Bayesian Calibration of Computer Models. Journal of the Royal Statistical Society: Series B, Vol. 63, Part 3, pp. 425-464.
- 15. McFarland, J., Mahadevan, S., Romero, V., and Swiler, L. (2008). Calibration and Uncertainty Analysis for Computer Simulations with Multivariate Output. AIAA Journal, Vol. 46, No. 5, pp. 1253-1265, May 2008.
- 16. Mullershon, H. and Liebsher, M. (2008). Statistics and Non-Linear Sensitivity Analysis with LS-OPT and DSPEX. Proceedings of the 10th International LS-DYNA Users Conference, Dearborn, MI, USA, June 2008.
- 17. Mullur, A. and Messac, A. (2005). Extended Radial Basis Functions: More Flexible and Effective Metamodeling. AIAA Journal, Vol. 43, No. 6, June 2005.
- 18. Mullur, A. and Messac, A. (2006). Metamodeling Using Extended Radial Basis Functions: A Comparative Approach. Engineering with Computers, Vol. 21, pp. 203-217.
- 19. Myers, R.H. and Montgomery, D.C. (2002). Response Surface Methodology: Process and Product Optimization Using Designed Experiments, 2nd Edition. John Wiley and Sons, USA.
- 20. Oberkampf, W.L. and Barone, M.F. (2006). Measures of Agreement Between Computation and Experiment: Validation Metrics. Journal of Computational Physics, Vol. 217, pp. 5-36, June 2006.
- 21. Oberkampf, W.L., Trucano, T.G., and Hirsch, C. (2003). Verification, Validation, and Predictive Capability in Computational Engineering and Physics. Sandia Report SAND2003-3769, February 2003.
- 22. Regis, R.G. and Shoemaker, C.A. (2005). Constrained Global Optimization of Expensive Black Box Functions Using Radial Basis Functions. Journal of Global Optimization, Vol. 31, pp. 153-171, September 2005.
- 23. Roache, P.J. (1998). Verification and Validation in Computational Science and Engineering. Hermosa Publishers, Albuquerque, NM, USA.
- 24. Schwer, L.E. (2007). Validation Metrics for Response Histories: Perspectives and Case Studies. Engineering with Computers, Vol. 23, No. 4, pp. 295-309, June 2007.
- 25. Sobol, I.M., Tarantola, S., Gatelli, D., Kucherenko, S.S., and Mauntz, W. (2007). Estimating Approximation Error When Fixing Unessential Factors in Global Sensitivity Analysis. Reliability Engineering & System Safety, Vol. 92, pp. 957-960, July 2007.
- 26. Sudret, B. (2008). Global Sensitivity Analysis Using Polynomial Chaos Expansion. Reliability Engineering & System Safety, Vol. 93, pp. 964-979, July 2008.
- 27. Tabiei, A., Lawrence, C., and Fasanella, E.L. (2009). Validation of Finite Element Crash Test Dummy Models for Predicting Orion Crew Member Injuries During a Simulated Vehicle Landing. NASA TM-2009-215476, March 2009.
- 28. Thacker, B.H. (2005). The Role of Non-determinism in Computational Model Verification and Validation. Proceedings of the 46th AIAA Structures, Structural Dynamics and Materials Conference, Austin, TX, USA, AIAA 2005-1902, April 2005.
- 29. Wittlin, G. and Gamon, M.A. (1975). Experimentally Verified Analytical Techniques for Predicting Vehicle Crash Responses. AIAA Paper 75-273, February 1975.
- 30. Wilson, G.E. and Boyack, B.E. (1998). The Role of PIRT in Experiments, Code Development and Code Applications Associated with Reactor Safety Assessment. Nuclear Engineering and Design, Vol. 186, pp. 23-37, November 1998.