Open access peer-reviewed chapter

Propagating Stress-Strain Curve Variability in Multi-Material Problems: Temperature-Dependent Material Tests to Plasticity Models to Structural Failure Predictions

Written By

Vicente Romero, Amalia Black, George Orient and Bonnie Antoun

Submitted: 17 September 2019 Reviewed: 04 November 2019 Published: 17 December 2019

DOI: 10.5772/intechopen.90357

From the Edited Volume

Engineering Failure Analysis

Edited by Kary Thanapalan


Abstract

This chapter presents a practical methodology for characterizing and propagating the effects of temperature-dependent material strength and failure-criteria variability to structural model predictions. The application involves a cylindrical canister (“can”) heated and pressurized to failure. Temperature dependence and material sample-to-sample stochastic variability are inferred from very limited experimental data of a few replicate uniaxial tension tests at each of seven temperatures spanning the 800°C temperature excursion experienced by the can, for each of several stainless steel alloys that make up the can. The load-displacement curves from the material tests are used to determine effective temperature-dependent stress-strain relationships in ductile-metal plasticity models used in can-level model predictions. Particularly challenging aspects of the problem are the appropriate inference, representation, and propagation of temperature dependence and material stochastic variability from just a few experimental data curves at a few temperatures (as sparse discrete realizations or samples from a random field of temperature-dependent stress-strain behavior), for multiple such materials involved in the problem. Currently unique methods are demonstrated that are relatively simple and effective.

Keywords

  • materials
  • modeling
  • calibration
  • uncertainty
  • thermal-structural failure

1. Introduction

Sandia National Laboratories is developing the capability to adequately model the complex multiphysics leading to pressurization and breach of sealed compartments that contain organic materials such as foams, which volatilize when the compartments are heated in fire accidents. The present chapter along with references [1, 2, 3, 4, 5, 6, 7] describes aspects of the associated activities, including experiments, modeling and simulation, code and calculation verification, and advanced model validation and uncertainty quantification (UQ) methods.

The modeling and verification, validation, and uncertainty quantification (VVUQ) activities were performed under a multiyear “abnormal thermal-mechanical breach” (T-M breach) task [1] of a Predictive Capability Assessment Project (PCAP) in the Verification & Validation (V&V) subelement of the U.S. Dept. of Energy Advanced Simulation and Computing (ASC) program. The goal of the PCAP T-M breach task was to assess the error and quantify the uncertainty in modeling the thermal-chemical-mechanical response and weld-related breach failure of sealed canisters (“cans”) weakened by high temperatures and pressurized by heat-induced pyrolysis of foam. The planned outcome of the PCAP T-M breach task was to measure improvements in prediction accuracy over time as the models and computer platforms became more capable.

The Sandia Weapon System Engineering and Assessment Technology Campaign (WSEAT) program supported the project by conducting material characterization tests and validation experiments [2] (see Figure 1 ). This partnership provided an opportunity to develop a fully integrated process from design of experiments through model validation assessment, with uncertainty reduced as much as possible and propagated through the process.

Figure 1.

Thermal-chemical-mechanical validation experiments [2], including internal pressure response. The ‘can’ includes the cylindrical ‘sidewalls’ or ‘walls,’ as well as the top ‘lid’ and bottom ‘base.’

Breach failures were expected to occur, and in the tests, they did occur, at the circumferential perimeter (laser) weld that joins the top lid to the can sidewalls. This is because the weld thickness is significantly less than that of the can lid and sidewalls (see Figure 2 ), and the tests/cans of interest in this chapter were heated at the lid top surface, so the top weld material was much hotter and weaker than the perimeter weld material at the bottom of the can. While predictions of canister internal temperatures, time to breach, and breach pressure are all sought in the T-M breach task, breach pressure is the quantity of interest (QOI) in this chapter.

Figure 2.

Close-up of modeled geometry where can top lid, sidewall, and internal foam meet. (Nominal geometry values are 0.03 in. weld depth, 0.0645 in. wall thickness, and 0.007 in. clearance between the lid and sidewall in the weld region.)

This chapter describes a practical methodology for characterizing and propagating the effects of variability of material strength and failure criteria to structural response and failure predictions involving multiple temperature-dependent materials. Relatively simple and effective UQ techniques are used to model and propagate temperature dependence and material sample-to-sample variability effects inferred from very limited material characterization tests.

Section 2 summarizes the material characterization tests and results. These involve uniaxial tension tests on several cylinder specimens at each of seven temperatures spanning the 800°C temperature excursion experienced by the can, for two stainless steel alloys that make up the can and weld materials. Section 3 summarizes the ductile-metal material constitutive models used for elastic-to-plastic stress-strain response at a given temperature. The procedure to parameterize the constitutive model’s stress-strain relationships through inverse analysis to best match measured load-deflection data curves from the tension tests is also explained and demonstrated. Section 4 describes the material damage models and failure criteria calibrated to the experimental stress-strain data. The thermal-chemical-mechanical models for predicting can thermal, pressurization, and structural response (and failure) are also briefly summarized. Section 5 describes the use of the models and associated simulations to propagate effects of material strength and failure variability to estimate breach failure pressure variability. Sensitivity analysis is also performed to assess the relative contributions of the various materials’ strength and failure criteria variability on the total variability of predicted failure pressure. Section 6 provides some summary observations and conclusions.


2. Temperature-dependent material strength characterization tests and results

Round-bar tensile tests were conducted at seven temperatures: 20, 100, 200, 400, 600, 700, and 800°C for both the can lid and base (bar) material and the sleeve/wall (tubular) material. Most specimens were in the axial orientation (see Figures 3 and 5 ), but some tests for the lid material were conducted in the radial orientation at 20, 600, and 800°C to provide an indication of orientation dependence.

Figure 3.

Axial tensile specimen extraction from bar stock.

2.1 Tensile characterization for PCAP 304L stainless steel lid and base material

Round-bar tensile-test specimens of 0.3 in. (inch) diameter and extensometer gage length 0.80 in. were used in the tension tests described here. Specimens in the axial direction were extracted from 3.5 in. diameter bar stock as shown in Figure 3 . The can top lids and bottom bases were machined from this lot of bar stock.

Material strength stress-strain characterization tests (uniaxial tensile tests) were conducted [8] on an MTS 880 20-kip axial test frame with displacement (stroke) control to produce a nominal strain rate of 0.001/s. This strain rate was based on the model-predicted conditions in the PCAP thermal-mechanical breach experiments [2]. A strain rate of 0.0001/s, based on computed local strain rates in the weld region, was also tested to explore the sensitivity of the material to strain rate. However, strain-rate effects were not included in the PCAP material strength model because it was judged that, in the accident scenarios being assessed here, strain-rate effects were of secondary importance prior to reaching the maximum-load condition on the stress-strain curve where our failure criteria would be activated (see Section 4). For conditions past maximum load, it is well known that 304L stainless steel (ss) has nonnegligible strain-rate dependence at all temperatures.

The test results for the PCAP lid material in the axial direction are shown in Figure 4 in terms of engineering stress versus engineering strain. As expected, the strength of the material decreases as the temperature increases. However, around 600°C, there is a noticeable inflection point in the temperature-related shape trend of the stress-strain curves. It is believed that this inflection occurs because the deformation mechanisms change from void growth and deformation to grain slippage at about half of the melt temperature of 304L stainless steel, which is roughly 700°C (see Section 2.3).

Figure 4.

Engineering stress vs. engineering strain curves for PCAP lid and base material (ss 304L, axial specimens from bar stock).

Reannealed and non-reannealed sets of specimens were tested to better quantify the effect of the material starting condition on the tensile properties. It was presumed that annealing the tensile specimens would not have a large effect since the raw materials were reported to be in an annealed condition, but the test results do show a noticeable difference [1], which suggests that the original material was not in a fully annealed state. The specimens that were reannealed in this test were held in a vacuum at 1000°C for 30 min. The effect of reannealing was larger at the lower temperatures; as the test temperature increased, the effect became less noticeable, and by 700°C it was essentially indistinguishable.

Specimens cut from the cylindrical bar stock in a direction normal to its axis were tested at 20, 600, and 800°C to provide an indication of orientation dependence. These results are not shown, but the radial specimens typically reached their ultimate stress at lower displacements/strains and weakened more sharply thereafter than the axial specimens. Nonetheless, an isotropic constitutive model was used (see Section 3.1), calibrated only with stress-strain curves from tension tests on the axial-cut specimens.

2.2 Tensile characterization for PCAP 304L stainless steel wall material

Specimens in the axial direction were extracted from the tube-stock material as shown in Figure 5 . The can sleeve/sidewalls were machined from this stock. The nominal specimen diameter was 0.1 in. with an extensometer gage length of 0.62 in. The test results for the PCAP wall material in the axial direction are shown in Figure 6 . Like the lid material, the strength of the wall material decreases as the temperature increases, and a noticeable change in the temperature-related shape trend of the stress-strain curves occurs at about 600°C.

Figure 5.

Axial tensile specimen extraction from wall material.

Figure 6.

Engineering stress vs. engineering strain curves for PCAP tube (wall) material (ss 304L, axial specimens from tube stock).

2.3 Ignored but possible creep and strain-rate effects

In general, the engineering stress versus engineering strain curves for both the wall and lid materials exhibit a markedly different character above 600°C. Below this temperature, the ultimate strain decreases as the temperature increases, but above 600°C, features of the stress-strain curve change and the ultimate strain becomes larger. Considering that test data are available at only one strain rate, a plausible explanation is that at about half of the melt temperature, creep deformation begins to occur, manifesting as creep relaxation in a displacement-controlled tensile test (see [9]).

For the PCAP test conditions, a temperature of 600°C translates to a homologous temperature of 0.48. The yield stress at 600°C is about 25 ksi. Converting the yield stress to a shear stress and normalizing by the shear modulus (76.3 GPa) gives a value of approximately 1.1 × 10⁻³. These conditions are right at the transition to power-law creep [9], where dislocations are able to climb (through thermal fluctuations) over precipitates and other barriers. Thus, a creep-dependent model may be necessary for temperature conditions above 600°C.
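As a quick arithmetic check of the values quoted above, the following minimal sketch reproduces the normalized shear stress; the maximum-shear (Tresca) conversion τ ≈ σy/2 and the ksi-to-MPa conversion factor are assumptions chosen to illustrate the calculation, not steps taken from the chapter.

```python
# Minimal sketch: normalized shear stress at 600 C for the creep-transition argument.
# The Tresca (max-shear) conversion tau = sigma_y / 2 is an assumption that
# reproduces the quoted ~1.1e-3 value.
KSI_TO_MPA = 6.894757          # unit conversion: 1 ksi in MPa

sigma_y_ksi = 25.0             # approximate yield stress at 600 C (from the text)
shear_modulus_mpa = 76.3e3     # shear modulus, 76.3 GPa (from the text)

tau_mpa = 0.5 * sigma_y_ksi * KSI_TO_MPA        # max-shear estimate of shear stress
normalized_shear = tau_mpa / shear_modulus_mpa  # shear stress / shear modulus

print(f"normalized shear stress ~ {normalized_shear:.1e}")   # ~1.1e-03
```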

Additional factors may include temperature- and strain-rate-dependent phase transformation mechanisms as reported in [10]. At room temperature, a martensitic phase forms under strain loading. This does not appear to happen at elevated temperatures, leaving the material in the austenitic phase, which is weaker than the martensitic phase and permits more necking at elevated temperatures.

These potential mechanisms still need to be further investigated. A creep and strain-rate–dependent phenomenological constitutive model is recommended in future studies with elevated temperatures, especially if the material deformation will be simulated past maximum load conditions.


3. Material constitutive model and stress-strain parameterization from tension-test data

Mechanical constitutive behavior was modeled using a strain-rate-independent isotropic ductile-metal multilinear elastic plastic (MLEP) plasticity model (e.g., [11]). The parameterized form of the “true” stress-strain curve (Cauchy stress‑plastic logarithmic strain) consistent with the constitutive model’s formulation is represented in piecewise linear fashion by multiple linear segments as described in Section 3.2. Fundamental assumptions of the constitutive model follow.

3.1 Constitutive model description

Strain-rate independence: Load-displacement response of a specimen subjected to constant strain rate is independent of the strain-rate magnitude. For ductile metals and over the small strain rates and small range of rates encountered in the PCAP application, this is considered an acceptable assumption below about half of the melting temperature. However, it is known that 304L stainless steel exhibits some strain-rate dependence at lower temperatures as well, especially past a maximum load condition. In the PCAP project, this issue was examined by testing specimens at several strain rates expected to represent the bulk strain rates of the cans in the tests. This was used to assess model-form uncertainty since local strain rates are expected to spatially vary over the regions surrounding stress concentrations, like at the welds. Ultimately, strain-rate effects were judged to be small relative to other modeling errors and uncertainties in the PCAP T-M breach problem.

Independence from hydrostatic stress: Independence of yield behavior from the hydrostatic stress state in metals is well established. Plastic deformation is attributed to shear states of stress and strain characterized by the second invariant of the deviatoric stress tensor (J2). Definitions of the von Mises effective stress σ_eff and the equivalent plastic strain (EQPS) follow from this statement [13]. The effective stress and equivalent plastic strain collapse triaxial stress and strain states in structural applications into uniaxial measures that map onto experimental stress-strain curves derived from uniaxial load-displacement tests.

Isotropy: The inelastic constitutive response is independent of the load orientation. This is a reasonable assumption for metal components unless they have been fabricated with processes that introduce directional character of the grain structure such as rolling or forging without annealing.

The MLEP model is a standard metal-plasticity constitutive representation in industry practice. MLEP treatment keeps FE models affordable with reasonable computational resources and is suitable as long as its limitations are understood and not significantly violated. The MLEP model only relates stress and strain; it makes no intrinsic statement about material strength-related failure. Material failure modeling is discussed in Section 4.1.

3.2 Constitutive model parameterization procedure

Material characterization involves solving an inverse problem to determine the MLEP constitutive model’s “true” Cauchy stress‑plastic logarithmic strain relationship that recovers the load-displacement or engineering stress-strain data (e.g., Figure 4 ) from tensile tests. As such, a fitting procedure was used to enable the inverse calculation.

Before the onset of necking, the true stress and true strain in a tensile specimen can be calculated from the load-displacement recorded from the load cell of the testing frame and an extensometer mounted on the specimen. Once necking occurs, the true strain in the middle of the necked region must be calculated from a finite element (FE) model of the gage section of the specimen. The ASC massively parallel solid-mechanics code Adagio [12] was used for the simulations. To ensure that necking initiates between the ends of the gage section, a small imperfection is introduced in the mesh. Section 3.3 investigates hex-mesh density sufficiency.

The implicit relationship between the load-displacement response of the FE model and the MLEP constitutive relationship necessitates an iterative procedure to fit an MLEP model to the load-displacement record obtained from testing. Since the test data contain a large number of potentially noisy data points, some data conditioning through down-sampling and/or smoothing is necessary, resulting in on the order of 20 data points. This number is based on engineering judgment and is ultimately confirmed by comparing the fitted load-displacement curve with the entire test record. Point selection can be done directly on the experimental data record or by conditioning the experimental data and selecting points from the smooth conditioned curve.

Assuming that the multilinear true stress-true strain curve has been fitted up to the ith point, the next linear section is obtained by a two-step process illustrated in Figure 7 and summarized next.

  1. Bracket the slope of the next segment.

    1. Extend the current MLEP curve by the current slope candidate to the next strain point in the conditioned dataset.

    2. Solve the FE model with the current candidate MLEP model, loading to that strain point.

    3. Evaluate the reaction force in the gage section. If the force in the conditioned dataset is less than the computed reaction force, decrease the slope candidate and repeat step 1.a; otherwise, slope candidates bracketing the actual slope have been found.

  2. Solve for the slope whose reaction force matches the force in the conditioned experimental dataset. The current implementation of the MLEP fitting method uses the bisection algorithm for this.

Figure 7.

Iteration process to arrive at next point in piecewise linear parameterization of MLEP stress-strain curve.

This process requires careful management of analysis restarts to iterate efficiently on the MLEP line-segment slopes. Once all the line segments are determined, an analysis is run with the complete MLEP line-segment set to characterize necking through the entire strain history.
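For illustration, the bracketing-and-bisection logic of the two-step process above can be sketched as follows. The gage-section FE solve is represented by a user-supplied callable (the name `reaction_force` is hypothetical and would wrap an Adagio analysis in practice); the shrink factor, tolerance, and monotonicity assumption are illustrative choices, not the project's actual implementation.

```python
def fit_next_slope(reaction_force, target_force, strain_next, slope_guess,
                   shrink=0.8, tol=1e-3, max_iter=50):
    """Find the MLEP segment slope whose FE reaction force at strain_next
    matches target_force taken from the conditioned test data.

    reaction_force(slope, strain) -> force is a hypothetical stand-in for the
    gage-section FE solve with the candidate MLEP extension. The sketch assumes
    the reaction force increases monotonically with the candidate slope.
    """
    # Step 1: bracket the slope.
    hi = slope_guess
    while reaction_force(hi, strain_next) < target_force:
        hi /= shrink                      # candidate too soft -> stiffen
    lo = hi * shrink
    while reaction_force(lo, strain_next) > target_force:
        lo *= shrink                      # candidate too stiff -> soften

    # Step 2: bisect on the bracketed slope until the computed force matches
    # the conditioned data point within tolerance.
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        f = reaction_force(mid, strain_next)
        if abs(f - target_force) <= tol * abs(target_force):
            return mid
        if f > target_force:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

In practice each call to `reaction_force` is an expensive, restart-managed FE solve, which is why careful bracketing and a convergence tolerance matched to the QOIs matter.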

Noise in the test data was sometimes a factor in the MLEP calibrations for this project, especially at high temperatures. Cubic spline smoothing [14] was used to smooth test data when needed. The raw and smoothed data are illustrated in Figure 8 . The view is closely zoomed to a section of the curve.

Figure 8.

Cubic-spline smoothing of tension test data.
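A minimal sketch of this kind of data conditioning with a smoothing cubic spline is shown below; the synthetic noisy record and the smoothing factor are placeholders for the actual test data and settings.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic noisy engineering stress-strain record standing in for test data.
rng = np.random.default_rng(0)
strain = np.linspace(0.0, 0.5, 500)
stress = 70.0 * (1.0 - np.exp(-15.0 * strain)) + rng.normal(0.0, 0.5, strain.size)  # ksi

# Smoothing cubic spline; s controls the smoothing/fidelity trade-off and would
# be tuned by comparing the smoothed curve against the full test record.
spline = UnivariateSpline(strain, stress, k=3, s=len(strain) * 0.25)
stress_smooth = spline(strain)

# Down-sample the conditioned curve to on the order of 20 points for MLEP fitting.
pick = np.linspace(0, strain.size - 1, 20).astype(int)
fit_points = np.column_stack([strain[pick], stress_smooth[pick]])
print(fit_points[:3])
```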

It was visually observed that the differences between MLEP curves calibrated to different specimens (specimen-to-specimen differences) were much larger than the fitting errors from the calibration process itself.

Several factors influence the success of the iterative MLEP procedure:

Model gage-length and cross-section mismatch to test specimen geometry: This may make the model too stiff to follow necking behavior. Iteration usually fails near the ultimate load, resulting in a true stress-true strain curve with constant slope.

Element refinement: When the mesh is coarse, the model is too stiff to produce the target necking behavior. For strains beyond ultimate load, the iteration fails, resulting in a true stress-true strain curve with constant slope.

Load step: It may be necessary to reduce the load step in the solid-mechanics simulation so that the calibration procedure does not miss the necking behavior, which would drop the final engineering stress-strain curve below the test data.

Number of points before and after ultimate load: Similar to the load step effect, too few points result in the analysis taking the wrong path.

Noisy test data: An increased level of noise was observed in the data at elevated temperatures. This may actually be the result of a physical phenomenon, but the modeling approach in this study assumes smooth stress-strain behavior, and constitutive curves with abrupt changes in slope may throw the iteration off course. Data conditioning with a piecewise smooth regression fit usually addresses this adequately.

Stress and slope tolerance: Insufficient tolerance in the bisection method may result in failure to converge (and, therefore, constant slope true stress-true strain curve) caused by unstable undershooting/overshooting.

Confinement of necking within the extensometer gage length: It is possible that numerically the model necks at the ends of the gage section instead of the middle. Always check the final necking pattern before accepting the fitted model.

Amount of mesh imperfection to induce necking in the middle of the gage section: This is an artificial imperfection, and care must be taken not to reduce the cross-section area significantly. Typically, 0.1% or less artificial reduction of area is desired although this guideline may not be sufficient for “flat” stress-strain curves. Insensitivity with respect to this numerical uncertainty needs to be demonstrated.

Element type selection: Necking tends to excite the hourglass modes, if present, and the hardening curve might be altered by low resistance to shear deformation in the necked section.

The calibration process requires the elastic modulus, yield stress, and Poisson’s ratio as input. While the experimental load-displacement curves have data at small strains (<0.2%), they were measured with extensometers optimized for large strains (>50%). No specifications regarding their accuracy at small strains were received, and it was observed that while the lid data showed an initial linear section consistent with literature data for elastic modulus and yield stress, the wall data exhibited no significant linear section. The decision was made to use the modulus/yield data obtained from the test record for the lid and literature data for the wall.

The MLEP model is not parametric in the sense of a power law or a Johnson-Cook constitutive model, where a handful of parameters describe the shape of the true stress-true strain curves. A cubic spline elastic-plastic (CSEP [15]) version of MLEP now exists in which the true stress-strain curve is parameterized by on the order of seven stress-strain points or knots whose stress and strain values are simultaneously optimized so that the experimental load-displacement or engineering stress-strain curve is best matched. This also generally works well but often requires more model runs. In either case, generating material-curve data fits is not easy.

3.3 Solution verification

The MLEP fit process has been used by analysts at Sandia National Laboratories to fit room-temperature curves for several years, and experience has shown that about 16 elements across the radius of a cylindrical gage section are sufficient. High-temperature 304L curves, however, exhibit rather “flat” characteristics and low yield points, so mesh convergence was revisited here. The results in Figure 9 show that, for the final number of elements (32) used in the FE simulations in the MLEP calibrations, the calibration results are well converged up to and beyond the maximum load point where the stress-strain curves will be evaluated in the can-level simulations (because the material failure criteria are tied to the maximum load point; see Section 4.1).

Figure 9.

Solution verification for gage-section FE model and solves used for MLEP model calibrations.

3.4 Selected calibration results

Calibration to a set of wall specimen test data at room temperature is shown in Figures 10 and 11 . The yellow symbols indicate the calibrated curve, the test data are shown with black symbols, and the blue symbols identify the points selected from the spline-smoothed data where the MLEP fit was performed. Figure 11 illustrates the following: (a) the load-displacement record is not linear at small strains; (b) literature data were therefore used for modulus and yield stress (second blue marker); and (c) the MLEP calibration iteration stops when convergence criteria are satisfied, so the knee of the yellow curve and the second blue marker (the target) do not coincide exactly. Convergence criteria were decided based on considerations of the expected impact on QOIs and practical computational throughput limits.

Figure 10.

Room temperature MLEP calibration.

Figure 11.

Room temperature calibration, small strains.

3.5 Process automation and archiving

Considering the number of calibration instances (49 sets of test data comprising several replicate tension tests at seven temperatures for two materials), the calibration process was automated in a script, and the different instances of calibration were executed under the control of a DAKOTA [16] parametric study.


4. Material failure criteria and can pressurization and response/failure modeling

4.1 Weld material modeling and failure criteria

It was originally planned to obtain weld material stress-strain curves and failure criteria by calibrating to tension tests of butt-weld square bar specimens and then validating to can pie-section weld flexure tests to failure. However, both endeavors proved to be problematic experimentally and computationally [1] such that adequate model accuracy could not be established.

As a reasonable alternative, the following approach was taken. For welds of normal quality that do not have anomalies like voids, empirical evidence strongly suggests that weld material strength lies somewhere between the strengths of the two materials joined by the weld—here the lid and the can wall. Wall/tube material was slightly weaker at max load than the lid/bar-stock material, so we made a conservative-leaning choice to assign the wall/tube material curves and failure criteria to the weld.

Microstructural examination of the PCAP cans pressurized to failure indicated ductile overload failure of the laser welds of the heated lids. Equivalent plastic strain (EQPS) and tearing parameter (TP) are candidate models for accumulated material damage, as explained next. The damage values computed by these models at the point of maximum load (maximum engineering stress) in the uniaxial tension tests are taken to be the critical material failure criteria for the two models. This is consistent with current failure modeling practice at Sandia National Laboratories in conjunction with MLEP models in overload failure modes. Because of the notorious difficulty of predicting structural failure from material damage modeling, both models and their failure criteria were used and assessed as candidate indicators of the onset of failure in the PCAP application.

As paraphrased from [11], the TP failure indicator within Sierra uses an approach based on the work of Brozzo et al. [17]. This parameter takes the form of an evolution integral of the stress state integrated over the plastic strain. Two modifications were needed beyond Brozzo’s original formulation. The first modification was the inclusion of a Heaviside bracket on the maximum principal stress. That is, if the maximum principal stress is compressive (negative), the increment to the tearing parameter is zero. Thus, there is no increase in material “damage” for compressive stress states, nor is there “damage healing” for compressive states.

The second modification to the TP calculation resulted from the investigation of notched tensile test results. Two sets of notched round-bar tensile tests referenced and summarized in [11] were performed on different heat treatments of 6061-T6. Comparison between the simulations and the experimental results showed excessive ductility for the simulations using the original formula. By raising the stress-state portion of the integrand to the fourth power, a match between experimental data and simulation results was achieved. The final form, given in Eq. (E1), is also used for the ductile stainless steels in the present work.

\mathrm{TP} = \int \left[ \frac{2\,\sigma_T}{3\,(\sigma_T - \sigma_m)} \right]^{4} d\varepsilon_p \qquad \text{(E1)}

In this equation, σT is the maximum principal stress; σm is the mean stress (average of principal stresses); and εp is the equivalent plastic strain, EQPS.
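A minimal numerical sketch of accumulating TP along a strain history per Eq. (E1), including the Heaviside treatment of compressive maximum principal stress, follows; the stress and strain histories are synthetic placeholders, and this illustrates the formula rather than the Sierra implementation.

```python
import numpy as np

def accumulate_tearing_parameter(sigma_T, sigma_m, eqps):
    """Accumulate TP = integral of [2*sigma_T / (3*(sigma_T - sigma_m))]^4 d(eqps),
    with no increment when the maximum principal stress is compressive."""
    tp = 0.0
    for i in range(1, len(eqps)):
        d_eps = eqps[i] - eqps[i - 1]
        s1, sm = sigma_T[i], sigma_m[i]
        if s1 <= 0.0 or d_eps <= 0.0:
            continue  # Heaviside bracket: no damage growth for compressive states
        ratio = 2.0 * s1 / (3.0 * (s1 - sm))
        tp += ratio**4 * d_eps
    return tp

# Synthetic uniaxial-like history for illustration (sigma_m = sigma_1/3 in uniaxial tension).
eqps = np.linspace(0.0, 0.25, 100)
sigma_1 = 40.0 + 120.0 * eqps          # ksi, notional hardening curve
sigma_mean = sigma_1 / 3.0
print(accumulate_tearing_parameter(sigma_1, sigma_mean, eqps))   # ~0.25
```

For a uniaxial stress state the bracketed ratio equals one, so TP accumulates at the same rate as EQPS, which is consistent with the nearly equal critical TP and EQPS values listed in Table 1.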

All input parameters, including the critical value of TP that coincides with material “failure” as interpreted here, can be obtained from a model calibrated to a standard tensile test. The mechanical response code with the MLEP constitutive model requires Cauchy stress and plastic logarithmic strain to define the strain-hardening behavior. Determining these from a standard tensile test requires solving the inverse problem described in Section 3.2. More details of the tearing parameter model for ductile failure can be found in [11].

In addition to the critical TP value, the equivalent plastic strain critical value is also used to predict failure in order to assess the uncertainty due to failure-model form. EQPS is derived directly from the strain (e.g., [13]).

Although TP and EQPS critical values are often defined at material separation (failure) in the tension test, the critical values for this project were defined at the maximum load in the tension tests. This decision was made for two reasons. First, the global loading of the can structure is due to pressurization, and the pressure is always increasing and will cause incipient failure when a maximum load condition is reached. Second, the weld failures observed in the can tests showed little evidence of necking, and up to maximum load the tension specimens likewise exhibit little necking. It was reasoned that defining critical failure values based on any finite element in the model reaching the hardening-curve maximum load point identified from the tension tests would result in conservative failure predictions for the can.

Therefore, the failure criteria defined at the maximum load in the tube/wall round-bar tension tests were used to signify weld material failure in the can breach predictions. Figure 12 shows the location (circled) of the maximum load condition used to obtain the critical values for EQPS and TP at each temperature from the stress-strain curves. The critical values were obtained by calibrating the MLEP model to match the load-displacement curves (engineering stress-strain curves) with a mesh-converged model of the specimen gage section and then searching for the maximum TP and EQPS values on the specimen midsection. There were replicate tension tests at each temperature, and the corresponding critical values were determined for each data curve as listed in Table 1 .
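The extraction of the critical values at the maximum-load point can be sketched as follows; the array names are hypothetical stand-ins for the calibrated gage-section FE solution histories.

```python
import numpy as np

def critical_values_at_max_load(eng_stress, tp_history, eqps_history):
    """Return (critical_TP, critical_EQPS) at the maximum engineering-stress
    (maximum load) point of a calibrated tension-test simulation.

    eng_stress, tp_history, eqps_history are same-length arrays sampled at the
    same output times from the gage-section FE model (hypothetical inputs).
    """
    i_max = int(np.argmax(eng_stress))       # maximum load condition
    return tp_history[i_max], eqps_history[i_max]

# Notional usage with placeholder histories:
eng_stress = np.array([0.0, 40.0, 62.0, 70.0, 68.0, 60.0])   # ksi
tp_hist    = np.array([0.0, 0.05, 0.15, 0.23, 0.30, 0.38])
eqps_hist  = np.array([0.0, 0.05, 0.15, 0.22, 0.29, 0.37])
print(critical_values_at_max_load(eng_stress, tp_hist, eqps_hist))  # (0.23, 0.22)
```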

Figure 12.

Maximum load locations (circled) where critical failure values of TP and EQPS are obtained (from ss 304L tube/wall material tension tests, only one curve is shown at each temperature for illustration).

TEST # TEMP (°C) Critical TP Critical EQPS
1NA 20 0.40411 0.400321
2NA 20 0.419455 0.415633
3NA 20 0.403365 0.399622
4NA 100 0.303277 0.300511
5NA 100 0.293452 0.290511
6NA 100 0.276675 0.273797
7NA 200 0.224306 0.221862
8NA 200 0.236491 0.233974
9NA 200 0.225990 0.223562
10NA 400 0.203274 0.201021
11NA 400 0.205928 0.203682
12NA 400 0.206650 0.204383
14NA 600 0.237258 0.234747
15NA 600 0.272458 0.269665
16NA 600 0.217022 0.215917
17NA 700 0.184131 0.182011
18NA 700 0.208394 0.206040
19NA 700 0.234210 0.231570
24NA 800 0.233182 0.230782
25NA 800 0.193157 0.192174
26NA 800 0.164383 0.162645

Table 1.

Max load-related failure criteria values for TP and EQPS determined from tension-test specimen model calibrated to each ss 304L tube/wall experimental load-displacement curve.

4.2 Models for can thermal-chemical-structural response and failure

The thermal-chemical-mechanical models used are briefly summarized here from [1]. The Sandia SIERRA module [18] for massively parallel thermal-fluid computations was used to model the heating of the can, its thermal response, and thermally induced chemical-kinetic decomposition of the foam [19] and the resulting gas species generation that causes pressurization. The solid mechanics and structural modeling module [12] was used to model the mechanical response of the can and failure at the weld under pressurization, high temperatures, and large temperature variations in time and space. The module uses a nonlinear quasi-static finite element approach based on a Lagrangian, three-dimensional, implicit scheme. A multilevel iterative solver enables solution of problems with large deformations, nonlinear material behavior, and contact. Temperature-dependent elasto-plastic constitutive models are accommodated, where the elastic parameters (Young’s modulus, Poisson’s ratio, and yield stress) and the stress-strain plasticity curves are temperature dependent.

The thermal-chemical simulation provides the temperature and pressure boundary conditions for the mechanical model. The only feedback from the mechanical model to the thermal-chemical model is the can’s internal volume change due to deformation. The volume change affects the pressure level in the can through the Ideal Gas Law, which is evaluated within the thermal module and then communicated to the mechanical module. The can geometry is not changed/updated in the computational heat-transfer model because the can deformation is fairly slight (lateral bulging equivalent to a few can-wall widths), so it is thought to negligibly affect the heat transfer as modeled. The heat transfer and foam decomposition submodels and parameters are also not affected by pressure in the current treatment. (The uncertainties associated with including pressure effects on these phenomena were judged larger than the error incurred by not including pressure effects, and any modeling error effects would be quantified through the validation comparisons [1, 4] that were the culmination of the PCAP assessments.)
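The pressure feedback described above amounts to an Ideal Gas Law evaluation with the deformed internal volume; a minimal sketch follows, with placeholder values for the gas inventory, temperature, and volumes rather than actual PCAP quantities.

```python
# Ideal Gas Law pressure update using the mechanical model's volume feedback.
R_UNIVERSAL = 8.314  # J/(mol*K)

def can_pressure(n_moles_gas, gas_temperature_K, internal_volume_m3):
    """p = nRT/V; n grows with foam decomposition, V with can deformation."""
    return n_moles_gas * R_UNIVERSAL * gas_temperature_K / internal_volume_m3

# Placeholder illustration: the same gas inventory at the same temperature,
# before and after a small bulge-induced volume increase.
p_undeformed = can_pressure(2.0, 900.0, 1.0e-3)
p_deformed   = can_pressure(2.0, 900.0, 1.05e-3)
print(p_undeformed / 1e6, p_deformed / 1e6)  # MPa; deformation relieves pressure slightly
```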

The thermal-chemical and mechanical models were run in a “concurrent but segregated” manner in which Sandia’s SIERRA [20] software framework for massively parallel multiphysics computations passed temperature, pressure, and volume information between the thermal-chemical simulation and the mechanical simulation. SIERRA coordinates and manages the different time-stepping of the thermal-chemical and mechanical codes and the transfer of spatial temperature fields solved on the tetrahedral thermal mesh to nodal temperature assignments to the nodes of the mechanical hex mesh.

The full 360-degree can geometry with internal foam was used for the thermal-chemical simulations, and the 90-degree pie-slice geometry without foam was used for the mechanical simulations. The full 360-degree geometry was used in the thermal-chemical simulations because at the time, the foam and enclosure radiation models did not accommodate any kind of symmetry boundary conditions. The mechanical simulations were much more computationally expensive, so a quarter-can partial geometry without foam was used to reduce cost. Leaving foam out of the mechanical model tremendously reduces the number of finite elements and thus computational cost, and is thought to have negligible impact on structural behavior and pressure-breach failure in the PCAP problem.

In the thermal model, a uniform heat flux boundary condition was applied on the lid surface. The flux level was calculated to be consistent with the temperature data from the experiment’s control thermocouples (TCs). The four control TCs were fully inserted into radially drilled holes at midplane on the lids at 0, 90, 180, and 270 degrees around the lids [2]. A proportional-integral-derivative (PID) routine [21] was used to determine the heat flux magnitude needed to match the control thermocouple temperature responses. This approach results in a more realistic temperature distribution than applying a TC-guided uniform temperature condition over the entire lid surface.
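A minimal sketch of the PID idea of driving the applied lid heat flux toward the measured control-TC temperatures is shown below; the gains, time step, and clamping are illustrative assumptions, not the routine of [21].

```python
def pid_heat_flux(t_measured, t_setpoint, state, kp=500.0, ki=5.0, kd=50.0, dt=1.0):
    """One PID update of the lid heat-flux magnitude (W/m^2) that drives the
    simulated control-thermocouple temperature toward the experimental record.

    state is a dict carrying the integral term and previous error between calls.
    Gains and time step are illustrative assumptions.
    """
    error = t_setpoint - t_measured
    state["integral"] = state.get("integral", 0.0) + error * dt
    derivative = (error - state.get("prev_error", error)) / dt
    state["prev_error"] = error
    flux = kp * error + ki * state["integral"] + kd * derivative
    return max(flux, 0.0)   # applied heat flux cannot be negative in this sketch

# Notional usage within a time-stepping loop:
state = {}
flux = pid_heat_flux(t_measured=310.0, t_setpoint=350.0, state=state)
print(flux)
```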

On the side walls and base of the can, convection and radiation boundary conditions were specified (as described in [5]) to represent the heat transfer between the can exterior and the surrounding environment.

Different element types and mesh densities are used as appropriate in the thermal and mechanical models. Code verification activities were performed for the thermal and solid/structural mechanics codes and models [1]. For the approximately 200 thermal-mechanical and mechanical-only simulations run for VVUQ and sensitivity analysis in the PCAP project, an affordable mesh size of 1.85 million hex elements for the structural model (12 elements through the thickness of the weld) and 14.3 million tet elements for the thermal model was used. This affordable ‘Level 4’ mesh was one in a succession that went up to Level 6, with approximately double the number of elements in the structural and thermal models (see [3]). The succession of meshes was used for a solution verification assessment in [1] to estimate and account for Mesh 4-related error/uncertainty in the VVUQ analysis and results in [4]. Solver tolerances were experimented with and set to contribute small error/uncertainty relative to mesh effects.

Figure 13 shows the Level 4 mesh at a critical portion of the structural model where weld failure is determined in the thermal-structural simulations. Stress concentration is evident at the crown of the weld notch. Among the many different weld-geometry representation schemes analyzed, this type of representation was found to best support weld failure predictions [1, 22].

Figure 13.

Weld-section close-up of structural-model Level 4 hex mesh used in model validation, UQ, and sensitivity analysis simulations in the PCAP project. Stress concentration is evident at the crown of the weld notch. (Figure from [3].)


5. Discrete propagation of material strength and failure variability to can breach-pressure variability predictions

The material strength uncertainty sources treated here come in the discrete (not parametric) form of multiple slightly varying stress-strain curves and failure criteria representing stochastic material strength variations in the can lid, weld, and wall materials. These curve-to-curve and failure-criteria variations, when propagated, cause predicted variability (and uncertainty thereof) in can response and failure pressure level. The 16 uncertainties not related to material strength and failure variability are all parametric in nature and are held at the nominal values listed in [4] for the purposes of the following material-curve propagations and analysis of results.

Sensitivity studies in [1, 4] of the effects of the more prominent modeling uncertainties regarding thermal, pressurization, and structural phenomena in the PCAP T-M breach problem reveal that material curve strength variations are among the most significant contributors to predicted failure-pressure variability and its uncertainty.

5.1 Dealing with temperature dependence of the material stress-strain curves

Dealing with temperature dependence of the material curves adds a significant difficulty to the discrete propagation problem. This is addressed in the following two data processing steps before propagation can be performed in Section 5.2.

Step I: material stress-strain curves strength-to-failure ranking and down-selection

In this step, the effective strength of the repeat material curves at each temperature was ranked and then down-selected to three representative curves (high, medium, and low strength) according to failure-pressure predictions from the PCAP can simulations. The curve-strength ranking process at a given temperature is much more involved when multiple materials exist than when only one material exists (which allows a simple, straightforward process [23]). This is because the strength ranking of a given set of material curves can depend on the particular combination of material curves used for the other two materials (e.g., wall strength and flexure can affect stress-strain phenomena at the weld notch). There are many such combinations because each of the two other materials has multiple material curves, so the ranking investigation should confirm that the curve ranking is robust over all, or at least a few, different test combinations of the other materials’ curves. This is addressed in the rather involved and computationally expensive ranking process summarized in the Appendix.

Step II: correlation and interpolation of stress-strain curves across temperatures

This step proceeds from a precedent in [23] and from Step I’s determinations, portrayed conceptually in Figure 14 . Three material curves of low, medium, and high effective strength exist per characterization temperature. When several material curves exist at each temperature, for UQ purposes, strength is assumed to be highly correlated across temperatures, such that a curve with higher relative strength at lower temperatures is assumed to retain higher relative strength at higher temperatures. This assumes that material weakening mechanisms and percentage weakening with increasing temperature are roughly similar whether the material is initially of higher, medium, or lower relative strength.

Figure 14.

Notional portrayal of high, medium, and low effective strength stress-strain curves at adjacent characterization temperatures.

The correlation assumption appears physically reasonable and tremendously reduces the number of potential combinations of material curves to be sampled when a material transitions temperatures. For example, there are 3 × 3 × 3 = 27 potential combinations of material curves in the figure that could be used in a simulation that transitions temperatures from 600°C to 700°C to 800°C, so investigating all these potential combinations would take 27 simulations with the expensive PCAP can model. Transitioning all seven temperatures would present 3^7 = 2187 possible combinations. This is just for one material. For the three materials in this problem, each with three material curve options per temperature, this would present 2187^3 ≈ 10^10 possible combinations. Clearly, this is unaffordable and seems wholly unnecessary given the reasonableness of the temperature-strength correlation assumption.

Hence, for each material, we link, for example, its high-strength curves across the seven characterization temperatures. We interpolate across the characterization temperatures as follows: at a temperature in between two adjacent characterization temperatures, the stress is linearly interpolated from the stress values (at the applicable input strain level) of the two stress-strain curves at the upper and lower enveloping temperatures.

For each material, this effectively gives one constructed high-strength, temperature-varying, stress-strain function. Temperature-dependent medium and low strength functions are likewise constructed. For this problem, we end up with each material having high strength (HS), medium strength (MS), and low strength (LS) temperature-dependent stress-strain functions as depicted in Figure 15 .
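A minimal sketch of evaluating one such constructed temperature-dependent strength function is shown below, assuming each strength level's curves are stored as (strain, stress) arrays keyed by characterization temperature; the curve data are placeholders.

```python
import numpy as np

def stress_at(strength_curves, temperature, strain):
    """Interpolate stress for a given strain and temperature from one material's
    temperature-keyed stress-strain curves (e.g., its high-strength curves).

    strength_curves: dict {temperature_C: (strain_array, stress_array)}.
    """
    temps = sorted(strength_curves)
    t_lo = max(t for t in temps if t <= temperature)
    t_hi = min(t for t in temps if t >= temperature)
    s_lo = np.interp(strain, *strength_curves[t_lo])   # stress at lower enveloping temperature
    s_hi = np.interp(strain, *strength_curves[t_hi])   # stress at upper enveloping temperature
    if t_hi == t_lo:
        return s_lo
    w = (temperature - t_lo) / (t_hi - t_lo)            # linear blend in temperature
    return (1.0 - w) * s_lo + w * s_hi

# Placeholder high-strength curves at two characterization temperatures (stress in ksi):
hs_curves = {
    600: (np.array([0.0, 0.1, 0.3]), np.array([25.0, 40.0, 48.0])),
    700: (np.array([0.0, 0.1, 0.3]), np.array([18.0, 30.0, 36.0])),
}
print(stress_at(hs_curves, 650.0, 0.1))   # ~35 ksi
```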

Figure 15.

Notional depiction of PCAP can materials’ high strength (HS), medium strength (MS), and low strength (LS) temperature-dependent stress-strain functions, and propagation of the material strength variability via 27 assumed equally-likely combinations of material strength functions (e.g., one combination is lid MS function/weld HS function/wall LS function).

5.2 Stress-strain function uncertainty propagation, results, and sensitivities

Given the constructed high, medium, and low strength stress-strain functions for the materials, a strategy in [1, 4] was taken to form and propagate all 27 possible combinations of stress-strain functions as conveyed in Figure 15 . The model (Mesh 4) was run with experimental heating and other conditions summarized in [5] from Test 6 in [2]. This is the reference nominal test of the five replicate tests in the PCAP validation assessment (see [4]). This yields 27 failure pressures for each of the TP and the EQPS failure criteria as depicted in the figure.

5.2.1 Sensitivity analysis

The 27 failure pressures for the TP and EQPS failure criteria are plotted in Figure 16 . The left columns of the TP and EQPS results are for the HS stress-strain function for the weld material coupled with nine different (all possible) combinations of lid and tube/wall strengths varied over their LS, MS, and HS options. Similarly, the center and right columns of results in the figure are, respectively, for MS and LS weld strength functions coupled with the nine possible combinations of lid and tube/wall strengths.

The EQPS results are on average about 460 psi, or 50%, higher than the TP failure pressures. (It was later determined that much of this difference could be explained by highly underconverged Mesh 4 results with the EQPS damage model; see [3, 4].) For both failure criteria, the individual and average failure pressures decrease as expected from column to column as the weld material strength decreases from HS to MS to LS. The decreases are somewhat greater with the EQPS failure criterion than with the TP criterion. This is reflected in the relative sizes of the interval bars labeled weld material strength variation relative effect in the figure, which mark the average decrease in failure pressure when weld strength goes from high to low. For EQPS, the interval bar has a length of 83 psi = average of the HS weld column (1468 psi) minus average of the LS weld column (1385 psi). For TP, the interval bar has a length of 55 psi = average of the HS weld column (993 psi) minus average of the LS weld column (938 psi).

Figure 16.

Predicted failure pressures and sensitivities for different combinations of lid, weld, and tube material strength functions.

For EQPS, the vertical ordering of results within a column does not change from column to column as weld strength decreases from HS to MS to LS. The ordering of TP results is also consistent across columns but is slightly different from the ordering of EQPS results, as discussed below. Both TP and EQPS results within a given column (where weld strength is fixed) are marked by a symbol shape that identifies the lid material strength (like the triangle that corresponds to a lid strength 1 = High per the legend at the bottom of the figure). The color of the symbols signifies the strength level for the tube/wall material: red, yellow, and green correspond to 1 (High), 2 (Medium or Nominal), and 3 (Low) strengths.

The predicted failure pressures in Figure 16 always show that the red instance of a given symbol corresponds to a higher failure pressure than the yellow instance, which is always higher than the green instance. Thus, for any given weld and lid material strength combination, the predicted failure pressure increases with wall strength. The interval bars labeled tube material strength variation relative effect in the figure signify a representative magnitude of increase in failure pressure when tube/wall strength rank goes from low to high (green to red for a given symbol shape). For EQPS, the plotted interval bar has a representative magnitude of 32 psi. For TP, the corresponding interval bar has a much larger tube/wall strength effect of representative magnitude of 52 psi. As expected, these tube/wall strength effects are less than the weld strength EQPS and TP average failure pressure effects of 83 psi and 55 psi.

Failure pressure orderings relative to lid material strength rankings are less intuitive. Here, EQPS and TP results within a given column (fixed weld strength) are compared across symbol shapes while holding the symbol color (tube/wall strength) fixed. A stronger lid might make the weld fail at lower pressure than a weaker lid because more bending occurs at the weld when a stiffer lid is involved. This proposition is supported by the TP ordering in which the triangle symbols (lid strength 1 = High) and the diamond symbols (lid strength 2 = Medium) are always lower than the rectangle symbols (lid strength 3 = Low). However, the proposition conflicts with the TP ordering in which the diamond symbols (lid strength 2 = Medium) are always lower than the triangle symbols (lid strength 1 = High). This apparent nonmonotonic behavior of predicted failure pressure with lid strength could be due to the nonisothermal can temperatures underlying the simulation results here, whereas isothermal cans were used to rank the lid curve strengths. This could indicate that the isothermal lid curve-strength ranking process was not fully robust.

For EQPS, results within a given column and for a given color show a monotonic ordering fully consistent with the said proposition: the rectangle symbols (lid strength 3 = Low) are always highest on the plot and then the diamond symbols next (lid strength 2 = Medium), with the triangle symbol (lid strength 1 = High) always lowest.

The sizes of the interval bars labeled lid material strength variation relative effect in the figure are indifferent to any potential curve strength ordering errors because all 27 curve combinations are used. For both TP and EQPS, the said interval bars have representative magnitudes of 25 psi each. As expected, these lid strength variation effects are smaller in magnitude than, but not insignificant relative to, the effects of weld and tube/wall material strength variations.

Thus, for both TP and EQPS failure criteria, weld material strength variations have the largest effect, as expected, but lid and wall strength variations also have significant effects.

5.2.2 Uncertainty processing and interpretation of failure pressure results

We now consider the uncertainty processing and interpretation of the failure pressure results. If dealing with multiple but few stress-strain curves for only one material, then appropriate uncertainty treatment has been established and confirmed in the series of papers and reports [24, 25, 26, 27]. The approach recognizes that the stress-strain curves are discrete realizations with no readily identifiable parametric relationship between them. Yet, the stress-strain curves come from and belong to a larger population that reflects the material’s variability. Fortunately, a mathematical description of the generating function for the larger population of stress-strain curves is not needed with the approach summarized next. The output scalar data (the predicted failure pressures) are worked with directly, rather than attempting to create a parametric or spectral generator function that is consistent with the stress-strain curve data realizations.

In analogy with Figure 15 , an application of the approach with, say, three stress-strain function curves for a single material would result in three predicted failure pressures with the EQPS or TP failure criteria. (We could also work with other scalar output responses of interest, like displacement, strain, or von Mises stress at a given point on the can and at a given time, or even spatial-temporal maxima as scalar quantities that vary with the three input stress-strain function curves.) Because only three function curves and corresponding failure pressure realizations exist, small-sample-related error will typically exist in any characterization of the aleatory uncertainty due to stochastic material strength variability. Thus, substantial small-sample epistemic uncertainty exists concerning the error in characterizing the aleatory variability.

A small number of realizations or samples will usually underpredict the true variance of material strength and related failure pressures or other responses. Mean or central response will also usually be significantly mispredicted. Potential significant nonconservative small-sample bias error can result, causing unsafe engineering design and risk analysis, even if the physics prediction model was perfect in every other way.

Statistical tolerance intervals (TIs) attempt to compensate for sparse sample data by appropriately biasing response estimates. For instance, the three failure pressure values would be processed into 95% coverage/90% confidence TIs (95/90 TIs). With reasonably high reliability, these estimate conservative, but not overly conservative, bounds on the “central” 95% of response from very sparse random samples/realizations of the input data. The central 95% of response is the range between the 2.5 and 97.5 percentiles of the true response distribution that would arise from an infinite number of samples. This central 95% range has been found to be convenient and meaningful for model validation comparisons of experimental and model-predicted aleatory response quantities (e.g., [23, 28, 29, 30]), which is also the purpose [4] of the present UQ results.

Investigations in [26, 31, 32] have concluded 95/90 TIs to be preferable to many other UQ methods tried or critically assessed for estimating, from very sparse sample data, conservative but not overly conservative bounds on the central 95% of response. The other methods tried or critically evaluated include bootstrapping [33], optimized four-parameter Johnson-family distribution fit to the response samples [34], nonparametric kernel density estimation specifically designed for sparse data [35], nonparametric cubic spline PDF fit to the data based on maximum likelihood [36], and Bayesian sparse-data approaches [37].

The TI approach is also much easier to use than the other UQ methods investigated. A 95/90 TI is constructed by simply multiplying the calculated standard deviation σ of the data samples by a factor f to create an interval of total length 2fσ. The interval is centered about the calculated mean (μ) of the samples.

The multiplying factor f is readily available from tables in statistical texts (e.g., [38, 39]), formulas (e.g., [40]), or software (e.g., [41]) that encodes the formulas. The factor is parameterized by two user-prescribed levels: one for the desired “coverage” proportion of stochastic response and one for the desired degree of statistical “confidence” in covering or bounding at least that proportion. For instance, a 95% coverage/90% confidence TI prescribes lower and upper values of a range that has at least 90% odds of spanning at least 95% of the true probability distribution from which the random samples were drawn. The stated 90% odds or confidence holds only when sampling from a Normal distribution. Reduced confidence levels for non-Normal distributions are discussed next.
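A minimal sketch of constructing a two-sided 95/90 TI from a few response samples follows. It uses a common Howe-type chi-square approximation for the tolerance factor rather than the exact tabulated values of [38, 39, 40, 41], so the computed factor can differ slightly from table values.

```python
import numpy as np
from scipy.stats import norm, chi2

def two_sided_tolerance_factor(n, coverage=0.95, confidence=0.90):
    """Approximate two-sided Normal tolerance factor f (Howe-type approximation).

    Assumption: this standard approximation stands in for the exact tabulated
    factors cited in the chapter; it is close but not identical to them.
    """
    z = norm.ppf(0.5 * (1.0 + coverage))
    chi2_q = chi2.ppf(1.0 - confidence, df=n - 1)
    return z * np.sqrt((n - 1) * (1.0 + 1.0 / n) / chi2_q)

def tolerance_interval(samples, coverage=0.95, confidence=0.90):
    """95/90 TI: sample mean +/- f * sample standard deviation."""
    samples = np.asarray(samples, dtype=float)
    f = two_sided_tolerance_factor(samples.size, coverage, confidence)
    mu, sigma = samples.mean(), samples.std(ddof=1)
    return mu - f * sigma, mu + f * sigma

# Notional usage with three failure-pressure samples (psi, placeholder values):
print(tolerance_interval([1385.0, 1430.0, 1468.0]))
```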

Although derived for Normal populations, 95/90 TIs will span the central 95% ranges of many other sparsely sampled PDF types with reasonable/useful odds or confidence. For instance, 89% of 144 PDFs (including highly skewed and multimodal, highly non-Normal distributions) studied in [25, 26, 27] had empirical confidence levels of 75% or greater with 95/90 TIs and N = 4 random samples. From studies in [26] on several representative PDFs, it is projected that 90% of the 144 PDFs would have confidence levels > 85% with 95/95 TIs and N = 4. These average or expected confidence levels decline slowly as the number of samples increases.

Although TIs often provide reliably conservative estimates, they can egregiously exaggerate the true variability when very few samples are involved. This is a downside that comes with high confidence levels of bounding the true central 95% of response.

Now, we consider the problem where the output response samples come from discrete stress-strain function variations of “multiple” materials as in the present problem. A naive approach would be to construct (e.g., 95/90) TIs from the 27 failure pressure values indicated in Figure 15 for the TP and EQPS failure criteria. However, TIs pertain to random sampling of the contributing input uncertainties, where for 27 response samples, each of the contributing source uncertainties would typically be sampled at 27 different values. Repeat values would not ordinarily occur, especially with a moderately small number of samples like 27. This is not the case here; each input stress-strain function of a given material is sampled repeatedly (nine times) in the course of propagating all possible combinations of curves. So it was decided that constructing TIs using N = 27 would not be appropriate. (This was later confirmed by studies on a linear test problem in [42].) Instead, because only nine independent realizations of input information exist in this problem (three stress-strain functions for each of three materials), it was ventured that TIs should be constructed based on an effective number of samples N = 9. Having no more-fundamental basis to proceed on at the time, this course was taken in the PCAP VVUQ project [1, 3, 4, 5, 6, 7].

Well-established sampling methods are lacking for identifying combinations of model inputs from sparse, quantized sets of choices or “levels” in the various factors (the levels are not prescribable; the few available stress-strain functions are the only “levels” available) such that propagating the relatively small number of affordable or available input combinations yields appropriate response statistics and distribution information. Subsequent to the PCAP project, investigations in [43] provided a more fundamentally grounded approach. For the present problem, it would construct and average TIs based on failure pressure results from propagating selected subsets of the 27 possible combinations of material curves, according to an analogy with Latin hypercube sampling (LHS [44]), as explained next.

5.2.2.1 Latin hypercube sampling analogue for discrete material curves, and associated TIs

Latin hypercube sampling of one or more continuous input random variables is well recognized as an efficient sampling method for Monte Carlo propagation of probabilistic uncertainty through general nonlinear response functions or models (e.g., [45, 46]). With LHS and continuous random variables, M samples of a given output variable correspond to M points in a D-dimensional space of D input random variables, where each input random variable is sampled at M different values or realizations. An analogue of this treatment exists for our discrete random-function UQ problem, as follows (a code sketch appears after the list):

  1. For each of the three materials, there are M = 3 different realizations of stress-strain functions.

  2. For each material, choose one strength level to form an input data combination, for example {weld HS, tube/wall LS, lid LS} (refer to Figure 15). This is one of the 27 possible combinations of the materials’ stress-strain functions discussed previously.

  3. Run the model with these input curves to predict a corresponding failure pressure.

  4. Repeat steps 2 and 3 two more times, each time forming a new random combination of input curves that does not reuse any curve already selected. This yields three simulations, each with a single curve from each material, where each material curve is used once and only once over the prediction set of three simulations.

  5. The three failure pressures predicted from the three simulations are used to construct a 95/90 TI based on N = 3 response samples, in analogy with the TI that would be constructed from three response samples obtained by LHS Monte Carlo with continuous random-variable inputs.
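A minimal Python sketch of steps 1–5 is given below. The names are hypothetical: predict_failure_pressure() stands in for a can-level simulation run (replaced here by an illustrative stub), and howe_tolerance_factor() is the factor routine from the earlier sketch.

```python
# Sketch of one LHS-analogue prediction set; predict_failure_pressure() is a
# hypothetical stand-in for the can-level model, and howe_tolerance_factor()
# is reused from the earlier tolerance-interval sketch.
import numpy as np

rng = np.random.default_rng(0)
curve_labels = ["H", "M", "L"]                  # three ranked curves per material

_strength = {"H": 1.0, "M": 0.0, "L": -1.0}
def predict_failure_pressure(weld, tube, lid):
    """Illustrative stub only; a real implementation would launch the FE model."""
    return 100.0 + 3.0 * _strength[weld] + 2.0 * _strength[tube] + 1.0 * _strength[lid]

def lhs_analogue_set(rng):
    """Return 3 input combinations in which each material curve is used exactly once."""
    perms = {m: rng.permutation(3) for m in ("weld", "tube", "lid")}
    return [{m: curve_labels[perms[m][k]] for m in perms} for k in range(3)]

combos = lhs_analogue_set(rng)                  # e.g. [{'weld': 'M', 'tube': 'H', 'lid': 'L'}, ...]
pressures = [predict_failure_pressure(**c) for c in combos]   # three model runs
mu, sigma = np.mean(pressures), np.std(pressures, ddof=1)
f3 = howe_tolerance_factor(3)                   # N = 3 tolerance factor
ti = (mu - f3 * sigma, mu + f3 * sigma)         # one 95/90 TI on failure pressure
```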

5.2.2.2 Averaging equally legitimate TIs to reduce chances of extreme/nonrepresentative TIs

This methodology yields a prediction set of M = 3 results from M = 3 of the 27 possible LHS-style combinations. Many such equally legitimate M = 3 prediction sets can be formed. Table 2 lists three equally legitimate sets as examples. Across any row for Set 1, 2, or 3, the high-strength (H), medium-strength (M), and low-strength (L) curves each appear once and only once for each of the materials (weld, tube/wall, and lid).

| Example 3-run LHS sets | Combination A {weld, tube/wall, lid} | Combination B {weld, tube/wall, lid} | Combination C {weld, tube/wall, lid} |
| --- | --- | --- | --- |
| Set 1 | {H,H,H} | {M,L,M} | {L,M,L} |
| Set 2 | {L,L,L} | {M,H,M} | {H,M,H} |
| Set 3 | {L,M,H} | {M,L,L} | {H,H,M} |

Table 2.

Three diverse LHS sets of material-curve combinations.

Each set in the table leads to a legitimate TI, and there is no apparent reason to favor one LHS input set and its TI over another. One could therefore weight the various TI results equally to form an “average TI”: the individual TI upper ends are averaged to get the average TI upper end, and likewise for the lower ends. The average TI may be better than any individual TI in that it has a reduced chance of being an anomalous, nonrepresentative result arising by random chance from an extreme sample set. A constraint on the averaging strategy is that the averaged TIs should ideally come from LHS sets that are diverse as a group, meaning the sets have no input combinations (and hence no output response samples) in common. Table 3 shows three individually legitimate LHS sets that are nondiverse as a group because every set contains a response sample based on the same input Combination A. (A code sketch of diverse-set generation and TI averaging follows Table 3.)

| Example 3-run LHS sets | Combination A {weld, tube/wall, lid} | Combination B {weld, tube/wall, lid} | Combination C {weld, tube/wall, lid} |
| --- | --- | --- | --- |
| Set 1 | {H,H,H} | {M,L,M} | {L,M,L} |
| Set 2 | {H,H,H} | {M,M,L} | {L,L,M} |
| Set 3 | {H,H,H} | {M,L,L} | {L,M,M} |

Table 3.

Three LHS sets of material-curve combinations that are nondiverse between sets.
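The following sketch, continuing the hypothetical functions from the previous sketch, generates diverse LHS-analogue sets by rejection (discarding any candidate set that repeats an already-used combination) and averages the resulting TI endpoints.

```python
# Diverse-set generation and TI averaging, reusing lhs_analogue_set(),
# predict_failure_pressure(), and howe_tolerance_factor() from the sketches above.
def diverse_sets(n_sets, rng, max_tries=1000):
    """Collect LHS-analogue sets that share no input combination with each other."""
    sets, used = [], set()
    for _ in range(max_tries):
        candidate = lhs_analogue_set(rng)
        keys = {tuple(sorted(c.items())) for c in candidate}
        if keys.isdisjoint(used):              # enforce diversity between sets
            sets.append(candidate)
            used |= keys
            if len(sets) == n_sets:
                return sets
    raise RuntimeError("could not assemble enough diverse sets")

lower_ends, upper_ends = [], []
f3 = howe_tolerance_factor(3)
for s in diverse_sets(3, rng):                 # 3 diverse sets -> 9 model runs total
    p = [predict_failure_pressure(**c) for c in s]
    mu, sigma = np.mean(p), np.std(p, ddof=1)
    lower_ends.append(mu - f3 * sigma)
    upper_ends.append(mu + f3 * sigma)
average_ti = (np.mean(lower_ends), np.mean(upper_ends))   # averaged 95/90 TI
```

Three diverse sets of three runs each correspond to the nine-model-run cost discussed below.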

Diverse TI averaging was exercised on a “Can Crush” solid-mechanics UQ test problem for which reference truth results were available as part of the test problem’s development [43]. The test problem had two material variability sources, each with two aleatory realizations of stress-strain curves. TI averaging improved the 95/90 TI success rate of bounding the true central 95% of response by 9 percentage points over the average success rate of individual TIs: average TIs attained a 94% success rate over a test matrix of 16 output quantities and tens of random trials per quantity, whereas individual TIs had a lesser but still reasonable success rate of 85% on average over the same 16 quantities. Similar results have been found on a second solid-mechanics test problem with a different constitutive model and structural failure mode; that work is being written up.

Note that the TI averaging method does not require more experimental data (realizations of stress-strain random functions), but it does require more model runs. For the present PCAP UQ application, one legitimate TI requires three model runs, so averaging three equally legitimate TIs requires nine model runs in total. Averaging three or four individual TIs appears to represent the knee in the cost-versus-reliability curve (the size of confidence intervals on sample means has a sharp knee at four samples). TI averaging therefore incurs roughly 3× the computational cost of producing a single TI, but for the regime of solid-mechanics UQ problems discussed in this chapter, the total computational cost would often be limited to about 10 model runs. This is a moderate cost to pay for the likely significant improvement from TI averaging. It is also small relative to the computational cost of the material-curve ranking procedure that temperature dependence requires, and it would usually be small compared to the cost of obtaining the experimental data in the first place, or obtaining more of it.

In related methodology, reference [6] presents a method for aggregating the aleatory uncertainty of response (failure pressure) from propagated discrete aleatory realizations of functional data (per the present chapter), with aleatory uncertainty from propagated parametric random variables. Ref. [7] demonstrates how to further handle, in a practical way, any epistemic parametric uncertainty that may be involved in the UQ problem.

Finally, if the model predictions are to be used to support estimation of small “tail” probabilities of response for robust/reliable design or safety/risk analysis, the sample results from the LHS sets (e.g., Table 2) would be processed in a different way. This is demonstrated in recent investigations [26, 47, 48, 49] on 16 diversely shaped distributions and tail-probability magnitudes from 10⁻⁵ to 10⁻¹, where reliably conservative and efficient estimates of small tail probabilities are obtained. Further reliability and accuracy benefits accrue from averaging multiple estimates from equally legitimate subsets of samples from the available sparse-data pool (i.e., from statistical jackknifing).

6. Conclusions

This chapter presented a practical and reasonable methodology for characterizing and propagating the effects of temperature-dependent material strength and failure-criteria variability to structural model predictions. Particularly challenging aspects of the application problem in this chapter (and often in other real applications) are the appropriate inference, representation, and propagation of temperature dependence and material stochastic variability from just a few experimental stress-strain curves at a few temperatures (as sparse discrete realizations or samples from a random field of temperature-dependent stress-strain behavior), for multiple such materials involved in the problem. Currently unique methods are demonstrated that are relatively simple and effective. The practical methodology is versatile and flexible for application to other solid-mechanics problems involving constitutive model calibration to sparse functional temperature- and/or strain-rate-dependent data, and then propagation of the incorporated uncertainty to application models and their output quantities.

Author note

Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525. This manuscript is a work of the United States Government and is not subject to copyright protection in the U.S. This manuscript describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.

Appendix: Process for stress-strain curves strength-to-failure ranking and down-selection

Table A1 lists the material tests for the bar stock (can lid and base) and tube stock (can sidewall and weld) characterized at seven temperatures (Section 2). The weld is assigned the same stress-strain data as the wall material because the wall material is weaker than the lid material and therefore provides a more conservative representation of the weld strength. Nonetheless, the following is pursued as though the weld material had its own stress-strain curve data, because this was the original plan in the project and it illustrates how three different materials would be handled.

| | 20°C | 100°C | 200°C | 400°C | 600°C | 700°C | 800°C |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Lid—1 | 2NAL | 5NAL | 8NAL | 11NAL | 14NAL | 17NAL | 20NAL |
| Lid—2 | 3NAL | 6NAL | 9NAL | 12NAL | 15NAL | 18NAL | 21NAL |
| Lid—3 | 4NAL | 7NAL | 10NAL | 13NAL | 16NAL | 19NAL | 22NAL |
| Lid—4 | 28NAL | | 26NAL | 24NAL | | | 23NAL |
| Lid—5 | | | 29NAL | 25NAL | | | |
| Weld—1 | 1NA | 4NA | 7NA | 10NA | 14NA | 17NA | 24NA |
| Weld—2 | 2NA | 5NA | 8NA | 11NA | 15NA | 18NA | 25NA |
| Weld—3 | 3NA | 6NA | 9NA | 12NA | 16NA | 19NA | 26NA |
| Tube—1 | 1NA | 4NA | 7NA | 10NA | 14NA | 17NA | 24NA |
| Tube—2 | 2NA | 5NA | 8NA | 11NA | 15NA | 18NA | 25NA |
| Tube—3 | 3NA | 6NA | 9NA | 12NA | 16NA | 19NA | 26NA |

Table A1.

List of all material curves for the lid, weld, and tube (wall) at each temperature.

For the wall material, there were three stress-strain (ss) curves at each temperature. For the lid material, there were four to five replicates at some temperatures, as shown in Table A1. These were reduced to three curves at each temperature by first determining which curve had the lowest effective strength to failure (lowest predicted failure pressure in the PCAP can simulation) and which had the highest. The medium-strength curve was then identified as the one whose computed failure pressure lay closest to the midpoint between the failure pressures of the high- and low-strength curves (a selection sketch is given below).
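A minimal sketch of this down-selection logic follows; it assumes the failure pressures have already been predicted for each replicate curve (the labels and values here are hypothetical, not project results).

```python
# Keep the weakest and strongest curves plus the one closest to their midpoint.
def downselect_three(failure_pressures):
    """failure_pressures: dict mapping curve label -> predicted failure pressure."""
    low = min(failure_pressures, key=failure_pressures.get)
    high = max(failure_pressures, key=failure_pressures.get)
    target = 0.5 * (failure_pressures[low] + failure_pressures[high])
    medium = min((c for c in failure_pressures if c not in (low, high)),
                 key=lambda c: abs(failure_pressures[c] - target))
    return low, medium, high

# Hypothetical failure pressures (psi) for five lid replicates at one temperature:
print(downselect_three({"curve-1": 410.0, "curve-2": 388.0, "curve-3": 402.0,
                        "curve-4": 395.0, "curve-5": 406.0}))
```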

Ranking and down-selection were conducted with mechanical-only isothermal simulations with Mesh 4. The effective strength of the material curves was determined according to the calculated pressure at which weld critical TP and EQPS values from Table 1 were reached. A uniform temperature condition at the relevant temperature from Table A1 and a linear pressure ramp of 63 psi/min representative of the reference can/test #6 were used in the simulations.

To down-select the lid material curves at temperatures with more than three replicates, simulations were conducted for each ss curve at 20, 200, 400, and 800°C. These simulations used the wall and weld ss curves labeled Tube—1 and Weld—1 in Table A1 at those temperatures.

Table A2 shows the final three material curves chosen for the lid. Note that the row order of the lid curves at each temperature does not necessarily coincide with the high, medium, and low strength rankings; the final ranking of the lid curves is reevaluated in the next phase of this process.

| | 20°C | 100°C | 200°C | 400°C | 600°C | 700°C | 800°C |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Lid—1 | 2NAL | 5NAL | 8NAL | 11NAL | 14NAL | 17NAL | 20NAL |
| Lid—2 | 3NAL | 6NAL | 9NAL | 12NAL | 15NAL | 18NAL | 21NAL |
| Lid—3 | 4NAL | 7NAL | 26NAL | 25NAL | 16NAL | 19NAL | 23NAL |
| Weld—1 | 1NA | 4NA | 7NA | 10NA | 14NA | 17NA | 24NA |
| Weld—2 | 2NA | 5NA | 8NA | 11NA | 15NA | 18NA | 25NA |
| Weld—3 | 3NA | 6NA | 9NA | 12NA | 16NA | 19NA | 26NA |
| Tube—1 | 1NA | 4NA | 7NA | 10NA | 14NA | 17NA | 24NA |
| Tube—2 | 2NA | 5NA | 8NA | 11NA | 15NA | 18NA | 25NA |
| Tube—3 | 3NA | 6NA | 9NA | 12NA | 16NA | 19NA | 26NA |

Table A2.

Final set of three material curves for the lid, weld, and tube (wall) at each temperature.

The next phase involved running mechanical-only simulations with various combinations of the tube (T), lid (L), and weld (W) ss curves at each temperature. The ranking process comprised six rounds of simulations, as exemplified in Table A3 for 700°C. In the first three rounds, the three replicate curves of one material at a time were ranked: the weld in Round 1, the lid in Round 2, and the tube in Round 3. An example ranking analysis for Round 1 is explained immediately after Table A3. Rounds 4 and 5 rechecked the rankings of the weld and lid curves using the medium curves of the other two materials, because the initial rounds were not necessarily performed with those medium curves. Finally, Round 6 rechecked the rankings of the tube curves at off-medium conditions, specifically with the low-strength lid and weld curves, since the original tube rankings were obtained with the medium lid and weld curves.

| Round | T | L | W | Comments |
| --- | --- | --- | --- | --- |
| 1 | 1 | 1 | 1,2,3 | Three runs to determine weld curve rankings |
| 2 | 1 | 3,2,(1) | 2 | Two runs to determine lid curve rankings using the medium weld curve (here W = 2). The L = 1 simulation was already performed in Round 1 |
| 3 | 2,3,(1) | 2 | 2 | Two runs to determine tube curve rankings using the medium weld (W = 2) and lid (L = 2) curves. The T = 1 simulation was already performed in Round 2 |
| 4 | 3 | 2 | 1,(2),3 | Two runs to recheck weld curve rankings from Round 1, using the medium tube (T = 3) and lid (L = 2) curves. The W = 2 simulation was already performed in Round 3 |
| 5 | 3 | 3,(2),1 | 2 | Two runs to recheck lid curve rankings from Round 2, using the medium tube (T = 3) and weld (W = 2) curves. The L = 2 simulation was already performed in Round 4 |
| 6 | 2,3,1 | 3 | 1 | Three runs to recheck tube curve rankings from Round 3, using the low-strength lid (L = 3) and weld (W = 1) curves |

Table A3.

700°C example of the process used to rank replicate material curves at a given temperature. Numerical indexes in columns 2–4 refer to Table A2. Left-to-right order for multiple entries in a cell is lowest to highest effective curve strength. Entries in parentheses indicate that no new simulation was needed because the result was already available from a prior round.

This process was performed for all seven temperatures. In all rechecked cases, the material curve rankings remained the same, with the one exception of the tube curves at 20°C, whose low-to-high strength ordering changed from 3-2-1 to 3-1-2.

Figure A1 shows an example of the computed spatial-maximum tearing parameter (TP) in the weld as a function of time (which is linearly related to pressure in these linear pressure-ramp simulations) at 700°C in Round 1. In this round, the Lid—1 and Tube—1 ss curves for 700°C in Table A2 were used, as indicated in the applicable row of Table A3, while the weld ss curve varied over the three identified for 700°C in Table A2. The calculated rise of the TP damage values in time is compared to the critical TP values from Table 1 (plotted horizontal lines) to indicate failure by this criterion. The first result (W1) to reach its critical TP value was designated low strength (L), the second (W2) medium strength (M), and the third (W3) high strength (H).

Figure A1.

Comparison of weld spatial-maximum TP results to corresponding critical TP values (plotted horizontal lines) at 700°C.

A similar process was used to evaluate the material curve rankings in all rounds of the ranking process. Note from Figure A1 that, at any point in time, the 700°C W2 ss curve yields the lowest calculated damage (TP value) of the three weld ss curves, so it is the “strongest” curve by that measure. However, its TP response reaches its critical value before that of the W3 curve, so W2 is not the highest in effective “strength-to-failure.” The latter measure is the one used for ranking the effective strength of the ss curves.
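A minimal sketch of this strength-to-failure ranking is given below; it assumes the TP-versus-time histories and critical TP values are available as arrays and scalars (a simplification of the actual simulation post-processing), with hypothetical names and illustrative numbers.

```python
# Rank curves by the time (equivalently, pressure) at which the spatial-maximum
# TP history first reaches its critical value; earliest crossing = lowest strength.
import numpy as np

def crossing_time(t, tp_history, tp_critical):
    """Linearly interpolated time at which tp_history first reaches tp_critical."""
    above = np.nonzero(np.asarray(tp_history) >= tp_critical)[0]
    if above.size == 0:
        return np.inf                          # critical value never reached in the run
    i = above[0]
    if i == 0:
        return t[0]
    frac = (tp_critical - tp_history[i - 1]) / (tp_history[i] - tp_history[i - 1])
    return t[i - 1] + frac * (t[i] - t[i - 1])

def rank_low_to_high(t, histories, criticals):
    """histories/criticals: dicts keyed by curve label (TP-vs-time array, critical TP).
    Returns labels ordered from weakest (earliest crossing) to strongest (latest)."""
    return sorted(histories, key=lambda c: crossing_time(t, histories[c], criticals[c]))

# Illustrative arrays only: W2 accrues damage slowest yet is not ranked highest.
t = np.linspace(0.0, 10.0, 101)
histories = {"W1": 0.30 * t, "W2": 0.20 * t, "W3": 0.28 * t}
criticals = {"W1": 1.5, "W2": 1.2, "W3": 2.2}
print(rank_low_to_high(t, histories, criticals))   # -> ['W1', 'W2', 'W3']
```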

The final curve strength rankings for the lid, weld, and tube are summarized in Table A4. Approximately 116 mechanical-only simulations were performed in the ranking process: 18 to reduce the number of lid curves to three at 20, 200, 400, and 800°C, and 14 at each of the seven temperatures to rank the curve strengths. For the lid and weld ss curves, the rankings were consistent whether a critical TP or a critical EQPS value was used to indicate failure. Some of the tube results did show differences, however, and in those cases the TP ranking was used because the tearing parameter was believed to be the most valid criterion for this application (less mesh-related error than with EQPS; see [3]). Note that even though the same three stress-strain curves per temperature are used for the can weld and walls, the curve strength rankings at a given temperature are usually different for these can parts, reflecting the dependence of effective curve strength on the particular geometry and loading conditions.

| | 20°C | 100°C | 200°C | 400°C | 600°C | 700°C | 800°C |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Lid—H | 4NAL | 7NAL | 8NAL | 25NAL | 15NAL | 17NAL | 20NAL |
| Lid—M | 3NAL | 6NAL | 26NAL | 11NAL | 14NAL | 18NAL | 23NAL |
| Lid—L | 2NAL | 5NAL | 9NAL | 12NAL | 16NAL | 19NAL | 21NAL |
| Weld—H | 2NA | 4NA | 8NA | 12NA | 15NA | 19NA | 24NA |
| Weld—M | 1NA | 5NA | 9NA | 11NA | 14NA | 18NA | 25NA |
| Weld—L | 3NA | 6NA | 7NA | 10NA | 16NA | 17NA | 26NA |
| Tube—H | 3NA | 4NA | 9NA | 12NA | 16NA | 17NA | 24NA |
| Tube—M | 1NA | 5NA | 7NA | 10NA | 14NA | 19NA | 25NA |
| Tube—L | 2NA | 6NA | 8NA | 11NA | 15NA | 18NA | 26NA |

Table A4.

Final material curve rankings for the lid, weld, and tube.

References

  1. Black A, Romero V, Breivik N, Orient G, Antoun B, Dodd A, et al. Predictive capability assessment project: Abnormal thermal-mechanical breach V&V/UQ. Sandia National Laboratories report SAND2019-13790 (Official Use Only/Export Controlled). November 2019
  2. Suo-Anttila JM, Dodd AB, Jernigan DA. Thermal mechanical exclusion region barrier breach foam experiments (800C upright and inverted 20 lb/ft3 PMDI cans). Sandia National Laboratories report SAND2012-7600 (OUO/ECI). September 2012
  3. Black A, Romero V, Breivik N, Orient G, Suo-Anttila J, Antoun B, et al. Verification, validation, and uncertainty quantification of a thermal-mechanical pressurization and breach application. In: Presentation VVS2015-8047 in the Archives of the ASME Verification & Validation Symposium. Las Vegas, NV; May 13-15, 2015
  4. Romero V, Black A, Dodd A, Orient G, Breivik N, Suo-Anttila J, et al. Real-space model validation-UQ methodology and assessment for thermal-chemical-mechanical response and weld failure in heated pressurizing canisters. ASME Journal of Verification, Validation and Uncertainty Quantification. 2019
  5. Romero V, Black A. Processing of random and systematic experimental uncertainties for real-space model validation involving stochastic systems. ASME Journal of Verification, Validation and Uncertainty Quantification
  6. Romero V. Propagating and combining aleatory uncertainties characterized by continuous random variables and sparse discrete realizations from random functions. Sandia National Laboratories document SAND2019-14642 C, 22nd Non-Deterministic Approaches Conference, AIAA SciTech. Orlando, FL; Jan 6-10, 2020
  7. Romero V, Black A. Adaptive polynomial response surfaces and level-1 probability boxes for propagating and representing aleatory and epistemic components of uncertainty in model validation. Sandia National Laboratories document in review. 2019
  8. Antoun BR. Material Characterization and Coupled Thermal-Mechanical Experiments for Pressurized, High Temperature Systems. Technical Report. Livermore, CA: Sandia National Laboratories; 2012
  9. Frost HJ, Ashby MF. Deformation-Mechanism Maps: The Plasticity and Creep of Metals and Ceramics. Oxford [Oxfordshire]: Pergamon Press; 1982
  10. Lichtenfeld JA, Mataya MC, Van Tyne CJ. Effect of strain rate on stress-strain behavior of alloy 309 and 304L austenitic stainless steel. Metallurgical and Materials Transactions A. 2006;37A:147-161
  11. Wellman GW. A simple approach to modeling ductile failure. Sandia National Laboratories report SAND2012-1343. June 2012
  12. Sierra/SM Development Team. Sierra/SM theory manual. Sandia National Laboratories report SAND2013-4615. July 2013
  13. Chen W-F, Han DJ. Plasticity for Structural Engineers. Ft. Lauderdale, FL: J. Ross Publishing; 2007
  14. Available from: http://www.mathworks.com/matlabcentral/fileexchange/13812-splinefit
  15. Wilson KM, Karlson KN, Jones R, Hoffa T. MatCal: A tool for improving the traceability and workflow for material calibration. Sandia National Laboratories document SAND2019-11952 C (Official Use Only/Export Controlled). October 2019
  16. Adams BM, Bauman LE, Bohnhoff WJ, Dalbey KR, Ebeida MS, Eddy JP, et al. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: Version 6.0 user’s manual. Sandia Technical Report SAND2014-4633. July 2014
  17. Brozzo P, Deluca B, Rendina R. A new method for the prediction of the formability limits of metal sheets. In: Proceedings of the 7th Biennial Congress of the International Deep Drawing Research Group. 1972
  18. Notz PK, Subia SR, Hopkins MM, Moffat HK, Nobel DR. Aria 1.5: User manual. Sandia National Laboratories report SAND2007-2734. April 2007
  19. Erickson KL, Dodd AB, Hogan RE. Modeling pressurization caused by thermal decomposition of highly charring foam in sealed containers. In: Proceedings of BCC 2010, Stamford, CT, 23-26 May 2010
  20. Edwards HC, Stewart JR. SIERRA: A software environment for developing complex multi-physics applications. In: Bathe KJ, editor. First MIT Conference on Computational Fluid and Solid Mechanics. Amsterdam: Elsevier; 2001. pp. 1147-1150
  21. Larsen ME, Dodd AB. Modeling and validation of the thermal response of TDI encapsulating foam as a function of initial density. Sandia National Laboratories report SAND2014-17850. September 2014
  22. Contact: Nicole Breivik, Sandia National Laboratories, Laser weld modeling methods for deformation and failure
  23. Romero V, Dempsey F, Antoun B. Application of UQ and V&V to experiments and simulations of heated pipes pressurized to failure. In: Mehta U, Eklund D, Romero V, Pearce J, Keim N, editors. Chapter 11 of Joint Army/Navy/NASA/Air Force (JANNAF) e-book: Simulation Credibility—Advances in Verification, Validation, and Uncertainty Quantification, Document NASA/TP-2016-219422 and JANNAF/GL-2016-0001. 2016
  24. Romero V, Dempsey JF, Wellman G, Antoun B. A method for projecting uncertainty from sparse samples of discrete random functions—example of multiple stress-strain curves. Paper AIAA-2012-1365, 14th AIAA Non-Deterministic Approaches Conference; April 23-26, 2012; Honolulu, HI
  25. Romero V, Dempsey JF, Schroeder B, Lewis J, Breivik N, Orient G, et al. Evaluation of a simple UQ approach to compensate for sparse stress-strain curve data in solid mechanics applications. In: 19th AIAA Non-Deterministic Approaches Conference, Paper AIAA2017-0818, AIAA SciTech 2017; Jan. 9-13, 2017; Grapevine, TX
  26. Romero V, Bonney M, Schroeder B, Weirs VG. Evaluation of a class of simple and effective uncertainty methods for sparse samples of random variables and functions. Sandia National Laboratories report SAND2017-12349. November 2017
  27. Romero V, Schroeder B, Dempsey JF, Breivik N, Orient G, Antoun B, et al. Simple effective conservative treatment of uncertainty from sparse samples of random variables and functions. ASCE-ASME Journal of Uncertainty and Risk in Engineering Systems: Part B. Mechanical Engineering. 2018;4:041006-1-041006-17. DOI: 10.1115/1.4039558
  28. Jamison R, Romero V, Stavig M, Buchheit T, Newton C. Experimental data uncertainty, calibration, and validation of a viscoelastic potential energy clock model for inorganic sealing glasses. In: Sandia National Laboratories document SAND2016-4635C, Albuquerque, NM, presented at the ASME Verification & Validation Symposium; Las Vegas, NV; May 18-20, 2016
  29. Romero V, Heaphy R, Rutherford B, Lewis JR. Uncertainty quantification and model validation for III-V SSICs in annular core research reactor shots. Sandia National Laboratories report SAND2016-11772 (Official Use Only/Export Controlled). 2016
  30. Romero V. Real-space model validation and predictor-corrector extrapolation applied to the Sandia cantilever beam end-to-end UQ problem. In: Paper AIAA-2019-1488, 21st AIAA Non-Deterministic Approaches Conference, AIAA SciTech 2019; Jan. 7-11, 2019; San Diego, CA
  31. Romero V, Mullins J, Swiler L, Urbina A. A comparison of methods for representing and aggregating experimental uncertainties involving sparse data—more results. SAE International Journal of Materials and Manufacturing. 2013;6(3):447-473. DOI: 10.4271/2013-01-0946
  32. Romero V, Swiler L, Urbina A, Mullins J. A comparison of methods for representing sparsely sampled random quantities. Sandia National Laboratories report SAND2013-4561. September 2013
  33. Bhachu KS, Haftka RT, Kim NH. Comparison of methods for calculating B-basis crack growth life using limited tests. AIAA Journal. 2016;54(4):1287-1298
  34. Zaman K, McDonald M, Rangavajhala S, Mahadevan S. Representation and propagation of both probabilistic and interval uncertainty. In: Paper AIAA-2010-2853, 12th AIAA Non-Deterministic Approaches Conference; April 12-15, 2010; Orlando, FL
  35. Pradlwarter HJ, Schuëller GI. The use of kernel densities and confidence intervals to cope with insufficient data in validation experiments. Computer Methods in Applied Mechanics and Engineering. 2008;197(29-32):2550-2560
  36. Sankararaman S, Mahadevan S. Likelihood-based representation of epistemic uncertainty due to sparse point data and/or interval data. Reliability Engineering and System Safety. 2011;96(7):814-824
  37. Sankararaman S, Mahadevan S. Distribution type uncertainty due to sparse and imprecise data. Mechanical Systems and Signal Processing. 2013;37(1):182-198
  38. Hahn GJ, Meeker WQ. Statistical Intervals—A Guide for Practitioners. New York: Wiley & Sons; 1991
  39. Montgomery DC, Runger GC. Applied Statistics and Probability for Engineers. New York: Wiley & Sons; 1994
  40. Howe WG. Two-sided tolerance limits for normal populations—Some improvements. Journal of the American Statistical Association. 1969;64:610-620
  41. Young DS. Tolerance: An R package for estimating tolerance intervals. Journal of Statistical Software. 2010;36(50):1-39
  42. Winokur J, Romero V. Optimal design of computer experiments for uncertainty quantification with sparse discrete sampling. Sandia National Laboratories document SAND2016-12608. 2016
  43. Romero V, Winokur J, Orient G, Dempsey JF. Confirmation of discrete-direct calibration and uncertainty propagation approach for multi-parameter plasticity model calibrated to sparse random field data. In: Presentation VVS2019-5172 in the Archives of the ASME Verification & Validation Symposium; May 15-17, 2019
  44. Conover WJ. On a better method for selecting values of input variables for computer codes. Unpublished 1975 manuscript recorded as Appendix A of “Latin Hypercube Sampling and the Propagation of Uncertainty Analysis of Complex Systems,” Helton JC and Davis FJ, Sandia National Laboratories report SAND2001-0417. November 2002
  45. McKay MD, Beckman RJ, Conover WJ. A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics. 1979;21(2):239-245
  46. Helton JC, Davis FJ. Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliability Engineering and System Safety. 2003;81(1):23-69
  47. Jekel C, Romero V. Bootstrapping and jackknife resampling to improve sparse-data UQ methods for tail probability estimates with limited samples. In: ASME Paper VVS2019-5127, ASME 2019 Verification and Validation Symposium; May 15-17, 2019; Las Vegas, NV
  48. Jekel C, Romero V. Improving tail probability estimation from sparse-sample UQ methods with bootstrapping and jackknifing. ASME Journal of Verification, Validation and Uncertainty Quantification. Sandia National Laboratories document SAND2019-10731 J
  49. Jekel C, Romero V. Conservative estimation of tail probabilities from limited sample data. Sandia National Laboratories report in review. 2019

Notes

  1. Confidence levels of 75% or 85% are often adequate to manage risk, especially if conservatism from other sources exists in the analysis or results—such as when several sources of uncertainty are present and each involves sparse data conservatively treated with the TI method. Studies in [32] and [42] indicate that when more than one dominant or influential uncertainty source is sparsely sampled and represented conservatively with TI confidence levels of, say, >70%, and the conservatively represented uncertainties are then combined by linear propagation or aggregation, the individual conservative biases compound to yield substantially greater than 70% confidence of a conservative bias in the combined uncertainty estimate.
