Abstract
Uncertainty propagation (UP) methods are of great importance to design optimization under uncertainty. As a well-known and rigorous probabilistic UP approach, the polynomial chaos expansion (PCE) technique has been widely studied and applied. However, comprehensive overviews of the latest advances of PCE methods are lacking, and a large gap remains between academic research and engineering application of PCE due to its high computational cost. In this chapter, the latest advances of the PCE theory and method are elaborated: the newly developed data-driven PCE method, which, unlike common PCE approaches, does not depend on complete knowledge of the input probability distributions, is introduced and improved. Meanwhile, the least angle regression technique and the trust region scenario are, respectively, extended to reduce the computational cost of data-driven PCE and to accommodate it to practical engineering design applications. In addition, comprehensive comparisons are made to explore the relative merits of the most commonly used PCE approaches in the literature to help designers choose more suitable PCE techniques in probabilistic design optimization.
Keywords
 uncertainty propagation
 probabilistic design
 polynomial chaos expansion
 data-driven
 sparse
 trust region
1. Introduction
Uncertainties are ubiquitous in engineering problems and can roughly be categorized as aleatory and epistemic [1, 2]. The former represents natural or physical randomness that cannot be controlled or reduced by designers or experimentalists, while the latter refers to reducible uncertainty resulting from a lack of data or knowledge. In systems design, all sources of uncertainty need to be propagated to assess the uncertainty of the system quantities of interest, i.e., uncertainty propagation (UP). As is well known, UP is of great importance to design under uncertainty and largely determines the efficiency of the design. Since sufficient data are generally available for aleatory uncertainties, probabilistic methods are commonly employed to compute response distribution statistics from the probability distribution specifications of the inputs [3, 4]. Conversely, for epistemic uncertainties, data are generally sparse, making probability distribution assertions questionable and typically leading to nonprobabilistic approaches, such as the fuzzy, evidence, and interval theories [5–7]. This chapter mainly focuses on propagating aleatory uncertainties to assess the uncertainty of system quantities of interest using probabilistic methods, as shown in Figure 1.
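As a minimal illustration of the probabilistic UP task described above, the sampling-based (Monte Carlo) route can be sketched in a few lines; the model and input distributions here are our own toy choices, not from the chapter:

```python
# A toy Monte Carlo uncertainty propagation: sample the aleatory inputs,
# push them through the model, and summarize the output statistics.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x1 = rng.normal(1.0, 0.1, n)        # aleatory input: normal
x2 = rng.uniform(0.9, 1.1, n)       # aleatory input: uniform
y = x1 ** 2 + np.sin(x2)            # system model (illustrative)
print(y.mean(), y.std())            # output distribution statistics
```

PCE-based methods aim to deliver these same output statistics at a small fraction of the model evaluations that such brute-force sampling requires.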
A wide variety of probabilistic UP approaches for the analysis of aleatory uncertainties have been developed [8], among which the polynomial chaos expansion (PCE) technique is a rigorous approach owing to its strong mathematical basis and its ability to produce functional representations of stochastic quantities. With PCE, a function with random inputs is represented as a stochastic metamodel, from which lower-order statistical moments as well as the reliability of the function output can be derived efficiently, facilitating design optimization under uncertainty scenarios such as robust design [9] and reliability-based design [10]. The original PCE method is intrusive in the sense that it requires extensive modifications of the existing deterministic code of the analysis model, which generally limits it to research where the specialist has full control of all model equations as well as detailed knowledge of the software. Alternatively, non-intrusive approaches have been developed that do not modify the original analysis model; they have gained increasing attention and are the focus of this chapter. As a well-known PCE approach, the generalized PCE (gPCE) method based on the Askey scheme [11, 12] has been widely applied to UP for its higher accuracy and better convergence [13, 14] compared to the classic Wiener PCE [15]. Generally, however, the random inputs do not necessarily follow the five types of probability distributions (i.e., normal, uniform, exponential, beta, and gamma) in the Askey scheme. In this case, a transformation must be made to map each random input variable to one of the five distributions, which can induce a substantially lower convergence rate and makes such nonoptimal application of Askey polynomial chaos computationally inefficient [8].
Therefore, the Gram-Schmidt PCE (GSPCE) [16] and multi-element PCE (MEPCE) [17] methods have been developed to accommodate arbitrary distributions by constructing their own orthogonal polynomials rather than referring to the Askey scheme.
All the PCE methods discussed above are constructed under the assumption that exact knowledge of the joint multivariate probability density function (PDF) of all random input variables exists. Generally, by assuming independence of the random variables, the joint PDF is factorized into the univariate PDFs of each random variable when introducing PCE in the literature. However, the random input could exist as raw data with a complicated cumulative histogram, such as a bimodal or multimodal type, for which it is often difficult to obtain an accurate analytical expression of the PDF. Under these scenarios, all the above PCE approaches become ineffective, since they all require the PDFs to be completely known. To address this issue, the data-driven PCE (DDPCE) method has been proposed [18], whose accuracy and convergence have been tested and well demonstrated for diverse statistical distributions and raw data. With this PCE method, the one-dimensional orthogonal polynomial basis is constructed directly from a set of data of the random input variables by matching a certain order of their statistical moments, rather than from the complete distributions as in the existing PCE methods, including gPCE, GSPCE, and MEPCE.
At present, great research achievements on PCE have been made in the literature and applied to practical engineering problems to save computational cost in UP. However, a large gap remains between academic study and engineering application of the PCE theory for the following reasons: (1) the complete information of the input PDFs is often unknown in engineering, which most PCE methods in the literature cannot handle; (2) the computational cost of existing PCE approaches is still very high and cannot be afforded in practical problems, especially when applied to design optimization; and (3) there is a lack of comprehensive exploration of the relative merits of the PCE approaches to help designers choose more suitable PCE techniques in design under uncertainty.
2. Data-driven polynomial chaos expansion method
Most PCE methods in the literature are constructed under the assumption that exact knowledge of the PDF of each random input variable exists. However, a random parameter could exist as raw data or numerically as a complicated cumulative histogram, such as a bimodal or multimodal type, for which it is often difficult to obtain an accurate analytical expression of the PDF. To address this issue, the data-driven PCE method (DDPCE for short in this chapter) has been proposed. DDPCE follows a general procedure similar to that of the well-known gPCE method. For gPCE, the one-dimensional orthogonal polynomial basis simply comes from the Askey scheme in Table 1 and is a function of standard random variables, while for DDPCE, the one-dimensional orthogonal polynomial basis is constructed directly from the data of the random inputs by matching a certain order of their statistical moments and is a function of the original random variables.
2.1. Procedure of data-driven PCE method

With DDPCE, the output Y of a model with the random input vector ξ = (ξ_1, …, ξ_d) is expanded as

Y(ξ) ≈ Σ_{i=0}^{P−1} c_i Φ_i(ξ),  (1)

where c_i are the unknown PCE coefficients and Φ_i are multivariate orthogonal polynomials, formed as tensor products of one-dimensional orthogonal polynomials in each random dimension:

Φ_i(ξ) = Π_{j=1}^{d} P^{(k_j^i)}(ξ_j),  (2)

where k_j^i is the order of the one-dimensional polynomial in the j-th dimension. Since the construction of the multivariate basis reduces to that of the one-dimensional polynomials, only the latter is described below for a single random input ξ. The k-th order one-dimensional polynomial is written as

P^{(k)}(ξ) = Σ_{i=0}^{k} p_i^{(k)} ξ^i,  (3)

where p_i^{(k)} are its unknown coefficients. It is assumed that all the coefficients are normalized such that the leading coefficient satisfies p_k^{(k)} = 1. Orthogonality requires P^{(k)} to be orthogonal to all monomials of lower order:

E[ξ^j P^{(k)}(ξ)] = 0, j = 0, 1, …, k − 1.  (4)

In the same way as above, introducing the raw moments of the random input,

μ_i = E[ξ^i],  (5)

one has the conditions in Eq. (4) rewritten as

Σ_{i=0}^{k} p_i^{(k)} μ_{i+j} = 0, j = 0, 1, …, k − 1.  (6)

There are totally k equations in Eq. (6) for the k + 1 unknown coefficients; together with the normalization p_k^{(k)} = 1, they form the linear system

μ_0 p_0^{(k)} + μ_1 p_1^{(k)} + … + μ_k p_k^{(k)} = 0
μ_1 p_0^{(k)} + μ_2 p_1^{(k)} + … + μ_{k+1} p_k^{(k)} = 0
⋮
μ_{k−1} p_0^{(k)} + μ_k p_1^{(k)} + … + μ_{2k−1} p_k^{(k)} = 0
p_k^{(k)} = 1.  (7)

It is observed that the coefficient matrix of Eq. (7) only involves the raw moments μ_0, …, μ_{2k−1}, which in DDPCE are estimated directly from the available data of the random input rather than from an assumed PDF. Clearly, to obtain a p-th order PCE model, one-dimensional orthogonal polynomials up to order p are needed in each dimension, which only requires the raw moments of each random input up to order 2p − 1.
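As an illustration of this moment-matching construction, the following minimal Python sketch (our own illustrative code, not the chapter's; all function names are assumptions) builds the coefficients of a one-dimensional orthogonal polynomial directly from raw data:

```python
import numpy as np

def raw_moments(data, order):
    """Empirical raw moments mu_0 .. mu_order of the data."""
    return np.array([np.mean(data ** j) for j in range(order + 1)])

def orthogonal_poly_coeffs(data, k):
    """Coefficients p_0 .. p_k of the k-th order polynomial orthogonal to all
    lower-order monomials w.r.t. the empirical measure of the data,
    with the leading coefficient normalized to p_k = 1."""
    mu = raw_moments(data, 2 * k - 1)
    A = np.zeros((k + 1, k + 1))
    b = np.zeros(k + 1)
    for j in range(k):                 # orthogonality: sum_i p_i * mu_{i+j} = 0
        A[j, :] = mu[j:j + k + 1]
    A[k, k] = 1.0                      # normalization row: p_k = 1
    b[k] = 1.0
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
sample = rng.standard_normal(200_000)
p = orthogonal_poly_coeffs(sample, 2)
```

For standard-normal data the degree-2 result approaches the monic Hermite polynomial x^2 − 1, as the Askey scheme predicts, even though the construction never sees the PDF itself.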
2.2. Extension of Galerkin projection to DDPCE
In the existing work on DDPCE, only the regression method is employed to calculate the PCE coefficients. In the authors' experience, the measurement matrix in the regression may become ill-conditioned for higher-dimensional problems, since the number of sample points required for regression, often set as two times the number of PCE coefficients, grows rapidly with the dimension. In this section, the Galerkin projection method is therefore extended to DDPCE as an alternative way to compute the coefficients.
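The regression route can be illustrated with a toy one-dimensional sketch (our own example; the basis and the two-times sample-size rule follow the description above):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D response and a 2nd order Hermite-type basis {1, x, x^2 - 1}.
def model(x):
    return 1.0 + 2.0 * x + 0.5 * (x ** 2 - 1.0)

basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x ** 2 - 1.0]

# Rule of thumb from the text: about two times as many samples as coefficients.
x = rng.standard_normal(2 * len(basis))
Psi = np.column_stack([phi(x) for phi in basis])       # measurement matrix
coeff, *_ = np.linalg.lstsq(Psi, model(x), rcond=None)
print(np.linalg.cond(Psi))   # conditioning of Psi governs numerical robustness
```

When the response lies in the span of the basis, least squares recovers the coefficients exactly; for high dimensions, however, the condition number of the measurement matrix can grow large, which motivates the projection alternative below.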
With the projection method, the Galerkin projection is conducted on each side of Eq. (1):

⟨Y(ξ), Φ_j(ξ)⟩ = Σ_{i=0}^{P−1} c_i ⟨Φ_i(ξ), Φ_j(ξ)⟩,
where ⟨·⟩ represents the operation of inner product, defined as

⟨f(ξ), g(ξ)⟩ = ∫ f(ξ) g(ξ) H(ξ) dξ,

where H(ξ) is the joint PDF of the random inputs.
Based on the orthogonality property of orthogonal polynomials, the PCE coefficient can be calculated as

c_i = ⟨Y(ξ), Φ_i(ξ)⟩ / ⟨Φ_i(ξ), Φ_i(ξ)⟩.  (11)
Similar to gPCE, the key point is the computation of the numerator in Eq. (11), which can be expressed as

⟨Y(ξ), Φ_i(ξ)⟩ = ∫ Y(ξ) Φ_i(ξ) H(ξ) dξ ≈ Σ_q w_q Y(ξ^{(q)}) Φ_i(ξ^{(q)}),  (12)

where ξ^{(q)} and w_q are multidimensional quadrature nodes and weights.
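A hedged sketch of this projection step for a normally distributed input: the inner products in Eq. (11) are evaluated here with NumPy's probabilists' Gauss-Hermite rule; the toy model and the basis size are our own choices:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

nodes, weights = He.hermegauss(5)        # probabilists' Gauss-Hermite rule
weights = weights / weights.sum()        # normalize to the standard normal PDF

def model(x):
    return x ** 2 + x                    # toy response Y(xi)

phi = [He.HermiteE.basis(i) for i in range(3)]   # He_0, He_1, He_2

# c_i = <Y, Phi_i> / <Phi_i, Phi_i>, both inner products by quadrature.
coeffs = [np.sum(weights * model(nodes) * p(nodes))
          / np.sum(weights * p(nodes) ** 2) for p in phi]
```

Since x^2 + x = He_2 + He_1 + He_0, all three recovered coefficients equal 1; a 5-node rule is exact here because the integrands are polynomials of degree at most 4.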
The Gaussian quadrature technique, such as full factorial numerical integration (FFNI) and sparse grid numerical integration, has been widely used to calculate this numerator in the existing gPCE approaches. There, the one-dimensional Gaussian quadrature nodes and weights are derived directly by applying scaling factors to the nodes and weights of the existing Gaussian quadrature formulae, and the tensor product is then employed to obtain the multidimensional nodes. For some common types of probability distributions, for example, the normal, uniform, and exponential distributions, the PDFs have the same form as the weighting functions of the Gauss-Hermite, Gauss-Legendre, and Gauss-Laguerre quadrature formulae, respectively. Therefore, the corresponding nodes and weights can be obtained directly from these classical quadrature rules (see Table 1).
Table 1. Random variable types and the corresponding orthogonal polynomials.

Distribution type   PDF (up to normalization)   Polynomials            Weight function        Interval
Normal              e^{−x²/2}                   Hermite                e^{−x²/2}              (−∞, +∞)
Uniform             1/2                         Legendre               1                      [−1, 1]
Beta                (1−x)^α (1+x)^β             Jacobi                 (1−x)^α (1+x)^β        [−1, 1]
Exponential         e^{−x}                      Laguerre               e^{−x}                 [0, +∞)
Gamma               x^α e^{−x}                  Generalized Laguerre   x^α e^{−x}             [0, +∞)
However, the distributions of the random inputs may not follow the Askey scheme, may be nontrivial, or may exist only as raw data with a cumulative histogram of complicated shape. In such cases, the above way of deriving the nodes and weights is not applicable. In this work, a simple method is proposed to obtain the one-dimensional quadrature nodes and weights based on the moment-matching equations

Σ_{q=1}^{m} w_q (ξ^{(q)})^j = μ_j, j = 0, 1, …, 2m − 1,  (13)

where ξ^{(q)} and w_q (q = 1, …, m) are the one-dimensional quadrature nodes and weights and μ_j are the raw moments of the random input estimated from the data. However, Eq. (13) is a set of multivariate nonlinear equations, which are difficult to solve when the number of equations is large. To simplify the computation, the nodes can be taken as the roots of the m-th order one-dimensional orthogonal polynomial constructed in Section 2.1, after which Eq. (13) reduces to a linear system in the weights that is easy to solve.

In the same way, the nodes and weights in the other dimensions are obtained conveniently. Then, the numerator can be calculated by the full factorial numerical integration (FFNI) method [8] for lower-dimensional problems, in which the multidimensional nodes and weights are formed by the tensor product of the one-dimensional ones. Generally, the accuracy of FFNI is high, but its cost grows exponentially with the dimension: for the FFNI-based method, if m nodes are used in each dimension, the total number of nodes is m^d for a d-dimensional problem. For higher-dimensional problems, the sparse grid numerical integration method can be employed instead to alleviate this curse of dimensionality.
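The simplification just described can be sketched as follows (illustrative code under the stated assumption that the nodes are the roots of the moment-matched orthogonal polynomial of Section 2.1):

```python
import numpy as np

def gauss_rule_from_moments(mu, m):
    """m-point quadrature rule from raw moments mu_0 .. mu_{2m-1}:
    nodes are the roots of the m-th order orthogonal polynomial; the weights
    then follow from the linear system sum_q w_q x_q^j = mu_j, j = 0..m-1."""
    A = np.array([mu[j:j + m + 1] for j in range(m)] + [[0.0] * m + [1.0]])
    b = np.zeros(m + 1)
    b[m] = 1.0
    p = np.linalg.solve(A, b)                    # monic polynomial coefficients
    nodes = np.roots(p[::-1])                    # np.roots wants highest degree first
    V = np.vander(nodes, m, increasing=True).T   # V[j, q] = x_q^j
    weights = np.linalg.solve(V, mu[:m])
    order = np.argsort(nodes)
    return nodes[order], weights[order]

# Standard normal moments mu_0..mu_3 reproduce the 2-point rule: nodes -1, +1, weights 1/2.
mu = np.array([1.0, 0.0, 1.0, 0.0])
nodes, weights = gauss_rule_from_moments(mu, 2)
```

Once the nonlinear system is split this way, only two small linear solves and one polynomial root-finding call remain per dimension.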
In this chapter, we focus on extending the Galerkin projection to the DDPCE method to address higher-dimensional UP problems, and then on exploring the relative merits of these PCE approaches. For the case with only small data sets, both DDPCE and the existing distribution-based methods (e.g., gPCE and GSPCE) may produce large errors, since the statistical moments estimated or the distributions fitted from limited data are inaccurate.
2.3. Comparative study of various PCE methods
In this section, the enhanced DDPCE method, the recognized gPCE method, and the GSPCE method that can address arbitrary random distributions are applied to uncertainty propagation to calculate the first four statistical moments (mean, standard deviation, skewness, and kurtosis) of the output responses, with the results of Monte Carlo simulation (MCS) taken as the reference.
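Once the PCE coefficients are available, the lower-order moments follow directly from the orthogonality of the basis; a small sketch for a probabilists' Hermite basis (for which the squared norms are i!; the coefficients below are from a toy model, not from the test cases):

```python
import math
import numpy as np

# c_0..c_2 for the toy response x^2 + x in the probabilists' Hermite basis.
coeffs = np.array([1.0, 1.0, 1.0])
norms = np.array([math.factorial(i) for i in range(3)])  # E[He_i^2] = i!

mean = coeffs[0]                                  # mean is the 0th coefficient
variance = np.sum(coeffs[1:] ** 2 * norms[1:])    # sum of c_i^2 * ||Phi_i||^2
print(mean, variance)
```

This is why PCE-based UP is cheap: after the expansion is built, no further model evaluations are needed to obtain the output statistics.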
The PCE order is set as
In Case 1, all the random input distributions are known and belong to the Askey scheme. The test results are shown in Tables 4–7, where the bold underlined numbers are the relatively best results among the compared methods.
[Tables 4–7. Case 1 results: relative errors of the first four statistical moments for DDPCE, gPCE, and GSPCE against MCS, the corresponding response estimates with MCS confidence intervals, and the numbers of model evaluations (10^7 for MCS; 125, 125, 1820, and 10,626 for the PCE methods in the four tables, respectively).]
In Case 2, all the random input distributions are known but do not belong to the Askey scheme. In this case, the Rosenblatt transformation is first employed for the gPCE method, whereas DDPCE and GSPCE can be used directly. The results are shown in Tables 8–11. It is observed that overall DDPCE and GSPCE perform better than gPCE, yielding results close to those of MCS, because the transformation in gPCE induces additional error. Specifically, in Tables 9 and 10, the gPCE method causes relatively large errors due to the transformation; its errors (the shadowed numbers in the tables) are clearly larger than those of DDPCE and GSPCE.
[Tables 8–11. Case 2 results: relative errors of the first four statistical moments for DDPCE, gPCE, and GSPCE against MCS, the corresponding response estimates with MCS confidence intervals, and the numbers of model evaluations (10^7 for MCS; 125, 125, 1820, and 10,626 for the PCE methods in the four tables, respectively).]
In Case 3, the PDFs of some variables are bounded (BD) in an interval, and the rest of the variables follow typical distributions. In this case, the Rosenblatt transformation is also first employed for the gPCE method.
From the results in Tables 12–15, it is found that generally large errors are induced by gPCE, especially the shadowed numbers in the tables. Since the first two variables follow distributions bounded in an interval, the error induced by the transformation in gPCE is large.
[Tables 12–15. Case 3 results: relative errors of the first four statistical moments for DDPCE, gPCE, and GSPCE against MCS, the corresponding response estimates with MCS confidence intervals, and the numbers of model evaluations (10^7 for MCS; 125, 125, 1820, and 10,626 for the PCE methods in the four tables, respectively).]
In Case 4, the distributions of the random input variables are unknown and only some data exist. Although an analytical PDF can be fitted to the data through empirical distribution systems, such as the Johnson or Pearson system [8], if the distribution of the data is very complicated, such as with a bimodal or multimodal cumulative histogram, it is often very difficult to obtain the analytical PDF accurately. It is well known that the Pearson system, which is based on the first four statistical moments of the random variable, can produce large errors for bimodal (BM) or multimodal PDFs. Evidently, the existing PCE approaches, including gPCE and GSPCE, may produce large errors in this case, since they all depend on the exact PDFs of the random inputs, whereas DDPCE can still work since it is a data-driven approach. To explore the effectiveness and advantage of DDPCE over the other two approaches, it is assumed that the input data for some random input variables have the complicated bimodal histogram shown in Figure 3, while the data for the rest come from typical distributions. For convenience and effectiveness of the test, all the input data are generated from known PDFs, of which the PDF of the BM distribution is given in Eq. (17); it should be pointed out that in practice these PDFs are unknown and only the data exist.
Both small (500) and large (10^7) numbers of input data points are tested to investigate the impact of the amount of data on the accuracy of UP. The results are shown in Tables 16–19, from which it is noticed that the results of DDPCE are generally very close to those of MCS when the number of sample points of the random input variables is large (10^7), while the errors are much larger when only 500 sample points are used. That is, the accuracy of DDPCE improves as the number of sample points increases, simply because the statistical moments of the random input variables are then estimated more accurately, which undoubtedly increases the accuracy of UP. This observation agrees well with what has been reported in the work of Oladyshkin and Nowak. Similar to Case 3, the estimated results of gPCE and GSPCE would be unreliable here, since the exact PDFs they require are unavailable.
[Tables 16–19. Case 4 results: relative errors of the first four statistical moments and response estimates for DDPCE built from 10^7 and from 500 input data points, compared against MCS with 10^7 samples; the PCE model evaluations are 125, 125, 1820, and 10,626 in the four tables, respectively.]
To study the convergence property of the enhanced DDPCE method, the errors of the estimated statistical moments are examined as the PCE order increases, and the accuracy of DDPCE is observed to improve steadily with the order.
2.4. Summary
Overall, the three approaches produce comparably good results when the random inputs follow the Askey scheme; among them, gPCE is the most mature and the most convenient to implement, since there is no need to construct the orthogonal polynomials. When the PDFs of the random inputs are known but do not follow the Askey scheme, large errors would be induced by the transformation for gPCE, and the other two PCE methods are comparable in accuracy and implementation complexity. It should also be pointed out that for DDPCE, when constructing the one-dimensional polynomials, the statistical moments (often up to order 10) must be calculated first; if a large gap exists between the high-order and low-order moments, matrix singularity may occur in solving the linear equations (Eq. (7)). In such cases, GSPCE is preferable, especially when the function is highly nonlinear. When the PDF is unknown and cannot be obtained accurately, such as when the random inputs exist as raw data with a complicated cumulative histogram, only the DDPCE method can still perform well, since it is data-driven rather than probability-distribution-driven, while large errors would be produced if GSPCE or gPCE were employed. However, more effort should be made to resolve the numerical problems of DDPCE in constructing the one-dimensional orthogonal polynomials, to make it more robust and applicable.
3. A sparse data-driven PCE method
The number of truncated polynomial terms in the full PCE model increases rapidly with the dimension n of the random inputs and the PCE order p; the total number of terms is P = (n + p)!/(n! p!). Accordingly, the number of sample points required to compute the PCE coefficients, and hence the computational cost, grows quickly and becomes unaffordable for high-dimensional problems with expensive simulation models.
Although the computational cost and accuracy both depend on the PCE order, how to determine a suitable order that compromises between accuracy and efficiency is not within the scope of this chapter; PCE orders of 3–5 are adopted in the test cases below.
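For reference, the term count can be checked directly; the values below reproduce the basis sizes that appear in the result tables of this section (e.g., 286, 1001, and 3003 terms for n = 10 with p = 3, 4, 5):

```python
from math import comb

def n_pce_terms(n, p):
    """Number of terms in a full PCE of dimension n and order p: (n+p)!/(n! p!)."""
    return comb(n + p, p)

print(n_pce_terms(10, 3), n_pce_terms(10, 4), n_pce_terms(10, 5))
```

The factorial growth in both n and p is what makes the full PCE structure, and the sample sets sized proportionally to it, unaffordable in high dimensions.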
3.1. Procedure of sparse data-driven PCE method
A step-by-step description of the proposed method is given below, with a flowchart in Figure 8. First, the raw data of each random input are standardized (shifted and scaled by their sample statistics), which gives all the standardized data, and the one-dimensional orthogonal polynomials are constructed from the standardized data by moment matching as described in Section 2.1. An initial set of sample points is then selected, and the significant polynomial terms are identified from the full PCE structure by the least angle regression technique, yielding a candidate sparse PCE model whose coefficients are computed by regression. Once the PCE coefficients are calculated, the predicted value of the candidate sparse PCE model at each sample point is compared with the true model response; to evaluate the accuracy more effectively, the relative error between the predictions and the true responses is employed. If the accuracy meets the prescribed target, the current sparse PCE model is accepted and used for UP. If not, additional sample points are selected sequentially from the database, the significant terms are re-identified, and the coefficients are updated. If the target accuracy is still not reached, this process is repeated until it is.
In this work, if the PDF of random input is known, a large number of sample points are generated as the database according to the PDF beforehand; if the PDF of random input is unknown, the raw data are considered as the database. Each sample point in the database has its own index. The initial sample points are selected from the database through randomly and uniformly generating their indices. Then these sample points will be removed from the database and the rest will be indexed again. Similarly, by randomly and uniformly generating the indices, the sequential sample points will be selected from the reduced database. By using this sampling strategy, the sample points are distributed uniformly as far as possible, which is helpful to improve the accuracy of the PCE coefficient calculation.
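The term-selection step can be illustrated with a simplified greedy scheme; note that this is a stand-in for least angle regression (a correlation-based forward selection, not full LAR), with illustrative names throughout:

```python
import numpy as np

def select_terms(Psi, y, n_terms):
    """Greedily add the basis column most correlated with the current residual,
    refitting by least squares after each addition (a simplified LAR stand-in)."""
    active = []
    residual = y.astype(float).copy()
    for _ in range(n_terms):
        corr = np.abs(Psi.T @ residual)
        corr[active] = -np.inf                    # skip already selected terms
        active.append(int(np.argmax(corr)))
        coeff, *_ = np.linalg.lstsq(Psi[:, active], y, rcond=None)
        residual = y - Psi[:, active] @ coeff
    return sorted(active)

rng = np.random.default_rng(2)
x = rng.standard_normal(2000)
# Full basis {1, x, x^2 - 1, x^3 - 3x}; the true response uses only two terms.
Psi = np.column_stack([np.ones_like(x), x, x ** 2 - 1.0, x ** 3 - 3.0 * x])
y = 3.0 * x + 0.5 * (x ** 3 - 3.0 * x)
print(select_terms(Psi, y, 2))   # the two significant terms are identified
```

Only the retained columns enter the subsequent regression, so the number of coefficients, and with it the required sample size, shrinks from the full basis to the sparse one.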
3.2. Comparative study
In this section, the proposed sparse DDPCE method (shortened as sDDPCE hereafter) is applied to three mathematical examples to calculate the mean and variance of the output responses. The full DDPCE method (shortened as fDDPCE hereafter), which adopts the full PCE structure and one-stage sampling with the sample size set equal to the number of PCE coefficients, is also applied to UP, and its results are compared with those of sDDPCE to demonstrate the latter's effectiveness and advantage.
The test examples of varying dimensions, including their input information, are shown in Table 20, in which the symbols denote the distribution types of the random inputs.
Another type of nontrivial distribution considered here is constructed by squaring sample points drawn from some common distributions (see Case 3 in Function 2). The target accuracy of the adaptive procedure is prescribed beforehand.
The results are listed in Tables 21–23, in which 'Na' means the corresponding fDDPCE result is not available owing to the unaffordable number of sample points required.

[Table 21. Errors of the mean and variance and numbers of sample points for fDDPCE and sDDPCE with PCE orders p = 3, 4, 5 on the two-dimensional function; some entries lost.]
Methods             fDDPCE                      sDDPCE
Error of mean       6.162    Na       Na       8.803    7.263    2.402
Error of variance   10.182   Na       Na       16.670   5.026    8.882
PCE order p         3        4        5        3        4        5
Sample size N       56       126      252      20       20       30
Methods             fDDPCE                      sDDPCE
Error of mean       Na       Na       Na       0.045    0.739    0.239
Error of variance   Na       Na       Na       18.134   12.882   2.479
PCE order p         3        4        5        3        4        5
Sample size N       286      1001     3003     30       30       30
From the results, some noteworthy observations are made. First, generally with a high PCE order (e.g., p = 5), fDDPCE becomes unaffordable for the higher-dimensional functions (the 'Na' entries in the tables), whereas sDDPCE still attains high accuracy with a much smaller number of sample points; the advantage of the sparse structure grows with the dimension of the random inputs.
In Case 2, the PDFs of all the random inputs are known and assumed to follow common distributions. This is a general case that can also be solved by the traditional probability-distribution-based PCE methods. The results are shown in Tables 24–26. Generally, with a high PCE order (e.g., p = 5), sDDPCE achieves accuracy comparable to or better than that of fDDPCE while requiring far fewer sample points, consistent with the observations in Case 1.

[Table 24. Errors of the mean and variance and numbers of sample points for fDDPCE and sDDPCE with PCE orders p = 3, 4, 5 on the two-dimensional function; some entries lost.]
Methods             fDDPCE                      sDDPCE
Error of mean       24.401   Na       Na       1.244    0.490    0.216
Error of variance   39.578   Na       Na       4.380    3.271    2.837
PCE order p         3        4        5        3        4        5
Sample size N       56       126      252      20       20       30
Methods             fDDPCE                      sDDPCE
Error of mean       Na       Na       Na       3.461    4.432    0.317
Error of variance   Na       Na       Na       20.155   6.217    4.223
PCE order p         3        4        5        3        4        5
Sample size N       286      1001     3003     30       30       30
In Case 3, the PDFs of all the random inputs are known; however, some of them follow nontrivial distributions. In this case, the traditional gPCE method cannot work well, since large errors would be induced in transforming such nontrivial distributions to ones in the Askey scheme. The results are shown in Tables 27–29, which agree well with what has been observed in Cases 1 and 2: the proposed sDDPCE method can significantly reduce the number of sample points while maintaining high accuracy, and the higher the dimension, the more advantageous the adaptive sparse structure of sDDPCE becomes. In this case, only 11 polynomial terms are selected from the 3003 total terms for the 10-dimensional function with p = 5.
Methods             fDDPCE                      sDDPCE
Error of mean       1.210    0.854    0.302    1.366    1.044    0.161
Error of variance   2.321    0.748    0.815    0.805    0.161    0.000
PCE order p         3        4        5        3        4        5
Sample size N       10       15       21       10       10       10
Methods             fDDPCE                      sDDPCE
Error of mean       3.324    Na       Na       5.718    1.383    0.680
Error of variance   7.855    Na       Na       7.634    7.322    2.290
PCE order p         3        4        5        3        4        5
Sample size N       56       126      252      20       30       30
Methods             fDDPCE                      sDDPCE
Error of mean       Na       Na       Na       4.114    2.212    0.112
Error of variance   Na       Na       Na       48.894   15.817   3.101
PCE order p         3        4        5        3        4        5
Sample size N       286      1001     3003     30       30       30
To verify the conjecture that, for low-dimensional problems with low-order PCE models, fDDPCE may produce more accurate results than sDDPCE since it retains more information, another test is conducted for Function 1 with a lower order of p = 2; the results are listed in Table 30.
                    Case 1               Case 2               Case 3
Methods             fDD      sDD         fDD      sDD         fDD      sDD
Error of mean       0.2801   0.146       0.0366   0.244       0.414    0.807
Error of variance   0.6344   0.367       0.3577   0.431       0.552    0.477
Sample size N       6        7           6        10          6        18
3.3. Summary
The developed sDDPCE method can reduce the number of polynomial terms in the PCE model and thus the computational cost. Generally, the larger the dimension of the random inputs, the more obvious the advantage of sDDPCE over fDDPCE in efficiency; sDDPCE is much more efficient than fDDPCE for high-dimensional problems, especially those requiring a high-order PCE model.
4. Sparse DDPCE-based robust optimization using trust region
In Section 3, to reduce the computational cost of DDPCE, a sparse DDPCE method was developed by removing insignificant polynomial terms from the full PCE model, thus decreasing the number of samples needed for the regression that computes the PCE coefficients. However, when sparse DDPCE is applied to robust optimization, the process is conventionally triple-loop (see Figure 9): the inner loop identifies the insignificant polynomial terms of the PCE model (the dashed box), the middle loop performs UP, and the outer loop searches for the optimum. This is clearly still very time-consuming for problems with expensive simulation models.
As mentioned in Section 3, during each optimization iteration, although the sample points required for the regression during UP of sDDPCE are greatly reduced, a certain additional number of sample points is required by the inner loop to identify the insignificant polynomial terms. If almost the same sparse polynomial terms are retained at successive iteration design points, the inner loop can clearly be avoided, saving computational cost. To address this issue, the trust region technique widely used in nonlinear optimization is extended in this section. During optimization, a trust region is dynamically defined: if the updated design point lies in the current trust region, the insignificant terms of its PCE model are considered unchanged compared to those of the last design point, i.e., the inner loop is eliminated at the updated design point. Meanwhile, to further save computational cost, the sample points lying in the overlapping area of two adjacent sampling regions are reused in the PCE coefficient regression at the updated design point. The proposed robust optimization procedure employing sparse DDPCE in conjunction with the trust region scenario is applied to several robust optimization examples, and the results are compared to those obtained without the trust region method to demonstrate its effectiveness and advantage.
4.1. The trust region scenario
The trust region method is a traditional approach that has been widely used in nonlinear numerical optimization [28]. Its basic idea is that, within the trust region of the current iteration design point, a second-order Taylor expansion is used to approximate the original objective function; if the accuracy of the current second-order Taylor expansion is satisfactory, the size of the trust region is increased to speed up convergence, and if not, it is reduced to improve the accuracy of the approximation. To reduce the computational cost of design optimization, the idea of the trust region technique has been extended and applied to reliability-based wing design optimization [29], multifidelity wing aerostructural optimization [30], and multifidelity surrogate-based wing optimization [31], and it is widely regarded as an efficient strategy in design optimization. For example, when the trust region technique is applied to metamodel-based design optimization, the sample points are generated sequentially in the trust region during optimization, and the radius of the trust region is dynamically adjusted based on the accuracy of the metamodel in the local region.
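The classic radius-update rule described above can be sketched as follows (a textbook scheme with commonly used thresholds, not necessarily the exact parameters used in this chapter):

```python
# Classic trust-region radius update: grow the region when the quadratic model
# predicts the objective reduction well, shrink it otherwise.
def update_radius(radius, actual_reduction, predicted_reduction,
                  shrink=0.5, grow=2.0, low=0.25, high=0.75):
    rho = actual_reduction / predicted_reduction   # model quality ratio
    if rho < low:
        return shrink * radius      # poor approximation: shrink the region
    if rho > high:
        return grow * radius        # good approximation: expand to speed up
    return radius                   # otherwise keep the current radius

print(update_radius(1.0, 0.1, 1.0))
print(update_radius(1.0, 0.9, 1.0))
```

The same accept/shrink/grow logic carries over when the quadratic model is replaced by a metamodel or, as below, by a sparse PCE surrogate.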
4.2. Robust design using sparse datadriven PCE and trust region
The scenario of trust region is extended here to reduce the computational cost of sDDPCE-based robust optimization. The basic idea is that the radius of the trust region is determined by the distance between two successive design points and the variation of the corresponding objective function values. If the updated design point lies within the trust region of the previous design point, the significant polynomial terms identified at the previous point are inherited directly and only the PCE coefficients are recomputed; otherwise, the significant terms are re-identified by the inner loop.
The above procedure continues until the convergence criterion is satisfied. Figure 10 shows the case in which sample points from the previous optimization iteration are reused in two successive iterations: two points are located in the overlapping area of the two successive sampling regions and are thus reused in the next iteration for the regression that identifies the significant polynomials and calculates the PCE coefficients. In this way, the computational cost is further reduced.
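The reuse logic of this subsection might be sketched as below (hypothetical helper names; hyper-spherical trust and sampling regions are assumed for simplicity):

```python
import numpy as np

def inside_trust_region(x_new, x_old, radius):
    """Step test: if True, the previously identified sparse terms are kept."""
    return float(np.linalg.norm(np.asarray(x_new) - np.asarray(x_old))) <= radius

def reusable_samples(samples, x_new, sampling_radius):
    """Previous-iteration samples that also fall in the new sampling region."""
    d = np.linalg.norm(samples - np.asarray(x_new), axis=1)
    return samples[d <= sampling_radius]

x_old = np.array([0.0, 0.0])
x_new = np.array([0.3, 0.0])
keep_terms = inside_trust_region(x_new, x_old, radius=0.5)     # skip inner loop
old_samples = np.array([[0.1, 0.0], [0.9, 0.0], [0.4, 0.1]])
reused = reusable_samples(old_samples, x_new, sampling_radius=0.5)
```

Every reused sample is one model evaluation saved, which is where the reduction in function calls reported in the examples below comes from.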
4.3. Comparative studies
The first example is the Ackley function

f(x) = −20 exp(−0.2 sqrt((1/n) Σ_{i=1}^{n} x_i^2)) − exp((1/n) Σ_{i=1}^{n} cos(2π x_i)) + 20 + e,

with n = 10.
The robust design optimization of this example is:
All the design variables are considered to follow uniform distributions with a variation of ±0.2 around their mean values.
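For concreteness, a brute-force Monte Carlo estimate of the statistics that the PCE surrogate is meant to deliver cheaply can be written as follows (the weighting of mean and standard deviation in the actual robust objective is not reproduced here):

```python
import numpy as np

def ackley(X):
    """Ackley function, evaluated row-wise on an (N, n) array."""
    n = X.shape[1]
    s1 = np.sqrt(np.sum(X ** 2, axis=1) / n)
    s2 = np.sum(np.cos(2.0 * np.pi * X), axis=1) / n
    return -20.0 * np.exp(-0.2 * s1) - np.exp(s2) + 20.0 + np.e

rng = np.random.default_rng(3)
mean_design = np.zeros(10)                       # the deterministic optimum
X = mean_design + rng.uniform(-0.2, 0.2, size=(100_000, 10))
y = ackley(X)
print(y.mean(), y.std())   # statistics entering the robust objective
```

In the optimization loop, such statistics must be re-estimated at every candidate design point, which is exactly why a cheap surrogate with reused samples pays off.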
The results are shown in Table 31, from which it is found that, compared to the robust optimization without the trust region scenario (denoted 'without'), the obtained performance results of the method with the trust region (denoted 'with') are comparable, while the number of function calls required is evidently smaller (12,735 versus 16,840). Both robust designs clearly outperform the deterministic design (denoted 'DD') in robustness.

Methods   Optimal solution                                                                   Results                      Funcall
DD        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]                                                     1.8839   0.4390   10.6639
with      [0.6246, 0.7066, 0.6687, 0.7796, 0.5744, 0.6784, 0.7470, 0.6333, 0.6578, 0.6904]   4.5014   0.1377   7.2554    12,735
without   [0.6564, 0.6935, 0.6984, 0.7036, 0.6691, 0.0299, 0.0141, 0.6407, 0.0205, 0.0038]   3.7457   0.2003   7.7517    16,840
The second example is the robust design optimization of an automobile torque arm, shown in Figure 11.
In this problem, the four geometrical parameters of the torque arm shown in Figure 11 are taken as the design variables.
where the objective function is constructed from the mean and standard deviation of the output response of the torque arm.
The distribution parameters of the four design variables and design parameters are shown in Table 32.
[Table 32. Distribution parameters of the four uniformly distributed design variables and of the three deterministic parameters, whose values are 5500 N, 170 N/mm^2, and 2.1 × 10^10 N/mm^2.]
The corresponding robust design optimization model is formulated as
As mentioned above, the PCE model is constructed only for the objective function, and the results are shown in Table 33. It is noticed that the robust optimization designs with and without the trust region scenario yield comparable results, while the number of function calls (objective function calls) required by the design with the trust region is evidently smaller (658 versus 1283). The deterministic design cannot even obtain a feasible optimal solution, with both constraints violated (>0), since it does not consider uncertainties during the design. These results further demonstrate the effectiveness and advantage of the proposed method.

Methods   Optimal solution                 Results                                                        Funcall
DD        [8.13, 55.00, 55.00, 110.00]     2.6616e4   1.2171e3   1                                        82
with      [8.53, 54.10, 58.67, 111.03]     3.1027e4   1.3355e3   1.1315   −0.0123      −1.1833e5          658
without   [8.57, 52.68, 57.50, 110.00]     3.0332e4   1.3093e3   1.1077   −4.0000e−4   −1.2913e2          1283
4.4. Summary
The employment of the trust region in sDDPCE-based robust optimization can evidently reduce the computational cost. However, the determination of the trust region in this chapter is still rather subjective, and a more rigorous method should be explored. In this section, as well as in Section 3, the scenarios of sparse PCE and trust region are only applied to DDPCE to save computational cost; however, the methods proposed here are also applicable to other PCE approaches, such as gPCE and GSPCE.
In this chapter, the latest advances in PCE theory and approaches for probabilistic UP have been comprehensively presented. This does not, however, limit the application of PCE to nonprobabilistic UP for addressing epistemic uncertainties: Sudret and Schöbi have proposed a two-level meta-modeling approach using non-intrusive sparse PCE to surrogate the exact computational model and facilitate uncertainty quantification analysis, in which the input variables are modeled by probability-boxes (p-boxes) to account for both aleatory and epistemic uncertainty.
References
1. Matthies HG. Quantifying uncertainty: Modern computational representation of probability and applications. In: Extreme Man-Made and Natural Hazards in Dynamics of Structures. Springer Netherlands; 2007. pp. 105–135
2. Kiureghian AD, Ditlevsen O. Aleatory or epistemic? Does it matter? Structural Safety. 2009;31(2):105–112
3. Swiler LP, Romero VJ. A survey of advanced probabilistic uncertainty propagation and sensitivity analysis methods. 2012 Joint Army Navy NASA Air Force Combustion/Propulsion Joint Subcommittee Meeting; December 3–7, 2012; Monterey, CA
4. Du X, Chen W. A most probable point-based method for efficient uncertainty analysis. Journal of Design & Manufacturing Automation. 2001;4(1):47–66
5. Mukhopadhyay S, Khodaparast H, Adhikari S. Fuzzy uncertainty propagation in composites using Gram–Schmidt polynomial chaos expansion. Applied Mathematical Modelling. 2016;40(7–8):4412–4428
6. Jiang C, Zheng J, Ni BY, Han X. A probabilistic and interval hybrid reliability analysis method for structures with correlated uncertain parameters. International Journal of Computational Methods. 2015;12(4):1540006 (24 pages)
7. Terejanu G, Singla P, Singh T, Scott PD. Approximate interval method for epistemic uncertainty propagation using polynomial chaos and evidence theory. IEEE American Control Conference; 30 June–2 July 2010; Baltimore, MD, USA
8. Lee SH, Chen W. A comparative study of uncertainty propagation methods for black-box-type problems. Structural & Multidisciplinary Optimization. 2009;37(3):239–253
9. Dodson M, Parks GT. Robust aerodynamic design optimization using polynomial chaos. Journal of Aircraft. 2009;46(2):635–646
10. Coelho R, Bouillard P. Multiobjective reliability-based optimization with stochastic metamodels. Evolutionary Computation. 2011;19(4):525–560
11. Xiu D, Karniadakis GE. The Wiener–Askey polynomial chaos for stochastic differential equations. SIAM Journal on Scientific Computing. 2002;24(2):619–644
12. Wiener N. The homogeneous chaos. American Journal of Mathematics. 1938;60(1):897–936
13. Fan et al. Parameter uncertainty and temporal dynamics of sensitivity for hydrologic models: A hybrid sequential data assimilation and probabilistic collocation method. Environmental Modelling & Software. 2016;86:30–49
14. Guerine A, Hami AE, Walha L, et al. A polynomial chaos method for the analysis of the dynamic behavior of uncertain gear friction system. European Journal of Mechanics – A/Solids. 2016;59:76–84
15. Meecham WC, Siegel A. Wiener–Hermite expansion in model turbulence at large Reynolds numbers. Physics of Fluids (1958–1988). 1964;7(8):1178–1190. DOI: 10.1063/1.1711359
16. Witteveen JAS, Bijl H. Modeling arbitrary uncertainties using Gram–Schmidt polynomial chaos. 44th AIAA Aerospace Sciences Meeting and Exhibit; 9–12 January 2006; Reno, Nevada
17. Wan X, Karniadakis GE. Multi-element generalized polynomial chaos for arbitrary probability measures. SIAM Journal on Scientific Computing. 2006;28(3):901–928
18. Oladyshkin S, Nowak W. Data-driven uncertainty quantification using the arbitrary polynomial chaos expansion. Reliability Engineering & System Safety. 2012;106(4):179–190
19. Hosder S, Walters RW, Balch M. Efficient sampling for non-intrusive polynomial chaos applications with multiple uncertain input variables. 48th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference; 23–26 April 2007; Honolulu, Hawaii
20. Eldred MS. Recent advances in non-intrusive polynomial chaos and stochastic collocation methods for uncertainty analysis and design. 50th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference; 4–7 May 2009; Palm Springs, California
21. Abramowitz M, Stegun I, McQuarrie DA. Handbook of Mathematical Functions. New York: Dover Publications; 1964
22. Xiong F, Greene S, Chen W, Xiong Y, Yang S. A new sparse grid based method for uncertainty propagation. Structural & Multidisciplinary Optimization. 2009;41(3):335–349
23. Efron B, Hastie T, Johnstone I, Tibshirani R. Least angle regression. The Annals of Statistics. 2004;32(2):407–499
24. Tatang MA, Pan W, Prinn RG, McRae GJ. An efficient method for parametric uncertainty analysis of numerical geophysical models. Journal of Geophysical Research. 1997;102(D18):21925–21932
25. Hu C, Youn BD. Adaptive-sparse polynomial chaos expansion for reliability analysis and design of complex engineering systems. Structural & Multidisciplinary Optimization. 2011;43(3):419–442
26. Wan X, Karniadakis GE. An adaptive multi-element generalized polynomial chaos method for stochastic differential equations. Journal of Computational Physics. 2005;209(2):617–642
27. Tu J, Cheng YP. An integrated stochastic design framework using cross-validated multivariate metamodeling methods. SAE Technical Paper 2003-01-0876; 2003
28. Nocedal J, Wright S. Numerical Optimization. Springer Series in Operations Research and Financial Engineering. New York: Springer; 2006
29. Elham A, Tooren MJLV. Trust region filter-SQP method for multi-fidelity wing aerostructural optimization. Variational Analysis and Aerospace Engineering. 2016;116:247–267
30. Kim S, Ahn J, Kwon JH. Reliability based wing design optimization using trust region framework. 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference; 30 August–1 September 2004; Albany, New York
31. Robinson TD, Eldred MS, Willcox KE, Haimes R. Surrogate-based optimization using multifidelity models with variable parameterization and corrected space mapping. AIAA Journal. 2008;46(11):2814–2822
32. Schöbi R, Sudret B. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions. Journal of Computational Physics. 2017;339:307–327
33. Jacquelin E, Friswell MI, Adhikari S, Dessombz O, Sinou J. Polynomial chaos expansion with random and fuzzy variables. Mechanical Systems and Signal Processing. 2016;75:41–56
34. Eldred MS, Swiler LP, Tang G. Mixed aleatory-epistemic uncertainty quantification with stochastic expansions and optimization-based interval estimation. Reliability Engineering and System Safety. 2011;96(9):1092–1113
35. Lu F, Morzfeld M, Tu X, Chorin AJ. Limitations of polynomial chaos expansions in the Bayesian solution of inverse problems. Journal of Computational Physics. 2014;282(C):138–147