1. Introduction
Uncertainties are ubiquitous in engineering problems and can roughly be categorized as aleatory or epistemic [1, 2]. The former represents natural or physical randomness that cannot be controlled or reduced by designers or experimentalists, while the latter refers to reducible uncertainty resulting from a lack of data or knowledge. In systems design, all sources of uncertainty need to be propagated to assess the uncertainty of system quantities of interest, a process known as uncertainty propagation (UP). UP is of central importance to design under uncertainty, as it largely determines both the quality and the efficiency of the design process. Since sufficient data are generally available for aleatory uncertainties, probabilistic methods are commonly employed to compute response distribution statistics from the probability distribution specifications of the inputs [3, 4]. Conversely, for epistemic uncertainties, data are generally sparse, making probability distribution assertions questionable and typically leading to nonprobabilistic approaches, such as fuzzy, evidence, and interval theories [5–7]. This chapter mainly focuses on propagating aleatory uncertainties to assess the uncertainty of system quantities of interest using probabilistic methods, as shown in Figure 1.
A wide variety of probabilistic UP approaches for the analysis of aleatory uncertainties have been developed [8], among which the polynomial chaos expansion (PCE) technique is a rigorous approach due to its strong mathematical basis and its ability to produce functional representations of stochastic quantities. With PCE, a function with random inputs can be represented as a stochastic metamodel, from which lower-order statistical moments as well as the reliability of the function output can be derived efficiently, facilitating design optimization under uncertainty in scenarios such as robust design [9] and reliability-based design [10]. The original PCE method is an intrusive approach in the sense that it requires extensive modifications to the existing deterministic code of the analysis model, which generally limits it to research where the specialist has full control of all model equations as well as detailed knowledge of the software. Alternatively, non-intrusive approaches have been developed that do not modify the original analysis model; they have gained increasing attention and are the focus of this chapter. As a well-known PCE approach, the generalized PCE (gPCE) method based on the Askey scheme [11, 12] has been widely applied to UP for its higher accuracy and better convergence [13, 14] compared to the classic Wiener PCE [15]. In general, however, the random inputs do not necessarily follow the five types of probability distributions (i.e., normal, uniform, exponential, beta, and gamma) in the Askey scheme. In that case, a transformation must be applied to map each random input variable to one of the five distributions, which can substantially lower the convergence rate and makes such nonoptimal application of Askey polynomial chaos computationally inefficient [8]. Therefore, the Gram-Schmidt PCE (GSPCE) [16] and multi-element PCE (MEPCE) [17] methods have been developed to accommodate arbitrary distributions by constructing their own orthogonal polynomials rather than relying on the Askey scheme.
All the PCE methods discussed above are constructed on the assumption that exact knowledge of the joint multivariate probability density function (PDF) of all random input variables is available. In the literature, the joint PDF is generally factorized into univariate PDFs of each random variable under the assumption of independence when introducing PCE. However, a random input may exist only as raw data with a complicated cumulative histogram, such as one of bimodal or multimodal type, for which it is often difficult to obtain an accurate analytical expression of the PDF. Under these scenarios, all the above PCE approaches become ineffective, since they all assume the PDFs to be completely known. To address this issue, the data-driven PCE (DDPCE) method has been proposed [18], and its accuracy and convergence have been tested and well demonstrated for diverse statistical distributions and raw data. With this method, the one-dimensional orthogonal polynomial basis is constructed directly from a set of data of the random input variables by matching a certain order of their statistical moments, rather than from complete distributions as in the existing PCE methods, including gPCE, GSPCE, and MEPCE.
At present, substantial research progress on PCE has been made in the literature, and PCE has been applied to practical engineering problems to reduce the computational cost of UP. However, a large gap remains between academic study and engineering application of PCE theory for the following reasons: (1) complete information on the input PDFs is often unavailable in engineering, a situation most PCE methods in the literature cannot handle; (2) the computational cost of existing PCE approaches is still very high and often unaffordable in practical problems, especially when applied to design optimization; and (3) there is a lack of comprehensive exploration of the relative merits of the various PCE approaches to help designers choose the most suitable PCE technique in design under uncertainty.
2. Data-driven polynomial chaos expansion method
Most PCE methods in the literature are constructed on the assumption that exact knowledge of the PDF of each random input variable is available. However, the PDF of a random parameter may exist only as raw data or numerically as a complicated cumulative histogram, such as one of bimodal or multimodal type, for which it is often difficult to obtain an accurate analytical expression. To address this issue, the data-driven PCE method (DDPCE for short in this chapter) has been proposed. DDPCE follows a general procedure similar to that of the well-known gPCE method. For gPCE, the one-dimensional orthogonal polynomial basis simply comes from the Askey scheme in Table 1 and is a function of standard random variables. For DDPCE, by contrast, the one-dimensional orthogonal polynomial basis is constructed directly from the data of the random inputs by matching a certain order of their statistical moments and is a function of the original random variables.
2.1. Procedure of the data-driven PCE method
Step 1. Represent the output y as a PCE model of order p:

$$ y = \sum_{i=0}^{P} b_i \Phi_i(\mathbf{X}) \tag{1} $$

where P + 1 = (d + p)!/(d! p!) is the total number of expansion terms for d random input variables, b_{i} (i = 0, 1, …, P) are the unknown PCE coefficients, and Φ_{i}(X) are the multivariate orthogonal polynomials formed as tensor products of the one-dimensional orthogonal polynomial bases of the individual input variables.
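To make the growth of the full PCE model concrete, the term count P + 1 can be computed directly; the following minimal Python sketch (ours, not from the chapter) reproduces the model sizes encountered later in the comparisons:

```python
from math import comb

def num_pce_terms(d: int, p: int) -> int:
    """Total number of terms P + 1 = (d + p)!/(d! p!) in a full PCE model."""
    return comb(d + p, p)

# Growth of the full PCE model with dimension and order:
for d, p in [(2, 3), (5, 3), (10, 5)]:
    print(f"d={d}, p={p}: {num_pce_terms(d, p)} terms")  # 10, 56, 3003
```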
Step 2. Solve for the unknown coefficients of the one-dimensional orthogonal polynomial basis of each random input variable.
Since the construction of the multivariate basis Φ_{i}(X) reduces to the construction of the one-dimensional orthogonal polynomials of each random variable, consider the one-dimensional polynomial of degree k,

$$ P^{(k)}(x) = \sum_{j=0}^{k} p_j^{(k)} x^j \tag{2} $$

whose coefficients p_{j}^{(k)} are unknown. The basis must satisfy the orthogonality condition

$$ \int_{\Omega} P^{(k)}(x)\, P^{(l)}(x)\, d\Gamma(x) \propto \delta_{kl} \tag{3} $$

where δ_{kl} is the Kronecker delta, Ω is the original stochastic span, and Γ(X) represents the cumulative distribution function of the random variable X.
It is assumed that all the coefficients of the leading terms are normalized to one, i.e., p_{k}^{(k)} = 1, so that the polynomials are monic. Orthogonality of P^{(k)}(x) to the zero-order polynomial P^{(0)}(x) = 1 then requires

$$ \int_{\Omega} P^{(k)}(x)\, d\Gamma(x) = \sum_{j=0}^{k} p_j^{(k)} \mu_j = 0 \tag{4} $$

where μ_{j} denotes the jth raw statistical moment of the random variable x.
In the same way as above, orthogonality of P^{(k)}(x) to P^{(1)}(x), P^{(2)}(x), …, P^{(k−1)}(x) gives

$$ \int_{\Omega} P^{(k)}(x)\, P^{(l)}(x)\, d\Gamma(x) = 0, \quad l = 1, 2, \ldots, k-1 \tag{5} $$
There are k equations in total in Eqs. (4) and (5). By substituting Eq. (4) into the first equation in Eq. (5), then substituting Eq. (4) and the first equation in Eq. (5) into the second equation in Eq. (5), and so on, a set of new equations can be derived:
$$ \sum_{j=0}^{k} p_j^{(k)} \mu_{j+l} = 0, \quad l = 0, 1, \ldots, k-1 \tag{6} $$

It is observed that Eq. (6), together with the normalization p_{k}^{(k)} = 1, can be written in matrix form as

$$ \begin{bmatrix} \mu_0 & \mu_1 & \cdots & \mu_k \\ \mu_1 & \mu_2 & \cdots & \mu_{k+1} \\ \vdots & \vdots & & \vdots \\ \mu_{k-1} & \mu_k & \cdots & \mu_{2k-1} \\ 0 & 0 & \cdots & 1 \end{bmatrix} \begin{bmatrix} p_0^{(k)} \\ p_1^{(k)} \\ \vdots \\ p_{k-1}^{(k)} \\ p_k^{(k)} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} \tag{7} $$

where μ_{m} (m = 0, 1, …, 2k − 1) is the mth raw statistical moment of x.
Clearly, to obtain a kth-order one-dimensional orthogonal polynomial basis, the 0th- to (2k − 1)th-order statistical moments of x must be matched; these can be calculated from the PDF or estimated statistically from the data set. Of course, when the number of data points is not large enough, errors are induced in the moment calculation. The polynomial coefficients of the one-dimensional orthogonal polynomial basis can then be obtained by solving Eq. (7) with Cramer's rule.
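The construction above can be sketched in a few lines of Python (the function name is ours, a direct linear solver is used in place of Cramer's rule, and the bimodal test data are an arbitrary illustration):

```python
import numpy as np

def orthogonal_poly_coeffs(data: np.ndarray, k: int) -> np.ndarray:
    """Coefficients [p_0, ..., p_k] (monic, p_k = 1) of the kth-order
    one-dimensional orthogonal polynomial, obtained by matching the
    0th- to (2k-1)th-order raw moments of the data (Eqs. (6)-(7))."""
    mu = np.array([np.mean(data**m) for m in range(2 * k)])  # raw moments
    A = np.zeros((k + 1, k + 1))
    for l in range(k):              # moment-matching rows of Eq. (7)
        A[l, :] = mu[l:l + k + 1]
    A[k, k] = 1.0                   # normalization row: p_k = 1
    rhs = np.zeros(k + 1)
    rhs[k] = 1.0
    return np.linalg.solve(A, rhs)

# Example: 3rd-order basis polynomial for bimodal raw data
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.5, 5000), rng.normal(3, 1.0, 5000)])
print(orthogonal_poly_coeffs(data, 3))
```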
Step 3. Calculate the PCE coefficients b_{i} by the least-squares regression technique.
Step 4. Once the PCE coefficients are obtained, a stochastic metamodel (i.e., the PCE model) that is much cheaper than the original model is available. Evaluate the PCE model by Monte Carlo simulation (MCS) to obtain the probabilistic characteristics of y; since the PCE model is cheap, a large number of sample points can be used. For the statistical moments, analytical expressions can also be conveniently derived from the PCE coefficients:

$$ \mu_y = b_0, \qquad \sigma_y^2 = \sum_{i=1}^{P} b_i^2 \left\langle \Phi_i^2(\mathbf{X}) \right\rangle \tag{8} $$
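A hedged sketch of Steps 3 and 4 for a one-dimensional input (helper names are ours; `orthogonal_poly_coeffs` is the routine from the previous sketch): the coefficients b_{i} are fit by least squares on sampled model evaluations, and the mean and standard deviation then follow from Eq. (8), with 〈Φ_{i}²〉 estimated from the input data.

```python
import numpy as np

def eval_basis(coeff_list, x):
    """Evaluate each one-dimensional orthogonal polynomial at the points x."""
    return np.column_stack([np.polyval(c[::-1], x) for c in coeff_list])

def fit_and_moments(coeff_list, x_samples, y_samples, data):
    """Least-squares fit of the PCE coefficients b_i (Step 3) and the
    analytical mean/std of Eq. (8) (Step 4), for a 1-D random input."""
    Phi = eval_basis(coeff_list, x_samples)              # regression matrix
    b, *_ = np.linalg.lstsq(Phi, y_samples, rcond=None)
    E_Phi2 = np.mean(eval_basis(coeff_list, data)**2, axis=0)  # <Phi_i^2>
    mean = b[0]                                          # since Phi_0 = 1
    std = np.sqrt(np.sum(b[1:]**2 * E_Phi2[1:]))
    return b, mean, std

# coeff_list[i] = orthogonal_poly_coeffs(data, i) for i = 0, ..., p
```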
2.2. Extension of Galerkin projection to DDPCE
In the existing work on DDPCE, only the regression method is employed to calculate the PCE coefficients. In the authors' experience, the regression matrix may become ill-conditioned for higher-dimensional problems, since the number of sample points required for regression, often set to two times the number of PCE coefficients P + 1 [19], grows greatly, leading to a large-scale regression matrix. To solve higher-dimensional problems, the Galerkin projection method in conjunction with the sparse grid technique has been widely used in gPCE due to its high accuracy, robustness, and convergence [20], which has also been observed and demonstrated in our earlier studies on PCE in recent years. In this section, the Galerkin projection method for PCE coefficient calculation is extended to the DDPCE approach to address higher-dimensional UP problems. Figure 2 shows the general procedure of the improved DDPCE method.
With the projection method, the Galerkin projection is conducted on each side of Eq. (1):

$$ \left\langle y, \Phi_i(\mathbf{X}) \right\rangle = \sum_{j=0}^{P} b_j \left\langle \Phi_j(\mathbf{X}), \Phi_i(\mathbf{X}) \right\rangle, \quad i = 0, 1, \ldots, P \tag{9} $$
where 〈•, •〉 represents the inner product operation

$$ \left\langle u(\mathbf{X}), v(\mathbf{X}) \right\rangle = \int_{\Omega} u(\mathbf{X})\, v(\mathbf{X})\, dH(\mathbf{X}) \tag{10} $$

where H(X) is the joint cumulative distribution function of the random input variables X.
Based on the orthogonality property of the orthogonal polynomials, each PCE coefficient can be calculated as

$$ b_i = \frac{\left\langle y, \Phi_i(\mathbf{X}) \right\rangle}{\left\langle \Phi_i(\mathbf{X}), \Phi_i(\mathbf{X}) \right\rangle} = \frac{E\!\left[ y\,\Phi_i(\mathbf{X}) \right]}{E\!\left[ \Phi_i^2(\mathbf{X}) \right]} \tag{11} $$
Similar to gPCE, the key point is the computation of the numerator in Eq. (11), which can be expressed as

$$ E\!\left[ y\,\Phi_i(\mathbf{X}) \right] = \int_{\Omega} y(\mathbf{X})\, \Phi_i(\mathbf{X})\, dH(\mathbf{X}) \tag{12} $$
Gaussian quadrature techniques, such as full factorial numerical integration (FFNI) and sparse grid numerical integration, have been widely used to calculate this numerator in existing gPCE approaches. With these, the one-dimensional Gaussian quadrature nodes and weights are derived directly by applying scaling factors to the nodes and weights of the existing Gaussian quadrature formulae, and the tensor product is then employed to obtain the multidimensional nodes. For some common types of probability distributions, for example, the normal, uniform, and exponential distributions, the PDFs have formulations similar to the weighting functions of the Gauss-Hermite, Gauss-Legendre, and Gauss-Laguerre quadrature formulae. Therefore, l_{i} and ω_{i} can be conveniently obtained from the tabulated nodes and weights of the Gaussian quadrature formulae [21], as shown in Table 2, where x_{i} and w_{i} denote the tabulated nodes and weights of the corresponding classical quadrature rule.
| | Normal N(μ, σ²) (Gauss-Hermite) | Exponential Exp(λ) (Gauss-Laguerre) | Uniform U(a, b) (Gauss-Legendre) |
| --- | --- | --- | --- |
| l_{i} | μ + √2 σ x_{i} | x_{i}/λ | (a + b)/2 + (b − a) x_{i}/2 |
| ω_{i} | w_{i}/√π | w_{i} | w_{i}/2 |
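These scalings can be reproduced with numpy's Gauss quadrature routines; the sketch below (with arbitrary example parameters) verifies that each scaled rule recovers the corresponding distribution mean:

```python
import numpy as np

n = 5  # number of one-dimensional nodes

# Normal N(mu, sigma^2): Gauss-Hermite with weight exp(-x^2)
x, w = np.polynomial.hermite.hermgauss(n)
mu, sigma = 1.0, 0.5
l_norm, w_norm = mu + np.sqrt(2) * sigma * x, w / np.sqrt(np.pi)

# Uniform U(a, b): Gauss-Legendre on [-1, 1]
x, w = np.polynomial.legendre.leggauss(n)
a, b = 0.0, 2.0
l_unif, w_unif = (a + b) / 2 + (b - a) / 2 * x, w / 2

# Exponential Exp(lambda): Gauss-Laguerre with weight exp(-x)
x, w = np.polynomial.laguerre.laggauss(n)
lam = 2.0
l_exp, w_exp = x / lam, w

print(np.sum(w_norm * l_norm), np.sum(w_unif * l_unif), np.sum(w_exp * l_exp))
# each sum recovers the distribution mean: 1.0, 1.0, 0.5
```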
However, the distributions of the random inputs may not follow the Askey scheme, may be nontrivial, or may even exist only as raw data with a cumulative histogram of complicated shape, in which case this way of deriving the nodes and weights is not applicable. In this work, a simple method is proposed based on the moment-matching equations below to obtain the one-dimensional quadrature nodes and weights:

$$ \sum_{i=0}^{n} \omega_i\, l_i^{\,m} = \mu_m = E\!\left[ X^m \right], \quad m = 0, 1, \ldots, 2n+1 \tag{13} $$

where l_{i} and ω_{i} (i = 0, 1, …, n) are, respectively, the ith one-dimensional Gaussian quadrature nodes and weights, which theoretically can be obtained by solving Eq. (13).
However, Eq. (13) is a set of multivariate nonlinear equations, which are difficult to solve when the number of equations is large (n + 1 > 7). Note that the one-dimensional polynomial basis P^{(k)} constructed above for each dimension is orthogonal; therefore, its zeros are exactly the Gaussian quadrature nodes l_{i}, which can easily be obtained by solving P^{(k)} = 0. Substituting the l_{i} into Eq. (13) then yields a linear system from which the n + 1 weights ω_{i} can be conveniently calculated. For a PCE of order p, generally at least p + 1 one-dimensional nodes should be generated to ensure accuracy, i.e., n ≥ p, which means that the 0th- to at least (2p + 1)th-order statistical moments of the random variable X should be matched. In this work, n is set as n = p.
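A minimal sketch of this node-and-weight computation (the function name is ours; `orthogonal_poly_coeffs` is the moment-matching routine sketched in Section 2.1):

```python
import numpy as np

def gauss_nodes_weights(data: np.ndarray, n: int):
    """One-dimensional Gaussian quadrature rule with n + 1 nodes for an
    arbitrary distribution represented by raw data."""
    p = orthogonal_poly_coeffs(data, n + 1)   # monic P^(n+1), from Eq. (7)
    # Nodes: zeros of P^(n+1); real for a valid density, so drop the
    # tiny imaginary parts np.roots may introduce numerically.
    nodes = np.sort(np.roots(p[::-1]).real)
    # Weights: first n + 1 moment-matching equations of Eq. (13),
    # which are linear in the weights once the nodes are fixed.
    V = np.vander(nodes, increasing=True).T   # V[m, i] = l_i**m
    mu = np.array([np.mean(data**m) for m in range(n + 1)])
    weights = np.linalg.solve(V, mu)
    return nodes, weights
```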
In the same way, the nodes and weights in the other dimensions are obtained conveniently. The numerator can then be calculated by the full factorial numerical integration (FFNI) method [8] for lower-dimensional problems (d < 4) as

$$ E\!\left[ y\,\Phi_i(\mathbf{X}) \right] \approx \sum_{j_1=1}^{m_1} \cdots \sum_{j_d=1}^{m_d} \left( \omega_{j_1}^{(1)} \cdots \omega_{j_d}^{(d)} \right) y\!\left( l_{j_1}^{(1)}, \ldots, l_{j_d}^{(d)} \right) \Phi_i\!\left( l_{j_1}^{(1)}, \ldots, l_{j_d}^{(d)} \right) \tag{14} $$

where l_{j_k}^{(k)} and ω_{j_k}^{(k)} are the nodes and weights in the kth dimension, m_{k} is the number of nodes in the kth dimension, and the total number of nodes is N = m_{1} × ⋯ × m_{d}.
Generally, m is set as m ≥ p + 1 (p is the order of the PCE model). If the number of nodes N for calculating E[yΦ_{i}(X)] is too small and is not matched to the PCE order, a large error is induced. Accordingly, the conclusion that a higher PCE order yields more accurate UP results rests on E[yΦ_{i}(X)] having been calculated accurately enough. Clearly, the number of nodes N increases exponentially with the dimension d, causing the curse of dimensionality; therefore, FFNI is only suitable for lower-dimensional problems (d < 4).
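A sketch of the FFNI evaluation of such expectations under the assumption of independent inputs (Eq. (14)); here `rules` holds the per-dimension (nodes, weights) pairs obtained as above:

```python
import itertools
import numpy as np

def ffni_expectation(func, rules):
    """Full factorial numerical integration of E[func(X)] from per-dimension
    Gaussian quadrature rules given as [(nodes_1, weights_1), ...]."""
    total = 0.0
    for combo in itertools.product(*(zip(l, w) for l, w in rules)):
        point = np.array([node for node, _ in combo])
        weight = np.prod([w for _, w in combo])
        total += weight * func(point)
    return total

# E[y * Phi_i(X)] follows with func = lambda x: y(x) * Phi_i(x);
# the number of model calls is the tensor-product size m_1 * ... * m_d.
```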
For higher-dimensional problems (d ≥ 4), the sparse grid numerical integration method [22] can be used to calculate E[yΦ_{i}(X)] to reduce the computational cost:

$$ E\!\left[ y\,\Phi_i(\mathbf{X}) \right] \approx \sum_{q-d+1 \le |\mathbf{j}| \le q} (-1)^{q-|\mathbf{j}|} \binom{d-1}{q-|\mathbf{j}|} \left( U^{j_1} \otimes \cdots \otimes U^{j_d} \right)\!\left[ y\,\Phi_i \right] \tag{15} $$

where j = (j_{1}, …, j_{d}) is a multi-index with |j| = j_{1} + ⋯ + j_{d}, U^{j_k} denotes the one-dimensional quadrature rule at level j_{k} in the kth dimension, q is the sparse grid level parameter, and k = q − d is the accuracy level.
For the FFNI-based method, if m nodes are selected in each dimension (m_{1} = … = m_{d} = m), an accuracy level of 2m − 1 is obtained. For the sparse grid-based method, an accuracy level of 2k + 1 is obtained with the accuracy level k = q − d. For example, if k = 2 and d = 8, the sparse grid-based method requires 17 nodes and yields a 5th-order (2×2 + 1) accuracy level; for the FFNI-based method to reach the same accuracy level of 5 (2×3 − 1), m must be 3, requiring 3^{8} = 6561 nodes. Clearly, for the same accuracy level, the number of nodes of the sparse grid-based method is much smaller than that of the FFNI-based method when d is relatively large.
In this chapter, we focus on extending the Galerkin projection to the DDPCE method to address higher-dimensional UP problems and on exploring the relative merits of the various PCE approaches. For cases with only small data sets, both DDPCE and the existing distribution-based methods (e.g., gPCE) may produce large UP errors, and the estimation of the PDF for the existing PCE methods is problem dependent and rather subjective, making an effective comparison between DDPCE and the existing PCE methods difficult. Therefore, in the comparison it is assumed that enough data on the random inputs exist to ensure the accuracy of the moments.
2.3. Comparative study of various PCE methods
In this section, the enhanced DDPCE method, the well-established gPCE method, and the GSPCE method, which can address arbitrary random distributions, are applied to uncertainty propagation to calculate the first four statistical moments (mean μ, standard deviation σ, skewness β_{1}, kurtosis β_{2}) and the probability of failure (P_{f}); the results are compared to help designers choose the most suitable PCE method for UP. To compare the three PCE approaches comprehensively, four cases are tested on four mathematical functions with varying nonlinearity and dimension, shown in Table 3, where N, U, Exp, Wbl, Rayl, and Logn denote the normal, uniform, exponential, Weibull, Rayleigh, and lognormal distributions, respectively. P_{f} is defined as P_{f} = Prob(y ≤ 0).
The PCE order is set as p = 5 for all functions, which means that the 0th- to 9th-order statistical moments of the random inputs are matched to construct the one-dimensional orthogonal polynomials for the DDPCE approach. For the first and second functions, FFNI-based Galerkin projection is used to calculate the PCE coefficients, while for the latter two, the sparse grid-based method with accuracy level k = 4 is used since the dimension is higher. The results of MCS with 10^{7} runs are used to benchmark the effectiveness of the three methods.
In Case 1, all the random input distributions are known and belong to the Askey scheme. The test results are shown in Tables 4–7, where the underlined bold numbers are the relatively best results and e represents the relative errors of the first four moments (μ, σ, β_{1}, β_{2}) with respect to MCS. P_{f} estimated by MCS is presented with its 95% confidence interval, and the results marked with * are from the sparse grid-based method. From these tables, it is found that with the same number of function calls (denoted N_{s}), DDPCE, gPCE, and GSPCE produce almost identical statistical moments, which are very close to those of MCS (with the largest error being 2.6927%). The P_{f} estimates of all methods lie within the 95% confidence interval of MCS, indicating high UP accuracy. Although the orthogonal polynomial basis for DDPCE is constructed by matching only the 0th- to 9th-order statistical moments of the random input variables instead of the complete PDFs used by gPCE and GSPCE, the results are accurate enough in this case. Moreover, the application of the sparse grid technique to DDPCE greatly reduces the function calls for higher-dimensional problems (see Tables 5 and 6) while retaining good accuracy; for the fourth function in particular, the computational cost with FFNI is very large (N_{s} = 976,562).
| Methods | MCS | DDPCE | gPCE | GSPCE |
| --- | --- | --- | --- | --- |
| e_{μ} (%) | – | 0.0115 | 0 | 0.0231 |
| e_{σ} (%) | – | 0.0516 | 0 | 0.0258 |
| e_{β1} (%) | – | 0 | 0.4202 | 5.4852 |
| e_{β2} (%) | – | 0.1725 | 0.0814 | 0.0958 |
| P_{f} (×10^{−3}) | [3.1403, 3.2101] | 3.1713 | 3.2017 | 3.1936 |
| N_{s} | 10^{7} | 125 | 125 | 125 |
In Case 2, all the random input distributions are known but do not belong to the Askey scheme. In this case, the Rosenblatt transformation must first be employed for the gPCE method, whereas DDPCE and GSPCE can be used directly. The results are shown in Tables 8–11. It is observed that, overall, DDPCE and GSPCE perform better than gPCE, yielding results close to those of MCS, because the transformation in gPCE induces error. Specifically, in Tables 9 and 10, the gPCE method produces relatively large errors due to the transformation; the shaded numbers are clearly larger than those of DDPCE and GSPCE, and P_{f} falls outside the 95% confidence interval of MCS. The interpretation is that, since the first function is linear, the impact of the transformation employed in gPCE on UP accuracy is small, while the second and third functions are more complicated and nonlinear (including trigonometric and exponential terms), so the error induced by the transformation is amplified more. The fourth function is a nonlinear polynomial, which is easier to handle in UP than functions 2 and 3; its results are therefore generally accurate except for P_{f}, which still falls outside the 95% confidence interval of MCS. Moreover, the application of the sparse grid greatly reduces N_{s}, showing good potential for higher-dimensional problems.
In Case 3, the PDFs of some variables are of a bounded (BD) type given by Eq. (16), and the rest of the variables follow typical distributions. In this case, the Rosenblatt transformation is again employed first for the gPCE method.
From the results in Tables 12–15, it is found that gPCE generally induces large errors, especially the shaded numbers in the tables. Since the first two variables follow a distribution bounded in an interval, the error induced by the transformation is large, and all values of P_{f} for gPCE fall outside the confidence intervals. In contrast, the results of DDPCE and GSPCE are generally accurate and comparable, remaining very close to those of MCS. It should be noted that although the error of gPCE is the largest, the P_{f} values of all three methods fall outside the confidence interval for function 3 (italic numbers), since this function is the most nonlinear and complicated. Hence, we increase the PCE order to p = 6 and the sparse grid accuracy level to k = 5, and the resulting P_{f} values for DDPCE, gPCE, and GSPCE are 3.1263, 3.1446, and 3.1350, respectively, exhibiting evident improvement. Clearly, with the same N_{s}, DDPCE and GSPCE are much more accurate than gPCE when nontrivial distributions are involved. These results further demonstrate the effectiveness and advantage of the enhanced DDPCE for UP.
In Case 4, the distributions of the random input variables are unknown and only some data exist. Although an analytical PDF can be obtained from the data through empirical distribution systems, such as the Johnson or Pearson system [8], it is often very difficult to obtain the analytical PDF accurately if the distribution of the data is complicated, such as a cumulative histogram with two or more modes. It is well known that the Pearson system, based on the first four statistical moments of the random variable, produces large errors for bimodal (BM) or multimodal PDFs. Evidently, the existing PCE approaches, including gPCE and GSPCE, may produce large errors in this case, since they all depend on the exact PDFs of the random inputs, whereas DDPCE can still work since it is a data-driven approach. To explore the effectiveness and advantage of DDPCE over the other two approaches, it is assumed that the input data for some random input variables have the complicated bimodal (BM) histogram shown in Figure 3, while the data for the rest come from typical distributions. For the convenience and effectiveness of the test, all input data are generated from PDFs, with the PDF of the BM distribution given in Eq. (17); it should be emphasized that in practice these PDFs are unknown and only the data exist.
We tested small (500) and large (10^{7}) numbers of input data to investigate the impact of the amount of data on UP accuracy. The results are shown in Tables 16–19, from which it is noticed that the results of DDPCE are generally very close to those of MCS when the number of sample points of the random input variables is large (10^{7}). When only 500 sample points are used, the errors are much larger; that is, the accuracy of DDPCE improves with the number of sample points. The reason is simple: with more sample points, the calculated statistical moments of the random input variables are more accurate, which undoubtedly increases the accuracy of UP. This observation agrees well with what has been reported in the work of Oladyshkin and Nowak [18]. Similar to Case 3, the estimated P_{f} falls outside the confidence interval for function 3, since this function is the most nonlinear and its random input distribution is the most irregular; this can be improved by increasing N_{s}. In general, the more nonlinear the function and the more irregular the random input distribution, the more difficult it is to achieve accurate UP results. These results further demonstrate the effectiveness and advantage of the enhanced DDPCE method for UP.
| Methods | MCS | DDPCE (10^{7}) | DDPCE (500) |
| --- | --- | --- | --- |
| e_{μ} (%) | – | 0.0066 | 1.4873 |
| e_{σ} (%) | – | 0.0196 | 0.0688 |
| e_{β1} (%) | – | 0.0150 | 0.0451 |
| e_{β2} (%) | – | 0.0052 | 3.2327 |
| P_{f} (×10^{−3}) | [1.4772, 1.5252] | 1.5069 | 0 |
| N_{s} | 10^{7} | 125 | 125 |
| Methods | MCS | DDPCE (10^{7}) | DDPCE (500) |
| --- | --- | --- | --- |
| e_{μ} (%) | – | 0.0132 | 0.4350 |
| e_{σ} (%) | – | 0.0109 | 0.1957 |
| e_{β1} (%) | – | 0.1159 | 13.4783 |
| e_{β2} (%) | – | 0.0131 | 0.8956 |
| P_{f} (×10^{−3}) | [6.4478, 6.5474] | 6.4703 | 8.000 |
| N_{s} | 10^{7} | 125 | 125 |
| Methods | MCS | DDPCE (10^{7}) | DDPCE (500) |
| --- | --- | --- | --- |
| e_{μ} (%) | – | 0.0327 | 0.6047 |
| e_{σ} (%) | – | 2.7503 | 5.3717 |
| e_{β1} (%) | – | 3.8373 | 9.5932 |
| e_{β2} (%) | – | 0.5563 | 1.3573 |
| P_{f} (×10^{−3}) | [7.7830, 7.8924] | 6.6667 | 6.0000 |
| N_{s} | 10^{7} | 1820 | 1820 |
| Methods | MCS | DDPCE (10^{7}) | DDPCE (500) |
| --- | --- | --- | --- |
| e_{μ} (%) | – | 0.0024 | 0.1925 |
| e_{σ} (%) | – | 0.0241 | 3.5156 |
| e_{β1} (%) | – | 0.4149 | 214.4537 |
| e_{β2} (%) | – | 0.0170 | 11.9346 |
| P_{f} (×10^{−3}) | [9.2650, 9.3842] | 9.2937 | 0 |
| N_{s} | 10^{7} | 10626 | 10626 |
To study the convergence of the enhanced DDPCE method, the errors (e) of the moments and P_{f} at different PCE orders obtained by the proposed method as well as by gPCE and GSPCE are shown in Figures 4–7, taking Function 2 as an example. Clearly, as with the two existing methods, the errors decrease significantly as the PCE order increases, exhibiting an approximately exponential convergence rate. The convergence in Case 1 (Askey scheme) is the fastest; generally, the more irregular the input distribution and the more nonlinear the function, the slower the convergence. In addition, for Case 3, since x_{1} and x_{2} follow the nontrivial distribution, the convergence of gPCE is very slow (see the left of Figure 6) due to the error induced by the transformation.
2.4. Summary
Overall, the three approaches produce comparably good results when the random inputs follow the Askey scheme; in this situation gPCE is the most mature and convenient to implement, since there is no need to construct the orthogonal polynomials. When the PDFs of the random inputs are known but do not follow the Askey scheme, large errors are induced by the transformation required for gPCE, while the other two PCE methods are comparable in accuracy and implementation complexity. It should also be pointed out that for DDPCE, the statistical moments (often of order 0–10) must be calculated first when constructing the one-dimensional polynomials; if a large gap exists between the high-order and low-order moments, matrix singularity may occur in solving the linear equations (Eq. (7)). In that case, GSPCE is preferable, especially when the function is highly nonlinear. When the PDF is unknown and cannot be obtained accurately, such as when the random inputs exist as raw data with a complicated cumulative histogram, only the DDPCE method still performs well, since it is data driven rather than probability-distribution driven, while large errors are produced if GSPCE or gPCE is employed. Nevertheless, more effort should be devoted to the numerical problems in constructing the one-dimensional orthogonal polynomials to make DDPCE more robust and applicable.
3. A sparse data-driven PCE method
The number of polynomial terms in the full PCE model grows rapidly with the dimension of the random inputs d and the order of the PCE model p (see Eq. (1)), resulting in significant growth of the computational cost. Therefore, in this section the full DDPCE method introduced in Section 2 is modified to reduce the computational cost. A sparse PCE approximation, which contains only a small number of polynomial terms compared to the classical full representation, is obtained by using the least angle regression (LAR) theory [23] together with a sequential sampling method. The original LAR method is used for variable selection, aiming to find the variables most important to a function response. In this work, LAR is extended to select from the full PCE model the polynomial terms Φ_{i}(x) that have the greatest impact on the model response.
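As a hedged illustration of this term selection, the sketch below uses scikit-learn's least angle regression (our choice of tooling; the chapter's own implementation may differ): the columns of the information matrix are the candidate polynomial terms evaluated at the sample points, and LAR ranks them by relevance to the response.

```python
import numpy as np
from sklearn.linear_model import Lars

def select_terms(Phi: np.ndarray, y: np.ndarray, n_terms: int):
    """Return indices of the polynomial terms most relevant to the response.
    Phi[m, i] = Phi_i(x_m); the constant column 0 is excluded from the
    selection and always kept."""
    lars = Lars(n_nonzero_coefs=n_terms)
    lars.fit(Phi[:, 1:], y)
    active = np.asarray(lars.active_) + 1   # shift for the dropped column 0
    return np.concatenate(([0], active))
```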
Although the computational cost and accuracy depend on the PCE order, how to determine a suitable order that balances accuracy and efficiency is beyond the scope of this chapter. In common situations, a PCE of order p = 2 or 3 can produce results in good agreement with MCS for output PDF estimation [24]. For more rigorous approaches that adaptively determine the order of the PCE model rather than specifying it in advance, readers may refer to references [25, 26].
3.1. Procedure of the sparse data-driven PCE method
A step-by-step description of the proposed method is given in detail below, with a side-by-side flowchart in Figure 8.
Step 1. Given the information of the random inputs (raw data or probabilistic distributions), specify the PCE order p, and then construct the full DDPCE model without computing the PCE coefficients.
Step 2. Generate the initial input sample points X = [x_{1},…,x_{m},…,x_{N}]^{T} according to the distributions of the random inputs or select the sample points from the given raw data, where x_{m} = [x_{m1},…,x_{md}]. Meanwhile, calculate the corresponding real function responses y = [y_{1},…,y_{m},…,y_{N}]^{T}.
X is standardized so that each input column has mean 0 and unit length, and the response y is centered to have mean 0:

$$ \tilde{x}_{mj} = \frac{x_{mj} - \bar{x}_j}{\left\| \mathbf{x}_j - \bar{x}_j \right\|}, \qquad \tilde{y}_m = y_m - \bar{y}, \quad m = 1, \ldots, N,\; j = 1, \ldots, d $$

where x̄_{j} and ȳ denote the sample means of the jth input column and of the response, respectively.
Step 3. Set the iteration number K = 0 and compute the values of all polynomial terms Φ_{i}(x) (i = 0, 1, …, P) of the full PCE model in Eq. (1) by substituting each input sample point x_{m} into them. One then obtains the information matrix

$$ \boldsymbol{\Phi} = \begin{bmatrix} \Phi_0(\mathbf{x}_1) & \cdots & \Phi_P(\mathbf{x}_1) \\ \vdots & & \vdots \\ \Phi_0(\mathbf{x}_N) & \cdots & \Phi_P(\mathbf{x}_N) \end{bmatrix} \in \mathbb{R}^{N \times (P+1)} $$
Step 4. The LAR algorithm is employed to automatically detect a number (often K + 1) of significant orthogonal polynomial terms among the first K + 1 terms Φ_{i}(x) (i = 0, 1, …, K) in Eq. (1); these are retained to construct a sparse candidate PCE model of smaller scale than the full PCE model. For details of the original LAR algorithm, readers may refer to reference [23].
Step 5. To save computational cost, the leave-one-out cross-validation method [27] is employed to evaluate the accuracy of the candidate sparse PCE model constructed above, expressed analytically as the leave-one-out error

$$ Err_{LOO} = \frac{1}{N} \sum_{j=1}^{N} \left( \frac{g(\mathbf{x}_j) - \hat{y}(\mathbf{x}_j)}{1 - h_j} \right)^{2} $$

where g(x_{j}) is the response value from the original model at the sample point x_{j}, ŷ(x_{j}) is the value predicted by the candidate sparse PCE model, and h_{j} is the jth diagonal term of the matrix Φ(ΦᵀΦ)⁻¹Φᵀ built from the retained polynomial terms.
Once the PCE coefficients are calculated, the predicted value of the candidate sparse PCE model at the sample point x_{j} is

$$ \hat{y}(\mathbf{x}_j) = \sum_{i \in \mathcal{A}} b_i \Phi_i(\mathbf{x}_j) $$

where 𝒜 denotes the index set of the retained polynomial terms.
To evaluate the accuracy more effectively, a relative error based on Err_{LOO} is employed:

$$ \varepsilon_{LOO} = \frac{Err_{LOO}}{\hat{\mathrm{Var}}[\mathbf{y}]} $$

where V̂ar[y] denotes the empirical variance of the response values y.
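The analytical leave-one-out computation can be sketched as follows (assuming the standard hat-matrix identity for linear least-squares fits; variable names are ours):

```python
import numpy as np

def loo_errors(Phi: np.ndarray, y: np.ndarray):
    """Analytical leave-one-out error of a least-squares PCE fit.
    Phi contains only the retained polynomial terms, shape (N, K+1)."""
    b, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    residuals = y - Phi @ b
    # h_j: diagonal of the hat matrix Phi (Phi^T Phi)^{-1} Phi^T
    H_diag = np.einsum('ij,ji->i', Phi, np.linalg.solve(Phi.T @ Phi, Phi.T))
    err_loo = np.mean((residuals / (1.0 - H_diag)) ** 2)
    eps_loo = err_loo / np.var(y, ddof=1)   # relative leave-one-out error
    return err_loo, eps_loo
```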
Step 6. Check the stopping criterion:

If the accuracy ε_{LOO} satisfies the target threshold ε, i.e., ε_{LOO} ≤ ε, the procedure stops: the PCE model obtained by LAR in Step 4 is taken as the final one, and all the sample points are used in the regression to calculate the PCE coefficients of the current sparse PCE model.

If ε_{LOO} > ε and K < P, set K = K + 1 and go to Step 4 to find another candidate sparse PCE model by LAR.

If ε_{LOO} > ε and K = P, generate new sample points X_{new} with the sequential sampling technique, calculate the corresponding responses y_{new}, append them to the existing data as X = [X; X_{new}] and y = [y; y_{new}], and go to Step 3 to find another candidate sparse PCE model.
In this work, if the PDF of a random input is known, a large number of sample points are generated beforehand according to the PDF to serve as the database; if the PDF is unknown, the raw data are taken as the database. Each sample point in the database has its own index. The initial sample points are selected from the database by randomly and uniformly generating their indices; these points are then removed from the database and the remainder re-indexed. Similarly, the sequential sample points are selected from the reduced database by randomly and uniformly generating indices. With this sampling strategy, the sample points are distributed as uniformly as possible, which helps improve the accuracy of the PCE coefficient calculation.
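A minimal sketch of this index-based sampling strategy (an assumed implementation, with a stand-in database):

```python
import numpy as np

rng = np.random.default_rng(42)
db = rng.normal(size=(100_000, 3))  # stand-in for the raw-data database

def draw_from_database(database, n):
    """Select n points uniformly at random (without replacement) from the
    database; return them together with the reduced, re-indexed database."""
    idx = rng.choice(len(database), size=n, replace=False)
    return database[idx], np.delete(database, idx, axis=0)

X, db = draw_from_database(db, 20)      # initial design
X_new, db = draw_from_database(db, 10)  # sequential enrichment
```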
Step 7. Based on the final sparse PCE model, the probabilistic properties of y can be obtained by running MCS or analytically.
3.2. Comparative study
In this section, the proposed sparse DDPCE method (shortened hereafter to sDDPCE) is applied to three mathematical examples to calculate the mean and variance of the output responses. The full DDPCE method (shortened hereafter to fDDPCE), which adopts a full PCE structure and one-stage sampling with a sample size equal to the number of PCE coefficients, is also applied to UP, and its results are compared to those of sDDPCE to demonstrate the latter's effectiveness and advantage.
The test examples of varying dimensions, including their input information, are shown in Table 20, together with the symbols used for the input distributions.
Another type of nontrivial distribution considered here is constructed by squaring the sample points drawn from some common distributions (see Case 3 of Function 2). The target accuracy ε of sDDPCE is set to 10^{−5}. Meanwhile, to ensure an effective comparison between sDDPCE and fDDPCE, the order of the PCE model is set identically (p = 3, 4, 5) for both methods. MCS with 10^{8} runs is conducted to benchmark the accuracy of both methods. In Case 1, the probability distributions of all the random input variables are unknown and only a number of raw data points (10^{5}) exist, a situation that cannot be handled by the traditional PCE methods, such as gPCE. Clearly, the more raw data, the more reliable the results. Considering that the main objective of this chapter is to investigate the effectiveness and capability of sDDPCE in reducing the computational cost, it is assumed that a large number of raw data (10^{5}) exist for the random inputs.
The results are listed in Tables 21–23, in which e_{m} and e_{v} denote, respectively, the errors (%) of the mean and variance relative to the results of MCS, N denotes the total number of sample points (function evaluations) used for PCE coefficient estimation during regression, and NA indicates that the result could not be obtained.
| | fDDPCE (p = 3) | fDDPCE (p = 4) | fDDPCE (p = 5) | sDDPCE (p = 3) | sDDPCE (p = 4) | sDDPCE (p = 5) |
| --- | --- | --- | --- | --- | --- | --- |
| e_{m} (%) | 0.321 | 0.099 | 0.044 | 0.330 | 0.201 | 0.181 |
| e_{v} (%) | 0.232 | 0.813 | 0.173 | 0.203 | 0.099 | 0.068 |
| N | 10 | 15 | 21 | 20 | 30 | 20 |
Some noteworthy observations can be made from the results. First, with a high PCE order (p = 5), the results of sDDPCE are generally accurate. Second, for the low-dimensional problem (d = 2, Function 1), the efficiency and accuracy of sDDPCE and fDDPCE are almost comparable; specifically, for the lower orders p = 3 and 4, sDDPCE is even less efficient. The interpretation is that, in addition to the regression process, sample points are also required during the construction of the sparse PCE model for sDDPCE, while for fDDPCE the sample points are used only during regression. Moreover, for e_{m} with p = 3 (lower order) of Function 1, fDDPCE is even more accurate with higher efficiency (see the underlined numbers). The reason may be that for low-dimensional problems with low-order PCE models, the total number of polynomial terms is already small, so the sparse structure of sDDPCE is of little help in reducing the number of sample points, since additional sample points are required to select the important polynomial terms; fDDPCE may then produce more accurate results since it retains more information. This is verified later. Third, with the increase of dimension (from d = 2 and 5 to d = 10), N increases significantly with p for fDDPCE, causing an ill-conditioned regression matrix, so some results (p = 4 and 5) cannot even be obtained by fDDPCE; in particular, for Function 3, whose dimension is high (d = 10), fDDPCE cannot produce results for any order p. For sDDPCE, however, no remarkable increase in N is observed, since it adopts a sparse PCE model that adaptively removes the insignificant polynomial terms, while its accuracy clearly improves, exhibiting small errors relative to MCS. When p = 5, only 13 polynomial terms are selected from the 3003 total terms for Function 3, while for Function 1, 4 are selected from the 21 total terms. Therefore, the larger the dimension, the more obvious the advantage of sDDPCE over fDDPCE in efficiency.
In Case 2, the PDFs of all the random inputs are known and assumed to follow common distributions; this is a general case that can be solved by the traditional probability-distribution-based PCE methods. The results are shown in Tables 24–26. Generally, with a high PCE order (p = 5), the results of sDDPCE are accurate, demonstrating its effectiveness in dealing with random inputs with known PDFs. For the low-dimensional problem (Function 1), sDDPCE is generally more accurate with a similar N to fDDPCE. However, for the lower order (p = 3) of Function 1, fDDPCE is even more accurate than sDDPCE, and with a much smaller N; this observation is consistent with Case 1, and the reason is again that additional sample points are required for selecting the important polynomial terms. With the increase of dimension, N increases significantly with p for fDDPCE, while for sDDPCE remarkable improvement in accuracy is achieved without a remarkable increase in N. These results agree well with what was observed in Case 1.
| | fDDPCE (p = 3) | fDDPCE (p = 4) | fDDPCE (p = 5) | sDDPCE (p = 3) | sDDPCE (p = 4) | sDDPCE (p = 5) |
| --- | --- | --- | --- | --- | --- | --- |
| e_{m} (%) | 0.083 | 0.044 | 0.060 | 0.710 | 0.010 | 0.100 |
| e_{v} (%) | 0.468 | 0.758 | 0.211 | 0.975 | 0.061 | 0.061 |
| N | 10 | 15 | 21 | 30 | 15 | 20 |
In Case 3, the PDFs of all the random inputs are known, but some of them follow nontrivial distributions. In this case, the traditional gPCE method cannot work well, since large errors would be induced in transforming such nontrivial distributions to ones in the Askey scheme. The results are shown in Tables 27–29 and agree well with what was observed in Cases 1 and 2. The proposed sDDPCE method significantly reduces the number of sample points while maintaining high accuracy, and the higher the dimension, the more advantageous its adaptive sparse structure becomes; in this case, only 11 polynomial terms are selected from the 3003 total terms for d = 10 with sDDPCE. Moreover, sDDPCE produces accurate and efficient results for nontrivially distributed random inputs.
| | fDDPCE (p = 3) | fDDPCE (p = 4) | fDDPCE (p = 5) | sDDPCE (p = 3) | sDDPCE (p = 4) | sDDPCE (p = 5) |
| --- | --- | --- | --- | --- | --- | --- |
| e_{m} (%) | 1.210 | 0.854 | 0.302 | 1.366 | 1.044 | 0.161 |
| e_{v} (%) | 2.321 | 0.748 | 0.815 | 0.805 | 0.161 | 0.000 |
| N | 10 | 15 | 21 | 10 | 10 | 10 |
To verify the conjecture that for low-dimensional problems with low-order PCE models fDDPCE may produce more accurate results than sDDPCE because it retains more information, another test is conducted for Function 1 with the lower order p = 2 for all three cases; the results are shown in Table 30. As expected, fDDPCE is clearly more accurate than sDDPCE while generally using fewer sample points. For Function 1 with p = 2, the total number of polynomial terms is only 6; with sDDPCE, only the last polynomial term is removed, while additional points are required to remove the insignificant polynomials, so the sparse scheme has no obvious benefit under this circumstance. It is therefore concluded that the developed sDDPCE method is particularly applicable to high-dimensional problems, especially those requiring a high-order PCE model.
3.3. Summary
The developed sDDPCE reduces the number of polynomial terms in the PCE model and thus the computational cost. Generally, the larger the random input dimension, the more obvious the advantage of sDDPCE over fDDPCE in efficiency; sDDPCE is much more efficient than fDDPCE in solving high-dimensional problems, especially those requiring a high-order PCE model.
4. Sparse DDPCE-based robust optimization using trust region
In Section 3, to reduce the computational cost of DDPCE, a sparse DDPCE method was developed by removing insignificant polynomial terms from the full PCE model, thus decreasing the number of samples needed for the regression that computes the PCE coefficients. However, when sparse DDPCE is applied to robust optimization, the result is conventionally a triple-loop process (see Figure 9): the inner loop identifies the insignificant polynomial terms of the PCE model (the dashed box), the middle loop performs UP, and the outer loop searches for the optimum, which is clearly still very time-consuming for problems with expensive simulation models.
As mentioned in Section 3, during each optimization iteration, although the sample points required for regression during UP of sDDPCE are greatly reduced, a certain number of additional sample points is required by the inner loop to identify the insignificant polynomial terms. If almost the same sparse polynomial terms are retained at successive iteration design points, the inner loop can clearly be avoided, saving computational cost. To exploit this, the trust region technique widely used in nonlinear optimization is extended in this section. During optimization, a trust region is dynamically defined; if the updated design point lies in the current trust region, the insignificant terms of its PCE model are considered unchanged from those of the last design point, i.e., the inner loop is eliminated at the updated design point. Meanwhile, to further save computational cost, the sample points lying in the overlapping area of two adjacent sampling regions are reused in the PCE coefficient regression for the updated design point. The proposed robust optimization procedure, employing sparse DDPCE in conjunction with the trust region scenario, is applied to several robust optimization examples, and the results are compared to those obtained without the trust region method to demonstrate its effectiveness and advantage.
4.1. The trust region scenario
The trust region method is a traditional approach that has been widely used in nonlinear numerical optimization [28]. Its basic idea is that, in the trust region of the current iteration design point, a second-order Taylor expansion is used to approximate the original objective function; if the accuracy of the current second-order Taylor expansion is satisfactory, the size of the trust region is increased to speed up convergence, and if not, it is reduced to improve the approximation accuracy. To reduce the computational cost of design optimization, the idea of the trust region technique has been extended and applied to reliability-based wing design optimization [29], multifidelity wing aerostructural optimization [30], and multifidelity surrogate-based wing optimization [31], and it is widely regarded as an efficient strategy in design optimization. For example, when the trust region technique is applied to metamodel-based design optimization, the sample points are sequentially generated in the trust region during optimization, and the radius of the trust region is dynamically adjusted based on the accuracy of the metamodel in the local region.
4.2. Robust design using sparse data-driven PCE and trust region
The trust region scenario is extended here to reduce the computational cost of sDDPCE-based robust optimization. The basic idea is that the radius of a trust region is determined by the distance between two successive design points and the variation of the corresponding objective function values. If the updated design point lies within the current trust region, the retained polynomial terms of its PCE model are considered unchanged and the inner term-selection loop is skipped. The detailed procedure is as follows.
Step 0: Set the iteration number k = 1 and specify the initial starting design point.
Step 1: After the kth optimization iteration, define/update the trust region at the currently obtained design point, with its size determined by the distance between the two most recent design points and by the variation of the corresponding objective function values.
Step 2: During the (k + 1)th optimization iteration, a new design point is obtained; compute the design change ΔX and the objective change ΔY between it and the previous design point.
Step 3: If ΔX ≤ r_{2} and ΔY ≤ r_{2} are both satisfied, the updated design point is considered to lie within the current trust region; go to Step 4. Otherwise, go to Step 5.
Step 4: The retained polynomial terms Φ_{i}(x) at the updated design point are kept the same as those at the previous design point, i.e., the inner loop is skipped, and only the PCE coefficients are recalculated by regression, reusing the sample points lying in the overlapping area of the two successive sampling regions; go to Step 6.
Step 5: The inner loop is conducted at the updated design point to re-identify the significant polynomial terms and calculate the PCE coefficients, as in the original sDDPCE method.
Step 6: Set k = k + 1 and, based on the results of UP, search for the next updated design point.
The above procedure continues until the convergence criterion is satisfied. Figure 10 shows the case in which sample points from the previous optimization iteration are reused across two successive iterations: two points located in the overlapping area of the two successive sampling regions are reused in the next iteration for the regression that identifies the significant polynomials and calculates the PCE coefficients. In this way, the computational cost is further reduced.
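A schematic sketch of the trust-region bookkeeping described above (thresholds and helper callables are placeholders, not the chapter's exact formulas):

```python
import numpy as np

def robust_iteration(x_prev, f_prev, x_new, f_new, retained_terms,
                     r_x, r_y, select_terms_fn, refit_fn):
    """One iteration of the trust-region check in sDDPCE-based robust
    optimization: skip the inner term-selection loop when the new design
    point stays inside the current trust region."""
    dX = np.linalg.norm(x_new - x_prev)
    dY = abs(f_new - f_prev)
    if dX <= r_x and dY <= r_y:
        # Inside the trust region: reuse the retained polynomial terms and
        # only recompute the PCE coefficients (reusing overlapping samples).
        coeffs = refit_fn(x_new, retained_terms)
    else:
        # Outside: rerun the inner loop to re-identify significant terms.
        retained_terms = select_terms_fn(x_new)
        coeffs = refit_fn(x_new, retained_terms)
    return retained_terms, coeffs
```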
4.3. Comparative studies
The first example is the Ackley function:

$$ f(\mathbf{x}) = -k \exp\!\left( -0.2 \sqrt{\tfrac{1}{d} \textstyle\sum_{i=1}^{d} x_i^2} \right) - \exp\!\left( \tfrac{1}{d} \textstyle\sum_{i=1}^{d} \cos(2\pi x_i) \right) + k + e \tag{26} $$
The robust design optimization of this example minimizes a weighted sum of the mean and standard deviation of the objective, F = ζ_{1}μ_{f} + ζ_{2}σ_{f}, over the mean values of the design variables.
All the design variables are considered to follow uniform distributions with a variation of ±0.2 around their mean values, and k in Eq. (26) is set as k = 20. In this study, the fmincon function in Matlab is used for optimization, and the weights are set as ζ_{1} = 0.5 and ζ_{2} = 0.5. The optimal design variables obtained by sDDPCE-based robust design with and without the trust region scenario, as well as by deterministic design without considering any uncertainties, are then substituted into Eq. (26) and evaluated by MCS (with 10^{6} runs) to calculate the mean μ_{f} and standard deviation σ_{f} of the objective function.
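For reference, the robust objective can be estimated by MCS as sketched below (assuming the standard Ackley form of Eq. (26) with k = 20 and the ±0.2 uniform variation stated above; the sample size is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def ackley(x, k=20.0):
    """Standard Ackley function, vectorized over rows of x (assumed form)."""
    s1 = np.sqrt(np.mean(x**2, axis=-1))
    s2 = np.mean(np.cos(2 * np.pi * x), axis=-1)
    return -k * np.exp(-0.2 * s1) - np.exp(s2) + k + np.e

def robust_objective(mu_x, n=10**5, z1=0.5, z2=0.5):
    """F = z1 * mu_f + z2 * sigma_f under U(mu - 0.2, mu + 0.2) inputs."""
    X = rng.uniform(mu_x - 0.2, mu_x + 0.2, size=(n, len(mu_x)))
    f = ackley(X)
    return z1 * f.mean() + z2 * f.std(ddof=1)

print(robust_objective(np.array([0.5, 0.5])))
```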
The results are shown in Table 31, from which it is found that, compared to the robust optimization without the trust region scenario (denoted "without"), the performance results (μ_{f}, σ_{f}, and F) of the robust optimization with the trust region scenario (denoted "with") are comparable, while the number of function calls (denoted Funcall) is clearly reduced. The decrease in computational cost is attributed to the application of the trust region scenario and the reuse of sample points. Meanwhile, the optimal designs of the two robust designs are both less sensitive to uncertainties (smaller σ_{f}) than the result of the deterministic design (denoted DD). These results demonstrate the effectiveness and advantage of the proposed method.
The second example is the robust design optimization of an automobile torque arm, shown in Figure 11.
In this problem, the four geometrical parameters (a, d_{1}, d_{2}, and l) are considered as design variables, while the yield strength S_{y}, Young's modulus E, and the applied force Q are deterministic parameters. The deterministic design optimization model is formulated as Eq. (27), in which the objective function f represents the volume of the arm, the first constraint g_{1} denotes yielding failure at section A-A, and the second constraint g_{2} denotes buckling failure at the two connecting rods.
The distribution parameters of the four design variables and design parameters are shown in Table 32.
The corresponding robust design optimization model is formulated as Eq. (28).
As mentioned above, the PCE model is constructed only for the objective function, and the results are shown in Table 33. The robust optimization designs with and without the trust region scenario yield comparable results, while the number of (objective) function calls required by the design with the trust region is evidently smaller. The deterministic design cannot even obtain a feasible optimal solution, with both constraints violated (g > 0), since it does not consider uncertainties during design. These results further demonstrate the effectiveness and advantage of the proposed method.
4.4. Summary
The employment of the trust region in sDDPCE-based robust optimization can evidently reduce the computational cost. However, the determination of the trust region in this chapter is still rather subjective, and a more rigorous method should be explored. In this section, as well as in Section 3, the sparse PCE and trust region scenarios are applied only to DDPCE to save computational cost; however, the methods proposed here are also applicable to other PCE approaches, such as gPCE and GSPCE.
In this chapter, the latest advances in PCE theory and approaches for probabilistic UP have been presented comprehensively and in detail. This does not, however, preclude the application of PCE to nonprobabilistic UP for addressing epistemic uncertainties. Sudret and Schöbi have proposed a two-level metamodeling approach using non-intrusive sparse PCE to surrogate the exact computational model and facilitate uncertainty quantification, in which the input variables are modeled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty [32]. Fuzzy uncertainty propagation in composites has been implemented using Gram-Schmidt polynomial chaos expansion, with the parameter uncertainties represented by fuzzy membership functions [5]. A general framework has been proposed for dynamical uncertain systems to deal with both aleatory and epistemic uncertainty using PCE, where the uncertain parameters are described through random variables and/or fuzzy variables [33]. A mixed UP approach has also been proposed, in which the inner-loop PDFs are calculated using PCE and the outer-loop bounds are computed with optimization-based interval estimation [34]. PCE has further been applied to solving Bayesian inverse problems as a "surrogate posterior"; however, it has been indicated that the accuracy cannot always be ensured, since a sufficiently accurate PCE for this problem requires a high order, making PCE impractical compared to directly sampling the posterior [35].