Open access peer-reviewed chapter

Bayesian Uncertainty Quantification for Functional Response

Written By

Xiao Guo, Yang He, Binbin Zhu, Yang Yang, Ke Deng, Ruopeng Liu and Chunlin Ji

Reviewed: April 21st, 2017 Published: July 5th, 2017

DOI: 10.5772/intechopen.69385



This chapter addresses the stochastic modeling of functional response, a major concern in engineering practice. We first introduce a general framework and several conventional models for functional data, including the functional linear model, penalized regression splines, and the spatial temporal model. In engineering practice, however, naive mathematical modeling of functional response may fail because it does not express the underlying physical mechanism. We propose a series of quasiphysical models to handle the functional response. A motivating example of metamaterial design is discussed in depth to demonstrate the idea of quasiphysical models. In real applications, various uncertainties have to be taken into account, such as the permittivity or permeability of the metamaterial substrate. For the propagation of uncertainty, simulation‐based methods are discussed. A Bayesian framework is presented to deal with model calibration in the case of functional response. Experimental results illustrate the efficiency of the proposed method.


Keywords

  • functional response
  • metamodel
  • Bayesian uncertainty quantification
  • model calibration
  • metamaterial design

1. Introduction

In recent years, computer experiments have become widely adopted in both engineering applications and scientific research to replace or support their physical counterparts. Functional response is the mathematical representation of system behaviors, where the data are collected over an interval of some input indices. With the advance of modern simulation and experiment technology, accessing functional data has become easier. Functional response can take the form of one‐dimensional data such as a curve or higher dimensional data such as an image, which can provide better physical insight. However, even with the advancement of computer technology, full simulation based on a finite element method or a finite difference method still takes an extensive amount of time. To reduce the simulation time, historical simulated data are usually used to build a cheaper metamodel [1], in which the functional response at an unobserved input can be predicted by either regression or interpolation. The simplest representation of functional data is basis expansion, where polynomials are used to formulate the input‐output relation [2]. For frequency response analysis, Fourier series are usually applied in place of the polynomials [3]. Both methods are categorized as linear regression, which requires parameter estimation. Nonparametric approaches have also been used to analyze functional data in many scientific and engineering fields [4]. The purpose of building these models is to provide the "best" estimate from the given data, while providing a statistical scheme for prediction at unobserved inputs.

In this chapter, we provide a more sophisticated approach to analyzing functional responses naturally, which may suggest insightful conclusions that are not apparent otherwise. We introduce one motivating example of functional response in computer experiments. In the design of metamaterials, the goal is to establish a relationship between the physical dimensions of a unit cell and its electromagnetic (EM) frequency response [5]. In practice, designers usually evaluate the EM properties of a metamaterial microstructure via full‐wave simulation data, and corresponding adjustments are constantly made to the design (dielectric architecture, microstructure topologies, etc.) until the desired performance is achieved. Figure 1 depicts an example of unit cell design whose response phases differ over a frequency span as the geometric parameter varies. Naïve regression‐based metamodels fail on such a problem because they require building a regression for each output, which can be very expensive and leaves the correlation between different frequencies unexploited. Moreover, when resonance is involved, the functional data cannot be well described by polynomials or splines. This can be overcome by quasiphysical models, which exploit the essential physical mechanism. In addition, a more general two‐stage modeling scheme can be applied: in Stage I, we approximate the response with rational functions, which allows us to decompose the continuous response into a few discrete parameters; Stage II consists of a nonparametric metamodel that captures the input dependence.

Figure 1.

Example of functional response in metamaterial unit cell design: phase shift for different physical dimension inputs.


2. General models for functional response

Various statistical models, including the spatial temporal model, functional linear model, and penalized regression splines, have been widely discussed in the past. Most models share a unified expression that sums a mean function μ(f,x) and a random term ε(f,x), written as

y(f,x) = μ(f,x) + ε(f,x), (1)

where y is the response, x = {x1, …, xp} is the vector of input variables with dimensionality p, and f represents some index, which could be the frequency of an electromagnetic wave or the time stamp of a time series. Despite the shared form, these models differ in how the above terms are estimated.

2.1. Functional linear model

To model the functional response, the primary task is to estimate the mean function μ(f,x), on which a certain form is often imposed. As a generalization of linear regression models, the functional linear model takes the form

μ(f,x) = β0(f) + x^T β(f)
with basis functions β = {β0(f), β(f)} (β(f) = (β1(f), …, βp(f))^T has the same dimensionality as the input variable), which incorporate the index dependence and can be seen as an extension of the parameters of linear regression models. Therefore, by substituting the mean function into Eq. (1), we obtain the resulting output

y(f,x) = β0(f) + x^T β(f) + ε(f,x).
Given a certain index f, this model reduces to an ordinary linear model. Furthermore, it contains an underlying index‐varying effect of x, where β is assumed to be a smooth function of f. Thus, the model is referred to as a functional linear model. To estimate the coefficients β0(f) and β(f), it is straightforward to apply the least squares method to the data collected at f. However, smoothing over f componentwise, using penalized splines, can enhance the efficiency of the estimates [6].
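The pointwise least squares estimation above can be sketched in a few lines: at each index f, fit an ordinary linear model of y on x. All data below are synthetic and the dimensions (n, p, J) are illustrative assumptions, not the chapter's setup.

```python
import numpy as np

# Pointwise least-squares fit of a functional linear model:
# y(f, x) = beta0(f) + x^T beta(f) + noise, estimated independently at each f.

rng = np.random.default_rng(0)
n, p, J = 50, 2, 20                       # samples, input dim, number of indices f
X = rng.uniform(-1, 1, size=(n, p))
f_grid = np.linspace(0.0, 1.0, J)

# True index-varying coefficients (smooth in f)
beta0_true = np.sin(2 * np.pi * f_grid)
beta_true = np.stack([1 + f_grid, 2 - f_grid])     # shape (p, J)

Y = beta0_true + X @ beta_true + 0.01 * rng.standard_normal((n, J))

# Design matrix with intercept; one least-squares solve covers every f at once
Xd = np.hstack([np.ones((n, 1)), X])               # (n, p+1)
coef, *_ = np.linalg.lstsq(Xd, Y, rcond=None)      # (p+1, J): row 0 = beta0(f)
beta0_hat = coef[0]
```

Smoothing the resulting rows of `coef` over f (e.g., by the penalized splines described next) would then improve efficiency.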

Penalized regression splines estimate the smooth basis functions in functional linear models by minimizing a penalized least squares criterion. They are widely adopted in modeling functional responses owing to their easy implementation and low computational cost [6].

Note that the primary purpose of applying penalized regression splines is to estimate the basis functions β. Suppose we have n data {(fi, yi), i = 1, …, n}, and each basis function is a random sample from

βj(f) = m(f) + δj (4)
with j = 1, …, p. Here, m(f) is an unspecified smooth mean function of β, and δj is a zero‐mean random error. In practice, m(f) can be estimated by a series of truncated power spline bases 1, f, f^2, …, f^p, (f − κ1)+^p, …, (f − κK)+^p, where {κ1, …, κK} is a given set of knots and a+ denotes the positive part of a, i.e., a+ = (a + |a|)/2. Therefore, the model in Eq. (4) can be approximately written as

βj(f) ≈ α0 + α1 f + ⋯ + αp f^p + Σ_{k=1}^{K} α_{p+k} (f − κk)+^p,
where the αj are coefficients whose values can be obtained via least squares. Generally, overfitting in the approximation of m(f) may occur, which leads to high variance and poor prediction. The trade‐off between modeling bias and overfitting therefore requires careful consideration. Variable selection procedures could in principle be applied to the linear regression model, but when the number of involved basis functions is very large, variable selection encounters great computational difficulty [7]. Alternatively, the αj are estimated by minimizing the penalized least squares function

Σ_{i=1}^{n} {βj(fi) − m(fi)}^2 + g Σ_{k=1}^{K} α_{p+k}^2,
where g is a tuning parameter determined by cross‐validation or generalized cross‐validation [8].

Smoothing with penalized spline estimates also requires selecting the number of knots and the order p, which may vary from case to case. Fortunately, the estimates are not sensitive to these choices, and cubic splines are suggested in most cases [6], which ensure continuous second‐order and piecewise continuous third‐order derivatives at the knots. Meanwhile, knots are usually placed evenly over the interval of f, or κk is taken to be the 100k/(K+1)th sample percentile when f is unevenly distributed.
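The penalized spline recipe above can be sketched directly: build the truncated power basis, place knots at sample percentiles, and shrink only the knot coefficients with a ridge penalty g. The target function, noise level, and tuning values are illustrative assumptions.

```python
import numpy as np

# Penalized regression spline with the truncated power basis
# {1, f, ..., f^d, (f-k1)_+^d, ..., (f-kK)_+^d}; knot coefficients are
# shrunk by a ridge penalty g, as in the penalized least squares criterion.

rng = np.random.default_rng(1)
n, d, K, g = 200, 3, 8, 1e-3                  # cubic splines (d = 3), 8 knots
f = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * f) + 0.1 * rng.standard_normal(n)

# Percentile-based knots, 100k/(K+1)th sample percentiles
knots = np.quantile(f, np.arange(1, K + 1) / (K + 1))

# Basis matrix: polynomial part + truncated power part
B_poly = f[:, None] ** np.arange(d + 1)                    # (n, d+1)
B_knot = np.clip(f[:, None] - knots[None, :], 0, None) ** d
B = np.hstack([B_poly, B_knot])

# Penalize only the knot coefficients (standard penalized-spline convention)
P = np.diag(np.r_[np.zeros(d + 1), np.ones(K)])
alpha = np.linalg.solve(B.T @ B + g * P, B.T @ y)
fit = B @ alpha

rmse = np.sqrt(np.mean((fit - np.sin(2 * np.pi * f)) ** 2))
```

The tuning parameter g would in practice be chosen by cross-validation or generalized cross-validation, as noted above.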

2.2. Spatial temporal model

The spatial temporal model is defined as the sum of a mean function μ(f,x) and a term ε(f,x) from a zero‐mean Gaussian random field. It is a generalization of the Gaussian process (GP) model, which has been widely adopted for spatial statistics problems [4, 9].

Both of the preceding models aim to represent the functional data through their mean functions. In contrast, the spatial temporal model utilizes the normality of the residuals; thus, the output can be seen as a realization of a Gaussian random field. We assume a mean function μ(f,x) of the form

μ(f,x) = h(x)^T β(f),
where h(x) and β(f) are two series of basis functions of the input variable and the index variable, respectively. Such an assumption leads to the spatial temporal model

y(f,x) = h(x)^T β(f) + ε(f,x),
where ε(f,x) is a zero‐mean Gaussian random field whose covariance function follows the form

Cov{ε(f,xi), ε(f,xj)} = K(κf; |xi − xj|),
where K(κf) denotes the covariance matrix, whose (i,j) element K(κf;|xixj|) measures the covariance between xi and xj. κf is an f‐dependent hyperparameter that controls the properties of the covariance.

Suppose we have obtained observations y(fj,xi) at input sites (fj,xi) with j=1,…,J and i=1,…,n, where J and n are the numbers of indices and input settings, respectively. β(f) and κf can be estimated following the hyperparameter estimation procedure of a standard Gaussian process model [4]. The spatial temporal model also allows predictions at unobserved sites f* and x*. The procedure for prediction is summarized in the following algorithm.

Step 1: For j=1,…,J, calculate the best estimates β̂(fj) and κ̂fj by maximizing the log likelihood, given by

log p(y|X) = −(1/2)(y − h(x)β)^T K^{-1}(κf)(y − h(x)β) − (1/2) log|K(κf)| − (N/2) log 2π

Step 2: According to the data {xi, y(fj,xi)}, obtain the estimates β̂(fj) and κ̂fj. Calculate the prediction y(fj, x*) at f = fj with the best linear unbiased predictor [2]

ŷ(fj, x*) = h(x*)^T β̂(fj) + k(x*)^T K^{-1}(κ̂fj)(y − H β̂(fj)),

where k(x*) is the vector of covariances between x* and the observed inputs, and H is the matrix with rows h(xi)^T.
Step 3: For a new index f* ∈ [f1, fJ], find the neighboring indices fj ≤ f* ≤ fj+1 and use linear interpolation to predict y(f*, x*):

y(f*, x*) = y(fj, x*) + [(f* − fj)/(fj+1 − fj)] [y(fj+1, x*) − y(fj, x*)].
2.3. Quasiphysical model

Modeling metamaterial frequency response, in particular resonance behavior, is often quite challenging and cannot be achieved with the models introduced above. This is because those models are built upon linear regression and simply encode the index dependence in smooth index‐dependent basis functions. When distinct resonance peaks exist, a common scenario in radio frequency engineering, fitting these smooth basis functions often leads to poor accuracy [10]. To deal with such problems, we exploit the underlying physical mechanism and establish a quasiphysical modeling method. For example, the mean function μ(f,x) is represented by the combination of a link function L(f,·), which follows a certain physical mechanism, and a set of low‐dimensional scaling variables φ(x), i.e.,

μ(f,x) = L(f, φ(x)). (10)

Then, we have y(f,x) = L(f, φ(x)) + ε(f,x). Instead of finding a single function of both the frequency index and the input variables, the functional response is separated into two parts: a physically meaningful link function L(f,·) contains the functional features, whereas the other part captures the relationship between the input variables x and the scaling variables φ(x). This separation often leads to dimension reduction in statistical models. In the example of metamaterial design, the functional response is represented by the effective permittivity of a unit cell, which can be well fitted by a Drude‐Lorentz form [11],

ε_eff(f) = εa + Fe f0^2 / (f0^2 − f^2 − i γe f), (11)
where φ(x) ≡ {εa, Fe, f0, γe} is the intermediate variable, which can be estimated by fitting the functional response with Eq. (11); meanwhile, φ(x) is a function of the input variables. Here, we choose the Gaussian process (GP) regression model to interpolate a new φ* given previously obtained pairs {x, φ(x)} and a new x*. Once the new φ* is obtained, it can be used to evaluate the new functional response by Eq. (11). Figure 2 displays a smooth surface of f0 and an example of predicted effective permittivity.
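The quasiphysical scheme above can be sketched end to end: GP regression interpolates a scaling variable at a new input, and the link function rebuilds the functional response. The resonance-shaped link, the GP kernel settings, and the training data below are illustrative assumptions, not the chapter's fitted model.

```python
import numpy as np

# Quasiphysical sketch: interpolate the scaling variable phi(x) = f0(x) with a
# GP, then evaluate the link function L(f, phi) at the interpolated value.

def link(f, f0, gamma):
    # toy resonance-shaped link function L(f, phi), phi = (f0, gamma)
    return 1.0 / ((f0**2 - f**2) ** 2 + (gamma * f) ** 2)

def gp_predict(X, y, Xs, ell=0.3, s2=1e-8):
    # zero-mean GP regression with a squared-exponential kernel (1-D inputs)
    def k(A, B):
        return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ell**2)
    K = k(X, X) + s2 * np.eye(len(X))
    return k(Xs, X) @ np.linalg.solve(K, y)

# Stage-I results: scaling variable f0(x) known at a few design inputs x
x_train = np.linspace(0.0, 1.0, 6)
f0_train = 1.0 + 0.5 * x_train                 # smooth dependence on x

# Center the targets so the zero-mean GP does not shrink toward 0
mu = f0_train.mean()
x_new = np.array([0.45])
f0_new = mu + gp_predict(x_train, f0_train - mu, x_new)[0]

# Recover the predicted functional response over the frequency span
f = np.linspace(0.5, 2.5, 100)
response = link(f, f0_new, gamma=0.1)
```

The same pattern applies to each component of φ(x): one GP surface per scaling variable, then a single evaluation of the link function.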

Figure 2.

Example of modeling functional response assisted by the physical model: (a) Gaussian process surface of a scaling variable; (b) predicted functional response.

The aforementioned Drude‐Lorentz model allows high accuracy only when the metamaterial system works in a static or quasistatic regime, such that the metamaterial architecture can be seen as a single piece of effective medium. For complex metamaterial systems, however, the working regime is beyond static, and the approximation accuracy of such a model deteriorates severely. We note that EM waves propagate through each layer of metamaterial like currents on transmission lines. This perspective transforms the EM field problem into a circuit problem; hence, the functional response over a continuous spectrum is reduced to a discrete LRC (inductor, resistor, and capacitor) network. We propose a two‐stage modeling scheme. In Stage I, a vector fitting (VF) technique is adopted to provide an accurate rational approximation to frequency responses with distinct resonances, and its results are easily interpreted as an equivalent circuit. The approximation accuracy for a frequency response and its corresponding equivalent circuit are shown in parts (a) and (b) of Figure 3, respectively. In Stage II, the empirical circuit elements are taken as the target responses in statistical models that establish the input‐output mapping by regression, which also allows predictions at unobserved input sites. Part (c) of Figure 3 presents the GP surface of a circuit parameter over two input variables. A graphical display of this two‐stage approach is given in Figure 4. Prediction of the functional response at an unobserved input is implemented by first predicting the circuit elements and then recovering the response.
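Vector fitting itself is an iterative pole-relocation algorithm; a much simpler stand-in that conveys the spirit of Stage I is Levy's linearized rational least squares, which fits a ratio of polynomials N(f)/D(f) by solving y·D(f) − N(f) ≈ 0 linearly. The response below is synthetic and the polynomial orders are assumptions; this is an illustration, not the VF method itself.

```python
import numpy as np

# Stage-I stand-in: fit a rational function N(f)/D(f) to a sampled response
# by Levy's linearization (linear least squares in all coefficients).

f = np.linspace(0.1, 2.0, 80)
y = (1.0 + 0.5 * f) / (1.0 + 0.2 * f + 1.5 * f**2)   # "true" rational response

nN, nD = 1, 2     # numerator / denominator orders (assumed known here)
# Unknowns: a_0..a_nN (numerator), b_1..b_nD (denominator, b_0 fixed to 1).
# Rearranging y*(1 + b1 f + b2 f^2) = a0 + a1 f gives linear equations in
# (a0, a1, b1, b2) with columns [1, f, -y f, -y f^2] and right-hand side y.
A = np.hstack([f[:, None] ** np.arange(nN + 1),
               -y[:, None] * f[:, None] ** np.arange(1, nD + 1)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

a, b = coef[: nN + 1], np.r_[1.0, coef[nN + 1:]]
y_fit = np.polyval(a[::-1], f) / np.polyval(b[::-1], f)
err = np.max(np.abs(y_fit - y))
```

In the two-stage scheme, the recovered coefficients (or the equivalent-circuit elements derived from the fitted poles and residues) become the low-dimensional targets for the Stage-II GP regression.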

Figure 3.

Example of modeling frequency response via (a) vector fitting, (b) equivalent circuit, and (c) GP regression.

Figure 4.

Flowchart of the two‐stage modeling approach.


3. Uncertainty quantification

In engineering modeling and design, uncertainty is ubiquitous, owing to the inability to specify a "true" input or model parameter. Quantifying the uncertainty of the model, e.g., in the form of predictive confidence intervals, is of great importance for decision making and advanced design [1]. In general, uncertainty quantification can be divided into two major types of problem: forward uncertainty propagation and inverse assessment of model and parameter uncertainties [12].

The full relationship between the experimental output z(f,x) and the simulation output y(f,x) can be expressed as

z(f,x) = L(f, φ(x,θ)) + η(x) + ε(f,x) + ε′(f), (12)
where φ(x,θ) denotes the GP regression model, which depends on the input variable x and several unobservable calibration parameters θ, and η(x) is the additive discrepancy function (or model bias function), which does not depend on the calibration parameters. To reduce the complexity of the analysis, we assume that ε(f,x) is a zero‐mean Gaussian random field independent of x. We can then merge ε(f,x) and ε′(f) into a single term, denoted ε(f). Thus, the model becomes

z(f,x) = L(f, φ(x,θ)) + η(x) + ε(f), (13)
where ε(f) is assumed to be zero‐mean Gaussian noise with known variance λ, i.e., ε(fi) ∼ N(0, λ).

In forward problems, with an uncertain input x and given model parameters θ, the model output y and other quantities of interest are to be calculated. In contrast, the inverse problem is to estimate the values of the model parameters θ such that the model's output fits the experimental data as accurately as possible (or satisfies some precision requirement).

3.1. Metamodel‐based uncertainty propagation

The main problem in analyzing uncertainty propagation is obtaining an analytical representation of the metamodel for arbitrary (uncertain) input values. Given the input's probability density, the Bayesian framework provides a probability measure of the random inputs on the output field. The purpose of this operation is to evaluate the influence of an uncertain input on the model response.

Assume that the Gaussian process regression model is trained on a dataset with inputs X = {x1, …, xN} and the corresponding intermediate variables Ψ = (φ(x1), …, φ(xN))^T obtained by the fitting algorithm in Stage I. The GP hyperparameters learned from the data are denoted by γ. The uncertainty of the input variable x* is captured by a probability density function p(x*), for instance, a Gaussian density x* ∼ N(μx, Σx).
At a deterministic test input x*, the predictive distribution of the function, p(φ*|x*, X, Ψ, γ) (for simplicity, we use φ* to denote φ(x*), the output of the metamodel), is Gaussian with mean

m(x*) = Σ_{i=1}^{N} ζi C(x*, xi)

and variance

σ^2(x*) = C(x*, x*) − c(x*)^T [C + σ^2 I]^{-1} c(x*),

where ζi is the ith element of the column vector ζ = [C + σ^2 I]^{-1} Ψ, c(x*) = (C(x*, x1), …, C(x*, xN))^T, and C denotes the covariance matrix of the Gaussian process, whose (i,j)th element is given by Cij = C(xi, xj).

The final goal is to propagate the uncertainty through the link function L(f, φ*). The statistics are computed by integrating over the uncertainty, with mean

E[L(f, φ*)] = ∫∫ L(f, φ*) p(φ*|x*, X, Ψ, γ) p(x*) dφ* dx*

and variance

Var[L(f, φ*)] = ∫∫ L(f, φ*)^2 p(φ*|x*, X, Ψ, γ) p(x*) dφ* dx* − {E[L(f, φ*)]}^2.
Uncertainty propagation is induced by the variability of the input variables. For example, in metamaterial engineering, the dimension of a design parameter, say the thickness of the metallic microstructure layer, could differ from what was specified during the manufacturing process. From measurements, the value of such a variable follows a distribution rather than a pre‐specified exact value. Therefore, uncertainty propagation analysis is needed in the metamaterial design process.
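In practice these integrals are often evaluated by Monte Carlo: draw the scaling variable from its Gaussian predictive distribution and push the samples through the link function. The quadratic toy link and the predictive moments below are illustrative assumptions; they are chosen so the propagated moments are also known in closed form for comparison.

```python
import numpy as np

# Monte Carlo propagation sketch: sample phi* ~ N(mu, var) from the GP
# predictive distribution, push samples through the link L, and estimate
# the mean and variance of the response.

rng = np.random.default_rng(3)

def link(f, phi):
    return phi**2 + f            # toy link L(f, phi)

f = 0.5
mu_phi, var_phi = 2.0, 0.04      # predictive mean/variance of phi* (assumed)

samples = rng.normal(mu_phi, np.sqrt(var_phi), size=200_000)
out = link(f, samples)
mc_mean, mc_var = out.mean(), out.var()

# For this quadratic link the propagated moments are known exactly:
# E[phi^2 + f] = mu^2 + var + f,  Var[phi^2] = 4*mu^2*var + 2*var^2
exact_mean = mu_phi**2 + var_phi + f
exact_var = 4 * mu_phi**2 * var_phi + 2 * var_phi**2
```

For a genuine link function such as Eq. (11), the closed form is unavailable and the Monte Carlo estimates are used directly, per frequency point f.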

3.2. Bayesian calibration

Compared to forward uncertainty propagation, the inverse problem is more difficult yet of great importance in enhancing the fidelity of metamodels. Two major aspects of the inverse problem are measuring model discrepancy and model calibration. In this chapter, we address both issues within an updating process, using a formulation similar to that proposed in Ref. [12].

3.2.1. The model

In this section, we introduce the details of performing Bayesian calibration with regard to Eq. (13). The calibration parameters, denoted by θ, are defined as any physical parameters that can be specified as an input to the statistical model given by Eq. (13). The fundamental difference between x and θ is that the former refers to design inputs whose values can be specified by the user during experiment and simulation, whereas the latter cannot be controlled and its true value is not directly observable [12]. In the previous sections, the calibration parameter was not explicitly specified. Here, we include it in the framework to quantify its uncertainty, which completes the full cycle of metamaterial design and modeling. Suppose θ represents a constitutive parameter, say the permittivity of a dielectric used to fabricate the metamaterial system, which cannot be accurately measured directly.

Before presenting the detailed statistics for uncertainty quantification, we note that the purpose of parameter calibration is to provide accurate predictions with the metamodel from a small amount of data. An even smaller amount of experimental data is acquired to calibrate and validate the main model. To select the "best" experiment samples, uniform experimental design techniques are usually applied [6]. Latin hypercube sampling, for example, is widely used in such cases, mainly for its good coverage properties [13].

The data corresponding to the metamodel φ are obtained at D1 = {(x1′, θ1), …, (xN′, θN)}, where {x1′, …, xN′} and {θ1, …, θN} are the sets of design inputs and calibration parameters. Although the notation is included, the true values of the calibration parameters remain unknown throughout the entire calibration process. The inverse problem of uncertainty quantification is implemented in an updating formulation with a Bayesian approach [1]. In model (13), the metamodel φ(x,θ) and the discrepancy function η(x) are both modeled as Gaussian processes:

φ(·,·) ∼ GP(m1(·,·), C1{(·,·), (·,·)}),
η(·) ∼ GP(m2(·), C2(·,·)),
where m1(x,θ) = h1(x,θ)^T β1 and m2(x) = h2(x)^T β2 [12]. C1((·,·),(·,·)) and C2(·,·) are covariance functions, which can be parameterized by hyperparameters, denoted by Γ1 and Γ2, respectively; we collect them as Γ = (Γ1, Γ2). There are many candidate covariance functions from which one can choose. For example, the squared exponential function, one of the most widely applied covariance functions, in the form

C1((x,θ), (x′,θ′)) = σ1^2 exp{−(x − x′)^T V1x (x − x′)} exp{−(θ − θ′)^T Vθ (θ − θ′)},
C2(x, x′) = σ2^2 exp{−(x − x′)^T V2x (x − x′)}, (21)

can provide smooth samples for inferring the latent function. In Eq. (21), the values of the hyperparameters can be inferred via Markov chain Monte Carlo (MCMC) techniques.
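A squared-exponential covariance of the kind in Eq. (21) can be computed for vector inputs as follows; the scale matrix V and variance σ² are illustrative values, and V is taken diagonal for simplicity.

```python
import numpy as np

# Squared-exponential covariance for vector inputs:
# C(x, x') = sigma2 * exp{ -(x - x')^T M (x - x') }, with M = V^{-1} here.

def sq_exp_cov(X1, X2, sigma2, V):
    Minv = np.linalg.inv(V)
    diff = X1[:, None, :] - X2[None, :, :]          # (n1, n2, d) pairwise diffs
    quad = np.einsum('ijk,kl,ijl->ij', diff, Minv, diff)
    return sigma2 * np.exp(-quad)

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
C = sq_exp_cov(X, X, sigma2=1.5, V=np.diag([1.0, 4.0]))
```

The resulting matrix is symmetric with σ² on the diagonal and is positive semi-definite, as a valid covariance must be; the analogous product over θ in Eq. (21) is built the same way.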

3.2.2. Data and prior distribution

Let H1(D1) denote the matrix of basis functions with rows {h1(x1′, θ1)^T, …, h1(xN′, θN)^T}, which gives the expectation of φ as H1(D1)β1. Similarly, from the experimental observations we can obtain φ̂, the estimate of φ in Eq. (13). The experimental design can be augmented by the calibration parameter at each x, giving D2(θ) = {(x1, θ), …, (xn, θ)}. In contrast to the simulation outputs at {x1′, …, xN′}, the experimental data are usually acquired in much smaller quantity, i.e., n ≪ N, in accordance with the purpose of reducing the number of physical experiments through calibrated models. Meanwhile, we use xi and xi′ to indicate that the observation points may differ between the two datasets. The expectation of φ̂ can be represented by H1{D2(θ)}β1 + H2(D2)β2. We write the full data vector Ω^T = {φ^T, φ̂^T}, which is obtained via Stage I given the simulation and observation of the functional response. The full data are normally distributed given the full set of parameters {θ, β, ϕ}, where β = (β1^T, β2^T)^T and ϕ = (λ, Γ).

The goal of calibration is to obtain p(θ|Ω), the posterior distribution of θ conditional on the full data Ω. To derive the posterior distribution of the parameters, we begin with the normal distribution of the full dataset, for which the likelihood function is Gaussian [14], with mean

E[Ω|θ, β, ϕ] = H(θ)β,

where H(θ) stacks H1(D1) (padded with zeros) on top of [H1{D2(θ)}, H2(D2)].
To specify the variance matrix of Ω, we need the variance matrix of φ, denoted V1(D1), whose (i, i′) element is C1{(xi′, θi), (xi′′, θi′)}. Similarly, we define V1{D2(θ)} and V2(D2). Let C1{D1, D2(θ)} be the matrix with (i, j) element C1{(xi′, θi), (xj, θ)}. Therefore,

V(θ) = [ V1(D1), C1{D1, D2(θ)}; C1{D1, D2(θ)}^T, V1{D2(θ)} + V2(D2) + λI ],

where I is the n×n identity matrix.

To derive the posterior distribution under the Bayesian framework, the prior distributions of the parameters {θ, β, ϕ} must also be specified, independently of each other. Following the suggestion of Ref. [12], we choose a weak prior for β, specifically p(β1, β2) ∝ 1, so that p(θ, β, ϕ) = p(θ)p(ϕ) with p(ϕ) = p(λ)p(Γ1)p(Γ2). Meanwhile, Bayesian inference with MCMC requires proper prior distributions. For this purpose, conjugate priors are specified, e.g.,

λ ∼ IG(aλ, bλ), Γ ∼ W(V0, n0), θ ∼ N(μθ, Vθ), (25)

where IG, W, and N are the inverse gamma, Wishart, and normal distributions, respectively [15].

3.2.3. Posterior distribution

Conditional on the full data, the independence of the priors leads to the full joint posterior distribution

p(θ, β, ϕ|Ω) ∝ p(Ω|θ, β, ϕ) p(θ) p(ϕ). (26)
To obtain p(θ|Ω), we must integrate out β and the hyperparameters ϕ from Eq. (26). Integrating out β yields

p(θ, ϕ|Ω) ∝ p(θ) p(ϕ) |V(θ)|^{-1/2} |H(θ)^T V(θ)^{-1} H(θ)|^{-1/2} exp{−(1/2)(Ω − H(θ)β̂)^T V(θ)^{-1}(Ω − H(θ)β̂)}, (27)

where β̂ = {H(θ)^T V(θ)^{-1} H(θ)}^{-1} H(θ)^T V(θ)^{-1} Ω is the generalized least squares estimate of β.
3.2.4. Calibration and prediction

Since the posterior distribution specified in Eq. (27) is a highly intractable function of ϕ, we need Monte Carlo methods to integrate out ϕ and obtain a numerical estimate of the posterior distribution of the calibration parameters θ. The formulation is given by

p(θ|Ω) = ∫ p(θ|ϕ, Ω) p(ϕ|Ω) dϕ ≈ (1/M) Σ_{m=1}^{M} p(θ|ϕ(m), Ω), ϕ(m) ∼ p(ϕ|Ω). (28)
However, the purpose of calibrating the parameters is to predict the real process rather than to obtain their values. Therefore, in practice, we are more interested in the posterior distribution of φ(x), which is also a Gaussian process, conditional on the calibration parameters and the estimated hyperparameters. The mean of this GP is given by

E[φ(x)|θ, ϕ, Ω] = h(x,θ)^T β̂ + t(x,θ)^T V(θ)^{-1}(Ω − H(θ)β̂), (30)

where h(x,θ) = (h1(x,θ)^T, h2(x)^T)^T and t(x,θ) = (V1{(x,θ), D1}^T, [V1{(x,θ), D2(θ)} + V2(x, D2)]^T)^T,

and covariance

Cov[φ(x), φ(x′)|θ, ϕ, Ω] = C1{(x,θ), (x′,θ)} + C2(x, x′) − t(x,θ)^T V(θ)^{-1} t(x′,θ) + {h(x,θ) − H(θ)^T V(θ)^{-1} t(x,θ)}^T {H(θ)^T V(θ)^{-1} H(θ)}^{-1} {h(x′,θ) − H(θ)^T V(θ)^{-1} t(x′,θ)}. (31)
Inference about φ(x) can again be implemented numerically through its posterior mean E[φ(x)|θ, ϕ, Ω] at estimated θ and ϕ, by integrating Eq. (31) with respect to Eq. (28). Given the estimate of φ(x), the analysis of z becomes straightforward by applying the link function L(·) as described in model (13).

So far, we have calibrated a metamodel in the Bayesian framework using the experimental data, accounting for parameter uncertainty and correcting for model discrepancy and experimental uncertainty.


4. Simulation study

This section demonstrates, with examples, the results obtained using the Bayesian uncertainty quantification framework for the metamaterial design problem with the models described in Sections 2 and 3. For both propagation and inverse assessment, the overall model follows Eq. (13), where the geometric variable w and the incident angle α are the input variables specified in the simulation, i.e., x = (w, α)^T. Thus, the model is expressed as

z(f, w, α) = L(f, φ(w, α, θ)) + η(w, α) + ε(f).
To demonstrate parameter calibration within metamaterial modeling and design, we consider an example where the real part of the permittivity of a dielectric material, εd, is taken as the calibration parameter, θ = εd, and its prior is the normal distribution of model (25) with mean μθ = 3 and variance Vθ = 0.5. Figure 5 illustrates the probability density function of this prior. Figure 6 demonstrates a measure of uncertainty propagation, where predictions at the 2.5% quantile (green or light gray) and the 97.5% quantile (red or dark gray) of the samples depict the discrepancy induced by the uncertain input. Following the methodology introduced in Section 3, metamodels are built for the simulation data and the discrepancy function with Gaussian process regression models. In our example, we used 92 simulation data points to build the GPs and 20 observations for calibration. The posterior distribution of the calibration parameter is also displayed in Figure 5; after calibration, its variance is much smaller. The comparison between the prediction at the posterior mean (cyan curve) and the "real data" (blue dash) is shown in Figure 6, where the discrepancy reduction is remarkable.

Figure 5.

Comparison of prior and posterior distributions of the calibration parameter. The mean of the Gaussian distribution shifts from 3 to 3.17, and the variance is much smaller after Bayesian calibration.

Figure 6.

The effect of uncertainty propagation and results of parameter calibration.


5. Conclusion

In this chapter, we reviewed several conventional models for functional response and presented the quasiphysical model. Compared with the conventional models, the quasiphysical model reveals physical insight more clearly and makes better use of historical experience. The two‐stage method was presented to model the frequency response of metamaterials and facilitate the design process. Using this approach, we decomposed the complex modeling problem into a vector fitting‐based equivalent circuit modeling process and a GP regression process, which can easily generate the mapping function from the structure's geometric design. The predictive property of this model enables a massive reduction in time‐consuming simulations.

Another important topic of this chapter was the development and application of a Bayesian uncertainty quantification approach for functional response. Both forward uncertainty propagation and inverse assessment of the model were discussed, and a Bayesian framework, supported by simulation and experimental results, was presented to deal with model calibration for functional response. We envision that our two‐stage approach can be generalized to model any functional response of rational form. With the Bayesian framework for the functional data of computer experiments, we were able to incorporate prior knowledge into the model and obtain a probabilistic measure of the uncertainty associated with metamaterial system design. This general methodology enables researchers and designers to achieve high efficiency and accuracy in modeling functional response with a considerably small amount of data. With the Bayesian calibration framework, we can continually increase the precision of predictions of the functional response at unobserved sites, thus replacing expensive physical experiments.



Acknowledgements

This work was supported by the Shenzhen Key Laboratory of Data Science and Modeling.


References

  1. Kennedy MC, O'Hagan A. Predicting the output from a complex computer code when fast approximations are available. Biometrika. 2000;87(1):1–13. DOI: 10.1093/biomet/87.1.1
  2. Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning. New York: Springer; 2001. p. 745. DOI: 10.1007/978-0-387-84858-7
  3. Ramsay J, Silverman B. Applied Functional Data Analysis: Methods and Case Studies. New York: Springer; 2002. DOI: 10.1007/b98886
  4. Cressie N. Statistics for Spatial Data. Terra Nova. 1992;4(5):613–617. DOI: 10.1002/9781119115151
  5. Liu R, Ji C, Mock J, Chin J, Cui T, Smith D. Broadband ground-plane cloak. Science. 2009;323(5912):366–369. DOI: 10.1126/science.1166949
  6. Fang KT, Li R, Sudjianto A. Design and Modeling for Computer Experiments. Boca Raton, FL, USA: Chapman and Hall/CRC Press; 2005. DOI: 10.1201/9781420034899
  7. Ruppert D, Carroll R. Theory & methods: Spatially-adaptive penalties for spline fitting. Australian & New Zealand Journal of Statistics. 2000;42(2):205–223. DOI: 10.1111/1467-842X.00119
  8. Craven P, Wahba G. Smoothing noisy data with spline functions. Numerische Mathematik. 1979;31:377–403. DOI: 10.1007/BF01404567
  9. Carroll R, Chen R, George E, Li T, Newton H, Schmiediche H, et al. Ozone exposure and population density in Harris County, Texas. Journal of the American Statistical Association. 1997;92(438):392–404. DOI: 10.2307/2965684
  10. Liu B, Ji C. Bayesian nonparametric modeling for rapid design of metamaterial microstructures. International Journal of Antennas and Propagation. 2014;2014:187
  11. Cui TJ, Smith D, Liu R. Metamaterials: Theory, Design, and Applications. New York: Springer; 2009. DOI: 10.1007/978-1-4419-0573-4
  12. Kennedy MC, O'Hagan A. Bayesian calibration of computer models. Journal of the Royal Statistical Society: Series B (Statistical Methodology). 2001;63(3):425–464. DOI: 10.1111/1467-9868.00294
  13. McKay M, Beckman R, Conover W. A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics. 1979;21(2):239–245. DOI: 10.2307/1271432
  14. Kennedy M, O'Hagan A. Supplementary details on Bayesian calibration of computer models. Technical Report, University of Nottingham, Statistics Section; 2001
  15. Neal RM. Probabilistic Inference Using Markov Chain Monte Carlo Methods. Technical Report CRG-TR-93-1, University of Toronto; 1993
