Open access peer-reviewed chapter

Bayesian Uncertainty Quantification for Functional Response

By Xiao Guo, Yang He, Binbin Zhu, Yang Yang, Ke Deng, Ruopeng Liu and Chunlin Ji

Submitted: October 13th, 2016 | Reviewed: April 21st, 2017 | Published: July 5th, 2017

DOI: 10.5772/intechopen.69385



This chapter addresses the stochastic modeling of functional response, which is a major concern in engineering implementation. We first introduce a general framework and several conventional models for functional data, including the functional linear model, penalized regression splines, and the spatial temporal model. However, in engineering practice, naive mathematical modeling of functional response may fail because it cannot express the underlying physical mechanism. We propose a series of quasiphysical models to handle the functional response. A motivating example of metamaterial design is thoroughly discussed to demonstrate the idea of quasiphysical models. In real applications, various uncertainties have to be taken into account, such as the uncertainty in the permittivity or permeability of the substrate of the metamaterial. For the propagation of uncertainty, simulation-based methods are discussed. A Bayesian framework is presented to deal with model calibration in the case of functional response. Experimental results illustrate the efficiency of the proposed method.


  • functional response
  • meta model
  • Bayesian uncertainty quantification
  • model calibration
  • metamaterial design

1. Introduction

In recent years, computer experiments have become widely adopted in both engineering applications and scientific research to replace or support their physical counterparts. Functional response is the mathematical representation of system behaviors, where the data are collected over an interval of some input indices. With the advance of modern simulation and experiment technology, accessing functional data becomes easier. Functional response can be in the form of one-dimensional data such as a curve or higher-dimensional data such as an image, which can provide better physical insight. However, even with the advancement of computer technology, full simulation based on a finite element method or a finite difference method still takes an extensive amount of time. To reduce the amount of simulation time, historical simulated data are usually used to build a cheaper metamodel [1], in which the functional response at an unobserved input can be predicted by either regression or interpolation. The simplest representation of functional data is basis expansion, where polynomials are used to formulate the input-output relation [2]. For frequency response analysis, Fourier series are usually applied in place of the polynomials [3]. Both methods are categorized as linear regression, which requires parameter estimation. Nonparametric approaches have also been used to analyze functional data in many scientific and engineering fields [4]. The purpose of building these models is to provide the "best" estimate for the given data, while providing a statistical scheme for prediction at unobserved inputs.

In this chapter, we provide a more sophisticated approach to naturally analyze functional responses, which may suggest more insightful conclusions that may not be apparent otherwise. We introduce one motivating example of functional response in computer experiments. In the design of metamaterials, the goal is to establish a relationship between the physical dimensions of a unit cell and its electromagnetic (EM) frequency response [5]. In practice, designers usually evaluate the EM properties of a metamaterial microstructure via full-wave simulation data, such that corresponding adjustments are constantly made to the design (dielectric architecture, microstructure topologies, etc.) until a desired performance is achieved. Figure 1 depicts an example of unit cell design whose response phases differ across a frequency span as the geometric parameter varies. Naive regression-based metamodels fail in dealing with such a problem because they require building a separate regression for each output, which can be very expensive and leaves the correlation between different frequencies unexploited. Moreover, when resonance is involved, the functional data cannot be well described by polynomials or splines. However, this can be overcome by quasiphysical models, which exploit the essential physical mechanism. In addition, a more general two-stage modeling scheme can be applied, where in Stage I, we approximate the response with rational functions. This allows us to decompose the continuous response into a few discrete parameters. Stage II consists of a nonparametric metamodel to capture the input dependence.

Figure 1.

Example of functional response in metamaterial unit cell design: phase shift between different physical dimension inputs.


2. General models for functional response

Various statistical models, including the spatial temporal model, functional linear model, and penalized regression splines, have been widely discussed in the past. Most models share a unified expression that sums a mean function μ(f, x) and a random term ε(f, x), written as

y(f, x) = μ(f, x) + ε(f, x),   (1)

where y is the response, x = {x1, …, xp} is the vector of input variables with dimensionality p, and f represents some index, which could be the frequency of an electromagnetic wave or the time stamp of a time series. Despite the shared form, these models differ in the way these terms are estimated.

2.1. Functional linear model

To model the functional response, the primary task is to estimate the mean function μ(f, x), on which a certain form is often imposed. As a generalization of linear regression models, the functional linear model takes the form

μ(f, x) = β0(f) + x^T β(f),   (2)

with basis functions β = {β0(f), β(f)} (β(f) = (β1(f), …, βp(f))^T has the same dimensionality as the input variable), which incorporate the index dependence and can be seen as an extension of the parameters of linear regression models. Therefore, by substituting the mean function into Eq. (1), we obtain the resulting output

y(f, x) = β0(f) + x^T β(f) + ε(f, x).   (3)
Given a certain index f, this model is a universal linear model. Furthermore, it contains an underlying index-varying effect of x, since β is assumed to be a smooth function of f. Thus, the model is referred to as a functional linear model. To estimate the coefficients β0(f) and β(f), it is straightforward to apply the least squares method, which uses the data collected at each f. However, smoothing over f componentwise, using penalized splines, can enhance the efficiency of the estimates [6].
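As a concrete illustration, the pointwise least squares estimate of β0(f) and β(f) can be sketched as follows; this is a minimal NumPy example on simulated toy data, and all names, sizes, and coefficient shapes are illustrative rather than taken from the chapter:

```python
import numpy as np

# Toy functional data: y(f_k, x_i) observed on a frequency grid for n inputs.
rng = np.random.default_rng(0)
n, p, K = 50, 2, 20                           # samples, input dim, frequency points
f_grid = np.linspace(0.0, 1.0, K)
X = rng.uniform(-1.0, 1.0, size=(n, p))

# True index-varying coefficients (smooth in f), used only to simulate data.
beta0_true = np.sin(2 * np.pi * f_grid)
beta_true = np.stack([f_grid, 1.0 - f_grid])  # shape (p, K)
Y = beta0_true + X @ beta_true + 0.01 * rng.standard_normal((n, K))

# Pointwise least squares at each index f_k: regress Y[:, k] on [1, X].
A = np.hstack([np.ones((n, 1)), X])           # design matrix, shape (n, p+1)
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)  # one solve for all K indices
beta0_hat, beta_hat = coef[0], coef[1:]       # estimated beta0(f) and beta(f)
```

The recovered curves beta0_hat and beta_hat are the raw pointwise estimates; the componentwise smoothing over f described next would then be applied to them.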

Penalized regression splines implement estimation of smoothing basis functions in functional linear models by minimizing the penalized least squares. They are widely adopted in modeling functional responses due to their easy implementation and low computational cost [6].

Note that the primary purpose of applying penalized regression splines is to estimate the basis function β. Suppose we have n data points {(fi, yi), i = 1, …, n}, and the basis function is a random sample from

βj(f) = m(f) + δj,   j = 1, …, p,   (4)

where m(f) is an unspecified smooth mean function of β and δj is a zero-mean random error. In practice, m(f) can be estimated by a series of truncated power spline bases 1, f, f², …, f^p, (f − κ1)+^p, …, (f − κK)+^p, where {κ1, …, κK} is a given set of knots and a+ denotes the positive part of a, i.e., a+ = (a + |a|)/2. Therefore, the model in Eq. (4) can be approximately written as

m(f) ≈ α0 + α1 f + ⋯ + αp f^p + α_{p+1} (f − κ1)+^p + ⋯ + α_{p+K} (f − κK)+^p,   (5)

where the αj are coefficients whose values can be obtained via least squares estimates. Generally, overfitting in the approximation of m(f) may occur, which leads to high variance and poor prediction. To avoid large modeling bias, the trade-off between model bias and overfitting requires careful consideration. To resolve this problem, variable selection procedures could be applied to the linear regression model. However, when the number of involved basis functions is very large, variable selection encounters great computational difficulty [7]. Alternatively, the αj are estimated by minimizing a penalized least squares function of the form

∑_{i=1}^{n} {β̂j(fi) − m(fi)}² + g ∑_{k=1}^{K} α²_{p+k},   (6)

where g is a tuning parameter determined by cross-validation or generalized cross-validation [8].

The smoothing method with penalized spline estimates also requires selection of the number of knots and the order p, which may vary from case to case. Fortunately, the estimates are not sensitive to these choices, and cubic splines are suggested in most cases [6], which ensure continuous second-order and piecewise-continuous third-order derivatives at the knots. Meanwhile, knots are usually selected evenly from the interval when f is evenly distributed, or κk is taken to be the 100k/(K+1)th percentile when f is unevenly distributed.
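The smoothing described above can be sketched in a few lines of NumPy. This is a minimal illustration of a truncated power basis with a ridge-type penalty on the knot coefficients only; the toy curve and the fixed penalty value are assumptions for the demo (in practice g is chosen by cross-validation or generalized cross-validation):

```python
import numpy as np

def truncated_power_basis(f, knots, degree=3):
    """Columns: 1, f, ..., f^degree, then (f - kappa_k)_+^degree for each knot."""
    f = np.asarray(f, dtype=float)
    cols = [f ** d for d in range(degree + 1)]
    cols += [np.clip(f - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

def penalized_spline_fit(f, y, knots, degree=3, g=1e-8):
    """Minimize ||y - B a||^2 + g * ||a_knots||^2 (penalty on knot terms only)."""
    B = truncated_power_basis(f, knots, degree)
    D = np.zeros(B.shape[1])
    D[degree + 1:] = 1.0                      # penalize only truncated-power terms
    a = np.linalg.solve(B.T @ B + g * np.diag(D), B.T @ y)
    return a, B

# Smooth toy curve: a cubic penalized spline should track it closely.
f = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * f)
knots = np.linspace(0.1, 0.9, 9)              # evenly spread interior knots
a, B = penalized_spline_fit(f, y, knots)
y_hat = B @ a
```

The `degree=3` default reflects the cubic-spline recommendation above; the evenly spaced knots correspond to the evenly distributed-f case.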

2.2. Spatial temporal model

The spatial temporal model is defined as the sum of a mean function μ(f, x) and a term ε(f, x) drawn from a zero-mean Gaussian random field. It is a generalization of the Gaussian process (GP) model, which has been widely adopted for spatial statistics problems [4, 9].

Both of the preceding models aim to represent the functional data in terms of their mean functions. In contrast, the spatial temporal model utilizes the normality of the residuals; thus, the output can be seen as a realization of a Gaussian random field. We assume a mean function μ(f, x) of the form

μ(f, x) = h(x)^T β(f),   (7)

where h(x) and β(f) are two series of basis functions of the input variable and the index variable, respectively. Such an assumption leads to a spatial temporal model

y(f, x) = h(x)^T β(f) + ε(f, x),   (8)
where ε(f, x) is a zero-mean Gaussian random field whose covariance function follows the form

Cov{ε(f, xi), ε(f, xj)} = K(κf; |xi − xj|),   (9)

where K(κf) denotes the covariance matrix, whose (i, j) element K(κf; |xi − xj|) measures the covariance between xi and xj. Here, κf is an f-dependent hyperparameter that controls the properties of the covariance.

Suppose we have obtained observations y(fj, xi) at input sites (fj, xi) with j = 1, …, J and i = 1, …, n, where J and n are the numbers of indices and input settings, respectively. β(f) and κf can be calculated following the hyperparameter estimation procedure of a standard Gaussian process model [4]. The spatial temporal model also allows predictions at unobserved sites f* and x*. The procedure for prediction is summarized in the following algorithm.

Step 1: For j = 1, …, J, calculate the best estimates β̂(f) and κ̂f for f = fj by maximizing the log likelihood, given by

log p(y | X) = −(1/2)(y − h(X)β)^T K^{−1}(κf)(y − h(X)β) − (1/2) log|K(κf)| − (N/2) log 2π.

Step 2: Using the data {xi, y(fj, xi)}, obtain the estimates μ̂(f, x) and κ̂f. Calculate the prediction ŷ(fj, x*) at f = fj with the best linear unbiased prediction [2]

ŷ(fj, x*) = h(x*)^T β̂(fj) + k(x*)^T K^{−1}(κ̂fj)(y − H β̂(fj)),

where k(x*) is the vector of covariances between x* and the observed sites and H is the matrix with rows h(xi)^T.
Step 3: For a new index f* ∈ [f1, fJ] lying between two existing indices f0 and f1 with predicted outputs ŷ(f0, x*) and ŷ(f1, x*), use linear interpolation to predict y(f*, x*), as

ŷ(f*, x*) = ŷ(f0, x*) + [(f* − f0)/(f1 − f0)] [ŷ(f1, x*) − ŷ(f0, x*)].
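The three steps above can be sketched as follows. For brevity, this toy sketch fixes the GP hyperparameters instead of maximizing the likelihood in Step 1, uses a zero mean rather than h(x)^T β, and invents a two-frequency dataset; all of those simplifications are assumptions of the demo:

```python
import numpy as np

def gp_predict(X, y, x_star, kappa=1.0, sig2=1e-6):
    """Zero-mean GP prediction (squared-exponential kernel) at a single x_star."""
    def k(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / kappa)
    K = k(X, X) + sig2 * np.eye(len(X))
    return k(x_star[None, :], X) @ np.linalg.solve(K, y)

# Toy data on a two-point frequency grid; inputs are 1-D here.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(30, 1))
f_grid = np.array([1.0, 2.0])
Y = np.stack([np.sin(3 * X[:, 0]), np.cos(3 * X[:, 0])], axis=1)  # y(f_j, x_i)

x_star = np.array([0.5])
# Steps 1-2: GP prediction at each observed frequency (hyperparameters fixed).
y_at_f = np.array([gp_predict(X, Y[:, j], x_star, kappa=0.1)[0] for j in range(2)])
# Step 3: linear interpolation at a new frequency f* between f_1 and f_2.
f_star = 1.25
w = (f_star - f_grid[0]) / (f_grid[1] - f_grid[0])
y_star = (1 - w) * y_at_f[0] + w * y_at_f[1]
```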
2.3. Quasiphysical model

Modeling metamaterial frequency response, in particular the resonance behavior, is often quite challenging and cannot be achieved with the models introduced above. This is because those models are built upon linear regression and simply encode the index dependence in smooth, index-dependent basis functions. However, when distinct resonance peaks exist, a common scenario in radio frequency engineering, fitting with these smooth basis functions often leads to poor accuracy [10]. To deal with such problems, we utilize some underlying physical mechanism and establish a quasiphysical modeling method. For example, the mean function μ(f, x) is represented by the combination of a link function L(f, ·), which follows a certain physical mechanism, and a set of low-dimensional scaling variables φ(x), i.e.,

μ(f, x) = L(f, φ(x)).   (10)

Then, we have y(f, x) = L(f, φ(x)) + ε(f, x). Instead of finding a single function of both the frequency index and the input variables, the functional response is separated into two parts: a physically meaningful link function L(f, ·) contains the functional features, whereas the other part captures the relationship between the input variables x and the scaling variables φ(x). This separation often leads to dimension reduction in statistical models. In the example of metamaterial design, the functional response is represented by the effective permittivity of a unit cell, which can be well fitted by a Drude-Lorentz form [11],


where φ(x) ≡ {εa, Fe, f0, γe} is the intermediate variable, which can be estimated by fitting the functional response with Eq. (11); meanwhile, φ(x) is a function of the input variables. Here, we choose a Gaussian process (GP) regression model to interpolate a new φ* given previously obtained pairs {x, φ(x)} and a new x*. Once the new φ* is obtained, it can be used to evaluate the new functional response by Eq. (11). Figure 2 displays a smooth surface of f0 and an example of predicted effective permittivity.

Figure 2.

Example of modeling functional response assisted by the physical model: (a) Gaussian process surface of a scaling variable; (b) predicted functional response.
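The two stages of this quasiphysical scheme can be sketched as follows. The link function below is a hypothetical Lorentz-type stand-in for the Drude-Lorentz form of Eq. (11) (its exact parameterization here is an assumption; see Ref. [11] for the actual form), and the Stage I scaling variables are assumed to have been extracted already, so they are simply simulated:

```python
import numpy as np

def link(f, phi):
    """Hypothetical Lorentz-type link L(f, phi); a stand-in for Eq. (11)."""
    eps_a, Fe, f0 = phi
    return eps_a + Fe * f0 ** 2 / (f0 ** 2 - f ** 2)

def gp_interp(X, Psi, x_star, kappa=0.05, sig2=1e-8):
    """Interpolate each scaling variable with a simple SE-kernel GP."""
    d2 = lambda a, b: (a[:, None] - b[None, :]) ** 2
    K = np.exp(-d2(X, X) / kappa) + sig2 * np.eye(len(X))
    k_star = np.exp(-d2(np.atleast_1d(x_star), X) / kappa)
    return (k_star @ np.linalg.solve(K, Psi)).ravel()

# Stage I output (assumed): scaling variables phi(x) at the training inputs.
X = np.linspace(0.0, 1.0, 15)                       # 1-D geometric input
Psi = np.stack([2.0 + 0.5 * X, 1.0 + 0.2 * X, 5.0 + X], axis=1)

# Stage II: predict phi at a new input, then recover the functional response.
phi_star = gp_interp(X, Psi, 0.5)
f_grid = np.linspace(0.1, 3.0, 50)                  # stays below resonance f0
response = link(f_grid, phi_star)
```

The point of the sketch is the decomposition: the GP never sees the frequency axis, only the low-dimensional scaling variables, and the link function restores the full curve.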

The aforementioned Drude-Lorentz model allows high accuracy only when the metamaterial system works in a static or quasistatic regime, such that the metamaterial architecture can be seen as a single piece of effective medium. However, for complex metamaterial systems the working regime is beyond static; thus, the approximation accuracy of such a model deteriorates severely. We note that EM waves propagate through each layer of a metamaterial like a current on transmission lines. This perspective transfers the EM field problem to a circuit problem. Hence, the functional response over a continuous spectrum is reduced to discrete LRC (inductor, resistor, and capacitor) networks. We propose a two-stage modeling scheme: in Stage I, a vector fitting (VF) technique is adopted to provide an accurate rational approximation to frequency responses with distinct resonances. Its results are easily interpreted as an equivalent circuit. The approximation accuracy for a frequency response and its corresponding equivalent circuit are shown in parts (a) and (b) of Figure 3, respectively. In Stage II, the fitted circuit elements are taken as the target response in statistical models to establish the input-output mapping by regression, which also allows predictions at unobserved input sites. Part (c) of Figure 3 presents the GP surface of a circuit parameter over two input variables. A graphical display of this two-stage approach is illustrated in Figure 4. Prediction of the functional response at an unobserved input is implemented by first predicting the circuit elements and then recovering the response.

Figure 3.

Example of modeling frequency response via (a) vector fitting, (b) equivalent circuit, and (c) GP regression.

Figure 4.

Flowchart of the two‐stage modeling approach.


3. Uncertainty quantification

In engineering modeling and design, uncertainty is ubiquitous, owing to the inability to specify a "true" input or model parameter. Quantifying the uncertainty of the model, e.g., in the form of predictive confidence intervals, is of great importance for decision making and advanced design [1]. In general, uncertainty quantification can be divided into two major types of problem: forward uncertainty propagation and inverse assessment of model and parameter uncertainties [12].

The full relationship between the experimental output z(f, x) and the simulation output y(f, x) can be expressed as

z(f, x) = L(f, φ(x, θ) + η(x)) + ε(f, x) + ε′(f),   (12)

where φ(x, θ) denotes the GP regression model, which depends on the input variable x and several unobservable calibration parameters θ, and η(x) is the additive discrepancy function (or model bias function), which does not depend on the calibration parameters. To reduce the complexity of the analysis, we assume that ε(f, x) is a zero-mean Gaussian random field independent of x. We can then merge ε(f, x) and ε′(f) together, denoted by ε(f). Thus, the model becomes

z(f, x) = L(f, φ(x, θ) + η(x)) + ε(f),   (13)

where ε(f) is assumed to be zero-mean Gaussian noise with known variance λ, i.e., ε(fi) ∼ N(0, λ).

In forward problems, with an uncertain input x and given model parameters θ, the model output y and other quantities of interest are to be calculated. Conversely, the inverse problem is to estimate the values of the model parameters θ such that the model output fits the experimental data as accurately as possible (or satisfies some precision requirement).

3.1. Metamodel‐based uncertainty propagation

The main problem in analyzing uncertainty propagation is obtaining an analytical representation of the metamodel for arbitrary (uncertain) input values. Given the probability density of the inputs, the Bayesian framework can provide a probability measure on the output field. The purpose of such an operation is to evaluate the influence of an uncertain input on the model response.

Assume that the Gaussian process regression model is trained on a dataset with inputs X = {x1, …, xN} and the corresponding intermediate variables Ψ = (φ(x1), …, φ(xN))^T obtained by the fitting algorithm in Stage I. The GP hyperparameters learned from the data are denoted by γ. The uncertainty of the input variable x* is captured by a probability density function,

x* ∼ p(x*).   (14)

At a deterministic test input x*, the predictive distribution of the function, p(φ* | x*, X, Ψ, γ) (for simplicity, we use φ* to denote φ(x*), the output of the metamodel), is Gaussian with mean

m(x*) = ∑_{i=1}^{N} ζi C(x*, xi),   (15)
and variance

σ²(x*) = C(x*, x*) − k(x*)^T (C + σ²I)^{−1} k(x*),   (16)

where ζi is the ith element of the column vector ζ = (C + σ²I)^{−1}Ψ, C denotes the covariance matrix of the Gaussian process, whose (i, j)th element is given by Cij = C(xi, xj), and k(x*) = (C(x*, x1), …, C(x*, xN))^T.

The final goal is to propagate the uncertainty through the link function L(f, φ*). The computation of the statistics is implemented by integrating over the uncertainty, with mean

E[L(f, φ*)] = ∫∫ L(f, φ*) p(φ* | x*, X, Ψ, γ) p(x*) dφ* dx*,   (17)

and variance

Var[L(f, φ*)] = ∫∫ {L(f, φ*) − E[L(f, φ*)]}² p(φ* | x*, X, Ψ, γ) p(x*) dφ* dx*.   (18)
The uncertainty propagation is induced by the variability of the input variable. For example, in metamaterial engineering, the dimension of a design parameter, say the thickness of a metallic microstructure layer, could differ from what was specified during the manufacturing process. From measurements, the value of such a variable follows a distribution rather than a pre-specified exact value. Therefore, the analysis of uncertainty propagation is needed in the metamaterial design process.
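A simulation-based (Monte Carlo) version of this propagation can be sketched as below. The metamodel and link function here are simple illustrative stand-ins for the GP posterior mean and L(f, ·), and the input distribution is an assumed normal:

```python
import numpy as np

# Monte Carlo propagation of input uncertainty x* ~ p(x*) through a metamodel
# phi(x) and a link function L(f, phi); both functions are illustrative.
rng = np.random.default_rng(2)

def phi(x):                      # stand-in metamodel (e.g., a GP posterior mean)
    return 1.0 + 0.5 * x

def link(f, p):                  # stand-in link function L(f, phi)
    return p * np.sin(f)

f = 1.0
x_samples = rng.normal(loc=0.2, scale=0.05, size=20000)   # uncertain input
y_samples = link(f, phi(x_samples))

mean_y = y_samples.mean()        # Monte Carlo estimate of Eq. (17)
var_y = y_samples.var()          # Monte Carlo estimate of Eq. (18)
```

Quantiles of `y_samples` (e.g., the 2.5% and 97.5% levels used in Section 4) come directly from `np.quantile`.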

3.2. Bayesian calibration

Compared to forward uncertainty propagation, the inverse problem is more difficult yet of great importance in enhancing the fidelity of metamodels. Two major aspects of the inverse problem are measuring model discrepancy and model calibration. In this chapter, we use a formulation that addresses both issues within an updating process, similar to that proposed in Ref. [12].

3.2.1. The model

In this section, we introduce the details of performing Bayesian calibration with regard to Eq. (13). The calibration parameters, denoted by θ, are defined as any physical parameters that can be specified as an input to the statistical model given by Eq. (13). The fundamental difference between x and θ is that the former refers to design inputs whose values can be specified by the user during experiments and simulations, whereas the latter cannot be controlled and its true value is not directly observable [12]. In the previous sections, the calibration parameter was not explicitly specified. However, we include it here in the framework to quantify its uncertainty, which completes the full cycle of metamaterial design and modeling. Suppose θ represents a constitutive parameter, say the permittivity of a dielectric used to fabricate the metamaterial system, which cannot be accurately measured directly.

Before offering the detailed statistics for uncertainty quantification, we must note that the purpose of parameter calibration is to provide accurate prediction with the metamodel from a small amount of data. An even smaller amount of experimental data is acquired to calibrate and validate the main model. To select the "best" experimental samples, uniform experimental design techniques are usually applied [6]. Latin hypercube sampling, for example, is widely used in such cases, mainly for its good coverage properties [13].

The data corresponding to the metamodel φ are obtained at D1 = {(x1′, θ1), …, (xN′, θN)}, where {x1′, …, xN′} and {θ1, …, θN} are the sets of design inputs and calibration parameters. Although the notation is included, the true values of the calibration parameters are unknown throughout the entire calibration process. The inverse problem of uncertainty quantification is implemented in an updating formulation with a Bayesian approach [1]. In model (13), the metamodel φ(x, θ) and the discrepancy function η(x) are both Gaussian processes:

φ(·, ·) ∼ GP(m1(x, θ), C1((·, ·), (·, ·))),   (19)

η(·) ∼ GP(m2(x), C2(·, ·)),   (20)

where m1(x, θ) = h1(x, θ)^T β1 and m2(x) = h2(x)^T β2 [12]. C1((·, ·), (·, ·)) and C2(·, ·) are covariance functions, which can be parameterized by hyperparameters denoted by Γ1 and Γ2, respectively; we denote them collectively as Γ = (Γ1, Γ2). There are many candidate covariance functions from which one can choose. For example, the squared exponential function, one of the most widely applied covariance functions, of the form

C1((x, θ), (x′, θ′)) = σ1² exp{−(x − x′)^T V1x (x − x′)} exp{−(θ − θ′)^T Vθ (θ − θ′)},
C2(x, x′) = σ2² exp{−(x − x′)^T V2x (x − x′)},   (21)

can provide smooth samples to infer the latent function variable. In Eq. (21), the values of the hyperparameters can be inferred via Markov chain Monte Carlo (MCMC) techniques.

3.2.2. Data and prior distribution

Let us denote by H1(D1) the matrix of basis functions with rows {h1(x1′, θ1)^T, …, h1(xN′, θN)^T}, which leads to the expectation of φ being H1(D1)β1. Similarly, from the experimental observations we can obtain φ̂, the estimate of φ in Eq. (13). The experimental design can be augmented by the calibration parameter at each x, with D2(θ) = {(x1, θ), …, (xn, θ)}. In contrast to the simulation data {x1′, …, xN′}, the experimental data are usually acquired in much smaller quantity, i.e., n ≪ N, in accordance with the purpose of reducing the amount of physical experimentation with calibrated models. Meanwhile, we use xi and xi′ to indicate that the observation points may differ between the two datasets. The expectation of φ̂ can be represented by H1{D2(θ)}β1 + H2(D2)β2. We write the full data vector Ω^T = {φ^T, φ̂^T}, which is obtained via Stage I given the simulation and observation of the functional response. Meanwhile, Ω is normally distributed given the full set of parameters {θ, β, ϕ} (β = (β1^T, β2^T)^T, ϕ = (λ, Γ)).

The goal of calibration is to obtain p(θ | Ω), the posterior distribution of θ conditional on the full data Ω. To derive the posterior distribution of the parameters, we begin with the normal distribution of the full dataset, whose likelihood function is Gaussian [14], with mean

E[Ω | θ, β, ϕ] = H(θ)β,   (22)

where the basis matrix stacks the two block rows corresponding to φ and φ̂,

H(θ) = [ H1(D1), 0 ; H1{D2(θ)}, H2(D2) ].   (23)
To specify the variance matrix of Ω, we need the variance matrix of φ, denoted by V1(D1), whose (i, j) element is C1((xi′, θi), (xj′, θj)). Similarly, we can define V1{D2(θ)} and V2(D2). Let C1{D1, D2(θ)} be the matrix with (i, j) element C1{(xi′, θi), (xj, θ)}. Therefore,

V(θ) = [ V1(D1), C1{D1, D2(θ)} ; C1{D1, D2(θ)}^T, V1{D2(θ)} + V2(D2) + λI ],   (24)

where I is the n × n identity matrix.

To derive the posterior distribution under the Bayesian framework, prior distributions for the parameters {θ, β, ϕ} must also be specified independently. Following the suggestion of Ref. [12], we choose conjugate priors for θ and ϕ and a weak prior for β, specifically p(β1, β2) ∝ 1; then we have p(θ, β, ϕ) = p(θ)p(ϕ), where p(ϕ) = p(λ)p(Γ1)p(Γ2). Meanwhile, Bayesian inference with MCMC requires the specification of proper prior distributions. For this purpose, conjugate priors are specified, e.g.,

λ ∼ IG(a, b),   Γ ∼ W(V0, n0),   θ ∼ N(μθ, Vθ),   (25)

where IG, W, and N are the inverse gamma, Wishart, and normal distributions, respectively [15].

3.2.3. Posterior distribution

Conditional on the full data, the independence of the parameters leads to the full joint posterior distribution

p(θ, β, ϕ | Ω) ∝ p(Ω | θ, β, ϕ) p(θ) p(ϕ).   (26)
To obtain p(θ | Ω), we must integrate out β and the hyperparameters ϕ from Eq. (26). Integrating out β yields

p(θ, ϕ | Ω) ∝ p(θ) p(ϕ) |V(θ)|^{−1/2} |H(θ)^T V(θ)^{−1} H(θ)|^{−1/2} exp{−(1/2)(Ω − H(θ)β̂)^T V(θ)^{−1} (Ω − H(θ)β̂)},   (27)

where β̂ = {H(θ)^T V(θ)^{−1} H(θ)}^{−1} H(θ)^T V(θ)^{−1} Ω is the generalized least squares estimate of β.
3.2.4. Calibration and prediction

Since the posterior distribution specified in Eq. (27) is a highly intractable function of ϕ, we need a Monte Carlo method to integrate out ϕ and obtain a numerical estimate of the posterior distribution of the calibration parameters θ. The formulation is given by

p(θ | Ω) = ∫ p(θ, ϕ | Ω) dϕ ≈ (1/M) ∑_{m=1}^{M} p(θ | ϕ^(m), Ω),   ϕ^(m) ∼ p(ϕ | Ω).   (28)
However, the purpose of calibrating the parameters is to predict the real process rather than to recover their exact values. Therefore, in practice, we are more interested in the posterior distribution of φ(x), which is also a Gaussian process, conditional on the calibration parameters and the estimated hyperparameters. The mean function of this GP is given by

E[φ(x) | θ, ϕ, Ω] = h(x, θ)^T β̂ + t(x, θ)^T V(θ)^{−1} (Ω − H(θ)β̂),   (29)

where h(x, θ)^T = (h1(x, θ)^T, h2(x)^T) and t(x, θ)^T = (V1((x, θ), D1), V1((x, θ), D2(θ)) + V2(x, D2)),   (30)

and covariance

Cov[φ(x), φ(x′) | θ, ϕ, Ω] = C1((x, θ), (x′, θ)) − t(x, θ)^T V(θ)^{−1} t(x′, θ).   (31)
Inference about φ(x) can again be implemented numerically via its posterior mean E[φ(x) | θ, ϕ, Ω] at the estimated θ and ϕ, by integrating Eq. (31) with respect to Eq. (28). Given the estimate of φ(x), the analysis of z becomes straightforward by applying the link function L(·) as described in model (13).
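The Monte Carlo integration above is typically carried out with MCMC. A minimal random-walk Metropolis sketch for a single calibration parameter θ, with a toy linear "simulator", a Gaussian likelihood with known noise variance λ, and a normal prior as in Eq. (25), can be written as follows (the data, step size, and chain length are all illustrative, and the full model with discrepancy and hyperparameters is deliberately collapsed away):

```python
import numpy as np

# Random-walk Metropolis sketch for the calibration posterior p(theta | data).
rng = np.random.default_rng(3)

theta_true = 3.17
x_obs = np.linspace(0, 1, 20)
y_obs = theta_true * x_obs + rng.normal(0, 0.05, size=20)   # synthetic data

def log_post(theta, mu0=3.0, v0=0.5, lam=0.05 ** 2):
    resid = y_obs - theta * x_obs                # toy simulator: y = theta * x
    log_lik = -0.5 * np.sum(resid ** 2) / lam    # Gaussian likelihood, variance lam
    log_prior = -0.5 * (theta - mu0) ** 2 / v0   # normal prior as in Eq. (25)
    return log_lik + log_prior

theta, lp = 3.0, log_post(3.0)                   # start at the prior mean
samples = []
for _ in range(5000):
    prop = theta + 0.05 * rng.standard_normal()  # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[1000:])                  # discard burn-in
```

The posterior concentrates near the data-generating value, mirroring the variance reduction from prior to posterior reported in Section 4.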

So far, we have calibrated a metamodel in the Bayesian framework using the experimental data, which accounts for parameter uncertainty and corrects for model discrepancy and experimental uncertainty.


4. Simulation study

This section demonstrates, with examples, the results obtained using the Bayesian uncertainty quantification framework for the metamaterial design problem with the models described in Sections 2 and 3. For both propagation and inverse assessment, the overall model is formulated as in Eq. (13), where the geometric variable w and the incident angle α are the input variables specified in the simulation, i.e., x = (w, α)^T. Thus, the model is expressed as

z(f, w, α) = L(f, φ(w, α, θ) + η(w, α)) + ε(f).   (32)

To demonstrate parameter calibration within metamaterial modeling and design, we consider an example where the real part of the permittivity of a dielectric material, εd, is taken as the calibration parameter, θ = εd, and its prior is given a normal distribution as in model (25), with mean μθ = 3 and variance Vθ = 0.5. Figure 5 illustrates the probability density function of this prior distribution. We demonstrate a measure of uncertainty propagation in Figure 6, where predictions at the 2.5% quantile (green or light gray) and 97.5% quantile (red or dark gray) of the samples are depicted to show the discrepancy induced by the uncertain input. Following the methodology introduced in Section 3, metamodels are established for the simulation data and the discrepancy function with Gaussian process regression models. In our example, we obtained 92 simulation data points to build the GPs and 20 observations for calibration. The posterior distribution of the calibration parameter is also displayed in Figure 5. After calibration, the distribution of the calibration parameter has a much smaller variance. The comparison between the prediction at the posterior mean (cyan curve) and the "real data" (blue dashed curve) is shown in Figure 6, where the discrepancy reduction is remarkable.

Figure 5.

Comparison of prior and posterior distributions of the calibration parameter. The mean of the Gaussian distribution shifts from 3 to 3.17, and the variance is much smaller after Bayesian calibration.

Figure 6.

The effect of uncertainty propagation and results of parameter calibration.


5. Conclusion

In this chapter, we reviewed several conventional models for functional response and presented the quasiphysical model. Compared with the conventional models, the quasiphysical model reveals physical insight more clearly and makes better use of historical experience. A two-stage method was presented to model the frequency response of metamaterials and facilitate the design process. Using this approach, we decomposed the complex modeling problem into a vector fitting-based equivalent circuit modeling process and a GP regression process, which can easily generate the mapping from the structure's geometric design. The predictive ability of this model enables a massive reduction in time-consuming simulations.

Another important topic of this chapter was the development and application of a Bayesian uncertainty quantification approach for functional response. Both forward uncertainty propagation and inverse assessment of the model were discussed, and a Bayesian framework was presented, with simulation and experimental results, to deal with model calibration for functional response. We envision that our two-stage approach can be generalized to model any functional response of rational form. With the Bayesian framework for the functional data of computer experiments, we were able to incorporate prior knowledge into the model and obtain a probabilistic measure of the uncertainty associated with metamaterial system design. This general methodology enables researchers and designers to achieve high efficiency and accuracy in modeling functional response with a considerably small amount of data. With a Bayesian calibration framework, we can constantly increase the precision of predictions of the functional response at unobserved sites, thus replacing expensive physical experiments.



This work was supported by the Shenzhen Key Laboratory of Data Science and Modeling.

© 2017 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

How to cite this chapter:

Xiao Guo, Yang He, Binbin Zhu, Yang Yang, Ke Deng, Ruopeng Liu and Chunlin Ji (2017). Bayesian Uncertainty Quantification for Functional Response. In: Jan Peter Hessling (Ed.), Uncertainty Quantification and Model Calibration. IntechOpen. DOI: 10.5772/intechopen.69385.
