Open access peer-reviewed chapter

Smoothing Spline ANOVA Models and their Applications in Complex and Massive Datasets

By Jingyi Zhang, Honghe Jin, Ye Wang, Xiaoxiao Sun, Ping Ma and Wenxuan Zhong

Submitted: May 3rd, 2017. Reviewed: February 22nd, 2018. Published: June 6th, 2018.

DOI: 10.5772/intechopen.75861


Abstract

Complex and massive datasets can now be easily acquired with newly developed data acquisition technologies. Although smoothing spline ANOVA models have proven useful in a variety of fields, such datasets pose challenges to their application. In this chapter, we present a selected review of smoothing spline ANOVA models and highlight some challenges and opportunities arising in massive datasets. We review two approaches that significantly reduce the computational cost of fitting the models. One real case study is used to illustrate the performance of the reviewed methods.

Keywords

  • smoothing spline
  • smoothing spline ANOVA models
  • reproducing kernel Hilbert space
  • penalized likelihood
  • basis sampling

1. Introduction

Among nonparametric models, smoothing splines have been widely used in many real applications. There is a rich body of literature on smoothing splines, including additive smoothing splines [1, 2, 3, 4, 5, 6], interaction smoothing splines [7, 8, 9, 10], and smoothing spline ANOVA (SSANOVA) models [11, 12, 13, 14].

In this chapter, we focus on studying the SSANOVA models. Suppose that the data $(y_i, x_i)$, $i = 1, 2, \dots, n$, are independent and identically distributed (i.i.d.) copies of $(Y, X)$, where $Y \in \mathcal{Y} \subseteq \mathbb{R}$ is the response variable and $X \in \mathcal{X} \subseteq \mathbb{R}^d$ is the covariate variable. We consider the regression model:

$$y_i = \eta(x_i) + \epsilon_i, \quad i = 1, 2, \dots, n, \tag{1}$$

where $y_i$ is the response, $\eta$ is a nonparametric function varying in an infinite-dimensional space, $x_i = (x_{i1}, \dots, x_{id})^T$ lies in the domain $\mathcal{X} \subseteq \mathbb{R}^d$, and $\epsilon_i \overset{\text{i.i.d.}}{\sim} N(0, \sigma^2)$. More general cases, in which the conditional distribution of $Y$ given $x$, denoted $Y \mid x$, follows a distribution other than the Gaussian, will be discussed later. Through the functional ANOVA, the nonparametric function $\eta$ in (1) can be decomposed into

$$\eta(x) = \eta_c + \sum_{j=1}^{d} \eta_j(x_j) + \sum_{k < j} \eta_{kj}(x_k, x_j) + \cdots,$$

where $\eta_c$ is a constant function, $\eta_j$ is the main effect function of $x_j$, $\eta_{kj}$ is the interaction effect of $x_k$ and $x_j$, and so on.

In model (1), $\eta$ can be estimated by minimizing the following penalized likelihood functional:

$$L(\eta) + \lambda J(\eta), \tag{2}$$

where $L(\eta)$ is a log-likelihood measuring the goodness of fit of $\eta$, $J(\eta)$ is a quadratic functional on $\eta$ that quantifies its smoothness, and $\lambda$ is the smoothing parameter balancing the trade-off between the goodness of fit and the smoothness of $\eta$ [11, 12, 13]. The computational complexity of estimating $\eta$ by minimizing (2) is of the order $O(n^3)$ for a sample of size $n$. Such high computational cost renders SSANOVA models impractical for massive datasets. In this chapter, we review two methods that lower the computational cost. One approach is the adaptive basis selection algorithm [14]. By carefully sampling a smaller set of basis functions conditional on the response variables, adaptive sampling reduces the computational cost to $O(n n^{*2})$, where $n^*$ is the number of sampled basis functions. The computational cost can also be reduced by the rounding algorithm [15], which significantly decreases the effective sample size to $u$ by rounding the data to a given precision, where $u \ll n$. After rounding, the computational cost drops dramatically to $O(u^3)$.

The rest of the chapter is organized as follows. Section 2 provides a detailed introduction to SSANOVA models and their estimation. The adaptive basis selection algorithm and the rounding algorithm are reviewed in Section 3. In the Appendix, we demonstrate the numerical implementations using the R software.

2. Smoothing spline ANOVA models

In this section, we first review smoothing spline models and the reproducing kernel Hilbert space. Second, we present how to decompose a nonparametric function on tensor product domains, which lays the theoretical foundation for SSANOVA models. Finally, we show the estimation of SSANOVA models and illustrate the models with a real data example.

2.1. Introduction of smoothing spline models

In model (1), $\eta$ lies in an infinite-dimensional space. One way to estimate it is to add some constraints and estimate $\eta$ in a finite-dimensional space. With the smoothness constraint, we estimate $\eta$ by minimizing the penalized likelihood functional (2), and the minimizer of (2) is called a smoothing spline.

Example 1 (Cubic smoothing splines)

Suppose that $Y \mid x$ follows a normal distribution, that is, $Y \mid x_i \sim N(\eta(x_i), \sigma^2)$. Then, the penalized likelihood functional (2) reduces to the penalized least squares:

$$\frac{1}{n}\sum_{i=1}^{n} \big(y_i - \eta(x_i)\big)^2 + \lambda \int_{\mathcal{X}} \ddot{\eta}(x)^2\, dx, \tag{3}$$

where $\ddot{\eta} = d^2\eta/dx^2$. The minimizer of (3) is called a cubic smoothing spline [16, 17, 18]. In (3), the first term quantifies the fidelity to the data, and the second term controls the roughness of the function.

Example 2 (Exponential family smoothing splines)

Suppose that $Y \mid x$ follows an exponential family distribution:

$$Y \mid x_i \sim \exp\!\big\{\big[y\,\eta(x_i) - b(\eta(x_i))\big]/a(\phi) + c(y, \phi)\big\},$$

where $a > 0$, $b$, and $c$ are known functions and $\phi$ is either known or a nuisance parameter. Then, $\eta$ can be estimated by minimizing the following penalized likelihood functional [19, 20]:

$$-\frac{1}{n}\sum_{i=1}^{n} \big[y_i\,\eta(x_i) - b(\eta(x_i))\big] + \lambda J(\eta). \tag{4}$$

Note that the cubic smoothing spline in Example 1 is a special case of exponential family smoothing splines, arising when $Y \mid x$ follows the Gaussian distribution.

The estimate of $\eta$ is sensitive to the smoothing parameter $\lambda$ (see Figure 1). Therefore, it is crucial to use a proper smoothing parameter selection method to choose $\lambda$. One of the most popular methods is generalized cross-validation (GCV) [21, 22]. More details will be discussed in Section 2.6.2.

Figure 1.

The data are generated by the model $y = 5 + e^{3x_1} + 10^6 x_2^{11}(1 - x_2)^6 + 10^4 x_2^{2}(1 - x_2)^{10} + 5\cos(2\pi(x_1 - x_2)) + \epsilon$, where $\epsilon \sim N(0, 1)$. Panels (a) and (b) show the data and the true function, respectively. The estimated functions obtained with different smoothing parameters are shown in panels (c), (d), and (e). We set $\lambda \to 0$ in panel (c) and $\lambda \to \infty$ in panel (e). The proper $\lambda$ selected by generalized cross-validation (GCV) is used in panel (d).
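To make the effect of $\lambda$ concrete, the following minimal R sketch fits a univariate smoothing spline at three smoothing levels using base R's stats::smooth.spline (the gss package used elsewhere in this chapter is demonstrated in the Appendix). The simulated curve and the lambda values are arbitrary choices for illustration, not the bivariate function of Figure 1.

## Under-smoothing, over-smoothing, and GCV-selected smoothing on toy data.
set.seed(1)
x <- runif(200)
y <- 5 + 2 * cos(3 * pi * x) + rnorm(200)

fit.rough  <- smooth.spline(x, y, lambda = 1e-8)  # lambda -> 0: nearly interpolates the data
fit.smooth <- smooth.spline(x, y, lambda = 10)    # lambda -> infinity: nearly a linear fit
fit.gcv    <- smooth.spline(x, y)                 # default cv = FALSE selects lambda by GCV

xg <- seq(0, 1, length.out = 200)
plot(x, y, col = "grey", xlab = "x", ylab = "y")
lines(predict(fit.rough, xg), col = 2)
lines(predict(fit.smooth, xg), col = 3)
lines(predict(fit.gcv, xg), col = 4)
legend("topright", legend = c("lambda -> 0", "lambda -> Inf", "GCV"), col = 2:4, lty = 1)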

2.2. Reproducing kernel Hilbert space

We assume that readers are familiar with Hilbert spaces: a Hilbert space is a complete vector space equipped with a well-defined inner product, which allows lengths and angles to be measured [23]. In a general Hilbert space, the continuity of a functional, which is required for minimizing (2) over $\mathcal{H} = \{\eta : J(\eta) < \infty\}$, is not always guaranteed. To circumvent this problem, we optimize (2) in a special Hilbert space named a reproducing kernel Hilbert space [24].

For each $g \in \mathcal{H}$, there exists a corresponding continuous linear functional $L_g$ such that $L_g f = \langle g, f \rangle$ for $f \in \mathcal{H}$, where $\langle \cdot, \cdot \rangle$ denotes the inner product in $\mathcal{H}$. Conversely, for any continuous linear functional $L$ of $\mathcal{H}$, an element $g_L \in \mathcal{H}$ can be found such that $\langle g_L, f \rangle = Lf$ for any $f \in \mathcal{H}$ [23]. This is known as the Riesz representation theorem.

Theorem 2.1 (Riesz representation)

Let $\mathcal{H}$ be a Hilbert space. For any continuous linear functional $L$ of $\mathcal{H}$, there uniquely exists an element $g_L \in \mathcal{H}$ such that

$$Lf = \langle g_L, f \rangle \quad \text{for all } f \in \mathcal{H},$$

where $g_L$ is called the representer of $L$. The uniqueness is in the sense that $g_1$ and $g_2$ are considered the same representer whenever $\|g_1 - g_2\| = 0$, where $\|\cdot\|$ denotes the norm in $\mathcal{H}$.

For a better construction of the estimator minimizing (2), one needs the continuity of the evaluation functional $[x]f = f(x)$. Roughly speaking, this means that if two functions $f$ and $g$ are close in norm, that is, $\|f - g\|$ is small, then $f$ and $g$ are also pointwise close, that is, $|f(x) - g(x)|$ is small for all $x$.

Reproducing kernel Hilbert space

Consider a Hilbert space $\mathcal{H}$ consisting of functions on a domain $\mathcal{X}$. For every element $x \in \mathcal{X}$, define an evaluation functional $[x]$ such that $[x]f = f(x)$. If all the evaluation functionals $[x]$, $x \in \mathcal{X}$, are continuous, then $\mathcal{H}$ is called a reproducing kernel Hilbert space.

By Theorem 2.1, for every evaluation functional $[x]$, there exists a corresponding function $R_x \in \mathcal{H}$ on $\mathcal{X}$ as the representer of $[x]$, such that $\langle R_x, f \rangle = [x]f = f(x)$ for all $f \in \mathcal{H}$. By the definition of the evaluation functional, it follows that

$$R_x(y) = \langle R_x, R_y \rangle = R_y(x). \tag{5}$$

The bivariate function $R(x, y) = \langle R_x, R_y \rangle$ is called the reproducing kernel of $\mathcal{H}$, which is unique if it exists. The name "reproducing kernel" comes from its reproducing property

$$\langle R(x, \cdot), f \rangle = f(x)$$

for any $f \in \mathcal{H}$. In general, a reproducing kernel Hilbert space defines a reproducing kernel function that is both symmetric and positive definite. In addition, the Moore-Aronszajn theorem states that every symmetric, positive definite kernel defines a unique reproducing kernel Hilbert space [25]; hence, one can construct a reproducing kernel Hilbert space simply by specifying its reproducing kernel.

We now introduce the concept of tensor sum decomposition. Suppose that $\mathcal{H}$ is a Hilbert space and $\mathcal{G}$ is a linear subspace of $\mathcal{H}$. The linear subspace $\mathcal{G}^c = \{f \in \mathcal{H} : \langle f, g \rangle = 0,\ \forall g \in \mathcal{G}\}$ is called the orthogonal complement of $\mathcal{G}$. It is easy to verify that for any $f \in \mathcal{H}$, there exists a unique decomposition $f = f_{\mathcal{G}} + f_{\mathcal{G}^c}$, where $f_{\mathcal{G}} \in \mathcal{G}$ and $f_{\mathcal{G}^c} \in \mathcal{G}^c$. This decomposition is called a tensor sum decomposition, denoted by $\mathcal{H} = \mathcal{G} \oplus \mathcal{G}^c$. Suppose that $\mathcal{H}_1$ and $\mathcal{H}_2$ are two Hilbert spaces with inner products $\langle \cdot, \cdot \rangle_1$ and $\langle \cdot, \cdot \rangle_2$. If the only common element of these two spaces is $0$, one can also define a tensor sum Hilbert space $\mathcal{H} = \mathcal{H}_1 \oplus \mathcal{H}_2$. For any $f, g \in \mathcal{H}$, one has the unique decompositions $f = f_1 + f_2$ and $g = g_1 + g_2$, where $f_1, g_1 \in \mathcal{H}_1$ and $f_2, g_2 \in \mathcal{H}_2$. Moreover, the inner product defined on $\mathcal{H}$ is $\langle f, g \rangle = \langle f_1, g_1 \rangle_1 + \langle f_2, g_2 \rangle_2$. The following theorem provides the rules for the tensor sum decomposition of a reproducing kernel Hilbert space.

Theorem 2.2. Suppose that $R_1$ and $R_2$ are the reproducing kernels of the reproducing kernel Hilbert spaces $\mathcal{H}_1$ and $\mathcal{H}_2$, respectively. If $\mathcal{H}_1 \cap \mathcal{H}_2 = \{0\}$, then $\mathcal{H} = \mathcal{H}_1 \oplus \mathcal{H}_2$ has the reproducing kernel $R = R_1 + R_2$.

Conversely, if the reproducing kernel $R$ of $\mathcal{H}$ can be decomposed into $R = R_1 + R_2$, where both $R_1$ and $R_2$ are nonnegative definite and they are orthogonal to each other, that is, $\langle R_1(x_1, \cdot), R_2(x_2, \cdot) \rangle = 0$ for all $x_1, x_2 \in \mathcal{X}$, then the spaces $\mathcal{H}_1$ and $\mathcal{H}_2$ corresponding to the kernels $R_1$ and $R_2$ form a tensor sum decomposition $\mathcal{H} = \mathcal{H}_1 \oplus \mathcal{H}_2$.

2.3. Representer theorem

In (2), the smoothness penalty term $J(\eta) = J(\eta, \eta)$ is nonnegative definite, that is, $J(\eta, \eta) \ge 0$, and hence it is a squared semi-norm on the reproducing kernel Hilbert space $\mathcal{H} = \{\eta : J(\eta) < \infty\}$. Denote by $\mathcal{N}_J = \{\eta \in \mathcal{H} : J(\eta) = 0\}$ the null space of $J(\eta)$ and by $\mathcal{H}_J$ its orthogonal complement. By the tensor sum decomposition $\mathcal{H} = \mathcal{N}_J \oplus \mathcal{H}_J$, one may decompose $\eta$ into two parts: one in the null space $\mathcal{N}_J$, which makes no contribution to the smoothness penalty, and the other in $\mathcal{H}_J$, "reproduced" by the reproducing kernel $R$ [12].

Theorem 2.3 (Representer theorem). There exist coefficient vectors $d = (d_1, \dots, d_M)^T \in \mathbb{R}^M$ and $c = (c_1, \dots, c_n)^T \in \mathbb{R}^n$ such that the minimizer of (2) has the form

$$\eta(x) = \sum_{k=1}^{M} d_k \xi_k(x) + \sum_{i=1}^{n} c_i R(x_i, x), \tag{6}$$

where $\{\xi_k\}_{k=1}^{M}$ is a basis of the null space $\mathcal{N}_J$ and $R$ is the reproducing kernel of $\mathcal{H}_J$.

This theorem indicates that although the minimization problem is in an infinite-dimensional space, the minimizer of (2) lies in a data-adaptive finite-dimensional space.

2.4. Function decomposition

The decomposition of a multivariate function is similar to the classical ANOVA. In this section, we present the functional ANOVA which lays the foundation for SSANOVA models.

2.4.1. One-way ANOVA decomposition

We consider a classical one-way ANOVA model $y_{ij} = \mu_i + \epsilon_{ij}$, where $y_{ij}$ is the observed data, $\mu_i$ is the treatment mean for $i = 1, \dots, K$ and $j = 1, \dots, J$, and the $\epsilon_{ij}$'s are random errors. The treatment mean $\mu_i$ can be further decomposed as $\mu_i = \mu + \alpha_i$, where $\mu$ is the overall mean and $\alpha_i$ is the treatment effect with the constraint $\sum_{i=1}^{K} \alpha_i = 0$.

Similar to the classical ANOVA decomposition, a univariate function $f(x)$ can be decomposed as

$$f = Af + (I - A)f = f_c + f_x, \tag{7}$$

where $A$ is an averaging operator that averages out the effect of $x$ and $I$ is the identity operator. The operator $A$ averages a function $f$ to a constant function $f_c$ and satisfies $A(I - A) = 0$. For example, one can take $Af = \int_0^1 f(x)\,dx$ in $L_1[0, 1] = \{f : \int_0^1 |f(x)|\,dx < \infty\}$. In (7), $f_c = Af$ is the mean function, and $f_x = (I - A)f$ is the treatment effect.
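As a quick numerical illustration of (7), the following R sketch applies the averaging operator $Af = \int_0^1 f(x)\,dx$, approximated on a grid, to a toy function of our own choosing.

## Decompose f into a constant component f_c = Af and an effect f_x = (I - A)f.
f  <- function(x) 2 + sin(2 * pi * x)
xg <- seq(0, 1, length.out = 1001)

fc <- mean(f(xg))      # grid approximation of Af = integral of f over [0, 1]; roughly 2
fx <- f(xg) - fc       # the treatment effect (I - A)f evaluated on the grid

round(fc, 3)           # mean component
round(mean(fx), 3)     # the effect component averages to (approximately) zero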

2.4.2. Multiway ANOVA decomposition

On a $d$-dimensional product domain $\mathcal{X} = \prod_{j=1}^{d} \mathcal{X}_j \subseteq \mathbb{R}^d$, a multivariate function $f(x_1, \dots, x_d)$ can be decomposed similarly to the one-way ANOVA decomposition. Let $A_j$, $j = 1, \dots, d$, be the averaging operator on $\mathcal{X}_j$; then $A_j f$ is a constant function on $\mathcal{X}_j$. One can define the ANOVA decomposition on $\mathcal{X}$ as

$$f = \prod_{j=1}^{d} \big[(I - A_j) + A_j\big] f = \sum_{S} \Big[\prod_{j \in S} (I - A_j) \prod_{j \notin S} A_j\Big] f = \sum_{S} f_S, \tag{8}$$

where $S \subseteq \{1, \dots, d\}$. The term $f_c = \prod_{j=1}^{d} A_j f$ is the constant function, $f_j = (I - A_j) \prod_{\alpha \ne j} A_\alpha f$ is the main effect term of $x_j$, $f_{\mu\nu} = (I - A_\mu)(I - A_\nu) \prod_{\alpha \ne \mu, \nu} A_\alpha f$ is the interaction of $x_\mu$ and $x_\nu$, and so on.

2.5. Some examples of model construction

Smoothing splines on $C^{(m)}[0, 1]$: If we define

$$J(\eta) = \int_0^1 \big(\eta^{(m)}\big)^2 dx$$

in the space $C^{(m)}[0, 1] = \{f : f^{(m)} \in L_2[0, 1]\}$, where $f^{(m)}$ denotes the $m$th derivative of $f$ and $L_2 = \{f : \int_0^1 f(x)^2 dx < \infty\}$, then the minimizer of (2) is called a polynomial smoothing spline.

Here, we use the inner product

$$\langle f, g \rangle = \sum_{\nu=0}^{m-1} \Big(\int_0^1 f^{(\nu)}(x)\,dx\Big)\Big(\int_0^1 g^{(\nu)}(x)\,dx\Big) + \int_0^1 f^{(m)}(x)\, g^{(m)}(x)\,dx. \tag{9}$$

One can easily check that (9) is a well-defined inner product in $C^{(m)}[0, 1]$, with $\mathcal{H}_0 = \{f : f^{(m)} = 0\}$ equipped with the inner product $\sum_{\nu=0}^{m-1} \big(\int_0^1 f^{(\nu)}(x)\,dx\big)\big(\int_0^1 g^{(\nu)}(x)\,dx\big)$ [21]. To construct the reproducing kernel, define

$$k_\nu(x) = \sum_{\mu = -\infty,\ \mu \ne 0}^{\infty} \frac{\exp(2\pi i \mu x)}{(2\pi i \mu)^\nu}, \tag{10}$$

where $i = \sqrt{-1}$. One can verify that $\int_0^1 k_\nu^{(\mu)}(x)\,dx = \delta_{\nu\mu}$ for $\nu, \mu = 0, 1, \dots, m-1$, where $\delta_{\nu\mu}$ is the Kronecker delta [26]. Indeed, $\{k_0, \dots, k_{m-1}\}$ forms an orthonormal basis of $\mathcal{H}_0$. Then, the reproducing kernel in $\mathcal{H}_0$ is

$$R_0(x, y) = \sum_{\nu=0}^{m-1} k_\nu(x)\, k_\nu(y).$$

For the space

$$\mathcal{H}_1 = \Big\{f : \int_0^1 f^{(\nu)}(x)\,dx = 0,\ \nu = 0, 1, \dots, m-1,\ f^{(m)} \in L_2[0, 1]\Big\},$$

one can check that the reproducing kernel in $\mathcal{H}_1$ is

$$R_1(x, y) = k_m(x)\, k_m(y) + (-1)^{m-1} k_{2m}(x - y)$$

(see details in [11], Section 2.3).
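For $m = 2$, the functions $k_1$, $k_2$, and $k_4$ have simple closed forms as scaled Bernoulli polynomials (see [11]), which makes $R_0$ and $R_1$ easy to compute explicitly. The following R sketch codes these kernels under that assumption; it is an illustrative implementation, not code taken from the gss package.

## Scaled Bernoulli polynomials k_nu(x) = B_nu(x)/nu! on [0, 1].
k1 <- function(x) x - 0.5
k2 <- function(x) (k1(x)^2 - 1/12) / 2
k4 <- function(x) (k1(x)^4 - k1(x)^2 / 2 + 7/240) / 24

## Reproducing kernels for m = 2 (cubic smoothing spline):
R0 <- function(x, y) 1 + k1(x) * k1(y)                # kernel of the null space H0
R1 <- function(x, y) k2(x) * k2(y) - k4(abs(x - y))   # kernel of H1
R  <- function(x, y) R0(x, y) + R1(x, y)              # kernel of C^(2)[0, 1]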

SSANOVA models on product domains: A natural way to construct a reproducing kernel Hilbert space on the product domain $\prod_{j=1}^{d} \mathcal{X}_j$ is to take the tensor product of spaces constructed on the marginal domains $\mathcal{X}_j$. According to the Moore-Aronszajn theorem, every nonnegative definite function $R$ corresponds to a reproducing kernel Hilbert space with $R$ as its reproducing kernel. Therefore, the construction of the tensor product reproducing kernel Hilbert space is induced by constructing its reproducing kernel.

Theorem 2.4. If $R_1(x_1, y_1)$ is nonnegative definite on $\mathcal{X}_1$ and $R_2(x_2, y_2)$ is nonnegative definite on $\mathcal{X}_2$, then $R(x, y) = R_1(x_1, y_1)\, R_2(x_2, y_2)$ is nonnegative definite on $\mathcal{X} = \mathcal{X}_1 \times \mathcal{X}_2$.

Theorem 2.4 implies that a reproducing kernel $R$ on a tensor product reproducing kernel Hilbert space can be derived from the reproducing kernels on the marginal domains. Indeed, let $\mathcal{H}_j$ be the space on $\mathcal{X}_j$ with reproducing kernel $R_j$, $j = 1, 2$. Then, $R = R_1 R_2$ is nonnegative definite on $\mathcal{X}_1 \times \mathcal{X}_2$. The space $\mathcal{H}$ corresponding to $R$ is called the tensor product space of $\mathcal{H}_1$ and $\mathcal{H}_2$, denoted by $\mathcal{H} = \mathcal{H}_1 \otimes \mathcal{H}_2$.

One can decompose each marginal space $\mathcal{H}_j$ into $\mathcal{H}_j = \mathcal{H}_{j0} \oplus \mathcal{H}_{j1}$, where $\mathcal{H}_{j0}$ denotes the averaging space and $\mathcal{H}_{j1}$ denotes its orthogonal complement. Then, by the discussion in Section 2.4, the one-way ANOVA decomposition on each marginal space can be generalized to the multidimensional space $\mathcal{H} = \bigotimes_{j=1}^{d} \mathcal{H}_j$ as

$$\mathcal{H} = \bigotimes_{j=1}^{d} \big(\mathcal{H}_{j0} \oplus \mathcal{H}_{j1}\big) = \bigoplus_{S} \Big[\bigotimes_{j \in S} \mathcal{H}_{j1} \otimes \bigotimes_{j \notin S} \mathcal{H}_{j0}\Big] = \bigoplus_{S} \mathcal{H}_S, \tag{11}$$

where $S$ runs over all subsets of $\{1, \dots, d\}$. The component $f_S$ in (8) lies in the space $\mathcal{H}_S$. Based on this decomposition, the minimizer of (2) is called a tensor product smoothing spline. One can construct a tensor product smoothing spline following Theorem 2.3, in which the reproducing kernel may be calculated in the same way as the tensor product decomposition (11).

In the following, we will give some examples of tensor product smoothing splines on product domains.

2.5.1. Smoothing splines on $\{1, \dots, K\} \times [0, 1]$

We construct the reproducing kernel Hilbert space by using

$$R_{10} = 1/K \quad \text{and} \quad R_{11} = I_{[x_1 = y_1]} - 1/K$$

on $\{1, \dots, K\}$. On $[0, 1]$, assume that $m = 2$; then we have

$$R_{20} = 1 + k_1(x_2)\, k_1(y_2)$$

and

$$R_{21} = k_2(x_2)\, k_2(y_2) - k_4(x_2 - y_2).$$

In this case, the space $\mathcal{H}$ can be further decomposed as

$$\mathcal{H} = \big(\mathcal{H}_{10} \oplus \mathcal{H}_{11}\big) \otimes \big(\mathcal{H}_{200} \oplus \mathcal{H}_{201} \oplus \mathcal{H}_{21}\big). \tag{12}$$

The reproducing kernels of the tensor product cubic spline on $\{1, \dots, K\} \times [0, 1]$ are listed in Table 1.

Subspace | Reproducing kernel
$\mathcal{H}_{10} \otimes \mathcal{H}_{200}$ | $1/K$
$\mathcal{H}_{10} \otimes \mathcal{H}_{201}$ | $k_1(x_2)\, k_1(y_2)/K$
$\mathcal{H}_{10} \otimes \mathcal{H}_{21}$ | $\big[k_2(x_2)\, k_2(y_2) - k_4(x_2 - y_2)\big]/K$
$\mathcal{H}_{11} \otimes \mathcal{H}_{200}$ | $I_{[x_1 = y_1]} - 1/K$
$\mathcal{H}_{11} \otimes \mathcal{H}_{201}$ | $\big(I_{[x_1 = y_1]} - 1/K\big)\, k_1(x_2)\, k_1(y_2)$
$\mathcal{H}_{11} \otimes \mathcal{H}_{21}$ | $\big(I_{[x_1 = y_1]} - 1/K\big)\big[k_2(x_2)\, k_2(y_2) - k_4(x_2 - y_2)\big]$

Table 1.

Reproducing kernels of (12) on $\{1, \dots, K\} \times [0, 1]$ when $m = 2$.
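As a sketch of how Table 1 can be used in computation, the marginal kernels can be coded directly and multiplied to form the kernels of the tensor product subspaces. Here k1, k2, and k4 are the closed forms from the previous sketch, and K = 3 is an arbitrary example value.

k1 <- function(x) x - 0.5
k2 <- function(x) (k1(x)^2 - 1/12) / 2
k4 <- function(x) (k1(x)^4 - k1(x)^2 / 2 + 7/240) / 24

K <- 3                                                       # size of the discrete domain {1, ..., K}
R10  <- function(x1, y1) 1 / K                               # averaging part on {1, ..., K}
R11  <- function(x1, y1) (x1 == y1) - 1 / K                  # contrast part on {1, ..., K}
R200 <- function(x2, y2) 1                                   # constant part on [0, 1]
R201 <- function(x2, y2) k1(x2) * k1(y2)                     # linear part on [0, 1]
R21  <- function(x2, y2) k2(x2) * k2(y2) - k4(abs(x2 - y2))  # smooth part on [0, 1]

## Example: the kernel of the subspace H11 (x) H21 in Table 1,
## evaluated at x = (2, 0.3) and y = (2, 0.7).
R11(2, 2) * R21(0.3, 0.7)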

On other product domains, for example $[0, 1]^2$, the tensor product reproducing kernel Hilbert space can be decomposed in a similar way. More examples are available in [11], Section 2.4.

2.5.1.1. General form

In general, a tensor product reproducing kernel Hilbert space can be specified as $\mathcal{H} = \bigoplus_j \mathcal{H}_j$, where $j \in B$ is a generic index. Suppose that $\mathcal{H}_j$ is equipped with a reproducing kernel $R_j$ and an inner product $\langle f, g \rangle_j$. Denote by $f_j$ the projection of $f$ onto $\mathcal{H}_j$. Then, an inner product in $\mathcal{H}$ can be defined as

$$J(f, g) = \sum_j \theta_j^{-1} \langle f_j, g_j \rangle_j, \tag{13}$$

where the $\theta_j \ge 0$ are tuning parameters. If a penalty $J$ in (2) has the form (13), the SSANOVA models can be defined on the space $\mathcal{H} = \bigoplus_j \mathcal{H}_j$ with the reproducing kernel

$$R = \sum_j \theta_j R_j. \tag{14}$$

2.6. Estimation

In this section, we show the procedure for estimating the minimizer $\hat{\eta}$ of (2) under the Gaussian assumption and for selecting the smoothing parameters.

2.6.1. Penalized least squares

We consider the same model as shown in (1); then $\eta$ can be estimated by minimizing the penalized least squares:

$$\frac{1}{n}\sum_{i=1}^{n} \big(y_i - \eta(x_i)\big)^2 + \lambda J(\eta). \tag{15}$$

Let $S$ denote the $n \times M$ matrix whose $(i, j)$th entry is $\xi_j(x_i)$ as in (6), and let $R$ denote the $n \times n$ matrix whose $(i, j)$th entry is $R(x_i, x_j)$ of the form (14). Then, based on Theorem 2.3, $\eta$ can be expressed as

$$\eta = Sd + Rc,$$

where $\eta = (\eta(x_1), \dots, \eta(x_n))^T$, $d = (d_1, \dots, d_M)^T$, and $c = (c_1, \dots, c_n)^T$. The least squares term in (15) becomes

$$\frac{1}{n}\sum_{i=1}^{n} \big(y_i - \eta(x_i)\big)^2 = \frac{1}{n} (y - Sd - Rc)^T (y - Sd - Rc),$$

where $y = (y_1, \dots, y_n)^T$.

By the reproducing property (5), the roughness penalty term can be expressed as

$$J(\eta) = \sum_{i=1}^{n}\sum_{j=1}^{n} c_i R(x_i, x_j) c_j = c^T R c.$$

Therefore, the penalized least squares criterion (15) becomes

$$\frac{1}{n} (y - Sd - Rc)^T (y - Sd - Rc) + \lambda c^T R c. \tag{16}$$

The penalized least squares (16) is a quadratic form in both $d$ and $c$. By differentiating (16), one obtains the linear system:

$$\begin{pmatrix} S^T S & S^T R \\ R^T S & R^T R + n\lambda R \end{pmatrix} \begin{pmatrix} d \\ c \end{pmatrix} = \begin{pmatrix} S^T y \\ R^T y \end{pmatrix}. \tag{17}$$

Note that (17) holds only for the penalized least squares (15), and hence the normality assumption is needed in this case.
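To illustrate (17), the following R sketch assembles $S$ and $R$ for a univariate cubic smoothing spline, using the closed-form kernel from Section 2.5, and solves the linear system directly. The data are simulated, and $\lambda$ is fixed by hand purely for illustration rather than selected by GCV.

set.seed(2)
n <- 100
x <- runif(n)
y <- 5 + 2 * cos(3 * pi * x) + rnorm(n)

k1 <- function(x) x - 0.5
k2 <- function(x) (k1(x)^2 - 1/12) / 2
k4 <- function(x) (k1(x)^4 - k1(x)^2 / 2 + 7/240) / 24
RK <- function(s, t) k2(s) * k2(t) - k4(abs(s - t))   # reproducing kernel of H_J (m = 2)

S <- cbind(1, k1(x))            # null-space basis: xi_1(x) = 1, xi_2(x) = k1(x)
R <- outer(x, x, RK)            # n x n kernel matrix with entries R(x_i, x_j)
lambda <- 1e-5                  # fixed smoothing parameter (illustration only)

## Build and solve the linear system (17).
LHS <- rbind(cbind(t(S) %*% S, t(S) %*% R),
             cbind(t(R) %*% S, t(R) %*% R + n * lambda * R))
RHS <- c(t(S) %*% y, t(R) %*% y)
dc  <- solve(LHS, RHS)
d   <- dc[1:2]
ch  <- dc[-(1:2)]

eta.hat <- as.vector(S %*% d + R %*% ch)   # fitted values at the design points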

2.6.2. Selection of smoothing parameters

In SSANOVA models, properly selecting the smoothing parameters is important for estimating $\eta$ [9, 27, 28], as shown in Figure 1. Here, we introduce the generalized cross-validation (GCV) method for smoothing parameter selection.

For multivariate predictors, the penalty term in (15) has the form

$$\lambda J(f) = \lambda \sum_{j=1}^{S} \theta_j^{-1} \langle f_j, f_j \rangle_j,$$

where $S$ is the number of smoothing parameters, which is related to the functional ANOVA decomposition, and the $\theta_j$'s are extra smoothing parameters. To avoid overparameterization, we treat $\boldsymbol{\lambda} = (\lambda/\theta_1, \dots, \lambda/\theta_S)^T$ as the effective smoothing parameters.

The GCV score is defined as

$$V(\lambda) = \frac{n^{-1} y^T \big(I - A(\lambda)\big)^2 y}{\big[n^{-1} \operatorname{tr}\big(I - A(\lambda)\big)\big]^2},$$

where $A(\lambda)$ is a symmetric matrix, similar to the hat matrix in linear regression, satisfying $\hat{y} = A(\lambda) y$. We can select a proper $\lambda$ by minimizing the GCV score [21].
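The GCV score can be computed directly from this definition once the smoothing matrix $A(\lambda)$ is available. The sketch below assumes a symmetric matrix A with $\hat{y} = A y$; continuing the sketch of (17) above, such a matrix can be formed as A <- cbind(S, R) %*% solve(LHS, t(cbind(S, R))).

## GCV score V(lambda) = [n^{-1} y' (I - A)^2 y] / [n^{-1} tr(I - A)]^2, A symmetric.
gcv.score <- function(y, A) {
  n   <- length(y)
  res <- y - A %*% y                   # (I - A) y, the residual vector
  num <- sum(res^2) / n                # numerator: n^{-1} y' (I - A)^2 y
  den <- ((n - sum(diag(A))) / n)^2    # denominator: [n^{-1} tr(I - A)]^2
  num / den
}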

2.7. Case study: Twitter data

Tweets in the contiguous United States were collected over five weekdays in January 2014. The dataset contains information on time, GPS location, and tweet counts (see Figure 2). To illustrate the application of SSANOVA models, we study the temporal and spatial patterns in these data.

Figure 2.

Heatmaps of tweet counts in the contiguous United States. (a) Tweet counts at 2:00 a.m. (b) Tweet counts at 6:00 p.m.

The bivariate function $\eta(x_1, x_2)$ is a function of time and location, where $x_1$ denotes the time and $x_2$ represents the longitude and latitude coordinates. We use a thin-plate spline for the spatial variable and a cubic spline for the time variable. As a rotation-free method, the thin-plate spline is popular for modeling spatial data [29, 30, 31]. For better interpretation, we decompose the function $\eta$ as

$$\eta(x_1, x_2) = \eta_c + \eta_1(x_1) + \eta_2(x_2) + \eta_{12}(x_1, x_2),$$

where $\eta_c$ is a constant function; $\eta_1$ and $\eta_2$ are the main effects of time and location, respectively; and $\eta_{12}$ is the space-time interaction effect.

The main effects of time and location are shown in Figure 3. In panel (a), the number of tweets clearly exhibits a periodic effect, attaining its maximum at 8:00 p.m. and its minimum at 5:00 a.m. The main effect of time shows the variation of Twitter usage in the United States over the day. In addition, panel (b) of Figure 3 shows how tweet counts vary across locations: there tend to be more tweets in the east than in the west and more tweets in coastal areas than inland. We use the scaled dot product

$$\pi = \hat{\eta}_{12}^T \hat{y} / \|\hat{y}\|^2$$

to quantify the percentage of the sum of squares of $\hat{y}$ attributable to the interaction [11], where $\hat{y} = (\hat{y}_1, \dots, \hat{y}_n)^T$ contains the predicted values of $\log(\#\text{Tweets})$ and $\hat{\eta}_{12} = (\eta_{12}(x_1), \dots, \eta_{12}(x_n))^T$ is the estimated interaction effect term, with $\eta_{12}(x) = \eta_{12}(x_1, x_2)$. In our fitted model, $\pi = 3 \times 10^{-16}$, which is so small that the interaction term is negligible. This indicates that the temporal pattern of Twitter usage does not differ significantly across locations in the contiguous United States.
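For completeness, the computation of $\pi$ is a one-liner. In the sketch below, yhat and eta12.hat are hypothetical vectors of fitted values and estimated interaction effects at the data points (with gss, both could be obtained from predict() calls on the fitted object).

## pi = (eta12.hat' yhat) / ||yhat||^2: the share of the sum of squares of the
## fitted values attributable to the interaction term.
pi.ssq <- function(eta12.hat, yhat) sum(eta12.hat * yhat) / sum(yhat^2)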

Figure 3.

(a) The main effect function of time (hours). (b) The main effect function of location.

3. Efficient approximation algorithm in massive datasets

In this section, we consider SSANOVA models in the big data setting. The computational cost of solving (17) is of the order $O(n^3)$, which poses a challenge to the application of SSANOVA models as the volume of data grows. An obvious way to reduce the computational load is to select a subset of basis functions at random; however, uniform sampling can fail to preserve the features of the data. In the following section, we present an adaptive basis selection method and show its advantages over uniform sampling [14]. Instead of selecting basis functions, another approach to reducing the computational cost is to shrink the original sample size via the rounding algorithm [15].

3.1. Adaptive basis selection

A natural way to select the basis functions is through uniform sampling. Suppose that we randomly select a subset $x^* = \{x_1^*, \dots, x_{n^*}^*\}$ from $\{x_1, \dots, x_n\}$, where $n^*$ is the subsample size. The selected basis functions are then $R(x_i^*, \cdot)$, $i = 1, \dots, n^*$, and one minimizes (2) in the effective model space

$$\mathcal{H}_E = \mathcal{N}_J \oplus \operatorname{span}\big\{R(x_i^*, \cdot),\ i = 1, 2, \dots, n^*\big\}.$$

The computational cost is reduced significantly to $O(n n^{*2})$ if $n^*$ is much smaller than $n$. Furthermore, it can be proven that the minimizer $\eta^*$ of (2) obtained by uniform basis selection has the same asymptotic convergence rate as the full-basis minimizer $\hat{\eta}$.

Although uniform basis selection reduces the computational cost and the corresponding $\eta^*$ achieves the optimal asymptotic convergence rate, it may occasionally fail to retain the features of the data. For example, when the data are unevenly distributed, it is hard for uniform sampling to capture features in regions with only a few data points. In [14], an adaptive basis selection method is proposed. The main idea is to sample more basis functions in regions where the response function changes rapidly and fewer basis functions in flat regions. The adaptive basis selection method proceeds as follows (a short R sketch of Steps 1-3 is given after the steps):

Step 1 Divide the range of the responses $\{y_i\}_{i=1}^{n}$ into $K$ disjoint intervals $S_1, \dots, S_K$. Denote by $|S_k|$ the number of observations falling in $S_k$.

Step 2 For each $S_k$, draw a random sample of size $n_k^*$ from the observations in that interval. Let $x^{k*} = \{x_1^{k*}, \dots, x_{n_k^*}^{k*}\}$ be the corresponding predictor values.

Step 3 Combine $x^{1*}, \dots, x^{K*}$ to form the set of sampled predictor values $\{x_1^*, \dots, x_{n^*}^*\}$, where $n^* = \sum_{k=1}^{K} n_k^*$.

Step 4 Define

$$\mathcal{H}_E = \mathcal{H}_0 \oplus \operatorname{span}\big\{R(x_i^*, \cdot),\ i = 1, 2, \dots, n^*\big\}$$

as the effective model space.
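A minimal R sketch of Steps 1-3 under simple assumptions: the range of the responses is split into K equal-width intervals, and an equal number of points is drawn from each nonempty interval (the allocation rule in [14] may differ). The function returns the indices of the sampled basis points $x_i^*$.

adaptive.basis.sample <- function(y, K = 5, n.star = 50) {
  ## Step 1: divide the range of the responses into K disjoint intervals.
  breaks <- seq(min(y), max(y), length.out = K + 1)
  strata <- cut(y, breaks = breaks, include.lowest = TRUE, labels = FALSE)
  ## Step 2: draw a random sample within each interval (equal allocation here).
  nk <- ceiling(n.star / K)
  idx <- unlist(lapply(seq_len(K), function(k) {
    pool <- which(strata == k)
    pool[sample.int(length(pool), size = min(nk, length(pool)))]
  }))
  ## Step 3: combine the samples; x[idx] are the sampled basis points.
  sort(idx)
}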

With adaptive basis selection, the minimizer of (2) keeps the same form as that in Theorem 2.3:

$$\eta_A(x) = \sum_{k=1}^{M} d_k \xi_k(x) + \sum_{i=1}^{n^*} c_i R(x_i^*, x).$$

Let $R_*$ be the $n \times n^*$ matrix whose $(i, j)$th entry is $R(x_i, x_j^*)$, and let $R_{**}$ be the $n^* \times n^*$ matrix whose $(i, j)$th entry is $R(x_i^*, x_j^*)$. Then, the estimator $\eta_A$ satisfies

$$\eta_A = S d_A + R_* c_A,$$

where $\eta_A = (\eta_A(x_1), \dots, \eta_A(x_n))^T$, $d_A = (d_1, \dots, d_M)^T$, and $c_A = (c_1, \dots, c_{n^*})^T$. Similar to (17), the linear system of equations in this case is

$$\begin{pmatrix} S^T S & S^T R_* \\ R_*^T S & R_*^T R_* + n\lambda R_{**} \end{pmatrix} \begin{pmatrix} d_A \\ c_A \end{pmatrix} = \begin{pmatrix} S^T y \\ R_*^T y \end{pmatrix}. \tag{18}$$

The computational complexity of solving (18) is of the order $O(n n^{*2})$, so the method decreases the computational cost significantly. It can also be shown that the adaptive sampling basis selection smoothing spline estimator $\eta_A$ has the same convergence property as the full-basis method. More details about the consistency theory can be found in [14]. Moreover, an adaptive sampling basis selection method for exponential family smoothing spline models was developed in [32].
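In practice, the gss package allows the set of basis ("knot") points to be specified by the user through the id.basis argument of ssanova, so adaptively sampled indices such as those from the sketch above can be supplied directly. In the sketch below, x and y are hypothetical data vectors.

library(gss)
idx     <- adaptive.basis.sample(y, K = 5, n.star = 50)  # indices from the sketch above
fit.abs <- ssanova(y~x, id.basis = idx)                  # fit using only the sampled basis functions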

3.2. Rounding algorithm

Rather than sampling a smaller set of basis functions to save computational resources, as in the adaptive basis selection method presented previously, [15] proposed a rounding algorithm to fit SSANOVA models in the context of big data.

Rounding algorithm: The rounding algorithm proceeds as follows:

Step 1 Assume that all predictors are continuous.

Step 2 Convert all predictors to the interval $[0, 1]$.

Step 3 Round the raw data using the transformation

$$z_{ij} = \operatorname{RD}(x_{ij}/r_j)\, r_j, \quad \text{for } i \in \{1, \dots, n\},\ j \in \{1, \dots, d\},$$

where the rounding parameter $r_j \in (0, 1]$ and the rounding function $\operatorname{RD}(\cdot)$ maps its input to the nearest integer.

Step 4 After replacing $x_{ij}$ with $z_{ij}$, redefine $S$ and $R$ in (16) and then estimate $\eta$ by minimizing the penalized least squares (16).

In Step 3, if the rounding parameter $r_j$ for the $j$th predictor equals 0.03, then each $z_{ij}$ is formed by rounding the corresponding $x_{ij}$ to the nearest multiple of 0.03.
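A minimal R sketch of Steps 2 and 3 for a single predictor: the rescaling to $[0, 1]$ uses the sample minimum and maximum, and the data are simulated purely to show how few unique values remain after rounding.

round.predictor <- function(x, r = 0.01) {
  z <- (x - min(x)) / (max(x) - min(x))   # Step 2: rescale the predictor to [0, 1]
  round(z / r) * r                        # Step 3: round to the nearest multiple of r
}

x <- runif(1e6)                           # one million raw observations
z <- round.predictor(x, r = 0.01)
length(unique(z))                         # at most 101 unique values when r = 0.01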

Evidently, the value of the rounding parameter influences the precision of the approximation: the smaller the rounding parameter, the better the model estimate and the higher the computational cost.

Computational benefits: We now briefly explain why the rounding algorithm reduces the computational load. After rounding, the fitted model depends on each predictor only through its unique rounded values, so the matrices in (16) need only be evaluated at the $u$ distinct design points, where $u$ denotes the number of uniquely observed values. For example, if the rounding parameter is $r = 0.01$, it is obvious that $u \le 101$. In conclusion, the user-tunable rounding algorithm can dramatically reduce the computational burden of fitting SSANOVA models from the order of $O(n^3)$ to $O(u^3)$, where $u \ll n$.

Case study: To illustrate the benefit of the rounding algorithm, we apply it to an electroencephalography (EEG) dataset. EEG is a monitoring method that records the electrical activity of the brain; it can be used to diagnose sleep disorders, epilepsy, encephalopathies, and brain death.

The dataset [33] contains 44 controls and 76 alcoholics. Each subject was repeatedly measured 10 times using a visual stimulus sampled at a frequency of 256 Hz. This yields $n = 10$ replications $\times$ 120 subjects $\times$ 256 time points $= 307{,}200$ observations. There are two predictors: time and group (control vs. alcoholic). We apply a cubic spline to the time effect and a nominal spline to the group effect.

After fitting the model to the unrounded data and to data rounded with rounding parameters $r = 0.01$ and $r = 0.05$ for the time covariate, we obtain the GCV, AIC [34], BIC [35], and running times summarized in Table 2.

 | GCV | AIC | BIC | CPU time (seconds)
Unrounded data | 85.9574 | 2,240,019 | 2,240,562 | 15.65
Rounded data with $r = 0.01$ | 86.6667 | 2,242,544 | 2,242,833 | 1.22
Rounded data with $r = 0.05$ | 86.7654 | 2,242,893 | 2,243,089 | 1.13

Table 2.

Fit statistics and running time for SSANOVA models.

Based on Table 2, there is no significant difference among the GCV scores or the AIC/BIC values. In addition, using the rounding algorithm reduces CPU time by about 92% compared with using the unrounded dataset.

4. Conclusion

Smoothing spline ANOVA (SSANOVA) models are widely used in applications [11, 20, 36, 37]. In this chapter, we introduced the general framework of SSANOVA models in Section 2. In Section 3, we discussed the models in the big data setting. When the volume of data grows, fitting the models is computationally intensive [11]. The adaptive basis selection algorithm [14] and the rounding algorithm [15] we presented can significantly reduce the computational cost.

Acknowledgments

This work is partially supported by the NIH grants R01 GM122080 and R01 GM113242; NSF grants DMS-1222718, DMS-1438957, and DMS-1228288; and NSFC grant 71331005.

Conflict of interest

The authors whose names are listed immediately below certify that they have NO affiliations with or involvement in any organization or entity with any financial interest (such as honoraria; educational grants; participation in speakers’ bureaus; membership, employment, consultancies, stock ownership, or other equity interests; and expert testimony or patent-licensing arrangements) or nonfinancial interest (such as personal or professional relationships, affiliations, knowledge, or beliefs) in the subject matter or materials discussed in this manuscript.

Appendix

In this appendix, we use two examples to illustrate how to implement smoothing spline ANOVA (SSANOVA) models in R. The gss package in R, which can be downloaded from CRAN (https://cran.r-project.org/), is utilized.

We now load the gss package:

library(gss)

Example I: Apply the smoothing spline to a simulated dataset.

Suppose that the predictor $x$ follows a uniform distribution on $[0, 1]$, and the response $y$ is generated by $y = 5 + 2\cos(3\pi x) + \epsilon$, where $\epsilon \sim N(0, 1)$.

x<-runif(100);y<-5+2*cos(3*pi*x)+rnorm(x)

Then, fit a cubic smoothing spline model:

cubic.fit <- ssanova(y~x)

To evaluate the predicted values, one uses:

new<-data.frame(x=seq(min(x),max(x),len=50))

est<-predict(cubic.fit,new,se=TRUE)

The se.fit parameter (abbreviated as se in the call above) indicates whether pointwise standard errors for the predicted values are returned. The predicted values and the Bayesian confidence interval, shown in Figure 4, are generated by:

Figure 4.

The solid red line represents the fitted values. The green lines represent the 95% Bayesian confidence interval. The raw data are shown as the circles.

plot(x,y,col=1)

lines(new$x,est$fit,col=2)

lines(new$x,est$fit+1.96*est$se,col=3)

lines(new$x,est$fit-1.96*est$se,col=3)

Example II: Apply the SSANOVA model to a real dataset.

In this example, we illustrate how to implement the SSANOVA model using the gss package. The data come from an experiment in which a single-cylinder engine was run with ethanol to see how the NOx concentration (nox) in the exhaust depends on the compression ratio (comp) and the equivalence ratio (equi). The fitted model contains the two predictors (comp and equi) and one interaction term.

data(nox)

nox.fit <- ssanova(log10(nox)~comp*equi, data=nox)

The predicted values are shown in Figure 5.

Figure 5.

The x-axis, y-axis, and z-axis represent the compression ratio, the equivalence ratio, and the predicted values, respectively.

x=seq(min(nox$comp),max(nox$comp),len=50)

y=seq(min(nox$equi),max(nox$equi),len=50)

temp <- function(x, y){

new=data.frame(comp=x,equi=y)

return(predict(nox.fit,new,se=FALSE))

}

z=outer(x, y, temp)

persp(x, y, z, theta = 30)

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
