Complex and massive datasets are now easily accessible thanks to newly developed data acquisition technologies. Although smoothing spline ANOVA models have proven useful in a variety of fields, such datasets pose challenges to the application of these models. In this chapter, we present a selected review of smoothing spline ANOVA models and highlight some challenges and opportunities in massive datasets. We review two approaches that significantly reduce the computational cost of fitting the models, and we use a real case study to illustrate the performance of the reviewed methods.
- smoothing spline
- smoothing spline ANOVA models
- reproducing kernel Hilbert space
- penalized likelihood
- basis sampling
Among nonparametric models, smoothing splines are widely used in real applications. There is a rich body of literature on smoothing splines, including additive smoothing splines [1, 2, 3, 4, 5, 6], interaction smoothing splines [7, 8, 9, 10], and smoothing spline ANOVA (SSANOVA) models [11, 12, 13, 14].
In this chapter, we focus on SSANOVA models. Suppose that the data $(x_i, y_i)$, $i = 1, \dots, n$, are independent and identically distributed (i.i.d.) copies of $(X, Y)$, where $Y$ is the response variable and $X$ is the covariate. We consider the regression model
$$ y_i = \eta(x_i) + \epsilon_i, \quad i = 1, \dots, n, \tag{1} $$
where $y_i$ is the response, $\eta$ is an unknown nonparametric function varying in an infinite-dimensional space, $x_i$ lies in a domain $\mathcal{X}$, and $\epsilon_i \overset{\text{i.i.d.}}{\sim} N(0, \sigma^2)$. More general cases, in which the conditional distribution of $y$ given $x$ follows a distribution other than the Gaussian, will be discussed later. The nonparametric function $\eta$ in (1) can be decomposed into a sum of main effects and interactions in an ANOVA fashion (see Section 2.4).
In model (1), $\eta$ can be estimated by minimizing the penalized likelihood functional
$$ -\frac{1}{n} \sum_{i=1}^{n} \ell\big(y_i; \eta(x_i)\big) + \lambda J(\eta), \tag{2} $$
where $\ell(y_i; \eta(x_i))$ is a log likelihood measuring the goodness of fit of $\eta$, $J(\eta)$ is a quadratic functional on $\eta$ quantifying its smoothness, and $\lambda > 0$ is the smoothing parameter balancing the trade-off between goodness of fit and smoothness [11, 12, 13]. The computational complexity of estimating $\eta$ by minimizing (2) is of the order $O(n^3)$ for a sample of size $n$, and these high computational costs render SSANOVA models impractical for massive datasets. In this chapter, we review two methods that lower the computational costs. One approach is the adaptive basis selection algorithm: by carefully sampling a smaller set of basis functions conditional on the response variable, adaptive sampling reduces the computational cost to $O(n n_*^2)$, where $n_*$ is the number of sampled basis functions. The cost can also be reduced by the rounding algorithm, which shrinks the effective sample size to $u$ unique values, $u \ll n$, by rounding the data to a given precision; after rounding, the computational cost drops to $O(u^3)$.
The rest of the chapter is organized as follows. Section 2 provides a detailed introduction to SSANOVA models and their estimation. Section 3 reviews the details of the adaptive basis selection and rounding algorithms. In the Appendix, we demonstrate numerical implementations using the R software.
2. Smoothing spline ANOVA models
In this section, we first review smoothing spline models and the reproducing kernel Hilbert space. We then present how to decompose a nonparametric function on tensor product domains, which lays the theoretical foundation for SSANOVA models. Finally, we show how SSANOVA models are estimated and illustrate them with a real data example.
2.1. Introduction of smoothing spline models
In model (1), $\eta$ lies in an infinite-dimensional space. One way to estimate it is to impose constraints and estimate $\eta$ in a finite-dimensional space. With the smoothness constraint, we estimate $\eta$ by minimizing the penalized likelihood functional (2); the minimizer of (2) is called a smoothing spline.
Example 1. Cubic smoothing splines
Suppose that $y_i$ follows a normal distribution, that is, $y_i \mid x_i \sim N(\eta(x_i), \sigma^2)$. Then the penalized likelihood functional (2) reduces to the penalized least squares
$$ \frac{1}{n} \sum_{i=1}^{n} \big( y_i - \eta(x_i) \big)^2 + \lambda J(\eta), $$
and with the roughness penalty $J(\eta) = \int_0^1 \big( \eta''(x) \big)^2\, dx$ the minimizer is called a cubic smoothing spline.
Example 2. Exponential family smoothing splines
Suppose that $y_i$ follows an exponential family distribution
$$ f(y \mid x) = \exp\left\{ \frac{y\, \eta(x) - b\big(\eta(x)\big)}{a(\phi)} + c(y, \phi) \right\}, $$
where $a > 0$, $b$, and $c$ are known functions and $\phi$ is the dispersion parameter. The minimizer of the penalized likelihood (2) is then called an exponential family smoothing spline.
Note that the cubic smoothing spline in Example 1 is a special case of the exponential family smoothing spline in which $y_i$ follows the Gaussian distribution.
The estimate of $\eta$ is sensitive to the smoothing parameter $\lambda$ (see Figure 1). Therefore, it is crucial to use a proper smoothing parameter selection method when estimating $\eta$. One of the most popular methods is generalized cross validation (GCV) [21, 22]; more details are given in Section 2.6.2.
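To see this sensitivity concretely, the following sketch (with hypothetical simulated data) compares under-, over-, and GCV-selected smoothing using base R's `smooth.spline()`, where `cv = FALSE` requests GCV-based selection:

```r
# Smoothing-parameter sensitivity on simulated data (hypothetical example).
set.seed(1)
x <- runif(200)
y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)

fit.small <- smooth.spline(x, y, spar = 0.2)  # lambda too small: wiggly fit
fit.large <- smooth.spline(x, y, spar = 1.2)  # lambda too large: oversmoothed
fit.gcv   <- smooth.spline(x, y, cv = FALSE)  # smoothing selected by GCV
fit.gcv$spar                                  # the GCV-selected value
```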
2.2. Reproducing kernel Hilbert space
We assume that readers are familiar with Hilbert spaces: a Hilbert space is a complete vector space equipped with an inner product, which allows lengths and angles to be measured. In a general Hilbert space, the continuity of the evaluation functional, which is required when minimizing (2), is not always guaranteed. To circumvent this problem, we optimize (2) in a special Hilbert space called a reproducing kernel Hilbert space.
For each $g$ in a Hilbert space $\mathcal{H}$, there exists a corresponding continuous linear functional $L_g$ such that $L_g(f) = \langle g, f \rangle$ for any $f \in \mathcal{H}$, where $\langle \cdot, \cdot \rangle$ denotes the inner product of $\mathcal{H}$. Conversely, for any continuous linear functional $L$ of $\mathcal{H}$, an element $g_L \in \mathcal{H}$ can be found such that $L(f) = \langle g_L, f \rangle$ for all $f \in \mathcal{H}$. This is known as the Riesz representation theorem.
Theorem 2.1 (Riesz representation). Let $\mathcal{H}$ be a Hilbert space. For any continuous linear functional $L$ of $\mathcal{H}$, there uniquely exists an element $g_L \in \mathcal{H}$ such that
$$ L(f) = \langle g_L, f \rangle, \quad \forall f \in \mathcal{H}, $$
where $g_L$ is called the representer of $L$. The uniqueness is in the sense that $g_1$ and $g_2$ are considered the same representer whenever $\| g_1 - g_2 \| = 0$, where $\| \cdot \|$ denotes the norm of $\mathcal{H}$.
For a better construction of the estimator minimizing (2), one needs the continuity of the evaluation functional $[x]f = f(x)$. Roughly speaking, this means that if two functions $f$ and $g$ are close in norm, that is, $\| f - g \|$ is small, then $f$ and $g$ are also pointwise close, that is, $| f(x) - g(x) |$ is small for all $x$.
Reproducing kernel Hilbert space
Consider a Hilbert space $\mathcal{H}$ consisting of functions on a domain $\mathcal{X}$. For every $x \in \mathcal{X}$, define the evaluation functional $[x]$ by $[x]f = f(x)$. If all the evaluation functionals $[x]$, $x \in \mathcal{X}$, are continuous, then $\mathcal{H}$ is called a reproducing kernel Hilbert space.
By Theorem 2.1, for every evaluation functional $[x]$ there exists a corresponding function $R_x \in \mathcal{H}$, the representer of $[x]$, such that $\langle R_x, f \rangle = [x]f = f(x)$ for any $f \in \mathcal{H}$. By the definition of the evaluation functional, it follows that
$$ R(x, y) := R_x(y) = \langle R_x, R_y \rangle = R_y(x). $$
The bivariate function $R(x, y)$ is called the reproducing kernel of $\mathcal{H}$, which is unique if it exists. The name "reproducing kernel" comes from its reproducing property
$$ \langle R(x, \cdot), f(\cdot) \rangle = f(x), \quad \forall f \in \mathcal{H}. \tag{5} $$
We now introduce the concept of tensor sum decomposition. Suppose that $\mathcal{H}$ is a Hilbert space and $\mathcal{H}_0$ is a closed linear subspace of $\mathcal{H}$. The linear subspace $\mathcal{H}_1 = \{ f \in \mathcal{H} : \langle f, g \rangle = 0,\ \forall g \in \mathcal{H}_0 \}$ is called the orthogonal complement of $\mathcal{H}_0$. It is easy to verify that for any $f \in \mathcal{H}$ there exists a unique decomposition $f = f_0 + f_1$, where $f_0 \in \mathcal{H}_0$ and $f_1 \in \mathcal{H}_1$. This decomposition is called a tensor sum decomposition, denoted by $\mathcal{H} = \mathcal{H}_0 \oplus \mathcal{H}_1$. Suppose that $\mathcal{H}_1$ and $\mathcal{H}_2$ are two Hilbert spaces with inner products $\langle \cdot, \cdot \rangle_1$ and $\langle \cdot, \cdot \rangle_2$. If the only common element of the two spaces is $0$, one can also define a tensor sum Hilbert space $\mathcal{H} = \mathcal{H}_1 \oplus \mathcal{H}_2$: any $f, g \in \mathcal{H}$ have unique decompositions $f = f_1 + f_2$ and $g = g_1 + g_2$ with $f_i, g_i \in \mathcal{H}_i$, and the inner product on $\mathcal{H}$ is $\langle f, g \rangle = \langle f_1, g_1 \rangle_1 + \langle f_2, g_2 \rangle_2$. The following theorem provides the rules for the tensor sum decomposition of a reproducing kernel Hilbert space.
Theorem 2.2. Suppose that $\mathcal{H}_1$ and $\mathcal{H}_2$ are reproducing kernel Hilbert spaces with reproducing kernels $R_1$ and $R_2$, respectively. If $\mathcal{H}_1 \cap \mathcal{H}_2 = \{0\}$, then $\mathcal{H} = \mathcal{H}_1 \oplus \mathcal{H}_2$ has the reproducing kernel $R = R_1 + R_2$.
Conversely, if the reproducing kernel $R$ of $\mathcal{H}$ can be decomposed into $R = R_1 + R_2$, where both $R_1$ and $R_2$ are nonnegative definite and orthogonal to each other, that is, $\langle R_1(x, \cdot), R_2(y, \cdot) \rangle = 0$ for any $x, y \in \mathcal{X}$, then the spaces $\mathcal{H}_1$ and $\mathcal{H}_2$ corresponding to the kernels $R_1$ and $R_2$ form a tensor sum decomposition $\mathcal{H} = \mathcal{H}_1 \oplus \mathcal{H}_2$.
2.3. Representer theorem
In (2), the smoothness penalty $J(f)$ is nonnegative definite, that is, $J(f) \geq 0$, and hence it is a squared semi-norm on the reproducing kernel Hilbert space $\mathcal{H}$. Denote by $\mathcal{N}_J = \{ f \in \mathcal{H} : J(f) = 0 \}$ the null space of $J$ and by $\mathcal{H}_J$ its orthogonal complement. By the tensor sum decomposition $\mathcal{H} = \mathcal{N}_J \oplus \mathcal{H}_J$, one may decompose $\eta$ into two parts: one in the null space $\mathcal{N}_J$, which makes no contribution to the smoothness penalty, and the other in $\mathcal{H}_J$, "reproduced" by the reproducing kernel $R_J$ of $\mathcal{H}_J$.
Theorem 2.3 (Representer theorem). There exist coefficient vectors $\mathbf{d} = (d_1, \dots, d_m)^T \in \mathbb{R}^m$ and $\mathbf{c} = (c_1, \dots, c_n)^T \in \mathbb{R}^n$ such that the minimizer of (2) has the form
$$ \eta(x) = \sum_{j=1}^{m} d_j \phi_j(x) + \sum_{i=1}^{n} c_i R_J(x_i, x), $$
where $\{\phi_j\}_{j=1}^{m}$ is a basis of the null space $\mathcal{N}_J$ and $R_J$ is the reproducing kernel of $\mathcal{H}_J$.
This theorem indicates that although the minimization is over an infinite-dimensional space, the minimizer of (2) lies in a data-adaptive finite-dimensional space.
2.4. Function decomposition
The decomposition of a multivariate function is similar to the classical ANOVA decomposition. In this section, we present the functional ANOVA, which lays the foundation of SSANOVA models.
2.4.1. One-way ANOVA decomposition
We consider a classical one-way ANOVA model $y_{ij} = \mu_i + \epsilon_{ij}$, where $y_{ij}$ is the observed data, $\mu_i$ is the treatment mean, $i = 1, \dots, k$, $j = 1, \dots, J$, and the $\epsilon_{ij}$'s are random errors. The treatment mean can be further decomposed as $\mu_i = \mu + \alpha_i$, where $\mu$ is the overall mean and $\alpha_i$ is the treatment effect under the constraint $\sum_{i=1}^{k} \alpha_i = 0$.
Similar to the classical ANOVA decomposition, a univariate function $f(x)$ can be decomposed as
$$ f = \mathcal{A} f + (I - \mathcal{A}) f, \tag{7} $$
where $\mathcal{A}$ is an averaging operator that averages out the effect of $x$ and $I$ is the identity operator. The operator $\mathcal{A}$ maps a function $f$ to a constant function and is idempotent, satisfying $\mathcal{A}^2 = \mathcal{A}$. For example, one can take $\mathcal{A} f = \int_0^1 f(x)\, dx$ in $L_2[0, 1]$. In (7), $\mathcal{A} f$ is the mean function, and $(I - \mathcal{A}) f$ is the treatment effect.
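As a quick illustration, the sketch below decomposes a hypothetical univariate function into its mean and treatment-effect components, using numerical integration for the averaging operator:

```r
# One-way functional ANOVA decomposition f = Af + (I - A)f on [0, 1],
# with Af taken to be the integral of f over [0, 1].
f  <- function(x) exp(x) + sin(2 * pi * x)  # a hypothetical function
Af <- integrate(f, 0, 1)$value              # mean component (a constant)
f1 <- function(x) f(x) - Af                 # treatment-effect component

# The effect component averages to zero, mirroring sum(alpha_i) = 0:
integrate(f1, 0, 1)$value
```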
2.4.2. Multiway ANOVA decomposition
On a $d$-dimensional product domain $\mathcal{X} = \prod_{j=1}^{d} \mathcal{X}_j$, a multivariate function $f(x_{\langle 1 \rangle}, \dots, x_{\langle d \rangle})$ can be decomposed similarly to the one-way ANOVA decomposition. Let $\mathcal{A}_j$, $j = 1, \dots, d$, be the averaging operator on $\mathcal{X}_j$; then $\mathcal{A}_j f$ is a constant function on $\mathcal{X}_j$. One can define the ANOVA decomposition on $\mathcal{X}$ as
$$ f = \prod_{j=1}^{d} \big( \mathcal{A}_j + (I - \mathcal{A}_j) \big) f = \sum_{S} \Big( \prod_{j \in S} (I - \mathcal{A}_j) \prod_{j \notin S} \mathcal{A}_j \Big) f = \sum_{S} f_S, \tag{8} $$
where $S$ ranges over all subsets of $\{1, \dots, d\}$. The term $f_\emptyset$ is the constant function, $f_{\{j\}}$ is the main effect of $x_{\langle j \rangle}$, $f_{\{j, k\}}$ is the interaction of $x_{\langle j \rangle}$ and $x_{\langle k \rangle}$, and so on.
2.5. Some examples of model construction
Smoothing splines on $[0, 1]$: If we define the scaled Bernoulli polynomials
$$ k_0(x) = 1, \quad k_1(x) = x - 0.5, \quad k_2(x) = \tfrac{1}{2}\big( k_1^2(x) - \tfrac{1}{12} \big), \quad k_4(x) = \tfrac{1}{24}\big( k_1^4(x) - \tfrac{1}{2} k_1^2(x) + \tfrac{7}{240} \big), $$
then for the cubic smoothing spline with $J(f) = \int_0^1 (f'')^2\, dx$ on $\mathcal{H} = \{ f : J(f) < \infty \}$, we use the inner product
$$ \langle f, g \rangle = \Big( \int_0^1 f\, dx \Big) \Big( \int_0^1 g\, dx \Big) + \Big( \int_0^1 f'\, dx \Big) \Big( \int_0^1 g'\, dx \Big) + \int_0^1 f'' g''\, dx. $$
One can verify that $\langle k_0, k_1 \rangle = 0$ and $\langle k_\nu, k_\mu \rangle = \delta_{\nu \mu}$ for $\nu, \mu \in \{0, 1\}$, where $\delta_{\nu \mu}$ is the Kronecker delta. Indeed, $\{k_0, k_1\}$ forms an orthonormal basis of the null space $\mathcal{N}_J$. Then, the reproducing kernel in $\mathcal{H}_J$ is
$$ R_J(x, y) = k_2(x) k_2(y) - k_4(|x - y|). $$
(See Section 2.3 of Gu's monograph for details.)
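These kernels are straightforward to code. The sketch below follows the Bernoulli-polynomial construction just described; the evaluation point is arbitrary:

```r
# Scaled Bernoulli polynomials k_nu(x) = B_nu(x)/nu! and the cubic spline
# reproducing kernels on [0, 1] from the construction above.
k1 <- function(x) x - 0.5
k2 <- function(x) (k1(x)^2 - 1 / 12) / 2
k4 <- function(x) (k1(x)^4 - k1(x)^2 / 2 + 7 / 240) / 24

RJ <- function(x, y) k2(x) * k2(y) - k4(abs(x - y))  # kernel of H_J
R  <- function(x, y) 1 + k1(x) * k1(y) + RJ(x, y)    # null-space part + R_J
R(0.3, 0.7)
```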
SSANOVA models on product domains: A natural way to construct a reproducing kernel Hilbert space on a product domain is to take the tensor product of spaces constructed on the marginal domains $\mathcal{X}_j$. According to the Moore-Aronszajn theorem, every nonnegative definite function $R$ corresponds to a reproducing kernel Hilbert space with $R$ as its reproducing kernel. Therefore, the construction of a tensor product reproducing kernel Hilbert space reduces to constructing its reproducing kernel.
Theorem 2.4. If $R_{\langle 1 \rangle}(x_{\langle 1 \rangle}, y_{\langle 1 \rangle})$ is nonnegative definite on $\mathcal{X}_1$ and $R_{\langle 2 \rangle}(x_{\langle 2 \rangle}, y_{\langle 2 \rangle})$ is nonnegative definite on $\mathcal{X}_2$, then $R(x, y) = R_{\langle 1 \rangle}(x_{\langle 1 \rangle}, y_{\langle 1 \rangle})\, R_{\langle 2 \rangle}(x_{\langle 2 \rangle}, y_{\langle 2 \rangle})$ is nonnegative definite on $\mathcal{X} = \mathcal{X}_1 \times \mathcal{X}_2$.
Theorem 2.4 implies that a reproducing kernel on a tensor product reproducing kernel Hilbert space can be derived from the reproducing kernels on the marginal domains. Indeed, let $\mathcal{H}_{\langle j \rangle}$ be the space on $\mathcal{X}_j$ with reproducing kernel $R_{\langle j \rangle}$, $j = 1, 2$. Then $R = R_{\langle 1 \rangle} R_{\langle 2 \rangle}$ is nonnegative definite on $\mathcal{X}_1 \times \mathcal{X}_2$, and the space $\mathcal{H}$ corresponding to $R$ is called the tensor product space of $\mathcal{H}_{\langle 1 \rangle}$ and $\mathcal{H}_{\langle 2 \rangle}$, denoted by $\mathcal{H} = \mathcal{H}_{\langle 1 \rangle} \otimes \mathcal{H}_{\langle 2 \rangle}$.
One can decompose each marginal space as $\mathcal{H}_{\langle j \rangle} = \mathcal{H}_{0 \langle j \rangle} \oplus \mathcal{H}_{1 \langle j \rangle}$, where $\mathcal{H}_{0 \langle j \rangle}$ denotes the averaging space and $\mathcal{H}_{1 \langle j \rangle}$ its orthogonal complement. Then, by the discussion in Section 2.4, the one-way ANOVA decomposition on each marginal space generalizes to the multidimensional space as
$$ \mathcal{H} = \bigotimes_{j=1}^{d} \mathcal{H}_{\langle j \rangle} = \bigoplus_{S} \Big\{ \Big( \bigotimes_{j \in S} \mathcal{H}_{1 \langle j \rangle} \Big) \otimes \Big( \bigotimes_{j \notin S} \mathcal{H}_{0 \langle j \rangle} \Big) \Big\} = \bigoplus_{S} \mathcal{H}_S, \tag{11} $$
where $S$ ranges over all subsets of $\{1, \dots, d\}$. The component $f_S$ in (8) lies in the space $\mathcal{H}_S$. Based on this decomposition, the minimizer of (2) is called a tensor product smoothing spline. One can construct a tensor product smoothing spline following Theorem 2.3, in which the reproducing kernel of each component space is the product of the corresponding marginal kernels, as in (11).
In the following, we will give some examples of tensor product smoothing splines on product domains.
2.5.1. Smoothing splines on $[0, 1]^2$
We construct the reproducing kernel Hilbert space on $[0, 1]^2$ by taking the tensor product of two marginal cubic spline spaces on $[0, 1]$, each constructed as in Section 2.5. In this case, the space can be further decomposed as
$$ \mathcal{H} = \bigotimes_{j=1}^{2} \big( \mathcal{H}_{0 \langle j \rangle} \oplus \mathcal{H}_{1 \langle j \rangle} \big) = \bigoplus_{S \subseteq \{1, 2\}} \mathcal{H}_S. \tag{12} $$
The reproducing kernels of the components in (12), for the tensor product cubic spline on $[0, 1]^2$, are listed in Table 1.
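To make the construction concrete, the following sketch builds Table-1-style component kernels on $[0, 1]^2$ as products of the marginal kernels from Section 2.5; the particular components shown are illustrative, and the constant space is taken to have kernel 1:

```r
# Tensor-product component kernels on [0,1]^2 built from marginal cubic
# spline kernels (Theorem 2.4).
k1 <- function(x) x - 0.5
k2 <- function(x) (k1(x)^2 - 1 / 12) / 2
k4 <- function(x) (k1(x)^4 - k1(x)^2 / 2 + 7 / 240) / 24
Rc <- function(x, y) k2(x) * k2(y) - k4(abs(x - y))  # marginal contrast kernel

R.main1 <- function(x, y) Rc(x[1], y[1])                   # main effect of x<1>
R.main2 <- function(x, y) Rc(x[2], y[2])                   # main effect of x<2>
R.inter <- function(x, y) Rc(x[1], y[1]) * Rc(x[2], y[2])  # interaction

R.inter(c(0.2, 0.8), c(0.5, 0.5))
```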
On other product domains, for example, products involving a discrete marginal domain $\{1, \dots, K\}$, the tensor product reproducing kernel Hilbert space can be decomposed in a similar way. More examples are available in Section 2.4 of Gu's monograph.
2.5.2. General form
In general, a tensor product reproducing kernel Hilbert space can be specified as $\mathcal{H} = \bigoplus_{\beta} \mathcal{H}_\beta$, where $\beta$ is a generic index. Suppose that $\mathcal{H}_\beta$ is equipped with a reproducing kernel $R_\beta$ and an inner product $\langle \cdot, \cdot \rangle_\beta$. Denote by $f_\beta$ the projection of $f$ onto $\mathcal{H}_\beta$. Then an inner product in $\mathcal{H}$ can be defined as
$$ J(f, g) = \sum_{\beta} \theta_\beta^{-1} \langle f_\beta, g_\beta \rangle_\beta, $$
where the $\theta_\beta > 0$ are tuning parameters and $J(f, f)$ serves as the smoothness penalty in (2).
2.6. Estimation
In this section, we show the procedure for estimating the minimizer of (2) under the Gaussian assumption and for selecting the smoothing parameters.
2.6.1. Penalized least squares
We consider the same model as in (1); then $\eta$ can be estimated by minimizing the penalized least squares
$$ \frac{1}{n} \sum_{i=1}^{n} \big( y_i - \eta(x_i) \big)^2 + \lambda J(\eta). \tag{15} $$
By Theorem 2.3, write $\eta(x) = \sum_{j=1}^{m} d_j \phi_j(x) + \sum_{i=1}^{n} c_i R_J(x_i, x)$, and let $\mathbf{Y} = (y_1, \dots, y_n)^T$, $S$ be the $n \times m$ matrix with entries $S_{ij} = \phi_j(x_i)$, and $R$ be the $n \times n$ matrix with entries $R_{ij} = R_J(x_i, x_j)$. The least squares term in (15) becomes
$$ \frac{1}{n} \| \mathbf{Y} - S \mathbf{d} - R \mathbf{c} \|^2, \tag{16} $$
where $\mathbf{d} = (d_1, \dots, d_m)^T$ and $\mathbf{c} = (c_1, \dots, c_n)^T$.
By the reproducing property (5), the roughness penalty term can be expressed as
$$ J(\eta) = \mathbf{c}^T R \mathbf{c}. $$
Therefore, the penalized least squares criterion (15) becomes
$$ \frac{1}{n} \| \mathbf{Y} - S \mathbf{d} - R \mathbf{c} \|^2 + \lambda\, \mathbf{c}^T R \mathbf{c}, \tag{17} $$
which is minimized over the coefficient vectors $\mathbf{c}$ and $\mathbf{d}$; setting the derivatives to zero yields a linear system of equations for $(\mathbf{d}, \mathbf{c})$.
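For small $n$, the system can be solved directly. The sketch below uses the classical stationarity conditions $(R + n\lambda I)\mathbf{c} + S\mathbf{d} = \mathbf{Y}$ and $S^T \mathbf{c} = 0$, which follow from differentiating (17); the data and the value of $\lambda$ are hypothetical:

```r
# Direct solution of the penalized least squares (17) for a cubic spline,
# using the Bernoulli-polynomial kernel of Section 2.5 (toy-sized example).
k1 <- function(x) x - 0.5
k2 <- function(x) (k1(x)^2 - 1 / 12) / 2
k4 <- function(x) (k1(x)^4 - k1(x)^2 / 2 + 7 / 240) / 24
RJ <- function(x, y) k2(x) * k2(y) - k4(abs(x - y))

set.seed(2)
n <- 100
x <- runif(n)
y <- sin(2 * pi * x) + rnorm(n, sd = 0.3)  # hypothetical data
lambda <- 1e-4                             # fixed, illustrative value

S <- cbind(1, k1(x))                       # null-space basis {1, k1}
R <- outer(x, x, RJ)                       # n x n kernel matrix

# Stationarity conditions: (R + n*lambda*I) c + S d = Y and S^T c = 0.
M   <- rbind(cbind(R + n * lambda * diag(n), S),
             cbind(t(S), matrix(0, 2, 2)))
sol  <- solve(M, c(y, 0, 0))
cvec <- sol[1:n]
d    <- sol[n + 1:2]

# Evaluate the fitted function at new points.
eta.hat <- function(t)
  d[1] + d[2] * k1(t) + sapply(t, function(s) sum(cvec * RJ(x, s)))
eta.hat(c(0.25, 0.5, 0.75))
```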
2.6.2. Selection of smoothing parameters
In SSANOVA models, properly selecting the smoothing parameters is important for estimating $\eta$ [9, 27, 28], as shown in Figure 1. Here, we introduce the generalized cross validation (GCV) method for smoothing parameter selection.
For multivariate predictors, the penalty term in (15) takes the form
$$ \lambda J(\eta) = \lambda \sum_{\beta=1}^{p} \theta_\beta^{-1} J_\beta(\eta_\beta), $$
where $p$ is the number of smoothing parameters, determined by the functional ANOVA decomposition, and the $\theta_\beta$'s are extra smoothing parameters. To avoid overparameterization, we treat $\lambda_\beta = \lambda / \theta_\beta$ as the effective smoothing parameters.
A GCV score is defined as
$$ V(\lambda) = \frac{ n^{-1} \mathbf{Y}^T \big( I - A(\lambda) \big)^2 \mathbf{Y} }{ \big[ n^{-1} \operatorname{tr}\big( I - A(\lambda) \big) \big]^2 }, $$
where $A(\lambda)$ is a symmetric matrix similar to the hat matrix in linear regression, mapping $\mathbf{Y}$ to the fitted values. A proper $\lambda$ is selected by minimizing the GCV score $V(\lambda)$.
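As a sketch, the GCV score can be computed directly from $\mathbf{Y}$ and the smoothing matrix; here $A(\lambda)$ is assumed to be available, for example as the matrix mapping $\mathbf{Y}$ to the fitted values in the previous sketch:

```r
# GCV score V(lambda) given the response vector Y and smoothing matrix A.
gcv <- function(Y, A) {
  n   <- length(Y)
  res <- (diag(n) - A) %*% Y                 # residuals (I - A) Y
  (sum(res^2) / n) / (sum(diag(diag(n) - A)) / n)^2
}
```

In practice, $V$ is minimized over a grid of $\lambda$ (and the $\theta_\beta$'s), and efficient algorithms avoid forming $A(\lambda)$ explicitly.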
2.7. Case study: Twitter data
Tweets in the contiguous United States were collected over five weekdays in January 2014. The dataset contains the time, GPS location, and tweet counts (see Figure 2). To illustrate the application of SSANOVA models, we study the temporal and spatial patterns in these data.
We model the tweet intensity as a bivariate function $\eta(t, \mathbf{s})$ of time and location, where $t$ denotes the time of day and $\mathbf{s}$ represents the longitude and latitude coordinates. We use a thin-plate spline for the spatial variable and a cubic spline for the time variable. As a rotation-free method, the thin-plate spline is popular for modeling spatial data [29, 30, 31]. For better interpretation, we decompose the function as
$$ \eta(t, \mathbf{s}) = \eta_\emptyset + \eta_1(t) + \eta_2(\mathbf{s}) + \eta_{12}(t, \mathbf{s}), $$
where $\eta_\emptyset$ is a constant function; $\eta_1$ and $\eta_2$ are the main effects of time and location, respectively; and $\eta_{12}$ is the spatial-temporal interaction effect.
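A hypothetical sketch of such a fit with gss is given below; the data frame, variable names, and aggregation are invented for illustration, and we rely on gss's default behavior of treating numeric vectors as cubic splines and matrices as thin-plate splines:

```r
# Hypothetical time-by-location SSANOVA fit; data are simulated stand-ins.
library(gss)
set.seed(6)
m <- 500
tweets <- data.frame(time  = runif(m, 0, 24),   # hour of day
                     count = rpois(m, 10))      # tweet counts
tweets$location <- cbind(runif(m, -125, -67),   # longitude
                         runif(m, 25, 49))      # latitude

# Matrix terms get thin-plate splines; numeric terms get cubic splines.
fit.tw <- ssanova(count ~ time * location, data = tweets)
```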
The main effects of time and location are shown in Figure 3. In panel (a), the number of tweets clearly exhibits a periodic effect, attaining its maximum at 8:00 p.m. and its minimum at 5:00 a.m.; the main effect of time thus shows the variation of Twitter usage over the day in the United States. In addition, from panel (b) of Figure 3, we can infer how tweet counts vary across locations: there tend to be more tweets in the east than in the west, and more in the coastal zones than inland.
3. Efficient approximation algorithms for massive datasets
In this section, we consider SSANOVA models in big data settings. The computational cost of solving (17) is of the order $O(n^3)$, which poses a challenge to the application of SSANOVA models as the volume of data grows. An obvious way to reduce the computational load is to select a subset of basis functions at random; however, uniform sampling can fail to capture the features of the data. In the following, we present an adaptive basis selection method and show its advantages over uniform sampling. Instead of selecting basis functions, another approach to reducing the computational cost is to shrink the original sample size via a rounding algorithm.
3.1. Adaptive basis selection
A natural way to select basis functions is uniform sampling. Suppose that we randomly select a subset $\{x_1^*, \dots, x_{n_*}^*\} \subset \{x_i\}_{i=1}^{n}$, where $n_*$ is the subsample size. The kernel matrix then shrinks to the $n \times n_*$ matrix with entries $R_J(x_i, x_j^*)$, $i = 1, \dots, n$, $j = 1, \dots, n_*$. One then minimizes (17) in the effective model space
$$ \mathcal{H}_E = \mathcal{N}_J \oplus \operatorname{span}\{ R_J(x_j^*, \cdot),\ j = 1, \dots, n_* \}. $$
The computational cost is reduced significantly, to $O(n n_*^2)$, if $n_*$ is much smaller than $n$. Furthermore, it can be proven that the minimizer of (2) obtained by uniform basis selection has the same asymptotic convergence rate as the full basis minimizer.
Although uniform basis selection reduces the computational cost and the corresponding estimator achieves the optimal asymptotic convergence rate, it may occasionally fail to retain the features of the data. For example, when the data are unevenly distributed, uniform sampling is unlikely to capture features supported by only a few data points. An adaptive basis selection method has therefore been proposed. The main idea is to sample more basis functions in regions where the response function changes rapidly and fewer basis functions over flat regions. The method proceeds as follows (a code sketch is given at the end of this subsection):
Step 1. Divide the range of the responses $\{y_i\}_{i=1}^{n}$ into $K$ disjoint intervals $S_1, \dots, S_K$. Denote by $|S_k|$ the number of observations in $S_k$.
Step 2. For each $S_k$, draw a random sample of size $n_k$ from the observations in this collection. Let $\{x_{k1}^*, \dots, x_{k n_k}^*\}$ be the sampled predictor values.
Step 3. Combine the samples to form the set of sampled predictor values $\{x_j^*\}_{j=1}^{n_*}$, where $n_* = \sum_{k=1}^{K} n_k$.
Step 4. Define the effective model space
$$ \mathcal{H}_E = \mathcal{N}_J \oplus \operatorname{span}\{ R_J(x_j^*, \cdot),\ j = 1, \dots, n_* \}, $$
and minimize (17) over $\mathcal{H}_E$.
Let $R_*$ be the $n \times n_*$ matrix whose $(i, j)$th entry is $R_J(x_i, x_j^*)$, and let $R_{**}$ be the $n_* \times n_*$ matrix whose $(i, j)$th entry is $R_J(x_i^*, x_j^*)$. Then the estimator $\hat{\eta}_E$ satisfies
$$ \hat{\eta}_E(x) = \sum_{j=1}^{m} d_j \phi_j(x) + \sum_{j=1}^{n_*} c_j R_J(x_j^*, x), $$
where $\mathbf{d} = (d_1, \dots, d_m)^T$ and $\mathbf{c} = (c_1, \dots, c_{n_*})^T$, and the criterion (17) becomes $\frac{1}{n} \| \mathbf{Y} - S \mathbf{d} - R_* \mathbf{c} \|^2 + \lambda\, \mathbf{c}^T R_{**} \mathbf{c}$. Similar to (17), the linear system of equations in this case is
$$ \begin{pmatrix} S^T S & S^T R_* \\ R_*^T S & R_*^T R_* + n \lambda R_{**} \end{pmatrix} \begin{pmatrix} \mathbf{d} \\ \mathbf{c} \end{pmatrix} = \begin{pmatrix} S^T \mathbf{Y} \\ R_*^T \mathbf{Y} \end{pmatrix}. \tag{18} $$
The computational complexity of solving (18) is of the order $O(n n_*^2)$, so the method decreases the computational cost significantly. It can also be shown that the adaptive sampling basis selection smoothing spline estimator enjoys the same convergence properties as the full basis method. Moreover, an adaptive sampling basis selection method for exponential family smoothing spline models has also been developed.
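The following sketch walks through Steps 1-4 on hypothetical data, with illustrative choices of $K$, the per-slice sample sizes, and $\lambda$, and solves the reduced system (18) directly:

```r
# Adaptive basis selection (Steps 1-4) with the cubic spline kernel.
k1 <- function(x) x - 0.5
k2 <- function(x) (k1(x)^2 - 1 / 12) / 2
k4 <- function(x) (k1(x)^4 - k1(x)^2 / 2 + 7 / 240) / 24
RJ <- function(x, y) k2(x) * k2(y) - k4(abs(x - y))

set.seed(3)
n <- 5000
x <- runif(n)
y <- sin(2 * pi * x^3) + rnorm(n, sd = 0.2)  # hypothetical, unevenly wiggly

# Step 1: slice the response range into K disjoint intervals.
K <- 10
slice <- cut(y, breaks = K, labels = FALSE)

# Steps 2-3: sample predictor values within each slice and combine.
nk <- 5                                      # basis points drawn per slice
xstar <- unlist(lapply(split(x, slice), function(xs)
  xs[sample.int(length(xs), min(nk, length(xs)))]))
nstar <- length(xstar)

# Step 4: solve the reduced linear system (18).
lambda <- 1e-5
S      <- cbind(1, k1(x))                    # n x m null-space basis
Rstar  <- outer(x, xstar, RJ)                # n  x n* kernel matrix R_*
Rstar2 <- outer(xstar, xstar, RJ)            # n* x n* kernel matrix R_**

M   <- rbind(cbind(crossprod(S), crossprod(S, Rstar)),
             cbind(crossprod(Rstar, S),
                   crossprod(Rstar) + n * lambda * Rstar2))
rhs <- c(crossprod(S, y), crossprod(Rstar, y))
coefs <- solve(M, rhs)                       # d first, then c
```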
3.2. Rounding algorithm
Other than sampling a smaller set of basis functions to save computational resources, as in the adaptive basis selection method presented above, a rounding algorithm has been proposed to fit SSANOVA models in the context of big data.
Rounding algorithm: The rounding algorithm proceeds as follows:
Step 1. Assume that all predictors are continuous.
Step 2. Convert all predictors to the interval $[0, 1]$.
Step 3. Round the raw data using the transformation
$$ z_{ij} = \operatorname{rd}(x_{ij} / h_j)\, h_j, \quad i = 1, \dots, n, \quad j = 1, \dots, d, $$
where $h_j \in (0, 1]$ is the rounding parameter and the rounding function $\operatorname{rd}(\cdot)$ maps its input to the nearest integer.
Step 4. After replacing the $x_{ij}$'s with the rounded values $z_{ij}$'s, redefine $\mathbf{Y}$, $S$, and $R$ in (16), and then estimate $\eta$ by minimizing the penalized least squares criterion (17).
In Step 3, if the rounding parameter $h_j$ for the $j$th predictor is, say, 0.03, then each $z_{ij}$ is formed by rounding the corresponding $x_{ij}$ to its nearest multiple of 0.03.
Evidently, the rounding parameter controls the precision of the approximation: the smaller the rounding parameter, the better the model estimation and the higher the computational cost.
Computational benefits: We now briefly explain why the rounding algorithm reduces the computational load. For example, if the rounding parameter is $h = 0.01$, the rounded data can take at most $u = 101$ unique values on $[0, 1]$, so $u \ll n$ for large $n$, where $u$ denotes the number of uniquely observed values. Because the kernel computations involve only the unique values (weighted by their multiplicities), the user-tunable rounding algorithm can dramatically reduce the computational burden of fitting SSANOVA models from the order $O(n^3)$ to $O(u^3)$, where $u \ll n$.
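The arithmetic is easy to check in R; the data below are hypothetical, and $h = 0.01$ matches the example above:

```r
# Rounding transformation of Step 3 and the resulting unique-value count.
set.seed(4)
x <- runif(1e6)            # one predictor, already scaled to [0, 1]
h <- 0.01                  # rounding parameter
z <- round(x / h) * h      # rd() rounds to the nearest integer

length(unique(z))          # at most 1/h + 1 = 101 unique values
```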
Case study: To illustrate the benefits of the rounding algorithm, we apply it to an electroencephalography (EEG) dataset. EEG is a monitoring method that records the electrical activity of the brain; it can be used to diagnose sleep disorders, epilepsy, encephalopathies, and brain death.
The dataset contains 44 controls and 76 alcoholics. Each subject was measured 10 times with a visual stimulus sampled at a frequency of 256 Hz, which yields $120 \times 10 \times 256 = 307{,}200$ observations in total. There are two predictors, time and group (control vs. alcoholic). We apply a cubic spline to the time effect and a nominal spline to the group effect.
| | GCV | AIC | BIC | CPU time (seconds) |
| --- | --- | --- | --- | --- |
| Rounded data with $h_1$ | 86.6667 | 2,242,544 | 2,242,833 | 1.22 |
| Rounded data with $h_2$ | 86.7654 | 2,242,893 | 2,243,089 | 1.13 |

Table 2. GCV, AIC, BIC, and CPU time for SSANOVA fits to the rounded EEG data.
Based on Table 2, we can see that there is no significant difference in the GCV scores or in AIC/BIC between the fits. In addition, using the rounding algorithm reduces CPU time compared with fitting the unrounded dataset.
4. Conclusions
Smoothing spline ANOVA (SSANOVA) models are widely used in applications [11, 20, 36, 37]. In this chapter, we introduced the general framework of SSANOVA models in Section 2. In Section 3, we discussed the models under big data settings: when the volume of data grows, fitting the models becomes computationally intensive, and the adaptive basis selection and rounding algorithms we presented can significantly reduce the computational cost.
Acknowledgments
This work is partially supported by NIH grants R01 GM122080 and R01 GM113242; NSF grants DMS-1222718, DMS-1438957, and DMS-1228288; and NSFC grant 71331005.
Conflict of interest
The authors whose names are listed immediately below certify that they have NO affiliations with or involvement in any organization or entity with any financial interest (such as honoraria; educational grants; participation in speakers’ bureaus; membership, employment, consultancies, stock ownership, or other equity interests; and expert testimony or patent-licensing arrangements) or nonfinancial interest (such as personal or professional relationships, affiliations, knowledge, or beliefs) in the subject matter or materials discussed in this manuscript.
Appendix
In this appendix, we use two examples to illustrate how to implement smoothing spline ANOVA (SSANOVA) models in R. We use the gss package, which can be downloaded from CRAN.
We now load the gss package:
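```r
# Install from CRAN if needed: install.packages("gss")
library(gss)
```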
Example I: Apply the smoothing spline to a simulated dataset.
Suppose that the predictor $x$ follows a uniform distribution on $[0, 1]$, and the response is generated as $y = \eta(x) + \epsilon$, where $\epsilon \sim N(0, \sigma^2)$. The data can be simulated as follows, where the true function $\eta$ is an illustrative choice:
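```r
# Simulate data; the true function and noise level are hypothetical choices.
set.seed(5)
n <- 200
x <- runif(n)
y <- sin(2 * pi * x) + rnorm(n, sd = 0.3)
```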
Then, we fit the cubic smoothing spline model:
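```r
# gss treats a numeric predictor as a cubic spline by default.
fit <- ssanova(y ~ x)
```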
To evaluate the predicted values, one uses:
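```r
# Predicted values with standard errors on a grid of new x values.
grid <- data.frame(x = seq(0, 1, length.out = 101))
pred <- predict(fit, newdata = grid, se.fit = TRUE)
```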
Example II: Apply the SSANOVA model to a real dataset.
In this example, we illustrate how to implement an SSANOVA model using the gss package. The data come from an experiment in which a single-cylinder engine was run with ethanol to see how the nox concentration in the exhaust depends on the compression ratio and the equivalence ratio. A minimal sketch follows, assuming the nox dataset shipped with gss (with variables nox, comp, and equi):
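```r
# Fit a two-way SSANOVA model with main effects and interaction; the log
# transform and variable names follow the gss package's nox example.
data(nox)
fit2 <- ssanova(log(nox) ~ comp * equi, data = nox)
pred2 <- predict(fit2, newdata = nox, se.fit = TRUE)
```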
The predicted values are shown in Figure 5.