
Chapter in "Applied Linear Algebra in Action", edited by Vasilios N. Katsikis, ISBN 978-953-51-2420-7, Print ISBN 978-953-51-2419-1, published July 6, 2016 under the CC BY 3.0 license. © The Author(s).

Chapter 6

Likelihood Ratio Tests in Multivariate Linear Model

By Yasunori Fujikoshi
DOI: 10.5772/62277



The aim of this chapter is to review likelihood ratio test procedures in multivariate linear models, focusing on projection matrices. It is noted that the projection matrices onto the spaces spanned by the mean vectors under the hypothesis and the alternatives play an important role. Some basic properties of projection matrices are given. The models treated include the multivariate regression model, the discriminant analysis model, and the growth curve model. The hypotheses treated include a generalized linear hypothesis and a hypothesis of no additional information, in addition to the usual linear hypothesis. The test statistics are expressed in terms of both projection matrices and sums of squares and products matrices.

Keywords: algebraic approach, additional information hypothesis, generalized linear hypothesis, growth curve model, multivariate linear model, lambda distribution, likelihood ratio criterion (LRC), projection matrix

1. Introduction

In this chapter, we review statistical inference in the multivariate linear model, especially the likelihood ratio criterion (LRC), focusing on matrix theory. Consider a multivariate linear model with p response variables y1, …, yp and k explanatory or dummy variables x1, …, xk. Suppose that y = (y1, …, yp)′ and x = (x1, …, xk)′ are measured for n subjects, and let the observations of the ith subject be denoted by yi and xi. Then, we have the observation matrices given by

Y = (y1, …, yn)′ : n × p,   X = (x1, …, xn)′ : n × k.   (1.1)


It is assumed that y1, …, yn are independent and have the same covariance matrix Σ. We express the mean of Y as follows:


A multivariate linear model is defined by requiring that

ηj ∈ Ω,  j = 1, …, p,


where Ω is a given subspace of the n-dimensional Euclidean space Rn. A typical Ω is given by

Ω = ℛ[X].


Here, ℛ[X] is the space spanned by the column vectors of X. A general theory of statistical inference on the regression parameter Θ can be found in texts on multivariate analysis; see, e.g., [1–8]. In this chapter, we take an algebraic approach to the multivariate linear model.

In Section 2, we consider a multivariate regression model in which the xi's are explanatory variables and Ω = ℛ[X]. The maximum likelihood estimators (MLEs) and the likelihood ratio criterion (LRC) for testing Θ2 = O are derived by using projection matrices, where Θ = (Θ1′ Θ2′)′. The distribution of the LRC is obtained by the multivariate Cochran theorem. It is pointed out that projection matrices play an important role. In Section 3, we give a summary of projection matrices. In Section 4, we consider testing a general linear hypothesis. In Section 5, we consider testing a hypothesis of no additional information of y2 in the presence of y1, where y1 = (y1, …, yq)′ and y2 = (yq+1, …, yp)′. In Section 6, we consider testing problems in discriminant analysis. Section 7 deals with a generalized multivariate linear model, which is also called the growth curve model. Concluding remarks are given in Section 8.

2. Multivariate regression model

In this section, we consider a multivariate regression model on p response variables and k explanatory variables denoted by y = (y1, …, yp)′ and x = (x1, …, xk)′, respectively. Suppose that we have the observation matrices given by (1.1). A multivariate regression model is given by

Y = XΘ + E,   (2.1)

where Θ is a k × p unknown parameter matrix. It is assumed that the rows of the error matrix E are independently distributed as a p-variate normal distribution with mean zero and unknown covariance matrix Σ, i.e., Np(0, Σ).

Let L(Θ, Σ) be the density function or the likelihood function. Then, we have

−2 log L(Θ, Σ) = n log|Σ| + tr Σ⁻¹(Y − XΘ)′(Y − XΘ) + np log(2π).


The maximum likelihood estimators (MLEs) Θ̂ and Σ̂ of Θ and Σ are defined as the maximizers of L(Θ, Σ) or, equivalently, the minimizers of −2 log L(Θ, Σ).

Theorem 2.1 Suppose that Y follows the multivariate regression model in (2.1). Then, the MLEs of Θ and Σ are given as

Θ̂ = (X′X)⁻¹X′Y,   Σ̂ = (1/n) Y′(In − PX)Y,


where PX = X(X′X)⁻¹X′. Further, it holds that

max L(Θ, Σ) = (2π)^(−np/2) |Σ̂|^(−n/2) e^(−np/2).


Theorem 2.1 can be shown by a linear algebraic method, which is discussed in the next section. Note that PX is the projection matrix on the range space Ω = ℛ[X]. It is symmetric and idempotent, i.e.,

PX′ = PX,   PX² = PX.
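As a small numerical illustration (not from the chapter), the following Python sketch simulates data from model (2.1) and computes PX, Θ̂, and Σ̂ as in Theorem 2.1; the dimensions, random seed, and variable names are illustrative assumptions.

    # Numerical sketch of Theorem 2.1 (illustrative values, NumPy only).
    import numpy as np

    rng = np.random.default_rng(0)
    n, p, k = 50, 3, 4
    X = rng.standard_normal((n, k))                # observation matrix of x
    Theta = rng.standard_normal((k, p))            # true coefficient matrix
    Y = X @ Theta + rng.standard_normal((n, p))    # rows of E taken as N_p(0, I_p)

    P_X = X @ np.linalg.inv(X.T @ X) @ X.T         # projection matrix onto R[X]
    assert np.allclose(P_X, P_X.T) and np.allclose(P_X @ P_X, P_X)

    Theta_hat = np.linalg.inv(X.T @ X) @ X.T @ Y   # MLE of Theta
    Sigma_hat = Y.T @ (np.eye(n) - P_X) @ Y / n    # MLE of Sigma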


Next, we consider testing the hypothesis

H : Θ2 = O,   (2.2)


against K : Θ2 ≠ O, where X = (X1 X2), X1 : n × j, and Θ = (Θ1′ Θ2′)′, Θ1 : j × p. The hypothesis means that the last (k − j)-dimensional variate x2 = (xj+1, …, xk)′ has no additional information in the presence of the first j-dimensional variate x1 = (x1, …, xj)′. In general, the likelihood ratio criterion (LRC) is defined by

λ = max_H L(Θ, Σ) / max_K L(Θ, Σ).


Then we can express

λ = { |Σ̂Ω| / |Σ̂ω| }^(n/2).


Using Theorem 2.1, this can be expressed as

λ = { |SΩ| / |Sω| }^(n/2).


Here, Σ̂Ω and Σ̂ω are the maximum likelihood estimators of Σ under the model (2.1) or K and H, respectively, which are given by

nΣ̂Ω = SΩ = Y′(In − PΩ)Y,   (2.4)

nΣ̂ω = Sω = Y′(In − Pω)Y,   (2.5)

where Ω = ℛ[X] and ω = ℛ[X1].




Summarizing these results, we have the following theorem.

Theorem 2.2 Let λ = Λ^(n/2) be the LRC for testing H in (2.2). Then, Λ is expressed as

Λ = |SΩ| / |Sω| = |Se| / |Se + Sh|,   (2.6)

where

Se = Y′(In − PΩ)Y,   Sh = Y′(PΩ − Pω)Y,   (2.7)



and SΩ and Sω are given by (2.4) and (2.5), respectively.
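The LRC of Theorem 2.2 can be computed directly from the two projection matrices; the short sketch below does this for simulated data in which H is true (all names and sizes are illustrative assumptions).

    # Sketch of the LRC for H: Theta_2 = O (Theorem 2.2), illustrative data.
    import numpy as np

    rng = np.random.default_rng(1)
    n, p, j, k = 60, 3, 2, 5
    X = rng.standard_normal((n, k))
    X1 = X[:, :j]                                   # columns kept under H
    Y = X1 @ rng.standard_normal((j, p)) + rng.standard_normal((n, p))  # H holds

    def proj(M):
        # Orthogonal projection matrix onto the column space of M.
        return M @ np.linalg.inv(M.T @ M) @ M.T

    P_Omega, P_omega = proj(X), proj(X1)
    S_e = Y.T @ (np.eye(n) - P_Omega) @ Y           # error SSP matrix
    S_h = Y.T @ (P_Omega - P_omega) @ Y             # hypothesis SSP matrix
    Lam = np.linalg.det(S_e) / np.linalg.det(S_e + S_h)
    lrc = Lam ** (n / 2)                            # likelihood ratio criterion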

The matrices Se and Sh in the testing problem are called the sums of squares and products (SSP) matrices due to the error and the hypothesis, respectively. We consider the distribution of Λ. If a p × p random matrix W is expressed as

W = z1z1′ + ⋯ + znzn′,


where zj ∼ Np(μj, Σ) and z1, …, zn are independent, W is said to have a noncentral Wishart distribution with n degrees of freedom, covariance matrix Σ, and noncentrality matrix Δ = μ1μ1′ + ⋯ + μnμn′. We write W ∼ Wp(n, Σ; Δ). In the special case Δ = O, W is said to have a Wishart distribution, denoted by W ∼ Wp(n, Σ).

Theorem 2.3 (multivariate Cochran theorem) Let Y = (y1, …, yn)′, where yi ∼ Np(μi, Σ), i = 1, …, n, and y1, …, yn are independent. Let A, A1, and A2 be n × n symmetric matrices. Then:

1. Y′AY ∼ Wp(k, Σ; Ω) ⇔ A² = A and tr A = k, where Ω = E(Y)′ A E(Y).

2. Y′A1Y and Y′A2Y are independent ⇔ A1A2 = O.

For a proof of the multivariate Cochran theorem, see, e.g., [3, 6–8]. Let B and W be independent random matrices following the Wishart distributions Wp(q, Σ) and Wp(n, Σ), respectively, with n ≥ p. Then, the distribution of

Λ = |W| / |W + B|


is said to be the p-dimensional Lambda distribution with (q, n) degrees of freedom and is denoted by Λp(q, n). For distributional results on Λp(q, n), see [1, 3].
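For readers who want to see the Lambda distribution concretely, the following Monte Carlo sketch (an illustration, not part of the chapter) simulates Λp(q, n) from its definition with Σ = Ip; the replication count and seed are arbitrary assumptions.

    # Monte Carlo sketch of Lambda_p(q, n) based on its Wishart definition.
    import numpy as np

    rng = np.random.default_rng(2)
    p, q, n, reps = 3, 2, 30, 2000

    def wishart_identity(df, dim):
        # W_dim(df, I): sum of df outer products of independent N(0, I) vectors.
        Z = rng.standard_normal((df, dim))
        return Z.T @ Z

    lams = np.empty(reps)
    for r in range(reps):
        B = wishart_identity(q, p)                  # B ~ W_p(q, I)
        W = wishart_identity(n, p)                  # W ~ W_p(n, I), independent of B
        lams[r] = np.linalg.det(W) / np.linalg.det(W + B)
    print(lams.mean())                              # empirical mean of Lambda_p(q, n)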

By using multivariate Cochran’s theorem, we have the following distributional results:

Theorem 2.4 Let Se and Sh be the random matrices in (2.7). Let Λ be the Λ-statistic defined by (2.6). Then,

  1. Se and Sh are independently distributed as a Wishart distribution Wp(n − k, Σ) and a noncentral Wishart distribution Wp(k − j, Σ; Δ), respectively, where

Δ = E(Y)′(PΩ − Pω)E(Y) = Θ′X′(PΩ − Pω)XΘ.


  2. Under H, the statistic Λ is distributed as a lambda distribution Λp(k − j, n − k).

Proof. Note that PΩ = PX = X(X′X)⁻¹X′, Pω = PX1 = X1(X1′X1)⁻¹X1′, and PΩPω = PωPΩ. By the multivariate Cochran theorem, the first result (1) follows by checking that

(In − PΩ)² = In − PΩ,   (PΩ − Pω)² = PΩ − Pω,   (In − PΩ)(PΩ − Pω) = O,

tr(In − PΩ) = n − k,   tr(PΩ − Pω) = k − j.


The second result (2) follows by showing that Δ0 = O, where Δ0 is Δ under H. This is seen from

Δ0 = Θ1′X1′(PΩ − Pω)X1Θ1 = O,


since PΩX1 = PωX1 = X1.

The matrices Se and Sh in (2.7) are defined in terms of n × n matrices PΩ and Pω. It is important to give expressions useful for their numerical computations. We have the following expressions:


Suppose that x1 is 1 for all subjects, i.e., x1 is an intercept term. Then, we can express these in terms of the SSP matrix of (y', x')′ defined by


where ȳ and x̄ are the sample mean vectors. Along the partition x = (x1′, x2′)′, we partition S as




Here, we use the notation Syy·x = Syy − SyxSxx⁻¹Sxy, Sy2·1 = Sy2 − Sy1S11⁻¹S12, etc. These are derived in the next section by using projection matrices.
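As a numerical check of the SSP-based shortcut (an illustration under the assumption that the intercept is included in both the full and the reduced design), the sketch below verifies that the error SSP matrix computed from the n × n projection matrix agrees with Syy − SyxSxx⁻¹Sxy computed from centered observation matrices.

    # Check that S_e from a projection matrix equals S_yy.x from the SSP matrix S.
    import numpy as np

    rng = np.random.default_rng(3)
    n, p, k = 40, 2, 3
    Z = rng.standard_normal((n, k))                 # non-constant explanatory variables
    Y = rng.standard_normal((n, p))
    X = np.hstack([np.ones((n, 1)), Z])             # design matrix with intercept

    P_X = X @ np.linalg.inv(X.T @ X) @ X.T
    S_e_proj = Y.T @ (np.eye(n) - P_X) @ Y          # S_e via the projection matrix

    Yc, Zc = Y - Y.mean(axis=0), Z - Z.mean(axis=0) # centered matrices
    S_yy, S_yx, S_xx = Yc.T @ Yc, Yc.T @ Zc, Zc.T @ Zc
    S_e_ssp = S_yy - S_yx @ np.linalg.inv(S_xx) @ S_yx.T   # S_yy.x
    assert np.allclose(S_e_proj, S_e_ssp)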

3. Idempotent matrices and max-mini problems

In the previous section, we have seen that idempotent matrices play an important role in statistical inference in the multivariate regression model. In fact, letting E(Y) = η = (η1, …, ηp), consider a model satisfying

η1, …, ηp ∈ Ω = ℛ[X].


Then the MLE of Θ is Θ̂ = (X′X)⁻¹X′Y, and hence the MLE of η is given by

η̂ = XΘ̂ = PΩY.


Here, PΩ = X(X′X)⁻¹X′. Further, the residual sums of squares and products (RSSP) matrix is expressed as

SΩ = (Y − η̂)′(Y − η̂) = Y′(In − PΩ)Y.


Under the hypothesis (2.2), the space to which the ηj's belong is ω = ℛ[X1]. Similarly, we have

η̂ω = XΘ̂ω = PωY,   Sω = Y′(In − Pω)Y,


where Θ̂ω = (Θ̂1ω′ O)′ and Θ̂1ω = (X1′X1)⁻¹X1′Y. The LR criterion is based on the following decomposition of SSP matrices:

Sω = Y′(In − PΩ)Y + Y′(PΩ − Pω)Y = Se + Sh.


The degrees of freedom in the Λ distribution Λp(fh, fe) are given by

fh = tr(PΩ − Pω) = k − j,   fe = tr(In − PΩ) = n − k.


In general, an n × n matrix P is called idempotent if P² = P. A symmetric and idempotent matrix is called a projection matrix. Let Rn be the n-dimensional Euclidean space, and let Ω be a subspace of Rn. Then, any n × 1 vector y can be uniquely decomposed into a direct sum, i.e.,

y = u + v,   u ∈ Ω,   v ∈ Ω⊥,   (3.2)


where Ω⊥ is the orthocomplement of Ω. Using the decomposition (3.2), consider the mapping that sends y to u.


The mapping is linear, and hence it is expressed as a matrix. In this case, u is called the orthogonal projection of y into Ω, and PΩ is also called the orthogonal projection matrix to Ω. Then, we have the following basic properties:

(P1) PΩ is uniquely defined;

(P2) In − PΩ is the projection matrix to Ω⊥;

(P3) PΩ is a symmetric idempotent matrix;

(P4) ℛ[PΩ] = Ω, and dim[Ω] = trPΩ;

Let ω be a subspace of Ω. Then, we have the following properties:

(P5) PΩPω = PωPΩ = Pω.

(P6) PΩ − Pω = Pω⊥∩Ω, where ω⊥ is the orthocomplement space of ω.

(P7) Let B be a q × n matrix, and let N[B] = {y ; By = 0}. If ω = N[B] ∩ Ω, then ω⊥ ∩ Ω = ℛ[PΩB′].

For more details, see, e.g. [3, 7, 9, 10].
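The properties (P5)–(P7) are easy to check numerically; the sketch below does so for a randomly generated Ω = ℛ[X] and a subspace ω = ℛ[X1] (an illustration with assumed sizes, not part of the chapter).

    # Numerical illustration of (P5) and (P6) for projection matrices.
    import numpy as np

    rng = np.random.default_rng(4)
    n, k, c = 20, 5, 2
    X = rng.standard_normal((n, k))                 # Omega = R[X]
    X1 = X[:, :k - c]                               # omega = R[X1], a subspace of Omega

    def proj(M):
        return M @ np.linalg.inv(M.T @ M) @ M.T

    P_Om, P_om = proj(X), proj(X1)
    assert np.allclose(P_Om @ P_om, P_om)           # (P5): P_Omega P_omega = P_omega
    D = P_Om - P_om                                 # (P6): projection onto omega-perp within Omega
    assert np.allclose(D, D.T) and np.allclose(D @ D, D)
    assert np.isclose(np.trace(D), c)               # dim = dim(Omega) - dim(omega)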

The MLEs and LRC in multivariate regression model are derived by using the following theorem.

Theorem 3.1

1. Consider the function f(Σ) = log|Σ| + tr Σ⁻¹S of a p × p positive definite matrix Σ, where S is a given p × p positive definite matrix. Then, f(Σ) attains its minimum uniquely at Σ = S, and the minimum value is given by

f(S) = log|S| + p.


2. Let Y be an n × p known matrix and X an n × k known matrix of rank k. Consider the function of a p × p positive definite matrix Σ and a k × p matrix Θ = (θij) given by

g(Θ, Σ) = m log|Σ| + tr Σ⁻¹(Y − XΘ)′(Y − XΘ),


where m > 0 and − ∞ < θij < ∞ for i = 1, …, k; j = 1, …, p. Then, g(Θ, Σ) attains its minimum at

Θ̂ = (X′X)⁻¹X′Y,   Σ̂ = (1/m)(Y − XΘ̂)′(Y − XΘ̂),


and the minimum value is given by m log|Σ̂| + mp.

Proof. Let ℓ1, …, ℓp be the characteristic roots of Σ⁻¹S. Note that the characteristic roots of Σ⁻¹S and Σ^(−1/2)SΣ^(−1/2) are the same. The latter matrix is positive definite, and hence we may assume ℓ1 ≥ ⋯ ≥ ℓp > 0. Then

f(Σ) − f(S) = −log|Σ⁻¹S| + tr Σ⁻¹S − p = Σ_{j=1}^{p} (ℓj − log ℓj − 1) ≥ 0.


The last inequality follows from x − 1 ≥ log x (x > 0). The equality holds if and only if ℓ1 = ⋯ = ℓp = 1, i.e., Σ = S.

Next, we prove 2. We have

(Y − XΘ)′(Y − XΘ) = (Y − XΘ̂)′(Y − XΘ̂) + (XΘ̂ − XΘ)′(XΘ̂ − XΘ) ≥ (Y − XΘ̂)′(Y − XΘ̂).


The first equality follows from Y − XΘ = (Y − XΘ̂) + X(Θ̂ − Θ) and (Y − XΘ̂)′X(Θ̂ − Θ) = O. In the last step, the equality holds when Θ = Θ̂. The required result is obtained by noting that Θ̂ does not depend on Σ and combining this result with the first result 1.
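A quick numerical check of Theorem 3.1 (1) (illustrative only; the matrices and the number of trial points are arbitrary assumptions):

    # Check that log|Sigma| + tr(Sigma^{-1} S) >= log|S| + p for random PD Sigma.
    import numpy as np

    rng = np.random.default_rng(5)
    p = 3
    A = rng.standard_normal((10, p))
    S = A.T @ A / 10                                # a positive definite S

    def f(Sigma):
        return np.log(np.linalg.det(Sigma)) + np.trace(np.linalg.inv(Sigma) @ S)

    f_min = np.log(np.linalg.det(S)) + p            # claimed minimum value
    for _ in range(100):
        B = rng.standard_normal((p, p))
        Sigma = B @ B.T + 0.1 * np.eye(p)           # random positive definite trial
        assert f(Sigma) >= f_min - 1e-9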

Theorem 3.2 Let X be an n × k matrix of rank k, and let Ω = ℛ[X] which is defined also by the set {y : y = X θ }, where θ is a k × 1 unknown parameter vector. Let C be a c × k matrix of rank c, and define ω by the set {y : y = Xθ , C θ = 0}. Then,

  1. PΩ = X(X′X)⁻¹X′.

  2. PΩ − Pω = X(X′X)⁻¹C′{C(X′X)⁻¹C′}⁻¹C(X′X)⁻¹X′.

Proof. 1. Let ŷ = X(X′X)⁻¹X′y and consider the decomposition y = ŷ + (y − ŷ). Then, ŷ ∈ Ω and ŷ′(y − ŷ) = 0. Therefore, PΩy = ŷ, and hence PΩ = X(X′X)⁻¹X′.

2. Since Cθ = C(X′X)⁻¹X′ ⋅ Xθ, we can write ω = N[B] ∩ Ω, where B = C(X′X)⁻¹X′. Using (P7),

ω⊥ ∩ Ω = ℛ[PΩB′] = ℛ[X(X′X)⁻¹C′].


The final result is obtained by using 1 and (P7).

Consider a special case C = (O Ik−q). Then ω = ℛ[X1], where X = (X1 X2), X1 : n × q. We have the following results:

PΩ − Pω = P(In − PX1)X2 = (In − PX1)X2{X2′(In − PX1)X2}⁻¹X2′(In − PX1).


The expressions (2.11) for Se and Sh in terms of S can be obtained from projection matrices based on


4. General linear hypothesis

In this section, we consider testing the general linear hypothesis

Hg : CΘD = O   (4.1)

against alternatives Kg : CΘD ≠ O under a multivariate linear model given by (2.1), where C is a c × k given matrix with rank c and D is a p × d given matrix with rank d. When C = (O Ik − j) and D = Ip, the hypothesis Hg becomes H : Θ2 = O.

For the derivation of the LR test of (4.1), we can use the following conventional approach: If U = YD, then the rows of U are independent and normally distributed with the identical covariance matrix D′ΣD, and

E(U) = XΞ,

where Ξ = ΘD. The hypothesis (4.1) is expressed as

Hg : CΞ = O.

Applying a general theory for testing Hg in (2.1), we have the LRC λ:

λ = Λ^(n/2),   Λ = |Se| / |Se + Sh|,   (4.4)

where

Se = D′Y′(In − PX)YD,   Sh = (CΘ̂D)′{C(X′X)⁻¹C′}⁻¹(CΘ̂D),   Θ̂ = (X′X)⁻¹X′Y.






Theorem 4.1 The statistic Λ in (4.4) is an LR statistic for testing (4.1) under (2.1). Further, under Hg, Λ ∼ Λd(c, n − k).
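Assuming the standard MANOVA expressions for the error and hypothesis SSP matrices given above (which are reconstructions, not quoted from the chapter), the Λ statistic of Theorem 4.1 can be computed as in the following sketch; the matrices C, D, and all sizes are illustrative.

    # Sketch of the LRC for the general linear hypothesis Hg: C Theta D = O.
    import numpy as np

    rng = np.random.default_rng(6)
    n, p, k, c, d = 60, 4, 5, 2, 3
    X = rng.standard_normal((n, k))
    Y = X @ rng.standard_normal((k, p)) + rng.standard_normal((n, p))
    C = rng.standard_normal((c, k))                 # c x k, rank c
    D = rng.standard_normal((p, d))                 # p x d, rank d

    XtX_inv = np.linalg.inv(X.T @ X)
    Theta_hat = XtX_inv @ X.T @ Y
    P_X = X @ XtX_inv @ X.T
    S_e = D.T @ Y.T @ (np.eye(n) - P_X) @ Y @ D     # error SSP for U = Y D
    CTD = C @ Theta_hat @ D
    S_h = CTD.T @ np.linalg.inv(C @ XtX_inv @ C.T) @ CTD   # hypothesis SSP
    Lam = np.linalg.det(S_e) / np.linalg.det(S_e + S_h)    # ~ Lambda_d(c, n - k) under Hg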

Proof. Let G = (G1 G2) be a p × p matrix such that G1 = D, G1′G2 = O, and |G| ≠ 0. Consider the transformation from Y to (U V) = Y(G1 G2).

Then the rows of (U V) are independently normal with the same covariance matrix




The conditional distribution of V given U is normal. The rows of V given U are independently normal with the same covariance matrix Ψ22·1, and


where Δ* = Δ − ΞΓ and Γ = Ψ11⁻¹Ψ12. We see that the maximum likelihood of V given U does not depend on the hypothesis. Therefore, an LR statistic is obtained from the marginal distribution of U, which implies the required results.

5. Additional information tests for response variables

We consider a multivariate regression model with an intercept term x0 and k explanatory variables x1, …, xk as follows:

Y = 1nθ′ + XΘ + E,   (5.1)


where Y and X are the observation matrices on y = (y1, …, yp)′ and x = (x1, …, xk)′. We assume that the error matrix E has the same property as in (2.1), and rank (1nX) = k + 1. Our interest is to test a hypothesis H2 ⋅ 1 on no additional information of y2 = (yq + 1, …, yp)′ in presence of y1 = (y1, …, yq)′.

Along the partition of y into (y1′, y2′)′, let Y, θ, Θ, and Σ be partitioned as

Y = (Y1 Y2), Y1 : n × q;   θ′ = (θ1′ θ2′);   Θ = (Θ1 Θ2), Θ1 : k × q;   Σ = (Σ11 Σ12 ; Σ21 Σ22), Σ11 : q × q.


The conditional distribution of Y2 given Y1 is normal with mean

E(Y2 | Y1) = 1nθ̃′ + XΘ̃2·1 + Y1Γ,


and the conditional covariance matrix is expressed as

Var[vec(Y2) | Y1] = Σ22·1 ⊗ In,


where Σ22·1 = Σ22 − Σ21Σ11⁻¹Σ12, and

Γ = Σ11⁻¹Σ12,   θ̃ = θ2 − Γ′θ1,   Θ̃2·1 = Θ2 − Θ1Γ.


Here, for an n × p matrix Y = (y1, …, yp), vec(Y) means the np-vector (y1′, …, yp′)′. Now we define the hypothesis H2·1 as

H2·1 : Θ̃2·1 = O.   (5.4)


The hypothesis H2 ⋅ 1 means that y2 after removing the effects of y1 does not depend on x. In other words, the relationship between y2 and x can be described by the relationship between y1 and x. In this sense, y2 is redundant in the relationship between y and x.

The LR criterion for testing the hypothesis H2·1 against the alternatives K2·1 : Θ̃2·1 ≠ O can be obtained through the following steps.

(D1) The density function of Y=Y1Y2 can be expressed as the product of the marginal density function of Y1 and the conditional density function of Y2 given Y1. Note that the density functions of Y1 under H2 ⋅ 1 and K2 ⋅ 1 are the same.

(D2) The spaces spanned by each column of E(Y2 | Y1) are the same; let the spaces under K2·1 and H2·1 be denoted by Ω and ω, respectively. Then

Ω = ℛ[(1n X Y1)],   ω = ℛ[(1n Y1)],


and dim(Ω) = q + k + 1, dim(ω) = q + 1.

(D3) The likelihood ratio criterion λ is expressed as

λ = Λ^(n/2),   Λ = |SΩ| / |Sω|,


where SΩ = Y2′(In − PΩ)Y2 and Sω = Y2′(In − Pω)Y2.

(D4) Note that E(Y2 | Y1)′(PΩ − Pω)E(Y2 | Y1) = O under H2·1. The conditional distribution of Λ under H2·1 is Λp−q(k, n − q − k − 1), and hence the distribution of Λ under H2·1 is Λp−q(k, n − q − k − 1).

Note that the Λ statistic is defined through Y2′(In − PΩ)Y2 and Y2′(PΩ − Pω)Y2, which involve n × n matrices. We try to write these statistics in terms of the SSP matrix of (y′, x′)′ defined by


where ȳ and x̄ are the sample mean vectors. Along the partition y = (y1′, y2′)′, we partition S as


We can show that


The first result is obtained by using


The second result is obtained by using


where Ỹ1 = (In − P1n)Y1 and X̃ = (In − P1n)X.

Summarizing the above results, we have the following theorem.

Theorem 5.1 In the multivariate regression model (5.1), consider testing the hypothesis H2·1 in (5.4) against K2·1. Then the LR criterion λ is given by

λ = Λ^(n/2),   Λ = |S22·1x| / |S22·1|,


whose null distribution is Λp−q(k, n − q − k − 1).
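Because the statistic of Theorem 5.1 is defined through projection matrices, it can also be computed directly from Ω = ℛ[(1n X Y1)] and ω = ℛ[(1n Y1)], as in the sketch below (illustrative simulation in which H2·1 holds; all names and sizes are assumptions).

    # Sketch of the LRC of Theorem 5.1 computed from projection matrices.
    import numpy as np

    rng = np.random.default_rng(7)
    n, k, q, p = 80, 3, 2, 5                        # y1 has q components, y2 has p - q
    X = rng.standard_normal((n, k))
    Y1 = X @ rng.standard_normal((k, q)) + rng.standard_normal((n, q))
    Y2 = Y1 @ rng.standard_normal((q, p - q)) + rng.standard_normal((n, p - q))  # H_{2.1} true

    def proj(M):
        return M @ np.linalg.inv(M.T @ M) @ M.T

    ones = np.ones((n, 1))
    P_Om = proj(np.hstack([ones, X, Y1]))           # Omega = R[(1_n X Y1)]
    P_om = proj(np.hstack([ones, Y1]))              # omega = R[(1_n Y1)]
    S_Om = Y2.T @ (np.eye(n) - P_Om) @ Y2
    S_om = Y2.T @ (np.eye(n) - P_om) @ Y2
    Lam = np.linalg.det(S_Om) / np.linalg.det(S_om) # ~ Lambda_{p-q}(k, n-q-k-1) under H_{2.1}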

Note that S22·1 can be decomposed as

S22·1 = S22·1x + S2x·1Sxx·1⁻¹Sx2·1.


This decomposition is obtained by expressing S22·1x in terms of S22·1, S2x·1, Sxx·1, and Sx2·1 by using an inverse formula


The decomposition is expressed as


The result may be also obtained by the following algebraic method. We have






which gives an expression for Pω⊥∩Ω by using Theorem 3.1 (1). This leads to (5.6).

6. Tests in discriminant analysis

We consider q p-variate normal populations with common covariance matrix Σ, the ith population having mean vector θi. Suppose that a sample of size ni is available from the ith population, and let yij be the jth observation from the ith population. The observation matrix for all the observations is expressed as

Y = (y11, …, y1n1, …, yq1, …, yqnq)′,   (6.1)


It is assumed that the yij are independent, and

yij ∼ Np(θi, Σ),   j = 1, …, ni;  i = 1, …, q.


The model is expressed as

Y = AΘ + E,

where A = diag(1n1, …, 1nq) : n × q and Θ = (θ1, …, θq)′ : q × p.



Here, the error matrix E has the same property as in (2.1).

First, we consider testing

H : θ1 = ⋯ = θq   (6.4)


against alternatives K : θi ≠ θj for some i ≠ j. The hypothesis can be expressed as


The tests including the LRC are based on three basic statistics, the within-group SSP matrix W, the between-group SSP matrix B, and the total SSP matrix T, given by

W = Σ_{i=1}^{q} Σ_{j=1}^{ni} (yij − ȳi)(yij − ȳi)′ = Σ_{i=1}^{q} (ni − 1)Si,   B = Σ_{i=1}^{q} ni(ȳi − ȳ)(ȳi − ȳ)′,   T = W + B,   (6.6)


where ȳi and Si are the mean vector and sample covariance matrix of the ith population, ȳ is the total mean vector defined by ȳ = (1/n) Σ_{i=1}^{q} ni ȳi, and n = Σ_{i=1}^{q} ni. In general, W and B are independently distributed as a Wishart distribution Wp(n − q, Σ) and a noncentral Wishart distribution Wp(q − 1, Σ; Δ), respectively, where

Δ = Σ_{i=1}^{q} ni(θi − θ̄)(θi − θ̄)′,


where θ̄ = (1/n) Σ_{i=1}^{q} ni θi. Then, the following theorem is well known.

Theorem 6.1 Let λ = Λ^(n/2) be the LRC for testing H in (6.4). Then, Λ is expressed as

Λ = |W| / |W + B| = |W| / |T|,

where W, B , and T are given in (6.6). Further, under H, the statistic Λ is distributed as a lambda distribution Λp(q − 1, n − q).
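The statistics W, B, T and the Λ of Theorem 6.1 are easy to compute from grouped data, as in the following sketch (illustrative data generated with equal group means, so H holds; group sizes and dimensions are assumptions).

    # Sketch of W, B, T and Lambda = |W| / |T| for q groups.
    import numpy as np

    rng = np.random.default_rng(8)
    p, q, n_i = 3, 4, 15
    groups = [rng.standard_normal((n_i, p)) for _ in range(q)]   # equal means: H true
    Y = np.vstack(groups)
    n = Y.shape[0]

    ybar = Y.mean(axis=0)                            # total mean vector
    W = sum((G - G.mean(axis=0)).T @ (G - G.mean(axis=0)) for G in groups)
    B = sum(len(G) * np.outer(G.mean(axis=0) - ybar, G.mean(axis=0) - ybar) for G in groups)
    T = (Y - ybar).T @ (Y - ybar)
    assert np.allclose(T, W + B)
    Lam = np.linalg.det(W) / np.linalg.det(T)        # ~ Lambda_p(q - 1, n - q) under H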

Now we shall show Theorem 6.1 by an algebraic method. It is easy to see that

W = Y′(In − PA)Y,   B = Y′(PA − P1n)Y,   T = Y′(In − P1n)Y.


The last equality can also be checked from the fact that, under H,


We have


Further, it is easily checked that

  1. (In − PA)² = In − PA,   (PA − P1n)² = PA − P1n.

  2. (In − PA)(PA − P1n) = O.

  3. fe = dim[ℛ[A]⊥] = tr(In − PA) = n − q,   fh = tr(PA − P1n) = q − 1.


Related to the test of H, we are interested in whether a subset of the variables y1, …, yp is sufficient for discriminant analysis, or whether the set of remaining variables has no additional information or is redundant. Without loss of generality, we consider the sufficiency of a subvector y1 = (y1, …, yk)′ of y, or redundancy of the remainder vector y2 = (yk+1, …, yp)′. Consider testing

H2·1 : θ1;2·1 = ⋯ = θq;2·1,   (6.8)

where θi = (θi1′, θi2′)′ with θi1 : k × 1, and θi;2·1 = θi2 − Σ21Σ11⁻¹θi1.






The testing problem was considered by [11]. The hypothesis can be formulated in terms of the Mahalanobis distance and discriminant functions. For details, see [12, 13]. To obtain a likelihood ratio for H2·1, we partition the observation matrix as

Y = (Y1 Y2),   Y1 : n × k.


Then the conditional distribution of Y2 given Y1 is normal such that the rows of Y2 are independently distributed with covariance matrix Σ22·1 = Σ22 − Σ21Σ11⁻¹Σ12, and the conditional mean is given by

E(Y2 | Y1) = AΘ2·1 + Y1Γ,   Γ = Σ11⁻¹Σ12,


where Θ2·1 = (θ1;2·1, …, θq;2·1)′. The LRC for H2·1 can be obtained by use of the conditional distribution, following the steps (D1)–(D4) in Section 5. In fact, the spaces spanned by each column of E(Y2 | Y1) are the same; let the spaces under K2·1 and H2·1 be denoted by Ω and ω, respectively. Then

Ω = ℛ[(A Y1)],   ω = ℛ[(1n Y1)],


dim(Ω) = q + k, and dim(ω) = k + 1. The likelihood ratio criterion λ can be expressed as

λ = Λ^(n/2),   Λ = |SΩ| / |Sω|,


where SΩ = Y2′(In − PΩ)Y2 and Sω = Y2′(In − Pω)Y2. We express the LRC in terms of W, B, and T. Let us partition W, B, and T as

W = (W11 W12 ; W21 W22),   B = (B11 B12 ; B21 B22),   T = (T11 T12 ; T21 T22),


where W12 : k × (p − k), B12 : k × (p − k), and T12 : k × (p − k). Noting that PΩ = PA + P(In − PA)Y1, we have

SΩ = W22·1 = W22 − W21W11⁻¹W12.


Similarly, noting that Pω = P1n + P(In − P1n)Y1, we have

Sω = T22·1 = T22 − T21T11⁻¹T12.


Theorem 6.2 Suppose that the observation matrix Y in (6.1) is a set of samples from Np(θi, Σ), i = 1, …, q. Then the likelihood ratio criterion λ for the hypothesis H2·1 in (6.8) is given by

λ = Λ^(n/2),   Λ = |W22·1| / |T22·1| = (|W| / |W11|) / (|T| / |T11|),


where W and T are given by (6.6). Further, under H2·1,

Λ ∼ Λp−k(q − 1, n − q − k).
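Using the determinant identity |W22·1| = |W|/|W11| (and likewise for T), the statistic of Theorem 6.2 can be evaluated from W and T alone; the sketch below is an illustration on simulated data in which H2·1 holds trivially, and all sizes are assumptions.

    # Sketch of Lambda = |W_{22.1}| / |T_{22.1}| for redundancy of y2 given y1.
    import numpy as np

    rng = np.random.default_rng(9)
    p, k, q, n_i = 4, 2, 3, 20                       # y1: first k variables
    groups = [rng.standard_normal((n_i, p)) for _ in range(q)]
    Y = np.vstack(groups)
    ybar = Y.mean(axis=0)
    W = sum((G - G.mean(axis=0)).T @ (G - G.mean(axis=0)) for G in groups)
    T = (Y - ybar).T @ (Y - ybar)

    det = np.linalg.det
    Lam = (det(W) / det(W[:k, :k])) / (det(T) / det(T[:k, :k]))
    # Under H_{2.1}, Lam ~ Lambda_{p-k}(q - 1, n - q - k) with n = q * n_i.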


Proof. We consider the conditional distributions of W22·1 and T22·1 given Y1 by using Theorem 2.3, and see also that they do not depend on Y1. We have seen that

W22·1 = Y2′Q1Y2,   T22·1 = Y2′Q2Y2,   Q1 = In − PA − P(In − PA)Y1,   Q2 = In − P1n − P(In − P1n)Y1.


It is easy to see that Q1² = Q1, rank Q1 = tr Q1 = n − q − k, Q1A = O, Q1Y1 = O, and


This implies that W22·1 | Y1 ∼ Wp−k(n − q − k, Σ22·1), and hence W22·1 ∼ Wp−k(n − q − k, Σ22·1). For T22·1, we have


and hence


Similarly, Q2 is idempotent. Using P1nPA = PAP1n = P1n, we have Q1Q2 = Q2Q1 = Q1, and hence


Further, under H2 ⋅ 1,


7. General multivariate linear model

In this section, we consider a general multivariate linear model as follows. Let Y be an n × p observation matrix whose rows are independently distributed as a p-variate normal distribution with a common covariance matrix Σ. Suppose that the mean of Y is given as

E(Y) = AΘX′,   (7.1)

where A is an n × k given matrix with rank k, X is a p × q matrix with rank q, and Θ is a k × q unknown parameter matrix. For a motivation of (7.1), consider the case when a single variable y is measured at p time points t1, …, tp (or different conditions) on n subjects chosen at random from a group. Suppose that we denote the variable y at time point tj by yj. Let the observations yi1, …, yip of the ith subject be denoted by

yi = (yi1, …, yip)′.


If we consider a polynomial regression of degree q − 1 of y on the time variable t, then

E(yij) = θ1 + θ2tj + ⋯ + θq tj^(q−1),  j = 1, …, p,  i.e.,  E(yi) = Xθ,

where θ = (θ1, …, θq)′ and the jth row of X is (1, tj, …, tj^(q−1)).




If there are k different groups and each group has a polynomial regression of degree q − 1 of y, we have a model given by (7.1). From such motivation, the model (7.1) is also called a growth curve model. For its detail, see [14].

Now, let us consider deriving the LRC for a general linear hypothesis

Hg : CΘD = O   (7.2)

against alternatives Kg : CΘD ≠ O. Here, C is a c × k given matrix with rank c, and D is a q × d given matrix with rank d. This problem was discussed in [15–17]. Here, we obtain the LRC by reducing the problem to that of obtaining the LRC for a general linear hypothesis in a multivariate linear model. In order to relate the model (7.1) to a multivariate linear model, consider the transformation from Y to (U V):

(U V) = Y(G1 G2),


where G1 = X(X′X)⁻¹, G2 = X̃, and X̃ is a p × (p − q) matrix satisfying X̃′X = O and X̃′X̃ = Ip−q. Then, the rows of (U V) are independently distributed as p-variate normal distributions with means

E(U) = AΘ,   E(V) = O,


and the common covariance matrix

Ψ = (G1 G2)′Σ(G1 G2) = (Ψ11 Ψ12 ; Ψ21 Ψ22),   Ψ11 : q × q.


This transformation can be regarded as one from y = (y1, …, yp)′ to a q-variate main variable u = (u1, …, uq)′ and a (p − q)-variate auxiliary variable v = (v1, …, vp − q)′. The model (7.1) is equivalent to the following joint model of two components:

  1. The conditional distribution of U given V is

U | V : rows independently normal with covariance matrix Ψ11·2 = Ψ11 − Ψ12Ψ22⁻¹Ψ21 and E(U | V) = AΘ + VΓ = A*Ξ, where A* = (A V), Ξ = (Θ′ Γ′)′, and Γ = Ψ22⁻¹Ψ21.   (7.4)


  2. The marginal distribution of V is

V : rows independently Np−q(0, Ψ22).   (7.5)




Before we obtain the LRC, we first consider the MLEs in (7.1). Applying a general theory of the multivariate linear model to (7.4) and (7.5), the MLEs of Ξ, Ψ11·2, and Ψ22 are given by

Ξ̂ = (A*′A*)⁻¹A*′U,   nΨ̂11·2 = U′(In − PA*)U,   nΨ̂22 = V′V,




and partition W as


Theorem 7.1 For an n × p observation matrix Y , assume a general multivariate linear model given by (7.1). Then:

1. The MLE Θ̂ of Θ is given by


2. The MLE Ψ̂11·2 of Ψ11·2 is given by


Proof. The MLE of Ξ is Ξ̂ = (A*′A*)⁻¹A*′U. The inverse formula (see (5.5)) gives


Therefore, we have




we obtain 1. For a derivation of 2, let B = (In − PA)V. Then, using PA* = PA + PB, the first expression of (1) is obtained. Similarly, the second expression of (2) is obtained.

Theorem 7.2 Let λ = Λ^(n/2) be the LRC for testing the hypothesis (7.2) in the generalized multivariate linear model (7.1). Then,






Here Θ̂ is given in Theorem 7.1 (1). Further, the null distribution is Λd(c, n − k − (p − q)).
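For completeness, the sketch below computes the MLE of Θ in model (7.1) in the classical form Θ̂ = (A′A)⁻¹A′Y S⁻¹X(X′S⁻¹X)⁻¹ with S = Y′(In − PA)Y, due to Khatri [16]; this is assumed, not quoted from the chapter, to coincide with the expression of Theorem 7.1 after rearrangement, and all data and dimensions are illustrative.

    # Sketch of the growth curve model E(Y) = A Theta X' and the MLE of Theta.
    import numpy as np

    rng = np.random.default_rng(10)
    n, p, k, q = 40, 6, 2, 3
    t = np.linspace(0.0, 1.0, p)
    X = np.vander(t, q, increasing=True)             # p x q within-subject design: 1, t, t^2
    A = np.kron(np.eye(k), np.ones((n // k, 1)))     # n x k between-group design
    Theta = rng.standard_normal((k, q))
    Y = A @ Theta @ X.T + rng.standard_normal((n, p))

    P_A = A @ np.linalg.inv(A.T @ A) @ A.T
    S = Y.T @ (np.eye(n) - P_A) @ Y                  # p x p residual SSP matrix
    S_inv = np.linalg.inv(S)
    Theta_hat = (np.linalg.inv(A.T @ A) @ A.T @ Y @ S_inv @ X
                 @ np.linalg.inv(X.T @ S_inv @ X))   # k x q MLE of Theta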

Proof. The test of Hg in (7.2) against alternatives Kg is equivalent to testing

Hg : C*ΞD = O

under the conditional model (7.4), where C* = (C O). Since the distribution of V does not depend on Hg, the LR test under the conditional model is the LR test under the unconditional model. Using a general result for a general linear hypothesis given in Theorem 4.1, we obtain




By reductions similar to those for the MLEs, it is seen that S̃e = Se and S̃h = Sh. This completes the proof.

8. Concluding remarks

In this chapter, we have discussed LRCs in the multivariate linear model, focusing on the role of projection matrices. The testing problems considered involve hypotheses on selection of variables or no additional information of a set of variables, in addition to a typical linear hypothesis. It may be noted that various LRCs and their distributions are obtained by algebraic methods.

We have not discussed LRCs for the hypothesis of selection of variables in canonical correlation analysis, or for dimensionality in the multivariate linear model. Some results for these problems can be found in [3, 18].

In multivariate analysis, there are some other test criteria such as Lawley-Hotelling trace criterion and Bartlett-Nanda-Pillai trace criterion. For the testing problems treated in this chapter, it is possible to propose such criteria as in [12].

The LRCs for tests of no additional information of a set of variables will be useful in selection of variables. For example, it is possible to propose model selection criteria such as AIC (see [19]).


The author wishes to thank Dr. Tetsuro Sakurai for his valuable comments on the first draft. The author's research is partially supported by the Ministry of Education, Science, Sports, and Culture, a Grant-in-Aid for Scientific Research (C), no. 25330038, 2013–2015.


1 - Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis, 3rd ed. John Wiley & Sons, Inc., Hoboken, N.J.
2 - Arnold, S. F. (1981). The Theory of Linear Models and Multivariate Analysis, John Wiley & Sons, Inc., Hoboken, N.J.
3 - Fujikoshi, Y., Ulyanov, V. V., and Shimizu, R. (2010). Multivariate Statistics: High-Dimensional and Large-Sample Approximations. John Wiley & Sons, Inc., Hoboken, N.J.
4 - Muirhead, R. J. (1982). Aspects of Multivariate Statistical Theory, John Wiley & Sons, Inc., New York.
5 - Rencher, A. C. (2002). Methods of Multivariate Analysis, 2nd ed. John Wiley & Sons, Inc., New York.
6 - Seber, G. A. F. (1984). Multivariate Observations. John Wiley & Sons, Inc., New York.
7 - Seber, G. A. F. (2008). A Matrix Handbook for Statisticians. 2nd edn. John Wiley & Sons, Inc., Hoboken, N.J.
8 - Siotani, M., Hayakawa, T., and Fujikoshi, Y. (1985). Modern Multivariate Statistical Analysis: A Graduate Course and Handbook, American Sciences Press, Columbus, OH.
9 - Harville, D. A. (1997). Matrix Algebra from a Statistician’s Perspective. Springer-Verlag, New York.
10 - Rao, C. R. (1973). Linear Statistical Inference and Its Applications, 2nd ed. John Wiley & Sons, Inc., New York.
11 - Rao, C. R. (1948). Tests of significance in multivariate analysis. Biometrika. 35, 58–79.
12 - Fujikoshi, Y. (1989). Tests for redundancy of some variables in multivariate analysis. In Recent Developments in Statistical Data Analysis and Inference (Y. Dodge, ed.). Elsevier Science Publishers B.V., Amsterdam, 141–163.
13 - Rao, C. R. (1970). Inference on discriminant function coefficients. In Essays in Probability and Statistics, (R. C. Bose, ed.), University of North Carolina Press, Chapel Hill, NC, 587–602.
14 - Potthoff, R. F. and Roy, S. N. (1964). A generalized multivariate analysis of variance model useful especially for growth curve problems. Biometrika. 51, 313–326.
15 - Gleser, L. J. and Olkin, I. (1970). Linear models in multivariate analysis, In Essays in Probability and Statistics, (R.C. Bose, ed.), University of North Carolina Press, Chapel Hill, NC, 267–292.
16 - Khatri, C. G. (1966). A note on a MANOVA model applied to problems in growth curve. Ann. Inst. Statist. Math. 18, 75–86.
17 - Kshirsagar, A. M. (1995). Growth Curves. Marcel Dekker, Inc.
18 - Fujikoshi, Y. (1982). A test for additional information in canonical correlation analysis, Ann. Inst. Statist. Math., 34, 137–144.
19 - Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In 2nd International Symposium on Information Theory (B.N. Petrov and F. Csáki, eds.). Akadémia Kiado, Budapest, Hungary, 267–281.