Open access peer-reviewed chapter

# Likelihood Ratio Tests in Multivariate Linear Model

Written By

Yasunori Fujikoshi

Submitted: October 9th, 2015 Reviewed: January 22nd, 2016 Published: July 6th, 2016

DOI: 10.5772/62277

From the Edited Volume

## Applied Linear Algebra in Action

Edited by Vasilios N. Katsikis


## Abstract

The aim of this chapter is to review likelihood ratio test procedures in multivariate linear models, focusing on projection matrices. It is noted that the projection matrices onto the spaces spanned by the mean vectors under the hypothesis and under the alternatives play an important role. Some basic properties of projection matrices are given. The models treated include the multivariate regression model, the discriminant analysis model, and the growth curve model. The hypotheses treated involve a generalized linear hypothesis and a no-additional-information hypothesis, in addition to a usual linear hypothesis. The test statistics are expressed in terms of both projection matrices and sums of squares and products matrices.

### Keywords

• algebraic approach
• generalized linear hypothesis
• growth curve model
• multivariate linear model
• lambda distribution
• likelihood ratio criterion (LRC)
• projection matrix

## 1. Introduction

In this chapter, we review statistical inference, especially likelihood ratio criteria (LRC), in the multivariate linear model, focusing on matrix theory. Consider a multivariate linear model with p response variables y1, …, yp and k explanatory or dummy variables x1, …, xk. Suppose that y = (y1, …, yp)′ and x = (x1, …, xk)′ are measured for n subjects, and let the observations of the ith subject be denoted by yi and xi. Then, we have the observation matrices given by

$$
Y = \begin{pmatrix} \mathbf{y}_1' \\ \mathbf{y}_2' \\ \vdots \\ \mathbf{y}_n' \end{pmatrix}, \qquad
X = \begin{pmatrix} \mathbf{x}_1' \\ \mathbf{x}_2' \\ \vdots \\ \mathbf{x}_n' \end{pmatrix}. \tag{1.1}
$$

It is assumed that y1, …, yn are independent and have the same covariance matrix Σ. We express the mean of Y as follows:

$$
\mathrm{E}(Y) = \eta = (\eta_1, \ldots, \eta_p). \tag{1.2}
$$

A multivariate linear model is defined by requiring that

$$
\eta_i \in \Omega, \quad \text{for all } i = 1, \ldots, p, \tag{1.3}
$$

where Ω is a given subspace of the n-dimensional Euclidean space Rn. A typical Ω is given by

$$
\Omega = \mathcal{R}[X] = \{\eta = X\boldsymbol{\theta};\ \boldsymbol{\theta} = (\theta_1, \ldots, \theta_k)',\ -\infty < \theta_i < \infty,\ i = 1, \ldots, k\}. \tag{1.4}
$$

Here, ℛ[X] is the space spanned by the column vectors of X. A general theory of statistical inference on the regression parameter Θ can be found in texts on multivariate analysis; see, e.g., [1–8]. In this chapter, we take an algebraic approach to the multivariate linear model.

In Section 2, we consider a multivariate regression model in which the xi's are explanatory variables and Ω = ℛ[X]. The maximum likelihood estimators (MLEs) and the likelihood ratio criterion (LRC) for the hypothesis Θ2 = O are derived by using projection matrices. Here, Θ = (Θ1′ Θ2′)′. The distribution of the LRC is obtained by the multivariate Cochran theorem. It is pointed out that projection matrices play an important role. In Section 3, we give a summary of projection matrices. In Section 4, we consider testing a general linear hypothesis. In Section 5, we consider testing the hypothesis that y2 has no additional information in the presence of y1, where y1 = (y1, …, yq)′ and y2 = (yq + 1, …, yp)′. In Section 6, we consider testing problems in discriminant analysis. Section 7 deals with a generalized multivariate linear model, which is also called the growth curve model. Some concluding remarks are given in Section 8.

## 2. Multivariate regression model

In this section, we consider a multivariate regression model on p response variables and k explanatory variables denoted by y = (y1, …, yp)′ and x = (x1, …, xk)′, respectively. Suppose that we have the observation matrices given by (1.1). A multivariate regression model is given by

$$
Y = X\Theta + E, \tag{2.1}
$$

where Θ is a k × p unknown parameter matrix. It is assumed that the rows of the error matrix E are independently distributed as a p-variate normal distribution with mean zero and unknown covariance matrix Σ, i.e., N_p(0, Σ).

Let L(Θ, Σ) be the density function or the likelihood function. Then, we have

$$
-2\log L(\Theta, \Sigma) = n\log|\Sigma| + \operatorname{tr}\,\Sigma^{-1}(Y - X\Theta)'(Y - X\Theta) + np\log 2\pi.
$$

The maximum likelihood estimators (MLEs) Θ̂ and Σ̂ of Θ and Σ are defined as the maximizers of L(Θ, Σ), or equivalently the minimizers of −2 log L(Θ, Σ).

Theorem 2.1 Suppose that Y follows the multivariate regression model in (2.1). Then, the MLEs of Θ and Σ are given as

$$
\hat\Theta = (X'X)^{-1}X'Y, \qquad
\hat\Sigma = \frac{1}{n}(Y - X\hat\Theta)'(Y - X\hat\Theta) = \frac{1}{n}Y'(I_n - P_X)Y,
$$

where $P_X = X(X'X)^{-1}X'$. Further, it holds that

$$
-2\log L(\hat\Theta, \hat\Sigma) = n\log|\hat\Sigma| + np(\log 2\pi + 1).
$$

Theorem 2.1 can be shown by a linear algebraic method, which is discussed in the next section. Note that P_X is the projection matrix onto the range space Ω = ℛ[X]. It is symmetric and idempotent, i.e.,

$$
P_X' = P_X, \qquad P_X^2 = P_X.
$$
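As a quick numerical sketch (assuming numpy is available; the matrix X below is arbitrary synthetic data, not from the chapter), these two properties, together with (P4) of Section 3, can be verified directly:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3))  # n = 10 observations, k = 3 regressors

# projection matrix onto the column space of X
P_X = X @ np.linalg.inv(X.T @ X) @ X.T

assert np.allclose(P_X, P_X.T)       # symmetric
assert np.allclose(P_X @ P_X, P_X)   # idempotent
assert np.isclose(np.trace(P_X), 3)  # trace equals dim R[X]
```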

Next, we consider testing the hypothesis

$$
H : \mathrm{E}(Y) = X_1\Theta_1, \quad \text{i.e., } \Theta_2 = O, \tag{2.2}
$$

against the alternatives K : Θ2 ≠ O, where X = (X1 X2), X1 : n × j, and Θ = (Θ1′ Θ2′)′, Θ1 : j × p. The hypothesis means that the last k − j variables x2 = (xj + 1, …, xk)′ have no additional information in the presence of the first j variables x1 = (x1, …, xj)′. In general, the likelihood ratio criterion (LRC) is defined by

$$
\lambda = \frac{\max_H L(\Theta, \Sigma)}{\max_K L(\Theta, \Sigma)}. \tag{2.3}
$$

Then we can express

$$
\begin{aligned}
-2\log\lambda &= \min_H\{-2\log L(\Theta, \Sigma)\} - \min_K\{-2\log L(\Theta, \Sigma)\}\\
&= \min_H\{n\log|\Sigma| + \operatorname{tr}\,\Sigma^{-1}(Y - X\Theta)'(Y - X\Theta)\}\\
&\quad - \min_K\{n\log|\Sigma| + \operatorname{tr}\,\Sigma^{-1}(Y - X\Theta)'(Y - X\Theta)\}.
\end{aligned}
$$

Using Theorem 2.1, the LRC can be expressed as

$$
\lambda^{2/n} \equiv \Lambda = \frac{|n\hat\Sigma_\Omega|}{|n\hat\Sigma_\omega|}.
$$

Here, Σ̂_Ω and Σ̂_ω are the maximum likelihood estimators of Σ under K (i.e., the model (2.1)) and under H, respectively, which are given by

$$
n\hat\Sigma_\Omega = (Y - X\hat\Theta_\Omega)'(Y - X\hat\Theta_\Omega) = Y'(I_n - P_\Omega)Y, \qquad \hat\Theta_\Omega = (X'X)^{-1}X'Y, \tag{2.4}
$$

and

$$
n\hat\Sigma_\omega = (Y - X_1\hat\Theta_{1\omega})'(Y - X_1\hat\Theta_{1\omega}) = Y'(I_n - P_\omega)Y, \qquad \hat\Theta_{1\omega} = (X_1'X_1)^{-1}X_1'Y. \tag{2.5}
$$

Summarizing these results, we have the following theorem.

Theorem 2.2 Let λ = Λ^{n/2} be the LRC for testing H in (2.2). Then, Λ is expressed as

$$
\Lambda = \frac{|S_e|}{|S_e + S_h|}, \tag{2.6}
$$

where

$$
S_e = n\hat\Sigma_\Omega, \qquad S_h = n(\hat\Sigma_\omega - \hat\Sigma_\Omega), \tag{2.7}
$$

and Σ̂_Ω and Σ̂_ω are given by (2.4) and (2.5), respectively.
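As an illustrative numerical sketch (assuming numpy; the data below are synthetic and generated under H), the two expressions for Λ — the SSP form (2.6) and the determinant ratio of the two restricted/unrestricted estimators of Σ — can be checked to agree:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k, j = 50, 3, 5, 2
X = rng.standard_normal((n, k))
Y = rng.standard_normal((n, p))   # synthetic data consistent with H
X1 = X[:, :j]

def proj(M):
    """Orthogonal projection matrix onto the column space of M."""
    return M @ np.linalg.inv(M.T @ M) @ M.T

P_Omega, P_omega = proj(X), proj(X1)
Se = Y.T @ (np.eye(n) - P_Omega) @ Y   # error SSP matrix
Sh = Y.T @ (P_Omega - P_omega) @ Y     # hypothesis SSP matrix

Lam = np.linalg.det(Se) / np.linalg.det(Se + Sh)
# the same statistic via n*Sigma-hat under K and under H
Lam2 = (np.linalg.det(Y.T @ (np.eye(n) - P_Omega) @ Y)
        / np.linalg.det(Y.T @ (np.eye(n) - P_omega) @ Y))
assert np.isclose(Lam, Lam2)
```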

The matrices S_e and S_h in the testing problem are called the sums of squares and products (SSP) matrices due to the error and the hypothesis, respectively. We now consider the distribution of Λ. If a p × p random matrix W is expressed as

$$
W = \sum_{j=1}^{n} \mathbf{z}_j \mathbf{z}_j',
$$

where z_j ∼ N_p(μ_j, Σ) and z1, …, zn are independent, then W is said to have a noncentral Wishart distribution with n degrees of freedom, covariance matrix Σ, and noncentrality matrix Δ = μ1μ1′ + ⋯ + μnμn′. We write W ∼ W_p(n, Σ; Δ). In the special case Δ = O, W is said to have a Wishart distribution, denoted by W ∼ W_p(n, Σ).

Theorem 2.3 (multivariate Cochran theorem) Let Y = (y1, …, yn)′, where y_i ∼ N_p(μ_i, Σ), i = 1, …, n, and y1, …, yn are independent. Let A, A1, and A2 be n × n symmetric matrices. Then:

1. Y′AY ∼ W_p(k, Σ; Δ) ⇔ A² = A, tr A = k, where Δ = (E Y)′A(E Y).

2. Y′A1Y and Y′A2Y are independent ⇔ A1A2 = O.

For a proof of the multivariate Cochran theorem, see, e.g., [3, 6–8]. Let B and W be independent random matrices following the Wishart distributions W_p(q, Σ) and W_p(n, Σ), respectively, with n ≥ p. Then, the distribution of

$$
\Lambda = \frac{|W|}{|B + W|}
$$

is said to be the p-dimensional Lambda distribution with (q, n) degrees of freedom and is denoted by Λp(q, n). For distributional results on Λp(q, n), see [1, 3].

By using multivariate Cochran’s theorem, we have the following distributional results:

Theorem 2.4 Let S e and S h be the random matrices in (2.7). Let Λ be the Λ-statistic defined by (2.6). Then,

1. S_e and S_h are independently distributed as a Wishart distribution W_p(n − k, Σ) and a noncentral Wishart distribution W_p(k − j, Σ; Δ), respectively, where

$$
\Delta = (X\Theta)'(P_X - P_{X_1})(X\Theta). \tag{2.8}
$$

2. Under H, the statistic Λ is distributed as a lambda distribution Λp(k − j, n − k).

Proof. Note that PΩ = PX = X(X′X)⁻¹X′, Pω = PX₁ = X₁(X₁′X₁)⁻¹X₁′, and PΩPω = PωPΩ = Pω. By the multivariate Cochran theorem, the first result (1) follows by checking that

$$
(I_n - P_\Omega)^2 = I_n - P_\Omega, \quad (P_\Omega - P_\omega)^2 = P_\Omega - P_\omega, \quad (I_n - P_\Omega)(P_\Omega - P_\omega) = O.
$$

The second result (2) follows by showing that Δ0 = O, where Δ0 is the Δ under H. This is seen from

$$
\Delta_0 = (X_1\Theta_1)'(P_\Omega - P_\omega)(X_1\Theta_1) = O,
$$

since PΩX1 = PωX1 = X1.

The matrices S_e and S_h in (2.7) are defined in terms of the n × n matrices PΩ and Pω. It is important to give expressions useful for numerical computation. We have the following expressions:

$$
S_e = Y'Y - Y'X(X'X)^{-1}X'Y, \qquad
S_h = Y'X(X'X)^{-1}X'Y - Y'X_1(X_1'X_1)^{-1}X_1'Y.
$$

Suppose that x1 is 1 for all subjects, i.e., x1 is an intercept term. Then, we can express these in terms of the SSP matrix of (y', x')′ defined by

$$
S = \sum_{i=1}^{n}\begin{pmatrix} \mathbf{y}_i - \bar{\mathbf{y}} \\ \mathbf{x}_i - \bar{\mathbf{x}} \end{pmatrix}\begin{pmatrix} \mathbf{y}_i - \bar{\mathbf{y}} \\ \mathbf{x}_i - \bar{\mathbf{x}} \end{pmatrix}' = \begin{pmatrix} S_{yy} & S_{yx} \\ S_{xy} & S_{xx} \end{pmatrix}, \tag{2.9}
$$

where ȳ and x̄ are the sample mean vectors. Along the partition x = (x1′, x2′)′, we partition S as

$$
S = \begin{pmatrix} S_{yy} & S_{y1} & S_{y2} \\ S_{1y} & S_{11} & S_{12} \\ S_{2y} & S_{21} & S_{22} \end{pmatrix}. \tag{2.10}
$$

Then,

$$
S_e = S_{yy\cdot x}, \qquad S_h = S_{y2\cdot1}S_{22\cdot1}^{-1}S_{2y\cdot1}. \tag{2.11}
$$

Here, we use the notation S_{yy·x} = S_{yy} − S_{yx}S_{xx}⁻¹S_{xy}, S_{y2·1} = S_{y2} − S_{y1}S_{11}⁻¹S_{12}, etc. These expressions are derived in the next section by using projection matrices.
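As a small numerical sketch of the first identity in (2.11) (numpy assumed; synthetic data), S_e computed from the projection matrix equals S_{yy·x} computed from the centered SSP blocks when the design contains an intercept:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, k = 40, 2, 3
Xv = rng.standard_normal((n, k))       # explanatory variables without the intercept
Y = rng.standard_normal((n, p))
X = np.hstack([np.ones((n, 1)), Xv])   # design matrix with intercept term

P_X = X @ np.linalg.inv(X.T @ X) @ X.T
Se_proj = Y.T @ (np.eye(n) - P_X) @ Y  # S_e via the projection matrix

# centered SSP blocks of (y', x')'
Yc = Y - Y.mean(axis=0)
Xc = Xv - Xv.mean(axis=0)
Syy, Syx, Sxx = Yc.T @ Yc, Yc.T @ Xc, Xc.T @ Xc
Se_ssp = Syy - Syx @ np.linalg.inv(Sxx) @ Syx.T   # S_{yy.x}

assert np.allclose(Se_proj, Se_ssp)
```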

## 3. Idempotent matrices and max-mini problems

In the previous section, we have seen that idempotent matrices play an important role in statistical inference in the multivariate regression model. In fact, letting E(Y) = η = (η1, …, ηp), consider a model satisfying

$$
\eta_i \in \Omega = \mathcal{R}[X], \quad \text{for all } i = 1, \ldots, p. \tag{3.1}
$$

Then the MLE of Θ is Θ̂ = (X′X)⁻¹X′Y, and hence the MLE of η is given by

$$
\hat\eta_\Omega = X\hat\Theta = P_\Omega Y.
$$

Here, PΩ = X(X′X)⁻¹X′. Further, the residual sums of squares and products (RSSP) matrix is expressed as

$$
S_\Omega = (Y - \hat\eta_\Omega)'(Y - \hat\eta_\Omega) = Y'(I_n - P_\Omega)Y.
$$

Under the hypothesis (2.2), the spaces to which the ηi belong are the same and are given by ω = ℛ[X1]. Similarly, we have

$$
\hat\eta_\omega = X\hat\Theta_\omega = P_\omega Y, \qquad
S_\omega = (Y - \hat\eta_\omega)'(Y - \hat\eta_\omega) = Y'(I_n - P_\omega)Y,
$$

where Θ̂_ω = (Θ̂_{1ω}′ O)′ and Θ̂_{1ω} = (X1′X1)⁻¹X1′Y. The LR criterion is based on the following decomposition of SSP matrices:

$$
S_\omega = Y'(I_n - P_\omega)Y = Y'(I_n - P_\Omega)Y + Y'(P_\Omega - P_\omega)Y = S_e + S_h.
$$

The degrees of freedom in the Λ distribution Λp(fhfe) are given by

$$
f_e = n - \dim\Omega, \qquad f_h = k - j = \dim\Omega - \dim\omega.
$$

In general, an n × n matrix P is called idempotent if P² = P. A symmetric and idempotent matrix is called a projection matrix. Let Rn be the n-dimensional Euclidean space, and let Ω be a subspace of Rn. Then, any n × 1 vector y can be uniquely decomposed into the direct sum, i.e.,

$$
\mathbf{y} = \mathbf{u} + \mathbf{v}, \quad \mathbf{u} \in \Omega, \quad \mathbf{v} \in \Omega^{\perp}, \tag{3.2}
$$

where Ω⊥ is the orthogonal complement of Ω. Using the decomposition (3.2), consider the mapping

$$
P_\Omega : \mathbf{y} \to \mathbf{u}, \quad \text{i.e., } P_\Omega\mathbf{y} = \mathbf{u}.
$$

The mapping is linear, and hence it can be expressed as a matrix. In this case, u is called the orthogonal projection of y onto Ω, and PΩ is also called the orthogonal projection matrix onto Ω. Then, we have the following basic properties:

(P1) PΩ is uniquely defined;

(P2) In − PΩ is the projection matrix onto Ω⊥;

(P3) PΩ is a symmetric idempotent matrix;

(P4) ℛ[PΩ] = Ω, and dim[Ω] = trPΩ;

Let ω be a subspace of Ω. Then, we have the following properties:

(P5) PΩPω = PωPΩ = Pω.

(P6) PΩ − Pω = P_{ω⊥ ∩ Ω}, where ω⊥ is the orthogonal complement of ω.

(P7) Let B be a q × n matrix, and let N[B] = {y : By = 0}. If ω = N[B] ∩ Ω, then ω⊥ ∩ Ω = ℛ[PΩB′].

For more details, see, e.g. [3, 7, 9, 10].
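The nesting properties (P5) and (P6) can be sketched numerically as follows (numpy assumed; the nested subspaces below are synthetic, with ω = ℛ[X1] ⊂ Ω = ℛ[X]):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 12
X = rng.standard_normal((n, 4))
X1 = X[:, :2]                    # omega = R[X1] is a subspace of Omega = R[X]

def proj(M):
    """Orthogonal projection matrix onto the column space of M."""
    return M @ np.linalg.inv(M.T @ M) @ M.T

P_Omega, P_omega = proj(X), proj(X1)

# (P5): P_Omega P_omega = P_omega P_Omega = P_omega
assert np.allclose(P_Omega @ P_omega, P_omega)
assert np.allclose(P_omega @ P_Omega, P_omega)

# (P6): P_Omega - P_omega is itself a projection matrix (onto omega-perp within Omega)
D = P_Omega - P_omega
assert np.allclose(D, D.T) and np.allclose(D @ D, D)
```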

The MLEs and LRC in multivariate regression model are derived by using the following theorem.

Theorem 3.1

1. Consider the function f(Σ) = log|Σ| + tr Σ⁻¹S of a p × p positive definite matrix Σ, where S is a given p × p positive definite matrix. Then, f(Σ) attains its minimum uniquely at Σ = S, and the minimum value is given by

$$
\min_{\Sigma > O} f(\Sigma) = f(S) = \log|S| + p.
$$

2. Let Y be an n × p known matrix and X an n × k known matrix of rank k. Consider the function of a p × p positive definite matrix Σ and a k × p matrix Θ = (θij) given by

$$
g(\Theta, \Sigma) = m\log|\Sigma| + \operatorname{tr}\,\Sigma^{-1}(Y - X\Theta)'(Y - X\Theta),
$$

where m > 0 and −∞ < θij < ∞, for i = 1, …, k; j = 1, …, p. Then, g(Θ, Σ) attains its minimum at

$$
\Theta = \hat\Theta = (X'X)^{-1}X'Y, \qquad \Sigma = \hat\Sigma = \frac{1}{m}Y'(I_n - P_X)Y,
$$

and the minimum value is given by m log|Σ̂| + mp.

Proof. Let ℓ1, …, ℓp be the characteristic roots of Σ⁻¹S. Note that the characteristic roots of Σ⁻¹S and Σ^{−1/2}SΣ^{−1/2} are the same. The latter matrix is positive definite, and hence we may assume ℓ1 ≥ ⋯ ≥ ℓp > 0. Then

$$
f(\Sigma) - f(S) = \log|\Sigma S^{-1}| + \operatorname{tr}\,\Sigma^{-1}S - p
= -\log|\Sigma^{-1}S| + \operatorname{tr}\,\Sigma^{-1}S - p
= \sum_{i=1}^{p}(\ell_i - \log \ell_i - 1) \ge 0.
$$

The last inequality follows from x − 1 ≥ log x (x > 0). Equality holds if and only if ℓ1 = ⋯ = ℓp = 1, i.e., Σ = S.

Next, we prove 2. We have

$$
\begin{aligned}
\operatorname{tr}\,\Sigma^{-1}(Y - X\Theta)'(Y - X\Theta)
&= \operatorname{tr}\,\Sigma^{-1}(Y - X\hat\Theta)'(Y - X\hat\Theta) + \operatorname{tr}\,\Sigma^{-1}(X\hat\Theta - X\Theta)'(X\hat\Theta - X\Theta) \\
&\ge \operatorname{tr}\,\Sigma^{-1}Y'(I_n - P_X)Y.
\end{aligned}
$$

The first equality follows from the decomposition Y − XΘ = (Y − XΘ̂) + X(Θ̂ − Θ) and the fact that (Y − XΘ̂)′X(Θ̂ − Θ) = O. In the last step, equality holds when Θ = Θ̂. The required result is obtained by noting that Θ̂ does not depend on Σ and combining this with the first result 1.
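The minimization in part 1 can be sketched numerically (numpy assumed; S and the trial matrices Σ below are synthetic positive definite matrices):

```python
import numpy as np

def f(Sigma, S):
    """f(Sigma) = log|Sigma| + tr(Sigma^{-1} S)."""
    sign, logdet = np.linalg.slogdet(Sigma)
    return logdet + np.trace(np.linalg.solve(Sigma, S))

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 3))
S = A.T @ A                      # a positive definite 3 x 3 matrix

p = S.shape[0]
fmin = f(S, S)                   # minimum value f(S) = log|S| + p
sign, logdetS = np.linalg.slogdet(S)
assert np.isclose(fmin, logdetS + p)

# any other positive definite Sigma gives a value at least as large
for _ in range(5):
    B = rng.standard_normal((5, 3))
    Sigma = B.T @ B
    assert f(Sigma, S) >= fmin - 1e-10
```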

Theorem 3.2 Let X be an n × k matrix of rank k, and let Ω = ℛ[X], which is also defined as the set {y : y = Xθ}, where θ is a k × 1 unknown parameter vector. Let C be a c × k matrix of rank c, and define ω as the set {y : y = Xθ, Cθ = 0}. Then,

1. PΩ = X(X′X)⁻¹X′.

2. PΩ − Pω = X(X′X)⁻¹C′{C(X′X)⁻¹C′}⁻¹C(X′X)⁻¹X′.

Proof. 1. Let ŷ = X(X′X)⁻¹X′y, and consider the decomposition y = ŷ + (y − ŷ). Then X′(y − ŷ) = 0, so y − ŷ ∈ Ω⊥. Therefore, PΩy = ŷ, and hence PΩ = X(X′X)⁻¹X′.

2. Since Cθ = C(X′X)⁻¹X′ · Xθ, we can write ω = N[B] ∩ Ω, where B = C(X′X)⁻¹X′. Using (P7),

$$
\omega^{\perp} \cap \Omega = \mathcal{R}[P_\Omega B'] = \mathcal{R}[X(X'X)^{-1}C'].
$$

The final result is obtained by applying 1 to the space ℛ[X(X′X)⁻¹C′] and using (P6).

Consider the special case C = (O Ik − q). Then ω = ℛ[X1], where X = (X1 X2), X1 : n × q. We have the following results:

$$
\omega^{\perp} \cap \Omega = \mathcal{R}[(I_n - P_{X_1})X_2], \qquad
P_{\omega^{\perp} \cap \Omega} = (I_n - P_{X_1})X_2\{X_2'(I_n - P_{X_1})X_2\}^{-1}X_2'(I_n - P_{X_1}).
$$

The expressions (2.11) for S e and S h in terms of S can be obtained from projection matrices based on

$$
\Omega = \mathcal{R}[X] = \mathcal{R}[\mathbf{1}_n] \oplus \mathcal{R}[(I_n - P_{\mathbf{1}_n})X], \qquad
\omega^{\perp} \cap \Omega = \mathcal{R}\!\left[\left(I_n - P_{\mathbf{1}_n} - P_{(I_n - P_{\mathbf{1}_n})X_1}\right)X_2\right].
$$

## 4. General linear hypothesis

In this section, we consider testing a general linear hypothesis

$$
H_g : C\Theta D = O, \tag{4.1}
$$

against the alternatives Kg : CΘD ≠ O under the multivariate linear model given by (2.1), where C is a given c × k matrix of rank c and D is a given p × d matrix of rank d. When C = (O Ik − j) and D = Ip, the hypothesis Hg becomes H : Θ2 = O.

For the derivation of the LR test of (4.1), we can use the following conventional approach. Let U = YD; then the rows of U are independent and normally distributed with the common covariance matrix D′ΣD, and

$$
\mathrm{E}(U) = X\Xi, \tag{4.2}
$$

where Ξ = Θ D . The hypothesis (4.1) is expressed as

$$
H_g : C\Xi = O. \tag{4.3}
$$

Applying the general theory of Section 2 to testing (4.3) under the model (4.2), we obtain the LRC λ:

$$
\lambda^{2/n} = \Lambda = \frac{|S_e|}{|S_e + S_h|}, \tag{4.4}
$$

where

$$
S_e = U'(I_n - P_X)U = D'Y'(I_n - P_X)YD,
$$

and

$$
\begin{aligned}
S_h &= \{C(X'X)^{-1}X'U\}'\{C(X'X)^{-1}C'\}^{-1}C(X'X)^{-1}X'U \\
&= \{C(X'X)^{-1}X'YD\}'\{C(X'X)^{-1}C'\}^{-1}C(X'X)^{-1}X'YD.
\end{aligned}
$$

Theorem 4.1 The statistic Λ in (4.4) is an LR statistic for testing (4.1) under (2.1). Further, under Hg, Λ ∼ Λd(c, n − k).

Proof. Let G = (G1 G2) be a p × p matrix such that G1 = D, G1′G2 = O, and |G| ≠ 0. Consider the transformation from Y to (U V) = Y(G1 G2).

Then the rows of (U V) are independently normal with the common covariance matrix

$$
\Psi = G'\Sigma G = \begin{pmatrix} \Psi_{11} & \Psi_{12} \\ \Psi_{21} & \Psi_{22} \end{pmatrix}, \qquad \Psi_{12} : d \times (p - d),
$$

and

$$
\mathrm{E}[(U\ V)] = X\Theta(G_1\ G_2) = X(\Xi\ \Delta), \qquad \Xi = \Theta G_1, \quad \Delta = \Theta G_2.
$$

The conditional distribution of V given U is normal: the rows of V given U are independently normal with the common covariance matrix Ψ22·1 = Ψ22 − Ψ21Ψ11⁻¹Ψ12, and

$$
\mathrm{E}(V \mid U) = X\Delta + (U - X\Xi)\Gamma = X\Delta_* + U\Gamma,
$$

where Δ* = Δ − ΞΓ and Γ = Ψ11⁻¹Ψ12. We see that the maximized likelihood of V given U does not depend on the hypothesis. Therefore, an LR statistic is obtained from the marginal distribution of U, which implies the required results.
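As a small numerical sketch (numpy assumed; synthetic data), one can check that with C = (O I) and D = Ip the SSP matrices of (4.4) reproduce the Section 2 statistics for H : Θ2 = O:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, k, j = 30, 2, 4, 2
X = rng.standard_normal((n, k))
Y = rng.standard_normal((n, p))

XtX_inv = np.linalg.inv(X.T @ X)
Theta_hat = XtX_inv @ X.T @ Y

# general-linear-hypothesis SSP matrices with C = (O I), D = I_p
C = np.hstack([np.zeros((k - j, j)), np.eye(k - j)])
D = np.eye(p)
Se = D.T @ Y.T @ (np.eye(n) - X @ XtX_inv @ X.T) @ Y @ D
CTD = C @ Theta_hat @ D
Sh = CTD.T @ np.linalg.inv(C @ XtX_inv @ C.T) @ CTD

# the same hypothesis SSP matrix via projection matrices (Section 2)
def proj(M):
    return M @ np.linalg.inv(M.T @ M) @ M.T
Sh2 = Y.T @ (proj(X) - proj(X[:, :j])) @ Y
assert np.allclose(Sh, Sh2)
```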

## 5. Additional information tests for response variables

We consider a multivariate regression model with an intercept term x0 and k explanatory variables x1, …, xk as follows:

$$
Y = \mathbf{1}_n\boldsymbol{\theta}' + X\Theta + E, \tag{5.1}
$$

where Y and X are the observation matrices on y = (y1, …, yp)′ and x = (x1, …, xk)′. We assume that the error matrix E has the same properties as in (2.1), and that rank (1n X) = k + 1. Our interest is to test a hypothesis H2 ⋅ 1 of no additional information of y2 = (yq + 1, …, yp)′ in the presence of y1 = (y1, …, yq)′.

Along the partition of y into (y1′, y2′)′, partition Y, θ, Θ, and Σ as

$$
Y = (Y_1\ Y_2), \quad \Theta = (\Theta_1\ \Theta_2), \quad \boldsymbol{\theta}' = (\boldsymbol{\theta}_1'\ \boldsymbol{\theta}_2'), \quad \Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}.
$$

The conditional distribution of Y 2 given Y 1 is normal with mean

$$
\begin{aligned}
\mathrm{E}(Y_2 \mid Y_1) &= \mathbf{1}_n\boldsymbol{\theta}_2' + X\Theta_2 + (Y_1 - \mathbf{1}_n\boldsymbol{\theta}_1' - X\Theta_1)\Sigma_{11}^{-1}\Sigma_{12} \\
&= \mathbf{1}_n\tilde{\boldsymbol{\theta}}_2' + X\tilde\Theta_2 + Y_1\Sigma_{11}^{-1}\Sigma_{12},
\end{aligned} \tag{5.2}
$$

and the conditional covariance matrix is expressed as

$$
\operatorname{Var}[\operatorname{vec}(Y_2) \mid Y_1] = \Sigma_{22\cdot1} \otimes I_n, \tag{5.3}
$$

where Σ22·1 = Σ22 − Σ21Σ11⁻¹Σ12, and

$$
\tilde{\boldsymbol{\theta}}_2' = \boldsymbol{\theta}_2' - \boldsymbol{\theta}_1'\Sigma_{11}^{-1}\Sigma_{12}, \qquad \tilde\Theta_2 = \Theta_2 - \Theta_1\Sigma_{11}^{-1}\Sigma_{12}.
$$

Here, for an n × p matrix Y = (y(1), …, y(p)), vec(Y) denotes the np-vector (y(1)′, …, y(p)′)′. Now we define the hypothesis H2 ⋅ 1 as

$$
H_{2\cdot1} : \tilde\Theta_2 = \Theta_2 - \Theta_1\Sigma_{11}^{-1}\Sigma_{12} = O. \tag{5.4}
$$

The hypothesis H2 ⋅ 1 means that y2 after removing the effects of y1 does not depend on x. In other words, the relationship between y2 and x can be described by the relationship between y1 and x. In this sense, y2 is redundant in the relationship between y and x.

The LR criterion for testing the hypothesis H2 ⋅ 1 against the alternatives K2 ⋅ 1 : Θ̃2 ≠ O can be obtained through the following steps.

(D1) The density function of Y = Y 1 Y 2 can be expressed as the product of the marginal density function of Y 1 and the conditional density function of Y 2 given Y 1 . Note that the density functions of Y 1 under H2 ⋅ 1 and K2 ⋅ 1 are the same.

(D2) Each column of E(Y2 | Y1) lies in the same space; denote this space under K2 ⋅ 1 and under H2 ⋅ 1 by Ω and ω, respectively. Then

$$
\Omega = \mathcal{R}[(\mathbf{1}_n\ Y_1\ X)], \qquad \omega = \mathcal{R}[(\mathbf{1}_n\ Y_1)],
$$

and dim Ω = q + k + 1, dim ω = q + 1.

(D3) The likelihood ratio criterion λ is expressed as

$$
\lambda^{2/n} = \Lambda = \frac{|S_\Omega|}{|S_\omega|} = \frac{|S_\Omega|}{|S_\Omega + (S_\omega - S_\Omega)|},
$$

where S_Ω = Y2′(In − PΩ)Y2 and S_ω = Y2′(In − Pω)Y2.

(D4) Note that E(Y2 | Y1)′(PΩ − Pω)E(Y2 | Y1) = O under H2 ⋅ 1. The conditional distribution of Λ given Y1 under H2 ⋅ 1 is Λp − q(k, n − q − k − 1), and since this does not depend on Y1, the distribution of Λ under H2 ⋅ 1 is Λp − q(k, n − q − k − 1).

Note that the Λ statistic is defined through Y2′(In − PΩ)Y2 and Y2′(PΩ − Pω)Y2, which involve n × n matrices. We now write these statistics in terms of the SSP matrix of (y′, x′)′ defined by

$$
S = \sum_{i=1}^{n}\begin{pmatrix} \mathbf{y}_i - \bar{\mathbf{y}} \\ \mathbf{x}_i - \bar{\mathbf{x}} \end{pmatrix}\begin{pmatrix} \mathbf{y}_i - \bar{\mathbf{y}} \\ \mathbf{x}_i - \bar{\mathbf{x}} \end{pmatrix}' = \begin{pmatrix} S_{yy} & S_{yx} \\ S_{xy} & S_{xx} \end{pmatrix},
$$

where ȳ and x̄ are the sample mean vectors. Along the partition y = (y1′, y2′)′, we partition S as

$$
S = \begin{pmatrix} S_{11} & S_{12} & S_{1x} \\ S_{21} & S_{22} & S_{2x} \\ S_{x1} & S_{x2} & S_{xx} \end{pmatrix}.
$$

We can show that

$$
S_\omega = S_{22\cdot1} = S_{22} - S_{21}S_{11}^{-1}S_{12}, \qquad
S_\Omega = S_{22\cdot1x} = S_{22\cdot x} - S_{21\cdot x}S_{11\cdot x}^{-1}S_{12\cdot x}.
$$

The first result is obtained by using

$$
\omega = \mathcal{R}[\mathbf{1}_n] \oplus \mathcal{R}[(I_n - P_{\mathbf{1}_n})Y_1].
$$

The second result is obtained by using

$$
\Omega = \mathcal{R}[\mathbf{1}_n] \oplus \mathcal{R}[(\tilde Y_1\ \tilde X)]
= \mathcal{R}[\mathbf{1}_n] \oplus \mathcal{R}[(I_n - P_{\mathbf{1}_n})X] \oplus \mathcal{R}[(I_n - P_{\tilde X})(I_n - P_{\mathbf{1}_n})Y_1],
$$

where Ỹ1 = (In − P_{1n})Y1 and X̃ = (In − P_{1n})X.

Summarizing the above results, we have the following theorem.

Theorem 5.1 In the multivariate regression model (5.1), consider testing the hypothesis H2 ⋅ 1 in (5.4) against K2 ⋅ 1. Then the LR criterion λ is given by

$$
\lambda^{2/n} = \Lambda = \frac{|S_{22\cdot1x}|}{|S_{22\cdot1}|},
$$

whose null distribution is Λp − q(k, n − q − k − 1).

Note that S 22 1 can be decomposed as

$$
S_{22\cdot1} = S_{22\cdot1x} + S_{2x\cdot1}S_{xx\cdot1}^{-1}S_{x2\cdot1}.
$$

This decomposition is obtained by expressing S 22 1 x in terms of S 22 1 , S 2 x 1 , S x x 1 , and S x 2 1 by using an inverse formula

$$
\begin{pmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{pmatrix}^{-1} =
\begin{pmatrix} H_{11}^{-1} & O \\ O & O \end{pmatrix} +
\begin{pmatrix} -H_{11}^{-1}H_{12} \\ I \end{pmatrix} H_{22\cdot1}^{-1}
\begin{pmatrix} -H_{21}H_{11}^{-1} & I \end{pmatrix}. \tag{5.5}
$$
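The inverse formula can be sketched numerically as follows (numpy assumed; H below is an arbitrary synthetic well-conditioned matrix, with H22·1 = H22 − H21H11⁻¹H12):

```python
import numpy as np

rng = np.random.default_rng(6)
M = rng.standard_normal((5, 5))
H = M @ M.T + 5 * np.eye(5)        # a well-conditioned symmetric 5 x 5 matrix
H11, H12 = H[:3, :3], H[:3, 3:]
H21, H22 = H[3:, :3], H[3:, 3:]

H11_inv = np.linalg.inv(H11)
H221 = H22 - H21 @ H11_inv @ H12   # Schur complement H_{22.1}

top = np.block([[H11_inv, np.zeros((3, 2))],
                [np.zeros((2, 3)), np.zeros((2, 2))]])
L = np.vstack([-H11_inv @ H12, np.eye(2)])
R = np.hstack([-H21 @ H11_inv, np.eye(2)])
H_inv_formula = top + L @ np.linalg.inv(H221) @ R

assert np.allclose(H_inv_formula, np.linalg.inv(H))
```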

The decomposition is expressed as

$$
S_{22\cdot1} - S_{22\cdot1x} = S_{2x\cdot1}S_{xx\cdot1}^{-1}S_{x2\cdot1}. \tag{5.6}
$$

The result may also be obtained by the following algebraic method. We have

$$
S_{22\cdot1} - S_{22\cdot1x} = Y_2'(P_\Omega - P_\omega)Y_2 = Y_2'P_{\omega^\perp \cap \Omega}Y_2,
$$

and

$$
\Omega = \mathcal{R}[\mathbf{1}_n] \oplus \mathcal{R}[(\tilde Y_1\ \tilde X)], \qquad \omega = \mathcal{R}[\mathbf{1}_n] \oplus \mathcal{R}[\tilde Y_1].
$$

Therefore,

$$
\omega^\perp \cap \Omega = \mathcal{R}\!\left[\left(I_n - P_{\mathbf{1}_n} - P_{\tilde Y_1}\right)(\tilde Y_1\ \tilde X)\right] = \mathcal{R}\!\left[\left(I_n - P_{\mathbf{1}_n} - P_{\tilde Y_1}\right)\tilde X\right],
$$

which gives an expression for P_{ω⊥ ∩ Ω} by using Theorem 3.2 1. This leads to (5.6).

## 6. Tests in discriminant analysis

We consider q p-variate normal populations with common covariance matrix Σ, the ith population having mean vector θi. Suppose that a sample of size ni is available from the ith population, and let yij be the jth observation from the ith population. The observation matrix for all the observations is expressed as

$$
Y = (\mathbf{y}_{11}, \ldots, \mathbf{y}_{1n_1}, \mathbf{y}_{21}, \ldots, \mathbf{y}_{2n_2}, \ldots, \mathbf{y}_{q1}, \ldots, \mathbf{y}_{qn_q})'. \tag{6.1}
$$

It is assumed that the yij are independent, and

$$
\mathbf{y}_{ij} \sim N_p(\boldsymbol{\theta}_i, \Sigma), \quad j = 1, \ldots, n_i;\ i = 1, \ldots, q. \tag{6.2}
$$

The model is expressed as

$$
Y = A\Theta + E, \tag{6.3}
$$

where

$$
A = \begin{pmatrix} \mathbf{1}_{n_1} & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{0} & \mathbf{1}_{n_2} & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{1}_{n_q} \end{pmatrix}, \qquad
\Theta = (\boldsymbol{\theta}_1, \boldsymbol{\theta}_2, \ldots, \boldsymbol{\theta}_q)'.
$$

Here, the error matrix E has the same property as in (2.1).

First, we consider testing

$$
H : \boldsymbol{\theta}_1 = \cdots = \boldsymbol{\theta}_q\ (= \boldsymbol{\theta}), \tag{6.4}
$$

against the alternatives K : θi ≠ θj for some i ≠ j. The hypothesis can be expressed as

$$
H : C\Theta = O, \qquad C = (I_{q-1}\ \ {-\mathbf{1}_{q-1}}). \tag{6.5}
$$

The tests including the LRC are based on three basic statistics: the within-group SSP matrix W, the between-group SSP matrix B, and the total SSP matrix T, given by

$$
W = \sum_{i=1}^{q}(n_i - 1)S_i, \qquad
B = \sum_{i=1}^{q} n_i(\bar{\mathbf{y}}_i - \bar{\mathbf{y}})(\bar{\mathbf{y}}_i - \bar{\mathbf{y}})', \qquad
T = B + W = \sum_{i=1}^{q}\sum_{j=1}^{n_i}(\mathbf{y}_{ij} - \bar{\mathbf{y}})(\mathbf{y}_{ij} - \bar{\mathbf{y}})', \tag{6.6}
$$

where ȳi and Si are the mean vector and sample covariance matrix of the ith sample, ȳ is the total mean vector defined by ȳ = (1/n)∑_{i=1}^{q} ni ȳi, and n = ∑_{i=1}^{q} ni. In general, W and B are independently distributed as a Wishart distribution W_p(n − q, Σ) and a noncentral Wishart distribution W_p(q − 1, Σ; Δ), respectively, where

$$
\Delta = \sum_{i=1}^{q} n_i(\boldsymbol{\theta}_i - \bar{\boldsymbol{\theta}})(\boldsymbol{\theta}_i - \bar{\boldsymbol{\theta}})',
$$

where θ̄ = (1/n)∑_{i=1}^{q} niθi. Then, the following theorem is well known.

Theorem 6.1 Let λ = Λ^{n/2} be the LRC for testing H in (6.4). Then, Λ is expressed as

$$
\Lambda = \frac{|W|}{|W + B|} = \frac{|W|}{|T|}, \tag{6.7}
$$

where W, B , and T are given in (6.6). Further, under H, the statistic Λ is distributed as a lambda distribution Λp(q − 1, n − q).
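The decomposition T = W + B and the statistic (6.7) can be sketched numerically as follows (numpy assumed; the q = 3 synthetic samples below share a common mean, i.e., are generated under H):

```python
import numpy as np

rng = np.random.default_rng(7)
p, ns = 3, [8, 10, 12]                                 # q = 3 groups
groups = [rng.standard_normal((ni, p)) for ni in ns]   # common mean under H
Y = np.vstack(groups)
n, q = Y.shape[0], len(ns)

ybar = Y.mean(axis=0)                                  # total mean vector
W = sum((g - g.mean(axis=0)).T @ (g - g.mean(axis=0)) for g in groups)
B = sum(ni * np.outer(g.mean(axis=0) - ybar, g.mean(axis=0) - ybar)
        for ni, g in zip(ns, groups))
T = (Y - ybar).T @ (Y - ybar)

assert np.allclose(T, W + B)                           # T = W + B
Lam = np.linalg.det(W) / np.linalg.det(T)              # the statistic (6.7)
assert 0 < Lam <= 1
```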

Now we shall show Theorem 6.1 by an algebraic method. It is easy to see that

$$
\Omega = \mathcal{R}[A], \qquad \omega = N[C(A'A)^{-1}A'] \cap \Omega = \mathcal{R}[\mathbf{1}_n].
$$

The last equality can also be checked from the fact that, under H,

$$
\mathrm{E}(Y) = A\mathbf{1}_q\boldsymbol{\theta}' = \mathbf{1}_n\boldsymbol{\theta}'.
$$

We have

$$
T = Y'(I_n - P_{\mathbf{1}_n})Y = Y'(I_n - P_A)Y + Y'(P_A - P_{\mathbf{1}_n})Y = W + B.
$$

Further, it is easily checked that

1. (In − PA)² = In − PA, (PA − P_{1n})² = PA − P_{1n}.

2. (In − PA)(PA − P_{1n}) = O.

3. fe = n − dim ℛ[A] = tr(In − PA) = n − q, and

f_h = dim(ℛ[1n]⊥ ∩ ℛ[A]) = tr(PA − P_{1n}) = q − 1.

Related to the test of H, we are interested in whether a subset of the variables y1, …, yp is sufficient for discriminant analysis, or equivalently, whether the set of remaining variables has no additional information, i.e., is redundant. Without loss of generality, we consider the sufficiency of the subvector y1 = (y1, …, yk)′ of y, or the redundancy of the remainder y2 = (yk + 1, …, yp)′. Consider testing

$$
H_{2\cdot1} : \boldsymbol{\theta}_{1;2\cdot1} = \cdots = \boldsymbol{\theta}_{q;2\cdot1}\ (= \boldsymbol{\theta}_{2\cdot1}), \tag{6.8}
$$

where

$$
\boldsymbol{\theta}_i = \begin{pmatrix} \boldsymbol{\theta}_{i;1} \\ \boldsymbol{\theta}_{i;2} \end{pmatrix}, \quad \boldsymbol{\theta}_{i;1} : k \times 1, \quad i = 1, \ldots, q,
$$

and

$$
\boldsymbol{\theta}_{i;2\cdot1} = \boldsymbol{\theta}_{i;2} - \Sigma_{21}\Sigma_{11}^{-1}\boldsymbol{\theta}_{i;1}, \quad i = 1, \ldots, q.
$$

The testing problem was considered by [11]. The hypothesis can be formulated in terms of Mahalanobis distance and discriminant functions. For details, see [12, 13]. To obtain a likelihood ratio for H2 ⋅ 1, we partition the observation matrix as

$$
Y = (Y_1\ Y_2), \qquad Y_1 : n \times k.
$$

Then the conditional distribution of Y2 given Y1 is normal such that the rows of Y2 are independently distributed with covariance matrix Σ22·1 = Σ22 − Σ21Σ11⁻¹Σ12, and the conditional mean is given by

$$
\mathrm{E}(Y_2 \mid Y_1) = A\Theta_{2\cdot1} + Y_1\Sigma_{11}^{-1}\Sigma_{12}, \tag{6.9}
$$

where Θ2·1 = (θ_{1;2·1}, …, θ_{q;2·1})′. The LRC for H2 ⋅ 1 can be obtained by use of the conditional distribution, following the steps (D1)–(D4) in Section 5. In fact, each column of E(Y2 | Y1) lies in the same space; denote this space under K2 ⋅ 1 and under H2 ⋅ 1 by Ω and ω, respectively. Then

$$
\Omega = \mathcal{R}[(A\ Y_1)], \qquad \omega = \mathcal{R}[(\mathbf{1}_n\ Y_1)],
$$

dim Ω = q + k, and dim ω = k + 1. The likelihood ratio criterion λ can be expressed as

$$
\lambda^{2/n} = \Lambda = \frac{|S_\Omega|}{|S_\omega|} = \frac{|S_\Omega|}{|S_\Omega + (S_\omega - S_\Omega)|},
$$

where S_Ω = Y2′(In − PΩ)Y2 and S_ω = Y2′(In − Pω)Y2. We express the LRC in terms of W, B, and T. Let us partition W, B, and T as

$$
W = \begin{pmatrix} W_{11} & W_{12} \\ W_{21} & W_{22} \end{pmatrix}, \quad
B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}, \quad
T = \begin{pmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{pmatrix}, \tag{6.10}
$$

where W12, B12, and T12 are k × (p − k). Noting that PΩ = PA + P_{(In − PA)Y1}, we have

$$
S_\Omega = Y_2'\left(I_n - P_A - (I_n - P_A)Y_1\{Y_1'(I_n - P_A)Y_1\}^{-1}Y_1'(I_n - P_A)\right)Y_2 = W_{22} - W_{21}W_{11}^{-1}W_{12} = W_{22\cdot1}.
$$

Similarly, noting that Pω = P_{1n} + P_{(In − P_{1n})Y1}, we have

$$
S_\omega = Y_2'\left(I_n - P_{\mathbf{1}_n} - (I_n - P_{\mathbf{1}_n})Y_1\{Y_1'(I_n - P_{\mathbf{1}_n})Y_1\}^{-1}Y_1'(I_n - P_{\mathbf{1}_n})\right)Y_2 = T_{22} - T_{21}T_{11}^{-1}T_{12} = T_{22\cdot1}.
$$

Theorem 6.2 Suppose that the observation matrix Y in (6.1) consists of samples from N_p(θi, Σ), i = 1, …, q. Then the likelihood ratio criterion λ for the hypothesis H2 ⋅ 1 in (6.8) is given by

$$
\lambda = \left(\frac{|W_{22\cdot1}|}{|T_{22\cdot1}|}\right)^{n/2},
$$

where W and T are given by (6.6). Further, under H2 ⋅ 1,

$$
\frac{|W_{22\cdot1}|}{|T_{22\cdot1}|} \sim \Lambda_{p-k}(q - 1, n - q - k).
$$

Proof. We consider the conditional distributions of W22·1 and T22·1 given Y1 by using Theorem 2.3, and see that they do not depend on Y1. We have seen that

$$
W_{22\cdot1} = Y_2'Q_1Y_2, \qquad Q_1 = I_n - P_A - P_{(I_n - P_A)Y_1}.
$$

It is easy to see that Q1² = Q1, rank Q1 = tr Q1 = n − q − k, Q1A = O, Q1Y1 = O, and

$$
\mathrm{E}(Y_2 \mid Y_1)'Q_1\mathrm{E}(Y_2 \mid Y_1) = O.
$$

This implies that W22·1 | Y1 ∼ W_{p−k}(n − q − k, Σ22·1), and hence W22·1 ∼ W_{p−k}(n − q − k, Σ22·1). For T22·1, we have

$$
T_{22\cdot1} = Y_2'Q_2Y_2, \qquad Q_2 = I_n - P_{\mathbf{1}_n} - P_{(I_n - P_{\mathbf{1}_n})Y_1},
$$

and hence

$$
T_{22\cdot1} - W_{22\cdot1} = Y_2'(Q_2 - Q_1)Y_2.
$$

Similarly, Q2 is idempotent. Using P_{1n}PA = PAP_{1n} = P_{1n}, we have Q1Q2 = Q2Q1 = Q1, and hence

$$
(Q_2 - Q_1)^2 = Q_2 - Q_1, \qquad Q_1(Q_2 - Q_1) = O.
$$

Further, under H2 ⋅ 1,

$$
\mathrm{E}(Y_2 \mid Y_1)'(Q_2 - Q_1)\mathrm{E}(Y_2 \mid Y_1) = O.
$$

Since rank(Q2 − Q1) = tr(Q2 − Q1) = q − 1, under H2 ⋅ 1 the statistics T22·1 − W22·1 and W22·1 are independently distributed as W_{p−k}(q − 1, Σ22·1) and W_{p−k}(n − q − k, Σ22·1), which implies the stated Λ distribution.

## 7. General multivariate linear model

In this section, we consider a general multivariate linear model as follows. Let Y be an n × p observation matrix whose rows are independently distributed as a p-variate normal distribution with a common covariance matrix Σ. Suppose that the mean of Y is given by

$$
\mathrm{E}(Y) = A\Theta X', \tag{7.1}
$$

where A is a given n × k matrix of rank k, X is a given p × q matrix of rank q, and Θ is a k × q unknown parameter matrix. For a motivation of (7.1), consider the case where a single variable y is measured at p time points t1, …, tp (or under p different conditions) on n subjects chosen at random from a group. Suppose that we denote the variable y at time point tj by yj. Let the observations yi1, …, yip of the ith subject be denoted by

$$
\mathbf{y}_i = (y_{i1}, \ldots, y_{ip})', \quad i = 1, \ldots, n.
$$

If we consider a polynomial regression of degree q − 1 of y on the time variable t, then

$$
\mathrm{E}(\mathbf{y}_i) = X\boldsymbol{\theta},
$$

where

$$
X = \begin{pmatrix} 1 & t_1 & \cdots & t_1^{q-1} \\ \vdots & \vdots & & \vdots \\ 1 & t_p & \cdots & t_p^{q-1} \end{pmatrix}, \qquad
\boldsymbol{\theta} = (\theta_1, \theta_2, \ldots, \theta_q)'.
$$

If there are k different groups and each group has a polynomial regression of degree q − 1 of y, we have a model given by (7.1). From this motivation, the model (7.1) is also called a growth curve model. For details, see [14].
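The within-subject design matrix X above can be built directly as a Vandermonde matrix; a minimal sketch (numpy assumed; the time points are illustrative):

```python
import numpy as np

# p = 4 time points, polynomial of degree q - 1 = 2
t = np.array([1.0, 2.0, 3.0, 4.0])
q = 3
X = np.vander(t, N=q, increasing=True)   # columns: 1, t, t^2

assert X.shape == (4, 3)
assert np.allclose(X[:, 0], 1.0)
assert np.allclose(X[:, 2], t ** 2)
```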

Now, let us consider deriving the LRC for a general linear hypothesis

$$
H_g : C\Theta D = O, \tag{7.2}
$$

against the alternatives Kg : CΘD ≠ O. Here, C is a given c × k matrix of rank c, and D is a given q × d matrix of rank d. This problem was discussed in [15–17]. Here, we obtain the LRC by reducing the problem to that of a general linear hypothesis in a multivariate linear model. In order to relate the model (7.1) to a multivariate linear model, consider the transformation from Y to (U V):

$$
(U\ V) = YG, \qquad G = (G_1\ G_2), \tag{7.3}
$$

where G1 = X(X′X)⁻¹, G2 = X̃, and X̃ is a p × (p − q) matrix satisfying X̃′X = O and X̃′X̃ = I_{p − q}. Then, the rows of (U V) are independently distributed as p-variate normal distributions with means

$$
\mathrm{E}[(U\ V)] = (A\Theta\ \ O),
$$

and the common covariance matrix

$$
\Psi = G'\Sigma G = \begin{pmatrix} G_1'\Sigma G_1 & G_1'\Sigma G_2 \\ G_2'\Sigma G_1 & G_2'\Sigma G_2 \end{pmatrix} = \begin{pmatrix} \Psi_{11} & \Psi_{12} \\ \Psi_{21} & \Psi_{22} \end{pmatrix}.
$$

This transformation can be regarded as one from y = (y1, …, yp)′ to a q-variate main variable u = (u1, …, uq)′ and a (p − q)-variate auxiliary variable v = (v1, …, vp − q)′. The model (7.1) is equivalent to the following joint model of two components:

1. The conditional distribution of U given V is

$$
U \mid V \sim N_{n\times q}(A_*\Xi, \Psi_{11\cdot2}). \tag{7.4}
$$

2. The marginal distribution of V is

$$
V \sim N_{n\times(p-q)}(O, \Psi_{22}), \tag{7.5}
$$

where

$$
A_* = (A\ V), \qquad \Xi = \begin{pmatrix} \Theta \\ \Gamma \end{pmatrix}, \qquad \Gamma = \Psi_{22}^{-1}\Psi_{21}, \qquad \Psi_{11\cdot2} = \Psi_{11} - \Psi_{12}\Psi_{22}^{-1}\Psi_{21}.
$$

Before obtaining the LRC, we first consider the MLEs in (7.1). Applying the general theory of the multivariate linear model to (7.4) and (7.5), the MLEs of Ξ, Ψ11·2, and Ψ22 are given by

$$
\hat\Xi = (A_*'A_*)^{-1}A_*'U, \qquad n\hat\Psi_{11\cdot2} = U'(I_n - P_{A_*})U, \qquad n\hat\Psi_{22} = V'V. \tag{7.6}
$$

Let

$$
S = Y'(I_n - P_A)Y, \qquad W = G'SG = (U\ V)'(I_n - P_A)(U\ V),
$$

and partition W as

$$
W = \begin{pmatrix} W_{11} & W_{12} \\ W_{21} & W_{22} \end{pmatrix}, \qquad W_{12} : q \times (p - q).
$$

Theorem 7.1 For an n × p observation matrix Y , assume a general multivariate linear model given by (7.1). Then:

1. The MLE Θ ^ of Θ is given by

$$
\hat\Theta = (A'A)^{-1}A'YS^{-1}X(X'S^{-1}X)^{-1}.
$$

2. The MLE Ψ ^ 11 2 of Ψ 11 2 is given by

$$
n\hat\Psi_{11\cdot2} = W_{11\cdot2} = (X'S^{-1}X)^{-1}.
$$

Proof. The MLE of Ξ is Ξ̂ = (A_*′A_*)⁻¹A_*′U. The inverse formula (5.5) gives

$$
Q = (A_*'A_*)^{-1} =
\begin{pmatrix} (A'A)^{-1} & O \\ O & O \end{pmatrix} +
\begin{pmatrix} -(A'A)^{-1}A'V \\ I_{p-q} \end{pmatrix}\{V'(I_n - P_A)V\}^{-1}\begin{pmatrix} -V'A(A'A)^{-1} & I_{p-q} \end{pmatrix}
= \begin{pmatrix} Q_{11} & Q_{12} \\ Q_{21} & Q_{22} \end{pmatrix}.
$$

Therefore, we have

$$
\hat\Theta = (Q_{11}A' + Q_{12}V')U = (A'A)^{-1}A'YG_1 - (A'A)^{-1}A'YG_2(G_2'SG_2)^{-1}G_2'SG_1.
$$

Using

$$
G_2(G_2'SG_2)^{-1}G_2' = S^{-1} - S^{-1}X(X'S^{-1}X)^{-1}X'S^{-1},
$$

we obtain 1. For a derivation of 2, let B = (In − PA)V. Then, using P_{A_*} = PA + PB, we have U′(In − P_{A_*})U = W11 − W12W22⁻¹W21 = W11·2, and the expression W11·2 = (X′S⁻¹X)⁻¹ follows from W⁻¹ = (X X̃)′S⁻¹(X X̃).
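The closed form of Θ̂ in Theorem 7.1 can be sketched numerically (numpy assumed; the design matrices and Θ below are synthetic, with a nearly noiseless Y so that Θ̂ should recover Θ closely):

```python
import numpy as np

rng = np.random.default_rng(8)
n, p, k, q = 20, 5, 2, 3
A = np.hstack([np.ones((n, 1)), rng.standard_normal((n, k - 1))])  # between-subject design
t = np.linspace(0.0, 1.0, p)
X = np.vander(t, N=q, increasing=True)                             # within-subject polynomial design
Theta = rng.standard_normal((k, q))

# nearly noiseless growth curve data E(Y) = A Theta X'
Y = A @ Theta @ X.T + 1e-6 * rng.standard_normal((n, p))

S = Y.T @ (np.eye(n) - A @ np.linalg.inv(A.T @ A) @ A.T) @ Y
S_inv = np.linalg.inv(S)
Theta_hat = (np.linalg.inv(A.T @ A) @ A.T @ Y
             @ S_inv @ X @ np.linalg.inv(X.T @ S_inv @ X))

assert np.allclose(Theta_hat, Theta, atol=1e-3)
```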

Theorem 7.2 Let λ = Λ^{n/2} be the LRC for testing the hypothesis (7.2) in the generalized multivariate linear model (7.1). Then,

$$
\Lambda = \frac{|S_e|}{|S_e + S_h|},
$$

where

$$
S_e = D'(X'S^{-1}X)^{-1}D, \qquad S_h = (C\hat\Theta D)'\{CRC'\}^{-1}C\hat\Theta D,
$$

and

$$
R = (A'A)^{-1} + (A'A)^{-1}A'Y\left\{S^{-1} - S^{-1}X(X'S^{-1}X)^{-1}X'S^{-1}\right\}Y'A(A'A)^{-1}.
$$

Here, Θ̂ is given in Theorem 7.1 1. Further, the null distribution of Λ is Λd(c, n − k − (p − q)).

Proof. The test of Hg in (7.2) against alternatives Kg is equivalent to testing

$$
H_g : C_*\Xi D = O \tag{7.7}
$$

under the conditional model (7.4), where C* = (C O). Since the distribution of V does not depend on Hg, the LR test under the conditional model is also the LR test under the unconditional model. Using the result for a general linear hypothesis given in Theorem 4.1, we obtain

$$
\Lambda = \frac{|\tilde S_e|}{|\tilde S_e + \tilde S_h|},
$$

where

$$
\tilde S_e = D'U'(I_n - A_*(A_*'A_*)^{-1}A_*')UD, \qquad
\tilde S_h = (C_*\hat\Xi D)'\{C_*(A_*'A_*)^{-1}C_*'\}^{-1}C_*\hat\Xi D.
$$

By reductions similar to those used for the MLEs, it is seen that S̃e = Se and S̃h = Sh. This completes the proof.

## 8. Concluding remarks

In this chapter, we have discussed LRCs in the multivariate linear model, focusing on the role of projection matrices. The testing problems considered involve hypotheses on selection of variables, or no additional information of a set of variables, in addition to a typical linear hypothesis. It may be noted that the various LRCs and their distributions are obtained by algebraic methods.

We have not discussed LRCs for the hypothesis of selection of variables in canonical correlation analysis, or for dimensionality in the multivariate linear model. Some results for these problems can be found in [3, 18].

In multivariate analysis, there are other test criteria such as the Lawley-Hotelling trace criterion and the Bartlett-Nanda-Pillai trace criterion. For the testing problems treated in this chapter, it is possible to propose such criteria as in [12].

The LRCs for tests of no additional information of a set of variables will be useful in selection of variables. For example, it is possible to propose model selection criteria such as AIC (see [19]).

## Acknowledgments

The author wishes to thank Dr. Tetsuro Sakurai for his valuable comments on the first draft. The author's research is partially supported by the Ministry of Education, Science, Sports, and Culture, a Grant-in-Aid for Scientific Research (C), no. 25330038, 2013–2015.

## References

1. Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis, 3rd ed. John Wiley & Sons, Inc., Hoboken, N.J.
2. Arnold, S. F. (1981). The Theory of Linear Models and Multivariate Analysis. John Wiley & Sons, Inc., Hoboken, N.J.
3. Fujikoshi, Y., Ulyanov, V. V., and Shimizu, R. (2010). Multivariate Statistics: High-Dimensional and Large-Sample Approximations. John Wiley & Sons, Inc., Hoboken, N.J.
4. Muirhead, R. J. (1982). Aspects of Multivariate Statistical Theory. John Wiley & Sons, Inc., New York.
5. Rencher, A. C. (2002). Methods of Multivariate Analysis, 2nd ed. John Wiley & Sons, Inc., New York.
6. Seber, G. A. F. (1984). Multivariate Observations. John Wiley & Sons, Inc., New York.
7. Seber, G. A. F. (2008). A Matrix Handbook for Statisticians, 2nd ed. John Wiley & Sons, Inc., Hoboken, N.J.
8. Siotani, M., Hayakawa, T., and Fujikoshi, Y. (1985). Modern Multivariate Statistical Analysis: A Graduate Course and Handbook. American Sciences Press, Columbus, OH.
9. Harville, D. A. (1997). Matrix Algebra from a Statistician's Perspective. Springer-Verlag, New York.
10. Rao, C. R. (1973). Linear Statistical Inference and Its Applications, 2nd ed. John Wiley & Sons, Inc., New York.
11. Rao, C. R. (1948). Tests of significance in multivariate analysis. Biometrika. 35, 58–79.
12. Fujikoshi, Y. (1989). Tests for redundancy of some variables in multivariate analysis. In Recent Developments in Statistical Data Analysis and Inference (Y. Dodge, ed.). Elsevier Science Publishers B.V., Amsterdam, 141–163.
13. Rao, C. R. (1970). Inference on discriminant function coefficients. In Essays in Probability and Statistics (R. C. Bose, ed.). University of North Carolina Press, Chapel Hill, NC, 587–602.
14. Potthoff, R. F. and Roy, S. N. (1964). A generalized multivariate analysis of variance model useful especially for growth curve problems. Biometrika. 51, 313–326.
15. Gleser, L. J. and Olkin, I. (1970). Linear models in multivariate analysis. In Essays in Probability and Statistics (R. C. Bose, ed.). University of North Carolina Press, Chapel Hill, NC, 267–292.
16. Khatri, C. G. (1966). A note on a MANOVA model applied to problems in growth curve. Ann. Inst. Statist. Math. 18, 75–86.
17. Kshirsagar, A. M. (1995). Growth Curves. Marcel Dekker, Inc., New York.
18. Fujikoshi, Y. (1982). A test for additional information in canonical correlation analysis. Ann. Inst. Statist. Math. 34, 137–144.
19. Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In 2nd International Symposium on Information Theory (B. N. Petrov and F. Csáki, eds.). Akadémiai Kiadó, Budapest, Hungary, 267–281.
