Open access peer-reviewed chapter

Bilinear Applications and Tensors

Written By

Rodrigo Garcia Eustaquio

Submitted: 30 October 2019 Reviewed: 19 December 2019 Published: 09 September 2020

DOI: 10.5772/intechopen.90904

From the Edited Volume

Advances on Tensor Analysis and their Applications

Edited by Francisco Bulnes


Abstract

In this chapter, a theoretical approach to the vector space of tensors of order 3 and to the vector space of bilinear applications is presented, in order to establish an isomorphism between these spaces, together with several properties of tensors and bilinear applications. With this isomorphism well defined, we show how to calculate the product between a tensor of second derivatives and a vector, a product used in several numerical methods, such as the Chebyshev-Halley class and others mentioned in the introduction. In addition, concepts of differentiability are presented, allowing the reader a better understanding of second-order derivatives seen as tensors.

Keywords

  • tensor
  • bilinear application
  • isomorphism
  • second derivative
  • inexact tensor-free Chebyshev-Halley class

1. Introduction

Frequently, the discretization of mathematical models demands solving a system of equations, which is generally nonlinear. Such mathematical problems may be written as

$$\text{find } x \in \mathbb{R}^n \text{ such that } F(x) = 0, \tag{1}$$

where $F : \mathbb{R}^n \to \mathbb{R}^n$.

There exist iterative methods for solving (1) that have a cubic convergence rate, for instance, the methods belonging to the Chebyshev-Halley class, which was introduced by Hernández and Gutiérrez in [1]:

$$x_{k+1} = x_k - \left[I + \tfrac{1}{2}\, L(x_k)\left(I - \alpha L(x_k)\right)^{-1}\right] J_F(x_k)^{-1} F(x_k), \tag{2}$$

for all $k \in \mathbb{N}$, where

$$L(x) = J_F(x)^{-1}\, T_F(x)\, J_F(x)^{-1} F(x), \tag{3}$$

and $J_F(x)$ and $T_F(x)$ denote the first and second derivatives of $F$ evaluated at $x$, respectively. The parameter $\alpha$ is a real number, and $I$ is the identity matrix in $\mathbb{R}^{n\times n}$.
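To fix ideas, the following minimal NumPy sketch performs iteration (2)-(3) on a small hypothetical system; the functions F, JF, and TF below are illustrative stand-ins, not taken from the chapter, and the contraction np.einsum('ijk,j->ik', ...) anticipates the contracted mode-2 product defined in Section 2.

```python
import numpy as np

# Hypothetical test problem F: R^2 -> R^2, with root (1, 2).
def F(x):
    return np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])

def JF(x):  # Jacobian of F at x
    return np.array([[2*x[0], 1.0], [1.0, 2*x[1]]])

def TF(x):  # tensor of second derivatives: TF[i] is the Hessian of f_i
    T = np.zeros((2, 2, 2))   # constant here, since F is quadratic
    T[0, 0, 0] = 2.0          # d^2 f_1 / dx_1^2
    T[1, 1, 1] = 2.0          # d^2 f_2 / dx_2^2
    return T

def chebyshev_halley_step(x, alpha):
    """One step of iteration (2)-(3); alpha = 1/2 gives Halley's method."""
    J, Fx = JF(x), F(x)
    s = np.linalg.solve(J, Fx)                  # J_F(x)^{-1} F(x)
    Ts = np.einsum('ijk,j->ik', TF(x), s)       # T_F(x) contracted with s
    L = np.linalg.solve(J, Ts)                  # L(x) of Eq. (3)
    w = np.linalg.solve(np.eye(len(x)) - alpha * L, s)
    return x - (s + 0.5 * (L @ w))              # Eq. (2)

x = np.array([1.0, 1.5])
for _ in range(6):
    x = chebyshev_halley_step(x, alpha=0.5)
print(x, F(x))   # approaches (1, 2), where F vanishes
```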

Discretized versions of the Chebyshev-Halley class have already been considered in [2], in such a way that the tensor of second derivatives of the function $F$ was approximated by bilinear operators. A tensor is a multi-way array, or multidimensional matrix. A generalization of the Chebyshev-Halley class (2) that requires no second-order derivative information but still has a cubic convergence rate, named the inexact tensor-free Chebyshev-Halley class, was introduced by Eustaquio, Ribeiro, and Dumett [3]. Other families of iterative methods with cubic convergence rate are extensively described in Traub's book [4].

Several alternatives exist for computing the product of the tensor of second derivatives of $F$ by vectors [5, 6, 7, 8], and this point needs to be elucidated.

The aim of this chapter is to present concepts and relationships between tensors of order 3 and bilinear applications, in order to relate them to the second derivative of a twice-differentiable application. We will see later that, given vectors $u, v \in \mathbb{R}^n$, the $i$-th row of the matrix $T_F(x)v$ is given by $v^T \nabla^2 f_i(x)$, where $\nabla^2 f_i(x)$ is the Hessian of the $i$-th component of $F$ evaluated at $x$, and the $i$-th component of the vector $T_F(x)vu$ is given by $v^T \nabla^2 f_i(x)\, u$.


2. Tensors

Tensors arise naturally in some applications, such as chemometrics [9], signal processing [10], and others. According to [8], for many applications involving higher-order tensors, the known results of matrix algebra proved insufficient in the twentieth century. There have been several workshops and congresses on the study of tensors, such as:

  • Workshop on Tensor Decomposition at the American Institute of Mathematics, which took place in Palo Alto, California, in 2004, organized by Golub, Kolda, Nagy, and Van Loan. Details in [11];

  • Workshop on Tensor Decompositions and Applications, 2005, organized by Comon and De Lathauwer. Details in [12]; and

  • Minisymposium on Numerical Multilinear Algebra: A New Beginning, 2007, organized by Golub, Comon, De Lathauwer, and Lim, which took place in Zurich.

Readers interested in multilinear singular value decomposition, eigenvalues, and eigenvectors may consult [5, 6, 7, 8, 13, 14] as references. In this text, we focus our attention on tensors of order 3.

Let $I_1$, $I_2$, and $I_3$ be three positive integers. A tensor $T$ of order 3 is a three-way array whose elements $t_{i_1 i_2 i_3}$ are indexed by $i_1 = 1, \ldots, I_1$, $i_2 = 1, \ldots, I_2$, and $i_3 = 1, \ldots, I_3$; the $n$-th dimension of the tensor is denoted by $I_n$, for $n = 1, 2, 3$. For example, the first, second, and third dimensions of a tensor $T \in \mathbb{R}^{2\times4\times3}$ are 2, 4, and 3, respectively.

Obviously, tensors are generalizations of matrices. A matrix can be viewed as a tensor of order 2, while a vector can be viewed as a tensor of order 1.

From an algebraic point of view, a tensor $T$ of order 3 is an element of the vector space $\mathbb{R}^{I_1\times I_2\times I_3}$, whereas from a geometric point of view, a tensor $T$ of order 3 can be seen as a parallelepiped [15], with $I_1$ rows, $I_2$ columns, and $I_3$ tubes. Figure 1 illustrates a tensor $T \in \mathbb{R}^{2\times4\times3}$.

Figure 1.

A tensor $T \in \mathbb{R}^{2\times4\times3}$.

In linear algebra, it is common to view a matrix through its columns. If $A \in \mathbb{R}^{m\times n}$, then $A$ can be viewed as $A = [a_1 \;\; \cdots \;\; a_n]$, where $a_j \in \mathbb{R}^m$ denotes the $j$-th column of the matrix $A$. In the case of tensors of order 3, we can view them through fibers and slices. Hence follow the definitions.

Definition 1.1. A tensor fiber of a tensor of order 3 is a one-dimensional fragment obtained by fixing only two indices.

Definition 1.2. A tensor slice of a tensor of order 3 is a two-dimensional section (fragment), obtained by fixing only one index.

In a tensor of order 3, a fiber is a vector and a slice is a matrix. There are three types of fibers:

  • column fibers (or mode-1 fibers), where the indices $i_2$ and $i_3$ are fixed;

  • row fibers (or mode-2 fibers), where the indices $i_1$ and $i_3$ are fixed; and

  • tube fibers (or mode-3 fibers), where the indices $i_1$ and $i_2$ are fixed.

    We also have three types of slices:

  • horizontal slices, where the index $i_1$ is fixed;

  • lateral slices, where the index $i_2$ is fixed; and

  • frontal slices, where the index $i_3$ is fixed.

For example, consider a tensor $T \in \mathbb{R}^{2\times4\times3}$ with $i = 1, 2$, $j = 1, 2, 3, 4$, and $k = 1, 2, 3$. The $i$-th horizontal slice, denoted by $T_{i::}$, is the matrix

$$T_{i::} = \begin{bmatrix} t_{i11} & t_{i12} & t_{i13} \\ t_{i21} & t_{i22} & t_{i23} \\ t_{i31} & t_{i32} & t_{i33} \\ t_{i41} & t_{i42} & t_{i43} \end{bmatrix},$$

the $j$-th lateral slice, denoted by $T_{:j:}$, is the matrix

$$T_{:j:} = \begin{bmatrix} t_{1j1} & t_{1j2} & t_{1j3} \\ t_{2j1} & t_{2j2} & t_{2j3} \end{bmatrix},$$

and the $k$-th frontal slice, denoted by $T_{::k}$, is the matrix

$$T_{::k} = \begin{bmatrix} t_{11k} & t_{12k} & t_{13k} & t_{14k} \\ t_{21k} & t_{22k} & t_{23k} & t_{24k} \end{bmatrix}. \tag{4}$$
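In array terms, these fibers and slices are plain index restrictions. A minimal NumPy sketch (the array and its entries are arbitrary; Python indices are 0-based, so T[0, :, :] is the slice $T_{1::}$):

```python
import numpy as np

T = np.arange(24, dtype=float).reshape(2, 4, 3)  # a tensor in R^{2x4x3}

# Fibers: fix two indices, vary the third.
col_fiber  = T[:, 1, 2]   # mode-1 (column) fiber: i_2 and i_3 fixed
row_fiber  = T[0, :, 2]   # mode-2 (row) fiber:    i_1 and i_3 fixed
tube_fiber = T[0, 1, :]   # mode-3 (tube) fiber:   i_1 and i_2 fixed

# Slices: fix one index, vary the other two.
horizontal = T[0, :, :]   # T_{1::}, a 4x3 matrix
lateral    = T[:, 1, :]   # T_{:2:}, a 2x3 matrix
frontal    = T[:, :, 2]   # T_{::3}, a 2x4 matrix
print(horizontal.shape, lateral.shape, frontal.shape)  # (4, 3) (2, 3) (2, 4)
```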

Figures 2 and 3 illustrate the three types of fibers and slices, respectively, of a tensor $T \in \mathbb{R}^{2\times4\times3}$.

Figure 2.

Column, row, and tube fibers, respectively.

Figure 3.

Horizontal, lateral, and frontal slices, respectively.

2.1 Tensor operations

The first issue to consider in this subsection is how to calculate products between tensors and matrices. It is well known from elementary algebra that, given matrices $A \in \mathbb{R}^{m\times n}$ and $B \in \mathbb{R}^{R\times m}$, it is possible to calculate the product $BA$, because the first dimension (number of rows) of the matrix $A$ agrees with the second dimension (number of columns) of the matrix $B$, and each element of the product is the result of an inner product between a row of $B$ and a column of $A$.

The product between tensors of order 3 and matrices or vectors is a bit more involved. To obtain an element of the product between a tensor and a matrix, it is necessary to specify which dimension of the tensor is chosen to agree with the number of columns of the matrix, and each resulting element is the inner product of a mode-n fiber (column, row, or tube) with a column of the matrix. We will use the solution adopted in [8], which defines the mode-n product between tensors and matrices, and the solution adopted in [5], which defines the contracted mode-n product between tensors and vectors.

The mode-n product is useful when one wants to compute the singular value decomposition of a higher-order tensor while avoiding the concept of the generalized transpose. We refer to [5, 7, 8, 13] for details.

Definition 1.3 (mode-n tensor-matrix product). The mode-1 product between a tensor $T \in \mathbb{R}^{m\times n\times p}$ and a matrix $A \in \mathbb{R}^{R\times m}$ is the tensor

$$Y = T \times_1 A \in \mathbb{R}^{R\times n\times p},$$

whose elements are defined by

$$y_{rjk} = \sum_{i=1}^{m} t_{ijk}\, a_{ri}, \quad \text{where } r = 1, \ldots, R, \;\; j = 1, \ldots, n, \;\; \text{and } k = 1, \ldots, p.$$

The mode-2 product between a tensor $T \in \mathbb{R}^{m\times n\times p}$ and a matrix $A \in \mathbb{R}^{R\times n}$ is the tensor

$$Y = T \times_2 A \in \mathbb{R}^{m\times R\times p},$$

whose elements are defined by

$$y_{irk} = \sum_{j=1}^{n} t_{ijk}\, a_{rj}, \quad \text{where } i = 1, \ldots, m, \;\; r = 1, \ldots, R, \;\; \text{and } k = 1, \ldots, p.$$

The mode-3 product between a tensor $T \in \mathbb{R}^{m\times n\times p}$ and a matrix $A \in \mathbb{R}^{R\times p}$ is the tensor

$$Y = T \times_3 A \in \mathbb{R}^{m\times n\times R},$$

whose elements are defined by

$$y_{ijr} = \sum_{k=1}^{p} t_{ijk}\, a_{rk}, \quad \text{where } i = 1, \ldots, m, \;\; j = 1, \ldots, n, \;\; \text{and } r = 1, \ldots, R.$$
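Each mode-n product is a single index contraction, so it is straightforward to express in NumPy; a minimal sketch using np.einsum (the helper names mode1, mode2, mode3 are ours):

```python
import numpy as np

def mode1(T, A):  # Y = T x_1 A : contract the first index of T with the rows of A
    return np.einsum('ri,ijk->rjk', A, T)

def mode2(T, A):  # Y = T x_2 A : contract the second index of T
    return np.einsum('rj,ijk->irk', A, T)

def mode3(T, A):  # Y = T x_3 A : contract the third index of T
    return np.einsum('rk,ijk->ijr', A, T)

T = np.random.rand(2, 4, 3)
A = np.random.rand(5, 4)
print(mode2(T, A).shape)  # (2, 5, 3), as in Definition 1.3
```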

To understand the mode-n product in matrix terms, consider matrices $A \in \mathbb{R}^{m\times n}$, $B \in \mathbb{R}^{k\times m}$, and $C \in \mathbb{R}^{q\times n}$. By Definition 1.3, we have

$$A \times_1 B = BA \in \mathbb{R}^{k\times n} \quad \text{and} \quad A \times_2 C = AC^T \in \mathbb{R}^{m\times q}.$$

Thus, the singular value decomposition $A = U\Sigma V^T$ of the matrix $A$ can be written as

$$U\Sigma V^T = \Sigma \times_1 U \times_2 V = \Sigma \times_2 V \times_1 U.$$
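This identity is easy to verify numerically. A short sketch, using the matrix special cases $X \times_1 B = BX$ and $X \times_2 C = XC^T$ noted above:

```python
import numpy as np

def mat_mode1(X, B):  # X x_1 B = B X
    return B @ X

def mat_mode2(X, C):  # X x_2 C = X C^T
    return X @ C.T

A = np.random.rand(4, 3)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Sigma, V = np.diag(s), Vt.T

# Sigma x_1 U x_2 V = Sigma x_2 V x_1 U = U Sigma V^T = A
print(np.allclose(mat_mode2(mat_mode1(Sigma, U), V), A))  # True
print(np.allclose(mat_mode1(mat_mode2(Sigma, V), U), A))  # True
```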

The mode-n product satisfies the following property [8]:

Property 1. Let $T$ be a tensor of order 3 and let $A$ and $B$ be matrices of suitable sizes. For all $r, s = 1, 2, 3$, we have

$$\left(T \times_r A\right) \times_s B = \left(T \times_s B\right) \times_r A = T \times_r A \times_s B \quad \text{for } r \neq s, \tag{5}$$

$$\left(T \times_r A\right) \times_r B = T \times_r (BA). \tag{6}$$

The idea of Bader and Kolda [5] for the product between a tensor and a vector is to take the inner product of each mode-n fiber (column, row, or tube) with the vector. It is not advantageous to treat an $m$-dimensional vector as an $m \times 1$ matrix. For example, if we consider a tensor $T \in \mathbb{R}^{m\times n\times p}$ and a vector $v \in \mathbb{R}^{m\times1}$, with $m, n, p \neq 1$, then by Definition 1.3 the product between $T$ and $v$ is not well defined, although it is possible to calculate $T \times_1 v^T$.

Definition 1.4 (contracted mode-n product between tensors and vectors). The contracted mode-1 product between a tensor $T \in \mathbb{R}^{m\times n\times p}$ and a vector $v \in \mathbb{R}^m$ is the matrix

$$A = T \bar{\times}_1 v \in \mathbb{R}^{n\times p},$$

whose elements are defined by

$$a_{jk} = \sum_{i=1}^{m} t_{ijk}\, v_i, \quad \text{where } j = 1, \ldots, n \text{ and } k = 1, \ldots, p,$$

where $v_i$ is the $i$-th component of the vector $v$.

The contracted mode-2 product between a tensor $T \in \mathbb{R}^{m\times n\times p}$ and a vector $v \in \mathbb{R}^n$ is the matrix

$$A = T \bar{\times}_2 v \in \mathbb{R}^{m\times p},$$

whose elements are defined by

$$a_{ik} = \sum_{j=1}^{n} t_{ijk}\, v_j, \quad \text{where } i = 1, \ldots, m \text{ and } k = 1, \ldots, p,$$

where $v_j$ is the $j$-th component of the vector $v$.

The contracted mode-3 product between a tensor $T \in \mathbb{R}^{m\times n\times p}$ and a vector $v \in \mathbb{R}^p$ is the matrix

$$A = T \bar{\times}_3 v \in \mathbb{R}^{m\times n},$$

whose elements are defined by

$$a_{ij} = \sum_{k=1}^{p} t_{ijk}\, v_k, \quad \text{where } i = 1, \ldots, m \text{ and } j = 1, \ldots, n,$$

where $v_k$ is the $k$-th component of the vector $v$.
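A NumPy sketch of the three contracted products of Definition 1.4; each is a single contraction of one index of the tensor against the vector (the helper names cmode1, cmode2, cmode3 are ours):

```python
import numpy as np

def cmode1(T, v):  # T xbar_1 v in R^{n x p}
    return np.einsum('ijk,i->jk', T, v)

def cmode2(T, v):  # T xbar_2 v in R^{m x p}
    return np.einsum('ijk,j->ik', T, v)

def cmode3(T, v):  # T xbar_3 v in R^{m x n}
    return np.einsum('ijk,k->ij', T, v)

T = np.random.rand(2, 4, 3)
print(cmode1(T, np.random.rand(2)).shape)  # (4, 3)
print(cmode2(T, np.random.rand(4)).shape)  # (2, 3)
print(cmode3(T, np.random.rand(3)).shape)  # (2, 4)
```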

Some care is needed when calculating products between matrices and vectors under Definitions 1.3 and 1.4. For example, note that if $A \in \mathbb{R}^{m\times n}$, $u \in \mathbb{R}^n$, and $v \in \mathbb{R}^m$, then $A \bar{\times}_2 u$ and $A \times_2 u^T$ have the same elements, but

$$A \bar{\times}_2 u \neq A \times_2 u^T,$$

because $A \bar{\times}_2 u \in \mathbb{R}^m$ is a vector, while $A \times_2 u^T \in \mathbb{R}^{m\times1}$ is a matrix with a single column. Note that, in relation to the matrix products of elementary algebra, we have

$$Au = A \bar{\times}_2 u, \tag{7}$$

$$v^T A = A \times_1 v^T \neq A \bar{\times}_1 v. \tag{8}$$
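In code, the distinctions in (7) and (8) are purely distinctions of shape; a quick NumPy illustration:

```python
import numpy as np

A = np.random.rand(3, 4)
u, v = np.random.rand(4), np.random.rand(3)

Au      = np.einsum('ij,j->i', A, u)            # A xbar_2 u, shape (3,): equals A @ u
vTA_mat = np.einsum('ri,ij->rj', v[None, :], A) # A x_1 v^T, shape (1, 4): a matrix
vTA_vec = np.einsum('ij,i->j', A, v)            # A xbar_1 v, shape (4,): a vector
print(Au.shape, vTA_mat.shape, vTA_vec.shape)   # (3,) (1, 4) (4,)
print(np.allclose(vTA_mat.ravel(), vTA_vec))    # True: same elements, different objects
```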

In particular, given a tensor $T \in \mathbb{R}^{n\times m\times m}$ and a vector $v \in \mathbb{R}^m$, Definition 1.4 together with (7) gives $T \bar{\times}_2 v \in \mathbb{R}^{n\times m}$ and

$$\left(T \bar{\times}_2 v\right) \bar{\times}_2 v = \left(T \bar{\times}_2 v\right) v \in \mathbb{R}^n.$$

The contracted mode-n product satisfies the following property [5]:

Property 2. Given a tensor $T$ of order 3 and vectors $u$ and $v$ of suitable sizes, we have, for all $r = 1, 2, 3$ and $s = 2, 3$,

$$\left(T \bar{\times}_r u\right) \bar{\times}_{s-1} v = \left(T \bar{\times}_s v\right) \bar{\times}_r u \quad \text{for } r < s.$$

For example, consider a tensor $T \in \mathbb{R}^{2\times4\times3}$, and denote the $k$-th column and the $q$-th row of a matrix $A$ by $\mathrm{col}_k(A)$ and $\mathrm{row}_q(A)$, respectively. Note that if:

  1. $x \in \mathbb{R}^2$, then $T \bar{\times}_1 x \in \mathbb{R}^{4\times3}$ and

$$\mathrm{col}_k(T \bar{\times}_1 x) = \begin{bmatrix} a_{1k} \\ a_{2k} \\ a_{3k} \\ a_{4k} \end{bmatrix} = \begin{bmatrix} t_{11k} & t_{21k} \\ t_{12k} & t_{22k} \\ t_{13k} & t_{23k} \\ t_{14k} & t_{24k} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = T_{::k}^T\, x \quad\text{and}\quad \mathrm{row}_j(T \bar{\times}_1 x) = \begin{bmatrix} a_{j1} & a_{j2} & a_{j3} \end{bmatrix} = \begin{bmatrix} x_1 & x_2 \end{bmatrix}\begin{bmatrix} t_{1j1} & t_{1j2} & t_{1j3} \\ t_{2j1} & t_{2j2} & t_{2j3} \end{bmatrix} = x^T T_{:j:};$$

  2. $x \in \mathbb{R}^4$, then $T \bar{\times}_2 x \in \mathbb{R}^{2\times3}$ and

$$\mathrm{col}_k(T \bar{\times}_2 x) = \begin{bmatrix} a_{1k} \\ a_{2k} \end{bmatrix} = \begin{bmatrix} t_{11k} & t_{12k} & t_{13k} & t_{14k} \\ t_{21k} & t_{22k} & t_{23k} & t_{24k} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = T_{::k}\, x \quad\text{and}\quad \mathrm{row}_i(T \bar{\times}_2 x) = \begin{bmatrix} a_{i1} & a_{i2} & a_{i3} \end{bmatrix} = \begin{bmatrix} x_1 & x_2 & x_3 & x_4 \end{bmatrix}\begin{bmatrix} t_{i11} & t_{i12} & t_{i13} \\ t_{i21} & t_{i22} & t_{i23} \\ t_{i31} & t_{i32} & t_{i33} \\ t_{i41} & t_{i42} & t_{i43} \end{bmatrix} = x^T T_{i::};$$

  3. $x \in \mathbb{R}^3$, then $T \bar{\times}_3 x \in \mathbb{R}^{2\times4}$ and

$$\mathrm{col}_j(T \bar{\times}_3 x) = \begin{bmatrix} a_{1j} \\ a_{2j} \end{bmatrix} = \begin{bmatrix} t_{1j1} & t_{1j2} & t_{1j3} \\ t_{2j1} & t_{2j2} & t_{2j3} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = T_{:j:}\, x \quad\text{and}\quad \mathrm{row}_i(T \bar{\times}_3 x) = \begin{bmatrix} a_{i1} & a_{i2} & a_{i3} & a_{i4} \end{bmatrix} = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}\begin{bmatrix} t_{i11} & t_{i21} & t_{i31} & t_{i41} \\ t_{i12} & t_{i22} & t_{i32} & t_{i42} \\ t_{i13} & t_{i23} & t_{i33} & t_{i43} \end{bmatrix} = x^T T_{i::}^T.$$

This example can easily be generalized to arbitrary dimensions. In particular, for a tensor $T \in \mathbb{R}^{m\times n\times n}$ and a vector $x \in \mathbb{R}^n$, we have

$$\mathrm{row}_i(T \bar{\times}_2 x) = x^T T_{i::}, \tag{9}$$

$$\mathrm{row}_i(T \bar{\times}_3 x) = x^T T_{i::}^T. \tag{10}$$

Lemma 1.5. Let $T \in \mathbb{R}^{n\times n\times n}$ be a tensor. If $T_{i::}$ is a symmetric matrix for all $i = 1, \ldots, n$, then

$$\left(T \bar{\times}_2 u\right) v = \left(T \bar{\times}_2 v\right) u$$

for all $u, v \in \mathbb{R}^n$.

Proof. By Property 2 (with $r = 2$ and $s = 3$) together with (7), it follows that $(T \bar{\times}_2 u)v = (T \bar{\times}_3 v)u$. By (9), (10), and the symmetry of $T_{i::}$, we have $T \bar{\times}_3 v = T \bar{\times}_2 v$, and the result follows.
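A quick numerical check of Lemma 1.5 (a sketch; the tensor is symmetrized over its last two indices so that every horizontal slice is symmetric):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
S = rng.standard_normal((n, n, n))
T = 0.5 * (S + S.transpose(0, 2, 1))  # every horizontal slice T[i] is now symmetric

u = rng.standard_normal(n)
v = rng.standard_normal(n)
left  = np.einsum('ijk,j->ik', T, u) @ v  # (T xbar_2 u) v
right = np.einsum('ijk,j->ik', T, v) @ u  # (T xbar_2 v) u
print(np.allclose(left, right))  # True
```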


3. Space of bilinear applications

In this section, we define bilinear applications on finite-dimensional vector spaces, in order to relate them to the second derivative of a twice-differentiable application, as well as to a tensor of order 3.

Definition 1.6. Let $U$, $V$, and $W$ be vector spaces. An application $f : U \times V \to W$ is a bilinear application if:

  1. $f(\lambda u_1 + u_2, v) = \lambda f(u_1, v) + f(u_2, v)$ for all $\lambda \in \mathbb{R}$, $u_1, u_2 \in U$, and $v \in V$; and

  2. $f(u, \lambda v_1 + v_2) = \lambda f(u, v_1) + f(u, v_2)$ for all $\lambda \in \mathbb{R}$, $u \in U$, and $v_1, v_2 \in V$.

In other words, an application $f : U \times V \to W$ is bilinear if it is linear in each of the variables when the other is held fixed. We denote by $B(U \times V; W)$ the set of all bilinear applications from $U \times V$ into $W$. In particular, if $U = V$ and $W = \mathbb{R}$ in Definition 1.6, then $f : U \times U \to \mathbb{R}$ is a bilinear form, an object familiar, for example, from quadratic forms.

A simple example of a bilinear application is the function $f : U \times V \to \mathbb{R}$ defined by

$$f(u, v) = h(u)\, g(v), \tag{11}$$

with $h \in U^*$ and $g \in V^*$, where $U^*$ denotes the dual space of $U$. Indeed, for all $\lambda \in \mathbb{R}$, $u_1, u_2 \in U$, and $v \in V$, we have

$$f(\lambda u_1 + u_2, v) = h(\lambda u_1 + u_2)\, g(v) = \left[\lambda h(u_1) + h(u_2)\right] g(v) = \lambda f(u_1, v) + f(u_2, v).$$

Similarly, it is easy to see that $f(u, \lambda v_1 + v_2) = \lambda f(u, v_1) + f(u, v_2)$ for all $\lambda \in \mathbb{R}$, $u \in U$, and $v_1, v_2 \in V$.

The next theorem ensures that a bilinear application $f : U \times V \to W$ is well defined once the images under $f$ of the basis elements of $U$ and $V$ are known.

Theorem 1.7. Let $U$, $V$, and $W$ be vector spaces; let $\{u_1, \ldots, u_m\}$ and $\{v_1, \ldots, v_n\}$ be bases of $U$ and $V$, respectively; and let $\{w_{ij} : i = 1, \ldots, m \text{ and } j = 1, \ldots, n\}$ be a subset of $W$. Then there exists a unique bilinear application $f : U \times V \to W$ such that $f(u_i, v_j) = w_{ij}$.

Proof. Let $u = \sum_{i=1}^{m} \alpha_i u_i$ and $v = \sum_{j=1}^{n} \beta_j v_j$ be arbitrary elements of $U$ and $V$, respectively. We define an application $f : U \times V \to W$ by

$$f(u, v) = \sum_{i=1}^{m} \sum_{j=1}^{n} \alpha_i \beta_j\, w_{ij}.$$

It is easy to see that $f$ is a bilinear application and that $f(u_i, v_j) = w_{ij}$. Such an application is unique because, if $g$ is another bilinear application satisfying $g(u_i, v_j) = w_{ij}$, then

$$g(u, v) = g\!\left(\sum_{i=1}^{m} \alpha_i u_i,\; \sum_{j=1}^{n} \beta_j v_j\right) = \sum_{i=1}^{m} \sum_{j=1}^{n} \alpha_i \beta_j\, g(u_i, v_j) \tag{12}$$

$$= \sum_{i=1}^{m} \sum_{j=1}^{n} \alpha_i \beta_j\, w_{ij} = f(u, v). \tag{13}$$

Therefore $g = f$.

The following theorem guarantees the isomorphism between the space of bilinear applications and the space of tensors of order 3.

Theorem 1.8. Let $U$, $V$, and $W$ be vector spaces with dimensions $n$, $p$, and $m$, respectively. Then the space $B(U \times V; W)$ has dimension $mnp$.

Proof. The idea of the proof is to exhibit a basis for the space $B(U \times V; W)$. For this, let $\{w_1, \ldots, w_m\}$, $\{u_1, \ldots, u_n\}$, and $\{v_1, \ldots, v_p\}$ be bases of $W$, $U$, and $V$, respectively. For each triple $(i, j, k)$, with $i = 1, \ldots, m$, $j = 1, \ldots, n$, and $k = 1, \ldots, p$, we define a bilinear application $f_{ijk} : U \times V \to W$ such that

$$f_{ijk}(u_r, v_s) = \begin{cases} w_i & \text{if } r = j \text{ and } s = k, \\ 0 & \text{if } r \neq j \text{ or } s \neq k. \end{cases} \tag{14}$$

Theorem 1.7 ensures the existence of the $f_{ijk}$. We will then show that the set

$$\mathcal{A} = \{f_{ijk} : i = 1, \ldots, m, \; j = 1, \ldots, n, \text{ and } k = 1, \ldots, p\}$$

is a basis of the space $B(U \times V; W)$. Let $f \in B(U \times V; W)$. We note in passing that, since $f(u_r, v_s) \in W$, there exist scalars $a_{irs}$ such that

$$f(u_r, v_s) = \sum_{i=1}^{m} a_{irs}\, w_i \tag{15}$$

for all $r = 1, \ldots, n$ and $s = 1, \ldots, p$. Consider the bilinear application

$$g = \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{p} a_{ijk}\, f_{ijk}.$$

Our goal is to show that $g = f$. In particular, we have

$$g(u_r, v_s) = \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{p} a_{ijk}\, f_{ijk}(u_r, v_s) = \sum_{i=1}^{m} a_{irs}\, w_i = f(u_r, v_s)$$

for all $r = 1, \ldots, n$ and $s = 1, \ldots, p$. Therefore $g = f$. The set $\mathcal{A}$ is linearly independent because, if

$$\sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{p} a_{ijk}\, f_{ijk} = 0,$$

then

$$0 = \sum_{k=1}^{p} \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ijk}\, f_{ijk}(u_r, v_s) = \sum_{i=1}^{m} a_{irs}\, w_i.$$

Since $\{w_1, \ldots, w_m\}$ is a basis of $W$, it follows that $a_{irs} = 0$ for all $i = 1, \ldots, m$, $r = 1, \ldots, n$, and $s = 1, \ldots, p$.

In particular, if the dimensions of the vector spaces $U$ and $V$ are $m$ and $n$, respectively, then the vector space $B(U \times V; \mathbb{R})$ has dimension $mn$. Now, since two vector spaces of the same finite dimension are isomorphic [16], there exists an $m \times n$ matrix associated with each $f \in B(U \times V; \mathbb{R})$. Considering bases $B = \{u_1, \ldots, u_m\}$ and $C = \{v_1, \ldots, v_n\}$ of $U$ and $V$, respectively, writing $u = \sum_{i=1}^{m} \alpha_i u_i$ and $v = \sum_{j=1}^{n} \beta_j v_j$, and setting $f(u_i, v_j) = a_{ij}$ for all $i = 1, \ldots, m$ and $j = 1, \ldots, n$, we have

$$f(u, v) = \sum_{i=1}^{m} \sum_{j=1}^{n} \alpha_i\, a_{ij}\, \beta_j,$$

which in matrix form is $f(u, v) = [u]_B^T A\, [v]_C$, where $A = (a_{ij})$ and $[v]_C$ denotes the coordinate vector of $v$ in the basis $C$. Hence follows the next definition:

Definition 1.9. Let $U$ and $V$ be vector spaces of finite dimension with ordered bases $B = \{u_1, \ldots, u_m\} \subset U$ and $C = \{v_1, \ldots, v_n\} \subset V$. We define, for each $f \in B(U \times V; \mathbb{R})$, the matrix $A = (a_{ij}) \in \mathbb{R}^{m\times n}$ of $f$ relative to the ordered bases $B$ and $C$, whose elements are given by $a_{ij} = f(u_i, v_j)$ with $i = 1, \ldots, m$ and $j = 1, \ldots, n$.

Consider now the space $B(\mathbb{R}^m \times \mathbb{R}^n; \mathbb{R}^p)$ and the canonical bases $\{e_1, \ldots, e_m\}$, $\{\bar{e}_1, \ldots, \bar{e}_n\}$, and $\{\hat{e}_1, \ldots, \hat{e}_p\}$ of $\mathbb{R}^m$, $\mathbb{R}^n$, and $\mathbb{R}^p$, respectively. Consider $f \in B(\mathbb{R}^m \times \mathbb{R}^n; \mathbb{R}^p)$. For all $u \in \mathbb{R}^m$ and $v \in \mathbb{R}^n$, we have

$$f(u, v) = \sum_{j=1}^{m} \sum_{k=1}^{n} u_j v_k\, f(e_j, \bar{e}_k),$$

where $u_j$ and $v_k$ are the components of $u$ and $v$ in the canonical bases of $\mathbb{R}^m$ and $\mathbb{R}^n$, respectively. Denote the $i$-th component of $f$ by $f_i$, and note that $f_i \in B(\mathbb{R}^m \times \mathbb{R}^n; \mathbb{R})$. So, for each $i = 1, \ldots, p$, we have

$$f_i(u, v) = \sum_{j=1}^{m} \sum_{k=1}^{n} u_j v_k\, f_i(e_j, \bar{e}_k).$$

By Definition 1.9, the matrix of $f_i$ relative to the canonical bases is the matrix

$$A_i = (t_{ijk}) \in \mathbb{R}^{m\times n},$$

where $t_{ijk} = f_i(e_j, \bar{e}_k)$. So we can write

$$f_i(u, v) = u^T A_i\, v.$$

In general, we can arrange the $p$ matrices of size $m \times n$ into a tensor $T \in \mathbb{R}^{p\times m\times n}$; this means that the $p$ matrices can be seen as the horizontal slices of the tensor $T$. We note in passing that we can write $f(u, v)$ as a product between the tensor $T$ and the vectors $u$ and $v$, that is,

$$f(u, v) = \begin{bmatrix} u^T A_1 v \\ u^T A_2 v \\ \vdots \\ u^T A_p v \end{bmatrix} = \left(T \bar{\times}_2 u\right) v. \tag{16}$$

Thus, we can generalize Definition 1.9 as follows:

Definition 1.10. Let $U$ and $V$ be finite-dimensional vector spaces. For fixed bases $B = \{u_1, \ldots, u_m\}$ and $C = \{v_1, \ldots, v_n\}$ of $U$ and $V$, respectively, we define, for each $f \in B(U \times V; \mathbb{R}^p)$, the tensor $T = (t_{ijk}) \in \mathbb{R}^{p\times m\times n}$ of $f$ relative to the ordered bases $B$ and $C$, whose elements are given by $t_{ijk} = f_i(u_j, v_k)$, where $f_i$ is the $i$-th component of $f$, that is, $f_i \in B(U \times V; \mathbb{R})$, with $i = 1, \ldots, p$, $j = 1, \ldots, m$, and $k = 1, \ldots, n$.
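In coordinates, Definition 1.10 and Eq. (16) say that evaluating a bilinear application costs one contraction per argument. A minimal sketch for a hypothetical $f \in B(\mathbb{R}^4 \times \mathbb{R}^2; \mathbb{R}^3)$ given by a randomly chosen tensor:

```python
import numpy as np

rng = np.random.default_rng(1)
p, m, n = 3, 4, 2
T = rng.standard_normal((p, m, n))  # T[i] plays the role of the matrix A_i

def f(u, v):
    """Evaluate the bilinear application via Eq. (16): f(u, v) = (T xbar_2 u) v."""
    return np.einsum('ijk,j->ik', T, u) @ v

u, v = rng.standard_normal(m), rng.standard_normal(n)
print(np.allclose(f(u, v), [u @ T[i] @ v for i in range(p)]))  # f_i(u,v) = u^T A_i v

# Bilinearity in the first argument: f(a*u1 + u2, v) = a f(u1, v) + f(u2, v)
a, u2 = 2.5, rng.standard_normal(m)
print(np.allclose(f(a*u + u2, v), a*f(u, v) + f(u2, v)))  # True
```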


4. Differentiability

Let $U$ be an open subset of $\mathbb{R}^m$, let $F : U \subset \mathbb{R}^m \to \mathbb{R}^n$ be an application differentiable throughout $U$, and let $a \in U$. Denote by $L(\mathbb{R}^m; \mathbb{R}^n)$ the set of all linear applications from $\mathbb{R}^m$ into $\mathbb{R}^n$. When $F' : U \subset \mathbb{R}^m \to L(\mathbb{R}^m; \mathbb{R}^n)$ is differentiable at $a \in U$, we say that the application $F$ is twice differentiable at $a \in U$, and then the linear transformation $F''(a) \in L(\mathbb{R}^m, L(\mathbb{R}^m; \mathbb{R}^n))$ is the second derivative of $F$ at $a \in U$.

The norm of $F''(a)$ is defined naturally. For any $h \in \mathbb{R}^m$, it follows that

$$\|F''(a)h\| = \sup_{\|k\|=1} \|F''(a)hk\|, \quad \text{with } k \in \mathbb{R}^m,$$

and then

$$\|F''(a)\| = \sup_{\|h\|=1} \|F''(a)h\| = \sup_{\|h\|=1} \sup_{\|k\|=1} \|F''(a)hk\|.$$

An important observation with respect to Theorem 1.8 is that the spaces $L(\mathbb{R}^m, L(\mathbb{R}^m; \mathbb{R}^n))$ and $B(\mathbb{R}^m \times \mathbb{R}^m; \mathbb{R}^n)$ are isomorphic. This means that $F''(a)$ can be regarded as a bilinear application belonging to the space $B(\mathbb{R}^m \times \mathbb{R}^m; \mathbb{R}^n)$. Such an isomorphism can be found in classical analysis books [17, 18]. On the other hand, by the same theorem, the space of bilinear applications $B(\mathbb{R}^m \times \mathbb{R}^m; \mathbb{R}^n)$ and the space of tensors $\mathbb{R}^{n\times m\times m}$ are also isomorphic.

In many practical applications, such as algorithm implementations, the second derivative $F''(a)$ may be implemented as a tensor belonging to the space $\mathbb{R}^{n\times m\times m}$. The question now is how the elements of this tensor are formed. For this, consider an application $A : \mathbb{R} \to \mathbb{R}^{n\times m}$ and $\alpha \in \mathbb{R}$, so that $A(\alpha)$ is a matrix with $n$ rows and $m$ columns whose elements are denoted by $a_{ij}(\alpha)$, where the $a_{ij}$ are the component functions of $A$, with $i = 1, \ldots, n$ and $j = 1, \ldots, m$. If $a_{ij} : \mathbb{R} \to \mathbb{R}$ is differentiable at $\alpha$ for all $i = 1, \ldots, n$ and $j = 1, \ldots, m$, the derivative of $A$ at $\alpha$ is the matrix

$$A'(\alpha) = \left(a_{ij}'(\alpha)\right) \in \mathbb{R}^{n\times m}. \tag{17}$$

The definition (17) of the derivative of $A$ at $\alpha$ is classical. We refer to [19] for details.

To generalize (17), consider now an application $A : U \subset \mathbb{R}^p \to \mathbb{R}^{n\times m}$ with component functions $a_{ij} : \mathbb{R}^p \to \mathbb{R}$, $i = 1, \ldots, n$ and $j = 1, \ldots, m$. When $a_{ij}$ is differentiable at $u \in U$ for all $i = 1, \ldots, n$ and $j = 1, \ldots, m$, we define the derivative of $A$ at $u$ as the tensor

$$A'(u) = \left(\nabla a_{ij}(u)\right) \in \mathbb{R}^{n\times m\times p}. \tag{18}$$

Note that (18) is indeed a generalization of (17). With $i$ and $j$ fixed, $\nabla a_{ij}(u)$ is a tube fiber of the tensor $A'(u)$, whose elements are

$$\left[A'(u)\right]_{ijk} = \frac{\partial a_{ij}}{\partial x_k}(u) \tag{19}$$

for all $k = 1, \ldots, p$.
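A forward-difference sketch of definition (18): the derivative of a matrix-valued map $A : \mathbb{R}^p \to \mathbb{R}^{n\times m}$ is approximated by a tensor in $\mathbb{R}^{n\times m\times p}$ whose tube fibers T[i, j, :] approximate the gradients $\nabla a_{ij}(u)$. The map $A$ below is a hypothetical example, not taken from the chapter:

```python
import numpy as np

def A(u):
    """Hypothetical map A: R^2 -> R^{2x3}, used only for illustration."""
    return np.array([[u[0]*u[1],    u[0]**2,      u[1]],
                     [np.sin(u[0]), u[0] + u[1],  u[1]**2]])

def dA(u, h=1e-6):
    """Forward-difference approximation of A'(u) in R^{n x m x p}, as in Eq. (18)."""
    n, m = A(u).shape
    p = len(u)
    T = np.zeros((n, m, p))
    for k in range(p):
        e = np.zeros(p)
        e[k] = h
        T[:, :, k] = (A(u + e) - A(u)) / h  # partial derivatives w.r.t. x_k
    return T

u = np.array([0.5, 1.5])
T = dA(u)
# The tube fiber T[0, 0, :] approximates the gradient of a_11(u) = u_1 u_2,
# which is (u_2, u_1) = (1.5, 0.5).
print(T[0, 0, :])
```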

For example, consider an application $F : U \subset \mathbb{R}^2 \to \mathbb{R}^3$ twice differentiable at $a \in U$, where $U$ is an open set. The Jacobian matrix of $F$ at $a$ is given by

$$J_F(a) = \begin{bmatrix} \nabla f_1(a)^T \\ \nabla f_2(a)^T \\ \nabla f_3(a)^T \end{bmatrix} = \begin{bmatrix} \frac{\partial f_1}{\partial x_1}(a) & \frac{\partial f_1}{\partial x_2}(a) \\ \frac{\partial f_2}{\partial x_1}(a) & \frac{\partial f_2}{\partial x_2}(a) \\ \frac{\partial f_3}{\partial x_1}(a) & \frac{\partial f_3}{\partial x_2}(a) \end{bmatrix}$$

and its derivative is, by (18), the tensor

$$J_F'(a) = T_F(a) = \left(\nabla \frac{\partial f_i}{\partial x_j}(a)\right) \in \mathbb{R}^{3\times2\times2}, \tag{20}$$

where, by (19), its elements are described as

$$t_{ijk} = \frac{\partial^2 f_i}{\partial x_k\, \partial x_j}(a).$$

With $i$ fixed, it is easy to see that the $i$-th horizontal slice of $T_F(a)$ is the Hessian matrix $\nabla^2 f_i(a)$ defined by

$$\nabla^2 f_i(a) = T_F(a)_{i::} = \begin{bmatrix} \frac{\partial^2 f_i}{\partial x_1 \partial x_1}(a) & \frac{\partial^2 f_i}{\partial x_1 \partial x_2}(a) \\ \frac{\partial^2 f_i}{\partial x_2 \partial x_1}(a) & \frac{\partial^2 f_i}{\partial x_2 \partial x_2}(a) \end{bmatrix}. \tag{21}$$

We note in passing that any column of the matrix $\nabla^2 f_i(x)$ is a row fiber of the $i$-th horizontal slice.

As mentioned in the introduction, some numerical methods need to calculate the product between the tensor $T_F(a)$ and vectors in $\mathbb{R}^2$.

From Definition 1.4, it is possible to calculate the contracted mode-2 and mode-3 products. Since Hessian matrices are symmetric, given $v \in \mathbb{R}^2$, Lemma 1.5 together with (9) and (10) yields

$$T_F(a) \bar{\times}_3 v = T_F(a) \bar{\times}_2 v = \begin{bmatrix} \mathrm{row}_1(T_F(a) \bar{\times}_2 v) \\ \mathrm{row}_2(T_F(a) \bar{\times}_2 v) \\ \mathrm{row}_3(T_F(a) \bar{\times}_2 v) \end{bmatrix} = \begin{bmatrix} v^T \nabla^2 f_1(a) \\ v^T \nabla^2 f_2(a) \\ v^T \nabla^2 f_3(a) \end{bmatrix} \in \mathbb{R}^{3\times2},$$

and consequently it follows that

$$\left(T_F(a) \bar{\times}_2 v\right) u = \begin{bmatrix} v^T \nabla^2 f_1(a)\, u \\ v^T \nabla^2 f_2(a)\, u \\ v^T \nabla^2 f_3(a)\, u \end{bmatrix} \in \mathbb{R}^3 \tag{22}$$

for all $u, v \in \mathbb{R}^2$.

This means that the tensor $T_F(a)$ defined by (20) is the tensor associated with the bilinear application $F''(a)$, relative to the canonical basis of $\mathbb{R}^2$, in the sense of Definition 1.10. Since the two contracted products coincide, we may write, without ambiguity,

$$T_F(a) \bar{\times}_3 v = T_F(a) \bar{\times}_2 v = T_F(a)\, v,$$

and by means of Lemma 1.5, it follows that

$$\left(T_F(a)\, u\right) v = \left(T_F(a)\, v\right) u = T_F(a)\, v\, u.$$

To finish, we consider the following particular case. We know that the $k$-th column of the Jacobian $J_F(x)$ is equal to the product $J_F(x)e_k$, where $e_k$ is the $k$-th canonical vector of $\mathbb{R}^n$. It is worth asking which slice the matrix $T_F(x)e_k$ is. By definition, we have

$$T_F(x)e_k = \begin{bmatrix} e_k^T \nabla^2 f_1(x) \\ e_k^T \nabla^2 f_2(x) \\ \vdots \\ e_k^T \nabla^2 f_n(x) \end{bmatrix} = \begin{bmatrix} \mathrm{row}_k(\nabla^2 f_1(x)) \\ \mathrm{row}_k(\nabla^2 f_2(x)) \\ \vdots \\ \mathrm{row}_k(\nabla^2 f_n(x)) \end{bmatrix}.$$

Given that $\mathrm{row}_k(\nabla^2 f_i(x))$ is the $k$-th tube fiber of the $i$-th horizontal slice, $T_F(x)e_k$ is the $k$-th lateral slice which, by the symmetry of the Hessians, coincides with the $k$-th frontal slice. In short, for a twice-differentiable application $F : U \subset \mathbb{R}^n \to \mathbb{R}^m$, we have $T_F(x) \in \mathbb{R}^{m\times n\times n}$, where the $m$ horizontal slices are the Hessians $\nabla^2 f_i(x)$, with $i = 1, \ldots, m$, and the $n$ lateral and frontal slices are obtained by the products $T_F(x)e_k$, with $k = 1, \ldots, n$.
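Putting the section together, the sketch below assembles $T_F(x) \in \mathbb{R}^{3\times2\times2}$ for a hypothetical twice-differentiable $F : \mathbb{R}^2 \to \mathbb{R}^3$ with hand-computed Hessians, then checks Eq. (22) and the slice identities of the preceding paragraph:

```python
import numpy as np

# Hypothetical F: R^2 -> R^3 with f_1 = x1^2 x2, f_2 = x1 x2, f_3 = x1^2 + x2^2.
def hessians(x):
    H1 = np.array([[2*x[1], 2*x[0]], [2*x[0], 0.0]])  # Hessian of f_1
    H2 = np.array([[0.0, 1.0], [1.0, 0.0]])           # Hessian of f_2
    H3 = np.array([[2.0, 0.0], [0.0, 2.0]])           # Hessian of f_3
    return [H1, H2, H3]

def TF(x):
    # The horizontal slices of T_F(x) are the Hessians of the components of F.
    return np.stack(hessians(x))  # shape (3, 2, 2)

x = np.array([1.0, 2.0])
T = TF(x)
u, v = np.array([0.3, -0.7]), np.array([1.1, 0.4])

# Eq. (22): the i-th component of (T_F(x) xbar_2 v) u is v^T Hess(f_i) u.
lhs = np.einsum('ijk,j->ik', T, v) @ u
rhs = np.array([v @ H @ u for H in hessians(x)])
print(np.allclose(lhs, rhs))  # True

# T_F(x) e_1 is the 1st lateral slice and, by Hessian symmetry, the 1st frontal slice.
e1 = np.array([1.0, 0.0])
Tk = np.einsum('ijk,j->ik', T, e1)
print(np.allclose(Tk, T[:, 0, :]), np.allclose(Tk, T[:, :, 0]))  # True True
```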


5. Conclusions

In this text, we have shown some properties of tensors, in particular those of order 3. In addition, we have treated bilinear applications, and we have shown the isomorphism between the space of bilinear applications and that of tensors of order 3. As mentioned in the introduction, to solve a nonlinear system, some numerical methods use tensors, either in the iterative scheme or in the proofs of theorems. For this reason, we have written a section on the differentiability of applications, showing how to calculate the product between the tensor of second derivatives and vectors.

References

  1. Hernández MA, Gutiérrez JM. A family of Chebyshev-Halley type methods in Banach spaces. Bulletin of the Australian Mathematical Society. 1997;55:113-130
  2. Ehle GP, Schwetlick H. Discretized Euler-Chebyshev multistep methods. SIAM Journal on Numerical Analysis. 1976;13(3):432-447
  3. Eustaquio RG, Ribeiro AA, Dumett MA. A new class of root-finding methods in $\mathbb{R}^n$: The inexact tensor-free Chebyshev-Halley class. Computational and Applied Mathematics. 2018;37:6654-6675
  4. Traub JF. Iterative Methods for the Solution of Equations. Prentice-Hall Series in Automatic Computation. Englewood Cliffs, NJ: Prentice-Hall; 1964
  5. Bader BW, Kolda TG. Algorithm 862: MATLAB tensor classes for fast algorithm prototyping. ACM Transactions on Mathematical Software. 2006;32(4):635-653. DOI: 10.1145/1186785.1186794
  6. Cichocki A, Zdunek R, Phan AH, Amari S. Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multiway Data Analysis and Blind Source Separation. John Wiley & Sons, Ltd; 2009
  7. Kolda TG, Bader BW. Tensor decompositions and applications. SIAM Review. 2009;51(3):455-500
  8. De Lathauwer L, De Moor B, Vandewalle J. A multilinear singular value decomposition. SIAM Journal on Matrix Analysis and Applications. 2000;21:1253-1278
  9. Smilde A, Bro R, Geladi P. Multi-Way Analysis: Applications in the Chemical Sciences. Wiley; 2004
  10. Chen B, Petropulu A, De Lathauwer L. Blind identification of convolutive MIMO systems with 3 sources and 2 sensors. Applied Signal Processing. 2002;5:487-496. Special Issue: Space-Time Coding and Its Applications, Part II. Available from: http://publi-etis.ensea.fr/2002/CPD02a
  11. Golub GH, Kolda TG, Nagy JG, Van Loan CF. Workshop on Tensor Decompositions. Palo Alto, California: American Institute of Mathematics; 2004. Available from: http://www.aimath.org/WWN/tensordecomp/
  12. De Lathauwer L, Comon P. Workshop on Tensor Decompositions and Applications. Marseille, France; 2005. Available from: http://www.etis.ensea.fr/wtda/
  13. Bader BW, Kolda TG. Efficient MATLAB Computations with Sparse and Factored Tensors. Albuquerque, NM/Livermore, CA: Sandia National Laboratories; 2006. SAND2006-7592. Available from: http://www.prod.sandia.gov/cgi-bin/techlib/access-control.pl/2006/067592.pdf
  14. Ishteva M. Numerical methods for the best low multilinear rank approximation of higher-order tensors [PhD thesis]. Belgium: Faculty of Engineering, Katholieke Universiteit Leuven; 2009
  15. Kiers HAL. Towards a standardized notation and terminology in multiway analysis. Journal of Chemometrics. 2000;14:105-122
  16. Coelho FU, Lourenço ML. Um Curso de Álgebra Linear. São Paulo, Brasil: Editora da Universidade de São Paulo; 2007
  17. Lima EL. Análise no Espaço $\mathbb{R}^n$. São Paulo: Editora Universidade de Brasília; 1970
  18. Lima EL. Curso de Análise, Volume 2. Rio de Janeiro, Brasil: IMPA; 1981
  19. Golub GH, Van Loan CF. Matrix Computations. The Johns Hopkins University Press; 1996
