Open access peer-reviewed chapter

Identification of Multilinear Systems: A Brief Overview

Written By

Laura-Maria Dogariu, Constantin Paleologu, Jacob Benesty and Silviu Ciochină

Reviewed: 19 January 2022 Published: 10 March 2022

DOI: 10.5772/intechopen.102765

From the Edited Volume

Advances in Principal Component Analysis

Edited by Fausto Pedro García Márquez


Abstract

Nonlinear systems have been studied for a long time and have applications in numerous research fields. However, there is currently no global solution for nonlinear system identification; the approaches used depend on the type of nonlinearity. An interesting class of nonlinear systems, with a wide range of popular applications, is represented by multilinear (or multidimensional) systems. These systems exhibit a particular property that may be exploited, namely that they can be regarded as linearly separable systems and can be modeled accordingly, using tensors. Examples of well-known applications of multilinear forms are multiple-input/single-output (MISO) systems and acoustic echo cancellers, used in multi-party voice communications, such as videoconferencing. Many important fields (e.g., big data, machine learning, and source separation) can benefit from the methods employed in multidimensional system identification. In this context, this chapter aims to briefly present the recent approaches in the identification of multilinear systems. Methods relying on tensor decomposition and modeling are used to address the large parameter space of such systems.

Keywords

  • nonlinear systems
  • tensor decomposition
  • multilinear forms
  • Wiener filter
  • adaptive filters
  • system identification

1. Introduction

System identification is an important topic nowadays since it can be used in solving numerous problems [1]. The aim of system identification is to estimate an unknown model using the available and observed data, namely the input and output of the system. In this context, the well-known Wiener filter is a popular solution, along with the adaptive filters which can be derived starting from this approach.

In multilinear system identification, dealing with a large parameter space represents an important challenge [2, 3]. The huge length of the filter (hundreds or thousands of coefficients) is also a serious problem [4, 5]. The methods used for addressing these issues usually rely on tensor decomposition and modeling [2, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17], meaning that a high-dimension problem is rewritten as a combination of lower-dimension structures, using the Kronecker product decomposition [18].

In the context of multilinear forms identification, a few approaches were proposed recently, addressing the cases when the large system is decomposed into two or three smaller components (i.e., bilinear and trilinear forms, respectively) [10, 17, 18, 19, 20, 21, 22, 23]. The aforementioned solutions outperform their conventional counterparts, offering at the same time a lower computational complexity.

Motivated by the appealing performance of these previous developments, we extended the tensor decomposition technique to higher-order systems, and in this framework, this chapter presents a part of the work and results obtained recently by the authors in the context of multilinear system identification. An iterative Wiener filter and a family of LMS-based algorithms tailored for multilinear forms are presented. For more details on the results summarized here, the works [24, 25, 26] can be consulted.

Related to the work presented here, several other tensor-based solutions relying on the recursive least-squares (RLS) algorithm were also developed recently [27, 28]. Possible applications of such system identification frameworks can be encountered in topics such as big data [29] and machine learning [14], but they may also be useful in nonlinear acoustic echo cancellation [30, 31], source separation [13, 32, 33], channel equalization [12, 34], array beamforming [16, 35], blind identification [36], object recognition [37, 38], and cardiac applications [39].

The rest of this chapter is organized in the following way. In Section 2, we introduce the system model for the multiple-input/single-output (MISO) system identification problem. In this context, Section 3 presents an iterative Wiener filter tailored for the identification of multilinear systems. Next, in Section 4, an LMS-based algorithm is presented, together with its normalized version, and then, in Section 5, the performance of these algorithms is illustrated through simulations. Finally, conclusions are drawn in Section 6.


2. System model in the multilinear framework

Let us consider a MISO system, whose output signal at the time index t can be written as

$$y(t) = \sum_{l_1=1}^{L_1} \sum_{l_2=1}^{L_2} \cdots \sum_{l_N=1}^{L_N} x_{l_1 l_2 \cdots l_N}(t)\, h_{1,l_1} h_{2,l_2} \cdots h_{N,l_N}, \qquad (1)$$

where the individual channels are modeled by the vectors:

$$\mathbf{h}_i = \begin{bmatrix} h_{i,1} & h_{i,2} & \cdots & h_{i,L_i} \end{bmatrix}^T, \quad i = 1, 2, \ldots, N, \qquad (2)$$

the superscript T denotes the transpose operator, and the input signals may be expressed in tensorial form as X(t) ∈ ℝ^{L_1×L_2×L_3×⋯×L_N}, with elements [X(t)]_{l_1 l_2 ⋯ l_N} = x_{l_1 l_2 ⋯ l_N}(t). Consequently, the output signal becomes

$$y(t) = \mathcal{X}(t) \times_1 \mathbf{h}_1^T \times_2 \mathbf{h}_2^T \times_3 \cdots \times_N \mathbf{h}_N^T, \qquad (3)$$

where ×_i (for i = 1, 2, …, N) denotes the mode-i product [7]. It can be said that y(t) is a multilinear form, because it is a linear function of each of the vectors h_i, i = 1, 2, …, N, when the other N − 1 vectors are fixed. In this context, y(t) may be regarded as an extension of the bilinear form [19]. Next, let us define

$$\mathcal{H} = \mathbf{h}_1 \circ \mathbf{h}_2 \circ \cdots \circ \mathbf{h}_N, \qquad (4)$$

where ∘ denotes the vector outer product, i.e., h_1 ∘ h_2 = h_1 h_2^T, [h_1 ∘ h_2]_{i,j} = h_{1,i} h_{2,j}, and vec(h_1 ∘ h_2) = h_2 ⊗ h_1, and

$$[\mathcal{H}]_{l_1, l_2, \ldots, l_N} = h_{1,l_1} h_{2,l_2} \cdots h_{N,l_N}, \qquad (5)$$
$$\mathrm{vec}(\mathcal{H}) = \mathbf{h}_N \otimes \mathbf{h}_{N-1} \otimes \cdots \otimes \mathbf{h}_1, \qquad (6)$$

where ⊗ denotes the Kronecker product and vec(·) is the vectorization operation:

$$\mathrm{vec}(\mathcal{H}) = \begin{bmatrix} \mathrm{vec}\left(\mathcal{H}_{:,\ldots,:,1}\right) \\ \vdots \\ \mathrm{vec}\left(\mathcal{H}_{:,\ldots,:,L_N}\right) \end{bmatrix}, \qquad (7)$$
$$\mathrm{vec}\left(\mathcal{H}_{:,\ldots,:,l_i}\right) = \begin{bmatrix} \mathrm{vec}\left(\mathcal{H}_{:,\ldots,:,1,l_i}\right) \\ \vdots \\ \mathrm{vec}\left(\mathcal{H}_{:,\ldots,:,L_{N-1},l_i}\right) \end{bmatrix}, \qquad (8)$$

and so on, where H_{:,…,:,l_i} ∈ ℝ^{L_1×L_2×L_3×⋯×L_{N−1}} represent the frontal slices of the tensor H. Therefore, the output signal can be expressed as

$$y(t) = \mathrm{vec}^T(\mathcal{H})\, \mathrm{vec}\left[\mathcal{X}(t)\right], \qquad (9)$$

where

$$\mathrm{vec}\left[\mathcal{X}(t)\right] = \begin{bmatrix} \mathrm{vec}\left[\mathcal{X}_{:,\ldots,:,1}(t)\right] \\ \vdots \\ \mathrm{vec}\left[\mathcal{X}_{:,\ldots,:,L_N}(t)\right] \end{bmatrix} = \mathbf{x}(t), \qquad (10)$$

with X_{:,…,:,l_i}(t) ∈ ℝ^{L_1×L_2×L_3×⋯×L_{N−1}} being the frontal slices of the tensor X(t). Let us denote the global impulse response, of length L_1 L_2 ⋯ L_N, as

$$\mathbf{g} = \mathrm{vec}(\mathcal{H}) = \mathbf{h}_N \otimes \mathbf{h}_{N-1} \otimes \cdots \otimes \mathbf{h}_1. \qquad (11)$$

Here, an observation can be made: the solution of the decomposition in Eq. (11) is not unique [17, 24], since scaling one factor by a constant can be compensated by the others. Despite this, no scaling ambiguity occurs in the identification of the global impulse response, g.
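As a quick numerical illustration of Eq. (11), the following Python/NumPy sketch (small illustrative lengths, N = 3) builds the global impulse response by nested Kronecker products and checks that rescaling the factors leaves g unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative individual impulse responses (small lengths for readability).
h1 = rng.standard_normal(4)   # L1 = 4
h2 = rng.standard_normal(3)   # L2 = 3
h3 = rng.standard_normal(2)   # L3 = 2

# Global impulse response g = h3 (Kronecker) h2 (Kronecker) h1, cf. Eq. (11).
g = np.kron(h3, np.kron(h2, h1))
assert g.size == h1.size * h2.size * h3.size   # L = L1 * L2 * L3 = 24

# The decomposition is not unique: scaling two factors by c and 1/c
# yields the same global response, so g itself carries no scaling ambiguity.
c = 2.5
g_scaled = np.kron(h3 / c, np.kron(h2 * c, h1))
assert np.allclose(g, g_scaled)
```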

Using Eqs. (9)–(11), we may rewrite y(t) as

$$y(t) = \mathbf{g}^T \mathbf{x}(t). \qquad (12)$$

We aim to identify the global impulse response, g. We can define the reference (or desired) signal as

$$d(t) = \mathbf{g}^T \mathbf{x}(t) + w(t), \qquad (13)$$

where w(t) denotes the additive noise, which is uncorrelated with the input signals. The variance of the desired signal is

$$\sigma_d^2 = \mathbf{g}^T E\left[\mathbf{x}(t)\mathbf{x}^T(t)\right]\mathbf{g} + \sigma_w^2 = \mathbf{g}^T \mathbf{R} \mathbf{g} + \sigma_w^2, \qquad (14)$$

with E[·] denoting mathematical expectation, R = E[x(t)x^T(t)], and σ_w^2 = E[w^2(t)]. Next, the error signal can be defined as

$$e(t) = d(t) - \hat{\mathbf{g}}^T \mathbf{x}(t), \qquad (15)$$

where ĝ denotes an estimate of the global impulse response.

The optimization criterion is the minimization of the mean-squared error (MSE), which can be defined using Eq. (15):

$$J\left(\hat{\mathbf{g}}\right) = E\left[e^2(t)\right] = \sigma_d^2 - 2\hat{\mathbf{g}}^T \mathbf{p} + \hat{\mathbf{g}}^T \mathbf{R} \hat{\mathbf{g}}, \qquad (16)$$

where p = E[d(t)x(t)] denotes the cross-correlation vector between d(t) and x(t). The solution to this minimization problem is given by the popular Wiener filter [40]:

$$\hat{\mathbf{g}}_W = \mathbf{R}^{-1} \mathbf{p}. \qquad (17)$$

Relation (17) provides the global impulse response. In order to obtain the N individual coefficient vectors h_i, i = 1, 2, …, N, a nonlinear system of L_1 L_2 ⋯ L_N equations with L_1 + L_2 + ⋯ + L_N scalar unknowns needs to be solved:

$$\hat{\mathbf{g}}_W = \hat{\mathbf{h}}_{W,N} \otimes \hat{\mathbf{h}}_{W,N-1} \otimes \cdots \otimes \hat{\mathbf{h}}_{W,1}. \qquad (18)$$

3. Multilinear iterative Wiener filter

It can be easily checked that

$$\begin{aligned}
\mathbf{g} &= \mathbf{h}_N \otimes \mathbf{h}_{N-1} \otimes \cdots \otimes \mathbf{h}_1 \\
&= \left( \mathbf{h}_N \otimes \mathbf{h}_{N-1} \otimes \cdots \otimes \mathbf{h}_2 \otimes \mathbf{I}_{L_1} \right) \mathbf{h}_1 \\
&= \left( \mathbf{h}_N \otimes \mathbf{h}_{N-1} \otimes \cdots \otimes \mathbf{h}_3 \otimes \mathbf{I}_{L_2} \otimes \mathbf{h}_1 \right) \mathbf{h}_2 \\
&\;\;\vdots \\
&= \left( \mathbf{h}_N \otimes \cdots \otimes \mathbf{h}_{i+1} \otimes \mathbf{I}_{L_i} \otimes \mathbf{h}_{i-1} \otimes \cdots \otimes \mathbf{h}_1 \right) \mathbf{h}_i \\
&\;\;\vdots \\
&= \left( \mathbf{I}_{L_N} \otimes \mathbf{h}_{N-1} \otimes \cdots \otimes \mathbf{h}_1 \right) \mathbf{h}_N, \qquad (19)
\end{aligned}$$

where I_{L_i} denotes the identity matrix of size L_i × L_i.
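The equivalent factorizations of Eq. (19) can be verified numerically; the sketch below (Python/NumPy, N = 3, illustrative lengths) checks the first, middle, and last forms:

```python
import numpy as np

rng = np.random.default_rng(1)
h1, h2, h3 = rng.standard_normal(4), rng.standard_normal(3), rng.standard_normal(2)

g = np.kron(h3, np.kron(h2, h1))

# First form of Eq. (19): g = (h3 (x) h2 (x) I_L1) h1.
A1 = np.kron(h3[:, None], np.kron(h2[:, None], np.eye(h1.size)))  # shape (L, L1)
assert np.allclose(g, A1 @ h1)

# Middle form: g = (h3 (x) I_L2 (x) h1) h2.
A2 = np.kron(h3[:, None], np.kron(np.eye(h2.size), h1[:, None]))  # shape (L, L2)
assert np.allclose(g, A2 @ h2)

# Last form: g = (I_L3 (x) h2 (x) h1) h3.
A3 = np.kron(np.eye(h3.size), np.kron(h2[:, None], h1[:, None]))  # shape (L, L3)
assert np.allclose(g, A3 @ h3)
```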

Hence, the cost function given by Eq. (16) may be expressed in N equivalent forms:

$$\begin{aligned}
J\left(\hat{\mathbf{h}}_1, \hat{\mathbf{h}}_2, \ldots, \hat{\mathbf{h}}_N\right) &= \sigma_d^2 - 2\hat{\mathbf{g}}^T \mathbf{p} + \hat{\mathbf{g}}^T \mathbf{R} \hat{\mathbf{g}} \\
&= \sigma_d^2 - 2\hat{\mathbf{h}}_i^T \left( \hat{\mathbf{h}}_N \otimes \cdots \otimes \hat{\mathbf{h}}_{i+1} \otimes \mathbf{I}_{L_i} \otimes \hat{\mathbf{h}}_{i-1} \otimes \cdots \otimes \hat{\mathbf{h}}_1 \right)^T \mathbf{p} \\
&\quad + \hat{\mathbf{h}}_i^T \left( \hat{\mathbf{h}}_N \otimes \cdots \otimes \mathbf{I}_{L_i} \otimes \cdots \otimes \hat{\mathbf{h}}_1 \right)^T \mathbf{R} \left( \hat{\mathbf{h}}_N \otimes \cdots \otimes \mathbf{I}_{L_i} \otimes \cdots \otimes \hat{\mathbf{h}}_1 \right) \hat{\mathbf{h}}_i \\
&= \sigma_d^2 - 2\hat{\mathbf{h}}_i^T \mathbf{p}_i + \hat{\mathbf{h}}_i^T \mathbf{R}_i \hat{\mathbf{h}}_i, \quad i = 1, 2, \ldots, N, \qquad (20)
\end{aligned}$$

where

$$\mathbf{p}_i = \left( \hat{\mathbf{h}}_N \otimes \cdots \otimes \hat{\mathbf{h}}_{i+1} \otimes \mathbf{I}_{L_i} \otimes \hat{\mathbf{h}}_{i-1} \otimes \cdots \otimes \hat{\mathbf{h}}_1 \right)^T \mathbf{p}, \qquad (21)$$
$$\mathbf{R}_i = \left( \hat{\mathbf{h}}_N \otimes \cdots \otimes \mathbf{I}_{L_i} \otimes \cdots \otimes \hat{\mathbf{h}}_1 \right)^T \mathbf{R} \left( \hat{\mathbf{h}}_N \otimes \cdots \otimes \mathbf{I}_{L_i} \otimes \cdots \otimes \hat{\mathbf{h}}_1 \right). \qquad (22)$$

If all the coefficient vectors except ĥ_i are kept fixed, we may define

$$J_{\hat{\mathbf{h}}_1, \hat{\mathbf{h}}_2, \ldots, \hat{\mathbf{h}}_{i-1}, \hat{\mathbf{h}}_{i+1}, \ldots, \hat{\mathbf{h}}_N}\left(\hat{\mathbf{h}}_i\right) = \sigma_d^2 - 2\hat{\mathbf{h}}_i^T \mathbf{p}_i + \hat{\mathbf{h}}_i^T \mathbf{R}_i \hat{\mathbf{h}}_i, \quad i = 1, 2, \ldots, N. \qquad (23)$$

The minimization of this convex cost function with respect to ĥi yields

$$\hat{\mathbf{h}}_i = \mathbf{R}_i^{-1} \mathbf{p}_i, \quad i = 1, 2, \ldots, N. \qquad (24)$$

Using this result, an iterative approach can be derived. A set of initial values ĥ_i^{(0)}, i = 1, 2, …, N, is chosen to start the algorithm, and then we can compute

$$\mathbf{p}_1^{(0)} = \left( \hat{\mathbf{h}}_N^{(0)} \otimes \hat{\mathbf{h}}_{N-1}^{(0)} \otimes \cdots \otimes \hat{\mathbf{h}}_2^{(0)} \otimes \mathbf{I}_{L_1} \right)^T \mathbf{p}, \qquad (25)$$
$$\mathbf{R}_1^{(0)} = \left( \hat{\mathbf{h}}_N^{(0)} \otimes \cdots \otimes \hat{\mathbf{h}}_2^{(0)} \otimes \mathbf{I}_{L_1} \right)^T \mathbf{R} \left( \hat{\mathbf{h}}_N^{(0)} \otimes \cdots \otimes \hat{\mathbf{h}}_2^{(0)} \otimes \mathbf{I}_{L_1} \right), \qquad (26)$$
$$J_{\hat{\mathbf{h}}_2, \hat{\mathbf{h}}_3, \ldots, \hat{\mathbf{h}}_N}\left(\hat{\mathbf{h}}_1^{(1)}\right) = \sigma_d^2 - 2\hat{\mathbf{h}}_1^{(1)T} \mathbf{p}_1^{(0)} + \hat{\mathbf{h}}_1^{(1)T} \mathbf{R}_1^{(0)} \hat{\mathbf{h}}_1^{(1)}. \qquad (27)$$

The minimization of the cost function yields

$$\hat{\mathbf{h}}_1^{(1)} = \left[\mathbf{R}_1^{(0)}\right]^{-1} \mathbf{p}_1^{(0)}. \qquad (28)$$

Using ĥ_1^{(1)} and ĥ_i^{(0)}, i = 3, …, N, we can now compute ĥ_2^{(1)}. Then, the cost function becomes

$$J_{\hat{\mathbf{h}}_1, \hat{\mathbf{h}}_3, \ldots, \hat{\mathbf{h}}_N}\left(\hat{\mathbf{h}}_2^{(1)}\right) = \sigma_d^2 - 2\hat{\mathbf{h}}_2^{(1)T} \mathbf{p}_2^{(1)} + \hat{\mathbf{h}}_2^{(1)T} \mathbf{R}_2^{(1)} \hat{\mathbf{h}}_2^{(1)}, \qquad (29)$$

where

$$\mathbf{p}_2^{(1)} = \left( \hat{\mathbf{h}}_N^{(0)} \otimes \cdots \otimes \hat{\mathbf{h}}_3^{(0)} \otimes \mathbf{I}_{L_2} \otimes \hat{\mathbf{h}}_1^{(1)} \right)^T \mathbf{p}, \qquad (30)$$
$$\mathbf{R}_2^{(1)} = \left( \hat{\mathbf{h}}_N^{(0)} \otimes \cdots \otimes \hat{\mathbf{h}}_3^{(0)} \otimes \mathbf{I}_{L_2} \otimes \hat{\mathbf{h}}_1^{(1)} \right)^T \mathbf{R} \left( \hat{\mathbf{h}}_N^{(0)} \otimes \cdots \otimes \hat{\mathbf{h}}_3^{(0)} \otimes \mathbf{I}_{L_2} \otimes \hat{\mathbf{h}}_1^{(1)} \right). \qquad (31)$$

The minimization of the cost function yields

$$\hat{\mathbf{h}}_2^{(1)} = \left[\mathbf{R}_2^{(1)}\right]^{-1} \mathbf{p}_2^{(1)}. \qquad (32)$$

All the other estimates ĥ_i^{(1)}, i = 3, 4, …, N, can be computed in a similar manner. By further iterating up to iteration n, the estimates of the N vectors are obtained. This minimization technique is called “block coordinate descent” [41].
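A minimal sketch of this block coordinate descent procedure is given below (Python/NumPy, N = 3), under the simplifying assumption of a white unit-variance input (R = I) and a noise-free cross-correlation vector (p = Rg = g); the names factor_matrix and iterative_wiener are our own illustrative choices:

```python
import numpy as np

def factor_matrix(h, i):
    """Kronecker matrix h_N (x) ... (x) I_{L_i} (x) ... (x) h_1, cf. Eq. (19)."""
    M = np.ones((1, 1))
    for j in range(len(h) - 1, -1, -1):         # Kronecker order: h_N down to h_1
        F = np.eye(h[i].size) if j == i else h[j][:, None]
        M = np.kron(M, F)
    return M                                     # shape (L, L_i)

def iterative_wiener(R, p, Ls, n_iter=10):
    """Block coordinate descent: cycle over the updates of Eqs. (21)-(24)."""
    h = [np.ones(L) / L for L in Ls]             # uniform initial values
    h[0] = np.eye(Ls[0])[:, 0]                   # first filter starts at [1 0 ... 0]
    for _ in range(n_iter):
        for i in range(len(h)):
            A = factor_matrix(h, i)
            h[i] = np.linalg.solve(A.T @ R @ A, A.T @ p)   # Eq. (24)
    return factor_matrix(h, 0) @ h[0]            # recombine the global estimate

# Toy check: white input (R = I), noise-free p = g.
rng = np.random.default_rng(1)
hs = [rng.standard_normal(L) for L in (4, 3, 2)]
g = np.kron(hs[2], np.kron(hs[1], hs[0]))
g_hat = iterative_wiener(np.eye(g.size), g, (4, 3, 2))
assert np.allclose(g_hat, g)
```

In this rank-1 toy case each inner update recovers the corresponding true factor up to a scaling that cancels in the recombined global estimate, which is why the check passes after very few iterations.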


4. LMS and NLMS algorithms for multilinear forms

The limitations of the Wiener filter (e.g., matrix inversion, statistics estimation) can restrict the applicability of the previously presented approach in real-world situations (for example, in nonstationary conditions, or when real-time processing is needed). Therefore, a better approach may be represented by adaptive filters. In this context, the well-known least-mean-square (LMS) algorithm is among the most popular solutions, due to its simplicity. In the following, a family of LMS-based algorithms for multilinear forms identification is presented.

By using the estimated impulse responses ĥ_i(t − 1), i = 1, 2, …, N, the corresponding a priori error signals can be defined as

$$e_{\hat{\mathbf{h}}_2 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t) = d(t) - \hat{\mathbf{h}}_1^T(t-1)\, \mathbf{x}_{\hat{\mathbf{h}}_2 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t), \qquad (33)$$
$$e_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t) = d(t) - \hat{\mathbf{h}}_2^T(t-1)\, \mathbf{x}_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t), \qquad (34)$$
$$\vdots$$
$$e_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_2 \cdots \hat{\mathbf{h}}_{N-1}}(t) = d(t) - \hat{\mathbf{h}}_N^T(t-1)\, \mathbf{x}_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_2 \cdots \hat{\mathbf{h}}_{N-1}}(t), \qquad (35)$$

where

$$\mathbf{x}_{\hat{\mathbf{h}}_2 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t) = \left[ \hat{\mathbf{h}}_N(t-1) \otimes \hat{\mathbf{h}}_{N-1}(t-1) \otimes \cdots \otimes \hat{\mathbf{h}}_2(t-1) \otimes \mathbf{I}_{L_1} \right]^T \mathbf{x}(t), \qquad (36)$$
$$\mathbf{x}_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t) = \left[ \hat{\mathbf{h}}_N(t-1) \otimes \cdots \otimes \hat{\mathbf{h}}_3(t-1) \otimes \mathbf{I}_{L_2} \otimes \hat{\mathbf{h}}_1(t-1) \right]^T \mathbf{x}(t), \qquad (37)$$
$$\vdots$$
$$\mathbf{x}_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_2 \cdots \hat{\mathbf{h}}_{N-1}}(t) = \left[ \mathbf{I}_{L_N} \otimes \hat{\mathbf{h}}_{N-1}(t-1) \otimes \cdots \otimes \hat{\mathbf{h}}_1(t-1) \right]^T \mathbf{x}(t). \qquad (38)$$

We can easily check that e_{ĥ_2 ĥ_3 ⋯ ĥ_N}(t) = e_{ĥ_1 ĥ_3 ⋯ ĥ_N}(t) = ⋯ = e_{ĥ_1 ĥ_2 ⋯ ĥ_{N−1}}(t). Hence, the LMS updates of the individual filters will be

$$\hat{\mathbf{h}}_1(t) = \hat{\mathbf{h}}_1(t-1) - \frac{\mu_{\hat{\mathbf{h}}_1}}{2} \times \frac{\partial e^2_{\hat{\mathbf{h}}_2 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t)}{\partial \hat{\mathbf{h}}_1(t-1)} = \hat{\mathbf{h}}_1(t-1) + \mu_{\hat{\mathbf{h}}_1} \mathbf{x}_{\hat{\mathbf{h}}_2 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t)\, e_{\hat{\mathbf{h}}_2 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t), \qquad (39)$$
$$\hat{\mathbf{h}}_2(t) = \hat{\mathbf{h}}_2(t-1) - \frac{\mu_{\hat{\mathbf{h}}_2}}{2} \times \frac{\partial e^2_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t)}{\partial \hat{\mathbf{h}}_2(t-1)} = \hat{\mathbf{h}}_2(t-1) + \mu_{\hat{\mathbf{h}}_2} \mathbf{x}_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t)\, e_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t), \qquad (40)$$
$$\vdots$$
$$\hat{\mathbf{h}}_N(t) = \hat{\mathbf{h}}_N(t-1) - \frac{\mu_{\hat{\mathbf{h}}_N}}{2} \times \frac{\partial e^2_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_2 \cdots \hat{\mathbf{h}}_{N-1}}(t)}{\partial \hat{\mathbf{h}}_N(t-1)} = \hat{\mathbf{h}}_N(t-1) + \mu_{\hat{\mathbf{h}}_N} \mathbf{x}_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_2 \cdots \hat{\mathbf{h}}_{N-1}}(t)\, e_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_2 \cdots \hat{\mathbf{h}}_{N-1}}(t), \qquad (41)$$

where μ_{ĥ_i} > 0, i = 1, 2, …, N, represent the step-size parameters. Relations (39)–(41) describe the LMS algorithm for multilinear forms (LMS-MF). The initialization of the estimated impulse responses could be

$$\hat{\mathbf{h}}_1(0) = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix}^T, \qquad (42)$$
$$\hat{\mathbf{h}}_k(0) = \frac{1}{L_k} \begin{bmatrix} 1 & 1 & \cdots & 1 \end{bmatrix}^T, \quad k = 2, 3, \ldots, N. \qquad (43)$$

The global filter estimate is obtained as

$$\hat{\mathbf{g}}(t) = \hat{\mathbf{h}}_N(t) \otimes \hat{\mathbf{h}}_{N-1}(t) \otimes \cdots \otimes \hat{\mathbf{h}}_1(t). \qquad (44)$$

We may also identify the global impulse response using the classical LMS algorithm:

$$\hat{\mathbf{g}}(t) = \hat{\mathbf{g}}(t-1) + \mu_{\hat{\mathbf{g}}}\, \mathbf{x}(t)\, e(t), \qquad (45)$$
$$e(t) = d(t) - \hat{\mathbf{g}}^T(t-1)\, \mathbf{x}(t), \qquad (46)$$

where μĝ denotes the global step-size parameter, but this would involve a single long-length adaptive filter.
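For concreteness, one LMS-MF iteration can be sketched as follows (Python/NumPy, illustrative toy lengths and a noise-free desired signal; the filtered inputs of Eqs. (36)–(38) are formed explicitly here, which is inefficient but keeps the sketch close to the equations):

```python
import numpy as np

def lms_mf_step(h, x, d, mus):
    """One LMS-MF update: filtered inputs (Eqs. (36)-(38)), a priori
    errors (Eqs. (33)-(35)), then the updates of Eqs. (39)-(41)."""
    N = len(h)
    xs, es = [], []
    for i in range(N):
        M = np.ones((1, 1))
        for j in range(N - 1, -1, -1):    # h_N (x) ... (x) I_{L_i} (x) ... (x) h_1
            F = np.eye(h[i].size) if j == i else h[j][:, None]
            M = np.kron(M, F)
        xi = M.T @ x                      # filtered input for the i-th filter
        xs.append(xi)
        es.append(d - h[i] @ xi)          # in theory, all these errors coincide
    return [h[i] + mus[i] * xs[i] * es[i] for i in range(N)]

# Toy identification run (N = 2, noise-free desired signal).
rng = np.random.default_rng(2)
h1_true = np.array([1.0, 0.5, -0.3, 0.1])
h2_true = np.array([0.8, 0.6])
g = np.kron(h2_true, h1_true)
h = [np.array([1.0, 0.0, 0.0, 0.0]), np.full(2, 0.5)]  # init as in Eqs. (42)-(43)
for _ in range(3000):
    x = rng.standard_normal(g.size)
    h = lms_mf_step(h, x, g @ x, mus=[0.01, 0.01])
g_hat = np.kron(h[1], h[0])                             # global estimate, Eq. (44)
misalignment = np.linalg.norm(g - g_hat) / np.linalg.norm(g)
```

Note that only the recombined global estimate is compared against g, since the individual factors are recovered up to mutually compensating scalings.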

When choosing the constant values of the step-size parameters from Eqs. (39)–(41), we need to consider the compromise between convergence rate and steady-state misadjustment. In certain cases, it can be more useful to have variable step-size parameters. Hence, the update equations become

$$\hat{\mathbf{h}}_1(t) = \hat{\mathbf{h}}_1(t-1) + \mu_{\hat{\mathbf{h}}_1}(t)\, \mathbf{x}_{\hat{\mathbf{h}}_2 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t)\, e_{\hat{\mathbf{h}}_2 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t), \qquad (47)$$
$$\hat{\mathbf{h}}_2(t) = \hat{\mathbf{h}}_2(t-1) + \mu_{\hat{\mathbf{h}}_2}(t)\, \mathbf{x}_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t)\, e_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t), \qquad (48)$$
$$\vdots$$
$$\hat{\mathbf{h}}_N(t) = \hat{\mathbf{h}}_N(t-1) + \mu_{\hat{\mathbf{h}}_N}(t)\, \mathbf{x}_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_2 \cdots \hat{\mathbf{h}}_{N-1}}(t)\, e_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_2 \cdots \hat{\mathbf{h}}_{N-1}}(t). \qquad (49)$$

Then, the a posteriori error signals can be defined as

$$\varepsilon_{\hat{\mathbf{h}}_2 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t) = d(t) - \hat{\mathbf{h}}_1^T(t)\, \mathbf{x}_{\hat{\mathbf{h}}_2 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t), \qquad (50)$$
$$\varepsilon_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t) = d(t) - \hat{\mathbf{h}}_2^T(t)\, \mathbf{x}_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t), \qquad (51)$$
$$\vdots$$
$$\varepsilon_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_2 \cdots \hat{\mathbf{h}}_{N-1}}(t) = d(t) - \hat{\mathbf{h}}_N^T(t)\, \mathbf{x}_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_2 \cdots \hat{\mathbf{h}}_{N-1}}(t). \qquad (52)$$

After replacing Eq. (47) in Eq. (50), Eq. (48) in Eq. (51), and Eq. (49) in Eq. (52), and then canceling the a posteriori error signals, we get

$$e_{\hat{\mathbf{h}}_2 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t) \left[ 1 - \mu_{\hat{\mathbf{h}}_1}(t)\, \mathbf{x}^T_{\hat{\mathbf{h}}_2 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t)\, \mathbf{x}_{\hat{\mathbf{h}}_2 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t) \right] = 0, \qquad (53)$$
$$e_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t) \left[ 1 - \mu_{\hat{\mathbf{h}}_2}(t)\, \mathbf{x}^T_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t)\, \mathbf{x}_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t) \right] = 0, \qquad (54)$$
$$\vdots$$
$$e_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_2 \cdots \hat{\mathbf{h}}_{N-1}}(t) \left[ 1 - \mu_{\hat{\mathbf{h}}_N}(t)\, \mathbf{x}^T_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_2 \cdots \hat{\mathbf{h}}_{N-1}}(t)\, \mathbf{x}_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_2 \cdots \hat{\mathbf{h}}_{N-1}}(t) \right] = 0. \qquad (55)$$

We assume that e_{ĥ_2 ĥ_3 ⋯ ĥ_N}(t) ≠ 0, e_{ĥ_1 ĥ_3 ⋯ ĥ_N}(t) ≠ 0, …, e_{ĥ_1 ĥ_2 ⋯ ĥ_{N−1}}(t) ≠ 0. Hence, the step-sizes become

$$\mu_{\hat{\mathbf{h}}_1}(t) = \frac{1}{\mathbf{x}^T_{\hat{\mathbf{h}}_2 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t)\, \mathbf{x}_{\hat{\mathbf{h}}_2 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t)}, \qquad (56)$$
$$\mu_{\hat{\mathbf{h}}_2}(t) = \frac{1}{\mathbf{x}^T_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t)\, \mathbf{x}_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t)}, \qquad (57)$$
$$\vdots$$
$$\mu_{\hat{\mathbf{h}}_N}(t) = \frac{1}{\mathbf{x}^T_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_2 \cdots \hat{\mathbf{h}}_{N-1}}(t)\, \mathbf{x}_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_2 \cdots \hat{\mathbf{h}}_{N-1}}(t)}. \qquad (58)$$

In the numerators of Eqs. (56)–(58), the normalized step-size parameters 0 < α_{ĥ_i} < 1, i = 1, 2, …, N, can be used to achieve a good compromise between convergence rate and misadjustment. Some regularization constants, denoted by δ_{ĥ_i} > 0, i = 1, 2, …, N, are also introduced in the denominators of the step-sizes in order to ensure robust adaptation. Consequently, we obtain the update equations of the normalized LMS (NLMS) algorithm for multilinear forms (NLMS-MF):

$$\hat{\mathbf{h}}_1(t) = \hat{\mathbf{h}}_1(t-1) + \frac{\alpha_{\hat{\mathbf{h}}_1}\, \mathbf{x}_{\hat{\mathbf{h}}_2 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t)\, e_{\hat{\mathbf{h}}_2 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t)}{\delta_{\hat{\mathbf{h}}_1} + \mathbf{x}^T_{\hat{\mathbf{h}}_2 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t)\, \mathbf{x}_{\hat{\mathbf{h}}_2 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t)}, \qquad (59)$$
$$\hat{\mathbf{h}}_2(t) = \hat{\mathbf{h}}_2(t-1) + \frac{\alpha_{\hat{\mathbf{h}}_2}\, \mathbf{x}_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t)\, e_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t)}{\delta_{\hat{\mathbf{h}}_2} + \mathbf{x}^T_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t)\, \mathbf{x}_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_3 \cdots \hat{\mathbf{h}}_N}(t)}, \qquad (60)$$
$$\vdots$$
$$\hat{\mathbf{h}}_N(t) = \hat{\mathbf{h}}_N(t-1) + \frac{\alpha_{\hat{\mathbf{h}}_N}\, \mathbf{x}_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_2 \cdots \hat{\mathbf{h}}_{N-1}}(t)\, e_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_2 \cdots \hat{\mathbf{h}}_{N-1}}(t)}{\delta_{\hat{\mathbf{h}}_N} + \mathbf{x}^T_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_2 \cdots \hat{\mathbf{h}}_{N-1}}(t)\, \mathbf{x}_{\hat{\mathbf{h}}_1 \hat{\mathbf{h}}_2 \cdots \hat{\mathbf{h}}_{N-1}}(t)}. \qquad (61)$$

The initialization of the individual impulse responses can be done using Eqs. (42) and (43). We may also identify the global impulse response using the regular NLMS algorithm:

$$\hat{\mathbf{g}}(t) = \hat{\mathbf{g}}(t-1) + \frac{\alpha_{\hat{\mathbf{g}}}\, \mathbf{x}(t)\, e(t)}{\mathbf{x}^T(t)\, \mathbf{x}(t) + \delta_{\hat{\mathbf{g}}}}, \qquad (62)$$

where 0 < α_ĝ ≤ 1 denotes the normalized step-size parameter, δ_ĝ > 0 represents the regularization constant, and e(t) is defined in Eq. (46). However, this would involve a single (long-length) adaptive filter, with L = L_1 L_2 ⋯ L_N.
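An analogous sketch of one NLMS-MF iteration (Eqs. (59)–(61)), again with illustrative toy dimensions, a noise-free desired signal, and our own function name, follows:

```python
import numpy as np

def nlms_mf_step(h, x, d, alphas, deltas=None):
    """One NLMS-MF update, cf. Eqs. (59)-(61), with regularization deltas > 0."""
    N = len(h)
    if deltas is None:
        deltas = [1e-8] * N
    xs, es = [], []
    for i in range(N):
        M = np.ones((1, 1))
        for j in range(N - 1, -1, -1):    # h_N (x) ... (x) I_{L_i} (x) ... (x) h_1
            F = np.eye(h[i].size) if j == i else h[j][:, None]
            M = np.kron(M, F)
        xi = M.T @ x                      # filtered input for the i-th filter
        xs.append(xi)
        es.append(d - h[i] @ xi)          # a priori errors
    return [h[i] + alphas[i] * xs[i] * es[i] / (deltas[i] + xs[i] @ xs[i])
            for i in range(N)]

# Toy scenario (N = 2, noise-free desired signal).
rng = np.random.default_rng(3)
h1_true = np.array([1.0, 0.5, -0.3, 0.1])
h2_true = np.array([0.8, 0.6])
g = np.kron(h2_true, h1_true)
h = [np.array([1.0, 0.0, 0.0, 0.0]), np.full(2, 0.5)]
for _ in range(1500):
    x = rng.standard_normal(g.size)
    h = nlms_mf_step(h, x, g @ x, alphas=[0.5, 0.5])
misalignment = np.linalg.norm(g - np.kron(h[1], h[0])) / np.linalg.norm(g)
```

The normalization by the energy of the filtered input makes the adaptation speed insensitive to the input scaling, which is the practical motivation for NLMS-MF over LMS-MF.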


5. Experimental results

The purpose of this section is to illustrate through simulations the improved performance of the proposed solutions for the identification of multilinear forms. We performed experiments involving MISO system identification. As input signals, we used white Gaussian noises and AR(1) processes, the latter obtained by filtering white Gaussian noise through a first-order system with the transfer function 1/(1 − 0.99z^{−1}). The additive noise is white and Gaussian, with variance σ_w^2 = 0.01. We use two different system orders (N = 4 and N = 5). The first individual impulse response, h_1, of length L_1 = 16, is formed from the first 16 coefficients of the first network echo path in the ITU-T G.168 Recommendation [42]. The second impulse response, h_2, of length L_2 = 8, is randomly generated from a Gaussian distribution, whereas the other three impulse responses, h_3, h_4, and h_5, of lengths L_3 = L_4 = 4 and L_5 = 2, respectively, are obtained using the exponential decay rule h_{j,l_j} = a_j^{l_j − 1}, j = 3, 4, 5, where a_j takes the values 0.9, 0.5, and 0.1, respectively. Hence, the global impulse response g, computed using Eq. (11), has length 2048 when N = 4 and 4096 when N = 5. Figure 1 illustrates the individual impulse responses h_1, h_2, h_3, and h_4, as well as the resulting global impulse response g = h_4 ⊗ h_3 ⊗ h_2 ⊗ h_1, in the case when N = 4.
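Under the stated settings, the AR(1) inputs and the exponentially decaying impulse responses can be generated as in the following sketch (the recursion implements the transfer function 1/(1 − 0.99z^{−1})):

```python
import numpy as np

rng = np.random.default_rng(42)
T = 10_000
white = rng.standard_normal(T)

# AR(1) input: white Gaussian noise filtered through 1/(1 - 0.99 z^{-1}),
# i.e., x(t) = w(t) + 0.99 * x(t - 1).
ar1 = np.empty(T)
prev = 0.0
for t in range(T):
    prev = white[t] + 0.99 * prev
    ar1[t] = prev

# Exponential-decay impulse response h_{j,l_j} = a_j**(l_j - 1);
# e.g., a_3 = 0.9 and L_3 = 4 give the coefficients 1, 0.9, 0.81, 0.729.
h3 = 0.9 ** np.arange(4)
```

The pole at 0.99 makes the AR(1) input highly correlated (its variance is roughly 1/(1 − 0.99²) ≈ 50 times that of the driving noise), which is exactly the regime where the tensor-based algorithms show the largest gains below.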

Figure 1.

Impulse responses used in simulations of the multiple-input/single-output (MISO) system identification scenario (for N = 4): (a) h_1 (of length L_1 = 16) contains the first 16 coefficients of the first impulse response from the ITU-T G.168 Recommendation [42], (b) h_2 (of length L_2 = 8) is a randomly generated impulse response, (c) h_3 (of length L_3 = 4) has the coefficients h_{3,l_3} = 0.9^{l_3 − 1}, with l_3 = 1, 2, …, L_3, (d) h_4 (of length L_4 = 4) has the coefficients h_{4,l_4} = 0.5^{l_4 − 1}, with l_4 = 1, 2, …, L_4, and (e) g (of length L = L_1 L_2 L_3 L_4 = 2048) is the global impulse response, resulting from Eq. (11).

The measure of performance is the normalized misalignment (in dB) for the identification of the global impulse response, computed as 20 log_10(‖g − ĝ(t)‖_2 / ‖g‖_2). A sudden change in the sign of the coefficients of h_1 is introduced in the middle of each experiment, with the goal of observing the tracking capabilities of the proposed algorithms.
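This metric can be computed with a small helper (an illustrative function name):

```python
import numpy as np

def misalignment_db(g, g_hat):
    """Normalized misalignment 20*log10(||g - g_hat||_2 / ||g||_2), in dB."""
    return 20.0 * np.log10(np.linalg.norm(g - g_hat) / np.linalg.norm(g))

g = np.array([1.0, -0.5, 0.25])
zero_est = misalignment_db(g, np.zeros(3))   # 0 dB: an all-zero estimate
good_est = misalignment_db(g, 0.9 * g)       # -20 dB: residual is 10% of ||g||
```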

First, we aim to show comparatively the performances of the LMS-MF and LMS algorithms. When choosing the step-size parameter values, we need to take into account the theoretical upper bound, which for the conventional LMS algorithm is 2/(Lσ_x^2), where σ_x^2 is the variance of the input signal [40]. In practical scenarios, this limit may not be usable, due to stability issues. The step-size parameters for the LMS-MF and LMS algorithms were chosen in our simulations in such a manner that similar misalignment values are obtained, in order to compare their convergence rate and tracking. The largest value of the step-size parameter for the conventional LMS algorithm is chosen close to its stability limit, such that the fastest convergence possible is obtained.

Figure 2 shows the case when N=4 and the input signals are white Gaussian noises. The resulting global filter length is L=2048. It can be seen that the LMS-MF achieves a higher convergence rate and faster tracking as compared to the conventional LMS algorithm. Of course, when the value of the step-size parameter decreases, the misalignment decreases, but at the same time, the convergence becomes slower.

Figure 2.

Performance of the LMS-MF and LMS algorithms (using different step-size parameters), for the identification of the global impulse response g. The input signals are white Gaussian noises, N=4, and L=2048.

Next, in Figure 3, N=4 and the input signals are highly-correlated AR(1) processes. The performance gain of the LMS-MF with respect to the conventional LMS algorithm in terms of both convergence rate/tracking and steady-state misalignment is even higher in this scenario.

Figure 3.

Performance of the LMS-MF and LMS algorithms (using different step-size parameters), for the identification of the global impulse response g. The input signals are AR(1) processes, N=4, and L=2048.

When the system order increases, the improvement in performance brought by the LMS-MF is even more apparent. This can be seen in Figure 4, where N=5 and the input signals are AR(1) processes. The resulting global impulse response length is L=4096. It is easily observed that the proposed algorithm is superior to the conventional LMS in terms of both convergence rate/tracking and misalignment.

Figure 4.

Performance of the LMS-MF and LMS algorithms (using different step-size parameters), for the identification of the global impulse response g. The input signals are AR(1) processes, N=5, and L=4096.

In the following, we aim to illustrate the performance of the NLMS-MF and NLMS algorithms in the identification of the global system. Since the step-size parameter no longer has a constant value, the normalized algorithms can work better in nonstationary environments. The fastest-convergence bound for the normalized step-size parameter of the conventional NLMS algorithm is 1 [40].

Figure 5 illustrates the case when the inputs are white Gaussian noises and N = 4, leading to a global system length of L = 2048. Similar to the case of the LMS-MF and LMS algorithms from Figure 2, it can be concluded that the performance of the NLMS-MF algorithm is significantly better than that of its conventional counterpart. This is even more apparent for smaller normalized step-size values.

Figure 5.

Performance of the NLMS-MF and NLMS algorithms (using different normalized step-size parameters), for the identification of the global impulse response g. The input signals are white Gaussian noises, N=4, and L=2048.

The improvement offered by the proposed approach is even more significant for correlated inputs. In Figure 6, the input signals are AR(1) processes. It is noticed that even when the NLMS-MF algorithm uses lower values for the normalized step-sizes, it can still outperform the NLMS algorithm working in the fastest convergence mode.

Figure 6.

Performance of the NLMS-MF and NLMS algorithms (using different normalized step-size parameters), for the identification of the global impulse response g. The input signals are AR(1) processes, N=4, and L=2048.

The same conclusion applies when the order N is increased. Figure 7 shows the case when N=5 and, hence, the length of the global impulse response is L=4096. Again, the NLMS-MF algorithm achieves a significantly better convergence rate and tracking with respect to the conventional NLMS algorithm.

Figure 7.

Performance of the NLMS-MF and NLMS algorithms (using different normalized step-size parameters), for the identification of the global impulse response g. The input signals are AR(1) processes, N=5, and L=4096.

Next, we aim to show the influence of the normalized step-size values on the performance of the proposed algorithm. In Figure 8, the order of the system is N=4 and the input signals are AR(1) processes. As it was expected, lower values of the normalized step-sizes improve the misalignment, but at the cost of a slower convergence and tracking.

Figure 8.

Performance of the NLMS-MF algorithm using different normalized step-size parameters (with equal values of α_i, i = 1, 2, …, N), for the identification of the global impulse response g. The input signals are AR(1) processes, N = 4, and L = 2048.

The last experiment involving the NLMS-MF algorithm aims to show its performance when the normalized step-size parameters α_i (i = 1, 2, …, N) take different values for each individual filter. In Figure 9, the normalized step-size parameter for the first filter, which has the largest length L_1, is α_1 = 0.25, whereas the other normalized step-sizes α_j, j = 2, 3, 4, are varied. The system order is N = 4 and the input signals are AR(1) processes. The compromise between convergence rate and misalignment can be seen again. Since the convergence is mostly influenced by the individual filter with the largest length [20], different values of the normalized step-sizes may be used in different scenarios.

Figure 9.

Performance of the NLMS-MF algorithm using different normalized step-size parameters (with different values of α_i, i = 1, 2, …, N), for the identification of the global impulse response g. The input signals are AR(1) processes, N = 4, and L = 2048.

Due to the important improvement in performance brought by the adaptive tensor-based LMS algorithms, observed through experiments, these algorithms may represent appealing solutions for the identification of long-length separable system impulse responses.


6. Conclusions

In this chapter, we have presented a decomposition-based approach for dealing with the identification of high-dimension MISO systems. Unlike the conventional method, which is based on the identification of the global system impulse response, our solution regards the system as an Nth-order tensor and thus estimates N shorter filters. At the end, the individual solutions are combined ("tensorized" together) into the original high-dimension structure. Based on the tensor decomposition technique, an iterative Wiener filter was proposed, along with a family of LMS-based algorithms suitable for the identification of such systems, also called multilinear systems. In addition to their lower computational complexity, the proposed solutions achieve superior performance as compared to their conventional counterparts in terms of convergence rate, tracking capability, and steady-state misadjustment. Experiments have also shown the performance improvement of the proposed adaptive algorithms for multilinear forms in long-length system identification in different scenarios, even for highly correlated input signals.


Acknowledgments

This work was supported by a grant from the Romanian Ministry of Education and Research, CNCS–UEFISCDI, Project Number PN-III-P1-1.1-PD-2019-0340, within PNCDI III.

References

  1. Ljung L. System Identification: Theory for the User. 2nd ed. Upper Saddle River, NJ, USA: Prentice-Hall; 1999
  2. Rupp M, Schwarz S. A tensor LMS algorithm. In: Proceedings of IEEE ICASSP. Brisbane, Australia; 2015. pp. 3347-3351
  3. Rupp M, Schwarz S. Gradient-based approaches to learn tensor products. In: Proceedings of EUSIPCO. Nice, France; 2015. pp. 2486-2490
  4. Gay SL, Benesty J. Acoustic Signal Processing for Telecommunication. Boston, MA, USA: Kluwer Academic Publishers; 2000
  5. Benesty J, Gänsler T, Morgan DR, Sondhi MM, Gay SL. Advances in Network and Acoustic Echo Cancellation. Berlin, Germany: Springer-Verlag; 2001
  6. De Lathauwer L. Signal Processing Based on Multilinear Algebra. Leuven, Belgium: Katholieke Universiteit Leuven; 1997
  7. Kolda TG, Bader BW. Tensor decompositions and applications. SIAM Review. 2009;51:455-500
  8. Comon P. Tensors: A brief introduction. IEEE Signal Processing Magazine. 2014;31:44-53
  9. Cichocki A, Mandic DP, Phan A-H, Caiafa CF, Zhou G, Zhao Q, et al. Tensor decompositions for signal processing applications. IEEE Signal Processing Magazine. 2015;32:145-163
  10. Ribeiro LN, de Almeida ALF, Mota JCM. Identification of separable systems using trilinear filtering. In: Proceedings of IEEE CAMSAP. Kos, Greece; 2015. pp. 189-192
  11. da Silva AP, Comon P, de Almeida ALF. A finite algorithm to compute rank-1 tensor approximations. IEEE Signal Processing Letters. 2016;23:959-963
  12. Ribeiro LN, Schwarz S, Rupp M, de Almeida ALF, Mota JCM. A low-complexity equalizer for massive MIMO systems based on array separability. In: Proceedings of EUSIPCO. Kos, Greece; 2017. pp. 2522-2526
  13. Boussé M, Debals O, De Lathauwer L. A tensor-based method for large-scale blind source separation using segmentation. IEEE Transactions on Signal Processing. 2017;65(2):346-358
  14. Sidiropoulos N, De Lathauwer L, Fu X, Huang K, Papalexakis E, Faloutsos C. Tensor decomposition for signal processing and machine learning. IEEE Transactions on Signal Processing. 2017;65(13):3551-3582
  15. da Costa MN, Favier G, Romano JMT. Tensor modelling of MIMO communication systems with performance analysis and Kronecker receivers. Signal Processing. 2018;145:304-316
  16. Ribeiro LN, de Almeida ALF, Mota JCM. Separable linearly constrained minimum variance beamformers. Signal Processing. 2019;158:15-25
  17. Dogariu L-M, Ciochină S, Benesty J, Paleologu C. System identification based on tensor decompositions: A trilinear approach. Symmetry. 2019;11:556
  18. Van Loan CF. The ubiquitous Kronecker product. Journal of Computational and Applied Mathematics. 2000;123:85-100
  19. Benesty J, Paleologu C, Ciochină S. On the identification of bilinear forms with the Wiener filter. IEEE Signal Processing Letters. 2017;24:653-657
  20. Paleologu C, Benesty J, Ciochină S. Adaptive filtering for the identification of bilinear forms. Digital Signal Processing. 2018;75:153-167
  21. Elisei-Iliescu C, Stanciu C, Paleologu C, Benesty J, Anghel C, Ciochină S. Efficient recursive least-squares algorithms for the identification of bilinear forms. Digital Signal Processing. 2018;83:280-296
  22. Dogariu L-M, Paleologu C, Ciochină S, Benesty J, Piantanida P. Identification of bilinear forms with the Kalman filter. In: Proceedings of IEEE ICASSP. Calgary, AB, Canada; 2018. pp. 4134-4138
  23. Elisei-Iliescu C, Dogariu L-M, Paleologu C, Benesty J, Enescu AA, Ciochină S. A recursive least-squares algorithm for the identification of trilinear forms. Algorithms. 2020;13:135
  24. Dogariu L-M, Ciochină S, Paleologu C, Benesty J, Oprea C. An iterative Wiener filter for the identification of multilinear forms. In: Proceedings of IEEE TSP. Milan, Italy; 2020. pp. 193-197
  25. Dogariu L-M, Paleologu C, Benesty J, Oprea C, Ciochină S. LMS algorithms for multilinear forms. In: Proceedings of IEEE ISETC. Timisoara, Romania; 2020. pp. 1-4
  26. Dogariu L-M, Paleologu C, Benesty J, Oprea C, Ciochină S. Tensor-based adaptive filtering algorithms. Symmetry. 2021;13(3):481
  27. Fîciu ID, Stanciu C-L, Anghel C, Elisei-Iliescu C. Low-complexity recursive least-squares adaptive algorithm based on tensorial forms. Applied Sciences. 2021;11(18):8656
  28. Fîciu ID, Stanciu C-L, Elisei-Iliescu C, Anghel C. Tensor-based recursive least-squares adaptive algorithms with low-complexity and high robustness features. Electronics. 2022;11(2):237
  29. Vervliet N, Debals O, Sorber L, De Lathauwer L. Breaking the curse of dimensionality using decompositions of incomplete tensors: Tensor-based scientific computing in big data analysis. IEEE Signal Processing Magazine. 2014;31:71-79
  30. Stenger A, Kellermann W. Adaptation of a memoryless preprocessor for nonlinear acoustic echo cancelling. Signal Processing. 2000;80:1747-1760
  31. Huang Y, Skoglund J, Luebs A. Practically efficient nonlinear acoustic echo cancellers using cascaded block RLS and FLMS adaptive filters. In: Proceedings of IEEE ICASSP. New Orleans, LA, USA; 2017. pp. 596-600
  32. Cichocki A, Zdunek R, Phan AH, Amari S. Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multiway Data Analysis and Blind Source Separation. Chichester, UK: Wiley; 2009
  33. Domanov I, De Lathauwer L. From computation to comparison of tensor decompositions. SIAM Journal on Matrix Analysis and Applications. 2021;42(2):449-474
  34. Gesbert D, Duhamel P. Robust blind joint data/channel estimation based on bilinear optimization. In: Proceedings of the 8th IEEE Workshop on Statistical Signal and Array Processing. Corfu, Greece; 1996. pp. 168-171
  35. Benesty J, Cohen I, Chen J. Array Processing–Kronecker Product Beamforming. Cham, Switzerland: Springer; 2019
  36. Ayvaz M, De Lathauwer L. Tensor-based multivariate polynomial optimization with application in blind identification. In: Proceedings of EUSIPCO. Dublin, Ireland; 2021
  37. Vasilescu MAO, Kim E. Compositional hierarchical tensor factorization: Representing hierarchical intrinsic and extrinsic causal factors. In: Proceedings of ACM SIGKDD. Anchorage, AK, USA; 2019
  38. Vasilescu MAO, Kim E, Zeng XS. CausalX: Causal eXplanations and block multilinear factor analysis. In: Proceedings of IEEE ICPR. Milan, Italy; 2021
  39. Padhy S, Goovaerts G, Boussé M, De Lathauwer L, Van Huffel S. The Power of Tensor-Based Approaches in Cardiac Applications. Singapore: Springer; 2020
  40. Haykin S. Adaptive Filter Theory. 4th ed. Upper Saddle River, NJ, USA: Prentice-Hall; 2002
  41. Bertsekas DP. Nonlinear Programming. 2nd ed. Belmont, MA, USA: Athena Scientific; 1999
  42. Digital Network Echo Cancellers. ITU-T Recommendation G.168. Geneva, Switzerland: ITU; 2002
