Open access peer-reviewed chapter - ONLINE FIRST

Space-Time-Parameter PCA for Data-Driven Modeling with Application to Bioengineering

Written By

Florian De Vuyst, Claire Dupont and Anne-Virginie Salsac

Reviewed: February 16th, 2022 Published: April 20th, 2022

DOI: 10.5772/intechopen.103756


From the Edited Volume

Principal Component Analysis [Working Title]

Prof. Fausto Pedro García Márquez


Abstract

Principal component analysis is a recognized powerful and practical method in statistics and data science. It can also be used in modeling as a dimensionality reduction tool to achieve low-order models of complex multiphysics or engineering systems. Model-order reduction (MOR) methodologies are today an important topic for engineering design and analysis. Design space exploration or accelerated numerical optimization, for example, are made easier by the use of reduced-order models. In this chapter, we discuss the use of higher-order singular value decompositions (HOSVD) applied to spatiotemporal problems that are parameterized by a set of design variables or physical parameters. Here we consider a data-driven reduced-order modeling approach based on a design of computer experiments: from high-dimensional computational results returned by high-fidelity solvers (e.g. finite element ones), the HOSVD allows us to determine spatial, temporal and parameter principal components. The dynamics of the system can then be retrieved by identifying the low-order discrete dynamical system. As an application, we will consider the dynamics of deformable capsules flowing in microchannels. The study of such fluid-structure interaction problems is motivated by the use of microcapsules as innovative drug delivery carriers through blood vessels.

Keywords

  • HOSVD
  • spatio-temporal parametrized problem
  • approximation
  • reduced-order model
  • bioengineering
  • fluid-structure interaction
  • deformable capsules
  • dynamical system
  • machine learning
  • artificial intelligence

1. Introduction

Manufactured deformable microcapsules are intended to be used as drug carriers within the human vascular network to deliver drugs at specific targets (tumors, etc.). In order to design reliable capsules, one can make use of numerical simulation and high-performance computing. The transportation of such capsules into microchannels is a three-dimensional fluid-structure interaction (FSI) problem involving a fluid flow within a confined environment and the deformation of hyperelastic membranes [1, 2]. The behavior of the capsule depends on dimensionless parameters such as the capillary number, denoted by Ca, and the aspect ratio a/ℓ between the capsule radius a and the channel characteristic length ℓ. The parameter vector μ = (Ca, a/ℓ), for which a capsule steady shape exists, lies in a bounded domain D ⊂ R². We look for the time evolution of the capsule shape under a Lagrangian description. From an initial shape X, we are interested in determining the capsule position x(X, μ, t) in the microchannel domain Ω ⊂ R³ at time t for a parameter vector μ ∈ D. By denoting u the displacement vector from the initial position, we have

x(X, μ, t) = X + u(X, μ, t)   (1)

with u(X, μ, 0) = 0. The governing equations of the FSI problem include both kinematics and motion equations. At the membrane, we have equilibrium of the mechanical forces (mechanical equilibrium of the membrane and viscous stresses from the fluid). By denoting v the velocity vector field at the membrane, the system of differential-algebraic equations in abstract form reads

u̇ = v,   (2)
v = F_μ(u, X).   (3)

Practically, there are different candidate computational approaches to discretize this system of equations. First, the initial capsule membrane has to be discretized by using a finite element triangular mesh made of nodes (X_i)_{i=1,...,I}. Regarding time discretization, in [2], an explicit time scheme is used and the velocity field is determined by the use of a boundary integral method (BIM) coupled with a finite element method (FEM). A numerical stability condition imposes the use of small time steps. For a given parameter μ, the time evolution of the capsule dynamics on the time interval of interest generally requires hours of CPU time. To better understand the membrane behavior with respect to μ, a design of computer experiments (DoCE) is done: from a set of J parameter samples μ_j ∈ D, j = 1, ..., J, a spatio-temporal solution is computed for each μ_j, leading to a database of shape solutions in the form of a third-order tensor

T_x = ( x(X_i, μ_j, t_n) )_{i=1,...,I; j=1,...,J; n=0,...,N} ∈ R^{3I×J×(N+1)}   (4)

using a triangular finite element discretization of the membrane, a time discretization t_n = nΔt (assuming that the time step is constant) and the parameter samples μ_j. Typically, for practical computations, I = O(1000), N = O(10000) and J = O(100), so that the tensor database becomes rather huge (about O(10) gigabytes). Of course, one can store the solutions only at coarser time steps and reduce N to O(100), but the database remains rather big even in this case.
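As an illustration, the assembly of such a tensor database from a DoCE can be sketched in Python. Here `run_fom` is a hypothetical placeholder for the high-fidelity BIM/FEM solver (it returns random data), and the sizes are toy values, not the production ones:

```python
import numpy as np

def run_fom(mu, I, N):
    """Placeholder for the high-fidelity FSI solver: returns the stacked
    membrane node positions, shape (3I, N+1), for a parameter vector mu."""
    rng = np.random.default_rng(abs(hash(mu)) % 2**32)
    return rng.standard_normal((3 * I, N + 1))

def build_tensor(samples, I, N):
    """Assemble the third-order tensor T_x in R^{3I x J x (N+1)}."""
    J = len(samples)
    T = np.empty((3 * I, J, N + 1))
    for j, mu in enumerate(samples):
        T[:, j, :] = run_fom(mu, I, N)   # one spatio-temporal solution per mu_j
    return T

samples = [(0.1, 0.9), (0.05, 0.85)]     # (Ca, a/l) samples in D
T = build_tensor(samples, I=100, N=50)
print(T.shape)                           # (300, 2, 51)
```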

From this data tensor, one can imagine different use cases leading to different tools:

  1. Data exploration and knowledge extraction;

  2. Real-time rendering of capsule dynamics for better understanding;

  3. Data-driven modeling of capsule dynamics in the whole parameter domain.

The first and second items can be achieved by means of data dimensionality reduction. This leads to lower storage of the data in memory as well as a lower numerical complexity of processing and manipulation. In this chapter, we will consider the Higher-Order Singular Value Decomposition (HOSVD), which is a generalization of Principal Component Analysis (PCA) to tensors. The third item involves a model-order reduction (MOR) methodology. From computed data, we would like to derive a lightweight dynamical system that reproduces the data and, even more, that is able to accurately estimate solutions for different parameter values μ. Data-driven model-order reduction first makes use of data dimensionality reduction by a low-order tensor decomposition of the solutions according to suitable spatial, temporal and parameter reduced bases (see [3]). In our application, this will give the truncated decomposition (where the solutions are here seen as functions):

x̃(X, μ, t) = X + ∑_{k=1}^{K} ∑_{ℓ=1}^{L} ∑_{m=1}^{M} a_{kℓm} φ_k(X) ψ_ℓ(μ) ω_m(t)   (5)

for some expansion coefficients a_{kℓm}, spatial functions φ_k, parameter functions ψ_ℓ and temporal functions ω_m, with ω_m(0) = 0 ensuring x(X, μ, 0) = X. The truncation ranks K, L and M are expected to be small (K ≪ I, L ≪ J and M ≪ N). Discretized shape solutions returned by the full-order computational model (FOM) are stored into a third-order data tensor T_x. Let T̃_x denote the truncated tensor expansion related to (5). It reads:

T̃_x = A_0 ⊗ e_N + ∑_{k=1}^{K} ∑_{ℓ=1}^{L} ∑_{m=1}^{M} a_{kℓm} Φ_k ⊗ Ψ_ℓ ⊗ w_m ≈ T_x   (6)

where A_0 ∈ R^{3I×J} with (A_0)_{ij} = X_i, e_N = (1, ..., 1)^T ∈ R^{N+1}, Φ_k ∈ R^{3I}, Ψ_ℓ ∈ R^J and w_m ∈ R^{N+1}. In this chapter, we will use the functional and tensor representations interchangeably.

From this reduced form, we will then apply a kernel-based Dynamic Mode Decomposition (k-DMD, see [4]) to derive a dynamical system able to predict the capsule shape evolution over time for any value of the parameter μ. This will be developed in the next sections.

2. Higher-order singular value decomposition and truncation

2.1 Compact HOSVD

The higher-order singular value decomposition (HOSVD) of a tensor is a specific orthogonal Tucker decomposition. The classical computation of a HOSVD was introduced by L. R. Tucker [5] and further developed by L. De Lathauwer et al. [6]. Robust computations and improvements have since been proposed [6, 7, 8]. For a tensor T of order d, the idea is to compute the singular value decomposition of each factor-k flattening T^(k) of the tensor T, i.e. a "matricization" of the tensor where the rows of the matrix are related to the k-th dimension.

In our case, we consider a third-order tensor and successive spatial, parameter and temporal flattenings to find the singular vectors. The spatial flattening T_X ∈ R^{3I×J(N+1)} of the tensor T_x − A_0 ⊗ e_N is

(T_X)_{ip} = x(X_i, μ_j, t_n) − X_i,   p = j + nJ   (7)

(meaning that the μ and t dimensions are stacked in the columns of the matrix). The SVD of T_X provides r_X nonzero singular values with r_X corresponding orthonormal singular vectors Φ_k, k = 1, ..., r_X. Similarly, the parameter flattening leads to a rank r_μ with r_μ modes Ψ_ℓ, ℓ = 1, ..., r_μ, and the time flattening to a rank r_t with r_t modes w_m, m = 1, ..., r_t. The tuple (r_X, r_μ, r_t) is the multilinear rank of T_x. The tensor T_x can then be written as

T_x = A_0 ⊗ e_N + ∑_{k=1}^{r_X} ∑_{ℓ=1}^{r_μ} ∑_{m=1}^{r_t} a_{kℓm} Φ_k ⊗ Ψ_ℓ ⊗ w_m.   (8)
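A minimal numpy sketch of this compact HOSVD for a third-order tensor: factor matrices come from the SVD of each mode flattening, and the core tensor from mode products with their transposes. The dimensions are toy values, not the capsule data:

```python
import numpy as np

def unfold(T, mode):
    """Mode-k matricization: rows indexed by dimension `mode`."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T):
    """Compact HOSVD of a 3rd-order tensor: factor matrices + core tensor."""
    U = [np.linalg.svd(unfold(T, k), full_matrices=False)[0] for k in range(3)]
    core = T
    for k in range(3):
        core = mode_product(core, U[k].T, k)
    return core, U

rng = np.random.default_rng(0)
T = rng.standard_normal((6, 4, 5))
core, U = hosvd(T)

# reconstruct: T = core x1 U[0] x2 U[1] x3 U[2]
R = core
for k in range(3):
    R = mode_product(R, U[k], k)
print(np.allclose(R, T))   # True (full multilinear rank here)
```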

2.2 Approximation

Among its applications, the HOSVD can be used to define a low-order approximation of tensors. The so-called truncated HOSVD [9, 10, 11] consists in truncating the expansion (8) at a given multilinear rank (K, L, M), K ≤ r_X, L ≤ r_μ, M ≤ r_t, leading to (6). Let mlrank(T) denote the multilinear rank of the tensor T. It has been shown that the approximation (6) returns a quasi-optimal solution of the nonlinear non-convex least-squares problem

min_{T̃} ½ ‖T_x − T̃‖_F²   (9)

(here ‖.‖_F is the Frobenius norm for tensors) subject to mlrank(T̃) = (K, L, M). The truncation ranks can be determined a priori according to the classical relative information content (RIC) of SVD theory.

Eq. (6) can already be used as a compressed representation of the data, allowing for a lower storage complexity and a simpler manipulation, with low information loss if the RIC is high.
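For instance, a truncation rank can be selected from the singular values of a flattening via the RIC criterion; a small sketch, where the tolerance value is purely illustrative:

```python
import numpy as np

def ric(singular_values):
    """Relative information content: cumulative energy fraction."""
    e = np.cumsum(singular_values**2)
    return e / e[-1]

def truncation_rank(singular_values, tol=1e-4):
    """Smallest rank K such that 1 - RIC(K) <= tol."""
    r = ric(singular_values)
    return int(np.searchsorted(r, 1.0 - tol) + 1)

s = np.array([10.0, 1.0, 0.1, 0.01, 0.001])   # toy singular values
print(truncation_rank(s, tol=1e-4))           # 2
```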

3. Reduced-order modeling of capsule dynamics

Eq. (6) provides a summarization of the family of spatio-temporal capsule shape solutions on the time interval [t_0, t_N]. Unfortunately, this algebraic model has no predictive capability for times t > t_N. To derive a predictive time-dependent model from the data tensor T_x, one has to derive a differential system that approximates the FSI system of Eqs. (2) and (3). The HOSVD reduction can thus be leveraged in the context of model-order reduction.

Consider a parameter vector of interest μ ∈ D. The approximate capsule position (5) can be rewritten as

x̃(X, μ, t) = X + ∑_{k=1}^{K} a_{μ,k}(t) φ_k(X)   (10)

with

a_{μ,k}(t) = ∑_{ℓ=1}^{L} ∑_{m=1}^{M} a_{kℓm} ψ_ℓ(μ) ω_m(t).   (11)

Let a_μ(t) be the vector-valued function

a_μ(t) = (a_{μ,1}(t), ..., a_{μ,K}(t))^T ∈ R^K.   (12)

We would like to derive a differential system for the unknowns a_μ(t). Since x(X, μ, 0) = X, we have the natural initial condition a_μ(0) = 0. Practically, from the expansion (8), we can compute the coefficients a_{μ,k}(t) at discrete times t_n, and thus have access to the list of coefficient vectors

a_μ^n = a_μ(t_n),   n = 0, ..., N.   (13)

A dynamical reduced-order model consists in determining (or approximating) a Lipschitz continuous mapping F_μ : R^K → R^K such that

a_μ^0 = 0,   a_μ^{n+1} ≈ F_μ(a_μ^n),   n ∈ {0, ..., N−1}   (14)

from the data (a_μ^n)_{n=0}^{N}. We get a low-order discrete dynamical system. Note that, here, we do not search for the parameters of a model, but for the equations of the model themselves. Since the problem of finding such a mapping is infinite-dimensional, one has to restrict the search to a mapping in a (suitable) finite-dimensional functional space.

4. Koopman theory and dynamic mode decomposition

4.1 Koopman operator for discrete dynamical systems

Koopman theory is a powerful mathematical framework that re-expresses a general nonlinear discrete dynamical system through the knowledge of a linear (infinite-dimensional) operator, the so-called Koopman operator or composition operator. Today it is commonly used in machine learning and data-driven model-order reduction methodologies [4, 12]. Let us consider a discrete dynamical system of the form

a^{n+1} = F(a^n),   n ∈ N   (15)

for a Lipschitz continuous mapping F from R^d to R^d. Let g be a function of a Banach space X, g : R^d → R. Then g(a^{n+1}) = g(F(a^n)). The Koopman operator related to F is defined as

Kg = g ∘ F,   ∀g ∈ X.   (16)

Then we have

g(a^{n+1}) = (Kg)(a^n).   (17)

The knowledge of K includes the knowledge of F. Indeed, by taking the particular observables g_i(a) = a · e_i, i = 1, ..., d, where e_i is the i-th vector of the canonical basis of R^d, we retrieve a_i^{n+1} = F_i(a^n), i.e. the i-th equation of (15). Of course, the linear Koopman operator acts on an infinite-dimensional functional space, so it is impossible to determine it exactly. However, one can search for an approximate Koopman operator K̃ that acts on a finite-dimensional approximation space X̃ ⊂ X.

The idea behind (nonlinear) observables is to have a sufficiently large set of independent nonlinear functions of the state vector, and measurements of them, in order to identify the mapping F. A natural question of interest is which observables to choose. There is no absolute answer to this question, and the choice may depend on the underlying physics. Without any a priori knowledge of the system of equations, one can use the basis functions of a universal approximator, such as polynomials, Fourier modes or kernel-based functions.

4.2 Dynamic mode decomposition

The simplest choice of observables is the set of linear functions g_i(a) = a · e_i, i = 1, ..., d. It leads to the search for a finite-dimensional approximation A of K from the full state vector data. The matrix A can be sought as the solution of the least-squares minimization problem

min_{A ∈ M_d(R)} ½ ‖Y − AX‖_F²   (18)

where X = [a^0, a^1, ..., a^{N−1}] and Y = [a^1, a^2, ..., a^N]. Assuming N ≥ d, the solution is given by A = Y X^T (X X^T)^{−1}. The least-squares problem (18) can possibly be regularized for better conditioning by a Tikhonov regularization term [13, 14]. This practical approach of Koopman operator approximation is referred to as dynamic mode decomposition (DMD) [4, 12]. It provides a linear dynamical model

a^{n+1} = A a^n   (19)

starting from a given initial condition a^0. The solution of (19), which is a^n = A^n a^0, is bounded for any initial condition a^0 as soon as ρ(A) ≤ 1.
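A compact sketch of this plain DMD identification, on a synthetic trajectory generated by a known linear map so that the recovery can be checked (the toy matrix `A_true` is an illustrative assumption):

```python
import numpy as np

def dmd_fit(snapshots):
    """Linear DMD: least-squares A with a^{n+1} ~ A a^n.

    snapshots: array of shape (d, N+1), columns a^0 ... a^N.
    """
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    # A = Y X^T (X X^T)^{-1}, computed stably via the pseudo-inverse
    return Y @ np.linalg.pinv(X)

# Demo: data generated by a known linear map; DMD recovers it
rng = np.random.default_rng(1)
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
a = np.empty((2, 21))
a[:, 0] = rng.standard_normal(2)
for n in range(20):
    a[:, n + 1] = A_true @ a[:, n]

A = dmd_fit(a)
print(np.allclose(A, A_true))                          # True
print(np.max(np.abs(np.linalg.eigvals(A))))            # spectral radius check
```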

5. Kernel-based identification of dynamical systems

In the case of a strongly nonlinear dynamical system, the linear model (19) may not be accurate enough. We then have to include suitable nonlinear observables in the data and the model. In this section, observables are selected from kernel-based approximations [15]. We then use the kernel-DMD variant (k-DMD, [4]) to identify F.

A real-valued function k on R^d × R^d is called a positive definite kernel function if it is symmetric and if the following property holds:

∀m ∈ N, ∀(z_i)_{i=1}^{m} ∈ (R^d)^m, ∀(α_i)_{i=1}^{m} ∈ R^m:   ∑_{i=1}^{m} ∑_{j=1}^{m} α_i α_j k(z_i, z_j) ≥ 0.   (20)

In other words, the square matrix K = (k(z_i, z_j))_{i,j=1,...,m} is positive semi-definite. A standard kernel function is, for example, the Gaussian one

k(z, z′) = exp( −‖z − z′‖² / (2σ²) )   (21)

for a given parameter σ > 0.

5.1 Kernel-based interpolation

Kernel functions can be used for interpolation in spaces of arbitrary dimension. Let g : R^d → R be a continuous function and assume that we know the values g(z_i) at particular points z_i, i = 1, ..., m. Then one can define an interpolant Ig of g as

(Ig)(z) = ∑_{j=1}^{m} α_j k(z, z_j)   (22)

where the coefficient vector α = (α_1, ..., α_m)^T is determined such that the interpolation property

(Ig)(z_i) = g(z_i),   ∀i ∈ {1, ..., m}   (23)

holds. The interpolation conditions clearly lead to the solution of the symmetric linear square system of size m

Kα = b   (24)

where b = (g(z_1), ..., g(z_m))^T. Assuming that K is positive definite, Eq. (24) has a unique solution. Let

k_i(z) = k(z, z_i),   i = 1, ..., m   (25)

and let V = span{k_1, ..., k_m}. For any function g̃ ∈ V, it is easy to check that g̃ = Ig̃. One can derive the interpolation error

‖g − Ig‖_{L∞} = ‖(g − g̃) + (g̃ − Ig)‖_{L∞} = ‖(g − g̃) + (Ig̃ − Ig)‖_{L∞} ≤ ‖g − g̃‖_{L∞} + ‖I(g − g̃)‖_{L∞}   (26)

so that

‖g − Ig‖_{L∞} ≤ (1 + ‖I‖) inf_{g̃ ∈ V} ‖g − g̃‖_{L∞}.   (27)

The interpolation error is controlled by the best approximation error multiplied by a stability constant depending on the norm of the interpolation operator.
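A small sketch of Gaussian-kernel interpolation: assemble K, solve Kα = b as in (24), and check the interpolation property (23) on a few 1-D sample points (the test function sin and all sizes are illustrative):

```python
import numpy as np

def gaussian_kernel(Z1, Z2, sigma=1.0):
    """Gaussian kernel matrix between two point sets, shapes (m1, d), (m2, d)."""
    d2 = np.sum((Z1[:, None, :] - Z2[None, :, :])**2, axis=-1)
    return np.exp(-0.5 * d2 / sigma**2)

def kernel_interpolator(centers, values, sigma=1.0):
    """Fit alpha with K alpha = b (Eq. 24); return the interpolant I g."""
    K = gaussian_kernel(centers, centers, sigma)
    alpha = np.linalg.solve(K, values)
    return lambda Z: gaussian_kernel(np.atleast_2d(Z), centers, sigma) @ alpha

# interpolate g(z) = sin(z) at 7 one-dimensional points
zi = np.linspace(0.0, 3.0, 7)[:, None]
g = np.sin(zi[:, 0])
Ig = kernel_interpolator(zi, g, sigma=0.5)
print(np.allclose(Ig(zi), g))   # interpolation property holds at the nodes
```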

5.2 Use of kernel features and k-DMD

Let us go back to the parameterized dynamical system of interest (14) and consider a point cloud (a^j)_{j=1}^{m} of sample states in the admissible reduced state space X ⊂ R^K. The functions

k_j(a) = k(a, a^j)   (28)

can be seen as features, and thus used as suitable nonlinear observables to approximate the Koopman operator. From any known full state vector a_μ^n at time t_n, we build the vector of observables

κ(a_μ^n) = (k_1(a_μ^n), k_2(a_μ^n), ..., k_m(a_μ^n))^T.   (29)

By definition of the Koopman operator, we have k_i(a_μ^{n+1}) = k_i(F(a_μ^n)) = (Kk_i)(a_μ^n). We then look for a finite-dimensional approximation A_μ of the Koopman operator K in the sense

κ(a_μ^{n+1}) ≈ A_μ κ(a_μ^n),   n ∈ {0, ..., N−1}.   (30)

The matrix A_μ is sought as the minimizer of the least-squares problem

min_{A ∈ M_m(R)} ½ ‖Y_μ − A X_μ‖_F²   (31)

where the input and output data matrices are now X_μ = [κ(a_μ^0), κ(a_μ^1), ..., κ(a_μ^{N−1})] and Y_μ = [κ(a_μ^1), κ(a_μ^2), ..., κ(a_μ^N)]. We get the dynamical system κ(a_μ^{n+1}) = A_μ κ(a_μ^n) with a specified initial condition κ(a_μ^0).

Let us emphasize that the computational variables are now the κ(a_μ^n). But we still need the full state variables a^n to determine the displacements or the capsule shapes. The full state can be retrieved, for example, by interpolation: taking g_i(a) = a · e_i, we get

a_{μ,i}^{n+1} ≈ (Ig_i)(a_μ^{n+1}) = ∑_{j=1}^{m} α_j^i k_j(a_μ^{n+1})   (32)

with interpolation coefficient vectors (α^j)_{j=1}^{m} such that

a^i = ∑_{j=1}^{m} α^j k_j(a^i),   ∀i ∈ {1, ..., m}   (33)

(a linear system of dimension m × K). The coefficient vectors α^j can be computed once and for all.

5.3 Including the full state vector and the constants into the features

The k-DMD approach can be improved by including the full state vector a_μ itself in the input feature vector. Moreover, if the kernel functions are not able to reproduce constant functions exactly, it is justified for accuracy reasons to add a constant to the features. Considering the augmented vector of observables

η(a_μ) = (a_μ, κ(a_μ), 1)^T ∈ R^{K+m+1},   (34)

we look for a dynamical system of the form

a_μ^{n+1} = A_μ η(a_μ^n) =: F_μ(a_μ^n)   (35)

with a constant rectangular matrix A_μ of size K × (K+m+1) to identify. The first advantage is that the output vector is the full state itself. The second is the number of entries of A_μ, which is less than m² as soon as K(K+m+1) ≤ m². In this case, there are fewer coefficients to identify. The input and output matrices are now

X_μ = [η(a_μ^0), η(a_μ^1), ..., η(a_μ^{N−1})],   Y_μ = [a_μ^1, a_μ^2, ..., a_μ^N].   (36)

Assuming N ≥ K + m + 1, the rectangular matrix A_μ solution of the least-squares problem

min_{A ∈ M_{K,K+m+1}(R)} ½ ‖Y_μ − A X_μ‖_F²   (37)

is computed as A_μ = Y_μ X_μ^T (X_μ X_μ^T)^{−1}.
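This augmented-feature k-DMD identification can be sketched as follows. The reduced trajectory here is synthetic toy data (not a capsule solution), and the centers are simply drawn from the trajectory itself:

```python
import numpy as np

def features(a, centers, sigma):
    """eta(a) = (a, kappa(a), 1): state, Gaussian kernel features, constant."""
    d2 = np.sum((centers - a)**2, axis=1)
    kappa = np.exp(-0.5 * d2 / sigma**2)
    return np.concatenate([a, kappa, [1.0]])

def kdmd_fit(traj, centers, sigma):
    """Rectangular A_mu with a^{n+1} ~ A_mu eta(a^n), by least squares."""
    X = np.column_stack([features(a, centers, sigma) for a in traj[:-1]])
    Y = np.column_stack(traj[1:])
    return Y @ np.linalg.pinv(X)      # A_mu = Y X^T (X X^T)^{-1}

rng = np.random.default_rng(2)
K, m = 3, 4
traj = [rng.standard_normal(K) * 0.9**n for n in range(30)]   # toy reduced coordinates
centers = np.array(traj)[rng.choice(30, m, replace=False)]
A_mu = kdmd_fit(traj, centers, sigma=1.0)
print(A_mu.shape)                     # (3, 8) = (K, K + m + 1)
```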

5.4 Summary and whole algorithm

We give a summary of the model-order reduction algorithm:

  1. Input data: the third-order tensor T_x (4) made of capsule shape solutions, of size 3I × J × (N+1).

  2. HOSVD + truncated approximation: compute (8) and get the truncated approximation with truncated multilinear ranks (K, L, M):

T̃_x = A_0 ⊗ e_N + ∑_{k=1}^{K} ∑_{ℓ=1}^{L} ∑_{m=1}^{M} a_{kℓm} Φ_k ⊗ Ψ_ℓ ⊗ w_m   (38)

  3. Online stage: choose a parameter vector μ. Compute the data coefficients a_μ^n = a_μ(t_n) (Eq. (13)) from (11). Choose a kernel function k(., .), the number m and the centers a^j, j = 1, ..., m. Assemble the observables η(a_μ^n) and the matrices X_μ and Y_μ (Eq. (36)). Compute the k-DMD matrix A_μ = Y_μ X_μ^T (X_μ X_μ^T)^{−1}. Get the reduced-order dynamical system (35) with a_μ^0 = 0 as initial value.
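Once A_μ is identified, the online stage reduces to a cheap recurrence in the reduced space. A sketch with a toy contraction matrix (the zero centers, the nonzero starting state used to exercise the loop, and all sizes are illustrative assumptions; in the capsule application a^0 = 0):

```python
import numpy as np

def rollout(A_mu, centers, sigma, a0, n_steps):
    """Online stage: iterate a^{n+1} = A_mu eta(a^n) from a^0."""
    out = [a0]
    for _ in range(n_steps):
        a = out[-1]
        d2 = np.sum((centers - a)**2, axis=1)
        eta = np.concatenate([a, np.exp(-0.5 * d2 / sigma**2), [1.0]])
        out.append(A_mu @ eta)
    return np.array(out)              # shape (n_steps + 1, K)

K, m = 3, 4
A_mu = np.zeros((K, K + m + 1))
A_mu[:, :K] = 0.5 * np.eye(K)         # toy contraction for illustration
traj = rollout(A_mu, np.zeros((m, K)), 1.0, np.ones(K), 10)
print(traj.shape)                     # (11, 3)
```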

6. Numerical results for capsule dynamics

The algorithm is applied to the problem of a deformable capsule flowing into a square-based microchannel (typical of microfluidic channels created by soft lithography) with a mean inflow velocity V [1, 2]. Related works and variant ROM approaches on this topic can be found in [14, 16]. The capsule dynamics is simulated by the full-order fluid–structure interaction solver of [1]. The fluid solver is based on the solution of the Stokes equations, and a nonlinear neo-Hookean law is used for the membrane. The initial capsule is spherical, corresponding to the shape at rest. The sphere is discretized with a finite element mesh made of I = 2562 vertices.

6.1 Study for a particular parameter couple (Ca, a/ℓ)

The objective of this subsection being to illustrate the methods on an example and to show how to apply them, we only consider the snapshot FOM solutions for (Ca, a/ℓ) = (0.1, 0.9) for the sake of simplicity and brevity. Figure 1 shows both the membrane shapes and positions in the channel at different instants.

Figure 1.

Example of microcapsule dynamics within a square-based channel for (Ca, a/ℓ) = (0.1, 0.9). Shapes and locations of the initially spherical capsule are shown at t = 0 (in transparency), 0.4, 2.8, 5.2, 7.6.

At each time t_n = nΔt, where the time step is equal to Δt = 0.04, a snapshot is saved and stored in the database. Note that all time quantities are non-dimensionalized by the factor ℓ/V. The resulting data matrix is used as the entry matrix for the ROM learning process. Then a truncated SVD is applied to get the spatial POD modes. In Figure 2, the first four eigenmodes Φ_k are plotted (more precisely, each mode is superposed onto the original spherical shape for a better visualization and understanding of its influence). Based on the graph of k ↦ 1 − RIC(k) plotted in log scale in Figure 3, we decide to use a truncation rank K = 10, returning a relative information content of about 1 − 3.5 × 10⁻⁵.

Figure 2.

SVD: first four spatial principal components computed by the HOSVD. Each mode has been added to the initial spherical shape and amplified by a factor of 2 for better visualization. Higher-order modes show oscillations at the rear of the capsule.

Figure 3.

SVD: plot of k ↦ 1 − RIC(k), where RIC(k) represents the relative information content at truncation rank k.

Then a reduced-order dynamical system for the capsule time evolution is sought. In this example, we compare two models: the first one is the affine approximation (denoted ROM-A)

a_μ^{n+1} = A_μ a_μ^n + b_μ   (39)

with the matrix A_μ and the vector b_μ to identify. It is equivalent to consider the vector of features η(a) = (a, 1)^T. The second, nonlinear model is built from the observable vector η(a) = (a, κ(a), 1)^T with the recurrent time scheme

a_μ^{n+1} = A_μ η(a_μ^n)   (40)

(ROM-B). For ROM-B, the Gaussian kernel function (21) is used. The standard deviation parameter σ in (21) is chosen as σ = max_n ‖a_μ^n‖₂ = ‖a_μ^N‖₂. In both cases, ROM-A and ROM-B, the determination of the matrix A_μ by minimization of the least-squares problem (37) leads to a very small residual. In Figure 4, the logarithm of the relative time residual

Figure 4.

Matrix identification. Log of the normalized residual n ↦ log₁₀(‖a_μ^{n+1} − A_μ η(a_μ^n)‖₂ / ‖a_μ^{n+1}‖₂) for both ROM-A and ROM-B.

n ↦ log₁₀( ‖a_μ^{n+1} − A_μ η(a_μ^n)‖₂ / ‖a_μ^{n+1}‖₂ )   (41)

is plotted for each ROM model. One can observe values between 10⁻¹⁴ and 10⁻⁸. The residual for ROM-B appears to be slightly smaller than that of ROM-A, thanks to the added nonlinear terms. A surprising result is that the affine ROM-A model returns rather accurate results, whereas the fluid–structure interaction problem is intrinsically nonlinear. In order to study the stability of the ROM-A model, in Figure 5 we plot the complex eigenvalues of the square matrix A_μ. We observe that all the eigenvalues have a modulus less than or equal to one. One of the eigenvalues is equal to 1, up to double-precision round-off errors, meaning that there is a physical invariant in the system. Indeed, the capsule volume is known to remain constant over time because of the incompressibility of the fluid. For ROM-A, since a_μ^{n+1} = A_μ a_μ^n + b_μ and a_μ^0 = 0, we have

Figure 5.

Matrix identification. Eigenvalues of the computed matrix A_μ plotted in the complex plane for the ROM-A model. One of the eigenvalues is 1 up to round-off error.

a_μ^n = A_μ^n a_μ^0 + ∑_{k=0}^{n−1} A_μ^k b_μ = ∑_{k=0}^{n−1} A_μ^k b_μ.   (42)

The matrix A_μ is observed to be diagonalizable in C. There is an invertible matrix P_μ such that A_μ = P_μ Λ_μ P_μ^{−1}, where Λ_μ is the diagonal matrix of the eigenvalues. Since it is observed that ρ(Λ_μ) = 1, we have

‖a_μ^n‖ ≤ Cond(P_μ) n ‖b_μ‖,   ∀n ∈ N,   (43)

showing that the coefficients in the PCA space grow at most linearly in time.

In Figure 6, we compare the computed capsule shapes and positions in the channel with the FOM capsules obtained at different times: t = 0, 0.4, 2.8, 5.2 and 7.6. One can observe that the ROM-B model returns very satisfactory results, with shape solutions that fully overlap the FOM ones 'at the eye norm'. Finally, we plot in Figure 7 the time evolution of the errors in the 3D capsule shape of the ROM solutions as compared to the FOM solutions. The difference between the shapes is quantified by ε_Shape(t), the ratio between the modified Hausdorff distance (MHD) computed between the FOM shape S_FOM(t) and the ROM shape S_ROM(t), and the channel characteristic length ℓ:

Figure 6.

Sequence of cross-section capsule shapes and positions in the microchannel from the initial spherical shape shown in light green at the beginning of the channel: comparison of the FOM solutions (gray dots) and of the solutions computed from the dynamical k-DMD reduced-order model (dark green solid line) at the same instants as in Figure 1: t = 0, 0.4, 2.8, 5.2, 7.6.

Figure 7.

(a) Comparison of the time evolution of the shape error between the affine DMD model (ROM-A with K = 10) and the kernel-based one (ROM-B with K = 10, M = 5). One can observe a maximum error of less than 0.1% in both cases. (b) Sensitivity analysis of the parameter σ (‖a^N‖₂ = 667.14).

ε_Shape(t) = MHD(S_FOM(t), S_ROM(t)) / ℓ.   (44)

The modified Hausdorff distance measures the maximum of the mean distances between the two shapes to compare [17]. The ROM-A and ROM-B models return very accurate solutions, with at most 0.1% error. It is also observed that the ROM-B model is slightly more accurate than the affine approximation.
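On discrete point clouds, the modified Hausdorff distance of Dubuisson and Jain [17] can be computed as below; ε_Shape then follows by dividing by ℓ. A minimal sketch, with a toy grid-shaped cloud as test data:

```python
import numpy as np

def mhd(S1, S2):
    """Modified Hausdorff distance between two point clouds (Dubuisson & Jain):
    max of the two mean nearest-neighbour distances."""
    D = np.linalg.norm(S1[:, None, :] - S2[None, :, :], axis=-1)
    return max(D.min(axis=1).mean(), D.min(axis=0).mean())

# identical clouds -> zero distance; a small rigid shift of h -> distance h
g = np.arange(5.0)
S = np.array([(x, y, 0.0) for x in g for y in g])   # 25-point grid, spacing 1
print(mhd(S, S))                                    # 0.0
print(np.isclose(mhd(S, S + np.array([0.1, 0.0, 0.0])), 0.1))   # True
```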

6.2 HOSVD on the whole data tensor and error measurements on the whole database

We now consider the whole database made of 55 samples in the parameter domain. In Figure 8 we plot the locations of the 55 chosen samples in the (Ca, a/ℓ) plane. The design zone of interest corresponds to capsule shapes that reach a steady state after a certain time.

Figure 8.

Samples of the design of experiment in the parameter space (Ca, a/ℓ). The zone of interest corresponds to capsule shapes that reach a steady state after a certain time.

An SVD is first performed on the μ-flattening of the data tensor T_x. In Figure 9, we plot the first four (normalized) parameter eigenmodes in the parameter domain. These modes give an idea of the dependence of the capsule shapes on the parameters. To complete the analysis, we show in Figure 10 the spectrum of singular values of the μ-flattening matrix. One can observe a rather fast decay of the singular values, especially for the first ten modes.

Figure 9.

HOSVD: the first four parameter eigenmodes in the parameter domain, computed from the μ-flattening of the data cube.

Figure 10.

Spectrum of the singular values of the μ-flattening matrix.

Next, we perform the SVD of the t-flattening matrix of T_x, which provides the temporal eigenmodes. In Figure 11, the first four temporal eigenmodes ω_m, m = 1, ..., 4, are plotted. The first one appears to be close to a linear function, while the others add details, especially in the transient zone of the capsule dynamics. The spectrum of singular values again shows a fast decay, especially for the first six modes.

Figure 11.

HOSVD. Left: first four temporal eigenmodes computed from the SVD of the t-flattening of the data tensor. Right: spectrum of the singular values.

To conclude this section, we have tested the accuracy of both ROM-A and ROM-B on the whole database. For each sample, we have derived a ROM, i.e. a low-order dynamical system formulated in the PCA space. Then we have compared the ROM solution to the FOM solution by calculating ε_Shape between the two capsule shapes. In Figure 12, the heat maps of ε_Shape are plotted for ROM-A and ROM-B. One can observe a uniform accuracy over the whole parameter domain, with errors less than 0.1%, thus showing the efficiency of the approach. Measured computational speedups with the ROM models are between 5000 and 10,000. A computer with two Intel Xeon Gold 6130 CPUs (2.1 GHz) has been used for the numerical tests.

Figure 12.

Heat maps of the modified Hausdorff distance between the FOM solutions and the ROM ones at dimensionless time Vt/ℓ = 10. Left: ROM-A; right: ROM-B. Errors are less than 0.1%.

Figure 13.

Interpretation of the method as a recurrent neural network (RNN) in the PCA space.

7. Discussion

Having shown how to apply the affine DMD and k-DMD models and the very high precision in prediction that they offer, we now would like to provide some further comments and remarks on model order reduction and DMD-type approaches.

7.1 Kolmogorov n-width

The method is efficient if the spectrum of singular values decays rapidly, leading to a small truncation rank K. If the spectrum decays slowly, there are two possible reasons: either the entropy (variety) of the information in the data is high, or the solutions do not live in a linear space but rather on a nonlinear manifold. To fix this, one can perform a preliminary clustering of the data, scanning the parameter dimension, using either standard clustering techniques such as K-means or a multidimensional scaling (MDS) approach. Then, for each cluster, one can again apply a HOSVD and a reduced-order approach suited to the elements of that cluster.

7.2 Selection of the kernel functions and kernel interpolation points

As already mentioned, the choice of the kernel function depends on the application, the behavior of the solutions and/or the underlying physics. Without any a priori information, one can use universal approximation kernels like the Gaussian one. The accuracy of the results also strongly depends on the choice of the kernel interpolation points a^j. The sampling (a^j)_{j=1}^{m} has to fill the admissible space correctly, or at least the state-space trajectory of interest. There are different possible strategies. A first candidate is a clustering approach applied to the state-space data; the points a^j are then the centroids of the clusters. But one can consider more sophisticated approaches, like a greedy iterative approach that controls the interpolation error on the data: at each iteration, an interpolation point a^j is added at the location of worst interpolation error over all the sample solutions.
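Such a greedy selection can be sketched as follows, using the kernel interpolant of the identity map and adding the worst-approximated sample at each iteration (all names, sizes and the starting index are illustrative assumptions):

```python
import numpy as np

def greedy_centers(data, n_centers, sigma):
    """Greedy sketch: repeatedly add the sample where the current
    kernel interpolant of the identity map g(a) = a is worst."""
    idx = [0]                                   # start from the first sample
    for _ in range(n_centers - 1):
        C = data[idx]
        # kernel matrix on current centers, interpolation coefficients
        K = np.exp(-0.5 * np.sum((C[:, None] - C[None, :])**2, -1) / sigma**2)
        alpha = np.linalg.solve(K, C)
        # interpolation error over all samples
        Kd = np.exp(-0.5 * np.sum((data[:, None] - C[None, :])**2, -1) / sigma**2)
        err = np.linalg.norm(data - Kd @ alpha, axis=1)
        idx.append(int(np.argmax(err)))         # worst-approximated sample
    return data[idx]

data = np.random.default_rng(4).standard_normal((40, 2))
centers = greedy_centers(data, 5, sigma=1.0)
print(centers.shape)                            # (5, 2)
```

Note that the error vanishes at already-selected centers (interpolation property), so the argmax naturally picks new points.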

7.3 Interpretation in terms of a recurrent neural network

Let us remark that the approach can be reinterpreted as a (supervised) two-layer recurrent artificial neural network (RNN) (Figure 13) [18]. The first layer generates the features k_i(a_μ); the second layer is a linear matrix–vector multiplication by the matrix A_μ.

8. Conclusions

In this chapter, the higher-order singular value decomposition has proved to be a flexible and valuable tool for the data-driven reduced-order modeling of solutions of space–time-parameter problems, which are today at the heart of many industrial applications. The methodology has been tested on a fluid–structure interaction problem of a deformable microcapsule flowing in a microchannel. The Stokes equations have been used for the fluid, whereas a nonlinear hyperelastic law has been used for the membrane. The shape solutions computed by the full-order model have been stored into a third-order tensor. First, the HOSVD allows us to compute spatial, temporal and parameter principal components and, at the same time, to compress the data. We get a low-order representation of the solutions with a shared spatial reduced basis. The spatial principal components are observed to provide suitable details in the shape solutions. The modes are arranged in decreasing order of importance according to the relative information content criterion. Next, additional ingredients such as kernel approximation and kernel-based dynamic mode decomposition are used to determine a reduced-order dynamical system for any parameter vector in the admissible parameter domain. The resulting low-order dynamical system can be seen as a recurrent neural network acting in the PCA space. The approach allows us to explore the different shape solutions and visualize their evolution in the channel in real time.

Acknowledgments

The authors would like to thank Prof. Pierre Villon (UTC) for fruitful discussions on this topic. The project received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 Research and Innovation Programme (Grant agreement No. ERC-2017-COG - MultiphysMicroCaps).

Conflict of interest

The authors declare no conflict of interest.

Nomenclature

SVD: Singular value decomposition
HOSVD: Higher-order singular value decomposition
PCA: Principal component analysis
MOR: Model order reduction
FOM: Full order model
ROM: Reduced order model
FSI: Fluid–structure interaction
DoCE: Design of computer experiment
POD: Proper orthogonal decomposition
RIC: Relative information content
DMD: Dynamic mode decomposition
k-DMD: Kernel-based dynamic mode decomposition
MHD: Modified Hausdorff distance
RNN: Recurrent neural network

References

  1. Hu X-Q, Salsac A-V, Barthès-Biesel D. Flow of a spherical capsule in a pore with circular or square cross-section. Journal of Fluid Mechanics. 2012;705:176-194. DOI: 10.1017/jfm.2011.462
  2. Walter J, Salsac A-V, Barthès-Biesel D, Le Tallec P. Coupling of finite element and boundary integral methods for a capsule in a Stokes flow. International Journal for Numerical Methods in Engineering. 2010;83:829-850. DOI: 10.1002/nme.2859
  3. Audouze C, De Vuyst F, Nair PB. Reduced-order modeling of parameterized PDEs using time–space-parameter principal component analysis. International Journal for Numerical Methods in Engineering. 2009;80:1025-1057
  4. Kutz JN, Brunton SL, Brunton BW, Proctor JL. Dynamic Mode Decomposition: Data-Driven Modeling of Complex Systems. Philadelphia, PA: SIAM; 2016. p. 234. DOI: 10.1137/1.9781611974508
  5. Tucker LR. Some mathematical notes on three-mode factor analysis. Psychometrika. 1966;31(3):279-311. DOI: 10.1007/bf02289464
  6. De Lathauwer L, De Moor B, Vandewalle J. A multilinear singular value decomposition. SIAM Journal on Matrix Analysis and Applications. 2000;21(4):1253-1278. DOI: 10.1137/s0895479896305696
  7. Hackbusch W. Tensor Spaces and Numerical Tensor Calculus. Springer Series in Computational Mathematics. Vol. 42. Berlin, Heidelberg: Springer; 2012. p. 605. DOI: 10.1007/978-3-642-28027-6
  8. Grasedyck L. Hierarchical singular value decomposition of tensors. SIAM Journal on Matrix Analysis and Applications. 2010;31(4):2029-2054. DOI: 10.1137/090764189
  9. Vannieuwenhoven N, Vandebril R, Meerbergen K. A new truncation strategy for the higher-order singular value decomposition. SIAM Journal on Scientific Computing. 2012;34(2):A1027-A1052. DOI: 10.1137/110836067
  10. Goldfarb D, Zhiwei Q. Robust low-rank tensor recovery: Models and algorithms. SIAM Journal on Matrix Analysis and Applications. 2014;35(1):225-253. DOI: 10.1137/130905010
  11. Eldén L, Savas B. A Newton–Grassmann method for computing the best multilinear rank-(r1, r2, r3) approximation of a tensor. SIAM Journal on Matrix Analysis and Applications. 2009;31(2):248-271. DOI: 10.1137/070688316
  12. Brunton SL, Budišić M, Kaiser E, Kutz JN. Modern Koopman theory for dynamical systems. ArXiv. 2021
  13. Benner P, Himpe C, Mitchell T. On reduced input-output dynamic mode decomposition. Advances in Computational Mathematics. 2018;44:1751-1768
  14. Dupont C, De Vuyst F, Salsac AV. Data-driven kinematics-consistent model order reduction of fluid-structure interaction problems: Application to deformable microcapsules in a Stokes flow. Journal of Fluid Mechanics. Under revision. ArXiv Preprint. DOI: 10.48550/arXiv.2203.13725
  15. Fasshauer G, McCourt M. Kernel-Based Approximation Methods Using MATLAB. Interdisciplinary Mathematical Sciences. 2015;19:536. DOI: 10.1142/9335
  16. Boubehziz T, Quesada-Granja C, Dupont C, Villon P, De Vuyst F, Salsac A-V. A data-driven space-time-parameter reduced-order model with manifold learning for coupled problems: Application to deformable capsules flowing in microchannels. Entropy. 2021;23:1193. DOI: 10.3390/e23091193
  17. Dubuisson M, Jain AK. A modified Hausdorff distance for object matching. Proceedings of 12th International Conference on Pattern Recognition. 1994;1:566-568
  18. Tsoi AC, Back A. Discrete time recurrent neural network architectures: A unifying review. Neurocomputing. 1997;15(3–4):183-223. DOI: 10.1016/S0925-2312(97)00161-6
