
A Reduced-Order Gauss-Newton Method for Nonlinear Problems Based on Compressed Sensing for PDE Applications

Written By

Horacio Florez and Miguel Argáez

Submitted: 16 October 2017 Reviewed: 25 January 2018 Published: 18 July 2018

DOI: 10.5772/intechopen.74439

From the Edited Volume

Nonlinear Systems - Modeling, Estimation, and Stability

Edited by Mahmut Reyhanoglu


Abstract

A global regularized Gauss-Newton (GN) method is proposed to obtain a zero residual for square nonlinear problems on an affine subspace built by wavelets, which allows reducing systems that arise from the discretization of nonlinear elliptic partial differential equations (PDEs) without performing a priori simulations. This chapter introduces a Petrov-Galerkin (PG) GN approach together with the standard assumptions that ensure retaining the q-quadratic rate of convergence. It also proposes a regularization strategy, which maintains the fast pace of convergence, to avoid singularities and high nonlinearities, and it includes a line-search method for achieving global convergence. The numerical results demonstrate the capability of the algorithm to reproduce the essential features of the full-order model (FOM) while decreasing the runtime by a significant margin. This chapter refers to a wavelet-based reduced-order model (ROM) as WROM, while PROM is the proper orthogonal decomposition (POD)-based counterpart. The authors also implemented the combination of WROM and PROM as a hybrid method, referred to herein as HROM. Preliminary results with Bratu's problem show that if the WROM can correctly reproduce the FOM behavior, then HROM can also reproduce that FOM accurately.

Keywords

  • Gauss-Newton method
  • line search
  • Petrov-Galerkin direction
  • data compression
  • wavelets

1. Introduction

The Newton method for solving square nonlinear problems is one of the most popular techniques in engineering applications due to its simplicity and fast convergence rate [1, 2, 3]. However, the quality of the final numerical results is affected by possible singularity of the Jacobian and by high nonlinearity. Another drawback is that the method depends on the initial point. Therefore, it is necessary to implement a globalization strategy to obtain the solution independently of the initial guess. One such approach is the line-search method, which relies on a suitable merit function that ensures the iterates progress toward a solution of the problem.

From the numerical point of view, the technique requires solving square linear systems several times; it is necessary to carry out function evaluations of the order of the problem, as well as to obtain first-order information of the function to compute the Jacobians. In the case of high-dimensional nonlinear problems, the method can exceed the capacity of the computer memory or become slow when solving these linear systems, even for a few iterations. One of the current research activities focuses on solving large-scale square nonlinear problems in real time. The purpose of this chapter is to provide an algorithm for solving large-scale square nonlinear problems, in real time, while retaining the fast convergence rate. One strategy for addressing such challenges is to characterize an affine subspace, of much lower dimension than the original one, that contains the initial solution and thus reproduces the problem's principal features.

One procedure to characterize the affine subspace consists of solving the full-order model (FOM) at several input points, whose solutions are called snapshots, and then using a principal component analysis method such as the singular value decomposition (SVD) to build an orthonormal basis that spans the majority of the snapshots' energy. This oblique subspace, where one seeks a solution, is projected onto the original one. Good numerical results have already been reported in the literature [1, 3, 4, 5, 6, 7], but there are still open questions about this procedure, such as how and when to choose the snapshots and their number [8, 9]. It is important to emphasize that every snapshot requires solving the FOM regardless of its cost. This chapter thus promotes a new strategy that is snapshot free. The approach originated in signal processing and consists of using the notion of wavelets to compress data into a subspace of smaller dimension that retains the majority of the original energy [10, 11]. The discrete wavelet's low-pass matrix defines the affine subspace; the optimization is then performed in this compressed subspace to obtain a cheaper solution that can be decompressed to its original size.


2. Reduced-order models using wavelet transformations

2.1. Wavelets and data compression

The rationale for using wavelet transformations for model reduction originates from the fields of image processing, image compression, and transform coding. In these fields, massive amounts of information, for example, images, are broadcast over limited-bandwidth communication lines and networks such as the internet. One needs to compress these signals for quick transmission and to diminish storage requirements [10]. In summary, data compression consists of, given a signal $x \in \mathbb{R}^n$, finding a lower-dimensional signal $\hat{x} \in \mathbb{R}^r$ with $r \ll n$, and broadcasting or storing that lower-dimensional $\hat{x}$. The most widely used techniques for data compression are based on the wavelet transform. The key to using wavelets is to find a lower-dimensional signal $\hat{x}$ that lies on a known subspace, a property denoted as energy compaction. It is well understood that wavelets tend to accumulate energy in the low-frequency sub-band of the wavelet decomposition [3, 12, 13, 14]. The energy relates to the $L_2$-norm and is defined as $\epsilon = \|x\|^2 = \sum_{i=1}^{n} x_i^2$. To demonstrate the energy compaction, consider Figure 1, in which one has the original image $x \in \mathbb{R}^{512 \times 512}$. Notice that the upper-left quadrant of the wavelet decomposition is a low-dimensional approximation $\hat{x} \in \mathbb{R}^{256 \times 256}$ that is one-fourth the size of the original signal and corresponds to the low-frequency sub-band wavelet coefficients. Using the energy measure above, the energy enclosed in $\hat{x}$ is 95.75% using just one-fourth of the coefficients. Since $\hat{x}$ comprises most of the energy, a simple data compression scheme would set all of the other wavelet sub-bands to zero and store only the low-frequency information as in Figure 1 (see the bottom center). By only employing $\hat{x}$, one can reproduce an approximation of $x \in \mathbb{R}^n$ through the generalized inverse of the sub-band compression [10]. The next section provides details.

Figure 1.

Sample compression and decompression.
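As a minimal sketch of the experiment in Figure 1, the following fragment performs one level of a 2D orthonormal wavelet decomposition and measures how much energy the low-frequency sub-band retains. It assumes the PyWavelets package (pywt) is available; the random array is a stand-in for the actual image, so the measured fraction will be lower than the 95.75% reported above for the natural image.

```python
# A minimal sketch of the energy-compaction experiment of Figure 1, assuming
# PyWavelets (pywt); a random array stands in for the original image.
import numpy as np
import pywt

x = np.random.rand(512, 512)             # stand-in for the original image x
cA, (cH, cV, cD) = pywt.dwt2(x, 'haar')  # one level of an orthonormal wavelet
# cA is the 256x256 low-frequency sub-band, i.e., the compressed signal x_hat.
energy = lambda a: np.sum(a**2)          # energy = squared L2-norm
ratio = energy(cA) / energy(x)           # fraction of the energy kept by x_hat
print(f"low-pass sub-band keeps {100*ratio:.2f}% of the energy "
      f"with {cA.size/x.size:.0%} of the coefficients")
# A crude compression: zero the detail sub-bands and invert the transform.
x_tilde = pywt.idwt2((cA, (None, None, None)), 'haar')
```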

2.2. Reduced-order models

Let $W \in \mathbb{R}^{n \times n}$ describe an orthonormal wavelet transformation. It encompasses a low-pass and a high-pass submatrix and is given by

$$W_{n \times n} = \begin{bmatrix} L_{r \times n} \\ H_{s \times n} \end{bmatrix}, \qquad r + s = n. \tag{1}$$

By orthogonality, $\operatorname{rank}(W) = n$, $\operatorname{rank}(L) = r$, and $\operatorname{rank}(H) = s$. Moreover, since $W W^T = I$, it follows that $L L^T = I_r$ and $H H^T = I_s$. The energy of a signal $x \in \mathbb{R}^n$ is

$$\|x\|^2 = \|Lx\|^2 + \|Hx\|^2. \tag{2}$$

Choosing $L$ so that it contains the majority of the energy, that is, $\|Hx\| \approx 0$, one has $\|x\| \approx \|Lx\|$. This means that the energy of the original data $x \in \mathbb{R}^n$ is approximately equal to the energy of the compressed data $\hat{x} = Lx \in \mathbb{R}^r$; that is, $\|x\| \approx \|\hat{x}\|$. The decompressed data are $L^T \hat{x} = \tilde{x} \in \mathbb{R}^n$. On the other hand, the compressed and decompressed energies are equal, that is, $\|\hat{x}\| = \|\tilde{x}\|$, since $L L^T = I_r$. Therefore, the original, compressed, and decompressed data are related as follows

$$\|x\| \approx \|\hat{x}\| = \|\tilde{x}\|, \tag{3}$$

where $x \in \mathbb{R}^n$ is the original data, $\hat{x} \in \mathbb{R}^r$ is the compressed data, and $\tilde{x} \in \mathbb{R}^n$ is the decompressed data. Thus, once an appropriate low-pass submatrix $L$ is determined, one proposes solving the corresponding optimization problem in the reduced affine subspace determined by $L^T$ and later coming back to the original size through its generalized inverse.
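The following minimal NumPy sketch makes the split in (1)-(3) concrete for an orthonormal Haar wavelet, with illustrative sizes $n = 8$ and $r = 4$; all identifiers are ours, not from the chapter.

```python
# A minimal sketch of the low-pass/high-pass split of Eq. (1) for an
# orthonormal Haar wavelet with n = 8 and r = s = 4 (illustrative sizes).
import numpy as np

n, r = 8, 4
c = np.sqrt(0.5)
L = np.zeros((r, n)); H = np.zeros((n - r, n))
for i in range(r):
    L[i, 2*i:2*i+2] = [c, c]       # low-pass (averaging) rows
    H[i, 2*i:2*i+2] = [c, -c]      # high-pass (differencing) rows
W = np.vstack([L, H])              # orthonormal wavelet matrix

x = np.cumsum(np.random.rand(n))           # a smooth test signal
assert np.allclose(W @ W.T, np.eye(n))     # L L^T = I_r and H H^T = I_s
# Energy split of Eq. (2): ||x||^2 = ||Lx||^2 + ||Hx||^2
assert np.isclose(x @ x, (L @ x) @ (L @ x) + (H @ x) @ (H @ x))
x_hat = L @ x            # compressed data, in R^r
x_tilde = L.T @ x_hat    # decompressed data, back in R^n, as in Eq. (3)
```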


3. Problem formulation

3.1. Statement of the problem

Given a nonlinear function $R: \mathbb{R}^n \to \mathbb{R}^n$, find a solution in an affine subspace determined by an initial displacement point $x_o \in \mathbb{R}^n$ and an orthonormal basis $L^T_{n \times r}$, $r < n$. That is, find $x \in \mathbb{R}^n$ with $R(x) = 0$ and $x \in x_o + \eta(L^T)$, where $\eta(L^T)$ is the subspace generated by the linear combinations of the columns of $L^T$.

3.2. Overdetermined problem

This section formulates this problem by using the overdetermined functions $H$ and $\phi$ from $\mathbb{R}^r$ to $\mathbb{R}^n$:

$$H(p) = R(\phi(p)) = 0, \qquad \phi(p) = x_o + L^T p, \qquad p \in \mathbb{R}^r. \tag{4}$$

The noteworthy fact is that finding a solution $p$ of the overdetermined problem $H(p) = 0$, $p \in \mathbb{R}^r$, is equivalent to finding the solution one is initially seeking; that is, $x = \phi(p)$ with $R(x) = 0$ is a solution on the affine subspace. Therefore, one studies the problem of finding a zero residual of the nonlinear least-squares problem associated with $H$. Problem (4) is called an overdetermined zero-residual problem.

3.3. Nonlinear least-squares problem

The residual problem (4) is immediately seen to be equivalent to solving the nonlinear zero-residual least-squares problem

$$\min_{p \in \mathbb{R}^r} f(p) = \frac{1}{2} \sum_{i=1}^{n} r_i^2(\phi(p)). \tag{5}$$

3.4. First derivatives for the residual functions R and H

The Jacobian of $R$ at $x \in \mathbb{R}^n$ is given by

$$J_R(x) = J(x) = \left[\nabla r_i(x)^T\right]_{1 \le i \le n}. \tag{6}$$

A direct application of the chain rule, since $J_\phi(p) = L^T$, yields

$$J_H(p) = J(\phi(p))\, L^T. \tag{7}$$

3.5. First and second derivatives for problem (5)

The gradient of each term of the problem is $\nabla r_i^2(\phi(p)) = 2\,(L^T)^T \nabla r_i(\phi(p))\, r_i(\phi(p))$. Therefore, the gradient of $f(p)$ is

$$\nabla f(p) = (J(\phi(p)) L^T)^T R(\phi(p)). \tag{8}$$

The second-order information is

$$\nabla^2 f(p) = (J(\phi(p)) L^T)^T (J(\phi(p)) L^T) + \sum_{i=1}^{n} h_i(p)\, \nabla^2 h_i(p). \tag{9}$$
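As a sanity check on (7) and (8), the following sketch verifies the chain rule and the gradient by finite differences for a toy residual; the names R, jac_R, L, and x0 are illustrative assumptions, not from the chapter.

```python
# A minimal sketch that checks Eqs. (7)-(8) by finite differences for a toy
# residual R; all identifiers here are illustrative assumptions.
import numpy as np

n, r = 8, 4
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
R = lambda x: A @ x + np.sin(x)             # toy square residual R: R^n -> R^n
jac_R = lambda x: A + np.diag(np.cos(x))    # its Jacobian J(x)
L = np.linalg.qr(rng.standard_normal((n, r)))[0].T  # orthonormal r x n low-pass
x0 = rng.standard_normal(n)

phi = lambda p: x0 + L.T @ p                # phi(p) = x_o + L^T p
H = lambda p: R(phi(p))                     # composite residual, Eq. (4)
p = rng.standard_normal(r)

JH = jac_R(phi(p)) @ L.T                    # chain rule, Eq. (7)
grad_f = JH.T @ H(p)                        # gradient of f, Eq. (8)
# Finite-difference check of one directional derivative of f in Eq. (5).
v = rng.standard_normal(r); eps = 1e-6
fd = (0.5*np.sum(H(p + eps*v)**2) - 0.5*np.sum(H(p - eps*v)**2)) / (2*eps)
assert np.isclose(fd, grad_f @ v, rtol=1e-4)
```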

4. Gauss-Newton method

This section presents a Gauss-Newton method to solve the nonlinear problem (5), which is equivalent to solving the overdetermined nonlinear composite-function problem (4). It describes the standard Newton assumptions for this composite-function problem that yield a q-quadratic rate of convergence. The inconvenience of using the Newton method is that the second-order information associated with the Hessian is not easily accessible or is too expensive to compute; the latter makes the Newton method impractical for very large-scale problems.

4.1. Model order-reduction-based Gauss-Newton algorithm

This subsection presents a reduced-order Gauss-Newton algorithm for solving problem (5), which is the interest herein.

Algorithm 1. Reduced-order Gauss-Newton (ROGN)

Inputs: Given the compressed basis $L^T \in \mathbb{R}^{n \times r}$ and an initial displacement $x_o \in \mathbb{R}^n$.

Output: Approximate solution in the affine subspace $x \in \mathbb{R}^n$.

1: Initial point of the problem. Given $p_o \in \mathbb{R}^r$.

2: Initial point in the affine subspace. $\phi(p_o) = x_o + L^T p_o \in \mathbb{R}^n$.

3: For $k = 0$: until convergence ($\|R(\phi(p_k))\| \le \epsilon$).

4: Gauss-Newton direction (compressed direction). Solve for $\Delta p_{k+1}$

$$(J(\phi(p_k)) L^T)^T (J(\phi(p_k)) L^T)\, \Delta p_{k+1} = -(J(\phi(p_k)) L^T)^T R(\phi(p_k)). \tag{10}$$

5: Compressed update: $p_{k+1} = p_k + \Delta p_{k+1}$.

6: Decompressed update: $\phi(p_{k+1}) = x_o + L^T p_{k+1}$.

Remarks:

1. The algorithm uses two initial points. The first, $x_o$, is the displacement that characterizes the affine subspace; the second, $p_o$, is the initial point for the algorithm.

2. The update $\phi(p_{k+1})$ is the approximation to the solution one is looking for, which one denotes by $x_{k+1}$.

3. Finding the Gauss-Newton direction $\Delta p_{k+1}$ is equivalent to solving the following linear least-squares problem:

$$\min_{\Delta p_{k+1}} \frac{1}{2} \left\| J(\phi(p_k)) L^T \Delta p + R(\phi(p_k)) \right\|^2. \tag{11}$$

4. The Gauss-Newton direction is the Petrov-Galerkin direction obtained by approximating Newton's direction for square nonlinear problems in the following weighted problem:

$$\min_{\Delta p_{k+1}} \frac{1}{2} \left\| L^T \Delta p + J(x)^{-1} R(x) \right\|_Q^2, \qquad Q = J(x)^T J(x) > 0. \tag{12}$$
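Before analyzing its convergence, the following minimal NumPy sketch illustrates Algorithm 1, assuming callables R(x) and jac(x) for the residual and its Jacobian and an orthonormal low-pass matrix L are available; the function name rogn and the defaults are ours, not the authors' code.

```python
# A minimal sketch of Algorithm 1 (ROGN); R, jac, L, x0, p0 are assumed inputs.
import numpy as np

def rogn(R, jac, L, x0, p0, eps=1e-8, kmax=50):
    """Reduced-order Gauss-Newton: seek R(x) = 0 on x0 + range(L^T)."""
    p = p0.copy()
    for _ in range(kmax):
        x = x0 + L.T @ p                      # decompressed iterate phi(p)
        Rx = R(x)
        if np.linalg.norm(Rx) <= eps:         # convergence test, step 3
            break
        JL = jac(x) @ L.T                     # J(phi(p)) L^T, Eq. (7)
        # Gauss-Newton direction: lstsq solves the linear least-squares
        # problem (11), whose normal equations are Eq. (10).
        dp = np.linalg.lstsq(JL, -Rx, rcond=None)[0]
        p = p + dp                            # compressed update, step 5
    return x0 + L.T @ p                       # decompressed solution, step 6
```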

4.2. Local convergence of reduced Gauss-Newton algorithm

It is known that the Gauss-Newton method retains a q-quadratic rate of convergence under standard assumptions for zero-residual single-function problems [15]. The natural question is: What are the standard assumptions, for the composite function one is working with, that guarantee the Gauss-Newton method conserves its q-quadratic rate of convergence? The next theorem establishes these assumptions.

Theorem: Let $H: \mathbb{R}^r \to \mathbb{R}^n$ be defined by $H(p) = R(x_o + L^T p)$, where $x_o \in \mathbb{R}^n$, $p \in \mathbb{R}^r$, and $L \in \mathbb{R}^{r \times n}$ is an orthonormal operator with $r < n$. Assume there exists a solution $p_* \in \tilde{D} \subset \mathbb{R}^r$, with $\tilde{D}$ convex and open. Define $D = x_o + L^T \tilde{D}$, where $L^T \tilde{D}$ is the image of $\tilde{D}$ under $L^T \in \mathbb{R}^{n \times r}$. Assume that $J_R$ is Lipschitz continuous with constant $\gamma$ on $D$, that $J_R$ is bounded on $D$, and that the minimum eigenvalue of $J(x)^T J(x)$ is positive. Then the sequence $\{p_{k+1}\}$ given by the ROGN Algorithm 1 is well defined, converges, and has a q-quadratic rate of convergence. That is:

$$\|p_{k+1} - p_*\| \le \frac{1}{2} \|p_k - p_*\| \tag{13}$$

$$\|p_{k+1} - p_*\| \le \frac{\tilde{c}\,\tilde{\gamma}}{2\tilde{\sigma}} \|p_k - p_*\|^2, \qquad \tilde{c}, \tilde{\gamma}, \tilde{\sigma} \in \mathbb{R}^+. \tag{14}$$

Proof: The residual problem $R$ has solution $x_*$ on $D$. First, one proves that the Jacobian of $H$ is Lipschitz continuous on $\tilde{D}$. Since $J_H(p) = J(\phi(p)) L^T$, then

$$\|J_H(p_1) - J_H(p_2)\| = \left\| \left( J(\phi(p_1)) - J(\phi(p_2)) \right) L^T \right\|, \qquad p_1, p_2 \in \tilde{D}. \tag{15}$$

Since the Jacobian of $R$ is Lipschitz continuous on $D$, one concludes

$$\|J_H(p_1) - J_H(p_2)\| \le \tilde{\gamma} \|p_1 - p_2\|, \qquad \tilde{\gamma} = \gamma \|L^T\|^2, \qquad p_1, p_2 \in \tilde{D}. \tag{16}$$

Second, one proves that the Jacobian of $H$ is bounded on $\tilde{D}$. Since the Jacobian of $H$ at $p$ is $J(\phi(p)) L^T$, then

$$\|J_H(p)\| = \|J(\phi(p)) L^T\| \qquad \text{for } p \in \tilde{D}. \tag{17}$$

Now, since the Jacobian of $R$ is bounded on $D$, one concludes

$$\|J_H(p)\| \le \tilde{c}, \qquad \tilde{c} = c \|L^T\|, \qquad \text{for } p \in \tilde{D}. \tag{18}$$

Finally, one proves that the smallest eigenvalue of $J_H(p)^T J_H(p)$ is greater than zero:

$$J_H(p)^T J_H(p) = (J(x) L^T)^T (J(x) L^T), \qquad x = x_o + L^T p. \tag{19}$$

Let $p \ne 0 \in \mathbb{R}^r$ and $\sigma \in \mathbb{R}$ be an eigenvector and eigenvalue associated with this symmetric matrix. Then

$$\|L^T p\|_Q^2 = \sigma \|p\|^2, \qquad Q = J(x)^T J(x) > 0. \tag{20}$$

Therefore, $\sigma > 0$ since $L^T$ has full rank and $p \ne 0$. The convergence and the fast rate of convergence given by the last two inequalities follow from Theorem 10.2.1 in the Dennis and Schnabel book [15].


5. Regularization

Despite the advantages of the Gauss-Newton method, the algorithm will not perform well if the problem is ill conditioned or if some of its components are highly nonlinear. The purpose of this section is to introduce two regularizations that overcome these difficulties while retaining the fast rate of convergence of the Gauss-Newton method.

5.1. Levenberg-Marquardt method

To prevent the Gauss-Newton algorithm from failing when some eigenvalues are near zero or when the linear systems to solve are rank deficient, the least-squares directions are regularized by

$$\min_{\Delta p} \frac{1}{2} \left\| J(\phi(p)) L^T \Delta p + R(\phi(p)) \right\|^2 + \frac{\mu}{2} \|\Delta p\|^2, \tag{21}$$

where $\mu > 0$. The solution is given by

$$\left( (J(\phi(p)) L^T)^T (J(\phi(p)) L^T) + \mu I \right) \Delta p = -(J(\phi(p)) L^T)^T R(\phi(p)). \tag{22}$$

Under the standard Gauss-Newton assumptions written before, and choosing the regularization parameter as $\mu = O\!\left( \|(J(\phi(p)) L^T)^T R(\phi(p))\| \right)$, the regularized Gauss-Newton algorithm converges and the q-quadratic rate of convergence is retained; see Theorem 10.2.6 in [15]:

$$\|p_{k+1} - p_*\| \le \frac{1}{2} \tilde{\alpha} \tilde{\gamma} \tilde{\lambda} \|p_k - p_*\|^2. \tag{23}$$

5.2. Scaling regularization

To avoid the influence of components whose order of magnitude is much higher than that of the rest of the components of the problem, one presents the following regularization:

$$\min_{\Delta p} \frac{1}{2} \left\| J(\phi(p)) L^T \Delta p + R(\phi(p)) \right\|^2 + \frac{\sigma}{2} \|L^T \Delta p\|_Q^2, \tag{24}$$

where $Q = (J(\phi(p)) L^T)^T (J(\phi(p)) L^T)$. The solution is given by

$$(J(\phi(p)) L^T)^T (J(\phi(p)) L^T)\, \Delta p = -\frac{(J(\phi(p)) L^T)^T R(\phi(p))}{1 + \sigma \|R(\phi(p))\|}. \tag{25}$$

This regularization prevents large components of the problem from affecting the behavior of the algorithm. It is important to observe that the Lipschitz constant of the problem is improved. Combining the preceding two regularizations, one has

$$\left( (J(\phi(p)) L^T)^T (J(\phi(p)) L^T) + \mu I \right) \Delta p = -\frac{(J(\phi(p)) L^T)^T R(\phi(p))}{1 + \sigma \|R(\phi(p))\|}. \tag{26}$$

These last regularizations prevent the smallest eigenvalue from affecting the behavior of the Gauss-Newton algorithm and, at the same time, through rescaling, prevent components with small values from being dominated by the influence of large components, while retaining the fast rate of convergence.
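A minimal sketch of the combined direction (26) could read as follows, assuming the matrix JL = $J(\phi(p)) L^T$ and the residual Rx = $R(\phi(p))$ are given; the function name and defaults are illustrative.

```python
# A minimal sketch of the combined regularized direction of Eq. (26);
# JL and Rx are assumed given, and sigma is an illustrative default.
import numpy as np

def regularized_direction(JL, Rx, sigma=0.025):
    g = JL.T @ Rx                               # gradient, Eq. (8)
    mu = sigma * np.linalg.norm(Rx)             # Levenberg-Marquardt parameter
    Q = JL.T @ JL + mu * np.eye(JL.shape[1])    # regularized normal matrix
    rhs = -g / (1.0 + sigma * np.linalg.norm(Rx))  # scaling regularization
    return np.linalg.solve(Q, rhs)
```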


6. Globalization strategy

The good performance of the Gauss-Newton algorithm depends on a suitable initial point that must be inside its region of convergence. Rather than incurring the computational cost associated with choosing an appropriate initial point, this chapter proposes a line-search method that provides convergence for initial points outside of the region of convergence. The goal of this approach is to obtain a sufficient decrease in a merit function. If the full step fails, then backtracking is used until a sufficient reduction is obtained. A merit function should allow progress toward a solution of the problem.

6.1. Merit function

It is natural to take as the merit function for the unconstrained minimization problem (5) the objective itself, that is, $M(p) = f(p)$.

6.2. Descent direction

One proves that the Gauss-Newton direction is a descent direction for the merit function $M(p) = f(p)$.

Property: The regularized Gauss-Newton direction $\Delta p$ given by (26) is a descent direction for the merit function $M(p) = f(p)$.

Proof: One proves that the directional derivative of $f$ in the direction $\Delta p$ is less than zero. The gradient of $f(p)$ is given by $\nabla f(p) = (J(\phi(p)) L^T)^T R(\phi(p))$. Therefore,

$$\nabla f(p)^T \Delta p = -\left\| (J(\phi(p)) L^T)^T R(\phi(p)) \right\|_{Q^{-1}}^2 < 0, \tag{27}$$

since $Q = (J(\phi(p)) L^T)^T (J(\phi(p)) L^T) + \mu I$ is positive definite.

Consequently, it is possible to progress toward a solution of the problem along the direction $\Delta p$. The purpose is to find a step length $\alpha \in (0,1]$ that yields a sufficient decrease. To that effect, one follows the Armijo-Goldstein conditions given by

$$f(p + \alpha \Delta p) \le f(p) + \alpha \lambda\, \nabla f(p)^T \Delta p \tag{28}$$

and

$$\nabla f(p + \alpha \Delta p)^T \Delta p \ge \beta\, \nabla f(p)^T \Delta p, \tag{29}$$

for fixed values $\lambda, \beta \in (0,1)$. The first inequality enforces a sufficient decrease of the merit function, and the second one avoids step lengths that are too small. It is important to observe that if $\beta$ is chosen with $\lambda < \beta < 1$, then the two inequalities can be satisfied simultaneously. Wolfe proved that if $f$ is continuously differentiable on $\mathbb{R}^r$, $\Delta p$ is a descent direction, and the set $\{ f(p + \alpha \Delta p) : \alpha \in (0,1] \}$ is bounded below, then there exists an $\alpha \in (0,1]$ such that the two inequalities are satisfied simultaneously [16].

It is important to realize that these two inequalities can be satisfied by using a backtracking procedure, as the sketch below illustrates. Therefore, this work uses a line-search strategy to satisfy the inequalities. The next section proposes a line-search regularized Gauss-Newton algorithm for solving the zero-residual composite-function problem.
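The following is a minimal backtracking sketch that enforces the sufficient-decrease condition (28); f and grad_f are assumed callables for problem (5), and starting from the full step $\alpha = 1$ and halving is the usual practical safeguard against the overly small steps that (29) excludes [15].

```python
# A minimal backtracking line search enforcing the sufficient-decrease
# condition (28); f, grad_f, and all defaults are illustrative assumptions.
def backtracking(f, grad_f, p, dp, lam=1e-4, alpha=1.0, shrink=0.5, max_back=30):
    fp = f(p)
    slope = grad_f(p) @ dp            # slope < 0 for a descent direction
    for _ in range(max_back):
        if f(p + alpha * dp) <= fp + alpha * lam * slope:
            return alpha              # sufficient decrease achieved
        alpha *= shrink               # otherwise backtrack
    return alpha
```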


7. A line-search regularized Gauss-Newton method

This section proposes the following regularized Gauss-Newton method with line search to find a solution on the affine subspace $x_o + \eta(L^T)$ for problem (4).

Algorithm 2: A reduced-order regularized Gauss-Newton (RORGN)

Input: Given the compressed basis $L^T \in \mathbb{R}^{n \times r}$ and a displacement $x_o \in \mathbb{R}^n$.

Output: The approximate solution in the affine subspace $x \in \mathbb{R}^n$.

1: Initial point of the problem. Given $p_o \in \mathbb{R}^r$.

2: Initial point in the affine subspace. $x_1 = x_o + L^T p_o \in \mathbb{R}^n$.

3: For $k = 1$: until convergence ($\|R(x_k)\| \le \epsilon$).

4: Choose $\mu_k = \sigma_k \|R(x_k)\|$, with $\sigma_k \in (0,1)$.

5: Regularized Gauss-Newton direction. Solve for $\Delta p_k$

$$\left( (J(x_k) L^T)^T (J(x_k) L^T) + \mu_k I \right) \Delta p_k = -\frac{(J(x_k) L^T)^T R(x_k)}{1 + \sigma_k \|R(x_k)\|}. \tag{30}$$

6: Line search (sufficient decrease). Find $\alpha_k \in (0,1]$ such that

$$\left\| R(x_k + \alpha_k L^T \Delta p_k) \right\|^2 < \|R(x_k)\|^2 + 2 \times 10^{-4}\, \alpha_k\, \nabla f(p_k)^T \Delta p_k. \tag{31}$$

7: Update. $x_{k+1} = x_k + \alpha_k L^T \Delta p_k$.

Remarks: The algorithm is amenable to the use of any suitable basis, not necessarily a wavelet basis. The algorithm can be tested with different initial displacement points. On the other hand, the choice of the initial point of the algorithm is not limited to the origin.
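Putting the pieces together, a minimal sketch of Algorithm 2 could read as follows, again assuming callables R and jac for the FOM residual and Jacobian; the function name rorgn and all defaults are illustrative, not the authors' code.

```python
# A minimal sketch of Algorithm 2 (RORGN), combining the regularized
# direction of Eq. (30) with the line-search condition of Eq. (31).
import numpy as np

def rorgn(R, jac, L, x0, p0, sigma=0.025, eps=1e-6, kmax=128):
    x = x0 + L.T @ p0                       # step 2: initial point
    for _ in range(kmax):                   # step 3
        Rx = R(x)
        nrm = np.linalg.norm(Rx)
        if nrm <= eps:
            break
        JL = jac(x) @ L.T
        g = JL.T @ Rx                       # gradient of f, Eq. (8)
        mu = sigma * nrm                    # step 4: mu_k = sigma_k ||R(x_k)||
        Q = JL.T @ JL + mu * np.eye(JL.shape[1])
        dp = np.linalg.solve(Q, -g / (1.0 + sigma * nrm))  # step 5, Eq. (30)
        alpha = 1.0                         # step 6: backtrack on Eq. (31)
        while (np.linalg.norm(R(x + alpha * L.T @ dp))**2
               >= nrm**2 + 2e-4 * alpha * (g @ dp)) and alpha > 1e-8:
            alpha *= 0.5
        x = x + alpha * L.T @ dp            # step 7: update
    return x
```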


8. Numerical examples

The authors ran all experiments on a MacBook Pro laptop equipped with an Intel(R) Quad-Core(TM) i7-2720QM CPU @ 2.20 GHz and 8 GB of RAM. Section 8.2 presents Bratu's 3D problem. This problem is more challenging since the nonlinear systems become ill conditioned as one approaches the bifurcation point. Therefore, the RORGN algorithm was tested to solve them efficiently. All these Bratu problems utilized the wavelet-based ROM. One takes the regularization as $\mu_k = \sigma_k \|R(x_k)\|$, with $\sigma_k \in (0,1)$. The stopping criteria for the algorithm are either that the norm of the residual, $\epsilon_k = \|R(x_k)\|$, is less than some small given positive real value $\epsilon$, or that a maximum number of iterations, $k_{max}$, is reached.

8.1. Bratu’s 2D problem

The Bratu’s 1D equation can be generalized by replacing the second derivative by a Laplacian [1, 17]. This section numerically studies the nonlinear diffusion equation with exponential source term in two and three dimensions. Let Ω=01n,n=2,3 be a unitary square or cube, where xi01,i=1,,n are the spatial variables while n is the space dimension.

$$\Delta u + \zeta e^u = 0 \;\text{ on }\; \Omega; \qquad u = u(\bar{x}), \qquad u = 0 \;\text{ on }\; \partial\Omega, \tag{32}$$

where $\zeta \in \mathbb{R}$ is a coefficient. The Laplacian is defined by

$$\Delta = \sum_{i=1}^{n} \frac{\partial^2}{\partial x_i^2}. \tag{33}$$

One can discretize (32) by means of central finite differences on regular tensor-product meshes. Homogeneous Dirichlet boundary conditions are enforced on all square or cube faces; see [17] for details.
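As an illustration, a minimal sketch of the 2D central-finite-difference residual of (32) on a uniform $m \times m$ interior grid could read as follows; the function name and arguments are ours. The Jacobian of this residual is the five-point Laplacian matrix plus the diagonal matrix $\operatorname{diag}(\zeta e^u)$.

```python
# A minimal sketch of the central-finite-difference discretization of
# Eq. (32) in 2D; the name and arguments are illustrative assumptions.
import numpy as np

def bratu_residual_2d(u, zeta, m):
    """u: flattened m*m interior unknowns; homogeneous Dirichlet boundaries."""
    h = 1.0 / (m + 1)
    U = np.zeros((m + 2, m + 2))             # pad with the zero boundary values
    U[1:-1, 1:-1] = u.reshape(m, m)
    lap = (U[:-2, 1:-1] + U[2:, 1:-1] + U[1:-1, :-2] + U[1:-1, 2:]
           - 4.0 * U[1:-1, 1:-1]) / h**2     # 5-point Laplacian stencil
    return (lap + zeta * np.exp(U[1:-1, 1:-1])).ravel()
```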

8.2. Bratu’s 3D problem

Figures 2 and 3 present results from Bratu's 3D problem. Figure 2 shows the parameter continuation problem, while Figure 3 compares FOM and WROM results in the whole domain. For visualization purposes, one cuts away the front half of the cube to see inside it. Figure 2 shows the FOM as a continuous line, while the dashed blue and magenta lines correspond to WROM at 85 and 90%, respectively. Neither of these WROM models could reproduce the FOM behavior beyond $\zeta = 9$; they could get neither into the second branch nor close to the bifurcation point, where the system becomes highly nonlinear. On the other hand, Figure 3 compares FOM versus WROM at 90% in order to show that this WROM could properly reproduce the FOM behavior in the whole cube; the mesh size is $10 \times 10 \times 10$, $\zeta = 1.5$ is fixed, and one has, from left to right, the FOM, the WROM, and the absolute error. Table 1 summarizes the performance of the family of Gauss-Newton algorithms applied to the FOM Bratu's 3D problem. One employs the numerical values $\epsilon_{tol} = 10^{-3}$ and $k_{max} = 32$. Once again, one gets close to the bifurcation point by choosing $\zeta = 9.9$ to pose a challenging nonlinear system, while $\sigma$ was tuned to achieve performance for a given rank.

Figure 2.

FOM vs. WROM parameter continuation solutions of (32), for $n = 3$, are shown.

Figure 3.

FOM and ROM are compared: FOM (left), WROM at 90% (center), and error (right).

| Newton method | $\epsilon_k$ | $\delta_k$ | #Iter | Success |
|---|---|---|---|---|
| N = 512, $\sigma = 0.03$ | | | | |
| Standard | 1.637075E−04 | 4.497266E−08 | 7 | True |
| Combined | 5.089393E−04 | 8.391213E−07 | 5 | True |
| Regularized | 5.089393E−04 | 8.391213E−07 | 5 | True |
| Scaled | 1.637075E−04 | 4.497266E−08 | 7 | True |
| Line search | 1.637075E−04 | 4.497266E−08 | 7 | True |
| Com. and Line search | 5.089393E−04 | 8.391213E−07 | 5 | True |
| N = 1000, $\sigma = 0.025$ | | | | |
| Standard | 3.303599E−04 | 1.682251E−07 | 7 | True |
| Combined | 1.424983E−05 | 7.450719E−10 | 6 | True |
| Regularized | 1.424983E−05 | 7.450719E−10 | 6 | True |
| Scaled | 3.303599E−04 | 1.682251E−07 | 7 | True |
| Line search | 3.303599E−04 | 1.682251E−07 | 7 | True |
| Com. and Line search | 1.424983E−05 | 7.450719E−10 | 6 | True |
| N = 1728, $\sigma = 0.020$ | | | | |
| Standard | 5.792054E−04 | 4.806531E−07 | 7 | True |
| Combined | 1.197055E−04 | 6.096723E−08 | 5 | True |
| Regularized | 1.197055E−04 | 6.096723E−08 | 5 | True |
| Scaled | 5.792054E−04 | 4.806531E−07 | 7 | True |
| Line search | 5.792054E−04 | 4.806531E−07 | 7 | True |
| Com. and Line search | 1.197055E−04 | 6.096723E−08 | 5 | True |
| N = 2744, $\sigma = 0.015$ | | | | |
| Standard | 1.143068E−05 | 1.756738E−10 | 8 | True |
| Combined | 1.678400E−04 | 1.052920E−07 | 6 | True |
| Regularized | 1.678400E−04 | 1.052920E−07 | 6 | True |
| Scaled | 1.143068E−05 | 1.756738E−10 | 8 | True |
| Line search | 1.143068E−05 | 1.756738E−10 | 8 | True |
| Com. and Line search | 1.678400E−04 | 1.052920E−07 | 6 | True |

Table 1.

Gauss-Newton results for Bratu’s 3D problem presented in Section 8.2.

One observes, for all ranks reported herein, that the regularized method provides similar convergence tolerances but usually spends two fewer iterations than standard Newton, and hence the CPU time is reduced as well. One also notices, for this particular problem, that line search is equivalent to standard Newton, and likewise the combined method matches the performance of the combined plus line-search method.

8.3. Nonlinear benchmark problems

One also considers a benchmark nonlinear problem from the literature in order to challenge the proposed algorithms. The Yamamura [18] problem is a nonlinear system of equations defined by:

$$R: \mathbb{R}^n \to \mathbb{R}^n, \quad x \in \mathbb{R}^n, \quad R_i(x) = 2.5 x_i^3 - 10.5 x_i^2 + 11.8 x_i - i + \sum_{j=1}^{n} x_j = 0, \quad 1 \le i \le n, \tag{34}$$

where $n$ is the size of the nonlinear system. One implemented the algorithms with $\epsilon_{tol} = 10^{-6}$, $k_{max} = 128$, and $\sigma = 0.025$.
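A minimal sketch of the residual (34) and its dense Jacobian, matching the reconstruction above, follows; the function names are illustrative.

```python
# A minimal sketch of the Yamamura residual of Eq. (34) and its Jacobian;
# the index i runs from 1 to n, and the names are illustrative assumptions.
import numpy as np

def yamamura_residual(x):
    n = x.size
    i = np.arange(1, n + 1)
    return 2.5 * x**3 - 10.5 * x**2 + 11.8 * x - i + x.sum()

def yamamura_jacobian(x):
    n = x.size
    J = np.ones((n, n))                          # coupling term sum_j x_j
    J += np.diag(7.5 * x**2 - 21.0 * x + 11.8)   # derivative of the cubic part
    return J
```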

The objective is to challenge the algorithms presented in this research, and the results are reported in Table 2. One can infer from this table that the standard Newton method could not converge for any of these realizations except $n = 32$. Conversely, the regularized method converged for all realizations and outperformed all others. On the other hand, the regularized and line-search method could consistently converge for all realizations except $n = 2048$. For the larger ranks, that is, 512 and 1024, the latter method is the more efficient. The bottom line is that the regularized method performed well in this example.

| Newton method | $\epsilon_k$ | $\delta_k$ | #Iter | Success |
|---|---|---|---|---|
| N = 32 | | | | |
| Standard | 6.643960E−06 | 1.282327E−07 | 47 | True |
| Regularized | 3.547835E−05 | 8.310947E−07 | 32 | True |
| Line search | 8.835078E−09 | 2.415845E−13 | 78 | True |
| Reg. and Line search | 4.271577E−06 | 8.288108E−09 | 35 | True |
| N = 256 | | | | |
| Standard | 2.992259E−01 | 3.018468E+07 | 128 | False |
| Regularized | 4.260045E−06 | 5.643326E−08 | 21 | True |
| Line search | 2.569192E−02 | 2.618397E+03 | 128 | False |
| Reg. and Line search | 2.893999E−06 | 2.380562E−08 | 35 | True |
| N = 512 | | | | |
| Standard | 2.280299E−02 | 3.448757E+05 | 128 | False |
| Regularized | 2.384802E−07 | 2.579578E−10 | 46 | True |
| Line search | 1.127484E−01 | 1.845466E+07 | 128 | False |
| Reg. and Line search | 3.551655E−08 | 2.368659E−11 | 42 | True |
| N = 1024 | | | | |
| Standard | 9.540053E−05 | 4.501800E+01 | 128 | False |
| Regularized | 1.730306E−06 | 8.414453E−09 | 41 | True |
| Line search | 7.199610E−05 | 5.095672E+00 | 128 | False |
| Reg. and Line search | 8.857816E−06 | 4.250476E−07 | 35 | True |
| N = 2048 | | | | |
| Standard | 1.769950E+00 | 8.504131E+14 | 128 | False |
| Regularized | 7.270952E−08 | 1.578029E−10 | 68 | True |
| Line search | 6.195869E−02 | 4.180347E+09 | 128 | False |
| Reg. and Line search | 1.069042E−02 | 3.229689E−01 | 128 | False |

Table 2.

Gauss-Newton results for Yamamura problem.


9. Hybrid method: HROM

The idea is simple: since the wavelet subspace is not a function of a priori known snapshots, it can be determined without executing the so-called off-line stage, which is computationally expensive and in which one thoroughly studies the FOM and samples the input space to record a representative set of snapshots. HROM takes as input snapshots those outputted by a WROM procedure. As shown below, if this WROM happens to reproduce the FOM behavior correctly, then one should expect the resulting POD-based ROM, that is, HROM, to replicate the original FOM behavior accurately. At first glance, depending upon the WROM compression ratio, this procedure reduces the runtime of the off-line stage.

This section conducts a series of preliminary numerical experiments on the well-known Bratu nonlinear benchmark problem, in particular in one and two dimensions [5], to support this case. Figures 4 and 5 depict results for the 1D continuation problem. They utilize the following WROM compression ratios: 10 and 5%, where the compression ratio is constant during the continuation problem. All plots display two distinct HROM compression ratios. The WROM at 20%, which is not depicted, completely misses the bifurcation zone and the second branch, and thus the HROM is also way off. However, as the WROM starts to catch up with the FOM, so does HROM. Indeed, it is observed that HROM yields results comparable to the version that takes the original snapshots, that is, PROM.

Figure 4.

Bratu's 1D, 10% compression.

Figure 5.

Bratu's 1D, 5% compression.

One can repeat a similar experiment with Bratu's 2D continuation problem, which produces the same trend as before. Indeed, if the input WROM is way off target, then HROM is off as well. For instance, 27% compression implies that all HROM miss the second branch, but still, the 21% model could slightly reproduce the proper FOM trend. Things significantly improve for the models with 21 and 20%, as shown in Figures 6 and 7. However, these models still miss the bifurcation zone; they just render a flat profile there. These insights suggest that if the WROM can correctly reproduce the FOM behavior, then HROM can do so too. One can probably improve the accuracy of the WROM by changing the compression ratio during the online stage. For Bratu's problem, one can assume a graded energy distribution that provides more of it while approaching the bifurcation point. This approach is referred to as "adaptive WROM," or AWROM as shorthand.

Figure 6.

Bratu's 2D, 10% compression.

Figure 7.

Bratu's 2D, 5% compression.

This section presents an alternative strategy that can rely on insights from the FOM, such as Newton tolerances and numbers of iterations, which are in turn indirect error estimates. Figure 8 plots the energy distribution that was utilized as a function of the continuation parameter, $\zeta$, for Bratu's 1D problem. When one approaches the bifurcation point, $\zeta = 3.5$, one should gradually bump up the energy as shown. Notice that the distribution tends to concentrate more points toward the bifurcation point. With this energy distribution taken into account, one obtains the AWROM results in Figure 9. This ROM accurately reproduces the FOM behavior, as noted. Let us now conduct the following experiment. One must run the FOM and, at every snapshot, store the number of Newton iterations and the resulting error tolerance. One then computes the energy distribution that Figure 10 depicts. The following formula was employed to do so:

$$E_i = (1 - \theta) + \theta\, \frac{nIters_i}{nMaxIters}, \tag{35}$$

where $nIters_i$ is the number of iterations at the current location, $nMaxIters$ is the maximum number of iterations reported by the FOM, and $\theta \in (0,1)$. Figure 11 depicts excellent accordance between FOM and AWROM ($\theta = 0.2$). One should expect that if this last AWROM model is inputted to an HROM simulation, HROM should reproduce the original FOM correctly.
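A minimal sketch of (35), assuming the per-snapshot FOM iteration counts are collected in an array, could read:

```python
# A minimal sketch of the indirect error estimate of Eq. (35); n_iters holds
# the per-snapshot FOM Newton iteration counts and theta is a tuning weight.
import numpy as np

def energy_distribution(n_iters, theta=0.2):
    n_iters = np.asarray(n_iters, dtype=float)
    return (1.0 - theta) + theta * n_iters / n_iters.max()
```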

Figure 8.

Linear energy distribution.

Figure 9.

10–20% variable compression.

Figure 10.

Variable energy distribution.

Figure 11.

Variable compression.

Figures 12 and 13 depict preliminary results of HROM applied to a couple of AWROM models whose energy distributions were described in Figures 8 and 10, respectively ($\theta = 0.2$). These ROM reproduce the FOM behavior accordingly, which shows that there is potential to study the performance of HROM further. Another important question for further research is how to improve the compression ratio of the AWROM scheme.

Figure 12.

Linear AWROM.

Figure 13.

Variable AWROM.


10. Concluding remarks

This chapter introduced a global regularized Gauss-Newton method for solving square nonlinear algebraic systems in an affine subspace, which enables real-time solutions without a priori simulations. Solving a nonlinear least-squares composite function poses the problem that the answer must be recovered from the inner argument. The authors thus presented the standard Newton assumptions that guarantee a q-quadratic rate of convergence. The findings include that the Petrov-Galerkin projection directions for the Newton method are none other than the Gauss-Newton ones for a composite function. The technique uses two initial points: one determines the affine subspace, and the other is the starting guess for solving the composition mentioned earlier. The notion of compressed sensing with wavelets produces the characterization of an affine subspace that comprises the majority of the energy of the problem. The chapter showed numerical experiments that back up the proposed globalization methodology for solving highly nonlinear systems in real time; these reproduce the principal features of the FOM. The results underline the fact that one does not need to employ information at any particular point. The Bratu's 3D FOM results show that the proposed RORGN algorithm outperforms the standard GN method while retaining its q-quadratic rate of convergence. This chapter concludes that the regularized and line-search-enabled scheme is the most robust and efficient algorithm for the problems presented herein. The numerical results imply that this approach performs well and does not significantly increase the CPU time.

From the numerical results presented in Section 9 for Bratu's 1D and 2D problems, the data-fusion procedure (HROM) can be used as an alternative when the simulation time for a problem is limited. This method covers two different scenarios. In the first, the data come out of the model, while in the second, the data are obtained independently of it. That is, in the first case the model governs the data, and in the other the input information rules the model.

Acknowledgments

The authors recognize the financial support of the project “Reduced-Order Parameter Estimation for Underbody Blasts” financed by the Army Research Laboratory, through the Army High-Performance Computing Research Center under Cooperative Agreement W911NF-07-2-0027, and also acknowledge Dr. Martine Ceberio at UTEP for proofreading the manuscript.

References

1. Florez H, Argáez M. A model-order reduction method based on wavelets and POD to solve nonlinear transient and steady-state continuation problems. Applied Mathematical Modelling. 2018;53:12-31
2. Florez H. Linear Thermo-Poroelasticity and Geomechanics. Chapter 10 in: Pacurar R, editor. Finite Element Method - Simulation, Numerical Analysis and Solution Techniques. Rijeka, Croatia: InTech Open; 2018. pp. 223-242. ISBN: 978-953-51-3849-5. http://dx.doi.org/10.5772/intechopen.71873
3. Argáez M, Ceberio M, Florez H, Mendez O. A model reduction for highly non-linear problems using wavelets and Gauss-Newton method. In: 2016 Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS); IEEE; 2016. pp. 1-6
4. Carlberg K, Bou-Mosleh C, Farhat C. Efficient non-linear model reduction via a least-squares Petrov-Galerkin projection and compressive tensor approximations. International Journal for Numerical Methods in Engineering. 2011;86:155-181
5. Willcox K, Peraire J. Balanced model reduction via the proper orthogonal decomposition. AIAA Journal. 2002;40(11):2323-2330
6. Chaturantabut S, Sorensen DC. Application of POD and DEIM on dimension reduction of non-linear miscible viscous fingering in porous media. Mathematical and Computer Modelling of Dynamical Systems. 2011;17(4):337-353. DOI: 10.1080/13873954.2011.547660
7. Florez H. Applications of model-order reduction to thermo-poroelasticity. In: 51st US Rock Mechanics/Geomechanics Symposium; American Rock Mechanics Association; 2017
8. Amsallem D, Zahr MJ, Farhat C. Nonlinear model order reduction based on local reduced-order bases. International Journal for Numerical Methods in Engineering. 2012;92:891-916
9. Kerfriden P, Gosselet P, Adhikari S, Bordas SPA. Bridging proper orthogonal decomposition methods and augmented Newton-Krylov algorithms: An adaptive model order reduction for highly nonlinear mechanical problems. Computer Methods in Applied Mechanics and Engineering. 2011;200:850-866
10. Hernandez M. A reduced order parameter estimation technique using orthonormal wavelets [PhD thesis]. El Paso: ETD Collection for University of Texas; 2011
11. Le T-H, Caracoglia L. Reduced-order wavelet-Galerkin solution for the coupled, nonlinear stochastic response of slender buildings in transient winds. Journal of Sound and Vibration. 2015;344:179-208
12. Daubechies I. Ten Lectures on Wavelets. CBMS-NSF Conference Series in Applied Mathematics. Philadelphia, PA: SIAM; 1992
13. Teolis A. Computational Signal Processing with Wavelets. Boston, MA, USA: Birkhauser; 1998
14. Mallat S. A Wavelet Tour of Signal Processing. London: Academic Press; 1999
15. Dennis J, Schnabel R. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics; 1996. DOI: 10.1137/1.9781611971200
16. Wolfe P. Convergence conditions for ascent methods. SIAM Review. 1969;11(2):226-235. ISSN: 00361445. URL: http://www.jstor.org/stable/2028111
17. Florez H, Argáez M. Applications and comparison of model-order reduction methods based on wavelets and POD. In: 2016 Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS); IEEE; 2016. pp. 1-8
18. Yamamura K, Kawata H, Tokue A. Interval solution of nonlinear equations using linear programming. BIT. 1998;38:186-199
