Open access peer-reviewed chapter

An Effective H2-LU Preconditioner for Iterative Solution of MQS Integral-Based Formulation

Written By

Salvatore Ventre, Bruno Carpentieri, Gaspare Giovinco, Antonello Tamburrino, Fabio Villone and Guglielmo Rubinacci

Submitted: 08 April 2022 Reviewed: 15 September 2022 Published: 26 October 2022

DOI: 10.5772/intechopen.108106

From the Edited Volume

Advances in Fusion Energy Research - From Theory to Models, Algorithms, and Applications

Edited by Bruno Carpentieri and Aamir Shahzad


Abstract

We present iterative solution strategies for efficiently solving Magneto-Quasi-Static (MQS) problems expressed in terms of an integral formulation based on the electric vector potential. Integral formulations give rise to discrete models characterized by linear systems with dense coefficient matrices. Iterative Krylov subspace methods, combined with fast compression techniques for the matrix-vector product operation, are the only viable approach for treating large-scale problems, such as those considered in this study. We propose a fully algebraic preconditioning technique built upon the theory of H2-matrix representation that can be applied to different integral operators and to changes in the geometry by tuning only a few parameters. Numerical experiments show that the proposed methodology performs much better than the existing one in terms of ability to reduce the number of iterations of a Krylov subspace method, especially for fast transient analysis.

Keywords

  • H2-matrix
  • LU decomposition
  • MQS volume integral formulation
  • time domain
  • preconditioning
  • eddy currents
  • controlled thermonuclear fusion

1. Introduction

In this work, we consider the numerical solution of the Magneto-Quasi-Static (MQS) problem expressed in terms of an integral formulation based on the electric vector potential [1, 2, 3]. Integral formulations give rise to discrete models characterized by linear systems with dense coefficient matrices. Iterative Krylov subspace methods, combined with fast compression techniques for the matrix-vector product operation, are the only viable approach for treating large-scale problems, such as those considered in this work. In this framework, a key role is played by the choice of the preconditioner, which has to be (i) robust enough to significantly reduce the number of iterations required to converge toward an accurate numerical solution, and (ii) efficient, so as to reduce the overall computational and memory costs of the iterative solution. In this work, we focus on this critical computational aspect for the MQS problem. We propose a fully algebraic preconditioning technique built upon the theory of H2-matrix representation, which can be applied to different integral operators and to changes in the geometry by tuning only a few parameters.

Many algebraic preconditioning methods for solving dense linear systems arising from the discretization of integral operators are computed from a sparse approximation of the stiffness matrix, which retains the near-field mesh interactions giving rise to the most relevant contributions to the singular integrals of the model and is easy to factorize or invert [4, 5, 6]. However, our past experience in using such methods on our MQS problem, e.g., multilevel incomplete LU factorizations [7], sparse approximate inverses [8], and preconditioners computed from the sparse resistance matrix of the discrete system of equations (see Eq. (6) below), was not completely satisfactory. We found that the lack of global information due to the compact support of the sparse near-field matrix often results in slow convergence. For instance, in the CARIDDI code that implements the electric vector potential formulation of our problem [1, 9], a preconditioner based on the sparsity pattern arising from local interactions among neighboring basis functions is used to accelerate the iterative solution (see [10]). The aforementioned limitations are clearly shown in our numerical experiments, suggesting the need to develop global preconditioners that can incorporate contributions from the "far-field" interactions for this class of problems. In our model, far-field interactions can be included by taking into account the vector potential.

Prompted by these considerations, in this paper we propose a class of preconditioners based on the so-called Hierarchical or H-matrices [11], which provide a general mathematical framework for a highly compact, kernel-independent, and accurate representation of integral equations with almost linear complexity [12]; see [13] for an example of application. H-matrices can be used to define global preconditioners built by means of the factorization of low-rank blocks at a moderate memory requirement. In previous works by other authors, highly efficient approximate LU factorization and multilevel sparse approximate inverse preconditioners of boundary element matrices, requiring an almost linear computational cost, have been developed by means of the fast arithmetic of H-matrices (see [14]). The novelty of this work is to use the latest development of H-matrices, that is, H2-matrix theory [15, 16].

The paper is organized as follows. In Section 2, we present the MQS volume integral formulation used in our study and its numerical implementation. In Section 3, we establish the need for an effective preconditioner for our problem. In Section 4, we outline some fundamental properties of H2-matrix theory, and in Section 5, we describe the computational steps of a matrix factorization that has $O(N\log N)$ complexity, $N$ being the number of DoFs (degrees of freedom), and that is applied to the design of our preconditioner. We assess the effectiveness of the proposed method for eddy current computations in fusion device simulations by numerical experiments in Section 6, also comparing it against other algebraic preconditioning strategies. Finally, in Section 7, we draw some concluding remarks arising from the study and outline some perspectives for future work.


2. The volume integral numerical formulation

Here we recall the main assumptions at the basis of a Magneto-Quasi-Static volume integral formulation and of its numerical implementation. We refer to a conducting region $V_C$, without any topological constraint, characterized by the ohmic constitutive equation $\mathbf{J}=\sigma\mathbf{E}$, where $\mathbf{J}$ is the current density, $\sigma$ is the material conductivity, and $\mathbf{E}$ is the electric field induced in the conductor by a set of time-varying external sources distributed with assigned current density $\mathbf{J}_S$.

The relevant equations in the magneto-quasi-stationary limit are

$$\mathbf{E} = -\frac{\partial \mathbf{A}}{\partial t} - \nabla\varphi \tag{1}$$
$$\nabla\times\mathbf{A} = \mathbf{B} \tag{2}$$

where

$$\mathbf{A}(\mathbf{r},t) = \frac{\mu_0}{4\pi}\int_{V_C}\frac{\mathbf{J}(\mathbf{r}',t)}{\left|\mathbf{r}-\mathbf{r}'\right|}\,dV' + \mathbf{A}_S(\mathbf{r},t) \tag{3}$$

and

$$\mathbf{A}_S(\mathbf{r},t) = \frac{\mu_0}{4\pi}\int_{\mathbb{R}^3\setminus V_C}\frac{\mathbf{J}_S(\mathbf{r}',t)}{\left|\mathbf{r}-\mathbf{r}'\right|}\,dV'. \tag{4}$$

By imposing Ohm's law in weak form, we have

$$\int_{V_C}\mathbf{W}\cdot\left[\sigma^{-1}\mathbf{J} + \frac{\partial}{\partial t}\left(\frac{\mu_0}{4\pi}\int_{V_C}\frac{\mathbf{J}(\mathbf{r}',t)}{\left|\mathbf{r}-\mathbf{r}'\right|}\,dV' + \mathbf{A}_S\right)\right]dV = 0 \tag{5}$$

for any $\mathbf{W}\in S$ and with $\mathbf{J}\in S$, $S$ being the set of solenoidal vector functions with continuous normal component across $V_C$ and zero normal component on the boundary of $V_C$.

The numerical model implemented in the CARIDDI code [1, 9] is obtained by applying the Galerkin method to Eq. (5), having expanded the unknown current density $\mathbf{J}$ as $\mathbf{J}(\mathbf{r},t)=\sum_k I_k(t)\,\nabla\times\mathbf{N}_k(\mathbf{r})$, where $\{\mathbf{N}_k\}$ is the set of edge element basis functions whose uniqueness is assured by the tree-cotree gauge [17]. Upon discretization, we obtain the following linear dynamical system:

$$\mathbf{L}\frac{d\mathbf{I}}{dt} + \mathbf{R}\,\mathbf{I} = \frac{d\mathbf{V}_0}{dt},\quad t\geq 0,\qquad \mathbf{I}(0)=\mathbf{i}_0, \tag{6}$$

where $\mathbf{I}(t)=\left[I_i(t)\right]$, $\mathbf{R}=\left[R_{ij}\right]$, $\mathbf{L}=\left[L_{ij}\right]$, $\mathbf{V}_0(t)=\left[V_{0,i}(t)\right]$, and

$$R_{ij} = \int_{V_C}\nabla\times\mathbf{N}_i(\mathbf{r})\cdot\sigma^{-1}\,\nabla\times\mathbf{N}_j(\mathbf{r})\,dV \tag{7}$$
$$L_{ij} = \frac{\mu_0}{4\pi}\int_{V_C}\int_{V_C}\frac{\nabla\times\mathbf{N}_i(\mathbf{r})\cdot\nabla\times\mathbf{N}_j(\mathbf{r}')}{\left|\mathbf{r}-\mathbf{r}'\right|}\,dV\,dV' \tag{8}$$
$$V_{0,k}(t) = -\int_{V_C}\nabla\times\mathbf{N}_k(\mathbf{r})\cdot\mathbf{A}_S(\mathbf{r},t)\,dV. \tag{9}$$

The quantity $\mathbf{i}_0$ in (6) denotes the prescribed initial condition. The vector potential $\mathbf{A}_S(\mathbf{r},t)$ is produced by a known source current density $\mathbf{J}_S(\mathbf{r},t)$, and it can be computed by means of the Biot-Savart law. For the sake of simplicity, we assume that the source current density $\mathbf{J}_S$ is due to the current $i_S(t)$ flowing in a given coil.

The ODE in (6) is integrated in time by the following scheme [18]:

$$\mathbf{A}\,\mathbf{I}_n = \mathbf{L}\,\mathbf{I}_{n-1} + \mathbf{V}_{0,n} - \mathbf{V}_{0,n-1},\qquad \mathbf{I}_0 = \mathbf{i}_0, \tag{10}$$

where $\mathbf{I}_n=\mathbf{I}(n\Delta t)$ and $\mathbf{V}_{0,n}=\mathbf{V}_0(n\Delta t)$, and matrix $\mathbf{A}$ is expressed as

$$\mathbf{A} = \mathbf{R}\,\Delta t + \mathbf{L}. \tag{11}$$

The quantity $\Delta t$ is the time integration step. Matrix $\mathbf{R}$ is sparse and local, whereas $\mathbf{L}$ is fully populated because it arises from the discretization of an integral operator. Note that $\mathbf{R}$, $\mathbf{L}$, and $\mathbf{A}$ are positive definite. The solution of large linear systems with a dense coefficient matrix poses a relevant computational challenge, as it requires fast matrix solvers with reduced algorithmic and memory complexity. In this work, we solve Eq. (10) iteratively for each prescribed $n$. To speed up the iterative solution, we develop an accurate and computationally efficient preconditioner $\mathbf{P}$ built upon the H2-matrix representation of matrix $\mathbf{A}$.
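
A short sketch may help fix ideas. The following Python fragment (ours, not part of the CARIDDI code) advances Eq. (10) with plain dense NumPy matrices; the synthetic positive definite matrices R and L and the linear-ramp source are stand-ins for the quantities assembled in Eqs. (7)-(9), and the direct solve is a placeholder for the preconditioned iterative solver discussed in the following sections.

```python
import numpy as np

def time_step_mqs(R, L, V0_samples, i0, dt):
    """Advance the MQS system of Eq. (6) with the implicit scheme of
    Eqs. (10)-(11): (R*dt + L) I_n = L I_{n-1} + V0_n - V0_{n-1}."""
    A = R * dt + L                    # Eq. (11); positive definite here
    I = i0.copy()
    history = [I]
    for n in range(1, len(V0_samples)):
        rhs = L @ I + V0_samples[n] - V0_samples[n - 1]   # Eq. (10)
        I = np.linalg.solve(A, rhs)   # stand-in for preconditioned GMRES
        history.append(I)
    return np.array(history)

# Tiny synthetic example (NOT the CARIDDI matrices): random SPD R and L
rng = np.random.default_rng(0)
N = 50
M = rng.standard_normal((N, N)); R = M @ M.T + N * np.eye(N)
M = rng.standard_normal((N, N)); L = M @ M.T + N * np.eye(N)
W = rng.standard_normal(N)
dt, n_steps = 1e-3, 20
V0 = np.array([W * n * dt for n in range(n_steps + 1)])   # linear ramp V0(t) = W t
sol = time_step_mqs(R, L, V0, np.zeros(N), dt)
print(sol.shape)                      # (21, 50): one row per time level
```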


3. The need of an effective preconditioner

As is well known, the main operations performed at each step of an iterative solver are: (i) the computation of the matrix-by-vector product $\mathbf{A}\mathbf{x}$, and (ii) the application of the preconditioner. The overall computational cost of the iterative solution is then proportional to the number of arithmetic operations required to carry out the multiplication $\mathbf{A}\mathbf{x}$ and to the number of iterations that are necessary to converge to a user-defined accuracy. The role of the preconditioner is to reduce the number of iterations and, ultimately, the overall solution cost [19]. In this work, we apply the preconditioner from the left and replace the linear system in (10) with the new equivalent system

$$\mathbf{P}^{-1}\mathbf{A}\,\mathbf{I}_n = \mathbf{P}^{-1}\left(\mathbf{L}\,\mathbf{I}_{n-1} + \mathbf{V}_{0,n} - \mathbf{V}_{0,n-1}\right), \tag{12}$$

where the preconditioned stiffness matrix $\mathbf{P}^{-1}\mathbf{A}$ is assumed to be well conditioned and to have most of its eigenvalues clustered around one. Therefore, the "ideal" preconditioner has to be a good approximation of $\mathbf{A}^{-1}$. A critical issue in this context is the dependence of matrix $\mathbf{A}$ on $\Delta t$. In fast transient analysis, $\Delta t$ has to be small enough to model properly the underlying dynamics. On the other hand, in slow transients $\Delta t$ has to be large enough to cover the full time interval of interest at acceptable computational cost. The dependence of $\mathbf{A}$ on $\Delta t$ yields the dependence of $\mathbf{P}$ on $\Delta t$. According to (11), for small $\Delta t$ the preconditioner should be tailored mainly to $\mathbf{L}$, whereas for large $\Delta t$ it should be tailored mostly to $\mathbf{R}$. Here and in the following, "small" and "large" $\Delta t$ should be understood relative to the slowest eigenmodes of the dynamical matrix $\mathbf{L}^{-1}\mathbf{R}$. In intermediate cases ($\Delta t$ neither too small nor too large), the preconditioner should depend on both $\mathbf{R}$ and $\mathbf{L}$. Since the use of an implicit time stepping does not impose specific restrictions on $\Delta t$, the appropriate value of $\Delta t$ depends on the underlying dynamics induced by the source, i.e., on the rate of change of $\mathbf{V}_0(t)$ defined in (9) and appearing in (6).
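
As an illustration of left preconditioning in practice, the sketch below wraps an approximate inverse into a SciPy LinearOperator and hands it to GMRES. The matrices are illustrative stand-ins, not the CARIDDI ones; choosing P = R, as in the reference preconditioner discussed next, amounts to factorizing the sparse resistance matrix once and reusing its factors at every iteration.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(1)
N = 200
R = sp.diags(1.0 + rng.random(N)).tocsc()     # sparse "resistance" stand-in
L = np.exp(-np.abs(np.subtract.outer(np.arange(N), np.arange(N))) / 2.0)
A = R.toarray() * 1e-3 + L                    # A = R*dt + L with dt = 1e-3

lu = spla.splu(R)                             # factorize P = R once
M = spla.LinearOperator((N, N), matvec=lu.solve)   # M ~ P^{-1}, applied per iteration

b = rng.standard_normal(N)
res = []
x, info = spla.gmres(A, b, M=M, rtol=1e-5, atol=0.0, maxiter=90,
                     callback=res.append, callback_type='pr_norm')
# (older SciPy releases take tol= instead of rtol=)
print(info, len(res))                         # info == 0 if converged; len(res) = iterations
```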

A canonical waveform for $i_S(t)$, which can be used as the elementary building block to approximate more complex waveforms¹, is the linear ramp $i_S(t)=\alpha t$ for $t\geq 0$, where $\alpha$ is the rate of change of the electrical current. Consequently, $\mathbf{V}_0(t)=\mathbf{W}t$ for $t\geq 0$ is also linear. The source term $\mathbf{V}_{0,n}-\mathbf{V}_{0,n-1}$ is equal to $\mathbf{W}\Delta t$, which is constant for any $n$. Let us consider a transient starting from a vanishing initial condition $\mathbf{i}_0=\mathbf{0}$ in (6). As long as the current at the previous step is negligible, Eq. (10) reduces to $\mathbf{A}\,\mathbf{I}_n=\mathbf{W}\Delta t$. Therefore, the matrix to be inverted is $\mathbf{A}$, and the preconditioner $\mathbf{P}$ has to be "tuned" on such a matrix. On the other hand, when the stationary state is achieved, the solution $\mathbf{I}_n$ is constant, and Eq. (10) reduces to $(\mathbf{R}\Delta t+\mathbf{L})\mathbf{I}=\mathbf{L}\mathbf{I}+\mathbf{W}\Delta t$, that is, $\mathbf{R}\mathbf{I}=\mathbf{W}$. In this case, the matrix to be inverted is $\mathbf{R}$, and the preconditioner needs to be "tuned" on $\mathbf{R}$. This explains why the preconditioner based on matrix $\mathbf{R}$ is not effective in dealing with fast dynamics. As an example, we measured its performance on a 3D transient eddy current problem where the driving current has a time-varying waveform given by $i_S(t)=\alpha t$ for $t\geq 0$, with $\alpha=10^7\,\mathrm{Hz}$. This electrical current flows in a circular ring (radius $R=10\,\mathrm{cm}$) centered on the axis of a flat conducting disk (radius $R_0=1\,\mathrm{m}$, thickness $3\,\mathrm{mm}$). Validation is carried out against the Femlab© code (version 3.51), as shown in Figure 1 (right). Figure 1 (left) shows the induced current distribution at $t=3.4\,\mathrm{ms}$, together with the finite element mesh used in the computation and the ohmic losses in the conductor. Figure 2 shows the behavior of the iterative inversion using $\mathbf{P}=\mathbf{R}$ as a preconditioner and the Generalized Minimal Residual method (GMRES) [20] as the iterative solver. The number of discrete unknowns for this test problem is 22,657. The results in Figure 2 clearly show that the resistive preconditioner $\mathbf{P}=\mathbf{R}$ is not effective at the early stage of the transient, when the inductive effects accounted for by matrix $\mathbf{L}$ are not negligible.

Figure 1.

Left: discretization of the plate used for assessing the performance of the preconditioner, together with the induced current distribution due to a current on a coaxial ring at a significant instant of the transient. Right: power losses on the plate as a function of time computed by CARIDDI (blue) and by the Femlab© code (red), under the hypothesis of axisymmetry.

Figure 2.

Number of iterations of GMRES using R as a preconditioner during the transient.


4. H2-matrix representation

The H2-matrix representation is widely used for efficiently storing a dense matrix arising from the discretization of a boundary integral equation [15]. In H2-matrix theory, the degrees of freedom (DoFs) of the underlying finite element mesh (edges, in our problem) are partitioned into small subsets (or clusters) of nodes. We illustrate the partitioning process with reference to the geometry shown in Figure 3, which will be described in detail in the next section. At the first step, the set of DoFs is split into two clusters of nodes (see Figure 3 (left)); then, each cluster is split recursively (see Figure 3 (center) and (right)) until the recursive partitioning reaches a minimum number of DoFs per cluster (called leaf size, or LS). The binary split is carried out using geometric information from the mesh. First, the coordinates of the barycenters of the DoFs are computed along the x, y, and z axes. Then, the axis (x, y, or z) corresponding to the largest geometric extent is selected, and the DoFs are split by introducing a plane orthogonal to this axis and passing through the median of the distribution of the DoF coordinates along the same axis (see Figure 4). A key point of the partitioning is that clusters of nodes are built to contain DoFs that are geometric neighbors in the finite element mesh. From a geometric viewpoint, all the DoFs belonging to a given cluster of edges are contained within a parallelepiped parallel to the coordinate planes (see Figure 5). The key parameters related to a cluster $N_k$ are: (i) the number of DoFs contained in the cluster, (ii) the parallelepiped $P_k$ containing the DoFs of $N_k$, and (iii) the diameter $d_k$ (the diagonal of $P_k$) and the center $c_k$ of $P_k$.

Figure 3.

The hierarchical partitioning of the DoFs. Left: the DoFs of the mesh are split into two sets (or clusters) of nodes. Center: the DoFs of the red set at the first step are partitioned into two subsets. Right: the DoFs of the red set at the second step are further partitioned into two subsets, and so on.

Figure 4.

Example of DoF splitting for a two-dimensional (2D) unstructured mesh.

Figure 5.

Top: two clusters that are classified as far. The distance between the barycenters of the clusters is large compared with the size of the individual regions occupied by the DoFs (given by the diagonal of the parallelepiped that bounds each region). Bottom: two clusters that are classified as near. The distance between the barycenters is not large enough compared with the size of the individual regions occupied by the DoFs.
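
A minimal sketch of the recursive splitting described above, assuming each DoF is represented simply by its barycenter coordinates; the leaf size LS and the per-cluster quantities (i)-(iii) map onto the fields stored in each tree node.

```python
import numpy as np

def build_cluster_tree(points, indices=None, leaf_size=32):
    """Recursively bisect a set of DoF barycenters at the median along the
    axis of largest extent, storing P_k, d_k, and c_k for each cluster."""
    if indices is None:
        indices = np.arange(len(points))
    pts = points[indices]
    lo, hi = pts.min(axis=0), pts.max(axis=0)        # bounding parallelepiped P_k
    node = {"indices": indices,
            "center": 0.5 * (lo + hi),               # c_k
            "diameter": float(np.linalg.norm(hi - lo)),  # d_k, diagonal of P_k
            "children": []}
    if len(indices) > leaf_size:
        axis = int(np.argmax(hi - lo))               # split direction
        median = np.median(pts[:, axis])             # splitting plane position
        mask = pts[:, axis] <= median
        if mask.any() and not mask.all():            # guard against degenerate splits
            node["children"] = [
                build_cluster_tree(points, indices[mask], leaf_size),
                build_cluster_tree(points, indices[~mask], leaf_size)]
    return node

# Example: 1000 random DoF barycenters in a thin plate-like box
rng = np.random.default_rng(2)
pts = rng.random((1000, 3)) * np.array([1.0, 1.0, 0.05])
tree = build_cluster_tree(pts, leaf_size=100)
print(len(tree["children"]))                         # 2: the first binary split
```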

This partitioning process can be efficiently represented in a binary tree data structure. The binary tree is of paramount importance to classify interactions between clusters of DoFs in terms of near (or inadmissible) interactions and far (or admissible) interactions. An interaction is defined as far if the two clusters of DoFs are geometrically well separated in the mesh, and near otherwise, as shown in Figure 5. We use a simple criterion based on geometric distance for the classification. Two clusters of nodes $N_k$ and $N_j$ are classified as far if the ratio

$$\xi_{kj} \,\hat{=}\, \frac{\left\|\mathbf{c}_k-\mathbf{c}_j\right\|}{\max\left(d_k,d_j\right)}, \tag{13}$$

is greater than a suitable threshold $\xi_{THR}>1$.
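
Given two nodes of such a cluster tree, the far/near classification of Eq. (13) reduces to a one-line test; the threshold value below is a placeholder.

```python
import numpy as np

def is_far(node_k, node_j, xi_thr=1.1):
    """Admissibility test of Eq. (13): clusters are 'far' when the distance
    between their centers exceeds xi_thr times the larger diameter."""
    dist = np.linalg.norm(node_k["center"] - node_j["center"])
    return dist / max(node_k["diameter"], node_j["diameter"]) > xi_thr

# Clusters described by the same fields used in the tree sketch above
box_a = {"center": np.array([0.5, 0.5, 0.0]), "diameter": np.sqrt(2.0)}
box_b = {"center": np.array([4.0, 0.5, 0.0]), "diameter": np.sqrt(2.0)}
box_c = {"center": np.array([1.2, 0.5, 0.0]), "diameter": np.sqrt(2.0)}
print(is_far(box_a, box_b))   # True: well separated, admissible (low-rank) block
print(is_far(box_a, box_c))   # False: nearly touching, inadmissible (full) block
```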

The mesh partitioning into clusters of nodes is represented by means of the so-called Block Cluster Tree (BCT) data structure (see Figure 6 and [16] for details). Thanks to its binary structure, the BCT can be easily mapped into a corresponding partitioning of the discretization matrix $\mathbf{L}$ and, hence, of $\mathbf{A}$ into small blocks, naturally yielding a decomposition $\mathbf{A}=\mathbf{A}_{near}+\mathbf{A}_{far}$.

Figure 6.

Example of cluster tree partitioning (n=8 clusters). Left: The block cluster tree. Right: Matrix representation of the block cluster tree after four levels of matrix block partitioning. Admissible blocks are green, and inadmissible blocks are red.

Interactions of near clusters lead to high- or full-rank blocks (the near part $\mathbf{A}_{near}$ of $\mathbf{A}$) that need to be explicitly computed and stored, whereas far-cluster interactions give rise to low-rank blocks (the far part $\mathbf{A}_{far}$ of $\mathbf{A}$). In the H2-matrix representation, the far interactions between, say, the $i$-th and $j$-th clusters are approximated by a low-rank matrix factorization of the form

$$\mathbf{A}_{ij} \approx \mathbf{V}_i\,\mathbf{S}_{ij}\,\mathbf{V}_j^T. \tag{14}$$

Such a factorization can be obtained by using a degenerate Lagrange polynomial approximation of the kernel [3]. This is possible because the kernel defined in Eq. (8) is smooth when evaluated on the far field. The columns of matrix $\mathbf{V}_i$ represent the expansion basis for the DoFs associated with the $i$-th cluster, and $\mathbf{S}_{ij}$ is the coupling matrix of dimensions $k_i\times k_j$, where $k_i$ and $k_j$ are the column dimensions of $\mathbf{V}_i$ and $\mathbf{V}_j$, respectively. It is worth noting that for far interactions, the integers $k_i$ and $k_j$ are expected to be much smaller than the number of rows of matrices $\mathbf{V}_i$ and $\mathbf{V}_j$. This representation of the far field is highly efficient in terms of both memory and computational costs, since it involves only low-rank matrices ($\mathbf{V}_i$ for any $i$, and $\mathbf{S}_{ij}$ for any $i$ and $j$). Indeed, linear memory complexity can be achieved by means of the H2-matrix representation [12].

Hereafter, the following notation will be used: we denote by $N_p$ the number of Lagrange points per direction used for the degenerate kernel approximation, and by $e_{ij}$ the relative error of the low-rank representation of a matrix block $\mathbf{A}_{ij}$ associated with the interactions of the $i$-th and $j$-th far clusters:

$$e_{ij} = \frac{\left\|\mathbf{A}_{ij}-\mathbf{V}_i\,\mathbf{S}_{ij}\,\mathbf{V}_j^T\right\|}{\left\|\mathbf{A}_{ij}\right\|}, \tag{15}$$

where $\|\cdot\|$ is the conventional 2-norm for matrices.
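
The construction of Eqs. (14) and (15) can be reproduced in a few lines on a one-dimensional toy problem with the kernel 1/|x−y|. The sketch below uses Lagrange interpolation on Chebyshev points (a common choice; the chapter does not specify how its Np Lagrange points are placed) for two well-separated clusters.

```python
import numpy as np

def lagrange_basis(nodes, x):
    """Matrix V with V[m, a] = l_a(x[m]), the a-th Lagrange polynomial on 'nodes'."""
    V = np.ones((len(x), len(nodes)))
    for a, xa in enumerate(nodes):
        for b, xb in enumerate(nodes):
            if a != b:
                V[:, a] *= (x - xb) / (xa - xb)
    return V

def cheb_nodes(lo, hi, p):
    t = np.cos((2 * np.arange(p) + 1) * np.pi / (2 * p))   # Chebyshev points in [-1, 1]
    return 0.5 * (lo + hi) + 0.5 * (hi - lo) * t

kernel = lambda x, y: 1.0 / np.abs(np.subtract.outer(x, y))  # smooth on far pairs

rng = np.random.default_rng(3)
xi = np.sort(rng.uniform(0.0, 1.0, 80))      # DoFs of cluster i
xj = np.sort(rng.uniform(3.0, 4.0, 60))      # DoFs of cluster j, far from cluster i
Np = 4                                       # Lagrange points per direction

ni, nj = cheb_nodes(0.0, 1.0, Np), cheb_nodes(3.0, 4.0, Np)
Vi, Vj = lagrange_basis(ni, xi), lagrange_basis(nj, xj)  # expansion bases
Sij = kernel(ni, nj)                         # Np x Np coupling matrix
Aij = kernel(xi, xj)                         # exact far block

# Relative error of Eq. (15), in the matrix 2-norm
e_ij = np.linalg.norm(Aij - Vi @ Sij @ Vj.T, 2) / np.linalg.norm(Aij, 2)
print(f"relative error e_ij = {e_ij:.2e}")   # decreases rapidly as Np grows
```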


5. The H2-matrix factorization

To develop an effective preconditioner $\mathbf{P}$ for solving Eq. (12), we first set $\mathbf{P}$ equal to $\mathbf{A}_{H^2}$, the H2-matrix representation of $\mathbf{A}$. Then, by exploiting the hierarchical structure of $\mathbf{A}_{H^2}$, we build an efficient LU factorization of $\mathbf{P}$. A core operation of the factorization of $\mathbf{P}$ is the computation of the Schur complement for the block matrix $\mathbf{A}_{H^2}$. In general, this computation is very expensive. However, the H2-approximation introduces many off-diagonal blocks that vanish upon a suitable orthonormal transformation of $\mathbf{A}_{H^2}$. Hence, the overall factorization cost of $\mathbf{P}$ can be significantly reduced to $O(N\log N)$, $N$ being the number of DoFs [16, 21].

The factorization algorithm proceeds level by level, starting from the last level of the BCT and computing a partial LU decomposition at each cluster of the tree. With reference to the leaf level (the last level in the BCT), the original stiffness matrix $\mathbf{P}$ is expressed as

$$\mathbf{P} = \mathbf{L}_o\,\mathbf{C}_1\,\mathbf{U}_p \tag{16}$$

where $\mathbf{C}_1=\mathrm{diag}\left(\mathbf{Id}_1,\mathbf{Id}_2,\ldots,\mathbf{Id}_{\#\text{leaf clusters}},\mathbf{C}\right)$, and $\mathbf{L}_o$ and $\mathbf{U}_p$ can be easily inverted because they are lower and upper triangular matrices, respectively. In (16), $\mathbf{Id}_i$ is an identity matrix of proper size, and $\mathbf{C}$ is a full matrix of dimension much smaller than the original matrix $\mathbf{P}$. After the factorization (16) has been computed at the current level, the algorithm continues at a higher level of the BCT by applying the same process recursively to matrix $\mathbf{C}$.

Two key computational aspects have to be carefully considered and monitored during the factorization. The first one is the accuracy of the approximation $\mathbf{A}_{H^2}$ of $\mathbf{A}$, which is used to define the preconditioner $\mathbf{P}$. Obviously, the closer $\mathbf{A}_{H^2}$ is to $\mathbf{A}$, the more effective $\mathbf{P}$ is as a preconditioner for the iterative solver. The second aspect refers to the error introduced in the approximate factorization of $\mathbf{P}$. In the rest of the section, we discuss the factorization of $\mathbf{P}$ in detail.

Specifically, matrix $\mathbf{P}$ is factorized in terms of a partial LU decomposition [16]. To perform this operation efficiently, exploiting some properties of H2-matrix theory is of paramount importance to reduce the overall cost. The complete multilevel factorization algorithm is presented in Algorithm 1.

Algorithm 1: The iterative factorization algorithm.

  1. Set $\mathbf{A}_{curr}=\mathbf{A}_{H^2}$.

  2. for $k$ = last level to the root do

  3.   for each cluster $i$ at level $k$ do

  4.     Set $\mathbf{A}_{curr}=\mathbf{Q}_i^H\,\mathbf{A}_{curr}\,\mathbf{Q}_i$, where $\mathbf{Q}_i$ is defined in Eq. (17).

  5.     Compute the partial LU factorization of $\mathbf{A}_{curr}$ as $\mathbf{A}_{curr}=\mathbf{L}_i\begin{pmatrix}\mathbf{I} & \mathbf{0}\\ \mathbf{0} & \tilde{\mathbf{A}}_{curr}\end{pmatrix}\mathbf{U}_i$, where $\mathbf{L}_i$ and $\mathbf{U}_i$ are lower and upper triangular matrices, respectively.

  6.     Set $\mathbf{A}_{curr}=\begin{pmatrix}\mathbf{I} & \mathbf{0}\\ \mathbf{0} & \tilde{\mathbf{A}}_{curr}\end{pmatrix}$.

  7.   end for

  8. end for

At step 4, matrix $\mathbf{Q}_i$ encompasses the basis related to the $i$-th cluster as follows:

$$\mathbf{Q}_i = \mathrm{diag}\left(\mathbf{Id}_1,\mathbf{Id}_2,\ldots,\left[\mathbf{V}_i^{\perp}\ \mathbf{V}_i\right],\ldots,\mathbf{Id}_{\#\text{leaf clusters}}\right). \tag{17}$$

The orthonormal transformation at step 4 is computationally cheap to carry out because of the simplified structure of matrix $\mathbf{Q}_i$, which involves only the $i$-th diagonal block of matrix $\mathbf{A}_{curr}$, of dimension $d_i\times d_i$. In the definition of $\mathbf{Q}_i$ in Eq. (17), $\mathbf{V}_i^{\perp}$ is the orthogonal complement of the cluster basis $\mathbf{V}_i$. This update of $\mathbf{A}_{curr}$, which essentially corresponds to a change of basis, is very effective on the blocks of $\mathbf{A}_{curr}$ that are far with respect to the $i$-th cluster. Thanks to (14), it is easy to verify that the first $d_i-k_\ell$ rows of these blocks are zeroed out, where $k_\ell$ is the rank used for the approximate factorization at the $\ell$-th level of the BCT. Since most of the blocks in the original matrix $\mathbf{A}$ are far, they will be efficiently compressed by the procedure described above (see [16]). At the end of the recursion, a complete LU factorization of a dense matrix is required. However, this operation is rather cheap because matrix $\tilde{\mathbf{A}}_{curr}$, resulting from the previous steps, is small.
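
The row-zeroing effect of this change of basis can be verified numerically: for a far block in the form of Eq. (14), multiplying from the left by $[\mathbf{V}_i^{\perp}\ \mathbf{V}_i]^H$ leaves nonzero entries only in the last rows. A small NumPy check with random orthonormal bases (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
d_i, d_j, k = 12, 10, 3                     # cluster sizes and approximation rank

# Orthonormal cluster basis V_i (d_i x k) and its orthogonal complement
Q, _ = np.linalg.qr(rng.standard_normal((d_i, d_i)))
Vi, Vi_perp = Q[:, :k], Q[:, k:]
Vj = np.linalg.qr(rng.standard_normal((d_j, k)))[0]
Sij = rng.standard_normal((k, k))

Aij = Vi @ Sij @ Vj.T                       # far block in the form of Eq. (14)
Qi_block = np.hstack([Vi_perp, Vi])         # i-th diagonal block of Q_i, Eq. (17)
T = Qi_block.T @ Aij                        # row part of Q_i^H A_curr Q_i

print(np.abs(T[:d_i - k]).max())            # ~1e-16: first d_i - k rows vanish
print(np.abs(T[d_i - k:]).max())            # O(1): the last k rows carry the block
```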

We highlight an important computational aspect of Algorithm 1. Upon processing the $i$-th cluster at step 5, the partial LU factorization of $\mathbf{A}_{curr}$ reads

$$\mathbf{A}_{curr} = \begin{pmatrix}\mathbf{A}_{ii} & \mathbf{A}_{ip}\\ \mathbf{A}_{pi} & \mathbf{A}_{pp}\end{pmatrix} = \underbrace{\begin{pmatrix}\mathbf{L}_{ii} & \mathbf{0}\\ \mathbf{A}_{pi}\mathbf{U}_{ii}^{-1} & \mathbf{I}\end{pmatrix}}_{\mathbf{L}_i}\begin{pmatrix}\mathbf{I} & \mathbf{0}\\ \mathbf{0} & \tilde{\mathbf{A}}_{curr}\end{pmatrix}\underbrace{\begin{pmatrix}\mathbf{U}_{ii} & \mathbf{L}_{ii}^{-1}\mathbf{A}_{ip}\\ \mathbf{0} & \mathbf{I}\end{pmatrix}}_{\mathbf{U}_i} \tag{18}$$

where $\mathbf{A}_{ii}=\mathbf{L}_{ii}\mathbf{U}_{ii}$, the dimension of the block $\mathbf{A}_{ii}$ being equal to the number of nodes in the $i$-th cluster minus the approximation rank at the current level, and

$$\tilde{\mathbf{A}}_{curr} = \mathbf{A}_{pp} - \mathbf{A}_{pi}\,\mathbf{U}_{ii}^{-1}\mathbf{L}_{ii}^{-1}\,\mathbf{A}_{ip}. \tag{19}$$

Clearly, matrix $\tilde{\mathbf{A}}_{curr}$ still has a block structure, which is inherited from the cluster-to-cluster interactions. Contributions to the update term $\mathbf{A}_{pi}\mathbf{U}_{ii}^{-1}\mathbf{L}_{ii}^{-1}\mathbf{A}_{ip}$ due to near-cluster mesh interactions are computed explicitly. On the other hand, contributions due to far-cluster interactions are approximated using a low-rank factorization of the form given by Eq. (14), where the cluster bases are simply updated by appending new columns pointing in the direction of the space orthogonal to the range of $\mathbf{V}_i$, thus avoiding recomputing the whole basis from scratch. This approximate computation of the Schur complement is essential to achieve $O(N\log N)$ complexity. For further details, see [16].
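
The partial elimination of Eqs. (18) and (19) can be written down directly for a dense matrix. The sketch below forms the factors $\mathbf{L}_i$ and $\mathbf{U}_i$ and the Schur complement without any H2 compression of the update term, which is precisely where the actual algorithm saves its work.

```python
import numpy as np
import scipy.linalg as sla

def partial_lu(A, m):
    """Eliminate the leading m x m block of A as in Eqs. (18)-(19), returning
    L_i, U_i, and A_tilde with A = L_i @ blockdiag(I, A_tilde) @ U_i."""
    Aii, Aip = A[:m, :m], A[:m, m:]
    Api, App = A[m:, :m], A[m:, m:]
    Lii, Uii = sla.lu(Aii, permute_l=True)   # Aii = Lii @ Uii (Lii triangular up to pivoting)
    X = Api @ np.linalg.inv(Uii)             # Api Uii^{-1}
    Y = np.linalg.solve(Lii, Aip)            # Lii^{-1} Aip
    n = A.shape[0]
    Li = np.eye(n); Li[:m, :m] = Lii; Li[m:, :m] = X
    Ui = np.eye(n); Ui[:m, :m] = Uii; Ui[:m, m:] = Y
    A_tilde = App - X @ Y                    # Schur complement, Eq. (19)
    return Li, Ui, A_tilde

rng = np.random.default_rng(5)
n, m = 10, 4
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
Li, Ui, A_tilde = partial_lu(A, m)
mid = np.eye(n); mid[m:, m:] = A_tilde
print(np.allclose(Li @ mid @ Ui, A))             # True: exact block factorization
```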

Summing up, in this method two important sources of errors are introduced during the factorization:

  • The far-field approximation. Rather than factorizing the original matrix $\mathbf{A}$, we factorize its H2-matrix representation $\mathbf{A}_{H^2}$. The accuracy of this approximation depends on the number of Lagrange points ($N_p$) used to compute the H2-matrix approximation of $\mathbf{A}$.

  • The error introduced in the Schur complement update (19) for each processed cluster. This error is controlled by a suitable threshold parameter and is the dominant one in the presented algorithm.


6. Numerical results and discussion

The numerical example described in this section refers to eddy current computations in fusion devices [22, 23]. The mesh of our case study is depicted in Figure 7. The plasma made of hydrogen isotopes inside the vacuum chamber of the device can be broadly described as a deformable current-carrying conductor in the frame of the so-called Magneto-Hydro-Dynamics (MHD) equations. Consequently, the electromagnetic interaction of the plasma with the surrounding conducting structures is fundamental in the description of the behavior of such devices. In nominal situations, when one wants to intentionally control the plasma current, position, and shape [24], this interaction occurs on a time scale of the order of tens or hundreds of milliseconds in the largest existing devices. Conversely, off-normal events can also occur, such as the so-called disruptions [25]; in such cases, the plasma energy content is released on a much shorter timescale (in the millisecond or submillisecond range).

Figure 7.

The analyzed case study. Left: mesh of the ITER magnet system. Right: a 45° sector.

It should be noted that the geometry of fusion devices, which is ideally axisymmetric, in practice can show significant three-dimensional features (ports, holes, cuts, slits, etc.), which must be taken into account when computing the magnetic fields, the current densities, and the related force densities required for a satisfactory electromagnetic modeling. Summing up, fusion devices are particularly challenging from the computational electromagnetics viewpoint: they are multiphysics devices, with complicated three-dimensional features giving rise to huge computational models, and with different timescales of interest involved. The methods developed in the present paper are designed to enable us to tackle such intrinsic difficulties to some extent. The numbers of nodes and elements of the finite element mesh are 249,849 and 132,562, respectively, resulting in 145,243 DoFs. The first three partitioning levels for this mesh are schematically depicted in Figure 3. In this study, the time step $\Delta t$ is set equal to 1 ms and 0.1 ms, to account for the different timescales mentioned above.

As mentioned above, the main operations required by an iterative solver are: (i) the computation of the matrix-by-vector product $\mathbf{A}\mathbf{x}$ and (ii) the application of the preconditioner at every iteration. We choose GMRES as the iterative solver. The preconditioner is the approximate factorization of the compressed matrix $\mathbf{A}_{H^2}$ presented in Section 5. Clearly, the use of a preconditioner introduces additional memory and computational costs, during the preprocessing and at each iteration. Therefore, it pays off only if it yields a sufficient reduction of the number of required iterations. Here, to assess the performance of the preconditioner, we use matrix $\mathbf{A}$ without any approximation in computing the matrix-by-vector product during the GMRES iterations. We stop the iterative process either when the initial preconditioned residual norm is reduced by five orders of magnitude or after 90 iterations. In our case, the computational time required to apply the preconditioner is proportional to its memory footprint. This is because of the almost linear complexity of the overall procedure (see [16] for details): both the computational time and the memory requirements increase linearly with the number of DoFs.

The performance of the proposed approach depends mainly on two key parameters: the number of Lagrange points $N_p$ and the leaf size LS. We recall that LS determines the size of matrix $\mathbf{A}_{near}$ arising from near-cluster interactions, whereas $N_p$ affects the number of columns of $\mathbf{V}_i$, the size of $\mathbf{S}_{ij}$, and, consequently, the accuracy of $\mathbf{A}_{far}$. It is worth noting that $\mathbf{A}_{far}$ in the H2-matrix representation does not contribute significantly to the overall required memory: in all numerical experiments of this work, the memory required by $\mathbf{A}_{far}$ is less than 10% of the storage required by $\mathbf{A}_{near}$.

From (16), we have that the preconditioner memory footprint depends only on the storage of matrices $\mathbf{L}_o$ and $\mathbf{U}_p$ and on the dense but smaller block $\mathbf{C}$ of matrix $\mathbf{C}_1$. Figure 8 shows the memory required by the preconditioner ($M_{LU}$) as a function of LS and $N_p$. The result is normalized with respect to the memory required to store the full matrix $\mathbf{A}$, which is about 169 GB.

Figure 8.

Memory occupation as LS and $N_p$ vary.

The memory footprint required by matrix $\mathbf{C}_1$ decreases with LS. Indeed, an increase of LS enlarges the size of the block-to-block interactions, resulting in a smaller reduced matrix $\mathbf{C}_1$ because of a more effective compression of the block-to-block interactions. On the other hand, the sizes of $\mathbf{L}_o$ and $\mathbf{U}_p$ depend on: (i) the size of the near part, which is directly determined by LS, and (ii) the ability of the H2-matrix representation to compress the admissible block-to-block far-field interactions.

If we reduce LS, the memory required by the near-interaction matrix $\mathbf{A}_{near}$ decreases, but the memory required by the far-field block-to-block interaction matrix increases. Indeed, we observe that at LS=500 the required memory depends on $N_p$, whereas for LS=2000 the required memory is almost independent of $N_p$; an intermediate dependence is observed for LS=1000. The drawback related to the memory demands can be addressed in a parallel environment.

To assess the effectiveness of the proposed preconditioner, we make a comparison with the "reference" preconditioner based solely on matrix $\mathbf{R}$. This choice proved to be much more effective at accelerating the GMRES convergence than many other state-of-the-art algebraic preconditioners, such as robust multilevel incomplete LU factorizations and sparse approximate inverse methods. We also evaluated a preconditioner based on the sum of matrix $\mathbf{R}$ plus a contribution from matrix $\mathbf{L}$ restricted to the sparsity pattern of $\mathbf{R}$. This preconditioner failed to converge as the mesh size increased. Summing up, finding a proper preconditioner for this class of large-scale problems is challenging.

Figure 9 (top: $\Delta t=1$ ms; bottom: $\Delta t=0.1$ ms) shows the preconditioned relative residual norm $E_R$ of the GMRES solver versus the number of iterations, for different values of LS and $N_p$.

Figure 9.

Performance of the H2-LU preconditioner for different values of the leaf size (LS) and number of Lagrange points ($N_p$). Top: $\Delta t=1$ ms. Bottom: $\Delta t=0.1$ ms. The performance of the reference preconditioner based on $\mathbf{R}$ is also shown.

The numerical results clearly show that:

  • The H2-LU preconditioner generally performs much better than the preconditioner based on $\mathbf{R}$.

  • The efficiency of the preconditioner strongly depends on the choice of the parameters $N_p$ and LS that control the accuracy of $\mathbf{A}_{far}$.

We highlight that the accuracy of $\mathbf{A}_{far}$ is rather low for $N_p=1$ and that the preconditioner based on $\mathbf{R}$ performs better in this situation. In the other cases, the proposed H2-LU preconditioner outperforms the reference one based on $\mathbf{R}$, proving that it incorporates a good mechanism to balance the contributions of matrices $\mathbf{L}$ and $\mathbf{R}\Delta t$ in $\mathbf{A}$. This explains why, at the smaller $\Delta t$, when $\mathbf{L}$ gives the dominant contribution to $\mathbf{A}$, the number of iterations for the $\mathbf{R}$-based preconditioner increases. On the other hand, when using the H2-LU preconditioner, which accounts for both $\mathbf{R}\Delta t$ and the dominant part of $\mathbf{L}$, the number of iterations remains almost unchanged with $\Delta t$.

The accuracy of $\mathbf{A}_{far}$ is a critical point for the effectiveness of the proposed preconditioner. To assess it, we define two quantities, denoted as $E_A$ and $E_{SD}$, that are the mean and the standard deviation of the relative error $e_{ij}$ defined in (15) over all the admissible block-to-block pairs $(i,j)$. As expected, Figure 10 shows that the error decreases as $N_p$ increases. Note that for $N_p=1$, a relative error larger than 0.1 can be obtained. Finally, the accuracy of the preconditioner is shown in Figure 11 for different values of LS, $N_p$, and $\Delta t$. To quantify the error, we introduce the metric $E_S$ defined as

$$E_S = \frac{\left\|\mathbf{I}_{app}-\mathbf{I}\right\|}{\left\|\mathbf{I}\right\|}, \tag{20}$$

where $\mathbf{I}$ is computed using a direct method, whereas $\mathbf{I}_{app}$ is computed with the proposed preconditioned iterative solver. In (20), $\|\cdot\|$ is the 2-norm, and both $\mathbf{I}$ and $\mathbf{I}_{app}$ are evaluated at a prescribed time.

Figure 10.

Mean and standard deviation of the relative error in $\mathbf{A}_{far}$.

Figure 11.

Solution error $E_S$ as LS, $N_p$, and $\Delta t$ vary.

Figure 11 confirms that the accuracy of the solution increases with $N_p$. Only for LS=500 and $\Delta t=1$ ms is the solution with $N_p=3$ slightly less accurate than the one with $N_p=2$. It is worth noting that as the accuracy of $\mathbf{A}_{far}$ increases, the iterative scheme converges in very few iterations. Summing up, these numerical results confirm that an efficient iterative solver for the CARIDDI integral model of the eddy current problem needs a preconditioner incorporating some far-field approximations from the interaction matrix $\mathbf{L}$.


7. Conclusions and future work

In this paper, a new preconditioner based on the H2-LU decomposition has been introduced for solving integral equation models of eddy current problems. The application case refers to a large-scale model arising in nuclear fusion devices. The numerical results show that the preconditioner performs better than the purely resistive preconditioner. Specifically, the required number of iterations is strongly reduced at different time steps. Moreover, the increase of costs in terms of both memory and CPU time, during the preprocessing and at each iteration, is largely paid off by the reduced number of iterations. In our view, two possible areas of future development are: (i) deploying a parallel implementation of the H2-LU decomposition in order to speed up the code execution and reduce the memory demands, and (ii) studying the performance of the method when it is used as a direct solver.


Acknowledgments

This work was partially supported by the program “Dipartimenti di Eccellenza 2018–2022,” MIUR, Italian Ministry of University and Research, by the project “CAR4PES,” 4th Marconi Fusion cycle, MUR PRIN Grant# 20177BZMAH and by the Research Südtirol/Alto Adige 2019 grant FH2ASTER, Provincia autonoma di Bolzano/Alto Adige – Ripartizione Innovazione, Ricerca, Università e Musei, contract nr. 19/34.

References

  1. Albanese R, Rubinacci G. Finite element methods for the solution of 3D eddy current problems. Advances in Imaging and Electron Physics. 1997;102:1-86
  2. Rubinacci G, Tamburrino A, Villone F. Circuits/fields coupling and multiply connected domains in integral formulations. IEEE Transactions on Magnetics. 2002;38(2):581-584
  3. Rubinacci G, Tamburrino A. Automatic treatment of multiply connected regions in integral formulations. IEEE Transactions on Magnetics. 2010;46(8):2791-2794
  4. Carpentieri B. Algebraic preconditioners for the fast multipole method in electromagnetic scattering analysis from large structures: Trends and problems. Electronic Journal of Boundary Elements. 2009;7(1):13-49
  5. Carpentieri B. Fast iterative solution methods in electromagnetic scattering. Progress in Electromagnetics Research. 2007;79:151-178
  6. Carpentieri B. A matrix-free two-grid preconditioner for boundary integral equations in electromagnetism. Computing. 2006;77(3):275-296
  7. Carpentieri B, Bollhöfer M. Symmetric inverse-based multilevel ILU preconditioning for solving dense complex non-Hermitian systems in electromagnetics. Progress in Electromagnetics Research. 2012;128:55-74
  8. Carpentieri B, Duff I, Giraud L, Sylvand G. Combining fast multipole techniques and an approximate inverse preconditioner for large electromagnetism calculations. SIAM Journal on Scientific Computing. 2005;27(3):774-792
  9. Albanese R, Rubinacci G. Solution of three dimensional eddy current problems by integral and differential methods. IEEE Transactions on Magnetics. 1988;24(1):98-101
  10. Rubinacci G, Ventre S, Villone F, Liu Y. A fast technique applied to the analysis of resistive wall modes with 3D conducting structures. Journal of Computational Physics. 2009;228(5):1562-1572
  11. Hackbusch W. A sparse matrix arithmetic based on H-matrices. Part I: Introduction to H-matrices. Computing. 1999;62(2):89-108
  12. Börm S, Grasedyck L, Hackbusch W. Hierarchical Matrices. Leipzig: Max Planck Institute for Mathematics in the Sciences; 2003
  13. Voltolina D, Bettini P, Alotto P, Moro F, Torchio R. High-performance PEEC analysis of electromagnetic scatterers. IEEE Transactions on Magnetics. 2019;55(6):1-4
  14. Bebendorf M. Hierarchical LU decomposition-based preconditioners for BEM. Computing. 2005;74(3):225-247
  15. Börm S. Efficient Numerical Methods for Non-Local Operators: H2-Matrix Compression, Algorithms and Analysis. Vol. 14. Zürich, Switzerland: European Mathematical Society; 2010
  16. Ma M, Jiao D. Accuracy directly controlled fast direct solution of general H2-matrices and its application to solving electrodynamic volume integral equations. IEEE Transactions on Microwave Theory and Techniques. 2017;66(1):35-48
  17. Albanese R, Rubinacci G. Integral formulation for 3D eddy-current computation using edge elements. IEE Proceedings A - Physical Science, Measurement and Instrumentation, Management and Education - Reviews. 1988;135(7):457-462
  18. Atkinson K, Han W, Stewart D. Numerical Solution of Ordinary Differential Equations. Vol. 108. Hoboken, New Jersey, U.S.: John Wiley & Sons; 2011
  19. Golub GH, Van Loan CF. Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences. 3rd ed. Baltimore, MD, USA: The Johns Hopkins University Press; 1996
  20. Saad Y. Iterative Methods for Sparse Linear Systems. 2nd ed. Philadelphia: SIAM; 2003
  21. Ventre S, Carpentieri B, Giovinco G, Tamburrino A, Rubinacci G, Villone F. An H2-LU preconditioner for MQS model based on integral formulation. In: Poster Presentation at the 22nd International Conference on the Computation of Electromagnetic Fields, COMPUMAG 2019, July 15-19. Paris, France; 2019
  22. Testoni P, Forte R, Portone A, Rubinacci G, Ventre S. Damping effect on the ITER vacuum vessel displacements during slow downward locked and rotating asymmetric vertical displacement events. Fusion Engineering and Design. 2018;136:265-269
  23. Ramogida G, Maddaluno G, Villone F, et al. First disruption studies and simulations in view of the development of the DEMO physics basis. Fusion Engineering and Design. 2015;96-97:348-352
  24. Ariola M, Pironti A. Magnetic Control of Tokamak Plasmas. Vol. 187. Berlin/Heidelberg, Germany: Springer; 2008
  25. Hender T, Wesley J, Bialek J, et al. MHD stability, operational limits and disruptions. Nuclear Fusion. 2007;47(6):S128

Notes

  1. The linear ramp can be used to build a piecewise linear approximation of a continuous function.
