Open access peer-reviewed chapter

Modified Moving Least Squares Method for Two-Dimensional Linear and Nonlinear Systems of Integral Equations

By Massoumeh Poura’bd Rokn Saraei and Mashaallah Matinfar

Submitted: February 5th 2019. Reviewed: August 28th 2019. Published: May 13th 2020.

DOI: 10.5772/intechopen.89394


Abstract

This work focuses on a modified moving least squares (MMLS) method for solving two-dimensional linear and nonlinear systems of integral equations and systems of differential equations. The modified shape function is our main concern, so an efficient algorithm is presented for computing the shape function based on the moving least squares (MLS) method. In this modification, additional terms based on the coefficients of the polynomial base functions are imposed on the quadratic base functions (m = 2). The MMLS method is then developed for solving systems of two-dimensional linear and nonlinear integral equations on irregularly distributed nodes. This approach prevents a singular moment matrix in the context of MLS-based meshfree methods. Moreover, determining the best radius of the support domain of a field node is an open problem for MLS-based methods, so another important feature is that the MMLS algorithm can automatically find the best neighborhood radius for each node. Numerical examples are then presented that illustrate the main motivation for this work; these examples enable us to compare the two methods, MMLS and classical MLS.

Keywords

  • moving least squares
  • modified moving least squares
  • systems of integral equations
  • algorithm of shape function
  • numerical solutions
  • MSC 2010: 45G15
  • 45F05
  • 45F35
  • 65D15

1. Introduction

In mathematics, many functional equations describe real systems in the natural sciences (such as physics, biology, Earth science, and meteorology) and in engineering disciplines. For instance, a mathematical model from physics describes heat conduction as a partial differential equation, and its inverse problem as an integro-differential equation. Another example in nature is Laplace's equation, which corresponds to the construction of a potential for a vector field whose effect is known only at the boundary of the domain. Integral equations, in particular, have wide applicability, as cited in [1, 2, 3, 4]. Although there are many significant analytical methods for solving integral equations, in most cases, especially nonlinear ones, finding an analytical representation of the solution is very difficult; therefore, approximate solutions are required. The interested reader can find several numerical methods for approximating the solution of these problems in [5, 6, 7, 8, 9, 10, 11, 12, 13, 14] and the references therein.

Moreover, various numerical and analytical methods have been used to estimate the solution of integro-differential equations or Abel's integral equations [12, 15, 16, 17, 18]. Recently, meshless methods, particularly the moving least squares (MLS) method, have received attention for the solution of partial and ordinary differential equations. Using this approach, new methods such as the meshless local boundary integral equation method [19], the boundary node method (BNM) [20], the moving least squares reproducing polynomial meshless method [21], and other related methods have been constructed. A new class of meshless methods has been developed that relies only on a set of nodes, without the need for an additional mesh, in the solution of one-dimensional systems of integral equations [22].

The local approximation of the unknown function used in the MLS method makes it possible to choose the compact support domain of each data point as a sphere or a parallelogram box centered on that point [23, 24]. Each data point is therefore associated with the size of its compact support domain, the dilatation parameter. Consequently, the number of data points and the dilatation parameter directly affect the MLS approximation. Increasing the degree of the polynomial base functions gives a more faithful fit for complex data distributions; nevertheless, in this case it becomes more difficult to ensure the independence of the shape functions, and the least-squares minimization problem becomes ill-posed.

The common remedy for increasing the number of admissible node distributions is to enlarge the support domains (a valid node distribution is referred to as an "admissible node distribution" [23]). Several approaches have been proposed for choosing the radius of the support domain [25]; an efficient suggestion was raised in [26]. The author in [27] introduced a new algorithm for selecting a suitable radius of the domain of influence. Also, in [28], a modified MLS (MMLS) approximation was presented, whose shape function generation algorithm adds terms based on the coefficients of the polynomial basis functions; it is an efficient way of handling a singular moment matrix in MLS-based methods. The advantage of this class of methods compared to mesh-based methods, such as finite elements or finite volumes, is that the geometry of the domain does not interfere, because the approximation is based on a set of scattered points instead of domain elements for interpolation or approximation.


2. Methodology

2.1 Introduction of the MLS approximation

The moving least squares (MLS) method is a feasible numerical approximation method that extends the classical least squares method; it belongs to the class of meshless schemes and provides a highly accurate approximation. The MLS approximation is used in many meshless methods [12, 19, 21, 22, 29, 30]. Most procedures used to construct the MLS shape function rely on the support-domain concept: the support domain of the shape function is the set of nodes in the problem domain that directly contribute to the construction of the shape function, so the MLS shape function is locally supported. According to the classical least squares method, an optimization problem should be solved as follows

$$\min \sum_{j=1}^{m} \left[u^h(x_j) - u(x_j)\right]^2$$

where, ideally, the approximation function $u^h(x)$ should match the function $u(x)$. In the MLS approach, a weighted optimization problem is therefore solved, which depends on the nodal points. We start with the basic idea of taking a set of nodal points in $\Omega$, with $\Omega \subset \mathbb{R}^d$, and letting $\Omega_x \subset \Omega$ denote the set of nodes neighboring a point $x$; we then seek an approximation with $m$ basis functions, in a system with $n$ equations, of the form

$$T U = F$$

where $T$ consists of linear and nonlinear operators, $U = (u_1, u_2, \ldots, u_n)$ is the unknown vector of functions, and $F = (f_1, f_2, \ldots, f_n)$ is the known vector of functions.

Thus, for the approximation of any of the $u_i$, $i = 1, 2, \ldots, n$, in $\Omega_x$, $x \in \Omega_x$, $u_i^h(x)$ can be defined as

$$u_i^h(x) = \sum_{j=1}^{m} a_j(x)\, p_j(x) = P^T(x)\, a(x). \qquad (1)$$

Let $P = \{p_1, p_2, \ldots, p_m\}$ be a set of polynomials of degree at most $m$, $m \in \mathbb{N}$, and let $a(x)$ be a vector containing the unknown coefficients $a_j(x)$, $j = 1, 2, \ldots, m$, which depend on the position of the point of interest. The $m$ unknown functions $a(x) = (a_1(x), a_2(x), \ldots, a_m(x))$ are chosen such that

$$J(x) = \sum_{j=1}^{N} \left[P^T(x_j)\, a(x) - u_i(x_j)\right]^2 w_j(x) = (P a - u_i)^T\, W\, (P a - u_i), \qquad (2)$$

is minimized. Note that the weight function $w_j(x)$ is associated with node $j$. Any radial basis function of the type defined in [31] can be used as a weight function: we can define $w_j(r) = \phi(r/\delta)$, where $r = \|x - x_j\|_2$ (the Euclidean distance between $x$ and $x_j$) and $\phi : \mathbb{R}^d \to \mathbb{R}$ is a nonnegative function with compact support. In this chapter, we use the following weight functions, associated with node $j$, and compare them with each other in the numerical examples.

a. Gauss weight function

$$w(r) = \begin{cases} \dfrac{\exp\left(-r^2/c^2\right) - \exp\left(-\delta^2/c^2\right)}{1 - \exp\left(-\delta^2/c^2\right)}, & 0 \le r \le \delta \\ 0, & \text{elsewhere.} \end{cases} \qquad (3)$$

b. RBF weight function

$$w(r) = \begin{cases} (1 - r)^6\left(6 + 36 r + 82 r^2 + 72 r^3 + 30 r^4 + 5 r^5\right), & 0 \le r \le \delta \\ 0, & \text{elsewhere.} \end{cases} \qquad (4)$$

c. Spline weight function

$$w(r) = \begin{cases} 1 - 6\left(\dfrac{r}{\delta}\right)^2 + 8\left(\dfrac{r}{\delta}\right)^3 - 3\left(\dfrac{r}{\delta}\right)^4, & 0 \le r \le \delta \\ 0, & \text{elsewhere.} \end{cases} \qquad (5)$$

where $c$ is a constant called the shape parameter, and $\delta$ is the size of the support domain.
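For readers who wish to experiment, the three weight functions above translate directly into code. The following is a minimal sketch in Python/NumPy (the function names are ours, not from the chapter); for (4) we assume the polynomial is evaluated on the normalized distance r/δ so that the support condition 0 ≤ r ≤ δ is respected.

```python
import numpy as np

def gauss_weight(r, delta, c):
    """Gauss weight function (3); zero outside the support 0 <= r <= delta."""
    w = (np.exp(-(r / c) ** 2) - np.exp(-(delta / c) ** 2)) / (1.0 - np.exp(-(delta / c) ** 2))
    return np.where(r <= delta, w, 0.0)

def rbf_weight(r, delta):
    """Compactly supported RBF weight (4), evaluated on s = r/delta (our assumption)."""
    s = r / delta
    w = (1 - s) ** 6 * (6 + 36 * s + 82 * s ** 2 + 72 * s ** 3 + 30 * s ** 4 + 5 * s ** 5)
    return np.where(s <= 1.0, w, 0.0)

def spline_weight(r, delta):
    """Quartic spline weight (5)."""
    s = r / delta
    w = 1 - 6 * s ** 2 + 8 * s ** 3 - 3 * s ** 4
    return np.where(s <= 1.0, w, 0.0)
```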

$N$ is the number of nodes in $\Omega_x$ with $w_i(x) > 0$; the matrices $P$ and $W$ are defined as

$$P = \left[p^T(x_1),\ p^T(x_2),\ \ldots,\ p^T(x_N)\right]^T_{N \times (m+1)} \qquad (6)$$
$$W = \mathrm{diag}\left(w_i(x)\right), \quad i = 1, 2, \ldots, N \qquad (7)$$

and

$$u^h = \left(u_1^h, u_2^h, \ldots, u_n^h\right). \qquad (8)$$

It is important to note that the $u_i$, $i = 1, 2, \ldots, n$, in (2) and (8) are artificial nodal values and not, in general, the nodal values of the unknown trial function $u^h(x)$. Minimizing (2) with respect to $a(x)$ yields the relation

$$A(x)\, a(x) = B(x)\, u_i, \qquad (9)$$

where the matrices $A(x)$ and $B(x)$ are defined by:

$$B(x) = \left[w_1\, p(x_1),\ w_2\, p(x_2),\ \ldots,\ w_N\, p(x_N)\right] \qquad (10)$$
$$A(x) = \sum_{i=1}^{N} w_i(x)\, p^T(x_i)\, p(x_i) = P^T W(x) P. \qquad (11)$$

The matrix $A(x)$ in (11) is nonsingular if and only if the rank of the matrix $P$ equals $m$; in that case, the MLS approximation is well defined. Once $a(x)$ has been computed, $u_i^h$ can be obtained as follows:

$$u_i^h(x) = \sum_{j=1}^{N} \phi_j(x)\, u_i(x_j) = \varphi^T(x)\, u_i, \qquad (12)$$

where $\phi_j(x)$ is called the shape function of the MLS approximation corresponding to the nodal point $x_j$, and

$$\varphi(x) = p^T(x)\, A^{-1}(x)\, B(x). \qquad (13)$$

Using the weight function, the matrices $A$ and $B$ are computed, and then $\phi_i(x)$ is determined from (13). If, furthermore, $\phi$ is sufficiently smooth, derivatives of $U$ are usually approximated by derivatives of $U^h$,

$$D^\alpha u_i(x) \approx D^\alpha u_i^h(x) = \sum_{j=1}^{N} D^\alpha \phi_j(x)\, u_i(x_j), \quad x \in \Omega. \qquad (14)$$
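To make the construction above concrete, the sketch below assembles the MLS shape functions (13) at a single evaluation point for one-dimensional data, using the quadratic basis (m = 2) and the quartic spline weight (5). It is only an illustration under these assumptions; the helper names are ours.

```python
import numpy as np

def basis(x, m=2):
    """Polynomial basis p(x) = [1, x, ..., x^m]."""
    return np.array([x ** k for k in range(m + 1)])

def spline_weight(r, delta):
    s = r / delta
    return np.where(s <= 1.0, 1 - 6 * s ** 2 + 8 * s ** 3 - 3 * s ** 4, 0.0)

def mls_shape_functions(x, nodes, delta, m=2):
    """phi_j(x), j = 1..N, from (13): phi(x) = p(x)^T A(x)^{-1} B(x)."""
    w = spline_weight(np.abs(x - nodes), delta)       # w_j(x), zero outside the support
    P = np.vstack([basis(xj, m) for xj in nodes])     # N x (m+1) matrix of basis rows, cf. (6)
    A = P.T @ (w[:, None] * P)                        # moment matrix A(x) = P^T W P, cf. (11)
    B = P.T * w                                       # B(x) = [w_1 p(x_1), ..., w_N p(x_N)], cf. (10)
    return basis(x, m) @ np.linalg.solve(A, B)        # shape functions, cf. (13)

# usage: approximate u(x) = sum_j phi_j(x) u(x_j), as in (12)
nodes = np.linspace(0.0, 1.0, 11)
u_nodal = np.sin(np.pi * nodes)
phi = mls_shape_functions(0.37, nodes, delta=0.3)
print(phi @ u_nodal, np.sin(np.pi * 0.37))            # MLS value vs. exact value
```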

2.2 Modify algorithm of MLS shape function

In the MLS approximation method, a local evaluation of the approximating unknown function is desired; therefore, for each nodal point the compact support domain is chosen as a sphere or a parallelogram box centered on the point [23, 29, 32]. This determines which points each support domain contains. Each data point has an associated dilatation parameter $\lambda$, which gives $\delta_i = \lambda h_i$, where $\delta_i$ is the size of the compact support domain of the nodal point $x_i$.

A necessary condition for the moment matrix $A$ to be nonsingular is that, for any point $x_i \in \Omega$, $i = 1, 2, \ldots, N$ [31],

$$\#\left\{j : x_j \in \Omega_{\delta_i}\right\} \ge m, \quad j = 1, 2, \ldots, N.$$

So the dilatation parameter $\lambda$ determines the number of points in the support domain, and these points participate in the construction of the shape function. Therefore, $\lambda$ is very important and should be chosen correctly so that the moment matrix $A$ is nonsingular.

In the remainder of this section, we introduce a new algorithm whose aim is to avoid the singularity of the matrix $A$ by choosing the parameter $\lambda$ correctly.

Algorithm 1

Require: $X = \{x_i : i = 1, 2, \ldots, N\}$ — coordinates of the points at which the MLS shape function is to be evaluated.

1: procedure Matrix A
2: $\lambda_{new} \leftarrow \lambda$
3: $\alpha \leftarrow 0.01$ (this value is selected experimentally)
4: $\delta = \lambda_{new} \times h$ ($h$: the fill distance, defined as $h = \sup_{x \in \Omega} \min_{1 \le j \le N} \|x - x_j\|_2$)
5: Loop:
6: set $I(x) = \{j \in \{1, 2, \ldots, N\} : \|x - x_j\|_2 \le \delta\}$ (localization at a fixed point $x$ using the index set $I$)
7: for $j \in I(x)$ do
8: for $i = 1 : N$ do
9: compute $w_i$ for any $x_j \in \Omega_i$
10: $A = A + w_i\, p^T(x_i)\, p(x_i)$
11: end
12: end
13: if $\mathrm{cond}(A)^{-1} \le \varepsilon$ then
14: {
15: $\lambda_{new} = \lambda_{new} + \alpha\, \lambda$
16: $\delta = \lambda_{new} \times h$
17: else
18: go to end
19: }
20: if $\delta_i \le \|X_\Omega\|/2$ then
21: go to Loop
22: end

In Algorithm 1.

X: a set containing the N scattered points, called centers or data sites, and I(x) is the index set of the points that enter the evaluation of the MLS shape function.

α: is a small positive number that is selected experimentally.

Then, at every nodal point, the matrix A is computed.

By running the algorithm, a new value is assigned to $\lambda$; this value is related to the condition number of the matrix $A$, and it increases during the iterations. Therefore, the size of the support domain increases, and the matrix $A$ is then rebuilt with the new nodal points in the support domain. This loop is repeated until $\mathrm{cond}(A)^{-1} \ge \varepsilon$.

The main idea of the moving least squares approximation is that for every point $x$ a locally weighted least squares problem is solved [30]; it is a local approximation. Thus, the additional condition for stopping the loop concerns the size of the local support domain: the value of $\lambda$ should remain small enough to preserve the local character of the approximation, and line 20 enforces this condition.
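The sketch below mirrors the logic of Algorithm 1 for a one-dimensional node set: it grows the support radius δ = λh until the moment matrix is acceptably conditioned. The tolerance ε, the increment α, and the way the fill distance is estimated are illustrative assumptions on our part.

```python
import numpy as np

def basis(x, m=2):
    return np.array([x ** k for k in range(m + 1)])

def spline_weight(r, delta):
    s = r / delta
    return np.where(s <= 1.0, 1 - 6 * s ** 2 + 8 * s ** 3 - 3 * s ** 4, 0.0)

def adaptive_radius(x, nodes, lam=1.0, alpha=0.01, eps=1e-12, m=2, max_iter=100):
    """Grow delta = lam * h until cond(A)^(-1) > eps, as in lines 13-18 of Algorithm 1."""
    # fill distance h, estimated here from the node set itself
    h = max(np.min(np.abs(np.delete(nodes, j) - nodes[j])) for j in range(len(nodes)))
    delta = lam * h
    idx = np.array([], dtype=int)
    for _ in range(max_iter):
        idx = np.where(np.abs(x - nodes) <= delta)[0]      # I(x): nodes inside the support (line 6)
        if idx.size >= m + 1:
            w = spline_weight(np.abs(x - nodes[idx]), delta)
            P = np.vstack([basis(xj, m) for xj in nodes[idx]])
            A = P.T @ (w[:, None] * P)                     # A = sum_i w_i p(x_i)^T p(x_i), line 10
            if 1.0 / np.linalg.cond(A) > eps:
                return delta, idx                          # acceptable support found
        lam += alpha * lam                                 # line 15: enlarge the dilatation parameter
        delta = lam * h                                    # line 16: new support radius
    return delta, idx

# usage on an irregular node set
rng = np.random.default_rng(0)
nodes = np.sort(rng.random(30))
print(adaptive_radius(0.5, nodes))
```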

2.3 Modified MLS approximation method

One of the common problems of the classical MLS method is the singularity of the moment matrix $A$ for irregularly chosen nodal points. To avoid nodal configurations that lead to a singular moment matrix, the usual remedy is to enlarge the support domain of each nodal point, but this is not always an appropriate solution. To tackle such problems, a modified moving least squares (MMLS) approximation method was proposed in [31]. This modification allows base functions with $m \ge 2$ to be used with the same support-domain size as linear base functions ($m = 1$). We should note that imposing additional terms based on the coefficients of the polynomial base functions is the main idea of the modified technique. As we know, the basis function $p(x)$ is

$$p(x) = \left[1, x, x^2, \ldots, x^m\right]^T, \qquad (15)$$

where $x \in \mathbb{R}$, and the corresponding coefficients $a_j$ that should be determined are [24]:

$$a(x) = \left[a_1(x),\ a_x(x),\ a_{x^2}(x),\ \ldots,\ a_{x^m}(x)\right]^T. \qquad (16)$$

To obtain these coefficients, the functional (2) is rewritten as follows:

$$\bar{J}(x) = \sum_{j=1}^{N} \left[P^T(x_j)\, a(x) - u_i(x_j)\right]^2 w_j(x) + \sum_{\nu=1}^{m-1} \bar{w}_\nu(x)\, \bar{a}_\nu^2(x), \quad i = 1, 2, \ldots, n, \qquad (17)$$

where $\bar{w}$ is a vector of positive weights for the additional constraints, and $\bar{a} = \left[a_{x^2}(x), a_{x^3}(x), \ldots, a_{x^m}(x)\right]^T$ is the vector of the additional (higher-order) coefficients.

The matrix form of (17) is as follows:

$$\bar{J}(x) = (P a - u_i)^T\, W\, (P a - u_i) + a^T H\, a, \quad i = 1, 2, \ldots, n, \qquad (18)$$

where $H$ is given by

$$H = \begin{bmatrix} O_{2,2} & O_{2,m-1} \\ O_{m-1,2} & \mathrm{diag}(\bar{w}) \end{bmatrix}, \qquad (19)$$

where $O_{i,j}$ is the $i \times j$ null matrix. By minimizing the functional (18), the coefficients $a(x)$ are obtained, so we have

$$\bar{A}(x)\, a(x) = B(x)\, u_i, \qquad (20)$$

where

$$\bar{A} = P^T W P + H, \qquad (21)$$

and the matrix $B(x)$ is determined in the same way as before. So we have

$$\varphi^m(x) = p^T(x)\, \bar{A}^{-1}(x)\, B(x), \qquad (22)$$

where $\varphi^m(x)$ is the shape function of the MMLS approximation method.
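A minimal sketch of the MMLS shape functions (22) for one-dimensional data follows; the only change with respect to the classical MLS sketch is the regularization matrix H of (19), which penalizes the coefficients of the terms of degree two and higher. The value w̄ = 0.1 is the one used later in the examples; everything else (names, weight choice, test nodes) is our assumption.

```python
import numpy as np

def mmls_shape_functions(x, nodes, delta, m=2, w_bar=0.1):
    """phi^m(x) = p(x)^T (P^T W P + H)^{-1} B(x), cf. (21)-(22)."""
    p = lambda t: np.array([t ** k for k in range(m + 1)])
    s = np.abs(x - nodes) / delta
    w = np.where(s <= 1.0, 1 - 6 * s ** 2 + 8 * s ** 3 - 3 * s ** 4, 0.0)   # spline weight (5)
    P = np.vstack([p(xj) for xj in nodes])
    B = P.T * w
    H = np.zeros((m + 1, m + 1))
    H[2:, 2:] = np.diag(np.full(m - 1, w_bar))   # extra weights on a_{x^2}, ..., a_{x^m}, cf. (19)
    A_bar = P.T @ (w[:, None] * P) + H           # modified moment matrix (21)
    return p(x) @ np.linalg.solve(A_bar, B)

# only two nodes fall inside the support here, so the classical A = P^T W P would be
# singular for m = 2; A_bar stays invertible thanks to H
nodes = np.array([0.0, 0.31, 0.62, 0.93])
print(mmls_shape_functions(0.5, nodes, delta=0.35))
```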

3. Stiff systems of ordinary differential equations

In this section, we use the MLS approximation method for the numerical solution of stiff systems of ordinary differential equations. Consider the following differential equation

$$A(U) - F(x) = 0, \quad U(0) = U_0, \quad x \in \Omega, \qquad (23)$$

with boundary conditions,

$$B\!\left(U, \frac{\partial U}{\partial x}\right) = 0, \quad x \in \partial\Omega,$$

where $A$ is a general differential operator, $U_0$ is the initial value of (23), $F(x)$ is a vector of known analytic functions on the domain $\Omega$, and $\partial\Omega$ is the boundary of $\Omega$. The operator can be split as $A = L + N$, where $L$ is its linear part and $N$ its nonlinear part, so (23) can be rewritten as follows:

$$L(U) + N(U) - F(x) = 0. \qquad (24)$$

We assume that $a_1, a_2, \ldots, a_N$ are the MLS shape functions. In order to solve system (24), $N$ nodal points $x_i$ are selected in $\Omega$ such that $\{x_i\}_{i=1}^{N}$ is $q$-unisolvent; the distribution of the nodes may be regular or random. Then, instead of $u_j$ from $U$, we substitute $u_j^h$ from (13). So we have

$$u_j^h(x) = \sum_{i=1}^{N} a_i(x)\, u_j(x_i), \qquad (25)$$

where $j = 1, 2, \ldots, n$ indexes the unknown functions. We estimate the unknown functions $u_j$ by (25), so system (24) takes the following form:

$$L\!\left(\sum_{i=1}^{N} a_i(x) u_1(x_i),\ \sum_{i=1}^{N} a_i(x) u_2(x_i),\ \ldots,\ \sum_{i=1}^{N} a_i(x) u_n(x_i)\right) + N\!\left(\sum_{i=1}^{N} a_i(x) u_1(x_i),\ \ldots,\ \sum_{i=1}^{N} a_i(x) u_n(x_i)\right) = \left(f_1(x), f_2(x), \ldots, f_n(x)\right) + r(x), \qquad (26)$$

where $r(x)$ is the residual error function, which vanishes at the collocation points. Thus, imposing the collocation points $y_r$, $r = 1, 2, \ldots, N$, gives

$$L\!\left(\sum_{i=1}^{N} a_i(y_r) u_1(x_i),\ \ldots,\ \sum_{i=1}^{N} a_i(y_r) u_n(x_i)\right) + N\!\left(\sum_{i=1}^{N} a_i(y_r) u_1(x_i),\ \ldots,\ \sum_{i=1}^{N} a_i(y_r) u_n(x_i)\right) = \left(\sum_{i=1}^{N} L a_i(y_r)\, u_1(x_i),\ \sum_{i=1}^{N} L a_i(y_r)\, u_2(x_i),\ \ldots,\ \sum_{i=1}^{N} L a_i(y_r)\, u_n(x_i)\right) + N\!\left(\sum_{i=1}^{N} a_i(y_r) u_1(x_i),\ \ldots,\ \sum_{i=1}^{N} a_i(y_r) u_n(x_i)\right) = \left(f_1(y_r), f_2(y_r), \ldots, f_n(y_r)\right), \qquad (27)$$

therefore

$$C U = \begin{bmatrix} L a_1(y_1) & L a_2(y_1) & \cdots & L a_N(y_1) \\ L a_1(y_2) & L a_2(y_2) & \cdots & L a_N(y_2) \\ \vdots & \vdots & & \vdots \\ L a_1(y_N) & L a_2(y_N) & \cdots & L a_N(y_N) \end{bmatrix} \begin{bmatrix} u_1(x_1) & u_2(x_1) & \cdots & u_n(x_1) \\ u_1(x_2) & u_2(x_2) & \cdots & u_n(x_2) \\ \vdots & \vdots & & \vdots \\ u_1(x_N) & u_2(x_N) & \cdots & u_n(x_N) \end{bmatrix}, \qquad (28)$$

and the matrix form of (27) is as follows:

$$C_{N \times N}\, U_{N \times n} + \mathcal{N}_{N \times n}(a, U) = F_{N \times n}(y_r), \qquad (29)$$

where

$$C_i = \left[L a_1(y_r),\ \ldots,\ L a_N(y_r)\right], \quad i = 1, \ldots, n; \qquad U_i = \left[u_i(x_1), u_i(x_2), \ldots, u_i(x_N)\right]^T, \quad i = 1, \ldots, n; \qquad F(y_r) = \left[\left(f_1(y_r)\right)_{r=1}^{N}{}^{T},\ \left(f_2(y_r)\right)_{r=1}^{N}{}^{T},\ \ldots,\ \left(f_n(y_r)\right)_{r=1}^{N}{}^{T}\right]. \qquad (30)$$

By imposing the initial conditions at $t = 0$, we have

$$\left(\sum_{i=1}^{N} a_i(0)\, u_1(t_i),\ \sum_{i=1}^{N} a_i(0)\, u_2(t_i),\ \ldots,\ \sum_{i=1}^{N} a_i(0)\, u_n(t_i)\right) = U_0, \qquad (31)$$

and solving the nonlinear system formed by (29) and (31) yields the quantities $u_j(x_i)$. Then the value of $u_j(x)$ at any point $x \in \Omega$ can be approximated by Eq. (25) as:

$$u_j(x) \approx \sum_{i=1}^{N} a_i(x)\, u_j(x_i).$$
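The following sketch illustrates the collocation construction (25)–(31) for the simplest possible case, a single linear equation u′(t) = cos t with u(0) = 1 (exact solution 1 + sin t); systems are handled column-wise exactly as in (28)–(29). For brevity, the entries L a_i(y_r) are obtained by central finite differences of the shape functions rather than by the analytic derivatives of (14); the test problem and all names are our own choices.

```python
import numpy as np

def mls_phi(x, nodes, delta=0.3, m=2):
    p = lambda t: np.array([t ** k for k in range(m + 1)])
    s = np.abs(x - nodes) / delta
    w = np.where(s <= 1.0, 1 - 6 * s ** 2 + 8 * s ** 3 - 3 * s ** 4, 0.0)
    P = np.vstack([p(t) for t in nodes])
    return p(x) @ np.linalg.solve(P.T @ (w[:, None] * P), P.T * w)

nodes = np.linspace(0.0, 1.0, 21)           # nodal points x_i, also used as collocation points y_r
rhs = np.cos(nodes)                          # f(y_r) for u'(t) = cos(t)
dt = 1e-6

# C[r, i] ~ L a_i(y_r) with L = d/dt, cf. (28), here by central differences
C = np.array([(mls_phi(y + dt, nodes) - mls_phi(y - dt, nodes)) / (2 * dt) for y in nodes])
C[0, :] = mls_phi(0.0, nodes)               # replace one row by the initial condition (31)
rhs[0] = 1.0                                # u(0) = 1

u_nodal = np.linalg.solve(C, rhs)           # nodal quantities u(x_i)
u = lambda t: mls_phi(t, nodes) @ u_nodal   # reconstruction through (25)
print(u(0.5), 1.0 + np.sin(0.5))            # collocation value vs. exact value
```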

Remark

Note that, for simplicity, the derivation of the modification is made for bivariate data, but it can easily be extended to higher dimensions. Also, to implement the modified moving least squares approximation method, it is sufficient to use $\varphi^m$ instead of $\varphi$.

3.1 Error analysis

The convergence analysis of the method, in matrix norm, has been investigated for systems of one- and two-dimensional Fredholm integral equations by the authors of [22]. It should be noted that this analysis concerns the implementation of the classical moving least squares approximation on systems of integral equations; for the modified MLS method it still has to be investigated, but we can use those results for this type of equation.

In the remainder of this section, error estimates for systems of differential equations are developed. In [26], error estimates were obtained for moving least squares approximations in the one-dimensional case. In [33], they were extended to functionals in $n$ dimensions and used to prove an error estimate for coercive Galerkin problems. In this work, the error estimate is adapted to systems of stiff ordinary differential equations.

Given $\delta > 0$, let $w_\delta$ be a function such that $\mathrm{supp}\, w_\delta \subset \overline{B_\delta(0)} = \{z : \|z\| \le \delta\}$, and let $X_\delta = \{x_1, x_2, \ldots, x_n\}$, $n = n(\delta)$, be a set of points in $\Omega \subset \mathbb{R}$, an open interval. Let $U = (u_1, u_2, \ldots, u_N)$ be the unknown vector of functions, and let $u_{i,1}, u_{i,2}, \ldots, u_{i,n}$ be the values of the function $u_i$ at those points, i.e., $u_{i,j} = u_i(x_j)$, $i = 1, \ldots, N$, $j = 1, \ldots, n$. A class of functions $W = \{\omega_j\}_{j=1}^{N}$ is called a partition of unity subordinated to the open covering $\{\Lambda_j\}_{j=1}^{N}$ if it possesses the following properties:

  • $\omega_j \in C_0^s$, $s > 0$ or $s = \infty$,

  • $\mathrm{supp}\, \omega_j \subset \bar{\Lambda}_j$,

  • $\omega_j(x) > 0$, $x \in \Lambda_j$,

  • $\sum_{j=1}^{N} \omega_j(x) = 1$ for every $x \in \bar{\Omega}$.

There is no unique way to build a partition of unity as defined above [34].

As we know, the MLS approximation is well defined if the minimization problem has a unique solution at every point $x \in \bar{\Omega}$, and the nonsingularity of the matrix $A(x)$ ensures this. In [33], the error estimate was obtained under the following assumptions on the system of nodes and weight functions $(\Theta_N, W_N)$:

We define

$$\langle u, v \rangle_x = \sum_{j=1}^{n} w(x - x_j)\, u(x_j)\, v(x_j),$$

and hence

$$\|u\|_x^2 = \sum_{j=1}^{n} w(x - x_j)\, u(x_j)^2.$$

Also, for the vector of unknown functions, we define

$$\|U\| = \max_i \|u_i\|_x, \quad i = 1, 2, \ldots, N.$$

These are discrete norms on the polynomial space $P_{m-1}$ if the weight function $w$ satisfies the following properties:

a. For each $x \in \Omega$, $w(x - x_j) > 0$ for at least $m + 1$ indices $j$.

b. For any $x \in \Omega$, the moment matrix $A(x) = P^T W(x) P$ is nonsingular.

Definition 3.1. Given $x \in \bar{\Omega}$, the set $ST(x) = \{j : \omega_j(x) \neq 0\}$ will be called the star of $x$.

Theorem 3.1. [34, 35] A necessary condition for the satisfaction of Property (b) is that, for any $x \in \bar{\Omega}$,

$$n = \mathrm{card}(ST(x)) \ge \mathrm{card}(P_m) = m + 1.$$

For a sample point $c \in \bar{\Omega}$, if $ST(c) = \{j_1, \ldots, j_k\}$, the mesh-size of the star $ST(c)$ is defined by the number $h_{ST(c)} = \max\{h_{j_1}, \ldots, h_{j_k}\}$.

Assumptions. Consider the following global assumptions about the parameters. There exist:

(a1) An upper bound on the overlap of the clouds:

$$E = \sup_{c \in \bar{\Omega}} \mathrm{card}(ST(c)).$$

(a2) Upper bounds on the condition numbers:

$$CB_q = \sup_{c \in \bar{\Omega}} CN_q(ST(c)), \quad q = 1, 2,$$

where the numbers $CN_q(ST(c))$ are computable measures of the quality of the star $ST(c)$, defined in Theorem 7 of [19].

(a3) An upper bound on the mesh-size of the stars:

$$R = \sup_{c \in \bar{\Omega}} h_{ST(c)}.$$

(a4) A uniform bound on the derivatives of the $\omega_j$, that is, constants $G_q > 0$, $q = 1, 2$, such that

$$\|D^\mu \omega_j\|_{L^\infty} \le \frac{G_q}{R^{|\mu|}}, \quad 1 \le |\mu| \le q.$$

(a5) A number $\gamma > 0$ such that any two points $x, y \in \Omega$ can be joined by a rectifiable curve $\Gamma$ in $\Omega$ with length$(\Gamma) \le \gamma \|x - y\|$.

Assuming all these conditions, Zuppa [34] proved.

Lemma 3.1. Let $U = (u_1, u_2, \ldots, u_n)$ be such that $u_i \in C^{m+1}(\bar{\Omega})$ and $\|U\| = \|u_k\|$ for some $1 \le k \le n$. There exist constants $C_q$, $q = 1$ or $2$,

$$C_1 = C_1(\gamma, d, E, G_1, CB_1), \qquad C_2 = C_2(\gamma, d, E, G_2, CB_1, CB_2),$$

such that

$$\|D^\mu U - D^\mu U^h\| \le C_q\, R^{\,q+1-|\mu|}\, \|u_k\|_{m+1, L^\infty(\Omega)}, \quad 0 \le |\mu| \le q. \qquad (32)$$

Proof: see [36].

3.2 System of ODE

If in (24) the nonlinear operator $N$ is zero, we have

$$L(U) = (f_1, f_2, \ldots, f_n), \qquad (33)$$

where $U$ is the vector of unknown functions and $L$ is a matrix of derivative operators,

$$L(U)(\cdot) = \sum_{\varsigma=1}^{n} \lambda_\varsigma\, \frac{\partial^\varsigma}{\partial(\cdot)^\varsigma}\, U(\cdot). \qquad (34)$$

From (25), we define

$$U^h(t) = \sum_{i=1}^{N} a_i(t)\, U(t_i),$$

where $\{a_i\}_{i=1}^{N}$ are the MLS shape functions defined on the interval $[0, 1]$ and satisfying the homogeneous counterparts of the boundary conditions in (23). If the weight function $w$ possesses $k$ continuous derivatives, then the shape functions $a_j$ are also in $C^k$ [33]. An approximate solution $U^h(t)$ is obtained by the collocation method, and we demand that

$$L^h(U)(\cdot) = \sum_{\varsigma=0}^{n} \lambda_\varsigma\, \frac{\partial^\varsigma}{\partial(\cdot)^\varsigma}\, U^h(\cdot), \qquad (35)$$

where $\lambda_\varsigma = 0$ or $1$. It is assumed that the system of ODEs involves derivatives of order at most $n = 2$. Each of the basis functions $a_i$ has compact support contained in $[0, 1]$, so the matrix $C$ in (30) is a banded matrix. If $\delta$ is fixed, independent of $N$, then the resulting system of linear equations can be solved in $O(N)$ arithmetic operations.

Lemma 3.2. Let $U = (u_1, u_2, \ldots, u_n)$ and $F = (f_1, f_2, \ldots, f_n)$ be such that $u_i \in C^{m+1}(\bar{\Omega})$, $m \ge 1$, and $\|U\| = \|u_k\|$ for some $k \in \{1, 2, \ldots, n\}$, where $\Omega$ is a closed, bounded set in $\mathbb{R}$. Assume that the quadrature scheme is convergent for all continuous functions on $\Omega$. Further, assume that the stiff system of ODEs (23) with the fixed initial condition is uniquely solvable for given $f_i \in C(\Omega)$. Moreover, take a suitable approximation $U^h$ of $U$. Then, for all sufficiently large $n$, the approximate operators $L^h$ in the linear case exist and are uniformly bounded, $\|L^h\| \le M$, with a suitable constant $M < \infty$. For the equations $L(U) = F$ and $L^h(U) = F$ we have

$$E(t) = \left\|L(U)(t) - L^h(U)(t)\right\|,$$

so that

$$E(t) \le C_q\, K_{\lambda_\varsigma}\, R^{\,m+1-\mu}\, \|u_k\|_{m+1, L^\infty}.$$

Proof. We have

$$L(U)(t) - L^h(U)(t) = \sum_{\varsigma=0}^{n} \lambda_\varsigma\, \frac{\partial^\varsigma}{\partial t^\varsigma} U(t) - \sum_{\varsigma=0}^{n} \lambda_\varsigma\, \frac{\partial^\varsigma}{\partial t^\varsigma} U^h(t),$$

so, due to Lemma 3.1,

$$\left\|L(U)(t) - L^h(U)(t)\right\| \le \sum_{\varsigma=0}^{n} \lambda_\varsigma \left\|\frac{\partial^\varsigma}{\partial t^\varsigma} U(t) - \frac{\partial^\varsigma}{\partial t^\varsigma} U^h(t)\right\| \le \max_i \sum_{\varsigma=0}^{n} \lambda_\varsigma \left\|\frac{\partial^\varsigma}{\partial t^\varsigma} u_i(t) - \frac{\partial^\varsigma}{\partial t^\varsigma} u_i^h(t)\right\| \le \sum_{\varsigma=0}^{n} C_q\, \lambda_\varsigma\, \|u_k\|_{m+1, L^\infty}\, R^{\,m+1-\varsigma},$$

where we require $m \ge \varsigma$, so

$$\sum_{\varsigma=0}^{n} \lambda_\varsigma\, R^{\,m+1-\varsigma} \le K_{\lambda_\varsigma}\, R^{\,m+1-\mu},$$

where $\mu$ is the highest order of derivative and $K_{\lambda_\varsigma} = \sum_{\varsigma=0}^{n} \lambda_\varsigma$. Hence, as claimed,

$$E(t) \le C_q\, K_{\lambda_\varsigma}\, R^{\,m+1-\mu}\, \|u_k\|_{m+1, L^\infty}.$$

It should be noted that in the nonlinear system the upper bound of error depends on the nonlinear operator.

4. Two-dimensional linear systems of integral equations

4.1 Fredholm type

In this section, we provide an approximate solution of two-dimensional linear systems of Fredholm integral equations by the MLS method. The matrix form of such a system can be written as

$$U(x, y) = F(x, y) + \int_{\Omega} K(x, y, \theta, s)\, U(\theta, s)\, d\theta\, ds, \quad (x, y) \in \Omega, \qquad (36)$$

where $\Omega = [a, b] \times [c, d]$, $\Omega \subset \mathbb{R}^2$; $K(x, y, \theta, s) = [\kappa_{ij}(x, y, \theta, s)]$, $i, j = 1, 2, \ldots, n$, is the matrix of kernels; $U(x, y) = (u_1(x, y), u_2(x, y), \ldots, u_n(x, y))^T$ is the vector of unknown functions; and $F(x, y) = (f_1(x, y), f_2(x, y), \ldots, f_n(x, y))^T$ is the vector of known functions.

In addition, two cases are considered for the domain, a rectangular one and a nonrectangular one, and three cases relative to the geometry of a nonrectangular domain are considered that can be transformed into a rectangular shape [35].

The first one is $\Omega = \{(\theta, s) \in \mathbb{R}^2 : a \le s \le b,\ g_1(s) \le \theta \le g_2(s)\}$, where $g_1(s)$ and $g_2(s)$ are continuous functions of $s$. The second one can be considered as $\Omega = \{(\theta, s) \in \mathbb{R}^2 : c \le \theta \le d,\ g_1(\theta) \le s \le g_2(\theta)\}$, where $g_1(\theta)$ and $g_2(\theta)$ are continuous functions of $\theta$. The last one is a domain that is of neither the first nor the second kind but can be separated into a finite number of sub-domains of the first or second kind; it is labeled a domain of the third kind. Without loss of generality, a domain of the first kind can be assumed to be

$$\Omega = \left\{(\theta, s) \in \mathbb{R}^2 : -1 \le s \le 1,\ g_1(s) \le \theta \le g_2(s)\right\}; \qquad (37)$$

by the following linear transformation

$$\theta(t, s) = \frac{g_2(s) - g_1(s)}{2}\, t + \frac{g_2(s) + g_1(s)}{2}, \qquad (38)$$

the interval $[g_1(s), g_2(s)]$ is converted to the fixed interval $[-1, 1]$, so we have

$$U(x, y) = F(x, y) + \int_{-1}^{1}\int_{-1}^{1} \bar{K}(x, y, t, s)\, U(t, s)\, dt\, ds, \quad (x, y) \in [-1, 1] \times [-1, 1], \qquad (39)$$

where

$$\bar{K}(x, y, t, s) = \frac{g_2(s) - g_1(s)}{2}\, K(x, y, \theta(t, s), s). \qquad (40)$$

The second kind is handled in exactly the same way by interchanging the variables, and the third kind can be separated into a finite number of sub-domains of the first or second kind, so the method can be applied on each sub-domain as described earlier.

So, for the numerical integration, $\Omega = \bigcup_{l=1}^{L} \Omega_l$ with $\Omega_l \cap \Omega_k = \emptyset$ for $l \ne k$, $1 \le k, l \le L$, and

$$\int_{\Omega} g(s)\, ds = \sum_{l=1}^{L} \int_{\Omega_l} g(s)\, ds. \qquad (41)$$

Here, the MLS method is applied for the general case where the domain is $[a, b] \times [c, d]$.

To apply the method, as described in Section 2.1, instead of $u_i$ from $U$ we substitute $u_i^h$ from (12). So we have

$$U^h(\mathbf{x}) = \left(u_1^h(\mathbf{x}), u_2^h(\mathbf{x}), \ldots, u_n^h(\mathbf{x})\right)^T. \qquad (42)$$

Also, obviously from (12)

$$u_j^h(x, y) = \sum_{i=1}^{N} \phi_i(x, y)\, u_j(x_i, y_i). \qquad (43)$$

In this section it is assumed that the domain has a rectangular shape, so system (36) becomes

$$\left(u_1^h(x, y), u_2^h(x, y), \ldots, u_n^h(x, y)\right)^T = F(x, y) + \int_{c}^{d}\int_{a}^{b} \left[k_{ij}(x, y, t, s)\right] \left(u_1^h(t, s), u_2^h(t, s), \ldots, u_n^h(t, s)\right)^T dt\, ds. \qquad (44)$$

By substituting (42) into (44) and requiring that it hold at the points $(x_r, y_r)$, $r = 1, 2, \ldots, N$, we obtain

$$F(x_r, y_r) = \left(\sum_{i=1}^{N} \phi_i(x_r, y_r)\, u_1(x_i, y_i),\ \ldots,\ \sum_{i=1}^{N} \phi_i(x_r, y_r)\, u_n(x_i, y_i)\right)^T - \int_{c}^{d}\int_{a}^{b} \left[k_{ij}(x_r, y_r, t, s)\right] \left(\sum_{i=1}^{N} \phi_i(t, s)\, u_1(x_i, y_i),\ \ldots,\ \sum_{i=1}^{N} \phi_i(t, s)\, u_n(x_i, y_i)\right)^T dt\, ds. \qquad (45)$$

We consider an $m_1$-point numerical integration scheme over $\Omega$, with nodes $(\tau_k, \varsigma_p)$ and weights $\omega_k$ and $\omega_p$, for evaluating the integrals in (45), i.e.,

$$\digamma_N^{j}[u_j](x, y) = \sum_{p=1}^{m_1}\sum_{k=1}^{m_1}\sum_{i=1}^{N} k_{ji}(x_r, y_r, \tau_k, \varsigma_p)\, \phi_i(\tau_k, \varsigma_p)\, \omega_k\, \omega_p, \quad (x, y) \in \Omega. \qquad (46)$$

Applying the numerical integration rule (46) yields

$$F(x_r, y_r) = \left(\sum_{i=1}^{N}\left[\phi_i(x_r, y_r) - \sum_{p=1}^{m_1}\sum_{k=1}^{m_1}\sum_{j=1}^{N} k_{j1}(x_r, y_r, \tau_k, \varsigma_p)\, \phi_i(\tau_k, \varsigma_p)\, \omega_k\, \omega_p\right] u_{1i},\ \ldots,\ \sum_{i=1}^{N}\left[\phi_i(x_r, y_r) - \sum_{p=1}^{m_1}\sum_{k=1}^{m_1}\sum_{j=1}^{N} k_{jn}(x_r, y_r, \tau_k, \varsigma_p)\, \phi_i(\tau_k, \varsigma_p)\, \omega_k\, \omega_p\right] u_{ni}\right)^T, \quad r = 1, 2, \ldots, N,$$

where the $u_{ji}$ are the approximate values of the $u_j$ obtained when a quadrature rule replaces the exact integral. Now, if we define $F_l$, $l = 1, 2, \ldots, n$, as $N \times N$ matrices given by

$$\left(F_l\right)_{i,r} = \phi_i(x_r, y_r) - \sum_{p=1}^{m_1}\sum_{k=1}^{m_1}\sum_{j=1}^{N} k_{jl}(x_r, y_r, \tau_k, \varsigma_p)\, \phi_i(\tau_k, \varsigma_p)\, \omega_k\, \omega_p, \qquad (47)$$

then the system matrix $F$ is defined from (47) as follows:

$$F = \left[F_1, F_2, \ldots, F_n\right]_{nN \times nN}, \qquad (48)$$

And

$$U = \left[\left(u_{11}, u_{12}, \ldots, u_{1N}\right)^T, \left(u_{21}, u_{22}, \ldots, u_{2N}\right)^T, \ldots, \left(u_{n1}, u_{n2}, \ldots, u_{nN}\right)^T\right]^T, \qquad f(x_r, y_r) = \left[\left(f_1(x_r, y_r)\right)_{r=1}^{N}{}^{T}, \left(f_2(x_r, y_r)\right)_{r=1}^{N}{}^{T}, \ldots, \left(f_n(x_r, y_r)\right)_{r=1}^{N}{}^{T}\right]. \qquad (49)$$

Solving the following linear system of equations by a suitable numerical method, such as Gaussian elimination, yields the quantities $u_{ji}$:

$$F U = f. \qquad (50)$$

Therefore, $u_j(x, y)$ at any arbitrary point $(x, y)$ of the problem domain can be approximated by Eq. (43) as

$$u_j(x, y) \approx \sum_{i=1}^{N} \phi_i(x, y)\, u_{ji}. \qquad (51)$$
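The sketch below walks through the discretization (44)–(50) for the simplest setting we could ground: a single linear Fredholm equation (n = 1) on [0,1]² with the manufactured kernel k(x,y,t,s) = xts and exact solution u = x + y, so that f = x + y − x/3. The node layout (a regular 6 × 6 grid), the support radius, the 10-point Gauss–Legendre rule, and all names are our own choices; for a system, the blocks F_l of (47)–(48) are assembled in the same way.

```python
import numpy as np

def basis2d(x, y):
    return np.array([1.0, x, y, x * x, x * y, y * y])      # quadratic basis in 2D

def phi2d(x, y, nodes, delta):
    s = np.hypot(x - nodes[:, 0], y - nodes[:, 1]) / delta
    w = np.where(s <= 1.0, 1 - 6 * s ** 2 + 8 * s ** 3 - 3 * s ** 4, 0.0)
    P = np.vstack([basis2d(px, py) for px, py in nodes])
    return basis2d(x, y) @ np.linalg.solve(P.T @ (w[:, None] * P), P.T * w)

grid = np.linspace(0.0, 1.0, 6)
nodes = np.array([(x, y) for x in grid for y in grid]); N = len(nodes); delta = 0.45

g, gw = np.polynomial.legendre.leggauss(10)                  # 10-point Gauss-Legendre rule
tau, wq = 0.5 * (g + 1.0), 0.5 * gw                          # mapped from [-1, 1] to [0, 1]

kernel = lambda x, y, t, s: x * t * s
f = lambda x, y: x + y - x / 3.0                             # manufactured right-hand side

PHI_Q = np.array([[phi2d(t, s, nodes, delta) for s in tau] for t in tau])   # phi_i at quadrature points

D = np.zeros((N, N))                                         # system matrix, cf. (47) with n = 1
for r, (xr, yr) in enumerate(nodes):
    integ = np.zeros(N)
    for k, t in enumerate(tau):
        for p, s in enumerate(tau):
            integ += kernel(xr, yr, t, s) * PHI_Q[k, p] * wq[k] * wq[p]
    D[r] = phi2d(xr, yr, nodes, delta) - integ

u_nodal = np.linalg.solve(D, np.array([f(xr, yr) for xr, yr in nodes]))      # cf. (50)
print(phi2d(0.3, 0.7, nodes, delta) @ u_nodal)               # should be close to 0.3 + 0.7 = 1.0
```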

4.2 Volterra type

Implementation of the proposed method on Volterra integral equations is straightforward and effective. In this case, the domain under study is $\Omega = [a, x] \times [c, y]$ with $0 \le x \le 1$, $0 \le y \le 1$, where $a$ and $c$ are constants, so a system of Volterra-type integral equations can be considered as

$$U(x, y) = F(x, y) + \int_{c}^{y}\int_{a}^{x} K(x, y, t, s)\, U(t, s)\, dt\, ds, \quad (x, y) \in \Omega. \qquad (52)$$

As in the Fredholm case, this is the matrix form of a system, where

$$U(x, y) = \left(u_1(x, y), u_2(x, y), \ldots, u_n(x, y)\right)^T \ \text{(the vector of unknown functions)}, \quad F(x, y) = \left(f_1(x, y), f_2(x, y), \ldots, f_n(x, y)\right)^T \ \text{(the vector of known functions)}, \quad K(x, y, t, s) = \left[\kappa_{ij}(x, y, t, s)\right],\ i, j = 1, 2, \ldots, n \ \text{(the matrix of kernels)}. \qquad (53)$$

By the following transformations, the intervals $[a, x]$ and $[c, y]$ can be transferred to the fixed intervals $[a, b]$ and $[c, d]$:

$$t(x, \theta) = \frac{x - a}{b - a}\, \theta + \frac{b - x}{b - a}\, a, \qquad (54)$$
$$s(y, \xi) = \frac{y - c}{d - c}\, \xi + \frac{d - y}{d - c}\, c. \qquad (55)$$

Then, instead of $u_i$ from $U$, we substitute $u_i^h$ from (12). So we have

$$U^h(\mathbf{x}) = \left(u_1^h(\mathbf{x}), u_2^h(\mathbf{x}), \ldots, u_n^h(\mathbf{x})\right)^T, \qquad (56)$$

where

$$u_i^h(\mathbf{x}) = \sum_{j=1}^{N} \phi_j(\mathbf{x})\, u_i(\mathbf{x}_j), \qquad (57)$$

where $\mathbf{x} = (x, y) \in [a, b] \times [c, d]$; thus, system (52) becomes

$$\left(u_1^h(\mathbf{x}), u_2^h(\mathbf{x}), \ldots, u_n^h(\mathbf{x})\right)^T = F(\mathbf{x}) + \int_{c}^{y}\int_{a}^{x} \left[\kappa_{ij}(x, y, t, s)\right] \left(u_1^h(t, s), u_2^h(t, s), \ldots, u_n^h(t, s)\right)^T dt\, ds. \qquad (58)$$

Therefore from (54) and (55), the system (58) takes the following form

$$\left(u_1^h(\mathbf{x}), u_2^h(\mathbf{x}), \ldots, u_n^h(\mathbf{x})\right)^T = F(\mathbf{x}) + \int_{c}^{d}\int_{a}^{b} \left[\bar{\kappa}_{ij}(x, y, t, s)\right] \left(u_1^h(t, s), u_2^h(t, s), \ldots, u_n^h(t, s)\right)^T d\theta\, d\xi, \qquad (59)$$

Where

$$\bar{K}(\cdot, \cdot, \cdot, \cdot) = \frac{x - a}{b - a}\cdot\frac{y - c}{d - c}\, K(\cdot, \cdot, \cdot, \cdot). \qquad (60)$$

Using techniques employed in the Fredholm case yields the same final linear system where

$$\left(F_l\right)_{i,r} = \phi_i(x_r, y_r) - \sum_{p=1}^{m_1}\sum_{k=1}^{m_1}\sum_{j=1}^{N} k_{jl}(x_r, y_r, \tau_k, \varsigma_p)\, \phi_i(\tau_k, \varsigma_p)\, \omega_k\, \omega_p, \qquad (61)$$

where $l = 1, 2, \ldots, n$.
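To illustrate how the transformations (54)–(55) and the Jacobian factor of (60) enter the quadrature, the small helper below maps a fixed Gauss–Legendre rule on [a,b] × [c,d] onto the moving rectangle [a,x] × [c,y] of the Volterra case. It is a sketch only; the function name and default limits are ours.

```python
import numpy as np

def volterra_quadrature(xr, yr, a=0.0, b=1.0, c=0.0, d=1.0, m1=10):
    """Map an m1-point Gauss-Legendre rule on [a,b]x[c,d] onto [a,xr]x[c,yr]
    via (54)-(55); 'jac' is the factor (x-a)/(b-a) * (y-c)/(d-c) from (60)."""
    g, w = np.polynomial.legendre.leggauss(m1)
    theta = 0.5 * (b - a) * (g + 1.0) + a       # quadrature nodes on [a, b]
    xi    = 0.5 * (d - c) * (g + 1.0) + c       # quadrature nodes on [c, d]
    wt = 0.5 * (b - a) * w
    ws = 0.5 * (d - c) * w
    t = (xr - a) / (b - a) * theta + (b - xr) / (b - a) * a     # (54)
    s = (yr - c) / (d - c) * xi    + (d - yr) / (d - c) * c     # (55)
    jac = (xr - a) / (b - a) * (yr - c) / (d - c)
    return t, s, wt, ws, jac

# usage: the double integral over [a,xr]x[c,yr] is approximated by
#   sum_{k,p} jac * K(xr, yr, t[k], s[p]) * u(t[k], s[p]) * wt[k] * ws[p]
t, s, wt, ws, jac = volterra_quadrature(0.4, 0.7)
```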

5. Nonlinear systems of two-dimensional integral equation

5.1 Fredholm type

In a nonlinear system, the unknown functions cannot be written as a linear combination of the unknown variables or functions that appear in it, so the matrix form of a nonlinear system of Fredholm integral equations is defined as follows [27]:

$$U(x, y) = F(x, y) + \int_{\Omega} K\left(x, y, \theta, s, U(\theta, s)\right) d\theta\, ds, \quad (x, y) \in \Omega, \qquad (62)$$

where $U(x, y)$, $K$, and $F$ are defined as

$$U(x, y) = \left(u_1(x, y), u_2(x, y), \ldots, u_n(x, y)\right)^T, \quad K\left(x, y, \theta, s, U(\theta, s)\right) = \left[k_{ij}\left(x, y, \theta, s, U(\theta, s)\right)\right],\ i, j = 1, 2, \ldots, n, \quad F = \left(f_1, f_2, \ldots, f_n\right)^T.$$

As mentioned above, we assume that $\Omega = [a, b] \times [c, d]$.

To apply the MLS approximation method, we estimate the unknown functions $u_i$ by (12), so system (62) takes the following form:

$$\left(u_1^h(x, y), u_2^h(x, y), \ldots, u_n^h(x, y)\right)^T = F(x, y) + \int_{c}^{d}\int_{a}^{b} \left[k_{ij}\left(x, y, t, s, U^h(t, s)\right)\right] dt\, ds. \qquad (63)$$

We consider an $m_1$-point numerical integration scheme over the domain under study, with nodes $(\tau_k, \varsigma_p)$ and weights $\omega_k$ and $\omega_p$, for evaluating the two-dimensional integrals in (63), i.e.,

$$\digamma_N^{i}[u_i](x, y) = \sum_{p=1}^{m_1}\sum_{k=1}^{m_1} k_{ji}\!\left(x, y, \tau_k, \varsigma_p, \sum_{j=1}^{N}\phi_j(\tau_k, \varsigma_p)\, u_i\right) \omega_k\, \omega_p, \quad (x, y) \in \Omega. \qquad (64)$$

Applying the numerical integration rule (64) in (63) yields

$$\left(u_1^h(x_r, y_r), u_2^h(x_r, y_r), \ldots, u_n^h(x_r, y_r)\right)^T = F(x_r, y_r) + \sum_{p=1}^{m_1}\sum_{k=1}^{m_1} \left[k_{ij}\left(x_r, y_r, \tau_k, \varsigma_p, U^h(\tau_k, \varsigma_p)\right)\right] \omega_k\, \omega_p. \qquad (65)$$

Finding the unknowns $U^h$ by solving the nonlinear system of algebraic equations (65) yields the following approximate solution at any point $(x, y) \in \Omega$:

$$u_j(x, y) \approx \sum_{i=1}^{N} \phi_i(x, y)\, u_{ji}. \qquad (66)$$
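As an illustration of how the discretized nonlinear system (65) can be solved in practice, the sketch below sets up Example 1 of Section 6 (whose data f_1, f_2 and exact solution (x + y, x) are stated there) and hands the residual of the discretized equations to SciPy's fsolve. The regular node layout, support radius, quadrature order, and zero initial guess are our own illustrative choices, not the settings used for the reported tables.

```python
import numpy as np
from scipy.optimize import fsolve

def basis2d(x, y):
    return np.array([1.0, x, y, x * x, x * y, y * y])

def phi2d(x, y, nodes, delta):
    s = np.hypot(x - nodes[:, 0], y - nodes[:, 1]) / delta
    w = np.where(s <= 1.0, 1 - 6 * s ** 2 + 8 * s ** 3 - 3 * s ** 4, 0.0)
    P = np.vstack([basis2d(px, py) for px, py in nodes])
    return basis2d(x, y) @ np.linalg.solve(P.T @ (w[:, None] * P), P.T * w)

# nodes, quadrature and data of Example 1: exact solution u1 = x + y, u2 = x on [0,1]^2
grid = np.linspace(0.0, 1.0, 6)
nodes = np.array([(x, y) for x in grid for y in grid]); N = len(nodes); delta = 0.45
g, gw = np.polynomial.legendre.leggauss(10)
tau, wq = 0.5 * (g + 1.0), 0.5 * gw
f1 = lambda x, y: x + y - 7.0 / 12.0
f2 = lambda x, y: x - 11.0 / 12.0

PHI_C = np.array([phi2d(x, y, nodes, delta) for x, y in nodes])                 # at collocation points
PHI_Q = np.array([[phi2d(t, s, nodes, delta) for s in tau] for t in tau])       # at quadrature points

def residual(U):
    u1n, u2n = U[:N], U[N:]
    u1q, u2q = PHI_Q @ u1n, PHI_Q @ u2n                    # u1^h, u2^h at the quadrature points, cf. (12)
    I1 = np.sum((u1q * u2q) * np.outer(wq, wq))            # integral of u1*u2 over [0,1]^2
    I2 = np.sum((u1q * u2q + u2q ** 2) * np.outer(wq, wq)) # integral of u1*u2 + u2^2
    r1 = PHI_C @ u1n - np.array([f1(x, y) for x, y in nodes]) - I1
    r2 = PHI_C @ u2n - np.array([f2(x, y) for x, y in nodes]) - I2
    return np.concatenate([r1, r2])

U = fsolve(residual, np.zeros(2 * N))                      # solve the nonlinear system (65)
print(phi2d(0.3, 0.7, nodes, delta) @ U[:N])               # should be close to 0.3 + 0.7 = 1.0
```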

5.2 Volterra type

A two-dimensional nonlinear system of Volterra integral equations can be considered in the following form:

$$U(x, y) = F(x, y) + \int_{c}^{y}\int_{a}^{x} K\left(x, y, \theta, s, U(\theta, s)\right) d\theta\, ds, \quad (x, y) \in \Omega, \qquad (67)$$

where $K$ and $F$ are known functions and $U$, the vector of unknown functions, is defined as above [27]. In order to apply the MLS approximation method, exactly as in the linear case, the intervals $[a, x]$ and $[c, y]$ are transferred to the fixed intervals $[a, b]$ and $[c, d]$. Then $u_i^h$, $i = 1, 2, \ldots, n$, is substituted for $u_i$ in $U = (u_1, u_2, \ldots, u_n)$ using (12). So the nonlinear system (67) is converted to

$$\left(u_1^h(x, y), u_2^h(x, y), \ldots, u_n^h(x, y)\right)^T = F(x, y) + \int_{c}^{d}\int_{a}^{b} \bar{K}\left(x, y, t, s, U^h(t, s)\right) dt\, ds, \qquad (68)$$

where

$$\bar{K}\left(x, y, t, s, U^h(t, s)\right) = \frac{x - a}{b - a}\cdot\frac{y - c}{d - c}\, K\left(x, y, t, s, U^h(t, s)\right). \qquad (69)$$

Using the numerical integration technique (64), as applied in the Fredholm case, yields the same type of final nonlinear system as (65), and the approximate solution $U$ is found by solving this system of equations.


6. Examples

In this section, the proposed methods are applied to systems of two-dimensional linear and nonlinear integral equations [37] and to systems of differential equations. The results of the examples illustrate the effectiveness of the proposed method. The relative error at the collocation nodal points is used:

$$e_i = \frac{\left|u_i^{ex}(x, y) - u_i^h(x, y)\right|}{\left|u_i^{ex}(x, y)\right|},$$

where $u_i^h$ is the approximate solution and $u_i^{ex}$ is the exact solution. Linear and quadratic basis functions are utilized in the computations.
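For completeness, the error measure above is straightforward to evaluate in code; the following small helper (ours) returns the maximum relative error over a set of evaluation points, which is the quantity reported in the tables below.

```python
import numpy as np

def max_relative_error(u_exact, u_approx):
    """Maximum of e_i = |u_ex - u_h| / |u_ex| over the evaluation points."""
    u_exact = np.asarray(u_exact, dtype=float)
    u_approx = np.asarray(u_approx, dtype=float)
    return np.max(np.abs(u_exact - u_approx) / np.abs(u_exact))
```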

6.1 Example 1

As the first example, we consider the following system of nonlinear Fredholm integral equations [27].

$$u_1(x, y) = f_1(x, y) + \int_{\Omega} u_1(s, t)\, u_2(s, t)\, ds\, dt,$$
$$u_2(x, y) = f_2(x, y) + \int_{\Omega} \left[u_1(s, t)\, u_2(s, t) + u_2^2(s, t)\right] ds\, dt,$$

where $\Omega = [0, 1] \times [0, 1]$. The exact solutions are $U(x, y) = (x + y,\ x)$ and $F(x, y) = \left(x + y - \frac{7}{12},\ x - \frac{11}{12}\right)$. The distribution of the randomly chosen nodes is shown in Figure 1. Because of the irregular nodal distribution, an unsuitable $\delta$ can lead to a singular matrix $A$; in this example, the adaptive algorithm handles such problems. The MLS and MMLS shape functions are computed using Algorithm 1, so the exact value of the radius of the domain of influence is not important; in fact, it is only chosen as an initial value.

Figure 1.

The scattered data of Example 1.

The condition numbers of the matrix $A$ are shown in Figure 2, and the determinant of $A$ at sample points $p$ is shown in Figure 3, where the radius of the support domain of every nodal point starts from $\delta = 0.05$. Note that each node point may end up with a different support radius, which the algorithm may increase because of an unfavorable distribution of the scattered points. These variations are shown in Table 1 for sample points $(x, y)$, where Cond(A) is the condition number of $A$ in its initial state ($\delta = 0.05$) and in its final state (New $\delta$), and N.O. iteration is the number of iterations of the algorithm needed to determine a suitable support radius.

Figure 2.

The condition numbers of $A$ at sample points $p$ with $\delta = 0.05$ for Example 1, using Algorithm 1.

| n | Sample point x | Sample point y | Cond(A) initial | Cond(A) final | New δ | N.O. iteration |
|---|--------|--------|-----------|-----------|--------|----|
| 1 | 0.2575 | 0.4733 | 1.1005e17 | 1.0117e06 | 0.1297 | 11 |
| 2 | 0.2575 | 0.6160 | 0 | 3.1111e06 | 0.1569 | 13 |
| 3 | 0.2575 | 0.9293 | 5.3204e17 | 5.2445e07 | 0.0974 | 8 |
| 4 | 0.2551 | 0.6160 | 0 | 3.1115e06 | 0.1569 | 13 |
| 5 | 0.6991 | 0.2435 | 2.7166e17 | 1.4652e07 | 0.0550 | 2 |

Table 1.

Change of the radius and of the condition number of A at sample points (x, y), using Algorithm 1, for Example 1.

In the computations, $\delta = 2r$, where $r = 0.05$ and $c = \frac{2}{3}r$. Also, in MMLS, $\bar{w}_\nu = 0.1$, $\nu = 1, 2, 3$; these values were also selected experimentally. Relative errors of the MLS method for different numbers of Gauss–Legendre points at $m = 1, 2$, and $3$ are compared in Table 2. Examination of the proposed methods shows that increasing the number of numerical integration points does not guarantee a decrease of the error.

| N | m=1: e(u1) | m=1: e(u2) | m=2: e(u1) | m=2: e(u2) | m=3: e(u1) | m=3: e(u2) |
|----|-----------|-----------|-----------|-----------|-----------|-----------|
| 5 | 6.28×10⁻⁴ | 2.7×10⁻³ | 3.45×10⁻⁷ | 1.15×10⁻⁶ | 3.1×10⁻⁶ | 2.8×10⁻⁶ |
| 10 | 1.2×10⁻⁴ | 5.9×10⁻⁴ | 2.46×10⁻⁷ | 8.41×10⁻⁷ | 2.1×10⁻⁷ | 5.41×10⁻⁷ |
| 15 | 2.3×10⁻⁴ | 5.9×10⁻⁴ | 4.47×10⁻⁸ | 1.34×10⁻⁷ | 4.47×10⁻⁸ | 2.15×10⁻⁷ |
| 20 | 2.3×10⁻⁴ | 5.14×10⁻⁴ | 6.12×10⁻⁷ | 2.46×10⁻⁶ | 3.2×10⁻⁷ | 1.9×10⁻⁶ |
| 30 | 3.2×10⁻⁴ | 5.9×10⁻⁴ | 1.84×10⁻⁶ | 6.64×10⁻⁶ | 2.34×10⁻⁶ | 7.14×10⁻⁶ |

Table 2.

Maximum relative errors for different numbers of Gauss–Legendre quadrature points, δ = 2r, for Example 1.

In Table 3, we can see that the CPU times for solving the nonlinear system (65) are much larger for the MMLS method, but the errors are much smaller (Figure 3).

| N | MMLS: e(u1) | MMLS: e(u2) | MLS: e(u1) | MLS: e(u2) | CPU time MLS | CPU time MMLS |
|----|------------|------------|-----------|-----------|--------------|---------------|
| 10 | 3.78×10⁻¹⁰ | 8.13×10⁻¹⁰ | 2.46×10⁻⁷ | 8.41×10⁻⁷ | 389.9574 | 2.9776×10³ |
| 15 | 1.35×10⁻¹⁰ | 2.00×10⁻¹¹ | 4.47×10⁻⁸ | 1.34×10⁻⁷ | 410.9083 | 2.1109×10³ |
| 20 | 8.63×10⁻¹¹ | 2.51×10⁻⁹ | 6.12×10⁻⁷ | 2.46×10⁻⁶ | 634.8373 | 3.2115×10³ |
| 30 | 3.99×10⁻¹⁰ | 1.58×10⁻⁹ | 1.84×10⁻⁶ | 6.64×10⁻⁶ | 1.0331×10³ | 2.5844×10³ |

Table 3.

Comparison of relative errors and CPU times of MLS and MMLS for different numbers of Gauss–Legendre quadrature points, m = 2, for Example 1.

Figure 3.

The determinant of $A$ at sample points $p$ with $\delta = 0.05$ for Example 1, using Algorithm 1.

6.2 Example 2

Consider the system of linear Fredholm integral equations with kernel [27]

$$K(x, y, t, s) = \begin{pmatrix} x(t + s) & t \\ t s y + x & t \end{pmatrix}, \qquad (70)$$

such that $U(x, y) = (x + y,\ x)$ is the vector of exact solutions and the vector of known functions is $F(x, y) = \left(\frac{1}{6}(x + y) + \frac{1}{3} - \frac{4}{3}x,\ \frac{1}{3} + \frac{1}{3}y\right)$. The domain of the problem is $\Omega = [0, 1] \times [0, 1]$. In this example, the initial value of $r$, the radius of the support domain, is set to $0.05$, and Algorithm 1 is used for producing the shape functions at $m = 1, 2, 3$. In the computations we put $\bar{w}_\nu = 0.1$, $\nu = 1, 2, 3$.

Table 4 shows the relative errors and CPU times of MLS for different numbers of Gauss–Legendre points at $m = 1, 3$. As shown in Table 5, comparing the errors of the MMLS and MLS methods demonstrates the capability and accuracy of the proposed technique for solving systems of linear Fredholm integral equations, and indicates its advantage for these systems of equations.

| N | m=1: e(u1) | m=1: e(u2) | m=1: CPU time | m=3: e(u1) | m=3: e(u2) | m=3: CPU time |
|----|-----------|------------|---------------|-----------|-----------|---------------|
| 5 | 2.94×10⁻⁴ | 6.02×10⁻⁴ | 108.957 | 2.08×10⁻⁴ | 2.91×10⁻⁴ | 152.1474 |
| 10 | 1.79×10⁻⁴ | 3.89×10⁻⁴ | 159.258 | 1.7×10⁻⁴ | 2.37×10⁻⁴ | 170.7879 |
| 15 | 1.86×10⁻⁴ | 3.989×10⁻⁴ | 198.135 | 2.47×10⁻⁴ | 3.30×10⁻⁴ | 252.75424 |
| 20 | 1.86×10⁻⁴ | 3.986×10⁻⁴ | 221.321 | 2.41×10⁻⁴ | 3.29×10⁻⁴ | 247.0093 |
| 30 | 1.85×10⁻⁴ | 3.987×10⁻⁴ | 308.987 | 2.25×10⁻⁴ | 3.37×10⁻⁴ | 314.8173 |
| 40 | 1.85×10⁻⁴ | 3.980×10⁻⁴ | 395.125 | 1.87×10⁻⁴ | 2.86×10⁻⁴ | 402.9594 |

Table 4.

Relative errors and CPU times of MLS for different numbers of Gauss–Legendre quadrature points at m = 1, 3, for Example 2.

| N (m = 2) | MLS: e(u1) | MLS: e(u2) | MLS: CPU time | MMLS: e(u1) | MMLS: e(u2) | MMLS: CPU time |
|----|-----------|-----------|---------------|------------|------------|----------------|
| 5 | 1.24×10⁻⁷ | 3.89×10⁻⁶ | 289.75 | 4.47×10⁻⁷ | 6.12×10⁻⁶ | 297.7 |
| 10 | 3.16×10⁻⁷ | 5.23×10⁻⁷ | 189.99 | 2.16×10⁻⁸ | 6.26×10⁻⁸ | 210.09 |
| 15 | 1.68×10⁻⁷ | 4.13×10⁻⁷ | 389.95 | 2.02×10⁻⁸ | 8.12×10⁻⁸ | 390.15 |
| 20 | 1.61×10⁻⁷ | 3.88×10⁻⁷ | 410.90 | 2.04×10⁻⁸ | 5.24×10⁻⁸ | 425.87 |
| 30 | 1.45×10⁻⁷ | 3.80×10⁻⁷ | 604.80 | 1.98×10⁻⁸ | 3.25×10⁻⁸ | 618.44 |
| 40 | 1.44×10⁻⁷ | 3.84×10⁻⁷ | 689.9574 | 1.04×10⁻⁸ | 6.87×10⁻⁸ | 697.04 |

Table 5.

Comparison of relative errors and CPU times of MLS and MMLS for different numbers of Gauss–Legendre quadrature points at m = 2, for Example 2.


6.3 Example 3

The third example that we approximate is the system of linear Volterra–Fredholm integral equations with kernel [27]

$$K(x, y, t, s) = \begin{pmatrix} (x + y)\exp(t + s) & (x + y)\exp(t + s) \\ 1 & 1 \end{pmatrix}. \qquad (71)$$

The domain is $\Omega = \{(t, s) \in \mathbb{R}^2 : 0 \le t \le 1,\ 0 \le s \le y\}$ with $y \in [0, 1]$, and the exact solutions are $\left(\exp(x + y),\ \exp(x - y)\right)$. It is important to note that the only linear transformation used in this experiment is (55), and from (40) the kernel becomes

$$\bar{K}(\cdot, \cdot, \cdot, \cdot) = \frac{y - c}{d - c}\, K(\cdot, \cdot, \cdot, \cdot). \qquad (72)$$

In this example, the effect on the error of increasing the radius of the domain of influence $\delta_i$ in the MLS method is investigated. The $\delta_i$ are therefore chosen as

$$\delta_i = \lambda r, \quad i = 1, 2, \ldots, N. \qquad (73)$$

In this way, useful information is obtained about the performance of the proposed method. Investigating the results in Tables 6 and 7, we found that the relative error of MLS is also related to the radius of the domain of influence (i.e., $\delta = \lambda r$ with $\lambda = 3, 5, 7$); however, $\lambda$ cannot be taken larger than about $7$. For example, the relative errors obtained by choosing $\lambda = 10$ (i.e., $\delta = 0.05\lambda$) and a $10$-point Gauss–Legendre quadrature rule are $e(u_1) = 3.3\times 10^{-2}$ and $e(u_2) = 1.8\times 10^{-2}$ at $m = 1$. Also, Table 8 shows that increasing the number of points in the numerical integration rule is not effective in increasing the accuracy of the method.

| r | λ=3: e(u1) | λ=3: e(u2) | λ=5: e(u1) | λ=5: e(u2) | λ=7: e(u1) | λ=7: e(u2) |
|------|-----------|-----------|-----------|-----------|-----------|-----------|
| 0.2 | 9×10⁻³ | 5×10⁻³ | 1.9×10⁻³ | 1×10⁻³ | 6.02×10⁻⁴ | 3.36×10⁻⁴ |
| 0.1 | 1.2×10⁻³ | 6.87×10⁻⁴ | 1.34×10⁻⁴ | 7.46×10⁻⁵ | 4.57×10⁻⁶ | 2.91×10⁻⁶ |
| 0.05 | 1.44×10⁻⁴ | 8.14×10⁻⁵ | 2.23×10⁻⁵ | 1.26×10⁻⁵ | 6.27×10⁻⁶ | 3.6×10⁻⁶ |

Table 6.

Maximum relative errors of MLS for 10 Gauss–Legendre points and different values of δ = λr at m = 2, for Example 3.

| r | λ=3: e(u1) | λ=3: e(u2) | λ=5: e(u1) | λ=5: e(u2) | λ=7: e(u1) | λ=7: e(u2) |
|------|-----------|-----------|-----------|-----------|-----------|-----------|
| 0.2 | 1.5×10⁻³ | 8.77×10⁻³ | 6.56×10⁻⁵ | 3.01×10⁻⁵ | 1.09×10⁻⁴ | 6.19×10⁻⁵ |
| 0.1 | 1.08×10⁻⁴ | 6.51×10⁻⁵ | 3.47×10⁻⁵ | 2.08×10⁻⁵ | 1.24×10⁻⁵ | 7.22×10⁻⁶ |
| 0.05 | 6.63×10⁻⁶ | 3.93×10⁻⁶ | 3.9×10⁻⁶ | 2.31×10⁻⁶ | 2.41×10⁻⁶ | 1.38×10⁻⁶ |

Table 7.

Maximum relative errors of MLS for 10 Gauss–Legendre points and different values of δ = λr at m = 3, for Example 3.

| N | λ=3: e(u1) | λ=3: e(u2) | λ=5: e(u1) | λ=5: e(u2) | λ=7: e(u1) | λ=7: e(u2) |
|----|-----------|-----------|-----------|-----------|-----------|-----------|
| 5 | 2.2×10⁻³ | 1.5×10⁻³ | 6.33×10⁻⁵ | 4.52×10⁻⁵ | 2.21×10⁻⁵ | 1.59×10⁻⁵ |
| 10 | 1.1×10⁻³ | 7.4×10⁻⁴ | 4.17×10⁻⁵ | 2.89×10⁻⁵ | 2.96×10⁻⁶ | 2.72×10⁻⁶ |
| 15 | 2×10⁻³ | 1.3×10⁻³ | 7.9×10⁻⁵ | 5.07×10⁻⁵ | 6.31×10⁻⁶ | 5.11×10⁻⁶ |
| 20 | 2.1×10⁻³ | 1.3×10⁻³ | 8.06×10⁻⁵ | 5.17×10⁻⁵ | 3.96×10⁻⁶ | 3.69×10⁻⁶ |
| 30 | 2.1×10⁻³ | 1.3×10⁻³ | 8.16×10⁻⁵ | 5.17×10⁻⁵ | 4.11×10⁻⁶ | 3.74×10⁻⁶ |

Table 8.

Maximum relative errors of MLS for different values of δ = λr, r = 0.05, and different numbers of Gauss–Legendre points at m = 1, using Algorithm 1, for Example 3.

Then the relative errors for the MMLS shape functions described in Section 2.3 were obtained, using $\delta = 7r$ and $\bar{w}_\nu = 10$, $\nu = 1, 2, 3$, as weights of the additional coefficients for MMLS. We can see in Table 9 that, for the system of linear Volterra–Fredholm integral equations, the errors of the two methods (MLS and MMLS) are similar.

| r | MLS: e(u1) | MLS: e(u2) | MMLS: e(u1) | MMLS: e(u2) | CPU time MLS | CPU time MMLS |
|------|-----------|-----------|------------|------------|--------------|---------------|
| 0.2 | 1.09×10⁻⁴ | 6.19×10⁻⁵ | 5.52×10⁻⁴ | 3.08×10⁻⁴ | 4.0556 | 3.3893 |
| 0.1 | 1.24×10⁻⁵ | 1.08×10⁻⁶ | 1.25×10⁻⁵ | 7.36×10⁻⁶ | 38.0616 | 31.7945 |
| 0.05 | 1.91×10⁻⁶ | 1.085×10⁻⁶ | 1.54×10⁻⁵ | 8.77×10⁻⁶ | 496.1746 | 409.0342 |

Table 9.

Comparison of relative errors and CPU times of MLS and MMLS for w̄ν = 10, 10 Gauss–Legendre points, and different values of δ = 7r at m = 2, for Example 3.

6.4 Example 4

Consider the following nonlinear stiff systems of ODEs [38].

$$u_1'(t) = -1002\, u_1(t) + 1000\, u_2^2(t),$$
$$u_2'(t) = u_1(t) - u_2(t) - u_2^2(t),$$

with the initial conditions $u_1(0) = 1$ and $u_2(0) = 1$. The exact solution is

$$u_1(t) = \exp(-2t), \qquad u_2(t) = \exp(-t).$$

In this numerical example, the two schemes are compared; as explained earlier, the main task of the modified method is to tackle the singularity of the moment matrix. Table 10 presents the maximum relative errors obtained by MLS on a set of evaluation points (with $h = 0.1$ and $0.02$) and $\delta = 4h$ and $3h$. In Table 11, MLS and MMLS are compared for different numbers of nodes with $h = 0.004$ and $\delta = 5h$ and $8h$ (Tables 10–12).

| r | m=2, δ=4r: e(u1) | m=2, δ=4r: e(u2) | m=2, δ=3r: e(u1) | m=2, δ=3r: e(u2) |
|------|-----------------|-----------------|-----------------|-----------------|
| 0.1 | 5×10⁻³ | 4.1×10⁻⁴ | 8.85×10⁻⁴ | 2.2×10⁻³ |
| 0.02 | 5.8×10⁻² | 6.5×10⁻⁵ | 5.42×10⁻⁴ | 6.52×10⁻⁵ |

Table 10.

Maximum relative errors by MLS, example 4.

| Type | m=3, δ=5r: e(u1) | m=3, δ=5r: e(u2) | m=3, δ=8r: e(u1) | m=3, δ=8r: e(u2) |
|------|-----------------|-----------------|-----------------|-----------------|
| MLS | 1.03×10⁰ | 0.98×10¹ | 1.01×10⁰ | 9.2×10⁰ |
| MMLS | 9.23×10⁻⁴ | 9.22×10⁻⁴ | 6.89×10⁻⁴ | 6.96×10⁻⁴ |

Table 11.

Maximum relative errors for h = 0.004 by MMLS and MLS, example 4.

| Weight type | MLS: e(u1) | MLS: e(u2) | MLS: CPU time | MMLS: e(u1) | MMLS: e(u2) | MMLS: CPU time |
|--------|-----------|-----------|---------------|------------|------------|----------------|
| Gauss | 3.06×10⁻³ | 9.92×10⁻⁵ | 61.1706 | 8.5×10⁻⁴ | 6.5×10⁻⁴ | 0.5598 |
| Spline | 5.06×10⁻⁴ | 5.3×10⁻⁴ | 64.5897 | 1.93×10⁻² | 4.23×10⁻⁴ | 0.6714 |
| RBF | 6.407×10⁻⁴ | 3.02×10⁻⁴ | 59.1790 | 6.9×10⁻³ | 6.9×10⁻³ | 0.5768 |

Table 12.

Maximum relative errors by MLS and MMLS for t ∈ [0, 5], h = 0.004, for Example 5.

6.5 Example 5

In this example, we consider $U(t) = \frac{1}{47}\left(95\exp(-2t) - 48\exp(-96t),\ 48\exp(-96t) - \exp(-2t)\right)$ as the exact solution and $U(0) = (1, 1)$ as the initial condition for the following system of ODEs:

$$x'(t) = -x(t) + 95\, y(t),$$
$$y'(t) = -x(t) - 97\, y(t).$$

Table 12 presents the maximum relative errors on a fine set of evaluation points (with $h = 0.004$) and $\delta = 5h$, for MLS and MMLS with different types of weight functions. As seen in this table, one major advantage of MMLS is that its computational time is much smaller than that of MLS.

7. Conclusion

In this chapter, two meshless techniques, the moving least squares and the modified moving least squares approximations, are applied to solve systems of functional equations. Comparing the results obtained by these methods with the exact solutions shows that the moving least squares methods are reliable and accurate for solving systems of functional equations. Meshless methods are free from meshing the domain, which makes them suitable for studying real-world problems. Moreover, the modified algorithm changes the way the support radius is selected: the user can start solving any problem with an arbitrary radius, and the proposed algorithm corrects it during execution.

© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
