Open access peer-reviewed chapter

Modified Moving Least Squares Method for Two-Dimensional Linear and Nonlinear Systems of Integral Equations

Written By

Massoumeh Poura’bd Rokn Saraei and Mashaallah Matinfar

Submitted: 05 February 2019 Reviewed: 28 August 2019 Published: 13 May 2020

DOI: 10.5772/intechopen.89394

From the Edited Volume

Nonlinear Systems - Theoretical Aspects and Recent Applications

Edited by Walter Legnani and Terry E. Moschandreou


Abstract

This work focuses on modifying the moving least squares (MLS) method for solving two-dimensional linear and nonlinear systems of integral equations and systems of differential equations. The modified shape function is our main aim, so an efficient algorithm is presented for computing the shape function based on the moving least squares method. In this modification, additional terms are imposed, based on the coefficients of the polynomial base functions, on the quadratic base functions (m = 2). The modified MLS (MMLS) method is thus developed for solving systems of two-dimensional linear and nonlinear integral equations at irregularly distributed nodes. This approach prevents a singular moment matrix in the context of MLS-based meshfree methods. Also, determining the best radius of the support domain of a field node is an open problem for MLS-based methods; therefore, another important point is that the MMLS algorithm can automatically find the best neighborhood radius for each node. Numerical examples are then presented that illustrate the main motivations for this work. These examples enable us to compare the two methods: MMLS and classical MLS.

Keywords

  • moving least squares
  • modified moving least squares
  • systems of integral equations
  • algorithm of shape function
  • numerical solutions
  • MSC 2010: 45G15, 45F05, 45F35, 65D15

1. Introduction

In mathematics, many functional equations describe real systems in the natural sciences (such as physics, biology, Earth science, and meteorology) and in the engineering disciplines. For instance, we can point to mathematical models from physics that describe heat as a partial differential equation and its inverse problem as an integro-differential equation. Another example from nature is Laplace's equation, which corresponds to the construction of a potential for a vector field whose effect is known on the boundary of the domain alone. In particular, integral equations have wide applicability, as cited in [1, 2, 3, 4]. Although there are many significant analytical methods for solving integral equations, for most of them, especially in nonlinear cases, finding an analytical representation of the solution is very difficult; therefore, it is necessary to obtain approximate solutions. The interested reader can find several numerical methods for approximating the solution of these problems in [5, 6, 7, 8, 9, 10, 11, 12, 13, 14] and the references therein.

Moreover, various numerical and analytical methods have been used to estimate the solution of integro-differential equations and Abel's integral equations [12, 15, 16, 17, 18]. Recently, meshless methods, particularly the moving least squares (MLS) method, have attracted attention for the solution of partial and ordinary differential equations. Using this approach, new methods such as the meshless local boundary integral equation method [19], the boundary node method (BNM) [20], the moving least squares reproducing polynomial meshless method [21], and other related methods have been constructed. A new class of meshless methods has been developed that relies only on a set of nodes, without the need for an additional mesh, in the solution of one-dimensional systems of integral equations [22].

The local approximation of the unknown function used in the MLS method makes it possible to choose the compact support domain for each data point as a sphere or a parallelogram box centered on the point [23, 24]. Each data point is thus associated with the size of its compact support domain through a dilatation parameter, so the number of data points and the dilatation parameter directly affect the MLS. Moreover, increasing the degree of the polynomial base functions handles complex data distributions in a more reliable fashion. Nevertheless, in this case, it becomes more difficult to ensure the independence of the shape functions, and the least-squares minimization problem becomes ill-posed.

The common way to enlarge the set of admissible node distributions is to increase the size of the support domains (a valid node distribution is referred to as an "admissible node distribution" [23]). Several proposals have been made for choosing the radius of the support domain [25]; one efficient suggestion was raised in [26]. The author of [27] introduced a new algorithm for selecting a suitable radius of the domain of influence. Also, in [28], a modified MLS (MMLS) approximation was presented, built on a shape function generation algorithm with additional terms based on the coefficients of the polynomial basis functions. It is an efficient method proposed for handling a singular moment matrix in MLS-based methods. The advantage of such methods over mesh-based methods, such as finite elements or finite volumes, is that the geometry of the domain does not interfere with the approximation, because it is based on a set of scattered points instead of domain elements used for interpolation or approximation.


2. Methodology

2.1 Introduction of the MLS approximation

The moving least squares (MLS) method is a feasible numerical approximation method that extends the least squares method; it belongs to the class of meshless schemes with highly accurate approximation. The MLS approximation is popular in many meshless methods [12, 19, 21, 22, 29, 30]. Many procedures used to construct the MLS shape function rely on the support-domain concept: the support domain of the shape function is a set of nodes in the problem domain, and only those points contribute directly to the construction of the shape function, so the MLS shape function is locally supported. According to the classical least squares method, an optimization problem should be solved as follows:

$$\min \sum_{j=1}^{m}\left[u^{h}(x_j)-u(x_j)\right]^{2}$$

where, ideally, the approximation function $u^h(x)$ should match the function $u(x)$. Therefore, in the MLS approach, a weighted optimization problem is solved which depends on the nodal points. We start with the basic idea of taking a set of nodal points in $\Omega$, where $\Omega\subset\mathbb{R}^d$. Also, $\Omega_x\subset\Omega$ denotes the neighboring nodes of the point $x$. We seek an approximation function with $m$ basis functions in a system of $n$ equations:

TU=F

where $T$ consists of linear and nonlinear operators, $U=\left(u_1,u_2,\dots,u_n\right)$ is the unknown vector of functions, and $F=\left(f_1,f_2,\dots,f_n\right)$ is the known vector of functions.

So, for approximating any of the $u_i$, $i=1,2,\dots,n$, in $\Omega_x$, $x\in\Omega_x$, $u_i^h(x)$ can be defined as

$$u_i^{h}(x)=\sum_{j=1}^{m}a_j(x)\,p_j(x)=\mathbf{p}^{T}(x)\,\mathbf{a}(x). \qquad (1)$$

Let $P=\{p_1,p_2,\dots,p_m\}$ be a set of polynomials of degree at most $m$, $m\in\mathbb{N}$, and let $\mathbf{a}(x)$ be a vector containing unknown coefficients $a_j(x)$, $j=1,2,\dots,m$, which depend on the position of the point of interest. The $m$ unknown functions $\mathbf{a}(x)=\left(a_1(x),a_2(x),\dots,a_m(x)\right)$ are chosen such that

$$J(x)=\sum_{j=1}^{N}\left[\mathbf{p}^{T}(x_j)\,\mathbf{a}(x)-u_i(x_j)\right]^{2}w_j(x)=\left(P\mathbf{a}-\mathbf{u}_i\right)^{T}W\left(P\mathbf{a}-\mathbf{u}_i\right), \qquad (2)$$

is minimized. Note that the weight function $w_j(x)$ is associated with the node $x_j$. As we know, any radial basis function defined as in [31] can be used as a weight function: we can set $w_j(r)=\phi\left(r/\delta\right)$, where $r=\lVert x-x_j\rVert_2$ (the Euclidean distance between $x$ and $x_j$) and $\phi:\mathbb{R}^d\rightarrow\mathbb{R}$ is a nonnegative function with compact support. In this chapter, we will use the following weight functions, corresponding to the node $j$, and compare them with each other in the numerical examples.

a. Gauss weight function

$$w(r)=\begin{cases}\dfrac{\exp\left(-\left(r/c\right)^{2}\right)-\exp\left(-\left(\delta/c\right)^{2}\right)}{1-\exp\left(-\left(\delta/c\right)^{2}\right)}, & 0\leq r\leq\delta\\[4pt] 0, & \text{elsewhere}.\end{cases} \qquad (3)$$

b. RBF weight function

$$w(r)=\begin{cases}\left(1-\dfrac{r}{\delta}\right)^{6}\left(6+36\dfrac{r}{\delta}+82\left(\dfrac{r}{\delta}\right)^{2}+72\left(\dfrac{r}{\delta}\right)^{3}+30\left(\dfrac{r}{\delta}\right)^{4}+5\left(\dfrac{r}{\delta}\right)^{5}\right), & 0\leq r\leq\delta\\[4pt] 0, & \text{elsewhere}.\end{cases} \qquad (4)$$

c. Spline weight function

$$w(r)=\begin{cases}1-6\left(\dfrac{r}{\delta}\right)^{2}+8\left(\dfrac{r}{\delta}\right)^{3}-3\left(\dfrac{r}{\delta}\right)^{4}, & 0\leq r\leq\delta\\[4pt] 0, & \text{elsewhere}.\end{cases} \qquad (5)$$

where $c$ is a constant called the shape parameter, and $\delta$ is the size of the support domain.
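As a quick sketch of these three definitions in Python (the scaling $r/\delta$ inside Eqs. (4) and (5) follows the convention above, and $c$ and $\delta$ are user-chosen parameters, so treat this as an illustrative reading of the formulas rather than the authors' code):

```python
import numpy as np

def gauss_weight(r, delta, c):
    """Truncated Gaussian weight, Eq. (3); supported on 0 <= r <= delta."""
    r = np.asarray(r, dtype=float)
    num = np.exp(-(r / c) ** 2) - np.exp(-(delta / c) ** 2)
    den = 1.0 - np.exp(-(delta / c) ** 2)
    return np.where(r <= delta, num / den, 0.0)

def rbf_weight(r, delta):
    """Compactly supported RBF weight, Eq. (4), evaluated at s = r / delta."""
    s = np.asarray(r, dtype=float) / delta
    w = (1 - s) ** 6 * (6 + 36 * s + 82 * s ** 2 + 72 * s ** 3 + 30 * s ** 4 + 5 * s ** 5)
    return np.where(s <= 1.0, w, 0.0)

def spline_weight(r, delta):
    """Quartic spline weight, Eq. (5)."""
    s = np.asarray(r, dtype=float) / delta
    return np.where(s <= 1.0, 1 - 6 * s ** 2 + 8 * s ** 3 - 3 * s ** 4, 0.0)
```

All three vanish at $r=\delta$ and outside the support, which is what makes the resulting shape functions locally supported.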

Let $N$ be the number of nodes in $\Omega_x$ with $w_i(x)>0$. The matrices $P$ and $W$ are defined as

$$P=\begin{bmatrix}\mathbf{p}^{T}(x_1)\\\mathbf{p}^{T}(x_2)\\\vdots\\\mathbf{p}^{T}(x_N)\end{bmatrix}_{N\times(m+1)} \qquad (6)$$

$$W=\operatorname{diag}\left(w_i(x)\right),\quad i=1,2,\dots,N \qquad (7)$$

and

$$u^{h}=\left(u_1^{h},u_2^{h},\dots,u_n^{h}\right). \qquad (8)$$

It is important to note that the $u_i$, $i=1,2,\dots,n$, in (2) and (8) are the artificial nodal values, and not, in general, the nodal values of the unknown trial function $u^h(x)$. Minimizing (2) with respect to $\mathbf{a}(x)$ yields

$$A(x)\,\mathbf{a}(x)=B(x)\,\mathbf{u}_i, \qquad (9)$$

where the matrices $A(x)$ and $B(x)$ are defined by

$$B(x)=\left[w_1(x)\,\mathbf{p}(x_1),\ w_2(x)\,\mathbf{p}(x_2),\ \dots,\ w_N(x)\,\mathbf{p}(x_N)\right] \qquad (10)$$

$$A(x)=\sum_{i=1}^{N}w_i(x)\,\mathbf{p}(x_i)\,\mathbf{p}^{T}(x_i)=P^{T}W(x)\,P. \qquad (11)$$

The matrix $A(x)$ in (11) is non-singular if and only if the rank of $P$ equals $m$. In such a case, the MLS approximation is well-defined. Once $\mathbf{a}(x)$ is computed, $u_i^h$ can be obtained as follows:

$$u_i^{h}(x)=\sum_{j=1}^{N}\phi_j(x)\,u_i(x_j)=\boldsymbol{\varphi}^{T}(x)\,\mathbf{u}_i \qquad (12)$$

Here $\phi_j(x)$ is called the shape function of the MLS approximation corresponding to the nodal point $x_j$, where

$$\boldsymbol{\varphi}(x)=\mathbf{p}^{T}(x)\,A^{-1}(x)\,B(x). \qquad (13)$$

Using the weight function, the matrices $A$ and $B$ are computed, and then $\phi_i(x)$ is determined from (13). If, further, $\phi$ is sufficiently smooth, derivatives of $U$ are usually approximated by derivatives of $U^h$:

$$D^{\alpha}u_i(x)\approx D^{\alpha}u_i^{h}(x)=\sum_{j=1}^{N}D^{\alpha}\phi_j(x)\,u_i(x_j),\quad x\in\Omega \qquad (14)$$
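The whole construction above fits in a few lines. The following Python sketch (1-D nodes, the spline weight of Eq. (5), linear basis $m=1$; all parameter values are illustrative choices, not the chapter's) assembles $A$ and $B$ from Eqs. (10) and (11) and evaluates the shape functions of Eq. (13). The printed check uses the standard consistency property that MLS with a degree-$m$ basis reproduces polynomials of degree up to $m$:

```python
import numpy as np

def mls_shape(x, nodes, delta, m=1):
    """MLS shape functions phi_j(x) of Eq. (13) for a 1-D node set,
    using the quartic spline weight of Eq. (5)."""
    nodes = np.asarray(nodes, dtype=float)
    r = np.abs(x - nodes)
    s = np.clip(r / delta, 0.0, 1.0)
    w = np.where(r <= delta, 1 - 6 * s ** 2 + 8 * s ** 3 - 3 * s ** 4, 0.0)
    P = np.vander(nodes, m + 1, increasing=True)   # rows p^T(x_j), as in Eq. (6)
    A = P.T @ (w[:, None] * P)                     # moment matrix, Eq. (11)
    B = P.T * w                                    # Eq. (10)
    p_x = np.vander(np.array([x]), m + 1, increasing=True)[0]
    return p_x @ np.linalg.solve(A, B)             # p^T(x) A^{-1} B, Eq. (13)

nodes = np.linspace(0.0, 1.0, 11)
phi = mls_shape(0.37, nodes, delta=0.25)
# consistency check: the shape functions form a partition of unity
# and reproduce the linear function x itself
print(phi.sum(), phi @ nodes)
```

The same routine extends to higher dimensions by replacing the Vandermonde rows with a multivariate basis $\mathbf{p}(x,y)$.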

2.2 Modified algorithm for the MLS shape function

In the MLS approximation method, a local evaluation of the approximated unknown function is desired; therefore, for any nodal point, the compact support domain is chosen as a sphere or a parallelogram box centered on the point [23, 29, 32]. This determines which points each support domain contains. Each data point has an associated dilatation parameter $\lambda$, which gives $\delta_i=\lambda h_i$, where $\delta_i$ is the size of the compact support domain of the nodal point $x_i$.

A necessary condition for the moment matrix $A$ to be nonsingular is that, for any point $x_i\in\Omega$, $i=1,2,\dots,N$ [31],

$$\#\left\{\,j:\ x_j\in\Omega_{\delta_i}\,\right\}\geq m,\qquad j=1,2,\dots,N,$$

that is, the support domain $\Omega_{\delta_i}$ of $x_i$ must contain at least $m$ nodes. So the dilatation parameter $\lambda$ determines the number of points in the support domain, and these points participate in the production of the shape function. Therefore, $\lambda$ is very important and should be chosen correctly so that the moment matrix $A$ is nonsingular.

In the remainder of this section, we introduce the new algorithm, which avoids the singularity of the matrix $A$ by choosing the parameter $\lambda$ correctly.

Algorithm 1

Require: $X=\{x_i : i=1,2,\dots,N\}$ - coordinates of the points whose MLS shape functions are to be evaluated.

1: procedure Matrix A
2:  $\lambda_{new}\leftarrow\lambda$
3:  $\alpha\leftarrow 0.01$ (this value is selected experimentally)
4:  $\delta=\lambda_{new}\times h$ ($h$: the fill distance, defined as $h=\sup_{x\in\Omega}\min_{1\leq j\leq N}\lVert x-x_j\rVert_2$)
5:  Loop:
6:  set $I(x)=\left\{\,j\in\{1,2,\dots,N\}:\ \lVert x-x_j\rVert_2\leq\delta\,\right\}$ (localization at a fixed point $x$ via the index set $I$)
7:  for $j\in I(x)$ do
8:   for $i=1:N$ do
9:    compute $w_i$ for any $x_j\in\Omega_i$
10:    $A=A+w_i\,\mathbf{p}(x_i)\,\mathbf{p}^{T}(x_i)$
11:   end
12:  end
13:  if $\operatorname{cond}(A)\geq 1/\varepsilon$ then
14:  {
15:   $\lambda_{new}=\lambda_{new}+\alpha\lambda$
16:   $\delta=\lambda_{new}\times h$
17:  else
18:   go to end
19:  }
20:  if $\delta_i\leq\operatorname{diam}(\Omega)/2$ then
21:   go to Loop
22:  end

In Algorithm 1.

$X$ is a set containing $N$ scattered points, called centers or data sites, and $I(x)$ is the index set of the points at which the MLS shape function is evaluated.

$\alpha$ is a small positive number selected experimentally.

Then, at every nodal point, the matrix $A$ is computed.

By running the algorithm, a new value is assigned to $\lambda$; this value is related to the condition number of the matrix $A$, and its amount increases. Therefore, the size of the support domain increases, and the matrix $A$ is then rebuilt with the new nodal points in the support domain. This loop is repeated until $\operatorname{cond}(A)<1/\varepsilon$.

The main idea of the moving least squares approximation is that, for every point $x$, a locally weighted least squares problem is solved [30]; it is a local approximation. Thus, the additional condition for stopping the loop concerns the size of the local support domain: the value of $\lambda$ should be just large enough to preserve the local character of the approximation. Line 20 enforces this condition.
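A minimal 1-D Python sketch of Algorithm 1 follows (the quartic spline weight and quadratic basis are the choices used elsewhere in this chapter; the crude surrogate for the fill distance $h$ and the bound $\operatorname{diam}(\Omega)/2$ standing in for the local-support test of line 20 are simplifying assumptions of this sketch):

```python
import numpy as np

def adaptive_radius(x, nodes, lam=1.0, alpha=0.01, eps=1e-12, m=2):
    """Grow delta = lam * h at the point x until the moment matrix A of
    Eq. (11) is acceptably conditioned (cond(A) < 1/eps), as in Algorithm 1."""
    nodes = np.asarray(nodes, dtype=float)
    h = np.max(np.diff(np.sort(nodes)))   # crude 1-D surrogate for the fill distance
    diam = nodes.max() - nodes.min()
    delta = lam * h
    while True:
        r = np.abs(x - nodes)
        s = np.clip(r / delta, 0.0, 1.0)
        w = np.where(r <= delta, 1 - 6 * s ** 2 + 8 * s ** 3 - 3 * s ** 4, 0.0)
        P = np.vander(nodes, m + 1, increasing=True)
        A = P.T @ (w[:, None] * P)        # moment matrix for the current radius
        if np.linalg.cond(A) < 1.0 / eps or delta > diam / 2:
            return delta, A               # radius accepted (or local bound reached)
        lam += alpha * lam                # enlarge the dilatation parameter (cf. line 15)
        delta = lam * h

delta, A = adaptive_radius(0.5, np.linspace(0.0, 1.0, 21))
```

With only the center node inside the initial radius, $A$ starts out singular; the loop enlarges $\delta$ until enough neighbors enter the support.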

2.3 Modified MLS approximation method

One of the common problems in the classic MLS method is the singularity of the moment matrix $A$ for irregularly chosen nodal points. To avoid nodal configurations which lead to a singular moment matrix, the usual solution is to enlarge the support domains of the nodal points. But this is not an appropriate solution; to tackle such problems, a modified moving least squares (MMLS) approximation method is proposed in [31]. This modification allows base functions with $m\geq 2$ to be used with the same size of support domain as linear base functions ($m=1$). We should note that imposing additional terms based on the coefficients of the polynomial base functions is the main idea of the modified technique. As we know, the basis function $\mathbf{p}(x)$ is

$$\mathbf{p}(x)=\left(1,\ x,\ x^{2},\ \dots,\ x^{m}\right)^{T} \qquad (15)$$

where $x\in\mathbb{R}$, and the corresponding coefficients $a_j$, which should be determined, are [24]:

$$\mathbf{a}(x)=\left(a_1(x),\ a_2(x),\ \dots,\ a_m(x)\right)^{T} \qquad (16)$$

To obtain these coefficients, the functional (2) is rewritten as follows:

$$\bar{J}(x)=\sum_{j=1}^{N}\left[\mathbf{p}^{T}(x_j)\,\mathbf{a}(x)-u_i(x_j)\right]^{2}w_j(x)+\sum_{\nu=2}^{m}\bar{w}_{\nu}(x)\,\bar{a}_{\nu}^{2}(x),\quad i=1,2,\dots,n \qquad (17)$$

where $\bar{w}$ is a vector of positive weights for the additional constraints, and $\bar{\mathbf{a}}=\left(a_2(x),a_3(x),\dots,a_m(x)\right)^{T}$ collects the higher-order coefficients.

The matrix form of (17) is as follows:

$$\bar{J}(x)=\left(P\mathbf{a}-\mathbf{u}_i\right)^{T}W\left(P\mathbf{a}-\mathbf{u}_i\right)+\mathbf{a}^{T}H\,\mathbf{a},\quad i=1,2,\dots,n \qquad (18)$$

where $H$ is

$$H=\begin{bmatrix}O_{2,2} & O_{2,m-2}\\ O_{m-2,2} & \operatorname{diag}\left(\bar{w}\right)\end{bmatrix}, \qquad (19)$$

and $O_{i,j}$ denotes an $i\times j$ null matrix. By minimizing the functional (18), the coefficients $\mathbf{a}(x)$ are obtained. So we have

$$\bar{A}(x)\,\mathbf{a}(x)=B(x)\,\mathbf{u}_i, \qquad (20)$$

where

$$\bar{A}=P^{T}WP+H \qquad (21)$$

and the matrix $B(x)$ is determined as before. So we have

$$\boldsymbol{\varphi}^{m}(x)=\mathbf{p}^{T}(x)\,\bar{A}^{-1}(x)\,B(x) \qquad (22)$$

where $\boldsymbol{\varphi}^{m}(x)$ is the shape function of the MMLS approximation method.
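The only change relative to classic MLS is the matrix $H$ added in (21). In the following 1-D Python sketch (quadratic basis, spline weight, and $\bar{w}=0.1$ as in the examples of Section 6; the two-node configuration is chosen deliberately so that the classic moment matrix $A$ would be singular), the regularized matrix $\bar{A}=P^TWP+H$ remains invertible and the resulting shape functions still reproduce constants and linear functions:

```python
import numpy as np

def mmls_shape(x, nodes, delta, m=2, wbar=0.1):
    """MMLS shape functions of Eq. (22): classic MLS with the moment
    matrix regularized by H of Eq. (19)."""
    nodes = np.asarray(nodes, dtype=float)
    r = np.abs(x - nodes)
    s = np.clip(r / delta, 0.0, 1.0)
    w = np.where(r <= delta, 1 - 6 * s ** 2 + 8 * s ** 3 - 3 * s ** 4, 0.0)
    P = np.vander(nodes, m + 1, increasing=True)
    A = P.T @ (w[:, None] * P)            # classic moment matrix, Eq. (11)
    H = np.zeros((m + 1, m + 1))
    H[2:, 2:] = wbar * np.eye(m - 1)      # penalize only higher-order coefficients
    B = P.T * w                           # Eq. (10)
    p_x = np.vander(np.array([x]), m + 1, increasing=True)[0]
    return p_x @ np.linalg.solve(A + H, B)   # Eqs. (21)-(22)

# two nodes in the support: the classic quadratic A is singular, A + H is not
phi = mmls_shape(0.5, np.array([0.45, 0.55]), delta=0.15)
```

Because $H$ leaves the constant and linear coefficients unpenalized, the minimizer for constant or linear data is unchanged, so partition of unity and linear reproduction survive the regularization.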


3. Stiff systems of ordinary differential equations

In this section, we use the MLS approximation method for the numerical solution of stiff systems of ordinary differential equations. Consider the following differential equation:

$$\mathcal{A}U-F(x)=0,\quad U(0)=U_0,\quad x\in\Omega \qquad (23)$$

with boundary conditions

$$B\left(U,\frac{\partial U}{\partial x}\right)=0,\quad x\in\partial\Omega,$$

where $\mathcal{A}$ is a general differential operator, $U_0$ is the initial value for (23), $F(x)$ is a vector of known analytical functions on the domain $\Omega$, and $\partial\Omega$ is the boundary of $\Omega$. The operator can be decomposed as $\mathcal{A}=L+N$, where $L$ is the linear part and $N$ is the nonlinear part. So (23) can be rewritten as follows:

$$LU+NU-F(x)=0 \qquad (24)$$

We assume that $a=\left(a_1,a_2,\dots,a_N\right)$ are the MLS shape functions. In order to solve system (24), $N$ nodal points $x_i$ are selected on $\Omega$ such that $\{x_i\}_{i=1}^{N}$ is $q$-unisolvent. The distribution of nodes can be regular or random. Then, instead of $u_j$ from $U$, we can use $u_j^h$ from (13). So we have

$$u_j^{h}(x)=\sum_{i=1}^{N}a_i(x)\,u_j(x_i) \qquad (25)$$

where $j=1,2,\dots,n$ indexes the unknown functions. We estimate the unknown functions $u_j$ by (25), so system (24) takes the following form:

$$L\left(\sum_{i=1}^{N}a_i(x)u_1(x_i),\ \dots,\ \sum_{i=1}^{N}a_i(x)u_n(x_i)\right)+N\left(\sum_{i=1}^{N}a_i(x)u_1(x_i),\ \dots,\ \sum_{i=1}^{N}a_i(x)u_n(x_i)\right)=\left(f_1(x),\dots,f_n(x)\right)+r(x), \qquad (26)$$

where $r(x)$ is a residual error function which vanishes at the collocation points. Thus, imposing the collocation points $y_r$, $r=1,2,\dots,N$, gives

$$\sum_{i=1}^{N}La_i(y_r)\,u_j(x_i)+N\left(\sum_{i=1}^{N}a_i(y_r)u_1(x_i),\ \dots,\ \sum_{i=1}^{N}a_i(y_r)u_n(x_i)\right)=f_j(y_r),\quad j=1,2,\dots,n, \qquad (27)$$

therefore

$$CU=\begin{bmatrix}La_1(y_1) & La_2(y_1) & \cdots & La_N(y_1)\\ La_1(y_2) & La_2(y_2) & \cdots & La_N(y_2)\\ \vdots & \vdots & & \vdots\\ La_1(y_N) & La_2(y_N) & \cdots & La_N(y_N)\end{bmatrix}\begin{bmatrix}u_1(x_1) & u_2(x_1) & \cdots & u_n(x_1)\\ u_1(x_2) & u_2(x_2) & \cdots & u_n(x_2)\\ \vdots & \vdots & & \vdots\\ u_1(x_N) & u_2(x_N) & \cdots & u_n(x_N)\end{bmatrix} \qquad (28)$$

and the matrix form of (27) is as follows:

$$C_{N\times N}\,U_{N\times n}+N_{N\times n}\left(aU\right)=F_{N\times n}\left(y_r\right) \qquad (29)$$

where

$$C_i=\left[La_1(y_r)\ \cdots\ La_N(y_r)\right],\quad U_i=\left(u_i(x_1),u_i(x_2),\dots,u_i(x_N)\right)^{T},\ i=1,\dots,n,\quad F(y_r)=\left(\left(f_1(y_r)\right)_{r=1}^{N},\ \left(f_2(y_r)\right)_{r=1}^{N},\ \dots,\ \left(f_n(y_r)\right)_{r=1}^{N}\right)^{T}. \qquad (30)$$

By imposing the initial conditions at $t=0$, we have

$$\left(\sum_{i=1}^{N}a_i(0)\,u_1(t_i),\ \sum_{i=1}^{N}a_i(0)\,u_2(t_i),\ \dots,\ \sum_{i=1}^{N}a_i(0)\,u_n(t_i)\right)=U_0 \qquad (31)$$

Solving the nonlinear system (29) and (31) leads to the quantities $u_j(x_i)$. Then the value of $u_j(x)$ at any point $x\in\Omega$ can be approximated by Eq. (25) as:

$$u_j(x)\approx\sum_{i=1}^{N}a_i(x)\,u_j(x_i)$$
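As an illustration of the collocation construction (26)-(31), the following Python sketch solves the single linear test equation $u'(t)=-u(t)$, $u(0)=1$ on $[0,1]$, so that $Lu=u'+u$ and $N=0$ (the test problem, node count, and support radius are illustrative choices). The MLS shape functions use the spline weight; their derivatives in the matrix $C$ of (28) are approximated here by central differences, which is an implementation shortcut, not part of the method:

```python
import numpy as np

def mls_shape(x, nodes, delta, m=2):
    # 1-D MLS shape functions with the quartic spline weight, Eq. (5)
    nodes = np.asarray(nodes, dtype=float)
    r = np.abs(x - nodes)
    s = np.clip(r / delta, 0.0, 1.0)
    w = np.where(r <= delta, 1 - 6 * s ** 2 + 8 * s ** 3 - 3 * s ** 4, 0.0)
    P = np.vander(nodes, m + 1, increasing=True)
    A = P.T @ (w[:, None] * P)
    p_x = np.vander(np.array([x]), m + 1, increasing=True)[0]
    return p_x @ np.linalg.solve(A, P.T * w)

# test problem: u'(t) + u(t) = 0, u(0) = 1, exact solution exp(-t)
nodes = np.linspace(0.0, 1.0, 21)
delta = 0.3
h = 1e-6                      # step for central differences (implementation shortcut)
rows, rhs = [], []
for t in nodes:
    phi = mls_shape(t, nodes, delta)
    dphi = (mls_shape(t + h, nodes, delta) - mls_shape(t - h, nodes, delta)) / (2 * h)
    rows.append(dphi + phi)   # collocated L u = u' + u, as in (27)-(28)
    rhs.append(0.0)
rows[0] = mls_shape(0.0, nodes, delta)   # initial condition (31) replaces one row
rhs[0] = 1.0
U = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
u_half = mls_shape(0.5, nodes, delta) @ U      # recover u(0.5) via (25)
```

The recovered nodal parameters $U$ are the artificial values of (25); evaluating the expansion at any $t$ gives the approximate solution, here close to $e^{-t}$.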

Remark

Note that, for simplicity, the modified derivation is presented for bivariate data, but it can easily be extended to higher dimensions. Also, to implement the modified moving least squares approximation method, it is sufficient to use $\boldsymbol{\varphi}^m$ instead of $\boldsymbol{\varphi}$.

3.1 Error analysis

The convergence analysis of the method in the matrix norm has been investigated for systems of one- and two-dimensional Fredholm integral equations by the authors of [22]. It should be noted that this convergence analysis concerns the classic moving least squares approximation applied to a system of integral equations; for the modified MLS method it remains to be investigated, though we can use those results for this type of equation.

In the continuation of this section, error estimates for systems of differential equations are developed. In [26], error estimates were obtained for moving least squares approximations in the one-dimensional case. In [33], these were extended to functionals in $n$ dimensions, and the error estimates were used to prove an error estimate for coercive Galerkin problems. In this work, we develop an error estimate for stiff systems of ordinary differential equations.

Given $\delta>0$, let $w_\delta\geq 0$ be a function such that $\operatorname{supp}w_\delta\subset\overline{B_\delta(0)}=\{z:\lVert z\rVert\leq\delta\}$, let $X_\delta=\{x_1,x_2,\dots,x_n\}$, $n=n(\delta)$, be a set of points in $\Omega\subset\mathbb{R}$, an open interval, and let $U=\left(u_1,u_2,\dots,u_N\right)$ be the unknown function, with $u_{i,1},u_{i,2},\dots,u_{i,n}$ the values of $u_i$ at those points, i.e., $u_{i,j}=u_i(x_j)$, $i=1,\dots,N$, $j=1,\dots,n$. A class of functions $W=\{\omega_j\}_{j=1}^{N}$ is called a partition of unity subordinate to the open covering $I_N$ if it possesses the following properties:

  • $\omega_j\in C_0^{s}$, $s>0$ or $s=\infty$,

  • $\operatorname{supp}\omega_j\subset\bar{\Lambda}_j$,

  • $\omega_j(x)>0$, $x\in\Lambda_j$,

  • $\sum_{j=1}^{N}\omega_j(x)=1$ for every $x\in\bar{\Omega}$.

There is no unique way to build a partition of unity as defined above [34].

As we know, the MLS approximation is well defined if the minimization problem has a unique solution at every point $x\in\bar{\Omega}$, which is ensured by the non-singularity of the matrix $A(x)$. In [33] the error estimate was obtained under the following assumptions about the system of nodes and weight functions $\left(\Theta_N,W_N\right)$.

We define

$$\langle u,v\rangle=\sum_{j=1}^{n}w\left(x-x_j\right)u(x_j)\,v(x_j),$$

then

$$\lVert u\rVert_x^{2}=\sum_{j=1}^{n}w\left(x-x_j\right)u(x_j)^{2}.$$

Also, for the vector of unknown functions, we define

$$\lVert U\rVert=\max\left\{\lVert u_i\rVert_x:\ i=1,2,\dots,N\right\}.$$

These are discrete norms on the polynomial space $P_{m-1}$ if the weight function $w$ satisfies the following properties:

a. For each $x\in\Omega$, $w(x-x_j)>0$ for at least $m+1$ indices $j$.

b. For any $x\in\Omega$, the moment matrix $A(x)=P^{T}W(x)P$ is nonsingular.

Definition 3.1. Given $x\in\bar{\Omega}$, the set $ST(x)=\{j:\omega_j(x)\neq 0\}$ will be called the star of $x$.

Theorem 3.1. [34, 35] A necessary condition for the satisfaction of Property b is that, for any $x\in\bar{\Omega}$,

$$n=\operatorname{card}ST(x)\geq\operatorname{card}P_m=m+1.$$

For a sample point $c\in\bar{\Omega}$, if $ST(c)=\{j_1,\dots,j_k\}$, the mesh-size of the star $ST(c)$ is defined by the number $h_{ST(c)}=\max\{h_{j_1},\dots,h_{j_k}\}$.

Assumptions. Consider the following global assumptions about the parameters. There exist:

(a1) An upper bound on the overlap of clouds:

$$E=\sup_{c\in\bar{\Omega}}\operatorname{card}ST(c).$$

(a2) Upper bounds on the condition numbers:

$$CB_q=\sup_{c\in\bar{\Omega}}CN_q\left(ST(c)\right),\quad q=1,2,$$

where the numbers $CN_q\left(ST(c)\right)$ are computable measures of the quality of the star $ST(c)$, defined in Theorem 7 of [19].

(a3) An upper bound on the mesh-size of the stars:

$$R=\sup_{c\in\bar{\Omega}}h_{ST(c)}.$$

(a4) A uniform bound on the derivatives of $\omega_j$; that is, constants $G_q>0$, $q=1,2$, such that

$$\lVert D^{\mu}\omega_j\rVert_{L^{\infty}}\leq\frac{G_q}{R^{|\mu|}},\quad 1\leq|\mu|\leq q.$$

(a5) A number $\gamma>0$ such that any two points $x,y\in\Omega$ can be joined by a rectifiable curve $\Gamma$ in $\Omega$ with $\operatorname{length}(\Gamma)\leq\gamma\lVert x-y\rVert$.

Under all these assumptions, Zuppa [34] proved the following.

Lemma 3.1. Let $U=\left(u_1,u_2,\dots,u_n\right)$ be such that $u_i\in C^{m+1}(\bar{\Omega})$ and $\lVert U\rVert=\lVert u_k\rVert$, $1\leq k\leq n$. There exist constants $C_q$, $q=1$ or $2$,

$$C_1=C\left(1,\gamma,d,E,G_1,CB_1\right),$$
$$C_2=C\left(1,\gamma,d,E,G_2,CB_1,CB_2\right),$$

such that

$$\left\lVert D^{\mu}U-D^{\mu}U^{h}\right\rVert\leq C_q\,R^{q+1-|\mu|}\,\lVert u_k\rVert_{m+1,L^{\infty}(\Omega)},\quad 0\leq|\mu|\leq q. \qquad (32)$$

Proof: see [36].

3.2 System of ODE

If the nonlinear operator $N$ in (24) is zero, we have

$$LU=\left(f_1,f_2,\dots,f_n\right) \qquad (33)$$

where $U$ is the vector of unknown functions and $L$ is a matrix of derivative operators,

$$LU(\cdot)=\sum_{\varsigma=1}^{n}\lambda_{\varsigma}\frac{\partial^{\varsigma}}{\partial(\cdot)^{\varsigma}}U(\cdot). \qquad (34)$$

From (25), we define

$$U^{h}(t)=\sum_{i=1}^{N}a_i(t)\,U(t_i)$$

where $\{a_i\}_{i=1}^{N}$ are the MLS shape functions defined on the interval $(0,1)$, satisfying the homogeneous counterparts of the boundary conditions in (23). Also, if the weight function $w$ possesses $k$ continuous derivatives, then the shape functions $a_j$ are also in $C^k$ [33]. An approximate solution $U^h(t)$ is obtained by the collocation method, and we demand that

$$L^{h}U(\cdot)=\sum_{\varsigma=0}^{n}\lambda_{\varsigma}\frac{\partial^{\varsigma}}{\partial(\cdot)^{\varsigma}}U^{h}(\cdot) \qquad (35)$$

where $\lambda_{\varsigma}=0$ or $1$. It is assumed that the system of ODEs contains derivatives of order at most $n=2$. Each of the basis functions $a_i$ has compact support contained in $(0,1)$, so the matrix $C$ in (30) is a bounded matrix. If $\delta$ is fixed, independent of $N$, then the resulting system of linear equations can be solved in $O(N)$ arithmetic operations.

Lemma 3.2. Let $U=\left(u_1,u_2,\dots,u_n\right)$ and $F=\left(f_1,f_2,\dots,f_n\right)$ be such that $u_i\in C^{m+1}(\bar{\Omega})$, $m\geq 1$, and $\lVert u_i\rVert=\lVert u_k\rVert$, $k\in\{1,2,\dots,n\}$, where $\Omega$ is a closed, bounded set in $\mathbb{R}$. Assume the quadrature scheme is convergent for all continuous functions on $\Omega$. Further, assume that the stiff system of ODEs (23) with the fixed initial condition is uniquely solvable for given $f_i\in C(\Omega)$. Moreover, take a suitable approximation $U^h$ of $U$. Then, for all sufficiently large $n$, the approximate matrices $L$ for the linear case exist and are uniformly bounded, $\lVert L\rVert\leq M$, with a suitable constant $M<\infty$. For the equations $LU=F$ and $L^hU=F$ we have

$$E(t)=\left\lVert LU(t)-L^{h}U(t)\right\rVert$$

so that

$$E(t)\leq C_q\,K_{\lambda_{\varsigma}}\,R^{m+1-\mu}\,\lVert u_k\rVert_{m+1,L^{\infty}}.$$

Proof. We have

$$LU(t)-L^{h}U(t)=\sum_{\varsigma=0}^{n}\lambda_{\varsigma}\frac{\partial^{\varsigma}}{\partial t^{\varsigma}}U(t)-\sum_{\varsigma=0}^{n}\lambda_{\varsigma}\frac{\partial^{\varsigma}}{\partial t^{\varsigma}}U^{h}(t),$$

so, by Lemma 3.1,

$$\left\lVert LU(t)-L^{h}U(t)\right\rVert\leq\sum_{\varsigma=0}^{n}\lambda_{\varsigma}\left\lVert\frac{\partial^{\varsigma}U(t)}{\partial t^{\varsigma}}-\frac{\partial^{\varsigma}U^{h}(t)}{\partial t^{\varsigma}}\right\rVert\leq\max_{i}\sum_{\varsigma=0}^{n}\lambda_{\varsigma}\left\lVert\frac{\partial^{\varsigma}u_i(t)}{\partial t^{\varsigma}}-\frac{\partial^{\varsigma}u_i^{h}(t)}{\partial t^{\varsigma}}\right\rVert\leq\sum_{\varsigma=0}^{n}C_q\,\lambda_{\varsigma}\,\lVert u_k\rVert_{m+1,L^{\infty}}\,R^{m+1-\varsigma},$$

where $m\geq\varsigma$ is required; so

$$\sum_{\varsigma=0}^{n}\lambda_{\varsigma}R^{m+1-\varsigma}\leq K_{\lambda_{\varsigma}}\,R^{m+1-\mu},$$

where $\mu$ is the order of the highest derivative and $K_{\lambda_{\varsigma}}=\sum_{\varsigma=0}^{n}\lambda_{\varsigma}$, so, as claimed,

$$E(t)\leq C_q\,K_{\lambda_{\varsigma}}\,R^{m+1-\mu}\,\lVert u_k\rVert_{m+1,L^{\infty}}.$$

It should be noted that, in the nonlinear case, the upper bound of the error depends on the nonlinear operator.


4. Two-dimensional linear systems of integral equations

4.1 Fredholm type

In this section, we provide an approximate solution of the 2-D linear system of Fredholm integral equations by the MLS method. The matrix form of this system can be considered as

$$U(x,y)=F(x,y)+\int_{\Omega}K\left(x,y,\theta,s\right)U\left(\theta,s\right)d\theta\,ds,\quad (x,y)\in\Omega, \qquad (36)$$

where $\Omega=[a,b]\times[c,d]$, $\Omega\subset\mathbb{R}^2$. Also, $K\left(x,y,\theta,s\right)=\left[\kappa_{ij}\left(x,y,\theta,s\right)\right]$, $i,j=1,2,\dots,n$, is the matrix of kernels, $U(x,y)=\left(u_1(x,y),u_2(x,y),\dots,u_n(x,y)\right)^{T}$ is the vector of unknown functions, and $F(x,y)=\left(f_1(x,y),f_2(x,y),\dots,f_n(x,y)\right)^{T}$ is the vector of known functions.

In addition, two cases are considered for the domain: a rectangular shape and a nonrectangular one. Three cases relative to the geometry of the nonrectangular domain are considered, which can be transformed into the rectangular shape [35].

The first one is $\Omega=\left\{(\theta,s)\in\mathbb{R}^2:\ a\leq s\leq b,\ g_1(s)\leq\theta\leq g_2(s)\right\}$, where $g_1(s)$ and $g_2(s)$ are continuous functions of $s$; the second one can be considered as $\Omega=\left\{(\theta,s)\in\mathbb{R}^2:\ c\leq\theta\leq d,\ g_1(\theta)\leq s\leq g_2(\theta)\right\}$, where $g_1(\theta)$ and $g_2(\theta)$ are continuous functions of $\theta$. The last one is a domain of neither the first nor the second kind, but which can be separated into a finite number of sub-domains of the first or second kinds; it is labeled a domain of the third kind. Without loss of generality, the first-kind domain can be assumed as

$$\Omega=\left\{(\theta,s)\in\mathbb{R}^2:\ -1\leq s\leq 1,\ g_1(s)\leq\theta\leq g_2(s)\right\} \qquad (37)$$

by the following linear transformation

$$\theta(t,s)=\frac{g_2(s)-g_1(s)}{2}\,t+\frac{g_2(s)+g_1(s)}{2}, \qquad (38)$$

the interval $\left[g_1(s),g_2(s)\right]$ is converted to the fixed interval $[-1,1]$, so we have

$$U(x,y)=F(x,y)+\int_{-1}^{1}\int_{-1}^{1}\bar{K}\left(x,y,t,s\right)U(t,s)\,dt\,ds,\quad (x,y)\in[-1,1]\times[-1,1], \qquad (39)$$

where

$$\bar{K}\left(x,y,t,s\right)=\frac{g_2(s)-g_1(s)}{2}\,K\left(x,y,\theta(t,s),s\right). \qquad (40)$$

The second kind is treated similarly by interchanging the variables, and the third kind can be separated into a finite number of sub-domains of the first or second kinds, so the method can be applied in each sub-domain as described earlier.

So, for the numerical integration, $\Omega=\bigcup_{l=1}^{L}\Omega_l$ with $\Omega_l\cap\Omega_k=\emptyset$ for $l\neq k$, $1\leq k,l\leq L$, and

$$\int_{\Omega}g(s)\,ds=\sum_{l=1}^{L}\int_{\Omega_l}g(s)\,ds. \qquad (41)$$

Here, the MLS method is applied for the general case where the domain is $[a,b]\times[c,d]$.

To apply the method, as described in Section 2.1, we replace each $u_i$ in $U$ by $u_i^h$ from (12). So we have

$$U^{h}(\mathbf{x})=\left(u_1^{h}(\mathbf{x}),u_2^{h}(\mathbf{x}),\dots,u_n^{h}(\mathbf{x})\right)^{T} \qquad (42)$$

Also, obviously from (12),

$$u_j^{h}(x,y)=\sum_{i=1}^{N}\phi_i(x,y)\,u_j\left(x_i,y_i\right) \qquad (43)$$

In this section, it is assumed that the domain has a rectangular shape, so system (36) becomes

$$\left(u_1^{h}(x,y),\dots,u_n^{h}(x,y)\right)^{T}=F(x,y)+\int_{c}^{d}\int_{a}^{b}\left[k_{ij}\left(x,y,t,s\right)\right]\left(u_1^{h}(t,s),\dots,u_n^{h}(t,s)\right)^{T}dt\,ds. \qquad (44)$$

By substituting (42) into (44) and requiring that it hold at the points $\left(x_r,y_r\right)$, $r=1,2,\dots,N$, we have

$$F\left(x_r,y_r\right)=\left(\sum_{i=1}^{N}\phi_i\left(x_r,y_r\right)u_1\left(x_i,y_i\right),\ \dots,\ \sum_{i=1}^{N}\phi_i\left(x_r,y_r\right)u_n\left(x_i,y_i\right)\right)^{T}-\int_{c}^{d}\int_{a}^{b}\left[k_{ij}\left(x_r,y_r,t,s\right)\right]\left(\sum_{i=1}^{N}\phi_i(t,s)u_1\left(x_i,y_i\right),\ \dots,\ \sum_{i=1}^{N}\phi_i(t,s)u_n\left(x_i,y_i\right)\right)^{T}dt\,ds. \qquad (45)$$

We consider an $m_1$-point numerical integration scheme over $\Omega$, with nodes $\left(\tau_k,\varsigma_p\right)$ and weights $\omega_k$ and $\omega_p$, for the integrals in (45), i.e.,

$$\int_{c}^{d}\int_{a}^{b}k_{jl}\left(x_r,y_r,t,s\right)\phi_i(t,s)\,dt\,ds\approx\sum_{p=1}^{m_1}\sum_{k=1}^{m_1}k_{jl}\left(x_r,y_r,\tau_k,\varsigma_p\right)\phi_i\left(\tau_k,\varsigma_p\right)\omega_k\,\omega_p. \qquad (46)$$

Applying the numerical integration rule (46) yields

$$f_l\left(x_r,y_r\right)=\sum_{i=1}^{N}\phi_i\left(x_r,y_r\right)u_{li}-\sum_{j=1}^{n}\sum_{p=1}^{m_1}\sum_{k=1}^{m_1}k_{lj}\left(x_r,y_r,\tau_k,\varsigma_p\right)\left(\sum_{i=1}^{N}\phi_i\left(\tau_k,\varsigma_p\right)u_{ji}\right)\omega_k\,\omega_p,\quad l=1,\dots,n,\ r=1,2,\dots,N,$$

where $u_{ji}$ are the approximate values of $u_j$ obtained when a quadrature rule is used instead of the exact integral. Now, if we set $F_l$, $l=1,2,\dots,n$, as $N\times N$ matrices defined by

$$\left(F_l\right)_{r,i}=\phi_i\left(x_r,y_r\right)-\sum_{p=1}^{m_1}\sum_{k=1}^{m_1}k_{jl}\left(x_r,y_r,\tau_k,\varsigma_p\right)\phi_i\left(\tau_k,\varsigma_p\right)\omega_k\,\omega_p, \qquad (47)$$

then the moment matrix $F$ is defined by (47) as follows:

$$F=\left[F_1,\ F_2,\ \dots,\ F_n\right]_{nN\times nN} \qquad (48)$$

and

$$U=\left(\left(u_{11},u_{12},\dots,u_{1N}\right)^{T},\ \left(u_{21},u_{22},\dots,u_{2N}\right)^{T},\ \dots,\ \left(u_{n1},u_{n2},\dots,u_{nN}\right)^{T}\right)^{T},\quad f\left(x_r,y_r\right)=\left(\left(f_1\left(x_r,y_r\right)\right)_{r=1}^{N},\ \left(f_2\left(x_r,y_r\right)\right)_{r=1}^{N},\ \dots,\ \left(f_n\left(x_r,y_r\right)\right)_{r=1}^{N}\right)^{T}. \qquad (49)$$

Solving the following linear system of equations with a proper numerical method, such as Gaussian elimination, leads to the quantities $u_{ji}$:

$$FU=f \qquad (50)$$

Therefore, $u_j(x,y)$ at any arbitrary point $(x,y)$ in the domain of the problem can be approximated by Eq. (43) as

$$u_j(x,y)\approx\sum_{i=1}^{N}\phi_i(x,y)\,u_{ji}. \qquad (51)$$
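To see the pipeline (43)-(50) end to end, here is a deliberately reduced Python sketch: a single ($n=1$) one-dimensional Fredholm equation $u(x)=f(x)+\int_0^1 x\,t\,u(t)\,dt$ with exact solution $u(x)=x$, in place of the full two-dimensional system (the kernel, node count, and support radius are illustrative choices). The matrix assembled in the loop is the 1-D analogue of the entries (47), and the final solve is Eq. (50):

```python
import numpy as np

def mls_shape(x, nodes, delta, m=2):
    """1-D MLS shape functions, Eq. (13), with the quartic spline weight."""
    nodes = np.asarray(nodes, dtype=float)
    r = np.abs(x - nodes)
    s = np.clip(r / delta, 0.0, 1.0)
    w = np.where(r <= delta, 1 - 6 * s ** 2 + 8 * s ** 3 - 3 * s ** 4, 0.0)
    P = np.vander(nodes, m + 1, increasing=True)
    A = P.T @ (w[:, None] * P)
    p_x = np.vander(np.array([x]), m + 1, increasing=True)[0]
    return p_x @ np.linalg.solve(A, P.T * w)

# test problem: u(x) = f(x) + int_0^1 x t u(t) dt, exact u(x) = x, so f(x) = 2x/3
k = lambda x, t: x * t
f = lambda x: 2.0 * x / 3.0
nodes = np.linspace(0.0, 1.0, 15)
delta = 0.35
tq, wq = np.polynomial.legendre.leggauss(10)
tq, wq = 0.5 * (tq + 1.0), 0.5 * wq            # Gauss-Legendre rule on [0, 1]
Phi_q = np.array([mls_shape(t, nodes, delta) for t in tq])   # phi_i at quadrature pts
F = np.zeros((len(nodes), len(nodes)))
for ri, x in enumerate(nodes):
    # 1-D analogue of the entries (47): phi_i(x_r) minus the quadrature sum
    F[ri] = mls_shape(x, nodes, delta) - (wq * k(x, tq)) @ Phi_q
U = np.linalg.solve(F, f(nodes))               # the linear system (50)
u_mid = mls_shape(0.5, nodes, delta) @ U       # recover u at any point via (43)
```

Because the quadratic basis reproduces the exact linear solution and the 10-point rule integrates the kernel exactly here, the recovered values agree with $u(x)=x$ to machine precision.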

4.2 Volterra type

Implementation of the proposed method for Volterra integral equations is very simple and effective. In this case, the domain under study is $\Omega=[a,x]\times[c,y]$, with $0\leq x\leq 1$, $0\leq y\leq 1$, and $a$, $c$ constants, so a system of Volterra-type integral equations can be considered as

$$U(x,y)=F(x,y)+\int_{c}^{y}\int_{a}^{x}K\left(x,y,t,s\right)U(t,s)\,dt\,ds,\quad (x,y)\in\Omega. \qquad (52)$$

Like the Fredholm type, this is the matrix form of a system, so we have

$$\begin{aligned} U(x,y)&=\left(u_1(x,y),u_2(x,y),\dots,u_n(x,y)\right)^{T}, &&\text{the vector of unknown functions},\\ F(x,y)&=\left(f_1(x,y),f_2(x,y),\dots,f_n(x,y)\right)^{T}, &&\text{the vector of known functions},\\ K\left(x,y,t,s\right)&=\left[\kappa_{ij}\left(x,y,t,s\right)\right],\ i,j=1,2,\dots,n, &&\text{the matrix of kernels}. \end{aligned} \qquad (53)$$

By the following transformations, the intervals $[a,x]$ and $[c,y]$ are transferred to the fixed intervals $[a,b]$ and $[c,d]$:

$$t(x,\theta)=\frac{x-a}{b-a}\,\theta+\frac{b-x}{b-a}\,a, \qquad (54)$$
$$s(y,\xi)=\frac{y-c}{d-c}\,\xi+\frac{d-y}{d-c}\,c. \qquad (55)$$
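The transformations (54) and (55) are plain affine maps; a small Python sketch makes the endpoint behavior explicit ($\theta=a$ maps to $t=a$ and $\theta=b$ maps to $t=x$, and likewise for $s$):

```python
def t_map(x, theta, a, b):
    # Eq. (54): send theta in [a, b] to t in [a, x]
    return (x - a) / (b - a) * theta + (b - x) / (b - a) * a

def s_map(y, xi, c, d):
    # Eq. (55): send xi in [c, d] to s in [c, y]
    return (y - c) / (d - c) * xi + (d - y) / (d - c) * c
```

The Jacobian factors $(x-a)/(b-a)$ and $(y-c)/(d-c)$ of these maps are exactly the scaling that appears in the transformed kernel (60).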

Then each $u_i$ in $U$ is replaced by $u_i^h$ from (12). So we have

$$U^{h}(\mathbf{x})=\left(u_1^{h}(\mathbf{x}),u_2^{h}(\mathbf{x}),\dots,u_n^{h}(\mathbf{x})\right)^{T} \qquad (56)$$

where

$$u_i^{h}(\mathbf{x})=\sum_{j=1}^{N}\phi_j(\mathbf{x})\,u_i\left(\mathbf{x}_j\right) \qquad (57)$$

and $\mathbf{x}=(x,y)\in[a,b]\times[c,d]$. Thus, system (52) becomes

$$\left(u_1^{h}(\mathbf{x}),\dots,u_n^{h}(\mathbf{x})\right)^{T}=F(\mathbf{x})+\int_{c}^{y}\int_{a}^{x}\left[\kappa_{ij}\left(x,y,t,s\right)\right]\left(u_1^{h}(t,s),\dots,u_n^{h}(t,s)\right)^{T}dt\,ds. \qquad (58)$$

Therefore, from (54) and (55), system (58) takes the following form:

$$\left(u_1^{h}(\mathbf{x}),\dots,u_n^{h}(\mathbf{x})\right)^{T}=F(\mathbf{x})+\int_{c}^{d}\int_{a}^{b}\left[\bar{\kappa}_{ij}\left(x,y,\theta,\xi\right)\right]\left(u_1^{h}(\theta,\xi),\dots,u_n^{h}(\theta,\xi)\right)^{T}d\theta\,d\xi, \qquad (59)$$

where

$$\bar{K}\left(\cdot,\cdot,\cdot,\cdot\right)=\frac{x-a}{b-a}\cdot\frac{y-c}{d-c}\,K\left(\cdot,\cdot,\cdot,\cdot\right). \qquad (60)$$

Using the techniques employed in the Fredholm case yields the same final linear system, where

$$\left(F_l\right)_{r,i}=\phi_i\left(x_r,y_r\right)-\sum_{p=1}^{m_1}\sum_{k=1}^{m_1}\bar{k}_{jl}\left(x_r,y_r,\tau_k,\varsigma_p\right)\phi_i\left(\tau_k,\varsigma_p\right)\omega_k\,\omega_p \qquad (61)$$

with $l=1,2,\dots,n$.


5. Nonlinear systems of two-dimensional integral equations

5.1 Fredholm type

In a nonlinear system, the unknown functions cannot be written as a linear combination of the unknown variables or functions appearing in them, so the matrix form of the Fredholm integral equations is defined as follows [27]:

$$U(x,y)=F(x,y)+\int_{\Omega}K\left(x,y,\theta,s,U(\theta,s)\right)d\theta\,ds,\quad (x,y)\in\Omega, \qquad (62)$$

where $U(x,y)$, $K$, and $F$ are defined as

$$U(x,y)=\left(u_1(x,y),\dots,u_n(x,y)\right)^{T},\quad K\left(x,y,\theta,s,U(\theta,s)\right)=\left[k_{ij}\left(x,y,\theta,s,U(\theta,s)\right)\right],\ i,j=1,2,\dots,n,\quad F=\left(f_1,f_2,\dots,f_n\right)^{T}.$$

As mentioned above, we assume that $\Omega=[a,b]\times[c,d]$.

To apply the MLS approximation method, we estimate the unknown functions $u_i$ by (12), so system (62) takes the following form:

$$\left(u_1^{h}(x,y),\dots,u_n^{h}(x,y)\right)^{T}=f(x,y)+\int_{c}^{d}\int_{a}^{b}K\left(x,y,t,s,U^{h}(t,s)\right)dt\,ds \qquad (63)$$

We consider an $m_1$-point numerical integration scheme over the domain under study, with nodes $\left(\tau_k,\varsigma_p\right)$ and weights $\omega_k$ and $\omega_p$, for the two-dimensional integrals in (63), i.e.,

$$\int_{c}^{d}\int_{a}^{b}K\left(x,y,t,s,U^{h}(t,s)\right)dt\,ds\approx\sum_{p=1}^{m_1}\sum_{k=1}^{m_1}K\left(x,y,\tau_k,\varsigma_p,U^{h}\left(\tau_k,\varsigma_p\right)\right)\omega_k\,\omega_p. \qquad (64)$$

Applying the numerical integration rule (64) to (63) yields

$$\left(u_1^{h}\left(x_r,y_r\right),\dots,u_n^{h}\left(x_r,y_r\right)\right)^{T}=f\left(x_r,y_r\right)+\sum_{p=1}^{m_1}\sum_{k=1}^{m_1}K\left(x_r,y_r,\tau_k,\varsigma_p,U^{h}\left(\tau_k,\varsigma_p\right)\right)\omega_k\,\omega_p. \qquad (65)$$

Finding the unknowns $U^h$ by solving the nonlinear system of algebraic Eq. (65) yields the following approximate solution at any point $(x,y)\in\Omega$:

$$u_j(x,y)\approx\sum_{i=1}^{N}\phi_i(x,y)\,u_{ji}. \qquad (66)$$
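The nonlinear system (65) must be solved iteratively. The following Python sketch applies simple Picard (successive substitution) iteration to a scalar 1-D analogue, $u(x)=f(x)+\tfrac{1}{2}\int_0^1 u(t)^2\,dt$ with exact solution $u(x)=x$, so $f(x)=x-\tfrac{1}{6}$ (the test problem and the fixed-point scheme are illustrative choices; a Newton-type solver could replace the loop for harder problems). The MLS ingredients are the same as in the linear case:

```python
import numpy as np

def mls_shape(x, nodes, delta, m=2):
    """1-D MLS shape functions, Eq. (13), with the quartic spline weight."""
    nodes = np.asarray(nodes, dtype=float)
    r = np.abs(x - nodes)
    s = np.clip(r / delta, 0.0, 1.0)
    w = np.where(r <= delta, 1 - 6 * s ** 2 + 8 * s ** 3 - 3 * s ** 4, 0.0)
    P = np.vander(nodes, m + 1, increasing=True)
    A = P.T @ (w[:, None] * P)
    p_x = np.vander(np.array([x]), m + 1, increasing=True)[0]
    return p_x @ np.linalg.solve(A, P.T * w)

# nonlinear test problem: u(x) = f(x) + (1/2) int_0^1 u(t)^2 dt, exact u(x) = x
f = lambda x: x - 1.0 / 6.0
nodes = np.linspace(0.0, 1.0, 15)
delta = 0.35
tq, wq = np.polynomial.legendre.leggauss(10)
tq, wq = 0.5 * (tq + 1.0), 0.5 * wq            # Gauss-Legendre rule on [0, 1]
Phi_n = np.array([mls_shape(x, nodes, delta) for x in nodes])  # collocation matrix
Phi_q = np.array([mls_shape(t, nodes, delta) for t in tq])
U = np.zeros(len(nodes))
for _ in range(200):                           # Picard iteration on system (65)
    integral = 0.5 * np.sum(wq * (Phi_q @ U) ** 2)
    U_new = np.linalg.solve(Phi_n, f(nodes) + integral)
    if np.max(np.abs(U_new - U)) < 1e-12:
        U = U_new
        break
    U = U_new
u_mid = mls_shape(0.5, nodes, delta) @ U       # evaluate the expansion, as in (66)
```

For this contractive test problem, the iteration converges linearly; for stiffer nonlinearities, the same discrete system (65) can be handed to a Newton solver instead.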

5.2 Volterra type

A two-dimensional nonlinear system of Volterra integral equations can be considered in the following form:

$$U(x,y)=F(x,y)+\int_{c}^{y}\int_{a}^{x}K\left(x,y,\theta,s,U(\theta,s)\right)d\theta\,ds,\quad (x,y)\in\Omega, \qquad (67)$$

where $K$ and $F$ are known functions and $U$, the vector of unknown functions, is defined as in (63) [27]. In order to apply the MLS approximation method, as in the linear case, the intervals $[a,x]$ and $[c,y]$ are transferred to the fixed intervals $[a,b]$ and $[c,d]$. Then $u_i^h$, $i=1,2,\dots,n$, from (12) replaces $u_i$ in $U=\left(u_1,u_2,\dots,u_n\right)$. So the nonlinear system (67) is converted to

$$\left(u_1^{h}(x,y),\dots,u_n^{h}(x,y)\right)^{T}=f(x,y)+\int_{c}^{d}\int_{a}^{b}\bar{K}\left(x,y,t,s,U^{h}(t,s)\right)dt\,ds \qquad (68)$$

where

$$\bar{K}\left(x,y,t,s,U^{h}(t,s)\right)=\frac{x-a}{b-a}\cdot\frac{y-c}{d-c}\,K\left(x,y,t,s,U^{h}(t,s)\right). \qquad (69)$$

Using the numerical integration technique (64), as applied in the Fredholm case, yields the same final nonlinear system (65), and the approximate solution $U$ is found by solving this system of equations.


6. Examples

In this section, the proposed method is applied to systems of two-dimensional linear and nonlinear integral equations [37] and to systems of differential equations. The results of the examples illustrate the effectiveness of the proposed method. The relative error at the collocation nodal points is used:

$$e_i=\frac{\left|u_i^{ex}(x,y)-u_i^{h}(x,y)\right|}{\left|u_i^{ex}(x,y)\right|}$$

where $u_i^h$ is the approximate solution of the exact solution $u_i^{ex}$. Linear and quadratic basis functions are utilized in the computations.

6.1 Example 1

As the first example, we consider the following system of nonlinear Fredholm integral equations [27].

$$u_1(x,y)=f_1(x,y)+\int_{\Omega}u_1(s,t)\,u_2(s,t)\,ds\,dt$$
$$u_2(x,y)=f_2(x,y)+\int_{\Omega}\left[u_1(s,t)\,u_2(s,t)+u_2^{2}(s,t)\right]ds\,dt$$

where $\Omega=[0,1]\times[0,1]$. The exact solutions are $U(x,y)=\left(x+y,\ x\right)$ and $F(x,y)=\left(x+y-\tfrac{7}{12},\ x-\tfrac{11}{12}\right)$. The distribution of the randomly scattered nodes is shown in Figure 1. Because of the irregular nodal point distribution, an unsuitable $\delta$ can lead to a singular matrix $A$; in this example, the adapted algorithm can tackle such problems. The MLS and MMLS shape functions are computed using Algorithm 1, so the exact value of the radius of the domain of influence is not important; in fact, it is only chosen as an initial value.

Figure 1.

The scattered data of Example 1.

The condition numbers of the matrix $A$ are shown in Figure 2, and the determinant of $A$ at sample points $p$ is shown in Figure 3, where the radius of the support domain for any nodal point starts from $\delta=0.05$. Note that there is a different radius of support domain for each nodal point; it may be increased by the algorithm due to an inappropriate distribution of scattered points. These variations are shown in Table 1 for sample points $(x,y)$, where Cond(A) is the condition number of $A$ in its initial case ($\delta=0.005$) and in its final case (New $\delta$), and N.O. iteration is the number of iterations of the algorithm needed to determine a suitable radius of support domain.

Figure 2.

The condition numbers of $A$ at a sample point $p$ with $\delta=0.05$ for Example 1, using Algorithm 1.

n | Sample point (x, y) | Cond(A) initial | Cond(A) final | New δ | N.O. iteration
1 | (0.2575, 0.4733) | 1.1005e+17 | 1.0117e+06 | 0.1297 | 11
2 | (0.2575, 0.6160) | 0 | 3.1111e+06 | 0.1569 | 13
3 | (0.2575, 0.9293) | 5.3204e+17 | 5.2445e+07 | 0.0974 | 8
4 | (0.2551, 0.6160) | 0 | 3.1115e+06 | 0.1569 | 13
5 | (0.6991, 0.2435) | 2.7166e+17 | 1.4652e+07 | 0.0550 | 2

Table 1.

Change of the radius and of the condition number of A at sample points (x, y) using Algorithm 1, for Example 1.

In the computations, $\delta=2r$, where $r=0.05$ and $c=\tfrac{2}{3}r$. Also, in MMLS, $\bar{w}_{\nu}=0.1$, $\nu=1,2,3$. It should be noted that these values were also selected experimentally. Relative errors of the MLS method for different numbers of Gauss-Legendre points at $m=1,2,3$ are compared in Table 2; investigating the proposed methods shows that increasing the number of numerical integration points does not guarantee that the error decreases.

N | m=1: u1 | m=1: u2 | m=2: u1 | m=2: u2 | m=3: u1 | m=3: u2
5 | 6.28e−4 | 2.7e−3 | 3.45e−7 | 1.15e−6 | 3.1e−6 | 2.8e−6
10 | 1.2e−4 | 5.9e−4 | 2.46e−7 | 8.41e−7 | 2.1e−7 | 5.41e−7
15 | 2.3e−4 | 5.9e−4 | 4.47e−8 | 1.34e−7 | 4.47e−8 | 2.15e−7
20 | 2.3e−4 | 5.14e−4 | 6.12e−7 | 2.46e−6 | 3.2e−7 | 1.9e−6
30 | 3.2e−4 | 5.9e−4 | 1.84e−6 | 6.64e−6 | 2.34e−6 | 7.14e−6

Table 2.

Maximum relative errors for different numbers of points in the Gauss-Legendre quadrature rule, δ=2r, for Example 1.

In Table 3, we can see that the CPU times for solving the nonlinear system (65) are much larger for the MMLS method, but the errors are much smaller (Figure 3).

N | MMLS: u1 | MMLS: u2 | MLS: u1 | MLS: u2 | CPU time MLS | CPU time MMLS
10 | 3.78e−10 | 8.13e−10 | 2.46e−7 | 8.41e−7 | 389.9574 | 2.9776e+3
15 | 1.35e−10 | 2.00e−11 | 4.47e−8 | 1.34e−7 | 410.9083 | 2.1109e+3
20 | 8.63e−11 | 2.51e−9 | 6.12e−7 | 2.46e−6 | 634.8373 | 3.2115e+3
30 | 3.99e−10 | 1.58e−9 | 1.84e−6 | 6.64e−6 | 1.0331e+3 | 2.5844e+3

Table 3.

Comparison of relative errors and CPU times of MLS and MMLS for different numbers of points in the Gauss-Legendre quadrature rule, m=2, for Example 1.

Figure 3.

The determinant of $A$ at a sample point $p$ with $\delta=0.05$ for Example 1, using Algorithm 1.

6.2 Example 2

Consider the system of linear Fredholm integral equations with [27]

$$K\left(x,y,t,s\right)=\begin{pmatrix}xt+st & t\\ ts & y+xt\end{pmatrix}, \qquad (70)$$

where $U(x,y)=\left(x+y,\ x\right)$ is the vector of exact solutions and the vector of known functions is $F(x,y)=\left(\tfrac{1}{6}x+y+\tfrac{1}{3},\ \tfrac{4}{3}x+\tfrac{1}{3}+\tfrac{1}{3}y\right)$. The domain of the problem is $\Omega=[0,1]\times[0,1]$. In this example, the initial value of the support-domain radius $r$ is set to $0.05$. Algorithm 1 is used to produce the shape functions at $m=1,2,3$. In the computations, we put $\bar{w}_{\nu}=0.1$, $\nu=1,2,3$.

Table 4 shows the relative errors and CPU times of MLS for different numbers of Gauss-Legendre points at $m=1,3$. As shown in Table 5, comparing the errors of the MMLS and MLS methods demonstrates the capability and accuracy of the proposed technique for solving systems of linear Fredholm integral equations, and indicates its advantage for these systems of equations.

| N | u1 (m=1) | u2 (m=1) | CPU (m=1) | u1 (m=3) | u2 (m=3) | CPU (m=3) |
|---|---|---|---|---|---|---|
| 5 | 2.94×10⁻⁴ | 6.02×10⁻⁴ | 108.957 | 2.08×10⁻⁴ | 2.91×10⁻⁴ | 152.1474 |
| 10 | 1.79×10⁻⁴ | 3.89×10⁻⁴ | 159.258 | 1.7×10⁻⁴ | 2.37×10⁻⁴ | 170.7879 |
| 15 | 1.86×10⁻⁴ | 3.989×10⁻⁴ | 198.135 | 2.47×10⁻⁴ | 3.30×10⁻⁴ | 252.75424 |
| 20 | 1.86×10⁻⁴ | 3.986×10⁻⁴ | 221.321 | 2.41×10⁻⁴ | 3.29×10⁻⁴ | 247.0093 |
| 30 | 1.85×10⁻⁴ | 3.987×10⁻⁴ | 308.987 | 2.25×10⁻⁴ | 3.37×10⁻⁴ | 314.8173 |
| 40 | 1.85×10⁻⁴ | 3.980×10⁻⁴ | 395.125 | 1.87×10⁻⁴ | 2.86×10⁻⁴ | 402.9594 |

Table 4.

Relative errors and CPU times of MLS for different numbers of Gauss-Legendre quadrature points at m = 1, 3, for Example 2.

| N | MLS u1 | MLS u2 | MLS CPU | MMLS u1 | MMLS u2 | MMLS CPU |
|---|---|---|---|---|---|---|
| 5 | 1.24×10⁻⁷ | 3.89×10⁻⁶ | 289.75 | 4.47×10⁻⁷ | 6.12×10⁻⁶ | 297.7 |
| 10 | 3.16×10⁻⁷ | 5.23×10⁻⁷ | 189.99 | 2.16×10⁻⁸ | 6.26×10⁻⁸ | 210.09 |
| 15 | 1.68×10⁻⁷ | 4.13×10⁻⁷ | 389.95 | 2.02×10⁻⁸ | 8.12×10⁻⁸ | 390.15 |
| 20 | 1.61×10⁻⁷ | 3.88×10⁻⁷ | 410.90 | 2.04×10⁻⁸ | 5.24×10⁻⁸ | 425.87 |
| 30 | 1.45×10⁻⁷ | 3.80×10⁻⁷ | 604.80 | 1.98×10⁻⁸ | 3.25×10⁻⁸ | 618.44 |
| 40 | 1.44×10⁻⁷ | 3.84×10⁻⁷ | 689.9574 | 1.04×10⁻⁸ | 6.87×10⁻⁸ | 697.04 |

Table 5.

Comparison of relative errors and CPU times of MLS and MMLS for different numbers of Gauss-Legendre quadrature points at m = 2, for Example 2.


6.3 Example 3

The third example is the system of linear Volterra-Fredholm integral equations from [27], with kernel

K(x, y, t, s) =
[ (x+y)exp(t+s)   (x+y)exp(t+s) ]
[ 1               1             ]   (71)

The domain is considered as Ω = {(x,y) ∈ ℝ²: 0 ≤ x ≤ 1, 0 ≤ y ≤ 1}, and the exact solutions are exp(x+y) and exp(x−y). It is important to note that the only linear transformation used in the experiment is (55), and from (40) the kernel becomes

K̃(x, y, t, s) = ((y − c)/(d − c)) K(x, y, t, s).   (72)
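The transformation (55) itself is defined earlier in the chapter; the underlying idea, mapping the variable Volterra limit onto a fixed quadrature interval, can be sketched generically. With the substitution s = yτ, the Jacobian y appears as a factor multiplying the integrand, which is the role the factor in (72) plays for a general interval:

```python
import numpy as np

def volterra_integral(g, y, n=10):
    """Approximate int_0^y g(s) ds with Gauss-Legendre on the fixed interval [0,1]."""
    t, w = np.polynomial.legendre.leggauss(n)  # rule on [-1, 1]
    tau = 0.5 * (t + 1.0)                      # map nodes to [0, 1]
    return y * np.sum(0.5 * w * g(y * tau))    # s = y*tau, ds = y dtau

val = volterra_integral(np.exp, 0.7)           # int_0^0.7 e^s ds = e^0.7 - 1
```

Because the integration interval is now fixed, the same quadrature nodes can be reused for every value of y.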

In this example, the effect on the error of increasing the radius of the domain of influence δᵢ in the MLS method has been investigated. Therefore, δᵢ was considered as follows:

δᵢ = λr,   i = 1, 2, …, N.   (73)

In this way, useful information is obtained about the performance of the proposed method. Investigating the results in Tables 6 and 7, we found that the relative error in MLS is also related to the radius of the domain of influence (i.e., δ = λr with λ = 3, 5, 7); however, λ cannot be greater than 7. For example, with λ = 10 (i.e., δ = 0.05λ) and a 10-point Gauss-Legendre quadrature rule, the relative errors are e(u1) = 3.3×10⁻² and e(u2) = 1.8×10⁻² at m = 1. Also, Table 8 shows that increasing the number of points in the numerical integration rule does not necessarily increase the accuracy of the method.

| r | u1 (λ=3) | u2 (λ=3) | u1 (λ=5) | u2 (λ=5) | u1 (λ=7) | u2 (λ=7) |
|---|---|---|---|---|---|---|
| 0.2 | 9×10⁻³ | 5×10⁻³ | 1.9×10⁻³ | 1×10⁻³ | 6.02×10⁻⁴ | 3.36×10⁻⁴ |
| 0.1 | 1.2×10⁻³ | 6.87×10⁻⁴ | 1.34×10⁻⁴ | 7.46×10⁻⁵ | 4.57×10⁻⁶ | 2.91×10⁻⁶ |
| 0.05 | 1.44×10⁻⁴ | 8.14×10⁻⁵ | 2.23×10⁻⁵ | 1.26×10⁻⁵ | 6.27×10⁻⁶ | 3.6×10⁻⁶ |

Table 6.

Maximum relative errors of MLS for 10 Gauss-Legendre points and different values of δ = λr at m = 2, for Example 3.

| r | u1 (λ=3) | u2 (λ=3) | u1 (λ=5) | u2 (λ=5) | u1 (λ=7) | u2 (λ=7) |
|---|---|---|---|---|---|---|
| 0.2 | 1.5×10⁻³ | 8.77×10⁻³ | 6.56×10⁻⁵ | 3.01×10⁻⁵ | 1.09×10⁻⁴ | 6.19×10⁻⁵ |
| 0.1 | 1.08×10⁻⁴ | 6.51×10⁻⁵ | 3.47×10⁻⁵ | 2.08×10⁻⁵ | 1.24×10⁻⁵ | 7.22×10⁻⁶ |
| 0.05 | 6.63×10⁻⁶ | 3.93×10⁻⁶ | 3.9×10⁻⁶ | 2.31×10⁻⁶ | 2.41×10⁻⁶ | 1.38×10⁻⁶ |

Table 7.

Maximum relative errors of MLS for 10 Gauss-Legendre points and different values of δ = λr at m = 3, for Example 3.

| N | u1 (λ=3) | u2 (λ=3) | u1 (λ=5) | u2 (λ=5) | u1 (λ=7) | u2 (λ=7) |
|---|---|---|---|---|---|---|
| 5 | 2.2×10⁻³ | 1.5×10⁻³ | 6.33×10⁻⁵ | 4.52×10⁻⁵ | 2.21×10⁻⁵ | 1.59×10⁻⁵ |
| 10 | 1.1×10⁻³ | 7.4×10⁻⁴ | 4.17×10⁻⁵ | 2.89×10⁻⁵ | 2.96×10⁻⁶ | 2.72×10⁻⁶ |
| 15 | 2×10⁻³ | 1.3×10⁻³ | 7.9×10⁻⁵ | 5.07×10⁻⁵ | 6.31×10⁻⁶ | 5.11×10⁻⁶ |
| 20 | 2.1×10⁻³ | 1.3×10⁻³ | 8.06×10⁻⁵ | 5.17×10⁻⁵ | 3.96×10⁻⁶ | 3.69×10⁻⁶ |
| 30 | 2.1×10⁻³ | 1.3×10⁻³ | 8.16×10⁻⁵ | 5.17×10⁻⁵ | 4.11×10⁻⁶ | 3.74×10⁻⁶ |

Table 8.

Maximum relative errors of MLS for different values of δ = λr, r = 0.05, and different numbers of Gauss-Legendre points at m = 1, using Algorithm 1, for Example 3.

Then the relative errors with the MMLS shape function described in Section 2.3 were obtained, using δ = 7r and w̄ν = 10, ν = 1, 2, 3, as the weights of the additional coefficients. Table 9 shows that, for this system of linear Volterra-Fredholm integral equations, the errors of the two methods (MLS and MMLS) are similar.

| r | MLS u1 | MLS u2 | MMLS u1 | MMLS u2 | CPU, MLS | CPU, MMLS |
|---|---|---|---|---|---|---|
| 0.2 | 1.09×10⁻⁴ | 6.19×10⁻⁵ | 5.52×10⁻⁴ | 3.08×10⁻⁴ | 4.0556 | 3.3893 |
| 0.1 | 1.24×10⁻⁵ | 1.08×10⁻⁶ | 1.25×10⁻⁵ | 7.36×10⁻⁶ | 38.0616 | 31.7945 |
| 0.05 | 1.91×10⁻⁶ | 1.085×10⁻⁶ | 1.54×10⁻⁵ | 8.77×10⁻⁶ | 496.1746 | 409.0342 |

Table 9.

Comparison of relative errors and CPU times of MLS and MMLS for w̄ν = 10, 10 Gauss-Legendre points, and δ = 7r at m = 2, for Example 3.

6.4 Example 4

Consider the following nonlinear stiff system of ODEs [38]:

u1′(t) = −1002 u1(t) + 1000 u2²(t),
u2′(t) = u1(t) − u2(t) − u2²(t),

with the initial conditions u1(0) = 1 and u2(0) = 1. The exact solution is

u1(t) = exp(−2t),   u2(t) = exp(−t).
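As a quick consistency check (not part of the chapter's experiments), substituting the stated exact solution into both equations should give zero residuals:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 101)
u1, u2 = np.exp(-2.0 * t), np.exp(-t)
du1, du2 = -2.0 * np.exp(-2.0 * t), -np.exp(-t)

res1 = du1 - (-1002.0 * u1 + 1000.0 * u2**2)   # residual of the first equation
res2 = du2 - (u1 - u2 - u2**2)                 # residual of the second equation
# both residuals vanish up to round-off
```

The stiffness is visible in the first equation: the coefficients −1002 and 1000 nearly cancel on the exact solution, leaving the slow mode −2 exp(−2t).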

In this numerical example, the two schemes are compared; as explained, the main task of the modified method is to tackle the singularity of the moment matrix. Table 10 presents the maximum relative errors by MLS on a set of evaluation points (with h = 0.1 and 0.02) and δ = 4h and 3h. Also, in Table 11, MLS and MMLS are compared at different numbers of nodes for h = 0.004 and δ = 5h and 8h (Tables 10-12).

| h | u1 (m=2, δ=4h) | u2 (m=2, δ=4h) | u1 (m=2, δ=3h) | u2 (m=2, δ=3h) |
|---|---|---|---|---|
| 0.1 | 5×10⁻³ | 4.1×10⁻⁴ | 8.85×10⁻⁴ | 2.2×10⁻³ |
| 0.02 | 5.8×10⁻² | 6.5×10⁻⁵ | 5.42×10⁻⁴ | 6.52×10⁻⁵ |

Table 10.

Maximum relative errors by MLS, Example 4.

| Type | u1 (m=3, δ=5h) | u2 (m=3, δ=5h) | u1 (m=3, δ=8h) | u2 (m=3, δ=8h) |
|---|---|---|---|---|
| MLS | 1.03×10⁰ | 0.98×10¹ | 1.01×10⁰ | 9.2×10⁰ |
| MMLS | 9.23×10⁻⁴ | 9.22×10⁻⁴ | 6.89×10⁻⁴ | 6.96×10⁻⁴ |

Table 11.

Maximum relative errors for h = 0.004 by MMLS and MLS, Example 4.

| Weight type | MLS u1 | MLS u2 | MLS CPU | MMLS u1 | MMLS u2 | MMLS CPU |
|---|---|---|---|---|---|---|
| Gauss | 3.06×10⁻³ | 9.92×10⁻⁵ | 61.1706 | 8.5×10⁻⁴ | 6.5×10⁻⁴ | 0.5598 |
| Spline | 5.06×10⁻⁴ | 5.3×10⁻⁴ | 64.5897 | 1.93×10⁻² | 4.23×10⁻⁴ | 0.6714 |
| RBF | 6.407×10⁻⁴ | 3.02×10⁻⁴ | 59.1790 | 6.9×10⁻³ | 6.9×10⁻³ | 0.5768 |

Table 12.

Maximum relative errors and CPU times by MLS and MMLS for t ∈ [0, 5], h = 0.004, Example 5.

6.5 Example 5

In this example, we consider U(t) = ( (1/47)(95 exp(−2t) − 48 exp(−96t)), (1/47)(48 exp(−96t) − exp(−2t)) )ᵀ as the exact solution and U(0) = (1, 1)ᵀ as the initial condition for the following system of ODEs:

x′(t) = −x(t) + 95 y(t),
y′(t) = −x(t) − 97 y(t).
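Again as a consistency check (not part of the chapter's experiments), the stated exact solution can be substituted into the system; the eigenvalues −2 and −96 of the coefficient matrix are what make the problem stiff:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 201)
x = (95.0 * np.exp(-2.0 * t) - 48.0 * np.exp(-96.0 * t)) / 47.0
y = (48.0 * np.exp(-96.0 * t) - np.exp(-2.0 * t)) / 47.0
dx = (-190.0 * np.exp(-2.0 * t) + 4608.0 * np.exp(-96.0 * t)) / 47.0
dy = (2.0 * np.exp(-2.0 * t) - 4608.0 * np.exp(-96.0 * t)) / 47.0

res1 = dx - (-x + 95.0 * y)   # first equation residual
res2 = dy - (-x - 97.0 * y)   # second equation residual
# x(0) = y(0) = 1 and both residuals vanish up to round-off
```

At t = 0 the two exponentials combine to (95 − 48)/47 = 1 and (48 − 1)/47 = 1, recovering the initial condition.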

Table 12 presents the maximum relative norm of the errors on a fine set of evaluation points (with h = 0.004) and δ = 5h for MLS and MMLS with different types of weight functions. As seen in this table, one major advantage of MMLS is that its computational time is much lower than that of MLS.


7. Conclusion

In this chapter, two meshless techniques, the moving least squares (MLS) and the modified moving least squares (MMLS) approximations, are applied to solve systems of functional equations. Comparing the results obtained by these methods with the exact solutions shows that the moving least squares methods are reliable and accurate for solving systems of functional equations. Meshless methods are free from mesh generation over the domain, which makes them suitable for studying real-world problems. Moreover, the modified algorithm changes how the support-domain radius is selected: the user can begin solving any problem with an arbitrary radius, and the proposed algorithm corrects it during execution.

References

  1. 1. Scudo FM. Vito Volterra and theoretical ecology. Theoretical Population Biology. 1971;2:1-23
  2. 2. Small RD. Population growth in a closed model. In: Mathematical Modelling: Classroom Notes in Applied Mathematics. Philadelphia: SIAM; 1989
  3. 3. TeBeest KG. Numerical and analytical solutions of Volterra’s population model. SIAM Review. 1997;39:484-493
  4. 4. Wazwaz AM. Partial Differential Equations and Solitary Waves Theory. Beijing and Berlin: HEP and Springer; 2009
  5. 5. Mckee S, Tang T, Diogo T. An Euler-type method for two-dimensional Volterra integral equations of the first kind. IMA Journal of Numerical Analysis. 2000;20:423-440
  6. 6. Hanson R, Phillips J. Numerical solution of two-dimensional integral equations using linear elements. SIAM Journal on Numerical Analysis. 1978;15:113-121
  7. 7. Babolian E, Masouri Z. Direct method to solve Volterra integral equation of the first kind using operational matrix with block-pulse functions. Journal of Computational and Applied Mathematics. 2008;220:51-57
  8. 8. Beltyukov BA, Kuznechichina LN. A Runge-Kutta method for the solution of two-dimensional nonlinear Volterra integral equations. Differential Equations. 1976;12:1169-1173
  9. 9. Masouri Z, Hatamzadeh-Varmazyar S, Babolian E. Numerical method for solving system of Fredholm integral equations using Chebyshev cardinal functions. Advanced Computational Techniques in Electromagnetics. 2014:1-13
  10. 10. Jafarian A, Nia SAM, Golmankhandh AK, Baleanu D. Numerical solution of linear integral equations system using the Bernstein collocation method. Advances in Difference Equations. 2013:1-23
  11. 11. Singh P. A note on the solution of two-dimensional Volterra integral equations by spline. Indian Journal of Mathematics. 1979;18:61-64
  12. 12. Assari P, Adibi H, Dehghan M. A meshless method based on the moving least squares (MLS) approximation for the numerical solution of two-dimensional nonlinear integral equations of the second kind on non-rectangular domains. Numerical Algorithm. 2014;67:423-455
  13. 13. Brunner H, Kauthen JP. The numerical solution of two-dimensional Volterra integral equations by collocation and iterated collocation. IMA Journal of Numerical Analysis. 1989;9:47-59
  14. 14. Wazwaz AM. Linear and Nonlinear Integral Equations: Methods and Applications. Berlin: Springer; 2011
  15. 15. Dehghan M, Abbaszadeha M, Mohebbib A. The numerical solution of the two-dimensional sinh-Gordon equation via three meshless methods. Engineering Analysis with Boundary Elements. 2015;51:220-235
  16. 16. Mirzaei D, Schaback R, Dehghan M. On generalized moving least squares and diffuse derivatives. IMA Journal of Numerical Analysis. 2012;32:983-1000
  17. 17. Salehi R, Dehghan M. A generalized moving least square reproducing kernel method. Journal of Computational and Applied Mathematics. 2013;249:120-132
  18. 18. Saadatmandi A, Dehghan M. A collocation method for solving Abel's integral equations of first and second kinds. Zeitschrift für Naturforschung. 2008;63a(10):752-756
  19. 19. Dehghan M, Mirzaei D. Numerical solution to the unsteady two-dimensional Schrodinger equation using meshless local boundary integral equation method. International Journal for Numerical Methods in Engineering. 2008;76:501-520
  20. 20. Mukherjee YX, Mukherjee S. The boundary node method for potential problems. International Journal for Numerical Methods in Engineering. 1997;40:797-815
  21. 21. Salehi R, Dehghan M. A moving least square reproducing polynomial meshless method. Applied Numerical Mathematics. 2013;69:34-58
  22. 22. Matinfar M, Pourabd M. Moving least square for systems of integral equations. Applied Mathematics and Computation. 2015;270:879-889
  23. 23. Li S, Liu WK. Meshfree Particle Methods. Berlin: Springer-Verlag; 2004
  24. 24. Liu GR, Gu YT. A matrix triangularization algorithm for the polynomial point interpolation method. Computer Methods in Applied Mechanics and Engineering. 2003;192:2269-2295
  25. 25. Belinha J. Meshless Methods in Biomechanics. Springer; 2014
  26. 26. Chen S. Building interpolating and approximating implicit surfaces using moving least squares [PhD thesis]. EECS-2007-14. Berkeley: EECS Department, University of California; 2007
  27. 27. Matinfar M, Pourabd M. Modified moving least squares method for two-dimensional linear and nonlinear systems of integral equations. Computational and Applied Mathematics. 2018;37:5857-5875
  28. 28. Joldesa GR, Chowdhurya HA, Witteka A, Doylea B, Miller K. Modified moving least squares with polynomial bases for scattered data approximation. Applied Mathematics and Computation. 2015;266:893-902
  29. 29. Lancaster P, Salkauskas K. Surfaces generated by moving least squares methods. Mathematics of Computation. 1981;37:141-158
  30. 30. Wendland H. Local polynomial reproduction, and moving least squares approximation. IMA Journal of Numerical Analysis. 2001;21:285-300
  31. 31. Zuppa C. Error estimates for moving least square approximations. Bulletin of the Brazilian Mathematical Society, New Series. 2001;34:231-249
  32. 32. Liu GR. Mesh Free Methods: Moving beyond the Finite Element Method. Boca Raton: CRC Press; 2003
  33. 33. Zuppa C. Error estimates for moving least square approximations. Bulletin of the Brazilian Mathematical Society, New Series. 2003;34(2):231-249
  34. 34. Zuppa C. Good quality point sets error estimates for moving least square approximations. Applied Numerical Mathematics. 2003;47:575-585
  35. 35. Mirzaei D, Dehghan M. A meshless based method for solution of integral equations. Applied Numerical Mathematics. 2010;60:245-262
  36. 36. McLain D. Two dimensional interpolation from random data. Computer Journal. 1976;19:178-181
  37. 37. Pourabd M. Meshless method based moving least squares for solving systems of integral equations with investigating the computational complexity of algorithms [PhD thesis]. IRANDOC-2408208. Iran: Department of Mathematics, University of Mazandaran; 2017
  38. 38. Biazar J, Asadi MA, Salehi F. Rational homotopy perturbation method for solving stiff systems of ordinary differential equations. Applied Mathematical Modelling. 2015;39:1291-1299
