
A New Monte Carlo Approach for the Solution of Semi-Linear Neumann Boundary Value Problem

Written By

Abdujabar Rasulov

Submitted: 30 June 2023 Reviewed: 13 July 2023 Published: 06 February 2024

DOI: 10.5772/intechopen.1002452


From the Edited Volume

Recent Advances in Monte Carlo Methods [Working Title]

Abdo Abou Jaoudé


Abstract

In this paper we consider the Neumann boundary-value problem for the Laplace operator with a polynomial nonlinearity on the right-hand side. We utilize Green's formula and an elliptic mean-value theorem. This allows us to derive a probabilistic representation of the solution as a special integral equation that equates the value of the function u(x) at the point x with integrals of u over the domain D and over its boundary Γ. Based on this probabilistic representation, a Markov branching process is constructed, and an unbiased estimator of the solution of the nonlinear problem is formed by taking the mathematical expectation over the trajectories of this branching process. The unbiased estimator we derive has finite variance. The complexity of the proposed algorithms is investigated, and ways of decreasing the computational work are proposed. Some particular cases are considered in detail. The effectiveness of two estimators, "by absorption" and "by collision", is studied in the nonlinear case.

Keywords

  • Neumann boundary value problem
  • Monte Carlo method
  • Markov branching process
  • martingale
  • unbiased estimator

1. Introduction

In this work, we propose new probabilistic algorithms for the solution of the Neumann boundary-value problem for the Laplace operator with a polynomial nonlinearity on the right-hand side. The complexity of the proposed algorithms is investigated, and ways of decreasing the computational work are proposed. For the solution of the semi-linear Neumann problem, a new and effective Monte Carlo algorithm is derived. Some particular cases are considered in detail. The effectiveness of two estimators, "by absorption" and "by collision", is studied in the nonlinear case.


2. Solutions of semi-linear Neumann problems

Assume D is a bounded convex domain in $\mathbb{R}^n$, $n \ge 3$, with a regular smooth boundary $\Gamma$, and let $x = (x_1, \dots, x_n) \in D$. Consider the following boundary value problem:

$$Lu(x) \equiv \Delta u(x) - g(x)\,u(x) = -f(x, u(x)), \qquad \left.\frac{\partial u(x)}{\partial n(x)}\right|_{x \in \Gamma} = 0, \tag{1}$$

where $n(x)$ is the external normal to the boundary $\Gamma$ at the point $x$, the function $g(x) > 0$, and $g(x) \in C(\bar D)$. We define the class of functions

$$v = \left\{\, \omega(x) :\ \omega(x) = B\,M(x, 0) + \psi(x) \ge 0 \ \text{in } D \,\right\}. \tag{2}$$

Here $M(x,y)$ is the Levy function for the operator $L$ (see [1]), $\psi(x)$ is a continuous function with continuous second-order derivatives in $D \cup \Gamma$, and $B$ is a positive constant. It is known that this class of functions is not empty.

Suppose that the function $f(x, \omega(x))$ satisfies the following three conditions:

  a. $f(x, \omega)/\omega$ is an increasing function of $\omega$ for $\omega \ge 0$;

  b. there exists $k$ such that $f(x, \omega) = o(r^{k-2})$ and $\partial f(x, \omega)/\partial \omega = o(r^{k-1})$, where $r$ is the distance from the origin to the point $x$;

  c. the limit $\lim_{x \to 0} \omega(x)\, h^{-1}(x, y) = B$ holds, where $y = (y_1, \dots, y_n) \in D$ and

$$h(x,y) = \frac{1}{(2-n)\,\omega_n} \left( \sum_{i=1}^{n} (x_i - y_i)^2 \right)^{(2-n)/2}, \tag{3}$$

and $\omega_n$ is the surface area of the unit sphere in $\mathbb{R}^n$.

The next assertion is then true.

Lemma 1. A unique positive solution of (1) exists within the class of functions $v$ if the above conditions (a), (b), and (c) are satisfied.

This assertion follows from the results of [2]; therefore, the proof is omitted. Thus, there exists a unique function $\omega(x) = u(x) \ge 0$ (from Lemma 1) that satisfies conditions (a), (b), and (c) and is the solution of problem (1).

Let us construct a computational scheme for the solution of problem (1) using the algorithms proposed in [3]. Suppose $\Gamma$ is a sufficiently smooth boundary and the function $v(x) \in C^2(D) \cap C^1(\bar D)$. Then we can use Green's identity to obtain

$$\int_D \left[ v(y)\,L u(y) - u(y)\,L v(y) \right] dy = \int_\Gamma \left[ u(y)\,\frac{\partial v(y)}{\partial n(y)} - v(y)\,\frac{\partial u(y)}{\partial n(y)} \right] d\Gamma_y. \tag{4}$$

Let $c^2 > \max_{x \in \bar D} g(x)$. Then, using formula (4), we can derive the following integral equation for $u(x)$:

$$u(x) = \frac{1}{(n-2)\,\omega_n} \int_D \frac{e^{-c\rho}}{\rho^{n-2}}\, f(y, u(y))\, dy + \frac{1}{\omega_n} \int_D \frac{A_2(\rho) - g(y)\,\rho\, e^{-c\rho}/(n-2)}{\rho^{n-1}}\, u(y)\, dy + \frac{1}{\omega_n} \int_\Gamma \frac{A_1(\rho)\,\cos\varphi(x,y)}{\rho^{n-1}}\, u(y)\, d\Gamma_y. \tag{5}$$

Here $\varphi(x,y)$ is the angle between the normal $n(y)$ and the vector $y - x$, $\rho = |x - y|$, and

$$A_1(\rho) = \left(1 + \frac{c\rho}{n-2}\right) e^{-c\rho}, \qquad A_2(\rho) = \frac{c^2\rho + c\,(n-3)}{n-2}\, e^{-c\rho}. \tag{6}$$

It is easy to show that $A_1'(\rho) = -A_2(\rho)$ and $A_1(0) = 1$. Thus,

$$\int_0^{\rho} A_2(r)\, dr = 1 - A_1(\rho). \tag{7}$$

Furthermore, if $x \in \Gamma$, then $\omega_n$ should be replaced by $\omega_n/2$.

The linear case, when $f(x, u(x)) = f(x)$, was considered by A. Sipin. In his work [4], sequences of unbiased estimators were developed and the question of simulating the required special densities was discussed. Using the results of [4], we will present an algorithm for constructing unbiased estimators for the solution of problem (1).

For $u(x)$, after transforming Eq. (5) as discussed below, we obtain the following integral equation

$$u(x) = \int_{\bar D} \left[1 - \pi(x,y)\right] \Pi(x,y)\, u(y)\, \mu(dy) + \frac{1 + \chi_\Gamma(x)}{(n-2)\,\omega_n} \int_D f(y, u(y))\, \frac{e^{-c\rho}}{\rho^{n-2}}\, dy. \tag{8}$$

Here

$$\chi_\Gamma(x) = \begin{cases} 0, & x \notin \Gamma, \\ 1, & x \in \Gamma, \end{cases} \qquad \pi(x,y) = \begin{cases} 0, & y \in \Gamma, \\ \dfrac{g(y)\,\rho\, e^{-c\rho}}{(n-2)\,A_2(\rho)}, & y \in D, \end{cases}$$
$$\Pi(x,y) = \frac{1 + \chi_\Gamma(x)}{\omega_n} \cdot \begin{cases} A_1(\rho)\cos\varphi(x,y)/\rho^{n-1}, & y \in \Gamma, \\ A_2(\rho)/\rho^{n-1}, & y \in D. \end{cases} \tag{9}$$

The measure $\mu$ is defined on the $\sigma$-algebra of Borel subsets of $\bar D$ by the equality $\mu(E) = \lambda(E) + S(E \cap \Gamma)$, where $\lambda$ is the Lebesgue measure in $\mathbb{R}^n$ and $S$ is the surface measure on $\Gamma$. It is easy to see that $\Pi(x,y) \ge 0$. Because the domain $D$ is convex, equality (7) implies that $\Pi(x,y)$ is a transition probability density. Assume $g(x) \ge 0$ and $c^2 > \max_{x \in \bar D} g(x)$; then $0 \le \pi(x,y) \le 1$. Let us now consider

$$f(x, u(x)) = \sum_{i=0}^{m} b_i(x)\, u^i(x), \qquad x \in \bar D, \tag{10}$$

where $b_i(x) \ge 0$. We choose $\alpha_i$, $i = 0, \dots, m+1$, to satisfy the condition:

$$\sum_{i=0}^{m+1} \alpha_i = 1, \qquad \sum_{i=1}^{m} i\,\alpha_i + \alpha_{m+1} \le 1. \tag{11}$$

For example, with $m = 2$ one may take $\alpha_0 = 0.4$, $\alpha_1 = 0.1$, $\alpha_2 = 0.2$, $\alpha_3 = 0.3$: then $\sum_i \alpha_i = 1$ and $\sum_{i=1}^{m} i\,\alpha_i + \alpha_{m+1} = 0.5 + 0.3 = 0.8 \le 1$. The iterative method for the solution of Eq. (8) converges only for small coefficients $b_i(x)$. Therefore, we assume that the coefficients $b_i(x)$ and the parameters $\alpha_i$ satisfy the following condition for each $i$, $0 \le i \le m$:

$$\|b_i(x)\|_{C(\bar D)} \le 2\,\alpha_i\,(n-2)/d^2, \tag{12}$$

where $d$ is the diameter of the domain $D$, and $\alpha_i \ne 0$, $i = 0, \dots, m$. Using (10) and (11), we can transform Eq. (8) as follows:

$$u(x) = \alpha_{m+1} \int_{\bar D} K_{m+1}(x,y)\, \Pi(x,y)\, u(y)\, \mu(dy) + \frac{1+\chi_\Gamma(x)}{(n-2)\,\omega_n} \sum_{i=1}^{m} \alpha_i \int_D \frac{K_i(x,y)}{\rho^{n-2}}\, u^i(y)\, dy + \frac{1+\chi_\Gamma(x)}{(n-2)\,\omega_n}\, \alpha_0 \int_D \frac{K_0(x,y)}{\rho^{n-2}}\, dy, \tag{13}$$

where

$$K_{m+1}(x,y) = \frac{1 - \pi(x,y)}{\alpha_{m+1}}, \qquad K_0(x,y) = \frac{b_0(y)\, e^{-c\rho}}{\alpha_0}, \qquad K_i(x,y) = \frac{b_i(y)\, e^{-c\rho}}{\alpha_i}, \quad i = 1, \dots, m. \tag{14}$$

We can apply a method based on branching random processes to Eq. (13), using the approach developed for integral equations with polynomial nonlinearities.

We define a branching Markov process in $\bar D$ and a sequence of unbiased estimators on the trajectories of the process as follows. Assume that at the initial time there is only one particle, located at $x$. With probability $\alpha_0$ the particle moves to the point $y = x + \rho\bar\omega \in D$, where $\bar\omega$ is an isotropic vector in $\mathbb{R}^n$ (if $x \in D$) and $\rho$ is distributed on $(0, d)$ with density $2\rho/d^2$. If $x \in \Gamma$, then $\bar\omega$ is an isotropic vector in the half-space $G$ in which $D$ lies. At the point $y$, the particle is absorbed. In this case, the estimator is

$$\xi_1(x) = \frac{d^2}{2\,(n-2)}\, e^{-c\rho}\, \tilde b_0(x + \rho\bar\omega)\big/\alpha_0, \tag{15}$$

where

$$\tilde b_0(x) = \begin{cases} b_0(x), & x \in \bar D, \\ 0, & x \notin \bar D. \end{cases} \tag{16}$$

With probability $\alpha_i$, $i = 1, \dots, m$, the particle at the point $x$ moves to $y = x + \rho\bar\omega$ (here $\rho$ is as above), and at $y$ we generate $i$ particles.

In this case, the estimator will be

$$\xi_1(x) = \frac{d^2}{2\,(n-2)}\, e^{-c\rho}\, \tilde b_i(x + \rho\bar\omega)\, u^i(x + \rho\bar\omega)\big/\alpha_i, \tag{17}$$

here

$$\tilde b_i(x) = \begin{cases} b_i(x), & x \in \bar D, \\ 0, & x \notin \bar D. \end{cases} \tag{18}$$

With probability $\alpha_{m+1}$, the particle moves to the point $y$, which is distributed with density $\Pi(x,y)$, and generates one particle there. The estimator is thus

$$\xi_1(x) = \frac{1 - \pi(x,y)}{\alpha_{m+1}}\, u(y). \tag{19}$$

The algorithm for simulating a random point with density $\Pi(x,y)$ is the following [3]:

  1. Let $n \ge 0$ and $x_n$ be known.

  2. If $x_n \in \Gamma$, then let $P(x_n)$ denote the half-space in which $D$ lies, bounded by the tangent plane to $D$ at the point $x_n$.

  3. Let $\omega$ be an isotropic vector in $P(x_n)$ if $x_n \in \Gamma$, and an isotropic vector in $\mathbb{R}^n$ if $x_n \in D$.

  4. Let $\rho = \rho(x_n, \omega) = \sup\{t > 0 : x_n + t\omega \in D\}$ be the length of the ray that emanates from the point $x_n$ in the direction $\omega$ and lies in $D$; $x_n + \rho\omega$ is always a point on the boundary.

  5. The Markov chain jumps from $x_n$ to $x_{n+1} = \xi$, where $\xi = x_n + \rho\omega$ with probability $A_1(\rho)$; with probability $1 - A_1(\rho)$, the point $\xi$ is distributed on the segment $(x_n, x_n + \rho\omega)$ with probability density $p_1(r) = A_2(r)/(1 - A_1(\rho))$.

The simulation method defined above gives a vector $\xi$ distributed with density $\Pi(x, y)$.
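As an illustration, a minimal Python sketch of one such transition is given below for the model case where $D$ is the unit ball in $\mathbb{R}^n$ (this choice is ours, made only so that the ray length $\rho(x_n, \omega)$ has a closed form). The functions follow the definitions in (6), the bisection inverts the distribution function $(1 - A_1(r))/(1 - A_1(\rho))$ obtained from (7), and all parameter values are illustrative.

```python
import numpy as np

def A1(r, c, n):
    # A_1 from Eq. (6)
    return (1.0 + c * r / (n - 2)) * np.exp(-c * r)

def isotropic_dir(x, on_boundary, n, rng):
    # isotropic direction; for a boundary point keep it in the half-space P(x)
    w = rng.normal(size=n)
    w /= np.linalg.norm(w)
    if on_boundary and np.dot(w, x) > 0:   # outward normal of the unit ball is x
        w = -w
    return w

def sample_pi_transition(x, c, n, on_boundary, rng):
    omega = isotropic_dir(x, on_boundary, n, rng)
    t = np.dot(x, omega)
    rho = -t + np.sqrt(t * t + 1.0 - np.dot(x, x))   # ray length to the sphere |y| = 1
    if rng.random() < A1(rho, c, n):
        return x + rho * omega, True                 # jump straight to the boundary
    # otherwise the jump length r has density A2(r)/(1 - A1(rho)) on (0, rho);
    # by Eq. (7) its CDF is (1 - A1(r))/(1 - A1(rho)), inverted here by bisection
    u = rng.random() * (1.0 - A1(rho, c, n))
    lo, hi = 0.0, rho
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if 1.0 - A1(mid, c, n) < u else (lo, mid)
    return x + 0.5 * (lo + hi) * omega, False

rng = np.random.default_rng(0)
y, on_gamma = sample_pi_transition(np.zeros(3), c=2.0, n=3, on_boundary=False, rng=rng)
```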

It is not difficult to show (see Ref. [3]) that $\xi_t(x)$ is a martingale relative to the family of $\sigma$-algebras $\{F_t\}_{t=1}^{\infty}$, where $F_t$ is generated by the process up to time $t$. Because $\xi_t(x) \ge 0$ (from Lemma 1), $\xi_t(x)$ is a supermartingale, and therefore (see Ref. [5], p. 112) it tends with probability one to an integrable random variable $\xi(x)$. To show that $E_x\,\xi(x) < +\infty$ and $\xi_n(x) = E_x\left[\xi(x) \mid F_n\right]$, we must show that $\xi_t(x)$ is uniformly integrable. We will show that our estimator is bounded and that $\xi(x)$ is an unbiased estimator for $u(x)$. It is known [5] that if $\sup_t E_x\,\xi_t^2(x) < +\infty$, then $\xi_t(x)$ is uniformly integrable.

Theorem 1. If condition (12) is satisfied and

$$\alpha_{m+1} \ge \max_{x, y \in D} \frac{\left(c^2 - g(y)\right)\rho + c\,(n-3)}{c^2\rho + c\,(n-3)}, \tag{20}$$

then $\xi_t(x)$ is a supermartingale and tends to an integrable random variable $\xi(x)$ almost surely.

Proof: From the definition of our estimator, it is easy to see that if we choose $\alpha_{m+1}$ to satisfy the inequality

$$\alpha_{m+1} \ge \max_{x, y \in D} \frac{\left(c^2 - g(y)\right)\rho + c\,(n-3)}{c^2\rho + c\,(n-3)}, \tag{21}$$

then $E_x\,\xi_t^2(x) < +\infty$ follows from condition (12). From (21) one can see that the inequality $0 < \alpha_{m+1} \le 1$ holds. The parameter $c$ can be chosen so that the right-hand side of (21) is less than one, because $g(y) > 0$. The remaining $\alpha_i$ can then be chosen different from zero.

Thus, $\xi(x)$ is an unbiased estimator for $u(x)$ at $x \in D$. This estimator is amenable to practical computation: unlike the estimator used for the nonlinear Dirichlet problem, it is not necessary to create an $\varepsilon$-neighborhood of the boundary.
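Continuing the sketch above (assuming the same file, so that isotropic_dir and sample_pi_transition are in scope), the branching estimator of this section can be realized recursively as follows. The names g_fun and b_funs are hypothetical stand-ins for $g(y)$ and $b_i(y)$ with $m = 1$, and the $\alpha_i$ are chosen to satisfy (11), (12), and (21) for $c = 2$, $n = 3$, $d = 2$ on the unit ball; none of these numerical values come from the text.

```python
def A2(r, c, n):
    # A_2 from Eq. (6)
    return (c * c * r + c * (n - 3)) * np.exp(-c * r) / (n - 2)

def g_fun(y):                                 # stand-in for g(y), 0 < g < c^2
    return 1.0

b_funs = [lambda y: 0.02, lambda y: 0.01]     # stand-ins for b_0(y), b_1(y); m = 1
alpha = [0.05, 0.05, 0.90]                    # alpha_0, alpha_1, alpha_{m+1}, cf. (11), (12)

def xi_estimator(x, on_boundary, c, n, d, rng):
    i = rng.choice(len(alpha), p=alpha)
    if i < len(alpha) - 1:
        # move to y = x + r*omega with r ~ density 2r/d^2 on (0, d)
        omega = isotropic_dir(x, on_boundary, n, rng)
        r = d * np.sqrt(rng.random())
        y = x + r * omega
        b_val = b_funs[i](y) if np.dot(y, y) <= 1.0 else 0.0   # tilde-b vanishes outside D
        w = d * d * np.exp(-c * r) * b_val / (2.0 * (n - 2) * alpha[i])
        if i == 0 or w == 0.0:
            return w                          # absorption, Eq. (15)
        return w * np.prod([xi_estimator(y, False, c, n, d, rng)
                            for _ in range(i)])                # i offspring, Eq. (17)
    # with probability alpha_{m+1}: one particle moved with density Pi, Eq. (19)
    y, on_gamma = sample_pi_transition(x, c, n, on_boundary, rng)
    rho = float(np.linalg.norm(y - x))
    pi = 0.0 if on_gamma else g_fun(y) * rho * np.exp(-c * rho) / ((n - 2) * A2(rho, c, n))
    return (1.0 - pi) / alpha[-1] * xi_estimator(y, on_gamma, c, n, d, rng)

u0 = np.mean([xi_estimator(np.zeros(3), False, 2.0, 3, 2.0, rng) for _ in range(10**4)])
```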


3. The case when n = 3 and g(x) = c^2

Let $D$ be a bounded domain in $\mathbb{R}^3$ with a regular smooth boundary $\Gamma$. Consider the following nonlinear problem:

$$\Delta u(x) - c^2 u(x) = -b(x)\,u^2(x) - f(x), \qquad \left.\frac{\partial u(x)}{\partial n(x)}\right|_{\Gamma} = 0. \tag{22}$$

Since $A_2(\rho) = c^2\rho\, e^{-c\rho}$ and $g(y) \equiv c^2$, the volume integral in (5) containing $u(y)$ vanishes. In this particular case, we obtain the following integral equation:

$$u(x) = \frac{1+\chi_\Gamma(x)}{4\pi} \int_D \frac{e^{-c\rho}}{\rho}\, b(y)\, u^2(y)\, dy + \frac{1+\chi_\Gamma(x)}{4\pi} \int_D \frac{e^{-c\rho}}{\rho}\, f(y)\, dy + \frac{1+\chi_\Gamma(x)}{4\pi} \int_\Gamma \frac{(1 + c\rho)\, e^{-c\rho}\,\cos\varphi(x,y)}{\rho^2}\, u(y)\, d\Gamma_y. \tag{23}$$

We denote
$$\pi(x,y) = 1 - (1 + c\rho)\, e^{-c\rho}, \qquad p_1(x,y) = \frac{1 + \chi_\Gamma(x)}{4\pi\rho^2}\, \cos\varphi(x,y),$$
and transform the above integral equation to get

$$u(x) = \frac{\left(1+\chi_\Gamma(x)\right) d^2}{2} \int_D \frac{2\rho\, e^{-c\rho}}{4\pi d^2 \rho^2}\, b(y)\, u^2(y)\, dy + \frac{\left(1+\chi_\Gamma(x)\right) d^2}{2} \int_D \frac{2\rho\, e^{-c\rho}}{4\pi d^2 \rho^2}\, f(y)\, dy + \int_\Gamma \left[1 - \pi(x,y)\right] p_1(x,y)\, u(y)\, d\Gamma_y. \tag{24}$$

From Eq. (24) it is clear how to construct the branching Markov process and the sequence of unbiased estimators analogous to those determined above [6, 7]. The difference here is that we do not introduce the artificial parameters $\alpha_i$. When the parameters $\alpha_i$ are introduced, the estimator changes, and if one is not careful the variance of the estimators may become unbounded.

Assume $b(x) \ge 0$ and $f(x) \ge 0$ are such that

$$\max\left\{ \|b(x)\|_{C(\bar D)},\ \|f(x)\|_{C(\bar D)} \right\} \le \min_{x, y \in D} \pi(x,y)/d^2. \tag{25}$$

Here d is the diameter of the domain D.

Now we will determine the branching Markov process in $\bar D$ and the corresponding sequence of unbiased estimators. Assume that at the initial time there is only one particle at $x$. The point $y$ is distributed on $\Gamma$ with density $p_1(x,y)$. With probability $1 - \pi(x,y)$, we generate a particle at $y$; in this case, the estimator is $\xi_1(x) = u(y)$. With probability $\pi(x,y)/2$, the particle moves from $x$ to the point $y_1 = x + \rho\bar\omega$ and is absorbed. Here $\bar\omega$ is an isotropic vector in $\mathbb{R}^3$, and $\rho$ is distributed on $(0, d)$ with density $2\rho/d^2$. In this case, the estimator will be

$$\xi_1(x) = \frac{d^2\, e^{-c\rho}}{\pi(x,y)}\, \tilde f(y_1), \tag{26}$$

where

$$\tilde f(y_1) = \begin{cases} f(y_1), & y_1 \in \bar D, \\ 0, & y_1 \notin \bar D. \end{cases} \tag{27}$$

Two particles are created at $y_1$ with probability $\pi(x,y)/2$. In this case, the estimator is

$$\xi_1(x) = \frac{d^2\, e^{-c\rho}\, \tilde b(y_1)}{\pi(x,y)}\, u^2(y_1), \tag{28}$$

where

$$\tilde b(y_1) = \begin{cases} b(y_1), & y_1 \in \bar D, \\ 0, & y_1 \notin \bar D. \end{cases} \tag{29}$$

New particles behave in a manner similar to their parent. We set $\xi_t(x) = \xi_{t-1}(x)$ if the process terminates before time $t$; otherwise $\xi_t(x)$ is obtained from $\xi_{t-1}(x)$ by replacing $u(y)$ in the estimators by $\xi_1(y)$. Obviously, $\xi_t(x)$ is a martingale relative to the family of $\sigma$-algebras $\{F_t\}_{t=1}^{\infty}$, where $F_t$ is generated by the process up to time $t$.

From condition (25) it is easy to see that $E_x\,\xi_t^2(x) < +\infty$, so that $\xi_t(x)$ is a uniformly integrable martingale. Therefore $\xi_t(x)$ tends to $\xi(x)$ almost surely, $E_x\,\xi(x) < +\infty$, and $\xi_n(x) = E_x\left[\xi(x) \mid F_n\right]$. Thus, $\xi(x)$ is an unbiased estimator for the solution of problem (22) at the point $x \in D$.
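A minimal sketch of this estimator, again for the unit ball in $\mathbb{R}^3$ as an assumed model domain, follows. The names b_coef and f_src are hypothetical stand-ins for $b(x)$ and $f(x)$, and the value of $c$ is illustrative; the boundary chain of weight-one moves is written as a loop, while branching into two particles is realized by two independent recursive calls.

```python
import numpy as np

rng = np.random.default_rng(1)
C = 2.0         # the parameter c (assumed value)
D_DIAM = 2.0    # diameter d of the unit ball

def b_coef(y):  # stand-in for b(y) >= 0
    return 0.05

def f_src(y):   # stand-in for f(y) >= 0
    return 0.1

def isotropic(x, on_boundary):
    w = rng.normal(size=3)
    w /= np.linalg.norm(w)
    if on_boundary and np.dot(w, x) > 0:     # half-space containing D
        w = -w
    return w

def xi(x, on_boundary):
    """One realization of the branching estimator for u(x), cf. Eq. (24)."""
    while True:
        omega = isotropic(x, on_boundary)
        t = np.dot(x, omega)
        rho = -t + np.sqrt(t * t + 1.0 - np.dot(x, x))
        y = x + rho * omega                  # boundary point distributed via p1(x, .)
        pi = 1.0 - (1.0 + C * rho) * np.exp(-C * rho)
        v = rng.random()
        if v < 1.0 - pi:                     # weight-one move along the boundary
            x, on_boundary = y, True
            continue
        w = isotropic(x, on_boundary)
        r = D_DIAM * np.sqrt(rng.random())   # density 2r/d^2 on (0, d)
        y1 = x + r * w
        inside = np.dot(y1, y1) <= 1.0       # tilde-functions vanish outside D
        if v < 1.0 - pi / 2.0:               # absorption, Eq. (26)
            return D_DIAM**2 * np.exp(-C * r) * (f_src(y1) if inside else 0.0) / pi
        if not inside:                       # tilde-b = 0
            return 0.0
        wgt = D_DIAM**2 * np.exp(-C * r) * b_coef(y1) / pi   # Eq. (28)
        return wgt * xi(y1, False) * xi(y1, False)

u0 = np.mean([xi(np.zeros(3), False) for _ in range(10**4)])
```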

Note that $m_1$, the average number of particles generated in one time step, equals one in our case: $m_1 = (1-\pi)\cdot 1 + (\pi/2)\cdot 0 + (\pi/2)\cdot 2 = 1$. The constructed branching Markov process is therefore critical in the sense of neutronics.

The next assertion is true.

Theorem 2. With probability one, the constructed branching Markov process terminates.

The proof is analogous to that of the theorem in [8] and is thus omitted.

We should note that the above method for constructing unbiased estimators may also be used for solving mixed problems, in which a Dirichlet condition is given on part of the boundary and a Neumann condition on the rest. It is only important that the pieces of the boundary on which the normal derivative is prescribed be convex surfaces. In this case, we combine the algorithms proposed above with the "walk on spheres with branching" process (see Ref. [9]).

It is not difficult to see that these estimators are of the absorption type. Depending on the complexity of the algorithms, one may use other types of estimators (e.g., collision estimators) (see Ref. [10]).


4. Comparison of the complexity of the absorption and collision estimators in the quasi-linear case

Comparing the complexity of the absorption and collision estimators in the quasi-linear case is a difficult problem. However, in some particular cases the complexity can be analyzed.

To optimize the calculation procedure, we can sometimes choose among different Markov chains and estimators. Here the choice of the estimator is most important and can increase the efficiency of the algorithm. It is often useful to compare two estimators and choose one of them based on the particular problem being solved. We note that the computational cost may depend on the type of equation and the type of Markov process: it may turn out that one estimator is best in one situation, while the other may be best for a different equation.

For quasi-linear equations, we will give some recommendations for choosing estimators that minimize the complexity of a calculation (see Ref. [11]).

Let us consider the following nonlinear integral equation

$$\varphi(x) = f(x) + \sum_{m=1}^{n} \int_D \cdots \int_D K_m(x, y_1, \dots, y_m) \prod_{i=1}^{m} \varphi(y_i)\, dy_1 \cdots dy_m, \tag{30}$$

where $D \subset \mathbb{R}^n$, the kernels $K_m(x, y_1, \dots, y_m)$ and the function $f(x)$ are given on $D \times \cdots \times D$, and $\varphi(x)$ is the unknown function. Such problems are connected with the construction of unbiased estimators for the functional $(h, \varphi)$, where $h(x)$ is a given function; they were discussed in Refs. [10, 12]. We assume that $K_m(x, y_1, \dots, y_m)$ and $f(x)$ satisfy the conditions in [10, 12].

It’s known that the solution of nonlinear integral equations by Monte Carlo is connected with branching Markov processes, determined by the initial distribution density P0x0, and the transition density Pi1xy1yi. Here

$$\int P_0(x)\, dx = 1, \tag{31}$$
$$\sum_{i=1}^{n} \int_D \cdots \int_D P_i(x, y_1, \dots, y_i)\, dy_1 \cdots dy_i = 1 - g_1(x), \tag{32}$$

where $g_1(x) \in [0, 1]$ is the probability of terminating a trajectory of the branching Markov process. Realizations of this process are associated with "trees," on which different unbiased estimators are constructed.

We consider only estimators by absorption and by collision, and will give recommendations for their use based on complexity.

As is known [13], the integral equation

$$a(x) = \sum_{m=1}^{n} \int_D \cdots \int_D P_m(x, y_1, \dots, y_m) \prod_{i=1}^{m} a(y_i)\, dy_1 \cdots dy_m + g_1(x) \tag{33}$$

has a unique positive solution for all $x \in D$, which is determined through a Neumann series, and with probability one the number of generations in the branching Markov process is bounded.

If at each branch point the branching Markov process splits into $m$ new particles with given probabilities $\alpha_m$, and the probability of absorption is $g_1(x) = \alpha_0$, then the probabilities $\alpha_m$ satisfy the condition

$$\sum_{i=1}^{n} \alpha_i \int_D \cdots \int_D P_i(x, y_1, \dots, y_i)\, dy_1 \cdots dy_i = 1 - \alpha_0, \tag{34}$$

where the functions $P_i(x, y_1, \dots, y_i)$ satisfy

$$\int_D \cdots \int_D P_i(x, y_1, \dots, y_i)\, dy_1 \cdots dy_i = 1. \tag{35}$$

It is not difficult to see (see Section 1) that the function $M(x)$, the mean number of particles in the branching process, satisfies the linear integral equation

$$M(x) = \alpha_0 + \sum_{i=1}^{n} \alpha_i \int_D \cdots \int_D P_i(x, y_1, \dots, y_i) \sum_{j=1}^{i} M(y_j)\, dy_1 \cdots dy_i. \tag{36}$$

Here $M(x)$ is the iterative solution of this equation with the initial value $\alpha_0$. It is known (see Section 1) that the constructed branching Markov process terminates under the condition $b = \sum_{m=1}^{n} m\,\alpha_m \le 1$, where $b$ is the average number of particles generated in one step; this is why we choose the $\alpha_m$ so that $b \le 1$.

We solve the integral Eq. (30) at a point $x \in D$; in this case the function $h$ is a Dirac $\delta$-function, that is, $\int h(x, y)\,\varphi(x)\, dx = \varphi(y)$, and at the initial time we have one particle at the point $x$ almost surely. The branching process is realized as a tree of branches, on which we construct the unbiased estimators by absorption, $\xi_n(x)$, and by collision, $\xi_c(x)$, and compare their complexity.

Obviously, the number of arithmetic operations depends on the average number of particles in the tree. We denote by $D_n$ the variance of the absorption estimator and by $D_c$ the variance of the collision estimator. The complexity of the absorption estimator is $T_n = t\,D_n$, and the complexity of the collision estimator is $T_c = t'\,D_c$, where $t$ and $t'$ are the average CPU times needed to compute one realization of the respective estimator.
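Because the general comparison is difficult, the following toy sketch makes the quantities $T_n$ and $T_c$ concrete on a scalar analogue of Eq. (30), $\varphi = f + k\varphi^2$, whose exact solution is $\varphi = (1 - \sqrt{1 - 4kf})/(2k)$. The constants $f$, $k$, and $\alpha_0$ are our illustrative assumptions, chosen so that the mean number of offspring $b = 2(1 - \alpha_0)$ is less than one; the two recursions below realize the absorption and collision scoring rules.

```python
import time
import numpy as np

F, K = 0.5, 0.3          # free term f and kernel constant k (illustrative)
A0 = 0.6                 # absorption probability alpha_0
A2P = 1.0 - A0           # probability of splitting into two particles

rng = np.random.default_rng(2)

def xi_absorption():
    # scores f/alpha_0 only at absorption: E[xi] = f + k * E[xi]^2 = phi
    if rng.random() < A0:
        return F / A0
    return (K / A2P) * xi_absorption() * xi_absorption()

def xi_collision():
    # scores f at every node of the tree ("by collisions"), same expectation
    if rng.random() < A0:
        return F
    return F + (K / A2P) * xi_collision() * xi_collision()

def complexity(estimator, n=10**5):
    start = time.perf_counter()
    sample = [estimator() for _ in range(n)]
    t = (time.perf_counter() - start) / n
    return t * np.var(sample)            # T = t * D, as in the text

phi = (1.0 - np.sqrt(1.0 - 4.0 * K * F)) / (2.0 * K)   # exact solution, about 0.6125
print(phi, complexity(xi_absorption), complexity(xi_collision))
```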

First, we will investigate the variance of both estimators.

Let $F^+$ and $F^-$ be the sets of positive and negative bounded functions, respectively. We shall consider the following integral equations:

$$\psi(x) = \frac{f^2(x)}{\alpha_0} + \sum_{m=1}^{n} \int_D \cdots \int_D \frac{K_m^2(x, y_1, \dots, y_m)}{\alpha_m\, p_m(x, y_1, \dots, y_m)} \prod_{i=1}^{m} \psi(y_i)\, dy_1 \cdots dy_m, \tag{37}$$
$$\psi_1(x) = f(x)\left(2\varphi(x) - f(x)\right) + \sum_{m=1}^{n} \int_D \cdots \int_D \frac{K_m^2(x, y_1, \dots, y_m)}{\alpha_m\, p_m(x, y_1, \dots, y_m)} \prod_{i=1}^{m} \psi_1(y_i)\, dy_1 \cdots dy_m. \tag{38}$$

We suppose that the minimal solutions of Eqs. (37), (38) exist and that the Neumann series converges for (37) with the initial function $\psi^0(x) = f^2(x)/\alpha_0$ and for (38) with the initial function $\psi_1^0(x) = f(x)\left(2\varphi(x) - f(x)\right)$. Then the expressions for the variances will be

$$D_n = \left(h^2/p_0,\ \psi\right) - (h, \varphi)^2, \qquad D_c = \left(h^2/p_0,\ \psi_1\right) - (h, \varphi)^2. \tag{39}$$

Consider the function $G(x) = f(x)\left(2\varphi(x) - f(x)\right)$. If $f(x) \ge 0$, then $G(x) \in F^+$, and in this case the following assertions hold:

Theorem 3. Let $f(x) \in F^+$. If the probability of terminating a trajectory of our branching process satisfies

$$\alpha_0 \in \left(0,\ \min_{x \in D} G(x)\right), \tag{40}$$

then $D_c \le D_n$.

Proof: Since the values of $D_n$ and $D_c$ depend on the solutions of Eqs. (37), (38), we investigate the minimal solutions of these equations, which depend monotonically on the free terms of the equations. Consequently, if $\alpha_0 < \min_x f(x)\left(2\varphi(x) - f(x)\right)$, then $\psi_1(x) \le \psi(x)$, and hence $D_c \le D_n$.

Consequence: If, on the contrary, $\alpha_0 \ge \min_{x \in D} G(x)$, then $D_n \le D_c$.

Theorem 4. Suppose that $f(x) \in F^-$. If the probability of terminating a trajectory of our branching process satisfies

$$\alpha_0 \in \left(0,\ \min_{x \in D} G(x)\right), \tag{41}$$

then $D_n \le D_c$.

The proof is analogous to the above.

Thus, using a priori information about the variances of the estimators (the solutions of Eqs. (37), (38)), one can give recommendations on choosing the optimal estimator.

References

  1. Miranda C. Partial Differential Equations of Elliptic Type. Berlin-Heidelberg-New York: Springer-Verlag; 1970. p. 372
  2. Anishenko RN. Theorems of existence and the method of sequential approximations for some nonlinear problems. Russian Mathematics (Izvestiya Vysshikh Uchebnykh Zavedenii, in Russian). Mathematica. 1971;5:65-64
  3. Ermakov SM, Nekrutkin VV, Sipin AS. Random Processes for Classical Equations of Mathematical Physics. Dordrecht, Netherlands: Springer; 2013. p. 302
  4. Sipin AS. About the solution of the Neumann problem by the Monte Carlo method. In: Metodi Monte-Karlo v vichislitelnoy matematike i matematicheskoy fizike. Novosibirsk: Computing Center of RAN; 1976. pp. 120-136
  5. Meyer PA. Probability and Potentials. Waltham, Massachusetts-Toronto-London: Blaisdell Publishing Co.; 1966. p. 266
  6. Rasulov AS. The construction of unbiased estimators for solving one nonlinear Neumann problem. In: Algoritmi i chislennie metodi resheniya prikladnoy mehaniki. Tashkent: Trudi TashGU; 1986. pp. 64-68
  7. Rasulov AS, Raimova GM. Monte Carlo solution of the Neumann problem for the nonlinear Helmholtz equation. Mathematics and Computers in Simulation. 2015;117:1-9. DOI: 10.1016/j.matcom.2015.05.002
  8. Rasulov AS. Monte Carlo algorithms for the solution of quasi-linear Dirichlet boundary value problems of elliptic type. Journal of Mathematics and Statistics. 2023;11(3):592-597
  9. Puskarev LI, Rasulov AS. Solution of some boundary value problems with nonlinearity in boundary conditions by random walks on spheres. In: Problems of Computational and Applied Mathematics. Vol. 61. Tashkent: TashGU; 1980. pp. 47-52 (in Russian)
  10. Sizova AF. About the variance of the estimator by collisions for solving nonlinear problems by the Monte Carlo method. Vestnik LGU. 1976;1:152-155
  11. Rasulov AS. The efficiency of two models of the Monte Carlo method for solving nonlinear problems. In: Prikladnaya matematika i mehanika. Tashkent: Trudi TashGU; 1983. pp. 60-64
  12. Ermakov SM. The analogue of the Neumann-Ulam scheme in the nonlinear case. Jurnal vichislitelnoy matematiki i matematicheskoy fiziki. 1973;13(3):564-573
  13. Ermakov SM. Monte Carlo method for iteration of nonlinear operators. DAN SSSR. 1972;204(2):271-274
