Open access peer-reviewed chapter

The Discontinuous Galerkin Finite Element Method for Ordinary Differential Equations

Written By

Mahboub Baccouch

Submitted: 16 March 2016 Reviewed: 20 July 2016 Published: 14 December 2016

DOI: 10.5772/64967

From the Edited Volume

Perusal of the Finite Element Method

Edited by Radostina Petrova


Abstract

We present an analysis of the discontinuous Galerkin (DG) finite element method for nonlinear ordinary differential equations (ODEs). We prove that the DG solution is $(p + 1) $th order convergent in the $L^2$-norm, when the space of piecewise polynomials of degree $p$ is used. A $ (2p+1) $th order superconvergence rate of the DG approximation at the downwind point of each element is obtained under quasi-uniform meshes. Moreover, we prove that the DG solution is superconvergent with order $p+2$ to a particular projection of the exact solution. The superconvergence results are used to show that the leading term of the DG error is proportional to the $ (p + 1) $-degree right Radau polynomial. These results allow us to develop a residual-based a posteriori error estimator which is computationally simple, efficient, and asymptotically exact. The proposed a posteriori error estimator is proved to converge to the actual error in the $L^2$-norm with order $p+2$. Computational results indicate that the theoretical orders of convergence are optimal. Finally, a local adaptive mesh refinement procedure that makes use of our local a posteriori error estimate is also presented. Several numerical examples are provided to illustrate the global superconvergence results and the convergence of the proposed estimator under mesh refinement.

Keywords

  • discontinuous Galerkin finite element method
  • ordinary differential equations
  • a priori error estimates
  • superconvergence
  • a posteriori error estimates
  • adaptive mesh refinement

1. Introduction

In this chapter, we introduce and analyze the discontinuous Galerkin (DG) method applied to the following first‐order initial‐value problem (IVP)

$$\frac{du}{dt}=f(t,u),\quad t\in[0,T],\qquad u(0)=u_0,\tag{1}$$

where $u:[0,T]\to\mathbb{R}^n$, $u_0\in\mathbb{R}^n$, and $f:[0,T]\times\mathbb{R}^n\to\mathbb{R}^n$. We assume that the solution exists and is unique, and we would like to approximate it using a discontinuous piecewise polynomial space. According to the theory of ordinary differential equations (ODEs), the condition $f\in C^1([0,T]\times\mathbb{R}^n)$ is sufficient to guarantee the existence and uniqueness of the solution to (1). We note that a general $n$th-order IVP of the form $y^{(n)}=g(t,y,y',\dots,y^{(n-1)})$ with initial conditions $y(0)=a_0$, $y'(0)=a_1,\dots,y^{(n-1)}(0)=a_{n-1}$ can be converted into a system of the form (1), where $u=[y,y',\dots,y^{(n-1)}]^T$, $f(t,u)=[u_2,u_3,\dots,u_n,g(t,u_1,\dots,u_n)]^T$, and $u_0=[a_0,a_1,\dots,a_{n-1}]^T$.
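As a concrete illustration of this reduction (our own example, not part of the chapter), the second-order problem $y''=-y$, $y(0)=1$, $y'(0)=0$ becomes $u'=[u_2,-u_1]^T$ with $u_0=[1,0]^T$; below we integrate it with SciPy's general-purpose `solve_ivp` simply to check the reduction against the exact solution $y(t)=\cos t$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# y'' = -y  =>  u = [y, y']^T,  u' = [u2, g(t, u1, u2)]^T with g = -u1
def f(t, u):
    y, yp = u
    return [yp, -y]

u0 = [1.0, 0.0]                      # y(0) = 1, y'(0) = 0  =>  y(t) = cos(t)
sol = solve_ivp(f, [0.0, np.pi], u0, rtol=1e-10, atol=1e-12)
err = abs(sol.y[0, -1] - np.cos(np.pi))
print(err)                           # small error against the exact solution
```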

The high-order DG method considered here is a class of finite element methods (FEMs) that use completely discontinuous piecewise polynomials for both the numerical solution and the test functions. The DG method was first designed as an effective numerical method for solving hyperbolic conservation laws, which may have discontinuous solutions. Here, we discuss the algorithm formulation, stability analysis, and error estimates for the DG method applied to nonlinear ODEs. The DG method combines the best properties of the classical continuous finite element and finite volume methods, such as consistency, flexibility, stability, conservation of local physical quantities, robustness, and compactness. Recently, DG methods have become highly attractive and popular, mainly because they are high-order accurate, nonlinearly stable, highly parallelizable, able to handle complicated geometries and boundary conditions, and capable of capturing discontinuities without spurious oscillations. The original DG finite element method was introduced in 1973 by Reed and Hill [1] for solving steady-state first-order linear hyperbolic problems. It provides an effective means of solving hyperbolic problems on unstructured meshes in a parallel computing environment. The discontinuous basis can capture shock waves and other discontinuities with high accuracy [2, 3]. The DG method can easily handle adaptivity strategies, since h-refinement (mesh refinement and coarsening) and p-refinement (variation of the method order) can be carried out without the continuity restrictions typical of conforming FEMs. Moreover, the degree of the approximating polynomial can easily be changed from one element to the next [3]. Adaptivity is of particular importance in nonlinear hyperbolic problems, given the complexity of the structure of the discontinuities and the geometries involved.
Due to the local structure of DG methods, physical quantities such as mass, momentum, and energy are conserved locally by DG schemes. This property is very important for flow and transport problems. Furthermore, the DG method is highly parallelizable [4, 5]. Because of these nice features, the DG method has been analyzed and extended to a wide range of applications. In particular, DG methods have been used to solve ODEs [6–9], hyperbolic [5, 6, 10–19] and diffusion and convection-diffusion [20–23] partial differential equations (PDEs), to mention a few. For transient problems, Cockburn and Shu [17] introduced and developed the so-called Runge-Kutta discontinuous Galerkin (RKDG) methods, which use DG discretizations in space combined with an explicit Runge-Kutta time-marching algorithm. The proceedings of Cockburn et al. [24] and the survey of Shu [25] contain a more complete and current account of the DG method and its applications. Despite the attractive advantages mentioned above, DG methods have some drawbacks. Unlike continuous FEMs, DG methods produce dense and ill-conditioned matrices, with the conditioning deteriorating as the polynomial degree increases [23].

Related theoretical results in the literature, including superconvergence results and error estimates of DG methods for ODEs, are given in [7–9, 26–28]. In 1974, LeSaint and Raviart [9] presented the first error analysis of the DG method for the initial-value problem (1). They showed that the DG method is equivalent to an implicit Runge-Kutta method and proved a rate of convergence of $\mathcal{O}(h^p)$ for general triangulations and of $\mathcal{O}(h^{p+1})$ for Cartesian grids. Delfour et al. [7] investigated a class of Galerkin methods which lead to a family of one-step schemes generating approximations of order up to $2p+2$ for the solution of an ODE, when polynomials of degree $p$ are used. In their proposed method, the numerical solution $u_h$ at the discontinuity point $t_n$ is defined as an average across the jump, i.e., $\alpha_n u_h(t_n^-)+(1-\alpha_n)u_h(t_n^+)$. By choosing special values of $\alpha_n$, one can recover the original DG scheme of LeSaint and Raviart [9] as well as Euler's explicit, improved, and implicit schemes. Delfour and Dubeau [27] introduced a family of discontinuous piecewise polynomial approximation schemes. They presented a more general framework covering one-step methods such as implicit Runge-Kutta and Crank-Nicolson schemes, multistep methods such as Adams-Bashforth and Adams-Moulton schemes, and hybrid methods. Later, Johnson [8] proved new optimal a priori error estimates for a class of implicit one-step methods for stiff ODEs obtained by using the DG method with piecewise polynomials of degree zero and one. Johnson and Pitkäranta [29] proved a rate of convergence of $\mathcal{O}(h^{p+1/2})$ for general triangulations, and Peterson [19] confirmed this rate to be optimal. Richter [30] obtained the optimal rate of convergence $\mathcal{O}(h^{p+1})$ for some structured two-dimensional non-Cartesian grids. We also mention the work of Estep [28], where the author outlined a rigorous theory of global error control for the approximation of the IVP (1). In [6], Adjerid et al. showed that the DG solution of one-dimensional hyperbolic problems exhibits an $\mathcal{O}(h^{p+2})$ superconvergence rate at the roots of the right Radau polynomial of degree $p+1$. Furthermore, they obtained a $(2p+1)$th order superconvergence rate of the DG approximation at the downwind point of each element. They performed a local error analysis and showed that the local error on each element is proportional to a Radau polynomial. They further constructed implicit residual-based a posteriori error estimates, but they did not prove their asymptotic exactness. In 2010, Deng and Xiong [31] investigated a DG method with interpolated coefficients for the IVP (1) and proved pointwise superconvergence results at Radau points. More recently, the author [12, 15, 26, 32–39] investigated the global convergence of several residual-based a posteriori DG and local DG (LDG) error estimates for a variety of linear and nonlinear problems.

This chapter is organized as follows: In Section 2, we present the discrete DG method for the classical nonlinear initial‐value problem. In Section 3, we present a detailed proof of the optimal a priori error estimate of the DG scheme. We state and prove our main superconvergence results in Section 4. In Section 5, we present the a posteriori error estimation procedure and prove that these error estimates converge to the true errors under mesh refinement. In Section 6, we propose an adaptive algorithm based on the local a posteriori error estimates. In Section 7, we present several numerical examples to validate our theoretical results. We conclude and discuss our results in Section 8.


2. The DG scheme for nonlinear IVPs

The error analysis of nonlinear scalar and vector initial-value problems (IVPs) having smooth solutions is similar. For this reason, we restrict our theoretical discussion to the following nonlinear scalar initial-value problem

$$u'=f(t,u),\quad t\in[0,T],\qquad u(0)=u_0,\tag{2}$$

where $f(t,u):[0,T]\times\mathbb{R}\to\mathbb{R}$ is a sufficiently smooth function with respect to the variables $t$ and $u$. More precisely, we assume that $|f_u(t,u)|\le M_1$ on the set $D=[0,T]\times\mathbb{R}$, where $M_1$ is a positive constant. We note that this assumption is sufficient to ensure that $f(t,u)$ satisfies a Lipschitz condition in $u$ on the convex set $D$ with Lipschitz constant $M_1$:

$$|f(t,u)-f(t,v)|\le M_1|u-v|,\quad\text{for any }(t,u),\,(t,v)\in D.\tag{3}$$

Next, we introduce the DG method for the model problem (2). Let $0=t_0<t_1<\cdots<t_N=T$ be a partition of the interval $\Omega=[0,T]$ into elements $I_j=[t_{j-1},t_j]$, $j=1,\dots,N$. We denote the length of $I_j$ by $h_j=t_j-t_{j-1}$, and write $h=\max_{1\le j\le N}h_j$ and $h_{\min}=\min_{1\le j\le N}h_j$ for the lengths of the largest and smallest subintervals, respectively. Here, we consider regular meshes, that is, $\frac{h}{h_{\min}}\le\lambda$, where $\lambda\ge 1$ is a constant (independent of $h$) during mesh refinement. If $\lambda=1$, the mesh is uniform; in this case, the nodes and the mesh size are given by $t_j=jh$, $j=0,1,\dots,N$, with $h=T/N$.

Throughout this work, $v(t_j^-)$ and $v(t_j^+)$ denote the left and right limits of a function $v$ at the discontinuity point $t_j$, i.e., $v(t_j^-)=\lim_{s\to 0^-}v(t_j+s)$ and $v(t_j^+)=\lim_{s\to 0^+}v(t_j+s)$. To simplify the notation, we denote by $[v](t_j)=v(t_j^+)-v(t_j^-)$ the jump of $v$ at the point $t_j$.

If we multiply (2) by an arbitrary test function $v$, integrate over the interval $I_j$, and integrate by parts, we get the DG weak formulation

$$\int_{I_j}v'u\,dt+\int_{I_j}f(t,u)v\,dt-u(t_j)v(t_j^-)+u(t_{j-1})v(t_{j-1}^+)=0.\tag{4}$$

We denote by $V_h^p$ the finite element space of piecewise polynomials of degree at most $p$, i.e.,

$$V_h^p=\{v:\ v|_{I_j}\in P^p(I_j),\ j=1,\dots,N\},$$

where $P^p(I_j)$ denotes the space of all polynomials of degree at most $p$ on $I_j$. We emphasize that functions in $V_h^p$ are allowed to have discontinuities at the nodes $t_j$.

Replacing the exact solution $u(t)$ by a piecewise polynomial $u_h(t)\in V_h^p$ and choosing $v\in V_h^p$, we obtain the DG scheme: find $u_h\in V_h^p$ such that, for all $v\in V_h^p$ and $j=1,\dots,N$,

$$\int_{I_j}v'u_h\,dt+\int_{I_j}f(t,u_h)v\,dt-\hat u_h(t_j)v(t_j^-)+\hat u_h(t_{j-1})v(t_{j-1}^+)=0,\tag{5a}$$

where $\hat u_h(t_j)$ is the so-called numerical flux, i.e., the discrete approximation of $u$ at the node $t=t_j$. We remark that $u_h$ is not necessarily continuous at the nodes.

To complete the definition of the DG scheme, we still need to define $\hat u_h$ on the boundaries of $I_j$. Since for IVPs information travels from the past into the future, it is natural to take $\hat u_h$ as the classical upwind flux

$$\hat u_h(t_0)=u_0,\qquad \hat u_h(t_j)=u_h(t_j^-),\quad j=1,\dots,N.\tag{5b}$$

2.1. Implementation

The DG solution $u_h(t)$ can be computed efficiently element by element: first, we compute $u_h(t)$ in the first element $I_1$ using (5a) and (5b) with $j=1$, since $\hat u_h(t_0)=u_0$ is known. Then we can find $u_h(t)$ in $I_2$, since $u_h(t)$ in $I_1$ is already available. We repeat the same process to compute $u_h(t)$ in $I_3,\dots,I_N$. More specifically, $u_h(t)$ is obtained locally on each $I_j$ in two steps: (i) express $u_h(t)$ as a linear combination of the orthogonal basis functions $L_{i,j}(t)$, $i=0,\dots,p$, where $L_{i,j}$ is the $i$th-degree Legendre polynomial shifted to $I_j$, i.e., $u_h(t)=\sum_{i=0}^{p}c_{i,j}L_{i,j}(t)$, $t\in I_j$, where $\{L_{i,j}(t)\}_{i=0}^{p}$ is a local basis of $P^p(I_j)$; and (ii) choose the test functions $v=L_{k,j}(t)$, $k=0,\dots,p$. Thus, on each $I_j$, we obtain a $(p+1)\times(p+1)$ system of nonlinear algebraic equations, which can be solved for the unknown coefficients $c_{0,j},\dots,c_{p,j}$ using, e.g., Newton's method for nonlinear systems. Once the local solves on $I_1,\dots,I_N$ are complete, the DG solution is available as a piecewise discontinuous polynomial of degree $p$. We refer to [7–9] for more details about DG methods for ODEs as well as their properties and applications.
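The element-by-element procedure above can be sketched in a few lines. The following Python code is a minimal illustration (not the author's implementation; the function names are ours, and `scipy.optimize.fsolve` stands in for the Newton solve): it assembles and solves the local $(p+1)\times(p+1)$ nonlinear system of (5a)–(5b) on each element, using a Legendre basis on the reference interval $[-1,1]$.

```python
import numpy as np
from numpy.polynomial import legendre as leg
from scipy.optimize import fsolve

def dg_solve(f, u0, T, N, p):
    """Element-by-element DG solution of u' = f(t, u), u(0) = u0.
    Returns the downwind values uh(t_j^-), j = 1, ..., N."""
    nodes = np.linspace(0.0, T, N + 1)
    xq, wq = leg.leggauss(p + 5)                  # Gauss quadrature on [-1, 1]
    I = np.eye(p + 1)
    V = np.array([leg.legval(xq, I[k]) for k in range(p + 1)])            # L_k
    dV = np.array([leg.legval(xq, leg.legder(I[k])) for k in range(p + 1)])
    u_left, downwind = u0, []
    for j in range(N):
        h = nodes[j + 1] - nodes[j]
        tq = 0.5 * (nodes[j] + nodes[j + 1]) + 0.5 * h * xq
        def residual(c):
            uq = c @ V                             # uh at quadrature points
            r = np.empty(p + 1)
            for k in range(p + 1):
                # (5a) tested against v = L_k: volume terms plus upwind fluxes
                r[k] = (np.dot(wq, uq * dV[k])
                        + 0.5 * h * np.dot(wq, f(tq, uq) * V[k])
                        - c.sum()                  # uh(t_j^-), since L_k(1) = 1
                        + u_left * (-1.0) ** k)    # L_k(-1) = (-1)^k
            return r
        c0 = np.zeros(p + 1); c0[0] = u_left       # constant initial guess
        c = fsolve(residual, c0)                   # Newton-type local solve
        u_left = c.sum()
        downwind.append(u_left)
    return np.array(downwind)

# u' = -u, u(0) = 1: the downwind values superconverge at order 2p + 1
err = abs(dg_solve(lambda t, u: -u, 1.0, 1.0, 8, 2)[-1] - np.exp(-1.0))
print(err)
```

Each element contributes one small nonlinear solve, so the overall cost grows linearly with the number of elements.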

2.2. Linear stability for the DG method

Let us now establish a stability result for the DG method in the linear case $f(t,u)=\lambda u$. Taking $v=u_h$ in the discrete weak formulation (5a), we get

$$\frac{u_h^2(t_j^-)}{2}+\frac{u_h^2(t_{j-1}^+)}{2}-u_h(t_{j-1}^-)u_h(t_{j-1}^+)=\lambda\int_{I_j}u_h^2\,dt,$$

which is equivalent to

$$\frac{u_h^2(t_j^-)}{2}-\frac{u_h^2(t_{j-1}^-)}{2}+\frac{1}{2}\big(u_h(t_{j-1}^-)-u_h(t_{j-1}^+)\big)^2=\lambda\int_{I_j}u_h^2\,dt.$$

Summing over all elements, we get the equality

$$\frac{u_h^2(T^-)}{2}-\frac{u_0^2}{2}+\frac{1}{2}\sum_{j=1}^{N}\big(u_h(t_{j-1}^-)-u_h(t_{j-1}^+)\big)^2=\lambda\int_{\Omega}u_h^2\,dt.$$

Consequently, $\frac{u_h^2(T^-)}{2}-\frac{u_0^2}{2}\le\lambda\int_{\Omega}u_h^2\,dt$, which gives the stability result $u_h^2(T^-)\le u_0^2$ provided that $\lambda\le 0$.
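To make this computation concrete, consider a single element $[0,h]$ with $p=1$ and $f(t,u)=\lambda u$ (our own worked example, not from the text). Writing $u_h=c_0+c_1\xi$ on the reference element and testing (5a) against $v=1$ and $v=\xi$ gives a $2\times 2$ linear system, and the downwind value indeed satisfies $|u_h(T)|\le|u_0|$ when $\lambda\le 0$:

```python
import numpy as np

# One DG element [0, h] with p = 1 for u' = lam*u.
# With uh(xi) = c0 + c1*xi, testing (5a) against v = 1 and v = xi gives:
#   (h*lam - 1)*c0 - c1           = -u0
#   c0 + (h*lam/3 - 1)*c1         =  u0
def dg_step(lam, h, u0):
    A = np.array([[h * lam - 1.0, -1.0],
                  [1.0, h * lam / 3.0 - 1.0]])
    b = np.array([-u0, u0])
    c0, c1 = np.linalg.solve(A, b)
    return c0 + c1                     # downwind value uh(h^-)

lam, h, u0 = -3.0, 0.5, 1.0
uT = dg_step(lam, h, u0)
print(abs(uT) <= abs(u0))              # True: |uh(T)| <= |u0| since lam <= 0
print(abs(uT - u0 * np.exp(lam * h)))  # small: uh tracks the exact decay
```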


3. A priori error analysis

We begin by defining some norms that will be used throughout this work. We define the $L^2$ inner product of two integrable functions $u$ and $v$ on the interval $I_j=[t_{j-1},t_j]$ as $(u,v)_{I_j}=\int_{I_j}u(t)v(t)\,dt$, and denote by $\|u\|_{0,I_j}=(u,u)_{I_j}^{1/2}$ the standard $L^2$ norm of $u$ on $I_j$. The standard $L^\infty$ norm of $u$ on $I_j$ is $\|u\|_{\infty,I_j}=\sup_{t\in I_j}|u(t)|$. Let $H^s(I_j)$, $s=0,1,\dots$, denote the standard Sobolev space of functions on $I_j$ whose derivatives $u^{(k)}$, $k=0,1,\dots,s$, are square integrable, i.e., $H^s(I_j)=\{u:\ \int_{I_j}|u^{(k)}(t)|^2\,dt<\infty,\ 0\le k\le s\}$, equipped with the norm $\|u\|_{s,I_j}=\big(\sum_{k=0}^{s}\|u^{(k)}\|_{0,I_j}^2\big)^{1/2}$. The $H^s(I_j)$ seminorm of $u$ on $I_j$ is $|u|_{s,I_j}=\|u^{(s)}\|_{0,I_j}$. We also define the corresponding norms on the whole computational domain $\Omega$:

$$\|u\|_{0,\Omega}=\Big(\sum_{j=1}^N\|u\|_{0,I_j}^2\Big)^{1/2},\qquad \|u\|_{\infty,\Omega}=\max_{1\le j\le N}\|u\|_{\infty,I_j},\qquad \|u\|_{s,\Omega}=\Big(\sum_{j=1}^N\|u\|_{s,I_j}^2\Big)^{1/2}.$$

The seminorm on $\Omega$ is defined as $|u|_{s,\Omega}=\big(\sum_{j=1}^N|u|_{s,I_j}^2\big)^{1/2}$. We note that if $u\in H^s(\Omega)$, $s=1,2,\dots$, then $\|u\|_{s,\Omega}$ is the standard Sobolev norm $\big(\sum_{k=0}^{s}\|u^{(k)}\|_{0,\Omega}^2\big)^{1/2}$. For convenience, we write $\|u\|_{I_j}$ and $\|u\|$ for $\|u\|_{0,I_j}$ and $\|u\|_{0,\Omega}$, respectively.

For $p\ge 1$, we consider two special projection operators $P_h^\pm$, defined as follows: for a smooth function $u$, the restrictions of $P_h^+u$ and $P_h^-u$ to $I_j$ are the polynomials in $P^p(I_j)$ satisfying

$$\int_{I_j}(P_h^-u-u)v\,dt=0\quad\forall\,v\in P^{p-1}(I_j),\qquad (P_h^-u-u)(t_j^-)=0,\tag{6a}$$
$$\int_{I_j}(P_h^+u-u)v\,dt=0\quad\forall\,v\in P^{p-1}(I_j),\qquad (P_h^+u-u)(t_{j-1}^+)=0.\tag{6b}$$

These two Gauss-Radau projections are essential in the proofs of the optimal $L^2$ error estimates and of the superconvergence results. The special projections $P_h^\pm u$ are mainly used to eliminate the jump terms at the cell boundaries in the error estimate, so as to achieve the optimal order of accuracy [22].

For these projections, it is easy to show that for any $u\in H^{p+1}(I_j)$, $j=1,\dots,N$, there exists a constant $C$ independent of the mesh size $h$ such that (see, e.g., [40])

$$\|u-P_h^\pm u\|_{I_j}\le Ch_j^{p+1}|u|_{p+1,I_j},\qquad \|(u-P_h^\pm u)'\|_{I_j}\le Ch_j^{p}|u|_{p+1,I_j}.\tag{7}$$

Moreover, we recall the inverse properties of the finite element space $V_h^p$ that will be used in our error analysis: for any $v_h\in V_h^p$, there exists a positive constant $C$ independent of $v_h$ and $h$ such that, for $j=1,\dots,N$,

$$\|v_h^{(k)}\|_{I_j}\le Ch_j^{-k}\|v_h\|_{I_j},\quad k\ge 1,\qquad |v_h(t_{j-1}^+)|+|v_h(t_j^-)|\le Ch_j^{-1/2}\|v_h\|_{I_j}.\tag{8}$$

From now on, $C$, $C_1$, $C_2$, etc. denote generic positive constants independent of $h$, which may depend on the exact solution of (1) and its derivatives, and which may take different values at different places.

Throughout this work, we denote by $e=u-u_h$ the error between the exact solution of (2) and the DG solution defined by (5a) and (5b), by $\varepsilon=u-P_h^-u$ the projection error, and by $\bar e=P_h^-u-u_h$ the error between the projection of the exact solution and the DG solution. The actual error can thus be written as $e=(u-P_h^-u)+(P_h^-u-u_h)=\varepsilon+\bar e$.

Now we are ready to prove our optimal error estimates for $e$ in the $L^2$ and $H^1$ norms.

Theorem 3.1. Suppose that the exact solution of (2) is sufficiently smooth with bounded derivatives, i.e., $\|u\|_{p+1,\Omega}$ is bounded, and that $|f_u(t,u)|\le M_1$ on $D=[0,T]\times\mathbb{R}$. Let $p\ge 0$ and let $u_h$ be the DG solution defined by (5a) and (5b). Then, for sufficiently small $h$, there exists a positive constant $C$ independent of $h$ such that

$$\|e\|\le Ch^{p+1},\tag{9}$$
$$\sum_{j=1}^N\|e'\|_{I_j}^2\le Ch^{2p},\qquad \|e\|_{1,\Omega}\le Ch^{p}.\tag{10}$$

Proof. We first derive some error equations that will be used repeatedly in this and the next sections. Subtracting (5a) from (4) with $v\in V_h^p$ and using the numerical flux (5b), we obtain the error equation

$$\int_{I_j}v'e\,dt+\int_{I_j}\big(f(t,u)-f(t,u_h)\big)v\,dt+e(t_{j-1}^-)v(t_{j-1}^+)-e(t_j^-)v(t_j^-)=0\quad\forall\,v\in V_h^p.\tag{11}$$

Integrating by parts, we get

$$\int_{I_j}e'v\,dt-\int_{I_j}\big(f(t,u)-f(t,u_h)\big)v\,dt+[e](t_{j-1})v(t_{j-1}^+)=0.\tag{12}$$

Applying Taylor's theorem with integral remainder in the variable $u$ and using the relation $u-u_h=e$, we write

$$f(t,u)-f(t,u_h)=\theta(u-u_h)=\theta e,\quad\text{where}\quad \theta=\int_0^1 f_u\big(t,u+s(u_h-u)\big)\,ds=\int_0^1 f_u(t,u-se)\,ds.\tag{13}$$

Substituting (13) into (12), we arrive at

$$\int_{I_j}(e'-\theta e)v\,dt+[e](t_{j-1})v(t_{j-1}^+)=0\quad\forall\,v\in V_h^p.\tag{14}$$

To simplify the notation, we introduce the bilinear operator $A_j(e;V)$ as

$$A_j(e;V)=\int_{I_j}(e'-\theta e)V\,dt+[e](t_{j-1})V(t_{j-1}^+).\tag{15}$$

Thus, we can write (14) as

$$A_j(e;v)=0\quad\forall\,v\in V_h^p.\tag{16}$$

A direct calculation using integration by parts yields

$$A_j(e;V)=-\int_{I_j}(V'+\theta V)e\,dt+e(t_j^-)V(t_j^-)-e(t_{j-1}^-)V(t_{j-1}^+).\tag{17}$$

On the other hand, adding and subtracting $P_h^+V$ to $V$, we can write (15) as

$$A_j(e;V)=A_j(e;V-P_h^+V)+A_j(e;P_h^+V).\tag{18}$$

Combining (18) and (16) with $v=P_h^+V\in P^p(I_j)$ and applying the property of the projection $P_h^+$, i.e., $(V-P_h^+V)(t_{j-1}^+)=0$, we obtain

$$A_j(e;V)=\int_{I_j}(e'-\theta e)(V-P_h^+V)\,dt+[e](t_{j-1})(V-P_h^+V)(t_{j-1}^+)=\int_{I_j}(e'-\theta e)(V-P_h^+V)\,dt.\tag{19}$$

If $v$ is a polynomial of degree at most $p$, then $v'$ is a polynomial of degree at most $p-1$. Therefore, by the property of the projection $P_h^+$, we immediately see that

$$\int_{I_j}v'(V-P_h^+V)\,dt=0\quad\forall\,v\in P^p(I_j).\tag{20}$$

Substituting the relation $e=\varepsilon+\bar e$ into (19) and invoking (20) with $v=\bar e$, we get

$$A_j(e;V)=\int_{I_j}(\varepsilon'-\theta e)(V-P_h^+V)\,dt+\int_{I_j}\bar e'(V-P_h^+V)\,dt=\int_{I_j}(\varepsilon'-\theta e)(V-P_h^+V)\,dt.\tag{21}$$

Now we are ready to prove the theorem. We consider the following auxiliary (adjoint) problem: find $\varphi$ such that

$$-\varphi'-\theta\varphi=e,\quad t\in[0,T],\qquad \varphi(T)=0,\tag{22}$$

where $\theta=\theta(t)=\int_0^1 f_u\big(t,u(t)-se(t)\big)\,ds$. Clearly, the exact solution to (22) is given by the explicit formula

$$\varphi(t)=\frac{1}{\Theta(t)}\int_t^T\Theta(y)e(y)\,dy,\quad\text{where}\quad \Theta(t)=\exp\Big(-\int_t^T\theta(s)\,ds\Big).\tag{23}$$

Next, we prove some regularity estimates which will be needed in our error analysis. Using the assumption $|f_u(t,u)|\le M_1$, we see that $\theta(t)$ is bounded by $M_1$ on $[0,T]$:

$$|\theta(t)|\le\int_0^1\big|f_u\big(t,u(t)-se(t)\big)\big|\,ds\le\int_0^1 M_1\,ds=M_1,\quad t\in[0,T].\tag{24a}$$

Using the definition of $\Theta$ and the estimate (24a), we have

$$0<\Theta(t)\le\exp\Big(\int_0^T|\theta(s)|\,ds\Big)\le\exp\Big(\int_0^T M_1\,ds\Big)=\exp(M_1T)=C_1.\tag{24b}$$

Similarly, we can easily estimate $\frac{1}{\Theta(t)}$ as follows:

$$0<\frac{1}{\Theta(t)}=\exp\Big(\int_t^T\theta(s)\,ds\Big)\le\exp\Big(\int_0^T|\theta(s)|\,ds\Big)\le\exp(M_1T)=C_1.\tag{24c}$$

Applying the estimates (24b) and (24c) and the Cauchy-Schwarz inequality, we get

$$|\varphi(t)|\le\frac{1}{\Theta(t)}\int_t^T\Theta(y)|e(y)|\,dy\le C_1\int_t^T C_1|e(y)|\,dy\le C_1^2\int_0^T|e(y)|\,dy\le C_1^2T^{1/2}\|e\|,\quad t\in[0,T].$$

Squaring both sides and integrating over $\Omega$ yields

$$\|\varphi\|^2\le C_1^4T^2\|e\|^2=C_2\|e\|^2.\tag{25a}$$

We also need an estimate of $|\varphi|_{1,\Omega}$. Using (22) and (24a) gives

$$|\varphi'|=|\theta\varphi+e|\le M_1|\varphi|+|e|,\quad t\in[0,T].$$

Squaring both sides, applying the inequality $(a+b)^2\le 2a^2+2b^2$, integrating over the computational domain $\Omega$, and using (25a), we get

$$|\varphi|_{1,\Omega}^2=\int_0^T|\varphi'|^2\,dt\le 2\big(M_1^2\|\varphi\|^2+\|e\|^2\big)\le 2\big(M_1^2C_2+1\big)\|e\|^2=C_3\|e\|^2.\tag{25b}$$

Applying the projection result (7) and the estimate (25b) yields

$$\|\varphi-P_h^+\varphi\|\le C_4h|\varphi|_{1,\Omega}\le C_5h\|e\|.\tag{25c}$$

Now we are ready to show (9). Using (17) with $V=\varphi$ and (22), we obtain

$$A_j(e;\varphi)=-\int_{I_j}(\varphi'+\theta\varphi)e\,dt-e(t_{j-1}^-)\varphi(t_{j-1})+e(t_j^-)\varphi(t_j)=\int_{I_j}e^2\,dt-e(t_{j-1}^-)\varphi(t_{j-1})+e(t_j^-)\varphi(t_j).$$

Summing over the elements and using the facts that $\varphi(T)=0$ and $e(t_0^-)=0$ yields

$$\sum_{j=1}^N A_j(e;\varphi)=\|e\|^2-e(t_0^-)\varphi(t_0)+e(T^-)\varphi(T)=\|e\|^2.\tag{26}$$

On the other hand, choosing $V=\varphi$ in (21) gives

$$A_j(e;\varphi)=\int_{I_j}(\varepsilon'-\theta e)(\varphi-P_h^+\varphi)\,dt.\tag{27}$$

Summing over all elements and applying the Cauchy-Schwarz inequality, we get

$$\sum_{j=1}^N A_j(e;\varphi)\le\big(\|\varepsilon'\|+M_1\|e\|\big)\|\varphi-P_h^+\varphi\|.$$

Using the estimates (7) and (25c), we deduce that

$$\sum_{j=1}^N A_j(e;\varphi)\le\big(C_0h^p|u|_{p+1,\Omega}+M_1\|e\|\big)C_5h\|e\|\le C\big(h^{p+1}+h\|e\|\big)\|e\|.\tag{28}$$

Combining (26) and (28), we conclude that

$$\|e\|\le Ch^{p+1}+Ch\|e\|.\tag{29}$$

Thus, $(1-Ch)\|e\|\le Ch^{p+1}$, where $C$ is a positive constant independent of $h$. Therefore, for sufficiently small $h$, e.g., $h\le\frac{1}{2C}$, we have $\frac12\|e\|\le(1-Ch)\|e\|\le Ch^{p+1}$, which yields $\|e\|\le 2Ch^{p+1}$ for $h$ small. This completes the proof of (9).

To show (10), we use $e'=\bar e'+\varepsilon'$, the classical inverse inequality (8), the estimate (9), and the projection result (7) to obtain

$$\|e'\|=\|\bar e'+\varepsilon'\|\le\|\bar e'\|+\|\varepsilon'\|\le C_1h^{-1}\|\bar e\|+C_2h^p\le C_1h^{-1}\big(\|e\|+\|\varepsilon\|\big)+C_2h^p\le C_3h^p+C_2h^p\le Ch^p.$$

We note that $\|e\|_{1,\Omega}^2=\|e\|^2+\|e'\|^2$. Applying (9) and the estimate $\|e'\|\le Ch^p$ yields $\|e\|_{1,\Omega}^2\le C_1h^{2p+2}+C_2h^{2p}=\mathcal{O}(h^{2p})$, which completes the proof of the theorem.


4. Superconvergence error analysis

In this section, we study the superconvergence properties of the DG method. We first show a $(2p+1)$th order superconvergence rate of the DG approximation at the downwind point of each element. Then, we apply this superconvergence result to show that the DG solution converges at an $\mathcal{O}(h^{p+2})$ rate to the special projection $P_h^-u$ of the exact solution. This result allows us to prove that the leading term of the DG error is proportional to the $(p+1)$-degree right Radau polynomial.

First, we define some special polynomials. The $p$th-degree Legendre polynomial on $[-1,1]$ can be defined by the Rodrigues formula [41]

$$\tilde L_p(\xi)=\frac{1}{2^p\,p!}\frac{d^p}{d\xi^p}\big((\xi^2-1)^p\big),\quad -1\le\xi\le 1.$$

It satisfies the following important properties: $\tilde L_p(1)=1$, $\tilde L_p(-1)=(-1)^p$, and the orthogonality relation

$$\int_{-1}^{1}\tilde L_p(\xi)\tilde L_q(\xi)\,d\xi=\frac{2}{2p+1}\delta_{pq},\quad\text{where }\delta_{pq}\text{ is the Kronecker symbol.}\tag{30}$$

One can easily write the $(p+1)$-degree Legendre polynomial on $[-1,1]$ as

$$\tilde L_{p+1}(\xi)=\frac{(2p+2)!}{2^{p+1}\big((p+1)!\big)^2}\,\xi^{p+1}+\tilde q_p(\xi),\quad\text{where}\quad \tilde q_p\in P^p([-1,1]).$$

The $(p+1)$-degree right Radau polynomial on $[-1,1]$ is defined as $\tilde R_{p+1}(\xi)=\tilde L_{p+1}(\xi)-\tilde L_p(\xi)$. It has $p+1$ real distinct roots, $-1<\xi_0<\cdots<\xi_p=1$.

Mapping $I_j$ onto the reference element $[-1,1]$ by the linear transformation $t=\frac{t_j+t_{j-1}}{2}+\frac{h_j}{2}\xi$, we obtain the shifted Legendre and Radau polynomials on $I_j$:

$$L_{p+1,j}(t)=\tilde L_{p+1}\Big(\frac{2t-t_j-t_{j-1}}{h_j}\Big),\qquad R_{p+1,j}(t)=\tilde R_{p+1}\Big(\frac{2t-t_j-t_{j-1}}{h_j}\Big).$$

Next, we define the monic Radau polynomial $\psi_{p+1,j}(t)$ on $I_j$ as

$$\psi_{p+1,j}(t)=h_j^{p+1}\frac{\big((p+1)!\big)^2}{(2p+2)!}\,R_{p+1,j}(t)=c_ph_j^{p+1}R_{p+1,j}(t),\quad\text{where}\quad c_p=\frac{\big((p+1)!\big)^2}{(2p+2)!}.\tag{31}$$

Throughout this work, the roots of $R_{p+1,j}(t)$ are denoted by $t_{j,i}=\frac{t_j+t_{j-1}}{2}+\frac{h_j}{2}\xi_i$, $i=0,1,\dots,p$.
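The roots $\xi_i$ are easy to compute numerically. The following short check (our own illustration, using NumPy's Legendre-series utilities) builds $\tilde R_{p+1}=\tilde L_{p+1}-\tilde L_p$ and confirms that its $p+1$ roots lie in $(-1,1]$ with $\xi_p=1$:

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Right Radau polynomial R_{p+1} = L_{p+1} - L_p on [-1, 1] and its roots
p = 2
c = np.zeros(p + 2)
c[p + 1], c[p] = 1.0, -1.0            # Legendre coefficients of L_{p+1} - L_p
roots = np.sort(leg.legroots(c))
print(roots)                          # p + 1 distinct roots, the last is +1
print(np.isclose(roots[-1], 1.0))     # True: xi_p = 1, since R_{p+1}(1) = 0
```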

In the next lemma, we recall some results from [32] which will be needed in our error analysis.

Lemma 4.1. The polynomials $L_{p,j}$ and $\psi_{p+1,j}$ satisfy the following properties:

$$\|L_{p,j}\|_{I_j}^2=\frac{h_j}{2p+1},\qquad \int_{I_j}\psi_{p+1,j}\psi'_{p+1,j}\,dt=-k_1h_j^{2p+2},\qquad \|\psi_{p+1,j}\|_{I_j}^2=(2p+2)k_2h_j^{2p+3},\tag{32}$$

where $k_1=2c_p^2$, $k_2=\frac{k_1}{(2p+1)(2p+3)}$, and $c_p=\frac{((p+1)!)^2}{(2p+2)!}$.

Now, we are ready to prove the following superconvergence results.

Theorem 4.1. Suppose that the assumptions of Theorem 3.1 are satisfied. Assume further that $f_u$ is sufficiently smooth with respect to $t$ and $u$ (for example, $h(t)=f_u(t,u(t))\in C^p([0,T])$ is enough). Then there exists a positive constant $C$ such that

$$|e(t_k^-)|\le Ch^{2p+1},\quad k=1,\dots,N,\tag{33}$$
$$|\bar e(t_k^-)|\le Ch^{2p+1},\quad k=1,\dots,N,\tag{34}$$
$$\|\bar e'\|\le Ch^{p+1},\tag{35}$$
$$\|\bar e\|\le Ch^{p+2}.\tag{36}$$

Proof. To prove (33), we proceed by a duality argument. Consider the following auxiliary problem:

$$W'+\theta W=0,\quad t\in[0,t_k],\qquad W(t_k)=1,\tag{37}$$

where $1\le k\le N$ and $\theta=\theta(t)=\int_0^1 f_u\big(t,u(t)-se(t)\big)\,ds$. The exact solution of this problem is $W(t)=\exp\big(\int_t^{t_k}\theta(s)\,ds\big)$, $t\in\Omega_k=[0,t_k]$. Using the assumption $h(t)=f_u(t,u(t))\in C^p([0,T])$ and the estimate (24a), we can easily show that there exists a constant $C$ such that

$$\|W\|_{p+1,\Omega_k}\le C.\tag{38}$$

Using (17) and (37), we get

$$A_j(e;W)=-\int_{I_j}(W'+\theta W)e\,dt-e(t_{j-1}^-)W(t_{j-1})+e(t_j^-)W(t_j)=-e(t_{j-1}^-)W(t_{j-1})+e(t_j^-)W(t_j).$$

Summing over the elements $I_j$, $j=1,\dots,k$, using $W(t_k)=1$ and the fact that $e(t_0^-)=0$, we obtain

$$\sum_{j=1}^k A_j(e;W)=-e(t_0^-)W(t_0)+e(t_k^-)W(t_k)=e(t_k^-).\tag{39}$$

Now, taking $V=W$ in (21) yields

$$A_j(e;W)=\int_{I_j}(\varepsilon'-\theta e)(W-P_h^+W)\,dt.$$

Summing over the elements $I_j$, $j=1,\dots,k$, for $k=1,\dots,N$, and applying (39), we arrive at

$$e(t_k^-)=\sum_{j=1}^k\int_{I_j}(\varepsilon'-\theta e)(W-P_h^+W)\,dt.$$

Using (24a) and applying the Cauchy-Schwarz inequality, we obtain

$$|e(t_k^-)|\le\big(\|\varepsilon'\|_{0,\Omega_k}+M_1\|e\|_{0,\Omega_k}\big)\|W-P_h^+W\|_{0,\Omega_k}\le\big(\|\varepsilon'\|+M_1\|e\|\big)\|W-P_h^+W\|_{0,\Omega_k}.$$

Invoking the estimates (7), (9), and (38), we conclude that

$$|e(t_k^-)|\le\big(C_0h^p|u|_{p+1,\Omega}+M_1C_1h^{p+1}\big)C_2h^{p+1}|W|_{p+1,\Omega_k}\le C\big(h^p+h^{p+1}\big)h^{p+1}=\mathcal{O}(h^{2p+1}),\tag{40}$$

for all $k=1,\dots,N$, which completes the proof of (33).

In order to prove (34), we use the relation $e=\bar e+\varepsilon$, the property of the projection $P_h^-$, i.e., $\varepsilon(t_k^-)=0$, and the estimate (33) to get

$$|\bar e(t_k^-)|=|e(t_k^-)-\varepsilon(t_k^-)|=|e(t_k^-)|=\mathcal{O}(h^{2p+1}).$$

Next, we derive an optimal error estimate for $\bar e'$. By the properties of $P_h^-$, we have

$$\int_{I_j}\varepsilon v'\,dt=0\quad\forall\,v\in P^p(I_j),\qquad \varepsilon(t_j^-)=0,\quad j=1,\dots,N.\tag{41}$$

Using the relation $e=\varepsilon+\bar e$ and applying (41) and (11) yields

$$\int_{I_j}v'\bar e\,dt+\int_{I_j}\big(f(t,u)-f(t,u_h)\big)v\,dt+\bar e(t_{j-1}^-)v(t_{j-1}^+)-\bar e(t_j^-)v(t_j^-)=0.$$

Integrating the first term by parts, we obtain

$$\int_{I_j}\big(\bar e'-f(t,u)+f(t,u_h)\big)v\,dt+[\bar e](t_{j-1})v(t_{j-1}^+)=0.\tag{42}$$

Choosing $v(t)=\bar e'(t)-(-1)^p\bar e'(t_{j-1}^+)L_{p,j}(t)\in P^p(I_j)$ in (42), we have $v(t_{j-1}^+)=0$ by the property $\tilde L_p(-1)=(-1)^p$, and, by the orthogonality relation (30),

$$\int_{I_j}(\bar e')^2\,dt=(-1)^p\bar e'(t_{j-1}^+)\int_{I_j}L_{p,j}\bar e'\,dt+\int_{I_j}\big(f(t,u)-f(t,u_h)\big)\big(\bar e'-(-1)^p\bar e'(t_{j-1}^+)L_{p,j}\big)\,dt=\int_{I_j}\big(f(t,u)-f(t,u_h)\big)\big(\bar e'-(-1)^p\bar e'(t_{j-1}^+)L_{p,j}\big)\,dt.\tag{43}$$

Using (3) and applying the Cauchy-Schwarz inequality gives

$$\|\bar e'\|_{I_j}^2\le\int_{I_j}\big|f(t,u)-f(t,u_h)\big|\big(|\bar e'|+|\bar e'(t_{j-1}^+)||L_{p,j}|\big)\,dt\le M_1\int_{I_j}|e|\big(|\bar e'|+|\bar e'(t_{j-1}^+)||L_{p,j}|\big)\,dt\le M_1\|e\|_{I_j}\big(\|\bar e'\|_{I_j}+|\bar e'(t_{j-1}^+)|\,\|L_{p,j}\|_{I_j}\big).\tag{44}$$

Combining (44) with (8) and (32), we obtain

$$\|\bar e'\|_{I_j}^2\le M_1\|e\|_{I_j}\Big(\|\bar e'\|_{I_j}+\big(C_1h_j^{-1/2}\|\bar e'\|_{I_j}\big)\big(h_j^{1/2}(2p+1)^{-1/2}\big)\Big)\le C\|e\|_{I_j}\|\bar e'\|_{I_j}.$$

Consequently, $\|\bar e'\|_{I_j}\le C\|e\|_{I_j}$. Squaring both sides, summing over all elements, and using (9), we conclude that

$$\|\bar e'\|^2\le C\|e\|^2\le Ch^{2p+2}.\tag{45}$$

Finally, we estimate $\bar e$. Using the fundamental theorem of calculus, we write

$$|\bar e(t)|=\Big|\bar e(t_j^-)+\int_{t_j}^{t}\bar e'(s)\,ds\Big|\le|\bar e(t_j^-)|+\int_{I_j}|\bar e'(s)|\,ds,\quad t\in I_j.$$

Squaring both sides, applying the inequality $(a+b)^2\le 2a^2+2b^2$, and applying the Cauchy-Schwarz inequality, we get

$$|\bar e(t)|^2\le 2|\bar e(t_j^-)|^2+2\Big(\int_{I_j}|\bar e'(s)|\,ds\Big)^2\le 2|\bar e(t_j^-)|^2+2h_j\int_{I_j}|\bar e'(s)|^2\,ds=2|\bar e(t_j^-)|^2+2h_j\|\bar e'\|_{I_j}^2.$$

Integrating this inequality with respect to $t$ over $I_j$ and using the estimate (34), we get

$$\|\bar e\|_{I_j}^2\le 2h_j|\bar e(t_j^-)|^2+2h_j^2\|\bar e'\|_{I_j}^2\le 2Ch_jh^{4p+2}+2h_j^2\|\bar e'\|_{I_j}^2.$$

Summing over all elements and using the estimate (45) and the fact that $h=\max_j h_j$, we obtain

$$\|\bar e\|^2\le C_1h^{4p+2}+2h^2\|\bar e'\|^2\le C_1h^{4p+2}+2C_2h^{2p+4}=\mathcal{O}(h^{2p+4}),\tag{46}$$

where we used $4p+2\ge 2p+4$ for $p\ge 1$. This completes the proof of the theorem.

The previous theorem indicates that the DG solution $u_h$ is closer to $P_h^-u$ than to the exact solution $u$. Next, we apply the results of Theorem 4.1 to prove that the actual error $e$ can be split into a significant part, which is proportional to the $(p+1)$-degree right Radau polynomial, and a less significant part that converges at an $\mathcal{O}(h^{p+2})$ rate in the $L^2$ norm. Before proving this result, we introduce two interpolation operators, $\pi$ and $\hat\pi$. The operator $\pi$ is defined as follows: for a smooth function $u=u(t)$, $\pi u|_{I_j}\in P^p(I_j)$ interpolates $u$ at the roots of the $(p+1)$-degree right Radau polynomial shifted to $I_j$, i.e., at $t_{j,i}$, $i=0,1,\dots,p$. The operator $\hat\pi$ satisfies $\hat\pi u|_{I_j}\in P^{p+1}(I_j)$, and $\hat\pi u|_{I_j}$ interpolates $u$ at $t_{j,i}$, $i=0,1,\dots,p$, and at one additional point $\bar t_j$ in $I_j$ with $\bar t_j\ne t_{j,i}$, $i=0,1,\dots,p$. The choice of the additional point is not important; it can be taken as $\bar t_j=t_{j-1}$.

Next, we recall the following results from [12] which will be needed in our analysis.

Lemma 4.2. If $u\in H^{p+2}(I_j)$, then the interpolation error can be split as

$$u-\pi u=\phi_j+\gamma_j\quad\text{on } I_j,\tag{47a}$$

where

$$\phi_j(t)=\alpha_j\psi_{p+1,j}(t),\qquad \psi_{p+1,j}(t)=\prod_{i=0}^{p}(t-t_{j,i}),\qquad \gamma_j=u-\hat\pi u,\tag{47b}$$

and $\alpha_j$ is the coefficient of $t^{p+1}$ in the $(p+1)$-degree polynomial $\hat\pi u$. Furthermore,

$$\|\phi_j\|_{s,I_j}\le Ch_j^{p+1-s}\|u\|_{p+1,I_j},\quad s=0,\dots,p,\tag{47c}$$
$$\|\gamma_j\|_{s,I_j}\le Ch_j^{p+2-s}\|u\|_{p+2,I_j},\quad s=0,\dots,p+1.\tag{47d}$$

Finally,

$$\|\pi u-P_h^-u\|_{I_j}\le Ch_j^{p+2}\|u\|_{p+2,I_j}.\tag{48}$$

Proof. The proof of this lemma can be found in [12], more precisely in its Lemma 2.1.

The main global superconvergence result is stated in the following theorem.

Theorem 4.2. Under the assumptions of Theorem 4.1, there exists a constant $C$ independent of $h$ such that

$$\|u_h-\pi u\|\le Ch^{p+2}.\tag{49}$$

Moreover, the true error can be divided into a significant part and a less significant part:

$$e(t)=\alpha_j\psi_{p+1,j}(t)+\omega_j(t)\quad\text{on } I_j,\tag{50a}$$

where

$$\omega_j=\gamma_j+\pi u-u_h,\tag{50b}$$

and

$$\sum_{j=1}^N\|\omega_j\|_{I_j}^2\le Ch^{2(p+2)},\qquad \sum_{j=1}^N\|\omega'_j\|_{I_j}^2\le Ch^{2(p+1)}.\tag{50c}$$

Proof. Adding and subtracting $P_h^-u$ to $u_h-\pi u$, we write $u_h-\pi u=u_h-P_h^-u+P_h^-u-\pi u=-\bar e+P_h^-u-\pi u$. Taking the $L^2$ norm and using the triangle inequality, we get

$$\|u_h-\pi u\|\le\|\bar e\|+\|P_h^-u-\pi u\|.$$

Applying the estimates (36) and (48), we deduce (49). Next, adding and subtracting $\pi u$ to $e$, we write $e=u-\pi u+\pi u-u_h$. Moreover, splitting the interpolation error $u-\pi u$ on $I_j$ as in (47a) gives

$$e=\phi_j+\gamma_j+\pi u-u_h=\phi_j+\omega_j,\quad\text{where}\quad \omega_j=\gamma_j+\pi u-u_h.\tag{51}$$

Next, we use the Cauchy-Schwarz inequality and the inequality $|ab|\le\frac12(a^2+b^2)$ to write

$$\|\omega_j\|_{I_j}^2=\big(\gamma_j+\pi u-u_h,\ \gamma_j+\pi u-u_h\big)_{I_j}=\|\gamma_j\|_{I_j}^2+2\big(\gamma_j,\,\pi u-u_h\big)_{I_j}+\|\pi u-u_h\|_{I_j}^2\le 2\big(\|\gamma_j\|_{I_j}^2+\|\pi u-u_h\|_{I_j}^2\big).$$

Summing over all elements and applying (47d) with $s=0$ and (49) yields the first estimate in (50c).

Using the Cauchy-Schwarz inequality and the inequality $|ab|\le\frac12(a^2+b^2)$ again, we write

$$\|\omega'_j\|_{I_j}^2=\big(\gamma'_j+(\pi u-u_h)',\ \gamma'_j+(\pi u-u_h)'\big)_{I_j}\le 2\big(\|\gamma'_j\|_{I_j}^2+\|(\pi u-u_h)'\|_{I_j}^2\big).\tag{52}$$

Using the inverse inequality (8), i.e., $\|(\pi u-u_h)'\|_{I_j}\le Ch^{-1}\|\pi u-u_h\|_{I_j}$, we obtain the estimate

$$\|\omega'_j\|_{I_j}^2\le C\big(\|\gamma'_j\|_{I_j}^2+h^{-2}\|\pi u-u_h\|_{I_j}^2\big).$$

Summing over all elements and applying (49) and the estimate (47d) with $s=1$, we establish the second estimate in (50c).


5. A posteriori error estimation

In this section, we use the superconvergence results from the previous section to construct a residual‐based a posteriori error estimator which is computationally simple, efficient, and asymptotically exact. We will also prove its asymptotic exactness under mesh refinement. First, we present the weak finite element formulation to compute a posteriori error estimate for the nonlinear IVP (2).

In order to obtain a procedure for estimating the error $e$, we multiply (2) by an arbitrary smooth function $v$ and integrate over the element $I_j$ to obtain

$$\int_{I_j}u'v\,dt=\int_{I_j}f(t,u)v\,dt.\tag{53}$$

Replacing $u$ by $u_h+e$ and choosing $v=\psi_{p+1,j}(t)$, we obtain

$$\int_{I_j}e'\psi_{p+1,j}\,dt=\int_{I_j}\big(f(t,u_h+e)-u'_h\big)\psi_{p+1,j}\,dt.\tag{54}$$

Substituting (50a), i.e., $e(t)=\alpha_j\psi_{p+1,j}(t)+\omega_j(t)$, into the left-hand side of (54) yields

$$\alpha_j\int_{I_j}\psi'_{p+1,j}\psi_{p+1,j}\,dt=\int_{I_j}\big(f(t,u_h+e)-u'_h-\omega'_j\big)\psi_{p+1,j}\,dt.\tag{55}$$

Using (32) and solving for $\alpha_j$, we obtain

$$\alpha_j=-\frac{1}{k_1h_j^{2p+2}}\int_{I_j}\big(f(t,u_h+e)-u'_h-\omega'_j\big)\psi_{p+1,j}\,dt.\tag{56}$$

Our error estimation procedure consists of approximating the true error on each element $I_j$ by the leading term:

$$e(t)\approx E(t)=a_j\psi_{p+1,j}(t),\quad t\in I_j,\tag{57a}$$

where the coefficient $a_j$ of the leading term of the error is obtained from the coefficient $\alpha_j$ defined in (56) by neglecting the terms $\omega_j$ and $e$, i.e.,

$$a_j=-\frac{1}{k_1h_j^{2p+2}}\int_{I_j}\big(f(t,u_h)-u'_h\big)\psi_{p+1,j}\,dt.\tag{57b}$$

We remark that our a posteriori error estimate is obtained by solving local problems with no boundary condition.
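The coefficient (57b) is a single quadrature over each element. The following sketch (our own illustration, not the author's code; the sign convention follows $\int_{I_j}\psi_{p+1,j}\psi'_{p+1,j}\,dt=-k_1h_j^{2p+2}$ from Lemma 4.1) evaluates $a_j$ from the Legendre coefficients of $u_h$ on one element:

```python
import numpy as np
from math import factorial
from numpy.polynomial import legendre as leg

def error_estimate_coeff(f, c, a, b):
    """Coefficient a_j of (57b) on the element [a, b], given the Legendre
    coefficients c of uh in the reference variable xi in [-1, 1]."""
    p = len(c) - 1
    h = b - a
    cp = factorial(p + 1) ** 2 / factorial(2 * p + 2)
    k1 = 2.0 * cp ** 2
    xq, wq = leg.leggauss(p + 5)                     # Gauss quadrature nodes
    tq = 0.5 * (a + b) + 0.5 * h * xq
    uq = leg.legval(xq, c)                           # uh
    duq = leg.legval(xq, leg.legder(c)) * (2.0 / h)  # uh'(t)
    r = np.zeros(p + 2)
    r[p + 1], r[p] = 1.0, -1.0                       # R_{p+1} = L_{p+1} - L_p
    psi = cp * h ** (p + 1) * leg.legval(xq, r)      # monic Radau polynomial
    integral = 0.5 * h * np.dot(wq, (f(tq, uq) - duq) * psi)
    return -integral / (k1 * h ** (2 * p + 2))

# Sanity check (worked by hand): for u' = 2t on [0, 1] with p = 1, the
# projection uh = P_h^- u of u = t^2 is uh = -1/3 + (4/3) t, with Legendre
# coefficients [1/3, 2/3] in xi; its error is exactly e = psi_{2,1}, so a_j = 1.
aj = error_estimate_coeff(lambda t, u: 2.0 * t, np.array([1/3, 2/3]), 0.0, 1.0)
print(aj)                                            # 1.0 (up to roundoff)
```

With $a_j$ in hand, the local indicator $\|E\|_{I_j}=|a_j|\,\|\psi_{p+1,j}\|_{I_j}$ follows from (32), which is what the adaptive procedure of Section 6 consumes.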

The global effectivity index, defined by $\sigma=\frac{\|E\|}{\|e\|}$, is an important criterion for evaluating the quality of an error estimator. The main results of this section are stated in the following theorem. In particular, we prove that the error estimate $\|E\|$, in the $L^2$ norm, converges to the actual error $\|e\|$. Moreover, we show that our a posteriori error estimate is asymptotically exact by showing that the global effectivity index $\sigma\to 1$ as $h\to 0$.

Theorem 5.1. Suppose that the assumptions of Theorem 4.1 are satisfied. If $E(t)=a_j\psi_{p+1,j}(t)$, $t\in I_j$, where the $a_j$, $j=1,\dots,N$, are defined in (57b), then

$$\|e-E\|^2\le Ch^{2p+4}.\tag{58}$$

Thus, the post-processed approximation $u_h+E$ is an $\mathcal{O}(h^{p+2})$ superconvergent solution, i.e.,

$$\|u-(u_h+E)\|^2=\sum_{j=1}^N\|u-(u_h+a_j\psi_{p+1,j})\|_{I_j}^2\le Ch^{2p+4}.\tag{59}$$

Furthermore, there exists a positive constant $C$ independent of $h$ such that

$$\big|\,\|e\|-\|E\|\,\big|^2\le Ch^{2p+4}.\tag{60}$$

Finally, if there exists a constant $C=C(u)>0$ independent of $h$ such that

$$\|e\|\ge Ch^{p+1},\tag{61}$$

then the global effectivity index in the $L^2$ norm converges to unity at an $\mathcal{O}(h)$ rate, i.e.,

$$\frac{\|E\|}{\|e\|}=1+\mathcal{O}(h).\tag{62}$$

Proof. First, we prove (58) and (59). Since $e=\alpha_j\psi_{p+1,j}+\omega_j$ and $E=a_j\psi_{p+1,j}$ on $I_j$, we have

$$\|e-E\|_{I_j}^2=\|(\alpha_j-a_j)\psi_{p+1,j}+\omega_j\|_{I_j}^2\le 2(\alpha_j-a_j)^2\|\psi_{p+1,j}\|_{I_j}^2+2\|\omega_j\|_{I_j}^2,$$

where we used the inequality $(a+b)^2\le 2a^2+2b^2$. Summing over all elements yields

$$\|e-E\|^2=\sum_{j=1}^N\|e-E\|_{I_j}^2\le 2\sum_{j=1}^N(\alpha_j-a_j)^2\|\psi_{p+1,j}\|_{I_j}^2+2\sum_{j=1}^N\|\omega_j\|_{I_j}^2.\tag{63}$$

Next, we will derive upper bounds for j=1N(αjaj)2ψp+1,jIj2. Subtracting (56) from (57b), we obtain

ajαj=1k1hj2p+2Ij(f(t,uh+e)f(t,uh)ωj)ψp+1,jdt.E64

Thus,

|ajαj|1k1hj2p+2Ij(|f(t,uh+e)f(t,uh)|+|ωj|)|ψp+1,j|dt.E65

Using the Lipschitz condition (3) and applying the Cauchy‐Schwarz inequality yields

$$|a_j - \alpha_j| \le \frac{1}{k_1 h_j^{2p+2}} \int_{I_j} \big( M_1|e| + |\omega_j| \big)\, |\psi_{p+1,j}|\, dt \le \frac{\|\psi_{p+1,j}\|_{I_j}}{k_1 h_j^{2p+2}} \big( M_1\|e\|_{I_j} + \|\omega_j\|_{I_j} \big). \qquad (66)$$

Applying the inequality $(a+b)^2 \le 2(a^2+b^2)$, we obtain

$$(a_j - \alpha_j)^2 \le \frac{2\|\psi_{p+1,j}\|_{I_j}^2}{k_1^2 h_j^{4p+4}} \big( M_1^2\|e\|_{I_j}^2 + \|\omega_j\|_{I_j}^2 \big). \qquad (67)$$

Multiplying by $\|\psi_{p+1,j}\|_{I_j}^2$ and using (32), i.e., $\|\psi_{p+1,j}\|_{I_j}^2 = (2p+2)k_2 h_j^{2p+3}$, yields

$$(a_j - \alpha_j)^2\|\psi_{p+1,j}\|_{I_j}^2 \le \frac{2\|\psi_{p+1,j}\|_{I_j}^4}{k_1^2 h_j^{4p+4}} \big( M_1^2\|e\|_{I_j}^2 + \|\omega_j\|_{I_j}^2 \big) \le k_3 h_j^2 \big( \|e\|_{I_j}^2 + \|\omega_j\|_{I_j}^2 \big), \qquad (68)$$

where $k_3 = 2(2p+2)^2 k_2^2 k_1^{-2} \max(M_1^2, 1)$ is a constant independent of the mesh size.

Summing over all elements and using $h = \max_{1 \le j \le N} h_j$, we arrive at

$$\sum_{j=1}^{N} (a_j - \alpha_j)^2\|\psi_{p+1,j}\|_{I_j}^2 \le k_3 h^2 \Big[ \|e\|^2 + \sum_{j=1}^{N} \|\omega_j\|_{I_j}^2 \Big].$$

Combining this estimate with (9) and (50c), we establish

$$\sum_{j=1}^{N} (a_j - \alpha_j)^2\|\psi_{p+1,j}\|_{I_j}^2 \le Ch^{2p+4}. \qquad (69)$$

Now, combining (63) and the estimates (50c) and (69) yields

$$\|e - E\|^2 \le 2C_1 h^{2p+4} + 2C_2 h^{2p+4} = Ch^{2p+4},$$

which completes the proof of (58). Using the relation $e = u - u_h$ and the estimate (58), we obtain

$$\sum_{j=1}^{N} \|u - (u_h + a_j\psi_{p+1,j})\|_{I_j}^2 = \|u - (u_h + E)\|^2 = \|e - E\|^2 \le Ch^{2p+4}.$$

Next, we will prove (60). Using the reverse triangle inequality, we have

$$\big|\, \|E\| - \|e\| \,\big| \le \|E - e\|, \qquad (70)$$

which, after applying the estimate (58), completes the proof of (60).

In order to show (62), we divide (70) by $\|e\|$ to obtain $|\sigma - 1| \le \dfrac{\|E - e\|}{\|e\|}$. Applying the estimate (58) and the inverse estimate (61), we arrive at

$$|\sigma - 1| \le Ch.$$

Therefore, $\sigma = \dfrac{\|E\|}{\|e\|} = 1 + \mathcal{O}(h)$, which establishes (62).

Remark 5.1. The previous theorem indicates that the computable quantity $E$ converges to $e$ at an $\mathcal{O}(h^{p+2})$ rate. This accuracy enhancement is achieved by adding the error estimate $E$ to the DG solution $u_h$.

Remark 5.2. The performance of an error estimator is typically measured by the global effectivity index $\sigma$, which is defined as the ratio of the estimated error $\|E\|$ to the actual error $\|e\|$. We say that the error estimator is asymptotically exact if $\sigma \to 1$ as $h \to 0$. The estimate (62) indicates that the global effectivity index in the $L^2$ norm converges to unity at an $\mathcal{O}(h)$ rate. Therefore, the proposed estimator $E$ is asymptotically exact. We would like to emphasize that $E$ is a computable quantity, since it depends only on the DG solution $u_h$ and $f$; it provides an asymptotically exact a posteriori estimate of the actual error $e$. Finally, we would like to point out that our procedure for estimating the error $e$ is computationally simple. Furthermore, our DG error indicator is obtained by solving a local problem with no boundary condition on each element. This makes it useful in adaptive computations, as we demonstrate in Section 6.

Remark 5.3. Our proofs are valid for any regular meshes and piecewise polynomials of degree $p \ge 1$. If $p = 0$, then (46) gives $\|\bar{e}\| = \mathcal{O}(h)$, which is the same order as $\|e\| = \mathcal{O}(h)$. Thus, our superconvergence results are not valid when using $p = 0$, and our error estimate procedure does not apply.

Remark 5.4. The assumption (61), which is used to prove the convergence of $\sigma$ to unity at an $\mathcal{O}(h)$ rate, requires that terms of order $\mathcal{O}(h^{p+1})$ be present in the error. If they are not, $E$ might not be a good approximation of $e$. We note that the exponent of $h$ in the estimate (9) is optimal in the sense that it cannot be improved. In fact, for the $h$‐version finite element method one may show that, provided the $(p+1)$th‐order derivatives of the exact solution $u$ do not vanish identically over the whole domain, an inverse estimate of the form $\|e\| \ge C(u)h^{p+1}$ is valid for some positive constant $C(u)$ depending only on $u$ [42–44].

Remark 5.5. Our results readily extend to nonlinear systems of ODEs of the form

$$\frac{d\mathbf{u}}{dt} = \mathbf{f}(t, \mathbf{u}), \quad t \in [0, T], \qquad \mathbf{u}(0) = \mathbf{u}_0,$$

where $\mathbf{u} = [u_1, \dots, u_n]^t : [0,T] \to \mathbb{R}^n$, $\mathbf{u}_0 \in \mathbb{R}^n$, and $\mathbf{f} = [f_1, \dots, f_n]^t : [0,T] \times \mathbb{R}^n \to \mathbb{R}^n$. The DG method for this problem consists of finding $\mathbf{u}_h \in V_h^p = \{\mathbf{v} : \mathbf{v}|_{I_j} \in \big(P^p(I_j)\big)^n,\ j = 1, \dots, N\}$ such that, for all $\mathbf{v} \in V_h^p$ and $j = 1, \dots, N$,

$$\int_{I_j} (\mathbf{v}')^t\, \mathbf{u}_h\, dt + \int_{I_j} \mathbf{v}^t\, \mathbf{f}(t, \mathbf{u}_h)\, dt - \big(\mathbf{v}^-(t_j)\big)^t\, \mathbf{u}_h^-(t_j) + \big(\mathbf{v}^+(t_{j-1})\big)^t\, \mathbf{u}_h^-(t_{j-1}) = 0.$$
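In the scalar case ($n = 1$), the element‐by‐element weak form above can be sketched in a few lines of Python. The Legendre expansion of $u_h$ on each element and the plain Newton iteration with a finite‐difference Jacobian are illustrative choices, not the author's implementation.

```python
import numpy as np
from numpy.polynomial.legendre import legval, legder, leggauss

def dg_element_solve(f, a, b, u_in, p):
    """Solve the scalar DG weak form on one element [a, b] with inflow value
    u_in. u_h is expanded in Legendre polynomials mapped to [a, b]."""
    h = b - a
    x, w = leggauss(p + 3)                    # Gauss nodes/weights on [-1, 1]
    t = a + 0.5 * h * (x + 1.0)
    basis = np.eye(p + 1)                     # one-hot Legendre coefficients
    phi = np.array([legval(x, e) for e in basis])
    dphi = np.array([legval(x, legder(e)) for e in basis]) * (2.0 / h)
    phi_R = np.array([legval(1.0, e) for e in basis])    # v(t_j^-)
    phi_L = np.array([legval(-1.0, e) for e in basis])   # v(t_{j-1}^+)

    def residual(c):
        uh = legval(x, c)
        # int v' u_h + int v f(t,u_h) - v(t_j) u_h(t_j^-) + v(t_{j-1}) u_in = 0
        return (0.5 * h * (dphi * uh * w).sum(axis=1)
                + 0.5 * h * (phi * f(t, uh) * w).sum(axis=1)
                - phi_R * legval(1.0, c) + phi_L * u_in)

    c = np.zeros(p + 1); c[0] = float(u_in)   # initial guess: constant state
    for _ in range(50):                       # Newton with numerical Jacobian
        r = residual(c)
        if np.max(np.abs(r)) < 1e-12:
            break
        J = np.empty((p + 1, p + 1))
        for k in range(p + 1):
            dc = np.zeros(p + 1); dc[k] = 1e-7
            J[:, k] = (residual(c + dc) - r) / 1e-7
        c = c - np.linalg.solve(J, r)
    return c, legval(1.0, c)                  # coefficients, downwind value

def dg_solve(f, u0, T, N, p):
    """March the element solves across a uniform mesh of N elements."""
    nodes = np.linspace(0.0, T, N + 1)
    coeffs, u_in = [], u0
    for j in range(N):
        c, u_in = dg_element_solve(f, nodes[j], nodes[j + 1], u_in, p)
        coeffs.append(c)
    return nodes, coeffs
```

Marching this across $[0,1]$ for $u' = u$, $u(0) = 1$ with $p = 2$ and $N = 10$ reproduces $u(1) = e$ at the downwind point to high accuracy, reflecting the $(2p+1)$th‐order downwind superconvergence discussed above.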

6. Application: adaptive mesh refinement (AMR)

A posteriori error estimates play an essential role in assessing the reliability of numerical solutions and in developing efficient adaptive algorithms. Adaptive methods based on a posteriori error estimates have become established procedures for computing efficient and accurate approximations to the solution of differential equations. The standard adaptive FEMs through local refinement can be written in the following loop

SOLVE → ESTIMATE → MARK → REFINE.

The local a posteriori error estimators of Section 5 can be used to mark elements for refinement.

Next, we present a simple DG adaptive algorithm based on the local a posteriori error estimator proposed in the previous section. The adaptive algorithm that we propose has the following steps:

  1. Select a tolerance Tol and a maximum bound on the number of intervals (say $N_{\max} = 1000$). Set $\|E\| = 1$.

  2. Construct an initial coarse mesh with N+1 nodes. For simplicity, we start with a uniform mesh having N=2 elements.

  3. While $N + 1 \le N_{\max}$ and $\|E\| \ge$ Tol do

    1. Solve the DG scheme to obtain the solution uh as described in Section 2.

    2. For each element, use (57a) and (57b) to compute the local error estimators $\|E\|_{I_j}$, $j = 1, \dots, N$, as described in Section 5, and the global error estimator $\|E\| = \big(\sum_{j=1}^{N} \|E\|_{I_j}^2\big)^{1/2}$.

    3. For all elements Ij

      1. Choose a parameter $0 \le \lambda \le 1$. If the estimated local error satisfies $\|E\|_{I_j} < \lambda \max_{j=1,\dots,N} \|E\|_{I_j}$, then stop and accept the DG solution on the element $I_j$.

      2. Otherwise, reject the DG solution on Ij and divide the element Ij into two uniform elements by adding the coordinate of the midpoint of Ij to the list of nodes.

  4. Endwhile.
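The loop above can be sketched in a few lines. The callbacks `solve_dg` and `estimate_local` are hypothetical stand‐ins for the solver of Section 2 and the estimator of Section 5; only the mesh‐management logic (fixed‐rate marking and midpoint bisection) is shown, on an illustrative domain $[0, 1]$.

```python
import numpy as np

def adapt(solve_dg, estimate_local, tol=1e-2, lam=0.5, n_max=1000):
    """Sketch of the SOLVE -> ESTIMATE -> MARK -> REFINE loop.
    solve_dg(nodes) and estimate_local(nodes, uh) are hypothetical callbacks."""
    nodes = np.linspace(0.0, 1.0, 3)              # initial mesh: N = 2 elements
    uh = None
    while len(nodes) <= n_max:
        uh = solve_dg(nodes)                      # SOLVE
        eta = np.asarray(estimate_local(nodes, uh))   # ESTIMATE: ||E||_{I_j}
        if np.sqrt(np.sum(eta ** 2)) <= tol:      # global estimate below Tol
            break
        marked = eta >= lam * eta.max()           # MARK: fixed-rate strategy
        mids = 0.5 * (nodes[:-1] + nodes[1:])
        nodes = np.sort(np.concatenate([nodes, mids[marked]]))  # REFINE: bisect
    return nodes, uh
```

With $\lambda = 0$ every element is marked (uniform refinement), while $\lambda$ near 1 marks only the elements with the largest indicators, as noted in Remark 6.1.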

Remark 6.1. There are many possibilities for selecting the elements to be refined given the local error indicators $\|E\|_{I_j}$. In the above algorithm, we used the most popular fixed‐rate strategy, which consists of refining the element $I_j$ if $\|E\|_{I_j} > \lambda \max_{j=1,\dots,N} \|E\|_{I_j}$, where $0 \le \lambda \le 1$ is a parameter provided by the user. Note that the choice $\lambda = 0$ gives uniform refinement, while $\lambda = 1$ gives no refinement. Also, there are other stopping criteria that may be used to stop the adaptive algorithm.


7. Computational results

In this section, we present several numerical examples to (i) validate our superconvergence results and the global convergence of the residual‐based a posteriori error estimates, and (ii) test the above local adaptive mesh refinement procedure that makes use of our local a posteriori error estimate.

Example 7.1. The test problem we consider is the following nonlinear IVP

$$u' = -u - u^2, \quad t \in [0, 1], \qquad u(0) = 1.$$

Clearly, the exact solution is $u(t) = \dfrac{1}{2e^{t} - 1}$. We use uniform meshes obtained by subdividing the computational domain $[0, 1]$ into $N$ intervals with N = 5, 10, 20, 30, 40, 50. This example is tested using $P^p$ polynomials with $p = 0$–4. Figure 1 shows the $L^2$ errors $\|e\|$ and $\|\bar{e}\|$ on a log‐log scale as well as their orders of convergence. These results indicate that $\|e\| = \mathcal{O}(h^{p+1})$ and $\|\bar{e}\| = \mathcal{O}(h^{p+2})$. This example demonstrates that our theoretical convergence rates are optimal.
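The convergence orders reported in the figures are obtained in the standard way from errors on successive meshes; a small helper, with hypothetical inputs `h` (mesh sizes) and `err` (the corresponding $L^2$ errors):

```python
import numpy as np

def observed_orders(h, err):
    """Observed convergence orders between consecutive meshes:
    order_k = log(err_{k-1}/err_k) / log(h_{k-1}/h_k)."""
    h, err = np.asarray(h, dtype=float), np.asarray(err, dtype=float)
    return np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])
```

For errors behaving like $Ch^{p+1}$, the returned orders approach $p + 1$ as the mesh is refined.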

Figure 1.

Log‐log plots of $\|e\|$ (left) and $\|\bar{e}\|$ (right) versus mesh size $h$ for Example 7.1 on uniform meshes having N = 5, 10, 20, 30, 40, 50 elements using $P^p$, $p = 0$ to 4.

Figure 2.

Log‐log plots of $e^*$ (left) versus $h$ for Example 7.1 using N = 5, 10, 20, 30, 40, 50 and $P^p$, $p = 0$ to 4. Log‐log plots of $|e|^*$ (right) versus $h$ using N = 5, 10, 20, 30 elements and $P^p$, $p = 0$ to 3.

Next, we compute the maximum error at the shifted roots of the $(p+1)$-degree right Radau polynomial on each element $I_j$ and then take the maximum over all elements. For simplicity, we use $e^*$ to denote $\max_{1 \le j \le N}\big(\max_{0 \le i \le p} |e(t_{j,i})|\big)$, where $t_{j,i}$ are the roots of $R_{p+1,j}(t)$. Similarly, we compute the true error at the downwind point of each element and denote by $|e|^*$ the maximum over all elements $I_j$, $j = 1, \dots, N$, i.e., $|e|^* = \max_{1 \le j \le N} |e(t_j^-)|$. In Figure 2, we present the errors $e^*$ and $|e|^*$ and their orders of convergence. We observe that $e^* = \mathcal{O}(h^{p+2})$ and $|e|^* = \mathcal{O}(h^{2p+1})$, as expected. Thus, the error at the right Radau points converges at an $\mathcal{O}(h^{p+2})$ rate, while the error at the downwind point of each element converges with order $2p+1$. This is in full agreement with the theory.
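The shifted right Radau points $t_{j,i}$ used for $e^*$ can be computed from the roots of $R_{p+1} = L_{p+1} - L_p$ (one common normalization, vanishing at the downwind endpoint); a short sketch:

```python
import numpy as np
from numpy.polynomial.legendre import legroots

def right_radau_points(p, a=0.0, b=1.0):
    """Roots t_{j,i} of the (p+1)-degree right Radau polynomial
    R_{p+1} = L_{p+1} - L_p, shifted from [-1, 1] to the element [a, b].
    The downwind endpoint b is always one of them."""
    c = np.zeros(p + 2)
    c[p + 1], c[p] = 1.0, -1.0        # Legendre coefficients of L_{p+1} - L_p
    x = np.sort(np.real(legroots(c))) # roots on [-1, 1]
    return a + 0.5 * (b - a) * (x + 1.0)
```

For $p = 1$ on $[0, 1]$ this gives the two points $1/3$ and $1$, the latter being the downwind point where the $(2p+1)$th‐order superconvergence is observed.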

Next, we use (57a) and (57b) to compute the a posteriori error estimate for the DG solution. The global errors $\|e - E\|$ and their orders of convergence, using the spaces $P^p$ with $p = 1$–4, are shown in Figure 3. We observe that $\|e - E\| = \mathcal{O}(h^{p+2})$, which is in full agreement with the theory and demonstrates that the convergence rate proved in this work is sharp. Since $\|e - E\| = \|u - (u_h + E)\| = \mathcal{O}(h^{p+2})$, we conclude that the computable quantity $u_h + E$ converges to the exact solution $u$ at an $\mathcal{O}(h^{p+2})$ rate in the $L^2$ norm. We would like to emphasize that this accuracy enhancement is achieved by adding the error estimate $E$ to the DG solution $u_h$ only once, at the end of the computation. This leads to a very efficient computation of the post‐processed approximation $u_h + E$.

Figure 3.

The errors $\|e - E\|$ and their orders of convergence for Example 7.1 on uniform meshes having N = 5, 10, 20, 30, 40, 50 elements using $P^p$, $p = 1$ to 4.

In Table 1, we present the actual $L^2$ errors and the global effectivity indices. These results demonstrate that the proposed a posteriori error estimate is asymptotically exact.

N p=1 p=2 p=3 p=4
 ‖e‖ σ ‖e‖ σ ‖e‖ σ ‖e‖ σ
5 4.7637e-03 1.0362 2.7867e-04 1.0531 1.6847e-05 1.0637 1.0386e-06 1.0705
10 1.2750e-03 1.0179 3.7805e-05 1.0271 1.1742e-06 1.0326 3.7481e-08 1.0363
20 3.2849e-04 1.0089 4.8747e-06 1.0136 7.6227e-08 1.0164 1.2290e-09 1.0182
30 1.4736e-04 1.0059 1.4568e-06 1.0090 1.5201e-08 1.0109 1.6369e-10 1.0122
40 8.3262e-05 1.0044 6.1698e-07 1.0068 4.8296e-09 1.0082 3.9026e-11 1.0091
50 5.3429e-05 1.0035 3.1660e-07 1.0054 1.9827e-09 1.0066 1.2820e-11 1.0073

Table 1.

The errors $\|e\|$ and the global effectivity indices for Example 7.1 on uniform meshes having N = 5, 10, 20, 30, 40, 50 elements using $P^p$, $p = 1$ to 4.

In Figure 4, we show the errors $\delta_e = \big|\, \|e\| - \|E\| \,\big|$ and $\delta_\sigma = |\sigma - 1|$. We see that $\delta_e = \mathcal{O}(h^{p+2})$ and $\delta_\sigma = \mathcal{O}(h)$, as the theory predicts.

Figure 4.

Convergence rates for $\delta_e$ (left) and $\delta_\sigma$ (right) for Example 7.1 on uniform meshes having N = 5, 10, 20, 30, 40, 50 elements using $P^p$, $p = 1$ to 4.

Example 7.2. In this example, we test our error estimation procedure on adaptively refined meshes generated by the algorithm of Section 6. We consider the following model problem

$$u' = \beta u, \quad t \in [0, 5], \qquad u(0) = 1,$$

where the exact solution is simply $u(t) = e^{\beta t}$. We apply our adaptive algorithm using $\beta = 1$ (unstable), $\beta = -1$ (stable), and $\beta = -20$ (stiff). The DG solutions and the sequences of meshes obtained by applying our adaptive algorithm with Tol $= 10^{-2}$ for $p = 1$–4 are shown in Figures 5–7 for $\beta = 1$, $\beta = -1$, and $\beta = -20$, respectively. As can be expected, the adaptive algorithm refines in the vicinity of the endpoint $t = 5$, with coarser meshes for increasing polynomial degree $p$. Furthermore, we observe that when $\lambda$ is close to 0 the refinement is nearly uniform, while $\lambda$ close to 1 concentrates the refinement in the region of high approximation error. We also observed that, for both $\lambda = 0.2$ and $\lambda = 0.9$, the optimal convergence rates are achieved asymptotically and the global effectivity indices converge to unity with increasing polynomial degree $p$. Furthermore, we tested our adaptive algorithm on other problems and observed similar behavior; these results are not included to save space.

Figure 5.

$u$, $u_h$, and final meshes for Example 7.2 with $\beta = 1$ using $P^p$, $p = 1$ to 4, and Tol $= 10^{-2}$.

Figure 6.

u, uh, and final meshes for Example 7.2 with β = −1 using Pp, p = 1 to 4, and Tol = 10−2.

Figure 7.

u, uh, and final meshes for Example 7.2 with β = −20 using Pp, p = 1 to 4, and Tol = 10−2.


8. Concluding remarks

In this chapter, we presented a detailed analysis of the original discontinuous Galerkin (DG) finite element method for the approximation of initial‐value problems (IVPs) for nonlinear ordinary differential equations (ODEs). We proved several optimal error estimates and superconvergence results. In particular, we showed that the DG solution converges to the true solution with order $p+1$ when the space of piecewise polynomials of degree $p$ is used. We further proved the $(2p+1)$th‐order superconvergence rate at the downwind points. Moreover, we proved that the DG solution is $\mathcal{O}(h^{p+2})$ superconvergent toward a particular projection of the exact solution. We used these results to show that the leading term of the DG error is proportional to the $(p+1)$-degree right Radau polynomial. This result allowed us to construct a computationally simple, efficient, and asymptotically exact a posteriori error estimator, obtained by solving a local residual problem with no boundary condition on each element. Furthermore, we proved that the proposed a posteriori error estimator converges to the actual error in the $L^2$ norm with order $p+2$. All proofs are valid for regular meshes and for $P^p$ polynomials with $p \ge 1$. Finally, we presented a local adaptive procedure that makes use of our local a posteriori error estimate. Future work includes the study of the superconvergence of the DG method for nonlinear boundary‐value problems.


References

  1. W. H. Reed, T. R. Hill, Triangular mesh methods for the neutron transport equation, Tech. Rep. LA‐UR‐73‐479, Los Alamos Scientific Laboratory, Los Alamos, 1973.
  2. R. Biswas, K. Devine, J. E. Flaherty, Parallel adaptive finite element methods for conservation laws, Applied Numerical Mathematics 14 (1994): 255–284.
  3. J.‐F. Remacle, J. E. Flaherty, M. S. Shephard, An adaptive discontinuous Galerkin technique with an orthogonal basis applied to compressible flow problems, SIAM Review 45 (2002): 53–72.
  4. J. Flaherty, R. Loy, M. Shephard, J. Teresco, Software for the parallel adaptive solution of conservation laws by discontinuous Galerkin methods, in: B. Cockburn, G. Karniadakis, C.‐W. Shu (eds.), Discontinuous Galerkin Methods, Vol. 11 of Lecture Notes in Computational Science and Engineering, Springer, Berlin Heidelberg, 2000, pp. 113–123.
  5. J. E. Flaherty, R. Loy, M. S. Shephard, B. K. Szymanski, J. D. Teresco, L. H. Ziantz, Adaptive local refinement with octree load‐balancing for the parallel solution of three‐dimensional conservation laws, Journal of Parallel and Distributed Computing 47 (1997): 139–152.
  6. S. Adjerid, K. D. Devine, J. E. Flaherty, L. Krivodonova, A posteriori error estimation for discontinuous Galerkin solutions of hyperbolic problems, Computer Methods in Applied Mechanics and Engineering 191 (2002): 1097–1112.
  7. M. Delfour, W. Hager, F. Trochu, Discontinuous Galerkin methods for ordinary differential equations, Mathematics of Computation 36 (1981): 455–473.
  8. C. Johnson, Error estimates and adaptive time‐step control for a class of one‐step methods for stiff ordinary differential equations, SIAM Journal on Numerical Analysis 25 (1988): 908–926.
  9. P. Lesaint, P. Raviart, On a finite element method for solving the neutron transport equations, in: C. de Boor (ed.), Mathematical Aspects of Finite Elements in Partial Differential Equations, Academic Press, New York, 1974.
  10. S. Adjerid, M. Baccouch, The discontinuous Galerkin method for two‐dimensional hyperbolic problems. Part I: Superconvergence error analysis, Journal of Scientific Computing 33 (2007): 75–113.
  11. S. Adjerid, M. Baccouch, The discontinuous Galerkin method for two‐dimensional hyperbolic problems. Part II: A posteriori error estimation, Journal of Scientific Computing 38 (2009): 15–49.
  12. S. Adjerid, M. Baccouch, Asymptotically exact a posteriori error estimates for a one‐dimensional linear hyperbolic problem, Applied Numerical Mathematics 60 (2010): 903–914.
  13. S. Adjerid, M. Baccouch, Adaptivity and error estimation for discontinuous Galerkin methods, in: X. Feng, O. Karakashian, Y. Xing (eds.), Recent Developments in Discontinuous Galerkin Finite Element Methods for Partial Differential Equations, Vol. 157 of The IMA Volumes in Mathematics and its Applications, Springer International Publishing, Switzerland, 2014, pp. 63–96.
  14. M. Baccouch, A local discontinuous Galerkin method for the second‐order wave equation, Computer Methods in Applied Mechanics and Engineering 209–212 (2012): 129–143.
  15. M. Baccouch, A posteriori error estimates for a discontinuous Galerkin method applied to one‐dimensional nonlinear scalar conservation laws, Applied Numerical Mathematics 84 (2014): 1–21.
  16. M. Baccouch, S. Adjerid, Discontinuous Galerkin error estimation for hyperbolic problems on unstructured triangular meshes, Computer Methods in Applied Mechanics and Engineering 200 (2010): 162–177.
  17. B. Cockburn, C. W. Shu, TVB Runge‐Kutta local projection discontinuous Galerkin methods for scalar conservation laws II: General framework, Mathematics of Computation 52 (1989): 411–435.
  18. K. D. Devine, J. E. Flaherty, Parallel adaptive hp‐refinement techniques for conservation laws, Computer Methods in Applied Mechanics and Engineering 20 (1996): 367–386.
  19. T. E. Peterson, A note on the convergence of the discontinuous Galerkin method for a scalar hyperbolic equation, SIAM Journal on Numerical Analysis 28 (1) (1991): 133–140.
  20. P. Castillo, A superconvergence result for discontinuous Galerkin methods applied to elliptic problems, Computer Methods in Applied Mechanics and Engineering 192 (2003): 4675–4685.
  21. F. Celiker, B. Cockburn, Superconvergence of the numerical traces for discontinuous Galerkin and hybridized methods for convection‐diffusion problems in one space dimension, Mathematics of Computation 76 (2007): 67–96.
  22. Y. Cheng, C.‐W. Shu, Superconvergence of discontinuous Galerkin and local discontinuous Galerkin schemes for linear hyperbolic and convection‐diffusion equations in one space dimension, SIAM Journal on Numerical Analysis 47 (2010): 4044–4072.
  23. B. Cockburn, C. W. Shu, The local discontinuous Galerkin method for time‐dependent convection‐diffusion systems, SIAM Journal on Numerical Analysis 35 (1998): 2440–2463.
  24. B. Cockburn, G. E. Karniadakis, C. W. Shu, Discontinuous Galerkin Methods: Theory, Computation and Applications, Lecture Notes in Computational Science and Engineering, Vol. 11, Springer, Berlin, 2000.
  25. C.‐W. Shu, Discontinuous Galerkin method for time‐dependent problems: Survey and recent developments, in: X. Feng, O. Karakashian, Y. Xing (eds.), Recent Developments in Discontinuous Galerkin Finite Element Methods for Partial Differential Equations, Vol. 157 of The IMA Volumes in Mathematics and its Applications, Springer International Publishing, Switzerland, 2014, pp. 25–62.
  26. M. Baccouch, Analysis of a posteriori error estimates of the discontinuous Galerkin method for nonlinear ordinary differential equations, Applied Numerical Mathematics 106 (2016): 129–153.
  27. M. Delfour, F. Dubeau, Discontinuous polynomial approximations in the theory of one‐step, hybrid and multistep methods for nonlinear ordinary differential equations, Mathematics of Computation 47 (1986): 169–189.
  28. D. Estep, A posteriori error bounds and global error control for approximation of ordinary differential equations, SIAM Journal on Numerical Analysis 32 (1995): 1–48.
  29. C. Johnson, J. Pitkaranta, An analysis of the discontinuous Galerkin method for a scalar hyperbolic equation, Mathematics of Computation 47 (1986): 285–312.
  30. G. Richter, An optimal‐order error estimate for the discontinuous Galerkin method, Mathematics of Computation 50 (1988): 75–88.
  31. K. Deng, Z. Xiong, Superconvergence of a discontinuous finite element method for a nonlinear ordinary differential equation, Applied Mathematics and Computation 217 (2010): 3511–3515.
  32. M. Baccouch, Asymptotically exact a posteriori LDG error estimates for one‐dimensional transient convection‐diffusion problems, Applied Mathematics and Computation 226 (2014): 455–483.
  33. M. Baccouch, Global convergence of a posteriori error estimates for a discontinuous Galerkin method for one‐dimensional linear hyperbolic problems, International Journal of Numerical Analysis and Modeling 11 (2014): 172–192.
  34. M. Baccouch, The local discontinuous Galerkin method for the fourth‐order Euler‐Bernoulli partial differential equation in one space dimension. Part II: A posteriori error estimation, Journal of Scientific Computing 60 (2014): 1–34.
  35. M. Baccouch, A superconvergent local discontinuous Galerkin method for the second‐order wave equation on Cartesian grids, Computers and Mathematics with Applications 68 (2014): 1250–1278.
  36. M. Baccouch, Asymptotically exact a posteriori local discontinuous Galerkin error estimates for the one‐dimensional second‐order wave equation, Numerical Methods for Partial Differential Equations 31 (2015): 1461–1491.
  37. M. Baccouch, Asymptotically exact local discontinuous Galerkin error estimates for the linearized Korteweg‐de Vries equation in one space dimension, International Journal of Numerical Analysis and Modeling 12 (2015): 162–195.
  38. M. Baccouch, Superconvergence and a posteriori error estimates of the DG method for scalar hyperbolic problems on Cartesian grids, Applied Mathematics and Computation 265 (2015): 144–162.
  39. M. Baccouch, B. Johnson, A high‐order discontinuous Galerkin method for Itô stochastic ordinary differential equations, Journal of Computational and Applied Mathematics 308 (2016): 138–165.
  40. P. G. Ciarlet, The Finite Element Method for Elliptic Problems, North‐Holland Pub. Co., Amsterdam/New York/Oxford, 1978.
  41. M. Abramowitz, I. A. Stegun, Handbook of Mathematical Functions, Dover, New York, 1965.
  42. M. Ainsworth, J. T. Oden, A Posteriori Error Estimation in Finite Element Analysis, John Wiley, New York, 2000.
  43. L. Schumaker, Spline Functions: Basic Theory, Cambridge University Press, Cambridge/New York, 2007.
  44. K. Segeth, A posteriori error estimation with the finite element method of lines for a nonlinear parabolic equation in one space dimension, Numerische Mathematik 83 (3) (1999): 455–475.
