Open access peer-reviewed chapter

A Numerical Approach to Solving an Inverse Heat Conduction Problem Using the Levenberg-Marquardt Algorithm

Written By

Tao Min, Xing Chen, Yao Sun and Qiang Huang

Reviewed: 09 August 2019 Published: 07 October 2019

DOI: 10.5772/intechopen.89096

From the Edited Volume

Inverse Heat Conduction and Heat Exchangers

Edited by Suvanjan Bhattacharya, Mohammad Moghimi Ardekani, Ranjib Biswas and R. C. Mehta


Abstract

This chapter is intended to provide a numerical algorithm involving the combined use of the Levenberg-Marquardt algorithm and the Galerkin finite element method for estimating the diffusion coefficient in an inverse heat conduction problem (IHCP). In the present study, the functional form of the diffusion coefficient is unknown a priori. The unknown diffusion coefficient is approximated in polynomial form, and the present numerical algorithm is employed to find the solution. Numerical experiments are presented to show the efficiency of the proposed method.

Keywords

  • parabolic equation
  • inverse problem
  • Levenberg-Marquardt

1. Introduction

The numerical solution of the inverse heat conduction problem (IHCP) requires the determination of a diffusion coefficient from additional information. Inverse heat conduction problems have many applications in various branches of science and engineering; mechanical and chemical engineers, mathematicians, and specialists in many other branches of science are interested in inverse problems, each with a different application in mind [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15].

In this work, we propose an algorithm for the numerical solution of an inverse heat conduction problem. The algorithm is based on the Galerkin finite element method and the Levenberg-Marquardt algorithm [16, 17] in conjunction with the least-squares scheme. It is assumed that no prior information is available on the functional form of the unknown diffusion coefficient; the problem is therefore classified as function estimation in inverse calculation. The unknown diffusion coefficient is approximated in polynomial form, and the numerical algorithm is run to determine the polynomial coefficients. The Levenberg-Marquardt optimization is adopted to update the estimated values.

The plan of this paper is as follows: in Section 2, we formulate a one-dimensional IHCP. In Section 3, the numerical algorithm is derived. The calculation of sensitivity coefficients is discussed in Section 4. In Section 5, two examples are given to illustrate some numerical aspects. Section 6 ends this paper with a brief discussion.


2. Description of the problem

The mathematical formulation of a one-dimensional heat conduction problem is given as follows:

$$\frac{\partial u}{\partial t} = \frac{\partial}{\partial x}\left(q(x)\,\frac{\partial u}{\partial x}\right) + f(x,t), \qquad (x,t) \in (0,L) \times (0,T], \tag{1}$$

with the initial condition

$$u(x,0) = u_0(x), \qquad 0 \le x \le L, \tag{2}$$

and Dirichlet boundary conditions

$$u(0,t) = g_1(t), \qquad 0 \le t \le T, \tag{3}$$

$$u(L,t) = g_2(t), \qquad 0 \le t \le T, \tag{4}$$

where $f(x,t)$, $u_0(x)$, $g_1(t)$, $g_2(t)$ and $q(x)$ are known continuous functions. We consider the problem (1)–(4) as a direct problem. It is well known that if $u_0(x)$, $g_1(t)$, $g_2(t)$ are continuous functions and $q(x)$ is known, the problem (1)–(4) has a unique solution.

For the inverse problem, the diffusion coefficient $q(x)$ is regarded as being unknown. In addition, an overspecified condition is also considered available. To estimate the unknown coefficient $q(x)$, additional information on the boundary $x = x_0$, $0 < x_0 < L$, is required. Let the values of $u(x,t)$ taken at $x = x_0$ over the time period $[0,T]$ be denoted by

$$u(x_0, t) = g(t), \qquad 0 \le t \le T. \tag{5}$$

It is evident that for an unknown function $q(x)$, the problem (1)–(4) is under-determined, and we are forced to impose the additional information (5) to provide a unique solution pair $(u(x,t), q(x))$ to the inverse problem (1)–(5).

We note that the measured overspecified condition $u(x_0,t) = g(t)$ will in general contain measurement errors. Therefore, the inverse problem can be stated as follows: by utilizing the above-mentioned measured data, estimate the unknown function $q(x)$.

In this work, a polynomial form is proposed for the unknown function $q(x)$ before performing the inverse calculation. Therefore, $q(x)$ is approximated as

$$q(x) \approx \hat q(x) = p_1 + p_2 x + p_3 x^2 + \cdots + p_{m+1} x^m, \tag{6}$$

where $p_1, p_2, \ldots, p_{m+1}$ are constants which remain to be determined simultaneously. The unknown coefficients $p_1, p_2, \ldots, p_{m+1}$ can be determined by the least-squares method. The error in the estimate,

$$F(p_1, p_2, \ldots, p_{m+1}) = \sum_{i=1}^{n} \left[u(x_0, t_i; p_1, p_2, \ldots, p_{m+1}) - g(t_i)\right]^2, \tag{7}$$

is to be minimized. Here, $u(x_0, t_i; p_1, p_2, \ldots, p_{m+1})$ are the calculated results. These quantities are determined from the solution of the direct problem given previously, using the approximation $\hat q(x)$ for the exact $q(x)$. The estimated values of $p_j$, $j = 1, 2, \ldots, m+1$, are updated until the value of $F(p_1, p_2, \ldots, p_{m+1})$ is minimal. Such a norm can be written as

$$F(P) = \left[U(P) - G\right]^T \left[U(P) - G\right], \tag{8}$$

where $P^T = \left[p_1, p_2, \ldots, p_{m+1}\right]$ denotes the vector of unknown parameters and the superscript $T$ denotes transpose. The vector $\left[U(P) - G\right]^T$ is given by

$$\left[U(P) - G\right]^T = \left[u(x_0,t_1;P) - g(t_1),\; u(x_0,t_2;P) - g(t_2),\; \ldots,\; u(x_0,t_n;P) - g(t_n)\right]. \tag{9}$$

$F(P)$ is a real-valued bounded function defined on a closed bounded domain $D \subset \mathbb{R}^{m+1}$. The function $F(P)$ may have many local minima in $D$, but it has only one global minimum. When $F(P)$ and $D$ have some attractive properties, for instance, when $F(P)$ is a differentiable convex function and $D$ is a convex region, the minimization problem can be solved explicitly by mathematical programming methods.
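As a concrete illustration of Eq. (8), the least-squares norm is just the squared Euclidean norm of the residual vector. A minimal Python sketch (the function name `least_squares_norm` is ours; the vectors $U(P)$ and $G$ are assumed to be already available from the direct solver and the measurements):

```python
import numpy as np

def least_squares_norm(U, G):
    """Least-squares norm F(P) = [U(P) - G]^T [U(P) - G] of Eq. (8);
    U holds the computed values u(x0, t_i; P) and G the measured g(t_i)."""
    r = np.asarray(U, dtype=float) - np.asarray(G, dtype=float)
    return float(r @ r)

print(least_squares_norm([1.0, 2.0, 3.0], [1.0, 2.0, 2.0]))  # 1.0
```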


3. Overview of the Levenberg-Marquardt method

The Levenberg-Marquardt method, originally devised for application to nonlinear parameter estimation problems, has also been successfully applied to the solution of linear ill-conditioned problems. Such a method was first derived by Levenberg (1944) by modifying the ordinary least-squares norm. Later Marquardt (1963) derived basically the same technique by using a different approach. Marquardt’s intention was to obtain a method that would tend to the Gauss method in the neighborhood of the minimum of the ordinary least-squares norm, and would tend to the steepest descent method in the neighborhood of the initial guess used for the iterative procedure.

To minimize the least squares norm (8), we need to equate to zero the derivatives of F P with respect to each of the unknown parameters p 1 p 2 p m + 1 , that is

$$\frac{\partial F(P)}{\partial p_1} = \frac{\partial F(P)}{\partial p_2} = \cdots = \frac{\partial F(P)}{\partial p_{m+1}} = 0. \tag{10}$$

Let us introduce the sensitivity or Jacobian matrix, as follows:

$$J(P) = \frac{\partial U(P)}{\partial P^T} = \begin{bmatrix}
\dfrac{\partial u(x_0,t_1;P)}{\partial p_1} & \dfrac{\partial u(x_0,t_1;P)}{\partial p_2} & \cdots & \dfrac{\partial u(x_0,t_1;P)}{\partial p_{m+1}} \\[2mm]
\dfrac{\partial u(x_0,t_2;P)}{\partial p_1} & \dfrac{\partial u(x_0,t_2;P)}{\partial p_2} & \cdots & \dfrac{\partial u(x_0,t_2;P)}{\partial p_{m+1}} \\[2mm]
\vdots & \vdots & \ddots & \vdots \\[2mm]
\dfrac{\partial u(x_0,t_n;P)}{\partial p_1} & \dfrac{\partial u(x_0,t_n;P)}{\partial p_2} & \cdots & \dfrac{\partial u(x_0,t_n;P)}{\partial p_{m+1}}
\end{bmatrix}, \tag{11}$$

or

$$J_{ij} = \frac{\partial u(x_0,t_i;P)}{\partial p_j}, \qquad i = 1, 2, \ldots, n, \quad j = 1, 2, \ldots, m+1. \tag{12}$$

The elements of the sensitivity matrix are called the sensitivity coefficients. The result of the differentiation in (10) can then be written as follows:

$$2 J^T(P) \left[U(P) - G\right] = 0. \tag{13}$$

For a linear inverse problem, the sensitivity matrix is not a function of the unknown parameters. Eq. (13) can then be solved in explicit form:

$$P = \left(J^T J\right)^{-1} J^T G. \tag{14}$$
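In practice, Eq. (14) is best evaluated by solving the normal equations rather than forming the explicit inverse. A minimal NumPy sketch (the function `linear_estimate` and the small linear test model are illustrative, not from the chapter):

```python
import numpy as np

def linear_estimate(J, G):
    """Evaluate Eq. (14) by solving the normal equations J^T J P = J^T G;
    np.linalg.solve avoids forming the explicit inverse (J^T J)^{-1}."""
    J = np.asarray(J, dtype=float)
    G = np.asarray(G, dtype=float)
    return np.linalg.solve(J.T @ J, J.T @ G)

# Recover p = (2, -1) from exact data G = J p for a linear model p1 + p2 * t.
J = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
p_true = np.array([2.0, -1.0])
G = J @ p_true
print(linear_estimate(J, G))  # close to [2., -1.]
```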

In the case of a nonlinear inverse problem, the matrix $J$ has some functional dependence on the vector $P$. The solution of Eq. (13) then requires an iterative procedure, which is obtained by linearizing the vector $U(P)$ with a Taylor series expansion around the current solution at iteration $k$. Such a linearization is given by

$$U(P) = U(P^k) + J^k \left(P - P^k\right), \tag{15}$$

where U P k and J k are the estimated temperatures and the sensitivity matrix evaluated at iteration k, respectively. Eq. (15) is substituted into (14) and the resulting expression is rearranged to yield the following iterative procedure to obtain the vector of unknown parameters P :

$$P^{k+1} = P^k + \left[(J^k)^T J^k\right]^{-1} (J^k)^T \left[G - U(P^k)\right]. \tag{16}$$

The iterative procedure given by Eq. (16) is called the Gauss method. Such a method is actually an approximation for the Newton (or Newton-Raphson) method. We note that Eq. (14), as well as the implementation of the iterative procedure given by Eq. (16), requires the matrix $J^T J$ to be nonsingular, or

$$\left| J^T J \right| \neq 0, \tag{17}$$

where $|\cdot|$ denotes the determinant.

Formula (17) gives the so-called identifiability condition: if the determinant of $J^T J$ is zero, or even very small, the parameters $p_j$, $j = 1, 2, \ldots, m+1$, cannot be determined by using the iterative procedure of Eq. (16).

Problems satisfying $\left| J^T J \right| \approx 0$ are said to be ill-conditioned. Inverse heat transfer problems are generally very ill-conditioned, especially near the initial guess used for the unknown parameters, creating difficulties in the application of Eqs. (14) or (16). The Levenberg-Marquardt method alleviates such difficulties by utilizing an iterative procedure of the form:

$$P^{k+1} = P^k + \left[(J^k)^T J^k + \mu^k \Omega^k\right]^{-1} (J^k)^T \left[G - U(P^k)\right], \tag{18}$$

where $\mu^k$ is a positive scalar named the damping parameter and $\Omega^k$ is a diagonal matrix.

The purpose of the matrix term μ k Ω k is to damp oscillations and instabilities due to the ill-conditioned character of the problem, by making its components large as compared to those of J T J if necessary. μ k is made large in the beginning of the iterations, since the problem is generally ill-conditioned in the region around the initial guess used for iterative procedure, which can be quite far from the exact parameters. With such an approach, the matrix J T J is not required to be non-singular in the beginning of iterations and the Levenberg-Marquardt method tends to the steepest descent method, that is, a very small step is taken in the negative gradient direction. The parameter μ k is then gradually reduced as the iteration procedure advances to the solution of the parameter estimation problem, and then the Levenberg-Marquardt method tends to the Gauss method given by (16). The following criteria were suggested in literature [13] to stop the iterative procedure of the Levenberg-Marquardt method given by Eq. (18):

$$F(P^{k+1}) < \varepsilon_1, \tag{19}$$

$$\left\| (J^k)^T \left[G - U(P^k)\right] \right\| < \varepsilon_2, \tag{20}$$

$$\left\| P^{k+1} - P^k \right\| < \varepsilon_3, \tag{21}$$

where $\varepsilon_1$, $\varepsilon_2$ and $\varepsilon_3$ are user-prescribed tolerances and $\|\cdot\|$ denotes the Euclidean norm. The criterion given by Eq. (19) tests whether the least-squares norm is sufficiently small, which is expected in the neighborhood of the solution. Similarly, Eq. (20) checks whether the norm of the gradient of $F(P)$ is sufficiently small, since it is expected to vanish at the point where $F(P)$ is minimal. The last criterion, Eq. (21), reflects the fact that changes in the vector of parameters are very small when the method has converged. These three stopping criteria are tested at each iteration, and the iterative procedure of the Levenberg-Marquardt method is stopped if any of them is satisfied.

Different versions of the Levenberg-Marquardt method can be found in the literature, depending on the choice of the diagonal matrix $\Omega^k$ and on the form chosen for the variation of the damping parameter $\mu^k$. In this paper, we choose $\Omega^k$ as

$$\Omega^k = \operatorname{diag}\left[(J^k)^T J^k\right]. \tag{22}$$

Suppose that the vector of temperature measurements $G = \left[g(t_1), g(t_2), \ldots, g(t_n)\right]^T$ is given at times $t_i$, $i = 1, 2, \ldots, n$, and an initial guess $P^0$ is available for the vector of unknown parameters $P$. Choose a value for $\mu^0$, say $\mu^0 = 0.001$, and set $k = 0$. Then,

Step 1. Solve the direct problem (1)–(4) with the available estimate $P^k$ in order to obtain the vector $U(P^k) = \left[u(x_0,t_1;P^k),\, u(x_0,t_2;P^k),\, \ldots,\, u(x_0,t_n;P^k)\right]^T$.

Step 2. Compute F P k from the Eq. (8).

Step 3. Compute the sensitivity matrix J k from (12) and then the matrix Ω k from (22), by using the current value of P k .

Step 4. Solve the following linear system of algebraic equations, obtained from (18), $$\left[(J^k)^T J^k + \mu^k \Omega^k\right] \Delta P^k = (J^k)^T \left[G - U(P^k)\right],$$ in order to compute $\Delta P^k = P^{k+1} - P^k$.

Step 5. Compute the new estimate P k + 1 as P k + 1 = P k + Δ P k .

Step 6. Solve the direct problem (1)–(4) with the new estimate $P^{k+1}$ in order to find $U(P^{k+1})$. Then compute $F(P^{k+1})$.

Step 7. If $F(P^{k+1}) \ge F(P^k)$, replace $\mu^k$ by $10\mu^k$ and return to step 4.

Step 8. If $F(P^{k+1}) < F(P^k)$, accept the new estimate $P^{k+1}$ and replace $\mu^k$ by $0.1\mu^k$.

Step 9. Check the stopping criteria given by (19)–(21). Stop the iterative procedure if any of them is satisfied; otherwise, replace $k$ by $k+1$ and return to step 3.
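Steps 1–9 above can be sketched in a few dozen lines of NumPy. The following is a hedged sketch, not the chapter's implementation: the finite element solver of Section 4 is replaced by user-supplied `forward` and `jacobian` callables, all names are ours, and the exponential test model at the end is illustrative only:

```python
import numpy as np

def levenberg_marquardt(forward, jacobian, G, P0, mu0=1e-3,
                        eps1=1e-12, eps3=1e-10, max_iter=200):
    """Sketch of the iterative procedure of Eq. (18), with Omega^k = diag(J^T J)
    as in Eq. (22). `forward(P)` returns the model predictions U(P) at the
    measurement times and `jacobian(P)` returns the sensitivity matrix J."""
    P = np.asarray(P0, dtype=float)
    G = np.asarray(G, dtype=float)
    mu = mu0
    U = forward(P)
    F = float((U - G) @ (U - G))                    # Eq. (8), step 2
    for _ in range(max_iter):
        J = jacobian(P)                             # step 3
        JTJ = J.T @ J
        Omega = np.diag(np.diag(JTJ))               # Eq. (22)
        rhs = J.T @ (G - U)
        improved = False
        for _ in range(60):                         # steps 4-7: adapt damping
            dP = np.linalg.solve(JTJ + mu * Omega, rhs)
            U_new = forward(P + dP)
            F_new = float((U_new - G) @ (U_new - G))
            if F_new < F:
                improved = True
                break
            mu *= 10.0                              # step 7: damp harder, retry
        if not improved:
            break                                   # no descent direction found
        P, U, F = P + dP, U_new, F_new              # steps 5 and 8
        mu *= 0.1
        if F < eps1 or np.linalg.norm(dP) < eps3:   # Eqs. (19) and (21), step 9
            break
    return P

# Illustrative test model (not from the chapter): fit y = p1 * exp(p2 * t).
t = np.linspace(0.0, 1.0, 20)
def fwd(P):
    return P[0] * np.exp(P[1] * t)
def jac(P):
    return np.column_stack([np.exp(P[1] * t), P[0] * t * np.exp(P[1] * t)])

G = fwd(np.array([2.0, -1.5]))                      # noise-free "measurements"
P_est = levenberg_marquardt(fwd, jac, G, np.array([1.0, 0.0]))
print(P_est)  # close to [2.0, -1.5]
```

The inner loop mirrors steps 4–7 (increase $\mu$ tenfold until the norm decreases); the outer loop mirrors steps 3, 5, 8 and 9.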


4. Calculation of sensitivity coefficients

Generally, there are two approaches to determining the gradient: the first is a discretize-then-differentiate approach, and the second is a differentiate-then-discretize approach.

The first approach is to approximate the gradient of the functional by a finite difference quotient. In general, however, the sensitivities cannot be determined exactly in this way, so this method may lead to larger errors.
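For comparison, the discretize-then-differentiate route can be sketched as a central-difference Jacobian. This is a generic sketch under our own naming (`fd_jacobian`, `forward`); the linear test model at the end is illustrative only:

```python
import numpy as np

def fd_jacobian(forward, P, h=1e-6):
    """Discretize-then-differentiate: approximate the sensitivity coefficients
    J_ij = du(x0, t_i)/dp_j by central differences. The step h trades
    truncation error against round-off error, which is the accuracy limitation
    mentioned above. `forward(P)` is a user-supplied direct-problem solver."""
    P = np.asarray(P, dtype=float)
    m = P.size
    U0 = np.asarray(forward(P), dtype=float)
    J = np.empty((U0.size, m))
    for j in range(m):
        e = np.zeros(m)
        e[j] = h
        J[:, j] = (np.asarray(forward(P + e)) - np.asarray(forward(P - e))) / (2.0 * h)
    return J

# Sanity check against the analytic Jacobian of the model u(t; P) = p1 + p2 * t.
t = np.linspace(0.0, 1.0, 5)
J = fd_jacobian(lambda P: P[0] + P[1] * t, np.array([1.0, 2.0]))
print(np.allclose(J, np.column_stack([np.ones_like(t), t]), atol=1e-8))  # True
```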

Here we use the differentiate-then-discretize approach, which we refer to as the sensitivity equation method. The gradient can be determined more efficiently with the help of the sensitivities

$$u_k = \frac{\partial u}{\partial p_k}, \qquad k = 1, 2, \ldots, m+1. \tag{23}$$

We first differentiate the system (1)–(4) with respect to each of the design parameters $p_1, p_2, \ldots, p_{m+1}$ to obtain the $m+1$ continuous sensitivity systems: for $k = 1, 2, \ldots, m+1$,

$$\begin{cases}
\dfrac{\partial u_k}{\partial t} = \dfrac{\partial}{\partial x}\left[\left(p_1 + p_2 x + \cdots + p_{m+1} x^m\right)\dfrac{\partial u_k}{\partial x}\right] + \dfrac{\partial}{\partial x}\left[x^{k-1}\,\dfrac{\partial u}{\partial x}\right], \\
u_k(x,0) = 0, \qquad u_k(0,t) = 0, \qquad u_k(L,t) = 0.
\end{cases} \tag{24}$$

There are $m+2$ equations in total (the direct problem plus the $m+1$ sensitivity systems); we can assemble them into one system of equations and use the finite element method to solve it. The vector form of the system reads as follows:

$$(P_1)\quad \begin{cases}
\dfrac{\partial U}{\partial t} - \dfrac{\partial \Gamma}{\partial x} = F, \\
U(x,0) = U_0(x), \qquad U(0,t) = G_1(t), \qquad U(L,t) = G_2(t),
\end{cases} \tag{25}$$

where

$$U = \begin{bmatrix} u \\ u_1 \\ u_2 \\ \vdots \\ u_{m+1} \end{bmatrix}, \qquad
\Gamma = \begin{bmatrix}
\hat q(x)\,\dfrac{\partial u}{\partial x} \\[2mm]
\hat q(x)\,\dfrac{\partial u_1}{\partial x} + \dfrac{\partial u}{\partial x} \\[2mm]
\hat q(x)\,\dfrac{\partial u_2}{\partial x} + x\,\dfrac{\partial u}{\partial x} \\[2mm]
\vdots \\[1mm]
\hat q(x)\,\dfrac{\partial u_{m+1}}{\partial x} + x^m\,\dfrac{\partial u}{\partial x}
\end{bmatrix}, \qquad
F = \begin{bmatrix} f(x,t) \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \tag{26}$$

with $\hat q(x) = p_1 + p_2 x + \cdots + p_{m+1} x^m$, and

$$U_0(x) = \left[u_0(x), 0, \ldots, 0\right]^T, \qquad G_1(t) = \left[g_1(t), 0, \ldots, 0\right]^T, \qquad G_2(t) = \left[g_2(t), 0, \ldots, 0\right]^T. \tag{27}$$

We use the Galerkin finite element approximation to discretize problem (25). For this, we multiply Eq. (25) by a test function $v : [0,L] \to \mathbb{R}$, $v \in V_0 \subset H_0^1(0,L)$, and integrate the resulting equation in space from $0$ to $L$. We obtain the following equation:

$$\int_0^L \frac{\partial U(x,t)}{\partial t}\, v(x)\, dx - \int_0^L \frac{\partial \Gamma}{\partial x}\, v(x)\, dx = \int_0^L F(x,t)\, v(x)\, dx, \tag{28}$$

integrating by parts gives

$$\int_0^L \frac{\partial \Gamma}{\partial x}\, v(x)\, dx = \Big[\Gamma\, v(x)\Big]_0^L - \int_0^L \Gamma\, \frac{\partial v(x)}{\partial x}\, dx. \tag{29}$$

We can interchange the time derivative and the integral. Since $v \in V_0$, we have $v(0) = v(L) = 0$. This leads to a problem equivalent to $(P_1)$: for $t > 0$, find $U(x,t)$ satisfying

$$\frac{d}{dt}\int_0^L U(x,t)\, v(x)\, dx + \int_0^L \Gamma\, \frac{\partial v(x)}{\partial x}\, dx = \int_0^L F(x,t)\, v(x)\, dx, \tag{30}$$

for all $v \in V_0 \subset H_0^1(0,L)$. To simplify the notation we use the scalar product in $L^2(0,L)$:

$$(f, g) = \int_0^L f(x)\, g(x)\, dx. \tag{31}$$

We can also define the following bilinear form:

$$a(U, v) = \int_0^L \Gamma\, \frac{\partial v}{\partial x}\, dx = \begin{bmatrix}
\displaystyle\int_0^L \hat q(x)\, \frac{\partial u}{\partial x}\, \frac{\partial v}{\partial x}\, dx \\[3mm]
\displaystyle\int_0^L \left[\hat q(x)\, \frac{\partial u_1}{\partial x} + \frac{\partial u}{\partial x}\right] \frac{\partial v}{\partial x}\, dx \\[3mm]
\displaystyle\int_0^L \left[\hat q(x)\, \frac{\partial u_2}{\partial x} + x\,\frac{\partial u}{\partial x}\right] \frac{\partial v}{\partial x}\, dx \\[3mm]
\vdots \\[1mm]
\displaystyle\int_0^L \left[\hat q(x)\, \frac{\partial u_{m+1}}{\partial x} + x^m\,\frac{\partial u}{\partial x}\right] \frac{\partial v}{\partial x}\, dx
\end{bmatrix}. \tag{32}$$

Finally, with these notations we obtain the weak form of $(P_1)$:

$$(P_2)\quad \begin{cases}
\dfrac{d}{dt}\,(U, v)_{L^2} + a(U, v) = (F, v)_{L^2}, \\
U(x,0) = U_0(x), \qquad U(0,t) = G_1(t), \qquad U(L,t) = G_2(t).
\end{cases} \tag{33}$$

4.1 Space-discretization with the Galerkin method

In this section, we seek a semi-discrete approximation of the weak problem $(P_2)$ using the Galerkin finite element method. This leads to a first-order Cauchy problem in time.

Let $V_h$ be an $(N_x+1)$-dimensional subspace of $V$ and $V_{0,h} = V_h \cap V_0$. Then the following problem is an approximation of the weak problem: find $u_h, u_{1,h}, u_{2,h}, \ldots, u_{m+1,h} \in V_h$ satisfying

$$\begin{cases}
\dfrac{d}{dt}\,(U_h, v_h) + a(U_h, v_h) = (F, v_h), \\
U_h(x,0) = U_{0,h}(x), \qquad U_h(0,t) = G_1(t), \qquad U_h(L,t) = G_2(t),
\end{cases} \tag{34}$$

for all $v_h \in V_{0,h}$, where $U_h = \left[u_h, u_{1,h}, u_{2,h}, \ldots, u_{m+1,h}\right]^T$.

The choice of $V_h$ is completely arbitrary, so we can choose it in whatever way makes the later treatment as easy as possible. For example, we subdivide the interval $[0,L]$ into $N_x$ subintervals of equal length $h$:

$$0 = a_1 < a_2 < \cdots < a_{N_x} < a_{N_x+1} = L, \qquad a_i = (i-1)h, \tag{35}$$

$$V_h = \left\{ v_h \in C^0[0,L] : v_h|_{[a_i, a_{i+1}]} \in P_1,\; i = 1, \ldots, N_x \right\}, \tag{36}$$

$$V_{0,h} = \left\{ v_h \in V_h : v_h(0) = v_h(L) = 0 \right\}. \tag{37}$$
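On this uniform grid, the piecewise-linear ("hat") basis functions defined in Eqs. (38)–(40) below can be evaluated in a few lines. A sketch under our own naming (`hat`), using the 1-based node indexing $a_i = (i-1)h$:

```python
import numpy as np

def hat(i, x, h, Nx):
    """Evaluate the piecewise-linear basis function phi_i of Eqs. (38)-(40)
    on the uniform grid a_i = (i - 1) h, i = 1, ..., Nx + 1 (1-based index).
    The interior functions are full hats; phi_1 and phi_{Nx+1} are the
    half-hats truncated at the boundary nodes."""
    a = (i - 1) * h                                       # node a_i
    y = np.maximum(1.0 - np.abs(np.asarray(x, dtype=float) - a) / h, 0.0)
    return np.where((np.asarray(x) < 0.0) | (np.asarray(x) > Nx * h), 0.0, y)

# On [0, 1] with Nx = 4 (h = 0.25): phi_2 peaks at a_2 = 0.25.
h, Nx = 0.25, 4
print(hat(2, np.array([0.0, 0.25, 0.375, 0.5]), h, Nx))  # values 0, 1, 0.5, 0
```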

Note that the finite dimension allows us to build a finite basis for the corresponding space. In the case of $V_{0,h}$, we have the basis $\{\varphi_i\}_{i=2}^{N_x}$, where

$$\varphi_i(x) = \begin{cases}
0, & x \in [a_1, a_{i-1}], \\[1mm]
\dfrac{x}{h} - (i - 2), & x \in [a_{i-1}, a_i], \\[2mm]
i - \dfrac{x}{h}, & x \in [a_i, a_{i+1}], \\[2mm]
0, & x \in [a_{i+1}, a_{N_x+1}],
\end{cases} \tag{38}$$

while we add for $V_h$ the two functions $\varphi_1$ and $\varphi_{N_x+1}$ defined as:

$$\varphi_1(x) = \begin{cases}
1 - \dfrac{x}{h}, & x \in [a_1, a_2], \\[2mm]
0, & x \in [a_2, a_{N_x+1}],
\end{cases} \tag{39}$$

$$\varphi_{N_x+1}(x) = \begin{cases}
0, & x \in [a_1, a_{N_x}], \\[1mm]
\dfrac{x}{h} - N_x + 1, & x \in [a_{N_x}, a_{N_x+1}],
\end{cases} \tag{40}$$

so that we can write $U_h$ as a linear combination of the basis elements:

$$U_h(x,t) = \sum_{j=1}^{N_x+1} \tilde U_j(t)\, \varphi_j(x), \tag{41}$$

$$U_{0,h}(x) = \sum_{j=1}^{N_x+1} U_0(x_j)\, \varphi_j(x), \tag{42}$$

where $\tilde U_1(t) = G_1(t)$ and $\tilde U_{N_x+1}(t) = G_2(t)$. Using the bilinearity of $a$ and the fact that Eq. (34) holds for each element of the basis $\{\varphi_i\}_{i=2}^{N_x}$, we obtain

$$\sum_{j=1}^{N_x+1} \frac{d\tilde U_j(t)}{dt}\,(\varphi_j, \varphi_i) + \sum_{j=1}^{N_x+1} \tilde U_j(t)\, a(\varphi_j, \varphi_i) = (F, \varphi_i), \tag{43}$$

for $i = 2, \ldots, N_x$. This equation can be written in vector form. For this we define the vectors $u$, $u_0$ and $F$ with components

$$F_i(t) = (F, \varphi_i)_{L^2}, \qquad u_j(t) = \tilde U_j(t), \qquad u_{0,j} = u_0(x_j), \tag{44}$$

and matrices M and A as

$$m_{ij} = (\varphi_i, \varphi_j)_{L^2}, \qquad a_{ij} = a(\varphi_j, \varphi_i). \tag{45}$$

Note that $M, A \in \mathbb{R}^{(N_x-1)\times(N_x+1)}$, $u \in \mathbb{R}^{N_x+1}$, and $F \in \mathbb{R}^{N_x-1}$, so that (43) is equivalent to the Cauchy problem

$$M \frac{du(t)}{dt} + A u(t) = F(t), \qquad u(t_0) = u_0. \tag{46}$$

The Crank-Nicolson method can be applied to (46) at time $t_k$, resulting in

$$M \frac{u^{k+1} - u^k}{\Delta t} + \frac{1}{2} A u^{k+1} + \frac{1}{2} A u^k = \frac{1}{2}\left(F^k + F^{k+1}\right), \tag{47}$$

where $u^k = u(t_k)$, $F^k = F(t_k)$, $k = 0, 1, \ldots$.
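The matrices $M$ and $A$ can be assembled element by element. The following is a sketch under our own naming (`assemble_1d`); for brevity it assembles only the scalar equation $u_t = (q(x) u_x)_x$ with linear elements and midpoint quadrature for the variable coefficient, the full $(m+2)$-component system of (25) being assembled block-wise in the same way:

```python
import numpy as np

def assemble_1d(q, L, Nx):
    """Assemble the mass matrix M (entries (phi_i, phi_j), Eq. (45)) and the
    stiffness matrix A for piecewise-linear elements on a uniform grid, for
    the scalar equation u_t = (q(x) u_x)_x. The coefficient q is evaluated at
    element midpoints (one-point quadrature). Rows corresponding to Dirichlet
    boundary nodes are eliminated afterwards."""
    h = L / Nx
    n = Nx + 1
    M = np.zeros((n, n))
    A = np.zeros((n, n))
    for e in range(Nx):                       # element [a_e, a_{e+1}], 0-based
        qm = q((e + 0.5) * h)
        Me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])   # exact for P1
        Ae = (qm / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        idx = [e, e + 1]
        M[np.ix_(idx, idx)] += Me
        A[np.ix_(idx, idx)] += Ae
    return M, A

M, A = assemble_1d(lambda x: 1.0, 1.0, 4)
print(A[1, 1], M[1, 1])   # interior entries: 2/h = 8.0 and 2h/3 = 0.1666...
```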

Eq. (47) can be written in the simpler form

$$\left(M + \frac{\Delta t}{2} A\right) u^{k+1} = \left(M - \frac{\Delta t}{2} A\right) u^k + \frac{\Delta t}{2}\left(F^k + F^{k+1}\right). \tag{48}$$

The algebraic system (48) is solved by Gaussian elimination.
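The time-stepping of Eq. (48) can be sketched as follows (the name `crank_nicolson` and the scalar test at the end are ours; for the real problem, $M$ and $A$ come from the assembly above and the boundary data enter through $F$):

```python
import numpy as np

def crank_nicolson(M, A, F, u0, dt, nsteps):
    """March Eq. (48): (M + dt/2 A) u^{k+1} = (M - dt/2 A) u^k + dt/2 (F^k + F^{k+1}).
    `F(t)` returns the load vector at time t. The left-hand matrix is constant,
    so its factorization could be reused; np.linalg.solve is kept for clarity."""
    lhs = M + 0.5 * dt * A
    rhs_mat = M - 0.5 * dt * A
    u = np.asarray(u0, dtype=float).copy()
    t = 0.0
    for _ in range(nsteps):
        b = rhs_mat @ u + 0.5 * dt * (F(t) + F(t + dt))
        u = np.linalg.solve(lhs, b)
        t += dt
    return u

# Illustrative scalar test u' = -u, u(0) = 1 (so M = [1], A = [1], F = 0):
u = crank_nicolson(np.array([[1.0]]), np.array([[1.0]]),
                   lambda t: np.zeros(1), np.array([1.0]), 0.01, 100)
print(u[0])  # close to exp(-1) = 0.36788...
```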


5. Numerical experiment

In this section, we demonstrate some numerical results for $(u(x,t), q(x))$ in the inverse problem (1)–(5). The following examples are considered, and the solutions are obtained.

Example 1. Consider (1)–(4) with

$$u(x,0) = \sin x, \qquad 0 \le x \le 1, \tag{49}$$

$$u(0,t) = 0, \qquad 0 \le t \le 1, \tag{50}$$

$$u(1,t) = \sin(1)\, e^{-t}, \qquad 0 \le t \le 1, \tag{51}$$

$$f(x,t) = \left(\frac{x^2}{4} + \frac{x}{2} + 1\right)\sin x\, e^{-t} - \frac{x+1}{2}\cos x\, e^{-t} - \sin x\, e^{-t}, \qquad 0 \le x \le 1,\; 0 \le t \le 1. \tag{52}$$

We obtain the unique exact solution

$$q(x) = 1 + 0.5x + 0.25x^2 \tag{53}$$

and

$$u(x,t) = \sin x\, e^{-t}. \tag{54}$$

We take the observed data g as

$$g(t) = u(0.5, t) = \sin(0.5)\, e^{-t}, \qquad 0 \le t \le 1. \tag{55}$$
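The consistency of the data of Example 1 can be checked symbolically: substituting the exact pair (53)–(54) and the source term (52) into Eq. (1) should give a zero residual. A short SymPy sketch:

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.sin(x) * sp.exp(-t)                      # exact solution, Eq. (54)
q = 1 + x/2 + x**2/4                            # exact coefficient, Eq. (53)
f = ((x**2/4 + x/2 + 1) * sp.sin(x) * sp.exp(-t)
     - (x + 1)/2 * sp.cos(x) * sp.exp(-t)
     - sp.sin(x) * sp.exp(-t))                  # source term, Eq. (52)
residual = sp.diff(u, t) - sp.diff(q * sp.diff(u, x), x) - f   # u_t - (q u_x)_x - f
print(sp.simplify(residual))  # 0
```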

The unknown function $q(x)$ is defined in the following form:

$$\hat q(x) = p_1 + p_2 x + p_3 x^2, \tag{56}$$

where p 1 , p 2 , p 3 are unknown coefficients.

Table 1 shows how the Levenberg-Marquardt algorithm finds the best parameters after 12 iterations when it is initialized at four different points.

| Quantity | Start (0.5, 0.5, 0.5) | Start (1, 1, 1) | Start (10, 10, 10) | Start (50, 50, 50) |
|---|---|---|---|---|
| $p_1$ at iteration 12 | 0.999729028233135 | 0.999729028233183 | 0.999729028233194 | 0.999729028307261 |
| $p_2$ at iteration 12 | 0.499885876453067 | 0.499885876453056 | 0.499885876453057 | 0.499885876454169 |
| $p_3$ at iteration 12 | 0.252009862457275 | 0.252009862457315 | 0.252009862457325 | 0.25200986249336 |
| Error $F$ | $8.7564944405 \times 10^{-14}$ | $8.7564944427 \times 10^{-14}$ | $8.7564944420 \times 10^{-14}$ | $8.7564944420 \times 10^{-14}$ |

Table 1.

Performance of the algorithm when it is run to solve the model using four different parameter guesses.

Figures 1–4 show the fit of the estimated parameters and the rate of convergence.

Figure 1.

All the initial values for the parameters are set to 0.5.

Figure 2.

All the initial values for the parameters are set to 1.

Figure 3.

All the initial values for the parameters are set to 10.

Figure 4.

All the initial values for the parameters are set to 50.

Figures 5–8 show the comparison between the inversion results $\hat q(x)$ and the exact value $q(x)$:

Figure 5.

The comparison chart with all the initial values for the parameters set to 0.5.

Figure 6.

The comparison chart with all the initial values for the parameters set to 1.

Figure 7.

The comparison chart with all the initial values for the parameters set to 10.

Figure 8.

The comparison chart with all the initial values for the parameters set to 50.

Table 2 shows the values of $q(j\Delta x)$ and $u(j\Delta x, 0.5)$ at $x = j\Delta x$, with all the initial values set to 1.

| $j$ | Numerical $q(j\Delta x)$ | Exact $q(j\Delta x)$ | Numerical $u(j\Delta x, 0.5)$ | Exact $u(j\Delta x, 0.5)$ |
|---|---|---|---|---|
| 0 | 0.999729028233183 | 1 | 0 | 0 |
| 1 | 1.05223771450306 | 1.0525 | 0.0605593190239173 | 0.0605520280601669 |
| 2 | 1.10978659802209 | 1.11 | 0.120511797611786 | 0.120499040271796 |
| 3 | 1.17237567879026 | 1.1725 | 0.179257059078521 | 0.179242065904716 |
| 4 | 1.24000495680758 | 1.24 | 0.236207449080344 | 0.236194164064666 |
| 5 | 1.31267443207404 | 1.3125 | 0.290793943250869 | 0.290786288212692 |
| 6 | 1.39038410458965 | 1.39 | 0.342471828361625 | 0.342472971890064 |
| 7 | 1.47313397435441 | 1.4725 | 0.390726114897089 | 0.390737778838824 |
| 8 | 1.56092404136831 | 1.56 | 0.435076630410587 | 0.435098463062163 |
| 9 | 1.65375430563136 | 1.6525 | 0.475082717530532 | 0.475111787267016 |
| 10 | 1.75162476714355 | 1.75 | 0.510347406713368 | 0.510377951544573 |

Table 2.

The values of $q(j\Delta x)$ and $u(j\Delta x, 0.5)$ at $x = j\Delta x$, with all the initial values set to 1.

Example 2. Consider (1)–(4) with

$$u(x,0) = x e^x, \qquad 0 \le x \le 1, \tag{57}$$

$$u(0,t) = t e^{-t}, \qquad 0 \le t \le 1, \tag{58}$$

$$u(1,t) = (1 + t)\, e^{1-t}, \qquad 0 \le t \le 1, \tag{59}$$

$$f(x,t) = (1 - x - t)\, e^{x-t} - (3 + 2x + 2t)\, e^{2x-t}, \qquad 0 \le x \le 1,\; 0 \le t \le 1. \tag{60}$$

We obtain the unique exact solution

$$q(x) = e^x \tag{61}$$

and

$$u(x,t) = (x + t)\, e^{x-t}. \tag{62}$$

We take the observed data g as

$$g(t) = u(0.5, t) = (0.5 + t)\, e^{0.5-t}, \qquad 0 \le t \le 1. \tag{63}$$
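As for Example 1, the data of Example 2 can be checked for consistency; here a numerical residual check with finite differences (all names are ours, and the source term is Eq. (60) in simplified form):

```python
import numpy as np

def u(x, t):
    return (x + t) * np.exp(x - t)              # exact solution, Eq. (62)

def q(x):
    return np.exp(x)                            # exact coefficient, Eq. (61)

def f(x, t):                                    # Eq. (60), simplified form
    return (1 - x - t) * np.exp(x - t) - (3 + 2*x + 2*t) * np.exp(2*x - t)

# Finite-difference residual u_t - (q u_x)_x - f at interior sample points;
# it should vanish up to discretization error if the data are consistent.
h = 1e-5
xs, ts = np.meshgrid(np.linspace(0.2, 0.8, 4), np.linspace(0.2, 0.8, 4))
u_t = (u(xs, ts + h) - u(xs, ts - h)) / (2 * h)
flux = lambda x, t: q(x) * (u(x + h, t) - u(x - h, t)) / (2 * h)
div = (flux(xs + h/2, ts) - flux(xs - h/2, ts)) / h
print(np.max(np.abs(u_t - div - f(xs, ts))) < 1e-4)  # True
```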

The unknown function $q(x)$ is defined in the following form:

$$\hat q(x) = p_1 + p_2 x + p_3 x^2 + p_4 x^3 + p_5 x^4 + p_6 x^5 + p_7 x^6 + p_8 x^7,$$

where $p_1, p_2, \ldots, p_7, p_8$ are unknown coefficients.

Table 3 shows how the Levenberg-Marquardt algorithm finds the best parameters after 20 iterations when it is initialized at four different points.

| Quantity | Start all 0.1 | Start all 0.5 | Start all 1 | Start all 2 |
|---|---|---|---|---|
| $p_1$ at iteration 20 | 1.01536263526644 | 1.01536263500695 | 1.01536263525763 | 1.01536263525905 |
| $p_2$ at iteration 20 | 0.896348846894057 | 0.896348850692318 | 0.896348847022403 | 0.896348846999736 |
| $p_3$ at iteration 20 | 0.954285303464511 | 0.954285278486704 | 0.954285302637587 | 0.954285302790922 |
| $p_4$ at iteration 20 | −0.890298938193057 | −0.890298849338373 | −0.890298935334171 | −0.890298935876618 |
| $p_5$ at iteration 20 | 1.40131927153131 | 1.40131909032315 | 1.40131926588117 | 1.40131926696099 |
| $p_6$ at iteration 20 | −0.871276408882294 | −0.871276197318301 | −0.871276402487896 | −0.87127640370648 |
| $p_7$ at iteration 20 | 0.183785623507722 | 0.183785492186491 | 0.18378561965322 | 0.183785620380814 |
| $p_8$ at iteration 20 | 0.0359103726343979 | 0.0359104061952515 | 0.0359103735933848 | 0.0359103734148875 |
| Error $F$ | $7.89749200363512 \times 10^{-11}$ | $7.89749200363504 \times 10^{-11}$ | $7.89749200353888 \times 10^{-11}$ | $7.8974920035389 \times 10^{-11}$ |

Table 3.

Performance of the algorithm when it is run to solve the model using four different parameter guesses.

Figures 9–12 show the fit of the estimated parameters and the rate of convergence.

Figure 9.

All the initial values for the parameters are set to 0.1.

Figure 10.

All the initial values for the parameters are set to 0.5.

Figure 11.

All the initial values for the parameters are set to 1.

Figure 12.

All the initial values for the parameters are set to 2.

Figures 13–16 show the comparison between the inversion results $\hat q(x)$ and the exact value $q(x)$:

Figure 13.

The comparison chart with all the initial values for the parameters set to 0.1.

Figure 14.

The comparison chart with all the initial values for the parameters set to 0.5.

Figure 15.

The comparison chart with all the initial values for the parameters set to 1.

Figure 16.

The comparison chart with all the initial values for the parameters set to 2.

Table 4 shows the values of $q(j\Delta x)$ and $u(j\Delta x, 0.5)$ at $x = j\Delta x$, with all the initial values set to 1.

| $j$ | Numerical $q(j\Delta x)$ | Exact $q(j\Delta x)$ | Numerical $u(j\Delta x, 0.5)$ | Exact $u(j\Delta x, 0.5)$ |
|---|---|---|---|---|
| 0 | 1.01536263525763 | 1 | 0.303342962644088 | 0.303265329856317 |
| 1 | 1.11378168059013 | 1.10517091807565 | 0.329201964677126 | 0.329286981656416 |
| 2 | 1.22765694959399 | 1.22140275816017 | 0.347492882224886 | 0.347609712653987 |
| 3 | 1.35549021305874 | 1.349858807576 | 0.359347568702678 | 0.359463171293777 |
| 4 | 1.4973722149265 | 1.49182469764127 | 0.365808711321159 | 0.365912693766539 |
| 5 | 1.65432828415206 | 1.64872127070013 | 0.367792378208857 | 0.367879441171442 |
| 6 | 1.82785056869393 | 1.82211880039051 | 0.366091218454182 | 0.366158192067887 |
| 7 | 2.01963499046464 | 2.01375270747048 | 0.361387761473796 | 0.361433054294643 |
| 8 | 2.23154102006879 | 2.22554092849247 | 0.354267869273063 | 0.354291330944216 |
| 9 | 2.46579237015687 | 2.45960311115695 | 0.345233023618059 | 0.345235749518249 |
| 10 | 2.72543670622333 | 2.71828182845905 | 0.334712604803175 | 0.334695240222645 |

Table 4.

The values of $q(j\Delta x)$ and $u(j\Delta x, 0.5)$ at $x = j\Delta x$, with all the initial values set to 1.


6. Conclusions

A numerical method to estimate the temperature $u(x,t)$ and the coefficient $q(x)$ is proposed for an IHCP, and the following results are obtained.

  1. The present study successfully applies the numerical method involving the Levenberg-Marquardt algorithm in conjunction with the Galerkin finite element method to an IHCP.

  2. From the illustrated examples, it can be seen that the proposed numerical method is efficient and accurate in estimating the temperature $u(x,t)$ and the coefficient $q(x)$.


Acknowledgments

The work of the authors is supported by the Special Funds of the National Natural Science Foundation of China (Nos. 51190093 and 51179151). The authors would like to thank the referees for constructive suggestions and comments.


Conflict of interests

The authors declare that there is no conflict of interests regarding the publication of this article.

References

  1. Shidfar A, Karamali GR. Numerical solution of inverse heat conduction problem with nonstationary measurements. Applied Mathematics and Computation. 2005;168(1):540-548
  2. Shidfar A, Karamali GR, Damirchi J. An inverse heat conduction problem with a nonlinear source term. Nonlinear Analysis: Theory, Methods & Applications. 2006;65(3):615-621
  3. Shidfar A, Pourgholi R. Numerical approximation of solution of an inverse heat conduction problem based on Legendre polynomials. Applied Mathematics and Computation. 2006;175(2):1366-1374
  4. Shidfar A, Azary H. An inverse problem for a nonlinear diffusion equation. Nonlinear Analysis: Theory, Methods & Applications. 1997;28(4):589-593
  5. Kurpisza K, Nowaka AJ. BEM approach to inverse heat conduction problems. Engineering Analysis with Boundary Elements. 1992;10(4):291-297
  6. Han H, Ingham DB, Yuan Y. The boundary-element method for the solution of the backward heat conduction equation. Journal of Computational Physics. 1995;116(2):292-299
  7. Skorek J. Applying the least squares adjustment technique for solving inverse heat conduction problems. In: Taylor C, editor. Proceedings of the 8th Conference on Numerical Methods in Laminar and Turbulent Flow. Swansea: Pineridge Press; 1993. pp. 189-198
  8. Pasquetti R, Le Niliot C. Boundary element approach for inverse heat conduction problems: Application to a bidimensional transient numerical experiment. Numerical Heat Transfer, Part B. 1991;20(2):169-189
  9. Ingham DB, Yuan Y. The solution of a nonlinear inverse problem in heat transfer. IMA Journal of Applied Mathematics. 1993;50(2):113-132
  10. Hensel E. Inverse Theory and Applications for Engineers. Englewood Cliffs, NJ: Prentice Hall; 1991. ISBN: 0135034590
  11. Özisik MN. Inverse Heat Transfer: Fundamentals and Applications. New York, USA: Taylor and Francis; 2000. ISBN: 1-56032-838-X
  12. Pourgholi R, Azizi N, Gasimov YS, Aliev F, Khala HK. Removal of numerical instability in the solution of an inverse heat conduction problem. Communications in Nonlinear Science and Numerical Simulation. 2009;14(6):2664-2669
  13. Shidfar A, Pourgholi R, Ebrahimi M. A numerical method for solving of a nonlinear inverse diffusion problem. Computers and Mathematics with Applications. 2006;52(6-7):1021-1030
  14. Wang J, Zabaras N. A Bayesian inference approach to the inverse heat conduction problem. International Journal of Heat and Mass Transfer. 2004;47(17-18):3927-3941
  15. Dehghan M. Determination of an unknown parameter in a semi-linear parabolic equation. Mathematical Problems in Engineering. 2002;8(2):111-122
  16. Lihua J, Changfeng M. On L-M method for nonlinear equations. Journal of Mathematics, Wuhan University. 2009;29(3):253-259
  17. Chen P. Why not use the Levenberg-Marquardt method for fundamental matrix estimation? IET Computer Vision. 2010;4(4):286-288
