Performance of the algorithm when it is run to solve the model using three different parameter guesses.
This chapter presents a numerical algorithm combining the Levenberg-Marquardt algorithm and the Galerkin finite element method for estimating the diffusion coefficient in an inverse heat conduction problem (IHCP). In the present study, the functional form of the diffusion coefficient is unknown a priori. The unknown diffusion coefficient is approximated in polynomial form, and the present numerical algorithm is employed to find the solution. Numerical experiments are presented to show the efficiency of the proposed method.
- parabolic equation
- inverse problem
The numerical solution of the inverse heat conduction problem (IHCP) requires the determination of the diffusion coefficient from additional information. Inverse heat conduction problems have many applications in various branches of science and engineering; mechanical and chemical engineers, mathematicians, and specialists in many other fields are interested in inverse problems, each with a different application in mind [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15].
In this work, we propose an algorithm for the numerical solution of an inverse heat conduction problem. The algorithm is based on the Galerkin finite element method and the Levenberg-Marquardt algorithm [16, 17] in conjunction with the least-squares scheme. It is assumed that no prior information is available on the functional form of the unknown diffusion coefficient; the problem is therefore classified as function estimation in inverse calculation. The numerical algorithm is run to recover the unknown diffusion coefficient, which is approximated in polynomial form, and the Levenberg-Marquardt optimization is adopted to modify the estimated values.
The plan of this paper is as follows: in Section 2, we formulate a one-dimensional IHCP. In Section 3, the numerical algorithm is derived. Calculation of the sensitivity coefficients is discussed in Section 4. Two examples are given in Section 5 to illustrate the numerical aspects. Section 6 ends this paper with a brief conclusion.
2. Description of the problem
The mathematical formulation of a one-dimensional heat conduction problem is given as follows:
with the initial condition
and Dirichlet boundary conditions
where the initial and boundary data are continuous known functions. We consider problem (1)–(4) as a direct problem. It is well known that if these data are continuous functions and the diffusion coefficient is known, problem (1)–(4) has a unique solution.
For the inverse problem, the diffusion coefficient is regarded as being unknown. In addition, an overspecified condition is also considered available. To estimate the unknown coefficient, additional information on the boundary is required. Let the measurements taken over the time period be denoted by
It is evident that for an unknown function , the problem (1)–(4) is under-determined and we are forced to impose additional information (5) to provide a unique solution pair to the inverse problem (1)–(5).
We note that the measured overspecified condition will in general contain measurement errors. Therefore, the inverse problem can be stated as follows: by utilizing the above-mentioned measured data, estimate the unknown function.
In this work, a polynomial form is proposed for the unknown function before performing the inverse calculation. Therefore, it is approximated as
where the coefficients are constants which remain to be determined simultaneously. The unknown coefficients can be determined by the least-squares method. The error in the estimate
is to be minimized. Here, the calculated results are determined from the solution of the direct problem given previously, using an approximation for the exact coefficient. The estimated values are refined until the norm is minimized. Such a norm can be written as
where denotes the vector of unknown parameters and the superscript above denotes transpose. The vector is given by
The norm is a real-valued bounded function defined on a closed bounded domain. The function may have many local minima, but it has only one global minimum. When the function and the domain have certain attractive properties, for instance, when the function is differentiable and concave and the domain is a convex region, a local maximum is also the global maximum, and the problem can be solved explicitly by mathematical programming methods.
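As a concrete illustration, the least-squares norm above is simply a sum of squared residuals. A minimal sketch in Python (the names `Y` for the measured and `T` for the computed temperatures follow the text; the function itself is our illustration, not the paper's code):

```python
import numpy as np

def least_squares_norm(Y, T):
    """Sum of squared residuals S = (Y - T)^T (Y - T) between the
    measured temperatures Y and the computed temperatures T."""
    r = np.asarray(Y, dtype=float) - np.asarray(T, dtype=float)
    return float(r @ r)

# Example: three measurements, the last computed value off by 1.
S = least_squares_norm([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])  # S = 1.0
```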
3. Overview of the Levenberg-Marquardt method
The Levenberg-Marquardt method, originally devised for application to nonlinear parameter estimation problems, has also been successfully applied to the solution of linear ill-conditioned problems. Such a method was first derived by Levenberg (1944) by modifying the ordinary least-squares norm. Later Marquardt (1963) derived basically the same technique by using a different approach. Marquardt’s intention was to obtain a method that would tend to the Gauss method in the neighborhood of the minimum of the ordinary least-squares norm, and would tend to the steepest descent method in the neighborhood of the initial guess used for the iterative procedure.
To minimize the least-squares norm (8), we need to equate to zero the derivatives of the norm with respect to each of the unknown parameters, that is,
Let us introduce the sensitivity or Jacobian matrix, as follows:
The elements of the sensitivity matrix are called the sensitivity coefficients. The result of the differentiation (10) can be written as follows:
For a linear inverse problem, the sensitivity matrix is not a function of the unknown parameters. Eq. (13) can then be solved in explicit form:
In the case of a nonlinear inverse problem, the sensitivity matrix has some functional dependence on the vector of unknown parameters. The solution of Eq. (13) then requires an iterative procedure, which is obtained by linearizing the vector of estimated temperatures with a Taylor series expansion around the current solution at iteration k. Such a linearization is given by
where and are the estimated temperatures and the sensitivity matrix evaluated at iteration k, respectively. Eq. (15) is substituted into (14) and the resulting expression is rearranged to yield the following iterative procedure to obtain the vector of unknown parameters :
The iterative procedure given by Eq. (16) is called the Gauss method. Such a method is actually an approximation for the Newton (or Newton-Raphson) method. We note that Eq. (14), as well as the implementation of the iterative procedure given by Eq. (16), requires the matrix to be nonsingular, or
where is the determinant.
Formula (17) gives the so-called identifiability condition: if the determinant is zero, or even very small, the parameters cannot be determined by using the iterative procedure of Eq. (16).
Problems satisfying this condition are said to be ill-conditioned. Inverse heat transfer problems are generally very ill-conditioned, especially near the initial guess used for the unknown parameters, creating difficulties in the application of Eqs. (14) or (16). The Levenberg-Marquardt method alleviates such difficulties by utilizing an iterative procedure of the form:
where is a positive scalar named damping parameter and is a diagonal matrix.
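One iteration of Eq. (18) amounts to solving a damped normal-equations system. The following sketch assumes NumPy; `J` is the sensitivity matrix, `r` the residual vector, `mu` the damping parameter, and `Omega` the diagonal damping matrix, taken here as the identity when not supplied (an assumption for illustration):

```python
import numpy as np

def lm_increment(J, r, mu, Omega=None):
    """Solve (J^T J + mu * Omega) dP = J^T r for the parameter
    increment dP of one Levenberg-Marquardt iteration."""
    J = np.asarray(J, dtype=float)
    A = J.T @ J
    if Omega is None:
        Omega = np.eye(A.shape[0])  # identity damping matrix
    return np.linalg.solve(A + mu * Omega, J.T @ np.asarray(r, dtype=float))
```

For mu = 0 this reduces to the Gauss step of Eq. (16); for large mu the increment becomes a small step along the negative gradient direction, as discussed below.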
The purpose of the matrix term is to damp oscillations and instabilities due to the ill-conditioned character of the problem, by making its components large compared to those of the other term if necessary. The damping parameter is made large at the beginning of the iterations, since the problem is generally ill-conditioned in the region around the initial guess, which can be quite far from the exact parameters. With such an approach, the matrix is not required to be nonsingular at the beginning of the iterations, and the Levenberg-Marquardt method tends to the steepest descent method, that is, a very small step is taken in the negative gradient direction. The damping parameter is then gradually reduced as the iterative procedure advances to the solution of the parameter estimation problem, and the Levenberg-Marquardt method then tends to the Gauss method given by (16). The following criteria have been suggested in the literature to stop the iterative procedure of the Levenberg-Marquardt method given by Eq. (18):
where and are user prescribed tolerances and denotes the Euclidean norm. The criterion given by Eq. (19) tests if the least squares norm is sufficiently small, which is expected in the neighborhood of the solution for the problem. Similarly, Eq. (20) checks if the norm of the gradient of is sufficiently small, since it is expected to vanish at the point where is minimum. The last criterion given by Eq. (21) results from the fact that changes in the vector of parameters are very small when the method has converged. Generally, these three stopping criteria need to be tested and the iterative procedure of the Levenberg-Marquardt method is stopped if any of them is satisfied.
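The three tests can be collected into one predicate. A sketch (the default tolerance values are our assumptions, not the paper's):

```python
def stop_iteration(S_new, grad_norm, step_norm,
                   eps1=1e-12, eps2=1e-8, eps3=1e-8):
    """Return True if any of the three criteria (19)-(21) holds:
    small least-squares norm, small gradient norm, or small change
    in the parameter vector. eps1..eps3 are user-prescribed tolerances."""
    return S_new < eps1 or grad_norm < eps2 or step_norm < eps3
```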
Different versions of the Levenberg-Marquardt method can be found in the literature, depending on the choice of the diagonal matrix and on the form chosen for the variation of the damping parameter. In this paper, we choose the diagonal matrix as
Suppose that the vector of temperature measurements is given at the measurement times, and that an initial guess is available for the vector of unknown parameters. Choose an initial value for the damping parameter and set the iteration counter to zero. Then,
Step 2. Compute the least-squares norm from Eq. (8).
Step 4. Solve the following linear system of algebraic equations, obtained from (18), in order to compute the parameter increment:
Step 5. Compute the new estimate as .
Step 7. If the new norm is not smaller than the previous one, increase the damping parameter and return to step 4.
Step 8. If the new norm is smaller, accept the new estimate and decrease the damping parameter.
Step 9. Check the stopping criteria given by Eqs. (19)–(21). Stop the iterative procedure if any of them is satisfied; otherwise, increase the iteration counter by one and return to step 3.
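The steps above can be sketched as a single routine. This is our own minimal reading of the procedure, assuming `model(P)` returns the computed temperatures, `jacobian(P)` the sensitivity matrix, the damping matrix is the identity, and the damping parameter is scaled by a factor of 10 (a common, but here assumed, choice):

```python
import numpy as np

def levenberg_marquardt(model, jacobian, Y, P0, mu=1e-3, max_iter=100,
                        eps1=1e-12, eps2=1e-8, eps3=1e-8):
    """Estimate the parameter vector P by minimizing ||Y - model(P)||^2
    with the Levenberg-Marquardt iteration of Eq. (18)."""
    P = np.asarray(P0, dtype=float)
    r = np.asarray(Y, dtype=float) - model(P)
    S = r @ r
    for _ in range(max_iter):
        J = jacobian(P)                    # sensitivity matrix at current P
        dP = np.linalg.solve(J.T @ J + mu * np.eye(P.size), J.T @ r)
        P_new = P + dP                     # step 5: new estimate
        r_new = np.asarray(Y, dtype=float) - model(P_new)
        S_new = r_new @ r_new
        if S_new >= S:                     # step 7: norm did not decrease
            mu *= 10.0                     # increase damping and retry
            continue
        mu *= 0.1                          # step 8: accept estimate, relax damping
        P, r, S = P_new, r_new, S_new
        grad = J.T @ r                     # approximate gradient at the new point
        if S < eps1 or np.linalg.norm(grad) < eps2 or np.linalg.norm(dP) < eps3:
            break                          # step 9: stopping criteria (19)-(21)
    return P
```

For a linear model the routine converges to the least-squares solution in a handful of iterations; for the nonlinear IHCP each call to `model` would involve a direct-problem solve.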
4. Calculation of sensitivity coefficients
Generally, there are two approaches for determining the gradient: the first is a discretize-then-differentiate approach and the second is a differentiate-then-discretize approach.
The first approach is to approximate the gradient of the functional by a finite difference quotient approximation; but in general we cannot determine the sensitivities exactly, so this method may lead to larger errors.
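For reference, this finite-difference variant can be sketched as follows (the step size `h` and all names are our assumptions; each column of the sensitivity matrix is a forward-difference quotient):

```python
import numpy as np

def fd_sensitivities(model, P, h=1e-6):
    """Approximate the sensitivity matrix J[i, j] = dT_i / dP_j by
    forward differences; model(P) returns the computed temperatures."""
    P = np.asarray(P, dtype=float)
    T0 = np.asarray(model(P), dtype=float)
    J = np.empty((T0.size, P.size))
    for j in range(P.size):
        Ph = P.copy()
        Ph[j] += h                 # perturb one parameter at a time
        J[:, j] = (np.asarray(model(Ph), dtype=float) - T0) / h
    return J
```

The truncation and cancellation errors of the difference quotient are precisely the inexactness the text refers to.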
Here we use the differentiate-then-discretize approach, which we refer to as the sensitivity equation method. With this method, the sensitivities can be determined more efficiently.
Since there are several equations, we can combine them into one system of equations and use the finite element method to solve it. Here, we give the vector form of the equation as follows:
We use the Galerkin finite element method for discretizing problem (25). For this, we multiply Eq. (25) by a test function and integrate the resulting equation over the spatial domain. We obtain the following equation:
integrating by parts gives
We can interchange the time derivative and the integral; the boundary terms vanish because the test function vanishes at the endpoints. This leads to an equivalent problem: find a solution satisfying
for all test functions. To simplify the notation, we use the scalar product in
We can also define the following bilinear form:
With these notations, we finally obtain the weak form of the problem:
4.1 Space-discretization with the Galerkin method
In this section, we seek a semi-discrete approximation of the weak problem using the Galerkin finite element method. This leads to a first-order Cauchy problem in time.
Let the approximation space be a finite-dimensional subspace. Then the following problem is an approximation of the weak problem: find the semi-discrete solution that satisfies
for all elements of the subspace, with the discrete initial condition given accordingly.
The choice of the subspace is completely arbitrary, so we can choose it so that the later treatment is as easy as possible. For example, we subdivide the interval into subintervals of equal length:
Note that the finite dimension allows us to build a finite basis for the corresponding space. In this case, we have:
to which we add the two boundary functions defined as:
so that we can write the approximate solution as a linear combination of the basis elements:
where the coefficients are time-dependent. Using the bilinearity of the form and the fact that Eq. (34) is valid for each element of the basis, we obtain
This equation can be written in vector form. For this, we define the vectors with components
and the matrices as
Note that the mass matrix is invertible, so (43) is equivalent to the Cauchy problem
The Crank-Nicolson method can be applied to (46) at each time level, resulting in
where , .
Eq. (47) can be written in the simple form
The algebraic system (48) is solved by Gaussian elimination.
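To make the discretization concrete, the following sketch assembles the standard tridiagonal mass and stiffness matrices for piecewise-linear (hat-function) elements on a uniform grid and advances the semi-discrete system with Crank-Nicolson steps. This is a generic constant-coefficient illustration under our own assumptions, not the paper's exact matrices:

```python
import numpy as np

def fem_matrices(n, length=1.0):
    """Mass matrix M and stiffness matrix K for hat functions on a
    uniform grid with n interior nodes and zero Dirichlet boundaries."""
    h = length / (n + 1)
    M = np.zeros((n, n))
    K = np.zeros((n, n))
    for i in range(n):
        M[i, i] = 2.0 * h / 3.0
        K[i, i] = 2.0 / h
        if i + 1 < n:
            M[i, i + 1] = M[i + 1, i] = h / 6.0
            K[i, i + 1] = K[i + 1, i] = -1.0 / h
    return M, K

def crank_nicolson(M, K, u0, f, dt, nsteps):
    """March M u' + K u = f via
    (M + dt/2 K) u^{n+1} = (M - dt/2 K) u^n + dt f;
    the load vector f is taken constant in time for this sketch."""
    A = M + 0.5 * dt * K
    B = M - 0.5 * dt * K
    u = np.asarray(u0, dtype=float)
    for _ in range(nsteps):
        u = np.linalg.solve(A, B @ u + dt * np.asarray(f, dtype=float))
    return u
```

Each time step solves one tridiagonal system; in the text this is done by Gaussian elimination, delegated here to `np.linalg.solve`.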
5. Numerical experiments
We obtain the unique exact solution
We take the observed data as
The unknown function is assumed to have the following form
where the coefficients are unknown.
Table 1 shows how the Levenberg-Marquardt algorithm can find the best parameters after 12 iterations when it is initialized in four different points.
| Starting point | 0.5, 0.5, 0.5 | 1, 1, 1 | 10, 10, 10 | 50, 50, 50 |
| Error F | 8.7564944405 × 10−14 | 8.7564944427 × 10−14 | 8.7564944420 × 10−14 | 8.7564944420 × 10−14 |
Table 2 shows the values of the parameters and the norm when all the initial values are set to 1.
We obtain the unique exact solution
We take the observed data as
The unknown function is assumed to have the following form
where the coefficients are unknown.
Table 3 shows how the Levenberg-Marquardt algorithm can find the best parameters after 20 iterations when it is initialized in four different points.
| Starting point | 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1 | 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5 | 1, 1, 1, 1, 1, 1, 1, 1 | 2, 2, 2, 2, 2, 2, 2, 2 |
Table 4 shows the values of the parameters and the norm when all the initial values are set to 1.
A numerical method to estimate the temperature and the diffusion coefficient is proposed for an IHCP, and the following results are obtained.
The present study successfully applies the numerical method involving the Levenberg-Marquardt algorithm in conjunction with the Galerkin finite element method to an IHCP.
From the illustrated examples, it can be seen that the proposed numerical method is efficient and accurate in estimating the temperature and the coefficient.
The work of the author is supported by the Special Funds of the National Natural Science Foundation of China (Nos. 51190093 and 51179151). The author would like to thank the referees for constructive suggestions and comments.
Conflict of interests
The authors declare that there is no conflict of interests regarding the publication of this article.