Open access peer-reviewed chapter

Unconstrained Optimization Methods: Conjugate Gradient Methods and Trust-Region Methods

By Snezana S. Djordjevic

Reviewed: January 14th 2019. Published: February 19th 2019

DOI: 10.5772/intechopen.84374

Downloaded: 858


Here, we consider two important classes of unconstrained optimization methods: conjugate gradient methods and trust region methods. Both classes remain of lasting interest. First, we consider conjugate gradient methods and illustrate the practical behavior of several of them. Then, we study trust region methods. For both classes, we analyze some recent results.


  • conjugate gradient method
  • hybrid conjugate gradient method
  • three-term conjugate gradient method
  • modified conjugate gradient method
  • trust region methods

1. Introduction

Recall the unconstrained optimization problem, which we can present as

    min f(x), x ∈ R^n,                                    (1)

where f: R^n → R is a smooth function.

Here, we consider two classes of unconstrained optimization methods: conjugate gradient methods and trust region methods. Both are designed to solve the unconstrained optimization problem (1).

In this chapter, we first consider the conjugate gradient methods and then study trust region methods, trying to present some of the most recent results in these areas.

2. Conjugate gradient method (CG for short)

The conjugate gradient method lies between the steepest descent method and the Newton method.

In fact, the conjugate gradient method deflects the direction of the steepest descent method by adding to it a positive multiple of the direction used in the previous step.

The restarting and the preconditioning are very important to improve the conjugate gradient method [47].

Some well-known CG methods are given in [12, 19, 20, 23, 24, 31, 39, 40, 49]; among them are the Fletcher-Reeves (FR), Polak-Ribiere-Polyak (PRP), Hestenes-Stiefel (HS), and Dai-Yuan (DY) methods, with the parameters

    β_k^FR = ||g_k||^2 / ||g_{k-1}||^2,    β_k^PRP = g_k^T y_{k-1} / ||g_{k-1}||^2,

    β_k^HS = g_k^T y_{k-1} / (d_{k-1}^T y_{k-1}),    β_k^DY = ||g_k||^2 / (d_{k-1}^T y_{k-1}),

where g_k = ∇f(x_k), y_{k-1} = g_k − g_{k-1}, and d_{k-1} is the previous search direction.
Consider the positive definite quadratic function

    f(x) = (1/2) x^T G x + b^T x + c,                     (2)

where G is an n×n symmetric positive definite matrix, b ∈ R^n, and c is a real number.

Theorem 1.2.1. [47] (Property theorem of conjugate gradient method) For the positive definite quadratic function (2), the FR conjugate gradient method with the exact line search terminates after m ≤ n steps, and the following properties hold for all i, 0 ≤ i ≤ m:

    d_i^T G d_j = 0,  j = 0, 1, …, i − 1,

    g_i^T g_j = 0,  j = 0, 1, …, i − 1,

where m is the number of distinct eigenvalues of G.
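The property theorem can be illustrated numerically. The sketch below (the test data are constructed here purely for illustration, not taken from the chapter) runs the CG method with the exact line search on a quadratic of form (2) whose matrix G has only m = 2 distinct eigenvalues, and checks termination within m steps as well as the G-conjugacy of the generated directions.

```python
import numpy as np

# Numerical illustration of Theorem 1.2.1: on a quadratic (2) whose matrix G
# has m = 2 distinct eigenvalues, CG with the exact line search stops within
# m steps, and the generated directions are mutually G-conjugate.
def cg_quadratic(G, b, x0, tol=1e-10):
    x = x0.astype(float)
    g = G @ x + b                        # gradient of 0.5*x'Gx + b'x + c
    d = -g
    dirs = []
    while np.linalg.norm(g) > tol:
        t = (g @ g) / (d @ G @ d)        # exact line search step-size
        x = x + t * d
        g_new = G @ x + b
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves parameter
        dirs.append(d)
        d = -g_new + beta * d
        g = g_new
    return x, dirs

G = np.diag([1.0, 1.0, 4.0, 4.0])        # two distinct eigenvalues: 1 and 4
b = np.array([1.0, -2.0, 3.0, 0.5])
x, dirs = cg_quadratic(G, b, np.zeros(4))
print(len(dirs))                         # 2: terminates after m = 2 steps
print(abs(dirs[0] @ (G @ dirs[1])))      # ~0: directions are G-conjugate
```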

Now, we give the algorithm of conjugate gradient method.

Algorithm 1.2.1. (CG method).

Assumptions: ε > 0 and x_0 ∈ R^n. Let k = 0, t_0 = 0, d_{−1} = 0, d_0 = −g_0, β_{−1} = 0, and β_0 = 0.

Step 1. If ||g_k|| ≤ ε, then STOP.

Step 2. Calculate the step-size t_k by a line search.

Step 3. Calculate β_k by any of the conjugate gradient formulas.

Step 4. Calculate d_k = −g_k + β_{k−1} d_{k−1}.

Step 5. Set x_{k+1} = x_k + t_k d_k.

Step 6. Set k = k + 1 and go to Step 1.
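A minimal sketch of Algorithm 1.2.1 in Python follows (an illustration, not from the chapter): β_k is taken as the PRP parameter and the line search is a simple backtracking Armijo rule; both choices are assumptions, since the algorithm leaves them open. A restart to −g_k is added whenever d_k fails to be a descent direction.

```python
import numpy as np

# Sketch of Algorithm 1.2.1 with illustrative choices: PRP parameter and
# backtracking Armijo line search, plus a restart safeguard.
def cg_minimize(f, grad, x0, eps=1e-6, max_iter=500):
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:               # Step 1: stopping test
            break
        if g @ d >= 0:                             # safeguard: restart
            d = -g
        t = 1.0                                    # Step 2: Armijo backtracking
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x = x + t * d                              # Step 5: new iterate
        g_new = grad(x)
        beta = g_new @ (g_new - g) / (g @ g)       # Step 3: PRP formula
        d = -g_new + beta * d                      # Step 4: new direction
        g = g_new
    return x

# Illustrative test function with minimizer (1, -2).
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])
x_star = cg_minimize(f, grad, np.array([5.0, 5.0]))
print(np.round(x_star, 3))                         # ≈ [1. -2.]
```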

2.1 Convergence of conjugate gradient methods

Theorem 1.2.2. [47] (Global convergence of FR conjugate gradient method) Suppose that f: R^n → R is continuously differentiable on a bounded level set

    L = {x ∈ R^n : f(x) ≤ f(x_0)},

and let the FR method be implemented with the exact line search. Then the produced sequence {x_k} has at least one accumulation point, which is a stationary point, i.e.:

  1. When {x_k} is a finite sequence, then the final point x* is a stationary point of f.

  2. When {x_k} is an infinite sequence, then it has a limit point, which is a stationary point.

In [35], a comparison of two methods used for solving systems of linear equations, the steepest descent (SD) method and the conjugate gradient method, is illustrated. The aim of the research is to analyze which method solves these systems faster and how many iterations each method needs.

The system of linear equations is considered in the general form

    Ax = b,

where the matrix A is symmetric and positive definite.

The conclusion is that the SD method is faster than CG in the sense that it solves the equations in less time.

On the other hand, the authors find that the CG method, though slower in time, is more productive than SD, because it converges in fewer iterations.

So, one method can be used when we want to find the solution very quickly, while the other converges in a smaller number of iterations.
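The iteration-count comparison can be reproduced in a few lines. The sketch below (illustrative; the test matrix and right-hand side are randomly generated here, not taken from [35]) solves Ax = b by both methods and counts iterations.

```python
import numpy as np

# Steepest descent vs. conjugate gradient on Ax = b, with A symmetric
# positive definite (test data below is random, purely for illustration).
def steepest_descent(A, b, x, tol=1e-8, max_iter=100000):
    for k in range(max_iter):
        r = b - A @ x                      # residual = negative gradient
        if np.linalg.norm(r) < tol:
            return x, k
        t = (r @ r) / (r @ A @ r)          # exact line search step
        x = x + t * r
    return x, max_iter

def conjugate_gradient(A, b, x, tol=1e-8, max_iter=100000):
    r = b - A @ x
    d = r.copy()
    for k in range(max_iter):
        if np.linalg.norm(r) < tol:
            return x, k
        t = (r @ r) / (d @ A @ d)
        x = x + t * d
        r_new = r - t * (A @ d)
        d = r_new + (r_new @ r_new) / (r @ r) * d
        r = r_new
    return x, max_iter

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 5.0 * np.eye(50)             # symmetric positive definite
b = rng.standard_normal(50)
x_sd, it_sd = steepest_descent(A, b, np.zeros(50))
x_cg, it_cg = conjugate_gradient(A, b, np.zeros(50))
print(it_cg < it_sd)                       # CG needs fewer iterations: True
```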

Again, we consider the problem (1), where f: R^n → R is a smooth function whose gradient is available.

A hybrid conjugate gradient method is a certain combination of different conjugate gradient methods; it is made to improve their behavior and to avoid the jamming phenomenon.

An excellent survey of hybrid conjugate gradient methods is given in [5].

Three-term conjugate gradient methods were studied in the past (e.g., see [8, 32, 34]); recent papers on CG methods suggest that the mainstream is now formed by three-term and even four-term conjugate gradient methods. An interesting paper about a five-term hybrid conjugate gradient method is [1]. Recent papers also propose different modifications of the existing CG methods, as well as different hybridizations of CG and BFGS methods.

Consider the unconstrained optimization problem (1), where f: R^n → R is a continuously differentiable function, bounded from below. Starting from an initial point x_0 ∈ R^n, the three-term conjugate gradient method with line search generates a sequence {x_k} by the iterative scheme

    x_{k+1} = x_k + t_k d_k,

where t_k is a step-size obtained from the line search, and

    d_{k+1} = −g_{k+1} + δ_k s_k + η_k y_k.

In the last relation, δ_k and η_k are the conjugate gradient parameters, s_k = x_{k+1} − x_k, g_k = ∇f(x_k), and y_k = g_{k+1} − g_k. We can see that the search direction d_{k+1} is computed as a linear combination of −g_{k+1}, s_k, and y_k.

In [6], the author suggests another way to obtain three-term conjugate gradient algorithms, by minimization of the one-parameter quadratic model of the function f. The idea is to consider the quadratic approximation of f at the current point and to determine the search direction by minimization of this quadratic model. It is assumed that the symmetric approximation B_{k+1} of the Hessian matrix satisfies the general quasi-Newton equation, which depends on a positive parameter.


In this paper, the following quadratic approximation of the function f is considered:


The direction d_{k+1} is computed as


where the scalar β_k is determined as the solution of the following minimization problem:


From (6) and (7), the author obtains


Using (5), from (7), the next expression for β_k is obtained:


Using the idea of Perry [36], the author obtains


In fact, with this approach the author obtains a family of three-term conjugate gradient algorithms depending on a positive parameter ω.

Next, in [52], the WYL conjugate gradient (CG) formula, with β_k^WYL ≥ 0, is further studied. A three-term WYL CG algorithm is presented, which has the sufficient descent property without any conditions. The global convergence and the linear convergence are proven; moreover, the n-step quadratic convergence with a restart strategy is established if the initial step length is appropriately chosen.

The first three-term Hestenes-Stiefel method (TTHS method) can be found in [55].

Baluch et al. [7] describe a modified three-term Hestenes-Stiefel (HS) method. Although the earliest conjugate gradient method, HS, achieves global convergence using an exact line search, this is not guaranteed in the case of an inexact line search. In addition, the HS method does not usually satisfy the descent property. The modified three-term conjugate gradient method from [7] possesses a sufficient descent property regardless of the type of line search and guarantees global convergence using the inexact Wolfe-Powell line search [50, 51]. The authors also prove the global convergence of this method. The search direction considered in [7] has the following form:


where β_k^BZA = g_k^T (g_k − g_{k−1}) / (d_{k−1}^T y_{k−1} + μ|g_k^T d_{k−1}|), θ_k^BZA = g_k^T d_{k−1} / (d_{k−1}^T y_{k−1} + μ|g_k^T d_{k−1}|), and μ > 1.

In [13], an accelerated three-term conjugate gradient method is proposed, in which the search direction satisfies the sufficient descent condition as well as the extended Dai-Liao conjugacy condition:


This method appears to be different from the existing methods.

Next, the Li-Fukushima quasi-Newton equation is




where C and r are two given positive constants. Based on (10), Zhou and Zhang [56] propose a modified version of the DL method, called the ZZ method in [13].

In [30], some new conjugate gradient methods are extended, and then some three-term conjugate gradient methods are constructed. Namely, the authors recall [41, 42], with the conjugate gradient parameters, respectively,

    β_k^RMIL = g_k^T (g_k − g_{k−1}) / ||d_{k−1}||^2,                    (11)

    β_k^MRMIL = g_k^T (g_k − g_{k−1} − d_{k−1}) / ||d_{k−1}||^2,         (12)

from which it is obvious that β_k^MRMIL = β_k^RMIL for the exact line search (then g_k^T d_{k−1} = 0). Let us call the methods presented in [41, 42] the RMIL and MRMIL methods.

The three-term RMIL and MRMIL methods are introduced in [30].

The search direction d_k can be expressed as


where β_k is given by (11) or (12), and


An important property of the proposed methods is that the search direction always satisfies the sufficient descent condition without any line search, that is, the following relation always holds:

    g_k^T d_k ≤ −c ||g_k||^2,  c > 0.
Under the standard Wolfe line search and the classical assumptions, the global convergence properties of the proposed methods are proven.

Motivated by the conjugate gradient parameter suggested in [49], the following two conjugate gradient parameters are presented in [45]:


Motivated by [49], as well as by [45], a new hybrid nonlinear CG method is proposed in [1]; it combines five different CG methods, with the aim of uniting their positive features. The proposed method generates descent directions independently of the line search. Under some assumptions on the objective function, global convergence is proven under the standard Wolfe line search. The conjugate gradient parameter proposed in [1] is


Let us note that the proposed method is a hybrid of FR, DY, WYL, MHS, and MLS.

The behavior of the methods BZA, TTRMIL, MRMIL, MHS, MLS, and hAO is illustrated by the next tables.

The test criterion is CPU time.

The tests are performed on a workstation with an Intel Celeron CPU at 1.9 GHz.

The experiments use the test functions from [3].

Each problem is tested for n = 1000 and n = 5000 variables.

The average CPU time values are given in the last rows of these tables (Tables 1, 2, 3, 4).


Table 1.

n = 1000.


Table 2.

n = 1000.


Table 3.

n = 5000.


Table 4.

n = 5000.

In [2], based on the numerical efficiency of the Hestenes-Stiefel (HS) method, a new modified HS algorithm is proposed for unconstrained optimization. The new direction, independent of the line search, satisfies the sufficient descent condition. Motivated by theoretical and numerical features of the three-term conjugate gradient (CG) methods proposed in [33], and similar to the approach in [10], the new direction is computed by minimizing the distance between the CG direction and the direction of the three-term CG methods proposed in [33]. Under some mild conditions, the global convergence of the new method for general functions is established when the standard Wolfe line search is used. In this paper, the conjugate gradient parameter is given by




But this new CG direction does not fulfill a descent condition, so a further modification is made; namely, having in view [53], the authors of [2] introduce


where λ > 1/4 is a parameter. The global convergence is also proven under standard conditions.

Other papers on this theme which may be of interest are [4, 14, 15, 16, 17, 25, 26, 27].

3. Trust region methods

Recall that the basic idea of the Newton method is to approximate the objective function f(x) around x_k by a quadratic model

    q_k(s) = f(x_k) + g_k^T s + (1/2) s^T G_k s,

where g_k = ∇f(x_k) and G_k = ∇²f(x_k), and to use the minimizer s_k of q_k(s) to set x_{k+1} = x_k + s_k.

Also, recall that the Newton method can only guarantee local convergence, i.e., it is convergent only when ||s|| is small enough.

Further, the Newton method cannot be used when the Hessian is not positive definite.

There exists another class of methods, known as trust region methods. They do not use a line search to obtain global convergence, and they avoid the difficulties caused by a nonpositive definite Hessian in line search methods.

Furthermore, they can produce a greater reduction of the function f than line search approaches.

Here, we define a region around the current iterate,

    Ω_k = {x : ||x − x_k|| ≤ Δ_k},

where Δ_k is the radius of Ω_k, inside which the model is trusted to be an adequate representation of the objective function.

Our further intention is to choose a step which approximately minimizes the quadratic model over the trust region. In fact, x_k + s_k should be approximately the best point on the ball

    ||x − x_k|| ≤ Δ_k,

with center x_k and radius Δ_k.

In the case that this step is not acceptable, we reduce the size of the step, and then we find a new minimizer.

This method has a rapid local convergence rate, a property shared by the Newton and quasi-Newton methods, but an important characteristic of the trust region method is also its global convergence.

Since the step is restricted by the trust region, this method is also called the restricted step method.

The model subproblem of the trust region method is

    min q_k(s) = f(x_k) + g_k^T s + (1/2) s^T B_k s,      (17)
    s.t. ||s|| ≤ Δ_k,                                     (18)

where Δ_k is the trust region radius and B_k is a symmetric approximation of the Hessian G_k.

If we use the standard l2 norm ||·||_2, then s_k is the minimizer of q_k(s) in the ball of radius Δ_k. Generally, different norms define different shapes of the trust region.

Setting B_k = G_k in (17)-(18), the method becomes a Newton-type trust region method.

A problem in itself is the choice of Δ_k at each iteration.

If the agreement between the model q_k(s) and the objective function f(x_k + s) is satisfactory enough, the value Δ_k should be chosen as large as possible. The expression Ared_k = f(x_k) − f(x_k + s_k) is called the actual reduction, and the expression Pred_k = q_k(0) − q_k(s_k) is called the predicted reduction; here, we emphasize that the ratio

    r_k = Ared_k / Pred_k

measures the agreement between the model function q_k(s) and the objective function f(x_k + s).
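As a small worked example (constructed here for illustration, not taken from the chapter), consider one trust region step for f(x) = x^4 at x_k = 1 with trial step s_k = −0.25:

```python
# One trust region step for f(x) = x**4 at x_k = 1 with trial step
# s_k = -0.25 (all numbers here are illustrative).
f = lambda x: x ** 4
xk, sk = 1.0, -0.25
g = 4 * xk ** 3                                   # f'(x_k)
B = 12 * xk ** 2                                  # f''(x_k), used as B_k
qk = lambda s: f(xk) + g * s + 0.5 * B * s * s    # quadratic model
ared = f(xk) - f(xk + sk)                         # actual reduction
pred = qk(0.0) - qk(sk)                           # predicted reduction
rk = ared / pred
print(ared, pred, rk)                             # 0.68359375 0.625 1.09375
```

Since r_k is close to 1 (here even slightly above it), the model agrees well with f along this step, so the step is accepted and the radius may be kept or enlarged.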

If r_k is close to 0 or negative, the trust region shrinks; otherwise, we do not change the trust region.

The conclusion is that r_k is important both in choosing the new iterate x_{k+1} and in updating the trust region radius Δ_k. Now, we give the trust region algorithm.

Algorithm 1.3.1. (Trust region method).

Assumptions: x_0, Δ̄ > 0, Δ_0 ∈ (0, Δ̄], ε ≥ 0, 0 < η_1 ≤ η_2 < 1, and 0 < γ_1 < 1 < γ_2.

Let k=0.

Step 1. If ||g_k|| ≤ ε, then STOP.

Step 2. Approximately solve the subproblem (17)-(18) for s_k.

Step 3. Compute f(x_k + s_k) and r_k. Set

    x_{k+1} = x_k + s_k  if r_k ≥ η_1,  x_{k+1} = x_k  otherwise.
Step 4. If r_k < η_1, then Δ_{k+1} ∈ (0, γ_1 Δ_k].

If r_k ∈ [η_1, η_2), then Δ_{k+1} ∈ [γ_1 Δ_k, Δ_k].

If r_k ≥ η_2 and ||s_k|| = Δ_k, then Δ_{k+1} ∈ [Δ_k, min{γ_2 Δ_k, Δ̄}].

Step 5. Generate Bk+1, update qk, set k=k+1, and go to Step 1.
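A minimal Python sketch of Algorithm 1.3.1 follows (an illustration, not the authors' code). The subproblem (17)-(18) of Step 2 is solved only approximately, via the Cauchy point, B_k is taken as the exact Hessian of a small quadratic test function, and the parameter values are typical choices; all of these are assumptions for the sketch.

```python
import numpy as np

# Sketch of Algorithm 1.3.1 with a Cauchy-point step (illustrative choices).
def cauchy_point(g, B, delta):
    # Minimize the quadratic model along -g inside the ball of radius delta.
    t = delta / np.linalg.norm(g)
    gBg = g @ B @ g
    if gBg > 0:
        t = min(t, (g @ g) / gBg)          # unconstrained minimizer along -g
    return -t * g

def trust_region(f, grad, hess, x0, delta0=1.0, delta_max=100.0, eps=1e-6,
                 eta1=0.01, eta2=0.75, gamma1=0.5, gamma2=2.0, max_iter=1000):
    x, delta = x0.astype(float), delta0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:                    # Step 1
            break
        B = hess(x)
        s = cauchy_point(g, B, delta)                   # Step 2 (approximate)
        pred = -(g @ s + 0.5 * s @ B @ s)               # predicted reduction
        ared = f(x) - f(x + s)                          # actual reduction
        r = ared / pred
        if r >= eta1:                                   # Step 3: accept step
            x = x + s
        if r < eta1:                                    # Step 4: update radius
            delta = gamma1 * delta
        elif r >= eta2 and np.isclose(np.linalg.norm(s), delta):
            delta = min(gamma2 * delta, delta_max)
    return x

# Small quadratic test problem with minimizer (3, -1) (illustrative).
f = lambda x: (x[0] - 3.0) ** 2 + 2.0 * (x[1] + 1.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 3.0), 4.0 * (x[1] + 1.0)])
hess = lambda x: np.diag([2.0, 4.0])
x_min = trust_region(f, grad, hess, np.zeros(2))
print(np.round(x_min, 4))                               # ≈ [ 3. -1.]
```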

In Algorithm 1.3.1, Δ̄ is a bound for all Δ_k. The iterations with r_k ≥ η_2 (and hence Δ_{k+1} ≥ Δ_k) are called very successful iterations; the iterations with r_k ≥ η_1 (and hence x_{k+1} = x_k + s_k) are called successful iterations; and the iterations with r_k < η_1 (and hence x_{k+1} = x_k) are called unsuccessful iterations. Generally, the iterations from the first two cases are both called successful.

Some typical choices of the parameters are η_1 = 0.01, η_2 = 0.75, γ_1 = 0.5, γ_2 = 2, and Δ_0 = 1 or Δ_0 = (1/10)||g_0||. The algorithm is insensitive to changes of these parameters.

Next, if r_k < 0.01, then Δ_{k+1} can be chosen from the interval [0.01||s_k||, 0.5||s_k||] on the basis of a polynomial interpolation.

In the case of quadratic interpolation, we set




3.1 Convergence of trust region methods

Assumption 1.3.1 (Assumption A0).

We assume that the approximations B_k of the Hessian are uniformly bounded in norm, that the level set L = {x : f(x) ≤ f(x_0)} is bounded, and that f: R^n → R is continuously differentiable on L. We allow the length of the approximate solution s_k of the subproblem (17)-(18) to exceed the bound of the trust region, but we also assume that

    ||s_k|| ≤ η̃ Δ_k,

where η̃ is a positive constant.

In the trust region way of thinking, we generally do not seek an accurate solution of the subproblem (17)-(18); we are satisfied with finding a nearly optimal solution.

Strong theoretical as well as numerical results can be obtained if the step s_k produced by Algorithm 1.3.1 satisfies a sufficient reduction condition of the form

    q_k(0) − q_k(s_k) ≥ c ||g_k|| min{Δ_k, ||g_k|| / ||B_k||},  c > 0.
Theorem 1.3.1 [47] Under Assumption A0, if Algorithm 1.3.1 has finitely many successful iterations, then it converges to a first-order stationary point.

Theorem 1.3.2 [47] Under Assumption A0, if Algorithm 1.3.1 has infinitely many successful iterations, then

    lim inf_{k→∞} ||g_k|| = 0.
In [44], it is emphasized that trust region methods are very effective for optimization problems, and a new adaptive trust region method is presented. This method combines a modified secant equation with the BFGS update formula and an adaptive trust region radius, where the new trust region radius makes use not only of the function information but also of the gradient information. Let B̂_k be a positive definite matrix based on the modified Cholesky factorization [43]. Under suitable conditions, the global convergence is proven in [44]; the local superlinear convergence of the proposed method is also demonstrated. Owing to the adaptive technique, the proposed method possesses the following nice properties:

  1. The trust region radius uses not only the gradient value but also the function value.

  2. Computing the inverse matrix B̂_k^{−1} at each iterative point x_k is not required.

  3. The computational time is reduced.

A modified secant equation is introduced:


where q_k = y_k + h_k d_k, f_k = f(x_k), and

    h_k = ((g_{k+1} + g_k)^T d_k + 2(f_k − f_{k+1})) / ||d_k||^2.

When f is twice continuously differentiable and B_{k+1} is generated by the BFGS formula with B_0 = I, the modified secant equation (19) possesses the following nice property:


and this property holds for all k.

Under classical assumptions, the global convergence of the method presented in [44] is also proven in this paper.

In [28], a hybridization of monotone and non-monotone approaches is made; a modified trust region ratio is used, which provides more information about the agreement between the exact and the approximate models. An adaptive trust region radius is used, as well as two accelerated Armijo-type line search strategies, to avoid re-solving the trust region subproblem whenever a trial step is rejected. It is shown that the proposed algorithm is globally and locally superlinearly convergent. In this paper, trust region methods are denoted shortly by TR. In a TR method, the iterative scheme is

    x_{k+1} = x_k + s_k,

where s_k is often an approximate solution of the following quadratic subproblem:


The performance of TR methods is greatly influenced by the strategy for choosing the TR radius at each iteration. To determine the radius Δ_k in the standard TR method, the agreement between f(x_k + s) and m_k(s) is evaluated by the so-called TR ratio ρ_k:

    ρ_k = (f(x_k) − f(x_k + s_k)) / (m_k(0) − m_k(s_k)).
When ρ_k is negative or a small positive number near zero, the quadratic model is a poor approximation of the objective function. In such a situation, Δ_k should be decreased and, consequently, the subproblem (20) should be solved again. However, when ρ_k is close to 1, it is reasonable to use the quadratic model as an approximation of the objective function; the step s_k should then be accepted, and Δ_k can be increased. Here, the authors use the following modified version of ρ_k:


where R_k = η_k f_{l(k)} + (1 − η_k) f_k, η_k ∈ [η_min, η_max], η_min ∈ [0, 1), and η_max ∈ [η_min, 1]. Also,


where N ∈ N; this non-monotone term was originally used by Toint [48].

More about trust region methods can be found in [9, 18, 21, 22, 54].

4. Conclusion

Conjugate gradient methods and trust region methods are very popular nowadays and are studied by many researchers.

Namely, different modifications of these methods are made with the aim of improving them.

Next, researchers try to construct not only new individual methods but whole new classes of methods, from which individual methods are obtained for specific values of the parameters. It is always more desirable to construct a class of methods than a single method.

Hybrid conjugate gradient methods are constructed in many different ways; this class of conjugate gradient methods remains an active topic.

Further, one of the contemporary trends is to use the BFGS update in constructions of new conjugate gradient methods (e.g., see [46]).

Finally, let us emphasize that contemporary papers often use Picard-Mann-Ishikawa iterative processes and connect these kinds of processes with unconstrained optimization (see [29, 37, 38]).

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Snezana S. Djordjevic (February 19th 2019). Unconstrained Optimization Methods: Conjugate Gradient Methods and Trust-Region Methods. In: Bruno Carpentieri (Ed.), Applied Mathematics. IntechOpen. DOI: 10.5772/intechopen.84374.
