Open access peer-reviewed chapter

Bilevel Disjunctive Optimization on Affine Manifolds

By Constantin Udriste, Henri Bonnel, Ionel Tevy and Ali Sapeeh Rasheed

Submitted: October 24th, 2017. Reviewed: February 19th, 2018. Published: September 5th, 2018.

DOI: 10.5772/intechopen.75643



Bilevel optimization is a special kind of optimization in which one problem is embedded within another. The outer optimization task is commonly referred to as the upper-level task, and the inner optimization task as the lower-level task. These problems involve two kinds of variables: upper-level variables and lower-level variables. Bilevel optimization was first recognized in the field of game theory by the German economist Heinrich von Stackelberg, whose book (1934) described this hierarchical problem. Bilevel optimization problems are now commonly found in a number of real-world settings: transportation, economics, decision science, business, engineering, and so on. In this chapter, we provide a general formulation for the bilevel disjunctive optimization problem on affine manifolds. Such problems contain two levels of optimization tasks, one nested within the other. The outer problem is commonly referred to as the leader's (upper-level) problem, and the inner problem as the follower's (lower-level) problem. The two levels have their own objectives and constraints. Topics include: affine convex functions, optimization with auto-parallel restrictions, affine convexity of posynomial functions, the bilevel disjunctive problem and an algorithm for it, models of bilevel disjunctive programming problems, and properties of minimum functions.


  • convex programming
  • affine manifolds
  • optimization along curves
  • bilevel disjunctive optimization
  • minimum functions
  • Mathematics Subject Classification 2010: 90C25
  • 90C29
  • 90C30

1. Affine convex functions

In optimization problems [16, 17, 19, 23, 24, 25, 26, 27], one can use an affine manifold, that is, a pair (M, Γ), where M is a smooth real n-dimensional manifold and Γ is a symmetric affine connection on M. The connection Γ produces auto-parallel curves x(t) via the ODE system

$$\ddot{x}^h(t) + \Gamma^h_{ij}(x(t))\,\dot{x}^i(t)\,\dot{x}^j(t) = 0,\qquad h = 1,\dots,n.$$

They are used for defining the convexity of subsets in M and the convexity of functions f : D ⊆ M → R (see also [3, 6]).
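The auto-parallel system can be integrated numerically for any given connection. The following sketch (function names are ours, not from the chapter) integrates the second-order ODE above with a classical Runge-Kutta step; with the flat connection Γ = 0 the auto-parallels reduce to straight lines x(t) = x₀ + t ξ₀.

```python
# Minimal RK4 integrator for the auto-parallel ODE
#   x''^h + Gamma^h_ij(x) x'^i x'^j = 0,
# written as a first-order system in (x, v).  All names are illustrative.

def autoparallel(gamma, x0, v0, t1, steps=1000):
    """Integrate the auto-parallel starting at x0 with velocity v0 up to t1.
    gamma(x) must return a nested list G[h][i][j] of connection coefficients."""
    n = len(x0)

    def acc(x, v):
        G = gamma(x)
        return [-sum(G[h][i][j] * v[i] * v[j] for i in range(n) for j in range(n))
                for h in range(n)]

    h = t1 / steps
    x, v = list(x0), list(v0)
    for _ in range(steps):
        # classical RK4 on the pair (x, v)
        k1x, k1v = v, acc(x, v)
        x2 = [x[i] + 0.5 * h * k1x[i] for i in range(n)]
        v2 = [v[i] + 0.5 * h * k1v[i] for i in range(n)]
        k2x, k2v = v2, acc(x2, v2)
        x3 = [x[i] + 0.5 * h * k2x[i] for i in range(n)]
        v3 = [v[i] + 0.5 * h * k2v[i] for i in range(n)]
        k3x, k3v = v3, acc(x3, v3)
        x4 = [x[i] + h * k3x[i] for i in range(n)]
        v4 = [v[i] + h * k3v[i] for i in range(n)]
        k4x, k4v = v4, acc(x4, v4)
        x = [x[i] + h * (k1x[i] + 2*k2x[i] + 2*k3x[i] + k4x[i]) / 6 for i in range(n)]
        v = [v[i] + h * (k1v[i] + 2*k2v[i] + 2*k3v[i] + k4v[i]) / 6 for i in range(n)]
    return x, v

# Flat connection: auto-parallels are straight lines x(t) = x0 + t*v0.
flat = lambda x: [[[0.0] * 2 for _ in range(2)] for _ in range(2)]
x, v = autoparallel(flat, [1.0, 2.0], [0.5, -1.0], t1=2.0)
# x is close to [1 + 2*0.5, 2 + 2*(-1)] = [2.0, 0.0]
```

Plugging in a non-flat `gamma` gives curved auto-parallels, which is all that the convexity definitions below require.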

Definition 1.1 An affine manifold (M, Γ) is called auto-parallely complete if any auto-parallel x(t) starting at p ∈ M is defined for all values of the parameter t ∈ R.

Theorem 1.1 [1] Let M be a (Hausdorff, connected, smooth) compact n-manifold endowed with an affine connection Γ and let p ∈ M. If the holonomy group Hol_p(Γ) (regarded as a subgroup of the group Gl(T_pM) of all the linear automorphisms of the tangent space T_pM) has compact closure, then (M, Γ) is auto-parallely complete.

Let (M, Γ) be an auto-parallely complete affine manifold. For a C² function f : M → R, we define the tensor Hess_Γ f of components

$$(\mathrm{Hess}_\Gamma f)_{ij} = \frac{\partial^2 f}{\partial x^i \partial x^j} - \Gamma^k_{ij}\,\frac{\partial f}{\partial x^k}.$$
Definition 1.2 A C² function f : M → R is called:

(1) linear affine with respect to Γ if Hess_Γ f = 0 throughout;

(2) affine convex (convex with respect to Γ) if Hess_Γ f ⪰ 0 (positive semidefinite) throughout.

The function f is: (1) linear affine if its restriction f(x(t)) to each auto-parallel x(t) satisfies f(x(t)) = at + b, for some numbers a, b that may depend on x(t); (2) affine convex if its restriction f(x(t)) is convex on each auto-parallel x(t).

Theorem 1.2 If there exists a linear affine nonconstant function f on (M, Γ), then the curvature tensor field R^h_{ikj} is in Ker df.

Proof. For given Γ, if we consider

$$\frac{\partial^2 f}{\partial x^i \partial x^j} - \Gamma^k_{ij}(x)\,\frac{\partial f}{\partial x^k} = 0$$

as a PDE system (a particular case of a Frobenius-Mayer system of PDEs) with n(n+1)/2 equations and the unknown function f, then we need the complete integrability conditions (the equality of mixed third-order partial derivatives of f); it follows

$$R^h_{ikj}\,\frac{\partial f}{\partial x^h} = 0.\qquad\square$$
Corollary 1.1If there existsnlinear affine functionsfl,l=1,,nonMΓ, whosedflare linearly independent, thenΓis flat, that is,Rhikj=0.

Of course this only means the curvature tensor is zero on the topologically trivial region we used to set up the co-vector fields df^l(x). But we can always cover any manifold by an atlas of topologically trivial regions, so this allows us to deduce that the curvature tensor vanishes throughout the manifold.

Remark 1.1 There is actually no need to extend df^l(x) to the entire manifold. If this could be done, then the df^l(x) would be everywhere nonzero co-vector fields; but there are topologies, for example S², for which we know such things do not exist. Therefore, there are topological manifolds for which we are forced to work on topologically trivial regions.

The following theorem is well known [16, 17, 19, 23]. Due to its importance, we now offer new proofs (based on catastrophe theory, on decomposing a tensor into a specific product, and on using slackness variables).

Theorem 1.3 Let f : M → R be a C² function.

(1) If f is regular or has only one minimum point, then there exists a connection Γ such that f is affine convex.

(2) If f has a maximum point x₀, then there is no connection Γ making f affine convex throughout.

Proof. For the Hessian (Hess_Γ f)_{ij} to be positive semidefinite, we need n conditions in the form of inequalities and equalities. The number of unknowns Γ^h_{ij} is n²(n+1)/2. The inequalities can be replaced by equalities using slackness variables.

The first central idea of the proof is to use catastrophe theory, since almost all families f(x, c), x = (x¹, …, xⁿ) ∈ Rⁿ, c = (c¹, …, cᵐ) ∈ Rᵐ, of real differentiable functions with m ≤ 4 parameters are structurally stable and equivalent, in the vicinity of any point, to one of the canonical forms in [15].

We eliminate the cases with a maximum point, that is, the Morse 0-saddle and the saddle point. Around each critical point (in a chart), the canonical form f(x, c) is affine convex with respect to appropriate locally defined linear connections that can be found easily. Using changes of coordinates and a partition of unity, we glue all these connections into a global one, making f(x, c) affine convex on M.

At any critical point x₀, the affine Hessian Hess_Γ f reduces to the Euclidean Hessian (∂²f/∂x^i∂x^j)(x₀). Then the maximum point condition or the saddle condition contradicts the affine convexity condition.

A direct proof based on the decomposition of a tensor: Let (M, Γ) be an affine manifold and f : M → R a C² function.

Suppose f has no critical points (is regular). If the function f is not convex with respect to Γ, we look for a new connection Γ̄^h_{ij} = Γ^h_{ij} + T^h_{ij}, with an unknown tensor field T^h_{ij}, such that

$$\frac{\partial^2 f}{\partial x^i \partial x^j} - \bar{\Gamma}^k_{ij}(x)\,\frac{\partial f}{\partial x^k} = \sigma_{ij}(x),$$

where σ_{ij}(x) is a positive semi-definite tensor. A very particular solution is the decomposition T^h_{ij}(x) = a^h(x) b_{ij}(x), where the vector field a has the property

$$a^h(x)\,\frac{\partial f}{\partial x^h}(x) \neq 0$$

and the tensor b_{ij} is

$$b_{ij} = \frac{1}{a^k\,\partial f/\partial x^k}\Big((\mathrm{Hess}_\Gamma f)_{ij} - \sigma_{ij}\Big).$$
Remark 1.2 The connection Γ̄^h_{ij} is strongly dependent on both the function f and the tensor field σ_{ij}.

Suppose f has a minimum point x₀. In this case, observe that we must have the condition σ_{ij}(x₀) = (∂²f/∂x^i∂x^j)(x₀). Can we repeat the previous reasoning for x ≠ x₀ and then extend the obtained connection by continuity? The answer is generally negative. Indeed, let us compute the tensor b_{ij} near x₀. Here we cannot plug in the point x₀ because we get 0/0, an indeterminate form.

To obtain a contradiction, we fix an auto-parallel γ(t), t ∈ [0, ε), starting from the minimum point x₀ = γ(0) and tangent to v = γ̇(0), and we compute the limit of b along γ (via l'Hôpital's rule). But this limit depends on the direction v (it takes different values along two different auto-parallels).

In some particular cases, we can eliminate the dependence on the vector v. For example, suitable conditions on f and σ at x₀ are sufficient to do this.

Under a particular condition of this kind, guaranteeing independence of v, we can show that connections of the previous type can be built that are good everywhere.

1.1. Illustration through examples

Let us illustrate the previous statements with the following examples.

Example 1.1 (for the first part of the theorem) Let us consider the function f : R² → R, f(x, y) = x³ + y³ + 3x + 3y and Γ^h_{ij} = 0, i, j, h = 1, 2. Then f_x = 3x² + 3, f_y = 3y² + 3 and f has no critical point. Moreover, the Euclidean Hessian of f is not positive semi-definite overall. Let us make the above construction for σ_{ij}(x, y) = δ_{ij}. Taking a¹ = a² = 1, we obtain a connection

that is not unique.
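The construction of Example 1.1 can be checked numerically. The sketch below uses our own normalization of the decomposition T^h_{ij} = a^h b_{ij} (with b carrying the factor 1/(a·∇f), an assumption consistent with the choice a¹ = a² = 1) and verifies that the modified Hessian ∂²f/∂x^i∂x^j − Γ̄^k_{ij} ∂f/∂x^k equals σ = δ, so f is affine convex with respect to Γ̄.

```python
# Numerical check for Example 1.1: f(x, y) = x^3 + y^3 + 3x + 3y, sigma = identity.
# Assumed normalization: T^h_ij = a^h * b_ij with a = (1, 1) and
#   b_ij = (Hess_E f_ij - sigma_ij) / (a . grad f),
# so that Hess_Gammabar f = Hess_E f - (a . grad f) * b = sigma.

def grad_f(x, y):
    return (3 * x**2 + 3, 3 * y**2 + 3)

def hess_f(x, y):
    return [[6 * x, 0.0], [0.0, 6 * y]]

def modified_hessian(x, y):
    a = (1.0, 1.0)
    sigma = [[1.0, 0.0], [0.0, 1.0]]
    g = grad_f(x, y)
    s = a[0] * g[0] + a[1] * g[1]          # a . grad f = 3(x^2 + y^2 + 2) > 0
    H = hess_f(x, y)
    b = [[(H[i][j] - sigma[i][j]) / s for j in range(2)] for i in range(2)]
    # Hess_Gammabar f_ij = H_ij - Gammabar^k_ij grad_k f = H_ij - s * b_ij
    return [[H[i][j] - s * b[i][j] for j in range(2)] for i in range(2)]

# Even where the Euclidean Hessian is indefinite (here 6x < 0), the
# modified Hessian is the identity, hence positive definite.
M = modified_hessian(-1.5, 0.7)
```

The same check can be repeated at any sample point, since the cancellation H − s·b = σ holds identically.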

Example 1.2 (for one minimum point) Let us consider the function f : R² → R, f(x, y) = 1 − e^{−x²−y²} and Γ^h_{ij} = 0, i, j, h = 1, 2. Then f_x = 2x e^{−x²−y²}, f_y = 2y e^{−x²−y²} and f has a unique critical (minimum) point (0, 0). However, the Euclidean Hessian of f is not positive semi-definite overall. We repeat the previous reasoning for σ_{ij} = 2e^{−x²−y²} δ_{ij}, a¹ = f_x, a² = f_y. Hence we obtain Γ̄^h_{ij} = T^h_{ij},

Observe that lim_{(x,y)→(0,0)} T^h_{ij}(x, y) = 0. Hence we take Γ̄^h_{ij}(0, 0) = 0.

The next example shows what happens if we step outside the conditions of the previous theorem.

Example 1.3 Let us take the function f : R → R, f(x) = x³, where the critical point x = 0 is an inflection point. We take Γ(x) = 1/(2x²), which is not defined at the critical point x = 0, but the relation of convexity is realized by prolongation,


Let us consider the ODE of auto-parallels


The solutions


are auto-parallels on (R \ {0, t₁, t₂}, Γ), where t₁, t₂ are the real solutions of 2 + t² − ct = 0. These curves are extended at t = 0 by continuity. The manifold (R, Γ) is not auto-parallely complete. Since the image x(R) is not a "segment", the function f : R → R, f(x) = x³ is not globally convex.

Remark 1.3 For n ≥ 2, there exist C¹ functions φ : Rⁿ → R which have two minimum points without having any other extremum point. As an example,

has two (global) minimum points p = (1, 0) and q = (1, 2).

The restriction

is the difference of two affine convex functions (see Section 2).

Our chapter is based also on some ideas in: [3] (convex mappings between Riemannian manifolds), [7] (geometric modeling in probability and statistics), [13] (arc length in metric and Finsler manifolds), [14] (applications of Hahn-Banach principle to moment and optimization problems), [21] (geodesic connectedness of semi-Riemannian manifolds), and [28] (tangent and cotangent bundles). For algorithms, we recommend the paper [20] (sequential and parallel algorithms).


2. Optimizations with autoparallel restrictions

2.1. Direct theory

The auto-parallel curves x(t) on the affine manifold (M, Γ) are solutions of the second-order ODE system

$$\ddot{x}^h(t) + \Gamma^h_{ij}(x(t))\,\dot{x}^i(t)\,\dot{x}^j(t) = 0.$$

Obviously, the complete notation is x(t; x₀, ξ₀), with

$$x(t_0; x_0, \xi_0) = x_0,\qquad \dot{x}(t_0; x_0, \xi_0) = \xi_0.$$
Definition 2.1 Let D ⊆ M be open and connected and f : D → R a C² function. The point x₀ ∈ D is called a minimum (maximum) point of f conditioned by the auto-parallel system, together with initial conditions, if for the maximal solution x(t; x₀, ξ₀) : I → D, there exists a neighborhood I_{t₀} of t₀ such that


Theorem 2.1 If x₀ ∈ D is an extremum point of f conditioned by the previous second-order system, then df(x₀)(ξ₀) = 0.

Definition 2.2 The points x ∈ D which are solutions of the equation df(x)(ξ) = 0 are called critical points of f conditioned by the previous spray.

Theorem 2.2 If x₀ ∈ D is a conditioned critical point of the function f : D → R of class C², constrained by the previous auto-parallel system, and if the number

is strictly positive (negative), then x₀ is a minimum (maximum) point of f constrained by the auto-parallel system.

Example 2.1 We compute the Christoffel symbols on the unit sphere S², using spherical coordinates (θ, φ) and the Riemannian metric

$$ds^2 = d\theta^2 + \sin^2\theta\, d\varphi^2.$$

When θ ∈ (0, π), we find

$$\Gamma^\theta_{\varphi\varphi} = -\sin\theta\cos\theta,\qquad \Gamma^\varphi_{\theta\varphi} = \Gamma^\varphi_{\varphi\theta} = \cot\theta,$$

and all the other Γ's are equal to zero. We can show that the apparent singularity at θ = 0, π can be removed by a better choice of coordinates at the poles of the sphere. Thus, the above affine connection extends to the whole sphere.

The second-order system defining the auto-parallel curves (geodesics) on S² is

$$\ddot\theta - \sin\theta\cos\theta\,\dot\varphi^2 = 0,\qquad \ddot\varphi + 2\cot\theta\,\dot\theta\,\dot\varphi = 0.$$

The solutions are great circles on the sphere. For example, θ(t) = αt + β and φ = const.
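The geodesic system can be probed numerically: starting on the equator θ = π/2 with purely longitudinal velocity, the coefficient sin θ cos θ vanishes, so θ stays constant and φ grows linearly, confirming that the equator is a great circle. A small sketch with a hand-written RK4 step (all names ours):

```python
import math

# Geodesic equations on the unit sphere S^2 (metric d theta^2 + sin^2 theta d phi^2):
#   theta'' = sin(theta) cos(theta) phi'^2
#   phi''   = -2 cot(theta) theta' phi'

def rhs(state):
    th, ph, dth, dph = state
    return (dth, dph,
            math.sin(th) * math.cos(th) * dph**2,
            -2.0 * (math.cos(th) / math.sin(th)) * dth * dph)

def step(state, h):
    # one classical RK4 step
    k1 = rhs(state)
    k2 = rhs(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
    k3 = rhs(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
    k4 = rhs(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + h * (a + 2*b + 2*c + d) / 6
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Start on the equator, moving along it with unit speed.
state = (math.pi / 2, 0.0, 0.0, 1.0)
h, T = 0.001, 1.0
for _ in range(int(T / h)):
    state = step(state, h)
# theta remains pi/2 and phi ~ t: the equator is a geodesic (great circle).
```

The same integrator, started with a nonzero ξ¹, traces the tilted great circles used in the optimization example below.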

We compute the curvature tensor R of the unit sphere S². Since there are only two independent coordinates, all the non-zero components of the curvature tensor R are given by R^i_j = R^i_{jθφ} = −R^i_{jφθ}, where i, j ∈ {θ, φ}. We get R^θ_φ = sin²θ, R^φ_θ = −1, and the other components are 0.

Let (θ(t; θ₀, φ₀, ξ), φ(t; θ₀, φ₀, ξ)), t ∈ R, be the maximal auto-parallel which satisfies θ(t₀; θ₀, φ₀, ξ) = θ₀, θ̇(t₀; θ₀, φ₀, ξ) = ξ¹; φ(t₀; θ₀, φ₀, ξ) = φ₀, φ̇(t₀; θ₀, φ₀, ξ) = ξ². We wish to compute min f(θ, φ) = R^θ_φ = sin²θ with the restriction (θ, φ) = (θ(t; θ₀, φ₀, ξ), φ(t; θ₀, φ₀, ξ)), t ∈ R.

Since df = (2 sin θ cos θ) dθ = (sin 2θ) dθ, the critical point condition df(θ, φ)(ξ) = 0 becomes sin θ cos θ ξ¹ = 0. Consequently, the critical points are either (θ₀ = kπ, φ) with ξ = (ξ¹, ξ²) ≠ (0, 0), or (θ₁ = (2k + 1)π/2, φ) with ξ = (ξ¹, ξ²) ≠ (0, 0), or (θ, φ) with ξ = (0, ξ²), ξ² ≠ 0.

The components of the Hessian of f are

At the critical points (θ₀, φ) or (θ₁, φ), the Hessian of f is positive or negative semi-definite, respectively. On the other hand, along ξ = (0, ξ²), we find (Hess f)_{ij} ξ^i ξ^j = (1/2) sin²(2θ) (ξ²)² > 0 for ξ² ≠ 0 and sin 2θ ≠ 0. Consequently, each such point (θ, φ) is a minimum point of f along each auto-parallel starting from the given point and tangent to ξ = (0, ξ²), ξ² ≠ 0.

2.2. Theory via the associated spray

This point of view regarding extrema comes from paper [22].

The second-order system of auto-parallels induces a spray (a special vector field) Y(x, y) = (y^h, −Γ^h_{ij}(x) y^i y^j) on the tangent bundle TM, that is,

The solutions γ(t) = (x(t), y(t)) : I → D of class C² are called field lines of Y. They depend on the initial condition γ(t)|_{t=t₀} = (x₀, y₀), and therefore the notation γ(t; x₀, y₀) is more suggestive.

Definition 2.3 Let D ⊆ TM be open and connected and f : D → R a C² function. The point (x₀, y₀) ∈ D is called a minimum (maximum) point of f conditioned by the previous spray, if for the maximal field line γ(t; x₀, y₀), t ∈ I, there exists a neighborhood I_{t₀} of t₀ such that


Theorem 2.3 If (x₀, y₀) ∈ D is an extremum point of f conditioned by the previous spray, then (x₀, y₀) is a point where Y is in Ker df.

Definition 2.4 The points (x, y) ∈ D which are solutions of the equation

$$df(x, y)\big(Y(x, y)\big) = 0$$

are called critical points of f conditioned by the previous spray.

Theorem 2.4 If (x₀, y₀) ∈ D is a conditioned critical point of the function f : D → R of class C², constrained by the previous spray, and if the number

is strictly positive (negative), then (x₀, y₀) is a minimum (maximum) point of f constrained by the spray.

Example 2.2 We consider the Volterra-Hamilton ODE system [2],

which models production in a Gause-Witt 2-species system evolving in R⁴: (1) competition if α₁ > 0, α₂ > 0, β₁ > 0, β₂ > 0 and (2) parasitism if α₁ > 0, α₂ < 0, β₁ > 0, β₂ > 0.

Changing the real parameter t into an affine parameter s, we find the connection with constant coefficients

Let x(t; x₀, y₀), t ∈ I, be the maximal field line which satisfies x(t₀; x₀, y₀) = (x₀, y₀). We wish to compute max f(x¹, x², y¹, y²) = y² with the restriction x = x(t; x₀, y₀).

We apply the previous theory. Introduce the vector field Y associated to the spray.

We set the critical point condition df(Y) = 0. Since df = (0, 0, 0, 1), it follows the relation λy¹ − β₁(y²)² − 2β₂ y¹ y² = 0; that is, the critical point set is a conic in the plane y¹Oy².

Since d²f = 0, the sufficiency condition reduces to df(D_Y Y)(x₀, y₀) < 0, that is,


This last relation is equivalent either to


or to


Each critical point satisfying one of the last two conditions is a maximum point.

3. Affine convexity of posynomial functions

For the general theory regarding geometric programming (based on posynomial, signomial functions, etc.), see [11].

Theorem 3.1Each posynomial function is affine convex, with respect to some affine connection.

Proof. A posynomial function has the form

$$f(x) = \sum_{k=1}^{K} c_k\, x_1^{a_{1k}} x_2^{a_{2k}} \cdots x_n^{a_{nk}},\qquad x \in \mathbb{R}^n_{++},$$

where all the coefficients c_k are positive real numbers and the exponents a_{ik} are real numbers. Let us consider the auto-parallel curves of the form

$$x^i(t) = (a_i)^{1-t}(b_i)^t,\qquad t \in [0, 1],$$

joining the points a = (a₁, …, a_n) and b = (b₁, …, b_n), which fix, as an example, the affine connection

It follows

$$f(x(t)) = \sum_{k=1}^{K} A_k^{1-t} B_k^t,\qquad A_k = c_k \prod_i a_i^{a_{ik}},\quad B_k = c_k \prod_i b_i^{a_{ik}}.$$

One term in this sum is of the form ψ_k(t) = A_k^{1−t} B_k^t, and hence ψ̈_k(t) = A_k^{1−t} B_k^t (ln A_k − ln B_k)² ≥ 0.
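The convexity can be observed numerically: along the curve x_i(t) = a_i^{1−t} b_i^t each posynomial term becomes A_k^{1−t} B_k^t, so the second differences of t ↦ f(x(t)) are nonnegative. A small sketch (the sample posynomial and endpoints are our own choices):

```python
# Check convexity of a posynomial along the curve x_i(t) = a_i^(1-t) * b_i^t.
# Sample posynomial (our choice): f(x1, x2) = 2 x1^0.5 x2^-1 + 3 x1^2 x2^0.3

coeffs = [2.0, 3.0]
expons = [(0.5, -1.0), (2.0, 0.3)]

def f(x):
    return sum(c * x[0]**e[0] * x[1]**e[1] for c, e in zip(coeffs, expons))

a, b = (1.0, 4.0), (3.0, 0.5)          # endpoints in the positive orthant

def curve(t):
    # componentwise geometric interpolation between a and b
    return tuple(ai**(1 - t) * bi**t for ai, bi in zip(a, b))

# Discrete second differences of phi(t) = f(x(t)) must be >= 0 for convex phi.
h = 0.01
ts = [i * h for i in range(1, 100)]
second_diffs = [f(curve(t + h)) - 2 * f(curve(t)) + f(curve(t - h)) for t in ts]
ok = all(d >= -1e-12 for d in second_diffs)
```

Any other posynomial and any endpoints in the positive orthant can be substituted; the second differences stay nonnegative term by term.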

Remark 3.1 Posynomial functions belong to the class of functions for which the statement "the product of two convex functions is convex" holds.

Corollary 3.1Each signomial function is difference of two affine convex posynomials, with respect to some affine connection.

Proof. A signomial function has the form

$$g(x) = \sum_{k=1}^{K} c_k\, x_1^{a_{1k}} \cdots x_n^{a_{nk}},$$

where all the exponents a_{ik} are real numbers and the coefficients c_k are either positive or negative. Without loss of generality, suppose that for k = 1, …, k₀ we have c_k > 0 and for k = k₀ + 1, …, K we have c_k < 0. We use the decomposition

$$g = \sum_{k=1}^{k_0} c_k\, x_1^{a_{1k}} \cdots x_n^{a_{nk}} \;-\; \sum_{k=k_0+1}^{K} (-c_k)\, x_1^{a_{1k}} \cdots x_n^{a_{nk}},$$

and we apply the Theorem together with the implication: u(t), v(t) convex implies u(t) + v(t) convex.□

Corollary 3.2 (1) The polynomial functions with positive coefficients, restricted to R^n_{++}, are affine convex functions.

(2) The polynomial functions with positive and negative terms, restricted to R^n_{++}, are differences of two affine convex functions.

Proudnikov [18] gives necessary and sufficient conditions for representing a Lipschitz multivariable function as a difference of two convex functions. An algorithm and a geometric interpretation of this representation are also given. The outcome of this algorithm is a sequence of pairs of convex functions that converges uniformly to a pair of convex functions if the conditions of the formulated theorems are satisfied.

4. Bilevel disjunctive problem

Let (M₁, Γ¹), the leader decision affine manifold, and (M₂, Γ²), the follower decision affine manifold, be two connected affine manifolds of dimensions n₁ and n₂, respectively. Moreover, (M₂, Γ²) is supposed to be complete. Let also f : M₁ × M₂ → R be the leader objective function, and let F = (F₁, …, F_r) : M₁ × M₂ → R^r be the follower multiobjective function.

The components F_i : M₁ × M₂ → R are (possibly) conflicting objective functions.

A bilevel optimization problem means a decision of the leader with regard to a multi-objective optimum of the follower (in fact, a constrained optimization problem whose constraints are obtained from optimization problems). For details, see [5, 10, 12].

Let x ∈ M₁, y ∈ M₂ be generic points. In this chapter, the disjunctive solution set of a follower multiobjective optimization problem is defined by

(1) the set-valued functions

$$\psi_i(x) = \operatorname{argmin}_{y \in M_2} F_i(x, y),\qquad i = 1, \dots, r;$$

(2) the set-valued function

$$\psi(x) = \bigcup_{i=1}^{r} \psi_i(x).$$
We deal with two bilevel problems:

(1) The optimistic bilevel disjunctive problem

$$(\mathrm{OBDP})\qquad \min_{x \in M_1}\ \min_{y \in \psi(x)} f(x, y).$$

In this case, the follower cooperates with the leader; that is, for each x ∈ M₁, the follower chooses among all its disjunctive solutions (its best responses) one which is best for the leader (assuming that such a solution exists).

(2) The pessimistic bilevel disjunctive problem

$$\min_{x \in M_1}\ \max_{y \in \psi(x)} f(x, y).$$

In this case, there is no cooperation between the leader and the follower, and the leader expects the worst scenario; that is, for each x ∈ M₁, the follower may choose among all its disjunctive solutions (its best responses) one which is unfavorable for the leader.

So, a general optimization problem becomes a pessimistic bilevel problem.

Theorem 4.1 The value

$$\min_x f(x, \psi(x)) = \min_x \min\{f(x, y) : y \in \psi(x)\}$$

exists if and only if, for some index i, the minimum min_x{f(x, y) : y ∈ ψ_i(x)} exists and, for each j ≠ i, either min_x{f(x, y) : y ∈ ψ_j(x)} exists or ψ_j = ∅. In this case,

$$\min_x f(x, \psi(x))$$

coincides with the minimum of the minima that exist.

Proof. Let us consider the multi-functions φ_i(x) = f(x, ψ_i(x)) and φ(x) = f(x, ψ(x)). Then φ(x) = ∪_{i=1}^{r} φ_i(x). It follows that min_x φ(x) exists if and only if, for each i, either min_x φ_i(x) exists or ψ_i = ∅, and at least one minimum exists.

Taking the minimum of the minima that exist, we find the value in the statement.□
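The set identity behind this proof is elementary: the minimum of f(x, ·) over the union ψ(x) = ∪ᵢ ψᵢ(x) is the smallest of the minima over the nonempty pieces. A toy check (the value sets are our own illustration):

```python
# phi_i(x) = f(x, psi_i(x)) modeled as finite value sets; an empty set
# stands for psi_i(x) = empty (that follower objective has no minimizer).
pieces = [{3.0, 5.0}, set(), {2.5, 7.0}, {4.0}]

union = set().union(*pieces)

# The minimum over the union exists iff at least one piece is nonempty,
# and it equals the minimum of the minima of the nonempty pieces.
min_union = min(union)
min_of_mins = min(min(p) for p in pieces if p)
# both equal 2.5
```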

Theorem 4.2 Suppose M₁ is a compact manifold. If, for each x ∈ M₁, at least one partial function y ↦ F_i(x, y) is affine convex and has a critical point, then the problem (OBDP) has a solution.

Proof. Under our hypothesis, the set ψ(x) is nonvoid for any x, and the compactness assures the existence of min_x f(x, ψ(x)).□

In the next theorem, we shall use the Value Function Method or Utility Function Method.

Theorem 4.3 If a C¹ increasing scalarization partial function

$$y \mapsto L(x, y)$$

has a minimum, then there exists an index i such that ψ_i(x) ≠ ∅. Moreover, if f(x, y) is bounded, then the bilevel problem

has a solution.

Proof. Let min_y L(x, y) = L(x, y′). Suppose that for each i = 1, …, r we had min_y F_i(x, y) < F_i(x, y′). Then y′ would not be a minimum point for the partial function y ↦ L(x, y). Hence, there exists an index i such that y′ ∈ ψ_i(x).□

Boundedness of f implies that the bilevel problem has a solution once it is well-posed, and the fact that the problem is well-posed is shown in the first part of the proof.

4.1. Bilevel disjunctive programming algorithm

An important concept for making wise tradeoffs among competing objectives is bilevel disjunctive programming optimality on affine manifolds, introduced in this chapter.

We present an exact algorithm for obtaining the bilevel disjunctive solutions to the multi-objective optimization problem.

Step 1: Solve the follower problems

$$\psi_i(x) = \operatorname{argmin}_{y \in M_2} F_i(x, y),\qquad i = 1, \dots, r,\quad x \in M_1.$$

Let ψ(x) = ∪_{i=1}^{r} ψ_i(x) be the subset of M₂ representing the mapping of optimal solutions for the follower multi-objective function.

Step 2: Build the mapping f(x, ψ(x)).

Step 3: Solve the leader's program

$$\min_{x \in M_1} \min\{f(x, y) : y \in \psi(x)\}.$$
From a numerical point of view, we can use the Newton algorithm for optimization on affine manifolds, which is given in [19].
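Steps 1-3 can be prototyped by brute force on a toy instance (the functions and grids below are our own illustration, not from the chapter): the follower objectives F₁(x, y) = (y − x)² and F₂(x, y) = (y − x − 1)² give ψ(x) = {x, x + 1}, and the leader minimizes f(x, y) = x² + y² over y ∈ ψ(x) in the optimistic sense.

```python
# Brute-force sketch of the bilevel disjunctive algorithm on a toy instance.
# Follower objectives (our illustration): F1, F2; leader objective: f.

def F1(x, y): return (y - x)**2
def F2(x, y): return (y - x - 1)**2
def f(x, y):  return x**2 + y**2

def psi(x):
    # Step 1: here the followers' argmins are known in closed form:
    # argmin_y F1 = {x},  argmin_y F2 = {x + 1};  psi(x) is their union.
    return {x, x + 1}

def leader_value(x):
    # Step 2: evaluate f on the graph of psi and take the optimistic inner min.
    return min(f(x, y) for y in psi(x))

# Step 3: minimize the leader value over a grid on the leader manifold.
grid = [i / 100 for i in range(-200, 201)]
best_x = min(grid, key=leader_value)
best_y = min(psi(best_x), key=lambda y: f(best_x, y))
# optimistic solution: x = 0, y = 0, leader value 0
```

On genuine affine manifolds the two grid searches would be replaced by the Newton algorithm of [19]; the disjunctive structure (the union in `psi`) is unchanged.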

5. Models of bilevel disjunctive programming problems

The manifold M is understood from the context. The connection Γ^h_{ij} can be realized in each case by imposing convexity conditions.

Example 5.1 Let us solve the problem ([7], p. 7; [9]):


subject to


Both the lower-level and the upper-level optimization tasks have two objectives each. For a fixed y value, the feasible region of the lower-level problem is the area inside a circle with center at the origin (x₁ = x₂ = 0) and radius equal to y. The Pareto-optimal set for the lower-level optimization task, for fixed y, is the bottom-left quarter of the circle,

The linear constraint in the upper-level optimization task does not allow the entire quarter circle to be feasible for some y. Thus, at most a couple of points from the quarter circle belong to the Pareto-optimal set of the overall problem. Eichfelder [8] reported the following Pareto-optimal set of solutions

The Pareto-optimal front in the (F₁, F₂)-space can be written in parametric form


Example 5.2 Consider the bilevel programming problem

where the set-valued function is

Since F(x, y) = (x − y)² + x², we get

on the regions where the functions are defined.

Taking into account (2x − 1)² + x² > 0 and (2x + 1)² + x² > 0, it follows that (x, y) = (0, 0) is the unique optimistic optimal solution of the problem. Now, if the leader is not exact enough in choosing his solution, then the real outcome of the problem has an objective function value above 1, which is far away from the optimistic optimal value zero.

Example 5.3 Let F(x, y) = (F₁(x, y), F₂(x, y)) and consider a Pareto disjunctive problem

Then there appears a bilevel disjunctive programming problem of the form

This problem is interesting except in the case ψ(x) = ∅, ∀x. If y ↦ F₁(x, y) and y ↦ F₂(x, y) are convex functions, then ψ(x) ≠ ∅.

To write an example, we use

and we consider a bilevel disjunctive programming problem of the form

The objective f(x, y) = (x − y)² + x² and the multi-function ψ(x) produce a multi-function

In this context, we find the inferior envelope

and then

Since (2x − 1)² + x² > 0, the unique optimal solution is (x, y) = (0, 0).

If we consider only ψ₁(x) as active, then the unique optimal solution (0, 0) is maintained. If ψ₂(x) is active, then the optimal solution is (0, 1).


6. Properties of minimum functions

Let (M₁, Γ¹), the leader decision affine manifold, and (M₂, Γ²), the follower decision affine manifold, be two connected affine manifolds of dimensions n₁ and n₂, respectively. Starting from a function of two vector variables

$$f : M_1 \times M_2 \to \mathbb{R},\qquad (x, y) \mapsto f(x, y),$$

and taking the infimum over one variable, say y, we build a function

$$\varphi(x) = \inf_{y \in M_2} f(x, y),$$

which is called a minimum function.

A minimum function is usually specified by a point-to-set mapping a of the manifold M₁ into the subsets of a manifold M₂ and by a functional φ(x, y) on M₁ × M₂. In this context, some differential properties of such functions were previously examined in [4]. Now we add new properties related to monotonicity and convexity.

First we give a new proof of Brian White's theorem (see Mean Curvature Flow, p. 7, Internet 2017).

Theorem 6.1 Suppose that M₁ is compact, M₂ = [0, T] and f : M₁ × [0, T] → R. Let φ(t) = min_x f(x, t). If, for each x with φ(t) = f(x, t), we have (∂f/∂t)(x, t) ≥ 0, then φ is an increasing function.

Proof. We shall prove the statement in three steps.

(1) If f is continuous, then φ is (uniformly) continuous.

Indeed, f is continuous on the compact M₁ × [0, T], hence uniformly continuous. So, for ε > 0 there exists δ > 0 such that if |t₁ − t₂| < δ, then |f(x, t₁) − f(x, t₂)| < ε for any x ∈ M₁, or

$$f(x, t_2) - \varepsilon < f(x, t_1) < f(x, t_2) + \varepsilon.$$

On one hand, if we put φ(t₁) = f(x₁, t₁) and φ(t₂) = f(x₂, t₂), then we have

$$\varphi(t_1) = \min_x f(x, t_1) \le f(x_2, t_1) < f(x_2, t_2) + \varepsilon.$$

Hence min_x f(x, t₁) ≤ f(x₂, t₂) + ε, so φ(t₁) − φ(t₂) ≤ ε.

On the other hand,

$$\varphi(t_2) = \min_x f(x, t_2) \le f(x_1, t_2) < f(x_1, t_1) + \varepsilon.$$

Hence min_x f(x, t₂) ≤ f(x₁, t₁) + ε, so φ(t₂) − φ(t₁) ≤ ε.

Finally, |φ(t₁) − φ(t₂)| ≤ ε for |t₁ − t₂| < δ; that is, φ is (uniformly) continuous.

(2) Let us fix t₀ ∈ (0, T]. If φ(t₀) = f(x₀, t₀) and (∂f/∂t)(x₀, t₀) ≥ 0, then there exists δ > 0 such that φ(t) ≤ φ(t₀) for any t ∈ (t₀ − δ, t₀).

Suppose (∂f/∂t)(x₀, t₀) > 0; then there exists δ > 0 such that f(x₀, t) ≤ f(x₀, t₀) for each t ∈ (t₀ − δ, t₀). It follows that min_x f(x, t) ≤ f(x₀, t) ≤ f(x₀, t₀), and so φ(t) ≤ φ(t₀).

If (∂f/∂t)(x₀, t₀) = 0, then we use f̄(x, t) = f(x, t) + εt, ε > 0. For f̄ the above proof holds, and we let ε → 0.

(3) φ is an increasing function.

Let 0 ≤ a < b ≤ T and denote A = {t ∈ [a, b] : φ(t) ≤ φ(b)}. A is not empty. If α = inf A, then, by step (2), α < b and, by step (1), α ∈ A. If α > a, we could use step (2) for t₀ = α, and it would follow that α was not the lower bound of A. Hence α = a and φ(a) ≤ φ(b).

Remark The third step shows that a function having the properties (1) and (2) is increasing. For this, the continuity is essential; property (2) alone is not enough. For example, the function defined by φ(t) = t on [0, 1] and φ(t) = t − 1 on (1, 2] has only the property (2), but it is not increasing on [0, 2].

Remark Suppose that f is a C² function and min_x f(x, t) = f(x₀(t), t), where x₀(t) is an interior point of M₁. Since x₀(t) is a critical point, we have

$$\varphi'(t) = \frac{\partial f}{\partial x^i}\big(x_0(t), t\big)\,\dot{x}_0^i(t) + \frac{\partial f}{\partial t}\big(x_0(t), t\big) = \frac{\partial f}{\partial t}\big(x_0(t), t\big) \ge 0.$$

Consequently, φ(t) is an increasing function. If M₁ has a nonvoid boundary, then the monotonicity extends by continuity (see also the evolution of an extremum problem).
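Theorem 6.1 can be illustrated numerically: for f(x, t) = (x − t)² + t on the compact slice x ∈ [−2, 2], the minimizer is x₀(t) = t, the partial derivative ∂f/∂t at the minimizer equals 1 ≥ 0, and indeed φ(t) = min_x f(x, t) = t is increasing. A small check (the example function and grids are ours):

```python
# phi(t) = min over a grid of x in [-2, 2] of f(x, t) = (x - t)^2 + t.
# At the minimizer x0(t) = t we have df/dt = -2(x - t) + 1 = 1 >= 0,
# so Theorem 6.1 predicts that phi is increasing.

xs = [i / 100 for i in range(-200, 201)]

def phi(t):
    return min((x - t)**2 + t for x in xs)

ts = [i / 50 for i in range(0, 101)]      # t in [0, 2]
values = [phi(t) for t in ts]
increasing = all(v1 <= v2 + 1e-12 for v1, v2 in zip(values, values[1:]))
# phi(0) = 0, phi(2) = 2, and the sequence is nondecreasing
```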

Example 6.1 The single-time perspective of a function f : Rⁿ → R is the function g : Rⁿ × R₊₊ → R, g(x, t) = t f(x/t), dom g = {(x, t) : x/t ∈ dom f, t > 0}. The single-time perspective g is convex if f is convex.

The single-time perspective is an example verifying Theorem 6.1. Indeed, the critical point condition for g in x, g_x = 0, gives x = t x₀, where x₀ is a critical point of f. Consequently, φ(t) = min_x g(x, t) = t f(x₀). On the other hand, at the minimum point we have (∂g/∂t)(x, t) = f(x₀). Then φ(t) is increasing if f(x₀) ≥ 0, as in Theorem 6.1.
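The convexity of the perspective can also be probed by midpoint tests. For f(x) = x², the perspective is g(x, t) = t f(x/t) = x²/t on t > 0, and the sketch below checks the midpoint convexity inequality on random pairs of points (the sample f and ranges are our own choices):

```python
import random

# Perspective of f(x) = x^2:  g(x, t) = t * f(x / t) = x^2 / t, for t > 0.
def g(x, t):
    return t * (x / t)**2

random.seed(0)
ok = True
for _ in range(1000):
    x1, x2 = random.uniform(-5, 5), random.uniform(-5, 5)
    t1, t2 = random.uniform(0.1, 5), random.uniform(0.1, 5)
    mid = g((x1 + x2) / 2, (t1 + t2) / 2)
    # midpoint convexity: g((p + q)/2) <= (g(p) + g(q))/2
    ok = ok and mid <= (g(x1, t1) + g(x2, t2)) / 2 + 1e-9
```

The inequality holds on every sampled pair, consistent with x²/t being jointly convex (a quadratic-over-linear function) on t > 0.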

Theorem 6.2 Suppose that M₁ is compact and f : M₁ × M₂ → R. Let φ(y) = min_x f(x, y). If, for each x with φ(y) = f(x, y), we have (∂f/∂y^α)(x, y) ≥ 0, then φ(y) is a partially increasing function.

Proof. Suppose that f is a C² function and min_x f(x, y) = f(x₀(y), y), where x₀(y) is an interior point of M₁. Since x₀(y) is a critical point, we have

$$\frac{\partial \varphi}{\partial y^\alpha}(y) = \frac{\partial f}{\partial y^\alpha}\big(x_0(y), y\big) \ge 0.$$

Consequently, φ(y) is a partially increasing function. If M₁ has a non-void boundary, then the monotonicity extends by continuity.□

Theorem 6.3 Suppose that M₁ is compact and f : M₁ × M₂ → R. Let φ(y) = min_x f(x, y). If, for each x with φ(y) = f(x, y), we have d²_y f(x, y) ⪯ 0, then φ(y) is an affine concave function.

Proof. Without loss of generality, we work in the Euclidean case. Suppose that f is a C² function and min_x f(x, y) = f(x(y), y), where x(y) is an interior point of M₁. Since x(y) is a critical point, we must have

$$\frac{\partial f}{\partial x^i}\big(x(y), y\big) = 0.$$

Taking the partial derivative with respect to y^α and the scalar product with ∂x^i/∂y^β, it follows

On the other hand
Theorem 6.4 Let f : M₁ × M₂ → R be a C² function and

$$\varphi(y) = \min_x f(x, y) = f\big(x(y), y\big).$$

If the set A = {(x(y), y) : y ∈ M₂} is affine convex and f|_A is affine convex, then φ(y) is affine convex.

Proof. Suppose f is a C² function. At the points (x(y), y), we have
© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
