## Abstract

Bilevel optimization is a special kind of optimization in which one problem is embedded within another. The outer optimization task is commonly referred to as the upper-level task, and the inner one as the lower-level task; these problems therefore involve two kinds of variables, upper-level and lower-level variables. Bilevel optimization first appeared in game theory, when the German economist von Stackelberg published a book (1934) describing this hierarchical problem. Nowadays bilevel optimization problems arise in a number of real-world settings: transportation, economics, decision science, business, engineering, and so on. In this chapter, we provide a general formulation for the bilevel disjunctive optimization problem on affine manifolds. Such problems contain two levels of optimization tasks, one nested within the other. The outer problem is commonly referred to as the leader's (upper-level) optimization problem, and the inner problem as the follower's (lower-level) optimization problem. The two levels have their own objectives and constraints. Topics include affine convex functions, optimization with auto-parallel restrictions, affine convexity of posynomial functions, the bilevel disjunctive problem and an algorithm for it, models of bilevel disjunctive programming problems, and properties of minimum functions.

### Keywords

- convex programming
- affine manifolds
- optimization along curves
- bilevel disjunctive optimization
- minimum functions
- Mathematics Subject Classification 2010: 90C25, 90C29, 90C30

## 1. Affine convex functions

In optimization problems [16, 17, 19, 23, 24, 25, 26, 27], one can use an *affine manifold* as a pair

They are used for defining the convexity of subsets in

**Theorem 1.1** [1]

Let $M$ be a (Hausdorff, connected, smooth) compact $n$-manifold endowed with an affine connection $\nabla$, and let $x_0 \in M$. If the holonomy group $\mathrm{Hol}(x_0)$ (regarded as a subgroup of the group $GL(T_{x_0}M)$ of all the linear automorphisms of the tangent space $T_{x_0}M$) has compact closure, then $(M, \nabla)$ is autoparallely complete.

Let

The function

*Proof.* For given

as a PDE system (a particular case of a Frobenius–Mayer system of PDEs) with

Since,

it follows

Of course, this only means that the curvature tensor is zero on the topologically trivial region we used to set up our co-vector fields.

The following theorem is well-known [16, 17, 19, 23]. Due to its importance, now we offer new proofs (based on catastrophe theory, decomposing a tensor into a specific product, and using slackness variables).

*Proof.* For the Hessian

The first central idea of the proof is to use catastrophe theory, since almost all families

We eliminate the case of a maximum point, that is, Morse

At any critical point

**A direct proof based on decomposition of a tensor:** Let

Suppose

where

and the tensor

Suppose

Here we cannot plug in the point

To reach a contradiction, we fix an auto-parallel

But this result depends on the direction

In some particular cases, we can eliminate the dependence on the vector

are sufficient to do this.

A particular condition for independence on

Under this particular condition, we can show that connections of the previous type can be built that are valid everywhere.

### 1.1. Illustrative examples

Let us illustrate our previous statements by the following examples.

The next example shows what happens if we step outside the conditions of the previous theorem.

Our chapter is based also on some ideas in: [3] (convex mappings between Riemannian manifolds), [7] (geometric modeling in probability and statistics), [13] (arc length in metric and Finsler manifolds), [14] (applications of Hahn-Banach principle to moment and optimization problems), [21] (geodesic connectedness of semi-Riemannian manifolds), and [28] (tangent and cotangent bundles). For algorithms, we recommend the paper [20] (sequential and parallel algorithms).

## 2. Optimizations with autoparallel restrictions

### 2.1. Direct theory

The auto-parallel curves

Obviously, the complete notation is

### 2.2. Theory via the associated spray

This point of view regarding extrema comes from paper [22].

The second order system of auto-parallels induces a spray (special vector field)

The solutions

*We consider the Volterra-Hamilton ODE system* [2].
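Auto-parallel curves solve the second-order system $\ddot x^i + \Gamma^i_{jk}\,\dot x^j \dot x^k = 0$. As a minimal numerical sketch (one-dimensional, with an assumed constant connection coefficient, not taken from [2]), we can integrate this ODE with a standard RK4 step and compare against the closed-form solution available in this special case:

```python
import math

# Auto-parallel curves in one dimension solve x'' + GAMMA * (x')**2 = 0.
# GAMMA is an assumed constant connection coefficient, for illustration only.
GAMMA = 0.5

def step(x, v, dt):
    # One RK4 step for the first-order system (x' = v, v' = -GAMMA * v**2).
    def rhs(x, v):
        return v, -GAMMA * v * v
    k1x, k1v = rhs(x, v)
    k2x, k2v = rhs(x + dt / 2 * k1x, v + dt / 2 * k1v)
    k3x, k3v = rhs(x + dt / 2 * k2x, v + dt / 2 * k2v)
    k4x, k4v = rhs(x + dt * k3x, v + dt * k3v)
    return (x + dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

def autoparallel(x0, v0, t, n=1000):
    x, v, dt = x0, v0, t / n
    for _ in range(n):
        x, v = step(x, v, dt)
    return x

# For constant GAMMA the exact solution is x(t) = x0 + log(1 + GAMMA*v0*t)/GAMMA.
exact = math.log(1 + GAMMA * 1.0 * 1.0) / GAMMA
assert abs(autoparallel(0.0, 1.0, 1.0) - exact) < 1e-9
```

The same scheme extends to several dimensions by evaluating the Christoffel symbols $\Gamma^i_{jk}(x)$ at each stage.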

## 3. Affine convexity of posynomial functions

For the general theory regarding geometric programming (based on posynomial, signomial functions, etc.), see [11].
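Before the statements, a numerical illustration in the classical Euclidean setting (coefficients below are assumptions, not taken from the chapter): a posynomial $f(x)=\sum_k c_k \prod_i x_i^{a_{ki}}$ with positive coefficients $c_k$ and arbitrary real exponents becomes convex after the log-log substitution $x_i = e^{y_i}$, a standard fact of geometric programming:

```python
import math, random

# A posynomial: f(x) = sum_k c_k * prod_i x_i**a[k][i], with c_k > 0, x_i > 0.
# Coefficients and exponents below are illustrative assumptions.
c = [2.0, 0.5, 1.5]                          # positive coefficients
a = [[1.0, -0.5], [2.0, 1.0], [-1.0, 3.0]]   # arbitrary real exponents

def posynomial(x):
    return sum(ck * math.prod(xi ** aki for xi, aki in zip(x, ak))
               for ck, ak in zip(c, a))

def g(y):
    # Log-log transform: g(y) = log f(exp(y)) is convex in y.
    return math.log(posynomial([math.exp(yi) for yi in y]))

# Numerical midpoint-convexity check on random pairs of points.
random.seed(0)
for _ in range(1000):
    u = [random.uniform(-2, 2) for _ in range(2)]
    v = [random.uniform(-2, 2) for _ in range(2)]
    mid = [(ui + vi) / 2 for ui, vi in zip(u, v)]
    assert g(mid) <= (g(u) + g(v)) / 2 + 1e-12
print("midpoint convexity verified on 1000 random pairs")
```

The substitution equips the positive orthant with an affine structure in which posynomials behave like convex functions, which is the mechanism behind the affine-convexity statements below.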

*Proof.* A posynomial function has the form

where all the coefficients

joining the points

It follows

One term in this sum is of the form

*Proof.* A signomial function has the form

where all the exponents

we apply the theorem and the implication

Proudnikov [18] gives necessary and sufficient conditions for representing a Lipschitz multivariable function as a difference of two convex functions. An algorithm and a geometric interpretation of this representation are also given. The outcome of the algorithm is a sequence of pairs of convex functions that converges uniformly to a pair of convex functions when the conditions of the formulated theorems are satisfied.
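A small numerical sketch of such a difference-of-convex (d.c.) representation (this illustrates the concept only, not Proudnikov's algorithm; the decomposition below is a hand-picked assumption): $\sin(x)$ on $[-3,3]$ can be written as $g - h$ with $g(x)=\sin(x)+x^2$ and $h(x)=x^2$, both convex since $g''(x)=2-\sin(x)\ge 1$ and $h''(x)=2$:

```python
import math, random

# d.c. decomposition sketch: f = g - h with g, h convex.
# f(x) = sin(x); g(x) = sin(x) + x**2 (g'' = 2 - sin(x) >= 1); h(x) = x**2.
f = math.sin
g = lambda x: math.sin(x) + x * x
h = lambda x: x * x

random.seed(1)
for _ in range(1000):
    u, v = random.uniform(-3, 3), random.uniform(-3, 3)
    mid = (u + v) / 2
    # Midpoint convexity of g and h, and the identity f = g - h.
    assert g(mid) <= (g(u) + g(v)) / 2 + 1e-12
    assert h(mid) <= (h(u) + h(v)) / 2 + 1e-12
    assert abs(f(mid) - (g(mid) - h(mid))) < 1e-12
```

Adding a sufficiently strong convex quadratic is the usual trick for functions with bounded second derivative; Proudnikov's construction handles the general Lipschitz case.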

## 4. Bilevel disjunctive problem

Let the *leader decision affine manifold* and the *follower decision affine manifold* be two connected affine manifolds of given dimensions, and let a *leader objective function* and a *follower multiobjective function* be given.

The components

A *bilevel optimization problem* means a decision of the leader with regard to a multi-objective optimum of the follower (in fact, a constrained optimization problem whose constraints are obtained from optimization problems). For details, see [5, 10, 12].

The *disjunctive solution set of a follower multiobjective optimization problem* is defined by

(1) the set-valued function

where

or

(2) the set-valued function

where

We deal with two bilevel problems:

(1) The *optimistic bilevel problem*.

In this case, the follower cooperates with the leader; that is, for each

(2) The *pessimistic bilevel problem*.

In this case, there is no cooperation between the leader and the follower, and the leader expects the worst scenario; that is, for each

So, a general optimization problem becomes a pessimistic bilevel problem.
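The two problems can be contrasted on a toy finite model (a discrete illustration, not the chapter's manifold setting; all sets and objectives below are assumptions): the follower's disjunctive solution set is the union of the argmin sets of each scalar objective, and the leader minimizes over it (optimistic) or guards against the worst element of it (pessimistic):

```python
# Toy discrete bilevel disjunctive problem (all data illustrative).
X = [0, 1, 2, 3]          # leader decisions
Y = [0, 1, 2, 3, 4]       # follower decisions

F = lambda x, y: (x - 2) ** 2 + y          # leader objective
f1 = lambda x, y: (y - x) ** 2             # follower objective 1
f2 = lambda x, y: (y - 3) ** 2             # follower objective 2

def argmins(h, x):
    best = min(h(x, y) for y in Y)
    return {y for y in Y if h(x, y) == best}

def disjunctive_set(x):
    # Union of the solution sets of the scalar follower problems.
    return argmins(f1, x) | argmins(f2, x)

optimistic = min(min(F(x, y) for y in disjunctive_set(x)) for x in X)
pessimistic = min(max(F(x, y) for y in disjunctive_set(x)) for x in X)
print(optimistic, pessimistic)  # → 2 3
```

Here the pessimistic value is never below the optimistic one, reflecting the leader's lack of cooperation from the follower.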

*Proof.* Let us consider the multi-functions

Taking the minimum of the minima that exist, we find

*Proof.* Under our hypothesis, the set

In the next theorem, we shall use the Value Function Method, or Utility Function Method. □

*Proof.* Let

Boundedness of

### 4.1. Bilevel disjunctive programming algorithm

An important concept for making wise tradeoffs among competing objectives is bilevel disjunctive programming optimality on affine manifolds, introduced in this chapter.

We present an exact algorithm for obtaining the bilevel disjunctive solutions of the multi-objective optimization problem.

**Step 1**: Solve

Let

**Step 2**: Build the mapping

**Step 3**: Solve the following leader’s program

From a numerical point of view, we can use the Newton algorithm for optimization on affine manifolds, which is given in [19].
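The three steps above can be sketched by discretization (a grid-search sketch on the unit square, not the Newton method of [19]; the objectives and domains below are illustrative assumptions):

```python
# Grid-search sketch of the bilevel disjunctive algorithm on [0,1]x[0,1].
# All objectives below are illustrative assumptions.
n = 101
grid = [i / (n - 1) for i in range(n)]

F = lambda x, y: (x - 0.5) ** 2 + (y - 0.5) ** 2    # leader objective
fs = [lambda x, y: (y - x) ** 2,                    # follower objectives
      lambda x, y: (y - 1.0) ** 2]

def follower_solutions(x, tol=1e-12):
    # Step 1: solve each scalar follower problem for fixed x;
    # Step 2: their union is the disjunctive solution set Y(x).
    sols = set()
    for f in fs:
        best = min(f(x, y) for y in grid)
        sols |= {y for y in grid if f(x, y) <= best + tol}
    return sols

# Step 3: the (optimistic) leader program over the graph of x -> Y(x).
value, argmin = min(
    (min(F(x, y) for y in follower_solutions(x)), x) for x in grid
)
print(round(value, 6), round(argmin, 6))  # → 0.0 0.5
```

On a manifold, the grid search over each scalar problem would be replaced by the Newton iteration of [19] expressed in local coordinates.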

## 5. Models of bilevel disjunctive programming problems

The manifold

*Let us solve the problem* (see [7]; [9], p. 7):

Thus, at most a couple of points from the quarter circle belong to the Pareto-optimal set of the overall problem. Eichfelder [8] reported the following Pareto-optimal set of solutions

## 6. Properties of minimum functions

Let the *leader decision affine manifold* and the *follower decision affine manifold* be two connected affine manifolds of given dimensions. Taking the infimum with respect to one variable, say the second, we obtain a function

which is called the *minimum function*.

A *minimum function* is usually specified by a pointwise mapping
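As a numerical illustration (Euclidean, with an assumed quadratic integrand): partial minimization of a jointly convex function produces a convex minimum function, and for $f(x,y)=(x-y)^2+y^2$ the minimizer in $y$ is $y=x/2$, so $m(x)=\inf_y f(x,y)=x^2/2$ in closed form:

```python
# Minimum function m(x) = inf_y f(x, y) for an assumed jointly convex f.
# Here f(x, y) = (x - y)**2 + y**2, so the minimizer is y = x/2 and
# m(x) = x**2 / 2 exactly.
def f(x, y):
    return (x - y) ** 2 + y ** 2

def m(x, lo=-10.0, hi=10.0, n=200001):
    # Grid approximation of the infimum over y on [lo, hi].
    return min(f(x, lo + (hi - lo) * i / (n - 1)) for i in range(n))

for x in [-1.0, 0.0, 0.5, 2.0]:
    assert abs(m(x) - x * x / 2) < 1e-6
```

The grid infimum matches the closed form to within the discretization error, which is quadratic in the grid step since the minimum is nondegenerate.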

First we give a new proof of Brian White's theorem (see *Mean Curvature Flow*, p. 7, Internet 2017).

*Proof.* We shall prove the statement in three steps.

(1)

Indeed,

On one hand, if we put

Hence

On the other hand,

Hence

Finally,

(2)

Suppose

If

(3)

Let

**Remark.** The third step shows that a function having properties (1) and (2) is increasing. For this, continuity is essential; property (2) alone is not enough. For example, consider the function defined by

**Remark.** Suppose that

Consequently,

□

**Theorem 4.1.**

*Proof.* Suppose that

Consequently,

*Proof.* Without loss of generality, we work in the Euclidean case. Suppose that

Taking the partial derivative with respect to

On the other hand,

□

*Proof.* Suppose