1. Affine convex functions
In optimization problems [16, 17, 19, 23, 24, 25, 26, 27], one can use an affine manifold as a pair $(M, \Gamma)$, where $M$ is a smooth real $n$-dimensional manifold and $\Gamma$ is a symmetric affine connection on $M$. The connection $\Gamma$ produces auto-parallel curves $x(t)$ via the ODE system
$$\ddot{x}^h(t) + \Gamma^h_{ij}(x(t))\,\dot{x}^i(t)\,\dot{x}^j(t) = 0.$$
These curves are used to define the convexity of subsets of $M$ and the convexity of functions $f: D\subset M\to\mathbb{R}$ (see also [3, 6]).
Definition 1.1 An affine manifold $(M, \Gamma)$ is called auto-parallely complete if any auto-parallel $x(t)$ starting at $p\in M$ is defined for all values of the parameter $t\in\mathbb{R}$.
Theorem 1.1 [1] Let $M$ be a (Hausdorff, connected, smooth) compact $n$-manifold endowed with an affine connection $\Gamma$ and let $p\in M$. If the holonomy group $\mathrm{Hol}_p(\Gamma)$ (regarded as a subgroup of the group $GL(T_pM)$ of all linear automorphisms of the tangent space $T_pM$) has compact closure, then $(M, \Gamma)$ is auto-parallely complete.
Let $(M, \Gamma)$ be an auto-parallely complete affine manifold. For a $C^2$ function $f: M\to\mathbb{R}$, we define the tensor $\mathrm{Hess}_\Gamma f$ of components
$$(\mathrm{Hess}_\Gamma f)_{ij} = \frac{\partial^2 f}{\partial x^i \partial x^j} - \Gamma^h_{ij}\,\frac{\partial f}{\partial x^h}.$$
Definition 1.2 A $C^2$ function $f: M\to\mathbb{R}$ is called:
(1) linear affine with respect to $\Gamma$ if $\mathrm{Hess}_\Gamma f = 0$ throughout;
(2) affine convex (convex with respect to $\Gamma$) if $\mathrm{Hess}_\Gamma f \succeq 0$ (positive semidefinite) throughout.
The function $f$ is: (1) linear affine if its restriction $f(x(t))$ to each auto-parallel $x(t)$ satisfies $f(x(t)) = at + b$, for some numbers $a, b$ that may depend on $x(t)$; (2) affine convex if its restriction $f(x(t))$ is convex along each auto-parallel $x(t)$.
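As a quick numerical illustration (not part of the original argument), the affine Hessian can be assembled in coordinates and tested for positive semidefiniteness pointwise. The sketch below assumes SymPy and uses the sample function of Example 1.1 with the flat connection $\Gamma = 0$; the helper name affine_hessian is ours.

```python
# Sketch (illustration only, assuming SymPy): build the affine Hessian
# (Hess_Gamma f)_ij = d^2 f/dx^i dx^j - Gamma^h_ij * df/dx^h and test it pointwise.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
coords = [x1, x2]
f = x1**3 + x2**3 + 3*x1 + 3*x2            # sample function (see Example 1.1 below)

# Gamma[h][i][j] = Gamma^h_ij; here the flat connection Gamma = 0
Gamma = [[[0, 0], [0, 0]], [[0, 0], [0, 0]]]

def affine_hessian(f, coords, Gamma):
    n = len(coords)
    H = sp.zeros(n, n)
    for i in range(n):
        for j in range(n):
            H[i, j] = sp.diff(f, coords[i], coords[j]) \
                      - sum(Gamma[h][i][j]*sp.diff(f, coords[h]) for h in range(n))
    return sp.simplify(H)

H = affine_hessian(f, coords, Gamma)
print(H)                                               # diag(6*x1, 6*x2)
print(H.subs({x1: -1, x2: -1}).is_positive_semidefinite)   # False: not convex w.r.t. Gamma = 0
```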
Theorem 1.2 If there exists a nonconstant linear affine function $f$ on $(M, \Gamma)$, then the curvature tensor field $R^h_{\;ikj}$ is in $\mathrm{Ker}\, df$.
Proof. For given $\Gamma$, if we consider
$$\frac{\partial^2 f}{\partial x^i \partial x^j} = \Gamma^h_{ij}\,\frac{\partial f}{\partial x^h}$$
as a PDE system (a particular case of a Frobenius–Mayer system of PDEs), with $\frac{1}{2}n(n+1)$ equations and the unknown function $f$, then we need the complete integrability conditions
$$\frac{\partial^3 f}{\partial x^i \partial x^j \partial x^k} = \frac{\partial^3 f}{\partial x^k \partial x^i \partial x^j}.$$
Since
$$\frac{\partial^3 f}{\partial x^i \partial x^j \partial x^k} = \left(\frac{\partial \Gamma^h_{ij}}{\partial x^k} + \Gamma^l_{ij}\Gamma^h_{kl}\right)\frac{\partial f}{\partial x^h},$$
it follows that
$$\frac{\partial f}{\partial x^h}\,R^h_{\;ikj} = 0, \qquad R^h_{\;ikj} = \frac{\partial \Gamma^h_{ij}}{\partial x^k} - \frac{\partial \Gamma^h_{ki}}{\partial x^j} + \Gamma^l_{ij}\Gamma^h_{kl} - \Gamma^l_{ki}\Gamma^h_{jl}.$$
Corollary 1.1 If there exist $n$ linear affine functions $f^l$, $l = 1, \dots, n$, on $(M, \Gamma)$, whose differentials $df^l$ are linearly independent, then $\Gamma$ is flat, that is, $R^h_{\;ikj} = 0$.
Of course, this only means that the curvature tensor is zero on the topologically trivial region used to set up the co-vector fields $df^l(x)$. But we can always cover any manifold by an atlas of topologically trivial regions, so this allows us to deduce that the curvature tensor vanishes throughout the manifold.
Remark 1.1 There is actually no need to extend the $df^l(x)$ to the entire manifold. If this could be done, then the $df^l(x)$ would be everywhere nonzero co-vector fields; but there are topologies, for example $S^2$, on which we know such fields do not exist. Therefore, there are topological manifolds for which we are forced to work on topologically trivial regions.
The following theorem is well known [16, 17, 19, 23]. Due to its importance, we now offer new proofs (based on catastrophe theory, on decomposing a tensor into a specific product, and on using slackness variables).
Theorem 1.3 Let $f: M\to\mathbb{R}$ be a $C^2$ function.
(1) If $f$ is regular or has only one minimum point, then there exists a connection $\Gamma$ such that $f$ is affine convex.
(2) If $f$ has a maximum point $x_0$, then there is no connection $\Gamma$ making $f$ affine convex throughout.
Proof. For the Hessian $(\mathrm{Hess}_\Gamma f)_{ij}$ to be positive semidefinite, we need $n$ conditions in the form of inequalities and equalities. The number of unknowns $\Gamma^h_{ij}$ is $\frac{1}{2}n^2(n+1)$. The inequalities can be replaced by equalities using slackness variables.
The first central idea of the proof is to use catastrophe theory, since almost all families $f(x, c)$, $x = (x^1, \dots, x^n)\in\mathbb{R}^n$, $c = (c^1, \dots, c^m)\in\mathbb{R}^m$, of real differentiable functions with $m\le 4$ parameters are structurally stable and are equivalent, in the vicinity of any point, to one of the canonical catastrophe forms listed in [15].
We eliminate the case with a maximum point, that is, the Morse 0-saddle, and the saddle point. Around each critical point (in a chart), the canonical form $f(x, c)$ is affine convex with respect to appropriate locally defined linear connections that can be found easily. Using changes of coordinates and a partition of unity, we glue all these connections into a global one, making $f(x, c)$ affine convex on $M$.
At any critical point $x_0$, the affine Hessian $\mathrm{Hess}_\Gamma f$ reduces to the Euclidean Hessian $\frac{\partial^2 f}{\partial x^i \partial x^j}(x_0)$. Then the maximum-point condition or the saddle condition contradicts the affine convexity condition.
A direct proof based on the decomposition of a tensor: Let $(M, \Gamma)$ be an affine manifold and $f: M\to\mathbb{R}$ a $C^2$ function.
Suppose $f$ has no critical points (is regular). If the function $f$ is not convex with respect to $\Gamma$, we look for a new connection $\bar\Gamma^h_{ij} = \Gamma^h_{ij} + T^h_{ij}$, with a tensor field $T^h_{ij}$ as the unknown, such that
$$\frac{\partial^2 f}{\partial x^i \partial x^j}(x) - \bar\Gamma^h_{ij}(x)\,\frac{\partial f}{\partial x^h}(x) = \sigma_{ij}(x), \quad x\in M,$$
where $\sigma_{ij}(x)$ is a positive semidefinite tensor. A very particular solution is the decomposition $T^h_{ij}(x) = a^h(x)\, b_{ij}(x)$, where the vector field $a$ has the property
$$(D_a f)(x) = a^h(x)\,\frac{\partial f}{\partial x^h}(x) \ne 0, \quad x\in M,$$
and the tensor $b_{ij}$ is
$$b_{ij}(x) = \frac{1}{(D_a f)(x)}\left(\frac{\partial^2 f}{\partial x^i \partial x^j}(x) - \Gamma^h_{ij}(x)\,\frac{\partial f}{\partial x^h}(x) - \sigma_{ij}(x)\right), \quad x\in M.$$
Remark 1.2 The connection $\bar\Gamma^h_{ij}$ depends strongly on both the function $f$ and the tensor field $\sigma_{ij}$.
Suppose $f$ has a minimum point $x_0$. In this case, observe that we must have the condition $\sigma_{ij}(x_0) = \frac{\partial^2 f}{\partial x^i \partial x^j}(x_0)$. Can we carry out the previous reasoning for $x\ne x_0$ and then extend the obtained connection by continuity? The answer is generally negative. Indeed, let us compute
$$b_{ij}(x_0) = \lim_{x\to x_0}\frac{1}{(D_a f)(x)}\left(\frac{\partial^2 f}{\partial x^i \partial x^j}(x) - \Gamma^h_{ij}(x)\,\frac{\partial f}{\partial x^h}(x) - \sigma_{ij}(x)\right).$$
Here we cannot simply plug in the point $x_0$, because we get $\frac{0}{0}$, an indeterminate form.
To obtain a contradiction, we fix an auto-parallel $\gamma(t)$, $t\in[0, \epsilon)$, starting from the minimum point $x_0 = \gamma(0)$ and tangent to $\dot\gamma(0) = v$, and we compute (via l'Hôpital's rule)
$$b_{ij}(x_0, v) = \lim_{t\to 0} b_{ij}(\gamma(t)) = \frac{\left(\dfrac{\partial^3 f}{\partial x^i \partial x^j \partial x^k}(x_0) - \Gamma^h_{ij}(x_0)\,\dfrac{\partial^2 f}{\partial x^h \partial x^k}(x_0) - \dfrac{\partial \sigma_{ij}}{\partial x^k}(x_0)\right)v^k}{a^h(x_0)\,\dfrac{\partial^2 f}{\partial x^h \partial x^k}(x_0)\,v^k}.$$
But this result depends on the direction $v$ (it takes different values along two different auto-parallels).
In some particular cases, we can eliminate the dependence on the vector $v$. For example, the conditions
$$\frac{\partial^3 f}{\partial x^i \partial x^j \partial x^l}(x_0) - \Gamma^h_{ij}(x_0)\,\frac{\partial^2 f}{\partial x^h \partial x^l}(x_0) - \frac{\partial \sigma_{ij}}{\partial x^l}(x_0) = \rho\left(\frac{\partial^3 f}{\partial x^i \partial x^j \partial x^k}(x_0) - \Gamma^h_{ij}(x_0)\,\frac{\partial^2 f}{\partial x^h \partial x^k}(x_0) - \frac{\partial \sigma_{ij}}{\partial x^k}(x_0)\right),$$
$$a^h(x_0)\,\frac{\partial^2 f}{\partial x^h \partial x^l}(x_0) = \rho\, a^h(x_0)\,\frac{\partial^2 f}{\partial x^h \partial x^k}(x_0)$$
are sufficient for this.
A particular condition for independence of $v$ is
$$\frac{\partial^3 f}{\partial x^i \partial x^j \partial x^k}(x_0) - \Gamma^h_{ij}(x_0)\,\frac{\partial^2 f}{\partial x^h \partial x^k}(x_0) - \frac{\partial \sigma_{ij}}{\partial x^k}(x_0) = 0.$$
Under this particular condition, one can show that connections of the previous type can be built everywhere.
1.1. Illustration through examples
Let us illustrate our previous statements by the following examples.
Example 1.1 (for the first part of the theorem) Let us consider the function $f:\mathbb{R}^2\to\mathbb{R}$, $f(x, y) = x^3 + y^3 + 3x + 3y$ and $\Gamma^h_{ij} = 0$, $i, j, h = 1, 2$. Then $\frac{\partial f}{\partial x} = 3x^2 + 3$, $\frac{\partial f}{\partial y} = 3y^2 + 3$, and $f$ has no critical point. Moreover, the Euclidean Hessian of $f$ is not positive semidefinite everywhere. Let us perform the above construction for $\sigma_{ij}(x, y) = \delta_{ij}$. Taking $a^1 = a^2 = 1$, we obtain the connection
$$\bar\Gamma^h_{11} = \frac{6x - 1}{3(x^2 + y^2 + 2)}, \quad \bar\Gamma^h_{22} = \frac{6y - 1}{3(x^2 + y^2 + 2)}, \quad \bar\Gamma^h_{12} = \bar\Gamma^h_{21} = 0, \quad h = 1, 2,$$
which is not unique.
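A sketch of how this construction can be checked symbolically (assuming SymPy; all variable names are ours): with $a = (1, 1)$ and $\sigma = \delta$, the affine Hessian of $f$ with respect to $\bar\Gamma$ should reduce to the identity matrix.

```python
# Sketch (illustration only, assuming SymPy): check that the connection of Example 1.1
# turns f(x,y) = x^3 + y^3 + 3x + 3y into an affine convex function, i.e. that
# (Hess_{Gamma_bar} f)_ij = sigma_ij = delta_ij everywhere.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**3 + y**3 + 3*x + 3*y
sigma = sp.eye(2)                                   # chosen right-hand side
Daf = sp.diff(f, x) + sp.diff(f, y)                 # a = (1, 1), so D_a f = f_x + f_y

# b_ij = (f_ij - sigma_ij)/D_a f ;  Gamma_bar^h_ij = a^h b_ij = b_ij for h = 1, 2
b = sp.Matrix(2, 2, lambda i, j: (sp.diff(f, [x, y][i], [x, y][j]) - sigma[i, j])/Daf)

grad = [sp.diff(f, x), sp.diff(f, y)]
Hess_bar = sp.Matrix(2, 2, lambda i, j:
    sp.diff(f, [x, y][i], [x, y][j]) - sum(b[i, j]*grad[h] for h in range(2)))
print(sp.simplify(Hess_bar))    # identity matrix, hence positive definite everywhere
```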
Example 1.2 (for one minimum point) Let us consider the function $f:\mathbb{R}^2\to\mathbb{R}$, $f(x, y) = 1 - e^{-(x^2+y^2)}$ and $\Gamma^h_{ij} = 0$, $i, j, h = 1, 2$. Then $\frac{\partial f}{\partial x} = 2x e^{-(x^2+y^2)}$, $\frac{\partial f}{\partial y} = 2y e^{-(x^2+y^2)}$, and $f$ has a unique critical point, the minimum point $(0, 0)$. However, the Euclidean Hessian of $f$ is not positive semidefinite everywhere. We repeat the previous reasoning for $\sigma_{ij} = 2 e^{-(x^2+y^2)}\delta_{ij}$, $a^1 = \frac{\partial f}{\partial x}$, $a^2 = \frac{\partial f}{\partial y}$. Hence we obtain $\bar\Gamma^h_{ij} = T^h_{ij}$,
$$\bar\Gamma^1_{11} = \frac{-2x^3}{x^2+y^2}, \quad \bar\Gamma^1_{12} = \bar\Gamma^1_{21} = \frac{-2x^2 y}{x^2+y^2}, \quad \bar\Gamma^1_{22} = \frac{-2x y^2}{x^2+y^2},$$
$$\bar\Gamma^2_{11} = \frac{-2x^2 y}{x^2+y^2}, \quad \bar\Gamma^2_{12} = \bar\Gamma^2_{21} = \frac{-2x y^2}{x^2+y^2}, \quad \bar\Gamma^2_{22} = \frac{-2y^3}{x^2+y^2}.$$
Observe that $\lim_{(x,y)\to(0,0)} T^h_{ij}(x, y) = 0$. Hence we take $\bar\Gamma^h_{ij}(0, 0) = 0$.
The next example shows what happens if we step outside the conditions of the previous theorem.
Example 1.3 Let us take the function $f:\mathbb{R}\to\mathbb{R}$, $f(x) = x^3$, where the critical point $x = 0$ is an inflection point. We take $\Gamma(x) = -1 - \frac{2}{x^2}$, which is not defined at the critical point $x = 0$, but the convexity relation is realized by prolongation,
$$\sigma(x) = f''(x) - \Gamma(x) f'(x) = 3(x^2 + 2x + 2) > 0, \quad \forall x\in\mathbb{R}.$$
Let us consider the ODE of auto-parallels
$$\ddot x(t) - \left(1 + \frac{2}{t^2}\right)\dot x(t)^2 = 0, \quad t\ne 0.$$
The solutions
$$x(t) = -\frac{1}{2}\ln\left|-2 + t^2 - ct\right| + \frac{c}{\sqrt{8 + c^2}}\,\operatorname{arctanh}\frac{2t - c}{\sqrt{8 + c^2}} + c_1$$
are auto-parallels on $(\mathbb{R}\setminus\{0, t_1, t_2\}, \Gamma)$, where $t_1, t_2$ are the real solutions of $-2 + t^2 - ct = 0$. These curves are extended at $t = 0$ by continuity. The manifold $(\mathbb{R}, \Gamma)$ is not auto-parallely complete. Since the image $x(\mathbb{R})$ is not a "segment", the function $f:\mathbb{R}\to\mathbb{R}$, $f(x) = x^3$ is not globally convex.
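The closed-form solution above can be verified directly by substitution. The following sketch (assuming SymPy; purely illustrative) checks the residual of the auto-parallel ODE on an interval where $-2 + t^2 - ct > 0$, so the absolute value in the logarithm can be dropped.

```python
# Sketch (illustration only, assuming SymPy): verify that the curves of Example 1.3
# satisfy x''(t) - (1 + 2/t^2) x'(t)^2 = 0 away from t = 0, t_1, t_2.
import sympy as sp

t, c, c1 = sp.symbols('t c c1', real=True)
r = sp.sqrt(8 + c**2)
# On an interval where -2 + t**2 - c*t > 0 the absolute value in the logarithm drops out.
x = -sp.Rational(1, 2)*sp.log(-2 + t**2 - c*t) + (c/r)*sp.atanh((2*t - c)/r) + c1

residual = sp.diff(x, t, 2) - (1 + 2/t**2)*sp.diff(x, t)**2
print(sp.simplify(residual))    # expected: 0
```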
Remark 1.3 For $n\ge 2$, there exist $C^1$ functions $\varphi:\mathbb{R}^n\to\mathbb{R}$ which have two minimum points without having any other extremum point. For example,
$$\varphi(x^1, x^2) = \left((x^1)^2 - 1\right)^2 + \left((x^1)^2 x^2 - x^1 - 1\right)^2$$
has two (global) minimum points $p = (-1, 0)$, $q = (1, 2)$.
The restriction
$$\varphi(x^1, x^2) = (x^1)^4 + (x^1)^4 (x^2)^2 + 2x^1 + 2 - \left((x^1)^2 + 2(x^1)^3 x^2 + 2(x^1)^2 x^2\right), \quad x^1 > 0,\ x^2 > 0,$$
is a difference of two affine convex functions (see Section 3).
Our chapter is also based on some ideas in: [3] (convex mappings between Riemannian manifolds), [7] (geometric modeling in probability and statistics), [13] (arc length in metric and Finsler manifolds), [14] (applications of the Hahn–Banach principle to moment and optimization problems), [21] (geodesic connectedness of semi-Riemannian manifolds), and [28] (tangent and cotangent bundles). For algorithms, we recommend the paper [20] (sequential and parallel algorithms).
2. Optimizations with autoparallel restrictions
2.1. Direct theory
The auto-parallel curves $x(t)$ on the affine manifold $(M, \Gamma)$ are solutions of the second order ODE system
$$\ddot x^h(t) + \Gamma^h_{ij}(x(t))\,\dot x^i(t)\,\dot x^j(t) = 0, \quad x(t_0) = x_0, \quad \dot x(t_0) = \xi_0.$$
Obviously, the complete notation is $x(t; x_0, \xi_0)$, with
$$x(t_0; x_0, \xi_0) = x_0, \quad \dot x(t_0; x_0, \xi_0) = \xi_0.$$
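Numerically, this initial value problem can be integrated with any standard ODE solver. The sketch below (assuming NumPy and SciPy; the function autoparallel and its signature are ours) encodes the connection as an array-valued map $x\mapsto\Gamma^h_{ij}(x)$ and recovers the straight lines of the flat case as a sanity check.

```python
# Sketch (illustration only, assuming NumPy/SciPy): integrate the auto-parallel
# initial value problem x'' + Gamma(x) x' x' = 0, x(t0) = x0, x'(t0) = xi0,
# for a user-supplied connection Gamma(x) returned as an array Gamma[h, i, j].
import numpy as np
from scipy.integrate import solve_ivp

def autoparallel(Gamma, x0, xi0, t_span, n_pts=200):
    n = len(x0)
    def rhs(t, state):
        x, v = state[:n], state[n:]
        G = Gamma(x)                                   # shape (n, n, n)
        acc = -np.einsum('hij,i,j->h', G, v, v)        # x'' = -Gamma^h_ij x'^i x'^j
        return np.concatenate([v, acc])
    t_eval = np.linspace(*t_span, n_pts)
    sol = solve_ivp(rhs, t_span, np.concatenate([x0, xi0]), t_eval=t_eval, rtol=1e-9)
    return sol.t, sol.y[:n].T                          # times and points x(t; x0, xi0)

# Flat connection: auto-parallels are the straight lines x0 + t*xi0.
flat = lambda x: np.zeros((2, 2, 2))
t, xs = autoparallel(flat, np.array([0., 0.]), np.array([1., 2.]), (0., 1.))
```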
Definition 2.1 Let $D\subset M$ be open and connected and $f: D\to\mathbb{R}$ a $C^2$ function. The point $x_0\in D$ is called a minimum (maximum) point of $f$ conditioned by the auto-parallel system, together with the initial conditions, if for the maximal solution $x(t; x_0, \xi_0): I\to D$ there exists a neighborhood $I_{t_0}$ of $t_0$ such that
$$f(x(t; x_0, \xi_0)) \ge (\le)\ f(x_0), \quad \forall t\in I_{t_0}\subset I.$$
Theorem 2.1 If $x_0\in D$ is an extremum point of $f$ conditioned by the previous second order system, then $df(x_0)(\xi_0) = 0$.
Definition 2.2 The points $x\in D$ which are solutions of the equation $df(x)(\xi) = 0$ are called critical points of $f$ conditioned by the previous spray.
Theorem 2.2 If $x_0\in D$ is a conditioned critical point of the function $f: D\to\mathbb{R}$ of class $C^2$ constrained by the previous auto-parallel system, and if the number
$$(\mathrm{Hess}\, f)_{ij}\,\xi_0^i\,\xi_0^j = \left(\frac{\partial^2 f}{\partial x^i \partial x^j} - \frac{\partial f}{\partial x^h}\,\Gamma^h_{ij}\right)(x_0)\,\xi_0^i\,\xi_0^j$$
is strictly positive (negative), then $x_0$ is a minimum (maximum) point of $f$ constrained by the auto-parallel system.
Example 2.1 We compute the Christoffel symbols on the unit sphere $S^2$, using spherical coordinates $(\theta, \varphi)$ and the Riemannian metric
$$g_{\theta\theta} = 1, \quad g_{\theta\varphi} = g_{\varphi\theta} = 0, \quad g_{\varphi\varphi} = \sin^2\theta.$$
When $\theta\ne 0, \pi$, we find
$$\Gamma^\theta_{\varphi\varphi} = -\frac{1}{2}\sin 2\theta, \quad \Gamma^\varphi_{\varphi\theta} = \Gamma^\varphi_{\theta\varphi} = \cot\theta,$$
and all the other $\Gamma$'s are equal to zero. One can show that the apparent singularity at $\theta = 0, \pi$ can be removed by a better choice of coordinates at the poles of the sphere. Thus, the above affine connection extends to the whole sphere.
The second order system defining the auto-parallel curves (geodesics) on $S^2$ is
$$\ddot\theta(t) - \frac{1}{2}\sin 2\theta(t)\,\dot\varphi(t)\,\dot\varphi(t) = 0, \quad \ddot\varphi(t) + 2\cot\theta(t)\,\dot\varphi(t)\,\dot\theta(t) = 0.$$
The solutions are great circles on the sphere. For example, $\theta = \alpha t + \beta$ and $\varphi = \mathrm{const}$.
We compute the curvature tensor $R$ of the unit sphere $S^2$. Since there are only two independent coordinates, all the non-zero components of the curvature tensor $R$ are given by $R^i_j = R^i_{\;j\theta\varphi} = -R^i_{\;j\varphi\theta}$, where $i, j\in\{\theta, \varphi\}$. We get $R^\theta_\varphi = \sin^2\theta$, $R^\varphi_\theta = -1$, and the other components are $0$.
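These components can be recovered mechanically from the curvature formula appearing in the proof of Theorem 1.2. The sketch below (assuming SymPy; purely illustrative) encodes the Christoffel symbols of Example 2.1 and evaluates $R^\theta_\varphi$ and $R^\varphi_\theta$.

```python
# Sketch (illustration only, assuming SymPy): recover the sphere curvature from
# R^h_ikj = d_k Gamma^h_ij - d_j Gamma^h_ki + Gamma^l_ij Gamma^h_kl - Gamma^l_ki Gamma^h_jl.
import sympy as sp

th, ph = sp.symbols('theta phi', real=True)
coords = [th, ph]
Gamma = [[[0, 0], [0, -sp.sin(2*th)/2]],           # Gamma^theta_ij
         [[0, sp.cot(th)], [sp.cot(th), 0]]]       # Gamma^phi_ij

def R(h, i, k, j):
    expr = sp.diff(Gamma[h][i][j], coords[k]) - sp.diff(Gamma[h][k][i], coords[j])
    expr += sum(Gamma[l][i][j]*Gamma[h][k][l] - Gamma[l][k][i]*Gamma[h][j][l]
                for l in range(2))
    return sp.simplify(expr)

# With R^i_j := R^i_{j theta phi} (index 0 = theta, 1 = phi):
print(R(0, 1, 0, 1))   # expected sin(theta)**2, i.e. R^theta_phi
print(R(1, 0, 0, 1))   # expected -1, i.e. R^phi_theta
```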
Let $(\theta(t; \theta_0, \varphi_0, \xi), \varphi(t; \theta_0, \varphi_0, \xi))$, $t\in\mathbb{R}$, be the maximal auto-parallel which satisfies $\theta(t_0; \theta_0, \varphi_0, \xi) = \theta_0$, $\dot\theta(t_0; \theta_0, \varphi_0, \xi) = \xi^1$, $\varphi(t_0; \theta_0, \varphi_0, \xi) = \varphi_0$, $\dot\varphi(t_0; \theta_0, \varphi_0, \xi) = \xi^2$. We wish to compute $\min f(\theta, \varphi) = R^\theta_\varphi = \sin^2\theta$ with the restriction $(\theta(t; \theta_0, \varphi_0, \xi), \varphi(t; \theta_0, \varphi_0, \xi))$, $t\in\mathbb{R}$.
Since $df = (2\sin\theta\cos\theta,\ 0)$, the critical point condition $df(\theta, \varphi)(\xi) = 0$ becomes $\sin\theta\cos\theta\,\xi^1 = 0$. Consequently, the critical points are either $(\theta_0 = k\pi,\ k\in\mathbb{Z},\ \varphi,\ (\xi^1, \xi^2)\ne(0, 0))$, or $(\theta_1 = (2k+1)\frac{\pi}{2},\ k\in\mathbb{Z},\ \varphi,\ (\xi^1, \xi^2)\ne(0, 0))$, or $(\theta, \varphi,\ \xi^1 = 0,\ \xi^2\ne 0)$.
The components of the Hessian of $f$ are
$$(\mathrm{Hess}\, f)_{\theta\theta} = \frac{\partial^2 f}{\partial\theta\,\partial\theta} = 2\cos 2\theta, \quad (\mathrm{Hess}\, f)_{\theta\varphi} = 0, \quad (\mathrm{Hess}\, f)_{\varphi\varphi} = \frac{1}{2}\sin^2 2\theta.$$
At the critical points $(\theta_0, \varphi)$ or $(\theta_1, \varphi)$, the Hessian of $f$ is positive or negative semidefinite. On the other hand, along $(\xi^1 = 0, \xi^2\ne 0)$, we find $(\mathrm{Hess}\, f)_{ij}\,\xi^i\xi^j = \frac{1}{2}\sin^2 2\theta\,(\xi^2)^2 > 0$, $\xi^2\ne 0$. Consequently, each point $(\theta\ne k\frac{\pi}{2}, \varphi)$ is a minimum point of $f$ along each auto-parallel starting from the given point and tangent to $(\xi^1 = 0, \xi^2\ne 0)$.
2.2. Theory via the associated spray
This point of view regarding extrema comes from the paper [22].
The second order system of auto-parallels induces a spray (a special vector field) $Y(x, y) = \left(y^h,\ -\Gamma^h_{ij}(x)\,y^i y^j\right)$ on the tangent bundle $TM$, that is,
$$\dot x^h(t) = y^h(t), \quad \dot y^h(t) + \Gamma^h_{ij}(x(t))\,y^i(t)\,y^j(t) = 0.$$
The solutions $\gamma(t) = (x(t), y(t)): I\to D$ of class $C^2$ are called field lines of $Y$. They depend on the initial condition $\gamma(t)|_{t=t_0} = (x_0, y_0)$, and therefore the notation $\gamma(t; x_0, y_0)$ is more suggestive.
Definition 2.3 Let $D\subset TM$ be open and connected and $f: D\to\mathbb{R}$ a $C^2$ function. The point $(x_0, y_0)\in D$ is called a minimum (maximum) point of $f$ conditioned by the previous spray if, for the maximal field line $\gamma(t; x_0, y_0)$, $t\in I$, there exists a neighborhood $I_{t_0}$ of $t_0$ such that
$$f(\gamma(t; x_0, y_0)) \ge (\le)\ f(x_0, y_0), \quad \forall t\in I_{t_0}\subset I.$$
Theorem 2.3 If $(x_0, y_0)\in D$ is an extremum point of $f$ conditioned by the previous spray, then $(x_0, y_0)$ is a point where $Y$ is in $\mathrm{Ker}\, df$.
Definition 2.4 The points $(x, y)\in D$ which are solutions of the equation
$$D_Y f(x, y) = df(Y)(x, y) = 0$$
are called critical points of $f$ conditioned by the previous spray.
Theorem 2.4 If $(x_0, y_0)\in D$ is a conditioned critical point of the function $f: D\to\mathbb{R}$ of class $C^2$ constrained by the previous spray, and if the number
$$\left(d^2 f(Y, Y) + df(D_Y Y)\right)(x_0, y_0)$$
is strictly positive (negative), then $(x_0, y_0)$ is a minimum (maximum) point of $f$ constrained by the spray.
Example 2.2 We consider the Volterra–Hamilton ODE system [2],
$$\frac{dx^1}{dt}(t) = y^1(t), \quad \frac{dx^2}{dt}(t) = y^2(t),$$
$$\frac{dy^1}{dt}(t) = \lambda y^1(t) - \alpha_1 (y^1(t))^2 - 2\alpha_2\, y^1(t)\, y^2(t),$$
$$\frac{dy^2}{dt}(t) = \lambda y^1(t) - \beta_1 (y^2(t))^2 - 2\beta_2\, y^1(t)\, y^2(t),$$
which models production in a Gause–Witt two-species system evolving in $\mathbb{R}^4$: (1) competition if $\alpha_1 > 0$, $\alpha_2 > 0$, $\beta_1 > 0$, $\beta_2 > 0$, and (2) parasitism if $\alpha_1 > 0$, $\alpha_2 < 0$, $\beta_1 > 0$, $\beta_2 > 0$.
Changing the real parameter $t$ into an affine parameter $s$, we find the connection with constant coefficients
$$\Gamma^1_{11} = \frac{1}{3}(\alpha_1 - 2\beta_2), \quad \Gamma^2_{22} = \frac{1}{3}(\beta_1 - 2\alpha_2),$$
$$\Gamma^1_{12} = \frac{1}{3}(2\alpha_2 - \beta_1), \quad \Gamma^2_{12} = \frac{1}{3}(2\beta_2 - \alpha_1).$$
Let $x(t; x_0, y_0)$, $t\in I$, be the maximal field line which satisfies $x(t_0; x_0, y_0) = (x_0, y_0)$. We wish to compute $\max f(x^1, x^2, y^1, y^2) = y^2$ with the restriction $x = x(t; x_0, y_0)$.
We apply the previous theory. Introduce the vector field
$$Y = \left(y^1,\ y^2,\ \lambda y^1 - \alpha_1 (y^1)^2 - 2\alpha_2\, y^1 y^2,\ \lambda y^1 - \beta_1 (y^2)^2 - 2\beta_2\, y^1 y^2\right).$$
We set the critical point condition $df(Y) = 0$. Since $df = (0, 0, 0, 1)$, it follows the relation $\lambda y^1 - \beta_1 (y^2)^2 - 2\beta_2\, y^1 y^2 = 0$; that is, the critical point set is a conic in the plane $y^1 O y^2$.
Since $d^2 f = 0$, the sufficiency condition reduces to $df(D_Y Y)(x_0, y_0) < 0$, that is (after dividing by $\beta_1 (y^2)^2 > 0$),
$$\left(\lambda - \frac{\alpha_1\beta_1\,(y^2)^2}{\lambda - 2\beta_2\, y^2} - 2\alpha_2\, y^2\right)\bigg|_{y_0} < 0.$$
This last relation is equivalent either to
$$\left(\lambda - 2\alpha_2\, y^2_0\right)\left(\lambda - 2\beta_2\, y^2_0\right) - \alpha_1\beta_1\,(y^2_0)^2 < 0, \quad \lambda - 2\beta_2\, y^2_0 > 0,$$
or to
$$\left(\lambda - 2\alpha_2\, y^2_0\right)\left(\lambda - 2\beta_2\, y^2_0\right) - \alpha_1\beta_1\,(y^2_0)^2 > 0, \quad \lambda - 2\beta_2\, y^2_0 < 0.$$
Each critical point satisfying one of the last two conditions is a maximum point.
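The algebra behind the last equivalence can be verified symbolically. The following sketch (assuming SymPy; variable names are ours) substitutes the critical conic into $df(D_Y Y)$ and confirms that it equals $\beta_1 (y^2)^2$ times the bracket used above.

```python
# Sketch (illustration only, assuming SymPy): verify the sufficiency condition of
# Example 2.2 on the critical conic lambda*y1 - beta1*y2^2 - 2*beta2*y1*y2 = 0.
import sympy as sp

y1, y2, lam, a1, a2, b1, b2 = sp.symbols('y1 y2 lambda alpha1 alpha2 beta1 beta2')
Y3 = lam*y1 - a1*y1**2 - 2*a2*y1*y2
Y4 = lam*y1 - b1*y2**2 - 2*b2*y1*y2        # the spray component paired with f = y2

# df(D_Y Y) is the directional derivative of Y4 along Y (the x-derivatives vanish).
dfDYY = Y3*sp.diff(Y4, y1) + Y4*sp.diff(Y4, y2)

# On the critical conic we may substitute y1 = beta1*y2**2/(lambda - 2*beta2*y2).
crit = sp.solve(sp.Eq(Y4, 0), y1)[0]
bracket = lam - a1*b1*y2**2/(lam - 2*b2*y2) - 2*a2*y2
print(sp.simplify(dfDYY.subs(y1, crit) - b1*y2**2*bracket))   # expected: 0
```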
3. Affine convexity of posynomial functions
For the general theory regarding geometric programming (based on posynomial, signomial functions, etc.), see [11].
Theorem 3.1 Each posynomial function is affine convex, with respect to some affine connection.
Proof. A posynomial function has the form
$$f:\mathbb{R}^n_{++}\to\mathbb{R}, \quad f(x) = \sum_{k=1}^K c_k\prod_{i=1}^n (x^i)^{a_{ik}},$$
where all the coefficients $c_k$ are positive real numbers and the exponents $a_{ik}$ are real numbers. Let us consider the auto-parallel curves of the form
$$\gamma(t) = \left((a^1)^{1-t}(b^1)^t,\ (a^2)^{1-t}(b^2)^t,\ \dots,\ (a^n)^{1-t}(b^n)^t\right), \quad t\in[0, 1],$$
joining the points $a = (a^1, \dots, a^n)$ and $b = (b^1, \dots, b^n)$, which fix, for example, the affine connection
$$\Gamma^h_{hj} = \Gamma^h_{jh} = -\frac{1}{2}\,\frac{\mu^h\mu^j}{x^j}, \quad \text{and otherwise } \Gamma^h_{ij} = 0.$$
It follows that
$$f(\gamma(t)) = \sum_{k=1}^K c_k\prod_{i=1}^n (a^i)^{a_{ik}(1-t)}(b^i)^{a_{ik} t} = \sum_{k=1}^K c_k\left(\prod_{i=1}^n (a^i)^{a_{ik}}\right)^{1-t}\left(\prod_{i=1}^n (b^i)^{a_{ik}}\right)^{t}.$$
Each term in this sum is of the form $\psi_k(t) = A_k^{1-t} B_k^{t}$, and hence $\ddot\psi_k(t) = A_k^{1-t} B_k^{t}\left(\ln A_k - \ln B_k\right)^2 \ge 0$.
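A numerical illustration of the proof (assuming NumPy; the coefficients, exponents, and endpoints below are invented for the example): composing a posynomial with the log-linear curve $\gamma(t)$ yields a convex function of $t$, which can be checked through discrete second differences.

```python
# Sketch (illustration only, assuming NumPy): a posynomial composed with the curve
# gamma(t)_i = a_i^(1-t) * b_i^t is a sum of terms A_k^(1-t) B_k^t, hence convex in t.
import numpy as np

c = np.array([2.0, 0.5, 1.0])                          # positive coefficients c_k
A = np.array([[1.5, -2.0], [0.3, 1.0], [-1.0, 0.7]])   # exponents a_ik (rows = terms)

def posynomial(x):                                     # f(x) = sum_k c_k prod_i x_i^{a_ik}
    return np.sum(c*np.prod(x**A, axis=1))

a, b = np.array([1.0, 2.0]), np.array([3.0, 0.5])      # endpoints in R^2_{++}
t = np.linspace(0.0, 1.0, 401)
gamma = a**(1 - t[:, None])*b**t[:, None]              # the auto-parallel of the proof
vals = np.array([posynomial(p) for p in gamma])

second_diff = vals[:-2] - 2*vals[1:-1] + vals[2:]      # discrete second derivative
print(np.all(second_diff >= -1e-12))                   # True: f(gamma(t)) is convex
```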
Remark 3.1 Posynomial functions belong to the class of functions satisfying the statement "the product of two convex functions is convex".
Corollary 3.1 Each signomial function is a difference of two affine convex posynomials, with respect to some affine connection.
Proof. A signomial function has the form
$$f:\mathbb{R}^n_{++}\to\mathbb{R}, \quad f(x) = \sum_{k=1}^K c_k\prod_{i=1}^n (x^i)^{a_{ik}},$$
where all the exponents $a_{ik}$ are real numbers and the coefficients $c_k$ are either positive or negative. Without loss of generality, suppose that $c_k > 0$ for $k = 1, \dots, k_0$ and $c_k < 0$ for $k = k_0 + 1, \dots, K$. We use the decomposition
$$f(x) = \sum_{k=1}^{k_0} c_k\prod_{i=1}^n (x^i)^{a_{ik}} - \sum_{k=k_0+1}^{K} |c_k|\prod_{i=1}^n (x^i)^{a_{ik}},$$
and we apply the Theorem together with the implication $u''(t)\ge v''(t) \Rightarrow u - v$ convex. $\square$
Corollary 3.2 (1) The polynomial functions with positive coefficients, restricted to $\mathbb{R}^n_{++}$, are affine convex functions.
(2) The polynomial functions with both positive and negative terms, restricted to $\mathbb{R}^n_{++}$, are differences of two affine convex functions.
Proudnikov [18] gives necessary and sufficient conditions for representing a Lipschitz multivariable function as a difference of two convex functions. An algorithm and a geometric interpretation of this representation are also given. The outcome of this algorithm is a sequence of pairs of convex functions that converges uniformly to a pair of convex functions when the conditions of the formulated theorems are satisfied.
4. Bilevel disjunctive problem
Let $(M_1, {}^1\Gamma)$, the leader decision affine manifold, and $(M_2, {}^2\Gamma)$, the follower decision affine manifold, be two connected affine manifolds of dimensions $n_1$ and $n_2$, respectively. Moreover, $(M_2, {}^2\Gamma)$ is supposed to be complete. Let also $f: M_1\times M_2\to\mathbb{R}$ be the leader objective function, and let $F = (F_1, \dots, F_r): M_1\times M_2\to\mathbb{R}^r$ be the follower multiobjective function.
The components $F_i: M_1\times M_2\to\mathbb{R}$ are (possibly) conflicting objective functions.
A bilevel optimization problem means a decision of the leader with regard to a multi-objective optimum of the follower (in fact, a constrained optimization problem whose constraints are obtained from optimization problems). For details, see [5, 10, 12].
Let $x\in M_1$, $y\in M_2$ be the generic points. In this chapter, the disjunctive solution set of a follower multiobjective optimization problem is defined by
(1) the set-valued function
$$\psi: M_1\rightrightarrows M_2, \quad \psi(x) = \operatorname{Argmin}_{y\in M_2} F(x, y),$$
where
$$\operatorname{Argmin}_{y\in M_2} F(x, y) := \bigcup_{i=1}^r \operatorname{Argmin}_{y\in M_2} F_i(x, y),$$
or
(2) the set-valued function
$$\psi: M_1\rightrightarrows M_2, \quad \psi(x) = \operatorname{Argmax}_{y\in M_2} F(x, y),$$
where
$$\operatorname{Argmax}_{y\in M_2} F(x, y) := \bigcup_{i=1}^r \operatorname{Argmax}_{y\in M_2} F_i(x, y).$$
We deal with two bilevel problems:
(1) The optimistic bilevel disjunctive problem
$$(\mathrm{OBDP}) \quad \min_{x\in M_1}\ \min_{y\in\psi(x)} f(x, y).$$
In this case, the follower cooperates with the leader; that is, for each $x\in M_1$, the follower chooses among all its disjunctive solutions (his best responses) one which is the best for the leader (assuming that such a solution exists).
(2) The pessimistic bilevel disjunctive problem
$$(\mathrm{PBDP}) \quad \min_{x\in M_1}\ \max_{y\in\psi(x)} f(x, y).$$
In this case, there is no cooperation between the leader and the follower, and the leader expects the worst scenario; that is, for each $x\in M_1$, the follower may choose among all its disjunctive solutions (his best responses) one which is unfavorable for the leader.
In this way, a general optimization problem becomes a pessimistic bilevel problem.
Theorem 4.1 The value
$$\min_x\{f(x, y) : y\in\psi(x)\}$$
exists if and only if, for some index $i$, the minimum $\min_x\{f(x, y) : y\in\psi_i(x)\}$ exists and, for each $j\ne i$, either $\min_x\{f(x, y) : y\in\psi_j(x)\}$ exists or $\psi_j = \emptyset$. In this case,
$$\min_x\{f(x, y) : y\in\psi(x)\}$$
coincides with the minimum of the minima that exist.
Proof. Let us consider the multi-functions $\phi_i(x) = f(x, \psi_i(x))$ and $\phi(x) = f(x, \psi(x))$. Then $\phi(x) = \bigcup_{i=1}^r \phi_i(x)$. It follows that $\min_x\phi(x)$ exists if and only if, for each $i$, either $\min_x\phi_i(x)$ exists or $\psi_i = \emptyset$, and at least one minimum exists.
Taking the minimum of the minima that exist, we find
$$\min_x\{f(x, y) : y\in\psi(x)\}. \quad\square$$
Theorem 4.2 Suppose $M_1$ is a compact manifold. If, for each $x\in M_1$, at least one partial function $y\mapsto F_i(x, y)$ is affine convex and has a critical point, then the problem $(\mathrm{OBDP})$ has a solution.
Proof. Under our hypothesis, the set $\psi(x)$ is nonvoid for any $x$, and compactness ensures the existence of $\min_x f(x, \psi(x))$. $\square$
In the next Theorem, we shall use the Value Function Method or Utility Function Method.
Theorem 4.3 If a $C^1$ increasing scalarization partial function
$$y\mapsto L(x, y) = u(F_1(x, y), \dots, F_r(x, y))$$
has a minimum, then there exists an index $i$ such that $\psi_i(x)\ne\emptyset$. Moreover, if $f(x, y)$ is bounded, then the bilevel problem
$$\min_x\{f(x, y) : y\in\psi(x)\}$$
has a solution.
Proof. Let $\min_y L(x, y) = L(x, y^*)$. Suppose that for each $i = 1, \dots, r$ we have $\min_y F_i(x, y) < F_i(x, y^*)$. Then $y^*$ would not be a minimum point of the partial function $y\mapsto L(x, y)$. Hence, there exists an index $i$ such that $y^*\in\psi_i(x)$. $\square$
Boundedness of $f$ implies that the bilevel problem has a solution once it is well posed, and the fact that the problem is well posed is shown in the first part of the proof.
4.1. Bilevel disjunctive programming algorithm
An important concept for making wise tradeoffs among competing objectives is bilevel disjunctive programming optimality on affine manifolds, introduced in this chapter.
We present below an exact algorithm for obtaining the bilevel disjunctive solutions of the multi-objective optimization problem.
Step 1: Solve
$$\psi_i(x) = \operatorname{Argmin}_{y\in M_2} F_i(x, y), \quad i = 1, \dots, r.$$
Let $\psi(x) = \bigcup_{i=1}^r \psi_i(x)$ be the subset of $M_2$ representing the mapping of optimal solutions for the follower multi-objective function.
Step 2: Build the mapping $f(x, \psi(x))$.
Step 3: Solve the leader's program
$$\min_x\{f(x, y) : y\in\psi(x)\}.$$
From the numerical point of view, we can use the Newton algorithm for optimization on affine manifolds, which is given in [19].
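As an illustration only, Steps 1–3 can be prototyped by exhaustive search on grids; the function names and toy objectives below are ours, the sketch assumes NumPy, and a serious implementation would replace the inner grid searches by the Newton algorithm of [19]. The sketch treats the optimistic problem (OBDP).

```python
# Sketch (illustration only, assuming NumPy): a brute-force, grid-based version of
# Steps 1-3 for the optimistic bilevel disjunctive problem (OBDP).
import numpy as np

def obdp_grid(f, followers, x_grid, y_grid, tol=1e-9):
    best = (np.inf, None, None)
    for x in x_grid:
        # Step 1: psi(x) = union of the Argmin sets of the follower objectives
        psi = []
        for F in followers:
            vals = np.array([F(x, y) for y in y_grid])
            psi.extend(y_grid[vals <= vals.min() + tol])
        # Steps 2-3: optimistic choice -- the best follower response for the leader
        leader_vals = [f(x, y) for y in psi]
        j = int(np.argmin(leader_vals))
        if leader_vals[j] < best[0]:
            best = (leader_vals[j], x, psi[j])
    return best        # (optimal value, x, y)

# Toy data (hypothetical): leader objective f, two follower objectives F1, F2.
f  = lambda x, y: (x - y)**2 + x**2
F1 = lambda x, y: x*y
F2 = lambda x, y: x**2 + y**2
grid = np.linspace(-2.0, 2.0, 201)
print(obdp_grid(f, [F1, F2], grid, grid))
```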
5. Models of bilevel disjunctive programming problems
The manifold $M$ is understood from the context. The connection $\Gamma^h_{ij}$ can be realized in each case by imposing convexity conditions.
Example 5.1 Let us solve the problem (cf. [7], p. 7; [9]):
$$\min_{x_1, x_2, y} F(x_1, x_2, y) = (x_1 - y,\ x_2)$$
subject to
$$(x_1, x_2)\in\operatorname{Argmin}_{(x_1, x_2)}\left\{(x_1, x_2) : y^2 - x_1^2 - x_2^2\ge 0\right\},$$
$$1 + x_1 + x_2\ge 0, \quad -1\le x_1, x_2\le 1, \quad 0\le y\le 1.$$
Both the lower and the upper level optimization tasks have two objectives each. For a fixed $y$ value, the feasible region of the lower-level problem is the area inside a circle with center at the origin $(x_1 = x_2 = 0)$ and radius equal to $y$. The Pareto-optimal set of the lower-level optimization task, for fixed $y$, is the bottom-left quarter of this circle,
$$\left\{(x_1, x_2)\in\mathbb{R}^2 : x_1^2 + x_2^2 = y^2,\ x_1\le 0,\ x_2\le 0\right\}.$$
The linear constraint of the upper-level optimization task does not allow the entire quarter circle to be feasible for some $y$. Thus, at most a couple of points from the quarter circle belong to the Pareto-optimal set of the overall problem. Eichfelder [8] reported the following Pareto-optimal set of solutions:
$$A = \left\{(x_1, x_2, y)\in\mathbb{R}^3 : x_1 = -1 - x_2,\ x_2 = -\frac{1}{2}\pm\frac{1}{2}\sqrt{2y^2 - 1},\ y\in\left[\tfrac{1}{\sqrt 2}, 1\right]\right\}.$$
The Pareto-optimal front in the $F_1$–$F_2$ space can be written in parametric form
$$\left\{(F_1, F_2)\in\mathbb{R}^2 : F_1 = -1 - F_2 - t,\ F_2 = -\frac{1}{2}\pm\frac{1}{2}\sqrt{2t^2 - 1},\ t\in\left[\tfrac{1}{\sqrt 2}, 1\right]\right\}.$$
Example 5.2 Consider the bilevel programming problem
$$\min_x\left\{(x - y)^2 + x^2 : -20\le x\le 20,\ y\in\psi(x)\right\},$$
where the set-valued function is
$$\psi(x) = \operatorname{Argmin}_y\{xy : -x - 1\le y\le -x + 1\}.$$
Explicitly,
$$\psi(x) = \begin{cases}[-1, 1] & \text{if } x = 0,\\ \{-x - 1\} & \text{if } x > 0,\\ \{-x + 1\} & \text{if } x < 0.\end{cases}$$
Since $F(x, y) = (x - y)^2 + x^2$, we get
$$F(x, \psi(x)) = \begin{cases}[0, 1] & \text{if } x = 0,\\ (2x + 1)^2 + x^2 & \text{if } x > 0,\\ (2x - 1)^2 + x^2 & \text{if } x < 0,\end{cases}$$
on the regions where the functions are defined.
Taking into account that $(2x + 1)^2 + x^2 > 1$ for $x > 0$ and $(2x - 1)^2 + x^2 > 1$ for $x < 0$, it follows that $(x^\circ, y^\circ) = (0, 0)$ is the unique optimistic optimal solution of the problem. Now, if the leader is not exact enough in choosing his solution, then the real outcome of the problem has an objective function value above $1$, which is far away from the optimistic optimal value zero.
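A brute-force check of this example (assuming NumPy; purely illustrative) confirms both claims: the optimistic value $0$ is attained only at $x = 0$ with the cooperative choice $y = 0$, while any $x\ne 0$ yields a value above $1$.

```python
# Sketch (illustration only, assuming NumPy): numerical confirmation of Example 5.2.
import numpy as np

def psi(x):                        # Argmin_y { x*y : -x-1 <= y <= -x+1 }
    if x > 0:  return np.array([-x - 1.0])
    if x < 0:  return np.array([-x + 1.0])
    return np.linspace(-1.0, 1.0, 201)          # the whole segment [-1, 1] at x = 0

f = lambda x, y: (x - y)**2 + x**2
xs = np.linspace(-20.0, 20.0, 4001)
optimistic = np.array([min(f(x, y) for y in psi(x)) for x in xs])
i = int(np.argmin(optimistic))
print(xs[i], optimistic[i])                     # approximately 0.0 and 0.0
print(optimistic[np.abs(xs) > 1e-6].min())      # stays above 1 away from x = 0
```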
Example 5.3 Let $F(x, y) = (F_1(x, y), F_2(x, y))$ and consider a Pareto disjunctive problem with
$$\psi(x) = \operatorname{Argmin}_y F(x, y) = \operatorname{Argmin}_y F_1(x, y)\ \cup\ \operatorname{Argmin}_y F_2(x, y).$$
Then a bilevel disjunctive programming problem of the form
$$\min_x\{f(x, y) : y\in\psi(x)\}$$
appears. This problem is interesting except in the case $\psi(x) = \emptyset$, $\forall x$. If $y\mapsto F_1(x, y)$ and $y\mapsto F_2(x, y)$ are convex functions, then $\psi(x)\ne\emptyset$.
To write an example, we use
$$F_1(x, y) = xy, \ \ -x - 1\le y\le -x + 1; \qquad F_2(x, y) = x^2 + y^2, \ \ y\ge -x + 1,$$
and we consider a bilevel disjunctive programming problem of the form
$$\min_x\left\{(x - y)^2 + x^2 : -20\le x\le 20,\ y\in\psi(x)\right\},$$
with
$$\psi(x) = \psi_1(x)\cup\psi_2(x),$$
where
$$\psi_1(x) = \operatorname{Argmin}_y\{xy : -x - 1\le y\le -x + 1\} = \begin{cases}[-1, 1] & \text{if } x = 0,\\ \{-x - 1\} & \text{if } x > 0,\\ \{-x + 1\} & \text{if } x < 0,\end{cases}$$
$$\psi_2(x) = \operatorname{Argmin}_y\{x^2 + y^2 : y\ge -x + 1\} = \begin{cases}\{-x + 1\} & \text{if } x\le 1,\\ \{0\} & \text{if } x > 1,\end{cases}$$
$$\psi(x) = \begin{cases}[-1, 1] & \text{if } x = 0,\\ \{-x - 1,\ -x + 1\} & \text{if } 0 < x\le 1,\\ \{-x - 1,\ 0\} & \text{if } x > 1,\\ \{-x + 1\} & \text{if } x < 0.\end{cases}$$
The objective $f(x, y) = (x - y)^2 + x^2$ and the multi-function $\psi(x)$ produce the multi-function
$$f(x, \psi(x)) = \begin{cases}[0, 1] & \text{if } x = 0,\\ \{(2x + 1)^2 + x^2,\ (2x - 1)^2 + x^2\} & \text{if } 0 < x\le 1,\\ \{(2x + 1)^2 + x^2,\ 2x^2\} & \text{if } x > 1,\\ \{(2x - 1)^2 + x^2\} & \text{if } x < 0.\end{cases}$$
In this context, we find the inferior envelope
$$y(x) = \begin{cases}0 & \text{if } x = 0,\\ -x + 1 & \text{if } 0 < x\le 1,\\ 0 & \text{if } x > 1,\\ -x + 1 & \text{if } x < 0,\end{cases}$$
and then
$$f(x, y(x)) = \begin{cases}0 & \text{if } x = 0,\\ (2x - 1)^2 + x^2 & \text{if } x\in(-\infty, 0)\cup(0, 1],\\ 2x^2 & \text{if } x > 1.\end{cases}$$
Since $(2x - 1)^2 + x^2 > 0$ and $2x^2 > 0$ for $x\ne 0$, the unique optimistic optimal solution is $(x^\circ, y^\circ) = (0, 0)$.
If we consider only $\psi_1(x)$ as active, then the unique optimal solution $(0, 0)$ is maintained. If only $\psi_2(x)$ is active, then the optimal solution is $\left(\tfrac{2}{5}, \tfrac{3}{5}\right)$, the minimizer of $(2x - 1)^2 + x^2$ along the branch $y = -x + 1$.
6. Properties of minimum functions
Let $(M_1, {}^1\Gamma)$, the leader decision affine manifold, and $(M_2, {}^2\Gamma)$, the follower decision affine manifold, be two connected affine manifolds of dimensions $n_1$ and $n_2$, respectively. Starting from a function of two vector variables
$$\varphi: M_1\times M_2\to\mathbb{R}, \quad (x, y)\mapsto\varphi(x, y),$$
and taking the infimum over one variable, say $y$, we build a function
$$f(x) = \inf_y\{\varphi(x, y) : y\in a(x)\},$$
which is called a minimum function.
A minimum function is usually specified by a point-to-set mapping $a$ from the manifold $M_1$ into the subsets of a manifold $M_2$ and by a functional $\varphi(x, y)$ on $M_1\times M_2$. In this context, some differential properties of such functions were previously examined in [4]. Now we add new properties related to monotonicity and convexity.
First we give a new proof of a theorem of Brian White (see Mean Curvature Flow, p. 7, Internet 2017).
Theorem 6.1 Suppose that $M_1$ is compact, $M_2 = [0, T]$, and $f: M_1\times[0, T]\to\mathbb{R}$. Let $\phi(t) = \min_x f(x, t)$. If, for each $x$ with $\phi(t) = f(x, t)$, we have $\frac{\partial f}{\partial t}(x, t)\ge 0$, then $\phi$ is an increasing function.
Proof. We shall prove the statement in three steps.
(1) If $f$ is continuous, then $\phi$ is (uniformly) continuous.
Indeed, $f$ is continuous on the compact set $M_1\times[0, T]$, hence uniformly continuous. So, for every $\varepsilon > 0$ there exists $\delta > 0$ such that if $|t_1 - t_2| < \delta$, then $|f(x, t_1) - f(x, t_2)| < \varepsilon$ for any $x\in M_1$, or
$$-\varepsilon < f(x, t_1) - f(x, t_2) < \varepsilon.$$
On one hand, if we put $\phi(t_1) = f(x_1, t_1)$ and $\phi(t_2) = f(x_2, t_2)$, then we have
$$f(x, t_1) > f(x, t_2) - \varepsilon\ge f(x_2, t_2) - \varepsilon.$$
Hence $\min_x f(x, t_1)\ge f(x_2, t_2) - \varepsilon$, that is, $\phi(t_1) - \phi(t_2)\ge -\varepsilon$.
On the other hand,
$$f(x, t_2) + \varepsilon > f(x, t_1)\ge f(x_1, t_1).$$
Hence $\min_x f(x, t_2) + \varepsilon\ge f(x_1, t_1)$, that is, $\phi(t_1) - \phi(t_2)\le\varepsilon$.
Finally, $|\phi(t_1) - \phi(t_2)|\le\varepsilon$ for $|t_1 - t_2| < \delta$; that is, $\phi$ is (uniformly) continuous.
(2) Let us fix $t_0\in[0, T]$. If $\phi(t_0) = f(x_0, t_0)$ and $\frac{\partial f}{\partial t}(x_0, t_0)\ge 0$, then there exists $\delta > 0$ such that $\phi(t)\le\phi(t_0)$ for any $t\in(t_0 - \delta, t_0)$.
Suppose $\frac{\partial f}{\partial t}(x_0, t_0) > 0$; then there exists $\delta > 0$ such that $f(x_0, t)\le f(x_0, t_0)$ for each $t\in(t_0 - \delta, t_0)$. It follows $\min_x f(x, t)\le f(x_0, t)\le f(x_0, t_0)$, and so $\phi(t)\le\phi(t_0)$.
If $\frac{\partial f}{\partial t}(x_0, t_0) = 0$, then we use $\bar f(x, t) = f(x, t) + \varepsilon t$, $\varepsilon > 0$. For $\bar f$, the above proof holds, and we let $\varepsilon\to 0$.
(3) $\phi$ is an increasing function.
Let $0\le a < b\le T$ and denote $A = \{t\in[a, b] : \phi(t)\le\phi(b)\}$. $A$ is not empty. If $\alpha = \inf A$, then, by step (2), $\alpha < b$ and, by step (1), $\alpha\in A$. If $\alpha > a$, we could apply step (2) for $t_0 = \alpha$, and it would follow that $\alpha$ is not the lower bound of $A$. Hence $\alpha = a$ and $\phi(a)\le\phi(b)$. $\square$
Remark The third step shows that a function having the properties (1) and (2) is increasing. For this, the continuity is essential. Property (2) alone is not enough. For example, the function defined by $\phi(t) = t$ on $[0, 1]$ and $\phi(t) = 1 - t$ on $(1, 2]$ has only property (2), but it is not increasing on $[0, 2]$.
Remark Suppose that $f$ is a $C^2$ function and $\min_x f(x, t) = f(x_0(t), t)$, where $x_0(t)$ is an interior point of $M_1$. Since $x_0(t)$ is a critical point, we have
$$\phi'(t) = \frac{\partial f}{\partial t}(x_0(t), t) + \left\langle\frac{\partial f}{\partial x}(x_0(t), t),\ x_0'(t)\right\rangle = \frac{\partial f}{\partial t}(x_0(t), t)\ge 0.$$
Consequently, $\phi(t)$ is an increasing function. If $M_1$ has a nonvoid boundary, then the monotonicity extends by continuity (see also the evolution of an extremum problem).
Example 6.1 The single-time perspective of a function $f:\mathbb{R}^n\to\mathbb{R}$ is the function $g:\mathbb{R}^n\times\mathbb{R}_+\to\mathbb{R}$, $g(x, t) = t f(x/t)$, $\mathrm{dom}\, g = \{(x, t) : x/t\in\mathrm{dom}\, f,\ t > 0\}$. The single-time perspective $g$ is convex if $f$ is convex.
The single-time perspective is an example verifying Theorem 6.1. Indeed, the critical point condition for $g$ in $x$, $\frac{\partial g}{\partial x} = 0$, gives $x = t x_0$, where $x_0$ is a critical point of $f$. Consequently, $\phi(t) = \min_x g(x, t) = t f(x_0)$. On the other hand, at the minimum point we have $\frac{\partial g}{\partial t}(x, t) = f(x_0)$. Then $\phi(t)$ is increasing if $f(x_0)\ge 0$, as in Theorem 6.1.
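A small numerical check of this example (assuming NumPy and SciPy; the concrete $f$ below is our choice): with $f(x) = (x - 1)^2 + 0.5$, so that $f(x_0) = 0.5\ge 0$, the value function $\phi(t) = t f(x_0)$ is indeed increasing.

```python
# Sketch (illustration only, assuming NumPy/SciPy): the single-time perspective of
# f(x) = (x - 1)^2 + 0.5 has phi(t) = min_x g(x, t) = t*f(1) = 0.5*t, an increasing
# function, as predicted by Theorem 6.1.
import numpy as np
from scipy.optimize import minimize_scalar

f = lambda x: (x - 1.0)**2 + 0.5
g = lambda x, t: t*f(x/t)

ts = np.linspace(0.1, 3.0, 30)
phi = np.array([minimize_scalar(lambda x: g(x, t)).fun for t in ts])
print(np.allclose(phi, 0.5*ts, atol=1e-6))   # True: phi(t) = t*f(x0)
print(np.all(np.diff(phi) > 0))              # True: phi is increasing
```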
Theorem 6.2 Suppose that $M_1$ is compact and $f: M_1\times M_2\to\mathbb{R}$. Let $\phi(y) = \min_x f(x, y)$. If, for each $x$ with $\phi(y) = f(x, y)$, we have $\frac{\partial f}{\partial y^\alpha}(x, y)\ge 0$, then $\phi(y)$ is a partially increasing function.
Proof. Suppose that $f$ is a $C^2$ function and $\min_x f(x, y) = f(x_0(y), y)$, where $x_0(y)$ is an interior point of $M_1$. Since $x_0(y)$ is a critical point, we have
$$\frac{\partial\phi}{\partial y^\alpha} = \frac{\partial f}{\partial y^\alpha}(x_0(y), y) + \left\langle\frac{\partial f}{\partial x}(x_0(y), y),\ \frac{\partial x_0}{\partial y^\alpha}\right\rangle = \frac{\partial f}{\partial y^\alpha}(x_0(y), y)\ge 0.$$
Consequently, $\phi(y)$ is a partially increasing function. If $M_1$ has a non-void boundary, then the monotonicity extends by continuity. $\square$
Theorem 6.3 Suppose that $M_1$ is compact and $f: M_1\times M_2\to\mathbb{R}$. Let $\phi(y) = \min_x f(x, y)$. If, for each $x$ with $\phi(y) = f(x, y)$, we have $d^2_y f(x, y)\le 0$, then $\phi(y)$ is an affine concave function.
Proof. Without loss of generality, we work in the Euclidean case. Suppose that $f$ is a $C^2$ function and $\min_x f(x, y) = f(x(y), y)$, where $x(y)$ is an interior point of $M_1$. Since $x(y)$ is a critical point, we must have
$$\frac{\partial f}{\partial x^i}(x(y), y) = 0.$$
Taking the partial derivative with respect to $y^\alpha$ and the scalar product with $\frac{\partial x^i}{\partial y^\beta}$, it follows that
$$\frac{\partial^2 f}{\partial x^i\,\partial x^j}\,\frac{\partial x^j}{\partial y^\alpha}\,\frac{\partial x^i}{\partial y^\beta} + \frac{\partial^2 f}{\partial y^\alpha\,\partial x^i}\,\frac{\partial x^i}{\partial y^\beta} = 0.$$
On the other hand,
$$d_y\phi(y) = d_y f(x(y), y) = \left(\frac{\partial f}{\partial x^i}\,\frac{\partial x^i}{\partial y^\alpha} + \frac{\partial f}{\partial y^\alpha}\right)dy^\alpha = \frac{\partial f}{\partial y^\alpha}\,dy^\alpha,$$
$$d^2_y\phi(y) = \left(\frac{\partial^2 f}{\partial y^\alpha\,\partial x^i}\,\frac{\partial x^i}{\partial y^\beta} + \frac{\partial^2 f}{\partial y^\alpha\,\partial y^\beta}\right)dy^\alpha dy^\beta = \left(-\frac{\partial^2 f}{\partial x^j\,\partial x^i}\,\frac{\partial x^i}{\partial y^\beta}\,\frac{\partial x^j}{\partial y^\alpha} + \frac{\partial^2 f}{\partial y^\alpha\,\partial y^\beta}\right)dy^\alpha dy^\beta\le 0. \quad\square$$
Theorem 6.4 Let $f: M_1\times M_2\to\mathbb{R}$ be a $C^2$ function and
$$\phi(y) = \min_x f(x, y) = f(x(y), y).$$
If the set $A = \{(x(y), y) : y\in M_2\}$ is affine convex and $f|_A$ is affine convex, then $\phi(y)$ is affine convex.
Proof. Suppose $f$ is a $C^2$ function. At the points $(x(y), y)$, we have
$$0\le d^2 f(x(y), y) = \left(\frac{\partial^2 f}{\partial x^i\,\partial x^j}\,\frac{\partial x^i}{\partial y^\alpha}\,\frac{\partial x^j}{\partial y^\beta} + 2\,\frac{\partial^2 f}{\partial x^i\,\partial y^\alpha}\,\frac{\partial x^i}{\partial y^\beta} + \frac{\partial^2 f}{\partial y^\alpha\,\partial y^\beta}\right)dy^\alpha dy^\beta$$
$$= \left(\frac{\partial^2 f}{\partial x^i\,\partial y^\alpha}\,\frac{\partial x^i}{\partial y^\beta} + \frac{\partial^2 f}{\partial y^\alpha\,\partial y^\beta}\right)dy^\alpha dy^\beta = d^2\phi(y). \quad\square$$