Open access

A Linear System of Both Equations and Inequalities in Max-Algebra

Written By

Abdulhadi Aminu

Submitted: 11 November 2011 Published: 11 July 2012

DOI: 10.5772/48195

From the Edited Volume

Linear Algebra - Theorems and Applications

Edited by Hassan Abid Yasser


1. Introduction

The aim of this chapter is to present systems of linear equations and inequalities in max-algebra. Max-algebra is an analogue of linear algebra developed on the pair of operations (⊕,⊗), extended to matrices and vectors, where a⊕b=max(a,b) and a⊗b=a+b for a,b∈ℝ. The systems of equations A⊗x=c and inequalities B⊗x≤d have each been studied in the literature. We will present necessary and sufficient conditions for the solvability of a system consisting of these two systems, and also develop a polynomial algorithm for solving max-linear programs whose constraints are max-linear equations and inequalities. Moreover, some solvability concepts for an interval system of linear equations and inequalities will also be presented.

Max-algebraic linear systems were investigated in the first publications dealing with the introduction of algebraic structures called (max,+) algebras. Systems of equations with variables only on one side were considered in these publications [1], [2] and [3]. Other systems with a special structure were investigated in the context of solving eigenvalue problems in correspondence with algebraic structures, or of the synchronisation of discrete event systems; see [4] and also [1] for additional information. Given a matrix A and a vector b of appropriate sizes, and using the notation ⊕=max and ⊗=plus, the studied systems had one of the following forms: A⊗x=b, A⊗x=x or A⊗x=x⊕b. An infinite-dimensional generalisation can be found in [5].

In [1] Cuninghame-Green showed that the problem A⊗x=b can be solved using residuation [6]. That is, the equality in A⊗x=b is relaxed so that the set of its sub-solutions is studied. It was shown that the greatest solution of A⊗x≤b is given by x̄, where

x̄_j = min_{i∈M}(b_i ⊗ a_ij⁻¹)  for all j∈N.

The equation A⊗x=b is then solved using the above result as follows: A⊗x=b has a solution if and only if A⊗x̄=b. Also, Gaubert [7] proposed a method for solving the one-sided system x=A⊗x⊕b using rational calculus.
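The residuation step is easy to state computationally. The following Python sketch (an illustration, not code from the chapter; finite entries and an invented small instance are assumed) computes the principal solution x̄ and the solvability check A⊗x̄=b:

```python
def maxplus_mul(A, x):
    """(A ⊗ x)_i = max_j (a_ij + x_j)."""
    return [max(a + xj for a, xj in zip(row, x)) for row in A]

def principal_solution(A, b):
    """x̄_j = min_i (b_i - a_ij), the greatest solution of A ⊗ x ≤ b."""
    m, n = len(A), len(A[0])
    return [min(b[i] - A[i][j] for i in range(m)) for j in range(n)]

A = [[0, 3], [2, 1]]
b = [4, 6]
xbar = principal_solution(A, b)        # greatest x with A ⊗ x ≤ b
solvable = maxplus_mul(A, xbar) == b   # A ⊗ x = b is solvable iff x̄ solves it
```

For this instance x̄ = (4, 1) and A⊗x̄ = b, so the system is solvable.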

Zimmermann [3] developed a method for solving A⊗x=b by set covering and also presented an algorithm for solving max-linear programs with one-sided constraints. This method was proved to have a computational complexity of O(mn), where m and n are the numbers of rows and columns of the input matrix, respectively. Akian, Gaubert and Kolokoltsov [5] extended Zimmermann's set-covering solution method to the case of functional Galois connections.

Butkovic [8] developed a max-algebraic method for finding all solutions to a system of inequalities x_i - x_j > b_ij, i,j=1,...,n, using n generators. Using this method, Butkovic [8] developed a pseudopolynomial algorithm which either finds a bounded mixed-integer solution or decides that no such solution exists. A summary of these results can be found in [9] and [10].

Cechlárová and Diko [11] proposed a method for resolving infeasibility of the system A⊗x=b. The techniques presented in this method are to modify the right-hand side as little as possible or to omit some equations. It was shown that the problem of finding the minimum number of such equations is NP-complete.


2. Max-algebra and some basic definitions

In this section we introduce max-algebra, give the essential definitions and show how the operations of max-algebra can be extended to matrices and vectors.

In max-algebra we replace addition and multiplication, the binary operations of conventional linear algebra, by maximum and addition, respectively. Any problem that involves adding numbers together and taking the maximum of numbers may be describable in max-algebra. A problem that is nonlinear when described in conventional terms may thus be converted to a max-algebraic problem that is linear with respect to (⊕,⊗)=(max,+).

Definition 1 The max-plus semiring ℝ_max is the set ℝ∪{-∞}, equipped with the addition (a,b)↦max(a,b) and the multiplication (a,b)↦a+b, denoted by ⊕ and ⊗ respectively. That is, a⊕b=max(a,b) and a⊗b=a+b. The identity element for the addition (or zero) is -∞, and the identity element for the multiplication (or unit) is 0.

Definition 2 The min-plus semiring ℝ_min is the set ℝ∪{+∞}, equipped with the addition (a,b)↦min(a,b) and the multiplication (a,b)↦a+b, denoted by ⊕' and ⊗' respectively. The zero is +∞, and the unit is 0. The name tropical semiring is also used as a synonym of min-plus when the ground set is ℝ.

The completed max-plus semiring ℝ̄_max is the set ℝ∪{±∞}, equipped with the addition (a,b)↦max(a,b) and the multiplication (a,b)↦a+b, with the convention that -∞+(+∞)=+∞+(-∞)=-∞. The completed min-plus semiring ℝ̄_min is defined in the dual way.

Proposition 1 The following properties hold for all a,b,c∈ℝ̄:

a⊕b = b⊕a,  a⊗b = b⊗a,  a⊕(b⊕c) = (a⊕b)⊕c,  a⊗(b⊗c) = (a⊗b)⊗c,
a⊗(b⊕c) = a⊗b ⊕ a⊗c,  a⊗(-∞) = -∞ = (-∞)⊗a,  a⊗0 = a = 0⊗a,  a⊗a⁻¹ = 0 for a∈ℝ, a⁻¹∈ℝ

The statements follow from the definitions.

Proposition 2 For all a,b,c∈ℝ̄ the following properties hold:

a≤b ⟹ a⊕c ≤ b⊕c,  a≤b ⟹ a⊗c ≤ b⊗c for c∈ℝ,  a⊕b = a or b,  a>b ⟹ a⊗c > b⊗c for -∞<c<+∞

The statements follow from the definitions. The pair of operations (⊕,⊗) is extended to matrices and vectors as in conventional linear algebra as follows: for A=(a_ij), B=(b_ij) of compatible sizes and α∈ℝ we have:

A⊕B = (a_ij ⊕ b_ij),  A⊗B = (⊕_k a_ik ⊗ b_kj),  α⊗A = (α ⊗ a_ij)

Example 1

(3 1 5; 2 1 5) ⊕ (-1 0 2; 6 -5 4) = (3 1 5; 6 1 5)   (matrix rows separated by semicolons)

Example 2

(-4 1 -5; 3 0 8) ⊗ (-1 2; 1 7; 3 1) = ((-4+(-1))⊕(1+1)⊕(-5+3)  (-4+2)⊕(1+7)⊕(-5+1); (3+(-1))⊕(0+1)⊕(8+3)  (3+2)⊕(0+7)⊕(8+1)) = (2 8; 11 9)

Example 3

10 ⊗ (7 -3; 2 6; 1 0) = (17 7; 12 16; 11 10)
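The three matrix operations can be checked numerically. The following Python sketch (an illustration assuming finite entries; the data restated in the assertions matches the worked examples of this section as reconstructed here) implements ⊕, ⊗ and scalar multiplication:

```python
def oplus(A, B):    # A ⊕ B: entrywise maximum
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def otimes(A, B):   # A ⊗ B: (A ⊗ B)_ij = max_k (a_ik + b_kj)
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def scale(alpha, A):  # α ⊗ A: add α to every entry
    return [[alpha + a for a in row] for row in A]

assert oplus([[3, 1, 5], [2, 1, 5]],
             [[-1, 0, 2], [6, -5, 4]]) == [[3, 1, 5], [6, 1, 5]]
assert otimes([[-4, 1, -5], [3, 0, 8]],
              [[-1, 2], [1, 7], [3, 1]]) == [[2, 8], [11, 9]]
assert scale(10, [[7, -3], [2, 6], [1, 0]]) == [[17, 7], [12, 16], [11, 10]]
```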

Proposition 3

For A, B, C over ℝ̄ of compatible sizes, the following properties hold:

A⊕B = B⊕A,  A⊕(B⊕C) = (A⊕B)⊕C,  A⊗(B⊗C) = (A⊗B)⊗C,  A⊗(B⊕C) = A⊗B ⊕ A⊗C,  (A⊕B)⊗C = A⊗C ⊕ B⊗C

The statements follow from the definitions.

Proposition 4

The following hold for A, B, C, a, b, c, x, y of compatible sizes and α,β∈ℝ:

A⊗(α⊗B) = α⊗(A⊗B),  α⊗(A⊕B) = α⊗A ⊕ α⊗B,  (α⊕β)⊗A = α⊗A ⊕ β⊗A,  x^T⊗α⊗y = α⊗x^T⊗y,
a≤b ⟹ c^T⊗a ≤ c^T⊗b,  A≤B ⟹ A⊗C ≤ B⊗C,  A≤B ⟹ C⊗A ≤ C⊗B,  A≤B ⟺ A⊕B = B

The statements follow from the definition of the pair of operations (⊕,⊗).

Definition 3 Given real numbers a,b,c,…, a max-algebraic diagonal matrix is defined as:

diag(a,b,c,…) = (a -∞ -∞ …; -∞ b -∞ …; -∞ -∞ c …; …)

Given a vector d=(d_1,d_2,…,d_n), the diagonal matrix of the vector d is denoted by diag(d)=diag(d_1,d_2,…,d_n).

Definition 4 The max-algebraic identity matrix is a diagonal matrix with all diagonal entries zero. We denote an identity matrix by I; therefore I = diag(0,0,…,0).

It is obvious that A⊗I = I⊗A = A for any matrices A and I of compatible sizes.

Definition 5 Any matrix that can be obtained from the identity matrix I by permuting its rows and/or columns is called a permutation matrix. A matrix arising as a product of a diagonal matrix and a permutation matrix is called a generalised permutation matrix [12].

Definition 6 A matrix A∈ℝ̄^{n×n} is invertible if there exists a matrix B∈ℝ̄^{n×n} such that A⊗B = B⊗A = I. The matrix B is unique and will be called the inverse of A. We will henceforth denote B by A⁻¹.

It has been shown in [1] that a matrix is invertible if and only if it is a generalised permutation matrix.

If x=(x_1,…,x_n)∈ℝⁿ we will denote x⁻¹=(x_1⁻¹,…,x_n⁻¹); that is, x⁻¹=-x in conventional notation.

Example 4

Consider the following matrices

A = (-∞ -∞ 3; 5 -∞ -∞; -∞ 8 -∞)  and  B = (-∞ -5 -∞; -∞ -∞ -8; -3 -∞ -∞)

The matrix B is the inverse of A because

A⊗B = (-∞ -∞ 3; 5 -∞ -∞; -∞ 8 -∞) ⊗ (-∞ -5 -∞; -∞ -∞ -8; -3 -∞ -∞) = (0 -∞ -∞; -∞ 0 -∞; -∞ -∞ 0)
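The inverse property can be verified mechanically. A short Python sketch (an illustration, using `float('-inf')` for -∞, with the matrices of Example 4 as reconstructed here):

```python
NEG = float('-inf')   # the max-plus zero element -∞

def otimes(A, B):
    # (A ⊗ B)_ij = max_k (a_ik + b_kj); note -∞ + anything = -∞ for floats
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[NEG, NEG, 3], [5, NEG, NEG], [NEG, 8, NEG]]
B = [[NEG, -5, NEG], [NEG, NEG, -8], [-3, NEG, NEG]]
I = [[0 if i == j else NEG for j in range(3)] for i in range(3)]

assert otimes(A, B) == I and otimes(B, A) == I   # B = A⁻¹
```

Note that A is a generalised permutation matrix, in line with the invertibility criterion of [1].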

Given a matrix A=(a_ij) over ℝ̄, the transpose of A will be denoted by A^T, that is A^T=(a_ji). Structures of discrete-event dynamic systems may be represented by square matrices A over the semiring

ℝ̄ = (ℝ∪{-∞}, ⊕, ⊗) = (ℝ∪{-∞}, max, +)

The system is embeddable in the self-dual system

(ℝ∪{-∞}∪{+∞}, ⊕, ⊗, ⊕', ⊗') = (ℝ∪{-∞}∪{+∞}, max, +, min, +)

The basic algebraic properties of ⊕' and ⊗' are similar to those of ⊕ and ⊗ described earlier; they are obtained by swapping max and min. The pair (⊕',⊗') is extended to matrices and vectors as follows.

Given A, B of compatible sizes and α∈ℝ, we define:

A⊕'B = (a_ij ⊕' b_ij),  A⊗'B = (⊕'_k a_ik ⊗' b_kj) = (min_k(a_ik + b_kj)),  α⊗'A = (α ⊗' a_ij)

Also, the matrix properties of the pair (⊕',⊗') are similar to those of (⊕,⊗); just swap max and min. For any matrix A=(a_ij) over the self-dual system, the conjugate matrix is A* = (-a_ji), obtained by negation and transposition; that is, A* = -A^T.

Proposition 5 The following relations hold for any matrices U, V, W of compatible sizes over the self-dual system.

(U⊗'V)⊗W ≤ U⊗'(V⊗W)
U⊗(U*⊗'W) ≤ W
U⊗(U*⊗'(U⊗W)) = U⊗W

Follows from the definitions.


3. The Multiprocessor Interactive System (MPIS): A practical application

Linear equations and inequalities in max-algebra have a considerable number of applications. The model we present here is called the multiprocessor interactive system (MPIS), which is formulated as follows:

Products P_1,…,P_m are prepared using n processors, every processor contributing to the completion of each product by producing a partial product. It is assumed that every processor can work on all products simultaneously and that all these actions on a processor start as soon as the processor is ready to work. Let a_ij be the duration of the work of the jth processor needed to complete the partial product for P_i (i=1,…,m; j=1,…,n). Let us denote by x_j the starting time of the jth processor (j=1,…,n). Then all partial products for P_i will be ready at time max(a_i1+x_1,…,a_in+x_n). If completion times b_1,…,b_m are given for the products, then the starting times have to satisfy the following system of equations:

max(a_i1+x_1,…,a_in+x_n) = b_i  for all i∈M

Using the notation a⊕b=max(a,b) and a⊗b=a+b for a,b∈ℝ, extended to matrices and vectors in the same way as in linear algebra, this system can be written as

A⊗x = b

Any system of the form A⊗x=b is called a 'one-sided max-linear system'. Also, if the requirement is that each product is to be produced on or before the completion times b_1,…,b_m, then the starting times have to satisfy

max(a_i1+x_1,…,a_in+x_n) ≤ b_i  for all i∈M

which can also be written as

A⊗x ≤ b

A system of the form A⊗x≤b is called a 'one-sided max-linear system of inequalities'.
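The MPIS reading of A⊗x≤b can be illustrated with a small hypothetical instance (the durations and deadlines below are invented for illustration): the latest feasible start times are exactly the column minima of b_i - a_ij, i.e. the principal solution discussed in the next section.

```python
# Hypothetical MPIS instance: a_ij is the time processor j needs for its
# partial product of P_i; deadlines[i] is the completion time b_i.
A = [[3, 5, 2],
     [4, 1, 6]]
deadlines = [10, 9]

# Latest start times: x_j = min_i (b_i - a_ij) is the greatest x with
# max_j (a_ij + x_j) <= b_i for all i, i.e. the greatest solution of A ⊗ x ≤ b.
starts = [min(deadlines[i] - A[i][j] for i in range(len(A)))
          for j in range(len(A[0]))]
completions = [max(A[i][j] + starts[j] for j in range(len(A[0])))
               for i in range(len(A))]
```

With these numbers the latest starts are (5, 5, 3) and both products finish exactly at their deadlines.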


4. Linear equations and inequalities in max-algebra

In this section we present systems of linear equations and inequalities in max-algebra. Solvability conditions for the linear system and for the inequalities will each be presented. A system consisting of max-linear equations and inequalities will also be discussed, and necessary and sufficient conditions for its solvability will be presented.

4.1. System of equations

In this section we present a solution method for the system A⊗x=b as given in [3], [1], [13] and also in the monograph [10]. Results concerning the existence and uniqueness of solutions to the system will also be presented.

Given A=(a_ij)∈ℝ̄^{m×n} and b=(b_1,…,b_m)^T∈ℝ̄^m, a system of the form

A⊗x = b

is called a one-sided max-linear system; sometimes we may omit 'max-linear' and say one-sided system. This system can be written in conventional notation as

max_{j=1,…,n}(a_ij + x_j) = b_i,  i∈M

This system can be written, after subtracting the right-hand side constants, as

max_{j=1,…,n}(a_ij ⊗ b_i⁻¹ + x_j) = 0,  i∈M

A one-sided max-linear system in which all right-hand side constants are zero is called a normalised max-linear system, or just normalised, and the process of subtracting the right-hand side constants is called normalisation. Equivalently, normalisation is the process of multiplying the system A⊗x=b from the left by the matrix B'. That is

B'⊗A⊗x = B'⊗b = 0

where

B' = diag(b_1⁻¹, b_2⁻¹, …, b_m⁻¹) = diag(b⁻¹)

For instance, consider the following one-sided system:

(-2 1 3; 3 0 2; 1 2 1) ⊗ (x_1; x_2; x_3) = (5; 6; 3)

After normalisation, this system is equivalent to

(-7 -4 -2; -3 -6 -4; -2 -1 -2) ⊗ (x_1; x_2; x_3) = (0; 0; 0)

that is, after multiplying the system from the left by

diag(-5, -6, -3) = (-5 -∞ -∞; -∞ -6 -∞; -∞ -∞ -3)

Consider the first equation of the normalised system above, that is max(x_1-7, x_2-4, x_3-2)=0. This means that if (x_1,x_2,x_3)^T is a solution to this system then x_1≤7, x_2≤4, x_3≤2, and at least one of these inequalities is satisfied with equality. From the other equations of the system we also have x_1≤3 and x_1≤2; hence x_1 ≤ min(7,3,2) = -max(-7,-3,-2) = x̄_1, where -x̄_1 is the column 1 maximum of the normalised matrix. It is clear that x_j ≤ x̄_j for all j, where -x̄_j is the column j maximum. At the same time, equality must be attained in some of these inequalities, so that in every row there is at least one column maximum which is attained by x_j. This observation was made in [3].

Definition 7 A matrix A is called doubly ℝ-astic [14], [15] if it has at least one finite element in each row and in each column.

We introduce the following notation:

S(A,b) = {x∈ℝ̄ⁿ; A⊗x=b}
M_j = {k∈M; b_k⊗a_kj⁻¹ = min_{i∈M}(b_i⊗a_ij⁻¹)}  for all j∈N
x̄(A,b)_j = min_{i∈M}(b_i⊗a_ij⁻¹)  for all j∈N

We now consider the cases when A=-∞ and/or b=-∞ (all entries -∞). Suppose that b=-∞. Then S(A,b) can simply be written as

S(A,b) = {x∈ℝ̄ⁿ; x_j=-∞ if A_j ≠ -∞, j∈N}

where A_j denotes the jth column of A. Therefore, if A=-∞ we have S(A,b)=ℝ̄ⁿ. Now, if A=-∞ and b≠-∞ then S(A,b)=∅. Thus we may assume in this section that A≠-∞ and b≠-∞. If b_k=-∞ for some k∈M, then for any x∈S(A,b) we have x_j=-∞ whenever a_kj≠-∞, j∈N; as a result, the kth equation can be removed from the system together with every column j of A where a_kj≠-∞ (if any), setting the corresponding x_j=-∞. Consequently, we may assume without loss of generality that b∈ℝᵐ.

Moreover, if b∈ℝᵐ and A has a -∞ row then S(A,b)=∅. If there is a -∞ column j in A then x_j may take on any value in a solution x. Thus, in what follows we assume without loss of generality that A is doubly ℝ-astic and b∈ℝᵐ.

Theorem 1 Let A=(a_ij)∈ℝ̄^{m×n} be doubly ℝ-astic and b∈ℝᵐ. Then x∈S(A,b) if and only if

(i) x ≤ x̄(A,b), and
(ii) ⋃_{j∈N_x} M_j = M, where N_x = {j∈N; x_j = x̄(A,b)_j}.

Suppose x∈S(A,b). Thus we have

A⊗x=b ⟹ max_j(a_ij+x_j) = b_i for all i∈M, with a_ij+x_j = b_i for some j∈N
⟹ x_j ≤ b_i⊗a_ij⁻¹ for all i∈M ⟹ x_j ≤ min_{i∈M}(b_i⊗a_ij⁻¹) for all j∈N.

Hence x ≤ x̄.

Suppose again that x∈S(A,b). Since M_j ⊆ M, we only need to show that M ⊆ ⋃_{j∈N_x} M_j. Let k∈M. Since b_k = a_kj⊗x_j > -∞ for some j∈N, and x_j⁻¹ ≥ x̄_j⁻¹ ≥ a_ij⊗b_i⁻¹ for every i∈M, we have x_j⁻¹ = a_kj⊗b_k⁻¹ = max_{i∈M}(a_ij⊗b_i⁻¹). Hence k∈M_j and x_j = x̄_j.

Conversely, suppose that x ≤ x̄ and ⋃_{j∈N_x} M_j = M. Let k∈M and j∈N. Then a_kj⊗x_j ≤ b_k if a_kj = -∞. If a_kj ≠ -∞ then

a_kj⊗x_j ≤ a_kj⊗x̄_j ≤ a_kj⊗b_k⊗a_kj⁻¹ = b_k.

Therefore A⊗x ≤ b. At the same time, k∈M_j for some j∈N satisfying x_j = x̄_j; for this j both inequalities above are equalities, and thus A⊗x = b. The following is a summary of prerequisites proved in [1] and [12]:

Theorem 2 Let A=(a_ij)∈ℝ̄^{m×n} be doubly ℝ-astic and b∈ℝᵐ. The system A⊗x=b has a solution if and only if x̄(A,b) is a solution.

This follows from Theorem 1.

The vector x̄(A,b) plays an important role in the solution of A⊗x=b; it is called the principal solution to A⊗x=b [1], and we will call it likewise. The principal solution will also be used when studying the system A⊗x≤b and when solving the one-sided system containing both equations and inequalities. One-sided systems containing both equations and inequalities have been studied in [16], and the results will be presented later in this chapter.

Note that the principal solution may not be a solution to the system A⊗x=b. More precisely, the following was observed in [12]:

Corollary 1 Let A=(a_ij)∈ℝ̄^{m×n} be doubly ℝ-astic and b∈ℝᵐ. Then the following three statements are equivalent:

(i) S(A,b) ≠ ∅
(ii) x̄(A,b) ∈ S(A,b)
(iii) ⋃_{j∈N} M_j = M

The statements follow from Theorems 1 and 2.

For the existence of a unique solution to the max-linear system Ax=b we have the following corollary:

Corollary 2 Let A=(a_ij)∈ℝ̄^{m×n} be doubly ℝ-astic and b∈ℝᵐ. Then S(A,b)={x̄(A,b)} if and only if

(i) ⋃_{j∈N} M_j = M, and
(ii) ⋃_{j∈N'} M_j ≠ M for any N'⊆N, N'≠N.

This follows from Theorem 1. The questions of solvability and unique solvability of the system A⊗x=b were linked to the set covering and minimal set covering problems of combinatorics in [12].
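The set covering view can be made concrete. The sketch below (an illustration, finite entries assumed) builds the sets M_j for the normalisation example of this section and tests the covering condition of Corollary 1; for that particular system the union of the M_j misses row 2, so S(A,b)=∅.

```python
def cover_sets(A, b):
    """M_j = set of rows where column j attains min_i (b_i - a_ij)."""
    m, n = len(A), len(A[0])
    sets = []
    for j in range(n):
        col = [b[i] - A[i][j] for i in range(m)]
        sets.append({i for i in range(m) if col[i] == min(col)})
    return sets

A = [[-2, 1, 3], [3, 0, 2], [1, 2, 1]]   # the normalisation example above
b = [5, 6, 3]
Mj = cover_sets(A, b)                    # 0-based row indices
solvable = set().union(*Mj) == set(range(len(A)))
```

Here Mj = [{2}, {2}, {0, 2}], whose union is {0, 2}: the second equation (index 1) is never covered, so the covering condition fails and the system has no solution.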

4.2. System of inequalities

In this section we show how a solution to the one-sided system of inequalities can be obtained.

Let A=(a_ij)∈ℝ^{m×n} and b=(b_1,…,b_m)^T∈ℝᵐ. A system of the form

A⊗x ≤ b

is called a one-sided max-linear system of inequalities, or just a one-sided system of inequalities. One-sided systems of inequalities have received some attention in the past; see [3], [1] and [17] for more information. Here we will only present a result which shows that the principal solution x̄(A,b) is the greatest solution to A⊗x≤b: that is, if the system has a solution then x̄(A,b) is the greatest of all solutions. We denote the solution set by S(A,b,≤), that is

S(A,b,≤) = {x∈ℝⁿ; A⊗x≤b}

Theorem 3 x∈S(A,b,≤) if and only if x ≤ x̄(A,b).

Suppose x∈S(A,b,≤). Then we have

A⊗x≤b ⟺ max_j(a_ij+x_j) ≤ b_i for all i ⟺ a_ij+x_j ≤ b_i for all i,j ⟺ x_j ≤ b_i⊗a_ij⁻¹ for all i,j ⟺ x_j ≤ min_i(b_i⊗a_ij⁻¹) for all j ⟺ x ≤ x̄(A,b)

and the proof is complete. The system of inequalities

A⊗x ≤ b,  C⊗x ≥ d

was discussed in [18], where the following result was presented.

Lemma 1 The system A⊗x≤b, C⊗x≥d has a solution if and only if C⊗x̄(A,b) ≥ d.
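Lemma 1 yields an O(mn + rn) feasibility test: compute the principal solution of the ≤-part and substitute it into the ≥-part. A minimal Python sketch (an illustration with invented data and finite entries):

```python
def principal(A, b):
    """x̄(A,b)_j = min_i (b_i - a_ij)."""
    return [min(b[i] - A[i][j] for i in range(len(A))) for j in range(len(A[0]))]

def mp_mul(M, x):
    """(M ⊗ x)_i = max_j (m_ij + x_j)."""
    return [max(a + xj for a, xj in zip(row, x)) for row in M]

# A ⊗ x ≤ b, C ⊗ x ≥ d is solvable iff C ⊗ x̄(A,b) ≥ d (Lemma 1).
A, b = [[0, 2], [1, 0]], [3, 4]
C, d = [[1, 0], [0, 1]], [2, 3]

xbar = principal(A, b)                                       # greatest x with A ⊗ x ≤ b
feasible = all(v >= di for v, di in zip(mp_mul(C, xbar), d))
```

For this data x̄ = (3, 1), C⊗x̄ = (4, 3) ≥ d, so the system is solvable (and x̄ itself is a solution).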

4.3. A system containing both equations and inequalities

In this section a system containing both equations and inequalities will be presented; the results are taken from [16]. Let A=(a_ij)∈ℝ^{k×n}, C=(c_ij)∈ℝ^{r×n}, b=(b_1,…,b_k)^T∈ℝᵏ and d=(d_1,…,d_r)^T∈ℝʳ. A one-sided max-linear system with both equations and inequalities is of the form:

A⊗x = b,  C⊗x ≤ d

We shall use the following notation throughout this section:

R = {1,2,...,r}
S(A,C,b,d) = {x∈ℝⁿ; A⊗x=b and C⊗x≤d}
S(C,d,≤) = {x∈ℝⁿ; C⊗x≤d}
x̄_j(C,d) = min_{i∈R}(d_i⊗c_ij⁻¹)  for all j∈N
K = {1,…,k}
K_j = {k'∈K; b_{k'}⊗a_{k'j}⁻¹ = min_{i∈K}(b_i⊗a_ij⁻¹)}  for all j∈N
x̄_j(A,b) = min_{i∈K}(b_i⊗a_ij⁻¹)  for all j∈N
x̄ = (x̄_1,...,x̄_n)^T
J = {j∈N; x̄_j(C,d) ≥ x̄_j(A,b)}  and  L = N∖J

We also define the vector x̂ = (x̂_1,x̂_2,...,x̂_n)^T, where

x̂_j(A,C,b,d) = x̄_j(A,b) if j∈J,  and  x̂_j(A,C,b,d) = x̄_j(C,d) if j∈L,

and N_x̂ = {j∈N; x̂_j = x̄_j(A,b)}.

Theorem 4 Let A=(a_ij)∈ℝ^{k×n}, C=(c_ij)∈ℝ^{r×n}, b=(b_1,…,b_k)^T∈ℝᵏ and d=(d_1,…,d_r)^T∈ℝʳ. Then the following three statements are equivalent:

(i) S(A,C,b,d) ≠ ∅
(ii) x̂(A,C,b,d) ∈ S(A,C,b,d)
(iii) ⋃_{j∈J} K_j = K

(i)⟹(ii). Let x∈S(A,C,b,d); then x∈S(A,b) and x∈S(C,d,≤). Since x∈S(C,d,≤), it follows from Theorem 3 that x ≤ x̄(C,d). Given that x∈S(A,b) and x∈S(C,d,≤), we need to show that x̄_j(C,d) ≥ x̄_j(A,b) for all j∈N_x (that is, N_x ⊆ J). Let j∈N_x; then x_j = x̄_j(A,b). Since x∈S(C,d,≤) we have x ≤ x̄(C,d) and therefore x̄_j(A,b) ≤ x̄_j(C,d), thus j∈J. Hence N_x ⊆ J and, by Theorem 1, ⋃_{j∈J} K_j = K. This also proves (i)⟹(iii).

(iii)⟹(i). Suppose ⋃_{j∈J} K_j = K. Since x̂(A,C,b,d) ≤ x̄(C,d) we have x̂(A,C,b,d)∈S(C,d,≤). Also x̂(A,C,b,d) ≤ x̄(A,b), and N_x̂ ⊇ J gives ⋃_{j∈N_x̂} K_j ⊇ ⋃_{j∈J} K_j = K. Hence ⋃_{j∈N_x̂} K_j = K, and therefore x̂(A,C,b,d)∈S(A,b) as well as x̂(A,C,b,d)∈S(C,d,≤). Hence x̂(A,C,b,d)∈S(A,C,b,d) (in particular, S(A,C,b,d)≠∅), and this also proves (iii)⟹(ii).

Theorem 5 Let A=(a_ij)∈ℝ^{k×n}, C=(c_ij)∈ℝ^{r×n}, b=(b_1,…,b_k)^T∈ℝᵏ and d=(d_1,…,d_r)^T∈ℝʳ. Then x∈S(A,C,b,d) if and only if

(i) x ≤ x̂(A,C,b,d), and
(ii) ⋃_{j∈N_x} K_j = K, where N_x = {j∈N; x_j = x̄_j(A,b)}.

(⟹) Let x∈S(A,C,b,d); then x ≤ x̄(A,b) and x ≤ x̄(C,d). Since x̂(A,C,b,d) = x̄(A,b) ⊕' x̄(C,d), we have x ≤ x̂(A,C,b,d). Also, x∈S(A,C,b,d) implies x∈S(A,b), and it follows from Theorem 1 that ⋃_{j∈N_x} K_j = K.

(⟸) Suppose that x ≤ x̂(A,C,b,d) = x̄(A,b) ⊕' x̄(C,d) and ⋃_{j∈N_x} K_j = K. It follows from Theorem 1 that x∈S(A,b), and by Theorem 3 that x∈S(C,d,≤). Thus x∈S(A,b)∩S(C,d,≤) = S(A,C,b,d).

We introduce the symbol |X| which stands for the number of elements of the set X.

Lemma 2 Let A=(a_ij)∈ℝ^{k×n}, C=(c_ij)∈ℝ^{r×n}, b=(b_1,…,b_k)^T∈ℝᵏ and d=(d_1,…,d_r)^T∈ℝʳ. If |S(A,C,b,d)|=1 then |S(A,b)|=1.

Suppose |S(A,C,b,d)|=1, that is S(A,C,b,d)={x} for some x∈ℝⁿ. Since S(A,C,b,d)={x}, we have x∈S(A,b) and thus S(A,b)≠∅. For contradiction, suppose |S(A,b)|>1. We need to check the following two cases: (i) L≠∅ and (ii) L=∅, where L=N∖J, and show in each case that |S(A,C,b,d)|>1.

Case (i), L≠∅: Suppose first that L contains only one element, say n∈N, i.e. L={n}. Since x∈S(A,C,b,d), it follows from Theorem 4 that x̂(A,C,b,d)∈S(A,C,b,d), that is, x = x̂(A,C,b,d) = (x̄_1(A,b), x̄_2(A,b),…, x̄_{n-1}(A,b), x̄_n(C,d))∈S(A,C,b,d). It can also be seen that x̄_n(C,d) < x̄_n(A,b), and that any vector of the form z = (x̄_1(A,b), x̄_2(A,b),…, x̄_{n-1}(A,b), α), where α ≤ x̄_n(C,d), belongs to S(A,C,b,d). Hence |S(A,C,b,d)|>1. If L contains more than one element, the proof proceeds in a similar way.

Case (ii), L=∅ (J=N): Suppose J=N. Then x̂(A,C,b,d) = x̄(A,b) ≤ x̄(C,d). By assumption there are x, x'∈S(A,b) with x ≠ x'. Then x ≤ x̄(A,b) ≤ x̄(C,d) and also x' ≤ x̄(A,b) ≤ x̄(C,d). Thus x, x'∈S(C,d,≤). Consequently x, x'∈S(A,C,b,d) and x ≠ x'. Hence |S(A,C,b,d)|>1.

Theorem 6 Let A=(a_ij)∈ℝ^{k×n}, C=(c_ij)∈ℝ^{r×n}, b=(b_1,…,b_k)^T∈ℝᵏ and d=(d_1,…,d_r)^T∈ℝʳ. If |S(A,C,b,d)|=1 then J=N.

Suppose |S(A,C,b,d)|=1. It follows from Theorem 4 that ⋃_{j∈J} K_j = K. Also, |S(A,C,b,d)|=1 implies |S(A,b)|=1 (Lemma 2). Moreover, |S(A,b)|=1 implies ⋃_{j∈N} K_j = K and ⋃_{j∈N'} K_j ≠ K for every N'⊆N, N'≠N (Corollary 2). Since J⊆N and ⋃_{j∈J} K_j = K, we have J=N.

Corollary 3 Let A=(a_ij)∈ℝ^{k×n}, C=(c_ij)∈ℝ^{r×n}, b=(b_1,…,b_k)^T∈ℝᵏ and d=(d_1,…,d_r)^T∈ℝʳ. If |S(A,C,b,d)|=1 then S(A,C,b,d)={x̄(A,b)}.

The statement follows from Theorem 6 and Lemma 2.

Corollary 4 Let A=(a_ij)∈ℝ^{k×n}, C=(c_ij)∈ℝ^{r×n}, b=(b_1,…,b_k)^T∈ℝᵏ and d=(d_1,…,d_r)^T∈ℝʳ. Then the following three statements are equivalent:

(i) |S(A,C,b,d)|=1
(ii) |S(A,b)|=1 and J=N
(iii) ⋃_{j∈J} K_j = K and ⋃_{j∈J'} K_j ≠ K for every J'⊆J, J'≠J, and J=N

(i)⟹(ii) follows from Lemma 2 and Theorem 6.

(ii)⟹(i): Let J=N; then x̄(A,b) ≤ x̄(C,d), and thus S(A,b) ⊆ S(C,d,≤). Therefore S(A,C,b,d) = S(A,b)∩S(C,d,≤) = S(A,b), and hence |S(A,C,b,d)|=1.

(ii)⟹(iii): Suppose S(A,b)={x} and J=N. It follows from Corollary 2 that ⋃_{j∈N} K_j = K and ⋃_{j∈N'} K_j ≠ K for N'⊆N, N'≠N. Since J=N, the statement follows.

(iii)⟹(ii): J=N is immediate, and the statement then follows from Corollary 2.

Theorem 7 Let A=(a_ij)∈ℝ^{k×n}, C=(c_ij)∈ℝ^{r×n}, b=(b_1,…,b_k)^T∈ℝᵏ and d=(d_1,…,d_r)^T∈ℝʳ. If |S(A,C,b,d)|>1 then |S(A,C,b,d)| is infinite.

Suppose |S(A,C,b,d)|>1. By Corollary 4 we have ⋃_{j∈J} K_j = K for some J⊆N with J≠N (that is, there is j∈N such that x̄_j(A,b) > x̄_j(C,d)). Since J≠N and ⋃_{j∈J} K_j = K, Theorem 5 implies that any vector x=(x_1,x_2,...,x_n)^T of the form

x_j = x̄_j(A,b) if j∈J,  x_j = y with y ≤ x̄_j(C,d) if j∈L

is in S(A,C,b,d), and the statement follows.

Remark 1 From Theorem 7 we can say that the number of solutions to the one-sided system containing both equations and inequalities can only be 0, 1 or ∞.

The vector x̂(A,C,b,d) plays an important role in the solution of the one-sided system containing both equations and inequalities, analogous to that of the principal solution x̄(A,b) for the one-sided max-linear system A⊗x=b; see [19] for more details.
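Theorem 4 suggests a direct test for the mixed system: compute x̂ as the componentwise minimum of the two principal solutions and check it. A Python sketch (an illustration with finite entries; the data in the usage line below is invented):

```python
def principal(M, v):
    """Principal solution: min_i (v_i - m_ij) per column j."""
    return [min(v[i] - M[i][j] for i in range(len(M))) for j in range(len(M[0]))]

def mp_mul(M, x):
    return [max(a + xj for a, xj in zip(row, x)) for row in M]

def solve_mixed(A, b, C, d):
    """Return x̂ if A ⊗ x = b, C ⊗ x ≤ d is solvable, else None (Theorem 4)."""
    xhat = [min(p, q) for p, q in zip(principal(A, b), principal(C, d))]
    if mp_mul(A, xhat) == b and all(v <= di for v, di in zip(mp_mul(C, xhat), d)):
        return xhat
    return None

x = solve_mixed([[0, 2], [1, 0]], [3, 4], [[0, 0]], [5])
```

Here `solve_mixed` returns [3, 1]; tightening d from [5] to [2] makes the system infeasible and the function returns None.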


5. Max-linear program with equation and inequality constraints

Suppose that a vector f=(f_1,f_2,...,f_n)^T∈ℝⁿ is given. The task of minimizing [maximizing] the function f(x) = f^T⊗x = max(f_1+x_1, f_2+x_2,..., f_n+x_n) subject to A⊗x=b, C⊗x≤d is called a max-linear program with one-sided equations and inequalities, and will be denoted by MLP_min [MLP_max]. We denote the sets of optimal solutions by S_min(A,C,b,d) and S_max(A,C,b,d), respectively.

Lemma 3 Suppose f∈ℝⁿ and let f(x)=f^T⊗x be defined on ℝⁿ. Then:

(i) f(x) is max-linear, i.e. f(λ⊗x ⊕ μ⊗y) = λ⊗f(x) ⊕ μ⊗f(y)

for every x,y n .

(ii) f(x) is isotone, i.e. f(x) ≤ f(y) for every x,y∈ℝⁿ with x ≤ y.

(i) Let λ,μ∈ℝ. Then we have

f(λ⊗x ⊕ μ⊗y) = f^T⊗λ⊗x ⊕ f^T⊗μ⊗y = λ⊗f^T⊗x ⊕ μ⊗f^T⊗y = λ⊗f(x) ⊕ μ⊗f(y)

and the statement now follows.

(ii) Let x,y∈ℝⁿ be such that x ≤ y. Then

f^T⊗x ≤ f^T⊗y for any f∈ℝⁿ, and hence f(x) ≤ f(y).

Note that it would be possible to convert equations to inequalities and conversely, but this would result in an increase in the number of constraints or variables, and thus in the computational complexity. The method we present here does not require any new constraints or variables.

We denote

(A⊗x)_i = max_{j∈N}(a_ij + x_j)

A variable x_j will be called active if f_j⊗x_j = f(x). Also, a variable x_j will be called active on the ith constraint (equation or inequality) if the value (A⊗x)_i is attained at the term a_ij⊗x_j. It follows from Theorem 5 and Lemma 3 that x̂(A,C,b,d)∈S_max(A,C,b,d). We now present a polynomial algorithm which finds x∈S_min(A,C,b,d) or recognizes that S_min(A,C,b,d)=∅. By Theorem 4, either x̂(A,C,b,d)∈S(A,C,b,d) or S(A,C,b,d)=∅. Therefore, we assume in the following algorithm that S(A,C,b,d)≠∅ and also S_min(A,C,b,d)≠∅.

Algorithm ONEMLP-EI (max-linear program with one-sided equations and inequalities). Input: f=(f_1,f_2,...,f_n)^T∈ℝⁿ, b=(b_1,b_2,...,b_k)^T∈ℝᵏ, d=(d_1,d_2,...,d_r)^T∈ℝʳ, A=(a_ij)∈ℝ^{k×n} and C=(c_ij)∈ℝ^{r×n}. Output: x∈S_min(A,C,b,d).

  1. Find x̄(A,b), x̄(C,d), x̂(A,C,b,d) and K_j for j∈J, where J = {j∈N; x̄_j(C,d) ≥ x̄_j(A,b)}

  2. x:=x ^(A,C,b,d)

  3. H(x) := {j∈N; f_j + x_j = f(x)}

  4. J := J∖H(x)

  5. If ⋃_{j∈J} K_j ≠ K then stop (x∈S_min(A,C,b,d))

  6. Set x_j small enough (so that it is not active on any equation or inequality) for every j∈H(x)

  7. Go to 3

Theorem 8 The algorithm ONEMLP-EI is correct and its computational complexity is O((k+r)n²).

Correctness follows from Theorem 5, and the computational complexity is obtained as follows. In Step 1, x̄(A,b) is found in O(kn) time, while x̄(C,d), x̂(A,C,b,d) and the sets K_j can be determined in O(rn), O((k+r)n) and O(kn) time, respectively. The loop 3-7 is repeated at most n-1 times, since J has at most n elements and in Step 4 at least one element is removed per iteration. Step 3 is O(n), Step 6 is O(kn) and Step 7 is O(n). Hence the loop 3-7 is O(kn²).

5.1. An example

Consider the following max-linear program, in which f=(5,6,1,4,-1)^T,

A = (2 8 4 0 1; 0 6 2 2 1; 0 1 -2 4 8),  b = (7; 5; 7),
C = (-1 2 -3 0 6; 3 4 -2 2 1; 1 3 -2 3 4)  and  d = (5; 5; 6).

We now trace a run of Algorithm ONEMLP-EI. We find x̄(A,b)=(5,-1,3,3,-1)^T, x̄(C,d)=(2,1,7,3,-1)^T, x̂(A,C,b,d)=(2,-1,3,3,-1)^T, J={2,3,4,5} and K_2={1,2}, K_3={1,2}, K_4={2,3}, K_5={3}. Set x := x̂(A,C,b,d)=(2,-1,3,3,-1)^T; then H(x)={1,4} and J⊄H(x). We have J := J∖H(x)={2,3,5} and K_2∪K_3∪K_5=K, so we continue: set x_1=x_4=-10⁴ (say, small enough), giving x=(-10⁴,-1,3,-10⁴,-1)^T. Now H(x)={2} and J := J∖H(x)={3,5}. Since K_3∪K_5=K, set x_2=-10⁴ (say), giving x=(-10⁴,-10⁴,3,-10⁴,-1)^T. Now H(x)={3} and J := J∖H(x)={5}. Since K_5≠K, we stop; an optimal solution is x=(-10⁴,-10⁴,3,-10⁴,-1)^T and f_min=4.
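The run above can be reproduced with a short implementation. The sketch below (an illustration, not the authors' code; finite entries, 0-based indices, and the matrices as reconstructed in this section) returns the same optimal value:

```python
SMALL = -10**4   # "small enough": active on no equation or inequality

def principal(M, v):
    return [min(v[i] - M[i][j] for i in range(len(M))) for j in range(len(M[0]))]

def onemlp_ei(f, A, b, C, d):
    k, n = len(A), len(A[0])
    xa, xc = principal(A, b), principal(C, d)
    K = [{i for i in range(k) if b[i] - A[i][j] == xa[j]} for j in range(n)]
    J = {j for j in range(n) if xc[j] >= xa[j]}
    x = [xa[j] if j in J else xc[j] for j in range(n)]    # x := x̂
    while True:
        fx = max(f[j] + x[j] for j in range(n))
        H = {j for j in range(n) if f[j] + x[j] == fx}    # active variables
        J -= H
        cover = set().union(*(K[j] for j in J)) if J else set()
        if cover != set(range(k)):
            return x, fx                                  # x ∈ S_min
        for j in H:                                       # deactivate H(x)
            x[j] = SMALL

x, fmin = onemlp_ei([5, 6, 1, 4, -1],
                    [[2, 8, 4, 0, 1], [0, 6, 2, 2, 1], [0, 1, -2, 4, 8]],
                    [7, 5, 7],
                    [[-1, 2, -3, 0, 6], [3, 4, -2, 2, 1], [1, 3, -2, 3, 4]],
                    [5, 5, 6])
```

This run returns fmin = 4, matching the traced example.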


6. A special case of max-linear program with two-sided constraints

Suppose c=(c_1,c_2,...,c_m)^T, d=(d_1,d_2,...,d_m)^T∈ℝᵐ and A=(a_ij), B=(b_ij)∈ℝ^{m×n} are given vectors and matrices. The system

A⊗x ⊕ c = B⊗x ⊕ d

is called a non-homogeneous two-sided max-linear system, and the set of solutions of this system will be denoted by S. Two-sided max-linear systems have been studied in [20], [21], [22] and [23].

Optimization problems whose objective function is max-linear and whose constraint is A⊗x⊕c=B⊗x⊕d are called max-linear programs (MLP). Max-linear programs are studied in [24], where solution methods for both minimization and maximization problems were developed. The methods are proved to be pseudopolynomial if all entries are integer. Also, non-linear programs with max-linear constraints were dealt with in [25], where heuristic methods were developed and tested on a number of instances.

Consider max-linear programs with two-sided constraints (minimization), MLP_min:

f(x) = f^T⊗x → min  subject to  A⊗x ⊕ c = B⊗x ⊕ d

where f=(f_1,…,f_n)^T∈ℝⁿ, c=(c_1,…,c_m)^T, d=(d_1,…,d_m)^T∈ℝᵐ and A=(a_ij), B=(b_ij)∈ℝ^{m×n} are given vectors and matrices. We introduce the substitution

y = (f_1⊗x_1, f_2⊗x_2,…, f_n⊗x_n)^T = diag(f)⊗x

where diag(f) is the diagonal matrix whose diagonal elements are f_1,f_2,...,f_n and whose off-diagonal elements are -∞. It follows that

f^T⊗x = 0^T⊗y,  x = (f_1⁻¹⊗y_1, f_2⁻¹⊗y_2,…, f_n⁻¹⊗y_n) = diag(f)⁻¹⊗y

Hence, by this substitution the problem becomes

0^T⊗y → min  subject to  A'⊗y ⊕ c = B'⊗y ⊕ d,

where 0^T is the transpose of the zero vector, A' = A⊗(diag(f))⁻¹ and B' = B⊗(diag(f))⁻¹.

Therefore we may assume without loss of generality that f=0, and the problem is equivalent to

f(x) = ⊕_{j=1,…,n} x_j → min  subject to  A⊗x ⊕ c = B⊗x ⊕ d

The set of feasible solutions of this problem will be denoted by S and the set of optimal solutions by S_min. A vector is called constant if all its components are equal; that is, x∈ℝⁿ is constant if x_1=x_2=⋯=x_n. For any x∈S we define the set Q(x) = {i∈M; (A⊗x)_i > c_i}. We introduce the following notation for submatrices. Let A=(a_ij)∈ℝ^{m×n}, 1≤i_1<i_2<⋯<i_q≤m and 1≤j_1<j_2<⋯<j_r≤n. Then

A(i_1,i_2,…,i_q | j_1,j_2,…,j_r) = (a_{i_1 j_1} a_{i_1 j_2} … a_{i_1 j_r}; a_{i_2 j_1} a_{i_2 j_2} … a_{i_2 j_r}; …; a_{i_q j_1} a_{i_q j_2} … a_{i_q j_r}) = A(Q,R)

where Q={i_1,…,i_q} and R={j_1,…,j_r}. Similar notation is used for vectors: c(i_1,…,i_r) = (c_{i_1},…,c_{i_r})^T = c(R). Given MLP_min with c ≥ d, we define the sets

M⁼ = {i∈M; c_i = d_i}  and  M⁾ = {i∈M; c_i > d_i}

We also define the following matrices and vectors:

A⁼ = A(M⁼,N),  A⁾ = A(M⁾,N),  B⁼ = B(M⁼,N),  B⁾ = B(M⁾,N),  c⁼ = c(M⁼),  c⁾ = c(M⁾)

An easily solvable case arises when there is a constant vector x∈S such that Q(x)=∅. Such a constant vector x satisfies the following equations and inequalities:

A⁼⊗x ≤ c⁼,  A⁾⊗x ≤ c⁾,  B⁼⊗x ≤ c⁼,  B⁾⊗x = c⁾

where A⁼, A⁾, B⁼, B⁾, c⁼ and c⁾ are defined above. This one-sided system of equations and inequalities can be written as

G⊗x = p,  H⊗x ≤ q

where

G = B⁾,  H = (A⁼; A⁾; B⁼),  p = c⁾  and  q = (c⁼; c⁾; c⁼)

Recall that S(G,H,p,q) is the set of solutions of G⊗x=p, H⊗x≤q.

Theorem 9 Let Q(x)=∅ for some constant vector x=(α,…,α)^T∈S. If z∈S_min then z∈S(G,H,p,q).

Let x=(α,…,α)^T∈S with Q(x)=∅, and suppose z∈S_min. This implies that f(z) ≤ f(x) = α. Therefore z_j ≤ α for all j∈N. Consequently z ≤ x and (A⊗z)_i ≤ (A⊗x)_i ≤ c_i for all i∈M. Hence Q(z)=∅ and, since z∈S, we have z∈S(G,H,p,q).

Corollary 5 If Q(x)=∅ for some constant vector x∈S then S_min ⊆ S_min(G,H,p,q).

The statement follows from Theorem 9.


7. Some solvability concepts of a linear system containing both equations and inequalities

Systems of max-separable linear equations and inequalities arise frequently in several branches of applied mathematics: for instance, in the description of discrete-event dynamic systems [4], [1] and in machine scheduling [10]. However, choosing unsuitable values for the matrix entries and right-hand side vectors may lead to unsolvable systems. Therefore, methods for restoring solvability suggested in the literature could be employed. These methods include modifying the input data [11], [26] or dropping some equations [11]. Another possibility is to replace each entry by an interval of possible values. In doing so, our question shifts to asking about weak solvability, strong solvability and control solvability.

Interval mathematics was championed by Moore [27] as a tool for bounding errors in computer programs. The area has since developed into a general methodology for investigating numerical uncertainty in several problems. Systems of interval equations and inequalities in max-algebra have each been studied in the literature. In [26], weak and strong solvability of interval equations were discussed; control solvability, weak control solvability and universal solvability were dealt with in [28]. In [29] a system of linear inequalities with interval coefficients was discussed. In this section we consider a system consisting of interval linear equations and inequalities and present solvability concepts for such a system.

An algebraic structure (B,⊕,⊗) with two binary operations ⊕ and ⊗ is called a max-plus algebra if

B = ℝ∪{-∞},  a⊕b = max{a,b},  a⊗b = a+b

for any a,b∈B.

Let m, n, r be given positive integers and a∈ℝ. We use throughout this section the notation M={1,2,...,m}, N={1,2,...,n}, R={1,2,...,r} and a⁻¹=-a. The sets of all m×n and r×n matrices over B are denoted by B(m,n) and B(r,n), respectively. The set of all n-dimensional vectors is denoted by B(n). For each matrix A∈B(m,n) and vector x∈B(n) the product A⊗x is defined by

(A⊗x)_i = max_{j∈N}(a_ij + x_j)

For a given matrix interval 𝐀=[A̲,Ā] with A̲,Ā∈B(k,n), A̲≤Ā, and a given vector interval 𝐛=[b̲,b̄] with b̲,b̄∈B(k), b̲≤b̄, the notation

𝐀⊗x = 𝐛

represents an interval system of linear max-separable equations of the form

A⊗x = b

Similarly, for a given matrix interval 𝐂=[C̲,C̄] with C̲,C̄∈B(r,n), C̲≤C̄, and a given vector interval 𝐝=[d̲,d̄] with d̲,d̄∈B(r), d̲≤d̄, the notation

𝐂⊗x ≤ 𝐝

represents an interval system of linear max-separable inequalities of the form

C⊗x ≤ d

Interval systems of linear max-separable equations and inequalities have each been studied in the literature. The notation

𝐀 ⊗ x = 𝐛, 𝐂 ⊗ x ≤ 𝐝

represents an interval system of linear max-separable equations and inequalities of the form

A ⊗ x = b, C ⊗ x ≤ d

where A ∈ 𝐀, C ∈ 𝐂, b ∈ 𝐛 and d ∈ 𝐝.

The aim of this section is to consider a system consisting of max-separable linear equations and inequalities and to present some solvability conditions for such a system. Note that it is possible to convert equations to inequalities and conversely, but this would result in an increase in the number of equations and inequalities or an increase in the number of unknowns, thus increasing the computational complexity of testing the solvability conditions. Each system of the form A ⊗ x = b, C ⊗ x ≤ d with A ∈ 𝐀, C ∈ 𝐂, b ∈ 𝐛, d ∈ 𝐝 is said to be a subsystem of the interval system 𝐀 ⊗ x = 𝐛, 𝐂 ⊗ x ≤ 𝐝. An interval system has constant matrices if A̲ = Ā and C̲ = C̄; similarly, it has constant right-hand sides if b̲ = b̄ and d̲ = d̄. In what follows we consider A ∈ B(m,n) and C ∈ B(r,n).
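The inequality-to-equation direction of the conversion mentioned above rests on the standard identity C ⊗ x ≤ d if and only if (C ⊗ x) ⊕ d = d. A minimal Python sketch of this equivalence (all concrete values are illustrative, not from the chapter):

```python
def maxplus_mv(A, x):
    """Max-plus matrix-vector product: (A (x) x)_i = max_j (a_ij + x_j)."""
    return [max(a + xj for a, xj in zip(row, x)) for row in A]

def vec_oplus(u, v):
    """Componentwise oplus (max) of two vectors."""
    return [max(ui, vi) for ui, vi in zip(u, v)]

C = [[0, 2], [1, 1]]
d = [5, 4]
x = [2, 3]

lhs = maxplus_mv(C, x)                              # C (x) x = [5, 4]
ineq_holds = all(l <= di for l, di in zip(lhs, d))  # C (x) x <= d ?
eq_holds = vec_oplus(lhs, d) == d                   # (C (x) x) (+) d = d ?
print(ineq_holds, eq_holds)  # True True -- the two formulations agree
```

The equation form has the same unknowns but, in general, rewriting in either direction enlarges the system, which is why the section treats the mixed system directly.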

7.1. Weak solvability

Definition 8 A vector y is a weak solution of the interval system 𝐀 ⊗ x = 𝐛, 𝐂 ⊗ x ≤ 𝐝 if there exist A ∈ 𝐀, C ∈ 𝐂, b ∈ 𝐛 and d ∈ 𝐝 such that

A ⊗ y = b, C ⊗ y ≤ d.

Theorem 10 A vector x ∈ ℝⁿ is a weak solution of the interval system 𝐀 ⊗ x = 𝐛, 𝐂 ⊗ x ≤ 𝐝 if and only if

x ≤ x̄(A̲, b̄, C̲, d̄)

and

Ā ⊗ x ≥ b̲,

where x̄(A̲, b̄, C̲, d̄) denotes the greatest solution of the system A̲ ⊗ x ≤ b̄, C̲ ⊗ x ≤ d̄.

Proof. Let i ∈ M be an arbitrarily chosen index and let x = (x₁, x₂, ..., xₙ)ᵀ ∈ ℝⁿ be fixed. If A ∈ 𝐀 then (A ⊗ x)ᵢ is isotone in A and we have

(A ⊗ x)ᵢ ∈ [(A̲ ⊗ x)ᵢ, (Ā ⊗ x)ᵢ].

Hence (A ⊗ x)ᵢ = bᵢ holds for some A ∈ 𝐀 and b ∈ 𝐛 if and only if

[(A̲ ⊗ x)ᵢ, (Ā ⊗ x)ᵢ] ∩ [b̲ᵢ, b̄ᵢ] ≠ ∅, i = 1, 2, ..., m,

that is, if and only if A̲ ⊗ x ≤ b̄ and Ā ⊗ x ≥ b̲. Similarly, if C̲ ⊗ x ≤ d̄ then x obviously satisfies C ⊗ x ≤ d for C = C̲ and d = d̄. The conditions A̲ ⊗ x ≤ b̄ and C̲ ⊗ x ≤ d̄ together are equivalent to

x ≤ x̄(A̲, b̄, C̲, d̄)

and the statement follows.
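The weak-solvability test suggested by Theorem 10 is to compute the greatest candidate x̄(A̲, b̄, C̲, d̄) and check Ā ⊗ x̄ ≥ b̲. A Python sketch of this test, assuming finite real entries (the interval data below are illustrative only):

```python
def maxplus_mv(A, x):
    """Max-plus matrix-vector product: (A (x) x)_i = max_j (a_ij + x_j)."""
    return [max(a + xj for a, xj in zip(row, x)) for row in A]

def principal_solution(A, b, C, d):
    """x_bar(A,b,C,d): greatest x with A (x) x <= b and C (x) x <= d
    (finite entries assumed)."""
    M, rhs = A + C, b + d            # stack the equation and inequality parts
    n = len(M[0])
    return [min(rhs[i] - M[i][j] for i in range(len(M))) for j in range(n)]

# A hypothetical 1x2 interval system (all names and values are illustrative):
A_lo, A_hi = [[0, 1]], [[1, 2]]
b_lo, b_hi = [3], [5]
C_lo, C_hi = [[0, 0]], [[1, 1]]
d_lo, d_hi = [4], [6]

xbar = principal_solution(A_lo, b_hi, C_lo, d_hi)   # greatest weak-solution candidate
weakly_solvable = all(v >= w for v, w in
                      zip(maxplus_mv(A_hi, xbar), b_lo))  # A_hi (x) xbar >= b_lo ?
print(xbar, weakly_solvable)  # [5, 4] True
```

For constant matrices (A̲ = Ā, C̲ = C̄) the same check is exactly the criterion of Theorem 11 below.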

Definition 9 An interval system 𝐀 ⊗ x = 𝐛, 𝐂 ⊗ x ≤ 𝐝 is weakly solvable if there exist A ∈ 𝐀, C ∈ 𝐂, b ∈ 𝐛 and d ∈ 𝐝 such that the subsystem A ⊗ x = b, C ⊗ x ≤ d is solvable.

Theorem 11 An interval system with constant matrices A = A̲ = Ā and C = C̲ = C̄ is weakly solvable if and only if

A ⊗ x̄(A, b̄, C, d̄) ≥ b̲.

Proof. The (if) part follows from Theorem 10: x̄(A, b̄, C, d̄) is then a weak solution. Conversely, let

A ⊗ x̄(A, b, C, d) = b

be a solvable subsystem for some b ∈ [b̲, b̄] and d ∈ [d̲, d̄]. Since x̄ is isotone in its right-hand-side arguments, we have

A ⊗ x̄(A, b̄, C, d̄) ≥ A ⊗ x̄(A, b, C, d) = b ≥ b̲.

7.2. Strong solvability

Definition 10 A vector x is a strong solution of the interval system 𝐀 ⊗ x = 𝐛, 𝐂 ⊗ x ≤ 𝐝 if for each A ∈ 𝐀, C ∈ 𝐂, b ∈ 𝐛 and d ∈ 𝐝 the system A ⊗ x = b, C ⊗ x ≤ d holds.

Theorem 12 A vector x is a strong solution of the interval system 𝐀 ⊗ x = 𝐛, 𝐂 ⊗ x ≤ 𝐝 if and only if it is a solution of

E ⊗ x = f, C̄ ⊗ x ≤ d̲,

where E is the matrix obtained by stacking Ā above A̲ and f is the vector obtained by stacking b̲ above b̄, so that E ⊗ x = f reads

Ā ⊗ x = b̲, A̲ ⊗ x = b̄.

Proof. If x is a strong solution of the interval system, it obviously satisfies E ⊗ x = f and C̄ ⊗ x ≤ d̲. Conversely, suppose that x satisfies E ⊗ x = f and C̄ ⊗ x ≤ d̲, and suppose for contradiction that there exist Ã ∈ 𝐀, C̃ ∈ 𝐂, b̃ ∈ 𝐛, d̃ ∈ 𝐝 such that Ã ⊗ x ≠ b̃ or C̃ ⊗ x ≰ d̃. Then there is an index i such that (Ã ⊗ x)ᵢ < b̃ᵢ, or (Ã ⊗ x)ᵢ > b̃ᵢ, or (C̃ ⊗ x)ᵢ > d̃ᵢ. In the first case (A̲ ⊗ x)ᵢ ≤ (Ã ⊗ x)ᵢ < b̃ᵢ ≤ b̄ᵢ, contradicting A̲ ⊗ x = b̄. In the second case (Ā ⊗ x)ᵢ ≥ (Ã ⊗ x)ᵢ > b̃ᵢ ≥ b̲ᵢ, contradicting Ā ⊗ x = b̲. In the third case (C̄ ⊗ x)ᵢ ≥ (C̃ ⊗ x)ᵢ > d̃ᵢ ≥ d̲ᵢ, contradicting C̄ ⊗ x ≤ d̲. The statement follows.
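Theorem 12 yields a finite test for strong solutions. A Python sketch under the assumption of finite entries (the function name and sample data are illustrative):

```python
def maxplus_mv(A, x):
    """Max-plus matrix-vector product: (A (x) x)_i = max_j (a_ij + x_j)."""
    return [max(a + xj for a, xj in zip(row, x)) for row in A]

def is_strong_solution(x, A_lo, A_hi, b_lo, b_hi, C_lo, C_hi, d_lo, d_hi):
    """Theorem 12 test: E (x) x = f, i.e. A_hi (x) x = b_lo and A_lo (x) x = b_hi,
    together with C_hi (x) x <= d_lo."""
    eqs = (maxplus_mv(A_hi, x) == b_lo and maxplus_mv(A_lo, x) == b_hi)
    ineq = all(v <= w for v, w in zip(maxplus_mv(C_hi, x), d_lo))
    return eqs and ineq

# With b_lo < b_hi the equation part can never hold for all b in [b_lo, b_hi],
# so a strong solution forces b_lo = b_hi; hence a constant right-hand side here:
x = [4, 3]
print(is_strong_solution(x, [[0, 1]], [[0, 1]], [4], [4],
                         [[0, 0]], [[1, 1]], [6], [7]))  # True
```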


Acknowledgement

The author is grateful to the Kano University of Science and Technology, Wudil for paying the publication fee.

References

  1. R. A. Cuninghame-Green, Minimax Algebra, Lecture Notes in Economics and Mathematical Systems, vol. 166, Springer, Berlin (1979).
  2. N. N. Vorobyov, Extremal algebra of positive matrices, Elektronische Informationsverarbeitung und Kybernetik 3 (1967) 39-71 (in Russian).
  3. K. Zimmermann, Extremální algebra, Výzkumná publikace Ekonomicko-matematické laboratoře při Ekonomickém ústavě ČSAV, 46, Praha (1976) (in Czech).
  4. F. Baccelli, G. Cohen, G. J. Olsder, J. P. Quadrat, Synchronization and Linearity: An Algebra for Discrete Event Systems, Wiley, Chichester (1992).
  5. M. Akian, S. Gaubert, V. Kolokoltsov, Set covering and invertibility of functional Galois connections, in: G. L. Litvinov, V. P. Maslov (Eds.), Idempotent Mathematics and Mathematical Physics, American Mathematical Society (2005) 19-51.
  6. T. S. Blyth, M. F. Janowitz, Residuation Theory, Pergamon Press (1972).
  7. S. Gaubert, Methods and Applications of (max,+) Linear Algebra, INRIA (1997).
  8. P. Butkovič, Finding a bounded mixed-integer solution to a system of dual inequalities, Operations Research Letters 36 (2008) 623-627.
  9. M. Akian, R. Bapat, S. Gaubert, Max-plus algebra, in: L. Hogben (Ed.), Handbook of Linear Algebra: Discrete Mathematics and its Applications, Chapman & Hall/CRC, Baton Rouge, LA (2007).
  10. P. Butkovič, Max-linear Systems: Theory and Algorithms, Springer Monographs in Mathematics, Springer-Verlag (2010).
  11. K. Cechlárová, P. Diko, Resolving infeasibility in extremal algebras, Linear Algebra Appl. 290 (1999) 267-273.
  12. P. Butkovič, Max-algebra: the linear algebra of combinatorics?, Linear Algebra Appl. 367 (2003) 313-335.
  13. R. A. Cuninghame-Green, Minimax algebra and applications, in: Advances in Imaging and Electron Physics, vol. 90, Academic Press, New York (1995) 1-121.
  14. R. A. Cuninghame-Green, K. Zimmermann, Equation with residual functions, Comment. Math. Univ. Carolinae 42 (2001) 729-740.
  15. P. Del Moral, G. Salut, Random particle methods in (max,+) optimisation problems, in: J. Gunawardena (Ed.), Idempotency, Cambridge University Press (1998) 383-391.
  16. A. Aminu, Simultaneous solution of linear equations and inequalities in max-algebra, Kybernetika 47 (2) (2011) 241-250.
  17. P. Butkovič, Necessary solvability conditions of linear extremal equations, Discrete Applied Mathematics 10 (1985) 19-26, North-Holland.
  18. K. Cechlárová, Solution of interval systems in max-algebra, in: V. Rupnik, L. Zadnik-Stirn, S. Drobne (Eds.), Proc. SOR 2001, Preddvor, Slovenia, 321-326.
  19. A. Aminu, Max-algebraic linear systems and programs, PhD Thesis, University of Birmingham, UK (2009).
  20. P. Butkovič, G. Hegedüs, An elimination method for finding all solutions of the system of linear equations over an extremal algebra, Ekonom.-mat. Obzor 20 (1984) 203-215.
  21. R. A. Cuninghame-Green, P. Butkovič, The equation A⊗x = B⊗y over (max,+), Theoret. Comput. Sci. 293 (2003) 3-12.
  22. B. Heidergott, G. J. Olsder, J. van der Woude, Max Plus at Work: Modeling and Analysis of Synchronized Systems: A Course on Max-Plus Algebra and Its Applications, Princeton University Press, New Jersey (2006).
  23. E. A. Walkup, G. Borriello, A general linear max-plus solution technique, in: J. Gunawardena (Ed.), Idempotency, Cambridge University Press (1998) 406-415.
  24. P. Butkovič, A. Aminu, Introduction to max-linear programming, IMA Journal of Management Mathematics 20 (3) (2009) 233-249.
  25. A. Aminu, P. Butkovič, Non-linear programs with max-linear constraints: a heuristic approach, IMA Journal of Management Mathematics 23 (1) (2012) 41-66.
  26. R. A. Cuninghame-Green, K. Cechlárová, Residuation in fuzzy algebra and some applications, Fuzzy Sets and Systems 71 (1995) 227-239.
  27. R. E. Moore, Methods and Applications of Interval Analysis, SIAM (1979).
  28. H. Myšková, Control solvability of interval systems of max-separable linear equations, Linear Algebra and its Applications 416 (2006) 215-223.
  29. M. Fiedler, J. Nedoma, J. Ramík, J. Rohn, K. Zimmermann, Linear Optimization Problems with Inexact Data, Springer, Berlin (2006).
