1. Introduction
The aim of this chapter is to present systems of linear equations and inequalities in max-algebra. Max-algebra is an analogue of linear algebra developed on the pair of operations (⊕, ⊗), extended to matrices and vectors, where a ⊕ b = max(a, b) and a ⊗ b = a + b for a, b ∈ ℝ. Systems of equations and systems of inequalities have each been studied in the literature. We will present necessary and sufficient conditions for the solvability of a system consisting of these two systems and also develop a polynomial algorithm for solving max-linear programs whose constraints are max-linear equations and inequalities. Moreover, some solvability concepts of an interval system of linear equations and inequalities will also be presented.
Max-algebraic linear systems were investigated in the first publications dealing with the introduction of algebraic structures called (max,+) algebras. Systems of equations with variables only on one side were considered in these publications [1], [2] and [3]. Other systems with a special structure were investigated in the context of solving eigenvalue problems in these algebraic structures or the synchronisation of discrete event systems; see [4] and also [1] for additional information. Given a matrix A and a vector b of appropriate sizes, and using the notation a ⊕ b = max(a, b) and a ⊗ b = a + b, the studied systems had one of the forms A ⊗ x = b, A ⊗ x ≤ b or A ⊗ x ≥ b. An infinite-dimensional generalisation can be found in [5].
In [1] Cuninghame-Green showed that the system A ⊗ x = b can be solved using residuation [6]. That is, the equality in A ⊗ x = b is relaxed to A ⊗ x ≤ b, so that the set of its sub-solutions is studied. It was shown that the greatest solution of A ⊗ x ≤ b is given by x̄(A, b), where x̄_j(A, b) = min_i (b_i - a_ij) for all j.
The equation A ⊗ x = b is also solved using the above result as follows: A ⊗ x = b has a solution if and only if A ⊗ x̄(A, b) = b. Also, Gaubert [7] proposed a method for solving the one-sided system A ⊗ x = b using rational calculus.
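As an illustration of this residuation step, here is a minimal Python sketch (the helper names max_prod and principal_solution are ours, and finite real entries are assumed) which computes x̄(A, b) and uses it to test the solvability of A ⊗ x = b.

    def max_prod(A, x):
        # (A ⊗ x)_i = max_j (a_ij + x_j)
        return [max(a + t for a, t in zip(row, x)) for row in A]

    def principal_solution(A, b):
        # x̄_j(A, b) = min_i (b_i - a_ij): the greatest solution of A ⊗ x ≤ b
        m, n = len(A), len(A[0])
        return [min(b[i] - A[i][j] for i in range(m)) for j in range(n)]

    A = [[2, 1], [0, 3]]
    b = [5, 6]
    xbar = principal_solution(A, b)         # [3, 3]
    print(xbar, max_prod(A, xbar) == b)     # A ⊗ x̄ = b here, so the system is solvable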
Zimmermann [3] developed a method for solving A ⊗ x = b by set covering and also presented an algorithm for solving max-linear programs with one-sided constraints. This method was shown to have computational complexity O(mn), where m and n are the numbers of rows and columns of the input matrix, respectively. Akian, Gaubert and Kolokoltsov [5] extended Zimmermann's solution method by set covering to the case of functional Galois connections.
Butkovič [8] developed a max-algebraic method for finding all solutions to a system of inequalities using generators. Using this method, Butkovič [8] developed a pseudopolynomial algorithm which either finds a bounded mixed-integer solution or decides that no such solution exists. A summary of these results can be found in [9] and [10].
Cechlárová and Diko [11] proposed a method for resolving the infeasibility of the system A ⊗ x = b. The techniques presented are to modify the right-hand side as little as possible or to omit some of the equations. It was shown that the problem of finding the minimum number of equations to omit is NP-complete.
2. Max-algebra and some basic definitions
In this section we introduce max-algebra, give the essential definitions and show how the operations of max-algebra can be extended to matrices and vectors.
In max-algebra, we replace addition and multiplication, the binary operations in conventional linear algebra, by maximum and addition respectively. Any problem that involves adding numbers together and taking the maximum of numbers may be describable in max-algebra. A problem that is nonlinear when described in conventional terms may thus be converted to a max-algebraic problem that is linear with respect to (⊕, ⊗).
The completed max-plus semiring is the set ℝ ∪ {-∞, +∞}, equipped with the addition a ⊕ b = max(a, b) and the multiplication a ⊗ b = a + b, with the convention that (-∞) ⊗ (+∞) = -∞. The completed min-plus semiring is defined in the dual way.
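A minimal Python sketch of these scalar operations (our own illustration, representing the two infinities by float('-inf') and float('inf')):

    NEG_INF, POS_INF = float('-inf'), float('inf')

    def oplus(a, b):
        # a ⊕ b = max(a, b)
        return max(a, b)

    def otimes(a, b):
        # a ⊗ b = a + b, with the stated convention (-∞) ⊗ (+∞) = -∞
        if NEG_INF in (a, b):
            return NEG_INF
        return a + b

    print(oplus(3, 7), otimes(3, 7))     # 7 10
    print(otimes(NEG_INF, POS_INF))      # -inf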
The statements follow from the definitions.
The statements follow from the definitions. The pair of operations (⊕, ⊗) is extended to matrices and vectors as in conventional linear algebra as follows: for A = (a_ij), B = (b_ij) and C = (c_ij) of compatible sizes and α ∈ ℝ we have A ⊕ B = (a_ij ⊕ b_ij), A ⊗ C = (max_k (a_ik ⊗ c_kj)) and α ⊗ A = (α ⊗ a_ij).
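For illustration, a small Python sketch of this matrix extension (our own helper names; finite entries assumed):

    def mat_oplus(A, B):
        # (A ⊕ B)_ij = max(a_ij, b_ij)
        return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

    def mat_otimes(A, B):
        # (A ⊗ B)_ij = max_k (a_ik + b_kj)
        return [[max(A[i][k] + B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    A = [[1, 3], [0, 2]]
    B = [[2, 0], [1, 4]]
    print(mat_oplus(A, B))    # [[2, 3], [1, 4]]
    print(mat_otimes(A, B))   # [[4, 7], [3, 6]]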
For of compatible sizes, the following properties hold:
The statements follow from the definitions.
The following hold for , of compatible sizes and :
The statements follow from the definition of the pair of operations (⊕, ⊗).
Given a vector , the
It is obvious that for any matrices and of compatible sizes.
It has been shown in [1] that a matrix is
If we will denote , that is , in conventional notation.
Consider the following matrices
The matrix is an inverse of because,
Given a matrix , the
The system is embeddable in the self-dual system:
Basic algebraic properties for the dual pair (⊕', ⊗'), where a ⊕' b = min(a, b) and a ⊗' b = a + b, are similar to those of ⊕ and ⊗ described earlier; they are obtained by swapping max and min. The extension of the pair (⊕', ⊗') to matrices and vectors is as follows:
Given of compatible sizes and , we define the following:
Also, properties of matrices for the pair are similar to those of , just swap and . For any matrix over , the
Follows from the definitions.
3. The Multiprocessor Interactive System (MPIS): A practical application
Linear equations and inequalities in max-algebra have a considerable number of applications; the model we present here is called the multiprocessor interactive system (MPIS).
Products P_1, ..., P_m are prepared using n processors, every processor contributing to the completion of each product by producing a partial product. It is assumed that every processor can work on all products simultaneously and that all these actions on a processor start as soon as the processor is ready to work. Let a_ij be the duration of the work of the j-th processor needed to complete the partial product for P_i (i = 1, ..., m; j = 1, ..., n). Let us denote by x_j the starting time of the j-th processor. Then all partial products for P_i will be ready at time max(a_i1 + x_1, ..., a_in + x_n). If the completion times b_1, ..., b_m are given for each product, then the starting times have to satisfy the following system of equations:
max(a_i1 + x_1, ..., a_in + x_n) = b_i for all i = 1, ..., m.
Using the notation a ⊕ b = max(a, b) and a ⊗ b = a + b, extended to matrices and vectors in the same way as in linear algebra, this system can be written as
A ⊗ x = b.
Any system of this form is called a 'one-sided max-linear system'. Also, if the requirement is that each product is to be produced on or before its completion time b_i, then the starting times have to satisfy
max(a_i1 + x_1, ..., a_in + x_n) ≤ b_i for all i = 1, ..., m,
which can also be written as
A ⊗ x ≤ b.
The system of inequalities A ⊗ x ≤ b is called a 'one-sided max-linear system of inequalities'.
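The following small Python sketch illustrates the MPIS model with invented durations: given starting times x, the completion time of product i is (A ⊗ x)_i.

    # a_ij = duration of processor j's work on product i (invented numbers)
    A = [[3, 5, 2],
         [1, 4, 6]]
    x = [0, 2, 1]    # starting times of the three processors

    completion = [max(a + t for a, t in zip(row, x)) for row in A]
    print(completion)    # [7, 7]

So with these durations the starting times x = (0, 2, 1) solve A ⊗ x = b for the completion times b = (7, 7), and they satisfy A ⊗ x ≤ b for any later deadlines.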
4. Linear equations and inequalities in max-algebra
In this section we present systems of linear equations and inequalities in max-algebra. Solvability conditions for the system of equations and for the system of inequalities will each be presented. A system consisting of max-linear equations and inequalities will also be discussed, and necessary and sufficient conditions for the solvability of this combined system will be presented.
4.1. System of equations
In this section we present a solution method for the system A ⊗ x = b, as given in [3], [1], [13] and also in the monograph [10]. Results concerning the existence and uniqueness of solutions to this system will also be presented.
Given A = (a_ij) ∈ ℝ^{m×n} and b = (b_1, ..., b_m)^T ∈ ℝ^m, a system of the form
A ⊗ x = b
is called a one-sided max-linear system.
The system in (▭) can be written, after subtracting the right-hand-side constants, as
max(a_i1 - b_i + x_1, ..., a_in - b_i + x_n) = 0 for all i = 1, ..., m.
A one-sided max-linear system in which all right-hand-side constants are zero is called normalised, and the above process is called normalisation. The normalised system is B ⊗ x = 0, where
B = (a_ij - b_i).
For instance, consider the following one-sided system:
After normalisation, this system is equivalent to
That is after multiplying the system (▭) by
Consider the first equation of the normalised system above. It tells us that if x is a solution to this system then each x_j is bounded above by minus the corresponding entry of the first row, and at least one of these inequalities must be satisfied with equality. The other equations of the system give analogous bounds; hence x_1 ≤ -(the column 1 maximum of the normalised matrix). It is clear that, for every j, x_j ≤ -(the column j maximum). At the same time, equality must be attained in some of these inequalities so that in every row there is at least one column maximum which is attained by x. This observation was made in [3].
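This observation can be made concrete with a short Python sketch (our own helper names; finite entries assumed): after normalisation, each x_j is bounded above by minus the column j maximum, and the rows in which that maximum is attained are the rows that column j can 'cover'.

    def cover_sets(A, b):
        # Normalise: the system A ⊗ x = b becomes B ⊗ x = 0 with b_ij = a_ij - b_i.
        B = [[a - bi for a in row] for row, bi in zip(A, b)]
        m, n = len(B), len(B[0])
        col_max = [max(B[i][j] for i in range(m)) for j in range(n)]
        # x̄_j = -col_max[j]; M_j = rows in which column j attains its column maximum
        return [{i for i in range(m) if B[i][j] == col_max[j]} for j in range(n)]

    A = [[2, 1], [0, 3]]
    b = [5, 6]
    print(cover_sets(A, b))    # [{0}, {1}]: together they cover both rows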
We introduce the following notations
We now consider the cases when and/or . Suppose that . Then can simply be written as
Therefore if we have . Now, if and then . Thus, we may assume in this section that and . If for some then for any we have if , , as a result the equation could be removed from the system together with every column in the matrix where (if any), and set the corresponding . Consequently, we may assume without loss of generality that .
Moreover, if b ∈ ℝ^m and A has an ε row (a row all of whose entries are ε = -∞) then the system has no solution. If there is an ε column in A then the corresponding variable may take on any value in a solution. Thus, in what follows we assume without loss of generality that A is doubly ℝ-astic (every row and every column contains a finite entry) and b ∈ ℝ^m.
Suppose . Thus we have,
Hence, .
Now that . Since we only need to show that . Let . Since for some and for every we have . Hence and .
Suppose that and . Let , . Then if . If then
Therefore . At the same time for some satisfying . For this both inequalities in (▭) are equalities and thus . The following is a summary of prerequisites proved in [1] and [12]:
Follows from Theorem ▭.
The vector x̄(A, b) has played an important role in the solution of A ⊗ x = b; this vector is called the principal solution of the system.
Note that the principal solution may not be a solution to the system A ⊗ x = b. More precisely, the following are observed in [12]:
The statements follow from Theorems ▭ and ▭.
For the existence of a unique solution to the max-linear system A ⊗ x = b we have the following corollary:
Follows from Theorem ▭. The question of solvability and unique solvability of the system A ⊗ x = b was linked to the set covering and minimal set covering problems of combinatorics in [12].
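Under this set-covering reading of [12], a simple test can be sketched as follows (our own code and paraphrase: the system is taken to be solvable when the sets M_j defined above cover all rows, and uniquely solvable when that cover is minimal).

    def solvable_and_unique(A, b):
        B = [[a - bi for a in row] for row, bi in zip(A, b)]
        m, n = len(B), len(B[0])
        col_max = [max(B[i][j] for i in range(m)) for j in range(n)]
        M = [{i for i in range(m) if B[i][j] == col_max[j]} for j in range(n)]
        rows = set(range(m))
        solvable = set().union(*M) == rows
        # Unique solution iff dropping any single column leaves some row uncovered.
        unique = solvable and all(
            set().union(*(M[k] for k in range(n) if k != j)) != rows for j in range(n))
        return solvable, unique

    print(solvable_and_unique([[2, 1], [0, 3]], [5, 6]))    # (True, True)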
4.2. System of inequalities
In this section we show how a solution to the one-sided system of inequalities A ⊗ x ≤ b can be obtained.
Let A = (a_ij) ∈ ℝ^{m×n} and b = (b_1, ..., b_m)^T ∈ ℝ^m. A system of the form
A ⊗ x ≤ b
is called a one-sided max-linear system of inequalities.
Suppose . Then we have
and the proof is now complete. The system of inequalities
was discussed in [18] where the following result was presented.
4.3. A system containing both equations and inequalities
In this section a system containing both equations and inequalities will be presented; the results are taken from [16].
Let A = (a_ij) ∈ ℝ^{m×n}, b = (b_1, ..., b_m)^T ∈ ℝ^m, C = (c_ij) ∈ ℝ^{r×n} and d = (d_1, ..., d_r)^T ∈ ℝ^r.
A one-sided max-linear system with both equations and inequalities is a system of the form
A ⊗ x = b, C ⊗ x ≤ d.
We shall use the following notation throughout this paper
We also define the vector x̂ = x̂(A, b, C, d), where
x̂_j = min(x̄_j(A, b), x̄_j(C, d)) for all j,
and x̄(A, b) and x̄(C, d) are the principal solutions of A ⊗ x ≤ b and C ⊗ x ≤ d respectively.
. Let , therefore and . Since , it follows from Theorem ▭ that . Now that and also , we need to show that for all (that is ). Let then . Since we have and therefore thus . Hence, and by Theorem ▭ . This also proves
. Suppose . Since we have . Also and gives . Hence , therefore and . Hence (that is ) and this also proves .
() Let , then and . Since we have . Also, implies that . It follows from Theorem ▭ that .
() Suppose that and . It follows from Theorem ▭ that , also by Theorem ▭ . Thus .
We introduce the symbol |X|, which stands for the number of elements of the set X.
Suppose , that is for an . Since we have and thus . For contradiction, suppose . We need to check the following two cases: (i) and (ii) where , and show in each case that .
Suppose . It follows from Theorem ▭ that
The statement follows from Theorem ▭ and Lemma ▭.
It is immediate that and the statement now follows from Theorem ▭.
Suppose . By Corollary ▭ we have , for some , (that is such that ). Now and , Theorem ▭ implies that any vector of the form
is in , and the statement follows.
The vector x̂(A, b, C, d) plays an important role in the solution of the one-sided system containing both equations and inequalities. This role is the same as that of the principal solution x̄(A, b) for the one-sided max-linear system A ⊗ x = b; see [19] for more details.
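A short Python sketch of how x̂ can be used computationally (our own code; finite entries are assumed and, following our reading of [16], the combined system A ⊗ x = b, C ⊗ x ≤ d is treated as solvable exactly when x̂ itself satisfies it):

    def principal(M, v):
        # x̄_j(M, v) = min_i (v_i - m_ij)
        return [min(v[i] - M[i][j] for i in range(len(M))) for j in range(len(M[0]))]

    def max_prod(M, x):
        return [max(m + t for m, t in zip(row, x)) for row in M]

    def x_hat(A, b, C, d):
        # componentwise minimum of the two principal solutions
        return [min(u, w) for u, w in zip(principal(A, b), principal(C, d))]

    A, b = [[2, 1], [0, 3]], [5, 6]     # equations  A ⊗ x = b
    C, d = [[1, 0]], [4]                # inequality C ⊗ x ≤ d
    xh = x_hat(A, b, C, d)
    ok = max_prod(A, xh) == b and all(l <= r for l, r in zip(max_prod(C, xh), d))
    print(xh, ok)    # [3, 3] True: the combined system is solvable and x̂ solves it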
5. Max-linear program with equation and inequality constraints
Suppose that the vector f = (f_1, ..., f_n)^T ∈ ℝ^n is given. The task of minimizing [maximizing] the function f(x) = f^T ⊗ x = max(f_1 + x_1, ..., f_n + x_n) subject to (▭) is called a max-linear program with one-sided equations and inequalities; below we treat the minimization and maximization variants and denote their respective sets of optimal solutions accordingly.
(i) f is max-linear, i.e. f(λ ⊗ x ⊕ μ ⊗ y) = λ ⊗ f(x) ⊕ μ ⊗ f(y) for every x, y ∈ ℝ^n and λ, μ ∈ ℝ.
(ii) f is isotone, i.e. f(x) ≤ f(y) for every x, y ∈ ℝ^n with x ≤ y.
(i) Let x, y ∈ ℝ^n and λ, μ ∈ ℝ. Then we have f(λ ⊗ x ⊕ μ ⊗ y) = max_j (f_j + max(λ + x_j, μ + y_j)) = max(λ + max_j (f_j + x_j), μ + max_j (f_j + y_j)) = λ ⊗ f(x) ⊕ μ ⊗ f(y), and the statement now follows.
(ii) Let x, y ∈ ℝ^n be such that x ≤ y. Since f_j + x_j ≤ f_j + y_j for every j, we have f(x) ≤ f(y).
Note that it would be possible to convert equations to inequalities and conversely, but this would result in an increase in the number of constraints or variables and thus increase the computational complexity. The method we present here does not require any new constraints or variables.
We denote by
A variable x_j will be called active on an equation, an inequality or the objective function if the maximum on the corresponding left-hand side (or in f(x)) is attained at the term involving x_j.
Find , , and , ;
If
then stop ()
Set small enough (so that it is not active on any equation or inequality) for every
Go to 3
The correctness follows from Theorem ▭ and the computational complexity is computed as follows. In Step 1 is , while , and can be determined in , and respectively. The loop 3-7 can be repeated at most times, since the number of elements in is at most and in Step 4 at least one element will be removed at a time. Step 3 is , Step 6 is and Step 7 is . Hence loop 3-7 is .
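The following Python sketch records how we read the greedy idea behind the minimisation version of Algorithm ONEMLP-EI: start from the greatest feasible point x̂ and repeatedly make the variables that attain the current objective value inactive, as long as the equations remain covered. The variable names, the choice of the 'small enough' value and the stopping test are our paraphrase of the steps above, not the author's pseudocode, and a solvable instance with finite entries is assumed.

    def onemlp_ei_min(f, A, b, C, d):
        n = len(f)
        xbar = lambda M, v: [min(v[i] - M[i][j] for i in range(len(M))) for j in range(n)]
        xh = [min(u, w) for u, w in zip(xbar(A, b), xbar(C, d))]   # greatest feasible x
        # rows of the equation system on which column j is active at x̂
        cov = [{i for i in range(len(A)) if A[i][j] + xh[j] == b[i]} for j in range(n)]
        keep = set(range(n))                   # columns still held at their x̂ value
        LOW = min(xh) + min(f) - max(f) - 1    # small enough never to attain any maximum
        x = xh[:]
        while True:
            val = max(f[j] + x[j] for j in keep)           # current objective value f(x)
            H = {j for j in keep if f[j] + x[j] == val}    # columns attaining it
            rest = keep - H
            if not rest or set().union(*(cov[j] for j in rest)) != set(range(len(A))):
                return x, val       # lowering H would leave some equation uncovered: stop
            for j in H:
                x[j] = LOW          # make x_j inactive on every equation and inequality
            keep = rest

    f = [0, 0]
    print(onemlp_ei_min(f, [[2, 1], [0, 3]], [5, 6], [[1, 0]], [4]))    # ([3, 3], 3)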
5.1. An example
Consider the following max-linear program, in which
We now make a record run of Algorithm ONEMLP-EI. , , , and , , and . and and . We also have and . Then set (say) and . Now and . Since set (say) and we have . Now and . Since then we stop and an optimal solution is and .
6. A special case of max-linear program with two-sided constraints
Suppose A, B ∈ ℝ^{m×n} and c, d ∈ ℝ^m are given matrices and vectors. The system
A ⊗ x ⊕ c = B ⊗ x ⊕ d
is called a non-homogeneous two-sided max-linear system, and the set of solutions of this system will be denoted by S. Two-sided max-linear systems have been studied in [20], [21], [22] and [23].
Optimization problems whose objective function is max-linear and whose constraints are of the form (▭) are called max-linear programs (MLP). Max-linear programs were studied in [24], where solution methods for both minimization and maximization problems were developed. The methods are proved to be pseudopolynomial if all entries are integer. Also, non-linear programs with max-linear constraints were dealt with in [25], where heuristic methods were developed and tested on a number of instances.
Consider max-linear programs with two-sided constraints (minimization),
where , , and are given matrices and vectors. We introduce the following:
Here diag(·) denotes a diagonal matrix whose diagonal elements are the entries of the given vector and whose off-diagonal elements are ε = -∞. It therefore follows from (▭) that
Hence, by substituting (▭) and (▭) into (▭) we have
where is transpose of the zero vector, and
Therefore we assume without loss of generality that and hence (▭) is equivalent to
The set of feasible solutions for (▭) will be denoted by
and the set of optimal solutions by . A vector is called
where, , . Similar notation is used for vectors . Given with , we define the following sets
We also define the following matrices:
An easily solvable case arises when the feasible set contains a constant vector, that is, a vector all of whose components are equal. Such a constant vector satisfies the following equations and inequalities
where , and are defined in (▭). The one-sided system of equations and inequalities (▭) can be written as
where,
Recall that is the set of solutions for (▭).
Let . Suppose and . This implies that . Therefore we have, , . Consequently, and for all . Since, and .
The statement follows from Theorem ▭.
7. Some solvability concepts of a linear system containing both equations and inequalities
Systems of max-separable linear equations and inequalities arise frequently in several branches of applied mathematics: for instance, in the description of discrete-event dynamic systems [4], [1] and in machine scheduling [10]. However, choosing unsuitable values for the matrix entries and right-hand-side vectors may lead to unsolvable systems. Therefore, methods for restoring solvability suggested in the literature could be employed. These methods include modifying the input data [11], [26] or dropping some equations [11]. Another possibility is to replace each entry by an interval of possible values. In doing so, the question shifts to asking about weak solvability, strong solvability and control solvability.
Interval mathematics was championed by Moore [27] as a tool for bounding errors in computer programs. The area has since been developed into a general methodology for investigating numerical uncertainty in several problems. Systems of interval equations and inequalities in max-algebra have each been studied in the literature. In [26] weak and strong solvability of interval equations were discussed; control solvability, weak control solvability and universal solvability have been dealt with in [28]. In [29] a system of linear inequalities with interval coefficients was discussed. In this section we consider a system consisting of interval linear equations and inequalities and present solvability concepts for such a system.
An algebraic structure with two binary operations ⊕ and ⊗ is called a max-plus algebra if a ⊕ b = max(a, b) and a ⊗ b = a + b for any a, b.
Let m, n and r be given positive integers; we use throughout the notation M = {1, ..., m}, N = {1, ..., n} and R = {1, ..., r}. The sets of all m × n and r × n matrices over ℝ are denoted by ℝ(m, n) and ℝ(r, n) respectively, and the set of all n-dimensional vectors is denoted by ℝ(n). Then for each matrix A = (a_ij) ∈ ℝ(m, n) and vector x = (x_1, ..., x_n)^T ∈ ℝ(n) the product A ⊗ x is defined as
(A ⊗ x)_i = max_{j ∈ N} (a_ij + x_j).
For a given matrix interval with and given vector interval with the notation
represents an interval system of linear max-separable equations of the form
Similarly, for a given matrix interval with and given vector interval with the notation
represents an interval system of linear max-separable inequalities of the form
Interval systems of linear max-separable equations and inequalities have each been studied in the literature; for more information the reader is referred to the works cited above. The following notation
represents an interval system of linear max-separable equations and inequalities of the form
where and .
The aim of this section is to consider a system consisting of max-separable linear equations and inequalities and to present some solvability conditions for such a system. Note that it is possible to convert equations to inequalities and conversely, but this would result in an increase in the number of equations and inequalities or an increase in the number of unknowns, thus increasing the computational complexity when testing the solvability conditions. Each system of the form (▭) is said to be a subsystem of (▭). An interval system (▭) has constant matrices if and . Similarly, an interval system has constant right-hand side if and . In what follows we will consider and .
7.1. Weak solvability
and
Let an index be arbitrarily chosen and fixed. If then is isotone and we have
Hence, is a weak solution if and only if
Similarly, if then is obviously a weak solution to
That is
Also from (▭) is a weak solution if and only if
That is
The 'if' part follows from the definition. Conversely, let
be a solvable subsystem for . Then we have
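To illustrate how such a weak-solvability test might look computationally, here is a rough Python sketch (entirely our own notation: A_lo, A_hi, b_lo, b_hi are the lower and upper bounds of the matrix and vector intervals for the equation part, and C_lo, d_hi those needed for the inequality part). It takes a vector to be a weak solution when A_lo ⊗ x ≤ b_hi, A_hi ⊗ x ≥ b_lo and C_lo ⊗ x ≤ d_hi, which is our reading of the discussion above; by isotonicity it is then enough to test the '≥' condition at the greatest vector satisfying the two '≤' conditions.

    def mprod(M, x):
        return [max(m + t for m, t in zip(row, x)) for row in M]

    def principal(M, v):
        return [min(v[i] - M[i][j] for i in range(len(M))) for j in range(len(M[0]))]

    def weakly_solvable(A_lo, A_hi, b_lo, b_hi, C_lo, d_hi):
        # greatest x with A_lo ⊗ x ≤ b_hi and C_lo ⊗ x ≤ d_hi
        cand = [min(u, w) for u, w in zip(principal(A_lo, b_hi), principal(C_lo, d_hi))]
        return all(l >= r for l, r in zip(mprod(A_hi, cand), b_lo))

    A_lo, A_hi = [[1, 0], [0, 2]], [[2, 1], [1, 3]]
    b_lo, b_hi = [4, 5], [5, 6]
    C_lo, d_hi = [[0, 0]], [4]
    print(weakly_solvable(A_lo, A_hi, b_lo, b_hi, C_lo, d_hi))    # True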
7.2. Strong solvability
where
If is a strong solution of (▭), it obviously satisfies (▭). Conversely, suppose satisfies (▭) and let such that and . Then such that either or and . Therefore, , and and the theorem statement follows.
Acknowledgement
The author is grateful to the Kano University of Science and Technology, Wudil for paying the publication fee.
References
- 1.
R. A. Cuninghame-Green, Minimax Algebra, Lecture Notes in Economics and Mathematical Systems , vol.166, Springer, Berlin (1979). - 2.
N. N. Vorobyov, Extremal algebra of positive matrices, Elektronische Datenverarbeitung und Kybernetik 3 (1967) 39-71 (in Russian). - 3.
K. Zimmermann, Extremální algebra, Výzkumná publikace Ekonomicko-matematické laboratoře při Ekonomickém ústavu ČSAV, 46, Praha (1976) (in Czech). - 4.
F. Baccelli, G. Cohen, G. J. Olsder, J.-P. Quadrat, Synchronization and Linearity: An Algebra for Discrete Event Systems, Wiley, Chichester (1992). - 5.
M. Akian, S. Gaubert, V. Kolokoltsov, Set covering and invertibility of functional Galois connections, in: G. L. Litvinov, V. P. Maslov (Eds.), Idempotent Mathematics and Mathematical Physics, American Mathematical Society (2005) 19-51. - 6.
T. S. Blyth, M. F. Janowitz, Residuation Theory, Pergamon Press (1972). - 7.
S. Gaubert, Methods and Applications of (max,+) Linear Algebra, INRIA (1997). - 8.
P. Butkovič, Finding a bounded mixed-integer solution to a system of dual inequalities, Operations Research Letters 36 (2008) 623-627. - 9.
M. Akian, R. Bapat, S. Gaubert, Max-plus algebra, in: L. Hogben (Ed.), Handbook of Linear Algebra: Discrete Mathematics and its Applications, Chapman & Hall/CRC, Baton Rouge, LA (2007). - 10.
P. Butkovič, Max-linear Systems: Theory and Algorithms, Springer Monographs in Mathematics, Springer-Verlag (2010). - 11.
K. Cechlárová, P. Diko, Resolving infeasibility in extremal algebras, Linear Algebra and its Applications 290 (1999) 267-273. - 12.
P. Butkovič, Max-algebra: the linear algebra of combinatorics?, Linear Algebra and its Applications 367 (2003) 313-335. - 13.
R. A. Cuninghame-Green, Minimax Algebra and applications, in: Advances in Imaging and Electron Physics , vol. 90, Academic Press, New York (1995) 1-121. - 14.
R. A. Cuninghame-Green, K. Zimmermann, Equation with residuated functions, Comment. Math. Univ. Carolinae 42 (2001) 729-740. - 15.
P. Del Moral, G. Salut, Random particle methods in (max,+) optimisation problems, in: J. Gunawardena (Ed.), Idempotency, Cambridge University Press (1998) 383-391. - 16.
A. Aminu, Simultaneous solution of linear equations and inequalities in max-algebra, Kybernetika 47, 2 (2011) 241-250. - 17.
P. Butkovič, Necessary solvability conditions of linear extremal equations, Discrete Applied Mathematics 10 (1985) 19-26. - 18.
K. Cechlárová, Solution of interval systems in max-algebra, in: V. Rupnik, L. Zadnik-Stirn, S. Drobne (Eds.), Proc. SOR 2001, Preddvor, Slovenia, 321-326. - 19.
A. Aminu, Max-algebraic linear systems and programs, PhD Thesis, University of Birmingham, UK , (2009). - 20.
P. Butkovič, G. Hegedüs, An elimination method for finding all solutions of the system of linear equations over an extremal algebra, Ekonom.-mat. Obzor 20 (1984) 203-215. - 21.
R. A. Cuninghame-Green, P. Butkovič, The equation A ⊗ x = B ⊗ y over (max,+), Theoretical Computer Science 293 (2003) 3-12. - 22.
B. Heidergott, G. J. Olsder, J. van der Woude, Max Plus at Work: Modelling and Analysis of Synchronized Systems: A Course on Max-Plus Algebra and Its Applications, Princeton University Press, New Jersey (2006). - 23.
E. A. Walkup, G. Borriello, A general linear max-plus solution technique, in: J. Gunawardena (Ed.), Idempotency, Cambridge University Press (1998) 406-415. - 24.
P. Butkovič, A. Aminu, Introduction to max-linear programming, IMA Journal of Management Mathematics 20, 3 (2009) 233-249. - 25.
A. Aminu, P. Butkovič, Non-linear programs with max-linear constraints: a heuristic approach, IMA Journal of Management Mathematics 23, 1 (2012) 41-66. - 26.
R. A. Cuninghame-Green, K. Cechlárová, Residuation in fuzzy algebra and some applications, Fuzzy Sets and Systems 71 (1995) 227-239. - 27.
R. E. Moore, Methods and Applications of Interval Analysis, SIAM (1979). - 28.
H. Myšková, Control solvability of interval systems of max-separable linear equations, Linear Algebra and its Applications 416 (2006) 215-223. - 29.
M. Fiedler, J. Nedoma, J. Ramík, J. Rohn, K. Zimmermann, Linear Optimization Problems with Inexact Data, Springer, Berlin (2006).