This chapter deals with a class of discrete-time inventory control systems where the demand process $\{D_t\}$ is formed by independent and identically distributed random variables with unknown density. Our objective is to introduce a suitable density estimation method which, combined with optimal control schemes, defines a procedure to construct optimal policies under a discounted optimality criterion.
- discounted optimality
- density estimation
- inventory systems
- optimal policies
- Markov decision processes
- AMS 2010 subject classifications: 93E20
Inventory systems are among the most studied sequential decision problems in the fields of operations research and operations management. Their origin lies in the problem of determining how much inventory of a certain product should be kept on hand to meet the demand of buyers, at a cost as low as possible. Specifically, the question is: How much should be ordered, or produced, to satisfy the demand that will present itself during a certain period? Clearly, the behavior of the inventory over time depends on the ordered quantities and the demand for the product in successive periods. Indeed, let $x_t$ and $a_t$ be the inventory level and the order quantity at the beginning of period $t$, respectively, and let $D_t$ be the random demand during period $t$. Then $\{x_t\}$ is a stochastic process whose evolution in time is given as
Schematically, this process is illustrated in the following figure.
(Standard inventory system)
In this case, the inventory manager (IM) observes the inventory level $x_t$ and then selects the order quantity $a_t$ as a function of $x_t$. The order quantity process causes costs in the operation of the inventory system. For instance, if the quantity ordered is relatively small, then the items are very likely to be sold out, but there will be unmet demand. In this case the holding cost is reduced, but there is a significant cost due to shortage. Conversely, if the size of the order is large, there is a risk of having surpluses with a high holding cost. These facts give rise to a stochastic optimization problem, which can be modeled as a Markov decision process (MDP). That is, the inventory system can be analyzed as a stochastic optimal control problem whose objective is to find the optimal ordering policy that minimizes a total expected cost.
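To make this trade-off concrete, here is a minimal simulation sketch. It assumes the standard lost-sales dynamics $x_{t+1}=\max(x_t+a_t-D_t,0)$ and a base-stock rule $a_t=\max(S-x_t,0)$; the function name, the cost parameters $h$, $k$, $p$, and the exponential demand are illustrative assumptions, not taken from the chapter.

```python
import random

def simulate_inventory(S, T, demand_sampler, h=1.0, k=2.0, p=5.0, x0=0.0, seed=0):
    """Simulate T periods of the lost-sales recursion
    x_{t+1} = max(x_t + a_t - D_t, 0) under the base-stock policy
    a_t = max(S - x_t, 0), accumulating ordering, holding and
    shortage costs (k, h, p per unit). Returns the average cost."""
    rng = random.Random(seed)
    x, total_cost = x0, 0.0
    for _ in range(T):
        a = max(S - x, 0.0)          # order up to level S
        d = demand_sampler(rng)      # i.i.d. demand draw
        y = x + a                    # stock after replenishment
        total_cost += k * a + h * max(y - d, 0.0) + p * max(d - y, 0.0)
        x = max(y - d, 0.0)          # lost-sales transition
    return total_cost / T

# Illustrative run: exponential demand with mean 10.
avg = simulate_inventory(S=12.0, T=10_000,
                         demand_sampler=lambda rng: rng.expovariate(1 / 10))
```

Varying $S$ in such a simulation exhibits exactly the tension described above: small $S$ inflates the shortage term, large $S$ inflates the holding term.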
The analysis of the control problem associated with inventory systems has been done under several scenarios: discrete-time and continuous-time systems with finite or infinite capacity, inventory systems considering bounded and unbounded one-stage cost, as well as partially observable models, among others (see, e.g., [1, 2, 3, 4, 5, 7]). Moreover, such scenarios have their own methods and techniques to solve the corresponding control problem. However, in most cases, it has been assumed that all the components that define the behavior of the inventory system are known to the IM, an assumption that, in certain situations, can be too strong and unrealistic. Hence it is necessary to implement schemes that allow learning or collecting information about the unknown components during the evolution of the system, so as to choose each decision with as much information as possible.
In this chapter we study a class of inventory control systems where the density of the demand is unknown to the IM. In this sense, our objective is to propose a procedure that combines density estimation methods and control schemes to construct optimal policies under a total expected discounted cost criterion. The estimation and control procedure is illustrated in the following figure:
(Estimation and control procedure)
In this case, unlike the standard inventory system, before choosing the order quantity $a_t$, the IM implements a density estimation method to get an estimate of the demand density and, possibly, combines this with the history of the system to select $a_t$. Specifically, the density of the demand is estimated by the projection of an arbitrary estimator on an appropriate set, and its convergence is stated with respect to a norm which depends on the components of the inventory control model.
In general terms, our approach consists in showing that the inventory system can be studied under the weighted-norm approach, widely studied by several authors in the field of Markov decision processes and in adaptive control (see, e.g., [9, 12, 13, 14]). That is, we prove the existence of a weight function $W$ which imposes a growth condition on the cost functions. Then, applying the dynamic programming algorithm, the density estimation method is adapted to such a condition to define an estimation and control procedure for the construction of optimal policies.
The chapter is organized as follows. In Section 2 we describe the inventory model and define the corresponding optimal control problem. In Section 3 we introduce the dynamic programming approach under the true density. Next, in Section 4 we present the density estimation method which will be used to state, in Section 5, an estimation and control procedure for the construction of optimal policies. The proofs of the main results are given in Section 6. Finally, in Section 7, we present some concluding remarks.
2. The inventory model
We consider an inventory system evolving according to the difference equation
where $x_t$ and $a_t$ are the inventory level and the order quantity at the beginning of period $t$, taking values in the corresponding state and action spaces, respectively, and $D_t$ represents the random demand during period $t$. We assume that $\{D_t\}$ is an observable sequence of nonnegative independent and identically distributed (i.i.d.) random variables with a common density which is unknown to the inventory manager. In addition, we assume the demand has finite expectation.
Moreover, there exists a measurable function such that
almost everywhere with respect to the Lebesgue measure. In addition
The one-stage cost function is defined as
where $h$, $k$, and $p$ are, respectively, the holding cost per unit, the ordering cost per unit, and the shortage cost per unit, satisfying
The order quantities applied by the IM are selected according to rules known as ordering control policies, defined as follows. Let $H_t$ be the space of histories of the inventory system up to time $t$. That is, a typical element of $H_t$ is written as
An ordering policy (or simply a policy) is a sequence $\pi=\{\pi_t\}$ of measurable functions defined on the history spaces. We denote by $\Pi$ the set of all policies. A feedback policy or Markov policy is a sequence of functions that depend only on the current inventory level. A feedback policy is stationary if there exists a function $f$ such that $\pi_t=f$ for all $t$.
When using a policy $\pi$, given the initial inventory level $x_0=x$, we define the total expected discounted cost $V(x,\pi)$ as
where $\alpha\in(0,1)$ is the so-called discount factor. The inventory control problem is then to find an optimal feedback policy $\pi^*$ such that $V(x,\pi^*)=V^*(x)$ for all $x$, where
is the optimal discounted cost, which we call the value function.
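The discounted criterion can be approximated by simulation. The sketch below assumes the lost-sales transition and illustrative cost parameters, and truncates the infinite horizon at $T$ (the neglected tail is of order $\alpha^T$); the function and parameter names are ours, not the chapter's.

```python
import random

def discounted_cost(policy, demand_sampler, alpha=0.9, T=300, x0=0.0,
                    h=1.0, k=2.0, p=5.0, seed=0):
    """Monte Carlo estimate (one sample path) of the total expected
    discounted cost  sum_t alpha^t c(x_t, a_t)  under `policy`,
    truncated at horizon T."""
    rng = random.Random(seed)
    x, total, disc = x0, 0.0, 1.0
    for _ in range(T):
        a = policy(x)                 # feedback: a_t depends only on x_t
        d = demand_sampler(rng)
        y = x + a
        total += disc * (k * a + h * max(y - d, 0.0) + p * max(d - y, 0.0))
        disc *= alpha
        x = max(y - d, 0.0)           # lost-sales transition
    return total
```

Averaging this quantity over many independent sample paths approximates $V(x,\pi)$ for the chosen feedback policy.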
We define the mean one-stage cost as
Then, by using properties of conditional expectation, we can rewrite the total expected discounted cost (6) as
The sequence of events in our model is as follows. Since the density is unknown, the one-stage cost (8) is also unknown to the IM. Then, if at stage $t$ the inventory level is $x_t$, the IM implements a suitable density estimation method to get an estimate of the density. Next, he/she combines this with the history of the system to select an order quantity $a_t$. Then a cost is incurred, and the system moves to a new inventory level $x_{t+1}$ according to the transition law
where $\mathbf{1}_B$ denotes the indicator function of the set $B$, and $\mathcal{B}$ is the Borel $\sigma$-algebra on the state space. Once the transition to the inventory level $x_{t+1}$ occurs, the process is repeated. Furthermore, the costs are accumulated according to the discounted cost criterion (9).
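The mean one-stage cost, i.e., the one-stage cost integrated against the demand density, can be approximated numerically. The midpoint rule, the cost parameters, and the exponential density below are illustrative assumptions:

```python
import math

def stage_cost(x, a, d, h=1.0, k=2.0, p=5.0):
    """Realized one-stage cost: ordering + holding + shortage."""
    y = x + a
    return k * a + h * max(y - d, 0.0) + p * max(d - y, 0.0)

def mean_stage_cost(x, a, density, grid_max=200.0, n=20_000, **kw):
    """Approximate the mean one-stage cost
    C(x, a) = integral of stage_cost(x, a, s) * density(s) ds
    by the midpoint rule on [0, grid_max]."""
    ds = grid_max / n
    return sum(stage_cost(x, a, (i + 0.5) * ds, **kw)
               * density((i + 0.5) * ds) * ds for i in range(n))

# Illustrative demand density: Exp with mean 10.
expo = lambda s, lam=0.1: lam * math.exp(-lam * s)
C = mean_stage_cost(x=0.0, a=10.0, density=expo)
```

For this example a closed form is available ($C = 10k + 10e^{-1}(h+p) \approx 42.07$), which makes the quadrature easy to sanity-check.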
3. Dynamic programming equation under the true density
The study of the inventory control problem will be carried out by means of the well-known dynamic programming (DP) approach, which we now introduce in terms of the unknown density. To fix ideas precisely, we first present some preliminary and useful facts.
The set of order quantities in which we can find the optimal ordering policy should be
Thus, we can restrict the range of the order quantities accordingly. Specifically, we have the following result.
Lemma 3.1 Let be the policy defined as , and let be a policy such that for at least a Then
That is, is a better solution than
Proof. Let be the inventory levels generated by the application of and be the sequence of inventory levels and order quantities generated by where , and Without loss of generality, we suppose that for a we have Note that for all Then observing that
Remark 3.2 Observe that for we have
where, by writing
In addition, observe that for any fixed the functions and are convex, which implies that is convex. Moreover
The following lemma provides a growth property of the one-stage cost function (8).
Lemma 3.3 There exist a constant and a function $W$ such that
and for all
In addition, for any density on such that
The proof of Lemma 3.3 is given in Section 6.
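The growth condition of Lemma 3.3 is naturally expressed through a weighted sup-norm, defined next. A grid sketch of such a norm, with an illustrative weight function of our own choosing, is:

```python
def w_norm(u, W, xs):
    """Grid approximation of the weighted sup-norm
    ||u||_W = sup_x |u(x)| / W(x), evaluated over the points xs."""
    return max(abs(u(x)) / W(x) for x in xs)

# Example: u(x) = x^2 has finite W-norm for the weight W(x) = 1 + x^2,
# even though u itself is unbounded.
val = w_norm(lambda x: x * x, lambda x: 1.0 + x * x, range(11))
```

The point of the weight is precisely this: unbounded cost functions become bounded relative to $W$, so the usual contraction arguments go through in the weighted space.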
We denote by $B_W$ the normed linear space of all measurable functions with finite weighted norm ($W$-norm) defined as
Essentially, Lemma 3.3 proves that the inventory system (1) falls within the scope of the weighted-norm approach used to study general Markov decision processes. Hence, in the context of the inventory system (1), we can establish important results such as the existence of solutions of the DP equation, the convergence of the value iteration algorithm, and the existence of optimal policies. Indeed, let
be the -stage discounted cost under the policy and the initial inventory level and
Moreover, from [11, Theorem 8.3.6], by making the appropriate changes, we have the following result.
Theorem 3.4 (Dynamic programming) (a) The functions and belong to Moreover
(c) is convex.
(d) satisfies the dynamic programming equation:
(e) There exists a function such that and, for each
Moreover, the corresponding stationary policy is an optimal control policy.
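For intuition, when the density is known the DP equation of Theorem 3.4 can be approximated by value iteration on a grid. The lost-sales transition, the grid sizes, and the cost parameters below are illustrative assumptions, not part of the chapter's results:

```python
import math

def value_iteration(density, alpha=0.9, x_max=30.0, n_x=21, n_a=21,
                    h=1.0, k=2.0, p=5.0, iters=40, n_d=100):
    """Approximate V(x) = min_a { k a + E[h (x+a-D)^+ + p (D-x-a)^+]
    + alpha E[V((x+a-D)^+)] } by value iteration on a uniform grid,
    with expectations against `density` by the midpoint rule.
    Returns the grid, the value table, and the greedy policy."""
    step = x_max / (n_x - 1)
    xs = [i * step for i in range(n_x)]
    acts = [j * step for j in range(n_a)]
    dd = 2.0 * x_max / n_d
    d_pts = [(i + 0.5) * dd for i in range(n_d)]
    wts = [density(s) * dd for s in d_pts]       # demand quadrature weights

    def interp(v, x):                            # piecewise-linear interpolation
        t = min(max(x, 0.0), x_max) / step
        i = min(int(t), n_x - 2)
        return v[i] + (t - i) * (v[i + 1] - v[i])

    V = [0.0] * n_x
    for _ in range(iters):
        newV, pol = [], []
        for x in xs:
            best, best_a = math.inf, 0.0
            for a in acts:
                y = x + a
                q = k * a
                for s, w in zip(d_pts, wts):
                    q += w * (h * max(y - s, 0.0) + p * max(s - y, 0.0)
                              + alpha * interp(V, max(y - s, 0.0)))
                if q < best:
                    best, best_a = q, a
            newV.append(best)
            pol.append(best_a)
        V = newV
    return xs, V, pol

# Illustrative run with Exp(mean 10) demand.
xs, V, pol = value_iteration(lambda s: 0.1 * math.exp(-0.1 * s))
```

The greedy policy recovered this way is decreasing in the inventory level, consistent with a base-stock structure.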
4. Density estimation
As the density is unknown, the results in Theorem 3.4 are not applicable, and therefore they are not accessible to the IM. In this section we introduce a suitable density estimation method with which we can obtain an estimated DP equation. This will allow us to define a scheme for the construction of optimal policies. To this end, let $D_0, D_1, \ldots, D_{t-1}$ be independent realizations of the demand with common (unknown) density.
D.1. is a density.
D.2. a.e. with respect to the Lebesgue measure.
for measurable functions on
It is worth noting that for any density on satisfying (14), the norm is finite. The remainder of the section is devoted to proving Theorem 4.1.
We define the set as:
Observe that .
Lemma 4.2 The set is closed and convex in
Proof. The convexity of follows directly. To prove that is closed, let be a sequence in such that First, we prove
We assume that there is with such that being the Lebesgue measure on . Then, for some and with
Now, since there exists with such that
Using the fact that , we obtain that does not converge to in measure, which is a contradiction to the convergence in Therefore a.e.
On the other hand, applying Hölder's inequality and using the fact that , from (20),
which implies Now, as we have that is a density. Similarly, from (4),
for some constant Letting we obtain
which, in turn, implies that
This proves that is closed.∎
Let be an arbitrary estimator of such that
Lemma 4.2 ensures the existence of the estimator defined by the projection of the arbitrary estimator on the set of densities. That is, the projected density, expressed as
is the “best approximation” of the estimator on the set that is,
Now observe that satisfies the properties D.1, D.2, and D.3. Hence, Theorem 4.1 will be proved if we show that satisfies D.4 and D.5. To this end, since , from (26) observe that
which implies that, from (25),
That is, satisfies Property D.4. In fact, since a.s., from (27) it is easy to see that
Now, to obtain property D.5, observe that from (12)
Therefore, property D.4 yields
which proves the property D.5.
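As a simplified illustration of the projection step: starting from a raw histogram estimator, one can clip and renormalize so that the result is a genuine density in an admissible set. The chapter's projection minimizes a weighted norm over the closed convex set of Lemma 4.2; the sketch below replaces that projection by an elementary surrogate (clip to an upper envelope, then renormalize), with all names and parameters our own.

```python
import random

def histogram_estimate(sample, grid_max, n_bins):
    """Raw histogram density estimator on [0, grid_max)."""
    width = grid_max / n_bins
    counts = [0] * n_bins
    for s in sample:
        if 0 <= s < grid_max:
            counts[min(int(s / width), n_bins - 1)] += 1
    n = len(sample)
    return [c / (n * width) for c in counts], width

def project_to_densities(f, width, upper=None):
    """Surrogate projection: clip the raw estimate to [0, upper] and
    renormalize so it integrates to one, yielding an admissible density."""
    g = [max(0.0, v if upper is None else min(v, upper)) for v in f]
    mass = sum(v * width for v in g)
    return [v / mass for v in g]

# Illustrative run: estimate an Exp(mean 10) demand density.
rng = random.Random(1)
sample = [rng.expovariate(0.1) for _ in range(5000)]
raw, width = histogram_estimate(sample, grid_max=60.0, n_bins=30)
est = project_to_densities(raw, width, upper=0.2)
```

Unlike the raw estimator, the projected output always satisfies properties D.1 and D.2 by construction, which is exactly the role the projection plays in the chapter.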
5. Estimation and control
Having defined the estimator, we will now introduce an estimated dynamic programming procedure with which we can construct optimal policies for the inventory system.
Observe that for each from (14),
Now, we define the estimated one-stage cost function:
where (see Remark 3.2) for
In addition, observe that for each is convex and
We define the sequence of functions as and for
We can state our main results as follows:
Theorem 5.1 (a) For and
(d) For each there exists such that the selector defined as
attains the minimum in (34).
Remark 5.2 From [10, Proposition D.7], for each , there is an accumulation point of the sequence . Hence, there exists a constant such that
Theorem 5.3 Let be the selector defined in (36). Then the stationary policy is an optimal base stock policy for the inventory problem.
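The overall estimation-and-control loop can be sketched as follows. Here the DP-optimal base-stock level is replaced by a myopic newsvendor surrogate, the empirical $(p-k)/(p+h)$-quantile of the observed demands; this is an illustrative simplification of the chapter's procedure, not the policy of Theorem 5.3.

```python
import random

def adaptive_inventory(T, demand_sampler, h=1.0, k=2.0, p=5.0, seed=0):
    """Estimation-and-control sketch: each period the IM re-estimates
    the demand distribution from the observed demands and orders up to
    an empirical quantile (a myopic newsvendor surrogate for the
    DP-optimal base-stock level)."""
    rng = random.Random(seed)
    fractile = (p - k) / (p + h)      # critical ratio; assumes p > k
    observed, x, cost = [], 0.0, 0.0
    S = 0.0
    for t in range(T):
        if observed:                  # estimate, then control
            q = sorted(observed)
            S = q[min(int(fractile * len(q)), len(q) - 1)]
        a = max(S - x, 0.0)           # order up to the estimated level
        d = demand_sampler(rng)
        observed.append(d)            # the demand is observable
        y = x + a
        cost += k * a + h * max(y - d, 0.0) + p * max(d - y, 0.0)
        x = max(y - d, 0.0)           # lost-sales transition
    return cost / T, S

avg_cost, S_final = adaptive_inventory(
    T=5000, demand_sampler=lambda rng: rng.expovariate(0.1))
```

As more demands are observed, the empirical quantile converges to the corresponding quantile of the true distribution, mirroring (in this simplified setting) the convergence of the estimated policies established in Theorem 5.3.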
6. Proofs of the main results
6.1 Proof of Lemma 3.3
Note that, for each
where and Moreover, for every density function on and
On the other hand, we define the sequence of functions , as
and for and any density function on
Observe that, for each ,
In general, it is easy to see that for each ,
Let be arbitrary, and define
Then, from (40),
Therefore, for each , and since from (41),
Furthermore, using (42) and the fact that a straightforward calculation shows that
In addition, for every density function on and
Therefore, defining we have that and
6.2 Proof of Theorem 5.1
(b) Observe that from (39), for each
which implies that (see (43))
In addition, from (37),
Finally, taking expectation, (28) and Property D.4 prove the result.
(c) For each and by adding and subtracting the term , we have
(d) For each let be the function defined as
Hence, (34) is equivalent to
Moreover (see (33)), observe that is convex and Thus, there exists a constant such that
attains the minimum in (51).∎
6.3 Proof of Theorem 5.3
We fix an arbitrary . Since is an accumulation point of (see Remark 5.2), there exists a subsequence such that
Moreover, from (34) and Theorem 5.1(d), letting we have
On the other hand, following arguments similar to those in the proof of Theorem 5.1(c), for each and we have
Then, for each
where the last inequality follows by applying Fatou’s Lemma and because the function is continuous. Hence, taking expectation and liminf in (52), we obtain
7. Concluding remarks
In this chapter we have introduced an estimation and control procedure for inventory systems when the density of the demand is unknown to the inventory manager. Specifically, we have proposed a density estimation method defined by projection onto a suitable set of densities, which, combined with control schemes for the inventory system, yields a procedure to construct optimal ordering policies.
A point to highlight is that our results include the most general scenarios of an inventory system, e.g., state and control spaces either countable or uncountable, possibly unbounded costs, finite or infinite inventory capacity. This generality entailed the need to develop new estimation and control techniques, accompanied by a suitable mathematical analysis. For example, the simple fact of considering possibly unbounded costs led us to formulate a density estimation method related to the weight function $W$, which, in turn, defines the normed linear space (see (15)), all this through the projection estimator. Observe that if the cost function $c$ is bounded, we can take $W\equiv 1$, and we have (see (19) and (25)). Thus, any consistent density estimator can be used for the construction of optimal ordering policies.
Finally, the theory presented in this chapter lays the foundations to develop estimation and control algorithms in inventory systems considering other optimality criteria, for instance, the average cost or discounted criteria with random state-action-dependent discount factors (see [14, 15] and references therein).