Open access peer-reviewed chapter

Density Estimation in Inventory Control Systems under a Discounted Optimality Criterion

Written By

Jesús Adolfo Minjárez-Sosa

Reviewed: 04 July 2019 Published: 07 August 2019

DOI: 10.5772/intechopen.88392

From the Edited Volume

Statistical Methodologies

Edited by Jan Peter Hessling


Abstract

This chapter deals with a class of discrete-time inventory control systems where the demand process $\{D_t\}$ is formed by independent and identically distributed random variables with unknown density. Our objective is to introduce a suitable density estimation method which, combined with optimal control schemes, defines a procedure to construct optimal policies under a discounted optimality criterion.

Keywords

  • discounted optimality
  • density estimation
  • inventory systems
  • optimal policies
  • Markov decision processes
  • AMS 2010 subject classifications: 93E20, 62G07, 90B05

1. Introduction

Inventory systems are among the most studied sequential decision problems in the fields of operations research and operations management. Their origin lies in the problem of determining how much inventory of a certain product should be kept on hand to meet the demand of buyers at a cost as low as possible. Specifically, the question is: how much should be ordered, or produced, to satisfy the demand that will arise during a certain period? Clearly, the behavior of the inventory over time depends on the ordered quantities and the demand for the product in successive periods. Indeed, let $I_t$ and $q_t$ be the inventory level and the order quantity at the beginning of period $t$, respectively, and let $D_t$ be the random demand during period $t$. Then $\{I_t\}_{t\ge 0}$ is a stochastic process whose evolution in time is given as

$$I_{t+1} = \max\left\{0,\ I_t + q_t - D_t\right\} \equiv (I_t + q_t - D_t)^+, \quad t = 0, 1, \ldots$$

Schematically, this process is illustrated in the following figure.

(Standard inventory system)

In this case, the inventory manager (IM) observes the inventory level $I_t$ and then selects the order quantity $q_t$ as a function of $I_t$. The order quantities cause costs in the operation of the inventory system. For instance, if the quantity ordered is relatively small, the items are very likely to sell out, but some demand will go unmet; in this case the holding cost is reduced, but there is a significant shortage cost. Conversely, if the size of the order is large, there is a risk of having surpluses with a high holding cost. These facts give rise to a stochastic optimization problem, which can be modeled as a Markov decision process (MDP). That is, the inventory system can be analyzed as a stochastic optimal control problem whose objective is to find the optimal ordering policy minimizing a total expected cost.
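To make these dynamics concrete, the following minimal Python sketch simulates the recursion above under a simple ordering rule; the Exp(1) demand density, the horizon, and the order-up-to level are illustrative assumptions, not part of the model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(policy, I0=0.0, T=12):
    """Simulate I_{t+1} = (I_t + q_t - D_t)^+ under a given ordering rule.

    Demands are i.i.d. draws; the exponential density is an illustrative
    assumption (in this chapter the density is unknown to the IM).
    """
    I, path = I0, []
    for t in range(T):
        q = policy(I)                    # order chosen after observing I_t
        D = rng.exponential(1.0)         # D_t ~ rho, i.i.d.
        path.append((t, I, q, D))
        I = max(I + q - D, 0.0)          # next inventory level
    return path

# Example: an order-up-to (base-stock) rule with illustrative level S = 3;
# this is the structure that turns out to be optimal in Section 5.
for t, I, q, D in simulate(lambda I: max(3.0 - I, 0.0)):
    print(f"t={t}: I_t={I:.2f}, q_t={q:.2f}, D_t={D:.2f}")
```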

The analysis of the control problem associated with inventory systems has been carried out under several scenarios: discrete-time and continuous-time systems with finite or infinite capacity, inventory systems with bounded or unbounded one-stage costs, and partially observable models, among others (see, e.g., [1, 2, 3, 4, 5, 7]). Moreover, such scenarios have their own methods and techniques for solving the corresponding control problem. However, in most cases it has been assumed that all the components defining the behavior of the inventory system are known to the IM, which, in certain situations, can be too strong and unrealistic an assumption. Hence, it is necessary to implement schemes that allow learning or collecting information about the unknown components during the evolution of the system, so as to choose each decision with as much information as possible.

In this chapter we study a class of inventory control systems where the density of the demand is unknown to the IM. In this sense, our objective is to propose a procedure that combines density estimation methods and control schemes to construct optimal policies under a total expected discounted cost criterion. The estimation and control procedure is illustrated in the following figure:

(Estimation and control procedure)

In this case, unlike the standard inventory system, before choosing the order quantity $q_t$, the IM implements a density estimation method to get an estimate $\rho_t$ and, possibly, combines it with the history of the system $h_t = (I_0, q_0, D_0, \ldots, I_{t-1}, q_{t-1}, D_{t-1}, I_t)$ to select $q_t = q_t(h_t, \rho_t)$. Specifically, the density of the demand is estimated by the projection of an arbitrary estimator on an appropriate set, and its convergence is stated with respect to a norm that depends on the components of the inventory control model.

In general terms, our approach consists in showing that the inventory system can be studied under the weighted-norm approach, widely studied by several authors in the field of Markov decision processes (see, e.g., [11] and references therein) and in adaptive control (see, e.g., [9, 12, 13, 14]). That is, we prove the existence of a weight function $W$ which imposes a growth condition on the cost functions. Then, applying the dynamic programming algorithm, the density estimation method is adapted to such a condition to define an estimation and control procedure for the construction of optimal policies.

The chapter is organized as follows. In Section 2 we describe the inventory model and define the corresponding optimal control problem. In Section 3 we introduce the dynamic programming approach under the true density. Next, in Section 4 we present the density estimation method which will be used to state, in Section 5, an estimation and control procedure for the construction of optimal policies. The proofs of the main results are given in Section 6. Finally, in Section 7, we present some concluding remarks.


2. The inventory model

We consider an inventory system evolving according to the difference equation

$$I_{t+1} = (I_t + q_t - D_t)^+, \quad t = 0, 1, \ldots, \tag{1}$$

where $I_t$ and $q_t$ are the inventory level and the order quantity at the beginning of period $t$, taking values in $\mathcal{I} \equiv [0,\infty)$ and $\mathcal{Q} \equiv [0,\infty)$, respectively, and $D_t$ represents the random demand during period $t$. We assume that $\{D_t\}$ is an observable sequence of nonnegative independent and identically distributed (i.i.d.) random variables with a common density $\rho \in L_1[0,\infty)$ which is unknown to the inventory manager. In addition, we assume finite expectation:

$$\bar D \equiv E[D_t] < \infty. \tag{2}$$

Moreover, there exists a measurable function $\bar\rho \in L_1[0,\infty)$ such that

$$\rho(s) \le \bar\rho(s) \tag{3}$$

almost everywhere with respect to the Lebesgue measure. In addition,

$$\int_0^\infty s^2\, \bar\rho(s)\, ds < \infty. \tag{4}$$

For example, if $\bar\rho(s) \equiv K \min\left\{1,\ 1/s^{1+r}\right\}$, $s \in [0,\infty)$, for some constants $K > 0$ and $r > 2$ (so that (4) holds), then there are plenty of densities satisfying (3) and (4).
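As a quick numerical sanity check of this example (a sketch; the constants $K = r = 2.5$ and the Exp(1) candidate density are illustrative choices, not prescribed by the chapter):

```python
import numpy as np
from scipy.integrate import quad

# Envelope rho_bar(s) = K * min(1, 1/s^(1+r)); with K = r = 2.5 an Exp(1)
# density fits below the envelope (illustrative values only).
K, r = 2.5, 2.5
rho_bar = lambda s: K * min(1.0, 1.0 / max(s, 1e-12) ** (1.0 + r))
rho = lambda s: np.exp(-s)               # a candidate demand density

second_moment = quad(lambda s: s**2 * rho_bar(s), 0.0, np.inf)[0]
print(f"int s^2 rho_bar(s) ds = {second_moment:.3f}")          # finite: (4)
print(all(rho(s) <= rho_bar(s) for s in np.linspace(0.01, 60.0, 600)))  # (3)
```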

The one-stage cost function is defined as

$$\tilde c(I, q, D) = cq + h(I + q - D)^+ + b(D - I - q)^+, \quad (I, q) \in \mathcal{I}\times\mathcal{Q}, \tag{5}$$

where $h$, $c$, and $b$ are, respectively, the holding cost per unit, the ordering cost per unit, and the shortage cost per unit, satisfying $b > c$.

The order quantities applied by the IM are selected according to rules known as ordering control policies, defined as follows. Let $H_t$ be the space of histories of the inventory system up to time $t$; that is, a typical element of $H_t$ is written as

$$h_t = (I_0, q_0, D_0, \ldots, I_{t-1}, q_{t-1}, D_{t-1}, I_t).$$

An ordering policy (or simply a policy) $\gamma = \{\gamma_t\}$ is a sequence of measurable functions $\gamma_t: H_t \to \mathcal{Q}$ such that $\gamma_t(h_t) = q_t$, $t \ge 0$. We denote by $\Gamma$ the set of all policies. A feedback policy, or Markov policy, is a sequence $\gamma = \{g_t\}$ of functions $g_t: \mathcal{I} \to \mathcal{Q}$ such that $g_t(I_t) = q_t$. A feedback policy $\gamma = \{g_t\}$ is stationary if there exists a function $g: \mathcal{I} \to \mathcal{Q}$ such that $g_t = g$ for all $t \ge 0$.

When using a policy $\gamma \in \Gamma$, given the initial inventory level $I_0 = I$, we define the total expected discounted cost as

$$V_\gamma(I) \equiv E\left[\sum_{t=0}^\infty \alpha^t\, \tilde c(I_t, q_t, D_t)\right], \tag{6}$$

where $\alpha \in (0,1)$ is the so-called discount factor. The inventory control problem is then to find an optimal feedback policy $\gamma^*$ such that $V_{\gamma^*}(I) = V^*(I)$ for all $I \in \mathcal{I}$, where

$$V^*(I) \equiv \inf_{\gamma\in\Gamma} V_\gamma(I), \quad I \in \mathcal{I}, \tag{7}$$

is the optimal discounted cost, which we call the value function.

We define the mean one-stage cost as

$$c(I,q) = cq + hE(I+q-D)^+ + bE(D-I-q)^+ = cq + h\int_0^{I+q}(I+q-s)\,\rho(s)\,ds + b\int_{I+q}^\infty (s-I-q)\,\rho(s)\,ds, \quad (I,q)\in\mathcal{I}\times\mathcal{Q}. \tag{8}$$

Then, by using properties of conditional expectation, we can rewrite the total expected discounted cost (6) as

$$V_\gamma(I) = E_I^\gamma\left[\sum_{t=0}^\infty \alpha^t\, c(I_t, q_t)\right], \tag{9}$$

where $E_I^\gamma$ denotes the expectation operator with respect to the probability measure $P_I^\gamma$ induced by the policy $\gamma$, given the initial inventory level $I_0 = I$ (see, e.g., [8, 10]).
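For instance, the mean one-stage cost (8) can be evaluated by numerical quadrature. The sketch below assumes illustrative cost parameters and, purely for illustration, an Exp(1) density (which, in the setting of this chapter, the IM would not actually know):

```python
import numpy as np
from scipy.integrate import quad

c, h, b = 1.0, 0.5, 2.0            # ordering, holding, shortage costs (b > c)
rho = lambda s: np.exp(-s)         # demand density, for illustration only

def mean_one_stage_cost(I, q):
    """Mean one-stage cost (8): cq + h E(I+q-D)^+ + b E(D-I-q)^+."""
    y = I + q
    holding = quad(lambda s: (y - s) * rho(s), 0.0, y)[0]
    shortage = quad(lambda s: (s - y) * rho(s), y, np.inf)[0]
    return c * q + h * holding + b * shortage

print(mean_one_stage_cost(I=1.0, q=2.0))
```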

The sequence of events in our model is as follows. Since the density $\rho$ is unknown, the one-stage cost (8) is also unknown to the IM. Then, if at stage $t$ the inventory level is $I_t = I \in \mathcal{I}$, the IM implements a suitable density estimation method to get an estimate $\rho_t$ of $\rho$. Next, he/she combines this with the history of the system to select an order quantity $q_t = q = \gamma_t(\rho_t, h_t) \in \mathcal{Q}$. Then a cost $c(I,q)$ is incurred, and the system moves to a new inventory level $I_{t+1} = I' \in \mathcal{I}$ according to the transition law

$$Q(B \mid I, q) \equiv \operatorname{Prob}\left[I_{t+1} \in B \mid I_t = I,\ q_t = q\right] = \int_0^\infty \mathbf{1}_B\left[(I+q-s)^+\right]\rho(s)\,ds, \tag{10}$$

where $\mathbf{1}_B(\cdot)$ denotes the indicator function of the set $B \in \mathcal{B}(\mathcal{I})$, and $\mathcal{B}(\mathcal{I})$ is the Borel $\sigma$-algebra on $\mathcal{I}$. Once the transition to the inventory level $I'$ occurs, the process is repeated. Furthermore, the costs are accumulated according to the discounted cost criterion (9).


3. Dynamic programming equation under the true density ρ

The study of the inventory control problem will be carried out by means of the well-known dynamic programming (DP) approach, which we now introduce in terms of the unknown density $\rho$. To fix ideas precisely, we first present some preliminary and useful facts.

The set of order quantities in which we can find the optimal ordering policy can be reduced to $\mathcal{Q}^* \equiv [0, Q^*] \subset \mathcal{Q}$, where

$$Q^* \equiv \frac{b\bar D}{c(1-\alpha)}.$$

Thus, we can restrict the range of $q$ so that $q \in \mathcal{Q}^*$. Specifically, we have the following result.

Lemma 3.1 Let $\gamma^0 \in \Gamma$ be the policy defined as $\gamma^0 \equiv (0, 0, \ldots)$, and let $\bar\gamma = \{\bar\gamma_t\}$ be a policy such that $\bar\gamma_k(h_k) = \bar q_k > Q^*$ for at least one $k = 0, 1, \ldots$. Then

$$V_{\gamma^0}(I) \le V_{\bar\gamma}(I), \quad I \in \mathcal{I}. \tag{11}$$

That is, $\gamma^0$ performs at least as well as $\bar\gamma$.

Proof. Let $I_t^0$, $t = 0, 1, \ldots$, be the inventory levels generated by the application of $\gamma^0$, and let $\{(\bar I_t, \bar q_t)\}$ be the sequence of inventory levels and order quantities generated by $\bar\gamma$, where $I_0^0 = \bar I_0 = I$, $I_{t+1}^0 = (I_t^0 - D_t)^+$, and $\bar I_{t+1} = (\bar I_t + \bar q_t - D_t)^+$, $t \ge 0$. Without loss of generality, we suppose that $\bar q_0 = \bar q$ for some $\bar q > Q^*$. Note that $I_t^0 \le \bar I_t$ for all $t \ge 0$. Then, observing that $c\bar q > b\bar D/(1-\alpha)$,

$$\begin{aligned} V_{\gamma^0}(I) &= E\left[\sum_{t=0}^\infty \alpha^t\, \tilde c(I_t^0, 0, D_t)\right] = E\left[\sum_{t=0}^\infty \alpha^t\left(h(I_t^0 - D_t)^+ + b(D_t - I_t^0)^+\right)\right] \\ &\le E\left[\sum_{t=0}^\infty \alpha^t\, h(\bar I_t - D_t)^+\right] + b\sum_{t=0}^\infty \alpha^t E[D_t] \\ &\le E\left[\sum_{t=0}^\infty \alpha^t\left(h(\bar I_t + \bar q_t - D_t)^+ + b(D_t - \bar I_t - \bar q_t)^+\right)\right] + \frac{b\bar D}{1-\alpha} \\ &< E\left[\sum_{t=0}^\infty \alpha^t\left(h(\bar I_t + \bar q_t - D_t)^+ + b(D_t - \bar I_t - \bar q_t)^+\right)\right] + c\bar q \\ &\le E\left[\sum_{t=0}^\infty \alpha^t\left(c\bar q_t + h(\bar I_t + \bar q_t - D_t)^+ + b(D_t - \bar I_t - \bar q_t)^+\right)\right] = V_{\bar\gamma}(I), \quad I \in \mathcal{I}. \end{aligned}$$

∎

Remark 3.2 Observe that for $(I, q) \in \mathcal{I}\times\mathcal{Q}^*$ we have

$$c(I,q) = cq + L(I+q),$$

where, writing $y = I + q$,

$$L(y) \equiv hE(y - D)^+ + bE(D - y)^+.$$

In addition, observe that for any fixed $s \in [0,\infty)$, the functions $y \mapsto (y-s)^+$ and $y \mapsto (s-y)^+$ are convex, which implies that $L(y)$ is convex. Moreover,

$$\lim_{y\to\infty} L(y) = \infty.$$

The following lemma provides a growth property of the one-stage cost function (8).

Lemma 3.3 There exist a number $\beta$ and a function $W: \mathcal{I} \to [1, \infty)$ such that $0 < \alpha\beta < 1$,

$$\varphi \equiv \sup_{(I,q,s)\in\mathcal{I}\times\mathcal{Q}^*\times[0,\infty)} \frac{W\left[(I+q-s)^+\right]}{W(I)} < \infty, \tag{12}$$

and, for all $(I,q) \in \mathcal{I}\times\mathcal{Q}^*$,

$$c(I,q) \le W(I). \tag{13}$$

In addition, for any density $\mu$ on $[0,\infty)$ such that $\int_0^\infty s\,\mu(s)\,ds < \infty$,

$$\int_0^\infty W\left[(I+q-s)^+\right]\mu(s)\,ds \le \beta W(I), \quad (I,q)\in\mathcal{I}\times\mathcal{Q}^*. \tag{14}$$

The proof of Lemma 3.3 is given in Section 6.

We denote by $B_W$ the normed linear space of all measurable functions $u: \mathcal{I} \to \mathbb{R}$ with finite weighted norm ($W$-norm) $\|\cdot\|_W$ defined as

$$\left\|u\right\|_W \equiv \sup_{I\in\mathcal{I}} \frac{\left|u(I)\right|}{W(I)}. \tag{15}$$

Essentially, Lemma 3.3 proves that the inventory system (1) falls within the weighted-norm framework widely used to study general Markov decision processes (see, e.g., [11]). Hence, we can formulate, on the space $B_W$, important results such as the existence of solutions of the DP equation, the convergence of the value iteration algorithm, and the existence of optimal policies, in the context of the inventory system (1). Indeed, let

$$V_n^\gamma(I) = E_I^\gamma\left[\sum_{t=0}^{n-1}\alpha^t\, c(I_t, q_t)\right]$$

be the $n$-stage discounted cost under the policy $\gamma \in \Gamma$ and the initial inventory level $I \in \mathcal{I}$, and let

$$V_n(I) = \inf_{\gamma\in\Gamma} V_n^\gamma(I), \quad V_0(I) = 0, \quad I\in\mathcal{I},$$

be the corresponding value functions. Then, for all $n \ge 0$ and $I \in \mathcal{I}$ (see, e.g., [6, 10, 11]),

$$V_n(I) = \min_{q\in\mathcal{Q}^*}\left\{c(I,q) + \alpha\int_0^\infty V_{n-1}\left[(I+q-s)^+\right]\rho(s)\,ds\right\}. \tag{16}$$
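A discretized version of recursion (16) can be sketched as follows; the grids, cost parameters, and Exp(1) density are illustrative assumptions, the order range is truncated at $Q^*$ from Lemma 3.1, and values beyond the inventory grid are handled by a crude clamping.

```python
import numpy as np

def value_iteration(c=1.0, h=0.5, b=2.0, alpha=0.9, I_max=30.0,
                    nI=61, nq=41, ns=400, s_max=20.0, n_iter=100):
    """Value iteration (16) on uniform grids (a numerical sketch)."""
    s = np.linspace(0.0, s_max, ns)
    ds = s[1] - s[0]
    rho = np.exp(-s)                          # illustrative demand density
    D_bar = np.sum(s * rho) * ds
    Q_star = b * D_bar / (c * (1.0 - alpha))  # order bound of Lemma 3.1
    I_grid = np.linspace(0.0, I_max, nI)
    q_grid = np.linspace(0.0, Q_star, nq)
    V = np.zeros(nI)
    for _ in range(n_iter):
        V_new = np.empty(nI)
        for i, I in enumerate(I_grid):
            best = np.inf
            for q in q_grid:
                y = I + q
                nxt = np.maximum(y - s, 0.0)  # next state (I + q - s)^+
                cost = (c * q + h * np.sum(nxt * rho) * ds
                        + b * np.sum(np.maximum(s - y, 0.0) * rho) * ds)
                # np.interp clamps beyond I_max: a crude state truncation
                EV = np.sum(np.interp(nxt, I_grid, V) * rho) * ds
                best = min(best, cost + alpha * EV)
            V_new[i] = best
        V = V_new
    return I_grid, V
```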

Moreover, from [11, Theorem 8.3.6], by making the appropriate changes, we have the following result.

Theorem 3.4 (Dynamic programming) (a) The functions $V_n$ and $V^*$ belong to $B_W$. Moreover,

$$V_n(I) \le \frac{W(I)}{1-\alpha\beta}, \quad V^*(I) \le \frac{W(I)}{1-\alpha\beta}, \quad I\in\mathcal{I}. \tag{17}$$

(b) As $n\to\infty$, $\left\|V_n - V^*\right\|_W \to 0$.

(c) $V^*$ is convex.

(d) $V^*$ satisfies the dynamic programming equation

$$V^*(I) = \min_{q\in\mathcal{Q}^*}\left\{c(I,q) + \alpha\int_0^\infty V^*\left[(I+q-s)^+\right]\rho(s)\,ds\right\} = \min_{I\le y\le Q^*+I}\left\{cy + L(y) + \alpha\int_0^\infty V^*\left[(y-s)^+\right]\rho(s)\,ds\right\} - cI, \quad I\in\mathcal{I}. \tag{18}$$

(e) There exists a function $g^*: \mathcal{I}\to\mathcal{Q}$ with $g^*(I)\in\mathcal{Q}^*$ such that, for each $I\in\mathcal{I}$,

$$V^*(I) = c(I, g^*(I)) + \alpha\int_0^\infty V^*\left[(I+g^*(I)-s)^+\right]\rho(s)\,ds.$$

Moreover, the stationary policy $\gamma^* = \{g^*\}$ is optimal.


4. Density estimation

Since the density $\rho$ is unknown, the results in Theorem 3.4 are not applicable, and therefore they are not accessible to the IM. In this section we introduce a suitable density estimation method with which we can obtain an estimated DP equation. This will allow us to define a scheme for the construction of optimal policies. To this end, let $D_0, D_1, \ldots, D_t, \ldots$ be independent realizations of the demand whose density is $\rho$.

Theorem 4.1 There exists an estimator $\rho_t(s) \equiv \rho_t(s; D_0, D_1, \ldots, D_{t-1})$, $s \in [0,\infty)$, of $\rho$ such that (see (2) and (3)):

D.1. $\rho_t \in L_1[0,\infty)$ is a density.

D.2. $\rho_t \le \bar\rho$ a.e. with respect to the Lebesgue measure.

D.3. $\int_0^\infty s\, \rho_t(s)\, ds \le \bar D$.

D.4. $E\int_0^\infty \left|\rho_t(s) - \rho(s)\right| ds \to 0$ as $t\to\infty$.

D.5. $E\left\|\rho_t - \rho\right\| \to 0$ as $t\to\infty$, where

$$\left\|\mu\right\| \equiv \sup_{(I,q)\in\mathcal{I}\times\mathcal{Q}^*} \frac{1}{W(I)}\int_0^\infty W\left[(I+q-s)^+\right]\left|\mu(s)\right| ds \tag{19}$$

for measurable functions $\mu$ on $[0,\infty)$.

It is worth noting that for any density $\mu$ on $[0,\infty)$ satisfying (14), the norm $\|\mu\|$ is finite. The remainder of the section is devoted to proving Theorem 4.1.

We define the set $\mathcal{D} \subset L_1([0,\infty))$ as

$$\mathcal{D} \equiv \left\{\mu : \mu \text{ is a density},\ \int_0^\infty s\, \mu(s)\, ds \le \bar D,\ \mu(s) \le \bar\rho(s)\ \text{a.e.}\right\}.$$

Observe that $\rho \in \mathcal{D}$.

Lemma 4.2 The set $\mathcal{D}$ is closed and convex in $L_1([0,\infty))$.

Proof. The convexity of $\mathcal{D}$ follows directly. To prove that $\mathcal{D}$ is closed, let $\{\mu_t\}$ be a sequence in $\mathcal{D}$ such that $\mu_t \to \mu$ in $L_1$, for some $\mu \in L_1([0,\infty))$. First, we prove that

$$\mu(s) \le \bar\rho(s)\ \text{a.e.} \tag{20}$$

Suppose, on the contrary, that there is $A \subset [0,\infty)$ with $m(A) > 0$ such that $\mu(s) > \bar\rho(s)$, $s \in A$, $m$ being the Lebesgue measure on $\mathbb{R}$. Then, for some $\varepsilon > 0$ and $A' \subset A$ with $m(A') > 0$,

$$\mu(s) > \bar\rho(s) + \varepsilon, \quad s \in A'. \tag{21}$$

Now, since $\mu_t \in \mathcal{D}$, $t \ge 0$, there exists $B_t \subset [0,\infty)$ with $m(B_t) = 0$ such that

$$\mu_t(s) \le \bar\rho(s), \quad s \in [0,\infty)\setminus B_t, \quad t \ge 0. \tag{22}$$

Combining (21) and (22), we have

$$\mu(s) - \mu_t(s) > \varepsilon, \quad s \in A' \cap \left([0,\infty)\setminus B_t\right), \quad t \ge 0.$$

Using the fact that $m\left(A' \cap ([0,\infty)\setminus B_t)\right) = m(A') > 0$, we obtain that $\mu_t$ does not converge to $\mu$ in measure, which contradicts the convergence in $L_1$. Therefore $\mu(s) \le \bar\rho(s)$ a.e.

On the other hand, applying Hölder's inequality and using the facts that $\bar\rho \in L_1[0,\infty)$ and that each $\mu_t$ is a density, from (20),

$$\left|1 - \int_0^\infty \mu(s)\,ds\right| = \left|\int_0^\infty \mu_t(s)\,ds - \int_0^\infty \mu(s)\,ds\right| \le \int_0^\infty \left|\mu_t(s)-\mu(s)\right|^{1/2}\left|\mu_t(s)-\mu(s)\right|^{1/2} ds \le \left(\int_0^\infty 2\bar\rho(s)\,ds\right)^{1/2}\left(\int_0^\infty \left|\mu_t(s)-\mu(s)\right| ds\right)^{1/2} \to 0 \quad \text{as } t\to\infty, \tag{23}$$

which implies $\int_0^\infty \mu(s)\,ds = 1$. Now, as $\mu \ge 0$ a.e., we have that $\mu$ is a density. Similarly, from (4),

$$\left|\int_0^\infty s\,\mu_t(s)\,ds - \int_0^\infty s\,\mu(s)\,ds\right| \le \int_0^\infty s\left|\mu_t(s)-\mu(s)\right|^{1/2}\left|\mu_t(s)-\mu(s)\right|^{1/2} ds \le \left(\int_0^\infty s^2\, 2\bar\rho(s)\,ds\right)^{1/2}\left(\int_0^\infty \left|\mu_t(s)-\mu(s)\right| ds\right)^{1/2} \le 2^{1/2} M \left(\int_0^\infty \left|\mu_t(s)-\mu(s)\right| ds\right)^{1/2}, \tag{24}$$

for some constant $M < \infty$. Letting $t\to\infty$, we obtain

$$\int_0^\infty s\,\mu_t(s)\,ds \to \int_0^\infty s\,\mu(s)\,ds,$$

which, since $\int_0^\infty s\,\mu_t(s)\,ds \le \bar D$ for all $t$, implies that

$$\int_0^\infty s\,\mu(s)\,ds \le \bar D.$$

This proves that $\mathcal{D}$ is closed. ∎

Let $\hat\rho_t(s) \equiv \hat\rho_t(s; D_0, D_1, \ldots, D_t)$, $s\in[0,\infty)$, be an arbitrary estimator of $\rho$ such that

$$E\left\|\rho - \hat\rho_t\right\|_{L_1} = E\int_0^\infty \left|\rho(s) - \hat\rho_t(s)\right| ds \to 0 \quad \text{as } t\to\infty. \tag{25}$$

Lemma 4.2 ensures the existence of the estimator $\rho_t$, which is defined by the projection of $\hat\rho_t$ onto the set of densities $\mathcal{D}$. That is, the density $\rho_t \in \mathcal{D}$, expressed as

$$\rho_t \equiv \arg\min_{\sigma\in\mathcal{D}} \left\|\sigma - \hat\rho_t\right\|_{L_1},$$

is the "best approximation" of the estimator $\hat\rho_t$ in the set $\mathcal{D}$; that is,

$$\left\|\rho_t - \hat\rho_t\right\|_{L_1} = \inf_{\mu\in\mathcal{D}} \left\|\mu - \hat\rho_t\right\|_{L_1}. \tag{26}$$
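On a finite grid, the projection (26) becomes a small linear program: discretize the $L_1$ distance and the constraints defining $\mathcal{D}$, and minimize over grid densities. The sketch below uses scipy's linprog; the grid, the envelope $\bar\rho$, the value of $\bar D$, and the histogram pilot estimator $\hat\rho_t$ are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def project_density(rho_hat, rho_bar, s, D_bar):
    """L1 projection (26) of rho_hat onto a grid discretization of the set D.

    Decision variables: sigma (projected density) and u >= |sigma - rho_hat|.
    Minimize sum(u)*ds subject to 0 <= sigma <= rho_bar, sum(sigma)*ds = 1,
    and sum(s*sigma)*ds <= D_bar.
    """
    n, ds = len(s), s[1] - s[0]
    obj = np.concatenate([np.zeros(n), ds * np.ones(n)])
    eye = np.eye(n)
    # sigma - u <= rho_hat and -sigma - u <= -rho_hat encode u >= |sigma - rho_hat|
    A_ub = np.vstack([np.hstack([eye, -eye]),
                      np.hstack([-eye, -eye]),
                      np.concatenate([s * ds, np.zeros(n)])[None, :]])
    b_ub = np.concatenate([rho_hat, -rho_hat, [D_bar]])
    A_eq = np.concatenate([ds * np.ones(n), np.zeros(n)])[None, :]
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0.0, rb) for rb in rho_bar] + [(0.0, None)] * n)
    return res.x[:n]

# Usage: a histogram pilot estimate of rho from demand data, then its projection.
rng = np.random.default_rng(1)
edges = np.linspace(0.0, 10.0, 121)
hist, _ = np.histogram(rng.exponential(1.0, size=200), bins=edges, density=True)
mid = 0.5 * (edges[:-1] + edges[1:])           # bin centers as the grid
rho_bar = 2.5 * np.minimum(1.0, mid ** -3.5)   # envelope as in Section 2
rho_t = project_density(hist, rho_bar, mid, D_bar=1.0)  # D_bar assumed known
```

Any $L_1$-consistent pilot estimator $\hat\rho_t$ (histogram, kernel, etc.) can play the role of (25); the linear program then enforces D.1, D.2, and D.3 exactly on the grid.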

Now observe that $\rho_t$ satisfies properties D.1, D.2, and D.3. Hence, Theorem 4.1 will be proved if we show that $\rho_t$ satisfies D.4 and D.5. To this end, since $\rho \in \mathcal{D}$, from (26) observe that

$$\left\|\rho_t - \rho\right\|_{L_1} \le \left\|\rho_t - \hat\rho_t\right\|_{L_1} + \left\|\hat\rho_t - \rho\right\|_{L_1} \le 2\left\|\hat\rho_t - \rho\right\|_{L_1}, \quad t \ge 0,$$

which implies, from (25),

$$E\int_0^\infty \left|\rho(s) - \rho_t(s)\right| ds \le 2E\left\|\hat\rho_t - \rho\right\|_{L_1} \to 0 \quad \text{as } t\to\infty. \tag{27}$$

That is, $\rho_t$ satisfies property D.4. In fact, since $\int_0^\infty \left|\rho(s) - \rho_t(s)\right| ds \le 2$ a.s., from (27) it is easy to see that

$$E\left[\int_0^\infty \left|\rho(s) - \rho_t(s)\right| ds\right]^q \to 0 \quad \text{as } t\to\infty, \text{ for any } q > 0. \tag{28}$$

Now, to obtain property D.5, observe that from (12),

$$\left\|\rho_t - \rho\right\| = \sup_{(I,q)\in\mathcal{I}\times\mathcal{Q}^*} \frac{1}{W(I)}\int_0^\infty W\left[(I+q-s)^+\right]\left|\rho(s)-\rho_t(s)\right| ds \le \varphi \int_0^\infty \left|\rho(s)-\rho_t(s)\right| ds. \tag{29}$$

Therefore, property D.4 yields

$$E\left\|\rho_t - \rho\right\| \to 0 \quad \text{as } t\to\infty, \tag{30}$$

which proves property D.5. ∎


5. Estimation and control

Having defined the estimator $\rho_t$, we now introduce an estimated dynamic programming procedure with which we can construct optimal policies for the inventory system.

Observe that, for each $t \ge 0$, from (14),

$$\int_0^\infty W\left[(I+q-s)^+\right]\rho_t(s)\,ds \le \beta W(I), \quad (I,q)\in\mathcal{I}\times\mathcal{Q}^*. \tag{31}$$

Now we define the estimated one-stage cost function

$$c_t(I,q) \equiv cq + h\int_0^{I+q}(I+q-s)\,\rho_t(s)\,ds + b\int_{I+q}^\infty (s-I-q)\,\rho_t(s)\,ds = cq + L_t(I+q), \quad (I,q)\in\mathcal{I}\times\mathcal{Q}^*, \tag{32}$$

where (see Remark 3.2), for $y = I + q$,

$$L_t(y) \equiv h\int_0^y (y-s)\,\rho_t(s)\,ds + b\int_y^\infty (s-y)\,\rho_t(s)\,ds.$$

In addition, observe that for each $t \ge 0$, $L_t(y)$ is convex and

$$\lim_{y\to\infty} L_t(y) = \infty. \tag{33}$$

We define the sequence of functions $\{V_t\}$ as $V_0 \equiv 0$ and, for $t \ge 1$,

$$V_t(I) = \min_{q\in\mathcal{Q}^*}\left\{c_t(I,q) + \alpha\int_0^\infty V_{t-1}\left[(I+q-s)^+\right]\rho_t(s)\,ds\right\} = \min_{I\le y\le Q^*+I}\left\{cy + L_t(y) + \alpha\int_0^\infty V_{t-1}\left[(y-s)^+\right]\rho_t(s)\,ds\right\} - cI, \quad I\in\mathcal{I}. \tag{34}$$

We can state our main results as follows:

Theorem 5.1 (a) For $t \ge 0$ and $I \in \mathcal{I}$,

$$V_t(I) \le \frac{W(I)}{1-\alpha\beta}. \tag{35}$$

Therefore, $V_t \in B_W$.

(b) As $t\to\infty$, $E\left[\sup_{(I,q)\in\mathcal{I}\times\mathcal{Q}^*} \dfrac{\left|c_t(I,q) - c(I,q)\right|}{W(I)}\right] \to 0$.

(c) As $t\to\infty$, $E\left\|V_t - V^*\right\|_W \to 0$.

(d) For each $t \ge 0$, there exists $K_t \ge 0$ such that the selector $g_t: \mathcal{I}\to\mathcal{Q}^*$ defined as

$$q_t = g_t(I) \equiv \begin{cases} K_t - I & \text{if } 0 \le I \le K_t, \\ 0 & \text{if } I > K_t, \end{cases}$$

attains the minimum in (34); that is, $g_t$ is a base-stock rule with order-up-to level $K_t$.

Remark 5.2 From [10, Proposition D.7], for each $I \in \mathcal{I}$ there is an accumulation point $g(I) \in \mathcal{Q}^*$ of the sequence $\{g_t(I)\}$. Hence, there exists a constant $K$ such that

$$g(I) = \begin{cases} K - I & \text{if } 0 \le I \le K, \\ 0 & \text{if } I > K. \end{cases} \tag{36}$$

Theorem 5.3 Let $g$ be the selector defined in (36). Then the stationary policy $\gamma \equiv \{g\}$ is an optimal base-stock policy for the inventory problem.
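Putting the pieces together, one step of the estimated DP recursion (34), together with the extraction of the base-stock level $K_t$ of Theorem 5.1(d), can be sketched as follows; cost parameters, grids, and $Q^*$ are illustrative, and `rho_t_vals` would come from the projection estimator of Section 4 evaluated on the demand grid.

```python
import numpy as np

def dp_step(rho_t_vals, s, V_prev, I_grid, c=1.0, h=0.5, b=2.0,
            alpha=0.9, Q_star=20.0):
    """One estimated-DP step (34): returns the base-stock level K_t and the
    updated value function V_t on I_grid (a numerical sketch)."""
    ds = s[1] - s[0]
    y_grid = np.linspace(0.0, I_grid[-1] + Q_star, 300)
    H = np.empty(len(y_grid))
    for j, y in enumerate(y_grid):
        nxt = np.maximum(y - s, 0.0)                  # (y - s)^+
        L_t = (h * np.sum(nxt * rho_t_vals) * ds
               + b * np.sum(np.maximum(s - y, 0.0) * rho_t_vals) * ds)
        EV = np.sum(np.interp(nxt, I_grid, V_prev) * rho_t_vals) * ds
        H[j] = c * y + L_t + alpha * EV               # H_t(y) of Section 6.2
    K_t = y_grid[np.argmin(H)]                        # base-stock level
    # V_t(I) = min_{I <= y <= I+Q*} H_t(y) - c*I; for convex H_t (and
    # K_t <= I + Q_star) the constrained minimizer is y = max(I, K_t).
    V_t = np.interp(np.maximum(I_grid, K_t), y_grid, H) - c * I_grid
    return K_t, V_t
```

Iterating `dp_step` with successive estimates $\rho_t$ (and `V_prev` set to the previous $V_{t-1}$) mimics the estimation and control scheme: at inventory level $I$, the order is $g_t(I) = \max(K_t - I, 0)$.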


6. Proofs

6.1 Proof of Lemma 3.3

Note that, for each $(I,q)\in\mathcal{I}\times\mathcal{Q}^*$,

$$c(I,q) \le cQ^* + h(I + Q^*) + b\bar D \le (c+h)Q^* + hI + b\bar D \le MG(I), \tag{37}$$

where $M \equiv \max\left\{(c+h)Q^* + b\bar D,\ h\right\}$ and $G(I) \equiv I + 1$. Moreover, for every density function $\mu$ on $[0,\infty)$ and $(I,q)\in\mathcal{I}\times\mathcal{Q}^*$,

$$\int_0^\infty G\left[(I+q-s)^+\right]\mu(s)\,ds \le G(I) + Q^*. \tag{38}$$

On the other hand, we define the sequence of functions $\{w_t\}$, $w_t: \mathcal{I}\to\mathbb{R}$, as

$$w_0(I) \equiv 1 + MG(I), \tag{39}$$

and, for $t \ge 1$ and any density function $\mu$ on $[0,\infty)$,

$$w_t(I) \equiv \sup_{q\in\mathcal{Q}^*}\int_0^\infty w_{t-1}\left[(I+q-s)^+\right]\mu(s)\,ds.$$

Observe that, for each $I\in\mathcal{I}$,

$$w_1(I) = \sup_{q\in\mathcal{Q}^*}\int_0^\infty \left(1 + MG\left[(I+q-s)^+\right]\right)\mu(s)\,ds \le 1 + MG(I) + MQ^*.$$

Thus,

$$w_2(I) \le \sup_{q\in\mathcal{Q}^*}\int_0^\infty \left(1 + MG\left[(I+q-s)^+\right] + MQ^*\right)\mu(s)\,ds \le 1 + MG(I) + MQ^* + MQ^*, \quad I\in\mathcal{I}.$$

In general, it is easy to see that, for each $I\in\mathcal{I}$,

$$w_t(I) \le MG(I) + 1 + \sum_{j=0}^{t-1} MQ^* = MG(I) + 1 + MQ^* t. \tag{40}$$

Let $\alpha_0 \in (\alpha, 1)$ be arbitrary, and define

$$W(I) \equiv \sum_{t=0}^\infty \alpha_0^t\, w_t(I). \tag{41}$$

Then, from (40),

$$W(I) \le \sum_{t=0}^\infty \alpha_0^t\left(MG(I) + 1 + MQ^* t\right) = \frac{MG(I) + 1}{1-\alpha_0} + \frac{MQ^*\alpha_0}{(1-\alpha_0)^2}. \tag{42}$$

Therefore, $W(I) < \infty$ for each $I\in\mathcal{I}$, and since $w_0 > 1$, from (41),

$$W(I) > 1. \tag{43}$$

Furthermore, using (42) and the fact that $W \ge w_0$, a straightforward calculation shows that

$$\varphi \equiv \sup_{(I,q,s)\in\mathcal{I}\times\mathcal{Q}^*\times[0,\infty)} \frac{W\left[(I+q-s)^+\right]}{W(I)} < \infty. \tag{44}$$

Now, from (37) and (39), $c(I,q) \le w_0(I)$, which yields, for all $(I,q)\in\mathcal{I}\times\mathcal{Q}^*$,

$$c(I,q) \le W(I). \tag{45}$$

In addition, for every density function $\mu$ on $[0,\infty)$ and $(I,q)\in\mathcal{I}\times\mathcal{Q}^*$,

$$\int_0^\infty W\left[(I+q-s)^+\right]\mu(s)\,ds = \sum_{t=0}^\infty \alpha_0^t \int_0^\infty w_t\left[(I+q-s)^+\right]\mu(s)\,ds \le \sum_{t=0}^\infty \alpha_0^t\, w_{t+1}(I) = \alpha_0^{-1}\left(\sum_{t=0}^\infty \alpha_0^t\, w_t(I) - w_0(I)\right) = \alpha_0^{-1}\left(W(I) - w_0(I)\right) \le \alpha_0^{-1} W(I).$$

Therefore, defining $\beta \equiv \alpha_0^{-1}$, we have $0 < \alpha\beta = \alpha/\alpha_0 < 1$ and

$$\int_0^\infty W\left[(I+q-s)^+\right]\mu(s)\,ds \le \beta W(I), \quad (I,q)\in\mathcal{I}\times\mathcal{Q}^*,$$

which, together with (43), (44), and (45), proves Lemma 3.3. ∎

6.2 Proof of Theorem 5.1

(a) Since $\int_0^\infty s\,\rho_t(s)\,ds \le \bar D$, from (32) (see (37)) we have $c_t(I,q) \le MG(I)$ for each $t \ge 0$ and $(I,q)\in\mathcal{I}\times\mathcal{Q}^*$. Hence, it is easy to see that $c_t(I,q) \le W(I)$ for each $(I,q)\in\mathcal{I}\times\mathcal{Q}^*$ (see (45)). Then $V_1(I) \le W(I)$, and from (31), applying induction arguments, we get

$$V_t(I) \le \frac{W(I)}{1-\alpha\beta}, \quad t \ge 0,\ I\in\mathcal{I}. \tag{46}$$

(b) Observe that, from (39), for each $I\in\mathcal{I}$,

$$W(I) \ge w_0(I) = 1 + MG(I),$$

which implies (see (43)) that

$$\frac{MG(I)}{W(I)} \le 1 - \frac{1}{W(I)} < 1. \tag{47}$$

In addition, from (37),

$$h(I + Q^*) \le MG(I). \tag{48}$$

On the other hand, similarly to (24), from (4) it is easy to see that

$$\int_0^\infty s\left|\rho_t(s) - \rho(s)\right| ds \le 2^{1/2} M' \left(\int_0^\infty \left|\rho_t(s) - \rho(s)\right| ds\right)^{1/2}, \tag{49}$$

for some constant $M' < \infty$. Hence, combining (47)–(49), from the definitions of $c_t(I,q)$ and $c(I,q)$, we have

$$\frac{\left|c_t(I,q) - c(I,q)\right|}{W(I)} \le \frac{h(I+Q^*)}{W(I)}\int_0^\infty \left|\rho_t(s)-\rho(s)\right| ds + \frac{b}{W(I)}\int_0^\infty s\left|\rho_t(s)-\rho(s)\right| ds \le \frac{MG(I)}{W(I)}\int_0^\infty \left|\rho_t(s)-\rho(s)\right| ds + b\, 2^{1/2} M' \left(\int_0^\infty \left|\rho_t(s)-\rho(s)\right| ds\right)^{1/2},$$

where the last inequality uses (48), (49), and $W \ge 1$. Finally, taking expectation, (28) (with $q = 1/2$) and property D.4 prove the result.

(c) For each $I\in\mathcal{I}$ and $t \ge 0$, by adding and subtracting the term $\alpha\int_0^\infty V_{t-1}\left[(I+q-s)^+\right]\rho(s)\,ds$, we have

$$\begin{aligned} \left|V_t(I) - V^*(I)\right| \le{}& \sup_{q\in\mathcal{Q}^*}\left|c_t(I,q) - c(I,q)\right| \\ &+ \sup_{q\in\mathcal{Q}^*}\left[\alpha\left|\int_0^\infty V_{t-1}\left[(I+q-s)^+\right]\left(\rho_t(s)-\rho(s)\right)ds\right| + \alpha\int_0^\infty \left|V_{t-1}\left[(I+q-s)^+\right] - V^*\left[(I+q-s)^+\right]\right|\rho(s)\,ds\right] \\ \le{}& \sup_{q\in\mathcal{Q}^*}\left|c_t(I,q) - c(I,q)\right| + \frac{\alpha}{1-\alpha\beta}\sup_{q\in\mathcal{Q}^*}\int_0^\infty W\left[(I+q-s)^+\right]\left|\rho_t(s)-\rho(s)\right| ds + \alpha\beta\left\|V_{t-1}-V^*\right\|_W W(I), \end{aligned}$$

where the last inequality is due to (35), (17), (14), and (15). Therefore, from (15) and (19), taking expectation,

$$E\left\|V_t - V^*\right\|_W \le E\sup_{(I,q)\in\mathcal{I}\times\mathcal{Q}^*}\frac{\left|c_t(I,q) - c(I,q)\right|}{W(I)} + \frac{\alpha}{1-\alpha\beta}\, E\left\|\rho_t - \rho\right\| + \alpha\beta\, E\left\|V_{t-1} - V^*\right\|_W. \tag{50}$$

Finally, from (17) and (35), $\eta \equiv \limsup_{t\to\infty} E\left\|V^* - V_t\right\|_W < \infty$. Hence, taking limsup on both sides of (50), from part (b) and property D.5 in Theorem 4.1, we get $\eta \le \alpha\beta\eta$, which yields $\eta = 0$ (since $0 < \alpha\beta < 1$). This proves (c).

(d) For each $t \ge 0$, let $H_t: \mathcal{I}\to\mathbb{R}$ be the function defined as

$$H_t(y) \equiv cy + L_t(y) + \alpha\int_0^\infty V_{t-1}\left[(y-s)^+\right]\rho_t(s)\,ds.$$

Hence, (34) is equivalent to

$$V_t(I) = \min_{q\in\mathcal{Q}^*} H_t(I+q) - cI, \quad I\in\mathcal{I}. \tag{51}$$

Moreover (see (33)), observe that $H_t$ is convex and $\lim_{y\to\infty} H_t(y) = \infty$. Thus, there exists a constant $K_t \ge 0$ such that

$$H_t(K_t) = \min_{y\in\mathcal{I}} H_t(y),$$

and

$$g_t(I) = \begin{cases} K_t - I & \text{if } 0 \le I \le K_t, \\ 0 & \text{if } I > K_t, \end{cases}$$

attains the minimum in (51). ∎

6.3 Proof of Theorem 5.3

We fix an arbitrary $I \in \mathcal{I}$. Since $g(I)$ is an accumulation point of $\{g_t(I)\}$ (see Remark 5.2), there exists a subsequence $\{t_m\}$, $t_m = t_m(I)$, such that

$$g_{t_m}(I) \to g(I) \quad \text{as } m\to\infty.$$

Moreover, from (34) and Theorem 5.1(d), writing $m$ instead of $t_m$ to ease the notation, we have

$$V_m(I) = c_m(I, g_m(I)) + \alpha\int_0^\infty V_{m-1}\left[(I + g_m(I) - s)^+\right]\rho_m(s)\,ds. \tag{52}$$

On the other hand, following arguments similar to those in the proof of Theorem 5.1(c), for each $m \ge 0$ and $(I,q)\in\mathcal{I}\times\mathcal{Q}^*$, we have

$$\begin{aligned} &\left|\alpha\int_0^\infty V_{m-1}\left[(I+q-s)^+\right]\rho_m(s)\,ds - \alpha\int_0^\infty V^*\left[(I+q-s)^+\right]\rho(s)\,ds\right| \\ &\quad \le \alpha\int_0^\infty \left|V_{m-1}\left[(I+q-s)^+\right] - V^*\left[(I+q-s)^+\right]\right|\rho_m(s)\,ds + \alpha\left|\int_0^\infty V^*\left[(I+q-s)^+\right]\left(\rho_m(s) - \rho(s)\right)ds\right| \\ &\quad \le \alpha\beta\left\|V_{m-1} - V^*\right\|_W W(I) + \frac{\alpha}{1-\alpha\beta}\left\|\rho_m - \rho\right\| W(I). \end{aligned}$$

Then, for each $I\in\mathcal{I}$,

$$E\sup_{q\in\mathcal{Q}^*}\left|\alpha\int_0^\infty V_{m-1}\left[(I+q-s)^+\right]\rho_m(s)\,ds - \alpha\int_0^\infty V^*\left[(I+q-s)^+\right]\rho(s)\,ds\right| \to 0 \quad \text{as } m\to\infty. \tag{53}$$

Now, writing $g_m = g_m(I)$,

$$\alpha\int_0^\infty V_{m-1}\left[(I+g_m-s)^+\right]\rho_m(s)\,ds = \alpha\int_0^\infty V_{m-1}\left[(I+g_m-s)^+\right]\rho_m(s)\,ds - \alpha\int_0^\infty V^*\left[(I+g_m-s)^+\right]\rho(s)\,ds + \alpha\int_0^\infty V^*\left[(I+g_m-s)^+\right]\rho(s)\,ds. \tag{54}$$

Taking expectation and liminf as $m\to\infty$ on both sides of (54), from (53) we obtain

$$\liminf_{m\to\infty}\alpha\int_0^\infty V_{m-1}\left[(I+g_m-s)^+\right]\rho_m(s)\,ds = \liminf_{m\to\infty}\alpha\int_0^\infty V^*\left[(I+g_m-s)^+\right]\rho(s)\,ds \ge \alpha\int_0^\infty V^*\left[(I+g(I)-s)^+\right]\rho(s)\,ds,$$

where the last inequality follows by applying Fatou's lemma and because the function $q \mapsto V^*\left[(I+q-s)^+\right]$ is continuous (recall that $V^*$ is convex). Hence, taking expectation and liminf in (52), we obtain

$$c(I, g(I)) + \alpha\int_0^\infty V^*\left[(I+g(I)-s)^+\right]\rho(s)\,ds \le V^*(I), \quad I\in\mathcal{I}. \tag{55}$$

As $I$ was arbitrary and, by (18), the reverse inequality always holds, equality holds in (55) for all $I\in\mathcal{I}$. To conclude, standard arguments from the stochastic control literature (see, e.g., [10]) show that the policy $\gamma = \{g\}$ is optimal. ∎


7. Concluding remarks

In this chapter we have introduced an estimation and control procedure for inventory systems in which the density of the demand is unknown to the inventory manager. Specifically, we have proposed a density estimation method defined by projection onto a suitable set of densities which, combined with control schemes tailored to the inventory system, yields a procedure to construct optimal ordering policies.

A point to highlight is that our results cover very general inventory scenarios: state and control spaces either countable or uncountable, possibly unbounded costs, and finite or infinite inventory capacity. This generality entailed the need to develop new estimation and control techniques, accompanied by a suitable mathematical analysis. For example, the simple fact of considering possibly unbounded costs led us to formulate a density estimation method related to the weight function $W$, which, in turn, defines the normed linear space $B_W$ (see (15)), all this through the projection estimator. Observe that if the cost function $c$ is bounded, we can take $W \equiv 1$, in which case $\|\cdot\| = \|\cdot\|_{L_1}$ (see (19) and (25)); thus, any $L_1$-consistent density estimator $\rho_t$ can be used for the construction of optimal ordering policies.

Finally, the theory presented in this chapter lays the foundations for developing estimation and control algorithms in inventory systems under other optimality criteria, for instance, the average cost criterion or discounted criteria with random state-action-dependent discount factors (see [14, 15] and references therein).

References

  1. Arrow KJ, Karlin S, Scarf H. Studies in the Mathematical Theory of Inventory and Production. Stanford, CA: Stanford University Press; 1958
  2. Bensoussan A, Çakanyıldırım M, Sethi SP. Partially observed inventory systems: The case of zero balance walk. SIAM Journal on Control and Optimization. 2007;46:176-209
  3. Bensoussan A, Çakanyıldırım M, Minjárez-Sosa JA, Royal A, Sethi SP. Inventory problems with partially observed demands and lost sales. Journal of Optimization Theory and Applications. 2008;136:321-340
  4. Bensoussan A, Çakanyıldırım M, Minjárez-Sosa JA, Sethi SP, Shi R. Partially observed inventory systems: The case of rain checks. SIAM Journal on Control and Optimization. 2008;47(5):2490-2519
  5. Bensoussan A, Çakanyıldırım M, Minjárez-Sosa JA, Sethi SP, Shi R. An incomplete information inventory model with presence of inventories or backorders as only observations. Journal of Optimization Theory and Applications. 2010;146(3):544-580
  6. Bertsekas DP. Dynamic Programming: Deterministic and Stochastic Models. Englewood Cliffs, NJ: Prentice-Hall; 1987
  7. Beyer D, Cheng F, Sethi SP, Taksar MI. Markovian Demand Inventory Models. New York: Springer; 2008
  8. Dynkin EB, Yushkevich AA. Controlled Markov Processes. New York: Springer-Verlag; 1979
  9. Gordienko EI, Minjárez-Sosa JA. Adaptive control for discrete-time Markov processes with unbounded costs: Discounted criterion. Kybernetika. 1998;34:217-234
  10. Hernández-Lerma O, Lasserre JB. Discrete-Time Markov Control Processes: Basic Optimality Criteria. New York: Springer-Verlag; 1996
  11. Hernández-Lerma O, Lasserre JB. Further Topics on Discrete-Time Markov Control Processes. New York: Springer-Verlag; 1999
  12. Hilgert N, Minjárez-Sosa JA. Adaptive policies for time-varying stochastic systems under discounted criterion. Mathematical Methods of Operations Research. 2001;54(3):491-505
  13. Minjárez-Sosa JA. Approximation and estimation in Markov control processes under discounted criterion. Kybernetika. 2004;40(6):681-690
  14. Minjárez-Sosa JA. Empirical estimation in average Markov control processes. Applied Mathematics Letters. 2008;21:459-464
  15. Minjárez-Sosa JA. Markov control models with unknown random state-action-dependent discount factors. TOP. 2015;23:743-772
