Open access peer-reviewed chapter

Methods of Conditionally Optimal Forecasting for Stochastic Synergetic CALS Technologies

Written By

Igor N. Sinitsyn and Anatoly S. Shalamov

Submitted: 01 February 2022 Reviewed: 11 February 2022 Published: 18 April 2022

DOI: 10.5772/intechopen.103657

From the Edited Volume

Time Series Analysis - New Insights

Edited by Rifaat Abdalla, Mohammed El-Diasty, Andrey Kostogryzov and Nikolay Makhutov


Abstract

Problems of optimal, suboptimal, and conditionally optimal filtering and forecasting in product and staff subsystems under background noise in synergetic organizational-technical-economical systems (SOTES) are considered. For highly available systems it is now very important to create basic systems-engineering principles, approaches, and information technologies (IT) for SOTES operating in modern spontaneous markets against the background of an inertially developing world economic crisis and weakening global market relations under intensified competition and counteraction. Big enterprises need such IT because of essential local and systematic economic losses. It is necessary to form general approaches for the estimation of stochastic processes and parameters in SOTES under background noise. The following notions are introduced: a special observation system SOTES-O with its own organization-product resources, and a special SOTES acting as a noise source (SOTES-N) whose information enters as noise. A conception of the SOTES structure for systems of technical, staff, and financial support is developed. Linear, linear with parametric noises, and nonlinear stochastic (discrete and hybrid) equations describing the organization-production block (OPB) for three types of SOTES with their planning-economical estimating divisions are worked out. SOTES-O is described by two interconnected subsystems: a SOTES state sensor and an OPB supporting the sensor with necessary resources. After a short survey of modern modeling, the basic sub- and conditionally optimal filtering and forecasting algorithms and IT for typical SOTES are given. The influence of OTES-N noise on the rules and functional indexes of the subsystems accompanying life-cycle production, together with its filtering and forecasting, is considered. Experimental software tools for modeling and forecasting the cost and technical readiness of aircraft parks are developed.

Keywords

  • sub- and conditionally optimal filtering and forecasting (COF and COFc)
  • continuous acquisition logic support (CALS)
  • organizational-technical-economical systems (OTES)
  • probability modeling
  • synergetic OTES (SOTES)

1. Introduction

Stochastic continuous acquisition logic support (CALS) is the basis of integrated logistic support (ILS) in the presence of noises and stochastic factors in organizational-technical-economic systems (OTES). The stochastic CALS methodology was first developed in [1, 2, 3, 4, 5]. According to contemporary notions, ILS in the broad sense, being the basis of CALS, represents a system of scientific, design-project, organizational-technical, manufacturing, and informational-management technologies, means, and measures applied during the life cycle (LC) of high-quality manufactured products (MP) in order to obtain the maximal required availability level of quality and minimal technical-exploitational costs of the product.

Contemporary standards, being the vanguard of CALS methodology, do not adequately answer the necessary purposes. The CALS standards have debatable achievements and the following essential shortcomings:

  • the informational-technical-economic models are not dynamical;

  • the integrated database for logistic support analysis is, on the one hand, redundant and, on the other hand, does not contain the information necessary for complex end-to-end LC cost estimation according to modern decision-support algorithms;

  • the computational algorithms for the various LC stages are simplified; they do not permit forecasting with the necessary accuracy and do not work under internal and external noises and stochastic factors.

So the ILS standards do not provide full realization of the advantages of modern and perspective information technologies (IT), including staff structure, in the field of stochastic modeling and estimation of two interconnected spheres: the techno-sphere (techniques and technologies) and the social sphere.

These stochastic systems (StS) form a new class of systems: OTES-CALS systems. Such systems are destined for the production and realization of various services, including engineering and other categories of work providing exploitation, after-sale MP support and repair, and staff, medical, economical, and financial support of all processes. The newly developed approach is based on new stochastic modeling and estimation approaches. Nowadays such IT are widely used in technical applications of complex systems functioning in stochastic media.

Estimation IT is based on: (1) a model of the OTES; (2) a model of the OTES-O (observation system); (3) a model of the OTES-N (noise support); (4) criteria, estimation methods, and models. For new generations of synergetic OTES (SOTES) the measuring model and the organization-production block (OPB) in the OTES-O are separated.

Synergetics, being an interdisciplinary science, is based on the principle of self-organization of open nonlinear dissipative and nonconservative systems. According to [6, 7], in equilibrium all system parameters are stable, and variations in them arise due to minimal deviations of some control parameters. As a result, the system begins to move away from the equilibrium state with increasing velocity. Further, the instability process leads to total chaos, and as a result bifurcation appears. After that a new regime gradually establishes itself, and so on.

The existence of a big amount of freely interacting elements and subsystems of various levels is the basic principle of self-organization. One of the inalienable properties of a synergetic system is the existence of "attractors". An attractor is defined as an attraction set (manifold) in phase space that is the aim of all nonlinear trajectories of a moving initial point (IP). These manifolds are time-invariant and are defined from the equilibrium equation. Invariant manifolds are also determined as constraints of a non-conservative synergetic system. In synergetic control theory [8] the transition is made from natural, unsupervised behavior according to the algorithms of dissipative structures to controlled motion of the IP along artificially introduced demanded invariant manifolds. As the control object of a synergetic system is always nonlinear, its dynamics may be described by nonlinear differential equations. In the case of big dimension, order parameters are introduced by revealing the slowest variables and the quicker subordinate variables. This approach in hierarchical synergetic systems is called the subordination principle. So at the lower hierarchy levels processes go with maximal velocity. Invariant manifolds are connected with the slow dynamics.

Section 2 is devoted to probabilistic modeling problems in typical StS. Special attention is paid to hybrid systems, as well as to such specific StS as linear systems, linear systems with Gaussian parametric noises, and nonlinear systems reducible to quasilinear ones by the normal approximation method. For quick off-line and on-line applications the theory of conditionally optimal forecasting in typical StS is developed in Section 3. In Section 4 the basic off-line algorithms of probability modeling in SOTES are presented. The basic conditionally optimal filtering and forecasting quick off-line and on-line algorithms for SOTES are given in Section 5. Peculiarities of new SOTES generalizations are described in Section 6. A simple example illustrating the influence of SOTES-N noise on the rules and functional indexes of the subsystems accompanying life-cycle production, together with its filtering and forecasting, is presented in Section 7. Experimental software tools for forecasting the cost and technical readiness of aircraft parks are developed.


2. Probabilistic modeling in StS

Let us consider basic mathematical models of stochastic OTES:

  • continuous models defined by stochastic differential equations;

  • discrete models defined by stochastic difference equations;

  • hybrid models as a mixture of difference and differential equations.

Probabilistic analytical modeling of stochastic systems (StS) is based on the solution of deterministic evolutionary equations (Fokker-Planck-Kolmogorov, Pugachev, Feller-Kolmogorov) for the one- and finite-dimensional distributions. For stochastic equations of high dimension the solution of the evolutionary equations meets principal computational difficulties.

In practice, taking into account the specific properties of the StS, it is possible to design rather simple stochastic models using a priori data about the StS structure, parameters, and stochastic factors. It is very important to design, for the different stages of the life cycle (LC), models based on the available information. At the last LC stages we need hybrid stochastic models.

Let us consider the basic general and specific stochastic models and the basic algorithms of probabilistic analytical modeling. Special attention will be paid to algorithms based on the normal approximation, statistical linearization, and equivalent linearization methods. For principally nonlinear non-Gaussian StS the corresponding parametrization methods may be recommended [9].

2.1 Continuous StS

Continuous stochastic models of systems involve the action of various random factors. When using models described by differential equations, the inclusion of random factors leads to equations which contain random variables.

The differential equations for a StS (more precisely, for a stochastic model of a system) must in the general case be replaced by the equations [9, 10]

Ż = F(Z, x, t),  Y = G(Z, t),  (1)

where F(z, x, t) and G(z, t) are random functions of the p-dimensional vector z, the n-dimensional vector x, and time t (as a rule G is independent of x). In consequence of the randomness of the right-hand sides of Eq. (1), and perhaps also of the initial value of the state vector Z0 = Z(t0), the state vector of the system Z and the output Y represent random variables at any fixed time moment t. This is the reason to denote them by capital letters, as well as the random functions in the right-hand sides of Eq. (1). The state vector of the system Z(t) and its output Y(t), considered as functions of time t, represent random functions of time (in the general case vector random functions). In every specific trial the random functions F(z, x, t) and G(z, t) are realized in the form of some functions f(z, x, t) and g(z, t), and these realizations determine the corresponding realizations z(t), y(t) of the state vector Z(t) and the output Y(t), satisfying the differential equations (which are realizations of Eq. (1))

ż = f(z, x, t),  y = g(z, t).

Thus we come to the necessity to study the differential equations with random functions in the right-hand sides.

In practice the randomness of the right-hand sides of the differential equations usually arises from the fact that they represent known functions, some of whose arguments are considered as random variables or as random functions of time t, and perhaps of the state and the output of the system. In the latter case these functions are usually replaced by random functions of time alone, obtained by assuming that their arguments Z and Y are known functions of time corresponding to the nominal regime of system functioning. In practical problems such an assumption usually provides sufficient accuracy.

So we may restrict ourselves to the case where all uncertain variables in the right-hand sides of differential equations may be considered as random functions of time. Then Eq. (1) may be written in the form

Ż = f(Z, x, N1(t), t),  Y = g(Z, N2(t), t),  (2)

where f and g are known functions whose arguments include the random functions of time N1(t) and N2(t). The initial state vector of the system Z0 in practical problems is always a random variable independent of the random functions N1(t) and N2(t) (independent of the random disturbances acting on the system).

Every realization [n1(t)ᵀ n2(t)ᵀ]ᵀ of the random function [N1(t)ᵀ N2(t)ᵀ]ᵀ determines the corresponding realizations f(z, x, n1(t), t), g(z, n2(t), t) of the functions f(z, x, N1(t), t), g(z, N2(t), t), and in accordance with this Eq. (2) determines the respective realizations z(t) and y(t) of the state vector of the system Z(t) and its output Y(t).

Following [9, 10] let us consider the differential equation

dX/dt = a(X, t) + b(X, t)V,  (3)

where a(x, t) and b(x, t) are functions mapping R^p × R into R^p and R^{p×q}, respectively. Eq. (3) is called a stochastic differential equation if the (generalized) random function V(t) represents a white noise in the strict sense. Let X0 be a random vector of the same dimension as X(t). Eq. (3) with the initial condition X(t0) = X0 determines a stochastic process (StP) X(t).

In order to give an exact sense to Eq. (3) and to the above statement we formally integrate Eq. (3) from t0 to t with the initial condition X(t0) = X0. As a result we obtain

X(t) = X0 + ∫_{t0}^{t} a(X(τ), τ)dτ + ∫_{t0}^{t} b(X(τ), τ)V(τ)dτ,

where the first integral represents a mean square (m.s.) integral. Introducing the StP with independent increments W(t), whose derivative is the white noise V(t), we rewrite the previous equation in the form

X(t) = X0 + ∫_{t0}^{t} a(X(τ), τ)dτ + ∫_{t0}^{t} b(X(τ), τ)dW(τ).  (4)

This equation has the exact sense. Stochastic differential Eq. (3) or the equivalent equation

dX = a(X, t)dt + b(X, t)dW  (5)

with the initial condition X(t0) = X0 represents a concise form of Eq. (4).

Eq. (4), in which the second integral represents a stochastic Ito integral, is called a stochastic Ito integral equation, and the corresponding differential Eq. (3) or (5) is called a stochastic Ito differential equation.

A random process X(t) satisfying Eq. (4), in which the integrals represent the m.s. limits of the corresponding integral sums, is called a mean square or, shortly, an m.s. solution of the stochastic integral Eq. (4) and of the corresponding stochastic differential Eq. (3) or (5) with the initial condition X(t0) = X0.

If the integrals in Eq. (4) exist for every realization of the StP W(t) and X(t), and equality (4) is valid for every realization, then the random process X(t) is called a solution in realizations of Eq. (4) and of the corresponding stochastic differential Eqs. (3) and (5) with the initial condition X(t0) = X0.

Stochastic Ito differential Eqs. (3) and (5) with the initial condition X(t0) = X0, where X0 is a random variable independent of the future values of the white noise V(s), s > t0 (of the future increments W(s) − W(t), s > t ≥ t0, of the process W), determine a Markov random process.
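As a numerical illustration of Eqs. (3)-(5), the sketch below integrates a scalar Ito SDE by the Euler-Maruyama scheme. The linear drift a(x, t) = −x and constant diffusion b(x, t) = 0.5 are illustrative assumptions, not taken from the chapter; for this choice the stationary variance is 0.5²/2 = 0.125.

```python
import numpy as np

def euler_maruyama(a, b, x0, t0, t1, n_steps, rng):
    """Integrate dX = a(X, t)dt + b(X, t)dW by the Euler-Maruyama scheme."""
    dt = (t1 - t0) / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    t = t0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))   # increment of the Wiener process W
        x[k + 1] = x[k] + a(x[k], t) * dt + b(x[k], t) * dW
        t += dt
    return x

# Illustrative (assumed) linear model: a(x, t) = -x, b(x, t) = 0.5,
# whose stationary variance is 0.5**2 / 2 = 0.125.
rng = np.random.default_rng(0)
paths = np.array([euler_maruyama(lambda x, t: -x, lambda x, t: 0.5,
                                 1.0, 0.0, 5.0, 500, rng)[-1]
                  for _ in range(2000)])
print(paths.mean(), paths.var())
```

For this linear case the sample moments of the terminal ensemble can be checked against the known stationary values, which is a convenient sanity test before applying the scheme to nonlinear models.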

In the case of W being a vector StP with independent increments, probabilistic modeling of the one- and n-dimensional characteristic functions g1 = E exp{iλᵀZ(t)} and gn = E exp{i Σ_{k=1}^{n} λkᵀZ(t_k)} and of the densities f1 and fn is based on the following integro-differential Pugachev equations:

∂g1(λ; t)/∂t = (2π)^{−p} ∫∫ [iλᵀa(z, t) + χ(b(z, t)ᵀλ; t)] exp{i(λ − μ)ᵀz} g1(μ; t)dμdz,  (6)
∂gn(λ1, …, λn; t1, …, tn)/∂tn = (2π)^{−np} ∫∫ [iλnᵀa(zn, tn) + χ(b(zn, tn)ᵀλn; tn)] × exp{i Σ_{k=1}^{n} (λk − μk)ᵀzk} gn(μ1, …, μn; t1, …, tn)dμ1…dμn dz1…dzn;  (7)
∂f1(z; t)/∂t = (2π)^{−p} ∫∫ [iλᵀa(ζ, t) + χ(b(ζ, t)ᵀλ; t)] exp{iλᵀ(ζ − z)} f1(ζ; t)dζdλ,  (8)
∂fn(z1, …, zn; t1, …, tn)/∂tn = (2π)^{−np} ∫∫ [iλnᵀa(ζn, tn) + χ(b(ζn, tn)ᵀλn; tn)] × exp{i Σ_{k=1}^{n} λkᵀ(ζk − zk)} fn(ζ1, …, ζn; t1, …, tn)dζ1…dζn dλ1…dλn,  (9)
f1(z; t0) = f0(z),  (10)

where i is the imaginary unit, h1(μ; t) is the one-dimensional characteristic function of the process with independent increments W(t), and

χ(μ; t) = [1/h1(μ; t)] ∂h1(μ; t)/∂t,  (11)
fn(z1, …, z_{n−1}, zn; t1, …, t_{n−1}, t_{n−1}) = f_{n−1}(z1, …, z_{n−1}; t1, …, t_{n−1})δ(zn − z_{n−1}).  (12)

For the Wiener StP W with intensity matrix ν(t) we use the Fokker-Planck-Kolmogorov equations:

∂fn(z; t)/∂t = −(∂/∂z)ᵀ[a(z, t)fn(z; t)] + ½ tr{(∂/∂z)(∂/∂z)ᵀ[b(z, t)ν(t)b(z, t)ᵀfn(z; t)]}  (13)

with the initial conditions (12).

2.2 Discrete StS

For a discrete vector StP the regression and autoregression StS are

X_{k+1} = ω_k(X_k, V_k),  k = 1, 2, …,  (14)
X_{k+1} = a_k(X_k) + b_k(X_k)V_k,  k = 1, 2, ….  (15)

The equations for the one- and n-dimensional densities and characteristic functions are as follows:

f_k(x) = (2π)^{−p} ∫ exp{−iλᵀx} g_k(λ)dλ,  g_k(λ) = E exp{iλᵀX_k},  (16)
f_{k1,…,kn}(x1, …, xn) = (2π)^{−np} ∫ exp{−i Σ_{h=1}^{n} λhᵀxh} g_{k1,…,kn}(λ1, …, λn)dλ1…dλn,  (17)
g_{k1,…,kn}(λ1, …, λn) = E exp{i Σ_{l=1}^{n} λlᵀX_{kl}},  (18)
g_{k+1}(λ) = E exp{iλᵀω_k(X_k, V_k)} = ∫∫ exp{iλᵀω_k(x, υ)} f_k(x)h_k(υ)dxdυ,  (19)
g_{k1,…,kn}(λ1, …, λn) = E exp{i Σ_{l=1}^{n−1} λlᵀX_{kl} + iλnᵀω_{kn}(X_{kn}, V_{kn})} = ∫∫ exp{i Σ_{h=1}^{n−1} λhᵀxh + iλnᵀω_{kn}(xn, υn)} f_{k1,…,kn}(x1, …, xn)h_{kn}(υn)dx1…dxn dυn.  (20)

Here E denotes mathematical expectation and h_k is the characteristic function of V_k;

g_{k1,…,k_{n−1},k_{n−1}}(λ1, …, λn) = g_{k1,…,k_{n−1}}(λ1, …, λ_{n−1} + λn),  g_{k1,…,kn}(λ1, …, λn) = g_{ks1,…,ksn}(λ_{s1}, …, λ_{sn}),  (21)

where s1, …, sn is the permutation of 1, …, n such that k_{s1} < k_{s2} < … < k_{sn}.

In the case of the autoregression StS (15) the basic characteristic functions are given by the equations:

g_{k+1}(λ) = E exp{iλᵀa_k(X_k) + iλᵀb_k(X_k)V_k} = ∫∫ exp{iλᵀa_k(x) + iλᵀb_k(x)υ} f_k(x)h_k(υ)dxdυ = E[exp{iλᵀa_k(X_k)} h_k(b_k(X_k)ᵀλ)],  (22)
g_{k1,…,kn}(λ1, …, λn) = E exp{i Σ_{l=1}^{n−1} λlᵀX_{kl} + iλnᵀa_{kn}(X_{kn}) + iλnᵀb_{kn}(X_{kn})V_{kn}} = E[exp{i Σ_{l=1}^{n−1} λlᵀX_{kl} + iλnᵀa_{kn}(X_{kn})} h_{kn}(b_{kn}(X_{kn})ᵀλn)].  (23)
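A Monte Carlo sketch of the autoregression model (15): simulating an ensemble of paths gives sample moments that can be checked against the exact stationary values. The scalar coefficients a_k(x) = 0.5x + 1 and b_k(x) = 0.2 are illustrative assumptions.

```python
import numpy as np

def simulate_autoregression(a, b, x1, n_steps, n_paths, rng):
    """Monte Carlo ensemble for X_{k+1} = a_k(X_k) + b_k(X_k) V_k (Eq. 15)."""
    X = np.full(n_paths, float(x1))
    for k in range(n_steps):
        V = rng.normal(size=n_paths)        # i.i.d. standard Gaussian noise V_k
        X = a(X, k) + b(X, k) * V
    return X

# Illustrative (assumed) stable scalar model: a_k(x) = 0.5x + 1, b_k(x) = 0.2.
# Exact stationary moments: mean = 1/(1 - 0.5) = 2, var = 0.2**2/(1 - 0.25).
rng = np.random.default_rng(1)
X = simulate_autoregression(lambda x, k: 0.5 * x + 1.0,
                            lambda x, k: 0.2, 0.0, 60, 5000, rng)
print(X.mean(), X.var())
```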

2.3 Hybrid continuous and discrete StS

When the system described by Eq. (2) is automatically controlled, the function which determines the goal of control is measured with random errors, and the control system components forming the required input x are always subject to noises, i.e. to random disturbances. The equations forming the required input and the real input, including the additional variables necessary to transform these equations into first-order equations, may be written in the form

Ẋ = φ(X, U, t),  U̇ = ψ(X, Z, U, N3(t), t),  (24)

where U is the vector composed of the required input and all the auxiliary variables, and N3(t) is some random function of time t (in the general case a vector random function). Writing down these equations we have taken into account that, owing to the action of the noises described by the random function N3(t), the vector U and the input X represent random functions of time, and in accordance with this we denote them by capital letters. These equations together with the first of Eqs. (2) form the set of equations

Ż = f(Z, X, N1(t), t),  Ẋ = φ(X, U, t),  U̇ = ψ(X, Z, U, N3(t), t).

These equations may be written in the form of one equation determining the extended state vector of the system Z1 = [Zᵀ Xᵀ Uᵀ]ᵀ:

Ż1 = f1(Z1, N4(t), t),

where N4(t) = [N1(t)ᵀ N3(t)ᵀ]ᵀ and

f1(Z1, N4(t), t) = [f(Z, X, N1(t), t)ᵀ  φ(X, U, t)ᵀ  ψ(X, Z, U, N3(t), t)ᵀ]ᵀ.

As a result, omitting the indices of Z1 and f1, we replace the set of Eqs. (2) and (24) by the equations

Ż = f(Z, N4(t), t),  Y = g(Z, N2(t), t).

In practical problems the random functions N1(t) and N2(t) are practically always independent. But the random function N3(t) depends on N1(t) and N2(t) due to the fact that the function h(Y, t) = h(g(Z, N2(t), t), t) and its total derivative with respect to time t enter into Eq. (24). Therefore the random functions N2(t) and N4(t) are dependent. Introducing the composite vector random function N(t) = [N1(t)ᵀ N2(t)ᵀ N3(t)ᵀ]ᵀ we rewrite the equations obtained in the form

Ż = f(Z, N(t), t),  Y = g(Z, N(t), t).  (25)

Thus in the case of an automatically controlled system described by Eq. (2), after coupling Eq. (2) with the equations forming the required and the real inputs, we come to equations of the form (25) containing the random function N(t).

If a control StS is based on digital computers, we decompose the extended state vector Z into two subvectors Z′, Z″, Z = [Z′ᵀ Z″ᵀ]ᵀ, one of which, Z′, represents a continuously varying random function, while the other, Z″, is a step random function varying by jumps at prescribed time moments t_k, k = 0, 1, 2, …. Then introducing the random function

Z″(t) = Σ_{k=0}^{∞} Z″_k 1_{A_k}(t)

and putting Z″_k = Z″(t_k), k = 0, 1, 2, …, we get the set of equations describing the evolution of the extended state vector of the controlled system

Ż′ = f(Z, N(t), t),  Z″_{k+1} = φ_k(Z_k, N_k),  (26)

where N_k, k = 0, 1, 2, …, are some random variables and N(t) is some random function.

For hybrid StS (HStS) let us now consider the case of a discrete-continuous system whose state vector Z = [Z′ᵀ Z″ᵀ]ᵀ (extended in the general case) is determined by the set of equations

Ż′ = a(Z, t) + b(Z, t)V,  Z″(t) = Σ_{k=0}^{∞} Z″_k 1_{A_k}(t),  Z″_{k+1} = ω_k(Z_k, V_k),  (27)

where Z_k = [Z′_kᵀ Z″_kᵀ]ᵀ = Z(t_k) is the value of Z(t) at t = t_k, k = 0, 1, 2, …; a, b, ω_k are functions of the arguments indicated; 1_{A_k}(t) is the indicator of the interval A_k = [t_k, t_{k+1}), k = 0, 1, 2, …; V is a white noise in the strict sense; and V_k is a sequence of independent random variables independent of the white noise V. The one-dimensional characteristic function h1(μ; t) of the process with independent increments W(t), whose weak m.s. derivative is the white noise V, and the distributions of the random variables V_k are assumed known.

Introducing the random processes

Z‴(t) = Σ_{k=0}^{∞} Z′_k 1_{A_k}(t),  Z̄(t) = [Z′(t)ᵀ Z″(t)ᵀ Z‴(t)ᵀ]ᵀ,

we derive in the same way as before the equation for the one-dimensional characteristic function

g1(λ; t) = E exp{iλᵀZ̄(t)} = E exp{iλ′ᵀZ′(t) + iλ″ᵀZ″(t) + iλ‴ᵀZ‴(t)} = E exp{iλ′ᵀZ′(t) + iλ″ᵀZ″_k + iλ‴ᵀZ′_k},  t ∈ A_k,

of the StP Z̄(t):

∂g1(λ; t)/∂t = E{[iλ′ᵀa(Z(t), t) + χ(b(Z(t), t)ᵀλ′; t)] exp{iλᵀZ̄(t)}}.  (28)

Taking the initial moment t = t0, the initial condition for Eq. (28) is

g1(λ; t0) = E exp{i(λ′ + λ‴)ᵀZ′_0 + iλ″ᵀZ″_0} = g0([(λ′ + λ‴)ᵀ λ″ᵀ]ᵀ),  (29)

where g0(ρ) is the characteristic function of the initial value Z0 = Z(t0) of the process Z(t).

At the moment t_k the value of g1(λ; t) is evidently equal to

E exp{i(λ′ + λ‴)ᵀZ′_k + iλ″ᵀZ″_k},

i.e. to the value g_k([(λ′ + λ‴)ᵀ λ″ᵀ]ᵀ) of the characteristic function g_k(ρ) of the random variable Z_k = [Z′_kᵀ Z″_kᵀ]ᵀ. If the function χ(μ; t) is a continuous function of t at any μ, then g1(λ; t) tends to

E exp{iλ′ᵀZ′_{k+1} + iλ″ᵀZ″_k + iλ‴ᵀZ′_k}

as t → t_{k+1}, i.e. to the joint characteristic function g_k(λ′, λ″, λ‴) of the random variables Z′_{k+1}, Z″_k, Z′_k:

g1(λ; t_{k+1} − 0) = lim_{t→t_{k+1}} g1(λ; t) = g_k(λ′, λ″, λ‴).

At the moment t_{k+1}, g1(λ; t) changes its value by a jump and becomes equal to

E exp{i(λ′ + λ‴)ᵀZ′_{k+1} + iλ″ᵀZ″_{k+1}} = g_{k+1}([(λ′ + λ‴)ᵀ λ″ᵀ]ᵀ).

To evaluate this we substitute here the expression of Z″_{k+1} from the last of Eqs. (27). Then we get

g1(λ; t_{k+1}) = E exp{i(λ′ + λ‴)ᵀZ′_{k+1} + iλ″ᵀω_k(Z_k, V_k)}.  (30)

Owing to the independence of the sequence of random variables V_k of the white noise V, and to the independence of V_k of V0, V1, …, V_{k−1}, the random variables Z_k and Z′_{k+1} are independent of V_k. Hence the expectation on the right-hand side of Eq. (30) is completely determined by the known distribution of the random variable V_k and by the joint characteristic function g_k(λ′, λ″, λ‴) of the random variables Z′_{k+1}, Z″_k, Z′_k, i.e. by g1(λ; t_{k+1} − 0). So Eq. (28) with the initial condition (29) and formula (30) determine the evolution of the one-dimensional characteristic function g1(λ; t) of the process Z̄(t) = [Z′(t)ᵀ Z″(t)ᵀ Z‴(t)ᵀ]ᵀ and its jump-wise increments at the moments t_k, k = 1, 2, ….
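The discrete-continuous dynamics (27) can also be simulated directly: Euler-Maruyama integration of the continuous component Z′ inside each interval A_k, with the step component Z″ updated by ω_k at the sampling moments. The scalar model below (Z′ relaxing toward the held value Z″, which is then resampled around the current Z′) is an illustrative assumption, not a model from the chapter.

```python
import numpy as np

def simulate_hybrid(a, b, omega, z1_0, z2_0, t_grid, substeps, rng):
    """Simulate the discrete-continuous system (27): Euler-Maruyama for the
    continuous part Z' inside each interval A_k, and the jump update
    Z''_{k+1} = omega(Z'(t_{k+1}), Z''_k, V_k) at the sampling moments."""
    z1, z2 = float(z1_0), float(z2_0)
    for k in range(len(t_grid) - 1):
        t = t_grid[k]
        dt = (t_grid[k + 1] - t_grid[k]) / substeps
        for _ in range(substeps):
            z1 += a(z1, z2, t) * dt + b(t) * rng.normal(0.0, np.sqrt(dt))
            t += dt
        Vk = rng.normal()                   # discrete noise V_k, independent of V
        z2 = omega(z1, z2, Vk)              # jump of the step component
    return z1, z2

# Illustrative (assumed) model: Z' relaxes toward the held value Z'';
# Z'' is resampled around the current Z' at each sampling moment.
rng = np.random.default_rng(2)
z1, z2 = simulate_hybrid(lambda z1, z2, t: -(z1 - z2), lambda t: 0.1,
                         lambda z1, z2, v: 0.9 * z1 + 0.05 * v,
                         1.0, 0.0, np.linspace(0.0, 10.0, 21), 50, rng)
print(z1, z2)
```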

In the case of the discrete-continuous HStS whose state vector is determined by the equations

Ż′ = a(Z, t) + b(Z, t)V,  (31)

with the remaining relations as in (27), we get in the same way the equation for the n-dimensional characteristic function gn(λ1, …, λn; t1, …, tn) of the random process Z̄(t) = [Z′(t)ᵀ Z″(t)ᵀ Z‴(t)ᵀ]ᵀ:

∂gn(λ1, …, λn; t1, …, tn)/∂tn = E{[iλ′nᵀa(Z(tn), tn) + χ(b(Z(tn), tn)ᵀλ′n; tn)] exp{iλ1ᵀZ̄(t1) + … + iλnᵀZ̄(tn)}},  (32)

and the formula for the value of gn(λ1, …, λn; t1, …, tn) at tn = t_{k+1} > t_{n−1}:

gn(λ1, …, λn; t1, …, t_{n−1}, t_{k+1}) = E exp{iλ1ᵀZ̄(t1) + … + iλ_{n−1}ᵀZ̄(t_{n−1}) + i(λ′n + λ‴n)ᵀZ′_{k+1} + iλ″nᵀω_k(Z_k, V_k)}.  (33)

At the point tn = t_{k+1}, gn(λ1, …, λn; t1, …, tn) changes its value by a jump from

gn(λ1, …, λn; t1, …, t_{n−1}, t_{k+1} − 0) = E exp{iλ1ᵀZ̄(t1) + … + iλ_{n−1}ᵀZ̄(t_{n−1}) + iλ′nᵀZ′_{k+1} + iλ″nᵀZ″_k + iλ‴nᵀZ′_k}

to gn(λ1, …, λn; t1, …, t_{n−1}, t_{k+1}) given by (33).

The right-hand side of (33) is completely determined by the known distribution of the random variable V_k and by the joint characteristic function gn(λ1, …, λn; t1, …, t_{n−1}, t_{k+1} − 0) of the random variables Z̄(t1), …, Z̄(t_{n−1}), Z′_{k+1}, Z″_k, Z′_k. Hence Eq. (32) with the corresponding initial condition and formula (33) determine the evolution and the jump-wise increments of gn(λ1, …, λn; t1, …, tn) at the points t_{k+1} as tn increases starting from the value t_{n−1}.

2.4 Linear StS

For a differential linear StS with W being a StP with independent increments, V = Ẇ,

Ż = aZ + a0 + bV,  (34)

the corresponding equation for the n-dimensional characteristic function is as follows:

∂gn/∂tn = λnᵀa(tn)∂gn/∂λn + [iλnᵀa0(tn) + χ(b(tn)ᵀλn; tn)]gn.  (35)

The explicit formula for the n-dimensional characteristic function is

gn(λ1, …, λn; t1, …, tn) = g0(Σ_{k=1}^{n} u(t_k, t0)ᵀλ_k) exp{i Σ_{k=1}^{n} λ_kᵀ∫_{t0}^{t_k} u(t_k, τ)a0(τ)dτ + Σ_{k=1}^{n} ∫_{t_{k−1}}^{t_k} χ(b(τ)ᵀ Σ_{l=k}^{n} u(t_l, τ)ᵀλ_l; τ)dτ},  n = 1, 2, ….  (36)

Here u = u(t, τ) is the fundamental solution of the equation u̇ = au with the condition u(t, t) = I (the unit n×n matrix).

In the case of a Gaussian white noise V with intensity matrix ν the characteristic function gn is Gaussian:

gn(λ1, …, λn; t1, …, tn) = g0(Σ_{k=1}^{n} u(t_k, t0)ᵀλ_k) exp{i Σ_{k=1}^{n} λ_kᵀ∫_{t0}^{t_k} u(t_k, τ)a0(τ)dτ − ½ Σ_{l,h=1}^{n} λ_lᵀ∫_{t0}^{min(t_l, t_h)} u(t_l, τ)b(τ)ν(τ)b(τ)ᵀu(t_h, τ)ᵀdτ λ_h},  n = 1, 2, ….  (37)
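For the linear StS (34), the mean and covariance implied by the Gaussian characteristic function (37) satisfy the moment equations ṁ = am + a0, K̇ = aK + Kaᵀ + bνbᵀ, which can be integrated numerically. The scalar coefficients below are illustrative assumptions; for a = −1, b = ν = 1 the stationary variance is 0.5.

```python
import numpy as np

def linear_moments(a, a0, b, nu, m0, K0, t1, n_steps):
    """Mean and covariance of the linear StS (34), dZ = (aZ + a0)dt + b dW,
    with Gaussian white noise of intensity nu:
        mdot = a m + a0,   Kdot = a K + K a^T + b nu b^T."""
    a, b = np.atleast_2d(a), np.atleast_2d(b)
    m = np.atleast_1d(np.asarray(m0, dtype=float))
    K = np.atleast_2d(np.asarray(K0, dtype=float))
    dt = t1 / n_steps
    for _ in range(n_steps):                # Euler integration of the moment ODEs
        m = m + (a @ m + a0) * dt
        K = K + (a @ K + K @ a.T + b @ (nu * b.T)) * dt
    return m, K

# Scalar check with assumed values a = -1, a0 = 0, b = nu = 1:
# stationary variance b**2 * nu / 2 = 0.5.
m, K = linear_moments(-1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 10.0, 20000)
print(m[0], K[0, 0])
```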

2.5 Linear StS with the parametric Gaussian noises

In the case of a StS with Gaussian additive and parametric noises described by the equation

Ż = aZ + a0 + (b0 + Σ_{h=1}^{p} b_hZ_h)V,  (38)

we have an infinite set of equations which in this case decomposes into independent sets of equations for the initial moments α_k of each given order:

α̇_k = Σ_{r=1}^{p} k_r[a_{r,0}α_{k−e_r} + Σ_{q=1}^{p} a_{r,e_q}α_{k+e_q−e_r}] + ½ Σ_{r=1}^{p} k_r(k_r − 1)[σ_{rr,0}α_{k−2e_r} + Σ_{q=1}^{p} σ_{rr,e_q}α_{k+e_q−2e_r} + Σ_{q,u=1}^{p} σ_{rr,e_q+e_u}α_{k+e_q+e_u−2e_r}] + Σ_{r=2}^{p} Σ_{s=1}^{r−1} k_rk_s[σ_{rs,0}α_{k−e_r−e_s} + Σ_{q=1}^{p} σ_{rs,e_q}α_{k+e_q−e_r−e_s} + Σ_{q,u=1}^{p} σ_{rs,e_q+e_u}α_{k+e_q+e_u−e_r−e_s}],  (39)
a_{r,0} = a_{0,r},  a_{r,e_q} = a_{rq}  (k1, …, kp = 0, 1, 2, …; Σk = 1, 2, …).

The corresponding equations of correlational theory are as follows:

ṁ = am + a0,  (40)
K̇ = aK + Kaᵀ + b0νb0ᵀ + Σ_{h=1}^{p}(b_hνb0ᵀ + b0νb_hᵀ)m_h + Σ_{h,l=1}^{p} b_hνb_lᵀ(m_hm_l + k_{hl}),  (41)

where k_{hl} is the covariance of the components Z_h and Z_l of the vector Z (h, l = 1, …, p). Eq. (41) with the initial condition K(t0) = K0, k_{pq}(t0) = k_{pq,0}, completely determines the covariance matrix K(t) of the vector Z(t) at any time moment t after finding its expectation m.
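A scalar sketch of the correlational Eqs. (40)-(41) with a single parametric-noise term (p = 1): the mean and variance equations close on themselves and can be integrated by the Euler method. All numerical coefficients are illustrative assumptions; for the chosen values the stationary point is m = 1, K = 0.25.

```python
def parametric_moments(a, a0, b0, b1, nu, m0, K0, t1, n_steps):
    """Scalar form (p = 1) of the moment Eqs. (40)-(41) for the StS with
    parametric noise dZ = (aZ + a0)dt + (b0 + b1 Z)dW:
        mdot = a m + a0
        Kdot = 2 a K + nu * (b0**2 + 2*b0*b1*m + b1**2 * (m**2 + K))."""
    m, K = float(m0), float(K0)
    dt = t1 / n_steps
    for _ in range(n_steps):
        m_new = m + (a * m + a0) * dt
        K += (2 * a * K + nu * (b0 ** 2 + 2 * b0 * b1 * m
                                + b1 ** 2 * (m ** 2 + K))) * dt
        m = m_new
    return m, K

# Assumed illustrative parameters; the stationary point is m = 1, K = 0.25.
m, K = parametric_moments(a=-1.0, a0=1.0, b0=0.5, b1=0.2, nu=1.0,
                          m0=0.0, K0=0.0, t1=20.0, n_steps=40000)
print(m, K)
```

Note how the parametric term b1 makes the stationary variance larger than the purely additive value b0²ν/(2|a|) = 0.125, since the noise intensity grows with the state.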

For discrete StS with Gaussian parametric noises the correlational equations may be presented in the following form:

X_{k+1} = a_kX_k + a_{0k} + (b_{0k} + Σ_{j=1}^{p} b_{jk}X_{kj})V_k,  (42)
m_{k+1} = a_km_k + a_{0k},  m_k = EX_k,  (43)
K_{k+1} = a_kK_ka_kᵀ + b_{0k}ν_kb_{0k}ᵀ + Σ_{j=1}^{p}(b_{0k}ν_kb_{jk}ᵀ + b_{jk}ν_kb_{0k}ᵀ)m_{kj} + Σ_{j=1}^{p} Σ_{h=1}^{p} b_{jk}ν_kb_{hk}ᵀ(m_{kj}m_{kh} + k_{kjh}),  K_1 = E(X_1 − m_1)(X_1 − m_1)ᵀ,  (44)
K_{j,h+1} = K_{jh}a_hᵀ,  K_{jj} = K_j.  (45)
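The discrete recursions (43)-(44) in scalar form (p = 1), iterated to the stationary regime; the parameters are illustrative assumptions.

```python
def discrete_moments(a, a0, b0, b1, nu, m1, K1, n_steps):
    """Scalar form (p = 1) of the recursions (43)-(44) for
    X_{k+1} = a X_k + a0 + (b0 + b1 X_k) V_k,  E V_k = 0, E V_k**2 = nu:
        m_{k+1} = a m_k + a0
        K_{k+1} = a**2 K_k + nu*(b0**2 + 2*b0*b1*m_k + b1**2*(m_k**2 + K_k))."""
    m, K = float(m1), float(K1)
    for _ in range(n_steps):
        K = a ** 2 * K + nu * (b0 ** 2 + 2 * b0 * b1 * m
                               + b1 ** 2 * (m ** 2 + K))   # uses the old m_k
        m = a * m + a0
    return m, K

# Assumed illustrative parameters; the stationary point is m = 2, K = 0.09/0.74.
m, K = discrete_moments(a=0.5, a0=1.0, b0=0.1, b1=0.1, nu=1.0,
                        m1=0.0, K1=0.0, n_steps=200)
print(m, K)
```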

2.6 Normal approximation method

For StS of high dimension, methods of normal approximation (MNA) are the only ones used in engineering practice. In the case of additive noises, b(x, t) = b0(t), MNA is known as the method of statistical linearization (MSL).

The basic equations of MNA are as follows [9]:

g1(λ; t) ≈ exp{iλᵀm_t − ½λᵀK_tλ},  f1(x; t) ≈ [(2π)^p|K_t|]^{−1/2} exp{−½(x − m_t)ᵀK_t^{−1}(x − m_t)},  (46)
ṁ_t = φ1(m_t, K_t, t),  m_{t0} = m0,  φ1(m_t, K_t, t) = E_N a(X_t, t),  (47)
K̇_t = φ2(m_t, K_t, t),  K_{t0} = K0,  (48)
φ2(m_t, K_t, t) = φ21(m_t, K_t, t) + φ21(m_t, K_t, t)ᵀ + φ22(m_t, K_t, t),  φ21(m_t, K_t, t) = E_N a(X_t, t)(X_tᵀ − m_tᵀ),  φ22(m_t, K_t, t) = E_N b(X_t, t)ν(t)b(X_t, t)ᵀ,  ∂K(t1, t2)/∂t2 = K(t1, t2)K(t2)^{−1}φ21(m_{t2}, K_{t2}, t2)ᵀ,  (49)
gn(λ1, …, λn; t1, …, tn) ≈ exp{iλ̄ᵀm̄_n − ½λ̄ᵀK̄_nλ̄},  fn(x1, …, xn; t1, …, tn) ≈ [(2π)^{np}|K̄_n|]^{−1/2} exp{−½(x̄_n − m̄_n)ᵀK̄_n^{−1}(x̄_n − m̄_n)},  n = 1, 2, …,  (50)
λ̄ = [λ1ᵀ λ2ᵀ … λnᵀ]ᵀ,  m̄_n = [m(t1)ᵀ m(t2)ᵀ … m(tn)ᵀ]ᵀ,  x̄_n = [x1ᵀ x2ᵀ … xnᵀ]ᵀ,  K̄_n = [K(t_l, t_h)]_{l,h=1,…,n} (the block matrix with blocks K(t_l, t_h)).  (51)

Eq. (49) may be rewritten in the form

∂K(t1, t2)/∂t2 = φ3(K(t1, t2), t1, t2),  (52)

where

φ3(K(t1, t2), t1, t2) = [(2π)^{2n}|K̄_2|]^{−1/2} ∫∫ (x1 − m_{t1})φ(x2, t2)ᵀ exp{−½([x1ᵀ x2ᵀ]ᵀ − m̄_2)ᵀK̄_2^{−1}([x1ᵀ x2ᵀ]ᵀ − m̄_2)}dx1dx2;  m̄_2 = [m_{t1}ᵀ m_{t2}ᵀ]ᵀ;  K̄_2 = [[K(t1, t1), K(t1, t2)]; [K(t2, t1), K(t2, t2)]].  (53)

For discrete StS the equations of MNA may be presented in the following form:

m_{l+1} = E_N ω_l(X_l, V_l),  m_1 = EX_1,  l = 1, 2, …,  (54)
K_{l+1} = E_N ω_l(X_l, V_l)ω_l(X_l, V_l)ᵀ − [E_N ω_l(X_l, V_l)][E_N ω_l(X_l, V_l)]ᵀ,  (55)

under the conditions

K_1 = E_N(X_1 − m_1)(X_1 − m_1)ᵀ,  K_{lh} = E_N X_lω_h(X_h, V_h)ᵀ − m_l[E_N ω_h(X_h, V_h)]ᵀ,  K_{ll} = K_l  (l < h; h = 1, 2, …),  K_{hl} = K_{lh}ᵀ at l < h.  (56)

The corresponding MNA equations for Eq. (15) are a special case of Eqs. (54)–(56).
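A minimal sketch of MNA/MSL for an assumed scalar nonlinear StS dX = −X³dt + b dW: for Gaussian X ~ N(m, K), E_N[−X³] = −(m³ + 3mK) and E_N[−X³(X − m)] = −3K(m² + K), so Eqs. (47)-(49) close on the pair (m, K). The parameters are illustrative assumptions; for b = ν = 1 the stationary MNA variance is 1/√6 ≈ 0.408.

```python
def mna_cubic(b, nu, m0, K0, t1, n_steps):
    """Normal-approximation (MNA/MSL) moment equations for the assumed scalar
    StS dX = -X**3 dt + b dW.  For Gaussian X ~ N(m, K):
        E_N[-X**3]          = -(m**3 + 3*m*K)      (phi_1 in Eq. 47)
        E_N[-X**3 (X - m)]  = -3*K*(m**2 + K)      (phi_21 in Eq. 49)
    so mdot = -(m**3 + 3*m*K), Kdot = -6*K*(m**2 + K) + b**2 * nu."""
    m, K = float(m0), float(K0)
    dt = t1 / n_steps
    for _ in range(n_steps):
        m, K = (m - (m ** 3 + 3 * m * K) * dt,
                K + (-6 * K * (m ** 2 + K) + b ** 2 * nu) * dt)
    return m, K

# For b = nu = 1 the stationary MNA variance is 1/sqrt(6) ≈ 0.408.
m, K = mna_cubic(b=1.0, nu=1.0, m0=1.0, K0=0.1, t1=20.0, n_steps=20000)
print(m, K)
```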


3. Conditionally optimal forecasting in StS

Optimal forecasting is well developed for linear StS and off-line regimes [9]. For nonlinear StS, for linear StS with parametric Gaussian noises, and for on-line regimes, different versions of approximate (suboptimal) methods have been proposed. In [9] general results for complex statistical criteria and Bayes criteria are developed. Let us consider m.s. conditionally optimal forecasters for StS serving as models of stochastic OTES.

3.1 Continuous StS

Conditionally optimal forecasting (COFc) for the mean square error (mse) criterion was suggested by Pugachev [10]. Following [9] we define a COFc as a forecaster from the class of admissible forecasters which, at any joint distribution of the variables X_t (state variable), X̂_t (estimate of X_t), Y_t (observation variable), for forecasting time Δ > 0 and time moments t ≥ t0, in the continuous (differential) StS

dX_t = a(X_t, Y_t, t)dt + b(X_t, Y_t, t)dW1,  dY_t = a1(X_t, Y_t, t)dt + b1(X_t, Y_t, t)dW2  (57)

(W1, W2 being independent StP with independent increments; a, a1, b, b1 being known nonlinear functions) gives the best estimate of X_{s+Δ} at the time moment s > t, s → t, realizing the minimum of E|X̂_s − X_{s+Δ}|². Then the COFc at any time moment t ≥ t0 is reduced to finding the optimal coefficients α_t, β_t, γ_t in the following equation:

dX̂_t = α_tξ(X̂_t, Y_t, t)dt + β_tη(X̂_t, Y_t, t)dY_t + γ_tdt.  (58)

Here ξ = ξ(X̂_t, Y_t, t), η = η(X̂_t, Y_t, t) are given functions of the current observations Y_t, the estimate X̂_t, and time t.

Using the theory of conditionally optimal estimation [13, 17, 18] for the equation

dX_{t+Δ} = a(X_{t+Δ}, t + Δ)dt + b(X_{t+Δ}, t + Δ)dW1(t + Δ),  (59)

we get the following equations for the coefficients α_t, β_t, γ_t:

α_tm1 + β_tm2 + γ_t = m0,  m0 = Ea(X_t, Y_t, t),  m1 = Eξ(Y_t, X̂_{t+Δ}, t),  m2 = Eη(Y_t, X̂_{t+Δ}, t)a1(X_t, Y_t, t),  (60)
β_t = κ02κ22^{−1},  κ02 = E(X_t − X̂_{t+Δ})a1(X_t, Y_t, t)ᵀη(Y_t, X̂_{t+Δ}, t)ᵀ + Eb(X_t, Y_t, t)ν(t)b1(X_t, Y_t, t)ᵀη(Y_t, X̂_{t+Δ}, t)ᵀ,  (61)
κ22 = Eη(Y_t, X̂_{t+Δ}, t)b1(X_t, Y_t, t)ν(t)b1(X_t, Y_t, t)ᵀη(Y_t, X̂_{t+Δ}, t)ᵀ  (62)

under the condition det κ22 ≠ 0.

The theory of conditionally optimal forecasting gives the opportunity for simultaneous filtering of the state and identification of StS parameters for different forecasting times Δ. All complex calculations for the COFc design do not need current observations and may be performed from a priori data during the design procedure. Practical application of such a COFc reduces to the integration of Eq. (58). The time derivative of the error covariance matrix R_t is defined by the formula

Ṙ_t = E[(X_{t+Δ} − X̂_t)a(X_{t+Δ}, t + Δ)ᵀ + a(X_{t+Δ}, t + Δ)(X_{t+Δ}ᵀ − X̂_tᵀ)] − β_tE[η(Y_t, X̂_{t+Δ}, t)b1(X_t, Y_t, t)ν2(t)b1(X_t, Y_t, t)ᵀη(Y_t, X̂_{t+Δ}, t)ᵀ]β_tᵀ + Eb(X_{t+Δ}, t + Δ)ν1(t + Δ)b(X_{t+Δ}, t + Δ)ᵀ.  (63)

The mathematical expectations in Eqs. (60)–(63) are computed on the basis of the joint distribution of the random variables [X_tᵀ X_{t+Δ}ᵀ Y_tᵀ X̂_tᵀ X̂_{t+Δ}ᵀ]ᵀ by solution of the following Pugachev equation for the characteristic function g2(λ1, λ2, λ3, μ1, μ2, μ3; t, s) of the StP [X_tᵀ Y_tᵀ X̂_tᵀ]ᵀ at s > t:

∂g2(λ1, λ2, λ3, μ1, μ2, μ3; t, s)/∂s = E{[iμ1ᵀa1(Y_s, X_s, s) + iμ2ᵀa(X_s, s) + iμ3ᵀ(α_sξ(Y_s, X̂_s, s) + β_sη(Y_s, X̂_s, s) + γ_s) + χ(b1(Y_s, X_s, s)ᵀμ1 + b(X_s, s)ᵀμ2 + b1(Y_s, X_s, s)ᵀη(Y_s, X̂_s, s)ᵀβ_sᵀμ3; s)] exp{iλ1ᵀY_t + iλ2ᵀX_t + iλ3ᵀX̂_t + iμ1ᵀY_s + iμ2ᵀX_s + iμ3ᵀX̂_s}}  (64)

with the initial condition

g2(λ1, λ2, λ3, μ1, μ2, μ3; t, t) = g1(λ1 + μ1, λ2 + μ2, λ3 + μ3; t).  (65)

The basic algorithms are defined by the following Proposals 3.1.1–3.1.3.

Proposal 3.1.1. Under the conditions of existence of the probability moments (60), (61), the nonlinear COFc is defined by Eqs. (58) and (63).

Proposal 3.1.2. For the linear differential StS

dX_t = (a1X_t + a0)dt + bdW1,  dY_t = (bY_t + b1X_t + b0)dt + b1dW2,  (66)

the equations of the exact COFc are as follows:

dX̂_t = [a1(t + Δ)(ε_tX̂_t + h_t) + a0(t + Δ)]dt + ε_tβ1_t[dY_t − (bX̂_t + b0)dt],  (67)
ε̇_t = a1(t + Δ)ε_t − ε_ta1,  (68)
ḣ_t = a0(t + Δ) − ε_ta0 + a1(t + Δ)h_t,  (69)
Ṙ_t = a1(t + Δ)R_t + R_ta1(t + Δ)ᵀ − β_tb1ν2b1ᵀβ_tᵀ + b(t + Δ)ν1(t + Δ)b(t + Δ)ᵀ.  (70)

In the case of the linear StS with parametric Gaussian noises:

dX_t = (a1X_t + a0)dt + (c10 + Σ_{r=1}^{nx} c1,ny+rX_r)dW1,  dY_t = (bY_t + b1X_t + b0)dt + (c20 + Σ_{r=1}^{ny} c2rY_r + Σ_{r=1}^{nx} c2,ny+rX_r)dW2,  (71)

the COFc is defined by the exact equations (Proposal 3.1.3):

dX̂_t = [a1(t + Δ)(ε_tX̂_t + h_t) + a0(t + Δ)]dt + ε_tβ1_t[dY_t − (bX̂_t + b0)dt],  (72)
ε̇_t = a1(t + Δ)ε_t − ε_ta1,  ḣ_t = a0(t + Δ) − ε_ta0 + a1(t + Δ)h_t,  (73)
Ṙ_t = a1(t + Δ)R_t + R_ta1(t + Δ)ᵀ − β_t[(c20 + Σ_{r=1}^{ny+nx} c2rm_r)ν1(c20 + Σ_{r=1}^{ny+nx} c2rm_r)ᵀ + Σ_{r,s=1}^{ny+nx} c2rν1c2sᵀk_{rs}]β_tᵀ + [c10(t + Δ) + Σ_{r=ny+1}^{ny+nx} c1r(t + Δ)m_r(t + Δ)]ν2(t + Δ)[c10(t + Δ) + Σ_{r=ny+1}^{ny+nx} c1r(t + Δ)m_r(t + Δ)]ᵀ + Σ_{r,s=ny+1}^{ny+nx} c1r(t + Δ)ν2(t + Δ)c1s(t + Δ)ᵀk_{rs}.  (74)

For nonlinear StS, in the case of the normal StP X_t, Y_t, X̂_t, the equations of the normal COFc (NCOFc) are defined by Proposal 3.1.1 applied to the joint normal distribution.
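For the discrete linear Gaussian case, mse-optimal forecasting reduces to Kalman filtering followed by Δ-step extrapolation, and for linear Gaussian StS this coincides with the conditionally optimal forecaster. The sketch below is a minimal scalar illustration with an assumed model X_{k+1} = aX_k + W_k, Y_k = X_k + V_k; it is not an implementation of the chapter's Eqs. (67)-(70).

```python
import numpy as np

def kalman_forecast(ys, a, b_obs, q, r, m0, P0, delta):
    """Minimal sketch of mse-optimal forecasting for the assumed scalar linear
    StS X_{k+1} = a X_k + W_k, Y_k = b_obs X_k + V_k (W_k ~ N(0, q),
    V_k ~ N(0, r)): Kalman filtering followed by delta-step extrapolation."""
    m, P = float(m0), float(P0)
    for y in ys:
        S = b_obs * P * b_obs + r           # innovation variance
        K = P * b_obs / S                   # filter gain
        m = m + K * (y - b_obs * m)         # measurement update
        P = (1.0 - K * b_obs) * P
        m, P = a * m, a * a * P + q         # one-step time update
    for _ in range(delta - 1):              # extrapolate delta - 1 more steps
        m, P = a * m, a * a * P + q
    return m, P

rng = np.random.default_rng(3)
a, q, r = 0.9, 0.1, 0.2
x, ys = 0.0, []
for _ in range(400):
    x = a * x + rng.normal(0.0, np.sqrt(q))
    ys.append(x + rng.normal(0.0, np.sqrt(r)))
m, P = kalman_forecast(ys, a, 1.0, q, r, 0.0, 1.0, delta=3)
print(m, P)
```

As in the continuous COFc, the gain and variance recursion do not depend on the observed data, so they can be computed entirely from a priori data at the design stage.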

3.2 Discrete and hybrid StS

Let us consider the following non-Gaussian nonlinear regression StS:

X_{k+1} = ω_k(X_k, V_k),  Y_k = ω1_k(X_k, Y_k, V_k),  k = 1, 2, ….  (75)

In this case the equations of the discrete COFc are as follows:

X̂_{k+r+1} = δ_kζ_k(Y_k, X̂_k) + γ_k,  (76)
δ_k = D_kB_k^{−1},  γ_k = m_{k+r+1} − δ_kρ_k,  (77)
m_{k+r+1} = Eω_{k+r}(X_{k+r}, V_{k+r}),  (78)
ρ_k = Eζ_k(Y_k, X̂_k),  B_k = E[ζ_k(Y_k, X̂_k) − ρ_k]ζ_k(Y_k, X̂_k)ᵀ,
D_k = E[ω_{k+r}(X_{k+r}, V_{k+r}) − m_{k+r+1}]ζ_k(Y_k, X̂_k)ᵀ,  (79)
g_{2,k,k+r}(λ1, λ2, μ) = E exp{iλ1ᵀX_k + iλ2ᵀX_{k+r} + iμᵀX̂_k},  (80)
g_{2,k,k+r+1}(λ1, λ2, μ) = E exp{iλ1ᵀX_k + iλ2ᵀω_{k+r}(X_{k+r}, V_{k+r}) + iμᵀX̂_k}  (81)

at the initial condition

g_{2,k,k}(λ1, λ2, μ) = g_{1,k}(λ1 + λ2, μ).  (82)

So for the nonlinear regression StS (14) we get Proposal 3.2.1 defined by Eqs. (75)–(82).

In the case of the nonlinear autoregression discrete StS (15) we have the following equations of Proposal 3.2.2:

X_{k+1} = a_k(X_k) + b_k(X_k)V_k,  Y_k = a1_k(X_k, Y_k) + b1_k(Y_k)V_k,  (83)
X̂_{k+r+1} = α_kξ_k(X̂_k) + β_kη_k(X̂_k, Y_k) + γ_k,  (84)
α_kκ11_k + β_kκ21_k = κ01_k,  α_kκ12_k + β_kκ22_k = κ02_k,  (85)
γ_k = ρ0,k+r+1 − α_kρ1_k − β_kρ2_k,  (86)
ρ0,k+r+1 = Ea_{k+r}(X_{k+r}),  (87)
ρ_k = [ρ1_kᵀ ρ2_kᵀ]ᵀ,  ρ1_k = Eξ_k(X̂_k),  ρ2_k = Eη_k(X̂_k)a1_k(X_k),  (88)
B_k = [[κ11_k, κ12_k]; [κ21_k, κ22_k]],  det B_k ≠ 0,  (89)
κ11_k = E[ξ_k(X̂_k) − ρ1_k]ξ_k(X̂_k)ᵀ,  κ12_k = κ21_kᵀ = E[ξ_k(X̂_k) − ρ1_k]a1_k(X_k)ᵀη_k(X̂_k)ᵀ,  κ22_k = E[η_k(X̂_k)a1_k(X_k) − ρ2_k]a1_k(X_k)ᵀη_k(X̂_k)ᵀ + Eη_k(X̂_k)b1_k(X_k)ν_kb1_k(X_k)ᵀη_k(X̂_k)ᵀ,  (90)
D_k = [κ01_k κ02_k],  (91)
κ01_k = E[a_k(X_k) − m_{k+1}]ξ_k(X̂_k)ᵀ,  κ02_k = E[a_k(X_k) − m_{k+1}]a1_k(X_k)ᵀη_k(X̂_k)ᵀ + Eb_k(X_k)ν_kb1_k(X_k)ᵀη_k(X̂_k)ᵀ,  (92)
m_{k+1} = ρ0_k,  ρ0_k = Ea_k(X_k),  EV_k = 0,  EV_kV_kᵀ = ν_k.  (93)

Analogously we get from Proposal 3.2.2 the COFc for discrete linear StS and for linear StS with Gaussian parametric noises. For hybrid StS we recommend a mixed algorithm based on the joint normal distribution and Proposal 3.1.1.

3.3 Generalizations

The mean square results (Subsections 3.1 and 3.2) may be extended to StS described by linear equations, linear equations with Gaussian parametric noises, and nonlinear equations, or to StS reducible to them, by approximate suboptimal and conditionally optimal methods.

Differential StS with autocorrelated noises in observations may also be reduced to differential StS.

Special COFc algorithms based on complex statistical criteria and Bayesian criteria are developed in [11].


4. Probability modeling in SOTES

Following [3, 4] let us consider a general approach to SOTES modeling as macroscopic (multi-level) systems including sets of subsystems which are themselves macroscopic. In our case these sets of subsystems will be clusters covering the part of MP connected with after-sales production service, more precisely the set of subsystems of the lower level where input information about concrete products, personnel categories, etc. is formed.

For a typical continuous-discrete StP in the SOTES production cluster we have the following vector stochastic equation:

dX_t = [φ(X_t, t) + S_vρ(X_t, t)]dt + S_vdP0_t.  (94)

Here P0_t is the centered Poisson StP; ρ(X_t, t) is the n_ρ × 1 vector of intensities of the StP P_t, ρ(X_t, t) = [ρ12(X_t, t) ρ13(X_t, t) … ρ_{uk}(X_t, t)]ᵀ; ρ_{uk}(X_t, t) are the intensities of the streams of state changes; φ(X_t, t) is a continuous n_p × 1 vector function of quality indicators in the OPB; S_v is the n_p × n_ρ matrix of Poisson streams of resources (production) with volumes v according to the SOTES state graph. Analogously we get the corresponding equations for the SOTES-O and the SOTES-N:

$$dY_t=[q(X_t,t)+\varphi_1(Y_t,t)+D_r\gamma(Y_t,t)]\,dt+D_r\,dP_{1t}^0,\tag{95}$$
$$d\zeta_t=[\varphi_2(\zeta_t,t)+C_\vartheta\mu(\zeta_t,t)]\,dt+C_\vartheta\,dP_{2t}^0,\tag{96}$$

where $\varphi_1$ and $\varphi_2$ are the vector functions of quality indicators in the OPB for the SOTES-O and the SOTES-N; $D_r$ is the structural matrix of resource streams in the SOTES-O; $\gamma(Y_t,t)$ and $D_r$ are the intensity function and the jump matrix of $P_{1t}^0$ in the SOTES-O.

In the linear case, when $\rho_{uk}(X_t,t)=A_\rho X_t$, Eqs. (94)–(96) for the SOTES, SOTES-O and SOTES-N may be presented as

$$dX_t=\bar a(t,v)X_t\,dt+S_v\,dP_t^0,\tag{97}$$
$$dY_t=[\bar b(r,t)Y_t+\lambda(Y_t,t)X_t]\,dt+D_r\,dP_{1t}^0+\psi_1(t)\,d\zeta_t,\tag{98}$$
$$d\zeta_t=\bar c_2(\vartheta,t)\zeta_t\,dt+C_\vartheta\,dP_{2t}^0.\tag{99}$$

Here the following notations are used:

$$\bar b_1(r,t)=b_1(t)+A_\gamma(r,t),\qquad \bar c_2(\vartheta,t)=c_2+A_\mu(\vartheta,t).\tag{100}$$

The matrices $A_\gamma(r,t)$, $A_\mu(\vartheta,t)$ are derived from the Eqs:

$$D_r\gamma(Y_t,t)\approx A_\gamma(r,t)Y_t,\qquad C_\vartheta\mu(\zeta_t,t)\approx A_\mu(\vartheta,t)\zeta_t.\tag{101}$$

In practice the a priori information about the SOTES-N is poorer than that about the SOTES and SOTES-O. So, introducing the Wiener StP $W_t$, $W_{1t}$, $W_{2t}$, we get the following Eqs:

$$dX_t=(\bar aX_t+a_1Y_t+a_0)\,dt+S_v\,dP_t^0+\psi'(t)\,dW_t,\tag{102}$$
$$dY_t=(qX_t+b_1Y_t+b_2\zeta_t+b_0)\,dt+D_r\,dP_{1t}^0+\psi_1(t)\,d\zeta_t+\psi_1'(t)\,dW_{1t},\tag{103}$$
$$d\zeta_t=(\bar c_2\zeta_t+c_0)\,dt+C_\vartheta\,dP_{2t}^0+\psi_2'(t)\,dW_{2t}.\tag{104}$$
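Eqs. (102)–(104) mix drift terms, centered Poisson jump streams and Wiener noises. A minimal Euler-type simulation of a scalar analogue of (102) (all coefficients are hypothetical illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 1000
a_bar, a0 = -0.5, 1.0       # hypothetical drift coefficients
S_v = 0.2                   # jump size (resource volume)
lam = 2.0                   # Poisson stream intensity
psi = 0.1                   # Wiener-noise coefficient

X = np.empty(n + 1)
X[0] = 0.0
for k in range(n):
    dP = rng.poisson(lam * dt)          # Poisson increment
    dP0 = dP - lam * dt                 # centered increment, as in dP_t^0
    dW = rng.normal(0.0, np.sqrt(dt))   # Wiener increment
    X[k + 1] = X[k] + (a_bar * X[k] + a0) * dt + S_v * dP0 + psi * dW

# Since the centered Poisson term has zero mean, the trajectory
# fluctuates around the stationary mean -a0/a_bar = 2.
print(X[-1])
```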

R e m a r k 4.1. Such noises from the OTES-N may also act at the lower levels of the OTES-O included in the internal SOTES. For the highest OTES levels intermediate aggregation functions may be performed. So observation and estimation systems must be through (multi-level and cascade) ones and provide external noise protection for all OTES levels.

R e m a r k 4.2. As a rule, at the administrative SOTES levels processes of information aggregation and decision making are performed.

Finally, under the additional conditions:

1. the information streams about the OPB state in the OTES-O are given by the formula

$$Y_t=G_t(T_{st})+T_{st}\tag{105}$$

and every StP $G_t(T_{st})$ is supported by a corresponding resource (e.g. financial);

2. for SOTES measurement only the external noise from the SOTES-N and the own noise due to errors of personnel and equipment are essential,

We get the following basic ordinary differential Eqs:

$$\dot X_t=\bar aX_t+a_1G_t+a_0+\chi_xV_\Omega,\tag{106}$$
$$\dot G_t=q(T_{st})X_t+b_2\zeta_t+\chi_gV_\Omega,\tag{107}$$
$$\dot\zeta_t=c_2\zeta_t+c_0+\chi_\zeta V_\Omega,\tag{108}$$
$$\dot T_{st}=bX_t+\bar b_1T_{st}+b_0+\chi_{ts}V_\Omega.\tag{109}$$

Here $V_\Omega(t)=[V_x^T(t)\ V_g^T(t)\ V_\zeta^T(t)\ V_{st}^T(t)]^T$ is a vector white noise, $\dim V_\Omega(t)=(n_x+n_g+n_\zeta+n_{ts})\times1$, $MV_\Omega(t)=0$, with block-diagonal intensity matrix $v_\Omega=\mathrm{diag}(v_x,v_g,v_\zeta,v_{ts})$, $\dim v_x(t)=n_x\times n_x$, $\dim v_g(t)=n_g\times n_g$, $\dim v_\zeta(t)=n_\zeta\times n_\zeta$, $\dim v_{ts}(t)=n_{ts}\times n_{ts}$; $\chi_x,\chi_g,\chi_\zeta,\chi_{st}$ are known matrices:

$$V_x=S_vV_P(t)+\psi'(t)V_W,\qquad V_g=\psi_1(t)V_\zeta+\psi_1'(t)V_{W_1},\qquad V_\zeta=C_\vartheta V_{P_2}+\psi_2'(t)V_{W_2};\tag{110}$$
$$V_P=\dot P_t^0,\quad V_{P_1}=\dot P_{1t}^0,\quad V_{P_2}=\dot P_{2t}^0,\quad V_{st}=V_{P_1},\quad V_W=\dot W_t,\quad V_{W_1}=\dot W_{1t},\quad V_{W_2}=\dot W_{2t}.\tag{111}$$

R e m a r k 4.3. The noises $V_P$, $V_{P_1}$, $V_{P_2}$ (random time moments of resources or production) are non-Gaussian noises induced by the Poisson noises in the OTES, OTES-O and OTES-N, whereas the noises $V_W$, $V_{W_1}$, $V_{W_2}$ (personnel errors, internal noises) are Gaussian StP.

From Eqs. (110) and (111) we have the following equivalent expressions for the intensities of the components of $V_\Omega(t)$:

$$v_x=S_v\bar\rho S_v^T+\psi'v_W\psi'^T,\qquad v_g=\psi_1v_\zeta\psi_1^T+\psi_1'v_{W_1}\psi_1'^T,\qquad v_\zeta=C_\vartheta\bar\mu C_\vartheta^T+\psi_2'v_{W_2}\psi_2'^T,\qquad v_{ts}=D_r\bar\gamma D_r^T.\tag{112}$$

Here the following notations are used: $S_v\bar\rho S_v^T$, $C_\vartheta\bar\mu C_\vartheta^T$, $D_r\bar\gamma D_r^T$ are the intensities of the non-Gaussian white noises $S_vV_P(t)$, $C_\vartheta V_{P_2}(t)$, $D_rV_{P_1}(t)$; $\bar\rho=E\,\mathrm{diag}\,\rho(X_t,t)$, $\bar\gamma=E\,\mathrm{diag}\,\gamma(Y_t,t)$, $\bar\mu=E\,\mathrm{diag}\,\mu(\zeta_t,t)$ are the mathematical expectations of the diagonal intensity matrices of the Poisson streams in the SOTES, SOTES-O and SOTES-N; $v_W$, $v_{W_1}$, $v_{W_2}$ are the intensities of the Gaussian white noises $V_W$, $V_{W_1}$, $V_{W_2}$. Note the difference between the intensity of a Poisson stream and the intensity of a white noise.

In the case of Eqs. (106)–(109) with Gaussian parametric noises we use the following Eqs:

$$\dot X_t=L_X+(\tilde aX_t+\tilde a_1G_t)V_\Omega,\tag{113}$$
$$\dot G_t=L_G+(\tilde qX_t+\tilde b_2\zeta_t)V_\Omega,\tag{114}$$
$$\dot\zeta_t=L_\zeta+\tilde c_2\zeta_tV_\Omega,\tag{115}$$
$$\dot T_{st}=L_T+(\tilde bX_t+\tilde b_1T_{st})V_\Omega,\tag{116}$$

where $L_X$, $L_G$, $L_\zeta$, $L_T$ are the right-hand sides of Eqs. (106)–(109) and the tilde denotes the parametric-noise coefficients.

Treating the noises $V_\Omega$ as additive and presenting Eqs. (113)–(116) for $Z_t=[X_t\ G_t\ \zeta_t\ T_{st}]^T$ in the form of the method of statistical linearization (MSL):

$$\dot Z_t=B_0(m_t^z,K_t^z,t)+B_1(m_t^z,K_t^z,t)(Z_t-m_t^z)+B'(m_t^z,K_t^z,t)V_\Omega(t),\tag{117}$$

we get the following set of interconnected Eqs for $m_t^z$, $K_t^z$:

$$\dot m_t^z=B_0(m_t^z,K_t^z,t),\qquad m_{t_0}^z=m_0^z,\tag{118}$$
$$\dot K_t^z=B_1(m_t^z,K_t^z,t)K_t^z+K_t^zB_1(m_t^z,K_t^z,t)^T+B'(m_t^z,K_t^z,t)v_t^\Omega B'(m_t^z,K_t^z,t)^T,\qquad K_{t_0}^z=K_0^z.\tag{119}$$
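For a purely linear system $\dot Z_t=aZ_t+a_0+\psi V_\Omega$ the moment Eqs. (118)–(119) reduce to $\dot m=am+a_0$ and $\dot K=aK+Ka^T+\psi v_\Omega\psi^T$. A sketch integrating these reduced Eqs with hypothetical $2\times2$ coefficients:

```python
import numpy as np

a = np.array([[-1.0, 0.2], [0.0, -0.5]])   # hypothetical stable drift matrix
a0 = np.array([1.0, 0.5])
psi = np.eye(2) * 0.3
v = np.eye(2)                               # white-noise intensity matrix

m = np.zeros(2)                             # mean m_t
K = np.zeros((2, 2))                        # covariance K_t
dt = 0.001
for _ in range(20000):                      # integrate to t = 20 (near steady state)
    m = m + (a @ m + a0) * dt
    K = K + (a @ K + K @ a.T + psi @ v @ psi.T) * dt

# At steady state: a m + a0 = 0 and a K + K a^T + psi v psi^T = 0 (Lyapunov)
print(m, np.diag(K))
```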

The Eq for $K^z(t_1,t_2)$ is given by (49).


5. Basic SOTES conditionally optimal filtering and forecasting algorithms

Proposal 5.1. Let the SOTES, SOTES-O and SOTES-N be linear, satisfy Eqs. (102)–(104), and admit a linear filter of the form:

$$\dot{\hat X}_t=\bar a\hat X_t+a_1G_t+a_0+\beta_t[\dot G_t-(q_t\hat X_t+b_2\zeta_t)],\tag{120}$$

where the coefficient $q_t$ in (120) does not depend upon $T_{st}$. Then the Eqs of the optimal and conditionally optimal filters coincide with the Kalman-Bucy filter and may be presented in the following form:

$$\dot{\hat X}_t=\bar a\hat X_t+a_1G_t+a_0+R_tq_t^Tv_g^{-1}[Z_t-(q_t\hat X_t+b_2\zeta_t)],\qquad Z_t=\dot G_t,\tag{121}$$
$$\dot R_t=\bar aR_t+R_t\bar a^T+v_x-R_tq_t^Tv_g^{-1}q_tR_t.\tag{122}$$
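Proposal 5.1 is the classical continuous Kalman-Bucy filter: the gain is $R_tq_t^Tv_g^{-1}$ and $R_t$ solves the Riccati Eq. (122). A scalar Euler-discretized sketch (all numbers hypothetical; the $a_1G_t$ term is folded into a constant drift for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.001, 10000
a_bar, a0c = -0.8, 0.4       # state drift (a1*G folded into a0c)
q = 1.0                      # measurement coefficient q_t
vx, vg = 0.2, 0.05           # state / measurement white-noise intensities

X = 0.0                      # true state
Xh = 1.0                     # filter estimate, deliberately mis-initialised
R = 1.0                      # error covariance
for _ in range(n):
    X += (a_bar * X + a0c) * dt + np.sqrt(vx * dt) * rng.normal()
    Z = q * X + np.sqrt(vg / dt) * rng.normal()          # Z_t = dG/dt
    Xh += (a_bar * Xh + a0c) * dt + R * q / vg * (Z - q * Xh) * dt
    R += (2 * a_bar * R + vx - R * q / vg * q * R) * dt  # Riccati Eq. (122)

# R converges to the stationary Riccati solution
R_inf = (a_bar + np.sqrt(a_bar**2 + q**2 * vx / vg)) * vg / q**2
print(R, R_inf)
```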

Proposal 5.2. In the case when the measuring coefficient $q_t$ depends upon $\lambda_t=\lambda(T_{st},t)$ and admits the statistical linearization

$$\lambda(T_{st},t)\approx\lambda_0(m_{st},K_{st},t)+\lambda_1(m_{st},K_{st},t)T_{st}^0,$$
$$\lambda_0(m_{st},K_{st},t)=M\lambda(T_{st},t),\qquad \lambda_1(m_{st},K_{st},t)\approx0,\tag{123}$$

the sub- and conditionally optimal filter Eqs are as follows:

$$\dot{\hat X}_t=\bar a\hat X_t+a_1G_t+a_0+R_tq_0(m_{st},K_{st})^Tv_g^{-1}[Z_t-(q_0(m_{st},K_{st})\hat X_t+b_2\zeta_t)],\tag{124}$$
$$\dot R_t=\bar aR_t+R_t\bar a^T+v_x-R_tq_0(m_{st},K_{st})^Tv_g^{-1}q_0(m_{st},K_{st})R_t.\tag{125}$$

R e m a r k 5.1. The filtering Eqs defined by Proposals 5.1 and 5.2 give mean-square optimal unbiased estimates of $X_t$ for the OTES under the internal noises of the measuring devices and the external noise from the OTES-N acting on the measuring part of the SOTES-O.

R e m a r k 5.2. The accuracy of the estimate $\hat X_t$ depends not only on the noise $\zeta_t$ influencing the measured signal, but also on the rules and technical-economical quality criteria of the SOTES and on the state of the resources $T_{st}$ of the OPB for the SOTES-O.

Using [9, 10, 11], let us consider a more general SOTES than Eqs. (113)–(116), with system vector $\bar X_t=[X_t\ G_t\ \zeta_t\ T_{st}]^T$ and observation vector $\bar Y_t=[Y_1\ Y_2\ Y_3\ Y_4]^T$, defined by the Eqs:

$$\dot{\bar X}_t=a\bar Y_t+a_1\bar X_t+a_0+\Big(c_{10}+\sum_{r=1}^{n_y}c_{1r}\bar Y_r+\sum_{r=1}^{n_x}c_{1,n_y+r}\bar X_r\Big)V,\tag{126}$$
$$Z_t=\dot{\bar Y}_t=b\bar Y_t+b_1\bar X_t+b_0+\Big(c_{20}+\sum_{r=1}^{n_y}c_{2r}\bar Y_r+\sum_{r=1}^{n_x}c_{2,n_y+r}\bar X_r\Big)V_1.\tag{127}$$

Here $a,a_0,a_1,b,b_0,b_1$ and $c_{ij}$ $(i=1,2;\ j=\overline{1,n_{\bar x}})$ are vector and matrix functions of $t$ which do not depend on $\bar X_t=[X_1\dots X_{n_x}]^T$ and $\bar Y_t=[Y_1\dots Y_{n_y}]^T$. Then the corresponding algorithm of the conditionally optimal filter (COF) is defined by [9, 10, 11]:

$$\dot{\hat{\bar X}}_t=a\bar Y_t+a_1\hat{\bar X}_t+a_0+\beta_t[Z_t-(b\bar Y_t+b_1\hat{\bar X}_t+b_0)].\tag{128}$$

To obtain the Eqs for $\beta_t$ it is necessary to have the Eqs for the mathematical expectation $m_t$ and covariance matrix $K_t$ of the random vector $Q_t=[Y_1\dots Y_{n_y}\ X_1\dots X_{n_x}]^T$ and for the error covariance matrix $R_t$ of $\tilde X_t=\hat{\bar X}_t-\bar X_t$. Using the Eqs

$$\dot m_t=am_t+a_0,\tag{129}$$
$$\dot K_t=aK_t+K_ta^T+c_0vc_0^T+\sum_{r=1}^{n_y+n_x}(c_0vc_r^T+c_rvc_0^T)m_r+\sum_{r,s=1}^{n_y+n_x}c_rvc_s^T(m_rm_s+k_{rs}),\tag{130}$$
$$a=\begin{bmatrix}b & b_1\\ a & a_1\end{bmatrix},\qquad a_0=\begin{bmatrix}b_0\\ a_0\end{bmatrix},\qquad c_r=\begin{bmatrix}c_{2r}\\ c_{1r}\end{bmatrix}\quad(r=\overline{0,n_y+n_x}),\tag{131}$$

we have the following Eq for the error covariance matrix

$$\begin{aligned}\dot R_t=\;&a_1R_t+R_ta_1^T-\Big[R_tb_1^T+\Big(c_{10}+\sum_{r=1}^{n_y+n_x}c_{1r}m_r\Big)v\Big(c_{20}+\sum_{r=1}^{n_y+n_x}c_{2r}m_r\Big)^T+\sum_{r,s=1}^{n_y+n_x}c_{1r}vc_{2s}^Tk_{rs}\Big]\kappa_{11}^{-1}\times\\ &\times\Big[R_tb_1^T+\Big(c_{10}+\sum_{r=1}^{n_y+n_x}c_{1r}m_r\Big)v\Big(c_{20}+\sum_{r=1}^{n_y+n_x}c_{2r}m_r\Big)^T+\sum_{r,s=1}^{n_y+n_x}c_{1r}vc_{2s}^Tk_{rs}\Big]^T+\\ &+\Big(c_{10}+\sum_{r=1}^{n_y+n_x}c_{1r}m_r\Big)v\Big(c_{10}+\sum_{s=1}^{n_y+n_x}c_{1s}m_s\Big)^T+\sum_{r,s=1}^{n_y+n_x}c_{1r}vc_{1s}^Tk_{rs}.\end{aligned}\tag{132}$$

Here

$$\kappa_{11}=\Big(c_{20}+\sum_{r=1}^{n_y+n_x}c_{2r}m_r\Big)v\Big(c_{20}+\sum_{s=1}^{n_y+n_x}c_{2s}m_s\Big)^T+\sum_{r,s=1}^{n_y+n_x}c_{2r}vc_{2s}^Tk_{rs},\tag{133}$$

$m_t=(m_r)$, $K_t=(k_{rs})$ $(r,s=\overline{1,n_y+n_x})$; $V$ is the non-Gaussian white noise of intensity $v$. The coefficient $\beta_t$ in Eq. (128) is defined by the formula

$$\beta_t=\Big[R_tb_1^T+\Big(c_{10}+\sum_{r=1}^{n_y+n_x}c_{1r}m_r\Big)v\Big(c_{20}+\sum_{s=1}^{n_y+n_x}c_{2s}m_s\Big)^T+\sum_{r,s=1}^{n_y+n_x}c_{1r}vc_{2s}^Tk_{rs}\Big]\kappa_{11}^{-1}.\tag{134}$$

R e m a r k 5.3. In the case when the observations do not influence the state vector we have the following notations:

$$a=0,\quad b=0,\quad c_{1r}=0,\quad c_{2r}=0\ (r=\overline{1,4}),\quad n_x=4,\ n_y=4;$$
$$a_1=\begin{bmatrix}\bar a & a_1 & 0 & 0\\ q & 0 & b_2 & 0\\ 0 & 0 & c_2 & 0\\ b & 0 & 0 & \bar b_1\end{bmatrix},\quad a_0=\begin{bmatrix}a_0\\ 0\\ c_0\\ b_0\end{bmatrix},\quad c_{10}=\begin{bmatrix}\chi_x\\ \chi_G\\ \chi_\zeta\\ \chi_{st}\end{bmatrix},$$
$$c_{1,5}=\begin{bmatrix}\tilde a\\ \tilde a_1\\ 0\\ 0\end{bmatrix},\quad c_{1,6}=\begin{bmatrix}\tilde q\\ 0\\ \tilde b_2\\ 0\end{bmatrix},\quad c_{1,7}=\begin{bmatrix}0\\ 0\\ \tilde c_2\\ 0\end{bmatrix},\quad c_{1,8}=\begin{bmatrix}\tilde b\\ 0\\ 0\\ \tilde b_1\end{bmatrix}.\tag{135}$$

Proposal 5.3. Let the SOTES be described by Eqs. (126) and (127). Then the COF algorithm is defined by Eqs. (128)–(134).

The theory of conditionally optimal forecasting [9, 10, 11], in the case of the Eqs:

$$\dot{\bar X}_t=a_1\bar X_t+a_0+\Big(c_{10}+\sum_{r=1}^{n_x}c_{1,n_y+r}\bar X_r\Big)V_1,\tag{136}$$
$$Z_t=\dot{\bar Y}_t=b\bar Y_t+b_1\bar X_t+b_0+\Big(c_{20}+\sum_{r=1}^{n_y}c_{2r}\bar Y_r+\sum_{r=1}^{n_x}c_{2,n_y+r}\bar X_r\Big)V_2,\tag{137}$$

where $\Delta$ is the forecasting time and $V_1$, $V_2$ are independent non-Gaussian white noises with matrix intensities $v_1$ and $v_2$, gives the following Eqs for the COFc:

$$\dot{\hat{\bar X}}_t=a_1(t+\Delta)\hat{\bar X}_t+a_0(t+\Delta)+\beta_t[Z_t-(b\bar Y_t+b_1\varepsilon_t^{-1}\hat{\bar X}_t+b_0-b_1\varepsilon_t^{-1}h_t)],\tag{138}$$

where the following notations are used: $u(s,t)$ is the fundamental solution of the Eq $du/ds=a_1(s)u$ with the initial condition $u(t,t)=I$; $\varepsilon_t=u(t+\Delta,t)$,

$$\beta_t=\varepsilon_t(K_x-K_{\hat xx})b_1^T\kappa_{11}^{-1},\tag{139}$$
$$h_t=\int_t^{t+\Delta}u(t+\Delta,\tau)a_0(\tau)\,d\tau,\tag{140}$$
$$m_x(t+\Delta)=\varepsilon_tm_x(t)+h_t.\tag{141}$$
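For a constant matrix $a_1$ the fundamental solution is $u(t+\Delta,t)=e^{a_1\Delta}$, so $\varepsilon_t$ and $h_t$ in (139)–(141) can be evaluated directly. A sketch with a hypothetical $2\times2$ system (the Taylor-series `expm` below is a stand-in for a library matrix exponential):

```python
import numpy as np

def expm(A, terms=30):
    """Matrix exponential via truncated Taylor series (fine for small ||A||)."""
    out, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

a1 = np.array([[-0.3, 0.1], [0.0, -0.2]])   # hypothetical constant drift
a0 = np.array([0.6, 0.4])
delta = 2.0                                  # forecasting time Delta

eps = expm(a1 * delta)                       # eps_t = u(t + Delta, t)

# h_t = integral over [t, t+Delta] of u(t+Delta, tau) a0 dtau (midpoint rule)
n = 2000
ds = delta / n
h = np.zeros(2)
for i in range(n):
    s = (i + 0.5) * ds
    h += expm(a1 * (delta - s)) @ a0 * ds

m_x = np.array([1.0, 2.0])                   # current conditional mean
m_forecast = eps @ m_x + h                   # Eq. (141)
print(m_forecast)
```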

R e m a r k 5.4. In practice the COFc may be realized as the serial connection of a COF, an amplifier with gain $\varepsilon_t=u(t+\Delta,t)$ and a summator adding $h_t$:

$$\hat{\bar X}_t=\varepsilon_t\hat{\bar X}_t^1+h_t,\tag{142}$$

where $\hat{\bar X}_t^1$ is the COF output, i.e. the COF estimate of the current state $\bar X_t$.

Eq. (138) may be presented in another form:

$$\dot{\hat{\bar X}}_t=a_1(t+\Delta)(\varepsilon_t\hat X_t^1+h_t)+a_0(t+\Delta)+\varepsilon_t\beta_t^1[Z_t-(b\bar Y_t+b_1\hat X_t^1+b_0)].\tag{143}$$

The accuracy of the COFc is defined by the following Eq:

$$\begin{aligned}\dot R_t=\;&a_1(t+\Delta)R_t+R_ta_1(t+\Delta)^T-\beta_t\Big[\Big(c_{20}+\sum_{r=1}^{n_y+n_x}c_{2r}m_r\Big)v_2\Big(c_{20}+\sum_{s=1}^{n_y+n_x}c_{2s}m_s\Big)^T+\sum_{r,s=1}^{n_y+n_x}c_{2r}v_2c_{2s}^Tk_{rs}\Big]\beta_t^T+\\ &+\Big(c_{10}(t+\Delta)+\sum_{r=n_y+1}^{n_y+n_x}c_{1r}(t+\Delta)m_r(t+\Delta)\Big)v_1(t+\Delta)\Big(c_{10}(t+\Delta)+\sum_{s=n_y+1}^{n_y+n_x}c_{1s}(t+\Delta)m_s(t+\Delta)\Big)^T+\\ &+\sum_{r,s=n_y+1}^{n_y+n_x}c_{1r}(t+\Delta)v_1(t+\Delta)c_{1s}(t+\Delta)^Tk_{rs}.\end{aligned}\tag{144}$$

Proposal 5.5. Under the conditions of Proposal 5.3 the COFc is described by Eqs. (138)–(141) and (144), or by Eqs. (142)–(144).

Let us consider Eqs. (94)–(96) under the condition that the measuring system and the OPB in the SOTES-O can be subdivided so that $q(X_t,t)=q_tX_t$ and the noise $\zeta_t$ is additive. In this case, for the SOTES, SOTES-O and SOTES-N, Eqs. (94)–(96) may be presented in the following form:

$$\dot X_t=\varphi(X_t,t)+S_v\rho(X_t,t)+\chi_xV_\Omega,\tag{145}$$
$$\dot G_t=q(T_{st})X_t+b_2\zeta_t+\chi_gV_\Omega,\tag{146}$$
$$\dot\zeta_t=\varphi_2(\zeta_t,t)+C_\vartheta\mu(\zeta_t,t)+\chi_\zeta V_\Omega,\tag{147}$$
$$\dot T_{st}=\varphi_1(T_{st},t)+D_r\gamma(T_{st},t)+\chi_{Ts}V_\Omega.\tag{148}$$

Under the condition of statistical linearization we make the following replacements:

$$S_v\rho(X_t,t)\approx S_v\rho_0(m_x,K_x,t)+A_{\rho1}(m_x,K_x,t)X_t^0,\tag{149}$$
$$C_\vartheta\mu(\zeta_t,t)\approx C_\vartheta\mu_0(m_\zeta,K_\zeta,t)+A_{\mu1}(m_\zeta,K_\zeta,t)\zeta_t^0,\tag{150}$$
$$\varphi_\zeta(\zeta_t,t)\approx\varphi_{\zeta0}+\varphi_{\zeta1}\zeta_t^0.\tag{151}$$

So we get the following statistically linearized expressions:

$$\varphi(X_t,t)+S_v\rho(X_t,t)=\bar\varphi_{X0}+\bar\varphi_{X1}X_t,\qquad \varphi_\zeta(\zeta_t,t)+C_\vartheta\mu(\zeta_t,t)=\bar\varphi_{\zeta0}+\bar\varphi_{\zeta1}\zeta_t,\tag{152}$$

where

$$\begin{aligned}&\bar\varphi_{X0}=\varphi_{X0}(m_x,K_x,t)-[\varphi_{X1}(m_x,K_x,t)+A_{\rho1}(m_x,K_x,t)]m_x+S_v\rho_0(m_x,K_x,t),\\ &\bar\varphi_{X1}=\varphi_{X1}(m_x,K_x,t)+A_{\rho1}(m_x,K_x,t),\\ &\bar\varphi_{\zeta0}=\varphi_{\zeta0}(m_\zeta,K_\zeta,t)-[\varphi_{\zeta1}(m_\zeta,K_\zeta,t)+A_{\mu1}(m_\zeta,K_\zeta,t)]m_\zeta+C_\vartheta\mu_0(m_\zeta,K_\zeta,t),\\ &\bar\varphi_{\zeta1}=\varphi_{\zeta1}(m_\zeta,K_\zeta,t)+A_{\mu1}(m_\zeta,K_\zeta,t).\end{aligned}\tag{153}$$
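The statistical linearization coefficients such as $\rho_0$ and $\varphi_{\zeta0},\varphi_{\zeta1}$ above are expectations over the current Gaussian approximation of the state. A scalar Monte Carlo sketch for a hypothetical nonlinearity (tanh is chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
m, D = 0.5, 0.25                      # current mean and variance of X
X = rng.normal(m, np.sqrt(D), 200000)  # Gaussian approximation of the state

phi = np.tanh                          # hypothetical scalar nonlinearity

phi0 = phi(X).mean()                   # phi_0(m, D) = E[phi(X)]
phi1 = np.cov(phi(X), X)[0, 1] / D     # phi_1(m, D) = cov(phi(X), X) / D

# Statistically linearized model: phi(X) ~= phi0 + phi1 * (X - m)
print(phi0, phi1)
```

For Gaussian $X$, Stein's lemma gives the cross-check $\varphi_1=E[\varphi'(X)]$, which the test below exploits.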

Proposal 5.6. For Eqs. (145)–(148), under the statistical linearization conditions (152) and (153), when $q_t$ does not depend upon $T_{st}$, the suboptimal filtering algorithm is defined by the Eqs:

$$\dot{\hat X}_t=\bar\varphi_{X1}\hat X_t+\bar\varphi_{X0}+R_tq_t^Tv_g^{-1}[Z_t-(q_t\hat X_t+b_2\zeta_t)],\tag{154}$$
$$\dot R_t=\bar\varphi_{X1}R_t+R_t\bar\varphi_{X1}^T+v_x-R_tq_t^Tv_g^{-1}q_tR_t.\tag{155}$$

Proposal 5.7. Under the condition $\lambda_t=\lambda(T_{st},t)$, the Eqs for the SOTES, SOTES-O and SOTES-N may be presented in the form:

$$\dot X_t=\bar\varphi_{X1}X_t+\bar\varphi_{X0}+\chi_xV_\Omega,\tag{156}$$
$$\dot G_t=qX_t+b_2\zeta_t+\chi_gV_\Omega,\tag{157}$$
$$\dot\zeta_t=\bar\varphi_{\zeta1}\zeta_t+\bar\varphi_{\zeta0}+\chi_\zeta V_\Omega,\tag{158}$$
$$\dot T_{st}=\bar\varphi_{st1}T_{st}+\bar\varphi_{st0}+\chi_{st}V_\Omega.\tag{159}$$

The suboptimal algorithm under the condition

$$\lambda(T_{st},t)\approx\lambda_0(m_{st},K_{st})\tag{160}$$

is as follows:

$$\dot{\hat X}_t=\bar\varphi_{X1}\hat X_t+\bar\varphi_{X0}+R_t\lambda_0^Tv_g^{-1}[Z_t-(\lambda_0(m_{st},K_{st})\hat X_t+b_2\zeta_t)],\tag{161}$$
$$\dot R_t=\bar\varphi_{X1}R_t+R_t\bar\varphi_{X1}^T+v_x-R_t\lambda_0^Tv_g^{-1}\lambda_0R_t.\tag{162}$$

6. Peculiarities of new SOTES generations

As mentioned in the Introduction, information about the nomenclature and character of final production and its components arises at the lower levels of the hierarchical SOTES subsystems.

Analogously, in personnel LC subsystems the final products are categories of personnel with typical works and individual specialists with common works. In [1, 2] a methodology is presented for structuring personnel according to categories and typical process graphs providing the necessary professional level and health. An analogous structuring approach may be applied to elements of macroscopic subsystems at various SOTES levels. This makes it possible to design unified modeling and filtering methods for the SOTES, SOTES-O and SOTES-N and then to implement optimal processes within a unified budget. So we obtain unified methodological possibilities for horizontally and vertically integrated SOTES.

Analogously to Eqs. (106)–(109), the LC subsystem for an aggregate of given personnel categories is defined by the Eqs:

$$\dot X_P=\bar a_PX_P+a_{1P}G_P+a_{0P}+\chi_PV_{\Omega P},\tag{163}$$
$$\dot G_P=q_P(T_{sP})X_P+b_{2P}\zeta_P+\chi_{gP}V_{\Omega P},\tag{164}$$
$$\dot\zeta_P=c_{2P}\zeta_P+c_{0P}+\chi_{\zeta P}V_{\Omega P},\tag{165}$$
$$\dot T_{sP}=b_PX_P+b_{1P}T_{sP}+b_{0P}+\chi_{stP}V_{\Omega P},\tag{166}$$

where the index $P$ denotes variables and parameters of the personnel LC subsystem. According to Section 5 we get the following filtering Eqs:

$$\dot{\hat X}_P=\bar a_P\hat X_P+a_{1P}G_P+a_{0P}+R_Pq_P^Tv_{gP}^{-1}[Z_P-(q_P\hat X_P+b_{2P}\zeta_P)],\tag{167}$$
$$\dot R_P=\bar a_PR_P+R_P\bar a_P^T+v_P-R_Pq_P^Tv_{gP}^{-1}q_PR_P.\tag{168}$$

Let us consider a linear synergetic connection between $X$ and $X_P$:

$$X_P=\kappa_1X+\kappa_0.\tag{169}$$

Here $\kappa_1$ and $\kappa_0$ are known $n_P\times n_x$ and $n_P\times1$ synergetic matrices. Substituting (169) into Eqs. (163)–(166) we get the Eqs for the personnel subsystem and its observation expressed through $X$:

$$\dot X_P=\bar a_P(\kappa_1X+\kappa_0)+a_{1P}G_P+a_{0P}+\chi_PV_{\Omega P},\tag{170}$$
$$\dot G_P=q_P(T_{sP})(\kappa_1X+\kappa_0)+b_{2P}\zeta_P+\chi_{gP}V_{\Omega P}.\tag{171}$$

The corresponding Eqs with combined right-hand sides for the SOTES vector $X$ are:

$$\dot X_K=\begin{cases}\bar aX+a_1G_t+a_0+\chi_xV_\Omega, & K=\overline{1,n_x},\\ \bar a_P(\kappa_1X+\kappa_0)+a_{1P}G_P+a_{0P}+\chi_PV_{\Omega P}, & K=\overline{n_x+1,n_x+n_P}.\end{cases}\tag{172}$$

Analogously using Proposal 5.1 we get the Kalman-Bucy filter Eqs:

$$\dot{\hat X}_K=\begin{cases}\bar a\hat X+a_1G_t+a_0+R\lambda^Tv_g^{-1}[Z-(\lambda\hat X+b_2\zeta)], & K=\overline{1,n_x},\\ \bar a_P(\kappa_1\hat X+\kappa_0)+a_{1P}G_P+a_{0P}+R_P\lambda_P^Tv_{gP}^{-1}[Z_P-(\lambda_P(\kappa_1\hat X+\kappa_0)+b_{2P}\zeta_P)], & K=\overline{n_x+1,n_x+n_P},\end{cases}\tag{173}$$
$$\dot R=\bar aR+R\bar a^T+v_x-R\lambda^Tv_g^{-1}\lambda R,\tag{174}$$
$$\dot R_P=\bar a_PR_P+R_P\bar a_P^T+v_P-R_P\lambda_P^Tv_{gP}^{-1}\lambda_PR_P.\tag{175}$$

Eqs. (172)–(175) define the SOTES filter including the subsystems accompanying the production LC and the personnel taking part in the production LC.

Remark 6.1. Analogously we get the Eqs for the SOTES filter including the financial support subsystem and other subsystems.

Remark 6.2. Eqs. (174) and (175) are not coupled with the estimates and may be solved a priori.


7. Example

Let us consider a simple example illustrating the modeling of the influence of the SOTES-N noise on the rules and functional indexes of the subsystems accompanying production LC, together with its filtering and forecasting. The system includes stocks of spare parts (SP), an exploitation organization with a park of MP, and a repair organization (Figure 1).

Figure 1.

After Sale system (ASS) for MP.

At the initial time moment the necessary stock provides the required level of effective exploitation over the time period $[0,T]$. Let us consider the processes in the ASS connected with one type of composite part (CP) in number $N_T$. During park exploitation CP failures appear. Failed CP are either repaired and returned into exploitation or written off. If the level of park readiness in exploitation falls below the critical one, replacement CP are taken from the stocks.

In the state graph (Figure 2) the following notations are used: (1) being in stocks, in number $X_1$; (2) exploitation, in number $X_2$; (3) repair, in number $X_3$; (4) writing off, in number $X_4$. From Figure 2 we have $n_x=4$; the transitions are: $1\to2$, $v=1$ (the Poisson stream $\rho_{12}X_1$); $2\to3$ ($\rho_{23}X_2$); $2\to4$ ($\rho_{24}X_2$); $3\to2$; the number of transitions is $n_p=4$. As the index of efficiency we use the following coefficient of technical readiness [1, 2]:

Figure 2.

Graph of production state.

$$K_{tr}(T)=\frac1T\int_0^T\frac{X_2(\tau)}{N_T}\,d\tau=\frac1{TN_T}\int_0^TX_2(\tau)\,d\tau,\tag{176}$$

where

$$N_T=X_2+X_3+X_4\tag{177}$$

is the constant number of CP of the park in exploitation.

Note that the influence of the SOTES-N on the SOTES and SOTES-O is expressed in the following way: the system noise $\zeta_t$, as a factor of report documentation distortion, leads to a fictitiously underestimated $\bar K_{TR}(t)$. In the case when the relation

$$\bar K_{TR}(t)\ge\bar K_{TR}^{cr}(t)\tag{178}$$

is violated ($\bar K_{TR}^{cr}(t)$ being the critical floating park level), the stocks give the necessary CP amount. So we receive the possibility to exclude a defined CP amount from the turnover.

Finally, let us consider the filtering and forecasting algorithms of the ASS processes for the exposure of the noise $\zeta_t$.

Solution.

1. To obtain the Eq for $\bar K_{TR}(t)$ we set $X_{TR}(t)=\bar K_{TR}(t)$ and take into account Eqs. (106)–(109). So we have the following scalar Eqs:

$$\dot X_{TR}(t)=\frac{X_2(t)}{TN_T},\quad \dot X_1=-\rho_{12}X_1,\quad \dot X_2=\rho_{12}X_1-(\rho_{23}+\rho_{24})X_2,\quad \dot X_3=\rho_{23}X_2-\rho_{32}X_3,\quad \dot X_4=\rho_{24}X_2.\tag{179}$$

In our case $a_1=a_0=0$, $V_X=0$ and Eq. (179) may be presented in the vector form

$$\dot X=aX,\tag{180}$$

where

$$a=\begin{bmatrix}0 & 0 & (TN_T)^{-1} & 0 & 0\\ 0 & -\rho_{12} & 0 & 0 & 0\\ 0 & \rho_{12} & -(\rho_{23}+\rho_{24}) & 0 & 0\\ 0 & 0 & \rho_{23} & -\rho_{32} & 0\\ 0 & 0 & \rho_{24} & 0 & 0\end{bmatrix}.\tag{181}$$
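Under hypothetical stream intensities, the linear system (180)–(181) can be integrated directly to obtain the technical readiness coefficient (176). A minimal sketch (all numeric values are illustrative assumptions):

```python
import numpy as np

# Hypothetical stream intensities and park parameters (illustrative only)
rho12, rho23, rho24, rho32 = 0.05, 0.10, 0.02, 0.08
T, NT = 100.0, 50.0

# Transition matrix of Eq. (181); state vector: [X_TR, X1, X2, X3, X4]
a = np.array([
    [0.0, 0.0, 1.0 / (T * NT), 0.0, 0.0],    # X_TR accumulates X2/(T*NT)
    [0.0, -rho12, 0.0, 0.0, 0.0],             # stock
    [0.0, rho12, -(rho23 + rho24), 0.0, 0.0], # exploitation
    [0.0, 0.0, rho23, -rho32, 0.0],           # repair
    [0.0, 0.0, rho24, 0.0, 0.0],              # written off
])

x = np.array([0.0, 20.0, 50.0, 0.0, 0.0])     # initial stock 20, park 50
dt = 0.01
for _ in range(int(T / dt)):                  # integrate dX/dt = aX over [0, T]
    x = x + a @ x * dt

K_tr = x[0]                                   # readiness coefficient, Eq. (176)
print(K_tr)
```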

In practice the reported documentation is the complete set of documents containing SP demands from the stock and SP acknowledgement documents. So the noise $\zeta_t$ acts only if both the delivery and the acquisition sides take part in its realization. This is the reason to call it a system noise: it is carried out by a group of persons.

2. To set up the Eqs for the electronic control system we use an Eq of the type (107):

$$\dot G=qX+\zeta+\chi_gV_\Omega,\tag{182}$$

where $V_\Omega=[V_{TR}\ V_1\ V_2\ V_3\ V_4]^T$ are noises with intensity matrix $v_g$; $\lambda=[\lambda_{TR}\ \lambda_1\ \lambda_2\ \lambda_3\ \lambda_4]^T$ are the coefficients of the measuring block. In scalar form Eq. (182) may be presented as

$$\dot G_{TR}=\lambda_{TR}X_{TR}+\zeta+V_{TR},\quad \dot G_1=\lambda_1X_1+V_1,\quad \dot G_2=\lambda_2X_2+V_2,\quad \dot G_3=\lambda_3X_3+V_3,\quad \dot G_4=\lambda_4X_4+V_4.\tag{183}$$

3. The algorithm for describing the noise $\zeta_t$ depends on the participants. In the simplest case we use the error

$$\delta X_{TR}(t)=X_{TR}(t)-\bar K_{TR}^{cr}(t).\tag{184}$$

In this case we get a lag in the $G_{TR}$ measurement due to the variable $\zeta_t$:

$$\zeta_t=b_2[X_{TR}(t)-\bar K_{TR}^{cr}].\tag{185}$$

By the choice of the coefficient $b_2$ the necessary time tempo of documentation manipulation may be realized.

4. Using the Eqs of Proposals 5.1 and 5.2 we get the following matrix filtering Eqs for the system noise $\zeta_t$ at the background of the measuring noise $V_{TR}$:

$$\dot{\hat X}=a\hat X+R\lambda^Tv_g^{-1}[Z-(\lambda\hat X+\zeta)],\tag{186}$$
$$\dot R=aR+Ra^T-R\lambda^Tv_g^{-1}\lambda R,\tag{187}$$

where $Z=\dot G$, $\zeta=[\zeta(t)\ 0\ 0\ 0\ 0]^T$.
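A scalar sketch of the filter (186)–(187) for the $X_{TR}$ channel alone, with the system noise $\zeta$ treated as a known signal entering the innovation (all numbers are hypothetical, and the true state is held constant for clarity):

```python
import numpy as np

rng = np.random.default_rng(3)
dt, n = 0.001, 20000
lam_tr = 1.0                  # measuring-block coefficient lambda_TR
vg = 0.04                     # measurement white-noise intensity
b2, K_crit = 0.5, 0.6         # hypothetical gain and critical readiness level

X = 0.8                       # true X_TR-like state, held constant for clarity
zeta = b2 * (X - K_crit)      # system noise, Eq. (185), treated as known
Xh, R = 0.0, 1.0              # filter estimate and error covariance
for _ in range(n):
    Z = lam_tr * X + zeta + np.sqrt(vg / dt) * rng.normal()
    # Filter Eq. (186): the known zeta enters the innovation
    Xh += R * lam_tr / vg * (Z - lam_tr * Xh - zeta) * dt
    # Riccati Eq. (187) with a = 0 for this single channel
    R += -(R * lam_tr) ** 2 / vg * dt

print(Xh, R)
```

With the drift zero, $R$ decays hyperbolically and the estimate settles on the true state.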

R e m a r k 7.1. Realization of the described filtering solutions for internal noises needs a priori information about the basic OTES characteristics. So special methods and algorithms are needed.

5. Finally, the linear COFc is defined by Eqs. (138)–(144) for various forecasting times $\Delta$.

R e m a r k 7.2. In case of SOTES with two subsystems using Eqs. (172)(174) we have the following Kalman-Bucy filter:

$$\dot{\hat X}_K=\begin{cases}a\hat X+R\lambda^Tv_g^{-1}[Z-(\lambda\hat X+\zeta)], & K=\overline{1,n_x},\\ a_P(\kappa_1\hat X+\kappa_0)+R_P\lambda_P^Tv_{gP}^{-1}[Z_P-(\lambda_P(\kappa_1\hat X+\kappa_0)+\zeta_P)], & K=\overline{n_x+1,n_x+n_P},\end{cases}$$

where $\zeta_P$ is the noise acting on the functional index of the personnel attendant subsystem.

These results are included into experimental software tools for modeling and forecasting of cost and readiness for parks of aircraft [1, 2].


8. Conclusion

For new generations of synergetic OTES (SOTES), methodological support for the approximate solution of probabilistic modeling and mean-square filtering and forecasting problems is generalized. The generalization is based on sub- and conditionally optimal filtering. Special attention is paid to linear systems and to linear systems with parametric white Gaussian noises.

Problems of optimal, sub- and conditionally optimal filtering and forecasting in product and staff subsystems at background noise in SOTES are considered. Nowadays, for highly available systems, the creation of basic systems engineering principles, approaches and information technologies (IT) for SOTES arising from modern spontaneous markets, at the background of the inertially developing world economic crisis and weakening global market relations under conditions of reinforced competition and counteraction, is very important. Big enterprises need such IT due to essential local and systematic economic losses. It is necessary to form general approaches for the estimation of stochastic processes (StP) and parameters (filtering, identification, calibration, etc.) in SOTES at background noises. A special observation SOTES (SOTES-O) with its own organization-product resources is introduced, the external noise being information from a special SOTES acting as a noise source (SOTES-N). A conception of the SOTES structure for systems of technical, staff and financial support is developed. Linear, linear with parametric noises and nonlinear stochastic (discrete and hybrid) equations describing the organization-production block (OPB) for three types of SOTES with their planning-economical estimating divisions are worked out. The SOTES-O is described by two interconnected subsystems: the SOTES state sensor and the OPB supporting the sensor with necessary resources. After a short survey of modern modeling, basic sub- and conditionally optimal filtering and forecasting algorithms and IT for typical SOTES are given.

The influence of the SOTES-N noise on the rules and functional indexes of the subsystems accompanying production life cycle, together with its filtering and forecasting, is considered.

Experimental software tools for modeling and forecasting of cost and technical readiness for parks of aircraft are developed.

We are now developing the presented results on the basis of cognitive approaches [12].

Advertisement

Acknowledgments

The authors would like to thank the Russian Academy of Sciences for supporting the work presented in this chapter.

The authors are much obliged to Mrs. Irina Sinitsyna and Mrs. Helen Fedotova for translation and manuscript preparation.

References

  1. Sinitsyn IN, Shalamov AS. Lectures on Theory of Integrated Logistic Support Systems. 2nd ed. Moscow: Torus Press; 2019. p. 1072 (in Russian)
  2. Sinitsyn IN, Shalamov AS. Probabilistic modeling, estimation and control for CALS organization-technical-economic systems. In: Kostogryzov A, Korolev V, editors. Probability, Combinatorics and Control. London, UK: IntechOpen; 2020. p. 117-141. DOI: 10.5772/intechopen.79802
  3. Sinitsyn IN, Shalamov AS. Optimal estimation and control in stochastic synergetic organization-technical-economic systems. Filtering in product and staff subsystems at background noise (I). Highly Available Systems. 2019;15(4):27-48. DOI: 10.18127/j20729472-201904-04 (in Russian)
  4. Sinitsyn IN, Shalamov AS. Optimal estimation and control in stochastic synergetic organization-technical-economic systems. Filtering in product and staff subsystems at background noise (II). Highly Available Systems. 2021;17(1):51-72. DOI: 10.18127/j20729472-202101-05 (in Russian)
  5. Sinitsyn IN, Shalamov AS. Problems of estimation and control in synergetic organization-technical-economic systems. In: VII International Conference "Actual Problems of System and Software". Moscow; 2021 (in print)
  6. Haken H. Synergetics: An Introduction. Springer Series in Synergetics. Vol. 3. Berlin, Heidelberg: Springer; 1983
  7. Haken H. Advanced Synergetics. Springer Series in Synergetics. Vol. 20. Berlin, Heidelberg: Springer; 1987
  8. Kolesnikov AA. Synergetical Control Theory. Taganrog: TRTU; Moscow: Energoatomizdat; 1994. p. 538 (in Russian)
  9. Pugachev VS, Sinitsyn IN. Stochastic Systems. Theory and Application. Singapore: World Scientific; 2001. p. 908
  10. Sinitsyn IN, editor. Academician Pugachev Vladimir Semenovich: To the 100th Anniversary. Moscow: Torus Press; 2011. p. 376 (in Russian)
  11. Sinitsyn IN. Kalman and Pugachev Filters. 2nd ed. Moscow: Logos; 2007. p. 772 (in Russian)
  12. Kostogryzov A, Korolev V. Probabilistic methods for cognitive solving of some problems in artificial intelligence systems. In: Kostogryzov A, Korolev V, editors. Probability, Combinatorics and Control. London, UK: IntechOpen; 2020. p. 3-34. DOI: 10.5772/intechopen.89168
