Open access peer-reviewed chapter - ONLINE FIRST

Methods of Conditionally Optimal Forecasting for Stochastic Synergetic CALS Technologies

Written By

Igor N. Sinitsyn and Anatoly S. Shalamov

Submitted: February 1st, 2022 Reviewed: February 11th, 2022 Published: April 18th, 2022

DOI: 10.5772/intechopen.103657

From the Edited Volume

Mathematical Concepts of Forecasting [Working Title]

Dr. Andrey Kostogryzov and Dr. Nikolay Andreevich Makhutov



Problems of optimal, suboptimal, and conditionally optimal filtering and forecasting in the product and staff subsystems of synergetic organizational-technical-economical systems (SOTES) operating against background noise are considered. For highly available systems it is now very important to create basic systems-engineering principles, approaches, and information technologies (IT) for SOTES in modern spontaneous markets, against the background of the ongoing world economic crisis and weakening global market relations under intensified competition and counteraction. Big enterprises need such IT due to essential local and systematic economic losses. It is necessary to form general approaches for the estimation of stochastic processes and parameters in SOTES against background noises. The following notation is introduced: a special observation SOTES (SOTES-O) with its own organization-product resources, and a special SOTES acting as a noise source (SOTES-N) whose information enters as internal noise. A concept of the SOTES structure for systems of technical, staff, and financial support is developed. Linear, linear with parametric noises, and nonlinear stochastic (discrete and hybrid) equations describing the organization-production block (OPB) for three types of SOTES with their planning-economical estimating divisions are worked out. The SOTES-O is described by two interconnected subsystems: a SOTES state sensor and an OPB supporting the sensor with the necessary resources. After a short survey of modern modeling, the basic algorithms and IT of sub- and conditionally optimal filtering and forecasting for typical SOTES are given. The influence of SOTES-N noise on the rules and functional indexes of the subsystems accompanying life-cycle production, and its filtering and forecasting, is considered. Experimental software tools for modeling and forecasting of cost and technical readiness for parks of aircraft are developed.


Keywords

  • sub- and conditionally optimal filtering and forecasting (COF and COFc)
  • continuous acquisition logic support (CALS)
  • organizational-technical-economical systems (OTES)
  • probability modeling
  • synergetical OTES (SOTES)

1. Introduction

Stochastic continuous acquisition logic support (CALS) is the basis of integrated logistic support (ILS) in the presence of noises and stochastic factors in organizational-technical-economic systems (OTES). The stochastic CALS methodology was first developed in [1, 2, 3, 4, 5]. According to contemporary notions, ILS in the broad sense, being the basis of CALS, represents the system of scientific, design-project, organization-technical, manufacturing, and informational-management technologies, means, and measures applied during the life cycle (LC) of high-quality manufacturing products (MP) for obtaining the maximal required level of quality and availability at minimal technical and exploitation costs.

Contemporary standards embodying the CALS vanguard methodology do not adequately answer these purposes. The CALS standards have debatable achievements and the following essential shortcomings:

  • the informational-technical-economic models are not dynamical;

  • the integrated database for the analysis of logistic support is redundant on the one hand and, on the other hand, does not contain the information necessary for complex through-life-cycle cost estimation according to modern decision support algorithms;

  • the computational algorithms for the various LC stages are oversimplified and do not permit forecasting with the necessary accuracy under internal and external noises and stochastic factors.

So the ILS standards do not provide full realization of the advantages of modern and perspective information technologies (IT), including staff structure, in the field of stochastic modeling and estimation of two interconnected spheres: the techno-sphere (techniques and technologies) and the social one.

These stochastic systems (StS) form a new class of systems: OTES-CALS systems. Such systems are destined for the production and realization of various services, including engineering and other categorical works providing exploitation, aftersale MP support and repair, and the staff, medical, economical, and financial support of all processes. The newly developed approach is based on new stochastic modeling and estimation approaches. Nowadays such IT are widely used in technical applications of complex systems functioning in stochastic media.

Estimation in such IT is based on: (1) a model of the OTES; (2) a model of the OTES-O (observation system); (3) a model of the OTES-N (noise support); (4) criteria, estimation methods, and models. For the new generations of synergetic OTES (SOTES) the measuring model and the organization-production block (OPB) in the OTES-O are separated.

Synergetics, being an interdisciplinary science, is based on the principle of self-organization of open nonlinear dissipative and nonconservative systems. According to [6, 7], in equilibrium all system parameters are stable, and variations in them arise due to minimal deviations of some control parameters. As a result the system begins to move away from the equilibrium state with increasing velocity. Further, the instability process leads to total chaos and, as a result, bifurcation appears. After that a new regime is gradually established, and so on.

The existence of a big amount of freely entering elements and subsystems of various levels is the basic principle of self-organization. One of the inalienable properties of a synergetical system is the existence of attractors. An attractor is defined as an attraction set (manifold) in phase space which is the aim of all nonlinear trajectories of a moving initial point (IP). These manifolds are time invariant and are defined from the equilibrium equation. Invariant manifolds are also defined as constraints of a nonconservative synergetical system. In the synergetical control theory [8] one passes from natural, unsupervised behavior according to the algorithms of a dissipative structure to controlled motion of the IP along artificially introduced demanded invariant manifolds. As the control object of a synergetical system is always nonlinear, its dynamics may be described by nonlinear differential equations. In the case of big dimension, the order parameters are introduced by revealing the most slow variables and the quicker subordinate variables. In hierarchical synergetic systems this approach is called the subordination principle: at the lower hierarchy levels the processes proceed with maximal velocity, and the invariant manifolds are connected with the slow dynamics.

Section 2 is devoted to probabilistic modeling problems in typical StS. Special attention is paid to hybrid systems, and to such specific StS as linear systems, linear systems with the Gaussian parametric noises, and nonlinear systems reducible to quasilinear ones by the normal approximation method. The theory of conditionally optimal forecasting in typical StS, suitable for quick off-line and on-line application, is developed in Section 3. In Section 4 the basic off-line algorithms of probability modeling in SOTES are presented. The basic conditionally optimal filtering and forecasting quick off-line and on-line algorithms for SOTES are given in Section 5. Peculiarities of new SOTES generations are described in Section 6. A simple example illustrating the influence of SOTES-N noise on the rules and functional indexes of the subsystems accompanying life-cycle production, and its filtering and forecasting, is presented in Section 7. Experimental software tools for forecasting of cost and technical readiness for aircraft parks are developed.


2. Probabilistic modeling in StS

Let us consider basic mathematical models of stochastic OTES:

  • continuous models defined by stochastic differential equations;

  • discrete models defined by stochastic difference equations;

  • hybrid models defined by a mixture of difference and differential equations.

Probabilistic analytical modeling of stochastic systems (StS) is based on the solution of the deterministic evolutionary equations (Fokker-Planck-Kolmogorov, Pugachev, Feller-Kolmogorov) for the one- and finite-dimensional distributions. For stochastic equations of high dimension the solution of an evolutionary equation meets principal computational difficulties.

In practice, taking into account the specific properties of the StS, it is possible to design rather simple stochastic models using a priori data about the StS structure, parameters, and stochastic factors. It is very important to design models for the different stages of the life cycle (LC) based on the available information. At the last LC stage we need hybrid stochastic models.

Let us consider basic general and specific stochastic models and the basic algorithms of probabilistic analytical modeling. Special attention will be paid to algorithms based on the normal approximation, statistical linearization, and equivalent linearization methods. For principally nonlinear non-Gaussian StS the corresponding parametrization methods may be recommended [9].

2.1 Continuous StS

Continuous stochastic models of systems involve the action of various random factors. While using models described by differential equations the inclusion of random factors leads to the equations which contain random variables.

The differential equations for a StS (more precisely, for a stochastic model of a system) must be replaced in the general case by the equations [9, 10]

Ż = F(Z, X, t),  Y = G(Z, t),   (1)

where F(z, x, t) and G(z, t) are random functions of the p-dimensional vector z, the n-dimensional vector x, and time t (as a rule G is independent of x). In consequence of the randomness of the right-hand sides of Eq. (1), and perhaps also of the initial value of the state vector Z0 = Z(t0), the state vector Z and the output Y of the system are random variables at any fixed time moment t. This is the reason to denote them by capital letters, as well as the random functions in the right-hand sides of Eq. (1). The state vector Z(t) and the output Y(t), considered as functions of time t, represent random functions of time t (in the general case vector random functions). In every specific trial the random functions F(z, x, t) and G(z, t) are realized in the form of some functions f(z, x, t) and g(z, t), and these realizations determine the corresponding realizations z(t), y(t) of the state vector Z(t) and the output Y(t) satisfying the differential equations (which are the realizations of Eq. (1))

ż = f(z, x, t),  y = g(z, t).

Thus we come to the necessity of studying differential equations with random functions in the right-hand sides.

In practice the randomness of the right-hand sides of the differential equations usually arises from the fact that they represent known functions, some of whose arguments are considered as random variables or as random functions of time t and perhaps of the state and the output of the system. In the latter case these functions are usually replaced by random functions of time, obtained by assuming that their arguments Z and Y are known functions of time corresponding to the nominal regime of system functioning. In practical problems such an assumption usually provides sufficient accuracy.

So we may restrict ourselves to the case where all uncertain variables in the right-hand sides of the differential equations may be considered as random functions of time. Then Eq. (1) may be written in the form

Ż = f(Z, X, N1(t), t),  Y = g(Z, N2(t), t),   (2)

where f and g are known functions whose arguments include the random functions of time N1(t) and N2(t). The initial state vector Z0 of the system in practical problems is always a random variable independent of the random functions N1(t) and N2(t) (i.e., independent of the random disturbances acting on the system).

Every realization [n1(t)^T, n2(t)^T]^T of the random function [N1(t)^T, N2(t)^T]^T determines the corresponding realizations f(z, x, n1(t), t), g(z, n2(t), t) of the functions f(z, x, N1(t), t), g(z, N2(t), t), and in accordance with this Eqs. (2) determine the respective realizations z(t) and y(t) of the state vector Z(t) and the output Y(t).

Following [9, 10] let us consider the differential equation

Ẋ = a(X, t) + b(X, t) V(t),   (3)

where a(x, t), b(x, t) are functions mapping R^p × R into R^p and R^(p×q), respectively. Eq. (3) is called a stochastic differential equation if the (generalized) random function V(t) represents a white noise in the strict sense. Let X0 be a random vector of the same dimension as the random function X(t). Eq. (3) with the initial condition X(t0) = X0 determines the stochastic process (StP) X(t).

In order to give an exact sense to Eq. (3) and to the above statement we integrate Eq. (3) formally in the limits from t0 to t with the initial condition X(t0) = X0. As a result we obtain

X(t) = X0 + ∫[t0,t] a(X(τ), τ) dτ + ∫[t0,t] b(X(τ), τ) V(τ) dτ,

where the first integral represents a mean square (m.s.) integral. Introducing the StP with independent increments W(t), whose derivative is the white noise V(t), we rewrite the previous equation in the form

X(t) = X0 + ∫[t0,t] a(X(τ), τ) dτ + ∫[t0,t] b(X(τ), τ) dW(τ).   (4)

This equation has an exact sense. Stochastic differential Eq. (3), or the equivalent equation

dX = a(X, t) dt + b(X, t) dW(t)   (5)

with the initial condition X(t0) = X0, represents a concise form of Eq. (4).

Eq. (4), in which the second integral represents a stochastic Ito integral, is called a stochastic Ito integral equation, and the corresponding differential Eq. (3) or (5) is called a stochastic Ito differential equation.

A random process X(t) satisfying Eq. (4), in which the integrals represent the m.s. limits of the corresponding integral sums, is called a mean square or, shortly, an m.s. solution of the stochastic integral Eq. (4) and of the corresponding stochastic differential Eq. (3) or (5) with the initial condition X(t0) = X0.

If the integrals in Eq. (4) exist for every realization of the StP W(t) and X(t), and equality (4) is valid for every realization, then the random process X(t) is called a solution in realizations of Eq. (4) and of the corresponding stochastic differential Eqs. (3) and (5) with the initial condition X(t0) = X0.

Stochastic Ito differential Eqs. (3) and (5) with the initial condition X(t0) = X0, where X0 is a random variable independent of the future values of the white noise V(s), s > t0 (of the future increments W(s) − W(t), s > t ≥ t0, of the process W), determine a Markov random process.
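As a numerical illustration (ours, not the chapter's), an m.s. solution of an Ito equation can be approximated by the Euler-Maruyama scheme. Here a(x, t) = −αx and b(x, t) = σ give an Ornstein-Uhlenbeck process whose stationary variance σ²/(2α) is known in closed form, so the simulation can be checked against it; all parameter values are illustrative assumptions:

```python
import numpy as np

# Euler-Maruyama approximation of dX = a(X, t) dt + b(X, t) dW with
# a(x, t) = -alpha * x and b(x, t) = sigma (Ornstein-Uhlenbeck process).
# Parameter values are illustrative assumptions, not taken from the chapter.
rng = np.random.default_rng(0)
alpha, sigma = 1.0, 0.5
dt, n_steps, n_paths = 1e-3, 10_000, 5_000

x = np.zeros(n_paths)                              # X(t0) = 0 in every realization
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)     # Wiener increments, Var = dt
    x += -alpha * x * dt + sigma * dW              # one Euler-Maruyama step

var_hat = x.var()   # should be close to sigma**2 / (2 * alpha) = 0.125
```

The scheme converges to the m.s. solution as dt → 0; for state-dependent b(x, t) higher-order (Milstein-type) corrections are usually preferable.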

In the case of W being a vector StP with independent increments, probabilistic modeling of the one- and n-dimensional characteristic functions g1 = E exp(iλ^T Z(t)) and gn = E exp(i Σ(k=1..n) λk^T Z(tk)) and of the densities f1 and fn is based on the following integrodifferential Pugachev equations:


where i is the imaginary unit,


For the Wiener StP W with intensity matrix ν(t) we use the Fokker-Planck-Kolmogorov equations:

∂f1/∂t = −Σ(h) ∂[a_h(x, t) f1]/∂x_h + (1/2) Σ(h,l) ∂²{[b(x, t) ν(t) b(x, t)^T]_(hl) f1}/∂x_h ∂x_l

at the initial conditions (12).

2.2 Discrete StS

For discrete vector StP defined by the regression and autoregression StS


the equations for the one- and n-dimensional densities and characteristic functions are as follows:


Here E is the symbol of mathematical expectation, and hk is the characteristic function of Vk,


where (s1, …, sn) is a permutation of (1, …, n) such that ks1 < ks2 < … < ksn.

In the case of the autoregression StS (15) the basic characteristic functions are given by the equations:


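These relations can be verified by direct simulation. A sketch with our own illustrative coefficients: for the stationary Gaussian scalar autoregression Yk+1 = aYk + Vk the one-dimensional characteristic function is exp(−λ²q/(2(1 − a²))), and the empirical characteristic function of a long trajectory should approach it:

```python
import numpy as np

# Scalar autoregression StS Y_{k+1} = a Y_k + V_k with Gaussian V_k ~ N(0, q).
# Coefficients a, q and the test point lam are illustrative assumptions.
rng = np.random.default_rng(1)
a, q, n = 0.8, 1.0, 200_000
v = rng.normal(0.0, np.sqrt(q), n)

y = np.zeros(n)
for k in range(n - 1):
    y[k + 1] = a * y[k] + v[k]          # autoregression recursion

lam = 0.7
g_emp = np.exp(1j * lam * y[n // 2:]).mean()        # empirical characteristic function
g_th = np.exp(-0.5 * lam**2 * q / (1 - a**2))       # stationary Gaussian value
```

The second half of the trajectory is used so that the transient from the zero initial condition has died out.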
2.3 Hybrid continuous and discrete StS

When the system described by Eq. (2) is automatically controlled, the function which determines the goal of control is measured with random errors, and the control system components forming the required input x are always subject to noises, i.e. to random disturbances. The equations forming the required input and the real input, including the additional variables necessary to transform these equations into a first-order equation, may be written in the form


where U is the vector composed of the required input and all the auxiliary variables, and N3(t) is some random function of time t (in the general case a vector random function). Writing down these equations we have taken into account that, owing to the action of the noises described by the random function N3(t), the vector U and the input X represent random functions of time, and in accordance with this we denote them by capital letters. These equations together with the first Eq. (2) form the set of equations


These equations may be written in the form of one equation determining the extended state vector of the system Z1 = [Z^T, X^T, U^T]^T:


where N4(t) = [N1(t)^T, N3(t)^T]^T, and


As a result, dropping the indices of Z1 and f1, we replace the set of Eqs. (2) and (24) by the equations


In practical problems the random functions N1(t) and N2(t) are practically always independent. But the random function N3(t) depends on N1(t) and N2(t), due to the fact that the function h(Y(t)) = h(g(Z, N2(t), t), t) and its total derivative with respect to time t enter into Eq. (24). Therefore the random functions N2(t) and N4(t) are dependent. Introducing the composite vector random function N(t) = [N1(t)^T, N2(t)^T, N3(t)^T]^T we rewrite the equations obtained in the form


Thus, in the case of an automatically controlled system described by Eq. (2), after coupling Eq. (2) with the equations forming the required and the real inputs, we come to equations of the form (23) containing the random function N(t).

If a StS is controlled by digital computers, we decompose the extended state vector Z into two subvectors Z′ and Z″, Z = [Z′^T, Z″^T]^T, one of which, Z′, represents a continuously varying random function, while the other, Z″, is a step random function varying by jumps at the prescribed time moments tk (k = 0, 1, 2, …). Then, introducing the random function


and putting Zk = Z(tk) (k = 0, 1, 2, …), we get the set of equations describing the evolution of the extended state vector of the controlled system


where Nk (k = 0, 1, 2, …) are some random variables, and N(t) is some random function.

For hybrid StS (HStS) let us now consider the case of a discrete-continuous system whose state vector Z = [Z′^T, Z″^T]^T (extended in the general case) is determined by the set of equations


where Z′k is the value of Z′(t) at t = tk, Zk = [Z′k^T, Z″k^T]^T = Z(tk) (k = 0, 1, 2, …), a, b, ωk are functions of the indicated arguments, 1Ak(t) is the indicator of the interval Ak = [tk, tk+1) (k = 0, 1, 2, …), V is a white noise in the strict sense, and {Vk} is a sequence of independent random variables independent of the white noise V. The one-dimensional characteristic function h1(μ; t) of the process with independent increments W(t), whose weak m.s. derivative is the white noise V, and the distributions of the random variables Vk are assumed known.
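A discrete-continuous system of this type is straightforward to simulate: the continuous component is advanced by Euler-Maruyama between the prescribed moments tk, while the step component changes only by jumps at tk. All parameters below are our illustrative assumptions, not taken from the chapter:

```python
import numpy as np

# Hybrid (discrete-continuous) StS sketch: continuous component xc driven by
# white noise, step component xd jumping only at prescribed moments t_k.
# All numeric values are illustrative assumptions.
rng = np.random.default_rng(2)
alpha, sigma = 1.0, 0.3          # continuous-part drift and diffusion
c, qk = 0.5, 0.1                 # jump map xd <- c*xd + V_k with Var V_k = qk
dt, t_jump, T = 1e-3, 0.5, 5.0
jump_every = int(round(t_jump / dt))

xc, xd = 0.0, 1.0                # continuous and step components
n_jumps = 0
for i in range(1, int(T / dt) + 1):
    if i % jump_every == 0:      # prescribed moment t_k: the step component jumps
        xd = c * xd + rng.normal(0.0, np.sqrt(qk))
        n_jumps += 1
    # between jumps the continuous component sees xd as a piecewise-constant input
    xc += (-alpha * xc + xd) * dt + sigma * rng.normal(0.0, np.sqrt(dt))
```

Between jumps xd is constant, which is exactly the indicator-function structure 1Ak(t) in the equations above.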

Introducing the random processes


we derive in the same way as before the equation for the one-dimensional characteristic function


of the StP Z̄(t):


Taking t0 as the initial moment, the initial condition for Eq. (25) is


where g0(ρ) is the characteristic function of the initial value Z0 = Z(t0) of the process Z(t).

At the moment tk the value of g1(λ; t) is evidently equal to


i.e. to the corresponding value of the characteristic function gk(ρ) of the random variable Zk = [Z′k^T, Z″k^T]^T. If the function χ(μ; t) is a continuous function of t at any μ, then g1(λ; t) tends to


when t → tk+1, i.e. to the joint characteristic function of the random variables Z′k+1, Z″k, Z′k.


At the moment tk+1 the function g1(λ; t) changes its value by a jump and becomes equal to


To evaluate this, we substitute here the expression of Z″k+1 from the last of Eqs. (27). Then we get


Owing to the independence of the sequence of random variables {Vk} of the white noise V, and to the independence of Vk of V0, V1, …, Vk−1, the random variables Z′k+1, Z″k, Z′k are independent of Vk. Hence the expectation in the right-hand side of Eq. (30) is completely determined by the known distribution of the random variable Vk and by the joint characteristic function of the random variables Z′k+1, Z″k, Z′k, i.e. by g1(λ; tk+1 − 0). So Eq. (26) with the initial condition (27) and formula (28) determine the evolution of the one-dimensional characteristic function g1(λ; t) of the process Z̄(t) and its jump-wise increments at the moments tk (k = 1, 2, …).

In the case of the discrete-continuous HStS whose state vector is determined by the equations


we get in the same way the equation for the n-dimensional characteristic function gn(λ1, …, λn; t1, …, tn) of the random process Z̄(t),


and the formula for the value of gn(λ1, …, λn; t1, …, tn) at tn = tk+1 > tn−1,


At the point tn = tk+1 the function gn(λ1, …, λn; t1, …, tn) changes its value by a jump from


to gn(λ1, …, λn; t1, …, tn−1, tk+1) given by (33).

The right-hand side of (33) is completely determined by the known distribution of the random variable Vk and by the joint characteristic function gn(λ1, …, λn; t1, …, tn−1, tk+1 − 0) of the random variables Z̄(t1), …, Z̄(tn−1), Z′k+1, Z″k, Z′k. Hence Eq. (32) with the corresponding initial condition and formula (33) determine the evolution and the jump-wise increments of gn(λ1, …, λn; t1, …, tn) at the points tk+1 as tn increases starting from the value tn−1.

2.4 Linear StS

For a differential linear StS with W being a StP with independent increments, V = Ẇ,

Ż = a(t) Z + a0(t) + b(t) V,

the corresponding equations for the n-dimensional characteristic function are as follows:


Explicit formulae for the n-dimensional characteristic function are as follows:


Here u = u(t, τ) is the fundamental solution of the equation u̇ = au at the condition u(t, t) = I (the unit n × n matrix).

In the case of the Gaussian white noise V with intensity matrix ν the characteristic function gn is Gaussian:


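Since the characteristic function stays Gaussian, a linear StS is fully described by its mean and covariance equations ṁ = am + a0, K̇ = aK + Ka^T + bνb^T. A sketch (our example: a stochastically forced oscillator with a0 = 0 and parameters chosen so that the stationary covariance is known analytically, K11 = ν/(4ζω³), K22 = ν/(4ζω)):

```python
import numpy as np

# Moment equations of a linear StS: m' = a m, K' = a K + K a^T + b nu b^T.
# Example system: oscillator x'' + 2*zeta*omega*x' + omega^2 x = white noise.
# All numeric values are illustrative assumptions.
omega, zeta, q = 1.0, 0.5, 1.0
a = np.array([[0.0, 1.0], [-omega**2, -2 * zeta * omega]])
b = np.array([[0.0], [1.0]])
nu = np.array([[q]])

m = np.array([1.0, 0.0])       # initial mean
K = np.zeros((2, 2))           # initial covariance
dt = 1e-3
for _ in range(int(30.0 / dt)):
    m = m + dt * (a @ m)                              # mean equation
    K = K + dt * (a @ K + K @ a.T + b @ nu @ b.T)     # covariance equation

# stationary values: K[0,0] -> q/(4*zeta*omega**3) = 0.5, K[1,1] -> q/(4*zeta*omega) = 0.5
```

These two ordinary matrix equations replace the evolutionary equation entirely in the linear Gaussian case, whatever the dimension.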
2.5 Linear StS with the parametric Gaussian noises

In the case of StS with the Gaussian discrete additive and parametric noises described by the equation


we have an infinite set of equations which in this case is decomposed into independent sets of equations for the initial moments αk of each given order


The corresponding equations of the correlational theory are as follows:


where khl is the covariance of the components Zh and Zl of the vector Z (h, l = 1, …, p). Eq. (41) with the initial conditions K(t0) = K0, kpq(t0) = kpq,0 completely determines the covariance matrix K(t) of the vector Z(t) at any time moment t after finding its expectation m.

For discrete StS with the Gaussian parametric noises the correlational equations may be presented in the following form:


2.6 Normal approximation method

For StS of high dimensions the methods of normal approximation (MNA) are the only ones used in engineering practice. In the case of additive noises, b(x, t) = b0(t), the MNA is known as the method of statistical linearization (MSL).
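The MSL idea can be made concrete on a scalar example (ours, not from the chapter): a nonlinearity φ(X) is replaced by f0 + k1(X − m) with f0 = Eφ(X) and the m.s.-optimal gain k1 = Cov(φ(X), X)/D, which for Gaussian X equals Eφ′(X) by Stein's identity. For the illustrative choice φ(x) = x³:

```python
import numpy as np

# Statistical linearization of phi(x) = x^3 for X ~ N(m, D):
# phi(X) ≈ f0 + k1 (X - m), f0 = E phi(X), k1 = Cov(phi(X), X) / D.
# For Gaussian X both coefficients are available in closed form.
m, D = 0.5, 2.0
f0 = m**3 + 3 * m * D          # E X^3 = m^3 + 3 m D for a Gaussian
k1 = 3 * (m**2 + D)            # E 3X^2 (Stein's identity: k1 = E phi'(X))

# Monte Carlo check of both coefficients
rng = np.random.default_rng(3)
x = rng.normal(m, np.sqrt(D), 1_000_000)
f0_mc = (x**3).mean()
k1_mc = ((x**3) * (x - m)).mean() / D   # sample Cov(phi(X), X) / D
```

Substituting f0 and k1 into the moment equations turns the nonlinear StS into a quasilinear one, which is what the MNA equations below exploit.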

The basic equations of the MNA are as follows [9]:


Eq. (49) may be rewritten in the form




For discrete StS the equations of the MNA may be presented in the following form:


at the conditions


The corresponding MNA equations for Eq. (15) are a special case of Eqs. (54)–(56).


3. Conditionally optimal forecasting in StS

Optimal forecasting is well developed for linear StS and off-line regimes [9]. For nonlinear StS, linear StS with the parametric Gaussian noises, and on-line regimes, different versions of approximate (suboptimal) methods have been proposed. In [9] general results for complex statistical criteria and Bayes criteria are developed. Let us consider m.s. conditionally optimal forecasters for StS being models of stochastic OTES.

3.1 Continuous StS

Conditionally optimal forecasting (COFc) for the mean square error (mse) criterion was suggested by Pugachev [10]. Following [9] we define the COFc as the forecaster from a class of admissible forecasters which, at any joint distributions of the variables Xt (state variable), X̂t (estimate of Xt), Yt (observation variable), at forecasting time Δ > 0 and time moments t ≥ t0, in the continuous (differential) StS


(W1, W2 being independent white noises with independent increments; φ, φ1, ψ, ψ1 being known nonlinear functions) gives the best estimate of Xs+Δ at the infinitesimally close time moment s > t, s → t, realizing the minimum of E|X̂s − Xs+Δ|². Then the COFc at any time moment t ≥ t0 is reduced to finding the optimal coefficients αt, βt, γt in the following equation:


Here ξ = ξ(X̂t, Yt, t), η = η(X̂t, Yt, t) are given functions of the current observations Yt, the estimate X̂t, and time t.

Using the theory of conditionally optimal estimation [13, 17, 18] for the equation


we get the following equations for the coefficients αt, βt, γt


at the condition det κ22 ≠ 0.

The theory of conditionally optimal forecasting gives the opportunity for simultaneous filtering of the state and identification of StS parameters for different forecasting times Δ. All complex calculations for COFc design do not need current observations and may be performed from a priori data during design procedures. Practical application of such a COFc is reduced to the integration of Eq. (58). The time derivative of the error covariance matrix Rt is defined by the formula


The mathematical expectations in Eqs. (60)–(63) are computed on the basis of the joint distribution of the random variables [Xt^T, Xt+Δ^T, Yt^T, X̂t^T, X̂t+Δ^T]^T by solving the following Pugachev equation for the characteristic function g2(λ1, λ2, λ3, μ1, μ2, μ3; t, s) of the StP [Xt^T, Yt^T, X̂t^T]^T at s > t:


at the condition


The basic algorithms are defined by the following Proposals 3.1.1–3.1.3.

Proposal 3.1.1. At the conditions of the existence of the probability moments (60), (61), the nonlinear COFc is defined by Eqs. (58) and (63).

Proposal 3.1.2. For linear differential StS


the equations of the exact COFc are as follows:


In the case of the linear StS with the parametric Gaussian noises:


the COFc is defined by the exact equations (Proposal 3.1.3):


For nonlinear StS, in the case of the normal StP Xt, Yt, X̂t, the equations of the normal COFc (NCOFc) are defined by Proposal 3.1.1 for the joint normal distribution.
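For the linear case the COFc reduces to a Kalman-Bucy filter whose output is extrapolated with the fundamental solution, X̂(t + Δ) = u(t + Δ, t) X̂(t). A scalar sketch with our own illustrative coefficients (state dX = aX dt + dW1 of intensity bq, observation dY = cX dt + dW2 of intensity r):

```python
import numpy as np

# Scalar Kalman-Bucy filter plus Delta-ahead forecast; all coefficient
# values below are illustrative assumptions, not the chapter's.
rng = np.random.default_rng(4)
a, bq, c, r = -1.0, 0.5, 1.0, 0.1     # drift, state-noise and obs-noise intensities
dt, T, Delta = 1e-3, 20.0, 0.5

x, xh, R = 0.0, 0.0, 1.0              # true state, estimate, error covariance
for _ in range(int(T / dt)):
    dy = c * x * dt + np.sqrt(r) * rng.normal(0.0, np.sqrt(dt))   # observation increment
    beta = R * c / r                                              # filter gain
    xh += a * xh * dt + beta * (dy - c * xh * dt)                 # Kalman-Bucy update
    R += dt * (2 * a * R + bq - R**2 * c**2 / r)                  # Riccati equation
    x += a * x * dt + np.sqrt(bq) * rng.normal(0.0, np.sqrt(dt))  # true state step

x_fc = np.exp(a * Delta) * xh         # forecast: u(t + Delta, t) = exp(a * Delta)
R_fc = np.exp(2 * a * Delta) * R + bq * (1 - np.exp(2 * a * Delta)) / (-2 * a)
```

The forecast error covariance R_fc exceeds the filtering covariance R, reflecting the accuracy loss over the horizon Δ.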

3.2 Discrete and hybrid StS

Let us consider the following non-Gaussian nonlinear regression StS:


In this case the equations of the discrete COFc are as follows:


at the initial condition


So for the nonlinear regression StS (14) we get Proposal 3.2.1 defined by Eqs. (75)–(82).

In the case of the nonlinear autoregression discrete StS (15) we have the following equations of Proposal 3.2.2:


Analogously we get from Proposal 3.2.2 the COFc for discrete linear StS and linear StS with the Gaussian parametric noises. For hybrid StS we recommend a mixed algorithm based on the joint normal distribution and Proposal 3.1.1.
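In the discrete linear case the COFc is simply a Kalman filter whose estimate is propagated d steps ahead through the system coefficient. A scalar sketch (all numeric values are our illustrative assumptions):

```python
import numpy as np

# Discrete Kalman filter with d-step forecast for X_{k+1} = a X_k + w_k,
# Y_k = c X_k + v_k; all numeric values are illustrative assumptions.
rng = np.random.default_rng(5)
a, c, qw, qv, steps, d = 0.9, 1.0, 0.2, 0.5, 2000, 5

x, xh, P = 0.0, 0.0, 1.0
for _ in range(steps):
    x = a * x + rng.normal(0.0, np.sqrt(qw))   # true state transition
    y = c * x + rng.normal(0.0, np.sqrt(qv))   # observation
    Pp = a * a * P + qw                        # one-step predicted covariance
    K = Pp * c / (c * c * Pp + qv)             # Kalman gain
    xh = a * xh + K * (y - c * a * xh)         # filtered estimate
    P = (1.0 - K * c) * Pp                     # filtered covariance

x_fc = a**d * xh                               # d-step forecast of the state
P_fc = a**(2 * d) * P + qw * sum(a**(2 * i) for i in range(d))
```

As in the continuous case, the forecast covariance P_fc is larger than the filtering covariance P and grows with the horizon d.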

3.3 Generalizations

The mean square results (Subsections 3.1 and 3.2) may be extended to StS described by linear equations, linear equations with the Gaussian parametric noises, and nonlinear equations, or reducible to them, by approximate suboptimal and conditionally optimal methods.

Differential StS with autocorrelated noises in the observations may also be reduced to differential StS.

Special COFc algorithms based on complex statistical criteria and Bayesian criteria are developed in [11].


4. Probability modeling in SOTES

Following [3, 4] let us consider a general approach to modeling the SOTES as macroscopic (multi-level) systems including sets of subsystems which are themselves macroscopic. In our case these sets of subsystems are clusters covering the part of MP support connected with aftersales production service, more precisely the subsystems of the lower level where the input information about concrete products, personnel categories, etc. is formed.

For a typical continuous-discrete StP in the SOTES production cluster we have the following vector stochastic equation:


Here P0(t) is the centered Poisson StP; ρ(Xt, t) is the np × 1 intensity vector of the StP P(t), ρ(Xt, t) = [ρ12(Xt, t), ρ13(Xt, t), …, ρuk(Xt, t)]^T; ρuk(Xt, t) are the intensities of the streams of state changes; φ(Xt, t) is a continuous np × 1 vector function of the quality indicators in the OPB; Sv is the np × nρ matrix of the Poisson stream of resources (production) with volumes v according to the SOTES state graph. Analogously we get the corresponding equations for the SOTES-O and the SOTES-N:


where φ1 and φ2 are the vector functions of the quality indicators in the OPB for the SOTES-O and the SOTES-N; Dr is the structural matrix of the resource streams in the SOTES-N; γ(Yt, t) and Dr are the intensity function and the jump matrix of P10(t) in the SOTES-O.

In the linear case, when ρuk(Xt, t) = Aρ Xt, Eqs. (94)–(96) for the SOTES, the SOTES-O, and the SOTES-N may be presented as


Here the notations


Aγr(t), Aμϑ(t) are derived from the equations:


In practice the a priori information about the SOTES-N is poorer than for the SOTES and SOTES-O. So, introducing the Wiener StP Wt, W1t, W2t, we get the following equations:


R e m a r k 4.1. Such noises from the OTES-N may act at the lower levels of the OTES-O included in the internal SOTES, where the available information is minimal and the noise influence is maximal. For the highest OTES levels intermediate aggregation functions may be performed. So the observation and estimation systems must be through (multi-level and cascade) systems and provide external noise protection for all OTES levels.

R e m a r k 4.2. As a rule, at the administrative SOTES levels the processes of information aggregation and decision making are performed.

Finally, at the additional conditions:

1. the information streams about the OPB state in the OTES-O are given by the formulae


and every StP Gt, Tst is supported by a corresponding resource (e.g. financial);

2. for the SOTES measurement only the external noise from the SOTES-N and the own noise due to errors of personnel and equipment are essential,

we get the following basic ordinary differential equations:


Here VΩ(t) = [Vx^T(t), Vg^T(t), Vζ^T(t), Vst^T(t)]^T is a vector white noise, dim VΩ(t) = (nx + ng + nζ + nts) × 1, M VΩ(t) = 0, with block diagonal intensity matrix vΩ = diag(vx, vg, vζ, vts), dim vx(t) = nx × nx, dim vg(t) = ng × ng, dim vζ(t) = nζ × nζ, dim vts(t) = nts × nts; χx, χg, χζ, χst are known matrices:


R e m a r k 4.3. The noises VP, VP1, VP2 (random time moments of resources or production) are non-Gaussian noises induced by the Poisson noises in the OTES, OTES-O, OTES-N, whereas the noises VW, VW1, VW2 (personnel errors, internal noises) are Gaussian StP.

From Eqs. (110) and (111) we have the following equivalent expressions for the intensities of the vector VΩ(t):


Here the following notations are used: Sv ρ̄ Sv^T, Dr γ̄ Dr^T, Cϑ μ̄ Cϑ^T are the intensities of the non-Gaussian white noises Sv VP(t), Dr VP1(t), Cϑ VP2(t); ρ̄ = E diag ρ(Xt, t), γ̄ = E diag γ(Yt, t), μ̄ = E diag μ(ζt, t) are the mathematical expectations of the diagonal intensity matrices of the Poisson streams in the SOTES, SOTES-O, SOTES-N; vW, vW1, vW2 are the intensities of the Gaussian white noises VW, VW1, VW2. Note the difference between the intensity of a Poisson stream and the intensity of a white noise.
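The difference between the intensity of a Poisson stream and the intensity of the induced white noise is easy to see on a scalar example (our illustrative numbers): if events of fixed volume s arrive with stream intensity ρ, the accumulated stream s·P(t) has mean ρst, while the equivalent white-noise intensity is s²ρ (the scalar case of Sv ρ̄ Sv^T), so the variance grows as ρs²t:

```python
import numpy as np

# Compound Poisson resource stream: events of volume s at stream intensity rho.
# Over [0, T] the accumulated amount s*N has mean rho*s*T and, because the
# equivalent white-noise intensity is s*rho*s, variance rho*s**2*T.
# All numeric values are illustrative assumptions.
rng = np.random.default_rng(6)
rho, s, T, n_paths = 4.0, 0.5, 10.0, 20_000

N = rng.poisson(rho * T, n_paths)   # event counts over [0, T] per realization
P = s * N                           # accumulated resource stream
mean_hat, var_hat = P.mean(), P.var()
# theory: mean = rho*s*T = 20.0, variance = rho*s**2*T = 10.0
```

The stream intensity ρ fixes the mean growth rate, while the white-noise intensity s²ρ fixes the variance growth rate; the two coincide only for unit volumes.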

In the case of Eqs. (106)–(109) with the Gaussian parametric noises we use the following equations:


where the bar denotes the parametric noise coefficients.

At additive noises VΩ, presenting Eqs. (113)–(116) for Zt = [Xt^T, Gt^T, ζt^T, Tst^T]^T in the MSL form:


we get the following set of interconnected equations for mtz, Ktz:


The equation for Kz(t1, t2) is given by (49).


5. Basic SOTES conditionally optimal filtering and forecasting algorithms

Proposal 5.1. Let the SOTES, SOTES-O, SOTES-N be linear, satisfy Eqs. (102)–(104), and admit a linear filter of the form:


where the coefficient qt in (120) does not depend upon Tst. Then the equations of the optimal and conditionally optimal filters coincide with the Kalman-Bucy filter and may be presented in the following form:


Proposal 5.2. At the condition when the measuring coefficient qt depends upon λt = λ(Tst, t) and admits the statistical linearization


the sub- and conditionally optimal filter equations are as follows:


R e m a r k 5.1. The filtering equations defined by Proposals 5.1 and 5.2 give the m.s. optimal unbiased algorithms for estimating Xt in the OTES at the conditions of internal noises of the measuring devices and external noise from the OTES-N acting on the measuring part of the SOTES-O.

R e m a r k 5.2. The accuracy of the estimate X̂t depends not only upon the noise ζt influencing the measured signal, but also upon the rule and the technical-economical quality criteria of the SOTES, and upon the state of the resources Tst of the OPB in the SOTES-O.

Using [9, 10, 11] let us consider a more general SOTES than Eqs. (113)–(116), with the system vector X̄t = [Xt^T, Gt^T, ζt^T, Tst^T]^T and the observation vector Ȳt = [Y1, Y2, Y3, Y4]^T defined by the equations:


Here a, a0, a1, b, b0, b1 and cij (i = 1, 2; j = 1, …, nx̄) are vector-matrix functions of t which do not depend on X̄t = [X1, …, Xnx]^T and Ȳt = [Y1, …, Yny]^T. Then the corresponding algorithm of the conditionally optimal filter (COF) is defined by [9, 10, 11]:


For getting the equations for βt it is necessary to have the equations for the mathematical expectation mt and the covariance matrix Kt of the random vector Qt = [X1, …, Xnx, Y1, …, Yny]^T, and the error covariance matrix Rt for X̃t = X̄̂t − X̄t. Using the equations


we have the following equation for the error covariance matrix




mt = [mr] (r = 1, …, ny + nx), Kt = [krs] (r, s = 1, …, nȳ + nx̄); V is the white non-Gaussian noise of intensity v. The coefficient βt in Eq. (127) is defined by the formula


R e m a r k 5.3. In the case when the observations do not influence the state vector we have the following notations:


Proposal 5.3. Let the SOTES be described by Eqs. (125) and (126). Then the COF algorithm is defined by Eqs. (127)–(133).

Theory of conditionally optimal forecasting [9, 10, 11] in case of Eqs:


where Δbeing forecasting time, V1and V2are independent nonGaussian white noises with matrix intensities v1and v2, gives the following Eqs for COFc:


where the following notations are used: ustis fundamental solution of Eq: du/ds=a1suat initial condition utt=I,εt=ut+Δt,


R e m a r k 5.4. At practice COFc may be presented as sequel connection of COF, amplifier with gain εt=ut+Δtand summater ht=ht:


where X¯̂tbeing the COF output or COF of current state X¯t.

Eq. (137) may be presented in another form:


The accuracy of the COFc is defined by the following equation:


Proposal 5.5. Under the conditions of Proposal 5.3 the COFc is described by Eqs. (137)–(140), (143) or by Eqs. (141)–(143).

Let us consider Eqs. (94)–(96) under the condition that the measuring system and the OPB in SOTES-O can be subdivided so that q(X_t, t) = q_t X_t and the noise ζ_t is additive. In this case Eqs. (94)–(96) for SOTES, SOTES-O and SOTES-N may be presented in the following form:


Under statistical linearization we make the following replacements:


So we get the following statistically linearized expressions:




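The statistical linearization replacements above can be sketched numerically: for a Gaussian argument, a nonlinearity φ(X) is replaced by φ_0(m, K) + k_1(m, K)(X − m), where φ_0 = E[φ(X)] and, for Gaussian X, the gain is k_1 = E[φ′(X)] = Cov(φ(X), X)/K. The cubic φ(x) = x³ below is a hypothetical example, not a SOTES nonlinearity.

```python
import random

def stat_lin_cubic(m: float, sigma: float):
    """Statistical linearization of phi(x) = x**3 for X ~ N(m, sigma^2):
    phi0 = E[X^3]   = m^3 + 3*m*sigma^2
    k1   = E[3*X^2] = 3*(m^2 + sigma^2)   (Gaussian gain, Stein's lemma)
    """
    phi0 = m**3 + 3.0 * m * sigma**2
    k1 = 3.0 * (m**2 + sigma**2)
    return phi0, k1

# Monte Carlo check of both linearization coefficients.
random.seed(1)
m, sigma = 1.0, 0.5
xs = [random.gauss(m, sigma) for _ in range(400_000)]
mx = sum(xs) / len(xs)
m3 = sum(x**3 for x in xs) / len(xs)                       # empirical E[X^3]
cov = sum((x**3 - m3) * (x - mx) for x in xs) / (len(xs) - 1)
phi0, k1 = stat_lin_cubic(m, sigma)
print(phi0, m3)               # analytic vs empirical mean of the nonlinearity
print(k1, cov / sigma**2)     # analytic gain vs Cov(phi, X)/K
```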
Proposal 5.6. For Eqs. (144)–(148), under the statistical linearization conditions (152) and (153) and when q_t does not depend upon T_st, the suboptimal filtering algorithm is defined by the equations:


Proposal 5.7. Under the condition λ_t = λ(T_st, t) the equations for SOTES, SOTES-O and SOTES-N may be presented in the form:


The suboptimal algorithm under the condition


is as follows:


6. Peculiarities of new SOTES generations

As mentioned in the Introduction, information about the nomenclature and character of the final production and its components arises in the lower levels of the hierarchical subsystems of SOTES.

Analogously, in personnel LC subsystems the final production is formed by categories of personnel with typical works and by separate specialists with common works. In [1, 2] a methodology of personnel structuration according to categories and typical process graphs, providing the necessary professional level and health, is presented. An analogous approach to structuration may be applied to the elements of macroscopic subsystems at various SOTES levels. This makes it possible to design unified modeling and filtering methods in SOTES, SOTES-O and SOTES-N and then to implement optimal processes within a unified budget. Thus we obtain a unified methodological basis for horizontally and vertically integrated SOTES.

In the case of Eqs. (107)–(111), the LC subsystems for the aggregate of given personnel categories are defined by the equations


where the index P denotes variables and parameters of the personnel LC subsystem. According to Section 4 the filtering equations are the following:


Let us consider a linear synergetical connection between X and X_P:


Here κ_1 and κ_0 are known n_P × n_x and n_P × 1 synergetical matrices. Substituting (151) into Eqs. (144)–(150), we get the equations for the personnel subsystem and its observation expressed through X:


The corresponding equations with combined right-hand sides for the SOTES vector X are:


Analogously, using Proposal 5.1, we get the Kalman-Bucy filter equations:


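The Kalman-Bucy structure invoked here can be sketched in scalar form: the state satisfies dX = aX dt + √q dW, the observation dY = cX dt + √r dV, the error covariance R follows the Riccati equation, and the estimate is driven by the innovation dY − cX̂ dt. All model values below are hypothetical; this is a minimal Euler-step sketch, not the authors' matrix equations (154)–(157).

```python
import math
import random

# Hypothetical scalar model: dX = a*X dt + sqrt(q) dW,  dY = c*X dt + sqrt(r) dV
a, q, c, r = -1.0, 1.0, 1.0, 0.5
dt, steps = 1e-3, 10_000

random.seed(2)
x, x_hat, R = 2.0, 0.0, 1.0          # true state, estimate, error covariance
for _ in range(steps):
    # Riccati equation: dR/dt = 2aR + q - R^2 c^2 / r
    R += (2 * a * R + q - (R * c) ** 2 / r) * dt
    # simulate the true state and the measurement increment dY
    x += a * x * dt + math.sqrt(q * dt) * random.gauss(0.0, 1.0)
    dY = c * x * dt + math.sqrt(r * dt) * random.gauss(0.0, 1.0)
    # filter: dX̂ = a X̂ dt + (R c / r)(dY - c X̂ dt)
    x_hat += a * x_hat * dt + (R * c / r) * (dY - c * x_hat * dt)

# Steady-state Riccati root for these values: 2R^2 + 2R - 1 = 0
R_star = (math.sqrt(3.0) - 1.0) / 2.0
print(R, R_star)
```

After a few time constants the integrated covariance R settles at the positive root of the algebraic Riccati equation, which is the steady-state accuracy of the filter.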
Eqs. (154)–(157) define Proposal 5.5 for the SOTES filter, including the subsystems accompanying the production LC and the personnel taking part in the production LC.

Remark 6.1. Analogously we get the equations for the SOTES filter including the financial support subsystem and other subsystems.

Remark 6.2. Eqs. (173) and (174) are not coupled and may be solved a priori.


7. Example

Let us consider a simple example illustrating the modeling of the influence of the SOTES-N noise on the rules and functional indexes of the subsystems accompanying the production LC, and its filtering and forecasting. The system includes stocks of spare parts (SP) and an exploitation organization with a park of MP, together with a repair organization (Figure 1).

Figure 1.

After Sale system (ASS) for MP.

At the initial time moment the necessary supplement provides the required level of effective exploitation over the time period [0, T]. Let us consider the processes in the ASS connected with one type of composite parts (CP), numbering N_T. During park exploitation CP failures appear. Failed CP are either repaired and returned into exploitation or written off. If the level of park readiness in exploitation falls below the critical one, replacement CP are taken from the stocks.

In the state graph (Figure 2) the following notations are used: (1) being in stocks, in number X_1; (2) in exploitation, in number X_2; (3) in repair, in number X_3; (4) written off, in number X_4. Thus we have n_x = 4; the transitions are: 1→2 (ν = 1, the Poisson stream ρ_12 X_1); 2→3 (ρ_23 X_2); 2→4 (ρ_24 X_2); 3→2; the number of transitions is equal to n_p = 4. As the index of efficiency we use the following coefficient of technical readiness [1, 2]:

Figure 2.

Graph of production state.




N_T being the constant number of CP of the park in exploitation.
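For the state graph of Figure 2 the mean numbers m_i(t) of CP in states 1–4 satisfy the usual Poisson-flow balance equations, and the readiness coefficient is K̄_TR(t) = m_2(t)/N_T. A minimal Euler-integration sketch follows; all rates ρ_ij (including the 3→2 repair-return rate, which the text leaves unspecified) and initial numbers are hypothetical.

```python
# Mean-flow balance equations for the Figure 2 graph (hypothetical rates):
# state 1: in stock, 2: in exploitation, 3: in repair, 4: written off
rho12, rho23, rho24, rho32 = 0.4, 0.10, 0.02, 0.30
m = [20.0, 100.0, 0.0, 0.0]           # initial CP numbers per state
N_T = sum(m)                          # constant total number of CP
dt, T = 0.01, 50.0

t = 0.0
while t < T:
    f12 = rho12 * m[0]                # stock -> exploitation
    f23 = rho23 * m[1]                # exploitation -> repair
    f24 = rho24 * m[1]                # exploitation -> write-off
    f32 = rho32 * m[2]                # repair -> exploitation
    m[0] += -f12 * dt
    m[1] += (f12 + f32 - f23 - f24) * dt
    m[2] += (f23 - f32) * dt
    m[3] += f24 * dt
    t += dt

K_TR = m[1] / N_T                     # coefficient of technical readiness
print(K_TR, sum(m))
```

Every outflow of one state is an inflow of another (or of the write-off sink), so the total sum(m) stays equal to N_T throughout the integration.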

Note that the influence of SOTES-N on SOTES and SOTES-O is expressed in the following way: the system noise ζ_t, as a factor of report documentation distortion, leads to a fictively underestimated K̄_TR(t). In the case when the relation


is violated (K̄*_TR being the critical level of the floating park), the stocks will give the necessary amount of CP. Thus the possibility arises to exclude a defined amount of CP from the turnover.
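This replenishment rule — issue CP from stock whenever the reported readiness falls below the critical level — can be written as a one-line check, which also shows how a distortion ζ_t subtracted from the report triggers needless issues. The function name and all values are hypothetical.

```python
import math

def cp_to_issue(x2_reported: float, n_total: float, k_crit: float) -> int:
    """Number of CP to take from stock so that the reported readiness
    X2/N is restored to the critical level k_crit (0 if already above).
    A small epsilon guards the ceil against floating-point round-off."""
    deficit = k_crit * n_total - x2_reported
    return max(0, math.ceil(deficit - 1e-9))

# Undistorted report: readiness 0.85 >= 0.80, nothing is issued.
issued_clean = cp_to_issue(85.0, 100.0, 0.80)
# SOTES-N distortion zeta = 10 understates the report: 75/100 < 0.80,
# so CP are needlessly excluded from the turnover.
issued_distorted = cp_to_issue(85.0 - 10.0, 100.0, 0.80)
print(issued_clean, issued_distorted)
```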

Finally, let us consider the filtering and forecasting algorithms of the ASS processes for exposing the noise ζ_t.


1. To obtain the equation for K̄_TR(t) we put X_TR(t) = K̄_TR(t) and take into account Eqs. (106)–(109). So we have the following scalar equations:


In our case a_1 = a_0 = 0, V_X = 0, and Eq. (10.4) may be presented in the vector form




In practice the reported documentation is the complete set of documents containing SP demands from the stock and SP acknowledgement documents. So the noise ζ_t acts only if both the delivery and the acquisition sides take part in its realization. This is the reason to call this noise a system noise, carried out by a group of persons.

2. To set up the equations for the electronic control system we use the following equation of the type of Eq. (108):


where V_Ω = [V_TR V_1 V_2 V_3 V_4]^T are noises with intensity ν_g, and λ = [λ_TR λ_1 λ_2 λ_3 λ_4]^T is the coefficient of efficiency of the measuring block. In scalar form Eq. (181) may be presented as


3. The algorithm for the description of the noise ζ_t depends on the participants. In the simplest case we use the error


In this case we get a lag in the G_TR measurement with respect to the variable ζ_t:


By the choice of the coefficient b_2 the necessary time tempo of documentation manipulation may be realized.
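The role of b_2 as the tempo parameter can be seen from the first-order lag response: for dζ/dt = −b_2 ζ + b_2 u with a unit step u, the solution is ζ(t) = 1 − exp(−b_2 t), so 1/b_2 is the lag time constant. A minimal Euler-step sketch (the value of b_2 is hypothetical):

```python
import math

def lag_response(b2: float, t_end: float, dt: float = 1e-4) -> float:
    """Euler-integrated response of dzeta/dt = -b2*zeta + b2*u to a
    unit step u = 1, starting from zeta(0) = 0."""
    zeta = 0.0
    t = 0.0
    while t < t_end:
        zeta += b2 * (1.0 - zeta) * dt
        t += dt
    return zeta

b2 = 2.0
# At one time constant t = 1/b2 the lag output reaches 1 - 1/e ~ 0.632.
print(lag_response(b2, 1.0 / b2))
```

A larger b_2 shortens the time constant, i.e. the documentation distortion propagates into the measurement faster.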

4. Using the equations of Proposals 5.1 and 5.2 we get the following matrix filtering equations for the system noise ζ_t on the background of the measuring noise V_TR:


at Z = Ġ, ζ = [ζ_t 0 0 0]^T.
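The idea of item 4 — estimating the slowly varying system noise ζ_t on the background of the fast measuring noise V_TR — can be sketched as a scalar discrete Kalman filter on the bias alone: the readiness K_TR is known from its own equation, so the measurement residual y − k_true exposes ζ_t. All models and values below are hypothetical stand-ins for the authors' matrix equations.

```python
import random

# Discrete scalar Kalman filter extracting the slowly varying system noise
# zeta from measurements y = k_true + zeta + V_TR (hypothetical values).
random.seed(3)
q_z, r = 1e-5, 0.01       # zeta random-walk intensity, measurement noise var
zeta_true = 0.10          # constant distortion injected by SOTES-N
k_true = 0.85             # readiness level, known from its own equation

z_hat, P = 0.0, 1.0       # estimate of zeta and its error variance
for _ in range(2000):
    y = k_true + zeta_true + random.gauss(0.0, r ** 0.5)
    P += q_z                              # predict (zeta as random walk)
    K = P / (P + r)                       # Kalman gain
    z_hat += K * ((y - k_true) - z_hat)   # update with the residual
    P *= (1.0 - K)

print(z_hat, P)
```

Because ζ_t is modeled as nearly constant while V_TR is white, the gain settles near √(q_z/r) and the filter averages the residuals, recovering the documentation distortion despite the measurement noise.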

Remark 7.1. Realization of the described filtering solutions for internal noises needs a priori information about the basic OTES characteristics. So special methods and algorithms are needed.

5. Finally, the linear COFc is defined by Eqs. (137)–(140) for various forecasting times Δ.

Remark 7.2. In the case of SOTES with two subsystems, using Eqs. (172)–(174) we have the following Kalman-Bucy filter:


where ζ_P is the noise acting on the functional index of the personnel attendant subsystem.

These results are included in the experimental software tools for modeling and forecasting of cost and readiness for parks of aircraft [1, 2].


8. Conclusion

For new generations of synergetical OTES (SOTES) the methodological support for the approximate solution of probabilistic modeling and mean-square filtering and forecasting problems is generalized. The generalization is based on sub- and conditionally optimal filtering. Special attention is paid to linear systems and to linear systems with parametric white Gaussian noises.

Problems of optimal, sub- and conditionally optimal filtering and forecasting in product and staff subsystems at background noise in SOTES are considered. Nowadays, for highly available systems, the creation of basic systems engineering principles, approaches and information technologies (IT) for SOTES arising from modern spontaneous markets, against the background of the inertially developing world economic crisis and weakening global market relations under conditions of reinforced competition and counteraction, is very important. Big enterprises need such IT due to essential local and systematic economic losses. It is necessary to form general approaches for the estimation of stochastic processes (StP) and parameters (filtering, identification, calibration, etc.) in SOTES at background noises. A special observation SOTES (SOTES-O) with its own organization-product resources is introduced, whose internal noise is information from a special noise-enacting SOTES (SOTES-N). A conception of the SOTES structure for systems of technical, staff and financial support is developed. Linear, linear with parametric noises, and nonlinear stochastic (discrete and hybrid) equations describing the organization-production block (OPB) for the three types of SOTES with their planning-economical estimating divisions are worked out. SOTES-O is described by two interconnected subsystems: the SOTES state sensor and the OPB supporting the sensor with the necessary resources. After a short survey of modern modeling and of sub- and conditionally optimal filtering and forecasting, basic algorithms and IT for typical SOTES are given.

The influence of the OTES-N noise on the rules and functional indexes of the subsystems accompanying the production life cycle, and its filtering and forecasting, is considered.

Experimental software tools for modeling and forecasting of cost and technical readiness for parks of aircraft are developed.

At present we are developing the presented results on the basis of cognitive approaches [12].



The authors would like to thank the Russian Academy of Sciences for supporting the work presented in this chapter.

The authors are much obliged to Mrs. Irina Sinitsyna and Mrs. Helen Fedotova for translation and manuscript preparation.


  1. Sinitsyn IN, Shalamov AS. Lectures on Theory of Integrated Logistic Support Systems. 2nd ed. Moscow: Torus Press; 2019. 1072 p. (in Russian)
  2. Sinitsyn IN, Shalamov AS. Probabilistic modeling, estimation and control for CALS organization-technical-economic systems. In: Kostogryzov A, Korolev V, editors. Probability, Combinatorics and Control. London: IntechOpen; 2020. p. 117-141. DOI: 10.5772/intechopen.79802
  3. Sinitsyn IN, Shalamov AS. Optimal estimation and control in stochastic synergetic organization-technical-economic systems. Filtering in product and staff subsystems at background noise (I). Highly Available Systems. 2019;15(4):27-48. DOI: 10.18127/j20729472-201904-04 (in Russian)
  4. Sinitsyn IN, Shalamov AS. Optimal estimation and control in stochastic synergetic organization-technical-economic systems. Filtering in product and staff subsystems at background noise (II). Highly Available Systems. 2021;17(1):51-72. DOI: 10.18127/j20729472-202101-05 (in Russian)
  5. Sinitsyn IN, Shalamov AS. Problems of estimation and control in synergetic organization-technical-economic systems. In: VII International Conference "Actual Problems of System and Software". Moscow; 2021 (in print)
  6. Haken H. Synergetics: An Introduction. Springer Series in Synergetics. Vol. 3. Berlin, Heidelberg: Springer; 1983
  7. Haken H. Advanced Synergetics. Springer Series in Synergetics. Vol. 20. Berlin, Heidelberg: Springer; 1987
  8. Kolesnikov AA. Synergetical Control Theory. Taganrog: TRTU; Moscow: Energoatomizdat; 1994. 538 p. (in Russian)
  9. Pugachev VS, Sinitsyn IN. Stochastic Systems: Theory and Application. Singapore: World Scientific; 2001. 908 p.
  10. Sinitsyn IN, editor. Academician Pugachev Vladimir Semenovich: To the 100th Anniversary. Moscow: Torus Press; 2011. 376 p. (in Russian)
  11. Sinitsyn IN. Kalman and Pugachev Filters. 2nd ed. Moscow: Logos; 2007. 772 p. (in Russian)
  12. Kostogryzov A, Korolev V. Probabilistic methods for cognitive solving of some problems in artificial intelligence systems. In: Kostogryzov A, Korolev V, editors. Probability, Combinatorics and Control. London: IntechOpen; 2020. p. 3-34. DOI: 10.5772/intechopen.89168
