Open access peer-reviewed chapter

Theory of Control Stochastic Systems with Unsolved Derivatives

Written By

Igor N. Sinitsyn

Submitted: 19 July 2021 Reviewed: 14 September 2021 Published: 25 October 2021

DOI: 10.5772/intechopen.100448

From the Edited Volume

Automation and Control - Theories and Applications

Edited by Elmer P. Dadios


Abstract

Various types of stochastic differential systems with unsolved derivatives (SDS USD) arise in problems of analytical modeling and estimation (filtering, extrapolation, etc.) for control stochastic systems, when it is possible to neglect higher-order time derivatives. Methodological and algorithmic support of analytical modeling, filtering, and extrapolation for SDS USD is developed. The methodology is based on the reduction of SDS USD to SDS by means of linear and nonlinear regression models. Two examples that are illustrating stochastic aspects of methodology are presented. Special attention is paid to SDS USD with multiplicative (parametric) noises.

Keywords

  • analytical modeling
  • estimation (filtering, extrapolation)
  • normal approximation method (NAM)
  • regression (linear, nonlinear)
  • stochastic differential systems with unsolved derivatives (SDS USD)

1. Introduction

Approximate methods of analytical modeling (MAM) of wideband stochastic processes (StP) in stochastic differential systems with unsolved derivatives (SDS USD), based on the normal approximation method (NAM), the orthogonal expansions method, and the quasi-moment method, are developed in [1, 2]. For stochastic integrodifferential systems with unsolved derivatives reducible to SDS, the corresponding MAM equations are given in [3, 4]. Problems of mean square (m.s.) synthesis of normal (Gaussian) estimators (filters, extrapolators, etc.) were first stated and solved in [1, 2, 3, 4]. The results presented in [1, 2, 3, 4] are valid for smooth (in the m.s. sense) functions in SDS USD. For nonsmooth functions in SDS USD, the theory of normal filtering and extrapolation is developed in [5].

Let us present an overview and a generalization of [1, 2, 3, 4, 5] for linear and nonlinear regression models. Section 2 is devoted to normal analytical modeling algorithms. Normal linear filtering and extrapolation algorithms are given in Sections 3 and 4. Linear modeling and estimation algorithms for SDS USD with multiplicative (parametric) noises are presented in Section 5. Normal nonlinear algorithms for filtering and extrapolation are described in Section 6. Section 7 contains two illustrative examples. In Section 8, the main conclusions and some generalizations are given.


2. Normal modeling

Different types of SDS USD arise in problems of analytical modeling and estimator design for stochastic nonlinear dynamical systems when it is possible to neglect higher-order time derivatives [1, 2, 3].

First-order SDS USD is described by the following scalar equation:

φ = φ(t, X_t, Ẋ_t, U_t) = 0,  (1)

where X_t and Ẋ_t are the scalar state variable and its time derivative; U_t is a vector noise StP, dim U_t = n_U; the nonlinear function φ admits a regression approximation [6, 7, 8].

For vector SDS USD, we have the following vector equation:

φ̄ = φ̄(t, X_t, X̄_t, U_t) = 0.  (2)

Here X̄_t is the vector of derivatives up to order l − 1,

X̄_t = [Ẋ_t^T … (X_t^(l−1))^T]^T,  (3)

and U_t is an autocorrelated noise vector defined by the linear vector equation

U̇_t = a_0^U(t) + a_1^U(t)U_t + b^U(t)V_t,  (4)

where dim X_t = n_X; dim U_t = n_U; V_t is a white noise, dim V_t = n_V; dim a_0^U(t) = n_U × 1; dim a_1^U(t) = n_U × n_U; dim b^U(t) = n_U × n_V. Further, we consider the Wiener white noise Ẇ_0(t) with matrix intensity ν_0 = ν_0(t) and the mixed Wiener-Poisson white noise [9, 10, 11, 12, 13]:

V_t = Ẇ_t,  W_t = W_0(t) + ∫_{R_0^q} c(ρ)P^0(t, ρ),  (5)

ν(t) = ν_0^W(t) + ∫_{R_0^q} c(ρ)c(ρ)^T ν_P(t, ρ)dρ.  (6)

Here dim c(ρ) = dim W_0(t) = n_V; the stochastic Ito integrals are taken over R_0^q (R^q with pricked origin).

As is known [6, 7, 8], a deterministic model for a real StP defined by Y = φ(Z) with Z = [X^T X̄^T U^T]^T in (2) is given by the formula

ŷ(z) = E[Y|z],  ŷ(z) ∈ Ψ,  (7)

with the accuracy criterion

ε(z) = Σ_{p=1}^{n_Y} E[(ŷ_p − Y_p)² | z].  (8)

The class of functions ψ ∈ Ψ represents a linear functional space satisfying the following necessary and sufficient condition:

tr E{[ŷ(z) − Y]ψ(z)^T} = 0.  (9)

For linear shifted and unshifted regression, we have two known models:

ŷ(z) = g_B z,  g_B = Γ_{yz}Γ_z^{−1}  (10)

(Booton [6, 7, 8]) and

ŷ(z) = a + g_K z⁰,  g_K = K_{yz}K_z^{−1},  a = EY − g_K Ez  (11)

(Kazakov [6, 7, 8]),

where Ez, Γ_z, K_z are the first and second moments of the given one-dimensional distribution.
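As a numerical illustration of the two models (10) and (11), the gains g_B, g_K can be estimated directly from samples; the cubic nonlinearity and the distribution parameters below are assumptions for this sketch, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar nonlinearity Y = Z^3 with Z ~ N(m, D).
m, D = 1.0, 0.25
z = rng.normal(m, np.sqrt(D), 200_000)
y = z**3

# Booton (unshifted) model (10): y_hat = g_B * z, built from raw second moments.
g_B = np.mean(y * z) / np.mean(z * z)

# Kazakov (shifted) model (11): y_hat = a + g_K * z0 with central moments;
# for the centered argument z0 the shift a equals E[Y].
C = np.cov(y, z)
g_K, a = C[0, 1] / C[1, 1], np.mean(y)

# Exact Gaussian values: g_K = 3*(m^2 + D) = 3.75, a = m^3 + 3*m*D = 1.75,
# g_B = E[Z^4]/E[Z^2] = 2.15.
print(g_B, g_K, a)
```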

For Eq. (2), the linear regression model takes the Booton form

φ̂ = φ̂_0 + k_1^φ X_t + k_2^φ X̄_t + k_3^φ U_t = 0,  (12)

where φ̂_0, k_{1,2,3}^φ are regressors depending on φ and on the joint distribution of the StP X_t, X̄_t, U_t. After differentiating Eq. (12) up to order l − 1, we get the following set of l − 1 equations:

dφ̂/dt = 0, …, d^(l−1)φ̂/dt^(l−1) = 0.  (13)

Under the condition of algebraic solvability of the linear Eqs. (12) and (13), we reduce the SDS USD to an SDS of the form

Ẋ_t = A_0 + A_1X_t + A_2U_t,  (14)

where A_0, A_1, A_2 are expressed in terms of φ̂_0, k_{1,2,3}^φ (det k_2^φ ≠ 0) and depend indirectly on the statistical characteristics of X_t, its derivatives, and the noise U_t. For the combined vector Ỹ_t = [X_t^T U_t^T]^T we have the equation

Ỹ̇_t = B_0 + B_1Ỹ_t + B_2V_t,  Ỹ_{t_0} = Ỹ_0.  (15)

Its first and second probabilistic moments satisfy the following equations [12, 13, 14]:

Ỹ̇_t = B_0 + B_1Ỹ_t + B_2V_t,  Ỹ_{t_0} = Ỹ_0,  (16)

Ė_t^Ỹ = B_0 + B_1E_t^Ỹ,  E_{t_0}^Ỹ = E_0^Ỹ,  (17)

K̇_t^Ỹ = B_1K_t^Ỹ + K_t^ỸB_1^T + B_2νB_2^T,  K_{t_0}^Ỹ = K_0^Ỹ,  (18)

∂K^Ỹ(t_1, t_2)/∂t_2 = K^Ỹ(t_1, t_2)B_1(t_2)^T,  K^Ỹ(t_1, t_1) = K_{t_1}^Ỹ,  (19)

where E_t^Ỹ = EỸ_t, K_t^Ỹ = E[(Ỹ_t − E_t^Ỹ)(Ỹ_t − E_t^Ỹ)^T], t_2 > t_1. So, we get two proposals.

Proposal 1. Let the vector non-Gaussian SDS USD (2) satisfy the conditions:

  1. the vector function φ̄ in Eq. (2) admits an m.s. regression of the linear class Ψ;

  2. the linear Eqs. (12) and (13) are solvable with respect to all derivatives up to order l − 1.

Then the SDS USD may be reduced to a parametrized SDE. The first and second moments of the joint vector Ỹ_t = [X_t^T U_t^T]^T satisfy Eqs. (16)-(19).

Proposal 2. For the normal joint distribution N(E_t^Ỹ, K_t^Ỹ) of the vector variables in Eqs. (16)-(19), it is necessary to put in the equations of Proposal 1:

W_t = W_0(t),  E_t^Ỹ = E_NỸ_t,  K_t^Ỹ = E_N[(Ỹ_t − E_t^Ỹ)(Ỹ_t − E_t^Ỹ)^T],  K^Ỹ(t_1, t_2) = E_N[(Ỹ_{t_1} − E_{t_1}^Ỹ)(Ỹ_{t_2} − E_{t_2}^Ỹ)^T].  (20)
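The moment Eqs. (17) and (18) can be integrated numerically; a minimal sketch in the scalar case, where the constants B_0, B_1, B_2, ν are illustrative and not tied to any system from the text:

```python
# Euler integration of the scalar moment equations (17)-(18):
# dE/dt = B0 + B1*E,   dK/dt = 2*B1*K + B2^2*nu.
B0, B1, B2, nu = 1.0, -2.0, 0.5, 1.0
E, K, dt = 0.0, 0.0, 1e-3
for _ in range(20_000):                # integrate to t = 20
    E += dt * (B0 + B1 * E)
    K += dt * (2 * B1 * K + B2**2 * nu)

# Stationary values: E* = -B0/B1 = 0.5 and K* = -B2^2*nu/(2*B1) = 0.0625.
print(E, K)
```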

For Eq. (2), using the Kazakov form

φ̄ = φ̄_0 + φ̄⁰ = 0,  (21)

where

φ̄⁰ = k_1^φ̄ X_t⁰ + k_2^φ̄ X̄_t⁰ + k_3^φ̄ U_t⁰,  (22)

we have two sets of equations for the mathematical expectations and for the centered variables:

dφ̄_0/dt = 0, …, d^(l−1)φ̄_0/dt^(l−1) = 0,  (23)

dφ̄⁰/dt = 0, …, d^(l−1)φ̄⁰/dt^(l−1) = 0.  (24)

So, we reduce the SDS USD to two sets of equations for E_t^X and X_t⁰ = X_t − E_t^X:

Ė_t^X = Ā_0 + Ā_1E_t^X + Ā_2E_t^U,  (25)

Ẋ_t⁰ = A_1X_t⁰ + A_2U_t⁰.  (26)

For the composed vector Ȳ_t⁰ = [X_t⁰T U_t⁰T]^T, the first and second probabilistic moments satisfy the following equations:

Ė_t^Ȳ = B̄_0 + B̄_1E_t^Ȳ,  E_{t_0}^Ȳ = E_0^Ȳ,  (27)

K̇_t^Ȳ = B̄_1K_t^Ȳ + K_t^ȲB̄_1^T + B̄_2νB̄_2^T,  K_{t_0}^Ȳ = K_0^Ȳ,  (28)

∂K^Ȳ(t_1, t_2)/∂t_2 = K^Ȳ(t_1, t_2)B̄_1(t_2)^T,  K^Ȳ(t_1, t_1) = K_{t_1}^Ȳ,  t_2 > t_1.  (29)

Here ν = ν_0 is defined by Eq. (6).

So, for the Kazakov regression, Eqs. (21)-(24) are the basis of Proposal 3.

The regression E[y|z] and its m.s. estimator ŷ(z) represent a deterministic regression model. To obtain a stochastic regression model, it is sufficient to represent Y in the form Y = E[y|z] + Y′ or Y = ŷ(z) + Y″, where Y′, Y″ are some random variables. For finding a deterministic linear regression model, it is sufficient to know the mathematical expectations Ez, Ey and the covariance matrices K_z, K_{yz}. In the case of a stochastic linear regression model, it is necessary to know the distribution of Y for any z, or at least its regression ŷ(z) and the covariance matrix K_{yz} (coinciding with the covariance matrix K_{Y′z} or K_{Y″z}). The more general problem of the best m.s. approximation of the regression by a finite linear combination of given functions χ_1(z), …, χ_N(z) reduces to the problem of the best linear approximation of the regression, since any linear combination of the functions χ_1(z), …, χ_N(z) represents a linear function of the variables z_1 = χ_1(z), …, z_N = χ_N(z). The corresponding models based on m.s. optimal regression are given in [7].
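This reduction can be illustrated numerically as an ordinary least-squares fit in the transformed variables; the basis χ_1(z) = 1, χ_2(z) = z, χ_3(z) = z³ and the target sin z below are illustrative assumptions, not from the text:

```python
import numpy as np

rng = np.random.default_rng(3)

# Best m.s. approximation of the regression of Y on Z by a finite linear
# combination of given functions chi_j: linear l.s. in z_j = chi_j(z).
z = rng.normal(0.0, 1.0, 100_000)
y = np.sin(z) + 0.1 * rng.standard_normal(z.size)

X = np.column_stack([np.ones_like(z), z, z**3])   # chi_1, chi_2, chi_3
c, *_ = np.linalg.lstsq(X, y, rcond=None)
mse = np.mean((y - X @ c)**2)
print(c, mse)  # mse near the 0.01 noise floor plus a small approximation error
```

Under N(0, 1), the population projection of sin z onto this basis is about 0.910·z − 0.101·z³, which the sample fit recovers.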

In the general case, we have the following vector equation:

Ż_t = a_z(Z_t, t) + b_z(Z_t, t)V_t,  (30)

where V_t is defined by Eqs. (5) and (6). The functions a_z = a_z(Z_t, t) and b_z = b_z(Z_t, t) are composed on the basis of Eq. (2), after the nonlinear regression approximation φ̂_t = Σ_j c_j χ_j(Z_t), and Eq. (13).

According to the normal approximation method (NAM), for Eq. (30) we have the following equations of normal modeling [9, 10, 11, 12]:

Ė_t^z = F_1(E_t^z, K_t^z, t),  (31)

K̇_t^z = F_2(E_t^z, K_t^z, t),  (32)

∂K^z(t_1, t_2)/∂t_2 = F_3(E_{t_2}^z, K_{t_2}^z, K^z(t_1, t_2), t_1, t_2).  (33)

Here

F_1(E_t^z, K_t^z, t) = E_N a_z(Z_t, t),  (34)

F_2(E_t^z, K_t^z, t) = F_21(E_t^z, K_t^z, t) + F_21(E_t^z, K_t^z, t)^T + F_22(E_t^z, K_t^z, t),  (35)

F_21(E_t^z, K_t^z, t) = E_N[a_z(Z_t, t)(Z_t − E_t^z)^T],  (36)

F_22(E_t^z, K_t^z, t) = E_N[b_z(Z_t, t)ν b_z(Z_t, t)^T],  (37)

F_3(E_{t_2}^z, K_{t_2}^z, K^z(t_1, t_2), t_1, t_2) = K^z(t_1, t_2)(K_{t_2}^z)^{−1}F_21(E_{t_2}^z, K_{t_2}^z, t_2)^T,  E_{t_0}^z = E[Z_{t_0}],  K_{t_0}^z = K[Z_{t_0}],  K^z(t_1, t_1) = K_{t_1}^z,  (38)

where E_N denotes the symbol of normal (Gaussian) mathematical expectation.
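A minimal sketch of Eqs. (31)-(37) for an assumed scalar system Ż_t = −Z_t³ + bV_t (this system is illustrative, not from the text), where the normal expectations (34) and (36) have closed Gaussian forms:

```python
import numpy as np

# NAM closure for dZ = -Z^3 dt + b dW under N(m, D):
# E_N[-Z^3] = -(m^3 + 3*m*D) and E_N[-Z^3*(Z - m)] = -3*D*(m^2 + D).
b, nu, dt = 1.0, 1.0, 1e-3
m, D = 1.0, 0.1
for _ in range(50_000):                    # integrate Eqs. (31)-(32) to t = 50
    F1 = -(m**3 + 3 * m * D)               # Eq. (34)
    F21 = -3 * D * (m**2 + D)              # Eq. (36), scalar case
    m += dt * F1
    D += dt * (2 * F21 + b**2 * nu)        # Eqs. (35), (37)

# Stationary NAM point: m* = 0 and 6*D*^2 = b^2*nu, i.e. D* = b*sqrt(nu/6).
print(m, D, b * np.sqrt(nu / 6))
```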

Advertisement

3. Normal linear filtering

In SDS USD filtering problems, we use two types of equations: the reduced SDE for the vector state variable X_t and the equation for the vector observation variable Y_t, with Z_t = Ẏ_t.

Consider the SDS USD Eq. (2) reducible to the SDE Eq. (15) under the conditions of Proposal 1. We introduce new variables, putting X_t := Ỹ_t:

Ẋ_t = A_0(t) + A_1(t)X_t + A_2(t)V_1(t).  (39)

Let the observation vector variable Y_t satisfy the following linear equation:

Z_t = Ẏ_t = B_0(t) + B_1(t)X_t + B_2(t)V_2(t),  (40)

where V_1(t) and V_2(t) are normal white noises with matrix intensities ν_1(t) and ν_2(t).

The Kalman-Bucy filter equations in the case of Eqs. (39) and (40) with Gaussian white noises are as follows [12, 13, 14]:

X̂̇_t = A_0 + A_1X̂_t + β_t[Z_t − (B_0 + B_1X̂_t)],  (41)

β_t = R_tB_1(t)^Tν_2(t)^{−1},  det ν_2(t) ≠ 0,  (42)

Ṙ_t = A_1(t)R_t + R_tA_1(t)^T + ν_1(t) − β_tν_2(t)β_t^T  (43)

at the corresponding initial conditions, where R_t is the m.s. error covariance matrix and β_t is the gain coefficient. So, we have the following result.

Proposal 4. Let:

  1. the SDS USD be reducible to an SDS according to Proposal 2 or Proposal 3;

  2. the observations be performed according to Eq. (40).

Then the equations of m.s. normal filtering take the form of the generalized Kalman-Bucy filter (41)-(43).
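A discretized sketch of the filter (41)-(43) for an assumed scalar model (A_1 = −1, B_0 = 0, B_1 = 1; all numerical values are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar Kalman-Bucy filter, Eqs. (41)-(43):
# dX = A1*X dt + dW (intensity nu1),  Z = dY/dt = X + V2 (intensity nu2).
A1, nu1, nu2, dt, n = -1.0, 1.0, 1.0, 1e-3, 200_000
x, xh, R = 0.0, 0.0, 1.0
for _ in range(n):
    x += dt * A1 * x + np.sqrt(nu1 * dt) * rng.standard_normal()
    z = x + np.sqrt(nu2 / dt) * rng.standard_normal()  # discretised white noise
    beta = R / nu2                                     # gain, Eq. (42)
    xh += dt * (A1 * xh + beta * (z - xh))             # Eq. (41)
    R += dt * (2 * A1 * R + nu1 - beta**2 * nu2)       # Eq. (43)

# Stationary Riccati solution of (43): R^2 + 2R - 1 = 0, R* = sqrt(2) - 1.
print(R, np.sqrt(2) - 1)
```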

Advertisement

4. Normal linear extrapolation

Using the equations of linear m.s. extrapolation for the time interval Δ [12, 13, 14], we get the following equations of the generalized Kalman-Bucy extrapolator:

X̂̇(t + Δ|t) = A_1X̂(t + Δ|t),  Δ > 0,  (44)

with the initial condition

X̂(t + Δ|t)|_{Δ=0} = X̂_t.  (45)

For the initial time moment t and the final time moment t + Δ, according to Eq. (44) we get

X(t + Δ|t) = u(t + Δ, t)X_t + ∫_t^{t+Δ} u(t + Δ, τ)a_0(τ)dτ + ∫_t^{t+Δ} u(t + Δ, τ)ψ(τ)dW(τ),  (46)

where u(t, τ) is the fundamental solution of the equation u̇_t = A_1(t)u_t with the condition u(t, t) = I. Taking in Eq. (46) the conditional mathematical expectation relative to Y_{t_0}^t, we get the m.s. estimate of the future state X_{t+Δ}:

X̂(t + Δ|t) = E[X_{t+Δ}|Y_{t_0}^t] = u(t + Δ, t)X̂(t|t) + ∫_t^{t+Δ} u(t + Δ, τ)a_0(τ)dτ.  (47)

In this case, the error covariance matrix R(t + Δ|t) satisfies the following equation:

Ṙ(t + Δ|t) = a_1R(t + Δ|t) + R(t + Δ|t)a_1^T + ψν_0ψ^T  (48)

with the initial condition

R(t + Δ|t)|_{Δ=0} = R_t.  (49)

Here the error matrix R_t is known from Proposal 4. So, we have the following proposition.

Proposal 5. Under the conditions of Proposal 4, the m.s. normal extrapolation X̂(t + Δ|t) is defined by Eqs. (47)-(49).

This extrapolator represents a series connection of the m.s. filter with the gain u(t + Δ, t), the summator u(t + Δ, t)X̂(t|t), and the integral term ∫_t^{t+Δ} u(t + Δ, τ)a_0(τ)dτ. The accuracy of extrapolation is estimated according to Eqs. (48) and (49).
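A worked scalar instance of the extrapolator (47)-(49) with constant coefficients, where the fundamental solution is u(t + Δ, t) = exp(a_1Δ); all numerical values are illustrative assumptions:

```python
import math

# Scalar extrapolator: constant a1 < 0, drift a0, diffusion intensity q.
a1, a0, q = -0.5, 1.0, 0.2
xh_t, R_t, Delta = 3.0, 0.1, 2.0

u = math.exp(a1 * Delta)                       # fundamental solution u(t+Delta, t)
# Eq. (47): u*xh + closed-form integral of u(t+Delta, tau)*a0 over [t, t+Delta]
xh_pred = u * xh_t + a0 * (u - 1.0) / a1
# Eqs. (48)-(49): closed-form solution of the scalar covariance equation
R_pred = math.exp(2 * a1 * Delta) * R_t + q * (math.exp(2 * a1 * Delta) - 1) / (2 * a1)

# As Delta grows, xh_pred tends to the stationary mean -a0/a1 and
# R_pred tends to -q/(2*a1).
print(xh_pred, R_pred)
```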

Advertisement

5. Linear modeling and estimation in SDS USD with multiplicative noises

Let us consider the vector Eqs. (2)-(6) for multiplicative Gaussian noises:

φ = φ(Ẋ_t, X_t, V_t) = φ_1(Ẋ_t, t) + [φ_2^0(t) + Σ_{h=1}^{n_X} φ_2^h(t)X_h]V_t = 0.  (50)

Here dim X_t = dim Ẋ_t = n_X, dim φ = n_X, and φ_1 is a nonlinear vector function of the vector argument Ẋ_t admitting the linear regression

φ_1(Ẋ_t, t) ≈ φ_11Ẋ_t,  φ_11 = φ_11(E_t^Ẋ, K_t^Ẋ, t).  (51)

Here φ_11 is the matrix of regressors; V_t is a Gaussian vector white noise, dim V_t = n_V, with matrix intensity ν = ν_0(t). In this case, Eqs. (50) and (51) may, under the condition det φ_11 ≠ 0, be resolved with respect to Ẋ_t:

Ẋ_t = B_0 + B_1X_t + (B_2 + Σ_{r=1}^{n_X} B_3^rX_{rt})V_t,  (52)

where B_0, B_1, B_2, B_3^r depend on the regressors φ_11. Using [9, 10, 11, 12], we get the equations for the mathematical expectation E_t^X, the covariance matrix K_t^X, and the matrix of covariance functions K^X(t_1, t_2):

Ė_t^X = B_0 + B_1E_t^X,  E_{t_0}^X = E_0^X,  (53)

K̇_t^X = B_1K_t^X + K_t^XB_1^T + B_2ν_0B_2^T + Σ_{r=1}^{n_X}(B_3^rν_0B_2^T + B_2ν_0B_3^{rT})E_{rt}^X + Σ_{r,s=1}^{n_X} B_3^rν_0B_3^{sT}(E_{rt}^XE_{st}^X + K_{rs,t}^X),  K_{t_0}^X = K_0^X,  (54)

∂K^X(t_1, t_2)/∂t_2 = K^X(t_1, t_2)B_1(t_2)^T,  K^X(t_1, t_1) = K_{t_1}^X,  t_2 > t_1.  (55)

Here K_t^X = [K_{rs,t}^X] and K^X(t_1, t_2) = [K_{rs}^X(t_1, t_2)]. So, for MAM in nonstationary regimes we have Eqs. (53)-(55) (Proposal 6). In the stationary case, from Eqs. (54) and (55) we get the following finite set of equations for E^X and K^X (Proposal 7):

B_0 + B_1E^X = 0,  (56)

B_1K^X + K^XB_1^T + B_2ν_0B_2^T + Σ_{r=1}^{n_X}(B_3^rν_0B_2^T + B_2ν_0B_3^{rT})E_r^X + Σ_{r,s=1}^{n_X} B_3^rν_0B_3^{sT}(E_r^XE_s^X + K_{rs}^X) = 0,  (57)

and the ordinary differential equation for k^X(τ), τ = t_2 − t_1:

dk^X(τ)/dτ = B_1k^X(τ),  k^X(0) = K^X.  (58)
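A scalar sketch of the moment Eqs. (53)-(54) with multiplicative noise, integrated until it reaches the stationary point of Eqs. (56)-(57); the parameter values are illustrative assumptions:

```python
# Scalar instance of Eqs. (53)-(54):
# dX = (B0 + B1*X) dt + (B2 + B3*X) dW, noise intensity nu0.
B0, B1, B2, B3, nu0 = 1.0, -1.0, 0.5, 0.5, 1.0
E, K, dt = 0.0, 0.0, 1e-3
for _ in range(30_000):                # integrate to t = 30
    E += dt * (B0 + B1 * E)
    K += dt * (2 * B1 * K + nu0 * (B2 + B3 * E)**2 + B3**2 * nu0 * K)

# Stationary point of Eqs. (56)-(57):
# E* = -B0/B1 = 1,  K* = -nu0*(B2 + B3*E*)^2/(2*B1 + B3^2*nu0) = 4/7.
print(E, K)
```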

Applying the theory of linear Pugachev (conditionally m.s. optimal) filtering [9, 10, 11, 12] to the equations

Ẋ_t = A_0 + A_1X_t + (A_2 + Σ_{r=1}^{n_X} A_3^rX_{rt})V_t,  (59)

Z_t = Ẏ_t = B_0 + B_1X_t + B_2V_t,  (60)

we get the following normal filtering equations:

X̂̇_t = A_0 + A_1X̂_t + β_t[Z_t − (B_0 + B_1X̂_t)],  (61)

β_t = [R_tB_1^T + (A_2 + Σ_{r=1}^{n_X+n_Y} A_3^rE_r^X)ν_0B_2^T]κ_22^{−1},  (62)

Ṙ_t = A_1R_t + R_tA_1^T − [R_tB_1^T + (A_2 + Σ_{r=1}^{n_X+n_Y} A_3^rE_r^X)ν_0B_2^T]κ_22^{−1}[B_1R_t + B_2ν_0(A_2^T + Σ_{r=1}^{n_X+n_Y} A_3^{rT}E_r^X)] + (A_2 + Σ_{r=1}^{n_X+n_Y} A_3^rE_r^X)ν_0(A_2^T + Σ_{s=1}^{n_X+n_Y} A_3^{sT}E_s^X) + Σ_{r,s=1}^{n_X+n_Y} A_3^rν_0A_3^{sT}K_{rs}.  (63)

Here

κ_11 = B_2ν_0B_0^T,  κ_22 = B_2ν_0B_2^T.  (64)

For calculating (62), we need to find the mathematical expectation E_t^Q and the covariance matrix K_t^Q of the combined vector Q_t = [X_1 … X_{n_X} Y_1 … Y_{n_Y}]^T, together with the covariance matrix R_t of the error X̃_t = X̂_t − X_t, using the equations

Ė_t^Q = a_0^Q + a^QE_t^Q,  (65)

K̇_t^Q = a^QK_t^Q + K_t^Qa^{QT} + c_0^Qν_0c_0^{QT} + Σ_{r=1}^{n_X+n_Y}(c_0^Qν_0c_r^{QT} + c_r^Qν_0c_0^{QT})E_{rt}^Q + Σ_{r,s=1}^{n_X+n_Y} c_r^Qν_0c_s^{QT}(E_{rt}^QE_{st}^Q + K_{rs,t}^Q),  (66)

where

a^Q = [0 B_1; 0 A_1],  a_0^Q = [B_0; A_0],  c_0^Q = [B_2; A_2],  c_r^Q = [0; A_3^r]  (r = 1, …, n_X + n_Y).  (67)

So, Eqs. (61)-(67) define the linear Pugachev filter for SDS USD with multiplicative noises reduced to the SDS (59) and (60) (Proposal 8).

At last, following [9, 10, 11, 12], let us consider the linear Pugachev extrapolator for the reduced SDS USD. Taking into account the equations

Ẋ_t = A_0 + A_1X_t + (A_2 + Σ_{r=1}^{n_X} A_3^{n_Y+r}X_{rt})V_1,  (68)

Z_t = Ẏ_t = B_0 + B_1X_t + B_2V_2  (69)

(V_1, V_2 being independent normal white noises with intensities ν_1, ν_2) and the corresponding result of Section 5, we come to the following equation:

X̂̇_t = A_0(t + Δ) + A_1(t + Δ)X̂_t + β_t[Z_t − B_0 − B_1ε_t^{−1}(X̂_t − h_t)].  (70)

Here ε_t = u(t + Δ, t), u(s, t) being the fundamental solution of the equation du/ds = A_1(s)u, and

h_t = ∫_t^{t+Δ} u(t + Δ, τ)A_0(τ)dτ.  (71)

The accuracy of the linear Pugachev extrapolator (70) is evaluated by integrating the following equation:

Ṙ_t = A_1(t + Δ)R_t + R_tA_1(t + Δ)^T − β_tB_2ν_2B_2^Tβ_t^T + [A_2(t + Δ) + Σ_{r=n_Y+1}^{n_X+n_Y} A_3^r(t + Δ)E_r(t + Δ)]ν_1(t + Δ)[A_2(t + Δ)^T + Σ_{r=n_Y+1}^{n_X+n_Y} A_3^r(t + Δ)^TE_r(t + Δ)] + Σ_{r,s=n_Y+1}^{n_X+n_Y} A_3^r(t + Δ)ν_1(t + Δ)A_3^s(t + Δ)^TK_{rs}.  (72)

Equations (70)-(72) define the normal linear Pugachev extrapolator for SDS USD reduced to SDS (Proposal 9).

Advertisement

6. Normal nonlinear filtering and extrapolation

Let us consider the SDS USD (2) reducible to an SDS, together with a fully observable measuring system, described by the following equations:

Ẋ_t = a(X_t, Y_t, α, t) + b(X_t, Y_t, α, t)V_0,  (73)

Z_t = Ẏ_t = a_1(X_t, Y_t, t) + b_1(X_t, Y_t, t)V_0.  (74)

Here a, a_1, b, b_1 are known functions of the mentioned variables; α is the vector of parameters in Eq. (73); V_0 is a normal white noise with intensity matrix ν_0 = ν_0(t).

Using the theory of normal nonlinear suboptimal filtering [10, 11, 12], we get the following equations for X̂_t and R_t:

dX̂_t = f(X̂_t, Y_t, R_t, t)dt + h(X̂_t, Y_t, R_t, t)[dY_t − f_1(X̂_t, Y_t, R_t, t)dt],  (75)

dR_t = [f_2(X̂_t, Y_t, R_t, t) − h(X̂_t, Y_t, R_t, t)(b_1ν_0b_1^T)(Y_t, t)h(X̂_t, Y_t, R_t, t)^T]dt + Σ_{r=1}^{n_y} ρ_r(X̂_t, Y_t, R_t, t)[dY_{rt} − f_{1r}(X̂_t, Y_t, R_t, t)dt].  (76)

Here

f(X̂_t, Y_t, R_t, t) = [(2π)^{n_x}|R_t|]^{−1/2} ∫ a(Y_t, x, t) exp[−(x − X̂_t)^TR_t^{−1}(x − X̂_t)/2]dx,  (77)

f_1(X̂_t, Y_t, R_t, t) = [f_{1r}(X̂_t, Y_t, R_t, t)] = [(2π)^{n_x}|R_t|]^{−1/2} ∫ a_1(Y_t, x, t) exp[−(x − X̂_t)^TR_t^{−1}(x − X̂_t)/2]dx,  (78)

h(X̂_t, Y_t, R_t, t) = {[(2π)^{n_x}|R_t|]^{−1/2} ∫ [x a_1(Y_t, x, t)^T + (bν_0b_1^T)(Y_t, x, t)] exp[−(x − X̂_t)^TR_t^{−1}(x − X̂_t)/2]dx − X̂_tf_1(X̂_t, Y_t, R_t, t)^T}(b_1ν_0b_1^T)^{−1}(Y_t, t),  (79)

f_2(X̂_t, Y_t, R_t, t) = [(2π)^{n_x}|R_t|]^{−1/2} ∫ [(x − X̂_t)a(Y_t, x, t)^T + a(Y_t, x, t)(x − X̂_t)^T + (bν_0b^T)(Y_t, x, t)] exp[−(x − X̂_t)^TR_t^{−1}(x − X̂_t)/2]dx,  (80)

ρ_r(X̂_t, Y_t, R_t, t) = [(2π)^{n_x}|R_t|]^{−1/2} ∫ [(x − X̂_t)(x − X̂_t)^Tā_r(Y_t, x, t) + (x − X̂_t)b̄_r(Y_t, x, t)^T + b̄_r(Y_t, x, t)(x − X̂_t)^T] exp[−(x − X̂_t)^TR_t^{−1}(x − X̂_t)/2]dx,  r = 1, …, n_y,  (81)

X̂_0 = E_N[X_0|Y_0],  R_0 = E_N[(X_0 − X̂_0)(X_0 − X̂_0)^T|Y_0],  (82)

where ā_r is the r-th element of the row matrix (a_1^T − â_1^T)(b_1ν_0b_1^T)^{−1}; b̄_{kr} is the element of the k-th row and r-th column of the matrix (bν_0b_1^T)(b_1ν_0b_1^T)^{−1}; b̄_r = [b̄_{1r} … b̄_{n_x r}]^T is the r-th column of this matrix, r = 1, …, n_y.

Proposal 10. If the vector SDS USD (2) is reducible to Eqs. (73) and (74), then Eqs. (75)-(81) with the conditions (82) define the normal filtering algorithm. The number of equations is equal to

Q_NAM = n_x + n_x(n_x + 1)/2 = n_x(n_x + 3)/2.  (83)

Hence, if the function a_1 is linear in X_t and the function b does not depend on X_t, then all matrices ρ_r = 0 and Eq. (76) does not contain Ẏ_t (Section 3).

Analogously to Section 6, we get from [12] the corresponding equations of the normal conditionally optimal (Pugachev) extrapolator for the reduced equations

Ẋ_t = a(X_t, Y_t, t) + b(X_t, t)V_1,  (84)

Z_t = Ẏ_t = a_1(X_t, Y_t, t) + b_1(X_t, Y_t, t)V_2,  (85)

where V_1 and V_2 are independent normal white noises.

Advertisement

7. Examples

Let us consider the scalar system

φ(Ẋ_t, X_t) = φ_1(Ẋ_t) + φ_2(X_t) + U_1(t) = 0,  (86)

U̇_1(t) = α_10 + α_11U_1(t) + β_1V_1(t).  (87)

Here X_t, Ẋ_t are the state variable and its time derivative; U_1(t) is a scalar stochastic disturbance; V_1(t) is a scalar normal white noise with intensity ν_1(t); φ_1 and φ_2 are nonlinear functions; α_10, α_11, β_1 are constant parameters. After regression linearization of the nonlinear functions, we have

φ_1 ≈ φ_10 + k_Ẋ^{φ1}Ẋ_t⁰,  φ_2 ≈ φ_20 + k_X^{φ2}X_t⁰.  (88)

Under the condition k_Ẋ^{φ1} ≠ 0, we get from (86) and (88) the equations for the mathematical expectation m_t^X = EX_t and for the centered variable X_t⁰ = X_t − m_t^X:

φ_10 + φ_20 + m_t^{U1} = 0,  (89)

Ẋ_t⁰ = a_tX_t⁰ + b_tU_1(t)⁰,  (90)

where

φ_10 = φ_10(m_t^Ẋ, D_t^Ẋ),  φ_20 = φ_20(m_t^X, D_t^X),  (91)

a_t = a_t(m_t^X, m_t^Ẋ, D_t^X, D_t^Ẋ, D_t^{U1}, K_t^{XU1}) = −k_X^{φ2}(k_Ẋ^{φ1})^{−1},  b_t = −(k_Ẋ^{φ1})^{−1}.  (92)

Equations (87) and (90), for U_1(t)⁰ = U_1(t) − m_t^{U1} (m_t^{U1} = EU_1(t)) and X̄_t = [X_t U_1(t)]^T, may be presented in the vector form

ṁ_t^X̄ = A_0(t) + A(t)m_t^X̄,  (93)

X̄̇_t⁰ = A(t)X̄_t⁰ + B(t)V_1(t),  (94)

A(t) = [a_t b_t; 0 α_11],  B(t) = [0; β_1].  (95)

The covariance matrix

K_t^X̄ = [D_t^X K_t^{XU1}; K_t^{XU1} D_t^{U1}]  (96)

and the matrix of covariance functions

K^X̄(t_1, t_2) = [K_11^X̄(t_1, t_2) K_12^X̄(t_1, t_2); K_21^X̄(t_1, t_2) K_22^X̄(t_1, t_2)]  (97)

satisfy the linear equations of correlation theory (Section 3):

K̇_t^X̄ = A(t)K_t^X̄ + K_t^X̄A(t)^T + B(t)ν_1(t)B(t)^T,  K_{t_0}^X̄ = K_0^X̄,  (98)

∂K^X̄(t_1, t_2)/∂t_2 = K^X̄(t_1, t_2)A(t_2)^T,  K^X̄(t_1, t_1) = K_{t_1}^X̄.  (99)

The vector Eq. (98) is equivalent to the following scalar equations:

Ḋ_t^X = 2a_tD_t^X + 2b_tK_t^{XU1},  Ḋ_t^{U1} = 2α_11D_t^{U1} + β_1²ν_1(t),  K̇_t^{XU1} = (a_t + α_11)K_t^{XU1} + b_tD_t^{U1}.  (100)

From Eq. (90) we calculate the variance

D_t^Ẋ = a_t²D_t^X + b_t²D_t^{U1} + 2a_tb_tK_t^{XU1}.  (101)

Thus, for the MAM algorithm we use Eqs. (89), (91), (92), (98)-(101).
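The MAM equations of this example can be integrated numerically; the constant coefficients below (a_t, b_t, α_11 and the noise intensity β_1²ν_1) are illustrative assumptions:

```python
# Euler integration of the scalar MAM equations (100) of Example 1,
# then the derivative variance from Eq. (101).
a, b, alpha11, q = -1.0, 1.0, -2.0, 4.0   # q stands for beta1^2 * nu1
DX, DU, KXU, dt = 0.0, 0.0, 0.0, 1e-3
for _ in range(20_000):                   # integrate to t = 20
    dDX = 2 * a * DX + 2 * b * KXU
    dDU = 2 * alpha11 * DU + q
    dKXU = (a + alpha11) * KXU + b * DU
    DX += dt * dDX; DU += dt * dDU; KXU += dt * dKXU

DXdot = a**2 * DX + b**2 * DU + 2 * a * b * KXU

# Stationary values: DU* = 1, KXU* = 1/3, DX* = 1/3, DXdot* = 2/3.
print(DX, DU, KXU, DXdot)
```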

Let the system (86) and (87) be observable, so that

Z_t = Ẏ_t = X_t + V_2(t).  (102)

Then for the Kalman-Bucy filter (Proposal 4) we have

X̂̇_t = a_tX̂_t + β_t(Z_t − X̂_t),  β_t = R_tν_2(t)^{−1} (det ν_2(t) ≠ 0),  Ṙ_t = 2a_tR_t + ν_1 − ν_2β_t².  (103)

The Kalman-Bucy extrapolator equations are defined by Proposal 5 with u(t, τ) = e^{a(t−τ)} for constant a.

In Table 1, the coefficients of statistical linearization for typical nonlinear functions are given.

φ  |  φ_0
Ẏ³  |  m(m² + 3D)
sin ωẎ  |  exp(−ω²D/2) sin ωm
cos ωẎ  |  exp(−ω²D/2) cos ωm
Ẏ exp(αẎ)  |  (m + αD) exp(αm + α²D/2)
Ẏ sin ωẎ  |  (m sin ωm + ωD cos ωm) exp(−ω²D/2)
Ẏ cos ωẎ  |  (m cos ωm − ωD sin ωm) exp(−ω²D/2)
sgn Ẏ  |  2Φ(m/√D)
Ẏ² sgn Ẏ  |  2D[(m²/D + 1)Φ(m/√D) + (m/√(2πD)) exp(−m²/(2D))]
l, −l, lẎ/d for Ẏ ≥ d; Ẏ ≤ −d; |Ẏ| < d (saturation)  |  l{(1 + m_1)Φ[(1 + m_1)/σ_1] − (1 − m_1)Φ[(1 − m_1)/σ_1]} + (lσ_1/√(2π)){exp[−(1 + m_1)²/(2σ_1²)] − exp[−(1 − m_1)²/(2σ_1²)]}
γ(Ẏ + d), 0, γ(Ẏ − d) for Ẏ < −d; |Ẏ| ≤ d; Ẏ > d (dead zone)  |  γd{m_1 − (m_1 − 1)Φ[(1 − m_1)/σ_1] − (m_1 + 1)Φ[(1 + m_1)/σ_1] + (σ_1/√(2π))[exp(−(1 − m_1)²/(2σ_1²)) − exp(−(1 + m_1)²/(2σ_1²))]}
−l, 0, l for Ẏ < −d; |Ẏ| ≤ d; Ẏ > d (relay with dead zone)  |  l{Φ[(1 + m_1)/σ_1] − Φ[(1 − m_1)/σ_1]}
Ẏ_1Ẏ_2  |  m_1m_2 + K_12
Ẏ_1²Ẏ_2  |  (m_1² + K_11)m_2 + 2m_1K_12
sin(ω_1Ẏ_1 + ω_2Ẏ_2)  |  exp[−(ω_1²K_11 + 2ω_1ω_2K_12 + ω_2²K_22)/2] sin(ω_1m_1 + ω_2m_2)
sgn(Ẏ_1 + Ẏ_2)  |  2Φ(ζ_12),  ζ_12 = (m_1 + m_2)/√D,  D = K_11 + 2K_12 + K_22

Notation: m = m_Ẏ, D = D_Ẏ; in the piecewise-linear rows, m_1 = m/d and σ_1 = √D/d; in the two-dimensional rows, m_i and K_ij are the means and covariances of Ẏ_1, Ẏ_2; Φ(x) = (2π)^{−1/2}∫_0^x exp(−t²/2)dt is the probability integral (Laplace function).

Table 1.

Coefficients of statistical linearization for typical nonlinear functions [12, 13, 14].
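Two rows of Table 1 can be checked by Monte Carlo sampling; this is an illustrative verification with assumed values of m, D, α:

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo check of Table 1 rows for Y ~ N(m, D):
# E[Y^3] = m*(m^2 + 3D) and E[Y*exp(alpha*Y)] = (m + alpha*D)*exp(alpha*m + alpha^2*D/2).
m, D, alpha = 0.8, 0.4, 0.3
y = rng.normal(m, np.sqrt(D), 1_000_000)
print(np.mean(y**3), m * (m**2 + 3 * D))
print(np.mean(y * np.exp(alpha * y)),
      (m + alpha * D) * np.exp(alpha * m + alpha**2 * D / 2))
```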

Let us consider the normal scalar system

Φ(Ẋ_t) + a_tX_t + u_t = 0.  (104)

Here the random function admits the Pugachev normalization

Φ(Ẋ_t) ≈ Φ_0 + k_ΦẊ_t⁰ + ΔΦ_t⁰,  (105)

where ΔΦ_t⁰ is a normal StP satisfying the equation of the forming filter

ΔΦ̇_t⁰ = a_t^{ΔΦ}ΔΦ_t⁰ + b_t^{ΔΦ}V_t.  (106)

Note that the functions Φ_0 and k_Φ depend on E_t^Ẋ and D_t^Ẋ. Equations (104) and (105) decompose into two equations. The first, with Φ_0 = k_Φ^0E_t^Ẋ, is

Φ_0 + a_tE_t^X + u_t = 0.  (107)

The second, k_ΦẊ_t⁰ + ΔΦ_t⁰ + a_tX_t⁰ = 0 under the condition k_Φ ≠ 0, may be presented as

Ẋ_t⁰ = −a_tk_Φ^{−1}X_t⁰ − k_Φ^{−1}ΔΦ_t⁰.  (108)

Equations (106) and (108) for Z_t⁰ = [X_t⁰ ΔΦ_t⁰]^T lead to the following vector equation for the covariance matrix:

K̇_t^Z = AK_t^Z + K_t^ZA^T + Bν_VB^T,  (109)

where A = [−a_tk_Φ^{−1} −k_Φ^{−1}; 0 a_t^{ΔΦ}], B = [0; b_t^{ΔΦ}]. Equations (107) and (109) give the following final relations:

E_t^Ẋ = −(a_tE_t^X + u_t)(k_Φ^0)^{−1},  (110)

D_t^Ẋ = a_t²k_Φ^{−2}D_t^X + k_Φ^{−2}D_t^{ΔΦ} + 2a_tk_Φ^{−2}K_t^{XΔΦ},  Ḋ_t^X = −2a_tk_Φ^{−1}D_t^X − 2k_Φ^{−1}K_t^{XΔΦ},  Ḋ_t^{ΔΦ} = 2a_t^{ΔΦ}D_t^{ΔΦ} + (b_t^{ΔΦ})²ν_V,  K̇_t^{XΔΦ} = (a_t^{ΔΦ} − a_tk_Φ^{−1})K_t^{XΔΦ} − k_Φ^{−1}D_t^{ΔΦ}.  (111)

8. Conclusion

Models of various types of SDS USD arise in problems of analytical modeling and estimation (filtering, extrapolation, etc.) for control stochastic systems when it is possible to neglect higher-order time derivatives. Linear and nonlinear methodological and algorithmic support of analytical modeling, filtering, and extrapolation for SDS USD is developed. The methodology is based on the reduction of SDS USD to SDS by means of linear and nonlinear regression models. Special attention is paid to SDS USD with multiplicative (parametric) noises. Examples illustrating the methodology are presented. The described results may be generalized to systems with stochastically unsolved derivatives and to stochastic integrodifferential systems reducible to differential ones.


Acknowledgments

The author is grateful to experts for their appropriate and constructive suggestions to improve this template. Research is supported by the Russian Academy of Sciences (Project-AAAA-A19-119001990037-5). Also, the author is much obliged to Mrs. Irina Sinitsyna and Mrs. Helen Fedotova for translation and manuscript preparation.

References

  1. Sinitsyn IN. Analytical modeling of wide band processes in stochastic systems with unsolved derivatives. Informatics and its Applications. 2017;11(1):2-12. (in Russian)
  2. Sinitsyn IN. Parametric analytical modeling of processes in stochastic systems with unsolved derivatives. Systems and Means of Informatics. 2017;27(1):21-45. (in Russian)
  3. Sinitsyn IN. Normal suboptimal filters for stochastic systems with unsolved derivatives. Informatics and its Applications. 2021;15(1):3-10. (in Russian)
  4. Sinitsyn IN. Analytical modeling and filtering in integrodifferential systems with unsolved derivatives. Systems and Means of Informatics. 2021;31(1):37-56. (in Russian)
  5. Sinitsyn IN. Analytical modeling and estimation of normal processes defined by stochastic differential equations with unsolved derivatives. Mathematics and Statistics Research. 2021. (in print)
  6. Pugachev VS. Theory of Random Functions and its Application to Control Problems. Pergamon Press; 1965. p. 833
  7. Pugachev VS. Probability Theory and Mathematical Statistics for Engineers. Pergamon Press; 1984. p. 450
  8. Pugachev VS, Sinitsyn IN. Lectures on Functional Analysis and Applications. Singapore: World Scientific; 1999. p. 730
  9. Pugachev VS, Sinitsyn IN. Stochastic Differential Systems. Analysis and Filtering. Chichester: John Wiley & Sons; 1987. p. 549
  10. Pugachev VS, Sinitsyn IN. Theory of Stochastic Systems. 2nd ed. Moscow: TORUS Press; 2001. p. 1000. (in Russian)
  11. Pugachev VS, Sinitsyn IN. Stochastic Systems. Theory and Applications. Singapore: World Scientific; 2001. p. 908
  12. Sinitsyn IN. Kalman and Pugachev Filters. 2nd ed. Moscow: Logos; 2007. p. 772. (in Russian)
  13. Socha L. Linearization Methods for Stochastic Dynamic Systems. Lecture Notes in Physics 730. Springer; 2008. p. 383
  14. Sinitsyn IN. Normalization of systems with stochastically unsolved derivatives. Informatics and its Applications. 2021. (in print, in Russian)
