
Theory of Control Stochastic Systems with Unsolved Derivatives

By Igor N. Sinitsyn

Submitted: June 7th, 2021. Reviewed: September 14th, 2021. Published: October 25th, 2021.

DOI: 10.5772/intechopen.100448


Abstract

Various types of stochastic differential systems with unsolved derivatives (SDS USD) arise in problems of analytical modeling and estimation (filtering, extrapolation, etc.) for control stochastic systems, when it is possible to neglect higher-order time derivatives. Methodological and algorithmic support of analytical modeling, filtering, and extrapolation for SDS USD is developed. The methodology is based on the reduction of SDS USD to SDS by means of linear and nonlinear regression models. Two examples illustrating the stochastic aspects of the methodology are presented. Special attention is paid to SDS USD with multiplicative (parametric) noises.

Keywords

  • analytical modeling
  • estimation (filtering, extrapolation)
  • normal approximation method (NAM)
  • regression (linear, nonlinear)
  • stochastic differential systems with unsolved derivatives (SDS USD)

1. Introduction

Approximate methods of analytical modeling (MAM) of wideband stochastic processes (StP) in stochastic differential systems with unsolved derivatives (SDS USD), based on the normal approximation method (NAM), the method of orthogonal expansions, and the method of quasimoments, are developed in [1, 2]. For stochastic integrodifferential systems with unsolved derivatives reducible to SDS, the corresponding MAM equations are given in [3, 4]. Problems of mean square (m.s.) synthesis of normal (Gaussian) estimators (filters, extrapolators, etc.) were first stated and solved in [1, 2, 3, 4]. The results presented in [1, 2, 3, 4] are valid for functions in SDS USD that are smooth in the m.s. sense. For unsmooth functions in SDS USD, the theory of normal filtering and extrapolation is developed in [5].

Let us present an overview and generalization of [1, 2, 3, 4, 5] for linear and nonlinear regression models. Section 2 is devoted to normal analytical modeling algorithms. Normal linear filtering and extrapolation algorithms are given in Sections 3 and 4. Linear modeling and estimation algorithms for SDS USD with multiplicative (parametric) noises are presented in Section 5. Normal nonlinear algorithms for filtering and extrapolation are described in Section 6. Section 7 contains two illustrative examples. In Section 8, the main conclusions and some generalizations are given.


2. Normal modeling

Different types of SDS USD arise in problems of analytical modeling and estimator design for stochastic nonlinear dynamical systems when it is possible to neglect higher-order time derivatives [1, 2, 3].

First-order SDS USD is described by the following scalar equation:

$\varphi = \varphi\left(t, X_t, \dot{X}_t, U_t\right) = 0, \quad (1)$

where $X_t$ and $\dot{X}_t$ are the scalar state variable and its time derivative; $U_t$ is a noise vector StP, $\dim U_t = n_U$; the nonlinear function $\varphi$ admits regression approximation [6, 7, 8].

For vector SDS USD, we have the following vector equation:

$\bar{\varphi} = \bar{\varphi}\left(t, X_t, \bar{X}_t, U_t\right) = 0. \quad (2)$

Here $\bar{X}_t$ is the vector of derivatives up to order $l-1$:

$\bar{X}_t = \left[\dot{X}_t^T \; \ldots \; \left(X_t^{(l-1)}\right)^T\right]^T; \quad (3)$

$U_t$ is an autocorrelated noise vector defined by the linear vector equation

$\dot{U}_t = a_0^U(t) + a_1^U(t) U_t + b^U(t) V_t, \quad (4)$

where $\dim X_t = n_X$, $\dim U_t = n_U$; $V_t$ is a white noise, $\dim V_t = n_V$; $\dim a_0^U(t) = n_U \times 1$, $\dim a_1^U(t) = n_U \times n_U$, $\dim b^U(t) = n_U \times n_V$. Further, we consider the Wiener white noise $W_0(t)$ with matrix intensity $\nu_0 = \nu_0(t)$ and the mixed Wiener-Poisson white noise [9, 10, 11, 12, 13]:

$V_t = \dot{W}_t, \quad W_t = W_0(t) + \int_{R_0^q} c(\rho)\, P^0(t, \rho), \quad (5)$

$\nu(t) = \nu_0(t) + \int_{R_0^q} c(\rho) c(\rho)^T \nu_P(t, \rho)\, d\rho. \quad (6)$

Here $\dim c(\rho) = \dim W_0(t) = n_V$; the stochastic Ito integrals are taken over $R_0^q$ ($R^q$ with pricked origin).

As is known [6, 7, 8], a deterministic model for a real StP defined by $Y = \varphi(Z)$ at $Z = \left[X^T \; \bar{X}^T \; U^T\right]^T$ in (2) is given by the formula

$\hat{y}(z) = E[Y \mid z], \quad \hat{y}(z) \in \Psi, \quad (7)$

at the accuracy criterion

$\varepsilon(z) = \sum_{p=1}^{n_Y} E\left[\left(\hat{y}_p - Y_p\right)^2 \mid z\right]. \quad (8)$

The class of functions $\psi \in \Psi$ represents a linear functional space satisfying the following necessary and sufficient condition:

$\mathrm{tr}\, E\left\{\left[\hat{y}(z) - Y\right] \psi(z)^T\right\} = 0. \quad (9)$

For linear shifted and unshifted regression, we have two known models:

$\hat{y}(z) = g_B z, \quad g_B = \Gamma_{yz} \Gamma_z^{-1} \quad (10)$

(Booton [6, 7, 8]),

$\hat{y}(z) = a + g_K z^0, \quad g_K = K_{yz} K_z^{-1}, \quad a = EY - g_K Ez \quad (11)$

(Kazakov [6, 7, 8]),

where $Ez$, $\Gamma_z$, $K_z$ are the first- and second-order moments of the given one-dimensional distribution.

For Eq. (2), the linear regression model takes the Booton form

$\hat{\varphi} = \hat{\varphi}_0 + k_1^\varphi X_t + k_2^\varphi \bar{X}_t + k_3^\varphi U_t = 0, \quad (12)$

where $\hat{\varphi}_0$, $k_{1,2,3}^\varphi$ are regressors depending on $\varphi$ and the joint distribution of the StP $X_t$, $\bar{X}_t$, $U_t$. After differentiating Eq. (12) up to order $l-1$, we get the following set of $l-1$ equations:

$\dot{\hat{\varphi}}_t = 0, \; \ldots, \; \hat{\varphi}_t^{(l-1)} = 0. \quad (13)$

Under the condition of algebraic solvability of the linear Eqs. (12) and (13), we reduce the SDS USD to an SDS of the following form:

$\dot{X}_t = A_0 + A_1 X_t + A_2 U_t, \quad (14)$

where $A_0, A_1, A_2$ are expressed in terms of $\hat{\varphi}_0$, $k_{1,2,3}^\varphi$ ($\det k_2^\varphi \ne 0$) and depend indirectly on the statistical characteristics of $X_t$, its derivatives, and the noise $U_t$. For the combined vector $\tilde{Y}_t = \left[X_t^T \; U_t^T\right]^T$ we have the equation

$\dot{\tilde{Y}}_t = B_0 + B_1 \tilde{Y}_t + B_2 V_t, \quad \tilde{Y}_{t_0} = \tilde{Y}_0. \quad (15)$

Its first and second probabilistic moments satisfy the following equations [12, 13, 14]:

$\dot{E}_t^{\tilde{Y}} = B_0 + B_1 E_t^{\tilde{Y}}, \quad E_{t_0}^{\tilde{Y}} = E_0^{\tilde{Y}}, \quad (17)$

$\dot{K}_t^{\tilde{Y}} = B_1 K_t^{\tilde{Y}} + K_t^{\tilde{Y}} B_1^T + B_2 \nu B_2^T, \quad K_{t_0}^{\tilde{Y}} = K_0^{\tilde{Y}}, \quad (18)$

$\partial K^{\tilde{Y}}(t_1, t_2)/\partial t_2 = K^{\tilde{Y}}(t_1, t_2) B_1(t_2)^T, \quad K^{\tilde{Y}}(t_1, t_1) = K_{t_1}^{\tilde{Y}}, \quad (19)$

where $E_t^{\tilde{Y}} = E\tilde{Y}_t$, $K_t^{\tilde{Y}} = E\left[\left(\tilde{Y}_t - E_t^{\tilde{Y}}\right)\left(\tilde{Y}_t - E_t^{\tilde{Y}}\right)^T\right]$, $t_2 > t_1$. So, we get two proposals.

Proposal 1. Let the vector non-Gaussian SDS USD (2) satisfy the following conditions:

  1. the vector function $\bar{\varphi}$ in Eq. (2) admits m.s. regression of the linear class $\Psi$;

  2. the linear Eqs. (12) and (13) are solvable with respect to all derivatives up to order $l-1$.

Then the SDS USD may be reduced to a parametrized SDE. The first and second moments of the joint vector $\tilde{Y}_t = \left[X_t^T \; U_t^T\right]^T$ satisfy Eqs. (17)–(19).

Proposal 2. For the normal joint distribution $N = N\left(E_t^{\tilde{Y}}, K_t^{\tilde{Y}}\right)$ of the vector variables in Eqs. (17)–(19), it is necessary to put in the equations of Proposal 1

$W_t = W_0(t), \quad E_t^{\tilde{Y}} = E_N \tilde{Y}_t, \quad K_t^{\tilde{Y}} = E_N\left[\left(\tilde{Y}_t - E_N \tilde{Y}_t\right)\left(\tilde{Y}_t - E_N \tilde{Y}_t\right)^T\right], \quad K^{\tilde{Y}}(t_1, t_2) = E_N\left[\left(\tilde{Y}_{t_1} - E_N \tilde{Y}_{t_1}\right)\left(\tilde{Y}_{t_2} - E_N \tilde{Y}_{t_2}\right)^T\right]. \quad (20)$
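As a minimal numerical sketch of Proposals 1 and 2, the following Python fragment integrates the moment Eqs. (17) and (18) by the Euler method for a two-dimensional reduced system (15); all coefficients $B_0$, $B_1$, $B_2$, $\nu$ and initial moments are hypothetical.

```python
import numpy as np

# Hypothetical reduced system (15): dY~ = (B0 + B1*Y~) dt + B2 dW,
# where the white noise has intensity nu. All values are illustrative.
B0 = np.array([0.0, 0.1])
B1 = np.array([[-1.0, 1.0],
               [0.0, -2.0]])
B2 = np.array([[0.0],
               [0.5]])
nu = np.array([[1.0]])

dt, n_steps = 1e-3, 5000
E = np.array([1.0, 0.0])     # initial mean E_0
K = np.diag([0.1, 0.1])      # initial covariance K_0

for _ in range(n_steps):
    # Mean equation (17): dE/dt = B0 + B1 E
    E = E + dt * (B0 + B1 @ E)
    # Covariance equation (18): dK/dt = B1 K + K B1^T + B2 nu B2^T
    K = K + dt * (B1 @ K + K @ B1.T + B2 @ nu @ B2.T)

print("E_t =", E)
print("K_t =", K)
```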

For Eq. (2), using the Kazakov form

$\bar{\varphi} = \bar{\varphi}_0 + \bar{\varphi}^0 = 0, \quad (21)$

where

$\bar{\varphi}^0 = k_1^{\bar{\varphi}} X_t^0 + k_2^{\bar{\varphi}} \bar{X}_t^0 + k_3^{\bar{\varphi}} U_t^0, \quad (22)$

we have two sets of equations for the mathematical expectations and the centered variables:

$\dot{\bar{\varphi}}_0 = 0, \; \ldots, \; \bar{\varphi}_0^{(l-1)} = 0, \quad (23)$

$\dot{\bar{\varphi}}^0 = 0, \; \ldots, \; \left(\bar{\varphi}^0\right)^{(l-1)} = 0. \quad (24)$

So, we reduce the SDS USD to two sets of equations for $E_t^X$ and $X_t^0 = X_t - E_t^X$:

$\dot{E}_t^X = \bar{A}_0 + \bar{A}_1 E_t^X + \bar{A}_2 E_t^U, \quad (25)$

$\dot{X}_t^0 = A_1 X_t^0 + A_2 U_t^0. \quad (26)$

For the composed vector $\bar{Y}_t^0 = \left[X_t^{0T} \; U_t^{0T}\right]^T$, the first and second probabilistic moments satisfy the following equations:

$\dot{E}_t^{\bar{Y}} = \bar{B}_0 + \bar{B}_1 E_t^{\bar{Y}}, \quad E_{t_0}^{\bar{Y}} = E_0^{\bar{Y}}, \quad (27)$

$\dot{K}_t^{\bar{Y}} = \bar{B}_1 K_t^{\bar{Y}} + K_t^{\bar{Y}} \bar{B}_1^T + \bar{B}_2 \nu \bar{B}_2^T, \quad K_{t_0}^{\bar{Y}} = K_0^{\bar{Y}}, \quad (28)$

$\partial K^{\bar{Y}}(t_1, t_2)/\partial t_2 = K^{\bar{Y}}(t_1, t_2) \bar{B}_1(t_2)^T, \quad K^{\bar{Y}}(t_1, t_1) = K_{t_1}^{\bar{Y}}, \quad t_2 > t_1. \quad (29)$

Here $\nu = \nu_0$ is defined by Eq. (6).

So, for the Kazakov regression, Eqs. (21)–(24) are the basis of Proposal 3.

The regression $E[y \mid z]$ and its m.s. estimator $\hat{y}(z)$ represent a deterministic regression model. To obtain a stochastic regression model, it is sufficient to represent $Y$ in the form $Y = E[y \mid z] + Y'$ or $Y = \hat{y}(z) + Y''$, where $Y', Y''$ are some random variables. For finding a deterministic linear regression model, it is sufficient to know the mathematical expectations $Ez$, $Ey$ and the covariance matrices $K_z$, $K_{yz}$. In the case of a stochastic linear regression model, it is necessary to know the distribution of $Y$ for any $z$, or at least its regression $\hat{y}(z)$ and covariance matrix $K_{yz}$ (coinciding with the covariance matrix $K_{Y'z}$ or $K_{Y''z}$). A more general problem of the best m.s. approximation of the regression by a finite linear combination of given functions $\chi_1(z), \ldots, \chi_N(z)$ is reduced to the problem of the best linear approximation of the regression, as any linear combination of the functions $\chi_1(z), \ldots, \chi_N(z)$ represents a linear function of the variables $z_1 = \chi_1(z), \ldots, z_N = \chi_N(z)$. Corresponding models based on m.s. optimal regression are given in [7].

In the general case, we have the following vector equation:

$\dot{Z}_t = a_z(Z_t, t) + b_z(Z_t, t) V_t, \quad (30)$

where $V_t$ is defined by Eqs. (5) and (6). The functions $a_z = a_z(Z_t, t)$ and $b_z = b_z(Z_t, t)$ are composed on the basis of Eq. (2) after the nonlinear regression approximation $\hat{\varphi}_t = \sum_j c_j \chi_j(Z_t)$ and Eq. (13).

According to the normal approximation method (NAM), we have for Eq. (30) the following equations of normal modeling [9, 10, 11, 12]:

$\dot{E}_t^z = F_1\left(E_t^z, K_t^z, t\right), \quad (31)$

$\dot{K}_t^z = F_2\left(E_t^z, K_t^z, t\right), \quad (32)$

$\partial K^z(t_1, t_2)/\partial t_2 = F_3\left(E_{t_2}^z, K_{t_2}^z, K^z(t_1, t_2), t_1, t_2\right). \quad (33)$

Here

$F_1\left(E_t^z, K_t^z, t\right) = E_N\left[a_z(Z_t, t)\right], \quad (34)$

$F_2\left(E_t^z, K_t^z, t\right) = F_{21}\left(E_t^z, K_t^z, t\right) + F_{21}\left(E_t^z, K_t^z, t\right)^T + F_{22}\left(E_t^z, K_t^z, t\right), \quad (35)$

$F_{21}\left(E_t^z, K_t^z, t\right) = E_N\left[a_z(Z_t, t)\left(Z_t - E_t^z\right)^T\right], \quad (36)$

$F_{22}\left(E_t^z, K_t^z, t\right) = E_N\left[b_z(Z_t, t)\, \nu\, b_z(Z_t, t)^T\right], \quad (37)$

$F_3\left(E_{t_2}^z, K_{t_2}^z, K^z(t_1, t_2), t_1, t_2\right) = K^z(t_1, t_2)\left(K_{t_2}^z\right)^{-1} F_{21}\left(E_{t_2}^z, K_{t_2}^z, t_2\right)^T, \quad E_{t_0}^z = E Z_{t_0}, \quad K_{t_0}^z = K_{Z_{t_0}}, \quad K^z(t_1, t_1) = K_{t_1}^z, \quad (38)$

where $E_N$ is the symbol of normal (Gaussian) mathematical expectation.
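For a scalar instance of Eq. (30), the NAM Eqs. (31), (32), and (34)–(37) can be integrated numerically, with the normal expectations $E_N[\cdot]$ evaluated by Gauss-Hermite quadrature. The sketch below assumes a hypothetical drift $a_z(z) = -\sin z$ and a constant diffusion $b_z$:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Scalar NAM sketch for Eq. (30): dZ = a_z(Z) dt + b_z dW.
# The drift a_z and diffusion b_z below are hypothetical.
a_z = lambda z: -np.sin(z)
b_z, nu = 0.4, 1.0

# Gauss-Hermite rule for the normal expectation E_N[g(Z)], Z ~ N(E, K).
nodes, weights = hermegauss(20)
weights = weights / np.sqrt(2.0 * np.pi)

def EN(g, E, K):
    return np.sum(weights * g(E + np.sqrt(K) * nodes))

dt, E, K = 1e-3, 1.0, 0.2
for _ in range(5000):
    F1 = EN(a_z, E, K)                            # Eq. (34)
    F21 = EN(lambda z: a_z(z) * (z - E), E, K)    # Eq. (36)
    F22 = b_z * nu * b_z                          # Eq. (37)
    E = E + dt * F1                               # Eq. (31)
    K = K + dt * (2.0 * F21 + F22)                # Eqs. (32), (35)

print("E_t =", E, " K_t =", K)
```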


3. Normal linear filtering

In filtering problems for SDS USD, we use two types of equations: the reduced SDE for the vector of state variables $X_t$, and the equation for the vector of observation variables $Y_t$ and $\dot{Y}_t = Z_t$.

Consider the SDS USD Eq. (2) reducible to the SDE Eq. (15) at the conditions of Proposal 1. Introducing new variables by identifying $X_t$ with $\tilde{Y}_t$, we write

$\dot{X}_t = A_0(t) + A_1(t) X_t + A_2(t) V_1(t). \quad (39)$

Let the observation vector variable $Y_t$ satisfy the following linear equation:

$Z_t = \dot{Y}_t = B_0(t) + B_1(t) X_t + B_2(t) V_2(t), \quad (40)$

where $V_1(t)$ and $V_2(t)$ are normal white noises with matrix intensities $\nu_1(t)$ and $\nu_2(t)$.

The equations of the Kalman-Bucy filter in the case of Eqs. (39) and (40) for Gaussian white noises are as follows [12, 13, 14]:

$\dot{\hat{X}}_t = A_0 + A_1 \hat{X}_t + \beta_t\left[Z_t - \left(B_0 + B_1 \hat{X}_t\right)\right], \quad (41)$

$\beta_t = R_t B_1(t)^T \nu_2(t)^{-1}, \quad \det \nu_2(t) \ne 0, \quad (42)$

$\dot{R}_t = A_1(t) R_t + R_t A_1(t)^T + \nu_1(t) - \beta_t \nu_2(t) \beta_t^T \quad (43)$

at corresponding initial conditions; $R_t$ is the m.s. error covariance matrix, $\beta_t$ is the gain coefficient. So, we have the following result.

Proposal 4. Let:

  1. the SDS USD is reducible to an SDS according to Proposal 2 or Proposal 3;

  2. observations are performed according to Eq. (40).

Then the equations of m.s. normal filtering take the generalized Kalman-Bucy form (41)–(43).
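A scalar simulation sketch of the filter (41)–(43); all coefficients and noise intensities are chosen for illustration only, and the white noises are discretized with variances $\nu/dt$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar instance of Eqs. (39)-(40); all coefficients are illustrative.
A0, A1, A2 = 0.0, -1.0, 1.0
B0, B1, B2 = 0.0, 1.0, 1.0
nu1, nu2 = 0.5, 0.1          # intensities of V1 and V2
dt, n = 1e-3, 10000

x, x_hat, R = 1.0, 0.0, 1.0  # true state, estimate, error variance
for _ in range(n):
    # Simulate the state (39) and the observation Z = dY/dt (40)
    x += (A0 + A1 * x) * dt + A2 * np.sqrt(nu1 * dt) * rng.standard_normal()
    z = B0 + B1 * x + B2 * np.sqrt(nu2 / dt) * rng.standard_normal()
    # Gain (42) and variance equation (43)
    beta = R * B1 / nu2
    R += (2 * A1 * R + nu1 - beta * nu2 * beta) * dt
    # Filter equation (41)
    x_hat += (A0 + A1 * x_hat + beta * (z - (B0 + B1 * x_hat))) * dt

print("x =", x, " x_hat =", x_hat, " R =", R)
```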


4. Normal linear extrapolation

Using the equations of linear m.s. extrapolation for the time interval $\Delta$ [12, 13, 14], we get the following equations of the generalized Kalman-Bucy extrapolator:

$\dot{\hat{X}}_{t+\Delta|t} = A_1 \hat{X}_{t+\Delta|t} \quad (\Delta > 0) \quad (44)$

with the initial condition

$\hat{X}_{t+\Delta|t}\big|_{\Delta=0} = \hat{X}_t. \quad (45)$

For the initial time moment $t$ and the final time moment $t+\Delta$, according to Eq. (44) we get

$X_{t+\Delta|t} = u(t+\Delta, t) X_t + \int_t^{t+\Delta} u(t+\Delta, \tau) a_0(\tau)\, d\tau + \int_t^{t+\Delta} u(t+\Delta, \tau) \psi(\tau)\, dW(\tau), \quad (46)$

where $u(t, \tau)$ is the fundamental solution of the equation $\dot{u}_t = A_1(t) u_t$ at the condition $u(t, t) = I$. Taking in Eq. (46) the conditional mathematical expectation relative to $Y_{t_0}^t$, we get the m.s. estimate of the future state $X_{t+\Delta}$:

$\hat{X}_{t+\Delta|t} = E\left[X_{t+\Delta} \mid Y_{t_0}^t\right] = u(t+\Delta, t) \hat{X}_t + \int_t^{t+\Delta} u(t+\Delta, \tau) a_0(\tau)\, d\tau. \quad (47)$

In this case, the error covariance matrix $R_{t+\Delta|t}$ satisfies the following equation:

$\dot{R}_{t+\Delta|t} = a_1 R_{t+\Delta|t} + R_{t+\Delta|t} a_1^T + \psi \nu_0 \psi^T \quad (48)$

at the initial condition

$R_{t+\Delta|t}\big|_{\Delta=0} = R_t. \quad (49)$

Here the error matrix $R_t$ is known from Proposal 4. So, we have the following proposition.

Proposal 5. At the conditions of Proposal 4, the m.s. normal extrapolation $\hat{X}_{t+\Delta|t}$ is defined by Eqs. (47)–(49).

This extrapolator presents a series connection of the m.s. filter with the gain $u(t+\Delta, t)$, the summator $u(t+\Delta, t) \hat{X}_t$, and the integral term $\int_t^{t+\Delta} u(t+\Delta, \tau) a_0(\tau)\, d\tau$. The accuracy of extrapolation is estimated according to Eqs. (48) and (49).
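For constant matrices, $u(t+\Delta, \tau) = e^{A_1(t+\Delta-\tau)}$, so Eqs. (47)–(49) reduce to a matrix exponential plus two quadratures. A sketch with illustrative values (the filtered estimate and its covariance are assumed given by Proposal 4):

```python
import numpy as np
from scipy.linalg import expm

# Extrapolator sketch, Eqs. (47)-(49), for constant matrices (illustrative).
A1 = np.array([[-1.0, 1.0], [0.0, -0.5]])
a0 = np.array([0.2, 0.0])
psi = np.array([[0.0], [0.3]])
nu0 = np.array([[1.0]])
Delta = 0.5

x_hat_t = np.array([1.0, 0.0])   # filtered estimate at time t (Proposal 4)
R_t = 0.1 * np.eye(2)            # its error covariance

# Fundamental solution u(t+Delta, tau) = expm(A1*(t+Delta-tau))
u = lambda s: expm(A1 * s)

# Eq. (47): predicted mean (midpoint rule for the integral term)
n = 200
h = Delta / n
integral = sum(u(Delta - (k + 0.5) * h) @ a0 * h for k in range(n))
x_pred = u(Delta) @ x_hat_t + integral

# Eqs. (48)-(49): integrate dR/ds = A1 R + R A1^T + psi nu0 psi^T from R_t
R = R_t.copy()
for _ in range(n):
    R = R + h * (A1 @ R + R @ A1.T + psi @ nu0 @ psi.T)

print("x_hat(t+Delta|t) =", x_pred)
print("R(t+Delta|t) =", R)
```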


5. Linear modeling and estimation in SDS USD with multiplicative noises

Let us consider the vector Eqs. (2)–(6) for multiplicative Gaussian noises:

$\varphi = \varphi\left(\dot{X}_t, X_t, V_t\right) = \varphi_1\left(\dot{X}_t, t\right) + \left[\varphi_2^0(t) + \sum_{h=1}^{n_X} \varphi_2^h(t) X_{ht}\right] V_t = 0. \quad (50)$

Here $\dim X_t = \dim \dot{X}_t = n_X$, $\dim \varphi = n_X$; $\varphi_1$ is a nonlinear vector function of the vector argument $\dot{X}_t$ admitting the linear regression

$\varphi_1\left(\dot{X}_t, t\right) \approx \varphi_{11} \dot{X}_t, \quad \varphi_{11} = \varphi_{11}\left(E_t^{\dot{X}}, K_t^{\dot{X}}, t\right). \quad (51)$

Here $\varphi_{11}$ is the matrix of regressors; $V_t$ is a vector Gaussian white noise, $\dim V_t = n_V$, with matrix intensity $\nu = \nu_0(t)$. In this case, Eqs. (50) and (51) at the condition $\det \varphi_{11} \ne 0$ may be resolved relative to $\dot{X}_t$:

$\dot{X}_t = B_0 + B_1 X_t + \left(B_2 + \sum_{r=1}^{n_X} B_3^r X_{rt}\right) V_t, \quad (52)$

where $B_0, B_1, B_2, B_3^r$ depend upon the regressors $\varphi_{11}$. Using [9, 10, 11, 12], we get equations for the mathematical expectation $E_t^X$, the covariance matrix $K_t^X$, and the matrix of covariance functions $K^X(t_1, t_2)$:

$\dot{E}_t^X = B_0 + B_1 E_t^X, \quad E_{t_0}^X = E_0^X, \quad (53)$

$\dot{K}_t^X = B_1 K_t^X + K_t^X B_1^T + B_2 \nu_0 B_2^T + \sum_{r=1}^{n_X} \left(B_3^r \nu_0 B_2^T + B_2 \nu_0 B_3^{rT}\right) E_{rt}^X + \sum_{r,s=1}^{n_X} B_3^r \nu_0 B_3^{sT} \left(E_{rt}^X E_{st}^X + K_{rs,t}^X\right), \quad K_{t_0}^X = K_0^X, \quad (54)$

$\partial K^X(t_1, t_2)/\partial t_2 = K^X(t_1, t_2) B_1(t_2)^T, \quad K^X(t_1, t_1) = K_{t_1}^X, \quad t_2 > t_1. \quad (55)$

Here $K_t^X = \left[K_{rs,t}^X\right]$, $K^X(t_1, t_2) = \left[K_{rs}^X(t_1, t_2)\right]$. So, for MAM in nonstationary regimes we have Eqs. (53)–(55) (Proposal 6). In the stationary case, from Eqs. (54) and (55) we get the following finite set of equations for $E^X$ and $K^X$ (Proposal 7):

$B_0 + B_1 E^X = 0, \quad (56)$

$B_1 K^X + K^X B_1^T + B_2 \nu_0 B_2^T + \sum_{r=1}^{n_X} \left(B_3^r \nu_0 B_2^T + B_2 \nu_0 B_3^{rT}\right) E_r^X + \sum_{r,s=1}^{n_X} B_3^r \nu_0 B_3^{sT} \left(E_r^X E_s^X + K_{rs}^X\right) = 0, \quad (57)$

and the ordinary differential equation for $k^X(\tau)$, $\tau = t_2 - t_1$:

$dk^X(\tau)/d\tau = B_1 k^X(\tau), \quad k^X(0) = K^X. \quad (58)$
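A scalar sketch of Eqs. (52)–(54) with illustrative coefficients, using the stationary relations (56)–(57) as a consistency check:

```python
import numpy as np

# Scalar instance of Eqs. (52)-(54): dX = (B0 + B1*X) dt + (B2 + B3*X) dW,
# with illustrative coefficients; nu0 is the white-noise intensity.
B0, B1, B2, B3, nu0 = 0.5, -1.0, 0.3, 0.2, 1.0
dt = 1e-3

E, K = 0.0, 0.0
for _ in range(20000):
    dE = B0 + B1 * E                                   # Eq. (53)
    dK = (2 * B1 * K + B2 * nu0 * B2                   # additive part
          + 2 * B3 * nu0 * B2 * E                      # cross terms
          + B3 * nu0 * B3 * (E * E + K))               # multiplicative part
    E, K = E + dt * dE, K + dt * dK

# Stationary check, Eqs. (56)-(57)
E_st = -B0 / B1
K_st = -(B2**2 + 2 * B2 * B3 * E_st + B3**2 * E_st**2) * nu0 \
       / (2 * B1 + B3**2 * nu0)
print("E:", E, "vs", E_st, " K:", K, "vs", K_st)
```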

Applying the theory of linear Pugachev (conditionally m.s. optimal) filtering [9, 10, 11, 12] to the equations

$\dot{X}_t = A_0 + A_1 X_t + \left(A_2 + \sum_{r=1}^{n_X} A_3^r X_{rt}\right) V_t, \quad (59)$

$Z_t = \dot{Y}_t = B_0 + B_1 X_t + B_2 V_t, \quad (60)$

we get the following normal filtering equations:

$\dot{\hat{X}}_t = A_0 + A_1 \hat{X}_t + \beta_t\left[Z_t - \left(B_0 + B_1 \hat{X}_t\right)\right], \quad (61)$

$\beta_t = \left[R_t B_1^T + \left(A_2 + \sum_{r=1}^{n_X+n_Y} A_3^r E_r^X\right) \nu_0 B_2^T\right] \kappa_{11}^{-1}, \quad (62)$

$\dot{R}_t = A_1 R_t + R_t A_1^T - \left[R_t B_1^T + \left(A_2 + \sum_{r=1}^{n_X+n_Y} A_3^r E_r^X\right) \nu_0 B_2^T\right] \kappa_{11}^{-1} \left[B_1 R_t + B_2 \nu_0 \left(A_2^T + \sum_{r=1}^{n_X+n_Y} A_3^{rT} E_r^X\right)\right] + \left(A_2 + \sum_{r=1}^{n_X+n_Y} A_3^r E_r^X\right) \nu_0 \left(A_2^T + \sum_{r=1}^{n_X+n_Y} A_3^{rT} E_r^X\right) + \sum_{r,s=1}^{n_X+n_Y} A_3^r \nu_0 A_3^{sT} K_{rs}. \quad (63)$

Here

$\kappa_{11} = B_2 \nu_0 B_0^T, \quad \kappa_{22} = B_2 \nu_0 B_2^T. \quad (64)$

For calculating (62), we need to find the mathematical expectation $E_t^Q$ and the covariance matrix $K_t^Q$ of the combined vector $Q_t = \left[X_1 \ldots X_{n_X} \; Y_1 \ldots Y_{n_Y}\right]^T$, and the covariance matrix $R_t$ of the error $\tilde{X}_t = \hat{X}_t - X_t$, using the equations

$\dot{E}_t^Q = a^Q E_t^Q + a_0^Q, \quad (65)$

$\dot{K}_t^Q = a^Q K_t^Q + K_t^Q a^{QT} + c^Q \nu_0 c^{QT} + \sum_{r=1}^{n_X+n_Y} \left(c^Q \nu_0 c_r^{QT} + c_r^Q \nu_0 c^{QT}\right) E_r^Q + \sum_{r,s=1}^{n_X+n_Y} c_r^Q \nu_0 c_s^{QT} \left(E_r^Q E_s^Q + K_{rs}^Q\right), \quad (66)$

where

$a^Q = \begin{bmatrix} 0 & B_1 \\ 0 & A_1 \end{bmatrix}, \quad a_0^Q = \begin{bmatrix} B_0 \\ A_0 \end{bmatrix}, \quad c_r^Q = \begin{bmatrix} B_2 \\ A_{1r} \end{bmatrix} \quad (r = 0, 1, \ldots, n_X + n_Y). \quad (67)$

So, Eqs. (61)–(67) define the linear Pugachev filter for SDS USD with multiplicative noises reduced to the SDS (59) and (60) (Proposal 8).
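A small numerical sketch of the gain computation (62), assuming the innovation intensity $\kappa = B_2 \nu_0 B_2^T$ and hypothetical matrices and moments:

```python
import numpy as np

# Sketch of the Pugachev gain (62) for hypothetical 2x2 matrices,
# assuming the innovation intensity kappa = B2 nu0 B2^T.
R = np.array([[0.2, 0.05], [0.05, 0.1]])   # current error covariance
B1 = np.eye(2)
B2 = 0.1 * np.eye(2)
A2 = 0.3 * np.eye(2)
A3 = [np.zeros((2, 2)), 0.05 * np.eye(2)]  # multiplicative-noise matrices A3^r
E_X = np.array([1.0, -0.5])                # current mean values E_r^X
nu0 = np.eye(2)

A_eff = A2 + sum(E_X[r] * A3[r] for r in range(2))
kappa = B2 @ nu0 @ B2.T                    # assumed innovation intensity
beta = (R @ B1.T + A_eff @ nu0 @ B2.T) @ np.linalg.inv(kappa)
print(beta)
```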

Finally, following [9, 10, 11, 12], let us consider the linear Pugachev extrapolator for the reduced SDS USD. Taking into account the equations

$\dot{X}_t = A_0 + A_1 X_t + \left(A_2 + \sum_{r=1}^{n_X} A_3^{n_Y+r} X_{rt}\right) V_1, \quad (68)$

$Z_t = \dot{Y}_t = B_0 + B_1 X_t + B_2 V_2 \quad (69)$

($V_{1,2}$ are independent normal white noises with intensities $\nu_{1,2}$) and the corresponding result of Section 5, we come to the following equation:

$\dot{\hat{X}}_t = A_0(t+\Delta) + A_1(t+\Delta) \hat{X}_t + \beta_t\left[Z_t - B_0 - B_1 \varepsilon_t^{-1} \hat{X}_t + B_1 \varepsilon_t^{-1} h_t\right]. \quad (70)$

Here $\varepsilon_t = u(t+\Delta, t)$, where $u(s, t)$ is the fundamental solution of the equation $\partial u/\partial s = A_1(s) u$, and

$h_t = \int_t^{t+\Delta} u(t+\Delta, \tau) A_0(\tau)\, d\tau. \quad (71)$

The accuracy of the linear Pugachev extrapolator (70) is evaluated by integration of the following equation:

$\dot{R}_t = A_1(t+\Delta) R_t + R_t A_1(t+\Delta)^T - \beta_t B_2 \nu_2 B_2^T \beta_t^T + \left[A_2(t+\Delta) + \sum_{r=n_Y+1}^{n_X+n_Y} A_3^r(t+\Delta) E_r(t+\Delta)\right] \nu_1(t+\Delta) \left[A_2(t+\Delta)^T + \sum_{r=n_Y+1}^{n_X+n_Y} A_3^r(t+\Delta)^T E_r(t+\Delta)\right] + \sum_{r,s=n_Y+1}^{n_X+n_Y} A_3^r(t+\Delta) \nu_1(t+\Delta) A_3^s(t+\Delta)^T K_{rs}. \quad (72)$

Equations (70)–(72) define the normal linear Pugachev extrapolator for SDS USD reduced to SDS (Proposal 9).


6. Normal nonlinear filtering and extrapolation

Let us consider the SDS USD (2) reducible to an SDS and a fully observable measuring system described by the following equations:

$\dot{X}_t = a(X_t, Y_t, \alpha, t) + b(X_t, Y_t, \alpha, t) V_0, \quad (73)$

$Z_t = \dot{Y}_t = a_1(X_t, Y_t, t) + b_1(X_t, Y_t, t) V_0. \quad (74)$

Here $a, a_1, b, b_1$ are known functions of the mentioned variables; $\alpha$ is the vector of parameters in Eq. (73); $V_0$ is a normal white noise with intensity matrix $\nu_0 = \nu_0(t)$.

Using the theory of normal nonlinear suboptimal filtering [10, 11, 12], we get the following equations for $\hat{X}_t$ and $R_t$:

$d\hat{X}_t = f\left(\hat{X}_t, Y_t, R_t, t\right) dt + h\left(\hat{X}_t, Y_t, R_t, t\right)\left[dY_t - f_1\left(\hat{X}_t, Y_t, R_t, t\right) dt\right], \quad (75)$

$dR_t = \left[f_2\left(\hat{X}_t, Y_t, R_t, t\right) - h\left(\hat{X}_t, Y_t, R_t, t\right)\left(b_1 \nu_0 b_1^T\right)(Y_t, t)\, h\left(\hat{X}_t, Y_t, R_t, t\right)^T\right] dt + \sum_{r=1}^{n_y} \rho_r\left(\hat{X}_t, Y_t, R_t, t\right)\left[dY_r - f_{r1}\left(\hat{X}_t, Y_t, R_t, t\right) dt\right]. \quad (76)$

Here

$f\left(\hat{X}_t, Y_t, R_t, t\right) = \left[(2\pi)^{n_x} |R_t|\right]^{-1/2} \int a(Y_t, x, t) \exp\left[-\left(x - \hat{X}_t\right)^T R_t^{-1} \left(x - \hat{X}_t\right)/2\right] dx, \quad (77)$

$f_1\left(\hat{X}_t, Y_t, R_t, t\right) = \left[f_{r1}\left(\hat{X}_t, Y_t, R_t, t\right)\right] = \left[(2\pi)^{n_x} |R_t|\right]^{-1/2} \int a_1(Y_t, x, t) \exp\left[-\left(x - \hat{X}_t\right)^T R_t^{-1} \left(x - \hat{X}_t\right)/2\right] dx, \quad (78)$

$h\left(\hat{X}_t, Y_t, R_t, t\right) = \left\{\left[(2\pi)^{n_x} |R_t|\right]^{-1/2} \int \left[x\, a_1(Y_t, x, t)^T + \left(b \nu_0 b_1^T\right)(Y_t, x, t)\right] \exp\left[-\left(x - \hat{X}_t\right)^T R_t^{-1} \left(x - \hat{X}_t\right)/2\right] dx - \hat{X}_t f_1\left(\hat{X}_t, Y_t, R_t, t\right)^T\right\} \left(b_1 \nu_0 b_1^T\right)^{-1}(Y_t, t), \quad (79)$

$f_2\left(\hat{X}_t, Y_t, R_t, t\right) = \left[(2\pi)^{n_x} |R_t|\right]^{-1/2} \int \left[\left(x - \hat{X}_t\right) a(Y_t, x, t)^T + a(Y_t, x, t)\left(x^T - \hat{X}_t^T\right) + \left(b \nu_0 b^T\right)(Y_t, x, t)\right] \exp\left[-\left(x - \hat{X}_t\right)^T R_t^{-1} \left(x - \hat{X}_t\right)/2\right] dx, \quad (80)$

$\rho_r\left(\hat{X}_t, Y_t, R_t, t\right) = \left[(2\pi)^{n_x} |R_t|\right]^{-1/2} \int \left[\left(x - \hat{X}_t\right)\left(x^T - \hat{X}_t^T\right) a_r(Y_t, x, t) + \left(x - \hat{X}_t\right) b_r(Y_t, x, t)^T + b_r(Y_t, x, t)\left(x^T - \hat{X}_t^T\right)\right] \exp\left[-\left(x - \hat{X}_t\right)^T R_t^{-1} \left(x - \hat{X}_t\right)/2\right] dx \quad \left(r = \overline{1, n_y}\right), \quad (81)$

$\hat{X}_0 = E_N\left[X_0 \mid Y_0\right], \quad R_0 = E_N\left[\left(X_0 - \hat{X}_0\right)\left(X_0 - \hat{X}_0\right)^T \mid Y_0\right], \quad (82)$

where $a_r$ is the $r$th element of the row matrix $\left(a_1^T - \hat{a}_1^T\right)\left(b_1 \nu_0 b_1^T\right)^{-1}$; $b_{kr}$ is the element of the $k$th row and $r$th column of the matrix $b_1 \nu_0 b_1^T$; $b_r$ is the $r$th column of the matrix $b \nu_0 b_1^T \left(b_1 \nu_0 b_1^T\right)^{-1}$, $b_r = \left[b_{1r} \ldots b_{pr}\right]^T$ $\left(r = \overline{1, n_1}\right)$.

Proposal 10. If the vector SDS USD (2) is reducible to Eqs. (73) and (74), then Eqs. (75)–(81) at the conditions (82) define the normal filtering algorithm. The number of equations is equal to

$Q_{NAM} = n_x + \frac{n_x(n_x+1)}{2} = \frac{n_x(n_x+3)}{2}. \quad (83)$

Hence, if the function $a_1$ is linear in $X_t$ and the function $b_1$ does not depend on $X_t$, then all matrices $\rho_r = 0$ and Eq. (76) does not contain $\dot{Y}_t$ (Section 3).
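For a scalar system, the Gaussian integrals (77)–(80) can be approximated by Gauss-Hermite quadrature. A sketch with hypothetical drifts $a(x) = -x^3$ and $a_1(x) = \sin x$; the $\rho_r$ terms of Eq. (76) are omitted for brevity:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Scalar sketch of the normal-filter integrals (77)-(80) via Gauss-Hermite
# quadrature; the system below is hypothetical.
a = lambda x: -x**3          # state drift in Eq. (73)
b = 0.3                      # state diffusion in Eq. (73)
a1 = lambda x: np.sin(x)     # observation drift in Eq. (74)
b1, nu0 = 1.0, 0.5           # observation diffusion and noise intensity

nodes, w = hermegauss(30)
w = w / np.sqrt(2.0 * np.pi)

def EN(g, x_hat, R):
    # Normal expectation E_N[g(X)], X ~ N(x_hat, R)
    return np.sum(w * g(x_hat + np.sqrt(R) * nodes))

dt, x_hat, R = 1e-3, 0.5, 0.2
f = EN(a, x_hat, R)                                               # Eq. (77)
f1 = EN(a1, x_hat, R)                                             # Eq. (78)
h = (EN(lambda x: (x - x_hat) * a1(x), x_hat, R)
     + b * nu0 * b1) / (b1 * nu0 * b1)                            # Eq. (79)
f2 = EN(lambda x: 2.0 * (x - x_hat) * a(x), x_hat, R) + b * nu0 * b  # Eq. (80)

dY = f1 * dt                 # placeholder for one measured increment of Y_t
x_hat += f * dt + h * (dY - f1 * dt)                              # Eq. (75)
R += (f2 - h * (b1 * nu0 * b1) * h) * dt                          # Eq. (76)
print(x_hat, R)
```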

Analogously, following [12], we get the corresponding equations of the normal conditionally optimal (Pugachev) extrapolator for the reduced equations

$\dot{X}_t = a(X_t, Y_t, t) + b(X_t, t) V_1, \quad (84)$

$Z_t = \dot{Y}_t = a_1(X_t, Y_t, t) + b_1(X_t, Y_t, t) V_2, \quad (85)$

where $V_1$ and $V_2$ are normal independent white noises.


7. Examples

Let us consider the scalar system

$\varphi\left(\dot{X}_t, X_t\right) = \varphi_1\left(\dot{X}_t\right) + \varphi_2(X_t) + U_{1t} = 0, \quad (86)$

$\dot{U}_{1t} = \alpha_{10} + \alpha_{11} U_{1t} + \beta_1 V_{1t}. \quad (87)$

Here $X_t, \dot{X}_t$ are the state variable and its time derivative; $U_{1t}$ is a scalar stochastic disturbance; $V_{1t}$ is a scalar normal white noise with intensity $\nu_1(t)$; $\varphi_1$ and $\varphi_2$ are nonlinear functions; $\alpha_{10}, \alpha_{11}, \beta_1$ are constant parameters. After regression linearization of the nonlinear functions, we have

$\varphi_1 \approx \varphi_{10} + k_{\dot{X}}^{\varphi_1} \dot{X}_t^0, \quad \varphi_2 \approx \varphi_{20} + k_X^{\varphi_2} X_t^0. \quad (88)$

At the condition $k_{\dot{X}}^{\varphi_1} \ne 0$, we get from (86) and (88) the equations for the mathematical expectation $m_t^X = E X_t$ and the centered variable $X_t^0 = X_t - m_t^X$:

$\varphi_{10} + \varphi_{20} + m_t^{U_1} = 0, \quad (89)$

$\dot{X}_t^0 = a_t X_t^0 + b_t U_{1t}^0, \quad (90)$

where

$\varphi_{10} = \varphi_{10}\left(m_t^{\dot{X}}, D_t^{\dot{X}}\right), \quad \varphi_{20} = \varphi_{20}\left(m_t^X, D_t^X\right), \quad (91)$

$a_t = a\left(t, m_t^X, m_t^{\dot{X}}, D_t^X, D_t^{\dot{X}}, D_t^{U_1}, K_t^{XU_1}\right) = -k_X^{\varphi_2}\left(k_{\dot{X}}^{\varphi_1}\right)^{-1}, \quad b_t = -\left(k_{\dot{X}}^{\varphi_1}\right)^{-1}. \quad (92)$

Equations (87) and (90) for $U_{1t}^0 = U_{1t} - m_t^{U_1}$ ($m_t^{U_1} = E U_{1t}$) and $\bar{X}_t = \left[X_t \; U_{1t}\right]^T$ may be presented in the vector form

$\dot{m}_t^{\bar{X}} = A_0(t) + A(t) m_t^{\bar{X}}, \quad (93)$

$\dot{\bar{X}}_t^0 = A(t) \bar{X}_t^0 + B(t) V_{1t}, \quad (94)$

$A(t) = \begin{bmatrix} a_t & b_t \\ 0 & \alpha_{11} \end{bmatrix}, \quad B(t) = \begin{bmatrix} 0 \\ \beta_1 \end{bmatrix}. \quad (95)$

The covariance matrix

$K_t^{\bar{X}} = \begin{bmatrix} D_t^X & K_t^{XU_1} \\ K_t^{XU_1} & D_t^{U_1} \end{bmatrix} \quad (96)$

and the matrix of covariance functions

$K^{\bar{X}}(t_1, t_2) = \begin{bmatrix} K_{11}^{\bar{X}}(t_1, t_2) & K_{12}^{\bar{X}}(t_1, t_2) \\ K_{21}^{\bar{X}}(t_1, t_2) & K_{22}^{\bar{X}}(t_1, t_2) \end{bmatrix} \quad (97)$

satisfy the linear equations of correlation theory (Section 3):

$\dot{K}_t^{\bar{X}} = A(t) K_t^{\bar{X}} + K_t^{\bar{X}} A(t)^T + B(t) \nu_1(t) B(t)^T, \quad K_{t_0}^{\bar{X}} = K_0^{\bar{X}}, \quad (98)$

$\partial K^{\bar{X}}(t_1, t_2)/\partial t_2 = K^{\bar{X}}(t_1, t_2) A(t_2)^T, \quad K^{\bar{X}}(t_1, t_1) = K_{t_1}^{\bar{X}}. \quad (99)$

The vector Eq. (98) is equivalent to the following scalar equations:

$\dot{D}_t^X = 2 a_t D_t^X + 2 b_t K_t^{XU_1}, \quad \dot{D}_t^{U_1} = 2 \alpha_{11} D_t^{U_1} + \beta_1^2 \nu_1(t), \quad \dot{K}_t^{XU_1} = \left(a_t + \alpha_{11}\right) K_t^{XU_1} + b_t D_t^{U_1}. \quad (100)$

From Eq. (90), we calculate the variance

$D_t^{\dot{X}} = a_t^2 D_t^X + b_t^2 D_t^{U_1} + 2 a_t b_t K_t^{XU_1}. \quad (101)$

Thus, for the MAM algorithm we use Eqs. (89), (91), (98)–(101).
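A sketch of this MAM algorithm for the hypothetical choices $\varphi_1(\dot{X}) = \dot{X}$ and $\varphi_2(X) = \sin X$, whose linearization coefficients are taken from Table 1 below:

```python
import numpy as np

# MAM sketch for Eqs. (86)-(87) with hypothetical phi1(Xdot) = Xdot and
# phi2(X) = sin X, so that (Table 1) phi20 = exp(-D/2) sin m and
# k_X = exp(-D/2) cos m, while k_Xdot = 1.
alpha10, alpha11, beta1, nu1 = 0.0, -1.0, 0.5, 1.0
dt = 1e-3

mX, DX = 1.0, 0.1            # mean and variance of X_t
mU, DU, KXU = 0.0, 0.0, 0.0  # moments of U_1t and the cross covariance

for _ in range(20000):
    kX = np.exp(-DX / 2) * np.cos(mX)
    a, b = -kX, -1.0                                   # Eq. (92), k_Xdot = 1
    mXdot = -(np.exp(-DX / 2) * np.sin(mX) + mU)       # Eq. (89)
    mX += mXdot * dt
    mU += (alpha10 + alpha11 * mU) * dt
    # Second moments, Eq. (100)
    DX += (2 * a * DX + 2 * b * KXU) * dt
    DU += (2 * alpha11 * DU + beta1**2 * nu1) * dt
    KXU += ((a + alpha11) * KXU + b * DU) * dt

DXdot = a**2 * DX + b**2 * DU + 2 * a * b * KXU        # Eq. (101)
print(mX, DX, DXdot)
```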

Let the system (86) and (87) be observable, so that

$Z_t = \dot{Y}_t = X_t + V_{2t}. \quad (102)$

Then for the Kalman-Bucy filter equations (Proposal 4) we have

$\dot{\hat{X}}_t = A_t \hat{X}_t + \beta_t\left(Z_t - \hat{X}_t\right), \quad \beta_t = R_t \nu_2(t)^{-1} \quad \left(\det \nu_2(t) \ne 0\right), \quad \dot{R}_t = 2 A_t R_t + \nu_1 - \nu_2 \beta_t^2. \quad (103)$

The Kalman-Bucy extrapolator equations are defined by Proposal 5 at $u(t, \tau) = e^{a(t-\tau)}$.

In Table 1, the coefficients of statistical linearization for typical nonlinear functions are given.

$\varphi \to \varphi_0$

$\dot{Y}^3 \;\to\; m\left(m^2 + 3D\right)$
$\sin \omega \dot{Y} \;\to\; \exp\left(-\omega^2 D/2\right) \sin \omega m$
$\cos \omega \dot{Y} \;\to\; \exp\left(-\omega^2 D/2\right) \cos \omega m$
$\dot{Y} \exp\left(\alpha \dot{Y}\right) \;\to\; (m + \alpha D) \exp\left(\alpha m + \alpha^2 D/2\right)$
$\dot{Y} \sin \omega \dot{Y} \;\to\; \left(m \sin \omega m + \omega D \cos \omega m\right) \exp\left(-\omega^2 D/2\right)$
$\dot{Y} \cos \omega \dot{Y} \;\to\; \left(m \cos \omega m - \omega D \sin \omega m\right) \exp\left(-\omega^2 D/2\right)$
$\mathrm{sgn}\, \dot{Y} \;\to\; 2\Phi\left(m/\sqrt{D}\right)$
$\dot{Y}^2\, \mathrm{sgn}\, \dot{Y} \;\to\; \left(m^2 + D\right) 2\Phi\left(m/\sqrt{D}\right) + m \sqrt{2D/\pi}\, \exp\left(-m^2/2D\right)$
$l\dot{Y}/d \;\left(|\dot{Y}| \le d\right),\; l\, \mathrm{sgn}\, \dot{Y} \;\left(|\dot{Y}| > d\right) \;\to\; l\left[(1 + m_1)\Phi\left(\frac{1 + m_1}{\sigma_1}\right) - (1 - m_1)\Phi\left(\frac{1 - m_1}{\sigma_1}\right)\right] + \frac{l\sigma_1}{\sqrt{2\pi}}\left[\exp\left(-\frac{(1 + m_1)^2}{2\sigma_1^2}\right) - \exp\left(-\frac{(1 - m_1)^2}{2\sigma_1^2}\right)\right]$
$\gamma(\dot{Y} + d) \;\left(\dot{Y} < -d\right),\; 0 \;\left(|\dot{Y}| \le d\right),\; \gamma(\dot{Y} - d) \;\left(\dot{Y} > d\right) \;\to\; \gamma d\left\{(1 + m_1)\left[\frac{1}{2} - \Phi\left(\frac{1 + m_1}{\sigma_1}\right)\right] - (1 - m_1)\left[\frac{1}{2} - \Phi\left(\frac{1 - m_1}{\sigma_1}\right)\right] + \frac{\sigma_1}{\sqrt{2\pi}}\left[\exp\left(-\frac{(1 - m_1)^2}{2\sigma_1^2}\right) - \exp\left(-\frac{(1 + m_1)^2}{2\sigma_1^2}\right)\right]\right\}$
$-l \;\left(\dot{Y} < -d\right),\; 0 \;\left(|\dot{Y}| \le d\right),\; l \;\left(\dot{Y} > d\right) \;\to\; l\left[\Phi\left(\frac{1 + m_1}{\sigma_1}\right) - \Phi\left(\frac{1 - m_1}{\sigma_1}\right)\right]$
$\dot{Y}_1 \dot{Y}_2 \;\to\; m_1 m_2 + K_{12}$
$\dot{Y}_1^2 \dot{Y}_2 \;\to\; \left(m_1^2 + K_{11}\right) m_2 + 2 m_1 K_{12}$
$\sin\left(\omega_1 \dot{Y}_1 + \omega_2 \dot{Y}_2\right) \;\to\; \exp\left[-\left(\omega_1^2 K_{11} + 2\omega_1\omega_2 K_{12} + \omega_2^2 K_{22}\right)/2\right] \sin\left(\omega_1 m_1 + \omega_2 m_2\right)$
$\mathrm{sgn}\left(\dot{Y}_1 + \dot{Y}_2\right) \;\to\; 2\Phi\left(\zeta_{1,2}\right), \quad \zeta_{1,2} = \frac{m_1 + m_2}{\sqrt{D}}, \quad D = K_{11} + 2K_{12} + K_{22}$

Table 1.

Coefficients of statistical linearization for typical nonlinear functions [12, 13, 14]. Here $m = m^{\dot{Y}}$, $D = D^{\dot{Y}}$, $\sigma = \sqrt{D}$, and $\Phi$ is the Laplace function; in the piecewise entries $m_1 = m/d$, $\sigma_1 = \sigma/d$, while in the two-variable entries $m_i = m^{\dot{Y}_i}$ and $K_{ij}$ are the covariances of $\dot{Y}_1, \dot{Y}_2$.
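A quick Monte Carlo check of one Table 1 entry, using the Laplace function $\Phi(x) = \frac{1}{2}\,\mathrm{erf}\left(x/\sqrt{2}\right)$:

```python
import numpy as np
from math import erf, sqrt

# Monte Carlo check of the Table 1 entry E[sgn(Ydot)] = 2*Phi(m/sqrt(D)),
# with the Laplace function Phi(x) = 0.5*erf(x/sqrt(2)),
# so 2*Phi(m/sqrt(D)) = erf(m/sqrt(2*D)).
rng = np.random.default_rng(1)
m, D = 0.7, 2.0
y = rng.normal(m, sqrt(D), 1_000_000)
print("Monte Carlo:", np.mean(np.sign(y)))
print("Table 1:   ", erf(m / sqrt(2.0 * D)))
```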

Let us consider the normal scalar system

$\Phi\left(\dot{X}_t\right) + a_t X_t + u_t = 0. \quad (104)$

Here the random function admits the Pugachev normalization

$\Phi\left(\dot{X}_t\right) \approx \Phi_0 + k^\Phi \dot{X}_t^0 + \Delta\Phi_t^0, \quad (105)$

where $\Delta\Phi_t^0$ is a normal StP satisfying the equation of the forming filter

$\dot{\Delta\Phi}_t^0 = a_t^{\Delta\Phi} \Delta\Phi_t^0 + b_t^{\Delta\Phi} V_t. \quad (106)$

Note that the functions $\Phi_0$ and $k^\Phi$ depend on $E_t^{\dot{X}}$ and $D_t^{\dot{X}}$. Equations (104) and (105) decompose into two equations. The first equation, at the condition $k^{\Phi_0} \ne 0$, is as follows:

$\Phi_0 + a_t E_t^X + u_t = 0, \quad \Phi_0 = k^{\Phi_0} E_t^{\dot{X}}. \quad (107)$

The second equation, $k^\Phi \dot{X}_t^0 + \Delta\Phi_t^0 + a_t X_t^0 = 0$, at the condition $k^\Phi \ne 0$ may be presented as

$\dot{X}_t^0 = -a_t\left(k^\Phi\right)^{-1} X_t^0 - \left(k^\Phi\right)^{-1} \Delta\Phi_t^0. \quad (108)$

Equations (106) and (108) for $Z_t^0 = \left[X_t^0 \; \Delta\Phi_t^0\right]^T$ lead to the following vector equation for the covariance matrix:

$\dot{K}_t^Z = A K_t^Z + K_t^Z A^T + B \nu_V B^T, \quad (109)$

where $A = \begin{bmatrix} -a_t\left(k^\Phi\right)^{-1} & -\left(k^\Phi\right)^{-1} \\ 0 & a_t^{\Delta\Phi} \end{bmatrix}$, $B = \begin{bmatrix} 0 \\ b_t^{\Delta\Phi} \end{bmatrix}$. Equations (107) and (109) give the following final relations:

$E_t^{\dot{X}} = -\left(a_t E_t^X + u_t\right)\left(k^{\Phi_0}\right)^{-1}, \quad (110)$

$D_t^{\dot{X}} = a_t^2\left(k^\Phi\right)^{-2} D_t^X + \left(k^\Phi\right)^{-2} D_t^{\Delta\Phi} + 2 a_t\left(k^\Phi\right)^{-2} K_t^{X\Delta\Phi}, \quad \dot{D}_t^X = -2 a_t\left(k^\Phi\right)^{-1} D_t^X - 2\left(k^\Phi\right)^{-1} K_t^{X\Delta\Phi}, \quad \dot{D}_t^{\Delta\Phi} = 2 a_t^{\Delta\Phi} D_t^{\Delta\Phi} + \left(b_t^{\Delta\Phi}\right)^2 \nu_V, \quad \dot{K}_t^{X\Delta\Phi} = \left(a_t^{\Delta\Phi} - a_t\left(k^\Phi\right)^{-1}\right) K_t^{X\Delta\Phi} - \left(k^\Phi\right)^{-1} D_t^{\Delta\Phi}. \quad (111)$
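A numerical sketch of Eqs. (110)–(111) for hypothetical constant parameters:

```python
import numpy as np

# Sketch of Eqs. (110)-(111) for hypothetical constant parameters.
a_t, u_t = 1.0, 0.5
kPhi0, kPhi = 2.0, 2.0            # regression coefficients of Phi
aDP, bDP, nuV = -3.0, 1.0, 1.0    # forming filter (106) of DeltaPhi
dt = 1e-3

EX = 0.0
DX, DDP, KXD = 0.1, 0.0, 0.0      # D^X, D^DeltaPhi, K^{X DeltaPhi}
for _ in range(20000):
    EX += -(a_t * EX + u_t) / kPhi0 * dt               # Eq. (110)
    # Eq. (111): covariance block of Z = [X^0, DeltaPhi^0], cf. Eq. (109)
    DX += (-2 * a_t / kPhi * DX - 2 / kPhi * KXD) * dt
    DDP += (2 * aDP * DDP + bDP**2 * nuV) * dt
    KXD += ((aDP - a_t / kPhi) * KXD - DDP / kPhi) * dt

DXdot = (a_t / kPhi)**2 * DX + DDP / kPhi**2 + 2 * a_t / kPhi**2 * KXD
print(EX, DX, DXdot)
```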

8. Conclusion

Models of various types of SDS USD arise in problems of analytical modeling and estimation (filtering, extrapolation, etc.) for control stochastic systems when it is possible to neglect higher-order time derivatives. Linear and nonlinear methodological and algorithmic support of analytical modeling, filtering, and extrapolation for SDS USD is developed. The methodology is based on the reduction of SDS USD to SDS by means of linear and nonlinear regression models. Special attention is paid to SDS USD with multiplicative (parametric) noises. Examples illustrating the methodology are presented. The described results may be generalized to systems with stochastically unsolved derivatives and to stochastic integrodifferential systems reducible to differential ones.


Acknowledgments

The author is grateful to the experts for their appropriate and constructive suggestions to improve this chapter. The research is supported by the Russian Academy of Sciences (Project AAAA-A19-119001990037-5). The author is also much obliged to Mrs. Irina Sinitsyna and Mrs. Helen Fedotova for translation and manuscript preparation.


© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
