Open access peer-reviewed chapter

Fault Detection and Isolation

Written By

Rajamani Doraiswami and Lahouari Cheded

Submitted: 18 May 2016 Reviewed: 14 February 2017 Published: 31 May 2017

DOI: 10.5772/67870

From the Edited Volume

Fault Diagnosis and Detection

Edited by Mustafa Demetgul and Muhammet Ünal


Abstract

Fault diagnosis of a class of linear multiple‐input and multiple‐output (MIMO) systems is developed here. An emulator‐based scheme is proposed to detect and isolate faults in a system formed by interconnected subsystems. Emulators, which are hardware or software devices, are connected to the input and measurement outputs in cascade with the subsystems whose faults are to be diagnosed. The role of an emulator is to induce variations in the cascade combination of the nominal fault‐free subsystems so as to mimic the actual perturbations that may occur in the subsystem during the offline identification phase. The emulator‐generated data are employed in the reliable identification of the nominal system, the associated Kalman filter, and a map that relates the emulator parameters to the feature vector. In the operational stage, the Kalman filter residual is used to detect a fault in the system; the emulator parameter that has varied is estimated, and, using the emulator‐feature vector map, the faulty subsystem is isolated. The main contributions of this work are the accurate and reliable identification of the system, the fault diagnosis of multivariable systems using the feature vector‐emulator map, and the establishment of the key properties of the Kalman filter for fault detection. The proposed scheme was successfully evaluated on a number of simulated as well as physical systems.

Keywords

  • fault detection
  • fault isolation
  • fault diagnosis
  • Kalman filter
  • emulators
  • identification
  • Bayes decision theory

1. Introduction

Fault detection and isolation (FDI) of physical systems, especially mission‐critical systems such as nuclear reactors, aircraft, automotive systems, spacecraft, autonomous vehicles, and fast rail transportation, is becoming increasingly important in recent times thanks mainly to advances in sensor, computing, and communication technologies. It still poses a challenge in view of the stringent and conflicting requirements: a high probability of correct detection and isolation, a low false‐alarm probability, and a timely decision on the fault status.

The identification of the system model is crucial to the performance of the fault diagnosis scheme. The more accurate the identified model, the higher the probability of correct diagnosis and the lower the false‐alarm probability. The reliability and accuracy of the identification hinge on ensuring that the dynamics of the system are captured completely, so that what is left over is the information‐less zero‐mean white noise process. Since the Kalman filter residual is a zero‐mean white noise process if and only if there is no mismatch between the identified model and the model of the system, the identification scheme should minimize the residual of the Kalman filter instead of the equation error, which, in general, is a colored noise [1]. The widely popular, consistent, and efficient scheme that meets the above‐stated requirement is the prediction error method (PEM) [2]. The PEM identifies the system by minimizing the residual of the Kalman filter.

A physical system is subject to perturbations resulting from variations of its parameters and from the effects of nonlinearities, causing deviations in the neighborhood of the nominal operating point. A model identified at a nominal operating point will not capture the static and the dynamic behavior of the perturbed system. To overcome this, an emulator, which is a hardware or a software device, is connected to either an accessible input or an accessible output in cascade with a subsystem to mimic its operating scenarios [3–5]. The powerful concept of emulators, which was employed to mimic the likely operating scenarios of a single‐input and single‐output (SISO) system, is extended here to multiple‐input and multiple‐output (MIMO) and multiple‐input and single‐output (MISO) systems. The system is identified and the feature vector‐emulator map is estimated from the emulator‐generated data covering all likely operating scenarios, including the normal and the faulty ones, similar in spirit to the data employed in training a neural network [6]. The identified nominal model, an optimal nominal model, is robust to model perturbations in the neighborhood of the nominal operating point. It is worth noting that the conventional scheme uses only the input‐output data from the system at the nominal operating point.

There are essentially three approaches to the failure detection and isolation problem: the non‐parametric approach, the parametric approach, and the combined approach. The non‐parametric approach is based on analyzing a residual. The residual is defined as a signal, which is ideally non‐zero in a statistical sense when a failure is present, and zero otherwise. The residual may be generated using Kalman filters, observers, unknown‐input observers, other forms of detection filters, and parity equations [7–12]. In view of the following key properties, the Kalman filter is deemed the most suitable for both fault detection and fault isolation [1]:

  1. Model matching: The residual is a zero‐mean white noise process if and only if there is no mismatch between the actual model of the system and its identified model embodied in the Kalman filter; moreover, its variance is minimum.

  2. Optimal estimation: The estimate is optimal in the sense that it is the best estimate that can be obtained by any estimator in the class of all estimators that are constrained by the same assumptions.

  3. Robustness: Thanks to the feedback (closed‐loop) configuration of the Kalman filter with residual feedback, the Kalman filter provides the highest robustness against the effect of disturbance and model variations.

  4. Model mismatch: If there is a model mismatch, the residual will not be a zero‐mean white noise process; it will contain an additive term, termed the fault‐indicative term. The fault‐indicative term is affine in the deviation in the linear regression or the transfer function model.

The feature vector‐emulator map, relating the deviation of the feature vector to the variations of the emulator parameters, is used for fault isolation once a fault is detected. The influence vector, which is the partial derivative of the feature vector with respect to an emulator parameter, plays a crucial role in pinpointing the faulty subsystem and tracking its parameter variation.

The main contributions here are the development of emulator‐based system identification, the estimation of the feature vector‐emulator map, and its application to performance monitoring and fault diagnosis of multivariable systems. The key properties of the Kalman filter, including model matching, whitening of the equation error, and the residual expression for the model‐mismatch case, are established for MIMO, MISO, and SISO systems.

The chapter is organized as follows. In Section 2, the mathematical model of the multiple‐input and multiple‐output system in state‐space, frequency‐domain, and linear regression form is developed. The multiple‐input and single‐output and the single‐input, single‐output models are derived. Modeling of faults is also given. In Section 3, the concept of emulators, the generation of emulator‐perturbed data and its role in the identification of the system, and the estimation of the feature vector‐emulator map for fault isolation are developed. In Section 4, the identification of the system and the associated Kalman filter using the prediction error method is presented. The feature vector‐emulator map is estimated using the expression of the Kalman filter residual in the model‐mismatch case. In Section 5, the model of the Kalman filter, the residual model, and the key properties of this filter are given. The key properties of the residual are established, including whitening of the equation error and expressions for the residual in the model‐mismatch case. In Section 6, the Bayesian approach to fault diagnosis is explained. Finally, in Sections 7 and 8, the successful evaluation of the proposed scheme on simulated and physical systems, respectively, is given.


2. Mathematical model of the system

The MIMO state‐space model of the system denoted (A,B,C) is given by

$$\begin{aligned} x(k+1) &= A\,x(k) + B\,r(k) + E_w w(k)\\ y(k) &= C\,x(k) + v(k) \end{aligned} \tag{1}$$

where $x(k)=[x_1(k)\; x_2(k)\; \cdots\; x_n(k)]^T$, $y(k)=[y_1(k)\; y_2(k)\; \cdots\; y_q(k)]^T$, $r(k)=[r_1(k)\; r_2(k)\; \cdots\; r_p(k)]^T$, $w(k)$, and $v(k)$ are, respectively, the $n\times 1$ state vector, the $q\times 1$ output, the $p\times 1$ input to the system, the $p\times 1$ disturbance, and the $q\times 1$ measurement noise; $A$, $B$, $C$, and $E_w$ are the $n\times n$ state transition, $n\times p$ input, $q\times n$ output, and $n\times p$ input disturbance matrices; $A$ and $C$ are block diagonal:

$$A=\begin{bmatrix} A_1 & 0 & \cdots & 0\\ 0 & A_2 & \cdots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & A_q \end{bmatrix};\quad B=\begin{bmatrix} B_1\\ B_2\\ \vdots\\ B_q \end{bmatrix};\quad E_w=\begin{bmatrix} E_{w1}\\ E_{w2}\\ \vdots\\ E_{wq} \end{bmatrix};\quad C=\begin{bmatrix} C_1 & 0 & \cdots & 0\\ 0 & C_2 & \cdots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & C_q \end{bmatrix}$$

where $A_j$, $B_j$, $E_{wj}$, and $C_j$ are, respectively, $n_j\times n_j$, $n_j\times p$, $n_j\times p$, and $1\times n_j$ matrices. The output of the system is corrupted by the disturbance $w(k)$ and the measurement noise $v(k)$; $G(z)=C(zI-A)^{-1}B=D^{-1}(z)N(z)$; $I$ is an identity matrix; $D(z)=|zI-A|=1+\sum_{\ell=1}^{n} a_\ell z^{-\ell}$; $B_j=[B_{j1}\; B_{j2}\; \cdots\; B_{jp}]$; $E_{wj}=[E_{wj1}\; E_{wj2}\; \cdots\; E_{wjp}]$.

We assume that the system is controllable and observable, that is, (A,C) is observable, (A,B) is controllable, implying that all the states may be estimated from the input and the output data, and the input affects all the states. The disturbance w(k) and the measurement noise v(k) are assumed zero‐mean white noise processes. The covariance of w(k) and v(k) are

$$E[ww^T]=Q \quad\text{and}\quad E[vv^T]=R \tag{2}$$

where Q and R are, respectively, positive definite and positive semi‐definite matrices, $Q>0$ and $R\geq 0$. The covariances Q and R are not known a priori.
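As a concrete illustration of the state‐space model (1) and the noise covariances (2), the following sketch simulates a small two‐state system; the matrices, covariances, and input below are illustrative choices of ours, not values from the chapter.

```python
# Hedged sketch: simulating x(k+1) = A x(k) + B r(k) + Ew w(k), y(k) = C x(k) + v(k)
# with w(k) ~ N(0, Q) and v(k) ~ N(0, R).  All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

A  = np.array([[0.5, 0.1],
               [0.0, 0.3]])        # n x n state transition (stable poles 0.5, 0.3)
B  = np.array([[1.0], [0.5]])      # n x p input matrix
C  = np.array([[1.0, 0.0]])        # q x n output matrix
Ew = np.array([[1.0], [0.0]])      # n x p disturbance matrix
Q  = np.array([[0.01]])            # disturbance covariance, Q > 0
R  = np.array([[0.01]])            # measurement-noise covariance, R >= 0

N = 200
x = np.zeros((2, 1))
y = np.zeros(N)
r = rng.standard_normal((N, 1, 1))  # persistently exciting input

for k in range(N):
    w = rng.multivariate_normal(np.zeros(1), Q).reshape(1, 1)
    v = rng.multivariate_normal(np.zeros(1), R).reshape(1, 1)
    y[k] = (C @ x + v).item()       # y(k) = C x(k) + v(k)
    x = A @ x + B @ r[k] + Ew @ w   # x(k+1) = A x(k) + B r(k) + Ew w(k)
```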

The MIMO model in the frequency domain is

$$y(z)=G(z)\,r(z)+\vartheta(z) \tag{3}$$

where $G(z)$ is the $q\times p$ matrix transfer function, $N(z)$ is the $q\times p$ numerator matrix, and $\vartheta(z)$ is the $q\times 1$ vector representing the effect of the disturbance $w(k)$ and the measurement noise $v(k)$ on the output $y(z)$:

$$\vartheta(z)=C(zI-A)^{-1}E_w w(z)+v(z) \tag{4}$$

2.1. Single‐input single‐output pairing

A single‐input single‐output (SISO) model is derived from the state‐space model relating the input $r_i(z)$ and its associated output, termed $y_{ji}(z)$, which is the same as the output $y_j(z)$ when the input is $r_i(z)$ and the rest of the inputs are zero, $r_\ell(z)=0$ for $\ell\neq i$:

$$y_{ji}(z)=G_{ji}(z)\,r_i(z)+\vartheta_{ji}(z) \tag{5}$$

where $G_{ji}(z)=C_j(zI-A_j)^{-1}B_{ji}=D_j^{-1}(z)N_{ji}(z)$ and $\vartheta_{ji}(z)=C_j(zI-A_j)^{-1}E_{wji}w_i(z)$. The transfer function $G_{ji}(z)$ may in general be a cascade combination of subsystems $\{G_{ji\ell}(z)\}$:

$$G_{ji}(z)=\prod_{\ell} G_{ji\ell}(z) \tag{6}$$

The subsystems $G_{ji\ell}(z)$ may, for example, be transfer functions of a controller, an actuator, a plant, or a sensor associated with a position control system, a process control system, a magnetic levitation system, or other systems [4].

Expressing the frequency‐domain model (5) in a linear regression form yields

$$y_{ji}(k)=\psi_{ji}^T(k)\,\theta_{ji}+\upsilon_{ji}(k) \tag{7}$$

where $\upsilon_{ji}(z)=D_j(z)\vartheta_{ji}(z)$; $\psi_{ji}^T(k)$ is the $1\times 2n_j$ regression vector formed of the regression vector $\psi_{yji}^T(k)$ associated with $y_{ji}(k)$ and the regression vector $\psi_{ri}^T(k)$ associated with the input $r_i(k)$:

$$\psi_{ji}^T(k)=[\psi_{yji}^T(k)\;\;\psi_{ri}^T(k)] \tag{8}$$

$\psi_{yji}^T(k)=[y_{ji}(k-1)\; y_{ji}(k-2)\; \cdots\; y_{ji}(k-n_j)]$; $\psi_{ri}^T(k)=[r_i(k-1)\; r_i(k-2)\; \cdots\; r_i(k-n_j)]$; $\theta_{ji}$ is the $2n_j\times 1$ feature vector formed of the $n_j$ coefficients of the denominator polynomial $D_j(z)$ and the $n_j$ coefficients of the numerator polynomial $N_{ji}(z)$:

$$\theta_{ji}=[\theta_{yj}^T\;\;\theta_{rji}^T]^T \tag{9}$$

Remarks: In the operational stage, we may not have access to the output $y_{ji}(k)$, that is, the part of $y_j(k)$ generated by the input $r_i(k)$ alone when the rest of the inputs are set to zero. It is estimated during the identification phase of the multi‐input and single‐output model relating the accessible output $y_j(k)$ to all the inputs $r(k)$.
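The regression form (7)–(9) can be made concrete with a small sketch. For an illustrative second‐order SISO model (our own numbers, not the chapter's), the regression vector stacks past outputs and past inputs, and the feature vector of denominator and numerator coefficients is recovered by least squares:

```python
# Hedged illustration of Eq. (7): for
#   y(k) = -a1*y(k-1) - a2*y(k-2) + b1*r(k-1) + b2*r(k-2) + noise,
# psi(k) = [y(k-1) y(k-2) r(k-1) r(k-2)]^T and theta = [-a1 -a2 b1 b2]^T.
import numpy as np

rng = np.random.default_rng(1)
a1, a2, b1, b2 = -0.8, 0.15, 1.0, 0.5      # true parameters (poles 0.5, 0.3)
theta_true = np.array([-a1, -a2, b1, b2])  # feature vector

N = 500
r = rng.standard_normal(N)                 # persistently exciting input
y = np.zeros(N)
for k in range(2, N):
    y[k] = (-a1 * y[k-1] - a2 * y[k-2] + b1 * r[k-1] + b2 * r[k-2]
            + 0.01 * rng.standard_normal())

# Stack the regression vectors psi^T(k) row by row, as in Eq. (8).
Psi = np.column_stack([y[1:-1], y[:-2], r[1:-1], r[:-2]])
theta_hat = np.linalg.lstsq(Psi, y[2:], rcond=None)[0]
```

With a persistently exciting input the least‐squares fit recovers the feature vector closely, which is the sense in which the feature vector parameterizes the SISO pairing.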

2.2. Multi‐input and single‐output pairing

Using Eq. (5), the output $y_j(z)$ of the MISO system due to all the inputs $r(k)$ is

$$y_j(z)=\sum_{i=1}^{p} y_{ji}(z)=G_j(z)\,r(z)+\vartheta_j(z) \tag{10}$$

where $G_j(z)=D_j^{-1}(z)N_j(z)=[G_{j1}(z)\; G_{j2}(z)\; \cdots\; G_{jp}(z)]$ and $\upsilon_j(k)=\sum_{i=1}^{p}\upsilon_{ji}(k)$.

Expressing the frequency‐domain model (10) in a linear regression form yields

$$y_j(k)=\psi_j^T(k)\,\theta_j+\upsilon_j(k), \qquad j=1,2,3,\ldots,q \tag{11}$$

where $\psi_j^T(k)$ is the $1\times(n_j+n_j p)$ regression vector formed of the regression vector $\psi_{yj}^T(k)$ associated with $y_j(k)$ and the regression vector $\psi_r^T(k)$ associated with $r(k)$:

$$\psi_j^T(k)=[\psi_{yj}^T(k)\;\;\psi_r^T(k)] \tag{12}$$

$\psi_{yj}^T(k)=[y_j(k-1)\; y_j(k-2)\; \cdots\; y_j(k-n_j)]$; $\psi_r^T(k)=[\psi_{r1}^T(k)\; \psi_{r2}^T(k)\; \cdots\; \psi_{rp}^T(k)]$; $\theta_j$ is the $(n_j+n_j p)\times 1$ feature vector formed of the $n_j$ coefficients of the denominator polynomial $D_j(z)$ and the $n_j p$ coefficients of the numerator polynomial $N_j(z)$:

$$\theta_j=[\theta_{yj}^T\;\;\theta_{rj}^T]^T \tag{13}$$

where $\theta_{yj}=[a_{j1}\; a_{j2}\; \cdots\; a_{jn_j}]^T$ and $\theta_{rj}=[\theta_{rj1}^T\; \theta_{rj2}^T\; \cdots\; \theta_{rjp}^T]^T$.

2.3. Multi‐input and multiple‐output system

Extending the time‐domain expression to the MIMO model (3), we get

$$y(k)=\psi^T(k)\,\theta+\upsilon(k) \tag{14}$$

where $\psi^T(k)$ is the $q\times(n+npq)$ regression matrix formed of the regression vectors $\{\psi_{ji}^T(k)\}$, and $\theta$ is the $(n+npq)\times 1$ feature vector formed of $\theta_j$, $j=1,2,\ldots,q$, given as follows:

$$\psi^T(k)=\begin{bmatrix} \psi_{y1}^T(k) & \psi_r^T(k) & 0 & 0 & \cdots & 0\\ \psi_{y2}^T(k) & 0 & \psi_r^T(k) & 0 & \cdots & 0\\ \psi_{y3}^T(k) & 0 & 0 & \psi_r^T(k) & \cdots & 0\\ \vdots & & & & \ddots & \vdots\\ \psi_{yq}^T(k) & 0 & 0 & 0 & \cdots & \psi_r^T(k) \end{bmatrix};\qquad \theta=\begin{bmatrix} \theta_y\\ \theta_{r1}\\ \theta_{r2}\\ \vdots\\ \theta_{rp} \end{bmatrix} \tag{15}$$

The regression model (14) is the time‐domain version of the frequency‐domain model (3). Expressing the time‐domain model (14) in the frequency domain, we get

$$y(z)=\psi^T(z)\,\theta+\upsilon(z) \tag{16}$$

2.4. Interconnected system

The system is an interconnection of subsystems such as the plant, the actuator, the sensors, and the controllers, shown in Figure 1. Subfigure A at the top shows that the jth output of the system, $y_j=\sum_{i=1}^{p} y_{ji}$, is given by Eq. (10), where $y_{ji}(z)$, given in Eq. (5), is the output generated by the input $r_i$ acting alone.

Figure 1.

Pairing of the inputs and an output and the subsystem in the path ji.

Subfigure B at the bottom shows that the transfer function $G_{ji}(z)$ in the path from the input $r_i$ to the output $y_{ji}$ is formed of subsystems $\{G_{ji\ell}(z)\}$. The subsystem $G_{ji\ell}(z)$ is driven by the input $u_{ji\ell}(z)$ and its output is corrupted by the disturbance $w_{ji\ell}(z)$. The input and the output of $G_{ji}(z)$ are $r_i$ and $y_{ji}$, respectively; $v_{ji}$ is the measurement noise; $\vartheta_{ji}$, given in Eq. (5), is the combined effect of the disturbances $\{w_{ji\ell}\}$ and $v_{ji}$ on the output $y_{ji}(z)$.

2.5. Modeling of faults

There are two types of fault models, namely the additive and the multiplicative (or parametric) types. In the additive type, a fault is modeled as an additive exogenous input to the system, whereas in the multiplicative type, a fault is modeled as a change in the parameters that completely characterize the faulty behavior of the subsystems. Although the multiplicative and additive perturbation models are equivalent, the multiplicative‐type perturbation model is preferable: the multiplicative perturbation model of the cascade combination of subsystems can model the particular perturbation in any one of the subsystems under consideration.


3. Emulators

The emulator‐based identification scheme is motivated by the model‐free artificial neural network approach, which captures the static and the dynamic behavior by presenting the neural network with data covering likely operating scenarios. An identified model at a given operating point characterizes the behavior of the system only in the neighborhood of that point. In practice, however, the system model may be perturbed because of variations in the parameters of the system. To overcome this problem, the system model is identified by performing a number of emulator parameter‐perturbed experiments, as proposed in [4, 5]. Each experiment consists of perturbing one or more emulator parameters. A linear model, termed the optimal model, is identified as a best fit to the input‐output data from the set of emulated perturbations. The optimal model thus obtained characterizes the behavior of the system over wider operating regions (in the neighborhood of the operating point), whereas the conventional model characterizes the behavior merely at the nominal operating point (i.e., the conventional approach assumes that the model of the system remains unperturbed at every operating point). The optimal model is more robust, that is, the identification errors resulting from variations in the emulator parameters are significantly lower compared to those of the conventional model based on performing a single experiment (i.e., without using emulators).

During the system identification phase, a number of experiments are performed by (a) not perturbing the emulator parameters and (b) perturbing the emulator parameters one at a time, two at a time simultaneously, three at a time, and so on, until all of them are perturbed together. The input‐output data collected from all experiments are termed emulator‐generated data.

  • Nominal system model and the Kalman filter: The emulator‐generated data are used to identify the nominal optimal model of the system and the optimal Kalman filter model using the prediction error method.

  • Estimation of the influence vectors: Using the least‐squares method, the influence vectors are identified recursively from the input‐output data obtained from the emulator parameter‐perturbed experiments. First, the influence vector for the single‐parameter perturbation is identified; then, using the estimated influence vector, the influence vector for the two simultaneous emulator perturbations is estimated. Generalizing, the influence vector for m simultaneous perturbations is identified, and then, using all previous m estimates of the influence vectors, the (m+1)th influence vector is identified.

The emulators are transfer functions, connected in cascade with the subsystems, used to generate likely operating scenarios, including normal and faulty ones, for reliable and accurate identification of the system, its associated Kalman filter, and the feature vector‐emulator map.

Emulators are connected to the system during the identification phase and their parameters are varied to generate likely operating scenarios. During the operational phase, the static emulators are disconnected, as it were, by setting them to unit values. The dynamic emulator, however, is not disconnected: its gain is set to unity and its phase to a negligibly small non‐zero value so that (a) both of these parameters have a negligible effect on the dynamic behavior of the system during the operational phase and (b) the order of the system during the identification and the operational phases remains identical, ensuring mathematical tractability without causing performance degradation. The role of the emulator‐generated data includes the following:

3.1. Emulator‐generated data for MISO system

The MISO system given by Eq. (11), relating all the inputs $r(k)$ to the output $y_j(k)$, is identified by connecting an emulator $E_j(z)$ in cascade with $r(z)$. The emulator is a first‐order all‐pass filter given by

$$E_j(z)=\gamma_{j2}\left(\frac{\gamma_{j1}+z^{-1}}{1+\gamma_{j1}z^{-1}}\right) \tag{17}$$

where $|\gamma_{j1}|<1$ to ensure stability. During the identification, the emulator $E_j(z)$ is connected to the input in cascade with the nominal model $G_{j0}(z)$. A number of experiments are performed by varying the emulator parameters $\gamma_{j1}$ and $\gamma_{j2}$ one at a time and both simultaneously to acquire the emulator‐generated data; it is assumed for simplicity that the same input is applied in all the experiments. Using Eq. (10), the MISO model relating $r(k)$ and $y_j(k)$ becomes

$$y_j^{el}(z)=G_{j0}(z)E_j(z)\,r(z)+\vartheta_j^{e}(z), \qquad e=1,2,\ldots,n_{exp},\;\; l=1,2,3 \tag{18}$$

where $y_j^{e1}(z)$, $y_j^{e2}(z)$, and $y_j^{e3}(z)$ denote, respectively, the output generated by varying $\gamma_{j1}$, $\gamma_{j2}$, and both $\gamma_{j1}$ and $\gamma_{j2}$.
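A short sketch of the first‐order all‐pass emulator (17), in our own construction: the parameter γj2 sets the gain and γj1 shifts only the phase, since an all‐pass section has constant magnitude |E(e^{jω})| = |γj2| at every frequency.

```python
# Hedged sketch of E(z) = g2*(g1 + z^-1)/(1 + g1*z^-1), |g1| < 1.
# Difference equation: u(k) = g2*g1*r(k) + g2*r(k-1) - g1*u(k-1).
import numpy as np

def allpass_emulator(r, g1, g2):
    """Filter the sequence r(k) through the emulator E(z)."""
    u = np.zeros(len(r))
    for k in range(len(r)):
        rk1 = r[k-1] if k > 0 else 0.0
        uk1 = u[k-1] if k > 0 else 0.0
        u[k] = g2 * g1 * r[k] + g2 * rk1 - g1 * uk1
    return u

# Frequency response on the unit circle: the magnitude is |g2| everywhere,
# so only the phase (set by g1) varies with frequency.
g1, g2 = 0.3, 1.2
w = np.linspace(0.1, 3.0, 50)
E = g2 * (g1 + np.exp(-1j * w)) / (1 + g1 * np.exp(-1j * w))
```

Because the filter is all‐pass, it also preserves signal energy up to the gain: the impulse‐response energy equals g2².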

3.2. Emulator‐generated data for SISO system

The feature vector‐emulator map of the SISO system (5) is estimated for the isolation of faults in the subsystems $\{G_{ji\ell}(z)\}$. The emulators $E_{ji}(z)$ are connected to an accessible input or output $\{u_{ji\ell}\}$ in cascade with the subsystems $\{G_{ji\ell}(z)\}$ to mimic their variations. In other words, the known emulator parameter variations mimic those of the unknown parameters of the associated subsystems. The accessible inputs include the tracking error, the control input, the actuator input, and the sensor output.

The emulator $E_{ji}(z)$ may be a constant gain $\gamma_{ji}$, a gain with a pure delay of $d$ time instants $\gamma_{ji}z^{-d}$, a first‐order all‐pass filter $\frac{\gamma_{ji}+z^{-1}}{1+\gamma_{ji}z^{-1}}$, or a Blaschke product of first‐order all‐pass filters [3]. Here, the emulator $E_{ji}(z)$ is chosen to be a product of a static gain and a first‐order all‐pass filter to mimic the behavior of the subsystems $G_{ji}(z)=\prod_{\ell=1}^{l}G_{ji\ell}(z)$ of the SISO system given by Eqs. (5) and (6):

$$E_{ji}(z)=\gamma_{ji2}\left(\frac{\gamma_{ji1}+z^{-1}}{1+\gamma_{ji1}z^{-1}}\right) \tag{19}$$

In order to ensure stability of the dynamic emulator, the parameter $\gamma_{ji1}$ is constrained by $|\gamma_{ji1}|<1$.

Connecting the emulator Eji(z) given in Eq. (19) to the nominal SISO model Gji0(z) using Eqs. (5) and (6), we get

$$y_{ji}^{el}(z)=E_{ji}(z)G_{ji0}(z)\,r_i(z)+\vartheta_{ji}^{e}(z), \qquad e=1,2,\ldots,n_{exp},\;\; l=1,2,3 \tag{20}$$

where $y_{ji}^{e1}(z)$, $y_{ji}^{e2}(z)$, and $y_{ji}^{e3}(z)$ denote, respectively, the output generated by varying $\gamma_{ji1}$, $\gamma_{ji2}$, and both $\gamma_{ji1}$ and $\gamma_{ji2}$.

Figure 2 shows an example of a closed‐loop position control system formed of a controller, an actuator, a plant, and a sensor in the path connecting the tracking error $e_{ri}(k)=r_i(k)-y_{ji}(k)$ and the output $y_{ji}$. Only $e_{ri}(k)$, $u_{ji1}(k)$, and $u_{ji3}(k)$ are measured outputs. The emulators $E_{ji1}(z)=\frac{\gamma_{ji1}+z^{-1}}{1+\gamma_{ji1}z^{-1}}$ and $E_{ji2}=\gamma_{ji2}$ are connected to $u_{ji1}$, and $E_{ji3}=\gamma_{ji3}$ is connected to $u_{ji3}$, to mimic the perturbations in the dynamic plant $G_{ji1}(z)$, the static actuator $G_{ji2}(z)=k_A$, and the static sensor $G_{ji3}(z)=k_s$, respectively, where $E_{ji1}(z)$ is a dynamic emulator, and $E_{ji2}$ and $E_{ji3}$ are static emulators.

Figure 2.

Position control system: emulators and subsystems.

The nominal static emulator is set to a unit value, $\gamma_{jik0}=1$. The variation $\Delta\gamma_{jik}$ of an emulator parameter $\gamma_{jik}$ may be expressed in terms of its nominal value $\gamma_{jik0}$ as $\Delta\gamma_{jik}=\gamma_{jik}-\gamma_{jik0}$.

3.3. Feature vector‐emulator map

The feature vector‐emulator map for the SISO and the MISO systems is developed subsequently.

3.3.1. SISO system

Consider the emulator‐perturbed SISO system (20) relating the input $r_i(k)$ and the output $y_{ji}(k)$, and the associated linear regression model (7). The feature vector $\theta_{ji}$ is a nonlinear function of the emulator parameter vector $\gamma_{ji}=[\gamma_{ji1}\;\gamma_{ji2}]$. Assuming that the feature vector $\theta_{ji}$ is a continuous function of $\gamma_{ji}$, then by the Weierstrass approximation theorem the feature vector‐emulator map becomes

$$\Delta\theta_{ji}=\Omega_{ji1}\Delta\gamma_{ji1}+\Omega_{ji2}\Delta\gamma_{ji2}+\Omega_{ji12}\Delta\gamma_{ji1}\Delta\gamma_{ji2} \tag{21}$$

where $\Delta\theta_{ji}=\theta_{ji}-\theta_{ji0}$ and $\Delta\gamma_{ji}=\gamma_{ji}-\gamma_{ji0}$ are the parameter variations; $\theta_{ji0}$ is the nominal feature vector; $\Omega_{ji1}$ is the $2n_j\times 1$ vector of partial derivatives of the feature vector $\theta_{ji}$ with respect to $\gamma_{ji1}$ evaluated at the unperturbed nominal emulator value $\gamma_{ji10}$; similarly, $\Omega_{ji2}$ is the $2n_j\times 1$ vector of partial derivatives of $\theta_{ji}$ with respect to $\gamma_{ji2}$ evaluated at $\gamma_{ji20}$; and $\Omega_{ji12}$ is the vector of second partial derivatives with respect to $\gamma_{ji1}$ and $\gamma_{ji2}$ evaluated at $\gamma_{ji10}$ and $\gamma_{ji20}$. The partial derivative terms $\Omega_{ji1}$, $\Omega_{ji2}$, and $\Omega_{ji12}$, which form the Jacobian of the feature vector $\theta_{ji}$ with respect to the emulator parameters $\{\gamma_{jik}\}$, are termed influence vectors. The influence vectors play a crucial role in isolating a fault occurring in any subsystem; they track the degree of variation in the parameters of the perturbed subsystems.

Substituting for $\theta_{ji}$ in Eq. (7), the variation $\Delta y_{ji}(k)=y_{ji}(k)-y_{ji0}(k)$ between the actual output $y_{ji}(k)$ and the nominal fault‐free output $y_{ji0}(k)$ becomes

$$\Delta y_{ji}(k)=\psi_{ji}^T(k)\left(\Omega_{ji1}\Delta\gamma_{ji1}+\Omega_{ji2}\Delta\gamma_{ji2}+\Omega_{ji12}\Delta\gamma_{ji1}\Delta\gamma_{ji2}\right)+\upsilon_{ji}(k) \tag{22}$$

Let $\Omega_{ji}$ be the influence matrix associated with the emulators located in the path $ji$, whose columns are the influence vectors:

$$\Omega_{ji}=[\Omega_{ji1}\;\;\Omega_{ji2}\;\;\Omega_{ji12}] \tag{23}$$

A number of emulator parameter‐perturbed experiments are performed by perturbing the parameters of the emulators (20). For each experiment, $N$ input‐output data $(y_j^e(k), r(k))$, $k=1,2,\ldots,N$, are obtained. The input $r(k)$ for each experiment is chosen to be persistently exciting. The regression models associated with the experiments and Eq. (22) are given as follows:

$$\begin{aligned} \Delta y_{ji}^{e1}(k) &= \psi_{ji}^T(k)\,\Omega_{ji1}\Delta\gamma_{ji1}+\upsilon_{ji}^{e1}(k)\\ \Delta y_{ji}^{e2}(k)-\psi_{ji}^T(k)\,\Omega_{ji1}\Delta\gamma_{ji1} &= \psi_{ji}^T(k)\,\Omega_{ji2}\Delta\gamma_{ji2}+\upsilon_{ji}^{e2}(k)\\ \Delta y_{ji}^{e3}(k)-\psi_{ji}^T(k)\left(\Omega_{ji1}\Delta\gamma_{ji1}+\Omega_{ji2}\Delta\gamma_{ji2}\right) &= \psi_{ji}^T(k)\,\Omega_{ji12}\Delta\gamma_{ji1}\Delta\gamma_{ji2}+\upsilon_{ji}^{e3}(k) \end{aligned} \tag{24}$$

3.3.2. MISO system

Consider the emulator‐perturbed MISO system (18) relating the inputs $r(k)$ and the output $y_j(k)$, and the associated linear regression model (11). Similarly to Eqs. (21) and (24), we get

$$\Delta\theta_j=\Omega_{j1}\Delta\gamma_{j1}+\Omega_{j2}\Delta\gamma_{j2}+\Omega_{j12}\Delta\gamma_{j1}\Delta\gamma_{j2} \tag{25}$$

$$\begin{aligned} \Delta y_j^{e1}(k) &= \psi_j^T(k)\,\Omega_{j1}\Delta\gamma_{j1}+\upsilon_j^{e1}(k)\\ \Delta y_j^{e2}(k)-\psi_j^T(k)\,\Omega_{j1}\Delta\gamma_{j1} &= \psi_j^T(k)\,\Omega_{j2}\Delta\gamma_{j2}+\upsilon_j^{e2}(k)\\ \Delta y_j^{e3}(k)-\psi_j^T(k)\left(\Omega_{j1}\Delta\gamma_{j1}+\Omega_{j2}\Delta\gamma_{j2}\right) &= \psi_j^T(k)\,\Omega_{j12}\Delta\gamma_{j1}\Delta\gamma_{j2}+\upsilon_j^{e3}(k) \end{aligned} \tag{26}$$

4. Identification

The prediction error method can be derived from the residual model of the Kalman filter, which is presented in the next section. It identifies both the nominal system and the Kalman filter associated with the system without requiring a priori knowledge of the covariances of the noise and the disturbance. The prediction error method is consistent and efficient, is a gold standard for system identification, and can identify both open‐loop and closed‐loop systems. The variance of the parameter estimates asymptotically approaches the Cramer‐Rao lower bound.

Optimal models: The optimal system and the associated Kalman filter are identified using the prediction error method with a computationally efficient scheme. First, the MISO system is identified, and then the SISO system is derived from the estimate of the feature vector associated with the MISO system. The emulator‐generated data obtained using Eq. (18) are used to identify the MISO system (10); the nominal feature vector $\theta_{j0}$ for Eq. (11), which is the best least‐squares fit to the set of all perturbed feature vectors $\theta_j$, and the Kalman gain $K_{j0}$ are estimated. Let the optimal state‐space model of the MISO system be $(A_{j0}, B_{j0}, C_{j0})$ and the associated Kalman filter be $(A_{j0}-K_{j0}C_{j0}, [K_{j0}\; B_{j0}], C_{j0})$. Let the optimal transfer matrix of the MISO system and the optimal estimate of the output be $G_{jopt}(z)$ and $\hat{y}_{jopt}(k)$, respectively. Using Eq. (10), we get

$$\hat{y}_{jopt}(z)=G_{jopt}(z)\,r(z)+\vartheta_j(z) \tag{27}$$

Then, the best estimate of the feature vector θji of the SISO system (7), denoted θji0, and the Kalman gain are estimated from θj0.

4.1. Estimation of the influence vectors

SISO system: Knowing the emulator parameter perturbations $\Delta\gamma_{ji1}$, $\Delta\gamma_{ji2}$, $\Delta\gamma_{ji1}\Delta\gamma_{ji2}$ and the resulting emulator‐generated data, the influence vectors $\hat\Omega_{ji1}$, $\hat\Omega_{ji2}$, and $\hat\Omega_{ji12}$ are estimated recursively by the least‐squares method using Eq. (24):

$$\begin{aligned} \hat\Omega_{ji1} &= \arg\min_{\Omega_{ji1}}\left\{\left\|\Delta y_{ji}^{e1}(k)-\psi_{ji}^T(k)\,\Omega_{ji1}\Delta\gamma_{ji1}\right\|^2\right\}\\ \hat\Omega_{ji2} &= \arg\min_{\Omega_{ji2}}\left\{\left\|\Delta y_{ji}^{e2}(k)-\psi_{ji}^T(k)\,\hat\Omega_{ji1}\Delta\gamma_{ji1}-\psi_{ji}^T(k)\,\Omega_{ji2}\Delta\gamma_{ji2}\right\|^2\right\}\\ \hat\Omega_{ji12} &= \arg\min_{\Omega_{ji12}}\left\{\left\|\Delta y_{ji}^{e3}(k)-\psi_{ji}^T(k)\left(\hat\Omega_{ji1}\Delta\gamma_{ji1}+\hat\Omega_{ji2}\Delta\gamma_{ji2}\right)-\psi_{ji}^T(k)\,\Omega_{ji12}\Delta\gamma_{ji1}\Delta\gamma_{ji2}\right\|^2\right\} \end{aligned} \tag{28}$$

where $\|x(k)\|^2=\sum_{k=1}^{N} x^2(k)$.

MISO system: Similarly to Eq. (28), the influence vectors $\hat\Omega_{j1}$, $\hat\Omega_{j2}$, and $\hat\Omega_{j12}$ are estimated.
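The sequential least‐squares estimation of Eq. (28) can be sketched on synthetic data (the influence vectors, perturbations, and dimensions below are made up for illustration): Ω1 is estimated from the first experiment, substituted when estimating Ω2, and both estimates are substituted when estimating the cross term Ω12.

```python
# Hedged sketch of the sequential least-squares estimation in Eq. (28).
import numpy as np

rng = np.random.default_rng(2)
N, n = 400, 4                      # samples per experiment, feature dimension
Psi = rng.standard_normal((N, n))  # regression matrix (rows are psi^T(k))

O1  = np.array([ 0.5, -0.2,  0.1,  0.30])  # "true" influence vectors (made up)
O2  = np.array([-0.1,  0.4,  0.2, -0.30])
O12 = np.array([ 0.2,  0.1, -0.1,  0.05])
dg1, dg2 = 0.3, -0.2                       # known emulator perturbations

# Synthetic experiment outputs, following the structure of Eq. (24).
dy1 = Psi @ O1 * dg1 + 0.01 * rng.standard_normal(N)
dy2 = Psi @ (O1 * dg1 + O2 * dg2) + 0.01 * rng.standard_normal(N)
dy3 = (Psi @ (O1 * dg1 + O2 * dg2 + O12 * dg1 * dg2)
       + 0.01 * rng.standard_normal(N))

# Sequential least squares: each stage reuses the earlier estimates.
O1_hat  = np.linalg.lstsq(Psi * dg1, dy1, rcond=None)[0]
O2_hat  = np.linalg.lstsq(Psi * dg2, dy2 - Psi @ O1_hat * dg1, rcond=None)[0]
O12_hat = np.linalg.lstsq(Psi * (dg1 * dg2),
                          dy3 - Psi @ (O1_hat * dg1 + O2_hat * dg2),
                          rcond=None)[0]
```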


5. Model of the Kalman filter

The Kalman filter forms the backbone of fault detection for the MISO system and of fault isolation for the SISO system. The Kalman filter is a closed‐loop system, which (a) is an exact copy of the identified nominal model of the system driven by the residual, the error between the output and its estimate, and (b) is stabilized by the Kalman gain.

MISO system: Using the state‐space model $(A_{j0}, B_{j0}, C_{j0})$ derived from the identified nominal feature vector $\theta_{j0}$, the Kalman filter $(A_{j0}-K_{j0}C_{j0}, [K_{j0}\; B_{j0}], C_{j0})$ associated with the MISO system (10) is

$$\begin{aligned} \hat{x}_j(k+1) &= (A_{j0}-K_{j0}C_{j0})\,\hat{x}_j(k)+K_{j0}\,y_j(k)+B_{j0}\,r(k)\\ \hat{y}_j(k) &= C_{j0}\,\hat{x}_j(k)\\ e_j(k) &= y_j(k)-\hat{y}_j(k) \end{aligned} \tag{29}$$

where x^j(k) and y^j(k) are, respectively, the minimum variance estimates of the state and the output.

Figure 3 shows the nominal fault‐free system and the Kalman filter. The structure of the Kalman filter is based on the internal model principle, which embodies the nominal system model (Aj0,Bj0,Cj0). The inputs to the Kalman filter are the input r(k) and the output yj(k) which is corrupted by the disturbance wj(k) and the measurement noise vj(k).

Figure 3.

The system and its associated Kalman filter.
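A minimal sketch of the innovation‐form Kalman filter (29), with illustrative matrices of our own (not the chapter's): the steady‐state gain is obtained here by iterating the Riccati recursion, and the filter, a copy of the nominal model driven by the residual, produces a zero‐mean white residual when the model matches.

```python
# Hedged sketch: steady-state Kalman filter in the innovation form of Eq. (29).
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[0.7, 0.2], [0.0, 0.5]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
Q = 0.02 * np.eye(2)               # process-noise covariance
R = np.array([[0.01]])             # measurement-noise covariance

# Steady-state gain via fixed-point iteration of the predictor Riccati equation.
P = np.eye(2)
for _ in range(500):
    S = C @ P @ C.T + R
    P = A @ P @ A.T + Q - A @ P @ C.T @ np.linalg.inv(S) @ C @ P @ A.T
K = A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + R)   # n x q Kalman gain

# Simulate the true (matching) system and run the filter of Eq. (29).
N = 2000
r = rng.standard_normal(N)
x = np.zeros((2, 1))
xh = np.zeros((2, 1))
e = np.zeros(N)
for k in range(N):
    y = (C @ x + np.sqrt(R) * rng.standard_normal((1, 1))).item()
    e[k] = y - (C @ xh).item()                     # residual e(k) = y - yhat
    x = A @ x + B * r[k] + rng.multivariate_normal([0, 0], Q).reshape(2, 1)
    xh = (A - K @ C) @ xh + K * y + B * r[k]       # Eq. (29)
```

With no model mismatch the residual e(k) behaves as a zero‐mean white innovation sequence, which is the detection criterion used in the operational stage.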

5.1. Expressions of the residual

The expressions for the residual of the MISO system, $e_j(z)$, and of the SISO system, $e_{ji}(z)$, are derived from the Kalman filter (29).

MISO model: The frequency‐domain expression relating the $p\times 1$ input $r(z)$ and the output $y_j(z)$ to the residual $e_j(z)$ is given by the following model, termed the residual model:

$$e_j(z)=\frac{\bar{D}_{j0}(z)}{F_{j0}(z)}\,y_j(z)-\frac{\bar{N}_{j0}(z)}{F_{j0}(z)}\,r(z) \tag{30}$$

where $F_{j0}(z)=|zI-A_{j0}+K_{j0}C_{j0}|$ is the characteristic polynomial, termed the Kalman polynomial, and

$$\begin{aligned} \bar{D}_{j0}(z) &= F_{j0}(z)\left(I-C_{j0}(zI-A_{j0}+K_{j0}C_{j0})^{-1}K_{j0}\right)\\ \bar{N}_{j0}(z) &= [\bar{N}_{j10}(z)\;\;\bar{N}_{j20}(z)\;\;\cdots\;\;\bar{N}_{jp0}(z)]=F_{j0}(z)\,C_{j0}(zI-A_{j0}+K_{j0}C_{j0})^{-1}B_{j0} \end{aligned}$$

SISO system: The residual eji(z) is derived from the residual model (30) from the map relating eji(z) to yj(z) and ri(z):

$$e_{ji}(z)=\frac{\bar{D}_{j0}(z)}{F_{j0}(z)}\,y_j(z)-\frac{\bar{N}_{ji0}(z)}{F_{j0}(z)}\,r_i(z) \tag{31}$$

where $\bar{N}_{ji0}(z)$ is the $i$th element of $\bar{N}_{j0}(z)$.

5.1.1. Key properties of the Kalman filter residual

The Kalman filter forms the backbone of the proposed scheme in view of its key properties proved in [1]. These properties are exploited in developing the system identification using the residual model and in a unified approach to fault detection and isolation, where a fault is defined as an incipient fault resulting in a model mismatch.

5.2. Propositions

We establish important results, in the form of lemmas, that are crucial to the development of the proposed fault diagnosis scheme. In Lemma 1, it is shown that (a) the system transfer function can be estimated from the residual model and (b) the Kalman filter whitens the output error $\vartheta_j(z)$ given in Eq. (10). Lemma 2 shows that, if there is a model mismatch, the residual will not be a zero‐mean white noise process and will contain an additive fault‐indicating term, which is a function of the deviation between the actual feature vector $\theta_j$ of the system model $(A_j, B_j, C_j)$ and the nominal fault‐free feature vector $\theta_{j0}$ of the nominal fault‐free model $(A_{j0}, B_{j0}, C_{j0})$.

Case 1: The system and the nominal models are identical

Lemma 1:

$$G_{j0}(z)=D_{j0}^{-1}(z)N_{j0}(z)=\bar{D}_{j0}^{-1}(z)\bar{N}_{j0}(z) \tag{32}$$

where Gj0(z) is the transfer function of the nominal fault‐free model (Aj0,Bj0,Cj0).

Proof: Substituting for $y_j(z)$ from Eq. (10), the residual model (30) becomes

$$e_j(z)=\frac{\bar{D}_{j0}(z)}{F_{j0}(z)}\left(D_j^{-1}(z)N_j(z)-\bar{D}_{j0}^{-1}(z)\bar{N}_{j0}(z)\right)r(z)+\frac{\bar{D}_{j0}(z)}{F_{j0}(z)}\,\vartheta_j(z) \tag{33}$$

Correlating both sides with the input $r(k)$ and invoking the orthogonality property, namely that the input $r(k)$ is uncorrelated with both the residual $e_j(k)$ and the output error $\vartheta_j(k)$ [4], we get

$$\frac{\bar{D}_{j0}(z)}{F_{j0}(z)}\left(D_j^{-1}(z)N_j(z)-\bar{D}_{j0}^{-1}(z)\bar{N}_{j0}(z)\right)r(z)=0 \tag{34}$$

Hence, Eq. (32) holds.

Corollary 1: The filter $\bar{D}_{j0}(z)/F_{j0}(z)$ whitens the output error $\vartheta_j(z)$ if there is no model mismatch:

$$e_j(z)=\frac{\bar{D}_{j0}(z)}{F_{j0}(z)}\,\vartheta_j(z) \tag{35}$$

Proof: Consider the expression (33) for the model‐matching case. Using Eq. (32), we establish Eq. (35).

Case 2: System and the nominal model mismatch

Lemma 2: If there is model mismatch, then

$$e_j(z)=\frac{\bar{D}_{j0}(z)}{F_{j0}(z)}\,\Delta G_j(z)\,r(z)+\vartheta_{jf}(z) \tag{36}$$

$$e_j(z)=\psi_{jf}^T(z)\,\Delta\theta_j+\upsilon_{jf}(z) \tag{37}$$

where $\Delta G_j(z)=D_j^{-1}(z)N_j(z)-D_{j0}^{-1}(z)N_{j0}(z)$ and $\Delta\theta_j=\theta_j-\theta_{j0}$; $\psi_{jf}^T(z)=\frac{\bar{D}_{j0}(z)}{D_j(z)F_{j0}(z)}\psi_j^T(z)$, $\vartheta_{jf}(z)=\frac{\bar{D}_{j0}(z)}{F_{j0}(z)}\vartheta_j(z)$, and $\upsilon_{jf}(z)=\frac{\bar{D}_{j0}(z)}{D_j(z)F_{j0}(z)}\upsilon_j(z)$ are, respectively, the filtered regression matrix, the filtered output error, and the filtered equation error.

Proof:

Case 1: Consider expression (33). Using Eq. (32), we get

$$e_j(z)=\frac{\bar{D}_{j0}(z)}{F_{j0}(z)}\left(D_j^{-1}(z)N_j(z)-D_{j0}^{-1}(z)N_{j0}(z)\right)r(z)+\vartheta_{jf}(z) \tag{38}$$

Substituting ΔGj(z)=Dj1(z)Nj(z)Dj01(z)Nj0(z), we get Eq. (36).

Case 2: Expressing the residual model (30) in an alternative form:

$$e_j(z)=\frac{\bar{D}_{j0}(z)}{F_{j0}(z)}\left(y_j(z)-\bar{D}_{j0}^{-1}(z)\bar{N}_{j0}(z)\,r(z)\right) \tag{39}$$

Using Eq. (32) and re‐arranging, we get

$$e_j(z)=\frac{\bar{D}_{j0}(z)}{D_{j0}(z)F_{j0}(z)}\left(D_{j0}(z)\,y_j(z)-N_{j0}(z)\,r(z)\right) \tag{40}$$

Adding and subtracting yj(z) inside the bracket on the right‐hand side yields

$$e_j(z)=\frac{\bar{D}_{j0}(z)}{D_{j0}(z)F_{j0}(z)}\left(y_j(z)-(1-D_{j0}(z))\,y_j(z)-N_{j0}(z)\,r(z)\right) \tag{41}$$

Using the expression for the regression model (11) and substituting for the actual and the nominal fault‐free cases, we get

$$e_j(z)=\frac{\bar{D}_{j0}(z)}{D_{j0}(z)F_{j0}(z)}\,\psi_j^T(k)\,\Delta\theta_j+\upsilon_{jf}(k) \tag{42}$$

Remarks: If there is a model mismatch because of variations in the subsystem parameters, the residual is no longer a zero‐mean white noise process. The residual has an additive term, which is affine in the deviation of the system transfer function, $\Delta G_j(z)$, or equivalently affine in the deviation of the feature vector, $\psi_{jf}^T(z)\Delta\theta_j$. These additive terms are termed fault indicators. This shows that the Kalman filter provides a unifying approach to handle both fault detection and fault isolation.

In view of these key properties, the Kalman filter is employed for both identification and fault diagnosis. In system identification, the criterion for determining whether the identified model has completely captured the dynamic behavior of the system is that the residual (the error between the output and its estimate obtained using the identified model) is a zero-mean white noise process. Since the equation error υ(k) is a colored noise process, the parameter estimates would be biased and inefficient. To overcome this, the input and the output are whitened using the Kalman filter, as shown in Eq. (35) of Corollary 1. The Kalman filter model (29) may be interpreted as an inverse system generating the innovation sequence e(k), or alternatively as a whitening-filter realization of a state-space model that is driven by both the disturbance and the measurement noise.
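The whiteness criterion above can be checked numerically: estimate the normalized sample autocorrelation of the residual and verify that the nonzero-lag values stay inside a confidence band of width proportional to 1/√N. The sketch below is illustrative; the function names and the 3-sigma band are assumptions, not from the chapter.

```python
import numpy as np

def autocorr(e, max_lag):
    """Normalized sample autocorrelation of the residual e at lags 0..max_lag."""
    e = np.asarray(e, dtype=float) - np.mean(e)
    var = np.dot(e, e)
    return np.array([np.dot(e[:len(e) - l], e[l:]) / var for l in range(max_lag + 1)])

def is_white(e, max_lag=10, conf=3.0):
    """Declare the residual a white noise process if every nonzero-lag
    autocorrelation lies inside the +/- conf/sqrt(N) band."""
    r = autocorr(e, max_lag)
    bound = conf / np.sqrt(len(e))
    return bool(np.all(np.abs(r[1:]) < bound))

rng = np.random.default_rng(0)
white = rng.standard_normal(2000)                # residual under model match
colored = np.convolve(white, [1.0, 0.9])[:2000]  # model mismatch colors the residual
print(is_white(white), is_white(colored))
```

A colored residual (here an MA(1) process with lag-one correlation near 0.5) falls well outside the band, flagging a model mismatch.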

Lemma 3: If there is a model mismatch in the SISO subsystem Gji(z), then

eji(z)=ψjifT(z)Δθji+υjif(z)E43

where Fji0(z) = |zI − Aji0 + Kji0Cj0|, ψjifT(z) = D¯j0(z)Dj(z)Fji0(z)ψjiT(z), and υjif(z) = D¯j0(z)Dj(z)Fji0(z)υji(z).

Proof: The proof follows from Eqs. (31) and (37).


6. Bayesian approach to fault diagnosis

The objective of fault detection is to assert whether a given residual belongs to the class of fault-free data or to that of faulty residual data, while fault isolation determines to which class of emulator-perturbed residuals the given data belong. The problem of fault detection and fault isolation is thus formulated as a pattern classification problem: fault detection is a binary classification, while fault isolation is a multi-class classification. The Bayesian decision strategy is employed to assign the appropriate class label. It is based on the a posteriori conditional probability of a hypothesis given the data, the a priori probability of the hypothesis, and a performance measure. The decision rule is obtained by minimizing the performance measure over all hypotheses.

The N×1 residual e(k) lies in a different region of the N-dimensional space depending upon the fault type. In the ideal case, the regions associated with different fault types do not overlap. However, due to noise, disturbances, and other measurement artifacts, the various regions do overlap. Hence, the Bayesian strategy is employed to assign an appropriate class label so as to ensure a high probability of correct decision and a low probability of false alarm.

6.1. Fault detection

Fault detection is posed as a binary hypothesis-testing problem. The criterion for choosing between the two hypotheses, namely the presence or the absence of a fault, is the minimization of the Bayes risk, which quantifies the costs associated with correct and incorrect decisions. The N×1 Kalman filter residual e(k) generated by Eq. (29) is employed. The minimization of the Bayes risk yields the likelihood ratio test: the decision between the two hypotheses is based on comparing the likelihood ratio, which is the ratio of the conditional probabilities of the residual under the two hypotheses, to a threshold. The resulting binary composite hypothesis-testing problem compares the test statistic of the residual e(k) with a threshold value η:

ts(e) ≤ η: no fault;  ts(e) > η: fault E44
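For Gaussian residuals, the Bayes-risk minimization above reduces to comparing a log-likelihood ratio with a threshold. A minimal sketch for a fault that inflates the residual variance from σ0² to σ1² (the variances, window length, and zero threshold are illustrative assumptions):

```python
import numpy as np

def lrt_fault(e, sigma0, sigma1, eta=0.0):
    """Log-likelihood ratio test on a residual window e:
    H0 (fault-free): e ~ N(0, sigma0^2);  H1 (faulty): e ~ N(0, sigma1^2).
    Returns True if a fault is declared (log-likelihood ratio exceeds eta)."""
    e = np.asarray(e, dtype=float)
    n = len(e)
    llr = (n * np.log(sigma0 / sigma1)
           + 0.5 * np.dot(e, e) * (1.0 / sigma0**2 - 1.0 / sigma1**2))
    return bool(llr > eta)

rng = np.random.default_rng(1)
nominal = 0.1 * rng.standard_normal(500)  # residual with nominal variance
faulty = 0.3 * rng.standard_normal(500)   # residual with inflated variance
print(lrt_fault(nominal, 0.1, 0.3), lrt_fault(faulty, 0.1, 0.3))
```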

The test statistic depends upon the input r(k) that generates the residual e(k) [4]:

ts(e) = |(1/N) Σi=k−N+1..k e(i)|  if r(k) is a constant
ts(e) = Pee(f0)  if r(k) is a sinusoid of frequency f0
ts(e) = (1/N) Σi=k−N+1..k e²(i)  if r(k) is an arbitrary signal E45

6.1.1. Computationally efficient scheme

A computationally efficient scheme is employed here for the detection:

  • The status of each of the MISO systems Gj(z), relating all the inputs r(z) to the output yj(z), is evaluated for all j = 1, 2, …, q using the binary hypothesis scheme (44). Using the test statistics of the residuals ej(k) given by Eq. (30) yields

    ts(ej) ≤ ηj: no fault;  ts(ej) > ηj: fault,  j = 1, 2, 3, …, q E46

  • If a fault is asserted in Gj(z), then the status of each of the p SISO subsystems Gji(z) is assessed using the test statistics of the residuals eji(k) given by Eq. (31):

    ts(eji) ≤ ηji: no fault;  ts(eji) > ηji: fault,  i = 1, 2, 3, …, p E47
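The two bullet points above form a two-stage scheme: only when a MISO residual is flagged are the SISO residuals within that system examined. A sketch with synthetic residuals and thresholds (all names and numerical values are illustrative placeholders):

```python
import numpy as np

def ts_const(e, N):
    """Test statistic for a constant reference input: magnitude of the
    moving average of the last N residual samples, as in Eq. (55)."""
    return abs(np.mean(e[-N:]))

def isolate(miso_residuals, siso_residuals, eta_j, eta_ji, N=100):
    """Two-stage detection: test each MISO residual e_j first (Eq. (46));
    only for a flagged j, test its SISO residuals e_ji (Eq. (47)).
    Returns the list of (j, i) index pairs flagged as faulty."""
    faults = []
    for j, ej in enumerate(miso_residuals):
        if ts_const(ej, N) > eta_j[j]:
            for i, eji in enumerate(siso_residuals[j]):
                if ts_const(eji, N) > eta_ji[j][i]:
                    faults.append((j, i))
    return faults

rng = np.random.default_rng(2)

def noise(bias=0.0):
    # synthetic residual: zero-mean unless a fault bias is injected
    return bias + 0.05 * rng.standard_normal(400)

miso = [noise(), noise(1.0)]                        # fault only in MISO system j=1
siso = [[noise(), noise()], [noise(1.0), noise()]]  # fault in SISO path (1, 0)
print(isolate(miso, siso, eta_j=[0.2, 0.2], eta_ji=[[0.2, 0.2]] * 2))
```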

Fault accommodation: If a fault is asserted, the Kalman gain is adapted online, the system is re-identified, and the Kalman filter is redesigned accordingly; the fault is thus accommodated. In the extreme case, the system is shut down for safety reasons.


7. Evaluation on simulated system

The proposed emulator‐based system identification of the system, the associated Kalman filter, feature vector‐emulator map, and finally the fault diagnosis are illustrated using an example of a position control system formed of an actuator, a sensor, and a plant.

7.1. System model

A two-input, two-output fault-free system (A0,B0,C0) given by Eq. (1) is considered, where

A0 = [0 −0.7 0 0; 1 1.5 0 0; 0 0 0 −0.82; 0 0 1 1.8], B0 = [0.5 1; 1 0; 1 0.3; 0 1], C0 = [0 1 0 0; 0 0 0 1] E48

The nominal transfer matrix of the MIMO system (3) is

G0(z) = [G11(z) G12(z); G21(z) G22(z)], where G11(z) = (1 + z−1)/(1 − 1.5z−1 + 0.7z−2), G12(z) = 1/(1 − 1.5z−1 + 0.7z−2), G21(z) = 1/(1 − 1.8z−1 + 0.82z−2), and G22(z) = (1 − 0.2z−1)/(1 − 1.8z−1 + 0.82z−2) E49

The nominal MISO transfer matrix, Gj0(z) = D0−1(z)Nj0(z), j = 1, 2, of the system is

D0(z) = 1 − 3.3z−1 + 4.22z−2 − 2.49z−3 + 0.574z−4, N0(z) = [z−1 − 1.3z−2  0.08z−1 + 0.41z−2; z−1 − 1.8z−2  1.15z−1 − 0.2z−2] E50

Figure 4a shows the emulator-generated MISO outputs y1e1 and y2e1 and the SISO outputs y11e1, y12e1, y21e1, and y22e1 given in Eqs. (18) and (20), resulting from the variations of the emulator parameters γj1 and γji1, respectively. Subfigures A and B show the perturbed step responses y1e1(k) and y2e1(k) versus time, while subfigures C–F show the perturbed outputs y11e1(k), y12e1(k), y21e1(k), and y22e1(k) versus time. The outputs are in centimeters (cm) and the time is in seconds (s). The plots are generated when the emulator parameter γj1 is varied; the variations Δγj1 are {0.1, 0.5, 0.9, 1}.

Figure 4.

(a) Emulator generated data and (b) performance of the identified model.
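The perturbed step responses of Figure 4a can be reproduced in simulation. The sketch below assumes, for illustration, that the emulator acts as a static gain γ in cascade with the input of the SISO subsystem G11(z) = (1 + z−1)/(1 − 1.5z−1 + 0.7z−2) from Eq. (49):

```python
import numpy as np

def step_response_g11(gamma, n=200):
    """Step response of G11(z) = (1 + z^-1)/(1 - 1.5 z^-1 + 0.7 z^-2) with a
    static-gain emulator gamma at the input, via the difference equation
    y(k) = 1.5 y(k-1) - 0.7 y(k-2) + r(k-1) + r(k-2), with r(k) = gamma."""
    r = gamma * np.ones(n)
    y = np.zeros(n)
    for k in range(1, n):
        y[k] = 1.5 * y[k - 1] + r[k - 1]
        if k >= 2:
            y[k] += -0.7 * y[k - 2] + r[k - 2]
    return y

# one perturbed trajectory per emulator setting, as in Figure 4a
responses = {g: step_response_g11(g) for g in (0.1, 0.5, 1.0)}
```

The steady-state value is 2γ/(1 − 1.5 + 0.7) = 10γ, so varying γ rescales the step response, mimicking a gain perturbation of the subsystem.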

The mean-squared error is computed from the residual, namely the error between the output of the optimal model, denoted y^jopt(k) and given by Eq. (27), and the perturbed output yje1(k) resulting from the variations of the emulator parameter γj1. The mean-squared error, denoted msej(γj1), is computed as follows:

msej(γj1) = (1/N) Σk=1..N (y^jopt(k) − yje1(k))² E51

The conventional scheme identifies only the unperturbed nominal model. Let the identified model of the MISO system (10) be G^j0(z) and the corresponding estimated output be y^j0c(z). The mean-squared error, denoted msejc(γj1), becomes

msejc(γj1) = (1/N) Σk=1..N (y^j0c(k) − yje1(k))² E52

The mean-squared errors msej(γj1) and msejc(γj1) are plotted as functions of the emulator parameter perturbations Δγj1. The mean-squared error profiles of both the proposed emulator-based and the conventional identification schemes are shown in subfigures A and B of Figure 4b.

The identified state-space model (A^0, B^0, C^0) is

A^0 = [0.9843 0.1588 0.0213 0.0224; 0.1572 0.9266 0.2317 0.2502; 0.0631 0.3144 0.9090 0.3230; 0.0171 0.0153 0.1659 0.7225], B^0 = [0 0; 0.1 0.1; 0.3 0.3; 2.6 2.6] × 10−3, C^0 = [21 564.2 245.6 49.5; 1679.2 336.3 211.4 44.0] E65

The ranges of the mean‐squared errors msej(γj1) and msejc(γj1) are given below:

1.8390 ≤ mse1(γj1) ≤ 2.225, 0.0137 ≤ mse1c(γj1) ≤ 7.1815 E53
18.2224 ≤ mse2(γj1) ≤ 21.7167, 0.0007 ≤ mse2c(γj1) ≤ 69.8841 E54

Remarks: The emulator‐generated data cover the operating scenarios, including both the normal and abnormal ones, exhibiting variations of the rise time, the settling times, and the overshoots.

The identified optimal model (A^0, B^0, C^0) is different from the nominal system model (A0, B0, C0). Even the block-diagonal structures of A0 and B0 are not preserved.

It can be deduced from Figure 4b and Eqs. (53) and (54) that, compared to the conventional scheme, the proposed emulator-based identification is significantly more robust to variations in the operating points, which are simulated by emulator parameter perturbations.

The poles of the MISO transfer functions G2(z) of y2(k) and G1(z) of y1(k) were, respectively, 0.8500 ± j0.3122 and 0.7500 ± j0.3708. The same emulator was used to induce a phase shift in both MISO models. G2(z), whose poles are close to the unit circle, was affected more than G1(z), whose poles lie well inside it. In view of the difference in the perturbations induced in the two models, the mean-squared errors mse2 and mse2c are higher than mse1 and mse1c.
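The sensitivity argument above rests on how far the subsystem poles lie from the unit circle, and the pole locations follow directly from the denominator coefficients. A short check for G1(z), whose denominator 1 − 1.5z−1 + 0.7z−2 is taken from Eq. (49):

```python
import numpy as np

# denominator of G1(z): 1 - 1.5 z^-1 + 0.7 z^-2  ->  z^2 - 1.5 z + 0.7
poles_g1 = np.roots([1.0, -1.5, 0.7])
# distance from the unit circle: a small margin means the subsystem is
# more sensitive to emulator-induced phase perturbations
margins_g1 = 1.0 - np.abs(poles_g1)
print(poles_g1)  # 0.75 +/- j0.3708, as quoted in the text
```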

7.2. Fault diagnosis

Detection of a fault: Various types of faults, namely (a) actuator, (b) sensor, and (c) plant faults, were introduced by varying the columns of B0, the rows of C0, and the diagonal blocks of A0, respectively. A fault is detected using the appropriate test statistic, which depends upon the reference input waveform, from Eq. (45). Since the reference input r(k) is a constant waveform, the test statistics for the MISO and the SISO systems, using Eqs. (46) and (47), are

ts(ej) = |(1/N) Σi=k−N+1..k ej(i)|;  ts(eji) = |(1/N) Σi=k−N+1..k eji(i)| E55

A visual picture of the faulty and the normal subsystems may be deduced from the autocorrelations of the residuals associated with the fault-free case, the sensor fault, the actuator fault, and the plant fault, shown in Figure 5. Subfigures A and B, C and D, E and F, and G and H show, respectively, the autocorrelations of the residuals for the ideal no-fault case, the sensor fault, the actuator fault, and the plant fault.

Figure 5.

Autocorrelations of the residuals: ideal, sensor fault, actuator fault, and plant faults.

Remarks: The maximum value of the autocorrelation of the residual (i.e., its variance) provides an indication of the presence or absence of a fault. A sensor fault introduced by perturbing C20 affects only the residual e2(k): the variance of e2(k) is large while that of e1(k) is small, indicating a fault in C2. However, a fault in either the actuator or the plant, depending upon which elements of B0 or A0 are perturbed, may affect both residuals, and hence would be more difficult to isolate.

7.2.1. Fault isolation

If a fault is asserted and the path where the fault is located is identified, the fault is isolated using the Bayesian multiple-hypotheses testing scheme, and its size is also estimated. The objective of fault isolation is to determine which of the emulator parameters has varied, using the residual data and the expression for the Kalman filter residual in the model-mismatch case given in Eq. (43). The residual eji(k) is affine in the unknown emulator parameter variations {Δγji}. The emulator parameter variation most likely to fit the perturbed residual, through the additive term ψjifT(z)Δθji, is determined sequentially by first hypothesizing single faults. If the resulting estimates do not fit the residual, two simultaneous faults are hypothesized; if the estimates again do not fit the residual model, triple faults are hypothesized, and so on, until the estimates fit the residual model. The maximum likelihood method, which is efficient and unbiased, is employed here to estimate the variation Δγ. The maximum likelihood estimates of the emulator parameters are obtained by minimizing the negative log-likelihood function [13].

Let H(1), H(2), and H(3) denote the hypotheses that the emulator parameter γji1, γji2, or γji12, respectively, has varied. The Kalman filter residual for H(1) becomes

H(1):eji(1)(k)=ψjifT(k)Δθji(1)+υjif(k) E56

The least-squares estimate Δγ^ji1 is obtained from

Δγ^ji1 = argmin{Δγji1} ‖eji(1)(k) − ψjifT(k)Δθji(1)‖² E57

If the estimate does not meet the criteria (given later), then hypothesize that γji2 has varied. The Kalman filter residual for H(2) becomes

H(2):eji(2)(k)=ψjifT(k)Δθji(2)+υjif(k) E58

The least-squares estimate Δγ^ji2 is obtained from

Δγ^ji2 = argmin{Δγji2} ‖eji(2)(k) − ψjifT(k)Δθji(2)‖² E59

If this estimate does not meet the criteria, then hypothesize that γji12 has varied. The Kalman filter residual for H(3) becomes

H(3): eji(3)(k) = ψjifT(k)Δθji(3) + υjif(k) E60

The least-squares estimate Δγ^ji12 is obtained from

Δγ^ji12 = argmin{Δγji12} ‖eji(3)(k) − ψjifT(k)Δθji(3)‖² E61

where Δθji(1), Δθji(2), and Δθji(3) are the deviations in the feature vector when γji1, γji2, and γji12, respectively, are assumed to have varied.
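The sequential estimation of Eqs. (56)–(61) amounts to repeated least-squares fits of the residual against the columns of the filtered regression matrix selected by each hypothesis, stopping at the first hypothesis whose estimate fits the residual. A minimal sketch with synthetic data (the matrix, noise level, and fit tolerance are illustrative assumptions):

```python
import numpy as np

def fit_hypothesis(e, Psi, cols, fit_tol):
    """Least-squares estimate of the feature deviation under the hypothesis
    that only the emulator parameters indexed by cols have varied
    (cf. Eqs. (57), (59), and (61)). Returns (estimate, fits_residual)."""
    A = Psi[:, cols]
    dtheta, *_ = np.linalg.lstsq(A, e, rcond=None)
    rel_resid = np.linalg.norm(e - A @ dtheta) / np.linalg.norm(e)
    return dtheta, rel_resid < fit_tol

def isolate_sequentially(e, Psi, hypotheses, fit_tol=0.2):
    """Try the hypotheses in order (single faults first, then pairs, ...),
    stopping at the first one whose estimate fits the residual."""
    for cols in hypotheses:
        dtheta, ok = fit_hypothesis(e, Psi, cols, fit_tol)
        if ok:
            return cols, dtheta
    return None, None

rng = np.random.default_rng(3)
Psi = rng.standard_normal((200, 3))                    # filtered regression matrix
e = 0.8 * Psi[:, 1] + 0.01 * rng.standard_normal(200)  # only parameter 1 varied
hypotheses = [[0], [1], [2], [0, 1], [0, 2], [1, 2]]
cols, est = isolate_sequentially(e, Psi, hypotheses)
print(cols, est)
```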

7.2.2. Criteria for asserting the hypothesis

The most likely hypothesis is determined by verifying which emulator parameter or parameters have varied, comparing each estimated deviation with a threshold value:

Assert H(1) if |Δγ^ji1| ≥ η1 E62
Assert H(2) if |Δγ^ji2| ≥ η2 E63
Assert H(3) if |Δγ^ji12| ≥ η3 E64

where η1, η2, and η3 are threshold values. The subsystem associated with the asserted emulator parameter is declared faulty if the corresponding criterion is met.


8. Evaluation on physical process control system

A laboratory-scale two-tank physical system is formed of a controller, a DC motor, a pump, two tanks connected by a pipe, a flow rate sensor, and a liquid level sensor. The system is interfaced to a PC running National Instruments LabVIEW for data acquisition and for implementing the controller and the soft sensor [14]. The actuator, namely the pump driven by the DC motor, sends the fluid to the first tank so as to maintain a specified fluid level in the second tank. The proposed fault diagnosis scheme was evaluated on this benchmark laboratory-scale process control system, shown in Figure 6. The sampling period is Ts = 0.05 s.

Figure 6.

Process control system: controller, actuator, and tank.

Emulator-generated height and flow rate profiles under various types of faults are shown in Figure 7, under the caption "Height/Flow rate Profiles for PI controller with Consumer". Figure 7a–c shows the height and flow rate profiles when the system is subjected to (a) a leakage fault, (b) an actuator fault, and (c) sensor faults, respectively; the height profile is shown at the top and the flow rate profile at the bottom of Figure 7. The faults are induced by varying the appropriate emulator parameters to 0.25, 0.5, and 0.75 times their nominal values, in order to represent "small," "medium," and "large" faults. However, by virtue of its control design objective, the closed-loop PI controller will hide any fault that may occur in the system and hence make it difficult to detect. In addition, the physical system exhibits highly nonlinear behavior. The flow rate saturates at 4.5 ml/s. The dead-band effect in the actuator exhibits itself as a delay in the output response: when a step reference input is applied, the height output responds only after some delay, as a minimum force is required to drive the actuator. These nonlinearities affect the steady-state value of the height: even though there is an integral action in the closed-loop control system, the steady-state error is non-zero for a constant reference input.

Figure 7.

Emulator‐generated data: height and flow.

The system is modeled as a single-input, multi-output system where r is the reference input and the outputs are the control input u, the flow rate f, and the height h. Faults were induced in the height sensor, the flow sensor, and the actuator, and also as a leakage. The proposed fault diagnosis scheme successfully detected and isolated all the faults, as did the SISO scheme of [14], in which the faults were detected and isolated using only the reference input and the height output.


9. Conclusions

Fault detection and isolation of a class of linear multiple-input and multiple-output systems, based on the Kalman filter residual and on emulators, was presented. The key property of the Kalman filter, namely that its residual is a zero-mean white noise process if and only if there is no model mismatch, drives the prediction-error identification of the nominal system model and of the Kalman filter. In view of the closed-loop configuration, the noise and the disturbance are attenuated at the estimated output. The Kalman filter is the minimum-variance estimator in the class of all linear estimators.

To handle fault isolation, the powerful and effective concept of emulators was introduced. Similar in spirit to the training of an artificial neural network, a number of emulator-parameter-perturbed experiments were performed to capture the perturbation model of the subsystems and thereby help with fault isolation. The influence vectors of the emulator parameters, and indirectly of the associated subsystems, were estimated. The influence vectors capture the emulator perturbation model and hence that of the subsystem.

The residual of the Kalman filter was shown to contain an additive fault-indicating term when there is a model mismatch due to emulator perturbations; this model-mismatch term is affine in the emulator parameter variations. Using the expression for the fault-indicating term, the fault was isolated with the help of the influence vectors and its size was estimated. The residual, being affine in the emulator parameter variations, lends itself readily to the widely used composite Bayes hypothesis-testing scheme for fault isolation.

Future work includes the extension of the scheme to a class of nonlinear multiple-input and multiple-output systems, and the development of a computationally efficient identification of the Kalman filter directly from the input-output data, even for unstable systems. Although a gold standard for system identification, the prediction-error method involves a nonlinear optimization problem and hence can suffer from local minima. Unlike the least-squares approach, it does not offer a closed-form solution to the parameter estimation problem; instead, it relies on a recursive solution that may be time-consuming (slow convergence), computationally complex, and prone to initialization problems.

Advertisement

Acknowledgments

The first author acknowledges support of the Department of Electrical and Computer Engineering, the University of New Brunswick, and the National Science and Engineering Research Council (NSERC) of Canada. The author is grateful to Professor C.P. Diduch of the University of New Brunswick, Mr. Jiong Tang of MDS and Mr. H.M. Khalid of KFUPM for their help and suggestions. The second author acknowledges the support of KFUPM, Saudi Arabia, and the help of Mr. H. M. Khalid of KFUPM.

References

  1. Doraiswami, Rajamani, and Lahouari Cheded. "Kalman Filter for Fault Detection: An Internal Model Approach." IET Control Theory and Applications 6, no. 5 (2012): 1–11.
  2. Ljung, Lennart. System Identification: Theory for the User. New Jersey: Prentice-Hall, 1999.
  3. Doraiswami, Rajamani, and Lahouari Cheded. "A Unified Approach to Detection and Isolation of Parametric Faults Using Kalman Filter Residuals." Journal of the Franklin Institute 350, no. 5 (2013): 938–965.
  4. Doraiswami, Rajamani, Chris Diduch, and Maryhelen Stevenson. Identification of Physical Systems: Applications to Condition Monitoring, Fault Diagnosis, Soft Sensor and Controller Design. John Wiley and Sons, 2014. ISBN 9781119990123.
  5. Doraiswami, Rajamani, and Lahouari Cheded. "Linear Parameter Varying Modelling and Identification for Condition-based Monitoring of Systems." Journal of the Franklin Institute 352, no. 4 (2015): 1766–1790.
  6. Haykin, Simon. Neural Networks: A Comprehensive Foundation. New Jersey: Prentice Hall, 1999.
  7. Ding, S.X. Model-Based Fault Diagnosis Techniques: Design Schemes. London: Springer-Verlag, 2008. ISBN 978-3-540-76304-8.
  8. Gertler, Janos. Fault Detection and Diagnosis in Engineering Systems. New York: Marcel Dekker, 1998. ISBN 0-8247-9427-3.
  9. Isermann, Rolf. Fault-Diagnosis Systems: An Introduction from Fault Detection to Fault Tolerance. Springer-Verlag, 2006.
  10. Patton, Ronald J., Paul M. Frank, and Robert N. Clark. Issues in Fault Diagnosis for Dynamic Systems. Springer-Verlag, 2000.
  11. Pertew, A.M., H.J. Marquez, and Q. Zhao. "H-infinity Observer Design with Applications in Fault Diagnosis." Seville, Spain, 2006: 3803–3809.
  12. Simani, Silvio, Cesare Fantuzzi, and Ronald J. Patton. Model-based Diagnosis Using Identification Techniques. Advances in Industrial Control. Secaucus, NJ: Springer-Verlag, 2003. ISBN 1852336854.
  13. Doraiswami, R., C. Diduch, and J. Tang. "A New Diagnostic Model for Identifying Parametric Faults." IEEE Transactions on Control Systems Technology 18, no. 3 (2010): 533–544.
  14. Doraiswami, Rajamani, L. Cheded, and H.M. Khalid. Sequential Integration Approach to Fault Diagnosis with Applications: Model-Free and Model-Based Approaches. VDM Verlag Dr. Müller, 2010.
