Part of the book: Time-Delay Systems
In this chapter, iterated sigma-point Kalman filter (ISPKF) methods are used for nonlinear state and model parameter estimation. Several conventional state estimation methods, namely the unscented Kalman filter (UKF), the central difference Kalman filter (CDKF), the square-root unscented Kalman filter (SRUKF), the square-root central difference Kalman filter (SRCDKF), the iterated unscented Kalman filter (IUKF), the iterated central difference Kalman filter (ICDKF), the iterated square-root unscented Kalman filter (ISRUKF) and the iterated square-root central difference Kalman filter (ISRCDKF), are evaluated through a simulation example with two comparative studies in terms of estimation accuracy, estimation error and convergence. In the first comparative study, the state variables are estimated from noisy measurements using the various methods. In the second study, both states and parameters are estimated, and the methods are compared by computing the root mean square error (RMSE) of the estimates with respect to the noise-free data. The impacts of practical challenges (measurement noise and the number of estimated states and parameters) on the performance of the estimation techniques are also investigated. The results of both comparative studies show that the ISRCDKF method provides better estimation accuracy than the IUKF, ICDKF and ISRUKF, and that these iterated methods in turn outperform the UKF, CDKF, SRUKF and SRCDKF techniques. The ISRCDKF achieves its advantage by iterating the maximum a posteriori estimate around the updated state, re-linearizing the measurement equation instead of relying on the predicted state. The results also show that estimating more parameters affects both the estimation accuracy and the convergence of the estimated states and parameters. The ISRCDKF maintains better state estimation accuracy than the other techniques even under abrupt changes in the estimated states.
Part of the book: Nonlinear Systems
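The re-linearization idea behind the iterated filters can be sketched in a few lines of NumPy. This is a minimal, heuristic illustration, not the chapter's implementation: it uses an unscented (UKF-style) sigma-point set with fixed scaling parameters, and iterates the measurement update by regenerating sigma points around the current updated estimate, whereas the chapter's best-performing ISRCDKF uses square-root central-difference formulations. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def sigma_points(mean, cov, kappa=0.0):
    """Generate 2n+1 unscented sigma points and weights (alpha=1, beta=2)."""
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)   # matrix square root of scaled covariance
    pts = np.vstack([mean, mean + S.T, mean - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    wc = wm.copy()
    wm[0] = kappa / (n + kappa)
    wc[0] = wm[0] + 2.0                          # (1 - alpha^2 + beta) with alpha=1, beta=2
    return pts, wm, wc

def iterated_update(x_pred, P_pred, z, h, R, n_iter=3):
    """Heuristic iterated sigma-point measurement update: each pass
    regenerates sigma points around the latest posterior estimate,
    so the measurement function h is re-linearized at the updated
    state rather than at the prediction."""
    x, P = x_pred.astype(float).copy(), P_pred.astype(float).copy()
    for _ in range(n_iter):
        pts, wm, wc = sigma_points(x, P)
        Z = np.array([h(p) for p in pts])        # propagate points through h
        z_hat = wm @ Z                           # predicted measurement
        dz, dx = Z - z_hat, pts - x
        Pzz = (wc[:, None] * dz).T @ dz + R      # innovation covariance
        Pxz = (wc[:, None] * dx).T @ dz          # cross covariance
        K = Pxz @ np.linalg.inv(Pzz)             # Kalman gain
        x = x + K @ (z - z_hat)
        P = P - K @ Pzz @ K.T
    return x, P
```

For a linear measurement model the unscented transform is exact, so a few iterations simply drive the estimate toward the measurement as the innovation shrinks; the benefit of iterating appears with strongly nonlinear h, where each pass improves the linearization point.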
Data-based monitoring methods are often used to carry out fault detection (FD) when process models are not available. Partial least squares (PLS) and principal component analysis (PCA) are two basic multivariate FD methods; however, both can only be used to monitor linear processes. Among the extended data-based methods, kernel PCA (KPCA) and kernel PLS (KPLS) are the most well known and widely adopted. KPCA and KPLS models have several advantages: they do not require nonlinear optimization, since only the solution of an eigenvalue problem is required, and they provide a better understanding of what kind of nonlinear features are extracted, as the number of principal components (PCs) in the feature space is fixed a priori by selecting an appropriate kernel function. The objective of this work is therefore to use the KPCA and KPLS techniques to monitor nonlinear data. The improved FD performance of KPCA and KPLS is illustrated through two simulated examples, one using synthetic data and the other using simulated continuous stirred-tank reactor (CSTR) data. The results demonstrate that both KPCA and KPLS provide better detection than their linear counterparts.
Part of the book: Fault Diagnosis and Detection
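The KPCA monitoring scheme described above can be sketched as follows: build a kernel matrix on the training data, center it in feature space, solve the eigenvalue problem, and flag faults with a squared prediction error (SPE) statistic in feature space. This is a generic sketch, not the chapter's model; the RBF kernel, the `gamma` value, and the number of retained PCs are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """RBF (Gaussian) kernel matrix between row sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kpca_fit(X, gamma, n_pc):
    """Fit KPCA: center the kernel matrix in feature space and keep
    the dual coefficients of the n_pc leading principal directions."""
    K = rbf_kernel(X, X, gamma)
    r, gm = K.mean(axis=0), K.mean()            # row means and grand mean
    Kc = K - r[:, None] - r[None, :] + gm       # feature-space centering
    vals, vecs = np.linalg.eigh(Kc)             # eigenvalue problem only
    idx = np.argsort(vals)[::-1][:n_pc]
    lam, V = vals[idx], vecs[:, idx]
    A = V / np.sqrt(lam)                        # dual coefficients (unit-norm directions)
    return {"X": X, "r": r, "gm": gm, "A": A, "gamma": gamma}

def kpca_spe(model, x):
    """SPE (residual) of a new sample x in the kernel feature space;
    a large value signals behavior not captured by the retained PCs."""
    X, r, gm, A, gamma = (model[k] for k in ("X", "r", "gm", "A", "gamma"))
    k_t = rbf_kernel(X, x[None, :], gamma).ravel()
    k_tc = k_t - r - k_t.mean() + gm            # centered test kernel vector
    t = A.T @ k_tc                              # scores on retained PCs
    phi2 = 1.0 - 2.0 * k_t.mean() + gm          # ||centered phi(x)||^2 (RBF: k(x,x)=1)
    return phi2 - t @ t
```

In use, a control limit for the SPE would be estimated from the training data (for example from its empirical distribution), and test samples exceeding the limit would be declared faulty.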
Principal component analysis (PCA) is a linear data analysis technique widely used for fault detection and isolation, data modeling, and noise filtration. PCA may be combined with statistical hypothesis testing methods, such as the generalized likelihood ratio (GLR) technique, to detect faults. GLR applies maximum likelihood estimation (MLE) to maximize the detection rate for a fixed false alarm rate. The benchmark Tennessee Eastman Process (TEP) is used to examine the performance of the different techniques, and the results show that for processes experiencing shifts in the mean and/or variance, the best performance is achieved by monitoring the mean and variance independently using two separate GLR charts, rather than monitoring them simultaneously using a single chart. Moreover, single-valued data can be aggregated into interval form to provide a more robust model with improved fault detection performance using PCA and GLR. The TEP example is used once more to demonstrate the effectiveness of interval-valued data over single-valued data.
Part of the book: Fault Detection, Diagnosis and Prognosis
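The GLR mean chart mentioned above can be sketched for the simplest case: detecting a shift in the mean of Gaussian data with known variance. At each time, the statistic maximizes the log-likelihood ratio over candidate change points, with the MLE of the post-change mean being the sample mean of the candidate segment. This is a generic textbook GLR change-detection statistic, not the chapter's TEP implementation; the window size and threshold would be tuned for the desired false alarm rate.

```python
import numpy as np

def glr_mean_chart(x, mu0, sigma, window=50):
    """GLR statistic for a shift in the mean of N(mu0, sigma^2) data.
    At each time t, maximize the log-likelihood ratio over candidate
    change points k in a sliding window; the segment sample mean is
    the MLE of the shifted mean, giving the closed form below."""
    x = np.asarray(x, float)
    g = np.zeros(x.size)
    for t in range(x.size):
        k0 = max(0, t - window + 1)
        best = 0.0
        for k in range(k0, t + 1):
            seg = x[k:t + 1] - mu0               # candidate post-change segment
            m = seg.size
            best = max(best, seg.sum() ** 2 / (2.0 * sigma ** 2 * m))
        g[t] = best
    return g
```

A fault is declared when `g[t]` crosses a threshold chosen for a fixed false alarm rate; monitoring the variance with a second, separate GLR chart follows the same pattern with the variance MLE in place of the mean.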