Open access peer-reviewed chapter

Fault Detection by Signal Reconstruction in Nuclear Power Plants

Written By

Ibrahim Ahmed, Enrico Zio and Gyunyoung Heo

Submitted: 15 October 2021 Reviewed: 18 October 2021 Published: 14 December 2021

DOI: 10.5772/intechopen.101276

From the Edited Volume

Nuclear Reactors - Spacecraft Propulsion, Research Reactors, and Reactor Analysis Topics

Edited by Chad L. Pope


Abstract

In this work, the recently developed auto associative bilateral kernel regression (AABKR) method for on-line condition monitoring of systems, structures, and components (SSCs) during transient process operation of a nuclear power plant (NPP) is improved. The advancement enhances the capability of reconstructing abnormal signals to the values expected in normal conditions during both transient and steady-state process operations. The modification introduced to the method is based on the adoption of two new approaches using dynamic time warping (DTW) for the identification of the time position index (the position of the nearest vector within the historical data vectors to the current on-line query measurement) used by the weighted-distance algorithm that captures temporal dependences in the data. Applications are provided to a steady-state numerical process and a case study concerning sensor signals collected from a reactor coolant system (RCS) during start-up operation of an NPP. The results demonstrate the effectiveness of the proposed method for fault detection during steady-state and transient operations.

Keywords

  • auto associative kernel regression
  • auto associative bilateral kernel regression
  • condition monitoring
  • dynamic time warping
  • signal reconstruction
  • fault detection
  • nuclear power plant

1. Introduction

In a nuclear power plant (NPP), accurate situation awareness of key systems, structures, and components (SSCs) is important for safety, reliability, and economics, which are key drivers for operation. However, faults and failures can occur in sensors and equipment, which can lead to unexpected shutdown of the power reactors. Such situations may compromise the safety and reliability of the SSCs and result in risk and economic losses that may amount to hundreds of thousands of Euros [1]. For example, in the United States of America, the economic loss resulting from shutting down an NPP is approximately $1.25 million per day [2]. Thus, if unnecessary shutdowns of the system as a result of faults and failures can be prevented, the economic loss due to shutdown can be minimized. Therefore, it is of paramount importance to improve the situation awareness of SSCs in NPPs in order to ensure that their faults and failures are detected early, which can be achieved through on-line signal analysis techniques [3, 4, 5, 6] and fault detection and diagnosis (FDD) methods [7, 8, 9, 10, 11, 12, 13]. There are several techniques for signal analysis, fault detection, and fault diagnostics, which can be classified into two main categories: model-based [14, 15, 16, 17] and data-driven [18, 19, 20, 21, 22] methods. The model-based approaches require an understanding of the target system's physical structure in order to develop a mathematical model of the system response for the purpose of FDD. Data-driven techniques, instead, use the historical data measured by the installed sensors and collected over time during system operation to develop an empirical model. In both cases, the developed model can then be applied to the target SSC for on-line signal analysis, monitoring, and FDD during operation, from which the condition of the SSC can be retrieved and sent to the human operator/maintenance engineer as an alert or alarm in case any fault or failure has occurred in the SSC.
Based on the status of the SSC, the necessary operator action or maintenance intervention can be performed on the SSC to avoid undesired conditions during operation. Adopting these methods in NPPs comes with several benefits, including [23]:

  • Provide system engineers and maintenance staff with necessary information to make informed, cost-effective operations, and maintenance decisions based on the actual condition of the system/equipment.

  • Allow early mitigation or corrective actions.

  • Reduce the likelihood of unplanned plant trips or power reductions.

  • Reduce challenges to safety systems.

  • Reduce equipment damage.

  • Facilitate the implementation of condition-based predictive maintenance (PdM and CBM) practices.

  • Provide significant financial savings, especially if outage duration is reduced.

The recent advancements in data analysis and computational efficiency are motivating the nuclear and other industries to apply CBM for allowing early mitigation, minimizing unplanned shutdowns, increasing safety, and reducing maintenance costs. A simple CBM strategy is a scheme that monitors the target component via a fault detection system that continuously collects data from sensors installed on the target component [24], makes a detection decision based on the collected information, provides operators with the condition of the component (normal or abnormal), and triggers an alarm in case of abnormal conditions, which alerts the decision makers, for example, stakeholders, operators, and maintenance engineers, to decide whether or not a maintenance intervention is required on the component.

Figure 1 illustrates the architecture of the fault detection system considered in this work, which is based on an empirical model for signal reconstruction. Typically, as shown in Figure 1, a fault detection system is a decision tool based on (i) the model that reconstructs the values of on-line signals expected in normal conditions; and (ii) the residual calculator that analyses the differences between the measured on-line signal values and the reconstructed values, whereby an alarm is triggered if the residuals statistically deviate from the allowable range representative of normal conditions.

Figure 1.

A typical fault detection system.

Several empirical models have been developed and used for signal reconstruction. Such techniques include kernel regression (KR), a special and simple form of Gaussian process regression (GPR), which has been adapted for signal reconstruction as auto-associative kernel regression (AAKR) [20] in the nuclear industry; auto-associative artificial neural networks (AANNs) [25, 26, 27]; principal component analysis (PCA) [28, 29, 30, 31, 32, 33]; multivariate state estimation technique (MSET) [34, 35]; Parzen estimation [36]; support vector machines [37, 38]; the evolving clustering method [39]; partial least squares (PLS) [40]; and fuzzy logic systems [41, 42, 43, 44, 45]. However, most data-driven models are developed under steady-state plant operation, whereas it is fundamental to have signal validation and monitoring during transient operation as well, considering that most industrial systems' operations are time-varying. Transient operations are any non-steady-state, time-varying conditions, which include the start-up, shutdown, and load-following modes of the system, whose time-series data are characterized by an explicit order dependency between observations, i.e., a time dimension.

In general, model-based approaches provide valid FDD techniques and are a powerful way to investigate FDD issues in highly dynamic and time-varying systems. However, the high performance of model-based FDD is often achieved at the cost of highly complex process modeling that requires sophisticated system design procedures [46]. Consequently, there is the need for low-complexity data-driven algorithms that could be used for time-varying analysis of the transient operation of the process system. In this respect, AAKR has proven superior to PCA [47] and is less computationally demanding than AANN. AAKR is typically trained to reconstruct the output of its own input under normal conditions. It has been successfully used in actual NPP steady-state operations for instrument channel calibration and condition monitoring [48]. It is a nonparametric technique for estimating a regression function. Unlike parametric models, AAKR relies on the data to determine the model structure.

However, some drawbacks of AAKR, such as spillover effects and robustness issues, can lead to missed alarms or delays in fault detection, and to a difficulty in correctly identifying the sensor variable responsible for a fault that is detected [49]. In order to address these drawbacks, a robust distance measure has been proposed, based on removing the largest elemental difference that contributes to the Euclidean distance metric so as to enable the model to correctly predict sensor values [50]. In [51], a modified AAKR has been proposed, based on a similarity measure between the observational data and historical data, with a pre-processing step that projects both the observed and historical data into a new space defined by a penalty vector.

Although those modifications have improved the AAKR performance, the underlying structure of the AAKR is still based on the traditional unilateral kernel regression and lacks temporal information, which makes its application inappropriate for signal analysis during transient operation, because only the current query vector affects the model. Any previous information leading to the current query signals vector is completely ignored. Although this procedure is acceptable and even preferable for many applications, it is not acceptable for transient operations, in which the previous information directly affects the next data point [52, 53].

Recently, a weighted-distance based auto associative bilateral kernel regression (AABKR) for on-line monitoring during process transient operations has been proposed [54] and successfully applied to start-up transient data from an NPP [54, 55]. The AABKR captures both the spatial and temporal information in the data. The time dimension of these kinds of time-series data is, in fact, a structure that provides additional information. The AABKR systematically distributes the weights along the time dimension, using a weighted-distance algorithm that captures temporal dependences in the data [54, 55]. The weighted-distance algorithm uses a derivative-based comparator for the identification of a ‘time position index’ (the position of the nearest vector, within the historical data vectors, to the current on-line query observation) [54], which directly eliminates the use of on-line time input to the model.

However, when applied to steady-state data, the performance of the AABKR in terms of correct fault diagnosis (i.e., the identification of the sensor variable responsible for the fault) is not satisfactory, as the fault, in most cases, is detected in both faulty and fault-free sensor signals [54, 55]. After thorough examination, it has been observed that [54]: (1) the AABKR suffers significantly from the spillover effect; (2) this effect results from the wrong identification of the 'time position index' by derivatives; and (3) the derivative values approximated from a typical steady-state process are nearly constant (and close to zero) for most data points, particularly when the process change in time is almost negligible, resulting in wrong identification of the 'time position index'.

It is worth noting that a correct identification of the time position index is crucial for the temporal weighted-distance algorithm that captures the temporal correlation in the data. The consequence of this effect is that, if a fault occurs, it might indeed be detected but with an incorrect diagnosis of the variable responsible for that fault.

Motivated by these observations, we here propose a modified AABKR for efficient on-line monitoring, applicable not only in transient process operations but also in steady-state operations. We develop new algorithms, based on dynamic time warping (DTW), for the identification of temporal dependencies in the data [55]. To evaluate the performance of the proposed methods, we use both synthetic data from a numerical process and real-time data collected from a pressurized water reactor power plant.


2. Problem formulation

We consider a fault detection system designed to monitor the condition of a plant, as shown in Figure 1, and a sequence of time-varying observations ordered in time (a time series). Time is the independent variable, and we assume it to be discrete; thus, time-varying data are a sequence of pairs $(x_1, t_1), (x_2, t_2), \dots, (x_M, t_M)$ with $t_1 < t_2 < \dots < t_M$, where each $x_i$ is a data point in the feature space and $t_i$ is the time at which $x_i$ is observed. The data for more than one signal are sequences of time-varying data points, so long as their sampling rates $t_i - t_{i-1} = \Delta t = \eta$ are the same.

With this definition, we assume that:

  1. The sequences of data within a historical time-varying dataset, taken by the sensors in healthy condition, are measured and available as a memory data matrix, $X \in \mathbb{R}^{M \times p}$, whose elements, $x_{i,j}$, are functions of the scalar parameter time, $t$, where $X$ is a $p$-dimensional matrix of signals with $M$ observation sequence vectors, and $x_{i,j}$ represents the $i$th observation of the $j$th signal.

  2. The sequences of data in X are large enough to be representative of the plant’s normal operating condition.

  3. With $r$ defined as a window length, the sequences of the real-time on-line measurement vectors, $X_q \in \mathbb{R}^{r \times p}$, can be collected, with $x_q^r = (x_{r,1}, x_{r,2}, \dots, x_{r,p})$ being the last vector, i.e., the current measurement vector at the present time, $t$.

  4. The sequence of the real-time observational matrix, $X_q$, can be updated from the current measurement, $x_q^r$, backward to the size of the moving window, $r$, whenever a new on-line query vector becomes available.

On the basis of the above descriptions and by using $X$, the objective of the present work is to develop a signal reconstruction model reproducing the plant behavior in normal conditions. Such a model receives in input the observed sequence of the real-time measurement matrix, $X_q$, whose $r$th vector is the present measurement, $x_q^r$, containing the actual observations of the $p$ signals monitored at the present time, $t$, and produces in output $\hat{x}_q^r$, the reconstructed values of the signals expected in normal conditions. Based on this, the actual plant condition at the present time, $t$, can then be determined by the analysis of residuals: a fault is detected if the variations between the observations and the reconstructions are large enough in at least one of the signals in comparison to predefined thresholds.
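
As a minimal sketch of assumptions 3 and 4, the moving on-line query window can be kept as a fixed-length buffer whose newest entry is the current measurement vector; the class name `QueryWindow` and the use of NumPy are illustrative choices, not part of the source method.

```python
import numpy as np
from collections import deque

class QueryWindow:
    """Moving on-line query window X_q of length r (assumptions 3-4).

    Pushing a new measurement vector drops the oldest one once the
    window is full, so the last row is always the current vector x_q^r.
    """
    def __init__(self, r):
        self.buf = deque(maxlen=r)

    def push(self, x):
        self.buf.append(np.asarray(x, dtype=float))

    def matrix(self):
        # X_q in R^{r x p}; the last row is the current measurement vector
        return np.vstack(self.buf)
```

Each new on-line sample is pushed, and `matrix()` yields the updated $X_q$ for the reconstruction model.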


3. Auto associative bilateral kernel regression

3.1 Mathematical framework of AABKR

In AABKR, the validation of the signals at the present time, $t$, is based on the analysis of the on-line query pattern, $X_q$, to reconstruct the current vector, $x_q^r$, at time $t$. The basic idea behind AABKR is to capture both the spatial and temporal correlations in the time-series data (see Figure 2), for effective signal reconstruction in the transient operation of industrial systems. The historical memory matrix $X$ is reorganized into $A \in \mathbb{R}^{N \times r \times p}$, an array of $N$ matrices of length $r$ containing the measurement vectors, with $r-1$ overlapping rows between two consecutive time windows; the $r$ sequence vectors in each matrix of the array $A$ are represented as $A_r^k \in \mathbb{R}^{r \times p}$, where $k = 1, 2, \dots, N$ and $N = M - r + 1$.

Figure 2.

Graphical representation of the bilateral directions for a time-series.

The AABKR is expressed in such a way that each neighboring value is weighted according to its proximity in space and time. Hence, the mathematical framework of the AABKR is summarized as follows [54, 55]:

1) Feature distance calculation

The feature distance between $x_q^r$ and each of the historical memory vectors in matrix $X$ is computed using the Manhattan distance (L1 norm) as

$$d_i(X_i, x_q^r) = \| X_i - x_q^r \|_1 = \sum_{j=1}^{p} \left| x_{i,j} - x_{r,j} \right| \tag{1}$$

and produces the distance vector, $d \in \mathbb{R}^{M \times 1}$.

2) Feature kernel quantification

The feature distances calculated above are used to determine the feature weights by evaluating the Gaussian kernel

$$k_i^f = \exp\left( -\frac{d_i^2}{2 h_f^2} \right) \tag{2}$$

where $h_f$ is a kernel bandwidth for feature preservation, which controls how much the nearby memory feature vectors are weighted. This leads to the vector $k^f \in \mathbb{R}^{M \times 1}$. The superscript/subscript $f$ indicates the feature components.
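
Steps 1 and 2 can be sketched as follows; the function name `feature_kernel` and the NumPy formulation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def feature_kernel(X, x_q, h_f):
    """Feature weights of a query vector against the memory data (Eqs. (1)-(2)).

    X   : (M, p) memory data matrix of normal-condition observations
    x_q : (p,)   current query measurement vector x_q^r
    h_f : feature kernel bandwidth
    """
    # Step 1: Manhattan (L1) distance of the query to every memory vector
    d = np.sum(np.abs(X - x_q), axis=1)        # d in R^M
    # Step 2: Gaussian kernel converts distances to feature weights
    k_f = np.exp(-d**2 / (2.0 * h_f**2))       # k^f in R^M
    return d, k_f
```

Memory vectors close to the query in feature space receive weights near one; distant ones decay toward zero at a rate set by $h_f$.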

3) Time position index identification

Here, the time position index, that is, the temporal location of the nearest vector, within the memory vector, to the query vector observation, is determined using a derivative-based comparator. This provides the input to the weighted-distance algorithm in Step 4. Instead of directly using the derivative in the prediction to capture the temporal correlation of the data, which might not be a good choice because of process measurement noise, the derivatives are used as a comparator to determine the time position index within the memory vectors to which the query data vector is nearest. The derivative-based comparator is described as follows.

The backward-difference first-order derivative approximation of the current historical measurement vector in each matrix $A_r^k$, based on $r$ data points of accuracy with respect to $t$, is the element-by-element derivative:

$$\frac{\partial A_r^k}{\partial t} = \left( \frac{\partial x_{r,1}^k}{\partial t}, \frac{\partial x_{r,2}^k}{\partial t}, \dots, \frac{\partial x_{r,p}^k}{\partial t} \right) \tag{3}$$

whereas that of the current query vector, $x_q^r$, in matrix $X_q$ is the element-by-element derivative:

$$\frac{\partial x_q^r}{\partial t} = \left( \frac{\partial x_{r,1}}{\partial t}, \frac{\partial x_{r,2}}{\partial t}, \dots, \frac{\partial x_{r,p}}{\partial t} \right). \tag{4}$$

The first-order derivatives in Eqs. (3) and (4) have been approximated from the data using finite-difference derivative approximation. The backward finite difference derivative approximation is chosen to implement real-time on-line monitoring. The model needs r successive data points to evaluate the derivative of the current data point from the current measurement backward to the size of the moving window r at every sampling time, using backward finite-difference derivative approximation.

From Eqs. (3) and (4), the distance between the derivative of the query vector, $x_q^r$, and each $k$th derivative vector of $A_r^k$ can be calculated by Eq. (5) using the Manhattan distance (L1 norm):

$$\Delta_k \left( \frac{\partial A_r^k}{\partial t}, \frac{\partial x_q^r}{\partial t} \right) = \left\| \frac{\partial A_r^k}{\partial t} - \frac{\partial x_q^r}{\partial t} \right\|_1 = \sum_{j=1}^{p} \left| \frac{\partial x_{r,j}^k}{\partial t} - \frac{\partial x_{r,j}}{\partial t} \right| \tag{5}$$

This gives the derivative distance vector, $\Delta \in \mathbb{R}^{N \times 1}$.

Then, using the minimum value in $\Delta$, the index $i = \varepsilon$, which indicates the location of the vector in the memory data, $X$, to which the current query vector, $x_q^r$, is closest, can be obtained. The time position index is, therefore, the index at which the Manhattan distance between the derivative of the current query vector and those of the $r$th vectors in each of the $A_r^k$ is minimized, plus the overlapping length between the two consecutive time windows, and is determined as:

$$\varepsilon = \underset{k \in 1:N}{\arg\min} \left( \Delta_k \right) + r - 1 \tag{6}$$
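
A possible sketch of the derivative-based comparator of Eqs. (3)-(6); for simplicity it uses a first-order backward difference instead of the $r$-point backward approximation of the text, and the function name is an assumption.

```python
import numpy as np

def time_position_index_derivative(X, X_q, eta=1.0):
    """Derivative-based time position index (Eqs. (3)-(6)), 0-based output.

    X   : (M, p) memory data
    X_q : (r, p) on-line query window; last row is the current measurement
    eta : sampling interval (requires r >= 2)
    """
    r, M = X_q.shape[0], X.shape[0]
    N = M - r + 1
    # Backward-difference derivative of the current query vector (Eq. (4))
    dq = (X_q[-1] - X_q[-2]) / eta
    deltas = np.empty(N)
    for k in range(N):
        # Backward-difference derivative of the last vector of window A_r^k (Eq. (3))
        dk = (X[k + r - 1] - X[k + r - 2]) / eta
        deltas[k] = np.sum(np.abs(dk - dq))   # Eq. (5), L1 norm
    # Eq. (6): best-matching window index shifted by the overlap r - 1
    return int(np.argmin(deltas)) + r - 1
```

For a query window whose slope matches one region of the memory data, the returned index points at the last row of the best-matching window.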

4) Temporal weighted-distance algorithm

The temporal weighted-distance algorithm captures the temporal correlations and variations in the data. The distance, $\delta$, accounts for the time at which the query vector is observed. This algorithm calculates the temporal correlation of a query input with the memory data without using the query time input, $t_q$, whose direct use becomes indefinite when applied to on-line monitoring. In this way, the effect of the query time input is confined within the time duration of the historical memory data. The distance is calculated under the assumption that the time-varying historical data collected to build the model were sampled at a constant time interval, $\eta$.

Based on the time position index determined in Step 3, the temporal weighted-distance algorithm that captures the temporal correlation is formulated as

$$\delta_i = \begin{cases} \delta_\varepsilon, & i = \varepsilon \\ \delta_\varepsilon + (i - \varepsilon)\,\eta, & i > \varepsilon \\ \delta_\varepsilon + (\varepsilon - i)\,\eta, & i < \varepsilon \end{cases} \qquad i \in 1:M \tag{7}$$

giving the weighted-distance vector δRM×1:

$$\delta = \left[ \delta_1, \dots, \delta_{\varepsilon-2}, \delta_{\varepsilon-1}, \delta_\varepsilon, \delta_{\varepsilon+1}, \delta_{\varepsilon+2}, \dots, \delta_M \right]^T \tag{8}$$

Once the values of $\delta_\varepsilon$ and $\eta$ are known, the other values in Eq. (8) can be determined progressively using Eq. (7). The second and third equations in Eq. (7) follow arithmetic progressions (APs): the first term and the common difference of the two progressions are $\delta_\varepsilon$ and $\eta$, respectively. A zero value for the first term of the two progressions, $\delta_\varepsilon = 0$, has been recommended [55], because the distance of the nearest vector in the memory data to the query vector is close to zero, whereas the value of the common difference can be arbitrarily selected or taken to be the time interval, $\eta$. The other distance values to the right and left of $\delta_\varepsilon$ in Eq. (8) can be progressively calculated using the second and third equations in Eq. (7), respectively. See Appendix C of [55] for the proof of this algorithm (Eq. (7)).
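
With $\delta_\varepsilon = 0$, Eq. (7) reduces to arithmetic progressions growing away from the time position index in both directions; a sketch with 0-based indices and illustrative names:

```python
import numpy as np

def temporal_weighted_distance(M, eps, eta=1.0, delta_eps=0.0):
    """Temporal weighted-distance vector delta of Eqs. (7)-(8).

    M         : number of memory observations
    eps       : time position index (0-based here)
    eta       : common difference (sampling interval)
    delta_eps : distance at the identified index (zero is recommended)
    """
    i = np.arange(M)
    # Arithmetic progressions away from eps in both directions, Eq. (7)
    return delta_eps + np.abs(i - eps) * eta
```

The resulting vector is smallest at the identified index and grows linearly toward both ends of the memory data, as in Eq. (8).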

5) Temporal kernel quantification

Having determined the weighted-distance, the kernel weight can be calculated using the Gaussian kernel function:

$$k_i^t = \exp\left( -\frac{\delta_i^2}{2 h_t^2} \right) \tag{9}$$

where $k_i^t$ is the $i$th kernel weight calculated from the temporal weighted-distance, $\delta_i$; the superscript/subscript $t$ indicates the temporal components; and $h_t$ is the bandwidth for time-domain preservation, which can also serve as noise rejection and controls how much the nearby times in the memory vectors are weighted. This gives the vector $k^t \in \mathbb{R}^{M \times 1}$.

6) Adaptive bilateral kernel evaluation

Depending on the magnitude of a fault that occurs in a process, the result of the direct multiplication of the two kernels at $i = \varepsilon$ could be zero, which would result in an inaccurate model prediction, because the model prediction would tend to follow the fault occurrence and the fault would not be detected. To resolve this issue, achieve robust model signal reconstruction, and reduce the impact of spillover onto other signals when one or more signals are in a fault condition, Eq. (10) is formulated [54] adaptively for the combined kernels of Eqs. (2) and (9) as:

$$k_i^{ab} = \begin{cases} k_i^f \cdot k_i^t, & 1 \le i \le M \text{ and } i \ne \varepsilon \\[4pt] \dfrac{k_i^f + k_i^t}{2}, & i = \varepsilon \end{cases} \tag{10}$$

resulting in the adaptive bilateral kernel vector $k^{ab} \in \mathbb{R}^{M \times 1}$.

This reduces the effect of the dominance of one feature distance value over another when a fault occurs. The adaptive nature of Eq. (10) is to dynamically compensate for faulty sensor inputs to the bilateral kernel evaluation and always ensure that a larger weight is assigned to the closest vector within the memory data to the query vector, so as to guarantee an approximate signal reconstruction. This reduces the effect of the degeneration of the feature kernel when a fault of high magnitude has occurred.

7) Output estimation

Finally, the adaptive bilateral kernel weights are combined with the memory data vectors to give the predictions as:

$$\hat{x}_{r,j} = \frac{\sum_{i=r}^{M} k_i^{ab} \cdot x_{i,j}}{\sum_{i=r}^{M} k_i^{ab}} \tag{11}$$

If a normalized adaptive bilateral kernel vector, $w \in \mathbb{R}^{(M-r+1) \times 1}$, is defined:

$$w_i = \frac{k_i^{ab}}{\sum_{i=r}^{M} k_i^{ab}}, \tag{12}$$

then, Eq. (11) can be rewritten in matrix form to predict all the signals of the query vector simultaneously as:

$$\hat{x}_q^r = w^T X \tag{13}$$

where $X \in \mathbb{R}^{(M-r+1) \times p}$.
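
Equations (11)-(13) amount to a normalized kernel-weighted average of the memory vectors; a sketch under the assumption of a 1-based window length $r$ as in the text (function name illustrative):

```python
import numpy as np

def reconstruct_query(X, k_ab, r):
    """Output estimation of Eqs. (11)-(13).

    X    : (M, p) memory data matrix
    k_ab : (M,)  adaptive bilateral kernel weights (Eq. (10))
    r    : window length (1-based, as in the text; rows r..M are used)
    """
    k = k_ab[r - 1:]              # weights paired with rows r..M (0-based r-1)
    w = k / np.sum(k)             # Eq. (12): normalized kernel vector
    return w @ X[r - 1:]          # Eq. (13): (p,) reconstruction x_hat_q^r
```

All $p$ signals of the query vector are reconstructed simultaneously by the single matrix product of Eq. (13).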

8) Fault detection

After training of the model, the root mean square error (RMSE) on the predictions of the fault-free validation dataset can be calculated from the residuals, $e_q^r = x_q^r - \hat{x}_q^r$, between the actual and predicted values of the validation dataset, and can be used to set the threshold limit for fault detection in each signal as follows:

$$T_j^D = 3 \cdot \mathrm{RMSE}_j. \tag{14}$$

Because the residuals can be assumed to be Gaussian and randomly distributed with a mean of zero and a variance of $\mathrm{RMSE}_j^2$, a constant value equal to 3 was selected in [54] to minimize the false alarm rate and ensure that a fault is detected when the residuals exceed the threshold.
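
The residual test of Eq. (14) can be sketched as follows (illustrative names); a signal is flagged when the absolute residual exceeds its per-signal threshold.

```python
import numpy as np

def detect_faults(x_q, x_hat, rmse, c=3.0):
    """Residual-based fault detection of Eq. (14).

    x_q   : (p,) measured query vector
    x_hat : (p,) reconstructed normal-condition values
    rmse  : (p,) per-signal RMSE from the fault-free validation dataset
    c     : threshold multiplier (3 in the text)
    Returns (flags, residuals): flags[j] is True if signal j violates T_j^D.
    """
    residuals = x_q - x_hat
    thresholds = c * rmse                 # T_j^D = 3 * RMSE_j
    return np.abs(residuals) > thresholds, residuals
```

In on-line use, this test runs at every sampling time on the current query vector and its reconstruction.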

3.2 Analysis of the limitation of the AABKR

In this section, we analyze the limitation of the AABKR described in Section 3.1, in terms of signal reconstruction from faulty sensor signals. The major limitation can be understood from the description presented as follows.

We observed that, in an extreme (worst-case) scenario, where the fault deviation intensity in a faulty sensor signal is significant, the feature kernel vector degenerates and tends to zero (i.e., $k^f \to 0$), so that the signal reconstructed by the AABKR model is bound to be:

$$\hat{x}_{q,j} = x_{\varepsilon,j} \tag{15}$$

This observation can be understood better by the following analysis.

Recall the reconstructed output from a weighted average of Eq. (11), re-written as:

$$\hat{x}_{q,j} = \frac{\sum_{i=1}^{M} \left( k_i^f \otimes k_i^t \right) x_{i,j}}{\sum_{i=1}^{M} \left( k_i^f \otimes k_i^t \right)} \tag{16}$$

where $k_i^f \otimes k_i^t = k_i^{ab}$ is the adaptive bilateral kernel evaluated at $x_i$. The symbol $\otimes$ represents the bilateral kernel combination operator that combines the feature and temporal kernels, as given by Eq. (10).

Applying the properties of limits to Eq. (16), we have:

$$\lim_{k^f \to 0} \hat{x}_{q,j} = \lim_{k^f \to 0} \frac{\sum_{i=1}^{M} \left( k_i^f \otimes k_i^t \right) x_{i,j}}{\sum_{i=1}^{M} \left( k_i^f \otimes k_i^t \right)} \tag{17}$$

Equation (17) can be re-written as:

$$\lim_{k^f \to 0} \hat{x}_{q,j} = \frac{\lim_{k^f \to 0} \sum_{i=1}^{M} \left( k_i^f \otimes k_i^t \right) x_{i,j}}{\lim_{k^f \to 0} \sum_{i=1}^{M} \left( k_i^f \otimes k_i^t \right)} \tag{18}$$

provided that:

$$\lim_{k^f \to 0} \sum_{i=1}^{M} \left( k_i^f \otimes k_i^t \right) \ne 0 \tag{19}$$

Note that, judging from Eq. (10), Eqs. (18) and (19) hold valid. Hence, the limit in Eq. (18) can be simplified as:

$$\lim_{k^f \to 0} \hat{x}_{q,j} = \frac{\lim_{k_1^f \to 0} \left( k_1^f \otimes k_1^t \right) x_{1,j} + \lim_{k_2^f \to 0} \left( k_2^f \otimes k_2^t \right) x_{2,j} + \dots + \lim_{k_M^f \to 0} \left( k_M^f \otimes k_M^t \right) x_{M,j}}{\lim_{k_1^f \to 0} \left( k_1^f \otimes k_1^t \right) + \lim_{k_2^f \to 0} \left( k_2^f \otimes k_2^t \right) + \dots + \lim_{k_M^f \to 0} \left( k_M^f \otimes k_M^t \right)} \tag{20}$$

But, from the adaptive bilateral combination of Eq. (10):

$$\lim_{k_i^f \to 0} \left( k_i^f \otimes k_i^t \right) = 0, \qquad i \ne \varepsilon,\ 1 \le i \le M \tag{21}$$

and

$$\lim_{k_i^f \to 0} \left( k_i^f \otimes k_i^t \right) = 0.5, \qquad i = \varepsilon. \tag{22}$$

Hence Eq. (20) becomes

$$\lim_{k^f \to 0} \hat{x}_{q,j} = \frac{\lim_{k_i^f \to 0,\, i = \varepsilon} \left( k_i^f \otimes k_i^t \right) x_{i,j}}{\lim_{k_i^f \to 0,\, i = \varepsilon} \left( k_i^f \otimes k_i^t \right)} = \frac{0.5\, x_{\varepsilon,j}}{0.5} \tag{23}$$

Thus,

$$\lim_{k^f \to 0} \hat{x}_{q,j} = x_{\varepsilon,j} \tag{24}$$

This implies that, in the limit case of a faulty sensor query signal, $x_{q,j}$, the reconstructed query signal, $\hat{x}_{q,j}$, is equal to the historical (memory) data point, $x_{\varepsilon,j}$, located at the identified time position index, $\varepsilon$.

From the above analysis, it is clear that the fault detection capability of the AABKR largely depends on the accuracy of the time position index identification algorithm. If the time position index is identified correctly, the fault can be detected even if the fault deviation intensity is large and tends to infinity. However, if the time position index is identified wrongly, the fault might or might not be detected, depending on the identified index and the intensity of the fault. The consequence, even when the fault is detected, is that the sensor signal responsible for it might not be diagnosed correctly, which might lead to a wrong diagnosis of the system under consideration. In this regard, a more robust approach for time position index identification is proposed and discussed in the next section.


4. Modified AABKR

In this section, the modified AABKR is presented. The framework of the proposed method is depicted in Figure 3. It comprises the steps of calculating the feature distance that captures the spatial variation in the data; calculating the feature kernel weights based on the calculated feature distance; identifying the time position index using the DTW technique; computing the temporal weighted-distance that captures the temporal variation and dependencies in the data, based on the time position index; calculating the temporal kernel weights, based on the calculated weighted-distance; evaluating the adaptive bilateral kernel that computes the combined kernels and dynamically compensates for faulty sensor inputs to the bilateral kernel evaluation; and, finally, making the prediction using a weighted average for the purpose of fault detection. Only the modifications to the original AABKR are discussed in this section. The basics of the DTW technique are first presented in Section 4.1. Then, the developed methods based on DTW are discussed in Section 4.2. Finally, a demonstration of the developed identification methods is showcased in Section 4.3.

Figure 3.

Framework of the modified AABKR for fault detection system.

4.1 Dynamic time warping

Dynamic time warping (DTW) is a technique for finding an optimal alignment between two time-dependent sequences. This technique uses a dynamic programming approach to align the time-series data [56]. Suppose we have two time-series sequences of values taken from the feature space, $x = (x_1, x_2, \dots, x_L)$ and $y = (y_1, y_2, \dots, y_M)$, of length $L$ and $M$, respectively. To align these sequences using DTW, an $L \times M$ matrix, $D$, is first established, where the element $d_{l,m}$ of $D$ is a local distance measured between the points $x_l$ and $y_m$, usually called the cost function, since the DTW technique is based on the dynamic programming algorithm. Thus, the task of optimally aligning these sequences is the arrangement of all sequence points such that the cost function is minimized. Once the local cost matrix has been built, the algorithm finds the alignment path that runs through the low-cost area (the 'valley') of the matrix. The alignment path, usually called the warping path, is a sequence of points, $w = (w_1, w_2, \dots, w_P)$, that defines the correspondence of an element $x_l$ to $y_m$, with $w_p = (l_p, m_p) \in [1:L] \times [1:M]$ for $p \in [1:P]$, satisfying the following three criteria [57, 58]:

  1. Boundary condition: $w_1 = (1, 1)$ and $w_P = (L, M)$.

  2. Monotonicity condition: $l_1 \le l_2 \le \dots \le l_P$ and $m_1 \le m_2 \le \dots \le m_P$.

  3. Step size condition: $w_{p+1} - w_p \in \{(1, 0), (0, 1), (1, 1)\}$ for $p \in [1:P-1]$.

The total cost of a warping path $w$ between $x$ and $y$, with respect to the local cost matrix (which represents all pairwise distances), is:

$$d_w(x, y) = \sum_{p=1}^{P} d(w_p) = \sum_{p=1}^{P} d\left( x_{l_p}, y_{m_p} \right) \tag{25}$$

Moreover, an optimal warping path between $x$ and $y$ is a warping path, $w^*$, that has minimal total cost among all possible warping paths. The DTW distance, $\mathrm{DTW}(x, y)$, between $x$ and $y$ is then defined as the total cost of $w^*$:

$$\mathrm{DTW}(x, y) = d_{w^*}(x, y) = \min \left\{ d_w(x, y) \mid w \in W^{L \times M} \right\} \tag{26}$$

where $W^{L \times M}$ is the set of all possible warping paths.

To determine the optimal warping path, one could test every possible warping path between $x$ and $y$, which would, however, be very computationally intensive. Therefore, the optimal warping path can instead be found using dynamic programming, by building an accumulated cost matrix, called the global cost matrix, $G$, defined by the following recursion [57]:

First row:

$$G(1, m) = \sum_{k=1}^{m} d(x_1, y_k), \qquad m \in [1, M] \tag{27}$$

First column:

$$G(l, 1) = \sum_{k=1}^{l} d(x_k, y_1), \qquad l \in [1, L] \tag{28}$$

All other elements:

$$G(l, m) = d(x_l, y_m) + \min \left\{ G(l-1, m-1),\ G(l-1, m),\ G(l, m-1) \right\}, \qquad l \in [1, L],\ m \in [1, M] \tag{29}$$

This means that the accumulated global distance is the sum of the distance between the current elements and the minimum of the accumulated global distances of the neighboring elements. The required time for building matrix $G$ is $O(LM)$.

Having determined the accumulated cost matrix G, obviously, the DTW distance between x and y is simply given by Eq. (30):

$$\mathrm{DTW}(x, y) = G(L, M). \tag{30}$$

Once the accumulated cost matrix has been built, the optimal warping path can be found by backtracking from the point $w_P = (L, M)$ to $w_1 = (1, 1)$ using a greedy strategy.
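
The recursion of Eqs. (27)-(30) can be sketched as a plain dynamic program; the absolute difference is assumed as the local cost, and the function name is illustrative.

```python
import numpy as np

def dtw_distance(x, y):
    """DTW distance via the accumulated (global) cost matrix, Eqs. (27)-(30).

    x, y : 1-D sequences; local cost d(l, m) = |x[l] - y[m]|.
    """
    L, M = len(x), len(y)
    D = np.abs(np.subtract.outer(x, y))    # local cost matrix, (L, M)
    G = np.empty((L, M))
    G[0, :] = np.cumsum(D[0, :])           # Eq. (27): accumulated first row
    G[:, 0] = np.cumsum(D[:, 0])           # Eq. (28): accumulated first column
    for l in range(1, L):
        for m in range(1, M):
            # Eq. (29): current cost plus the cheapest neighboring predecessor
            G[l, m] = D[l, m] + min(G[l-1, m-1], G[l-1, m], G[l, m-1])
    return G[-1, -1]                       # Eq. (30): DTW(x, y) = G(L, M)
```

Two sequences that are time-warped copies of each other (e.g., one with a repeated sample) obtain a DTW distance of zero, while the Euclidean or L1 point-by-point distance would not.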

4.2 DTW-based time position index identification approaches

One of the significant steps that requires modification in the AABKR is the identification of the time position index. The goal is that the identification of the time position index should be based solely on the feature data and must not rely on derivatives, in order to improve the monitoring performance of the AABKR during steady-state operations. To achieve this goal, we developed two approaches, based on the DTW algorithm described in Section 4.1, for the identification of the time position index. The two approaches are described in the following subsections.

4.2.1 First approach: based on the generated subsequences of the memory data

For simplicity of description, we assume $p = 1$ (a single signal) and present the description of the algorithm by referring to Figure 4, where $X \in \mathbb{R}^{M \times p}$ and $X_q \in \mathbb{R}^{r \times p}$ are the memory data and query data, respectively. We also assume that the memory data, $X$, has already been reorganized into an array containing $N$ matrices, each of length $r$, with $r-1$ overlapping rows between them, as described in Section 3.1, where $N = M - r + 1$. Using DTW, the goal is to find a subsequence $A_r^k = (x_a, x_{a+1}, \dots, x_\varepsilon) \in \mathbb{R}^{r \times p}$, with $1 \le a \le \varepsilon \le M$, that minimizes the DTW distance to $X_q$ over all $N$ matrices generated from $X$. Note that $A_r^k$ is the $k$th subsequence in the generated array, where $k = 1, 2, \dots, N$. Thus, the DTW distance between $X_q$ and each $k$th matrix, $A_r^k$, in $A$ can be determined by:

Figure 4.

Alignment between two time-dependent data: Sequences Xq and X.

$$\mathrm{DTW}(X_q, A_r^k) = G(r, r) \tag{31}$$

which can be calculated from the local cost matrix, $D$, using Eqs. (27)-(29) of the DTW algorithm described in Section 4.1. Each element of $D$ is a local distance measured between the points in $X_q$ and $A_r^k$, which can be calculated by:

$$d_{l,m} = d(x_l, x_m) = \| x_l - x_m \|_1 \tag{32}$$

where $l = 1, 2, \dots, r$ is the row index of the query data, $X_q$, and $m = 1, 2, \dots, r$ is the row index of the $k$th subsequence, $A_r^k$, of the memory data.

Then, the time position index can be determined as:

ε = argmin_(k ∈ [1:N]) DTW(X_q, A_r^(k)) + r − 1.    (33)

This index can, then, be used in the weighted-distance algorithm for the calculation of the temporal weighted-distance. This approach is summarized in Algorithm A.2.1 of Appendix A.2.
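The first approach can be sketched in a few lines; this is an illustrative, simplified implementation (function names are our own, p = 1, 1-indexed result as in Eq. (33)), not the authors' exact code:

```python
import numpy as np

def dtw_distance(Xq, A):
    """DTW distance between two equal-length windows (Eq. (31)),
    using the L1 local cost of Eq. (32)."""
    r = len(Xq)
    D = np.abs(Xq[:, None] - A[None, :])          # local cost matrix, r x r
    G = np.empty((r, r))
    G[0, 0] = D[0, 0]
    for l in range(1, r):                          # first column accumulates
        G[l, 0] = G[l - 1, 0] + D[l, 0]
    for m in range(1, r):                          # first row accumulates
        G[0, m] = G[0, m - 1] + D[0, m]
    for l in range(1, r):
        for m in range(1, r):
            G[l, m] = D[l, m] + min(G[l-1, m-1], G[l-1, m], G[l, m-1])
    return G[-1, -1]

def time_position_index_first(Xq, X, r):
    """Eq. (33): 1-indexed position of the current measurement, found as the
    start of the subsequence of X closest (in DTW) to Xq, shifted by r - 1."""
    N = len(X) - r + 1
    dists = [dtw_distance(Xq, X[k:k + r]) for k in range(N)]
    k_star = int(np.argmin(dists))                 # 0-indexed start of best window
    return k_star + r                              # = (k_star + 1) + r - 1
```

With a noise-free version of the demonstration signal of Section 4.3, the query window taken at t = 36–45 s yields ε = 45, as in the chapter.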

4.2.2 Second approach: based on the entire memory data

In this approach, instead of calculating the DTW distances between the query input data and each of the subsequence data generated from the memory data, the mapping between the query input data and the memory data can be determined directly, from which the time position index can be obtained. Thus, the generation of the array from the memory data is not required in this case. This approach is described as follows.

First, calculate the local cost matrix D between the query data X_q and the memory data X. Having calculated D, the computation of G from D using dynamic programming [57] is slightly modified through the following recursion:

First row:

G(1, i) = d(x_1, x_i),  i ∈ [1:M]    (34)

First column:

G(l, 1) = Σ_(s=1)^(l) d(x_s, x_1),  l ∈ [1:r]    (35)

All other elements:

G(l, i) = d(x_l, x_i) + min{G(l−1, i−1), G(l−1, i), G(l, i−1)},  1 < l ≤ r and 1 < i ≤ M    (36)

leading to an accumulated matrix G ∈ R^(r×M).

Finally, the time position index is obtained as follows, using the last row of G:

ε = argmin_(i ∈ [r:M]) G(r, i)    (37)

Note that the calculation of G is the same as discussed earlier, except that its first row is taken equal to the first row of D, without accumulation [57], as shown in Eq. (34). This is because our goal is to determine the time position index using the last row of G while minimizing the impact of a fault (if one occurs) on the determination of the index. The time position index obtained from Eq. (37) can then be used in the weighted-distance algorithm for the calculation of the temporal weighted distance. This approach is summarized in Algorithm A.2.2 of Appendix A.2.
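The second approach can be sketched as follows; again an illustrative implementation (our own naming, p = 1, 1-indexed result), directly transcribing Eqs. (34)–(37):

```python
import numpy as np

def time_position_index_second(Xq, X):
    """Second approach (Eqs. (34)-(37)): accumulated matrix G between the
    query window Xq (length r) and the whole memory X (length M), with the
    first row taken equal to the first row of D (no accumulation), then the
    argmin over the last row restricted to i >= r. Returns a 1-indexed index."""
    r, M = len(Xq), len(X)
    D = np.abs(Xq[:, None] - X[None, :])   # local cost matrix, r x M (Eq. (32))
    G = np.empty((r, M))
    G[0, :] = D[0, :]                      # Eq. (34): first row not accumulated
    for l in range(1, r):
        G[l, 0] = G[l - 1, 0] + D[l, 0]    # Eq. (35): first column accumulates
    for l in range(1, r):
        for i in range(1, M):
            G[l, i] = D[l, i] + min(G[l-1, i-1], G[l-1, i], G[l, i-1])  # Eq. (36)
    return int(np.argmin(G[-1, r - 1:])) + r   # Eq. (37), converted to 1-indexed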

4.3 Demonstration case

To demonstrate the two approaches of time position index identification, we consider a typical univariate time-dependent process:

x(t) = (1/200,000) t^2 + 8 + g(t).    (38)

where g(t) is assumed to be independent, normally distributed Gaussian noise at the present time t, with zero mean and standard deviation 0.08. For simplicity of demonstration, a memory dataset X of 50 samples (t = 1–50 s at constant time intervals of η = 1 s) has been generated from the above process. Setting the window size r = 10, a query input of length r is then generated. The actual location of the query input within the memory data is t = 36–45 s, so the time position index of the current data point is at time t = 45 s. The goal is to automatically locate this index using the proposed methods. The generated memory data is plotted in Figure 5, where the location of the query input is indicated in blue.
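This demonstration setup can be reproduced in a few lines; the seed and the use of NumPy's default generator are assumptions for reproducibility:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed: an assumption for reproducibility

# Memory data from Eq. (38): x(t) = t^2/200,000 + 8 + g(t), t = 1, ..., 50 s
t = np.arange(1, 51)
X = t**2 / 200_000 + 8 + rng.normal(0.0, 0.08, size=t.size)

# Query input of length r = 10, taken at t = 36-45 s (0-indexed rows 35..44)
r = 10
Xq = X[35:45].copy()
```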

Figure 5.

Memory data (in red) and query input (in blue, located at t = 36–45 s).

With respect to the first approach, the global matrices G between the query observations and each of the subsequences of the memory data are first computed and visualized in Figure 6 for the case of fault-free data. Next, a fault is added to the data point at the present time t (t = 45 s) within the query data pattern, and the global matrices are recomputed, as depicted in Figure 7. The DTW distances obtained from the global matrices are presented in Figure 8 for both the fault and no-fault cases. It can be seen that the index to which the present data point is closest has been identified in both the fault-free and faulty cases. From Figure 8 and Eq. (33), the time position index is ε = 36 + 10 − 1 = 45.

Figure 6.

Global matrices, G, between X_q and subsequences of X (fault-free case).

Figure 7.

Global matrices, G, between X_q and subsequences of X (fault case).

Figure 8.

DTW distances between X_q and subsequences of X: (a) fault-free case and (b) faulty case.

With respect to the second approach, the global matrix between the query observation and the memory data is first computed and visualized in Figure 9 for the fault-free case. Next, a fault is added to the data point at the present time t (t = 45 s) within the query data pattern, and the global matrix is recomputed, as shown in Figure 10. Finally, the last row of the global matrix is used to determine the index in both the fault and no-fault cases, using Eq. (37). The location identified in both cases (ε = 45) is marked with a red square in Figures 9 and 10.

Figure 9.

Global matrix, G, between X_q and memory data, X (fault-free case).

Figure 10.

Global matrix, G, between X_q and memory data, X (fault case).

We observe that, as the fault deviation intensity increases, the identification accuracy of the second approach decreases and the time position index may no longer be identified correctly, whereas the first approach still identifies the time position index correctly, but at a higher computational cost. That is, the second approach is less computationally demanding than the first, but also less accurate.

5. Applications

In this section, a typical steady-state numerical example taken from the literature is first used to evaluate the fault detection capability of the proposed signal reconstruction methods in steady-state operation; the methods are then applied to the transient start-up operation of a nuclear power plant.

5.1 Validation on the steady-state process

A typical steady-state numerical example [54], mimicking a typical industrial system, is considered to evaluate the performance of the proposed method in steady-state operation. The model of the process is:

    [x1]   [0.2310  0.0816  0.2662]
    [x2]   [0.3241  0.7055  0.2158]   [t1]
    [x3] = [0.2170  0.3056  0.5207] × [t2] + noise    (39)
    [x4]   [0.4089  0.3442  0.4501]   [t3]
    [x5]   [0.6408  0.3102  0.2372]
    [x6]   [0.4655  0.4330  0.5938]

where t1, t2, and t3 are zero-mean random variables with standard deviations of 1, 0.8, and 0.6, respectively. The noise included in the process is zero-mean, normally distributed, with a standard deviation of 0.2. To build the model, 1000 samples are generated from this process. The number of simulated faults is 2000, with the data samples generated according to the model above and the fault magnitude being a random number uniformly distributed between 0 and 5. The faulty signal variable is also randomly and uniformly sampled among the six possible variables, as simulated in [59].
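The data generation for this numerical study can be sketched as follows; the seed and function name are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)  # seed: an assumption for reproducibility

# Loading matrix of the 6-variable model in Eq. (39)
P = np.array([
    [0.2310, 0.0816, 0.2662],
    [0.3241, 0.7055, 0.2158],
    [0.2170, 0.3056, 0.5207],
    [0.4089, 0.3442, 0.4501],
    [0.6408, 0.3102, 0.2372],
    [0.4655, 0.4330, 0.5938],
])

def simulate(n):
    """Draw n samples x = P t + noise, with t ~ N(0, diag(1, 0.8, 0.6)^2)
    and zero-mean Gaussian noise of standard deviation 0.2."""
    T = rng.normal(0.0, [1.0, 0.8, 0.6], size=(n, 3))
    noise = rng.normal(0.0, 0.2, size=(n, 6))
    return T @ P.T + noise

X_train = simulate(1000)        # memory/training data

# One simulated single-variable fault: magnitude ~ U(0, 5), random variable
x = simulate(1)[0]
j = rng.integers(6)
x_fault = x.copy()
x_fault[j] += rng.uniform(0.0, 5.0)
```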

Although this application represents a typical steady-state operation, we assume a set of sequential time-series data, assigning a constant time interval of η = 1 s, with r = 3. To measure the performance of the proposed methods, we employ several performance metrics (missed alarm rate (MAR), missed and false alarms rate (M&FAR), true and false alarms rate (T&FAR), true alarm rate (TAR), and fault detection rate (FDR)), as proposed in [54] and briefly defined in Appendix A.3. The purpose of this application is to verify the performance of the proposed method in monitoring during steady-state operation and to compare the results with those of AAKR (see Appendix A.1) and AABKR.

Table 1 and Figure 11 show the alarm rates of AAKR, AABKR, and the modified AABKR, computed from the predictions on the simulated faults. It is interesting to note that, although the MAR of AAKR is somewhat higher than that of AABKR, the performance of the two models does not differ significantly, and both suffer from spillover effects (i.e., the effect that a faulty signal has on the predictions of the fault-free signals), as evident from the values of T&FAR (i.e., the detection of faults in both the faulty signal and at least one fault-free signal). Conversely, the modified AABKR performs better than the other two methods in terms of TAR (i.e., the detection of a fault only in the signal that actually has the fault, without false alarms in the other fault-free signals) and T&FAR; hence, the modified AABKR is more resistant to spillover and more robust than both AAKR and AABKR. It can be observed that, even though the TAR value of the modified AABKR is larger than those of the other two models, the FDR values of the three methods do not differ significantly, because of the larger T&FAR values of AAKR and AABKR (53.4% and 61.8%, respectively). Therefore, it is important to further examine the rate of correct fault diagnosis of the three methods, using the absolute residual values of the successfully detected faults that produced the FDR values. Figure 12 shows the rate of correct fault diagnosis of the three methods. We observe that the performance of the modified AABKR is comparable to that of AAKR, and the method can thus also be used effectively for signal validation during steady-state process operation.

Model          | MAR   | M&FAR | T&FAR | TAR   | FDR
AAKR           | 19.20 |  1.60 | 53.40 | 25.80 | 80.80
AABKR          |  5.65 |  8.26 | 61.82 | 24.27 | 94.35
Modified AABKR | 12.46 |  6.81 | 28.18 | 52.55 | 87.54

Table 1.

Alarm rates (%) of validation on the numerical steady-state process.

Figure 11.

Alarm rates in a steady-state numerical process.

Figure 12.

Rate of correct fault diagnosis in a steady-state numerical process.

5.2 Start-up transient operations in nuclear power plants

The real-time nuclear simulator data used in [54] are taken here to test the applicability of the proposed methods in transient operations. The data were collected from the simulator, without any faults, during heating from the cool-down mode (start-up operation). The simulator was designed to reproduce the behavior of a three-loop pressurized water reactor (PWR) and to carry out various operational modes, such as start-up, preoperational tests, preheating, hot start-up, cold shutdown, power control, and the operational conditions in steady and accident states. Figure 13 shows a basic three-loop PWR, as an illustration of the real process. Six process signals from sensors of the reactor coolant system (RCS) were selected for monitoring during the start-up operation: S1 (cold leg temperature), S2 (core exit temperature), S3 (hot leg temperature), S4 (safety injection flow), S5 (residual heat removal flow), and S6 (sub-cooling margin temperature). The data consist of 1000 observations sequentially collected at constant time intervals of 1 s.

Figure 13.

Schematic diagram of a three-loop PWR reactor coolant system [60].

Because these data are fault-free, we first used the entire 1000 observations to train the method and to determine the optimal model parameters using 10-fold cross-validation. Then, we simulated abnormal conditions within these data for use as the testing data set for model evaluation. To effectively examine the fault detection capability of the proposed method, we conducted a thousand-run Monte Carlo simulation experiment, in which the fault magnitude was a random number sampled from a bimodal uniform distribution, U([−10, −2] ∪ [2, 10]), and added to a signal. The sensor signal in fault at any time step of the Monte Carlo simulation was randomly and uniformly sampled among the six possible sensor signals.
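Sampling from the bimodal uniform distribution can be sketched by drawing a magnitude in [2, 10] and a random sign; the seed and function name are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)  # seed: an assumption for reproducibility

def sample_fault_magnitude(rng):
    """Draw from the bimodal uniform distribution U([-10, -2] U [2, 10]):
    a magnitude uniform in [2, 10], given a random sign."""
    return rng.choice([-1.0, 1.0]) * rng.uniform(2.0, 10.0)

mags = np.array([sample_fault_magnitude(rng) for _ in range(1000)])
```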

Summary statistics of the alarm rates from AAKR, AABKR, and the modified AABKR over the thousand-run Monte Carlo simulation experiment are presented in Table 2. The mean values of the distributions of the alarm rates are depicted in Figure 14. The mean values of the distributions of the TARs are 61.4%, 99.8%, and 95.3% for AAKR, AABKR, and the modified AABKR, respectively, while the mean values of the distributions of the FDRs are 84.57%, 99.86%, and 99.99%, respectively. From these results, both AABKR and the modified AABKR perform better than AAKR. On average, the TAR of AABKR is slightly higher than that of the modified AABKR; however, there is no significant difference between their FDRs. Thus, for single-sensor faults, the modified AABKR has, on average, performance similar to that of the AABKR and can be used to validate the sensors' states during transient operations, with the benefit of eliminating the use of derivatives entirely, thereby extending the applicability to steady-state process operation.

Model          | Statistic | MAR   | M&FAR | T&FAR | TAR   | FDR
AAKR           | Mean      | 15.43 | 2.18  | 21.02 | 61.37 | 84.57
               | Median    | 15.41 | 2.15  | 21.02 | 61.35 | 84.59
               | Max       | 19.54 | 3.73  | 25.78 | 66.13 | 89.11
               | Min       | 10.89 | 0.81  | 16.86 | 55.82 | 80.46
AABKR          | Mean      | 0.14  | 0.01  | 0.09  | 99.76 | 99.86
               | Median    | 0.12  | 0.00  | 0.12  | 99.77 | 99.88
               | Max       | 0.59  | 0.23  | 0.70  | 100   | 100
               | Min       | 0.00  | 0.00  | 0.00  | 98.95 | 99.41
Modified AABKR | Mean      | 0.01  | 0.32  | 4.36  | 95.30 | 99.99
               | Median    | 0.00  | 0.35  | 4.32  | 95.33 | 100
               | Max       | 0.23  | 0.95  | 7.50  | 97.69 | 100
               | Min       | 0.00  | 0.00  | 2.20  | 91.97 | 99.76

Table 2.

Alarm rates (%) from the thousand-run Monte Carlo simulation on the start-up data.

Figure 14.

Means of the alarm rates in start-up process operating condition.

6. Conclusions

In this work, we have considered signal reconstruction models for fault detection in nuclear power plants. In order to improve the performance of the AABKR and extend its applicability to steady-state operating conditions, we have proposed a modification based on a different procedure for determining the time position index (the position of the vector within the memory data nearest to the query observation), which provides the input to the weighted-distance algorithm that captures temporal dependencies in the data. Two different approaches based on DTW for time position index identification have been developed. The basic idea is that the use of derivatives in AABKR, which become nearly constant and close to zero during steady-state operation (when the process change in time is negligible) and thus make it impossible to identify the time position index correctly, can be completely eliminated while maintaining an acceptable monitoring performance during both steady-state and transient operations.

The modified AABKR method has been applied, first, to a typical steady-state process and, then, to a case study concerning the monitoring of the reactor coolant system of a PWR NPP during start-up transient operation. We have conducted Monte Carlo simulation experiments to critically examine the fault detection capability of the proposed method, and the results have been compared with those of AAKR and AABKR using several performance metrics. The results show that the reconstructions provided by the modified AABKR are more robust than those of AAKR and AABKR, in particular during steady-state operations. The method can, then, be used for signal reconstruction during both steady-state and transient operations, with the benefit of eliminating the use of derivatives entirely while maintaining an acceptable performance. If these approaches are adopted, the causes of abnormalities can be identified, proper maintenance interventions can be planned, and earlier mitigation is possible, avoiding the risk of catastrophic failures.

Future work will focus on (1) the development of an ensemble model, in order to benefit from the different capabilities of the three signal reconstruction methods; and (2) the development of a method for on-line updating of the memory data, allowing the model to automatically adapt to changes in operating conditions.


A.1 Auto associative kernel regression

The framework of the AAKR technique comprises three steps, briefly presented below [61, 62].

1) Distance calculation

The distance between the query vector x_q and each of the memory data vectors is computed. Many different distance metrics can be used; the most common is the Euclidean distance (L2 norm):

d_i(X_i, x_q) = ||X_i − x_q||_2 = [ Σ_(j=1)^(p) (x_ij − x_qj)^2 ]^(1/2)    (A1)

For a single query vector, this calculation is repeated for each of the M memory vectors, resulting in d ∈ R^(M×1).

2) Similarity weight quantification

The distances d_i in the vector d are used to determine the weights of the AAKR, for example, by evaluating the Gaussian kernel:

k_i(X_i, x_q) = exp( −d_i^2 / (2h^2) )    (A2)

where h is the bandwidth.

3) Output estimation

Finally, the quantified weights (Eq. (A2)) are combined with the memory data vectors to make estimations by using a weighted average:

x̂_qj = ( Σ_(i=1)^(M) k_i x_ij ) / ( Σ_(i=1)^(M) k_i )    (A3)
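The three AAKR steps translate directly into code; this is a minimal vectorized sketch (function name ours), not the exact implementation of [61, 62]:

```python
import numpy as np

def aakr_reconstruct(X_mem, x_q, h):
    """AAKR reconstruction of a query vector x_q from memory data X_mem
    (M x p): Euclidean distances (Eq. (A1)), Gaussian kernel weights of
    bandwidth h (Eq. (A2)), and a kernel-weighted average (Eq. (A3))."""
    d = np.linalg.norm(X_mem - x_q, axis=1)        # Eq. (A1): M distances
    k = np.exp(-d**2 / (2.0 * h**2))               # Eq. (A2): kernel weights
    return k @ X_mem / k.sum()                     # Eq. (A3): weighted average
```

A query close to one memory vector is reconstructed onto that vector when the bandwidth is small, which is the basis for using the residual x_q − x̂_q as a fault indicator.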
A.2 Algorithms

A.2.1 Time position index identification—First approach

A.2.2 Time position index identification—Second approach

A.3 Performance metrics

We evaluated and compared the proposed methods using a set of performance metrics proposed in [54]: missed alarm rate (MAR), missed and false alarms rate (M&FAR), true and false alarms rate (T&FAR), true alarm rate (TAR), and fault detection rate (FDR). These metrics are briefly summarized as follows:

A.3.1 Missed alarm rate

A missed alarm occurs when at least one process variable is erroneously not detected as faulty (e_j ≤ T_j^D) when a fault is actually present. In this case, at least one missed alarm occurs, and no false alarm occurs in any of the other variables. The MAR is calculated as:

MAR = (number of missed alarms / total number of samples in fault condition) × 100%    (A4)

A.3.2 Missed and false alarms rate

It is possible that a fault is missed in the faulty signal but detected in at least one fault-free signal. This gives both missed and false alarms: a fault is detected (e_j > T_j^D) in at least one process signal in which no fault is actually present (false alarm), and at least one process signal has a fault that is not detected (e_j ≤ T_j^D) (missed alarm). The M&FAR is calculated as:

M&FAR = (number of simultaneous missed and false alarms / total number of samples in fault condition) × 100%    (A5)

A.3.3 True and false alarms rate

It is possible that a fault is detected in the faulty signal and also in at least one fault-free signal. This gives both true and false alarms: a fault is detected (e_j > T_j^D) in at least one process signal in which no fault is actually present (false alarm), and the fault is correctly detected (e_j > T_j^D) in the faulty process signal (true alarm). The T&FAR is calculated as:

T&FAR = (number of simultaneous true and false alarms / total number of samples in fault condition) × 100%    (A6)

A.3.4 True alarm rate

This represents the presence of only true alarms: a fault is detected (e_j > T_j^D) in at least one process signal when a fault is actually present, and no false alarm exists in the other fault-free signals (true alarm). The TAR is calculated as:

TAR = (number of true alarms without false alarms / total number of samples in fault condition) × 100%    (A7)

A.3.5 Fault detection rate

The FDR is the ratio of the number of faulty data points detected as faulty to the total number of samples specific to a fault. In this case, a fault is detected (e_j > T_j^D) in at least one process signal when a fault is actually present, regardless of false alarms in other signals. The FDR is calculated as:

FDR = (number of correctly detected faults in the system / total number of samples in fault condition) × 100%    (A8)

FDR measures the ability of a model to detect the presence of a fault in the system when a fault is actually present. Thus, the FDR is the sum of the M&FAR, T&FAR, and TAR:

FDR = M&FAR + T&FAR + TAR    (A9)
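These rates can be computed from per-sample outcomes; a minimal sketch, assuming each sample in fault condition has been classified into one of the four mutually exclusive categories above (labels and function name are ours):

```python
from collections import Counter

def alarm_rates(outcomes):
    """Alarm rates of Appendix A.3 (in %). Each outcome is one of
    'MA' (missed alarm), 'M&FA' (missed + false alarms),
    'T&FA' (true + false alarms), or 'TA' (true alarm, no false alarms);
    all samples are assumed to be in fault condition."""
    n = len(outcomes)
    c = Counter(outcomes)
    rates = {key: 100.0 * c[key] / n for key in ('MA', 'M&FA', 'T&FA', 'TA')}
    rates['FDR'] = rates['M&FA'] + rates['T&FA'] + rates['TA']   # Eq. (A9)
    return rates
```

Because the four categories partition the faulty samples, Eq. (A9) holds by construction.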

Nomenclature

Abbreviations

AABKR

auto-associative bilateral KR

AAKR

auto-associative KR

AANN

auto-associative artificial neural network

DTW

dynamic time warping

ECM

evolving clustering method

FDD

fault detection and diagnosis

FDR

fault detection rate

GPR

Gaussian process regression

KR

kernel regression

M&FAR

missed and false alarms rate

MAR

missed alarm rate

MSET

multivariate state estimation technique

NPPs

nuclear power plants

PCA

principal component analysis

PLS

partial least squares

PWR

pressurized water reactor

RCS

reactor coolant system

RMSE

root mean square error

SSCs

systems, structures, and components

SVM

support vector machine

T&FAR

true and false alarms rate

TAR

true alarm rate

Symbols and notations

X

matrix of training/memory data set

Xq

query test pattern r×p matrix

x_q^r

the last (rth) vector in X_q

A

array containing N matrices generated from X

A_r^(k)

kth r×p matrix in the array A

M

number of observations in the memory data

N

number of matrices in the array A generated from X

p

number of process variables

r

sliding window length

i

observation time index of memory data

q

a subscript, indicating query input

j

process variable index

xij

ith observation of the jth variable

t

time, independent variable

x̂_q^r

estimated value of x_q^r

d

distance metric

k

kernel weight of AAKR

h

kernel bandwidth of AAKR

tq

query time input

f

feature component of AABKR

kf

feature kernel vector

kt

temporal kernel vector

hf

feature kernel bandwidth

ht

temporal kernel bandwidth

η

constant time interval

ε

time position index

∂A_r^(k)/∂t

derivative of the last row in A_r^(k) with respect to time, t

∂x_q^r/∂t

derivative of the current measurement vector in X_q with respect to t

Δ

derivative distance vector

δ

temporal weighted distance vector

kab

adaptive bilateral kernel vector

eqr

residual vector

T_j^D

threshold for fault detection in the jth variable

D

local cost (distance) matrix of the DTW

G

global cost (accumulated) matrix of the DTW

References

1. Rouhiainen V. Safety and Reliability: Technology Theme—Final Report. Espoo: VTT Publications 592; 2006
2. Coble J, Ramuhalli P, Bond L, Hines JW, Upadhyaya B. A review of prognostics and health management applications in nuclear power plants. International Journal of Prognostics and Health Management. 2015;6:1-22
3. IAEA. On-Line Monitoring for Improving Performance of Nuclear Power Plants Part 2: Process and Component Condition Monitoring and Diagnostics. IAEA Nuclear Energy Series: No. NP-T-1.2. Vienna, Austria: IAEA; 2008
4. EPRI. On-Line Monitoring of Instrument Channel Performance, Volume 1: Guidelines for Model Development and Implementation. Vol. 1. Palo Alto, CA: EPRI; 2004. p. 1003361
5. IAEA. On-Line Monitoring for Improving Performance of Nuclear Power Plants Part 1: Instrument Channel Monitoring. IAEA Nuclear Energy Series: No. NP-T-1.1. Vienna, Austria: IAEA; 2008
6. EPRI. Equipment Condition Assessment Volume 1: Application of On-Line Monitoring Technology. Vol. 1. Palo Alto, CA: EPRI; 2004. p. 1003695
7. Heo G, Lee SK. Internal leakage detection for feedwater heaters in power plants using neural networks. Expert Systems with Applications. 2012;39:5078-5086
8. An SH, Heo G, Chang SH. Detection of process anomalies using an improved statistical learning framework. Expert Systems with Applications. 2011;38:1356-1363
9. Li F, Upadhyaya BR, Coffey LA. Model-based monitoring and fault diagnosis of fossil power plant process units using group method of data handling. ISA Transactions. 2009;48:213-219
10. Isermann R. Fault-Diagnosis Applications: Model-Based Condition Monitoring—Actuators, Drives, Machinery, Plants, Sensors, and Fault-Tolerant Systems. Berlin Heidelberg: Springer; 2011
11. Nabeshima K, Suzudo T, Suzuki K. Real-time nuclear power plant monitoring with neural network. Journal of Nuclear Science and Technology. 1998;35:93-100
12. Baraldi P, Gola G, Zio E, Roverso D, Hoffmann M. A randomized model ensemble approach for reconstructing signals from faulty sensors. Expert Systems with Applications. 2011;38:9211-9224
13. Jiang Q, Yan X. Plant-wide process monitoring based on mutual information-multiblock principal component analysis. ISA Transactions. 2014;53:1516-1527
14. Ding SX. Model-Based Fault Diagnosis Techniques: Design Schemes, Algorithms, and Tools. Berlin Heidelberg: Springer; 2008
15. Chen J. Robust residual generation for model-based fault diagnosis of dynamic systems [Ph.D. thesis]. University of York; 1995
16. Patton RJ, Chen J. Observer-based fault detection and isolation: Robustness and applications. Control Engineering Practice. 1997;5:671-682
17. Simani S, Fantuzzi C, Patton RJ. Model-Based Fault Diagnosis in Dynamic Systems Using Identification Techniques. London: Springer; 2002
18. Chiang LH, Russell EL, Braatz RD. Fault Detection and Diagnosis in Industrial Systems. London: Springer; 2000
19. Ma J, Jiang J. Applications of fault detection and diagnosis methods in nuclear power plants: A review. Progress in Nuclear Energy. 2011;53:255-266
20. Garvey J, Garvey D, Seibert R, Hines JW. Validation of on-line monitoring techniques to nuclear plant data. Nuclear Engineering and Technology. 2007;39:149-158
21. Zio E. Prognostics and health management of industrial equipment. In: Kadry S, editor. Diagnostics and Prognostics of Engineering Systems: Methods and Techniques. USA: IGI Global; 2012. pp. 333-356
22. Zio E, Di Maio F, Stasi M. A data-driven approach for predicting failure scenarios in nuclear systems. Annals of Nuclear Energy. 2010;37:482-491. DOI: 10.1016/j.anucene.2010.01.017
23. EPRI. On-Line Monitoring for Equipment Condition Assessment: Application at Progress Energy. Palo Alto, CA: EPRI; 2004. p. 1008416
24. Ahmad R, Kamaruddin S. An overview of time-based and condition-based maintenance in industrial application. Computers and Industrial Engineering. 2012;63:135-149
25. Zio E, Broggi M, Pedroni N. Nuclear reactor dynamics on-line estimation by locally recurrent neural networks. Progress in Nuclear Energy. 2009;51:573-581. DOI: 10.1016/j.pnucene.2008.11.006
26. Simani S, Marangon F, Fantuzzi C. Fault diagnosis in a power plant using artificial neural networks: Analysis and comparison. In: European Control Conference. Karlsrushe, Germany: IEEE; 1999. pp. 2270-2275
27. Na MG, Shin SH, Jung DW, Kim SP, Jeong JH, Lee BC. Estimation of break location and size for loss of coolant accidents using neural networks. Nuclear Engineering and Design. 2004;232:289-300
28. Yao M, Wang H. On-line monitoring of batch processes using generalized additive kernel principal component analysis. Journal of Process Control. 2015;28:56-72
29. Dunia R, Qin SJ, Edgar TF, McAvoy TJ. Identification of faulty sensors using principal component analysis. AICHE Journal. 1996;42:2797-2812
30. Harkat M-F, Djelel S, Doghmane N, Benouaret M. Sensor fault detection, isolation and reconstruction using nonlinear principal component analysis. International Journal of Automation and Computing. 2007;4:149-155
31. Heo G, Choi SS, Chang SH. Thermal power estimation by fouling phenomena compensation using wavelet and principal component analysis. Nuclear Engineering and Design. 2000;199:31-40
32. Cui P, Li J, Wang G. Improved kernel principal component analysis for fault detection. Expert Systems with Applications. 2008;34:1210-1219
33. Zhao C, Gao F. Fault-relevant principal component analysis (FPCA) method for multivariate statistical modeling and process monitoring. Chemometrics and Intelligent Laboratory Systems. 2014;133:1-16
34. Gross KC, Singer RM, Wegerich SW, Herzog JP, VanAlstine R, Bockhorst F. Application of a model-based fault detection system to nuclear plant signals. In: International Conference on Intelligent Systems Applications to Power Systems (ISAP'97). Seoul, Korea: Argonne National Laboratory; 1997
35. Singer RM, Gross KC, Herzog JP, King RW, Wegerich S. Model-based nuclear power plant monitoring and fault detection: Theoretical foundations. In: International Conference on Intelligent Systems Applications to Power Systems (ISAP'97). Seoul, Korea: Argonne National Laboratory; 1997
36. Chen X, Xu G. A self-adaptive alarm method for tool condition monitoring based on Parzen window estimation. Journal of Vibroengineering. 2013;15:1537-1545
37. Mandal SK, Chan FTS, Tiwari MK. Leak detection of pipeline: An integrated approach of rough set theory and artificial bee colony trained SVM. Expert Systems with Applications. 2012;39:3071-3080
38. Rocco CM, Zio E. A support vector machine integrated system for the classification of operation anomalies in nuclear components and systems. Reliability Engineering and System Safety. 2007;92:593-600
39. Zio E, Baraldi P, Zhao W. Confidence in signal reconstruction by the evolving clustering method. In: Proceedings of the Annual Conference of the Prognostics and System Health Management (PHM 2011). Shenzhen: IEEE; 2011
40. Muradore R, Fiorini P. A PLS-based statistical approach for fault detection and isolation of robotic manipulators. IEEE Transactions on Industrial Electronics. 2012;59:3167-3175
41. Zio E, Di Maio F. A fuzzy similarity-based method for failure detection and recovery time estimation. International Journal of Performability Engineering. 2010;6:407-424
42. Zio E, Di Maio F. A data-driven fuzzy approach for predicting the remaining useful life in dynamic failure scenarios of a nuclear system. Reliability Engineering and System Safety. 2010;95:49-57. DOI: 10.1016/j.ress.2009.08.001
43. Zio E, Gola G. A neuro-fuzzy technique for fault diagnosis and its application to rotating machinery. Reliability Engineering and System Safety. 2009;94:78-88. DOI: 10.1016/j.ress.2007.03.040
44. Zio E, Baraldi P, Popescu IC. A fuzzy decision tree method for fault classification in the steam generator of a pressurized water reactor. Annals of Nuclear Energy. 2009;36:1159-1169. DOI: 10.1016/j.anucene.2009.04.011
45. Marseguerra M, Zio E, Baraldi P, Oldrini A. Fuzzy logic for signal prediction in nuclear systems. Progress in Nuclear Energy. 2003;43:373-380
46. Ding SX. Data-driven design of monitoring and diagnosis systems for dynamic processes: A review of subspace technique based schemes and some recent results. Journal of Process Control. 2014;24:431-449
47. Hu Y, Palmé T, Fink O. Fault detection based on signal reconstruction with auto-associative extreme learning machines. Engineering Applications of Artificial Intelligence. 2017;57:105-117
48. Hines JW, Garvey J, Garvey DR, Seibert R. Technical Review of On-Line Monitoring Techniques for Performance Assessment, Volume 3: Limiting Case Studies. NUREG/CR-6895, Vol. 3. Washington DC, USA: US Nuclear Regulatory Commission; 2008
49. Baraldi P, Di Maio F, Genini D, Zio E. Comparison of data-driven reconstruction methods for fault detection. IEEE Transactions on Reliability. 2015;64:852-860
50. Hines JW. Robust distance measures for on-line monitoring. US Patent. Patent No.: US008311774 B2; 2012
51. Baraldi P, Di Maio F, Turati P, Zio E. Robust signal reconstruction for condition monitoring of industrial components via a modified auto associative kernel regression method. Mechanical Systems and Signal Processing. 2015;60:29-44
52. Ahmed I, Heo G. Development of a modified kernel regression model for a robust signal reconstruction. In: Transactions of the Korean Nuclear Society Virtual Autumn Meeting. Gyeongju, Korea: Korean Nuclear Society; 2016
53. Ahmed I, Heo G. Development of a transient signal validation technique via a modified kernel regression model. In: 10th International Embedded Topical Meeting on Nuclear Plant Instrumentation, Control and Human-Machine Interface Technologies (NPIC&HMIT 2017). San Francisco, CA, USA: American Nuclear Society; 2017. pp. 1943-1951
54. Ahmed I, Heo G, Zio E. On-line process monitoring during transient operations using weighted distance auto associative bilateral kernel regression. ISA Transactions. 2019;92:191-212. DOI: 10.1016/j.isatra.2019.02.010
55. Ahmed I. Bilateral kernel methods for time-series states validation in process systems [Ph.D. thesis]. Republic of Korea: Department of Nuclear Engineering, Kyung Hee University; 2020
56. Sakoe H, Chiba S. Dynamic programming algorithm optimization for spoken word recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing. 1978;26:43-49
57. Meinard M. Dynamic Time Warping. Information Retrieval for Music and Motion. Berlin, Heidelberg: Springer; 2007. DOI: 10.1007/978-3-540-74048-3_4
58. Berndt DJ, Clifford J. Using dynamic time warping to find patterns in time series. In: Work Knowledge Discovery in Databases. Palo Alto, California, USA: AAAI; 1994. pp. 359-370
59. Alcala CF, Qin SJ. Reconstruction-based contribution for process monitoring. Automatica. 2009;45:1593-1600
60. Xiao B-B, Wang T-C. An investigation of FLEX implementation in Maanshan NPP by using MAAP 5. In: Jiang H, editor. Proceedings of the 20th Pacific Basin Nuclear Conference (PBNC 2016). Singapore: Springer; 2017. pp. 81-96. DOI: 10.1007/978-981-10-2311-8_8
61. Heo G. Condition monitoring using empirical models: Technical review and prospects for nuclear applications. Nuclear Engineering and Technology. 2008;40:49-68
62. Hines JW, Garvey D, Seibert R, Usynin A. Technical Review of On-Line Monitoring Techniques for Performance Assessment, Volume 2: Theoretical Issues. NUREG/CR-6895, Vol. 2. Washington DC, USA: US Nuclear Regulatory Commission; 2008
