Open access peer-reviewed chapter

ANFIS: Establishing and Applying to Managing Online Damage

Written By

Sy Dzung Nguyen

Submitted: October 5th, 2018 Reviewed: December 10th, 2018 Published: March 11th, 2019

DOI: 10.5772/intechopen.83453


Abstract

Fuzzy logic (FL) and artificial neural networks (ANNs) each have their own advantages and disadvantages. The adaptive neuro-fuzzy inference system (ANFIS), a fuzzy system deployed on the structure of an ANN, lets FL and the ANN interact so that each model not only overcomes its limitations but also reinforces its strengths, and it has therefore become a practical option in many real fields. Thanks to these strong points, ANFIS has been applied successfully in many technology applications related to noise filtering, identification, prediction, and control. This chapter, however, focuses mainly on building ANFIS and applying it to online bearing-fault identification. First, a traditional structure of ANFIS as a data-driven model is shown. Then, a recurrent mechanism depicting the relation between the processes of filtering impulse noise (IN) and establishing an ANFIS from a noisy measured database is presented. Finally, one of the typical applications of ANFIS, online management of bearing faults, is shown.

Keywords

  • fuzzy logic
  • artificial neural networks
  • adaptive neuro-fuzzy inference system

1. Introduction

As is well known, the mathematical tools FL and ANN each possess advantages and disadvantages as their specific characteristics. The hybrid structure ANFIS, in which ANN and FL interact to partly overcome the limitations of each model while upholding their strong points [1-25], is hence considered a reasonable option in many technology applications such as identification [1, 2, 4, 6, 12], prediction [9, 11, 17, 25], control [3, 5, 7, 18-26], and noise filtering [14, 15, 16, 27, 28, 29].

To build an ANFIS from a given database, an initial data space (IDS) expressing the mapping $f: X \to Y$ must first be created. A cluster data space (CDS) is then built from the IDS to form the ANFIS via a training algorithm. Viewed as a popular technique for unsupervised pattern recognition, clustering is an effective tool for analyzing and exploring data structures to build CDSs [30-37]. Reality shows that the accuracy and training time of the ANFIS depend strongly on the features of both the IDS and the CDS [2-6]. In the process of building an ANFIS, the two following issues should therefore be considered: (1) What is the essence of the interactive relation between the ANFIS's convergence capability and the CDS's attributes? (2) How can this essence be exploited to increase the ANFIS's ability to converge to the desired accuracy at an improved computational cost?

Many different clustering approaches have been proposed [2, 3, 4, 10, 30-34, 37]. Separating data in X and in Y distinctly, step by step, with a mutual result reference was described in [10]. That method, however, could not solve the above issues appropriately; moreover, the difficulty of deploying fuzzy clustering strategies, along with a high computational cost, was a disadvantage. In general, a hard partition cannot fully reflect the attributes of a database [31, 34]. The well-known method of fuzzy C-means clustering is a better option in this respect; it is, however, not effective enough for general "non-spherical" datasets [30, 37]. Therefore, the idea of fuzzy clustering in a kernel feature space was developed to deal with these cases [30-34, 37]: a weighted kernel-clustering algorithm can be found in [30], and a method of weighted kernel fuzzy C-means clustering based on adaptive distances is detailed in [31]. In spite of their considerable advantages, the identification and prediction accuracy of an ANFIS based on a CDS deriving from [30, 31] is sensitive to the attributes of the CDS because of the negative influence of noise.

Reality has shown that noise, including IN, always exists in measured IDSs [2, 4, 9, 16] and severely degrades the accuracy of ANFISs deriving from them. There are many causes, such as the limited precision of the measurement devices, tools, and methods, or the negative impact of the surrounding environment. In [7], an ANFIS took part in the system in the form of an inverse MR damper (MRD) model used to specify the time-varying desired control current. To maintain the accuracy of the inverse MRD model, the ANFIS was retrained after each certain period, because the dynamic response of the MRD depends quite strongly on temperature. A more active approach is filtering noise or preprocessing data [7, 9, 11, 17, 21, 38, 39, 40]. In [11, 17], where ANFISs were employed to predict the health of mechanical systems, the vibration signal was continuously measured and filtered to update the ANFISs. Regarding data preprocessing for setting up an ANFIS, it can be observed that reducing time delay is essential for maintaining the stability of the above online ANFIS-based applications. A promising solution can be found in [16], where filtering IN and building the ANFIS were carried out synchronously via a recurrent mechanism: a recurrent strategy for forming the ANFIS was employed in which the capability of the ANFIS training process to converge to a desired accuracy could be estimated and directed online. As part of this solution, attention was paid to increasing the quality of both the IDS and the CDS. Building an ANFIS from a filtered database and then exploiting that ANFIS as an updated filter to refilter the database were depicted via an online, recurrent mechanism, which was maintained until the ANFIS-based database approximation converged to the desired accuracy or a stop condition appeared.

Inspired by these capabilities of ANFIS, and in order to provide readers with the theoretical basis and application directions of the model, this chapter presents the formulation of ANFIS and one of its typical applications. The rest of the chapter is organized as follows. Section 2 shows a structure of ANFIS as a data-driven model deriving from fuzzy logic and artificial neural networks; setting up the CDS, consisting of the input data clusters and output data clusters, and the CDS-based ANFIS as a joint structure is detailed. Deriving from this relation, a theoretical basis for building an ANFIS from noisy measured datasets is presented in Section 3, where an online, recurrent mechanism for filtering noise and building the ANFIS synchronously is clarified via algorithms for filtering IN and establishing the ANFIS. A typical application of ANFIS to online management of bearing fault status is shown in Section 4. Finally, some general remarks are given in the last section.


2. Structure of ANFIS

Let's consider a given IDS having $P$ input-output data samples $(\bar{x}_i, y_i)$, where $\bar{x}_i = [x_{i1}, \dots, x_{in}] \in \mathbb{R}^n$, $y_i \in \mathbb{R}$, and $i = 1 \dots P$. With a data normalization solution and a chosen clustering algorithm, a CDS is then created. The $k$th cluster, denoted $\Gamma_k$, $k = 1 \dots C$, consists of one input cluster and one output cluster, denoted $\Gamma_k^A$ and $\Gamma_k^B$, respectively. The CDS can be seen as a framework for establishing the ANFIS. This section presents how to build the CDS as well as the CDS-based ANFIS structure.

2.1 Some related notions

Some notions shown in [16] are used in this chapter as follows.

Definition 1. Normalizing a given IDS to set up a normalized initial data space, denoted $\overline{\mathrm{IDS}}$, is performed as follows:

$$\tilde{x}_{ij} = x_{ij} \big/ \max_k x_{kj}, \quad i,k = 1 \dots P,\ j = 1 \dots n. \tag{1}$$

In this way, the $i$th data sample (also denoted $(\bar{x}_i, y_i)$) in the $\overline{\mathrm{IDS}}$ is constituted as follows:

$$\bar{x}_i = [\tilde{x}_{i1} \dots \tilde{x}_{in}]^T,\ y_i, \quad i = 1 \dots P. \tag{2}$$

Definition 2. The root-mean-square error (RMSE) in Eq. (3) is used to evaluate the accuracy of the ANFIS; the required RMSE value is denoted $[E]$. The absolute error $\bar{\varepsilon}_i$, $i = 1 \dots P$, between the data output $y_i = f(\bar{x}_i)$ and the corresponding ANFIS-based output $\hat{y}_i(\bar{x}_i)$ is defined in Eq. (4); the desired value of $\bar{\varepsilon}_i$ is denoted $[\bar{\varepsilon}]$:

$$\mathrm{RMSE} = \Big(P^{-1}\sum_{i=1}^{P}\big(\hat{y}_i(\bar{x}_i) - f(\bar{x}_i)\big)^2\Big)^{0.5}, \tag{3}$$
$$\bar{\varepsilon}_i = \big|\hat{y}_i(\bar{x}_i) - f(\bar{x}_i)\big|, \quad i = 1 \dots P. \tag{4}$$
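As a small illustration of Def. 1 and Def. 2, the normalization of Eq. (1) and the RMSE of Eq. (3) can be sketched as follows (a minimal NumPy sketch; the function names are illustrative, not from [16]):

```python
import numpy as np

def normalize_ids(X):
    """Eq. (1): scale each input column j by its column maximum, x_ij / max_k x_kj."""
    X = np.asarray(X, dtype=float)
    col_max = X.max(axis=0)          # assumes positive-valued measurements
    return X / col_max

def rmse(y_hat, y):
    """Eq. (3): root-mean-square error between ANFIS outputs and data outputs."""
    y_hat, y = np.asarray(y_hat, float), np.asarray(y, float)
    return float(np.sqrt(np.mean((y_hat - y) ** 2)))
```

Note that Eq. (1) normalizes each input dimension independently, so differently scaled sensors become comparable before clustering.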

Definition 3. Let's consider $\bar{x}_i \in \overline{\mathrm{IDS}}$, where the $\overline{\mathrm{IDS}}$ depicts an unknown mapping $f: X \to Y$. The ANFIS-based approximation of $f: X \to Y$ is said to be continuous at $(\bar{x}_p, y_p) \in \overline{\mathrm{IDS}}$ if

$$\hat{y}_i(\bar{x}_i) \to y_p \pm \bar{\varepsilon} \quad \text{when } \bar{x}_i \to \bar{x}_p. \tag{5}$$

Definition 4. Let's consider an ANFIS-based approximation of a mapping expressed by an $\overline{\mathrm{IDS}}$. The ANFIS is said to be a uniform approximation with a required error $[\bar{\varepsilon}]$ if, at every $\bar{x}_i \in X$ and for any chosen small constant $\varepsilon \le \bar{\varepsilon}$, a corresponding data sample $\bar{x}_j \in \overline{\mathrm{IDS}}$ always exists such that

$$\forall\, \bar{x}_h \in \overline{\mathrm{IDS}}:\ \text{if } \|\bar{x}_h - \bar{x}_i\| \le \|\bar{x}_j - \bar{x}_i\|,\ \text{then } \big|\hat{y}_h(\bar{x}_h) - f(\bar{x}_i)\big| \le \big|\hat{y}_j(\bar{x}_j) - f(\bar{x}_i)\big| = \varepsilon. \tag{6}$$

Definition 5. A data cluster $\Gamma_k$ and a data sample $(\bar{x}_p, y_p) \in \Gamma_k$ in a CDS derived from an $\overline{\mathrm{IDS}}$ are depicted in Figure 1. Let $\Gamma_{k\backslash p}$ denote the subset consisting of the data samples belonging to $\Gamma_k$ except $(\bar{x}_p, y_p)$; this subset contains $Q_{kp}$ data samples. It is assumed that all data samples in $\Gamma_k^A$ are distributed closely, while in $\Gamma_k^B$ most of them are located closely except $y_p$, which is far from the others and lies at one side of $\Gamma_k^B$. This status is described in Eq. (7):

$$y_p \gg \max_{y_i \in \Gamma^B_{k\backslash p}} y_i \quad \text{or} \quad y_p \ll \min_{y_i \in \Gamma^B_{k\backslash p}} y_i, \tag{7}$$

Figure 1.

Two typical distribution types in data cluster $\Gamma_k$: an impulse noise point $(\bar{x}_p, y_p) \in \Gamma_k$ causing the one-sided distribution, on the right side (a) and on the left side (b).

and satisfies Eq. (8) or Eq. (9):

$$d_{k1} = y_p - \max_{y_i \in \Gamma^B_{k\backslash p}} y_i > \big(P[E]^2 - Q_{kp}\bar{\varepsilon}^2\big)^{0.5} \quad \text{if } y_p > \max_{y_i \in \Gamma^B_{k\backslash p}} y_i, \tag{8}$$
$$d_{k2} = \min_{y_i \in \Gamma^B_{k\backslash p}} y_i - y_p > \big(P[E]^2 - Q_{kp}\bar{\varepsilon}^2\big)^{0.5} \quad \text{if } y_p < \min_{y_i \in \Gamma^B_{k\backslash p}} y_i. \tag{9}$$

In this case, $(\bar{x}_p, y_p)$ is called a critical data sample of the CDS.

2.2 Setting up the input data clusters

Let's consider the normalized initial data space $\overline{\mathrm{IDS}}$ (see Def. 1). Many well-known clustering methods can be used to build a CDS from the $\overline{\mathrm{IDS}}$. Here, the CDS is built by using the clustering algorithm KFCM-K (kernel fuzzy C-means clustering with kernelization of the metric) presented in [31]. In this way, the distribution of data samples in the CDS is established. The membership degree of the $j$th data sample belonging to the $i$th cluster is denoted by $\mu_{ij} \in [0,1]$, $j = 1 \dots P$, $i = 1 \dots C$. The cluster centroids $\bar{x}_1^0, \dots, \bar{x}_C^0$ of the CDS are specified such that the following objective function is minimized:

$$J_{\mathrm{KFCM}}(U, \bar{x}^0) = \sum_{i=1}^{C}\sum_{j=1}^{P} \mu_{ij}^m \big\|\phi(\bar{x}_j) - \phi(\bar{x}_i^0)\big\|^2, \tag{10}$$

subject to $\sum_{i=1}^{C}\mu_{ij} = 1\ \forall j$ and $\mu_{ij} \in [0,1]\ \forall i,j$. In Eq. (10), $\bar{x}_i^0 = [x^0_{i1}, \dots, x^0_{in}] \in \mathbb{R}^n$ is the $i$th cluster center; $\|\phi(\bar{x}_j) - \phi(\bar{x}_i^0)\|^2$ denotes the squared distance between $\bar{x}_j$ and $\bar{x}_i^0$ in the kernel space; $\phi(\cdot)$ is the kernel mapping; $U = [\mu_{ij}]_{C \times P}$ is the distribution matrix; and $m > 1$ is the fuzzy factor.

Using the Gaussian kernel, the objective function can be rewritten as follows:

$$J_{\mathrm{KFCM}}(U, \bar{x}^0) = 2\sum_{i=1}^{C}\sum_{j=1}^{P} \mu_{ij}^m \Big(1 - \exp\big(-\|\bar{x}_j - \bar{x}_i^0\|^2/\sigma^2\big)\Big). \tag{11}$$

Differentiating $J_{\mathrm{KFCM}}(U, \bar{x}^0)$ in Eq. (11) with respect to $\bar{x}_i^0$, the following must hold at the optimal centers:

$$\nabla_{\bar{x}_i^0} J_{\mathrm{KFCM}}(U, \bar{x}_i^0) = -\frac{4}{\sigma^2}\sum_{j=1}^{P} \mu_{ij}^m \,(\bar{x}_j - \bar{x}_i^0)\exp\big(-\|\bar{x}_j - \bar{x}_i^0\|^2/\sigma^2\big) = 0. \tag{12}$$

From Eqs. (11) and (12) and the use of Lagrange multipliers with $\mu_{ij} \in [0,1]\ \forall i,j$ and $\sum_{i=1}^{C}\mu_{ij} = 1\ \forall j$, the following update laws are obtained:

$$\bar{x}_i^0 = \frac{\sum_{j=1}^{P}\mu_{ij}^m\,\bar{x}_j\,K(\bar{x}_j, \bar{x}_i^0)}{\sum_{j=1}^{P}\mu_{ij}^m\,K(\bar{x}_j, \bar{x}_i^0)}, \quad i = 1 \dots C, \tag{13}$$
$$\mu_{ij} = \left[\sum_{h=1}^{C}\left(\frac{1 - K(\bar{x}_j, \bar{x}_i^0)}{1 - K(\bar{x}_j, \bar{x}_h^0)}\right)^{1/(m-1)}\right]^{-1} \text{if } \bar{x}_j \ne \bar{x}_i^0; \quad \mu_{ij} = 1,\ \mu_{kj} = 0\ (k \ne i)\ \text{if } \bar{x}_j = \bar{x}_i^0; \quad i = 1 \dots C,\ j = 1 \dots P. \tag{14}$$

Using the index $ts$ defined in Eq. (15), with $[ts]$ the required value of $ts$ and $r$ denoting the $r$th loop, the clustering phase is continued until $ts \le [ts]$:

$$ts = \big|J^{(r)}_{\mathrm{KFCM}} - J^{(r-1)}_{\mathrm{KFCM}}\big| \big/ J^{(r-1)}_{\mathrm{KFCM}}. \tag{15}$$

The specification of the optimal centers and their membership values as mentioned above is detailed in Appendix A of [12].
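Under the assumptions of a Gaussian kernel and illustrative choices of $m$, $\sigma$, the stop tolerance, and a deterministic initialization, the update laws (13)-(15) can be sketched as follows (a minimal NumPy sketch, not the exact implementation of [31]; the small clamp only approximates the hard assignment of the degenerate case $\bar{x}_j = \bar{x}_i^0$ in Eq. (14)):

```python
import numpy as np

def kfcm_k(X, C, m=2.0, sigma=1.0, tol=1e-4, max_iter=100):
    """KFCM-K sketch; returns (centers, memberships U of shape P x C)."""
    X = np.asarray(X, float)
    centers = X[:C].copy()                     # deterministic init (an assumption)
    J_prev = None
    for _ in range(max_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # P x C
        K = np.exp(-d2 / sigma ** 2)           # Gaussian kernel values
        dist = np.maximum(1.0 - K, 1e-12)      # 1 - K(x_j, c_i), clamped
        ratio = dist[:, :, None] / dist[:, None, :]                 # P x C x C
        U = 1.0 / (ratio ** (1.0 / (m - 1.0))).sum(-1)              # Eq. (14)
        w = (U ** m) * K
        centers = (w.T @ X) / w.sum(0)[:, None]                     # Eq. (13)
        J = 2.0 * ((U ** m) * (1.0 - K)).sum()                      # Eq. (11)
        if J_prev is not None and abs(J - J_prev) / max(abs(J_prev), 1e-12) <= tol:
            break                              # stop index ts of Eq. (15)
        J_prev = J
    return centers, U
```

As in standard fuzzy C-means, the memberships of each sample sum to one over the clusters, while the kernel weights keep far-away samples from dragging a centroid.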

2.3 Setting up the output data clusters

The result of the clustering process in the input data space is the vector of input cluster centroids $[\bar{x}_1^0, \dots, \bar{x}_C^0]$ of the corresponding data clusters, denoted $\Gamma_1, \dots, \Gamma_C$, respectively. Let $A_1, \dots, A_C$ be the input fuzzy sets established via $\bar{x}_1^0, \dots, \bar{x}_C^0$, respectively [12, 16]. The membership value of $\tilde{x}_{il}$ belonging to $A_k$ is inferred from Eq. (14):

$$\bar{\mu}_{ki}(\tilde{x}_{il}) = \left[\sum_{h=1}^{C}\left(\frac{1 - K(\tilde{x}_{il}, x^0_{kl})}{1 - K(\tilde{x}_{il}, x^0_{hl})}\right)^{1/(m-1)}\right]^{-1}, \quad k = 1 \dots C,\ i = 1 \dots P,\ l = 1 \dots n. \tag{16}$$

Following the product law, the membership value of $\bar{x}_q$ belonging to $A_k$ is

$$\mu_{kq}(\bar{x}_q) = \prod_{l=1}^{n} \bar{\mu}_{kq}(\tilde{x}_{ql}), \quad k = 1 \dots C,\ q = 1 \dots P, \tag{17}$$

and its normalized membership value is as follows:

$$N_k(\bar{x}_q) = \mu_{kq}(\bar{x}_q) \Big/ \sum_{h=1}^{C}\mu_{hq}(\bar{x}_q), \quad q = 1 \dots P,\ k = 1 \dots C. \tag{18}$$

The membership of each data sample in each cluster, determined by Eqs. (16)-(18), is then used to specify the hard distribution status of the data samples in each cluster. This in turn is used to specify the coefficient vector $a$ of the hyperplanes (the output data clusters) $w_k(\cdot)$, $k = 1 \dots C$. The $i$th data sample is hard-distributed into the $k$th data cluster if

$$N_k(\bar{x}_i) = \max_{h = 1 \dots C} N_h(\bar{x}_i), \quad i = 1 \dots P,\ k = 1 \dots C. \tag{19}$$

From the $t_k$ data samples hard-distributed into the $k$th data cluster, and by using the least-mean-squares method, the coefficient vector $a = [a_0, a_1, \dots, a_n]^T = [a_0, \bar{a}]^T$ of $w_k(\cdot)$ is specified as the solution of Eq. (20):

$$\begin{aligned}
a_n\sum_{i=1}^{t_k}\tilde{x}_{in} + a_{n-1}\sum_{i=1}^{t_k}\tilde{x}_{i(n-1)} + \dots + a_1\sum_{i=1}^{t_k}\tilde{x}_{i1} + a_0 t_k &= \sum_{i=1}^{t_k} y_i \\
a_n\sum_{i=1}^{t_k}\tilde{x}_{in}\tilde{x}_{i1} + a_{n-1}\sum_{i=1}^{t_k}\tilde{x}_{i(n-1)}\tilde{x}_{i1} + \dots + a_1\sum_{i=1}^{t_k}\tilde{x}_{i1}^2 + a_0\sum_{i=1}^{t_k}\tilde{x}_{i1} &= \sum_{i=1}^{t_k} y_i \tilde{x}_{i1} \\
&\ \ \vdots \\
a_n\sum_{i=1}^{t_k}\tilde{x}_{in}^2 + a_{n-1}\sum_{i=1}^{t_k}\tilde{x}_{i(n-1)}\tilde{x}_{in} + \dots + a_1\sum_{i=1}^{t_k}\tilde{x}_{i1}\tilde{x}_{in} + a_0\sum_{i=1}^{t_k}\tilde{x}_{in} &= \sum_{i=1}^{t_k} y_i \tilde{x}_{in}
\end{aligned} \tag{20}$$

Finally, the value of the hyperplane $w_k$ corresponding to $\bar{x}_i$ is calculated by Eq. (21):

$$w_k(\bar{x}_i) = a_0 + \bar{a}^T\bar{x}_i. \tag{21}$$
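The normal equations (20) can be solved by any least-squares routine; below is a minimal sketch using `np.linalg.lstsq` in place of the written-out system (the function names are illustrative):

```python
import numpy as np

def fit_hyperplane(Xk, yk):
    """Eq. (20): return (a0, a_bar) minimizing sum_i (a0 + a.x_i - y_i)^2."""
    Xk, yk = np.asarray(Xk, float), np.asarray(yk, float)
    A = np.hstack([np.ones((len(Xk), 1)), Xk])   # prepend the bias column for a0
    coef, *_ = np.linalg.lstsq(A, yk, rcond=None)
    return coef[0], coef[1:]

def hyperplane_value(a0, a_bar, x):
    """Eq. (21): w_k(x) = a0 + a_bar^T x."""
    return float(a0 + np.dot(a_bar, x))
```

Solving the least-squares problem directly is numerically equivalent to solving the normal equations of Eq. (20) but better conditioned.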

2.4 Structure of ANFIS

As mentioned above, the ANFIS approximating the mapping $f: X \to Y$ is derived from $M$ fuzzy rules as in Eq. (22):

$$R^i:\ \text{IF } \tilde{x}_{q1} \text{ is } A_1^i(\tilde{x}_{q1})\ \text{AND} \dots \text{AND}\ \tilde{x}_{qn} \text{ is } A_n^i(\tilde{x}_{qn})\ \text{THEN } y_q^i \text{ is } B^i(\bar{x}_q), \quad i = 1 \dots M,\ M \le C, \tag{22}$$

where $A_l^i(\tilde{x}_{ql})$, $i = 1 \dots M$, $l = 1 \dots n$, is the membership value of $\tilde{x}_{ql}$ belonging to input fuzzy set $A^i$, i.e., $A_l^i(\tilde{x}_{ql}) \equiv \bar{\mu}_{iq}(\tilde{x}_{ql})$ in Eq. (16); $B^i(\bar{x}_q)$ is the corresponding output fuzzy set of data sample $\bar{x}_q$; and $\bar{x}_q = [\tilde{x}_{q1} \dots \tilde{x}_{qn}]^T$, $q = 1 \dots P$.

In the fuzzification phase, the membership value of $\bar{x}_q$ belonging to input fuzzy set $A^i$, denoted $A^i(\bar{x}_q) \equiv \mu_{iq}(\bar{x}_q)$, is specified by Eq. (17). For the defuzzification, if the center-average method is used, the output for the $q$th data sample is expressed via the membership values of $\bar{x}_q$ in the input fuzzy space as follows:

$$\hat{y}_q(\bar{x}_q) = \sum_{i=1}^{M} y_q^i\,\mu_{iq}(\bar{x}_q) \Big/ \sum_{i=1}^{M}\mu_{iq}(\bar{x}_q), \tag{23}$$

where $y_q^i = w^i(\bar{x}_q)$ is the value of hyperplane $w^i$ corresponding to data sample $\bar{x}_q$, calculated by Eq. (21).

Finally, all of the above can be depicted via the ANFIS with five layers, denoted D, CL, $\Pi$, N, and S in Figure 2. Layer D (data) has $n$ input nodes corresponding to the $n$ elements of data vector $\bar{x}_i = [x_{i1} \dots x_{in}]^T$, $i = 1 \dots P$, while its outputs are the corresponding normalized values given by Eq. (1). Layer CL (clustering) expresses the clustering process; its result is $C$ clusters with $C$ corresponding cluster centroids $\bar{x}_1^0, \dots, \bar{x}_C^0$, to which $C$ fuzzy sets $A_1, \dots, A_C$ are assigned. The output of this layer is the membership value of $\bar{x}_i$ calculated for each dimension $\tilde{x}_{i1} \dots \tilde{x}_{in}$ via Eq. (16). Layer $\Pi$ (product) specifies membership values based on Eq. (17). Layer N (normalization) estimates the normalized membership value of a data sample belonging to each fuzzy set by Eq. (18). Layer S (specifying) estimates the output of the ANFIS by any well-known method: in the case of center-average defuzzification it is calculated by Eq. (23), while it is specified by Eq. (24) if the "winner takes all" law is employed:

$$\hat{y}_i = w_k(\bar{x}_i), \quad i = 1 \dots P, \tag{24}$$

where $w_k(\bar{x}_i)$ is the value of the $k$th hyperplane corresponding to input data sample $\bar{x}_i$, Eq. (21), and $k$ is the index of the data cluster in which data sample $\bar{x}_i$ attains its maximum membership, specified via $N_{(\cdot)}(\bar{x}_i)$ as in Eq. (25):

$$N_k(\bar{x}_i) = \max_{h = 1 \dots C} N_h(\bar{x}_i). \tag{25}$$

Figure 2.

Structure of the ANFIS.
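The five-layer forward pass described above can be sketched as follows, assuming the Gaussian kernel of Eq. (16), $m = 2$, and center-average defuzzification, Eq. (23); all names and the choice of $\sigma$ are illustrative, not the chapter's exact implementation:

```python
import numpy as np

def anfis_forward(x, centers, planes, sigma=1.0, m=2.0):
    """x: (n,) normalized input; centers: (C, n); planes: list of (a0, a_bar)."""
    x, centers = np.asarray(x, float), np.asarray(centers, float)
    # Layer CL: per-dimension memberships from kernel distances, Eq. (16)
    dist = np.maximum(1.0 - np.exp(-(x - centers) ** 2 / sigma ** 2), 1e-12)
    ratio = dist[:, None, :] / dist[None, :, :]          # C x C x n
    mu_dim = 1.0 / (ratio ** (1.0 / (m - 1.0))).sum(1)   # C x n
    # Layer Pi: product law over the n dimensions, Eq. (17)
    mu = mu_dim.prod(axis=1)
    # Layer N: normalized firing strengths, Eq. (18)
    N = mu / mu.sum()
    # Layer S: center-average over hyperplane consequents, Eqs. (21) and (23)
    w = np.array([a0 + np.dot(a_bar, x) for a0, a_bar in planes])
    return float(np.dot(N, w))
```

With one rule per cluster ($M = C$), the output is a membership-weighted blend of the cluster hyperplanes, which reduces to a single hyperplane at a cluster centroid.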


3. Building ANFIS from a noisy measured database

This section presents the recurrent mechanism together with the related algorithms, consisting of one for ANFIS-based noise filtering and one for building the ANFIS, shown in [16].

3.1 Convergence condition of the ANFIS-based approximation

Deriving from a given IDS having $P$ input-output data samples $(\bar{x}_i, y_i)$, $\bar{x}_i = [x_{i1} \dots x_{in}] \in \mathbb{R}^n$, $y_i \in \mathbb{R}$, $i = 1 \dots P$, with the data normalization of Def. 1, the $\overline{\mathrm{IDS}}$ is built, from which a CDS is created as depicted in Section 2. It should be noted that IN is often considered as disturbances distributed uniformly in a signal source, which impacts the created CDS negatively: in general, IN raises the number of critical data samples in the CDS. The negative impact of IN on the convergence ability of ANFIS training is formulated via Theorem 1 as follows.

Theorem 1 [16]: Let's consider a given $\overline{\mathrm{IDS}}$ deriving from an IDS and an ANFIS uniformly approximating an unknown mapping $f: X \to Y$ expressed by the IDS. The ANFIS is built via a CDS built from the $\overline{\mathrm{IDS}}$. Assume that $X$ is compact. A necessary condition for the approximation to converge to a desired error $[E]$ is that there is no critical data sample in the CDS.

Proof: Let's consider a cluster $\Gamma_k$ belonging to the CDS. Assume that $(\bar{x}_p, y_p) \in \Gamma_k$ is a critical data sample (see Def. 5); it has to be proven that the ANFIS will not converge to $[E]$.

From Eq. (3) it can be inferred that

$$\mathrm{RMSE} \ge P^{-0.5}\Big(\big(\hat{y}_p(\bar{x}_p) - f(\bar{x}_p)\big)^2 + \sum_{i=1}^{Q_{kp}}\big(\hat{y}_i(\bar{x}_i) - f(\bar{x}_i)\big)^2\Big)^{0.5}. \tag{26}$$

Because the ANFIS is a uniform approximation of $f: X \to Y$ and $X$ is compact, the ANFIS is continuous in $\Gamma_{k\backslash p}$, so Eq. (27) can be inferred from Eq. (26):

$$\mathrm{RMSE} \ge P^{-0.5}\Big(\big(\hat{y}_p(\bar{x}_p) - f(\bar{x}_p)\big)^2 + Q_{kp}\,\bar{\varepsilon}^2\Big)^{0.5}. \tag{27}$$

It should be noted that the ANFIS is a uniform approximation of $f: X \to Y$ in $\Gamma_{k\backslash p}$, that $(\bar{x}_p, y_p) \in \Gamma_k$ is a critical data sample, and that the samples in $\Gamma_k^A$ are distributed closely. As a result, Eq. (28) can be inferred:

$$\hat{y}_p(\bar{x}_p) \in \Big[\min_{y_i \in \Gamma^B_{k\backslash p}} y_i,\ \max_{y_i \in \Gamma^B_{k\backslash p}} y_i\Big]. \tag{28}$$

In the case $y_p > \max_{y_i \in \Gamma^B_{k\backslash p}} y_i$ (see Def. 5), the following can be obtained from Eqs. (27) and (28):

$$\mathrm{RMSE} \ge P^{-0.5}\Big(\big(\max_{y_i \in \Gamma^B_{k\backslash p}} y_i - y_p\big)^2 + Q_{kp}\,\bar{\varepsilon}^2\Big)^{0.5}. \tag{29}$$

From Eqs. (8) and (29) it can be concluded that $\mathrm{RMSE} > [E]$. Similarly, in the case $y_p < \min_{y_i \in \Gamma^B_{k\backslash p}} y_i$ (see Def. 5), the following can be inferred from Eqs. (27) and (28):

$$\mathrm{RMSE} \ge P^{-0.5}\Big(\big(\min_{y_i \in \Gamma^B_{k\backslash p}} y_i - y_p\big)^2 + Q_{kp}\,\bar{\varepsilon}^2\Big)^{0.5}. \tag{30}$$

From Eqs. (9) and (30), $\mathrm{RMSE} > [E]$ is again implied.

Finally, it can be concluded that if at least one critical data sample exists in the CDS, the ANFIS cannot converge to the required error $[E]$. □

3.2 Algorithm for filtering IN

An essential advantage of the clustering algorithms presented in [30, 31] is their convergence rate. However, the quality of an ANFIS based on a CDS deriving from them is sensitive to the IDS attributes; via Theorem 1, it can be observed that the main reason for this status is the appearance of critical data samples. Besides, regarding the preprocessing of the IDS shown in [9], in spite of its positive filtering effectiveness, the computational cost of the method is quite high. A promising solution for the above issues can be found in [16], where the recurrent mechanism illustrated in Figure 3 was employed. The recurrent mechanism has two phases performed synchronously: filtering IN in the database and building the ANFIS based on the filtered database.

Figure 3.

Flowchart of the FIN-ANFIS consisting of three main phases (clustering, establishing and estimating the ANFIS, and filtering IN) performed simultaneously.

Firstly, an adaptive online impulse noise filter (AOINF) is proposed. The recurrent mechanism is then depicted via the algorithm named FIN-ANFIS, consisting of three main phases: filtering IN, clustering data, and building the ANFIS. In this way, the filtered $\overline{\mathrm{IDS}}$ is used to build the ANFIS; the created ANFIS is then applied as an updated filter to refilter the $\overline{\mathrm{IDS}}$, and so on, until either the process converges or a stop condition is satisfied. To guarantee convergence and stability, an update law for the AOINF is derived via Lyapunov stability theory.

Remark 1. The ANFIS cannot converge to the required error $[E]$ if there is at least one critical data sample in the CDS (see Theorem 1). The clustering strategy of the FIN-ANFIS therefore focuses on preventing critical data samples from appearing during the clustering process, along with seeking to eliminate the critical data samples already formed in the CDS. As a result, in each loop of the ANFIS training process, the strategy directs the clustering toward a new CDS in which either there is no critical data sample or there are fewer of them. Theorem 2 shows the convergence condition of the training process.

Theorem 2 [16]: Following the flowchart in Figure 3, the ANFIS-based approximation of an unknown mapping $f: X \to Y$ expressed by the given IDS is built via a CDS deriving from the $\overline{\mathrm{IDS}}$ (the normalized IDS). Let $Q$ be the number of critical data points in the CDS at the $r$th loop. If, at these critical data samples, the data output is filtered by law (31), then the RMSE of Eq. (3) of the ANFIS will converge to $[E]$:

$$y_i^{(r+1)} = y_i^{(r)} - \rho\,\mathrm{sgn}\big(y_i - \hat{y}_i^{(r)}\big), \quad i = 1 \dots Q. \tag{31}$$

In the above, $\rho > 0$ is the update coefficient, to be optimized by any well-known optimization method; $y_i - \hat{y}_i^{(r)}$ is the error between the $i$th data output and the corresponding ANFIS-based output; and the function $\mathrm{sgn}(\cdot)$ is defined as

$$\mathrm{sgn}(z) = \begin{cases} 1 & \text{if } z > 0, \\ -1 & \text{otherwise.} \end{cases} \tag{32}$$

Proof: A Lyapunov candidate function is chosen as in Eq. (33), from which expression (34) can be inferred:

$$e(X) = X^T X, \tag{33}$$
$$\dot{e}(X) = 2\sum_{i=1}^{P-Q} X_i\dot{X}_i + 2\sum_{j=1}^{Q} X_j\dot{X}_j. \tag{34}$$

In the above, $\dot{\Xi} = d\Xi/dt$ denotes the derivative of $\Xi$ with respect to time, and $X$ is the vector of state variables deriving from the $\overline{\mathrm{IDS}}$ as follows:

$$X_i = y_i - \hat{y}_i; \quad X = [X_1 \dots X_P]^T. \tag{35}$$

From update law (31), Eq. (34) can be rewritten as Eq. (36):

$$\dot{e}(X) = 2\sum_{i=1}^{P-Q} X_i\dot{X}_i + 2\sum_{j=1}^{Q} X_j\dot{y}_j = 2\sum_{i=1}^{P-Q} X_i\dot{X}_i - 2\rho\sum_{j=1}^{Q} X_j\,\mathrm{sgn}(X_j). \tag{36}$$

It should be noted that the update process is performed only at the critical data points; hence, Eq. (36) can be rewritten as follows:

$$\dot{e}(X) = -2\rho\sum_{j=1}^{Q} X_j\,\mathrm{sgn}(X_j) = -2\rho\sum_{j=1}^{Q} |X_j| < 0. \tag{37}$$

In addition, the following can be implied from Eqs. (33)-(35):

$$e(0) = 0; \quad e(X) \ge 0\ \forall X. \tag{38}$$

Finally, it can be inferred from Eqs. (37) and (38) that $e(X) \to 0$ is a stable Lyapunov process. Hence, from Eq. (3), the required result follows:

$$\mathrm{RMSE} = \lim_{r \to [r]}\big(e(X)\,P^{-1}\big)^{0.5} \le [E]. \quad \square \tag{39}$$

Remark 2. (1) To enhance the ability to adapt to the noise status of the $\overline{\mathrm{IDS}}$, $\rho$ in Eq. (31) is specified as follows:

$$\rho = \alpha\,\big|y_i - \hat{y}_i^{(r)}\big|, \tag{40}$$

where $\alpha \ge 0$ is an adaptive coefficient chosen by the designer. Thus, $\rho = \rho(X_i, t)$ takes part in adjusting the filtering level $\Delta_i = y_i^{(r+1)} - y_i^{(r)}$. (2) It can be inferred from Theorem 1 that disposing of the critical data samples in the CDS needs to be carried out. Therefore, the useful solution offered in Theorem 2 via update law (31) is employed to establish the filtering mechanism of the AOINF, as shown below.

The algorithm AOINF for filtering IN:

  1. Look for critical data samples in the CDS to specify the worst data point (WP), at which the continuity status of the ANFIS is worst:

$$\mathrm{WP}\big(\bar{x}_i^{WP}, y_i^{WP}\big)\ \text{such that}\ \big|y_i^{WP} - \hat{y}_i^{WP}\big| = \max_{h = 1 \dots P}\big|y_h - \hat{y}_h\big|. \tag{41}$$

  2. Specify the data samples satisfying condition (42):

$$\big|y_q - \hat{y}_q\big| \ge \frac{1}{\sigma}\big|y_i^{WP} - \hat{y}_i^{WP}\big|, \quad q = 1 \dots \bar{Q}. \tag{42}$$

In the above, $\hat{y}_i^{WP}$ is the ANFIS-based output and $y_i^{WP}$ the corresponding data output at the WP; $\sigma > 1$ is an adaptive coefficient (taken as 1.35 in the surveys shown in [16]).

  3. Use the update law (43) to filter the data samples satisfying condition (42):

$$y_q^{(r+1)} = y_q^{(r)} - \alpha\big|y_q - \hat{y}_q^{(r)}\big|\,\mathrm{sgn}\big(y_q - \hat{y}_q^{(r)}\big), \quad q = 1 \dots \bar{Q}. \tag{43}$$
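The three AOINF steps can be sketched as a single filtering pass (a hedged illustration: `alpha` and `sigma_a` stand for the $\alpha$ of Eq. (40) and the $\sigma$ of Eq. (42), and the update follows law (31) with $\rho = \alpha|y - \hat{y}|$; the names are not from [16]):

```python
import numpy as np

def aoinf_step(y, y_hat, alpha=0.5, sigma_a=1.35):
    """One AOINF pass: locate the WP, select candidates, pull them toward y_hat."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    err = np.abs(y - y_hat)
    wp = int(err.argmax())               # worst data point, Eq. (41)
    mask = err >= err[wp] / sigma_a      # samples satisfying condition (42)
    y_new = y.copy()
    # Update law (43): y <- y - alpha*|y - y_hat| * sgn(y - y_hat)
    y_new[mask] -= alpha * err[mask] * np.sign(y[mask] - y_hat[mask])
    return y_new, wp, mask
```

With `alpha` in (0, 1), each pass shrinks the error of the selected outputs by the factor `1 - alpha`, so repeated passes drive the outliers toward the ANFIS surface.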

3.3 Algorithm for building ANFIS

Figure 4 illustrates the establishment of the CDS from the $\overline{\mathrm{IDS}}$. It consists of (1) building the fuzzy clusters with centroids $\bar{x}_1^0 \dots \bar{x}_C^0$, i.e., the input data clusters (see Subsection 2.2); (2) estimating the hard distribution of samples in each input data cluster indicated by $\bar{x}_1^0 \dots \bar{x}_C^0$; and (3) building the hyperplanes, i.e., the output data clusters (see Subsection 2.3), in the output data space using the specified hard distribution status. Based on the created CDS, Figure 3 shows the flowchart of the FIN-ANFIS consisting of three main phases: filtering IN, building the CDS deriving from the filtered $\overline{\mathrm{IDS}}$, and forming the ANFIS.

Figure 4.

A process of establishing the CDS deriving from the $\overline{\mathrm{IDS}}$.

3.4 Algorithm FIN-ANFIS

Initializing: the initial index of the loop process, $r = 1$; the number of clusters $C \ll P - 1$; $J^{(r)}_{\mathrm{KFCM}} = \Omega$, where $\Omega$ is a real number, $\Omega > [ts]$; and the initial cluster centroids corresponding to $r = 1$, chosen randomly:

$$\bar{x}_i^{0(r)} = \big[x_{i1}^0 \dots x_{in}^0\big], \quad 1 \le i \le C. \tag{44}$$

Build the input data clusters:

  1. Establish the input data clusters:

    Based on the known $\bar{x}_i^{0(r)}$, calculate $\mu_{ij}$ via Eq. (14) to update $\bar{x}_i^{0(r)}$ via Eq. (13).

  2. Specify the stop condition of the clustering phase via ts in Eq. (15):

    If $ts \le [ts]$, go to Step 3; if $ts > [ts]$ and $r < [r]$, set $r \leftarrow r + 1$ and return to Step 1; if $ts > [ts]$, $r = [r]$, and $C < P - 1$, set $C \leftarrow C + 1$, $r \leftarrow 1$, and return to Step 1; and if $ts > [ts]$, $r = [r]$, and $C = P - 1$, stop (no convergence).

    Build ANFIS:

  3. Build and estimate ANFIS:

    Establish ANFIS as presented in Subsection 2.4.

    Calculate $\mathrm{RMSE} = \big(P^{-1}\sum_{i=1}^{P}(\hat{y}_i(\bar{x}_i) - y_i)^2\big)^{0.5}$, in which $\hat{y}_i(\bar{x}_i)$ is the ANFIS-based output, while $y_i$ is the data output. If $\mathrm{RMSE} \le [E]$, stop (the ANFIS is the desired one); if $\mathrm{RMSE} > [E]$ and $C < P - 1$, go to Step 4; and if $\mathrm{RMSE} > [E]$ and $C = P - 1$, stop (no convergence).

  4. Set up a new cluster centroid:

    Based on Eq. (41), seek the worst data point $\mathrm{WP}(\bar{x}_i^{WP}, y_i^{WP})$; set $C \leftarrow C + 1$, $r \leftarrow 1$; set up a new cluster centroid $\bar{x}_C^0$ in the neighborhood of the WP; and go to Step 5.

  5. Filter IN:

    Call the algorithm AOINF and return to Step 1.
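The overall loop can be summarized by the following runnable skeleton, in which the clustering, ANFIS-building, and IN-filtering phases are passed in as callables; the bookkeeping of $r$, $[ts]$, and the WP-based centroid placement is deliberately simplified, so this is an illustration of the control flow rather than the exact FIN-ANFIS of [16]:

```python
import numpy as np

def fin_anfis(X, y, cluster, build, filter_in, E_req, max_C, C=2):
    """Loop: cluster -> build/estimate ANFIS -> filter IN, growing C as needed."""
    y = np.asarray(y, float)
    rmse = float("inf")
    model = None
    while C <= max_C:
        cds = cluster(X, C)                 # clustering phase (Steps 1-2)
        model = build(cds, X, y)            # Step 3: form and estimate the ANFIS
        y_hat = np.asarray(model(X), float)
        rmse = float(np.sqrt(np.mean((y_hat - y) ** 2)))
        if rmse <= E_req:                   # compare Eq. (3) against [E]
            return model, rmse              # the desired ANFIS
        y = filter_in(y, y_hat)             # Step 5: AOINF on the outputs
        C += 1                              # Step 4: one more cluster centroid
    return model, rmse                      # stopped without converging
```

The key point mirrored here is that filtering and training share one loop: each retrained ANFIS supplies the predictions the filter needs, and the filtered outputs feed the next training pass.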


4. ANFIS for managing bearing faults online

An application of ANFIS to estimating bearing faults online, relying on its ability to extract meaningful information from the big data of intelligent structures, is shown in this section. Estimating bearing status online in order to hold the initiative in exploiting such systems is meaningful, because bearings are important machine elements in almost all mechanical structures.

In [17], an online bearing damage identification method (ASSBDIM) based on ANFIS, singular spectrum analysis (SSA), and sparse filtering (SF) was shown. The method consists of two phases, offline and online. In the offline phase, the ANFIS identifies the dynamic response of the mechanical system in the individual bearing statuses; the trained ANFIS is then used to estimate the real status in the online phase. These aspects are detailed in the following paragraphs.

4.1 Some related theories

4.1.1 Singular spectrum analysis

By using SSA, a set of independent additive time series can be generated from a given time series [41, 42, 43]. This is clarified via the SSA algorithm presented in [42], as follows.

  1. Embedding:

    Let's consider a given time series of $N_0$ data points, $z_0, z_1, \dots, z_{N_0-1}$. From a selected window length $L_0$, $1 < L_0 < N_0$, the sliding vectors $X_j = [z_{j-1}, z_j, \dots, z_{j+L_0-2}]^T$, $j = 1, \dots, K$, $K = N_0 - L_0 + 1$, and the matrix $X$ in Eq. (45) are built:

    $$X = \begin{bmatrix} z_0 & z_1 & \cdots & z_{N_0-L_0} \\ z_1 & z_2 & \cdots & z_{N_0-L_0+1} \\ \vdots & \vdots & & \vdots \\ z_{L_0-2} & z_{L_0-1} & \cdots & z_{N_0-2} \\ z_{L_0-1} & z_{L_0} & \cdots & z_{N_0-1} \end{bmatrix}. \tag{45}$$

  2. Building the trajectory matrix:

    From Eq. (45), one builds the matrix $S = XX^T \in \mathbb{R}^{L_0 \times L_0}$. The vectors $V_i$ are then constructed, $V_i = X^T U_i / \sqrt{\lambda_i}$, $i = 1 \dots d$, in which $\lambda_1, \dots, \lambda_d$ are the non-zero eigenvalues of $S$ arranged in descending order and $U_1, \dots, U_d$ are the corresponding eigenvectors. A decomposition of the trajectory matrix into a sum of matrices, $X = \sum_{i=1}^{d} E_i$, is then established, where $E_i = \sqrt{\lambda_i}\,U_i V_i^T$.

  3. Reconstruction:

    Each elementary matrix is transformed into a principal component of length $N_0$ by applying a linear transformation known as diagonal averaging, or Hankelization. Let $Z \in \mathbb{R}^{L_0 \times K}$ be a matrix of elements $z_{i,j}$. With $L = \min(L_0, K)$ and $K^* = \max(L_0, K)$, $Z$ can be transformed into the reconstructed time series $g_0, g_1, \dots, g_{N_0-1}$ as in Eq. (46):

    $$g_k = \begin{cases} \dfrac{1}{k+1}\displaystyle\sum_{m=1}^{k+1} z_{m,\,k-m+2}, & 0 \le k < L - 1, \\[6pt] \dfrac{1}{L}\displaystyle\sum_{m=1}^{L} z_{m,\,k-m+2}, & L - 1 \le k < K^*, \\[6pt] \dfrac{1}{N_0 - k}\displaystyle\sum_{m=k-K^*+2}^{N_0-K^*+1} z_{m,\,k-m+2}, & K^* \le k < N_0. \end{cases} \tag{46}$$
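The three SSA steps above can be sketched as follows (a minimal NumPy illustration: the eigenvalue threshold for "non-zero" is an assumption, and the grouping of elementary components into trend/periodic/noise parts is left to the caller):

```python
import numpy as np

def ssa_components(z, L0, d=None):
    """Decompose series z into reconstructed components g_1..g_d (Eqs. 45-46)."""
    z = np.asarray(z, float)
    N0 = len(z)
    K = N0 - L0 + 1
    # Step 1 (embedding): trajectory matrix X of Eq. (45), shape L0 x K
    X = np.column_stack([z[j:j + L0] for j in range(K)])
    # Step 2 (decomposition): eigen-decomposition of S = X X^T
    lam, U = np.linalg.eigh(X @ X.T)
    order = np.argsort(lam)[::-1]              # descending eigenvalues
    lam, U = lam[order], U[:, order]
    d = d or int((lam > 1e-10).sum())          # keep the non-zero ones (assumed cutoff)
    comps = []
    for i in range(d):
        Vi = X.T @ U[:, i] / np.sqrt(lam[i])
        Ei = np.sqrt(lam[i]) * np.outer(U[:, i], Vi)   # elementary matrix
        # Step 3 (reconstruction): diagonal averaging of E_i, Eq. (46)
        g = np.array([np.mean(Ei[::-1, :].diagonal(k - L0 + 1))
                      for k in range(N0)])
        comps.append(g)
    return comps
```

Because diagonal averaging is linear and the elementary matrices sum back to the trajectory matrix, the kept components sum back to the original series when all non-zero eigenvalues are retained.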

4.1.2 Sparse filtering

In this work, SF is used to extract features from a given time-series-typed measured database. Relying on an objective function defined via the features, the method tries to specify good features such that the objective function is minimized [11, 44, 45]. To deploy SF effectively, a process with the two following phases is operated: preprocessing the data by the whitening method [46] is carried out in the first phase, and an H-by-L matrix $F$ of real numbers depicting the relation between each of the $H$ training data samples and the $L$ selected features is established in the second phase. The SF presented in [11, 45] is detailed as follows.

In the first phase, a training set of $H$ data samples $x_i \in \mathbb{R}^{1 \times N}$, $i = 1 \dots H$, in the form of a matrix $S \in \mathbb{R}^{H \times N}$, is established from the given time-series-typed measured dataset. By adopting the whitening method [46], which employs the eigenvalue decomposition of the covariance matrix $\mathrm{cov}(S) = \bar{Z} D \bar{Z}^T$, one then tries to make the data samples less correlated with each other and to speed up the convergence of the sparse filtering process. In this expression, $D$ is the diagonal matrix of eigenvalues and $\bar{Z}$ is the orthogonal matrix of eigenvectors of $\mathrm{cov}(S)$. Finally, the whitened training set, denoted $S_{\mathrm{white}}$, is formed as in Eq. (47):

$$S_{\mathrm{white}} = \bar{Z} D^{-1/2} \bar{Z}^T S. \tag{47}$$

Subsequently, in the second phase, SF maps each data sample $x_i \in \mathbb{R}^{1 \times N}$ of $S_{\mathrm{white}}$ onto $L$ features $f_i$, $i = 1 \dots L$, by means of a weight matrix $\bar{W} \in \mathbb{R}^{N \times L}$. A linear relation between the data samples in $S_{\mathrm{white}}$ and the $L$ features is expressed via $\bar{W}$ as in Eq. (48), in which $F \in \mathbb{R}^{H \times L}$ is called the feature distribution matrix:

$$F = S_{\mathrm{white}}\,\bar{W}. \tag{48}$$

Optimizing the feature distribution in $F$ is then performed as detailed in [45]. The features in each column of $F$ are normalized by dividing them by their $l_2$-norm, $\tilde{f}_l = f_l / \|f_l\|_2$, $l = 1 \dots L$. For each row of the obtained matrix, the features per example are then normalized by computing $\hat{f}_i = \tilde{f}_i / \|\tilde{f}_i\|_2$, $i = 1 \dots H$, so that they lie on the unit $l_2$-ball. The features normalized by the two above steps are optimized for sparseness using the $l_1$-penalty, yielding a matrix $\hat{F} \in \mathbb{R}^{H \times L}$. A loop process is then maintained via Eq. (48), in which $\hat{F}$ takes the role of $F$, until the optimal weights of $\bar{W}$ are established that minimize the objective function $J_{SF}(\bar{W})$ of Eq. (49); finally, $\hat{F}$ is re-signed $F$:

$$J_{SF}(\bar{W}) = \sum_{i=1}^{H}\sum_{j=1}^{L}\big|\hat{F}_{ij}\big|. \tag{49}$$
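The whitening of Eq. (47) and the normalization-plus-$l_1$ objective of Eqs. (48)-(49) can be sketched as follows (row-wise samples are assumed, so the whitening matrix multiplies on the right; optimizing $\bar{W}$, done with an off-the-shelf optimizer in [45], is omitted, and the small clamps guarding against zero norms are assumptions):

```python
import numpy as np

def whiten(S):
    """Eq. (47): whiten row-wise samples using cov(S) = Z D Z^T."""
    S = np.asarray(S, float)
    Sc = S - S.mean(axis=0)                  # center before whitening
    D, Z = np.linalg.eigh(np.cov(Sc, rowvar=False))
    D = np.maximum(D, 1e-12)                 # guard tiny/negative eigenvalues
    return Sc @ Z @ np.diag(D ** -0.5) @ Z.T

def sf_objective(S_white, W):
    """Eqs. (48)-(49): feature matrix, two-step l2 normalization, l1 penalty."""
    F = S_white @ W                                          # Eq. (48)
    F = F / np.maximum(np.linalg.norm(F, axis=0), 1e-12)     # per-feature l2
    F = F / np.maximum(np.linalg.norm(F, axis=1, keepdims=True), 1e-12)
    return float(np.abs(F).sum())                            # Eq. (49)
```

After the two normalization steps, every row lies on the unit $l_2$-ball, so the $l_1$ sum rewards rows whose energy concentrates in few features.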

4.2 The ASSBDIM

The ASSBDIM focuses on online bearing fault estimation. This subsection details the way the databases are set up and the algorithm ASSBDIM for online bearing fault estimation upon the built databases.

4.2.1 Building the databases for the ASSBDIM

A measured dataset deriving from the vibration of the mechanical system is established for each surveyed bearing fault type. Regarding $Q$ fault types, one obtains $Q$ original datasets as in Eq. (50):

$$\big[D_1, D_2, \dots, D_Q\big]^T, \tag{50}$$

where $D_i$ corresponds to the $i$th bearing fault type, $1 \le i \le Q$.

By using SSA on $D_i$, $m$ time series as in Eq. (51) are set up:

$$D_{i1}, D_{i2}, \dots, D_{im}, \quad i = 1 \dots Q, \tag{51}$$

where $m$ is a parameter selected by the designer. This is carried out by the three steps presented in Subsection 4.1.1, in which $D_i$ is used in the first step as the given time series of $N_0$ data points $z_0, z_1, \dots, z_{N_0-1}$ for building the trajectory matrix $X$ in Eq. (45). Because the mechanical vibration signal is prone to the low-frequency range [42], among the $m$ time series, the $(m-k)$ ones owning the highest frequencies are considered as noise. The $k$ remaining time series, as in Eq. (52), are hence kept to build the databases:

$$D_{i1}, D_{i2}, \dots, D_{ik}, \quad i = 1 \dots Q. \tag{52}$$

Specifying the optimal value of both k and m will be mentioned in Subsection 4.2.2.
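A minimal SSA decomposition consistent with the steps referenced above (trajectory matrix, SVD, diagonal averaging) can be sketched as follows; the helper name `ssa_components` and the ordering of components by singular value are illustrative assumptions, and the frequency-based selection of the $k$ retained series is left out:

```python
import numpy as np

def ssa_components(z, L0, m):
    """SSA sketch: embed the N0-point series z into the L0 x K trajectory
    matrix (cf. Eq. (45)), take the SVD, and Hankelize (diagonal-average)
    each of the first m rank-one terms back into a time series (Eq. (51))."""
    N0 = len(z)
    K = N0 - L0 + 1
    X = np.column_stack([z[j:j + L0] for j in range(K)])  # X[r, c] = z[r + c]
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for j in range(m):
        Xj = s[j] * np.outer(U[:, j], Vt[j])              # j-th rank-one term
        # diagonal averaging: mean over all entries with r + c = t
        comps.append(np.array([Xj[::-1].diagonal(t - L0 + 1).mean()
                               for t in range(N0)]))
    return np.array(comps)                                # shape (m, N0)
```

Summing all $\min(L_0, K)$ components recovers the original series exactly; in the chapter only the $k$ low-frequency components of Eq. (52) are retained as the denoised signal.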

For each time series in Eq. (52), say $D_{ij}$, $j=\overline{1,k}$, SF yields the feature distribution matrix as in Eq. (48), which is resigned $F_{ij}^{\omega} \in \mathbb{R}^{H\times L}$. By using this result for all the time series in Eq. (52), a new data matrix $\bar{D}_i$ as in Eq. (53) is formed, which is the input data space of the $i$th data subset corresponding to the $i$th bearing fault type:

$$\bar{D}_i = \left[F_{i1}^{\omega}\ F_{i2}^{\omega}\ \dots\ F_{ik}^{\omega}\right]_{H\times kL}. \tag{53}$$

By employing this way for all $Q$ surveyed bearing fault types, an input data space in the form of matrix (54) is established, which relates to building two offline databases, signed Off_DaB and Off_testDaB, as well as one online database, signed On_DaB, used for the algorithm ASSBDIM as follows:

$$\bar{D} = \begin{bmatrix} \omega_{11} & \omega_{12} & \dots & \omega_{1,kL} \\ \omega_{21} & \omega_{22} & \dots & \omega_{2,kL} \\ \vdots & \vdots & \ddots & \vdots \\ \omega_{QH-1,1} & \omega_{QH-1,2} & \dots & \omega_{QH-1,kL} \\ \omega_{QH,1} & \omega_{QH,2} & \dots & \omega_{QH,kL} \end{bmatrix}_{QH\times kL} \tag{54}$$

Namely, matrix $\bar{D}$ relates to the input data space (IDS), for which the databases for identifying the bearing status are built as follows. Firstly, by encoding the $i$th fault type by a real number $y_i$, the output data space (ODS) of the $i$th subset can be depicted by the vector $\bar{y}_i$ of $H$ elements $y_i$ as in Eq. (55):

$$\bar{y}_i = \left[y_i\ \dots\ y_i\right]^{T}_{H\times 1},\quad i=\overline{1,Q}. \tag{55}$$

Then, by combining Eq. (55) with Eq. (54), the input-output relation in the three datasets Off_DaB, Off_testDaB, and On_DaB can be described as in Eq. (56):

$$\text{database} = \left\{\text{IDS} \to \text{ODS}\right\} = \left\{\bar{D} \to \bar{y}\right\}. \tag{56}$$

In the above, the input space $\bar{D}$ comes from Eq. (54), while the output space $\bar{y}$ as in Eq. (57) is constituted of the vectors $\bar{y}_i \in \mathbb{R}^{H\times 1}$ in Eq. (55):

$$\bar{y} = \big[\underbrace{y_1, \dots, y_1}_{H}\ \dots\ \underbrace{y_Q, \dots, y_Q}_{H}\big]^{T}_{QH\times 1}. \tag{57}$$
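Assembling Eqs. (53)–(57) amounts to stacking matrices; a small sketch (the helper name `build_database` is hypothetical):

```python
import numpy as np

def build_database(feature_mats, labels):
    """For each fault type i, horizontally stack its k feature matrices
    F_i1..F_ik (each H x L) into Dbar_i (H x kL, Eq. (53)), then vertically
    stack the Q subsets into Dbar (QH x kL, Eq. (54)) paired with the
    encoded output vector ybar (QH x 1, Eqs. (55) and (57))."""
    D_rows, y_rows = [], []
    for mats, y in zip(feature_mats, labels):        # one entry per fault type
        Di = np.hstack(mats)                         # Eq. (53): H x kL
        D_rows.append(Di)
        y_rows.append(np.full((Di.shape[0], 1), y))  # Eq. (55): H copies of y_i
    return np.vstack(D_rows), np.vstack(y_rows)      # Eqs. (54) and (57)
```

The same routine serves the Off_DaB, Off_testDaB, and On_DaB of Eq. (56), differing only in which measuring windows feed the feature matrices.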

4.2.2 The algorithm ASSBDIM for estimating health of bearings

In the offline phase, by initializing the parameters in the vector $ps$ in Eq. (58) together with applying SSA and SF to the measuring data stream, the Off_DaB and Off_testDaB are built as in Eq. (56):

$$ps = \left[L_0\ N_0\ m\ k\ H\ L\right], \tag{58}$$

where $L_0$ and $N_0$ come from Eq. (45); $m$ and $k$ relate to Eqs. (51) and (52), respectively; $H$ and $L$ derive from Eq. (53).

An ANFIS built by the algorithm FIN-ANFIS (see Subsection 3.3) is utilized to identify the dynamic response of the mechanical system corresponding to the bearing damage statuses reflected by the Off_DaB. Optimizing the parameters in $ps$ in Eq. (58) is then performed using the percentage of correctly estimated samples ($Ac$) as in Eq. (59), the mean accuracy ($MeA$) as in Eq. (60), and the algorithm DE [47]:

$$Ac = 100 \times \frac{cr\_samples_n}{to\_samples_n}\ (\%), \tag{59}$$
$$MeA = 100 \times \frac{\sum_{n=1}^{Q} cr\_samples_n}{\sum_{n=1}^{Q} to\_samples_n}\ (\%), \tag{60}$$

where, corresponding to the $n$th damage type, $n=\overline{1,Q}$, $cr\_samples_n$ is the number of checking samples expressing correctly the real status of the bearing, while $to\_samples_n$ is the total number of checking samples used in the survey; $Q$ is the number of surveyed bearing fault types as mentioned in Eq. (50).
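Eqs. (59) and (60) reduce to ratios of counts; for instance:

```python
import numpy as np

def ac_mea(cr_samples, to_samples):
    """Eqs. (59)-(60): per-fault-type accuracy Ac_n and mean accuracy MeA
    from the counts of correctly estimated and total checking samples."""
    cr = np.asarray(cr_samples, dtype=float)
    to = np.asarray(to_samples, dtype=float)
    Ac = 100.0 * cr / to                  # Eq. (59), one value per fault type
    MeA = 100.0 * cr.sum() / to.sum()     # Eq. (60)
    return Ac, MeA
```

Note that $MeA$ is the sample-weighted mean of the $Ac_n$, so fault types with more checking samples influence it more.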

Based on the $MeA$, an objective function is defined as follows:

$$J = MeA_{ASSBDIM}\left(L_0, N_0, m, k, H, L\right) \to \max. \tag{61}$$

The Off_testDaB, the function $J$, and DE [47] are then employed to optimize the parameters in the vector $ps$, to get $\left[L_0\ N_0\ m\ k\ H\ L\right]_{opt}$.
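The DE search of Eq. (61) can be sketched with a plain DE/rand/1/bin loop; the chapter uses the ranking-based DE variant of [47], and the surrogate objective below merely stands in for the full train-and-test cycle that computes $MeA$:

```python
import numpy as np

def de_maximize(f, bounds, pop_size=30, gens=60, Fm=0.7, CR=0.9, seed=0):
    """Standard DE/rand/1/bin maximizing f over box bounds (a sketch;
    not the ranking-based variant of [47])."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            trial = np.clip(a + Fm * (b - c), lo, hi)   # mutation
            cross = rng.random(dim) < CR                # binomial crossover
            cross[rng.integers(dim)] = True             # keep at least one mutant gene
            trial = np.where(cross, trial, pop[i])
            ft = f(trial)
            if ft > fit[i]:                             # greedy selection (maximizing)
                pop[i], fit[i] = trial, ft
    best = int(np.argmax(fit))
    return pop[best], fit[best]

# Toy surrogate for J = MeA(ps), peaked at m = 30, k = 7 (illustration only);
# the real objective runs the full ASSBDIM train/test cycle on the Off_testDaB.
def mea_surrogate(ps):
    L0, N0, m, k, H, L = ps
    return 100.0 - abs(m - 30.0) - abs(k - 7.0)

# Hypothetical search ranges for ps = [L0 N0 m k H L]
bounds = [(50, 200), (1000, 5000), (10, 50), (2, 15), (100, 500), (5, 50)]
ps_opt, best_mea = de_maximize(mea_surrogate, bounds)
```

In practice each evaluation of the objective is expensive (it rebuilds the databases and retrains the ANFIS), which is why a population-based search like DE, needing no gradients, is a natural fit.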

Namely, by using the input of the Off_testDaB for the ANFIS that has been trained by the Off_DaB, one obtains the outputs $\hat{y}_i$, $i=\overline{1,H}$. These outputs are then compared with the corresponding encoded outputs to estimate the real status of the bearing, which is the one encoded by $y_q$ satisfying Eq. (62):

$$\sum_{i=1}^{H}\left|\hat{y}_i - y_q\right| = \min_{h=\overline{1,Q}}\ \sum_{i=1}^{H}\left|\hat{y}_i - y_h\right|. \tag{62}$$
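Condition (62) is a nearest-code rule over the encoding values; a direct transcription:

```python
import numpy as np

def estimate_status(y_hat, codes):
    """Eq. (62): pick the fault type q whose encoding value y_q minimizes
    the summed absolute deviation from the H ANFIS outputs y_hat."""
    y_hat = np.asarray(y_hat, dtype=float)
    deviations = [np.abs(y_hat - y_q).sum() for y_q in codes]
    return int(np.argmin(deviations))   # index q of the estimated status
```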

The completion of the offline phase as above can be seen as the beginning of the online phase. During the subsequent operating process, first, in a way similar to the one used for building the offline database Off_DaB, the online dataset On_DaB in the form $\bar{D}^{ON} \equiv \bar{D}_i \in \mathbb{R}^{H\times kL}$ as in Eq. (53) is built. By feeding the On_DaB to the ANFIS trained in the offline phase, the real status of the bearing at this time is then specified based on Eq. (62).

The ASSBDIM can hence be summarized as follows.

The offline process:

Initialize vector ps in Eq. (58):

  1. Build the Off-DaB and Off-testDaB in the form of Eq. (56).

  2. Train an ANFIS to identify the Off-DaB using the algorithm FIN-ANFIS.

  3. Accomplish the system.

    The Off-testDaB is used as the checking database for the trained ANFIS, using condition (62) to calculate MeA in Eq. (60). If the obtained MeA is not smaller than the required accuracy, go to Step 4; otherwise, adjust the values of the elements in vector ps in Eq. (58) using the algorithm DE [47], and then return to Step 1.

The online process:

  4. Establish the online database On-DaB, $\bar{D}^{ON} \equiv \bar{D}_i \in \mathbb{R}^{H\times kL}$, as in Eq. (53).

  5. Estimate the online bearing fault status based on the On-DaB, the trained ANFIS, and condition (62); check the stop condition: if it is not satisfied, return to Step 4; otherwise, stop.
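The five steps can be summarized structurally as follows; every callable passed in (`build_offline_db`, `train_anfis`, `de_step`, …) is a hypothetical placeholder for the corresponding procedure in the text:

```python
def assbdim(ps, mea_required, de_step, build_offline_db, train_anfis,
            mean_accuracy, acquire_window, classify, stop):
    """Structural sketch of the ASSBDIM loop (Steps 1-5)."""
    while True:                                      # offline phase: Steps 1-3
        off_db, off_test_db = build_offline_db(ps)   # Step 1: Off-DaB, Off-testDaB
        anfis = train_anfis(off_db)                  # Step 2: FIN-ANFIS training
        if mean_accuracy(anfis, off_test_db) >= mea_required:
            break                                    # Step 3: accuracy reached
        ps = de_step(ps)                             # otherwise adjust ps via DE [47]
    while not stop():                                # online phase: Steps 4-5
        on_db = acquire_window(ps)                   # Step 4: On-DaB from the stream
        yield classify(anfis, on_db)                 # Step 5: status via condition (62)
```

Writing the online phase as a generator mirrors the algorithm's structure: the trained ANFIS is fixed once the offline loop exits, and each iteration emits one fault-status estimate per acquired data window.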

4.3 Some survey results

4.3.1 Experimental apparatus and estimating way

The experimental apparatus for measuring the vibration signal is shown in Figure 5. The apparatus consists of the motor (1), acceleration sensors (2) and (4), the surveyed bearings (3) and (5), a module for processing and transforming the vibration signal series incorporating software-selectable AC/DC coupling (Model: NI-9234) (6), and a computer (7).

Figure 5.

Experimental apparatus for measuring vibration signal.

In Table 1, "encoding value" is abbreviated to "EV." The three cases listed in Table 1, which relate to nine widespread single-bearing fault types as in Table 2, are surveyed. Here, Q = 7 (see Eq. (50)) for Cases 1–2, while Q = 10 for Case 3; the damaged location is the inner race, the outer race, or the balls (signed In, Ou, or Ba, respectively); the damage degrees are from 1 to 3 (signed D1, D2, or D3); and the load impacting on the system at the survey time is Load 1, 2, or 3 (signed L1, L2, or L3). For example, LmUnD denotes load degree m with an undamaged bearing, while LmDnBa denotes load degree m (1, …, 3), damage level n (1, …, 3), and damage location on the balls.

Table 1.

Surveyed cases and the corresponding encoding values (EV).

Faults Width (mm) Depth (mm)
D1Ou 0.20 0.3
D2Ou 0.30 0.3
D3Ou 0.46 0.3
D1In 0.20 0.3
D2In 0.30 0.3
D3In 0.40 0.3
D1Ba 0.15 0.2
D2Ba 0.20 0.2
D3Ba 0.25 0.2

Table 2.

The size of bearing single fault types used for surveys.

The ASSBDIM with H = 303, m = 30, and k = 7, along with four other methods [48, 49, 50, 51], is surveyed. The first one [48] (Nin = Nout = 100; number of segments 20 × 10³; λ = 1E5) is an intelligent fault diagnosis method using unsupervised feature learning toward mechanical big data. The second one [49] employs the energy levels of the various frequency bands as features. The third one [50] is a bearing fault diagnosis based on permutation entropy, empirical mode decomposition, and support vector machines. In the last one [51], a method of identifying bearing faults based on SSA is presented.

For the surveys, along with Ac and MeA, the root-mean-square error as in Eq. (63) is also employed, where $y_i$ and $\hat{y}_i$ are the encoding and predicting outputs, respectively:

$$RMSE = \sqrt{\frac{1}{H}\sum_{i=1}^{H}\left(y_i - \hat{y}_i\right)^2}. \tag{63}$$
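Eq. (63) in NumPy, for completeness:

```python
import numpy as np

def rmse(y, y_hat):
    """Eq. (63): root-mean-square error between the encoding outputs y
    and the predicted outputs y_hat over the H checking samples."""
    y = np.asarray(y, dtype=float)
    y_hat = np.asarray(y_hat, dtype=float)
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))
```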

4.3.2 Some survey results

The measured databases from Cases 1 to 3 with Q = 7 as in Table 1, together with the methods consisting of the ASSBDIM [17] and the ones from [48, 49, 50, 51], were adopted to identify the status of the bearing. The obtained results are shown in Figures 6–9 and Tables 3 and 4.

Figure 6.

The predicting (-pre) output $\hat{y}_i$ of the ASSBDIM in Case 1 and the corresponding encoded (-enc) output $y_i$.

Figure 7.

A zoomed-in view of the $\hat{y}_i$ and $y_i$ depicted by lines (6) in Figure 6.

Figure 8.

The error reflecting the difference between $y_i$ and $\hat{y}_i$ in Figure 6.

Figure 9.

Ac and MeA (mean accuracy) of the ASSBDIM in Case 2.

Surveyed cases — Ac (%) of the methods [48], [49], [50], [51], and [17]
L2UnD 99.67 93.73 98.68 99.67 100
L2D1In 98.35 95.05 92.74 95.38 99.67
L2D2In 99.67 98.68 99.34 97.36 99.01
L2D3In 99.01 93.07 95.38 92.74 100
L2D1Ba 98.68 91.09 96.37 96.70 97.36
L2D2Ba 99.67 92.08 94.72 99.01 98.68
L2D3Ba 97.36 98.68 100 98.68 100
MeA (%) 98.92 94.93 96.75 97.08 99.26

Table 3.

The accuracy of the methods in Case 2.

Surveyed cases — Ac (%) of the methods [48], [49], [50], [51], and [17]
L1UnD 95.05 87.46 85.81 100 100
L1D1In 94.72 90.10 89.77 83.17 99.67
L1D2In 92.08 92.41 88.12 85.48 99.34
L1D3In 93.40 92.74 92.41 94.72 99.34
L1D1Ou 95.33 89.77 84.16 85.15 82.51
L1D2Ou 92.41 92.41 84.82 84.16 94.39
L1D3Ou 95.05 88.78 88.78 99.34 89.11
L1D1Ba 86.47 89.44 90.43 83.17 94.72
L1D2Ba 87.79 90.10 96.04 97.36 92.74
L1D3Ba 88.12 86.14 100 88.45 84.82
MeA (%) 92.04 89.94 90.03 90.10 93.66

Table 4.

The accuracy of the methods in Case 3.

4.3.3 Discussion

From the above results, it can be observed that among the surveyed methods, the ASSBDIM, which is based on ANFIS, gained the best accuracy. This aspect can be recognized via the closely matching values of the encoding and predicting outputs from the tested data samples. The small difference depicted by the zoomed-in view in Figure 7 and the error shown in Figure 8, as well as the high values of Ac and MeA deriving from the ASSBDIM in Tables 3 and 4 and Figure 9, clearly reflect the identification ability of ANFIS.

It should be noted that the methodology shown via the algorithm ASSBDIM can also be used to develop methods of managing damage of mechanical structures.


5. Conclusion

The hybrid structure ANFIS, where ANN and FL can interact to not only partly overcome the limitations of each model but also uphold their strong points, has been seen as a useful mathematical tool for many fields. Inspired by this capability, and in order to provide the readers with the theoretical basis and application direction of the model, this chapter presents the formulation of ANFIS and one of its typical applications.

Firstly, the structure of ANFIS as a data-driven model deriving from fuzzy logic and artificial neural networks is depicted. The setting up of the input data clusters, the output clusters, and ANFIS as a joint structure is detailed. Deriving from this relation, the method of building ANFIS from noisy measuring datasets is presented. The online and recurrent mechanism for filtering noise and building ANFIS synchronously is clarified via the algorithms for filtering noise and establishing ANFIS. Finally, the application of ANFIS to the online managing of bearing faults is presented. The compared results reflect that, among the surveyed methods, the ASSBDIM, which exploited the identification ability of ANFIS, gains the best accuracy. Besides, the methodology shown via this application can also be used as an appropriate solution for developing new methods of managing damage of mechanical structures.

In addition to the above identification field, it should be noted that (1) ANFIS has also attracted the attention of many researchers in the other fields related to prediction, control, and so on, as mentioned in Section 1 and (2) ANFIS can collaborate effectively with some other mathematical tools to enhance the effectiveness of technology applications.

References

  1. Kosko B. Fuzzy systems as universal approximators. IEEE Transactions on Computers. 1994;43(11):1329-1333
  2. Nguyen SD, Choi SB. A new neuro-fuzzy training algorithm for identifying dynamic characteristics of smart dampers. Smart Materials and Structures. 2012;21(8):1-14
  3. Nguyen SD, Choi SB. A novel minimum-maximum data-clustering algorithm for vibration control of a semi-active vehicle suspension system. Journal of Automobile Engineering, Part D. 2013;227(9):1242-1254
  4. Nguyen SD, Choi SB, Nguyen QH. An optimal design of interval type-2 fuzzy logic system with various experiments including magnetorheological fluid damper. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science. 2014:1-17. DOI: 10.1177/0954406214526585
  5. Nguyen SD, Nguyen QH, Choi SB. Hybrid clustering based fuzzy structure for vibration control – Part 1: A novel algorithm for building neuro-fuzzy system. Mechanical Systems and Signal Processing. 2014;50-51:510-525
  6. Nguyen SD, Choi SB. Design of a new adaptive neuro-fuzzy inference system based on a solution for clustering in a data potential field. Fuzzy Sets and Systems. 2015;279:64-86
  7. Nguyen SD, Nguyen QH. Design of active suspension controller for train cars based on sliding mode control, uncertainty observer and neuro-fuzzy system. Journal of Vibration and Control. 2015:1-20. DOI: 10.1177/1077546315592767
  8. Jang JSR. ANFIS: Adaptive-network-based fuzzy inference systems. IEEE Transactions on Systems, Man, and Cybernetics. 1993;23:665-685
  9. Chen C, Bin Z, George V, Marcos O. Machine condition prediction based on adaptive neuro-fuzzy and high-order particle filtering. IEEE Transactions on Industrial Electronics. 2011;58(9):4353-4364
  10. Panella M, Gallo AS. An input–output clustering approach to the synthesis of ANFIS networks. IEEE Transactions on Fuzzy Systems. 2005;13(1):69-81
  11. Lei Y, Jia F, Lin J, Xing S, Ding SX. An intelligent fault diagnosis method using unsupervised feature learning towards mechanical big data. IEEE Transactions on Industrial Electronics. 2016;63(5)
  12. Nguyen SD, Nguyen QH, Seo TI. ANFIS deriving from jointed input-output data space and applying in smart-damper identification. Applied Soft Computing. 2017;53:45-60
  13. Theocharis JB. A high-order recurrent neuro-fuzzy system with internal dynamics: Application to the adaptive noise cancellation. Fuzzy Sets and Systems. 2006;157:471-500
  14. Besdok E, Civicioglu P, Alci M. Using an adaptive neuro-fuzzy inference system based interpolant for impulsive noise suppression from highly distorted images. Fuzzy Sets and Systems. 2005;150:525-543
  15. Kumari R, Gambhir D, Kumar V. Intensity difference based neuro-fuzzy system for impulse noisy image restoration: ID-NFS. In: Proceedings of the International Conference on Signal Processing and Integrated Networks (SPIN). IEEE; 2014
  16. Nguyen SD, Choi S-B, Seo T-I. Recurrent mechanism and impulse noise filter for establishing ANFIS. IEEE Transactions on Fuzzy Systems. 2018;26(2):985-997
  17. Seo T-I, Sy DN. Algorithm for estimating online bearing fault upon the ability to extract meaningful information from big data of intelligent structures. IEEE Transactions on Industrial Electronics. 2018. DOI: 10.1109/TIE.2018.2847704
  18. Nguyen SD, Choi SB, Nguyen QH. A new fuzzy-disturbance observer-enhanced sliding controller for vibration control of a train-car suspension with magneto-rheological dampers. Mechanical Systems and Signal Processing. 2018;105:447-466
  19. Nguyen SD, Jung D, Choi SB. A robust vibration control of a magnetorheological damper based railway suspension using a novel adaptive type-2 fuzzy sliding mode controller. Shock and Vibration. 2017:7306109. DOI: 10.1155/2017/7306109
  20. Nguyen SD, Vo HD, Seo TI. Nonlinear adaptive control based on fuzzy sliding mode technique and fuzzy-based compensator. ISA Transactions. 2017;70:309-321
  21. Nguyen SD, Ho HV, Nguyen TT, Truong NT, Seo TI. Novel fuzzy sliding controller for MRD suspensions subjected to uncertainty and disturbance. Engineering Applications of Artificial Intelligence. 2017;61:65-76
  22. Nguyen SD, Nguyen QH, Seo TI. ANFIS deriving from jointed input-output data space and applying in smart-damper identification. Applied Soft Computing. 2017;53:45-60
  23. Nguyen SD, Kim WH, Park JH, Choi SB. A new fuzzy sliding mode controller for vibration control systems using integrated structure smart dampers. Smart Materials and Structures. 2017;26:045038
  24. Nguyen SD, Choi SB, Seo TI. Adaptive fuzzy sliding control enhanced by compensation for explicitly unidentified aspects. International Journal of Control, Automation and Systems. 2017. DOI: 10.1007/s12555-016-0569-6
  25. Nguyen SD, Seo TI. Establishing ANFIS and the use for predicting sliding control of active railway suspension systems subjected to uncertainties and disturbances. International Journal of Machine Learning and Cybernetics. 2016. DOI: 10.1007/s13042-016-0614-z
  26. Nguyen SD, Nguyen QH. Design of active suspension controller for train cars based on sliding mode control, uncertainty observer and neuro-fuzzy system. Journal of Vibration and Control. 2015;23(8):1334-1353
  27. Turkmen I. Efficient impulse noise detection method with ANFIS for accurate image restoration. International Journal of Electronics and Communications. 2011;65:132-139
  28. Hemalatha C, Periasamy A, Muruganand S. A hybrid approach for efficient removal of impulse, Gaussian and mixed noise from highly corrupted images using adaptive neuro fuzzy inference system (ANFIS). International Journal of Computer Applications. 2012;45(16):15-21
  29. Saradhadevi V, Sundaram DV. An enhanced two-stage impulse noise removal technique based on fast ANFIS and fuzzy decision. International Journal of Computer Science Issues. 2011;8(1):79-88
  30. Shen H, Yang J, Wang S, Liu X. Attribute weighted mercer kernel based fuzzy clustering algorithm for general non-spherical datasets. Soft Computing. 2006;10:1061-1073
  31. Marcelo RP, Ferreira AT, Francisco C. Kernel fuzzy C-means with automatic variable weighting. Fuzzy Sets and Systems. 2014;237:1-46
  32. Filippone M, Camastra F, Masulli F, Rovetta S. A survey of kernel and spectral methods for clustering. Pattern Recognition. 2008;41:176-190
  33. Camastra F, Verri A. A novel kernel method for clustering. IEEE Transactions on Neural Networks. 2005;27(5):801-804
  34. Graves D, Pedrycz W. Kernel-based fuzzy clustering and fuzzy clustering: A comparative experimental study. Fuzzy Sets and Systems. 2010;161:522-543
  35. Winkler R, Klawonn F, Kruse R. Problems of fuzzy c-means clustering and similar algorithms with high dimensional data sets. International Journal of Fuzzy Systems. 2011;1(1):1-16
  36. Xu R, Wunsch D II. Survey of clustering algorithms. IEEE Transactions on Neural Networks. 2005;16(3):645-678
  37. Girolami M. Mercer kernel-based clustering in feature space. IEEE Transactions on Neural Networks. 2002;13:780-784
  38. Thevaril J, Kwan HK. Speech enhancement using adaptive neuro-fuzzy filtering. In: Proceedings of the International Symposium on Intelligent Signal Processing and Communication Systems; 2005
  39. Balaiah P, Ilavennila. Comparative evaluation of adaptive filter and neuro-fuzzy filter in artifacts removal from electroencephalogram signal. American Journal of Applied Sciences. 2012;9(10):1583-1593
  40. Lakra S, Prasad TV, Ramakrishna G. Selective noise filtering of speech signals using an adaptive neuro-fuzzy inference system as a frequency pre-classifier. Journal of Theoretical and Applied Information Technology. 2015;81(3):496-501
  41. Golyandina N, Nekrutkin V, Zhigljavsky A. Analysis of Time Series Structure: SSA and Related Techniques. Boca Raton, Florida: Chapman & Hall/CRC; 2001
  42. Salgado DR, Alonso FJ. Tool wear detection in turning operations using singular spectrum analysis. Journal of Materials Processing Technology. 2006;171:451-458
  43. Kilundu B, Dehombreux P, Chiementin X. Tool wear monitoring by machine learning techniques and singular spectrum analysis. Mechanical Systems and Signal Processing. 2011;25:400-415
  44. Willmore B, Tolhurst DJ. Characterizing the sparseness of neural codes. Network: Computation in Neural Systems. 2001;12(3):255-270
  45. Ngiam J, Chen Z, Bhaskar SA, Koh PW, Ng AY. Sparse filtering. In: Proceedings of Neural Information Processing Systems. 2011. pp. 1125-1133
  46. Hyvärinen A, Oja E. Independent component analysis: Algorithms and applications. Neural Networks. 2000;13(4):411-430
  47. Gong W, Cai Z. Differential evolution with ranking-based mutation operators. IEEE Transactions on Cybernetics. 2013;43:1-16
  48. Lei Y, Jia F, Lin J, Xing S, Ding SX. An intelligent fault diagnosis method using unsupervised feature learning towards mechanical big data. IEEE Transactions on Industrial Electronics. 2016;63(5)
  49. Ao HL, Cheng J, Li K, Truong TK. A roller bearing fault diagnosis method based on LCD energy entropy and ACROA-SVM. Shock and Vibration. 2014;2014:825825
  50. Zhang X, Liang Y, Zhou J, Zang Y. A novel bearing fault diagnosis model integrated permutation entropy, ensemble empirical mode decomposition and optimized SVM. Measurement. 2015;69:164-179
  51. Liu T, Chen J, Dong G. Singular spectrum analysis and continuous hidden Markov model for rolling element bearing fault diagnosis. Journal of Vibration and Control. 2015;21(8):1506-1521
