Fuzzy logic (FL) and artificial neural networks (ANNs) each have their own advantages and disadvantages. The adaptive neuro-fuzzy inference system (ANFIS), a fuzzy system deployed on the structure of an ANN, lets FL and the ANN interact so that they not only overcome each other's limitations but also reinforce each other's strengths; it has therefore come to be regarded as a reasonable option in practical fields. Owing to these strong points, ANFIS has been employed successfully in many technology applications related to identification, prediction, control, and noise filtering. This chapter focuses mainly on building ANFIS and applying it to online bearing fault identification. First, the traditional structure of ANFIS as a data-driven model is shown. Then, a recurrent mechanism depicting the relation between the processes of filtering impulse noise (IN) and establishing ANFIS from a noisy measured database is presented. Finally, a typical application of ANFIS, online bearing fault management, is shown.
- fuzzy logic
- artificial neural networks
- adaptive neuro-fuzzy inference system
As is well known, the mathematical tools FL and ANN each possess advantages and disadvantages as their specific characteristics. The hybrid structure ANFIS, where ANN and FL interact to not only partly overcome the limitations of each model but also uphold their strong points [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25], is hence considered a reasonable option in many technology applications such as identification [1, 2, 4, 6, 12], prediction [9, 11, 17, 25], control [3, 5, 7, 18, 19, 20, 21, 22, 23, 24, 25, 26], and noise filtering [14, 15, 16, 27, 28, 29].
To build an ANFIS from a given database, an initial data space (IDS) expressing the mapping must first be created. A cluster data space (CDS) is then built from the IDS to form the ANFIS via a training algorithm. As a popular technique for unsupervised pattern recognition, clustering is an effective tool for analyzing and exploring data structures to build CDSs [30, 31, 32, 33, 34, 35, 36, 37]. Practice shows that the accuracy and training time of the ANFIS depend deeply on the features of both the IDS and the CDS [2, 3, 4, 5, 6]. In the process of building an ANFIS, two issues should be considered: (1) What is the essence of the interactive relation between the ANFIS's convergence capability and the CDS's attributes? (2) How can this essence be exploited to increase the ANFIS's ability to converge to the desired accuracy at an improved computational cost?
Many different clustering approaches have been proposed [2, 3, 4, 10, 30, 31, 32, 33, 34, 37]. Separating data in X and in Y distinctly, step by step, with a mutual result reference was described in . That method, however, could not solve the above issues appropriately; moreover, the difficulty of deploying fuzzy clustering strategies along with the high computational cost was a disadvantage. In general, a hard relation cannot fully reflect database attributes [31, 34]. The well-known method of fuzzy C-means clustering was seen as a better option in this case; it, however, was not effective enough for "non-spherical" general datasets [30, 37]. The idea of fuzzy clustering in a kernel feature space was therefore developed to deal with these cases [30, 31, 32, 33, 34, 37]. A weighted kernel-clustering algorithm can be referred to in , and a method of weighted kernel fuzzy C-means clustering based on adaptive distances was detailed in . In spite of their considerable advantages, the identification and prediction accuracy of an ANFIS based on a CDS coming from [30, 31] is sensitive to the attributes of the CDS because of the negative influence of noise.
Practice has shown that noise, including IN, always exists in measured IDSs [2, 4, 9, 16] and severely degrades the accuracy of ANFISs derived from them. There are many causes, such as the limited precision of the measurement devices, tools, and methods, or the negative impact of the surrounding environment. In , an ANFIS took part in the system in the form of an inverse MR damper (MRD) model to specify the time-varying desired control current. To maintain the accuracy of the inverse MRD model, the ANFIS was retrained after each certain period, because the dynamic response of the MRD depends quite deeply on temperature. Another, more active approach is filtering noise or preprocessing data [7, 9, 11, 17, 21, 38, 39, 40]. In [11, 17], where ANFISs were employed to predict the health of mechanical systems, the vibration signal was continually measured and filtered to update the ANFISs. Regarding data preprocessing for setting up an ANFIS, it can be observed that, to maintain the stability of such online ANFIS-based applications, reducing the time delay is really meaningful. One promising solution can be found in , where filtering IN and building the ANFIS were carried out synchronously via a recurrent mechanism. In this recurrent strategy for forming the ANFIS, the capability of the training process to converge to a desired accuracy could be estimated and directed online. As a solution, attention was paid to increasing the quality of both the IDS and the CDS: an ANFIS was built from a filtered database, and the ANFIS was then exploited as an updated filter to refilter the database, in an online and recurrent mechanism. The process was maintained until the ANFIS-based database approximation converged to the desired accuracy or a stop condition appeared.
Inspired by the ANFIS's capability, and in order to provide readers with the theoretical basis and application directions of the model, this chapter presents the formulation of ANFIS and one of its typical applications. The rest of the chapter is organized as follows. Section 2 shows the structure of ANFIS as a data-driven model deriving from fuzzy logic and artificial neural networks; setting up the CDS, consisting of the input data clusters and output data clusters, and the CDS-based ANFIS as a joint structure is detailed. Deriving from this relation, a theoretical basis for building ANFIS from noisy measured datasets is presented in Section 3, where an online and recurrent mechanism for filtering noise and building ANFIS synchronously is clarified via algorithms for filtering IN and establishing ANFIS. A typical application of ANFIS, online management of bearing fault status, is shown in Section 4. Finally, some general aspects are mentioned in the last section.
2. Structure of ANFIS
Let’s consider a given IDS having P input–output data samples , , , and . With a data normalization solution and a certain clustering algorithm, a CDS is then created. The kth cluster, signed , consists of one input cluster and one output cluster, signed and , respectively. The CDS can be seen as a framework for establishing the ANFIS. This section presents how to build the CDS as well as the CDS-based ANFIS structure.
2.1 Some related notions
Some notions shown in  are used in this chapter as follows.
Definition 1. Normalizing a given IDS to set up a normalized initial data space signed is performed as follows:
By this way, the ith data sample (also signed ()) in the is constituted as follows:
Definition 2. The root-mean-square error (RMSE) in Eq. (3) is used to evaluate the accuracy of the ANFIS; the required RMSE value is signed . The absolute error between the data output and the corresponding ANFIS-based output is defined in Eq. (4); the desired value of is signed :
Definition 3. Let’s consider , in which depicts an unknown mapping . The ANFIS-based approximation of is said to be continuous at if.
Definition 4. Let’s consider an ANFIS-based approximation of a mapping expressed by an . The ANFIS is said to be a uniform approximation with a required error if at , for any chosen small constant , a corresponding data sample always exists such that.
Definition 5. A data cluster and a data sample in a CDS derived from an are depicted in Figure 1. Let denote the subset consisting of the data samples belonging to except ; the subset contains data samples. It is assumed that all data samples in are distributed closely, while in , most of them are located closely except , which lies far from the others, on one side of . This status is described in Eq. (7):
In this case, is called a critical data sample in the CDS.
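Since the symbols of Eq. (7) are not reproduced here, the following sketch gives one illustrative reading of Definition 5: a sample is flagged as critical when its distance to the centroid of the remaining cluster members greatly exceeds their own spread. The function name and the `spread_factor` threshold are hypothetical, not the chapter's notation.

```python
import numpy as np

def find_critical_samples(cluster, spread_factor=3.0):
    """Flag samples lying far to one side of an otherwise compact cluster.

    Illustrative reading of Definition 5: for each sample, compare its
    distance to the centroid of the REMAINING samples against their own
    mean spread. spread_factor is an assumed threshold.
    """
    cluster = np.asarray(cluster, dtype=float)
    critical = []
    for j in range(len(cluster)):
        rest = np.delete(cluster, j, axis=0)
        center = rest.mean(axis=0)
        # typical spread of the remaining samples around their centroid
        spread = np.linalg.norm(rest - center, axis=1).mean()
        dist_j = np.linalg.norm(cluster[j] - center)
        if spread > 0 and dist_j > spread_factor * spread:
            critical.append(j)
    return critical
```

With a compact cluster plus one far-off point, only the far-off point is flagged.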
2.2 Setting up the input data clusters
Let’s consider the normalized initial data space (see Def. 1). Many well-known clustering methods can be used to build a CDS from the . Here, the CDS is built using the clustering algorithm KFCM-K (kernel fuzzy C-means clustering with kernelization of the metric) presented in . In this way, the distribution of data samples in the CDS is established. The membership degree of the jth data sample belonging to the ith cluster is denoted by , and . The cluster centroids in the CDS are specified such that the following objective function is minimized:
subject to and . In Eq. (10), is the ith cluster center; denotes the squared distance between and in the kernel space; is the kernel function; is the distribution matrix; and is the fuzzy factor.
The objective function can be rewritten via Gaussian kernel function as follows:
Differentiating in Eq. (11) with respect to , the following must hold at the optimal centers:
By using the index as in Eq. (15), with being the required value of and r denoting the rth loop, the clustering phase is carried out until :
The specification of the optimal centers and their relationship values as mentioned above is detailed in Appendix A of .
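Since the symbols of Eqs. (10)–(15) are not reproduced here, the loop above can be sketched with the standard kernel fuzzy C-means updates under a Gaussian kernel, where the kernel-space distance reduces to 2(1 − K(x, v)). The parameter names (`m`, `sigma`) and the initialization are assumptions, not the chapter's notation.

```python
import numpy as np

def kfcm_k(X, C, m=2.0, sigma=1.0, tol=1e-5, max_iter=100, V0=None, seed=0):
    """Sketch of kernel fuzzy C-means with kernelization of the metric
    (KFCM-K), Gaussian kernel. Follows the standard KFCM update rules."""
    X = np.asarray(X, dtype=float)
    P, _ = X.shape
    if V0 is None:
        rng = np.random.default_rng(seed)
        V0 = X[rng.choice(P, C, replace=False)]
    V = np.array(V0, dtype=float)
    for _ in range(max_iter):
        # Gaussian kernel K(x_j, v_i); kernel-space distance is 2(1 - K)
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / sigma ** 2)
        # membership update: u proportional to (1 - K)^(-1/(m-1))
        w = np.clip(1.0 - K, 1e-12, None) ** (-1.0 / (m - 1.0))
        U = w / w.sum(axis=1, keepdims=True)
        # centroid update weighted by u^m * K
        g = (U ** m) * K
        V_new = (g.T @ X) / g.sum(axis=0)[:, None]
        if np.abs(V_new - V).max() < tol:
            V = V_new
            break
        V = V_new
    return V, U
```

On two well-separated groups, the centroids settle near the group means and each row of memberships sums to one.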
2.3 Setting up the output data clusters
The result of the clustering process in the input data space is an input cluster centroid vector of the corresponding data clusters, signed . Let , respectively, be the input fuzzy sets established via [12, 16]. The membership value of belonging to is inferred from Eq. (14):
Following the product law, the membership value of belonging to is
and its normalized membership value is as follows:
The membership of a data sample in each cluster, determined from Eqs. (16)–(18), is then used to specify the hard (crisp) distribution status of the data samples in each cluster, and in turn the index vector of hyperplanes (the output data clusters) and . The ith data sample is hard-distributed into the kth data cluster if
Deriving from the data samples hard-distributed into the kth data cluster, the vector of is specified by the least mean squares method as the solution of Eq. (20):
Finally, the value of hyperplane corresponding to is calculated in Eq. (21):
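The least-squares step of Eqs. (20)–(21) can be sketched as below. Augmenting each input with a constant 1 for a bias term is an assumed convention, since the exact form of Eq. (20) is not reproduced here.

```python
import numpy as np

def fit_output_hyperplanes(X, y, labels, C):
    """Fit the kth output hyperplane by least squares over the samples
    hard-distributed into cluster k (a sketch of Eqs. (20)-(21))."""
    n = X.shape[1]
    A = np.zeros((C, n + 1))
    for k in range(C):
        idx = np.flatnonzero(labels == k)
        Xk = np.hstack([np.ones((len(idx), 1)), X[idx]])  # rows [1, x_1..x_n]
        A[k] = np.linalg.lstsq(Xk, y[idx], rcond=None)[0]
    return A

def hyperplane_value(A, k, x):
    """Value of the kth hyperplane at input x, as in Eq. (21)."""
    return A[k, 0] + A[k, 1:] @ x
```

For two clusters of two exactly collinear points each, the fitted coefficients reproduce the lines exactly.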
2.4 Structure of ANFIS
where is the membership value of belonging to input fuzzy set as in Eq. (16), and is the corresponding output fuzzy set of data sample , .
In the fuzzification phase, the membership value of belonging to input fuzzy set , signed , is specified by Eq. (17). For defuzzification, if the center-average method is used, the output of the qth data sample is expressed via the membership values in the input fuzzy space of as follows:
where is the value of hyperplane corresponding to data sample calculated in Eq. (21).
Finally, all the above contents can be depicted via the ANFIS with five layers, signed D, CL, , N, and S, in Figure 2. Layer D (data) has n input nodes corresponding to the n elements of data vector , i = 1…P, while its outputs are the corresponding normalized values obtained by Eq. (1). Layer CL (clustering) expresses the clustering process; the result is C clusters with C corresponding cluster centroids, to which C fuzzy sets ,…, are assigned. The output of this layer is the membership value of , calculated for each dimension via Eq. (16). Layer (product layer) specifies membership values based on Eq. (17). Layer N (normalization) estimates the normalized membership value of a data sample belonging to each fuzzy set by Eq. (18). Layer S (specifying) estimates the output of the ANFIS by any well-known method: in case of center-average defuzzification it is calculated by Eq. (23), while it is specified by Eq. (24) if the “winner takes all” law is employed:
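The five-layer pass above can be sketched for a single sample as follows. Gaussian per-dimension membership functions and the width `sigma` are assumed choices (the chapter's membership form is not reproduced here); `V` holds the cluster centroids and `A` the hyperplane coefficients of Eq. (21).

```python
import numpy as np

def anfis_forward(x, V, A, sigma=1.0, winner_takes_all=False):
    """One forward pass through layers D, CL, product, N, and S for a
    single normalized sample x."""
    # layers CL and product: per-dimension memberships combined by product
    mu = np.exp(-((x - V) ** 2) / sigma ** 2).prod(axis=1)   # shape (C,)
    mu_bar = mu / mu.sum()                                   # layer N
    h = A[:, 0] + A[:, 1:] @ x          # output-cluster (hyperplane) values
    # layer S: Eq. (23) center-average, or Eq. (24) "winner takes all"
    if winner_takes_all:
        return float(h[np.argmax(mu_bar)])
    return float(mu_bar @ h)
```

With two rules, "winner takes all" returns the hyperplane of the closest centroid, while center-average blends the two hyperplane values.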
3. Building ANFIS from a noisy measured database
This section presents the recurrent mechanism together with the related algorithms, consisting of the one for ANFIS-based noise filtering and the one for building ANFIS, shown in .
3.1 Convergence condition of the ANFIS-based approximation
From a given IDS having P input–output data samples , and , , with the data normalization solution of Def. 1, the is built, from which a CDS is created as depicted in Section 2. It should be noted that IN is often considered a disturbance distributed uniformly in a signal source, which impacts the created CDS negatively: in general, IN raises the number of critical data samples in the CDS. The negative impact of IN on the convergence of ANFIS training is formulated via Theorem 1 as follows.
Theorem 1: Let’s consider a given deriving from an IDS and an ANFIS uniformly approximating an unknown mapping expressed by the IDS. The ANFIS is built via a CDS built from the . Assume that X is compact. The necessary condition for the approximation to converge to a desired error is that there is no critical data sample in the CDS.
Proof: Let’s consider cluster belonging to the CDS. Assume that is a critical data sample (see Def. 5); it has to be proven that the ANFIS will not converge to .
It can be inferred from Eq. (3) that
It should be noted that the ANFIS is a uniform approximation of the in , is a critical data sample, and samples in are distributed closely. As a result, Eq. (28) can be inferred:
Finally, it can be concluded that if at least one critical data sample exists in the CDS, the ANFIS cannot converge to the required error [E]. □
3.2 Algorithm for filtering IN
An essential advantage of the clustering algorithms presented in [30, 31] is their convergence rate. However, the quality of an ANFIS based on a CDS deriving from them is sensitive to the IDS attributes; via Theorem 1, it can be observed that the main reason for this status is the appearance of critical data samples. Besides, regarding the IDS preprocessing shown in , in spite of its positive filtering effectiveness, the computational cost of the method is quite high. A promising solution for the above issues can be found in , where the recurrent mechanism illustrated in Figure 3 was employed. The recurrent mechanism has two phases performed synchronously: filtering IN in the database and building the ANFIS based on the filtered database.
Firstly, an adaptive online impulse noise filter (AOINF) is proposed. The recurrent mechanism is then depicted via the algorithm named FIN-ANFIS, consisting of three main phases: filtering IN, clustering data, and building ANFIS. In this way, the filtered is used to build the ANFIS; the created ANFIS is then applied as an updated filter to refilter the , and so on, until either the process converges or a stop condition is satisfied. To guarantee convergence and stability, an update law for the AOINF is derived via Lyapunov stability theory.
Remark 1. The ANFIS cannot converge to the required error [E] if there is at least one critical data sample in the CDS (see Theorem 1). The clustering strategy of the FIN-ANFIS therefore focuses on preventing critical data samples from appearing during the clustering process, along with seeking to eliminate the critical data samples that have already formed in the CDS. As a result, in each loop of the ANFIS training process, the strategy directs the clustering process toward a new CDS in which either there is no critical data sample or fewer of them exist. Theorem 2 shows the convergence condition of the training process.
Theorem 2: Following the flowchart in Figure 3, the ANFIS-based approximation of an unknown mapping expressed by the given IDS is built via a CDS which derives from the (the normalized IDS). Let Q be the number of critical data points in the CDS at the rth loop. If, at these critical data samples, the data output is filtered by law (31), then the RMSE (3) of the ANFIS converges to [E]:
In the above, is the update coefficient, to be optimized by any well-known optimization method; is the error between the ith data output and the corresponding ANFIS-based output; and the function is defined as
In the above, expresses derivative of with respect to time; is the vector of state variables deriving from as follows:
It should be noted that the update process is performed with respect to the critical data points; hence, Eq. (36) can be rewritten as follows:
Remark 2. (1) To enhance the ability to adapt to the noise status of the , in Eq. (31) is specified as follows:
where is an adaptive coefficient chosen by the designer; thus, takes part in adjusting the filtering level . (2) It can be inferred from Theorem 1 that disposing of critical data samples in the CDS is necessary. Therefore, the solution offered in Theorem 2 via update law (31) is employed to establish the filtering mechanism of the AOINF, as shown below.
The algorithm AOINF for filtering IN:
Look for critical data samples in the CDS to specify the worst data point (WP) where the continuous status of the ANFIS is worst:
Specify the data samples satisfying condition (42):
In the above, is the ANFIS-based output, while is the corresponding data output at the WP; is an adaptive coefficient (set to 1.35 for the surveys shown in ).
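Since Eqs. (31), (41), and (42) are not reproduced in this text, the AOINF step above can only be sketched under explicit assumptions: that the worst point (WP) is the critical sample with the largest absolute error, that condition (42) keeps samples whose error exceeds the adaptive coefficient (1.35 in the chapter's surveys) times the current RMSE, and that the update law nudges the stored output toward the ANFIS output by a coefficient `delta`. All three are illustrative stand-ins, not the chapter's actual formulas.

```python
import numpy as np

def aoinf_step(y_data, y_anfis, critical_idx, delta=0.5, kappa=1.35):
    """Illustrative sketch of one AOINF filtering step (assumed forms of
    Eqs. (31), (41), (42); see the lead-in for the assumptions)."""
    y_data = np.asarray(y_data, dtype=float).copy()
    y_anfis = np.asarray(y_anfis, dtype=float)
    err = np.abs(y_data - y_anfis)
    wp = critical_idx[int(np.argmax(err[critical_idx]))]  # worst data point
    rmse = np.sqrt(np.mean((y_data - y_anfis) ** 2))
    to_filter = [i for i in critical_idx if err[i] > kappa * rmse]
    for i in to_filter:
        y_data[i] += delta * (y_anfis[i] - y_data[i])     # assumed update law
    return y_data, wp, to_filter
```

A single impulse-corrupted output is pulled halfway back toward the ANFIS prediction, while clean samples are untouched.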
3.3 Algorithm for building ANFIS
Figure 4 illustrates the establishment of the CDS from the . It consists of (1) building fuzzy clusters with centroids , i.e., the input data clusters (see Subsection 2.2); (2) estimating the hard distribution of samples in each input data cluster, indicated by ; and (3) building the hyperplanes , i.e., the output data clusters (see Subsection 2.3), in the output data space using the specified hard distribution status. Based on the created CDS, Figure 3 shows the flowchart of the FIN-ANFIS, consisting of three main phases: filtering IN, building the CDS deriving from the filtered , and forming the ANFIS.
3.4 Algorithm FIN-ANFIS
Initializing: The initial index of the loop process, r = 1; the number of clusters ; , where is a real number ; and the initial cluster centroids corresponding to r = 1 chosen randomly:
Build the input data clusters:
Establish the input data clusters:
Specify the stop condition of the clustering phase via in Eq. (15):
If : go to Step 3; if and , set and return to Step 1; if and and , set and return to Step 1; and if and and , stop (not converged).
Build and estimate ANFIS:
Establish ANFIS as presented in Subsection 2.4.
Calculate , in which is the ANFIS-based output, while is the data output. If , stop (the ANFIS is the desired one); if and , go to Step 4; and if and , stop (not converged).
Set up a new cluster centroid:
Based on Eq. (41), seek the worst data point ; set , and set up a new cluster centroid in the neighborhood of the WP; then go to Step 5.
Call the algorithm AOINF and return to Step 1.
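The control flow of Steps 1–5 above can be summarized in a compact loop skeleton. The five callables stand in for the phases of Sections 2–3; the chapter's full stop conditions involve quantities not reproduced here, so this sketch keeps only the simplified loop logic (grow the CDS while under a cluster budget, otherwise call the AOINF filter, until the RMSE reaches the required value).

```python
def fin_anfis(cluster, build_anfis, rmse, filter_noise, add_centroid,
              E_req, C_max, r_max):
    """Control-flow skeleton of the FIN-ANFIS loop (Steps 1-5 above)."""
    C, r = 2, 1
    while r <= r_max:
        cds = cluster(C)                 # Steps 1-2: cluster the filtered data
        anfis = build_anfis(cds)         # Step 3: establish the ANFIS
        E = rmse(anfis)
        if E <= E_req:
            return anfis, E              # converged to the desired accuracy
        if C < C_max:
            add_centroid(anfis)          # Step 4: new centroid near the WP
            C += 1
        else:
            filter_noise(anfis)          # Step 5: call AOINF and refilter
        r += 1
    return None, float("inf")            # stopped without converging
```

With stubbed phases whose RMSE shrinks each loop, the skeleton returns the model once the error drops below the requirement.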
4. ANFIS for managing online bearing fault
This section shows an application of ANFIS to online bearing fault estimation, relying on the ability to extract meaningful information from the big data of intelligent structures. Estimating bearing status online, so as to hold the initiative in exploiting such systems, is meaningful because the bearing is an important machine element found in almost all mechanical structures.
In , an Online Bearing Damage Identifying Method (ASSBDIM) based on ANFIS, singular spectrum analysis (SSA), and sparse filtering (SF) was shown. The method consists of two phases, offline and online. In the offline phase, the ANFIS identifies the dynamic response of the mechanical system in the individual bearing statuses; the trained ANFIS is then used to estimate the real status in the online phase. These aspects are detailed in the following paragraphs.
4.1 Some related theories
4.1.1 Singular spectrum analysis
Let’s consider a given time series of N0 data points . From a selected window length L0, 1 < L0 < N0, sliding vectors , j = 1,…, K = N0 − L0 + 1, and the matrix as in Eq. (45) are built:
Building the trajectory matrix:
From Eq. (45), one builds the matrix . Vectors are then constructed, , in which are the non-zero eigenvalues of S arranged in descending order and U1,…, Ud are the corresponding eigenvectors. A decomposition of the trajectory matrix into a sum of matrices is then established, where .
Each elementary matrix is transformed into a principal component of length N0 by applying a linear transformation known as diagonal averaging or Hankelization. Let be a matrix of elements . By calculating , Z can be transformed into the reconstructed time series as in Eq. (46):
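The three SSA steps above (embedding, decomposition, diagonal averaging) can be sketched as follows. The SVD of the trajectory matrix is used here in place of the eigen-decomposition of S; it yields the same elementary matrices.

```python
import numpy as np

def ssa_decompose(x, L0, d=None):
    """Basic SSA sketch: embed the series into the trajectory matrix of
    Eq. (45), decompose it by SVD, and Hankelize (diagonally average) each
    rank-one elementary matrix into a series of length N0, as in Eq. (46)."""
    x = np.asarray(x, dtype=float)
    N0 = len(x)
    K = N0 - L0 + 1
    # trajectory matrix: column j is the lagged window x[j : j + L0]
    T = np.column_stack([x[j:j + L0] for j in range(K)])
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    if d is None:
        d = int((s > 1e-10 * s[0]).sum())   # keep non-zero singular values
    series = []
    for i in range(d):
        Zi = s[i] * np.outer(U[:, i], Vt[i])   # elementary matrix
        # diagonal averaging: mean over each anti-diagonal i + j = k
        series.append(np.array([Zi[::-1].diagonal(k - L0 + 1).mean()
                                for k in range(N0)]))
    return series
```

Because the decomposition is exact, the components sum back to the original series; a pure sinusoid yields exactly two components.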
4.1.2 Sparse filtering
In this work, SF is used to extract features from a given time-series-type measured database. Relying on an objective function defined via the features, the method tries to specify good features such that the objective function is minimized [11, 44, 45]. To deploy SF effectively, a two-phase process is operated: preprocessing the data based on the whitening method  is carried out in the first phase; an H-by-L matrix of real numbers, signed , depicting the relation between each of the H training data samples and the L selected features, is established in the second phase. SF as presented in [11, 45] is detailed as follows.
In the first phase, a training set of the H data samples , in the form of a matrix signed , is established from the given time-series-type measured dataset. By adopting the whitening method , one then makes the data samples less correlated with each other, which speeds up the convergence of the sparse filtering process; this employs the eigenvalue decomposition of the covariance matrix . In the expression, is the diagonal matrix of its eigenvalues, and is the orthogonal matrix of eigenvectors of . Finally, the whitened training set, signed , is formed as in Eq. (47):
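The whitening phase can be sketched as PCA whitening built from the eigen-decomposition just described (the D and E of Eq. (47)); the `eps` guard against a zero eigenvalue is an added assumption, since the exact form of Eq. (47) is not reproduced here.

```python
import numpy as np

def whiten(X, eps=1e-8):
    """PCA-whiten the rows of X: after the transform the samples are
    decorrelated with near-unit variance, which speeds up the subsequent
    sparse-filtering optimization."""
    Xc = X - X.mean(axis=0)                      # center the training set
    D, E = np.linalg.eigh(np.cov(Xc, rowvar=False))
    return Xc @ E @ np.diag(1.0 / np.sqrt(D + eps))
```

The covariance of the whitened set is (near-)identity, confirming the decorrelation.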
Subsequently, in the second phase, SF maps the data samples of onto features , relying on a weight matrix signed . A linear relation between the data samples in and the features is expressed via as in Eq. (48), in which is called the feature distribution matrix:
Optimizing the feature distribution in is then performed as detailed in . The features in each column of are normalized by dividing them by their l2-norm, . For each row of the obtained matrix, the features per example are then normalized by computing , by which they lie on the unit l2-ball. The features normalized by the two above steps are optimized for sparseness using the l1-penalty to obtain a matrix signed . A loop process is then maintained via Eq. (48), in which takes the role of , until the optimal weights of are established that minimize the objective function of Eq. (49); finally, is re-signed :
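The two normalization steps and the l1 objective can be sketched as below for an H-by-L feature matrix (rows = samples, columns = features). The soft absolute value and `eps` are the usual sparse-filtering smoothing; the gradient-based optimization of the weights W themselves is omitted here.

```python
import numpy as np

def sf_normalize(F, eps=1e-8):
    """Sparse-filtering normalization and l1 objective: divide each feature
    column by its l2-norm across samples, then each row by its l2-norm
    (placing samples on the unit l2-ball), and sum the absolute values."""
    F = np.sqrt(F ** 2 + eps)                           # soft absolute value
    F = F / np.linalg.norm(F, axis=0, keepdims=True)    # per-feature norm
    F = F / np.linalg.norm(F, axis=1, keepdims=True)    # per-sample norm
    return F, float(np.abs(F).sum())                    # l1 sparseness penalty
```

As expected for a sparseness penalty, a one-hot (sparse) feature matrix scores lower than a uniformly dense one of the same size.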
4.2 The ASSBDIM
The ASSBDIM focuses on online bearing fault estimation. This subsection details the way of setting up the databases and the algorithm ASSBDIM for online bearing fault estimation based on the built databases.
4.2.1 Building the databases for the ASSBDIM
A measuring dataset deriving from the mechanical system vibration is established for each surveyed bearing fault type. Regarding Q fault types, one obtains Q original datasets as in Eq. (50):
where corresponds to the ith bearing fault type .
By using SSA on , m time series as in Eq. (51) are set up:
where m is a parameter selected by the designer. This work is carried out by the three steps presented in Subsection 4.1.1, in which is used in the first step as the given time series of N0 data points for building the trajectory matrix in Eq. (45). Because the mechanical vibration signal is prone to the low-frequency range , among the m time series, the (m − k) ones owning the highest frequencies are considered noise. The k remaining time series as in Eq. (52) are hence kept to build the databases:
Specifying the optimal value of both k and m will be mentioned in Subsection 4.2.2.
For each time series in Eq. (52), for example , , the feature distribution matrix as in Eq. (48) is obtained based on SF and re-signed . By using this result for all the time series in Eq. (52), a new data matrix as in Eq. (53) is formed, which is the input data space of the ith data subset corresponding to the ith bearing fault type:
By employing this procedure for the Q surveyed bearing fault types, an input data space in the form of matrix (54) is established, which relates to building two offline databases, signed Off_DaB and Off_testDaB, as well as one online database, signed On_DaB, used for the algorithm ASSBDIM as follows:
Namely, matrix relates to the input data space (IDS), to which the databases for identifying the bearing status are built as follows. Firstly, by encoding the ith fault type by a real number , the output data space (ODS) of the ith subset can be depicted by vector of H elements as in Eq. (55):
4.2.2 The algorithm ASSBDIM for estimating health of bearings
An ANFIS built by the algorithm FIN-ANFIS (see Subsection 3.4) is utilized to identify the dynamic response of the mechanical system corresponding to the bearing damage statuses reflected by the Off_DaB. Optimizing the parameters in of Eq. (58) is then performed using the percentage of correctly estimated samples () as in Eq. (59), the mean accuracy () as in Eq. (60), and the algorithm DE :
where , corresponding to the nth damage type, , is the number of checking samples correctly expressing the real status of the bearing, while is the total number of checking samples used in the survey; is the number of surveyed bearing fault types as mentioned in Eq. (50).
Following the , an objective function is defined as follows:
The Off_testDaB, function , and DE  are then employed to optimize the parameters in vector , to get .
Namely, by using the input of the Off_testDaB for the ANFIS that has been trained by the Off_DaB, one obtains the outputs . These outputs are then compared with the corresponding encoded outputs to estimate the real status of the bearing, which is the one encoded by “q” satisfying Eq. (62):
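The comparison of Eq. (62) amounts to a nearest-encoding-value decision, which can be sketched as:

```python
import numpy as np

def estimate_status(y_hat, encoding_values):
    """Estimate the bearing status as in Eq. (62): pick the fault type whose
    encoding value (EV) lies closest to the ANFIS output y_hat; returns the
    index q of the estimated status."""
    ev = np.asarray(encoding_values, dtype=float)
    return int(np.argmin(np.abs(ev - y_hat)))
```

For instance, with encoding values 1, 2, 3, an ANFIS output of 2.8 is assigned to the third status.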
The completion of the offline phase as above can be seen as the beginning of the online phase. During the subsequent operating process, first, in a way similar to that for building the offline database Off_DaB, the online dataset On_DaB in the form of Eq. (53) is built. By using the On_DaB as input to the ANFIS trained offline, the real status of the bearing at this time is then specified based on Eq. (62).
The ASSBDIM can hence be summarized as follows.
The offline process:
Initialize vector in Eq. (58):
Build the Off-DaB and Off-testDaB in the form of Eq. (56).
Train an ANFIS to identify the Off-DaB using the algorithm FIN-ANFIS.
Accomplish the system.
The Off-testDaB is used as the test database for the trained ANFIS, using condition (62) to calculate in Eq. (60). If , then go to Step 4; otherwise, adjust the values of the elements in vector in Eq. (58) using the algorithm DE , and then return to Step 1.
The online process:
Establish online database On-DaB as in Eq. (53).
Estimate online bearing fault status based on the On-DaB, trained ANFIS, and condition (62); check the stop condition: if it is not satisfied, then return to Step 4; otherwise, stop.
4.3 Some survey results
4.3.1 Experimental apparatus and estimating way
The experimental apparatus for measuring vibration signal is shown in Figure 5. The apparatus consists of the motor (1), acceleration sensors (2) and (4), surveyed bearings (3) and (5), module for processing and transforming series vibration signal incorporating software-selectable AC/DC coupling (Model: NI-9234) (6), and computer (7).
In Table 1, “encoding value” is abbreviated to “EV.” The three cases listed in Table 1, related to nine of the widespread single-bearing faults as in Table 2, are surveyed. Here, Q = 7 (see Eq. (50)) for Cases 1 and 2, while Q = 10 for Case 3; the damage location is the inner race, outer race, or balls (signed In, Ou, or Ba, respectively); the damage degrees are from 1 to 3 (signed D1, D2, or D3); and the load impacting on the system at the survey time is Load 1, 2, or 3 (signed L1, L2, or L3). For example, LmUnd shows the load degree to be m and the bearing to be undamaged, while LmDnBa expresses the load degree to be m (1,…,3), the damage level to be n (1,…,3), and the damage location to be a ball.
|Faults||Width (mm)||Depth (mm)|
The ASSBDIM with H = 303, m = 30, and k = 7, along with four other methods [48, 49, 50, 51], is surveyed. The first one  (Nin = Nout = 100; numbers of segments and ) is the intelligent fault diagnosis method using unsupervised feature learning toward mechanical big data. The second one  employs the energy levels of the various frequency bands as features. The third one  is a bearing fault diagnosis based on permutation entropy, empirical mode decomposition, and support vector machines. The last one  is a method of identifying bearing faults based on SSA.
For the surveys, along with Ac and MeA, the root-mean-square error as in Eq. (63) is also employed, where and , respectively, are encoding and predicting outputs:
4.3.2 Some survey results
The measured databases from Cases 1 to 3 with Q = 7 as in Table 1, along with the methods consisting of the ASSBDIM  and the ones from [48, 49, 50, 51], were adopted to identify the status of the bearing. The obtained results are shown in Figures 6–9 and Tables 3 and 4.
|Surveyed cases||Ac (%)|
From the above results, it can be observed that, among the surveyed methods, the ANFIS-based ASSBDIM gained the best accuracy. This can be recognized via the close agreement between the encoded and predicted outputs of the tested data samples. The small difference depicted by the zoomed-in view in Figure 7 and the root-mean-square error in Figure 8, as well as the high values of Ac and MeA deriving from the ASSBDIM in Tables 3 and 4 and Figure 9, clearly reflect the ANFIS’s identification ability.
It should be noted that the methodology shown via the algorithm ASSBDIM can also be used to develop methods of managing damage in mechanical structures.
5. Conclusions
The hybrid structure ANFIS, where ANN and FL interact to not only partly overcome the limitations of each model but also uphold their strong points, has been seen as a useful mathematical tool for many fields. Inspired by the ANFIS’s capability, and in order to provide readers with the theoretical basis and application directions of the model, this chapter has presented the formulation of ANFIS and one of its typical applications.
Firstly, the structure of ANFIS as a data-driven model deriving from fuzzy logic and artificial neural networks was depicted; setting up the input data clusters, the output data clusters, and the ANFIS as a joint structure was detailed. Deriving from this relation, the method of building ANFIS from noisy measured datasets was presented: an online and recurrent mechanism for filtering noise and building ANFIS synchronously was clarified via the algorithms for filtering noise and establishing ANFIS. Finally, the application of ANFIS to online bearing fault management was presented. The compared results reflect that, among the surveyed methods, the ASSBDIM, which exploits the identification ability of ANFIS, gains the best accuracy. Besides, the methodology shown via this application can also be used as an appropriate solution for developing new methods of managing damage in mechanical structures.
In addition to the above identification field, it should be noted that (1) ANFIS has also attracted the attention of many researchers in other fields related to prediction, control, and so on, as mentioned in Section 1, and (2) ANFIS can collaborate effectively with other mathematical tools to enhance the effectiveness of technology applications.