Open access

Search Algorithm for Image Recognition Based on Learning Algorithm for Multivariate Data Analysis

Written By

Juan G. Zambrano, E. Guzmán-Ramírez and Oleksiy Pogrebnyak

Submitted: 25 April 2012 Published: 13 February 2013

DOI: 10.5772/52179

From the Edited Volume

Search Algorithms for Engineering Optimization

Edited by Taufik Abrão


1. Introduction

An image or a pattern can be recognized using prior knowledge or the statistical information extracted from the image or the pattern. Systems for image recognition and classification have diverse applications, e.g. autonomous robot navigation [1], image tracking radar [2], face recognition [3], biometrics [4], intelligent transportation, license plate recognition, character recognition [5] and fingerprints [6].

The problem of automatic image recognition is a composite task that involves detection and localization of objects in a cluttered background, segmentation, normalization, recognition and verification. Depending on the nature of the application, e.g. the sizes of the training and testing databases, clutter and variability of the background, noise, occlusion and, finally, speed requirements, some of these subtasks can be very challenging. Assuming that segmentation and normalization have been done, we focus on the subtask of object recognition and verification, and demonstrate the performance using several sets of images.

Diverse paradigms have been used in the development of algorithms for image recognition, among them: artificial neural networks [7, 8], principal component analysis [9, 10], fuzzy models [11, 12], genetic algorithms [13, 14] and auto-associative memories [15]. The following paragraphs describe some work done with these paradigms.

Abrishambaf et al. designed a fingerprint recognition system based on Cellular Neural Networks (CNN). The system includes a preprocessing phase, where the input fingerprint image is enhanced, and a recognition phase, where the enhanced fingerprint image is matched with the fingerprints in the database. Both the preprocessing and recognition phases are realized by means of CNN approaches. A novel application of a skeletonization method is used to perform ridgeline thinning, which improves the quality of the extracted lines for further processing and hence increases the overall system performance [6].

In [16], Yang and Park developed a fingerprint verification system based on a set of invariant moment features and a nonlinear Back Propagation Neural Network (BPNN) verifier. They used an image-based method with invariant moment features for fingerprint verification to overcome the demerits of traditional minutiae-based methods and other image-based methods. The proposed system contains two stages: an off-line stage for template processing and an on-line stage for testing with input fingerprints. The system preprocesses fingerprints and reliably detects a unique reference point to determine a Region of Interest (ROI). A total of four sets of seven invariant moment features are extracted from four partitioned sub-images of an ROI. Matching between the feature vectors of a test fingerprint and those of a template fingerprint in the database is evaluated by a nonlinear BPNN and its performance is compared with other methods in terms of absolute distance as a similarity measure. The experimental results show that the proposed method with BPNN matching has a higher matching accuracy, while the method with absolute distance has a faster matching speed. Comparison results with other famous methods also show that the proposed method outperforms them in verification accuracy.

In [17], the authors present a classifier based on a Radial Basis Function Network (RBFN) to detect frontal views of faces. The technique comprises three main steps: preprocessing, feature extraction, and classification/recognition. The curvelet transform and Linear Discriminant Analysis (LDA) are first used to extract features from facial images, and the RBFN is then used to classify the facial images based on these features. The use of an RBFN also reduces the number of misclassifications caused by non-linearly separable classes. 200 images were taken from the ORL database, and parameters such as recognition rate, acceptance ratio and execution time were calculated. It is shown that neural-network-based face recognition is robust, with a recognition rate of 98.6% and an acceptance ratio of 85%.

Bhowmik et al. designed an efficient fusion technique for automatic face recognition. Fusion of visual and thermal images is performed to take advantage of both thermal and visual information. By employing fusion a new image can be obtained, which provides the most detailed, reliable and discriminating information. In this method, fused images are first generated from visual and thermal face images. In the second step, the fused images are projected onto eigenspace and finally classified using a radial basis function neural network. In the experiments, the Object Tracking and Classification Beyond Visible Spectrum (OTCBVS) benchmark database of thermal and visual face images was used. Experimental results show that the proposed approach performs well in recognizing unknown individuals, with a maximum success rate of 96% [8].

Zeng and Liu described the state of the art of important advances in type-2 fuzzy sets for pattern recognition [18]. The success of type-2 fuzzy sets has been largely attributed to their three-dimensional membership functions, which handle more uncertainties in real-world problems. In pattern recognition, both the feature and hypothesis spaces have uncertainties, which motivates the integration of type-2 fuzzy sets with conventional classifiers to achieve better performance in terms of robustness, generalization ability or recognition accuracy.

A face recognition system for personal identification and verification using a Genetic Algorithm (GA) and a Back-propagation Neural Network (BPNN) is described in [19]. The system consists of three steps. First, some pre-processing is applied to the input image. Second, face features are extracted; these serve as the input to the BPNN and the GA in the third step, where classification is carried out using the BPNN and GA. The proposed approaches were tested on a number of face images. Experimental results demonstrate the high performance of these algorithms.

In [20], Blahuta et al. applied pattern recognition to a finite set of brainstem ultrasound images to generate NeuroSolutions models for medical problems. For the analysis of these images, Principal Component Analysis (PCA) was used; it is one of many image-processing methods for pattern recognition in which feature extraction is necessary. They also applied artificial neural networks (ANN) to this problem and compared the results. The method was implemented in NeuroSolutions, a very sophisticated ANN simulator, with a PCA multilayer (ML) NN topology.

Pandit and Gupta proposed a Neural Network model trained for image recognition. The NN model uses an auto-associative memory for training. The model reads the image in the form of a matrix and evaluates the weight matrix associated with the image. Once the training process is done, whenever the image is presented to the system the model recognizes it appropriately. The evaluated weight matrix is used for image pattern matching. The model developed is accurate enough to recognize the image even if the image is distorted or some portion of the data is missing. This model shortens the otherwise time-consuming process of image recognition [15].

In [21], the authors present the design of three types of neural networks with different features for image recognition: traditional backpropagation networks, radial basis function (RBF) networks and counterpropagation networks. The design complexity and generalization ability of the three architectures are tested and compared on a digit image recognition problem. Traditional backpropagation networks require a very complex training process before being applied for classification or approximation. Radial basis function networks simplify the training process through their specially organized 3-layer architecture. Counterpropagation networks need no training process at all and can be designed directly by extracting all the parameters from the input data. The experimental results show the good noise tolerance of both RBF networks and counterpropagation networks on the image recognition problem, and point out the poor generalization ability of traditional backpropagation networks. Their excellent noise rejection makes RBF networks well suited for image data preprocessing prior to recognition.

The remaining sections of this Chapter are organized as follows. In the next Section, a brief theoretical background of the Learning Algorithm for Multivariate Data Analysis (LAMDA) is given. In Section 3 we describe the proposed search algorithm for image recognition based on the LAMDA algorithm. Then, in Section 4 we present the implementation results obtained with the proposed approach. Finally, Section 5 contains the conclusions of this Chapter.


2. Learning Algorithm for Multivariate Data Analysis

The Learning Algorithm for Multivariate Data Analysis (LAMDA) is an incremental conceptual clustering method based on fuzzy logic, which can be applied in the processes of formation and recognition of concepts (classes). LAMDA has the following features [22-24]:

  • The previous knowledge of the number of classes is not necessary (unsupervised learning).

  • The descriptors can be qualitative, quantitative or a combination of both.

  • LAMDA can use a supervised learning stage followed by an unsupervised one; for this reason, it is possible to achieve an evolutionary classification.

  • Formation and recognition of concepts are based on the maximum adequacy (MA) rule.

  • This methodology has the possibility to control the selectivity of the classification (exigency level) through the parameter α.

  • LAMDA models the concept of maximum entropy (homogeneity). This concept is represented by a class denominated the Non-Informative Class (NIC). The NIC concept plays the role of a decision threshold in the concept formation process.

Traditionally, the concept of similarity between objects has been considered fundamental to determine whether the descriptors are members of a class or not. LAMDA does not use similarity measures between objects in order to group them; instead, it calculates a degree of adequacy. This concept is expressed as a membership function between the descriptor and any of the previously established classes [22, 25].

2.1. Operation of LAMDA

The objects X (input vectors) and the classes C are represented by a number of descriptors denoted by (d_1, …, d_n). Every d_i takes its value inside a set D_k; the n-ary Cartesian product of the D_k, written D_1 × … × D_p, with {(d_1, …, d_n) : d_i ∈ D_k for 1 ≤ i ≤ n, 1 ≤ k ≤ p}, is denominated the Universe (U).

The set of objects can be described by X = {x^j : j = 1, 2, …, M}, and any object can be represented by a vector x^j = (x_1, …, x_n), where x_i ∈ U; every component x_i corresponds to the value given by the descriptor d_i for the object x^j. The set of classes can be described by C = {c_l : l = 1, 2, …, N}, and any class can be represented by a vector c_l = (c_1, …, c_n), where c_i ∈ U; every component c_i corresponds to the value given by the descriptor d_i for the class c_l [23].

2.1.1. Marginal Adequacy Degree

Given an object x^j and a class c_l, LAMDA computes for every descriptor the so-called marginal adequacy degree (MAD) between the value of the component x_i of the object x^j and the value that the component c_i takes in c_l, denoted as:

MAD(x_i^j / c_i^l) : x^j × c_l → [0, 1]^n  (E1)

Hence, one MAD vector can be associated with an object x^j (see Figure 1). To maintain consistency with fuzzy logic, the descriptors must be normalized using (E2). This stage generates N MADs, and the process is repeated iteratively for every object against all classes [26].

x_i = (x̃_i − x_min) / (x_max − x_min) = x̃_i / (2^L − 1)  (E2)
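As a concrete illustration, the normalization of (E2) for 8-bit gray-scale images (L = 8, so x_min = 0 and x_max = 255) can be sketched as follows; the function name is ours:

```python
# Sketch of descriptor normalization (E2), assuming 8-bit gray-scale
# pixels, i.e. x_min = 0 and x_max = 2^L - 1 = 255.
def normalize(descriptors, bits=8):
    """Map raw pixel values into the interval [0, 1]."""
    x_max = 2 ** bits - 1
    return [x / x_max for x in descriptors]

print(normalize([0, 51, 255]))  # -> [0.0, 0.2, 1.0]
```

The same mapping is used later for the codebook (E6) and for the input images (E7).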

Figure 1.

LAMDA basic structure.

Membership functions, denoted μ_X(x), are used to associate a degree of membership with each element of the domain for the corresponding fuzzy set. This degree of membership indicates the certainty (or uncertainty) that the element belongs to that set. Membership functions for fuzzy sets can be of any shape or type, as determined by experts in the domain over which the sets are defined; they must only satisfy the following constraints [27].

  • A membership function must be bounded from below by 0 and from above by 1.

  • The range of a membership function must therefore be [0, 1].

  • For each x ∈ U, the membership function must be unique. That is, the same element cannot map to different degrees of membership for the same fuzzy set.

The MAD is a membership function derived from a fuzzy generalization of a binomial probability law [26]. As before, x^j = (x_1, …, x_n), and let E be a non-empty, proper subset of X. We have an experiment where the result is considered a "success" if the outcome x_i is in E; otherwise, the result is considered a "failure". Let P(E) = ρ be the probability of success, so that the probability of failure is q = 1 − ρ; intermediate values have a degree of success or failure. The probability mass function of X is defined as [28]:

f(x) = ρ^x (1 − ρ)^(1 − x)  (E3)

where ρ ∈ [0, 1]. The following fuzzy probability distributions are typically used by the LAMDA methodology to calculate the MADs [25], [29].

  • Fuzzy Binomial Distribution.

  • Fuzzy Binomial-Center Distribution.

  • Fuzzy Binomial-Distance Distribution.

  • Gaussian Distribution.

2.1.2. Global Adequacy Degree

The Global Adequacy Degree (GAD) of an object x^j relative to a class c_l is obtained by aggregating all the marginal information previously calculated (see Figure 1), using mathematical aggregation operators (T-norms and S-conorms) combined through a linear convex T-S function L_α^{T,S}, given the N MADs of the object. Some T-norms and their dual S-conorms used in the LAMDA methodology are shown in Table 1 [22, 23].

Aggregation operators are mathematical objects whose function is to reduce a set of numbers to a unique representative number. An aggregation operator is simply a function that assigns a real number y to any n-tuple (x_1, x_2, …, x_n) of real numbers: y = A(x_1, x_2, …, x_n) [30].

The T-norms and S-conorms are two families specialized in aggregation under uncertainty. They can also be seen as a generalization of the Boolean logic connectives to multi-valued logic. The T-norms generalize the conjunctive 'AND' (intersection) operator and the S-conorms generalize the disjunctive 'OR' (union) operator [30].

The linear convex T-S function is part of the so-called compensatory functions, and it is utilized to combine a T-norm and an S-conorm in order to compensate their opposite effects. Zimmermann and Zysno [30] discovered that, in a decision-making context, humans follow neither the behavior of a T-norm nor that of an S-conorm exactly when aggregating. In order to get closer to the human aggregation process, they proposed an operator on the unit interval based on T-norms and S-conorms.

Name         T-Norm (Intersection)                              S-Conorm (Union)
Min-Max      min(x_1, …, x_n)                                   max(x_1, …, x_n)
Product      ∏_{i=1..n} x_i                                     1 − ∏_{i=1..n} (1 − x_i)
Lukasiewicz  max{ 1 − n + Σ_{i=1..n} x_i , 0 }                  min{ Σ_{i=1..n} x_i , 1 }
Yager        1 − min{ ( Σ_{i=1..n} (1 − x_i)^λ )^{1/λ} , 1 }    min{ ( Σ_{i=1..n} x_i^λ )^{1/λ} , 1 }
Hamacher     1 / ( 1 + Σ_{i=1..n} (1 − x_i)/x_i ),              ( Σ_{i=1..n} x_i/(1 − x_i) ) / ( 1 + Σ_{i=1..n} x_i/(1 − x_i) ),
             or 0 if there exists x_i = 0                       or 1 if there exists x_i = 1

Table 1.

T-norms and S-conorms.

One class of non-associative T-norm and T-conorm-based compensatory operator is the linear convex T-S function [31]:

L_α^{T,S}(x_1, …, x_n) = α · T(x_1, …, x_n) + (1 − α) · S(x_1, …, x_n)  (E4)

where α ∈ [0, 1], T ≤ L_α^{T,S} ≤ S, T = L_1^{T,S} (intersection) and S = L_0^{T,S} (union). The parameter α is called the exigency level [22, 25].
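The behavior of the linear convex function (E4) can be sketched with the Min-Max pair from Table 1; this is our own illustration, and the function name is ours:

```python
# Sketch of the linear convex T-S function (E4) using the Min-Max pair
# from Table 1: T = min (generalized AND), S = max (generalized OR).
# alpha is the exigency level.
def linear_convex_TS(values, alpha):
    t = min(values)  # T-norm: most demanding aggregation
    s = max(values)  # S-conorm: most permissive aggregation
    return alpha * t + (1 - alpha) * s

mads = [0.2, 0.6, 0.9]
print(linear_convex_TS(mads, 1.0))  # alpha = 1 -> pure T-norm (0.2)
print(linear_convex_TS(mads, 0.0))  # alpha = 0 -> pure S-conorm (0.9)
print(linear_convex_TS(mads, 0.5))  # compensated value between T and S
```

Raising α makes the classification more selective, since every marginal degree must then be high for the aggregate to be high.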

Finally, once the GAD of the object x^j with respect to all classes has been computed, x^j is placed, according to the MA rule, in the class with the highest adequacy degree [23]. The MA rule is defined as

MA = max( GAD_{c_1}(x^j), GAD_{c_2}(x^j), …, GAD_{c_N}(x^j) )  (E5)

LAMDA has been applied to different domains: medical images [32], pattern recognition [33], detection and diagnosis of failures of industrial processes [34], biological processes [35], distribution systems of electrical energy [36], processes for drinking water production [29], monitoring and diagnosis of industrial processes [37], selection of sensors [38], vector quantization [39].


3. Image recognition based on Learning Algorithm for Multivariate Data Analysis

In this section the image recognition algorithm based on LAMDA is described. Our proposal is divided into two phases: training and recognition. In the training phase, a codebook is generated with the LAMDA algorithm; let us name it the LAMDA codebook. In the recognition phase, we propose a search algorithm based on LAMDA and show its application to the image recognition process.

3.1. Training phase

The LAMDA codebook is calculated in two stages, see Figure 2.

Figure 2.

LAMDA codebook generation scheme

Stage 1. LAMDA codebook generation. At this stage, a codebook based on LAMDA algorithm is generated. This stage is a supervised process; the training set used in the codebook generation is formed by a set of images.

Let x = [x_i]_n be a vector representing an image; the training set is defined as A = {x^j : j = 1, 2, …, M}. The result of this stage is a codebook denoted C = {c_l : l = 1, 2, …, N}, where c = [c_i]_n.

Stage 2. LAMDA codebook normalization. Before using the LAMDA codebook, it must be normalized:

c_i = (c̃_i − c_min) / (c_max − c_min) = c̃_i / (2^L − 1)  (E6)

where i = 1, 2, …, n; c̃_i is the descriptor before normalization, c_i is the normalized descriptor, 0 ≤ c_i ≤ 1, c_min = 0 and c_max = 2^L − 1; in the context of image processing, L is the number of bits necessary to represent the value of a pixel. The limits (minimum and maximum) of the descriptor values are the limits of the data set.

3.2. Search algorithm for image recognition based on LAMDA

The proposed search algorithm performs the recognition task according to a membership criterion, computed in four stages.

Stage 1. Image normalization. Before the descriptors of the image are used in the LAMDA search algorithm, they must be normalized:

x_i = (x̃_i − x_min) / (x_max − x_min) = x̃_i / (2^L − 1)  (E7)

where i = 1, 2, …, n; x̃_i is the descriptor before normalization, x_i is the normalized descriptor, 0 ≤ x_i ≤ 1, x_min = 0 and x_max = 2^L − 1; L is the number of bits necessary to represent the value of a pixel. The limits (minimum and maximum) of the descriptor values are the limits of the data set.

Stage 2. Marginal Adequacy Degree (MAD). MADs are calculated for each descriptor x_i^j of each input vector x^j against each descriptor c_i^l of each class c_l. For this purpose, one of the following fuzzy probability distributions can be used:

Fuzzy Binomial Distribution:

MAD(x_i^j / c_i^l) = (ρ_i^l)^{x_i^j} (1 − ρ_i^l)^{(1 − x_i^j)}  (E8)

where i = 1, 2, …, n; j = 1, 2, …, M; and l = 1, 2, …, N. For all fuzzy probability distributions, ρ_i^l = c_i^l.

Fuzzy Binomial-Center Distribution:

MAD(x_i^j / c_i^l) = [ (ρ_i^l)^{x_i^j} (1 − ρ_i^l)^{(1 − x_i^j)} ] / [ (x_i^j)^{x_i^j} (1 − x_i^j)^{(1 − x_i^j)} ]  (E9)

Fuzzy Binomial-Distance Distribution:

MAD(x_i^j / c_i^l) = a^{(1 − x_dist)} (1 − a)^{x_dist}  (E10)

where a = max(ρ_i^l, 1 − ρ_i^l) and x_dist = |x_i^j − ρ_i^l|.

Gaussian Function:

MAD(x_i^j / c_i^l) = e^{ −(x_i^j − ρ_i^l)^2 / (2σ^2) }  (E11)

where σ^2 = (1/(n − 1)) Σ_{i=1..n} (x_i^j − x̄)^2 and x̄ = (1/n) Σ_{i=1..n} x_i^j are the variance and arithmetic mean of the vector x^j, respectively.
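Assuming scalar descriptors already normalized to [0, 1], the four distributions (E8)–(E11) can be sketched as plain functions; the names are ours, and (E9)'s denominator relies on the convention 0^0 = 1, which Python follows:

```python
import math

# Hedged sketch of the four marginal adequacy degrees for one
# descriptor x in [0, 1] against a class parameter rho = c_i^l.
def mad_binomial(x, rho):
    return rho ** x * (1 - rho) ** (1 - x)                # E8

def mad_binomial_center(x, rho):
    num = rho ** x * (1 - rho) ** (1 - x)
    den = x ** x * (1 - x) ** (1 - x)                     # centering term
    return num / den                                       # E9

def mad_binomial_distance(x, rho):
    a = max(rho, 1 - rho)
    x_dist = abs(x - rho)
    return a ** (1 - x_dist) * (1 - a) ** x_dist          # E10

def mad_gaussian(x, rho, sigma2):
    return math.exp(-(x - rho) ** 2 / (2 * sigma2))       # E11
```

Note that all four return 1 (or their maximum) when the descriptor matches the class parameter exactly, and decay as the descriptor moves away from it.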

Stage 3. Global Adequacy Degree (GAD). This stage determines the grade of membership of each input vector x^j in each class c_l by means of the linear convex function (E12) and the mathematical aggregation operators (T-norms and S-conorms) shown in Table 2.

GAD_{c_l}(x^j) = L_α^{T,S} = α · T( MAD(x_i^j / c_i^l) ) + (1 − α) · S( MAD(x_i^j / c_i^l) )  (E12)

Operator     T-Norm (Intersection)                  S-Conorm (Union)
Min-Max      min( MAD(x_i^j / c_i^l) )              max( MAD(x_i^j / c_i^l) )
Product      ∏_{i=1..n} MAD(x_i^j / c_i^l)          1 − ∏_{i=1..n} ( 1 − MAD(x_i^j / c_i^l) )

Table 2.

Mathematical aggregation operators

Stage 4. Obtaining the index. Finally, this stage generates the index of the class to which the input vector belongs. The index is that of the GAD presenting the maximum value (MA rule):

index = arg max( GAD_{c_1}(x^j), GAD_{c_2}(x^j), …, GAD_{c_N}(x^j) )  (E13)
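Stages 2–4 can be tied together in a short sketch. This is our own illustration, using the fuzzy binomial MAD (E8) and the min-max operators from Table 2, not the authors' implementation:

```python
# End-to-end sketch of stages 2-4: per-descriptor MADs, one GAD per
# class via the min-max linear convex function (E12), then the MA
# rule (E13) returning the winning class index.
def classify(x, codebook, alpha=1.0):
    gads = []
    for c in codebook:                               # one class c_l per codeword
        mads = [rho ** xi * (1 - rho) ** (1 - xi)    # fuzzy binomial MAD (E8)
                for xi, rho in zip(x, c)]
        t, s = min(mads), max(mads)
        gads.append(alpha * t + (1 - alpha) * s)     # GAD (E12)
    return gads.index(max(gads))                     # MA rule (E13)

codebook = [[0.1, 0.1, 0.1], [0.9, 0.9, 0.9]]
print(classify([0.85, 0.9, 0.95], codebook))  # -> 1 (closest class)
```

With α = 1 the linear convex function collapses to the T-norm alone, which is the simplification exploited in the experiments of Section 4.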

Figure 3 shows the proposed vector quantization (VQ) scheme, which makes use of the LAMDA algorithm and of the codebook generated by it.

Figure 3.

Search algorithm LAMDA


4. Results

In this section, the results of implementing the LAMDA search algorithm for gray-scale image recognition are presented. In this implementation only the binomial and binomial-center fuzzy probability distributions and the product and min-max aggregation operators were used, because they have the lowest computational complexity.

Figure 4.

Images of set-1: (a) original image; altered images: erosive noise (b) 60%, (c) 100%; mixed noise (d) 30%, (e) 40%

Figure 5.

Images of set-2: (a) original image; altered images: erosive noise (b) 60%, (c) 100%; mixed noise (d) 30%, (e) 40%

For this experiment we chose two test sets of images, called set-1 and set-2, and their altered versions (see Figures 4, 5). We say that an altered version x̃^γ of the image x^γ has undergone an erosive change whenever x̃^γ ≤ x^γ, a dilative change whenever x̃^γ ≥ x^γ, and a mixed change when it includes a mixture of erosive and dilative changes. These images were used to train the LAMDA codebook. At this stage, a few tests determined that the best results for the test images of set-1 were obtained using only the original images and the versions altered with 60% erosive noise. For the test images of set-2, the best results were obtained using only the original images and the versions altered with 60% and 100% erosive noise.
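The chapter does not specify how the altered versions were produced, so the following is only one plausible interpretation of erosive noise, consistent with the definition x̃^γ ≤ x^γ above: a given fraction of randomly chosen pixels is replaced by darker values. The function and its parameters are our own:

```python
import random

# Hypothetical generator of erosive noise: replace `fraction` of the
# pixels with values no brighter than the originals, so the altered
# image satisfies x~ <= x pixel-wise (an erosive change).
def add_erosive_noise(pixels, fraction, seed=0):
    rng = random.Random(seed)
    out = list(pixels)
    for i in rng.sample(range(len(out)), int(fraction * len(out))):
        out[i] = rng.randint(0, out[i])  # never brighter than the original
    return out

img = [200] * 100
noisy = add_erosive_noise(img, 0.6)
print(all(a <= b for a, b in zip(noisy, img)))  # True: purely erosive change
```

A dilative counterpart would instead replace pixels with brighter values, and mixed noise would combine both.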

To evaluate the performance of the proposed search algorithm, altered versions of these images, distorted by random noise, were presented to the classification stage of the LAMDA search algorithm (see Figures 4, 5).

Using two fuzzy probability distributions and two aggregation operators allows four combinations; this way, four versions of the LAMDA search algorithm are obtained: binomial min-max, binomial product, binomial-center min-max and binomial-center product. Moreover, the exigency level (α) was varied from 0 to 1 in steps of 0.1 to determine the value that provides the best results. Each version of LAMDA was evaluated using the two sets of test images. The results of this experiment are shown in Tables 3 and 4.
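The α sweep described above can be sketched as follows. The tiny codebook and test vectors are hypothetical stand-ins, and the classifier is a binomial min-max LAMDA variant as in Section 3.2:

```python
# Sketch of the exigency-level sweep: evaluate a LAMDA variant for
# alpha = 0.0, 0.1, ..., 1.0 and record the recognition rate per alpha.
def gad(x, c, alpha):
    mads = [rho ** xi * (1 - rho) ** (1 - xi) for xi, rho in zip(x, c)]
    return alpha * min(mads) + (1 - alpha) * max(mads)

def sweep_alpha(test_set, labels, codebook):
    rates = {}
    for step in range(11):
        alpha = step / 10
        hits = sum(
            max(range(len(codebook)),
                key=lambda l: gad(x, codebook[l], alpha)) == y
            for x, y in zip(test_set, labels))
        rates[alpha] = hits / len(test_set)
    return rates

codebook = [[0.1, 0.2], [0.8, 0.9]]           # hypothetical codewords
rates = sweep_alpha([[0.15, 0.2], [0.9, 0.85]], [0, 1], codebook)
print(rates[1.0])  # recognition rate at alpha = 1
```

The range of α values over which the rate stays at its maximum is what the experiments below report for each combination.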

Table 3 shows the results obtained using the combinations binomial min-max, binomial product, binomial-center min-max and binomial-center product with the test images of set-1.

Image Fuzzy distribution Aggregation operator Exigency level (α) Distortion percentage added to image
original Erosive noise Mixed noise
0% 60% 100% 30% 40%
Binomial Min-max 1 100% 100% 100% 100% 100%
100% 100% 100% 100% 100%
100% 100% 100% 100% 100%
100% 100% 100% 100% 100%
100% 100% 100% 100% 100%
Binomial Product 0-1 100% 100% 100% 100% 100%
0% 0% 0% 0% 0%
0% 0% 0% 0% 0%
0% 0% 0% 0% 0%
0% 0% 0% 0% 0%
Binomial center Min-max 1 100% 100% 100% 100% 100%
100% 100% 100% 100% 100%
100% 100% 100% 100% 100%
100% 100% 100% 100% 100%
100% 100% 100% 0% 0%
Binomial center Product 1 100% 100% 100% 100% 100%
100% 100% 0% 0% 0%
100% 100% 100% 100% 0%
100% 100% 0% 0% 0%
100% 0% 0% 0% 0%

Table 3.

Performance results (recognition rate) shown by the proposed search algorithm with altered versions of the test images of set-1

Image Fuzzy distribution Aggregation operator Exigency level (α) Distortion percentage added to image
original Erosive noise Mixed noise
0% 60% 100% 30% 40%
Binomial Min-max 1 100% 100% 100% 100% 100%
100% 100% 100% 100% 100%
100% 100% 100% 100% 100%
100% 100% 100% 100% 100%
100% 100% 100% 100% 100%
100% 100% 100% 0% 100%
100% 100% 100% 100% 100%
100% 100% 100% 100% 100%
100% 100% 100% 100% 100%
Binomial Product 1 100% 100% 100% 100% 100%
0% 0% 0% 0% 0%
0% 0% 0% 0% 0%
0% 0% 0% 0% 0%
0% 0% 0% 0% 0%
0% 0% 0% 0% 0%
0% 0% 0% 0% 0%
0% 0% 0% 0% 0%
0% 0% 0% 0% 0%
Binomial center Min-max 1 100% 100% 100% 100% 100%
100% 100% 100% 100% 100%
100% 100% 100% 100% 100%
100% 100% 100% 100% 100%
100% 100% 100% 100% 100%
100% 100% 100% 0% 100%
100% 100% 100% 100% 100%
100% 100% 100% 100% 100%
100% 100% 100% 100% 100%
Binomial center Product 1 100% 100% 100% 100% 100%
0% 0% 0% 0% 0%
0% 0% 0% 0% 0%
0% 0% 0% 0% 0%
0% 0% 0% 0% 0%
0% 0% 0% 0% 0%
0% 0% 0% 0% 0%
0% 0% 0% 0% 0%
0% 0% 0% 0% 0%

Table 4.

Performance results (recognition rate) shown by the proposed search algorithm with altered versions of the test images of set-2.

In the case of the combination of the binomial distribution with the min-max aggregation operator, the best results were obtained with an exigency level in the range from 0.8 to 1. We chose an exigency level equal to 1; as a result, the linear convex function is reduced by half and, consequently, the number of operations is reduced. On the other hand, the combination of the binomial distribution with the product aggregation operator was unable to perform the classification.

For the combination of the binomial-center distribution with the min-max aggregation operator, the best results were obtained with an exigency level in the range from 0.1 to 1. We chose an exigency level equal to 1; this way, the linear convex function is reduced by half, thus reducing the number of operations.

On the other hand, using the combination of the binomial-center distribution with the product aggregation operator, the best results were obtained with an exigency level equal to 1, although, as shown in Table 3, the classification is not efficient for images altered with 100% erosive noise or with 30% and 40% mixed noise. Even so, this combination gave better results than the combination of the binomial distribution with the product aggregation operator.

Table 4 shows the results obtained using the combinations binomial min-max, binomial product, binomial-center min-max and binomial-center product with the test images of set-2.

For the combination of the binomial distribution with the min-max aggregation operator, the best results were obtained with an exigency level in the range from 0.7 to 1. With the exigency level equal to 1, the linear convex function is reduced by half, thus reducing the number of operations. On the other hand, the combination of the binomial distribution with the product aggregation operator was unable to perform the classification.

For the combination of the binomial-center distribution with the min-max aggregation operator, the best results were obtained with an exigency level in the range from 0.1 to 1. Choosing an exigency level equal to 1, the linear convex function is reduced by half and the number of operations is reduced too. On the other hand, the combination of the binomial-center distribution with the product aggregation operator was unable to perform the classification.


5. Conclusions

In this Chapter, we have proposed the use of the LAMDA methodology as a search algorithm for image recognition. It is important to mention that we used the LAMDA algorithm in both the training phase and the recognition phase.

The advantage of the LAMDA algorithm is its versatility, which allows different versions to be obtained by combining fuzzy probability distributions and aggregation operators. Furthermore, the exigency level can be varied, and we can locate the range or value of the exigency level where the algorithm gives the best results.

As shown in Tables 3 and 4, the search algorithm is competitive, since acceptable results were obtained with the combinations binomial min-max and binomial-center min-max on both sets of images. As can be seen, the product aggregation operator was not able to perform the recognition. In both successful combinations the exigency level was equal to 1, which allowed the linear convex function to be simplified.

Finally, of these two combinations it is better to choose binomial min-max, because it performs fewer operations.

References

  1. Kala R., Shukla A., Tiwari R., Rungta S., Janghel R. R. 2009. Mobile Robot Navigation Control in Moving Obstacle Environment Using Genetic Algorithm, Artificial Neural Networks and A* Algorithm. World Congress on Computer Science and Information Engineering 4: 705-13.
  2. Zhu Y., Yuan Q., Wang Q., Fu Y., Wang H. 2009. Radar HRRP Recognition Based on the Wavelet Transform and Multi-Neural Network Fusion. Electronics Optics & Control 16(1): 34-8.
  3. Esbati H., Shirazi J. 2011. Face Recognition with PCA and KPCA using Elman Neural Network and SVM. World Academy of Science, Engineering and Technology 58: 174-8.
  4. Bowyer K. W., Hollingsworth K., Flynn P. J. 2008. Image understanding for iris biometrics: A survey. Computer Vision and Image Understanding 110(2): 281-307.
  5. Anagnostopoulos C.-N., Anagnostopoulos I. E., Psoroulas I. D., Loumos V., Kayafas E. 2008. License Plate Recognition From Still Images and Video Sequences: A Survey. IEEE Transactions on Intelligent Transportation Systems 9(3): 377-91.
  6. Abrishambaf R., Demirel H., Kale I. 2008. A Fully CNN Based Fingerprint Recognition System. 11th International Workshop on Cellular Neural Networks and their Applications: 146-9.
  7. Egmont-Petersen M., de Ridder D., Handels H. 2002. Image processing with neural networks: a review. Pattern Recognition 35(10): 2279-301.
  8. Bhowmik M. K., Bhattacharjee D., Nasipuri M., Basu D. K., Kundu M. 2009. Classification of Fused Images using Radial Basis Function Neural Network for Human Face Recognition. World Congress on Nature & Biologically Inspired Computing: 19-24.
  9. Yang J., Zhang D., Frangi A., Yang J.-y. 2004. Two-Dimensional PCA: A New Approach to Appearance-Based Face Representation and Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 26(1): 131-7.
  10. Gottumukkal R., Asari V. 2004. An improved face recognition technique based on modular PCA approach. Pattern Recognition Letters 25(4): 429-36.
  11. Bezdek J. C., Keller J., Krisnapuram R., Pal N. 2005. Fuzzy Models and Algorithms for Pattern Recognition and Image Processing (The Handbooks of Fuzzy Sets). Springer-Verlag New York, Inc.
  12. Mitchell H. B. 2005. Pattern recognition using type-II fuzzy sets. Information Sciences 170(2-4): 409-18.
  13. Bandyopadhyay S., Maulik U. 2002. Genetic clustering for automatic evolution of clusters and application to image classification. Pattern Recognition 35(6): 1197-208.
  14. Bhattacharya M., Das A. 2010. Genetic Algorithm Based Feature Selection In a Recognition Scheme Using Adaptive Neuro Fuzzy Techniques. International Journal of Computers, Communications & Control 4: 458-68.
  15. Pandit M., Gupta M. 2011. Image Recognition With the Help of Auto-Associative Neural Network. International Journal of Computer Science and Security 5(1): 54-63.
  16. Yang J. C., Park D. S. 2008. Fingerprint Verification Based on Invariant Moment Features and Nonlinear BPNN. International Journal of Control, Automation, and Systems 6(6): 800-8.
  17. Radha V., Nallammal N. 2011. Neural Network Based Face Recognition Using RBFN Classifier. Proceedings of the World Congress on Engineering and Computer Science 1.
  18. Zeng J., Liu Z.-Q. 2007. Type-2 Fuzzy Sets for Pattern Recognition: The State-of-the-Art. Journal of Uncertain Systems 1(3): 163-77.
  19. Anam S., Islam M. S., Kashem M. A., Islam M. N., Islam M. R., Islam M. S. 2009. Face Recognition Using Genetic Algorithm and Back Propagation Neural Network. Proceedings of the International MultiConference of Engineers and Computer Scientists 1.
  20. Blahuta J., Soukup T., Cermak P. 2011. The image recognition of brain-stem ultrasound images with using a neural network based on PCA. IEEE International Workshop on Medical Measurements and Applications Proceedings: 137-42.
  21. Yu H., Xie T., Hamilton M., Wilamowski B. 2011. Comparison of different neural network architectures for digit image recognition. 4th International Conference on Human System Interactions: 98-103.
  22. Piera N., Desroches P., Aguilar-Martin J. 1989. LAMDA: An Incremental Conceptual Clustering Method. LAAS, Laboratoire d'Automatique et d'Analyse des Systèmes; Report 89420: 1-21.
  23. Piera N., Aguilar-Martin J. 1991. Controlling Selectivity in Nonstandard Pattern Recognition Algorithms. IEEE Transactions on Systems, Man and Cybernetics 21(1): 71-82.
  24. Aguilar-Martin J., Sarrate R., Waissman J. 2001. Knowledge-based Signal Analysis and Case-based Condition Monitoring of a Machine Tool. Joint 9th IFSA World Congress and 20th NAFIPS International Conference Proceedings 1: 286-91.
  25. Aguilar-Martin J., Agell N., Sánchez M., Prats F. 2002. Analysis of Tensions in a Population Based on the Adequacy Concept. 5th Catalonian Conference on Artificial Intelligence, CCIA 2504: 17-28.
  26. 26. Waissman J. Ben-Youssef C. Vázquez G. 2005 Fuzzy Automata Identification Based on Knowledge Discovery in Datasets for Supervision of a WWT Process. 3rd International Conference on Sciences of Electronic Technologies of Information and Telecommunications
  27. 27. Engelbrecht A. P. 2007 Computational intelligence Anintoduction: John Wiley & Sons Ltd
  28. 28. Buckley J. J. 2005 Simulating Fuzzy Systems Kacprzyk J, editor: Springer-Verlag Berlin Heidelberg
  29. 29. Hernández H. R. 2006 Supervision et diagnostic des procédés de productiond’eau potable. PhD thesis.l’Institut National des Sciences Appliquées de Toulouse
  30. 30. Detyniecki M. 2000 Mathematical Aggregation Operators and their Application to Video Querying PhD thesis.Université Pierre et Marie Curie
  31. 31. Beliakov G. Pradera A. Calvo T. 2007 Aggregation Functions: A Guide for Practitioners. Kacprzyk J, editor: Springer-Verlag Berlin Heidelberg
  32. 32. Chan M. Aguilar-Martin J. Piera N. Celsis P. Vergnes J. 1989 Classification techniques for feature extraction in low resolution tomographic evolutives images: Application to cerebral blood flow estimation. In 12th Conf GRESTI Grouped’Etudes du Traitement du Signal et des Images
  33. 33. Piera N. Desroches P. Aguilar-Martin J. 1990 Variation points in pattern recognition Pattern Recognition Letters 11 519 24
  34. 34. Kempowsky T. 2004 Surveillance de Procedes a Base de Methodes de Classification: Conception d’un Outild’aide Pour la Detection et le Diagnostic des Defaillances. PhD Thesis. l’Institut National des Sciences Appliquées de Toulouse
  35. 35. Atine-C J. Doncescu A. Aguilar-Martin J. 2005 A Fuzzy Clustering Approach for Supervision of Biological Processes by Image Processing. EUSFLAT European Society for Fuzzy Logic and Technology 1057 63
  36. 36. Mora J. J. 2006 Localización de fallas en sistemas de distribución de energía eléctrica usando métodos basados en el modelo y métodos basados en el conocimiento. PhDThesis. Universidad de Girona
  37. 37. Isaza C. V. 2007 Diagnostic par Techniquesd’apprentissageFloues :Conceptiond’uneMethode De Validation Et d’optimisation des Partitions. PhD Thesis. l’Université de Toulouse
  38. 38. Orantes A. Kempowsky T. Lann-V M. Prat L. Elgue L. Gourdon S. Cabassud C. M. 2007 Selection of sensors by a new methodology coupling a classification technique and entropy criteria Chemical engineering research & design Journal 825 38
  39. 39. Guzmán E. Zambrano J. G. García I. Pogrebnyak O. 2011 LAMDA Methodology Applied to Image Vector Quantization. Computer Recognition Systems 4 95 347 56