Open access peer-reviewed chapter

Cellular Automata for Pattern Recognition

Written By

Sartra Wongthanavasu and Jetsada Ponkaew

Submitted: March 1st, 2012. Reviewed: August 14th, 2012. Published: May 8th, 2013.

DOI: 10.5772/52364


1. Introduction

Cellular Automata (CA) are spatiotemporal discrete systems (Neumann, 1966) that can model dynamic complex systems. A variety of problem domains have been reported to date in successful CA applications. In this regard, digital image processing is one of them, as reported by Wongthanavasu et al. (2003; 2004; 2007) and Rosin (2006).

Generalized Multiple Attractor CA (GMACA) were introduced for elementary pattern recognition (Ganguly et al., 2002; Maji et al., 2003; 2008). GMACA is a promising pattern classifier using a simple local network of Elementary Cellular Automata (ECA) (Wolfram, 1994), called an attractor basin, which is a reverse tree-graph. GMACA utilizes a reverse engineering technique and a genetic algorithm in ordering the CA rules. This leads to a major drawback in computational complexity, as well as in recognition performance. Successful applications of GMACA have been reported for the error correcting problem with only one bit of noise. It shows promising results for the restricted one-bit noise, but its complexity, using associative memory, explodes combinatorially when the number of noisy bits increases.

Due to the drawbacks in complexity and recognition performance stated previously, a binary CA-based classifier, called Two-class Classifier Generalized Multiple Attractor Cellular Automata with artificial point (2C2-GMACA), is presented. In this regard, its error correcting capability in pattern recognition is evaluated comprehensively in comparison with GMACA. In what follows, the basis of CA for pattern recognition and GMACA's configuration are presented first. Then, the 2C2-GMACA model and its performance evaluation in comparison with GMACA are provided. Finally, conclusions and discussions are given.


2. Cellular Automata for Pattern Recognition

Elementary Cellular Automata (ECA) (Wolfram, 1994) are generally utilized as a basis for pattern recognition. ECA are the simplest class of one-dimensional (1-d) CA, with n cells, 2 states and 3 neighbors. A state changes in discrete time and space (S_i^t → S_i^{t+1}, where S_i^t is the present state and S_i^{t+1} is the next state of the i-th cell) by considering the nearest neighborhood (S_{i−1}^t, S_i^t, S_{i+1}^t) of the present state.

Figure 1.

Elementary Cellular Automata (ECA) and Generalized Multiple Attractor Cellular Automata (GMACA).

For an n-cell ECA, the next-state function (S_i^t → S_i^{t+1}) can be represented by a rule matrix (M) of size n×8 together with the nearest-neighbor configuration (S_{i−1}^t, S_i^t, S_{i+1}^t) of the present state. Suppose an n-cell ECA (S_0^t S_1^t S_2^t … S_{n−1}^t) at time t is changed in discrete time by a rule vector <R_0, R_1, …, R_{n−1}>. A truth table is a simplified form of the rule vector, as illustrated in Fig. 1(a). It comprises the eight possible 3-neighbor values of S_{i−1}^t S_i^t S_{i+1}^t, from 000 to 111, and the next states for rule R_i, where i = 0, 1, 2, …, n−1. Each rule is represented by a binary number (b_7 b_6 b_5 b_4 b_3 b_2 b_1 b_0); decoded into decimal, this binary number equals the rule number R_i, e.g. '01011010' for rule 90. Likewise, a rule matrix (M) can represent the rule vector.

Let M(i, j) be the element of the matrix at the i-th row (i = 0, 1, 2, …, n−1) and the j-th column (j = 0, 1, 2, …, 7). M(i, j) contains b_j of rule R_i. For example, M(2, 3) is b_3 of rule R_2 (rule 90), which is '1'. Consequently, the next state (S_i^{t+1}) of the i-th cell is given by M(i, j) as follows:

S_i^{t+1} = M(i, j_i)   (1)

where

S_i^{t+1} is the next state of the i-th cell.

j_i is the 3-neighbor value (S_{i−1}^t S_i^t S_{i+1}^t) of the present state at the i-th cell, decoded in decimal.

The next state S^{t+1} of the n-cell ECA is also defined by the rule matrix M as follows:

S^{t+1} = (S_0^{t+1}, S_1^{t+1}, …, S_{n−1}^{t+1}) = (M(0, j_0), M(1, j_1), …, M(n−1, j_{n−1})) = Γ(M, S^t)   (2)

Suppose a system designed with a rule matrix (M) comprises a set of solutions Y = {y_i | y_i ∈ {0,1}^n} and an input x, x ∈ {0,1}^n, where i = 1, 2, …, N. Consequently, a pattern classifier based on the evolution of the ECA is defined as follows:

S^{t+1} = Γ(M, S^t), if S^t ∉ Y; S^t and stop, otherwise   (3)

For an input x, a solution from Y must be identified using equation (3). Firstly, the present state (S^t) is set to x. Then, the next state (S^{t+1}) is generated using the rule matrix M until some solution is reached (S^t ∈ Y). The structure for pattern classification using ECA can be represented by a simple local network called an attractor basin. It consists of cyclic and non-cyclic states. The cyclic states contain a pivotal point, which is a solution to the classification problem, while the transient states (all possible inputs) are contained in the non-cyclic states. The attractor cycle lengths (heights) in GMACA (Oliveira et al., 2006; Sipper, 1996) are greater than or equal to one, while in Multiple Attractor Cellular Automata (MACA) (Das et al., 2008; Maji et al., 2003; Sipper, 1996) they are limited to one. In Fig. 1(b), two attractor basins of 4-bit patterns of MACA with null boundary condition are designed with the rule vector <60, 150, 60, 240>. The target solution patterns are 0000 and 1011, respectively. The rule vector is ordered by the evolution of a heuristic search using a simulated annealing algorithm.
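The evolution and classification loop above can be sketched in Python. This is an illustrative sketch, not the chapter's code: `rule_matrix` builds the n×8 matrix M from a rule vector, `next_state` applies equation (2) under a periodic boundary condition, and `classify` iterates equation (3); the step bound `max_steps` is our assumption to guarantee termination.

```python
def rule_matrix(rule_vector):
    # Row i holds the eight output bits b0..b7 of rule R_i, so that
    # M[i][j] = b_j, where j is the decimal value of the 3-neighbour pattern.
    return [[(r >> j) & 1 for j in range(8)] for r in rule_vector]

def next_state(M, state):
    # Equation (2) with a periodic boundary: the neighbours of cell i are
    # cells (i-1) mod n and (i+1) mod n.
    n = len(state)
    return [M[i][(state[(i - 1) % n] << 2) | (state[i] << 1) | state[(i + 1) % n]]
            for i in range(n)]

def classify(M, x, Y, max_steps=100):
    # Equation (3): evolve x by the rule matrix until a solution in Y is reached.
    state = list(x)
    for _ in range(max_steps):
        if tuple(state) in Y:
            return tuple(state)
        state = next_state(M, state)
    return None  # no attractor in Y reached within max_steps
```

With the rule vector <232, 212, 178, 142> of Fig. 2, for example, the input '1100' evolves to '1111' in a single step, as worked out in Example 1 of Section 4.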


3. Generalized Multiple Attractor Cellular Automata

This section gives the detailed configuration of GMACA and its application in error correcting codes (ECC). Suppose an n-bit pattern is sent through a communication system. Let X be the sender's pattern and Y be the receiver's pattern. Thus, the number of differing bits between X and Y is determined by the Hamming distance (r), defined as follows:

r(X, Y) = Σ_{i=0}^{n−1} |x_i − y_i|   (4)

where X = x_0 x_1 … x_{n−1}, x_i ∈ {0,1}, and Y = y_0 y_1 … y_{n−1}, y_i ∈ {0,1}.

The number of possible error patterns (p_r) for a given r of an n-bit communication can be expressed as follows:

p_r = C(n, r) = n! / (r!(n−r)!)   (5)
Then, the number of all possible error patterns (p_All) for a given r_max, where r_max ∈ [0, n] is the maximum permissible noise, is given by:

p_All = Σ_{r=0}^{r_max} C(n, r)   (6)
The maximum permissible noise (r_max) is the highest value of r allowed to occur in the communication system. The Hamming distance model of a message (pattern) and its errors is also represented by an attractor basin; that is, the message is a pivotal point while the errors are transient states. Thus, error correcting codes can be realized by the Generalized Multiple Attractor Cellular Automata (GMACA).

Figure 2.

Two-Class Classifier GMACA with artificial point (2C2-GMACA): <232, 212, 178, 142>.

Suppose a communication system comprises k original messages of n-bit data and the maximum permissible noise r_max. If error messages are to be corrected using GMACA, a satisfying rule vector is required. The rule vector is the result of a reverse engineering technique. Firstly, k attractor basins are randomly constructed, with the number of nodes of each attractor basin equal to p_All. Then, original messages are randomly mapped onto pivotal points, while their possible errors are randomly mapped onto the transient states of the same attractor basin. Finally, search heuristics such as simulated annealing (SA) and genetic algorithms (GA) (Holland, 1992; Shuai et al., 2007; Jie et al., 2002) are used to explore the optimal structure. The search heuristic then iteratively changes the directions and heights of the attractor basins until a satisfying rule vector is acquired.

As reported in Ganguly et al. (2002) and Maji et al. (2003; 2008), GMACA provides its best pattern recognition performance when trained with an r_max of 1. Although the recognition percentage in testing is high when r_max equals 1, recognition performance decreases sharply when r_max is greater than 1.


4. Proposed 2C2-GMACA Model

Due to the drawbacks in recognition performance resulting from increasing r_max and the search space complexity of rule ordering, the proposed method, called Two-class Classifier Generalized Multiple Attractor Cellular Automata with artificial point (2C2-GMACA) (Ponkaew et al., 2011), is introduced. 2C2-GMACA is designed on the basis of a two-class classifier architecture. In this regard, two classes are processed at a time and a solution is a binary answer, +1 or −1, which points to the class label of the solution. There are two kinds of attractor basins: a positive attractor basin, which returns +1 as the result, and a negative attractor basin otherwise.

Suppose a system consists of patterns (x_i, y_i), where x_i ∈ {0,1}^n is the i-th pattern, y_i ∈ {L+, L−} is the i-th class label and i = 1, 2, …, N. Let L+ and L− be the class labels of the positive and negative attractor basins, respectively. Given x ∈ {0,1}^n as an input, x must be assigned a class label, which is the solution to the pattern recognition. 2C2-GMACA begins by setting the present state (S^t) to x. Then, S^t is evolved by equation (2) to the next state (S^{t+1}). Next, the binary decision function takes S^{t+1} and the artificial point (A) as parameters, as in equation (7), to assign the class.

f(S^{t+1}, A) = sgn( Σ_{i=0}^{n−1} S_i^{t+1}·A_i − Σ_{i=0}^{n−1} S̄_i^{t+1}·A_i )   (7)

where

sgn denotes the sign function.

S_i^{t+1} represents the next state of the i-th cell.

A_i represents the artificial point bit of the i-th cell.

S̄_i^{t+1} represents the bit complement of the next state of the i-th cell.

(·) denotes the logical "AND" operator.

Finally, x is considered a member of the positive attractor basin and returns L+ if f(S^{t+1}, A) = +1; it returns L− otherwise.
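Equation (7) amounts to comparing how many set bits of the artificial point A coincide with set bits of S^{t+1} versus with its complement. A minimal illustrative sketch (mapping sgn(0) to +1 is our assumption; the chapter does not state a tie convention):

```python
def binary_decision(next_state, A):
    # f(S^{t+1}, A) = sgn( sum_i S_i * A_i  -  sum_i (1 - S_i) * A_i ), eq. (7)
    pos = sum(s & a for s, a in zip(next_state, A))
    neg = sum((1 - s) & a for s, a in zip(next_state, A))
    return 1 if pos - neg >= 0 else -1  # assumption: sgn(0) treated as +1
```

For S^{t+1} = 1111 and A = 0001, the function returns +1, in agreement with Example 1 below.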

Example 1: Consider the two attractor basins of the 4-bit recognizer of 2C2-GMACA with periodic boundary condition given in Fig. 2. They are designed by a rule vector <232, 212, 178, 142>, represented in a matrix M, and an artificial point (A) of '0001'. Suppose the class labels of the positive (L+) and negative (L−) attractor basins are '1101' and '0010', respectively. For an input x' = '1100', the present state S^t (t = 0) is first set to x' and then evolved with the given rule vector to the next state S^{t+1} (t + 1 = 1) by equation (2), resulting in

S^{t+1} = (M(0, j_0), M(1, j_1), M(2, j_2), M(3, j_3))

where j_i is the 3-neighbor value (S_{i−1}^t S_i^t S_{i+1}^t) of the i-th cell, decoded in decimal. That is, j_0 = (011)_2 = 3, j_1 = (110)_2 = 6, j_2 = (100)_2 = 4 and j_3 = (001)_2 = 1. Substituting the j_i in decimal gives

S^{t+1} = (M(0, 3), M(1, 6), M(2, 4), M(3, 1)) = (1, 1, 1, 1)

Finally, the binary decision function processes S^{t+1}, which equals '1111', with the artificial point A = '0001' as co-parameters, resulting in

f(S^{t+1}, A) = sgn( Σ_{i=0}^{n−1} S_i^{t+1}·A_i − Σ_{i=0}^{n−1} S̄_i^{t+1}·A_i ) = sgn( (1·0 + 1·0 + 1·0 + 1·1) − (0·0 + 0·0 + 0·0 + 0·1) ) = +1   (10)

The function returns +1, meaning that the input x' is a member of the positive attractor basin, and the label '1101' is therefore assigned as the solution.

4.1. 2C2-GMACA with Associative and Nonassociative Memories

Given a set of patterns Y = {y_1, y_2, …, y_k} representing original messages, where y_i ∈ {0,1}^n and i = 1, 2, …, k, 2C2-GMACA takes two patterns {y_i, y_j}, y_i ≠ y_j and y_i, y_j ∈ Y, to process at a time. For associative memory learning, all possible transient states of y_i and y_j are generated using equation (6) with the maximum permissible noise (r_max), while for non-associative memory the transient states are randomly generated with r ∈ [0, r_max]. Then, all states of y_i and y_j are mapped into the leaf nodes of the positive and negative attractor basins, respectively. After the two attractor basins are completely constructed, they are synthesized by a majority voting technique to arrive at the rule vector. In other words, the rule vector is determined in only one time step, which differs from GMACA, where it is determined iteratively through the evolution of a heuristic search; there, complexity is the main drawback in addition to recognition performance.
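Associative memory learning enumerates every pattern within Hamming distance r_max of a learnt pattern. A sketch of that enumeration (illustrative; the function name is ours):

```python
from itertools import combinations

def transient_states(y, r_max):
    # All patterns within Hamming distance r_max of y, including y itself (r = 0):
    # for each r, flip every possible set of r bit positions.
    n = len(y)
    states = []
    for r in range(r_max + 1):
        for idx in combinations(range(n), r):
            p = list(y)
            for i in idx:
                p[i] ^= 1
            states.append(tuple(p))
    return states
```

For y = 1101 and r_max = 1 this yields exactly the positive-basin set {1101, 0101, 1001, 1111, 1100} used in the example of Section 4.2.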

Being a binary classifier itself, 2C2-GMACA conducts multiclass classification by, for example, DDAG (Decision Directed Acyclic Graph), One-versus-All or One-versus-One. However, this chapter focuses on the DDAG approach [28]. Suppose a set of three patterns {y1, y2, y3}, where y_i ∈ {0,1}^n and i = 1, 2, 3, is constructed using the DDAG scheme. The total number of binary classifiers is 3·2/2 = 3, namely (1 vs 3), (1 vs 2) and (2 vs 3), and the number of levels is ⌈log_2 3⌉ = 2. The root node (1 vs 3) is contained in the 0th level. Then, (1 vs 2) and (2 vs 3) are contained in the 1st level. Finally, the solutions {3, 2, 1} are labeled at the leaf nodes of the 2nd level. To assign a class label to an unknown input x ∈ {0,1}^n, it is first evaluated at the root node. The node is exited through the left edge if the binary decision function is −1, and through the right edge if it is +1. The input x is evaluated until it reaches the final level, at which point the leaf node connected to the edge of the binary decision function is assigned as the solution.
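The DDAG traversal can be sketched as a candidate-elimination loop: each pairwise test removes one of the two extreme remaining classes. This is a generic sketch, not the chapter's code; `binary_decide` stands in for the 2C2-GMACA classifier of the corresponding class pair, and the orientation of the +1/−1 edges is our assumption.

```python
def ddag_classify(x, classes, binary_decide):
    # DDAG over an ordered list of class labels: at each node, test the first
    # remaining class a against the last remaining class b.
    # binary_decide(x, a, b) -> +1 keeps a (drops b); -1 keeps b (drops a).
    remaining = list(classes)
    while len(remaining) > 1:
        a, b = remaining[0], remaining[-1]
        if binary_decide(x, a, b) == 1:
            remaining.pop()      # drop b
        else:
            remaining.pop(0)     # drop a
    return remaining[0]
```

With k classes this evaluates exactly k−1 of the k(k−1)/2 trained binary classifiers per input.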

4.2. Design of Rule Vector

A majority voting rule is utilized to synthesize a rule vector from the two attractor basins. It is a one-time-step process, different from the reverse engineering technique (Maji et al., 2003; 2008) used in GMACA. The reverse engineering technique keeps reconstructing attractor basins randomly until arriving at the rule vector with the lowest collision. In this regard, 2C2-GMACA's time complexity for ordering the rule is simply O(1). However, it must search for an optimal artificial point, which requires an evolutionary heuristic search. The 2C2-GMACA synthesis scheme comprises three phases as follows.

Figure 3.

2C2-GMACA synthesis scheme under the majority voting rule.

Phase I --- Two attractor basins, namely the positive and negative attractor basins, are generated. In this phase, two patterns {y_l, y_m}, where y_l ≠ y_m and y_l, y_m ∈ Y, are chosen from the set of learnt patterns to process according to the multiclass classification scheme [28]. Suppose y_l is assigned the class label L+. Then y_m has the class label L−. The transient states of y_l and y_m are generated into the leaf nodes of the positive and negative attractor basins, respectively.

Example 1: Fig. 3(a) represents two attractor basins based on associative memory learning of 4-bit patterns with r_max = 1. Suppose Y = {1101, 0010} is the set of learnt patterns. 2C2-GMACA takes the two patterns {y1 = 1101, y2 = 0010} to process according to the multiclass classification algorithm. Let the class labels of the positive (L+) and negative (L−) attractor basins be '1101' and '0010', respectively. Then, two sets of noisy patterns with r_max = 1 are generated, resulting in {1101, 0101, 1001, 1111, 1100} and {0010, 1010, 0110, 0000, 0011}, respectively. All patterns are then mapped into the leaf nodes of the attractor basins corresponding with their labels, as shown in Fig. 3(a).

Phase II --- Let M+ and M− be matrices of size n×8, and let M+(i, j) and M−(i, j), where i = 0, 1, 2, …, n−1 and j = 0, 1, 2, …, 7, be the elements of the matrices M+ and M−, respectively. M+(i, j) represents the number of nodes of the positive attractor basin whose 3-neighbor configuration (S_{i−1}^t S_i^t S_{i+1}^t) at the i-th cell, decoded in decimal, satisfies the j-th column. The negative attractor basin determines M−(i, j) under the same condition as the positive one.

Example 2: As shown in Fig. 3(b), two matrices M+ and M− of size 4×8 are constructed, each element of which represents the number of nodes of the corresponding attractor basin. For example, M+(1, 1) is the element of matrix M+ at the 1st row and the 1st column; it is the total number of leaf nodes of the positive attractor basin whose 3 neighbors (S_0^t S_1^t S_2^t) of the 1st cell, decoded in decimal, equal 1, i.e. j = 1 = (001)_2 = (S_{i−1}^t S_i^t S_{i+1}^t)_2 with i = 1.

Phase III --- The rule matrix M is determined. The matrix M of size n×8 is the simplified form of the rule vector (RV), and an element M(i, j) represents the next state of the i-th cell whose 3-neighbor configuration (S_{i−1}^t S_i^t S_{i+1}^t), decoded in decimal, equals j. M is designed by comparing M+(i, j) and M−(i, j), where i = 0, 1, 2, …, n−1 and j = 0, 1, 2, …, 7, according to the following conditions:

1) if M+(i, j) > M−(i, j), then M(i, j) = 1

2) if M+(i, j) ≤ M−(i, j), then M(i, j) = 0

Fig. 3(c) shows that the rule vector <232, 212, 178, 142> is obtained by the majority voting technique. The rule vector (rule matrix) is utilized to evolve a given pattern in one time step to the pattern at the next time step, which becomes one of the parameters of the binary decision function.
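Phases II and III amount to counting neighborhood configurations per basin and taking a majority vote per (cell, configuration) pair. A sketch under a periodic boundary, as in Fig. 2 (the function names are ours, not the chapter's):

```python
def count_matrix(patterns, n):
    # Phase II: C[i][j] = number of basin nodes whose 3-neighbour configuration
    # at cell i decodes in decimal to j (periodic boundary).
    C = [[0] * 8 for _ in range(n)]
    for p in patterns:
        for i in range(n):
            j = (p[(i - 1) % n] << 2) | (p[i] << 1) | p[(i + 1) % n]
            C[i][j] += 1
    return C

def majority_rule_matrix(pos_patterns, neg_patterns, n):
    # Phase III: M(i,j) = 1 iff M+(i,j) > M-(i,j), else 0.
    Mp = count_matrix(pos_patterns, n)
    Mn = count_matrix(neg_patterns, n)
    return [[1 if Mp[i][j] > Mn[i][j] else 0 for j in range(8)] for i in range(n)]

def rule_vector(M):
    # Decode each row b7..b0 back into a decimal rule number.
    return [sum(M[i][j] << j for j in range(8)) for i in range(len(M))]
```

Feeding in the two basins of the example above ({1101, 0101, 1001, 1111, 1100} and {0010, 1010, 0110, 0000, 0011}) reproduces the rule vector <232, 212, 178, 142> of Fig. 3(c).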

4.3. Design of Artificial Point

An artificial point (A) plays a major role in the binary decision function. It interprets the next state (S^{t+1}) in feature space as a pointer identifying the class label of the solution. In this respect, a Genetic Algorithm (GA) (Holland, 1992; Buhmann et al., 1989) is implemented to determine the optimal artificial point. A chromosome with n genes in the GA represents an n-bit artificial point as follows:

chromosome = [b_0 b_1 b_2 … b_{n−1}]   (11)

Selection is done using a random pairing approach, and a traditional single-point crossover is performed at a random, shared position of the n-element arrays of the two selected parents. Mutation makes a small change to the bits of a chromosome with a small probability. The fitness function is calculated as a cost for each chromosome. It is created from the true positives (TP) and false positives (FP) of the confusion matrix (Simon et al., 2010) calculated by equation (8) below. The fitness function is given as follows:


The search space for rule ordering of 2C2-GMACA consists of all possible patterns of the artificial point, from 000…000 to 111…111, which is 2^n, i.e. O(2^n).
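The GA described above can be sketched as follows. This is a generic single-point-crossover GA, not the chapter's implementation: the fitness function is passed in as a parameter, since the chapter's equation (8) (built from the TP and FP entries of the confusion matrix) is not reproduced here, and the population size, generation count and mutation rate are arbitrary assumptions.

```python
import random

def evolve_artificial_point(n, fitness, pop_size=20, generations=50,
                            mutation_rate=0.02):
    # Each chromosome is an n-bit candidate artificial point, eq. (11).
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        # selection by random pairing, then single-point crossover
        random.shuffle(pop)
        children = []
        for a, b in zip(pop[0::2], pop[1::2]):
            cut = random.randrange(1, n)
            children += [a[:cut] + b[cut:], b[:cut] + a[cut:]]
        # mutation: flip each bit with a small probability
        for c in children:
            for i in range(n):
                if random.random() < mutation_rate:
                    c[i] ^= 1
        # elitist replacement: keep the fittest pop_size individuals
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)
```

Any scoring function of a chromosome can be plugged in as `fitness`; in 2C2-GMACA it would score how well the candidate artificial point separates the two basins.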


5. Performance Evaluation

This section reports the performance evaluation of the proposed method in comparison with GMACA on a set of metrics consisting of search space and classification complexities, recognition percentage, evolution time for rule ordering, and the effects of the number of pivotal points, permissible noise, the p-parameter and the pattern size on the error correcting problem.

5.1. Reduction of Search Space

Given a set of learnt patterns Y = {y_1, y_2, …, y_k} representing original messages, where y_i ∈ {0,1}^n and i = 1, 2, …, k, both 2C2-GMACA and GMACA based on associative memory learning generate all transient states using equation (6) with the maximum permissible noise (r_max). The transient states are then constructed into attractor basins.

Theorem 1: In the training phase, the search space complexity of GMACA (S_GMACA) depends on the parameters of bit pattern size (n), the maximum permissible noise (r_max) and the maximum permissible height (h_max), while the search space complexity of 2C2-GMACA (S_2C2-GMACA) depends only on the parameter n.

Proof: From the set Y = {y_1, y_2, …, y_k}, GMACA constructs k attractor basins randomly until a satisfying rule vector is acquired. Thus, the search space of GMACA (S_GMACA) is all possible patterns of the k attractor basins, defined by


where G is the number of learnt patterns in each attractor basin, previously defined by Cayley's formula (Maji et al., 2003) as follows:


where p is the number of possible transient states calculated from (6). Therefore, the above equation becomes:


This shows that the search space complexity of GMACA grows factorially, O(n!), depending on the parameters n and r_max. In real-world applications it faces a severe search space in which the search heuristics cannot reach the optimal solution if n or r_max is large. In this regard, GMACA tries to determine the optimal values of r_max and h_max. GMACA shows that the search space complexity can be reduced to O(n^n) if r_max = 1, as shown below:


The search space complexity in Maji et al. (2003; 2008) is examined under h_max = 2 and r_max = 1, as described below.


For the proposed 2C2-GMACA, the search space is the number of possible patterns (G) of the artificial point, from 000…000 to 111…111, that is, 2^n. Due to the DDAG approach to multiclass classification, the machine consists of k(k−1)/2 binary classifiers. Thus, the search space complexity of 2C2-GMACA (S_2C2-GMACA) is:

S_2C2-GMACA = (k(k−1)/2) · 2^n, i.e. O(2^n)
Comparing the search space complexities of GMACA and 2C2-GMACA, we found that GMACA can only be implemented when h_max = 2 and r_max = 1, while 2C2-GMACA can always be implemented with an exact solution through heuristic search. This corresponds to the reports in Maji et al. (2003; 2008): GMACA provides its best pattern recognition performance when trained with r_max = 1 and h_max = 2. However, the recognition percentage in testing is only high if the Hamming distance of patterns is less than or equal to 1, and it decreases sharply when the Hamming distance is greater than 1.

5.2. Reduction of Classification Complexity

Theorem 2: In the worst-case scenario of learning based on the associative memory model, the classification complexity of an n-bit pattern for GMACA is O(n^2), while for 2C2-GMACA it is O(n).

Proof: In general, the time spent classifying n nodes of GMACA depends on the arrangement of nodes in the attractor basins. At worst, the attractor basin is a linear tree. Thus, the time for classifying n nodes is the summation of the numbers of traversal paths from each node to a pivotal point. For example, the number of traversal paths of the pivotal point is 0, while that of the n-th node is (n−1). This can be solved by an arithmetic series (S_n). Given the common difference d = 1 and the initial term a_1 = 0, the summation is determined as follows:

S_n = (n/2)(2a_1 + (n−1)d) = n(n−1)/2, i.e. O(n^2)
As the height of the attractor basins of 2C2-GMACA is limited to 1 by design, the time for classifying n nodes is n, i.e. O(n).

5.3. Performance Analysis of 2C2-GMACA on Associative Memory

Pattern classifiers based on associative memory are independent of the number of patterns to be learnt, because all possible distorted patterns are generated into the learning system. Suppose a set of pivotal points Y = {y_1, y_2, …, y_k}, where y_i ∈ {0,1}^n and i = 1, 2, …, k, is the set of original messages. 2C2-GMACA takes two pivotal points {y_l, y_m}, where y_l, y_m ∈ Y, y_l ≠ y_m and l, m = 1, 2, …, k, to process at a time using the DDAG scheme. Thus, the number of classifiers of 2C2-GMACA is k(k−1)/2, while GMACA takes all pivotal points to process at once.

5.3.1. Recognition and Evolution Time

This section reports the recognition rate and evolution time for rule ordering of 2C2-GMACA and GMACA based on associative memory. Table 1 presents the recognition rate at different bit pattern sizes (n) and numbers of attractor basins (k). Patterns are generated with the maximum permissible noise in the training phase (r_max) and tested with different noise sizes r, r ∈ [1, r_max]. Table 2 presents the evolution time in seconds of the genetic algorithm in determining the well-fitting attractor basins and artificial point at different values of n and k. The results show that 2C2-GMACA is superior to GMACA in both recognition performance and time spent on rule ordering. This corresponds to the earlier observation that the search space is the major problem of GMACA for ordering the rules at high values of r_max.

5.3.2. Effects of Number of Pivotal Points and Pattern Size

A pivotal point in 2C2-GMACA represents an original message in a communication system. Fig. 4 shows the effect of the number of pivotal points (k) on the recognition performance of the proposed 2C2-GMACA based on associative memory learning at a particular r_max and bit pattern size. It shows that if trained with r_max = 3, the recognition rate is almost 100% when the number of noisy bits (r) is not greater than 5, regardless of the number of classes (k), and declines sharply when the number of noisy bits increases. The fewer the classes, the better the recognition performance. Fig. 5 shows the effect of the bit pattern size on the recognition performance of 2C2-GMACA based on associative memory learning with fixed r_max and number of classes (k). In this regard, as the number of noisy bits in testing increases, the recognition for different bit pattern sizes decreases in a distinguishable manner. The larger the bit pattern, the lower the recognition performance.

5.4. Performance Analysis of 2C2-GMACA on Non-Associative Memory

Memory capacity becomes a serious problem for a pattern classifier based on associative memory learning if the classifier deals with high values of n, r_max and k, since a large number of transient states is generated. In order to solve this problem, 2C2-GMACA based on non-associative memory is presented. The transient states are generated by randomly choosing bit noise r ∈ [0, r_max], and their number is limited to some number p, p ∈ I+.

5.4.1. Effects of Maximum Permissible Noise and P-Parameter

In order to examine the effect of the maximum permissible noise r_max on the error correcting problem of 2C2-GMACA based on non-associative memory, two pivotal points are randomly generated and the number of transient states is limited to some number p, p ∈ I+. The transient states are randomly generated from equation (6) using r ∈ [0, r_max] until the number of states equals p. This method is called uniform distribution learning. Fig. 6 shows the effect of r_max at (1/4)n, (2/4)n and (3/4)n, where n = 100 is the bit pattern size. The number of pivotal points (k) and transient states (p) is fixed at 2 and 2000, respectively. The results are plotted as an inverted bell curve. They show that 2C2-GMACA has the lowest capability in the range r ∈ [0, (1/2)n] if trained with r_max ≈ (3/4)n, as opposed to r_max = (1/2)n. However, the overall average percentage for r_max ≈ (3/4)n is the highest.

The effect of the number of transient states (p, p ∈ I+) for two attractor basins (k = 2) is examined and shown in Fig. 7. During the training phase, the bit pattern size (n) is set to 100, while the maximum permissible noise (r_max) is set to nearly (3/4)n ≈ 75. The percentage of recognition is then observed at different values of p, namely 2000, 4000 and 10000. The results show that the average recognition percentage is highest when trained with the highest value of p. However, this consumes memory, as already mentioned.

Figure 4.

The effect of the k-parameter on the percentage of recognition of 2C2-GMACA based on associative memory.

Figure 5.

The effect of the n-parameter on the percentage of recognition of 2C2-GMACA based on associative memory.

Figure 6.

The effect of the r_max parameter on the percentage of recognition of 2C2-GMACA based on non-associative memory.

Figure 7.

The effect of the p-parameter on the percentage of recognition of 2C2-GMACA based on non-associative memory.


6. Conclusions and Discussions

This chapter presents a non-uniform cellular automata-based algorithm with a binary classifier, called Two-class Classifier Generalized Multiple Attractor Cellular Automata with artificial point (2C2-GMACA), for pattern recognition. 2C2-GMACA is built around the simple structure of evolving non-uniform cellular automata called an attractor basin, and classifies patterns on the basis of a two-class classifier architecture similar to support vector machines. To reduce the computational time complexity of ordering the rules, 2C2-GMACA limits the height of the attractor basin to 1, while GMACA can have a height up to n, where n is the number of bits in a pattern. A genetic algorithm is utilized to determine the CA's best rules for classification. In this regard, GMACA designs one chromosome consisting of k genes, where k is the number of classes (target patterns) to be classified. This leads to abundant state spaces and a combinatorial explosion in computation, especially when the number of noisy bits increases. In the design of 2C2-GMACA, a chromosome represents an artificial point consisting of an n-bit pattern. Consequently, the state space is minimal and computation is feasible in general pattern recognition problems. 2C2-GMACA reduces the search space for ordering a rule vector from GMACA's O(n^n) to O(1) + O(2^n). In addition, the multiple-error correcting problem is experimented with empirically, comparing the proposed method and GMACA based on associative and non-associative memories for performance evaluation. The results show that the proposed method provides a 99.98% recognition rate, superior to GMACA's reported 72.50%, when associative memory is used, and 95.00% versus 64.30%, respectively, when non-associative memory is used. In terms of computational time for ordering the rules through the genetic algorithm, the proposed method is 7 to 14 times faster than GMACA. These results suggest the extension of 2C2-GMACA to other pattern recognition tasks.
In this respect, we are improving and extending 2C2-GMACA to cope with complicated patterns on which state-of-the-art methods, such as SVM and ANN, report poor classification performance, and we hope to report our findings soon.


References

1. Neumann, J. V. Theory of Self-Reproducing Automata. University of Illinois Press, 1966.
2. Wongthanavasu, S. and Sadananda, R. A CA-based edge operator and its performance evaluation. Journal of Visual Communication and Image Representation, 14(2), 2003, 83-96.
3. Wongthanavasu, S. and Lursinsap, C. A 3D CA-based edge operator for 3D images. In: Image Processing (ICIP '04), 2004 International Conference on, 2004, 235-238.
4. Wongthanavasu, S. and Tangvoraphonkchai, V. Cellular automata-based algorithm and its application in medical image processing. In: Image Processing (ICIP 2007), IEEE International Conference on, 2007, 41-44.
5. Rosin, P. L. Training cellular automata for image processing. IEEE Transactions on Image Processing, 15(7), 2006, 2076-2087.
6. Wolfram, S. Cellular Automata and Complexity: Collected Papers. Addison-Wesley Pub. Co., 1994.
7. Delgado, O. G., Encinas, L. H., White, S. H., Rey, A. M. d. and Sanchez, G. R. Characterization of the reversibility of Wolfram cellular automata with rule number 150 and periodic boundary conditions. INFORMATION, 8(4), 2005, 483-492.
8. Das, S. and Chowdhury, D. R. An efficient n x n Boolean mapping using additive cellular automata. In: Proceedings of the 8th International Conference on Cellular Automata for Research and Industry, Yokohama, Japan. Springer-Verlag, 2008, 168-173.
9. Maji, P., Shaw, C., Ganguly, N., Sikdar, B. K. and Chaudhuri, P. P. Theory and application of cellular automata for pattern classification. Fundamenta Informaticae, 58(3-4), 2003, 321-354.
10. Chady, M. and Poli, R. Evolution of cellular-automaton-based associative memories. Technical Report CSRP-97-15, University of Birmingham, School of Computer Science, May 1997.
11. Ganguly, N., Maji, P., Sikdar, B. K. and Chaudhuri, P. P. Generalized Multiple Attractor Cellular Automata (GMACA) for associative memory. International Journal of Pattern Recognition and Artificial Intelligence, Special Issue: Computational Intelligence for Pattern Recognition, 16(7), 2002, 781-795.
12. Maji, P., Ganguly, N. and Chaudhuri, P. P. Error correcting capability of cellular automata based associative memory. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 33(4), 2003, 466-480.
13. Maji, P. and Chaudhuri, P. P. Non-uniform cellular automata based associative memory: evolutionary design and basins of attraction. Information Sciences, 178(10), 2008, 2315-2336.
14. Niloy, G., Maji, P., Sikdar, B. K. and Chaudhuri, P. P. Design and characterization of cellular automata based associative memory for pattern recognition. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 34(1), 2004, 672-678.
15. Tan, S. K. and Guan, S. U. Evolving cellular automata to generate nonlinear sequences with desirable properties. Applied Soft Computing, 7(3), 2007, 1131-1134.
16. Shannon, C. E. and Weaver, W. The Mathematical Theory of Communication. Univ. of Illinois Press, Urbana, 1949.
17. Oliveira, P. P. B., Bortot, J. C. and Oliveira, G. M. B. The best currently known class of dynamically equivalent cellular automata rules for density classification. Neurocomputing, 70(1-3), 2006, 35-43.
18. Sipper, M. Co-evolving non-uniform cellular automata to perform computations. 1996.
19. Holland, J. H. Adaptation in Natural and Artificial Systems. MIT Press, 1992.
20. Oh, H. Application of a heuristic search to the deadlock avoidance algorithm that achieves the high concurrency. INFORMATION, 4(4), 2001.
21. Shuai, D., Dong, Y. and Shuai, Q. A new data clustering approach: generalized cellular automata. Information Systems, 32(7), 2007, 968-977.
22. Jie, Y. and Binggang, C. Study on an improved genetic algorithm. INFORMATION, 5(4), 2002.
23. Ponkaew, J., Wongthanavasu, S. and Lursinsap, C. A nonlinear classifier using an evolution of cellular automata. In: Intelligent Signal Processing and Communications Systems (ISPACS), 2011 International Symposium on, 2011, 1-5.
24. Ponkaew, J., Wongthanavasu, S. and Lursinsap, C. Two-class classifier cellular automata. In: Industrial Electronics and Applications (ISIEA), 2011 IEEE Symposium on, 2011, 354-359.
25. Buhmann, J., Divko, R. and Schulten, K. Associative memory with high information content. Physical Review A, 39(5), 1989, 2689-2692.
26. Simon, D. and Simon, D. L. Analytic confusion matrix bounds for fault detection and isolation using a sum-of-squared-residuals approach. IEEE Transactions on Reliability, 59(2), 2010, 287-296.
