
Improved Kohonen Feature Map Probabilistic Associative Memory Based on Weights Distribution

By Shingo Noguchi and Osana Yuko

Submitted: December 20th 2011. Reviewed: July 13th 2012. Published: January 16th 2013.

DOI: 10.5772/51581


1. Introduction

Recently, neural networks have been drawing much attention as a method for realizing flexible information processing. Neural networks are modeled on groups of neurons in the biological brain and imitate them technologically. Neural networks have several features; one of the most important is that they can learn in order to acquire the ability of information processing.

In the field of neural networks, many models have been proposed, such as the Back Propagation algorithm [1], the Kohonen Feature Map (KFM) [2], the Hopfield network [3], and the Bidirectional Associative Memory [4]. In these models, the learning process and the recall process are separated, and therefore all information must be available for learning in advance.

However, in the real world it is very difficult to obtain all information in advance, so a model whose learning process and recall process are not separated is needed. As such a model, Grossberg and Carpenter proposed the ART (Adaptive Resonance Theory) [5]. However, the ART is based on local representation, and therefore it is not robust against damaged neurons in the Map Layer. In the field of associative memories, several models have been proposed [6 - 8]. Since these models are based on distributed representation, they are robust against damaged neurons. However, their storage capacities are small because their learning algorithms are based on Hebbian learning.

On the other hand, the Kohonen Feature Map (KFM) associative memory [9] has been proposed. Although the KFM associative memory is based on local representation, as is the ART [5], it can learn new patterns successively [10], and its storage capacity is larger than that of the models in refs. [6 - 8]. It can deal with auto- and hetero-associations and with associations among plural sequential patterns including common terms [11, 12]. Moreover, the KFM associative memory with area representation [13] has been proposed. In that model, the area representation [14] was introduced into the KFM associative memory, giving it robustness against damaged neurons. However, it cannot deal with one-to-many associations or with associations of analog patterns. As a model which can deal with analog patterns and one-to-many associations, the Kohonen Feature Map Associative Memory with Refractoriness based on Area Representation [15] has been proposed. In that model, one-to-many associations are realized by the refractoriness of neurons. Moreover, by improving the calculation of the internal states of the neurons in the Map Layer, it is sufficiently robust against damaged neurons when analog patterns are memorized. However, none of these models can realize probabilistic association for a training set including one-to-many relations.

Figure 1.

Structure of conventional KFMPAM-WD.

As a model which can realize probabilistic association for a training set including one-to-many relations, the Kohonen Feature Map Probabilistic Associative Memory based on Weights Distribution (KFMPAM-WD) [16] has been proposed. However, in that model the weights are updated only within the area corresponding to the input pattern, so learning that takes the neighborhood into consideration is not carried out.

In this paper, we propose an Improved Kohonen Feature Map Probabilistic Associative Memory based on Weights Distribution (IKFMPAM-WD). This model is based on the conventional Kohonen Feature Map Probabilistic Associative Memory based on Weights Distribution [16]. The proposed model can realize probabilistic association for a training set including one-to-many relations. Moreover, it is sufficiently robust against noisy input and damaged neurons, and learning that takes the neighborhood into consideration is realized.

2. KFM Probabilistic Associative Memory based on Weights Distribution

Here, we explain the conventional Kohonen Feature Map Probabilistic Associative Memory based on Weights Distribution (KFMPAM-WD) [16].

2.1. Structure

Figure 1 shows the structure of the conventional KFMPAM-WD. As shown in Fig. 1, this model has two layers: (1) the Input/Output Layer and (2) the Map Layer, and the Input/Output Layer is divided into some parts.

2.2. Learning process

In the learning algorithm of the conventional KFMPAM-WD, the connection weights are learned as follows:

  1. The initial values of weights are chosen randomly.

  2. The Euclidean distance between the learning vector X(p) and the connection weight vector Wi, d(X(p), Wi), is calculated.

  3. If d(X(p), Wi) > θ^t is satisfied for all neurons, the input pattern X(p) is regarded as an unknown pattern. If the input pattern is regarded as a known pattern, go to (8).

  4. The neuron r which is the center of the learning area is determined as follows:

    $$r = \arg\min_{i:\; D_{iz} + D_{zi} < d_{iz}\ (\forall z \in F)} d\!\left(\boldsymbol{X}^{(p)}, \boldsymbol{W}_i\right) \tag{1}$$

    where F is the set of the neurons whose connection weights are fixed, and d_iz is the distance between the neuron i and the neuron z whose connection weights are fixed. In Eq.(1), D_ij is the radius of the ellipse area whose center is the neuron i in the direction of the neuron j, and is given by

    $$D_{ij} = \begin{cases} a_i, & (d_{ij}^{y} = 0) \\ b_i, & (d_{ij}^{x} = 0) \\ \sqrt{\dfrac{a_i^2 b_i^2 \left(m_{ij}^2 + 1\right)}{b_i^2 + m_{ij}^2 a_i^2}}, & (\text{otherwise}) \end{cases} \tag{2}$$

    where a_i is the long radius of the ellipse area whose center is the neuron i and b_i is its short radius (d_ij^x and d_ij^y denote the x- and y-components of the distance between the neurons i and j). In the KFMPAM-WD, a_i and b_i can be set for each training pattern. m_ij is the slope of the line through the neurons i and j. Equation (1) selects, among the neurons whose areas do not overlap with the areas corresponding to patterns which have already been trained, the neuron whose connection weights have the minimum Euclidean distance to the learning vector. In Eq.(1), a_i and b_i are used as the size of the area for the learning vector.

  5. If d(X(p), Wr) > θ^t is satisfied, the connection weights of the neurons in the ellipse whose center is the neuron r are updated as follows (see the sketch after this list):

    $$\boldsymbol{W}_i(t+1) = \begin{cases} \boldsymbol{W}_i(t) + \alpha(t)\left(\boldsymbol{X}^{(p)} - \boldsymbol{W}_i(t)\right), & (d_{ri} \le D_{ri}) \\ \boldsymbol{W}_i(t), & (\text{otherwise}) \end{cases} \tag{3}$$

    where α(t) is the learning rate and is given by

    $$\alpha(t) = \frac{\alpha_0 \left(T - t\right)}{T}. \tag{4}$$

    Here, α_0 is the initial value of α(t) and T is the upper limit of the learning iterations.

  6. (5) is iterated until d(X(p), Wr) ≤ θ^t is satisfied.

  7. The connection weights Wr of the neuron r are fixed.

  8. (2)–(7) are iterated when a new pattern set is given.
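To make steps (4) and (5) concrete, the following is a minimal Python sketch of the elliptical area radius of Eq. (2) and the area-restricted weight update of Eqs. (3) and (4). It assumes the Map Layer neurons lie on a two-dimensional grid whose coordinates are stored in the array `pos`; the helper names (`ellipse_radius`, `update_area`) are illustrative and not taken from the chapter.

```python
import numpy as np

def ellipse_radius(pos_i, pos_j, a_i, b_i):
    """Radius D_ij of the ellipse area centred at neuron i toward neuron j (Eq. (2))."""
    dx = pos_j[0] - pos_i[0]
    dy = pos_j[1] - pos_i[1]
    if dy == 0:                      # direction along the long axis
        return a_i
    if dx == 0:                      # direction along the short axis
        return b_i
    m = dy / dx                      # slope m_ij of the line through neurons i and j
    return np.sqrt(a_i**2 * b_i**2 * (m**2 + 1) / (b_i**2 + m**2 * a_i**2))

def update_area(W, pos, r, X_p, a_r, b_r, t, T, alpha0):
    """One iteration of step (5): update weights inside the ellipse around neuron r (Eqs. (3), (4))."""
    alpha = alpha0 * (T - t) / T                        # decaying learning rate, Eq. (4)
    for i in range(W.shape[0]):
        d_ri = np.linalg.norm(pos[r] - pos[i])          # map-space distance between r and i
        if d_ri <= ellipse_radius(pos[r], pos[i], a_r, b_r):
            W[i] += alpha * (X_p - W[i])                # move weights toward the learning vector
    return W
```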

2.3. Recall process

In the recall process of the KFMPAM-WD, when the pattern X is given to the Input/Output Layer, the output of the neuron i in the Map Layer, x_i^map, is calculated by

$$x_i^{map} = \begin{cases} 1, & (i = r) \\ 0, & (\text{otherwise}) \end{cases} \tag{5}$$

where r is selected randomly from the neurons which satisfy

$$\frac{1}{N^{in}} \sum_{k \in C} g\!\left(X_k - W_{ik}\right) > \theta^{map} \tag{6}$$

where N^in is the number of neurons in the part of the Input/Output Layer to which the pattern is given, C is the set of those neurons, θ^map is the threshold of the neurons in the Map Layer, and g(·) is given by

$$g(b) = \begin{cases} 1, & (|b| < \theta_d) \\ 0, & (\text{otherwise}). \end{cases} \tag{7}$$

In the KFMPAM-WD, one of the neurons whose connection weights are similar to the input pattern is selected randomly as the winner neuron. Therefore, probabilistic association can be realized based on the weights distribution.
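The probabilistic winner selection of Eqs. (5)-(7) amounts to thresholding a component-wise similarity and drawing the winner uniformly at random from the qualifying neurons. The sketch below is a minimal illustration assuming the whole input vector is presented (the chapter restricts the sum in Eq. (6) to the presented part C of the Input/Output Layer); `W` is assumed to hold one weight vector per Map Layer neuron and the function name is illustrative.

```python
import numpy as np

def select_winner(X, W, theta_map, theta_d, rng=None):
    """Choose the winner r at random among sufficiently similar Map Layer neurons (Eqs. (6), (7))."""
    rng = rng or np.random.default_rng()
    g = (np.abs(X - W) < theta_d).astype(float)   # Eq. (7), applied component-wise; shape (n_map, N_in)
    similarity = g.sum(axis=1) / X.shape[0]       # left-hand side of Eq. (6)
    candidates = np.where(similarity > theta_map)[0]
    if candidates.size == 0:
        return None                               # no neuron exceeds the threshold
    return rng.choice(candidates)                 # random choice yields the probabilistic association
```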

When the binary pattern X is given to the Input/Output Layer, the output of the neuron k in the Input/Output Layer, x_k^io, is given by

$$x_k^{io} = \begin{cases} 1, & (W_{rk} \ge \theta_b^{io}) \\ 0, & (\text{otherwise}) \end{cases} \tag{8}$$

where θ_b^io is the threshold of the neurons in the Input/Output Layer.

When the analog pattern X is given to the Input/Output Layer, the output of the neuron k in the Input/Output Layer, x_k^io, is given by

$$x_k^{io} = W_{rk}. \tag{9}$$
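Once the winner r is fixed, Eqs. (8) and (9) simply read the output from its connection weights. A minimal sketch follows, with `W_r` denoting the winner's weight vector; the function name and default threshold argument are illustrative.

```python
import numpy as np

def recall_output(W_r, binary=True, theta_b_io=0.5):
    """Output of the Input/Output Layer from the winner's weights W_r (Eqs. (8), (9))."""
    if binary:
        return (W_r >= theta_b_io).astype(int)    # Eq. (8): threshold each component
    return np.array(W_r, copy=True)               # Eq. (9): analog output is the weight vector itself
```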

3. Improved KFM Probabilistic Associative Memory based on Weights Distribution

Here, we explain the proposed Improved Kohonen Feature Map Probabilistic Associative Memory based on Weights Distribution (IKFMPAM-WD). The proposed model is based on the conventional Kohonen Feature Map Probabilistic Associative Memory based on Weights Distribution (KFMPAM-WD) [16] described in Section 2.

3.1. Structure

Figure 2 shows the structure of the proposed IKFMPAM-WD. As shown in Fig. 2, the proposed model has two layers: (1) the Input/Output Layer and (2) the Map Layer, and the Input/Output Layer is divided into some parts, as in the conventional KFMPAM-WD.

3.2. Learning process

In the learning algorithm of the proposed IKFMPAM-WD, the connection weights are learned as follows:

  1. The initial values of weights are chosen randomly.

  2. The Euclidean distance between the learning vector X(p) and the connection weight vector Wi, d(X(p), Wi), is calculated.

  3. If d(X(p), Wi) > θ^t is satisfied for all neurons, the input pattern X(p) is regarded as an unknown pattern. If the input pattern is regarded as a known pattern, go to (8).

  4. The neuron r which is the center of the learning area is determined by Eq.(1). Equation (1) selects, among the neurons whose areas do not overlap with the areas corresponding to patterns which have already been trained, the neuron whose connection weights have the minimum Euclidean distance to the learning vector. In Eq.(1), a_i and b_i are used as the size of the area for the learning vector.

  5. If d(X(p), Wr) > θ^t is satisfied, the connection weights of the neurons in the ellipse whose center is the neuron r are updated as follows (see the sketch after this list):

    $$\boldsymbol{W}_i(t+1) = \begin{cases} \boldsymbol{X}^{(p)}, & \left(\theta_1^{learn} \le H(\overline{d_{ri}})\right) \\ \boldsymbol{W}_i(t) + H(\overline{d_{ri}})\left(\boldsymbol{X}^{(p)} - \boldsymbol{W}_i(t)\right), & \left(\theta_2^{learn} \le H(\overline{d_{ri}}) < \theta_1^{learn}\ \text{and}\ H(\overline{d_{ii^*}}) < \theta_1^{learn}\right) \\ \boldsymbol{W}_i(t), & (\text{otherwise}) \end{cases} \tag{10}$$

    where θ_1^learn and θ_2^learn are thresholds. H(d̄_ri) and H(d̄_ii*) are given by Eq.(11), and they are semi-fixed functions. In particular, H(d̄_ri) behaves as the neighborhood function. Here, i* denotes the weight-fixed neuron nearest to the neuron i.

    $$H(\overline{d_{ij}}) = \frac{1}{1 + \exp\!\left(\dfrac{\overline{d_{ij}} - D}{\varepsilon}\right)} \tag{11}$$

    where d̄_ij is the distance between the neurons i and j normalized by D_ij, the radius of the ellipse area whose center is the neuron i in the direction of the neuron j, and is given by

    $$\overline{d_{ij}} = \frac{d_{ij}}{D_{ij}}. \tag{12}$$

    In Eq.(11), D (1 ≤ D) is the constant which decides the neighborhood area size and ε is the steepness parameter. If there is no weight-fixed neuron,

    $$H(\overline{d_{ii^*}}) = 0 \tag{13}$$

    is used.

  6. (5) is iterated until d(X(p), Wr) ≤ θ^t is satisfied.

  7. The connection weights Wr of the neuron r are fixed.

  8. (2)–(7) are iterated when a new pattern set is given.
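As a minimal sketch of the update in step (5), the code below implements the neighborhood function of Eqs. (11)-(13) and the three-case update of Eq. (10). It reuses the hypothetical `ellipse_radius` helper from the earlier sketch; the choice of area size used to normalize the distance to the nearest weight-fixed neuron i* is an assumption (the chapter does not spell it out), so this should be read as an illustration rather than the authors' implementation.

```python
import numpy as np

def H(d_bar, D=3.0, eps=0.91):
    """Semi-fixed neighborhood function of Eq. (11)."""
    return 1.0 / (1.0 + np.exp((d_bar - D) / eps))

def update_area_ikfmpam(W, pos, r, X_p, a_r, b_r, fixed, areas,
                        D=3.0, eps=0.91, th1=0.9, th2=0.1):
    """One iteration of step (5) of the proposed model (Eq. (10)).

    fixed : list of weight-fixed neuron indices; areas : dict {index: (a, b)}.
    """
    for i in range(W.shape[0]):
        # normalized distance from the area center r to neuron i (Eq. (12))
        d_ri_bar = np.linalg.norm(pos[r] - pos[i]) / ellipse_radius(pos[r], pos[i], a_r, b_r)
        h_ri = H(d_ri_bar, D, eps)
        if fixed:
            # nearest weight-fixed neuron i* and its normalized distance (Eq. (12))
            i_star = min(fixed, key=lambda z: np.linalg.norm(pos[i] - pos[z]))
            a_s, b_s = areas[i_star]               # assumed: use the fixed neuron's area size
            d_ii_bar = (np.linalg.norm(pos[i] - pos[i_star])
                        / ellipse_radius(pos[i_star], pos[i], a_s, b_s))
            h_ii = H(d_ii_bar, D, eps)
        else:
            h_ii = 0.0                             # Eq. (13): no weight-fixed neuron yet
        if h_ri >= th1:
            W[i] = X_p.copy()                      # inside the area: copy the learning vector
        elif th2 <= h_ri < th1 and h_ii < th1:
            W[i] += h_ri * (X_p - W[i])            # neighborhood: partial update weighted by H
    return W
```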

Figure 2.

Structure of proposed IKFMPAM-WD.

3.3. Recall process

The recall process of the proposed IKFMPAM-WD is the same as that of the conventional KFMPAM-WD described in Section 2.3.

4. Computer experiment results

Here, we show the computer experiment results to demonstrate the effectiveness of the proposed IKFMPAM-WD.

4.1. Experimental conditions

Table 1 shows the experimental conditions used in the experiments of Sections 4.2 to 4.6.

4.2. Association results

4.2.1. Binary patterns

In this experiment, the binary patterns including one-to-many relations shown in Fig. 3 were memorized in a network composed of 800 neurons in the Input/Output Layer and 400 neurons in the Map Layer. Figure 4 shows a part of the association result when “crow” was given to the Input/Output Layer. As shown in Fig. 4, when “crow” was given to the network, “mouse” (t=1), “monkey” (t=2) and “lion” (t=4) were recalled. Figure 5 shows a part of the association result when “duck” was given to the Input/Output Layer. In this case, “dog” (t=251), “cat” (t=252) and “penguin” (t=255) were recalled. From these results, we confirmed that the proposed model can recall binary patterns including one-to-many relations.

Parameters for Learning
  Threshold for Learning                                           θ^t         10^-4
  Neighborhood Area Size                                           D           3
  Steepness Parameter in Neighborhood Function                     ε           0.91
  Threshold of Neighborhood Function (1)                           θ_1^learn   0.9
  Threshold of Neighborhood Function (2)                           θ_2^learn   0.1
Parameters for Recall (Common)
  Threshold of Neurons in Map Layer                                θ^map       0.75
  Threshold of Difference between Weight Vector and Input Vector   θ_d         0.004
Parameter for Recall (Binary)
  Threshold of Neurons in Input/Output Layer                       θ_b^io      0.5

Table 1.

Experimental Conditions.
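For convenience, the conditions of Table 1 can be gathered into a small Python dictionary when reproducing the experiments; the key names below are illustrative only and do not appear in the chapter.

```python
# Experimental conditions of Table 1 (key names are illustrative, not from the chapter)
PARAMS = {
    "theta_t": 1e-4,        # threshold for learning
    "D": 3,                 # neighborhood area size
    "eps": 0.91,            # steepness parameter of the neighborhood function
    "theta1_learn": 0.9,    # threshold (1) of the neighborhood function
    "theta2_learn": 0.1,    # threshold (2) of the neighborhood function
    "theta_map": 0.75,      # threshold of Map Layer neurons (recall, common)
    "theta_d": 0.004,       # threshold on the weight/input component difference (recall, common)
    "theta_b_io": 0.5,      # threshold of Input/Output Layer neurons (recall, binary)
}
```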

Figure 3.

Training Patterns including One-to-Many Relations (Binary Pattern).

Figure 4.

One-to-Many Associations for Binary Patterns (When “crow” was Given).

Figure 5.

One-to-Many Associations for Binary Patterns (When “duck” was Given).

Figure 6 shows the Map Layer after the pattern pairs shown in Fig. 3 were memorized. In Fig. 6, red neurons show the center neuron of each area, blue neurons show the neurons in the areas for the patterns including “crow”, and green neurons show the neurons in the areas for the patterns including “duck”. As shown in Fig. 6, the proposed model can learn each training pattern with an area of a different size. Moreover, since the connection weights are updated not only in the area but also in its neighborhood in the proposed model, the areas corresponding to the pattern pairs including “crow”/“duck” are arranged near each other.

Learning Pattern        Long Radius a_i    Short Radius b_i
“crow”–“lion”           2.5                1.5
“crow”–“monkey”         3.5                2.0
“crow”–“mouse”          4.0                2.5
“duck”–“penguin”        2.5                1.5
“duck”–“dog”            3.5                2.0
“duck”–“cat”            4.0                2.5

Table 2.

Area Size corresponding to Patterns in Fig. 3.

Figure 6.

Area Representation for Learning Pattern in Fig. 3.

Input Pattern    Output Pattern    Area Size    Recall Times
crow             lion              11 (1.0)     43 (1.0)
                 monkey            23 (2.1)     87 (2.0)
                 mouse             33 (3.0)     120 (2.8)
duck             penguin           11 (1.0)     39 (1.0)
                 dog               23 (2.1)     79 (2.0)
                 cat               33 (3.0)     132 (3.4)

Table 3.

Recall Times for Binary Pattern corresponding to “crow” and “duck”.

Table 3 shows the recall times of each pattern in the trials of Fig. 4 (t=1–250) and Fig. 5 (t=251–500). In this table, normalized values are also shown in parentheses. From these results, we confirmed that the proposed model can realize probabilistic associations based on the weights distribution.

4.2.2. Analog patterns

In this experiment, the analog patterns including one-to-many relations shown in Fig. 7 were memorized in a network composed of 800 neurons in the Input/Output Layer and 400 neurons in the Map Layer. Figure 8 shows a part of the association result when “bear” was given to the Input/Output Layer. As shown in Fig. 8, when “bear” was given to the network, “lion” (t=1), “raccoon dog” (t=2) and “penguin” (t=3) were recalled. Figure 9 shows a part of the association result when “mouse” was given to the Input/Output Layer. In this case, “monkey” (t=251), “hen” (t=252) and “chick” (t=253) were recalled. From these results, we confirmed that the proposed model can recall analog patterns including one-to-many relations.

Figure 7.

Training Patterns including One-to-Many Relations (Analog Pattern).

Figure 8.

One-to-Many Associations for Analog Patterns (When “bear” was Given).

Figure 9.

One-to-Many Associations for Analog Patterns (When “mouse” was Given).

Learning Pattern          Long Radius a_i    Short Radius b_i
“bear”–“lion”             2.5                1.5
“bear”–“raccoon dog”      3.5                2.0
“bear”–“penguin”          4.0                2.5
“mouse”–“chick”           2.5                1.5
“mouse”–“hen”             3.5                2.0
“mouse”–“monkey”          4.0                2.5

Table 4.

Area Size corresponding to Patterns in Fig. 7.

Figure 10.

Area Representation for Learning Pattern in Fig. 7.

Input Pattern    Output Pattern    Area Size    Recall Times
bear             lion              11 (1.0)     40 (1.0)
                 raccoon dog       23 (2.1)     90 (2.3)
                 penguin           33 (3.0)     120 (3.0)
mouse            chick             11 (1.0)     38 (1.0)
                 hen               23 (2.1)     94 (2.5)
                 monkey            33 (3.0)     118 (3.1)

Table 5.

Recall Times for Analog Pattern corresponding to “bear” and “mouse”.

Figure 10 shows the Map Layer after the pattern pairs shown in Fig. 7 were memorized. In Fig. 10, red neurons show the center neuron of each area, blue neurons show the neurons in the areas for the patterns including “bear”, and green neurons show the neurons in the areas for the patterns including “mouse”. As shown in Fig. 10, the proposed model can learn each training pattern with an area of a different size.

Table 5 shows the recall times of each pattern in the trials of Fig. 8 (t=1–250) and Fig. 9 (t=251–500). In this table, normalized values are also shown in parentheses. From these results, we confirmed that the proposed model can realize probabilistic associations based on the weights distribution.

Figure 11.

Storage Capacity of Proposed Model (Binary Patterns).

Figure 12.

Storage Capacity of Proposed Model (Analog Patterns).

4.3. Storage capacity

Here, we examined the storage capacity of the proposed model. Figures 11 and 12 show the storage capacity of the proposed model. In this experiment, we used networks composed of 800 neurons in the Input/Output Layer and 400 or 900 neurons in the Map Layer, and 1-to-P (P=2,3,4) random pattern pairs were memorized, each as an area with a_i=2.5 and b_i=1.5. Figures 11 and 12 show the average of 100 trials, and the storage capacities of the conventional model [16] are also shown for reference in Figs. 13 and 14. From these results, we can confirm that the storage capacity of the proposed model is almost the same as that of the conventional model [16]. As shown in Figs. 11 and 12, the storage capacity of the proposed model does not depend on whether the patterns are binary or analog, nor on P in the one-to-P relations; it depends on the number of neurons in the Map Layer.

4.4. Robustness for noisy input

4.4.1. Association result for noisy input

Figure 15 shows a part of the association result of the proposed model when the pattern “cat” with 20% noise was given during t=1–500. Figure 16 shows a part of the association result of the proposed model when the pattern “crow” with 20% noise was given during t=501–1000. As shown in these figures, the proposed model can recall correct patterns even when a noisy input is given.
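The chapter does not spell out the noise model; a common choice for binary patterns, and one plausible reading of “20% noise”, is to flip a randomly chosen 20% of the components. A minimal sketch under that assumption (function name illustrative):

```python
import numpy as np

def add_binary_noise(pattern, ratio=0.2, rng=None):
    """Flip `ratio` of the components of a binary (0/1) pattern (assumed noise model)."""
    rng = rng or np.random.default_rng()
    noisy = pattern.copy()
    idx = rng.choice(pattern.size, size=int(ratio * pattern.size), replace=False)
    noisy[idx] = 1 - noisy[idx]      # flip the selected bits
    return noisy
```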

Figure 13.

Storage Capacity of Conventional Model [16] (Binary Patterns).

Figure 14.

Storage Capacity of Conventional Model [16] (Analog Patterns).

Figure 15.

Association Result for Noisy Input (When “crow” was Given).

Figure 16.

Association Result for Noisy Input (When “duck” was Given).

Figure 17.

Robustness for Noisy Input (Binary Patterns).

Figure 18.

Robustness for Noisy Input (Analog Patterns).

4.4.2. Robustness for noisy input

Figures 17 and 18 show the robustness of the proposed model against noisy input. In this experiment, 10 random patterns in one-to-one relations were memorized in a network composed of 800 neurons in the Input/Output Layer and 900 neurons in the Map Layer. Figures 17 and 18 show the average of 100 trials. As shown in these figures, the proposed model is robust against noisy input, similarly to the conventional model [16].

4.5. Robustness for damaged neurons

4.5.1. Association result when some neurons in map layer are damaged

Figure 19 shows a part of the association result of the proposed model when the pattern “bear” was given during t=1–500. Figure 20 shows a part of the association result of the proposed model when the pattern “mouse” was given during t=501–1000. In these experiments, a network in which 20% of the neurons in the Map Layer were damaged was used. As shown in these figures, the proposed model can recall correct patterns even when some neurons in the Map Layer are damaged.

4.5.2. Robustness for damaged neurons

Figures 21 and 22 show the robustness of the proposed model when the winner neurons are damaged. In this experiment, 1 to 10 random patterns in one-to-one relations were memorized in a network composed of 800 neurons in the Input/Output Layer and 900 neurons in the Map Layer. Figures 21 and 22 show the average of 100 trials. As shown in these figures, the proposed model is robust when the winner neurons are damaged, similarly to the conventional model [16].

Figure 19.

Association Result for Damaged Neurons (When “bear” was Given).

Figure 20.

Association Result for Damaged Neurons (When “mouse” was Given).

Figure 21.

Robustness of Damaged Winner Neurons (Binary Patterns).

Figure 22.

Robustness of Damaged Winner Neurons (Analog Patterns).

Figure 23.

Robustness for Damaged Neurons (Binary Patterns).

Figure 24.

Robustness for Damaged Neurons (Analog Patterns).

Figures 23 and 24 show the robustness of the proposed model against damaged neurons. In this experiment, 10 random patterns in one-to-one relations were memorized in a network composed of 800 neurons in the Input/Output Layer and 900 neurons in the Map Layer. Figures 23 and 24 show the average of 100 trials. As shown in these figures, the proposed model is robust against damaged neurons, similarly to the conventional model [16].
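One plausible way to reproduce the damaged-neuron experiments is to exclude the damaged Map Layer neurons from the winner selection of Eq. (6); this is an assumption about the simulation, not a procedure stated in the chapter. A minimal sketch reusing the hypothetical winner-selection logic from Section 2.3:

```python
import numpy as np

def select_winner_with_damage(X, W, theta_map, theta_d, damaged, rng=None):
    """Winner selection that ignores damaged Map Layer neurons (assumed damage model)."""
    rng = rng or np.random.default_rng()
    g = (np.abs(X - W) < theta_d).astype(float)   # Eq. (7), component-wise
    similarity = g.sum(axis=1) / X.shape[0]       # left-hand side of Eq. (6)
    similarity[list(damaged)] = 0.0               # damaged neurons can never become the winner
    candidates = np.where(similarity > theta_map)[0]
    return rng.choice(candidates) if candidates.size else None
```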

4.6. Learning speed

Here, we examined the learning speed of the proposed model. In this experiment, 10 random patterns were memorized in a network composed of 800 neurons in the Input/Output Layer and 900 neurons in the Map Layer. Table 6 shows the learning time of the proposed model and of the conventional model [16]. These results are averages of 100 trials on a personal computer (Intel Pentium 4 (3.2 GHz), FreeBSD 4.11, gcc 2.95.3). As shown in Table 6, the learning time of the proposed model is shorter than that of the conventional model.

5. Conclusions

In this paper, we have proposed the Improved Kohonen Feature Map Probabilistic Associative Memory based on Weights Distribution. This model is based on the conventional Kohonen Feature Map Probabilistic Associative Memory based on Weights Distribution. The proposed model can realize probabilistic association for a training set including one-to-many relations. Moreover, it is sufficiently robust against noisy input and damaged neurons. We carried out a series of computer experiments and confirmed the effectiveness of the proposed model.

Model                                          Learning Time (seconds)
Proposed Model (Binary Patterns)               0.87
Proposed Model (Analog Patterns)               0.92
Conventional Model [16] (Binary Patterns)      1.01
Conventional Model [16] (Analog Patterns)      1.34

Table 6.

Learning Speed.

© 2013 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
