
"Image Processing", book edited by Yung-Sheng Chen, ISBN 978-953-307-026-1, published December 1, 2009 under CC BY-NC-SA 3.0 license. © The Author(s).

Chapter 20

Analog-Digital Self-Learning Fuzzy Spiking Neural Network in Image Processing Problems

By Artem Dolotov and Yevgeniy Bodyanskiy
DOI: 10.5772/7061


Analog-Digital Self-Learning Fuzzy Spiking Neural Network in Image Processing Problems

Artem Dolotov1 and Yevgeniy Bodyanskiy1

1. Introduction

Computational intelligence provides a variety of means that can perform complex image processing in a rather effective way. Among them, self-learning systems, especially self-learning artificial neural networks (self-organizing maps, ART neural networks, ‘Brain-State-in-a-Box’ neuromodels, etc.) (Haykin, 1999) and fuzzy clustering systems (fuzzy c-means, the algorithms of Gustafson-Kessel, Yager-Filev, Klawonn-Hoeppner, etc.) (Bezdek et al., 2005, Sato-Ilic & Jain, 2006), occupy a significant place, as they make it possible to solve a data processing problem in the absence of a priori knowledge of it.

While there are many artificial neural networks that can be successfully used in image processing tasks, the most prominent of them are networks of the new, third generation, commonly known as spiking neural networks (Maass & Bishop, 1998, Gerstner & Kistler, 2002). On the one hand, spiking neural networks are biologically more plausible than neural networks of the previous generations, which is of fundamental importance for computational intelligence from a theoretical point of view. On the other hand, networks of spiking neurons appeared to be computationally more powerful than conventional neural networks (Maass, 1997b). In addition, complex data processing via artificial neural networks of the second generation is time consuming due to multi-epoch learning; instead, spiking neural networks can perform the same processing tasks much faster as they require only a few learning epochs (Bohte et al., 2002, Berredo, 2005, Meftah et al., 2008, Lindblad & Kinser, 2005). All these facts are causing considerable interest in networks of spiking neurons as a powerful computational intelligence tool for image processing.

Although spiking neural networks are becoming a popular computational intelligence tool for solving various technical problems, in most engineering research works their architecture and functioning are treated in terms of neurophysiology rather than in terms of the apparatus of any technical science. As yet, no technically plausible description of spiking neuron functioning has been provided.

In contrast to artificial neural networks, fuzzy logic systems are capable of performing accurate and efficient data processing under a priori and current uncertainty, particularly if the classes to be separated overlap one another. Integrating artificial neural networks and fuzzy systems makes it possible to combine the capabilities of both in a synergetic way (Jang et al., 1997), thus producing hybrid intelligent systems that achieve high performance and reliability in solving real-life problems, particularly in image processing. Obviously, designing hybrid intelligent systems based on the new generation of artificial neural networks attracts both practical and theoretical interest.

In the present chapter of the book, an analog-digital self-learning fuzzy spiking neural network is proposed; it belongs to a new type of hybrid computational intelligence systems that combine the computational capabilities of spiking neurons with the tolerance of fuzzy systems for uncertainty. It is demonstrated that both the fuzzy probabilistic and the fuzzy possibilistic approaches can be implemented on a spiking neural network basis. The spiking neural network is treated in terms of the well-known and widely used apparatus of classical automatic control theory based on the Laplace transform. It is shown that a spiking neuron synapse is nothing other than a second-order critically damped response unit, and a spiking neuron soma is a kind of threshold detection system. An improved unsupervised learning algorithm for the proposed neural network based on the ‘Winner-Takes-More’ rule is introduced. Capabilities of the neural network in solving image processing problems are investigated. From a theoretical point of view, the proposed neural network is another step in the evolution of artificial neural network theory as a part of the computational intelligence paradigm.

2. Formal models of spiking neurons

The constructive features of a biological neuron that are significant for the discussion that follows are sketched in Fig. 1 (Dayan & Abbott, 2001, Scott, 2002). As illustrated, a neuron includes synapses, a dendritic tree, a soma, and an axon with its terminals. A synapse connects the axonal terminals of one neuron with the dendrites of another neuron. The soma processes incoming information and transmits it through the axon and axonal terminals to the synapses of the subsequent neurons. Neurons communicate with one another by nerve pulses (action potentials, spikes).

Figure 1. Biological neuron

Neuron behaviour can be briefly described in the following way (Fig. 2). A spike arriving at a synapse from a presynaptic neuron generates a postsynaptic potential (either excitatory or inhibitory, depending on the synapse type). The postsynaptic potential reaches the neuron soma through a dendrite and either increases the membrane potential or decreases it. The neuron soma accumulates all postsynaptic potentials incoming from different synapses. When the membrane potential exceeds the firing threshold, the neuron fires and emits an outgoing spike that moves through the axon to the postsynaptic neurons. Once the neuron has fired, its soma produces a spike after-potential, namely, the membrane potential drops steeply below the rest potential and then ascends gradually back to the rest potential. The period when the membrane potential is below the rest potential is called the refractory period. Within this period, the appearance of another spike is unlikely. If the firing threshold is not reached after the arrival of a postsynaptic potential, the membrane potential gradually descends to the rest potential until another postsynaptic potential arrives.

Figure 2. Biological neuron behaviour: a) Dynamics of membrane potential $u(t)$ ($\theta$ is the firing threshold, $u_{rest}(t)$ is the rest potential); b) Incoming spikes; c) Outgoing spike

The traits of any generation of artificial neural networks depend upon the formal model of the biological neuron that is considered within the scope of that generation. Any formal model treats the biological neuron on a certain level of abstraction. It takes into account some details of biological neuron behaviour and features, but disregards other ones. On the one hand, the prescribed complexity level of the formal model sets the computational and performance properties of the artificial neural networks originated by that model. On the other hand, the chosen level of abstraction of the formal model defines how realistic the artificial neural networks are.

Both the first and the second generations of artificial neural networks (Maass, 1997b) rest on the rate model that neglects the temporal properties of the biological neuron (Maass & Bishop, 1998). One of the essential elements for both generations is the neuron activation function. The rate model based on the threshold activation function (McCulloch & Pitts, 1943) gave birth to the first generation of artificial neural networks. Though such networks were capable of performing some elementary logic functions, their computational capabilities were very limited (Minsky & Papert, 1969). Replacing the threshold activation function with a continuous one resulted in the appearance of the second generation, which turned out to be significantly more powerful than the networks of the previous generation (Cybenko, 1989, Hornik et al., 1989). Nevertheless, neurons of the second generation are even further from real biological neurons than the first-generation neurons since they ignore the soma firing mechanism totally (Gerstner & Kistler, 2002). This gap is avoided in threshold-fire models (Maass & Bishop, 1998). One of such models, namely, the leaky integrate-and-fire model (Maass & Bishop, 1998, Gerstner & Kistler, 2002), is the basis for the third generation of artificial neural networks.

The leaky integrate-and-fire model is one of the simplest and best-known formal models of a biological neuron used in different areas of neuroscience. It captures the neuron behaviour described above, except for the neuron refractoriness. The model considers that on firing, the membrane potential drops to the rest potential, not below it.

The spike-response model (Maass & Bishop, 1998, Gerstner & Kistler, 2002), another threshold-fire model, captures biological neuron behaviour more accurately. Besides the accumulation of postsynaptic potentials and spike firing, it also models the neuron refractoriness. This model will be used in the subsequent discussion.

It makes sense to mention here that computational neuroscience and computational intelligence sometimes understand spiking neurons in different ways. A spiking neuron in computational neuroscience is any model of a biological neuron that transmits and processes information by spike trains. Within the scope of computational intelligence, a spiking neuron is usually a leaky integrate-and-fire model. This results from the fact that the self-learning properties of spiking neurons are caused by the capability of any threshold-fire model to detect coincidences in the input signal. Since the leaky integrate-and-fire model is the simplest one among the threshold-fire models, there is no sense in using any more complicated ones. Obviously, if a more complicated model reveals some particular properties that are useful for solving technical problems, the concept of spiking neurons will be extended in computational intelligence.

3. Self-learning spiking neural network

3.1. Introduction

The ability of spiking neurons to respond to incoming signals selectively was originally discovered by J. Hopfield in 1995 (Hopfield, 1995). He found that spiking neuron soma behaviour was similar to a radial basis function: the higher the degree of coincidence of incoming spikes, the earlier the neuron fired; if the degree was sufficiently low, the neuron did not fire at all. And spiking neuron synapses appeared to be acting as a spike pattern storing unit: one was able to get a spiking neuron to fire in response to a certain spike pattern by adjusting the synaptic time delays in such a way that they evened out (in a temporal sense) the incoming signal and made it reach the neuron soma simultaneously. The spike pattern encoded in the synaptic time delays of a neuron is called the center of the spiking neuron in what follows. Here it is worth noting that the synchronization phenomenon is of primary importance in nature (Pikovsky et al., 2001), particularly in brain functioning (Malsburg, 1994).

The discovered capabilities of spiking neurons provided the basis for constructing self-learning networks of spiking neurons. The original architecture of a self-learning spiking neural network and its learning algorithm, namely, a temporal Hebbian rule, were introduced in (Natschlaeger & Ruf, 1998). The proposed self-learning network was able to separate clusters as long as their number was not greater than the dimensionality of the input signal. If the number of clusters exceeded the number of input signal dimensions, the spiking neural network performance decreased. This drawback was overcome by using population coding of the incoming signal based on pools of receptive neurons in the first hidden layer of the network (Bohte et al., 2002). Such a spiking neural network was shown to be considerably powerful and significantly fast in solving real-life problems. Henceforward we will use this network as a basis for its further improvements and for designing hybrid architectures.

3.2. Architecture

The self-learning spiking neural network architecture is shown in Fig. 3. As illustrated, it is a heterogeneous, two-layered feed-forward neural network with lateral connections in the second hidden layer.

The first hidden layer consists of pools of receptive neurons and performs transformation of the input signal. It encodes an $(n\times1)$-dimensional sampled input pattern $x(k)$ (here $n$ is the dimensionality of the input space, $k=\overline{1,N}$ is the pattern number, $N$ is the number of patterns in the incoming set) into an $(hn\times1)$-dimensional vector of input spikes $t^{[0]}(x(k))$ (here $h$ is the number of receptive neurons in a pool), where each spike is defined by its firing time.

Spiking neurons form the second hidden layer of the network. They are connected to the receptive neurons through multiple synapses, where the incoming vector of spikes is transformed into postsynaptic potentials. The number of spiking neurons in the second hidden layer is set to be equal to the number of clusters to be detected. Each spiking neuron corresponds to a certain cluster. The neuron that has fired in response to the input pattern defines the cluster that the pattern belongs to. Thus, the second hidden layer takes the $(hn\times1)$-dimensional vector of input spikes $t^{[0]}(x(k))$ and outputs an $(m\times1)$-dimensional vector of outgoing spikes $t^{[1]}(x(k))$ that defines the membership of the input pattern $x(k)$.

This is the basic architecture and behaviour of the self-learning spiking neural network. The detailed architecture is described below.

3.3. Population coding and receptive neurons

The first hidden layer is constructed to perform population coding of the input signal. It acts in such a manner that each dimensional component $x_i(k)$, $i=\overline{1,n}$, of the input signal $x(k)$ is processed by a pool of $h$ receptive neurons $RN_{li}$, $l=\overline{1,h}$. Obviously, there can be a different number of receptive neurons $h_i$ in a pool for each dimensional component in the general case. For the sake of simplicity, we will consider here that the number of neurons is equal for all pools.

As a rule, the activation functions of the receptive neurons within a pool are bell-shaped (usually Gaussian), shifted, overlapped, of different widths, and have a dead zone. Generally, the firing time of a spike emitted by a receptive neuron $RN_{li}$ upon an incoming signal $x_i(k)$ lies in a certain interval $[0,t_{\max}^{[0]}]$ referred to as the coding interval and is described by the following expression:

$$t_{li}^{[0]}(x_i(k))=\begin{cases}\left\lfloor t_{\max}^{[0]}\left(1-\psi\!\left(\dfrac{|x_i(k)-c_{li}^{[0]}|}{\sigma_{li}}\right)\right)\right\rfloor, & \psi\!\left(\dfrac{|x_i(k)-c_{li}^{[0]}|}{\sigma_{li}}\right)\ge\theta_{r.n.},\\[6pt]-1, & \psi\!\left(\dfrac{|x_i(k)-c_{li}^{[0]}|}{\sigma_{li}}\right)<\theta_{r.n.}\end{cases}\tag{1}$$
Figure 3. Self-learning spiking neural network (spiking neurons are depicted so as to stress that they act in an integrate-and-fire manner)

where $\lfloor\cdot\rfloor$ is the floor function; $\psi(\cdot)$, $c_{li}^{[0]}$, $\sigma_{li}$ and $\theta_{r.n.}$ are the receptive neuron’s activation function, center, width and dead zone, respectively ($r.n.$ in the last parameter means ‘receptive neuron’); and $-1$ indicates that the neuron does not fire. An example of population coding is depicted in Fig. 4. It is easily seen that the closer $x_i(k)$ is to the center $c_{li}^{[0]}$ of receptive neuron $RN_{li}$, the earlier the neuron emits the spike $t_{li}^{[0]}(x_i(k))$.

Figure 4. An example of population coding. Incoming signal $x_i(k)$ fires receptive neurons $RN_{2,i}$ and $RN_{3,i}$. It is considered here that the width of the activation function is the same for all receptive neurons within the pool

In this work, we used a Gaussian as the activation function of the receptive neurons:

$$\psi\!\left(\frac{|x_i(k)-c_{li}^{[0]}|}{\sigma_{li}}\right)=\exp\!\left(-\frac{(x_i(k)-c_{li}^{[0]})^2}{2\sigma_{li}^2}\right)\tag{2}$$

There can be several ways to set the widths and centers of the receptive neurons within a pool. As a rule, activation functions can be of two types – either ‘narrow’ or ‘wide’. The centers of each width type of activation function are calculated in different ways, but in either case they cover the data range uniformly. More details can be found in (Bohte et al., 2002).
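
To make the population coding of this subsection concrete, the following sketch (Python with NumPy; a minimal illustration under the assumptions stated in the comments, not the authors’ original implementation) converts one scalar input component into firing times of a pool of Gaussian receptive neurons according to (1) and (2). The pool parameters (range, equal widths) are hypothetical.

import numpy as np

def population_encode(x_i, centers, widths, t_max=20.0, theta_rn=0.1):
    """Encode a scalar input into firing times of a pool of receptive neurons.

    Implements (1)-(2): a Gaussian activation is mapped onto the coding
    interval [0, t_max]; neurons whose activation falls below the dead zone
    theta_rn do not fire (marked by -1).
    """
    psi = np.exp(-(x_i - centers) ** 2 / (2.0 * widths ** 2))   # (2)
    t_fire = np.floor(t_max * (1.0 - psi))                      # (1)
    t_fire[psi < theta_rn] = -1.0                               # dead zone: no spike
    return t_fire

# Hypothetical pool of h = 6 receptive neurons covering the range [0, 1]
h = 6
centers = np.linspace(0.0, 1.0, h)
widths = np.full(h, 0.5 * (centers[1] - centers[0]))            # equal widths for the whole pool
print(population_encode(0.37, centers, widths))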

3.4. Spiking neurons layer

A spiking neuron is considered to be formed of two constituents: the synapse and the soma.

As mentioned above, the synapses between receptive neurons and spiking neurons are multiple structures. As shown in Fig. 5, a multiple synapse $MS_{jli}$ consists of a set of $q$ subsynapses with different time delays $d^{p}$ ($d^{p}-d^{p-1}>0$, $d^{q}-d^{1}\le t_{\max}^{[0]}$) and adjustable weights $w_{jli}^{p}$ (here $p=\overline{1,q}$). It should be noted that the number of subsynapses within a multiple synapse is fixed for the whole network. Having received a spike $t_{li}^{[0]}(x_i(k))$ from the $li$-th receptive neuron, the $p$-th subsynapse of the $j$-th spiking neuron produces a delayed weighted postsynaptic potential

$$u_{jli}^{p}(t)=w_{jli}^{p}\,\varepsilon_{jli}^{p}(t)=w_{jli}^{p}\,\varepsilon\!\left(t-\left(t_{li}^{[0]}(x_i(k))+d^{p}\right)\right)\tag{3}$$

where $\varepsilon(\cdot)$ is the spike-response function, usually described by the expression (Natschlaeger & Ruf, 1998)

$$\varepsilon\!\left(t-\left(t_{li}^{[0]}(x_i(k))+d^{p}\right)\right)=\frac{t-\left(t_{li}^{[0]}(x_i(k))+d^{p}\right)}{\tau_{PSP}}\exp\!\left(1-\frac{t-\left(t_{li}^{[0]}(x_i(k))+d^{p}\right)}{\tau_{PSP}}\right)H\!\left(t-\left(t_{li}^{[0]}(x_i(k))+d^{p}\right)\right)\tag{4}$$
where $\tau_{PSP}$ is the membrane potential decay time constant, whose value can be obtained empirically ($PSP$ means ‘postsynaptic potential’), and $H(\cdot)$ is the Heaviside step function. The output of the multiple synapse $MS_{jli}$ forms the total postsynaptic potential
$$u_{jli}(t)=\sum_{p=1}^{q}u_{jli}^{p}(t)\tag{5}$$
Figure 5. Multiple synapse

Each incoming total postsynaptic potential contributes to the membrane potential of the spiking neuron $SN_j$ as follows:

$$u_{j}(t)=\sum_{i=1}^{n}\sum_{l=1}^{h}u_{jli}(t)\tag{6}$$

Spiking neuron $SN_j$ generates at most one outgoing spike $t_j^{[1]}(x(k))$ during a simulation interval (the presentation of an input pattern $x(k)$), and fires at the instant the membrane potential reaches the firing threshold $\theta_{s.n.}$ ($s.n.$ means here ‘spiking neuron’). After the neuron has fired, the membrane potential is reset to the rest value $u_{rest}$ (0 usually) until the next input pattern is presented.

It is easily seen that the described behaviour of the spiking neuron corresponds to that of the leaky integrate-and-fire model.
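
As an illustration of (3)-(6) and the threshold firing described above, the sketch below (Python/NumPy; a simplified single-neuron simulation with hypothetical parameter values, not the authors’ code) accumulates delayed weighted postsynaptic potentials on a discrete time grid and reports the first instant the membrane potential crosses the firing threshold.

import numpy as np

def spike_response(t, tau_psp=3.0):
    """Spike-response function (4): (t/tau) * exp(1 - t/tau) * H(t)."""
    return np.where(t > 0.0, (t / tau_psp) * np.exp(1.0 - t / tau_psp), 0.0)

def neuron_firing_time(input_spikes, delays, weights, theta_sn=9.0,
                       t_max=30.0, dt=0.1, tau_psp=3.0):
    """First time the membrane potential (6) reaches the threshold, or None.

    input_spikes: firing times t[0]_li of the receptive neurons (-1 = no spike),
    delays, weights: d^p and w^p_jli per (synapse, subsynapse).
    """
    t = np.arange(0.0, t_max, dt)
    u = np.zeros_like(t)
    for t0, d_syn, w_syn in zip(input_spikes, delays, weights):
        if t0 < 0:                       # the receptive neuron did not fire
            continue
        for d, w in zip(d_syn, w_syn):   # sum over subsynapses, (3) and (5)
            u += w * spike_response(t - (t0 + d), tau_psp)
    above = np.nonzero(u >= theta_sn)[0]
    return t[above[0]] if above.size else None

# Hypothetical toy example: 3 incoming spikes, q = 4 subsynapses each,
# with a lowered threshold so that the neuron actually fires
spikes = np.array([2.0, 3.0, -1.0])
delays = np.tile(np.arange(4.0), (3, 1))
weights = np.full((3, 4), 0.9)
print(neuron_firing_time(spikes, delays, weights, theta_sn=5.0))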

Spiking neurons are linked by lateral inhibitory connections that prevent all other neurons from firing after the first one has fired. Thus, any input pattern makes only one spiking neuron fire, that is, only one component of the vector of outgoing spikes has a non-negative value. There can be cases when a few spiking neurons fire simultaneously for an input pattern. Such cases are rare enough, and their appearance depends directly on the initial synaptic weight distribution.

3.5. Crisp data clustering

As mentioned above, a spiking neuron acts similarly to a radial basis function, and its response depends on the degree of coincidence of the input. The notion of a spiking neuron center was introduced to describe such neuron behaviour in a convenient way (Natschlaeger & Ruf, 1998). In the general case, it is considered to possess the following property: the closer the input pattern is to the neuron’s center, the earlier the output spike fires. Hence, a spiking neuron firing time reflects the similarity (Natschlaeger & Ruf, 1998) (or distance (Bohte et al., 2002)) between the input pattern and the neuron center. The degree of coincidence is utilized here as a similarity (distance) measure.

The center of a spiking neuron is encoded in its synaptic time delays. They produce a coincident output (which, in its turn, makes the soma fire at the earliest possible time) if the incoming pattern is similar to the encoded one. Thus, the learned spiking neuron can respond selectively to the input set of patterns. Data clustering in the described neural network rests on this property of the spiking neuron. An input pattern fires the neuron whose center is the most similar (the closest) to it, and the fired spike prevents the rest of the neurons from firing through the lateral inhibitory connections. This way the self-learning spiking neural network performs cluster separation if the classes to be detected do not overlap one another.

One can readily see that the unsupervised pattern classification procedure of the spiking neuron layer is identical to that of self-organizing maps (Kohonen, 1995).

4. Spiking neural network learning algorithms

4.1. Winner-takes-all

The purpose of an unsupervised learning algorithm for the spiking neural network is to adjust the centers of the spiking neurons so as to make each of them correspond to the centroid of a certain data cluster. Such a learning algorithm was introduced on the basis of two learning rules, namely, the ‘Winner-Takes-All’ rule and the temporal Hebbian rule (Natschlaeger & Ruf, 1998, Gerstner et al., 1996). The first one defines which neuron should be updated, and the second one defines how it should be updated. The algorithm updates neuron centers through synaptic weight adjustment, whereas the synaptic time delays always remain constant. The concept here is that the significance of a given time delay can be changed by varying the corresponding synaptic weight.

Each learning epoch consists of two phases. Competition, the first phase, defines a neuron-winner. Being laterally linked by inhibitory connections, the spiking neurons compete to respond to the pattern. The one whose center is the closest to the pattern wins (and fires). Following the competition phase, weight adjustment takes place. The learning algorithm adjusts the synaptic weights of the neuron-winner to move it closer to the input pattern. It strengthens the weights of those subsynapses which contributed to the neuron-winner’s firing (i.e. the subsynapses that produced delayed spikes right before the neuron fired) and weakens the ones which did not contribute (i.e. whose delayed spikes appeared right after the neuron’s firing or long before it). Generally, the learning algorithm can be expressed as

$$w_{jli}^{p}(K+1)=\begin{cases}w_{jli}^{p}(K)+\eta_w(K)\,L(\Delta t_{jli}^{p}), & j=\tilde{j},\\ w_{jli}^{p}(K), & j\neq\tilde{j}\end{cases}\tag{7}$$

where $K$ is the current epoch number, $\eta_w(\cdot)\ge0$ is the learning rate (while it is constant in (Natschlaeger & Ruf, 1998), it can depend on the epoch number in the general case; $w$ means ‘weights’), $L(\cdot)$ is the learning function (Gerstner et al., 1996), $\tilde{j}$ is the number of the neuron that has won on the current epoch, and $\Delta t_{jli}^{p}$ is the time delay between the delayed spike $t_{li}^{[0]}(x_i(k))+d^{p}$ produced by the $p$-th subsynapse of the $li$-th synapse and the spiking neuron firing time $t_{j}^{[1]}(x(k))$:

$$\Delta t_{jli}^{p}=t_{li}^{[0]}(x_i(k))+d^{p}-t_{j}^{[1]}(x(k))\tag{8}$$

As a rule, the learning function has the following form (Berredo, 2005):

$$L(\Delta t_{jli}^{p})=(1+\beta)\exp\!\left(-\frac{(\Delta t_{jli}^{p}-\alpha)^2}{2(\kappa-1)}\right)-\beta\tag{9}$$

$$\kappa=1-\frac{\nu^2}{2\ln\!\left(\frac{\beta}{1+\beta}\right)}\tag{10}$$

where $\alpha\ge0$, $\beta\ge0$, and $\nu$ are the shape parameters of the learning function $L(\cdot)$ that can be obtained empirically (Berredo, 2005, Natschlaeger & Ruf, 1998). The learning function and its shape parameters are depicted in Fig. 6. The effect of the shape parameters on the results of information processing performed by the spiking neural network is studied in (Meftah et al., 2008).

Upon completion of the learning stage, the center of a spiking neuron represents the centroid of a certain data cluster, and the spiking neural network can successfully perform unsupervised classification of the input set.
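
The following sketch (Python/NumPy; an illustrative rendering of (7)-(10) with the hypothetical shape parameters used later in the experiment, not a reference implementation) computes the learning function $L(\Delta t)$ and applies one ‘Winner-Takes-All’ weight update to the winner’s subsynaptic weights.

import numpy as np

def learning_function(dt, alpha=2.3, beta=0.2, nu=5.0):
    """Temporal Hebbian learning function (9) with kappa computed from (10)."""
    kappa = 1.0 - nu ** 2 / (2.0 * np.log(beta / (1.0 + beta)))
    return (1.0 + beta) * np.exp(-(dt - alpha) ** 2 / (2.0 * (kappa - 1.0))) - beta

def wta_update(weights, delayed_spikes, t_winner, eta=0.35,
               w_min=0.0, w_max=1.0):
    """One 'Winner-Takes-All' update (7)-(8), applied to the winning neuron only.

    weights: current subsynaptic weights of the winner,
    delayed_spikes: t[0]_li + d^p for every subsynapse,
    t_winner: the winner's firing time t[1]_j(x(k)).
    """
    dt = delayed_spikes - t_winner          # (8)
    new_w = weights + eta * learning_function(dt)
    return np.clip(new_w, w_min, w_max)     # keep weights within the allowed range

# Hypothetical example: subsynapses that fired just before the neuron are strengthened
w = np.full(5, 0.5)
delayed = np.array([4.0, 5.5, 7.0, 8.5, 12.0])
print(wta_update(w, delayed, t_winner=7.0))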

Figure 6. Learning function $L(\cdot)$

4.2. Winner-takes-more

The learning algorithm (7) updates only the neuron-winner on each epoch and disregards the other neurons. It seems more natural to update not only the spiking neuron-winner, but also its neighbours (Bodyanskiy & Dolotov, 2009). This approach is known as the ‘Winner-Takes-More’ rule. It implies that there is a cooperation phase before the weight adjustment. The neuron-winner determines a local region of topological neighbourhood on each learning epoch. Within this region, the neuron-winner fires along with its neighbours, and the closer a neighbour is to the winner, the more significantly its weights are changed. The topological region is represented by the neighbourhood function $\varphi(|\Delta t_{j\tilde{j}}|)$ that depends on the difference $\Delta t_{j\tilde{j}}$ between the neuron-winner firing time $t_{\tilde{j}}^{[1]}(x(k))$ and the neighbour firing time $t_{j}^{[1]}(x(k))$ (the distance between the neurons in a temporal sense) and on a parameter that defines the effective width of the region. As a rule, $\varphi(\cdot)$ is a kernel function that is symmetric about its maximum at the point $\Delta t_{j\tilde{j}}=0$. It reaches unity at that point and monotonically decreases as $\Delta t_{j\tilde{j}}$ tends to infinity. The functions most frequently used as neighbourhood functions are the Gaussian, the paraboloid, the Mexican Hat, and many others (Bodyanskiy & Rudenko, 2004).

For the self-learning spiking neural network, the learning algorithm based on the ‘Winner-Takes-More’ rule can be expressed in the following form (Bodyanskiy & Dolotov, 2009):

$$w_{jli}^{p}(K+1)=w_{jli}^{p}(K)+\eta_w(K)\,\varphi(|\Delta t_{j\tilde{j}}|)\,L(\Delta t_{jli}^{p})\tag{11}$$

where the temporal distance $\Delta t_{j\tilde{j}}$ is

$$\Delta t_{j\tilde{j}}=t_{\tilde{j}}^{[1]}(x(k))-t_{j}^{[1]}(x(k))\tag{12}$$

Obviously, expression (11) is a generalization of (7).

An analysis of competitive unsupervised learning convergence showed that the width parameter of the neighbourhood function should decrease during synaptic weight adjustment (Cottrell & Fort, 1986). For the Gaussian neighbourhood function

$$\varphi(|\Delta t_{j\tilde{j}}|,K)=\exp\!\left(-\frac{(\Delta t_{j\tilde{j}})^2}{2\rho^2(K)}\right)\tag{13}$$

the width parameter $\rho$ can be adjusted as follows (Ritter & Schulten, 1986):

$$\rho(K)=\rho(0)\exp\!\left(-\frac{K}{\gamma}\right)\tag{14}$$

where $\gamma>0$ is a scalar that determines the rate of the neuron-winner’s effect on its neighbours.

It is worth noting that an exponential decrease of the width parameter can also be achieved by applying a simpler expression instead of (14) (Bodyanskiy & Rudenko, 2004):

$$\rho(K)=\gamma\rho(K-1),\quad 0<\gamma<1\tag{15}$$

The learning algorithm (11) requires a modification of the self-learning spiking neural network architecture. Lateral inhibitory connections in the second hidden layer should be replaced with excitatory ones during the network learning stage in order to implement the ‘Winner-Takes-More’ rule.

In the following sections, it will be shown that the learning algorithm based on the ‘Winner-Takes-More’ rule is more natural than the one based on the ‘Winner-Takes-All’ rule for learning the fuzzy spiking neural network.
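
A minimal sketch of the ‘Winner-Takes-More’ update (11)-(13), (15) is given below (Python/NumPy; hypothetical values, with the learning function (9)-(10) restated for self-containment). All neurons are updated, scaled by the Gaussian neighbourhood (13), and the width shrinks per epoch according to (15).

import numpy as np

def learning_function(dt, alpha=2.3, beta=0.2, nu=5.0):          # as in (9)-(10)
    kappa = 1.0 - nu ** 2 / (2.0 * np.log(beta / (1.0 + beta)))
    return (1.0 + beta) * np.exp(-(dt - alpha) ** 2 / (2.0 * (kappa - 1.0))) - beta

def wtm_update(weights, delayed_spikes, firing_times, winner_idx,
               rho, eta=0.35, gamma=0.5, w_min=0.0, w_max=1.0):
    """One 'Winner-Takes-More' epoch update per (11)-(13) and (15).

    weights: (m, P) subsynaptic weights of all m spiking neurons,
    delayed_spikes: (P,) delayed spikes t[0]_li + d^p seen by every neuron,
    firing_times: (m,) firing times t[1]_j of the neurons,
    winner_idx: index of the neuron that fired first.
    """
    dt_jj = firing_times[winner_idx] - firing_times              # (12)
    phi = np.exp(-dt_jj ** 2 / (2.0 * rho ** 2))                  # (13)
    dt = delayed_spikes[None, :] - firing_times[:, None]          # (8) per neuron
    weights = weights + eta * phi[:, None] * learning_function(dt)  # (11)
    return np.clip(weights, w_min, w_max), gamma * rho            # (15): shrink the width

# Hypothetical example with m = 3 neurons and 5 subsynapses
W = np.full((3, 5), 0.5)
delayed = np.array([4.0, 5.5, 7.0, 8.5, 12.0])
t_out = np.array([7.0, 9.0, 14.0])
W, rho = wtm_update(W, delayed, t_out, winner_idx=0, rho=6.0)
print(W.round(3), rho)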

5. Spiking neural network as an analog-digital system

5.1. Introduction

Hardware implementations of spiking neural networks have demonstrated fast processing ability that makes it possible to apply such systems in real-life applications where processing speed is a rather critical parameter (Maass, 1997a, Maass & Bishop 1998, Schoenauer et al., 2000, Kraft et al., 2006). From a theoretical point of view, the current research works on the hardware implementation of spiking neurons are very particular; they lack a technically plausible description on general grounds. In this section, we consider a spiking neuron as a processing system of classical automatic control theory (Feldbaum & Butkovskiy, 1971, Dorf & Bishop, 1995, Phillips & Harbor, 2000, Goodwin et al., 2001). Spiking neuron functioning is described in terms of the Laplace transform. Having such a general description of a spiking neuron, one can derive various hardware implementations of the self-learning spiking neural network for solving specific technical problems, among them realistic complex image processing.

Within the scope of automatic control theory, a spike $t(x(k))$ can be represented by the Dirac delta function $\delta(t-t(x(k)))$. Its Laplace transform is

$$\mathcal{L}\{\delta(t-t(x(k)))\}=e^{-t(x(k))s}\tag{16}$$

where $s$ is the Laplace operator. A spiking neuron takes spikes on its input, performs a spike–membrane potential–spike transformation, and produces spikes on its output. Obviously, it is a kind of analog-digital system that processes information in continuous-time form and transmits it in pulse-position form. This is the basic concept for designing the analog-digital architecture of the self-learning spiking neural network. The overall architecture is depicted in Fig. 7 and is explained in detail in the following subsections.

Figure 7. Spiking neuron as a nonlinear dynamic system

5.2. Synapse as a second-order critically damped response unit

The multiple synapse $MS_{jli}$ of a spiking neuron $SN_j$ transforms the incoming pulse-position signal $\delta(t-t_{li}^{[0]}(x_i(k)))$ into the continuous-time signal $u_{jli}(t)$. The spike-response function (4), the basis of such a transformation, has a form similar to that of the impulse response of a second-order damped response unit. The transfer function of a second-order damped response unit with unit gain factor is

$$\tilde{G}(s)=\frac{1}{(\tau_1 s+1)(\tau_2 s+1)}=\frac{1}{\tau_4^2 s^2+\tau_3 s+1}\tag{17}$$

where $\tau_{1,2}=\frac{\tau_3}{2}\pm\sqrt{\frac{\tau_3^2}{4}-\tau_4^2}$, $\tau_1\ge\tau_2$, $\tau_3\ge2\tau_4$, and its impulse response is

$$\tilde{\varepsilon}(t)=\frac{1}{\tau_1-\tau_2}\left(e^{-\frac{t}{\tau_1}}-e^{-\frac{t}{\tau_2}}\right)\tag{18}$$

Putting $\tau_1=\tau_2=\tau_{PSP}$ (which corresponds to a second-order critically damped response unit) and applying l'Hôpital's rule, one can obtain

$$\tilde{\varepsilon}(t)=\frac{t}{\tau_{PSP}^2}e^{-\frac{t}{\tau_{PSP}}}\tag{19}$$

Comparing the spike-response function (4) with the impulse response (19) leads us to the following relationship:

$$\varepsilon(t)=e\,\tau_{PSP}\,\tilde{\varepsilon}(t)\tag{20}$$

Thus, the transfer function of the second-order critically damped response unit whose impulse response corresponds to the spike-response function is

$$G_{SRF}(s)=\frac{e\,\tau_{PSP}}{(\tau_{PSP}s+1)^2}\tag{21}$$

where SRF means ‘spike-response function’.
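
The relationship (20) between the spike-response function and the impulse response of the critically damped unit can be checked numerically; the sketch below (Python/NumPy; an illustrative check, not part of the original text) compares (4), for a spike at $t=0$ with zero delay, against the scaled impulse response (19).

import numpy as np

tau_psp = 3.0
t = np.linspace(0.0, 30.0, 3001)

# Spike-response function (4) for an input spike at t = 0 with zero delay
eps = np.where(t > 0.0, (t / tau_psp) * np.exp(1.0 - t / tau_psp), 0.0)

# Impulse response (19) of the critically damped unit 1 / (tau*s + 1)^2
eps_tilde = (t / tau_psp ** 2) * np.exp(-t / tau_psp)

# Relationship (20): eps(t) = e * tau_PSP * eps_tilde(t)
print(np.allclose(eps, np.e * tau_psp * eps_tilde))   # expected: True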

Now, we can design the multiple synapse in terms of the Laplace transform (Bodyanskiy et al., 2009). As illustrated in Fig. 7, the multiple synapse $MS_{jli}$ is a dynamic system that consists of a set of subsynapses connected in parallel. Each subsynapse is formed by a time delay, a second-order critically damped response unit, and a gain. In response to an incoming spike $\delta(t-t_{li}^{[0]}(x_i(k)))$, the subsynapse produces a delayed weighted postsynaptic potential $u_{jli}^{p}(s)$, and the multiple synapse produces the total postsynaptic potential $u_{jli}(s)$ that arrives at the spiking neuron soma.

Taking into account (21), the transfer function of the $p$-th subsynapse of $MS_{jli}$ takes the following form:

$$U_{jli}^{p}(s)=w_{jli}^{p}\,e^{-d^{p}s}\,G_{SRF}(s)=\frac{\tau_{PSP}\,w_{jli}^{p}\,e^{1-d^{p}s}}{(\tau_{PSP}s+1)^2}\tag{22}$$

and its response to a spike $\delta(t-t_{li}^{[0]}(x_i(k)))$ is

$$u_{jli}^{p}(s)=e^{-t_{li}^{[0]}(x_i(k))s}\,U_{jli}^{p}(s)=\frac{\tau_{PSP}\,w_{jli}^{p}\,e^{1-\left(t_{li}^{[0]}(x_i(k))+d^{p}\right)s}}{(\tau_{PSP}s+1)^2}\tag{23}$$

So finally, considering the transfer function of the multiple synapse $MS_{jli}$

$$U_{jli}(s)=\sum_{p=1}^{q}U_{jli}^{p}(s)=\sum_{p=1}^{q}\frac{\tau_{PSP}\,w_{jli}^{p}\,e^{1-d^{p}s}}{(\tau_{PSP}s+1)^2}\tag{24}$$

the Laplace transform of the multiple synapse output can be expressed in the following form:

$$u_{jli}(s)=e^{-t_{li}^{[0]}(x_i(k))s}\,U_{jli}(s)=\sum_{p=1}^{q}\frac{\tau_{PSP}\,w_{jli}^{p}\,e^{1-\left(t_{li}^{[0]}(x_i(k))+d^{p}\right)s}}{(\tau_{PSP}s+1)^2}\tag{25}$$

Here it is worth noting that since it is impossible to use the $\delta$-function in practice (Phillips & Harbor, 2000), it is convenient to model it with an impulse of triangular form (Feldbaum & Butkovskiy, 1971), as shown in Fig. 8. Such an impulse is similar to the $\delta$-function under the following condition:

$$\lim_{\Delta\to0}a(t,\Delta)=\delta(t)\tag{26}$$
Figure 8. Triangular impulse $a(t,\Delta)$

In this case, the numerator of (21) should be revised to take into account the finite peak of $a(t,\Delta)$ (in contrast to that of the Dirac delta function).

5.3. Soma as a threshold detection unit

The spiking neuron soma performs a transformation that is opposite to that of the synapse. It takes the continuous-time signals $u_{jli}(t)$ and produces the pulse-position signal $\delta(t-t_{j}^{[1]}(x(k)))$. In doing so, the soma responds each time the membrane potential reaches a certain threshold value. In other words, the spiking neuron soma acts as a threshold detection system and consequently can be designed on the basis of the relay control systems concept (Tsypkin, 1984). In (Bodyanskiy et al., 2009), the mechanisms of the threshold detection behaviour and the firing process were described. Here we extend this approach to capture the refractoriness of the spiking neuron.

The threshold detection behaviour of a neuron soma can be modelled by a relay element with dead zone $\theta_{s.n.}$ that is defined by the nonlinear function

$$\Phi_{relay}\!\left(u_j(t)-\theta_{s.n.}\right)=\frac{\operatorname{sign}\!\left(u_j(t)-\theta_{s.n.}\right)+1}{2}\tag{27}$$

where $\operatorname{sign}(\cdot)$ is the signum function. Soma firing can be described by a derivative unit that is connected in series with the relay element and produces a spike each time the relay switches. In order to avoid the negative spike that appears as a response to the relay resetting, a conventional diode is added next to the derivative unit. The diode is defined by the following function:

$$\Phi_{diode}\!\left(\delta(t-t_{relay}^{[1]})\right)=\delta(t-t_{relay}^{[1]})\,H\!\left(\delta(t-t_{relay}^{[1]})\right)\tag{28}$$

where $t_{relay}^{[1]}$ is the spike produced by the derivative unit upon the relay switching.

Now we can define the Laplace transform of an outgoing spike $t_{j}^{[1]}(x(k))$, namely,

$$\mathcal{L}\{\delta(t-t_{j}^{[1]}(x(k)))\}=e^{-t_{j}^{[1]}(x(k))s}=\mathcal{L}\{\Phi_{diode}(s\,\mathcal{L}\{\Phi_{relay}(u_j(t)-\theta_{s.n.})\})\}\tag{29}$$

As mentioned above, the leaky integrate-and-fire model disregards the neuron refractoriness. Nevertheless, the refractory period is implemented in the layer of spiking neurons indirectly. The point is that a spiking neuron cannot produce another spike after firing and until the end of the simulation interval, since the input pattern is presented only once within the interval. In the analog-digital architecture of the spiking neuron, the refractoriness can be modelled by a feedback circuit. As shown in Fig. 7, it is a group of a time delay, a second-order critically damped response unit, and a gain that are connected in series. The time delay defines the duration of the spike generation period $d_{spike}$ (usually $d_{spike}\approx0$). The second-order critically damped response unit defines the spike after-potential. Generally, the spike after-potential can be represented by a second-order damped response unit, but for the sake of simplicity, we use the critically damped response unit as it can be defined by one parameter only, namely, $\tau_{SAP}$ ($SAP$ means here ‘spike after-potential’). This parameter controls the duration of the refractory period. Finally, the gain unit sets the amplitude of the spike after-potential $w_{SAP}$. Obviously, $w_{SAP}$ should be much greater than any synaptic weight.

Thus, the transfer function of the feedback circuit is

$$G_{F.B.}(s)=\frac{w_{SAP}\,e^{-d_{spike}s}}{(\tau_{SAP}s+1)^2}\tag{30}$$

where $F.B.$ means ‘feedback circuit’, and the transfer function of the soma is

$$G_{soma}(s)=\frac{G_{F.F.}}{1+G_{F.F.}\,G_{F.B.}}\tag{31}$$

where $G_{F.F.}$ is defined by (29) ($F.F.$ means ‘feed-forward circuit’).

It is easily seen that the functioning of the spiking neuron analog-digital architecture introduced above is similar to that of the spike-response model.
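
To illustrate the threshold detection mechanism (27)-(29), the following discrete-time sketch (Python/NumPy; a simplified simulation under hypothetical parameter values, not a transfer-function realization) passes a membrane potential through the relay with dead zone, differentiates the relay output, and removes the negative spike with the diode nonlinearity.

import numpy as np

def soma_output(u, theta_sn, dt):
    """Relay (27) -> derivative -> diode (28): pulse-position output of the soma."""
    relay = (np.sign(u - theta_sn) + 1.0) / 2.0          # (27): 0/1 relay with dead zone
    deriv = np.diff(relay, prepend=relay[0]) / dt        # derivative unit: spike on switching
    return np.where(deriv > 0.0, deriv, 0.0)             # (28): diode cuts the negative spike

# Hypothetical membrane potential crossing a threshold of 9 shortly before t = 7 s
dt = 0.1
t = np.arange(0.0, 30.0, dt)
u = 12.0 * np.exp(-((t - 7.0) ** 2) / 8.0)
out = soma_output(u, theta_sn=9.0, dt=dt)
print(t[out > 0.0][0])    # firing time of the outgoing spike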

6. Self-learning hybrid systems based on spiking neural network

6.1. Fuzzy receptive neurons

A common peculiarity of artificial neural networks is that they store the dependence of the system model outputs on its inputs in the form of a ‘black box’. Instead, data processing methods based on fuzzy logic allow the system model to be designed and stored in an analytical form that can be meaningfully interpreted in a relatively simple way. This fact raises interest in designing hybrid systems that can combine the computational capabilities of spiking neural networks with the capability of fuzzy logic methods to conveniently describe the input-output relationships of the system being modelled. The present section shows how the receptive neuron layer, a part of the spiking neural network, can be ‘fuzzified’.

One can readily see that the layer of receptive neuron pools is identical to the fuzzification layer of neuro-fuzzy systems like Takagi-Sugeno-Kang networks, ANFIS, etc. (Jang et al., 1997). Considering the activation function $\psi_{li}(x_i(k))$ as a membership function, the receptive neuron layer can be treated as one that transforms the input data set into a fuzzy set that is defined by the values of the activation-membership function $\psi_{li}(x_i(k))$ and is expressed over the time domain in the form of firing times $t_{li}^{[0]}(x_i(k))$ (Bodyanskiy et al., 2008a). In fact, each pool of receptive neurons performs a zero-order Takagi-Sugeno fuzzy inference (Jang et al., 1997)

$$\text{IF } x_i(k)\ \text{IS}\ X_{li}\ \text{THEN OUTPUT IS}\ t_{li}^{[0]}\tag{32}$$

where $X_{li}$ is the fuzzy set with membership function $\psi_{li}(x_i(k))$. Thus, one can interpret a receptive neuron pool as a certain linguistic variable and each receptive neuron (more precisely, fuzzy receptive neuron) within the pool as a linguistic term with membership function $\psi_{li}(x_i(k))$ (Fig. 9). This way, having any a priori knowledge of the data structure, it is possible to adjust the activation functions of the first-layer neurons beforehand to fit it and thus to get better clustering results.

6.2. Fuzzy clustering

The conventional approach to data clustering implies that each pattern $x(k)$ can belong to one cluster only. It is more natural to consider that a pattern can belong to several clusters with different membership levels. This case is the subject matter of fuzzy cluster analysis, which is developing in several directions. Among them, algorithms based on an objective function are the most mathematically rigorous (Bezdek, 1981). Such algorithms solve data processing tasks by optimizing a certain preset cluster quality criterion.

One of the commonly used cluster quality criteria can be stated as follows:

$$E\!\left(\mu_j(x(k)),v_j\right)=\sum_{k=1}^{N}\sum_{j=1}^{m}\mu_j^{\zeta}(x(k))\,\|x(k)-v_j\|_A^2\tag{33}$$

where $\mu_j(x(k))\in[0,1]$ is the membership level of the input pattern $x(k)$ to the $j$-th cluster, $v_j$ is the center of the $j$-th cluster, $\zeta\ge0$ is the fuzzifier that determines the boundary between clusters and controls the amount of fuzziness in the final partition, $\|x(k)-v_j\|_A$ is the distance between $x(k)$ and $v_j$ in a certain metric, and $A$ is a norm matrix that defines the distance metric.

Figure 9. Terms of linguistic variable for the i-th input. Membership functions are adjusted to represent a priori knowledge of the input data structure. Incoming signal $x_i(k)$ fires fuzzy receptive neurons $FRN_{2,i}$ and $FRN_{3,i}$

By applying the method of indefinite Lagrange multipliers under the restrictions

$$\sum_{j=1}^{m}\mu_j(x(k))=1,\quad k=\overline{1,N}\tag{34}$$

$$0<\sum_{k=1}^{N}\mu_j(x(k))<N,\quad j=\overline{1,m}\tag{35}$$

minimization of (33) leads us to the following solution:

$$\mu_j(x(k))=\frac{\left(\|x(k)-v_j\|_A^2\right)^{\frac{1}{1-\zeta}}}{\sum_{\iota=1}^{m}\left(\|x(k)-v_\iota\|_A^2\right)^{\frac{1}{1-\zeta}}}\tag{36}$$

$$v_j=\frac{\sum_{k=1}^{N}\mu_j^{\zeta}(x(k))\,x(k)}{\sum_{k=1}^{N}\mu_j^{\zeta}(x(k))}\tag{37}$$

which originates the methods of so-called fuzzy probabilistic clustering (Bezdek et al., 2005). In the case when the norm matrix $A$ is the identity matrix and $\zeta=1$, equations (36), (37) give the hard c-means algorithm, and for $\zeta=2$ they give the conventional fuzzy c-means algorithm.
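
For reference, a compact implementation of the probabilistic updates (36)-(37) (Python/NumPy; a generic fuzzy c-means iteration on synthetic data, not tied to the network) is shown below with the Euclidean norm ($A$ = identity) and $\zeta=2$.

import numpy as np

def fuzzy_c_means(X, m, zeta=2.0, epochs=30, seed=0):
    """Alternating updates (36) and (37) with the Euclidean metric (A = I)."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), m, replace=False)]                # initial centers
    for _ in range(epochs):
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)    # squared distances
        d2 = np.maximum(d2, 1e-12)                             # avoid division by zero
        U = d2 ** (1.0 / (1.0 - zeta))
        U /= U.sum(axis=1, keepdims=True)                      # memberships (36)
        Uz = U ** zeta
        V = (Uz.T @ X) / Uz.sum(axis=0)[:, None]               # centers (37)
    return U, V

# Hypothetical toy data: two well-separated groups of 3-D points (e.g. RGB pixels)
X = np.vstack([np.random.rand(50, 3) * 0.2, 0.8 + np.random.rand(50, 3) * 0.2])
U, V = fuzzy_c_means(X, m=2)
print(V.round(2))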

The efficiency of fuzzy probabilistic clustering decreases in the presence of noise. Algorithm (36), (37) produces an unnaturally high degree of membership for outliers that are equidistant from the cluster centers. This drawback is avoided by applying the fuzzy possibilistic approach, which is based on the following objective function:

$$E\!\left(\mu_j(x(k)),v_j\right)=\sum_{k=1}^{N}\sum_{j=1}^{m}\mu_j^{\zeta}(x(k))\,\|x(k)-v_j\|_A^2+\sum_{j=1}^{m}\lambda_j\sum_{k=1}^{N}\left(1-\mu_j(x(k))\right)^{\zeta}\tag{38}$$

where $\lambda_j>0$ is a scalar parameter that defines the distance at which the membership level takes the value 0.5, i.e. if $\|x(k)-v_j\|_A^2=\lambda_j$, then $\mu_j(x(k))=0.5$. Minimization of (38) with respect to $\mu_j(x(k))$, $v_j$, and $\lambda_j$ yields the following solution:

$$\mu_j(x(k))=\left(1+\left(\frac{\|x(k)-v_j\|_A^2}{\lambda_j}\right)^{\frac{1}{\zeta-1}}\right)^{-1}\tag{39}$$

$$v_j=\frac{\sum_{k=1}^{N}\mu_j^{\zeta}(x(k))\,x(k)}{\sum_{k=1}^{N}\mu_j^{\zeta}(x(k))}\tag{40}$$

$$\lambda_j=\frac{\sum_{k=1}^{N}\mu_j^{\zeta}(x(k))\,\|x(k)-v_j\|_A^2}{\sum_{k=1}^{N}\mu_j^{\zeta}(x(k))}\tag{41}$$

which gives the conventional possibilistic c-means algorithm if $\zeta=2$ and $A$ is the identity matrix (Bezdek et al., 2005).

After the spiking neural network learning has been done, the center $c_j^{[1]}$ of a spiking neuron $SN_j$ represents the center $v_j$ of a certain data cluster, and its firing time $t_j^{[1]}(x(k))$ reflects the distance $\|x(k)-v_j\|_A$ in a temporal sense (Natschlaeger & Ruf, 1998, Bohte et al., 2002). This notion allows us to use the self-learning spiking neural network output in the fuzzy clustering algorithms described above. In order to implement fuzzy clustering on the basis of the spiking neural network, its architecture is modified in the following way: the lateral connections in the second hidden layer are disabled, and an output fuzzy clustering layer is added next to the spiking neuron layer. Such a modification is applied to the spiking neural network at the data clustering stage only. The output fuzzy clustering layer receives information on the distances of the input pattern to the centers of all spiking neurons and produces a fuzzy partition using either the probabilistic approach (36), (37) as follows (Bodyanskiy & Dolotov, 2008a, 2008b):

$$\mu_j(x(k))=\frac{\left(t_j^{[1]}(x(k))\right)^{\frac{2}{1-\zeta}}}{\sum_{\iota=1}^{m}\left(t_\iota^{[1]}(x(k))\right)^{\frac{2}{1-\zeta}}}\tag{42}$$

or the possibilistic approach (39)-(41) as follows (Bodyanskiy et al., 2008b):

$$\mu_j(x(k))=\left(1+\left(\frac{\left(t_j^{[1]}(x(k))\right)^2}{\lambda_j}\right)^{\frac{1}{\zeta-1}}\right)^{-1}\tag{43}$$

$$\lambda_j=\frac{\sum_{k=1}^{N}\mu_j^{\zeta}(x(k))\left(t_j^{[1]}(x(k))\right)^2}{\sum_{k=1}^{N}\mu_j^{\zeta}(x(k))}\tag{44}$$

Obviously, the learning algorithm (11) is more natural here than (7), since the response of each spiking neuron within the second hidden layer matters for producing the fuzzy partition in the output layer.

The advantage of fuzzy clustering based on the self-learning spiking neural network is that it is not required to calculate the centers of the data clusters according to (37) or (40), as the network finds them itself during learning.
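
A sketch of the output fuzzy clustering layer (Python/NumPy; an illustrative computation of (42)-(44) from a vector of firing times, with hypothetical values) is given below. The firing times of the spiking neurons stand in for the distances to the cluster centers.

import numpy as np

def probabilistic_memberships(t_out, zeta=2.0):
    """Fuzzy partition (42) computed from the spiking neurons' firing times."""
    d = np.maximum(t_out ** 2, 1e-12) ** (1.0 / (1.0 - zeta))
    return d / d.sum()

def possibilistic_memberships(t_out, lam, zeta=2.0):
    """Fuzzy partition (43) with precomputed distance parameters lambda_j (44)."""
    return 1.0 / (1.0 + (t_out ** 2 / lam) ** (1.0 / (zeta - 1.0)))

# Hypothetical firing times of m = 3 spiking neurons for one input pattern
t_out = np.array([4.0, 9.0, 15.0])
print(probabilistic_memberships(t_out).round(3))
print(possibilistic_memberships(t_out, lam=np.array([25.0, 25.0, 25.0])).round(3))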

7. Simulation experiment

The proposed self-learning fuzzy spiking neural network was tested on the coloured Lenna image shown in Fig. 10a (USC-SIPI Image Database). The image is a standard benchmark that is widely used in image processing. The image has 3 layers (RGB) with spatial dimensions 512×512, so the set to process is formed of 262144 three-dimensional data points (n=3). The purpose was to separate classes by pixel colour, disregarding spatial location. Obviously, some classes overlap one another, as the three RGB components define plenty of colours and the boundary between colours is indistinct. Eight classes were considered to be separated in the image (m=8). A certain grade of grey was assigned to each of the eight classes to visualize the obtained results. 30% of the image pixels were randomly selected to generate a training set (Fig. 10b).
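
The data preparation described above can be sketched as follows (Python/NumPy; a hypothetical reconstruction of the preprocessing, assuming the image is available as a 512×512×3 RGB array, not the authors’ code): the image is flattened into 262144 three-dimensional points and 30% of them are sampled at random to form the training set.

import numpy as np

def make_training_set(image, fraction=0.3, seed=0):
    """Flatten an RGB image into 3-D colour points and sample a training subset."""
    points = image.reshape(-1, 3).astype(float) / 255.0   # 262144 points for a 512x512 image
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), int(fraction * len(points)), replace=False)
    return points[idx]

# Hypothetical stand-in for the 512x512 Lenna image (random pixels here)
image = np.random.randint(0, 256, size=(512, 512, 3), dtype=np.uint8)
train = make_training_set(image)
print(train.shape)     # (78643, 3): 30% of the pixels as (R, G, B) points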

The self-learning fuzzy spiking neural network settings were set as follows (most settings were taken from (Berredo, 2005)): time step 0.1 sec, $h=6$, receptive neuron type – crisp, $\theta_{r.n.}=0.1$, $t_{\max}^{[0]}=20$ sec, $\tau_{PSP}=3$ sec, $q=16$, $d^{1}=0$, $d^{16}=15$, minimum value of a synaptic weight 0, maximum value 1, simulation interval length 30 sec, $\eta_w=0.35$, $\alpha=2.3$ sec, $\beta=0.2$, $\nu=5$ sec, $\theta_{s.n.}(0)=9$, $\theta_{s.n.}(K+1)=\theta_{s.n.}(K)+0.3\,\theta_{s.n.}(K)/K_{\max}$, $K_{\max}=3$, neighbourhood function – Gaussian, $\rho(0)=6$, $\gamma=0.5$, $\rho(K)$ calculated by expression (15), fuzzy clustering – probabilistic, $\zeta=2$, defuzzification method – the largest value. Results of image processing produced by the spiking neural network on the 1st and the 3rd epochs are shown in Fig. 10c and Fig. 10d, respectively.

The fuzzy c-means algorithm was also trained on the same training set ($\zeta=2$, defuzzification method – the largest value). Results of image processing produced by the algorithm on the 3rd and the 30th epochs are shown in Fig. 10e and Fig. 10f, respectively.

Thus, the self-learning fuzzy spiking neural network requires a number of learning epochs that is an order of magnitude smaller than the conventional fuzzy c-means algorithm requires.

Figure 10. The Lenna image processing: a) Original image; b) Training set (30% of the original image); c) The 1st epoch of self-learning fuzzy spiking neural network learning; d) The 3rd epoch of self-learning fuzzy spiking neural network learning; e) The 3rd epoch of fuzzy c-means learning; f) The 30th epoch of fuzzy c-means learning

8. Conclusion

Spiking neural networks are more realistic models of real neuronal systems than artificial neural networks of the previous generations. Nevertheless, they can be described, as shown in the earlier sections, in a strict, technically plausible way based on the Laplace transform. A spiking neural network designed in terms of transfer functions is an analog-digital nonlinear dynamic system that conveys and processes information both in pulse-position and in continuous-time form. Such a precise formal description of the spiking neural network architecture and functioning provides researchers and engineers with a framework for constructing hardware implementations of various spiking neural networks for image processing of different levels of complexity.

Networks of spiking neurons introduced a new, biologically more plausible essence of information processing and gave rise to a new, computationally more powerful generation of computational intelligence hybrid systems. In the present chapter, a self-learning fuzzy spiking neural network that combines a spiking neural network with fuzzy probabilistic and fuzzy possibilistic clustering algorithms was described as an example of such hybrid systems. It was shown that the use of hybrid systems constructed on a spiking neural network basis makes it possible to reduce the number of learning epochs as compared to conventional fuzzy clustering algorithms. In addition, a way to ‘fuzzify’ the spiking neural network architecture was demonstrated by treating a pool of receptive neurons as a linguistic variable.

Although the temporal Hebbian learning algorithm of the spiking neural network is biologically plausible, an even more realistic learning algorithm based on the ‘Winner-Takes-More’ rule was proposed as an improvement.

Both the theoretical innovations and the simulation experiment presented in this chapter confirm that the self-learning spiking neural network and the hybrid systems developed on its basis are a powerful and efficient advanced tool of computational intelligence for data clustering and, particularly, for image processing.

References

1 - R. C. de Berredo, 2005 A review of spiking neuron models and applications, M.Sc. Dissertation, Pontifical Catholic University of Minas Gerais, Belo Horizonte, http://www.loria.fr/~falex/pub/Teaching/disserta.pdf.
2 - J. C. Bezdek, 1981 Pattern Recognition with Fuzzy Objective Function Algorithms, Plenum Press, 0-30640-671-3 York.
3 - J. C. Bezdek, J. Keller, R. Krishnapuram, N. R. Pal, 2005 Fuzzy Models and Algorithms for Pattern Recognition and Image Processing, Springer, 0-38724-515-4 York.
4 - Ye. Bodyanskiy, O. Rudenko, 2004 Artificial Neural Networks: Architectures, Learning, Applications, Teletekh, 9-66954-162-2in Russian).
5 - Ye. Bodyanskiy, A. Dolotov, 2008a Image processing using self-learning fuzzy spiking neural network in the presence of overlapping classes, Proceedings of the 11th International Biennial Baltic Electronics Conference “BEC 2008”, 213 216 , 978-1-42442-060-5 Tallinn/Laulasmaa, Estonia, October 2008, Tallinn University of Technology, Tallinn.
6 - Ye. Bodyanskiy, A. Dolotov, 2008b A self-learning spiking neural network for fuzzy clustering task, Scientific Proceedings of Riga Technical University, Computer Science Series: Information Technology and Management Science, 36 27 33 , 1407-7493
7 - Ye. Bodyanskiy, A. Dolotov, 2009 Hebbian learning of fuzzy spiking neural network based on’Winner-Takes-More’ rule, Proceedings of the 11th International Conference on Science and Technology “System Analysis and Information Technologies”, 271 978-9-66215-327-9 Kyiv, May 2009, ESC “IASA” NTUU “KPI”, Kyiv.
8 - Ye. Bodyanskiy, A. Dolotov, I. Pliss, 2008a Fuzzy receptive neurons using in self-learning spiking neural network, Proceedings of International Scientific and Technical Conference “Automation: Problems, Ideas, Solutions”, 12 14 , 978-9-66296-032-7 Sevastopil, September 2008, Publishing House of SevNTU, Sevastopil (in Russian).
9 - Ye. Bodyanskiy, A. Dolotov, I. Pliss, Ye. Viktorov, 2008b Fuzzy possibilistic clustering using self-learning spiking neural network, Wissenschaftliche Berichte der Hochschule Zittau/Goerlitz, 100 53 60 , 3-98080-899-9
10 - Ye. Bodyanskiy, A. Dolotov, I. Pliss, 2009 Self-learning fuzzy spiking neural network as a nonlinear pulse-position threshold detection dynamic system based on second-order critically damped response units, In: International Book Series “Information Science and Computing”: 9 Intelligent Processing, K. Markov, P. Stanchev, K. Ivanova, I. Mitov, (Eds.), 63 70 , Institute of Information Theories and Applications FOI ITHEA, 1313-0455 Sofia.
11 - S. M. Bohte, J. N. Kok, H. La Poutre, 2002 Unsupervised clustering with spiking neurons by sparse temporal coding and multi-layer RBF networks, IEEE Transactions on Neural Networks, 13 426 435 , 1045-9227
12 - M. Cottrell, J. C. Fort, 1986 A stochastic model of retinotopy: a self-organizing process, Biological Cybernetics, 53 6 405 411 , 0340-1200
13 - G. Cybenko, 1989 Approximation by superposition of a sigmoidal function, Mathematics of Control, Signals, and Systems, 2 303 314 , 0932-4194
14 - P. Dayan, L. F. Abbott, 2001 Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems, MIT Press, 0-26204-199-5
15 - R. C. Dorf, R. H. Bishop, 1995 Modern Control Systems, Addison-Wesley, 0-20184-559-8
16 - A. A. Feldbaum, A. G. Butkovskiy, 1971 Methods of Automatic Control Theory, Nauka, Moscow (in Russian).
17 - W. Gerstner, R. Kempter, J. L. van Hemmen, H. Wagner, 1996 A neuronal learning rule for sub-millisecond temporal coding, Nature, 383 76 78 , 0028-0836
18 - W. Gerstner, W. M. Kistler, 2002 Spiking Neuron Models: Single Neurons, Populations, Plasticity, Cambridge University Press, 0-52181-384-0
19 - G. C. Goodwin, S. F. Graebe, M. E. Salgado, 2001 Control System Design, Prentice Hall, 0-13958-653-9 Saddle Reiver.
20 - S. Haykin, 1999 Neural Networks: A Comprehensive Foundation, Prentice Hall, 0-13273-350-1 Saddle River.
21 - J. J. Hopfield, 1995 Pattern recognition computation using action potential timing for stimulus representation, Nature, 376 33 36 , 0028-0836
22 - K. Hornik, M. Stinchombe, H. White, 1989 Multilayer feedforward networks are universal approximators, Neural Networks, 2 5 359 366 , 0893-6080
23 - J.-S. R. Jang, C.-T. Sun, E. Mizutani, 1997 Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence, Prentice Hall, 0-13261-066-3 Saddle River.
24 - T. Kohonen, 1995 Self-Organizing Maps, Springer, 3-54058-600-8
25 - M. Kraft, A. Kasinski, F. Ponulak, 2006 Design of the spiking neuron having learning capabilities based on FPGA circuits, Proceedings of the 3rd IFAC Workshop on Discrete-Event System Design “DESDes’06”, 301 306 , 8-37481-035-1 September 2006, University of Zielona Gora Press, Zielona Gora.
26 - T. Lindblad, J. M. Kinser, 2005 Image Processing Using Pulse-Coupled Neural Networks, Springer, 3-54028-293-9
27 - W. Maass, 1997a Fast sigmoidal networks via spiking neurons, Neural Computation, 9 2 279 304 , 0899-7667
28 - W. Maass, 1997b Networks of spiking neurons: the third generation of neural network models, Neural Networks, 10 9 1659 1671 , 0893-6080
29 - W. Maass, C. M. Bishop, 1998 Pulsed Neural Networks, MIT Press, 0-26213-350-4
30 - C. von der Malsburg, 1994 The correlation theory of brain function, In: Models of Neural Networks II: Temporal Aspects of Coding and Information Processing in Biological Systems, E. Domany, J.L. van Hemmen, K. Schulten, (Eds.), 95 119 , Springer-Verlag, 978-0-38794-362-6 New York.
31 - W. S. McCulloch, W. A. Pitts, 1943 A logical calculus of the ideas immanent in nervous activity, Bulletin of Mathematical Biophysics, 5 115 133 , 0092-8240
32 - B. Meftah, A. Benyettou, O. Lezoray, Xiang. W. Qing, 2008 Image clustering with spiking neuron network, Proceedings of the International Joint Conference on Neural Networks “IJCNN 2008”, part of the IEEE World Congress on Computational Intelligence “WCCI 2008”, 681 685 , 978-1-42441-821-3 Hong Kong, June 2008, IEEE Press, Piscataway.
33 - M. L. Minsky, S. A. Papert, 1969 Perceptrons: An Introduction to Computational Geometry, MIT Press, 0-26263-022-2
34 - T. Natschlaeger, B. Ruf, 1998 Spatial and temporal patterns analysis via spiking neurons, Network: Computation in Neural Systems, 9 3 319 332 , 1361-6536
35 - C. L. Phillips, R. D. Harbor, 2000 Feedback Control Systems, Prentice Hall, 0-13949-090-6 Saddle River.
36 - A. Pikovsky, M. Rosenblum, J. Kurths, 2001 Synchronization: A Universal Concept in Nonlinear Sciences, Cambridge University Press, 0-52159-285-2
37 - H. Ritter, K. Schulten, 1986 On the stationary state of Kohonen’s self-organizing sensory mapping, Biological Cybernetics, 54 2 99 106 , 0340-1200
38 - M. Sato-Ilic, L. C. Jain, 2006 Innovations in Fuzzy Clustering: Theory and Applications, Springer, 978-3-54034-356-1 Berlin.
39 - A. Scott, 2002 Neuroscience: A Mathematical Primer, Springer, 0-38795-403-1 York.
40 - T. Schoenauer, S. Atasoy, N. Mehrtash, H. Klar, 2000 Simulation of a digital neuro-chip for spiking neural networks, Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Network “IJCNN 2000”, 4 490 495 , 0-76950-619-4 July 2000, IEEE Computer Society, Los Alamitos.
41 - Ya. Z. Tsypkin, 1984 Relay Control Systems, Cambridge University Press, 0-52124-390-4
42 - USC-SIPI Image Database, the. Lenna 2004 University of Southern California, 4.2.04, Signal & Image Processing Institute, Electrical Engineering Department, http://sipi.usc.edu/database/database.cgi?volume=misc&image=12.