Analog-Digital Self-Learning Fuzzy Spiking Neural Network in Image Processing Problems

Written By

Artem Dolotov and Yevgeniy Bodyanskiy

Published: 01 December 2009

DOI: 10.5772/7061

From the Edited Volume

Image Processing

Edited by Yung-Sheng Chen

1. Introduction

Computational intelligence provides a variety of means that can perform complex image processing rather effectively. Among them, self-learning systems, especially self-learning artificial neural networks (self-organizing maps, ART neural networks, ‘Brain-State-in-a-Box’ neuromodels, etc.) (Haykin, 1999) and fuzzy clustering systems (fuzzy c-means, algorithms of Gustafson-Kessel, Yager-Filev, Klawonn-Hoeppner, etc.) (Bezdek et al., 2005, Sato-Ilic & Jain, 2006), occupy a significant place, as they make it possible to solve a data processing problem in the absence of a priori knowledge about it.

While there are many artificial neural networks that can be successfully used in image processing tasks, the most prominent of them are networks of the new, third generation, commonly known as spiking neural networks (Maass & Bishop, 1998, Gerstner & Kistler, 2002). On the one hand, spiking neural networks are biologically more plausible than neural networks of the previous generations, which is of fundamental theoretical importance for computational intelligence. On the other hand, networks of spiking neurons have proved to be computationally more powerful than conventional neural networks (Maass, 1997b). In addition, complex data processing via artificial neural networks of the second generation is time consuming due to multi-epoch learning; spiking neural networks can perform the same processing tasks much faster, as they require only a few learning epochs (Bohte et al., 2002, Berredo, 2005, Meftah et al., 2008, Lindblad & Kinser, 2005). All these facts cause considerable interest in networks of spiking neurons as a powerful computational intelligence tool for image processing.

Although spiking neural networks are becoming a popular computational intelligence tool for solving various technical problems, in most engineering research works their architecture and functioning are treated in terms of neurophysiology rather than in terms of any technical sciences apparatus. So far, no technically plausible description of spiking neuron functioning has been provided.

In contrast to artificial neural networks, fuzzy logic systems are capable of performing accurate and efficient data processing under a priori and current uncertainty, particularly if the classes to be separated overlap one another. Integrating artificial neural networks and fuzzy systems allows the capabilities of both to be combined in a synergetic way (Jang et al., 1997), thus producing hybrid intelligent systems that achieve high performance and reliability in solving real-life problems, particularly in image processing. Obviously, designing hybrid intelligent systems based on the new generation of artificial neural networks is of both practical and theoretical interest.

In the present chapter, an analog-digital self-learning fuzzy spiking neural network is proposed that belongs to a new type of computational intelligence hybrid systems combining the computational capabilities of spiking neurons with the tolerance of fuzzy systems for uncertainty. It is demonstrated that both the fuzzy probabilistic and the fuzzy possibilistic approaches can be implemented on a spiking neural network basis. The spiking neural network is treated in terms of the well-known and widely used apparatus of classical automatic control theory based on the Laplace transform. It is shown that a spiking neuron synapse is nothing other than a second-order critically damped response unit, and a spiking neuron soma is a kind of threshold detection system. An improved unsupervised learning algorithm for the proposed neural network based on the ‘Winner-Takes-More’ rule is introduced. Capabilities of the neural network in solving image processing problems are investigated. From a theoretical point of view, the proposed neural network is another step in the evolution of artificial neural networks theory as a part of the computational intelligence paradigm.

2. Formal models of spiking neurons

The features of a biological neuron that are significant for the discussion that follows are sketched in Fig. 1 (Dayan & Abbott, 2001, Scott, 2002). As illustrated, a neuron includes synapses, a dendritic tree, a soma, and an axon with its terminals. A synapse connects axonal terminals of one neuron with dendrites of another neuron. The soma processes incoming information and transmits it through the axon and axonal terminals to synapses of subsequent neurons. Neurons communicate with one another by nerve pulses (action potentials, spikes).

Figure 1.

Biological neuron

Neuron behaviour can be briefly described in the following way (Fig. 2). A spike arriving at a synapse from a presynaptic neuron generates a postsynaptic potential (either excitatory or inhibitory, depending on the synapse type). The postsynaptic potential reaches the neuron soma through a dendrite and either increases the membrane potential or decreases it. The neuron soma accumulates all postsynaptic potentials incoming from different synapses. When the membrane potential exceeds the firing threshold, the neuron fires and emits an outgoing spike that travels through the axon to postsynaptic neurons. Once the neuron has fired, its soma produces a spike after-potential: the membrane potential drops steeply below the rest potential and then gradually ascends back to the rest potential. The period when the membrane potential is below the rest potential is called the refractory period. Within this period, the appearance of another spike is unlikely. If the firing threshold is not reached after the arrival of a postsynaptic potential, the membrane potential gradually descends to the rest potential until another postsynaptic potential arrives.

Figure 2.

Biological neuron behaviour: a) Dynamics of membrane potential $u(t)$ ($\theta$ is the firing threshold, $u_{rest}(t)$ is the rest potential); b) Incoming spikes; c) Outgoing spike

Traits of any generation of artificial neural networks depend upon the formal model of the biological neuron that is considered within the scope of that generation. Any formal model treats the biological neuron at a certain level of abstraction. It takes into account some details of biological neuron behaviour and features, but disregards others. On the one hand, the prescribed complexity level of the formal model sets the computational and performance properties of artificial neural networks originated by that model. On the other hand, the chosen level of abstraction of the formal model defines how realistic the artificial neural networks are.

Both the first and the second generations of artificial neural networks (Maass, 1997b) rest on the rate model, which neglects the temporal properties of the biological neuron (Maass & Bishop, 1998). One of the essential elements for both generations is the neuron activation function. The rate model based on the threshold activation function (McCulloch & Pitts, 1943) gave birth to the first generation of artificial neural networks. Though such networks were capable of performing some elementary logic functions, their computational capabilities were very limited (Minsky & Papert, 1969). Replacing the threshold activation function with a continuous one resulted in the appearance of the second generation, which turned out to be significantly more powerful than networks of the previous generation (Cybenko, 1989, Hornik et al., 1989). Nevertheless, neurons of the second generation are even farther from real biological neurons than the first generation neurons, since they ignore the soma firing mechanism entirely (Gerstner & Kistler, 2002). This gap is avoided in threshold-fire models (Maass & Bishop, 1998). One of these models, namely, the leaky integrate-and-fire model (Maass & Bishop, 1998, Gerstner & Kistler, 2002), is the basis for the third generation of artificial neural networks.

The leaky integrate-and-fire model is one of the simplest and best-known formal models of the biological neuron used in different areas of neuroscience. It captures the neuron behaviour described above except for the neuron refractoriness. The model assumes that on firing, the membrane potential drops to the rest potential, not below it.

The spike-response model (Maass & Bishop, 1998, Gerstner & Kistler, 2002), another threshold-fire model, captures biological neuron behaviour more accurately. Besides postsynaptic potential accumulation and spike firing, it also models the neuron refractoriness. This model will be used in the subsequent discussion.

It makes sense to mention here that computational neuroscience and computational intelligence sometimes understand spiking neurons in different ways. In computational neuroscience, a spiking neuron is any model of a biological neuron that transmits and processes information by spike trains. Within the scope of computational intelligence, a spiking neuron is usually the leaky integrate-and-fire model. This results from the fact that the self-learning properties of spiking neurons are caused by the capability of any threshold-fire model to detect coincidences in the input signal. Since the leaky integrate-and-fire model is the simplest of the threshold-fire models, there is no reason to use more complicated ones. Obviously, if a more complicated model reveals particular properties that are useful for solving technical problems, the concept of spiking neurons in computational intelligence will be extended.

3. Self-learning spiking neural network

3.1. Introduction

The ability of spiking neurons to respond to an incoming signal selectively was originally discovered by J. Hopfield in 1995 (Hopfield, 1995). He found that the spiking neuron soma behaved similarly to a radial basis function: the neuron fired the earlier, the higher the degree of coincidence of the incoming spikes was; if the degree was sufficiently low, the neuron did not fire at all. Spiking neuron synapses, in turn, appeared to act as a spike pattern storing unit: one could get a spiking neuron to fire in response to a certain spike pattern by adjusting the synaptic time delays so that they evened out (in the temporal sense) the incoming signal and made it reach the neuron soma simultaneously. The spike pattern encoded in the synaptic time delays of a neuron was later called the center of the spiking neuron. It is worth noting here that synchronization phenomena are of primary importance in nature (Pikovsky et al., 2001), particularly in brain functioning (Malsburg, 1994).

The discovered capabilities of spiking neurons provided the basis for constructing self-learning networks of spiking neurons. The original architecture of a self-learning spiking neural network and its learning algorithm, namely, a temporal Hebbian rule, were introduced in (Natschlaeger & Ruf, 1998). The proposed self-learning network was able to separate clusters as long as their number was not greater than the dimensionality of the input signal. If the number of clusters exceeded the number of input signal dimensions, spiking neural network performance decreased. This drawback was overcome by using population coding of the incoming signal based on pools of receptive neurons in the first hidden layer of the network (Bohte et al., 2002). Such a spiking neural network was shown to be considerably powerful and significantly fast in solving real-life problems. Henceforward we will use this network as a basis for further improvements and for designing hybrid architectures.

3.2. Architecture

The self-learning spiking neural network architecture is shown in Fig. 3. As illustrated, it is a heterogeneous two-layered feed-forward neural network with lateral connections in the second hidden layer.

The first hidden layer consists of pools of receptive neurons and performs transformation of the input signal. It encodes an $(n \times 1)$-dimensional sampled input pattern $x(k)$ (here $n$ is the dimensionality of the input space, $k = \overline{1,N}$ is the pattern number, $N$ is the number of patterns in the incoming set) into an $(hn \times 1)$-dimensional vector of input spikes $t^{[0]}(x(k))$ (here $h$ is the number of receptive neurons in a pool), where each spike is defined by its firing time.

Spiking neurons form the second hidden layer of the network. They are connected with the receptive neurons by multiple synapses, where the incoming vector of spikes transforms into postsynaptic potentials. The number of spiking neurons in the second hidden layer is set to be equal to the number of clusters to be detected. Each spiking neuron corresponds to a certain cluster. The neuron that has fired to the input pattern defines the cluster that the pattern belongs to. Thus, the second hidden layer takes the $(hn \times 1)$-dimensional vector of input spikes $t^{[0]}(x(k))$ and outputs an $(m \times 1)$-dimensional vector of outgoing spikes $t^{[1]}(x(k))$ that defines the membership of the input pattern $x(k)$.

This is the basic architecture and behaviour of self-learning spiking neural network. The detailed architecture is stated below.

3.3. Population coding and receptive neurons

The first hidden layer is constructed to perform population coding of the input signal. It acts in such a manner that each dimensional component $x_i(k)$, $i = \overline{1,n}$, of the input signal $x(k)$ is processed by a pool of $h$ receptive neurons $RN_{li}$, $l = \overline{1,h}$. Obviously, in the general case there can be a different number of receptive neurons $h_i$ in a pool for each dimensional component. For the sake of simplicity, we will consider here that the number of neurons is equal for all pools.

As a rule, the activation functions of receptive neurons within a pool are bell-shaped (usually Gaussians), shifted, overlapped, of different widths, and have a dead zone. Generally, the firing time of a spike emitted by a receptive neuron $RN_{li}$ upon an incoming signal $x_i(k)$ lies in the set $\{-1\} \cup [0, t_{max}^{[0]}]$ (where $[0, t_{max}^{[0]}]$ is referred to as the coding interval, and $-1$ marks a neuron that does not fire) and is described by the following expression:

$$t_{li}^{[0]}(x_i(k)) = \begin{cases} \left\lfloor t_{max}^{[0]}\left(1 - \psi\left(\dfrac{|x_i(k) - c_{li}^{[0]}|}{\sigma_{li}}\right)\right)\right\rfloor, & \psi\left(\dfrac{|x_i(k) - c_{li}^{[0]}|}{\sigma_{li}}\right) \geq \theta_{r.n.}, \\[2ex] -1, & \psi\left(\dfrac{|x_i(k) - c_{li}^{[0]}|}{\sigma_{li}}\right) < \theta_{r.n.} \end{cases} \tag{1}$$

Figure 3.

Self-learning spiking neural network (spiking neurons are depicted the way to stress they act in integrate-and-fire manner)

where $\lfloor\cdot\rfloor$ is the floor function, $\psi(\cdot)$, $c_{li}^{[0]}$, $\sigma_{li}$ and $\theta_{r.n.}$ are the receptive neuron's activation function, center, width and dead zone, respectively ($r.n.$ in the last parameter means 'receptive neuron'), and $-1$ indicates that the neuron does not fire. An example of population coding is depicted in Fig. 4. It is easily seen that the closer $x_i(k)$ is to the center $c_{li}^{[0]}$ of receptive neuron $RN_{li}$, the earlier the neuron emits the spike $t_{li}^{[0]}(x_i(k))$.

Figure 4.

An example of population coding. Incoming signal $x_i(k)$ fires receptive neurons $RN_{2,i}$ and $RN_{3,i}$. It is assumed here that the width of the activation function is the same for all receptive neurons within the pool.

In this work, we used a Gaussian as the activation function of the receptive neurons:

$$\psi\left(\frac{|x_i(k) - c_{li}^{[0]}|}{\sigma_{li}}\right) = \exp\left(-\frac{(x_i(k) - c_{li}^{[0]})^2}{2\sigma_{li}^2}\right) \tag{2}$$

There are several ways to set the widths and centers of receptive neurons within a pool. As a rule, activation functions are of two types, either 'narrow' or 'wide'. Centers of each width type of activation function are calculated in different ways, but in either case they cover the data range uniformly. More details can be found in (Bohte et al., 2002).
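To make the coding scheme concrete, here is a minimal Python sketch of population coding according to (1) and (2); the function and parameter names, as well as the numeric values, are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def encode_population(x_i, centers, widths, t_max=20.0, theta_rn=0.1):
    """Encode a scalar input x_i into firing times of a pool of receptive neurons.

    Returns an array of firing times; -1 marks a neuron that does not fire
    (its activation falls into the dead zone theta_rn).
    """
    psi = np.exp(-((x_i - centers) ** 2) / (2.0 * widths ** 2))  # Gaussian activation (2)
    times = np.floor(t_max * (1.0 - psi))                        # earlier spike for higher activation (1)
    times[psi < theta_rn] = -1.0                                 # dead zone: neuron stays silent
    return times

# Hypothetical pool of h = 6 receptive neurons covering the range [0, 1]
centers = np.linspace(0.0, 1.0, 6)
widths = np.full(6, 0.15)
print(encode_population(0.37, centers, widths))
```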

3.4. Spiking neurons layer

A spiking neuron is considered to be formed of two constituents: a synapse and a soma.

As mentioned above, the synapses between receptive neurons and spiking neurons are multiple structures. As shown in Fig. 5, a multiple synapse $MS_{jli}$ consists of a set of $q$ subsynapses with different time delays $d^p$ ($d^p - d^{p-1} > 0$, $d^q - d^1 \geq t_{max}^{[0]}$) and adjustable weights $w_{jli}^p$ (here $p = \overline{1,q}$). It should be noted that the number of subsynapses within a multiple synapse is fixed for the whole network. Having received a spike $t_{li}^{[0]}(x_i(k))$ from the $li$-th receptive neuron, the $p$-th subsynapse of the $j$-th spiking neuron produces a delayed weighted postsynaptic potential

$$u_{jli}^p(t) = w_{jli}^p\, \varepsilon_{jli}^p(t) = w_{jli}^p\, \varepsilon\big(t - (t_{li}^{[0]}(x_i(k)) + d^p)\big) \tag{3}$$

where $\varepsilon(\cdot)$ is a spike-response function, usually described by the expression (Natschlaeger & Ruf, 1998)

$$\varepsilon\big(t - (t_{li}^{[0]}(x_i(k)) + d^p)\big) = \frac{t - (t_{li}^{[0]}(x_i(k)) + d^p)}{\tau_{PSP}} \exp\left(1 - \frac{t - (t_{li}^{[0]}(x_i(k)) + d^p)}{\tau_{PSP}}\right) H\big(t - (t_{li}^{[0]}(x_i(k)) + d^p)\big) \tag{4}$$

where $\tau_{PSP}$ is the membrane potential decay time constant, whose value can be obtained empirically ($PSP$ means 'postsynaptic potential'), and $H(\cdot)$ is the Heaviside step function. The output of the multiple synapse $MS_{jli}$ forms the total postsynaptic potential

$$u_{jli}(t) = \sum_{p=1}^{q} u_{jli}^p(t) \tag{5}$$

Figure 5.

Multiple synapse

Each incoming total postsynaptic potential contributes to the membrane potential of spiking neuron $SN_j$ as follows:

$$u_j(t) = \sum_{i=1}^{n} \sum_{l=1}^{h} u_{jli}(t) \tag{6}$$

Spiking neuron $SN_j$ generates at most one outgoing spike $t_j^{[1]}(x(k))$ during a simulation interval (the presentation of an input pattern $x(k)$), and fires at the instant the membrane potential reaches the firing threshold $\theta_{s.n.}$ ($s.n.$ means here 'spiking neuron'). After the neuron has fired, the membrane potential is reset to the rest value $u_{rest}$ (usually 0) until the next input pattern is presented.

It is easily seen that the described behaviour of the spiking neuron corresponds to that of the leaky integrate-and-fire model.
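The following sketch (in Python, with illustrative parameter values) accumulates the delayed weighted spike-response functions (3)-(6) on a discrete time grid and reports the first threshold crossing as the neuron firing time; it is a rough numerical illustration of the leaky integrate-and-fire behaviour described above, not a reference implementation.

```python
import numpy as np

def spike_response(t, tau_psp=3.0):
    """Spike-response function (4): (t/tau) * exp(1 - t/tau) * H(t)."""
    return np.where(t > 0, (t / tau_psp) * np.exp(1.0 - t / tau_psp), 0.0)

def neuron_firing_time(input_times, delays, weights, theta_sn=9.0,
                       t_sim=30.0, dt=0.1, tau_psp=3.0):
    """Membrane potential (6) of one spiking neuron and its first firing time.

    input_times : firing times of the receptive neurons (-1 means 'no spike')
    delays      : subsynaptic delays d^p, shape (q,)
    weights     : weights w^p for every (input, subsynapse) pair, shape (len(input_times), q)
    """
    t = np.arange(0.0, t_sim, dt)
    u = np.zeros_like(t)
    for i, t_in in enumerate(input_times):
        if t_in < 0:                       # silent receptive neuron
            continue
        for p, d in enumerate(delays):
            u += weights[i, p] * spike_response(t - (t_in + d), tau_psp)
    fired = np.nonzero(u >= theta_sn)[0]
    return t[fired[0]] if fired.size else None   # None: the neuron did not fire

# Hypothetical example: 3 input spikes, q = 16 delays 0..15, random weights in [0, 1]
rng = np.random.default_rng(0)
print(neuron_firing_time(np.array([2.0, 5.0, -1.0]),
                         np.arange(16.0), rng.random((3, 16))))
```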

Spiking neurons are linked by lateral inhibitory connections that prevent all other neurons from firing after the first one has fired. Thus, any input pattern makes only one spiking neuron fire, that is, only one component of the vector of outgoing spikes has a non-negative value. There can be cases when a few spiking neurons fire simultaneously for an input pattern. Such cases are rare enough, and their appearance depends directly on the initial synaptic weights distribution.

3.5. Crisp data clustering

As mentioned above, a spiking neuron acts similarly to a radial basis function, and its response depends on the degree of coincidence of the input. The notion of a spiking neuron center was introduced to describe such neuron behaviour in a convenient way (Natschlaeger & Ruf, 1998). In the general case, it is considered to possess the following property: the closer the input pattern is to the neuron's center, the earlier the output spike fires. Hence, a spiking neuron firing time reflects the similarity (Natschlaeger & Ruf, 1998) (or distance (Bohte et al., 2002)) between the input pattern and the neuron center. The degree of coincidence is utilized here as a similarity (distance) measure.

The center of a spiking neuron is encoded in its synaptic time delays. They produce a coincident output (and this, in its turn, makes the soma fire at the earliest possible time) if the incoming pattern is similar to the encoded one. Thus, the learned spiking neuron can respond selectively to the input set of patterns. Data clustering in the described neural network rests on this property of the spiking neuron. An input pattern fires the neuron whose center is the most similar (the closest) to it, and the fired spike prevents the rest of the neurons from firing through the lateral inhibitory connections. This way, the self-learning spiking neural network performs cluster separation if the classes to be detected do not overlap one another.

One can readily see that the unsupervised pattern classification procedure of the spiking neurons layer is identical to that of self-organizing maps (Kohonen, 1995).

4. Spiking neural network learning algorithms

4.1. Winner-takes-all

The purpose of an unsupervised learning algorithm of a spiking neural network is to adjust the centers of the spiking neurons so as to make each of them correspond to the centroid of a certain data cluster. Such a learning algorithm was introduced on the basis of two learning rules, namely, the 'Winner-Takes-All' rule and the temporal Hebbian rule (Natschlaeger & Ruf, 1998, Gerstner et al., 1996). The first one defines which neuron should be updated, and the second one defines how it should be updated. The algorithm updates neuron centers through synaptic weight adjustment, whereas the synaptic time delays always remain constant. The concept here is that the significance of a given time delay can be changed by varying the corresponding synaptic weight.

Each learning epoch consists of two phases. Competition, the first phase, defines a neuron-winner. Being laterally linked with inhibitory connections, the spiking neurons compete to respond to the pattern. The one whose center is the closest to the pattern wins (and fires). Following the competition phase, weight adjustment takes place. The learning algorithm adjusts the synaptic weights of the neuron-winner to move it closer to the input pattern. It strengthens the weights of those subsynapses which contributed to the neuron-winner's firing (i.e. the subsynapses that produced delayed spikes right before the neuron firing) and weakens the ones which did not contribute (i.e. those whose delayed spikes appeared right after the neuron's firing or long before it). Generally, the learning algorithm can be expressed as

$$w_{jli}^p(K+1) = \begin{cases} w_{jli}^p(K) + \eta_w(K)\, L(\Delta t_{jli}^p), & j = \tilde{j}, \\ w_{jli}^p(K), & j \neq \tilde{j} \end{cases} \tag{7}$$

where $K$ is the current epoch number, $\eta_w(\cdot) > 0$ is the learning rate (while it is constant in (Natschlaeger & Ruf, 1998), it can depend on the epoch number in the general case; $w$ means 'weights'), $L(\cdot)$ is the learning function (Gerstner et al., 1996), $\tilde{j}$ is the number of the neuron that has won on the current epoch, and $\Delta t_{jli}^p$ is the time interval between the delayed spike $t_{li}^{[0]}(x_i(k)) + d^p$ produced by the $p$-th subsynapse of the $li$-th synapse and the spiking neuron firing time $t_j^{[1]}(x(k))$:

$$\Delta t_{jli}^p = t_{li}^{[0]}(x_i(k)) + d^p - t_j^{[1]}(x(k)) \tag{8}$$

As a rule, the learning function has the following form (Berredo, 2005):

$$L(\Delta t_{jli}^p) = (1+\beta)\exp\left(-\frac{(\Delta t_{jli}^p - \alpha)^2}{2(\kappa - 1)}\right) - \beta \tag{9}$$

$$\kappa = 1 - \frac{\nu^2}{2\ln\dfrac{\beta}{1+\beta}} \tag{10}$$

where $\alpha > 0$, $\beta > 0$, and $\nu$ are the shape parameters of the learning function $L(\cdot)$ that can be obtained empirically (Berredo, 2005, Natschlaeger & Ruf, 1998). The learning function and its shape parameters are depicted in Fig. 6. The effect of the shape parameters on the results of information processing performed by the spiking neural network can be found in (Meftah et al., 2008).

Upon completion of the learning stage, the center of a spiking neuron represents the centroid of a certain data cluster, and the spiking neural network can successfully perform unsupervised classification of the input set.
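A possible Python sketch of the 'Winner-Takes-All' update (7)-(10) is given below; the array shapes, default parameter values and the weight clipping to [0, 1] are assumptions made for illustration.

```python
import numpy as np

def learning_function(dt, alpha=2.3, beta=0.2, nu=5.0):
    """Learning function (9) with kappa computed by (10)."""
    kappa = 1.0 - nu ** 2 / (2.0 * np.log(beta / (1.0 + beta)))
    return (1.0 + beta) * np.exp(-(dt - alpha) ** 2 / (2.0 * (kappa - 1.0))) - beta

def wta_update(weights, input_times, delays, winner, t_winner, eta=0.35,
               w_min=0.0, w_max=1.0):
    """Update only the winner's weights (7); other neurons stay unchanged.

    weights shape: (m neurons, n*h inputs, q subsynapses)."""
    dt = input_times[:, None] + delays[None, :] - t_winner       # time differences (8)
    weights[winner] += eta * learning_function(dt)
    np.clip(weights[winner], w_min, w_max, out=weights[winner])  # keep weights in the allowed range
    return weights
```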

4.2. Winner-takes-more

The learning algorithm (7) updates only the neuron-winner on each epoch and disregards the other neurons. It seems more natural to update not only the spiking neuron-winner, but also its neighbours (Bodyanskiy & Dolotov, 2009). This approach is known as the 'Winner-Takes-More'

Figure 6.

Learning function $L(\cdot)$

rule. It implies that there is a cooperation phase before weight adjustment. The neuron-winner determines a local region of topological neighbourhood on each learning epoch. Within this region, the neuron-winner fires along with its neighbours, and the closer a neighbour is to the winner, the more significantly its weights are changed. The topological region is represented by the neighbourhood function $\varphi(|\Delta t_{j\tilde{j}}|)$ that depends on the difference $\Delta t_{j\tilde{j}}$ between the neuron-winner firing time $t_{\tilde{j}}^{[1]}(x(k))$ and the neighbour firing time $t_j^{[1]}(x(k))$ (the distance between the neurons in the temporal sense) and on a parameter that defines the effective width of the region. As a rule, $\varphi(\cdot)$ is a kernel function that is symmetric about its maximum at the point $\Delta t_{j\tilde{j}} = 0$. It reaches unity at that point and monotonically decreases as $\Delta t_{j\tilde{j}}$ tends to infinity. The functions most frequently used as the neighbourhood function are the Gaussian, the paraboloid, the Mexican Hat, and many others (Bodyanskiy & Rudenko, 2004).

For self-learning spiking neural network, the learning algorithm based on ‘Winner-Takes-More’ rule can be expressed in the following form (Bodyanskiy & Dolotov, 2009):

$$w_{jli}^p(K+1) = w_{jli}^p(K) + \eta_w(K)\, \varphi(|\Delta t_{j\tilde{j}}|)\, L(\Delta t_{jli}^p) \tag{11}$$

where the temporal distance $\Delta t_{j\tilde{j}}$ is

$$\Delta t_{j\tilde{j}} = t_{\tilde{j}}^{[1]}(x(k)) - t_j^{[1]}(x(k)) \tag{12}$$

Obviously, expression (11) is a generalization of (7).

Analysis of competitive unsupervised learning convergence showed that the width parameter of the neighbourhood function should decrease during synaptic weight adjustment (Cottrell & Fort, 1986). For the Gaussian neighbourhood function

$$\varphi(|\Delta t_{j\tilde{j}}|, K) = \exp\left(-\frac{(\Delta t_{j\tilde{j}})^2}{2\rho^2(K)}\right) \tag{13}$$

the width parameter $\rho$ can be adjusted as follows (Ritter & Schulten, 1986):

$$\rho(K) = \rho(0)\exp\left(-\frac{K}{\gamma}\right) \tag{14}$$

where $\gamma > 0$ is a scalar that determines the rate of the neuron-winner's effect on its neighbours.

It is worth noting that exponential decrease of the width parameter can be achieved by applying a simpler expression instead of (14) (Bodyanskiy & Rudenko, 2004):

$$\rho(K) = \gamma\,\rho(K-1), \quad 0 < \gamma < 1 \tag{15}$$

The learning algorithm (11) requires a modification of the self-learning spiking neural network architecture. Lateral inhibitory connections in the second hidden layer should be replaced with excitatory ones during the network learning stage in order to implement the 'Winner-Takes-More' rule.

In the following sections, it will be shown that the learning algorithm based on the 'Winner-Takes-More' rule is more natural for learning the fuzzy spiking neural network than the one based on the 'Winner-Takes-All' rule.
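A sketch of the 'Winner-Takes-More' update (11)-(15) follows; it reuses the learning_function from the previous sketch, and again all parameter values and array shapes are illustrative assumptions.

```python
import numpy as np

def wtm_update(weights, input_times, delays, firing_times, winner,
               rho, eta=0.35, gamma=0.5, w_min=0.0, w_max=1.0):
    """'Winner-Takes-More' weight update (11) with Gaussian neighbourhood (13).

    firing_times : firing times of all m spiking neurons for the current pattern
    rho          : current neighbourhood width; the shrunk value is returned (15)
    learning_function : the function defined for (9) in the previous sketch
    """
    dt_in = input_times[:, None] + delays[None, :]                # delayed input spikes
    for j, t_j in enumerate(firing_times):
        dt_jw = firing_times[winner] - t_j                        # temporal distance (12)
        phi = np.exp(-dt_jw ** 2 / (2.0 * rho ** 2))              # neighbourhood function (13)
        weights[j] += eta * phi * learning_function(dt_in - t_j)  # generalizes (7)
        np.clip(weights[j], w_min, w_max, out=weights[j])
    return weights, gamma * rho                                   # exponential width decay (15)
```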

5. Spiking neural network as an analog-digital system

5.1. Introduction

Hardware implementations of spiking neural networks have demonstrated fast processing abilities that made it possible to apply such systems in real-life applications where processing speed is a rather critical parameter (Maass, 1997a, Maass & Bishop, 1998, Schoenauer et al., 2000, Kraft et al., 2006). From a theoretical point of view, the current research works on spiking neuron hardware implementation are very particular; they lack a technically plausible description on a general ground. In this section, we consider a spiking neuron as a processing system of classical automatic control theory (Feldbaum & Butkovskiy, 1971, Dorf & Bishop, 1995, Phillips & Harbor, 2000, Goodwin et al., 2001). Spiking neuron functioning is described in terms of the Laplace transform. Having such a general description of a spiking neuron, one can derive various hardware implementations of the self-learning spiking neural network for solving specific technical problems, among them realistic complex image processing.

Within the scope of automatic control theory, a spike $t(x(k))$ can be represented by the Dirac delta function $\delta(t - t(x(k)))$. Its Laplace transform is

$$\mathcal{L}\{\delta(t - t(x(k)))\} = e^{-t(x(k))\,s} \tag{16}$$

where $s$ is the Laplace operator. A spiking neuron takes spikes on its input, performs a spike–membrane potential–spike transformation, and produces spikes on its output. Obviously, it is a kind of analog-digital system that processes information in continuous-time form and transmits it in pulse-position form. This is the basic concept for designing the analog-digital architecture of the self-learning spiking neural network. The overall network architecture is depicted in Fig. 7 and is explained in detail in the following subsections.

Figure 7.

Spiking neuron as a nonlinear dynamic system

5.2. Synapse as a second-order critically damped response unit

A multiple synapse $MS_{jli}$ of a spiking neuron $SN_j$ transforms the incoming pulse-position signal $\delta(t - t_{li}^{[0]}(x_i(k)))$ into the continuous-time signal $u_{jli}(t)$. The spike-response function (4), the basis of this transformation, has a form similar to the impulse response of a second-order damped response unit. The transfer function of a second-order damped response unit with unit gain factor is

$$\tilde{G}(s) = \frac{1}{(\tau_1 s + 1)(\tau_2 s + 1)} = \frac{1}{\tau_4^2 s^2 + \tau_3 s + 1} \tag{17}$$

where $\tau_{1,2} = \frac{\tau_3}{2} \pm \sqrt{\frac{\tau_3^2}{4} - \tau_4^2}$, $\tau_1 \geq \tau_2$, $\tau_3 \geq 2\tau_4$, and its impulse response is

$$\tilde{\varepsilon}(t) = \frac{1}{\tau_1 - \tau_2}\left(e^{-\frac{t}{\tau_1}} - e^{-\frac{t}{\tau_2}}\right) \tag{18}$$

Setting $\tau_1 = \tau_2 = \tau_{PSP}$ (which corresponds to a second-order critically damped response unit) and applying l'Hôpital's rule, one obtains

$$\tilde{\varepsilon}(t) = \frac{t}{\tau_{PSP}^2}\, e^{-\frac{t}{\tau_{PSP}}} \tag{19}$$

Comparing spike-response function (4) with the impulse response (19) leads us to the following relationship:

$$\varepsilon(t) = e\,\tau_{PSP}\,\tilde{\varepsilon}(t) \tag{20}$$

Thus, transfer function of the second-order critically damped response unit whose impulse response corresponds to a spike-response function is

$$G_{SRF}(s) = \frac{e\,\tau_{PSP}}{(\tau_{PSP}\, s + 1)^2} \tag{21}$$

where SRF means ‘spike-response function’.
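The correspondence (19)-(21) can be verified numerically; the sketch below (assuming scipy is available) compares the impulse response of the transfer function (21) with the spike-response function (4) and should report a difference close to zero.

```python
import numpy as np
from scipy import signal

tau = 3.0                                             # tau_PSP, illustrative value
t = np.linspace(0.0, 30.0, 301)

# Transfer function e*tau / (tau*s + 1)^2, i.e. G_SRF(s) from (21)
num = [np.e * tau]
den = np.polymul([tau, 1.0], [tau, 1.0])              # (tau*s + 1)^2
_, h = signal.impulse(signal.TransferFunction(num, den), T=t)

eps = (t / tau) * np.exp(1.0 - t / tau)               # spike-response function (4) for t >= 0
print(np.max(np.abs(h - eps)))                        # close to zero up to numerical error
```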

Now we can design the multiple synapse in terms of the Laplace transform (Bodyanskiy et al., 2009). As illustrated in Fig. 7, the multiple synapse $MS_{jli}$ is a dynamic system that consists of a set of subsynapses connected in parallel. Each subsynapse is formed by a group of a time delay, a second-order critically damped response unit, and a gain. As a response to an incoming spike $\delta(t - t_{li}^{[0]}(x_i(k)))$, the subsynapse produces a delayed weighted postsynaptic potential $u_{jli}^p(s)$, and the multiple synapse produces the total postsynaptic potential $u_{jli}(s)$ that arrives at the spiking neuron soma.

Taking into account (21), the transfer function of the $p$-th subsynapse of $MS_{jli}$ takes the following form:

$$U_{jli}^p(s) = w_{jli}^p\, e^{-d^p s}\, G_{SRF}(s) = \frac{\tau_{PSP}\, w_{jli}^p\, e^{1 - d^p s}}{(\tau_{PSP}\, s + 1)^2} \tag{22}$$

and its response to a spike $\delta(t - t_{li}^{[0]}(x_i(k)))$ is

$$u_{jli}^p(s) = e^{-t_{li}^{[0]}(x_i(k))\,s}\, U_{jli}^p(s) = \frac{\tau_{PSP}\, w_{jli}^p\, e^{1 - (t_{li}^{[0]}(x_i(k)) + d^p)\,s}}{(\tau_{PSP}\, s + 1)^2} \tag{23}$$

So finally, considering the transfer function of the multiple synapse $MS_{jli}$

$$U_{jli}(s) = \sum_{p=1}^{q} U_{jli}^p(s) = \sum_{p=1}^{q} \frac{\tau_{PSP}\, w_{jli}^p\, e^{1 - d^p s}}{(\tau_{PSP}\, s + 1)^2} \tag{24}$$

the Laplace transform of the multiple synapse output can be expressed in the following form:

$$u_{jli}(s) = e^{-t_{li}^{[0]}(x_i(k))\,s}\, U_{jli}(s) = \sum_{p=1}^{q} \frac{\tau_{PSP}\, w_{jli}^p\, e^{1 - (t_{li}^{[0]}(x_i(k)) + d^p)\,s}}{(\tau_{PSP}\, s + 1)^2} \tag{25}$$

It is worth noting here that since it is impossible to use the $\delta$-function in practice (Phillips & Harbor, 2000), it is convenient to model it with an impulse of triangular form (Feldbaum & Butkovskiy, 1971), as shown in Fig. 8. Such an impulse is similar to the $\delta$-function under the following condition:

$$\lim_{\Delta \to 0} a(t, \Delta) = \delta(t) \tag{26}$$

Figure 8.

Triangular impulse $a(t, \Delta)$

In this case, the numerator of (21) should be revised to take into account the finite peak of $a(t, \Delta)$ (in contrast to that of the Dirac delta function).

5.3. Soma as a threshold detection unit

The spiking neuron soma performs a transformation that is opposite to that of the synapse. It takes continuous-time signals $u_{jli}(t)$ and produces the pulse-position signal $\delta(t - t_j^{[1]}(x(k)))$. In doing so, the soma responds each time the membrane potential reaches a certain threshold value. In other words, the spiking neuron soma acts as a threshold detection system and consequently it can be designed on the basis of the relay control systems concept (Tsypkin, 1984). In (Bodyanskiy et al., 2009), the mechanisms of threshold detection behaviour and the firing process were described. Here we extend this approach to capture the refractoriness of the spiking neuron.

Threshold detection behaviour of a neuron soma can be modelled by an element relay with dead zone $\theta_{s.n.}$ that is defined by the nonlinear function

$$\Phi_{relay}(u_j(t), \theta_{s.n.}) = \frac{\mathrm{sign}(u_j(t) - \theta_{s.n.}) + 1}{2} \tag{27}$$

where $\mathrm{sign}(\cdot)$ is the signum function. Soma firing can be described by a derivative unit that is connected with the element relay in series and produces a spike each time the relay switches. In order to avoid a negative spike that appears as a response to the relay resetting, a conventional diode is added next to the derivative unit. The diode is defined by the following function:

$$\Phi_{diode}\big(\delta(t - t_{relay}^{[1]})\big) = \delta(t - t_{relay}^{[1]})\, H\big(\delta(t - t_{relay}^{[1]})\big) \tag{28}$$

where $t_{relay}^{[1]}$ is a spike produced by the derivative unit upon the relay switching.

Now we can define the Laplace transform of an outgoing spike $t_j^{[1]}(x(k))$, namely,

$$\mathcal{L}\{\delta(t - t_j^{[1]}(x(k)))\} = e^{-t_j^{[1]}(x(k))\,s} = \mathcal{L}\Big\{\Phi_{diode}\Big(s\,\mathcal{L}\big\{\Phi_{relay}(u_j(t), \theta_{s.n.})\big\}\Big)\Big\} \tag{29}$$

As mentioned above, the leaky integrate-and-fire model disregards the neuron refractoriness. Nevertheless, the refractory period is implemented in the layer of spiking neurons indirectly. The point is that a spiking neuron cannot produce another spike after firing until the end of the simulation interval, since the input pattern is provided only once within the interval. In the analog-digital architecture of the spiking neuron, the refractoriness can be modelled by a feedback circuit. As shown in Fig. 7, it is a group of a time delay, a second-order critically damped response unit, and a gain that are connected in series. The time delay defines the duration of the spike generation period $d_{spike}$ (usually $d_{spike} \to 0$). The second-order critically damped response unit defines the spike after-potential. Generally, the spike after-potential can be represented by a second-order damped response unit, but for the sake of simplicity, we use a critically damped response unit as it can be defined by one parameter only, namely, $\tau_{SAP}$ ($SAP$ means here 'spike after-potential'). This parameter controls the duration of the refractory period. Finally, the gain unit sets the amplitude of the spike after-potential $w_{SAP}$. Obviously, $w_{SAP}$ should be much greater than any synaptic weight.

Thus, transfer function of the feedback circuit is

$$G_{F.B.}(s) = \frac{w_{SAP}\, e^{-d_{spike}\, s}}{(\tau_{SAP}\, s + 1)^2} \tag{30}$$

where F.B. means ‘feedback circuit’, and transfer function of the soma is

$$G_{soma}(s) = \frac{G_{F.F.}}{1 + G_{F.F.}\, G_{F.B.}} \tag{31}$$

where $G_{F.F.}$ is defined by (29) ($F.F.$ means 'feed-forward circuit').

It is easily seen that the functioning of the spiking neuron analog-digital architecture introduced above is similar to the spike-response model.
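A rough discrete-time sketch of the soma feed-forward circuit (27)-(29) is given below; the relay output is differentiated numerically and negative pulses are removed by the 'diode', with all signals and parameter values chosen purely for illustration.

```python
import numpy as np

def soma_output(u, theta_sn, dt):
    """Discrete-time sketch of the soma feed-forward circuit (27)-(29).

    u : sampled membrane potential; returns the pulse train produced by
    relay -> differentiator -> diode (only positive switching edges survive).
    """
    relay = 0.5 * (np.sign(u - theta_sn) + 1.0)       # element relay with dead zone (27)
    pulses = np.diff(relay, prepend=relay[0]) / dt    # derivative unit: pulses at relay switching
    return np.maximum(pulses, 0.0)                    # diode (28): suppress the negative pulse

# Hypothetical membrane potential crossing a threshold of 9 around t = 12 s
t = np.arange(0.0, 30.0, 0.1)
u = 10.0 * np.exp(-((t - 12.0) ** 2) / 8.0)
spikes = soma_output(u, theta_sn=9.0, dt=0.1)
print(t[spikes > 0])                                  # approximate firing time(s)
```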

6. Self-learning hybrid systems based on spiking neural network

6.1. Fuzzy receptive neurons

A common peculiarity of artificial neural networks is that they store the dependence of system model outputs on its inputs in the form of a 'black box'. Instead, data processing methods based on fuzzy logic allow a system model to be designed and stored in an analytical form that can be meaningfully interpreted in a relatively simple way. This fact raises interest in designing hybrid systems that combine the computational capabilities of spiking neural networks with the capability of fuzzy logic methods to conveniently describe input-output relationships of the system being modelled. The present section shows how the receptive neuron layer, a part of the spiking neural network, can be 'fuzzified'.

One can readily see that the layer of receptive neuron pools is identical to a fuzzification layer of neuro-fuzzy systems like Takagi-Sugeno-Kang networks, ANFIS, etc. (Jang et al., 1997). Considering the activation function $\psi_{li}(x_i(k))$ as a membership function, the receptive neuron layer can be treated as one that transforms the input data set into a fuzzy set that is defined by the values of the activation-membership function $\psi_{li}(x_i(k))$ and is expressed over the time domain in the form of firing times $t_{li}^{[0]}(x_i(k))$ (Bodyanskiy et al., 2008a). In fact, each pool of receptive neurons performs zero-order Takagi-Sugeno fuzzy inference (Jang et al., 1997)

$$\text{IF } x_i(k) \text{ IS } X_{li} \text{ THEN OUTPUT IS } t_{li}^{[0]} \tag{32}$$

where $X_{li}$ is the fuzzy set with membership function $\psi_{li}(x_i(k))$. Thus, one can interpret a receptive neuron pool as a certain linguistic variable, and each receptive neuron (more precisely, fuzzy receptive neuron) within the pool as a linguistic term with membership function $\psi_{li}(x_i(k))$ (Fig. 9). This way, having any a priori knowledge of the data structure, it is possible to adjust the activation functions of the first layer neurons beforehand to fit it and thus to obtain better clustering results.

6.2. Fuzzy clustering

The conventional approach to data clustering implies that each pattern x(k) can belong to one cluster only. It is more natural to consider that a pattern can belong to several clusters with different membership levels. This case is the subject matter of fuzzy cluster analysis, which is developing in several directions. Among them, algorithms based on an objective function are the most mathematically rigorous (Bezdek, 1981). Such algorithms solve data processing tasks by optimizing a certain preset cluster quality criterion.

One of the commonly used cluster quality criteria can be stated as follows:

$$E(\mu_j(x(k)), v_j) = \sum_{k=1}^{N} \sum_{j=1}^{m} \mu_j^{\zeta}(x(k))\, \|x(k) - v_j\|_A^2 \tag{33}$$

where $\mu_j(x(k)) \in [0,1]$ is the membership level of the input pattern $x(k)$ to the $j$-th cluster, $v_j$ is the center of the $j$-th cluster, $\zeta \geq 0$ is the fuzzifier that determines the boundary between

Figure 9.

Terms of the linguistic variable for the $i$-th input. Membership functions are adjusted to represent a priori knowledge of the input data structure. Incoming signal $x_i(k)$ fires fuzzy receptive neurons $FRN_{2,i}$ and $FRN_{3,i}$

clusters and controls the amount of fuzziness in the final partition, $\|x(k) - v_j\|_A$ is the distance between $x(k)$ and $v_j$ in a certain metric, and $A$ is a norm matrix that defines the distance metric. Applying the method of indefinite Lagrange multipliers under the restrictions

$$\sum_{j=1}^{m} \mu_j(x(k)) = 1, \quad k = \overline{1,N} \tag{34}$$

$$0 < \sum_{k=1}^{N} \mu_j(x(k)) < N, \quad j = \overline{1,m} \tag{35}$$

minimization of (33) leads us to the following solution:

$$\mu_j(x(k)) = \frac{\big(\|x(k) - v_j\|_A^2\big)^{\frac{1}{1-\zeta}}}{\sum_{\iota=1}^{m} \big(\|x(k) - v_\iota\|_A^2\big)^{\frac{1}{1-\zeta}}} \tag{36}$$

$$v_j = \frac{\sum_{k=1}^{N} \mu_j^{\zeta}(x(k))\, x(k)}{\sum_{k=1}^{N} \mu_j^{\zeta}(x(k))} \tag{37}$$

which originates the methods of so-called fuzzy probabilistic clustering (Bezdek et al., 2005). In the case when the norm matrix $A$ is the identity matrix and $\zeta = 1$, equations (36), (37) present the hard c-means algorithm, and for $\zeta = 2$ they give the conventional fuzzy c-means algorithm.
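For reference, a compact sketch of the probabilistic fuzzy clustering iterations (36), (37) with the Euclidean norm (identity matrix A) might look as follows; the initialization and iteration count are arbitrary choices.

```python
import numpy as np

def fuzzy_c_means(X, m, zeta=2.0, n_iter=30, seed=0):
    """Alternate the membership update (36) and the center update (37)."""
    rng = np.random.default_rng(seed)
    v = X[rng.choice(len(X), m, replace=False)]                  # initial cluster centers
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - v[None, :, :]) ** 2).sum(axis=2) + 1e-12
        u = d2 ** (1.0 / (1.0 - zeta))                           # membership levels (36), unnormalized
        u /= u.sum(axis=1, keepdims=True)
        v = (u.T ** zeta @ X) / (u.T ** zeta).sum(axis=1, keepdims=True)  # centers (37)
    return u, v
```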

The efficiency of fuzzy probabilistic clustering decreases in the presence of noise. The algorithm (36), (37) produces unnaturally high membership degrees for outliers that are equidistant from cluster centers. This drawback is avoided by the fuzzy possibilistic approach, which is based on the following objective function:

$$E(\mu_j(x(k)), v_j) = \sum_{k=1}^{N} \sum_{j=1}^{m} \mu_j^{\zeta}(x(k))\, \|x(k) - v_j\|_A^2 + \sum_{j=1}^{m} \lambda_j \sum_{k=1}^{N} \big(1 - \mu_j(x(k))\big)^{\zeta} \tag{38}$$

where $\lambda_j > 0$ is a scalar parameter that defines the distance at which the membership level takes the value 0.5, i.e. if $\|x(k) - v_j\|_A^2 = \lambda_j$, then $\mu_j(x(k)) = 0.5$. Minimization of (38) with respect to $\mu_j(x(k))$, $v_j$, and $\lambda_j$ yields the following solution:

$$\mu_j(k) = \left(1 + \left(\frac{\|x(k) - v_j\|_A^2}{\lambda_j}\right)^{\frac{1}{\zeta - 1}}\right)^{-1} \tag{39}$$

$$v_j = \frac{\sum_{k=1}^{N} \mu_j^{\zeta}(k)\, x(k)}{\sum_{k=1}^{N} \mu_j^{\zeta}(k)} \tag{40}$$

$$\lambda_j = \frac{\sum_{k=1}^{N} \mu_j^{\zeta}(k)\, \|x(k) - v_j\|_A^2}{\sum_{k=1}^{N} \mu_j^{\zeta}(k)} \tag{41}$$

which gives the conventional possibilistic c-means algorithm if $\zeta = 2$ and $A$ is the identity matrix (Bezdek et al., 2005).
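The possibilistic updates (39)-(41) can be sketched in the same spirit (Euclidean distance assumed; in practice the centers v and the parameters lambda are usually initialized from a preliminary probabilistic run):

```python
import numpy as np

def possibilistic_c_means(X, v, lam, zeta=2.0, n_iter=30):
    """Iterate updates (39)-(41) starting from given centers v and parameters lam."""
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - v[None, :, :]) ** 2).sum(axis=2)
        u = 1.0 / (1.0 + (d2 / lam) ** (1.0 / (zeta - 1.0)))     # memberships (39)
        uz = u ** zeta
        v = (uz.T @ X) / uz.sum(axis=0)[:, None]                  # centers (40)
        lam = (uz * d2).sum(axis=0) / uz.sum(axis=0)              # distance parameters (41)
    return u, v, lam
```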

After the spiking neural network learning has been done, the center $c_j^{[1]}$ of a spiking neuron $SN_j$ represents the center $v_j$ of a certain data cluster, and its firing time $t_j^{[1]}(x(k))$ reflects the distance $\|x(k) - v_j\|_A$ in the temporal sense (Natschlaeger & Ruf, 1998, Bohte et al., 2002). This notion allows us to use the self-learning spiking neural network output in the fuzzy clustering algorithms described above. In order to implement fuzzy clustering on the basis of the spiking neural network, its architecture is modified in the following way: the lateral connections in the second hidden layer are disabled, and an output fuzzy clustering layer is added next to the spiking neuron layer. Such a modification is applied to the spiking neural network on the data clustering stage only. The output fuzzy clustering layer receives information on the distances of the input pattern to the centers of all spiking neurons and produces a fuzzy partition using either the probabilistic approach (36), (37) as follows (Bodyanskiy & Dolotov, 2008a, 2008b):

$$\mu_j(x(k)) = \frac{\big(t_j^{[1]}(x(k))\big)^{\frac{2}{1-\zeta}}}{\sum_{\iota=1}^{m} \big(t_\iota^{[1]}(x(k))\big)^{\frac{2}{1-\zeta}}} \tag{42}$$

or the possibilistic approach (39)-(41) as follows (Bodyanskiy et al., 2008b):

$$\mu_j(x(k)) = \left(1 + \left(\frac{\big(t_j^{[1]}(x(k))\big)^2}{\lambda_j}\right)^{\frac{1}{\zeta - 1}}\right)^{-1} \tag{43}$$

$$\lambda_j = \frac{\sum_{k=1}^{N} \mu_j^{\zeta}(x(k))\, \big(t_j^{[1]}(x(k))\big)^2}{\sum_{k=1}^{N} \mu_j^{\zeta}(x(k))} \tag{44}$$

Obviously, the learning algorithm (11) is more natural here than (7), since the response of each spiking neuron within the second hidden layer matters for producing the fuzzy partition by the output layer.

The advantage of fuzzy clustering based on the self-learning spiking neural network is that it is not required to calculate the centers of data clusters according to (37) or (40), as the network finds them itself during learning.
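Given the firing times of the m spiking neurons for one pattern, the output layer computes the membership levels directly from them; a sketch for the probabilistic case (42), with a small constant added to avoid division by zero for a zero firing time, is shown below.

```python
import numpy as np

def membership_from_firing_times(t_out, zeta=2.0):
    """Probabilistic fuzzy partition (42) computed from spiking neuron firing times."""
    d2 = np.asarray(t_out, dtype=float) ** 2 + 1e-12   # squared firing times play the role of distances
    u = d2 ** (1.0 / (1.0 - zeta))
    return u / u.sum()

# Hypothetical firing times of m = 3 spiking neurons for one input pattern
print(membership_from_firing_times([4.0, 12.0, 25.0]))
```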

7. Simulation experiment

The proposed self-learning fuzzy spiking neural network was tested on the coloured Lenna image shown in Fig. 10a (USC-SIPI Image Database). The image is a standard benchmark that is widely used in image processing. The image has 3 layers (RGB) with spatial dimensions 512×512, so the set to process is formed of 262144 three-dimensional data points (n=3). The purpose was to separate classes by pixel colour, ignoring spatial location. Obviously, some classes overlap one another, as the three RGB components define a great number of colours, and the boundary between colours is indistinct. There were 8 classes to be separated on the image (m=8). A certain shade of grey was assigned to each of the eight classes to visualize the obtained results. 30% of the image pixels were randomly selected to generate a training set (Fig. 10b).

The self-learning fuzzy spiking neural network settings were as follows (most settings were taken from (Berredo, 2005)): time step is 0.1 sec, $h = 6$, receptive neuron type – crisp, $\theta_{r.n.} = 0.1$, $t_{max}^{[0]} = 20$ sec, $\tau_{PSP} = 3$ sec, $q = 16$, $d^1 = 0$, $d^{16} = 15$, minimum value of a synaptic weight is 0, maximum value is 1, simulation interval length is 30 sec, $\eta_w = 0.35$, $\alpha = 2.3$ sec, $\beta = 0.2$, $\nu = 5$ sec, $\theta_{s.n.}(0) = 9$, $\theta_{s.n.}(K+1) = \theta_{s.n.}(K) + 0.3\,\theta_{s.n.}(K)/K_{max}$, $K_{max} = 3$, neighbourhood function – Gaussian, $\rho(0) = 6$, $\gamma = 0.5$, $\rho(K)$ calculated by expression (15), fuzzy clustering – probabilistic, $\zeta = 2$, defuzzification method – the largest value. Results of image processing produced by the spiking neural network on the 1st and the 3rd epochs are shown in Fig. 10c and Fig. 10d, respectively.
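For convenience, the settings listed above can be collected in a single configuration structure; the dictionary below is a hypothetical arrangement with invented key names, not part of the original experiment code.

```python
# Hypothetical configuration mirroring the experiment settings listed above
snn_settings = {
    "time_step": 0.1,              # sec
    "h": 6,                        # receptive neurons per pool
    "receptive_neuron": "crisp",
    "theta_rn": 0.1,               # receptive neuron dead zone
    "t_max": 20.0,                 # coding interval, sec
    "tau_psp": 3.0,                # sec
    "q": 16, "d_first": 0.0, "d_last": 15.0,
    "w_min": 0.0, "w_max": 1.0,
    "simulation_interval": 30.0,   # sec
    "eta_w": 0.35, "alpha": 2.3, "beta": 0.2, "nu": 5.0,
    "theta_sn_init": 9.0, "K_max": 3,
    "neighbourhood": "gaussian", "rho_init": 6.0, "gamma": 0.5,
    "fuzzy_clustering": "probabilistic", "zeta": 2.0,
    "defuzzification": "largest value",
}
```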

The fuzzy c-means algorithm was also trained on the same training set ($\zeta = 2$, defuzzification method – the largest value). Results of image processing produced by the algorithm on the 3rd and the 30th epochs are shown in Fig. 10e and Fig. 10f, respectively.

Thus, the self-learning fuzzy spiking neural network requires an order of magnitude fewer learning epochs than the conventional fuzzy c-means algorithm.

Figure 10.

The Lenna image processing: a) Original image; b) Training set (30% of the original image); c) The 1st epoch of self-learning fuzzy spiking neural network learning; d) The 3rd epoch of self-learning fuzzy spiking neural network learning; e) The 3rd epoch of fuzzy c-means learning; f) The 30th epoch of fuzzy c-means learning

8. Conclusion

Spiking neural networks are more realistic models of real neuronal systems than artificial neural networks of the previous generations. Nevertheless, as shown in the earlier sections, they can be described in a strict, technically plausible way based on the Laplace transform. A spiking neural network designed in terms of transfer functions is an analog-digital nonlinear dynamic system that conveys and processes information both in pulse-position and continuous-time forms. Such a precise formal description of spiking neural network architecture and functioning provides researchers and engineers with a framework to construct hardware implementations of various spiking neural networks for image processing of different levels of complexity.

Networks of spiking neurons introduced a new, biologically more plausible essence of information processing and gave rise to a new, computationally more powerful generation of computational intelligence hybrid systems. In the present chapter, a self-learning fuzzy spiking neural network that combines a spiking neural network with fuzzy probabilistic and fuzzy possibilistic clustering algorithms was described as an example of such hybrid systems. It was shown that using hybrid systems constructed on a spiking neural network basis makes it possible to reduce the number of learning epochs as compared to conventional fuzzy clustering algorithms. In addition, a way to 'fuzzify' the spiking neural network architecture was demonstrated by considering a pool of receptive neurons to be a linguistic variable.

Although the temporal Hebbian learning algorithm of the spiking neural network is biologically plausible, an even more realistic learning algorithm based on the 'Winner-Takes-More' rule was proposed as its improvement.

Both the theoretical innovations and the simulation experiment presented in this chapter confirm that the self-learning spiking neural network and hybrid systems developed on its basis are a powerful and efficient tool of computational intelligence for data clustering and, particularly, for image processing.

References

1. Berredo de, R.C. (2005). A review of spiking neuron models and applications, M.Sc. Dissertation, Pontifical Catholic University of Minas Gerais, Belo Horizonte, http://www.loria.fr/~falex/pub/Teaching/disserta.pdf
2. Bezdek, J.C. (1981). Pattern Recognition with Fuzzy Objective Function Algorithms, Plenum Press, ISBN 0-30640-671-3, New York.
3. Bezdek, J.C.; Keller, J.; Krishnapuram, R. & Pal, N.R. (2005). Fuzzy Models and Algorithms for Pattern Recognition and Image Processing, Springer, ISBN 0-38724-515-4, New York.
4. Bodyanskiy, Ye. & Rudenko, O. (2004). Artificial Neural Networks: Architectures, Learning, Applications, Teletekh, ISBN 9-66954-162-2 (in Russian).
5. Bodyanskiy, Ye. & Dolotov, A. (2008a). Image processing using self-learning fuzzy spiking neural network in the presence of overlapping classes, Proceedings of the 11th International Biennial Baltic Electronics Conference "BEC 2008", pp. 213-216, ISBN 978-1-42442-060-5, Tallinn/Laulasmaa, Estonia, October 2008, Tallinn University of Technology, Tallinn.
6. Bodyanskiy, Ye. & Dolotov, A. (2008b). A self-learning spiking neural network for fuzzy clustering task, Scientific Proceedings of Riga Technical University, Computer Science Series: Information Technology and Management Science, Vol. 36, pp. 27-33, ISSN 1407-7493.
7. Bodyanskiy, Ye. & Dolotov, A. (2009). Hebbian learning of fuzzy spiking neural network based on 'Winner-Takes-More' rule, Proceedings of the 11th International Conference on Science and Technology "System Analysis and Information Technologies", p. 271, ISBN 978-9-66215-327-9, Kyiv, May 2009, ESC "IASA" NTUU "KPI", Kyiv.
8. Bodyanskiy, Ye.; Dolotov, A. & Pliss, I. (2008a). Fuzzy receptive neurons using in self-learning spiking neural network, Proceedings of the International Scientific and Technical Conference "Automation: Problems, Ideas, Solutions", pp. 12-14, ISBN 978-9-66296-032-7, Sevastopil, September 2008, Publishing House of SevNTU, Sevastopil (in Russian).
9. Bodyanskiy, Ye.; Dolotov, A.; Pliss, I. & Viktorov, Ye. (2008b). Fuzzy possibilistic clustering using self-learning spiking neural network, Wissenschaftliche Berichte der Hochschule Zittau/Goerlitz, Vol. 100, pp. 53-60, ISBN 3-98080-899-9.
10. Bodyanskiy, Ye.; Dolotov, A. & Pliss, I. (2009). Self-learning fuzzy spiking neural network as a nonlinear pulse-position threshold detection dynamic system based on second-order critically damped response units, In: International Book Series "Information Science and Computing", No. 9: Intelligent Processing, K. Markov, P. Stanchev, K. Ivanova, I. Mitov (Eds.), pp. 63-70, Institute of Information Theories and Applications FOI ITHEA, ISSN 1313-0455, Sofia.
11. Bohte, S.M.; Kok, J.N. & La Poutre, H. (2002). Unsupervised clustering with spiking neurons by sparse temporal coding and multi-layer RBF networks, IEEE Transactions on Neural Networks, Vol. 13, pp. 426-435, ISSN 1045-9227.
12. Cottrell, M. & Fort, J.C. (1986). A stochastic model of retinotopy: a self-organizing process, Biological Cybernetics, Vol. 53, No. 6, pp. 405-411, ISSN 0340-1200.
13. Cybenko, G. (1989). Approximation by superposition of a sigmoidal function, Mathematics of Control, Signals, and Systems, Vol. 2, pp. 303-314, ISSN 0932-4194.
14. Dayan, P. & Abbott, L.F. (2001). Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems, MIT Press, ISBN 0-26204-199-5.
15. Dorf, R.C. & Bishop, R.H. (1995). Modern Control Systems, Addison-Wesley, ISBN 0-20184-559-8.
16. Feldbaum, A.A. & Butkovskiy, A.G. (1971). Methods of Automatic Control Theory, Nauka, Moscow (in Russian).
17. Gerstner, W.; Kempter, R.; van Hemmen, J.L. & Wagner, H. (1996). A neuronal learning rule for sub-millisecond temporal coding, Nature, Vol. 383, pp. 76-78, ISSN 0028-0836.
18. Gerstner, W. & Kistler, W.M. (2002). Spiking Neuron Models: Single Neurons, Populations, Plasticity, Cambridge University Press, ISBN 0-52181-384-0.
19. Goodwin, G.C.; Graebe, S.F. & Salgado, M.E. (2001). Control System Design, Prentice Hall, ISBN 0-13958-653-9, Upper Saddle River.
20. Haykin, S. (1999). Neural Networks: A Comprehensive Foundation, Prentice Hall, ISBN 0-13273-350-1, Upper Saddle River.
21. Hopfield, J.J. (1995). Pattern recognition computation using action potential timing for stimulus representation, Nature, Vol. 376, pp. 33-36, ISSN 0028-0836.
22. Hornik, K.; Stinchcombe, M. & White, H. (1989). Multilayer feedforward networks are universal approximators, Neural Networks, Vol. 2, No. 5, pp. 359-366, ISSN 0893-6080.
23. Jang, J.-S.R.; Sun, C.-T. & Mizutani, E. (1997). Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence, Prentice Hall, ISBN 0-13261-066-3, Upper Saddle River.
24. Kohonen, T. (1995). Self-Organizing Maps, Springer, ISBN 3-54058-600-8.
25. Kraft, M.; Kasinski, A. & Ponulak, F. (2006). Design of the spiking neuron having learning capabilities based on FPGA circuits, Proceedings of the 3rd IFAC Workshop on Discrete-Event System Design "DESDes'06", pp. 301-306, ISBN 8-37481-035-1, September 2006, University of Zielona Gora Press, Zielona Gora.
26. Lindblad, T. & Kinser, J.M. (2005). Image Processing Using Pulse-Coupled Neural Networks, Springer, ISBN 3-54028-293-9.
27. Maass, W. (1997a). Fast sigmoidal networks via spiking neurons, Neural Computation, Vol. 9, No. 2, pp. 279-304, ISSN 0899-7667.
28. Maass, W. (1997b). Networks of spiking neurons: the third generation of neural network models, Neural Networks, Vol. 10, No. 9, pp. 1659-1671, ISSN 0893-6080.
29. Maass, W. & Bishop, C.M. (1998). Pulsed Neural Networks, MIT Press, ISBN 0-26213-350-4.
30. Malsburg, C. von der (1994). The correlation theory of brain function, In: Models of Neural Networks II: Temporal Aspects of Coding and Information Processing in Biological Systems, E. Domany, J.L. van Hemmen, K. Schulten (Eds.), pp. 95-119, Springer-Verlag, ISBN 978-0-38794-362-6, New York.
31. McCulloch, W.S. & Pitts, W.A. (1943). A logical calculus of ideas immanent in nervous activity, Bulletin of Mathematical Biophysics, Vol. 5, pp. 115-133, ISSN 0092-8240.
32. Meftah, B.; Benyettou, A.; Lezoray, O. & QingXiang, W. (2008). Image clustering with spiking neuron network, Proceedings of the International Joint Conference on Neural Networks "IJCNN 2008", part of the IEEE World Congress on Computational Intelligence "WCCI 2008", pp. 681-685, ISBN 978-1-42441-821-3, Hong Kong, June 2008, IEEE Press, Piscataway.
33. Minsky, M.L. & Papert, S.A. (1969). Perceptrons: An Introduction to Computational Geometry, MIT Press, ISBN 0-26263-022-2.
34. Natschlaeger, T. & Ruf, B. (1998). Spatial and temporal patterns analysis via spiking neurons, Network: Computation in Neural Systems, Vol. 9, No. 3, pp. 319-332, ISSN 1361-6536.
35. Phillips, C.L. & Harbor, R.D. (2000). Feedback Control Systems, Prentice Hall, ISBN 0-13949-090-6, Upper Saddle River.
36. Pikovsky, A.; Rosenblum, M. & Kurths, J. (2001). Synchronization: A Universal Concept in Nonlinear Sciences, Cambridge University Press, ISBN 0-52159-285-2.
37. Ritter, H. & Schulten, K. (1986). On the stationary state of Kohonen's self-organizing sensory mapping, Biological Cybernetics, Vol. 54, No. 2, pp. 99-106, ISSN 0340-1200.
38. Sato-Ilic, M. & Jain, L.C. (2006). Innovations in Fuzzy Clustering: Theory and Applications, Springer, ISBN 978-3-54034-356-1, Berlin.
39. Scott, A. (2002). Neuroscience: A Mathematical Primer, Springer, ISBN 0-38795-403-1, New York.
40. Schoenauer, T.; Atasoy, S.; Mehrtash, N. & Klar, H. (2000). Simulation of a digital neuro-chip for spiking neural networks, Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks "IJCNN 2000", Vol. 4, pp. 490-495, ISBN 0-76950-619-4, July 2000, IEEE Computer Society, Los Alamitos.
41. Tsypkin, Ya.Z. (1984). Relay Control Systems, Cambridge University Press, ISBN 0-52124-390-4.
42. USC-SIPI Image Database, the Lenna image (2004). University of Southern California, image 4.2.04, Signal & Image Processing Institute, Electrical Engineering Department, http://sipi.usc.edu/database/database.cgi?volume=misc&image=12
