Open access peer-reviewed chapter

Neural Network-Based Analog-to-Digital Converters

Written By

Aigerim Tankimanova and Alex Pappachen James

Submitted: 28 December 2016 Reviewed: 11 December 2017 Published: 20 March 2018

DOI: 10.5772/intechopen.73038

From the Edited Volume

Memristor and Memristive Neural Networks

Edited by Alex Pappachen James


Abstract

In this chapter, we present an overview of the recent advances in analog-to-digital converter (ADC) neural networks. Biological neural networks exhibit a natural binarization reflected in their neurosynaptic processes. This natural analog-to-binary conversion ability of neurons can be modeled to emulate analog-to-digital conversion using a set of nonlinear circuit elements and existing artificial neural network models. Since a single neuron consumes on average only about half a nanowatt of power during processing, neurons can perform highly energy-efficient operations, including pattern recognition. Analog-to-digital conversion itself is an example of simple pattern recognition, where the input analog signal can be represented by one of $2^N$ different patterns for N bits. The classical configuration of a neural network-based ADC is the Hopfield neural network ADC. Improved designs, such as the modified Hopfield network ADC, the T-model neural ADC and the multilevel neuron-based neural ADC, will be discussed. In addition, the latest architecture designs of neural ADC, such as the hybrid complementary metal-oxide semiconductor (CMOS)-memristor Hopfield ADC, are covered at the end of this chapter.

Keywords

  • neural networks
  • analog-to-digital converters
  • Hopfield network

1. Introduction

This chapter presents a review of the advancements in the application of neural network (NN) systems in analog-to-digital converter (ADC) design. Analog-to-digital (A/D) conversion is an essential part of microelectronic system design, serving as the link between analog sensors and digital-processing circuitry [1]. The dominant period of ADC design development came with the maturity of complementary metal-oxide semiconductor (CMOS) technologies [1]. At present, there is a huge variety of high-speed and high-resolution ADCs based on the most advanced CMOS processes, suited to different applications [1]. In fact, even though the ADC design field is mature, constructing a properly operating ADC system that fits a given application remains complex. In conventional CMOS ADCs, a number of carefully designed analog circuits, such as switches, operational amplifiers, voltage converters, and so on, are required [2]. However, with modern advancements in computational systems and processing applications, the demand for faster processing and more flexible architectures that can perform a variety of tasks in the most efficient manner has increased. Artificial neural network (ANN) technology is a well-known candidate that can resolve such demands in high-performance A/D conversion, as it divides the task between a number of simple processing elements (neurons) [3]. Further, neurons can perform highly energy-efficient pattern recognition operations; in particular, a single neuron consumes on average only about half a nanowatt of power during processing [4].

Since the early twentieth century, scientists and engineers have been trying to explain how the human brain functions, and a number of models aimed at mimicking some features of biological neural networks have been proposed. The work presented by McCulloch and Pitts [5] is one of the first examples of mathematical modeling of an ANN, based on a two-state neuron model. The information processing performed in biological neural networks incorporates memorization, learning, classification, and so on, and relies on a natural binarization mechanism reflected in the neurosynaptic processes. The associative property that the brain uses in processing information has been discussed widely since the 1960s. Based on the works presented in [5, 6, 7, 8], Hopfield proposed a neural network model that incorporates the associative memory property. The idea that he presented is, in effect, a model of content addressable memory (CAM) that can be implemented in hardware [9, 10, 11]. He discovered that such a network has collective computational properties, so that it can be used in solving different optimization problems [9, 10, 11].

One application of this CAM-based neural network (NN), introduced by Hopfield and Tank, is solving a simple optimization problem such as analog-to-digital (A/D) conversion, where the dynamics of the system is described by an energy function (or cost function) [9]. The main concept behind the proper operation of the Hopfield NN is the minimization of the energy function: when the minimum value is achieved, the network reaches its stable state [9, 10, 11, 12]. In general, A/D conversion can be classified as an example of simple pattern recognition, where the input analog signal can be represented by one of $2^N$ different patterns for N bits. In the Hopfield NN-based ADC, these digital patterns are stored as a memory and are retrieved when the network reaches a stable state after the conversion period [11].

The NN model proposed by Hopfield represents a network of interconnected processing units (neurons) connected through a symmetric connection matrix with zero diagonal elements [9, 10, 11, 13]. The interconnection nodes between neurons can be viewed as synaptic strength values, where the strength of each synapse is represented by the conductance value at each node. The network dynamics is governed by the behaviour of the energy function E: when the energy function is at its minimum, the network reaches a stable state and yields a digital output [9, 10, 11, 13].

Therefore, in Section 2, a comprehensive discussion of the Hopfield NN in general and the ADC based on the Hopfield NN design is presented. The section addresses the theory of the Hopfield NN, the construction of the ADC structure and the problems that appear in the Hopfield NN ADC. Section 3 reviews different designs based on the original Hopfield ADC, such as the modified Hopfield neural ADC, NN-based ADCs with a non-symmetric weight matrix, the NN-based ADC with multilevel neurons and the level-shifted neural ADC. In Section 4, recent CMOS-memristor-based ADC architectures are reviewed. The last section summarizes and concludes the chapter.


2. Hopfield neural network ADC

2.1. The Hopfield ADC theory

In his early works, Hopfield introduced the ideas behind the emergent collective computational properties of highly interconnected associative networks [9, 10]. Earlier neural network models were of the perceptron type, implemented as feedforward architectures [13]. By contrast, Hopfield presented a different type of architecture with fully interconnected neurons, where each neuron feeds its output to the inputs of the remaining neurons through feedback connections [9, 10]. The strength of each feedback connection is represented by its weight (or synapse). In a later work, Hopfield and Tank [11] presented methods by which the network can be applied to solving optimization problems, such as A/D conversion, signal decomposition and linear programming.

One of the earliest works on artificial neural networks (ANNs), by McCulloch and Pitts [5], described a two-state (on-state and off-state) stochastic neuron model that simplifies biological neural function to a simple logical operation. However, this model was not applicable to analog processing, as it did not have the continuous behaviour of biological neurons [10, 11, 13]. Hopfield proposed an NN model with a continuous neuron response [10, 11, 13] that retains the computational properties of the stochastic model [9] and can be implemented in hardware. The continuous neuron response in the Hopfield NN can be interpreted as an analogy of the graded dependence of the firing rate produced by the soma of a biological neuron on the input signal to the neuron membrane [10, 11, 13], without considering the details of the action potential signal. The two states of the neuron model are '0' for the non-firing state and '1' for firing at the maximum rate [10, 11, 13]. The graded response function that describes this dependence is the neuron's activation function $g_i(u_i)$, represented by the monotonically increasing sigmoid function of Eq. (1)

$$g_i(u_i) = \frac{1}{1 + \exp(-u_i)} \tag{1}$$

where $u_i$ is the input voltage to the neuron, so the neuron's output signal is $V_i = g_i(u_i)$.

The neuron output $V_i$ can be either logic high or logic low, depending on whether the effective input voltage to the neuron $u_i$ is higher or lower than the neuron threshold, as can be observed in Figure 1.

Figure 1.

Neuron sigmoidal transfer function.
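As a minimal numerical sketch of Eq. (1), the activation can be coded directly; the gain parameter `lam` below is an illustrative assumption not present in the original formulation (a steeper gain pushes the response towards the two-state behaviour of Figure 1):

```python
import numpy as np

def neuron_activation(u, lam=1.0):
    """Sigmoidal activation of Eq. (1); lam is an illustrative gain factor."""
    return 1.0 / (1.0 + np.exp(-lam * u))

# A high gain approximates the two-state (logic '0'/'1') neuron response.
u = np.linspace(-5.0, 5.0, 5)
print(neuron_activation(u, lam=10.0))   # ~[0, 0, 0.5, 1, 1]
```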

The hardware implementation of the 4-bit Hopfield NN ADC proposed in [11] is shown in Figure 2. As described in [11], at each analog input level, the network creates an energy function surface that consists of local minima states with one global minimum for that particular analog input. The global minimum for each input level represents the correct digital representation of the input signal [11]. The dynamics of the system can be viewed as a flow in the energy state space that tends to minimize E, so that when the network reaches a minimum, the search process stops [10, 13]. When the ADC network arrives at an energy minimum state, it should produce the output code that best represents the corresponding analog input. Thus, E is a Lyapunov stability function of the system [10]. The proper operation of the Hopfield ADC is achieved when the voltage level of the output code is equal to the value of the analog input, Eq. (2).

$$V_{In} = \sum_{i=0}^{N-1} 2^i V_i \tag{2}$$

Figure 2.

4-bit Hopfield neural network ADC.

The ADC network consists of four neurons that are interconnected by a synaptic weight matrix. The network dynamics is highly dependent on the values of the synaptic matrix elements. This dependency was analysed by Hopfield [9, 10], where it is deduced that for the system to reach a stable state, two conditions should be maintained: (1) the synaptic weight matrix must be symmetric, $T_{ij} = T_{ji}$, and (2) the diagonal synaptic weights, which correspond to feedback from neurons to their own inputs, must be zero, $T_{ii} = 0$. Under these conditions, as shown in [9, 10, 11, 13], the Hopfield neural network converges to a stable state. The energy function for the Hopfield network with a symmetric weight matrix is given by Eq. (3)

$$E = -\frac{1}{2}\sum_{i,j} T_{ij} V_i V_j + \sum_i \frac{1}{R_i} \int_0^{V_i} g_i^{-1}(V)\, dV - \sum_i I_i V_i \tag{3}$$

where the term $g_i^{-1}(V)$ is equal to the neuron input potential $u_i$ and $R_i$ is the neuron input resistance [10, 13].

The NN proposed by Hopfield has features that correlate with biological NNs and thus represents a simplified analogy of them. The change in the system dynamics can be described by a first-order differential equation for the rate of change of the ith neuron input potential, Eq. (4). The capacitance C present at the neuron input is a circuit representation of the neuron cell membrane capacitance, while in the term $T_{In_i} + T_{R_i} + \sum_j T_{ij} = 1/R_i$, the resistance $R_i$ can be viewed as the neuron cell transmembrane resistance [6]

$$C\,\frac{du_i}{dt} = \sum_j T_{ij} V_j - \left(T_{In_i} + T_{R_i} + \sum_j T_{ij}\right) u_i + T_{In_i} V_{In} + T_{R_i} V_{ref} \tag{4}$$

From Eq. (4), it is seen that the ith neuron is charged by integrating the current flowing into it, with an RC charging time constant [10, 13]. The current flowing into the neuron consists of three components formed prior to the neuron input: the postsynaptic current $T_{ij} V_j$ from neuron j, the analog input current $T_{In_i} V_{In}$ and the constant reference current $T_{R_i} V_{ref}$ [10, 13].

The ADC operation can also be described by the energy function shown in Eq. (5) [11]. The first term of Eq. (5) is the squared difference between the analog input voltage and the corresponding digital output voltage. As previously assumed, the value of the analog input voltage should be close to the voltage level of the corresponding output code, see Eq. (2). If, for a particular $V_{In}$, the output code $V_3 V_2 V_1 V_0$ is the correct digital representation, the first term of Eq. (5) equals zero [11]. The second term of Eq. (5) is added to ensure that the digital output voltages $V_i$ take the logic values '0' and '1' [11]

$$E = \frac{1}{2}\left(V_{In} - \sum_{i=0}^{N-1} 2^i V_i\right)^2 - \frac{1}{2}\sum_{i=0}^{N-1} 2^{2i}\, V_i \left(V_i - 1\right) \tag{5}$$

After expanding and rearranging the above equation, we obtain the expression shown in Eq. (6). From Eq. (6), the expressions for calculating the synaptic weights can be derived, Eq. (7)

$$E = \frac{1}{2}\sum_{\substack{i,j=0 \\ i \neq j}}^{N-1} 2^{i+j}\, V_i V_j - \sum_{i=0}^{N-1}\left(-2^{2i-1} + 2^i V_{In}\right) V_i \tag{6}$$
$$T_{ij} = -2^{i+j}, \qquad T_{ref_i} = 2^{2i-1}, \qquad T_{In_i} = 2^i \tag{7}$$

Therefore, a four-bit Hopfield NN ADC can be designed by using Eqs. (2)–(7).
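To make the construction concrete, the sketch below assembles the 4-bit network from Eqs. (1), (4) and (7) and integrates the dynamics with forward Euler. The gain, time step, capacitance and the choice of a $-1\,\mathrm{V}$ reference are illustrative assumptions, not the component values of [11]:

```python
import numpy as np

N = 4
idx = np.arange(N)

# Weights from Eq. (7): T_ij = -2^(i+j) for i != j, T_In_i = 2^i, and
# T_ref_i = 2^(2i-1), used here with an assumed V_ref = -1 V bias.
T = -np.power(2.0, idx[:, None] + idx[None, :])
np.fill_diagonal(T, 0.0)                   # T_ii = 0 (Hopfield condition)
T_in = np.power(2.0, idx)
T_ref = np.power(2.0, 2.0 * idx - 1.0)
V_ref = -1.0

def g(u, lam=20.0):
    """High-gain sigmoid of Eq. (1); lam is an illustrative gain."""
    return 1.0 / (1.0 + np.exp(-lam * u))

def convert(V_in, steps=20000, dt=1e-3, C=1.0):
    """Forward-Euler integration of Eq. (4) until the network settles."""
    u = np.zeros(N)                          # neuron input potentials
    leak = T_in + T_ref + (-T).sum(axis=1)   # conductance sum, i.e. 1/R_i
    for _ in range(steps):
        V = g(u)
        u += dt / C * (T @ V - leak * u + T_in * V_in + T_ref * V_ref)
    return (g(u) > 0.5).astype(int)          # threshold to a digital code

# Sweep a few input levels; per Eq. (2), bits @ 2^i should track V_in.
# Some levels may settle on a neighbouring code because of the local
# minima discussed in Section 2.2.
for V_in in [0.0, 5.0, 10.0, 15.0]:
    bits = convert(V_in)
    print(V_in, bits, "->", int(bits @ 2 ** idx))
```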

2.2. The local minima problem

As already discussed, the stability of the Hopfield NN is achieved when the energy function is at a minimum in the state space, and the dynamics of the system moves towards decreasing the energy function. The energy state space of the Hopfield NN has multiple local minima, each of which can stabilize the system dynamics. In theory, the ADC structure proposed by Tank and Hopfield [11], which is based on the Hopfield NN with a symmetric weight matrix, retrieves the correct digital response to the analog input voltage by means of the local energy minima assigned to each correct digital output. In practice, however, this concept does not work as expected: the local minima states corrupt the correct operation of the network [14, 15, 16, 17, 18, 19].
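The spurious states are easy to reproduce by brute force. The sketch below evaluates the discrete energy of Eq. (5) over all $2^N$ binary codes and flags codes from which no single-bit flip lowers the energy; this is an analysis sketch, not the circuit-level observation of [11]:

```python
import itertools
import numpy as np

N = 4
w = 2.0 ** np.arange(N)

def energy(V_in, bits):
    """Eq. (5) on a binary code; its second term vanishes for 0/1 values,
    so the discrete landscape is set by the quadratic mismatch alone."""
    b = np.asarray(bits, dtype=float)
    return 0.5 * (V_in - b @ w) ** 2 - 0.5 * np.sum(w ** 2 * b * (b - 1.0))

def local_minima(V_in):
    """Codes whose energy no single-bit flip can lower."""
    minima = []
    for bits in itertools.product([0, 1], repeat=N):
        e = energy(V_in, bits)
        neighbours = (energy(V_in, [b ^ (k == j) for k, b in enumerate(bits)])
                      for j in range(N))
        if all(e <= n for n in neighbours):
            minima.append((bits, round(e, 3)))
    return minima

# For V_in = 6.8, both the correct code 7 and the spurious code 8 are
# single-bit-flip minima (bits are listed LSB first).
print(local_minima(6.8))
```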

In the original work by Tank and Hopfield [11], it was proposed to implement the neurons with CMOS operational amplifiers. The results obtained exhibit non-ideal ADC behaviour with incorrect output states (Figure 3).

Figure 3.

Hopfield NN ADC transfer characteristics with digital errors.

It was found that, after each A/D conversion cycle, the threshold voltage of each neuron circuit differs from the pre-set value of $u_{th} = 0\,\mathrm{V}$ [11], such that the comparator responses exhibit offset. The authors in [11] suggested that this hysteresis of the CMOS neurons, in addition to the local minima states, is a dominant contributor to the erroneous network response. The hysteresis-induced change in thresholds makes the system stabilize at the local minimum located closest to the network's energy state at the moment of conversion [14, 15, 16, 17, 18, 19]. One solution to this problem is to reset the neuron state to the initial threshold value after each conversion [11]. However, the main disadvantage of this method is that it requires more power.

Alternatively, several works proposed to change the Hopfield ADC network architecture itself in order to eliminate the local minima states that cause errors in the ADC outputs. Different methods for eliminating the local minima problem are proposed in [14, 15, 16, 17, 18, 19] and are discussed in more detail in Section 3.


3. Hopfield neural network-based ADCs

The design presented by Hopfield and Tank is the first example of the ADC task implemented with neural networks. The idea later became very popular, as it appeared very simple compared to conventional designs and, moreover, it opened up possibilities to explore the phenomenological computational abilities of such networks, a worthwhile contribution to science and engineering in itself.

As previously described, the existence of local minima in the dynamics of the original Hopfield network ADC design corrupts its digital output by generating spurious states. This problem was addressed by several works that presented ways of eliminating the local minima states by changing the structure of the ADC network [14, 15, 16, 17, 18, 19]. In the following subsections, two methods that are claimed to eliminate the local minima problem of the energy function are presented.

3.1. Eliminating the local minima problem of Hopfield ADC

3.1.1. Modified Hopfield architecture with correction currents

In the study by Lee and Sheu [14], the authors analyzed the stability of the output codes of the Hopfield network ADC in terms of the overlap of input currents between two adjacent output codes, defined as the GAP. According to [14], in order to avoid local minima states, this parameter should be greater than or equal to zero. It was thus deduced that, to eliminate the current overlap condition, correction currents can be applied back to the inputs of the Hopfield network through an additional set of conductance weights [14].

The schematic diagram of the modified Hopfield network ADC is shown in Figure 4. The correction currents are generated by inverting amplifiers in order to compensate the overlap and to keep the system dynamics converging to a stable state. Eq. (8) describes the dynamics of the network in a stable state with the applied correction current $I_i^C$.

$$T_i u_i = \sum_{\substack{j=0 \\ j \neq i}}^{N-1} T_{ij} V_j + I_i + I_i^C \tag{8}$$

The energy function of the modified Hopfield network can be described by adding a term that represents the correction currents, Eq. (9). The correcting energy eliminates the local minima states and gives the network one global minimum energy state [14].

$$E^C = -\frac{1}{2}\sum_{\substack{i,j=0 \\ i \neq j}}^{N-1} T_{ij} V_i V_j - \sum_{i=0}^{N-1} I_i V_i - \sum_{i=1}^{N-1} I_i^C V_i \tag{9}$$

There are certain conditions, according to [14], that should be followed when selecting the correction current and conductance values. The first condition is to avoid the state in which the $\mathrm{GAP}^C$ parameter is less than zero, so as to prevent two adjacent codes from being stable simultaneously. The second condition states that the network dynamics must continue to minimize the energy function of the system. The last condition is to keep the input current range appropriate for the global minimum. For a detailed description of the method, please refer to [14].
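As a compact illustration of Eq. (9), the corrected energy is the original quadratic form plus the correction-current term. The weight matrix and currents passed in would come from Eq. (7) and the selection conditions of [14], so any concrete values used with this sketch are placeholders:

```python
import numpy as np

def corrected_energy(T, I, I_corr, V):
    """E^C of Eq. (9) for a zero-diagonal weight matrix T.

    I holds the original input currents and I_corr the correction
    currents I_i^C of [14]; in Eq. (9) the correction sum starts at
    i = 1, so I_corr[0] should be set to zero.
    """
    V = np.asarray(V, dtype=float)
    return -0.5 * V @ T @ V - I @ V - I_corr @ V
```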

3.1.2. Non-symmetric Hopfield architecture

Another architecture based on the Hopfield network, aimed at solving the local minima problem, is built with a non-symmetric connection weight matrix. In the designs of [15, 16, 17, 18, 19], the properties of triangular connections are analyzed. In [19], the authors prove that, with a triangular interconnection matrix, the network operates without spurious states and that this type of architecture can be a good alternative to the original Hopfield design. A similar network type was analyzed by Sun et al. [18], where it is shown that the local minima problem can be mitigated by this architecture. Owing to the structure of the model [18], a learning component can be applied to the network, making this type of architecture advantageous over the original one.

In Figure 5, the non-symmetric T-model ADC is shown. The input current at each row represents the current flowing from the external analog input source and from the reference.
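A discrete-time reading of the triangular interconnection, under the assumption that each bit settles after all more significant bits (so no feedback loop, and hence no spurious equilibrium, arises), is sketched below; this is an illustration of the principle, not the circuit of [18, 19]:

```python
import numpy as np

def t_model_convert(V_in, N=4):
    """Bit-by-bit conversion with a lower-triangular influence pattern:
    bit i depends only on the input and on more significant bits j > i,
    so no feedback loops (and hence no spurious equilibria) exist.
    Illustrative discrete-time sketch of the triangular idea.
    """
    bits = np.zeros(N, dtype=int)
    for i in reversed(range(N)):          # MSB first
        residual = V_in - sum(bits[j] * 2 ** j for j in range(i + 1, N))
        bits[i] = 1 if residual >= 2 ** i else 0
    return bits

print(t_model_convert(10.3))   # LSB-first bits [0, 1, 0, 1], i.e. code 10
```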

This subsection presented an overview of Hopfield-type neural network ADC designs that address the local minima problem of the energy function, which creates digital errors at the output of the ADC, and gave a brief explanation of two methods for eliminating the local minima.

3.2. Hopfield ADC with multilevel neurons

An interesting alternative design is proposed in [20, 21], in which the authors focus on implementing analog neurons with multiple states. The design, named the multilevel neural network, is applied to the original Hopfield neural network ADC by replacing the conventional two-state sigmoidal neurons with multiple-state (or multiple-threshold) neurons [21]. The motivation behind this idea is to create a neural ADC with better resolution but with the same number (or even fewer) synaptic weights as in the original Hopfield ADC design [18]. This method reduces the complexity of the weight matrix and makes it easier to implement an ADC with improved resolution in hardware [21].

The schematic diagram of the ADC proposed in [21] is shown in Figure 6. Although it is a distinctive alternative neural network-based ADC design, it still does not solve the local minima problem of the Hopfield associative network. In [21], the authors considered this issue and proposed to solve the local minima by the additional correction current method [14].

Figure 4.

Schematic representation of modified Hopfield network ADC.

The multilevel neuron dynamics is described by the block diagram shown in Figure 7 [21]. In the original Hopfield ADC, the continuous neuron model dynamics is described by the first-order differential equation of Eq. (4). The two-state neuron activation function is expressed by Eq. (1); the neuron output is then equal to $V_i = g_i(u_i)$ and can take two states, logic high and logic low (refer to Section 2). In the multilevel neuron model, the two-state activation function is replaced with the multiple-state nonlinearity block of Eq. (10) (Figures 6 and 7)

$$V_i = M_i(u_i) \tag{10}$$

Figure 5.

Non-symmetric T-model ADC.

Figure 6.

Multilevel neural ADC.

Figure 7.

Multilevel neuron block diagram.

The nonlinearity function $M(u)$ of Eq. (10) is described as a sum of monotonically nondecreasing step functions $f_j$ with different threshold values $\theta_j$, at which the state of the neuron changes, Eq. (11). Each step function is multiplied by a positive coefficient $b_j$ that can be seen as an offset parameter. The sigmoidal multilevel nonlinearity can be observed in Figure 8

$$M(u) = \sum_{j=0}^{l-1} b_j\, f_j\!\left(u - \theta_j\right) \tag{11}$$

Figure 8.

Sigmoidal multilevel nonlinearity.
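A numerical sketch of Eq. (11) follows; the unit step is used for $f_j$, and the threshold spacing and coefficients are illustrative assumptions:

```python
import numpy as np

def multilevel_nonlinearity(u, thresholds, coefficients):
    """M(u) of Eq. (11): a sum of nondecreasing unit steps f_j with
    thresholds theta_j, each scaled by a positive coefficient b_j."""
    u = np.asarray(u, dtype=float)
    steps = (u[..., None] >= np.asarray(thresholds)).astype(float)
    return steps @ np.asarray(coefficients)

# Three unit steps give a four-level (base l = 4) staircase from 0 to 3.
theta = [0.5, 1.5, 2.5]     # illustrative threshold values
b = [1.0, 1.0, 1.0]         # illustrative coefficients
print(multilevel_nonlinearity(np.linspace(-1.0, 4.0, 11), theta, b))
```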

The neuron dynamics can be expressed by the block diagram shown in Figure 7. The term $X_i(u_i) = G_i u_i$ feeds information about the current state back to the neuron's own input, so that when the input current $I_i$ is higher than $X_i(u_i)$, the state of the neuron increases. In this design, the additional $G_i$ value is present as a diagonal element in the weight matrix [18]. The system is therefore described by Eqs. (12) and (13)

$$C_i\,\frac{du_i}{dt} = \sum_{j=0}^{n-1} T_{ij} V_j - G_i u_i + I_i \tag{12}$$
$$u_i = M^{-1}(V_i) \tag{13}$$

The energy function for the multilevel ADC architecture can also be found from the squared-difference expression, Eq. (14). The levels of the multilevel nonlinearity block of the neuron are $m = 0, 1, 2, \ldots, l-1$, where $l$ represents the base of conversion [21]. The system tends to find the correct base-$l$ digital representation of the analog input signal with the minimum energy function value [21]. Expanding Eq. (14) yields Eq. (15), from which the synaptic weight values of the network are obtained, Eq. (16)

$$E = \frac{1}{2}\left(V_{In} - \sum_{i=0}^{N-1} l^i V_i\right)^2 \tag{14}$$
$$E = \frac{1}{2}\sum_{\substack{i,j=0 \\ i \neq j}}^{N-1} l^{i+j}\, V_i V_j - \sum_{i=0}^{N-1} l^i V_{In} V_i + \frac{1}{2}\sum_{i=0}^{N-1} l^{2i} V_i^2 \tag{15}$$
$$T_{ij} = -l^{i+j} \tag{16}$$
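A quick numerical check of Eqs. (14) and (16) with illustrative values: with weights $T_{ij} = -l^{i+j}$ and a base-$l$ code matching the input, the energy of Eq. (14) vanishes:

```python
import numpy as np

l, N = 4, 2                                  # base-4 conversion, two neurons
w = float(l) ** np.arange(N)                 # level weights l^i

T = -np.power(float(l), np.arange(N)[:, None] + np.arange(N)[None, :])
np.fill_diagonal(T, 0.0)                     # Eq. (16) with zero diagonal

def energy(V_in, code):
    """Squared-difference energy of Eq. (14) for a multilevel code."""
    return 0.5 * (V_in - np.asarray(code, dtype=float) @ w) ** 2

print(energy(9.0, [1, 2]))                   # 1 + 2*4 = 9 -> energy 0.0
```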

The ADC with multilevel neurons also suffers from the local minima problem, which the authors solve by applying a technique similar to that proposed in [14], described in the previous subsection [21]. Another method of eliminating incorrect output responses for the multilevel neuron-based ADC was presented in [22], where a parallel hardware-annealing technique was introduced.

3.3. Hopfield neural network-based level-shifted ADC

In the previous subsections, we discussed various architectures that are modified versions of the Hopfield ADC: the ADC with correction currents, the ADC with a non-symmetric weight matrix and the ADC with multilevel neurons. All these designs are based on the original Hopfield ADC structure. In this subsection, however, we discuss a Hopfield-based ADC that differs from the earlier architectures. The level-shifted neural ADC [23] is a new type of architecture constructed from multiple 2-bit Hopfield ADCs and voltage level shifters (Figure 9). The ADC design proposed by Hopfield and Tank [11] produces a 4-bit digital output, which is of limited practical use in modern technologies. To increase the number of neurons in the Hopfield NN ADC [11], a corresponding scaling of the input and output voltages must be made according to Eq. (2). Therefore, if the goal is to increase the resolution by increasing the number of neurons of the Hopfield ADC, the binary output voltage values from the neurons will be reduced. Furthermore, a resolution change requires appropriate scaling of the weight matrix. These two problems were addressed in [20, 21, 22], where methods to solve them were presented. The level-shifted neural ADC is another method that can solve the resolution improvement issue of the Hopfield NN ADC.

Figure 9.

Level-shifted neural ADC.

The operation principle of the proposed level-shifted neural ADC [23] is simpler than that of the designs in [14, 15, 16, 17, 18, 19, 20, 21, 22]. As mentioned, the design consists of multiple 2-bit Hopfield ADC blocks that operate in parallel. Each successive 2-bit ADC block receives an input signal that is DC-shifted by some small positive voltage level. The design parameters can be adjusted depending on the application of the ADC.

Preliminary results for a 16-quantization-level version of the level-shifted neural ADC are presented in [23]. As the design consists of multiple 2-bit Hopfield ADCs operating in parallel, the number of output bits in the digital code is larger than in the conventional Hopfield ADC. It is therefore proposed to use a feedforward neural network encoder, so that the digital output takes a 4-bit format, and also to reduce the computation error due to the local minima and circuit nonidealities.
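One plausible discrete-time reading of the scheme is sketched below, assuming five ideal 2-bit blocks with uniform DC shifts of three LSBs so that their output levels tile the 16 quantization levels; the actual block count, shift values and encoder of [23] may differ:

```python
import numpy as np

def two_bit_block(v):
    """An ideal 2-bit quantizer: output level 0..3 for its shifted input."""
    return int(np.clip(np.floor(v), 0, 3))

def level_shifted_adc(V_in, n_blocks=5, shift=3.0):
    """Parallel 2-bit blocks, each fed a DC-shifted copy of the input.
    With a 3-LSB shift, each block resolves three quantization steps,
    so the block outputs sum to the overall level; a (here trivial)
    feedforward encoding step maps the level to a 4-bit binary code.
    """
    level = sum(two_bit_block(V_in - k * shift) for k in range(n_blocks))
    bits = [(level >> i) & 1 for i in range(4)]    # LSB first
    return level, bits

print(level_shifted_adc(11.6))   # -> (11, [1, 1, 0, 1])
```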


4. CMOS/memristor hybrid network-based ADC

Since the memristor, the fourth fundamental circuit element [24], was first physically demonstrated by HP Labs in 2008 [25], the device has been receiving considerable attention for its potential to emulate the functionality of biological synapses. During the past decade, many researchers have shown a variety of ways to apply memristors in the hardware design of ANN systems. For instance, in [26] a hybrid CMOS-memristor Hopfield network-based associative memory is demonstrated, while in the work by Guo et al. [27], the CMOS-memristor hybrid architecture is applied to the design of a 4-bit Hopfield neural ADC. Figure 10 shows the schematic of the system proposed in [27].

Figure 10.

CMOS-memristor hybrid neural ADC.

The CMOS-memristor hybrid Hopfield ADC [27] consists of a memristor-based weight matrix and sigmoidal CMOS neurons. The advantage of implementing constant synapses with memristors (in the Hopfield NN ADC design, the synaptic weights are preset and kept unchanged [11]) is that, being nanoscale devices, memristors consume much less power [27]. Moreover, they significantly reduce the on-chip area compared to CMOS-based synaptic weights [27]. In their work, Guo et al. [27] demonstrated the proposed system in simulation and also successfully implemented the circuit in hardware.

Memristors are tuned by applying voltage or current pulses of gradually changing amplitude (and/or width) until the device reaches the desired resistance state [27]. In order to sustain the pre-programmed resistances in the memristive weight matrix, the network operating region (the analog input and the neuron maximum output voltage) was scaled down to prevent resistance-state fluctuations in the memristors [27]. The CMOS-memristor hybrid ADC applied a neuron-state resetting technique similar to that demonstrated in [11] to reduce the effects of the local minima states.
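The tuning loop can be sketched as a closed-loop procedure; the polarity convention (SET pulses lower resistance), step size and tolerance below are assumptions for illustration, not the parameters of [27]:

```python
def tune_memristor(read_resistance, apply_pulse, target_r,
                   tol=0.02, v_start=1.0, v_step=0.05, max_pulses=1000):
    """Apply pulses of gradually growing amplitude, with polarity chosen
    to move the device towards target_r, until the read-out resistance
    is within a relative tolerance (sketch of the scheme in [27])."""
    v = v_start
    for _ in range(max_pulses):
        r = read_resistance()
        if abs(r - target_r) / target_r <= tol:
            return r                          # device programmed
        polarity = 1 if r > target_r else -1  # assume SET (+) lowers R
        apply_pulse(polarity * v)
        v += v_step                           # gradually increase amplitude
    raise RuntimeError("target resistance not reached")

# Toy usage with a linearized device model (illustrative only):
device = {"r": 10e3}
print(tune_memristor(lambda: device["r"],
                     lambda v: device.update(r=device["r"] - 400.0 * v),
                     target_r=5e3))
```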

Another type of CMOS-memristor hybrid neural ADC is the T-model neural ADC architecture proposed in [2]. In the design by Wang et al. [2], an additional least mean square (LMS) training algorithm is applied in order to adapt the system operation to given conditions. The LMS algorithm used in [2] gives the ADC flexibility in terms of its voltage operating region. The training algorithm is implemented by means of a digital training block connected to the T-model weight matrix. The works presented in [2, 27] introduce neural ADC architectures that utilize memristors as synaptic weight elements and demonstrate that the low power consumption of memristive devices can benefit Hopfield NN ADC design. However, the Hopfield network still requires additional circuitry to eliminate the local minima-related errors.


5. Discussion

The Hopfield network-based ADCs represent a compact approach to implementing the analog-to-digital conversion task. However, when the model is implemented in hardware, multiple circuit nonidealities create errors in the digital output that must somehow be corrected. For instance, as discussed previously, the offset response (hysteresis) of the comparators after each conversion creates conditions for the network to settle on incorrect patterns. A possible solution for eliminating the offset is to reset the comparators to the initial 0-V threshold state after each conversion [11]. However, this method is not preferable in terms of circuit implementation, as such a circuit requires more power. Another problem, as previously discussed, is the local minima behaviour of the Hopfield network, which creates spurious states so that the output does not correspond to the desired response. The existence of the local minima was deduced by circuit analysis techniques in [14], where it was proposed to add a feedback current that balances the network and creates a single energy minimum for the whole system dynamics. Thus, the Hopfield NN-based ADC examples discussed in this chapter have still not been adopted in practice. Even though the local minima problem has been mitigated, there is little analysis of resolution improvement. In [20, 21, 22], 8-bit resolution was achieved by means of the multilevel neuron structure. However, the ADC structure becomes much more complex, since it incorporates multilevel nonlinearity blocks in each neuron and also uses the correction current technique of [14]. Therefore, to obtain the best possible performance from Hopfield network-based ADC designs, the complexity of the system components must be increased, and many parameters must be taken into account, such as circuit mismatches and offsets, since they can affect the output significantly. In addition, the analog structure of Hopfield network-based ADCs limits resolution improvement and thus makes these designs difficult to implement and to make compatible with conventional ADCs.

An alternative ADC structure based on the Neural Engineering Framework (NEF) was demonstrated in [28], where it is proposed to shift as much of the system as possible into the digital domain: only the front end of the ADC incorporates a feedforward-type neural network encoder that passes the signal to analog neurons, and the rest of the processing is done in digital form. Since the design uses a large population of neurons at the input, the system is robust to the failure of some of them. Moreover, the stability issue does not arise in this type of architecture, as the neural network used in the design is purely feedforward. The NEF ADC is generally flexible and scalable, as it mostly consists of digital circuitry, and can therefore be adapted to any system requirements and technologies. However, the unresolved issue of the design is the very high power consumption of the network [28].


6. Conclusion

This chapter has presented a review of existing neural network-based ADC designs. A/D conversion is an essential process in microelectronic systems, creating the connection between analog systems (e.g., sensors) and digital-processing circuitry [1]. With modern advancements in submicron CMOS technologies, the variety of high-speed and high-resolution ADCs used in different applications has increased [1]. Nevertheless, despite the maturity of the field, the complexity of building an ADC has not been reduced. Moreover, for applications that require higher performance and flexibility, the resources of conventional ADC architectures may not be sufficient. Artificial neural networks are considered a way to tackle such high requirements on speed and performance, and A/D conversion is among the operations that can be performed by means of an ANN.

In his classical works [9, 10], Hopfield proposed a mathematical CAM model consisting of a group of interconnected two-state neurons that exhibits collective computational behaviour. He further presented a design that can solve optimization problems [11]. A/D conversion in his work [11] was treated as a simple optimization problem in which the value of an energy function describing the dynamics of the ADC system is to be minimized. He presented a 4-bit NN ADC architecture that can be implemented in hardware [11]. However, this ADC architecture has an intrinsic imperfection: multiple local minima of the energy function create digital errors at the output of the ADC [14, 15, 16, 17, 18, 19].

To solve the local minima problem, several methods were proposed in [14, 15, 16, 17, 18, 19]. As discussed in Section 3, there are two main methods of eliminating the local minima states and obtaining one global minimum. In the modified Hopfield ADC design, correction currents are applied back to the input of each neuron in order to reduce the overlapping current occurring between adjacent output codes [14]. This method eliminates the local minima and creates one global minimum towards which the network flow is attracted [14]. Another interesting method that also reduces the effects of local minima is the neural ADC with a non-symmetric weight matrix [15, 16, 17, 18, 19]. ADC architectures with a non-symmetric weight matrix do not create multiple energy minima; as a result, such networks are also attracted to a global minimum energy state [15, 16, 17, 18, 19].

The multilevel neural ADC architecture [21] is based on the original Hopfield ADC structure but with a modified neuron model. The authors in [21] proposed a multiple-state neuron implementation aimed at improving the resolution of the ADC. A similar goal, improved resolution, was pursued by the level-shifted neural ADC architecture [23], built from multiple Hopfield ADC blocks and voltage level shifters.

In addition to the CMOS-based neural ADC structures presented in Section 3, examples of CMOS-memristor neural ADC architectures [2, 27] are discussed in Section 4. The memristor is a promising device aimed at expanding the capabilities of traditional CMOS-based systems, and the application of memristors in neuromorphic circuits and the development of new memristor-based architectures are currently being widely discussed. In [2, 27], traditional neural ADC architectures were modified by the addition of memristors. The demonstrated results in [2, 27] show the potential of applying memristors in CMOS-based systems: memristors consume less power and save on-chip area, which makes the memristor-based neural ADC an attractive alternative to the traditional NN-based ADC designs discussed previously. To sum up, this chapter has presented a general overview of the NN-based ADC design area.

References

  1. Van de Plassche RJ. CMOS Integrated Analog-to-Digital and Digital-to-Analog Converters. 2nd ed. Netherlands: Kluwer Academic Publishers; 2003. 588 p
  2. Wang W, You Z, Liu P, Kuang J. An adaptive neural network A/D converter based on CMOS/memristor hybrid design. IEICE Electronics Express. 2014;11(24):1-6
  3. Yang H, Sarpeshkar R. A bio-inspired ultra-energy-efficient analog-to-digital converter for biomedical applications. IEEE Transactions on Circuits and Systems I: Regular Papers. 2006;53(11):2349-2356
  4. Aiello L. Brains and guts in human evolution: The expensive tissue hypothesis. Brazilian Journal of Genetics. 1997;20(1):141-148
  5. McCulloch WS, Pitts WH. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics. 1943;5:115-133
  6. Cooper L. A possible organization of animal memory and learning. In: Lundquist B, Lundquist S, editors. Proceedings of the Nobel Symposium on Collective Properties of Physical Systems. New York: Academic Press; 1973. pp. 252-264
  7. McCulloch WS, Pitts WH. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics. 1943;5(4):115-133
  8. Longuet-Higgins HC. The non-local storage of temporal information. Proceedings of the Royal Society of London B: Biological Sciences. 1968;171(1024):327-334
  9. Hopfield J. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences of the United States of America. 1982;79(8):2554-2558
  10. Hopfield J. Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences of the United States of America. 1984;81(10):3088-3092. DOI: 10.1073/pnas.81.10.3088
  11. Tank D, Hopfield J. Simple “neural” optimization networks: An A/D converter, signal decision circuit, and linear programming circuit. IEEE Transactions on Circuits and Systems. 1986;33(5):533-541
  12. Rosenblatt F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review. 1958;65(6):386-408
  13. Hopfield J, Tank D. Computing with neural circuits: A model. Science, New Series. 1986;233(4764):625-633
  14. Lee B, Sheu B. Modified Hopfield neural networks for retrieving the optimal solution. IEEE Transactions on Neural Networks. 1991;2(1):137-142
  15. Chande V, Poonacha P. On neural networks for analog to digital conversion. IEEE Transactions on Neural Networks. 1995;6(5):1269-1274
  16. Avitabile G, Manetti S. Some structures for neural based A/D conversion. Electronics Letters. 1990;26(18):1516-1517
  17. Gray D, Michel A, Porod W. Application of neural networks to sorting problems. In: 27th IEEE Conference on Decision and Control; 7-9 Dec; IEEE; 1988. pp. 350-351
  18. Sun CL, Tang Z, Ishizuka O, Matsumoto H. Synthesis and implementation of T-model neural-based A/D converter. In: IEEE International Symposium on Circuits and Systems; 10-13 May; IEEE; 1992. pp. 1573-1576
  19. Avitabile G, Forti M, Manetti S, Marini M. On a class of nonsymmetrical neural networks with application to ADC. IEEE Transactions on Circuits and Systems. 1991;38(2):202-209
  20. Yuh J, Newcomb R. Circuits for multi-level neuron nonlinearities. In: International Joint Conference on Neural Networks; 7-11 June; IEEE; 1992. pp. 27-32
  21. Yuh J, Newcomb R. A multilevel neural network for A/D conversion. IEEE Transactions on Neural Networks. 1993;4(3):470-483
  22. Bang SH, Chen O, Chang J, Sheu B. Paralleled hardware annealing in multilevel Hopfield neural networks for optimal solutions. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing. 1995;42(1):46-49
  23. Tankimanova A, Kumar Maan A, James A. Level-shifted neural encoded analog-to-digital converter. In: 24th IEEE International Conference on Electronics, Circuits and Systems (ICECS 2017)
  24. Kumar Maan A, Ai JD, James AP. A survey of memristive threshold logic circuits. IEEE Transactions on Neural Networks and Learning Systems. 2016;PP(99):1-13
  25. Strukov DB, Snider GS, Stewart DR, Williams RS. The missing memristor found. Nature. 2008;453(7191):80-83
  26. Hu SG, Liu Y, Liu Z, Chen TP, Wang JJ, Yu Q, Deng LJ, Yin Y, Hosaka S. Associative memory realized by a reconfigurable memristive Hopfield neural network. Nature Communications. 2015;6:7522
  27. Guo X, Merrikh-Bayat F, Gao L, Hoskins BD, Alibart F, Linares-Barranco B, Theogarajan L, Teuscher C, Strukov DB. Modeling and experimental demonstration of a Hopfield network analog-to-digital converter with hybrid CMOS/memristor circuits. Frontiers in Neuroscience. 2015;9:488
  28. Mayr CG, Partzsch J, Noack M, Schüffny R. Configurable analog-digital conversion using the neural engineering framework. Frontiers in Neuroscience. 2014;8:201
