
Neuromorphic Computing between Reality and Future Needs

Written By

Khaled S. Ahmed and Fayroz F. Shereif

Submitted: 09 November 2022 Reviewed: 20 January 2023 Published: 01 April 2023

DOI: 10.5772/intechopen.110097

From the Edited Volume

Neuromorphic Computing

Edited by Yang (Cindy) Yi and Hongyu An


Abstract

Neuromorphic computing is a computer engineering approach that models the elements of a computer on the human brain and nervous system. Many sciences, including biology, mathematics, electronic engineering, computer science, and physics, have been integrated to construct artificial neural systems. This chapter covers the basics of neuromorphic computing together with existing systems, including their materials, devices, and circuits. The last part covers algorithms and applications in selected fields.

Keywords

  • neuromorphic computing expectation
  • neuromorphic systems
  • neuromorphic chips
  • applications
  • algorithms

1. Introduction

Neuromorphic computing, a technology that mimics the neuro-biological architectures present in the nervous system using electronic circuits, has attracted attention as next-generation computing because it can process complex data with high efficiency, high speed, and low power consumption. Its importance to industry has grown because it efficiently executes artificial intelligence algorithms by imitating the neural structure of the human brain. Conventional von Neumann computing, with its separate processors and memory systems, is inefficient for machine learning because of the processor-memory bottleneck: machine learning is a distinctive workload that iterates simple computations over a lot of data, so there is heavy data traffic between processors and memory. A neuromorphic computing system, by contrast, consists of multiple neurons and synapses that both compute and store data, and a neural network that connects them. Such a system can therefore compute the simple iterations of machine learning training efficiently. This chapter will discuss neuromorphic computing goals and challenges, showing how research into materials, devices, and algorithms is raising expectations for the neuromorphic computing field. We will discuss neuromorphic systems that already exist today and expectations for what neuromorphic computing can achieve in the future.


2. What is neuromorphic computing?

2.1 Neuromorphic computing history

The development of the perceptron in 1958 served as the forerunner of the artificial neurons employed in modern neural networks. Given the limited understanding of the brain’s inner workings at the time, the perceptron was a crude attempt to imitate some aspects of biological neural networks. The U.S. Navy planned to use the perceptron as dedicated hardware for image recognition. The technology was subject to a great deal of hype before it became clear that it could not perform the required task.

Carver Mead, a professor at Caltech, first proposed neuromorphic computing in the 1980s. Mead’s description of the first analogue silicon retina presaged a brand-new class of physical computations motivated by the neural paradigm. In an article about neural computation in analogue VLSI, Mead is also reported as believing that, given a thorough grasp of how the nervous system functions, there is nothing the human nervous system does that cannot be replicated by computers.

However, the ubiquitous and expanding usage of AI, machine learning, and neural networks in consumer and enterprise technologies can partly explain the recent investment and enthusiasm surrounding neuromorphic research. It can also be largely attributed to the perception among many IT specialists that Moore’s Law is coming to an end. According to Moore’s Law, the number of components that fit on a chip doubles roughly every two years at constant cost.

Major chip manufacturers like IBM and Intel are paying close attention to neuromorphic computing because it has the potential to get around conventional architectures and achieve radically higher levels of efficiency. Intel, for instance, launched its first Loihi neuromorphic research chip in 2017, just as Moore’s Law appeared to be nearing its end.

Mead, who coined the term “Moore’s Law” by condensing Gordon Moore’s ideas, was quoted in 2013 as saying, “Going to multicore chips helped, but now we are up to eight cores and it does not look like we can go much further. People have to crash into the wall before they pay attention.” This belief supports the idea that there are ups and downs in the hype and popular discourse surrounding artificial intelligence, with periods of low interest sometimes referred to as AI winters and times of high interest frequently brought on by an urgent issue that needs to be resolved, in this case the end of Moore’s Law. SynSense, a leading provider of neuromorphic intelligence and solutions, has announced a collaboration with BMW to integrate neuromorphic chips into smart cockpits and to explore related fields.

2.2 Neuromorphic computing architecture

Neuromorphic computing is a method of computer engineering in which elements of a computer are modelled after systems in the human brain and nervous system. The phrase describes the creation of both software and hardware components in computers. To construct artificial neural systems that are inspired by biological architecture, neuromorphic engineers depend on a variety of fields, including computer science, biology, mathematics, electronic engineering, and physics.

Researchers have investigated neuromorphic computing throughout the years to create various types of machines that think and learn like humans. Studies have imitated how the human brain learns and computes by using computer hardware, in the form of an artificial neural network, to simulate human learning abilities. The anatomy of the human brain, which has tens of billions of neurons and trillions of synapses, is extremely intricate. As shown in Figure 1, neurons are made up of a cell body, an axon that generates neural signal impulses, and dendrites that receive signals from other neurons. A synapse is a physical component that enables one neuron to transmit an electrical signal to another neuron.

Figure 1.

Three interconnected neurons. A presynaptic neuron transmits the signal toward a synapse, whereas a postsynaptic neuron transmits the signal away from the synapse [1].

As seen in Figure 2b, neuromorphic hardware typically comprises neurons and synapses that imitate the human brain’s neural network. Each neuron functions as a data-processing centre in the neuromorphic hardware, and neurons are linked together in parallel by synapses that convey data [2, 3, 4]. Because neuromorphic hardware does not funnel all traffic through a single signal bus, it avoids the von Neumann bottleneck. To realise this in a practical design, artificial synaptic devices that reflect the properties of bio-synapses must be developed instead of regular CMOS devices. The block diagrams for the traditional von Neumann architecture and the emerging neuromorphic design are shown in Figure 2.

Figure 2.

Block diagram of computing systems: (a) von Neumann architecture; (b) neuromorphic architecture [5].

Significant improvements have recently been made in the field of neuromorphic computing. The most recent progress can be categorised into three major stages [6, 7, 8]. The initial stage is a GPU-centric system, which is primarily used for learning and supports artificial intelligence using a graphics processing unit (GPU). The next stage, an ASIC-centric system, is currently a hot topic of research: an efficient, low-power application-specific integrated circuit (ASIC) for machine learning is anticipated from this trend, and numerous semiconductor firms are creating ASIC chips [9, 10, 11, 12]. Finally, it is anticipated that neuromorphic computing will one day be realised as dedicated neuromorphic hardware, enabling ultra-low-power and ultra-high-speed computation to support general-purpose artificial intelligence.

The neuromorphic-centric hardware must be able to handle massive amounts of data in parallel while using incredibly little power. Furthermore, compared to current technology that uses ordinary CMOS components, the neuromorphic semiconductor chip demands a quicker rate of computing. This suggests that creating an emerging synaptic device is essential to the development of neuromorphic-centric hardware.

2.3 Neuromorphic computing goals

Everything appears to be becoming “smarter” these days. Artificial intelligence (AI) is being used by an increasing number of goods and services, from industrial machinery to residential appliances, to comprehend user requests, analyse data, and spot patterns. It is simple to understand why AI-powered goods are so popular. Instead of using buttons or touchscreens, smart interfaces enable voice and gesture control of devices, which is a far more intuitive way to use them. Additionally, AI has the potential to make products more autonomous, freeing us from laborious or repetitive tasks. Smart items can also enable ongoing optimisation and data analysis, which can be used to monitor our health and send us alerts, as well as to anticipate when a piece of equipment needs to be serviced or replaced. The popularity of smart devices today is increasing demand for ever more complex AI-powered experiences, and we are beginning to push the capabilities of the hardware that is now available. Today’s smart devices actually rely on a lot of processing that is done remotely in the cloud or a data centre, where there is sufficient computing power to run the required algorithms. This means that a network connection is necessary, and transmitting data back and forth can also increase latency. When sending particular types of data to the cloud, there are additional real and perceived data-privacy factors to take into account.

These factors suggest benefits of increasing the amount of smart computing housed within the device itself. Since the processing is carried out in edge network devices rather than a centralised cloud, this is referred to as “edge AI.” However, because many edge processors are mobile, they frequently rely on batteries for power. How can we run low-power edge devices with power-hungry AI algorithms? To do that, we are going to need to reevaluate how AI hardware is created.

This is the goal of neuromorphic computing. The information processing methods used by a biological brain are the foundation of neuromorphic computing design. Consider the 80–100 billion neurons found in the average human brain, each of which functions remarkably efficiently and asynchronously to offer huge parallel processing. It is thanks to this combination of power and efficiency that we are able to be so intelligent without constantly consuming enormous quantities of energy. Spiking neural networks are one of the most effective ways to simulate how a real neuron fires, or “spikes”, to relay a signal before going back to being silent. The end result is a system that uses much less power than the artificial neural networks currently employed in the majority of AI systems. Additionally, such efficiency makes it possible for considerably more AI processing to be done on smaller, low-power devices at the network edge.
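
To make the spiking model concrete, below is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, one common abstraction behind spiking neural networks. The time constant, threshold, and input current here are illustrative assumptions, not parameters of any particular chip.

```python
# A minimal sketch of a leaky integrate-and-fire (LIF) neuron.
# All parameter values are illustrative.
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Integrate input current over time; emit a spike (1) when the
    membrane potential crosses threshold, then reset and stay silent."""
    v = v_rest
    spikes = []
    for i_t in input_current:
        # Leaky integration: potential decays toward rest, driven by input.
        v += dt / tau * (v_rest - v) + i_t
        if v >= v_thresh:
            spikes.append(1)   # the neuron "fires"
            v = v_reset        # and returns to quiescence
        else:
            spikes.append(0)
    return np.array(spikes)

# A brief current pulse produces sparse spiking; zero input costs nothing.
current = np.concatenate([np.zeros(10), 0.3 * np.ones(20), np.zeros(10)])
print(simulate_lif(current))
```

Note that the neuron does no work, and emits nothing, while its input is zero; this sparseness is what the energy savings rest on.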


3. Neuromorphic computing working principles

3.1 Von Neumann vs. neuromorphic

Non-von Neumann computers that are inspired by the structure and operation of brains and are made up of neurons and synapses are referred to as neuromorphic computers. Von Neumann computers consist of distinct CPUs and memory units, with the latter housing data and instructions. On the other hand, with a neuromorphic computer, the neurons and synapses control both processing and memory. In contrast to von Neumann computers, which use explicit instructions to define programmes, neuromorphic computers instead use the characteristics and structure of the neural network. In addition, while von Neumann computers encode information as numerical values represented by binary values, neuromorphic computers receive spikes as input, where the associated time at which they occur, their magnitude and their shape can be used to encode numerical information. Binary values can be turned into spikes and vice versa, but the precise way to perform this conversion is still an area of study in neuromorphic computing.
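
As a concrete illustration of one possible encoding, the sketch below uses simple rate coding, where a numeric value in [0, 1] sets the probability of a spike per time step. This is only one of several schemes (timing, magnitude, and shape can also carry information, as noted above), and the window length is an arbitrary assumption.

```python
# A minimal sketch of one common (but by no means canonical) way to move
# between numeric values and spikes: rate coding.
import numpy as np

rng = np.random.default_rng(seed=0)

def encode_rate(value, n_steps=100):
    """Encode a scalar in [0, 1] as a Bernoulli spike train."""
    return (rng.random(n_steps) < value).astype(np.uint8)

def decode_rate(spike_train):
    """Decode by averaging: the firing rate estimates the value."""
    return spike_train.mean()

spikes = encode_rate(0.75)
print(decode_rate(spikes))  # approximately 0.75, up to sampling noise
```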

These are some fundamental operational differences between neuromorphic computers and von Neumann machines:

  • Highly parallel operation: All of the neurons and synapses in neuromorphic computers are potentially capable of operating simultaneously due to their inherent parallelism; nevertheless, compared to parallelized von Neumann systems, the computations carried out by neurons and synapses are comparatively straightforward.

  • Collocated processing and memory: In neuromorphic hardware, the idea of a distinction between processing and memory is absent. In many implementations, neurons and synapses both do processing and store information, despite the fact that neurons are sometimes thought of as processing units and synapses as memory. The von Neumann bottleneck relating to the processor/memory separation, which slows down the maximum throughput that may be attained, is lessened by collocating processing and memory. Additionally, this collocation aids in avoiding main memory data accesses, which are common in conventional computing systems and use a lot more energy than compute energy.

  • Inherent scalability: Since adding more neuromorphic chips implies increasing the potential number of neurons and synapses, neuromorphic computers are designed to be intrinsically scalable. In order to run larger and larger networks, it is possible to treat a number of physical neuromorphic chips as a single huge neuromorphic implementation. Several large-scale neuromorphic hardware devices, like SpiNNaker and Loihi, have been used to successfully achieve this.

  • Event-driven computation: Neuromorphic computers leverage event-driven computation and temporally sparse activity to allow for extremely efficient computation. Neurons and synapses only perform work when there are spikes to process, and typically, spikes are relatively sparse within the operation of the network (a minimal code sketch of this style of computation follows this list).

  • Stochasticity: To account for noise, neuromorphic computers can incorporate a notion of randomness, such as in the firing of neurons.
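
As promised above, here is a minimal sketch of the event-driven style, assuming a hypothetical toy three-neuron network: computation happens only when a spike event is popped from a queue, so idle neurons cost nothing. The topology, weights, threshold, and unit delay are all illustrative.

```python
# A minimal sketch of event-driven computation over a toy spiking network.
import heapq

# Hypothetical toy topology: synaptic weights indexed by (source, target).
weights = {(0, 1): 0.6, (0, 2): 0.6, (1, 2): 0.6}
potential = {0: 0.0, 1: 0.0, 2: 0.0}
THRESHOLD = 1.0

# Two external input spikes delivered to neuron 0.
events = [(0.0, 0), (0.5, 0)]
heapq.heapify(events)

while events:
    t, src = heapq.heappop(events)
    print(f"t={t:.1f}: neuron {src} spikes")
    # Only the fan-out of the spiking neuron does any work here; every
    # other neuron and synapse stays idle (and costs nothing).
    for (s, dst), w in weights.items():
        if s != src:
            continue
        potential[dst] += w
        if potential[dst] >= THRESHOLD:
            potential[dst] = 0.0                    # reset after firing
            heapq.heappush(events, (t + 1.0, dst))  # 1.0 = assumed delay
```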

The literature makes extensive note of the characteristics of neuromorphic computers and provides reasons for their use and implementation. The extraordinarily low power consumption of neuromorphic computers is one of their most appealing qualities; they frequently consume orders of magnitude less power than conventional computing systems. Because they are massively parallel and event-driven, only a small percentage of the system is normally active at any given moment, with the remainder being idle. Energy efficiency alone is a compelling incentive to examine the use of neuromorphic computers, given the rising energy cost of computing and the rising number of applications (such as edge computing applications) that have energy limits. Furthermore, neuromorphic computers provide a suitable platform for many of today’s artificial intelligence and machine learning applications since they naturally incorporate neural network-style processing. Additionally, neuromorphic computers hold potential for utilising their inherent computational capabilities to carry out a wide range of other forms of computing.

It is unclear whether these are the only parts of biological brains that are significant for computation, despite the fact that each of these qualities of neuromorphic computers is modelled after traits of the brain and has received priority in recent years. For instance, glial cells are one of many additional types of neural components that may be beneficial for computation, despite the fact that neurons and synapses have been chosen as the main computational units of neuromorphic computers. Furthermore, neurons and synapses have proved a useful level of abstraction for neuromorphic computers, but it is still unclear whether they are the best level of abstraction.

Contrary to some of the upcoming computing technologies, the research community already has access to various physical realisations of neuromorphic hardware. Numerous large-scale neuromorphic computers have been created with various methodologies and objectives. SpiNNaker and BrainScaleS were created with funding from the Human Brain Project of the European Union in order to facilitate large-scale neuroscience simulations. It has also been suggested to use slightly more complicated neuron models in an improved digital neuromorphic processor known as the online-learning digital spiking neuromorphic (ODIN). The Tianjic chip, a platform that supports both neuromorphic spiking neural networks and conventional artificial neural networks for various problem types, is one of the neuromorphic platforms aiming at broader computation for wider classes of applications [13]. Both business and academia are interested in neuromorphic systems; examples from business include IBM’s TrueNorth [14] and Intel’s Loihi [15], while academic initiatives include DYNAPs [16], Neurogrid [17], IFAT [18], and BrainScales-2 [19]. Neuromorphic hardware such as BrainScales-2, running at considerably faster timescales than biological ones, has been shown to be useful for optimising learning-to-learn scenarios (situations where an optimization method is used to specify how learning occurs) for spiking neural networks [20].

All of the large-scale neuromorphic computers mentioned above are silicon-based and implemented using conventional complementary metal oxide semiconductor technology [21, 22, 23]. However, there is a lot of research under way in the neuromorphic community to create new types of materials for neuromorphic implementations, such as phase-change materials, ferroelectrics, non-filamentary devices, topological insulators, and channel-doped biomembranes. Memristors are frequently utilised in the literature as the fundamental device, using resistive memory to collocate processing and memory [24, 25], although other types of devices, such as optoelectronic devices, have also been employed to create neuromorphic computers [26]. Neuromorphic computers can be implemented using a variety of devices and materials, each of which has its own operating properties, including speed of operation, energy consumption, and biological likeness. Because neuromorphic hardware can be implemented with such a variety of tools and materials, its properties can be tailored to a particular application.

The majority of current research in the field of neuromorphic computing focuses on the aforementioned hardware systems, devices, and materials; however, to make the best use of neuromorphic computers in the future, to take advantage of all of their special computational properties, and to influence their hardware design, algorithms and applications for these systems must be developed as well. From this vantage point, we give an overview of the state of the art in neuromorphic algorithms and applications and offer a look ahead at the possibilities for the development of neuromorphic computing in computer science and computational science. It is important to note that a variety of different sorts of technology have all been referred to as neuromorphic computing.

3.2 Neuromorphic computing and artificial general intelligence (AGI)

Artificial general intelligence (AGI) is the term used to describe AI that demonstrates intelligence comparable to that of humans. One could say it’s the holy grail of all AI. That degree of intelligence has not yet been attained by machines and might never be. However, neuromorphic computing presents brand-new opportunities for advancing in that direction.

For example, the Human Brain Project, which features the neuromorphic supercomputer SpiNNaker, aims to produce a functioning simulation of the human brain and is one of many active research projects interested in AGI.

The criteria for determining whether a machine has achieved AGI are debated, but commonly cited requirements include whether the machine can reason and make judgements under uncertainty; whether it can plan, learn, communicate using natural language, and represent knowledge, including common-sense knowledge; and whether it can integrate these skills in the pursuit of a common goal.

The ability to imagine, subjective experience, and self-awareness are occasionally included. The well-known Turing Test and the Robot College Student Test, in which a machine enrols in classes and earns a degree just like a human would, are two more suggested techniques for verifying AGI.

There are disagreements over how it should be handled ethically and legally if a machine ever attained human intelligence. Some contend that it ought to be regarded under the law as a nonhuman animal. These debates have been going on for years, in part because we still do not fully understand consciousness as a whole.


4. Challenges of neuromorphic computing

Conventional CMOS-based neuromorphic systems have shown promise in delivering brain-like features including pattern recognition, adaptive learning, and sophisticated sensing. However, their future potential is constrained by problems inherent in the material qualities of conventional semiconductors. For creating a system that imitates a biological brain, quantum materials and technologies are anticipated to function as a ground-breaking, next-generation computational platform. The field is at a point where further fundamental investigation into every facet of the issue would be extremely beneficial; this section reviews the state of the art in the field and highlights fresh, possibly ground-breaking concepts.

Ab initio theoretical calculations combined with state-of-the-art synthesis and nanoscale structural, electrical, magnetic, and optical characterisation are some of the techniques that will be used to better understand the properties of quantum materials, the impact of defects, and their ultimate effect on devices and systems. It is crucial to conduct a thorough, quantitative, interdisciplinary analysis of the relevant materials under intense (electrical, thermal, magnetic, and physical) stresses. Modern technologies for synthesis and characterisation are now at a point where they can give precise control and information about a material’s structure and how it affects its physical properties.

With a full understanding of the properties of the materials, new design ideas are being developed that go beyond typical semiconductor devices based on the charge or even the spin of the electron. For instance, Mott physics offers a way to build brain-inspired technology. Synaptic devices encode a memory state by shifting and modifying the concentration of defects, whereas “neuronal” devices accumulate a metallic phase and modulate the conduction through metallic filamentary development (e.g., of oxygen vacancies). In general, the combination of memristive phenomena with light-sensitive oxides is leading to exciting design ideas for memory, networks, and neuro-sensors. Synaptic behaviour, which calls for plasticity in a material’s response function, can be achieved with polymer-gated transistors with numerous functionalities, as well as in photonic and magnetoelectric systems. Spin-based devices, on the other hand, benefit from pronounced nonlinearities in the magnetic responses of quantum materials. Optomagnetic neural networks can be adjusted to construct low-dissipation networks thanks to the low-energy plasticity and non-volatility of magnetic properties, while magnetic anisotropies have been proposed as a means of training networks. Nanoscale magnetic phenomena can also be exploited in nanowire networks. Time-dependent responses that resemble synaptic and dendritic trees as well as neural spikes can be produced by superconducting Josephson devices. Many physical phenomena, such as reservoir computing and in-memory computing, are being studied as avenues to duplicate the magnificent processes carried out by the brain. It is crucial to comprehend the ultimate physical limits of these phenomena, including the smallest sizes, shortest times, or closest physical proximity permitted by the physics of the materials and devices; all of these are directly related to scaling issues. Only then can these phenomena be incorporated into functional devices.

Emergent behaviour is at the core of biological neurons’ and synapses’ complexity and remarkable effectiveness. Numerous self-organising principles may be seen in the emergent properties of quantum materials and their nonlinearities, which offer a variety of static and dynamic states helpful for simulating sophisticated brain devices and networks. [27] A diversity of different memory states, fine-tuning of critical behaviour, and the formation of unique collective phenomena are possible thanks to the metastability provided by the complex energy landscape of the quantum system [26, 28]. It is crucial to take into account the network in which the devices are located in order to approach the problem of building a brain-like machine. The network is the system, to adapt Herbert Kroemer’s aphorism to bio-inspired neuromorphic computing.


5. Existing neuromorphic systems

5.1 The Tianjic Chip

This chip is the world’s first hybrid-paradigm chip for brain-inspired computing, designed to facilitate the development of artificial general intelligence (AGI) (Figure 3).

Figure 3.

Tianjic chip architecture [29].

Two key approaches to the creation of AGI are computer science- and neuroscience-based. Both have benefits and drawbacks of their own. The combination of these two methods is currently thought to be one of the most promising approaches to artificial general intelligence (AGI), and a strong hardware foundation that supports hybrid-paradigm computing is one of its key pillars.

The CBICR team proposed a novel brain-inspired computing architecture, the hybrid-paradigm Tianjic chip, which can simultaneously support computer-science-oriented and neuroscience-oriented neural networks, such as artificial neural networks and spiking neural networks, and capitalise on the strengths of both. This architecture is based on the computing principles of brain science. The second-generation Tianjic chip was introduced in 2017, building upon the first-generation chip created in 2015. After continual design refinement, the modern Tianjic achieves versatile functionality, fast speed, and low power: it has 20% higher density, 10 times the speed, 100 times the internal bandwidth, better flexibility/scalability, and more complete functionality than IBM’s TrueNorth chip. The group has also created a first-generation software tool chain that automates compilation and model mapping.

To demonstrate a flexible and scalable AGI development platform, an unmanned bicycle was built. Using just one chip, it was able to perform a variety of complex tasks, including speech recognition, object tracking and detection, balance control, obstacle avoidance, and decision making. Thanks to its 40,000 neurons and 10 million synapses, the device ran 160 times faster and 120,000 times more efficiently than a comparable GPU. This work offers a fresh perspective and a new venue for academic study of AGI, which will facilitate its advancement and its impact on industry.

5.2 Intel’s Loihi Chips and Intel’s Pohoiki Beach computers

Intel improved its Pohoiki Beach neuromorphic system, paving the way for quicker processing in AI for the Internet of Things and for autonomous vehicles and equipment. More than 60 partners will use the system to tackle challenging problems that need extensive number crunching. It is built to imitate 8 million neurons and runs 64 Loihi research chips.

Loihi, which was originally launched in 2017, is up to 10,000 times more efficient and can process information up to 1000 times faster than conventional CPUs. Researchers will be able to scale up sparse coding, simultaneous localization and mapping (SLAM), and path planning to learn from new data inputs and adapt.

According to Applied Brain Research, the advantages of Loihi also include decreased power consumption, which is around 109 times lower than a GPU and five times lower than comparable IoT inference hardware.

Loihi was motivated by how the brain approaches problem-solving, which is the fundamental origin of all neuromorphic research. According to Intel, neuromorphic computing is computer technology that mimics the brain’s neural structure and can use context-sensitive reasoning and common sense, and deal with ambiguity and contradiction; AI in its early stages was more literal and deterministic, and less robust.

According to Prof. Konstantinos Michmizos of Rutgers University, “Loihi allows us to create a spiking neural network that imitates the underlying neuronal representations and activity of the brain.” His lab was able to operate mobile robots precisely and with 100 times less energy while using a Loihi-run network as opposed to a commonly utilised CPU.

Neuromorphic computing-based AI has been touted by Intel as having applications in medical imaging and autonomous vehicles (AV). With AV, solutions could emerge to deal with uncertainties, like a ball rolling onto the roadway or an aggressive driver in another car on the road. AI can be used to highlight areas of a medical image where an ailment is more likely to be present.

Without providing a detailed roadmap, Intel stated that their research will eventually result in the commercialization of neuromorphic technology. Moore’s Law and process-node computing cannot continue to expand, hence there is a critical need to sustain the gains in computing power and performance. According to Intel, specialised architectures like the Pohoiki Beach neuromorphic approach are required for particular developing applications like AV, smart homes, and cybersecurity (Figure 4).

Figure 4.

Intel uses 64 Loihi research chips in its neuromorphic system called Pohoiki Beach [30].

5.3 IBM’s TrueNorth chip

In 2014, IBM released the neuromorphic CMOS integrated circuit known as TrueNorth. With 4096 cores and 256 programmable simulated neurons per core, for a total of just over a million neurons, it is a manycore processor network on a chip. The 256 programmable “synapses” per neuron serve to transmit impulses between neurons, giving just over 268 million programmable synapses overall. The chip contains 5.4 billion transistors.

TrueNorth avoids the von Neumann-architecture bottleneck and is extremely energy efficient, with IBM claiming a power consumption of 70 milliwatts and a power density that is 1/10,000th of conventional microprocessors. Memory, computation, and communication are handled in each of the 4096 neurosynaptic cores [31]. The SyNAPSE chip operates at lower temperatures and power consumption because it only uses the power required for computing. Skyrmions have been proposed as models of the synapse on a chip [32, 33].

The neurons are emulated using a linear-leak integrate-and-fire (LLIF) model, a simplification of the leaky integrate-and-fire model [34]. IBM states that the chip lacks a clock [35] and uses only unary numbers, counting up to a maximum of 19 bits to perform computations [34, 36]. The cores are event-driven, using both synchronous and asynchronous logic, and are interconnected through an asynchronous packet-switched mesh network on chip (NOC).
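
As an illustration of the reported neuron model, here is a minimal sketch of a linear-leak integrate-and-fire update, in which the leak is a fixed subtraction per tick rather than an exponential decay. The integer constants are illustrative assumptions, not TrueNorth’s actual parameters.

```python
# A minimal sketch of a linear-leak integrate-and-fire (LLIF) neuron:
# integer state and a constant per-tick leak. Constants are illustrative.
def llif_step(v, weighted_spikes, leak=1, v_thresh=10, v_reset=0):
    """One tick: integrate incoming weighted spikes, apply linear leak,
    fire and reset if threshold is reached. Returns (new_v, fired)."""
    v += sum(weighted_spikes)  # synaptic integration
    v -= leak                  # linear (constant) leak each tick
    if v >= v_thresh:
        return v_reset, True
    return max(v, 0), False

v = 0
for tick in range(8):
    v, fired = llif_step(v, weighted_spikes=[3, 2])  # two active synapses
    print(tick, v, fired)
```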

To programme and use TrueNorth, IBM created an entirely new environment featuring libraries, a simulator, a fresh programming language, and even an integrated programming environment. Because of the substantial risk of vendor lock-in and other negative effects, this absence of backward compatibility with any prior technology (such as C++ compilers) may preclude its commercialization in the future (Figure 5) [35].

Figure 5.

Board with 16 TrueNorth chips.

5.4 Human brain project

The European Union-funded Human Brain Project (HBP), a 10-year endeavour that started in 2013, aims to better understand the brain through research in six areas, including neuromorphic computing. SpiNNaker and BrainScaleS are two significant neuromorphic university initiatives supported by the HBP. The million-core SpiNNaker system, the largest neuromorphic supercomputer in existence at the time, was released in 2018; the University of Manchester plans to scale it up in the future to model one billion neurons.

5.5 BrainScaleS From Heidelberg University

BrainScaleS (Brain-inspired multiscale computation in neuromorphic hybrid systems) was an EU FET-Proactive FP7 funded research project. The undertaking began on January 1, 2011, and it was completed on March 31, 2015. It involved the cooperation of 19 research teams from 10 different European nations. The Human Brain Project’s (HBP) Neuromorphic Computing Platform is where the hardware for neuromorphic computing systems is currently being developed. The goal of the BrainScaleS project is to comprehend and simulate how various spatial and temporal scales interact to process information in the brain.


6. Materials and devices

The production and characterisation of materials for neuromorphic systems has been one of the major areas of advancement in neuromorphic computing in recent years. We want to highlight the variety of innovative nanoscale devices and materials that the materials science community is developing and characterising for neuromorphic systems.

Two typical nanoscale devices that have been constructed with various materials, which can result in various behaviours, are atomic switches and CBRAM. A survey of atomic switch types for neuromorphic devices is given in [37]; common atomic switch materials include Ag2S [38, 39, 40], Cu2S [41], Ta2O5 [42], and WO3-x [43]. Under various conditions, many atomic switch materials can display various switching behaviours. As a result, the atomic switch’s behaviour can be controlled by the choice of material, which is likely to vary with the application. CBRAM has been implemented using GeS2/Ag [44, 45, 46, 47, 48, 49, 50], HfO2/GeS2 [51], Cu/Ti/Al2O3 [52], Ag/Ge0.3Se0.7 [53, 54, 55], Ag2S [56, 57, 58] and Cu/SiO2 [54]. As with atomic switches, the material chosen affects the stability and dependability of the device as well as the switching behaviour of CBRAM devices.

Memristors can be used in a wide number of ways. Transition metal oxide (TMO)-based memristor systems may be the most widely used. Many different types of materials are utilised to make metal-oxide memristors, including HfOx [59, 60, 61, 62, 63, 64, 65, 66, 67], TiOx [68, 69, 70, 71, 72, 73], WOx [74, 75, 76, 77, 78], SiOx [79, 80], TaOx/TiOx [81, 82], NiOx [83, 84, 85], TaOx [86, 87, 88], FeOx [89], AlOx [90, 91], HfOx/ZnOx [92], and PCMO [93, 94, 95, 96, 97, 98]. The quantity and sorts of resistance states that can be created by the various metal oxide memristor types determine the range of weight values that can be stored on the memristor. They also differ in their endurance, stability, and dependability.

Other materials have also been suggested for memristors. For instance, spin-based magnetic tunnel junction memristors based on MgO have been suggested for implementations of both synapses and neurons [99], though it has been noted that they have a restricted range of resistance levels, making them less suitable for storing synaptic weights [88]. Synapses have also been implemented using chalcogenide memristors [100, 101, 102]. One of the justifications provided for doing so is the chalcogenide-based memristor’s ultra-fast switching speed, which allows processes like STDP to take place at the nanosecond scale [97]. Polymer-based memristors have been used because of their low cost and adjustable performance [68, 103, 104, 105, 106, 107, 108, 109, 110, 111]. There have also been suggestions for organic memristors, which include organic polymers [69, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123].

Ferroelectric materials have been considered for the construction of analogue memory for synaptic weights [124, 125, 126, 127, 128, 129] and for synaptic devices [130, 131, 132, 133], including those based on ferroelectric memristors [134, 135, 136]. The main area of investigation has been three-terminal synaptic devices, as opposed to other implementations that may be two-terminal. Three-terminal synaptic devices do not need additional circuitry to implement learning processes like STDP because they can realise them in the device itself [130, 135].

Neuromorphic systems have more recently incorporated graphene to produce more compact circuits. It has been used in full synapse implementations [137, 138] as well as for transistors [139, 140, 141] and resistors [142] in neuromorphic implementations.

The carbon nanotube is another material under consideration for various neuromorphic applications. A variety of neuromorphic components, including dendrites on neurons [143, 144, 145, 146, 147], synapses [148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165], and spiking neurons [166, 167, 168, 169], have been proposed using carbon nanotubes. Carbon nanotubes have been used because they can provide the scale (number of neurons and synapses) and density (in terms of synapses) that may be necessary for replicating or imitating biological neural systems. They have also been used to interface with living tissue, suggesting that carbon-nanotube-based devices may be helpful in prosthetic applications of neuromorphic systems [155].

Synaptic transistors with different architectures, such as silicon-based [170, 171] and oxide-based ones [172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184], have also been produced for neuromorphic applications. Synapses and other neuromorphic components have been built using organic electrochemical transistors [185, 186, 187, 188, 189, 190] and organic nanoparticle transistors [191, 192, 193, 194]. Organic transistors are being explored for similar reasons as organic memristors: low processing costs and versatility. Additionally, they are well suited to the development of brain-machine interfaces as well as chemical or biological sensors [185]. It is interesting to note that various teams are working to create transistors within polymer-based membranes that can be applied to neuromorphic applications such as biosensors [195, 196, 197, 198, 199].

Recently, there has been increased interest in nonvolatile memory (NVM) technologies for neuromorphic computing, beyond their potential as DRAM replacements or as hybrid memory in shared memory systems. For neuromorphic architectures, nonvolatile devices offer a wide range of desirable qualities, such as memory retention, analogue behaviour, high integration density, fast read and program speeds, high energy efficiency, and programming voltages that are compatible with CMOS electronics. We therefore focus here on emerging materials for NVM devices:

6.1 Polymer

Polymers are long-chain molecules with repetitive units. They fall within the category of one-dimensional materials, with features including biocompatibility, chemical sensitivity, and mechanical flexibility. Polymers can be created from organic or inorganic materials. A flexible wearable memristor has been designed with ammonium polyphosphate (APP) in an Au/APP/ITO stack [201]. The I-V characteristics suggested that the memristive behaviour under bidirectional voltage sweeps was caused by ion movement in the APP. Even under harsh humidity, temperature, or radiation conditions, the proposed structure has demonstrated stable function. A lot of research has also been done on organic semiconductors (OSCs) for use in neuromorphic devices. Two-terminal OSCs can utilise filament formation, charge trapping, and ion migration to facilitate integration into ReRAM, PCM, or FeRAM. Fuller et al. present a polymer-based redox transistor combined with a CBRAM synaptic device [200], whose conductance change is initiated by reversible electrochemical reactions. The authors also show an array of 1024 × 1024 organic polymer memristors arranged for performance-characteristic simulation. The main difficulties with OSCs are speed and density. OSC speed is limited by the low mobility of carriers and by defects [201]. Because OSCs are incompatible with various solvents, the patterning of these devices through photolithography is limited, restricting the fabrication of dense networks.

6.2 2D materials

Fundamental 2D-material research has received a great deal of attention during the past 10 years. 2D materials maintain a stable monolayer structure with special chemical and physical properties suitable for simulating synapses, and they are recognised for their weak inter-layer van der Waals forces. Haigh et al. demonstrate that the high mobility of 2D synapses allows them to exhibit fast switching speeds at low operating voltages [202]. The tunability of electrical, photonic, and electrochemical properties is another distinguishing quality of 2D-material synaptic devices [203]. Shi et al. use a CBRAM-based h-BN memristor to exhibit STP and LTP properties [204]. Weight update is controlled by the creation and deformation of the conductive filament, caused by ion migration between Cu or Ag electrodes. The h-BN exhibits boron vacancies that stimulate resistance changes.

Wang et al. demonstrate how externally introduced oxygen atoms occupy intrinsic defects, such as MoS2 sulphur vacancies, causing resistance changes [16]. In [205], the authors use bilayer MoS2 vertical memristors to demonstrate STDP features at 0.1–0.2 V. Apart from CBRAM, 2D materials have been integrated as two-terminal PCM synaptic elements, which have the advantage of better reliability. Among TMD materials, MoTe2 exhibits an electric-field-induced amorphous-to-crystalline phase transition [206]. Multilevel programming resistance could be facilitated by further engineering of the device stack. A three-terminal device, on the other hand, uses the gate and the channel as presynaptic and postsynaptic inputs and exhibits superior stability and effective channel-conductance control. Chen et al. demonstrate how a synapse made of graphene and a ferroelectric insulator (polyvinylidene fluoride, i.e., PVDF) can imitate synaptic behaviour as a FeFET device [207]. The ferroelectric material’s changing polarisation state affects the carrier concentration in graphene; the polarisation shift is induced when the gate voltage is raised above the threshold voltage. Future research might also look toward Li+-ion-gated synaptic transistors [208] and various heterosynaptic plasticity implementations [201, 209, 210].

6.3 Quantum dot

Quantum dots (QDs) are zero-dimensional memristors. Semiconducting quantum dots, which are small particles with clearly defined energy levels, exhibit electrical and optical features governed by quantum mechanics. Josephson junctions are the foundation for how a QD functions as a memristor: the phase difference between quasi-particles is employed as a state variable [211]. Here, a hybrid construction made of QDs and a memristor is used. In [212], Lv et al. demonstrate that RRAM devices with a QD film as their insulator can be switched in response to an external signal. The memristive characteristic of QD-RRAMs is catalysed by ion migration, charge trapping, or redox reactions. Qi et al. show how to fabricate RRAM with carbon QDs for use as an LED [213]. In [214], Roychowdhury et al. use QD arrays to show quantum neuromorphic computing. There are many potential opportunities that are yet to be explored.

6.4 Carbon nanotube

A carbon nanotube (CNT) is a rolled-up cylinder of carbon, often single-walled, with a diameter on the nanometre scale. CNTs have a metallic or semiconducting character depending on their chirality. They fall within the category of one-dimensional materials and structurally resemble axons. Due to their great charge mobility, semiconducting CNTs can be employed as conducting channels in FETs. A CNTFET is a FET in which a CNT replaces the semiconductor channel between the source and drain. With voltage applied, the Schottky barrier that forms at the metal-CNT contact is alleviated, and the interaction between CNTs determines the ON/OFF state of memory cells. Feldmann et al. couple the gate and source of a single-walled CNT matrix network to presynaptic and integrate-and-fire (IF) postsynaptic neurons, respectively [215]. The channel conductance, which stores the synaptic weights, is controlled by varying voltage pulses at the pre- and post-neuron. All of the post-synaptic neuron spikes are gathered to fire back into the CNT if the output hits a threshold level. For an STDP implementation, the channel conductance change when the gate and source voltages are correlated determines the sign and magnitude of the weight update. Kim et al. describe p-type CNTFET-based models of excitatory and inhibitory neurons [158], in which the neurons exhibit STP accumulative current. The very lateral geometry of CNTFETs is not viable for larger integration, though, so CNT TFTs have become a popular substitute.

The materials science community is doing a lot of intriguing work to create devices for neuromorphic systems out of novel materials in order to create smaller, faster, and more effective neuromorphic devices. Diverse materials can have radically different properties, even when used in the same device. These variations will have an impact on the rest of the community, up through the device, high-level hardware, supporting software, models, and algorithms levels of neuromorphic systems. Therefore, it is crucial that we as a community comprehend the potential effects that various materials may have on functionality. Going forward, tight partnerships with the materials science community will undoubtedly be necessary.


7. Neuromorphic circuits

The creation of brain-inspired computing systems necessitates not only device engineering but also the careful design of circuit blocks that exhibit specific neuromorphic functions, such as plasticity and learning. Individual synapses, for instance, must act as electrical connections between two neurons and change their weight in accordance with particular brain-inspired learning criteria. One of the biggest challenges in neuromorphic computing is replicating the human “connectome”, the physical wiring of the nervous system. Reverse-engineering the brain on solid-state devices (SSD) is still a long way off, despite advances in our understanding of how the wiring of the brain carries out higher-level processes (Figure 6).

Figure 6.

Key elements of a neural network [216].

By digitally transferring biological neuronal network models onto electronic devices, neuromorphic computing explicitly aims to reproduce the biological connectome of the brain. Neuro-electronic interfaces are brain-computer interfaces that transmit data from the brain to outside equipment. Because there are currently no suitable neuro-electronic interfaces, digital brain mimicry is difficult.

On the other hand, it has been demonstrated that timing has a significant impact on synaptic plasticity in the brain. For instance, the human brain’s spike-timing-dependent plasticity (STDP) is a weight-update mechanism where the interval between pre- and post-synaptic spikes determines the weight change’s magnitude and sign, i.e., potentiation when the pre-synaptic spike comes before the post-synaptic spike and depression when the post-synaptic spike comes before the pre-synaptic spike [217]. Time calculation between a pair or triplet of spikes is a key component of other, more complex synaptic weight-updating algorithms [218, 219, 220]. These plasticity rules typically call for the development of intricate circuits utilising CMOS devices [221, 222, 223] or nanoscale devices, such as PCM [224, 225, 226] or RRAM [24, 64, 224, 227, 228, 229] technology. Plastic synapses based on PCM and RRAM often include one or more transistors in a hybrid design to enable a controllable computation of time within the circuit block, although examples of time-sensitive synapses have been described [230, 231].
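
To make the STDP rule concrete, the sketch below implements the classic pair-based exponential window: potentiation when the pre-synaptic spike precedes the post-synaptic spike, depression in the opposite case. The amplitudes and time constant are generic textbook values, not those of any particular PCM or RRAM device.

```python
# A minimal sketch of pair-based STDP: the sign and magnitude of the
# weight change depend on the interval between pre- and post-synaptic
# spikes. Constants are illustrative.
import math

def stdp_dw(delta_t, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Weight update for delta_t = t_post - t_pre (milliseconds).
    Pre before post (delta_t > 0) -> potentiation (dw > 0).
    Post before pre (delta_t < 0) -> depression  (dw < 0)."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)

for dt in (-40, -10, -1, 1, 10, 40):
    print(f"delta_t={dt:+d} ms -> dw={stdp_dw(dt):+.4f}")
```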

7.1 Hybrid STDP synapses

Figure 7a shows a hybrid RRAM-CMOS synapse example with a 2T1R design that consists of one RRAM element and two transistors [228]. The pre-synaptic neuron drives both the top electrode of the RRAM device and the gate of one transistor, known as the communication transistor. Figure 7b displays the voltage VTE applied to the top electrode and the voltage VCG applied to the gate of the communication transistor, both of which are applied during a spike event by the pre-synaptic neuron. The applied voltage spikes induce a synaptic current proportional to the conductance of the RRAM device, which acts as the storage component for the synaptic weight. According to the schematic circuit of Figure 7c, the synaptic current travels via the synaptic circuit and is fed into the input terminal of the post-synaptic neuron, where integration and firing occur. The input node of the post-synaptic neuron is a virtual ground, guaranteeing a zero voltage at the bottom electrode of the 2T1R synapse and acting as a summing input for a practically unlimited number of synaptic channels. The post-synaptic neuron fires when the integrated current exceeds a predetermined threshold, sending spikes to the subsequent neurons in the network and applying a feedback spike to the fire gate.

Figure 7.

Hybrid RRAM-CMOS synapse with 2T1R configuration (a), voltage waveforms for VCG and VTE applied by the pre-synaptic neuron in the spike event (b), and overall circuit sketch including the synapse and the pre- and post-synaptic neurons [232].

7.2 Learning with RRAM STDP synapses

Demonstrating STDP in individual synapses does not by itself provide a conclusive demonstration of learning; simulations and experiments at the higher level of synaptic/neural networks are needed. A straightforward example of a feed-forward neural network, known as a perceptron [233, 234, 235], is shown in Figure 8. The network is made up of two layers: a presynaptic layer where synaptic channels receive neural spikes, and a second layer with just a single post-synaptic neuron to integrate and fire current spikes. Each pre-synaptic neuron is coupled to the post-synaptic neuron via a hybrid CMOS-RRAM synapse, so the network is fully connected. Depending on the time Δt between pre- and post-synaptic spikes, the post-synaptic neuron transmits a feedback spike to each synapse at each fire event to enable LTP/LTD. As a result, submitted patterns, such as images, sounds, or speech, tend to be learned by the network: the synapses corresponding to the pattern channels are potentiated while all other synapses, commonly known as the background synapses, tend to become depressed, thus enabling on-line learning of submitted patterns [226, 228, 229].

Figure 8.

Schematic illustration of a perceptron-like neural network with a 4 × 4 first layer, and a single post-synaptic neuron in the second layer. Each neuron in the first layer is connected to the post-synaptic neuron by synapses [232].
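
The on-line learning behaviour described above can be caricatured in a few lines: synapses on the pattern channels, which spike together just before the post-synaptic neuron fires, are potentiated, while background synapses are depressed. The 4 × 4 input, noise rate, threshold, and update sizes below are illustrative assumptions, not the parameters of the cited experiments.

```python
# A minimal sketch of perceptron-like on-line learning with an STDP-style
# feedback rule. All constants are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_inputs = 16                        # a 4x4 pre-synaptic layer
w = rng.uniform(0.4, 0.6, n_inputs)  # one synapse per input channel
pattern = np.zeros(n_inputs, dtype=bool)
pattern[[0, 5, 10, 15]] = True       # the diagonal of the 4x4 image

for epoch in range(200):
    # Pattern channels spike together; background channels spike randomly.
    pre = pattern | (rng.random(n_inputs) < 0.1)
    if (w * pre).sum() >= 1.0:       # post-synaptic neuron fires
        # Feedback spike: potentiate synapses whose pre-spike preceded the
        # post-spike, depress the rest.
        w[pre] += 0.01
        w[~pre] -= 0.01
        np.clip(w, 0.0, 1.0, out=w)

print("pattern weights:   ", w[pattern].round(2))
print("background weights:", w[~pattern].round(2)[:4], "...")
```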

With the recent advancements in ferroelectric devices, nanowire networks, organic materials, and new memory hardware, duplicating the human neural network is now a more realistic possibility. These memory devices promise synaptic devices that can handle massive amounts of data effectively while consuming extremely little power, which is necessary for developing artificial intelligence technologies. A detailed analysis of these memory devices, together with present problems and future prospects, is covered in the following subsections.

7.2.1 PRAM: phase-change synaptic devices

Phase-change memory (PRAM), also called PCM, is a type of non-volatile random-access memory. A thesis on the viability of a phase-change memory device using a chalcogenide film and a diode was published by Charles Sie in 1969 [236]. Follow-up work from 1970 showed that the phase-change memory process in chalcogenide glass involves electric-field-induced crystalline filament development [237, 238].

PRAM exploits the resistance difference between the low-resistivity crystalline phase and the high-resistivity amorphous phase of phase-change materials. In a typical PRAM structure (often called a T-cell), a phase-change material is placed between two electrodes, and the phase of the material is changed by applying a high voltage/current to the electrodes. To Set the cell into the crystalline phase, a current pulse anneals the phase-change material and lets it crystallise. The high-resistance amorphous region can be crystallised step by step using a series of Set pulses, giving PRAM devices a progressive Set function. To Reset into the amorphous phase, the programming region is first melted and then quickly quenched using a strong, brief current pulse.
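
The progressive Set and abrupt Reset described above can be sketched as a simple multilevel resistance model, in which each Set pulse crystallises a further fraction of the cell and a single Reset pulse returns it to the amorphous state. The resistance values and number of levels below are illustrative assumptions.

```python
# A minimal sketch of progressive Set / abrupt Reset in a PRAM cell.
# Resistances and level count are illustrative.
R_AMORPHOUS = 1_000_000   # high-resistance (Reset) state, ohms
R_CRYSTALLINE = 10_000    # low-resistance (fully Set) state, ohms
N_LEVELS = 8              # assumed number of programmable levels

def apply_set_pulse(level):
    """Each moderate pulse crystallises a bit more material."""
    return min(level + 1, N_LEVELS)

def apply_reset_pulse(level):
    """A strong, short pulse melts and quenches back to amorphous."""
    return 0

def resistance(level):
    """Interpolate between the two phases by crystalline fraction."""
    frac = level / N_LEVELS
    return R_AMORPHOUS * (1 - frac) + R_CRYSTALLINE * frac

level = 0
for _ in range(3):
    level = apply_set_pulse(level)
print(f"after 3 Set pulses: {resistance(level):,.0f} ohms")
level = apply_reset_pulse(level)
print(f"after Reset: {resistance(level):,.0f} ohms")
```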

The device has been investigated as a good candidate for artificial synapses in machine-learning implementations because of desirable properties such as high speed, multi-level capability, and low energy consumption [239, 240, 241]. However, issues with material quality and power consumption have prevented PRAM technology from being widely used. Resistance drift, which refers to how the resistance changes over time, is a particular problem [242, 243, 244]. Resistance drift is a common occurrence in phase-change materials that severely hinders the development of PRAM and destroys stability. The drift phenomenon in amorphous chalcogenide materials has been explained by structural relaxation, a thermally induced local rearrangement within the amorphous region [243, 245, 246, 247]. This implies that the resistance changes with time and that the resulting memory loss loosely resembles human memory. In addition to this significant reliability problem of amorphous chalcogenide materials, a low-power neuromorphic device must overcome the high power needed to melt the phase-change material. These reliability and power problems must be solved for PRAM technology to function well as a synaptic device.

7.2.2 ReRAM: filament-type synaptic devices

One of the leading candidates to replace current memory technology as the next generation is resistive random access memory (ReRAM). The non-volatile nature of ReRAM is the main justification, among many others. Researchers and manufacturers are currently paying close attention to every non-volatile memory. This results in a race between several technologies that practically all offer the same advantages while aiming for the same standard level of performance goals. Each non-volatile memory is competing fiercely to prove that it is a worthy rival in the memory field. Other non-volatile memories of current interest are PCRAM (Phase Change), FeRAM (Ferroelectric), and MRAM (Magnetoresistive). ReRAM is distinguished and enabled to compete with these technologies because of its quick read and write speeds. Additionally, ReRAM’s manufacturing is not as difficult as other technologies.

Low dependability and performance variations, on the other hand, are the obstacles preventing its market penetration. It therefore still requires significant development in terms of material fabrication, construction, circuit design, and handling of unwanted disturbances from the outside environment. Even so, ReRAM technology appears to have more benefits than drawbacks, which bodes well for its future.

ReRAM has a metal-insulator-metal structure, with the insulator positioned between two layers of metal. The structure’s central functional part is a conducting filament: when the filament bridges the two metal layers, the device is in a low-resistance state, and when the filament is ruptured, it is in a high-resistance state. Formation and rupture of the filament depend on the voltage applied across the metal terminals (Figure 9).

Figure 9.

(a) 2D structure of ReRAM; (b) 3D structure of ReRAM [248].

ReRAM’s ability to store digital data, i.e., 0s and 1s, is a result of this structure. ReRAM typically rests in a high-resistive state (HRS), or logic low, referred to as 0; a voltage is applied across the ReRAM to switch it from the high-resistive state to a low-resistive state (LRS), or logic high, referred to as 1. In the HRS the filament is disconnected, so there is no suitable path for current to pass. When an appropriate voltage is placed across the top electrode (TE) and bottom electrode (BE), the filament contacts the bottom terminal and produces a low-resistance route; this is known as entering the LRS, or logic high (1). HRS is the OFF state and LRS is the ON state. The “Set” voltage is the voltage used to switch the ReRAM state from HRS to LRS; similarly, the “Reset” voltage switches the device from LRS to HRS. The filament forms or ruptures after a voltage greater than the Set or Reset voltage is applied, and this state change is the write operation, putting the ReRAM into the desired state to hold a logic value. For a read operation, a voltage smaller than the Set or Reset voltage is applied, and the output is monitored to determine the state of the ReRAM. Depending on the polarity of the applied voltage, there are two forms of resistive switching: unipolar and bipolar. In unipolar switching, the switching process does not depend on the voltage’s polarity. In bipolar switching, by contrast, switching depends on the voltage’s polarity, going from HRS to LRS with one polarity and from LRS to HRS with the other [249].
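
A minimal sketch of this bipolar switching behaviour follows: a voltage above the Set threshold forms the filament (HRS to LRS), a sufficiently negative voltage ruptures it (LRS to HRS), and a small read voltage merely senses the state. The threshold values are illustrative assumptions.

```python
# A minimal sketch of a bipolar ReRAM cell's Set/Reset/read behaviour.
# Voltage thresholds are illustrative.
V_SET, V_RESET, V_READ = 1.5, -1.5, 0.2

class BipolarReRAM:
    def __init__(self):
        self.state = "HRS"  # filament ruptured: logic 0

    def apply(self, voltage):
        """Write if the voltage exceeds a threshold, otherwise just read."""
        if voltage >= V_SET:
            self.state = "LRS"    # filament forms: logic 1
        elif voltage <= V_RESET:
            self.state = "HRS"    # filament ruptures: logic 0
        return self.read()

    def read(self):
        # Read uses a small voltage and only senses the current.
        return 1 if self.state == "LRS" else 0

cell = BipolarReRAM()
print(cell.apply(V_READ))   # 0: still HRS
print(cell.apply(2.0))      # 1: Set writes a logic high
print(cell.apply(-2.0))     # 0: Reset writes a logic low
```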

7.2.3 MRAM: spintronic synaptic devices

Magneto-resistive random access memory (MRAM) is a kind of non-volatile memory (NVM) that retains saved data even if the power fails or is unintentionally turned off. MRAM, commonly known as magnetic RAM, is not brand-new: it has been available on the market for more than 20 years, but recent advancements have allowed it to be employed successfully in both new and old applications.

A magnetic tunnel junction (MTJ), the core element of MRAM technology, is a stack of two ferromagnetic layers separated by a thin insulating layer. MRAM stores a logic 1 or a logic 0 by altering the resistance of an MTJ: the relative spin orientations of the two ferromagnetic layers determine whether the junction is in its high-resistance or low-resistance state.

As shown in Figure 10, the free layer of the MTJ can have its magnetic direction modified by applying polarised currents or magnetic fields, whereas the reference layer maintains a constant magnetic direction. The resistance of the MTJ is low when the directions of the reference layer and the free layer coincide (a logic 0 is stored); the resistance is high when they point in opposite directions (a logic 1 is stored).

Figure 10.

Structure of an MTJ. (a) Anti-parallel (high resistance) and (b) parallel (low resistance) [250].
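The two resistance states of an MTJ follow directly from the relative orientation of its layers. The short Python sketch below illustrates this mapping; the parallel resistance and tunnel magnetoresistance (TMR) ratio used here are hypothetical values chosen only for illustration.

```python
# A toy model of an MTJ's two resistance states, assuming an illustrative
# parallel resistance R_P and TMR ratio; both numbers are hypothetical,
# chosen only to show the relationship R_AP = R_P * (1 + TMR).

R_P = 5e3                 # parallel state (free and reference layers aligned)
TMR = 1.5                 # 150% tunnel magnetoresistance ratio
R_AP = R_P * (1 + TMR)    # anti-parallel state (layers opposed)

def read_bit(alignment: str) -> int:
    """Map the relative orientation of the two layers to a stored bit."""
    resistance = R_P if alignment == "parallel" else R_AP
    return 0 if resistance == R_P else 1   # low resistance -> 0, high -> 1

print(read_bit("parallel"))       # 0 (low resistance)
print(read_bit("anti-parallel"))  # 1 (high resistance)
```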

The first generation of MRAM devices used toggle-mode technology, in which a magnetic field modifies the electron spin to program/write bits. Although toggle MRAM was simpler to build, scaling it down is challenging: as the write lines get smaller, the current needed to flip the bit does not decrease, and other difficulties arise. Later generations of MRAM devices adopted a different technique known as spin-transfer torque (STT) MRAM, which uses a spin-polarised current to change the spin of electrons, making device scale-down simpler and less expensive. The majority of STT MRAM devices now available use a perpendicular approach, in which the magnetization is oriented and switched perpendicular to the plane of the layers (Figure 11). This perpendicular method enables higher-density memory products by permitting lower switching currents with fewer transistors and less power consumption.

Figure 11.

Perpendicular MTJ diagram [251].

STT-MRAM products now on the market are faster and use less power than NVM flash memory devices. They could also compete with volatile memories such as SRAM and DRAM thanks to their potential speed and their ability to be scaled below 10 nm, a property that makes them even more appealing for embedded memory applications. Like many other technologies, STT-MRAM has inherent limitations, and only time will tell whether it will take the place of DRAM or SRAM. The technology offers intriguing applications in neuromorphic computing, automotive microcontrollers, and systems-on-chip (SoCs).

7.2.4 ASN (atomic switch network): network-based synaptic device

Researchers in the U.S. and Japan have created a self-assembling neuromorphic (brain-like) technology containing more than a billion interconnected synapses: inorganic "atomic-switch" synapses embedded in a sophisticated network of silver nanowires. The atomic switch, a recently developed nanoscale circuit element, has been demonstrated to display synapse-like characteristics in an entirely inorganic device. Each square centimetre of the device's silver nanowire network contains a billion junctions that are intricately coupled to one another.

Like biological neural networks, these atomic switch networks (ASN) produce memristor-like emergent behaviours through distributed, collective interactions. Such emergent behaviours are a key feature of biological neural networks and of many other complex systems. According to the researchers, experiments are currently underway that exploit these emergent behaviours for information processing, with the goal of creating a new category of cognitive technology. The architecture of the ASN device is highly interconnected and includes a synaptic circuit element at each point of nanowire contact. The collective interactions between these atomic switches provide distinctive emergent features that offer substantial potential for neuromorphic computing.

Comparing these emerging devices: MRAM features non-volatility, as the magnetic properties of ferromagnets do not disappear on power failure. Indeed, the magnetism of ferromagnets can be considered practically permanent, so MRAM can be rewritten essentially without limit, like DRAM. The write time of MRAM is as low as 2.3 ns and its power consumption is extremely low, enabling instant on/off and extending the battery life of portable devices. MRAM cells can easily be embedded into logic circuit chips, requiring only one or two additional steps, and the corresponding photolithographic masks, in the back-end metallization process. In addition, MRAM cells can be fabricated entirely in the metal layers of the chip, and even two to three layers of cells can be stacked, so MRAM has the potential to build large-scale memory arrays on top of logic circuits. The biggest drawback of MRAM, however, is interference between memory cells: when programming the target bit, the free layer of a non-target bit can easily be mis-programmed, and at high densities the overlap of the magnetic fields of adjacent cells becomes more serious.

PCM does not need to erase previous code or data before writing updated code, so its speed has an advantage over NAND, and its read and write times are more balanced. PCM reads and writes are non-destructive, so its write endurance far exceeds that of flash memory, and PCM can replace traditional mechanical hard drives with higher reliability. PCM has no mechanically rotating parts and needs no refresh current to retain code or data, so its power consumption is lower than that of HDD, NAND, and DRAM. Some PCMs use a transistor-less design, which enables high-density storage. Because PCM storage is independent of the charge state of the material, it is strongly resistant to space radiation and can meet the needs of defence and space applications.

RRAM's erase speed is determined by the pulse width that triggers the resistance transition, generally less than 100 ns. Some RRAM materials also exhibit multiple resistance states, making it possible to store several bits per memory cell and thus increasing storage density. The memory matrix of RRAM can be of two types: passive and active. In a passive matrix, each memory cell consists of a resistive element in series with a nonlinear element (usually a diode); the role of the latter is to give the resistive element a suitable voltage division and thereby avoid corrupting the information read from or written to a cell when a neighbouring resistive element is in a low resistance state. The advantage of this approach is that the design is relatively simple and the process miniaturises well, but a passive matrix inevitably lets adjacent cells interfere with each other. Active cells are controlled by transistors for read, write, and erase; although interference between adjacent cells is well isolated, the design is more complex and the device miniaturises less well.

In terms of capacity, these three new types of memory, MRAM at up to 4 Gb, PRAM at up to 8 Gb, and RRAM at up to 32 Gb, still differ greatly from flash memory, but all three are more than 1000 times faster than flash memory in read and write speed.

Advertisement

8. Neuromorphic algorithms and applications

8.1 Neuromorphic algorithms

Algorithms for neuromorphic implementations frequently address how to define an SNN for a certain application. The many algorithmic strategies for neuromorphic computing systems can be divided into two main groups: (1) algorithms for training or learning an SNN that will be deployed on a neuromorphic computer, and (2) non-machine-learning approaches in which SNNs are built by hand to complete a specific task. Training and learning algorithms here refer to the mechanisms for optimising an SNN's parameters.

Learning in a spiking neural network is difficult. Traditional artificial neural networks have had great success with backpropagation-based gradient descent, but training SNNs is challenging because spike events are nondifferentiable. Driven by the interest in deep learning, a significant amount of research has gone into creating learning algorithms suitable for multilayer SNNs. There are four main methods for training SNNs: unsupervised learning, supervised learning, conversion from trained ANNs, and evolutionary algorithms. The following subsections provide a quick overview of each.

8.1.1 Unsupervised learning

Unsupervised learning is learning without preexisting labels. Unsupervised learning in SNNs is based on the Hebbian rule, which entails adapting the synaptic connections of the network to the data received by the neurons [252]. Hebb's rule is applied through the spike-timing-dependent plasticity (STDP) algorithm. The STDP phenomenon, which has been observed in the brain, describes how the relative timing of presynaptic and postsynaptic spikes affects the efficacy of a synapse. In this context, a presynaptic spike is the spike arriving at the neuron's synapse, and a postsynaptic spike is the spike emitted by the neuron itself [253]. The idea behind STDP is that the synapses most likely to have caused the neuron to fire should be strengthened, while synapses that did not participate, or that contributed negatively, should be weakened [254].

STDP is commonly employed as the learning method for unsupervised learning in SNNs. Under STDP, a synaptic weight is reinforced if the presynaptic neuron fires just before the postsynaptic neuron; similarly, the synaptic weight is diminished if the presynaptic spike arrives shortly after the postsynaptic spike [255]. The most commonly observed STDP rule is described by Eqs. (1) and (2):

$$\Delta w = \begin{cases} +A_{+}\exp\!\left(-\dfrac{\Delta t}{\tau}\right), & \text{if } \Delta t > 0 \\ -A_{-}\exp\!\left(+\dfrac{\Delta t}{\tau}\right), & \text{if } \Delta t \le 0 \end{cases} \tag{1}$$

$$\Delta t = t_{\text{post}} - t_{\text{pre}} \tag{2}$$

where w is the synaptic weight, Δt is the time difference between the postsynaptic and presynaptic spikes, τ is the time constant, and A+ and A− are constant parameters setting the strength of potentiation and depression, respectively.
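Eqs. (1) and (2) translate directly into code. The following Python sketch computes the weight update for a single pre/post spike pair; the values of A+, A−, and τ are illustrative choices, not taken from any of the cited studies.

```python
import numpy as np

# A direct implementation of the pair-based STDP rule in Eqs. (1)-(2).
# The parameter values (A_plus, A_minus, tau) are hypothetical constants.

A_plus, A_minus, tau = 0.01, 0.012, 20.0   # tau in ms

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre                       # Eq. (2)
    if dt > 0:                                # pre fires before post: potentiate
        return  A_plus  * np.exp(-dt / tau)   # Eq. (1), first branch
    else:                                     # post fires first (or together): depress
        return -A_minus * np.exp( dt / tau)   # Eq. (1), second branch (+dt/tau, dt <= 0)

print(stdp_dw(t_pre=10.0, t_post=15.0))   # small positive dw (potentiation)
print(stdp_dw(t_pre=15.0, t_post=10.0))   # small negative dw (depression)
```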

Significant research has gone into training SNNs with STDP in recent years. Qu et al. [256] created two hardware-friendly techniques, lateral inhibition and homeostasis, which reduce the number of inhibitory connections and thereby lower the hardware overhead; using an STDP rule to modify the synapse weights between the input and learning layers, they obtained 92% recognition accuracy on the MNIST data set. Xu et al. [255] put forward a hybrid learning system called the deep CovDenseSNN, which combines the biological plausibility of SNNs with the feature extraction of CNNs; they updated the parameters of their deep CovDenseSNN model, which is appropriate for implementation on neuromorphic hardware, using an unsupervised STDP learning method. Other STDP-based learning techniques include supervised learning and reinforcement learning [257].

Lee et al. [258] put forward a semisupervised method to train a convolutional SNN with several hidden layers. The training approach consists of two steps: initialise the network weights by unsupervised learning (namely, SSTDP), then fine-tune the synaptic weights with the supervised gradient-descent backpropagation (BP) algorithm. The pretraining method achieved 99.28% accuracy on the MNIST database, quicker training times, and greater generalisation. Tavanaei et al. [259] created an innovative technique for training multilayer spiking convolutional neural networks (SCNNs); the training process includes both unsupervised (a novel STDP learning scheme for feature extraction) and supervised (a learning scheme to train spiking CNNs, i.e. ConvNets) components.

8.1.2 Supervised learning

The SpikeProp algorithm by Bohte et al. [260] is one of the first to train SNNs by backpropagating errors. Using a three-layer design, this approach has been successfully applied to classification challenges. Spike-train SpikeProp (ST-SpikeProp), a more recent and sophisticated variation of SpikeProp, trains single-layer SNNs using the output layer's weight update rule [261]. Wu et al. [262] proposed the spatiotemporal backpropagation (STBP) technique, which combines the timing-dependent temporal domain and the layer-by-layer spatial domain, to address the nondifferentiability problem of SNNs. Supervised learning with temporal coding has been demonstrated to significantly decrease the energy consumption of SNNs. Mostafa [263] created a direct training method using a temporal coding scheme and backpropagation of errors; however, the preprocessing technique is not general, and the network lacks convolutional layers. Zhou et al. [264] improved on Mostafa's work by adding convolutional layers to the SNN, creating a new kernel operation, and suggesting a new method for preprocessing the input data; their SCNN attains good recognition accuracy with fewer trainable parameters. Stromatias et al. [265] developed a supervised technique that trains a classifier with the stochastic gradient descent (SGD) algorithm and then converts it to an SNN. Zheng and Mazumder [266] suggested backpropagation-based learning for training SNNs; their learning approach can be used with neuromorphic hardware.
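A common way to sidestep the nondifferentiability of spikes, used by methods in the spirit of STBP, is a surrogate gradient: the forward pass keeps the hard threshold, while the backward pass substitutes a smooth pseudo-derivative. The Python sketch below illustrates the idea; the rectangular surrogate and its width are our own illustrative choices, not the exact functions used in the cited works.

```python
import numpy as np

# Surrogate-gradient sketch: forward pass uses the nondifferentiable
# Heaviside step (a spike fires when the membrane potential crosses
# threshold); backward pass replaces its derivative with a rectangular
# pseudo-derivative that is nonzero only in a band around the threshold.

def spike_forward(v, v_th=1.0):
    """Forward: emit a spike when the membrane potential reaches threshold."""
    return (v >= v_th).astype(float)

def spike_backward(v, v_th=1.0, width=0.5):
    """Backward: pseudo-derivative of the step, nonzero near v_th only."""
    return (np.abs(v - v_th) < width).astype(float) / (2 * width)

v = np.array([0.2, 0.9, 1.1, 2.0])
print(spike_forward(v))    # [0. 0. 1. 1.]  -- true spike outputs
print(spike_backward(v))   # [0. 1. 1. 0.]  -- gradient flows only near threshold
```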

8.1.3 Conversion from trained ANN

The third method converts an offline-trained ANN to an SNN so that the converted network can benefit from an established, fully trained ANN model. This method is frequently referred to as "spike conversion" or "spike transcoding." Converting an ANN to an SNN has several advantages. First, simulating precise spike dynamics in a big network can be computationally expensive, especially if precise spike timings and high firing rates are needed; conversion therefore enables applying SNNs to challenging benchmark tasks that demand massive networks, such as ImageNet or CIFAR-10, with minimal accuracy loss compared to the original ANNs [267, 268]. Second, we can reuse the very effective training methods created for ANNs, as well as many state-of-the-art deep networks for classification tasks, along with state-of-the-art optimization techniques and GPUs for training [269]. The conversion method's primary drawback is that it cannot support on-chip learning. Additionally, many particularities of SNNs that are absent from the equivalent ANNs cannot be taken into account during training; because of this, the inference performance of converted SNNs is frequently inferior to that of the original ANNs [270].

An extensive amount of research has been done on converting ANNs to SNNs with good performance on the MNIST data set. Diehl et al. [269] proposed a conversion method with minimal performance loss, attaining a recognition rate of 98.64% on the MNIST database. Rueckauer et al. [271] transformed a continuous-valued deep CNN into an accurate spiking equivalent; their network exhibits a 99.44% recognition rate on the MNIST data set and incorporates common operations such as softmax, max-pooling, batch normalisation, biases, and inception modules. Xu et al. [272] put forward a conversion approach suitable for mapping onto neuromorphic hardware; they demonstrated a threshold rescaling strategy to lessen the conversion loss and attained a maximum accuracy of 99.17% on the MNIST data set. To transform CNNs into spiking CNNs, Xu et al. [255] developed an effective and hardware-friendly conversion rule, suggesting an "n-scaling" weight mapping method that delivers high-accuracy, low-latency classification on MNIST. Wang et al. [273] suggested a weights-thresholds balancing conversion method that uses less memory and delivers excellent recognition accuracy on MNIST; unlike existing conversion strategies, which concentrate on approximating the artificial neurons' activation values with the spiking neurons' firing rates, they concentrated on the relationship between the weights and thresholds of the spiking neurons during conversion.
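The following Python sketch illustrates the basic mechanics of rate-based conversion for a single ReLU layer: the trained weights are normalised so that activations fit under the firing threshold, and integrate-and-fire neurons then approximate the ReLU activations with their firing rates. The normalisation-by-maximum-activation heuristic shown here is one common choice, not the exact procedure of any single paper cited above.

```python
import numpy as np

# Rate-based ANN-to-SNN conversion sketch for one ReLU layer
# (hypothetical weights and input, chosen only for illustration).

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))            # trained ANN weights
x = rng.random(8)                      # one input sample in [0, 1]

a = np.maximum(W @ x, 0.0)             # ANN forward pass (ReLU activations)
W_norm = W / a.max()                   # normalise so activations fit under threshold

# Simulate integrate-and-fire neurons driven by a constant input current.
T, v_th = 200, 1.0                     # timesteps and firing threshold
v = np.zeros(4)
spikes = np.zeros(4)
for _ in range(T):
    v += W_norm @ x                    # integrate the input each timestep
    fired = v >= v_th
    spikes += fired
    v[fired] -= v_th                   # reset by subtraction preserves rate information

rate = spikes / T
print(a / a.max())                     # normalised ANN activations ...
print(rate)                            # ... approximated by the SNN firing rates
```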

8.1.4 Evolutionary spiking neural networks

Evolutionary algorithms (EAs) are population-based metaheuristics. Historically, their design was inspired by observations of natural evolution in biological populations. These approaches can directly optimise the network architecture, model hyperparameters, and synaptic weights and delays [274, 275]. The synaptic weights of SNNs are currently learned using evolutionary algorithms such as differential evolution (DE), grammatical evolution (GE), the harmony search algorithm (HSA), and particle swarm optimization (PSO). As demonstrated by Vazquez [276], López-Vázquez et al. [277], and Yusuf et al. [278], the synaptic weights of spiking neuron models such as integrate-and-fire, Izhikevich, and the spike response model (SRM) can be trained with algorithms like DE, GE, and HSA to carry out classification tasks. In both linear and nonlinear classification problems, Vazquez and Garro [279] used the PSO algorithm to train the synaptic weights of a spiking neuron, finding that input patterns of the same class produce matching firing rates. Pavlidis et al. [280] presented a parallel differential evolution method for training supervised feedforward SNNs; however, their method was tested only on exclusive OR, which does not fully demonstrate its advantages. Evolutionary algorithms can be an alternative to exhaustive search, but they take a lot of time, especially when the fitness function requires expensive computation [281]. A toy example of this style of weight search is sketched below.
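The following toy Python example tunes the input weights of a single leaky integrate-and-fire neuron so that its spike count matches a target, using a generic mutation-and-selection loop. It is meant only to convey the flavour of evolutionary weight search, not the DE, GE, HSA, or PSO procedures of the cited works; all parameters are illustrative.

```python
import numpy as np

# A (mu + lambda)-style evolutionary search over the input weights of a
# single leaky integrate-and-fire (LIF) neuron; fitness is how close the
# neuron's spike count comes to a desired target rate.

rng = np.random.default_rng(1)

def spike_count(w, x, T=100, v_th=1.0, leak=0.9):
    """Simulate a LIF neuron for T steps and count its output spikes."""
    v, n = 0.0, 0
    for _ in range(T):
        v = leak * v + w @ x           # leaky integration of the weighted input
        if v >= v_th:
            n += 1
            v = 0.0                    # reset after a spike
    return n

x, target = rng.random(5), 20          # fixed input pattern, desired spike count
pop = rng.normal(0, 0.1, size=(20, 5)) # population of candidate weight vectors

for gen in range(50):
    fitness = np.array([abs(spike_count(w, x) - target) for w in pop])
    parents = pop[np.argsort(fitness)[:5]]                  # keep the 5 best
    children = parents[rng.integers(0, 5, 15)] + rng.normal(0, 0.05, (15, 5))
    pop = np.vstack([parents, children])                    # mutate to refill

best = pop[np.argmin([abs(spike_count(w, x) - target) for w in pop])]
print(spike_count(best, x))            # spike count close to the target of 20
```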

The development of SNNs faces the challenge of deploying appropriate learning and training algorithms, which have a significant impact on application accuracy and execution cost. How information is encoded in spikes presents another difficulty: although neural coding significantly improves the performance of SNNs, open questions remain about the ideal encoding strategy and about how to design a learning algorithm that complements the chosen encoding technique. Training the hidden neurons of a densely connected SNN also remains extremely difficult. As neuromorphic computing is still in its infancy, significant work is needed to develop algorithms and hardware that can approach human intellect.

8.2 Neuromorphic applications

8.2.1 Medicine

Neuromorphic devices are particularly effective at receiving and processing information from their surroundings [282, 283, 284, 285, 286]. Combined with organic components, these devices can work with the human body. Neuromorphic devices may enhance drug delivery techniques in the future: thanks to their high responsiveness, they might release a medicine when they notice a change in the body's condition (e.g. varying insulin and glucose levels). Neuromorphic computing technology could also be employed in prostheses. Another advantage of the technology is its ability to accept and process external signals efficiently; for people with prostheses, using neuromorphic devices rather than conventional ones could provide a more natural, seamless experience [282].

8.2.2 Large-scale operations and product customization

Neuromorphic computing may also be useful for large-scale initiatives and product customization [282]. It could be used to handle massive amounts of data from environmental sensors more quickly. Depending on the requirements of the sector, these sensors could monitor water content, temperature, radiation, and other characteristics; by identifying patterns in these data, a neuromorphic computing framework could make it simpler to draw useful conclusions. Thanks to the characteristics of the materials used to construct them, neuromorphic devices may also facilitate product customisation: these materials can be turned into fluids that are simple to control and can be processed in liquid form through additive manufacturing to produce devices tailored to the requirements of the user [282].

8.2.3 Artificial intelligence

By definition, the goal of neuromorphic computing is to replicate how the human brain works. Neurons in the brain receive, process, and transmit impulses in a very quick and energy-efficient manner, so it makes sense that technology experts, particularly those working in artificial intelligence (AI), are fascinated by this kind of computing. As the name implies, experts in AI concentrate on a particular aspect of the brain: intelligence, the capacity to gather and use knowledge. Because this concept is so closely related to neuromorphic computing, it would be advantageous for the two fields to collaborate moving forward [287]. The answer to creating computers with truly human-level intelligence may lie in concentrating on brain functionality at the level of the artificial neuron and below.

8.2.4 Imaging

Neuromorphic vision sensors create images similarly to the human eye. They are event-based imaging devices [288]: they respond to changes in light intensity, an external signal, rather than to an internal clock [289], and they therefore operate faster, independent of conventional frame rates. In a neuromorphic sensor, each pixel functions independently of its neighbours, and the device communicates changes at each pixel almost instantly [288]. Together these mechanisms make data utilisation significantly more efficient. Unlike their traditional equivalents, these sensors exhibit neither motion blur nor a delayed response to the environment. Given these qualities, incorporating neuromorphic vision sensors into virtual and augmented reality systems may be advantageous.
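The per-pixel, event-driven behaviour described above can be mimicked in a few lines. The Python sketch below converts a stack of intensity frames into sparse ON/OFF events, with each pixel independently emitting an event whenever its log-intensity changes by more than a contrast threshold; the threshold value and test frames are illustrative assumptions, not parameters of any real sensor.

```python
import numpy as np

# Simplified event-based pixel array: each pixel keeps its own reference
# log-intensity and fires an ON (+1) or OFF (-1) event when the current
# log-intensity deviates from it by more than a contrast threshold.

def frames_to_events(frames, threshold=0.2):
    """Convert a stack of intensity frames (T, H, W) into sparse events."""
    log_ref = np.log(frames[0] + 1e-6)       # per-pixel reference level
    events = []                              # (t, y, x, polarity) tuples
    for t, frame in enumerate(frames[1:], start=1):
        log_i = np.log(frame + 1e-6)
        diff = log_i - log_ref
        ys, xs = np.where(np.abs(diff) > threshold)
        for y, x in zip(ys, xs):
            events.append((t, y, x, 1 if diff[y, x] > 0 else -1))
            log_ref[y, x] = log_i[y, x]      # each pixel resets independently
    return events

frames = np.ones((3, 2, 2))
frames[1:, 0, 0] = 2.0                       # one pixel brightens at t=1 and stays bright
frames[2, 1, 1] = 0.3                        # another pixel darkens at t=2
print(frames_to_events(frames))              # [(1, 0, 0, 1), (2, 1, 1, -1)]
```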

8.2.5 Other applications

Neuromorphic computing may be suited for use on "the edge" due to its low energy consumption [290]. "The edge" refers to the boundary of a network where a device can connect to a cloud platform. Since driverless cars must operate in this environment, neuromorphic computing may enable them to react to their surroundings more quickly, and a neuromorphic system could take control of these vehicles when they are not linked to a reliable internet source. This might increase the safety and environmental suitability of driverless cars. Neuromorphic computing's superior sensory capabilities may also enhance current "smart technologies" [283]; as with driverless cars, this could improve the effectiveness of such devices across a wider range of circumstances. Expanding communication channels is another potential application of neuromorphic computing.

Advertisement

9. The gap between expectations and reality

Today, the majority of neuromorphic computing tasks are processed by deep learning systems running on CPUs, GPUs, and FPGAs; however, none of these are optimised for neuromorphic processing. Chips like Intel's Loihi were created exclusively for such functions. Because of this, Loihi was able to obtain the same outcomes with a much smaller energy profile, as demonstrated by ABR. The next generation of compact devices that require AI capabilities will depend heavily on this efficiency.

Many experts predict that commercial applications will start to appear in earnest over the next three to five years, and that will not be the end of it. Samsung, for instance, declared in 2019 that it would grow its neural processing unit (NPU) business tenfold by 2030, going from 200 personnel to 2000. At the time, Samsung predicted that the market for neuromorphic chips would expand by 52% a year through 2023.

Developing common workloads and benchmarking approaches will be one of the upcoming challenges in the neuromorphic space. Technology adopters have benefited greatly from benchmarking tools like 3DMark and SPECint when matching products to their needs; sadly, no such benchmarks exist yet in the neuromorphic space, although Mike Davies of Intel Labs has suggested a benchmark for spiking neuromorphic systems dubbed SpikeMark. Intel researchers Dmitri Nikonov and Ian Young outline a number of guidelines and techniques for neuromorphic benchmarking in a technical article titled "Benchmarking Physical Performance of Neural Inference Circuits."

There is currently no practical testing tool on the market, though Intel Labs Day 2020 in early December brought some significant advancements. For instance, Intel compared Loihi to its Core i7-9300K on Sudoku-solver problems and demonstrated how much faster Loihi's searching was.

Researchers also solved Latin squares with a remarkable reduction in power consumption and a similar 100× gain. Perhaps the most significant finding was how the various processor types performed against Loihi on specific workloads.

Loihi competed against IBM's TrueNorth neuromorphic processor in addition to traditional computers. Deep feedforward neural networks (DNNs), in which data moves linearly from input to output, clearly underperform on neuromorphic platforms like Loihi. Recurrent neural networks (RNNs), which use feedback loops and behave more dynamically, are more similar to how the brain functions, and Loihi excels on RNN workloads. As Intel stated: "The more bio-inspired properties we find in these networks, typically, the better the results are."

One could consider the aforementioned examples to be early benchmarks. They are an important first step toward a widely used tool that runs typical workloads for the industry. New applications and use cases will ultimately appear, and the gaps in testing will be filled. Deploying these benchmarks and applications in response to urgent requirements will continue to be a focus of developers.

Research and development in neuromorphic computing is still ongoing, and it is becoming more and more obvious which applications suit it best. For those tasks, neuromorphic computers will be far faster and more energy-efficient than any current, conventional option. Neuromorphic computing will not displace CPU and GPU computing; rather, it will coexist with them, performing the tasks it is suited to more effectively than anything we have seen before.

Advertisement

10. Conclusion and outlook

In this chapter, we have summarised what neuromorphic computing has achieved to date, discussing its materials, algorithms, circuits, and applications. Although neuromorphic devices show promising characteristics, several challenges remain to be solved before energy-efficient neuromorphic systems can be realised. Neuromorphic computing technology has created high expectations, and its market expands every day across different fields.

One of the fastest-growing markets for neuromorphic chips is the automotive business. All of the top vehicle producers are working hard to achieve Level 5 vehicle autonomy, which is anticipated to create enormous demand for AI-controlled neuromorphic processors. The autonomous driving industry needs constant advances in AI algorithms that maximise throughput under minimal power requirements. Neuromorphic chips are excellent for classification-related tasks and could be applied in several aspects of self-driving cars; compared to static deep learning solutions, they also cope well with noisy environments such as those self-driving vehicles operate in. According to Intel, an autonomous vehicle might produce an estimated four terabytes of data over approximately an hour and a half of driving, roughly the amount of time a person spends in their vehicle on average every day. The ability of autonomous vehicles to handle all of the data generated during these trips effectively will be put to the test.

Among other uses of neuromorphic chips in vehicles, ADAS (advanced driver assistance system) applications include image learning and recognition. This functions like one of the standard ADAS features found in passenger vehicles, such as cruise control or intelligent speed assistance: by sensing the traffic information marked on streets, such as crosswalks, school zones, speed bumps, and so forth, it can control vehicle speed.

Some of the major market players, like Intel Corporation and IBM Corporation, are based in North America, and the market for neuromorphic chips is expanding in the region due to factors including governmental initiatives and investment activity. For instance, in September 2020 the Department of Energy (DOE) announced USD 2 million of funding for five crucial research projects to advance neuromorphic computing. The DOE's initiative supports the development of software and hardware for brain-inspired neuromorphic computing.

The industry is also growing as a result of the miniaturisation of neuromorphic chips, which helps in numerous applications. For instance, in June 2020 MIT engineers unveiled a brain-on-a-chip smaller than a piece of confetti, made from a sizable number of memristors; such chips can be used in portable AI devices. The Canadian government is also focusing on artificial intelligence technology, which will support advances in neuromorphic computing over the coming years. For instance, the governments of Canada and Quebec joined together in June 2020 to foster the thoughtful development of AI, focusing on a variety of topics including strong AI, commercialization, information management, and the future of work and development.

There is tremendous interest in organisations' research and development activities. For instance, Intel and Sandia National Laboratories began a collaboration in October 2020 to study the value of neuromorphic processing for large-scale computational problems. Sandia National Laboratories is one of three National Nuclear Security Administration research and development laboratories in the United States.

The penetration of neural-based chipsets into well-known applications is another factor driving market expansion. For instance, Apple, one of the most important technology companies, released the M1 chip in November 2020 specifically for its Mac products. The M1 chip speeds up AI tasks and brings the Apple Neural Engine to the Mac; its 16-core architecture can complete 11 trillion operations per second, enabling up to 15 times faster machine learning execution.

References

  1. 1. Sörnmo L, Laguna P. Bioelectrical Signal Processing in Cardiac and Neurological Applications. Vol. 8. Academic Press; 2005
  2. 2. Zidan MA, Strachan JP, Lu WD. The future of electronics based on memristive systems. Nature Electronics. 2018;1:22-29
  3. 3. Yu S. Neuro-Inspired Computing Using Resistive Synaptic Devices. Berlin/Heidelberg, Germany: Springer; 2017
  4. 4. Goldberg DH, Cauwenberghs G, Andreou AG. Probabilistic synaptic weighting in a reconfigurable network of VLSI integrate-and-fire neurons. Neural Networks. 2001;14:781-793
  5. 5. An H, Bai K, Yi Y. The Roadmap to Realize Memristive Three-Dimensional Neuromorphic Computing System. 2018. 10.5772/intechopen.78986
  6. 6. Choi S, Ham S, Wang G. Memristor synapses for neuromorphic computing. In: Memristors-Circuits and Applications of Memristor Devices. London, UK: IntechOpen; 2019
  7. 7. Camuñas-Mesa LA, Linares-Barranco B, Serrano-Gotarredona T. Neuromorphic spiking neural networks and their memristor-CMOS hardware implementations. Materials. 2019;12:2745
  8. 8. Priestley A. Emerging technology analysis: Neuromorphic computing. Nanotechnology. 2018;30:032001
  9. 9. Fowers J, Ovtcharov K, Papamichael M, Massengill T, Liu M, Lo D, et al. A configurable cloud-scale DNN processor for real-time AI. In: Proceedings of the 2018 ACM/IEEE Press 45th Annual International Symposium on Computer Architecture (ISCA); 1-6 June 2018; Los Angeles, CA, USA. pp. 1-14
  10. 10. Ma D, Shen J, Gu Z, Zhang M, Zhu X, Xu X, et al. Darwin: A neuromorphic hardware co-processor based on spiking neural networks. Journal of Systems Architecture. 2017;77:43-51
  11. 11. Jiao Y, Han L, Jin R, Su YJ, Ho C, Yin L, et al. 7.2 A 12nm Programmable Convolution-Efficient Neural-Processing-Unit Chip Achieving 825TOPS. In: Proceedings of the 2020 IEEE Press International Solid-State Circuits Conference-(ISSCC); 16-20 February 2020; San Francisco, CA, USA. pp. 136-140
  12. 12. Corinto F, Civalleri PP, Chua LO. A theoretical approach to memristor devices. EEE Journal on Emerging and Selected Topics in Circuits and Systems. 2015;5:123-132
  13. 13. Pei J et al. Towards artificial general intelligence with hybrid Tianjic chip architecture. Nature. 2019;572:106-111
  14. 14. Merolla PA et al. A million spiking-neuron integrated circuit with a scalable communication network and interface. Science. 2014;345:668-673
  15. 15. Davies M et al. Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro. 2018;38:82-99
  16. 16. Moradi S, Qiao N, Stefanini F, Indiveri G. A scalable multicore architecture with heterogeneous memory structures for dynamic neuromorphic asynchronous processors (DYNAPs). IEEE Transactions on Biomedical Circuits and Systems. 2017;12:106-122
  17. 17. Benjamin BV et al. Neurogrid: A mixed-analog-digital multichip system for large-scale neural simulations. Proceedings of the IEEE. 2014;102:699-716
  18. 18. Thakur CS et al. Large-scale neuromorphic spiking array processors: A quest to mimic the brain. Frontiers in Neuroscience. 2018;12:891
  19. 19. Schemmel J, Billaudelle S, Dauer P, Weis J. Accelerated analog neuromorphic computing. 2020. Preprint at https://arxiv.org/abs/2003.11996
  20. 20. Bohnstingl T, Scherr F, Pehle C, Meier K, Maass W. Neuromorphic hardware learns to learn. Frontiers in Neuroscience. 2019;13:483
  21. 21. Islam R et al. Device and materials requirements for neuromorphic computing. Journal of Physics D. 2019;52:113001
  22. 22. Nandakumar S, Kulkarni SR, Babu AV, Rajendran B. Building brain-inspired computing systems: Examining the role of nanoscale devices. IEEE Nanotechnology Magazine. 2018;12:19-35
  23. 23. Najem JS et al. Memristive ion channel-doped biomembranes as synaptic mimics. ACS Nano. 2018;12:4702-4711
  24. 24. Jo SH et al. Nanoscale memristor device as synapse in neuromorphic systems. Nano Letters. 2010;10:1297-1301
  25. 25. Li Y, Wang Z, Midya R, Xia Q , Yang JJ. Review of memristor devices in neuromorphic computing: Materials sciences and device challenges. Journal of Physics D. 2018;51:503002
  26. 26. Schuman CD et al. A survey of neuromorphic computing and neural networks in hardware. 2017. Preprint at https://arxiv.org/abs/1705.06963
  27. 27. Markovic D, Grollier J. Quantum neuromorphic computing. Applied Physics Letters. 2020;117:150501
  28. 28. Goteti US, Zaluzhnyy IA, Ramanathan S, et al. Low-temperature emergent neuromorphic networks with correlated oxide devices. Proceedings of the National Academy of Sciences. 2021;118:e2103934118
  29. 29. Wang G, Ma S, Wu Y, Pei J, Zhao R, Shi LP. End-to-end implementation of various hybrid neural networks on a cross-paradigm neuromorphic Chip. Frontiers in Neuroscience. 2021;15:615279. DOI: 10.3389/fnins.2021.615279
  30. 30. Available from: https://newsroom.intel.com/news/intels-pohoiki-beach-64-chip-neuromorphic-system-delivers-breakthrough-results-research-tests/#gs.jaqzr2
  31. 31. How IBM Got Brainlike Efficiency from the TrueNorth Chip. Available from: https://spectrum.ieee.org/how-ibm-got-brainlike-efficiency-from-the-truenorth-chip
  32. 32. Song KM, Jeong J-S, Pan B, Zhang X, Xia J, Cha S, et al. Skyrmion-based artificial synapses for neuromorphic computing. Nature Electronics. 2020;3(3):148-155. arXiv:1907.00957. DOI: 10.1038/s41928-020-0385-0.S2CID195767210
  33. 33. Neuromorphic computing: The long path from roots to real life. 2020. Available from: https://venturebeat.com/ai/neuromorphic-computing-the-long-path-from-roots-to-real-life/
  34. 34. The brain's Architecture, Efficiency… on a Chip. IBM Research Blog. 2016. Available from: https://www.ibm.com/blogs/research/2016/12/the-brains-architecture-efficiency-on-a-chip/
  35. 35. IBM Research: Brain-inspired Chip. 2021. Available from: https://research.ibm.com/blog; www.research.ibm.com
  36. 36. Andreou AG, Dykman AA, Fischl KD, Garreau G, Mendat DR, Orchard G, et al. Real-time sensory information processing using the TrueNorth Neurosynaptic System. In: 2016 IEEE Press International Symposium on Circuits and Systems (ISCAS): 2911. 2016. DOI: 10.1109/ISCAS.2016.7539214. ISBN 978-1-4799-5341-7. S2CID 29335047
  37. 37. Tsuruoka T, Hasegawa T, Aono M. Synaptic plasticity and memristive behavior operated by atomic switches. In: Cellular Nanoscale Networks and their Applications (CNNA), 2014 14th International Workshop on. IEEE; 2014. pp. 1-2
  38. 38. Avizienis AV, Sillin HO, Martin-Olmos C, Shieh HH, Aono M, Stieg AZ, et al. Neuromorphic atomic switch networks. PLoS One. 2012;7(8):e42772
  39. 39. Hasegawa T, Ohno T, Terabe K, Tsuruoka T, Nakayama T, Gimzewski JK, et al. Learning abilities achieved by a single solid-state atomic switch. Advanced Materials. 2010;22(16):1831-1834
  40. 40. Stieg AZ, Avizienis AV, Sillin HO, Aguilera R, Shieh H-H, Martin-Olmos C, et al. Self-organization and emergence of dynamical structures in neuromorphic atomic switch networks. In: Memristor Networks. Springer; 2014. pp. 173-209
  41. 41. Nayak A, Ohno T, Tsuruoka T, Terabe K, Hasegawa T, Gimzewski JK, et al. Controlling the synaptic plasticity of a cu2s gap-type atomic switch. Advanced Functional Materials. 2012;22(17):3606-3613
  42. 42. Tsuruoka T, Hasegawa T, Terabe K, Aono M. Conductance quantization and synaptic behavior in a ta2o5-based atomic switch. Nanotechnology. 2012;23(43):435705
  43. 43. Yang R, Terabe K, Yao Y, Tsuruoka T, Hasegawa T, Gimzewski JK, et al. Synaptic plasticity and memory functions achieved in a wo3- x-based nanoionics device by using the principle of atomic switch operation. Nanotechnology. 2013;24(38):384003
  44. 44. Suri M, Bichler O, Querlioz D, Palma G, Vianello E, Vuillaume D, et al. Cbram devices as binary synapses for low-power stochastic neuromorphic systems: Auditory (cochlea) and visual (retina) cognitive processing applications. In: Electron Devices Meeting (IEDM), 2012 IEEE International. Vol. 2012. IEEE. pp. 10-13
  45. 45. Palma G, Suri M, Querlioz D, Vianello E, De Salvo B. Stochastic neuron design using conductive bridge ram. In: Nanoscale Architectures (NANOARCH), 2013 IEEE/ACM International Symposium on. IEEE; 2013. pp. 95-100
  46. 46. Suri M, Parmar V. Exploiting intrinsic variability of filamentary resistive memory for extreme learning machine architectures. Nanotechnology, IEEE Transactions on. 2015;14(6):963-968
  47. 47. DeSalvo B, Vianello E, Thomas O, Clermidy F, Bichler O, Gamrat C, et al. Emerging resistive memories for low power embedded applications and neuromorphic systems. In: Circuits and Systems (ISCAS), 2015 IEEE International Symposium on. Vol. 2015. IEEE. pp. 3088-3091
  48. 48. Suri M, Querlioz D, Bichler O, Palma G, Vianello E, Vuillaume D, et al. Bio-inspired stochastic computing using binary cbram synapses. Electron Devices, IEEE Transactions on. 2013;60(7):2402-2409
  49. 49. Clermidy F, Heliot R, Valentian A, Gamrat C, Bichler O, Duranton M, et al. Advanced technologies for braininspired computing. In: Design Automation Conference (ASP-DAC), 2014 19th Asia and South Pacific. Vol. 2014. IEEE. pp. 563-569
  50. 50. Roclin D, Bichler O, Gamrat C, Klein J-O. Sneak paths effects in cbram memristive devices arrays for spiking neural networks. In: Proceedings of the 2014 IEEE/ACM International Symposium on Nanoscale Architectures. ACM; 2014. pp. 13-18
  51. 51. DeSalvo B, Vianello E, Garbin D, Bichler O, Perniola L. From memory in our brain to emerging resistive memories in neuromorphic systems. In: Memory Workshop (IMW), 2015 IEEE International. Vol. 2015. IEEE. pp. 1-4
  52. 52. Jang JW, Attarimashalkoubeh B, Prakash A, Hwang H, Jeong YH. “S calable neuron circuit using conductive-bridge ram for pattern reconstructions”. IEEE Transactions on Electron Devices. vol. 99. 2016. pp. 1-4
  53. 53. Mahalanabis D, Sivaraj M, Chen W, Shah S, Barnaby H, Kozicki M, et al. Demonstration of spike timing dependent plasticity in cbram devices with silicon neurons. In: Circuits and Systems (ISCAS), 2016 IEEE International Symposium on. Vol. 2016. IEEE. pp. 2314-2317
  54. 54. Yu S, Wong H-S. Modeling the switching dynamics of programmable-metallization-cell (pmc) memory and its application as synapse device for a neuromorphic computation system. In: Electron Devices Meeting (IEDM), 2010 IEEE International. Vol. 2010. IEEE. pp. 22-21
  55. 55. Mahalanabis D, Barnaby H, Gonzalez-Velo Y, Kozicki M, Vrudhula S, Dandamudi P. Incremental resistance programming of programmable metallization cells for use as electronic synapses. Solid-State Electronics. 2014;100:39-44
  56. 56. La Barbera S, Vincent A, Vuillaume D, Querlioz D, Alibart F. Short-term to long-term plasticity transition in filamentary switching for memory applications. In: Memristive Systems (MEMRISYS) 2015 International Conference on. IEEE; 2015. pp. 1-2
  57. 57. La Barbera S, Vincent AF, Vuillaume D, Querlioz D, Alibart F. Interplay of multiple synaptic plasticity features in filamentary memristive devices for neuromorphic computing. Scientific Reports. 2016;6
  58. 58. La Barbera S, Vuillaume D, Alibart F. Filamentary switching: Synaptic plasticity through device volatility. ACS Nano. 2015;9(1):941-949
  59. 59. Covi E, Brivio S, Serb A, Prodromakis T, Fanciulli M, Spiga S. Hfo2-based memristors for neuromorphic applications. In: Circuits and Systems (ISCAS), 2016 IEEE International Symposium on. Vol. 2016. IEEE. pp. 393-396
  60. 60. Gao B, Bi Y, Chen H-Y, Liu R, Huang P, Chen B, et al. Ultra-low-energy three-dimensional oxide-based electronic synapses for implementation of robust highaccuracy neuromorphic computation systems. ACS Nano. 2014;8(7):6998-7004
  61. 61. Gao B, Liu L, Kang J. Investigation of the synaptic device based on the resistive switching behavior in hafnium oxide. Progress in Natural Science: Materials International. 2015;25(1):47-50
  62. 62. Jha R, Mandal S. Nanoelectronic synaptic devices and materials for brain-inspired computational architectures. In: SPIE NanoScience+ Engineering. International Society for Optics and Photonics; 2014. pp. 91 740S-91 740S
  63. 63. Matveyev Y, Kirtaev R, Fetisova A, Zakharchenko S, Negrov D, Zenkevich A. Crossbar nanoscale hfo 2-based electronic synapses. Nanoscale Research Letters. 2016;11(1):1
  64. 64. Yu S, Wu Y, Jeyasingh R, Kuzum D, Wong H-S. An electronic synapse device based on metal oxide resistive switching memory for neuromorphic computation. Electron Devices, IEEE Transactions on. 2011;58(8):2729-2737
  65. 65. Matveyev Y, Egorov K, Markeev A, Zenkevich A. Resistive switching and synaptic properties of fully atomic layer deposition grown tin/hfo2/tin devices. Journal of Applied Physics. 2015;117(4):044901
  66. 66. Woo J, Moon K, Song J, Kwak M, Park J, Hwang H. Optimized programming scheme enabling linear potentiation in filamentary hfo 2 rram synapse for neuromorphic systems. IEEE Transactions on Electron Devices. 2016;63(12):5064-5067
  67. 67. Jia H, Deng N, Pang H. Threshold adaptive transistor realized with rrams for neuromorphic circuits. In: Junction Technology (IWJT), 2014 International Workshop on. Vol. 2014. IEEE. pp. 1-4
  68. 68. Demin V, Emelyanov A, Lapkin D, Erokhin V, Kashkarov P, Kovalchuk M. Neuromorphic elements and systems as the basis for the physical implementation of artificial intelligence technologies. Crystallography Reports. 2016;61(6):992-1001
  69. 69. Dongale T, Desai N, Khot K, Mullani N, Pawar P, Tikke R, et al. Effect of surfactants on the data directionality and learning behaviour of al/tio2/fto thin film memristor-based electronic synapse. Journal of Solid State Electrochemistry. 2016;49:1-5
  70. 70. Hu M, Wang Y, Qiu Q , Chen Y, Li H. The stochastic modeling of tio 2 memristor and its usage in neuromorphic system design. In: Design Automation Conference (ASP-DAC), 2014 19th Asia and South Pacific. Vol. 2014. IEEE. pp. 831-836
  71. 71. Hu X, Feng G, Li H, Chen Y, Duan S. An adjustable memristor model and its application in small-world neural networks. In: Neural Networks (IJCNN), 2014 International Joint Conference on. IEEE; 2014. pp. 7-14
  72. 72. Park J, Kwak M, Moon K, Woo J, Lee D, Hwang H. Tio x-based rram synapse with 64-levels of conductance and symmetric conductance change by adopting a hybrid pulse scheme for neuromorphic computing. IEEE Electron Device Letters. 2016;37(12):1559-1562
  73. 73. O’Kelly CJ, Fairfield JA, McCloskey D, Manning HG, Donegan JF, Boland JJ. Associative enhancement of time correlated response to heterogeneous stimuli in a neuromorphic nanowire device. Advanced Electronic Materials. 2016;3:1-6
  74. 74. Chang T, Sheridan P, Lu W. Modeling and implementation of oxide memristors for neuromorphic applications. In: 2012 13th International Workshop on Cellular Nanoscale Networks and their Applications. 2012. pp. 1-3
  75. 75. Du C, Ma W, Chang T, Sheridan P, Lu WD. Biorealistic implementation of synaptic functions with oxide memristors through internal ionic dynamics. Advanced Functional Materials. 2015;25(27):4290-4299
  76. 76. Tan Z-H, Yang R, Terabe K, Yin X-B, Zhang X-D, Guo X. Synaptic metaplasticity realized in oxide memristive devices. Advanced Materials. 2015;28:377-384
  77. 77. Shi T, Yin X-B, Yang R, Guo X. Pt/wo 3/fto memristive devices with recoverable pseudo-electroforming for time-delay switches in neuromorphic computing. Physical Chemistry Chemical Physics. 2016;18(14):9338-9343
  78. 78. Thakoor S, Moopenn A, Daud T, Thakoor A. Solid-state thinfilm memistor for electronic neural networks. Journal of Applied Physics. 1990;67(6):3132-3135
  79. 79. Chang Y-F, Fowler B, Chen Y-C, Zhou F, Pan C-H, Chang T-C, et al. Demonstration of synaptic behaviors and resistive switching characterizations by proton exchange reactions in silicon oxide. Scientific Reports. 2016;6
  80. 80. Guo L, Wan Q , Wan C, Zhu L, Shi Y. Short-term memory to long-term memory transition mimicked in izo homojunction synaptic transistors. Electron Device Letters. 2013;34(12):1581-1583
  81. 81. Gao L, Wang I-T, Chen P-Y, Vrudhula S, Seo J-S, Cao Y, et al. Fully parallel write/read in resistive synaptic array for accelerating on-chip learning. Nanotechnology. 2015;26(45):455204
  82. 82. Wang Y-F, Lin Y-C, Wang I-T, Lin T-P, Hou T-H. Characterization and modeling of nonfilamentary ta/taox/tio2/ti analog synaptic device. Scientific Reports. 2015;5
  83. 83. Hu S, Liu Y, Chen T, Liu Z, Yu Q , Deng L, et al. Emulating the ebbinghaus forgetting curve of the human brain with a nio-based memristor. Applied Physics Letters. 2013;103(13):133701
  84. 84. Hu SG, Liu Y, Chen T, Liu Z, Yu Q , Deng L, et al. Emulating the paired-pulse facilitation of a biological synapse with a NiOx-based memristor. Applied Physics Letters. 2013;102. DOI: 10.1063/1.4804374
  85. 85. Hu S, Liu Y, Liu Z, Chen T, Yu Q , Deng L, et al. Synaptic long-term potentiation realized in pavlov’s dog model based on a niox-based memristor. Journal of Applied Physics. 2014;116(21):214502
  86. 86. Yang X, Cai Y, Zhang Z, Yu M, Huang R. An electronic synapse based on tantalum oxide material. In: 2015 15th Non-Volatile Memory Technology Symposium (NVMTS). Vol. 2015. IEEE. pp. 1-4
  87. 87. Wang Z, Yin M, Zhang T, Cai Y, Wang Y, Yang Y, et al. Engineering incremental resistive switching in tao x based memristorsfor brain-inspired computing. Nanoscale. 2016;8:14015-14022
  88. 88. Thomas A, Niehörster S, Fabretti S, Shepheard N, Kuschel O, Küpper K, et al. Tunnel junction based memristors as artificial synapses. Frontiers in Neuroscience. 2015;9
  89. 89. Wang C, He W, Tong Y, Zhao R. Investigation and manipulation of different analog behaviors of memristor as electronic synapse for neuromorphic applications. Scientific Reports. 2016;6
  90. 90. Wu Y, Yu S, Wong H-S, Chen Y-S, Lee H-Y, Wang S-M, et al. Alox-based resistive switching device with gradual resistance modulation for neuromorphic device application. In: Memory Workshop (IMW), 2012 4th IEEE International. Vol. 2012. IEEE. pp. 1-4
  91. 91. Sarkar B, Lee B, Misra V. Understanding the gradual reset in pt/al2o3/ni rram for synaptic applications. Semiconductor Science and Technology. 2015;30(10):105014
  92. 92. Wang L-G, Zhang W, Chen Y, Cao Y-Q , Li A-D, Wu D. Synaptic plasticity and learning behaviors mimicked in single inorganic synapses of pt/hfox/znox/tin memristive system. Nanoscale Research Letters. 2017;12(1):65
  93. 93. Moon K, Cha E, Park J, Gi S, Chu M, Baek K, et al. High density neuromorphic system with mo/pr0. 7ca0. 3mno3 synapse and nbo2 imt oscillator neuron. In: 2015 IEEE International Electron Devices Meeting (IEDM). Vol. 2015. IEEE. pp. 17-16
  94. 94. Jang J-W, Park S, Jeong Y-H, Hwang H. Reram-based synaptic device for neuromorphic computing. In: Circuits and Systems (ISCAS), 2014 IEEE International Symposium on. Vol. 2014. IEEE. pp. 1054-1057
  95. 95. Jang J-W, Park S, Burr GW, Hwang H, Jeong Y-H. Optimization of conductance change in pr 1–x ca x mno 3-based synaptic devices for neuromorphic systems. Electron Device Letters, IEEE. 2015;36(5):457-459
  96. 96. Lee D, Park J, Moon K, Jang J, Park S, Chu M, et al. Oxide based nanoscale analog synapse device for neural signal recognition system. In: 2015 IEEE International Electron Devices Meeting (IEDM). Vol. 2015. IEEE. pp. 4-7
  97. 97. Moon K, Cha E, Lee D, Jang J, Park J, Hwang H. Rerambased analog synapse and imt neuron device for neuromorphic system. In: VLSI Technology, Systems and Application (VLSI-TSA), 2016 International Symposium on. Vol. 2016. IEEE. pp. 1-2
  98. 98. Moon K, Cha E, Park J, Gi S, Chu M, Baek K, et al. Analog synapse device with 5-b mlc and improved data retention for neuromorphic system. IEEE Electron Device Letters. 2016;37(8):1067-1070
  99. 99. Krzysteczko P, Münchenberger J, Schäfers M, Reiss G, Thomas A. The memristive magnetic tunnel junction as a nanoscopic synapse-neuron system. Advanced Materials. 2012;24(6):762-766
  100. 100. Li Y, Zhong Y, Xu L, Zhang J, Xu X, Sun H, et al. Ultrafast synaptic events in a chalcogenide memristor. Scientific Reports. 2013;3
  101. 101. Li Y, Zhong Y, Zhang J, Xu L, Wang Q , Sun H, et al. Activity-dependent synaptic plasticity of a chalcogenide electronic synapse for neuromorphic systems. Scientific Reports. 2014;4
  102. 102. Tranchant J, Janod E, Corraze B, Stoliar P, Rozenberg M, Besland M-P, et al. Control of resistive switching in am4q8 narrow gap Mott insulators: A first step towards neuromorphic applications. Physica Status Solidi (a). 2015;212(2):239-244
  103. 103. Chen Y, Liu G, Wang C, Zhang W, Li R-W, Wang L. Polymer memristor for information storage and neuromorphic applications. Materials Horizons. 2014;1(5):489-506
  104. 104. Juarez-Hernandez LJ, Cornella N, Pasquardini L, Battistoni S, Vidalino L, Vanzetti L, et al. Bio-hybrid interfaces to study neuromorphic functionalities: New multidisciplinary evidences of cell viability on poly (anyline)(pani), a semiconductor polymer with memristive properties. Biophysical Chemistry. 2016;208:40-47
  105. 105. Li S, Zeng F, Chen C, Liu H, Tang G, Gao S, et al. Synaptic plasticity and learning behaviours mimicked through ag interface movement in an ag/conducting polymer/ta memristive system. Journal of Materials Chemistry C. 2013;1(34):5292-5298
  106. 106. Luo W, Yuan F-Y, Wu H, Pan L, Deng N, Zeng F, et al. Synaptic learning behaviors achieved by metal ion migration in a cu/pedot: Pss/ta memristor. In: 2015 15th Non-Volatile Memory Technology Symposium (NVMTS). Vol. 2015. IEEE. pp. 1-4
  107. 107. Luo W, Wu X, Yuan F-Y, Wu H, Pan L, Deng N. Synaptic learning behavior based on a ag/pedot: Pss/ta memristor. In: Next- Generation Electronics (ISNE), 2016 5th International Symposium on. Vol. 2016. IEEE. pp. 1-2
  108. 108. Nawrocki R, Voyles RM, Shaheen SE, et al. Neurons in polymer: Hardware neural units based on polymer memristive devices and polymer transistors. Electron Devices, IEEE Transactions on. 2014;61(10):3513-3519
  109. 109. Xiao Z, Huang J. Energy-efficient hybrid perovskite memristors and synaptic devices. Advanced Electronic Materials. 2016;2:1-8
  110. 110. Yang X, Wang C, Shang J, Zhang C, Tan H, Yi X, et al. An organic terpyridyl-iron polymer based memristor for synaptic plasticity and learning behavior simulation. RSC Advances. 2016;6(30):25179-25184
  111. 111. Zhang C, Tai Y-T, Shang J, Liu G, Wang K-L, Hsu C, et al. Synaptic plasticity and learning behaviours in flexible artificial synapse based on polymer/viologen system. Journal of Materials Chemistry C. 2016;4(15):3217-3223
  112. 112. Bennett CH, Chabi D, Cabaret T, Jousselme B, Derycke V, Querlioz D, et al. Supervised learning with organic memristor devices and prospects for neural crossbar arrays. In: Nanoscale Architectures (NANOARCH), 2015 IEEE/ACM International Symposium on. Vol. 2015. IEEE. pp. 181-186
  113. 113. Cabaret T, Fillaud L, Jousselme B, Klein J-O, Derycke V. Electro-grafted organic memristors: Properties and prospects for artificial neural networks based on stdp. In: Nanotechnology (IEEENANO), 2014 IEEE 14th International Conference on. IEEE; 2014. pp. 499-504
  114. 114. Chang C, Zeng F, Li X, Dong W, Lu S, Gao S, et al. Simulation of synaptic short-term plasticity using ba (cf3so3) 2- doped polyethylene oxide electrolyte film. Scientific Reports. 2016;6
  115. 115. Erokhin V, Berzina T, Smerieri A, Camorani P, Erokhina S, Fontana MP. Bio-inspired adaptive networks based on organic memristors. Nano Communication Networks. 2010;1(2):108-117
  116. 116. Erokhin V. Organic memristive devices: Architecture, properties and applications in neuromorphic networks. In: Electronics, Circuits, and Systems (ICECS), 2013 IEEE 20th International Conference on. IEEE; 2013. pp. 305-308
  117. 117. Erokhina S. Layer-by-layer technique for the fabrication of organic memristors and neuromorphic systems. In: Memristive Systems (MEMRISYS) 2015 International Conference on. IEEE; 2015. pp. 1-2
  118. 118. Kim C-H, Sung S, Yoon M-H. Synaptic organic transistors with a vacuum-deposited charge-trapping nanosheet. Scientific Reports. 2016;6
  119. 119. Kong L-A, Sun J, Qian C, Gou G, He Y, Yang J, et al. Iongel gated field-effect transistors with solution-processed oxide semiconductors for bioinspired artificial synapses. Organic Electronics. 2016;39:64-70
  120. 120. Lin Y-P, Bennett CH, Cabaret T, Vodenicarevic D, Chabi D, Querlioz D, et al. Physical realization of a supervised learning system built with organic memristive synapses. Scientific Reports. 2016;6
  121. 121. Liu G, Wang C, Zhang W, Pan L, Zhang C, Yang X, et al. Organic biomimicking memristor for information storage and processing applications. Advanced Electronic Materials. 2015;2:1-8
  122. 122. Nawrocki RA, Voyles RM, Shaheen SE. Simulating hardware neural networks with organic memristors and organic field effect transistors. Intelligent Engineering Systems through Artificial Neural Networks. 2010;20
  123. 123. Wang L, Wang Z, Lin J, Yang J, Xie L, Yi M, et al. Long-term homeostatic properties complementary to hebbian rules in cupc-based multifunctional memristor. Scientific Reports. 2016;6
124. Ishiwara H. Proposal of adaptive-learning neuron circuits with ferroelectric analog-memory weights. Japanese Journal of Applied Physics. 1993;32(1S):442
125. Yoon S-M, Tokumitsu E, Ishiwara H. An electrically modifiable synapse array composed of metal-ferroelectric-semiconductor (MFS) FETs using SrBi2Ta2O9 thin films. IEEE Electron Device Letters. 1999;20(5):229-231
126. Yoon S-M, Tokumitsu E, Ishiwara H. Adaptive-learning neuron integrated circuits using metal-ferroelectric (SrBi2Ta2O9)-semiconductor (MFS) FETs. IEEE Electron Device Letters. 1999;20(10):526-528. DOI: 10.1109/55.791931
127. Yoon SM, Tokumitsu E, Ishiwara H. Realization of adaptive learning function in a neuron circuit using metal/ferroelectric (SrBi2Ta2O9)/semiconductor field effect transistor (MFSFET). Japanese Journal of Applied Physics. 1999;38(4B):2289-2293. DOI: 10.1143/JJAP.38.2289
128. Yoon SM, Tokumitsu E, Ishiwara H. Ferroelectric neuron integrated circuits using SrBi2Ta2O9-gate FETs and CMOS Schmitt-trigger oscillators. IEEE Transactions on Electron Devices. 2000;47(8):1630-1635. DOI: 10.1109/16.853041
129. Yoon S-M et al. Japanese Journal of Applied Physics. 2000;39:2119
130. Kim E, Kim K, Yoon S. Investigation of the ferroelectric switching behavior of P(VDF-TrFE)-PMMA blended films for synaptic device applications. Journal of Physics D: Applied Physics. 2016;49(7):075105
131. Nishitani Y, Kaneko Y, Ueda M, Morie T, Fujii E. Three-terminal ferroelectric synapse device with concurrent learning function for artificial neural networks. Journal of Applied Physics. 2012;111(12):124108
132. Nishitani Y, Kaneko Y, Ueda M, Fujii E, Tsujimura A. Dynamic observation of brain-like learning in a ferroelectric synapse device. Japanese Journal of Applied Physics. 2013;52(4S):04CE06
133. Yoon S-M, Ishiwara H. Adaptive-learning synaptic devices using ferroelectric-gate field-effect transistors for neuromorphic applications. In: Ferroelectric-Gate Field Effect Transistor Memories. Springer; 2016. pp. 311-333
134. Nishitani Y, Kaneko Y, Ueda M. Artificial synapses using ferroelectric memristors embedded with CMOS circuit for image recognition. In: 2014 72nd Annual Device Research Conference (DRC). IEEE; 2014. pp. 297-298
135. Nishitani Y, Kaneko Y, Ueda M. Supervised learning using spike-timing-dependent plasticity of memristive synapses. IEEE Transactions on Neural Networks and Learning Systems. 2015;26(12):2999-3008. DOI: 10.1109/TNNLS.2015.2399491
136. Wang Z, Zhao W, Kang W, Zhang Y, Klein J-O, Ravelosona D, et al. Compact modelling of ferroelectric tunnel memristor and its use for neuromorphic simulation. Applied Physics Letters. 2014;104(5):053505
137. Tian H, Mi W, Wang X-F, Zhao H, Xie Q-Y, Li C, et al. Graphene dynamic synapse with modulatable plasticity. Nano Letters. 2015;15(12):8013-8019
138. Wang L, Wang Z, Zhao W, Hu B, Xie L, Yi M, et al. Controllable multiple depression in a graphene oxide artificial synapse. Advanced Electronic Materials. 2017;3(1)
139. Wan CJ, Liu YH, Feng P, Wang W, Zhu LQ, Liu ZP, et al. Flexible metal oxide/graphene oxide hybrid neuromorphic transistors on flexible conducting graphene substrates. Advanced Materials. 2016;28:5878-5885
140. Wan CJ, Zhu LQ, Liu YH, Feng P, Liu ZP, Cao HL, et al. Proton-conducting graphene oxide-coupled neuron transistors for brain-inspired cognitive systems. Advanced Materials. 2016;28(18):3557-3563
141. Yang Y, Wen J, Guo L, Wan X, Du P, Feng P, et al. Long-term synaptic plasticity emulated in modified graphene oxide electrolyte gated IZO-based thin-film transistors. ACS Applied Materials & Interfaces. 2016;8(44):30281-30286
142. Darwish M, Calayir V, Pileggi L, Weldon J. Ultra-compact graphene multigate variable resistor for neuromorphic computing. IEEE Transactions on Nanotechnology. 2016;15:1-1. DOI: 10.1109/TNANO.2016.2525039
143. Hsu C-C, Parker AC, Joshi J. Dendritic computations, dendritic spiking and dendritic plasticity in nanoelectronic neurons. In: 2010 53rd IEEE International Midwest Symposium on Circuits and Systems (MWSCAS). IEEE; 2010. pp. 89-92
144. Hsu C-C, Parker AC. Border ownership in a nanoneuromorphic circuit using nonlinear dendritic computations. In: 2014 International Joint Conference on Neural Networks (IJCNN). IEEE; 2014. pp. 3442-3449
145. Hsu C-C, Parker AC. Dynamic spike threshold and nonlinear dendritic computation for coincidence detection in neuromorphic circuits. In: 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. 2014. pp. 461-464
146. Joshi J, Hsu C, Parker AC, Deshmukh P. A carbon nanotube cortical neuron with excitatory and inhibitory dendritic computations. In: IEEE/NIH Life Science Systems and Applications Workshop. 2009
147. Parker AC, Joshi J, Hsu C-C, Singh NAD. A carbon nanotube implementation of temporal and spatial dendritic computations. In: 2008 51st Midwest Symposium on Circuits and Systems (MWSCAS). IEEE; 2008. pp. 818-821
148. Friesz AK, Parker AC, Zhou C, Ryu K, Sanders JM, Wong H-SP, et al. A biomimetic carbon nanotube synapse circuit. In: Biomedical Engineering Society (BMES) Annual Fall Meeting. Springer; Vol. 2(8). 2007. p. 29
149. Barzegarjalali S, Parker AC. A hybrid neuromorphic circuit demonstrating schizophrenic symptoms. In: 2015 IEEE Biomedical Circuits and Systems Conference (BioCAS). IEEE; 2015. pp. 1-4
150. Barzegarjalali S, Parker AC. Neuromorphic circuit modeling directional selectivity in the visual cortex. In: 2016 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). 2016. pp. 6130-6133. DOI: 10.1109/EMBC.2016.7592127
151. Barzegarjalali S, Parker A. A neuromorphic circuit mimicking biological short-term memory. In: 2016 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). 2016. pp. 1401-1404. DOI: 10.1109/EMBC.2016.7590970
152. Barzegarjalali S, Parker AC. A bio-inspired electronic mechanism for unsupervised learning using structural plasticity. In: 2016 Future Technologies Conference (FTC). 2016. pp. 806-815. DOI: 10.1109/FTC.2016.7821696
153. Gacem K, Retrouvey J-M, Chabi D, Filoramo A, Zhao W, Klein J-O, et al. Neuromorphic function learning with carbon nanotube based synapses. Nanotechnology. 2013;24(38):384013
154. Joshi J, Parker AC, Hsu C-C. A carbon nanotube cortical neuron with spike-timing-dependent plasticity. In: 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2009. pp. 1651-1654
155. Joshi J, Zhang J, Wang C, Hsu CC, Parker AC, Zhou C, Ravishankar U. A biomimetic fabricated carbon nanotube synapse for prosthetic applications. In: 2011 IEEE/NIH Life Science Systems and Applications Workshop (LiSSA). IEEE; 2011. pp. 139-142
156. Kim K, Chen C-L, Truong Q, Shen AM, Chen Y. A carbon nanotube synapse with dynamic logic and learning. Advanced Materials. 2013;25(12):1693-1698
157. Kim K, Tudor A, Chen C-L, Lee D, Shen AM, Chen Y. Bioinspired neuromorphic module based on carbon nanotube/C60/polymer composite. Journal of Composite Materials. 2015;49:0021998315573559
158. Kim S, Yoon J, Kim H-D, Choi S-J. Carbon nanotube synaptic transistor network for pattern recognition. ACS Applied Materials & Interfaces. 2015;7(45):25479-25486
159. Liao S-Y, Retrouvey J-M, Agnus G, Zhao W, Maneux C, Frégonèse S, et al. Design and modeling of a neuro-inspired learning circuit using nanotube-based memory devices. IEEE Transactions on Circuits and Systems I: Regular Papers. 2011;58(9):2172-2181
160. Mahvash M, Parker AC. Modeling intrinsic ion-channel and synaptic variability in a cortical neuromorphic circuit. In: 2011 IEEE Biomedical Circuits and Systems Conference (BioCAS). IEEE; 2011. pp. 69-72
161. Shen AM, Chen C-L, Kim K, Cho B, Tudor A, Chen Y. Analog neuromorphic module based on carbon nanotube synapses. ACS Nano. 2013;7(7):6117-6122
162. Shen AM, Kim K, Tudor A, Lee D, Chen Y. Doping modulated carbon nanotube synapstors for a spike neuromorphic module. Small. 2015;11(13):1571-1579
163. Yin C, Li Y, Wang J, Wang X, Yang Y, Ren T-L. Carbon nanotube transistor with short-term memory. Tsinghua Science and Technology. 2016;21(4):442-448
164. Zhao W, Agnus G, Derycke V, Filoramo A, Bourgoin J, Gamrat C. Nanotube devices based crossbar architecture: Toward neuromorphic computing. Nanotechnology. 2010;21(17):175202
165. Feng P, Xu W, Yang Y, Wan X, Shi Y, Wan Q, et al. Printed neuromorphic devices based on printed carbon nanotube thin-film transistors. Advanced Functional Materials. 2016;10
166. Chen C, Kim K, Truong Q, Shen A, Li Z, Chen Y. A spiking neuron circuit based on a carbon nanotube transistor. Nanotechnology. 2012;23(27):275202
167. Joshi J, Parker A, Hsu C. A carbon nanotube spiking cortical neuron with tunable refractory period and spiking duration. In: IEEE Latin American Symposium on Circuits and Systems (LASCAS). 2010
168. Mahvash M, Parker AC. Synaptic variability in a cortical neuromorphic circuit. IEEE Transactions on Neural Networks and Learning Systems. 2013;24(3):397-409
169. Najari M, El-Grour T, Jelliti S, Hakami OM, Al-Kamli A, Can N, et al. Simulation of a spiking neuron circuit using carbon nanotube transistors. In: AIP Conference Proceedings. Vol. 1742(1). AIP Publishing; 2016. p. 030013
170. Kim H, Park J, Kwon M, Lee J, Park B. Silicon-based floating-body synaptic transistor with frequency dependent short- and long-term memories. IEEE Electron Device Letters. 2016;99:1-1
171. Kim H, Cho S, Sun M-C, Park J, Hwang S, Park B-G. Simulation study on silicon-based floating body synaptic transistor with short- and long-term memory functions and its spike timing-dependent plasticity. Journal of Semiconductor Technology and Science. 2016;16(5):657-663
172. Shao F, Yang Y, Zhu LQ, Feng P, Wan Q. Oxide-based synaptic transistors gated by sol-gel silica electrolytes. ACS Applied Materials & Interfaces. 2016;5:3050-3055
173. Shi J, Ha SD, Zhou Y, Schoofs F, Ramanathan S. A correlated nickelate synaptic transistor. Nature Communications. 2013;4
174. Wan CJ, Zhu LQ, Zhou JM, Shi Y, Wan Q. Memory and learning behaviors mimicked in nanogranular SiO2-based proton conductor gated oxide-based synaptic transistors. Nanoscale. 2013;5(21):10194-10199
175. Wan C, Zhu L, Zhou J, Shi Y, Wan Q. Inorganic proton conducting electrolyte coupled oxide-based dendritic transistors for synaptic electronics. Nanoscale. 2014;6. DOI: 10.1039/c3nr05882d
176. Wan C, Zhu L, Liu Y, Shi Y, Wan Q. Laterally coupled synaptic transistors gated by proton conducting sodium alginate films. IEEE Electron Device Letters. 2014;35(6):672-674
177. Wan X, Feng P, Wu GD, Shi Y, Wan Q. Simulation of laterally coupled InGaZnO4-based electric-double-layer transistors for synaptic electronics. IEEE Electron Device Letters. 2015;36(2):204-206
178. Wan X, Yang Y, Feng P, Shi Y, Wan Q. Short-term plasticity and synaptic filtering emulated in electrolyte gated IGZO transistors. IEEE Electron Device Letters. 2016;99:1-1
179. Wan C, Liu YH, Zhu LQ, Feng P, Shi Y, Wan Q. Short-term synaptic plasticity regulation in solution-gated indium-gallium-zinc-oxide electric-double-layer transistors. ACS Applied Materials & Interfaces. 2016;15:9762-9768
180. Wang J, Li Y, Yin C, Yang Y, Ren T-L. Long-term depression mimicked in an IGZO-based synaptic transistor. IEEE Electron Device Letters. 2016;38:191-194
181. Zhou J, Liu N, Zhu L, Shi Y, Wan Q. Energy-efficient artificial synapses based on flexible IGZO electric-double-layer transistors. IEEE Electron Device Letters. 2015;36(2):198-200
182. Zhu LQ, Xiao H, Liu YH, Wan CJ, Shi Y, Wan Q. Multigate synergic modulation in laterally coupled synaptic transistors. Applied Physics Letters. 2015;107(14):143502
183. Zhu LQ, Wan CJ, Gao PQ, Liu YH, Xiao H, Ye JC, et al. Flexible proton-gated oxide synaptic transistors on Si membrane. ACS Applied Materials & Interfaces. 2016;8(33):21770-21775
184. Zhou J, Wan C, Zhu L, Shi Y, Wan Q. Synaptic behaviors mimicked in flexible oxide-based transistors on plastic substrates. IEEE Electron Device Letters. 2013;34(11):1433-1435
185. Gkoupidenis P, Schaefer N, Garlan B, Malliaras GG. Neuromorphic functions in PEDOT:PSS organic electrochemical transistors. Advanced Materials. 2015;27(44):7176-7180
186. Gkoupidenis P, Schaefer N, Strakosas X, Fairfield JA, Malliaras GG. Synaptic plasticity functions in an organic electrochemical transistor. Applied Physics Letters. 2015;107(26):263302
187. Qian C, Sun J, Kong L-A, Gou G, Yang J, He J, et al. Artificial synapses based on in-plane gate organic electrochemical transistors. ACS Applied Materials & Interfaces. 2016;8(39):26169-26175
188. Wan CJ, Zhu LQ, Wan X, Shi Y, Wan Q. Organic/inorganic hybrid synaptic transistors gated by proton conducting methylcellulose films. Applied Physics Letters. 2016;108(4):043508
189. Wood R, Bruce I, Mascher P. Modeling of spiking analog neural circuits with Hebbian learning, using amorphous semiconductor thin film transistors with silicon oxide nitride semiconductor split gates. In: Artificial Neural Networks and Machine Learning - ICANN 2012. Springer; 2012. pp. 89-96
190. Xu W, Min S-Y, Hwang H, Lee T-W. Organic core-sheath nanowire artificial synapses with femtojoule energy consumption. Science Advances. 2016;2(6):e1501326
191. Alibart F, Pleutin S, Guérin D, Novembre C, Lenfant S, Lmimouni K, et al. An organic nanoparticle transistor behaving as a biological spiking synapse. Advanced Functional Materials. 2010;20(2):330-337
192. Alibart F, Pleutin S, Bichler O, Gamrat C, Serrano-Gotarredona T, Linares-Barranco B, et al. A memristive nanoparticle/organic hybrid synapstor for neuroinspired computing. Advanced Functional Materials. 2012;22(3):609-616
193. Bichler O, Zhao W, Alibart F, Pleutin S, Vuillaume D, Gamrat C. Functional model of a nanoparticle organic memory transistor for use as a spiking synapse. IEEE Transactions on Electron Devices. 2010;57(11):3115-3122
194. Kwon K-C, Lee J-S, Kim CG, Park J-G. Biological synapse behavior of nanoparticle organic memory field effect transistor fabricated by curing. Applied Physics Express. 2013;6(6):067001
195. Liu YH, Zhu LQ, Feng P, Shi Y, Wan Q. Freestanding artificial synapses based on laterally proton-coupled transistors on chitosan membranes. Advanced Materials. 2015;27(37):5599-5604
196. Wu G, Zhang J, Wan X, Yang Y, Jiang S. Chitosan-based biopolysaccharide proton conductors for synaptic transistors on paper substrates. Journal of Materials Chemistry C. 2014;2(31):6249-6255
197. Wu G, Wan C, Zhou J, Zhu L, Wan Q. Low-voltage protonic/electronic hybrid indium zinc oxide synaptic transistors on paper substrates. Nanotechnology. 2014;25(9):094001
198. Wu G, Feng P, Wan X, Zhu L, Shi Y, Wan Q. Artificial synaptic devices based on natural chicken albumen coupled electric-double-layer transistors. Scientific Reports. 2016;6
199. Zhou J, Liu Y, Shi Y, Wan Q. Solution-processed chitosan-gated IZO-based transistors for mimicking synaptic plasticity. IEEE Electron Device Letters. 2014;35(2):280-282
200. Schemmel J, Grübl A, Hartmann S, Kononov A, Mayr C, Meier K, et al. Live demonstration: A scaled-down version of the BrainScaleS wafer-scale neuromorphic system. In: Proceedings of the 2012 IEEE International Symposium on Circuits and Systems (ISCAS); 20-23 May 2012; Seoul, Korea
201. Liu D, Yu H, Chai Y. Low-power computing with neuromorphic engineering. Advanced Intelligent Systems. 2021;3:2000150
202. Balaji A, Adiraju P, Kashyap HJ, Das A, Krichmar JL, Dutt ND, et al. PyCARL: A PyNN interface for hardware-software co-simulation of spiking neural network. 2020. arXiv:2003.09696
203. Shi L, Pei J, Deng N, Wang D, Deng L, Wang Y, et al. Development of a neuromorphic computing system. In: Proceedings of the 2015 IEEE International Electron Devices Meeting (IEDM); 7-9 December 2015; Washington, DC, USA
204. Chi P, Li S, Xu C, Zhang T, Zhao J, Liu Y, et al. PRIME: A novel processing-in-memory architecture for neural network computation in ReRAM-based main memory. ACM SIGARCH Computer Architecture News. 2016;44(3):27-39
205. Xia Q, Yang JJ. Memristive crossbar arrays for brain-inspired computing. Nature Materials. 2019;18:309-323
206. Chakraborty I, Jaiswal A, Saha A, Gupta S, Roy K. Pathways to efficient neuromorphic computing with non-volatile memory technologies. Applied Physics Reviews. 2020;7:021308
207. Islam R, Li H, Chen PY, Wan W, Chen HY, Gao B, et al. Device and materials requirements for neuromorphic computing. Journal of Physics D: Applied Physics. 2019;52:113001
208. Chen A. A review of emerging non-volatile memory (NVM) technologies and applications. Solid-State Electronics. 2016;125:25-38
209. Strenz R. Review and outlook on embedded NVM technologies - from evolution to revolution. In: Proceedings of the 2020 IEEE International Memory Workshop (IMW); 17-20 May 2020; Dresden, Germany
210. Burr GW, Sebastian A, Vianello E, Waser R, Parkin S. Emerging materials in neuromorphic computing: Guest editorial. APL Materials. 2020;8:010401
211. Sangwan VK, Hersam MC. Neuromorphic nanoelectronic materials. Nature Nanotechnology. 2020;15:517-528
212. Lv Z, Wang Y, Chen J, Wang J, Zhou Y, Han ST. Semiconductor quantum dots for memories and neuromorphic computing systems. Chemical Reviews. 2020;120:3941-4006
213. Qi M, Zhang X, Yang L, Wang Z, Xu H, Liu W, et al. Intensity-modulated LED achieved through integrating p-GaN/n-ZnO heterojunction with multilevel RRAM. Applied Physics Letters. 2018;113:223503
214. Roychowdhury V, Janes D, Bandyopadhyay S, Wang X. Collective computational activity in self-assembled arrays of quantum dots: A novel neuromorphic architecture for nanoelectronics. IEEE Transactions on Electron Devices. 1996;43:1688-1699
215. Feldmann J, Youngblood N, Wright CD, Bhaskaran H, Pernice WH. All-optical spiking neurosynaptic networks with self-learning capabilities. Nature. 2019;569:208-214
216. Christensen DV et al. 2022 roadmap on neuromorphic computing and engineering. Neuromorphic Computing and Engineering. 2022;2:022501
217. Bi G-Q, Poo M-M. Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. The Journal of Neuroscience. 1998;18(24):10464-10472
218. Wittenberg GM, Wang SS-H. Malleability of spike-timing-dependent plasticity at the CA3–CA1 synapse. The Journal of Neuroscience. 2006;26(24):6610-6617. DOI: 10.1523/JNEUROSCI.5388-05.2006
219. Abbott LF, Nelson SB. Synaptic plasticity: Taming the beast. Nature Neuroscience. 2000;3(Suppl):1178-1183. DOI: 10.1038/81453
220. Gjorgjieva J, Clopath C, Audet J, Pfister J-P. A triplet spike-timing-dependent plasticity model generalizes the Bienenstock–Cooper–Munro rule to higher-order spatiotemporal correlations. Proceedings of the National Academy of Sciences of the United States of America. 2011;108(48):19383-19388. DOI: 10.1073/pnas.1105933108
221. Rachmuth G, Shouval H-Z, Bear MF, Poon C-S. A biophysically-based neuromorphic model of spike rate- and timing-dependent plasticity. Proceedings of the National Academy of Sciences of the United States of America. 2011;108(49):E1266-E1274. DOI: 10.1073/pnas.1106161108
222. Indiveri G, Chicca E, Douglas R. A VLSI array of low-power spiking neurons and bistable synapses with spike-timing dependent plasticity. IEEE Transactions on Neural Networks. 2006;17(1):211-221. DOI: 10.1109/TNN.2005.860850
223. Diorio C, Hasler P, Minch BA, Mead CA. A single-transistor silicon synapse. IEEE Transactions on Electron Devices. 1996;43(11):1972-1980. DOI: 10.1109/16.543035
224. Kuzum D, Jeyasingh RGD, Lee B, Wong H-SP. Nanoelectronic programmable synapses based on phase change materials for brain-inspired computing. Nano Letters. 2012;12(5):2179-2186. DOI: 10.1021/nl201040y
225. Kim S, Ishii M, Lewis S, Perri T, BrightSky M, Kim W, et al. NVM neuromorphic core with 64k-cell (256-by-256) phase change memory synaptic array with on-chip neuron circuits for continuous in-situ learning. IEDM Technical Digest. 2015:443-446. DOI: 10.1109/IEDM.2015.7409716
226. Ambrogio S, Ciocchini N, Laudato M, Milo V, Pirovano A, Fantini P, et al. Unsupervised learning by spike timing dependent plasticity in phase change memory (PCM) synapses. Frontiers in Neuroscience. 2016;10:56. DOI: 10.3389/fnins.2016.00056
227. Ambrogio S, Balatti S, Nardi F, Facchinetti S, Ielmini D. Spike-timing dependent plasticity in a transistor-selected resistive switching memory. Nanotechnology. 2013;24:384012. DOI: 10.1088/0957-4484/24/38/384012
228. Wang Z-Q, Ambrogio S, Balatti S, Ielmini D. A 2-transistor/1-resistor artificial synapse capable of communication and stochastic learning for neuromorphic systems. Frontiers in Neuroscience. 2015;8:438. DOI: 10.3389/fnins.2014.00438
229. Ambrogio S, Balatti S, Milo V, Carboni R, Wang Z, Calderoni A, et al. Neuromorphic learning and recognition with one-transistor-one-resistor synapses and bistable metal oxide RRAM. IEEE Transactions on Electron Devices. 2016;63(4):1508-1515. DOI: 10.1109/TED.2016.2526647
230. Wang Z, Joshi S, Savel'ev SE, Jiang H, Midya R, Lin P, et al. Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing. Nature Materials. 2017;16:101-108. DOI: 10.1038/nmat4756
231. Ohno T, Hasegawa T, Tsuruoka T, Terabe K, Gimzewski JK, Aono M. Short-term plasticity and long-term potentiation mimicked in single inorganic synapses. Nature Materials. 2011;10(8):591-595. DOI: 10.1038/nmat3054
232. Ielmini D. Brain-inspired computing with resistive switching memory (RRAM): Devices, synapses and neural networks. Microelectronic Engineering. 2018;190:44-53. DOI: 10.1016/j.mee.2018.01.009
233. Pedretti G, Milo V, Ambrogio S, Carboni R, Bianchi S, Calderoni A, et al. Memristive neural network for on-line learning and tracking with brain-inspired spike timing dependent plasticity. Scientific Reports. 2017;7:5288. DOI: 10.1038/s41598-017-05480-0
234. Rojas R. Neural Networks: A Systematic Introduction. Springer; 1996
235. Minsky M, Papert S. Perceptrons. MIT Press; 1969
236. Sie C. Memory devices using bistable resistivity in amorphous As-Te-Ge films [Ph.D. thesis]. Ames, IA, USA: Iowa State University; 1969
237. Sie C, Pohm A, Uttecht P, Kao A, Agrawal R. Chalcogenide glass bistable resistivity memory. IEEE Transactions on Magnetics. 1970;MAG-6:592
238. Sie C, Uttecht R, Stevenson H, Griener J, Raghavan K. Electric-field induced filament formation in As-Te-Ge glass. Journal of Non-Crystalline Solids. 1970;2:358-370
239. Suri M, Bichler O, Querlioz D, Cueto O, Perniola L, Sousa V, et al. Phase change memory as synapse for ultra-dense neuromorphic systems: Application to complex visual pattern extraction. In: Proceedings of the 2011 IEEE International Electron Devices Meeting; 5-7 December 2011; Washington, DC, USA. p. 4
240. Shelby RM, Burr GW, Boybat I, Di Nolfo C. Non-volatile memory as hardware synapse in neuromorphic computing: A first look at reliability issues. In: Proceedings of the IEEE International Reliability Physics Symposium; 19-23 April 2015; Monterey, CA, USA. pp. 6A-61A
241. Yu S. Neuro-inspired computing with emerging nonvolatile memorys. Proceedings of the IEEE. 2018;106:260-285
242. Suri M, Garbin D, Bichler O, Querlioz D, Vuillaume D, Gamrat C, et al. Impact of PCM resistance-drift in neuromorphic systems and drift-mitigation strategy. In: Proceedings of the IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH); 15-17 July 2013; Brooklyn, NY, USA. pp. 140-145
243. Li J, Luan B, Lam C. Resistance drift in phase change memory. In: Proceedings of the IEEE International Reliability Physics Symposium (IRPS); 15-19 April 2012; Anaheim, CA, USA. p. 6C-1
244. Ielmini D, Lavizzari S, Sharma D, Lacaita AL. Physical interpretation, modeling and impact on phase change memory (PCM) reliability of resistance drift due to chalcogenide structural relaxation. In: Proceedings of the IEEE International Electron Devices Meeting; 10-12 December 2007; Washington, DC, USA. pp. 939-942
245. Ielmini D, Sharma D, Lavizzari S, Lacaita AL. Reliability impact of chalcogenide-structure relaxation in phase-change memory (PCM) cells, Part I: Experimental study. IEEE Transactions on Electron Devices. 2009;56:1070-1077
246. Boniardi M, Ielmini D. Physical origin of the resistance drift exponent in amorphous phase change materials. Applied Physics Letters. 2011;98:243506
247. Pirovano A, Lacaita AL, Pellizzer F, Kostylev SA, Benvenuti A, Bez R. Low-field amorphous state resistance and threshold voltage drift in chalcogenide materials. IEEE Transactions on Electron Devices. 2004;51:714-719
248. Available from: https://www.lumenci.com/post/reram
249. LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278-2324. DOI: 10.1109/5.726791
250. Wu X, Li J, Zhang L, Speight E, Rajamony R, Xie Y. Hybrid cache architecture with disparate memory technologies. ACM SIGARCH Computer Architecture News. 2009;37:34-45. DOI: 10.1145/1555815.1555761
251. Peng S, Zhang Y, Wang MX, Zhang YG, Zhao W. Magnetic Tunnel Junctions for Spintronics: Principles and Applications. Wiley; 2014. DOI: 10.1002/047134608X.W8231
252. Caporale N, Dan Y. Spike-timing-dependent plasticity: A Hebbian learning rule. Annual Review of Neuroscience. 2008;31:25-46. DOI: 10.1146/annurev.neuro.31.060407.125639
253. Markram H, Gerstner W, Sjöström PJ. A history of spike-timing-dependent plasticity. Frontiers in Synaptic Neuroscience. 2011;3:4. DOI: 10.3389/fnsyn.2011.00004
254. Dan Y, Poo MM. Spike timing-dependent plasticity: From synapse to perception. Physiological Reviews. 2006;86:1033-1048. DOI: 10.1152/physrev.00030.2005
255. Xu Q, Peng J, Shen J, Tang H, Pan G. Deep CovDenseSNN: A hierarchical event-driven dynamic framework with spiking neurons in noisy environment. Neural Networks. 2020;121:512-519. DOI: 10.1016/j.neunet.2019.08.034
256. Qu L, Zhao Z, Wang L, Wang Y. Efficient and hardware-friendly methods to implement competitive learning for spiking neural networks. Neural Computing and Applications. 2020;32(17):13479-13490
257. Mozafari M, Ganjtabesh M, Nowzari-Dalini A, Thorpe SJ, Masquelier T. Combining STDP and reward-modulated STDP in deep convolutional spiking neural networks for digit recognition. 2018. arXiv:1804.00227
258. Lee C, Panda P, Srinivasan G, Roy K. Training deep spiking convolutional neural networks with STDP-based unsupervised pre-training followed by supervised fine-tuning. Frontiers in Neuroscience. 2018;12:435
259. Tavanaei A, Kirby Z, Maida AS. Training spiking ConvNets by STDP and gradient descent. In: Proceedings of the International Joint Conference on Neural Networks. Piscataway, NJ: IEEE; 2018. pp. 1-8
260. Bohte SM, Kok JN, La Poutre H. Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing. 2002;48(1-4):17-37. DOI: 10.1016/S0925-2312(01)00658-0
261. Xu Y, Zeng X, Han L, Yang J. A supervised multi-spike learning algorithm based on gradient descent for spiking neural networks. Neural Networks. 2013;43:99-113. DOI: 10.1016/j.neunet.2013.02.003
262. Wu Y, Deng L, Li G, Zhu J, Shi L. Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience. 2018;12:331
263. Mostafa H. Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems. 2017;29(7):3227-3235
264. Zhou S, Chen Y, Ye Q, Li J. Direct training based spiking convolutional neural networks for object recognition. 2019. arXiv:1909.10837
265. Stromatias E, Soto M, Serrano-Gotarredona T, Linares-Barranco B. An event-driven classifier for spiking neural networks fed with synthetic or dynamic vision sensor data. Frontiers in Neuroscience. 2017;11:350. DOI: 10.3389/fnins.2017.00350
266. Zheng N, Mazumder P. Online supervised learning for hardware-based multilayer spiking neural networks through the modulation of weight-dependent spike-timing-dependent plasticity. IEEE Transactions on Neural Networks and Learning Systems. 2018;29(9):4287-4302. DOI: 10.1109/TNNLS.2017.2761335
267. Sengupta A, Ye Y, Wang R, Liu C, Roy K. Going deeper in spiking neural networks: VGG and residual architectures. 2018. arXiv:1802.02627
268. Hu Y, Tang H, Pan G. Spiking deep residual network. 2018. arXiv:1805.01352
269. Diehl PU, Neil D, Binas J, Cook M, Liu SC, Pfeiffer M. Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing. In: Proceedings of the 2015 International Joint Conference on Neural Networks. Piscataway, NJ: IEEE; 2015. pp. 1-8
270. Pfeiffer M, Pfeil T. Deep learning with spiking neurons: Opportunities and challenges. Frontiers in Neuroscience. 2018;12:774. DOI: 10.3389/fnins.2018.00774
271. Rueckauer B, Lungu IA, Hu Y, Pfeiffer M, Liu SC. Conversion of continuous-valued deep networks to efficient event-driven networks for image classification. Frontiers in Neuroscience. 2017;11:682. DOI: 10.3389/fnins.2017.00682
272. Xu Y, Tang H, Xing J, Li H. Spike trains encoding and threshold rescaling method for deep spiking neural networks. In: Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence. Piscataway, NJ: IEEE; 2017. pp. 1-6
273. Wang Y, Xu Y, Yan R, Tang H. Deep spiking neural networks with binary weights for object recognition. IEEE Transactions on Cognitive and Developmental Systems. 2020;13(3):514-523. DOI: 10.1109/TCDS.2020.2971655
274. Saleh AY, Hameed H, Najib M, Salleh M. A novel hybrid algorithm of differential evolution with evolving spiking neural network for pre-synaptic neurons optimization. International Journal of Advances in Soft Computing and Its Applications. 2014;6(1):1-16
275. Schaffer JD. Evolving spiking neural networks: A novel growth algorithm corrects the teacher. In: Proceedings of the IEEE Symposium on Computational Intelligence for Security and Defense Applications. Piscataway, NJ: IEEE; 2015. pp. 1-8
276. Vazquez RA. Izhikevich neuron model and its application in pattern recognition. Australian Journal of Intelligent Information Processing Systems. 2010;11:35-40
277. Liu Y, Yenamachintala SS, Li P. Energy-efficient FPGA spiking neural accelerators with supervised and unsupervised spike-timing-dependent-plasticity. ACM Journal on Emerging Technologies in Computing Systems. 2019;15(3):1-19
278. Yusuf ZM, Hamed HNA, Yusuf LM, Isa MA. Evolving spiking neural network (ESNN) and harmony search algorithm (HSA) for parameter optimization. In: Proceedings of the Sixth International Conference on Electrical Engineering and Informatics. Piscataway, NJ: IEEE; 2017. pp. 1-6
279. Vázquez RA, Garro BA. Training spiking neurons by means of particle swarm optimization. In: Proceedings of the International Conference in Swarm Intelligence. Berlin: Springer; 2011. pp. 242-249
280. Pavlidis NG, Tasoulis OK, Plagianakos VP, Nikiforidis G, Vrahatis MN. Spiking neural network training using evolutionary algorithms. In: Proceedings of the 2005 IEEE International Joint Conference on Neural Networks. Vol. 4. Piscataway, NJ: IEEE; 2005. pp. 2190-2194. DOI: 10.1109/IJCNN.2005.1556240
281. Sherif FF, Ahmed KS. Geographic classification and identification of SARS-CoV2 from related viral sequences. International Journal of Biology and Biomedical Engineering. 2021;15:254-259
282. Tuchman Y, Mangoma TN, Gkoupidenis P, van de Burgt Y, John RA, Mathews N, et al. Organic neuromorphic devices: Past, present, and future challenges. MRS Bulletin. 2020;45(8):619-630. DOI: 10.1557/mrs.2020.196
283. Sharifshazileh M, Burelo K, Sarnthein J, Indiveri G. An electronic neuromorphic system for real-time detection of high frequency oscillations (HFO) in intracranial EEG. Nature Communications. 2021;12(1):3095. DOI: 10.1038/s41467-021-23342-2
284. Kerman Z, Yu C, An H. Beta oscillation detector design for closed-loop deep brain stimulation of Parkinson's disease with memristive spiking neural networks. In: 2022 23rd International Symposium on Quality Electronic Design (ISQED); Santa Clara, CA, USA. 2022. pp. 1-6. DOI: 10.1109/ISQED54688.2022.9806207
285. Ahmed KS. Wheelchair movement control via human eye blinks. American Journal of Biomedical Engineering. 2011;1(1):27-30
286. Magour AA, Sayed K, Mohamed WA, El Bahy MM. Locked-in patients' activities enhancement via brain-computer interface system using neural network. Engineering. 2018;12
287. Sherif FF. Discovering Alzheimer genetic biomarkers using Bayesian networks. Advances in Bioinformatics. 2015;2015:1-8. DOI: 10.1155/2015/639367
288. Sherif FF, Ahmed KS. Unsupervised clustering of SARS-CoV-2 using deep convolutional autoencoder. Journal of Engineering and Applied Science. 2022;69:72. DOI: 10.1186/s44147-022-00125-0
289. Ahmed KS, Mohammed Ali R, Sherif FF. Designing a new fast solution to control isolation rooms in hospitals depending on artificial intelligence decision. Biomedical Signal Processing and Control. 2023;79(1)
290. Greengard S. Neuromorphic chips take shape. Communications of the ACM. 2020;63(8):9-11
