Open access peer-reviewed chapter

ADDC: Automatic Design of Digital Circuit

Written By

Conor Ryan, Michael Tetteh, Jack McEllin, Douglas Mota-Dias, Richard Conway and Enrique Naredo

Reviewed: 08 March 2022 Published: 04 June 2022

DOI: 10.5772/intechopen.104410

From the Edited Volume

Genetic Algorithms

Edited by Sebastián Ventura, José María Luna and José María Moyano

Abstract

Digital circuits are one of the most important enabling technologies in the world today. Powerful tools, such as Hardware Description Languages (HDLs), have evolved over the past several decades to allow designers to operate at high levels of abstraction and expressiveness, rather than at the level of the gates from which circuits are actually constructed. Similarly, highly accurate digital circuit simulators permit designers to test their circuits before committing them to silicon. This is still a highly complex and largely manual task, however, with complex circuits taking months or even years to go from planning to silicon. We show how Grammatical Evolution (GE) can harness the standard tools of silicon design and be used to create a fully automatic circuit design system. Specifically, we use an HDL known as SystemVerilog and Icarus Verilog, a free but powerful simulator, to generate circuits from high-level descriptions. We apply our system to several well-known benchmark problems from the digital circuit literature and demonstrate that GE can successfully evolve functional circuits, including several which have subsequently been rendered in Field Programmable Gate Arrays (FPGAs).

Keywords

  • digital design
  • VLSI design
  • microelectronics design
  • evolvable hardware
  • HDL
  • verilog
  • grammatical evolution

1. Introduction

This chapter describes the application of Evolutionary Computation to the task of digital circuit design. Although many Electronic Design Automation (EDA) tools exist to aid designers, digital circuit design remains a time consuming and expensive task that requires skilled engineers.

The cost of errors in silicon is enormous and this has led to extremely powerful and accurate simulators that designers use to verify their designs before committing them to silicon. These simulators provide a huge opportunity for Evolutionary Computation as they can be used to test individuals.

This chapter gives an overview of how digital integrated circuits are designed and how the tools used to develop them have evolved over the past few decades. These tools, when linked together with GE, produce a system we call the Automatic Design of Digital Circuits (ADDC), which can evolve circuits at much higher levels of abstraction than simple logic gates.

We demonstrate the system on three real-world problems, including one with 2^20 test cases, showing that ADDC successfully evolves solutions in all cases.

2. Background

Digital circuit design began in the 1960s with the arrival of semiconductor transistor-based circuits and the Integrated Circuit (IC). Up until the 2010s, Moore's Law successfully predicted the shrinking of manufacturing technologies, allowing for lower cost, faster speeds and lower power consumption. Over the last decade, however, this shrinking has slowed due to difficulties in the fabrication process. Integrated circuits are extremely common in many products today, and their use is not always obvious.

Integrated circuits come in three varieties: Digital, Analogue and Mixed-Signal. Digital integrated circuits process digital information, often represented using bits, bytes or words. Many of these circuits employ one or more processors (often referred to as cores) together with support logic, memories and I/O interfaces. The microprocessor is a famous example of a digital circuit. Analogue integrated circuits handle continuous-time signals and perform operations such as amplification, analogue filtering and power management. Mixed-Signal integrated circuits contain both analogue and digital circuitry in the same package and use Analogue-to-Digital Converters (ADCs) and Digital-to-Analogue Converters (DACs) to share information between the two domains.

In modern circuit design, signal processing tends to be performed in the digital domain instead of the analogue domain. This is due to the reliability of digital circuitry and the existence of advanced digital algorithms whose performance cannot be obtained with analogue circuitry alone [1]. This move towards digital signal processing has required circuit representations such as Hardware Description Languages (HDLs) to describe these extremely complex circuits. New devices such as Complex Programmable Logic Devices (CPLDs) and Field Programmable Gate Arrays (FPGAs) are increasingly being used due to their ability to replicate the behaviour of these circuits without requiring the fabrication of new chips. The following sections go into more depth on HDLs, the differences between CPLD and FPGA devices, and the digital design flow.

2.1 Hardware description languages

The first modern HDL, the Very High Speed Integrated Circuit Hardware Description Language (VHSIC-HDL), more commonly known as VHDL, was created in 1983. VHDL was developed for the US Department of Defense as part of the VHSIC project. The project was launched in 1980 [2], while the first version of VHDL was released in 1983 by Intermetrics Inc., Texas Instruments and IBM [3, 4]. VHDL is a verbose and strongly-typed language. It grew steadily in popularity, resulting in both logic simulators and logic synthesis tools being developed for it. VHDL was standardised by the IEEE in 1986 [5] with the adoption of VHDL version 7.2, and the standard was finalised in 1987 as IEEE Standard 1076-1987 [5]. VHDL became the first HDL to gain widespread adoption, and it is still in use today.

Another modern HDL developed around this time was Verilog, created by Phil Moorby in 1983 [6] while working for Gateway Design Automation, which was acquired by Cadence Design Systems in 1989 [7]. In comparison to VHDL, Verilog is less verbose and is a weakly-typed language. It was originally designed only for logic simulation, but logic synthesis features were added after the language gained widespread popularity. Verilog-XL, a Verilog simulator owned by Cadence, became the de facto Verilog simulator throughout the 1990s. Due to the increasing popularity of VHDL, Cadence created the Open Verilog International (OVI, now known as Accellera) organisation and transferred the rights of Verilog into the public domain [8]. Verilog was eventually standardised in IEEE Standard 1364-1995 [6]. Verilog was later superseded by SystemVerilog in IEEE Standard 1800-2005 [9], which added features for design verification. SystemVerilog is more popular than VHDL today because the language is less verbose and has a structure similar to the C programming language. Table 1 provides a comparison between the VHDL and SystemVerilog hardware description languages.

VHDL | SystemVerilog
Standardised in 1987 | Standardised in 1995 (Verilog) and 2005 (SystemVerilog)
More Verbose | Less Verbose
ADA-like | C-like
Case Insensitive | Case Sensitive
Support for Digital, Analog and Mixed-Signal Designs | Support for Digital Designs only

Table 1.

A comparison between the VHDL and SystemVerilog hardware description languages.

With the introduction of Hardware Description Languages for digital circuit design, two discrete-time simulation methods came into prominence. Both cycle-driven and event-driven simulation methods were orders of magnitude faster than traditional continuous-time simulation methods such as SPICE. One limitation of the cycle-driven simulation method is that the output is only updated on each clock edge. This means it can only be used for synchronous digital designs, but it is much faster than event-driven simulation. It also cannot detect glitches and does not take the timing of the design into consideration.

Event-driven simulation updates the output on any input event meaning it can be used for both synchronous and asynchronous designs. Although still quicker than SPICE methods, it is much slower than cycle-driven simulation. Modern circuit designs utilise techniques such as clock and power gating, allowing parts of a design to be “turned off”. This can help reduce the simulation time of an event-driven simulation, bringing it closer to cycle-driven simulation while providing a more accurate simulation. Table 2 provides a comparison between cycle-driven and event-driven simulation methods. Practically all commercial and open-source simulation tools today utilise one of these methods.

Cycle-driven simulation | Event-driven simulation
Evaluation every clock cycle | Evaluation at minimum time-step or greater
Synchronous Designs only | Synchronous and Asynchronous Designs
Behavioural Simulation only | Behavioural, Functional and Timing Simulations
Faster Simulation Speed | Slower Simulation Speed

Table 2.

A comparison between cycle-driven simulation and event-driven simulation.

2.2 CPLD vs FPGA devices

As digital designs grew in complexity, early Programmable Logic Devices (PLD) such as Programmable Array Logic (PAL) became obsolete as they could only replicate the behaviour of a few hundred logic gates. To address this shortcoming, PALs were soon replaced by Complex Programmable Logic Devices (CPLD). Modern CPLDs are able to replicate the behaviour of hundreds of thousands of logic gates. One advantage of CPLD devices is that they use non-volatile memory to store their configuration. As a result, their logic is already configured at power-up. This makes them ideal devices for systems where the logic is required to be ready for initialisation, such as glue logic for circuits.

Figure 1 shows the internal structure of a CPLD. Its logic blocks consist of programmable PAL blocks. The inputs can be connected to different AND gates using programmable fuses. The OR gate connections are fixed and cannot be reconfigured. Although less configurable than a PLA (which contains both programmable AND and OR planes), this reduction in complexity makes PAL blocks cheaper to manufacture. In order for PAL blocks to implement sequential designs, a D flip-flop can be used to store the state of the output. CPLDs can connect multiple logic blocks together using the programmable interconnection matrix in order to implement more complex designs.

Figure 1.

Structure of a Complex Programmable Logic Device (CPLD) and Programmable Array Logic (PAL) block. The programmable AND plane and the fixed OR plane are shown on the right.
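
As a rough sketch of the logic a single PAL macrocell realises (the module and signal names here are hypothetical, not taken from any datasheet), the programmable AND plane supplies product terms, the fixed OR plane sums them, and an optional D flip-flop registers the result for sequential designs:

```systemverilog
// Sketch of the function realised by one PAL macrocell (hypothetical signals).
// The programmable AND plane provides the product terms, the fixed OR plane
// sums them, and a D flip-flop optionally registers the output.
module pal_macrocell_sketch (
  input  logic clk,
  input  logic a, b, c, d,
  output logic q
);
  logic sop;

  // Sum-of-products selected by the programmable fuses.
  assign sop = (a & b & ~c) | (c & d);

  // Registered output for sequential designs.
  always_ff @(posedge clk)
    q <= sop;
endmodule
```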

While CPLD devices are still used for specific tasks, the most common PLD in use today is the Field Programmable Gate Array (FPGA). These devices are quite similar in structure to the mask-programmed gate array (MPGA) [10], one of the first commercially available gate-array devices. One benefit of FPGAs is that they can be electronically reconfigured, whereas an MPGA's configuration is fixed at the time of manufacture. The first FPGA, the Xilinx XC2064, was invented by Ross Freeman and Bernard Vonderschmitt in 1985 [11]. Early FPGAs were mainly used in the telecommunications and networking sectors, as they were often cheaper than manufacturing custom silicon for these tasks.

Figure 2 shows the internal structure of an FPGA. Similarities can be seen between FPGA and CPLD devices, as both use a programmable interconnect to connect programmable logic blocks. In an FPGA, the Configurable Logic Blocks (CLBs) consist of Look Up Tables (LUTs): small programmable memories whose outputs are defined by their input signals. A multiplexer then selects either the output of the LUT or the output of the D flip-flop, allowing for combinational or sequential logic, similar to the PAL blocks in CPLDs. These blocks can then be connected together using the Switching Blocks (SBs). Modern FPGAs are able to replicate the behaviour of tens of millions of logic gates and contain dedicated blocks such as RAM and multipliers. Today, they are often used in high-performance computing applications due to their performance and efficiency advantages over processor-based implementations.

Figure 2.

Structure of a Field Programmable Gate Array (FPGA). The Configurable Logic Block (CLB) consists of a programmable memory called a Look Up Table (LUT), a D flip-flop to store state and a multiplexer to select the output signal. The programmable Switching Block (SB) is used to connect the CLBs together.
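
To make the CLB description concrete, the sketch below (a simplified, hypothetical model, not vendor code) captures the three elements named in Figure 2: a 4-input LUT held in a 16-bit parameter, a D flip-flop, and an output multiplexer choosing between the combinational and registered paths.

```systemverilog
// Simplified model of one Configurable Logic Block (hypothetical, not vendor code).
module clb_sketch #(
  parameter logic [15:0] LUT_INIT = 16'h8000  // LUT contents, set by the bitstream
) (
  input  logic       clk,
  input  logic [3:0] in,      // LUT inputs
  input  logic       use_ff,  // configuration bit: combinational or registered output
  output logic       out
);
  logic lut_out, ff_out;

  // The LUT is a 16x1 memory addressed by its four inputs
  // (16'h8000 here implements a 4-input AND).
  assign lut_out = LUT_INIT[in];

  // D flip-flop stores state for sequential logic.
  always_ff @(posedge clk)
    ff_out <= lut_out;

  // Output multiplexer selects the combinational or registered path.
  assign out = use_ff ? ff_out : lut_out;
endmodule
```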

2.3 Digital design flow

In digital design, it is often not practical to use gate-level descriptions. Instead, a representation called Register Transfer Level (RTL) is used. RTL allows a high-level model of the design to be represented without having to think about the low-level logic structures required to implement the functionality [12]. This abstraction uses constructs such as logic statements, arithmetic operations and control flow. The difference is similar to that between programming with mnemonics in assembly language and writing the same functionality in C. Using RTL allows the designer to focus on the functionality of the design rather than on its implementation. Figure 3 presents the different stages a digital design must go through in order to convert an RTL representation into an implementable design.

Figure 3.

A flowchart showing the different stages of the logic synthesis/digital design process.

When a high-level language is used for programming, the code written by the programmer must first go through a process called compilation before it can be executed. Similarly, with digital hardware, a design specified in RTL (often using an HDL such as VHDL or SystemVerilog) must go through a process called logic synthesis. This process analyses the given RTL and converts it into a functionally equivalent set of primitives. Primitives are the basic building blocks of any logic design and consist of both combinational and sequential blocks; examples include Boolean logic (NOT/AND/OR/XOR etc.), multiplexers and flip-flops. The output is stored in a file called a net-list, which contains a list of all the primitive blocks and the nets that connect them together.
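
To illustrate the gap that logic synthesis bridges, the sketch below shows the same 1-bit full adder twice: first as an RTL description, and then as a hand-written gate-level netlist built from the kind of primitives a synthesis tool emits (both are illustrative examples, not actual tool output).

```systemverilog
// RTL description: the designer states the behaviour and lets synthesis pick the gates.
module full_adder_rtl (
  input  logic a, b, cin,
  output logic sum, cout
);
  assign {cout, sum} = a + b + cin;
endmodule

// Functionally equivalent gate-level netlist: primitive instances joined by nets,
// similar in spirit to what logic synthesis writes into the net-list file.
module full_adder_gates (
  input  logic a, b, cin,
  output logic sum, cout
);
  logic axb, ab, axb_cin;

  xor g1 (axb, a, b);
  xor g2 (sum, axb, cin);
  and g3 (ab, a, b);
  and g4 (axb_cin, axb, cin);
  or  g5 (cout, ab, axb_cin);
endmodule
```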

The net-list generated by logic synthesis is not optimized and must go through a process known as logic minimization. There can be many parameters to optimize for, such as area usage, power consumption and timing delay, and many different methods can be used to perform this optimization. Early algorithmic methods include Karnaugh maps [13] and the Quine–McCluskey algorithm [14]; however, as designs have become increasingly complex, these exact methods are no longer computationally feasible. This has led to the use of heuristic optimizers such as ESPRESSO [15] and BOOM [16]. When using heuristic optimizers, it cannot be guaranteed that the minimized design is the global minimum, but in practice these methods are sufficient and are widely used in logic synthesis tools today.
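
As a small, hand-worked example of what logic minimization achieves (not the output of any particular tool), the two assignments below are functionally equivalent; an optimizer would replace the first sum-of-products, taken directly from a truth table, with the second.

```systemverilog
// Hand-worked minimization example: f_unopt and f_opt compute the same function.
module minimization_example (
  input  logic a, b, c,
  output logic f_unopt, f_opt
);
  // Unminimized sum-of-products read directly from a truth table:
  // f = a.b.c + a.b.c' + a.b'.c
  assign f_unopt = (a & b & c) | (a & b & ~c) | (a & ~b & c);

  // Minimized equivalent, obtained here with a Karnaugh map: f = a.b + a.c
  assign f_opt = (a & b) | (a & c);
endmodule
```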

Following the logic minimization process, the optimized net-list is in an intermediate representation. A process called Technology Mapping must be performed before the design can be implemented in silicon or on a PLD. For silicon, this intermediate representation is compared against a library of available “building blocks” called leaf-level cells. The mapper then selects and connects these leaf-level cells, rebuilding the circuit. Further optimization may be performed here as the available leaf cells may be able to replace multiple blocks in the intermediate representation. For PLDs, the process is similar. The PAL blocks in CPLDs and the LUTs in FPGAs can be configured to replace one or multiple blocks. These are then connected together using the programmable interconnects. In comparison to logic minimization, the optimizations performed here are much simpler. After the technology mapping process is complete, the designer now has an implementable design. This is often in the GDSII/OASIS format for silicon manufacturing and in a bitstream format for PLDs. The top EDA companies for ASIC digital design tools include Cadence Design Systems, Siemens and Synopsys, with Xilinx and Intel providing FPGA tools.

2.4 Grammatical evolution and circuit design

Grammatical Evolution (GE), the tool used in this chapter and described in detail in the next section, has been used to evolve Verilog circuits, such as the one-bit adder and D-type latch, at the gate level [17]. Notably, the one-bit adder is frequently used as a case study for evolving combinational circuits at the gate level through GE [18, 19, 20]. However, gate-level evolution is unlikely to scale to highly complex circuits evolved from scratch [21]. In response to the scalability issues inherent in gate-level evolution, [22] proposed functional-level evolution through Genetic Algorithms, which uses higher-level functions such as multiplexers, adders and subtractors instead of primitive gates to help reduce the search space. Similarly, [23] evolved a 3-bit multiplier using only binary multiplexers. 9-input and 25-input median approximate circuits have also been designed at the functional level through Cartesian GP [24]. We address the scalability concern by performing circuit evolution through GE at a more abstract level, RTL modelling, where the focus is on describing the circuit's behaviour [25, 26].

3. Grammatical evolution

Biological evolution has been a source of inspiration for many of the techniques that form the field of evolutionary computation (EC), which has been used to address a wide range of problem domains, from the molecular to the astronomical. One of the most successful evolutionary techniques is Genetic Programming (GP), introduced by John R. Koza in his book "Genetic Programming—On the Programming of Computers by Means of Natural Selection" [27], which mimics natural selection in an iterative way to find an optimal (best) solution. Algorithm 1 details the steps required to implement a standard GP. A survey of the different GP techniques currently available in the literature is beyond the scope of this work; the interested reader can find a comprehensive review of the various aspects and techniques of GP and their categorisation in [28].

Grammatical evolution (GE) is an evolutionary computation and, more specifically, a genetic programming (GP) technique [29] that addresses the closure issue of Koza-style GP, which effectively confines GP to single-type problems. This is achieved through the use of a grammar, generally in Backus-Naur Form (BNF) [29, 30], or Attribute Grammar (AG) [31, 32, 33, 34].

The GE system shown in Figure 4 automatically generates programs using three main components: (i) grammar; (ii) cost function; and (iii) search engine. The grammar describes the program’s syntax, the cost function evaluates the quality of each program, and the search engine, typically a GA, searches within the program space defined by the grammar.

Figure 4.

The GE system uses a search engine (typically a GA) to generate solutions for a given problem by recombining the genetic material (genotype), which is mapped onto programs (phenotype) according to a language specification (interpreter/compiler).

In GE, a typical representation for an individual is a binary string grouped into codons (e.g. 8 bits). The linear representation of the genome allows the application of genetic operators such as crossover and mutation in the manner of a typical GA, unlike tree-based GP.

3.1 Initialisation

In GP, the standard initialisation is the ramped-half-and-half (RHH) technique, introduced in [27]. In order to ensure diversity in the population, GP individuals, typically represented as trees, are created with different depths. The RHH technique uses two methods to create a tree: full and grow, typically with a probability of 0.5 of selecting either method for a particular individual. The full method creates trees with full branches at the maximum specified depth, whereas the grow method creates trees with branches of varying length and depth, up to the maximum allowed.

Sensible Initialisation (SI) is an adaptation of the ramped-half-and-half initialisation routine of GP [35]. SI requires the grammar to be labelled, indicating whether each production rule is recursive or non-recursive and the minimum derivation tree depth needed to fully expand it. SI also requires a maximum tree depth to be specified in advance. Similar to ramped-half-and-half in GP, SI applies both the grow and full methods. When applying the grow method, any production can be selected, while the full method chooses only recursive productions. Both methods are subject to the constraints of not exceeding the maximum specified tree depth and of having enough remaining depth budget to fully expand all non-terminals to terminals.

3.2 GE operators

Generally, GE uses a one-point crossover as it has been shown to be effective [36]. In crossover, two individuals are selected as parents and a single crossover point within each parent’s genome is randomly chosen, dividing the genome into two halves: left and right sub-genomes. The right sub-genomes of both parents are swapped to create two offspring. However, crossover points that lie within non-coding regions (unused codon(s) from the mapping step) may not be so useful. As a result, a variant of one-point crossover known as effective crossover, which constrains the selection of the crossover point to be within the effective length of an individual’s genome is preferred. Point mutation is a commonly used mutation operator in GE. Each bit within the binary string genome is mutated or flipped using the specified mutation probability. However, neutral mutations can occur whereby mutated codons select the same productions as the original codon; as a result, the corresponding phenotype remains the same.

3.3 GE example

To illustrate the application of GE, we first explain the evolutionary process using a mathematical optimisation problem as a case study.

GE begins with the start symbol of the grammar, and the codons are then used to select and apply the grammar production rules to build a program. This mapping process is illustrated in Figure 5 with a simple example, where the production rules in the grammar contain a set of user-defined functions: max(a,b), min(a,b), addition(a,b), subtraction(a,b), multiplication(a,b), division(a,b), const, and X, which is a value (or a vector) sampled from the independent variable.

Figure 5.

Example of a GE genotype-phenotype mapping process for the Iris dataset, where the binary genotype is grouped into codons (e.g. 8 bits; red & blue), transcribed into an integer string, then used to select production rules from a predefined grammar (BNF-Grammar), and finally translated into a sequence of rules to build a solution (phenotype).

The production rules for each non-terminal are indexed starting from 0 and, when selecting a production rule (starting with the left-most non-terminal of the developing program), the next codon value in the genome is read and interpreted using the formula p = c % r, where p is the index of the selected production rule, c is the current codon value, % is the modulus operator, and r is the number of production rules for the left-most non-terminal.
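
For example, assuming (hypothetically) that the left-most non-terminal has r = 4 production rules and the next codon value is c = 203, then p = 203 % 4 = 3, so production rule 3 is selected; if the following codon value is 54, then 54 % 4 = 2 and production rule 2 is applied to the next left-most non-terminal.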

If the end of the genome is reached before all non-terminal symbols have been replaced, a wrapping process is used to continue reading codons from the beginning of the genome. The mapping process stops when all of the non-terminal symbols have been replaced, yielding a valid program. If the mapping still fails to replace all of the non-terminal symbols after a maximum number of iterations, the individual is considered invalid and is penalised with the lowest possible fitness.

4. ADDC

ADDC is an evolutionary HDL circuit design framework driven mainly by GE. ADDC requires a grammar and a testbench as inputs, for circuit evolution and verification respectively. The designed grammar must be BNF compliant and must satisfy the grammar sufficiency property; that is, the grammar must contain all the building blocks required to potentially evolve an optimal circuit. ADDC is technology agnostic and easily configurable, as the choice of HDL and simulator is left to the user. Figure 6 illustrates ADDC's design flow for the functional evolution of circuits.

Figure 6.

ADDC Functional Circuit Evolution Overview.

During the initial phase of the circuit design process, ADDC creates an initial population of circuit designs using a suitable GE initialisation routine such as sensible initialisation. These individuals then undergo fitness evaluation, which entails a number of steps. First, the genotype (genome) of each individual is translated to an HDL (SystemVerilog in this work) circuit design (phenotype) by the GE mapper, using the grammar designed for the circuit. Functional simulation of each circuit then takes place, assuming the circuit design is valid. For these experiments, Icarus Verilog, a lightweight and open-source Verilog simulator, is used. The simulator uses a testbench, which must be provided by the user for circuit verification. A testbench is a verification description, written in the same HDL as the circuit, that ensures the device under test (DUT), here the evolved circuit, meets its functional and timing requirements. A typical testbench uses test vectors, which contain circuit inputs and their respective expected outputs; for example, test vectors for combinational circuits are usually created from their truth tables. The simulator drives these inputs through the DUT and checks whether the circuit output matches the expected output. The number of passed cases out of the total number of cases is the fitness value of the circuit. After all circuits have been evaluated, ADDC returns the best circuit design if the specified termination criterion is satisfied; otherwise, the evolution of circuit designs continues.
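
The sketch below shows the general shape of such a testbench for a small combinational DUT; the module and port names, the vector file and the per-vector fitness counting are all assumptions for illustration rather than the actual testbenches used in these experiments (where, for instance, the Hamming encoder fitness is counted per output bit).

```systemverilog
// Hypothetical testbench sketch for a 4-bit-in, 7-bit-out combinational DUT.
// Module name, port names and the vector file are illustrative only.
module tb;
  logic [3:0]  data_in;
  logic [6:0]  code_out, expected;
  int          passed = 0;
  logic [10:0] vectors [0:15];   // each entry: {data_in[3:0], expected[6:0]}

  // Device under test: the evolved circuit produced by the GE mapper.
  encoder dut (.data_in(data_in), .code_out(code_out));

  initial begin
    $readmemb("vectors.txt", vectors);          // truth-table-derived test vectors
    for (int i = 0; i < 16; i++) begin
      {data_in, expected} = vectors[i];
      #1;                                       // let combinational logic settle
      if (code_out === expected) passed++;      // count passed cases
    end
    // ADDC reads the pass count back from the simulator output as the fitness.
    $display("PASSED %0d of 16", passed);
    $finish;
  end
endmodule
```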

The next phase is reproduction, where individuals with either a good overall fitness score or the best performance on certain cases are selected for crossover and mutation to create a new population of circuit designs. Lexicase selection performs well on circuit design benchmarks [25, 26] and is therefore used as the selection routine. Depending on the genetic algorithm (GA) of choice (for example, steady-state or generational GAs), events such as replacement or elitism may also take place in creating the new population. The new population then undergoes fitness evaluation in the same manner as described above. The process continues until the termination criterion is satisfied, and the best circuit design is returned as the solution.

5. Experiments

Three circuit benchmark problems are considered, namely the Hamming Code (7,4) Encoder (the corresponding decoder was evolved in [25]), the Seven Segment Display [25, 37] and the 16-to-4 Multiplexer [25, 38, 39]. These problems are not only standard benchmarks in the circuit design literature, but are also used in real-world applications: for example, Hamming codes, seven segment displays and multiplexers are used in satellite communication, digital calculators and telephone networks respectively.

5.1 Benchmark problems

5.1.1 Hamming code (7,4) encoder

Hamming codes are linear error-correcting codes capable of detecting up to two errors, but only capable of correcting a single error. They belong to a category of codes referred to as Linear Block Codes. A Hamming Code (7,4) Encoder encodes a 4-bit data word into a 7-bit code word prior to data transmission by generating three parity bits and appending them to the data word.

The structure of the code words generated by Hamming codes can be classified into two categories: systematic and non-systematic encodings. In systematic encoding the data word and the parity bits occupy separate, contiguous positions within the code word, while in non-systematic encoding the data bits and parity bits are interspersed. We adopt systematic encoding, as it makes it easier to separate the data word from the parity bits.

The grammar designed for evolving the Hamming Code (7,4) Encoder is shown in Figure 7. The circuit's interface is defined using the beginmodule rule. The grammar uses four bitwise operators and a bitwise negation operator in Verilog, defined using the bitwiseop and bitwiseneg rules respectively. The Hamming Code (7,4) Encoder generates the three parity bits stated earlier, and these are defined using the paritybit1, paritybit2 and paritybit3 rules. The expr rule is used by the paritybit rules to evolve variable-length expressions that generate the correct parity bits. The grammar also features an important Verilog construct, the always procedural block, defined using the alwaysblock rule; it behaves similarly to an infinite loop and is triggered by a change in any of the signals it reads (an @* sensitivity list) to evaluate the statements within its scope.

Figure 7.

Hamming code (7,4) encoder grammar.
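
For reference, a hand-written systematic Hamming (7,4) encoder of the kind this grammar can express might look like the sketch below; the port names, bit ordering and the particular parity equations follow one common convention and are not taken from the evolved solution in Figure 15.

```systemverilog
// Hand-written systematic Hamming (7,4) encoder sketch (one common convention;
// not the evolved circuit of Figure 15).
module hamming74_encoder (
  input  logic [3:0] data_in,   // data word
  output logic [6:0] code_out   // {data word, parity bits}
);
  logic p1, p2, p3;

  // Parity bits built from bitwise XORs inside an always block,
  // mirroring the constructs available in the grammar of Figure 7.
  always @(*) begin
    p1 = data_in[0] ^ data_in[1] ^ data_in[3];
    p2 = data_in[0] ^ data_in[2] ^ data_in[3];
    p3 = data_in[1] ^ data_in[2] ^ data_in[3];
  end

  // Systematic layout: data bits and parity bits kept in separate fields.
  assign code_out = {data_in, p3, p2, p1};
endmodule
```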

5.1.2 Seven segment display

A Seven Segment Display is an electronic device used for displaying decimal numerals. It is also capable of displaying letters, though some letters such as K, X and Z are difficult to recognise on the device. The specification for the Seven Segment Display considered in this work supports the decimal numerals (0-9) and the letters A-F. These numbers and characters are encoded as a 4-bit binary number (0000–1111), referred to as binary coded decimal (BCD), which is sent as input to the Seven Segment Display. The 4-bit binary numbers 0000 to 1001 encode the decimal numbers 0 to 9 respectively, while 1010 to 1111 encode the letters A-F. Upon receipt, the Seven Segment Display generates a 7-bit binary number (each bit corresponding to the ON/OFF state of a segment) to turn on the appropriate LEDs to display the digit/letter. Figure 8 shows the grammar designed to evolve the Seven Segment Display. The BCD input and the 7-bit output are defined by the bcd and sevensegment rules respectively. The grammar uses a switch-case construct, defined using the switchcase rule, as well as the always procedural block (always). beginmodule defines the circuit interface of the Seven Segment Display.

Figure 8.

Seven segment display grammar.
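
A hand-written fragment of such a decoder, using the same switch-case and always constructs, is sketched below; the segment ordering and polarity (here {a..g}, active high) are assumptions, only a few of the sixteen cases are shown, and it is not the evolved solution of Figure 14.

```systemverilog
// BCD-to-seven-segment decoder sketch (segment order {a,b,c,d,e,f,g}, active high,
// is an assumption; only a few of the 16 cases are shown).
module seven_seg_decoder (
  input  logic [3:0] bcd,
  output logic [6:0] segments
);
  always @(*) begin
    case (bcd)
      4'h0:    segments = 7'b1111110;  // 0
      4'h1:    segments = 7'b0110000;  // 1
      4'hA:    segments = 7'b1110111;  // A
      // ... remaining digits and letters handled similarly ...
      default: segments = 7'b0000000;  // all segments off
    endcase
  end
endmodule
```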

5.1.3 16-to-4 multiplexer

A multiplexer is a multiple-input single-output device that accepts data (data lines) and an address (select lines) as inputs and uses the address to select the corresponding data line to be transmitted. The 16-to-4 multiplexer has 16 data lines and 4 select lines.

Figure 9 shows the grammar designed to evolve the multiplexer. Similar to the Seven Segment Display grammar, the 16-to-4 Multiplexer grammar also uses the always procedural block. However, here an if-else (ifelse) construct is used, although the switch-case construct would also be suitable in this context. The addresses used to select a data line as output are defined using the address rule. The address is used by the ifelse rule to generate a conditional statement (cond) that determines the data line to select. The dataindex rule defines the indexes of the 16-bit data input, which are used by the databit rule to select a data line. beginmodule defines the circuit interface of the 16-to-4 Multiplexer.

Figure 9.

16-to-4 multiplexer grammar.
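
A hand-written equivalent in the same if-else style is sketched below (port names are illustrative and only a few of the sixteen branches are shown); each condition tests the 4-bit address and forwards the corresponding bit of the 16-bit data input.

```systemverilog
// Hand-written 16-to-4 multiplexer sketch in the if-else style of the grammar in
// Figure 9 (port names illustrative; branches for addresses 3..14 elided).
module mux_16to4 (
  input  logic [15:0] data,
  input  logic [3:0]  sel,
  output logic        out
);
  always @(*) begin
    if      (sel == 4'd0) out = data[0];
    else if (sel == 4'd1) out = data[1];
    else if (sel == 4'd2) out = data[2];
    // ... one branch per remaining address up to 4'd14 ...
    else                  out = data[15];
  end
endmodule
```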

5.2 Evolutionary parameters

The parameters used for running the experiments are shown in Table 3. The number of generations and the population size were selected based on preliminary experiments. The generation counts used for the preliminary experiments were 50, 100 and 200; the population sizes were 100, 200, 500, 1000 and 2000, with 5 independent runs conducted per problem. The generation count and population size for the actual experiments were chosen from the setups in which the majority of runs achieved a mean best fitness in the final generation within the fourth quartile of the maximum fitness. The other parameters remain the same as those used in [25, 26].

Parameter | Value
Initialization | Sensible Initialization
No of generations | 50 for Hamming Code (7,4) Encoder; 100 for Seven Segment Display & 16-to-4 Multiplexer
Mutation rate | 0.01
Crossover rate | 0.8
Replacement rate | 0.5
No of independent runs | 50
Population | 1,000
Selection | Lexicase Parent Selection

Table 3.

Experimental run parameters.

50 generations were used for evolving the Hamming Code (7,4) Encoder, while 100 generations were used for each of the Seven Segment Display and 16-to-4 Multiplexer designs, as preliminary results revealed that these problems were more challenging to evolve than the Hamming Code (7,4) Encoder. All other parameters remain the same for all benchmark problems.

5.3 Training and testing

The number of training and testing cases are tabulated in Table 4.

Benchmark problem | No of training cases | No of testing cases
Hamming Code (7,4) Encoder | 112 | None
Seven Segment Display (with A-F letter representations) | 16 | None
16-to-4 Multiplexer | 4,100 | 5,000

Table 4.

Number of training and testing cases.

The Hamming Code (7,4) Encoder and the Seven Segment Display each have only 16 input cases. However, for the Hamming Code (7,4) Encoder, every correct bit in each bit position of the code word counts towards the total fitness score of a candidate circuit, giving a total of 112 (7×16) cases. Given that the Hamming Code (7,4) Encoder and Seven Segment Display have so few cases, it is feasible to perform exhaustive testing during training, leaving no cases for testing, as shown in Table 4.

On the other hand, the 16-to-4 Multiplexer has 2^20 (1,048,576) cases, making exhaustive testing infeasible. As a result, we uniformly sample 4,100 and 5,000 cases for training and testing respectively, as shown in Table 4.

5.4 Results and discussions

The results obtained from the experiments demonstrate that ADDC is well suited to evolving digital circuit designs, due to the use of GE and an HDL, which permits designs to be expressed at a more abstract level. The evolutionary performance for the Hamming Code (7,4) Encoder, Seven Segment Display and 16-to-4 Multiplexer experiments described in Section 5.1 is visualized in Figures 10-12 respectively. The success rate for each benchmark problem is tabulated in Table 5. A representative solution for each circuit benchmark problem is shown in Figures 13-15 in the Appendix. The advantages and disadvantages of the proposed approach are discussed in Section 5.7.

Figure 10.

Mean best and mean average across runs for hamming code (7,4) encoder.

Figure 11.

Mean best and mean average across runs for seven segment display.

Figure 12.

Mean best and mean average across runs for 16-to-4 multiplexer.

Benchmark problem | Success rate
Hamming Code (7,4) Encoder | 50/50
Seven Segment Display (with A-F letter representations) | 30/50
16-to-4 Multiplexer | 43/50

Table 5.

Success rate for benchmark problems.

5.5 Success rate

A successful run is a single independent evolutionary run that evolved an optimal circuit for the target problem. Fifty independent runs were conducted for each of the three benchmark problems. The success rate is the number of successful runs divided by the total number of evolutionary runs (i.e. 50), as tabulated in Table 5. A 100% success rate was attained for the Hamming Code (7,4) Encoder, while the Seven Segment Display and 16-to-4 Multiplexer obtained 60% and 86% success rates, respectively.

5.6 Evolutionary performance

The evolutionary performance as evolution progressed for the Hamming Code (7,4) Encoder, Seven Segment Display and 16-to-4 Multiplexer is shown in Figures 10-12 respectively.

The red line represents the mean best fitness per generation across the 50 independent runs conducted, while the black line represents the mean average fitness. Also plotted are error bars representing the standard error. The error bars are short to non-existent, indicating low variability in fitness across runs. Furthermore, all three plots reveal a steady and progressive increase in fitness as evolution progressed, indicating that the evolutionary search continuously moves towards regions of the solution space where fitter individuals are located. For the Hamming Code (7,4) Encoder and 16-to-4 Multiplexer problems, individuals that solve more than 50% of the test cases are discovered in the initial generations, while for the Seven Segment Display the initial individuals solve 25% of the test cases on average.

5.7 Advantages and disadvantages of proposed approach

The proposed approach has several advantages. First, evolved circuit designs are considerably more interpretable than gate-level designs. This is due to the high level of abstraction at which these designs are evolved, which focuses on circuit behaviour as opposed to gate-level structure; gate-level design approaches are challenging to scale to complex circuits [40]. The use of constructs such as if-else, switch-case, always procedural blocks and bitwise operators makes it easier for humans to understand the behaviour of a circuit and, if required, to make manual modifications. Second, verification of circuits is easier, as the circuit behaviour is easier to interpret, enabling robust testbenches to be written.

Third, like any other methodology, the approach has a few disadvantages. The use of an HDL requires the user to have technical knowledge of the HDL of choice, in particular how to design grammars free of syntax errors and modelling errors. Syntax errors are easier to find and fix, as most simulators will report such errors at the functional simulation phase. Modelling errors are more challenging, as they only become apparent during the synthesis phase (the conversion of RTL or high-level designs to a gate-level representation) when the designed grammars do not adhere to the synthesis guidelines of the HDL of choice. For example, the fully functional representative solutions for the 16-to-4 Multiplexer and Seven Segment Display shown in Figures 13 and 14 respectively may not be directly synthesizable (depending on the synthesis tool), as the case statements and if conditions are not mutually exclusive. Some of these errors can be fixed by defining new rules, modifying existing rules, or using attribute grammars to impose the necessary constraints. The redundant logic can be removed in a number of ways: manually by a hardware designer, by a SystemVerilog/Verilog redundant-logic removal pass, or by using unique case statements (for switch-case constructs).

The choice of operators used for evolving circuits is also key, as inappropriate operators have been shown to increase circuit simulation time [26]. Furthermore, some circuit designs may contain redundant blocks of code which impede interpretability, as observed in the representative best solutions for the 16-to-4 Multiplexer and Seven Segment Display shown in Figures 13 and 14 respectively in the Appendix. Only 16 of the if conditions and case statements are valid in the 16-to-4 Multiplexer and Seven Segment Display representative circuit designs respectively.
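
As an illustration of the unique case fix mentioned above, SystemVerilog's unique qualifier (a standard language feature) asks the simulator and synthesis tool to check that exactly one case item matches, so duplicated or overlapping items of the kind seen in the evolved solutions are reported instead of silently becoming redundant priority logic; the fragment below is a minimal sketch, not taken from the evolved circuits.

```systemverilog
// Minimal 'unique case' sketch (not taken from the evolved circuits).
module seven_seg_unique (
  input  logic [3:0] bcd,
  output logic [6:0] segments
);
  always @(*) begin
    unique case (bcd)
      4'h0:    segments = 7'b1111110;
      4'h1:    segments = 7'b0110000;
      // a duplicated or overlapping case item here would be flagged
      default: segments = 7'b0000000;
    endcase
  end
endmodule
```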

6. Conclusions and future directions

We have presented a system for the automated design of digital circuits, ADDC. ADDC is the next logical step in the evolution of Electronic Design Automation and this chapter has described how the history of integrated circuits has led to the confluence of GE, circuit simulators and HDLs. ADDC has been demonstrated on three difficult, real-world problems and was successful on all three of them, including one with 2^20 possible inputs.

The HDLs employed here are hugely powerful and expressive. Digital designers often operate at very high levels of abstraction using IP blocks, which is somewhat similar to using libraries when programming software. IP blocks are typically very powerful and can have complexities equivalent to tens of thousands of gates. Making some of these blocks available to ADDC will dramatically increase its scalability and doing so will simply involve expanding the grammar.

As the problems scale up, the number of test cases can become astronomical, as was the case in this chapter. While in this case we randomly sampled the training and test cases, it is also possible to use a more intelligent approach. Recent work [41] has investigated using clustering to select a representative set of test cases. This will permit us to operate at greater scales with confidence.

A circuit that functions correctly on a simulator is not guaranteed to be fit for purpose when rendered in silicon. This is because there are often other considerations, such as silicon area, power dissipation and delay. Future work will use multi-objective optimisation to include pressure on individuals to adhere to these constraints too.

While some of our automatically generated circuits have been successfully implemented in hardware on a Xilinx Artix-7 FPGA, e.g. an 8-to-1 multiplexer [25], ADDC does not yet include that step in its toolchain; to be fully automated, it will need to do so.

Acknowledgments

The authors are supported by Research Grants 13/RC/2094 and 16/IA/4605 from the Science Foundation Ireland and by Lero, the Irish Software Engineering Research Centre (www.lero.ie). The third author is partially financed by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brazil (CAPES), Finance Code 001, and by the Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (FAPERJ).

Conflict of interest

The authors declare no conflict of interest.

Figure 13.

16-to-4 multiplexer representative solution.

Figure 14.

Seven segment display representative solution.

Figure 15.

Hamming code (7,4) encoder representative solution.

References

  1. 1. Conway T, Conway R, Mulvaney K, Mahony SO, Billon C, Khan MK, et al. Low power application specific processor for ISM band transceiver. In: IET Irish Signals and Systems Conference (ISSC 2010). Cork, Ireland: IET; 2010. pp. 272-277. Available from: https://digital-library.theiet.org/content/conferences/10.1049/cp.2010.0525
  2. 2. Barbe DF. VHSIC Systems and Technology. Computer. 1981;14(2):13-22
  3. 3. Dewey A. VHSIC Hardware Description (VHDL) Development Program. In: 20th Design Automation Conference Proceedings. Miami Beach (FL): IEEE Press; 1983. pp. 625-628
  4. 4. Dewey A, Gadient A. VHDL Motivation. IEEE Design Test of Computers. 1986;3(2):12-16
  5. 5. IEEE Standard VHDL Language Reference Manual. IEEE Std 1076-1987; 1988
  6. 6. IEEE Standard Hardware Description Language Based on the Verilog(R) Hardware Description Language. IEEE Std 1364-1995. 1996
  7. 7. Ap. COMPANY NEWS; Cadence to Buy Gateway Design. The New York Times. 1989. Available from: https://www.nytimes.com/1989/10/05/business/company-news-cadence-to-buy-gateway-design.html [Accessed: January 10, 2022]
  8. 8. Verilog Hardware Description Language Reference Manual (LRM). Version 1.0. Open Verilog International (OVI); 1991
  9. 9. IEEE Standard for SystemVerilog: Unified Hardware Design, Specification and Verification Language. IEEE Std 1800-2005. 2005
  10. 10. Section 8—XC157. In: The Integrated Circuit Data Book. Motorola Semiconductor Products Inc.; 1968. p. 3
  11. 11. Freeman R. Xilinx Inc. Configurable Electrical Circuit Having Configurable Logic Elements and Configurable Interconnects. US4870302; 1989
  12. 12. De la Guia Solaz M, Conway R. Razor based programmable truncated multiply and accumulate, energy-reduction for efficient digital signal processing. IEEE Transactions on Very Large Scale Integration (VLSI) Systems. 2014;23(1):189-193
  13. 13. Karnaugh M. The Map Method for Synthesis of Combinational Logic Circuits. Transactions of the American Institute of Electrical Engineers, Part I: Communication and Electronics. 1953;72(5):593-599
  14. 14. McCluskey EJ. Minimization of Boolean Functions. The Bell System Technical Journal. 1956;35(6):1417-1444
  15. 15. Brayton R, Hachtel G, Hemachandra L, Newton A, Sangiovanni-Vincentelli A. A Comparison of Logic Minimization Strategies Using ESPRESSO: An APL Program Package for Partitioned Logic Minimization. In: Proceedings of the International Symposium on Circuits and Systems. New York (NY): IEEE Press; 1982. pp. 42-48
  16. 16. Hlavicka J, Fiser P. BOOM—A heuristic Boolean minimizer. In: IEEE/ACM International Conference on Computer Aided Design. ICCAD 2001. IEEE/ACM Digest of Technical Papers (Cat. No.01CH37281). San Jose, CA, USA: IEEE; 2001. pp. 439-442
  17. 17. Cullen J. Evolving Digital Circuits in an Industry Standard Hardware Description Language. In: Li X, Kirley M, Zhang M, Green D, Ciesielski V, Abbass H, et al., editors. Simulated Evolution and Learning. Berlin, Heidelberg: Springer; 2008. pp. 514-523
  18. 18. Youssef A, Majeed B, Ryan C. Optimizing combinational logic circuits using Grammatical Evolution. In: 2021 3rd Novel Intelligent and Leading Emerging Sciences Conference (NILES); 2021. pp. 87-92
  19. 19. Karpuzcu UR. Automatic verilog code generation through grammatical evolution. In: Proceedings of the 7th Annual Workshop on Genetic and Evolutionary Computation. GECCO ’05. New York, NY, USA: Association for Computing Machinery; 2005. pp. 394-397. DOI: 10.1145/1102256.1102346
  20. 20. Kratochvil O, Osmera P, Popelka O. Parallel grammatical evolution for circuit optimization. In: Ao SI, Douglas C, Grundfest WS, Burgstone J, editors. Proceedings of the World Congress on Engineering and Computer Science, WCECS ’09. vol. II. International Association of Engineers. San Francisco, USA: Newswood Limited; 2009. pp. 1032–1037. Available from: http://www.iaeng.org/publication/WCECS2009/WCECS2009_pp1032-1037.pdf
  21. 21. Vassilev VK, Miller JE. Scalability problems of digital circuit evolution evolvability and efficient designs. In: Proceedings. The Second NASA/DoD Workshop on Evolvable Hardware; 2000. pp. 55-64
  22. 22. Murakawa M, Yoshizawa S, Kajitani I, Furuya T, Iwata M, Higuchi T. Hardware evolution at function level. In: Voigt HM, Ebeling W, Rechenberg I, Schwefel HP, editors. Parallel Problem Solving from Nature — PPSN IV. Springer: Berlin, Heidelberg; 1996. pp. 62-71
  23. 23. Vassilev VK, Miller JF. Embedding landscape neutrality to build a bridge from the conventional to a more efficient three-bit multiplier circuit. In: Proceedings of Genetic and Evolutionary Computation Conference. Morgan Kaufmann; 2000
  24. 24. Vasicek Z, Sekanina L. Evolutionary approach to approximate digital circuits design. IEEE Transactions on Evolutionary Computation. 2015;19(3):432-444
  25. 25. Ryan C, Tetteh M, Dias D. Behavioural Modelling of Digital Circuits in System Verilog using Grammatical Evolution. In: Proceedings of the 12th International Joint Conference on Computational Intelligence—ECTA,. INSTICC. SciTePress; 2020. pp. 28-39
  26. 26. Tetteh MK, Mota Dias D, Ryan C. Evolution of complex combinational logic circuits using grammatical evolution with systemverilog. In: Hu T, Lourenço N, Medvet E, editors. Genetic Programming. Cham: Springer International Publishing; 2021. pp. 146-161
  27. 27. Koza JR. Genetic Programming—On the programming of Computers by Means of Natural Selection. Complex Adaptive Systems. Cambridge (MA): MIT Press; 1992
  28. 28. Eiben AE, Smith JE. From evolutionary computation to the evolution of things. Nature. 2015;521(7553):476-482. DOI: 10.1038/nature14544
  29. 29. Ryan C, Collins JJ, O’Neill M. Grammatical evolution: Evolving programs for an arbitrary language. In: Banzhaf W, Poli R, Schoenauer M, Fogarty TC, editors. EuroGP. vol. 1391 of Lecture Notes in Computer Science. Berlin: Springer; 1998. pp. 83-96
  30. 30. O’Neill M, Ryan C. Grammatical evolution. IEEE Trans Evolutionary Computation. 2001;5(4):349-358
  31. 31. Patten JV, Ryan C. Attributed grammatical evolution using shared memory spaces and dynamically typed semantic function specification. In: Genetic Programming—18th European Conference, EuroGP 2015, Copenhagen, Denmark, April 8-10, 2015, Proceedings. 2015. pp. 105-112. DOI: 10.1007/978-3-319-16501-1_9
  32. 32. Karim MR, Ryan C. On improving grammatical evolution performance in symbolic regression with attribute grammar. In: Genetic and Evolutionary Computation Conference, GECCO ’14, Vancouver, BC, Canada, July 12-16, 2014, Companion Material Proceedings. 2014. pp. 139-140. DOI: 10.1145/2598394.2598488
  33. 33. Karim MR, Ryan C. A new approach to solving 0-1 multiconstraint knapsack problems using attribute grammar with lookahead. In: Genetic Programming—14th European Conference, EuroGP 2011, Torino, Italy, April 27-29, 2011. Proceedings. 2011. pp. 250-261
  34. 34. Karim MR, Ryan C. Degeneracy reduction or duplicate elimination? an analysis on the performance of attributed grammatical evolution with lookahead to solve the multiple knapsack problem. In: Nature Inspired Cooperative Strategies for Optimization, NICSO 2011, Cluj-Napoca, Romania, 2011. Vol. 387. Berlin: Springer. Studies in computational intelligence; 2011. pp. 247-266. DOI: 10.1007/978-3-642-24094-2_18
  35. 35. Ryan C, Azad R. Sensible initialisation in grammatical evolution. In: GECCO 2003: Proceedings Of The Bird Of A Feather Workshops, Genetic And Evolutionary Computation Conference. Chigaco (IL): AAAI; 2003. pp. 142-145
  36. 36. O’Neill M, Ryan C, Keijzer M, Cattolico M. Crossover in Grammatical Evolution. The Search Continues. Genetic Programming. 2001:337-347
  37. 37. Miller J. Cartesian Genetic Programming. Cartesian Genetic Programming. 2011. pp. 17-34. DOI: 10.1007/978-3-642-17310-3_2
  38. 38. Kruse R, Borgelt C, Klawonn F, Moewes C, Steinbrecher M, Held P. Fundamental evolutionary algorithms. Computational Intelligence: A Methodological Introduction. London (UK): Springer London; 2013. pp. 227-274. DOI: 10.1007/978-1-4471-5013-8_13
  39. 39. Fredivianus N, Prothmann H, Schmeck H. XCS revisited: A novel discovery component for the extended classifier system. Simulated Evolution and Learning. Berlin, Heidelberg: Springer Berlin Heidelberg. Lecture Notes in Computer Science. 2010;6457:289-298. DOI: 10.1007/978-3-642-17298-4_30
  40. 40. Zdenek, V. Bridging the Gap Between Evolvable Hardware and Industry Using Cartesian Genetic Programming. 2018. DOI: 10.1007/978-3-319-67997-6_2
  41. 41. Ryan C, Kshirsagar M, Gupt K, Rosenbauer L, Sullivan J. Hierarchical clustering driven test case selection in digital circuits. In: Proceedings of the 16th International Conference on Software Technologies—ICSOFT,. INSTICC. SciTePress; 2021. pp. 589-596
