
## Abstract

Using the discrete wavelet transform (DWT) in high-speed signal-processing applications demands careful attention to hardware resource availability, latency, and power consumption. In this chapter, the design aspects and performance of multiplierless DWT implementations are analyzed. We present the two key multiplierless approaches, namely the distributed arithmetic algorithm (DAA) and the residue number system (RNS). We aim to estimate the performance requirements and hardware resources for each approach, allowing for the selection of a proper algorithm and the implementation of multi-level DAA- and RNS-based DWT. The design has been implemented and synthesized on a Xilinx Virtex 6 ML605 board, taking advantage of the Virtex 6's embedded block RAMs (BRAMs).

### Keywords

- discrete wavelet transform (DWT)
- distributed arithmetic algorithm (DAA)
- field programmable gate array (FPGA)
- residue number system (RNS)
- multiplierless implementation

## 1. Introduction

The architecture of the embedded platform plays a significant role in ensuring that real-time systems meet their performance requirements. Moreover, software development suffers from increased implementation complexity and a lack of standard methodology for partitioning signal-processing functionality across heterogeneous hardware platforms. For instance, a digital signal processor (DSP) is cheaper, consumes less power, and is easier to develop software for, but it has considerably higher latency and lower throughput compared with field programmable gate arrays (FPGAs) [1]. For high-speed signal-processing (HSP) communication systems, such as cognitive radio (CR) [2, 3] and software-defined radio (SDR) [4], a DSP may fail to capture and process the received data without loss. In addition, implementing applications such as finite impulse response (FIR) filtering, discrete wavelet transform (DWT), or fast Fourier transform (FFT) in software limits the throughput, which is not sufficient to meet the requirements of high-bandwidth, high-performance applications. As a result, HSP systems are enhanced by off-loading complex signal-processing operations to hardware platforms.

Although FPGAs exhibit an increased development time and design complexity, they are preferred to meet high-performance requirements for two reasons. First, they efficiently address signal-processing tasks that can be pipelined. Second, they have the capacity to implement a programmable circuit architecture with flexible computational, memory, speed, and power characteristics. However, an FPGA has its own resources, such as memory, configurable logic blocks (CLBs), and multipliers, that influence the performance and the selected algorithm. As a consequence, the choice of algorithm is determined by hardware resource availability and performance requirements. These factors affect each other and create many trade-offs that need to be balanced.

As an example, the discrete wavelet transform (DWT) [5, 6, 7, 8, 9], a linear signal-processing technique that transforms a signal from the time domain to the wavelet domain [10], employs various techniques for decomposing a signal into an orthonormal time series with different frequency bands. The signal decomposition is performed using a pyramid algorithm (PA) [10, 11] or a recursive pyramid transform (RPT) [12]. While the PA is based on convolutions with quadrature mirror filters, which is infeasible for hardware implementation, the RPT decomposes the signal x[n] into two parts using high- and low-pass filters, which can be implemented using FIR filters [13]. Figure 1 shows a four-tap FIR filter with four multipliers, known as a multiplier-accumulator (MAC) structure. In the MAC structure, the multipliers multiply the input with the filter coefficients h_{i}. It is clear that the direct implementation of an N-tap filter requires N multipliers.
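To make the MAC structure concrete, the following behavioral sketch (an illustrative software model, not the hardware design of Figure 1) computes an N-tap FIR response with one multiply-accumulate per tap; the coefficient values are placeholders:

```python
def fir_mac(x, h):
    """Direct-form FIR: y[n] = sum_k h[k] * x[n - k].

    Each tap contributes one multiply-accumulate (MAC) operation,
    mirroring the N multipliers of the direct hardware implementation.
    """
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

# An impulse input recovers the coefficients (the filter's impulse response).
print(fir_mac([1, 0, 0, 0, 0], [1, 2, 3, 4]))  # [1, 2, 3, 4, 0]
```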

This work focuses exclusively on implementing a one-level multiplierless DWT for a pattern-based cognitive communication system receiver (PBCCS) [8] by means of FPGA. The DWT is required to extract the received signal’s features. Then, the extracted features are fed into a multilayer perceptron (MLP) neural network (NN) to identify the received symbol. The most challenging part is that the NN could consume most of the available multipliers inside the FPGA. As an example, Ntoune et al. [14] have implemented a real-valued time-delay neural network (RVTDNN) and real-valued recurrent neural network (RVRNN) architecture with 600 and 720 multipliers, respectively, while ML605 [15], ZC706 [16], and VC707 [17] have 768, 900, and 2800 multipliers (DSP48Es), respectively.

Although modern FPGAs come with a reasonable number of multipliers, designers prefer to implement a multiplier-free DWT architecture for several reasons. First, a portion of the multipliers can be preserved for tasks such as the pulse-shaping filter and the digital up- and down-converters used at SDR front-ends. Second, in contrast to the DWT, the MLP weights depend on the learning step. Third, the MLP weights could be frequently changed at runtime in an adaptive manner, whereas the DWT coefficients are fixed and known. Therefore, a multiplier-free DWT architecture can simplify the design process and allow the designers to focus on the MLP design.

In this work, we present the 1-D DWT implementation on FPGA by means of memory-based approaches. The aim is to compare different implementations in terms of system performance and resource consumption. We demonstrate the implementation of Daubechies wavelets (DB2, DB4, and DB5) using DAA and RNS approaches. These approaches do not employ explicit multipliers in the design. Because the main focus of this work is on extracting the key features of a signal via DWT, the inverse DWT (IDWT) and high-pass filter coefficients are not considered in this work.

### 1.1. Related work

Implementations of 1-D DWT for signal de-noising, feature extraction, pattern recognition, and compression can be found in [8, 9, 18, 19]. The conventional convolution-based DWT requires massive computation and consumes much area and power, which can be mitigated by using the lifting-based scheme for the DWT introduced by Sweldens [20]. Although the lifting scheme computes the low- and high-pass outputs using fewer components, it may not be well suited to our application, owing to the PBCCS's nature, where the low-frequency components are much more important than the higher ones. Therefore, in this study, 1-D DWT decomposition implemented by means of filter banks is considered. Another advantage of convolution-based DWT over the lifting approach is that it does not require temporary registers to store intermediate results, and with an appropriate design strategy, it can have better area and power efficiency [21].

Rather than the simplest implementation of the FIR filter via multipliers and an adder tree, multiplier-free architectures are used because they result in low-complexity systems and offer high-throughput processing [22]. Fundamentally, there are two techniques for facilitating parallel processing: the distributed arithmetic algorithm (DAA) and the residue number system (RNS). DAA performs the inner product bit-serially with the assistance of a lookup table (LUT) scheme followed by shift-accumulate operations [23, 24]. Several techniques have been proposed to improve the design, such as the partial sum technique [25], a multiple memory bank technique [26, 27], and an LUT-less adder-based approach [28]. The DAA approach has been adopted in many applications, such as the least mean square (LMS) adaptive filter [29] and the square-root raised-cosine filter [30].

On the other hand, RNS is an integer number system in which operations are performed on the residues of a division operation [31, 32, 33]. Eventually, the RNS-based results are converted back to the equivalent binary number format using the Chinese remainder theorem (CRT) [34]. The key advantage of RNS is gained by reducing an arithmetic operation to a set of concurrent, but simple, operations. Several applications, such as digital filters, benefit from RNS implementation, for example, [35, 36, 37]. In addition, RNS has been combined with DAA in one architecture, called RNS-DA [38, 39], which benefits from the advantages of both approaches.

In this chapter, three major 1-D DWT approaches are implemented on FPGA-based platforms and compared in terms of performance and energy requirements. The implementations are compared for different numbers of multipliers, memory consumptions, numbers of taps (N), and levels (L) of the transform to show their advantages. To the best of our knowledge, no detailed comparison of hardware implementations of the three major 1-D DWT designs exists in the literature. This comparison gives significant insight into which implementation is the most suitable for given values of the relevant algorithmic parameters. Although there are many efficient designs in the literature, we did not optimize the number of memories in any approach, so that the comparison remains fair.

The remainder of this chapter is organized as follows. Section 2 presents the preliminary information needed to understand the DWT. It also reviews the theoretical background of DAA and RNS. Section 3 describes the implementation of the discrete wavelet transform. We further show an analytical comparison between these approaches. Section 4 presents the performance results. Finally, the chapter concludes in Section 5.

## 2. Fundamentals and basic concepts

### 2.1. Discrete wavelet transform

The wavelet decomposition mainly depends on orthonormal filter banks. Figure 2 shows a two-channel wavelet structure for decomposition, where x[n] is the input signal, g[n] is the high-pass filter, h[n] is the low-pass filter, and ↓2 is down-sampling by a factor of two. The output of each low-pass filter is fed to the next level, so that each filter creates a series of coefficients (a_{i} and d_{i}), which represent and compact the original signal information.

Mathematically, a signal consists of high- and low-frequency components, as shown in Eq. (1). It shows that the obtained signal can be represented using half of the coefficients, because they are decimated by 2.

The decimated low-pass-filtered output is recursively passed through identical filter banks to add the dimension of varying resolution at every stage. Eqs. (2) and (3) represent the filtering process through a digital low-pass filter h[k] and high-pass filter g[k], corresponding to a convolution with the impulse response of the k-tap filters:

y_{low}[n] = ∑_{k} x[k] · h[2n − k]  (2)

y_{high}[n] = ∑_{k} x[k] · g[2n − k]  (3)

where the index 2n implements the down-sampling (↓2) process. The outputs y_{low}[n] and y_{high}[n] provide the approximation signal and the detailed signal, respectively [40].
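A behavioral sketch of one decomposition level in Python follows; the periodic extension at the signal boundary is an assumption of this sketch (the chapter's hardware processes a sample stream), and the high-pass filter is derived from the low-pass one by the standard quadrature-mirror relation:

```python
# DB2 analysis low-pass coefficients (see Table 1); the high-pass filter
# g is obtained by the quadrature-mirror relation g[k] = (-1)^k * h[N-1-k].
H_DB2 = [-0.129409522550921, 0.224143868041857,
         0.836516303737469, 0.482962913144690]

def dwt_level(x, h):
    """One decomposition level: filter, then down-sample by 2 (Eqs. (2)-(3))."""
    N, M = len(h), len(x)
    g = [(-1) ** k * h[N - 1 - k] for k in range(N)]
    lo = [sum(h[k] * x[(2 * n - k) % M] for k in range(N)) for n in range(M // 2)]
    hi = [sum(g[k] * x[(2 * n - k) % M] for k in range(N)) for n in range(M // 2)]
    return lo, hi  # approximation (a_i) and detail (d_i) coefficients

# A constant signal has no detail content: lo is sqrt(2), hi is (numerically) zero.
lo, hi = dwt_level([1.0] * 8, H_DB2)
```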

### 2.2. Distributed arithmetic algorithm

The distributed arithmetic algorithm (DAA) gets rid of multipliers by performing the arithmetic operations as bit-serial computations [13]. Because the down-sampling process follows each filter (as shown in Figure 2), Eq. (2) can be rewritten without the decimation factor as

y[n] = ∑_{k = 0}^{N − 1} h[k] · x[n − k]  (4)

Obviously, Eq. (4) requires intensive computation due to the multiplication of the real input values with the filter coefficients. Eq. (4) can be simplified by representing x[k] as a fixed-point number of length L:

x[k] = −x_{0}[k] + ∑_{l = 1}^{L − 1} x_{l}[k] · 2^{−l}  (5)

where x_{l}[k] is the l-th bit of x[k] and x_{0}[k] is the sign bit. Substituting Eq. (5) into Eq. (4), the output of the filter becomes

y[n] = −∑_{k = 0}^{N − 1} h[k] · x_{0}[k] + ∑_{l = 1}^{L − 1} 2^{−l} · ∑_{k = 0}^{N − 1} h[k] · x_{l}[k]  (6)

Since x_{l}[k] takes the value of either 0 or 1, the inner sum ∑_{k} h[k] · x_{l}[k] has only 2^{N} possible values. That is, rather than computing the summation at each iteration online, it is possible to pre-compute and store these values in a ROM, indexed by the bits x_{l}[k]. In other words, Eq. (6) simply realizes the sum-of-products computation with memory (LUT), adders, and a shift register.
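The pre-computation behind Eq. (6) can be sketched as follows. For clarity, this toy model treats the inputs as unsigned integers (the sign-bit term of Eq. (6) is omitted), so it demonstrates only the LUT-and-shift mechanics; the names and word lengths are illustrative:

```python
def daa_lut(h):
    """Pre-compute all 2^N partial sums of the N coefficients (the ROM contents)."""
    return [sum(h[k] for k in range(len(h)) if (addr >> k) & 1)
            for addr in range(2 ** len(h))]

def daa_dot(xs, h, L):
    """Bit-serial inner product sum_k h[k] * xs[k] for unsigned L-bit integers xs."""
    lut = daa_lut(h)
    acc = 0.0
    for l in range(L):                   # one LUT access per bit plane
        addr = 0
        for k, x in enumerate(xs):
            addr |= ((x >> l) & 1) << k  # gather bit l of every input sample
        acc += lut[addr] * 2 ** l        # shift-accumulate the partial sum
    return acc
```

Note that the multiplications have disappeared: each bit plane costs one table lookup and one shifted addition, exactly the memory/adders/shift-register structure described above.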

### 2.3. Residue number system

The RNS is a non-weighted number system that performs parallel, carry-free addition and multiplication. In DSP applications, which require intensive computation, the carry-free propagation allows for concurrent computation in each residue channel. The RNS moduli set, {m_{1}, m_{2}, …, m_{q}}, consists of q channels. Each m_{i} is a positive integer, and the moduli are pairwise relatively prime; that is, the greatest common divisor GCD(m_{i}, m_{j}) = 1 for i ≠ j.

Any number X, 0 ≤ X < M, is uniquely represented in RNS by its residues x_{i} = |X|_{m_{i}} (X modulo m_{i}), where M is defined in Eq. (7) as

M = ∏_{i = 1}^{q} m_{i}  (7)

M determines the range of unsigned numbers, [0, M − 1], and should be greater than the largest result produced. In addition, M uniquely represents any signed number within the corresponding symmetric range. The implementation of the RNS-based DWT obtained from Eq. (4) is given by Eq. (8) as follows:

y_{i}[n] = | ∑_{k = 0}^{N − 1} |h[k]|_{m_{i}} · |x[n − k]|_{m_{i}} |_{m_{i}}  (8)

for each modulus m_{i}. This implies that a q-channel DWT is implemented by q FIR filters that work in parallel.

Mapping from the RNS back to integers is performed by the Chinese remainder theorem (CRT) [34, 41, 42]. The CRT states that the binary/decimal representation of a number can be obtained from its RNS representation if all elements of the moduli set are pairwise relatively prime.

Designing a robust RNS-based DWT requires selecting a moduli set and implementing the hardware design of the residue-to-binary conversion. The most widely studied moduli sets are given as powers of two, owing to the attractive arithmetic properties of such moduli, for example, {2^{n} − 1, 2^{n}, 2^{n + 1} − 1}.

For the purpose of illustration, consider the moduli set P_{7} = {127, 128, 255} (n = 7). First, the dynamic range of P_{7} is large, M = 4,145,280, which can efficiently express real numbers in the range [−2.5, 2.5] using a 16-bit fixed-point representation, provided scaling and rounding are done properly. Second, we assume that this interval is sufficient to map the input values, which do not exceed ±2. Third, the reverse converter unit is simple and regular [42] due to its use of simple circuit designs.
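The residue arithmetic and the CRT mapping for P_7 can be sketched in a few lines; this is a numerical illustration, not the hardware converter of Section 3 (`pow(..., -1, m)` for the modular inverse requires Python 3.8+):

```python
P7 = (127, 128, 255)  # {2^7 - 1, 2^7, 2^8 - 1}

def to_rns(x, moduli):
    """Forward conversion: the residue of x in each channel."""
    return tuple(x % m for m in moduli)

def from_rns(residues, moduli):
    """Reverse conversion via the Chinese remainder theorem."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # modular inverse of Mi modulo m
    return x % M

# Carry-free channel-wise arithmetic: a*b + c computed independently per channel.
a, b, c = 1713, 459, 989
res = tuple((ra * rb + rc) % m
            for ra, rb, rc, m in zip(to_rns(a, P7), to_rns(b, P7), to_rns(c, P7), P7))
assert from_rns(res, P7) == a * b + c  # 787,256 is within the dynamic range
```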

## 3. DWT implementation methodology

### 3.1. DWT implementation using DA

DAA hides the explicit multiplications behind a ROM lookup table. The memory stores all possible values of the inner product of a fixed bit position with any possible combination of the DWT filter coefficients. The input data, x[n], are signed fixed-point numbers of 22-bit width, with 16 binary-point bits (Q_{5,16}). We assumed that the memory contents have the same precision as the input, which is reasonable to give high enough accuracy for the fixed-point implementation. As a consequence, 22 ROMs, each consisting of 16 words, are required. Each ROM stores every possible combination of the four DWT filter coefficients, and the final result is a 22-bit signed fixed-point number (Q_{5,16}). To decrease the amount of memory, the width would have to be reduced, which would have an impact on the output precision.

Figure 3 shows the block diagram of the 1-bit DAA at position l. This block contains one ROM (4 × 22) and one shift register. Because the word length w of the input x is 22 bits, the actual design contains 22 memory blocks and 21 adders for summing up the partial results.

### 3.2. DWT implementation using RNS

The RNS-based DWT implementation consists mainly of three components: the forward converter, the modulo adders (MAs), and the reverse converter. The forward converter, also known as the binary-to-residue converter (BRC), converts a binary input number into residue numbers. By contrast, the reverse converter, or residue-to-binary converter (RBC), obtains the result in binary format from the residue numbers. We refer to the RNS system without the RBC as a forward converter and modular adders (FCMA), as illustrated in Figure 4.

#### 3.2.1. The forward converter

The forward converter converts the result of multiplying an input number by a wavelet coefficient into q residue numbers via LUT, shift, and modulo-adder operations, where q is the number of channels.

#### 3.2.2. RNS-system number conversion

The received samples and wavelet coefficients span the real numbers and might take small values. One of the main drawbacks of the RNS number representation is that it operates only with positive integers in [0, M − 1]. The DWT coefficients are generally between −1 and 1. As a possible solution, we have divided the range of RNS, [0, M − 1], to handle those numbers.

In addition, the received sample is scaled up by shifting it y positions to the left (multiplying by 2^{y}), which ensures that X[i] is a y-bit fixed-point integer. In a similar manner, the wavelet coefficients are scaled by shifting them z positions to the left. In our design, we set the filter scaling factor z to 11. Table 1 presents the low-pass filter of DB2 before and after scaling.

| Coefficient (h_{k}) | Real value | RNS-system value |
|---|---|---|
| h_{0} | −0.129409522550921 | −266 |
| h_{1} | 0.224143868041857 | 459 |
| h_{2} | 0.836516303737469 | 1713 |
| h_{3} | 0.482962913144690 | 989 |

*Table 1. The DB2 low-pass filter coefficients: real values and RNS-system equivalents, multiplied by 2^{11}.*
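The scaled values in Table 1 can be reproduced by multiplying each coefficient by 2^{11} = 2048 and taking the floor; rounding toward negative infinity appears to be the convention used, since −0.1294… maps to −266 rather than −265 (this is our inference from the table):

```python
import math

Z = 11  # filter scaling factor from the text
h_real = [-0.129409522550921, 0.224143868041857,
          0.836516303737469, 0.482962913144690]

# Scale by 2^Z and round toward negative infinity (inferred convention).
h_scaled = [math.floor(h * 2 ** Z) for h in h_real]
print(h_scaled)  # [-266, 459, 1713, 989], matching Table 1
```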

#### 3.2.3. Modulo m_{i} multiplier

The multiplication of the received sample, X[i], by the filter coefficients, which are constants, is performed by indexing a ROM. As the word length w of the received sample X[i] increases, the memory size grows as 2^{w}. In addition, q ROMs are required to perform the modulo multiplication.

We propose a few improvements to this design. First, instead of preserving a dedicated memory for each modulus m_{i}, a single ROM that contains all modulo results is used. Thus, each word at location j contains the q modulo results. For P_{7} = {127, 128, 255}, with its memory map at the top right corner, the word at location j is organized so that the least significant 8 bits contain the residue modulo 255, the next 7 bits contain the residue modulo 128, and the upper 7 bits contain the residue modulo 127.

As with the DAA-based approach, if the input word length is 16 bits, the ROM would need 2^{16} locations. One way to reduce the size of the memory is to divide it into four ROMs of (4 × 22). Figure 4 shows the block diagram of the binary-to-residue converter with four ROMs, each indexed by four bits of x. However, the outputs of the ROMs must be combined so that the final result is correct. It is worth noting that this division comes at a cost in terms of adders and registers.

According to the previous improvements, the RNS-based converter works as follows. Each 4-bit input segment is fed into one ROM, producing three outputs, one for each modulus.

To obtain the final multiplication result, each m_{i} output should be shifted by l positions, where l is the index of the lowest input bit of the segment (4, 8, or 12). The modular multiplication and shift for (2^{n} − 1) and (2^{n + 1} − 1) can be achieved by a left circular shift (left rotate) of l positions, whereas the modular multiplication and shift for 2^{n} can be achieved by a plain left shift of l positions [17]. Finally, the modulo adder adds the corresponding outputs (Figure 6).
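The rotation trick can be checked numerically: for an n-bit residue x, multiplying by 2^l modulo 2^n − 1 is exactly an n-bit left rotation. The sketch below models the wiring in software; in hardware the rotation is free, requiring no arithmetic:

```python
def mul_pow2_mod(x, l, n):
    """(x * 2^l) mod (2^n - 1), computed as an n-bit left rotation of x.

    Valid for canonical residues 0 <= x < 2^n - 1 and shifts 0 <= l < n.
    """
    mask = (1 << n) - 1
    return ((x << l) | (x >> (n - l))) & mask

# Example for the 2^7 - 1 = 127 channel: shifting by l = 3 wraps the top bits.
assert mul_pow2_mod(100, 3, 7) == (100 * 2 ** 3) % 127  # both equal 38
```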

#### 3.2.4. Modulo adder (MA)

The modulo adders are required for adding the results from the modular multiplier as well as within the reverse converter. In this work, we have two kinds of MA: one based on 2^{n} and the other based on 2^{n} − 1. A modulo 2^{n} adder simply keeps the lowest n bits of the sum of two integers, ignoring the carry. Figure 7 shows the block diagram of the 2^{n} − 1 modulo adder.
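Both adder flavors are easy to model; the 2^n − 1 adder uses the classic end-around carry, where the carry-out re-enters as a carry-in. (Note that modulo 2^n − 1 arithmetic has a double representation of zero, 0 and 2^n − 1, which the hardware must account for.)

```python
def add_mod_2n(a, b, n):
    """Modulo 2^n adder: keep the lowest n bits, drop the carry."""
    return (a + b) & ((1 << n) - 1)

def add_mod_2n_minus_1(a, b, n):
    """Modulo 2^n - 1 adder: end-around carry (carry-out re-enters as carry-in)."""
    s = a + b
    if s >= (1 << n):
        s = (s + 1) & ((1 << n) - 1)
    return s

assert add_mod_2n(200, 100, 7) == (200 + 100) % 128          # 44
assert add_mod_2n_minus_1(100, 100, 7) == (100 + 100) % 127  # 73
```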

#### 3.2.5. The reverse converter

The Chinese remainder theorem (CRT) [34] provides the theoretical basis for converting a residue number into a natural integer. The moduli set {2^{n} − 1, 2^{n}, 2^{n + 1} − 1} produces a (3n + 1)-bit integer number. The actual signed number can be found by shifting the result y + z positions to the right, which is equivalent to dividing by 2^{(y + z)}; y and z are the scaling factors of the input and wavelet coefficients, respectively. Generally, the word length of one-level DWT is bounded by Eq. (10) and should not exceed (3n − 2) bits.

As a consequence, the range of the moduli set should be greater than the maximum output, th_{o}, which can be computed as in Eq. (11):

th_{o} = 2^{(y + z)} · max(x[n]) · ∑_{k} h_{k} ≤ M  (11)

where h_{k} is the k-th filter coefficient, x[n] is the input, y and z are the input and filter scaling factors, respectively, and M is the maximum range.

### 3.3. Two-level DWT implementation

The two-level discrete wavelet transform comprises two one-level DWTs, where the output of the first level is fed into the second level (as shown in Figure 7). The one-level DWT at each level is identical, but the output length of each level is halved. For example, if a signal of 1800 samples is applied to the input, then 900 and 450 samples are produced by the first and second levels, respectively.

Figure 7 shows the design of the two-level RNS-based DWT, which involves two FCMA blocks and two RBC blocks. Between the levels, the result of the first stage must be converted to binary, shifted by 11 positions, and converted back to residue numbers before entering the second FCMA.
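The level-to-level data flow can be sketched behaviorally; here a simple Haar-style averaging stands in for the DB2 low-pass stage (an assumption for brevity), since the point is only the halving of the sample count at each level:

```python
def lowpass_decimate(x):
    """One analysis level (toy low-pass: pairwise average, then keep one of two)."""
    return [(x[2 * n] + x[2 * n + 1]) / 2 for n in range(len(x) // 2)]

def multilevel_dwt(x, levels):
    """Feed the approximation output of each level into the next one."""
    a = x
    lengths = []
    for _ in range(levels):
        a = lowpass_decimate(a)
        lengths.append(len(a))
    return a, lengths

# 1800 input samples -> 900 after level 1 -> 450 after level 2, as in the text.
_, lengths = multilevel_dwt([0.0] * 1800, 2)
assert lengths == [900, 450]
```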

### 3.4. Hardware complexity

#### 3.4.1. Memory usage

DAA and RNS techniques employ memory as the key resource to avoid multiplying two variables. In each approach, as the number of filter taps increases, both the size and the number of memories change. Assuming that the length of the received word is w bits and there are N filter taps, the size of a memory element can be written as a × b, where a and b are the word lengths in bits of the input (address) and output, respectively. The value of a determines the depth of the memory, 2^{a}.

The total number of memory elements occupied by the DAA-based filter is 22 memories of size (N × 22): the output is a fixed-point number with 16 binary-point bits, and the word length is 22 bits. The number of memory elements remains constant as the filter taps increase, whereas the size of each memory increases exponentially as 2^{N}.

On the other hand, the total number of memory elements occupied by an RNS-based filter is N × (w/4) memories of size (4 × 22). Table 2 shows a comparison of the memory usage with w = 16 for different DWT families.

| | DB2 | DB4 | DB5 |
|---|---|---|---|
| Number of filter taps | 4 | 8 | 10 |
| DA memory usage | 22*(4 × 22) | 22*(8 × 22) | 22*(10 × 22) |
| RNS memory usage | 16*(4 × 22) | 32*(4 × 22) | 40*(4 × 22) |
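Table 2's counts follow from two small formulas, inferred from the table itself: the DAA uses one ROM per bit of its 22-bit word, and the RNS uses one ROM per tap for every 4-bit segment of the 16-bit word:

```python
def daa_memory_count(word_length):
    """DAA: one (N x 22) ROM per input bit; the count is independent of N."""
    return word_length

def rns_memory_count(taps, word_length, segment_bits=4):
    """RNS: one (4 x 22) ROM per tap for every 4-bit segment of the input word."""
    return taps * (word_length // segment_bits)

# Reproduce Table 2 (w = 16 for RNS; the DAA word is 22 bits wide).
assert daa_memory_count(22) == 22
assert [rns_memory_count(n, 16) for n in (4, 8, 10)] == [16, 32, 40]
```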

#### 3.4.2. Shift register and adder counts

The DAA-based implementation employs shift registers and adders to sum the result at each bit level (Figure 3). For a word length w with m magnitude bits, we need (w − 1) shift registers and (w − 1) 2-input adders (the data are combined by a tree adder architecture). To handle negative numbers, the two's complement operation requires an additional (m − 1) shift registers and (m − 1) adders. Thus, for an l-level DA-based implementation, a total of l × (w + m − 2) shift registers and two-input adders is required.

On the other hand, for a word length w and an N-tap filter, the q-channel FCMA implementation requires N BRC blocks and q × (N − 1) MA blocks to compute the final result. Each BRC block has MA blocks for the 2^{n} − 1, 2^{n}, and 2^{n + 1} − 1 moduli, respectively. The modulo 2^{n} channel requires fewer of them, because its shifting operations are not circular and shifting an n-bit number to the left by n positions or more always yields zero. Likewise, the RBC has four MA blocks (for 2^{n + 1} − 1), two multiplexers, and two subtractors. The total number of MA blocks in a one-level RNS-based design is the sum of these contributions.

For instance, a three-channel DB2 implementation requires nine MA blocks to sum the result, and the P_{7} RNS-based implementation has a total of 45 MA blocks when w = 16.

Meanwhile, the number of RNS-based adders depends on the design of the MA block. For example, each MA block for (2^{n} − 1) and (2^{n + 1} − 1) requires two adders, while each MA block for 2^{n} requires one adder. Thus, the total adder count follows directly from the number of MA blocks of each type.

## 4. Performance analysis and validation

Hardware analysis was carried out using the Xilinx System Generator for DSP (SysGen) [45], a high-level software tool that enables the use of the MathWorks model-based MATLAB/Simulink environment to create and verify hardware designs for Xilinx FPGAs. Furthermore, the hardware-software co-simulation design was synthesized and implemented on the ML605 Xilinx Virtex 6 board [15].

The implementation of RNS and DA is compared with the multiplier-accumulate-based DWT structure (MAC), as shown in Figure 8. We also consider the direct DWT implementation using an IP FIR Compiler 6.3 (FIR6.3) block [46], which provides a common interface to generate highly area-efficient and high-performance FIR filters (Figure 9).

For the RNS implementation, the moduli sets P_{7} = {127, 128, 255} and P_{10} = {1023, 1024, 2047} were used. The dynamic ranges of these sets are M = 4,145,280 and M = 2,144,338,944, respectively. The moduli set P_{10} is selected because its dynamic range is greater than th_{o} (Eq. (11) with y = 6, z = 11, and ∑(h_{i}) = 1.5436). In all RNS-based implementations, the word length was set to 16 bits.
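The dynamic ranges quoted above are simply the products of the moduli (Eq. (7)); for P_7 the product is 4,145,280, matching Section 2.3. This is easy to verify:

```python
from math import prod  # Python 3.8+

P7 = (127, 128, 255)      # {2^7 - 1, 2^7, 2^8 - 1}
P10 = (1023, 1024, 2047)  # {2^10 - 1, 2^10, 2^11 - 1}

# Dynamic range M of each moduli set (Eq. (7)).
assert prod(P7) == 4_145_280
assert prod(P10) == 2_144_338_944
```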

### 4.1. Resource utilization and system performance

Table 4 summarizes the resources used by the RNS-based components, that is, the FCMA and the reverse converter (RBC). The RBC consumes fewer resources and less power. The operating frequency is comparable in all models and greater than that of the entire RNS-based filter.

| Resources | FCMA (n = 7) | RBC (n = 7) | FCMA (n = 10) | RBC (n = 10) |
|---|---|---|---|---|
| Number of slice registers | 656 | 157 | 883 | 190 |
| Number of slice LUTs | 591 | 138 | 854 | 180 |
| Number of RAMB18E1 | 16 | 0 | 16 | 0 |
| Max. operating freq. (MHz) | 291.2 | 311.62 | 283.85 | 298.67 |
| Min. period (ns) | 3.434 | 3.21 | 3.523 | 3.348 |
| Estimated total power (mW) | 40.5 | 6.59 | 43.08 | 7.33 |
| Latency (clock cycles (CC)) | 6 | 6 | 6 | 6 |

Table 5 summarizes the resource consumption of each filter implementation. It shows that the MAC and IP FIR-based implementations use four multiplier units (DSP48E1s), with maximum frequencies of 296 and 472 MHz, respectively. By contrast, the proposed approaches are more complex than MAC: the DAA- and RNS-based implementations have 22 and 16 memory blocks (BRAMs), respectively, used to store the pre-calculated wavelet coefficients. It also shows that the numbers of slice registers, slice LUTs, and occupied slices of the P_{10} RNS-based implementation are greater than those of P_{7}, because the former has 31 output signals while the latter has 22. As a result, the number of flip-flops increases and the resource usage grows by approximately one-third, while the maximum frequency of both designs remains greater than 235 MHz.

| Resources | MAC | DA | FIR | RNS (n = 7) | RNS (n = 10) |
|---|---|---|---|---|---|
| Number of slice registers | 282 | 661 | 167 | 767 | 1089 |
| Number of slice LUTs | 128 | 520 | 71 | 721 | 1055 |
| Number of occupied slices | 58 | 188 | 60 | 240 | 358 |
| Number of DSP48E1s | 4 | 0 | 4 | 0 | 0 |
| Number of RAMB18E1 | 0 | 22 | 0 | 16 | 16 |
| Max. operating freq. (MHz) | 296.38 | 229.83 | 472.59 | 258.86 | 261.028 |
| Min. period (ns) | 3.374 | 4.351 | 2.030 | 3.863 | 3.831 |
| Estimated total power (mW) | 8.44 | 66.54 | 9.05 | 56.22 | 53.05 |

Table 6 shows a comparison between the DA- and RNS-based one-level DWT implementations when using larger filter banks, that is, DB4 and DB5. The DAA-based implementation occupies a fixed number of RAMB18E1 blocks: its memory-element count depends only on the word length (Table 2).

| Resources | DA (DB2) | DA (DB4) | DA (DB5) | RNS, n = 7 (DB2) | RNS, n = 7 (DB4) | RNS, n = 7 (DB5) |
|---|---|---|---|---|---|---|
| Number of slice registers | 650 | 737 | 780 | 767 | 1441 | 1898 |
| Number of slice LUTs | 521 | 539 | 568 | 721 | 132 | 1677 |
| Number of RAMB18E1 | 22 | 22 | 22 | 16 | 32 | 40 |
| Max. operating freq. (MHz) | 232.72 | 205.55 | 223.31 | 258.87 | 265.32 | 258.80 |

However, as the number of filter taps N increases, the memory size of the DAA-based implementation increases exponentially as 2^{N}. By contrast, the number of memory elements used in the RNS-based implementation increases linearly with the number of filter taps. Similarly, the number of memories used in multilevel DAA- and RNS-based implementations with l levels is the aggregate of levels 1 through l.

### 4.2. Functionality verification

The discrete wavelet transform was simulated by means of the ModelSim simulator. Figure 10 shows that the MAC and DAA approaches have lower latency than the other approaches; the FIR-based and the RNS-based (P_{7} and P_{10}) implementations lag behind MAC and DAA by four clock cycles.

### 4.3. Precision analysis

We carried out the precision analysis for the first and second DWT levels, and the result is presented in Table 7. The output bit precision is set to Q_{5,16} for all implementations. Table 7 shows the maximum performance based on the signal-to-noise ratio (SNR) and peak signal-to-noise ratio (PSNR). For P_{7}, we could not achieve better accuracy with the specified scaling factors because y + z = 19 < (3 × 7) + 1 = 22. However, both the DAA- and RNS-based approaches offer high signal quality, with PSNRs of 73.5 and 56.5 dB, respectively. Figure 11 shows the effect of changing the scaling factors of P_{10} for the DB2 RNS-based approach. The input scaling factor is increased from 8 to 13 bits and the filter scaling factor from 11 to 18. As expected, the lower scaling factors produce a PSNR of 56 dB, whereas the maximum PSNR of 84 dB is obtained when y = 12 and z = 16.

| Resources | FIR | MAC | DAA-based | RNS (P_{7}) | RNS (P_{10}) |
|---|---|---|---|---|---|
| Input precision | Q_{5,16} | Q_{5,16} | Q_{5,16} | y = 8 | y = 8 |
| Coefficients precision | Q_{0,12} | Q_{1,15} | Q_{0,15} | z = 11 | z = 11 |
| Internal word length | 22 bit | 22 bit | NA | 22 bit | 31 bit |
| SNR (dB) | 83.2 | 78.7 | 70.4 | 53.41 | 54.78 |
| PSNR (dB) | 86.3 | 81.8 | 73.5 | 56.5 | 57.9 |
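The SNR and PSNR figures of merit used in Table 7 can be computed from a double-precision reference output and the fixed-point output. The definitions below are the usual ones and are our assumption, since the chapter does not spell them out:

```python
import math

def snr_db(ref, test):
    """Signal power over error power, in dB."""
    err = sum((r - t) ** 2 for r, t in zip(ref, test))
    sig = sum(r ** 2 for r in ref)
    return 10 * math.log10(sig / err)

def psnr_db(ref, test):
    """Peak signal power over mean squared error, in dB."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    peak = max(abs(r) for r in ref)
    return 10 * math.log10(peak ** 2 / mse)

# Toy example: a small quantization error on one sample.
ref = [3.0, 4.0]
test = [3.0, 4.01]
```

For a signal whose peak power exceeds its mean power, PSNR is larger than SNR, which matches the ordering of the two rows in Table 7.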

### 4.4. Discussion

Hardware availability and system performance requirements are critical for selecting the appropriate architecture of the embedded platform. The number of DWT levels, filter taps, and word length has a substantial influence on the performance of the design and complexity.

Increasing the number of DWT levels leaves the operating frequency roughly unchanged: because the only difference between the RNS-based P_{7} and P_{10} implementations is the output signal width, the maximum operating frequencies change only slightly. Furthermore, the one-level DB2 filter bank was designed with maximum operating frequencies of 232 and 258 MHz for the DAA- and RNS-based approaches, respectively. However, these high-frequency implementations introduce a latency of at least 10 clock cycles for the one-level DA-based DWT.

Another critical parameter that affects the DWT performance is the filter order. The DAA-based implementation outperforms the RNS-based one for filters of at most 10 taps. When the number of taps increases, the number of memory units and binary adders within the RNS-based implementation increases steadily, while the memory size is unaffected, as shown in Table 2. The memory requirement of the DAA-based implementation, by contrast, increases exponentially with the number of filter taps.

In addition, the two approaches have different memory contents. Whereas the memory content of the DAA-based implementation is consistent and identical across ROMs, the memory content of the RNS-based implementation varies from tap to tap. This is expected, because each memory stores the multiplication values of one filter coefficient by the moduli set.

The word length determines the number of occupied memories in both implementations. As the word length increases, the number of memories within the DAA- and RNS-based approaches increases by w and w ∗ log2(w), respectively. Furthermore, we cannot neglect the effect of the output word length on the accuracy and the internal structure. The DAA-based approach requires large memories to reach high precision. By contrast, the RNS-based approach can achieve high precision by adapting the scaling factors, which requires no change to the design except updating the memory contents.

## 5. Conclusion

In this chapter, we addressed multiplierless DWT implementations, which have a substantial impact on the overall performance of the design and on resource availability. We presented DAA- and RNS-based implementations of the DWT and compared them with the MAC-based approach. The former approaches are multiplierless architectures that use memory intensively to speed up the overall processing.

Through implementation examples, experimental verification, and analysis, the approaches were simulated using Simulink and validated on a Xilinx Virtex 6 FPGA platform. The co-simulation results were also verified against the simulation environment. The complexity analysis of multi-level DWT with respect to the hardware structure provides a foundation for selecting an appropriate algorithm for high-performance applications, such as cognitive communication systems that combine DWT analysis with machine-learning algorithms.