Open access peer-reviewed chapter

From Insect Vision to a Novel Bio-Inspired Algorithm for Image Denoising

Written By

Manfred Hartbauer

Submitted: 25 November 2019 Reviewed: 26 February 2020 Published: 09 July 2020

DOI: 10.5772/intechopen.91911

From the Edited Volume

Biomimetics

Edited by Maki K. Habib and César Martín-Gómez


Abstract

Night-active insects inspired the development of image enhancement methods that uncover the information contained in dim images or movies. Here, I describe a novel bionic night vision (NV) algorithm that operates in the spatial domain to remove noise from static images. The parameters of this NV algorithm can be derived automatically from global image statistics and a primitive type of noise estimate. In a first step, luminance values are ln-transformed; then, adaptive local means calculations are executed to remove the remaining noise without degrading fine image details and object contours. The performance of this algorithm is comparable with that of several popular denoising methods, and it can be applied to grey-scale and color images. This novel algorithm can be executed in parallel at the level of pixels on programmable hardware.

Keywords

  • night vision
  • spatial integration
  • contrast enhancement
  • noise reduction
  • denoising
  • image enhancement
  • image processing
  • local means calculation

1. Introduction

Some insect species have attracted the attention of researchers due to their astonishing visual abilities under extremely dim light conditions [1, 2, 3]. These insects cope with noise that degrades visual information and has multiple origins: the sparsity of photons results in shot noise, which is overlaid by transducer noise. To increase the sensitivity of compound eyes, clusters of photoreceptor cells direct light onto a certain part of the associated rhabdom in order to gather photons from a wide field of view. In other nocturnal insect species (e.g., Megalopta genalis), neurons in the brain sum up the information provided by the individual ommatidia that form the apposition eye. Insects equipped with such neural apposition eyes can even see under starlight conditions [3]. Filtering in the spatial [4] and temporal domains is mirrored in some denoising algorithms that are available for cleaning up noisy films (e.g., [5]). However, there is still a lack of image enhancement methods that improve the quality of underexposed static images while avoiding artifacts and preserving image sharpness. The elimination of noise from static images is usually a challenging task for any denoising algorithm, because the temporal domain is not available for filtering.

The quality of images taken under dim light conditions is also often reduced by imperfections in the sensor itself (‘sensor grain noise’) and by shot noise. Generally, dim images have a very limited luminance range, which limits the available information content. If any measure is undertaken to improve image contrast, such as the traditional method of histogram stretching, sensor noise is unavoidably amplified. Therefore, the goal of image enhancement is to preserve as many image details as possible while eliminating noise. Typically, denoising is achieved by the application of linear and nonlinear filters. Linear filters include smoothing (low-pass), sharpening, Laplacian, unsharp masking, and high-boost filters. Nonlinear filters include order-statistic filters such as minimum, median, and maximum filters (for a review of methods see [6, 7]).

Simple denoising techniques, such as linear smoothing or median filtering, can reduce noise, but at the same time they smooth away edges, so that the resulting image becomes blurry. A popular alternative denoising method is total variation (TV) denoising, which was described by Rudin et al. [8]. This method minimizes the total variation of the luminance values, which can mainly be attributed to noise. The TV regularization method preserves salient edges while effectively removing noise. Lee et al. [9] published a framework for a moving least squares method with total-variation-minimizing regularization, and Yoon et al. [10] improved the preservation of fine image details by developing an adaptive, TV-minimization-based image enhancement method (ATVM). Bilateral filtering, described by Tomasi and Manduchi [11], is another powerful nonlinear denoising algorithm that preserves object contours. Here, denoising is based on both the spatial distance of surrounding pixels relative to an output pixel and their grey-value difference. Bilateral filtering is fast, but the tuning of its parameters is rather difficult (see Zhang and Gunturk [12]), and staircase effects and inverse contours are known possible artifacts. Another possibility is to perform operations on a Fourier transform of the image rather than on the image itself. The techniques that fall under this category include low-pass, high-pass, homomorphic, linear, and root filtering. Fourier-transformed images are filtered and inverse-transformed to reduce noise and prevent blurring effects. The disadvantages of frequency-domain methods are that they introduce certain artifacts and cannot enhance all parts of the image equally well. In addition, it is difficult to automate the image enhancement procedure. Despite these drawbacks, frequency filtering of similarity maps has proved to be a powerful method for image denoising (BM3D, published by Maggioni et al. [13]; see the original work of Dabov et al. [14]). This method divides the image into small pieces (2D blocks) and, after 3D transformation of similar blocks, the filtering process eliminates noise while leaving object details mostly untouched. In addition, wavelet-domain hidden Markov models have been applied to image denoising with fascinating results, especially when applied to diagnostic images [15, 16, 17].

In order to reduce the computational time required by complex image-processing algorithms such as edge detectors, homomorphic filtering, and image segmentation, general-purpose computing methods using graphics processing units were developed [18, 19]. Simpler, computationally less demanding algorithms were developed as another strategy to reduce processing time. For example, the piecewise linear (PWL) function sharpens image edges and reduces noise simply by evaluating the luminance of pixels in a window of 3 × 3 pixels around each pixel [20]. Its effects can easily be controlled by varying only two parameters. Such simple algorithms can be implemented in reconfigurable hardware in the form of field-programmable gate arrays (FPGAs), which is considered a practical way to obtain high performance from computationally intensive image-processing algorithms [21, 22]. Performing parallel operations on hardware significantly reduces processing time, and simple algorithms are easier to implement on programmable hardware than mathematically complex ones.

Here, I describe a rather simple, bio-inspired algorithm that can be used to enhance the contrast of dim images and remove noise without affecting fine image details much. It operates in the spatial domain at the level of pixels and can be run in parallel on FPGA hardware.


2. Bionic method of image denoising

2.1 Method overview

This novel night vision (NV) image enhancement method increases the quality of underexposed pictures by combining three subsequent image-processing steps (see Figure 1 ). These are executed at the level of pixels, which perform simple calculations that mimic the amplification of the transduction process in photoreceptors and the spatial integration of image information known from nocturnal insects [2]. The photoreceptors of Megalopta genalis (Halictidae), a nocturnal bee found in the Neotropics, have a rather high transduction gain, which increases sensitivity at the cost of a decreased signal-to-noise ratio and information capacity in dim light. This amplification of visual information is mirrored in the first image-processing step of the night vision method, which performs a logarithmic transformation of pixel grey values (luminance values). Logarithmic transformation increases small luminance values disproportionately while leaving high values largely unchanged; therefore, image details in dark image regions become visible.

Figure 1.

Schema of the image-processing steps that make up the NV algorithm. Global image statistics and a simple noise estimate were used to derive the parameters “gain” and “variability threshold”. These were used in subsequent image-processing steps to enhance the quality of dim images and remove noise. Images with average brightness skip “Ln transformation”, and the “contrast enhancement” routine was only applied to images that exhibited a low variance among their luminance values.

The photoreceptors of nocturnal insects generate slow and noisy visual signals that are spatially summed by second-order monopolar cells in the lamina [1]. Summing visual information from a wide angle of view reduces noise and thus improves the signal-to-noise ratio. The neuronal correlate for this can be found in the large dendritic trees of lamina interneurons in M. genalis. However, to prevent image blur, spatial summation should be small in image areas where contrast is high and large in more homogeneous image regions. This “adaptive spatial averaging” is performed in the second image-processing step of the night vision method and preserves object contours and image sharpness. The procedure assumes a higher variability of luminance values near object contours compared with homogeneous image regions. Thus, the circles in which local luminance values are averaged may not exceed a critical variability of grey values (threshold_var). Adaptive averaging is performed at the level of pixels and evaluates the variability of local grey values to find the dimension of a circle in which the variability of grey values remains below the predefined variability threshold. After exceeding this threshold, the average of the grey values of the pixels belonging to this circle is calculated and stored at the central pixel. As a final processing step, an automatic contrast-enhancement procedure was applied by means of linear histogram stretching. Two parameters (gain and variability threshold) are essential for this method and were derived from global image statistics and a simple kind of noise estimate. The image-enhancement algorithm described here was developed using Netlogo 5.2 (developed by Uri Wilensky; http://ccl.northwestern.edu/netlogo/), a multi-agent programming environment that allows the parallel execution of commands at the level of pixels (named patches in the Netlogo language). A high-level sketch of the resulting pipeline is given below.
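To make the sequence of operations concrete, the following Python sketch strings the three processing steps together in the order shown in Figure 1. It relies on helper functions (estimate_noise, derive_parameters, ln_transform, adaptive_spatial_averaging, stretch_histogram) that are sketched in the corresponding subsections below; all names are illustrative and do not reproduce the original Netlogo implementation.

```python
import numpy as np

def nv_enhance(grey):
    """Hypothetical driver for the NV pipeline; 'grey' holds luminance
    values in Netlogo's 0-9.9 range (one color channel)."""
    median_grey = np.median(grey)          # global image statistics (Section 2.3)
    var_grey = grey.var()
    noise_estimate = estimate_noise(grey, median_grey, var_grey)    # Section 2.4
    gain, threshold_var = derive_parameters(median_grey, var_grey,
                                            noise_estimate)         # Section 2.5
    # Step 1: logarithmic amplification (skipped for images of average
    # brightness in the original method, see Figure 1).
    grey_ln = ln_transform(grey, gain)
    # Step 2: adaptive spatial averaging removes noise while preserving edges.
    grey_avg = adaptive_spatial_averaging(grey_ln, threshold_var)
    # Step 3: contrast enhancement, applied only to low-variance results (Section 2.8).
    if grey_avg.var() < 75:
        grey_avg = stretch_histogram(grey_avg)
    return grey_avg
```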

2.2 Import pictures

Dim and noisy images were imported into Netlogo using the command “import-pcolors”, which transforms the luminance values of pixels into the grey values of patches. This Netlogo function limits the luminance values of grey-scale images to the range from 0 (black) to 9.9 (white), whereby the total number of possible grey values is 110. Although image information is reduced by this function, the human eye is unable to recognize any difference between images having 110 or more grey levels.
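Outside of Netlogo, the effect of this import step can be approximated by rescaling 8-bit luminance values onto the 0-9.9 grey range. The snippet below is a minimal sketch of this assumption, not part of the original implementation; it assumes NumPy and Pillow are available.

```python
import numpy as np
from PIL import Image

def load_as_netlogo_grey(path):
    """Load an image, convert it to greyscale and rescale it to the 0-9.9 range
    used by Netlogo's import-pcolors (an approximation, without quantization)."""
    img = np.asarray(Image.open(path).convert("L"), dtype=float)  # values 0..255
    return img / 255.0 * 9.9                                      # 0 (black) .. 9.9 (white)
```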

2.3 Image statistics

For the automatic adjustment of the two parameters gain and variability threshold, a simple calculation of the global image statistics was performed and the image noise was estimated. The median and variance of the grey values of all pixels were computed and saved as median_grey and var_grey, respectively. Before processing ‘rgb’ images, the parameter median_grey was calculated by averaging over the luminance values of all color channels. In contrast, var_grey was calculated from the channel with the highest average luminance value. The parameter gain acts as a multiplication factor for ln-transformed grey values (it adjusts image brightness) and was derived from median_grey according to Eq. (1); gain cannot be smaller than 1.

gain = 5 / median_grey    (E1)

2.4 Noise estimation

A rather simple estimation of noise can be obtained by summing up the differences between the grey value of each pixel and the average grey value of its surrounding pixels in rather homogeneous regions of the image. For this purpose, the local average luminance value (mean_grey_local) and the local variance of the luminance values (var_grey) of the surrounding patches were calculated in a circle with a radius of 4 pixels. Since brighter images tend to show a higher variability of grey values, the noise estimation was restricted to those pixels X_NE whose circles had a mean_grey_local smaller than the ratio between var_grey and (median_grey − 1) (see Eq. (2)). The noise estimate was computed according to Eq. (3), whereby X denotes the number of patches in the homogeneous image regions.

X_NE : mean_grey_local < var_grey / (median_grey − 1)    (E2)
noise_estimate = Σ_(N=1..X) |mean_grey_local − luminance| / X    (E3)

The noise estimation of color images was restricted to the color channel with the highest average luminance value.
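A minimal Python sketch of this noise estimate is given below, assuming a 2D NumPy array grey with luminance values in the 0-9.9 range. It approximates the circular neighbourhood of radius 4 with a square window and treats var_grey and median_grey as the global statistics from Section 2.3; names and details are illustrative rather than a copy of the Netlogo code.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def estimate_noise(grey, median_grey, var_grey, radius=4):
    """Mean absolute deviation between pixels and their local average (Eq. (3)),
    restricted to rather homogeneous regions selected by Eq. (2)."""
    mean_grey_local = uniform_filter(grey, size=2 * radius + 1)  # local average luminance
    # Eq. (2): keep only pixels whose local mean is below var_grey / (median_grey - 1);
    # for very dark images (median_grey close to 1) this criterion needs extra care.
    mask = mean_grey_local < var_grey / (median_grey - 1.0)
    if not mask.any():
        return 0.0
    return float(np.abs(mean_grey_local[mask] - grey[mask]).mean())
```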

2.5 Parameter estimation

This NV algorithm derives all of its parameters from the global image statistics (median_grey and var_grey) and the noise_estimate. Eq. (4) defines the threshold that was used for the adaptive spatial averaging procedure. It was found empirically by testing numerous terms for their ability to predict parameters that result in high-quality output images with respect to the conservation of image details and the peak signal-to-noise ratio (PSNR). Eq. (4) was derived from manually adjusted parameter combinations that were obtained from 10 different images exhibiting various noise levels and brightness values.

threshold_var = 0.0002 × noise_estimate + var_grey / (median_grey + 0.92) + gain / 10    (E4)

If noise_estimate was smaller than 0.01, threshold_var was always set to 0.01, which is low enough to preserve fine image details, but high enough to remove the remaining noise.
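The parameter derivation can be sketched as follows. Note that the operator placement in Eqs. (1) and (4) is reconstructed here from the surrounding text of the extracted chapter, so the expressions should be read as an approximation of the published formulas rather than a verbatim transcription.

```python
def derive_parameters(median_grey, var_grey, noise_estimate):
    """Derive gain and threshold_var from global statistics and the noise estimate."""
    gain = max(5.0 / median_grey, 1.0)          # Eq. (1) as reconstructed; gain never falls below 1
    if noise_estimate < 0.01:
        threshold_var = 0.01                    # low-noise override described in Section 2.5
    else:
        threshold_var = (0.0002 * noise_estimate
                         + var_grey / (median_grey + 0.92)
                         + gain / 10.0)         # Eq. (4) as reconstructed
    return gain, threshold_var
```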

2.6 Image processing step 1: logarithmic transformation

The logarithmic transformation of luminance values results in a disproportionate amplification of small grey values. The contrast of very dim images was improved by calculating the natural logarithm of the luminance values of the patches of each color channel and multiplying the result by the gain factor (Eq. (5)). Adding the constant 1.5 to the luminance values prevents them from becoming smaller than zero after ln-transformation. The result of this logarithmic transformation was stored as greyLn at the focal pixel.

greyLn = ln(grey_value + 1.5) × gain    (E5)
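As a minimal sketch, Eq. (5) amounts to a single vectorized operation per color channel; grey is again assumed to be a NumPy array of luminance values in the 0-9.9 range.

```python
import numpy as np

def ln_transform(grey, gain):
    """Image-processing step 1 (Eq. (5)): logarithmic amplification of luminance values."""
    return np.log(grey + 1.5) * gain  # the offset 1.5 keeps ln() non-negative for grey >= 0
```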

2.7 Image processing step 2: adaptive spatial averaging

Image noise was largely removed by means of ‘adaptive spatial averaging’, a procedure that is executed by each pixel and evaluates the local variability of the greyLn values to determine the radius of the circle in which grey value averaging is executed (local means computation). Adaptive spatial averaging was executed in parallel at the level of pixels. Each pixel expands a circle in steps of one patch as long as the variability of the greyLn values of the pixels of each color channel within this circle remains below threshold_var. Once the dimension of this circle was found, the greyLn values were averaged and saved as the grey_avg value of the focal pixel. This averaging of greyLn values in the circle was calculated for each color channel separately. The maximum radius of the expanding circle was restricted to 10 pixels and the minimum size to 1 pixel.
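A straightforward (and deliberately unoptimized) Python rendering of this per-pixel procedure for a single channel might look as follows; it interprets the averaging circle as the largest circle whose greyLn variance stays below threshold_var, which is one reading of the description above, and it is not the original Netlogo code.

```python
import numpy as np

def adaptive_spatial_averaging(grey_ln, threshold_var, max_radius=10):
    """Image-processing step 2: per-pixel local means over an adaptively sized circle."""
    h, w = grey_ln.shape
    out = np.empty_like(grey_ln)
    yy, xx = np.mgrid[0:h, 0:w]
    for y in range(h):
        for x in range(w):
            result = grey_ln[y, x]                       # fall back to the focal pixel itself
            for r in range(1, max_radius + 1):
                inside = (yy - y) ** 2 + (xx - x) ** 2 <= r * r
                values = grey_ln[inside]
                if values.var() > threshold_var:
                    break                                # circle became too heterogeneous
                result = values.mean()                   # average over the still-homogeneous circle
            out[y, x] = result
    return out
```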

2.8 Image processing step 3: enhancement of image contrast

After adaptive spatial averaging, image contrast can be enhanced if the grey value variability of the resulting image is low (variance <75). This was done by means of linear histogram stretching, which uses the lowest and highest grey_avg values of the image to calculate the resulting grey value of each patch. This was achieved by using Eq. (6), which sets the lowest value to 0 (black) and the highest value to 9.9 (white) while intermediate values are assigned to shades of grey.

grey_value = (grey_avg − min(grey_avg)) / (max(grey_avg) − min(grey_avg)) × 9.9    (E6)
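In array form, this linear histogram stretching is essentially a one-liner; the sketch below assumes the averaged values grey_avg are stored in a NumPy array.

```python
import numpy as np

def stretch_histogram(grey_avg):
    """Image-processing step 3 (Eq. (6)): stretch grey_avg onto the 0-9.9 display range."""
    g_min, g_max = grey_avg.min(), grey_avg.max()
    return (grey_avg - g_min) / (g_max - g_min) * 9.9
```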

2.9 Evaluation of performance

The performance of this algorithm was evaluated by calculating the peak signal-to-noise ratio (PSNR) using the method described by Russo [20]. The result is given in dB and quantifies the difference between the noisy input image and the processed image; higher dB values indicate a better denoising performance. To evaluate the PSNR between the input and output images, images were exported from Netlogo in ‘png’ format and a Python script executed the function “compare_psnr” from the “skimage.measure” library.
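For reference, this evaluation step can be reproduced with a few lines of Python. The chapter used compare_psnr from skimage.measure; in scikit-image 0.18 and later the same functionality is provided by skimage.metrics.peak_signal_noise_ratio, which is used in this sketch (file names are placeholders).

```python
from skimage import io
from skimage.metrics import peak_signal_noise_ratio  # successor of measure.compare_psnr

noisy = io.imread("noisy_input.png")        # image exported from Netlogo (placeholder name)
denoised = io.imread("nv_output.png")       # NV-processed image (placeholder name)
print(f"PSNR: {peak_signal_noise_ratio(noisy, denoised):.1f} dB")
```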


3. Results: image denoising

The sequence of image-processing steps illustrated in Figure 1 allowed a strong enhancement of the image contrast of dim images and simultaneously removed sensor noise. The performance of this simple, dynamic spatial-domain filtering algorithm depends on two parameters that can be estimated by evaluating global image statistics and executing a simple noise estimation method. Application of this NV algorithm to an underexposed image that was taken in a very dim room resulted in an output image with a high level of contrast and a rather low noise level (see Figure 2C ). In contrast, the automatic adjustment of brightness and contrast offered by commonly used image-processing software produced either a rather dark image (similar to Figure 2A ) or a contrasted image in which sensor noise was greatly enhanced ( Figure 2B ). The adaptive spatial averaging procedure preserved fine image details and object contours, while most noise was removed. This is mirrored in the high PSNR value of 25.9 dB when Figure 2B was compared with Figure 2C .

Figure 2.

Performance of the NV algorithm using a dim image of a mobile phone camera as input. (A) The original image is black. (B) After automatic histogram equalization sensor noise was amplified. (C) The NV-processed image exhibits a high level of image contrast and contains a low amount of noise. PSNR of C: 25.9 dB. Hartbauer holds the copyright of this picture.

The NV algorithm enhanced image contrast and removed noise from the natural image showing a village (compare Figure 3A and C ; PSNR = 35.3 dB), whereas automatic contrast enhancement amplified image noise greatly ( Figure 3B ). In comparison, denoising performed with an improved non-local means algorithm published by Buades et al. [6] removed noise slightly more effectively (PSNR = 41.1 dB), but did not improve image contrast.

Figure 3.

Performance of the NV algorithm using a dim natural image as input. (A) Noisy image of a village on Mallorca (printed with permission from Buades; see Buades et al. [6]). (B) Noise was amplified after histogram stretching. (C) The output of the NV algorithm yielded a reduced amount of noise (PSNR = 35.3 dB) and shows a higher quality regarding image details and contrast.

When Gaussian distributed noise with a standard deviation of 12.75 was added to an image of a bird ( Figure 4A ), the NV algorithm removed noise while retaining many fine image details ( Figure 4B ). This performance is reflected in a high PSNR value of 32.1 dB. The same noisy picture processed with a method based on the moving least squares (MLS) algorithm described by Lee et al. [9] resulted in a rather blurry image.

Figure 4.

Denoising performance of the NV algorithm using grey scale images as input. (A) The image of a bird (from Lee et al. [9]) was degraded by Gaussian distributed noise with σ = 12.75. (B) The NV algorithm removed noise (PSNR = 32.1 dB) and preserved many fine image details. (C) Additive Gaussian noise with σ = 0.6 was added to an image of an alarm clock. The NV algorithm removed noise (PSNR = 27.4 dB) without degrading fine image details (D). The picture was printed with permission from Springer Inc. The picture shown in (C) was printed with permission from Yoon et al. [10] (Creative Commons License).

The noisy grey-scale image shown in Figure 4C contains additive Gaussian noise with a standard deviation of 0.6. The output image of the NV algorithm ( Figure 4D ) contains many fine image details, and the noise level was reduced (PSNR = 27.2 dB). This denoising performance is similar to that of the adaptive total variation minimization-based image enhancement method (ATVM) described by Yoon et al. [10], which is mirrored in a similar PSNR value of 30.5 dB.

Adaptive spatial averaging as described here can also be used to remove noise from color images (see Figure 5 ). This NV algorithm successfully removed noise from the extremely noisy image showing a candle ( Figure 5A ). Denoising of this image resulted in a PSNR of 32.0 dB ( Figure 5B ), which is similar to the denoising performance of the GSM wavelet denoising described by Portilla et al. [23]. The NV algorithm also removed most noise from the image showing the face of an ostrich in Figure 5C , resulting in a PSNR value of 32.7 dB. Interestingly, the resulting image is rather sharp and still contains many fine image details, such as the hairs. In contrast, the novel image denoising method described by Liu et al. [4] created patchy regions of homogeneous color, when the same noisy image was used as input. Furthermore, GSM wavelet denoising of this image resulted in image blur.

Figure 5.

The NV algorithm reduced the noise level in extremely noisy color images. (A) Noisy color image taken with a camera under candlelight conditions. The NV algorithm removed most noise (PSNR = 32.0 dB) (B). (C) Noisy color image of an ostrich with 10% additive Gaussian noise. (D) Noise-reduced output of the NV algorithm (PSNR = 32.7 dB). Pictures printed with permission from IEEE (see Liu et al. [4]).


4. Discussion

The first two image-processing steps of this novel bionic NV algorithm were inspired by the transducer gain of the photoreceptor cells of nocturnal insects and the spatial integration of image information in the lamina neurons of M. genalis [2]. Their combination allowed a strong enhancement of the contrast of dim images and effectively removed noise that would otherwise have been amplified by histogram stretching (see Figures 2 and 3 ). A drawback of many denoising algorithms is the loss of fine image details and sharp object contours, which leads to image blur and staircase effects. In contrast, spatial-domain filtering by means of ‘adaptive spatial averaging’ showed an unexpectedly good denoising performance, removing noise without affecting fine image details and object contours. The denoising performance of the NV algorithm is comparable to that of well-known denoising techniques such as wavelet, bilateral, TV, ATVM, and BM3D filtering, as long as the noise level of the input image remains below a critical value. If noise dominates the input image, the output image may contain small artifacts in the form of grain noise in homogeneous image regions (see Figure 2C ).

Automatic parameter estimation as described here was sufficient to improve the quality of 12 pictures having different average brightness and noise levels. The method used here for noise estimation is similar to that described by Förstner [24], who estimated the noise level from the gradient of smooth or fine-textured regions, whereby the signal-dependent noise level was estimated for each intensity interval. Similarly, Stefano et al. [25] described three methods to estimate the noise levels of natural images, and a noise estimation method based on the mean absolute deviation was described by Donoho [26].

It will be possible to modify the bionic NV algorithm to improve the contrast of image sequences in dim films such as those generated by surveillance cameras operating at low light levels or digital night vision goggles. To reduce the noise of input images, it may be helpful to insert an additional processing step that performs a temporal summation of grey values by averaging across subsequent frames. A bionic method that operates in the spatial and temporal domains was described by Warrant et al. [5]. The night vision method described there is based on a smoothing kernel that is constructed for each pixel in three dimensions (two in space and one in time). In contrast to this complex algorithm, the NV algorithm described in this study is computationally less demanding and even runs on FPGA hardware due to the pixel-wise operations employed.

FPGA hardware can process images almost in real time due to its parallel architecture [19, 20]. Histogram statistics and equalization have been computed in parallel on an FPGA chip [27], and non-conventional schemes for real-time histogram equalization have been discussed and implemented by Alsuwailem and Alshebeili [28]. Furthermore, several studies have investigated the implementation of brightness control, contrast stretching, and histogram equalization algorithms on FPGAs [29, 30], which have become a competitive alternative for high-performance digital signal processing (DSP) applications. Bittibssi et al. [31] addressed the hardware implementation of five image-enhancement algorithms in the spatial domain using FPGAs: median filter, contrast stretching, histogram equalization, negative image transformation, and power law transformation. Recently, this NV algorithm was successfully implemented on a Trenz Electronic FPGA hardware platform for the purpose of denoising mammography images (prototype of the Mammobee project that was funded by the AWS Austria).


5. Conclusions

This bee-eye-inspired NV algorithm is based on rather simple calculations and operates in the spatial domain to suppress sensor noise in static images. It is applicable to a great variety of dim images differing in brightness and degree of noise. Such a simple algorithm is suitable for FPGA technology, which allows the image-processing steps to be executed in parallel at the level of pixels. Since all parameters can be derived from the statistics of the input images, its use in digital night vision goggles, medical applications, and real-time fluorescence imaging systems is possible. In the future, this method will be adapted for night vision cameras that operate in almost complete darkness. For this task, subsequent video frames will be averaged to improve the visual information. Another project is dedicated to the enhancement of dim X-ray images. This approach is based on the assumption that X-ray exposure during breast cancer screenings can be reduced by the amplification and denoising of slightly underexposed diagnostic raw-data images. This is a challenging task because the resolution of such images as well as their range of grey values (16 bit) is high, and the procedure used for the noise estimation as well as the calculation of the threshold for “adaptive spatial averaging” needs to be modified.


Acknowledgments

The research was funded by the Austrian Science Fund P25709-B25.

References

  1. Greiner B, Ribi WA, Warrant EJ. A neural network to improve dim-light vision? Dendritic fields of first-order interneurons in the nocturnal bee Megalopta genalis. Cell and Tissue Research. 2005;322:313-320
  2. Warrant E. Seeing in the dark: Vision and visual behaviour in nocturnal bees and wasps. The Journal of Experimental Biology. 2008;211:1737-1746
  3. Stöckl AL, O’Carroll DC, Warrant EJ. Neural summation in the hawkmoth visual system extends the limits of vision in dim light. Current Biology. 2016;26:821-826
  4. Liu C, Szeliski R, Bing Kang S, Zitnick CL, Freeman WT. Automatic estimation and removal of noise from a single image. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2008;30:299-314
  5. Warrant E, Oskarsson M, Malm H. The remarkable visual abilities of nocturnal insects: Neural principles and bioinspired night-vision algorithms. Proceedings of the IEEE. 2014;102:1411-1426
  6. Buades A, Coll B, Morel J. A review of image denoising algorithms, with a new one. Multiscale Modeling and Simulation. 2005;4:490-530
  7. Nirmala SO, Dongale TD, Kamat RK. Review on image enhancement techniques: FPGA implementation perspective. International Journal of Electronics, Computer and Communications Technologies. 2012;2
  8. Rudin LI, Osher S, Fatemi E. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena. 1992;60:259-268
  9. Lee YJ, Lee S, Yoon J. A framework for moving least squares method with total variation minimizing regularization. Journal of Mathematical Imaging and Vision. 2014;48:566-582
  10. Yoon SM, Lee YJ, Yoon G-J, Yoon J. Adaptive total variation minimization-based image enhancement from flash and no-flash pairs. Scientific World Journal. 2014;4:e319506
  11. Tomasi C, Manduchi R. Bilateral filtering for grey and color images. In: Proceedings of the IEEE International Conference on Computer Vision; 1998. pp. 839-846
  12. Zhang M, Gunturk BK. Multiresolution bilateral filtering for image denoising. IEEE Transactions on Image Processing. 2008;17:2324-2333
  13. Maggioni M, Katkovnik V, Egiazarian K, Foi A. Nonlocal transform-domain filter for volumetric data denoising and reconstruction. IEEE Transactions on Image Processing. 2013;22:119-133
  14. Dabov K, Foi A, Katkovnik V, Egiazarian K. Image denoising with block-matching and 3D filtering. Proceedings of SPIE - Journal of Electronic Imaging. 2006;6064
  15. Crouse MS, Nowak RD, Baraniuk RG. Wavelet-based statistical signal processing using hidden Markov models. IEEE Transactions on Signal Processing. 1998;46:886-902
  16. Fan G, Xia X-G. Image denoising using a local contextual hidden Markov model in the wavelet domain. IEEE Signal Processing Letters. 2001;8:125-128
  17. Portilla J. Full blind denoising through noise covariance estimation using Gaussian scale mixtures in the wavelet domain. Proceedings of IEEE International Conference on Image Processing. 2004;5:1217-1220
  18. Bader DA, Jájá J, Harwood D, Davis LS. Parallel algorithms for image enhancement and segmentation by region growing, with an experimental study. The Journal of Supercomputing. 1996;10:141-168
  19. Zhang N, Chen Y, Wang J. Image parallel processing based on GPU. In: 2nd International Conference on Advanced Computer Control (ICACC); 2010. pp. 367-370
  20. Russo F. Piecewise linear model-based image enhancement. EURASIP Journal on Advances in Signal Processing. 2004;12:1861-1869
  21. Rao DV, Patil S, Babu NA, Muthukumar V. Implementation and evaluation of image processing algorithms on reconfigurable architecture using C-based hardware description language. International Journal of Theoretical and Applied Computer Sciences. 2006;1:9-34
  22. Xilinx Inc. System Generator for Digital Signal Processing. 2012. Available from: http://www.xilinx.com/tools/dsp.htm
  23. Portilla J, Strela V, Wainwright MJ, Simoncelli EP. Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Transactions on Image Processing. 2003;12:1338-1351
  24. Förstner W. Image preprocessing for feature extraction in digital intensity, color and range images. LNES. 2003;95:165-189
  25. Stefano A, White P, Collis W. Training methods for image noise level estimation on wavelet components. EURASIP Journal on Advances in Signal Processing. 2004;16:2400-2407
  26. Donoho D. De-noising by soft-thresholding. IEEE Transactions on Information Theory. 1995;41:613-627
  27. Salcic Z, Sivaswamy J. IMECO: A reconfigurable FPGA-based image enhancement co-processor framework. In: IEEE TENCON - Speech and Image Technologies for Computing and Telecommunications, TENCON ‘97; Brisbane, Australia; 1997. pp. 231-234
  28. Alsuwailem AM, Alshebeili S. A new approach for real time histogram equalization using FPGA. In: Proceedings of the International Symposium on Intelligent Signal Processing and Communication Systems; 2005. pp. 397-400
  29. Sowmya S, Paily R. FPGA implementation of image enhancement algorithm. In: International Conference on Communication and Signal Processing (ICCSP); 2011. pp. 584-588
  30. Acharya A, Mehra R, Takher V. FPGA based non uniform illumination correction in image processing applications. International Journal of Computer Technology and Applications. 2011;2:349-358
  31. Bittibssi TM, Salama GI, Mehaseb YZ, Henawy AE. Image enhancement algorithms using FPGA. International Journal of Computer Science and Communication Networks. 2012;2:536-542
