Open access peer-reviewed chapter

Phase-Stretch Adaptive Gradient-Field Extractor (PAGE)

Written By

Madhuri Suthar and Bahram Jalali

Submitted: 17 May 2019 Reviewed: 04 November 2019 Published: 11 February 2020

DOI: 10.5772/intechopen.90361

From the Edited Volume

Coding Theory

Edited by Sudhakar Radhakrishnan and Muhammad Sarfraz


Abstract

Certain physical phenomena, when emulated by an algorithm, have useful properties for image transformation. For example, image denoising can be achieved by propagating the image through the heat diffusion equation, where different stages of the temporal evolution represent a multiscale embedding of the image. Inspired by photonic time stretch, a real-time data acquisition technology, the Phase Stretch Transform (PST) emulates 2D propagation through a medium with group velocity dispersion, followed by coherent (phase) detection. The algorithm performs exceptionally well as an edge and texture extractor, in particular on visually impaired images. Here, we introduce a decomposition method that is metaphorically analogous to birefringent diffractive propagation. This decomposition method, which we term the Phase-stretch Adaptive Gradient-field Extractor (PAGE), embeds the original image into a set of feature maps that select semantic information at different scales, orientations, and spatial frequencies. We demonstrate applications of this algorithm in edge detection and in the extraction of semantic information from medical images, electron microscopy images of semiconductor circuits, optical characters, and fingerprint images. The code for this algorithm is available at https://github.com/JalaliLabUCLA.

Keywords

  • computational imaging
  • physics-inspired algorithms
  • phase stretch transform
  • feature engineering
  • Gabor filter
  • digital image processing

1. Introduction

Physical phenomena described by partial differential equations (PDEs) have inspired a new field in computational imaging and computer vision [1]. Such physics-inspired algorithms based on PDEs have been successful for image smoothing and restoration. Image restoration can be viewed as obtaining the solution to evolution equations by minimizing an energy function. The most popular PDE technique for image smoothing treats the original image as the initial state of a diffusion process and extracts filtered versions from its evolution at different times. This embeds the original image into a family of simpler images at a hierarchical scale. Such a scale-space representation is useful for extracting semantically important information [2]. Physics-based algorithms not only outperform their conventional counterparts, but have also enabled new applications. These algorithms have been used for feature detection in digital images [3, 4, 5], 3D modeling of objects from 2D images [6, 7], optical character recognition [8], and restoring audio quality [9].

Phase Stretch Transform (PST) is a physics-inspired algorithm that emulates 2D propagation through a medium with group velocity dispersion, followed by coherent (phase) detection [10, 11]. The algorithm performs exceptionally well as an edge and texture extractor, in particular on visually impaired images [12]. This transform has an inherent equalization ability that supports a wide dynamic range of operation for feature detection [12, 13, 14]. It also exhibits superior properties over conventional derivative operators, particularly in terms of feature enhancement in noisy, low-contrast images. These properties have been exploited to develop image processing tools for clinical needs, such as a decision support system for radiologists to diagnose pneumothorax [15, 16], resolution enhancement in brain MRI images [17], single-molecule imaging [18], and image segmentation [19].

PST emulates the physics of photonic time stretch [20], a real-time measurement technology that has enabled the observation and detection of ultrafast, non-repetitive events such as optical rogue waves [21], optical fiber soliton explosions [22], and the birth of mode locking in lasers [23]. Further, by combining photonic time stretch technology with machine learning algorithms, world-record accuracy has been achieved for the classification of cancer cells in the blood stream [24, 25].

The photonic time stretch employs group-velocity dispersion (GVD) in an optical fiber to slow down an analog signal in time by propagating a modulated optical pulse through the time-stretch system, which is governed by the following equation:

E_o(z, t) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \tilde{E}_i(0, \omega)\, e^{j \beta_2 z \omega^2 / 2}\, e^{j \omega t}\, d\omega \qquad (1)

where β_2 is the GVD parameter, z is the propagation distance, and E_o(z, t) is the reshaped output pulse at distance z and time t. The response of the dispersive element in the time-stretch system can be approximated by a phase propagator \tilde{K}(\omega) = e^{j \beta_2 z \omega^2 / 2}, which leads to the definition of PST for a discrete 2D signal as follows:

\mathrm{PST}\{E_i(x, y)\} = \angle\, \mathrm{IFFT2}\left\{ \mathrm{FFT2}\{E_i(x, y)\} \cdot \tilde{K}(u, v) \right\} \qquad (2)

In the above equations, E_i(x, y) is the input image, FFT2 is the 2D Fast Fourier Transform, IFFT2 is the 2D Inverse Fast Fourier Transform, x and y are the spatial variables, and u and v are the spatial frequency variables. The function K̃(u, v) is called the warped phase kernel and is implemented in the frequency domain for image processing.
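As an illustration of Eq. (2), the sketch below applies a radially warped phase kernel in the frequency domain and returns the spatial phase. The specific kernel profile and parameter values are one common choice and are not meant to reproduce the reference implementation exactly.

```python
import numpy as np

def pst_phase(image, strength=0.48, warp=12.0):
    """Minimal sketch of Eq. (2): multiply the image spectrum by a warped
    phase kernel and return the phase of the inverse transform."""
    rows, cols = image.shape
    u = np.fft.fftfreq(rows)[:, None]      # vertical spatial frequencies
    v = np.fft.fftfreq(cols)[None, :]      # horizontal spatial frequencies
    r = warp * np.hypot(u, v)              # radial (polar) frequency, as used by PST
    # Symmetric, radially warped phase profile (illustrative choice)
    phase = r * np.arctan(r) - 0.5 * np.log1p(r ** 2)
    kernel = np.exp(-1j * strength * phase / phase.max())
    stretched = np.fft.ifft2(np.fft.fft2(image) * kernel)
    return np.angle(stretched)             # edges survive in the output phase after thresholding
```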

PST uses GVD to convert a real image into a complex quantity such that the spatial phase after the IFFT2 operation is a function of frequency. Upon thresholding, the high-frequency edges survive. The phase kernel for PST is designed by converting the 2D Cartesian frequencies u and v to polar coordinates, which results in a symmetric, isotropic phase kernel. However, because such a kernel has no orientation selectivity, the directional information inherent in two-dimensional digital images is lost in the features detected by PST. This motivates us to develop a more comprehensive approach that captures angular as well as spatial frequency information in a semantic fashion.

In this chapter, we introduce the Phase-stretch Adaptive Gradient-field Extractor (PAGE), a new physics-inspired feature engineering algorithm that computes a feature set comprising edges at different spatial frequencies, orientations, and scales. These filters metaphorically emulate the physics of birefringent (orientation-dependent) diffractive propagation through a physical medium with a specific diffractive property. In such a medium, the dielectric constant, and hence the refractive index, is a function of spatial frequency and of the polarization in the transverse plane. To understand this metaphoric analogy, consider an optical pulse with two orthogonal linear polarizations, Ẽ_x and Ẽ_y, propagating through a dispersive, diffractive medium such that

\tilde{E}_i(z, t) = \tilde{E}_x + \tilde{E}_y \qquad (3)

As the propagation constant β = 2πn/λ is a function of the (spatially varying) refractive index, the two orthogonal polarizations Ẽ_x and Ẽ_y will have different propagation constants and hence a phase difference at the output, given by the following equation:

\Delta\phi = \phi_x - \phi_y = \Delta\beta \, L = \frac{\omega_m}{c} (n_x - n_y) \, L \qquad (4)
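For clarity, Eq. (4) follows directly from evaluating the propagation constant β = 2πn/λ = ω_m n/c for each polarization over a length L:

\beta_x - \beta_y = \frac{\omega_m}{c}(n_x - n_y), \qquad \Delta\phi = (\beta_x - \beta_y)\,L = \frac{\omega_m}{c}(n_x - n_y)\,L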

By controlling the values of n_x and n_y, as well as the dependence of the refractive index on frequency, n_x(ω) and n_y(ω), we are able to extract a semantic hyper-dimensional feature set from a 2D image. We demonstrate with several visual examples later in this chapter that the resulting filter banks can be applied to image processing and computer vision applications such as detection of fabrication artifacts in semiconductor chips, development of clinical decision support systems, and recognition of optical characters or fingerprints. In particular, we show that PAGE features outperform conventional derivative operators as well as directional Gabor filter banks.

Further, we address the dual problem of spatial resolution and dynamic range limitations in an imaging system. In an ideal imaging system, the numerical aperture and the wavelength of the optical setup are the only factors that determine the spatial resolution offered by the modality. Under non-ideal conditions, however, the number of photons collected from a specimen controls its dynamic range (the ratio between the largest and the smallest value of a variable quantity), which in turn also limits the spatial resolution. This leads to the fundamental dual problem of spatial resolution and dynamic range limitations in an imaging modality [26].

Approaches to improving the resolution of an imaging system include wide-field fluorescence microscopy [27, 28], which offers better resolution than confocal fluorescence microscopy [29], and the use of multiple fluorophores [30, 31]. In addition, various image processing techniques, such as multi-scale analysis using wavelets [32, 33], have been proposed for improving the resolution after image acquisition while retaining important visual information. We show later in the chapter that we are able to alleviate this dual problem by incorporating into our algorithm a local adaptive contrast enhancement operator, also known as a Tone Mapping Operator (TMO), which leads to an excellent dynamic range.

The remaining steps of the proposed decomposition method are discussed at length in the next section. The organization of the chapter is as follows. In Section 2, we describe the details of the proposed decomposition method. Experimental results and conclusions are presented in Sections 3 and 4, respectively.


2. Mathematical framework

The different steps of our proposed decomposition method, the Phase-stretch Adaptive Gradient-field Extractor (PAGE), are shown in Figure 1. The first step is to apply an adaptive tone mapping operator (TMO) to enhance the local contrast. Next, we reduce the noise by applying a smoothing kernel in the frequency domain (this operation can also be done in the spatial domain). We then apply a spectral phase kernel that emulates birefringent and frequency-channelized diffractive propagation. The final step of PAGE is to apply thresholding and morphological operations to the generated feature vectors in the spatial domain to produce the final output. The PAGE output embeds the original image into a set of feature maps that select semantic information at different scales, orientations, and spatial frequencies. Figure 2 shows how PAGE embeds semantic information at different orientations for an X-ray image of a flower.

Figure 1.

Different steps of the phase-stretch adaptive gradient-field extractor (PAGE) algorithm. The pipeline starts with the application of tone mapping in the spatial domain. This is followed by a smoothing and a spectral phase operation in the frequency domain. The spectral phase operation is the main component of the PAGE algorithm. The generated hyper-dimensional feature vector is thresholded and post-processed by morphological operations. PAGE embeds the original image into a set of feature maps that select semantic information at different scales, orientations, and spatial frequencies.

Figure 2.

The phase-stretch gradient-field extractor (PAGE) feature map of an X-ray image. The original image is shown on the left (A). PAGE embeds the original image into a feature map that selects semantic information at different orientations as shown in (B). The orientation of the edges is encoded into various color values here.

The sequence of steps of our physics-inspired feature extraction method, PAGE, can be represented by the following equations. We first define the birefringent stretch operator S as follows:

E_o(x, y) = \mathrm{S}\{E_i(x, y)\} = \mathrm{IFFT2}\left\{ \tilde{K}(u, v, \theta) \cdot \tilde{L}(u, v) \cdot \mathrm{FFT2}\{\mathrm{TMO}\{E_i(x, y)\}\} \right\} \qquad (5)

where E_o(x, y) is a complex quantity defined as

E_o(x, y) = |E_o(x, y)|\, e^{j\varphi(x, y)} \qquad (6)

In the above equations, E_i(x, y) is the input image, x and y are the spatial variables, FFT2 is the two-dimensional Fast Fourier Transform, IFFT2 is the two-dimensional Inverse Fast Fourier Transform, TMO is a spatially adaptive Tone Mapping Operator, and u and v are the frequency variables. The function K̃(u, v, θ) is called the PAGE kernel and the function L̃(u, v) is a smoothing kernel, both implemented in the frequency domain. For all our simulations here, we consider L̃(u, v) to be a low-pass Gaussian filter whose cutoff frequency is determined by the standard deviation of the Gaussian (σ_LPF).

The PAGE operator P can then be defined as the phase of the output of the stretch operation S applied to the input image E_i(x, y):

\mathrm{P}\{E_i(x, y)\} = \angle\, \mathrm{S}\{E_i(x, y)\} \qquad (7)

where ∠ is the angle operator.
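A minimal NumPy sketch of Eqs. (5)-(7) is given below; the page_kernel, lpf_kernel, and tmo arguments stand in for the operators described in the following subsections and are assumptions of this sketch, not the reference implementation.

```python
import numpy as np

def page_phase(image, page_kernel, lpf_kernel, tmo):
    """Sketch of Eqs. (5)-(7): birefringent stretch S followed by the angle operator.
    `page_kernel` and `lpf_kernel` are frequency-domain arrays with the same shape
    as `image`; `tmo` is a callable implementing the tone mapping operator."""
    spectrum = np.fft.fft2(tmo(image))                              # FFT2{ TMO{E_i} }
    stretched = np.fft.ifft2(page_kernel * lpf_kernel * spectrum)   # Eq. (5)
    return np.angle(stretched)                                      # Eq. (7): P = angle of S{E_i}
```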

In the next subsections, we discuss each of the above mentioned kernels in detail and demonstrate the operation of each step using simulation results.

2.1 Tone mapping operator (TMO)

A tone mapping operator (TMO) is applied to enhance the local contrast in the input image E_i(x, y). Tone mapping is a standard image processing technique for addressing the limited contrast of an imaging system while preserving important details, and it thereby helps improve the effective dynamic range of the system through post-processing. While various TMOs have been developed for adaptive contrast enhancement, here we implement the TMO step by applying a Contrast Limited Adaptive Histogram Equalization (CLAHE) operator to the input image.

We operate on the input image with the TMO first, followed by the smoothing operator (low-pass filter), and not vice versa, for the following reason. Noise present in an image is mostly represented by the high-frequency components of the spectrum. These high-frequency components can be present at both low and high light levels in the spatial domain. Because the tone mapping operator over-emphasizes low-light-level features [34, 35], it also amplifies the image noise, particularly in low-light scenarios. By applying a smoothing filter after the TMO operation, we aim to remove these noise artifacts introduced by the contrast enhancement step. In the reverse order, any noise left after applying the smoothing kernel to the input image would be amplified by the subsequent TMO operation. Therefore, one may need to alternate between the smoothing step and the TMO before obtaining a final satisfactory result [36].
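As a rough sketch of this ordering, CLAHE can be applied first and a Gaussian low-pass filter second; the clip limit, tile size, and σ_LPF below are illustrative values, not the ones used for the figures in this chapter.

```python
import cv2
import numpy as np

def tmo_then_smooth(image_u8, sigma_lpf=0.1):
    """Apply CLAHE (local tone mapping) first, then a frequency-domain Gaussian
    low-pass filter, so that noise amplified by the contrast enhancement is removed."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(image_u8).astype(np.float64) / 255.0   # expects an 8-bit grayscale image

    rows, cols = enhanced.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    lpf = np.exp(-(u ** 2 + v ** 2) / (2.0 * sigma_lpf ** 2))     # Gaussian smoothing kernel L(u, v)
    return np.real(np.fft.ifft2(np.fft.fft2(enhanced) * lpf))
```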

2.2 Phase-stretch adaptive gradient-field extractor (PAGE) kernel

Phase-stretch adaptive gradient-field extractor (PAGE) filter banks are defined by the PAGE kernel K̃(u, v, θ) and are designed to compute semantic information from an image at different orientations and frequencies. The PAGE kernel K̃(u, v, θ) consists of a phase filter that is a function of the frequency variables u and v, and of a steerable angle variable θ, which controls the directionality of the response. We first define the rotated frequency variables u′ and v′ as

u' = u \cos\theta + v \sin\theta \qquad (8)
v' = -u \sin\theta + v \cos\theta \qquad (9)

such that the frequency vector rotates about the origin by θ:

u' + jv' = (u + jv)\, e^{-j\theta} \qquad (10)

We then define the PAGE kernel K̃(u, v, θ) as a function of the rotated frequency variables u′ and v′ and the steerable angle θ as follows:

\tilde{K}(u, v, \theta) = \tilde{K}(u', v') = \exp\!\left\{ j\, \phi_1(u')\, \phi_2(v') \right\} \qquad (11)

where

\phi_1(u') = S_u \, \frac{1}{\sigma_u \sqrt{2\pi}} \exp\!\left( -\frac{(|u'| - \mu_u)^2}{2 \sigma_u^2} \right) \qquad (12)
\phi_2(v') = S_v \, \frac{1}{|v'|\, \sigma_v \sqrt{2\pi}} \exp\!\left( -\frac{(\ln|v'| - \mu_v)^2}{2 \sigma_v^2} \right) \qquad (13)

Two points should be noted here. First, we use the modulus of the rotated frequency variables, |u′| and |v′|, so that the kernel is symmetric, as required for a proper phase operation as discussed in [12]. Second, for all the simulation examples here, when we consider a bank of PAGE filters, we first normalize ϕ_1(u′) and ϕ_2(v′) to the range (0, 1) over all values of θ and then multiply the filter banks by S_u and S_v, respectively, to ensure that the amplitude of each filter in the bank is the same.
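The kernel construction of Eqs. (8)-(13) can be sketched as follows; the frequency-grid convention and the small epsilon guarding the log-normal term at v′ = 0 are assumptions of this sketch.

```python
import numpy as np

def page_kernel(shape, theta, s_u, s_v, mu_u, mu_v, sigma_u, sigma_v):
    """Build the PAGE phase kernel K(u, v, theta) of Eqs. (8)-(13)."""
    rows, cols = shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]

    # Eqs. (8)-(9): rotate the frequency plane by the steerable angle theta,
    # then take the modulus so the kernel is symmetric
    u_p = np.abs(u * np.cos(theta) + v * np.sin(theta))
    v_p = np.abs(-u * np.sin(theta) + v * np.cos(theta))

    eps = 1e-12  # guard against log(0) and division by zero at v' = 0
    # Eq. (12): normal (Gaussian) filter along u'
    phi_1 = np.exp(-((u_p - mu_u) ** 2) / (2 * sigma_u ** 2)) / (sigma_u * np.sqrt(2 * np.pi))
    # Eq. (13): log-normal filter along v'
    phi_2 = np.exp(-((np.log(v_p + eps) - mu_v) ** 2) / (2 * sigma_v ** 2)) \
            / ((v_p + eps) * sigma_v * np.sqrt(2 * np.pi))

    # Normalize each profile to (0, 1), then scale by the filter strengths S_u and S_v
    phi_1 = s_u * phi_1 / phi_1.max()
    phi_2 = s_v * phi_2 / phi_2.max()
    return np.exp(1j * phi_1 * phi_2)  # Eq. (11)
```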

These filter banks can detect features at a particular frequency and/or in a particular direction. Therefore, by selecting a desired direction and/or frequency, a hyper-dimensional feature map can be constructed. Table 1 lists all the parameters that control the different functionalities of our proposed decomposition method PAGE.

Notation | Variable
u and v | Spatial frequency variables
θ | Steerable angle
u′ and v′ | Rotated (translated) spatial frequency variables
ϕ_1 | Normal filter
ϕ_2 | Log-normal filter
S_u | Strength of the ϕ_1 filter
S_v | Strength of the ϕ_2 filter
μ_u | Mean of the normal distribution for the ϕ_1 filter
μ_v | Mean of the log-normal distribution for the ϕ_2 filter
σ_u | Sigma of the normal distribution for the ϕ_1 filter
σ_v | Sigma of the log-normal distribution for the ϕ_2 filter
σ_LPF | Sigma of the Gaussian distribution for the L̃(u, v) smoothing kernel
Threshold_Min, Threshold_Max | Bi-level feature thresholding for morphological operations

Table 1.

Different parameters of our physics-inspired feature decomposition method PAGE.

The parameter values used for the simulation result in Figure 2 are: S_u = 3.4, S_v = 1.2, μ_u = 0, μ_v = 0.4, σ_u = 0.05, σ_v = 0.7, σ_LPF = 0.1, Threshold_Min = −1, and Threshold_Max = 0.0019. With a 1° angular resolution, the number of directional filters equals 180.
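As a usage illustration (not the authors' reference code), a 180-filter bank at 1° resolution with these parameter values could be assembled as follows, building on the page_kernel and page_phase sketches above; the flat TMO and all-pass smoothing kernel are placeholders that keep the example self-contained.

```python
import numpy as np

# Placeholder input and operators so the example runs on its own;
# in practice these come from the TMO and smoothing steps described earlier.
image = np.random.rand(256, 256)
identity_tmo = lambda x: x
allpass_lpf = np.ones(image.shape)

# One PAGE filter per degree of orientation (180 filters total)
thetas = np.deg2rad(np.arange(180))
responses = np.stack([
    np.abs(page_phase(image,
                      page_kernel(image.shape, t, 3.4, 1.2, 0.0, 0.4, 0.05, 0.7),
                      allpass_lpf, identity_tmo))
    for t in thetas
])
# Encode the dominant edge orientation at each pixel (cf. the color coding of Figure 2)
orientation_map = thetas[responses.argmax(axis=0)]
```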

Figure 3(A)-(P) shows the generated phase profiles ϕ_1(u′)·ϕ_2(v′) that select semantic information at different orientations and frequencies, as described by the PAGE kernels in Eqs. (10)-(13). These phase kernels are applied to the input image spectrum. Using the steerable angle, the directionality of the edge response can be controlled in the output phase of the transformed image. The detected output response for each directional filter is thresholded using a bi-level method. This is done to preserve both negative high-amplitude values and positive high-amplitude values.
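A minimal sketch of this post-processing step is shown below, interpreting Threshold_Min and Threshold_Max as lower and upper cutoffs on the output phase and using edge thinning and isolated-pixel removal as the morphological operations; the exact rules of the reference implementation may differ.

```python
import numpy as np
from skimage.morphology import thin, remove_small_objects

def postprocess(phase, thresh_min, thresh_max):
    """Bi-level thresholding followed by morphological clean-up:
    keep strongly negative and strongly positive phase values,
    remove isolated pixels, and thin the resulting edges."""
    edges = (phase < thresh_min) | (phase > thresh_max)
    edges = remove_small_objects(edges, min_size=2)   # drop isolated pixels
    return thin(edges)                                # edge thinning
```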

Figure 3.

Phase-stretch adaptive gradient-field extractor (PAGE) filter banks. (A)-(P) Phase filter banks as defined in Eqs. (8)-(13) for various frequencies and directions. The frequency variables u and v are normalized from −ω_u to +ω_u and from −ω_v to +ω_v, respectively. The center μ_v of the log-normal filter ϕ_2 is gradually increased to control the frequency distribution. The values of the steerable angle θ considered here are 0, π/4, π/2, and 3π/4.

2.2.1 Directionality

In order to detect features in a particular direction spread over all the frequency components in the spectrum, we construct the PAGE filter banks using Eqs. (9)-(13) for K̃(u, v, θ), ϕ_1(u′), and ϕ_2(v′), respectively. By controlling the value of the sigma σ_u of the normal distribution for the ϕ_1(u′) filter, we avoid any overlap between directional filters, as seen in Figure 4.

Figure 4.

Phase-stretch adaptive gradient-field extractor (PAGE) directional filter banks. (A)-(D) The directional filter banks of PAGE computed using the definitions in Eqs. (9)-(13) for steerable angles θ = 0, π/4, π/2, and 3π/4, respectively. By controlling the value of the sigma σ_u of the normal filter ϕ_1(u′), the angular spread of the kernel K̃(u, v, θ) can be controlled to avoid any overlap between directional filters.

We first evaluate the performance of these kernels by qualitatively comparing the feature detection of PAGE with that of PST. The image under analysis is a gray-scale image of a rose. For a better visual understanding of our method, we first compute orthogonal directional responses, as shown in Figure 5. We then show results of edge detection using PST and PAGE in Figure 6. The parameter values are S_u = 2.8, S_v = 0.5, μ_u = 0, μ_v = 0.4, σ_u = 0.05, σ_v = 0.7, σ_LPF = 0.1, Threshold_Min = −1, and Threshold_Max = 0.0019. With a 1° angular resolution, the number of directional filters equals 180. The morphological operations used for the result shown in Figure 6C include edge thinning and isolated-pixel removal for each directional response. As evident in Figure 6, edges are accurately extracted with our technique. The different colors in the computed edge response indicate the edge directionality.

Figure 5.

Phase-stretch adaptive gradient-field extractor (PAGE) directional filter bank responses. The original image is shown in (A). We design two directional PAGE filters here to detect vertical (θ = π/2) and horizontal (θ = 0) edges, as shown in (B) and (C), respectively.

Figure 6.

Comparison of feature detection using the phase stretch transform (PST) and the phase-stretch adaptive gradient-field extractor (PAGE). The original image is shown in (A). The output edge image obtained using PST, without the support of a directional response, is shown in (B). The edge map obtained using PAGE filter banks that support edge detection at all frequencies is shown in (C). Different color values are used to show the orientation of the edges.

2.2.2 Frequency selectivity

The PAGE filter banks can also be designed to detect edges at a particular frequency by controlling the spread of the log-normal distribution. To demonstrate this functionality, we show the features detected at low and high frequencies using the rose image as an example in Figure 7. As seen in the figure, the features detected at low frequencies are smoother and those at high frequencies are sharper.
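In terms of the kernel sketch above, this selectivity corresponds to shifting the center μ_v of the log-normal filter; the two values below are purely illustrative and are not the ones used for Figure 7.

```python
import numpy as np

# Frequency-selective PAGE kernels for a fixed orientation: a smaller (more negative)
# log-normal mean mu_v emphasizes low spatial frequencies, a larger one emphasizes
# high spatial frequencies. Values are illustrative.
low_freq_kernel = page_kernel((256, 256), theta=np.pi / 2, s_u=2.8, s_v=0.5,
                              mu_u=0.0, mu_v=-2.0, sigma_u=0.05, sigma_v=0.7)
high_freq_kernel = page_kernel((256, 256), theta=np.pi / 2, s_u=2.8, s_v=0.5,
                               mu_u=0.0, mu_v=0.4, sigma_u=0.05, sigma_v=0.7)
```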

Figure 7.

Feature detection using the phase-stretch adaptive gradient-field extractor (PAGE) at low and high frequencies: features detected at low frequencies are much smoother, whereas those at high frequencies are sharper. This demonstrates the frequency selectivity of feature detection using PAGE.


3. Discussion

3.1 Comparison to Gabor feature extractors

We demonstrate the effectiveness of our decomposition method by comparing it with the directional edge response obtained by applying Gabor filter banks to an optical character image. We design 24 directional Gabor filters and combine the responses of the individual filters to generate the image in Figure 8B. As seen in Figure 8C, PAGE gives better spatial localization of the edge response. By spatial localization, we mean that PAGE inherently has a sharper edge response, as seen in the figure. This is because, unlike the Gabor filters, whose bandwidth is determined by the sigma parameter of the filter, in PAGE the bandwidth of the response is determined by the input image dimension. Therefore, there is better localization of edges with PAGE. The parameter values are S_u = 2.8, S_v = 0.5, μ_u = 0, μ_v = 0.4, σ_u = 0.05, σ_v = 0.7, σ_LPF = 0.1, Threshold_Min = −1, and Threshold_Max = 0.0019. With a 1° angular resolution, the number of directional filters equals 180.
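For reference, a 24-orientation Gabor bank of the kind used for this comparison can be built with OpenCV as sketched below; the kernel size, wavelength, and other parameters are illustrative choices rather than the values used to produce Figure 8.

```python
import cv2
import numpy as np

def gabor_edge_map(image, n_orientations=24, ksize=31, sigma=4.0, lambd=10.0, gamma=0.5):
    """Combine the magnitude responses of a directional Gabor filter bank
    into a single edge map (maximum response over all orientations)."""
    img = image.astype(np.float32)
    responses = []
    for theta in np.linspace(0.0, np.pi, n_orientations, endpoint=False):
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi=0)
        responses.append(np.abs(cv2.filter2D(img, cv2.CV_32F, kernel)))
    return np.max(responses, axis=0)
```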

Figure 8.

Comparison to Gabor feature extractors: features detected using Gabor filters do not have inherent spatial feature localization. With PAGE, the features are sharper, as the bandwidth of the response is determined by the input image dimension.

3.2 Comparison to derivative feature extractors

To demonstrate the superiority of our decomposition method, we compare it with the edge response obtained by applying derivative-based operators to the test image shown in Figure 9A. The derivative-based response is computed using MATLAB's edge function with the Canny option and is shown in Figure 9B. As seen in Figure 9C, PAGE outperforms derivative-based operators by also providing orientation information and low-contrast details. The parameter values are S_u = 2.7, S_v = 0.5, μ_u = 0, μ_v = 0.4, σ_u = 0.05, σ_v = 0.7, σ_LPF = 0.1, Threshold_Min = −1, and Threshold_Max = 0.0019. With a 1° angular resolution, the number of directional filters equals 180.
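An equivalent derivative-based baseline in Python can be obtained with OpenCV's Canny detector; the image variable and hysteresis thresholds below are illustrative.

```python
import cv2

# Canny edge baseline corresponding to MATLAB's edge(I, 'canny');
# `image_u8` is an 8-bit grayscale image and the thresholds are illustrative.
edges = cv2.Canny(image_u8, threshold1=50, threshold2=150)
```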

Figure 9.

Comparison to derivative feature extractors: derivative-based edge operators compute directionality from the horizontal and vertical gradients and do not provide information about the spatial frequency of the edges. PAGE provides both orientation and spatial frequency selectivity in the output response.

3.3 Simulation results

We apply our decomposition method to different types of images to show that the directional edge response obtained by PAGE can be used for various computer vision applications. For example, in Figure 10, we show the application of PAGE to a scanning electron microscope (SEM) image of an integrated circuit chip. As seen, the PAGE feature response is able to capture the edges corresponding to the chip layout (even the low-contrast details). Based on the viewing angle (camera position), the layout edges should be rendered appropriately in the image as well as in the edge map. This can be used to identify chip artifacts introduced during the fabrication process. The parameter values for generating the feature map shown in Figure 10 are S_u = 3.1, S_v = 0.9, μ_u = 0, μ_v = 0.4, σ_u = 0.05, σ_v = 0.7, σ_LPF = 0.1, Threshold_Min = −1, and Threshold_Max = 0.0042. With a 1° angular resolution, the number of directional filters equals 180.

Figure 10.

Fabrication artifact detection using the phase-stretch adaptive gradient-field extractor (PAGE) on a scanning electron microscope (SEM) image of an integrated circuit chip. The original image is shown in (A). The output edge image obtained using PAGE filter banks that support edge detection at all frequencies is shown in (B). Different color values are used to show the orientation of the edges that correspond to the chip layout and can be used to detect fabrication artifacts.

We also apply PAGE to detect a directional edge response in an image of a fingerprint, as shown in Figure 11. Not only does PAGE detect a directional edge response, it also has an inherent equalization property that allows it to detect low-contrast edges. The parameter values are S_u = 1.5, S_v = 0.4, μ_u = 0, μ_v = 0.4, σ_u = 0.05, σ_v = 0.7, σ_LPF = 0.08, Threshold_Min = −1, and Threshold_Max = 0.0019. With a 1° angular resolution, the number of directional filters equals 180.

Figure 11.

Fingerprint feature map using the phase-stretch adaptive gradient-field extractor (PAGE). The original image is shown in (A). The output edge image obtained using PAGE filter banks that support edge detection at all frequencies is shown in (B). As the edges of the fingerprint rotate, the response value changes (shown here with different color values).

Next, we show the application of our decomposition method PAGE to extract the edges of vessels from a retinal image in Figure 12. The distribution of the vessels, based on the orientation of their edges, can be used as an important feature to detect abnormalities in the eye structure. As seen, the PAGE feature response is able to capture both the low-contrast details and the directionality of the vessel edges, which is encoded as a color value in RGB space. The parameter values are S_u = 2.2, S_v = 1.1, μ_u = 0, μ_v = 0.4, σ_u = 0.05, σ_v = 0.7, σ_LPF = 0.1, Threshold_Min = −1, and Threshold_Max = 0.0019. With a 1° angular resolution, the number of directional filters equals 180.

Figure 12.

Vessel detection using the phase-stretch adaptive gradient-field extractor (PAGE) on an image of a retina. The original image is shown in (A). The output edge image obtained using PAGE filter banks that support edge detection at all frequencies is shown in (B). Different color values are used to show the orientation of the edges. PAGE not only detects the low-contrast vessels but also extracts information on how the direction of blood flow changes across the eye, based on the vessel distribution.


4. Conclusions

In this chapter, we presented a new feature engineering method that takes inspiration from the physical phenomenon of birefringence in an optical system. The introduced method, called the Phase-stretch Adaptive Gradient-field Extractor (PAGE), controls the diffractive properties of the simulated medium as a function of orientation and channelized spatial frequency. When applied to 2D digital images, this method extracts semantic information from the input image at different orientations, scales, and frequencies and embeds this information into a hyper-dimensional feature map. The computed response is compared to other directional filters, such as Gabor filter banks, to demonstrate the superior performance of PAGE. Applications of the algorithm to edge detection and to the extraction of semantic information from medical images, electron microscopy images of semiconductor circuits, optical characters, and fingerprint images are also shown.


Acknowledgments

The authors would like to thank Dr. Ata Mahjoubfar for his helpful comments on this work during his post-doctoral studies in Jalali Lab at UCLA. This work was partially supported by the National Institutes of Health (NIH) Grant No. 5R21 GM107924-03 and the Office of Naval Research (ONR) Multi-disciplinary University Research Initiatives (MURI) program on Optical Computing.


Conflict of interest

The authors declare no conflict of interest.

References

  1. Perona P, Malik J. Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1990;12(7):629-639
  2. Weickert J. Anisotropic Diffusion in Image Processing. Stuttgart: Teubner; 1998
  3. Catté F, Lions PL, Morel JM, Coll T. Image selective smoothing and edge detection by nonlinear diffusion. SIAM Journal on Numerical Analysis. 1992;29(1):182-193
  4. Alvarez L, Lions PL, Morel JM. Image selective smoothing and edge detection by nonlinear diffusion. II. SIAM Journal on Numerical Analysis. 1992;29(3):845-866
  5. Nordstrom KN. Biased anisotropic diffusion: A unified regularization and diffusion approach to edge detection. Image and Vision Computing. 1990;8(4):318-327
  6. Zhao H, Lu M, Yao A, Guo Y, Chen Y, Zhang L. Physics inspired optimization on semantic transfer features: An alternative method for room layout estimation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017. pp. 10-18
  7. Yang S, Pan Z, Amert T, Wang K, Yu L, Berg T, et al. Physics-inspired garment recovery from a single-view image. ACM Transactions on Graphics (TOG). 2018;37(5):170
  8. Phan TQ, Shivakumara P, Tan CL. Detecting text in the real world. In: Proceedings of the 20th ACM International Conference on Multimedia. ACM; 2012. pp. 765-768
  9. Fadeyev V, Haber C. A novel application of high energy physics technology to the problem of audio preservation. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment. 2004;518(1-2):456-462
  10. Asghari MH, Jalali B. Edge detection in digital images using dispersive phase stretch transform. International Journal of Biomedical Imaging. 2015;2015:687819
  11. JalaliLabUCLA/Image-feature-detection-using-Phase-Stretch-Transform. Available from: https://github.com/JalaliLabUCLA/Image-feature-detection-using-Phase-Stretch-Transform/
  12. Suthar M, Asghari H, Jalali B. Feature enhancement in visually impaired images. IEEE Access. 2017;6:1407-1415
  13. Jalali B, Suthar M, Asghari M, Mahjoubfar A. Physics-based feature engineering. In: Optics, Photonics and Laser Technology. Cham: Springer; 2017. pp. 255-275
  14. Jalali B, Suthar M, Asghari M, Mahjoubfar A. Optics-inspired computing. In: Proceedings of the 5th International Conference on Photonics, Optics and Laser Technology, Vol. 1; 2017. pp. 340-345
  15. Suthar M, Mahjoubfar A, Seals K, Lee EW, Jalali B. Diagnostic tool for pneumothorax. In: 2016 IEEE Photonics Society Summer Topical Meeting Series (SUM). IEEE; 2016. pp. 218-219
  16. Suthar M. Decision support systems for radiologists based on phase stretch transform [Doctoral dissertation]. USA: UCLA; 2016. Available from: https://escholarship.org/uc/item/39p0h9jp
  17. He S, Jalali B. Medical image super-resolution using phase stretch anchored regression (Conference presentation). In: Optical Data Science II, Vol. 10937. International Society for Optics and Photonics; 2019. p. 109370E
  18. Ilovitsh T, Jalali B, Asghari MH, Zalevsky Z. Phase stretch transform for super-resolution localization microscopy. Biomedical Optics Express. 2016;7(10):4198-4209
  19. Ang RB, Nisar H, Khan MB, Tsai CY. Image segmentation of activated sludge phase contrast images using phase stretch transform. Microscopy. 2018;68(2):144-158
  20. Mahjoubfar A, Churkin DV, Barland S, Broderick N, Turitsyn SK, Jalali B. Time stretch and its applications. Nature Photonics. 2017;11(6):341
  21. Solli DR, Ropers C, Koonath P, Jalali B. Optical rogue waves. Nature. 2007;450(7172):1054
  22. Herink G, Kurtz F, Jalali B, Solli DR, Ropers C. Real-time spectral interferometry probes the internal dynamics of femtosecond soliton molecules. Science. 2017;356(6333):50-54
  23. Herink G, Jalali B, Ropers C, Solli DR. Resolving the build-up of femtosecond mode-locking with single-shot spectroscopy at 90 MHz frame rate. Nature Photonics. 2016;10(5):321
  24. Chen CL, Mahjoubfar A, Tai LC, Blaby IK, Huang A, Niazi KR, et al. Deep learning in label-free cell classification. Scientific Reports. 2016;6:21471
  25. Mahjoubfar A, Chen CL, Jalali B. Artificial Intelligence in Label-Free Microscopy. Springer; 2017
  26. Yasuma F, Mitsunaga T, Iso D, Nayar SK. Generalized assorted pixel camera: Postcapture control of resolution, dynamic range, and spectrum. IEEE Transactions on Image Processing. 2010;19(9):2241-2253
  27. Gustafsson MG. Nonlinear structured-illumination microscopy: Wide-field fluorescence imaging with theoretically unlimited resolution. Proceedings of the National Academy of Sciences. 2005;102(37):13081-13086
  28. Gustafsson MG, Shao L, Carlton PM, Wang CR, Golubovskaya IN, Cande WZ, et al. Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination. Biophysical Journal. 2008;94(12):4957-4970
  29. Hell S, Stelzer EH. Fundamental improvement of resolution with a 4Pi-confocal fluorescence microscope using two-photon excitation. Optics Communications. 1992;93(5-6):277-282
  30. Hess ST, Girirajan TP, Mason MD. Ultra-high resolution imaging by fluorescence photoactivation localization microscopy. Biophysical Journal. 2006;91(11):4258-4272
  31. Bates M, Huang B, Zhuang X. Super-resolution microscopy by nanoscale localization of photo-switchable fluorescent probes. Current Opinion in Chemical Biology. 2008;12(5):505-514
  32. Temizel A, Vlachos T. Wavelet domain image resolution enhancement using cycle-spinning. Electronics Letters. 2005;41(3):119-121
  33. Piao Y, Park H. Image resolution enhancement using inter-subband correlation in wavelet domain. In: 2007 IEEE International Conference on Image Processing, Vol. 1. IEEE; 2007. p. I-445
  34. Granados M, Aydin TO, Tena JR, Lalonde JF, Theobalt C. HDR image noise estimation for denoising tone mapped images. In: Proceedings of the 12th European Conference on Visual Media Production. ACM; 2015. p. 7
  35. Perry S. Image and video noise: An industry perspective. In: Denoising of Photographic Images and Video. Cham: Springer; 2018. pp. 207-234
  36. Milanfar P. A tour of modern image filtering: New insights and methods, both practical and theoretical. IEEE Signal Processing Magazine. 2012;30(1):106-128
