
Latest Advances in Single and Multiwavelength Digital Holography and Holographic Microscopy

Written By

George Nehmetallah, Logan Williams and Thanh Nguyen

Submitted: 09 September 2020 Reviewed: 08 October 2020 Published: 03 November 2020

DOI: 10.5772/intechopen.94382

From the Edited Volume

Augmented Reality and Its Application

Edited by Dragan Cvetković


Abstract

In this chapter, we discuss the latest advances in digital holography (DH) and digital holographic microscopy (DHM). Specifically, we study the different setup configurations, such as single- and multiwavelength approaches in reflection and transmission modes, and the reconstruction algorithms used. We also propose two novel telecentric recording configurations for single- and multi-wavelength digital holographic microscopy (TDHM and TMWDHM) systems. Brief theory and results are shown for each of the experimental setups discussed. The advantages and disadvantages of the different configurations are studied in detail. Typical features compared include ease of phase reconstruction, speed, vertical measurement range without phase ambiguity, and the difficulty of applying optical and numerical post-processing aberration compensation methods. Aberrations can be due to: (a) misalignment; (b) the multiwavelength method, resulting in chromatic aberration; (c) the microscope objective (MO), resulting in parabolic phase curvature; (d) the angle of the reference beam, resulting in linear phase distortion; and (e) the other optical components used in the setup, resulting in spherical aberration, astigmatism, coma, and distortion. We conclude that the telecentric configuration eliminates the need for extensive automatic digital aberration compensation, or for the phase of a second hologram to be subtracted to obtain the object phase map. We also conclude that without a telecentric setup, and even with post-processing, a residual phase remains that perturbs the measurement. Finally, a custom-developed, user-friendly graphical user interface (GUI) software is employed to automate the reconstruction processes for all configurations.

Keywords

  • digital holography
  • multi-wavelength digital holography

1. Introduction to digital holography (DH)

Digital holograms are generated by recording the interference pattern of two mutually coherent beams. These two beams are the object beam and the reference beam, and the recording medium is usually a CCD [1]. The digital hologram recorded on the CCD due to the interference of the object beam E_O and the reference beam E_R is given by

$h(x,y) = I_H = |E_R|^2 + |E_O|^2 + E_R^{*} E_O + E_R E_O^{*}$,   (1)

where the * notation denotes the complex conjugate. Traditionally, in analog holography the reconstruction is performed by illuminating the holographic film with the conjugate of the reference beam, E_R^*, and the real image is obtained from the last term of Eq. (1): |E_R|^2 E_O^*. The first two terms on the right-hand side and the third term contribute to the zero order and the virtual image, respectively. The digital reconstruction is generally performed by numerically propagating the field E_R(x,y) h(x,y) by the recording distance, d or −d, to reconstruct either the real or the virtual image. A typical schematic of the recording and reconstruction of DHs is shown in Figure 1. Several numerical reconstruction algorithms have been developed for DH, although the most common are the discrete Fresnel transform, the convolution approach, and reconstruction by the angular spectrum. Each of these reconstruction algorithms is briefly described below.

Figure 1.

Coordinate system for DH recording and reconstruction.

1.1 Numerical reconstruction by discrete Fresnel transformation

The Fresnel transform is based on the Fresnel approximation to the Huygens-Fresnel diffraction integral. Under the paraxial approximation, i.e., $d^3 \gg \frac{\pi}{4\lambda}\left[(\xi - x)^2 + (\eta - y)^2\right]^2_{\max}$, the reconstruction of the hologram can be approximated by the Fresnel transformation [1, 2, 3, 4, 5, 6]:

$\Gamma(\xi,\eta) = z(\xi,\eta)\,\mathcal{F}_{x,y}\{ h(x,y)\, E_R(x,y)\, w(x,y) \}\big|_{k_x = 2\pi\xi/\lambda d,\; k_y = 2\pi\eta/\lambda d}$,   (2)
$w(x,y) = \exp\!\left[ j\frac{\pi}{\lambda d}\left(x^2 + y^2\right) \right]$,   (3)
$z(\xi,\eta) = \frac{j}{\lambda d} \exp\!\left( j\frac{2\pi d}{\lambda} \right) \exp\!\left[ j\frac{\pi}{\lambda d}\left(\xi^2 + \eta^2\right) \right]$,   (4)

where $\mathcal{F}_{x,y}$ is the Fourier transform operator. The intensity is calculated as the squared magnitude of the optical field, i.e., I(ξ,η) = |Γ(ξ,η)|², and the phase is calculated using φ(ξ,η) = arctan[Im Γ(ξ,η)/Re Γ(ξ,η)]. Since x, y are discretized on a CCD rectangular raster of N_x × N_y pixels of sizes Δx, Δy, the reconstructed image resolution in the ξ, η coordinates is given by [5, 6, 7]

$\Delta\xi = \lambda d/(N_x \Delta x), \qquad \Delta\eta = \lambda d/(N_y \Delta y)$.   (5)

The image resolution given by Eq. (5) is considered to be “naturally scaled,” such that the value of Δξ is automatically equal to the physical resolution limit imposed by the bandwidth of the CCD-sampled signal [2, 6].
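To make the discrete Fresnel reconstruction of Eqs. (2)-(5) concrete, a minimal NumPy sketch is given below. The chapter's own reconstruction software is a MATLAB GUI; this Python sketch only mirrors the single-FFT algorithm, and the hologram data, distance, wavelength, and pixel pitch are placeholder assumptions.

```python
# Minimal single-FFT Fresnel reconstruction sketch (NumPy), following Eqs. (2)-(5).
# The hologram, distance, wavelength, and pixel pitch below are illustrative values.
import numpy as np

def fresnel_reconstruct(hologram, wavelength, d, dx, dy, E_R=1.0):
    """Reconstruct the complex field at distance d via the discrete Fresnel transform."""
    Ny, Nx = hologram.shape
    x = (np.arange(Nx) - Nx // 2) * dx          # hologram-plane coordinates
    y = (np.arange(Ny) - Ny // 2) * dy
    X, Y = np.meshgrid(x, y)
    # Chirp w(x, y) of Eq. (3) applied to the reference-multiplied hologram
    w = np.exp(1j * np.pi / (wavelength * d) * (X**2 + Y**2))
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(hologram * E_R * w)))
    # Reconstructed pixel pitch, Eq. (5); the multiplicative factor z(xi, eta) of Eq. (4)
    # only adds an overall phase/curvature and is often dropped for intensity imaging.
    d_xi, d_eta = wavelength * d / (Nx * dx), wavelength * d / (Ny * dy)
    return field, (d_xi, d_eta)

# Example usage with a synthetic 1024x1024 hologram (placeholder random data) and the
# parameters of Figure 3: d = 0.39 m, lambda = 496.5 nm, 6.7 um pixels.
holo = np.random.rand(1024, 1024)
gamma, (d_xi, d_eta) = fresnel_reconstruct(holo, 496.5e-9, 0.39, 6.7e-6, 6.7e-6)
intensity, phase = np.abs(gamma)**2, np.angle(gamma)
```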

A reflection-type Fresnel DH setup based upon the Mach-Zehnder interferometer is schematically shown in Figure 2(a). Light from a laser source is divided into two parts with a beam splitter. One of the beams forms the reference, while the other is reflected off the object; both then interfere on a CCD camera to form a Fresnel hologram. Figure 2(b) shows a Michelson-type setup [8, 9]. An example of a hologram of a Newport logo recorded using an argon laser at 496.5 nm and its reconstruction using the Fresnel transform method are shown in Figure 3(a) and (b), respectively.

Figure 2.

Schematic of a DH setup: (a) Mach-Zehnder setup, (b) Michelson setup. MO-SF: microscope objective-spatial filter, BS: beam splitter, NF: neutral density filter, M: mirror, CL: collimating lens.

Figure 3.

The recorded hologram in (a) is reconstructed via Eq. (2) to yield (b) the reconstructed image. Note that (b) contains an in-focus (virtual) image on the right, and an out-of-focus (real) image on the left. The relevant reconstruction parameters are d = 39 cm, λ = 496.5 nm, Δx = 6.7 μm, N = 1024, with reconstructed image resolution Δξ= 28.5 μm.

1.2 Numerical reconstruction by the convolution approach

Since the diffracted field at a distance z = d from the hologram can be expressed as

$\Gamma(\xi,\eta) = \iint h(x,y)\, E_R(x,y)\, g_{\mathrm{PSF}}(\xi - x, \eta - y)\, dx\, dy$,   (6)

the convolution approach can be written as:

$\Gamma(\xi,\eta) = \left[ h(\xi,\eta)\, E_R(\xi,\eta) \right] \otimes g_{\mathrm{PSF}}(\xi,\eta), \qquad g_{\mathrm{PSF}}(\xi,\eta) = \frac{j}{\lambda}\, \frac{\exp\!\left( j k_0 \sqrt{d^2 + \xi^2 + \eta^2} \right)}{\sqrt{d^2 + \xi^2 + \eta^2}}$,   (7)

where ⊗ denotes convolution. Eq. (7) can be written as

$\Gamma(\xi,\eta) = \mathcal{F}^{-1}_{x,y}\big\{ \mathcal{F}_{x,y}\{ h E_R \} \cdot \mathcal{F}_{x,y}\{ g_{\mathrm{PSF}} \} \big\} = \mathcal{F}^{-1}_{x,y}\big\{ \mathcal{F}_{x,y}\{ h E_R \} \cdot G_{\mathrm{PSF}} \big\}$,   (8)

where $G_{\mathrm{PSF}} = \mathcal{F}_{x,y}\{ g_{\mathrm{PSF}} \}$. Although the pixel sizes of the images reconstructed by the convolution approach are equal to those of the hologram, namely, Δξ = Δx and Δη = Δy, the physical image resolution remains according to Eq. (5) and is ultimately governed by physical diffraction [5, 6, 7].
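A corresponding sketch of the convolution (double-FFT) approach of Eqs. (6)-(8) follows, under the same placeholder assumptions as the Fresnel sketch above; it is an illustrative analogue, not the chapter's GUI implementation.

```python
# Sketch of the convolution (double-FFT) reconstruction of Eqs. (6)-(8).
# Parameters and inputs are illustrative placeholders.
import numpy as np

def convolution_reconstruct(hologram, wavelength, d, dx, dy, E_R=1.0):
    Ny, Nx = hologram.shape
    k0 = 2 * np.pi / wavelength
    x = (np.arange(Nx) - Nx // 2) * dx
    y = (np.arange(Ny) - Ny // 2) * dy
    X, Y = np.meshgrid(x, y)
    r = np.sqrt(d**2 + X**2 + Y**2)
    g_psf = (1j / wavelength) * np.exp(1j * k0 * r) / r     # impulse response of Eq. (7)
    G_PSF = np.fft.fft2(np.fft.ifftshift(g_psf))            # transfer function G_PSF
    H = np.fft.fft2(np.fft.ifftshift(hologram * E_R))
    # Eq. (8): inverse FFT of the product; the pixel pitch equals the hologram pitch.
    return np.fft.fftshift(np.fft.ifft2(H * G_PSF))
```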

1.3 Numerical reconstruction by the angular spectrum approach

In Fourier space, the various spatial Fourier components of the complex field distribution of a monochromatic wave across any plane can be considered as plane waves traveling in different directions away from that plane. The field amplitude at any other point can be calculated by adding the weighted contributions of these plane waves, taking into account the phase shifts they have undergone during propagation [1]. Similar to the convolution approach above, the angular spectrum approach is based on direct application of the propagation of the angular spectrum of the field in the hologram plane. Accordingly, we define the angular spectrum of the field h E_R at the hologram plane as [1]:

$\tilde{E}_h(k_\xi, k_\eta) = \mathcal{F}_{x,y}\{ h E_R \} = \frac{1}{4\pi^2} \iint h E_R\, \exp\!\left[ -j\left( k_\xi \xi + k_\eta \eta \right) \right] d\xi\, d\eta$,   (9)

where k_ξ, k_η are spatial frequency variables corresponding to ξ, η. After propagating a distance z, each plane wave component of the angular spectrum acquires an additional phase factor e^{j k_z z}, where

$k_z = \sqrt{ k_0^2 - k_\xi^2 - k_\eta^2 }$.   (10)

Therefore, the reconstructed field at a distance z = d becomes:

$\Gamma(\xi,\eta) = \frac{1}{4\pi^2} \iint \tilde{E}_h(k_\xi, k_\eta)\, \exp\!\left( j d \sqrt{ k_0^2 - k_\xi^2 - k_\eta^2 } \right) \exp\!\left[ j\left( k_\xi \xi + k_\eta \eta \right) \right] dk_\xi\, dk_\eta$,   (11)

which is similar to Eq. (7) above.
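A minimal angular-spectrum propagation sketch implementing Eqs. (9)-(11) follows; the parameter values are placeholders, and evanescent components (imaginary k_z) are simply suppressed.

```python
# Angular-spectrum propagation sketch per Eqs. (9)-(11); inputs are placeholders.
import numpy as np

def angular_spectrum_reconstruct(hologram, wavelength, d, dx, dy, E_R=1.0):
    Ny, Nx = hologram.shape
    k0 = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(Nx, d=dx)       # angular spatial frequencies
    ky = 2 * np.pi * np.fft.fftfreq(Ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    kz_sq = k0**2 - KX**2 - KY**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))            # Eq. (10), propagating waves only
    transfer = np.exp(1j * d * kz) * (kz_sq > 0)    # phase factor of Eq. (11)
    spectrum = np.fft.fft2(hologram * E_R)          # angular spectrum, Eq. (9)
    return np.fft.ifft2(spectrum * transfer)        # reconstructed field at z = d
```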

Table 1 shows the advantages and disadvantages of the different reconstruction techniques discussed in this section.

Technique: Fresnel
  Advantages:
  • Fast (uses one FFT).
  • Used primarily for long distances (short distances are possible with hologram upsampling prior to reconstruction).
  • May be used for larger objects.
  • Image resolution can be arbitrarily scaled by applying zero padding or upsampling to the hologram.
  Disadvantages:
  • Image pixel size depends on reconstruction distance and wavelength.
  • Poor depth resolution for isolating adjacent hologram planes along the propagation axis (compared to newer methods, e.g., compressive sensing).
  • Not useful for in-line holograms of scattering particles that have to be evaluated at different depths.

Technique: Convolution
  Advantages:
  • Limited numerical image magnification is possible during reconstruction.
  • Image pixel size does not depend on distance or wavelength and is equal to the hologram pixel size (physical resolution is still governed by the diffraction limit).
  • Useful for in-line holograms of scattering particles that have to be evaluated at different depths.
  Disadvantages:
  • Slower (uses at least two FFTs).
  • Used for small objects.
  • Used for short distances.
  • Numerical image magnification does not improve object “resolution.”

Technique: Angular spectrum
  Advantages:
  • Image pixel size is typically equal to the hologram pixel size (physical resolution is still governed by the diffraction limit).
  • May be used for very short distances where the Fresnel technique fails (no minimum distance required between object and CCD).
  • Typically used for in-line holograms.
  Disadvantages:
  • Slower (uses at least two FFTs).
  • Used for smaller objects that do not exceed the lateral extent of the CCD for in-line recording.
  • Zero-padding techniques do not alter resolution.

Table 1.

Advantages and disadvantages of several digital holography reconstruction techniques.


2. Digital holographic microscopy (DHM)

DHM is usually applied to determine 3D shapes of small objects, with height excursions on the order of microns (or phase excursions on the order of a few radians). Since small objects are involved, a microscope objective (MO) is often used to zoom onto a small area of the object to enhance the transverse resolution. Holograms of microscopic objects recorded with DHM setups can be numerically reconstructed in amplitude and phase using the same DH reconstruction techniques discussed in Section 1. The phase aberrations due to the MO and the tilt from the reference beam have to be corrected to obtain the topographic profile or the phase map of the object [10, 11, 12]. Figure 4(a) and (b) show a Michelson DHM in reflection and transmission configurations, respectively.

Figure 4.

Digital holographic microscope: (a) reflective setup, (b) transmissive setup.

For a reflective object on a reflective surface, the height profile on the sample surface is simply proportional to the reconstructed phase distribution φ(ξ,η), through [10]:

$h_z(\xi,\eta) = \frac{\lambda}{4\pi}\, \varphi(\xi,\eta)$.   (12)

For a transmissive phase object on a reflective surface, its thickness can be calculated as:

$h_z(\xi,\eta) = \frac{\lambda}{4\pi}\, \frac{\varphi(\xi,\eta)}{\Delta n}$,   (13)

where Δn is the difference of the index of refraction between the transparent object material and the surrounding medium (e.g. air).

For a transmissive phase object on a transmissive surface or between transmissive surfaces, the phase change (optical thickness) can be calculated as:

$h_z(\xi,\eta) = \frac{\lambda}{2\pi}\, \frac{\varphi(\xi,\eta)}{\Delta n}$.   (14)

As stated above, in DHM we introduce a MO to increase the spatial resolution, which was computed according to Eq. (5). Due to the magnification M introduced by the MO, the pixel size in the image plane, Δξ_mag, scales according to:

$\Delta\xi_{\mathrm{mag}} = \frac{\Delta\xi}{M} = \frac{\lambda d}{N\, \Delta x\, M}$,   (15)

which is simply the magnification predicted by geometric imaging. This is intuitively understood by realizing that the holographic recording is now simply a recording of the geometrically magnified virtual image located at distance d, as shown in Figure 5. Thus, the pixel resolution is automatically scaled accordingly. The transverse resolution can be enhanced approximately to the diffraction limit 0.61λ/N.A. of the MO, where N.A. is the numerical aperture of the MO.

Figure 5.

Generalized holographic recording geometry using a lens. The image location is governed by geometric optics and may be on either side of the lens.

The complete reconstruction algorithm is governed by the equation [9, 10, 11, 12, 13, 14]:

$\Gamma(m,n) = \underbrace{A\, e^{\,j\frac{\pi}{\lambda D}\left( m^2\Delta\xi^2 + n^2\Delta\eta^2 \right)}}_{\text{quadratic phase due to MO}}\; z(m,n) \times \mathcal{F}_{x,y}\Big\{ \underbrace{A_R\, e^{\,j\frac{2\pi}{\lambda}\left( \sin\theta_x\, k\Delta x + \sin\theta_y\, l\Delta y \right)}}_{E_R}\; h(k,l)\, w(k,l) \Big\}\Big|_{m,n}$,   (16)

where $z(m,n) = \frac{j}{\lambda d} \exp\!\left( j\frac{2\pi d}{\lambda} \right) \exp\!\left[ j\pi\lambda d \left( \frac{m^2}{N_x^2 \Delta x^2} + \frac{n^2}{N_y^2 \Delta y^2} \right) \right]$, $w(k,l) = \exp\!\left[ j\frac{\pi}{\lambda d}\left( k^2\Delta x^2 + l^2\Delta y^2 \right) \right]$, $\frac{1}{D} = \frac{1}{d_i}\left( 1 + \frac{d_o}{d_i} \right)$, and the focal length of the MO satisfies $\frac{1}{f} = \frac{1}{d_i} + \frac{1}{d_o}$.

Aberration compensation can be performed manually using a phase mask Ψ to cancel the effects of the quadratic phase due to the MO and the linear phase due to the reference tilt. The phase mask can be written as [11].

$\Psi(m,n) = A \exp\!\left[ j\frac{2\pi}{\lambda}\left( m\Delta x \sin\theta_x + n\Delta y \sin\theta_y \right) \right] \times \exp\!\left[ j\frac{\pi}{\lambda D}\left( m^2\Delta x^2 + n^2\Delta y^2 \right) \right]$,   (17)

where θ_x, θ_y are the tilt angles of the reference beam and D is defined after Eq. (16). A more robust technique is to perform automatic aberration cancelation by approximating the residual phase front due to aberration using Zernike polynomials, as explained in detail in Refs. [13, 14] and shown in Section 4 below.
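To illustrate the manual compensation, the sketch below builds the tilt-plus-quadratic phase mask of Eq. (17) and multiplies it into the hologram before reconstruction. The numerical values are the illustrative ones quoted later in this section, and in practice the sign of the exponents (mask versus its conjugate) is chosen so that the tilt and MO phases cancel.

```python
# Hypothetical construction of the phase mask of Eq. (17). D, the tilt angles,
# and the pixel pitch are illustrative values; the mask or its conjugate is
# applied so that the reference tilt and MO curvature are canceled.
import numpy as np

def phase_mask(shape, wavelength, D, dx, dy, sin_tx, sin_ty, A=1.0):
    Ny, Nx = shape
    m = (np.arange(Nx) - Nx // 2)[None, :]      # column (x) pixel indices
    n = (np.arange(Ny) - Ny // 2)[:, None]      # row (y) pixel indices
    tilt = np.exp(1j * 2 * np.pi / wavelength * (m * dx * sin_tx + n * dy * sin_ty))
    quad = np.exp(1j * np.pi / (wavelength * D) * ((m * dx)**2 + (n * dy)**2))
    return A * tilt * quad

# Example with the approximate parameters quoted below for the USAF target:
mask = phase_mask((1024, 1024), 488e-9, 0.14, 6.7e-6, 6.7e-6, 0.01307, 0.01305)
# The hologram is multiplied by the mask (or its conjugate) prior to reconstruction,
# e.g. fresnel_reconstruct(holo * np.conj(mask), 488e-9, 0.202, 6.7e-6, 6.7e-6).
```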

Consider the transmission setup shown in Figure 4(b). A 1951 USAF resolution chart is used as the object. The resolution of a USAF resolution chart is documented as:

$R_{\mathrm{lp/mm}} = 2^{\,G + (E-1)/6}$,   (18)

where R_lp/mm is the resolution in line pairs per millimeter, G is the group number, and E is the element number (see Figure 6(a)). As an example, Group 4, Elements 3 and 4 have resolutions of 20.16 lp/mm and 22.62 lp/mm, respectively. The wavelength used is λ = 488 nm, the reconstruction distance is d = 0.202 m, D = 0.14 m, the magnification is M ≈ 8.25, and k_x0/k_0 = sinθ_x = 0.01307, k_y0/k_0 = sinθ_y = 0.01305. It should be noted that in practice it is very difficult to obtain such precise parameter measurements in the laboratory. Typically, approximate measurements are made and then varied slightly during numerical reconstruction to yield the “best focus” image. Such a process was followed to obtain the parameters listed above. In Section 4, we discuss the telecentric setups, which mitigate these difficulties. Figure 6(b) shows the recorded hologram. Figure 6(c) shows the reconstructed hologram amplitude. Figure 6(d) shows the reconstructed phase using approximate phase mask parameters (note the circular fringes). Figure 6(e) shows the reconstructed phase using exact phase mask parameters (no circular fringes). Figure 6(f) shows the residual phase aberration approximated using Zernike polynomials, to be subtracted from (e).

Figure 6.

(a) Schematic of the USAF resolution target, (b) recorded hologram, (c) reconstructed hologram amplitude, (d) reconstructed phase using approximate phase mask parameters, (e) reconstructed phase using exact phase mask parameters, and (f) residual phase aberration approximation using Zernike polynomials to be subtracted from (e).


3. Multi-wavelength digital holography (MWDH)

It is well known that one main application of holographic interferometry (HI) is the generation of a fringe pattern corresponding to contours of constant elevation with respect to a reference plane [2]. These contour fringes can be used to determine the shape of a macroscopic or microscopic three-dimensional object.

There exist three main techniques to create holographic contour interferograms: (a) the two-illumination-point method; (b) the two-refractive-index technique, which is generally not practical because the refractive index of the medium in which the object is located must be changed; and (c) the multi-wavelength method, which is adopted in this section [6, 8, 11]. For large height profiles (larger than several microns), 2D topography using a single-wavelength holographic approach is not appropriate, since phase unwrapping has limitations, especially for sharp edge variations. As shown in Figure 7, the axial displacement of an image recorded with wavelength λ1 and reconstructed with another wavelength λ2, with respect to the image recorded and reconstructed with λ2, is [2, 6, 15, 16, 17, 18, 19, 20, 21, 22, 23]

Figure 7.

The path difference of the light rays on their way from the source to the surface and from the surface to the hologram.

$\Delta d_z = z\, \frac{\lambda_1 - \lambda_2}{\lambda_2}$.   (19)

This means that the phase shift depends on the distance z between the object and the hologram plane. The height jump between two adjacent fringes in the reconstructed image is

$\Delta H = z\big|_{\Delta\varphi = (n+1)\cdot 2\pi} - z\big|_{\Delta\varphi = n\cdot 2\pi} = \frac{\lambda_1 \lambda_2}{2\left( \lambda_1 - \lambda_2 \right)} = \frac{\Lambda}{2}$,   (20)

where Λ is known as the synthetic wavelength. For larger deformations along the z direction, the phase changes can be hundreds of multiples of 2π. Such large fringe densities may lead to difficulty in determining the object phase using the single-wavelength technique. However, multi-wavelength illumination encodes the object height in terms of 2π multiples of the synthetic wavelength, which is generally much longer than either fundamental wavelength. This allows larger object deformations to be measured with multi-wavelength illumination as if illuminated by the single-wavelength method, where the “single” wavelength is now given by the synthetic wavelength Λ. Typically, synthetic wavelengths range from a few microns to tens of microns [16, 18]. The topographic resolution is typically on the order of 1/100 of Λ, and the vertical measurement range can reach several Λ by employing phase unwrapping for heights larger than Λ [24]. However, much longer (or shorter) synthetic wavelengths can also be produced, for example to measure millimeter-scale features. Figure 8 shows the advantage of using MWDH to extend the vertical measurement range without phase ambiguity [6]. MWDH may be used to quantify surface topography and displacement for both fixed and time-varying objects [17, 18]. It is worth noting that DHM has many applications in living-cell imaging [25, 26, 27, 28, 29, 30], neuroscience [31], tissue analysis [32], particle tracking [33, 34, 35, 36], and MEMS analysis [37, 38, 39].

Figure 8.

The advantage of MWDH is that it extends the vertical measurement range without phase ambiguity.

In multiwavelength DH, both holograms are reconstructed separately at the correct fundamental wavelength, λ1 or λ2. From the resulting reconstructed complex amplitudes Γ_λ1(ξ,η) and Γ_λ2(ξ,η), the phases are calculated as:

$\varphi_{\lambda_{1,2}}(\xi,\eta) = \arctan\!\left[ \mathrm{Im}\, \Gamma_{\lambda_{1,2}}(\xi,\eta) \,/\, \mathrm{Re}\, \Gamma_{\lambda_{1,2}}(\xi,\eta) \right]$.   (21)

The synthetic wavelength phase image is now calculated directly by pixel-wise subtraction of the fundamental wavelength hologram phases

$\Delta\varphi = \begin{cases} \varphi_{\lambda_1} - \varphi_{\lambda_2} & \text{if } \varphi_{\lambda_1} \ge \varphi_{\lambda_2}, \\ \varphi_{\lambda_1} - \varphi_{\lambda_2} + 2\pi & \text{if } \varphi_{\lambda_1} < \varphi_{\lambda_2}. \end{cases}$   (22)

This phase map is equivalent to the phase distribution of a hologram recorded with the synthetic wavelength

$\Lambda = \frac{\lambda_1 \lambda_2}{\lambda_1 - \lambda_2}$.   (23)

At normal incidence, a phase jump corresponds to a height step of Λ/2, and the change in longitudinal distance or height Δz is given by [2, 6, 9].

$\Delta z = \frac{\Delta\varphi}{2\pi}\, \frac{\Lambda}{2} = \frac{\Delta\varphi}{2\pi}\, \frac{\lambda_1 \lambda_2}{2\left( \lambda_1 - \lambda_2 \right)} = \frac{\Delta\varphi}{2\pi}\, \Delta H$.   (24)
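A minimal numerical sketch of the two-wavelength phase subtraction of Eqs. (21)-(24) follows; the wavelength values are those used later in Section 3.2, and the reconstructed complex fields here are placeholders.

```python
# Two-wavelength phase subtraction sketch, Eqs. (21)-(24). The complex fields
# gamma1/gamma2 would come from two separate reconstructions; here they are placeholders.
import numpy as np

lam1, lam2 = 632.8e-9, 488.0e-9                    # fundamental wavelengths (assumed)
Lam = lam1 * lam2 / (lam1 - lam2)                  # synthetic wavelength, Eq. (23)

gamma1 = np.exp(1j * np.random.rand(256, 256))     # placeholder reconstructed fields
gamma2 = np.exp(1j * np.random.rand(256, 256))

phi1, phi2 = np.angle(gamma1), np.angle(gamma2)    # Eq. (21)
dphi = phi1 - phi2
dphi[dphi < 0] += 2 * np.pi                        # Eq. (22): wrap into [0, 2*pi)
dz = dphi / (2 * np.pi) * Lam / 2                  # height at normal incidence, Eq. (24)
print(f"Synthetic wavelength = {Lam*1e6:.2f} um")  # about 2.13 um for these wavelengths
```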

Note that the transverse resolution is the same as in DH; namely, Δξ = λd/(NΔx) is the reconstructed pixel size. Similar to DH, a MWDH setup can be constructed using a Mach-Zehnder or a Michelson configuration, as shown in Figure 9(a) and (b), respectively. According to Figure 9(c) and (d), the true height measurements in the Mach-Zehnder and Michelson configurations are:

Figure 9.

(a) Mach-Zehnder configuration, (b) Michelson configuration, with illustration of the true height ΔzTrue relative to the path of phase accumulation for (c) Mach-Zehnder and (d) Michelson configurations.

$\Delta z_{\text{True, Mach-Zehnder}} = \frac{\Delta\varphi}{2\pi}\, \frac{\Lambda}{2}\, \cos\theta$,   (25)
$\Delta z_{\text{True, Michelson}} \approx \frac{\Delta\varphi}{2\pi}\, \frac{\Lambda}{2}$,   (26)

respectively.

One important detail that must be considered when applying the two-wavelength technique is pixel matching. Recall from Eq. (5) that the pixel resolution Δξ of each hologram depends upon the fundamental recording wavelength (λ1 or λ2). In order for the reconstruction to be successful, the subtraction described by Eq. (22) must be performed on a pixel-by-pixel basis, in which the pixel sizes of the two holograms match (i.e., Δξ1 = Δξ2). This can be accomplished by zero-padding the holograms to alter the numerical resolution according to the following procedure: one hologram is zero-padded prior to reconstruction such that its value of Δξ matches that of the second hologram. The second hologram is then either zero-padded after reconstruction, or the first hologram (which is now larger) is cropped, such that the total sizes of the two images are again equal. If it is assumed that λ1 > λ2, then the degree of padding applied to both the λ1 hologram pre-reconstruction and the λ2 hologram post-reconstruction is

$\mathrm{pad\ size} = \mathrm{round}\!\left[ \frac{N}{2}\left( \frac{\lambda_1}{\lambda_2} - 1 \right) \right]$,   (27)

where pad size is the number of zero elements to be added symmetrically to each edge of the hologram matrix, rounded to the nearest integer value.
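A short sketch of this resolution-matching step (Eq. (27)) is shown below, assuming λ1 > λ2 and a square N × N hologram; the numerical values are illustrative.

```python
# Pixel-matching by symmetric zero-padding, Eq. (27); assumes lam1 > lam2 and a
# square N x N hologram. Values are illustrative.
import numpy as np

def pad_for_matching(holo1, lam1, lam2):
    """Zero-pad the lam1 hologram so its reconstructed pixel pitch matches lam2's."""
    N = holo1.shape[0]
    pad = int(round(N / 2 * (lam1 / lam2 - 1)))    # Eq. (27)
    return np.pad(holo1, pad), pad                 # pad symmetrically on all edges

holo1 = np.ones((1024, 1024))
padded, pad = pad_for_matching(holo1, 632.8e-9, 488.0e-9)
# pad = round(512 * (632.8/488 - 1)) = 152 pixels per edge -> 1328 x 1328 array
```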

3.1 Experimental results for MWDH with and without spatial heterodyning (MWDH-SH)

An example of a 3D profile setup using the MWDH technique is shown in Figure 10. Since the two holograms are recorded sequentially for each wavelength, this technique needs two sequential CCD recordings (i.e. two “shots”). Obviously, this “two-shot” method will not work for dynamic objects. Figure 11(a) shows the Newport logo test object while Figure 11(b) shows the reconstructed hologram of the Newport Logo at one of the wavelengths used (λ1 = 496.5 nm). Figure 11(c) shows the wrapped phase and Figure 11(d) shows the unwrapped 3D surface profile.

Figure 10.

Shape measurement using MWDH.

Figure 11.

(a) Newport logo, (b) reconstructed hologram at λ1 = 496.5 nm, (c) wrapped phase, and (d) unwrapped phase or 3D surface profile. The two wavelengths used are: λ1 = 496.5 nm and λ2 = 488 nm and the synthetic wavelength is Λ = 28.5 μm.

The spatial heterodyning technique has the ability to capture both wavelength measurements in a single composite holographic exposure [6, 9, 21]. This is accomplished by introducing a different angular tilt to the λ1 and λ2 reference beams. These angular tilts in the spatial domain introduce linear phase shifts in the frequency domain of the recorded composite hologram. When reconstructed, the different phase shifts result in spatially separated object locations in the image that correspond to the respective λ1 and λ2 recordings. One of these reconstructed images is cropped and digitally overlaid upon the other to perform the required phase subtraction. A typical recording configuration using the MWDH-SH method is shown in Figure 12. Since the two holograms are recorded for both wavelengths at the same time using spatial heterodyning, this technique needs only one CCD exposure (i.e., “one-shot”) [6, 9, 21, 22]. This method is well suited for dynamic objects which change relatively quickly and is limited only by the integration time of the CCD. Figure 13 shows the reconstructed Newport logo test object. The reconstructed image resolution is Δξ = 32 μm/pixel. Note that two separate reconstructions (λ1 and λ2) are required from the single hologram, although only one reconstruction is shown here.

Figure 12.

Lab setup (Michelson configuration) for macroscopic, spatial heterodyne MWDH using coaxial beams and a single spatial filter and collimation lens. The collimation lens should ideally be achromatic at the λ1 and λ2 wavelengths. M1, M2: Mirrors, BS: Beam splitter, PBS: Polarizing beam splitter. The polarizer, P0, ensures λ1 and λ2 maintain orthogonal polarization [6, 23].

Figure 13.

MWDH-SH hologram reconstruction.

In order to align the two phase images, a block matching algorithm (BMA) is used; a simplified sketch is given below. After cropping the two reconstructed holograms, it is necessary to slide the reference image over the target image looking for the best correlation, as shown in Figure 14. Given the typically rapid variation in object phase, BMA algorithms can only match to within ½ pixel. Hence, BMA matching will generally underperform the two-shot method. After aligning the images, the phase difference is calculated by phase subtraction, similar to the two-shot technique. The example shown in Figure 15 is for a synthetic wavelength Λ = 150 μm.
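The following is a simplified stand-in for the BMA step: a brute-force search for the integer-pixel shift that maximizes the correlation between the two reconstructed amplitude images. The ±8 pixel search range and the inputs are assumptions for illustration only.

```python
# Simplified block-matching sketch: exhaustive search for the integer-pixel shift
# that maximizes the correlation between two reconstructed amplitude images.
import numpy as np

def block_match(reference, target, max_shift=8):
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(target, dy, axis=0), dx, axis=1)
            score = np.sum(reference * shifted)       # correlation score
            if score > best:
                best, best_shift = score, (dx, dy)
    return best_shift                                 # (x_shift, y_shift) in pixels

ref = np.abs(np.random.rand(128, 128))                # placeholder amplitude images
tgt = np.roll(ref, -4, axis=0)                        # e.g. a 4-pixel shift in y
print(block_match(ref, tgt))                          # (0, 4): shift target back by +4 in y
```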

Figure 14.

BMA slides (a) reference image over (b) the target image, (c) correlation. The best correlation occurs at: Xshift: 0 and Yshift: −4 pixels.

Figure 15.

MWDH-SH phase reconstruction, (a) amplitude reconstruction, (b) phase difference, and (c) phase unwrapping for Λ= 150 μm.

An alternative method of matching the two images is to introduce a phase “tilt” to one or both holograms during reconstruction, which causes lateral shifts in the position of each image. This is typically referred to as introducing a phase mask, Ψ(m,n), during reconstruction, and in general the mask can take any form, although the most commonly used are tilt phases and lens phases. Proper selection of the phase mask (typically found via multiple iterations) can position one hologram reconstruction directly over the other, and phase subtraction may then be performed in a manner analogous to the “two-shot” method previously described, including appropriate resolution matching via zero-padding. Examples of the phase due to tilt and due to the MO are given by Eq. (17). The hologram matrix is simply multiplied by the phase mask, Ψ, prior to reconstruction. Although the phase mask method is typically more difficult to implement, requiring multiple iterations to arrive at the correct phase mask, it does not suffer from the inherent mismatch error of up to ½ pixel, as the BMA process does. However, the overlap accuracy now depends upon the accuracy of the modeled phase mask.

3.2 Experimental results for multi-wavelength DHM (MWDHM) with and without spatial heterodyning (MWDHM-SH)

In this section, we show an example of the MWDH technique using a microscopy setup similar to the single-wavelength DHM shown in Figure 4; the technique is thus abbreviated MWDHM. A series of micro-scale objects have been custom fabricated for this experiment, as shown in Figure 16(a) and (b). Figure 17(a) shows a MWDHM setup with achromatic optics that can be operated in either the “one-shot” or “two-shot” Michelson configuration. Figure 17(b) is a photograph of the object consisting of 4 bars of photoresist (see the element circled in red in Figure 16(a)), each 50 μm wide, on a silicon wafer substrate. Figure 17(c) is the intensity reconstruction of the λ1 hologram only. Figure 17(d) shows the wrapped phase difference between the λ1 and λ2 reconstructions. Figure 17(e) shows the unwrapped phase (Ref. [24]) with the residual MO quadratic phase curvature, and Figure 17(f) is the 3D topogram after removal of the MO phase using Eq. (17). The relevant reconstruction parameters are: d = 22.7 cm, λ1 = 632.8 nm, λ2 = 488.0 nm, Λ = 2.13 μm, Δx = 6.7 μm, N = 1024, and M = 2.75. Note that the phase rings are due to a slight mismatch in collimation between the λ1 and λ2 beams, which causes circularly symmetric phase beating since at least one wavefront is not well collimated. This situation often arises in physical lab setups in which both beams are coaxially aligned and filtered using the same pinhole prior to a single collimation lens. Chromatic dispersion will prevent both wavelengths from being collimated simultaneously, unless an achromatic lens is used.

Figure 16.

(a) Custom fabricated micro-scale objects and (b) 1951 USAF resolution chart with a ∼ 50 nm reflective molybdenum film sputtered on it.

Figure 17.

(a) MWDHM Michelson recording configuration. (b) Photograph of the object. (c) Amplitude reconstruction of the λ1 hologram only. (d) Wrapped phase difference between λ1 and λ2 reconstructions. (e) Unwrapped phase showing MO curvature. (f) the flattened topogram after removal of the curvature.

Here we show an example using the MWDHM technique with the microscopy setup of Figure 17(a), operated in the spatial heterodyne configuration (MWDHM-SH). The object is a set of 3 rectangular photoresist bars, each 75 μm wide, on a silicon wafer (see the element circled in black in Figure 16(a)). In this case, the object is simultaneously illuminated by the two wavelengths at normal incidence and only a single composite hologram is recorded by the CCD (i.e., “one-shot”). The single hologram is reconstructed twice, once at each fundamental wavelength, and the block matching algorithm is used to align the images prior to phase subtraction. Figure 18(a) shows the intensity reconstruction of the λ1 hologram only, with the region of interest circled; Figure 18(b) shows the wrapped phase difference between the λ1 and λ2 reconstructions after block matching and phase subtraction; Figure 18(c) shows the unwrapped phase with the residual MO quadratic phase curvature; and Figure 18(d) is the 3D topogram after removal of the MO phase and correction of phase errors. The relevant reconstruction parameters are: d = 23 cm, λ1 = 632.8 nm, λ2 = 488 nm, Λ = 2.13 μm, Δx = 6.7 μm, N = 1024, and M = 2.75.

Figure 18.

(a) The intensity reconstruction of the λ1 hologram only, with the region of interest circled. (b) the wrapped phase difference between λ1 and λ2 reconstructions after block matching and phase subtraction. (c) the unwrapped phase with the residual MO quadratic phase curvature, and (d) the 3D topogram after removal of the MO phase, and correction of phase errors.


4. Theoretical background of telecentric systems

In a conventional lens system, the magnification changes with object position, and the image suffers from distortion, perspective errors, resolution loss across the depth of field, and edge position uncertainty due to the object border lighting geometry. A telecentric system, such as the one shown in Figure 19, provides nearly constant magnification, virtually eliminates perspective angle error (an object with large depth will not appear tilted), and eliminates radial and tangential distortion. In a bi-telecentric system, both the entrance pupil (EP) and the exit pupil (XP) are located at infinity. Given that double telecentric systems are afocal, shifting either the image or the object does not affect the magnification.

Figure 19.

A double telecentric system.

As shown in Sections 2 and 3, traditional DHM systems record a digital hologram using a MO. The object phase recovered from digital reconstruction using the Fresnel transform suffers from a parabolic phase factor introduced by the MO. The phase of the MO is superposed over the object phase, often obscuring it. Also, the phase tilt introduced by the reference beam results in high-frequency linear fringes that also obscure the real phase of the object. Numerical techniques as well as optical configurations are usually employed to compensate for both the parabolic phase curvature and the phase tilt. One well-known technique, discussed in Section 2, is based on applying a phase mask during reconstruction, which requires knowledge of the setup parameters [13, 14, 40, 41, 42]. If the object parameters are unknown, a two-step method is used, in which the hologram of a flat reference surface is initially recorded and, upon reconstruction, is subtracted from that of the hologram of the real object [43]. In this section, we adopt two telecentric configurations, in reflection and transmission modes, to remove optically, instead of numerically, the phase curvature due to the MO [44, 45, 46]. This telecentric setup can be used in single-wavelength or multiwavelength DHM configurations. It is worth noting that while operating in the nontelecentric mode, a posteriori numerical methods will not eliminate the phase aberration completely, as it depends on the sample location in the field of view (FOV) [45].

In traditional DHM, the recorded wavefront on the CCD includes the interference of the reference wavefront and the total object wavefront. The total object phase consists of the defocused object phase on the image plane as well as the spherical (quadratic in paraxial approximation) phase due to propagation of the object wave from the image plane to the CCD. The object phase is expressed as [11, 46]:

$\varphi(x,y) = \frac{k}{2R}\left( x^2 + y^2 \right) + \varphi_{\mathrm{ob}}(x,y)$,   (28)

where R is the radius of curvature of the spherical wavefront.

Typical multiwavelength DHM setups using telecentric configurations in reflection and transmission modes are shown in Figure 20(a) and (b), respectively. In each setup, the telecentric system is formed by employing two achromatic lenses and an aperture stop, similar to Figure 19. The achromatic lenses are crucial to eliminate chromatic aberration due to the use of multi-wavelength illumination. The telecentric system is set in an afocal configuration, where the back focal plane of L1 (focal length f1) coincides with the front focal plane of L2 (focal length f2), with the object placed at the front focal plane of L1, resulting in the cancelation of the spherical phase curvature normally present in traditional DHM systems.

Figure 20.

Schematics of the TMWDHM setups: (a) reflection and (b) transmission configurations.

Hence, the 3D amplitude distribution in the image space will be a scaled, defocused replica of the 3D amplitude distribution in the object space due to the convolution with the PSF of the lens system. For each wavelength (λ1, λ2), the object wave recorded by the CCD can be expressed as [45]:

$O(x,y) = \frac{1}{M} \exp\!\left[ j\, 2 k_{1,2}\left( f_2 + f_1 \right) \right] \left[ O\!\left( \frac{x}{M}, \frac{y}{M} \right) \ast \tilde{P}\!\left( \frac{x}{\lambda_{1,2} f_2}, \frac{y}{\lambda_{1,2} f_2} \right) \right]$,   (29)

where O ∗ P̃ is the convolution of the complex amplitude scattered by the object with the PSF, (∗) is the convolution operator, and the magnification is M = f2/f1 [47].

4.1 Experimental results for single-wavelength telecentric DHM (TDHM)

Figure 21 shows a custom-developed, user-friendly graphical user interface (GUI) for the single-wavelength reflection telecentric DHM (TDHM) setup, similar to that shown in Figure 20(a). The target object is shown in Figure 16(b).

Figure 21.

A custom-designed GUI showing the TDHM in reflection configuration. The object is a reflective object on a reflective substrate (see Figure 16(b)).

The MATLAB GUI is connected to a Lumenera LU120M CCD camera using a USB cable. The GUI is equipped with all the parameters needed to adapt to different CCD pixel sizes, laser wavelengths, reconstruction distances, and reflection vs. transmission modes. In this example, the laser wavelength used is λ = 488 nm, the CCD pixel size is 5.2 μm, and the reconstruction distance is d = 20.2 cm. The reconstructed height is around 120 nm. It is worth noting that slight aberrations due to the optical components remain in the final computed phase. These can be automatically corrected by subtracting a Zernike-polynomial approximation of the residual background phase from the reconstructed phase, as shown in the GUI.
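The residual-phase correction can be illustrated with a least-squares surface fit. The sketch below fits only low-order terms (piston, tilt, and defocus, i.e., the lowest Zernike-like modes on a square grid) and subtracts the fitted surface; it is a simplified stand-in for the full Zernike procedure of Refs. [13, 14], and all inputs are placeholders.

```python
# Simplified residual-phase flattening: least-squares fit of piston, tilt, and
# defocus terms (low-order Zernike-like modes) to the unwrapped phase, then
# subtraction. A stand-in for the full Zernike procedure of Refs. [13, 14].
import numpy as np

def flatten_phase(phase):
    Ny, Nx = phase.shape
    y, x = np.mgrid[0:Ny, 0:Nx]
    x = (x - Nx / 2) / (Nx / 2)                    # normalize coordinates to [-1, 1]
    y = (y - Ny / 2) / (Ny / 2)
    # Basis: piston, x-tilt, y-tilt, defocus (2r^2 - 1)
    basis = np.stack([np.ones_like(x), x, y, 2 * (x**2 + y**2) - 1], axis=-1)
    A = basis.reshape(-1, 4)
    coeffs, *_ = np.linalg.lstsq(A, phase.ravel(), rcond=None)
    return phase - (A @ coeffs).reshape(Ny, Nx)    # residual-corrected phase

# Example with a synthetic tilted/curved background (placeholder data):
Ny, Nx = 256, 256
yy, xx = np.mgrid[0:Ny, 0:Nx]
synthetic = 0.01 * xx + 0.02 * yy + 1e-4 * ((xx - 128)**2 + (yy - 128)**2)
print(np.std(flatten_phase(synthetic)))            # close to zero after flattening
```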

The telecentric technique has significant advantages compared to a standard DHM system, since the reconstruction parameters in a standard DHM are hard to obtain and need to be measured precisely to obtain the 3D phase information.

4.2 Experimental results for telecentric multi-wavelength DHM (TMWDHM)

Figure 22 shows the GUI for the reflection configuration shown in Figure 20(a). The target in this experiment is a transmissive object (PMMA) on a reflective Si background (see the element circled in blue in Figure 16(a)). The laser wavelengths used are λ1 = 514.5 nm and λ2 = 488 nm. The synthetic wavelength is Λ = 9.6 μm and the CCD pixel size is 5.2 μm. The reconstruction distance is d = 20.2 cm. It is worth noting that a slight misalignment and/or chromatic aberration may result in one residual fringe remaining in the final computed phase. Although achromatic lenses were used, some chromatic aberration may remain, since the achromats are not perfect; that may be enough to cause the one remaining fringe of phase curvature. This can be automatically corrected by subtracting a Zernike-polynomial approximation of the residual background phase from the reconstructed phase, as shown in the GUI.

Figure 22.

A custom-designed GUI showing the TMWDHM in reflective configuration. The object is a transmissive object on a reflective substrate (see Figure 16(a)).


5. Simulation of coherent speckle on phase and intensity

Due to the use of coherent optical sources, the recorded holograms and reconstructed fields contain coherent speckle patterns, as seen in the inset in Figure 23(a). Speckle is produced by the coherent interference of a set of wavefronts. Mutual interference occurs when coherence is lost, where coherence is defined as the wavefront having constant phase at each frequency. A well-known mechanism for incoherence is optical roughness; when illuminated with monochromatic light, the reflected (or scattered) wave consists of contributions from many scattering points. Different scattering areas or small highlights on the object emit spherical wavelets which combine and interfere coherently, resulting in a complex interference pattern known as speckle (Refs. [48, 49, 50, 51, 52, 53, 54]). This speckle generation mechanism also applies to transmission (scattering) through an optically rough phase object.

Figure 23.

(a) Spatial frequency spectrum of a hologram recorded using an off-axis digital holographic setup. (b) A slice through constant spatial frequency.

Due to the variable phase shifts produced as the wavefront propagates through an optically rough object, the field leaving the object has a corrugated interference structure. In addition, the presence of an optical diffuser (which consists of small thickness variations) before the object in a transmissive configuration has the same effect as a rough surface in reflective imaging. In this section, we seek to demonstrate an accurate representation of speckle in transmissive imaging through nearly transparent samples, valid for biological imaging applications. To simulate speckle, we consider the complex phasor amplitude E_O(r) = E_O e^{j(k·r + φ(r))} given by a plane wave propagating through an object which induces a spatially dependent phase shift φ(r). A spatially dependent phase shift φ_rough(r) is introduced to the field at the object plane to account for optical roughness or a diffuser. The optical roughness can then be represented by a combination of phasors at each location given by A_rough(r) e^{jφ_rough(r)}, such that the object complex amplitude wave with the inclusion of optical roughness is given by

$E_O'(\mathbf{r}) = E_O(\mathbf{r})\, A_{\mathrm{rough}}(\mathbf{r})\, e^{j\varphi_{\mathrm{rough}}(\mathbf{r})} = E_O\, A_{\mathrm{rough}}(\mathbf{r})\, e^{j\left[ \mathbf{k}\cdot\mathbf{r} + \varphi(\mathbf{r}) + \varphi_{\mathrm{rough}}(\mathbf{r}) \right]}$,   (30)

where the total phase is the sum of the phase derived from the height profile, φ(r), and the phase introduced by optical roughness. The amplitude contribution of speckle, A_rough(r), is computed by integrating the absorption coefficient of the material along the optical path length of the roughness. We assume that the phase contributions from the phasors are statistically independent of one another, such that the phase induced by each surface patch is uniformly distributed over the interval (−φ_max, φ_max) (Ref. [49]). The maximum phase shift induced by optical roughness, φ_max, is derived from the maximum height deviation of the sample roughness. If the surface is rough relative to the optical wavelength, such that each phasor can produce phase shifts of many multiples of 2π, the phase shift induced by each surface patch is uniformly distributed over the interval (−π, π) (Ref. [49]). The numerical propagation of the complex field then captures the coherent interference of the spherical wavelets emitted from the optically rough surface as the wavefront propagates in space.
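A minimal sketch of this speckle model follows: a uniformly distributed roughness phase is applied at the object plane per Eq. (30) and the field is propagated numerically, here reusing the angular-spectrum routine sketched in Section 1.3. All parameter values are illustrative.

```python
# Minimal speckle-simulation sketch per Eq. (30): a rough-surface phase, uniform
# in (-pi, pi), multiplies the object field before numerical propagation.
# Parameters and the smooth object phase are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
N, dx, wavelength, d = 512, 5.2e-6, 632.8e-9, 0.05

phi_obj = np.zeros((N, N))                          # smooth object phase (placeholder)
phi_rough = rng.uniform(-np.pi, np.pi, (N, N))      # fully developed roughness phase
A_rough = np.ones((N, N))                           # unit roughness amplitude (no absorption)
field_obj = A_rough * np.exp(1j * (phi_obj + phi_rough))   # Eq. (30) at the object plane

# Propagate with the angular-spectrum routine sketched in Section 1.3
speckle_field = angular_spectrum_reconstruct(field_obj, wavelength, d, dx, dx)
# Re/Im ~ Gaussian, |field| ~ Rayleigh, intensity ~ negative exponential, phase ~ uniform
print(np.std(speckle_field.real), np.std(speckle_field.imag))
```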

As an example, we consider a USAF resolution target with a maximum thickness of 10 microns and random height deviations of 1 micron (10% of the total height and 1.6λ for red light) due to roughness, imaged through a telecentric holographic configuration with 3x magnification. Figure 24 shows probability density functions computed using the phase reconstruction of simulated speckle patterns; the real and imaginary components of the complex speckle field (a, b) are i.i.d. Gaussian random variables, such that the magnitude and intensity (c, d) are Rayleigh and negative-exponential (χ² with two degrees of freedom) distributed, respectively, and the phase (e) is uniform. The validity of the probability density functions in Figure 24 is well documented in the literature (Refs. [48, 49]).

Figure 24.

Probability density functions of reconstructed speckle patterns. The real and imaginary components of the complex speckle field (a, b) are Gaussian distributed, the magnitude and intensity (c, d) are Rayleigh and negative-exponential (χ² with two degrees of freedom) distributed, respectively, and the phase (e) is uniform.

In a typical experiment, speckle can be reduced using diversity in polarization, space, frequency, or time (Ref. [49]). One of the time-domain techniques is to rotate a diffuser or to use a liquid-crystal-based electronic speckle reducer (Ref. [55]). Another technique is to average multiple holograms or reconstructions recorded while varying the optical path length of the reference beam relative to the object beam (Ref. [56]). Figure 25 shows the reconstructed height profile averaged over an increasing number of phase reconstruction frames, where the initial roughness distributions are assumed to be statistically independent from frame to frame due to the varying optical path length difference between the object and reference beams. Figure 26 shows the standard deviation of the phase and height profile contributions of simulated speckle as a function of the number of averaged frames. As expected, the standard deviation decreases as 1/√N, where N is the number of averaged frames.

Figure 25.

Reconstructed height profile for a telecentric configuration with a magnification of M=3 averaged over 1 (a), 3 (b), 10 (c), and 30 (d) frames.

Figure 26.

Standard deviation of speckle phase and corresponding height profile as a function of number of frames averaged.

In this section, we have demonstrated that the distributions of the simulated speckle phase and intensity are consistent with theory and observations in the limit when the optical roughness is large relative to the optical wavelength. In addition, we have shown that the reduction of the speckle standard deviation with averaging behaves as expected. While we have demonstrated an accurate and robust numerical representation of optical speckle patterns in holographic imaging, we do not seek to address speckle mitigation techniques in detail. Our goal is to mimic experimentally recorded and reconstructed holograms for realistic machine learning training, not to mitigate speckle. In future work, we seek to explore the sensitivity of the speckle statistics to the roughness of the object relative to the optical wavelength.

Advertisement

6. Conclusion

In this chapter, we developed the theory and reconstruction algorithms and discussed the different experimental configurations for digital holography and digital holographic microscopy. We also showed typical experimental setups for single- and multiwavelength configurations. We concluded that single-wavelength setups are used for heights that do not exceed a few microns, while multiwavelength-based setups are used for heights that can reach hundreds of microns, depending on the synthetic wavelength used. We also discussed in detail the two-shot versus the one-shot MWDH setups. Although hologram reconstruction using the one-shot setup needs an extra digital correlation step, it is very well suited for dynamic objects which change relatively quickly. We also discussed briefly how Zernike polynomials are used to cancel the residual phase due to the different aberrations in the optical system. We also discussed the theory and experimental setups of novel reflection as well as transmission telecentric digital holographic microscopy configurations. The telecentric setup optically removes, without the need for any post-processing, the parabolic phase distortion caused by the microscope objective that is present in a traditional multi-wavelength digital holographic microscope. Without a telecentric setup, and even with post-processing, a residual phase remains that perturbs the measurement. The telecentric technique has a major advantage in that the reconstruction parameters, which are needed and hard to obtain in a standard DHM, do not need to be measured precisely to obtain the 3D phase information. Finally, a custom-developed, user-friendly GUI was employed to automate the recording and reconstruction processes.

References

  1. J. Goodman, Introduction to Fourier Optics (Roberts & Company, Englewood, 2005).
  2. U. Schnars and W. Jueptner, Digital Holography: Digital Hologram Recording, Numerical Reconstruction, and Related Techniques (Springer, Berlin, 2010).
  3. U. Schnars and W. Juptner, “Direct recording of holograms by a CCD target and numerical reconstruction,” Appl. Opt., 33, 179–181 (1994).
  4. L. P. Yaroslavskii and N. S. Merzlyakov, Methods of Digital Holography (Consultants Bureau, NY, 1980).
  5. U. Schnars and W. Juptner, “Digital recording and numerical reconstruction of holograms,” Meas. Sci. Technol., 13, R85–R101 (2002).
  6. G. Nehmetallah, R. Aylo, and L. Williams, Analog and Digital Holography With MATLAB® (SPIE Press, Bellingham, Washington, 2015).
  7. T. M. Kreis, M. Adams, and W. P. O. Juptner, “Methods of digital holography: a comparison,” Proc. SPIE 3098, 222–233 (1997).
  8. G. Nehmetallah and P. P. Banerjee, “Digital holographic interferometry and microscopy for 3D object visualization,” Frontiers in Opt., FTuF6 (2011).
  9. G. Nehmetallah and P. P. Banerjee, “Applications of digital and analog holography in 3D imaging,” Adv. Opt. Photon., 4, 472–553 (2012).
  10. E. Cuche, F. Bevilacqua, and C. Depeursinge, “Digital holography for quantitative phase-contrast imaging,” Opt. Lett., 24, 291–293 (1999).
  11. E. Cuche, P. Marquet, and C. Depeursinge, “Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms,” Appl. Opt., 38, 6994–7001 (1999).
  12. F. Charrière, J. Kühn, T. Colomb, F. Montfort, E. Cuche, Y. Emery, K. Weible, P. Marquet, and C. Depeursinge, “Characterization of microlenses by digital holographic microscopy,” Appl. Opt., 45, 829–835 (2006).
  13. T. Colomb, J. Kühn, F. Charrière, and C. Depeursinge, “Total aberrations compensation in digital holographic microscopy with a reference conjugated hologram,” Opt. Exp., 14, 4300–4306 (2006).
  14. T. Colomb, E. Cuche, F. Charrière, J. Kühn, N. Aspert, F. Montfort, P. Marquet, and C. Depeursinge, “Automatic procedure for aberration compensation in digital holographic microscopy and applications to specimen shape compensation,” Appl. Opt., 45, 851–863 (2006).
  15. J. C. Wyant, “Testing aspherics using two-wavelength holography,” Appl. Opt., 10, 2113–2118 (1971).
  16. D. Abdelsalam, R. Magnusson, and D. Kim, “Single-shot, dual-wavelength digital holography based on polarizing separation,” Appl. Opt., 50, 3360–3368 (2011).
  17. T. Kreis, Handbook of Holographic Interferometry (Wiley, Weinheim, 2005).
  18. C. Mann, P. Bingham, V. Paquit, and K. Tobin, “Quantitative phase imaging by three-wavelength digital holography,” Opt. Exp., 16, 9753–9764 (2008).
  19. Y. Morimoto, T. Matui, M. Fujigaki, and N. Kawagishi, “Subnanometer displacement measurement by averaging of phase difference in windowed digital holographic interferometry,” Opt. Eng., 46, 025603.1–025603.8 (2007).
  20. P. Hariharan, Optical Holography: Principles, Techniques, and Applications (Cambridge University Press, Cambridge, 1996).
  21. J. Haus, B. Dapore, N. Miller, P. Banerjee, G. Nehmetallah, P. Powers, and P. McManamon, “Instantaneously captured images using multiwavelength digital holography,” Interferometry XVI: Techniques and Analysis, Proc. SPIE 8493, 84930W (2012).
  22. J. Kuhn, T. Colomb, F. Montfort, F. Charrière, Y. Emery, E. Cuche, P. Marquet, and C. Depeursinge, “Real-time dual-wavelength digital holographic microscopy with a single hologram acquisition,” Opt. Exp., 15, 7231–7242 (2007).
  23. L. Williams, P. Banerjee, G. Nehmetallah, and S. Praharaj, “Holographic volume displacement calculations via multiwavelength digital holography,” Appl. Opt., 53, 1597–1603 (2014).
  24. J. Bioucas-Dias and G. Valadao, “Phase unwrapping via graph cuts,” IEEE Trans. Image Processing, 16, 698–709 (2007).
  25. M. Kim, “Principles and techniques of digital holographic microscopy,” SPIE Reviews, 1, 018005 (2010).
  26. B. Kemper, D. Carl, J. Schnekenburger, I. Bredebusch, M. Schäfer, W. Domschke, and G. von Bally, “Investigation of living pancreas tumor cells by digital holographic microscopy,” J. Biomed. Opt., 11(3), 034005 (2006).
  27. N. Pavillon, J. Kühn, C. Moratal, P. Jourdain, C. Depeursinge, P. J. Magistretti, and P. Marquet, “Early cell death detection with digital holographic microscopy,” PLoS ONE, 7(1), e30912 (2012).
  28. J. Kühn, E. Shaffer, J. Mena, B. Breton, J. Parent, B. Rappaz, M. Chambon, Y. Emery, P. Magistretti, C. Depeursinge, P. Marquet, and G. Turcatti, “Label-free cytotoxicity screening assay by digital holographic microscopy,” Assay Drug Dev. Technol., 11(2), 101–107 (2013).
  29. J. Kuhn, F. Montfort, T. Colomb, B. Rappaz, C. Moratal, N. Pavillon, P. Marquet, and C. Depeursinge, “Submicrometer tomography of cells by multiple-wavelength digital holographic microscopy in reflection,” Opt. Lett., 34(5), 653–655 (2009).
  30. B. Rappaz, E. Cano, T. Colomb, J. Kühn, C. Depeursinge, V. Simanis, P. J. Magistretti, and P. Marquet, “Noninvasive characterization of the fission yeast cell cycle by monitoring dry mass with digital holographic microscopy,” J. Biomed. Opt., 14(3), 034049 (2009).
  31. N. Pavillon, A. Benke, D. Boss, C. Moratal, J. Kühn, P. Jourdain, C. Depeursinge, P. J. Magistretti, and P. Marquet, “Cell morphology and intracellular ionic homeostasis explored with a multimodal approach combining epifluorescence and digital holographic microscopy,” J. Biophoton., 3(7), 432–436 (2010).
  32. K. Jeong, J. J. Turek, and D. D. Nolte, “Fourier-domain digital holographic optical coherence imaging of living tissue,” Appl. Opt., 46(22), 4999–5008 (2007).
  33. N. Warnasooriya, F. Joud, P. Bun, G. Tessier, M. Coppey-Moisan, P. Desbiolles, M. Atlan, M. Abboud, and M. Gross, “Imaging gold particles in living cell environments using heterodyne digital holographic microscopy,” Opt. Exp., 18(4), 3264–3273 (2010).
  34. J. Kühn, E. Shaffer, J. Mena, B. Breton, J. Parent, B. Rappaz, M. Chambon, Y. Emery, P. Magistretti, C. Depeursinge, P. Marquet, and G. Turcatti, “Label-free quantitative cell division monitoring of endothelial cells by digital holographic microscopy,” J. Biomed. Opt., 15(3), 036009 (2010).
  35. H. Sun, B. Song, H. Dong, B. Reid, M. A. Player, J. Watson, and M. Zhao, “Visualization of fast-moving cells in vivo using digital video microscopy,” J. Biomed. Opt., 13(1), 014007 (2008).
  36. C. J. Mann, L. Yu, and M. K. Kim, “Movies of cellular and sub-cellular motion by digital holographic microscopy,” Biomed. Eng. Online, 5(21), 1–10 (2006).
  37. Y. Emery, E. Solanas, N. Aspert, A. Michalska, J. Parent, and E. Cuche, “MEMS and MOEMS resonant frequencies analysis by digital holography microscopy (DHM),” Proc. SPIE 8614, 86140A (2013).
  38. A. Asundi, Digital Holography for MEMS and Microsystem Metrology (Wiley, Chichester, 2011).
  39. G. Coppola, S. De Nicola, P. Ferraro, A. Finizio, S. Grilli, M. Iodice, C. Magro, and G. Pierattini, “Characterization of MEMS structures by microscopic digital holography,” Proc. SPIE 4945, 71 (2003).
  40. T. Colomb, F. Montfort, J. Kühn, N. Aspert, E. Cuche, A. Marian, F. Charrière, S. Bourquin, P. Marquet, and C. Depeursinge, “Numerical parametric lens for shifting, magnification, and complete aberration compensation in digital holographic microscopy,” J. Opt. Soc. Am. A, 23, 3177–3190 (2006).
  41. F. Montfort, F. Charrière, T. Colomb, E. Cuche, P. Marquet, and C. Depeursinge, “Purely numerical compensation for microscope objective phase curvature in digital holographic microscopy: influence of digital phase mask position,” J. Opt. Soc. Am. A, 23, 2944–2953 (2006).
  42. Z. W. Zhou, Y. Yingjie, and A. Asundi, “Study on aberration suppressing methods in digital micro-holography,” Opt. Lasers Eng., 47, 264–270 (2009).
  43. G. Coppola, G. Di Caprio, M. Gioffré, R. Puglisi, D. Balduzzi, A. Galli, L. Miccio, M. Paturzo, S. Grilli, A. Finizio, and P. Ferraro, “Digital self-referencing quantitative phase microscopy by wavefront folding in holographic image reconstruction,” Opt. Lett., 35, 3390–3392 (2010).
  44. E. Sánchez-Ortiga, A. Doblas, M. Martínez-Corral, G. Saavedra, and J. Garcia-Sucerquia, “Aberration compensation for objective phase curvature in phase holographic microscopy: comment,” Opt. Lett., 39, 417 (2014).
  45. A. Doblas, E. Sánchez-Ortiga, M. Martínez-Corral, G. Saavedra, and J. Garcia-Sucerquia, “Accurate single-shot quantitative phase imaging of biological specimens with telecentric digital holographic microscopy,” J. Biomed. Opt., 19(4), 046022 (2014).
  46. E. Sánchez-Ortiga, P. Ferraro, M. Martínez-Corral, G. Saavedra, and A. Doblas, “Digital holographic microscopy with pure-optical spherical phase compensation,” J. Opt. Soc. Am. A, 28, 1410–1417 (2011).
  47. G. Nehmetallah, “Multi-wavelength digital holographic microscopy using a telecentric reflection configuration,” Topical Meeting in Digital Holography and Three-Dimensional Imaging (DH), DM3A.7, Shanghai, China, 24–28 May (2015).
  48. J. W. Goodman, Statistical Optics (John Wiley & Sons, New York, NY, 1985).
  49. J. W. Goodman, “Some fundamental properties of speckle,” J. Opt. Soc. Am., 66(11), 1145–1150 (1976).
  50. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications (Roberts & Co., 2006).
  51. H. Funamizu, J. Uozumi, and Y. Aizu, “Enhancement of spatial resolution in digital holographic microscopy using the spatial correlation properties of speckle patterns,” OSA Continuum, 2(6), 1822–1837 (2019).
  52. Y. Park, W. Choi, Z. Yaqoob, R. Dasari, K. Badizadegan, and M. S. Feld, “Speckle-field digital holographic microscopy,” Opt. Exp., 17(15), 12285–12292 (2009).
  53. J. Zheng, G. Pedrini, P. Gao, B. Yao, and W. Osten, “Autofocusing and resolution enhancement in digital holographic microscopy by using speckle-illumination,” J. Opt., 17(8), 085301 (2015).
  54. T. Baumbach, E. Kolenovic, V. Kebbel, and W. Jüptner, “Improvement of accuracy in digital holography by use of multiple holograms,” Appl. Opt., 45(24), 6077–6085 (2006).
  55. J. Boonruangkan, H. Farrokhi, and Y. J. Kim, “Rotational diffuser for speckle reduction in quantitative phase imaging,” 2017 Conference on Lasers and Electro-Optics Pacific Rim (CLEO-PR), Singapore, 2017, pp. 1–2, doi: 10.1109/CLEOPR.2017.8118645.
  56. H. Lin and P. Yu, “Speckle mechanism in holographic optical imaging,” Opt. Exp., 15(25), 16322–16327 (2007).
