Open access peer-reviewed chapter

Flow-Scanning Microfluidic Imaging

Written By

Nicolas Pégard, Chien-Hung Lu, Marton Toth, Monica Driscoll and Jason Fleischer

Submitted: 01 June 2015 Reviewed: 27 June 2016 Published: 23 November 2016

DOI: 10.5772/64707

From the Edited Volume

Advances in Microfluidics - New Applications in Biology, Energy, and Materials Sciences

Edited by Xiao-Ying Yu


Abstract

The advantages of microfluidics for the fast analysis of microscopic suspensions have led to the commercial development of flow cytometers. In this chapter, we propose new microscopy methods that combine controlled motion of micro-organisms in a laminar microfluidic flow, optics, and computation. We propose three new imaging modalities. We first introduce a flow-based version of structured illumination microscopy, where the necessary phase shifts are no longer obtained by controlled displacement of the illumination pattern but by flowing the sample itself. Then, we propose a three-dimensional (3D) deconvolution microscopy method with a microfluidic device for continuous acquisition of gradually defocused images. Finally, we introduce a microfluidic device for phase-space image acquisition, and computational methods for the reconstruction of either phase or intensity, in 3D. The imaging modalities we introduce all retain the benefits of fluid systems for noninvasive bioimaging. The proposed devices can easily be integrated on existing microscopes as a modified microscope slide or, with minor adjustments, on flow cytometers and aquatic imagers. Alternative on-chip implementations are also possible, with lens-free devices and near-field optical and microfluidic elements directly assembled on the surface of a CCD (charge-coupled device) or CMOS (complementary metal–oxide–semiconductor) chip.

Keywords

  • lab on-a-chip
  • microscopy
  • computational imaging
  • optofluidics
  • cytometry
  • structured illumination
  • deconvolution
  • light field
  • tomography

1. Introduction

1.1. Microfluidic structured illumination microscopy

The resolution of an optical imaging system is subject to the diffraction limit, which for a fixed wavelength is governed by the numerical aperture (NA) of the system. A popular technique to go beyond this limit is structured illumination (SI) [1‒7], in which a known illumination pattern (usually periodic) is projected onto a sample. Spectral beating of this pattern with the object modes folds high-resolution information into lower spatial frequencies (Moiré patterns) that can be detected by the imaging device. Deconvolving this photonic aliasing can improve resolution by a factor of two [1‒6]. Greater improvements are possible using nonlinearity [7] or with successive applications of structured illumination with higher spatial frequency patterns. Alternatively, structured illumination may be viewed as a type of diffraction, in which the signal is shifted in the spatial frequency domain by an amount equal to the wavevector of the applied grating (see Figure 1). This imaging technique, however, requires the acquisition of several raw images (at least three) with a series of precise displacements of the illumination pattern in order to remove phase ambiguity. Previous SI systems relied on mechanical moving parts (e.g., piezoelectric actuators) [4] or on a spatial light modulator (SLM) [8] to perform the shift. These methods add complexity to the imaging system and can significantly reduce the image acquisition speed. Further, mechanical movement is subject to vibration error and artifacts, while SLMs are limited by their pixel size.

Figure 1.

The principle of structured illumination microscopy is to increase imaging resolution beyond the limit of a microscope objective. An optical signal carries a given distribution of spatial frequencies (red curve), but the range of frequencies that can be collected is limited by the numerical aperture (NA) of the optical system (dashed blue line). Illuminating the sample with structured light that is patterned with a spatial frequency kg shifts the distribution and brings higher spatial frequencies back into the NA-limited imaging domain.

Separate from the structured illumination approach, fluidic imaging systems for improved resolution have been developed. Such systems, from microfluidic microscopes to aquatic imagers, have received renewed attention with the development of integrated optofluidic devices: lensless imagers that place flowing samples directly over a detector [9‒11]. Among their advantages is simple and low-cost object manipulation, with little or no sample preparation. In most devices, the flow is used only to provide object throughput (e.g., to measure gas kinetics [12], live cells [13], or two-phase flow [14]). However, the flow can be used as an additional degree of freedom for imaging. For example, recent work has used fluid transport as a scanning mechanism to enhance resolution, either by placing small holes before the detector [9] or by taking multiple frames with subpixel displacements [10]. In the former method, the small apertures limit the amount of light captured and greatly reduce the effective recording area. In the latter method, resolution is limited by the camera frame rate (vs. flow speed) and edge effects from pixels. In all cases, the illumination was kept as uniform as possible.

Here, we combine the two approaches: the illumination pattern is held steady, and the fluid flow provides the scanning that shifts the phase of the pattern with respect to the object. From an imaging perspective, the instantaneous wavenumber (k-space) shift gives improved spatial resolution at greater speeds than subpixel methods, with better use of the camera's dynamic range. The combined scheme thus retains all the benefits of fluidics, including high sample throughput and object sorting [15], while enabling easy integration with existing microscopes, flow cytometers, and aquatic imaging systems.

1.2. Reconstruction algorithm for flow-based structured illumination

In standard applications of SI, the illumination is simply a periodic pattern that is displaced (phase-shifted) by a convenient amount, such as a quarter of the pattern period (π/2 in phase). In a fluidic system, however, the phase shift between frames depends on the flow velocity and the camera frame rate. For that reason, we first consider the problem of an arbitrary phase shift of the illumination pattern between two consecutive frames. Let A be the object to be imaged and P be the point-spread function. The pth recorded image is given by

$$I_p(x) = \int P(u)\,A(x+u)\,M(x+u-p\delta)\,du \tag{1}$$

where we allow for a non-ideal displacement, δ, due to fluid flow, and where M is the structured illumination pattern. To generate a fixed illumination pattern, we consider the 0th and ±1st diffraction orders of a grating, so that the structured illumination pattern is

$$M(x) = \left| g + \tfrac{1}{2}e^{i\beta x} + \tfrac{1}{2}e^{-i\beta x} \right|^2 = \left( g^2 + \tfrac{1}{2} \right) + 2g\cos(\beta x) + \tfrac{1}{2}\cos(2\beta x)$$

where g is the ratio of the diffracted amplitudes between the 0th- and ±1st-order diffraction. Here, we neglect the last term, assuming that the resolving power of the imaging system is not sufficient to detect the Moiré patterns caused by the doubled frequency 2β. In the following derivation, we use the normalized form $M(x) = U + 2\cos(\beta x)$, where $U = g + \tfrac{1}{2g}$.

Fourier transforming the convolution product in Eq. (1) gives

$$\tilde{X}_p(k) = \tilde{P}(k)\left[ U\tilde{A}(k) + \gamma^{p}\tilde{A}_{\beta}(k) + \gamma^{-p}\tilde{A}_{-\beta}(k) \right] \tag{2}$$

where $\tilde{X}_p(k) = e^{ip\delta k}\,\tilde{I}_p(k)$, $\gamma = e^{i\delta\beta}$, and $\tilde{A}_{\pm\beta}(k) = \tilde{A}(k \mp \beta)$. For simplicity, we use the optical transfer function $\tilde{P}(k) = 1$ for $|k| < k_0$ and $\tilde{P}(k) = 0$ elsewhere, where $k_0$ is the maximum spatial frequency that can be resolved. Using three consecutive frames, we deduce

$$\begin{bmatrix} \tilde{X}_{p+1} - \tilde{X}_p \\ \tilde{X}_p - \tilde{X}_{p-1} \end{bmatrix} = \tilde{P}(k)\,(\gamma - 1) \begin{bmatrix} \gamma^{p} & -\gamma^{-p-1} \\ \gamma^{p-1} & -\gamma^{-p} \end{bmatrix} \begin{bmatrix} \tilde{A}_{\beta}(k) \\ \tilde{A}_{-\beta}(k) \end{bmatrix} \tag{3}$$

which can be inverted to reconstruct the object field with extended resolution, $A_E$:

$$A_E(x) = \mathcal{M} \cdot \begin{bmatrix} I_{p+1}(x+\delta) \\ I_p(x) \\ I_{p-1}(x-\delta) \end{bmatrix} \tag{4}$$

where

$$\mathcal{M} = \frac{\gamma}{(\gamma-1)^2} \begin{bmatrix} \dfrac{\gamma^{1-p}}{\gamma+1}e^{-i\beta x} + \dfrac{\gamma^{p}}{\gamma+1}e^{i\beta x} - \dfrac{1}{U} \\[2ex] -\gamma^{-p}e^{-i\beta x} - \gamma^{p}e^{i\beta x} + \dfrac{\gamma^{2}+1}{\gamma U} \\[2ex] \dfrac{\gamma^{-p}}{\gamma+1}e^{-i\beta x} + \dfrac{\gamma^{1+p}}{\gamma+1}e^{i\beta x} - \dfrac{1}{U} \end{bmatrix}^{t} \tag{5}$$

In Section 1.4, we provide a detailed derivation of this result.
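To make Eqs. (4) and (5) concrete, the following minimal sketch implements the three-frame reconstruction in Python/NumPy, taking the middle frame as p = 0 (in that frame, the γ^±p factors are absorbed by the coordinate realignment). The function name, the 1D geometry, and the linear-interpolation resampling are our own illustrative choices, not part of the original experiment.

```python
import numpy as np

def si_reconstruct(I_prev, I_mid, I_next, x, delta, beta, U):
    """Flow-scanning SI reconstruction from three consecutive frames,
    Eqs. (4)-(5), taking the middle frame as p = 0.

    x     : 1D object-plane coordinate grid (same units as delta)
    delta : object displacement between consecutive frames
    beta  : wavevector of the illumination pattern
    U     : DC offset of the mask M(x) = U + 2 cos(beta x)
    Assumes delta * beta is not a multiple of pi (gamma != +/-1).
    """
    g = np.exp(1j * delta * beta)              # gamma = e^{i delta beta}
    pref = g / (g - 1.0) ** 2                  # common prefactor of Eq. (5)
    em, ep = np.exp(-1j * beta * x), np.exp(1j * beta * x)
    # Realign the p = +/-1 frames onto the p = 0 object position:
    # I_{p+1}(x + delta) and I_{p-1}(x - delta), by linear interpolation.
    J_next = np.interp(x + delta, x, I_next)
    J_prev = np.interp(x - delta, x, I_prev)
    # The three components of the vector in Eq. (5), evaluated at p = 0.
    m_next = pref * (g / (g + 1) * em + 1.0 / (g + 1) * ep - 1.0 / U)
    m_mid  = pref * (-em - ep + (g * g + 1) / (g * U))
    m_prev = pref * (1.0 / (g + 1) * em + g / (g + 1) * ep - 1.0 / U)
    # Eq. (4): dot product of the mask vector with the frame triplet.
    return np.real(m_next * J_next + m_mid * I_mid + m_prev * J_prev)
```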

Figure 2.

A 500 µm wide, 50 µm deep fluidic channel is at the focal plane of a 20× objective lens. The imaging system has low numerical aperture (NA = 0.1). The structured illumination source is a steady sinusoidal profile (2.78 µm stripes) orthogonal to the flow direction. Images are recorded on a CCD camera at a frame rate of 15 frames/s (fps).

1.3. Experimental setup

The experimental setup is shown in Figure 2. The microfluidic channel is a 500 µm wide, 50 µm deep groove etched on a glass slide. To generate the structured light, a 532 nm continuous laser is patterned using a transmission grating and then demagnified to reduce the fringe spacing. This light illuminates the channel with a steady sinusoidal pattern: 2.78 µm stripes oriented orthogonal to the fluid flow direction. While the SI technique will work for any imaging system, including lensless ones, we place the channel at the focal plane of an optical microscope. As a compromise between magnification and field of view (suitable, e.g., for water analysis), we use a 20× objective. The objective is part of a 4f imaging configuration with an aperture located at the confocal plane. The resulting value of the numerical aperture (NA = 0.1) corresponds to a resolution limit of approximately 4 µm.

As a proof of principle, we flow a suspension of yeast particles in glycerol through the microfluidic channel. Multiple images are recorded by a CCD camera (pixel size 9.9 µm) at a constant frame rate (15 fps). Figure 3(a)-(c) shows three consecutive frames. It is clear that different features of the object are revealed as it flows past the stationary illumination pattern.

Figure 3.

Three consecutive frames of two yeast particles under structured illumination. The constant phase shift (δφ = 0.28π) of the illumination pattern between consecutive frames shows the evolution of the recorded Moiré pattern.

For numerical reconstruction, we first assume that particles flowing along the center of the microfluidic channel undergo a negligible amount of rotation over the span of three consecutive frames (Poiseuille-type laminar flow with negligible on-axis velocity shear). The displacement of the object between two consecutive frames, δ, is then determined by minimizing the squared-difference function (equivalently, maximizing the cross-correlation):

$$C(u) = \int \left( I_1(x) - I_2(x-u) \right)^2 dx \tag{6}$$

For the conditions here, we obtain δ = 0.8 µm, which corresponds to a flow velocity of 12 µm/s and a phase shift of δφ = 0.28π between consecutive frames. We note that when the spatial frequency of the illumination pattern satisfies |β| < 2k₀, it is necessary to consider the overlap among the modes Ã(k), Ã(k−β), and Ã(k+β). We average the reconstruction over all overlapping regions to evaluate the corresponding spatial frequencies.
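The displacement search of Eq. (6) can be written as a brute-force scan over integer pixel shifts; a hedged sketch (assuming 2D frames with the flow along the second array axis) is given below. Sub-pixel accuracy, needed to resolve shifts such as δ = 0.8 µm with ≈0.5 µm object-plane pixels, would require interpolating the cost function around its minimum.

```python
import numpy as np

def estimate_displacement(I1, I2, pixel_size, max_shift=50):
    """Estimate the inter-frame displacement by minimizing the squared
    difference C(u) of Eq. (6) over integer pixel shifts u along the
    flow axis (equivalently, maximizing the cross-correlation)."""
    shifts = np.arange(-max_shift, max_shift + 1)
    # np.roll(I2, u, axis=1)[..., x] == I2[..., x - u], matching I2(x - u)
    costs = [np.sum((I1 - np.roll(I2, u, axis=1)) ** 2) for u in shifts]
    return shifts[int(np.argmin(costs))] * pixel_size
```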

Figure 4.

Flow-scanning structured illumination reconstruction of yeast cells. (a) Reference image with uniform illumination (without SI). (b) Fourier spectrum of (a), log scale. (c) Reconstructed yeast particles using SI, calculated with the three consecutive images of Figure 3(a)‒(c) and Eq. (4). (d) Fourier spectrum of (c), log scale. (e and f) Line profiles of the cross-sections shown in (a and c) and (b and d), respectively.

Experimental results are shown in Figure 4. The yeast image without structured illumination is shown in Figure 4(a), and the numerical reconstruction of the yeast particles using Eq. (4) is shown in Figure 4(c). It is clear that SI provides greater visibility and reveals more details than the uniform illumination image, even for a 1D illumination pattern. The amount of improvement can be quantified using the visibility V = (Imax − Imin)/(Imax + Imin), where V = 0.15 corresponds to the Rayleigh resolution criterion. Figure 4(e) shows cross-sections of the intensity along the line connecting the two particles. For uniform illumination, the image of the left yeast particle is well below the Rayleigh limit (Vleft = 0.07), while the right particle is barely resolved (Vright = 0.18). In contrast, the respective visibilities increase to 0.48 and 0.50 with structured illumination. The results shown in Figure 4(e) indicate that the left and right yeast particles are approximately 2 and 3 µm in diameter, respectively, with a center-of-mass separation of 4 µm. These measurements would not be possible using only uniform illumination, which is limited to the bare system resolution (4 µm). This ability to discriminate using a single criterion, such as the Rayleigh criterion, is necessary for many applications, e.g., automated identification and classification.
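The visibility metric can be evaluated directly on a measured line profile. A small helper of our own (using SciPy's peak finder, and adopting the convention that Imax is the weaker of the two maxima and Imin the dip between them) might look as follows.

```python
import numpy as np
from scipy.signal import find_peaks

def visibility(profile):
    """Two-particle visibility V = (Imax - Imin) / (Imax + Imin) along a
    line profile; V = 0.15 corresponds to the Rayleigh criterion."""
    peaks, _ = find_peaks(profile)
    if len(peaks) < 2:
        return 0.0                        # no dip: the pair is unresolved
    p1, p2 = sorted(peaks[np.argsort(profile[peaks])[-2:]])
    i_max = min(profile[p1], profile[p2])   # weaker of the two maxima
    i_min = profile[p1:p2 + 1].min()        # dip between the maxima
    return (i_max - i_min) / (i_max + i_min)
```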

Another metric of improvement follows from a measurement of the spatial frequency spectrum. Figure 4(b) and (d) shows the magnitudes of the Fourier transforms of Figure 4(a) and (c), respectively (in log scale). Compared to the uniform illumination case of Figure 4(b), Figure 4(d) displays many more spatial frequencies along the flow direction. According to the line profiles in Figure 4(f), structured illumination provides twice as many spatial frequencies along the illumination pattern direction, kx, in agreement with linear theory [1‒5] and the x-space observations. In terms of k-space area, we measure a coverage ratio of 2.3, corresponding to an equivalent radius ratio (= kextended/k₀) of 1.5. This is a significant improvement considering that only a 1D illumination pattern was used.

1.4. Structured illumination with non-ideal phase shifts

Structured illumination for super-resolution microscopy has been extensively developed. To date, however, all techniques have relied on precisely controlled phase shifts of the illumination grating. Here, we consider the case of microfluidic structured illumination, where the illumination pattern remains steady and the phase shifts are induced by displacing the object at constant speed in a microfluidic channel above a fixed illumination grating. This approach does not allow precise control of the inter-frame phase shifts and requires a more general reconstruction technique. In this section, we derive this solution; a short simulated proof of concept is sketched at the end. We define the following notation.

  • A(x) is the object absorption density. This is the unknown distribution that we want to measure with extended resolution.

  • M(x) is the structural mask, a known illumination pattern in the object plane. Here, we use M(x) = U + 2cos(βx).

  • PSF(x) is the point-spread function of the image detector (CCD) in the object plane; it represents the resolution limit of the imaging system (CCD and lens system). Its Fourier transform ℱ[PSF] represents the optical transfer function, OTF(k).

  • "⊗" is the convolution product, (f ⊗ g)(x) = ∫ f(x − x′) g(x′) dx′.

  • "·" is scalar multiplication.

  • "ℱ" is the Fourier transform operator, e.g., ℱ[f ⊗ g] = ℱ[f] · ℱ[g].

  • "x" represents a vector in real space.

  • "k" represents a wavevector in spatial frequency space (Fourier space).

Structured illumination enables reconstruction with extended resolution beyond the numerical aperture of a given imaging device. By illuminating the sample with a known high-definition pattern, it is possible to wrap high spatial frequencies into visible large-scale Moiré patterns. Structured illumination is a well-known imaging technique [16, 17]. Current algorithms rely on pattern phase shifting by displacement of the illumination source. Here, instead, optofluidic microscopy leads us to displace the sample. Furthermore, because flow measurement is extremely difficult in a fluid channel, the phase shift between frames is not well controlled. In this paragraph, we solve the problem of extended-resolution reconstruction for an arbitrary object displacement δ between frames.

Let v be the speed of the flow, and T be the time between frames, so that δ=vT is the displacement of the sample between two frames (assumed to be constant). Let β be the wavevector of the illumination pattern. Assuming constant flow speed, and constant image rate, we may write that the pth recorded signal is given by

$$I_p(x) = \mathrm{PSF}(x) \otimes \left[ A(x - p\delta) \cdot M(x) \right] \tag{7}$$

where

$$M(x) = U + e^{i\beta x} + e^{-i\beta x} \tag{8}$$

In k space, this corresponds to

$$\tilde{I}_p(k) = \mathrm{OTF}(k) \cdot \left( \mathcal{F}[A(x - p\delta)] \otimes \mathcal{F}[M(x)] \right) \tag{9}$$

where

$$\mathcal{F}[M(x)] = U\,\delta(k) + \delta(k - \beta) + \delta(k + \beta) \tag{10}$$

and

$$\mathcal{F}[A(x - p\delta)] = e^{-ip\delta k}\,\tilde{A}(k) \tag{11}$$

Defining $\tilde{X}_p(k) = \tilde{I}_p(k)\,e^{ip\delta k}$ and $\gamma = e^{i\delta\beta}$, Eq. (9) becomes

$$\tilde{X}_p(k) = \mathrm{OTF}(k)\left( U\tilde{A}(k) + \gamma^{p}\tilde{A}(k-\beta) + \gamma^{-p}\tilde{A}(k+\beta) \right) \tag{12}$$

$$\tilde{X}_{p+1}(k) - \tilde{X}_p(k) = \mathrm{OTF}(k)\left( \gamma^{p}(\gamma - 1)\tilde{A}(k-\beta) + \gamma^{-p}(\gamma^{-1} - 1)\tilde{A}(k+\beta) \right)$$

Assuming $\mathrm{OTF}(k) = 1$ for $|k| < k_0$, we deduce that, for all $p$ and all $|k| < k_0$,

$$\tilde{A}(k-\beta) = \frac{\gamma^{1-p}}{(\gamma-1)^2(\gamma+1)}\left[ \gamma\left(\tilde{X}_{p+1}(k) - \tilde{X}_p(k)\right) - \left(\tilde{X}_p(k) - \tilde{X}_{p-1}(k)\right) \right]$$
$$\tilde{A}(k+\beta) = \frac{\gamma^{1+p}}{(\gamma-1)^2(\gamma+1)}\left[ \left(\tilde{X}_{p+1}(k) - \tilde{X}_p(k)\right) - \gamma\left(\tilde{X}_p(k) - \tilde{X}_{p-1}(k)\right) \right]$$

Writing $J_p(x) = I_p(x + p\delta)$, so that $\tilde{X}_p = \mathcal{F}[J_p]$, this becomes

$$\tilde{A}(k-\beta) = \frac{\gamma^{1-p}}{(\gamma-1)^2(\gamma+1)}\,\mathcal{F}\!\left[ \gamma J_{p+1}(x) - (\gamma+1)J_p(x) + J_{p-1}(x) \right] \tag{13}$$
$$\tilde{A}(k+\beta) = \frac{\gamma^{1+p}}{(\gamma-1)^2(\gamma+1)}\,\mathcal{F}\!\left[ J_{p+1}(x) - (\gamma+1)J_p(x) + \gamma J_{p-1}(x) \right] \tag{14}$$
$$\tilde{A}(k) = \frac{\gamma}{U(\gamma-1)^2}\,\mathcal{F}\!\left[ -J_{p+1}(x) + \frac{\gamma^2+1}{\gamma}J_p(x) - J_{p-1}(x) \right] \tag{15}$$

where δ and β are experimental parameters that are either known or derived from the measurements. The time lapse between recorded frames, as well as the speed of the fluid, is assumed constant. By recording a series of images $I_p,\ p \in [1, N]$, we may use Eqs. (13)–(15) to reconstruct the extended k-space.

Accordingly, the extended-resolution reconstruction $A_{EXT}$ becomes

$$\tilde{A}_{EXT}(k) = \tilde{A}(k+\beta) + \tilde{A}(k) + \tilde{A}(k-\beta) \tag{16}$$

which, after inverse Fourier transformation of the shifted bands, gives in real space

$$A_{EXT}(x) = \frac{\gamma}{(\gamma-1)^2}\left[ \left( \frac{\gamma}{\gamma+1}\alpha_+ + \frac{1}{\gamma+1}\alpha_- - \frac{1}{U} \right) J_{p+1}(x) - \left( \alpha_+ + \alpha_- - \frac{\gamma^2+1}{\gamma U} \right) J_p(x) + \left( \frac{1}{\gamma+1}\alpha_+ + \frac{\gamma}{\gamma+1}\alpha_- - \frac{1}{U} \right) J_{p-1}(x) \right] \tag{17}$$

where $\alpha_+ = \gamma^{-p} e^{-i\beta x}$ and $\alpha_- = \gamma^{p} e^{i\beta x}$. This real-space combination is exactly the result stated in Eqs. (4) and (5).
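As a simulated proof of concept, the sketch below generates three frames with the forward model of Eq. (7) and feeds them to the si_reconstruct helper defined in Section 1.2. All numerical values are illustrative, not the experimental parameters.

```python
import numpy as np

N = 2048
x = np.linspace(-50.0, 50.0, N)                          # object plane, um
A = np.exp(-((x - 2) ** 2)) + np.exp(-((x + 2) ** 2))    # two close 'particles'

beta = 2 * np.pi / 2.78     # pattern wavevector (2.78 um stripes)
U, delta = 2.5, 0.8         # mask offset and inter-frame displacement (um)
k0 = 2 * np.pi / 4.0        # NA-limited cutoff (4 um bare resolution)
k = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])

def lowpass(f):
    """Ideal OTF of Eq. (12): unity for |k| < k0, zero elsewhere."""
    return np.real(np.fft.ifft(np.where(np.abs(k) < k0, np.fft.fft(f), 0)))

def frame(p):
    """Eq. (7): displace the object by p*delta, apply M(x), then blur."""
    A_p = np.interp(x - p * delta, x, A)
    return lowpass(A_p * (U + 2 * np.cos(beta * x)))

I_prev, I_mid, I_next = frame(-1), frame(0), frame(+1)
A_ext = si_reconstruct(I_prev, I_mid, I_next, x, delta, beta, U)
# A_ext should resolve the two peaks that lowpass(A) alone merges.
```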


2. 3D microfluidic microscopy

In this section, we present two applications of microfluidic flow for 3D microscopy. The first method uses a microfluidic channel that is tilted along the optical axis. We record several progressively defocused images of the flowing sample as it passes across the focal plane. The resulting focal stack is then processed using a Wiener deconvolution algorithm to generate three-dimensional images. Experimental results on flowing yeast cells reveal precise surface profile information. The second method is a 3D tomography device that combines a light source providing patterned illumination through a slit aperture, a microfluidic channel, and a Fourier lens for simultaneous acquisition of multiple perspective angles in the phase-space domain. 3D absorption is retrieved using standard back-projection algorithms, here a limited-domain inverse Radon transform. Simultaneously, 3D differential phase contrast images are obtained by computational refocusing and asymmetric comparison of complementary illumination angles. We have implemented the technique on a compact glass slide. We demonstrate noninvasive 3D phase contrast and absorption imaging capabilities on live, freely swimming C. elegans.

The microfluidic channel eliminates the need for a precise translation stage to control the extra degree of freedom required to acquire 3D images on a 2D sensor, here either by defocusing or by flow scanning. In addition, high sample throughput in an insulated, nontoxic, liquid environment perfectly fits the usual requirements for biocompatibility.

2.1. 3D microfluidic microscopy using a tilted channel

In the simplest description of an imaging system with a fixed lens, the imaging condition ensures sharp images when an object is located at a particular depth called the focal plane. Here, by tilting the channel [18], samples descend through this plane, so that different cross-sections sequentially come into focus.

2.1.1. Experimental setup

The experimental device is presented in Figure 5. The microfluidic channel (500 µm width, 50 µm depth) is etched on a glass slide and placed under a standard wide-field microscope. The slide is tilted by a 15° angle with respect to the optical axis of the microscope objective. The tilt angle, α, is chosen as a good compromise between magnification and axial defocusing range. The channel is illuminated with incoherent white light, and a CMOS camera records a 25× magnified image at a constant frame rate, here 30 fps. We demonstrate the principle with a suspension of yeast cells in glycerol flowing through the microfluidic channel.

The frame rate and the flow speed are selected to allow the acquisition of 100 consecutive frames as each sample passes from one end of the observation window to the other. A constant flow is maintained using a fixed pressure difference (a 50 cm hydrostatic water column) between the channel input and output. Exposure is adjusted to keep the flow-induced blur below the resolution limit of the imaging device. With the shallow channel depth, the high kinematic viscosity of the fluid yields a Hagen‒Poiseuille-type laminar flow [19]. Also, the concentration of particles is kept below 250 µl⁻¹, allowing for easy separation and minimal interactions between flowing particles.

Figure 5.

(a) The experimental device is a microfluidic channel placed under the objective of a wide field microscope with a tilt angle of 15° along the optical axis. The sample is carried by the laminar flow along the channel at constant velocity with a static pressure difference. We record video data with a CMOS camera operating at a constant frame rate. (b) Samples, seen as they flow along the channel axis and pass across the focal plane.

2.1.2. Controlled motion and rotation in a laminar liquid flow

The velocity distribution of the flow (see Figure 6) is parabolic. Particles in suspension flow at a constant velocity along the channel axis. However, except on the central axis of the channel, they also experience shear-induced rotation. Here, the acquisition of accurate focal stacks relies on the absence of rotation (or its compensation), and our setup has been designed to minimize the effects of shear in all directions. Along the channel axis u, the flow is fully developed (∂v/∂u = 0), and the absence of rotation is a property of the laminar flow. Along the y-axis, we choose to observe only samples flowing in the middle part of a wide channel. Similarly, along the z-axis, rotating objects are excluded by considering only particles flowing at the highest velocity in the middle of the channel, where the shear effects cancel. In practice, rotation can be minimized by injecting particles in the center of the channel with microfluidic injection on a separate channel. We note, however, that object rotation may be useful in other contexts, e.g., for multiple viewpoints, and is easily accessible by changing the injection point or imaging different parts of the flow.

Figure 6.

Flow velocity and shear in a laminar flow for a rectangular channel section. Small particles propagate along the streamlines of the flow, and shear effects resulting from the liquid-channel edge interface induce rotations for samples propagating away from the central axis.

2.1.3. Extraction of gradually defocused images

Focal stacks are generated by tracking samples flowing along the channel, as shown in Figure 7. The background noise is subtracted from the signal, and the zero value of the signal (in gray) corresponds to the nominal transparency of the free-running fluid. Let In(x,y) be the intensity of the nth recorded frame. The focal stack, S, is then given by

Figure 7.

We record focal stacks by observing samples in motion as they pass through the tilted microfluidic channel and across the focal plane. With constant flow velocity and frame rate, we record ≈100 progressively defocused frames along the z-axis. We digitally track the object with an algorithm based on the defocusing-invariant properties of the center of gravity of the image. This allows perfect vertical alignment of the focal stack. (a) We measure iso-intensity contours of yeast cells through focus (≈36 µm). Insets show images of cells at selected levels. (b) Normalized intensity display of the focal stack with background subtracted.

$$S\left( x,\; y,\; z_0 + n|\delta|\sin\alpha \right) = I_n\left( x + n|\delta|,\; y \right) \tag{18}$$

where v is the flow velocity, T is the frame recording period, and δ = vT is the object displacement along the channel axis between two frames.
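In code, the resampling of Eq. (18) amounts to shifting each frame back along the flow axis before stacking. A minimal sketch, assuming the per-frame displacement has already been converted to (approximately integer) pixels:

```python
import numpy as np

def build_focal_stack(frames, delta_px):
    """Assemble the focal stack of Eq. (18). frames is a list of 2D images
    I_n(x, y), with x along the flow axis (array axis 1); delta_px is the
    per-frame object displacement in pixels. Index n of the output stack
    corresponds to the defocus z = z0 + n * |delta| * sin(alpha)."""
    stack = [np.roll(f, -int(round(n * delta_px)), axis=1)
             for n, f in enumerate(frames)]
    return np.stack(stack, axis=0)
```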

2.1.4. Focal stack alignment using a defocusing invariant

The extraction of accurate focal stacks is based on aligning successive views of the moving sample as it passes across the focal plane. We experimentally estimate this displacement using the fluid flow velocity and the frame rate, but we also correct for possible position errors using a simple particle-tracking method based on an optical property of defocused images. We compute the center of mass of each image and note that, for defocused images of 3D objects with symmetric point-spread functions, the center of mass does not depend on the amount of defocusing [20] (see Figure 8).
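A minimal sketch of the invariant and the resulting alignment step follows; the helper names are our own, and the integer roll would be replaced by interpolation for sub-pixel shifts. The centroid computation assumes background-subtracted, non-negative intensities.

```python
import numpy as np

def center_of_mass(img):
    """First moment of the image intensity. For a symmetric point-spread
    function this centroid is invariant under defocus [20], so it can
    serve as the registration anchor for the whole focal stack."""
    y, x = np.indices(img.shape)
    m = img.sum()
    return (x * img).sum() / m, (y * img).sum() / m

def align_to_reference(img, ref_xy):
    """Shift img (integer pixels) so its centroid lands on ref_xy."""
    cx, cy = center_of_mass(img)
    return np.roll(np.roll(img, int(round(ref_xy[0] - cx)), axis=1),
                   int(round(ref_xy[1] - cy)), axis=0)
```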

Figure 8.

Samples are observed as they flow along the u-axis and pass across the focal plane. For each acquired image, we compute the center of mass, or first moment, of the image data (red cross). This quantity is a defocusing invariant and can be used for motion correction and to compensate for imperfections of the laminar flow.

2.1.5. Wiener deconvolution

The reconstruction of the volume absorption distribution of the object, O, relies on solving the well-known deconvolution problem using the experimentally measured focal stack, S, and point-spread function, P, in 3D space. O satisfies the volume integral

$$S(\mathbf{r}) = [O \otimes P](\mathbf{r}) = \int O(\mathbf{r} + \mathbf{r}')\,P(\mathbf{r}')\,d\mathbf{r}' \tag{19}$$

where $\mathbf{r} = x\,\hat{\mathbf{x}} + y\,\hat{\mathbf{y}} + z\,\hat{\mathbf{z}}$. Ideally, the optical transfer function, ℱ[P], is positive, and the solution of Eq. (19) is given by

$$O \approx \mathcal{F}^{-1}\left[ \frac{\mathcal{F}[S]}{\mathcal{F}[P]} \right] \tag{20}$$

Unfortunately, this solution is known to be extremely sensitive to experimental noise [21]. Several noise-reducing techniques exist to facilitate object recovery when the point-spread function is not known, e.g., maximum likelihood estimation [22] and blind deconvolution [23]. In these methods, which in general are computationally complex, the point-spread function is guessed instead of measured. Adaptive measures [24] are particularly suited to flowing objects, and engineered point-spread functions [25] can be used as well. In this experiment, we choose to measure the point-spread function of the microscope directly by applying the same focal stack acquisition procedure introduced above to a suspension of submicron-sized reference particles. For this calibration, we use a sparse suspension of 800 nm dyed polystyrene beads in glycerol. Flow velocity, frame rate, and exposure conditions are identical to those selected for imaging the samples. We record a focal stack, track one of the flowing reference particles, align the defocused images using the defocusing invariant Gn, and check that the field of observation is clear of other flowing objects. The resulting stack, centered in the window of computation and normalized to a unitary absorption, represents the point-spread function of the microfluidic microscope for this particular tilt angle. While not done here, these reference particles can be embedded in the flow with the samples, as real-time reference points for changing conditions and/or shear compensation. Even with a known, precisely measured point-spread function, the inversion (Eq. (20)) is sensitive to zeros and noise in the measurement. To compensate for this, we use a Wiener deconvolution filter [26]. In this approach, we return to Eq. (19) and explicitly consider an additive noise term, N, which we assume to be independent of the signal.

Eq. (19) becomes

$$S(\mathbf{r}) = [O \otimes P](\mathbf{r}) + N(\mathbf{r}) \tag{21}$$

The Wiener filter finds the best deconvolution operator, D, such that the retrieved object, $O_R$:

$$O_R(\mathbf{r}) = [D \otimes S](\mathbf{r}) \tag{22}$$

minimizes the RMS reconstruction error, E, given by

$$E = \left\langle \left| O - O_R \right|^2 \right\rangle \tag{23}$$

Here, the Wiener solution, representing an optimal compromise between noise and resolution, is therefore given by

$$O_R \approx \mathcal{F}^{-1}\left[ \frac{\mathcal{F}[S]}{\mathcal{F}[P] + \epsilon} \right] \tag{24}$$

where $\epsilon = |N|^2 / |S|^2 = 1.2 \times 10^{-3}$ is a regularization constant corresponding to the inverse of the signal-to-noise ratio. Here, the signal intensity is normalized to 1, and the root-mean-square value of the noise (background intensity) is measured and time-averaged in an empty area near the flowing object.
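In practice, Eq. (24) is three FFTs and a regularized division. A sketch, assuming the focal stack and the bead-measured point-spread function have been resampled onto the same (z, y, x) grid:

```python
import numpy as np

def regularized_deconvolve(stack, psf, eps=1.2e-3):
    """Regularized 3D deconvolution, Eq. (24). stack and psf are focal
    stacks on the same grid; the PSF (measured from the 800 nm beads) is
    normalized to unit absorption and centered before transforming."""
    S = np.fft.fftn(stack)
    P = np.fft.fftn(np.fft.ifftshift(psf / psf.sum()))
    # A full Wiener filter would use conj(P) / (|P|^2 + eps); Eq. (24)
    # regularizes the direct inverse, which behaves similarly when the
    # OTF is close to real and positive.
    return np.real(np.fft.ifftn(S / (P + eps)))
```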

Figure 9.

The 3D structure of the object is digitally reconstructed using Eq. (24). (a) An iso-level surface shows subcellular structures at the surface of the cellular membrane. (b) A 3D view of the aggregated yeast cells flowing along the channel axis (u).

2.1.6. Results

Experimental results on flowing yeast cells are shown in Figure 9. The focal stack, S, and the point-spread function, P, are each processed using Eq. (24). We compute iso-level surface contours from the retrieved three-dimensional data. Figure 9(a) shows these contours and their projection along the optical axis, revealing small-scale surface features (≈1−2 µm) that are clearly resolved (though somewhat smoothed by the regularization process). These features most likely represent early-stage budding of the yeast cells, though other factors can also contribute to their specific morphology [27]. In Figure 9(b), we show contours from the side. All the cells lie in the same vertical plane (a result of the controlled injection), and each cell has flat side walls. This deformation is common in flowing cells [28] and is often used as a diagnostic tool [29]. Many details that are hidden in standard imaging using 2D projections, such as cell orientation, 3D shape, and surface roughness, are readily apparent in the volume images here.

2.2. Microfluidic flow-scanning tomography

In the previous section, we demonstrated 3D surface topology of flowing objects. In this section, we present methods for full-volume tomography [30]. Depending on the relative sizes of the microfluidic channel and the flowing objects, two methods for the acquisition of tomographic optical projections are possible, as shown in Figure 10.

Figure 10.

There are two methods for 3D microfluidic tomography. (a) By observing objects flowing at a focal plane located near the upper (or lower) face of the channel, laminar flow-shear effects induce rotation of the objects suspended in the flowing fluid. (b) For larger samples, we introduce a phase-space flow-scanning method that simultaneously records multiple optical projections along a broad range of angles, slice by slice, as the samples flow across an optical slit.

The method proposed in Figure 10(a) can be implemented by placing a channel in the optical path of an optical microscope. Here, we present in greater detail the lens-based, flow-scanning tomography technique shown in Figure 10(b).

The phase-space distribution is a 4D representation that describes an entire light field, with spatial position {x, y} and phase information {kx, ky} (wave/ray propagation direction). 4D light-field acquisition requires more advanced recording methods than traditional imaging, such as scanning Fourier windows [31], wavefront sensors [32], or light-field cameras [33]. Here, we demonstrate a technique that enables the acquisition of a 3D subspace {x, y, kx} of the 4D light field. We then extract 3D images showing phase and intensity information in two separate tomograms using computational imaging methods. We propose a microfluidic device that combines an illumination source, patterned with a slit aperture along the (y) axis, and a cylindrical lens to collect the light passing through the sample. When placed into an optical microscope, the device enables the simultaneous acquisition of multiple views of the slit aperture for a broad range of perspective angles. This {y, kx} image is recorded by the video camera while the object flows through the microfluidic channel past the slit aperture. Motion therefore provides line-by-line scanning along the (x) axis.

This new approach has several advantages. Each frame records data relative to a specific slice through the object. Consequently, there is very little redundancy between the information contained in two distinct frames, which means that sampling of the optical signal is very efficient. In addition, as we show in the experimental results, line-by-line acquisition is very robust to sample motion. This is a great advantage for in vivo imaging applications.

2.2.1. Experimental setup

The experimental setup is shown in Figure 11. A white light-emitting diode (LED) source provides uniform illumination, I(x, y, θx, θy) = I₀, which is restricted by a slit aperture: I(|x| > 0.5 µm) = 0. The stage is positioned at the focal plane of an optical microscope, in this case an f = 2 mm objective with a 6 mm working distance and a numerical aperture NA = 0.70. In between, we place a cylindrical lens at a focal distance (f′ = 2 mm) from the aperture. With this 1D optical Fourier transform, we record the angular spectrum kx for each point illuminated along the slit axis (y). The result is a continuous range of perspective angles for projection tomography, given by θMAX = 2 arcsin(NA/n), where n is the refractive index of the flowing fluid. In the experiments below, we use a buffer solution for in vivo experiments with C. elegans (n ≈ 1.33).

Figure 11.

Experimental device. A microfluidic channel is fitted with a 1 µm wide slit aperture along the y-axis that provides a static, cylindrical illumination pattern (in green). A cylindrical lens in a Fourier imaging configuration converts the transmitted light beam into a phase-space image {y,kx}. All components are assembled onto a standard microscope slide to be used directly in a microscope.

We adjust the flow speed and frame rate so that $|\delta x| \approx 2\ \mu\mathrm{m}$ between consecutive frames.

Figure 12.

(a) The microfluidic slide is placed at the focal plane of an optical microscope, with a video camera for data acquisition. (b) A secondary slit aperture is used to adjust the depth of field of the acquired optical projections by reducing the effective numerical aperture to NAx = 0.15 along the slit axis. This is a required tradeoff between focal depth and resolution. (c) Each acquired frame contains a continuous range of optical projections of the slit aperture. The depth of field is given by Dz. The resolution Rz along the z-axis corresponds to the size of the domain intersecting all optical projections in the angular range given by θMAX.

In Figure 12, we show the optical path from source to detector. In the (y, z) plane, the objective projects an image of the 100 µm wide microfluidic channel cross-section (with a 50× magnification) onto the camera. The optical resolution is limited by the numerical aperture of the microscope objective and is given by

$$R_y = \frac{\lambda}{2\,\mathrm{NA}} \approx 0.4\ \mu\mathrm{m} \tag{25}$$

where λ is the imaging wavelength (here, λ = 500 nm, at the center of the LED spectrum). In the (x, z) plane, the Fourier lens separates the continuous range of perspective views of the slit along the other axis of the camera. With an adjustable slit aperture, we reduce the numerical aperture to NAx = 0.15. The optical resolution limit along the slit axis is given by

$$R_x = \frac{\lambda}{2\,\mathrm{NA}_x} \approx 1.7\ \mu\mathrm{m} \tag{26}$$

The resolution along the z-axis corresponds to the geometrical limitations of the projection area shown in Figure 12(c) and is given by

$$R_z = \frac{\delta x}{\tan(\theta_{MAX}/2)} \approx 3.2\ \mu\mathrm{m} \tag{27}$$

The associated depth of field, required to acquire 3D information for stack reconstruction, is given by $D_z = 2\lambda/\mathrm{NA}_x^2 \approx 50\ \mu\mathrm{m}$. In the simplest case demonstrated here, the limitation of the NA along this axis is a required tradeoff between depth of field and resolution:

$$D_z = \frac{8 R_x^2}{\lambda} \tag{28}$$

We note, however, that additional computational methods are possible to overcome this relation [34].
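The design numbers quoted above follow directly from Eqs. (25)–(28); a quick numerical check with the values from the text:

```python
import numpy as np

lam = 0.5                          # imaging wavelength, um
NA, NA_x, n = 0.70, 0.15, 1.33     # objective NA, slit-limited NA, fluid index
dx = 2.0                           # flow step per frame |delta x|, um

theta_max = 2 * np.arcsin(NA / n)      # total angular range, ~63 degrees
R_y = lam / (2 * NA)                   # Eq. (25): ~0.36 um
R_x = lam / (2 * NA_x)                 # Eq. (26): ~1.7 um
R_z = dx / np.tan(theta_max / 2)       # Eq. (27): ~3.2 um
D_z = 8 * R_x ** 2 / lam               # Eq. (28): ~45 um (quoted as ~50 um)
```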

2.2.2. 3D absorption tomography

The first imaging modality of the device is 3D absorption tomography. As the sample flows in the channel, we record N consecutive frames:

$$I_n(k_x, y),\qquad n = 1, \ldots, N \tag{29}$$

The raw data are then reassembled into the angular projection domain Pθ, given by

$$P_\theta(x, y) = I_n\!\left( \frac{2\pi}{\lambda}\sin\theta,\; y \right) \tag{30}$$

where

$$n = \left[ \frac{x\cos\theta}{vT} \right] \tag{31}$$

$P_\theta$ contains (x−y) views of the flowing sample for different values of the projection angle, $|\theta| < \theta_{MAX}/2$. Because each frame contains simultaneous perspective views of the slit aperture, all projected views are already aligned and are therefore only marginally affected by long-range sample motion. Reconstruction of the 3D structure S(x, y, z) is directly derived from the data with tomographic back-projection algorithms, here an inverse Radon transform, by applying the Fourier slice theorem to $\hat{P}_\theta = \mathcal{F}_x[P_\theta]$:

$$\hat{P}_\theta(k, y) = \hat{S}\left( k\cos\theta,\; y,\; k\sin\theta \right) \tag{32}$$

Finally, we retrieve the 3D structure:

$$S(x, y, z) = \mathcal{F}^{-1}_{x,z}\left[ \hat{S} \right] \tag{33}$$
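Assembled into code, Eqs. (29)–(33) reduce to a filtered back-projection for each y-slice. A sketch, assuming scikit-image's inverse Radon transform and projections already resampled into the angular domain P_θ(x, y):

```python
import numpy as np
from skimage.transform import iradon

def reconstruct_volume(P, thetas_deg):
    """Limited-angle back-projection, Eqs. (32)-(33). P has shape
    (n_theta, n_x, n_y): the angular projection domain P_theta(x, y).
    Each fixed-y sinogram is back-projected into an (x, z) slice."""
    n_theta, n_x, n_y = P.shape
    slices = []
    for iy in range(n_y):
        sinogram = P[:, :, iy].T          # (n_x, n_theta), columns = angles
        slices.append(iradon(sinogram, theta=thetas_deg,
                             filter_name='ramp', circle=False))
    return np.stack(slices, axis=-1)      # volume indexed (x, z, y)
```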

A proof-of-principle experiment is shown in Figure 13. We used a translation stage to displace the microscopic sample at a constant speed along the x-axis, and a prepared microscope slide with a well-known biomaterial (an Elodea leaf slice) to calibrate the phase-space microscopy setup. The 3D structure of the cell walls was retrieved, and depth information was obtained with enough spatial resolution to distinguish two cellular layers with a 6 µm depth difference (Figure 13(a) and (b)). Such separation is not possible with conventional 2D microscopy (Figure 13(c) and (d)).

Figure 13.

Experimental proof of principle showing the 3D imaging capabilities of the tomographic microscopy device on a prepared microscope slide. A translation stage provides the constant speed displacement along the x-axis during data acquisition. Tomographic views are shown at two different levels (a) in green and (b) in blue color maps. (c) The image as it would have been observed in a white light microscope is compared to (d), an overlapping image of (a) and (b) showing layer separation capability with a 6 µm depth difference.

2.2.3. 3D differential phase contrast tomography

The second imaging modality of the device is 3D differential phase-contrast (DPC) tomography. We consider the angular projection domain Pθ defined previously and first digitally refocus the data by gradually shifting all perspective views, as if the intersection of all projection directions were displaced along the optical axis by a depth z from the center of the channel [35]. The virtually defocused views at depth z are given by

$$P_\theta^z(x, y) = P_\theta\left( x + z\tan\theta,\; y \right) \tag{34}$$

An individual DPC image [36] at focal depth z is then given by

$$\Delta\phi_z(x, y) = \frac{I_L^z(x, y) - I_R^z(x, y)}{I_L^z(x, y) + I_R^z(x, y)} \tag{35}$$

where

$$I_L^z(x, y) = \sum_{\theta = -\theta_{MAX}/2}^{0} P_\theta^z(x, y) \tag{36}$$

and

$$I_R^z(x, y) = \sum_{\theta = 0}^{\theta_{MAX}/2} P_\theta^z(x, y) \tag{37}$$

The resulting DPC tomogram is given by

$$\Delta\phi(x, y, z) = \Delta\phi_z(x, y) \tag{38}$$
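Eqs. (34)–(37) translate into a shear-refocus followed by two asymmetric angular sums. A compact sketch (our own helper names; a small constant keeps the denominator of Eq. (35) from vanishing in empty regions):

```python
import numpy as np

def refocus(P_theta, x, shift):
    """Eq. (34): evaluate P_theta(x + shift, y) by interpolation along x."""
    return np.stack([np.interp(x + shift, x, col) for col in P_theta.T], axis=1)

def dpc_slice(P, thetas, x, z):
    """Eqs. (35)-(37): DPC image at depth z from the projection stack P
    (shape (n_theta, n_x, n_y)) and its projection angles (radians)."""
    IL = sum(refocus(Pt, x, z * np.tan(th)) for Pt, th in zip(P, thetas) if th < 0)
    IR = sum(refocus(Pt, x, z * np.tan(th)) for Pt, th in zip(P, thetas) if th > 0)
    return (IL - IR) / (IL + IR + 1e-9)
```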

Because DPC (∆φ) and intensity S are simply different methods of processing the raw data (In), the two imaging modalities are perfectly registered. This alignment also provides a strong foundation for other methods of phase retrieval, e.g., for ambiguities in reconstruction [37], and does not suffer from possible artifacts present in coherent methods, e.g., speckle and sensitivity to interference jitter. In addition, phase measurement can also be made quantitative with suitable calibration, and it is possible to combine phase and intensity reconstructions to compute the full (complex) refractive index profile [38].

2.2.4. Results on C. elegans

Now we demonstrate 3D imaging of a live C. elegans nematode freely swimming in the microfluidic channel. An XX hermaphrodite is raised at room temperature to the adult stage of development using standard techniques [39] and placed into a water-based liquid environment with balanced electrolytes (M9 buffer solution). The motion of the nematode along the (y, z) directions is limited by the boundaries of the microfluidic channel; residual motion can be accounted for by preprocessing the data and cancelled with digital frame-alignment techniques.

The phase-space scanning microscope setup used in Figure 13 was modified to integrate a 100 µm deep, 100 µm wide microfluidic channel for C. elegans 3D tomography. A low flow speed, 10 nl/s, was chosen as the best compromise between the available frame rate of the camera and the desired resolution.

Figure 14.

Experimental results on live, adult, awake, wild type C. elegans nematodes. We display absorption (S) and differential phase contrast (∆φ) tomographic images for two different depths ((a) z = 0 µm, and (b) z = 22 µm). Tomographic data show the precise 3D location of the reproductive system with eggs (a), and of the digestive system (intestine), the cuticle and oblique somatic muscle fibers 22 µm above (b). In a conventional white light microscopy device (c), these internal features overlap on the same image and it is nearly impossible to identify and locate them.

Experimental results for 3D amplitude and phase contrast tomography are shown in Figure 14. Figure 14(a) and (b) shows digital slices of the retrieved 3D tomogram at two z-levels of interest (z = 0 µm and z = 22 µm). At each depth level, two images of the nematode show absorption, from optical projection tomography (S), and differential phase-contrast (DPC) tomography (∆φ). In the reference frame at z = 0 µm, the absorption tomographic slices (S) show the pharynx and its two bulbs on the left side (head), and the reproductive system, with a view of the eggs, in the center part of the body. At a different depth (z = 22 µm), the digestive system, with the distal gonad and the intestine, is clearly apparent.

Differential phase contrast is well suited for the observation of interfaces between tissue layers with different refractive indices. With nearly transparent live roundworms, the DPC tomographic slices (∆φ) enable the observation and localization of a few eggs at a time at each depth level, including eggs that could not be observed clearly with the absorption images alone at the level of the digestive system (z = 22 µm).

We show a conventional optical microscopy image of the worm in Figure 14(c) to compare our technology with more conventional imaging methods. Here, because all the structures previously identified now overlap in a single image, it is much harder to identify them from only one perspective view. In addition, it is also impossible to find their respective positions along the optical axis.


3. Conclusion

In conclusion, we have presented several microfluidic microscopy methods that combine a liquid channel, optical instrumentation, and computational imaging.

These technologies inherit the advantages of microfluidic channels. Samples move with the flow in a biocompatible fluid and are guided precisely to the observation window for optical imaging. High sample throughput allows both large data sets for population studies as well as repeated imaging of the same sample for longitudinal studies (e.g., to track development/aging or to evaluate drug delivery and response). Microfluidics also provides a pathway for object sorting and fully automated imaging with little to no sample preparation.

By using the flow as a degree of freedom for imaging, we allow the imaging sensor to capture a more diverse data set than is possible with static samples. Beyond simple multiplicity of images, multiple illumination orientations, shifts, and perspectives are possible. Computational analysis then leverages the image diversity into improvements in resolution, quantitative measurement of surface structure, and even 3D tomographic imaging of phase and absorption. Remarkably, the flow also allows for a certain amount of self-error correction, as integrating the known properties of cell transport in a laminar flow enables computational adjustment for imperfections of the microfluidic channel, such as pinching and fabrication defects, as well as variations in the flow velocity.

These microfluidic microscopy methods can be either scaled down in size for individual cells or scaled up for larger animals. They can operate on their own or be integrated easily with existing devices, such as flow cytometers, microscopes, and imaging systems, e.g., with a modified microscope slide. Likewise, they can be implemented with or without lenses, enabling a variety of miniaturized, on-chip forms. And finally, they can coexist with other modalities of imaging, such as spectroscopy and (photo-) acoustic sampling, for the acquisition of higher-dimensional data cubes.

References

  1. Lukosz W, Marchand M. Optische Abbildung unter Überschreitung der beugungsbedingten Auflösungsgrenze. Journal of Modern Optics. 1963;10(3):241–255.
  2. Neil M, Squire A, Juškaitis R, Bastiaens P, Wilson T. Wide-field optically sectioning fluorescence microscopy with laser illumination. Journal of Microscopy. 2000;197(1):1–4.
  3. Heintzmann R, Cremer CG. Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating. In: BiOS Europe '98. International Society for Optics and Photonics; 1999. pp. 185–196.
  4. Gustafsson MGL, Agard DA, Sedat JW. Doubling the lateral resolution of wide-field fluorescence microscopy using structured illumination. In: BiOS 2000: The International Symposium on Biomedical Optics. International Society for Optics and Photonics; 2000. pp. 141–150.
  5. Lu CH, Pégard NC, Fleischer JW. Flow-based structured illumination. Applied Physics Letters. 2013;102:161115.
  6. Frohn JT, Knapp HF, Stemmer A. True optical resolution beyond the Rayleigh limit achieved by standing wave illumination. Proceedings of the National Academy of Sciences of the United States of America. 2000;97(13):7232–7236.
  7. Gustafsson MGL. Nonlinear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution. Proceedings of the National Academy of Sciences of the United States of America. 2005;102(37):13081–13086.
  8. Kner P, Chhun BB, Griffis ER, Winoto L, Gustafsson MGL. Super-resolution video microscopy of live cells by structured illumination. Nature Methods. 2009;6(5):339–342.
  9. Yang S, Zheng G, Lee SAH, Yang C. Stereoscopic optofluidic on-chip microscope. In: Winter Topicals (WTM), 2011 IEEE. IEEE; 2011. pp. 91–92.
  10. Heng X, Erickson D, Baugh LR, Yaqoob Z, Sternberg PW, Psaltis D, Yang C. Optofluidic microscopy, a method for implementing a high resolution optical microscope on a chip. Lab on a Chip. 2006;6(10):1274–1276.
  11. Isikman SO, Bishara W, Mavandadi S, Yu FW, Feng S, Lau R, Ozcan A. Lens-free optical tomographic microscope with a large imaging volume on a chip. Proceedings of the National Academy of Sciences of the United States of America. 2011;108(18):7296–7301.
  12. Kristensson E, Richter M, Pettersson S-G, Aldén M, Andersson-Engels S. Spatially resolved, single-ended two-dimensional visualization of gas flow phenomena using structured illumination. Applied Optics. 2008;47(21):3927–3931.
  13. Fiolka R, Shao L, Hesper Rego E, Davidson MW, Gustafsson MGL. Time-lapse two-color 3D imaging of live cells with doubled resolution using structured illumination. Proceedings of the National Academy of Sciences of the United States of America. 2012;109(14):5311–5315.
  14. Kristensson E, Berrocal E, Richter M, Pettersson S-G, Aldén M. High-speed structured planar laser illumination for contrast improvement of two-phase flow images. Optics Letters. 2008;33(23):2752–2754.
  15. Kim P, Abkarian M, Stone HA. Hierarchical folding of elastic membranes under biaxial compressive stress. Nature Materials. 2011;10(12):952–957.
  16. Gustafsson MGL. Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. Journal of Microscopy. 2000;198(2):82–87.
  17. Gustafsson MGL. Nonlinear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution. Proceedings of the National Academy of Sciences of the United States of America. 2005;102(37):13081–13086.
  18. Pégard NC, Fleischer JW. 3D deconvolution microfluidic microscopy using a tilted channel. Journal of Biomedical Optics. 2013;18:040503.
  19. Stone HA, Kim S. Microfluidics: basic issues, applications, and challenges. AIChE Journal. 2001;47(6):1250–1254.
  20. Reed Teague M. Deterministic phase retrieval: a Green's function solution. JOSA. 1983;73(11):1434–1441.
  21. Tikhonov AN, Goncharsky AV, Stepanov VV, Yagola AG. Numerical Methods for the Solution of Ill-Posed Problems. Vol. 328. Springer; 1995.
  22. Richardson WH. Bayesian-based iterative method of image restoration. JOSA. 1972;62(1):55–59.
  23. Chan TF, Wong C-K. Total variation blind deconvolution. IEEE Transactions on Image Processing. 1998;7(3):370–375.
  24. Dong W, Zhang L, Shi G, Wu X. Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. IEEE Transactions on Image Processing. 2011;20(7):1838–1857.
  25. Quirin S, Pavani SRP, Piestun R. Optimal 3D single-molecule localization for super-resolution microscopy with aberrations and engineered point spread functions. Proceedings of the National Academy of Sciences of the United States of America. 2012;109(3):675–679.
  26. Chatwin CR, Wang RK. Frequency Domain Filtering Strategies for Hybrid Optical Information Processing. Research Studies Press Ltd.; 1996.
  27. Ohya Y, Sese J, Yukawa M, Sano F, Nakatani Y, Saito TL, Saka A, Fukuda T, Ishihara S, Oka S, et al. High-dimensional and large-scale phenotyping of yeast mutants. Proceedings of the National Academy of Sciences of the United States of America. 2005;102(52):19015–19020.
  28. Abkarian M, Faivre M, Horton R, Smistrup K, Best-Popescu CA, Stone HA. Cellular-scale hydrodynamics. Biomedical Materials. 2008;3(3):034011.
  29. Westendorf C, Bae AJ, Erlenkämper C, Galland E, Franck C, Bodenschatz E, Beta C. Live cell flattening ‒ traditional and novel approaches. BMC Biophysics. 2010;3(1):9.
  30. Pégard NC, Toth ML, Driscoll M, Fleischer JW. Flow-scanning optical tomography. Lab on a Chip. 2014;14:4447–4450.
  31. Waller L, Situ G, Fleischer JW. Phase-space measurement and coherence synthesis of optical beams. Nature Photonics. 2012;6(7):474–479.
  32. Hartmann J. Bemerkungen über den Bau und die Justirung von Spektrographen. Zeitschrift für Instrumentenkunde. 1900;20:47.
  33. Ng R, Levoy M, Brédif M, Duval G, Horowitz M, Hanrahan P. Light field photography with a hand-held plenoptic camera. Computer Science Technical Report CSTR. 2005;2(11).
  34. Forster B, Van De Ville D, Berent J, Sage D, Unser M. Complex wavelets for extended depth-of-field: a new method for the fusion of multichannel microscopy images. Microscopy Research and Technique. 2004;65(1‒2):33–42.
  35. Tian L, Wang J, Waller L. 3D differential phase-contrast microscopy with computational illumination using an LED array. Optics Letters. 2014;39(5):1326–1329.
  36. Hamilton DK, Sheppard CJR. Differential phase contrast in scanning optical microscopy. Journal of Microscopy. 1984;133(1):27–39.
  37. Seldin JH, Fienup JR. Numerical investigation of the uniqueness of phase retrieval. Journal of the Optical Society of America A. 1990;7(3):412–427.
  38. Arnison MR, Larkin KG, Sheppard CJR, Smith NI, Cogswell CJ. Linear phase imaging using differential interference contrast microscopy. Journal of Microscopy. 2004;214(1):7–12.
  39. Stiernagle T. Maintenance of C. elegans. In: C. elegans: A Practical Approach; 1999. pp. 51–67.
