Open access peer-reviewed chapter

Imaging Diagnostics for Jet Breakup into Droplets: A Review

Written By

Anu Osta

Submitted: 25 June 2022 Reviewed: 25 August 2022 Published: 24 October 2022

DOI: 10.5772/intechopen.107370

From the Edited Volume

Fundamental Research and Application of Droplet Dynamics

Edited by Hongliang Luo


Abstract

A concise review of recent developments in some of the standard optical diagnostics applied to primary jet breakup studies is presented here. Primary breakup is the core breakup of liquid jets and sheets into droplets upon their interaction with the ambient gaseous atmosphere. This phenomenon is encountered in various aerodynamic, fluid dynamic, and combustion situations. The imaging diagnostics reviewed here include photography, high-speed imaging, shadowgraphy, digital holography, ballistic imaging, jet core illumination, thermal imaging, Mie imaging, x-ray phase contrast imaging, and laser-induced fluorescence. The advantages and limitations of each technique, their successes, and future developmental trends are discussed.

Keywords

  • atomization
  • diagnostics
  • liquid-jet
  • visualization

1. Introduction

Liquid atomization is a phenomenon associated with liquid fuel combustion processes, industrial and agricultural sprays, and daily life activities such as using body or hair sprays. When a liquid column or a sheet issuing from a source (nozzle or channel) interacts with the ambient atmosphere, instabilities develop inside the liquid sheet or the column core. Instabilities also develop at the atmosphere-liquid interface in the form of flow structures like surface waves and ligaments. Flow conditions prevailing inside the source boundary, such as void cavities or turbulence, also affect instability development and the resulting bulk liquid disintegration. Figure 1 depicts typical atomization in a liquid jet that is subjected to a crossflowing fluid like air. The liquid disintegration is also referred to as the 'primary breakup' process. Primary jet breakup is often regarded as the first step in a jet atomization process. The instabilities are mostly of the Kelvin–Helmholtz (KH) and Rayleigh–Taylor (RT) types. The primary droplets formed in the initial stage of atomization may then undergo subsequent secondary breakup stages.

Figure 1.

Schematic representation of primary breakup of a liquid jet in crossflow.

Complete characterization of spray atomization would require one to analyze it at both the macroscopic and microscopic scales. Macroscopic characteristics would reveal the spray volume and its penetration into the surrounding atmosphere (both axially and radially), spray cone angle, mass flow rate, spray momentum flux, and the mass distribution of the spray fluid further downstream after atomization. The microscopic properties would reveal droplet sizes, droplet velocities, droplet number density, droplet distribution, and the temperature field of the spray.

Over the past few decades, various techniques [1, 2, 3, 4, 5], both intrusive and non-intrusive, have been developed to visualize the primary breakup process. These are based on physical jet interaction, thermal response, electrical response, and optical imaging. No single diagnostic can completely characterize the entire spray structure, so a combination of diagnostics is often applied to obtain detailed information about the spray characteristics. Newer developments in optics and electronics have vastly improved the traditional imaging and probing techniques, yielding significantly better results.

The effectiveness of any visualization technique depends on several factors. The optical setup, illumination quality, light source, dynamic range and spatial resolution, light sensitivity, frame speed, and signal-to-noise ratio of the camera sensors all play an important role. Care should be exercised to minimize the errors associated with non-uniform or unstable illumination, curvature effects, reflections, and shadows. When dealing simultaneously with structurally different entities like the liquid core, liquid surface structures, and the spatial droplet distribution, various challenges present themselves. These include the construction of a three-dimensional atomization map from two-dimensional images, optical inaccessibility of the dense jet breakup region, high-speed imaging without sacrificing high resolution, signal loss due to high noise, diffraction blurring of small droplets, overcoming multiple scattering, optical aberrations, and attenuation, to name a few. The rest of this chapter will discuss some standard techniques for visualizing the primary breakup region.


2. Light sources and imaging devices

All optical techniques are based on certain lighting schemes, such as direct lighting, diffused lighting, flood lighting, trans-illumination, and reflective lighting. In the trans-illumination scheme, a light beam is passed through the spray or sheet and is imaged on the other side. In the internally illuminated scheme, a light source is located inside the spray and light is transmitted through the spray core to the surroundings. In a reflective scheme, the incident light reflects off the liquid surface, while in fluorescence, a thin light sheet illuminates a planar section of the spray. The light sources used for illumination can be of either the coherent or the incoherent type (Figure 2). Incoherent sources include strobe, incandescent, halogen, arc, and fluorescent lamps, among others. Coherent sources include lasers and laser diodes; ordinary LEDs are only partially coherent.

Figure 2.

Some of the commonly used light sources for imaging purposes. A - gas arc lamp used as a stroboscopic source, B - incandescent light source, C - fluorescent light source, D - Nd:YAG laser light source.

2.1 Strobe light

Strobe lighting involves producing flashes of light for a short period of time at regular intervals. The typical flash lasts for 200 μs and may be synchronized with the framing rate of a suitable camera. With stroboscopic illumination, one can prolong the source lifetime, operate at increased light intensities, ‘freeze’ the motion of a fast-moving object, and time the pulses.
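As a quick sanity check, the motion 'frozen' by a strobe can be estimated as the distance a droplet travels during one flash. The sketch below uses illustrative droplet velocities (not values from the text) to show why the 200 μs flash duration limits micrometer-scale work.

```python
# Motion blur during one strobe flash: a droplet moving at speed v
# smears over a distance v * t_flash in object space.
def strobe_blur_um(velocity_m_s: float, flash_s: float = 200e-6) -> float:
    """Distance in micrometers a droplet travels during one flash."""
    return velocity_m_s * flash_s * 1e6

# Illustrative: a 10 m/s droplet smears ~2000 um during a 200 us flash,
# far too much for micrometer-scale features; a 1 us pulse gives ~10 um.
blur_strobe = strobe_blur_um(10.0)
blur_short = strobe_blur_um(10.0, 1e-6)
```

This is why pulsed lasers, with nanosecond pulse widths, are preferred when individual droplets must be resolved sharply.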

2.2 Incandescent light

Incandescent lighting produces light by heating a wire filament. These lamps emit over a broad spectrum of roughly 300–1500 nm, flicker at 60–120 Hz, have an orange-yellow color cast, and are considered harsh for imaging purposes. They were widely used in the past for scientific imaging owing to their color being similar to natural sunlight. However, low lamp efficacy (lumens per watt), low luminaire efficiency, poor controllability of the light source, and incoherence presented challenges. Tungsten filament and halogen lamps are typical examples; xenon arc lamps, though often grouped with them, are strictly arc (discharge) sources.

2.3 Fluorescent light source

When electricity is passed through mercury vapor in a glass tube, the radiation emitted interacts with the phosphor coating on the glass to produce white light; the illumination is bright, and this is known as fluorescent lighting. However, most fluorescent lamps cannot be dimmed, and they may flicker. Fluorescent light commonly has wavelengths in the visible spectrum of 400–700 nm and operates at frequencies of 10 kHz to 100 MHz. Fluorescent lamps have a negative reputation in photography due to the blue or green color cast they produce. Flicker correction using electronic ballasts and radio-interference removal using suitable filter circuits might need to be applied.

2.4 Lasers

Lasers have come to be regarded widely as the most versatile source of light for imaging applications due to their many desirable properties, such as high brightness, stability, longevity, narrow spectral bandwidth, narrow beam divergence, a high degree of spatial and temporal coherence, and well-defined polarization properties. A disadvantage of laser beams is their Gaussian intensity profile: particles in the middle of the beam are well illuminated while particles at the edges are under-illuminated. Some of the common types of lasers used in laboratories for research purposes are discussed below.

2.4.1 Solid state lasers

These are among the most commonly used laser systems, e.g. the Nd:YAG laser. They operate at the infrared 1064 nm wavelength, but the pulses can be frequency-doubled to 532 nm or converted to higher harmonics at 355 nm/266 nm. They can be operated in both pulsed and continuous modes and have an average power density of 5 × 10−3 W/cm2 (over a 10 s exposure).

2.4.2 Gas lasers

In gas lasers, an electric current is discharged through a gas or plasma to produce coherent light in the ultraviolet (excimer or nitrogen lasers) and visible range (He-Ne or ion-gas lasers). Excimer lasers typically operate at 193/248/308/351 nm with a pulse repetition rate of ∼100–200 Hz and a pulse duration of ∼10 ns. He-Ne lasers typically operate at 632.8 nm. These lasers can produce beams with a near-Gaussian or super-Gaussian profile. Molecular gas lasers emit in the infrared region and are popularly known as infrared (IR) lasers; they can emit in the 2–1000 μm range, at frequencies between 300 GHz and 10 THz. Since the optical depth of the jet breakup region in the infrared regime is smaller than in the visible spectrum, infrared lasers can probe dense regions of the spray more effectively than their visible and ultraviolet counterparts [6].
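The optical-depth argument can be made concrete with the Beer-Lambert law: the transmitted (ballistic) fraction of light falls as exp(−τ), so even a modest reduction in optical depth τ in the infrared yields far more usable signal. The τ values below are purely illustrative assumptions, not measured spray data.

```python
import math

# Beer-Lambert transmission through a scattering/absorbing medium.
def transmitted_fraction(tau: float) -> float:
    """Fraction of incident light transmitted at optical depth tau."""
    return math.exp(-tau)

# Illustrative optical depths only: halving tau from a visible-light
# value of 6 to an infrared value of 3 raises the transmitted fraction
# from roughly 0.25% to roughly 5%.
t_visible = transmitted_fraction(6.0)
t_infrared = transmitted_fraction(3.0)
```

The exponential dependence is why a seemingly small change in optical depth makes a dense spray region accessible to one wavelength band but opaque to another.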

2.4.3 Ionized-metal vapor lasers

These are an important tool for high-speed flow visualization, e.g. copper vapor lasers. Copper vapors are used as the lasing medium. The emitted pulses are in the green/yellow spectral range (510 nm/578 nm) with a pulse width in the range of 5–60 ns.

2.4.4 Mode-locked lasers

Mode locking, or phase locking, is a technique used for achieving ultrashort pulses, on the order of picoseconds or femtoseconds. The laser's operating modes periodically interfere constructively to produce an intense burst of light with a peak power several orders of magnitude higher than the average power.

Lasers are also classified as continuous wave and pulsed lasers. In continuous wave lasers, the emitted light intensity is constant as a function of time. In pulsed lasers, the energy is released in the form of a short pulse of light.

2.5 LED

LED illumination has recently gained popularity in a number of imaging applications. LED arrays produce intense, even illumination over a given object or area, have relatively low power requirements, generate very little heat, have a very long life, and their pulse widths can be finely controlled. LEDs can produce coherent light under specific conditions, as in laser diodes, and can be operated in continuous or pulsed mode. Their emission spectra are broader than those of lasers.

An integral component of any imaging system is the camera. Almost all of the imaging nowadays is carried out by digital cameras of either the CCD type (charged coupled device) or CMOS type (complementary metal–oxide–semiconductor). A comparison between the two sensor types is presented in Table 1. Figure 3(a) shows a simplified CCD sensor architecture and Figure 3(b) shows a simplified CMOS sensor architecture.

Parameter                     CCD                    CMOS
Signal-to-noise ratio (SNR)   Higher                 Lower
Resolution                    Higher                 Lower
Repetition rates              Low                    High
Speed                         Moderate to high       Higher
Cost                          Higher                 Lower
Sensitivity                   More light-sensitive   Less light-sensitive
Power usage                   Higher                 Lower
Dynamic range                 High                   Moderate
Uniformity                    High                   Low to medium
Windowing                     Limited                Extensive

Table 1.

Comparison of operating characteristics of CCD and CMOS sensors [7, 8, 9].

Figure 3.

Schematic of a typical (a) CCD sensor (left) and (b) CMOS sensor architecture (right) [7].

2.6 CCD

A CCD is an image sensor whose working principle is based on the photoelectric effect. It produces electrical charges proportional to the light intensities incident at different locations on the sensor, which are then converted to digital values by "shifting" the signals one at a time between stages within the device. A photoactive capacitor array captures a two-dimensional picture of the scene; each capacitor transfers its charge to its neighbor, after which the charge is converted into a voltage. The total number of image frames acquired is limited by the on-chip storage capacity. The transferred voltage signals are damped at very high frequencies, and operating the CCD at very high frame rates heats the sensor, with accompanying thermal noise. The practical frame rate limit is on the order of 1000 fps, though specialized designs have reached up to 100 Mfps (mega frames per second) at a spatial resolution of 312 × 260 pixels per frame [10, 11]. An intensified CCD (ICCD) is a CCD coupled with an image intensifier to achieve high sensitivity in ultra-low-light conditions; ICCDs provide better temporal resolution and are suitable for capturing transient events.

2.7 CMOS

A CMOS image sensor consists of an integrated circuit containing an array of pixel sensors, each pixel containing a photodetector and an active amplifier. Each pixel can be read individually, which enables fast clocking speeds (the time taken to read the charge of the sensor) and high frame rates. CMOS devices are highly immune to noise and have low static power consumption. Since CMOS sensors have readout transistors at every pixel, some of the photons falling on the chip hit the transistors instead of the photodiode, lowering the light sensitivity of the CMOS chip; CMOS sensors are therefore less suited to faint/low-light conditions and require longer exposures. Some of the latest CMOS cameras can reach frame rates of 285,000 fps at reduced resolution, or up to 2000 fps at full 1080 HD resolution.

In order to effectively visualize the liquid breakup region under various illumination and density constraints, it is important to have a good understanding of the essential basics of photography. Some of these parameters, e.g. field of view, frame rate, exposure, aperture, magnification, depth of field, depth of focus, and dynamic range, shown in Figure 4, are discussed below.

Figure 4.

Schematic of a typical camera and the different imaging parameters.

2.8 Field of view

The field of view of an optical instrument is the solid angle over which the camera is sensitive to light. It defines the area that the camera is able to record and is a function of the working distance, the focal length of the lens, and the sensor/film area.

2.9 Frame rate

Frame rate is the number of consecutive images recorded per unit time by an imaging device, mostly expressed in frames per second (fps) or hertz (Hz). For a fast event like atomization, a low frame rate results in jerky, discontinuous motion, while a very high frame rate requires high processing power and storage to yield sharp, high-quality images of the phenomena in motion.
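This trade-off can be quantified: the frame rate needed so that a droplet moves only a few pixels between consecutive frames follows directly from the droplet speed and the object-space pixel size. The values below are illustrative assumptions.

```python
# Frame rate required so a droplet displaces at most `max_px` pixels
# (in object space) between two consecutive frames.
def required_fps(velocity_m_s: float, pixel_size_m: float,
                 max_px: float = 5.0) -> float:
    return velocity_m_s / (pixel_size_m * max_px)

# Illustrative case: a 20 m/s droplet imaged at 34 um per pixel needs
# roughly 120,000 fps to keep frame-to-frame motion under 5 pixels.
fps = required_fps(20.0, 34e-6)
```

For typical injection velocities this lands well above standard video rates, which is why specialized high-speed cameras dominate primary-breakup studies.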

2.10 Exposure

It is the amount of light collected by the camera sensor during a single image capture. Too long an exposure leads to excessive light on the sensor, resulting in a washed-out appearance; too short an exposure leads to insufficient light, resulting in a dark image. Exposure depends on the frame rate and camera aperture. The exposure time in CCD and CMOS cameras is set by an electronic shutter that is controlled manually, electronically, or by software.

2.11 Aperture

Aperture is the size of the lens opening, which limits the amount of light entering the camera and falling on the image plane. It controls the depth of field: the background can be blurred with a wide aperture, keeping just the object in focus, or everything can be kept in focus by using a narrow aperture. A wide aperture also results in a higher degree of optical aberrations (distortions) and vignetting (falling intensity toward the edges of the picture).

2.12 Magnification

Magnification is the degree of scaling (enlarging or diminishing) of a subject on the image plane. There are two ways to represent magnification: (a) linear or transverse magnification, Y/X, where Y is the image length and X the subject length; and (b) angular magnification, the ratio of the angle subtended by the image to the angle subtended by the object.

2.13 Depth of field

It is the range of distance along the optical axis between the nearest and farthest objects in a scene that appear to be in focus (sharp) in the photograph.
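A common close-focus approximation makes the magnification trade-off explicit: DOF ≈ 2Nc(m + 1)/m², where N is the f-number, c the circle of confusion, and m the magnification. The numerical values below are illustrative, not from the text.

```python
# Close-focus depth-of-field approximation: DOF ~ 2*N*c*(m + 1) / m**2,
# with N the f-number, c the circle of confusion, m the magnification.
def depth_of_field_mm(f_number: float, coc_mm: float, m: float) -> float:
    return 2.0 * f_number * coc_mm * (m + 1.0) / m**2

# Illustrative values: at 2x magnification, f/8 and a 0.02 mm circle of
# confusion, the in-focus slab is only ~0.24 mm thick; dropping the
# magnification to 0.5x widens it to ~1.92 mm.
dof_high_mag = depth_of_field_mm(8.0, 0.02, 2.0)
dof_low_mag = depth_of_field_mm(8.0, 0.02, 0.5)
```

The inverse-square dependence on magnification is the reason high-magnification droplet imaging struggles to keep a moving liquid surface in focus.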

2.14 Depth of focus

This is the limit of the image plane displacement at which the image will appear sharp. Depth of focus refers to image space, while the depth of field refers to object space.

2.15 Dynamic range

It is the ratio of the saturation level of a pixel to its signal threshold, or simply the ratio between the maximum and minimum measurable light intensities. Since the dynamic range of camera sensors is far less than that of the human eye, local tone mapping and dynamic range adjustment are used.
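In practice this ratio is usually quoted in decibels or stops. A minimal sketch, assuming a hypothetical sensor with a 40,000-electron full-well capacity and a 10-electron noise floor:

```python
import math

def dynamic_range_db(i_max: float, i_min: float) -> float:
    """Dynamic range in decibels: 20*log10(max/min)."""
    return 20.0 * math.log10(i_max / i_min)

def dynamic_range_stops(i_max: float, i_min: float) -> float:
    """Dynamic range in stops (doublings of intensity)."""
    return math.log2(i_max / i_min)

# Assumed sensor: 40,000 e- full well, 10 e- read-noise floor.
db = dynamic_range_db(40_000, 10)        # about 72 dB
stops = dynamic_range_stops(40_000, 10)  # about 12 stops
```

A bright jet core next to dimly lit ligaments can easily exceed such a range, which is why tone mapping or multiple exposures are needed.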

The spatial resolution of an imaging system is its ability to distinguish separate objects within its field of view. For the same sensor size, increasing the field of view decreases the image resolution.
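This inverse relation is easy to quantify: for a fixed pixel count, the object-space pixel size scales linearly with the field of view. A small sketch with assumed values:

```python
# Object-space pixel footprint for a sensor of fixed pixel count:
# enlarging the field of view proportionally enlarges each pixel's
# footprint on the object, reducing resolution.
def object_pixel_um(fov_mm: float, pixels: int) -> float:
    return fov_mm / pixels * 1000.0

# Assumed 1024-pixel-wide sensor: doubling the field of view from
# 10 mm to 20 mm doubles the object-space pixel size.
narrow = object_pixel_um(10.0, 1024)  # ~9.8 um per pixel
wide = object_pixel_um(20.0, 1024)    # ~19.5 um per pixel
```

Choosing the field of view is therefore always a compromise between capturing the whole breakup region and resolving the smallest droplets within it.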

Illumination setup plays a vital role in determining the clarity of an image, and various lighting techniques exist [5]. The most commonly used method is frontal lighting, in which the camera and illuminating source are placed in front of the jet and the light reflected from the breakup entities is recorded. Frontal lighting gives a three-dimensional appearance that helps in visualizing the liquid surface features, but this arrangement fails to yield any inner details of the jet breakup. Illumination of a section of the liquid by a light sheet is followed mostly in cases of axisymmetric flows; weak elastic scattering by the droplets at the breakup location helps to reveal the inner details of the breakup region in the plane of illumination, although multiple scattering is often a major drawback. Backlighting is a technique in which the light source is placed behind the object and a translucent glass is located between the object and the light source. The translucent screen diffuses the flash uniformly over a wider region, illuminating the entire breakup section; this diffuse light can produce diffuse reflections from the object being backlit, resulting in soft and blurry edges. Bidirectional lighting places two light sources at an angle of 120° from the camera's line of sight, illuminating the breakup location.


3. Diagnostics

Different optical techniques are used to characterize the primary breakup process; e.g. photography, shadowgraphy, holography, ballistic imaging, jet core illumination, laser-induced fluorescence, thermal imaging, Mie imaging, and X-ray phase contrast imaging. Table 2 summarizes the application situations of the different diagnostics and the type of information they provide.

Photography
  Underlying feature: Reflection - the jet is illuminated by an external light source and imaged.
  Type of breakup: Any
  Information: Image of the focused jet surface (ligaments, drops, liquid core).
  Limitations: Resolution, depth of field, frame speed; 2D.

Shadowgraphy
  Underlying feature: Transmission - the projected shadow intensity of the jet periphery is imaged.
  Type of breakup: Less to moderately dense jet breakups
  Information: Jet peripheral structures (ligaments, drops, liquid core).
  Limitations: No information about the frontal and rear jet surface structures; information loss due to shadow overlap; 2D.

Holography
  Underlying feature: Interference between reflected/refracted wave fields from the object and a coherent reference wave is imaged and then reconstructed.
  Type of breakup: Less to moderately dense jet breakups
  Information: 3D information in the form of 2D images.
  Limitations: Information loss due to diffraction pattern overlap; low depth of focus.

Ballistic imaging
  Underlying feature: Transmission - ballistic photons are imaged.
  Type of breakup: Dense jet breakups
  Information: Spray structures inside the optically dense atomized regime as well as on the surface.
  Limitations: Opaque to the jet core structures; limited resolution due to large scattering; 2D.

Jet core illumination
  Underlying feature: Light propagates through the liquid jet core, which fluoresces when a fluorescing dye is added to the liquid.
  Type of breakup: Any
  Information: Breakup location, elements of the breakup region, liquid jet surface features.
  Limitations: Not easily adaptable; high scattering for dense breakup regions; no information on droplet size or velocity; 2D.

Laser-induced fluorescence
  Underlying feature: The liquid jet, illuminated by a laser, fluoresces.
  Type of breakup: Less to moderately dense jet breakups
  Information: Jet breakup structures in a specific planar section of the jet and the surrounding vapor.
  Limitations: Scattering; non-uniform intensity distribution; attenuation; interference from other species; 2D; weaker than the Mie signal.

Mie imaging
  Underlying feature: Light scattering signal from jet breakup particles with sizes of the order of the scattered light wavelength.
  Type of breakup: Dense jet breakups
  Information: Jet breakup structures in a specific planar section of the jet; particle size.
  Limitations: Scattered and refracted rays interfere, leading to a ripple effect; assumes a spherical particle.

X-ray phase contrast imaging
  Underlying feature: Contrast due to the phase shift undergone by the beam upon its passage through the sample, together with photon absorption and scattering.
  Type of breakup: Dense jet breakups
  Information: Three-dimensional jet surface topology projected on a two-dimensional image; structures inside the jet core.
  Limitations: Inability to distinguish the front and back surfaces.

Thermal imaging
  Underlying feature: Temperature visualization - laser-induced emission of thermographic phosphors or organic tracers (fluorescence and phosphorescence).
  Type of breakup: Any
  Information: Jet breakup structures in a specific planar section of the jet and the surrounding vapor.
  Limitations: Quenching; cross-talk; optical thickness; temperature gradient; attenuation; scattering; 2D.

Table 2.

A comparison of the different diagnostics available for visualizing primary breakup.

3.1 Photography

Photography in its simple form consists of an illumination source illuminating the object and a camera recording the images of the object. Photography of a jet breakup is used for measuring breakup lengths, drop sizes, droplet distribution density, jet and droplet velocities, fluid flow behavior, etc. In some cases, a diffuse screen may help scatter the light incident on the object, and a digital display helps in visualizing the image in real time, as shown in Figure 5. Imaging can be in single-shot, low-speed, or high-speed modes. Illumination can be provided by any of the schemes discussed previously, e.g. a strobe synchronized with the camera frame rate in the forward light-scattering configuration [12, 13, 14]. Incandescent or pulsed laser sources can also be used.

Figure 5.

Optical setup of the photography technique.

Figure 6 shows a high-speed photograph of a jet surface undergoing primary breakup in still air [15], where the jet was backlit at 45° from the horizontal and shielded from ambient light (Figure 7).

Figure 6.

Photographic image of jet surface primary breakup in still air [15].

Figure 7.

Laser backlit illumination [16].

With increasingly dense sprays, the intensity of the backlit illumination must increase to distinctly reveal the droplets at the breakup location. Magnification is pivotal in distinguishing and measuring atomization entities: droplets, ligaments, and surface features in the micrometer size range require significant magnification. Higher magnification comes at the expense of depth of field, and this presents a challenge toward achieving a focused image of the small, non-stationary liquid surface. Optical systems have been designed for flexibility, such as by using a phase mask [16] consisting of a combination of Fresnel lenses (FL) (Figure 8), where each FL works in tandem with the primary lens to produce a sharp image for a unique object plane. A micro-actuator can be used to translate the detector along the optical axis during image integration [18]. Image-processing algorithms [19] applying the focus-stacking method can modify the phase of an incoherent light wavefront to produce a point spread function (PSF) over a large region of focus, yielding an extended depth of field. For visualization of the breakup phenomena over a wide range of distances, a large field of view with a high depth of focus might be desirable when the magnification is low.

Figure 8.

Optical setup of Bauer [17].

Spatial resolution is often denoted in lines per inch (lpi) or μm and represents how closely two lines can be resolved in an image. Film-based photography has a spatial resolution of ∼100 μm, while digital photographs can achieve resolutions down to 10 μm. Digital image resolution is limited by pixel noise and pixel cross-talk. The spatial resolution also depends on the conical angle subtended by the object at the lens aperture: lens resolution is limited by diffraction at narrow apertures and by optical aberrations at large apertures. For imaging small structures at widely separated points with high resolution, a multiple-segment long-distance microscope combined with a micro-lens and aperture array has been demonstrated [17] (Figure 8).

The finite size of a camera lens with a circular aperture leads to diffraction of the parallel light rays passing through it, forming a diffraction pattern in the image. This usually appears as a central bright spot surrounded by bright rings separated by dark nulls. This two-dimensional far-field diffraction pattern is called the 'Airy disc' (Figure 9). Its angular radius, measured from the center to the first null, equals sin−1(1.22λ/D), where λ is the wavelength of the light and D is the aperture diameter. The diameter of the first dark circle (the width of the disc) defines the theoretical maximum resolution of an optical system: if two objects imaged by the lens are so close that their Airy discs overlap, a blurring effect occurs.

Figure 9.

Airy disc.
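The Airy-disc formula translates directly into a resolution estimate. The sketch below evaluates the angular radius and the corresponding smallest resolvable separation at a working distance, using the small-angle Rayleigh criterion; the wavelength, aperture, and distance are illustrative assumptions.

```python
import math

def airy_angular_radius(wavelength_m: float, aperture_m: float) -> float:
    """Angular radius to the first Airy null: asin(1.22*lambda/D)."""
    return math.asin(1.22 * wavelength_m / aperture_m)

def min_separation_m(wavelength_m: float, aperture_m: float,
                     distance_m: float) -> float:
    """Smallest resolvable object separation at a working distance
    (small-angle Rayleigh criterion)."""
    return distance_m * airy_angular_radius(wavelength_m, aperture_m)

# Illustrative: 532 nm light through a 25 mm aperture at a 0.5 m
# working distance gives a diffraction limit of roughly 13 um.
d_min = min_separation_m(532e-9, 25e-3, 0.5)
```

This is one reason why very small droplets blur regardless of sensor resolution: the diffraction limit, not the pixel pitch, becomes the bottleneck.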

Camera calibration is done both geometrically and photometrically. Geometric calibration involves using a Tsai grid (two planes at right angles with checkerboard patterns, as in Figure 10), from which the scaling factors between the image and the actual target dimensions are obtained. In photometric calibration, a relation between digital counts and luminance is sought by capturing test patches of known relative luminance. Photography also has some drawbacks: mainly its inability to resolve a three-dimensional perspective of the object accurately, its inability to see through optically dense droplets or liquid structures, and its limitations with respect to depth of focus and magnification. For cameras, high spatial resolution comes at the expense of reduced pixel size and therefore a reduced light-sensitive area and a reduced signal-to-noise ratio. The absence of cameras and light sources with repetition rates high enough to capture fast motion continuously is another limitation of photography. Figure 11 shows a setup [20] for obtaining high-magnification video images of a jet breakup. In this setup, a high-speed CCD camera fitted with a long-distance microscope lens faces the spray, which is illuminated with a flat-faced halogen lamp from the rear. A computer records and stores the high-speed movie at 2000 frames/s. The field of view in this case was 13.9 mm × 8.9 mm, with an image resolution of 29.6 pixels/mm.

Figure 10.

Tsai calibration grid.

Figure 11.

High magnification video setup [20].
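The numbers quoted for the high-magnification setup above can be cross-checked: at 29.6 pixels/mm, the 13.9 mm × 8.9 mm field of view maps to roughly 411 × 263 pixels, and a two-pixel (Nyquist) sampling rule puts the smallest reliably measured feature near 68 μm.

```python
# Cross-check of the quoted setup: 13.9 mm x 8.9 mm field of view
# imaged at 29.6 pixels/mm.
px_per_mm = 29.6
fov_mm = (13.9, 8.9)

# Field of view expressed in sensor pixels.
sensor_px = tuple(round(d * px_per_mm) for d in fov_mm)

# Two-pixel (Nyquist) rule for the smallest measurable feature, in um.
min_feature_um = 2.0 / px_per_mm * 1000.0
```

Droplets much smaller than this bound would appear in such images but could not be sized reliably.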

3.2 Shadow imaging

Shadowgraphy is a technique of imaging the shadow of a refractive index field. When light passes through a region of varying refractive index (Figure 12), it experiences retardation proportional to the material's density and bends toward the region of higher refractive index. The angular deflection and displacement of the rays are small. The local beam intensity is not significantly affected, but the angular deviation is enough to produce a focusing effect above the higher refractive index regions as the beam propagates beyond the fluid layer. This, coupled with the retardation, causes the wavefronts to turn, so that rays converge and diverge into bright and dark regions.

Figure 12.

Formation of shadow image by relative deflection of rays.

The refractive deflection of rays causes a shadow effect (a spatial modulation of the light-intensity distribution) in the recording plane, which is then imaged. The image intensity thus depends on the variations in the optical density of the object media. A portion of the incident light refracts at the fluid-interface boundary and may produce a darker boundary region. The portion that does not interact with the object produces a homogeneous background. The dark regions (shadows) in a shadowgram mark the boundaries of the object. Light transmittance is also affected by scattering and absorption.

Figure 13 shows a shadowgraph image [21] of quenching oil jet spray from a 1.2 mm atomizer nozzle at 15 m/s and 38°C. Shadowgraphy has been used to visualize liquid jet breakup [23, 24] at different length scales in order to determine ligament and droplet sizes and speeds. Shadowgraphs can be subjected to further image analysis to detect the liquid contour and its individual features, based either on comparison of the RGB (red-green-blue) intensity levels against a pre-established threshold or on identification of local changes of RGB intensity, which are greatest at the fluid boundaries. Shadowgraphy differs from backlit photography despite a light source being located behind the object in both cases: in backlit photography, the background lighting is diffuse in nature, the liquid frontal surface is illuminated, and the reflected light is directly photographed with a camera, whereas in shadowgraphy the shadow image formed on a screen is imaged by the camera. For optically dense objects, the intensity variation inside the image is insignificant compared with the surrounding bright background intensity; in such cases a dark shadow image with a bright background results, and any analysis of density variations would be irrelevant. Shadowgraphs provide phase information (refractive-index gradients) [21, 25] or simply a monochromatic projected shadow image of the object [22, 26, 27, 28]. They can also help in visualizing convective flows [29].

Figure 13.

Shadowgrams of (a) quenching oil jet spray [21] and (b) water jet spray [22].
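The threshold-based contour detection described above can be sketched in a few lines of NumPy: pixels darker than a chosen level are classed as liquid, and boundary pixels are liquid pixels with at least one background 4-neighbour. The threshold value and the synthetic test image are illustrative.

```python
import numpy as np

# Classify pixels darker than `threshold` as liquid (shadow).
def liquid_mask(gray: np.ndarray, threshold: float) -> np.ndarray:
    return gray < threshold

# Boundary pixels: liquid pixels with at least one background
# 4-neighbour (north, south, west, east).
def boundary(mask: np.ndarray) -> np.ndarray:
    m = mask.astype(np.uint8)
    pad = np.pad(m, 1, constant_values=0)
    neigh_min = np.minimum.reduce([pad[:-2, 1:-1], pad[2:, 1:-1],
                                   pad[1:-1, :-2], pad[1:-1, 2:]])
    return (m == 1) & (neigh_min == 0)

# Synthetic shadowgram: a dark 3x3 'droplet' on a bright background.
img = np.full((9, 9), 255.0)
img[3:6, 3:6] = 20.0
edge = boundary(liquid_mask(img, 128.0))
```

Real shadowgrams need additional steps (background division, adaptive thresholds, hole filling), but the contour logic is the same.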

Figure 14 shows a typical laboratory setup using incident parallel laser light forming a focused shadowgraph image on a screen; this is known as 'focused shadowgraphy'. Light from the source is passed through a spatial filter, collimated using a collimating lens, and allowed to pass through the object. After passage, the beam is focused by a relay lens, which forms a real inverted image on a translucent or ground-glass screen or a photographic film. The camera is focused on this image from the other side of the translucent screen. Alternatively, the primary shadowgraph formed by the relay lens can be recorded directly with a camera simply by focusing the camera lens on the plane of the primary shadowgram; this also allows variable magnification to be achieved. The relay lens somewhat limits the field of view of focused shadowgraphy, but structures down to the micro scale can be resolved using appropriate magnification. The use of photographic film yields better resolution [26, 30, 31].

Figure 14.

‘Focused shadowgraphy’ setup (shadowgraph is formed on glass plate or film).

Shadowgraphy is easy and inexpensive to carry out, has a high spatial resolution for most practical purposes, is independent of the shape and material of the phase media and its relatively large field of view is well suited for size, shape, position, and velocity determination.

A disadvantage of shadowgraphy is that, since it gives a projected image of the object, any overlapping ligament or droplet information is lost if its projection does not fall along the jet periphery or is obscured by a larger structure in its line of sight (see Figure 15). It fails to give details of the inner fluid structure in the high-density region of the primary breakup. It is also unable to distinguish between fuel vapor and small droplets. Image sensitivity, blur, and feature size are always competing issues in shadowgraphy, which makes precise measurements difficult. The formation of caustics in shadowgrams is a particular disadvantage when the caustics become confused with some other phenomenon; for instance, a boundary layer can act as a cylindrical lens, focusing light into a bright line.

Figure 15.

Shadowgraph’s inability to image entities hidden or obscured by the projection of the larger structures.

Basic shadowgraphy has been adapted into related imaging techniques such as microscopic shadowgraphy, stereoscopic shadowgraphy, and holographic shadowgraphy [32].

Spark-shadowgraph technique [33, 34], High-speed shadowgraphy (HSS) [35, 36], and Specialized Imaging Shadowgraph (SIS) are some of the other related developments.

The tomographic shadowgraphy technique [37] is based on a multiple-view camera setup and is capable of resolving the liquid jet core both spatially and temporally (Figure 16). The spray shadowgraphs are obtained using a pulsed LED and four synchronized double-frame cameras angled 30° to each other, and the spray is then reconstructed using a line-of-sight reconstruction technique. Stereoscopic shadowgraphy [38] uses four parabolic mirrors to form two inclined intersecting beams, with the object placed at the intersection. Two cameras synchronized in a master-slave configuration record the shadowgraph pairs automatically, and three-dimensional information is recovered upon image reconstruction.

Figure 16.

Tomographic shadowgraphy setup [37].

3.3 Holography

Holograms are made by the interference between a wave field scattered from the object and a coherent background, called the reference wave [39, 40, 41]; they permit simultaneous focusing throughout a three-dimensional volume [42]. A sketch of a digital in-line holography setup is shown in Figure 17. The optical setup consists of an incident light source (laser). The incident beam is collimated using a beam expander (objective lens + spatial filter) and convex-lens combination and then split into object and reference beams using a beam splitter. The object beam is passed through the object, where it suffers diffraction and a phase change. Both the reference and object beams are adjusted for intensity before being recombined via a beam splitter. The resulting interference pattern is then recorded on a CCD sensor at the hologram plane, which is at a finite distance from the true image plane. Magnification can be introduced by placing a convex relay lens after the object beam has passed through the object, and this magnified hologram is then recorded on the CCD sensor. The recorded hologram is reconstructed using a computer program based on a convolution-type approach, which solves the Rayleigh–Sommerfeld formula for the reconstruction of a wave field [40, 44] using the Fast Fourier Transform algorithm (or the Fresnel–Kirchhoff formula for high numerical aperture, or one of the other Fraunhofer-transform/wavelet-transform schemes).

Figure 17.

The optical system of parallel phase-shifting digital holography [43].

Holography eases the limitation of the depth of field and provides a three-dimensional image made up of two-dimensional image planes focused at different distances along the axis perpendicular to the image plane [45]. Unaffected by the non-spherical droplets and ligaments that are usually encountered very close to the injector exit, it also has a virtually unlimited depth of focus and high magnification, even for a small field of view. Depending on the angle between the reference and object waves, a zero-reference-angle setup is called in-line holography and a non-zero-reference-angle setup is called off-axis holography. Holograms can be recorded on photographic film [46] or digitally on a CCD [47, 48, 49]. Though film-based holography provides a high spatial resolution, the time involved in the wet processing of film makes it ill-suited to measuring dynamic phenomena.

Digital holography enables quick recording and retrieval of hologram information in real time. In Figure 18, ρ is the distance between a point in the hologram plane and a point in the reconstruction plane, ξ, η are coordinates in the reconstruction image plane, and x, y are coordinates in the recording hologram plane. The hologram is reconstructed at different spanwise distances so as to focus on the particles centered at those distances from the camera in the jet breakup field. Different modifications of the reconstruction process have been adopted by various researchers to serve specific purposes: for example, viewing the object at different viewing angles and with different focal lengths within the Fresnel domain [50], reconstructing the image of a diffusively reflecting object using only the phase data of the complex amplitude [51], digitally compensating the aberrations that arise during the normal holographic reconstruction process [40] (by expanding the recorded wavefront to increase the spatial resolution), and overcoming the disruption in the visibility of nearby focused objects due to the diffraction effects caused by out-of-focus objects [52]. The relay lens used for increasing magnification works well at low magnification but causes aberration and noise at higher magnification.
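The numerical refocusing described above can be sketched with an angular-spectrum propagator, a standard FFT implementation of the Rayleigh–Sommerfeld transfer function. The grid, wavelength, recording distance, and droplet size below are all assumed for illustration:

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field a distance z using the angular-spectrum
    (Rayleigh-Sommerfeld) transfer function, evaluated with FFTs."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(2j * np.pi * z * kz), 0.0)  # evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy in-line "hologram": plane wave diffracted by an opaque 30-um disc (droplet).
n, dx, lam = 512, 2e-6, 532e-9
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
obj = (X**2 + Y**2 > (15e-6)**2).astype(complex)        # droplet blocks the beam
holo = np.abs(angular_spectrum(obj, lam, dx, 5e-3))**2  # intensity on the sensor

# Numerical refocusing: back-propagate the recorded amplitude to the object plane.
recon = np.abs(angular_spectrum(np.sqrt(holo), lam, dx, -5e-3))
```

Sweeping the back-propagation distance z digitally refocuses particles at different depths, which is the reconstruction-distance sweep used to analyze the jet breakup field; a real reconstruction must also contend with the twin-image artifact noted later in the text.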

Figure 18.

Hologram reconstruction coordinate system.

In such situations, digital holographic microscopy (DHM) [53] is commonly adopted (Figure 19). This setup can successfully image liquid breakup elements as small as 5 μm. Only one beam is expanded with an objective lens, and this serves as both the reference and the object beam. It is then passed directly through the liquid breakup region onto a CCD. The incident beam in this case is a coherent spherical wave emanating from a point source, obtained by passing the incident beam through a microscope objective followed by a pinhole whose size is of the order of the laser wavelength. A sample hologram of the liquid jet surface and its reconstructed image is shown in Figure 20. The resolution in DHM depends on (1) pinhole size, which controls the spatial coherence and illumination cone; (2) numerical aperture, which is a function of the size and position of the CCD sensor; (3) pixel density and dynamic range, which control fringe resolution and hologram noise level; and (4) the wavelength of the laser light. Even though the depth of focus is small in this case owing to the large numerical aperture, digital reconstruction allows one to focus at different depths of the object. Various modifications of DHM exist. In digital image plane holography (DIPH) [54], a plane of the fluid is illuminated with a laser sheet, and the image of the illuminated fluid plane is made to interfere with a reference beam on the CCD sensor (Figure 21). The depth of focus is limited to the laser sheet thickness. Numerically reconstructing the holograms requires enormous processing power and memory; with Fast Fourier Transform algorithms, the reconstruction time for a single hologram has been brought down to a few tenths of a second on personal workstations. Another issue in adopting holography as a 3D particle characterization tool is its long depth of field [55].
The angular aperture in holography is limited to only a few degrees, which results in poor depth resolution. The finite pixel size of the image sensors limits the angle between the object and reference waves, so the virtual and real images are not fully separated during hologram reconstruction. The field of view is also limited to the sensor size (or less if a divergent beam is used).

Figure 19.

Digital holographic microscopy setup.

Figure 20.

Reconstructed holograms of a jet surface at different reconstruction distances.

Figure 21.

Digital image plane holography (DIPH) setup [54].

The limited resolution capability of digital image sensors leads to the low spatial resolution of the hologram image. Deconvolution methods based on the Wiener filter and iterative routines with point spread functions are used to overcome the depth-of-focus limitation [56]. Other drawbacks include the reconstruction of spurious twin images of particles, leading to ghost images and multiple focusing around the actual depth location of the particles [57].

3.4 X-ray phase contrast imaging

X-ray-based diagnostics have recently found increasing applicability in the study of liquid breakup owing to some inherent advantages: high penetrability of dense breakup regions and the ability to probe the internal mass distribution of breakup regions with good temporal and spatial resolution. Because the refractive index of matter at x-ray wavelengths is very close to unity (unlike at optical wavelengths), an x-ray beam experiences far less refraction than visible light; x-rays can be absorbed at specific wavelengths, but they do not undergo appreciable scattering as visible light does. Phase contrast imaging utilizes the coherence properties of x-rays to enhance edges through near-field Fresnel diffraction. Upon interaction with liquid jets, the x-rays suffer absorption along with a corresponding phase shift due to changes in phase media. Recording this phase shift on a CCD sensor enables phase boundaries to be distinguished owing to improved image contrast; this is termed phase contrast imaging [58].

In a typical setup, a synchrotron x-ray source is used, whose undulator provides the extremely brilliant x-ray beam required for ultra-fast imaging (Figure 22). The x-ray beam directly illuminates the breakup region but is tuned to a photon energy that is not strongly absorbed. When the x-rays pass through the sample they suffer attenuation and a phase shift [59, 60, 61, 62], which creates small variations in the speed and direction of propagation of the x-rays. Interference effects with significant contrast are produced at the feature boundaries: after interaction with the sample, the different wavefronts of the x-ray beam are diffracted, overlap, and interfere with each other, giving rise to a contrast pattern. This contrast depends on the Laplacian of the phase shift undergone by the beam upon its passage through the sample [63]. A fast scintillator crystal (LYSO:Ce or YAG:Ce) converts the x-ray phase contrast pattern into visible light, which is then imaged onto a CCD. Around the edges of features (ligaments and drops), an evolving pattern of light and dark fringes develops, making this technique particularly sensitive to boundaries and interfaces of different phase media [64]. Therefore a bubble (gas surrounded by liquid) will have a dark/bright outer edge while a droplet (liquid surrounded by gas) will have a bright/dark outer edge, as illustrated in the simulations shown in Figure 23(b and c). The CCD camera is coupled to the scintillator using a microscope objective and a 45° mirror and records the images at high speed. The exposure time (∼150 ps) for each image is achieved by shuttering and timing the pulsed x-ray beam. To reduce the heat load of the beam, shuttering is usually employed, whose synchronized operation can cut off more than 99% of the beam heat power.

Figure 22.

X-ray phase contrast imaging experimental setup.

Figure 23.

(a) Phase contrast image of jet surface structures, and simulated images of (b) an air bubble in water and (c) a water droplet in air, respectively [59].
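The dependence of the edge contrast on the Laplacian of the phase can be sketched with the near-field (transport-of-intensity) approximation, I(z) ≈ I0[1 - (zλ/2π)∇²φ]. All numbers below (a ~20 keV beam, a water-like refractive-index decrement, a 50 μm drop) are illustrative assumptions, not data from the cited experiments:

```python
import numpy as np

# Near-field approximation for propagation-based phase contrast:
#   I(z) ~ I0 * (1 - z*lam/(2*pi) * Laplacian(phi))
# delta is the assumed refractive-index decrement of water at ~20 keV.
lam, z, delta, R = 6.2e-11, 0.3, 5.8e-7, 50e-6
n, dx = 256, 1e-6
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2
thickness = 2.0 * np.sqrt(np.maximum(R**2 - r2, 0.0))   # projected chord of a sphere
phi = -2.0 * np.pi * delta * thickness / lam            # phase shift through the drop

# Discrete 5-point Laplacian of the phase map.
lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
       np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi) / dx**2
I = 1.0 - z * lam / (2.0 * np.pi) * lap                 # bright/dark fringe pair at the edge
```

The sharp curvature of the projected thickness at the drop rim drives the Laplacian through large positive and negative values there, producing the bright/dark fringe pair at the interface; reversing the sign of the phase (a bubble rather than a droplet) flips the fringe order, as in Figure 23.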

The phase contrast images need to be normalized to exhibit the contrast/intensity gradients more clearly. This technique is particularly suitable for studying the dense primary breakup near-injector region. The image resolution is generally a function of the detector resolution, while the phase sensitivity is a function of the source-to-specimen and specimen-to-detector distances. Phase contrast imaging has been applied to the study of spray breakup [58, 65] and as a simulation tool [66].

Some disadvantages of this technique are that x-rays are difficult to manipulate, and the experiment often needs to be designed to be remotely controllable owing to safety issues. Breakup surface features, e.g., ligaments and drops located at different positions along the beam path, are indistinguishable in the phase contrast x-ray images.

3.5 Ballistic imaging

Ballistic imaging [67, 68, 69, 70, 71, 72, 73, 74, 75] is a line-of-sight, two-dimensional imaging technique that can overcome the effect of multiple scattering in dense liquid breakup regions. It can provide good-quality images of the dense liquid breakup region, which would otherwise be opaque to simpler imaging techniques. It derives its name from the term ‘ballistic photons’, the group of photons that do not suffer scattering when transmitted through highly turbid media and therefore travel the shortest path. The technique is based on photon time-of-flight selection, in which the transmitted photons are temporally gated to filter out the multiply scattered photons, which are time-delayed in proportion to the optical path length through the probed medium.

When light passes through a highly turbid medium, scattering occurs and the emerging beam comprises ballistic, snake, and diffuse photons (Figure 24).

Figure 24.

Scattering through a dense medium showing ballistic, snake, and diffuse photons [70].

3.5.1 Ballistic photons

These photons pass straight through the medium without scattering. They exit within the same solid angle at which they entered, travel the shortest path, and exit first.

3.5.2 Snake photons

These photons undergo minimal scattering (from one up to about four scattering events) and exit the medium along the incident axis but within a larger solid angle than the ballistic photons.

3.5.3 Diffuse photons

These photons exit the medium after scattering multiple times (five times or more). They have a large photon number density, are scattered into 4π steradians, and exit last. They do not retain a memory of the structure within the material.

A typical ballistic imaging setup [68] is shown in Figure 25. Ultrashort light pulses (∼100 fs), isolated from the background and noise photons by a polarizer, are used as the imaging light to illuminate the breakup region. A time-resolved image is recorded using an ultra-fast time gate (an all-optical shutter such as the optical Kerr gate) short enough to select photons by their time of flight, separating them into ballistic, refractive, and scattered light; the image is then constructed using only the selected class of photons.

Figure 25.

Schematic of ballistic imaging setup.

The optical Kerr gate is obtained by inducing a temporary birefringence in a Kerr-active liquid placed between two crossed polarizers. The ballistic photons are attenuated by the large liquid structures inside a dense breakup region via absorption or refraction, thus creating the intensity modulation required to produce an image.

These are used to form an undistorted, diffraction-limited image of structures inside the optically dense medium. The snake photons are used along with the ballistic photons to improve the signal-to-noise ratio. Since a highly turbid medium produces far more diffuse photons, these must be separated from the ballistic and snake photons. In Figure 25, a light source (a 1-kHz Ti:Sapphire regenerative amplifier, seeded by a mode-locked Ti:Sapphire oscillator) generates 150-femtosecond pulses centered at 800 nm. The light exiting the amplifier is linearly polarized and is split into an optical Kerr effect (OKE) switching beam and an imaging beam using a beam splitter. The OKE gate is a very fast shutter that selects only the leading edge of the image pulse, which contains the ballistic and snake photons. To introduce the beams into the breakup region, the linearly polarized imaging beam is passed through a polarizer and its polarization is rotated by 45°. The imaging beam is then passed through a telescope, which controls the beam size as it crosses the breakup region. Relay optics then focus the beam through the OKE switch, which serves as a shutter (∼2 ps) and is triggered by the switching pulses. When the switching pulse is absent, image transfer to the display screen does not occur. The first polarizer in the OKE gate, which is also the second polarizer in the imaging beam path, passes the polarization orientation of the imaging beam. The imaging beam is then focused into the Kerr-active medium and re-collimated. The second OKE polarizer is oriented normal to the first one to block the unperturbed imaging beam; when the intense electric field of the switching pulse arrives at the Kerr-active medium, it rotates the polarization of the imaging beam, allowing most of it to pass through the second polarizer.
After the imaging beam passes through the OKE gate, the image is relayed through a short-pass filter and displayed on a screen, which is then imaged with a camera. By adjusting the length of the time-delay segment of the imaging beam, a delay can be introduced that controls the temporal overlap between the switching and imaging pulses at the OKE gate for optimum time-gating.
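A toy Monte Carlo can illustrate the time-of-flight selection principle: photons accumulate delay with each scattering event, and a short gate opening at the ballistic arrival passes mostly ballistic and snake photons. The scattering rate, per-event delay, and transit time below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy time-of-flight model of a turbid medium. Scattering-event counts follow
# the classes in the text: ballistic = 0, snake = 1-4, diffuse = >= 5 events.
n_photons = 100_000
n_scatter = rng.poisson(lam=6.0, size=n_photons)             # dense spray: ~6 events on average
extra_path_ps = n_scatter * rng.exponential(3.0, n_photons)  # assumed mean delay per event (ps)
arrival_ps = 10.0 + extra_path_ps                            # 10 ps straight-line transit (assumed)

gate = arrival_ps < 12.0   # ~2 ps Kerr-gate window opening at the ballistic arrival
diffuse_all = (n_scatter >= 5).mean()
diffuse_kept = (n_scatter[gate] >= 5).mean()
print(f"diffuse fraction: {diffuse_all:.2f} before gating, {diffuse_kept:.2f} after")
```

Tightening the gate (or lengthening the per-event delay, i.e. a thicker medium) suppresses the diffuse contribution further, at the cost of rejecting more total light, which is why the snake photons are retained to preserve the signal-to-noise ratio.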

The ballistic image thus obtained (Figure 26) reveals the main jet liquid column, droplet distribution, voids, and jet profile structures. The dark areas shown represent the liquid phase, the light areas are the gas phase, and the speckles and other spurious features are caused by diffraction.

Figure 26.

Ballistic image of a jet breakup [2].

If the droplets in the breakup region are smaller than the spatial resolution limit of the setup (∼40–50 μm), they go undetected, which is a disadvantage. The spatial resolution is limited by the CS2 cell located in the Fourier plane, which acts as a spatial filter to reduce the scattering noise.

3.6 Liquid jet core illumination

Optical connectivity or liquid jet core illumination is another relatively new technique for studying atomization. Here, a light guide illuminates the liquid jet from within the injecting nozzle in a direction parallel to the flow [76, 77].

The liquid jet acts as an optical fiber through which the light propagates, interrupted only at the breakup region (Figure 27). The continuous portion of the liquid jet is made to fluoresce by adding a fluorescent dye (Rhodamine WT). The laser beam is steered through the liquid jet nozzle by means of a light-guide tube or an optical fiber. The dye in the liquid emits fluorescent light along the laser beam path, and this emission is interrupted at the breakup region; the beam intensity should be sufficient to identify this position. The advantage of introducing the laser beam from within the liquid nozzle is that it suffers minimal attenuation and multiple scattering, and the light intensity remains low beyond the surface of the liquid jet. The dye emits at longer wavelengths than the incoming laser beam, so the fluorescence can be detected separately, avoiding background noise on the images due to scattered light. The fluorescent light is emitted from everywhere inside the liquid core and is thus associated with the volume of the liquid. Liquid jet surface features act as minute mirrors that focus the laser beam, producing bright spots along the liquid surface that can be identified as locations of significant change in the surface structure. Short-duration laser pulses allow the image to be frozen in time. The imaging system is a CCD camera fitted with a low-pass filter to suppress the scattered laser light.

Figure 27.

(a) Liquid jet core illumination [76] (b) principle of optical connectivity [2].

This diagnostic promises improved measurements of atomization occurring at the nozzle exit, since scattering (caused by diffusion of the laser light by droplets) persists further downstream of the breakup region. Light propagating in the narrow ligaments that connect detaching liquid masses and the surrounding droplets becomes diffuse, and its intensity is significantly reduced beyond the core breakup region. The detached droplets do not fluoresce, and the area surrounding the breakup point is not illuminated.

As the jet becomes increasingly turbulent, the transition of the smooth jet surface to a rough and wavy one causes the angle of incidence between the laser light rays and the jet surface to fall below the critical angle for total internal reflection. This leads to increased scattering, loss of fluorescence intensity, and bright spots created by the focusing of laser light by jet surface features acting as minute mirrors. In certain situations, such as at high Reynolds numbers, the liquid core can change thickness drastically, becoming very thin and then thick again; the fluid constriction thus created can cause a significant loss of laser light. Moreover, unlike many of the other techniques, this method cannot provide droplet size, location, or velocity measurements.

3.7 Mie scattering

Mie scattering is elastic scattering that occurs when the size of the particles in the liquid breakup is of the order of, or larger than, the wavelength of the incident light. The Mie-scattering intensity is proportional to the total liquid surface area. A number of researchers have used Mie imaging to investigate fuel sprays [78, 79, 80, 81, 82, 83, 84]. An instantaneous Mie scattering image of a jet in crossflow [78] is shown in Figure 28, where the inset shows the original Mie image, which is inverted to yield a dark image of the jet against a white background.

Figure 28.

Instantaneous Mie scattering image [78].

A Mie scattering setup for two-dimensional spray visualization [82, 83, 84, 85, 86] is shown in Figure 29. A wide planar laser sheet, formed using a cylindrical lens, is passed through a particular plane of the liquid jet. The scattered signal normal to the illuminated plane is imaged onto an ICCD camera fitted with a narrowband interference filter. The images are averaged for noise removal, followed by background subtraction, uniform-field correction, and intensity normalization. The clarity of the Mie image depends on the extent of multiple scattering. When the droplet concentration is low and the breakup dimension is small, the detected signal does not suffer attenuation. However, when the droplet concentration is high, the incident beam is attenuated as it traverses the breakup region, and ‘secondary scattering’ from droplets lying between the incident beam and the detector and ‘multiple scattering’ by droplets in the surrounding atomized region cause extraneous light to be detected. Since the Mie intensity is proportional to the square of the particle diameter (∼dp²), a CCD sensor with a large dynamic range is required to simultaneously image the large liquid core and the smallest droplets. Because the scattered cross-section is always larger than the corresponding ligament/droplet diameter, it is difficult to carry out accurate size measurements and distributions from the Mie signals. Spatial resolution is also a significant limitation of Mie imaging.

Figure 29.

Mie scattering setup for two-dimensional spray visualization.
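The dynamic-range requirement implied by the ∼dp² scaling can be estimated directly; the core and droplet sizes below are illustrative values, not measurements:

```python
import math

# Mie signal scales as dp^2, so a liquid core and the finest droplets imaged
# in the same frame span a large intensity range (sizes are illustrative).
d_core, d_drop = 500e-6, 5e-6          # ~500 um core vs ~5 um droplets
ratio = (d_core / d_drop) ** 2         # intensity ratio ~ 1e4
bits = math.ceil(math.log2(ratio))     # sensor bit depth needed to span both
print(f"intensity ratio ~ {ratio:.0f}, requiring ~{bits}-bit dynamic range")
```

A factor of 100 in diameter thus already demands roughly four decades of linear sensor range, which is why high-dynamic-range CCDs are needed for simultaneous core-and-droplet imaging.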

Comparisons are often made between shadowgraphy and Mie imaging; however, Mie imaging visualizes elastically scattered light normal to the plane of the incident light sheet passing through the breakup region, whereas shadowgraphy images the shadow of the breakup region formed by a collimated beam of light.

3.8 Planar laser-induced fluorescence

Planar laser-induced fluorescence (PLIF) is a technique [87, 88, 89, 90, 91, 92, 93] in which a planar laser light sheet is incident on a cross-section of the fluid mixed with a fluorescent dye. The dye causes the illuminated liquid plane to fluoresce, and this fluorescence pattern is imaged by a camera. The laser excitation wavelength lies within the absorption spectrum of the dye. Figure 30 shows a typical PLIF setup.

Figure 30.

PLIF setup.

In the linear fluorescence regime, the fluorescence intensity is proportional to the laser excitation intensity. The dye should have a large separation between its absorption and emission spectra (e.g., Rhodamine WT/6G/B and Fluorescein). A narrow-band optical filter allows only the fluorescence wavelengths to reach the camera. Figure 31 shows a PLIF image of a jet primary breakup. The characteristic decay time (the time necessary for 86.5% of the fluorescence intensity to be emitted) and the fluorescence intensity require calibration [20, 89]. The PLIF technique suffers from the non-uniform intensity profile of the planar laser sheet and from light attenuation due to absorption by the background dye concentration. High dye concentrations produce nonlinearities in fluorescence because of absorption-related changes in the excitation intensity. The required camera exposure time depends on the spectral sensitivity of the camera’s sensor at the fluorescing wavelength.

Figure 31.

PLIF image showing the liquid jet sheet breakup [88].
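The 86.5% figure quoted for the characteristic decay time follows from single-exponential decay: the cumulative emitted fraction is 1 - exp(-t/τ), which reaches about 0.865 at t = 2τ. A minimal check, where the lifetime value is an assumed, dye-typical number:

```python
import math

# For a single-exponential decay I(t) = I0*exp(-t/tau), the cumulative light
# emitted by time t is 1 - exp(-t/tau); 86.5% is reached at t = 2*tau.
def fraction_emitted(t, tau):
    return 1.0 - math.exp(-t / tau)

tau_ns = 4.0   # assumed lifetime, typical of organic dyes (a few ns)
print(fraction_emitted(2 * tau_ns, tau_ns))   # ~0.865
```

Calibrating this decay curve per dye and solvent fixes the relation between measured intensity and concentration in the images.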

Light emitted from the object plane in PLIF can be either Mie-scattered radiation or fluorescence. To reduce the scattering, the Structured Laser Illumination Planar Imaging (SLIPI) technique is used (Figure 32a). SLIPI [94, 95, 96] imposes a recognizable signature on the incident planar beam (a spatial intensity modulation of the light sheet) that is retained by signal photons; multiply scattered photons are discarded because they lose this signature. Figure 32(b) shows a SLIPI Mie scattering planar image of a hollow cone spray.

Figure 32.

(a) SLIPI beam setup [2] (b) SLIPI Mie scattering planar image of hollow cone spray [2].

3.9 Thermal imaging

Thermal imaging is used for temperature visualization in the breakup region. It uses planar laser-induced emission (fluorescence or phosphorescence) at a suitable excitation wavelength in the UV region, together with thermographic phosphors or organic tracer molecules. The two primary methods are thermographic fluorescence and thermographic phosphorescence.

3.9.1 Thermographic fluorescence

In this technique [7, 97, 98, 99, 100], a fluorescent additive is added to the hydrocarbon liquid in the form of a monomer (M) together with an exciplex-forming organic molecule called a quencher (N0). When this sample is irradiated, M is electronically excited to M*, which then reacts with the ground-state N0 to form the organic exciplex (E*), an excited-state complex. The reaction is represented by

M* + N0 ⇌ E* (1)

Fluorescence is observed from both M* and E*. Typical fluorescence lifetimes are less than 100 nanoseconds, and the populations of M* and E* are strongly temperature-dependent. By comparing the decay lifetimes, or the intensity ratio between two or more emission lines, to a calibrated standard, the droplet temperature distribution in the liquid breakup can be obtained. A typical experimental setup resembles the PLIF setup [97]. An Nd:YAG laser produces a light sheet incident on a liquid plane, exciting the monomer/quencher pairs and leading to exciplex formation and fluorescence from the breakup region, which is photographed at right angles to the plane of the laser light pulse. The liquid- and vapor-phase images are captured with a single ICCD camera using a UV objective and a stereoscope. Different emission lines correspond to different temperature ranges. The spatial profiles of the breakup, droplets, and vapor are distinguishable and measurable. For a multi-component fuel mixture, a two-color laser-induced fluorescence technique can be adopted [101, 102], in which the additive (pyrromethene) exhibits a markedly different temperature-sensitive fluorescence spectrum in two spectral detection bands. Rhodamine B is often used as a fluorescent temperature sensor, as it exhibits significant temperature dependence. This technique can measure the thermal transport properties (mean and fluctuating characteristics of the dynamic and thermal fields) of a turbulent heated jet undergoing atomization.
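The intensity-ratio idea behind two-color LIF thermometry can be sketched as follows; the band sensitivities and calibration constant are invented for illustration and are not data for pyrromethene or Rhodamine B:

```python
import numpy as np

# Two-color LIF thermometry sketch. Each detection band is modeled as varying
# exponentially with temperature, so the band ratio R(T) = C*exp((b1-b2)*T)
# depends on temperature alone: laser intensity and dye concentration cancel.
b1, b2, C = -0.012, 0.003, 1.0          # per-degC band sensitivities (assumed)

def band_ratio(T_c):
    """Forward model: ratio of band-1 to band-2 intensity at temperature T (degC)."""
    return C * np.exp((b1 - b2) * T_c)

def temperature_from_ratio(R):
    """Inverted calibration curve, applied pixel-by-pixel to a ratio image."""
    return np.log(R / C) / (b1 - b2)

print(temperature_from_ratio(band_ratio(60.0)))   # recovers 60.0
```

Because the ratio cancels the local excitation intensity and dye concentration, this approach is robust to the sheet non-uniformity and absorption losses that plague single-band PLIF intensities.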

3.9.2 Thermographic phosphorescence

Thermographic phosphors (e.g., La2O2S:Eu or Mg4FGeO6:Mn) are temperature-sensitive materials that, when excited by UV light, exhibit temperature-dependent emission characteristics. The phosphors are mixed with the liquid forming the jet, and a thin laser sheet is directed at the liquid plane to be imaged. This causes phosphor excitation and subsequent emission from the breakup region [103, 104]. After excitation, the lifetime of a particular emission line is monitored to obtain temperature measurements.

The emitted light is passed through an interference filter and recorded by a fast-framing ICCD camera. After image acquisition, the pixel positions of the ICCD detectors are correlated with one another. Using an exponential fitting procedure, the lifetime and hence temperature information is extracted, yielding a two-dimensional temperature image, as shown in Figure 33, which shows a water spray temperature distribution. Since the temperature of droplets of different sizes after jet breakup changes, due to evaporation and convection, relative to the main jet core, their detection becomes possible with this method.

Figure 33.

Single shot water spray image using thermographic phosphors [103].
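The per-pixel lifetime-fitting step can be sketched with synthetic, noiseless data; the decay curve, frame times, and the τ(T) calibration table below are all assumed for illustration:

```python
import numpy as np

# Per-pixel lifetime extraction (log-linear least-squares fit of the decay),
# followed by a lookup in an assumed tau(T) calibration. Synthetic data only.
t = np.linspace(0.0, 200e-6, 40)                    # frame times, s
tau_true = 50e-6
signal = 1000.0 * np.exp(-t / tau_true)             # one pixel's decay trace

# Log-linear fit: ln I = ln I0 - t/tau, so tau = -1/slope.
slope, intercept = np.polyfit(t, np.log(signal), 1)
tau_fit = -1.0 / slope

# Assumed monotonic calibration tau(T); lifetime decreases with temperature.
tau_cal = np.array([80e-6, 60e-6, 50e-6, 35e-6, 20e-6])
T_cal   = np.array([20.0, 60.0, 100.0, 150.0, 200.0])
T = np.interp(tau_fit, tau_cal[::-1], T_cal[::-1])  # np.interp needs ascending x
print(f"tau = {tau_fit*1e6:.1f} us -> T = {T:.0f} C")
```

In practice the fit is repeated at every pixel (with noise-weighted fitting over a windowed portion of the decay), and the calibration curve is measured for the specific phosphor and detection line.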

This technique has high accuracy, the advantages of remote detection and a high signal yield, and is independent of pressure variations. The excitation energies for both fluorescence and phosphorescence should be kept below a threshold value, as high excitation energies lead to luminescence saturation. Fluorescence lifetimes are on the order of nanoseconds, whereas phosphorescence lifetimes range from microseconds to milliseconds. The decay time is often a very sensitive function of temperature [105]. A phenomenon can therefore be observed only if it occurs within the excited-state period of the phosphor, and the measurement speed is essentially limited by the camera frame rate (MHz). Combining the fluorescence and phosphorescence techniques has been shown to be successful [106, 107] in understanding the breakup and mixing of liquid fuels at high pressures and temperatures similar to those in combustion chambers. Figure 34 shows the experimental setup for thermal imaging by the spectral method.

Figure 34.

Experimental setup of thermal imaging by the spectral method.

4. Conclusion

The need to develop newer spray characterization techniques has always existed, driven by an increasing emphasis on understanding the different unsteady processes involved in liquid atomization, primarily for the aerodynamics and combustion industries. This involves complete characterization of the flow field in terms of density, temperature, pressure, and velocity. Among the techniques discussed here, four are transillumination techniques (shadowgraphy, holography, x-ray phase contrast imaging, and ballistic imaging), in which an incident beam passes through the breaking-up liquid and is imaged on the other side; one is an internal illumination technique (jet core illumination); one is a scattering technique (Mie imaging); and the rest are planar laser-induced fluorescence/phosphorescence techniques. In addition, ordinary high-speed imaging is based on reflected light.

A survey of the experimental diagnostics presently pursued by the scientific community shows that, even though these techniques have improved remarkably over time in the information they provide about the breakup process, they remain limited in their applicability as far as meeting the complete measurement demands is concerned. Alternative non-intrusive laser-based diagnostics that exist at present, such as Phase Doppler Particle Analysis (PDPA), Phase Doppler Anemometry (PDA), Laser Doppler Anemometry (LDA), laser diffraction particle sizing, Laser-Induced Incandescence (LII), and chemiluminescence imaging, work mostly in the gaseous/vapor phase [108] or are concerned mainly with determining particle sizes, velocities, and volume fractions. Almost all present diagnostics are restricted to low Reynolds number flows if accurate spatio-temporal measurements are to be obtained. For fluid flows with very small length and time scales, the required spatial and temporal resolution, as well as the framing frequency, lies beyond the reach of present-day high-speed video cameras.
There is therefore a technological need for ultra-high-speed cameras of sufficient dynamic range and resolution, and for techniques capable of capturing the high-speed atomization process with extremely high spatial and temporal resolution.
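Why present-day cameras fall short can be made concrete with a back-of-the-envelope calculation. The sketch below estimates the exposure time needed to keep motion blur under one pixel and the frame rate needed to track a feature between frames. All input values (jet velocity, pixel size at the object plane, field of view, allowed inter-frame displacement) are hypothetical, chosen only to represent a fast pressure-atomized jet in the near-nozzle region.

```python
# Hypothetical sizing of the imaging requirements for primary breakup,
# illustrating why high-speed video cameras struggle at these scales.

def imaging_requirements(jet_velocity, pixel_size, field_of_view,
                         max_interframe_motion=0.25):
    """Return (max exposure [s], min frame rate [Hz]).

    - Exposure must be short enough that motion blur stays below one
      pixel: t_exp <= pixel_size / jet_velocity.
    - The frame interval must keep a feature's displacement below a
      fraction of the field of view so it can be tracked frame to frame.
    """
    max_exposure = pixel_size / jet_velocity
    min_frame_rate = jet_velocity / (max_interframe_motion * field_of_view)
    return max_exposure, min_frame_rate

t_exp, fps = imaging_requirements(
    jet_velocity=100.0,   # m/s, representative of pressure-atomized jets
    pixel_size=1e-6,      # m at the object plane, to resolve micron ligaments
    field_of_view=1e-3,   # m, near-nozzle breakup region
)
print(f"max exposure: {t_exp * 1e9:.0f} ns")      # -> 10 ns
print(f"min frame rate: {fps / 1e6:.1f} Mfps")    # -> 0.4 Mfps
```

Even these modest assumptions call for nanosecond exposures at hundreds of kiloframes per second, which at full sensor resolution exceeds the capability of most commercial high-speed cameras.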

References

  1. Coghe A, Cossali GE. Quantitative optical techniques for dense sprays investigation: A survey. Optics and Lasers in Engineering. 2012;50:46-56
  2. Linne M. Imaging in the optically dense regions of a spray: A review of developing techniques. Progress in Energy and Combustion Science. 2013;39:403-440
  3. Fansler TD, Parrish SE. Spray measurement technology: A review. Measurement Science and Technology. 2015;26:012002
  4. Soid SN, Zainal ZA. Spray and combustion characterization for internal combustion engines using optical measuring techniques - A review. Energy. 2011;36:724-741
  5. Sridhara SN, Raghunandan BN. Photographic investigations of jet disintegration in airblast sprays. Journal of Applied Fluid Mechanics. 2010;3(2):111-123
  6. Parker T, Rainaldi LR, Rawlins WT. A comparative study of room-temperature and combusting fuel sprays near the injector tip using infrared laser diagnostics. Atomization and Sprays. 1998;8(5):565-600
  7. Khalid AH, Kontis K. Thermographic phosphors for high temperature measurements: Principles, current state of the art and recent applications. Sensors. 2008;8:5673-5744
  8. Thoroddsen ST, Etoh TG, Takehara K. High-speed imaging of drops and bubbles. Annual Review of Fluid Mechanics. 2008;40:257-285
  9. Bachalo WD. Spray diagnostics for the 21st century. Atomization and Sprays. 2000;10:439-474
  10. Vu Truong Son D, Goji Etoh T, Tanaka M, Hoang Dung N, Le Cuong V, Takehara K, et al. Toward 100 mega-frames per second: Design of an ultimate ultra-high-speed image sensor. Sensors. 2010;10:16-35
  11. Crua C, Shoba T, Heikal M, Gold M, Higham C. High-speed microscopic imaging of the initial stage of diesel spray formation and primary breakup. SAE Technical Paper 2010-01-2247; 2010
  12. Lasheras JC, Villermaux E, Hopfinger EJ. Break-up and atomization of a round water jet by a high-speed annular air jet. Journal of Fluid Mechanics. 1998;357(1):351-379
  13. Zhu Y, Oguz HN, Prosperetti A. On the mechanism of air entrainment by liquid jets at a free surface. Journal of Fluid Mechanics. 2000;404:151-177
  14. Varga CM, Lasheras JC, Hopfinger EJ. Initial breakup of a small-diameter liquid jet by a high-speed gas stream. Journal of Fluid Mechanics. 2003;497:405-434
  15. Hoyt JW, Taylor JJ. Turbulence structure in a water jet discharging in air. Physics of Fluids. 1977;20:S253-S257
  16. Ben-Eliezer E, Marom E, Konforti N, Zalevsky Z. Experimental realization of an imaging system with an extended depth of field. Applied Optics. 2005;44:2792-2798
  17. Bauer D, Chaves H, Brucker C. A multiple-segment long-distance microscope for flow visualization and measurements. Measurement Science and Technology. 2009;20:1-4
  18. Nagahara H, Kuthirummal S, Zhou C, Nayar S. Flexible depth of field photography. Proceedings of European Conference on Computer Vision. 2008;4:60-73
  19. Dowski ER, Cathey WT. Extended depth of field through wave-front coding. Applied Optics. 1995;34(11):1859-1866
  20. Leong MY, McDonell VG, Samuelsen GS. Effect of ambient pressure on an airblast spray injected into a crossflow. Journal of Propulsion and Power. 2001;17(5)
  21. Castrejon-Garcia R, Castrejon-Pita JR, Martin GD, Hutchings IM. The shadowgraph imaging technique and its modern application to fluid jets and drops. Revista Mexicana de Fisica. 2011;57:266-275
  22. Osta AR, Sallam K. Nozzle-geometry effects on upwind-surface properties of turbulent liquid jets in gaseous crossflow. Journal of Propulsion and Power. 2010;26(5):936-946
  23. Timmerman BH, Bryanston-Cross PJ, Skeen AJ, Tucker PG, Jefferson-Loveday RJ, Dunkley P, et al. High-speed digital shadowgraphy for high frequency shock tracking in supersonic flows. In: 7th Symposium on Measuring Techniques in Transonic and Supersonic Flows in Cascades and Turbomachines. Stockholm, Sweden; 2004
  24. Sinha A, Prakash RS, Madan Mohan A, Ravikrishna RV. Airblast spray in crossflow – Structure, trajectory and droplet sizing. International Journal of Multiphase Flow. 2015;72:97-111
  25. Bougie B, Tulej M, Dreier T, Dam NJ, Meulen JJ, Gerber T. Optical diagnostics of diesel spray injections and combustion in a high-pressure high-temperature cell. Applied Physics B. 2005;80(8):1039-1045
  26. Lee K, Aalburg C, Diez FJ, Faeth GM, Sallam KA. Primary breakup of turbulent round liquid jets in uniform crossflows. AIAA Journal. 2007;45(8):1907-1916
  27. Klein-Douwel RJH, Frijters PJM, Somers LMT, de Boer WA, Baert RSG. Macroscopic diesel fuel spray shadowgraphy using high speed digital imaging in a high pressure cell. Fuel. 2007;86:1994-2007
  28. Watanabe H, Okazaki K. Visualization of secondary atomization in emulsified-fuel spray flow by shadow imaging. Proceedings of the Combustion Institute. 2013;34:1651-1658
  29. Trainoff SP, Cannell DS. Physical optics treatment of the shadowgraph. Physics of Fluids. 2002;14(4):1340-1363
  30. Wu PK, Kirkendall KA, Fuller RP, Nejad AS. Breakup processes of liquid jets in subsonic crossflows. Journal of Propulsion and Power. 1997;13(1):64-73
  31. Sallam K, Aalburg C, Faeth GM. Breakup of round nonturbulent liquid jets in gaseous crossflow. AIAA Journal. 2004;42(12):2529-2540
  32. Settles GS. Schlieren and Shadowgraph Techniques: Visualizing Phenomena in Transparent Media. Berlin, Heidelberg, Germany: Springer; 2001. p. 376. Available from: https://doi.org/10.1007/978-3-642-56640-0
  33. Muirhead JC, McCallum FL. Improvement in spark shadowgraph technique. The Review of Scientific Instruments. 1959;30:830-831
  34. Fuller RP, Wu PK, Kirkendall KA, Nejad AS. Effects of injection angle on atomization of liquid jets in transverse airflow. AIAA Journal. 2000;38(1):64-72
  35. Biss MM, Settles GS, Hargather MJ, Dodson LJ, Miller JD. High-speed digital shadowgraphy of shock waves from explosions and gunshots. Shock Waves. 2009;II:91-96
  36. Volpe JA, Settles GS. Laser-induced gas breakdown as a light source for Schlieren and shadowgraph particle image velocimetry. Optical Engineering. 2006;45(8):080509
  37. Klinner J, Willert C. Tomographic shadowgraphy for three-dimensional reconstruction of instantaneous spray distributions. Experiments in Fluids. 2012;53:531-543
  38. Wang Q, Zhang Y. High speed stereoscopic shadowgraph imaging and its digital 3D reconstruction. Measurement Science and Technology. 2011;22(6):1-9
  39. Gabor D. Holography, 1948-1971. Science. 1972;177(4046):299-313
  40. Schnars U, Juptner W. Digital recording and numerical reconstruction of holograms. Measurement Science and Technology. 2002;13:R85-R101
  41. Frauel Y, Naughton TJ, Matoba O, Tajahuerce E, Javidi B. Three-dimensional imaging and processing using computational holographic imaging. Proceedings of the IEEE. 2006;94(3):636-653
  42. Leith EN, Upatnieks J. Holograms - their properties and uses (hologram photography, principles, techniques and application). SPIE Journal. 1965;4:3-6
  43. Kubota T. 48 years with holography. Optical Review. 2014;21(6):883-892
  44. Milgram JH, Li W. Computational reconstruction of images from holograms. Applied Optics. 2002;41:853-864
  45. Muller J, Kebbel V, Juptner W. Characterization of spatial particle distributions in a spray-forming process using digital holography. Measurement Science and Technology. 2004;15:706-710
  46. Santangelo PJ, Sojka PE. A holographic investigation of the near-nozzle structure of an effervescent atomizer-produced spray. Atomization and Sprays. 1995;5(2):137-155
  47. Burke J, Hess CF, Kebbel V. Digital holography for instantaneous spray diagnostics on a plane. Particle and Particle Systems Characterization. 2003;20:183-192
  48. Miller B, Sallam KA, Bingabr M, Lin K-C, Carter CD. Breakup of aerated liquid jets in subsonic crossflow. Journal of Propulsion and Power. 2008;24(2):253-258
  49. Lee J, Miller B, Sallam KA. Demonstration of digital holographic diagnostics for the breakup of liquid jets using a commercial-grade CCD sensor. Atomization and Sprays. 2009;19(5):445-456
  50. Yu L, An Y, Cai L. Numerical reconstruction of digital holograms with variable viewing angles. Optics Express. 2002;10(22):1250-1257
  51. Yamaguchi I, Yamamoto K, Mills GA, Yokota M. Image reconstruction only by phase data in phase-shifting digital holography. Applied Optics. 2006;45:975-983
  52. Monnom O, Dubois F, Yourassowsky C, Legros JC. Improvement in visibility of an in-focus reconstructed image in digital holography by reduction of the influence of out-of-focus objects. Applied Optics. 2005;44(18):3827-3832
  53. Garcia-Sucerquia J, Xu W, Jericho SK, Klages P, Jericho MH, Kreuzer HJ. Digital in-line holographic microscopy. Applied Optics. 2006;45:836-850
  54. Palero V, Arroyo MP, Soria J. Digital holography for micro-droplet diagnostics. Experiments in Fluids. 2007;43:185-195
  55. Choo YJ, Kang BS. The characteristics of the particle position along an optical axis in particle holography. Measurement Science and Technology. 2006;17:761-770
  56. Latychevskaia T, Gehri F, Fink HW. Depth-resolved holographic reconstructions by three-dimensional deconvolution. Optics Express. 2010;18(21):22527-22544
  57. Gire J, Denis L, Fournier C, Thiebaut E, Soulez F, Ducottet C. Digital holography of particles: Benefits of the ‘inverse problem’ approach. Measurement Science and Technology. 2008;19(7):074005
  58. Wang YJ, Im K, Fezzaa K, Lee WK, Wang J. Quantitative x-ray phase-contrast imaging of air-assisted water sprays with high Weber numbers. Applied Physics Letters. 2006;89:151913
  59. Osta AR, Lee J, Sallam KA, Fezzaa K. Study of the effects of the injector length/diameter ratio on the surface properties of turbulent liquid jets in still air using X-ray imaging. International Journal of Multiphase Flow. 2012;38(1):87-98
  60. James RW. The Optical Principles of the Diffraction of X-Rays. Ithaca, NY, USA: Cornell University Press; 1965
  61. El-Ghazaly M, Backe H, Lauth W, Kube G, Kunz P, Sharafutdinov A, et al. X-ray phase contrast imaging at MAMI. European Physical Journal A: Hadrons and Nuclei. 2006;28(S01):197-208
  62. Powell CF, Yue Y, Poola R, Wang J. Time-resolved measurements of supersonic fuel sprays using synchrotron X-rays. Journal of Synchrotron Radiation. 2000;7:356-360
  63. Guigay JP, Langer M, Boistel R, Cloetens P. Mixed transfer function and transport of intensity approach for phase retrieval in the Fresnel region. Optics Letters. 2007;32:1617-1619
  64. Wilkins SW, Gureyev TE, Gao D, Pogany A, Stevenson AW. Phase-contrast imaging using polychromatic hard X-rays. Nature. 1996;384(6607):335-338
  65. Moon S, Liu Z, Gao J, Dufresne E, Fezzaa K, Wang J. Ultrafast X-ray phase-contrast imaging of high-speed fuel sprays from a two-hole diesel nozzle. In: 22nd Annual Conference on Liquid Atomization and Spray Systems. Cincinnati, OH: ILASS; 2010
  66. Peterzol A, Berthier J, Duvauchelle P, Ferrero C, Babot D. X-ray phase contrast image simulation. Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms. 2007;254(2):307-318
  67. Sedarsky D, Paciaroni M, Berrocal E, Petterson P, Zelina J, Gord J, et al. Model validation image data for breakup of a liquid jet in crossflow: Part I. Experiments in Fluids. 2010;49(2):391-408
  68. Sedarsky D, Gord J, Carter C, Meyer T, Linne M. Fast-framing ballistic imaging of velocity in an aerated spray. Optics Letters. 2009;34:2748-2750
  69. Linne MA, Paciaroni M, Hall T, Parker T. Ballistic imaging of the near field in a diesel spray. Experiments in Fluids. 2006;40(6):836-846
  70. Linne MA, Paciaroni M, Berrocal E, Sedarsky D. Ballistic imaging of liquid breakup processes in dense sprays. Proceedings of the Combustion Institute. 2009;32(II):2147-2161
  71. Paciaroni M, Linne M. Single-shot, two-dimensional ballistic imaging through scattering media. Applied Optics. 2004;43:5100-5109
  72. Paciaroni M, Hall T, Delplanque JP, Parker T, Linne M. Single-shot two-dimensional ballistic imaging of the liquid core in an atomizing spray. Atomization and Sprays. 2006;16(1):51-70
  73. Idlahcen S, Rozé C, Méès L, Girasole T, Blaisot JB. Sub-picosecond ballistic imaging of a liquid jet. Experiments in Fluids. 2012;52:289-298
  74. Sedarsky D, Paciaroni M, Linne M, Gord J, Meyer T. Velocity imaging for the liquid–gas interface in the near field of an atomizing spray: Proof of concept. Optics Letters. 2006;31:906-908
  75. Idlahcen S, Mees L, Roze C, Girasole T, Blaisot J. Time gate, optical layout, and wavelength effects on ballistic imaging. Journal of the Optical Society of America A. 2009;26(9):1995-2004
  76. Charalampous G, Hardalupas Y, Taylor A. Novel technique for measurements of continuous liquid jet core in an atomizer. AIAA Journal. 2009;47(11):2605-2615
  77. Charalampous G, Hadjiyiannis C, Hardalupas Y. Proper orthogonal decomposition analysis of photographic and optical connectivity time resolved images of an atomizing liquid jet. In: 24th Annual Conference on Liquid Atomization and Spray Systems. Portugal: ILASS; 2011. pp. 1-8
  78. Elshamy OM, Tambe SB, Cai J, Jeng SM. Structure of liquid jets in subsonic crossflow at elevated ambient pressures. In: 44th AIAA Aerospace Sciences Meeting and Exhibit. Reno, NV: AIAA; 2006. pp. 1-11
  79. Smallwood GJ, Gulder OL, Snelling DR. The structure of the dense core region in transient diesel sprays. 25th Symposium (International) on Combustion. 1994;25(1):371-379
  80. Fansler TD, Drake MC. “Designer diagnostics” for developing direct-injection gasoline engines. Journal of Physics Conference Series. 2006;45:1-17
  81. Sick V, Drake MC, Fansler TD. High-speed imaging for direct-injection gasoline engine research and development. Experiments in Fluids. 2010;49(4):937-947
  82. Stenzler JN, Lee JG, Santavicca DA. Penetration of liquid jets in a crossflow. Atomization and Sprays. 2006;16(8):887-906
  83. Tambe S, Elshamy O, Jeng S. Spray properties of liquid jets injected transversely into a shear layer. In: 43rd AIAA/ASME/SAE/ASEE Joint Propulsion Conference & Exhibit. Cincinnati, OH: AIAA; 2007. pp. 2007-5695
  84. Rachner M, Becker J, Hassa C, Doerr T. Modelling of the atomization of a plain liquid fuel jet in crossflow at gas turbine conditions. Aerospace Science and Technology. 2002;6(7):495-506
  85. Han D, Mungal MG, Orozco V. Gross-entrainment behavior of turbulent jets injected obliquely into a uniform crossflow. AIAA Journal. 2000;38(9):1643-1649
  86. Siebers DL. Liquid-phase fuel penetration in diesel sprays. In: SAE International Congress & Exposition. Detroit, MI; 1998. Paper No. 980809
  87. Niederhaus CE, Champagne FH, Jacobs JW. Scalar transport in a swirling transverse jet. AIAA Journal. 1997;35(11):1697-1704
  88. Cloeter MD, Qin K, Patil P, Smith B. Planar laser induced fluorescence (PLIF) flow visualization applied to agricultural spray nozzles with sheet disintegration: Influence of an oil-in-water emulsion. In: 22nd Annual Conference on Liquid Atomization and Spray Systems. Cincinnati, OH: ILASS; 2010
  89. Pastor JV, Lopez JJ, Julia JE, Benajes JV. Planar laser-induced fluorescence fuel concentration measurements in isothermal diesel sprays. Optics Express. 2002;10:309-323
  90. Davidson MJ, Pun KL. Weakly advected jets in cross-flow. Journal of Hydraulic Engineering. 1999;125(1):47-58
  91. Wieske P, Wissel S, Grünefeld G, Pischinger S. Improvement of LIEF by wavelength-resolved acquisition of multiple images using a single CCD detector – Simultaneous 2D measurement of air/fuel ratio, temperature distribution of the liquid phase and qualitative distribution of the liquid phase with the multi-2D technique. Applied Physics B. 2006;83(2):323-329
  92. Webster DR, Longmire EK. Jet pinch-off and drop formation in immiscible liquid–liquid systems. Experiments in Fluids. 2001;30(1):47-56
  93. Milosevic IN, Longmire EK. Pinch-off modes and satellite formation in liquid/liquid jet systems. International Journal of Multiphase Flow. 2002;28:1853-1869
  94. Kristensson E, Berrocal E, Richter M, Pettersson S-G, Aldén M. High-speed structured planar laser illumination for contrast improvement of two-phase flow images. Optics Letters. 2008;33(23):2752-2754
  95. Kristensson E, Berrocal E, Richter M, Alden M. Nanosecond structured laser illumination planar imaging for single-shot imaging of dense sprays. Atomization and Sprays. 2010;20(4):337-343
  96. Berrocal E, Kristensson E, Hottenbach P, Alden M, Grunefeld G. Quantitative imaging of a non-combusting diesel spray using structured laser illumination planar imaging. Applied Physics B: Lasers and Optics. 2012;109:683-694
  97. Desantes JM, Pastor JV, Pastor JM, Julia JE. Limitations on the use of the planar laser induced exciplex fluorescence technique in diesel sprays. Fuel. 2005;84(18):2301-2315
  98. Melton LA, Verdieck JF. Vapor/liquid visualization in fuel sprays. International Symposium on Combustion. 1985;20(1):1283-1290
  99. Murray AM, Melton LA. Fluorescence methods for determination of temperature in fuel sprays. Applied Optics. 1985;24(17):2783-2787
  100. Lemoine F, Antoine Y, Wolff M, Lebouche M. Simultaneous temperature and 2D velocity measurements in a turbulent heated jet using combined laser-induced fluorescence and LDA. Experiments in Fluids. 1999;26:315-323
  101. Depredurand V, Miron P, Labergue A, Wolff M, Castanet G, Lemoine F. A temperature-sensitive tracer suitable for two-color laser-induced fluorescence thermometry applied to evaporating fuel droplets. Measurement Science and Technology. 2008;19(10):105103
  102. Castanet G, Lavieille P, Lebouché M, Lemoine F. Measurement of the temperature distribution within monodisperse combusting droplets in linear streams using two-color laser-induced fluorescence. Experiments in Fluids. 2003;35(6):563-571
  103. Omrane A, Sarner G, Alden M. 2D-temperature imaging of single droplets and sprays using thermographic phosphors. Applied Physics B: Lasers and Optics. 2004;79:431-434
  104. Yi SJ, Kim KC. Phosphorescence-based multiphysics visualization: A review. Journal of Visualization. 2014;17:253-273
  105. Allison SW, Gillies GT. Remote thermometry with thermographic phosphors: Instrumentation and applications. The Review of Scientific Instruments. 1997;68(7):2615-2650
  106. Tran T, Kochar Y, Seitzman JM. Measurements of acetone fluorescence and phosphorescence at high pressures and temperatures. In: Paper AIAA-2006-831-356, 44th AIAA Aerospace Science Meeting. Reno, NV: AIAA; 2006
  107. Ritchie BD, Seitzman JM. Simultaneous imaging of vapor and liquid spray concentration using combined acetone fluorescence and phosphorescence. In: Paper AIAA-2004-0384, 42nd AIAA Aerospace Science Meeting. Reno, NV: AIAA; 2004
  108. Wigley G, Goodwin M, Pitcher G, Blondel D. Imaging and PDA analysis of a GDI spray in the near-nozzle region. Experiments in Fluids. 2004;36(4):565-574

Notes

  • MotionBLITZ EoSens® mini2
  • Photron Fastcam BC2
