Open access peer-reviewed chapter

3D Capture and 3D Contents Generation for Holographic Imaging

Written By

Elena Stoykova, Hoonjong Kang, Youngmin Kim, Joosup Park, Sunghee Hong and Jisoo Hong

Submitted: 31 May 2016 Reviewed: 21 September 2016 Published: 22 March 2017

DOI: 10.5772/65904

From the Edited Volume

Holographic Materials and Optical Systems

Edited by Izabela Naydenova, Dimana Nazarova and Tsvetanka Babeva


Abstract

The intrinsic properties of holograms make 3D holographic imaging the best candidate for a 3D display. The holographic display is an autostereoscopic display which provides highly realistic images with unique perspective for an arbitrary number of viewers, motion parallax both vertically and horizontally, and focusing at different depths. The 3D content generation for this display is carried out by means of digital holography. Digital holography implements the classic holographic principle as a two‐step process of wavefront capture in the form of a 2D interference pattern and wavefront reconstruction by applying numerically or optically a reference wave. The chapter follows the two main tendencies in forming the 3D holographic content—direct feeding of optically recorded digital holograms to a holographic display and computer generation of interference fringes from directional, depth and colour information about the 3D objects. The focus is set on important issues that comprise encoding of 3D information for holographic imaging starting from conversion of optically captured holographic data to the display data format, going through different approaches for forming the content for computer generation of holograms from coherently or incoherently captured 3D data and finishing with methods for the accelerated computing of these holograms.

Keywords

  • holographic display
  • computer‐generated holograms
  • phase‐added stereogram
  • holographic printer
  • spatial light modulator

1. 3D capture and 3D content generation by digital holography

Three-dimensional (3D) displays are the next generation of displays. The demand for 3D imaging is indisputable in mass television, the game industry, medical imaging, computer-aided design, automated robotic systems, air traffic control, education and cultural heritage dissemination. The ultimate goal of 3D visual communication is 3D capture of a real-life scene followed by creation of its scaled exact optical duplicate at a remote site, instantaneously or at a later moment [1]. Tracking the development of 3D imaging devices from the Wheatstone stereoscope designed in 1830 to modern full HD 3D displays with glasses (Figure 1) reveals some memorable periods of booming public interest in this area, e.g. the 3D theatre boom of the 1950s as a counterpoint to the increasing popularity and commercialization of television, or the recent reaction to the 'Avatar' movie. Multiple-parallax 3D display technology has evolved into full-parallax displays for naked-eye observation, such as integral imaging displays [2] and super-multi-view displays [3]. However, since the invention of holography by Dennis Gabor in 1948 [4] and the first holographic 3D demonstration by Emmett Leith and Juris Upatnieks in 1964 [5], the consumer public has held high expectations for a truly holographic display, given that holographic imaging is the best candidate for 3D displays.

Figure 1.

History of 3D displays with some of the major events depicted as a diagram with years along the horizontal axis and intensity of the public interest along the vertical axis.

Holography is the only imaging technique which can provide all depth cues. The high quality of 3D imaging from analogue white-light viewable holograms is well known [6]. They provide a wide viewing angle due to the very fine grain size of the holographic emulsions. Realistic images can be viewed by an arbitrary number of viewers, each with a unique perspective. Motion parallax both vertically and horizontally, as well as tilting of the head, is possible. The viewer is capable of focusing at different depths. There is no convergence-accommodation conflict, and there are no discontinuities between different views as in multiple-parallax displays. Holographic imaging allows for building autostereoscopic displays. To realize a holographic display, the 3D scene description should be encoded as two-dimensional (2D) holographic data. The 3D content generation is carried out by means of digital holography [7] based on the classical holographic principle. According to it, holography is a two-step process enabling storage and reconstruction of a wavefront diffracted by 3D objects. The hologram records, as a 2D intensity pattern, the interference of this wavefront with a mutually coherent reference wave, as depicted schematically in Figure 2. The wave field from the object is characterized by a complex amplitude, $O(x,y)=a_O(x,y)\exp[j\phi_O(x,y)]$, where $a_O(x,y)$ and $\phi_O(x,y)$ are the amplitude and phase of the object beam, respectively. Interference of $O(x,y)$ and the mutually coherent reference beam $R(x,y)=a_R(x,y)\exp[j\phi_R(x,y)]$ results in four terms superimposed in the hologram plane $(x,y)$:

$I_H(x,y)=\left|O(x,y)+R(x,y)\right|^2=O(x,y)O^*(x,y)+R(x,y)R^*(x,y)+O(x,y)R^*(x,y)+R(x,y)O^*(x,y)$   (1)

Figure 2.

Schematic representation of holographic recording.

where the asterisk denotes complex conjugation. The sum of the intensities of the object and reference beams gives the zero-order term. The last two terms are the +1 and −1 diffraction orders and contain the object wavefront information. When $I_H(x,y)$ is multiplied by $R(x,y)$ or by its conjugate, the term $OR^*R=|R|^2O\propto O$ or $O^*RR^*=|R|^2O^*\propto O^*$ brings into focus the virtual or the real image, respectively. These twin images are separated in the off-axis geometry, where the object and reference beams subtend an angle $\theta$, and overlap in the inline geometry at $\theta=0$.
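
As a toy illustration of Eq. (1), the following NumPy sketch forms an off-axis hologram from a synthetic object beam and a tilted plane reference wave; the 532 nm wavelength, 8 μm pixel period and disk-shaped diffuse object are our assumptions, not parameters from the text:

```python
import numpy as np

wavelength = 532e-9                     # assumed green laser (m)
pitch = 8e-6                            # assumed 8 um pixel period (m)
N = 1024
x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)

# synthetic object beam: disk-shaped amplitude with a diffuse (random) phase
rng = np.random.default_rng(0)
a_O = (X**2 + Y**2 < (N * pitch / 8)**2).astype(float)
O = a_O * np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N)))

# tilted plane reference wave, kept inside the sampling limit of the sensor
theta_max = 2 * np.arcsin(wavelength / (4 * pitch))
theta = 0.8 * theta_max
R = np.exp(1j * 2 * np.pi * np.sin(theta) * X / wavelength)

I_H = np.abs(O + R)**2                  # Eq. (1): zero order plus twin images
```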

Digital holography grew from a purely academic idea into a powerful tool after the recent progress in computers, digital photo-sensors (CCD or CMOS sensors) and spatial light modulators (SLMs). The capability of digital holography for digital analysis and synthesis of a light field forms two mutually related branches. The branch dedicated to analysis comprises methods for optical recording of holograms using digital photo-sensors. The holograms are sampled and digitized by the photo-sensor, stored in the computer and numerically reconstructed using different approaches to describe diffraction of light from the hologram and free-space propagation to the plane of the reconstructed image [8]. Thus, capture of both amplitude and phase becomes possible, enabling numerical focusing at a variable depth and observation of transparent micro-objects without labelling [9]. The holographic data are in the form of a real-valued 2D matrix of recorded intensity according to Eq. (1). In this branch, different techniques have emerged over the two decades of existence of digital holography, such as (i) digital holographic microscopy with a plane wave or a point-source illumination [10, 11]; (ii) optical diffraction tomography with multi-directional phase-shifting holographic capture [12]; (iii) infrared holography in the long-wavelength region for capture of large objects [13]; (iv) determination of sizes and locations and tracking of particles in a 3D volume [14]. Feature recognition based on digital holography has also been proposed [15]. Considerable effort has been dedicated to instrumental or software solutions of the twin-image problem [16].
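
A minimal sketch of such a numerical reconstruction, with free-space propagation described by the band-limited angular spectrum method; the routine and its parameters are our assumptions rather than a specific published implementation:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Free-space propagation of a sampled complex field over distance z (m)."""
    N = field.shape[0]
    f = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(f, f)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)      # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

# with I_H and R from the previous sketch: multiply by the (conjugate)
# reference and propagate; the sign and size of z select the focused depth
U = angular_spectrum_propagate(I_H * np.conj(R), 532e-9, 8e-6, z=-0.2)
image = np.abs(U)**2
```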

The branch dedicated to synthesis of a light field comprises methods for computer generation of holograms [17], which are fed to some kind of SLM for optical reconstruction of the images they encode. Computer-generated holograms (CGHs) are used for holographic displays [18], holographic projection [19] and diffractive optical elements [20]. In principle, CGHs provide the only means to generate light fields for virtual 3D objects. A CGH is a real-valued 2D matrix of amplitude or phase data; it may also have a binary representation.

Both branches are closely related to the task of direct transfer of optically captured digital holograms to a holographic display. To realize the chain 3D capture – data transfer – holographic display, digital holography requires coherent light as well as 2D optical sensors and SLMs with high resolution and large apertures. 4D imaging, when the time coordinate is added to the data, further aggravates the task of building a holographic display, because the latter needs much higher resolution and much more information capacity than other types of 3D displays. Generally speaking, there are two ways of 3D content generation for holographic displays: (i) conversion of optically captured holographic data; (ii) computer generation of holograms. Below, we discuss these two main tendencies – direct feeding of optically recorded digital holograms to a holographic display and computer generation of interference fringes from directional, depth and colour information about the 3D objects – on the basis of our experience in forming the holographic content.


2. 3D content generation from holographic data

The ultimate goal of digital holography is to build a system for 3D scene capture, transmission of the captured data and 3D optical display. Although based on the clear theoretical grounds given by Eq. (1), this task is hard to fulfil because of the limitations encountered in digital implementation of the holographic principle due to the discrete nature of photo-sensors and display devices, their small size and low spatial resolution. Modern devices are characterized by pixel periods from 1 to 20 μm and active areas from 1 cm² up to 2–3 cm². 3D content generation from optically captured digital holograms should include three steps: (i) multi-view capture by a set of cameras or by sequential recording from different perspectives (Figure 3); (ii) conversion of the captured data to a display data format; (iii) feeding the data to a display built from many SLMs to enlarge the viewing angle (Figure 3).

Figure 3.

Schematic diagram of multi‐view holographic capture by many digital photo‐sensors and multi‐view optical reconstruction after mapping the 3D contents to a display set of SLMs.

A key problem of digital holographic capture and imaging is the very small value of the maximum angle between the object and reference beams which satisfies the sampling requirement for the spatial frequency at the current low spatial resolution of electrically addressable devices. In theory, the photo-sensor must resolve the fringe pattern formed by interference of the waves scattered from all object points with the reference wave. The holographic display should support a certain space-bandwidth product with regard to the limitations of the human visual system. The maximum angle, $\theta_{max}$, between the reference and the object beams that satisfies the Whittaker-Shannon sampling requirement for a wavelength $\lambda_{c,d}$ and a pixel period $\Delta_{c,d}$, where the subscripts 'c' and 'd' refer to the capture and display devices, is found from

$\sin\left(\dfrac{\theta_{max}}{2}\right)=\dfrac{\lambda_{c,d}}{4\Delta_{c,d}}$   (2)

The limitation set by Eq. (2) means capture of small objects at a large distance from the camera and a small viewing angle at optical reconstruction. If the object lateral size $D$ is much greater than the sensor size, the minimum distance between the object and the photo-sensor is about $z_{min}=D\Delta_c/\lambda_c$. Usage of coherent light seriously restricts the viewing angle of the holographic display and the size of the reconstructed image [21]. A planar configuration of many SLMs allows for visualization of larger objects [22], but the problem with the small viewing angle remains. Enlarging the viewing angle for pixelated SLMs by using higher diffraction orders and spatial filtering is proposed in Ref. [23]. Under coherent illumination, a circular arrangement of SLMs puts less severe requirements on the space-bandwidth product of the display and supports full-parallax binocular vision at an increased viewing angle. Different circular configurations have been proposed recently [24–26].
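
For orientation, Eq. (2) and the $z_{min}$ estimate can be evaluated directly for the capture and display devices of the example discussed below (a small sketch using the values quoted later in this section):

```python
import numpy as np

def theta_max_deg(wavelength, pitch):
    # Eq. (2): maximum object-reference angle supported by the pixel period
    return np.degrees(2 * np.arcsin(wavelength / (4 * pitch)))

print(theta_max_deg(10.6e-6, 25e-6))    # thermal camera below: ~12.2 deg
print(theta_max_deg(0.532e-6, 8e-6))    # phase-only SLMs below: ~1.9 deg

D = 0.33                                 # 33 cm object of the first example
z_min = D * 25e-6 / 10.6e-6              # z_min = D*Delta_c/lambda_c ~ 0.78 m
print(z_min)                             # consistent with the 0.88 m distance used
```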

Effective operation of the holographic imaging system requires maintaining a consistent flow of data through the capture, transmission and display blocks. So the other problem of 3D content generation is the non-trivial mapping of the data from 3D holographic capture with non-overlapping camera apertures to an arbitrary configuration of display devices (Figure 3). In the general case, the wavelength, the pixel period and the pixel number differ at the capture and display sides, i.e. $\lambda_c\ne\lambda_d$, $\Delta_c\ne\Delta_d$, $N_c\ne N_d$. This alters the reconstruction distance and the lateral and longitudinal dimensions of the reconstructed volume [27]. Another difficulty arises from the requirement that the set of digital holograms captured for multiple views of the 3D object be consistent with the display configuration built from many SLMs. Although both the amplitude and the phase can be retrieved from the captured holograms, the type of the SLM entails encoding the holographic data only as amplitude or as phase information. To illustrate the non-triviality of 3D content generation from optically recorded digital holograms, we consider two characteristic examples from our experience with data mapping. A detailed description of the capture and display systems is given in Refs. [25, 28]. Here, we focus only on the data transfer from the holograms to the SLMs.

In the first example, the capture parameters substantially differed from the parameters on the display side. The mapping was done for a circular holographic display under visible light illumination, with the input data extracted from a set of holograms recorded at 10.6 μm [29]. The interest in capturing holograms in the long-wavelength infrared region is due to the shorter recording distance, larger viewing angle and less stringent requirements on the stability of the system. The object was a bronze reproduction of the Benvenuto Cellini Perseus sculpture with a height of 33 cm [28] – a large object for digital holography. Nine off-axis digital holograms were captured by rotating the object with an angular step of 3° using an a-Si (amorphous silicon) thermal camera with $N_c=n_x^c\times n_y^c=640\times480$ pixels and $\Delta_c=25$ μm. The object beam interfered with a spherical reference wave given in the paraxial approximation in the plane of the photo-sensor as $R_c(x_c,y_c;r_c)=\exp[j\pi(x_c^2+y_c^2)/(\lambda_c r_c)]$; the radius $r_c=z_o/2$ was equal to half of the distance $z_o=0.88$ m between the object and the photo-sensor. The nine phase-only SLMs in the display set-up were characterized by $N_x^d\times N_y^d=1920\times1080$ pixels, a pixel period $\Delta_d=8$ μm and phase modulation from 0 to 2π; the illuminating wavelength was $\lambda_d=0.532$ μm. The SLMs, arranged in a circular configuration, were illuminated with a single astigmatic expanding wave by means of a cone mirror whose apex was at a distance $D_s$ from the point light source positioned on the line of the cone mirror axis [25]:

$W(x_d,y_d)=\exp\left(j\dfrac{2\pi}{\lambda_d}\dfrac{x_d^2}{D_h}\right)\exp\left[j\dfrac{\pi}{\lambda_d}\dfrac{(y_d+h_{SLM}/2)^2}{D_h+D_s}\right]$   (3)

where $D_h$ is the distance from the cone mirror axis to the SLM centres and $h_{SLM}$ is the SLM height. The reconstructed images were combined above the cone mirror, by a slight tilt of the SLMs, at a distance of 35 cm from each SLM. A linear stretching of the images with a coefficient $m=\Delta_d/\Delta_c$ occurs. A reference wave at a different wavelength, $\lambda_d$, and with a different radius, $r_{rec}$, yields a new reconstruction distance $z_i$ [30]:

$\dfrac{1}{z_i}=\dfrac{1}{r_{rec}}\pm\dfrac{\mu}{m^2}\left(\dfrac{1}{z_o}-\dfrac{1}{r_c}\right)$   (4)

and the reconstructed image undergoes longitudinal and lateral magnifications:

$M_{long}=\dfrac{dz_i}{dz_o}=\left(\dfrac{z_i}{z_o}\right)^2\dfrac{\mu}{m^2},\qquad M_{lat}=\dfrac{\mu}{m}\,\dfrac{z_i}{z_o}$   (5)

where $\mu=\lambda_d/\lambda_c$. Generation of the 3D contents for each SLM was based on Eq. (1) and included: (i) retrieval of the phase $\phi_O(x,y)$ of the object field from the captured holograms; (ii) compensation for the non-plane wave illumination of the SLMs; (iii) adjustment of the reconstruction position to the mandatory distance of 35 cm. The phase $\phi_O(x,y)$ was retrieved by filtering in the spatial frequency domain, to extract only the real image in Eq. (1) and to suppress the zero-order term, and by multiplying the filter output with the numerical reference wave $R_c^*(x_d,y_d;r_{rec})$ taken with a new radius, $r_{rec}=z_i/2=z_om^2/(2\mu)$, in the display coordinates $x_d=l\Delta_d$, $y_d=n\Delta_d$; $l=1,2,\ldots,N_x^d$, $n=1,2,\ldots,N_y^d$. The amplitude was discarded in the object field, which became $H(x_d,y_d)=\exp[j\phi_O(x_d,y_d)]$. To compensate for the non-symmetrical illumination, the phase of $H(x_d,y_d)W^*(x_d,y_d)$ was fed to each SLM. The holograms were placed at the centres of the SLMs as depicted in Figure 4. Shining $W(x_d,y_d)$ on the SLMs with $HW^*$ represented as a phase creates the reconstruction corresponding to plane wave illumination, i.e. $r_{rec}\to\infty$ in Eq. (4). The reconstruction distance in this case is $z_i=1.78$ m. The reconstruction is stretched longitudinally and squeezed laterally, with $M_{long}=2.04$ and $M_{lat}=0.32$. A digital converging lens, $L_1(x_d,y_d)=\exp\{j(2\pi/\lambda_d)(x_d^2+y_d^2)/\rho_1\}$, with a focal distance of $\rho_1=43.5$ cm was introduced to adjust the reconstruction distance to 35 cm. The image was separated from the strong non-diffracted beam caused by the pixelated nature of the SLMs by multiplying the array with the holographic data by a tilted plane wave, $P(y_d)=\exp(j2\pi y_d\sin\theta_t/\lambda_d)$, where $\theta_t=2°$. The phase of $W^*(x_d,y_d)$ was attached to the pixels outside the hologram, plus the phase of the lens $L_2(x_d,y_d)=\exp\{j(2\pi/\lambda_d)(x_d^2+y_d^2)/\rho_2\}$ with $\rho_2=35$ cm, to gather the light reflected from these pixels below the reconstructed image. The arrangement of the wave fields on the surface of each SLM is depicted in Figure 4 (in practice, the phases of these fields were fed to each SLM). The processing allowed for combining the images created by all SLMs into a single reconstruction which could be viewed smoothly within an increased viewing angle of 24°. A video of the reconstruction can be found in Ref. [29]. The most remarkable fact of this data mapping from far-infrared capture to a circular display is that roughly equal longitudinal and lateral magnifications of the reconstruction volume, $M_{long}=0.078$ and $M_{lat}=0.062$, were achieved.

Figure 4.

Schematic diagram of the chain holographic capture—data transfer—holographic display for far‐infrared capture of a large object [29] and visible light visualization [25].
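
For a cross-check, the quoted numbers follow from Eqs. (4) and (5) in a few lines of arithmetic; the sketch below encodes our reading of the formulas, with $\mu=\lambda_d/\lambda_c$:

```python
# Cross-check of the infrared-to-visible mapping numbers (our arithmetic).
lam_c, lam_d = 10.6e-6, 0.532e-6     # capture / display wavelengths (m)
pitch_c, pitch_d = 25e-6, 8e-6       # capture / display pixel periods (m)
z_o = 0.88                           # object-to-sensor distance (m)

m = pitch_d / pitch_c                # lateral stretching coefficient, 0.32
mu = lam_d / lam_c                   # wavelength ratio, ~0.050

z_i = m**2 * z_o / mu                # plane-wave reconstruction: ~1.78 m
M_long = (z_i / z_o)**2 * mu / m**2  # ~2.04
M_lat = (mu / m) * (z_i / z_o)       # ~0.32

rho_1 = 1.0 / (1.0 / 0.35 - 1.0 / z_i)   # digital lens: ~0.435 m = 43.5 cm
print(z_i, M_long, M_lat, rho_1)
```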

The second example of data mapping is related to visualization of transparent objects by a holographic display with phase encoding of the input data. The object beam $O(x,y)$ in this case was provided by simulation of a noiseless diffraction tomography experiment, in which transmission holograms of a weakly refracting transparent object with a size of 25 μm were recorded by a phase-shifting technique [31]. The object had a refractive index variation from 1 to 1.004 but, due to its small size, it gave rise to strong diffraction. The capture parameters were $\lambda_c=0.68$ μm, $\Delta_c=2.4043$ μm, $N_c=200\times200$, $z_o=68$ μm [31]; the display parameters were as above. Direct optical reconstruction from the captured phase-only data failed. Usage of the full complex amplitude $O(x,y)$ provided numerical reconstruction as a concise 3D shape closely resembling the 3D refractive index distribution within the object (Figure 5). Omission of the amplitude, $a_O(x,y)$, destroyed this 3D shape entirely. The observed severe distortions showed the necessity for phase modification in the hologram plane. This was done iteratively by applying the Gerchberg-Saxton algorithm with the known correct complex amplitude at the reconstruction plane. The quality of the numerical reconstruction from the modified phase was satisfactory; thus the problem with optical reconstruction was solved. Introduction of a digital magnifying lens at the SLM plane enlarged the reconstructed object to 6 mm [31], which gives about 240 times magnification in comparison with its original size.

Figure 5.

Numerical reconstructions of a virtual transparent object, shown in the left section of the figure as a 3D distribution of the refractive index $n_O$ (green, $n_O=1.001$; red, $n_O=1.002$; yellow, $n_O=1.003$; blue, $n_O=1.004$) [31].
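
A minimal sketch of the Gerchberg-Saxton phase modification described above, with the forward and backward diffraction operators passed in as callables (e.g. the angular-spectrum routine sketched in Section 1, called with +z and −z); the function name and iteration count are our assumptions:

```python
import numpy as np

def gerchberg_saxton(target, propagate, back_propagate, n_iter=50):
    """Find a phase-only hologram whose propagation approximates the known
    complex amplitude 'target' at the reconstruction plane."""
    field = back_propagate(target)
    for _ in range(n_iter):
        field = np.exp(1j * np.angle(field))        # phase-only SLM constraint
        recon = propagate(field)
        # keep the known amplitude, update only the phase at the image plane
        recon = np.abs(target) * np.exp(1j * np.angle(recon))
        field = back_propagate(recon)
    return np.angle(field)                           # phase fed to the SLM
```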


3. Computer generation of 3D contents for holographic imaging

3.1. Methods for computer generation of holographic fringe patterns

A CGH is a fringe pattern that diffracts light into a wavefront with desired amplitude and phase distributions, and it seems to be the most appropriate choice for 3D content generation; this wavefront can be created both for real and for virtual objects. The goal in developing the CGH input data for a 3D holographic display is real-time generation of large-scale, wide-viewing-angle, full-parallax colour holograms which provide a photorealistic reconstruction that can be viewed with both eyes. These CGHs must support motion parallax and the coupled occlusion effect, expressed in the change of the visible surface according to the viewer position. For this purpose, a CGH must have a very large number of samples displayed on a device with high spatial resolution. Thus the most important requirements for CGH synthesis are computational efficiency and holographic image quality. The CGH computation involves digital representation of the 3D object, which includes not only its geometrical shape but also texture and lighting conditions, simulation of light propagation from the object to the CGH plane, and encoding of the fringe pattern formed by the interference of the object wavefront with a reference beam in the display data format.

There are two basic frameworks for CGH generation depending on the mathematical model of the 3D target objects: (i) point-cloud algorithms and (ii) polygon-based algorithms. In the point-cloud method [32], the 3D object is a collection of $P$ self-luminous point sources. The method traces the ray from a source 'p' with spatial coordinates $(x_p,y_p,z_p)$ to the point $(\xi,\eta)$ on the hologram plane at $z=0$ and is sometimes referred to as a ray-tracing approach; the distance between both points is $r_p=[(\xi-x_p)^2+(\eta-y_p)^2+z_p^2]^{1/2}$ (Figure 6). Each point source emits a spherical wave with an amplitude, $a_p$, and an initial phase, $\varphi_p$. The amplitude and phase distributions in the point cloud can be controlled individually. The fringe patterns for all object points are added up at the hologram plane to obtain the CGH, as in the sketch after Figure 6. The method is highly flexible due to its ability to represent surfaces with arbitrary shapes and textures, but it is very time consuming.

Figure 6.

CGH synthesis from a point cloud model.
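
Continuing the point-cloud description above, here is a minimal NumPy sketch; the toy object points, 532 nm wavelength and 8 μm pitch are our assumptions, and the reference beam is taken as a unit-amplitude on-axis plane wave:

```python
import numpy as np

wavelength, pitch, N = 532e-9, 8e-6, 512
xi = (np.arange(N) - N // 2) * pitch
XI, ETA = np.meshgrid(xi, xi)

# toy cloud: one tuple (x_p, y_p, z_p, a_p, phi_p) per self-luminous point
points = [(0.0, 0.0, 0.05, 1.0, 0.0),
          (0.5e-3, -0.3e-3, 0.06, 0.8, 1.0)]

field = np.zeros((N, N), dtype=complex)
for x_p, y_p, z_p, a_p, phi_p in points:
    r_p = np.sqrt((XI - x_p)**2 + (ETA - y_p)**2 + z_p**2)
    field += (a_p / r_p) * np.exp(1j * (2 * np.pi * r_p / wavelength + phi_p))

field /= np.abs(field).max()       # balance object and reference amplitudes
cgh = np.abs(field + 1.0)**2       # Eq. (1) with a unit on-axis plane reference
```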

Polygon-based representation is a wave-oriented approach [33–35]. The object is a collection of $P$ planar segments of polygonal shape (Figure 7). Each polygon is a tilted surface source of a light field calculated by propagation of its angular spectrum of plane wave decomposition [27] using a fast Fourier transform (FFT). An angular-dependent rotational transformation in the spectral domain is applied to find the spectrum in a plane parallel to the hologram in the global coordinate system from the spectrum in the tilted plane of the local coordinate system $(x_p,y_p,z_p)$, $p=1,2,\ldots,P$, for each polygon [36, 37]; a simplified sketch is given after Figure 7. The z-axis in the local coordinate system is along the normal vector to the polygon surface. The object field is found after an FFT of the final angular spectrum, which is the sum of the transformed angular spectra of the polygon fields in the global coordinate system. Computation of a polygon field is slower than that of a spherical wave emitted by a point light source, but the number of polygons is much smaller than the number of point sources, and the total computation time is shorter compared to the point-cloud approach. The traditional polygon-based method evolved to an analytical implementation, in which the angular spectrum of a triangle of arbitrary size, shape, orientation and location in space is analytically calculated from the known spectrum of a reference triangle [38–40]. The analytical method eliminates the need to apply the FFT for each polygon.

Figure 7.

CGH synthesis from a polygon‐based model.
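
As promised above, a deliberately simplified sketch of the rotational transformation for a single polygon: tilt about the y-axis only, nearest-neighbour frequency remapping, and neither the frequency-domain Jacobian nor the carrier offsets treated in Refs. [36, 37] are included; it illustrates the idea rather than the published method:

```python
import numpy as np

def tilted_polygon_spectrum(u_local, wavelength, pitch, alpha):
    """Field radiated by a polygon tilted by alpha (rad) about the y-axis,
    evaluated on a parallel-to-hologram plane through the same origin."""
    N = u_local.shape[0]
    A_local = np.fft.fftshift(np.fft.fft2(u_local))
    f = np.fft.fftshift(np.fft.fftfreq(N, d=pitch))
    FX, FY = np.meshgrid(f, f)
    FZ2 = 1.0 / wavelength**2 - FX**2 - FY**2      # squared fz of global samples
    FZ = np.sqrt(np.maximum(FZ2, 0.0))
    # local-frame frequency fx' corresponding to each global sample (fx, fy)
    FXp = FX * np.cos(alpha) - FZ * np.sin(alpha)
    step = f[1] - f[0]
    idx = np.clip(np.round((FXp - f[0]) / step).astype(int), 0, N - 1)
    A_global = np.take_along_axis(A_local, idx, axis=1)   # nearest-neighbour remap
    A_global[FZ2 <= 0] = 0.0                              # drop evanescent waves
    return np.fft.ifft2(np.fft.ifftshift(A_global))
```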

The CGH synthesis of real objects can be carried out by 3D capture based on holographic means or on structured light methods under coherent or incoherent illumination [41]. The output from, e.g. profilometric or tomographic reconstruction can be converted into a point cloud which allows for CGH synthesis. The substantial advantage is the option to adapt the captured data to any holographic display. Incoherent capture of multiple projection images to generate holographic data has a lot of advantages, such as incoherent illumination, no need of interferometric equipment and display of large objects. The concept was advanced 40 years ago in Ref. [42] by generating a holographic stereogram (HS). The high quality of large-format HSs as a ray-based display, especially those printed by HS printers, is well known [43]. The input data for HS imaging are composed from colour and directional information. This causes a decrease of resolution and blurring for deep scenes. Introduction of a ray-sampling plane close to the object and computation of the light wavefront from this plane to the hologram is proposed in Ref. [44]. Synthesis of a full-parallax colour CGH from multiple 2D projection images captured by a camera scanned along a 2D grid is proposed in Refs. [45, 46]. The approach given in Ref. [46] relies on calculation of the 3D Fourier spectrum and was further improved [47] by developing a parabolic sampling of the spectrum for data extraction, which requires only 1D camera scanning. Methods in which directional information from projection images is combined with a depth map are under development [48–50].

Over the last decade, efforts have focused on improving image quality by different rendering techniques and on accelerating the CGH computation. The holographic data – amplitude and phase – should encode occlusions, materials and roughness of the object surface, reflections, surface glossiness and transparency. It is difficult to create an occlusion effect using a 3D object representation as a collection of primitives – points or polygons – due to the independent contribution of all primitives to the light field. To decrease the computational cost of occlusion synthesis, a silhouette mask approach has been proposed in the polygon-based computation [35]. The mask produced by the orthographic projection of the foreground objects blocks the wavefront of light coming from background objects. The method allows for synthesis of a very large CGH [35] but is prone to errors at oblique incidence. Computation is accelerated if occlusion is included in a light-ray rendering process from multiple 2D projection images during the synthesis of a CGH as an HS [51]. As the method suffers from a decrease of angular resolution in deep scenes, accuracy is improved by processing occlusion in the light-ray domain along with sampling the angular information from the projection images. This is done in a virtual ray-sampling plane [52, 53]. The sampled data are then converted by a Fourier transform to the object beam complex amplitude. Considering occlusion as geometric shadowing [54], effective CGH synthesis can be carried out by casting, from each sample at the hologram plane, a bundle of rays at uniform angular separation within the diffraction angle given by Eq. (2). Such approaches are described in Refs. [54–56], either representing the 3D objects as composed of planar segments parallel to the hologram plane [55] or performing a lateral shear of the 3D scene so that the z-buffer of the graphics processing unit (GPU) can be used for the rays with the same direction to accelerate computation [54]. Occlusion, texture and illumination issues can be handled by computer graphics techniques. Their effective use is possible when the ray casting is applied by spatially dividing the CGH into a set of sub-holograms and building different sets of points or polygons for them [57–59].

At specular reflection, the viewer is able to see only part of the object, while diffuse reflection sends light rays in all directions. Both types of reflection must be encoded in a CGH by adopting different reflection models to represent the texture of the objects [60–62]. In the CGH synthesis, the luminance is encoded in the amplitude, while the reflectance is incorporated as a phase term. The task of representing reflection becomes rather complicated at non-plane wave illumination or in the case of background illumination [60]. A perfect diffuse reflection is achieved by adding a uniformly distributed random phase; unfortunately, this causes speckle noise at reconstruction [63]. A variety of methods have been proposed for fast synthesis of CGHs, such as look-up table methods with pre-computed fringes [63, 64] (a sketch is given below), recurrence relation methods instead of direct calculation of the optical path [65], introduction of a wavefront recording plane [66], HS methods and many others. Hardware solutions such as special-purpose computers like 'HOlographic ReconstructioN (HORN)' [67] or GPU computing [32, 68–70] are very effective for fast calculation, because the pixels of a CGH can be calculated independently.
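
As an illustration of the look-up table idea [64], the following sketch precomputes point-source fringes per depth layer on an oversized grid and reuses them by cropping; all names and parameters are our assumptions:

```python
import numpy as np

wavelength, pitch, N = 532e-9, 8e-6, 512
x = (np.arange(2 * N) - N) * pitch          # oversized grid allows shifting
X, Y = np.meshgrid(x, x)

def precompute_fringe(z):
    r = np.sqrt(X**2 + Y**2 + z**2)
    return np.exp(1j * 2 * np.pi * r / wavelength) / r

table = {z: precompute_fringe(z) for z in (0.05, 0.06)}   # one entry per depth

def add_point(field, x_p, y_p, z_p, amp):
    # crop the precomputed fringe so that its centre lands on (x_p, y_p)
    i0 = N // 2 - int(round(y_p / pitch))
    j0 = N // 2 - int(round(x_p / pitch))
    field += amp * table[z_p][i0:i0 + N, j0:j0 + N]

field = np.zeros((N, N), dtype=complex)
add_point(field, 0.5e-3, -0.3e-3, 0.05, 1.0)
```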

3.2. Phase‐added holographic stereogram as a fast computation approach

Effective acceleration of computation is achieved in coherent stereogram (CS) algorithms, in which the CGH is partitioned into segments and the directional data for each segment are sampled (Figure 8). A similar idea was advanced in the diffraction-specific fringe computation by Lucente [71], where the hologram is partitioned into holographic elements called hogels, and each hogel is a linear superposition of weighted basic fringes corresponding to the points in a point cloud. Each segment in the CS emits a bundle of light rays that form a wavefront as a set of patches of mismatched plane waves, due to the lack of depth information. This drawback was overcome by adding a distance-related phase [72]. The phase-added stereogram (PAS) is computationally effective if implemented with the FFT. To clarify this point, we depict schematically the PAS computation with the FFT in Figure 9.

Figure 8.

Synthesis of a CGH as a coherent stereogram with partitioning the hologram plane into square segments and sampling the directional information.

Figure 9.

Schematic representation of the synthesis of a CGH within a segment.

In the CS and PAS algorithms, the hologram is partitioned into $M\times N$ equal square segments of $S\times S$ pixels. The object is described by a point cloud with $P$ points. The segment size, $\Delta_dS\times\Delta_dS$, where $\Delta_d$ is the pixel period at the hologram plane, is chosen small enough to approximate the spherical wave from a point as a plane wave, given by a 2D complex harmonic function, within the segment. This approximation means that the contribution from a point source is constant across the segment and is determined with respect only to its central pixel. In this way, the input data and computation time are substantially reduced; for the segment $(m,n)$, $m=1,\ldots,M$, $n=1,\ldots,N$, the contribution from the point 'p' comprises the spatial frequencies $(u_{mnp},v_{mnp})$ of the plane wave at a wavelength $\lambda_d$, the distance between the point 'p' and the central point, $r_{mnp}$, and the initial phase of the sinusoid, $\Phi_{mnp}$. The spatial frequencies are determined by the illuminating angles, $(\Theta_{mnp},\Omega_{mnp})$, of the ray coming from the point 'p' to the central point of the segment $(m,n)$ and by the angles $\theta_{R\xi}$ and $\theta_{R\eta}$ of the plane reference wave with respect to the $\xi$ and $\eta$ axes at the hologram plane, as follows: $u_{mnp}=(\sin\Theta_{mnp}-\sin\theta_{R\xi})/\lambda_d$, $v_{mnp}=(\sin\Omega_{mnp}-\sin\theta_{R\eta})/\lambda_d$. The phases $\Phi_{mnp}$, $m=1,\ldots,M$, $n=1,\ldots,N$, ensure matching of the wavefronts of the plane waves diffracted from all segments and may contain the initial phase $\varphi_p$ as well as the distance-related phase $2\pi r_{mnp}/\lambda_d$. For all object points, the fringe pattern across the segment is approximated as a superposition of 2D complex sinusoids. Computation of this pattern is carried out by placing the amplitudes of the sinusoids at the corresponding frequency locations in the spatial frequency domain and by applying an inverse Fourier transform to the spectrum. FFT implementation is the second step in the acceleration of the CGH computation (Figure 9). The FFT step moves the spatial frequencies to the nearest allowed values in the discrete frequency domain; the complex amplitudes remain the same. The two-step procedure is repeated for each segment to compute the CGH.
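
A minimal sketch of this two-step segment computation, assuming an on-axis plane reference wave ($\theta_{R\xi}=\theta_{R\eta}=0$) and toy parameters; the helper pas_segment is ours, not the published implementation:

```python
import numpy as np

wavelength, pitch, S = 532e-9, 8e-6, 32       # segment of S x S pixels

def pas_segment(points, seg_center):
    """points: iterable of (x_p, y_p, z_p, a_p); seg_center: (xi_c, eta_c)."""
    spectrum = np.zeros((S, S), dtype=complex)
    df = 1.0 / (S * pitch)                    # frequency step 1/(S*Delta_d)
    for x_p, y_p, z_p, a_p in points:
        dx, dy = seg_center[0] - x_p, seg_center[1] - y_p
        r = np.sqrt(dx**2 + dy**2 + z_p**2)
        u = dx / (wavelength * r)             # plane-wave frequency of the ray
        v = dy / (wavelength * r)
        lu = int(round(u / df)) % S           # nearest discrete FFT bin (wraps)
        lv = int(round(v / df)) % S
        # the amplitude carries the distance-related (phase-added) term
        spectrum[lv, lu] += (a_p / r) * np.exp(1j * 2 * np.pi * r / wavelength)
    return np.fft.ifft2(spectrum) * S * S     # un-normalised segment fringe
```

Repeating this for every segment tiles the full CGH; the real-valued pattern for an amplitude SLM is then obtained by interfering the resulting object field with the reference wave as in Eq. (1).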

The PAS approximation should yield a wavefront close to the one provided by the Rayleigh-Sommerfeld diffraction model, which treats the light propagating from a point as a spherical wave. The complex amplitude in the reference model is given by:

$O_{RS}(\xi,\eta)=\displaystyle\sum_{p=1}^{P}\dfrac{A_p}{r_p}\exp\left(j\dfrac{2\pi}{\lambda_d}r_p\right),\qquad A_p=a_p\exp(j\varphi_p)$   (6)

We applied the PAS computation to generate digital input contents for a wavefront printer developed by us for printing white-light viewable full-parallax reflection holograms [73, 74]. The printed hologram was recorded as a 2D array of elemental holograms. The CGH for each elemental hologram was fed to an amplitude SLM with 1920 × 1080 pixels. The object beam encoded in the CGH was extracted by spatial filtering and demagnified using a telecentric lens system. Unlike the HS printers [43], the wavefront printer uses the full holographic data. That is why the synthesis of a large number of elemental holograms, e.g. 100 × 100, takes a very long time. This requires a fast computation method that provides quality of imaging close to the reference model. We solved this task by developing a fast PAS (FPAS) method [75] as a further elaboration of the already existing PAS methods. Usage of the FFT is crucial for fast PAS implementation, but it may affect the quality of imaging negatively, because the spatial frequencies are mapped to a predetermined coarse set of discrete values. The sampling step, $1/(S\Delta_d)$, in the frequency domain cannot be made small due to the necessity to approximate the reference model. Thus, the fringe pattern generated by the PAS with FFT inaccurately steers the diffracted light. The improvements developed to compensate for the error caused by the frequency mapping are based on the two possible means of steering control – phase compensation and finer sampling of the spectrum attached to each segment. The functional form of the developed approximations is shown in Table 1, which gives the fringe pattern at a single spatial frequency in the segment $(m,n)$; $(\xi_{mn}^c,\eta_{mn}^c)$ is the central point of the segment, and the following notation is introduced for the complex sinusoid:

$F(u_{mnp},v_{mnp})=\dfrac{A_p}{r_{mnp}}\exp\{j2\pi[u_{mnp}(\xi-\xi_{mn}^c)+v_{mnp}(\eta-\eta_{mn}^c)]\}$   (7)

Fringe pattern of the method:

CS: $F(u_{mnp},v_{mnp})$

PAS (no FFT): $F(u_{mnp},v_{mnp})\exp(jkr_{mnp})$

PAS (FFT): $F(\hat{u}_{mnp},\hat{v}_{mnp})\exp(jkr_{mnp})$

CPAS: $F(\hat{u}_{mnp},\hat{v}_{mnp})\exp(jkr_{mnp})\times\exp\{j2\pi[(\hat{u}_{mnp}-u_{mnp})(\xi_{mn}^c-x_p)+(\hat{v}_{mnp}-v_{mnp})(\eta_{mn}^c-y_p)]\}$

APAS: $F(\hat{u}_{mnp},\hat{v}_{mnp})\exp(jkr_{mnp})$ (finer sampling)

ACPAS: $F(\hat{u}_{mnp},\hat{v}_{mnp})\exp(jkr_{mnp})\times\exp\{j2\pi[(\hat{u}_{mnp}-u_{mnp})(\xi_{mn}^c-x_p)+(\hat{v}_{mnp}-v_{mnp})(\eta_{mn}^c-y_p)]\}$ (finer sampling)

FPAS: $F(\hat{u}_{mnp},\hat{v}_{mnp})\exp(jkr_{mnp})\times\exp\{j2\pi[u_{mnp}(\xi_{mn}^c-x_p)+v_{mnp}(\eta_{mn}^c-y_p)]\}$ (finer sampling)

Spatial frequencies: for the PAS (FFT) and CPAS, $\hat{u}_{mnp}=l_u/(S\Delta_d)$, $\hat{v}_{mnp}=l_v/(S\Delta_d)$ with $-S/2\le l_u,l_v\le S/2$; for the APAS, ACPAS and FPAS, $\hat{u}_{mnp}=l_u/(S'\Delta_d)$, $\hat{v}_{mnp}=l_v/(S'\Delta_d)$ with $-S'/2\le l_u,l_v\le S'/2$ and $S<S'$.

Table 1.

Single frequency fringe pattern in the segment.

The first improvement, the compensated PAS (CPAS) [76], performs steering correction by adding a phase which includes the difference between the spatial frequencies in the continuous and discrete domains. The CPAS provided a better reconstructed image than the PAS with FFT at almost the same calculation time. Finer sampling was proposed in the accurate PAS (APAS) [77] by computing the FFT in an area which exceeds the segment and by properly truncating the larger-size IFFT output. Phase compensation and directional error reduction by finer sampling were merged into a single step in the ACPAS algorithm [57], which yields quality of reconstruction very close to the reference model. The best results are provided by the FPAS algorithm, which is characterized by better phase compensation than the previous methods. This was confirmed by quality assessment with conventional image-based objective metrics, such as the intensity distribution and the peak signal-to-noise ratio [75], for reconstruction of a single point, and also by the good quality of reconstruction from white-light viewable colour holograms (Figure 10) printed by our wavefront printing system [73] on an extra-fine grain silver-halide emulsion, Ultimate08 [78]. The CGH computed by the FPAS algorithm for each elemental hologram was displayed on an amplitude-type SLM. The demagnified pixel interval was 0.42 μm at the plane of the hologram, which gives a diffraction angle of ±39.3°. For uniform illumination of the CGH on the SLM without decreasing the laser beam intensity too much, we used only 852 × 852 pixels of the SLM to project the CGHs. Thus, the size of the elemental hologram became 0.38 mm × 0.38 mm. The printed holograms are shown in Figure 10; their sizes are 5 cm × 5 cm and 9 cm × 9 cm. The smaller hologram consists of 131 × 131 elemental holograms. The segment size for calculating the CGH fed to a given elemental hologram was 32 × 32 pixels, while the FFT computation area was 128 × 128 pixels, and each elemental hologram comprised more than 700 segments.

Figure 10.

Photographs of reconstruction from printed holograms: (a)–(c): different views of a church model; (d): 9 cm × 9 cm printed hologram of a bunch of flowers.
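
The quoted printer geometry can be cross-checked with one-line arithmetic (our sketch):

```python
import numpy as np

pitch_dem = 0.42e-6                               # demagnified pixel interval (m)
print(np.degrees(np.arcsin(532e-9 / (2 * pitch_dem))))   # ~39.3, i.e. +/-39.3 deg
print(131 * 0.38)                                 # 131 tiles x 0.38 mm ~ 49.8 mm ~ 5 cm
```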


4. Conclusion

Holographic imaging is 3D imaging with all depth cues and inherent vision comfort for the viewer. That is why the last decade was marked by rapid development of methods for 3D capture and 3D content generation for holographic display, holographic projection and holographic printing. In this chapter, we considered the implementation of holographic imaging by digital means, where the input data are in the form of a 2D real-valued matrix which should encode the light wavefront coming from the 3D scene. This wavefront can be extracted from optically recorded holograms or synthesized numerically using various 3D scene descriptions. Holographic recording by digital photo-sensors and computer generation of holograms for pixelated SLMs impose severe limitations on the space-bandwidth product of the capture/display system. We discussed two cases of data mapping from holographic capture to holographic display to show that transfer of holographic data from optically recorded digital holograms to the data format of a given display is not a trivial task, due to the inevitable distortions introduced by the different capture and display parameters. Representing 3D contents as computer-generated holograms seems a more flexible and promising way to create input data for holographic displays; the main requirements are to improve the quality of imaging and the computational efficiency. We presented an algorithm for fast computation of holograms and showed the good quality of imaging it provided in holographic printing of white-light viewable reflection holograms.


Acknowledgments

This work was supported by the Ministry of Science, ICT and Future Planning (Cross-Ministry Giga KOREA Project) and Bulgarian National Science Fund, project H 08/12 “Holographic imaging, beam shaping and speckle metrology with computer generated holograms”.

References

  1. Sainov V, Stoykova E, Onural L, Ozaktas H: Trends in development of dynamic holographic displays. Proceedings of SPIE. 2006; 6252: 62521C. DOI: 10.1117/12.677056.
  2. Kim YM, Hong KH, Lee BH: Recent researches based on integral imaging display method. 3D Research. 2010; 1(1): 17–27. DOI: 10.1007/3DRes.01(2010)2.
  3. Takaki Y: Development of super multi-view displays. ITE Transactions on Media Technology and Applications. 2014; 2(1): 8–14. DOI: 10.3169/mta.2.8.
  4. Gabor D: A new microscopic principle. Nature. 1948; 161: 777–778. DOI: 10.1038/161777a0.
  5. Leith E, Upatnieks J: Reconstructed wavefronts and communication theory. Journal of the Optical Society of America. 1962; 52: 1123–1130. DOI: 10.1364/JOSA.52.001123.
  6. Sainov V, Stoykova E: Display holography – status and future. In: Osten W, Reingand N, editors. Optical Imaging and Metrology: Advanced Technologies. 1st ed. Weinheim: Wiley-VCH Verlag GmbH & Co. KGaA; 2012. pp. 93–117. DOI: 10.1002/9783527648443.ch4.
  7. Schnars U, Jueptner W: Direct recording of holograms by a CCD target and numerical reconstruction. Applied Optics. 1994; 33(2): 179–181. DOI: 10.1364/AO.33.000179.
  8. Onural L, Gotchev A, Ozaktas H, Stoykova E: A survey of signal processing problems and tools in holographic 3DTV. IEEE Transactions on Circuits and Systems for Video Technology. 2007; 17(11): 1631–1646. DOI: 10.1109/TCSVT.2007.909973.
  9. Cuche E, Marquet P, Depeursinge C: Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms. Applied Optics. 1999; 38: 6994–7001. DOI: 10.1364/AO.38.006994.
  10. Kim M: Digital Holographic Microscopy: Principles, Techniques and Applications. New York, NY: Springer; 2011. 230 p. DOI: 10.1007/978-1-4419-7793-9.
  11. Jericho M, Kreuzer H: Point-source digital in-line holographic microscopy. In: Ferraro P, Wax A, Zalevsky Z, editors. Coherent Light Microscopy. Berlin, Heidelberg: Springer; 2011. pp. 3–30. DOI: 10.1007/978-3-642-15813-1_1.
  12. Charrière F, Marian A, Montfort F, Kuehn J, Colomb T, Cuche E, Marquet P, Depeursinge C: Cell refractive index tomography by digital holographic microscopy. Optics Letters. 2006; 31: 178–180. DOI: 10.1364/OL.31.000178.
  13. Pelagotti A, Locatelli M, Geltrude A, Poggi P, Meucci R, Paturzo M, Miccio L, Ferraro P: Reliability of 3D imaging by digital holography at long IR wavelength. Journal of Display Technology. 2010; 6(10): 465–471. DOI: 10.1007/3DRes.04(2010)06.
  14. Gire J, Denis L, Fournier C, Thiébaut E, Soulez F, Ducottet C: Digital holography of particles: benefits of the inverse problem approach. Measurement Science and Technology. 2008; 19: 074005. DOI: 10.1088/0957-0233/19/7/074005.
  15. Stern A, Javidi B: Theoretical analysis of three-dimensional imaging and recognition of micro-organisms with a single-exposure on-line holographic microscope. Journal of the Optical Society of America A. 2007; 24: 163–168. DOI: 10.1364/JOSAA.24.000163.
  16. Stoykova E, Kang H, Park J: Twin-image problem in digital holography – a survey. Chinese Optics Letters. 2014; 12: 060013. DOI: 10.3788/COL201412.060013.
  17. Lohmann A: A pre-history of computer-generated holography. Optics and Photonics News. 2008; 19(2): 36–41. DOI: 10.1364/OPN.19.2.000036.
  18. Slinger C, Cameron C, Stanley M: Computer-generated holography as a generic display technology. IEEE Computer. 2005; 38(8): 46–53.
  19. Shimobaba T, Kakue T, Endo Y, Hirayama R, Hiyama D, Hasegawa S, Nagahama Y, Sano M, Oikawa M, Sugie T, Ito T: Improvement of the image quality of random phase-free holography using an iterative method. Optics Communications. 2015; 355: 596–601. DOI: 10.1016/j.optcom.2015.07.030.
  20. Chang NIY, Chung JK: Design and implementation of computer-generated hologram and diffractive optical element. In: Chung JK, Tsai M, editors. Three-Dimensional Holographic Imaging. 1st ed. New York: John Wiley & Sons, Inc.; 2003. pp. 167–189. DOI: 10.1002/0471224545.ch9.
  21. Onural L, Yaras F, Kang H: Digital holographic three-dimensional video displays. Proceedings of the IEEE. 2011; 99(4): 576–589. DOI: 10.1109/JPROC.2010.2098430.
  22. Yaras F, Kang H, Onural L: Multi-SLM holographic display system with planar configuration. In: Proceedings of the IEEE International Conference on 3DTV: The True Vision – Capture, Transmission and Display of 3D Video; 7–9 June 2010. DOI: 10.1109/3DTV.2010.5506332.
  23. Mishina T, Okui M, Okano F: Viewing-zone enlargement method for sampled hologram that uses high-order diffraction. Applied Optics. 2002; 41: 1489–1499. DOI: 10.1364/AO.41.001489.
  24. Hahn J, Kim H, Lim Y, Park G, Lee B: Wide viewing angle dynamic holographic stereogram with a curved array of spatial light modulators. Optics Express. 2008; 16(16): 12372–12386. DOI: 10.1364/OE.16.012372.
  25. Yaras F, Kang H, Onural L: Circular holographic video display system. Optics Express. 2011; 19: 9147–9156. DOI: 10.1364/OE.19.009147.
  26. Kozacki T, Finke G, Garbat P, Zaperty W, Kujawinska M: Wide angle holographic display system with spatiotemporal multiplexing. Optics Express. 2012; 20(25): 27473–27481. DOI: 10.1364/OE.20.027473.
  27. Goodman J: Introduction to Fourier Optics. 3rd ed. USA: Roberts & Company Publishers; 2005. 491 p.
  28. Paturzo M, Pelagotti A, Finizio A, Miccio L, Locatelli M, Geltrude A, Poggi P, Meucci R, Ferraro P: Optical reconstruction of digital holograms recorded at 10.6 μm: route for 3D imaging at long infrared wavelengths. Optics Letters. 2010; 35: 2112–2114. DOI: 10.1364/OL.35.002112.
  29. Stoykova E, Yaras F, Kang H, Onural L, Geltrude A, Locatelli M, Paturzo M, Pelagotti A, Meucci R, Ferraro P: Visible reconstruction by a circular holographic display from digital holograms recorded under infrared illumination. Optics Letters. 2012; 37(15): 3120–3122. DOI: 10.1364/OL.37.003120.
  30. Meier R: Magnification and third-order aberrations in holography. Journal of the Optical Society of America. 1965; 55: 987–992. DOI: 10.1364/JOSA.55.000987.
  31. Stoykova E, Yaras F, Yontem A, Kang H, Onural L, Hamel P, Delacrétaz Y, Bergoënd I, Arfire C, Depeursinge C: Optical reconstruction of transparent objects with phase-only SLMs. Optics Express. 2013; 21: 28246–28257. DOI: 10.1364/OE.21.028246.
  32. Chen RH-Y, Wilkinson T: Computer generated hologram from point cloud using graphics processor. Applied Optics. 2009; 48: 6841–6850. DOI: 10.1364/AO.48.006841.
  33. Leseberg D, Frère C: Computer-generated holograms of 3-D objects composed of tilted planar segments. Applied Optics. 1988; 27: 3020–3024. DOI: 10.1364/AO.27.003020.
  34. Matsushima K: Computer-generated holograms for three-dimensional surface objects with shade and texture. Applied Optics. 2005; 44(22): 4607–4614. DOI: 10.1364/AO.44.004607.
  35. Matsushima K, Nakahara S: Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method. Applied Optics. 2009; 48: H54–H63. DOI: 10.1364/AO.48.000H54.
  36. Matsushima K, Schimmel H, Wyrowski F: Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves. Journal of the Optical Society of America A. 2003; 20: 1755–1762. DOI: 10.1364/JOSAA.20.001755.
  37. Matsushima K: Formulation of the rotational transformation of wave fields and their application to digital holography. Applied Optics. 2008; 47: D110–D116. DOI: 10.1364/AO.47.00D110.
  38. Kim H, Hahn J, Lee B: Mathematical modeling of triangle-mesh-modeled three-dimensional surface objects for digital holography. Applied Optics. 2008; 47: D117–D127. DOI: 10.1364/AO.47.00D117.
  39. Ahrenberg L, Benzie P, Magnor M, Watson J: Computer generated holograms from three dimensional meshes using an analytic light transport model. Applied Optics. 2008; 47: 1567–1574. DOI: 10.1364/AO.47.001567.
  40. Pan Y, Wang Y, Liu J, Li X, Jia J: Fast polygon-based method for calculating computer-generated holograms in three-dimensional display. Applied Optics. 2013; 52: A290–A299. DOI: 10.1364/AO.52.00A290.
  41. Stoykova E, Alatan A, Benzie P, Grammalidis N, Malassiotis S, Ostermann J, Piekh S, Sainov V, Theobalt C, Thevar T, Zabulis X: 3D time-varying scene capture technologies – a survey. IEEE Transactions on Circuits and Systems for Video Technology. 2007; 17(11): 1568–1586. DOI: 10.1109/TCSVT.2007.909975.
  42. Yatagai T: Stereoscopic approach to 3-D display using computer-generated hologram. Applied Optics. 1976; 15(11): 2722–2729. DOI: 10.1364/AO.15.002722.
  43. Bjelkhagen H, Brotherton-Ratcliffe D: Ultra-Realistic Imaging – Advanced Techniques in Analogue and Digital Colour Holography. 1st ed. London, New York: Taylor & Francis; 2013. 664 p. DOI: 10.1080/00107514.2014.907347.
  44. Wakunami K, Yamaguchi M: Calculation for computer generated hologram using ray-sampling plane. Optics Express. 2011; 19: 9086–9101. DOI: 10.1364/OE.19.009086.
  45. Abookasis D, Rosen J: Computer-generated holograms of three-dimensional objects synthesized from their multiple angular viewpoints. Journal of the Optical Society of America A. 2003; 20: 1537–1545. DOI: 10.1364/JOSAA.20.001537.
  46. Sando Y, Itoh M, Yatagai T: Color computer-generated holograms from projection images. Optics Express. 2004; 12: 2487–2493. DOI: 10.1364/OPEX.12.002487.
  47. Sando Y, Itoh M, Yatagai T: Full-color computer-generated holograms using 3-D Fourier spectra. Optics Express. 2004; 12(25): 6246–6251. DOI: 10.1364/OPEX.12.006246.
  48. Hayashi N, Sakamoto Y, Honda Y: Improvement of camera arrangement in computer-generated holograms synthesized from multi-view images. Proceedings of SPIE. 2011; 7957: 795711. DOI: 10.1117/12.874444.
  49. Senoh T, Wakunami K, Ichihashi Y, Sasaki H, Oi R, Yamamoto K: Multiview image and depth map coding for holographic TV system. Optical Engineering. 2014; 53(11): 112302. DOI: 10.1117/1.OE.53.11.112302.
  50. Takaki Y, Ikeda K: Simplified calculation method for computer-generated holographic stereograms from multi-view images. Optics Express. 2013; 21(8): 9652–9663. DOI: 10.1364/OE.21.009652.
  51. Smithwick QYJ, Barabas J, Smalley D, Bove VM Jr: Real-time shader rendering of holographic stereograms. Proceedings of SPIE. 2009; 7233: 723302. DOI: 10.1117/12.808999.
  52. Wakunami K, Yamashita H, Yamaguchi M: Occlusion culling for computer generated hologram based on ray wavefront conversion. Optics Express. 2013; 21: 21811–21822. DOI: 10.1364/OE.21.021811.
  53. Symeonidou A, Blinder D, Munteanu A, Schelkens P: Computer-generated holograms by multiple wavefront recording plane method with occlusion culling. Optics Express. 2015; 23: 22149–22161. DOI: 10.1364/OE.23.022149.
  54. Chen RHY, Wilkinson T: Computer generated hologram with geometric occlusion using GPU-accelerated depth buffer rasterization for three-dimensional display. Applied Optics. 2009; 48: 4246–4255. DOI: 10.1364/AO.48.004246.
  55. Janda M, Hanak I, Onural L: Hologram synthesis for photorealistic reconstruction. Journal of the Optical Society of America A. 2008; 25: 3083–3096. DOI: 10.1364/JOSAA.25.003083.
  56. Zhang H, Collings N, Chen J, Crossland B, Chu D, Xie J: Full parallax three-dimensional display with occlusion effect using computer generated hologram. Optical Engineering. 2011; 50(7): 074003. DOI: 10.1117/1.3599871.
  57. Kang H, Yamaguchi T, Yoshikawa H, Kim S-C, Kim E-S: Acceleration method of computing a compensated phase-added stereogram on a graphic processing unit. Applied Optics. 2008; 47: 5784–5789. DOI: 10.1364/AO.47.005784.
  58. Ichikawa T, Yamaguchi K, Sakamoto Y: Realistic expression for full-parallax computer-generated holograms with the ray tracing method. Applied Optics. 2013; 52: A201–A209. DOI: 10.1364/AO.52.00A201.
  59. Zhang Y, Peng W, Chen H, Xu Y, Chen W, Xu W: Computer-generated-hologram-accelerated computing method based on mixed programming. Chinese Optics Letters. 2014; 12: 030902. DOI: 10.3788/COL201412.030902.
  60. Yamaguchi K, Sakamoto Y: Computer generated hologram with characteristics of reflection: reflectance distributions and reflected images. Applied Optics. 2009; 48: H203–H211. DOI: 10.1364/AO.48.00H203.
  61. Yamaguchi K, Ichikawa T, Sakamoto Y: Calculation method for computer-generated holograms considering various reflectance distributions based on microfacets with various surface roughnesses. Applied Optics. 2011; 50: H195–H202. DOI: 10.1364/AO.50.00H195.
  62. Ichikawa T, Sakamoto Y, Subagyo A, Sueoka K: Calculation method of reflectance distributions for computer-generated holograms using the finite-difference time-domain method. Applied Optics. 2011; 50: H211–H219. DOI: 10.1364/AO.50.00H211.
  63. Shimobaba T, Makowski M, Nagahama Y, Endo Y, Hirayama R, Hiyama D, Hasegawa S, Sano M, Kakue T, Oikawa M, Sugie T, Takada N, Ito T: Color computer-generated hologram generation using the random phase-free method and color space conversion. Applied Optics. 2016; 55: 4159–4165. DOI: 10.1364/AO.55.004159.
  64. Lucente M: Interactive computation of holograms using a look-up table. Journal of Electronic Imaging. 1993; 2(1): 28–34. DOI: 10.1117/12.133376.
  65. Kim S, Kim J, Kim E: Effective memory reduction of the novel look-up table with one-dimensional sub-principle fringe patterns in computer-generated holograms. Optics Express. 2012; 20: 12021–12034. DOI: 10.1364/OE.20.012021.
  66. Matsushima K, Takai M: Recurrence formulas for fast creation of synthetic three-dimensional holograms. Applied Optics. 2000; 39(35): 6587–6594. DOI: 10.1364/AO.39.006587.
  67. Shimobaba T, Masuda N, Ito T: Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane. Optics Letters. 2009; 34: 3133–3135. DOI: 10.1364/OL.34.003133.
  68. Murano K, Shimobaba T, Sugiyama A, Takada N, Kakue T, Oikawa M, Ito T: Fast computation of computer-generated hologram using Xeon Phi coprocessor. Computer Physics Communications. 2014; 185(10): 2742–2757. DOI: 10.1016/j.cpc.2014.06.010.
  69. Ahrenberg L, Benzie P, Magnor M, Watson J: Computer generated holography using parallel commodity graphics hardware. Optics Express. 2006; 14: 7636–7641. DOI: 10.1364/OE.14.007636.
  70. Yaras F, Kang H, Onural L: Real-time multiple SLM color holographic display using multiple GPU acceleration. In: Advances in Imaging, OSA Technical Digest (CD); Optical Society of America; 2009. Paper DWA4. DOI: 10.1364/DH.2009.DWA4.
  71. Lucente M: Holographic bandwidth compression using spatial subsampling. Optical Engineering. 1996; 35(6): 1529–1537. DOI: 10.1117/1.600736.
  72. Yamaguchi M, Hoshino H, Honda T, Ohyama N: Phase-added stereogram: calculation of hologram using computer graphics technique. Proceedings of SPIE. 1993; 1914: 25–31. DOI: 10.1117/12.155027.
  73. Kang H, Stoykova E, Kim YM, Hong SH, Park JS, Hong JS: Color wavefront printer with mosaic delivery of primary colors. Optics Communications. 2015; 350: 47–55. DOI: 10.1016/j.optcom.2015.02.041.
  74. Kim Y, Stoykova E, Kang H, Hong S, Park J, Park JS, Hong J: Seamless full color holographic printing method based on spatial partitioning of SLM. Optics Express. 2015; 23: 172–182. DOI: 10.1364/OE.23.000172.
  75. Kang H, Stoykova E, Yoshikawa H: Fast phase-added stereogram algorithm for generation of photorealistic 3D content. Applied Optics. 2016; 55: A135–A143. DOI: 10.1364/AO.55.00A135.
  76. Kang H, Fujii T, Yamaguchi T, Yoshikawa H: Compensated phase-added stereogram for real-time holographic display. Optical Engineering. 2007; 46: 095807. DOI: 10.1117/1.2784463.
  77. Kang H, Yamaguchi T, Yoshikawa H: Accurate phase added stereogram to improve the coherent stereogram. Applied Optics. 2008; 47: D44–D54. DOI: 10.1364/AO.47.000D44.
  78. Gentet Y, Gentet P: Ultimate emulsion and its applications: a laboratory-made silver halide emulsion of optimized quality for monochromatic pulsed and full-color holography. Proceedings of SPIE. 2000; 4149: 56–62. DOI: 10.1117/12.402459.
