Open access peer-reviewed chapter

Real-Time Diffraction Field Calculation Methods for Computer-Generated Holograms

Written By

Gokhan Bora Esmer

Submitted: 06 December 2018 Reviewed: 01 April 2019 Published: 23 May 2019

DOI: 10.5772/intechopen.86136

From the Edited Volume

Holographic Materials and Applications

Edited by Manoj Kumar



Abstract

Holographic three-dimensional television systems provide natural 3D visualization. Fast calculation of the diffraction field from a three-dimensional object is essential to achieve video rates. There are many fast algorithms for diffraction field calculation from three-dimensional objects in the literature, but most of them omit the pixelated structure of the dynamic display devices used in the reconstruction process. In this chapter, a look-up table-based fast algorithm for calculating the diffraction field of a three-dimensional object on a pixelated dynamic display device is presented. Real-time diffraction field calculations are obtained by running the algorithm in parallel on a graphical processing unit. The performance of the algorithm is evaluated in terms of the computation time of the diffraction field and the normalized mean square error of the reconstructed object. To optimize the memory required by the look-up table, two different sampling policies along the longitudinal axis are implemented. The uniform sampling policy along the longitudinal axis provides better error performance than the nonuniform sampling policy. Furthermore, optical experiments are performed, and it is observed that the numerical and optical reconstructions are similar to each other. Hence, the proposed method provides successful results.


Keywords

  • computer-generated holograms
  • holographic display
  • real-time holography
  • spatial light modulators
  • 3D visualization

1. Introduction

Holography is the only visualization technique that satisfies all the depth cues [1, 2, 3]. Therefore, it gives a natural three-dimensional (3D) visualization. Holography is based on capturing the optical waves diffracted from an object and regenerating those waves by illuminating the recording medium [1, 2, 3, 4, 5, 6]. The captured optical waves carry a significant amount of information about the object, such as its surface profile, depth, and refractive index. Hence, holography has a myriad of applications. For instance, holograms can be used as optical elements like prisms, lenses, and mirrors [7, 8]. Also, parallel optical computing is possible when holograms are employed [9, 10]. Furthermore, holograms are useful in metrology [11, 12, 13] and in microscopic imaging to visualize very small objects like cells and bacteria [14, 15]. Another application of holography is nondestructive testing [16, 17, 18]. Nevertheless, the major application of holography is 3D visualization, and it is used in education [19, 20], dentistry [21, 22], gaming [23], demonstration of cultural heritage [24], and more.

Holography setups can be assembled in different configurations depending on the application. In optical holography setups, holographic patterns are stored on high-resolution holographic films [25, 26] or certain types of crystals [27]. However, in some applications we need to process the captured holographic patterns by numerical methods; then, digital sensing devices are employed for capturing. Such setups are called digital holography, which has a vast number of applications, especially in nondestructive testing and microscopy. In [28], a digital holography-based measurement method for 3D displacement is presented: the observed material is illuminated from four different directions sequentially, and the measurements are combined to improve the resolution to the order of 10 nm. As a nondestructive testing method, digital holography is used in the analysis of cortical bone quality and strength impact in [29]. Furthermore, a method based on digital holography is implemented for detecting and measuring the effect of moisture on the hygroscopic shrinkage strain of wood [30]. Another application of digital holography is the precise and accurate measurement of the initial displacement of the canine and molar in the human maxilla [31]. By using subpixel registration and fusion algorithms, improved profile measurements and an expanded field of view (FOV) in continuous-wave terahertz reflective digital holography are achieved [32]. A comprehensive review of denoising methods for phase retrieval from digital holograms, in terms of signal-to-noise ratio (SNR) and computation time, is presented in [33]. Removal of phase distortions by using the principal component analysis (PCA) method is given in [34].

Holography is a versatile tool for visualization, measurement, and testing. In optical and digital holography methods, we need optical sensing elements like polymers or digital devices to capture the field diffracted from the object. In computer-generated holography (CGH), however, diffraction field calculations are performed by numerical methods and signal processing algorithms [4, 5, 6, 35]. We can then obtain the hologram from the calculated diffraction field and use it to drive dynamic display devices such as spatial light modulators (SLMs). After that, illumination of the SLM with a coherent light source provides an optical reconstruction of the original object. When CGHs are calculated sequentially and used to drive SLMs, we obtain a holographic 3D television (H3DTV) as a product. An overview of holographic displays is presented in [36]. Generally, coherent light sources are used in H3DTV systems, and those light sources can generate speckle noise in the reconstructions. A low-complexity method for improving image quality and decreasing speckle noise in CGH is proposed in [37]. Diffraction field calculations as in CGH are also used in other 3D display systems to improve the resolution of reconstructed objects. For instance, in an integral imaging-based 3D display system, distortions on the elemental images are corrected by using a holographic functional screen [38].

To calculate the diffraction field from a 3D object, we first have to generate a synthetic 3D object, and there are plenty of ways to do so in a computer. For instance, we can form a 3D object from a set of point light sources distributed over space; such objects are called point cloud objects. To calculate the diffraction field from a point cloud object, we superpose the diffraction fields emitted by each point light source [35, 39, 40, 41, 42, 43, 44]. Another 3D object generation method is based on stitching small planar patches. As in the point cloud case, the fields diffracted from each patch are superposed to obtain the diffraction field of the object [45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55]. A third method is based on taking multiple two-dimensional (2D) cross sections of the object along the longitudinal axis; the superposition of the fields diffracted from those 2D cross sections then gives the diffraction field of the 3D object [56, 57, 58, 59, 60]. A detailed summary of CGHs in terms of resolution, field of view, eye relief, and optical setups for different 3D object generation methods can be found in [61, 62].

CGHs of the objects should be calculated rapidly to realize H3DTV systems. Hence, fast methods such as fast Fourier transform (FFT)- and look-up table (LUT)-based methods are utilized in CGH calculations. In [39, 52], FFT-based algorithms are used to decrease the calculation time of CGH. Precomputed LUTs are also used to achieve fast CGH calculations [2, 39, 41, 42, 63, 64]. Another way to achieve fast CGH calculation is based on segmentation of the diffraction field from point light sources [43, 44]. Parallel processing of the diffraction field calculation provides further improvements in computation time. Graphical processing units (GPUs) are special hardware for running parallel calculations; thus, they are among the most convenient hardware platforms for H3DTV systems [40, 44, 65, 66]. The time-division method can also be used in the calculation of CGHs for layered 3D objects to achieve fast computations [67].

Imposing approximations in the diffraction field calculations decreases the computational complexity and paves the way for fast diffraction field calculations. At the same time, we have to improve the quality of the reconstructed object. An accurate diffraction field calculation method based on angular spectrum decomposition is explained in [68]. Furthermore, diffraction field calculation methods for SLMs with a pixelated structure are presented in [69, 70, 71]. However, the computational complexities of those methods are too high for real-time diffraction field calculations. As a result, the algorithms presented in [72, 73, 74, 75] are proposed as a solution to both the computation time and the reconstruction quality problems in H3DTV. Further improvements in computation time can be obtained by utilizing a LUT optimized for parallel processing on a GPU to achieve real-time calculations. Moreover, the pixel structure of the SLM employed in the reconstruction process is taken into account in forming the LUT. The calculated LUT stores one-dimensional (1D) kernels to decrease the allocated memory space.


2. Calculation of diffraction pattern used in driving SLM with pixelated structure

In CGH, it is possible to obtain 3D reconstructions of both synthetic and real objects. By employing dynamic display devices like SLMs in the reconstruction process, we can build H3DTV systems. To drive SLMs, we have to calculate the diffraction fields of 3D objects by using numerical analysis methods and signal processing techniques. The calculation of the diffraction field depends on the 3D object generation method. In this work, we assume that 3D objects are represented as point clouds, because it is one of the simplest 3D object generation methods. The diffraction field of the 3D object is calculated by superposing the diffraction fields emitted from the points that form the 3D object.

The superposition of the diffraction fields of a point cloud object over a planar surface can be expressed as

$$\psi(\mathbf{r}_0) = \sum_{l=1}^{L} \psi(\mathbf{r}_l)\, h_F(\mathbf{r}_0 - \mathbf{r}_l) \tag{1}$$

where $\psi(\mathbf{r}_0)$ and $\psi(\mathbf{r}_l)$ are the diffraction field over the SLM and the diffraction field at the $l$-th sample point of the 3D object, respectively. The surface of the SLM is represented by the position vector $\mathbf{r}_0 = (x, y, 0)$, and the sampling points of the 3D object are given by $\mathbf{r}_l = (x_l, y_l, z_l)$. We assume that the Fresnel approximation is valid, and the term $h_F(\mathbf{r})$ denotes the field diffracted onto the SLM from a point light source, expressed as

$$h_F(\mathbf{r}) = \frac{e^{jkz}}{j\lambda z}\, e^{j\frac{k}{2z}\left(x^2 + y^2\right)} \tag{2}$$

where $\mathbf{r} = (x, y, z)$, $k$ is the wave number, and $\lambda$ is the wavelength of the light source used in the illumination of the object.
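
As a brief illustration, the Fresnel kernel of Eq. (2) can be sampled on a grid in a few lines of NumPy. This is a sketch only; the grid parameters (512 × 512 pixels, 8 μm pitch, z = 61.6 mm, λ = 532 nm) are taken from the simulations presented later in this chapter.

```python
import numpy as np

def fresnel_point_field(x, y, z, wavelength):
    """Fresnel kernel h_F(r) = e^{jkz} / (j*lambda*z) * exp(j*k*(x^2 + y^2) / (2z)).

    x, y: coordinate grids in metres; z: propagation distance in metres.
    """
    k = 2.0 * np.pi / wavelength
    return (np.exp(1j * k * z) / (1j * wavelength * z)
            * np.exp(1j * k * (x ** 2 + y ** 2) / (2.0 * z)))

# Field of a single on-axis point source sampled on a 512 x 512 grid
# with 8 um pitch.
pitch, N = 8e-6, 512
coords = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(coords, coords)
field = fresnel_point_field(X, Y, z=61.6e-3, wavelength=532e-9)
```

Note that the magnitude of this kernel is constant, $1/(\lambda z)$; all the object information is carried by the quadratic phase.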

The scaled and superposed diffraction fields from the point light sources give the diffraction field of the 3D object, and its phase component is used to drive the SLM. Then, the entire surface of the SLM is illuminated by a plane wave. After that, the optical wave reflected from the surface of the SLM generates an optical replica of the 3D object. Most off-the-shelf SLMs have square pixels with very high fill factors, such as 93% [76]. Hence, the fill factor of the simulated SLM is approximated as 100%. The pixel structure of the simulated SLM is illustrated in Figure 1.

Figure 1.

An illustration of the pixel structure of the simulated SLM.

The simulation of the optical setup can be improved when the pixelated structure of the SLM is taken into consideration. For that purpose, we have to perform a surface integration over each pixel area on the SLM. It is assumed that the gray value over each pixel area is constant. The diffraction field over the SLM can then be found as

$$\psi_{2D,z=0}[n,m] = \int_{x_n}^{x_{n+1}} \int_{y_m}^{y_{m+1}} \sum_{l=1}^{L} \psi(\mathbf{r}_l)\, h_F(\mathbf{r}_0 - \mathbf{r}_l)\, dx\, dy \tag{3}$$

where $n$ and $m$ stand for the pixel indices of the SLM along the $x$- and $y$-axes, respectively. It is also possible to represent Eq. (3) by scaling and superposing 2D kernels $K_{\alpha_l,2D}$,

$$\psi_{2D,z=0} = \sum_{l=1}^{L} P(\mathbf{r}_l)\, K_{\alpha_l,2D} \tag{4}$$

where $P(\mathbf{r}_l) = \psi(\mathbf{r}_l)\, e^{jkz_l}$ and $\alpha_l = \frac{1}{\lambda z_l}$. The 2D kernel $K_{\alpha_l,2D}$ can be decomposed into 1D kernels as

$$K_{\alpha_l,2D} = K_{\alpha_l,1D}^{T}(x_l)\, K_{\alpha_l,1D}(y_l) \tag{5}$$

where $x_l$ and $y_l$ refer to the location of the $l$-th point light source, used in the generation of the 3D object, along the $x$- and $y$-axes, respectively. Each 1D kernel $K_{\alpha_l,1D}$ can be represented as

$$K_{\alpha_l,1D} = \left[\, K_{\alpha_l,1D}[1] \;\; K_{\alpha_l,1D}[2] \;\; \cdots \;\; K_{\alpha_l,1D}[N] \,\right] \tag{6}$$

and its elements can be calculated as

$$K_{\alpha_l,1D}[n] = \left[\, C(\zeta_{l,n+1}) + jS(\zeta_{l,n+1}) \,\right] - \left[\, C(\zeta_{l,n}) + jS(\zeta_{l,n}) \,\right] \tag{7}$$

where $\zeta_{l,n} = (x_n - x_l)\sqrt{\frac{2}{\lambda z_l}}$. The operators $C(\cdot)$ and $S(\cdot)$ stand for the cosine and sine Fresnel integrals, respectively [5, 6], and they are calculated as

$$C(w) = \int_0^w \cos\!\left(\frac{\pi}{2}\tau^2\right) d\tau\,; \qquad S(w) = \int_0^w \sin\!\left(\frac{\pi}{2}\tau^2\right) d\tau \tag{8}$$

The numerical evaluation of the cosine and sine Fresnel integrals given in Eq. (8) is carried out by adaptive Lobatto quadrature [77].
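
The cosine and sine Fresnel integrals are also available in standard numerical libraries; for instance, `scipy.special.fresnel` evaluates $S(w)$ and $C(w)$ with exactly the $\frac{\pi}{2}\tau^2$ convention of Eq. (8). The sketch below uses it to build the 1D kernel of Eq. (7); the $\zeta$ scaling of $\sqrt{2/(\lambda z_l)}$ is our assumption, chosen so that it is consistent with Eq. (8).

```python
import numpy as np
from scipy.special import fresnel  # returns (S(w), C(w)) with the (pi/2)*tau^2 convention

def kernel_1d(x_edges, x_l, z_l, wavelength):
    """1D kernel of Eq. (7): per-pixel integrals of the Fresnel chirp expressed
    through differences of the Fresnel integrals at pixel boundaries.

    x_edges: the N+1 pixel boundary coordinates; x_l, z_l: point-source position.
    """
    zeta = (x_edges - x_l) * np.sqrt(2.0 / (wavelength * z_l))
    S, C = fresnel(zeta)
    F = C + 1j * S                 # C(zeta) + j S(zeta) at every boundary
    return F[1:] - F[:-1]          # Eq. (7): difference across each pixel

pitch, N = 8e-6, 512
edges = (np.arange(N + 1) - N / 2) * pitch   # pixel boundaries of the SLM
k1d = kernel_1d(edges, x_l=0.0, z_l=61.6e-3, wavelength=532e-9)
K2d = np.outer(k1d, k1d)   # Eq. (5): 2D kernel as an outer product (on-axis source)
```

Because the source here sits on the optical axis, the same 1D kernel serves both axes in the outer product of Eq. (5).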

In the standard algorithm, the diffraction field of each point is obtained by evaluating Eq. (8); then, the superposition of those fields is performed to obtain the CGH. As a result, the computational complexity of the diffraction field calculation is too high for real-time applications. As a solution to the computation time problem, we present a fast algorithm that calculates the 2D kernel $K_{\alpha_l,2D}$ based on a LUT and parallel processing.


3. Proposed algorithm for fast calculation of CGH

Fast computation of the diffraction field and improved quality of the reconstructed 3D object are essential issues in H3DTV. As a solution to those problems, we propose a method based on the calculation of the 2D kernels $K_{\alpha_l,2D}$ without evaluating the sine and cosine Fresnel integrals. To achieve fast calculation, a precomputed LUT is utilized, and the diffraction field of the 3D object is obtained by scaling and superposing the 2D kernels as

$$\hat{\psi}_{2D,z=0} = \sum_{l=1}^{L} P(\mathbf{r}_l)\, \hat{K}_{\alpha_l,2D} \tag{9}$$

where $\hat{\psi}_{2D,z=0}$ denotes the estimated diffraction field of the 3D object on the SLM and $\hat{K}_{\alpha_l,2D}$ is the 2D kernel which denotes the diffraction field of the $l$-th point of the 3D object on the SLM. The 2D kernel $\hat{K}_{\alpha_l,2D}$ is calculated by multiplying 1D kernels $K_{\alpha_l,1D}$ fetched from the LUT, as shown in Eq. (5). Each 1D kernel represents the diffraction field on the SLM from a specific depth along the longitudinal axis. A simple arithmetic operation is used to speed up data fetching from the LUT; as a result, the total computation time of the diffraction field is improved on the data-fetching side. By increasing the number of precomputed 1D kernels in the LUT, we can achieve better diffraction field estimations with the proposed method, but more memory space has to be allocated. Hence, we apply different sampling policies along the longitudinal axis to optimize the memory allocation. In the first sampling policy, the longitudinal axis is sampled uniformly. In the second sampling policy, we sample the parameter $\alpha_l = \frac{1}{\lambda z_l}$ uniformly; thus, we have nonuniform sampling along the longitudinal axis.
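
A minimal sketch of the LUT-based assembly of Eq. (9) is given below, assuming a uniform-z LUT, nearest-depth fetching, and integer-pixel shifting of the stored kernels. The shifting scheme and all names are illustrative; the chapter's "simple arithmetic operation" for data fetching is not specified in detail, so this is not the authors' exact implementation.

```python
import numpy as np
from scipy.special import fresnel

def build_lut(z_samples, x_edges, wavelength):
    """Precompute one centered 1D kernel per sampled depth (uniform-z policy)."""
    lut = []
    for z in z_samples:
        zeta = x_edges * np.sqrt(2.0 / (wavelength * z))
        S, C = fresnel(zeta)
        F = C + 1j * S
        lut.append(F[1:] - F[:-1])
    return np.asarray(lut)

def cgh_from_lut(points, lut, z_samples, n_pix, pitch, wavelength):
    """Scale and superpose 2D kernels built from LUT entries, as in Eq. (9).

    points: (x_l, y_l, z_l, amplitude) tuples.  The integer-pixel shift via
    np.roll is an illustrative approximation (it wraps at the borders).
    """
    field = np.zeros((n_pix, n_pix), dtype=complex)
    k = 2.0 * np.pi / wavelength
    for x_l, y_l, z_l, amp in points:
        i = np.argmin(np.abs(z_samples - z_l))        # nearest precomputed depth
        kx = np.roll(lut[i], int(round(x_l / pitch)))
        ky = np.roll(lut[i], int(round(y_l / pitch)))
        field += amp * np.exp(1j * k * z_l) * np.outer(ky, kx)  # P(r_l) * K_2D
    return field

pitch, n_pix = 8e-6, 64
edges = (np.arange(n_pix + 1) - n_pix / 2) * pitch
z_samples = np.linspace(61.6e-3, 65.7e-3, 16)
lut = build_lut(z_samples, edges, 532e-9)
points = [(0.0, 0.0, 62e-3, 1.0), (80e-6, -40e-6, 64e-3, 0.5)]
hologram_phase = np.angle(cgh_from_lut(points, lut, z_samples, n_pix, pitch, 532e-9))
```

The phase array obtained at the end is what would be written to a phase-only SLM.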


4. Simulation results

The performance of the proposed diffraction field calculation method is assessed by implementing different scenarios; a few of them are presented here to give the reader an insight. Two major performance evaluation criteria are taken into account: the total computation time of the CGH and the normalized mean square error (NMSE) of the reconstructed object. The NMSE of the reconstructed object can be calculated as

$$\mathrm{NMSE} = \frac{\displaystyle\sum_{n=1}^{N}\sum_{m=1}^{M} \left|\hat{\psi}_{2D,z=z_0}[n,m] - \psi_{2D,z=z_0}[n,m]\right|^2}{\displaystyle\sum_{n=1}^{N}\sum_{m=1}^{M} \left|\psi_{2D,z=z_0}[n,m]\right|^2} \tag{10}$$

where $\psi_{2D,z=z_0}[n,m]$ and $\hat{\psi}_{2D,z=z_0}[n,m]$ denote the objects reconstructed at the $z=z_0$ plane from the diffraction fields calculated by the standard and the proposed algorithms, respectively. The simulated scenario for a CGH is illustrated in Figure 2.

Figure 2.

An illustration of the simulated optical setup. The SLM employed in the setup has $N$ and $M$ pixels along the $x$- and $y$-axes, respectively. The transversal sampling period is denoted by $X_s$. The variable $z_0$ determines the distance between the SLM and the closest point light source of the 3D object.
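
The NMSE of Eq. (10) is straightforward to compute; the sketch below assumes both reconstructions are given as complex 2D arrays.

```python
import numpy as np

def nmse(recon_proposed, recon_standard):
    """Normalized mean square error, Eq. (10): energy of the difference
    between the two reconstructions, normalized by the reference energy."""
    num = np.sum(np.abs(recon_proposed - recon_standard) ** 2)
    den = np.sum(np.abs(recon_standard) ** 2)
    return num / den

# Identical reconstructions give NMSE = 0; doubling the field gives NMSE = 1.
ref = np.exp(1j * np.random.default_rng(0).uniform(0, 2 * np.pi, (8, 8)))
```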

First, a 3D point cloud object is generated in the computer environment. The generated 3D object has 3144 points distributed over space. The volume occupied by the object has extensions $x_e = 2.8$ mm, $y_e = 4.1$ mm, and $z_e = 4.1$ mm along the $x$-, $y$-, and $z$-axes. The distance between the object and the screen is taken as $z_0 = 61.6$ mm. We assume that the simulated SLM has a 100% fill factor, and the pixel pitch $X_s$ is taken as 8 μm. The simulated SLM has 512 pixels along both the $x$- and $y$-axes. We assume that a green laser is employed for illumination; hence, the wavelength is taken as 532 nm.

The proposed algorithm is implemented on two platforms: MATLAB and Visual C++. To shorten the computation time of the diffraction fields, we utilize CUDA libraries and the parallel computation power of the GPU. The assembled computer system has an i5-2500 CPU at 3.3 GHz, 4 GB RAM, and a GTX-680 GPU to run the algorithm. The computer runs a 64-bit Windows 7 operating system.

Generally, off-the-shelf SLMs have a pixelated structure, and the phase parts of the calculated diffraction fields are used to drive them. When the pixelated structure of the SLM is not taken into account in CGH calculations, it is not easy to differentiate the focused and unfocused parts of the reconstructed 3D objects. An illustration of such a result can be seen in Figure 3a. As a result of the similarity between focused and unfocused parts, the quality of the reconstructed object decreases significantly. On the contrary, the difference between the focused and unfocused parts of the reconstructed 3D object is clear when the proposed method is used in the diffraction field calculation, as can easily be seen in Figure 3b.

Figure 3.

A point cloud object which has six parts, each located at a different depth along the longitudinal axis. The leftmost piece is reconstructed in both of the figures shown above: (a) reconstruction of the 3D object from the CGH obtained without taking the pixelated structure of the SLM into consideration and (b) reconstruction from the CGH calculated by the proposed algorithm.

Furthermore, numerical and optical reconstructions are very similar to each other, and that similarity in the reconstructions can be seen in Figure 4.

Figure 4.

(a) Optical reconstruction of a point cloud object and (b) numerical reconstruction of the same object given in (a).

To calculate the diffraction field with the standard method, we need to evaluate the cosine and sine Fresnel integrals for each pixel on the SLM and for each point light source of the 3D object. As a result, the computational complexity of the standard method is extremely high, and the CGH is calculated in 2701.10 s. A significant improvement in computation time is achieved when the proposed algorithm is employed in the CGH calculation: when we use the LUT-based method for the same scenario mentioned above, we need 8.15 s to calculate the CGH. Further improvement in computation time is obtained when the presented algorithm is implemented in parallel on a GPU. Although a significant gain in the computation time of the CGH is obtained by using the LUT, there is a negligible amount of error on the reconstructed objects because of the finite number of kernels and the quantization effect along the longitudinal axis. The performance of the proposed method is summarized in Table 1.

3D object = 3144 points; λ = 532 nm; N = M = 512; X_s = 8 μm

Method | Computation time (s) | NMSE
Standard method | 2710.10 | —
LUT | 8.15 | 0.08
LUT: parallel processing using four cores | 7.08 | 0.08
LUT: parallel processing using GTX-680 | 0.08 | 0.08

Table 1.

Performance assessment of the proposed algorithm in terms of NMSE.

The proposed algorithm utilizes a LUT which has precomputed 1D kernels for 125 different sampling points along the longitudinal axis.

By increasing the number of kernels in the LUT, we can improve the error performance of the algorithm without any extra computational load, but the size of the required memory grows; the installed memory may then be insufficient to perform the diffraction field calculations. To overcome the memory allocation problem, two different sampling policies along the longitudinal axis are proposed for generating the LUT. The first sampling policy is based on uniform sampling of the longitudinal axis. The second is based on uniform sampling of $\alpha_l$; hence, it gives nonuniform sampling along the longitudinal axis. Tables 2 and 3 summarize the performances of the sampling policies in terms of NMSE and the memory required by the precomputed LUT. As can be seen from Tables 2 and 3, when the size of the LUT is fixed, the uniform sampling policy along the longitudinal axis provides better NMSE performance than the nonuniform one.
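
The two sampling policies can be sketched as follows. This is a hedged illustration: the depth range below simply reuses the simulation values ($z_0 = 61.6$ mm plus the 4.1 mm object extent), and the function names are ours.

```python
import numpy as np

def uniform_z_samples(z_min, z_max, n):
    """Policy 1: sample the depth z uniformly along the longitudinal axis."""
    return np.linspace(z_min, z_max, n)

def uniform_alpha_samples(z_min, z_max, n, wavelength):
    """Policy 2: sample alpha = 1 / (lambda * z) uniformly, which yields a
    nonuniform set of depths along the longitudinal axis."""
    alphas = np.linspace(1.0 / (wavelength * z_max),
                         1.0 / (wavelength * z_min), n)
    return 1.0 / (wavelength * alphas)

z_uniform = uniform_z_samples(61.6e-3, 65.7e-3, 125)
z_nonuniform = uniform_alpha_samples(61.6e-3, 65.7e-3, 125, 532e-9)
```

Both policies cover the same depth range with the same number of kernels; only the spacing of the samples differs.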

3D object = 3144 points; λ = 532 nm; N = M = 512; X_s = 8 μm

Number of 1D kernels | NMSE | Memory allocation (kB)
83 | 0.068 | 332
92 | 0.061 | 368
103 | 0.054 | 412
118 | 0.048 | 472
137 | 0.039 | 548
165 | 0.034 | 660
206 | 0.026 | 824
274 | 0.020 | 1096
411 | 0.014 | 1644
821 | 0.006 | 3284

Table 2.

Performance of the proposed algorithm according to the number of kernels used in LUT, NMSE, and allocated memory space.

LUT is formed by uniform sampling of depth parameter along longitudinal axis. Each element in 1D kernels is represented by four bytes.

3D object = 3144 points; λ = 532 nm; N = M = 512; X_s = 8 μm

Number of 1D kernels | NMSE | Memory allocation (kB)
83 | 0.127 | 332
92 | 0.114 | 368
103 | 0.104 | 412
118 | 0.088 | 472
137 | 0.077 | 548
165 | 0.062 | 660
206 | 0.051 | 824
274 | 0.038 | 1096
411 | 0.025 | 1644
821 | 0.013 | 3284

Table 3.

Performance of the proposed algorithm according to the number of kernels used in LUT, NMSE, and allocated memory space.

LUT is formed by uniform sampling of α l parameter. Each element in 1D kernels is represented by four bytes.

In terms of the calculated numerical errors, there should be a noticeable deviation between the objects reconstructed from the CGHs obtained by the standard and the proposed methods, but it is not easy to differentiate the reconstructions visually. Illustrations of the numerically reconstructed objects for both methods are shown in Figure 5a and b, respectively. To see the difference between the two reconstructions, we subtract one from the other and take the magnitude of the difference. Then, we scale the difference image linearly between 0 and 255 to improve the visibility of small deviations; those deviations can be seen in Figure 5c. Most of the deviations are in the unfocused region, and they do not decrease the quality of the reconstruction. Hence, the proposed algorithm provides successful results.

Figure 5.

(a) Magnitude of the object reconstructed at $z = z_0$ from the diffraction pattern calculated by the standard algorithm and (b) by the proposed algorithm. (c) Magnitude of the difference between the reconstructions given in (a) and (b). Note that the image is scaled linearly from 0 to 255; thus, insignificant differences become visible.

The performance of the presented algorithm is also tested by optical reconstructions. For that purpose, we assembled the optical setup shown in Figure 6. A green laser with $\lambda = 532$ nm is used as the coherent light source, and a HoloEye Pluto phase-only SLM is employed as the dynamic display device. A couple of optically reconstructed objects are shown in Figure 7.

Figure 6.

The setup assembled for the optical experiments.

Figure 7.

Optically reconstructed 3D objects: (a) a hand and (b) a propeller.


5. Conclusions

The two major problems in H3DTV systems are decreasing the computation time of the CGH and improving the quality of the reconstructed object. Using fast algorithms in diffraction field calculations helps decrease the computation time, but most of those fast algorithms impose approximations that degrade the quality of the reconstructed object. In this work, we propose a diffraction field calculation algorithm that paves the way to real-time calculation of diffraction fields from point cloud objects. At the same time, the quality of the reconstructed objects is improved by taking the pixelated structure of the SLM into account. Also, the proposed method can be run in parallel on a GPU. The numerical and optical experiments performed provide similar results. The proposed method utilizes a precomputed LUT to decrease the computational load. Storing the precomputed LUT requires a significant amount of memory, and the occupied memory space is optimized by means of two different sampling policies along the longitudinal axis. In the first sampling policy, the LUT is formed by uniform sampling along the longitudinal axis; in the second one, nonuniform sampling is applied. When we fix the size of the LUT, better NMSE performance is obtained with the uniform sampling policy. Equivalently, for a target NMSE, the uniform sampling policy requires less memory for storing the LUT.



Acknowledgments

This work was supported by the Scientific and Technological Research Council of Turkey project under grant EEEAG-112E220 and the Marmara University Scientific Research Fund project under grant FEN-A-130515-0176.


  1. 1. Saxby G. Practical Holography. 3rd ed. Boca Raton, FL: Taylor and Francis; 2003. 478 p. ISBN: 978-1-4200-3366-3
  2. 2. Lucente M. Diffraction-specific fringe computation for electro-holography [thesis]. Cambridge, MA: Massachusetts Institute of Technology; 1994
  3. 3. Benton SA, Bove VM Jr. Holographic Imaging. New Jersey: John Wiley & Sons; 2008. 288 p. ISBN: 978-0470068069
  4. 4. Toal V. Introduction to Holography. US: CRC Press Taylor and Francis Group; 2012. 502 p. ISBN: 978-1439818688
  5. 5. Goodman JW. Introduction to Fourier Optics. 2nd ed. New York: McGraw Hill; 1996. 441 p. ISBN: 978-0070242548
  6. 6. Born M, Wolf E. Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light. 7th ed. New York: Cambridge University Press; 1980. 952 p. ISBN: 978-0521642224
  7. 7. Hong K, Yeom J, Jang C, Hong J, Lee B. Full-color lens-array holographic optical element for three-dimensional optical see-through augmented reality. Optics Letters. 2014;39(1):127-130. DOI: 10.1364/OL.39.000127
  8. 8. Wang Q-H, He M-Y, Zhang H-L, Deng H. Augmented reality 3D display system based on holographic optical element. In: Proceedings Volume 10942, Advances in Display Technologies IX; 1094203. SPIE OPTO; 2019. DOI: 10.1117/12.2508136
  9. 9. Georgiou A, Kollin JS, Diaz AG. Multi-beam optical system for fast writing of data on glass. In: United States Patent US10181336B1. 2019
  10. 10. Horst F. Integrated optical circuit for holographic information processing. In: United States Patent US20190041796A1. 2019
  11. 11. Wagner C, Seebacher S, Osten W, Jüptner W. Digital recording and numerical reconstruction of lensless Fourier holograms in optical metrology. Applied Optics. 1999;38(22):4812-4820. DOI: 10.1364/AO.38.004812
  12. 12. Picart P, Mounier D, Desse JM. High-resolution digital two-color holographic metrology. Optics Letters. 2008;33(3):276-278. DOI: 10.1364/OL.33.000276
  13. 13. Claus D, Pedrini G, Buchta D, Osten W. Accuracy enhanced and synthetic wavelength adjustable optical metrology via spectrally resolved digital holography. Journal of the Optical Society of America A. 2018;35(4):546-552. DOI: 10.1364/JOSAA.35.000546
  14. 14. Kim MK. Digital Holographic Microscopy: Principles, Techniques and Applications. New York: Springer Series in Optical Sciences; 2011. ISBN: 978-1-4419-7792-2
  15. 15. Quan X, Kumar M, Matoba O, Awatsuji Y, Hayasaki Y, Hasegawa S, et al. Three-dimensional stimulation and imaging-based functional optical microscopy of biological cells. Optics Letters. 2018;43(21):5447-5450. DOI: 10.1364/OL.43.005447
  16. 16. Gholizadeh S. A review of non-destructive testing methods of composite materials. Procedia Structural Integrity. 2016;1:50-57. DOI: 10.1016/j.prostr.2016.02.008
  17. 17. Kreis T. Application of digital holography for nondestructive testing and metrology: A review. IEEE Transactions on Industrial Informatics. 2016;12(1):240-247. DOI: 10.1109/TII.2015.2482900
  18. 18. Karray M, Christophe P, Gargouri M, Picart P. Digital holographic nondestructive testing of laminate composite. Optical Engineering. 2016;55(9):095105-1-095105-7. DOI: 10.1117/1.OE.55.9.095105
  19. 19. Thornton DE, Spencer MF, Plimmer BT, Mao D. The digital holography demonstration: A table top setup for STEM-based outreach events. In: Proceedings Volume 10741, Optics Education and Outreach V; 107410J SPIE Optical Engineering + Applications; San Diego, California, United States: SPIE; 2018. DOI: 10.1117/12.2320380
  20. 20. Salançon E, Escarguel A. Holography in education and popular science: A new versatile and vibrationless color device. European Journal of Physics. 2018;40(1):015301. DOI: 10.1088/1361-6404/aae8ba
  21. 21. Xia H, Picart P, Montresor S, Guo R, Li JC, Yusuf Solieman O, et al. Mechanical behaviour of CAD/CAM occlusal ceramic reconstruction assessed by digital color holography. Dental Materials. 2018;34(8):1222-1234. DOI: 10.1016/
  22. 22. Casavola C, Lamberti L, Pappalettera G, Pappalettere C. Application of contouring to dental reconstruction. In: Jin H, Sciammarella C, Furlong C, Yoshida S, editors. Imaging Methods for Novel Materials and Challenging Applications, Volume 3. 2013. Conference Proceedings of the Society for Experimental Mechanics Series. New York, NY: Springer; DOI: 10.1007/978-1-4614-4235-6-25
  23. 23. Song W, Huang K, Xi Y, Cho K. Interactive holography system for enhancing augmented reality experiences. Advanced Science Letters. 2015;21(3):354-357. DOI: 10.1166/asl.2015.5771
  24. 24. Clini P, Quattrini R, Frontoni E, Pierdicca R, Nespeca R. Real/not real: Pseudo-holography and augmented reality applications for cultural heritage. In: Handbook of Research on Emerging Technologies for Digital Preservation and Information Modeling. IGI Global; 2017. DOI: 10.4018/978-1-5225-0680-5.ch009
  25. 25. Berneth H, Burder F-K, Fäcke T, Hagen R, Hönel D, Rölle T, et al. Holographic recordings with high beam ratios on improved Bayfol HX photopolymer. In: Proceedings Volume 8776, Holography: Advances and Modern Trends III; 877603: SPIE Optics + Optoelectronics; Prague: Czech Republic. 2013. DOI: 10.1117/12.2018618
  26. 26. Zanutta A, Orselli E, Fäcke T, Bianco A. Photopolymeric films with highly tunable refractive index modulation for high precision diffraction optics. Optical Materials Express. 2016;6(1):252-263. DOI: 10.1364/OME.6.000252
  27. 27. Zhuk DI, Burunkova JA, Denisyuk IY, Miroshnichenko GP, Csarnovics I, Tóth D, et al. Peculiarities of photonic crystal recording in functional polymer nanocomposites by multibeam interference holography. Polymer. 2017;112:136-143. DOI: 10.1016/j.polymer.2017.02.004
  28. 28. Pedrini G, Martinez-García V, Wiedmann P, Wenzelburger M, Killinger A, Weber U, et al. Residual stress analysis of ceramic coating by laser ablation and digital holography. Experimental Mechanics. 2016;56:683-701. DOI: 10.1007/s11340-015-0120-3
  29. 29. Ruiz CGT, De La Torre-Ibarra MH, Flores-Moreno JM, Frausto-Reyes C, Santoyo FM. Cortical bone quality affectations and their strength impact analysis using holographic interferometry. Biomedical Optics Express. 2018;9(10):4818-4833. DOI: 10.1364/BOE.9.004818
  30. 30. Kumar M, Shakher C. Experimental characterization of the hygroscopic properties of wood during convective drying using digital holographic interferometry. Applied Optics. 2016;55(5):960-968. DOI: 10.1364/AO.55.000960
  31. Kumar M, Birhman AS, Kannan S, Shakher C. Measurement of initial displacement of canine and molar in human maxilla under different canine retraction methods using digital holographic interferometry. Optical Engineering. 2018;57(9):094106-1-094106-12. DOI: 10.1117/1.OE.57.9.094106
  32. Wang D, Zhao Y, Rong L, Wan M, Shi X, Wang Y, et al. Expanding the field-of-view and profile measurement of covered objects in continuous-wave terahertz reflective digital holography. Optical Engineering. 2019;58(2):023111-1-023111-7. DOI: 10.1117/1.OE.58.2.023111
  33. Montrésor S, Memmolo P, Bianco V, Ferraro P, Picart P. Comparative study of multi-look processing for phase map de-noising in digital Fresnel holographic interferometry. Journal of the Optical Society of America A. 2019;36(2):A59-A66. DOI: 10.1364/JOSAA.36.000A59
  34. Sun J, Chen Q, Zhang Y, Zuo C. Optimal principal component analysis-based numerical phase aberration compensation method for digital holography. Optics Letters. 2016;41(6):1293-1296. DOI: 10.1364/OL.41.001293
  35. Yaroslavsky L. Digital Holography and Digital Image Processing: Principles, Methods, Algorithms. New York: Springer; 2004. 584 p. ISBN: 978-1441953971
  36. Memmolo P, Bianco V, Paturzo M, Ferraro P. Numerical manipulation of digital holograms for 3-D imaging and display: An overview. Proceedings of the IEEE. 2017;105(5):892-905. DOI: 10.1109/JPROC.2016.2617892
  37. Shimobaba T, Ito T. Random phase-free computer-generated hologram. Optics Express. 2015;23(7):9549-9554. DOI: 10.1364/OE.23.009549
  38. Sang X, Gao X, Yu X, Xing S, Li Y, Wu Y. Interactive floating full-parallax digital three-dimensional light-field display based on wavefront recomposing. Optics Express. 2018;26(7):8883-8889. DOI: 10.1364/OE.26.008883
  39. Shimobaba T, Nakayama H, Masuda N, Ito T. Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display. Optics Express. 2010;18(19):19504-19509. DOI: 10.1364/OE.18.019504
  40. Shimobaba T, Sato Y, Miura J, Takenouchi M, Ito T. Real time digital holographic microscopy using the graphic processing unit. Optics Express. 2008;16(16):11776-11781. DOI: 10.1364/OE.16.011776
  41. Kim S-C, Kim E-S. Effective generation of digital holograms of three dimensional objects using a novel look-up table method. Applied Optics. 2008;47(19):D55-D62. DOI: 10.1364/AO.47.000D55
  42. Kim S-C, Kim E-S. Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods. Applied Optics. 2009;48(6):1030-1041. DOI: 10.1364/AO.48.001030
  43. Kang H, Yamaguchi T, Yoshikawa H, Kim S-C, Kim E-S. Acceleration method of computing a compensated phase-added stereogram on a graphic processing unit. Applied Optics. 2008;47(31):5784-5789. DOI: 10.1364/AO.47.005784
  44. Kang H, Yaras F, Onural L. Graphics processing unit accelerated computation of digital holograms. Applied Optics. 2009;48(34):H137-H143. DOI: 10.1364/AO.48.00H137
  45. Leseberg D, Frère C. Computer generated holograms of 3D objects composed of tilted planar segments. Applied Optics. 1988;27:3020-3024. DOI: 10.1364/AO.27.003020
  46. Tommasi T, Bianco B. Computer-generated holograms of tilted planes by a spatial frequency approach. Journal of the Optical Society of America A. 1993;10:299-305. DOI: 10.1364/JOSAA.10.000299
  47. Delen N, Hooker B. Free space beam propagation between arbitrarily oriented planes based on full diffraction theory: A fast Fourier transform approach. Journal of the Optical Society of America A. 1998;15:857-867. DOI: 10.1364/JOSAA.15.000857
  48. Esmer GB. Computation of holographic patterns between tilted planes [thesis]. Ankara: Bilkent University; 2004
  49. Matsushima K. Computer generated holograms for three-dimensional surface objects with shade and texture. Applied Optics. 2005;44(22):4607-4614. DOI: 10.1364/AO.44.004607
  50. Yamamoto K, Senoh T, Oi R, Kurita T. 8k4k-size computer generated hologram for 3-D visual system using rendering technology. In: 4th International Universal Communication Symposium (IUCS); 18-19 October. IEEE; 2010
  51. Ahrenberg L, Benzie P, Magnor M, Watson J. Computer generated holograms from three dimensional meshes using an analytic light transport model. Applied Optics. 2008;47(10):1567-1574. DOI: 10.1364/AO.47.001567
  52. Kim H, Hahn J, Lee B. Mathematical modelling of triangle-mesh-modelled three-dimensional surface objects for digital holography. Applied Optics. 2008;47(19):D117-D127. DOI: 10.1364/AO.47.00D117
  53. Liu Y-Z, Dong J-W, Chen B-C, He H-X, Wang H-Z. High-speed full analytical holographic computations for true-life scenes. Optics Express. 2010;18(4):3345-3351. DOI: 10.1364/OE.18.003345
  54. Lee W, Im D, Paek J, Hahn J, Kim H. Semi-analytic texturing algorithm for polygon computer-generated holograms. Optics Express. 2014;22(25):31180-31191. DOI: 10.1364/OE.22.031180
  55. Su P, Cao W, Ma J, Cheng B, Liang X, Cao L, et al. Fast computer-generated hologram generation method for three-dimensional point cloud model. Journal of Display Technology. 2016;12(12):1688-1694. DOI: 10.1109/JDT.2016.2553440
  56. Haist T, Schönleber M, Tiziani HJ. Computer-generated holograms from 3D-objects written on twisted-nematic liquid crystal displays. Optics Communications. 1997;140:299-308. DOI: 10.1016/S0030-4018(97)00192-2
  57. Yu L, Cai L. Iterative algorithm with a constraint condition for numerical reconstruction of a three-dimensional object from its hologram. Journal of the Optical Society of America A. 2001;18(5):1033-1045. DOI: 10.1364/JOSAA.18.001033
  58. Rosen J, Brooker G. Digital spatially incoherent Fresnel holography. Optics Letters. 2007;32(8):912-914. DOI: 10.1364/OL.32.000912
  59. Muffoletto RP, Tyler JM, Tohline JE. Shifted Fresnel diffraction for computational holography. Optics Express. 2007;15(9):5631-5640. DOI: 10.1364/OE.15.005631
  60. Abookasis D, Rosen J. Computer-generated holograms of three-dimensional objects synthesized from their multiple angular viewpoints. Journal of the Optical Society of America A. 2003;20(8):1537-1545. DOI: 10.1364/JOSAA.20.001537
  61. Shi L, Huang F-C, Lopes W, Matusik W, Luebke D. Near-eye light field holographic rendering with spherical waves for wide field of view interactive 3D computer graphics. ACM Transactions on Graphics. 2017;36(6):236-1-236-17. DOI: 10.1145/3130800.3130832
  62. Park J-H. Recent progress in computer-generated holography for three-dimensional scenes. Journal of Information Display. 2017;18(1):1-12. DOI: 10.1080/15980316.2016.1255672
  63. Chang C, Xia J, Jiang Y. Holographic image projection on tilted planes by phase-only computer generated hologram using fractional Fourier transform. Journal of Display Technology. 2014;10(2):107-113. DOI: 10.1109/JDT.2013.2285174
  64. Gao C, Liu J, Li X, Xue G, Jia J, Wang Y. Accurate compressed look up table method for CGH in 3D holographic display. Optics Express. 2015;23(26):33194-33204. DOI: 10.1364/OE.23.033194
  65. Murano K, Shimobaba T, Sugiyama A, Takada N, Kakue T, Oikawa M, et al. Fast computation of computer-generated hologram using Xeon Phi coprocessor. Computer Physics Communications. 2014;185(10):2742-2757. DOI: 10.1016/j.cpc.2014.06.010
  66. Jackin BJ, Miyata H, Baba T, Ohkawa T, Ootsu K, Yokota T, et al. A decomposition method for fast calculation of large scale CGH on distributed machines. In: Laser Applications to Chemical, Security and Environmental Analysis; 13-17 July 2014; Seattle, Washington, USA. 2014. DOI: 10.1364/AIO.2014.JTu4A.51
  67. Zhao Y, Cao L, Zhang H, Tan W, Wu S, Wang Z, et al. Time-division multiplexing holographic display using angular-spectrum layer-oriented method. Chinese Optics Letters. 2016;14(1):010005-1-010005-5. DOI: 10.3788/COL201614.010005
  68. Zhao Y, Cao L, Zhang H, Kong D, Jin G. Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method. Optics Express. 2015;23(20):25440-25449. DOI: 10.1364/OE.23.025440
  69. Kovachev M, Ilieva R, Benzie P, Esmer GB, Onural L, Watson J, et al. Holographic 3DTV displays using spatial light modulators. In: Onural L, Ozaktas HM, editors. Three-Dimensional Television: Capture, Transmission, Display. Heidelberg: Springer-Verlag Berlin; 2008. pp. 529-556. ISBN: 978-3-540-72532-9
  70. Katkovnik V, Astola J, Egiazarian K. Discrete diffraction transform for propagation, reconstruction, and design of wavefield distributions. Applied Optics. 2008;47(19):3481-3493. DOI: 10.1364/AO.47.003481
  71. Katkovnik V, Migukin A, Astola J. Backward discrete wave field propagation modelling as an inverse problem: Toward reconstruction of wave field distributions. Applied Optics. 2009;48(18):3407-3423. DOI: 10.1364/AO.48.003407
  72. Esmer GB. Fast computation of Fresnel diffraction field of a three dimensional object for a pixelated optical device. Applied Optics. 2013;52(1):A18-A25. DOI: 10.1364/AO.52.000A18
  73. Esmer GB. Performance assessment of a fast and accurate scalar optical diffraction computation algorithm. 3D Research. 2013;4(1):1-7. DOI: 10.1007/3DRes.01(2013)2
  74. Esmer GB. Algorithms for fast calculation of scalar optical diffraction field on a pixelated display device. In: IEEE AFRICON 2013; 9-12 September 2013; Mauritius: IEEE; 2013. DOI: 10.1109/AFRCON.2013.6757704
  75. Esmer GB. Real-time computation of diffraction fields for pixelated spatial light modulators. Optics Express. 2015;23(10):12636-12647. DOI: 10.1364/OE.23.012636
  76. HoloEye. PLUTO phase only spatial light modulator reflective [Internet]. 2015. Available from:
  77. Gander W, Gautschi W. Adaptive quadrature revisited. BIT Numerical Mathematics. 2000;40(1):84-101. DOI: 10.1023/A:1022318402393