
Holographic 3-D Displays - Electro-holography within the Grasp of Commercialization

Written By

Stephan Reichelt, Ralf Haussler, Norbert Leister, Gerald Futterer, Hagen Stolle and Armin Schwerdtner

Published: 01 April 2010

DOI: 10.5772/8650

From the Edited Volume

Advances in Lasers and Electro Optics

Edited by Nelson Costa and Adolfo Cartaxo


1. Introduction

Holography is a diffraction-based coherent imaging technique in which a complex three-dimensional object can be reproduced from a flat, two-dimensional screen with a complex transparency representing amplitude and phase values. It is commonly agreed that real-time holography is the ne plus ultra art and science of visualizing fast temporally changing 3-D scenes. The integration of the real-time or electro-holographic principle into display technology is one of the most promising but also challenging developments for the future consumer display and TV market. Only holography allows the reconstruction of natural-looking 3-D scenes and therefore provides observers with a completely comfortable viewing experience. To date, several challenges have prevented the technology from becoming commercialized, but those obstacles are now starting to be overcome. Recently, we have developed a novel approach to real-time display holography that combines an overlapping sub-hologram technique with a tracked viewing-window technology (Schwerdtner, Leister & Häussler, 2007; Schwerdtner, Häussler & Leister, 2007). For the first time, this enables solutions for large-screen interactive holographic displays (Stolle & Häussler, 2008; Reichelt et al., 2008).

This chapter presents these novel solutions for large real-time holographic 3-D displays in the context of previous and current approaches to electro-holography. The holographic display developed by us combines a tailored holographic recording scheme with active tracking of the observer. This unique approach dramatically reduces the demand for the space-bandwidth product of the hologram and thus allows the use of state-of-the-art spatial light modulators and enables real-time calculation. The fundamentals and challenges of the holographic display technology are described, its implementation in prototypes is demonstrated, and the bright prospects for the 3-D display market are discussed.


2. Real-time holographic display technology

When talking about holographic display technology, a note of caution about commonly used terminology is needed. For marketing or other reasons, the term 'holographic display' is often misused for systems that are not truly holographic in the sense of video-holography. Systems that use holographic screens or holographic optical elements merely to project images are one example. Even volumetric displays that create light spots somewhere within their volume are in many cases called 'holographic'. Conversely, truly holographic recordings are sometimes called displays although they are in fact static holograms, or dynamic holograms that are not yet real-time capable, with rewriting times in the minute range and large-scale setups (Tay et al., 2008).

What we mean by real-time holographic displays are systems that use diffraction of coherent light to reconstruct the wave field of a 3-D scene in space. Such displays must operate at or near video rate to merit the name video holography. Furthermore, real-time holography must not only display the hologram at video rate but also compute the hologram frames in real time to enable user interaction.

2.1. Why is holography the ultimate 3-D technology?

In human vision, three-dimensional perception is triggered by a large number of cues. Among them are monocular cues such as shading, occlusion, relative size, fogging, perspective distortion, and texture gradient, as well as binocular ones such as vergence (angular disparity) and stereopsis (horizontal disparity). In natural viewing situations, depth information is an ever-present cue in visual perception. Generally, and in addition to parallax, the physiological depth cues of accommodation and vergence are considered the most important ones for depth perception. Accommodation is the mechanism by which the human eye alters its optical power to hold objects at different distances in sharp focus on the retina. The power change is induced by the ciliary muscles, which steepen the curvature of the crystalline lens for objects at closer distances. Vergence, by contrast, is the simultaneous movement of both eyes toward the point of interest. The optical axes of both eyes converge on this point to image the object onto the respective fovea regions. When the eyes are not properly aligned with each other, strabismus occurs, which may adversely impair any 3-D perception. Most importantly, vergence movements and accommodation are closely linked with each other – automatically and subconsciously. That is why the image of an object is sharp and the two perspectives are fused. Together with other monocular and binocular cues, the focus depth cues – accommodation and blur in the retinal image – contribute to our visual ability to perceive the environment in three dimensions.

Over the last decade, various technologies for visualizing three-dimensional (3-D) scenes on displays have been demonstrated and refined, among them stereoscopic, multi-view, integral-imaging, volumetric, and holographic types. It is generally believed that the next big thing in the display industry is imminent, namely the transition from 2-D to 3-D visualization. It is seen as nothing less than the third epoch-making change in the film industry, after the transition from silent to sound movies in the 1920s and from black-and-white to color in the 1950s.

Most of the current approaches utilize the conventional stereoscopic principle, first described by Wheatstone (Wheatstone, 1838). But with the exception of super-multiview displays, they all suffer from an inherent conflict between vergence and accommodation, since scene depth cannot be physically realized but only feigned by displaying two views of different perspective on a flat screen and delivering them to the left and right eye, respectively. This mismatch requires the viewer to override the physiologically coupled oculomotor processes of vergence and eye focus, which may cause visual discomfort and fatigue.

The difference between normal viewing and stereoscopic viewing with conventional 3-D displays is illustrated in Figure 1. Natural viewing provides real stimuli; the viewer is both fixated and focused on the object, i.e. accommodation distance and vergence distance are exactly matched. The situation changes for stereoscopic 3-D displays. Though the viewer is still fixated on the object with the same vergence as in natural viewing, the eyes now focus on the display and not where the object appears to be, because the eyes always focus on the brightest point or highest contrast. With stereoscopic 3-D displays, depth is only an optical illusion; hence the normal physiological correlation between vergence and accommodation is disrupted (Hoffman et al., 2008). When looking at a stereoscopic display for a while, this so-called depth cue mismatch between convergence and focus leads to eye strain and fatigue. This fundamental problem of stereoscopic 3-D is a physiological one that cannot be solved by technological means. The only workaround for stereoscopic displays is either to limit large scene depth to very short sequences (short-time viewing) or to artificially reduce the depth of a scene (squeezed, non-proportional depth). The so-called comfort depth range of stereoscopic displays, which can be used for depth illusion without causing eye strain and fatigue, is limited to a region close to the display that corresponds to approximately 20-30% of the distance between viewer and display. Only within this region can the human eye tolerate a certain amount of mismatch. According to optometrists, this tolerance is in the range of 1/4 diopter. Although stereo 3-D can work well for some applications, for example cinema with a far observing distance or cell phones with short viewing times, it poses significant human-factor risks for mainstream products such as PC monitors and TVs. Looking at current stereoscopic 3-D displays and prototypes, it can also be observed that even the 1/4 diopter is scarcely utilized, limiting the usable depth range even further.

Figure 1.

Comparison between natural viewing or holographic 3-D display (left) and stereoscopic viewing with a 3-D stereo display (right).

Therefore, the inherent limitations of all 3-D stereo display technologies can be summarized as follows:

  • Depth cue mismatch of convergence and accommodation leads to eye strain and fatigue,

  • Reduced comfort depth range requires a non-proportional depth squeezing or allows only short-time viewing of scenes with large depth,

  • Potential for inappropriate use, and therefore a consumer application risk.

It is important to note that the comfort depth range of a display and the content generated, for instance, for a movie or a game are independent. This leads to a significant risk that even with a well-made stereoscopic 3-D display, improper content compromises user comfort or health and may (perhaps unfairly) be held against the display manufacturer.

Contrary to stereo 3-D, which inherently causes fatigue and eye strain for 3-D scenes of natural depth (i.e. properly scaled depth), holographic 3-D provides all viewing information of a natural scene – including eye focus – and therefore unlimited depth. Whoever can see 3-D in real life can see 3-D on a holographic display without fatigue or other consumer risks. Holographic displays are based on coherent object reconstruction. They deliver the full focus cues that are needed to provide the observer with a completely comfortable 3-D viewing experience (Benton & Bove, 2008). We have developed and successfully demonstrated a novel approach to real-time display holography based on a sub-hologram encoding technique and a tracked viewing-window technology. Our solution is capable of fulfilling the observer's expectations of real depth perception.

2.2. Classic holography and historic obstacles

Holography was invented by Dennis Gabor in 1947 (Gabor, 1948), but high-quality holograms could only be made on photographic film, which for technical reasons precludes animation. In classic holography, the 3-D scene information is encoded in the entire hologram, i.e. every tiny region or pixel of the hologram contributes to each object point. The specialty of such holograms is well known: if the hologram is broken into pieces, each piece will reconstruct the original scene, though with less resolution and in smaller size. When the hologram is illuminated by the reference wave, the combination of all of its cells reproduces the complete scene by multiple interferences. A classic film hologram has a large diffraction angle, which means it creates a large angular spectrum. The viewing zone from which the reconstructed object can be seen is large; both eyes of the viewer fit into this zone and the viewer can even move around and see different perspectives of the scene.

The difficulties arise when trying to apply the classic approach of holography to digital or electro-holography. The challenges of this approach are twofold: (a) the spatially sampled representation of the hologram by a light modulator (spatial resolution issue) and (b) the fast computation of the hologram (processing issue).

Hence, one of the most serious restrictions of video holography has been the dynamic representation of the hologram by an electrically addressed spatial light modulator (SLM) having a pixelized structure with limited spatial resolution. The complex amplitude distribution that reconstructs the desired object or scene is calculated and represented at regular discrete locations, i.e. at the pixel positions of the spatial light modulator. Since the hologram is sampled, aliasing has to be prevented. Otherwise, improper reconstruction with image artifacts would occur. The amount of information that can be recorded in the hologram is directly related to the spatial resolution and the size of the SLM. This fact is represented by the dimensionless space-bandwidth product

$$ \mathrm{SBP} = \nu_x b_x \, \nu_y b_y = \frac{b_x}{2\sigma_x} \cdot \frac{b_y}{2\sigma_y} \qquad (E1) $$

with being the maximum spatial frequency according to the sampling theorem, b the width and the pixel pitch of the modulator in x and y direction (Lohmann, 1967; Lohmann et al., 1996). In general, the space-bandwidth product capability of an optical system is directly related to its quality and performance. For example, a present state-of-the-art LCOS microdisplay with 1920 1080 pixel resolution, a pixel pitch of 8 μm and a total size of 0.7” gives a space-bandwidth product of 518,400. The Nyquist limit for the maximum spatial frequency is thus 62.5 Lp/mm, which translates into a maximum diffraction angle of 2.27 .

Figure 2.

Principle of classic holography. In conventional holography, every hologram pixel contributes to each object point of the 3-D scene; that is, holographic information exists within a large viewing zone.

Let us recall that in conventional holography the diffraction angle must be large to create a viewing zone that covers at least both eyes, and that different areas of the hologram encode the wave field originating from different perspectives of the object (see Figure 2). In other words, the primary objective of conventional holography is to reconstruct the 3-D object in space such that it can be seen binocularly by any viewer from different viewpoints at different perspectives. To achieve a sufficient viewing zone, pixel sizes in the range of one micron or less would be required. Moreover, to create large objects and fully exploit the 3-D impression, the display should be large. However, this corresponds to a huge amount of information that – even if large SLMs with tiny pixels were available – must still be handled in data processing and computing. To give an example, extreme-resolution displays with a pixel size of roughly 0.5 microns would be required, which translates into the huge demand of calculating billions to trillions of complex values for each of the 2 million scene points (1920 × 1080) to determine an HDTV scene in 3-D. When considering these requirements, the insurmountable obstacles to realizing conventional holography with today's technology become immediately obvious. The reasons why all past attempts at transferring conventional holography to display and TV applications have heretofore failed can be summarized as follows:

Insufficient display resolution: In order to achieve a viewing angle of ±30°, which is necessary to serve several users, a pixel pitch of about one wavelength or less is required. This means that for a 47-inch holographic display, for example, a resolution of 250,000 times that of HDTV is necessary (a rough plausibility check follows after this list).

Inadequate data volume and processing requirements: The computation of each display frame requires significantly more steps for a holographic display than for a 2-D display. Typical hologram computation involves calculating Fourier transforms. This factor, coupled with the greatly increased number of pixels required, places a demand for enormous amounts of computational power. Real-time video-quality holograms would typically require processing power of up to several hundred Peta-FLOPS, i.e. approximately 10^17 floating-point operations per second. This is far more than the computation power of current supercomputers.
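As the promised plausibility check of the resolution figure, the following sketch estimates the pixel count of a wavelength-pitch 47-inch panel; the panel dimensions and the 1 μm pitch are our assumptions, not values given in the text:

```python
# Assumed: 47-inch 16:9 panel, ~1 um pixel pitch for a +/-30 deg viewing angle.
width, height = 1.04, 0.585                  # panel size [m]
pitch = 1e-6                                 # pixel pitch ~ one wavelength [m]
pixels = (width / pitch) * (height / pitch)  # total pixel count
hdtv = 1920 * 1080
print(f"{pixels:.2e} pixels, i.e. roughly {pixels / hdtv:,.0f} x HDTV")
# -> ~6e11 pixels, i.e. roughly 290,000 x HDTV
```

With these assumptions the estimate agrees with the quoted 250,000× figure to within the uncertainty of the assumed pitch.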

2.3. Full parallax vs. single parallax holography

With full-parallax holograms, the holographic information is delivered in both the x and y direction. When looking at a full-parallax hologram, the perspective of the scene varies with the viewpoint – no matter in which direction the observer moves. In single-parallax holograms, on the other hand, the parallax information is sacrificed in one dimension. That way both the computational effort and the data transfer can be substantially reduced. Because the eyes are side by side, it is common practice to make horizontal-parallax-only holograms. A well-known example of a horizontal-parallax-only (HPO) hologram is the optically recorded white-light rainbow hologram invented by Benton (Benton, 1969; Benton & H. S. Mingace, 1970). The concept of single-parallax holograms was later successfully transferred to computational holography (St-Hilaire et al., 1992).

Figure 3.

Examples of full and parallax-limited holograms. The spherical phase of a simple single-point hologram is shown (kinoform of a point or Fresnel zone lens). HPO – horizontal-parallax-only; VPO – vertical-parallax-only.

Both the benefits and limitations of full and parallax-limited holograms become obvious from Figure 3, which shows a very simple hologram and its parallax-limited versions. A full-parallax hologram reconstructs the object point from a large area with spatial frequencies in all directions, which comes along with a large information content that must all be calculated, transferred by computer, and spatially resolved by the light modulator. In comparison, a parallax-limited hologram, which is a sliced version of the full type, diffracts the light basically in one dimension. Besides the reduced computational effort, such a configuration is beneficial for other reasons as well. For example, the remaining pixels (or 'saved' bandwidth) could be used for hologram interlacing of different colors or simply for simplifying the hologram representation with a given display architecture. However, there are also tradeoffs with single-parallax holograms. As the diffraction occurs mainly in one direction, the diffracted wave is slightly elliptical and the spatial resolution of the reconstruction in the non-diffracted direction might be marginally reduced. However, when the resolution capabilities of the human eye are taken into account and the hologram and display system are generated accordingly, the benefits of parallax-limited holograms outweigh their constraints. It should be noted that SeeReal's sub-hologram approach is inherently applicable to both encoding principles with similar gains in efficiency.

2.4. Brief review of previous and current approaches to electro-holography

There have been many practical approaches to electro-holography in the past decades. Several of them are briefly presented in this section as examples.

A pioneering holographic display was set up at the MIT Media Lab in S. A. Benton's group and continuously improved (St-Hilaire et al., 1992; Lucente et al., 1993; St-Hilaire et al., 1993). These systems use an acousto-optic modulator (AOM), scanners, and an optical imaging system. High-frequency acoustic waves locally modulate the refractive index of the AOM crystal and thus the phase of transmitted light. The AOM generates a horizontal line of the hologram that is vertically continued by a vertical scanner. Recent progress was made with an improved AOM that allows higher bandwidth and a simplified optical setup (Smalley et al., 2007). The system is specified with a cube-like object volume of approximately 80 mm edge length and a 24° viewing angle at a frame rate of 30 Hz.

Another approach was pursued by QinetiQ using the so-called Active-Tiling technique (Stanley et al., 2003; Slinger et al., 2004). An SLM with 1 million pixels is replicated sequentially 25-fold on an optically addressable SLM (OASLM) using 5 × 5 replication optics. Four of these units are stacked horizontally to yield an SLM with 100 million pixels in total at a pixel pitch of 6.6 μm. The modular system design allows stacking of more units to achieve higher pixel counts. A replay system with an Active-Tiling SLM with 100 million pixels achieved an object of 140 mm width and a viewing zone of 85 mm width at 930 mm distance.

Direct tiling of SLMs is used in another holographic display (Maeno et al., 1996). Five SLMs with 3 million pixels each are tiled to yield 15 million pixels in total. The object may be up to 50 mm wide, 150 mm high, and 50 mm deep and can be viewed with both eyes at a distance of 1 m.

Effort has also been made to optimize the calculation of holograms. A computing system with dedicated hardware performs hologram calculation much faster than a PC. As an example, the HORN-6 system uses a cluster of boards equipped with FPGA chips (Ichihashi et al., 2009). The system needs 1 second to calculate a hologram with 1920 × 1080 pixels if the object is composed of 1 million points, and 0.1 second if the object is composed of 100,000 points.

All these approaches have in common that a large number of pixels is needed to reconstruct an object with small or medium size. These requirements for the SLM and the computing system hinder scaling to larger sizes, e.g. 20” object size with unlimited depth for desktop applications or TV.


3. SeeReal’s novel solution to real-time holography

3.1. Fundamental idea and overview

The fundamental idea of our concept is fairly simple when considering holography – even literally – from an information point of view. All visual acuity is limited by the capabilities of the human eye, i.e. its angular and depth resolution, color and contrast sensitivity, numerical aperture, magnification, etc., where the characteristics of the eye may vary widely from individual to individual. It may additionally be confined by monochromatic and chromatic aberrations.

The majority of optical instruments, such as visual microscopes or telescopes, utilize the eye as the final element of the optical system. The eye's specific capabilities are thus taken into account in the optical system design. We view holography in the same way. When considering where the image of a natural environment is received in the human visual system, it becomes obvious that only a limited angular spectrum of any object reaches the retina. In fact, it is limited by the pupil aperture of a few millimeters. If the positions of both eyes are known, it would therefore be wasteful to reconstruct a holographic scene or object with an extended angular spectrum, as is common practice in classical holography. As mentioned above, every part of a classic hologram encodes the entire object information, cf. Figure 2. This means that a large viewing zone with parallax information within this zone exists; by moving within this zone the viewer can "look around" the reconstructed object and thus sees different perspectives of the scene. This approach is historically explained by the interference-based exposure technique onto high-resolution holographic films and is useful for static holograms as known from artistic holographic recordings. The key idea of our solution to electro-holography is to reconstruct a limited angular spectrum of the wave field of the 3-D object, which is laterally adapted in size to approximately the entrance pupil of the human eye, cf. Figure 4. That is, the highest priority is to reconstruct the wave field at the observer's eyes and not the three-dimensional object itself. The designated area in the viewing plane, i.e. the virtual 'viewing window' from which an observer can perceive the proper holographic reconstruction, is located in the Fourier plane of the holographic display. It corresponds to the zero-order extension of the underlying SLM cross grating. The holographic code (i.e. the complex amplitude transmittance) of each scene point is encoded in a designated area of the hologram that is limited in size. This area in the hologram plane is called a sub-hologram. The position and size of the sub-hologram are defined by the position of the object point and the viewing-window geometry. There is one sub-hologram per scene point, but owing to the diffractive nature of holography, sub-holograms of different object points may overlap. The complex amplitude transmittances of different sub-holograms can be added without any loss of information.

Figure 4.

Principle of viewing-window holography. With viewing-window holography the essential and proper holographic information exists at the eye positions only.

So far, for the sake of simplicity, we have discussed the matter for a single viewing window, which carries the information for one eye only. But how is parallax information then generated? A binocular view can be created by delivering different holographic reconstructions with the proper difference in perspective to the left and right eye, respectively. For this, the techniques of spatial or temporal multiplexing can be utilized.

For such a binocular-view multiplexed hologram, the reconstructed 3-D object can be seen from a single pair of viewing windows only. Advantageously, dynamic or real-time video holography offers an additional degree of freedom in system design with respect to temporal-multiplex operation. Given that the computational power is sufficient and the spatial light modulator is fast enough, the hologram can be updated quickly. By incorporating a tracking system that detects the eye positions of one or more viewers very fast and precisely and repositions the viewing windows accordingly, a dynamic 3-D holographic display can be realized that circumvents all problems of the classic approach to holography. The steering of the viewing window can be done in different ways, either by shifting the light source and thus shifting the image of the light source, or by placing an additional steering element close to the SLM that realizes a variable prism function. Selected implementations of steering principles are explained in more detail in section 3.5.

To summarize, the pillars of our holographic display technology are:

Viewing-window holography: By limiting the information of the holographic reconstruction to the viewing windows, the required display resolution is decreased dramatically. Pixel sizes in the range of today’s commercially available displays are sufficient.

Real-time computation of sub-holograms: By limiting the encoding to sub-holograms, the computing requirements are greatly reduced. Sub-hologram encoding brings computation into graphics card or ASIC range. The principle also enables temporal color multiplexing, speckle reduction, and suppression of higher orders within the viewing window.

Tracking of viewing windows: An active and real-time tracking of the viewing window allows a free movement of the observer.

3.2. The viewing-window and sub-hologram concept

The optical principle of our holographic approach is schematically depicted in Figure 5. Coherent light coming from a point light source is imaged by a positive lens (L+) into the observer plane and creates the spherical reference wave for hologram illumination. Very close to the imaging lens, the spatial light modulator (SLM) is positioned.

3.2.1. What is a viewing window?

The inherent regular SLM structure generates a diffraction pattern in the far field whose zero-order extension is the viewing window (VW) in which the eye of the observer is located. For small angles, the size of the viewing window follows from the grating equation and trigonometry as

$$ w_{x,y} = \frac{\lambda d}{p_{x,y}} \qquad (E2) $$

with d being the observer distance, λ the wavelength, and p the pixel pitch of the SLM in the x or y direction, respectively. Only within the viewing window does the information of the wave field of the 3-D object have to be generated.

Figure 5.

Schematic principle of the sub-hologram concept (side view). L+, positive lens; SLM, spatial light modulator; SH, sub-hologram; VW, viewing window; other abbreviations are defined in the text.

In Table 1, the wavelength-dependent viewing-window size $w_{x,y}$ is listed for different display types with typical viewing distances d and pixel pitches $p_{x,y}$. For proper visual perception of a colored scene, the smallest viewing window is the determining factor; the viewing window for blue light must therefore be at least the size of the pupil diameter. Depending on the scene luminance, the entrance pupil diameter of the human eye varies from 2 to 6 mm. The required pixel pitches of the holographic display therefore mainly result from the viewing situation and distance (television, desktop, or mobile display), where additionally the wavelength-dependent diffraction has to be considered.

Table 1. Size of the viewing window $w_{x,y}$ for exemplary holographic display types with typical viewing distances d at RGB wavelengths.

For the common examples, the viewing window for blue light is about 3/4 the size of that for red light. Viewing windows much larger than the pupil diameter provide more tolerance for the tracking system, i.e. the required accuracy of pupil detection and viewing-window shifting would be less stringent. With larger viewing windows, on the other hand, the intensity is distributed over a larger area, which means that only part of it will pass through the pupil. The best compromise between technological issues, tracking accuracy, and reconstructed scene brightness therefore has to be chosen.
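Eq. (E2) is easy to evaluate for concrete cases. The viewing distances and pixel pitches in the sketch below are illustrative assumptions, not the values of Table 1:

```python
# Viewing-window size w = lambda * d / p (Eq. E2). Display parameters are
# assumed for illustration; see Table 1 for the authors' actual values.
wavelengths = {"red": 633e-9, "green": 532e-9, "blue": 470e-9}   # [m]
displays = {"TV": (2.0, 50e-6), "desktop": (0.7, 30e-6)}         # (d [m], p [m])

for name, (d, p) in displays.items():
    sizes = ", ".join(f"{color}: {1e3 * lam * d / p:.1f} mm"
                      for color, lam in wavelengths.items())
    print(f"{name:8s} {sizes}")
```

Whatever the actual pitch, the ratio of the blue to the red window size is fixed at 470/633 ≈ 3/4, as stated above.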

3.2.2. What is a sub-hologram?

A direct consequence of the viewing-window holographic scheme is the following. For each reconstructed object point, there is only a limited region of the hologram in which data from this object point is encoded. Each point of the scene (which can be treated as a point-like emitter for interference-based hologram modeling) is associated with a locally limited area of the hologram. This limited region is called a sub-hologram. The size and position of this sub-hologram (SH) are defined by simple projection from the edges of the viewing window through the scene point that has to be encoded. The entire hologram is composed by superimposing all sub-holograms, i.e. strictly speaking the angularly limited complex amplitudes $U_n(x,y) = A_n(x,y)\exp[-i\varphi_n(x,y)]$ emanating from each of the n scene points. Since the complex amplitudes have to be calculated only within each sub-hologram area, the computational effort is dramatically reduced, which enables real-time hologram calculation. A beneficial side effect of this special holographic recording scheme is the reduced demand on the temporal coherence of the light source, which must have a coherence length of at least $N_F \lambda$, where $N_F$ is the Fresnel number of the largest sub-hologram. Monocular motion parallax information is delivered within the viewing window, which may be either full-parallax or single-parallax, depending on the recording scheme of the hologram and the overall optical setup.

In contrast to a common hologram, if an entire viewing-window-type hologram is broken into several pieces, each piece will reconstruct only part of the original scene, but with full resolution (apart from object points close to the border of the reconstructed scene fragment). The refractive analogue of one non-overlapping sub-hologram is a small lens with an amplitude and phase distribution that focuses light from the hologram to the object point.
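The projection that defines the sub-hologram extent reduces to similar triangles. The following sketch is our own illustration of that geometry, with assumed numbers:

```python
def subhologram_width(w_vw, d, z):
    """Lateral sub-hologram width by similar triangles: a viewing window of
    width w_vw at distance d, projected through an object point located at
    distance z in front of the hologram plane (0 < z < d)."""
    return w_vw * z / (d - z)

# Assumed example: 10 mm viewing window at 2 m; a point 0.2 m in front of
# the hologram yields a sub-hologram of only about 1.1 mm width.
print(f"{subhologram_width(10e-3, 2.0, 0.2) * 1e3:.2f} mm")
```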


3.3. Hologram synthesis

Hologram synthesis means the calculation of the complex wave field H(x,y) at the hologram plane. In other words: how do we find the proper amplitude and phase distribution in the hologram plane that reconstructs the desired object? The complex wave field H(x,y) at the hologram plane is the superposition of the object wave O(x,y) with the reference wave R(x,y). The object wave, in turn, represents the superposition of all Huygens elementary waves virtually emerging from the object being reconstructed. In classical holography this is the reflected, refracted, or diffracted light from an existing object, whereas in synthetic or computer-generated holography the object exists only mathematically as a set of data (position in space, amplitude, and color) and the superposition can easily be performed by computation. When the complex wave field H(x,y) at the hologram plane is again illuminated with the reference wave R, the object is holographically reconstructed.

In the following, two methods for hologram synthesis are described in more detail: a direct analytic method and a Fourier-based propagation method. Both methods assume that the object is composed of a set of n points, which are defined in position, amplitude, and color.

3.3.1. Direct analytic modeling

A three-dimensional scene is represented by a sufficient number of points defined at discretized locations. The propagation from the point source to the hologram plane is modeled by exact analytic functions, where each object point is regarded as a coherent point source that emits a spherical wave at a wavelength λ. At the hologram plane this wave is defined as $U_n(x,y) = A_n(x,y)\exp[-i\varphi_n(x,y)]$. If the location of the spherical emitter is given by $P_n(x_0, y_0, z_0)$, the phase at the hologram plane can be written as

$$ \varphi_n(x,y) = \frac{2\pi}{\lambda}\left(\sqrt{z_0^2 + (x - x_0)^2 + (y - y_0)^2} - z_0\right) + \varphi_0 \qquad (E3) $$

which corresponds exactly to the phase function of an ideal lens with a focal length of $z_0$. The quantity $\varphi_0$ ($0 \le \varphi_0 \le 2\pi$) is an initial phase assigned to each scene point, which can be used as a further degree of freedom in the design of the hologram; it is common practice to randomize this phase offset. To finally obtain the hologram phase function $\phi_n(x,y)$, the phase of the reference wave must be subtracted from $\varphi_n(x,y)$. The phase of the reference wave $\varphi_R$, which is in fact the illumination wave formed by the lens L+, is again the phase of an ideal spherical lens, with a focal length of d, see Figure 5. Thus, the hologram phase for one scene point is given by $\phi_n(x,y) = \varphi_n(x,y) - \varphi_R(x,y)$. It might be noted that the hologram phase function $\phi_n(x,y)$ associated with one emitter is always a rotationally symmetric function. Its origin is located at the point where a virtual line connecting the center of the viewing window with the scene point intersects the hologram plane.

In contrast to the classic approach, viewing-window holography computes and encodes the complex amplitude $U_n(x,y)$ of each scene point only in a designated area of the hologram plane, the sub-hologram area. Aliasing is prevented by cutting off those spatial frequency contributions of the object points that exceed the spatial resolution of the light modulator. The spatial frequency of the hologram phase for one point-like emitter can be derived from

$$ \nu(x,y) = \frac{\nabla \phi_n(x,y)}{2\pi} \qquad (E4) $$

where the maximum allowable spatial frequency is given by the resolution of the spatial light modulator and must satisfy the relations $\nu_x \le 1/(2\sigma_x)$ and $\nu_y \le 1/(2\sigma_y)$, respectively.
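The two equations combine naturally into code. The sketch below is our own illustration (unit amplitude, and the subtraction of the reference-wave phase omitted for brevity): it samples the point-emitter phase of Eq. (E3) on the pixel grid and masks out all contributions whose local spatial frequency, Eq. (E4), exceeds the Nyquist limit – which is precisely what confines the encoding to the sub-hologram area:

```python
import numpy as np

def point_subhologram(nx, ny, pitch, x0, y0, z0, wavelength, phi0=0.0):
    """Complex amplitude U_n of one scene point on an nx-by-ny pixel grid.
    Contributions beyond the Nyquist limit 1/(2*pitch) are masked out to
    prevent aliasing, leaving the sub-hologram of the point."""
    x = (np.arange(nx) - nx // 2) * pitch
    y = (np.arange(ny) - ny // 2) * pitch
    X, Y = np.meshgrid(x, y)
    r = np.sqrt(z0**2 + (X - x0)**2 + (Y - y0)**2)
    phi = 2 * np.pi / wavelength * (r - z0) + phi0    # Eq. (E3)
    nu_x = (X - x0) / (wavelength * r)                # Eq. (E4), x component
    nu_y = (Y - y0) / (wavelength * r)                # Eq. (E4), y component
    nyquist = 1 / (2 * pitch)
    mask = (np.abs(nu_x) <= nyquist) & (np.abs(nu_y) <= nyquist)
    return mask * np.exp(-1j * phi)
```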

3.3.2. Fourier-based modeling

Figure 6 illustrates an alternative method for hologram synthesis that is based on fast Fourier transforms (FFT) over object planes. It shows a side view of a three-dimensional object or scene, the spatial light modulator (hologram), and the viewing window. The viewing window is positioned at or close to an observer eye. The 3-D scene is located within a frustum that is defined by the edges of the viewing window and the SLM, respectively (drawn as red dashed lines in Figure 6). This frustum may be approximated by a pyramid if the viewing window is much smaller than the SLM.

For the calculation, the 3-D scene is sliced into layers (L1 ... Lm) that are parallel to both the SLM plane and the viewing-window plane. The continuously distributed object points are assigned to the closest layer. The extension of each layer is limited by the frustum and depends on the distance from the viewing window. With the approximation of the viewing window being much smaller than the SLM, the extension of a layer is proportional to its distance from the viewing window.

Figure 6.

FFT-based hologram synthesis.

In each layer, the object points are assigned to the nearest sampling point of the layer. The calculation method uses Fresnel transforms between the object layers L1 ... Lm, a reference layer LR in the observer plane, and a hologram layer LH in the plane of the SLM. A Fresnel transform can be performed mathematically as a Fourier transform plus multiplications with quadratic phase factors (Goodman, 1996); the discrete Fourier transform can be executed efficiently using fast Fourier transform (FFT) algorithms (a minimal sketch of such a propagation step follows the list below). Hologram synthesis comprises three steps:

  1. Firstly, the layers L1 to Lm are successively transformed to the reference layer LR by m Fresnel transforms.

  2. Secondly, the wave fields calculated in the first step are summed to form a superimposed complex-valued wave field in the viewing window. This superimposed wave field represents the frequency-limited wave field that would be generated by a real existing 3-D scene.

  3. Thirdly, the superimposed wave field in the viewing window is back-transformed to the hologram layer LH by an inverse Fresnel transform. This finally yields the hologram function H(x,y).
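A minimal, self-contained sketch of one such propagation step (a single-FFT Fresnel transform) is given below. The sampling conventions are the standard textbook ones (Goodman, 1996), not necessarily those of the authors' implementation:

```python
import numpy as np

def fresnel_transform(u1, pitch, wavelength, z):
    """Propagate the sampled field u1 (square n x n grid, spacing `pitch`)
    over a distance z with a single FFT."""
    n = u1.shape[0]
    k = 2 * np.pi / wavelength
    x1 = (np.arange(n) - n // 2) * pitch
    X1, Y1 = np.meshgrid(x1, x1)
    q1 = np.exp(1j * k * (X1**2 + Y1**2) / (2 * z))      # source-plane chirp
    u2 = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u1 * q1)))
    pitch2 = wavelength * z / (n * pitch)                # destination sampling
    x2 = (np.arange(n) - n // 2) * pitch2
    X2, Y2 = np.meshgrid(x2, x2)
    q2 = (np.exp(1j * k * z) / (1j * wavelength * z)
          * np.exp(1j * k * (X2**2 + Y2**2) / (2 * z)))  # destination chirp
    return q2 * u2 * pitch**2
```

Note that the destination-plane sampling interval λz/(n·pitch) grows with the propagation distance, which is exactly the proportionality p_m ∝ d_m used in the argument below.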

The information in each layer is not continuous but sampled. It is essential that all object layers, the viewing window and the hologram layer LH contain the same number of sampling points N. This number corresponds to the pixel number of the spatial light modulator.

As mentioned above, the extension of an object layer Lm is proportional to its distance dm to the reference layer LR. Hence the sampling interval pm in a layer is also proportional to its distance dm to LR, i.e. pm ∝ dm. As a consequence, the periodicity interval wm in the reference layer LR is the same for all object layers Lm, that is wm = λ dm / pm = constant. This ensures that the periodicity interval in LR has the same extension for all object layers. Therefore, a common viewing window located within one periodicity interval can be defined in which the wave field is unique.

As explained, in the third step the wave field in the viewing window is transformed to the hologram layer. The hologram layer and the reference layer are related by a direct or an inverse Fresnel transform. The number of sampling points N in each object layer is the same as in the layer LH on the SLM. As the viewing window lies within one periodicity interval of an object layer, it also lies within one periodicity interval of LH, i.e. of the SLM. Hence, as the hologram reconstructs the wave field in the viewing window, this wave field is unique therein. The periodic repetitions of this wave field that are inherent to sampled holograms fall outside the viewing window. The hologram reconstructs, within the viewing window, the wave field that would be generated by a real existing 3-D scene. Disregarding reconstruction imperfections, an observer whose eyes are in one or two viewing windows has the same perception as if the wave field emanated from a real existing object. Periodic repetitions of the reconstructed object are thus not visible.

3.4. Hologram encoding methods

Hologram encoding refers to the representation of the complex wave field H(x,y) at the hologram plane, i.e. to the process of converting the complex wave field into a format that can be displayed on the SLM by addressing its pixels. Hologram encoding is therefore directly related to the hardware implementation of the SLM. In synthetic or digital holography, a fully complex representation would be best qualified. The major challenge lies in finding a method and device to record a complex-valued hologram transmission function.

Generally speaking, there are various possibilities for a spatially sampled representation of complex wavefields by spatial light modulators:

Complex representation: A spatial light modulator that provides full complex-valued modulation would be ideal; for this, independent, non-coupled amplitude and phase addressing is mandatory. One can conceive of such an SLM, which might implement the detour-phase principle for example, but thus far such devices are non-existent. Another possibility would be a sandwich of two active modulation layers that are independently controlled for amplitude and phase modulation (Gregory et al., 1992). The challenge is then to put them as close together as possible to avoid crosstalk. Both concepts seem difficult to realize even with today's enabling technologies.

Decomposition methods: Since the very beginning of computational holography, decomposition methods for complex-valued wave fields have been developed; for a comprehensive overview, cf. for example the book edited by Schreier (Schreier, 1984). Famous examples of holograms utilizing the detour-phase concept are those of Brown and Lohmann (Brown & Lohmann, 1966), Lee (Lee, 1970), Burckhardt (Burckhardt, 1970), and the double-phase holograms of Hsueh and Sawchuk (Hsueh & Sawchuk, 1978). All these methods have in common that the hologram is divided into discrete resolution cells having apertures or stops of different size and position, or having a certain number of sub-cells. In this way both amplitude and phase quantities can be approximated. Originally developed for static holograms or holographic filters, these methods are also suited for implementation on spatial light modulators. However, to modulate both amplitude and phase, two or more sub-pixels have to be combined into one macro-pixel. That means part of the light modulator's original resolution has to be sacrificed for the sake of full holographic modulation.

In the following subsections, two decomposition methods capable of SLM implementation are described in more detail.

3.4.1. Burckhardt amplitude encoding

One method to decompose a complex-valued function is that suggested by Burckhardt (Burckhardt, 1970), which is a simplified version of Lee's original approach (Lee, 1970). One hologram cell is laterally divided into three amplitude-modulating sub-cells. The lateral shift between the sub-cells represents phase angles of 0°, 120°, and 240° and acts as a phase offset, similar to the detour-phase principle. In holograms of this type, a phasor is decomposed into three vectors that run parallel to $e^{i0} = 1$, $e^{i2\pi/3} = -0.5 + i\sqrt{3}/2$, and $e^{i4\pi/3} = -0.5 - i\sqrt{3}/2$. Since the phase values are already represented by the lateral displacement of the sub-cells, any complex amplitude transmittance $H = A e^{i\varphi}$ can be encoded in one macro-pixel. The laterally displaced sub-pixels have positive amplitude transparencies $A_1, A_2, A_3$, respectively. Hence,

$$ H(x,y) = A_1(x,y)\,e^{i0} + A_2(x,y)\,e^{i2\pi/3} + A_3(x,y)\,e^{i4\pi/3} \qquad (E5) $$

The magnitude $A_i$ of one of the terms is always zero: depending on the phase angle $\varphi$, the two adjacent vectors are sufficient to represent $H$. Figure 7 shows the decomposition of $H$ for the case $0 \le \varphi \le 2\pi/3$.

Figure 7.

Geometric representation of Burckhardt’s decomposition method into three real and positive components.

Thus, an amplitude-modulating light modulator with independently driven sub-pixels can be employed. The amplitude quantities $A_1, A_2, A_3$ are written as grey values into three sub-pixels that form one macro-pixel.
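A minimal sketch of the decomposition is given below (our own illustration; the sector bookkeeping is not from the text). It returns the three non-negative sub-pixel amplitudes for a given complex value, one of which is always zero:

```python
import numpy as np

BASIS = [np.exp(2j * np.pi * k / 3) for k in range(3)]  # 1, e^{i2pi/3}, e^{i4pi/3}

def burckhardt(c):
    """Non-negative amplitudes (A1, A2, A3) with
    c = A1*BASIS[0] + A2*BASIS[1] + A3*BASIS[2], cf. Eq. (E5).
    Only the two basis vectors adjacent to arg(c) receive non-zero weight."""
    phi = np.angle(c) % (2 * np.pi)
    k = int(phi // (2 * np.pi / 3))       # sector index: 0, 1 or 2
    rel = phi - k * 2 * np.pi / 3         # angle measured from BASIS[k]
    s = np.sin(2 * np.pi / 3)
    a = [0.0, 0.0, 0.0]
    a[k] = abs(c) * np.sin(2 * np.pi / 3 - rel) / s   # lower neighbour
    a[(k + 1) % 3] = abs(c) * np.sin(rel) / s         # upper neighbour
    return tuple(a)

# Recombining the three amplitudes reproduces the original complex value:
c = 0.4 * np.exp(1j * 2.0)
print(sum(a * b for a, b in zip(burckhardt(c), BASIS)))  # ~ 0.4 * exp(2j)
```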

Since large, high-resolution amplitude-modulating LC panels are common in medical imaging, such panels are commercially available. Another advantage is that only a single active layer is necessary to represent the complex amplitude transmittance H(x,y) of the hologram. Furthermore, the decomposition can be done analytically, which simplifies the calculation enormously. On the other hand, the diffraction efficiency of Burckhardt holograms is, at approximately 1%, quite low. Notwithstanding, fully complex modulation is achieved in a simple and practicable way, i.e. by an amplitude-modulating SLM that comprises an array of macro-pixels, albeit at the cost of reduced sampling and diffraction efficiency.

3.4.2. Two-phase encoding

The two-phase encoding method (also called double-phase or dual-phase encoding) is based on the principle that any complex amplitude transmittance $H = A e^{i\varphi}$ can be decomposed into the sum of two vectors of constant magnitude 0.5 and different phase quantities $\phi_1, \phi_2$ (Chu & Goodman, 1972; Hsueh & Sawchuk, 1978), i.e.

$$ \tfrac{1}{2} e^{i\phi_1(x,y)} + \tfrac{1}{2} e^{i\phi_2(x,y)} = \cos\!\left(\frac{\phi_1 - \phi_2}{2}\right) e^{i(\phi_1 + \phi_2)/2} = A(x,y)\, e^{i\varphi(x,y)} \qquad (E6) $$

Figure 8 shows the decomposition of $H$ in the complex plane. For example, two identical phase values $\phi_1 = \phi_2$ give a resulting vector with maximum amplitude and phase $\varphi$ (constructive interference), whereas two phases with a phase difference of $\pi$ result in a vector with zero amplitude and undefined phase (destructive interference). Arbitrary complex values are generated by combining phases other than those of the special cases $\phi_1 - \phi_2 = 0$ or $\pi$.

Figure 8.

Geometric representation of the dual-phase decomposition.

Conversely, the decomposed phase values for a given complex amplitude transmittance H(x,y) can be written as

$$ \phi_1(x,y) = \varphi(x,y) + \cos^{-1}[A(x,y)], \qquad \phi_2(x,y) = \varphi(x,y) - \cos^{-1}[A(x,y)] \qquad (E7) $$

As a result, a fast phase-only LC panel can be used as the light modulator. A pair of pixels of a phase-only modulating SLM is then combined into one complex-valued macro-pixel (Birch et al., 2000). Both pixels act as the intended complex-valued macro-pixel only if the light modulated by both pixels is superimposed. A physical combination of the light modulated by the two phase sub-pixels may be achieved by beam-combining micro-elements. The hologram is encoded by first normalizing the amplitudes of the complex amplitude transmittance to a range from 0 to 1 and then calculating the phase quantities $\phi_1, \phi_2$ from the equations above.
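A minimal sketch of this encoding step, directly following Eq. (E7) (our own illustration; it assumes the amplitude normalization has already been applied):

```python
import numpy as np

def two_phase_encode(h):
    """Split a complex hologram h (|h| <= 1 after normalization) into two
    phase-only arrays phi1, phi2 such that
    0.5*exp(1j*phi1) + 0.5*exp(1j*phi2) == h, cf. Eqs. (E6) and (E7)."""
    amp = np.clip(np.abs(h), 0.0, 1.0)   # guard against rounding above 1
    phase = np.angle(h)
    delta = np.arccos(amp)               # half of the phase difference
    return phase + delta, phase - delta

# Round-trip check on random data:
h = np.random.rand(4) * np.exp(2j * np.pi * np.random.rand(4))
p1, p2 = two_phase_encode(h)
print(np.allclose(0.5 * np.exp(1j * p1) + 0.5 * np.exp(1j * p2), h))  # True
```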

Because of its phase coding, the diffraction efficiency of dual-phase holograms is greatly increased compared with Burckhardt-type holograms, to approximately 10%. Again, only a single active layer is required to represent the entire hologram information. Since only two sub-pixels have to be combined into one macro-pixel, the sampling at the hologram plane is also better.

3.5. Tracking methods

For a non-tracked viewing-window-type hologram, the reconstructed 3-D object could be seen from a single viewing window or pair of viewing windows only. However, SeeReal's approach to dynamic holography is directly tied to eye tracking. When the observer's eyes move, the viewing window is tracked to the new eye position. Hence it is possible to reduce the size of a viewing window to approximately the size of an eye pupil. Two viewing windows, one for the left eye and one for the right eye, are always located at the positions of the observer's eyes. But then, how can the viewing window be moved in the observer plane? Advantageously, dynamic holography offers the additional freedom of temporal-multiplex operation. By incorporating a tracking system that detects the eye positions of one or more viewers very fast and precisely and repositions the viewing windows accordingly, a dynamic 3-D holographic display can be realized. Thus, the viewing angle of the reconstructed object is beneficially enlarged while maintaining the moderate resolution of the spatial light modulator.

Tracked viewing-window holography must therefore fulfill the following key functions:

  • Detection of the current eye position in x, y, and z, and

  • Means for shifting the observer window to this position.

To realize the former task, the holographic displays developed by SeeReal are equipped with an eye-position detection system composed of a stereo camera and image-processing means. Images of the observer from two different perspectives are captured by the two cameras, as shown by example in Figure 9. Multi-threaded software comprising image processing, pattern recognition, and artificial intelligence works in a two-step process. In the first step, the face of the observer is recognized within the captured image; afterwards, the eyes are detected within the region of the face. Once the face is identified, only the eye-detection algorithms have to be executed, which makes the entire recognition process much faster. The results obtained from the left and right images of the stereo camera are then combined into a 3-D model that defines the position of the eyes in space.

Figure 9.

Images captured by the tracking cameras (left and right view of a stereo camera). The current system is capable of tracking up to four viewers simultaneously in real time.

We have developed different alternatives for the tracking means; two of them are explained here in more detail. The steering of the viewing window can be done, for example, by shifting the light source and thus shifting the image of the light source accordingly, or by placing an additional element close to the SLM that realizes a variable deflection. In the following, we discuss implementations of these alternatives.

3.5.1. Light-source shifting

The first principle that was developed and implemented in prototypes is based on light-source shifting. The optical principle is schematically sketched in Figure 10. When imaging through a lens, a shift of a light source in object space results in a shift of its image. Since the viewing window is located within the zero order of the spatial light modulator, the holographic reconstruction as viewed from the viewing window is always correct. In holographic terms, this corresponds to an illumination of the hologram with a tilted reference wave. When a single lens is used, this method allows in principle for tracking in the x, y, and z directions. However, there is a practical limit for x and y shifting because of the paraxial limit of the lens. A skew ray path introduces aberrations that, if too large, may deteriorate the holographic reconstruction quality. Although aberrations can be compensated by encoding means, a practical limit has been identified at approximately ±10°.

Figure 10.

Schematic principle of the light source tracking method.

The position of the light source does not have to be shifted mechanically. One possibility would be an active array with a large number of light sources, only one of which is switched on at a time. Tracking would then be performed by switching between the light sources. Each light-source position then corresponds to a distinct tracking position in the viewing plane.

Another possibility is the use of a secondary light source. A secondary light source might be an activated pixel in an additional liquid crystal display (LCD) that is illuminated by a homogeneous backlight. By activating a pixel at the desired position on the LCD, the light source can be shifted electronically without mechanical movement.

Tracking by light-source shifting has been successfully implemented in the prototypes by using a homogeneous backlight and an LCD shutter panel, cf. section 4. For the prototypes, however, not a single lens and a single (secondary) light source are used; instead, a matrix of simultaneously emitting light sources and lenses is utilized. For a large display, a single lens would not be a feasible solution because of its thickness, weight, and cost, and for reasons of display compactness. The shutter pixels act as secondary light sources; by switching on different shutter pixels, the light-source position can be changed. An LED array (primary light sources) is used to illuminate the shutter. The pitch of the lens array needs to be large compared with the pixel pitch of the SLM, so that a sufficient number of pixels is still coherently illuminated.

Light-source tracking has proven to be a reliable solution. On the other hand, it also has certain disadvantages. For example, the use of secondary light sources is not optimal in terms of the light efficiency of the system. There may also be illumination crosstalk from light of secondary sources passing through the wrong lens of the lens array. This does not cause any problem in a single-user system but may be disadvantageous for a multi-user display. The most important drawback of light-source tracking is the limitation of the tracking angle by aberrations. Large tracking angles require an oblique optical path from the light source through the lens array. Aberrations may not necessarily degrade the reconstruction of single points, but they might somewhat corrupt the viewing window, leading to vignetting effects in the reconstruction.

While light-source tracking may be well suited for a single-user display with a tracking range of about ±10°, it is less practicable for multi-user displays and large tracking ranges, as needed for example for TV applications.

3.5.2. Steering of the reconstruction

Since the capabilities of the previous tracking method are limited in terms of tracking range, alternative solutions that enable larger ranges have been developed. The conceptual design of a holographic display that steers the holographic reconstruction is shown in Figure 11. With a beam-steering element placed in front of the hologram display, the optical path from the light source to the SLM can be kept constant. As an advantage, the hologram is always illuminated by the same planar wavefront, which is ideal in terms of light efficiency and aberrations. The beam-steering element deflects the light after it passes the SLM and directs it toward the observer's eyes. In addition to the prism function, it could also realize a focusing function.

There are various promising approaches to nonmechanical beam steering, which are currently at different stages of development (McManamon et al., 2009). The challenge with such beam-steering devices is that often both a large deflection angle and a large aperture of the deflector are required. Refractive solutions are not suitable because of the thickness a prism would need to have. But when the optical system operates with coherent or narrow-band light, diffractive approaches can be utilized. For a transmission grating with a local period Λ, the angle of the diffracted light is given by the grating equation

$$ \sin\alpha_m = \frac{m\lambda}{\Lambda} + \sin\alpha_{in} \qquad (E8) $$

where m is the diffraction order, λ the wavelength of light, and $\alpha_{in}$ the angle of the incident light. Such variable diffractive gratings can be divided into two categories: either a sawtooth-like grating is adjustable in its period, or in its blaze angle. The variable-period grating most often operates in the first diffraction order (m = 1), and the maximum steering angle is defined by the grating's smallest permissible period. The minimum period arises from the diffraction-efficiency requirements at a given angle as well as the addressing resolution of the grating. Steering up to the maximum angle is beneficially continuous. Variable blaze gratings, on the other hand, have a fixed period and diffract the light into the designated order by matching the blaze angle to the diffraction order m. Since the variable blaze grating steers light only at discrete angles, an extra variable-period grating stage is required for continuous steering between those angles.
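For a sense of scale, Eq. (E8) can be evaluated directly; the wavelength and grating periods below are illustrative assumptions:

```python
import math

def deflection_deg(period_um, wavelength_um=0.532, order=1, incidence_deg=0.0):
    """Diffraction angle from the grating equation, Eq. (E8)."""
    s = order * wavelength_um / period_um + math.sin(math.radians(incidence_deg))
    return math.degrees(math.asin(s))

# Assumed example: green light at normal incidence, variable-period grating.
for period in (20.0, 5.0, 2.0):          # local grating period [um]
    print(f"{period:5.1f} um -> {deflection_deg(period):5.2f} deg")
# -> 1.52 deg, 6.11 deg, 15.43 deg
```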

Figure 11.

Schematic side view of the steering-of-reconstruction principle. α is the tracking angle and Δz is the focal-length variation of the nonmechanical steering element.

Light could, for example, be steered and focused by writing a phase function including a prism and a focus term into a liquid crystal layer. The effective refractive index, and hence the deflection angle, is controlled by a voltage applied to electrodes at the cells. Embodiments as variable-period gratings as well as variable blaze gratings can be realized.

Another steering or tracking concept for the holographic reconstruction is based on electrowetting. Electrowetting, or more precisely electrowetting on dielectrics (EWOD) (Beni et al., 1982; Berge & Peseux, 2000), can be regarded as an electrostatic manipulation of liquids that makes it possible to vary the wettability of a conducting liquid (Mugele & Baret, 2005). The conductive liquid and an electrode are separated by a thin dielectric hydrophobic layer, thus forming a parallel-plate capacitor. By applying a voltage between the electrode and the conductive liquid, the droplet wets the hydrophobic dielectric. Without a voltage, the droplet returns to the dewetted state, i.e. to its initial contact angle $\theta_0$. Since the thin dielectric layer prevents electrolysis of the liquid, the process is highly reliable. Below a critical saturation threshold, the behavior of electrowetting on dielectrics is well predicted by the so-called electrowetting equation

$$ \cos\theta_V = \cos\theta_0 + \frac{\varepsilon_r \varepsilon_0}{2\, t\, \gamma_{la}}\, V^2 \qquad (E9) $$

which can be derived from Lippmann's electrocapillary equation and Young's equation for a three-phase contact line. In this equation, $\theta_0$ is the initial contact angle at zero voltage, $\varepsilon_r$ and t are the dielectric constant and the thickness of the dielectric layer, respectively, V is the applied voltage, and $\gamma_{la}$ is the interfacial surface tension of the liquid-ambient (typically electrolyte-oil) interface. In recent years, electrowetting has been successfully applied to various optical applications such as varifocal lenses, amplitude-modulating displays, and fiber couplers and switches.
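Eq. (E9) is straightforward to evaluate numerically; the material parameters in the sketch below are illustrative assumptions only:

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity [F/m]

def contact_angle_deg(theta0_deg, voltage, eps_r, t, gamma_la):
    """Contact angle under an applied voltage from the electrowetting
    equation, Eq. (E9); valid below the saturation threshold."""
    c = (math.cos(math.radians(theta0_deg))
         + eps_r * EPS0 * voltage**2 / (2 * t * gamma_la))
    return math.degrees(math.acos(min(c, 1.0)))

# Assumed parameters: theta0 = 160 deg, 1 um dielectric with eps_r = 2,
# electrolyte-oil interfacial tension 40 mN/m.
for v in (0, 20, 40, 60):
    print(f"{v:3d} V -> {contact_angle_deg(160, v, 2.0, 1e-6, 0.04):6.1f} deg")
# -> 160.0, 148.3, 125.8, 98.2 deg
```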

How electrowetting can be applied to realize a liquid prism is illustrated in Figure 13 (Kuiper et al., 2005, Smith et al., 2006). In the initial state with no voltage, the liquids form a curved meniscus, depending on the interfacial surface tensions between the liquids and the solid. Since the sidewall is hydrophobically coated here, the water-based electrolyte features a large initial contact angle of θ_0 > 150° (Figure 13 left, θ_L = θ_R = θ_0). If a voltage difference is applied between the insulated sidewall electrode and the electrolyte, the contact angle of the conducting droplet can be decreased. At a certain pair of equal voltages, the contact angle at both the left and the right electrode reaches 90°, resulting in a flat meniscus (Figure 13 middle, θ_L = θ_R = 90°). Light passing through such a cell is not altered in its propagation direction. Prism functionality is realized if the sum of the left and right contact angles equals 180° (Figure 13 right, θ_L + θ_R = 180°, where θ_L ≠ θ_R).

Figure 13.

Operation principle of an electrowetting prism. For simplicity of visualization, only a prism with 1-D deflection capability (two sidewall electrodes) is drawn. A prism capable of 2-D deflection comprises four sidewall electrodes.
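As a rough plausibility check of the deflection such a liquid prism provides, the sketch below refracts a normally incident ray at a flat meniscus tilted by a given angle. The refractive indices are assumed typical values for an oil/electrolyte pair, not those of an actual cell.

```python
import numpy as np

n_oil = 1.50      # insulating oil (assumed)
n_water = 1.33    # conducting water-based electrolyte (assumed)

def deflection_deg(tilt_deg):
    """Net angular deviation of a normally incident ray refracted once at a
    flat oil-to-electrolyte meniscus tilted by tilt_deg (Snell's law)."""
    inc = np.radians(tilt_deg)                      # angle of incidence equals the tilt
    ref = np.arcsin(np.sin(inc) * n_oil / n_water)  # refraction into the electrolyte
    return np.degrees(ref - inc)

print(deflection_deg(10.0))   # ~1.3 deg of steering for a 10 deg meniscus tilt
```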

It is advantageous to minimize the size of the prisms to obtain a faster response, because the dynamic response scales with the product of volume and density of the liquids used. The intended prism size is therefore adapted to the pixel pitch of the SLM. Since the response time of electrowetting cells of that size is below 1 ms, time-sequential tracking of several users becomes feasible.

3.6. Color holography

As holography is based on diffraction, and as diffraction is wavelength-dependent, the 3-D scene has to be separated into its color components, usually red, green and blue. Three holograms are computed (one for each color component), and the 3-D scene is reconstructed using three light sources with the corresponding wavelengths. There are several methods to combine the three holograms and the three light sources, for example:

Spatial multiplexing. The red, green and blue holograms are spatially separated. For instance, they may be displayed on three separate SLMs that are illuminated by red, green and blue light sources. An arrangement of dichroic beamsplitters combines the output of the SLMs. The optical setup is bulky, especially for large displays.

Temporal multiplexing. The red, green and blue holograms are displayed sequentially on the same SLM. The red, green and blue light sources are switched in synchronization with the SLM. Fast SLMs are required to avoid color flickering.


4. Implementations and prototypes

Our holographic approach has been successfully demonstrated by prototypes with a 20.1-inch diagonal, presented at SID 2007 (Long Beach, USA) and at Display 2008 (Tokyo, Japan). The prototypes are intended to demonstrate the key principles of our solution for large-sized real-time holography, i.e. viewing-window holography with the sub-hologram encoding technique, cost-effective and interactive real-time computation, and feasibility with common pixel sizes. However, it should be emphasized that the prototypes do not represent commercial solutions with a flat design and are not optimized for brightness or tracking performance. Although commercial solutions that fulfill these requirements have already been developed, they are not described in this section.

Figure 14.

Optical principle of the 20.1-inch holographic display prototype. Sizes and distances are not to scale.

4.1. General description of components

The second generation of the direct-view holographic display prototype ("VISIO 20") comprises a grayscale amplitude-modulating liquid crystal panel (NEC NL256204AM15-01) with a 3 × 5 megapixel resolution at pixel pitches of p_x = 156 μm and p_y = 52 μm, an operating frequency of 60 Hz, and a relatively slow response time of 30 ms (Figure 15). The 1-D hologram encoding used here (vertical parallax only) is common practice to further reduce bandwidth requirements and is well suited to the given pixel arrangement and geometry.
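To relate these panel parameters to the viewing-window concept, the following back-of-the-envelope sketch evaluates the standard diffraction relation w ≈ λD/p for the extent of one diffraction order at the observer plane, which bounds the viewing-window size. The observer distance and the use of the 3 × 52 μm complex-valued pitch (three amplitude pixels per complex value, cf. the Burckhardt encoding below) are assumptions for illustration, not specifications of the prototype.

```python
# Extent of one diffraction order (an upper bound on the viewing-window
# height) at the observer plane: w = wavelength * distance / pitch.
wavelength = 532e-9    # m, green illumination (assumed)
distance = 2.0         # m, assumed observer distance
pitch = 3 * 52e-6      # m, complex-valued vertical pitch (three amplitude pixels)

w = wavelength * distance / pitch
print(f"viewing-window height ~ {w * 1e3:.1f} mm")   # ~6.8 mm
```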

Figure 15.

(a) 20.1-inch direct-view prototype ("VISIO 20") and (b) photograph of a holographic reconstruction.

The optical scheme of the prototype is depicted in Figure 14. The LED backlight consists of red, green and blue high-brightness LEDs emitting at wavelengths of 627 nm, 530 nm, and 470 nm, respectively. Their spectral linewidth (FWHM) of ≈ 30 nm provides sufficient temporal coherence. Light coming from the RGB-LED backlight is mostly blocked by a first LC display that acts as a shutter, or variable secondary light-source array: only those pixels that are switched on transmit light, and thus a variable (secondary) line light source is realized with a spatial coherence corresponding to the pixel opening. A lenticular comprising approximately 60 horizontal cylindrical lenses is used for hologram illumination and for imaging the light sources into the viewing window. Each cylindrical lens is illuminated by a horizontal line light source. Furthermore, the secondary line light sources and the arrayed cylindrical lenses are aligned such that all light-source images coincide at the viewing window.

In the SLM, the sum of all complex amplitudes U_n(x, y) is encoded by combining three amplitude-modulating pixels for each complex value according to the Burckhardt encoding scheme described above (cf. section 3.4). Two viewing windows delivering slightly different holographic perspectives of the scene are generated by a vertically aligned lenticular beam splitter and an interlaced (horizontally multiplexed) hologram. High-precision user tracking is realized by a stereo camera incorporated in the holographic display, advanced eye-recognition algorithms, and active light-source shifting by the shutter panel.
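The core of Burckhardt encoding is that any complex value can be written as a sum of non-negative weights of three phasors spaced 120° apart, so three amplitude-only pixels suffice per complex value. A minimal sketch of that decomposition follows; the function name and structure are illustrative, not the authors' implementation.

```python
import numpy as np

def burckhardt(c):
    """Decompose a complex amplitude c into three non-negative weights
    (a0, a1, a2) of the phasors exp(2j*pi*k/3), k = 0, 1, 2, so that
    c = a0 + a1*exp(2j*pi/3) + a2*exp(4j*pi/3). At most two weights are
    nonzero, and all three map directly to amplitude-only pixels."""
    theta = np.angle(c) % (2 * np.pi)
    r = abs(c)
    k = min(int(theta // (2 * np.pi / 3)), 2)   # sector between phasors k and k+1
    phi = theta - k * 2 * np.pi / 3             # phase within that sector
    weights = [0.0, 0.0, 0.0]
    weights[k] = r * np.cos(phi) + r * np.sin(phi) / np.sqrt(3)
    weights[(k + 1) % 3] = 2 * r * np.sin(phi) / np.sqrt(3)
    return tuple(weights)

# Round-trip check for an arbitrary complex value:
a0, a1, a2 = burckhardt(0.3 - 0.7j)
print(a0 + a1 * np.exp(2j * np.pi / 3) + a2 * np.exp(4j * np.pi / 3))  # ~0.3-0.7j
```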

4.2. Color implementation

In the prototypes, holographic reconstruction is performed either in monochrome (optionally R, G, or B) or in full color. Two types of full-color holographic displays have been realized, based on either temporal or spatial multiplexing of colors (Häussler et al., 2009).

In the system with temporal color multiplexing, the colors are displayed sequentially: the SLM displays the holograms of the red, green and blue 3-D scene components one after the other, while the backlight is switched between red, green and blue LEDs. Both processes are synchronized. However, two obstacles have to be taken into account to achieve good reconstruction quality:

  • The pixels of the SLM have a finite response time. For an LCD, this is the time the liquid crystals need to align to the electric field applied to the pixel cell. The LCD panel that we use as SLM stems from a medical display and has a long response time (t_on + t_off) of typically 30 ms.

  • The pixels do not switch simultaneously across the SLM, as they are addressed in columns and rows. The rows of the SLM are addressed sequentially, with one frame period needed from the first to the last row. As a consequence, there is a time lag of up to one frame period across the SLM.

Both effects have to be taken into account, as each part of the hologram has to be illuminated with the corresponding wavelength. For instance, if SLM and backlight were switched from red to blue simultaneously, the last rows of the SLM would still display the red hologram when the backlight had already switched to blue. Therefore, we used a scanning backlight and a time lag between switching the SLM and switching the backlight. Figure 16 illustrates this process.

Figure 16.

State of the SLM rows (top) and backlight rows (bottom) versus time. The SLM graph illustrates the effect of finite response time and row-by-row addressing of the SLM after switching from one hologram to the next. The backlight graph shows delayed and row-by-row switching of the backlight to compensate for these effects.

The top graph shows the states of the SLM rows versus time, and the bottom graph the states of the backlight rows versus time. The gradual color transition along the time axis of the SLM graph illustrates the finite response time after switching from the hologram of one color component to the hologram of the next: there is no sharp transition between holograms, but an intermediate interval in which the pixels of the SLM transit to the next state. The color transition along the row axis of the SLM graph illustrates that the SLM is addressed row by row. At the point in time at which the last row has just received the data of the current frame, the first row already receives the data of the next frame. As an example, at the second dotted vertical line, the last row has just settled to the red hologram, whereas the first row already starts to transit to the green hologram. The intermediate states are indicated by the slanted gradual color transition. At these points in time, the state of the respective SLM pixels is undefined, and illumination by the backlight has to be avoided. Therefore, we built a scanning backlight in which the rows of LEDs are grouped into 16 groups. The switching of these groups is illustrated in the backlight graph of Figure 16: the groups are switched on and off sequentially such that each part of the hologram is only illuminated while its pixels are in a settled state of the associated color. A complete cycle comprises three frames with the colors red, green and blue and three intermediate transition frames. As the frame rate of the SLM is 60 Hz, the full-color frame rate is 10 Hz. Human vision perceives a full-color holographic reconstruction, albeit with color flickering. Color flickering will disappear, and a steady reconstruction will be visible, once faster SLMs become available.
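The timing logic just described can be summarized in a few lines. The sketch below assumes a one-frame settle allowance and uniform group timing, which simplifies the real drive scheme; names and the settle model are illustrative assumptions.

```python
FRAME = 1.0 / 60.0   # SLM frame period, s (60 Hz panel)
GROUPS = 16          # LED row groups of the scanning backlight
SETTLE = FRAME       # assumed settle allowance (one transition frame)

def led_window(group, color):
    """(t_on, t_off) in seconds for one LED row group and one color
    (0 = red, 1 = green, 2 = blue) within the six-frame, 100 ms cycle.
    The group is lit only after its SLM rows have settled, and it is
    switched off before they are re-addressed with the next hologram."""
    t_addr = (2 * color + group / GROUPS) * FRAME   # rows receive hologram data
    return t_addr + SETTLE, t_addr + 2 * FRAME

# Example: the last group of the blue hologram is lit from ~99 ms to ~116 ms,
# i.e. its on-window wraps into the next 100 ms full-color cycle.
print(led_window(GROUPS - 1, 2))
```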

In contrast, the display with spatial color multiplexing shows the three backlight colors and the three holograms for the red, green and blue color components simultaneously. The three holograms are interlaced on the same SLM, and color filters ensure that each hologram is illuminated only with its associated wavelength. Six holograms are interlaced on the SLM: red, green and blue holograms that generate the viewing window for the left eye (VWL), and three more that generate the viewing window for the right eye (VWR). A beam-splitting lenticular separates the light for the left and right viewing windows, VWL and VWR, while color filters separate the wavelengths.

Figure 17 illustrates top views of two possible arrangements of the color filters. The left arrangement uses color filters that are integrated in the SLM pixels. One lens of the beam-splitting lenticular is assigned to two pixels of the SLM: the light of all left pixels at the lenses coincides in the observer plane and generates VWR, and vice versa the right pixels generate VWL. The color filters are arranged in columns such that each column of the filter extends over two columns of the SLM, as illustrated in the left graph of Figure 17. Such an arrangement of color filters integrated in the SLM pixels, with two neighboring pixels having the same color, is not commercially available: standard LCD panels have color filters whose color changes from pixel to pixel. An external color filter laminated on the cover glass of the panel would suffer from a disturbing separation between pixel and color filter. Therefore, in our prototype we used the arrangement illustrated in the right graph of Figure 17: the color filters are attached directly to the structured surface of the beam-splitting lenticular. This arrangement avoids a disturbing separation between lenticular and color filter and facilitates tracked viewing windows in the same way as a monochrome display. The functional principle is analogous to that of the arrangement in the left graph of Figure 17.
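A minimal sketch of the column interlacing implied by this layout follows; the left/right and color assignments per SLM column are assumptions chosen to match the two-pixels-per-lens, one-filter-stripe-per-lens geometry described above, not the prototype's actual addressing.

```python
import numpy as np

def interlace(holograms, columns):
    """Interlace six hologram frames onto one SLM, column by column.
    holograms: dict mapping (eye, color) -> 2-D array of equal shape;
    eye is 0 (VWL) or 1 (VWR), color is 0, 1 or 2 (R, G, B)."""
    rows = next(iter(holograms.values())).shape[0]
    slm = np.zeros((rows, columns))
    for c in range(columns):
        eye = c % 2              # alternating pixels under each lenticular lens
        color = (c // 2) % 3     # one color-filter stripe per lens (two columns)
        slm[:, c] = holograms[(eye, color)][:, c]
    return slm
```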

Figure 17.

Top view of an arrangement of color filters in a holographic display with spatial color multiplexing. The left graph shows color filters integrated in the SLM pixels (SLM + CF) and the right graph separate color filters (CF). The graphs show three lenses of the positive lenticular (L+) that splits the light to generate left viewing window (VWL) and right viewing window (VWR). For simplification, only the light illuminating VWR is shown and the light illuminating VWL is omitted. The light sources and the Fourier-transforming lenses are not shown.


5. Conclusions

In conclusion, a novel approach to real-time holography with strong market potential for desktop, TV, and mobile displays has been presented. To date, it is the only practical solution known to the authors that is capable of holographic reconstruction of large 3-D scenes of natural size and depth using commercially available component technologies. The essential idea of the proprietary and patented approach is that the highest priority of a holographic display is to reconstruct, at the eye position, the wavefront that a real object would generate, rather than to reconstruct the object itself. The tracked viewing-window holographic technology keeps the required pixel size at levels already achieved by commercially available displays, and sub-hologram encoding brings the computation into the range of graphics cards or ASICs. The new concept is applicable to desktop, TV, and mobile imaging.

While there have been impressive developments in 3-D display technology in the past decade, the remaining visual conflicts between natural viewing and 3-D stereo visualization have prevented 3-D displays from becoming a universal consumer product. In principle, the only 3-D display capable of completely matching natural viewing is an electro-holographic display.

SeeReal's new approach to electro-holography proves not only that this is possible, but also that it is closer to adoption than many experts imagined. The principles and concepts are already in place, the checks and verifications are completed, and color prototypes are in full use. The technology already exists; it is just a question of time for all the pieces of the puzzle to come together and for the first commercial real-time 3-D holographic displays to reach the market.

References

  1. Beni G., Hackwood S. & Jackel J. L. (1982). Continuous electrowetting effect, Applied Physics Letters 40(10): 912-914. URL: http://link.aip.org/link/?APL/40/912/1
  2. Benton S. A. (1969). Hologram reconstructions with extended incoherent sources, J. Opt. Soc. Am. 59: 1545.
  3. Benton S. A. & Bove V. M. (2008). Holographic Imaging, John Wiley and Sons.
  4. Benton S. A. & Mingace H. S. Jr. (1970). Silhouette holograms without vertical parallax, Appl. Opt. 9(12): 2812-2813. URL: http://ao.osa.org/abstract.cfm?URI=ao-9-12-2812
  5. Berge B. & Peseux J. (2000). Variable focal lens controlled by an external voltage: an application of electrowetting, Eur. Phys. J. E 3: 159-163.
  6. Birch P. M., Young R., Budgett D. & Chatwin C. (2000). Two-pixel computer-generated hologram with a zero-twist nematic liquid-crystal spatial light modulator, Opt. Lett. 25(14): 1013-1015. URL: http://ol.osa.org/abstract.cfm?URI=ol-25-14-1013
  7. Brown B. R. & Lohmann A. W. (1966). Complex spatial filtering with binary masks, Appl. Opt. 5(6): 967-969. URL: http://ao.osa.org/abstract.cfm?URI=ao-5-6-967
  8. Burckhardt C. B. (1970). A simplification of Lee's method of generating holograms by computer, Appl. Opt. 9(8): 1949. URL: http://ao.osa.org/abstract.cfm?URI=ao-9-8-1949
  9. Chu D. C. & Goodman J. W. (1972). Spectrum shaping with parity sequences, Appl. Opt. 11(8): 1716-1724. URL: http://ao.osa.org/abstract.cfm?URI=ao-11-8-1716
  10. Gabor D. (1948). A new microscopic principle, Nature 161: 777-778.
  11. Goodman J. W. (1996). Introduction to Fourier Optics, 2nd edn, McGraw-Hill, New York.
  12. Gregory D. A., Kirsch J. C. & Tam E. C. (1992). Full complex modulation using liquid-crystal televisions, Appl. Opt. 31(2): 163-165. URL: http://ao.osa.org/abstract.cfm?URI=ao-31-2-163
  13. Häussler R., Reichelt S., Leister N., Zschau E., Missbach R. & Schwerdtner A. (2009). Large real-time holographic displays: from prototypes to a consumer product, Proc. SPIE 7237: 72370S. URL: http://link.aip.org/link/?PSI/7237/72370S/1
  14. Hoffman D. M., Girshick A. R., Akeley K. & Banks M. S. (2008). Vergence-accommodation conflicts hinder visual performance and cause visual fatigue, J. Vis. 8(3): 1-30. URL: http://journalofvision.org/8/3/33/
  15. Hsueh C. K. & Sawchuk A. A. (1978). Computer-generated double-phase holograms, Appl. Opt. 17(24): 3874-3883. URL: http://ao.osa.org/abstract.cfm?URI=ao-17-24-3874
  16. Ichihashi Y., Nakayama H., Ito T., Masuda N., Shimobaba T., Shiraki A. & Sugie T. (2009). HORN-6 special-purpose clustered computing system for electroholography, Opt. Express 17(16): 13895-13903. URL: http://www.opticsexpress.org/abstract.cfm?URI=oe-17-16-13895
  17. Kuiper S., Hendriks B. H. W., Hayes R. A., Feenstra B. J. & Baken J. M. E. (2005). Electrowetting-based optics, Proc. SPIE 5908: 59080R. URL: http://link.aip.org/link/?PSI/5908/59080R/1
  18. Lee W. H. (1970). Sampled Fourier transform hologram generated by computer, Appl. Opt. 9(3): 639-643. URL: http://ao.osa.org/abstract.cfm?URI=ao-9-3-639
  19. Lohmann A. W. (1967). The space-bandwidth product, applied to spatial filtering and holography, Research Paper RJ-438, IBM San Jose Research Laboratory, San Jose, Calif., pp. 1-23.
  20. Lohmann A. W., Dorsch R. G., Mendlovic D., Zalevsky Z. & Ferreira C. (1996). Space-bandwidth product of optical signals and systems, J. Opt. Soc. Am. A 13(3): 470-473. URL: http://josaa.osa.org/abstract.cfm?URI=josaa-13-3-470
  21. Lucente M. E., St-Hilaire P., Benton S. A., Arias D. & Watlington J. A. (1993). New approaches to holographic video, Proc. SPIE 1732: 377-386. URL: http://link.aip.org/link/?PSI/1732/377/1
  22. Maeno K., Fukaya N., Nishikawa O., Sato K. & Honda T. (1996). Electro-holographic display using 15 megapixels LCD, Proc. SPIE 2652: 15-23. URL: http://link.aip.org/link/?PSI/2652/15/1
  23. McManamon P. F., Bos P. J., Escuti M. J., Heikenfeld J., Serati S., Xie H. & Watson E. A. (2009). A review of phased array steering for narrow-band electrooptical systems, Proceedings of the IEEE 97(6): 1078-1096.
  24. Mugele F. & Baret J. C. (2005). Electrowetting: from basics to applications, Journal of Physics: Condensed Matter 17(28): R705-R774. URL: http://stacks.iop.org/0953-8984/17/R705
  25. Reichelt S., Häussler R., Leister N., Fütterer G. & Schwerdtner A. (2008). Large holographic 3D displays for tomorrow's TV and monitors: solutions, challenges, and prospects, Proceedings of the IEEE LEOS Annual Conference (invited).
  26. Schreier D. (1984). Synthetische Holografie, Physik-Verlag, Weinheim.
  27. Schwerdtner A., Häussler R. & Leister N. (2007). A new approach to electro-holographic displays for large object reconstructions, Adaptive Optics: Analysis and Methods / Computational Optical Sensing and Imaging / Information Photonics / Signal Recovery and Synthesis Topical Meetings on CD-ROM, Optical Society of America, PMA5. URL: http://www.opticsinfobase.org/abstract.cfm?URI=DH-2007-PMA5
  28. Schwerdtner A., Leister N. & Häussler R. (2007). A new approach to electro-holography for TV and projection displays, SID Symposium Digest, pp. 32-33.
  29. Slinger C. W., Cameron C. D., Coomber S. D., Miller R. J., Payne D. A., Smith A. P., Smith M. G., Stanley M. & Watson P. J. (2004). Recent developments in computer-generated holography: toward a practical electroholography system for interactive 3D visualization, Proc. SPIE 5290: 27-41. URL: http://link.aip.org/link/?PSI/5290/27/1
  30. Smalley D. E., Smithwick Q. Y. J. & Bove V. M. Jr. (2007). Holographic video display based on guided-wave acousto-optic devices, Proc. SPIE 6488: 64880L. URL: http://link.aip.org/link/?PSI/6488/64880L/1
  31. Smith N. R., Abeysinghe D. C., Haus J. W. & Heikenfeld J. (2006). Agile wide-angle beam steering with electrowetting microprisms, Opt. Express 14(14): 6557-6563. URL: http://www.opticsexpress.org/abstract.cfm?URI=oe-14-14-6557
  32. St-Hilaire P., Benton S. A. & Lucente M. (1992). Synthetic aperture holography: a novel approach to three-dimensional displays, J. Opt. Soc. Am. A 9(11): 1969-1977. URL: http://josaa.osa.org/abstract.cfm?URI=josaa-9-11-1969
  33. St-Hilaire P., Benton S. A., Lucente M. E., Sutter J. D. & Plesniak W. J. (1993). Advances in holographic video, Proc. SPIE 1914: 188-196. URL: http://link.aip.org/link/?PSI/1914/188/1
  34. Stanley M., Bannister R. W., Cameron C. D., Coomber S. D., Cresswell I. G., Hughes J. R., Hui V., Jackson P. O., Milham K. A., Miller R. J., Payne D. A., Quarrel J., Scattergood D. C., Smith A. P., Smith M. A. G., Tipton D. L., Watson P. J., Webber P. J. & Slinger C. W. (2003). 100-megapixel computer-generated holographic images from active tiling: a dynamic and scalable electro-optic modulator system, Proc. SPIE 5005: 247-258. URL: http://link.aip.org/link/?PSI/5005/247/1
  35. Stolle H. & Häussler R. (2008). A new approach to electro-holography: Can this move holography into the mainstream?, Information Display 2: 4.
  36. Tay S., Blanche P. A., Voorakaranam R., Tunc A. V., Lin W., Rokutanda S., Gu T., Flores D., Wang P., Li G., St-Hilaire P., Thomas J., Norwood R. A., Yamamoto M. & Peyghambarian N. (2008). An updatable holographic three-dimensional display, Nature 451(7179): 694-698. URL: http://dx.doi.org/10.1038/nature06596
  37. Wheatstone C. (1838). On some remarkable, and hitherto unobserved, phenomena of binocular vision, Philosophical Transactions of the Royal Society of London 128: 371-394.

Notes

  • For a comprehensive overview, cf. for example the book edited by Schreier (Schreier, 1984)
  • Also called double-phase or dual-phase.
