Synthetic Image Holograms



Introduction
This chapter is dedicated to synthetic image holograms, the elements which can create a reconstruction of a 3D object for observation with the human eye. Holography as a technique of image recording and reconstruction has been extensively developed since the 1960s. During this time there have been various attempts to synthesize holograms artificially, without the presence of the real object in the classical recording setup. Different approaches have been used: several try to synthesize the three-dimensional object from two-dimensional views using the classical recording setup, while others calculate the microstructure of the hologram entirely in a computer. Today, synthetic holography can be divided into two major streams, the first containing methods for creating an image for observation by the human eye and the second consisting of approaches for designing synthetic diffractive structures for general wavefront generation. The former techniques can exploit various imperfections of human vision and omit several parameters of the optical wave. The latter techniques are usually based on direct calculation of the microstructure and attempt to create the reconstruction in its full complexity. Only the first group, synthetic image holograms, is analyzed in this chapter. The synthetic approach to hologram creation has several advantages, but also noticeable disadvantages. The most important advantages are connected with flexibility in modifying the recorded object. First, the object need not exist in reality in the form of a physical model; for most synthetic approaches, a 3D computer model is fully sufficient for preparing the recording data. Even for real, physically existing objects it can be difficult to perform the recording process in a classical setup: various outdoor scenes, such as buildings, cannot be brought into a laboratory setup.
Generally, the possibility of scaling is very limited in classical holography, so the recorded object (or its model) must be at its final size. In contrast, it is easy to scale the computer model of an object. A further problem lies in corrections of color properties, surface textures, and general fine tuning of the recorded object. While such operations are very simple for computer models, they can pose insoluble problems for real physical models. The stability of the object is also very important: the object must be highly stabilized for recording in a classical holographic setup (when exposing with a continuous-wave laser), whereas in a computer stability is not an issue. The same applies to holograms of living objects or dynamic scenes, where it is easy to take snapshots using photographic techniques, but holographic exposure is almost impossible. Finally, depending on the recording technology chosen, other parameters of the synthetic hologram can be highly superior to those of classical holograms (e.g. fidelity of color mixing, contrast of the image, etc.). Unfortunately, the synthetic approach also brings a whole range of specific problems and disadvantages. Generally, the recording setup is much more complicated in synthetic holography than in classical holography (although it can be much easier to operate some synthetic writers than to tune a classical laboratory recording setup). Furthermore, most synthetic techniques are limited in the final size of the hologram (e.g. for directly written synthetic elements the typical area is in cm², while for classical holograms it is usually in dm²). The recording process itself usually takes much longer in synthetic holography (although the preparation for exposure can be longer with classical recording).
It is also important to realize that in most synthetic approaches the reconstructed optical wavefront does not exactly correspond to the one from a similar classical hologram. Various parameters are often omitted due to the limited performance of the human eye, which cannot evaluate the optical signal in its full complexity. The most important features of the synthetic hologram can be summarized as follows: the object data can be obtained without the necessity of working with a physical model; various modifications of the object are possible; the object can be dynamic, spacious, etc.; the hologram itself can be arbitrarily large; it can enable general color mixing; and the reconstruction can be possible in white light. In the following sections the key steps of the design and recording processes are described together with the common technology used for recording synthetic holograms. Section 1 contains an analysis of human vision with a focus on 3D perception. Section 2 describes the approaches to hologram synthesis using the two most common techniques: synthesis at the hologram plane and synthesis at the eye-pupil plane. Various details concerning color mixing, special 3D properties, and kinetic behavior of the holograms are also discussed. Finally, in Section 3 the most common devices and recording materials are briefly mentioned.

Human vision
The visual sense of a 3D scene is created by the brain interpreting the image information carried by the optic nerves from both eyes. A significant advantage is that the eyes are square-law detectors (they do not sense the phase); therefore, 3D vision can be "faked" by transmitting only the proper intensity images into the eyes, as will be shown. Due to the relative positions of our eyes, the image from the left eye slightly differs from the image from the right eye. This is the crucial aspect causing the 3D sense, and that is why it is used in hologram synthesis. The image of the scene in front of our eyes is formed by the eye's optical system on the photosensitive layer of the eye, the retina. This image information is processed by the photoreceptors (rods and cones) and transmitted to the brain through the optic nerve for further processing. What happens in the eye is not the only relevant factor for vision. There are also factors supporting depth perception which relate to the shape of the object. Vision cues can be divided into physiological and psychological ones [Najdek, 2008]. The physiological cues relate to the eye physiology and the principles of vision: the necessary information about the viewed scene is obtained either by tracking the position of the image on the eye's retina (stereopsis, movement parallax) or by sensing the tension in the eye's muscles (accommodation, convergence). The psychological cues express the depth of the viewed scene by means of 2D images; these cues are, for example, perspective, relative size, occlusion, shading, depth of field, etc.
The most important cues in 3D imaging are the following physiological cues: stereopsis and movement parallax. Stereopsis is based on the difference between the image viewed by the left eye and the image viewed by the right eye. Each eye has a slightly different viewing angle due to the different spatial positions of the eyes (the typical distance between the eyes is about 65 mm). The effect is called binocular disparity or binocular parallax. The image information from the two different images is then processed in the brain's visual center and the relative spatial position of the viewed objects is conceived. Stereopsis is probably the most important and strongest cue from close-up to medium viewing distances. The principle of stereopsis is illustrated in Fig. 1.

Fig. 1. Stereopsis: When viewing a 3D object, the images that are incident on each eye's retina slightly differ in angle. The brain interprets this difference in angle as the depth of the object. Besides movement parallax, this is a crucial cue that contributes to 3D vision.
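The distance dependence of binocular disparity can be illustrated with a short numerical sketch. This is not from the chapter; it only applies elementary trigonometry to the ~65 mm eye separation quoted above, and the function name `binocular_parallax_deg` is a hypothetical helper:

```python
import math

# Typical interocular distance cited in the text above.
EYE_SEPARATION = 0.065  # metres

def binocular_parallax_deg(distance_m):
    """Angle between the two eyes' lines of sight to a point at the
    given viewing distance (simple symmetric-geometry estimate)."""
    return math.degrees(2.0 * math.atan(EYE_SEPARATION / (2.0 * distance_m)))

# Disparity shrinks quickly with distance, which is consistent with
# stereopsis being strongest at close-up to medium viewing distances.
for d in (0.25, 1.0, 5.0, 20.0):
    print(f"{d:6.2f} m -> {binocular_parallax_deg(d):7.3f} deg")
```

At the conventional reading distance of 25 cm the disparity angle is almost 15°, while at 20 m it drops below 0.2°, roughly at the limit of what the visual system can exploit.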
Movement parallax is also based on processing an image from the retina, but it is a monocular cue. The difference in the position of the image is caused by the observer moving his head. This technique can be found in animals whose eyes are placed on opposite sides of the head, e.g. some types of birds that swing their heads to achieve motion parallax. There are also surveys showing that people with monocular vision use movement parallax to obtain depth perception [Faubert, 2002; Ferris, 1972].

Stereoscopic methods and holography
There is a variety of methods of providing 3D perception for the observer. Before binocular vision was exploited for display, 3D space was represented mainly by the monocular means of psychological cues such as relative size, occlusion, and shading. With the use of perspective, images started to look more realistic. The first 3D perception as we know it from real life came with Sir Charles Wheatstone's invention in 1838 [Wheatstone, 1838; Wheatstone, 1852]. He described binocular vision and proposed a device called the stereoscope. This device used stereopsis for imaging depth: it created two separated viewing zones with two different images, one for each eye. Since the stereoscope was invented before photography, the first 3D images were paintings. From that time on, many stereoscopic methods have been developed. Modern 3D display methods can be classified according to the number of viewing zones and the way they are separated. The methods with only two viewing zones are denoted binocular methods or methods of selective observation. The observer has to wear some kind of glasses to split the particular views into the left and right eye. On the other hand, there are methods of spatial display, which create more viewing zones. In such a case, these zones need to be spatially displaced by the displaying device itself. These methods are therefore denoted the methods of selective display or autostereoscopic methods. As shown in Fig. 2a, in the binocular methods there are two 2D images (a binocular pair) overlapping each other, which are transmitted in the direction of the observer [Takanori, 1976]. To view the correct image with the corresponding eye, the observer must wear glasses with some sort of filters that block the image intended for the other eye and let only the correct one pass. The first device ever used for this kind of display was the already mentioned stereoscope.
Typical present-day representatives of these display methods are, e.g., anaglyphs and 3D cinema. In anaglyphs [Zone, 2003], the two observation channels are superimposed, but each of them is encoded by a different color filter. The observer wears glasses with colored filters that properly separate the two images into the eyes. Using a pair of pure colors from R-G-B for filtering enables only monochromatic observation. Therefore, to achieve a color result, mixed color filters are used for the glasses; the most common is red-cyan (50 % blue and 50 % green). Other color filters have been invented to improve color rendering, an example being Infitec, based on narrow-band interference filters. Modern 3D cinemas use either polarization filters or shutter systems [Turner, 1986]. The polarization principle is the same as in anaglyphs: the two viewing channels are superimposed, each encoded into a different polarization state. The glasses are equipped with proper polarization analyzers to split the two channels. Shutter systems split the binocular pair in the time domain. The movie frames are projected with about twice the frequency of a 2D movie. The shutter glasses are synchronized with the projector and periodically block the left and the right eye, so that the observer sees only one frame with the correct eye at a time.
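The red-cyan anaglyph encoding just described amounts to a per-pixel channel recombination: the red channel of the output comes from the left view, the green and blue (together: cyan) channels from the right view. The following minimal sketch, with a hypothetical helper `make_anaglyph` operating on nested lists of (R, G, B) tuples, is an illustration and not from the chapter:

```python
def make_anaglyph(left_rgb, right_rgb):
    """Red-cyan anaglyph: the red channel carries the left view,
    green and blue (= cyan) carry the right view."""
    return [
        [(lp[0], rp[1], rp[2]) for lp, rp in zip(lrow, rrow)]
        for lrow, rrow in zip(left_rgb, right_rgb)
    ]

# Toy 1 x 2 "images" of (R, G, B) tuples.
left  = [[(200, 10, 10), (50, 50, 50)]]
right = [[(10, 180, 180), (60, 60, 60)]]
print(make_anaglyph(left, right))  # [[(200, 180, 180), (50, 60, 60)]]
```

The red filter of the glasses then passes only the first channel (the left view) and the cyan filter only the remaining two (the right view), separating the binocular pair.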
The main disadvantages of all binocular methods are the necessity of wearing filtering glasses, the lack of motion parallax, and distortion of the image. The image can be viewed undistorted only from the same position from which it was taken; that means, for example, that in the cinema most spectators see the scene in a somewhat distorted way. Nevertheless, these methods are satisfactory for 3D imaging and are widely used in the entertainment industry. Unlike the selective observation methods, which separate the channels at the observer, autostereoscopic methods create the separated viewing zones using the displaying device itself [Halle, 1997]. These methods are capable of creating more than two spatially separated viewing zones, and motion parallax can be obtained without the necessity of wearing any glasses. Another advantage over binocular imaging is the fact that when multiple viewing zones are involved, there is no constraint on the observer's position. Each zone contains the image information from a particular direction, and wherever the observer is located relative to the displaying device, he sees the 3D image in the correct perspective. A typical representative of this imaging method is integral photography. The object is captured through an array of tiny convex lenses. Each of the lenses creates an image of the object with a slightly different perspective. These images are recorded on a photographic plate. After being developed, the plate is illuminated from behind and the light travels again through the array of lenses. Since the setup is exactly the same as it was during the recording, the light rays from the various lenses meet again in the original position where the surface of the object was located (Fig. 2b). For viewing such an image, the observer does not need to wear any glasses. A further improvement in autostereoscopic methods comes with the use of diffractive structures.
The tiny lenses are replaced with a diffractive structure that controls the distribution of the image information into the proper directions. The diffractive methods are discussed in detail in Section 2. An illustration of a synthetic diffractive structure is shown in Fig. 2c. The most perfect autostereoscopic method is "true" holography. The hologram is capable of reproducing the wavefront of an object in its full complexity and therefore works, in fact, as an autostereoscopic display with an infinite number of viewing zones. An illustration of reconstruction from a hologram is shown in Fig. 2d.

Synthetic holography
In this section, the basics of synthetic image holography are discussed. According to the above-described principles of autostereoscopy, it is desirable to exploit the directionality of the diffraction of light on the hologram (or, more generally, on the diffractive structure) for creating spatially separated image channels in space. The problem consists of two main tasks: first, the acquisition of image data for recording the particular channels, and second, the recording process itself. Thus, in Section 2.1 the general ideas of decomposing the three-dimensional signal into a set of two-dimensional views are described. In Section 2.2 two particular approaches to multiplexing the 2D information within a single holographic element are presented. In the last part (Section 2.3), more detailed aspects of the recording and reconstruction processes are analyzed, such as true-color mixing based on holographic elements, various spatial properties of the reconstructed image, and possible synthesis of holograms with kinetic behavior.

3D image as a set of 2D views
The analysis of human vision (Section 1.1) showed that three-dimensional image perception can be artificially generated by creating spatially separated discrete image channels in space. It has also been mentioned that, due to the inability of the human eye to register the phase properties of the incident optical wave (except for the directionality of the wave), the real image signal can be substituted by two-dimensional intensity views. The question is how the discretization of the spatial channels should be done to create a satisfactory, undisturbed 3D view. The method is obviously based on imperfections of the detecting system (in this case the human eye) in terms of pupil size and limited resolution. It has also been noted that classical holography is the most perfect autostereoscopic method, as it reconstructs the signal wavefront in its full complexity. An interesting question arises: will shrinking the viewing channel size in a hologram synthesized from 2D views converge to the "analogue" optical hologram? From the point of view of intensity distribution the classical hologram is apparently the limiting case of the synthetic construction, but there is a huge difference in the phase properties of the reconstructed optical field. Although the phase behavior of the reconstruction must bear a certain resemblance to that of the "ideal" hologram, as the directional distribution of the signal is similar, the particular spatial views are not mutually phase-synchronized as they would be from a classical hologram. This effect in principle does not affect observation by the human eye, but it can cause problems when the element is used for other applications.

Acquiring the source data
Both approaches presented in this chapter need to capture a set of 2D images before the synthesis can be performed. These images can be obtained using a real camera taking pictures of a real object or using a virtual camera capturing a virtual 3D object in a PC.

There is a variety of methods that can be used for this sampling of an object. The methods can be classified according to the shape of the trajectory of the camera and according to the direction of the optical axis of the camera. The way of capturing also depends on the demands on the 3D behavior of the reconstruction: whether the final hologram should include both parallaxes or a single parallax only (Section 2.3.2). Fig. 3 shows one of the possible capturing setups for both the single-parallax and the full-parallax case.
Assuming the single-parallax case, the above figure shows the method in which the camera is moved in steps of equal length along a straight line, while the optical axis of the camera remains perpendicular to the direction of movement. In this case, taking the snapshots with a real camera would require a special objective, a so-called shifted-lens camera. However, this method is more often used for capturing a virtual 3D object in a PC, where the need for a special objective can be bypassed by capturing with a "wide angle" and cropping the recorded image. A big advantage of this method is the relatively simple recording setup for the master hologram, with only linear translations. This is the method used in the experiments described in this chapter. The other possible setup for the single-parallax case is the equiangular method with circular movement of the camera. This method is the simplest from the capturing point of view, since the camera can be static while the object rotates around a vertical axis. It is also the closest approach to the viewing setup in which the observer holds the hologram and tilts it around the vertical axis. However, if the recording medium is not coated on a cylindrical surface, this method demands sophisticated preprocessing of the 2D images to match the flat surface of the hologram plate.
The method closest to the viewing setup is the one in which the camera makes equidistant steps along a straight line and the camera axis tilts towards the object. It resembles the case when the hologram is hanging on a wall and the observer moves around, turning his head towards the hologram. Nevertheless, it is much more complicated from the capturing point of view and, similarly to the equiangular method, the images would have to be processed in a computer before recording the master hologram. The methods mentioned above are the most widely used for 2D data capturing. If the final hologram should include both the horizontal and the vertical parallax, the camera movement must cover the vertical parallax as well. There are again several ways to capture the spatial information. Similarly to the equiangular method with circular movement of the camera, there is a "spherical" method for capturing the two parallaxes. In such a case the object turns around its vertical axis and the snapshots for the horizontal parallax are taken. The next horizontal "row" of exposures is taken after the object is turned through a small angle around its horizontal axis. In this manner the required range of angles in both parallaxes is covered. In Fig. 3, the planar method of capturing the two parallaxes is also shown: the "single parallax" line of capture is repeated in the vertical direction, with the axis of the camera again kept perpendicular to the plane of movement. This is how the continuous 3D information is sampled into a set of discrete 2D views. The following sections describe how various synthetic holograms are created from such a set of views.
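The planar capture geometry described above can be sketched as a small script generating the camera positions. This is an illustrative sketch only; the function name `capture_positions` and the 20 mm step are assumptions, not values from the chapter:

```python
def capture_positions(n_h, n_v, step_h, step_v):
    """Camera positions (x, y) on the capture plane, centred on the
    optical axis; the camera axis stays perpendicular to this plane.
    n_v = 1 gives the single-parallax case, n_v > 1 the full-parallax
    planar grid."""
    xs = [(i - (n_h - 1) / 2.0) * step_h for i in range(n_h)]
    ys = [(j - (n_v - 1) / 2.0) * step_v for j in range(n_v)]
    return [(x, y) for y in ys for x in xs]

# Single parallax: 9 horizontal views, 20 mm apart.
positions = capture_positions(9, 1, 0.02, 0.02)
print(positions[0], positions[-1])  # leftmost and rightmost camera
```

Each position yields one 2D snapshot; repeating the horizontal row for several vertical offsets (`n_v > 1`) gives the full-parallax sampling of Fig. 3.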

Multiplexing views within a single hologram
In this section the basic approaches to multiplexing the angular views obtained in the previous section are presented. There are many different ways to realize different spatial channels, but only those based on holographic or, more generally, diffractive principles will be analyzed. As has already been mentioned, the decomposition of the 3D information into two-dimensional channels is conditioned by the limited performance of the human eye. The first of the presented methods, named "synthesis at the hologram plane", is based on the limited resolution of the human eye in the image plane. The next approach, commonly called "synthesis at the eye-pupil plane", is based on the limited (finite) pupil size of the human eye, which limits monocular 3D perception. Both methods exploit holographic principles for image creation.

Data synthesis at the hologram plane
In optical holography, the relative positions of the object and the final hologram (the physical plate on which the recording has been made) can be arbitrary. However, during the copying process the object is usually placed in the close neighborhood of the final hologram for several reasons, the most important being dispersion and sensitivity to imperfections of the reconstructing light source. When the image is formed close to the hologram, the blurring caused by dispersion effects is negligible, as is the blurring caused by the spatial incoherence of the light source. If we suppose such a geometry, we can assume that the image points and their projections onto the hologram plane are approximately of the same size. When the hologram is observed from the conventional distance of ∼ 25 cm, the approximate resolution limit of the human eye is ∼ 0.1 mm. On the other hand, from the point of view of diffractive structures such a dimension is large enough to carry much more than just the information about a single point. As a rough estimate, if we assume an elementary diffraction grating with period ∼ 800 nm and area 4 × 4 µm (which contains 5 periods of such a grating), we can place more than 600 different gratings within an area of 0.1 × 0.1 mm. The main idea of image multiplexing at the hologram plane is based exactly on this calculation.
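The rough estimate above can be reproduced in a few lines. The constant names are ours; the numbers (0.1 mm macro-dot, 4 × 4 µm grating cell, 800 nm period) are taken from the text:

```python
MACRO_DOT = 100e-6       # m; ~0.1 mm eye resolution limit at 25 cm
MICRO_DOT = 4e-6         # m; elementary 4 x 4 um grating cell
GRATING_PERIOD = 800e-9  # m

per_side = round(MACRO_DOT / MICRO_DOT)        # cells along one edge
n_gratings = per_side ** 2                     # cells per macro-dot
periods_per_cell = MICRO_DOT / GRATING_PERIOD  # grating periods per cell

print(n_gratings)               # 625 -> "more than 600 different gratings"
print(round(periods_per_cell))  # 5 periods of the 800 nm grating
```

A 25 × 25 grid of 4 µm cells gives 625 independent gratings inside a single eye-resolvable spot, which is the headroom the multiplexing method exploits.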
To understand the method, one has to realize how the principle of stereoscopy applies to the given geometry. Let us assume the situation according to Fig. 3. If the observer sees the 3D image floating in the neighborhood of the hologram plane, it means that each point on the hologram should send different information into different directions (meaning that the visibility of the particular point on the hologram surface, in terms of color and intensity, depends on the observation angle). It is impossible to achieve such angular functionality with conventional photography, where the angular luminosity is a smooth function. On the other hand, the idea of realizing an angularly selective image on the basis of diffraction is quite straightforward. Let us assume the diffraction of white light by an elementary regular diffraction grating: let the grating have a period of 750 nm, let the incidence angle be 45°, and let the grating lines be perpendicular to the plane of incidence. If the light is monochromatic (with wavelength in the middle of the visible region, e.g. λ = 530 nm), there will be only two diffracted beams according to the grating equation, namely the 0th order with α₀ = αᵢ and the first diffraction order with α₁ ≈ 0°. When the light is polychromatic, there will be one "white" 0th order and many first orders, one for each spectral color; they will slightly differ in direction according to equation (1). If we specify the direction of observation to be the normal to the hologram at its midpoint, one will observe only a narrow spectral band of colors and, in our example, will see the grating as green (∼ 530 nm). The described principle is used in synthetic holography for the generation of different colors from a single white light source.
Namely, if the grating period changes within the range from 600 nm to 950 nm, an observer looking in the direction perpendicular to the hologram will perceive wavelengths from 430 nm to 670 nm, which is approximately the range of visible light. Of course, if he changes his viewing direction, the perceived color will change according to the grating equation. Each image point can in principle change its color from blue to red through all the rainbow colors; this is why such holograms are also called rainbow holograms [Benton, 1969]. The rainbow hologram can be observed in the correct color composition only under the defined geometry (relative position of the light source, hologram, and observer). Otherwise the colors are shifted according to equation (1). If there is more than one color in the image, the particular colors maintain their relative relations, but they are all shifted in the same manner when the geometry is changed (e.g. when the hologram is vertically tilted). It has been shown that the period of the elementary grating on the hologram surface determines the color observed from a particular direction if the hologram is illuminated with white light in the defined geometry. The next important parameter of the grating is the orientation of the grating lines. In the above example they were perpendicular to the plane of incidence of the reconstructing light beam. In general, the grating orientation can be rotated within a range of 360°. The effect of such a change is demonstrated in Fig. 4b.
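The period-to-color mapping can be checked numerically with the grating equation. The sketch below is illustrative; the sign convention is chosen so that it reproduces the worked example in the text (750 nm period, 45° incidence, first order on the normal at ∼ 530 nm), and the function name is a hypothetical helper:

```python
import math

def diffracted_wavelength_nm(period_nm, incidence_deg, view_deg=0.0, order=1):
    """Wavelength that the grating sends into the viewing direction,
    from the grating equation d(sin(alpha_i) - sin(alpha_1)) = m*lambda
    (sign convention chosen to match the worked example above)."""
    return period_nm * (math.sin(math.radians(incidence_deg))
                        - math.sin(math.radians(view_deg))) / order

# White light incident at 45 deg, observer on the hologram normal:
for period in (600, 750, 950):
    print(f"{period} nm grating -> {diffracted_wavelength_nm(period, 45.0):.0f} nm")
```

Periods of 600 nm and 950 nm map to roughly 424 nm and 672 nm respectively, matching the approximate 430-670 nm visible range quoted above, with the 750 nm grating landing at green (∼ 530 nm).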
Based on this analysis we can suggest a method for generating a synthetic 3D hologram from 2D views. As has been mentioned, in the given geometry 3D imaging requires the ability of a particular point on the hologram surface to send a different signal into different directions. Such functionality can be accomplished using the elementary point segmentation according to Fig. 5. Each distinguishable point on the hologram surface (further called a macro-dot) is divided into a set of smaller units (further called micro-dots). Each micro-dot is filled with a regular grating with the appropriate period and orientation. As shown earlier, several hundred micro-dots with dimensions sufficient to contain a functional grating can fit within a single macro-dot, so there is plenty of space for representing a high number of angular views of an object. In fact, it is not necessary to use such a high number of micro-dots, for two reasons. First, most holograms synthesized using this method are of the rainbow type, so the vertical parallax is omitted and angular views are captured only in the horizontal direction; this significantly reduces the number of necessary micro-dots. Second, the finite angular resolution of the human eye and the finite eye-pupil size limit the number of horizontal views (this number is, of course, also influenced by the desired angular range). In practice, only a relatively small number of horizontal views is needed to create a satisfactory 3D perception; usually 9-16 angular channels are used. At this point it is necessary to mention that spatial separation of the micro-dots within the elementary resolvable macro-dot is not the only method of multiplexing the elementary gratings. Depending on the holographic recording technique, particular gratings can also partially or fully overlap within the macro-dot area: multiple exposure of a single macro-dot area can record all the gratings while maintaining their particular properties.
However, it can be shown, that the potential drawbacks of this method have more serious consequences than those of the spatial multiplexing. Moreover, the exposure parameters are dependent on the particular grating parameters and the number of multiplexed gratings when exposing the same area multiple times, whereas they are completely independent when the gratings are separated spatially. Unfortunately, the spatial separation also brings several unwanted effects to the image reconstruction, the fragmentation of the microstructure to the micro-dots being the worst of them. In practice, the often used dimension of the micro-dot is below 10µm. Thus the diffraction by the grating is affected with the diffraction by the aperture of the grating. Mathematically, the diffraction pattern consists of the convolution of the diffraction by the grating and by the rectangular aperture. This leads to the spreading of the diffraction order, introduces noise, and decreases the observed luminosity of the point. As a consequence, the micro-dot size must be chosen as a trade off between resolution and luminosity of the image. In Fig. 6 there are examples of the desired and real microstructure which has been synthesized (a) (b) Fig. 6. Example of the macro-dot segmentation (a) and the corresponding real micro-structure (b).
using the described technique. The micro-dots are clearly visible, together with the gratings inside them, which have a periodicity of ∼600-950 nm depending on the particular micro-dot; the dimension of the micro-dots is ∼13 µm. The macro-dot (the observable image point) has a dimension of ∼39 µm and is segmented into 9 micro-dots (there are 9 angular views distributed in the horizontal direction; the hologram is of the rainbow type).
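The aperture broadening described above can be estimated numerically. The sketch below uses the small-angle, rectangular-aperture estimate Δθ ≈ λ/w at an assumed 532 nm reconstruction wavelength; the micro-dot sizes are illustrative only.

```python
import math

LAMBDA = 532e-9   # assumed reconstruction wavelength, m

def order_spread_deg(aperture_width_m):
    """Angular width of a diffraction order broadened by the finite grating
    aperture: Delta_theta ~ lambda / w (small-angle estimate)."""
    return math.degrees(LAMBDA / aperture_width_m)

for w_um in (5, 13, 39):
    print(f"micro-dot {w_um:2d} um -> order spread ~{order_spread_deg(w_um * 1e-6):.2f} deg")
```

Shrinking the micro-dot from ~39 µm to ~5 µm widens the order by almost an order of magnitude, which is exactly the resolution/luminosity trade-off the text describes.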
The described synthesis at the hologram plane has many advantages, but also several disadvantages, in comparison with other hologram synthesis techniques. Because of the direct calculation of the properties of each image point, the approach is very flexible. It enables us to calculate full 3D holograms with true-color RGB color mixing (see Section 2.3.1), kinetic behavior (see Section 2.3.3), and high contrast with low noise. One more aspect should be mentioned, namely the possibility to fully adjust the luminosity of each image point. The overall luminosity of the micro-grating is given by its area and diffraction efficiency. To control the luminosity continuously, either of these two parameters can be used. If the area of the micro-dot is maintained, the diffraction efficiency can be changed by tuning the profile shape or the depth of the modulation (usually the depth of the modulation is tuned, as it can be done easily by changing the exposure dose for the particular micro-grating). Unfortunately, it is relatively tricky to maintain the proper luminosity relations, as the dependence of the depth of the modulation on the exposure dose can be highly nonlinear, apart from the potential problems with changing the dose independently for each grating within the chosen recording setup. Thus, the luminosity is usually driven by changing the micro-dot area while maintaining the other parameters of the grating (shape and depth of the modulation). The micro-dot is divided into two parts, one filled with the grating and the second unexposed (see Fig. 6a). This allows one to tune the luminosity linearly and set the desired luminosity relations over the hologram area. The main drawback is the further segmentation of the micro-dots, which enhances the diffraction effects at the grating aperture (which is now even smaller) and can lead to higher noise levels in the reconstruction. Thus the proper choice of the dot size, as a trade-off between the mentioned effects, crucially influences the final quality of the reconstructed image.
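The linear luminosity control by area reduces to a trivial mapping. The sketch below assumes an 8-bit gray level driving the exposed fraction of the micro-dot; the interface and values are hypothetical.

```python
def exposed_width(micro_dot_width_m, gray_level):
    """Split the micro-dot into a grating part and an unexposed part; with fixed
    groove shape and depth, the diffracted power scales ~linearly with grating
    area, so an 8-bit gray level maps linearly onto the exposed width."""
    if not 0 <= gray_level <= 255:
        raise ValueError("gray_level must be an 8-bit value")
    return micro_dot_width_m * (gray_level / 255.0)

# roughly half luminosity -> roughly half of a 13 um micro-dot carries the grating
print(exposed_width(13e-6, 128))
```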

Data synthesis at the eye-pupil plane
In the previous case, the area of the hologram is divided into macro- and micro-dots, where the primary gratings are finally recorded directly using one of the approaches described in Section 3.1. In the case of synthesis at the eye-pupil plane, the microstructure of the final synthetic hologram is continuous, without any segmentation. The discretization of the spatial information is achieved using holographic principles: an interference pattern of one reference beam and multiple signal beams is recorded into the synthetic hologram. Each of the signal waves represents one particular viewing zone (see Fig. 3). To achieve this effect, the data synthesis at the eye-pupil plane consists of two steps. First, the synthetic master-hologram is created, in which the particular 2D views are recorded; then the master-hologram is transferred into the final hologram. The name of the method is derived from the fact that the position of the former master-hologram coincides with the eye-pupil plane when viewing the final hologram. The most instructive and probably best-known method is the one shown in Fig. 7. There are four main steps in this type of synthesis: 1. sampling of the 3D object (obtaining the set of 2D views, Fig. 3a), 2. recording of these 2D views (spatially separated) into the synthetic master-hologram (Fig. 7a), 3. transferring these recorded views into the final hologram in a single shot (Fig. 7b), and 4. viewing the final hologram (Fig. 3b). The 2D images are sequentially displayed on the ground glass screen shown in Fig. 7a. The holographic plate is placed in front of the screen, approximately in the plane from which the snapshots were taken. The plate is covered with a shade with a vertical slit, so that only a narrow stripe of the same width as (or smaller than) the eye pupil is exposed at each shot. The reference wave is incident from the same side as the signal wave.
When the stripe is exposed, the image on the ground glass is changed to the next view, the shade is shifted to the next position, and the next exposure is taken. In this manner, all the views of the object are processed. This means that the master-hologram is composed of a number of these elementary holograms placed next to each other.

Fig. 7. (b) The transfer scheme. All the stripes from the master-hologram are replayed at once using a conjugate "reference wave" from (a). All the reconstructed images overlap in the former ground glass plane and the final hologram is recorded.
After developing the plate, this synthetic master is placed into the transfer scheme (Fig. 7b).
All the recorded stripes are reconstructed at once, and all the 2D views overlap in the former plane of the ground glass screen. The final hologram plate is placed exactly in this plane and the hologram is exposed. When viewing the final hologram, the stripes of the synthetic master-hologram are reconstructed in the exact position where the master was placed during the transfer process. The plane of the observer's eyes coincides with these stripes, and each eye perceives only the view of the object from the correct direction (Fig. 3).
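The step-and-expose sequence of the master recording can be sketched as a simple controller loop. Here `display`, `stage`, and `shutter` are hypothetical hardware drivers, and the view count and stripe pitch are assumed values.

```python
VIEWS = [f"view_{i:02d}" for i in range(9)]   # identifiers of pre-rendered 2D views
STRIPE_PITCH = 2.0e-3                         # stripe width ~ eye-pupil size, m (assumed)

def record_master(display, stage, shutter, views=VIEWS, pitch=STRIPE_PITCH):
    """One elementary stripe hologram per 2D view: show the view, move the
    slit shade to the next stripe position, then expose the stripe."""
    for i, view in enumerate(views):
        display.show(view)          # next 2D view on the ground-glass screen
        stage.move_to(i * pitch)    # shift the slit shade to the next stripe
        shutter.expose()            # record interference of signal and reference
```

The master thus becomes a row of elementary stripe holograms, one per view, exactly as described above.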
The main advantage of this method over synthesis at the hologram plane is the achievable size of the final hologram. Unlike the directly written synthetic holograms, there is no high demand on the accuracy of positioning of the recorded stripes (about 0.1 mm is sufficient). Therefore there are no severe constraints on the recording device. Furthermore, the size of the final image is not limited by the size of the master-hologram, but by the size of the image on the screen. There are, of course, also several disadvantages depending on the particular setup. For example, the main disadvantage of the setup in Fig. 7 is wasting energy on the slit aperture. Although the energy loss can be lowered by shaping the reference beam, the reference beam must then be shifted along with the shade, which complicates the exposure. There are, however, ways to make the synthesis more efficient in a simpler setup (Section 3.2). The general disadvantage of these methods is lower contrast compared to the directly written synthetic holograms. The hologram described in the above section lacks vertical parallax; therefore, it is used for creating a single-parallax rainbow hologram. The setup can be modified to create various types of final holograms (full parallax, color, etc.). The following section shows the possibilities of creating such types of holograms using the two discussed synthetic approaches.

Advanced properties of synthetic holograms
The following paragraphs provide information about the most common types of synthetic holograms that can be created using the discussed methods, and about their properties. They cover color holograms, full parallax holograms, and kinetic holograms.

Color mixing in synthetic holography
Since the reconstruction of the hologram is based on the diffraction of light by the hologram structure, the observed colors are spectral ones. If a non-spectral color is required (e.g. white), additive color mixing must be introduced. Additive color mixing is based on colorimetric theory. It can be described by means of the CIE 1931 color space. The CIE defines a so-called standard colorimetric observer, and the color sensation experienced by the observer is determined by the spectral sensitivity of the cone cells of the eye. There are three peaks of sensitivity; hence, the sensation can be described using three parameters (X, Y, and Z), the so-called tristimulus values. These values can be used for computing the coordinates of a color in the well-known CIE xy chromaticity diagram. More details of colorimetry can be found e.g. in [Wyszecki and Stiles, 2000]. The most common way of achieving a mixed color is to use two or three spectral colors, usually red, green, and blue (RGB), although the mix can also be achieved from other colors. As stated, the hologram is a diffractive structure, so these three colors are obtained by means of diffraction gratings. Considering the data synthesis at the hologram plane, these gratings are created directly in each particular dot. The true-color dot is divided into three smaller areas, each filled with one grating. The relative areas of these three gratings control the contributions of each of the colors to the true-color mix. Such an array of "mixed" dots can be seen in Fig. 8. The area of the dot is divided into two major parts: one filled with grating and the second empty. The ratio of these areas defines the overall luminosity of the image point. The grating part is further segmented into three parts, each of them filled with one basic grating for color mixing. In principle, the color can be mixed from two, three, or more gratings, but the three-color mix is the most common. The particular basic gratings are chosen individually for each mixed color as the closest components in the colorimetric triangle. In the case of a true-color 3D synthetic hologram, each dot is divided into micro-dots corresponding to the spatial channels (see Section 2.2.1), and each such micro-dot is further segmented into the color components. As already mentioned, such complicated fragmentation can negatively affect the efficiency of the hologram and the overall noise level, so the dot size must be chosen carefully.
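The area-based color mixing can be sketched as follows. Given target RGB weights and assumed relative diffraction efficiencies of the three basic gratings (the efficiency values here are hypothetical; in practice they would be measured), the grating part of the dot is split so that the diffracted powers come out in the requested ratio.

```python
# Hypothetical relative diffraction efficiencies of the basic R/G/B gratings:
EFFICIENCY = {"R": 0.8, "G": 1.0, "B": 0.6}

def rgb_grating_areas(weights, grating_area_um2):
    """Split the grating part of a true-color dot into three sub-areas so the
    diffracted powers are proportional to the requested {"R","G","B"} weights.
    Required area ~ weight / efficiency for each color."""
    need = {c: weights[c] / EFFICIENCY[c] for c in "RGB"}
    total = sum(need.values())
    return {c: grating_area_um2 * need[c] / total for c in "RGB"}

# Equal target contributions -> the least efficient (blue) grating gets the most area:
areas = rgb_grating_areas({"R": 1.0, "G": 1.0, "B": 1.0}, 100.0)
```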
In the case of the eye-pupil synthesis, the area of the final hologram is not composed of elementary dots. Thus the color mixing must be synthesized during the recording of the master hologram. If three different lasers are used for the synthesis (with wavelengths corresponding to the desired RGB mix), the synthesis is relatively simple: all angular channels are exposed three times (once for each color) within the same area of the recording medium. For each exposure, the recorded 2D view contains only the particular color channel of the image. However, three-laser synthesis is relatively rare; usually the complete recording process is performed with a single laser (including the copying of the master to the final hologram). The master hologram for color mixing (in the case of color rainbow hologram synthesis) then consists of three independent, vertically shifted rows of exposures. The shift between the three rows (color channels) can be calculated from an analysis of the dispersion effects accompanying the diffraction by the final hologram [Saxby, 1994]. Each of the signal waves coming from the three channels carries the same spatial information about the object. The correct true-color mix is achieved by controlling the relative intensities of these waves. Fig. 9a shows the copying of such an RGB synthetic master into the final hologram. Fig. 9b shows the diffraction of white light by the final rainbow hologram. Fig. 10a shows an example of the RGB color master for a rainbow hologram synthesized using the method described in Section 3.2; the final hologram (copy of the master) is displayed in Fig. 10b.

Fig. 10. (a) Photograph of the "RGB color" master-hologram; the three color channels can be clearly seen. (b) Photograph of the final color synthetic rainbow hologram obtained using the master from (a).

Color mixing as shown in Fig. 9a is exactly correct for the plane of the final hologram. If the color point is outside of the hologram plane, it gets distorted due to the different axial
magnification for the three wavelengths that actually compose this particular point. This issue can be solved by a proper "pre-distortion" of the image data before the recording process.¹ The described techniques for color mixing utilize the rainbow effect (diffraction of white light by a volume transmission or thin grating). However, color can also be synthesized in volume reflection holograms, where all steps are complicated by the Bragg condition (the dispersion cannot be used to generate the whole spectral range of colors, as a grating recorded with the given geometry will not work for any color other than the design one). Nevertheless, a reflection color hologram can be synthesized using special techniques [Hariharan, 1996].
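The pre-distortion can be computed directly. Assuming planar reference and reconstruction waves and a 532 nm recording wavelength (the case quoted in footnote 1), the axial magnification of each color channel is the ratio of the recording to the reconstruction wavelength, and the model is pre-scaled by its inverse:

```python
LAMBDA_REC = 532e-9                                   # recording wavelength, m
CHANNELS = {"R": 633e-9, "G": 532e-9, "B": 470e-9}    # reconstruction wavelengths, m

def axial_magnification(channel):
    """alpha_c = lambda_rec / lambda_c for planar reference/reconstruction waves."""
    return LAMBDA_REC / CHANNELS[channel]

def predistort_z(z, channel):
    """Pre-scale the model's z coordinate by 1/alpha so that, on reconstruction,
    the channel overlaps the green one (red is stretched, blue is shrunk)."""
    return z / axial_magnification(channel)

for c in "RGB":
    print(f"alpha_{c} = {axial_magnification(c):.2f}")
```

This reproduces the values quoted in the footnote (alpha_R = 0.84, alpha_G = 1, alpha_B = 1.13).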

Single and full parallax synthetic holograms
In general, a classical hologram can record the complete 3D properties of the object, including both horizontal and vertical parallax. However, the parallax perpendicular to the fringes on the hologram surface can potentially suffer from strong dispersion effects. If the reconstructing light source is monochromatic, the dispersion does not apply and full three-dimensionality can easily be achieved. In the case of polychromatic reconstruction (which is the usual demand), the dispersion effects can lead to unacceptable rainbow blurring of the image. Let us assume the common reconstruction geometry according to Fig. 4a. The reconstruction beam comes from the top and the observer views the image from the front of the holographic plate; thus the main direction of the fringes is horizontal. Besides reconstruction by a monochromatic source (which is possible today thanks to the wide availability of semiconductor laser sources), there are two principal solutions to this problem. The first is to use volume reflection holograms, which can be highly selective to the reconstruction wavelength; this means the holographic reconstruction will appear only for a narrow band of wavelengths even when the light source is polychromatic. The second approach exploits the idea of rainbow holography: the vertical parallax is omitted and the object maintains its 3D properties only in the horizontal plane. When observing such a hologram from different directions in the vertical plane, the object is seen in different "rainbow" colors. However, such a single parallax hologram can offer a very convincing 3D perception thanks to the geometry described in Fig. 4a and the horizontal position of the human eyes. The missing vertical parallax disrupts the observation less than one might expect.

¹ The position of the non-compensated reconstructed color point differs from the original object point due to the different axial magnification for the particular reconstruction wavelengths. In the case of a green recording wavelength (λG = 532 nm), assuming the reference and reconstruction waves are planar, the axial magnifications for the particular color channels are αR = 0.84, αG = 1, and αB = 1.13 (for the red wavelength λR = 633 nm and blue wavelength λB = 470 nm). Since the hologram is mostly synthesized from 2D views obtained from the computer, this distortion can be compensated by pre-distorting the 3D model on the computer in the opposite manner. To overlap in the plane of the hologram, the red point has to be located on the "beam" coming from the red channel and the blue point on the beam coming from the blue channel. To compensate for the distortion in the z direction, the red "object" has to be axially magnified by the ratio 1/αR and the blue one shrunk by the ratio 1/αB. During the 2D image processing, the proper pre-distortion can be introduced into each color channel. When reconstructed, the spectral deformation causes all three channels to overlap in the position of the green channel.

Fig. 11. Photograph of a single parallax master-hologram. Each "shining" holopixel represents one view of the object.

Fig. 11 shows an example of the recorded master for a rainbow hologram created using
the synthesizing method described in Section 2.2.2. Each of the "shining" spots in the figure represents a recorded interference pattern of a signal and reference wave. These spots are further denoted "holopixels". As can be seen, the process is very simple, as only a small number of exposures is needed. The data acquisition from the object or computer model is also much simpler than in the general full parallax case. The "single row" master is copied to the final plate in the transfer scheme and the rainbow hologram is created. The final hologram can be thin or thick, white-light reconstruction is possible, and the dispersion effects only create the rainbow reconstruction under different directions of observation in the vertical plane. By contrast, monochromatic reconstruction is undesirable; in such a case the reconstruction is visible only in a narrow stripe. The full parallax hologram can also be synthesized at the eye-pupil plane. In such a case, the views must be acquired in both vertical and horizontal directions and recorded in a two-dimensional field. Fig. 12a shows a typical full parallax master recorded using the method described in Section 2.2.1. The exposed area corresponds to the future region where the observer's eyes will move. Finally, the recorded master must be copied to the reflection copy in the classical holographic transfer setup. Fig. 12b shows an example of the image reconstructed from a full parallax reflection hologram fabricated using the described method. When using the synthesis at the hologram plane, usually rainbow (single parallax) holograms are synthesized. The rainbow effect is used for achieving the proper color composition, and white-light diffraction is essential for proper image synthesis. The common techniques for recording such holograms are also usually capable of recording thin gratings only, so a volume reflection hologram cannot easily be recorded.
However, volume recording is theoretically possible using one of the following methods. If the hologram is designed for monochromatic reconstruction, it can maintain both parallaxes and, based on thin gratings of various periods and orientations, full 3D images can be constructed. Such a "master" can be copied in laser light to a different plane and then again back to the original plane using volume recording materials and a reflection holographic recording setup. This method gives the desired result, but the complexity of the approach is rather high and most advantages of the direct recording at the hologram plane vanish. The second possible approach to synthesizing a volume reflection hologram directly at the hologram plane is based on using a volume material already for the primary recording. However, in such a case the elementary micro-gratings must be recorded as volume reflection elements. The method requires full control over the direction of the recording beam and, in the case of color holograms, also the possibility to change the reference beam direction. Recording of this kind is very complicated and most of the advantages of the synthetic approach are suppressed. Fig. 13 shows an example of a rainbow 3D hologram with true-color RGB mix recorded at the hologram plane using a dot-matrix writer.
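The rainbow behavior discussed in this section follows directly from the grating equation. The sketch below, with an assumed fringe period and illumination angle (both hypothetical), shows how the observed wavelength sweeps through the visible spectrum as the vertical viewing angle changes.

```python
import math

D = 752e-9          # assumed fringe period of the final hologram, m
THETA_ILLUM = 45.0  # white-light illumination angle from above, deg (assumed)

def observed_wavelength(theta_view_deg):
    """First-order, thin-grating estimate of the spectral component diffracted
    toward a viewer at the given vertical angle (0 deg = hologram normal)."""
    return D * (math.sin(math.radians(THETA_ILLUM)) +
                math.sin(math.radians(theta_view_deg)))

for ang in (-10, 0, 10):
    print(f"view {ang:+3d} deg -> {observed_wavelength(ang) * 1e9:.0f} nm")
```

With these assumed parameters, tilting the viewing direction by about ±10° in the vertical plane already shifts the reconstruction from violet through green to red, which is the "rainbow" coloring described above.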

Kinetic behavior and holographic animations
The above described methods demonstrated the ability of diffractive structures to create a system of spatially separated image channels in the observation space. Such a system can be used not only for simulating the three-dimensional behavior of the object, but also for introducing dynamic effects into the viewed scene. The main idea of the synthesis of a holographic animation is straightforward: instead of different angular views of the object, particular channels can contain views of the object at different time instants. However, it is necessary to analyze how the dynamic effect influences other parameters of the reconstructed image, particularly how the dynamic behavior can be combined with the three-dimensional effects. There are in principle two main groups of dynamic effects which can be synthesized. The first is the continuous animation, where the same scene is observed from all viewing directions but with some continuous incremental changes between neighboring channels. The second group consists of effects where the observed scene completely changes when the observer moves his head or tilts the hologram; such effects are usually called flip-flop. When a continuous holographic animation is to be synthesized, a crucial problem can arise from interference between the animation and the 3D effects. If the time development is introduced in the horizontal plane, each eye of the observer will perceive a different object (taken at a different time instant). Such a discrepancy can severely disturb the observation.
Fig. 13. Example of the real reconstruction of the true-color RGB synthetic 3D rainbow hologram designed using the synthesis at the hologram plane and fabricated using dot-matrix laser lithography (in scale).

To remove the problem, the reconstruction can be observed with one eye only, which means
the three-dimensionality will be perceived only through the movement parallax. Another solution could be based on introducing the time development in the vertical direction. Such an approach automatically excludes rainbow holograms, as the rainbow effect occupies the vertical parallax. However, for full parallax holograms the approach is possible. The recording process is very similar to the one from Section 2.3.2, with the only difference being the image contents of the particular rows of exposures (whereas there the rows corresponded to different vertical views of the object, here each row corresponds to a different time state of the scene). As known from rainbow holography, the vertical parallax is not crucial for 3D perception, so this approach can offer a satisfactory 3D reconstruction with dynamic behavior when the observer moves his head up and down. In practice, because of the complicated process of recording volume reflection holograms, many holographic animations are based on the combination of a rainbow hologram with dynamic behavior. Both the 3D effect and the time development are included in the horizontal direction, and the observer's eyes are forced to perceive a somewhat unnatural view. However, if the time development of the scene is slow and smooth (there is no complete change of the objects, only small parts are moved), the reconstruction can be satisfactorily observed including both the 3D and the dynamic effects. When dynamic flip-flop behavior is desired, the above described problems do not apply. For example, if two different objects should be observed, one from the left and the second from the right, all channels left of the normal will contain only angular views of the first object and all channels right of the normal will contain views of the second one. The transition zone, where the left eye of the observer sees the left object and the right eye sees the right one, is usually ignored.
In principle, several different objects can be recorded in a single hologram. Both approaches, full parallax (volume reflection holograms) and single parallax (rainbow holograms), can be used when the appropriate requirements are met. In principle, the analysis above is independent of the chosen recording technology; both synthesis at the hologram plane and at the eye-pupil plane can be used. Of course, limitations similar to those discussed in Section 2.3.2 can arise when synthesizing a volume reflection hologram at the hologram plane.
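A flip-flop channel assignment can be sketched as a simple lookup: channels left of the normal carry angular views of object A, channels to the right carry object B. The channel angles and view lists below are hypothetical, and the linear index mapping is one possible choice.

```python
def flip_flop_plan(channel_angles_deg, views_a, views_b):
    """Map each angular channel to a view: negative angles (left of the normal)
    show object A, non-negative angles show object B; within each side the
    channel index runs linearly through that object's angular views."""
    left = sorted(a for a in channel_angles_deg if a < 0)
    right = sorted(a for a in channel_angles_deg if a >= 0)
    plan = {}
    for i, a in enumerate(left):
        plan[a] = ("A", views_a[i * len(views_a) // len(left)])
    for i, a in enumerate(right):
        plan[a] = ("B", views_b[i * len(views_b) // len(right)])
    return plan

angles = [-20, -10, -5, 5, 10, 20]
plan = flip_flop_plan(angles, ["a0", "a1", "a2"], ["b0", "b1", "b2"])
```

Within each half the normal angular views are preserved, so each object still shows its own 3D parallax; only the crossing of the normal flips the scene.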

Recording technology and materials
There are various techniques which can be used for recording synthetic holograms. In principle, they can be divided into two major groups. The first group consists of approaches where the diffractive microstructure is recorded point-by-point with high resolution. The devices used are usually commonly available direct writers, such as electron beam or laser beam lithography writers. The second group comprises approaches based on classical holography, where the microstructure is created "automatically" by exposure with the interference field of two or more wide laser waves.

Direct-write lithography techniques
When synthetic image holograms are calculated point-by-point, a recording device with very high resolution is needed. The typical periodicity of the hologram microstructure is, for most holograms, within the range 500-1000 nm. In this case, direct-write lithography is usually used for the recording. In principle, three different approaches can be chosen according to the particular application, namely e-beam lithography, focused laser lithography, and matrix laser lithography. The finest microstructure can be recorded using a focused electron beam. Very high resolution can be achieved using the e-beam writing technology, where the feature size can be on the nanometric scale. However, for image holograms such extreme resolution is not necessary. As has been shown in Section 2.2.1, the typical synthetic image hologram consists of a set of regular micro-gratings with periodicity just below one micron and dimensions of several microns. E-beam lithography can easily achieve these parameters, but at very high cost and with several restrictions following from the principles of the technique. The strongest limitations are usually the overall size of the hologram, which can hardly exceed several cm², and the complications connected with the fact that the recording must take place in vacuum and on a conductive recording material. In the case of an electron beam, the interference effect cannot be exploited to simplify the exposure process. The exposure itself is usually extremely long (many hours to days, according to the area to be exposed).
A technique very similar to e-beam writing is laser lithography with a focused laser beam. The recording process is practically identical to e-beam recording, except that a focused laser beam is used instead of an e-beam. Usually visible lasers with a wavelength in the blue part of the spectrum are used. The biggest advantages of the process are operation under normal conditions (no vacuum is needed) and the lower overall cost. The biggest disadvantage is the limited resolution given by the focusability of the laser beam (which depends on the recording wavelength and the numerical aperture of the focusing system). A grating periodicity of hundreds of nanometers is more or less at the edge of the capabilities of this technology. The most advanced technology for recording optical microstructures is matrix laser lithography. In these devices, the elementary grating is recorded as a pattern of interference between two focused laser beams. Thus the parameters of the grating (periodicity and orientation) are given by the angle between the beams and their plane of incidence (instead of the size of the focused spot). Moreover, a whole micro-grating with an area of hundreds of µm² is exposed at once in a single exposure, which greatly speeds up the recording process. The original so-called dot-matrix devices used two physical laser beams with mechanical alignment of their geometry. However, such an approach could not ensure sufficient resolution for shaping the micro-gratings, precise alignment of the gratings within larger areas, or the massive segmentation needed by multi-channel RGB stereograms. Recently, advanced devices have been developed based on the projection of a microstructure from a computer-driven micro-display with large demagnification. The basic idea of such a device is depicted in Fig. 14.

Fig. 14. (a) Basic setup of the matrix laser writer. (b) Photograph of exposed photoresist.
The system of micro-gratings (with general parameters) is displayed on a micro-display (usually a liquid crystal based spatial light modulator, SLM). The display is imaged onto the recording material in laser light with large demagnification (several hundred times). In a single exposure a relatively large area is exposed at once, which can contain different gratings with arbitrary parameters. The micro-gratings within this area (typically ∼0.01-0.05 mm²) are perfectly phase-synchronized, and a completely aperiodic microstructure can also be recorded. The exposure is performed multiple times, and between subsequent exposures the recording material is moved. The device can operate in a typical room environment and can easily reach a recording speed of several cm² per hour. Fig. 14b shows an example of the microstructure obtained using the described technology. The particular exposures, in the form of rectangular tiles, are clearly visible; each tile contains a system of micro-gratings. The advantages of direct writing are the high flexibility of the recorded microstructure and the excellent properties of the reconstructed image (high contrast, low noise, true-color RGB, etc.). The main disadvantages are the high cost of the technology and the area of the hologram, which rarely exceeds several tens of cm².
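The quoted tile areas and writing speeds can be checked with a back-of-the-envelope calculation. The SLM resolution, pixel pitch, demagnification, and exposure rate below are purely illustrative assumptions, not specifications of any particular device.

```python
SLM_PIXELS = (1024, 768)   # assumed micro-display resolution
SLM_PITCH = 13e-6          # assumed SLM pixel pitch, m
DEMAG = 100.0              # assumed projection demagnification
RATE_HZ = 10.0             # assumed exposures per second

tile_x = SLM_PIXELS[0] * SLM_PITCH / DEMAG    # exposed tile width, m
tile_y = SLM_PIXELS[1] * SLM_PITCH / DEMAG    # exposed tile height, m
tile_area_mm2 = tile_x * tile_y * 1e6

# One SLM pixel maps to SLM_PITCH / DEMAG on the material, so a grating
# period just below one micron is sampled by several projected pixels.
pixel_on_material_nm = SLM_PITCH / DEMAG * 1e9

speed_cm2_per_hour = tile_area_mm2 * RATE_HZ * 3600 / 100.0

print(f"tile: {tile_x * 1e6:.0f} x {tile_y * 1e6:.0f} um ({tile_area_mm2:.3f} mm^2)")
print(f"projected pixel: {pixel_on_material_nm:.0f} nm")
print(f"writing speed: ~{speed_cm2_per_hour:.1f} cm^2/hour")
```

Even these modest assumed parameters land inside the ranges quoted in the text: a tile area of roughly 0.01 mm² and a writing speed of a few cm² per hour.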

Wide-beam interference based methods
As stated in Section 2.2.2, the synthesis at the eye-pupil plane requires two steps to create the final synthetic hologram. The second step is more or less a standard holographic transfer copy. Concerning the first step, recording the synthetic master-hologram, there are various approaches [DeBitetto, 1969; Huff, 1981; Ratcliffe, 2003]. The one that we are going to present in more detail omits the ground glass with the focused image (Fig. 7a), and it also saves energy by shaping the reference beam and omitting the slit aperture. This particular setup differs from the one shown in Fig. 7 in that the image wavefront is recorded before it actually creates the image. The principle of the method is explained in Fig. 15.
Fig. 15. (a) Recording a single 2D image near the output pupil of the objective. (b) When replaying with the "reference" wave, the magnified image is focused in the former image plane. The magnified image in the image plane is two-dimensional, lying in the plane perpendicular to the plane of the paper (the correct projection of the "big cube" in the image should be a line segment).
The recording process proceeds in the following sequence. The 2D image, representing a particular view, is uploaded to a transparent SLM. The signal wave passes through the SLM, and the 2D image is focused by the "writing" objective onto its image plane with some magnification. Near the output pupil of the objective, the pattern of interference between the signal and reference beam (the holopixel) is recorded onto a holographic plate. Then the plate is moved, the image on the SLM is switched to the next view, the next holopixel is recorded, and so on. The process is more complicated than this, since the movement of the holographic plate needs to be compensated by also moving the SLM.² The scheme of the particular setup can be seen in Fig. 16. All the views obtained by sampling the object are recorded in this manner, and the plate is developed and bleached. Since the reference wave is collimated, the replay beam can be a wide collimated wave that reconstructs all the recorded holopixels at once. The reconstructed images from all the holopixels overlap at the place where the object was situated during capture. The final hologram plate is placed at this position and the final hologram is recorded (similar to Fig. 7b). The device is driven by a PC: the images are sequentially uploaded to the SLM and the two x-y stages are operated as shown in the figure.

Fig. 16. Scheme of the eye pupil synthesis device: 1 laser, 2 shutter, 3 mirror, 4 beam splitter, 5 microscope objective with spatial filter, 6 beam expander, 7 collimator, 8 holographic diffuser, 9 square aperture, 10 signal shade, 11 SLM, 12 special objective, 13 holographic plate.
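The movement compensation mentioned above reduces to a simple scaling: the SLM moves opposite to the plate, with a step smaller by the objective magnification. In the sketch below the magnification and holopixel pitch are assumed values.

```python
MAGNIFICATION = 20.0    # assumed magnification of the "writing" objective
HOLOPIXEL = 0.5e-3      # assumed holopixel pitch on the master plate, m

def stage_positions(n_views):
    """The plate advances one holopixel per view; the SLM moves the opposite
    way, scaled down by the objective magnification, so every recorded view
    appears to emerge from the same place in the final hologram."""
    plate = [i * HOLOPIXEL for i in range(n_views)]
    slm = [-p / MAGNIFICATION for p in plate]
    return plate, slm

plate, slm = stage_positions(4)
```

Keeping the objective and reference beam static and moving only the plate and SLM is the mechanically simplest arrangement, as the footnote below explains.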

Commonly used recording materials, light sources, and mass replication techniques
A very important issue in image holography is the replication process. For many applications it is desirable to produce the final hologram in large quantities (e.g. holograms for document security applications are often made in series of a million). Thus it is extremely important to look for a technology that enables cost-effective production of large series. In classical image holography the holograms are usually copied optically in an optical setup, which is still a relatively expensive process. Volume gratings cannot be copied in any other way. However, if the grating is of the relief type, it can also be copied using one of the imprint techniques (such as mechanical embossing). To make this possible, the hologram must be exposed in a suitable relief recording material. The light source used usually depends on the requirements of the particular recording medium. For the exposure of relief gratings, photoresists are often used (when exposing with a laser beam); for e-beam exposure, electron-beam-sensitive resists are used. Unfortunately, the gratings in photoresists are usually thin, so this material can be used only when volume properties are not needed (theoretically, relief gratings in resist can be "thick" and possess volume effects, but then they cannot be copied using an embossing technology). Photoresist can be used for all exposure steps, or the hologram can be transferred to the resist material from a different recording medium in a final step. The spectral sensitivity of photoresists peaks in the UV part of the spectrum, so they are usually exposed using short-wavelength visible sources (such as semiconductor lasers with wavelength ∼ 400 nm or gas lasers with wavelengths in the range 400 − 460 nm). Because of their overall low sensitivity, photoresists are not well suited for the primary exposure of large areas.

² Since the master-hologram plate is moved during the recording process, the relative position of the recording device and the already recorded images changes. In the final hologram, however, all the recorded views have to "emerge" from one particular place to form the 3D impression. That is why every practical recording setup either has to tilt the objective or has to implement some kind of movement compensation. Several arrangements of the mechanical movement can be considered; the simplest and most robust from the mechanical point of view is one where the "writing" objective is static. In that case the signal spot is always in the same position and the reference beam does not have to be moved either. The only moving elements are then the holographic plate and the SLM (as the compensating element). The step of the holographic plate is equal to the size of the holopixel; the step of the SLM is in the opposite direction and is smaller by the ratio of the objective magnification.
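The "thin versus volume" distinction invoked for photoresist gratings can be made quantitative with the standard Klein–Cook parameter Q = 2πλd/(nΛ²): gratings with Q ≲ 1 behave as thin (Raman–Nath) gratings, while Q ≫ 1 (often taken as Q > 10) marks the volume (Bragg) regime. The sketch below simply evaluates this formula; the layer thicknesses, refractive index, and grating period are illustrative values, not figures from the text.

```python
import math

def klein_cook_Q(wavelength_um, thickness_um, period_um, n=1.5):
    """Klein-Cook parameter Q = 2*pi*lambda*d / (n * Lambda^2).
    All lengths in micrometres; n is the average refractive index."""
    return 2 * math.pi * wavelength_um * thickness_um / (n * period_um ** 2)

# Illustrative numbers (not from the text):
# a ~1 um deep photoresist relief with a 1 um period at 532 nm ...
q_thin = klein_cook_Q(0.532, 1.0, 1.0)      # ~2: essentially a thin grating
# ... versus a 15 um thick gelatin layer with the same period
q_volume = klein_cook_Q(0.532, 15.0, 1.0)   # ~33: well into the volume regime
```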
After the hologram is recorded in a relief material such as photoresist, it is usually metalized and copied into a hard metal (usually nickel) relief copy using an electroforming process. Such a matrix is then used for mechanical embossing. For recording volume gratings, a whole range of holographic materials can be used according to particular needs. Silver halide gelatin, dichromated gelatin, and some photopolymers can satisfy the requirements. In contrast to the resist materials, they can easily be sensitized for laser sources across the whole visible range. Recently, diode-pumped solid-state lasers have become widely used in holography because of their high output power, very good coherence properties, high efficiency, and relatively low operational costs. Very common are 532 nm sources based on the second harmonic of a neodymium-doped active material. For further details concerning the recording materials and other components necessary for the recording process, see [Bjelkhagen, 1993; Collier et al., 1971].
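Two numbers behind this paragraph are easy to check: the 532 nm green line is the frequency-doubled (i.e. halved-wavelength) output of a ∼1064 nm neodymium laser, and the fringe period such a source writes follows the standard two-beam interference formula Λ = λ/(2 sin θ) for plane waves incident at ±θ from the plate normal. The symmetric geometry with θ = 30° below is an illustrative choice, not one taken from the text.

```python
import math

# Frequency doubling of a Nd-doped laser: the 1064 nm fundamental
# yields the common 532 nm green line.
fundamental_nm = 1064.0
shg_nm = fundamental_nm / 2

# Fringe period of the interference pattern for two plane waves each
# incident at angle theta from the plate normal: Lambda = lambda / (2 sin theta).
theta_deg = 30.0  # illustrative symmetric transmission geometry
period_nm = shg_nm / (2 * math.sin(math.radians(theta_deg)))  # ~532 nm
```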
Synthetic image holograms can be constructed in various ways according to the particular application and other demands on the hologram properties. Within this chapter only the basic ideas have been presented. At the Optical Physics Group at the Faculty of Nuclear Sciences and Physical Engineering of the Czech Technical University in Prague, various approaches to image synthesis are investigated. Besides research in the field of holographic techniques, recording materials are also developed and automated recording devices constructed. All samples presented in this text have been fabricated using the technology available at the Optical Physics Group.