## Abstract

Photogrammetry is a technique for studying and precisely defining the shape, dimensions, and position in space of an object, mainly from measurements taken on one or more photographs of that object. Today, photogrammetry is popular because of its ease of application, low cost, and good results, which make it a strong alternative to 3D scanning. It has therefore been adopted in sectors such as archaeology, architecture, and topography for applications including element reconstruction, cartography, and biomechanics. This chapter presents the fundamentals of this technology, as well as its wide range of applications in the engineering field.

### Keywords

- 3D scan
- reverse engineering
- 3D design
- point cloud
- CAD
- virtual model
- 3D reconstruction
- virtual assembly
- augmented reality
- virtual reality

## 1. Reverse engineering

Reverse engineering is based on studying the underlying principles and information of a product. Its main purpose is to obtain as much information as possible about an element or device, including its geometry and appearance, among other properties [1, 2]. It first appeared around World War II, in military operations.

The field of application of this type of engineering is very wide. 3D digitization stands out; it is mainly used for researching and analyzing the technology used by other companies, for developing elements without access to their original specifications (redesign), and for inspection or virtual metrology of a product in almost every industry [3].

The main 3D digitization technologies are shown in Figure 1, among which photogrammetry stands out for its ease of use and low cost.

## 2. Photogrammetry

Photogrammetry is characterized by measurement on photographs, which makes it possible to obtain the real dimensions, position, shape, and textures of any object [4, 5]. This science emerged in the middle of the nineteenth century and is therefore as old as photography itself. The first photogrammetric device and the first methodology were created in 1849 by the Frenchman Aimé Laussedat, "the father of photogrammetry," who used terrestrial photographs to compile a topographic map. His method was known as iconometry, the art of finding the size of an object by measuring its image. Digital photogrammetry was born in the 1980s; its great innovation was the use of digital images as the primary data source [6, 7].

The main phases of digital photogrammetry are analysis of the shape of the object and planning of the photographs to be taken; calibration of the camera; image processing with specific software to generate a point cloud; and transfer of this point cloud to CAD software to create a 3D model. The accuracy of the reconstruction depends on the quality of the images and textures. Photogrammetry algorithms typically pose the problem as minimizing the sum of the squares of a set of reprojection errors, a procedure known as bundle adjustment [8]. Structure-from-motion (SfM) algorithms can find a set of 3D points (P), a rotation (R), and a camera position (t), given a set of images of a static scene with 2D points in correspondence, as shown in Figure 2 [10].
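As a minimal sketch of the quantity that bundle adjustment minimizes (variable names and the toy camera here are assumptions for illustration, not taken from the chapter):

```python
import numpy as np

def reprojection_error(points3d, R, t, f, observed2d):
    """Sum of squared reprojection errors: the quantity bundle adjustment
    minimizes over the 3D points P, rotation R, and camera position t."""
    cam = (R @ points3d.T).T + t          # world -> camera coordinates
    proj = f * cam[:, :2] / cam[:, 2:3]   # pinhole projection onto the image plane
    return float(np.sum((proj - observed2d) ** 2))

# Toy check: points projected by the true camera give zero error.
P = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.0], [0.0, 2.0, 4.0]])
R, t, f = np.eye(3), np.zeros(3), 1.0
obs = f * P[:, :2] / P[:, 2:3]
err = reprojection_error(P, R, t, f, obs)
```

A full SfM solver optimizes this error jointly over all cameras and points; the sketch only evaluates it for one camera.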

Photogrammetric technology is generally based on illuminating an object and measuring conjugate points that appear in two photographic images, or in multiple photographic images (three or more). There are different photogrammetric techniques. One of them is to ensure that the surface of the object has enough light and optical texture to allow conjugate points to be paired across two or more images. In some cases, optical texture can be achieved by projecting a pattern onto the surface of the object at the time of image capture [11, 12, 13].

## 3. Fundamentals of photogrammetry

The basic mathematical equations underlying photogrammetry, called the collinearity equations, link the image coordinate system in the camera with the object being photographed [14] (Eqs. (1)–(3)):
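The equation bodies for Eqs. (1)–(3) are not reproduced in this copy. In standard notation, consistent with the definitions that follow (and taking *f* as the principal distance, which is an assumption here), the vector form usually reads:

```latex
\begin{pmatrix} x_n \\ y_n \\ -f \end{pmatrix}
= \lambda \, M
\begin{pmatrix} X_n - X_o \\ Y_n - Y_o \\ Z_n - Z_o \end{pmatrix}
```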

where *λ* = scaling factor; *M* = rotation matrix; *Xo*, *Yo*, and *Zo* = the position of the perspective center in object space; and *pn = (xn, yn)T* and *Pn = (Xn, Yn, Zn)T* = the coordinates of target *n* in the image plane and in object space, respectively. Algebraic manipulation of the above equation produces the well-known collinearity equations, which relate the location of the *n*th target in object space to the corresponding point in the image plane:
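The collinearity equations themselves are also missing from this copy; in their standard form (a hedged reconstruction using the symbols defined above) they read:

```latex
x_n = -f\,\frac{m_{11}(X_n - X_o) + m_{12}(Y_n - Y_o) + m_{13}(Z_n - Z_o)}
               {m_{31}(X_n - X_o) + m_{32}(Y_n - Y_o) + m_{33}(Z_n - Z_o)},
\qquad
y_n = -f\,\frac{m_{21}(X_n - X_o) + m_{22}(Y_n - Y_o) + m_{23}(Z_n - Z_o)}
               {m_{31}(X_n - X_o) + m_{32}(Y_n - Y_o) + m_{33}(Z_n - Z_o)}
```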

where *mij* (*i, j* = 1, 2, 3) = elements of the rotation matrix *M*, which are functions of the Euler orientation angles (*ω, φ, κ*), essentially the tilt, pan, and swing angles of the camera in object space (Eqs. (4)–(11)).
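The rotation matrix elements of Eqs. (4)–(11) are not reproduced here. One common convention (there are several, so this particular decomposition is an assumption) builds *M* as a product of three elementary rotations:

```latex
M = R_3(\kappa)\, R_2(\phi)\, R_1(\omega), \qquad
R_1(\omega) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\omega & \sin\omega \\ 0 & -\sin\omega & \cos\omega \end{pmatrix}, \;
R_2(\phi) = \begin{pmatrix} \cos\phi & 0 & -\sin\phi \\ 0 & 1 & 0 \\ \sin\phi & 0 & \cos\phi \end{pmatrix}, \;
R_3(\kappa) = \begin{pmatrix} \cos\kappa & \sin\kappa & 0 \\ -\sin\kappa & \cos\kappa & 0 \\ 0 & 0 & 1 \end{pmatrix}
```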

The plane of the image can be transformed analytically into its *X*, *Y*, and *Z* coordinates in global space. Photogrammetry is effective and computationally simple. It should be noted that its algorithm is based on definitions of both interior and exterior orientations. In a photographic system, if the internal parameters of a camera are known, any spatial point can be fixed by the intersection of two beams of light that are projected.

There are two main factors that induce photogrammetry measurement errors: systematic error due to lens distortion and random error due to human factors.

1. Systematic error due to lens distortion. It causes a point in the image plane to move from its true position (*x, y*) to a disturbed position. The coordinates of any point in the image can be compensated with Eqs. (13)–(14):

   In the lens, the largest error occurs at the edges of the projected image. Therefore, *dx*, *dy* can be broken down by Eqs. (15)–(16):

2. Random error due to human factors. Theoretically, a point captured in two different photographs is enough to set its 3D coordinates. To do this, the point must be identified and marked in both images. Any human can fail in the marking of points, giving rise to the random error.

## 4. Evolution from analytical to digital photogrammetry

Starting from analytical photogrammetry, it is possible to describe the evolution toward digital photogrammetry on the basis of physical and mathematical principles. The main distinction lies in the nature of the measurement information taken from the images [15].

Analytical photogrammetry measures image coordinates, whereas digital photogrammetry evaluates the gray values of the digital image. In both methods, appropriate Gauss-Markov estimation procedures are used, and pertinent relations between object-space models and image-space data are obtainable. Radiometric concerns take a more important role than before. The evaluation of the gray values of the digital image is no longer based on digital image correlation; as an alternative, the gray values of an image are projected directly onto the models in object space, which is a new principle. However, these numerical procedures in digital photogrammetry need to be stabilized by adjustment methods. Thus, the original concept of digital photogrammetry can be applied to images from any sensor.

Considerable advances in digital photogrammetry have been made in recent years due to the availability of new hardware and software, such as image processing workstations and increased storage capacity [16, 17].

## 5. Device and acquisition characteristics

The main camera and photograph parameters are focal length, principal point, skew, distortion, and pixel error; knowing them allows more accurate calibration [18]. They are shown in Figure 3.

### 5.1 Camera objective

Included in the optical part of the camera, the objective is in charge of projecting the image that crosses it onto a single plane and in excellent conditions of sharpness. It therefore focuses objects located at the same distance onto the focal plane; beyond a certain distance, all objects are projected onto the same plane. Each light point is transmitted to an element that composes the scene. As a result of diffraction, it appears as a circular point with a halo and concentric rings around it, called an Airy disc. Suppressing these rings is unfeasible because they are a physical effect of light; even so, it is desirable for them to be as diffuse and thin as possible [17, 19].

Its resolving capacity depends on two parameters: aberrations and diffraction. One of the main functions of the objective is to suppress aberrations. When the diaphragm is closed down, aberrations are reduced and the only limiting factor is diffraction; when the diaphragm is opened, diffraction loses significance and aberrations grow in strength [20].

### 5.2 Focal length

This parameter is measured from the optical center of the lens to the focal plane, with the camera focused at infinity [5, 21]. Normal lenses are those whose focal length is close to the diagonal of the negative or sensor. The focal length is represented in Figure 4.

### 5.3 Relative aperture

Relative aperture (*Ab*) is the ratio of the lens diameter (*D*) to its focal length (*f*) (Eq. (17)).
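Eq. (17) is not reproduced in this copy; from the definition just given it presumably reads:

```latex
A_b = \frac{D}{f}, \qquad N = \frac{1}{A_b} = \frac{f}{D} \quad \text{(the f-number)}
```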

It is usually expressed by its denominator, known as the brightness or "f-number." Put differently, the aperture is the opening through which light enters to be captured by the sensor: the smaller the f-number, the wider the opening and the more light reaches the sensor [4, 7].

### 5.4 Field angle

This is the viewing angle of the camera and is closely related to the focal length and dimension of the sensor [8, 22]. A schematic representation is proposed in Figure 5.

### 5.5 Shutter

The shutter is a mechanism that blocks the light passing through the lens from entering the camera. At certain intervals, it opens, allowing light through so that the film or sensor can be exposed. The opening time can be set [21].

### 5.6 Focus depth

Depth of focus is related to the tolerance between obtaining a sharp image with a suitable exposure and a less adequate exposure that still produces a sharp image. Depth of focus is altered by lens magnification and numerical aperture, and in certain conditions large-aperture systems have more pronounced depths of focus than low-aperture systems, even if the depth of field is small [19].

### 5.7 Depth of field

Depth of field is the area of sharp reproduction seen in the photograph. In this one, there are some objects observed which are located at a certain distance, as well as others more distant or adjacent to them [20].

### 5.8 Sensor

Its function is to convert the light received into a digital representation. The minimum element of the sensor is called a pixel, and a digital image consists of a set of pixels. Technology based on complementary metal-oxide-semiconductor (CMOS) sensors is the most widely applied. The sensors consist of a semiconductor material sensitive in and around the visible spectrum, between 300 and 1000 nm [10]. Charge-coupled device (CCD) sensors are becoming obsolete due to their cost and image-processing speed.

Reading out the information in CMOS sensors has the advantage of allowing many captures, with readings obtained in less time and with greater flexibility. Using a high dynamic range, high contrasts and a correct display of objects are achieved. In terms of quality, the physical size of the sensor is more significant than the number of cells or the resolution: a large sensor may allow higher-quality photographs than another sensor with higher resolution but a smaller surface [23].

As far as color is concerned, it must be kept in mind that color is a human visual perception. To perceive the color of an object, a light source and something that reflects that light are needed. In digital format, a color is represented by applying a representation system, the most common being the RGB system: to represent a color, the exact percentages of primary red, primary green, and primary blue (RGB) must be given. In this way, a color is displayed by means of three numbers [24].

### 5.9 Diaphragm

The function of this element is to increase or decrease the amount of light passing through the objective. The diaphragm aperture is expressed in f-numbers, and a step (stop) is the shift from one value on the scale to the next: each step on the f-scale changes the luminosity by a factor of 2 [5] (Figure 6).
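The factor-of-2 relation can be sketched numerically (an illustrative sketch; the scale shown in Figure 6 is not reproduced here):

```python
import math

# Full-stop f-number scale: each step multiplies the f-number by sqrt(2), so the
# light-gathering area, and hence the luminosity, changes by a factor of 2.
stops = [math.sqrt(2) ** i for i in range(8)]    # f/1, f/1.4, f/2, f/2.8, ...
light = [1.0 / n ** 2 for n in stops]            # relative light ~ 1 / N^2
ratios = [a / b for a, b in zip(light, light[1:])]
```

Every consecutive ratio in `ratios` is exactly 2, which is why closing the diaphragm by one stop halves the light.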

### 5.10 Other aspects to be taken into account

#### 5.10.1 Focus

The first step in taking a picture is focusing. The most commonly used types of automatic focusing are [25]:

- Phase-detection autofocus (PDAF): implemented with photodiodes across the sensor; the focusing element in the lens is moved to focus the image. It is a slow and imprecise system because of its use of photodiodes.
- Dual pixel: uses more focus points along the sensor than PDAF, with two photodiodes at each pixel to compare minimal differences. This is the most effective focusing technology.
- Contrast detection: the oldest of the three systems. It relies on the principle that the contrast of an image is greater, and its edges appear sharper, when it is correctly focused. Its disadvantage is its slowness.

#### 5.10.2 Perspective

A photograph is a perspective image of an object. If straight lines are drawn from all points of an object to a fixed point (called the point of view or center of projection) and these lines cross an intermediate surface (called the projection surface), the image drawn on this surface is known as a perspective [1, 26].

The camera is responsible for materializing perspectives of objects. The projection surface is the flat extent of the image sensor or capture surface, and the focal distance is the orthogonal distance separating the viewpoint from the projection surface. Knowing the distance between the point of view and the plane containing the object points, the focal distance with which the photograph was taken, and the inclination of the plane of the object points with respect to the projection plane, the true coordinates of the points can be determined using basic trigonometry (Figure 7).

The orthogonal and geometric perspectives are the most widely used in photogrammetry. With a conventional camera (film or digital), a geometric perspective is produced. From a photograph in which the points of the object to be measured lie in a plane parallel to the projection plane (the one over which the photographic film is spread), the real position of the points in space is obtained using Eqs. (18)–(19):
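Eqs. (18)–(19) are missing from this copy; for a geometric (pinhole) perspective, with the symbols defined in the following line, they presumably take the standard form:

```latex
x = f\,\frac{X}{Z}, \qquad y = f\,\frac{Y}{Z}
```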

where *f* is the focal length, (*X, Y, Z*) are the actual coordinates of the point, and *P(x,y)* are the coordinates on the projection plane of the image or photograph.

More complex expressions arise when the planes containing the points are not parallel to the projection plane, making it indispensable to know the inclination of the plane with respect to the projection plane. In practice, to avoid complications in the calculation of coordinates, photographs are usually taken so that the planes are parallel.
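The parallel-plane case can be sketched as a round trip between object and image coordinates (function names and numeric values are assumptions for illustration):

```python
def project(X, Y, Z, f):
    """Geometric (pinhole) perspective: object point -> image-plane coordinates."""
    return f * X / Z, f * Y / Z

def unproject(x, y, Z, f):
    """Recover the real coordinates when the depth Z of the object plane is known."""
    return x * Z / f, y * Z / f

# Round trip: a point at (X, Y) = (2, 1) m on a plane 10 m away, f = 50 mm.
x, y = project(2.0, 1.0, 10.0, 0.05)
X, Y = unproject(x, y, 10.0, 0.05)
```

Because the object plane is parallel to the projection plane, a single known depth `Z` suffices to invert the projection.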

#### 5.10.3 Exposure

Exposure is the capture of a scene by means of a sensitive material, which in analog photography is the film and in digital photography the sensor. Exposure is based on three variables that control the entry of light onto the focal plane (sensor) [9]:

- ISO sensitivity: indicates the amount of light required to take a picture; the higher the ISO, the less light is needed.
- Diaphragm aperture: controls the light reaching the focal plane, along with the shutter speed, and regulates the depth of field of the photograph.
- Shutter speed: the time the shutter stays open allowing light to reach the sensor; the higher the shutter speed, the less light reaches the sensor.

When a sensor captures as many tones (dynamic range) and as much light as its capability allows, the picture is perfectly exposed.
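The interaction of aperture and shutter speed is often summarized by the exposure value; this standard definition is not stated in the chapter, so the sketch below is an assumption:

```python
import math

def exposure_value(f_number, shutter_s):
    """Standard exposure value: EV = log2(N^2 / t), with N the f-number
    and t the shutter time in seconds."""
    return math.log2(f_number ** 2 / shutter_s)

ev_base = exposure_value(2.8, 1 / 60)     # the chapter's example settings (f/2.8, 1/60)
ev_faster = exposure_value(2.8, 1 / 120)  # halving the shutter time adds one EV
```

Any combination of aperture and shutter with the same EV admits the same total light, which is why the three exposure variables can be traded off against each other.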

#### 5.10.4 Dynamic range

Dynamic range measures the range of light and dark tones that a camera can capture in the same picture. It shows the number of tonal nuances a camera is capable of capturing and is assessed through contrast and sharpness.

Contrast and sharpness are based on the tonal differentiation with which a pair of white and black lines is captured or reproduced. It is a measure of the degree of detail, reaching 100% when both lines can be perfectly differentiated as pure white and black. Resolution and contrast are closely related concepts: if the contrast falls below 5%, it is difficult to observe any detail, and detail is shown more clearly and distinctly the higher the contrast is. Contrast transfer functions describe how frequency and modulation are altered as light passes through the different optical components of the lens. As the viewer moves away, a substantial loss of contrast begins to be noticed [12].

When performing a contrast correction, different filters are applied to the central zones than to the peripheral zones. An example of contrast and resolution is shown in Figure 8.

#### 5.10.5 Aberrations

One of the most critical components of a camera is the photographic lens, which produces a series of aberrations that distort the photographed image, making it difficult to recover the correct dimensions of the object [27, 28]. The most common types of aberrations in photographic lenses are:

- Point aberrations: the point appears at the position predicted by paraxial optics, but as a "stain" instead of a point. These include chromatic aberration, spherical aberration, astigmatism, and coma.
- Shape aberrations: the point is shown as a point but at a position different from that predicted by the paraxial approximation. This is a systematic error and can be of two types: field curvature and distortion.
- Field curvature: a defect in which the image is formed on a curved surface instead of a flat one. This aberration is difficult to correct and can only be mitigated to a small extent.
- Distortion: affects only the shape of the image. It occurs because the reproduction scale of the image changes off-axis. If an object with straight lines, such as a square, is photographed, the center lines will appear straight while the edge lines curve inward or outward, producing the so-called barrel or pincushion distortions. This aberration is not corrected by closing the diaphragm; it affects the geometry of the image and needs to be corrected.

#### 5.10.6 Environmental conditions

Stable environmental conditions must be ensured:

- Temperature: the ideal temperature for taking a photograph is approximately between 18 and 26°C, to avoid thermal expansion of the lens.
- Wind: calm wind, to avoid disturbances when taking the photo.
- Illumination: sufficient lighting. In most cases, natural light is not enough, and it is necessary to use spotlights or other artificial sources.

Other parameters, such as the texture of the element, significantly improve the quality of the 3D reconstruction; optimal results are obtained with the highest level of ambient light (exposure 1/60, f/2.8, and ISO sensitivity 100). The surface of an element should be opaque, with Lambertian reflection and surface homogeneity, and every point on the surface of the object must be visible from at least two sensors [26, 29].

#### 5.10.7 Image quality

Image quality is a prerequisite for working with it properly. There are two main characteristics that define it:

- Resolution in amplitude (bit depth): the number of bits per point of an image
- Spatial resolution: the number of pixels per unit area

Image processing is the transformation of an input image into an output image. It is carried out to facilitate the analysis of the image and to increase its reliability [30]. Among the transformations, those that eliminate noise, i.e., unwanted variation in pixel intensity, stand out. There are two types of operations: individual operations (rectification or binarization) and neighborhood operations (filtering).

#### 5.10.8 Histogram

The histogram is a very useful visual tool for the study of digital images. With the naked eye, it is possible to study the contrast or the distribution of intensities, because it follows the discrete function of Eq. (20):
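Eq. (20) is not reproduced here; given the definitions that follow, the histogram is presumably the discrete function below, with *N* the total number of pixels in the normalized version:

```latex
h(x) = n_x, \qquad p(x) = \frac{n_x}{N} \in [0, 1]
```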

where *x* is the gray or color level and *n* is the number of pixels in the image with this value. The histogram is normalized to values ranging from 0 to 1. Figure 9 shows its different zones [18, 31].

The most common image defects, which prevent good image quality and can be identified in the histogram, are muted tones, black areas, overexposed or burned areas, and backlight. A good image generally has a histogram shaped like a Gaussian bell, that is, with most of the information in the central part and less at the extremes. It is also important that the histogram spans both ends, ensuring that there are true blacks and whites in the photograph.
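A histogram in the sense of Eq. (20) can be computed directly (the tiny image below is an illustrative assumption):

```python
import numpy as np

# Toy 3x3 grayscale image (made-up values).
img = np.array([[0, 64, 64],
                [128, 255, 255],
                [128, 128, 64]], dtype=np.uint8)

hist, _ = np.histogram(img, bins=256, range=(0, 256))  # h(x) = number of pixels at level x
p = hist / img.size                                    # normalized histogram in [0, 1]
```

Inspecting `hist` shows, for instance, three pixels at level 64 and three at level 128, so peaks in the plot correspond directly to dominant tones.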

#### 5.10.9 Binarization

Binarization produces a representation of an image with only two values, while the dimensions of the image are preserved. The decision threshold must be chosen correctly and applied as a step filter with an algorithm similar to Eq. (21):
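Eq. (21) is missing from this copy; a step (threshold) filter with threshold *T*, consistent with the definitions in the next line, is presumably:

```latex
g(x, y) = \begin{cases} 1 & \text{if } f(x, y) \geq T \\ 0 & \text{otherwise} \end{cases}
```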

where 0/1 represents the black/white values and *f* is the gray-tone value at coordinates (*x,y*) [32]. Figure 10 shows a grayscale image versus its binary version.

For an image of sufficient quality, the binarization should map the objects of interest to white pixels, with black pixels for the background. If the object of interest is darker than the background, an inversion is applied after binarization. The most important point of the process is the calculation of the threshold, for which there are different methods: histogram, clustering, entropy, similarity, spatial, global, and local.

Choosing the threshold value is the difficult point common to all methods. The main techniques are supported by statistics applied to the histogram: the carry error method, the Otsu method, and Saulova's pixel deviation method.
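Of these, the Otsu method is the most widely documented; a minimal sketch (the toy image and variable names are assumptions) chooses the threshold that maximizes between-class variance of the histogram:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    cum_p = np.cumsum(p)                      # class-0 probability up to level t
    cum_mean = np.cumsum(p * np.arange(256))  # cumulative mean gray level
    mean_total = cum_mean[-1]
    best_t, best_var = 0, 0.0
    for t in range(256):
        w0, w1 = cum_p[t], 1.0 - cum_p[t]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t] / w0
        mu1 = (mean_total - cum_mean[t]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2      # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Two clearly separated gray-level clusters (illustrative values).
img = np.array([[10, 12, 11], [200, 210, 205], [12, 200, 11]], dtype=np.uint8)
t = otsu_threshold(img)
binary = (img > t).astype(np.uint8)  # 1 = object of interest (white), 0 = background
```

On this toy image the threshold falls between the two clusters, so the four bright pixels become white and the rest black.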

#### 5.10.10 Spatial filtering

Spatial filtering is based on a convolution operation between two two-dimensional functions: the image, *f*, and a kernel, *h*. The operation transforms the value of a pixel *p* at position (*x,y*), taking into account the values of the adjacent pixels: a weighted sum of the values of the neighboring points of *p* is computed, where a mask (*h*), acting as a filter, supplies the weighting values. The size of the mask varies according to the number of pixels used.
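The weighted-sum operation can be sketched as follows (the mean mask and test image are illustrative assumptions):

```python
import numpy as np

def spatial_filter(img, h):
    """Neighborhood operation: weighted sum of each pixel's neighbors with mask h.
    (Written as correlation; for symmetric masks it equals a true convolution.)"""
    kh, kw = h.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img.astype(float), ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * h)
    return out

mean_mask = np.ones((3, 3)) / 9.0             # smoothing (noise-reduction) mask
img = np.arange(25, dtype=float).reshape(5, 5)
smoothed = spatial_filter(img, mean_mask)
```

With the 3x3 mean mask, each output pixel is the average of its neighborhood, which is the simplest noise-elimination filter mentioned above.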

#### 5.10.11 Geometrical transformations

These operations modify the spatial coordinates of the image. There are several operations that are easy to understand and apply, such as interpolation, rotation, rectification, and distortion correction.

#### 5.10.12 Lens distortion

Due to the geometry of the lens, a square object is reproduced with variations in its parallel lines. There are three types of distortion: barrel, pincushion, and mustache (a combination of the first two) (Figure 11) [25, 33]. This error is negligible in a photograph of a natural scene, but to take engineering measurements and obtain a virtual object, it is necessary to compensate for the distortion, for which a mathematical model is used.

Barrel distortion is centered and symmetrical. Therefore, to correct the distortion of a given point, a radial transformation is performed, expressed mathematically in Eq. (22):
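Eq. (22) is not reproduced here; consistent with the definitions in the next line, the radial transformation presumably reads:

```latex
\begin{pmatrix} x_u \\ y_u \end{pmatrix} =
\begin{pmatrix} x_d \\ y_d \end{pmatrix} +
L(r) \begin{pmatrix} x - x_d \\ y - y_d \end{pmatrix},
\qquad r = \sqrt{(x - x_d)^2 + (y - y_d)^2}
```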

where (*x, y*) is the observed (distorted) point, (*xd, yd*) represents the center of distortion, usually a point near the center of the image, and the radial function *L(r)* determines the magnitude of the distortion correction as a function of the distance *r* from the point to the center of distortion [34].

The radial function *L(r)* can be modeled following two strategies. The first gives rise to the so-called polynomial models (Eq. (23)):

The second is based on a rational approximation (Eq. (24)):

The values *k1–kn* are called the distortion model parameters. Together with the distortion center coordinates (*xd, yd*), they completely represent the distortion model. The lens distortion is represented by the *ki* coefficients, which are obtained from a known calibration image.
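A polynomial model of the form *L(r) = 1 + k1·r² + k2·r⁴ + …* can be applied per point as follows (the coefficient values here are made up, not calibration results):

```python
import numpy as np

def undistort_point(x, y, center, k):
    """Radial correction with a polynomial model L(r) = 1 + k1*r^2 + k2*r^4 + ...
    (Sketch; symbols follow the chapter, coefficient values are assumptions.)"""
    xd, yd = center
    dx, dy = x - xd, y - yd
    r = float(np.hypot(dx, dy))
    L = 1.0 + sum(ki * r ** (2 * (i + 1)) for i, ki in enumerate(k))
    return xd + L * dx, yd + L * dy

# A point 5 units from the distortion center, with a single coefficient k1 = 0.01:
xu, yu = undistort_point(3.0, 4.0, (0.0, 0.0), [0.01])  # L(5) = 1.25
```

Note that the distortion center itself is a fixed point of the transformation, as the model requires.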

#### 5.10.13 Rectification (perspective distortion)

Image rectification is necessary because it is difficult to keep the optical axis vertical at all points of the shot, so the axis is tilted with respect to the vertical. Vertical images are free of the displacements caused by the inclination of the shot but still show displacements produced by the depth of the workpiece. These displacements can be suppressed by applying a differential rectification or orthorectification process. In an original digital image or a scan, the technique is applied pixel by pixel; in a scanned image, the initial data are the coordinates of the control points. The procedure is divided into two steps:

1. Determination of the mathematical transformation relating real coordinates and image coordinates

2. Generation of the new image, aligned to the reference system

After this process, every pixel of the resulting orthophotograph must be assigned its gray level by performing a digital resampling [17, 34]. Figure 12 shows an unrectified (left) and a rectified (right) photograph.

Several resamplings are made on the initial image. Three resampling methods are commonly used: bilinear interpolation, nearest neighbor, and bicubic convolution. The transformations applied to the images are [19]: the Helmert transformation, the affine transformation, the polynomial transformation, and the two-dimensional projective transformation.
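Of the three resampling methods, bilinear interpolation is the simplest to sketch (the 2x2 test image is an illustrative assumption):

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Bilinear interpolation: weighted mean of the 4 nearest pixels,
    as used when resampling a rectified image at non-integer coordinates."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bottom

img = np.array([[0.0, 10.0], [20.0, 30.0]])
center = bilinear_sample(img, 0.5, 0.5)  # midpoint of the four pixels
```

Sampling at the midpoint returns the average of the four gray levels, while integer coordinates return the original pixel values unchanged.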

## 6. Obtaining a 3D model from 2D photographs

To obtain a 3D model of an object from 2D images, photographs must be taken from different views, with adequate quality. From these photographs, the reconstruction process begins.

3D reconstruction is the process by which real objects are reproduced on a computer. Nowadays there are several reconstruction techniques and 3D meshing methods, whose function is to provide an algorithm able to connect the set of representative points of the object into surface elements. The efficiency of the chosen techniques determines the final quality of the reconstruction.

The stereoscopic scene analysis system presented by Koch uses image matching, object segmentation, interpolation, and triangulation techniques to obtain a dense map of 3D points. The system is divided into three modules: sensor processing, image pair processing, and model-based sequence processing.

Pollefeys describes a 3D reconstruction process based on well-defined stages. The input is an image sequence, and the output is a 3D surface model. The stages are the following: relating the images, structure and motion recovery, dense matching, and model construction.

Another proposal is expressed by Remondino. He presents a 3D reconstruction system following these steps: image sequence acquisition and analysis, image calibration and orientation, matching process and the generation of points, and 3D modeling [18].

### 6.1 From a photograph

It is used for pieces with revolution symmetry. With only one photograph, it is possible to obtain the axis and dimensions. In 1978 Barrow and Tenenbaum demonstrated that the orientation of the surface along the silhouette can be computed directly from the image data, resulting in the first study of silhouettes in single views. Koenderink showed that the sign of the silhouette's curvature is equal to that of the Gaussian curvature: concavities, convexities, and inflections of the silhouette indicate hyperbolic, convex, and parabolic surface points, respectively. Finally, Cipolla and Blake showed that the curvature of the silhouette has the same sign as the normal curvature along the contour generator under perspective projection; a similar result was derived for the orthographic projection by Brady [35].

First, the silhouette *ρ* of a surface of revolution (SOR) is extracted from the image with a Canny edge detector, and the harmonic homology *W* that maps each side of *ρ* to its symmetrical counterpart is estimated by minimizing the geometric distance between the original silhouette *ρ* and its transformed version *ρ′ = Wρ*. The image is rectified, and the axis of the figure is rotated into the orthogonal projection (Figure 13).

The apparent contour is first manually segmented from the rectified silhouette; this can usually be done easily by removing the upper and lower elliptical parts of the silhouette. Points are then sampled from the apparent contour, together with the tangent vector at each sampled point.

For *Ψ ≠ 0*, *Rx(Ψ)* first transforms the viewing vector *p(s)* and the associated surface normal *n(s)* at each sample point; the transformed viewing vector is normalized so that its third coefficient becomes one, and Eqs. (25)–(26) can be used to recover the depth of the sample point.


### 6.2 From two photographs

This section is based on an investigation using a practical heuristic method for the reconstruction of structured scenes from two uncalibrated images. The method starts from an initial estimation of the main homographies from the initial 2D point correspondences, which may contain some outliers; the homographies are recursively refined by incorporating supporting point and line correspondences on the main spatial surfaces. The epipolar geometry is then recovered directly from the refined homographies, the cameras are calibrated from three orthogonal vanishing points, and the infinite homography is recovered.

First, a simple homography-guided method is proposed to fit and match line segments between the two views, using a Canny edge detector and regression algorithms. Second, the cameras are automatically calibrated with four intrinsic parameters that may vary between the two views. A RANSAC mechanism is adopted to detect the main flat surfaces of the object from the 2D images. The advantages of the method are that it can build realistic models with minimal human interaction and that it reconstructs more visible surfaces on the detected planes than traditional methods, which can only reconstruct overlapping parts (Figure 14).

### 6.3 From more than two photographs

#### 6.3.1 Reconstruction of geological objects

This is one of the fields where photogrammetry is most applied nowadays. Here, the reconstruction is carried out by applying Delaunay triangulation and tetrahedra. Many data models based on tetrahedron meshes have been developed to represent complex objects in 3D GIS.

The tetrahedron grid alone can only represent the geometrical structure of geological objects. The natural characteristics of geological objects are reflected in their different attributes, such as different rock formations or different contents of mineral bodies. The attribute value of an internal point is defined as the linear interpolation of the attribute values at the four vertices of a tetrahedron. However, attributes can change suddenly between different formations and different mineral bodies. To cope with such sudden changes, an interpolation is needed that can only be applied to the six edges of a tetrahedron; the interpolated points are used only as temporary data for subsequent processing [37].

#### 6.3.2 Reconstruction of objects with high surface and texture resolution

This section presents a robust and precise system for the 3D reconstruction of real objects with high-resolution shape and texture. The reconstruction method is passive, and the only information required is 2D images acquired with a calibrated camera from different viewing angles as the object rotates on a turntable. The triangle surface model is obtained through a scheme that combines octree construction and the marching cubes algorithm. A texture mapping strategy based on surface particles is developed to adequately address photographic problems such as inhomogeneous lighting, highlights, and occlusions [38]. To conclude, the results of the reconstruction are included to demonstrate the quality obtained (Figure 15).

The scheme combining octree construction and isolevel extraction through marching cubes addresses the shape-from-silhouette problem. The octree representation allows very high resolutions to be reached, while the fast marching cubes method is adapted, through a properly defined isolevel function, to work with binary silhouettes, resulting in a mesh of triangles with vertices precisely located on the visual hull of the object.

Calibration is performed on the camera and rotary table. One of the problems found is the discontinuity of the texture due to the nonhomogeneous lighting in different parts of the element due to shadows.

Next, the octree representation is described. An octree is a hierarchical tree structure that can be used to represent volumetric data in terms of cubes of different sizes; each octree node corresponds to a cube in the octree space that lies entirely within the object. This opens up different choices of primitive: voxels, particles, triangles, and more complicated parametric primitives such as splines or NURBS. Voxels are used to represent volumes but can also be used to represent surfaces; a related primitive is the particle, defined by its color, orientation, and position. In the marching cubes triangulation of the octree, the white and black points denote the cube corners that are inside and outside the object, respectively, while the gray points are the triangle vertices on the surface (Figure 16).
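As a rough illustration of the octree idea, assuming only an inside/outside predicate for the object, a recursive subdivision might look like the sketch below. Sampling the cube corners (plus the center) is a simplification of a real occupancy test; the names are ours.

```python
import numpy as np

# Offsets to the 8 corners / children of a cube, as +-1 steps per axis.
OFFSETS = np.array([[sx, sy, sz] for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])

def build_octree(inside, center, half, depth, max_depth):
    """Recursively classify cubes against an inside/outside predicate.

    Returns a list of (center, half_size, label) leaves with label 'full'
    (cube entirely inside the object), 'empty', or 'boundary' (mixed corners
    at the maximum depth).
    """
    corners = center + half * OFFSETS
    flags = [bool(inside(c)) for c in corners]
    if all(flags):
        return [(center, half, "full")]
    if not any(flags) and not inside(center):
        return [(center, half, "empty")]
    if depth == max_depth:
        return [(center, half, "boundary")]
    leaves = []
    for off in OFFSETS:  # subdivide the mixed cube into 8 half-size children
        leaves.extend(build_octree(inside, center + 0.5 * half * off,
                                   0.5 * half, depth + 1, max_depth))
    return leaves
```

Running this with a unit-sphere predicate over a cube of half-size 2 yields full cubes deep inside, empty cubes outside, and boundary cubes along the surface, which is exactly where the marching cubes triangulation then operates.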

The application of the isolevel function, calculated by means of a dichotomous subdivision procedure, allows a faithful model of the object to be constructed. The triangle vertices that make up the object's mesh are placed precisely on the surface of the digitized model even at low resolutions, which provides an efficient compromise between resolution and geometric accuracy. The octree construction followed by the marching cubes algorithm generates a triangular mesh with an excessive number of triangles, which must then be simplified.
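The dichotomous subdivision used to place a vertex on the surface is ordinary bisection along a cube edge. A minimal sketch, assuming only an inside/outside test (names are ours):

```python
def edge_surface_point(inside, p_in, p_out, iters=20):
    """Locate the surface crossing on a cube edge by dichotomous subdivision.

    p_in is a corner inside the object, p_out a corner outside; the segment
    is bisected until the bracket is tight, and its midpoint is returned as
    the triangle-vertex position on the surface.
    """
    a, b = list(p_in), list(p_out)
    for _ in range(iters):
        mid = [(ai + bi) / 2.0 for ai, bi in zip(a, b)]
        if inside(mid):
            a = mid       # keep the bracket straddling the surface
        else:
            b = mid
    return [(ai + bi) / 2.0 for ai, bi in zip(a, b)]
```

With 20 subdivisions, the vertex is located to within one millionth of the edge length, which is why the mesh remains geometrically accurate even at low octree resolutions.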

#### 6.3.3 Object reconstruction

The reconstruction of objects is mainly applied in the archeological field. The process to obtain the 3D model follows the workflow shown in Figure 17.

First of all, corresponding or common characteristics must be found among the images of the object. The process occurs in two phases:

- The reconstruction algorithm generates a projective reconstruction in which dimensions are not yet correctly defined; a self-calibration algorithm then produces a reconstruction equivalent to the original one, formed by a set of 3D points.
- All the pixels of an image are matched with those of the neighboring images so that the system can reconstruct these points.

The system selects two images to set up an initial projective reconstruction frame and then reconstructs the matching feature points through triangulation.

Then a dense surface estimation is performed. To obtain a more detailed model of the observed surface, a dense matching technique is used. The 3D surface is approximated with a triangular mesh to reduce geometric complexity and to adapt the model to the requirements of computer graphics display systems. A corresponding 3D mesh is then constructed by placing the triangle vertices in 3D space according to the values found in the corresponding depth map. To reconstruct more complex shapes, the system must combine multiple depth maps. Finally, texture is applied.
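Placing triangle vertices according to a depth map can be illustrated with a small sketch. Assuming a pinhole camera with hypothetical intrinsics `fx`, `fy`, `cx`, `cy`, each pixel is back-projected to 3D and each grid cell is split into two triangles:

```python
import numpy as np

def depth_map_to_mesh(depth, fx=1.0, fy=1.0, cx=0.0, cy=0.0):
    """Lift an H x W depth map to a triangle mesh by pinhole back-projection.

    Each pixel (u, v) with depth d becomes the vertex
    d * ((u - cx) / fx, (v - cy) / fy, 1), and each grid cell contributes
    two triangles, yielding a regular surface mesh.
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    verts = np.stack([(u - cx) / fx * depth,
                      (v - cy) / fy * depth,
                      depth], axis=-1).reshape(-1, 3)
    tris = []
    for r in range(h - 1):
        for c in range(w - 1):
            i = r * w + c
            tris.append((i, i + 1, i + w))          # upper-left triangle
            tris.append((i + 1, i + w + 1, i + w))  # lower-right triangle
    return verts, np.array(tris)
```

A real pipeline would additionally drop triangles that span depth discontinuities and simplify the mesh before merging it with the other depth maps.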

#### 6.3.4 3D reconstruction of the human body

3D reconstruction of the human body is used for medical purposes in many cases, as a basis for implants, splints, etc. The process consists of the following parts: acquisition and analysis of the image sequence; calibration and orientation of the images; matching over the surface of the human body; and generation and modeling of the point cloud. Once the necessary images have been obtained from different points of view, the calibration and orientation of the images are carried out.

The choice of the camera model is often related to the final application and the required accuracy. The correct calibration of the sensor used is one of the main objectives. Another important point is image matching [40].

To evaluate the quality of the matching results, different indicators are used: the a posteriori standard deviation of the least squares adjustment, the standard deviation of the shifts in the x-y directions, and the displacement from the initial position in the x-y directions. In the case of uncalibrated images, the performance of the process can only be improved with a local contrast enhancement of the images.
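The shift-based indicators are straightforward statistics over the matched point positions; a minimal illustration (the function and its return keys are our own naming):

```python
import numpy as np

def matching_quality(initial, tracked):
    """Simple quality indicators for a set of matched points.

    initial, tracked: (N, 2) arrays of x-y positions before and after
    matching. Returns the standard deviation of the per-point shift in x
    and in y, and the mean displacement magnitude from the initial positions.
    """
    shift = tracked - initial
    return {
        "std_x": float(shift[:, 0].std()),       # spread of x shifts
        "std_y": float(shift[:, 1].std()),       # spread of y shifts
        "mean_shift": float(np.linalg.norm(shift, axis=1).mean()),
    }
```

A uniform translation of all points gives zero standard deviation but a nonzero mean shift, which is why both kinds of indicator are reported.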

Finally, 3D reconstruction and modeling of the human body shape is performed. The 3D coordinates of each matching triplet are calculated through a forward intersection. Using the collinearity equations and the results of the orientation process, the 3D coordinates of the matched points are determined with a least squares solution. For each triplet of images, a point cloud is calculated, and then all the points are merged into a single point cloud. A spatial filter is applied to reduce noise and obtain a more uniform point cloud density. Figure 18 shows the results before and after filtering (approximately 20,000 points, left); a view of the recovered point cloud with pixel intensity (center); and a 3D human model (right).
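Forward intersection of a matching triplet reduces to a homogeneous least squares problem. The sketch below uses the standard linear (DLT) triangulation as a stand-in for the collinearity-based adjustment described here: each view contributes two linear equations, and the smallest singular vector is the 3D point.

```python
import numpy as np

def forward_intersection(projections, points2d):
    """Triangulate one 3D point from its images in several oriented views.

    projections: list of 3x4 camera matrices P_i; points2d: matching (x, y)
    observations. Each view contributes the equations
    x * P[2] - P[0] = 0 and y * P[2] - P[1] = 0; the homogeneous
    least-squares solution is the singular vector of the smallest
    singular value.
    """
    A = []
    for P, (x, y) in zip(projections, points2d):
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X[:3] / X[3]   # de-homogenize to Euclidean coordinates
```

With three or more views the system is overdetermined, so noisy observations are averaged out in the least squares sense, exactly the role the triplet-wise adjustment plays in the pipeline above.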

The system is composed of two main modules. The first is in charge of image processing, determining the depth map for a pair of views; each pair of successive views goes through a sequence of phases: detection of points of interest, point matching, and reconstruction of the matched points. In this last phase, the parameters that describe the motion between the two views (rotation matrix R and translation vector T) are determined. This sequence of steps is repeated for all successive pairs of views in the set.

The second module is responsible for creating the 3D model, for which it must assemble the complete map of 3D points generated. In each iteration of the previous module, the 3D mesh is generated by applying Delaunay's triangulation method. The results obtained from the process are rendered in a virtual environment to obtain a more realistic visualization of the object [16].
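Delaunay triangulation of a point set is available off the shelf. As a toy stand-in for the meshing step described above, SciPy's `Delaunay` can triangulate the x-y projection of a small synthetic cloud (a real pipeline would triangulate per view or over a parameterized surface):

```python
import numpy as np
from scipy.spatial import Delaunay

# Toy point cloud (x, y, z); in the pipeline this would come from the
# forward-intersected 3D points of the first module.
rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 1.0, (30, 3))

tri = Delaunay(cloud[:, :2])   # 2D Delaunay triangulation on the x-y plane
triangles = tri.simplices      # (n_tri, 3) array of vertex indices
```

Each simplex row indexes three points of the cloud, so lifting the triangles back to the stored z values yields the 3D surface mesh.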

The number of detected feature points is related to the number of reconstructed 3D points and to the quality of that reconstruction (more points capture more detail). Therefore, the higher the number of points on the map, the more detailed the reconstructed areas. In some cases this does not hold, due to the geometry of the object: in a cube, for example, more points can result in a distorted object.

## 7. Conclusion

The technological development of 3D photogrammetry makes it a real alternative to 3D scanners in many applications. Among its benefits are faster raw data acquisition, simplicity, portability, and more economical equipment. Different studies have verified the accuracy and repeatability of 3D photogrammetry by comparing digital models of objects obtained from 2D digital photographs with those generated by a 3D surface scanner. In general, the meshes obtained with photogrammetric techniques and with scanners show a low degree of deviation from each other, and the surface fits of photogrammetric models are often slightly better. For these reasons, photogrammetry is a technology with a vast number of engineering applications.

In this chapter the basic fundamentals, the characteristics of the acquisition, and the aspects to be taken into account to obtain a good virtual model from photogrammetry have been explained.

## Acknowledgments

The authors would like to thank the call for Innovation and Teaching Improvement Projects of the University of Cadiz and AIRBUS-UCA Innovation Unit (UIC) for the Development of Advanced Manufacturing Technologies in the Aeronautical Industry.

## Conflict of interest

The authors declare no conflict of interest.