Open access peer-reviewed chapter

Photogrammetry as an Engineering Design Tool

Written By

Ana Pilar Valerga Puerta, Rocio Aletheia Jimenez-Rodriguez, Sergio Fernandez-Vidal and Severo Raul Fernandez-Vidal

Submitted: 27 April 2020 Reviewed: 24 May 2020 Published: 26 June 2020

DOI: 10.5772/intechopen.92998

From the Edited Volume

Product Design

Edited by Cătălin Alexandru, Codruta Jaliu and Mihai Comşit

Abstract

Photogrammetry is a technique used for studying and precisely defining the shape, dimension, and position in space of any object, mainly using measurements taken from one or more photographs of that object. Today, photogrammetry is a popular science due to its ease of application, low cost, and good results. For these reasons, it is becoming a good alternative to scanning. This has led to its implementation in sectors such as archeology, architecture, and topography, for applications in element reconstruction, cartography, or biomechanics. This chapter presents the fundamental aspects of this technology, as well as its great possibilities of application in the engineering field.

Keywords

  • 3D scan
  • reverse engineering
  • 3D design
  • point cloud
  • CAD
  • virtual model
  • 3D reconstruction
  • virtual assembly
  • augmented reality
  • virtual reality

1. Reverse engineering

Reverse engineering is based on the study of certain principles and information of a product. The main function of reverse engineering is to obtain the maximum information about an element or device, including its geometry and appearance, among other things [1, 2]. It first appeared around World War II, in military operations.

The field of application of this type of engineering is very wide. 3D digitization stands out: it is used mainly to research and analyze the technology used by other companies, to develop elements without access to their original documentation (redesign), and to perform inspection or virtual metrology tasks on products in almost every industry [3].

The main 3D digitization technologies are shown in Figure 1, among which photogrammetry stands out for its ease of use and low cost.

Figure 1.

Classification of 3D scanning technologies.

2. Photogrammetry

Photogrammetry is distinguished by measurement on photographs, allowing the real dimensions, position, shape, and textures of any object to be obtained [4, 5]. This science emerged in the middle of the nineteenth century, being as old as photography itself. The first photogrammetric device and the first methodology were created in 1849 by the Frenchman Aimé Laussedat. Laussedat, "the father of photogrammetry," used terrestrial photographs to compile a topographic map. This method was known as iconometry, meaning the art of finding the size of an object by measuring its image. Digital photogrammetry was born in the 1980s, having as a great innovation the use of digital images as the primary data source [6, 7].

The main phases of digital photogrammetry are analysis of the shape of the object and planning of the photos to be taken; calibration of the camera; image processing with specific software to generate a point cloud; and transfer of this point cloud to CAD software to create a 3D model. The accuracy of the reconstruction depends on the quality of the images and textures. Photogrammetry algorithms typically pose the problem as the minimization of the sum of the squares of a set of errors, a formulation known as bundle adjustment [8]. Structure-from-motion (SfM) algorithms can find a set of 3D points (P), a rotation (R), and a camera position (t), given a set of images of a static scene with 2D points in correspondence, as shown in Figure 2 [10].

Figure 2.

Structure-from-motion algorithm [9].
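As an illustrative sketch (not the exact pipeline of the works cited above), the following Python fragment outlines a minimal two-view SfM step with the OpenCV library: conjugate points are matched, the relative rotation R and camera position t are recovered, and a sparse point cloud is triangulated. The intrinsic matrix K and the image file names are placeholder assumptions that would come from camera calibration and the actual photo set.

```python
# Minimal two-view structure-from-motion sketch with OpenCV (a simplification,
# not the exact algorithm of the cited works). "left.jpg"/"right.jpg" and the
# intrinsic matrix K are placeholder assumptions.
import cv2
import numpy as np

K = np.array([[2800.0, 0.0, 2000.0],
              [0.0, 2800.0, 1500.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Detect and match conjugate points between the two images.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # Lowe ratio test

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 2. Recover the relative rotation R and camera position t (up to scale).
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 3. Triangulate the corresponding 2D points into a sparse 3D point cloud.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points3d = (pts4d[:3] / pts4d[3]).T
print(points3d.shape[0], "reconstructed points")
```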

Photogrammetric technology is generally based on illuminating the object and measuring conjugate points that appear in two photographic images, or in multiple photographic images (three or more). There are different photogrammetric techniques. One of them is to ensure that the surface of the object has enough light and optical texture to allow conjugate points to be paired across two or more images. In some cases, optical texture can be achieved by projecting a pattern onto the surface of the object at the time of image capture [11, 12, 13].

3. Fundamentals of photogrammetry

The basic mathematical equations underlying photogrammetry, called the collinearity equations, relate the image coordinate system of the camera to the coordinate system of the object being photographed [14] (Eqs. (1)-(3)):

\[
\begin{pmatrix} x_n - x_0 \\ y_n - y_0 \\ -c \end{pmatrix}
= \lambda M
\begin{pmatrix} X_n - X_0 \\ Y_n - Y_0 \\ Z_n - Z_0 \end{pmatrix}
\tag{1}
\]

where λ = scaling factor; M = rotation matrix; X0, Y0, and Z0 = the position of the perspective center in the object's space; and pn = (xn, yn)T and Pn = (Xn, Yn, Zn)T = the coordinates of target n in the image plane and in the object space, respectively. Algebraic manipulation of the above equation produces the well-known collinearity equations that relate the location of the nth target in object space to the corresponding point in the image plane:

\[
x_n - x_0 = -c\,\frac{m_{11}(X_n - X_0) + m_{12}(Y_n - Y_0) + m_{13}(Z_n - Z_0)}{m_{31}(X_n - X_0) + m_{32}(Y_n - Y_0) + m_{33}(Z_n - Z_0)}
\tag{2}
\]
\[
y_n - y_0 = -c\,\frac{m_{21}(X_n - X_0) + m_{22}(Y_n - Y_0) + m_{23}(Z_n - Z_0)}{m_{31}(X_n - X_0) + m_{32}(Y_n - Y_0) + m_{33}(Z_n - Z_0)}
\tag{3}
\]

where mij (i, j = 1, 2, 3) = elements of the rotation matrix M, which are functions of the Euler orientation angles (ω, ф, к), essentially the tilt, roll, and swing angles of the camera in object space (Eqs. (4)-(12)):

\[
\begin{aligned}
m_{11} &= \cos\phi\cos\kappa && \text{(4)} \\
m_{12} &= \sin\omega\sin\phi\cos\kappa + \cos\omega\sin\kappa && \text{(5)} \\
m_{13} &= -\cos\omega\sin\phi\cos\kappa + \sin\omega\sin\kappa && \text{(6)} \\
m_{21} &= -\cos\phi\sin\kappa && \text{(7)} \\
m_{22} &= -\sin\omega\sin\phi\sin\kappa + \cos\omega\cos\kappa && \text{(8)} \\
m_{23} &= \cos\omega\sin\phi\sin\kappa + \sin\omega\cos\kappa && \text{(9)} \\
m_{31} &= \sin\phi && \text{(10)} \\
m_{32} &= -\sin\omega\cos\phi && \text{(11)} \\
m_{33} &= \cos\omega\cos\phi && \text{(12)}
\end{aligned}
\]

The plane of the image can be transformed analytically into its X, Y, and Z coordinates in global space. Photogrammetry is effective and computationally simple. It should be noted that its algorithm is based on definitions of both interior and exterior orientations. In a photographic system, if the internal parameters of a camera are known, any spatial point can be fixed by the intersection of two projected light rays.
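The collinearity model of Eqs. (1)-(12) can be sketched numerically as follows; the angles, perspective center, and principal distance used here are illustrative assumptions, and the rotation matrix is built with the standard omega-phi-kappa convention.

```python
# Hedged numeric sketch of the collinearity equations (Eqs. (1)-(12)):
# builds the rotation matrix M from the Euler angles (omega, phi, kappa) and
# projects an object-space point onto the image plane. All numeric values are
# illustrative assumptions, not data from the chapter.
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Photogrammetric rotation matrix M = R(kappa) R(phi) R(omega)."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), np.sin(omega)],
                   [0, -np.sin(omega), np.cos(omega)]])
    Ry = np.array([[np.cos(phi), 0, -np.sin(phi)],
                   [0, 1, 0],
                   [np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), np.sin(kappa), 0],
                   [-np.sin(kappa), np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def collinearity(P, X0, c, M, x0=0.0, y0=0.0):
    """Project object point P = (X, Y, Z) to image coordinates (x, y)."""
    u, v, w = M @ (np.asarray(P) - np.asarray(X0))   # rotated object vector
    x = x0 - c * u / w                               # Eq. (2)
    y = y0 - c * v / w                               # Eq. (3)
    return x, y

M = rotation_matrix(np.radians(2), np.radians(-3), np.radians(90))
print(collinearity(P=[10.0, 5.0, 2.0], X0=[0.0, 0.0, 50.0], c=0.035, M=M))
```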

There are two main factors that induce photogrammetric measurement errors: systematic error due to lens distortion and random error due to human factors.

  1. Systematic error due to lens distortion. It causes a point in the image plane to move from its true position (x, y) to a disturbed position. The coordinates of any point in the image can be compensated with Eqs. (13)-(14):

\[
x'_n = x_n + dx \tag{13}
\]
\[
y'_n = y_n + dy \tag{14}
\]

In the lens, the largest error occurs at the edges of the projected image. The corrections dx and dy can be broken down into a radial component (subscript r) and a decentering component (subscript d) by Eqs. (15)-(16):

\[
dx = dx_r + dx_d \tag{15}
\]
\[
dy = dy_r + dy_d \tag{16}
\]

  2. Random error due to human factors. Theoretically, a point captured in two different photos is enough to fix its 3D coordinates. This step requires identifying and marking the point in the two images. A human operator can make mistakes when marking the points, giving rise to the random error.

4. Evolution from analytical to digital photogrammetry

Starting from analytical photogrammetry, it is possible to describe the evolution to digital photogrammetry based on physical and mathematical principles. The main distinction lies in the nature of the information measured in the images [15].

Analytical photogrammetry evaluates image coordinates, whereas digital photogrammetry evaluates the gray values of the digital image. In both methods, appropriate Gauss-Markov estimation procedures are used, and pertinent relations between object-space models and image-space data can be established. Radiometric concerns take a more important role than previously. The evaluation of the gray values of the digital image is no longer based on digital image correlation; as an alternative, the gray values of an image are projected directly onto the models in object space, which is a new principle. However, these numerical procedures in digital photogrammetry need to be stabilized by adjustment methods. Thus, the concept of digital photogrammetry can be applied to images from any sensor.

Considerable advances in digital photogrammetry have been made in recent years due to the availability of new hardware and software, such as image processing workstations and increased storage capacity [16, 17].

5. Device and acquisition characteristics

The main camera and photograph parameters are the focal length, the principal point, the skew, the distortion, and the pixel error; knowing them allows a more accurate calibration [18]. They are shown in Figure 3.

Figure 3.

Scheme of operation of a camera objective.

5.1 Camera objective

Included in the optical part of the camera, the objective is in charge of projecting the image that crosses it onto a single plane with the best possible sharpness. It is therefore a matter of focusing objects located at the same distance onto the focal plane; beyond a certain distance, all objects are projected onto the same plane. Each light point of the scene is transmitted to the sensor. As a result of diffraction, it appears as a circular spot with a halo and concentric rings around it, called the Airy disc. Suppressing it is unfeasible because it is a physical effect of light; even so, it is desirable for such rings to be as diffuse and thin as possible [17, 19].

Its resolving capacity depends on two factors: aberrations and diffraction. One of the main functions of the objective is to suppress aberrations. When the diaphragm is closed down, the aberrations are reduced, and the only limiting factor is diffraction. When the diaphragm is opened, diffraction loses significance relative to the aberrations, which gain strength [20].

5.2 Focal length

This parameter is measured from the optical center of the lens to the focal plane when the camera is focused at infinity [5, 21]. Normal lenses are those whose focal length is close to the diagonal of the negative or sensor. The representation of the focal length is shown in Figure 4.

Figure 4.

Representation and focal length types on a camera.

5.3 Relative aperture

Relative aperture (Ab) is the ratio of the lens diameter (D) to its focal length (f) (Eq. (17)).

It is expressed by the denominator, known as brightness or "f-number." Put differently, the aperture is the opening through which light enters to be captured by the sensor: the smaller the f-number, the wider the opening and the more light reaches the sensor [4, 7]:

\[
A_b = \frac{D}{f} \tag{17}
\]

5.4 Field angle

This is the viewing angle of the camera and is closely related to the focal length and dimension of the sensor [8, 22]. A schematic representation is proposed in Figure 5.

Figure 5.

Focal distances and corresponding angles.

5.5 Shutter

It is a mechanism that prevents the light passing through the lens from entering the closed camera. At set intervals of time, it opens, allowing the passage of light so that the film or sensor can be exposed. The opening time can be set [21].

5.6 Focus depth

It is related to the tolerance between obtaining a sharp image with a suitable exposure and a less adequate exposure that still produces a sharp image. Depth of focus is altered by lens magnification and numerical aperture, and under certain conditions, large-aperture systems have more pronounced depths of focus than low-aperture systems, even if the depth of field is small [19].

5.7 Depth of field

Depth of field is the area of sharp reproduction seen in the photograph. Within it, objects located at the focused distance appear sharp, as do others somewhat nearer to or farther from them [20].

5.8 Sensor

Its function is to convert the light received into a digital signal. The minimum element of the sensor is called a pixel, and a digital image consists of a set of pixels. The technology based on complementary metal oxide semiconductor (CMOS) sensors is the most widely used. The sensors consist of a semiconductor material sensitive in and around the visible spectrum, between 300 and 1000 nm [10]. Charge-coupled device (CCD) sensors are becoming obsolete due to their cost and image processing speed.

Reading out the information in CMOS sensors has the advantage of allowing many captures, with faster readings and greater flexibility. Working with a high dynamic range, high contrast and a correct rendering of objects are achieved. In terms of quality, the physical size of the sensor is more significant than the number of cells or resolution: a large sensor may allow higher-quality photographs than another sensor with higher resolution but a smaller surface [23].

As far as color is concerned, it must be borne in mind that color is a human visual perception. In order to perceive the color of an object, a light source and something that reflects this light are necessary. A color is represented in digital format by applying a representation system, the most common being RGB. To represent a color, the exact proportions of primary red, primary green, and primary blue (RGB) must be available; in this way, the color is expressed with three numbers [24].

5.9 Diaphragm

The function of this element is to increase or decrease the amount of light passing through the lens. The diaphragm aperture is expressed in f-numbers. A stop is the shift from one value to the next; each stop on the f scale changes the luminosity by a factor of 2 [5] (Figure 6).

Figure 6.

Effect of (a) different apertures, (b) shutter speeds, and (c) ISO.

5.10 Other aspects to be taken into account

5.10.1 Focus

The first step in taking a picture is focusing. The most commonly used types of automatic focusing are [25]:

  • Phase detection autofocus (PDAF). It works by placing photodiodes across the sensor; the focusing element in the lens is moved to focus the image. It is a slow and inaccurate system due to the use of these photodiodes.

  • Dual pixel. This method uses more focus points along the sensor than PDAF. This system uses two photodiodes at each pixel to compare minimal differences. It is the most effective focusing technology.

  • Contrast detection. It is the oldest of the three systems described. Its operation is based on the principle that the contrast of an image is greater, and its edges appear sharper, when it is focused correctly. Its disadvantage is its slowness.

5.10.2 Perspective

A photograph is a perspective image of an object. If straight lines are drawn from all points of an object to a fixed point (called the point of view or center of projection) and these lines cross an intermediate surface (called the projection surface), the image drawn on this surface is known as a perspective [1, 26].

The camera is responsible for executing and materializing perspectives of objects. The projection surface is the plane of the image sensor or capture surface. Focal distance is the orthogonal distance separating the viewpoint from the projection surface. Knowing the distance between the point of view and the plane that contains the points of the object, the focal distance with which the photograph was taken, and the inclination of that plane with respect to the projection plane, the real coordinates of the points can be derived using basic trigonometry (Figure 7).

Figure 7.

Diagram of the projection of a camera.

The orthogonal and the geometric perspectives are the most widely used in photogrammetry. A conventional camera (film or digital) produces a geometric perspective. From a photograph in which the points of the object to be measured lie in a plane parallel to the projection plane (the one over which the photographic film or sensor extends), the real position of the points in space is obtained using Eqs. (18)-(19):

\[
\frac{x}{f} = \frac{X}{Z} \tag{18}
\]
\[
\frac{y}{f} = \frac{Y}{Z} \tag{19}
\]

where f is the focal length, (X, Y, Z) are the actual coordinates of the point, and (x, y) are the coordinates of its projection P on the image or photograph plane.

More complex expressions arise if the planes containing the points are not parallel to the projection plane, making it indispensable to know the inclination of the plane with respect to the projection plane. In practice, to avoid complications in the calculation of coordinates, photographs are usually taken so that the planes are parallel.
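For the parallel-plane case of Eqs. (18)-(19), a minimal sketch of the coordinate recovery is shown below; the focal length, depth, and image coordinates are assumed illustrative values.

```python
# Sketch of Eqs. (18)-(19) for the parallel-plane case: recovering object-space
# coordinates X, Y from image coordinates (x, y) when the object plane is
# parallel to the projection plane and its distance Z is known.
def image_to_object(x, y, f, Z):
    """Invert x/f = X/Z and y/f = Y/Z for a plane at known depth Z."""
    X = x * Z / f
    Y = y * Z / f
    return X, Y

# Example (assumed values): a point imaged 4 mm from the principal point with a
# 50 mm lens and the object plane 2 m away lies 160 mm from the axis in space.
print(image_to_object(x=4.0, y=0.0, f=50.0, Z=2000.0))  # -> (160.0, 0.0)
```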

5.10.3 Exposure

It is based on the capture of a scene by means of a sensitive material: in analog photography this corresponds to the film, and in digital photography, to the sensor. Exposure relies on three variables to control the entry of light onto the focal plane (sensor) and achieve an adequate exposure [9]:

  1. ISO sensitivity: it indicates the amount of light required to take a picture. The more light available, the lower the ISO needed.

  2. Diaphragm opening: it controls the light reaching the focal plane and, along with the shutter speed, regulates the depth of field of the photograph.

  3. Shutter speed: the time the shutter remains open determines how long light reaches the sensor. The higher the shutter speed, the less light reaches the sensor.

When the sensor captures as many tones (dynamic range) and as much information (light) as its capacity allows, the picture is perfectly exposed.

5.10.4 Dynamic range

It measures the range of light and dark tones that a camera is able to capture in the same picture, that is, the number of tonal nuances the camera can capture, measurable through contrast and sharpness.

Contrast and sharpness are based on the tonal differentiation with which a pair of white and black lines is captured or reproduced. It is a measure of the degree of detail, being 100% when both lines can be perfectly differentiated as pure white and black. Resolution and contrast are closely related concepts: if the contrast falls below 5%, it is difficult to observe any detail, which appears more clearly and distinctly the higher the contrast is. Contrast transfer functions describe how frequency and modulation are altered when light passes through the different optical components of the lens. As the observer moves away, a substantial loss of contrast begins to be noticed [12].

When performing a contrast correction, different filters are applied to the central zones than to the peripheral zones. An example of contrast and resolution is shown in Figure 8.

Figure 8.

Contrast sensitivity change as a function of the spatial frequency of the target.

5.10.5 Aberrations

One of the most important components of a camera is the photographic lens, which produces a series of aberrations that distort the images of the photographs, making it difficult to determine the correct dimensions of the object [27, 28]. There are different types of aberrations, the most common in photographic lenses being:

  1. Point aberrations: the image remains at the position predicted by paraxial optics, but it appears as a "spot" instead of a point. This group includes chromatic aberration, spherical aberration, astigmatism, and coma.

  2. Shape aberrations: the point is imaged as a point but at a position different from the one predicted by the paraxial approximation. This is a systematic error and can be of two types: field curvature and distortions.

  • Field curvature: a defect in which the image surface is curved instead of flat. It is difficult to correct this aberration, although it can be mitigated to a small degree.

  • Distortion: only affects the shape of the image. It occurs due to the difference in the scale of reproduction of the image off-axis. If an object with straight lines is photographed, such as a square, the center lines will appear straight, and the edge lines will curve inward or outward, producing the so-called barrel or pincushion distortions. This aberration is not corrected by closing the diaphragm. This error affects the geometry of the image and needs to be corrected.

5.10.6 Environmental conditions

Stability of environmental conditions must be achieved:

  1. Temperature: the ideal temperature for taking a photograph is between approximately 18 and 26 °C, in order to avoid dilation of the lens.

  2. Wind: calm conditions, to avoid disturbances when taking the photo.

  3. Illumination: sufficient lighting. In most cases, natural light is not enough, and it is necessary to use spotlights or other artificial sources.

Other parameters, such as the texture of the element, significantly improve the quality of the 3D reconstruction; optimal results are obtained with the highest level of ambient light (exposure 1/60 s, f/2.8, and ISO sensitivity 100). The surface of an element should be opaque, with Lambertian reflection and surface homogeneity. A single point on the surface of the object must be visible from at least two sensor positions [26, 29].

5.10.7 Image quality

Image quality is a prerequisite for working with it properly. There are two main characteristics that define it:

  1. Resolution in amplitude (bit depth): number of bits per point of an image

  2. Spatial resolution: the number of pixels per unit area

Image processing is the transformation of an input image into an output image. It is carried out to facilitate the analysis of the image and to obtain greater reliability from it [30]. Among the transformations, those that eliminate noise or variations in pixel intensity stand out. There are two types of operations: individual operations (rectification or binarization) and neighborhood operations (filtering).

5.10.8 Histogram

This is a very useful visual tool for the study of digital images. At a glance, it is possible to study the contrast or the distribution of intensities, because it follows the discrete function of Eq. (20):

\[
f(x_i) = n_i \tag{20}
\]

where xi is the gray or color level and ni is the number of pixels in the image with that value. The histogram is normalized to values ranging from 0 to 1. In Figure 9 it is possible to see its different zones [18, 31].

Figure 9.

Histogram areas.

The most common errors in the image, which prevent good image quality, can be identified in the histogram: muted tones, black areas, overexposed or burned areas, and backlight. To check that a good image has been acquired, the best indicator is a histogram with the shape of a Gaussian bell, that is, with most of the information in the central part and less at the extremes. Another important point is that the histogram should reach both ends, so as to ensure that there are blacks and whites in the photograph.
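A minimal sketch of the histogram of Eq. (20), computed and normalized with NumPy, is shown below; the synthetic test image is an assumption standing in for a real capture.

```python
# Normalized gray-level histogram (Eq. (20)) of a synthetic 8-bit image.
import numpy as np

rng = np.random.default_rng(0)
img = np.clip(rng.normal(128, 40, size=(480, 640)), 0, 255).astype(np.uint8)

counts = np.bincount(img.ravel(), minlength=256)   # n_i: pixels per gray level i
hist = counts / counts.sum()                       # normalized histogram (0..1)

# A well-exposed capture concentrates values in the central zone of the
# histogram while still reaching both ends (true blacks and whites).
print("fraction of pixels in the central half of the range:", hist[64:192].sum())
```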

5.10.9 Binarization

A representation of the image with only two values is obtained, while the dimensions of the image are preserved. The decision threshold must be chosen correctly and used in a step filter with an algorithm similar to Eq. (21):

\[
g(x, y) =
\begin{cases}
0 & \text{if } f(x, y) > k \\
1 & \text{if } f(x, y) \leq k
\end{cases}
\tag{21}
\]

where 0/1 represents the black/white values and f is the gray-tone value at the coordinates (x, y) [32]. Figure 10 shows a grayscale image versus a binary one.

Figure 10.

Grayscale (left) and binary (right).

To obtain an image with sufficient quality, the binarization should assign white pixels to the objects of interest and black pixels to the background. If the object of interest is darker than the background, an inversion is applied after the binarization. The most important point in the process is the calculation of the threshold. There are different methods for this: histogram-based, clustering, entropy, similarity, spatial, global, and local.

Setting the threshold value is the difficult step common to all methods. The techniques are supported by statistics applied to the histogram; among them are the carry error method, the Otsu method, and Saulova's pixel deviation method.
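As an example of a histogram-based technique, the following sketch applies the Otsu method with OpenCV to a synthetic image; the test image and the inversion rule in the final comment are assumptions for illustration, not part of the chapter's procedure.

```python
# Otsu thresholding sketch with OpenCV on a synthetic test image
# (dark background, brighter circular object plus noise).
import cv2
import numpy as np

gray = np.full((200, 200), 60, np.uint8)
cv2.circle(gray, (100, 100), 50, 180, -1)                   # object of interest
noise = np.random.randint(0, 20, gray.shape, dtype=np.uint8)
gray = cv2.add(gray, noise)

# Otsu chooses the threshold k automatically from the histogram statistics.
k, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("Otsu threshold:", k)

# If the object of interest were darker than the background, an inversion
# would be applied afterward: binary = cv2.bitwise_not(binary)
```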

5.10.10 Spatial filtering

It is based on a convolution operation between two two-dimensional functions: the image, f, and a kernel, h. This operation transforms the value of a pixel p at position (x, y), always taking into account the values of the adjacent pixels. The operation requires a weighted sum of the values of the points neighboring p. A mask (h), behaving like a filter, provides the weighting values; the size of the mask varies according to the number of pixels used.
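A minimal sketch of such a spatial filter, assuming SciPy and a synthetic noisy image, convolves the image f with a 3x3 averaging mask h:

```python
# Spatial filtering as a convolution between the image f and a mask h.
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(1)
f = rng.normal(128, 30, size=(100, 100))   # synthetic noisy image f(x, y)

# The mask h provides the weighting of each pixel's neighborhood; here a
# 3x3 averaging kernel, a simple noise-reduction (smoothing) filter.
h = np.ones((3, 3)) / 9.0

g = convolve(f, h, mode="nearest")         # filtered output image g(x, y)
print(float(f.std()), "->", float(g.std()))  # smoothing reduces the variation
```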

5.10.11 Geometrical transformations

These operations modify the spatial coordinates of the image. There are several operations that are easy to understand and apply, such as interpolation, rotation, rectification, and distortion correction.

5.10.12 Lens distortion

Due to the geometry of the lens, a square object is reproduced with variations in its parallel lines. There are three types of distortion: barrel, pincushion, and mustache (a combination of the first two) (Figure 11) [25, 33]. This error is negligible in a photograph of a natural scene, but to take engineering measurements and obtain a virtual object, it is necessary to compensate for the distortion. There is a mathematical model for the treatment of distortion.

Figure 11.

Types of lens distortion.

The barrel distortion is centered and symmetrical. Therefore, to correct the distortion of a certain point, a radial transformation is performed, expressed mathematically in Eq. (22):

\[
\begin{pmatrix} \hat{x} - x_d \\ \hat{y} - y_d \end{pmatrix}
= L\!\left(\sqrt{(x - x_d)^2 + (y - y_d)^2}\,\right)
\begin{pmatrix} x - x_d \\ y - y_d \end{pmatrix}
\tag{22}
\]

where (x̂, ŷ) represents the result of the distortion correction at point (x, y), (xd, yd) is the center of the distortion, usually a point near the center of the image, and the radial function L(r) determines the magnitude of the distortion correction as a function of the distance from the point to the center of distortion [34].

The radial function L(r) is modeled by applying one of two strategies. The first gives rise to the so-called polynomial models (Eq. (23)):

\[
L(r) = 1 + k_1 r^2 + k_2 r^4 + \cdots + k_n r^{2n} \tag{23}
\]

The second is based on the inverse (division) approach (Eq. (24)):

\[
L(r) = \frac{1}{1 + k_1 r^2 + k_2 r^4 + \cdots + k_n r^{2n}} \tag{24}
\]

The values k1–kn are called distortion model parameters. These values, together with the distortion center coordinates (xd, yd), completely represent the distortion model. The distortion of the lens is represented by the ki coefficients. They are obtained from a known calibration image.
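A hedged sketch of the polynomial radial model of Eqs. (22)-(23) is given below; the coefficients and the distortion center are illustrative assumptions, since in practice they are obtained from a calibration image.

```python
# Radial distortion correction sketch (polynomial model, Eqs. (22)-(23)).
def undistort_point(x, y, xd, yd, ks):
    """Apply L(r) = 1 + k1*r^2 + k2*r^4 + ... around the center (xd, yd)."""
    r2 = (x - xd) ** 2 + (y - yd) ** 2
    L = 1.0 + sum(k * r2 ** (i + 1) for i, k in enumerate(ks))
    return xd + L * (x - xd), yd + L * (y - yd)

# Example with assumed coefficients describing a mild barrel distortion.
print(undistort_point(1200.0, 900.0, xd=1000.0, yd=750.0, ks=[1e-7, 1e-14]))
```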

5.10.13 Rectification (perspective distortion)

Image rectification is necessary because it is difficult to keep the optical axis vertical at every point of the shot, or because the axis is tilted with respect to the vertical. Vertical images are free of the displacements caused by the inclination of the shot but still present displacements produced by the depth of the workpiece. These displacements can be suppressed by applying a differential rectification or orthorectification process. In the original digital image or a scan, the technique is applied pixel by pixel; in a scanned image, the initial data are the coordinates of the control points. The procedure is divided into two steps:

  1. Determination of the mathematical transformation relating the real coordinates and those belonging to the image

  2. Generation of the new image, aligned with the reference system

After this process, it is necessary to ensure that all the pixels of the resulting orthophotograph have a gray level, which is done by digital resampling [17, 34]. Figure 12 shows an unrectified (left) and rectified (right) photograph.

Figure 12.

Visual example of photo rectification.

Several resamplings are made on the initial image. Three resampling methods are regularly used: bilinear interpolation, nearest neighbor, and bicubic convolution. The transformations applied to the images are [19] the Helmert transformation, the affine transformation, the polynomial transformation, and the two-dimensional projective transformation.
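As an illustration of the two-dimensional projective transformation listed above, the following sketch rectifies a synthetic image from four control points with OpenCV; the control point coordinates and the drawn quadrilateral are placeholder assumptions.

```python
# Projective rectification sketch from four control points with OpenCV.
import cv2
import numpy as np

# Synthetic "tilted" photograph: a dark quadrilateral on a white background
# stands in for an object photographed with an inclined optical axis.
img = np.full((768, 1024, 3), 255, np.uint8)
src = np.float32([[180, 120], [830, 90], [900, 640], [120, 600]])
cv2.fillPoly(img, [src.astype(np.int32)], (80, 80, 80))

# Object-space positions of the same four control points after rectification.
dst = np.float32([[0, 0], [800, 0], [800, 600], [0, 600]])

H = cv2.getPerspectiveTransform(src, dst)            # 2D projective transformation
rectified = cv2.warpPerspective(img, H, (800, 600))  # resampled (bilinear by default)
```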

6. Obtaining a 3D model from 2D photographs

To obtain a 3D model of an object from 2D images, photographs must be taken from different views, with adequate quality. From these photographs, the reconstruction process begins.

3D reconstruction is the process by which real objects are reproduced on a computer. Nowadays there are several reconstruction techniques and 3D meshing methods whose purpose is to connect the set of representative points of the object into surface elements. The efficiency with which the techniques are applied determines the final quality of the reconstruction.

The stereoscopic scene analysis system presented by Koch uses image matching, object segmentation, interpolation, and triangulation techniques to obtain the 3D point density map. The system is divided into three modules: sensor processing, image pair processing, and model-based sequence processing.

Pollefeys presents a 3D reconstruction process based on well-defined stages. The input is an image sequence, and the output of the process is a 3D surface model. The stages are the following: relating the images, structure and motion recovery, dense matching, and model construction.

Another proposal is expressed by Remondino. He presents a 3D reconstruction system following these steps: image sequence acquisition and analysis, image calibration and orientation, matching process and the generation of points, and 3D modeling [18].

6.1 From a photograph

It is used for pieces of revolution. With only one photograph, it is possible to obtain the axis and dimensions. In 1978 Barrow and Tenenbaum demonstrated that the orientation of the surface along the silhouette can be calculated directly from the image data, resulting in the first study of silhouettes in individual views. Koenderink showed that the sign of the silhouette's curvature is equivalent to that of the Gaussian curvature. Thus, concavities, convexities, and inflections of the silhouette indicate hyperbolic, convex, and parabolic surface points, respectively. Finally, Cipolla and Blake showed that the curvature of the silhouette has the same sign as the normal curvature along the contour generator in the perspective projection. A similar result was derived for the orthographic projection by Brady [35].

First, the silhouette ρ of a surface of revolution (SOR) is extracted from the image with a Canny edge detector, and the harmonic homology W that maps each side of ρ to its symmetrical complement is estimated by minimizing the geometric distances between the original silhouette ρ and its transformed version ρ′ = Wρ. The image is rectified, and the axis of the figure is rotated into orthogonal projection (Figure 13).

Figure 13.

Harmonic homology of the figure and its transformation to orthogonal projection [35].

The apparent contour is first manually segmented from the rectified silhouette. This can usually be done easily by removing the upper and lower elliptical parts of the silhouette. Points are then sampled from the apparent contour, and the tangent vector (i.e., ẋ(s) and ẏ(s)) at each sample point is calculated by fitting a polynomial to the neighboring points.

For Ψ ≠ 0, Rx(Ψ) first transforms the viewing vector p(s) and the associated surface normal n(s) at each sample point; the transformed viewing vector is normalized so that its third coefficient becomes one, and Eqs. (25)-(26) can be used to recover the depth of the sample point:

\[
\mathbf{n}(s) = \frac{1}{\alpha_n(s)}
\begin{pmatrix} -\dot{y}(s) \\ \dot{x}(s) \\ x(s)\dot{y}(s) - \dot{x}(s)y(s) \end{pmatrix}
\tag{25}
\]

where

\[
\alpha_n(s) = \left\lVert \mathbf{p}(s) \times \frac{d\mathbf{p}(s)}{ds} \right\rVert \tag{26}
\]

6.2 From two photographs

This section is based on an investigation using a practical heuristic method for the reconstruction of structured scenes from two uncalibrated images. The method starts from an initial estimation of the main homographies from the initial 2D point correspondences, which may contain some outliers; the homographies are recursively refined by incorporating the supporting point and line correspondences on the main spatial surfaces. The epipolar geometry is then recovered directly from the refined homographies, the cameras are calibrated from three orthogonal vanishing points, and the infinite homography is recovered.

First, a simple homography-guided method is proposed to fit and match the line segments between two views, using a Canny edge detector and regression algorithms. Second, the cameras are automatically calibrated with the four intrinsic parameters that vary between the two views. A RANSAC mechanism is adopted to detect the main flat surfaces of the object from the 2D images. The advantages of the method are that it can build more realistic models with minimal human interaction and that it allows more visible surfaces to be reconstructed on the detected planes than traditional methods, which can only reconstruct overlapping parts (Figure 14).

Figure 14.

The matching results of the line segments in four main planes [36].
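A reduced sketch of the RANSAC-based plane detection idea (not the authors' full method) is shown below, assuming OpenCV and two placeholder image files: matched keypoints are used to estimate a dominant homography, and RANSAC separates the inliers that support that plane.

```python
# RANSAC homography sketch with OpenCV: estimates the dominant plane between
# two views from matched keypoints. "view1.jpg"/"view2.jpg" are placeholders.
import cv2
import numpy as np

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
good = [m for m, n in cv2.BFMatcher().knnMatch(des1, des2, k=2)
        if m.distance < 0.7 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# RANSAC separates the inliers lying on the dominant plane from outliers.
H, inliers = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
print("plane supported by", int(inliers.sum()), "of", len(good), "matches")
```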

6.3 From more than two photographs

6.3.1 Reconstruction of geological objects

This is one of the fields where photogrammetry is most applied nowadays. In this case, the reconstruction is carried out applying Delaunay triangulation and tetrahedra. Many data models based on tetrahedron meshes have been developed to represent complex objects in 3D GIS.

The tetrahedron grid alone can only represent the geometrical structure of geological objects. The natural characteristics of geological objects are reflected in their different attributes, such as different rock formations, different contents of mineral bodies, etc. The attribute value at an internal point is defined so that it can be linearly interpolated from the attribute values at the four vertices of a tetrahedron. However, the attributes can change abruptly between different formations and different mineral bodies. To cope with sudden changes, an interpolation is needed that can only be applied to the six edges of a tetrahedron. Those interpolated points are only used as temporary data for the following processing [37].
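The linear attribute interpolation inside a tetrahedron can be sketched with barycentric coordinates as follows; the vertex positions and attribute values are illustrative assumptions.

```python
# Linear (barycentric) interpolation of a vertex attribute inside a tetrahedron.
import numpy as np

def interpolate_in_tetrahedron(vertices, attributes, p):
    """Interpolate the four vertex attribute values at an interior point p."""
    v0, v1, v2, v3 = np.asarray(vertices, dtype=float)
    T = np.column_stack([v1 - v0, v2 - v0, v3 - v0])
    b1, b2, b3 = np.linalg.solve(T, np.asarray(p, dtype=float) - v0)
    b0 = 1.0 - b1 - b2 - b3                      # barycentric coordinates
    return np.dot([b0, b1, b2, b3], attributes)

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
grades = [2.1, 3.4, 1.8, 2.9]                    # e.g., mineral content at vertices
print(interpolate_in_tetrahedron(verts, grades, p=(0.25, 0.25, 0.25)))
```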

6.3.2 Reconstruction of objects with high surface and texture resolution

This section presents a robust and precise system for the 3D reconstruction of real objects with shape and texture in high resolution. The reconstruction method is passive, and the only information required is 2D images obtained with a calibrated camera from different viewing angles as the object rotates on a turntable. The triangle surface model is obtained through a scheme that combines octree construction and the marching cubes algorithm. A texture mapping strategy based on surface particles is developed to adequately address photographic problems such as inhomogeneous lighting, highlights, and occlusion [38]. To conclude, the results of the reconstruction are included to demonstrate the quality obtained (Figure 15).

Figure 15.

Flowchart to the reconstruction of objects.

The scheme combining octree construction and iso-level extraction through marching cubes is presented for the shape-from-silhouette problem. The octree representation allows very high resolutions to be reached, while the fast marching cubes method is adapted, through a properly defined iso-level function, to work with binary silhouettes, resulting in a mesh of triangles with vertices precisely located on the visual hull of the object.

Calibration is performed on the camera and rotary table. One of the problems found is the discontinuity of the texture due to the nonhomogeneous lighting in different parts of the element due to shadows.

Next, the octree representation is built. An octree is a hierarchical tree structure that can be used to represent volumetric data in terms of cubes of different sizes. Each octree node corresponds to a cube in the octree space that is entirely within the object. This opens up different possibilities: voxels, particles, triangles, and more complicated parametric primitives, such as splines or NURBS. Voxels are used to represent volumes but can also be used to represent surfaces. A related primitive is the particle, defined by its color, orientation, and position. In the marching cubes triangulation of the octree, the white and black points denote the cube corners that are inside and outside the object, respectively, while the gray points are the triangle vertices on the surface (Figure 16).

Figure 16.

From cube to triangulation, adapted from [38].

The application of the iso-level function, calculated by means of a dichotomous subdivision procedure, allows a faithful model of the object to be constructed. The triangle vertices that make up the object's mesh are placed precisely on the surface of the digitized model even at low resolutions. This creates an efficient compromise between resolution and geometric accuracy. The octree construction followed by the marching cubes algorithm generates a triangular mesh consisting of an excessive number of triangles, which must be simplified.
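A reduced sketch of the iso-level extraction step is given below, assuming scikit-image and a synthetic spherical occupancy volume standing in for the octree-based representation described above.

```python
# Triangle mesh extraction from a binary occupancy volume with marching cubes.
import numpy as np
from skimage import measure

# Binary volume: 1 inside the object, 0 outside (here, a sphere of radius 20).
x, y, z = np.mgrid[-32:32, -32:32, -32:32]
volume = (x**2 + y**2 + z**2 < 20**2).astype(float)

# Marching cubes places triangle vertices on the iso-surface at level 0.5.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(len(verts), "vertices,", len(faces), "triangles")
```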

6.3.3 Object reconstruction

The reconstruction of objects is mainly applied in the archeological field. The process to obtain the 3D model follows the steps in Figure 17.

Figure 17.

Steps to obtain the 3D model, adapted from [39].

First of all, corresponding or common characteristics must be found among the images of the object. The process occurs in two phases:

  1. The reconstruction algorithm generates a reconstruction in which dimensions are not correctly defined. A self-calibration algorithm performs a reconstruction equivalent to the original one, formed by a set of 3D points.

  2. All the pixels of an image are made to coincide with those of the neighboring images so that the system can reconstruct these points.

The system selects two images to set up an initial projective reconstruction frame and then reconstructs the matching feature points through triangulation.

Then a dense surface estimation is performed. To obtain a more detailed model of the observed surface, a dense matching technique is used. The 3D surface is approximated with a triangular mesh to reduce geometric complexity and adapt the model to the requirements of the computer graphics display system. A corresponding 3D mesh is then constructed by placing the triangle vertices in 3D space according to the values found in the corresponding depth map. To reconstruct more complex shapes, the system must combine multiple depth maps. Finally, the model is provided with texture.

6.3.4 3D reconstruction of the human body

It is used for medical purposes in many cases, as a base for implants, splints, etc. The process consists of the following parts: acquisition and analysis of the image sequence; calibration and orientation of the images; matching process on the surface of the human body; and generation and modeling of the point cloud. Once the necessary images have been obtained from different points of view, the calibration and orientation of the images are carried out.

The choice of the camera model is often related to the final application and the required accuracy. The correct calibration of the sensor used is one of the main objectives. Another important point is image matching [40].

To evaluate the quality of the matching results, different indicators are used: the a posteriori standard deviation of the least squares adjustment, the standard deviation of the shift in the x-y directions, and the shift from the initial position in the x-y directions. The performance of the process, in the case of uncalibrated images, can only be improved with a local contrast enhancement of the images.

Finally, 3D reconstruction and modeling of the human body shape is performed. The 3D coordinates of each matching triplet are calculated through a forward intersection. Using collinearity and the results of the orientation process, the 3D paired points are determined with a solution of least squares. For each triplet of images, a point cloud is calculated, and then all the points are joined together to create a unique point cloud. A spatial filter is applied to reduce noise and obtain a more uniform point cloud density. Figure 18 shows the results before and after filtering (approximately 20,000 points, left); a view of the recovered point cloud with pixel intensity (center); and a 3D human model (right).

Figure 18.

3D reconstruction of a human body, adapted from [40].

The system is composed of two main modules. The first one is in charge of image processing, to determine the depth map in a pair of views, where each pair of successive views follows a sequence of phases: detection of points of interest, correspondence of points, and reconstruction of these. In this last phase, the parameters that describe the movement (rotation matrix R and translation vector T) between the two views are determined. This sequence of steps is repeated for all successive pairs of views of the set.

The second module is responsible for creating the 3D model, for which it must determine the total 3D points map generated. In each iteration of the previous module, the 3D mesh is generated by applying Delaunay’s triangulation method. The results obtained from the process are modeled in a virtual environment to obtain a more realistic visualization of the object [16].
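As a simplified sketch of the meshing step, the following fragment triangulates a point cloud with SciPy's Delaunay implementation; the random points and the 2D projection used for triangulation are assumptions that stand in for the recovered depth data.

```python
# Delaunay triangulation of a (placeholder) reconstructed point cloud.
import numpy as np
from scipy.spatial import Delaunay

points3d = np.random.rand(500, 3)          # stand-in for recovered 3D points

tri = Delaunay(points3d[:, :2])            # triangulate the (x, y) projection
faces = tri.simplices                      # vertex indices of each triangle
print(faces.shape[0], "triangles connect", points3d.shape[0], "points")
```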

The number of detected feature points is related to the number of reconstructed 3D points and the quality of that reconstruction (a higher number of details). Therefore, the higher the number of points on the map, the more detailed the reconstructed areas. In some cases this does not apply because of the geometry of the object; for example, in a cube, more points can result in a distorted object.

7. Conclusion

The technological development of 3D photogrammetry makes it a real option among the various applications of 3D scanners. Among the benefits it brings are faster raw data acquisition, simplicity, portability, and more economical equipment. Different studies have verified the accuracy and repeatability of 3D photogrammetry. These investigations have compared the digital models of objects obtained from 2D digital photographs with those generated by a 3D surface scanner. In general, the meshes obtained with photogrammetric techniques and with scanners show a low degree of deviation from each other, and the surface fit of photogrammetric models is usually slightly better. For these reasons, photogrammetry is a technology with a vast number of engineering applications.

In this chapter the basic fundamentals, the characteristics of the acquisition, and the aspects to be taken into account to obtain a good virtual model from photogrammetry have been explained.

Acknowledgments

The authors would like to thank the call for Innovation and Teaching Improvement Projects of the University of Cadiz and AIRBUS-UCA Innovation Unit (UIC) for the Development of Advanced Manufacturing Technologies in the Aeronautical Industry.

Conflict of interest

The authors declare no conflict of interest.

References

  1. Rolin R, Antaluca E, Batoz JL, Lamarque F, Lejeune M. From point cloud data to structural analysis through a geometrical hBIM-oriented model. Journal of Cultural Heritage. 2019;12:1-26. DOI: 10.1145/3242901
  2. Valerga AP, Batista M, Bienvenido R, Fernández-Vidal SR, Wendt C, Marcos M. Reverse engineering based methodology for modelling cutting tools. Procedia Engineering. 2015;132:1144-1151. DOI: 10.1016/j.proeng.2015.12.607
  3. Rabbani T, Dijkman S, van den Heuvel F, Vosselman G. An integrated approach for modelling and global registration of point clouds. ISPRS Journal of Photogrammetry and Remote Sensing. 2007;61:355-370. DOI: 10.1016/j.isprsjprs.2006.09.006
  4. Schenk T. Introduction to Photogrammetry. Department of Civil and Environmental Engineering and Geodetic Science. Athens, USA: The Ohio State University; 2005. pp. 79-95. Available from: http://gscphoto.ceegs.ohio-state.edu/courses/GeodSci410/docs/GS410_02.pdf
  5. Derenyi EE. Photogrammetry: The Concepts. Canada: Department of Geodesy and Geomatics Engineering, University of New Brunswick; 1996. DOI: 10.1017/9781108665537.002
  6. Ackermann F. Digital image correlation: Performance and potential application in photogrammetry. The Photogrammetric Record. 1984;11:429-439. DOI: 10.1111/j.1477-9730.1984.tb00505.x
  7. Sansoni G, Trebeschi M, Docchio F. State-of-the-art and applications of 3D imaging sensors in industry, cultural heritage, medicine, and criminal investigation. Sensors. 2009;9:568-601. DOI: 10.3390/s90100568
  8. Katz D, Friess M. Technical note: 3D from standard digital photography of human crania - A preliminary assessment. American Journal of Physical Anthropology. 2014;154:152-158. DOI: 10.1002/ajpa.22468
  9. Martorelli M, Lepore A, Lanzotti A. Quality analysis of 3D reconstruction in underwater photogrammetry by bootstrapping design of experiments. International Journal of Mechanical Sciences. 2016;10:39-45
  10. Guerra MG, Lavecchia F, Maggipinto G, Galantucci LM, Longo GA. Measuring techniques suitable for verification and repairing of industrial components: A comparison among optical systems. CIRP Journal of Manufacturing Science and Technology. 2019;27:114-123. DOI: 10.1016/j.cirpj.2019.09.003
  11. Givi M, Cournoyer L, Reain G, Eves BJ. Performance evaluation of a portable 3D imaging system. Precision Engineering. 2019;59:156-165. DOI: 10.1016/j.precisioneng.2019.06.002
  12. Aguilar R, Noel MF, Ramos LF. Integration of reverse engineering and non-linear numerical analysis for the seismic assessment of historical adobe buildings. Automation in Construction. 2019;98:1-15. DOI: 10.1016/j.autcon.2018.11.010
  13. Murphy M, McGovern E, Pavia S. Historic building information modelling - Adding intelligence to laser and image based surveys of European classical architecture. ISPRS Journal of Photogrammetry and Remote Sensing. 2013;76:89-102. DOI: 10.1016/j.isprsjprs.2012.11.006
  14. Dai F, Lu M. Assessing the accuracy of applying photogrammetry to take geometric measurements on building products. Journal of Construction Engineering and Management. 2010;136:242-250. DOI: 10.1061/(ASCE)CO.1943-7862.0000114
  15. Wrobel BP. The evolution of digital photogrammetry from analytical photogrammetry. The Photogrammetric Record. 1991;13:765-776. DOI: 10.1111/j.1477-9730.1991.tb00738.x
  16. Styliadis AD, Sechidis LA. Photography-based façade recovery & 3-d modeling: A CAD application in cultural heritage. Journal of Cultural Heritage. 2011;12:243-252. DOI: 10.1016/j.culher.2010.12.008
  17. Murtiyoso A, Grussenmeyer P, Börlin N. Reprocessing close range terrestrial and UAV photogrammetric projects with the DBAT toolbox for independent verification and quality control. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives. 2017;42:171-177. DOI: 10.5194/isprs-archives-XLII-2-W8-171-2017
  18. Remondino F, Fraser C. Digital camera calibration methods: Considerations and comparisons. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2006;36:266-272
  19. Bister D, Mordarai F, Aveling RM. Comparison of 10 digital SLR cameras for orthodontic photography. Journal of Orthodontics. 2006;33:223-230. DOI: 10.1179/146531205225021687
  20. Triggs B, McLauchlan PF, Hartley RI, Fitzgibbon AW. Bundle adjustment - A modern synthesis. In: Triggs B, editor. Vision Algorithms '99. LNCS 1883; 2000. pp. 298-372
  21. Veal CJ, Holmes G, Nunez M, Hoegh-Guldberg O, Osborn J. A comparative study of methods for surface area and three dimensional shape measurement of coral skeletons. Limnology and Oceanography: Methods. 2010;8:241-253. DOI: 10.4319/lom.2010.8.241
  22. Kaufman J, Clement M, Rennie AE. Reverse engineering using close range photogrammetry for additive manufactured reproduction of Egyptian artifacts and other Objets d'art (ESDA1014-20304). Journal of Computing and Information Science in Engineering. 2015;15:1-7. DOI: 10.1115/1.4028960
  23. Bernard A. Reverse engineering for rapid product development: A state of the art. Three-Dimensional Imaging, Optical Metrology, and Inspection. 1999;3835:50-63. DOI: 10.1117/12.370268
  24. Adeline KRM, Chen M, Briottet X, Pang SK, Paparoditis N. Shadow detection in very high spatial resolution aerial images: A comparative study. ISPRS Journal of Photogrammetry and Remote Sensing. 2013;80:21-38. DOI: 10.1016/j.isprsjprs.2013.02.003
  25. Yi XF, Long SC. Precision displacement measurement of single lens reflex digital camera. Applied Mechanics and Materials. 2011;103:82-86. DOI: 10.4028/www.scientific.net/AMM.103.82
  26. Webster C, Westoby M, Rutter N, Jonas T. Three-dimensional thermal characterization of forest canopies using UAV photogrammetry. Remote Sensing of Environment. 2018;209:835-847. DOI: 10.1016/j.rse.2017.09.033
  27. Galantucci LM, Lavecchia F, Percoco G, Raspatelli S. New method to calibrate and validate a high-resolution 3D scanner, based on photogrammetry. Precision Engineering. 2014;38:279-291. DOI: 10.1016/j.precisioneng.2013.10.002
  28. Menna F, Nocerino E, Remondino F. Optical aberrations in underwater photogrammetry with flat and hemispherical dome ports. In: Videometrics, Range Imaging, and Applications XIV. SPIE Optical Metrology. Munich, Germany: SPIE; 2017;1033205:1-14. DOI: 10.1117/12.2270765
  29. Nevalainen O, Honkavaara E, Tuominen S, Viljanen N, Hakala T, Yu X, et al. Individual tree detection and classification with UAV-based photogrammetric point clouds and hyperspectral imaging. Remote Sensing. 2017;9:1-34. DOI: 10.3390/rs9030185
  30. Yoo Y, Lee S, Choe W, Kim C. CMOS image sensor noise reduction method for image signal processor in digital cameras and camera phones. In: Proceedings of SPIE-IS&T Electronic Imaging Digital Photography III. 2007. pp. 1-10. DOI: 10.1117/12.702758
  31. Fliegel K, Havlin J. Imaging photometer with a non-professional digital camera. In: SPIE 7443, Applications of Digital Image Processing XXXII. 2009. pp. 1-8. DOI: 10.1117/12.825977
  32. Gomez-Gil P. Shape-based hand recognition approach using the morphological pattern spectrum. Journal of Electronic Imaging. 2009;18:13012. DOI: 10.1117/1.3099712
  33. Ng R, Hanrahan PM. Digital correction of lens aberrations in light field photography. In: International Optical Design Conference. 2006. p. 6342. DOI: 10.1117/12.692290
  34. Jianping Z, John G. Image pipeline tuning for digital cameras. In: IEEE International Symposium on Consumer Electronics. Irving, TX; 2007. pp. 167-170
  35. Wong KYK, Mendonça PRS, Cipolla R. Reconstruction of surfaces of revolution from single uncalibrated views. Image and Vision Computing. 2004;22:829-836. DOI: 10.1016/j.imavis.2004.02.003
  36. Wang G, Tsui HT, Hu Z. Reconstruction of structured scenes from two uncalibrated images. Pattern Recognition Letters. 2005;26:207-220. DOI: 10.1016/j.patrec.2004.08.024
  37. Xue Y, Sun M, Ma A. On the reconstruction of three-dimensional complex geological objects using Delaunay triangulation. Future Generation Computer Systems. 2004;20:1227-1234. DOI: 10.1016/j.future.2003.11.012
  38. Yemez Y, Schmitt F. 3D reconstruction of real objects with high resolution shape and texture. Image and Vision Computing. 2004;22:1137-1153. DOI: 10.1016/j.imavis.2004.06.001
  39. Pollefeys M, Van Gool L, Vergauwen M, Cornelis K, Verbiest F, Tops J. 3D recording for archaeological fieldwork. IEEE Computer Graphics and Applications. 2003;May/June:20-27. DOI: 10.1109/MCG.2003.1198259
  40. Remondino F. 3-D reconstruction of static human body shape from image sequence. Computer Vision and Image Understanding. 2004;93:65-85. DOI: 10.1016/j.cviu.2003.08.006
