Open access peer-reviewed chapter

Utilization of Unmanned Aerial Vehicle for Accurate 3D Imaging

Written By

Yoichi Kunii

Submitted: 19 July 2018 Reviewed: 20 November 2018 Published: 20 December 2018

DOI: 10.5772/intechopen.82626


Abstract

In order to acquire geographical data by aerial photogrammetry, many images must be taken from an aerial vehicle. The images are then processed with the structure-from-motion (SfM) technique. Multiple neighboring images with a high rate of overlap are required for high-accuracy measurement. In the event of a natural disaster, however, UAV operation may involve risk and should be minimized. An easy and convenient method of operating UAVs is therefore needed. Some applications combining UAVs with other devices have been reported; however, preparing a number of such devices in an emergency is difficult. We investigated the most suitable conditions for image acquisition by UAV. Specifically, several altitudes and overlap rates were tested, and the accuracy of the resulting 3D measurements was confirmed. Furthermore, we developed a new camera calibration and measurement method that requires only a few images taken in a simple UAV flight: the UAV is flown vertically, and images are taken at different altitudes. As a result, the plane and height accuracies were ±0.093 and ±0.166 m, respectively. These values were of higher accuracy than the results of the usual SfM software.

Keywords

  • UAV
  • 3D measurement
  • camera calibration
  • overlapping
  • accuracy

1. Introduction

The demand for unmanned aerial vehicles (UAVs) is increasing as they find applications in various fields. For example, more accurate geographical data can be acquired by UAVs than by conventional aerial photogrammetry [1]. UAVs can take high-resolution images because they can fly at low altitudes [2]. In addition, UAVs can be used for observation of natural disasters [3, 4] or for surveying construction sites [5, 6]. Such applications need rapid and low-cost surveying, and UAVs are well suited for that purpose [7]. When this method is applied to public surveys, the manual published by the Geographical Survey Institute of the Ministry of Land, Infrastructure and Transport in Japan prescribes an overlap ratio of 80% or more between continuous images. Therefore, even for a narrow target area, several dozen images must be taken. Photogrammetry software based on structure from motion (SfM), which is now mainstream, supports such large numbers of images. However, because this imaging method requires skill and labor to operate the UAV, cost is a concern, such as the need for a dedicated operator when the method is applied at construction sites.

Therefore, for UAV surveys of landscaped spaces, we verified the measurement accuracy with respect to changes in ground altitude and the number of images taken, with the aim of minimizing the imaging labor while ensuring adequate accuracy. In addition, we applied the obtained results to 3D modeling of an urban plaza and hilly terrain. Furthermore, we developed a new camera calibration and measurement method that requires only a few images taken in a simple UAV flight: the UAV is flown vertically, and images are taken at different altitudes. We compared the measurement accuracy of the proposed method against the SfM method and evaluated its performance.


2. Background of UAV photogrammetry

UAVs were first developed for military purposes in the United States in the 1950s and evolved into small unmanned reconnaissance aircraft around 1970 thanks to progress in electronic guidance technology. In Japan, UAV use started spreading in the late 1990s with pesticide spraying; UAVs are now applied to information gathering and surveying at various sites, and their use in media and entertainment is expanding. Among these applications, aerial photogrammetry is of particular importance. Conventional aerial photogrammetry is carried out by manned aircraft imaging the ground from altitudes of several hundred to several thousand meters, mainly to create topographic maps. In contrast, since the ground altitude of a UAV is as low as several tens of meters to 100 m, more detailed topographic maps can be created than with manned aircraft. Also, because UAVs are inexpensive, maneuverable, and easy to operate compared with manned aircraft, they demonstrate superior ability in capturing terrain during emergencies such as disasters. Furthermore, UAVs are expected to improve the efficiency of surveying in earthworks and concrete works. For these applications, evaluation of the measurement accuracy of UAV photogrammetry is required.


3. Acquisition of images for evaluation

3.1 UAV device and test site

The images for checking the accuracy were taken at a UAV test site in Kanagawa, Japan, managed by the Japan Society for Photogrammetry and Remote Sensing. Figure 1 shows the entrance to the UAV test site. There are 76 circular ground marks with coordinates in the Japanese national coordinate system in the test area of about 5000 m2, as shown in Figure 2. The center coordinates of the ground marks were obtained by a ground survey of the whole site with a total station. This allowed the given coordinates to be compared with the results of the UAV photogrammetry to check the accuracy of the photogrammetry.

Figure 1.

Entrance to UAV test site.

Figure 2.

76 ground marks in the test site.

Figure 3 shows the UAV "DJI Inspire 1," which was used for taking the images. The "FC350" camera on the Inspire 1 has a 4000 × 2250 pixel sensor and a 4 mm focal length.

Figure 3.

DJI Inspire 1.

3.2 Altitude and overlapping rate

The images for accuracy verification were taken at the test site on 23 October 2016. The altitude of the UAV was set to three levels: 40, 60, and 80 m. Images at each altitude were acquired with a 90% overlap ratio and a 60% sidelap ratio. As a result, the number of images acquired was 135 at 40 m, 57 at 60 m, and 26 at 80 m. Figure 4 shows sample images taken at each altitude.
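As a rough guide to what these altitudes mean on the ground, the following sketch estimates the image footprint and ground sample distance (GSD) from the photo scale H/f. The 6.17 mm sensor width is our assumption (a typical 1/2.3-inch sensor); the chapter states only the 4000 × 2250 pixel count and the 4 mm focal length.

```python
# Rough footprint and ground-sample-distance (GSD) estimate per altitude.
SENSOR_WIDTH_MM = 6.17   # assumed sensor width (1/2.3-inch class)
IMAGE_WIDTH_PX = 4000    # from the chapter (FC350)
FOCAL_MM = 4.0           # from the chapter (FC350)

for altitude_m in (40, 60, 80):
    scale = altitude_m / (FOCAL_MM / 1000.0)          # photo scale H/f
    footprint_m = scale * (SENSOR_WIDTH_MM / 1000.0)  # ground width of one image
    gsd_cm = footprint_m / IMAGE_WIDTH_PX * 100.0     # ground size of one pixel
    print(f"{altitude_m} m: footprint {footprint_m:.0f} m, GSD {gsd_cm:.1f} cm")
```

At 40 m this gives a GSD of roughly 1.5 cm, which illustrates why lower altitudes support finer measurement.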

Figure 4.

Sample images at each altitude. (a) 40 m, (b) 60 m, and (c) 80 m.


4. Verification of measurement accuracy

4.1 Details of the verification

3D coordinates of each ground mark were measured by photogrammetry, and the accuracy was verified using the images of the test site. In addition to verification at each ground altitude, images were thinned to produce overlap rates of 50, 60, 70, 80, and 90%, and each case was verified. Consequently, the largest image set was 135 images (40 m ground altitude, 90% overlap), and the smallest was 6 images (80 m, 50%). Among the 76 ground marks at the test site, 9 points (No. 13, 17, 25, 33, 42, 47, 70, 75, and 76) were set as control points. The other 67 points were set as verification points, for which the accuracy of the obtained 3D coordinates was checked. For the accuracy verification, Agisoft PhotoScan Professional (hereinafter, PhotoScan), a general photogrammetry software package with SfM, was used.

4.2 Results of the verification

In order to verify the accuracy for each condition, root mean square errors (RMSEs) were calculated with the following equation.

$$\sigma_0 = \pm\sqrt{\frac{v_1^2 + v_2^2 + v_3^2 + \cdots + v_n^2}{n-1}} = \pm\sqrt{\frac{\sum v^2}{n-1}} \tag{1}$$

where σ0: RMSE; v: residual error; n: number of data.
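As a minimal sketch, Eq. (1) translates directly into code; the residual values below are hypothetical, not measurements from the chapter.

```python
import math

def rmse(residuals):
    """Root mean square error per Eq. (1): sqrt(sum(v^2) / (n - 1))."""
    n = len(residuals)
    return math.sqrt(sum(v * v for v in residuals) / (n - 1))

# Hypothetical residuals (m) at five verification points.
print(f"{rmse([0.03, -0.05, 0.02, -0.04, 0.06]):.3f} m")
```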

Table 1 shows the results of the accuracy verification. The measurement accuracy in the table was calculated by treating the ground control survey value at each verification point as the true value and the UAV photogrammetry value as the measured value, and computing the RMSE of the differences over all points. The results confirm that an accuracy of about ±0.05 m is obtained at every ground altitude and overlap ratio. According to the accuracy standard for earthmoving specified by the Ministry of Land, Infrastructure and Transport, results within ±0.1 m can be applied to construction surveying and rock surveying; all of the verification results satisfy this value. Measurements within ±0.05 m can additionally be applied to shape measurement. This value is satisfied at every ground altitude when the overlap rate is 90%; at overlap rates of 80% or less, it is satisfied mostly at the 40 m ground altitude. In theory, photogrammetric accuracy improves as the ground altitude decreases and the overlap rate increases, and the results obtained are consistent with this.

Table 1.

Result of accuracy verification.

In addition, these results confirmed that consistent accuracy can be obtained at any ground altitude and overlap rate. In other words, although the flight patterns available to a UAV may be limited in landscaped spaces, which include both natural and urban environments, this verification shows that the UAV can be flown according to the local situation.

The above results suggest the usefulness of UAV surveying in landscaped spaces, so two cases of 3D modeling by UAV are presented below.


5. Examples of application

5.1 Measurement for plaza

First, as an application to an open space in an urban environment, we performed 3D modeling of Yurinoki Plaza at the Tokyo University of Agriculture Setagaya Campus (Setagaya, Tokyo), shown in Figure 5. In Yurinoki Plaza, several trees are planted in a lawn-covered space of about 6000 m2. The surrounding buildings, such as research buildings, were also included in the 3D modeling. A total of 431 images of Yurinoki Plaza were taken by UAV. Of these, 87 images were taken with the camera pointing vertically at the ground, with an 80% overlap rate, from a ground altitude of 20 m; for the other 344, the camera was aimed horizontally. Figure 6 shows a sample of the acquired images.

Figure 5.

Yurinoki Plaza.

Figure 6.

Image from UAV.

The images were processed by PhotoScan, and 3D point cloud data of each feature captured in the images were generated. Next, a dense point cloud was generated from the obtained point cloud. Compared with the initial point cloud, the dense point cloud consists of far more points. Viewed from a distance it therefore appears to carry texture; however, since it is still a set of discrete points, holes in surfaces become conspicuous at close range. Finally, texture mapping was performed on the dense point cloud, and a 3D model was generated as shown in Figure 7. The area of the plaza calculated from the created 3D model was 6358.4 m2, almost the same as the area (6357.7 m2) obtained by a total station ground survey. This plaza was closed in 2017, and a new research building scheduled for completion in 2020 is being built in its place. Since this work acquired 3D data before the plaza was closed, the results are expected to be used as a record of changes in the campus.

Figure 7.

3D model of Yurinoki Plaza. (a) Vertical view, (b) Bird’s-eye view.

5.2 Measurement for mountain area

As an application in a natural space, we performed 3D modeling of the hilly area of about 200,000 m2 between Tenjinzawa and Hatayazawa in Matsuda Town, Kanagawa Prefecture, shown in Figure 8. Matsuda Castle was built in this area in the late Heian era (twelfth century), and the site is currently managed by Matsuda Town as the Matsuda Castle Ruins. The Tomei Expressway passes the southern end of the slope; the slope was excavated during the construction of this expressway, so part of the Matsuda Castle site was lost. The UAV imaging of the Matsuda Castle site was carried out on May 19, 2016, with an 80% overlap ratio from a ground altitude of 70 m. As a result, 949 images were taken.

Figure 8.

Matsuda Castle Ruins.

The images were processed by PhotoScan. The difference in height from the top of Matsuda Castle to the Tomei Expressway obtained from the created 3D model is about 66.3 m, almost equal to the value (65.5 m) published by the Geographical Survey Institute. The created 3D model also includes the parts excavated during the road construction mentioned above. Therefore, the pre-excavation terrain, clarified by the excavation survey, was reproduced by supplementing the current 3D model, as shown in Figure 9. As a result, the area excavated for the road construction becomes visibly apparent, which will be useful for the preservation and management of the remains.

Figure 9.

3D modeling for Matsuda Castle Ruins and excavation area.


6. Development of new photogrammetric method

As described in the sections above, many images must be taken from an aerial vehicle moving horizontally at a fixed altitude [8]. The images are then processed with the SfM technique [9]. Multiple neighboring images with a high rate of overlap are required for high-accuracy measurement [10], which demands labor and cost. In the event of a natural disaster, UAV operation may involve risk [11] and should be minimized. Therefore, an easy and convenient method of operating UAVs is strongly needed. Some applications combining UAVs with other devices have been reported [12]; however, preparing a number of such devices in an emergency is difficult.

In this research, we developed a method that limits the movement of the UAV to the vertical direction, uses only a small number of images taken vertically at different ground altitudes, and performs aerial photogrammetry without ground control points. To evaluate the performance of the developed method, its surveying accuracy was compared with that of general photogrammetry software.

6.1 Acquisition of images for evaluation

The images for checking the accuracy were taken at the same UAV test site, again using the DJI Inspire 1.

Since the method developed in this research eliminates ground control points, the obtained 3D coordinates are local coordinates based on an arbitrary origin and coordinate axes. To calculate 3D coordinates by this method, only the distance between two arbitrary points needs to be given as a known quantity. In this research, the distance (14.831 m) between ground marks No. 27 and 35, shown in Figure 10, was treated as the known quantity.

Figure 10.

Given distance between 2 points.

6.2 Theory of the proposed method

In this research, photogrammetry is carried out using a number of vertical images taken from the UAV and common corresponding points acquired in each image. In the general photogrammetric procedure, orientation processing (camera calibration) is first performed to obtain the exterior orientation parameters, such as the camera positions and attitudes at the time of shooting, and the interior orientation parameters, such as the focal length and lens distortion coefficients; 3D surveying of the measurement points is then carried out. In contrast, the method developed in this research obtains the optimal solution for each parameter while advancing the camera calibration and the 3D survey simultaneously. The procedure is described step by step below.

6.2.1 Estimation of relative distance

We estimate the relative positional relationships of the principal points of the vertical images taken from the UAV. Figure 11 schematically shows the camera position for each image, numbered 1, 2, … in ascending order of ground altitude. First, the approximate ground altitude of each image is calculated. Let L be the known distance between the two arbitrary points set as described above, let l1, l2, … be the lengths of L as imaged on the sensor, and let f be the focal length of the camera. The approximate imaging altitudes H1, H2, … of the images are then obtained by the following equation.

$$H_i = \frac{L}{l_i}\, f \qquad (i = 1, 2, 3, \ldots) \tag{2}$$

where Hi: altitude of picture i (approximate, m); L: given distance (m); li: given distance on the sensor (m); f: focal length (approximate, m).

Figure 11.

Positional relation of vertical images.

Therefore, the relative distances Bz1, Bz2, … between the lowest principal point and the other principal points can be calculated by the following equation.

$$B_{zi} = H_i - H_1 \qquad (i = 1, 2, 3, \ldots) \tag{3}$$

where Bzi: distance between principal points (approximate, m); Hi: altitude of picture i (approximate, m).
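A small sketch of Eqs. (2) and (3), assuming hypothetical on-sensor lengths l_i that shrink as the UAV climbs; only L (14.831 m) and the 4 mm focal length come from the chapter.

```python
# Approximate altitudes (Eq. 2) and vertical baselines (Eq. 3).
L_M = 14.831        # known distance between marks No. 27 and 35 (m)
FOCAL_M = 0.004     # approximate focal length (m)
# Hypothetical lengths of L on the sensor (m), one per image:
l_sensor_m = [0.000847, 0.000791, 0.000742, 0.000698, 0.000659]

H = [L_M / l * FOCAL_M for l in l_sensor_m]   # Eq. (2): H_i = (L / l_i) * f
Bz = [h - H[0] for h in H]                    # Eq. (3): B_zi = H_i - H_1

for i, (h, b) in enumerate(zip(H, Bz), start=1):
    print(f"image {i}: H = {h:.1f} m, Bz = {b:.1f} m")
```

With these hypothetical values the altitudes come out near 70, 75, 80, 85, and 90 m, matching the flight plan used later in Section 6.3.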

6.2.2 Relative orientation

Relative orientation obtains the relative camera positions and attitudes of multiple images. Generally, relative orientation is performed between two images at a time; in this study, however, taking image No. 1 in Figure 11 as the reference, relative orientation of all the other images is performed simultaneously. In other words, image 1 is assumed to be taken with no inclination at the origin of the relative coordinate system, and the relative positions and rotation angles of images 2 onward are obtained at the same time. Furthermore, in the relative orientation of this research, the interior orientation parameters of the camera are also treated as unknown quantities common to all images and are determined simultaneously. Figure 12 shows the coplanarity condition for images No. 1 and 5 only: the principal points of these two images and a common point P define one plane (the epipolar plane). The details of the method are described below based on this figure.

Figure 12.

Coplanarity condition of two vertical images.

Let the principal points of the two images be O1(0, 0, 0) and O5(Bx, By, Bz), and let the image points of P be p1(x1, y1) and p5(x5, y5). The relationship between the two images is then expressed by the following coplanarity equation.

$$\begin{vmatrix} B_x & B_y & B_z \\ X_1 & Y_1 & Z_1 \\ X_5 & Y_5 & Z_5 \end{vmatrix} = 0 \tag{4}$$

where

$$\begin{pmatrix} X_1 \\ Y_1 \\ Z_1 \end{pmatrix} = \begin{pmatrix} x_1 \\ y_1 \\ -f \end{pmatrix}, \qquad \begin{pmatrix} X_5 \\ Y_5 \\ Z_5 \end{pmatrix} = R \begin{pmatrix} x_5 \\ y_5 \\ -f \end{pmatrix} + \begin{pmatrix} B_x \\ B_y \\ B_z \end{pmatrix}$$

$$R = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\omega & -\sin\omega \\ 0 & \sin\omega & \cos\omega \end{pmatrix} \begin{pmatrix} \cos\phi & 0 & \sin\phi \\ 0 & 1 & 0 \\ -\sin\phi & 0 & \cos\phi \end{pmatrix} \begin{pmatrix} \cos\kappa & -\sin\kappa & 0 \\ \sin\kappa & \cos\kappa & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

ω, φ, κ: rotation angles of image No. 5; f: focal length.
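To make the condition concrete, the sketch below evaluates the determinant of Eq. (4) with numpy. In the actual orientation this residual is driven toward zero by least squares over the unknowns in Table 2; the function names here are ours, and the sign conventions follow the reconstruction above.

```python
import numpy as np

def rotation(omega, phi, kappa):
    """R = R_omega @ R_phi @ R_kappa, as in Eq. (4)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rw = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Rp = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rk = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rw @ Rp @ Rk

def coplanarity_residual(p1, p5, base, angles, f):
    """Determinant of Eq. (4); zero when both rays lie in the epipolar plane."""
    ray1 = np.array([p1[0], p1[1], -f])
    ray5 = rotation(*angles) @ np.array([p5[0], p5[1], -f]) + base
    return np.linalg.det(np.vstack([base, ray1, ray5]))
```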

The relative distance obtained by Eq. (3) is substituted for Bz in Eq. (4). That is, whereas under the normal coplanarity condition Bx is fixed, here the relative orientation is performed with Bz as the known quantity. Furthermore, since the interior orientation parameters common to all images are also treated as unknowns in the relative orientation of this method, they must be taken into account in the image coordinates of the image points p1 and p5. Specifically, with principal point position (xp, yp), scale factors a1, a2, a3, and a4, and lens distortion in the radial direction (coefficients k1, k2, k3) and the tangential direction (p1, p2), the coordinates (xi, yi) (i = 1, 5) in Eq. (4) are obtained by converting the pixel coordinates (ui, vi) (i = 1, 5) measured in each image with the following equation.

$$x_i = \bar{x}_i + \bar{x}_i\left(k_1 r^2 + k_2 r^4 + k_3 r^6\right) + p_1\left(r^2 + 2\bar{x}_i^2\right) + 2 p_2 \bar{x}_i \bar{y}_i$$
$$y_i = \bar{y}_i + \bar{y}_i\left(k_1 r^2 + k_2 r^4 + k_3 r^6\right) + p_2\left(r^2 + 2\bar{y}_i^2\right) + 2 p_1 \bar{x}_i \bar{y}_i \tag{5}$$

where k1, k2, k3: coefficients of radial distortion; p1, p2: coefficients of tangential distortion; r = √(x̄i² + ȳi²); the sensor coordinates (x̄i, ȳi) (mm) are related to the pixel coordinates (ui, vi) by

$$u_i = x_p + a_1 \bar{x}_i + a_2 \bar{y}_i, \qquad v_i = y_p + a_3 \bar{x}_i + a_4 \bar{y}_i$$

(xi, yi): measurement point (mm); (ui, vi): measurement point (pixel); (xp, yp): principal point (pixel); a1, a2, a3, a4: scale factors.
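A minimal sketch of the conversion in Eq. (5): pixel coordinates are first mapped to sensor coordinates by inverting the affine relation, and the radial/tangential terms are then applied. All parameter values would come from the orientation; whether the terms act as a correction or as applied distortion depends on the calibration convention, and we follow Eq. (5) as written.

```python
def pixel_to_sensor(u, v, xp, yp, a):
    """Invert u = xp + a1*x + a2*y, v = yp + a3*x + a4*y for (x, y)."""
    a1, a2, a3, a4 = a
    det = a1 * a4 - a2 * a3
    du, dv = u - xp, v - yp
    return (a4 * du - a2 * dv) / det, (a1 * dv - a3 * du) / det

def apply_distortion_terms(x, y, k, p):
    """Radial (k1..k3) and tangential (p1, p2) terms of Eq. (5)."""
    k1, k2, k3 = k
    p1, p2 = p
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x + x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y
    yd = y + y * radial + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y
    return xd, yd
```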

By sequentially deriving the coplanarity equation for each image pair based on image 1, the parameters shown in Table 2 become the unknowns of this relative orientation. Each set of corresponding points between a pair of images yields one coplanarity equation, so enough corresponding points must be acquired that the number of coplanarity equations exceeds the number of unknowns. For example, if the number of images is five, the number of unknowns is 10 + 5 × (5 − 1) = 30; if 8 or more corresponding points are obtained, 8 × (5 − 1) = 32 or more coplanarity equations are available, and a solution can be obtained.
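The bookkeeping behind that example can be written out explicitly; the 10 shared interior parameters and 5 exterior parameters per image (Bz being known from Eq. (3)) follow the counts stated above.

```python
def relative_orientation_counts(n_images, n_points, n_interior=10):
    """Unknowns vs. coplanarity equations for the simultaneous orientation."""
    unknowns = n_interior + 5 * (n_images - 1)   # Bx, By, omega, phi, kappa each
    equations = n_points * (n_images - 1)        # one equation per point and pair
    return unknowns, equations

print(relative_orientation_counts(5, 8))  # (30, 32): solvable
```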

Table 2.

Unknown parameters of relative orientation.

6.2.3 Calculation of 3D actual coordinates

Since the relative orientation parameters and the interior orientation parameters of all images are obtained by the above processing, the 3D relative coordinates of each measurement point are now calculated under the collinearity condition. The collinearity condition requires that three points, the ground point (X, Y, Z), the image point (x, y) on the sensor, and the principal point (X0, Y0, Z0), lie on a straight line, and it is expressed by the following collinearity equations.

$$x = -f\,\frac{a_{11}(X - X_0) + a_{12}(Y - Y_0) + a_{13}(Z - Z_0)}{a_{31}(X - X_0) + a_{32}(Y - Y_0) + a_{33}(Z - Z_0)}$$
$$y = -f\,\frac{a_{21}(X - X_0) + a_{22}(Y - Y_0) + a_{23}(Z - Z_0)}{a_{31}(X - X_0) + a_{32}(Y - Y_0) + a_{33}(Z - Z_0)} \tag{6}$$

where

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\omega & -\sin\omega \\ 0 & \sin\omega & \cos\omega \end{pmatrix} \begin{pmatrix} \cos\phi & 0 & \sin\phi \\ 0 & 1 & 0 \\ -\sin\phi & 0 & \cos\phi \end{pmatrix} \begin{pmatrix} \cos\kappa & -\sin\kappa & 0 \\ \sin\kappa & \cos\kappa & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

Since each image yields two collinearity equations per measurement point, two or more images give 2 × 2 = 4 or more equations, which is sufficient to solve for the three unknown coordinates. As a result, 3D relative coordinates are obtained for all measurement points.
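As a sketch (under the sign conventions of Eq. (6), not the author's exact solver), each collinearity equation can be rearranged into a linear constraint on (X, Y, Z) and stacked into a least-squares system:

```python
import numpy as np

def triangulate(points, cameras, f):
    """Least-squares 3D point from Eq. (6).

    points:  list of (x, y) image coordinates in sensor units
    cameras: list of (R, X0) pairs from the orientation step, with R the
             3x3 rotation matrix (entries a_ij) and X0 the principal point
    """
    A, b = [], []
    for (x, y), (R, X0) in zip(points, cameras):
        # x = -f * (row1 . d) / (row3 . d)  ==>  (x*row3 + f*row1) . (X - X0) = 0
        for row in (x * R[2] + f * R[0], y * R[2] + f * R[1]):
            A.append(row)
            b.append(row @ X0)
    X, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return X
```

With two images the system is already overdetermined (4 equations, 3 unknowns); with all five images the redundancy further stabilizes the solution.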

All of the obtained 3D relative coordinates are then converted to real-scale coordinates using the distance given as the known quantity (Figure 10). That is, the 3D relative coordinates of all measurement points are scaled by the ratio of the actual distance to the corresponding distance between the two points in the relative coordinate system. When converting to real-scale coordinates, a coordinate origin and coordinate axes must also be set.
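A sketch of this scale step, with hypothetical model coordinates for the two marks; only the 14.831 m distance comes from the chapter:

```python
import numpy as np

relative = {27: np.array([0.0, 0.0, 0.0]),        # hypothetical model units
            35: np.array([1.18, 0.41, 0.02])}

model_dist = np.linalg.norm(relative[35] - relative[27])
scale = 14.831 / model_dist                        # known distance / model distance
real = {k: v * scale for k, v in relative.items()} # real-scale coordinates (m)
```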

6.2.4 Absolute orientation

Since the real-scale 3D coordinates of all measurement points were obtained by the above processing, the interior orientation parameters common to all images and the exterior orientation parameters of each image are now determined by absolute orientation. In this orientation, all collinearity equations are derived, treating every measurement point with known 3D coordinates as a ground reference point, and the interior orientation parameters shown in Table 2 together with the positions and attitude angles of every image are obtained simultaneously. This completes the absolute orientation of each image.

6.2.5 Final orientation

The orientation parameters of every camera and the absolute 3D coordinates of every measurement point were acquired by the procedure described above. However, errors may remain in the absolute 3D coordinates because the conversion from relative coordinates relies on only one given distance. Therefore, as the final stage of the measurement process, all orientation parameters and the 3D coordinates of all measurement points are treated as unknown quantities, and a final orientation process is carried out.

6.3 Checking accuracy

To evaluate the performance of the proposed method, images were taken at the UAV test site and the measurement accuracy was verified. The images were taken by the UAV pointing vertically downward from the center of the test site, every 5 m over ground altitudes of approximately 70–90 m, giving 5 photos in total, as shown in Figure 13. The 3D coordinates of the 39 ground marks visible in all 5 photos were calculated by the developed method, and the accuracy was verified from the residuals against the known coordinates. As shown in Figure 14, the origin was set at ground mark No. 27, the direction toward No. 35 was taken as the X axis, the plane formed by these two points and a third point was taken as the XY plane, and the normal to the XY plane was taken as the Z axis; a sketch of this construction is given after this paragraph. In principle, the proposed orientation method can obtain the 3D coordinates of the ground marks other than the origin using only 2 images. However, with a small number of images, the number of observation equations barely exceeds the number of unknowns, and the convergence of the least-squares calculation becomes unstable. Trials with fewer images from this data set confirmed that a convergent solution was difficult to obtain stably with 4 or fewer images, so all 5 images were used. Table 3 shows the results of the final orientation for the 5 photos. Since the ground altitude in the table is an approximate value obtained by single-point GPS positioning on the UAV, it differs by several meters from the Z coordinate of the orientation result.
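The local frame of Figure 14 can be constructed as sketched below; the chapter does not give the exact construction, so this is our assumed recipe (origin at No. 27, X toward No. 35, Z normal to the plane through a third mark).

```python
import numpy as np

def local_frame(origin, x_point, plane_point):
    """Rows are the local X, Y, Z axes expressed in the source coordinates."""
    ex = x_point - origin
    ex = ex / np.linalg.norm(ex)
    ez = np.cross(ex, plane_point - origin)   # normal to the XY plane
    ez = ez / np.linalg.norm(ez)
    ey = np.cross(ez, ex)
    return np.vstack([ex, ey, ez])

def to_local(p, origin, frame):
    """Transform a point into the local coordinate system."""
    return frame @ (p - origin)
```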

Figure 13.

Vertical images for checking accuracy. (a) 70 m, (b) 75 m, (c) 80 m, (d) 85 m, and (e) 90 m.

Figure 14.

Local coordinate system.

Table 3.

Results of final orientation.

As shown in Table 4, the accuracy verification showed RMSEs within ±0.200 m for both plane and height. Applied to surveying accuracy at earthmoving sites, a 3D point cloud with a position accuracy within 0.20 m is considered applicable to progress measurement, so the method was recognized as applicable as a simple method for earthwork.

Table 4.

Results of checking accuracy.

For comparison, the measurement accuracy was also calculated with general photogrammetry software, again PhotoScan. The 5 images shown in Figure 13 were loaded into PhotoScan, and the 3D coordinates and measurement accuracy of the ground marks were calculated. Two patterns were tried: one using only points No. 27, 35, and 62 as control points, and one using all 39 points as control points. In addition, a comparison was made with imaging performed by the general photogrammetric method: with the ground altitude held constant at approximately 70 m, the UAV was flown along parallel strips, and a total of 57 images covering the entire test site with an 80% overlap rate were taken and loaded into PhotoScan. The 3D coordinates and measurement accuracy in this case were also calculated, using the same 9 control points as in the standard photogrammetry described earlier. As an index for evaluating each measurement accuracy, the standard accuracy generally used in photogrammetry was calculated by the following equation [13].

$$\sigma_x = \sigma_y = \frac{H}{f}\,\sigma_p, \qquad \sigma_z = \sqrt{2}\,\frac{H}{f}\cdot\frac{H}{B}\,\sigma_p \tag{7}$$

where σx, σy, σz: standard error for each axis (m); H: altitude (m); f: focal length (m); B: base line (m); σp: pointing accuracy (m).

Since five vertical images are used in this study, H in Eq. (7) was taken as the average ground altitude after orientation of the 5 images (83.944 m), and B as the distance between the two most distant images (20.137 m). For the pointing accuracy, one pixel was used, as in general photogrammetry, converted into a length on the camera sensor.
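Plugging the quoted values into Eq. (7) gives the benchmark used below; the pixel pitch is our assumption (about 6.17 mm over 4000 pixels for a 1/2.3-inch sensor), since the chapter says only that one pixel was used.

```python
import math

H = 83.944               # mean ground altitude after orientation (m)
f = 0.004                # focal length (m)
B = 20.137               # distance between the two farthest images (m)
sigma_p = 6.17e-3 / 4000 # pointing accuracy: one pixel on the sensor (m), assumed pitch

sigma_xy = (H / f) * sigma_p                         # Eq. (7): sigma_x = sigma_y
sigma_z = math.sqrt(2) * (H / f) * (H / B) * sigma_p
print(f"sigma_xy = {sigma_xy:.3f} m, sigma_z = {sigma_z:.3f} m")
```

These come out near ±0.03 m in plane and ±0.19 m in height, consistent with the comparisons discussed next.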

As a result, when only the 5 vertical images were used, the plane accuracy fell short of the standard accuracy for both the proposed method and PhotoScan; for the height accuracy, however, only the proposed method surpassed the standard accuracy. In other words, it was confirmed that the proposed method can achieve accuracy equivalent to ordinary photogrammetry, especially in the height direction, even though the imaging method is simple and no ground control points are required. On the other hand, when images taken by general parallel imaging were processed by PhotoScan, the accuracy was high enough to be applicable to the volume control of earthworks. Thus, the imaging method should be selected according to the situation, but the proposed method is useful for quickly and easily grasping site conditions.

6.4 Consideration of the results

Figure 15 shows the distribution of the residuals of the X and Y coordinates at the 39 points, drawn as arrows. Within a range of about 10–20 m from the origin, the residuals at most verification points are within ±0.04 m, equivalent to the standard accuracy; however, at points No. 45, 50, 51, 55, 61, and 66 the residuals are around ±0.2 m, and the accuracy clearly deteriorates. The verification points are distributed over a relatively wide area, about 40 m in the X direction and 50–60 m in the Y direction. In other words, this verification suggests that the Y coordinate accuracy of the proposed method decreases at verification points whose Y coordinates are far from the origin. From these results, it is considered preferable to set the origin as close to the measurement object as possible when applying this method in the field.

Figure 15.

Error distribution of the proposed method.

To further confirm the utility of 3D measurement by UAV, the measurement accuracy of this result was compared with measurement accuracies obtained from satellite images [14] and aerial images [15]. The RMSEs of measurements using satellite images were ±0.3 to 1.0 m, and those using aerial images were ±0.1 to 0.5 m, depending on the number of GCPs. Therefore, the UAV can be said to be well suited for accurate measurement in a limited area.


7. Conclusions

In this research, we developed a method that uses images taken vertically from a UAV and performs aerial photogrammetry without ground control points. To evaluate the performance of the developed method, its surveying accuracy was compared with that of general photogrammetry software. Because the developed method uses only a small number of vertically taken images, the imaging effort can be reduced compared with the usual method. Also, since no ground control points are needed, no preparation before imaging is required.

The accuracy verification was performed by comparison with the total station ground survey. Although the accuracy is inferior to that obtained with the general imaging method and software (about ±0.040 m for the general method versus about ±0.100 m for the proposed method), measurement with practical accuracy was confirmed. Moreover, since the method only requires flying the UAV vertically and taking several images, the time and labor involved in imaging can be drastically reduced. It is therefore expected that the developed method will be used for surveying current conditions at earthmoving sites and for grasping damage situations at the time of a disaster.

As a future task, means for further improving the accuracy must be considered. In particular, since the accuracy of this method decreases for points far from the origin, the accuracy should be stabilized with respect to the position of the measurement point. Specific countermeasures include verifying the optimum number of photos for each situation, verifying the optimum altitude difference between photos, and using GNSS (GPS) positioning information from the UAV flight. In this study, the 3D coordinates are obtained as local coordinates without ground control points; a method for efficiently obtaining global coordinates, such as plane rectangular coordinates, remains to be discussed.

References

  1. Valavanis K, Vachtsevanos G, editors. Handbook of Unmanned Aerial Vehicles. Netherlands: Springer; 2015. DOI: 10.1007/978-90-481-9707-1
  2. Beaudoin L, Avanthey L, Gademer A, Roux M, Rudant J. Dedicated payloads for low altitude remote sensing in natural environments. In: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XL-3/W3. La Grande Motte, France; 2015. pp. 405-410
  3. Galarreta J, Kerle N, Gerke M. UAV-based urban structural damage assessment using object-based image analysis and semantic reasoning. Natural Hazards and Earth System Sciences. 2015;15:1087-1101. DOI: 10.5194/nhess-15-1087-2015
  4. Li M, Li D, Fanb D. A study on automatic UAV image mosaic method for paroxysmal disaster. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXIX-B6. Melbourne, Australia; 2012. pp. 123-128
  5. Barazzetti L, Brumana R, Oreni D, Previtali M, Roncoroni F. True-orthophoto generation from UAV images: Implementation of a combined photogrammetric and computer vision approach. In: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. II-5. Riva del Garda, Italy; 2014. pp. 77-81
  6. Feifei X, Zongjian L, Dezhu G, Huad L. Study on construction of 3D building based on UAV images. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXIX-B1. Melbourne, Australia; 2012. pp. 469-473
  7. Tanzi T, Chandra M, Isnard J, Camara D, Sebastien O, Harivelo F. Towards "drone-borne" disaster management: Future application scenarios. In: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. III-8. Prague, Czech Republic; 2016. pp. 181-189
  8. Amrullah C, Suwardhi D, Meilano I. Product accuracy effect of oblique and vertical non-metric digital camera utilization in UAV-photogrammetry to determine fault plane. In: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XLI-B1. Prague, Czech Republic; 2016. pp. 41-48
  9. Westoby J, Brasington J, Glasser F, Hambrey J, Reynolds M. 'Structure-from-motion' photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology. 2012;179:300-314
  10. Bagheri O, Ghodsian M, Saadatseresht M. Reach scale application of UAV + SfM method in shallow rivers hyperspatial bathymetry. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XL-1/W5. Kish Island, Iran; 2015. pp. 77-81
  11. Longhitano G, Quintanilha J. Rapid acquisition of environmental information after accidents with hazardous cargo through remote sensing by UAV. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XL-1/W1. Hannover, Germany; 2013. pp. 201-205
  12. Persad R, Armenakis C. Co-registration of DSMs generated by UAV and terrestrial laser scanning systems. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XLI-B1. Prague, Czech Republic; 2016. pp. 985-990
  13. Yanagi H, Chikatsu H. Performance evaluation of 3D modeling software for UAV photogrammetry. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XLI-B5. Prague, Czech Republic; 2016. pp. 147-152
  14. Rupnika E, Deseillignya MP, Delormeb A, Klingerb Y. Refined satellite image orientation in the free open-source photogrammetric tools Apero/Micmac. In: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. III-1. Czech Republic; 2016. pp. 83-90. DOI: 10.5194/isprsannals-III-1-83-2016
  15. Jung J, Bang K, Sohn G, Armenakis C. Matching aerial images to 3D building models based on context-based geometric hashing. In: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. III-1. Czech Republic; 2016. pp. 17-23. DOI: 10.5194/isprsannals-III-1-17-2016
