Open access peer-reviewed chapter

New Applications of 3D SLAM on Risk Management Using Unmanned Aerial Vehicles in the Construction Industry

By Alfredo Toriz Palacios, José Maria Enrique Bedolla Cordero, Modesto Raygoza Bello, Edgar Toriz Palacios and Jessica L. Martínez González

Submitted: September 29th, 2017 | Reviewed: December 21st, 2017 | Published: June 27th, 2018

DOI: 10.5772/intechopen.73325


Abstract

Risk Management is an integral part of the Corporate Governance of the Companies, whose objective is to estimate the risks related to each line of business and to make appropriate decisions regarding the adoption of preventive measures. The construction industry, due to its peculiar characteristics about occupational risks, is a sector that must pay particular attention to this issue. Unmanned aerial robots are part of a generation of new technologies, which are emerging in the attempt to develop robust and efficient algorithms capable of obtaining 3D models of structures under construction, to support the assessment of the situation in case of an eventuality, before the direct human intervention. This article proposes to develop a risk management strategy for the construction industry based on obtaining 3D models of work environments using drones, which will allow safe evaluation of risks present in construction zones.

Keywords

  • unmanned aerial robots
  • risk management
  • construction industry
  • 3D models
  • SLAM

1. Introduction

In most industrialized countries, the construction industry is one of the most important in terms of contribution to gross domestic product (GDP). In the United States alone, this sector generated $818.6 billion USD in the second quarter of 2017 [1], according to the Bureau of Economic Analysis. In March 2017, the Bureau of Labor Statistics reported that 6,882,000 people were employed in the construction industry, accounting for 4% of total employment in the United States.

One of the main concerns in such a financially important industry is related to the safety issues of the employees who work in it. According to the National Census of Fatal Occupational Injuries in 2015, there were 4379 deaths of workers in private industry in 2015; of these, 937 (21.4%) belonged to the construction industry, that is, one in five [2]. In addition, according to information reported in Safety + Health, a construction worker has a 75% chance of a disabling injury and a 1-in-200 chance of a fatal injury during a 45-year career.

The U.S. Department of Labor reported that 64% of the deaths of private-sector workers in the construction industry in 2015 had four causes, called the "Fatal Four": falls, which accounted for 364 deaths (38.8%); struck-by-object incidents, with 90 deaths (9.5%); electrocutions, with 81 deaths (8.6%); and caught-in/between incidents, with 67 deaths (7.2%) [1]. It is estimated that preventing the Fatal Four would save approximately 600 workers' lives in the United States each year. Almost all construction workers will have at least one work-related injury in their lifetime.

Technological advances in areas such as personal protective equipment, safety-conscious design, focused safety training, among others, have improved worker safety. However, even with such improvements, construction continues to be one of the most dangerous industries in the U. S. economy in terms of serious injury, lost work time, hospitalization, disability, and mortality [3].

According to [4], risk management in the construction industry consists of assessing and responding to the risk that will inevitably be associated with a project. In parallel, risk management seeks to establish mechanisms to reduce the disturbances that can occur during project execution [5]. However, one of the problems is that, in many cases, the risk is analyzed once the accident has occurred, i.e., the analysis looks backwards and only provides recommendations, probabilities, and data on accidents that have already occurred [6].

This kind of analysis is very helpful in reducing accidents, but more robust tools are essential when performing work as dynamic and high-risk as that of the construction sector, so flexible technological tools that can operate in complex environments are needed. One of the most important needs of companies in risk management is to reduce accidents through prevention, so it is important to promote a technology-inclusive model that makes the industrial risk analysis process more robust and contributes to the prevention and reduction of hazards and accidents. Nowadays, with the support of the multiple options offered by technology, new forms of work can be incorporated in which human beings interact with novel, low-cost tools, generating spaces where tasks can be carried out with minimal risk. Thus, protecting workers against occupational hazards requires organizations to evaluate models that combine technology in the design of their jobs, productivity, and employee satisfaction [7].

In recent years, there has been a growing demand for the use of robots in civil contexts [8, 9, 10, 11] because companies and individuals have identified the advantages of using such technologies. These include the significant growth of Unmanned Aerial Vehicles (UAVs), given their significant advantages in mobility and their ability to reach places that are very difficult to access. Although construction has been known as a highly complex field of application for these types of robotic systems, recent advances in this field offer great hope, as they represent a low-cost alternative for aerial surveillance in risk identification compared to other equipment requiring very robust structures [12].

The purpose of this research is to develop and test a UAV Technological Inclusion Model for the prevention of high-risk work in the construction industry, based on the Research, Assessment, Analysis and Selection methodology for risk management (RAAS). It also aims to incorporate into the industrial risk analysis management processes the benefits of integrating an incremental innovation that relies on 3D reconstruction to contribute to the knowledge of applications of the tools that are used in robotics.

2. Related works

Continuous monitoring of unsafe conditions and actions is essential to eliminate potential hazards and ensure safe working conditions in the construction industry. Many works have included risk management and its impact, which for Jannadi and Almishari [13] is a measure of the probability, severity, and exposure to hazards of a specific activity. Given the diversity of risks, no single methodology can cover all existing cases; the aim is not only to investigate the causes of accidents but to anticipate them with the support of tools developed for this purpose.

Several papers show the interest of researchers in this subject. In 2007, Hallowell [14] validated a method that matches the elements of the safety program in construction processes. Rozenfeld et al. [15] developed the "Construction Work Safety Analysis" method, which can predict fluctuations in risk levels. Benjaoran and Bhokha [16] integrated the construction process through risk analysis throughout the design, planning and control phases. Rajendran [17] assessed the relative capacity of elements of a program to improve site security. Markowski et al. [18] proposed the ORA method, which estimates risk levels for workers through three steps: identification, assessment, and hierarchy of potential risks. Salla and Sanna [19] agreed that risk assessments and instruction in safe work practices should ensure that workers have relevant safety knowledge.

In 2004, Azcuénaga proposed a method based on four stages: Identification, Assessment, Action, and Monitoring (IAAM) [20]: 1. Identification of existing risks through activities aimed at such identification (inspections and observations). 2. Assessment or evaluation of risks, using a method that classifies them according to their criticality. 3. Action, taking corrective measures following the priority set by the evaluation, trying to eliminate the risks and, if not, reduce and control them. And 4. Follow-up of corrective measures, indicating the person responsible for their execution, their deadline, and verification of their effectiveness.

Although the tools mentioned above have managed to improve the working conditions of the construction industry, most inspection processes are carried out manually by qualified inspectors, which means that they are subjective. Robotics and computer vision can solve some of these situations, improving inspection quality by collecting different types of data, through which 3D models of working environments are generated [21].

3D reconstruction, based on images of civil infrastructure, is an emerging topic that is gaining interest in the scientific and commercial sectors of the construction industry. In the last decade, reliable computer vision-based algorithms have been made available and can now be applied to solve real-life problems in uncontrolled environments. For example, Fathia et al. [22] analyzed the state-of-the-art image-based 3D reconstruction and categorized existing algorithms according to different metrics, identifying gaps and highlighting future research topics that could contribute to the widespread adoption of this technology in the construction industry.

Seo et al. [23] categorized computer vision techniques for construction safety into three groups, according to the types of information needed to evaluate unsafe conditions and acts: object detection, object tracking, and action recognition. The results identified significant research challenges, including complete scene understanding, tracking accuracy under varying camera positioning, and recognition of actions by multiple crews and workers.

The application of UAVs to project sites has also grown exponentially in recent years [24]. Today, many Architecture/Engineering/Construction and Facility Management (AEC/FM) firms and relevant service companies use these platforms to visually monitor construction and operation of buildings, bridges and other types of civil infrastructure systems. These applications capture very large collections of images and videos, which are processed by computer-vision methods into 3D models.

The implementation of UAVs on project sites has been made possible by rapid advances in sensing, battery and aeronautical technologies, along with autonomous navigation methods and low-cost onboard digital cameras, which have helped make UAVs more affordable, reliable and easy to operate [25]. Although the direct application of these methods to construction monitoring and risk-condition analysis has the potential to provide efficient evaluation mechanisms, their industrial use has lagged behind, remaining limited to monitoring by collecting videos of the work areas.

3. UAV-based 3D environment reconstruction

The focus of the proposal carried out in this research is to obtain a model of the structure to be studied; for this task, a tool was developed to reconstruct three-dimensional environments based on the concept of 3D SLAM. In this activity, the UAV made flights to capture different images and videos to identify risks. During the field tests, the Parrot AR Drone 2.0 was used. This UAV belongs to the micro and mini UAV category, which weighs between 100 g and 30 kg and flies at a low altitude (below 300 m) [26]. The Parrot AR 2.0 is powered by four electric motors in a quadcopter configuration and has a microprocessor, a series of sensors, two 1280 × 720-pixel cameras, and an integrated Wi-Fi connector that allows it to link to personal mobile devices while transmitting images and telemetry data in real time.

To make the 3D reconstruction of the environment, the site resident provided a route free of collisions with objects, considering critical points that should be present in the reconstruction. The UAV was controlled to follow this path using the rapidly-exploring random tree (RRT) methodology presented in [27], which incrementally builds a tree of positions in a way that rapidly reduces the expected distance to a chosen point (Figure 1).

Figure 1.

Evolution of the RRT method.
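As an illustration of how the RRT planner expands toward a target, the following minimal Python sketch grows a tree in a free 2D workspace; the sampling bounds, step size, and goal bias are illustrative assumptions, not parameters taken from the chapter's implementation:

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_bias=0.1, max_iters=2000):
    """Minimal 2D RRT: incrementally grow a tree of positions that
    rapidly reduces the expected distance to a chosen goal point."""
    tree = {start: None}  # maps each node to its parent
    for _ in range(max_iters):
        # Sample a random point, occasionally biased toward the goal.
        sample = goal if random.random() < goal_bias else (
            random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
        # Extend the nearest tree node one step toward the sample.
        nearest = min(tree, key=lambda n: math.dist(n, sample))
        d = math.dist(nearest, sample)
        if d == 0.0:
            continue
        new = (nearest[0] + step * (sample[0] - nearest[0]) / d,
               nearest[1] + step * (sample[1] - nearest[1]) / d)
        if not is_free(new):
            continue  # discard nodes that collide with obstacles
        tree[new] = nearest
        if math.dist(new, goal) < step:
            # Goal reached: backtrack from the new node to the root.
            path = [new]
            while tree[path[-1]] is not None:
                path.append(tree[path[-1]])
            return path[::-1]
    return None

random.seed(0)
path = rrt((0.0, 0.0), (9.0, 9.0), is_free=lambda p: True)
```

In the real system, `is_free` would encode the obstacle checks along the route supplied by the site resident.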

To perform the RRT control, it was first necessary to establish the quadcopter motion model to identify how the vehicle's movement evolves from instant i to i + 1, which consists of ten state variables:

$$X_i = \left(x_i,\ y_i,\ z_i,\ \dot{x}_i,\ \dot{y}_i,\ \dot{z}_i,\ \Psi_i,\ \theta_i,\ \varphi_i,\ \dot{\varphi}_i\right)^T \tag{E1}$$

where $(x_i, y_i, z_i)$ represents the position of the quadcopter's center of mass and $(\dot{x}_i, \dot{y}_i, \dot{z}_i)$ its velocity in m/s, both relative to the fixed world coordinate system; in addition, the ten variables contain the orientation $(\Psi_i, \theta_i, \varphi_i)$ of the UAV, referring to the roll, pitch and yaw angles, respectively, together with the yaw angular velocity $\dot{\varphi}_i$.

To obtain the i + 1 state of the UAV, the horizontal accelerations $(\ddot{x}, \ddot{y})$ and vertical acceleration $\ddot{z}$, as well as the angular rates $(\dot{\Psi}, \dot{\theta})$ and the yaw angular acceleration $\ddot{\varphi}$, are derived from the current state $X_i$ and the active control inputs $U_i$, which give the possible end positions of the RRT method.

According to [28], horizontal acceleration is proportional to the horizontal force acting on the quadcopter. Thus, we get the following equation:

$$\begin{pmatrix} \ddot{x} \\ \ddot{y} \end{pmatrix} \propto f_{acc} - f_{drag} \tag{E2}$$

where $f_{acc}$ denotes the accelerating force on the UAV and $f_{drag}$ the drag force. The accelerating force depends on the inclination angle of the UAV, while the drag force is proportional to its horizontal velocity, which leads to the following equations:

$$\ddot{x}(X_i) = c_1\left(\cos\varphi_i \sin\Psi_i \cos\theta_i - \sin\varphi_i \sin\theta_i\right) - c_2\,\dot{x}_i \tag{E3}$$
$$\ddot{y}(X_i) = c_1\left(-\sin\varphi_i \sin\Psi_i \cos\theta_i - \cos\varphi_i \sin\theta_i\right) - c_2\,\dot{y}_i \tag{E4}$$

The proportionality coefficients $c_1$ and $c_2$ were empirically determined from 30 flight tests. The control command $U_i = (\hat{\Psi}_i,\ \hat{\theta}_i,\ \hat{\dot{\varphi}}_i,\ \hat{\dot{z}}_i)^T \in [-1, 1]^4$ defines the desired roll and pitch angles, as well as the desired yaw angular velocity and vertical velocity, each as a fraction of its maximum allowed value. These parameters serve as input values for drone control. A model similar to the one used for the horizontal accelerations was used to obtain the roll and pitch angular rates, as well as the yaw angular acceleration and the vertical acceleration:

$$\dot{\Psi}(X_i, U_i) = c_3\,\hat{\Psi}_i - c_4\,\Psi_i \tag{E5}$$
$$\dot{\theta}(X_i, U_i) = c_3\,\hat{\theta}_i - c_4\,\theta_i \tag{E6}$$
$$\ddot{\varphi}(X_i, U_i) = c_5\,\hat{\dot{\varphi}}_i - c_6\,\dot{\varphi}_i \tag{E7}$$
$$\ddot{z}(X_i, U_i) = c_7\,\hat{\dot{z}}_i - c_8\,\dot{z}_i \tag{E8}$$

Similarly, the coefficients $c_3$ to $c_8$ were empirically determined from 30 flight tests. Finally, the transition from drone state i to state i + 1 is given by the following expression:

$$X_{i+1} = X_i + \delta_i\left(\dot{x}_i,\ \dot{y}_i,\ \dot{z}_i,\ \ddot{x}(X_i),\ \ddot{y}(X_i),\ \ddot{z}(X_i, U_i),\ \dot{\Psi}(X_i, U_i),\ \dot{\theta}(X_i, U_i),\ \dot{\varphi}_i,\ \ddot{\varphi}(X_i, U_i)\right)^T \tag{E9}$$

where $\delta_i$ represents the period of the control cycle.
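The motion model of Eqs. (E1)-(E9) amounts to a simple Euler integration of the ten-variable state. In the sketch below, the coefficient values c1 to c8 are placeholders (the chapter fitted them empirically from 30 flight tests), so this is only an illustration of the state transition, not the fitted model:

```python
import math

# Placeholder coefficients: the chapter fitted c1..c8 from 30 flight tests.
c1, c2, c3, c4, c5, c6, c7, c8 = 0.58, 0.32, 1.0, 0.5, 1.7, 0.4, 1.3, 0.2

def step(X, U, dt):
    """One Euler step of the ten-variable model, Eqs. (E1)-(E9).
    X = [x, y, z, vx, vy, vz, roll, pitch, yaw, yaw_rate]
    U = [roll_cmd, pitch_cmd, yaw_rate_cmd, vz_cmd], each in [-1, 1]."""
    x, y, z, vx, vy, vz, Psi, theta, phi, dphi = X
    Psi_cmd, theta_cmd, dphi_cmd, vz_cmd = U
    # Horizontal accelerations (E3, E4): tilt-induced thrust minus drag.
    ax = c1 * (math.cos(phi) * math.sin(Psi) * math.cos(theta)
               - math.sin(phi) * math.sin(theta)) - c2 * vx
    ay = c1 * (-math.sin(phi) * math.sin(Psi) * math.cos(theta)
               - math.cos(phi) * math.sin(theta)) - c2 * vy
    # Attitude and vertical responses (E5)-(E8).
    dPsi = c3 * Psi_cmd - c4 * Psi
    dtheta = c3 * theta_cmd - c4 * theta
    ddphi = c5 * dphi_cmd - c6 * dphi
    az = c7 * vz_cmd - c8 * vz
    # State transition (E9): integrate over one control cycle dt.
    return [x + dt * vx, y + dt * vy, z + dt * vz,
            vx + dt * ax, vy + dt * ay, vz + dt * az,
            Psi + dt * dPsi, theta + dt * dtheta,
            phi + dt * dphi, dphi + dt * ddphi]
```

With zero commands and a level attitude the state remains at rest, while a vertical-velocity command first accelerates the vehicle before the position changes, as the two-stage integration of (E9) implies.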

Once the UAV’s advance to the designated position is checked, the next step is to know the approximate actual location of the vehicle, as internal and external disturbances may cause the position to be only partially reached. From the preceding, it is necessary to know relevant internal information about the vehicle to carry out the 3D reconstruction task, such as height, speed, turns, stability and inertia of the UAV, which have been extensively studied and analyzed in Bristeau et al. [29]. The extraction of external information through the onboard cameras is equally important because the acquisition and processing of this information is needed to construct the 3D environment. Thus, the 3D reconstruction algorithm is initially based on predicting the position of the UAV using its odometric information. This first assumption is of vital importance since one of the leading elements for obtaining the 3D model of the environment is the camera’s spatial location.

The images captured by the onboard cameras are then used by the monocular SLAM method presented by Civera [30] for a more precise localization of the vehicle. This algorithm is based on the extraction of invariant points of interest within the image, located in the $\mathbb{R}^3$ space, which are used by the Extended Kalman Filter (a tool widely used to solve the SLAM problem) for the localization of the UAV and the simultaneous construction of the map.

Thus, once the image of the environment is obtained, the Speeded-Up Robust Features (SURF) method proposed by Bay et al. [31] is used to collect characteristic points, which is achieved by relying on integral images for convolution. In this sense, the SURF method makes use of a detector based on the Hessian matrix to determine points of interest within the image and their scale. Then, given a point $P = (x, y)$ in the image, the Hessian matrix $\eta(P, \sigma)$ at $P$ with scale $\sigma$ is defined as:

$$\eta(P, \sigma) = \begin{pmatrix} \tau_{xx}(P, \sigma) & \tau_{xy}(P, \sigma) \\ \tau_{xy}(P, \sigma) & \tau_{yy}(P, \sigma) \end{pmatrix} \tag{E10}$$

where $\tau_{xx}(P, \sigma)$ is the convolution of the second-order Gaussian derivative $\frac{\partial^2}{\partial x^2} g(\sigma)$ with the image $I$ at the point $P$, and similarly for $\tau_{xy}(P, \sigma)$ and $\tau_{yy}(P, \sigma)$. The Hessian determinant indicates the scale of the point of interest and is calculated through box-filter approximations of the second-order Gaussian derivatives $(\Psi_{xx}, \Psi_{xy}, \Psi_{yy})$ with the following formula:

$$\det(\eta_{aprox}) = \Psi_{xx}\,\Psi_{yy} - \left(0.9\,\Psi_{xy}\right)^2 \tag{E11}$$

with the intention of obtaining points of interest that could be identified in consecutive images under different viewing conditions.
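The speed of this detector comes from integral images, which let any box filter, and hence the box-filter approximations of the Gaussian derivatives, be evaluated in constant time. A minimal sketch of this mechanism and of the weighted determinant of Eq. (E11):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: S[i, j] = img[:i, :j].sum()."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def box_sum(S, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] via four table lookups, in O(1)."""
    return S[r1, c1] - S[r0, c1] - S[r1, c0] + S[r0, c0]

def det_hessian_approx(pxx, pyy, pxy):
    """Weighted Hessian determinant of Eq. (E11)."""
    return pxx * pyy - (0.9 * pxy) ** 2
```

In SURF, each box-filter response feeding `det_hessian_approx` is built from a few `box_sum` lookups, so the cost per point of interest is independent of the filter (and thus the scale) size.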

Once the scale is obtained, the next step of the SURF method is to apply a descriptor to obtain points of interest identifiable in consecutive images under different viewing conditions, so the descriptor must be distinctive and, at the same time, robust to noise, detection errors, and geometric and photometric deformations. For this purpose, it was necessary to calculate the orientation of the point of interest, which is based on the sum of the Haar wavelet responses in the $x$ and $y$ directions within a circular region of radius $6s$, where $s$ is the scale of the point of interest. Once the responses have been calculated for all neighbors, the dominant orientation is estimated by calculating the sum of all the results within a sliding window covering an angle of $\pi/3$. The purpose of this step is to construct a square region centered on the point of interest to describe the distribution of the intensity of its content. The result of applying the SURF method is shown in Figure 2.

Figure 2.

Result of the application of the SURF method in the image captured by the UAV.
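The dominant-orientation step can be sketched as follows, assuming the Haar wavelet responses (dx, dy) of the neighbors have already been computed; the number of window positions is an illustrative choice, not a value fixed by the method:

```python
import numpy as np

def dominant_orientation(dx, dy, n_windows=60):
    """Slide a pi/3 angular window over the Haar responses (dx, dy) and
    return the angle of the largest summed response vector."""
    angles = np.arctan2(dy, dx)
    best, best_norm = 0.0, -1.0
    for start in np.linspace(-np.pi, np.pi, n_windows, endpoint=False):
        # Select responses whose angle lies in [start, start + pi/3).
        diff = (angles - start) % (2.0 * np.pi)
        mask = diff < np.pi / 3.0
        sx, sy = dx[mask].sum(), dy[mask].sum()
        norm = sx * sx + sy * sy
        if norm > best_norm:
            best_norm, best = norm, float(np.arctan2(sy, sx))
    return best
```

If all responses point in one direction, the window that captures them dominates and the returned angle coincides with that direction.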

Although the SURF method provides an efficient means of identifying characteristic points in an image, the result of its application is limited to the $\mathbb{R}^2$ space. Thus, the images obtained with the identification of such points were treated using the inverse depth representation method for the monocular SLAM problem, proposed by Civera [30]. This representation implies that, from a sequence of images of an environment obtained from a moving camera, it is possible to get a three-dimensional description of the characteristic points contained in the images and use them to achieve a three-dimensional model of the scene.

Thus, for each key point contained in the images, the inverse depth method will eliminate the uncertainty of the depth of each of them, taking their information from the R2 space to the R3 space. Then, each point i in the three-dimensional space is defined by a 6-dimensional state vector.

$$y_i = \left(x_i,\ y_i,\ z_i,\ \theta_i,\ \varphi_i,\ p_i\right)^T \tag{E12}$$

which encodes the ray from the camera position from which the characteristic was first observed, where $(x_i, y_i, z_i)$ is the optical center of the camera and $(\theta_i, \varphi_i)$ are the azimuth and elevation coded in the global reference system of the scene. The depth $d_i$ of the feature along the ray is obtained through its inverse, $p_i = 1/d_i$ (Figure 3).

Figure 3.

Characteristics parametrization. Source: Own elaboration.

The vector obtained is then used to model a 3D point in terms of XYZ Euclidean coordinates as follows:

$$x_i = \begin{pmatrix} X_i \\ Y_i \\ Z_i \end{pmatrix} = \begin{pmatrix} x_i \\ y_i \\ z_i \end{pmatrix} + \frac{1}{p_i}\, m(\theta_i, \varphi_i) \tag{E13}$$

where $m(\theta_i, \varphi_i) = \left(\cos\varphi_i \sin\theta_i,\ -\sin\varphi_i,\ \cos\varphi_i \cos\theta_i\right)^T$ represents a unitary directional vector.
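Eq. (E13) is straightforward to implement. The following minimal sketch converts an inverse-depth feature into Euclidean coordinates:

```python
import numpy as np

def inverse_depth_to_xyz(x, y, z, theta, phi, p):
    """Eq. (E12) feature -> Euclidean point, Eq. (E13).
    (x, y, z): camera optical center at first observation;
    theta: azimuth; phi: elevation; p = 1/d: inverse depth."""
    m = np.array([np.cos(phi) * np.sin(theta),   # unit ray direction
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    return np.array([x, y, z]) + m / p
```

For example, a feature observed straight ahead (zero azimuth and elevation) with inverse depth 0.5 lies two meters in front of the optical center along the camera's z axis.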

Once the characteristic points of the scene have been obtained in the $\mathbb{R}^3$ space, the environment is 3D-reconstructed using them in combination with the Extended Kalman Filter (EKF).

The first step for the EKF to be executed is the creation of the covariance matrix P, which is formed by the covariance matrices between the camera and the characteristics of the environment, and the state vector λ, which will contain the position of the camera and the estimation of the scene’s characteristic points.

The matrix $P$ has the following form:

$$P = \begin{pmatrix} P_{xx} & P_{xy_1} & P_{xy_2} \\ P_{y_1 x} & P_{y_1 y_1} & P_{y_1 y_2} \\ P_{y_2 x} & P_{y_2 y_1} & P_{y_2 y_2} \end{pmatrix} \tag{E14}$$

And the state vector λ has the following form:

$$\lambda = \left(x_v^T,\ y_1^T,\ y_2^T,\ \ldots,\ y_n^T\right)^T \tag{E15}$$

where $y_i$ is the three-dimensional state vector of characteristic $i$:

$$y_i = \left(X_i,\ Y_i,\ Z_i\right)^T \tag{E16}$$

and where $x_v$ is the camera's state vector, consisting of a position vector $r^O$, an orientation quaternion $q^{CO}$ representing the orientation of the robot's camera frame $C$ with respect to the global reference frame $O$, a linear velocity vector $v^O$ relative to the global reference frame, and an angular velocity vector $\omega^C$ relative to the UAV's camera reference frame, the latter obtained from the odometric system itself.

Next, the EKF algorithm [32] shown in Figure 4 is executed for each image captured by the UAV as follows:

Figure 4.

EKF Algorithm. Source: Adapted from [32].

The EKF Algorithm steps are:

1. Stage of prediction: The filter updates the position and speed of the UAV from the odometric data reported by it.

2. Data association: It consists of aligning the characteristics that are being observed in step k with those that have been stored on the map. It is necessary to emphasize the importance of the right association between the characteristics for a consistent construction of the map, as any false association could invalidate the entire process. Thus, the joint compatibility branch and bound (JCBB) data association method, proposed by Neira and Tardós [33], was used to perform this stage. This algorithm performs a test to determine the individual compatibility between an observation $z_i$ and a characteristic obtained from the map, and applies a selection criterion to decide which are the best associations among the set of compatible associations.

3. Correction stage, in which the gap between the measurement and its estimation is known and the positions of the observed characteristics and the UAV are corrected, reducing the uncertainty between them (Figure 5).

Figure 5.

Evolution of the EKF algorithm with SURF points. Source: Own elaboration.

Finally, with the completion of the EKF, the update of the environment map must be done by adding the observed localized characteristics that were not part of it and that will serve in each step of the process for the localization process.
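The three stages above form the classical EKF cycle. The following is a generic, minimal sketch with user-supplied motion and measurement models standing in for the chapter's specific UAV and feature models:

```python
import numpy as np

def ekf_step(mu, P, u, z, f, F, h, H, Q, R):
    """One EKF cycle: predict from odometry u, then correct with observation z.
    f, h are the motion and measurement models; F, H return their Jacobians."""
    # 1. Prediction: propagate the state and its covariance.
    mu_pred = f(mu, u)
    Fk = F(mu, u)
    P_pred = Fk @ P @ Fk.T + Q
    # 2. Data association is assumed resolved: z is matched to h(mu_pred).
    # 3. Correction: weigh the innovation by the Kalman gain.
    Hk = H(mu_pred)
    S = Hk @ P_pred @ Hk.T + R            # innovation covariance
    K = P_pred @ Hk.T @ np.linalg.inv(S)  # Kalman gain
    mu_new = mu_pred + K @ (z - h(mu_pred))
    P_new = (np.eye(len(mu)) - K @ Hk) @ P_pred
    return mu_new, P_new
```

Applied to a trivial one-dimensional example, the corrected estimate lands between the prediction and the measurement, and the covariance shrinks, which is the uncertainty reduction described in the correction stage.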

Considering that the creation of the 3D map aims to present a complete view that can be analyzed by an external agent, an extra functionality is added to the reconstruction, in which the localized positions of the UAV are used to obtain a cloud of points that represents the 3D environment. For this purpose, each image obtained is processed to extract the edges of the objects present in it and, using two consecutive images, these aligned edges are positioned on the map using the concept of stereo vision. In stereo vision, the positions of the cameras at different time instants are well known, so that, by triangulating points, those points can be situated in the $\mathbb{R}^3$ space to obtain the detailed map (Figure 6).

Figure 6.

Point cloud of the scene. Source: Own elaboration.
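The triangulation underlying this stereo step can be sketched with the standard linear (DLT) method, assuming the two 3 × 4 projection matrices of consecutive camera poses are known; this is a generic formulation, not the chapter's exact implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its pixel
    coordinates x1, x2 under two known 3x4 projection matrices P1, P2."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    # The homogeneous 3D point is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

With two poses separated by a known baseline, an edge point seen in both images is recovered at its true depth, which is how the aligned edges are placed into the point cloud.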

After the 3D reconstruction design was completed, the collection of samples by UAV reconnaissance flight started in the second week of January 2015 and was carried out for 3 weeks, excluding Sundays. Days not suitable for the test (rain, strong winds over 15 km/h, and dense fog) were also excluded. Three schedules were established (7:00, 11:00 and 15:00 h) for collecting images and videos.

4. Industrial risk identification model based on unmanned aerial vehicles

Once the 3D environment reconstruction tool described in Section 3 was obtained, the next step was to develop an analysis methodology with the 3D environment and the data collected by the UAV at its center. Thus, the Industrial Risk Identification Model based on Unmanned Aerial Vehicles (IRIMUAV) [34], presented in this section, aims to help identify and recognize risks using a UAV during industrial accident prevention work, in addition to generating technological advantages in industrial safety.

IRIMUAV was designed to be developed through nine steps grouped into four stages (planning, UAV technological innovation, use of ICT, and results), based on the PHVA improvement cycle (Plan-Do-Check-Act, also known as the Deming cycle). The nine steps are: (1) work program; (2) definition of the area to be inspected; (3) design of the inspection route; (4) establishment of critical points; (5) reconnaissance flight with image and video capture; (6) 3D reconstruction of the scene; (7) risk zone analysis; (8) risk log; and (9) final risk report (Figure 7).

Figure 7.

Industrial risk identification model based on unmanned aerial vehicles (IRIMUAV). Source: Adapted from [34].

4.1. Planning stage (P)

Step 1. Work schedule. At this level, the personnel in charge of the construction site (the technical safety supervisor or labor resident) must have a very clear working plan or program to be executed, together with the specific activities, either on paper or in digital format.

Step 2. Definition of the area to be inspected. The site manager, together with his work team, based on the experience and technical capacity of all, and historical information on risks or accidents, will be responsible for defining the area to be observed to identify industrial risks.

4.2. Technology innovation stage UAV (H)

Step 3. Inspection path design. Once the work area to be inspected, called the experimental observation unit, has been defined, the project manager establishes the list of pending work and determines the inspection and monitoring path or paths to be carried out by the UAV, that is, the places where the UAV must transit. The path design can be done by simply creating a flight map.

Step 4. Establishment of critical points. To perform this step, the construction manager must inform the personnel trained to operate the UAV, through the flight map, of the marked points or places where the UAV should be positioned to take images and video, in order to identify possible dangers. It is highly probable, and recommended, that the person in charge of the work be the person trained to operate the UAV.

Step 5. Reconnaissance flight, image and video capture. In this step, the UAV operator proceeds to turn on and pilot the UAV through the designated area and guide it along the inspection pathway, allowing the UAV to obtain images and video at the critical points established in step 4. It is important to keep in mind that during take-off and flight control of the UAV, the following parameters must be specified: wind speed, altitude limit, flight speed, the height to be inspected, the maximum vertical speed, and the maximum rotation speed with the maximum angle of inclination. At this point, it is considered that incremental innovation has been achieved.

4.3. ICT use (V)

Step 6. Image reproduction and 3D reconstruction of the scene. Once the UAV has obtained videos with front, right-side, left-side, back, and top views covering 360°, a 3D reconstruction of the scene is generated.

Step 7. Analysis of risk zones. For this step, the critical points of risk must be analyzed in the images, videos and 3D scene obtained to identify the existing risks.

Step 8. Risk log. It will be necessary to record all the risks identified in step 7 in a risk log, which can be designed in a spreadsheet such as Excel or in physical form.

4.4. Results (A)

Step 9. Final risk report. Once the risk or risks have been recorded, it will be imperative to fill out a risk identification report sheet or format, which will be an essential input of information as the steps of the risk management procedure are performed, especially when there is a need to assess, analyze and monitor risks. Also, the risk identification information obtained will serve to strengthen the planning stage in a new cycle of application of the IRIMUAV model, specifically step 2. The final risk report ensures the management focus of the model.

Once the final risk report is obtained, the magnitude of the risks and the analysis and monitoring of hazards must be determined, to develop risk management and decision making. The last step of the IRIMUAV, the final risk report, provides valuable information for risk assessment. It also makes it possible to achieve a better analysis and monitoring of risks, since it will help to guarantee a better estimation of risks and their probability of occurrence (whether low, medium or high), as well as supporting more reliable decision making regarding the consequences of determining the level of exposure to the hazard (light, medium or extreme). It is also important to proceed with the steps of risk analysis, for which it is advisable to use a technological tool available in the market, such as SE Risk (risk and control management), ORCA Risk Management, or GCI Risk [35].

5. Results

To validate this proposal, it was applied to a Mexican construction company located in the central area of Veracruz, Mexico. An action plan was drawn up to implement the IRIMUAV model (Table 1), which was carried out from November 2016 to May 2017.

Application area | Plan elements | Needed documents | Responsible | Programming | Place
Administration | Sensitization | Sensitization program | Staff | October | Training room
Administration | Induction | Induction program | Staff | October | Training room
Administration | Training | Training program | Staff | October | Training room
Administration | Policy and normativity | Official Mexican Standard NOM-031-STPS-2011 | Safety coordinator | November | Company
Administration | Work teams organization | Responsibility matrix | Staff | November | Safety department
Administration | Evaluation of the UAV acquisition | Comparing table | Staff | November |
Process | Manual writing and training to use the acquired UAV; UAV acquisition | Manuals | Staff | | Company
Control | Model implementation | IRIMUAV (model) | Staff, safety coordinator | November–March | Safety department
Control | Allocation area for beam installation | | | |

Table 1.

Action plan to implement the IRIMUAV model.

Source: Own elaboration.

Figure 8.

Beam placement process. Source: Own elaboration.

During the implementation of the IRIMUAV, at each point of the procedure, observation and recording were performed and included in a list covering the tasks involved in each activity. The list included the following information: the personnel executing the activity, the people involved, the frequency and time of exposure to risks, and the equipment, tools, and machinery to be used. The design of the inspection path was based on the company's specialization, i.e., the most frequent maneuvers, with emphasis on the assembly of steel and concrete bridge straps. The procedure for carrying out this maneuver is shown in Figure 8.

Subsequently, to operate the UAV through the flight map, the marked points or places where the UAV should be positioned to take images and video were defined to identify possible dangers. Starting from the bridge construction project, the activity of bridge beam assembly was selected as the inspection and follow-up route for the UAV. The first step was to carry out reconnaissance flights. Figure 9 shows the flight made by the UAV to take images to detect critical points for the placement of beams, taking into account the hydrometeorological and topographical conditions, architectural design, and materials used for the assembly of beams on each bridge.

Figure 9.

Reconnaissance flight on the right side of Tower 3. Source: Own elaboration.

The UAV collected images and video footage of the beam-mounting activity. Based on this information, a 3D reconstruction of the scene was generated (Figure 6) and from it the present risks were analyzed, obtaining the logbook shown in Table 2.

Day | Risks identified by the resident | Risks identified through the UAV
1 | 15 | 20
2 | 10 | 17
3 | 13 | 20
4 | 13 | 16
5 | 11 | 17
6 | 11 | 18
7 | 14 | 17
8 | 11 | 16
9 | 13 | 21
10 | 12 | 16
11 | 15 | 18
12 | 15 | 20
Total | 153 | 216

Table 2.

Number of risks identified by resident/number of risks identified by UAV.

Source: Own elaboration.

Statistical analysis was then carried out, which consisted of observing, with the aid of simple random sampling, the behavior of variable Y (25 possible accident risks). The risks were assessed using two different methods: (1) through direct observation and the experience of the construction site resident, and (2) through the images and videos captured by the UAV. With the help of the UAV, 216 risks were identified, compared to the 153 risks detected through the site resident's experience, that is, 41.17% more risks were identified during the observation period (Figure 10).

Figure 10.

Comparison of risks identified by resident vs. UAV. Source: Own elaboration.
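The comparison above can be reproduced directly from the daily counts in Table 2. The short sketch below recomputes the totals and the relative increase (the variable names are ours):

```python
# Daily risk counts from Table 2 (days 1-12)
resident = [15, 10, 13, 13, 11, 11, 14, 11, 13, 12, 15, 15]
uav      = [20, 17, 20, 16, 17, 18, 17, 16, 21, 16, 18, 20]

total_resident = sum(resident)   # 153
total_uav = sum(uav)             # 216

# Relative increase of UAV-identified risks over the resident's count
increase_pct = (total_uav - total_resident) / total_resident * 100

print(total_resident, total_uav, f"{increase_pct:.2f}%")  # 153 216 41.18%
```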

A hypothesis test was then carried out to establish whether a higher number of risks can be identified with the help of the UAV than with the traditional method used by the site resident, using SPSS version 9 for Windows. The data were first verified to follow a normal distribution, and a standard large-sample test was performed [36]. The mean and standard deviation were calculated from the data in Table 2. The data taken by the construction site resident were treated as the known population, with mean μ = 3.648649 and standard deviation σ = 1.441725, while the sample mean ȳ = 5.573166 and sample standard deviation s = 1.454058 were estimated from the data obtained by the UAV. The significance level was set to 0.05, with a critical value of Z = 1.96 (see Table 3).

Hypothesis testing:

H0: μ ≤ 3.648649 (approximately 4 risks)
H1: μ > 3.648649 (greater than 4 risks)
Criterion: Z = (ȳ − μ) / (s / √n)
Calculation result: Z = 7.94139

Table 3.

Statistical results of hypothesis testing.

Source: Own elaboration.

As can be seen from the data in Table 3, the calculated value of Z is greater than the tabulated critical value, so the null hypothesis established for the test is rejected, and it is concluded that the number of risks identified by the UAV is higher than that determined by the traditional method. In addition, for this specific study, it is concluded that on average approximately six risks were identified with the help of the UAV, compared with the four average risks of the traditional method.
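The test statistic reported above can be checked numerically. The chapter does not state the sample size; n = 36 is our assumption, chosen because it reproduces the reported Z value from the other published statistics:

```python
import math

# Statistics reported in the chapter
mu = 3.648649      # population mean (resident observations)
y_bar = 5.573166   # sample mean (UAV observations)
s = 1.454058       # sample standard deviation (UAV observations)
n = 36             # assumed sample size (not given in the chapter)

# One-sided large-sample Z test of H0: mean <= mu vs H1: mean > mu
z = (y_bar - mu) / (s / math.sqrt(n))
z_critical = 1.96  # critical value used in the chapter (alpha = 0.05)

print(f"Z = {z:.4f}, reject H0: {z > z_critical}")
```

Since Z ≈ 7.94 far exceeds the critical value, the rejection of H0 is numerically consistent with the chapter’s conclusion.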

With the application of hypothesis testing, and through the Delphi technique, the experts observed favorable results during the risk identification and recognition phase. The procedure increases the value of the model, since it incorporates a more robust and structured logical-deductive process by covering more observation points and more information (images, videos, and 3D reconstruction). The expert opinion also strengthens the proposal to include UAV use for high-risk prevention.

6. Conclusions

As an incremental innovation within the IRIMUAV model, UAVs proved to be a combination that produced a tool that strengthens the safety risk management process in the construction industry. The model also showed characteristics that can support a more effective identification and recognition of technical risks, helping to prevent and reduce accidents at work as well as their associated costs.

Because of its technological capability, the information collected by UAVs reveals many details imperceptible to the naked eye. Technology arises from the development of artifacts, procedures, and norms that facilitate the interaction between people and their environment, ensuring adaptation and the satisfaction of needs through the application of systematic methods with scientific foundations [37].

In addition to innovation, integrating technology is fundamental, as it has become a factor in organizational growth and increased competitive advantage, minimizing to some extent the risk of failure [38]. The risk assessment procedures cited in this document add value for risk and cost reduction through the design of a technological inclusion model using UAVs. UAVs provide greater versatility and extensive use throughout the stages of the process life cycle, since they reduce the limitations of traditional methods. With the capture of images and video and with 3D reconstruction, more exhaustive studies become possible. The use of UAVs also reduces the subjectivity of traditional studies, actively supporting the analyst, broadening human experience, and improving the level of confidence by providing more objective information on the identified risks [24].

Today we live in a global risk society, one that coexists with global risk scenarios such as ecological, financial, and terrorist risks, to name a few, and that needs to generate bold actions and predictive models [39]. In other words, our society should not work so much on the calculation or prevention of present risks as devote itself to anticipating future dangers.

The great diversity of technology available for the analysis of occupational risks allows the industry to face the challenge of carrying out preventive practices focused not on economic issues, but on reducing the frequency of accidents and the seriousness of the harm, as a function of workers’ well-being [40]. The present study showed that technological intervention, specifically the use of UAVs as surveillance robots to address this great need, is beneficial. The broad scope and visibility of UAVs facilitate work on irregular and uneven terrain, and they are a low-cost alternative for aerial surveillance in risk identification compared with other equipment that requires very robust structures.

The experience gained shows that, with adequate risk management based on correct identification, the number of accidents tends to decrease, along with the magnitude of their consequences. In that sense, this work helps to identify the decisive factors in risk development and to locate the central actors, fields of action, and points of convergence where the future is built day by day. This research could therefore have a positive impact on the development of the emerging technology of security robotics using UAVs. The study also identifies aspects related to UAVs, their environment, and their mode of use as part of service robotics, and it provides a framework for further development within the analysis of occupational risks, as well as the opportunity to adopt new technologies as a technological strategy, providing incremental technological innovation, specifically for the risk analysis process within industrial safety, particularly in construction.

How to cite and reference


Alfredo Toriz Palacios, José Maria Enrique Bedolla Cordero, Modesto Raygoza Bello, Edgar Toriz Palacios and Jessica L. Martínez González (June 27th 2018). New Applications of 3D SLAM on Risk Management Using Unmanned Aerial Vehicles in the Construction Industry. In: George Dekoulis (ed.), Drones - Applications. IntechOpen. DOI: 10.5772/intechopen.73325.
