Open access peer-reviewed chapter

Smart-Road: Road Damage Estimation Using a Mobile Device

Written By

Izyalith E. Álvarez-Cisneros, Blanca E. Carvajal-Gámez, David Araujo-Díaz, Miguel A. Castillo-Martínez and L. Méndez-Segundo

Submitted: 24 August 2021 Reviewed: 03 September 2021 Published: 18 October 2021

DOI: 10.5772/intechopen.100289

From the Edited Volume

Information Extraction and Object Tracking in Digital Video

Edited by Antonio José Ribeiro Neves and Francisco Javier Gallegos-Funes


Abstract

Mexico sits on five tectonic plates whose movements generate earthquakes. Depending on their intensity, these movements affect the telecommunications infrastructure. Earthquakes tend to cause landslides, subsidence, and damage to structures in houses, buildings, and roads. Road damage appears as cracks in the pavement, which are classified according to their size, shape, and depth. The methods currently used to inspect roads rely mainly on human perception and are limited to a superficial inspection of the terrain, making this process ineffective for the timely detection of damage. This work presents a road-analysis method that uses a drone to acquire images. A mobile device processes and recognizes the damage, allowing the type of damage on the road to be determined. Artificial intelligence techniques are implemented to classify the damage into linear cracks or zig-zag cracks.

Keywords

  • convolutional neural networks
  • computational vision
  • descriptors
  • road cracks
  • earthquakes

1. Introduction

A country endowed with good road infrastructure can generate the basic elements of competitiveness, provide opportunities for better economic development, and at the same time promote its social and cultural development [1]. Access roads can be affected by several factors, for example: time, use, excessive weight, the quality of materials, location, and natural disasters. Among these, we can highlight the damage caused by earthquakes: the movement of tectonic plates can open fissures at the surface. The resulting deterioration ranges from small cracks to wide ruptures or separations in the road, and these types of incidents occur mainly in the seismic zones of the country.

1.1 Seismicity in Mexico

The Mexican Republic is located in one of the most seismically active regions in the world, immersed within the area known as the Circumpacific Belt (or Pacific Ring of Fire), where the greatest seismic and volcanic activity on the planet is concentrated [2], Figure 1.

Figure 1.

Circumpacific Belt Zone. Source: National Institute of Statistics and Geography.

The Global Seismic Hazard Assessment Program (GSHAP) was a project sponsored by the United Nations (UN) that assembled the first worldwide map of earthquake zones [3]. In Mexico, an earthquake hotspot follows the route through the Sierra Madre Occidental, reaching from south of Puerto Vallarta along the Pacific coast to the border with Guatemala [3]. Figure 2 shows the seismic regionalization of the Mexican Republic, marking zone A as the one with the lowest risk, followed by zones B, C, and D. These last three generate the greatest concern among the scientific community and the inhabitants of these areas, due to the structural damage that occurs on the roads and in their localities.

Figure 2.

Areas with the highest propensity to earthquakes in Mexico [3].

1.2 Road network in Mexico

In Mexico, as in other countries, the road network is the most widely used transport infrastructure. The national network comprises 378,923 km of avenues, streets, highways, and rural roads that connect practically all the towns in the country. Figure 3 shows the main roads that interconnect the Mexican Republic [4].

Figure 3.

Mexico’s major highways 2009 [4].

1.2.1 Damage to land access roads

Seismic activity is recurrent in certain areas and damages the road infrastructure. This damage appears as fissures, cracks in the asphalt, landslides, separation of road sections, subsidence, and other defects on the different access roads that interconnect the country. The roads must be inspected, and the damage detected must be reported and repaired. Figure 4 shows some examples of damage caused by seismicity on different roads in Mexican territory [5].

Figure 4.

Damage to roads caused by earthquakes; left: Chiapas, magnitude 5.6 earthquake; right: Oaxaca, magnitude 5.2. Source: Google.

In Mexico, most road inspections after an earthquake are carried out in person, which can create further conflicts at some critical points. At these points, semi-autonomous surveillance systems are required that use mobile technology to detect damage to access roads. Therefore, we propose a methodology that, through image processing techniques and neural networks, identifies road damage and classifies two types of cracks: linear and zig-zag. This chapter is divided into six sections: Section 2 reviews related work, Section 3 explains the methods and materials, Section 4 presents the tests and results, Section 5 the discussion, and Section 6 the conclusion.


2. Related work

There is research in the field of artificial intelligence on techniques used to automate the detection of road defects. Some related works are described below. In [6], an automatic system for identifying cracks in roads through a camera is developed. It scans the roads by zone and inspects the condition of cracks and fissures. The authors propose the following stages: i) smooth, adjust, and binarize the image using a threshold value method; ii) perform morphological operations such as dilation and erosion; iii) eliminate false cracks in the image with smoothing filters; iv) clean and connect the cracks in the image; and finally v) estimate the shape of the crack using geometric characteristics and shape descriptors. In [7], the authors address the descriptors using texture classifiers. These techniques detect color and texture changes in an image and identify edges by extracting a set of characteristics generated from histograms. For each frame of the pavement video analyzed, the method extracts the characteristics and creates a binary version to classify each region.

In [8], the authors apply morphological operations to the images and segment them using the k-nearest neighbor (KNN) method. The proposed algorithm highlights the texture information of the image, and the results are classified using the standard deviation to define regions delimited by gray-level intensity. These techniques allow patches on the roads to be detected through images processed on a smartphone. Figure 5 shows the results presented by the authors.

Figure 5.

Detection of the mobile damage system. a) Hole in the pavement, b) longitudinal crack, c) transverse crack, and d) horizontal crack [8].

In [9], a system for identifying cracks in buildings from an unmanned aerial vehicle (UAV) equipped with a camera is presented. The UAV flies around the building to acquire images, which are transmitted remotely via Wi-Fi to a computer for processing. The images are converted from the Red, Green, Blue (RGB) color space to grayscale and segmented. The threshold is calculated with statistical methods (mean and standard deviation) to categorize the black and white pixels and identify the cracks in the building.

Maeda et al. [9] developed a system for identifying cracks in the pavement, where images are captured with a smartphone mounted on a holder on the dashboard of a car. They developed an application that analyzes the images from the smartphone through a deep neural network to identify cracks in the road. In this work, they use deep neural networks such as Region-based Convolutional Neural Networks (R-CNN), You Only Look Once (YOLO), and the Single Shot MultiBox Detector (SSD) to extract characteristics from the region of interest (cracks). In 2019, Zhang et al. [10] proposed an intelligent monitoring system to evaluate pavement damage; this methodology uses a set of points from an image obtained by a UAV, making use of the Harris detector and performing the processing in the cloud to identify cracks in the pavement.


3. Methods and materials

In Figure 6, the general architecture of the proposed methodology for the classification and identification of linear and zig-zag cracks is shown.

Figure 6.

Proposed architecture for the identification and classification of cracks.

Figure 6 shows that the methodology is composed of different development stages: image acquisition, pre-processing, descriptors, classification, and result. Each of these stages is detailed below.

3.1 Image acquisition

In this step, the image is taken with the camera of the PARROT BEBOP 2 FPV drone, which has the following characteristics: 14-megapixel camera with wide-angle lens, digital image stabilization, live video on a smartphone or tablet with a viewing angle of 180°, photo formats RAW, JPEG, and DNG, and an image resolution of 3800 × 3188 pixels. To automate the drone's route, a function that traces the flight path is used. Figure 7 shows the programmed route map.

Figure 7.

Simulation of a programmed route at 15 m.

3.2 Pre-processing

3.2.1 Dimension reduction

In this section, image scaling is performed by applying the Haar Discrete Wavelet Transform (DWT-H). Figure 8 shows three levels of decomposition.

Figure 8.

DWT-H decomposition.
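As a minimal illustration of this dimension-reduction step, the sketch below (Python with the PyWavelets library, an assumed tooling choice rather than the authors' implementation) keeps only the Haar approximation sub-band at each decomposition level, halving the image dimensions each time.

```python
import pywt

def haar_reduce(gray_image, levels=3):
    """Reduce the image size with the Haar DWT by keeping only the
    approximation sub-band (cA) at each level; the detail sub-bands
    (cH, cV, cD) are discarded."""
    approx = gray_image.astype("float32")
    for _ in range(levels):
        approx, (cH, cV, cD) = pywt.dwt2(approx, "haar")
    return approx

# Example: a 2048 x 2048 input becomes roughly 256 x 256 after 3 levels.
```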

3.2.2 Edge enhancement

To enhance the edges of the image obtained in Section 3.2.1, the use of a Laplacian filter is proposed. The Laplacian of an image highlights regions of rapid intensity change and is an example of a second-order (second-derivative) enhancement method. It is particularly good at finding the fine details of an image: any feature with a sharp discontinuity is enhanced by the Laplacian operator [11]. The Laplacian is a well-known linear differential operator approximating the second derivative, given by Eq. (1):

\nabla^{2} f = \frac{\partial^{2} f}{\partial x^{2}} + \frac{\partial^{2} f}{\partial y^{2}} \qquad (1)

where f denotes the image. The following process is then performed: a 3 × 3 kernel is convolved with the image, Figure 9.

Figure 9.

Convolution of the 3 × 3 kernel at a point (x, y) in the image.

Figure 10 shows the Laplacian filtering process, which consists of the following steps: from the image obtained by the DWT-H (Figure 10a), a convolution is performed with the proposed 3 × 3 kernel (Figure 10b). Finally, the sub-image of the crack is obtained with the edges highlighted, as seen in Figure 10d.

Figure 10.

The result obtained by the Laplacian Filter.
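A minimal sketch of this edge-enhancement step is shown below, assuming OpenCV; since the chapter does not list the exact kernel coefficients, a common 4-neighbour Laplacian kernel is used as a placeholder.

```python
import cv2
import numpy as np

# Common 3x3 Laplacian kernel (4-neighbour variant); the exact coefficients
# used by the authors are not given, so this choice is an assumption.
LAPLACIAN_KERNEL = np.array([[0,  1, 0],
                             [1, -4, 1],
                             [0,  1, 0]], dtype=np.float32)

def enhance_edges(gray_image):
    """Convolve the DWT-H approximation with the Laplacian kernel (Eq. (1))
    to highlight rapid intensity changes such as crack borders."""
    lap = cv2.filter2D(gray_image.astype(np.float32), -1, LAPLACIAN_KERNEL)
    return cv2.convertScaleAbs(lap)  # scale back to 8-bit for later steps
```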

3.3 Feature extraction

One of the main objectives of this work is to implement the methodology on a mobile device that performs the image processing offline, so the result is obtained on site. It is therefore essential to extract only the key points that provide information about the outstanding features of the image, making their classification by the LeNet convolutional neural network efficient. To do this, we propose extracting the characteristics through the scale-invariant feature transform (SIFT) and rearranging the pixels returned by the Laplacian filter through statistical moments. The feature extraction is described below.

3.3.1 Statistical central moments

Central moments, also referred to as moments about the mean, are calculated as [12], Eq. (2):

\mu_{m} = \sum_{n=0}^{L-1} (X_{n} - y)^{m}\, t(X_{n}) \qquad (2)

where m is the order of the moment, L is the number of possible intensity values, Xn is the discrete variable that represents the intensity level in the image, y is the mean of the values, and t(Xn) is the probability estimate of the occurrence of Xn, Eq. (3):

y = \sum_{n=0}^{L-1} X_{n}\, t(X_{n}) \qquad (3)

The mean is the first-order moment, followed by the variance, skewness, and kurtosis as the second, third, and fourth moments. The mean, as the first-order central moment, measures the average intensity value of the pixel distribution. The variance (μ2) measures how widely the pixels spread from the mean value, Eq. (4):

\mu_{2} = \sum_{n=0}^{L-1} (X_{n} - y)^{2}\, t(X_{n}) \qquad (4)

To determine the dispersion of the values located as key points by SIFT, the second central moment is used to group the pixels of the image processed by the Laplacian filter. The texture smoothness R is defined by Eq. (5),

R = 1 - \frac{1}{1 + \mu_{2}(x)} \qquad (5)

where μ2 is the variance and x is an intensity level. The following condition is then established, Eq. (6):

I_{Fissure}(i,j) = \begin{cases} 1, & \text{if } R(y) = 1 \\ 0, & \text{otherwise} \end{cases} \qquad (6)
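A minimal sketch of Eqs. (2)–(6) is given below, assuming an 8-bit grayscale input (L = 256) and Python/NumPy; the decision threshold applied to R is an assumption, since Eq. (6) only states the comparison.

```python
import numpy as np

def central_moment(gray_image, m, L=256):
    """Central moment of order m (Eq. (2)) computed from the normalized
    histogram t(Xn); the mean y follows Eq. (3)."""
    hist, _ = np.histogram(gray_image, bins=L, range=(0, L))
    t = hist / hist.sum()          # probability estimate t(Xn)
    xn = np.arange(L)
    y = np.sum(xn * t)             # mean, Eq. (3)
    return np.sum(((xn - y) ** m) * t)

def smoothness(gray_image, L=256):
    """Texture smoothness R of Eq. (5), using the variance of Eq. (4)."""
    mu2 = central_moment(gray_image, 2, L)
    return 1.0 - 1.0 / (1.0 + mu2)

def fissure_decision(gray_block, r_threshold=0.99):
    """Eq. (6) decision for a block of the filtered image: 1 when R reaches
    the chosen threshold, 0 otherwise (the threshold value is an assumption)."""
    return 1 if smoothness(gray_block) >= r_threshold else 0
```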

3.3.2 SIFT

According to the SIFT methodology [13], the first step is scale detection. For the particular case of the crack contour, this step is very useful for identification, since the images can be taken at different shooting distances. This step is formally described below.

3.3.2.1 Scale detection

The scale space L(x, y, σ) of an image is obtained from the convolution of an input image IFissure with a Gaussian filter G(x, y, σ) at different scales of the value σ = 0.5 [13], Eq. (7):

L(x, y, \sigma) = G(x, y, \sigma) * I_{Fissure}(x, y) \qquad (7)

where G(x, y, \sigma) = \frac{1}{2\pi\sigma^{2}}\, e^{-(x^{2} + y^{2})/2\sigma^{2}} is the Gaussian filter function; it is applied in both dimensions (x, y) of the I_{Fissure} image plane.

To obtain the different scaled versions of the IFissure image, the value of σ is multiplied by different values of the constant k to obtain the projections of the contiguous scales (where k > 1). Each scale's projection is subtracted from the original scale, obtaining the differences from the original image IFissure, Eq. (8):

D(x, y, \sigma) = L(x, y, k\sigma) - L(x, y, \sigma) \qquad (8)
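A minimal sketch of the scale-space construction of Eqs. (7) and (8) is shown below, assuming OpenCV's Gaussian filtering with σ = 0.5 and k = √2 (the value of k is an assumption, since the chapter only requires k > 1).

```python
import cv2
import numpy as np

def difference_of_gaussians(i_fissure, sigma=0.5, k=2 ** 0.5, n_scales=4):
    """Build L(x, y, sigma) at a few neighbouring scales (Eq. (7)) and
    subtract adjacent scales to obtain the differences of Eq. (8)."""
    img = i_fissure.astype(np.float32)
    blurred = [cv2.GaussianBlur(img, (0, 0), sigma * (k ** i))
               for i in range(n_scales)]
    return [blurred[i + 1] - blurred[i] for i in range(n_scales - 1)]
```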

The search for extreme values in the scale space produces multiple candidates, from which the low-contrast points are discarded, since they are not stable to changes in lighting and noise. Eq. (9) shows how the points of interest are located within the image; these locations are given by [13]:

z = -\left( \frac{\partial^{2} D(x, y, \sigma)}{\partial x^{2}} \right)^{-1} \frac{\partial D(x, y, \sigma)}{\partial x} \qquad (9)

Subsequently, the vectors are arranged according to the orientation of the points obtained from Eq. (9), as explained below.

3.3.2.2 Orientation mapping

This step assigns a constant orientation to the key points based on the properties of the image obtained in the previous steps. The key point descriptor can be represented with this orientation, achieving invariance to rotation, which is important because the image can be taken at different shooting angles. The procedure to find the orientation of the points is as follows [13]:

Using the scale value of the points of interest selected in Eq. (4):

  1. Calculation of the magnitude value, M.

M(x, y) = \sqrt{\left( L(x+1, y) - L(x-1, y) \right)^{2} + \left( L(x, y+1) - L(x, y-1) \right)^{2}} \qquad (10)

  2. Calculation of orientation, θ

\theta(x, y) = \tan^{-1}\left( \frac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)} \right) \qquad (11)

Finally, from the description of the characteristic points obtained in the previous steps, the points of interest are identified, Figure 11.

Figure 11.

Results obtained from the proposed methodology: a) original image, b) image obtained with the DWT-H, c) image with the Laplacian filter, d) image obtained with the descriptors, and e) final image.
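The gradient computations of Eqs. (10) and (11), and the keypoint extraction they support, can be sketched as follows (Python with NumPy/OpenCV assumed; OpenCV's SIFT implementation internally covers the scale detection and orientation assignment described above, so it stands in for the authors' exact pipeline):

```python
import cv2
import numpy as np

def gradient_magnitude_orientation(L_img):
    """Eqs. (10) and (11): magnitude M(x, y) and orientation theta(x, y)
    from central differences of the smoothed image L (borders wrap, which
    is acceptable for a sketch)."""
    L_img = L_img.astype(np.float32)
    dx = np.roll(L_img, -1, axis=1) - np.roll(L_img, 1, axis=1)  # L(x+1,y) - L(x-1,y)
    dy = np.roll(L_img, -1, axis=0) - np.roll(L_img, 1, axis=0)  # L(x,y+1) - L(x,y-1)
    magnitude = np.sqrt(dx ** 2 + dy ** 2)
    orientation = np.arctan2(dy, dx)  # radians
    return magnitude, orientation

def sift_keypoints(i_fissure_8bit):
    """SIFT keypoints and descriptors on the edge-enhanced crack image
    (expects an 8-bit grayscale array)."""
    sift = cv2.SIFT_create()
    return sift.detectAndCompute(i_fissure_8bit, None)
```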

3.3.3 Convolutional neural network LeNet

Based on the characteristics obtained, the neural network is trained to identify the cracks that appear in the image. The network used in this research is the LeNet convolutional neural network, whose architecture is made up of five layers of neurons, with an input of 1024 × 1024 × 3 values and an output of two possible classes [14]. LeNet is a network optimized for mobile devices, which allows greater efficiency in the detection and in the performance of the processes on the mobile device. The network architecture is presented in Figure 12 [14]:

Figure 12.

LeNet neural network architecture [14].
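A minimal sketch of a LeNet-style network with the input and output sizes stated above (1024 × 1024 × 3 in, two classes out) is shown below, assuming Keras; the filter counts follow the classic LeNet-5 as placeholders, since the authors' exact layer configuration is not given, and a dense head at full 1024 × 1024 resolution would be heavy in practice.

```python
from tensorflow.keras import layers, models

def build_lenet(input_shape=(1024, 1024, 3), num_classes=2):
    """LeNet-style CNN: two convolution/pooling blocks followed by dense
    layers, ending in a two-class softmax (ZZC vs. LC)."""
    model = models.Sequential([
        layers.Conv2D(6, (5, 5), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(16, (5, 5), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(120, activation="relu"),
        layers.Dense(84, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```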

For the training of the network, a collection of approximately 500 images was gathered in various areas of Tecámac, State of Mexico. In this investigation, cracks of different intensities are detected, so they are identified and classified into the following categories [15]:

  1. Erratic or zig-zag cracks (ZZC): cracks in the pavement with erratic longitudinal patterns. They are caused by extreme changes in temperature, a defective base, and seismic movements.

  2. Significant cracks (LC): These are cracks with a length greater than 30 centimeters.

  3. Very significant cracks (VSC): cracks in the pavement with a length greater than 60 centimeters; due to their size, they are a risk. These cracks are the most visible.

  4. Non-significant cracks (NSC): cracks that appear in the pavement with a fine shape and a length of less than 30 centimeters. Figures 13 and 14 show images of the classifications that were delimited for identification; they define the two classes to detect, zig-zag crack (ZZC) and linear crack (LC), respectively.

Figure 13.

Zig-zag cracks.

Figure 14.

Linear cracks.


4. Tests and results

The tests were divided into phases to estimate the time of each one and thus detect which of them consumes the most time. The processing was carried out on a Motorola X4 mobile device with a 2.2 GHz processor and 3 GB of RAM. To select the optimal distance for taking the images, tests were carried out between 10 and 30 meters above ground level. At each distance, it was verified that the images were clear and that the crack could be visualized. Figure 15 shows the range of height and image visibility for the 500 sample images. From Figure 15, we can see that at a height of 10 meters the drone has a visibility range of 26 meters in radius. Similarly, heights of 15, 20, 25, and 30 meters correspond to visibility radii of 40, 53, 67, and 80 meters. For our case, we consider a height between 15 and 20 meters.

Figure 15.

Parrot Drone viewing distances in meters.
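The heights and visibility radii quoted above correspond to a roughly constant radius-to-height ratio of about 2.6 to 2.7 (an effective half-angle of view near 69°). The short check below reproduces that ratio from the reported numbers; the half-angle is derived from the data, not taken from the camera datasheet.

```python
import math

# Flight heights (m) and visibility radii (m) reported in Figure 15.
heights = [10, 15, 20, 25, 30]
radii = [26, 40, 53, 67, 80]

for h, r in zip(heights, radii):
    half_angle = math.degrees(math.atan(r / h))  # effective half field of view
    print(f"height {h:2d} m -> radius {r:2d} m, "
          f"ratio {r / h:.2f}, half-angle {half_angle:.1f} deg")
```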

4.1 Phase 1. Distance estimation

To validate the distances shown in Figure 15 and their visibility range, four consecutive objects were placed on the crack in the road. In Figure 16, only three of these objects can be observed, enclosed in circles. The objects placed on the crack measure 10 × 10 cm and were used to estimate the field of view of the drone camera. Based on these tests, a height of 15 meters is proposed for clear detection of the object by the drone, which also favors its stability in the air currents present during the tests.

Figure 16.

Range of visibility of objects: a) and b) no objects present; c) 15 meters away; and d) 30 meters away.

4.2 Phase 2. Estimation of the pre-processing stage

Table 1 shows the average times calculated over the acquired samples; as the DWT-H decomposition level increases, the average processing time increases. Table 1 lists the processing times of the feature extraction and classification stage for each DWT-H decomposition scale. The initial image dimension is 2048 × 2048; at decomposition level 4, the processing time for the two proposed stages is 14.445 ms.

Scale number (DWT-H) | Dimension (pixels) | Processing time average (ms) | Convolution time average (ms) | Total processing time average (ms)
1 | 2048 × 2048 | 14.487 | 0.0196 | 14.507
2 | 1024 × 1024 | 14.506 | 0.0094 | 14.516
3 | 512 × 512 | 14.561 | 0.0087 | 14.570
4 | 256 × 256 | 14.737 | 0.0083 | 14.574

Table 1.

Result of pre-processing stage time.

4.3 Phase 3. Descriptors

Table 2 shows the average results obtained for the 500 images in the feature extraction stage. From Table 1, it is concluded that the optimal wavelet decomposition size for this estimation is at the fourth wavelet decomposition level.

Image size (pixels) | Feature extraction, statistical moments (ms) | Feature extraction, SIFT (ms) | Total processing time average (ms)
1024 × 1024 | 0.0898 | 0.1826 | 0.2724

Table 2.

Descriptor stage processing time result.

4.4 Phase 4. Classification of images

The tests to validate the proposed methodology were carried out with 150 images acquired at a height of 15 meters. During the development of the test scenarios, four cases were considered: two correct classifications and two incorrect ones. The correct classifications are true positive (TP) and true negative (TN), and the misclassifications are false positive (FP) and false negative (FN). Using these counts, different performance measures can be obtained [13]:

Sp = \frac{TN}{TN + FP} \qquad (12)
Se = \frac{TP}{TP + FN} \qquad (13)
Acc = \frac{TP + TN}{\text{cracks detected in the image}} \qquad (14)

where specificity (Sp) is the ability to detect non-crack pixels, sensitivity (Se) reflects the ability of the algorithm to detect the edge of the crack, and accuracy (Acc) measures the proportion of pixels classified correctly (the sum of true positives and true negatives) over the total number of pixels that constitute the image of the cracks [13]; this is the probability that a pixel belonging to the crack image will be correctly identified. Table 3 shows the results obtained from the 150 test images acquired in flight: 140 images (TP), 1 (FN), 5 (FP), and 4 (TN).

150 images | Prediction: Positive | Prediction: Negative
True | (TP) = 140 | (FN) = 1
False | (FP) = 5 | (TN) = 4

Table 3.

Confusion matrix.
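For reference, a minimal sketch of how the metrics of Eqs. (12)–(14) are computed from confusion-matrix counts such as those in Table 3 (Python assumed; the counts are passed as parameters rather than hardcoded):

```python
def performance_metrics(tp, tn, fp, fn):
    """Specificity, sensitivity, and accuracy as defined in Eqs. (12)-(14)."""
    sp = tn / (tn + fp)                    # Eq. (12): non-crack pixels detected
    se = tp / (tp + fn)                    # Eq. (13): crack pixels detected
    acc = (tp + tn) / (tp + tn + fp + fn)  # Eq. (14): overall proportion correct
    return sp, se, acc
```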

Table 4 shows the Acc, Sp, and Se results obtained from the 150 acquired test images. The Acc of 99.29% indicates the proportion of cracks that were detected and classified correctly. In addition, the Sp of 96.55% represents the ability to correctly identify regions without cracks, while the Se of 80% reflects the ability to detect the cracks themselves.

Metric | Result
Acc | 0.9929
Sp | 0.9655
Se | 0.8000

Table 4.

The obtained results from the acquired images.

Figure 17 shows some images obtained through the proposed methodology and the corresponding classification results.

Figure 17.

Results obtained by the proposed methodology: a), c), and e) original image, b) and d) processed image (ZZC), finally f) processed image (LC).

Finally, the mobile application that serves as the development and user interface is shown in Figure 18.

Figure 18.

Graphical user interface for interaction with the proposed methodology: a) LC detected and b) ZZC detected.


5. Discussion

Based on the tests carried out while monitoring roads with the Parrot drone, we observed that a flight height between 15 and 20 meters gives satisfactory results. Within the development of the proposal, the size-reduction stage made it possible to speed up the feature extraction, as did the reduction of the key points obtained by the statistical descriptors and SIFT through Eq. (6). These development stages are fundamental because all the crack detection and identification processing is carried out internally on a mid-range mobile device. The LeNet neural network stage was also streamlined by the pre-processing stage, and we observed that the precision results, around 99%, were not affected even though the data entered into the neural network were limited.


6. Conclusion

In conclusion, the objective of identifying cracks in roads, streets, highways, and avenues was achieved. The proposed processes had the specific characteristics that allowed them to run on a mobile device, and it was possible to demonstrate that the proposed methodology runs on Android, which to date is one of the most widely used mobile platforms worldwide. The pre-processing results show a clear trend in the time required to adapt an image and perform the crack identification process, a time that does not exceed 14.79 ms, thanks to the use of the DWT-H instead of other processes that require greater computational complexity for image size reduction. In addition, the results show that the proposed operations are 99% accurate in finding cracks. It was also found that the times of certain stages can be improved by changing some processes, such as the scaling of the images, which reduces the time by up to 200 milliseconds, among other possible improvements.


Acknowledgments

The work team is grateful for the support provided for this research by the Secretaría de Educación, Ciencia, Tecnología e Innovación de la Ciudad de México through projects SECITI/072/2016 and SECTEI/226/2019. We also thank the Instituto Politécnico Nacional for research project SIP 20210178.


Conflict of interest

The authors declare no conflict of interest.

References

  1. Economic Commission for Latin America and the Caribbean. Local Economic Development and Decentralization in Latin America: Comparative Analysis. 2001. Available from: https://www.cepal.org/es/publicaciones/2691-desarrollo-economico-local-descentralizacion-america-latina-analisis-comparativo [Accessed: 27 July 2021]
  2. Mexican Geological Service. Evolution of Tectonics in Mexico. 2017. Available from: https://www.sgm.gob.mx/Web/MuseoVirtual/Riesgos-geologicos/Evolucion-tectonica-Mexico.html [Accessed: 27 July 2021]
  3. Alden A. The World's Major Earthquake Zones. 2020. Available from: https://www.thoughtco.com/seismic-hazard-maps-of-the-world-1441205 [Accessed: 06 August 2021]
  4. Geo-Mexico. Geo-Mexico: The Geography and Dynamics of Modern Mexico. 2015. Available from: https://geo-mexico.com/?p=12955 [Accessed: 27 July 2021]
  5. Secretary of Communications and Transportation. Federal Roads and Bridges Package for Income and Related Services. 2018. Available from: https://www.sct.gob.mx/fileadmin/Transparencia/rendicion-decuentas/MD/34_MD.pdf [Accessed: 27 July 2021]
  6. Porras Díaz H, Castañeda Pinzón E, Sanabria Echeverry D, Medina Pérez G. Detección automática de grietas de pavimento asfáltico aplicando características geométricas y descriptores de forma. Dialnet. 2012;8:261-280
  7. Radopoulou S, Brilakis I. Patch detection for pavement assessment. Automation in Construction. 2015;53:95-104. DOI: 10.1016/j.autcon.2015.03.010
  8. Tedeschi A, Benedetto F. A real-time automatic pavement crack and pothole recognition system for mobile Android-based devices. Advanced Engineering Informatics. 2017;32:11-25. DOI: 10.1016/j.aei.2016.12.004
  9. Maeda H, Sekimoto Y, Seto T, Kashiyama T, Omata H. Road damage detection using deep neural networks with images captured through a smartphone. Computer-Aided Civil and Infrastructure Engineering. 2018;2018:1-14. DOI: 10.1111/mice.12387
  10. Zhang B, Liu X. Intelligent pavement damage monitoring research in China. IEEE Access. 2019;7:45891-45897. DOI: 10.1109/ACCESS.2019.2905845
  11. Bhairannawar S. Efficient medical image enhancement technique using transform HSV space and adaptive histogram equalization. In: Soft Computing Based Medical Image Analysis. ScienceDirect; 2018. pp. 51-60. DOI: 10.1016/B978-0-12-813087-2.00003-8
  12. Prabha D, Kumar J. Assessment of banana fruit maturity by image processing technique. Journal of Food Science and Technology. 2013;2013:1-13. DOI: 10.1007/s13197-013-1188-3
  13. Ramos-Arredondo RI, Carvajal-Gámez BE, Gendron D, Gallegos-Funes FJ, Mújica-Vargas D, Rosas-Fernández JB. PhotoId-Whale: Blue whale dorsal fin classification for mobile devices. PLoS One. 2020;15(10):e0237570. DOI: 10.1371/journal.pone.0237570
  14. PyImageSearch. LeNet – Convolutional Neural Network in Python. 2016. Available from: https://www.pyimagesearch.com/2016/08/01/lenet-convolutional-neural-network-in-python/ [Accessed: 27 July 2021]
  15. Secretary of Communications and Transport. Catalog of Deterioration in Flexible Pavements of Mexican Highways. 1991. Available from: https://imt.mx/archivos/Publicaciones/PublicacionTecnica/pt21.pdf [Accessed: 27 July 2021]
