Open access peer-reviewed chapter

Using Unmanned Aerial Systems and Deep Learning for Agriculture Mapping in Dubai Emirate

Written By

Lala El Hoummaidi, Abdelkader Larabi and Khan Alam

Submitted: 12 July 2023 Reviewed: 13 July 2023 Published: 25 September 2023

DOI: 10.5772/intechopen.1002436

From the Edited Volume

Drones - Various Applications

Dragan Cvetković


Abstract

Dubai’s ‘Sustainable Future’ vision prioritizes Sustainable Agriculture as a key pillar of its ‘Food Security Strategies’. To boost productivity and efficiency, Dubai Emirate has adopted advanced technologies. Accurate land monitoring is crucial for effective food security control and support measures. However, traditional methods relying on costly and time-consuming field surveys conducted by experts are limited in scope. To address this, affordable and efficient agriculture mapping relies on remote sensing through drone surveys. Dubai Municipality utilizes Unmanned Aerial Vehicles (UAVs) to map farming areas across the Emirate, identify cultivable lands, and establish a precise agriculture database. A study conducted over 6 months used Trimble UX5 (HP) drones for high-resolution imaging in 12 Dubai communities. It employed novel object detection methods and geospatial analysis. Deep learning models achieved 85.4% accuracy in vegetation cover and F1-scores of 96.03% and 94.54% for date palms and GHAF trees, respectively, compared to ground truth data. This research highlights the potential of UAVs and deep learning algorithms for large-scale sustainable agricultural mapping. By providing specialists with an integrated solution to measure and assess live green vegetation cover derived from processed images, it contributes to the advancement of sustainable agriculture practices.

Keywords

  • precision agriculture
  • multispectral imaging
  • UAV
  • remote sensing
  • machine learning
  • deep learning

1. Introduction

Undoubtedly, agriculture plays a pivotal role in ensuring the sustainability of economies [1, 2]. Its significance, however, may vary across different countries [3, 4, 5, 6]. While agriculture was traditionally limited to food and crop production, it has now expanded in numerous countries to encompass processing, marketing, and distribution of agricultural products. Agricultural activities not only serve as a primary source of livelihood and contribute to GDP growth [7], but also drive national trade, reduce unemployment, provide raw materials for other industries, and contribute to overall economic development [8, 9, 10].

Remote sensing techniques, including soil property mapping, crop type classification, crop water stress detection, disease monitoring, and crop yield mapping, have gained widespread adoption in both governmental and private sectors [11, 12]. Leveraging sensors and geospatial analysis tools, remote sensing brings together data from multiple sources to support decision-making in agriculture. Unmanned Aerial Systems (UAS) or drones, with their flexible spatial and spectral resolution, have become a preferred platform for collecting such data [13]. Additionally, remote sensing-based land cover classification has found applications in change detection monitoring, agricultural management, green vegetation classification, biodiversity conservation, land use planning, and urban planning [14, 15]. Vegetation detection stands out as a significant application of land cover classification. Consequently, researchers and experts have explored various methods, including digital photo interpretation, supervised and unsupervised classification, classification and regression trees (CART), and Deep Learning Object Detection [16, 17, 18, 19, 20].

Deep learning techniques have gained prominence in land cover classification since 2012. Remarkable progress in computer vision applications such as image classification, object detection, tracking, and semantic segmentation has enabled researchers to explore various methodologies. Snehal et al. [21] utilized convolutional networks to tackle the challenge of multispectral image classification. In a separate study, Zhang et al. [22] conducted an in-depth analysis of different techniques for object detection in land cover classification, focusing on high-resolution multispectral imagery. Their study compared deep learning models with traditional methods, concluding that deep learning-based approaches leveraging both spatial and spectral information outperformed conventional pixel-based methods. Other studies have also demonstrated the potential of artificial intelligence and deep learning methods for land cover classification and vegetation detection [23, 24, 25].

Amidst all-time low commodity prices and increasing pressure for enhanced product quality, the modern farming industry faces challenges that necessitate improved resource management. Dubai Emirate shares this need, as it aims to leverage its re-export hub and global gateway status in the fresh food sales sector. Notably, the number of organic farms in Dubai increased by 53% in 2019, with production rising by 89% from 1240 tons to 2356 tons [26]. To support an efficient workflow for assessing crop health, making informed decisions, and mitigating losses due to disease outbreaks or extreme weather events, Dubai Municipality has initiated projects utilizing drones and connected analytics for surveying and mapping agricultural areas. High-resolution multispectral drones enable growers, service providers, and researchers to efficiently scout crops, identify stress, track plant growth, and access real-time quality data, ultimately reducing costs and improving yields. Moreover, multispectral data reveals field variability invisible to the naked eye, aiding in early disease detection and response.

This chapter focuses on evaluating the suitability of UAS-based remote sensing, using a novel object-based vegetation detection method that combines NDVI and deep learning techniques, for monitoring crops in Dubai. The contributions of this study include the introduction of a superior object-based vegetation and tree detection method using NDVI and deep learning, highlighting the potential use of NDVI imagery as an alternative to standard RGB images, and discussing the reasons behind the superior performance of our deep learning model compared to other methods, along with potential strategies for its further application.


2. Materials and methods

2.1 Study area

Dubai Emirate, the second largest of the seven emirates comprising the United Arab Emirates, is strategically located along the southeastern coast of the Arabian Gulf, centered near 55°18′14.82″ East, 25°16′17.46″ North. Encompassing approximately 3900 square kilometers, the emirate extends for about 72 kilometers along the Arabian Gulf coastline. Dubai is bordered by Sharjah to the northeast, the emirate of Abu Dhabi to the south, and the Sultanate of Oman to the southeast. The administrative boundaries of Dubai Emirate, depicted in Figure 1, delineate its territorial extent, which accounts for approximately 5% of the total area of the United Arab Emirates.

Figure 1.

Dubai emirate and major cities within the United Arab Emirates.

The diverse landscape of Dubai encompasses shallow shores, sandy deserts, and coral reefs. With over 300 species of fish inhabiting its waters, the rich marine life of Dubai has served as a significant source of income for its residents for thousands of years. Date palms dominate much of Dubai’s cultivated land, primarily found in the arc of small oases that make up the Hatta Area. Dubai Municipality supports farmers through various incentives, such as a 50% subsidy on fertilizers, seeds, and pesticides. Additionally, it offers loans for machinery and provides technical assistance [27, 28].

According to the data presented in Table 1, vegetable cultivation accounts for 13% of the total cultivated land in Dubai Emirate, while fruit crops occupy 32%, feed crops cover 14%, and the remaining 41% is allocated to forest trees, temporary fallow, and other uses. Notably, the Hatta region demonstrates high productivity due to its access to underground water sources from the nearby mountains of Oman, which benefit from abundant rainfall. The main crops cultivated there include tomatoes, melons, and dates. Despite the challenges posed by the desert environment, vegetable production in Dubai has successfully overcome these obstacles, reaching over 27,000 tons in 2019, as shown in Table 2.

| Year | Forest trees (dūnum) | Vegetables (dūnum) | Fruit trees (dūnum) | Feed crops (dūnum) | Temporary fallow (dūnum) | Other lands (dūnum) | Total (dūnum) |
|---|---|---|---|---|---|---|---|
| 2017 | 760 | 1765 | 15,810 | 3177 | 1725 | 21,645 | 44,882 |
| 2018 | 784 | 1660 | 15,903 | 3218 | 1664 | 18,464 | 41,693 |
| 2019 | 784 | 7529 | 20,409 | 8108 | 1917 | 23,168 | 61,914 |
| Area % | 1% | 13% | 32% | 14% | 3% | 37% | 100% |

Table 1.

Land use distribution in Dubai, area in dunums (dūnum), 2017–2019.

| Crop | Value (000 AED) | Average production (tons/dūnum) | Quantity (tons) | Area (dūnum) |
|---|---|---|---|---|
| Tomatoes | 18,030.9 | 6.2 | 6297.9 | 1010.8 |
| Cucumber | 8166.9 | 9.4 | 2946.7 | 313.9 |
| Pepper | 1637.1 | 4.9 | 403.3 | 82.5 |
| Squash | 8517.3 | 2.3 | 2793.9 | 1219.8 |
| Eggplants | 4575.6 | 3.3 | 2333.8 | 716.6 |
| Cauliflower | 7216.0 | 3.5 | 2590.3 | 740.1 |
| Cabbage | 7488.8 | 6.0 | 4082.0 | 680.3 |
| Watermelon | 474.9 | 3.0 | 391.9 | 130.3 |
| Leafy vegetables | 1740.7 | 1.3 | 579.9 | 447.1 |
| Other | 10,567.6 | 2.3 | 4970.5 | 2187.3 |
| Total | 68,415.8 | 3.6 | 27,390.3 | 7528.8 |

Table 2.

Vegetables by crop in Dubai – 2019.

Tomatoes, cabbage, eggplant, squash, and cauliflower serve as the primary vegetable crops that fulfill a significant portion of the Emirate’s demand during the respective growing season. Furthermore, citrus fruits and mangoes are the main fruit crops cultivated in addition to dates [29].

Dubai has successfully addressed significant challenges, including harsh environmental conditions, limited water resources, and soil salinity, through innovative solutions, often involving tapping into underground aquifers or accessing water supplies from the mountains. As indicated in Table 3, agriculture in Dubai is practiced on approximately 8000 dunums of cultivable land, with a significant portion dedicated to Rhode grass and alfalfa.

| Crop | Value (000 AED) | Average production (tons/dūnum) | Quantity (tons) | Area (dūnum) |
|---|---|---|---|---|
| Alfalfa | 20,056.8 | 6.0 | 12,535.5 | 2089.2 |
| Rhode grass | 32,362.3 | 6.0 | 21,574.9 | 3595.8 |
| Sorghum | 11,953.0 | 6.0 | 7470.6 | 1245.1 |
| Maize | 2059.4 | 2.8 | 2059.4 | 748.9 |
| Other | 3343.1 | 6.0 | 2571.6 | 428.6 |
| Total | 69,774.6 | 5.7 | 46,212.0 | 8107.6 |

Table 3.

Total and area of field crops by crop in Dubai – 2019.

2.2 Overall study workflow

The workflow, illustrated in Figure 2, consists of multiple steps: acquiring multispectral imagery, labeling, dataset preparation, model training, object detection, result analysis, and field validation. The ArcGIS API for Python was used to export training data and train deep learning models within the study area. PyTorch and fast.ai libraries were utilized for data preparation, augmentation, and model training. The object detection models employed the PASCAL_VOC_rectangles format for training samples. Data preparation involved organizing and formatting training labels, splitting data, applying augmentation, and creating data structures. ArcGIS Pro facilitated these tasks, allowing direct reading of training samples and creating a suitable DataBunch with specified parameters.
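To make this step concrete, the minimal sketch below shows how training samples exported from ArcGIS Pro in PASCAL_VOC_rectangles format can be loaded into a fastai-style DataBunch with the arcgis.learn module; the folder path, chip size, and batch size are illustrative assumptions, not the study's exact settings.

```python
# Hedged sketch of the data-preparation step: loading image chips exported
# from ArcGIS Pro (PASCAL_VOC_rectangles format) into a DataBunch.
# The path and parameters below are illustrative, not the authors' settings.
from arcgis.learn import prepare_data

data = prepare_data(
    path=r"C:\uas_project\training_chips",   # hypothetical export folder
    dataset_type="PASCAL_VOC_rectangles",    # label format named above
    chip_size=448,                           # size of the exported image chips
    batch_size=8,                            # tune to available GPU memory
)
data.show_batch()                            # spot-check chips and labels
```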

Figure 2.

Complete workflow illustration, encompassing the transformation of raw imagery into structured information about vegetation cover feature layers.

2.3 Drone data

Figure 3 depicts the various elements of the Trimble UX5 HP Unmanned Aircraft System (UAS), which served as the primary tool for field data capture. The device is user-friendly, fully automated, and capable of capturing high-resolution aerial photography with resolutions as fine as 1 cm. It offers an intuitive workflow that facilitates the efficient creation of top-quality ortho-mosaics and advanced three-dimensional (3D) models for a variety of agricultural applications, including agriculture mapping, field leveling, progress monitoring, and asset mapping [30].

Figure 3.

Utilization of Trimble drones for data collection in Dubai emirate.

In Dubai Emirate, field teams use high-resolution multispectral drone imagery to capture valuable operational data and generate Normalized Difference Vegetation Index (NDVI) maps. These maps are instrumental in distinguishing between soil, forests, and grass, as well as identifying crop stages and detecting plants under stress. Research has established robust associations between crop yield and NDVI measured at distinct stages of crop growth; monitoring growth at key stages therefore enables precise crop yield estimation and early identification and resolution of issues [31, 32]. For the acquisition performance levels of the Trimble UX5 drone, refer to Table 4.

| Parameter | Value |
|---|---|
| Resolution (GSD) | 1 cm to 25 cm (0.4 to 9.9 in) |
| Height above take-off location (AGL) | 75 m to 750 m (246 to 2460 ft) |
| Absolute accuracy XY/Z (no ground control points) | down to 2–5 cm |
| Relative ortho-mosaic/3D model accuracy | 1–2× / 1–5× GSD |

Table 4.

Acquisition performance parameters for Trimble UX5 drone.
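As a concrete illustration of how the NDVI maps described above can be derived from a 3-band R-G-NIR ortho-mosaic, the sketch below computes NDVI = (NIR − Red) / (NIR + Red) with rasterio; the file name and band order are assumptions about the exported GeoTIFF, not confirmed details of the study's pipeline.

```python
# Hedged sketch: deriving an NDVI raster from a 3-band R-G-NIR ortho-mosaic.
# File name and band order are assumptions about the exported GeoTIFF.
import numpy as np
import rasterio

with rasterio.open("ortho_r_g_nir.tif") as src:
    red = src.read(1).astype("float32")      # assumed band 1 = red
    nir = src.read(3).astype("float32")      # assumed band 3 = near-infrared
    profile = src.profile

ndvi = (nir - red) / np.maximum(nir + red, 1e-6)   # guard against division by zero

profile.update(count=1, dtype="float32")
with rasterio.open("ndvi.tif", "w", **profile) as dst:
    dst.write(ndvi, 1)
```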

Dubai Municipality employs a drone equipped with state-of-the-art photogrammetric and navigation equipment, granting it a ground resolution of up to 3 centimeters. Its programming allows for the detection of crucial details such as NDVI, water stress, and specific nutrient deficiencies in crops. The Geographic Information Systems Centre (GISC) in Dubai Emirate seamlessly integrates drone-based mapping efforts into disaster risk reduction and management (DRRM) and climate change adaptation (CCA) strategies. The Trimble UX5 HP drone, equipped with a modified color-infrared (CIR) Sony NEX5R camera and a 16 mm lens, was employed during field surveys. Each flight day involved gathering around 100 ground-based NDVI measurements using the Trimble GreenSeeker Handheld device, consistently maintained at a height of 80 cm above the target. The georeferencing process ensured a geospatial accuracy of 2 cm using a Trimble R8 RTK GNSS system [33]. Figure 4 showcases sample outputs from the drone, including an ortho-rectified image with elevation contours, a three-dimensional surface generated from collected point clouds, and a three-dimensional surface created using processed contour lines.

Figure 4.

Outputs of drone mapping: (a) rectified image with contours, (b) 3D surface created from point clouds, (c) 3D surface constructed from contour lines.

Aerial surveys are critical operations, sharing real airspace with other aircraft. The process involves creating a flight plan, establishing Ground Control Points (GCPs), and conducting a risk assessment. Figure 5 illustrates the capture of aerial data with multispectral cameras and LIDAR using the Trimble UX5. A survey of an area such as Hatta takes 4 days, followed by 1 day of processing for DEM and orthophoto generation.

Figure 5.

Process of drone mapping field Mission in Hatta region.

In this study, a total area of 770 square kilometers across 12 areas was comprehensively surveyed by five teams over the course of 139 days. The subsequent data processing phase took around 78 days to complete. Some areas, such as Al Wohoosh and Saih Shuaib, were easier to survey due to their uniform flat sand dune surfaces. Refer to Table 5 for a detailed breakdown of flying time, processing time, number of flights, and repetitions for each community in 2019.

| Community | Area (km²) | Flying time (days) | Processing time (days) | Number of flights | Repetitions per year (2019) |
|---|---|---|---|---|---|
| Saih Shuaib | 41.61 | 5 | 4 | 5 | 1 |
| Hadaeq Sheikh Mohammed bin Rashid | 38.68 | 19 | 3 | 38 | 2 |
| Aleyas | 10.52 | 5 | 8 | 5 | 1 |
| Al Kheeran | 7.33 | 4 | 6 | 12 | 3 |
| Al Lesaily | 112.69 | 13 | 10 | 13 | 1 |
| Margham | 152.59 | 25 | 12 | 25 | 1 |
| Al Wohoosh | 26.51 | 3 | 2 | 3 | 1 |
| Al Maha | 41.73 | 21 | 4 | 42 | 2 |
| Remah | 82.87 | 5 | 7 | 5 | 1 |
| Grayteesah | 91.83 | 12 | 8 | 12 | 1 |
| Al Fagaa | 140.53 | 15 | 13 | 15 | 1 |
| Hessyan | 23.85 | 12 | 1 | 12 | 1 |
| Total | 770.75 | 139 | 78 | 187 | — |

Table 5.

Details of unmanned aerial system (UAS) missions for the designated study areas.

The modified Sony NEX5R camera captured 3-band R-G-NIR imagery, with blue-filtered pixels receiving NIR and the green and red filters capturing visible light along with red edge and NIR wavelengths. It stored both 14-bit per band, lossy-compressed linear RAW files (35 MB each) and 8-bit per band gamma-compressed JPEG files (15 MB each) [34]. See Figure 6 for the Trimble UX5 device’s true color (RGB) spectral responses.

Figure 6.

Spectral response of the Sony NEX-5N camera on the Trimble UX5 HP.

2.4 Processing and analysis

The methodology developed for processing in this study involved four main steps: photogrammetric pre-processing, deep learning-based object detection, data analysis, and result evaluation. The initial step focused on pre-processing the imagery obtained from the Unmanned Aerial System (UAS) using digital photogrammetry techniques. Next, deep learning algorithms were carefully selected and implemented to detect vegetation cover, identify diseases, and perform template matching for segmenting the main crop area and detecting individual crops on the orthophoto mosaic. Subsequently, advanced geoprocessing tools were employed for data analysis. Finally, the detection accuracy threshold was established, and a comprehensive comparison was conducted between crop volume, estimated crop pests, and field samples [35].

2.4.1 Pre-processing

Data layers, including ground measurements, were aligned and registered using ArcGIS software. UAS-based NDVI values and spectral profiles were evaluated for multitemporal monitoring. Point features were interpolated using natural neighbor interpolation. A digital terrain model was generated for crop height calculation.
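A minimal sketch of the crop-height calculation mentioned above follows, assuming the photogrammetric surface model (DSM) and the generated digital terrain model (DTM) share the same grid and coordinate system; the file names are illustrative, not the study's actual datasets.

```python
# Sketch of the crop-height step: canopy height = DSM - DTM.
# Assumes both rasters are co-registered on the same grid; names are illustrative.
import numpy as np
import rasterio

with rasterio.open("dsm.tif") as dsm_src, rasterio.open("dtm.tif") as dtm_src:
    dsm = dsm_src.read(1).astype("float32")
    dtm = dtm_src.read(1).astype("float32")
    profile = dsm_src.profile

crop_height = np.clip(dsm - dtm, 0.0, None)   # clamp negative residuals to zero

profile.update(count=1, dtype="float32")
with rasterio.open("crop_height.tif", "w", **profile) as dst:
    dst.write(crop_height, 1)
```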

The use of topographic maps is crucial for agriculture and vegetation mapping [36], as certain species thrive at specific elevation levels. Therefore, data captured using drones and aligned through aerial triangulation, ortho-rectification, and georeferencing using ground control point (GCP) information is essential. Figure 7 illustrates the final Digital Elevation Model (DEM) and contour lines generated for the Hatta Region using ortho-rectified drone imagery.

Figure 7.

Generated contour map for Hatta region.

2.4.2 Object detection algorithms (deep learning)

Deep learning models were evaluated in ArcGIS for classifying tree points in geospatial datasets. Two models, “Tree Point Classification” and “Landcover Classification,” were modified for the arid environment of Dubai using Python scripts. These models successfully detected vegetation cover and individual trees with TensorFlow. The workflow, shown in Figure 8, involved data preparation, training with ArcGIS.learn module, and deployment as deep learning packages. All algorithms were implemented in Python, and experiments were conducted on a high-performance workstation.

Figure 8.

Usage workflow of the General Deep Learning Packages (DLPKs).

The most challenging aspect of the work involved data preparation, training sample creation, and model training to extract features from the imagery. These steps have been completed, and a trained model is now utilized to detect various types of crops in the processed drone imagery. Achieving optimal results in object detection requires thorough testing and adjustment of parameters. The model’s performance was fine-tuned by testing it on a small section of the image until satisfactory results were obtained. Subsequently, the detection tools were extended to cover the entire drone-captured areas [37]. Within Figure 9, a selection of labeled training samples depicting palm trees is displayed, illustrating the outcomes produced by the object detection algorithm employed in the present investigation.

Figure 9.

(a): Recorded training samples for palm trees, (b): Results generated for a larger area utilizing Object Detection Algorithm.
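The iterate-then-scale pattern described above can be expressed with the Image Analyst geoprocessing tool in ArcGIS Pro, as in the hedged sketch below; the dataset names, model path, and argument values are illustrative assumptions rather than the study's exact configuration.

```python
# Hedged sketch of tuning detection on a small test extent, then extending it
# to the full mosaic with ArcGIS Image Analyst. All names and argument values
# are illustrative, not the study's exact configuration.
import arcpy

model = r"C:\models\palm_detector.dlpk"        # hypothetical trained package
args = "padding 64;threshold 0.5;batch_size 4" # illustrative tool arguments

arcpy.env.extent = "test_tile"                 # small section for parameter tuning
arcpy.ia.DetectObjectsUsingDeepLearning(
    "drone_ortho.tif", "detected_palms_test", model, args, "NMS")

arcpy.env.extent = "MAXOF"                     # then run across the full area
arcpy.ia.DetectObjectsUsingDeepLearning(
    "drone_ortho.tif", "detected_palms_full", model, args, "NMS")
```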

The creation of high-quality training samples plays a crucial role in training deep learning or image classification models. This step is often challenging and time-consuming. In order to equip our deep learning model with the necessary information to accurately identify various crop types in the images, we generated training samples that encompassed different palm trees and other field crops. These samples helped train the model to recognize the size, shape, and spectral signature of these objects. Specifically, the training samples utilized small subimages, referred to as image chips, that focused on the specific features or classes of focus. Figure 10 exhibits a subset of the image chips employed within the context of this study [38, 39].

Figure 10.

Recorded training samples for field crops.

For the chosen workflows (U-net) in this study, the ArcGIS.learn module in the ArcGIS API for Python was utilized. The U-net architecture consists of an encoder network followed by a decoder network. Unlike classification, semantic segmentation requires pixel-level discrimination and feature projection from the encoder [40, 41]. The encoder, shown in Figure 11, uses pre-trained classification networks like VGG/ResNet for encoding. The decoder, on the other hand, projects features from the encoder onto higher-resolution pixel space for dense classification using convolution and upsampling.

Figure 11.

U-net architecture: Blue boxes represent multi-channel features, white boxes indicate copied feature maps, and colored arrows denote various operations.
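A minimal sketch of this U-net workflow via arcgis.learn follows; the backbone choice and hyperparameters are assumptions for illustration, not the authors' exact settings.

```python
# Hedged sketch of the U-net workflow via arcgis.learn. Backbone choice and
# hyperparameters are assumptions, not the authors' exact settings.
from arcgis.learn import prepare_data, UnetClassifier

data = prepare_data(r"C:\uas_project\landcover_chips", batch_size=8)
model = UnetClassifier(data, backbone="resnet34")  # pre-trained encoder (Figure 11)

model.lr_find()                 # range test to suggest a learning rate
model.fit(epochs=20, lr=1e-3)   # illustrative training schedule
model.save("landcover_unet")    # exports a deep learning package (.dlpk)
```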

Tree detection in machine vision relies on human expertise rather than a purely mathematical definition. Deep learning-based object detection differs from other methods by extracting features iteratively, enabling the capture of contextual and global image features for robustness and high accuracy. In this study, a Convolutional Neural Network (CNN) is employed to extract information from high-resolution imagery. The CNN model, depicted in Figure 12, includes input, convolution, pooling, fully connected, and output layers. Our ArcGIS Pro model incorporates pooling and convolution layers iteratively. CNN’s advanced feature extraction handles the challenges of recognizing diverse characteristics in real-world environments, making it a common choice for vegetation cover detection [42].

Figure 12.

CNN network layers.
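For readers unfamiliar with the layer pattern in Figure 12, the generic PyTorch sketch below stacks convolution and pooling blocks ahead of fully connected layers; it illustrates the architecture class, not the exact network used in this study, and the chip size and channel counts are assumptions.

```python
# Illustrative PyTorch sketch of the Figure 12 layer pattern: stacked
# convolution + pooling blocks followed by fully connected layers.
# Generic example only; not the study's exact network.
import torch
import torch.nn as nn

class SimpleCropCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # convolution + pooling, repeated
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 128), nn.ReLU(),  # assumes 224x224 input chips
            nn.Linear(128, n_classes),                # output layer (class scores)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

logits = SimpleCropCNN()(torch.randn(1, 3, 224, 224))  # sanity check: shape (1, 2)
```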

2.4.3 Data analysis

The main analysis in this study revolves around estimating vegetation health, using the same images employed for deep learning extraction. Vegetation health is assessed by calculating the Visible Atmospherically Resistant Index (VARI) [43], which serves as an indirect measure of leaf area index (LAI) and vegetation fraction (VF) and relies solely on reflectance values within the visible wavelength range.

$$\mathrm{VARI} = \frac{R_g - R_r}{R_g + R_r - R_b} \tag{1}$$

The calculation of the Visible Atmospherically Resistant Index (VARI) involves using reflectance values from the red (Rr), green (Rg), and blue (Rb) bands [43]. Additionally, the estimation of vegetation health utilizes reflectance values from both the visible and near-infrared (NIR) wavelength bands, similar to the normalized difference vegetation index (NDVI). Figure 13 presents the resulting NDVI for the Hessyan and Jabal Ali communities.

Figure 13.

NDVI results generated from drone imagery (Jabal Ali & Hessyan).
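A short sketch of Eq. (1) applied to an RGB ortho-mosaic follows; the file name and band order are assumptions, not confirmed details of the study's data.

```python
# Hedged sketch of the VARI calculation in Eq. (1) from visible bands only.
# File name and band order are assumptions.
import numpy as np
import rasterio

with rasterio.open("ortho_rgb.tif") as src:
    r, g, b = (src.read(i).astype("float32") for i in (1, 2, 3))

denom = g + r - b
vari = (g - r) / np.where(np.abs(denom) < 1e-6, 1e-6, denom)  # avoid zero division
```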

2.4.4 Data evaluation (QA/QC)

To assess the outcomes of vegetation cover extraction and the detection of plant diseases and pests in multispectral drone imagery across Dubai Emirate, several evaluation metrics were employed. These metrics included false negatives (omission errors), false positives (commission errors), detection rate, and an accuracy index (AI) [44]. The AI, which quantifies the balance between omission and commission errors, was computed using the following formula:

$$AI = 100\left(1 - \frac{FP + FN}{REF}\right) \tag{2}$$

where $REF$ denotes the number of reference (ground-truth) objects.

The terms false positives (FP) and false negatives (FN) represent specific types of errors. FP refers to the instances where a result is falsely identified as positive, while FN pertains to the cases where a positive result is mistakenly classified as negative. In this study, additional evaluation indices were computed, namely Precision, mean Average Precision (mAP), Recall, and the harmonic Mean F1 score. The F1 score is derived from the combination of Precision and Recall defined as follows:

$$\text{Precision} = \frac{TP}{TP + FP} \times 100\% \tag{3}$$

$$\text{Recall} = \frac{TP}{TP + FN} \times 100\% \tag{4}$$

In Formulas (3) and (4), TP (true positives) is the number of targets (e.g., crops or lesions) correctly identified by the algorithm, FP (false positives) is the number of detections that do not correspond to real targets, and FN (false negatives) is the number of real targets the algorithm missed. Detection accuracy is measured using the mean Average Precision (mAP) defined in Formula (6); first, the average accuracy for each category in the dataset is computed following Formula (5).

$$P_{\text{average}} = \sum_{j=0}^{N_{\text{class}}} \text{Precision}(j) \cdot \text{Recall}(j) \cdot 100\% \tag{5}$$

$$mAP = \frac{P_{\text{average}}}{N_{\text{class}}} \tag{6}$$

In the provided formula, N(class) corresponds to the total count of categories. Precision(j) and Recall(j) refer to the precision and recall values specifically associated with class j. The mean Average Precision (mAP) is defined as the average accuracy across all categories. A higher mAP value indicates greater recognition accuracy of the algorithm, while a lower value suggests reduced accuracy. Additionally, the F1 score, which is a significant metric, is employed to assess the accuracy of the deep learning model. The F1 score combines both accuracy and recall, as described in Formula (7).

$$F1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} \tag{7}$$

Frames per second (FPS) serves as a metric employed to evaluate the speed at which the deep learning model recognizes information. A greater FPS value indicates a swifter recognition speed of the algorithm, while a lower value suggests a slower recognition speed.
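The evaluation metrics in Eqs. (2)–(7) reduce to a few lines of arithmetic; the sketch below implements them from raw counts. The example values at the bottom are hypothetical, not the study's field data.

```python
# Hedged sketch implementing Eqs. (2)-(4) and (7). REF denotes the number of
# reference objects; the example counts at the bottom are hypothetical.
def evaluation_metrics(tp: int, fp: int, fn: int, ref: int) -> dict:
    precision = tp / (tp + fp) * 100.0                   # Eq. (3), in percent
    recall = tp / (tp + fn) * 100.0                      # Eq. (4), in percent
    return {
        "accuracy_index": 100.0 * (1 - (fp + fn) / ref),          # Eq. (2)
        "precision_pct": precision,
        "recall_pct": recall,
        "f1_pct": 2 * precision * recall / (precision + recall),  # Eq. (7)
    }

def mean_average_precision(per_class_p, per_class_r):
    # Eqs. (5)-(6) as stated in the text: mean of per-class precision * recall
    # products, expressed as a percentage.
    n_class = len(per_class_p)
    return sum(p * r for p, r in zip(per_class_p, per_class_r)) * 100.0 / n_class

print(evaluation_metrics(tp=330, fp=12, fn=9, ref=339))   # hypothetical counts
print(mean_average_precision([0.95, 0.90], [0.93, 0.88]))
```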


3. Results

In this section, we present the experimental results using real data from 12 communities in Dubai Emirate, encompassing a total area of 770.75 square kilometers. The deep learning models discussed in this paper demonstrated excellent coverage of all crop types, achieving an impressive overall accuracy of 85.4%. Moreover, the detection performance for date palms and GHAF trees was highly promising, with F1-scores of 96.03% and 94.54% respectively.

3.1 Vegetation cover

The results demonstrate that the deep learning models exhibited superior performance compared to the machine learning technique. Specifically, when using RGB color images alone, the deep learning models outperformed the machine learning technique by a minimum of 11%. Moreover, when utilizing NDVI source images, the deep learning models surpassed the machine learning technique by over 28%. Significantly, around 43% of the outcomes obtained from supervised classification were consistent with the deep learning outputs utilizing NDVI. Moreover, there was a notable agreement of 96% between the deep learning results and the photo interpretation method.

During the evaluation of the deep learning model with NDVI, it exhibited marginally better outcomes in comparison to both the deep learning models employing RGB and manual digitization. Subsequent field visits provided further evidence that these additional positive results corresponded to plants impacted by prolonged drought, which proved to be difficult to classify solely through photo-interpretation.

Table 6 provides a summary of the confusion matrix results, presenting an overall average for 12 communities and offering an overview of the predicted vegetation cover outcomes using deep learning (with NDVI and standard RGB) and the machine learning (supervised classification) technique. The sensitivity (SN), also known as recall (REC) or true positive rate (TPR), reached 0.89 for deep learning using NDVI, indicating its suitability for the designated area; the best possible sensitivity score is 1.0 and the worst is 0.0. Specificity (SP), also referred to as the true negative rate (TNR), which quantifies the proportion of accurate negative predictions, reached an overall value of 0.95 for the deep learning model using NDVI, indicating a relatively high level of accuracy.

| Accuracy criteria | Deep learning using NDVI | Deep learning using RGB | Supervised classification |
|---|---|---|---|
| Sensitivity (recall) | 0.89 | 0.73 | 0.54 |
| Specificity | 0.95 | 0.82 | 0.61 |
| Positive predicted value | 0.9 | 0.83 | 0.7 |
| Negative predicted value | 0.6 | 0.5 | 0.3 |
| Prevalence | 0.7 | 0.7 | 0.5 |
| Detection rate | 0.982 | 0.7 | 0.6 |
| Detection prevalence | 0.9 | 0.8 | 0.4 |
| Balanced accuracy | 0.897 | 0.7283 | 0.6154 |

Table 6.

Confusion matrix results- overall results for 12 communities.

The land use classification and object-detection deep learning algorithms achieved high accuracy, with 89.7% for NDVI and 72.8% for RGB. The commission error was minimal, with few instances of including bare soil or grass. Each crop was accurately represented as an individual object. In the Hessyan community, the overall accuracy index reached 87.8%. Table 7 provides detailed results [45, 46].

| Metric | Deep learning algorithm using NDVI | Reference data (manual digitization) |
|---|---|---|
| True positives | 3520 crops | 3521 crops |
| False positives | 6 crops | 1 crop |
| False negatives | 12 crops | 6 crops |
| Detection rate | 98.2% | 99.9% |
| Accuracy index | 87.8% | 99.9% |

Table 7.

Results of vegetation cover area generated by deep learning using NDVI.

Within Table 8, a comprehensive compilation of the deep learning outcomes derived from NDVI extraction is presented. The table encompasses essential information such as the count of identified crops, the respective area covered in square kilometers, and the overall Accuracy Index for the chosen communities under investigation. Notably, the communities of Al Lesaily, Margham, and Remah exhibited the highest prevalence of recorded crops, indicating their significant vegetation presence across Dubai Emirate.

| Community | Number of crops | Area (km²) | Accuracy index (AI) |
|---|---|---|---|
| Saih Shuaib | 3117 | 0.31 | 97.80% |
| Hadaeq Sheikh Mohammed Bin Rashid | 4841 | 1.28 | 89.70% |
| Aleyas | 17,435 | 1.19 | 86.60% |
| Al Kheeran | 3007 | 1.33 | 83.90% |
| Al Lesaily | 44,433 | 2.06 | 97.40% |
| Margham | 32,379 | 1.33 | 85.90% |
| Al Wohoosh | 4667 | 0.03 | 93.80% |
| Al Maha | 8674 | 0.84 | 88.90% |
| Remah | 27,770 | 0.47 | 87.50% |
| Grayteesah | 16,049 | 0.69 | 84.80% |
| Al Fagaa | 10,298 | 0.39 | 92.70% |
| Hessyan | 3520 | 0.60 | 87.80% |
| Total/Overall | 176,190 | 10.53 | 89.73% |

Table 8.

Outcomes of deep learning utilizing NDVI extracted from drone imagery.

Figure 14 showcases sample outcomes of the deep learning-based land use classification model using NDVI images. Every source image was thoroughly prepared, as discussed earlier in this chapter, and the vector layer representing vegetation cover was created through a methodology combining advanced geospatial analysis and deep learning. In the results, green pixels represent trees, shrubs, or grass; silver represents barren land; and shades of blue depict urban areas. Notably, Saih Shuaib exhibits the highest accuracy index at 97.8%, attributed to the use of hydroponic technology for cultivating in-demand micro-greens and herbs in several farms, whereas Al Kheeran shows the lowest accuracy index at 83.9%. The overall average accuracy index across all communities in the study is 89.73%.

Figure 14.

Results of NDVI based deep learning model for vegetation extraction from UAS multispectral imagery.

3.2 Date palms and GHAF trees detection

The tree detection model’s accuracy was evaluated by comparing it with photo-interpreted drone images. The deep learning algorithm outperformed the machine learning method in detecting date palms and GHAF trees. The deep learning model accurately predicted 336 trees, while the supervised classification predicted 289 trees. Overall, the deep learning model showed superior performance, as evidenced by F1 scores. Figure 15 provides an example of detected trees in different locations [47].

Figure 15.

(a): Detected date palm trees in Al KIFAF road; (b): Detected GHAF trees in Hadaeq Sheikh Mohammed Bin Rashid.

Both methods demonstrated satisfactory classification results for GHAF trees and date palms, achieving accuracies of over 79%. However, the deep learning object-based method was more accurate in detecting the target trees, with overall accuracies surpassing 95% for GHAF trees and 97% for date palms. Figure 16 visualizes the disparities between the two detection procedures, which amount to precision gains of approximately 16 percentage points for GHAF trees and 13 percentage points for date palms.

Figure 16.

Confusion matrix for date palm trees and GHAF trees under different scenarios:(a) deep learning, (b) supervised classification.

The initial assessment revealed that errors in tree detection occurred primarily when palm trees were obscured by other tree canopies or when trees with similar physical characteristics to palm trees were present, particularly coconut trees. These errors were relatively minor and mainly observed in areas where coconut trees were planted. Additionally, the detectability of palm trees was affected in cases where the trees were located at the image edges, resulting in some parts of the crown area extending across two images. Detection errors also arose from the size of the crown, particularly in young palms with smaller crown sizes. This discrepancy can be attributed to the limited inclusion of young palm samples with small crowns, as the study primarily focused on mature palms prevalent within the study area. However, addressing the challenges associated with crown size errors could be accomplished by incorporating a greater number of young palm samples with small crowns into the training data, thereby improving the model’s performance.

3.3 Performance comparisons

Tables 6 and 9 summarize the performance of the deep learning models using NDVI and RGB imagery, compared with supervised classification. The NDVI-based deep learning model outperforms the RGB-based model, and both deep learning models outperform supervised classification in vegetation cover classification and tree object detection. Deep learning is more accurate but requires longer training times, whereas supervised classification remains acceptable in areas with low vegetation cover; where simplicity and reduced hardware requirements are priorities and complex features are absent, supervised classification is a reasonable choice.

| Metric | Date palms (deep learning) | Date palms (supervised classification) | GHAF trees (deep learning) | GHAF trees (supervised classification) |
|---|---|---|---|---|
| True positives | 10,663 | 9238 | 9592 | 7983 |
| Precision | 97.3% | 84.3% | 95.4% | 79.4% |
| Recall | 94.8% | 86.8% | 93.7% | 77.2% |
| F1 score | 96.03% | 85.5% | 94.54% | 78.28% |

Table 9.

Comparison of detected trees by method and type.


4. Conclusions

The combination of Unmanned Aircraft Systems (UAS) and deep learning object detection methods enables accurate crop identification and productivity analysis, achieving an overall accuracy of 89.7%. Specifically, the F1-scores for date palm trees and GHAF trees reach 96.03% and 94.54%, respectively. The implementation of this approach in Dubai Municipality has demonstrated its potential in addressing agricultural challenges by providing up-to-date, high-quality data for informed decision-making.

Precision farming, which integrates sensor data, imaging, and real-time analytics, plays a crucial role in enhancing farm productivity through the mapping of spatial variability in fields. The data collected through drones during this work serves as a valuable resource for activating analytical models in agriculture. By supporting precision farming practices, UAS facilitate soil health analysis, irrigation planning, yield estimation, fertilizer application, and weather analysis. The combination of spatial data from drones with other data sources and analytic solutions generates actionable information for agricultural management.

The Normalized Difference Vegetation Index (NDVI) measured at different crop stages exhibits a robust correlation with crop yield. Monitoring crop growth at critical stages using NDVI maps, alongside other indices such as the Crop Water Stress Index (CWSI) and the Canopy Chlorophyll Content Index (CCCI), enables accurate estimation of crop yield and early identification of issues. Drones equipped with multispectral, infrared, and hyperspectral sensors prove effective for detecting plants under stress, differentiating between crops, and assessing crop health in agricultural mapping tools.


Acknowledgments

This work was supported by the Geographic Information Systems Centre (GISC), Dubai Municipality.

References

  1. Kekane MA. Indian agriculture-status, importance and role in Indian economy. International Journal of Agriculture and Food Science Technology. 2013;4(4):343-346
  2. Fan M, Shen J, Yuan L, Jiang R, Chen X, Davies WJ, et al. Improving crop productivity and resource use efficiency to ensure food security and environmental quality in China. Journal of Experimental Botany. 2011;63(1):13-24. DOI: 10.1093/jxb/err248
  3. Oyakhilomen RGZ. Agricultural production and economic growth in Nigeria: Implication for rural poverty alleviation. Quarterly Journal of International Agriculture. 2014;53(3):207-223. DOI: 10.22004/ag.econ.195735
  4. Awokuse TO. Does agriculture really matter for economic growth in developing countries? In: The American Agricultural Economics Association Annual Meeting. Agricultural and Applied Economics Association. Milwaukee, USA; 2009. DOI: 10.22004/ag.econ.49762
  5. Badiene O. Sustaining and accelerating Africa's agricultural growth recovery in the context of changing global food prices. IFPRI Policy Brief. 2008;9:1-4
  6. de Gennaro BC, Forleo MB. Sustainability perspectives in agricultural economics research and policy agenda. Agricultural Economics. 2019;17:7. DOI: 10.1186/s40100-019-0134-8
  7. Food and Agriculture Organization of the United Nations. The Future of Food and Agriculture: Trends and Challenges. Rome: FAO; 2017
  8. Balogh JM, Jámbor A. The environmental impacts of agricultural trade: A systematic literature review. Sustainability. 2020;12:3. DOI: 10.3390/su12031152
  9. Food and Agriculture Organization of the United Nations. The State of Agricultural Commodity Markets. Agricultural Trade, Climate Change and Food Security. Rome, Italy: FAO; 2018
  10. Garsous G. Trends in policy indicators on trade and environment. In: OECD Trade and Environment Working Papers. Paris, France: OECD; 2019. DOI: 10.1787/18166881
  11. Kwan C, Gribben D, Ayhan B, Bernabe S, Plaza A, Selva M. Improving land cover classification using extended multi-attribute profiles (EMAP) enhanced color, near infrared, and LiDAR data. Remote Sensing. 2020;12:1392. DOI: 10.3390/rs12091392
  12. Tan K, Zhang Y, Wang X, Chen Y. Object-based change detection using multiple classifiers and multi-scale uncertainty analysis. Remote Sensing. 2019;11:359. DOI: 10.3390/rs11030359
  13. Van der Meij B, Kooistra L, Suomalainen J, Barel J, De Deyn G. Remote sensing of plant trait responses to field-based plant-soil feedback using UAV-based optical sensors. Biogeosciences. 2017;14:733-749. DOI: 10.5194/bg-14-733-2017
  14. Zare A, Bolton J, Gader P, Schatten M. Vegetation mapping for landmine detection using long-wave hyperspectral imagery. IEEE Transactions on Geoscience and Remote Sensing. 2007;46:172-178. DOI: 10.1109/TGRS.2007.906438
  15. Wellmann T, Lausch A, Andersson E, Knapp S, Cortinovis C, Jache J, et al. Remote sensing in urban planning: Contributions towards ecologically sound policies? Landscape and Urban Planning. 2020;204:2-10. DOI: 10.1016/j.landurbplan.2020.10392
  16. Skarlatos D, Vlachos M. Vegetation removal from UAV derived DSMS, using combination of RGB and NIR imagery. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2018;IV-2:255-262. DOI: 10.5194/isprs-annals-IV-2-255-2018
  17. Hellesen T, Matikainen L. An object-based approach for mapping shrub and tree cover on grassland habitats by use of LiDAR and CIR orthoimages. Remote Sensing. 2013;5:558-583. DOI: 10.3390/rs5020558
  18. Ayhan B, Kwan C, Kwan L, Skarlatos D, Vlachos M. Deep learning models for accurate vegetation classification using RGB image only. In: Proceedings of the Geospatial Informatics X (Conference SI113). Proceedings of the SPIE. Anaheim, CA, USA; 2020. DOI: 10.1117/12.2557833
  19. Guirado E, Tabik S, Alcaraz-Segura D, Cabello J, Herrera F. Deep-learning versus OBIA for scattered shrub detection with Google Earth imagery: Ziziphus lotus as case study. Remote Sensing. 2017;9:1220. DOI: 10.3390/rs9121220
  20. Yang L, Wu X, Praun E, Ma X. Tree detection from aerial imagery. In: Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems. Seattle, WA, USA; 2009. pp. 131-137
  21. Snehal SS, Sandeep SV. Agricultural crop yield prediction using artificial neural network approach. International Journal of Innovative Research in Electrical, Electronics, Instrumentation and Control Engineering. 2014;2(1):683-686
  22. Zhang X, Han L, Han L, Zhu L. How well do deep learning-based methods for land cover classification and object detection perform on high resolution remote sensing imagery? Remote Sensing. 2020;12:417. DOI: 10.3390/rs12030417
  23. Song H, He Y. Crop nutrition diagnosis expert system based on artificial neural networks. In: 3rd International Conference on Information Technology and Applications. Sydney, Australia; 2005. DOI: 10.1109/ICITA.2005.108
  24. Papageorgiou EI, Markinos AT, Gemtos TA. Fuzzy cognitive map-based approach for predicting crop production as a basis for decision support system in precision agriculture application. Applied Soft Computing. 2011;11(4):3643-3657. DOI: 10.1016/j.asoc.2011.01.036
  25. Dai X, Huo Z, Wang H. Simulation of response of crop yield to soil moisture and salinity with artificial neural network. Field Crops Research. 2011;121(3):441-449. DOI: 10.1016/j.fcr.2011.01.016
  26. Rehman A, Jingdong L, Khatoon R, Hussain I. Modern agricultural technology adoption its importance, role and usage for the improvement of agriculture. American-Eurasian Journal of Agricultural & Environmental Sciences. 2016;16(2):284-288. DOI: 10.5829/idosi.aejaes.2016.16.2.12840
  27. Purkis S, Riegl B. Geomorphology and Reef Building in the SE Gulf. 2012. DOI: 10.1007/978-94-007-3008-3_3
  28. Bolleter J. Desert Paradises: Surveying the Landscapes of Dubai's Urban Model. Taylor & Francis; 2019. DOI: 10.4324/9781351129763
  29. Fathelrahman E, Gheblawi M, Muhammad S, Dunn E, Ascough J, Green T. Optimum returns from greenhouse vegetables under water quality and risk constraints in the United Arab Emirates. Sustainability. 2017;9(5):719. DOI: 10.3390/su9050719
  30. Shahmoradi J, Talebi E, Roghanchi P, Hassanalian M. A comprehensive review of applications of drone technology in the mining industry. Drones. 2020;4:3. DOI: 10.3390/drones4030034
  31. Christiansen MP, Laursen MS, Jørgensen RN, Skovsen S, Gislum R. Designing and testing a UAV mapping system for agricultural field surveying. Sensors. 2017;17:2703. DOI: 10.3390/s17122703
  32. Starý K, Jelínek Z, Kumhálová J, Chyba J, Balážová K. Comparing RGB-based vegetation indices from UAV imageries to estimate hops canopy area. Agronomy Research. 2020;18:4. DOI: 10.15159/ar.20.169
  33. Pauly K. Towards Calibrated Vegetation Indices from UAS-derived Orthomosaics. 2016. DOI: 10.13140/RG.2.2.21842.35524
  34. Klaas P. Applying conventional vegetation vigor indices to UAS-derived orthomosaics: Issues and considerations. In: 12th International Conference for Precision Agriculture. Sacramento, CA, USA; 2014
  35. Turner D, Lucieer A, Watson C. An automated technique for generating georectified mosaics from ultra-high resolution unmanned aerial vehicle (UAV) imagery, based on structure from motion (SfM) point clouds. Remote Sensing. 2012;4:1392-1410. DOI: 10.3390/rs4051392
  36. Höhle J. Generating topographic map data from classification results. Remote Sensing. 2017;9:3. DOI: 10.3390/rs9030224
  37. Du Z, Yang J, Ou C, Zhang T. Smallholder crop area mapped with a semantic segmentation deep learning method. Remote Sensing. 2019;11:888. DOI: 10.3390/rs11070888
  38. Najafabadi MM, Villanustre F, Khoshgoftaar TM, et al. Deep learning applications and challenges in big data analytics. Journal of Big Data. 2015;2:1
  39. Frank E-S, Zhen Y, Han F, Shailesh T, Matthias D. An introductory review of deep learning for prediction models with big data. Frontiers in Artificial Intelligence. 2020;3(3):4. DOI: 10.3389/frai.2020.00004
  40. Lamba H. Understanding semantic segmentation with UNET: A salt identification case study. Towards Data Science. 2019
  41. Michelle L, Jana R, Utku AO, Aziz TA, Marie AE, Tabea K, et al. A U-net deep learning framework for high performance vessel segmentation in patients with cerebrovascular disease. Frontiers in Neuroscience. 2019;13:97. DOI: 10.3389/fnins.2019.00097
  42. Türkoğlu M, Hanbay D. Plant disease and pest detection using deep learning-based features. Turkish Journal of Electrical Engineering and Computer Sciences. 2019;27:1636-1651. DOI: 10.3906/elk-1809-181
  43. Liu YH. Feature extraction and image recognition with convolutional neural networks. Journal of Physics: Conference Series. 2018;1087:6. DOI: 10.1088/1742-6596/1087/6/062032
  44. Mishra A. Metrics to evaluate your machine learning algorithm. Towards Data Science. 2018
  45. Sogawa T, Tabuchi H, Nagasato D, Masumoto H, Ikuno Y, Ohsugi H, et al. Accuracy of a deep convolutional neural network in the detection of myopic macular diseases using swept-source optical coherence tomography. PLoS One. 2020;15(4):e0227240. DOI: 10.1371/journal.pone.0227240
  46. Zhao H, Yang C, Guo W, Zhang L, Zhang D. Automatic estimation of crop disease severity levels based on vegetation index normalization. Remote Sensing. 2020;12:12. DOI: 10.3390/rs12121930
  47. Yarak K, Witayangkurn A, Kritiyutanont K, Arunplod C, Shibasaki R. Oil palm tree detection and health classification on high-resolution imagery using deep learning. Agriculture. 2021;11:2. DOI: 10.3390/agriculture11020183
