Open access peer-reviewed chapter - ONLINE FIRST

Sustainable Farming through Precision Agriculture: Enhancing Nitrogen Use and Weed Management

Written By

Mehmet Hadi Suzer, Mehmet Şenbayram and Mehmet Ali Çullu

Submitted: 13 January 2024 Reviewed: 30 January 2024 Published: 04 March 2024

DOI: 10.5772/intechopen.114256


From the Edited Volume

Precision Agriculture - Emerging Technologies [Working Title]

Dr. Redmond R. Shamshiri, Dr. Sanaz Shafian and Prof. Ibrahim A. A. Hameed


Abstract

The integration of digital tools into agriculture has become more important than ever because of food security concerns and climate change. Real-time soil and crop monitoring systems, such as field sensors, spectral cameras, decision-making platforms, and autonomous robots, have significant potential to detect anomalies and optimize crop management practices. Variable rate application methods, for example, rely on reliable vegetation cover maps that nevertheless contain no information about the underlying causes of variation. Thus, the benefits of precision management remain a subject of debate, limiting the adoption of such technology by farmers. In this review, we discuss the underlying causes of the low success rates of variable rate application and the development of new digital platforms that will improve the efficiency of digital farming tools for nitrogen management. Furthermore, we discuss image-based weed detection (a key milestone for digitalized weed management), which employs sophisticated algorithms and machine learning techniques to analyze images captured by drones or ground-based cameras, identifying weed species, density, and growth stages and enabling targeted weed control. The adoption of upcoming digital tools not only represents a significant technological leap in agriculture but will, we believe, also be among the most important drivers of sustainable agriculture.

Keywords

  • digital farming
  • weed detection
  • nitrogen
  • NDVI
  • satellite

1. Introduction

Agriculture plays a vital role in meeting the global demand for food and fiber, and in achieving socio-economic balance [1]. As the world’s population grows, there is an increasing demand for higher rates of food production on a global scale. However, with limited arable land and natural resources, increasing food production to feed the estimated 9 billion people by 2050 will be the biggest challenge for the human race [2]. The application of nitrogen fertilizers is fundamental for high crop production, but only a portion is utilized by plants [3, 4]. The excessive use of nitrogen-based fertilizers worldwide has caused a surge in nitrate pollution of water sources [5, 6], and increased atmospheric nitrous oxide emissions [7, 8].

Excessive use of chemical fertilizers leads to their accumulation in soil, which negatively impacts crop quality and yield and causes environmental pollution [9, 10]. Around 175.5 million tons of chemical fertilizers were utilized in farming to produce optimal crop yields [11]. Approximately 60% of total anthropogenic nitrous oxide (N2O) emissions come from agricultural soils [12, 13, 14], making agriculture a significant contributor to climate change: N2O has a global warming potential 265 times that of CO2 [15]. The Food and Agriculture Organization [7] predicts a 50% increase in worldwide usage of synthetic N fertilizers by 2050 compared to 2012. Reducing N2O emissions per unit of nitrogen applied is one strategy to minimize the contribution of nitrogen fertilizers to human-induced GHG emissions. The most effective approach, however, is to optimize nitrogen fertilizer management, since excess fertilization is one of the biggest problems of sustainable agriculture globally [16, 17].

The integration of advanced sensors and control systems, together with advances in artificial intelligence, has significantly accelerated the evolution of agricultural robotics, and practical implementations of these technologies can already be seen in the fields of technologically advanced farmers. The shift from traditional agriculture to fully automated farm models allows accurate and detailed temporal and spatial information to be acquired in time to support nonlinear control tasks for almost every agricultural management practice carried out by autonomous vehicles [18]. The potential of fully or semi-autonomous robot systems attracts not only farmers but also global stakeholders, engineers from various disciplines, and scientists. Although most fully autonomous field robots are in the prototype phase, some are already on the market and capable of performing various tasks such as crop inspection [19], sowing and weed control [20], harvesting [21], targeted spraying [22], and optimized nitrogen fertilizer management [23].

The surge in big data's role is reshaping agricultural technology. Farmers now have access to a wealth of information, from tractor display screens to mobile farm management applications. This information not only aids decision-making but also feeds into the development of new agricultural technologies by companies, which leverage the collected data to refine digital farming solutions and innovate in areas such as fertilizer and agrochemical management. The landscape of digital farming innovation is heavily influenced by start-up firms, which are key players in the global food economy. Large corporations are actively seeking to lead in the digital farming arena, either by developing their own technologies or by acquiring smaller companies. However, this rapid advancement and integration of data collection and analytics in agriculture have resulted in fragmented and ambiguous governance structures. The regulation of data collection and access in digital farming is predominantly governed by user agreements formulated by the developers, which farmers are required to sign. In reaction to these developments, there is a growing movement among farmers advocating for data ownership rights.

Sensors in the agricultural field monitor environmental factors such as humidity, temperature, and wind, while images (either photos or videos) of soil and crops can provide information about crop development, diseases, weeds, and harmful insects [24, 25, 26, 27, 28]. Because the healthy development of the plant determines both the benefits to be obtained and the resources to be spent, monitoring matters; traditional human-eye monitoring, however, suffers from timeliness and accuracy issues. Real-time and accurate monitoring of the plant is therefore a crucial component of precision agriculture, playing a significant role in maximizing crop yield [14].


2. Fertilizer management

2.1 Remote sensing of nitrogen

The integration of digital farming tools in fertilizer management within croplands has transformed agricultural practices, particularly in terms of precision and sustainability. Digital farming, a concept dating back nearly a century, has evolved significantly with technological advancements. The modern form, driven by data analytics, emerged in the 1980s and 1990s, incorporating tools like GPS, yield monitor mapping, and variable-rate application of agrochemicals [29]. Recent developments include machine learning (ML) tools, remote sensing, advanced variable-rate technologies, robotics, auto-steer machinery, and unmanned aerial vehicles. These technologies enable site-specific and global data collection and analysis, influencing decisions on most agricultural practices (e.g., fertilizer management), contrasting the uniform practices dominant since the introduction of agrochemicals a century ago. The adoption of these technologies is not only prevalent on large commodity crop farms in industrialized countries but is now also available to small farmers all over the globe. Their deployment is facilitated by the decreasing cost of digital technology and advances in computing and robotics, enhancing accessibility and affordability.

Simple apps showing time course of satellite-based NDVI data for farmers: The time course of satellite-based normalized difference vegetation index (NDVI), when integrated into a simple computer software or a phone app, can significantly impact fertilizer management for all farm types in several profound ways. NDVI values can be used as a graphical indicator that assesses whether the target being observed contains healthy green vegetation. By using satellite-based NDVI imagery, farmers can obtain detailed insights into the health and vigor of their crops. This data may reflect the amount of chlorophyll present, which can be linked to the growth stage of their crops (for timing of fertilization) or in some way to the nitrogen uptake of the plant canopy for estimating the N input needed. By analyzing these images, farmers can estimate the growth stages of their crops more accurately and assess whether the plants are getting enough nitrogen or if they are experiencing deficiencies in any part of their field. Furthermore, anomalies in plant growth and nitrogen uptake can be effectively detected, allowing for rapid actions. This approach to crop management can lead to a more efficient use of resources, cost savings, and a reduced environmental CO2 footprint, allowing significant advancement in sustainable agricultural practices.

Figure 1 shows NDVI values for wheat fields with different sowing dates, as captured by satellite and processed through an NDVI satellite app (https://maps.datafarming.com) on January 7, 2024. NDVI is a measure of the health and vigor of vegetation, with values ranging from −1 to +1; the higher the value, the healthier the vegetation. Three distinct sections are visible in Figure 1, each corresponding to a different sowing date and displaying a unique color range that indicates the NDVI value and hence a different growth stage of the wheat. The top section shows a field sown on November 15, 2023, with an NDVI value corresponding to a BBCH scale score of 22, which generally indicates tillering in wheat. The section sown on December 4 corresponded to BBCH 13, and the section sown on December 17 to BBCH 9, suggesting that this wheat is at emergence. The color gradient from blue (high NDVI values) to red (low NDVI values) visually demonstrates the difference in vegetation density and health across the fields, with the earlier-sown fields showing higher NDVI values and thus more advanced growth stages. Such an app provides farmers with reliable information on the growth stages of their wheat, which is crucial for making precise decisions about, for example, the timing of the first fertilizer application.

Figure 1.

NDVI values, acquired from Sentinel satellite imagery, reveal various developmental stages of wheat crops. These data offer farmers a comprehensive view of their wheat fields' growth progress.
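The NDVI discussed above is a simple per-pixel band ratio. A minimal NumPy sketch (the reflectance values below are invented for illustration and are not taken from Figure 1):

```python
import numpy as np

def ndvi(nir, red):
    """Per-pixel normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    # Guard against division by zero over dark (e.g., shadowed) pixels.
    return np.divide(nir - red, denom, out=np.zeros_like(denom), where=denom != 0)

# Dense green canopy reflects strongly in NIR and absorbs red light,
# so healthy vegetation scores close to +1.
nir_band = np.array([[0.50, 0.45], [0.10, 0.05]])
red_band = np.array([[0.05, 0.08], [0.09, 0.05]])
print(np.round(ndvi(nir_band, red_band), 3))
```

Values near +1 indicate a dense, healthy canopy; values near 0 indicate bare soil or stressed pixels.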

Nonetheless, the availability of satellite imagery can be inconsistent, as atmospheric conditions often obstruct the collection of clear data, particularly in winter, or even during summer in northern Europe. Cloud cover can render satellites ineffective on certain days, leading to gaps in monitoring and potential delays in agricultural decision-making. Drones equipped with spectral cameras represent a significant innovation in precision agriculture, offering high-resolution imaging capabilities that surpass those of many satellite systems. These cameras capture data across multiple spectral bands, not just the visible spectrum, enabling detailed analysis of crop health and soil conditions through indicators such as NDVI. The high-resolution imagery from drones can reveal intricate details of field conditions, such as plant counts, pest infestation levels, and even nutrient deficiencies at a very granular level. For some farmers, especially those managing large-scale or highly specialized operations, this level of detail can be invaluable, allowing highly precise interventions that improve yields and reduce input costs. However, not all farmers require this degree of resolution. For many agricultural applications, the detail provided by satellite imagery may be sufficient, especially when the focus is on broader field trends rather than minute variations; the added cost and complexity of drone operations, including the need to process large volumes of data, may not justify the incremental benefits for these users. Moreover, drones offer a distinct advantage over satellite imagery in terms of flexibility and reliability under different weather conditions. Satellite imagery depends on the satellite's orbit and is subject to atmospheric conditions, meaning that cloud cover can significantly impede the acquisition of useful data.
Drones, on the other hand, operate at lower altitudes and can often fly below cloud level, providing more consistent data collection opportunities, especially in regions with frequent cloud cover or during seasons where sunny days are scarce. In summary, while drones with spectral cameras offer higher-resolution data, the necessity and practicality of this technology for individual farmers depend on the scale, precision requirements, and specific challenges of their agricultural operations. The ability to operate in less-than-ideal weather conditions is a clear advantage of drones, ensuring that critical agricultural decisions are informed by the most current and precise data available, regardless of cloud cover.

Soil sensors are vital tools in precision agriculture, enabling farmers, for example, to monitor soil water, temperature and even pH changes precisely. Such real-time data can lead to more precise water and nutrient management, ensuring that crops receive the right amount of water and nutrients. However, there is a notable gap in the market for reliable remote sensors specifically for nitrogen in soil. While there are sensors that can approximate nitrogen availability through indirect measures, the complexity of soil chemistry and the multitude of factors influencing nitrogen availability make it challenging to accurately measure soil nitrogen levels remotely. This difficulty is compounded by the fact that nitrogen is highly mobile within the soil, existing in various forms and undergoing numerous transformations.

In contrast, handheld chlorophyll meters such as the SPAD (soil plant analysis development) meter provide a non-destructive and reliable means of assessing leaf chlorophyll, which correlates closely with nitrogen content in plant tissue. By measuring the greenness of the leaf, SPAD meters give a quick and reliable indication of the plant's nitrogen status. This method is widely accepted in agronomic practice as a proxy for leaf nitrogen content, and many studies have validated its use. While remote sensing technologies are advancing, the precision and reliability of direct measurements like SPAD for nitrogen management remain unmatched. Until remote sensors can measure soil nitrogen directly and reliably, tools like SPAD meters will continue to be essential for farmers aiming to optimize their nitrogen fertilizer use. It is important for ongoing research to focus on developing remote sensing technologies that can accurately reflect soil nitrogen levels, bridging the current gap and enabling even more efficient and environmentally friendly farming practices.

2.2 Robots in agriculture

Crop health monitoring: Accurately mapping the spatial distribution of crop development throughout the whole vegetation period is crucial for monitoring plant stress and crop health. Satellites, human-crewed aircraft, drones, and ground-based autonomous vehicles equipped with spectral, thermal, or RGB cameras are popular remote-sensing technologies [30]. Proximal canopy sensors measure reflected light at specific wavelengths and convert it into various vegetation indices (e.g., NDVI may represent the soil coverage of photosynthetically active biomass) [31, 32]. These indices can be used to estimate crop biomass, N nutritional status, or yield potential, and several commercial sensors are available, along with different methods to transform their readings into N rates. The absorption of light in the visible region of the spectrum (450–680 nm) by photosynthetic pigments such as chlorophyll a and b, or carotenoids, is well documented [33], and the content of these pigments is known to be affected by crop health. The "red-edge" region (680–780 nm), lying between the visible and near-infrared (NIR) regions, is related to chlorophyll content and can be used as an indicator of crop productivity [34]. The NDVI is a widely used spectral index that measures the difference between the reflectance values of near-infrared and red wavelengths and is positively correlated with crop yield and health. Other spectral indices, such as the green normalized difference vegetation index (GNDVI), soil-adjusted vegetation index (SAVI), and plant senescence reflectance index (PSRI), have also been used to screen vegetation health and may provide additional information about the soil/vegetation ratio [35]. Although RGB pictures can also be used for crop health monitoring (discussed in Section 3 in more detail), spectral cameras are simpler solutions as they do not require sophisticated ML algorithms for simple monitoring processes.
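These indices are simple arithmetic combinations of band reflectances. A sketch using commonly cited formulations (SAVI's soil factor L = 0.5 is the usual default, and the PSRI band choice here is one common variant among several in the literature):

```python
import numpy as np

def gndvi(nir, green):
    # Green NDVI: uses the green band in place of red.
    return (nir - green) / (nir + green)

def savi(nir, red, L=0.5):
    # Soil-adjusted vegetation index; L dampens the soil-background signal.
    return (nir - red) * (1.0 + L) / (nir + red + L)

def psri(red, green, red_edge):
    # Plant senescence reflectance index (one common band choice):
    # rises as chlorophyll degrades relative to carotenoids.
    return (red - green) / red_edge

# Illustrative single-pixel reflectances for a healthy canopy.
nir, red_edge, red, green = 0.48, 0.40, 0.06, 0.10
print(round(gndvi(nir, green), 3),
      round(savi(nir, red), 3),
      round(psri(red, green, red_edge), 3))
```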
Table 1 lists the advantages and shortcomings of various mobile remote-sensing platforms.

Platform | Advantages | Shortcomings
Satellite | Wide coverage (field to regional screening), low cost, no pilot involvement, long-term data analysis | Data available only every 4–10 days on average, weather dependent (no data on cloudy days), limited resolution
Drones | High resolution, easy deployment and setup | Limited flight times, pilot involvement, modest payload weight, limited real-time data analysis, limited battery life
Fixed-wing aircraft | Large area coverage (faster than drones), high resolution | Pilot involvement, higher collision risk, complicated setup, less maneuverable than drones, lower resolution than drones
Ground-based vehicles | Very high resolution, allowing real-time data analysis, lower risk of accidents and collisions | Limited to accessible areas, slower data collection, higher costs, sophisticated algorithms needed for row crops

Table 1.

Advantages and shortcomings of various mobile remote sensing platforms.

Fertilizer application: Ground vehicles equipped with crop sensors, GPS, and advanced mapping systems can be used for high-precision variable rate application of nutrients such as nitrogen [32]. This approach may optimize the nitrogen supply to specific areas within a field that show varying growth patterns. However, matching N supply to crop demand is a complex task, as the optimum N rate depends on numerous interrelated factors such as the expected yield, the N supply from sources other than fertilizer, and N losses to the environment [36]. Variable rate maps, segmented according to different crop development levels, can be created at high resolution through (i) offline methods such as satellite imagery (utilizing either current or long-term NDVI data analysis) or unmanned aerial vehicles (e.g., DJI Mavic 3 Multispectral, Parrot Bluegrass), or (ii) online methods such as ground vehicle-mounted multispectral sensors (e.g., tractor-mounted sensors like the YARA N-Sensor and CropSpec by Topcon). Although these tools can generate reliable maps of the field segmented by different growth rates, the background information needed to decide whether a higher or lower fertilizer rate is required for each segment is still missing, because heterogeneity in growth rate at specific locations in the field does not always relate directly to nitrogen limitation in those spots [37]. Therefore, the benefits of such precision management practices remain a subject of debate, particularly with respect to crop yield. For example, one review concluded that N fertilizer savings of 5–45% can be achieved with no significant effect on yield, but that the lack of consistent evidence of economic benefits limits the adoption of variable rate technology by farmers [38].
Similarly, several other authors found no clear evidence that using sensors was profitable, because in most of their field observations reduced N rates negatively affected yield, leading to smaller profits [39, 40, 41]. These studies indicate that additional data (such as soil texture, weather data, availability of other nutrients, and historical crop yields) are crucial for a more comprehensive understanding of nitrogen needs in varying field segments. For example, integrating historical yield maps, or potential-yield maps generated from multi-year satellite images, can help in understanding the variation in soil fertility across field segments. This variation may indicate chronic problems, such as soil texture, biology, or physical orientation (which affects solar radiation absorption), that lead to growth depression in specific locations and highlight the need for further analysis. Here, soil analysis can reveal the nutrient profile and identify deficiencies in nutrients other than nitrogen, or texture and organic matter issues, for each field segment. Incorporating these complex datasets requires sophisticated data analysis and expertise, pointing to a collaboration between agronomists, data scientists, and farmers. Once such comprehensive datasets are assembled, advanced data analytics and ML algorithms can be employed to develop more accurate models for optimizing the timing and rate of nitrogen fertilizer applications. As far as we are aware, we are still in the initial phase of the development of effective digital farming; however, with advances in sensor and automation technologies, in conjunction with sophisticated ML processing of big data, we expect precision agriculture to become increasingly accurate and effective.
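The basic variable-rate workflow can be sketched as an NDVI-to-rate lookup. The zone boundaries and nitrogen rates below are hypothetical placeholders, and, as the studies cited above caution, a real prescription would need to weigh the additional data layers (soil, weather, yield history) discussed in this section:

```python
import numpy as np

# Hypothetical NDVI zone boundaries and N rates (kg N/ha), for illustration
# only. NDVI alone cannot tell *why* a zone lags, so these rates are not
# agronomic recommendations.
ZONES = [(0.0, 0.3, 80.0),   # sparse canopy
         (0.3, 0.6, 60.0),   # intermediate
         (0.6, 1.01, 40.0)]  # dense canopy

def prescription(ndvi_map):
    """Assign a nitrogen rate to each cell of an NDVI raster."""
    ndvi_map = np.asarray(ndvi_map, dtype=float)
    rates = np.zeros_like(ndvi_map)
    for lo, hi, rate in ZONES:
        rates[(ndvi_map >= lo) & (ndvi_map < hi)] = rate
    return rates

ndvi_map = np.array([[0.15, 0.45], [0.72, 0.58]])
print(prescription(ndvi_map))
```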
In conclusion, while current tools provide a significant step forward in precision agriculture, a holistic approach that incorporates a wider range of data sources and advanced analytical methods is essential. The future of precision agriculture lies in the seamless integration of various data streams, enabling farmers to make more informed decisions that lead to sustainable and efficient agricultural practices.

Greenhouse gas measurements: Since agriculture is a major source of GHG emissions, mobile GHG measurement systems are very important for identifying the spatial variation and the annual quantity of GHG emissions from agricultural soils. Systems that can measure GHGs such as N2O, CO2, and CH4 in the concentration ranges relevant for cropped soils are already available. For instance, Picarro cavity ring-down spectroscopy analyzers can measure N2O, CO2, and CH4 with very high precision; however, they are very expensive and have very high power consumption. Other systems, such as GASERA's ONE GHG, which uses photoacoustic detection, offer the necessary accuracy for most GHGs (N2O, CO2, and CH4) in the ppb range, and field robots can be fitted with such sensors to detect and measure GHG emissions directly from the soil and plants at very high time resolution and accuracy without the need for excess manual work. These measurements are crucial for understanding the interactive effects of environment and agricultural practices and for developing site-specific strategies to mitigate GHG emissions.

In this context, the robot shown in Figure 2 has been developed by the scientific team at the University of Harran to determine GHG emissions from agricultural soils and the impact of management practices such as N fertilization, irrigation, and tillage. The robot carries optical sensors (an Intel RealSense depth camera) and a GASERA ONE analyzer (Finland) for real-time monitoring of GHG emissions and crop development. It can accurately measure the concentrations of emitted gases such as methane (CH4), carbon dioxide (CO2), ammonia (NH3), and nitrous oxide (N2O) at very high time resolution, which is relevant for determining the environmental impact of farming practices. We conclude that the future of digitally optimized nitrogen fertilizer management does not rely only on determining crop N needs. It necessitates the integration of sophisticated data analysis, encompassing factors like gaseous nitrogen losses, weather forecasts, soil parameters, and real-time crop monitoring. This analysis should ideally be performed by advanced ML techniques to develop reliable algorithms for determining the timing and rate of fertilization needed for sustainable crop production. However, progress in this field is still in its early stages.

Figure 2.

Fully autonomous field robot measuring GHG emission from crop fields developed by the team at the University of Harran.


3. Machine learning algorithms for precision agriculture applications

Monitoring vast agricultural lands plays a crucial role in obtaining continuous, real-time, and accurate information about the soil and plants, directly impacting the quality and quantity of the harvested crop. Simultaneously, optimizing resource usage helps reduce both costs and harmful environmental effects. However, because of the low precision and accuracy of information gathered through human observation and the high labor costs of monitoring, especially in very large fields, computer vision systems have become a necessity. Computer vision systems can acquire information about the surrounding physical environment through their sensory and imaging devices, interpret this information, and transform it into useful outputs.

Modern digital detection systems can obtain high-resolution and real-time data from a variety of sensors and imaging devices and utilize advanced analysis methods for processing this information. Analyses that were traditionally performed using image processing techniques have recently been transitioning to more sophisticated ML algorithms. A subset of ML algorithms known as deep learning (DL) can efficiently address various image separation and classification problems. These algorithms therefore hold great promise for the analysis, interpretation, and generation of useful outputs from today's large agricultural datasets.

Utilization of ML algorithms in the analysis of images taken of plants or soil in agricultural fields has resulted in significant improvements in object detection, identification, and classification [24]. ML methods are applied not only to standard red, green, and blue (RGB) images but also to images captured in other spectral ranges such as near-infrared (NIR) and color infrared (CIR). ML algorithms generally involve a learning process centered on "experience," in which a model is trained with training data and later used for classifying and extracting features from test data. ML tasks can be classified by learning method (supervised or unsupervised) and by learning model (classification, regression, clustering, or dimensionality reduction).

In supervised learning, labeled input and output examples are provided to create a model representing the relationship between the input and the output. The trained model is then used to infer useful information from the test data. In unsupervised learning, data are explored for hidden features (patterns) without human verification, as the data are not labeled for training or testing. Reinforcement learning uses a reward mechanism instead of the labels used in supervised learning: each verified output is rewarded, and the goal is to maximize the total reward. Reinforcement learning does not require large datasets.
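The supervised/unsupervised distinction can be made concrete with a toy NumPy sketch: the same four two-band "pixels" are first classified using known labels, then clustered without any labels (a nearest-centroid classifier and a miniature k-means, both illustrative stand-ins for real models):

```python
import numpy as np

# Toy 2-D feature vectors (e.g., two spectral bands per pixel).
X = np.array([[0.1, 0.9], [0.2, 0.8], [0.8, 0.2], [0.9, 0.1]])
y = np.array([0, 0, 1, 1])  # supervised case: labels are known (e.g., crop/weed)

# Supervised: fit per-class centroids from labeled data, then classify a new point.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
new_point = np.array([0.15, 0.85])
pred = int(np.argmin(np.linalg.norm(centroids - new_point, axis=1)))
print("supervised prediction:", pred)

# Unsupervised: k-means on the same data, with no labels, rediscovers the groups.
centers = X[[0, 3]].astype(float)            # initial center guesses
for _ in range(5):
    assign = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([X[assign == k].mean(axis=0) for k in (0, 1)])
print("cluster assignments:", assign)
```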

3.1 Data preprocessing for deep learning

In a typical application, images obtained with cameras or scanners undergo preprocessing, such as filtering and segmentation, after which various texture and color features are extracted. These features are processed by an artificial neural network (ANN) classifier to classify the image. Below we summarize the data preprocessing stages, which significantly improve the data for further analysis: preprocessing improves accuracy while reducing the computational resource demands of ML algorithms.

3.1.1 Data acquisition

ML algorithms require a training process before being tested on real-world problems: they need a sufficient amount of example data to learn the phenomenon on which they will later be tested. Example data can be raw data collected by digital sensors, or visual data obtained from cameras on satellites, drones, field robots (such as the one shown in Figure 2), or portable cameras. For further analysis, these images can also be examined in different spectra such as RGB, NIR, or CIR. Additionally, data go through preprocessing via classical digital signal processing techniques (such as the fast Fourier transform, FFT, or finite impulse response, FIR, filters) or image processing techniques (edge detection, thresholding, etc.) to enhance the efficiency of ML algorithms.
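A sketch of the preprocessing techniques named above: thresholding and a crude gradient-based edge detector for images, and an FFT low-pass filter for a raw sensor time series (all data here are synthetic):

```python
import numpy as np

# Image preprocessing: thresholding separates bright vegetation pixels from
# the darker soil background; a simple gradient highlights object boundaries.
img = np.array([[10, 10, 200, 200],
                [10, 10, 200, 200],
                [10, 10, 200, 200]], dtype=float)
mask = (img > 100).astype(int)          # thresholding
edges = np.abs(np.diff(img, axis=1))    # horizontal intensity gradient

# Signal preprocessing: an FFT low-pass filter smooths a noisy sensor series.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 64, endpoint=False)
noisy = np.sin(2 * np.pi * 2 * t) + 0.3 * rng.standard_normal(64)
spec = np.fft.rfft(noisy)
spec[8:] = 0                            # keep only the lowest frequencies
smooth = np.fft.irfft(spec, n=64)

print(mask[0], edges.max(), smooth.shape)
```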

3.1.2 Data labeling

For the training of ML models, visual data obtained from various sources are labeled with known descriptors. In this stage, objects in distinguishable regions of interest (ROI) in the data are labeled, allowing the ML algorithm to build a model for each object. During the test phase, the ML algorithm is run on unlabeled data, and objects are identified and classified using the trained model. Labeling can be done manually or automatically, and the presence of an existing database of labeled data may eliminate the need to repeat this stage.

3.1.3 Data augmentation

For the training of ML models, a large quantity of real data is required. However, it is often not feasible to create such a dataset due to constraints of time, labor, and cost. In such situations, augmenting the existing dataset through synthetic means can mitigate the problem. To achieve this, the dataset can be enriched with various transformations such as zooming, cropping, flipping, rotating, overlapping, and color changes.
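Several of these transformations are one-liners in NumPy; a sketch that turns one labeled image into four extra training samples:

```python
import numpy as np

def augment(img):
    """Generate simple geometric variants of one labeled training image."""
    yield np.fliplr(img)        # horizontal flip
    yield np.flipud(img)        # vertical flip
    yield np.rot90(img)         # 90-degree rotation
    yield img[1:-1, 1:-1]       # center crop (typically resized back afterwards)

img = np.arange(16).reshape(4, 4)
variants = list(augment(img))
print(len(variants))  # one labeled image yields four extra samples
```

Because these are geometric transformations, each variant keeps the original label, which is what makes augmentation cheap compared to collecting and labeling new field images.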

3.2 Major ML algorithms for object detection

ANNs draw inspiration from the workings of the human brain and mimic complex functions such as object recognition, learning, and decision-making. Similar to the billions of interconnected and communicating neurons in the human brain, an ANN consists of numerous processing units that are organized in a certain way and connected to each other.

As AI algorithms work on massive amounts of data, they are resource-hungry in terms of computational power, memory, communication bandwidth, etc. There have been continuing efforts to increase their accuracy and further optimize their resource usage. Recently, another family of ML algorithms, deep neural networks (DNN), has attracted considerable attention for successful implementations in object recognition and classification. DNN architectures consist of multiple hidden layers, including convolutional and pooling layers, compared to the single hidden layer of a classical ANN.

In DNN algorithms, the feature extraction stage is performed by the model itself. The most popular deep learning algorithms capable of mapping features in an image are convolutional neural networks (CNN) and their derivatives (R-CNN, Fast R-CNN, Faster R-CNN), long short-term memory (LSTM) networks, and single-stage algorithms such as YOLO and SSD.

These DNN algorithms are described briefly as follows.

3.2.1 Convolutional neural networks (CNN)

The CNN model was developed specifically for image processing and recognition tasks. It supports transfer learning, that is, the capability of retraining an already-trained model for new recognition tasks. Additionally, various derivatives of the CNN model have been developed that require less image preprocessing due to their ability to perform feature mapping (Figure 3).

Figure 3.

ML algorithms for object detection.

Instead of the single type of hidden layer in the ANN architecture, a CNN consists of three types of layers. The first is the convolution layer, in which filters act as neurons. Feature maps are obtained at the output of this layer by convolving the filters with the source image. These feature maps perform tasks such as edge detection and object boundary detection, which would otherwise have to be handled in a preprocessing phase.
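
The convolution operation itself is a small sliding-window computation. The sketch below uses a 3x3 vertical-edge filter (an illustrative assumption, not a learned filter) on a toy image with a sharp vertical edge:

```python
# Minimal sketch of a convolution layer's core operation: sliding a
# filter over an image to produce a feature map.

def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    feature_map = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Element-wise multiply the kernel with the image patch and sum
            s = sum(kernel[a][b] * image[i + a][j + b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        feature_map.append(row)
    return feature_map

# 5x5 image: dark columns on the left, bright columns on the right
image = [[0, 0, 9, 9, 9]] * 5
edge_kernel = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]
fmap = convolve2d(image, edge_kernel)
print(fmap)  # rows of [27, 27, 0]: strong response at the edge, zero in the uniform region
```

In a trained CNN the kernel values are not hand-picked like this; they are learned weights, and each convolution layer holds many such filters, producing one feature map per filter.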

Pooling layers, on the other hand, aim to reduce the computational resource requirements of the algorithm by generalizing the obtained feature maps and reducing their dimensions without significant loss of information.
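
A common choice is 2x2 max pooling, which keeps only the strongest activation in each non-overlapping 2x2 block, halving both dimensions. A minimal sketch:

```python
# 2x2 max pooling: each non-overlapping 2x2 block of the feature map
# is reduced to its maximum value.

def max_pool_2x2(feature_map):
    pooled = []
    for i in range(0, len(feature_map) - 1, 2):
        row = []
        for j in range(0, len(feature_map[0]) - 1, 2):
            block = [feature_map[i][j], feature_map[i][j + 1],
                     feature_map[i + 1][j], feature_map[i + 1][j + 1]]
            row.append(max(block))
        pooled.append(row)
    return pooled

fmap = [[1, 3, 2, 0],
        [4, 6, 1, 2],
        [0, 2, 8, 5],
        [1, 0, 3, 7]]
print(max_pool_2x2(fmap))  # [[6, 2], [2, 8]]
```

A 4x4 map becomes 2x2, quartering the data passed to later layers while retaining the dominant feature responses.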

Finally, in the “fully connected layer,” feature maps from the pooling layer are flattened and the classification of the image is performed using these flattened feature maps.

3.2.2 Long short-term memory (LSTM)

LSTM networks are a type of recurrent neural network (RNN) designed to capture long-term dependencies in sequential data [42]. LSTM networks have a memory cell that can maintain information over long periods of time, and they can generate variable-length outputs from variable-length inputs. Many variants, such as CNN-LSTM and stacked LSTM, have been developed, further enhancing their performance and capabilities.
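
The gate structure behind this memory cell can be sketched as a single LSTM step. The scalar weights below are illustrative assumptions (real LSTMs use learned weight matrices per gate); the point is only how the forget, input, and output gates update the cell state:

```python
# One LSTM cell step in pure Python, for illustration only.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w=1.0):
    z = w * x + w * h_prev                 # shared pre-activation (a simplification)
    f = sigmoid(z)                         # forget gate: how much old memory to keep
    i = sigmoid(z)                         # input gate: how much new info to store
    c_tilde = math.tanh(z)                 # candidate cell state
    c = f * c_prev + i * c_tilde           # updated memory cell
    o = sigmoid(z)                         # output gate
    h = o * math.tanh(c)                   # new hidden state
    return h, c

# Process a variable-length input sequence one element at a time
h, c = 0.0, 0.0
for x in [0.5, -1.0, 0.25]:
    h, c = lstm_step(x, h, c)
print(round(h, 4))
```

Because the cell state `c` is carried forward additively rather than being rewritten at every step, gradients and information can persist across many time steps, which is what plain RNNs struggle with.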

3.2.3 Deep recurrent Q-network (DQN)

DQN networks combine the Q-learning algorithm (which aims to maximize long-term reward) with deep neural networks to predict the expected reward of a specific action. DQN networks have been successfully applied in areas such as robotic control and autonomous vehicles and have also been used for crop yield prediction in agriculture [43].
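
At the heart of DQN is the Q-learning update rule; DQN replaces the lookup table below with a deep network, but the update target is the same. States, actions, and rewards here are toy assumptions:

```python
# Tabular Q-learning update: move Q(s, a) toward reward + gamma * max_a' Q(s', a').

def q_update(Q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    best_next = max(Q[next_state].values())   # value of the best follow-up action
    target = reward + gamma * best_next       # discounted long-term return estimate
    Q[state][action] += alpha * (target - Q[state][action])
    return Q

# Two toy states with two actions each, all values initially zero
Q = {"s0": {"left": 0.0, "right": 0.0},
     "s1": {"left": 0.0, "right": 0.0}}

# Taking "right" in s0 yields reward 1 and lands in s1
Q = q_update(Q, "s0", "right", 1.0, "s1")
print(Q["s0"]["right"])  # 0.5
```

Repeated updates like this propagate future rewards backward through the state space, so the agent learns which actions maximize long-term benefit rather than only the immediate payoff.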

3.2.4 Region-based CNN (R-CNN)

R-CNN is a two-stage CNN method developed specifically for object detection [44]. As illustrated in Figure 4, in the first stage the image is segmented into regions of interest (ROIs) based on features such as texture and color. Each ROI is then fed to a CNN (typically a pre-trained network) to create a fixed-length feature vector. Although R-CNN achieved quite successful results in terms of accuracy in its time, it remained relatively slow in both the training and testing phases.

Figure 4.

R-CNN algorithm overview.

3.2.5 Fast R-CNN

Developed to accelerate the R-CNN algorithm, fast R-CNN creates a single feature map shared among all ROIs instead of performing a separate feature extraction step for each ROI [45]. It provides more accurate and much faster results than R-CNN.

3.2.6 Faster R-CNN

Faster R-CNN further improves the fast R-CNN algorithm in terms of both speed and accuracy [46]. For each position on the image, a region proposal network (RPN) simultaneously predicts objectness scores and object bounds. RPNs are trained to generate nearly cost-free region proposals, which are then fed into fast R-CNN for object detection. Together with fast R-CNN, RPNs achieve 5 fps processing speed and 70.4% mAP (mean average precision) object detection accuracy.

3.2.7 YOLO

You only look once (YOLO) is an object detection method that analyzes images in a single pass, providing both object classification and object localization. YOLO utilizes an end-to-end neural network that predicts bounding boxes and class probabilities in one evaluation. This allows it to process images at 45 frames per second (fps) [47], which makes it effective in fast, real-time applications. YOLO has continued to improve through a series of versions, up to YOLO v4 as of this writing, with each new version improving performance further, up to 72 fps and 88% mAP for YOLO v4 [48].
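
Detectors like YOLO typically emit several overlapping bounding boxes per object, which are pruned by comparing their intersection over union (IoU). A minimal IoU sketch, with boxes given as (x1, y1, x2, y2) corner tuples:

```python
# Intersection over union of two axis-aligned bounding boxes.

def iou(box_a, box_b):
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```

In non-maximum suppression, any box whose IoU with a higher-confidence box exceeds a threshold is discarded, leaving one prediction per object. The same IoU score, compared against ground-truth boxes, underlies the mAP figures quoted for these detectors.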

3.2.8 SSD

The single shot multi-box detector (SSD) is a model that can perform object detection at multiple locations and scales using feature maps of different scales, and it is designed to detect objects of different sizes simultaneously. It achieves 59 fps and is claimed to be simpler because it does not require object proposals, which makes SSD easy to train while achieving higher accuracy than other single-stage methods [49].

Figure 5 shows the performance of YOLO, SSD, and the CNN derivatives in terms of processing speed and accuracy (performance data are taken from [48, 49, 50]).

Figure 5.

Performance comparison of DNN algorithms for object identification.

3.3 Agricultural applications

Deep learning algorithms are utilized in various agricultural applications such as crop quality assessment, soil nutrient and yield prediction, and disease, pest, and weed detection. They provide expert guidance to farmers on the amount and timing of additional nutrients, irrigation, disease-control chemicals, pesticides, and herbicides, which increases both the quality and the quantity of the yield.

We will briefly describe these application areas as follows.

3.3.1 Crop quality assessment and yield prediction

Assessing crop quality involves determining the number, weight, size, and maturity level of crops during plant growth. When assessment is performed with ML methods over raw images, it requires no direct contact (non-invasive), eliminates manual assessment, and reduces labor while achieving higher precision. Accurate assessment of crop quality using computer vision and ML together can also enable automated harvesting.

Yield prediction in the early stages of plant growth is another area where ML techniques play a significant role. Accurate yield prediction can contribute to food production and food security by optimizing resources such as water and nutrients. Yield prediction accuracy depends on the many factors involved in the plant life cycle and requires managing very large datasets with ML algorithms.

Studies using support vector machines (SVM), Bayesian models and Gaussian Naive Bayes (BM/GNB), artificial neural networks and adaptive neuro-fuzzy inference systems (ANN/ANFIS), artificial neural networks with self-organizing neural networks (ANN/SNKs), and clustering with expectation-maximization (EM) models have been conducted, incorporating drone, satellite, and camera images. Table 2 shows several features used to predict crop yield [51].

Soil information: soil type; soil pH level
Field information: irrigation; fertilization
Weather information: ambient temperature; rainfall and precipitation; wind speed and atmospheric pressure; solar exposure time
Nutrient information: nutrients available and injected
Artificial information: enhanced vegetation index (EVI); normalized difference vegetation index (NDVI)

Table 2.

Features used for crop yield prediction [51].
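
The "artificial information" features in Table 2 are computed from spectral bands rather than measured directly. NDVI, for example, contrasts near-infrared (NIR) and red reflectance; a minimal per-pixel sketch (the reflectance values are toy assumptions):

```python
# Normalized difference vegetation index for one pixel.

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); healthy canopies score closer to 1."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

# Toy reflectance values: dense crop vs. sparse cover / bare soil
print(ndvi(0.50, 0.08))  # ≈ 0.72 - vigorous vegetation
print(ndvi(0.25, 0.20))  # ≈ 0.11 - sparse cover or bare soil
```

Computed per satellite or drone pixel, such index values become the vegetation-health features that yield prediction models consume alongside the soil, field, and weather information in the table.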

3.3.2 Disease/pest detection

Plant diseases, and organisms such as harmful insects and parasites (pests) living on plants, negatively affect crop production. Detecting the symptoms of diseases and pests with the naked eye is quite challenging (see Figure 6) and has low accuracy, particularly during the early stages of plant growth. Delayed identification of diseases and pests leads to excessive chemical and pesticide usage in later stages, reducing crop quality and increasing environmental harm. Therefore, timely and accurate detection of diseases and pests is of critical importance for determining the type and dosage of the chemicals or pesticides to be applied.

Figure 6.

Leaves with disease and pests.

Traditionally, pesticides are uniformly sprayed on cultivated areas for pest and disease control. Although effective, this method leads to high costs and environmental damage, including the mixing of pesticide residues with crops and drinking water, and negative effects on wildlife and ecosystems. Unlike traditional methods, ML methods ensure that the necessary pesticide is applied only at the required time and amount to the infected plant, minimizing unnecessary financial and environmental impacts. In the literature [27, 52], SVM, ANN, CNN, YOLOv3, LSTM, and their derivatives (described in Section 3.2) have been used for pest and disease identification and classification.

3.3.3 Weed detection

Weeds (Figure 7) pose a significant threat to crop production, as they compete with crops for resources such as soil, water, nutrients, and sunlight, reducing crop quality. Detecting and distinguishing weeds for targeted herbicide applications is challenging. In contrast to traditional manual or chemical weed control methods, ML algorithms allow real-time detection, identification, and classification of weeds and their effective removal, minimizing the side effects of herbicides and their negative impact on the environment. Algorithms such as ANN, CNN, and SVM derivatives are well suited to weed detection applications.
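
A common first step in image-based weed detection is separating green vegetation from soil before classifying plants as crop or weed, for example with the excess green index ExG = 2g - r - b on normalized RGB channels. In the sketch below, the threshold value is an illustrative assumption:

```python
# Excess green index for vegetation/soil segmentation on an RGB pixel.

def excess_green(r, g, b):
    total = r + g + b
    if total == 0:
        return 0.0
    rn, gn, bn = r / total, g / total, b / total  # chromatic (normalized) channels
    return 2 * gn - rn - bn

def is_vegetation(pixel, threshold=0.1):
    """Threshold the ExG value to build a binary vegetation mask."""
    return excess_green(*pixel) > threshold

print(is_vegetation((40, 160, 50)))   # green leaf pixel -> True
print(is_vegetation((120, 100, 80)))  # brown soil pixel -> False
```

Applied over a whole RGB image, this yields a vegetation mask; a classifier such as a CNN can then label each masked region as crop or weed for targeted spraying.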

Figure 7.

Weed detection over RGB image.


4. Conclusion

The integration of NDVI satellite data and drone technology, complemented by ground sensors (e.g., SPAD), has established a new standard in nitrogen fertilizer management. These advanced tools enable a dynamic and responsive approach to fertilization that aligns closely with crop nutrient status. Satellite-derived NDVI provides a macroscopic view of field variability and plant health, allowing the identification of areas that may require differential nitrogen application. Drones fill the gaps on cloudy days and add a layer of detail with their high-resolution imagery, capable of monitoring crop health at a micro-scale, even detecting the nutrient status of individual leaves. Handheld devices such as SPAD meters complement these technologies by offering on-the-spot assessments of leaf chlorophyll, another dependable indicator of plant nitrogen content. The confluence of NDVI data, drone imaging, and SPAD readings supports farmers in applying nitrogen fertilizers more effectively, matching the spatial and temporal needs of the crop. However, as far as we are aware, we are still in the initial phase of the development of effective digital farming. With the development of sensor and automation technologies, in conjunction with sophisticated ML processing of big data, we expect precision agriculture to become ever more accurate and effective.

Image-based weed detection represents one of the biggest breakthroughs in weed management practices. Leveraging complex algorithms and machine learning, this technology makes it possible to discern weed species and their respective growth stages, enabling farmers to execute more focused and efficient weed control strategies.
By synthesizing data from soil maps, yield maps, and remote sensing, these innovations ensure that each field zone receives just the right amount of input, fostering a reduction in herbicide usage, a decrease in production costs, and a lesser environmental impact from crop protection chemicals.


Conflict of interest

The authors declare no conflict of interest.

References

  1. UNFCCC. United Nations: Paris Agreement; 2015. Available from: https://unfccc.int/sites/default/files/english_paris_agreement.pdf
  2. FAO. The future of food and agriculture - Trends and challenges. Food and Agriculture Organization of the United Nations; 2017
  3. Reay D. Nitrogen and Climate Change: An Explosive Story. London: Palgrave Macmillan; 2015. DOI: 10.1057/9781137286963
  4. Menegat SAL, Ledo L, Tirado R. Greenhouse gas emissions from global production and use of nitrogen synthetic fertilizers in agriculture. Scientific Reports. 2022;12:14490. DOI: 10.1038/s41598-022-18773
  5. Singh B, Craswell E. Fertilizers and nitrate pollution of surface and ground water: An increasingly pervasive global problem. SN Applied Sciences. 2021;3:518. DOI: 10.1007/S42452-021-04521-8
  6. Chiu MT, Xu X, Wei Y, Huang Z, Schwing AG, Brunner R, et al. Agriculture-vision: A large aerial image database for agricultural pattern analysis. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, WA, USA: IEEE; 2020. pp. 2828-2838
  7. FAO. FAOSTAT. Emission shares dataset. 2022. Available from: fenix.fao.org/faostat/internal/en/#data/EM [Accessed: Mar. 11, 2022]
  8. Davidson EA, Keller M, Erickson HE, Verchot LV, Veldkamp E. Testing a conceptual model of soil emissions of nitrous and nitric oxides. Bioscience. 2000;50:667
  9. Gan L. Environmental risks of fertilizer use and the prevention and control measures in Chinese rural areas. Chimica Oggi-Chemistry Today. 2016;34(6B):33-38
  10. Ajeng AA, Abdullah R, Malek MA, Chew KW, Ho Y-C, Ling TC, et al. The effects of biofertilizers on growth, soil fertility, and nutrients uptake of oil palm (Elaeis guineensis) under greenhouse conditions. PRO. 2020;8:1681. DOI: 10.3390/pr8121681
  11. FAO. World Fertilizer Trends and Outlook to 2022. Rome, Italy: Food and Agriculture Organization of the United Nations (FAO); 2019
  12. Hu HW, Chen D, He JZ. Microbial regulation of terrestrial nitrous oxide formation: Understanding the biological pathways for prediction of emission rates. FEMS Microbiology Reviews. 2015;39:729-749. DOI: 10.1093/femsre/fuv021
  13. Clark MA, Domingo NGG, Colgan K, Thakrar SK, Tilman D, Lynch J, et al. Global food system emissions could preclude achieving the 1.5° and 2°C climate change targets. Science. 2020;370:705-708. DOI: 10.1126/science.aba7357
  14. Tian H et al. A comprehensive quantification of global nitrous oxide sources and sinks. Nature. 2020;586:248-256. DOI: 10.1038/s41586-020-2780-0
  15. IPCC. Climate Change and Land: An IPCC Special Report on Climate Change, Desertification, Land Degradation, Sustainable Land Management, Food Security, and Greenhouse Gas Fluxes in Terrestrial Ecosystems. Summary for Policymakers. 2019
  16. Sutton MA et al. Our Nutrient World: The Challenge to Produce More Food and Energy with Less Pollution. Global Overview of Nutrient Management. 2013. Available from: www.initrogen.org and www.gpa.unep.org/gpnm [Accessed: Mar. 11, 2022]
  17. Lassaletta L et al. Nitrogen use in the global food system: Past trends and future trajectories of agronomic performance, pollution, trade, and dietary demand. Environmental Research Letters. 2016;11:095007. DOI: 10.1088/1748-9326/11/9/095007
  18. Chlingaryan A, Sukkarieh S, Whelan B. Machine learning approaches for crop yield prediction and nitrogen status estimation in precision agriculture: A review. Computers and Electronics in Agriculture. 2018;151:61-69
  19. Rejeb A, Abdollahi A, Rejeb K, Treiblmaier H. Drones in agriculture: A review and bibliometric analysis. Computers and Electronics in Agriculture. 2022;198:107017
  20. Gerhards R, Risser P, Spaeth M, Saile M, Peteinatos G. A comparison of seven innovative robotic weeding systems and reference herbicide strategies in sugar beet (Beta vulgaris subsp. vulgaris L.) and rapeseed (Brassica napus L.). Weed Research. 2023;64(1):42-53. DOI: 10.1111/wre.12603
  21. Droukas L, Doulgeri Z, Tsakiridis NL, Triantafyllou D, Kleitsiotis I, Mariolis I, et al. A survey of robotic harvesting systems and enabling technologies. Journal of Intelligent & Robotic Systems. 2023;107(2):21
  22. Gonzalez-de-Soto M, Emmi L, Perez-Ruiz M, Aguera J, Gonzalez-de-Santos P. Autonomous systems for precise spraying - evaluation of a robotised patch sprayer. Biosystems Engineering. 2016;146:165-182
  23. Liu J, Cai H, Chen S, Pi J, Zhao L. A review on soil nitrogen sensing technologies: Challenges, progress and perspectives. Agriculture. 2023;13(4):743
  24. Artizzu XPB, Ribeiro BA, Guijarro M, Pajares G. Real-time image processing for crop/weed discrimination in maize fields. Computers and Electronics in Agriculture. 2011;75:337-346. DOI: 10.1016/j.compag.2010.12.011
  25. Milioto A, Lottes P, Stachniss C. Real-time semantic segmentation of crop and weed for precision agriculture robots leveraging background knowledge in CNNs. In: 2018 IEEE International Conference on Robotics and Automation (ICRA). Brisbane, Australia: IEEE; 2018. pp. 2229-2235
  26. Kulkarni AH, Patil A. Applying image processing techniques to detect plant diseases. International Journal of Modern Engineering Research. 2012;2(5):3661-3664
  27. Liakos KG, Busato P, Moshou D, Pearson S, Bochtis D. Machine learning in agriculture: A review. Sensors. 2018;18(8):2674
  28. Sishodia RP, Ray RL, Singh SK. Applications of remote sensing in precision agriculture: A review. Remote Sensing. 2020;12(19):3136. DOI: 10.3390/rs12193136
  29. Hellerstein D, Vilorio D. Agricultural Resources and Environmental Indicators. EIB-208. Washington, D.C., USA: U.S. Department of Agriculture, Economic Research Service; May 2019
  30. Tsouros DC, Bibi S, Sarigiannidis PG. A review on UAV-based applications for precision agriculture. Information. 2019;10(11):349
  31. Escolà A, Martínez-Casasnovas JA, Rufat J, Arnó J, Arbonés A, Sebé F, et al. Mobile terrestrial laser scanner applications in precision fruticulture/horticulture and tools to extract information from canopy point clouds. Precision Agriculture. 2017;18:111-132
  32. Heege HJ. Precision in guidance of farm machinery. In: Precision in Crop Farming: Site Specific Concepts and Sensing Methods. Applications and Results. 2013. pp. 35-50
  33. Serrano L, Penuelas J, Ustin SL. Remote sensing of nitrogen and lignin in Mediterranean vegetation from AVIRIS data: Decomposing biochemical from structural signals. Remote Sensing of Environment. 2002;81(2-3):355-364
  34. Jamali M, Soufizadeh S, Yeganeh B, Emam Y. Wheat leaf traits monitoring based on machine learning algorithms and high-resolution satellite imagery. Ecological Informatics. 2023;74:101967
  35. Gitelson AA. Wide dynamic range vegetation index for remote quantification of biophysical characteristics of vegetation. Journal of Plant Physiology. 2004;161(2):165-173
  36. Senbayram M, Chen R, Wienforth B, Herrmann A, Kage H, Mühling KH, et al. Emission of N2O from biogas crop production systems in northern Germany. Bioenergy Research. 2014;7:1223-1236
  37. Bramley RG, Lawes RA, Cook SE. Spatially distributed experimentation: Tools for the optimization of targeted management. In: Precision Agriculture for Sustainability and Environmental Protection. London: Routledge; 2013. pp. 205-218
  38. Colaço AF, Bramley RG. Do crop sensors promote improved nitrogen management in grain crops? Field Crops Research. 2018;218:126-140
  39. Biermacher JT, Epplin FM, Brorsen BW, Solie JB, Raun WR. Economic feasibility of site-specific optical sensing for managing nitrogen fertilizer for growing wheat. Precision Agriculture. 2009;10:213-230
  40. Boyer CN, Wade Brorsen B, Solie JB, Raun WR. Profitability of variable rate nitrogen application in wheat production. Precision Agriculture. 2011;12:473-487
  41. Roberts DC, Brorsen BW, Solie JB, Raun WR. The effect of parameter uncertainty on whole-field nitrogen recommendations from nitrogen-rich strips and ramped strips in winter wheat. Agricultural Systems. 2011;104(4):307-314
  42. Hochreiter S, Schmidhuber J. Long short-term memory. Neural Computation. 1997;9(8):1735-1780
  43. Elavarasan D, Vincent PD. Crop yield prediction using deep reinforcement learning model for sustainable agrarian applications. IEEE Access. 2020;8:86886-86901
  44. Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition. Columbus, OH, USA: IEEE; 2014. pp. 580-587. DOI: 10.1109/CVPR.2014.81
  45. Girshick R. Fast R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision. Santiago, Chile: IEEE; 2015. pp. 1440-1448. DOI: 10.1109/ICCV.2015.169
  46. Ren S, He K, Girshick R, Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems. 2015;28. DOI: 10.48550/arXiv.1506.01497
  47. Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: Unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA: IEEE; 2016. pp. 779-788
  48. Wang S. Research towards YOLO-series algorithms: Comparison and analysis of object detection models for real-time UAV applications. Journal of Physics: Conference Series. 2021;1948(1):012021
  49. Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu CY, et al. SSD: Single shot multibox detector. In: Computer Vision - ECCV 2016: 14th European Conference, Amsterdam, the Netherlands, October 11-14, 2016, Proceedings, Part I. Cham, Switzerland: Springer International Publishing; 2016. pp. 21-37
  50. Divya R, Peter JD. Smart healthcare system - a brain-like computing approach for analyzing the performance of detectron2 and PoseNet models for anomalous action detection in aged people with movement impairments. Complex & Intelligent Systems. 2022;8(4):3021-3040
  51. Van Klompenburg T, Kassahun A, Catal C. Crop yield prediction using machine learning: A systematic literature review. Computers and Electronics in Agriculture. 2020;177:105709
  52. Chen CJ, Huang YY, Li YS, Chen YC, Chang CY, Huang YM. Identification of fruit tree pests with deep learning on embedded drone to achieve accurate pesticide spraying. IEEE Access. 2021;9:21986-21997
