Open access peer-reviewed chapter

Agricultural Robot for Intelligent Detection of Pyralidae Insects

Written By

Zhuhua Hu, Boyi Liu and Yaochi Zhao

Submitted: 26 February 2018 Reviewed: 12 June 2018 Published: 05 November 2018

DOI: 10.5772/intechopen.79460

From the Edited Volume

Agricultural Robots - Fundamentals and Applications

Edited by Jun Zhou and Baohua Zhang

Abstract

The Pyralidae insects are among the main pests of economic crops. However, manual detection and identification of Pyralidae insects are labor intensive and inefficient, and subjective factors can influence recognition accuracy. To address these shortcomings, an insect monitoring robot and a new method to recognize Pyralidae insects are presented in this chapter. Firstly, the robot captures images by performing a fixed action and detects whether there are Pyralidae insects in the images. The recognition method obtains a total probability image by histogram reverse mapping with multiple template images, and then image contours can be extracted quickly and accurately by using constrained Otsu segmentation. Finally, according to Hu moment, perimeter, and area features, the contours are filtered, and recognition results marked with triangles are obtained. According to the recognition results, the speed of the robot car and mechanical arm is adjusted adaptively. Theoretical analysis and experimental results show that the proposed scheme achieves good real-time performance and high recognition accuracy in natural planting scenes.

Keywords

  • pest detection and recognition
  • Pyralidae insects
  • reverse mapping
  • multi-template matching
  • agricultural robot

1. Introduction

The timely detection and identification of corn pests and diseases is one of the major tasks of agriculturists facing social and environmental challenges, such as maintaining the stability of grain output and reducing the environmental pollution caused by pesticides. Pyralidae insects are among the most common pests of maize [1], and they do great harm to the quality and yield of maize. Traditional manual monitoring not only requires a large amount of labor but also often leads to untimely detection due to human omissions. With the rapid development of computer technology, the monitoring of diseases and insect pests based on computer vision has become feasible, which can greatly improve the timeliness of pest detection and recognition [2].

Currently, several methods exist to detect plant diseases or insects with image processing and computer vision technologies [3]. For example, Ali et al. used a color histogram and textural descriptors to detect citrus diseases [4]; they used color differences to separate the areas affected by disease. Lu et al. used spectroscopy technology to detect anthracnose crown rot in strawberry [5]. Xie et al. employed hyperspectral images to detect gray mold disease in tomato leaves [6]. In addition, researchers constructed automated detection and monitoring systems for small pests in the greenhouse, such as whitefly, which can effectively monitor tiny insects and their densities [7, 8, 9, 10]. Meanwhile, computer vision technology was also used for aphid detection and population monitoring [11]. For parasites on strawberry plants, a support vector machine (SVM) method combined with image processing successfully detected thrips with an error of less than 2.5% in the greenhouse environment [12]. The incorporation of k-means clustering with image processing was used to segment pests or other objects from the image [13]. Dai and Man used a convolutional Riemannian texture model with differential entropic active contours to distinguish background regions and expose pest regions [14]. Zhao et al. obtained accurate contours of crop diseases and insect pests for subsequent recognition, using an active contour guided by texture difference [15]. In further research, they also proposed an image segmentation method for diseased fruits based on constrained Otsu and a level set active contour [16]. However, they did not address identification.

As for the recognition of insects and diseases, recent research advances can be classified into two categories. The first category focuses on image processing and computer vision technologies that do not require data training. A pest recognition method based on sparse representation and multi-feature fusion was proposed, mainly to identify beetles [17]. Four methods for the diagnosis and classification of corn leaf diseases were presented using image processing and machine vision techniques [18]. Martin et al. proposed an extended region growing algorithm, which can identify and count pests to predict the amount of pesticide to be used [19]. Przybyłowicz et al. developed a technique based on wing measurements, which can be an effective tool for monitoring the European corn borer [20].

The second category concentrates on trained data models, mainly using machine learning and neural network technology. A method based on a difference-of-Gaussian filter and the local configuration pattern algorithm was used to extract invariant features of pest images, which were then fed to a linear SVM (support vector machine) for pest recognition with a recognition rate of 89% [21]. Kohonen's self-organizing map neural network was used to identify insect pests caught on a sticky trap [22]. In addition, Boniecki et al. proposed a neural classification model using optimized learning sets acquired from encoded information, which can accurately identify the six most common apple pests [23]. Based on the combination of an image processing algorithm and artificial neural networks, Espinoza et al. proposed an algorithm to detect and monitor adult-stage whitefly (Bemisia tabaci) and thrips (Frankliniella occidentalis) in greenhouses, with a correct recognition rate above 0.92 [24]. Zhu et al. combined the color histogram with the dual-tree complex wavelet transform [25] and SVM [26] to recognize insects, which improved the recognition rate. Li et al. proposed a red spider recognition method based on k-means clustering, which transformed the image into Lab color space for clustering [27]. This method had a high accuracy rate in identifying red spiders with obvious red features; however, it can only be applied when there is high color contrast between the objects and the scene.

In addition, a device for image acquisition is also necessary [7]. Johannes et al. presented a scheme to diagnose wheat diseases automatically using mobile capture devices [28]. In their research, a novel image processing algorithm based on candidate hot-spot detection, combined with statistical inference methods, was proposed to tackle disease identification in the wild.

From the literature of recent years, image processing and computer vision technology have been widely used for the detection and recognition of diseases and pests and have achieved good results. Generally, researchers combine image processing techniques with existing methods, such as clustering, neural networks, texture analysis, wavelet transforms, and level set methods, to detect and identify pests. However, it is difficult to find a universal method that detects and identifies all pests; in general, each algorithm targets one pest or one class of pests. Moreover, most existing studies are aimed at the greenhouse environment, and the researchers usually do not build a practical verification system. Deep learning can achieve high recognition accuracy, but this training-based approach makes real-time operation difficult to guarantee and requires a large amount of existing data to train the model.

At present, there are still relatively few studies on the detection and identification of Pyralidae insects. In order to detect and identify Pyralidae insects automatically, accurately, and in real time, we have carried out the following work. Firstly, a robot platform for pest monitoring is designed and fabricated. Then, a recognition scheme for Pyralidae insects is presented, in which the color features of the image are used. In this scheme, histogram reverse mapping with multiple template images is used to obtain a total probability image by superposition. Next, the image is segmented with constrained Otsu. Finally, contour features and Hu moments are used to automatically screen the contours, so that the contours of Pyralidae insects can be recognized. The proposed scheme can recognize a single target and also has good recognition ability for multiple targets.

The rest of this chapter is organized as follows. Section 2 describes the data acquisition equipment and its structure and gives a detailed description of the detection and recognition algorithm. In Section 3, we verify the monitoring robot's operation and the proposed detection and recognition scheme; we also evaluate the proposed scheme and discuss the experimental results. Finally, Section 4 concludes the chapter.

2. Materials and methods

2.1. Acquisition of Pyralidae insect data source

The image data used in this study are collected by an Automatic Detection and Identification System for Pests and Diseases. The system has been installed at the zone of technology application and demonstration of Hainan University in Hainan province, China. The system prototype and structure diagram are shown in Figure 1. The basic structure of the system can be divided into five major parts: the camera sensor (automatic focusing, resolution 1600 × 1200, camera model KS2A01AF) and display unit, the trap unit, the power delivery unit, the intelligent detection and recognition unit, and the hardware bearer unit.

Figure 1.

The intelligent recognition of robot car for Pyralidae insects. (1) Deep grooved wheel, (2) shell, (3) guardrail, (4) screen display, (5) camera, (6) mechanical arm, (7) vertical thread screw, (8) screw guardrail, (9) solar panels, (10) sensor integrator, (11) horizontal screw motor, (12) trap lamp, (13) the hardcore, (14) crossbar, (15) insect collecting board, (16) vertical thread screw-driven motor, (17) chassis, (18) car control buttons, (19) horizontal thread screw, and (20) trap top cover.

2.2. Description of proposed scheme

In this chapter, the recognition scheme for Pyralidae insects, based on histogram reverse mapping and contour template matching, is divided into an input module, a reference image processing module, an image segmentation module, a contour extraction module, and a target recognition module. The input module first converts the experimental image into a matrix and initializes parameters such as the contour recognition threshold and the binarization threshold of the probability image. Then, the reference image processing module performs color space conversion on the reference image, transforming it from RGB space to HSV space, and extracts the histogram of the color layer (H layer). After that, the image segmentation module extracts the color histogram of the experimental image. After normalization, the total probability image is obtained by histogram reverse mapping using the H-layer histograms of multiple template images, and the module then binarizes the probability image. Subsequently, the contour extraction module obtains the contours of the binary image with the help of the OpenCV function findContours(). The contours of internal holes are removed by morphological methods, and the remaining contours are screened according to perimeter and area features. Finally, the target recognition module recognizes each contour by calculating the similarity between the contours obtained in the previous steps and the template contour. Contours whose similarity is larger than the threshold are considered target contours, and finally we obtain the recognition result. The pseudo-code of the scheme is shown in Table 1.

Algorithm: Recognition scheme for Pyralidae insects
Input: S (target image); Mx (reference image)
Output: Three vertices of the triangular marking on the Pyralidae insects: (α1, β1), (α2, β2), (α3, β3)
1: Initialize: (R, G, B) ← S, Mx
2: Set the Hu-moment similarity threshold (match) and the reference contour image Yimage
3: Convert to HSV:
   V = max(R, G, B)
   S = (V − min(R, G, B)) × 255 ÷ V if V ≠ 0, else 0
   H = (G − B) × 60 ÷ S if V = R;
       180 + (B − R) × 60 ÷ S if V = G;
       240 + (R − G) × 60 ÷ S if V = B
4: for i = 0:1:255
     Compute each image's color histogram: H(i) = p_i ÷ (m × n), where p_i is the number of pixels with hue value i and m × n is the image size;
     Normalize(H);
   end for
5: for i = 0:1:m
     for j = 0:1:n
       G(i, j) = Similarity(H of {image block centered at (i, j), same size as Mx}, H of Mx)
       /* Similarity(): histogram similarity, Eqs. (1)-(2) */
     end for
   end for
6: R = Otsu(G); /* binarize the probability image by the Otsu method */
7: C = findContours(R); /* findContours() extracts contours from the binary image */
8: real_match ← similarity between C and the template contour, based on Hu-moment features
9: if real_match > match:
     Triangle ← approximate the contour as a triangle;
     output the vertex coordinates
   else: delete the contour

Table 1.

The pseudo-code description of the proposed scheme.
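To make the steps in Table 1 concrete, the following is a minimal Python/OpenCV sketch of the pipeline, in the spirit of the Python/OpenCV environment described in Section 3. For speed, OpenCV's calcBackProject() stands in for the explicit sliding-window histogram comparison of step 5; the thresholds, bin count, and the matchShapes() distance cutoff are illustrative assumptions, not the authors' exact settings.

```python
import cv2
import numpy as np

def h_histogram(bgr, bins=180):
    """Steps 3-4 of Table 1: convert to HSV and build a normalized H-layer histogram."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hsv, hist

def recognize(target_bgr, template_bgrs, ref_contour, dist_thresh=0.1):
    """Steps 5-9: probability image, Otsu binarization, contour screening, triangle marks."""
    hsv, _ = h_histogram(target_bgr)
    prob = np.zeros(hsv.shape[:2], np.float32)
    for tpl in template_bgrs:                      # step 5: one probability map per template
        _, tpl_hist = h_histogram(tpl)
        prob += cv2.calcBackProject([hsv], [0], tpl_hist, [0, 180], 1).astype(np.float32)
    prob = cv2.convertScaleAbs(prob / len(template_bgrs))
    # Step 6: plain Otsu here; Section 2.4 refines this with a constrained search.
    _, binary = cv2.threshold(prob, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Step 7 (OpenCV 2.4/4.x return signature; 3.x returns three values).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    triangles = []
    for c in contours:
        # Step 8: matchShapes() compares Hu moments; a smaller distance means more similar.
        if cv2.matchShapes(c, ref_contour, 1, 0.0) < dist_thresh:
            tri = cv2.approxPolyDP(c, 0.05 * cv2.arcLength(c, True), True)  # step 9
            triangles.append(tri.reshape(-1, 2))
    return triangles
```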

2.3. Probability image acquisition based on color histogram reverse projection and multi-template matching

The adults of the Pyralidae insects are yellowish brown. The male moths are 10–13 mm long, and the wingspan can reach 20–30 mm. The back of the insect is yellowish brown, and the end of the abdomen is relatively thin and pointed. Usually, they have a pair of filamentous antennae, which are grayish brown. The forewing is tan, with two brown wavy stripes, and there are two yellowish-brown short patterns between the two lines. In addition, the hind wings are grayish brown. Female moths are similar in shape to male moths but with lighter shades, yellowish veins, lighter brown texture, and a plumper abdomen. Given these characteristics, the color features of adult Pyralidae insects are distinctive, and it is very effective to recognize them by color. Color histograms are often used to describe color features and are particularly useful for describing images that are difficult to segment automatically.

Histogram back-projection was proposed by Michael J. Swain and Dana H. Ballard [29]. It records how well each pixel or pixel block fits the distribution of a histogram model and can be used to segment an image or find content of interest in it. The output of the algorithm is an image of the same size as the input image, in which the value of each pixel represents the probability that it belongs to the target image. Therefore, a probability image can be obtained by back-projecting the histogram of a Pyralidae template image onto the target image. Considering the salient color features of Pyralidae insects and the characteristics of the back-projection algorithm, the scheme proposed in this chapter applies grayscale conversion based on color histogram back-projection in the color feature extraction step. After the target image and the template image are converted into HSV space and the color layer (i.e., the H component) is extracted, the image is converted to a gray image by histogram mapping. The gray image obtained in this way is a probability image reflecting the degree of similarity to the target color; it thus realizes color distribution screening of the target image. The algorithm flow is shown below:

  1. Convert the reference image into HSV space, extract the H-layer matrix, compute its histogram, and normalize it.

  2. Starting from the first pixel (x, y) of the experimental image, cut a temporary image of the same size as the reference image, where (x, y) is the center pixel of the temporary image. Extract the H-layer matrix, compute its histogram, and normalize it.

  3. Calculate the similarity between the color histogram H1 of the detected image and the color histogram H2 of the reference image. The result is Similarity(H1, H2):

    $$H'_k(i) = H_k(i) - \frac{1}{N}\sum_{j=1}^{N} H_k(j) \tag{1}$$

    $$\mathrm{Similarity}(H_1, H_2) = \frac{\sum_{i=1}^{N} H'_1(i)\, H'_2(i)}{\sqrt{\sum_{i=1}^{N} {H'_1}^2(i) \sum_{i=1}^{N} {H'_2}^2(i)}} \tag{2}$$

In Eqs. (1) and (2), $k \in \{1, 2\}$, $i, j \in \{1, 2, 3, \ldots, N\}$, N is the number of intervals (bins) in the histogram, and $H_k(i)$ is the value of the ith interval of the kth histogram. $\mathrm{Similarity}(H_1, H_2)$ is the similarity between histograms H1 and H2. The degree of similarity reflects the probability that a pixel's color characteristics are in line with those of Pyralidae insects.
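A small sketch of Eqs. (1) and (2), assuming two N-bin histograms stored as float arrays; OpenCV's cv2.compareHist() with the correlation method computes the same quantity.

```python
import numpy as np

def similarity(h1, h2):
    """Correlation similarity of Eqs. (1)-(2) between two N-bin histograms."""
    h1c = h1 - h1.mean()          # Eq. (1): subtract the mean bin value
    h2c = h2 - h2.mean()
    return float((h1c * h2c).sum() /
                 np.sqrt((h1c ** 2).sum() * (h2c ** 2).sum()))

# Equivalent OpenCV call (histograms as float32 arrays):
#   cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)   # cv2.cv.CV_COMP_CORREL in OpenCV 2.4
```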

In addition, due to the differences in color and texture between different Pyralidae insects in natural scenes, it is necessary to use several template images for histogram reverse projection, which avoids the situation in which a single template cannot adapt to a variety of scenes. As shown in Table 3, three template images are used. The total probability image is obtained by Eq. (3), where M represents the number of template images. The resulting probability images are shown in Table 3.

$$\mathrm{Similarity}(H_1) = \sum_{m=1}^{M} \mathrm{Similarity}(H_1, H_m) \tag{3}$$

2.4. Otsu image segmentation based on constrained space

The Otsu algorithm, also known as the maximum between-class variance method [30], is considered one of the best threshold selection algorithms in image segmentation. For an image G(x, y), let the segmentation threshold be T; let ω1 be the proportion of foreground pixels and μ1 the average grayscale of the foreground; let ω2 be the proportion of background pixels and μ2 the average grayscale of the background; let μ be the average grayscale of the whole image; and let g be the between-class variance. pmin and pmax are, respectively, the minimum and maximum pixel values in the image. Then we can get

$$\mu = \mu_1 \omega_1 + \mu_2 \omega_2, \quad \text{s.t. } \omega_1 + \omega_2 = 1 \tag{4}$$

$$g_{\mathrm{otsu}} = \arg\max \left[ \omega_1 (\mu - \mu_1)^2 + \omega_2 (\mu - \mu_2)^2 \right] \tag{5}$$

Substituting Eq. (4) into Eq. (5), the Otsu threshold expression becomes:

$$g_{\mathrm{otsu}} = \mathop{\arg\max}_{p_{\min} \le T \le p_{\max}} \; \omega_1 \omega_2 (\mu_1 - \mu_2)^2 \tag{6}$$

Finally, by traversing all candidate thresholds, the threshold with the maximum between-class variance is obtained. Inspired by [16], the variance of the similarity values in the background area is smaller, whereas that of the Pyralidae insect area is larger because of the diversity of the natural scene; in addition, the similarity values of the Pyralidae insects are larger than those of the background. Therefore, the Otsu threshold is biased toward the background, which leads to a smaller threshold than the actual optimal one. Hence, a constrained-space Otsu segmentation is used: the estimate $g'_{\mathrm{otsu}}$ is obtained first, and then the threshold maximizing the between-class variance is sought in the constrained space between $g'_{\mathrm{otsu}}$ and $p_{\max}$, as shown in Eq. (7). Here $g'_{\mathrm{otsu}}$ is given by the simple calculation method of [31], that is, $g'_{\mathrm{otsu}} = \frac{1}{2}(\mu_1 + \mu_2)$, which biases the threshold toward the class with the larger variance when the two class variances differ greatly:

$$g_{\mathrm{optimal}} = \mathop{\arg\max}_{g'_{\mathrm{otsu}} \le T \le p_{\max}} \; \omega_1 \omega_2 (\mu_1 - \mu_2)^2 \tag{7}$$
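A possible NumPy realization of Eqs. (6) and (7) on an 8-bit probability image is sketched below: the plain Otsu threshold is computed first, $g'_{\mathrm{otsu}} = \frac{1}{2}(\mu_1 + \mu_2)$ is taken at that threshold per [31], and the between-class variance is then re-maximized on the constrained interval. This is one reading of the text, not the authors' exact code, and it assumes a non-constant grayscale image.

```python
import numpy as np

def class_variance(gray, t):
    """Between-class variance of Eq. (6) at threshold t."""
    bg, fg = gray[gray <= t], gray[gray > t]
    if bg.size == 0 or fg.size == 0:
        return 0.0
    w1, w2 = fg.size / float(gray.size), bg.size / float(gray.size)
    return w1 * w2 * (fg.mean() - bg.mean()) ** 2

def constrained_otsu(gray):
    """Eq. (7): re-maximize the between-class variance on [g'_otsu, p_max] only."""
    p_min, p_max = int(gray.min()), int(gray.max())
    g = max(range(p_min, p_max), key=lambda t: class_variance(gray, t))
    # g'_otsu = (mu1 + mu2) / 2 evaluated at the plain Otsu threshold [31]
    g_prime = int(0.5 * (gray[gray <= g].mean() + gray[gray > g].mean()))
    return max(range(g_prime, p_max), key=lambda t: class_variance(gray, t),
               default=g_prime)
```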

2.5. Target contour recognition based on Hu moments

The moment feature mainly characterizes the geometric characteristics of an image region and is also known as the geometric moment. Because it is invariant to rotation, translation, scale, and so on, it is also called the invariant moment. In image processing, geometric invariant moments can serve as an important feature for representing and classifying objects. The invariant moments commonly used in shape recognition mainly include Hu moments, Zernike moments, and so on. Hu moments were first proposed by M.K. Hu [32], who gave their definition, basic properties, and seven invariant moments with translation, rotation, and scaling invariance.

Specifically, we assume that the gray distribution in the target region D is f(x, y). In order to describe the target, the gray distribution outside region D is set to 0; then the geometric moment and the central moment of order p+q are, respectively, expressed as follows:

$$m_{pq} = \iint_D x^p y^q f(x, y)\, dx\, dy \tag{8}$$

$$\mu_{pq} = \iint_D (x - \bar{x})^p (y - \bar{y})^q f(x, y)\, dx\, dy \tag{9}$$

As shown in the above equations, $m_{pq}$ represents the geometric moment of order p+q, and $\mu_{pq}$ represents the central moment of order p+q. Computing these features for the reference contour image and the experimental contour image, we can use them to represent each contour. The similarity between each experimental contour and the reference contour is then compared, and contours whose similarity is less than the threshold are removed; the remaining contours are those of the Pyralidae insects. Finally, using the OpenCV function approxPolyDP() and other contour approximation functions, each remaining contour is approximated by a triangle and marked. The marked contours are the desired result.
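As a sketch of how this screening can be realized, the Hu-moment distance used by OpenCV's matchShapes() (method 1) can also be written out directly; the log transform makes the seven moments comparable in scale. The distance threshold and the approxPolyDP() epsilon factor are illustrative assumptions.

```python
import cv2
import numpy as np

def hu_log(contour):
    """Seven Hu invariant moments of a contour, log-scaled as in matchShapes()."""
    hu = cv2.HuMoments(cv2.moments(contour)).flatten()
    return np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # avoid log(0)

def hu_distance(c1, c2):
    """Method-1 distance: sum of |1/h1_i - 1/h2_i| over the log-scaled moments."""
    h1, h2 = hu_log(c1), hu_log(c2)
    return float(np.abs(1.0 / h1 - 1.0 / h2).sum())

def screen_and_mark(contours, ref_contour, dist_thresh=0.1):
    marked = []
    for c in contours:
        if hu_distance(c, ref_contour) < dist_thresh:    # keep similar contours
            tri = cv2.approxPolyDP(c, 0.05 * cv2.arcLength(c, True), True)
            marked.append(tri.reshape(-1, 2))            # triangle vertex coordinates
    return marked
```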

2.6. Recognition algorithm combined with robot control

Combining recognition with robot operations is one of the innovations of this chapter. Depending on the result of the similarity detection, the robot arm adjusts its speed. When the similarity is greater than 0.9, the robot arm stops moving; meanwhile, the camera sensors continue to collect image data, and the robot raises an alarm. When the similarity is between 0.7 and 0.9, the movement of the robot slows down. Using the robot and image recognition in a coordinated manner reduces the false alarm rate and the missed detection rate. Meanwhile, when other insects interfere, the robot arm stops or slows down, which reduces the probability of false positives. Only when the similarity of five consecutive insect images is greater than 0.9 do we make the final decision on the presence of Pyralidae insects. This prevents other insects from being mistaken for the target and improves the correct recognition rate.
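The decision rule above can be summarized in a few lines; the arm and alert interfaces (arm.stop(), arm.slow(), arm.resume(), alarm()) are hypothetical placeholders for the robot's actual motor and alert APIs.

```python
def control_step(similarity, history, arm, alarm):
    """Adapt arm speed to the detection similarity and confirm over 5 frames."""
    history.append(similarity)
    if similarity > 0.9:
        arm.stop()                                  # hold position, keep sampling
        if len(history) >= 5 and all(s > 0.9 for s in history[-5:]):
            alarm()                                 # Pyralidae presence confirmed
    elif similarity > 0.7:
        arm.slow()                                  # slow down for a closer look
    else:
        arm.resume()                                # continue at normal patrol speed
```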

3. Results and discussions

The hardware environment of this scheme includes a PC (Intel(R) Core(TM) i3-2500 CPU @ 3.30 GHz, 4.00 GB RAM), an embedded master development board (NVIDIA Jetson TX1), embedded auxiliary control development boards (two Raspberry Pi B+ and six Arduino Uno R3 expansion boards), a camera module (KS2A01AF), etc. The software environment includes the Windows 7 operating system, Python 2.7, OpenCV 2.4.13, and an embedded Linux operating system. The images used in the experiment are collected from the cameras on the robot arm. We gathered more than 200 photos of Pyralidae insects for the experiments. Some detection results are shown in Table 4. The robot can perform the designed motions, capture the images well, and identify Pyralidae insect targets in the images. The main parts and functions of the robot are shown in Table 2.

3.1. Probabilistic image acquisition experiment and analysis

After the image is converted to HSV space, histogram reverse mapping is conducted using the three template images on the target images, yielding the probability images. The probability images obtained in the experiment are shown in Table 3.

Table 2.

Image acquisition equipment: pest identification and environmental monitoring robot.

Table 3.

The original images and the obtained probability image after histogram reverse mapping.

Table 3 shows the probability images obtained by applying histogram reverse mapping with the three template images to the original images. The image in the first column of rows 2–4 is the template image used in that row. The first row of the table shows the five original images containing Pyralidae insects. The second row shows the probability images obtained by histogram back-projection with template image 1; the third row, with template image 2; and the fourth row, with template image 3. The last row shows the total probability images obtained from the above three probability images by a logical OR operation followed by image erosion.

As can be seen in Table 3, the proposed method avoids the situation in which a single template cannot adapt to a variety of scenes. The results after the erosion operation show that the total probability image obtained by the multi-template logical operation has a better effect.
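A brief sketch of this combination step, assuming the three per-template probability images are 8-bit arrays; the binarization threshold and erosion kernel size are illustrative assumptions.

```python
import cv2
import numpy as np

def combine_probability_maps(prob_maps, thresh=128, kernel_size=3):
    """Logical OR of the per-template maps followed by erosion, as in Table 3."""
    masks = [cv2.threshold(p, thresh, 255, cv2.THRESH_BINARY)[1] for p in prob_maps]
    combined = masks[0]
    for m in masks[1:]:
        combined = cv2.bitwise_or(combined, m)      # union of template responses
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    return cv2.erode(combined, kernel)              # erosion suppresses speckle noise
```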

3.2. Experiment and analysis of maize borer

After obtaining the probability images, the contour extraction, matching, screening, and recognition experiments are carried out. Triangle marks, which suit the shape characteristics of Pyralidae insects, are used to indicate the recognition results; the results are shown in Table 4.

Table 4.

The recognition results and the robot arm action.

As can be seen from Table 4, the proposed scheme can reliably identify targets in images containing Pyralidae insects. The number marked on each picture indicates the similarity. Marking the recognition results with triangles achieves good results. According to the recognition results, the speed of the robot arm is adjusted adaptively to improve detection accuracy. We then collected statistics on time consumption and other indicators from the experimental results. The processing time is about 1 s per image, so the method proposed in this chapter can achieve real-time processing.

3.3. Comparison and analysis

Currently, recognition methods based on ELM (extreme learning machine) and deep learning are developing rapidly. In theory, these methods can achieve a higher correct rate. Unfortunately, capturing and building an image set of maize borer pests is very difficult. So far, the few usable pictures we can take fall far short of the minimum number of images required for training. We also tried to collect images through the trap; however, the background of the resulting images is uniform, which does not meet the requirements. In addition, ELM and deep learning both have relatively high computational complexity and cannot meet the needs of real-time detection. For these two reasons, they are not feasible here. Conversely, by manually summarizing the characteristics of Pyralidae insects and letting the robot adaptively adjust its sampling frequency, we can achieve better accuracy and good practicability.

Finally, the proposed method is compared with the multi-structural element-based crop pest identification method of [33] and the general histogram reverse mapping method. The experimental results are shown in Table 5. As can be seen from Table 5, the maize borer recognition scheme proposed in this chapter has a higher recognition rate, a lower false alarm rate, and good application prospects. Besides, it does not require a large amount of data analysis, which keeps the average time consumption from increasing significantly. In Table 5, the recognition rate and the false alarm rate are calculated as follows:

$$\beta = \frac{\sum_{i=1}^{n} r_{ij}}{n}, \quad r_{ij} = 0 \text{ or } r_{ij} = 1 \tag{10}$$

$$\delta = \frac{x - \sum_{i=1}^{n}\sum_{j=1}^{m} r_{ij}}{x}, \quad r_{ij} = 0 \text{ or } r_{ij} = 1 \tag{11}$$
Schemes                                                              | Recognition rate (%) | False alarm rate (%) | Average time consumption (s)
Our proposed scheme in this chapter                                  | 94.3                 | 6.5                  | 1.12
Histogram reverse mapping method                                     | 65.2                 | 60.8                 | 1.01
Multi-structural element-based crop pest identification method [33]  | 78.8                 | 16.9                 | 1.10

Table 5.

Comparison results of different schemes.

In Eqs. (10) and (11), β represents the recognition rate and δ the false alarm rate. $r_{ij}$ indicates whether the jth contour of the ith Pyralidae insect exists (1 if it does, 0 otherwise). n represents the number of real Pyralidae insects in the image, x represents the total number of contours marked by the algorithm, and m represents the number of contours marked by the algorithm for the ith insect. Thus, the recognition rate reflects the ability of the algorithm to identify maize borers, and the false alarm rate reflects the proportion of erroneous contours among all marked contours. Note that the sum of these two rates is not necessarily equal to 1.
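For concreteness, a small worked example of Eqs. (10) and (11), under the reading that an insect counts as recognized when at least one of its marked contours is correct; the sample numbers are purely illustrative.

```python
def rates(r, x):
    """r[i] lists the 0/1 flags of the contours marked for insect i; x = all marked contours."""
    n = len(r)
    beta = sum(1 for row in r if any(row)) / float(n)        # Eq. (10): recognition rate
    delta = (x - sum(sum(row) for row in r)) / float(x)      # Eq. (11): false alarm rate
    return beta, delta

# Three real insects; three contours marked in total, two of them correct:
print(rates([[1], [1, 0], []], x=3))   # (0.666..., 0.333...) - the rates need not sum to 1
```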

Our scheme and the two other algorithms were each used to test more than 200 images containing Pyralidae insects. We then conducted a statistical analysis of the average time consumption, the recognition accuracy, and the false alarm rate. The statistics are shown in Table 5.

4. Conclusions

Pyralidae insects have a great influence on the quality and yield of maize and other crops. In order to solve the problem of maize borer detection, this chapter presents a scheme for the detection and identification of Pyralidae insects using the robot we designed. Firstly, mathematical morphology is used to preprocess the obtained image, and then the image is binarized after histogram reverse mapping. Next, the binary image undergoes contour extraction and preliminary screening. Then, combining the reference contour image, the contours with the characteristics of Pyralidae insects are selected using Hu moment features. Finally, this chapter provides a statistical analysis of the experimental results: the correct recognition rate based on multi-template matching can reach 94.3%. Compared with other methods, the time complexity of this scheme is essentially the same, which meets the requirement of real-time detection.

Acknowledgments

The contents of this chapter were supported by the Key R&D Project of Hainan Province (Grant no. ZDYF2018015), the Hainan Province Natural Science Foundation of China (Grant no. 617033), the Open Sub-project of State Key Laboratory of Marine Resource Utilization in South China Sea (Grant no. 2016013B), and the Oriented Project of State Key Laboratory of Marine Resource Utilization in South China Sea (Grant no. DX2017012).

Conflict of interest

The authors declare that there is no conflict of interests regarding the publication of this chapter.

References

  1. Wei TS, Zhu WF, Pang MH, Liu YC, Wang ZY, Dong JG. Influence of the damage of cotton bollworm and corn borer to ear rot in corn. Journal of Maize Sciences. 2013;21(4):116-118 (in Chinese)
  2. Wen C, Guyer D. Image-based orchard insect automated identification and classification method. Computers and Electronics in Agriculture. 2012;89:110-115
  3. Rupanagudi SR, Ranjani BS, Nagaraj P, Bhat VG, Thippeswamy G. A novel cloud computing based smart farming system for early detection of borer insects in tomatoes. In: 2015 International Conference on Communication, Information & Computing Technology (ICCICT), Mumbai, India: IEEE; 15–17 January, 2015. pp. 1-6
  4. Ali H, Lali MI, Nawaz MZ, Sharif M, Saleem BA. Symptom based automated detection of citrus diseases using color histogram and textural descriptors. Computers and Electronics in Agriculture. 2017;138(C):92-104
  5. Lu J, Ehsani R, Shi Y, Abdulridha J, Castro AI, Xu Y. Field detection of anthracnose crown rot in strawberry using spectroscopy technology. Computers and Electronics in Agriculture. 2017;135(C):289-299
  6. Xie C, Yang C, He Y. Hyperspectral imaging for classification of healthy and gray mold diseased tomato leaves with different infection severities. Computers and Electronics in Agriculture. 2017;135:154-162
  7. Xia C, Lee JM, Li Y, Chung BK, Chon TS. In situ detection of small-size insect pests sampled on traps using multifractal analysis. Optical Engineering. 2012;51(2):027001-1-027001-12
  8. Xia C, Chon TS, Ren Z, Lee JM. Automatic identification and counting of small size pests in greenhouse conditions with low computational cost. Ecological Informatics. 2015;29(9):139-146
  9. Chung BK, Xia C, Song YH, Lee JM, Li Y, Kim H, Chon TS. Sampling of Bemisia tabaci adults using a pre-programmed autonomous pest control robot. Journal of Asia-Pacific Entomology. 2014;17(4):737-743
  10. Qing Y, Chen G, Zheng W, Zhang C, Yang B, Jian T. Automated detection and identification of white-backed plant hoppers in paddy fields using image processing. Journal of Integrative Agriculture. 2017;16(7):1547-1557
  11. Liu T, Chen W, Wu W, Sun C, Guo W, Zhu X. Detection of aphids in wheat fields using a computer vision technique. Journal of Biosystems Engineering. 2016;141:82-93
  12. Ebrahimi MA, Khoshtaghaza MH, Minaei S, Jamshidi B. Vision-based pest detection based on SVM classification method. Computers and Electronics in Agriculture. 2017;137:52-58
  13. Javed MH, Noor MH, Khan BY, Noor N, Arshad T. K-means based automatic pests detection and classification for pesticides spraying. International Journal of Advanced Computer Science and Applications. 2017;8(11):236-240
  14. Dai S, Man H. A convolutional Riemannian texture model with differential entropic active contours for unsupervised pest detection. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA: IEEE; 5–9 March, 2017. pp. 1028-1032
  15. Zhao Y, Hu Z, Bai Y, Cao F. An accurate segmentation approach for disease and pest based on DRLSE guided by texture difference. Transactions of the Chinese Society for Agriculture Machinery. 2015;46(2):14-19 (in Chinese)
  16. Zhao Y, Hu Z. Segmentation of fruit with diseases in natural scenes based on logarithmic similarity constraint Otsu. Transactions of the Chinese Society for Agriculture Machinery. 2015;46(11):9-15 (in Chinese)
  17. Hu Y, Song L, Zhang J, Xie C, Li R. Pest image recognition of multi-feature fusion based on sparse representation. International Journal of Pattern Recognition and Artificial Intelligence. 2014;27(11):985-992 (in Chinese)
  18. Bayat M, Abbasi M, Yosefi A. Improvement of pest detection using histogram adjustment method and Gabor wavelet. Journal of Asian Scientific Research. 2016;6(2):24-33
  19. Martin A, Sathish D, Balachander C, Hariprasath T, Krishnamoorthi G. Identification and counting of pests using extended region grow algorithm. In: 2015 2nd International Conference on Electronics and Communication Systems (ICECS), Coimbatore, India: IEEE; 26–27 February, 2015. pp. 1229-1234
  20. Przybyłowicz Ł, Pniak M, Tofilski A. Semiautomated identification of European corn borer (Lepidoptera: Crambidae). Journal of Economic Entomology. 2015;109(1):195-199
  21. Deng L, Yu R. Pest recognition system based on bio-inspired filtering and LCP features. In: 2015 12th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), Chengdu, China: IEEE; 18–20 December, 2015. pp. 202-204
  22. Miranda JL. Pest identification using image processing techniques in detecting image pattern through neural network. International Journal of Advances in Image Processing Techniques. 2014;1(4):4-9
  23. Boniecki P, Koszela K, Piekarska-Boniecka H, Weres J, Zaborowicz M, Kujawa S, Majewskic A, Rabaa B. Neural identification of selected apple pests. Computers and Electronics in Agriculture. 2015;110:9-16
  24. Espinoza K, Valera DL, Torres JA, López A, Molina-Aiz FD. Combination of image processing and artificial neural networks as a novel approach for the identification of Bemisia tabaci and Frankliniella occidentalis on sticky traps in greenhouse agriculture. Computers and Electronics in Agriculture. 2016;127:495-505
  25. Zhu L, Zhang Z, Zhang P. Image identification of insects based on color histogram and dual tree complex wavelet transform (DTCWT). Acta Entomologica Sinica. 2010;53(1):91-97 (in Chinese)
  26. Zhu L, Zhang Z. Automatic insect classification based on local mean colour feature and supported vector machines. Journal of Oriental Insects. 2012;46(3–4):260-269
  27. Li Z, Hong T, Zeng X, Zheng J. Citrus red mite image target identification based on K-means clustering. Transactions of the Chinese Society of Agricultural Engineering. 2013;28(23):147-153 (in Chinese)
  28. Johannes A, Picon A, Alvarez-Gila A, Echazarra J, Rodriguez-Vaamonde S, Navajas AD, Ortiz-Barredo A. Automatic plant disease diagnosis using mobile capture devices, applied on a wheat use case. Computers and Electronics in Agriculture. 2017;138:200-209
  29. Swain MJ, Ballard DH. Color indexing. International Journal of Computer Vision. 1991;7(1):11-32
  30. Otsu N. A threshold selection method from gray-level histogram. IEEE Transactions on Systems, Man, and Cybernetics. 1979;9(1):62-66
  31. Xu X, Song E, Jin L. Characteristic analysis of threshold based on Otsu criterion. Acta Entomologica Sinica. 2009;37(12):2716-2719 (in Chinese)
  32. Doyle W. Operations useful for similarity-invariant pattern recognition. Association for Computing Machinery. 1962;9(2):259-267
  33. Liu J, Geng G, Ren Z. Plant pest recognition system based on multi-structure element morphology. Journal of Computational Design and Engineering. 2009;30(6):1488-1490 (in Chinese)
