Open access peer-reviewed chapter

Thresholding Image Techniques for Plant Segmentation

Written By

Miguel Ángel Castillo-Martínez, Francisco Javier Gallegos-Funes, Blanca E. Carvajal-Gámez, Guillermo Urriolagoitia-Sosa and Alberto J. Rosales-Silva

Submitted: 21 September 2021 Reviewed: 21 March 2022 Published: 28 April 2022

DOI: 10.5772/intechopen.104587

From the Edited Volume

Information Extraction and Object Tracking in Digital Video

Edited by Antonio José Ribeiro Neves and Francisco Javier Gallegos-Funes


Abstract

Image-based research faces challenges in extracting information from the objects in a scene. An image is a set of data points that can be processed with similarity-based methods, and the relevant research fields can be merged to produce a method for information extraction and pixel classification. A complete method is proposed that extracts information from the data and generates a classification model capable of isolating the pixels that belong to a plant from those that do not. Quantitative and qualitative results are presented to compare information-extraction methods and to select the best model. Classical and threshold-based state-of-the-art methods are grouped in the present work for reference and application in image segmentation, obtaining acceptable results in plant isolation.

Keywords

  • similarity
  • classification
  • threshold
  • image processing
  • segmentation

1. Introduction

Image-based research comprises three fields: image processing, computer vision, and computer graphics [1]. Their relationship is shown graphically in Figure 1.

Figure 1.

Fields in image-based research. Source: [1].

Image processing takes an image as input and performs a set of operations to create a new image that improves the visualization of the features of interest. It can also isolate features that carry no meaningful information so that they can be removed from the scene. In computer-aided cancer diagnosis in dermoscopy, shown in Figure 2, the lesion must be bounded, but there are meaningless elements such as hair and air bubbles; the lesion visualization must therefore be improved by removing them. One approach for the processing chain begins with a color space transformation, continues with hair detection, and finishes with image inpainting [2].

Figure 2.

Hair Inpainting in dermoscopic images.

Computer vision starts with an image and extracts features of the object in the scene. These features provide a quantitative description for object interpretation. In Figure 3, the region of interest (ROI), in this case a hand, is taken and the seven Hu moments are calculated to describe the hand sign as a feature vector, simplifying the classification process [3].

Figure 3.

Static sign recognition. Source: [3].

Computer graphics generates visual representations of the behavior of mathematical models. These visualizations range from simple lines to videos that show behavior evolving over time. Figure 4 shows a finite-difference description representing a coil with a ferromagnetic core.

Figure 4.

Numerical analysis for non-destructive electromagnetic test. Source: [4].

The applications are not exclusive to each field of study; the fields can be merged to build a hybrid system that improves the description and composition to solve a specific problem. For the hair removal in Figure 2, a new image containing the hair, described by a statistical operator and a thresholding rule, is generated. The inpainting takes the original and generated images and processes only the pixels that do not describe the lesion, minimizing the classification error. The solution thus employs all three research fields of image-based systems.

Color indexes. The RGB color space is used for primary color image encoding, allowing images to be stored and represented through their red, green, and blue components [5, 6, 7]. Transforming the color space yields other image descriptions that simplify processing or improve visualization.

In matrix representation, the color transformation has the following form,

$$I = T_S\,p + k \tag{1}$$

where $I$ is the color space with $N$ components, $T_S$ is the $N \times 3$ transformation matrix, $p$ is the RGB vector representation of the pixel, and $k$ is a constant vector. This representation allows a generalized transformation for $N$ channels with $N$ transformation equations.

Consider the RGB to YUV color space transformation [5]:

$$\begin{aligned} Y &= 0.299R + 0.587G + 0.114B \\ U &= 0.492(B - Y) \\ V &= 0.877(R - Y) \end{aligned} \tag{2}$$

Substituting and expanding Eq. (2),

$$\begin{aligned} Y &= 0.299R + 0.587G + 0.114B \\ U &= -0.147R - 0.289G + 0.436B \\ V &= 0.615R - 0.515G - 0.100B \end{aligned} \tag{3}$$

In this case, $k$ is the zero vector, which allows the RGB to YUV transformation to be written in matrix form as Eq. (4):

$$\begin{bmatrix} Y \\ U \\ V \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ -0.147 & -0.289 & 0.436 \\ 0.615 & -0.515 & -0.100 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{4}$$

Because color transformations do not consider neighboring pixels, they are known as point-to-point operations. Color spaces are designed to separate color from lighting. There are also other methods to transform color spaces and simplify feature extraction in images, which introduces the color index concept.
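Although the experiments reported later were run in MATLAB, a minimal Python/NumPy sketch of Eq. (1) as a point-to-point operation may help fix ideas; the function name apply_color_transform and the image shape convention are our own assumptions, not part of the original text.

```python
import numpy as np

# RGB -> YUV matrix from Eq. (4); for this transform k is the zero vector.
T_YUV = np.array([[ 0.299,  0.587,  0.114],
                  [-0.147, -0.289,  0.436],
                  [ 0.615, -0.515, -0.100]])

def apply_color_transform(rgb, T, k=None):
    """Point-to-point transform I = T_S p + k of Eq. (1).

    rgb: (H, W, 3) float array, channels in [0, 1].
    T:   (N, 3) transformation matrix.
    k:   optional constant vector of length N.
    Returns an (H, W, N) array, one output channel per row of T.
    """
    out = rgb @ T.T                # applies T to every pixel at once
    if k is not None:
        out = out + np.asarray(k)  # broadcast the constant vector
    return out

# Usage: convert a random image to YUV.
yuv = apply_color_transform(np.random.rand(8, 8, 3), T_YUV)
```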

A color index takes the color space information and generates a new channel, improving feature visualization according to the requirements. For plant images, Wang et al. estimate the nitrogen content of rice plants in digital images [8]. The authors obtain a measure called GMR, represented in Figure 5, by subtracting the red channel response from the green channel response, and then apply a fixed threshold for plant segmentation.

Figure 5.

Colored GMR response of a leaf in an image acquired with polarized light.

The Color Index of Vegetation Extraction (CIVE) is used to separate plants from soil, allowing growth evaluation in crops; moreover, CIVE shows a good response in outdoor environments [9, 10]. If the color is processed into the GMR and CIVE channels, the color transformation is defined as

$$\begin{bmatrix} GMR \\ CIVE \end{bmatrix} = \begin{bmatrix} -1 & 1 & 0 \\ 0.441 & -0.811 & 0.385 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} + \begin{bmatrix} 0 \\ 18.78745 \end{bmatrix} \tag{5}$$
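As a sketch, Eq. (5) can also be evaluated channel-wise; the helper below assumes float RGB channels and the name gmr_cive is ours.

```python
import numpy as np

def gmr_cive(rgb):
    """GMR and CIVE channels of Eq. (5) for an (H, W, 3) RGB image."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    gmr = G - R                                          # first row of Eq. (5)
    cive = 0.441 * R - 0.811 * G + 0.385 * B + 18.78745  # second row plus k
    return gmr, cive
```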

Similarity measure. The Minkowski distance is a generalized similarity measure [11, 12, 13, 14], defined as

$$d_n(X, Y) = \left( \sum_{i=1}^{n} |x_i - y_i|^n \right)^{1/n} \tag{6}$$

where $X = (x_1, x_2, \ldots, x_n), Y = (y_1, y_2, \ldots, y_n) \in \mathbb{R}^n$ are the data points between which the algorithm seeks the minimum distance. If $n = 2$, the Euclidean distance is measured. Substituting in Eq. (6), the following expression is obtained:

$$d_2(X, Y) = \sqrt{(X - Y)^{T}(X - Y)} \tag{7}$$
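A direct transcription of Eqs. (6) and (7), assuming NumPy vectors; the assertion illustrates the $n = 2$ case.

```python
import numpy as np

def minkowski(x, y, n):
    """Minkowski distance of Eq. (6); n = 2 reduces to Eq. (7)."""
    return np.sum(np.abs(x - y) ** n) ** (1.0 / n)

x, y = np.array([1.0, 2.0]), np.array([4.0, 6.0])
assert np.isclose(minkowski(x, y, 2), 5.0)  # sqrt(3^2 + 4^2) = 5
```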

Another way to measure similarity is the Mahalanobis distance [11, 15], calculated by Eq. (8):

$$d_M(X, Y) = \sqrt{(X - Y)^{T}\,\Sigma^{-1}\,(X - Y)} \tag{8}$$

where Σ is the covariance matrix.

$$\Sigma = \begin{bmatrix} \sigma_{11}^2 & \sigma_{12}^2 & \cdots & \sigma_{1n}^2 \\ \sigma_{21}^2 & \sigma_{22}^2 & \cdots & \sigma_{2n}^2 \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{n1}^2 & \sigma_{n2}^2 & \cdots & \sigma_{nn}^2 \end{bmatrix} \tag{9}$$

If $\Sigma = I$, then Eq. (8) yields the Euclidean distance. More generally, a weighted Euclidean distance can be calculated as

$$d_\Omega(X, Y) = \sqrt{(X - Y)^{T}\,\Omega^{-1}\,(X - Y)} \tag{10}$$

where $\Omega$ is an $n \times n$ weight matrix. If each component is independent from the others, it is possible to define the weight matrix $W = \Sigma \circ I$, keeping only the diagonal of the covariance matrix:

$$W = \begin{bmatrix} \sigma_{11}^2 & 0 & \cdots & 0 \\ 0 & \sigma_{22}^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_{nn}^2 \end{bmatrix} \tag{11}$$

From the analysis above, there are three cases for the weight matrix:

  1. Ω = Σ: the Mahalanobis distance is calculated.

  2. Ω = I: the Euclidean distance is calculated.

  3. Ω = W: the weighted Euclidean distance is calculated.

Finding features that describe the object of interest is required to calculate the similarity for each pixel in the image. The distance measure then expresses the similarity between the pixel and plants, background, fruit, and so on.
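The three weight-matrix cases can be exercised with a single generic routine; a minimal sketch follows, assuming samples stored as rows, with all names ours.

```python
import numpy as np

def quadratic_distance(x, y, omega):
    """Distance of Eq. (10): sqrt((x - y)^T Omega^{-1} (x - y))."""
    d = x - y
    return float(np.sqrt(d @ np.linalg.inv(omega) @ d))

# Toy 2-D data: rows are samples, columns are features.
rng = np.random.default_rng(0)
data = rng.normal(size=(500, 2)) * np.array([2.0, 0.5])

sigma = np.cov(data, rowvar=False)   # full covariance matrix, Eq. (9)
W = np.diag(np.diag(sigma))          # diagonal of Sigma, Eq. (11)
I = np.eye(2)

x, y = data[0], data[1]
d_mahalanobis = quadratic_distance(x, y, sigma)  # case 1: Omega = Sigma
d_euclidean   = quadratic_distance(x, y, I)      # case 2: Omega = I
d_weighted    = quadratic_distance(x, y, W)      # case 3: Omega = W
```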

Thresholding. Thresholding is based on a simple rule that clusters data into groups $k_0$ and $k_1$ according to a threshold $T$. The groups carry no meaning by themselves in this clustering; however, it is also possible to use the rule as a supervised method (classification) [16, 17, 18, 19]. The rule $R$ to identify a data point is described below:

$$R = \begin{cases} k_0, & \varepsilon < T \\ k_1, & \text{otherwise} \end{cases} \tag{12}$$

where $\varepsilon$ is a measure chosen according to the analysis.

For digital images, an example of pixel identification is shown in Figure 6. Here, $T$ is selected as the mean of the data, and the pixels are assigned to a group accordingly.

Figure 6.

Detected ROI by mean thresholding. a) Original image (Green Channel). b) Histogram (blue) and mean value (red). c) Grouped pixels.
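A minimal sketch of the rule of Eq. (12) with $T$ taken as the data mean, as in Figure 6; the grayscale test channel is synthetic and the function name is ours.

```python
import numpy as np

def mean_threshold(channel):
    """Eq. (12) with T = mean: True marks group k0 (epsilon < T)."""
    T = channel.mean()
    return channel < T

green = np.random.rand(64, 64)   # stand-in for a green channel in [0, 1]
mask = mean_threshold(green)     # boolean group assignment per pixel
```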

There are two challenges in using this rule: which parameters define $\varepsilon$ and which conditions determine $T$. According to the analysis, $\varepsilon$ could be a single independent component, meaning the rule uses only one color feature; it is also possible to compose several features, improving the results. In turn, $T$ can be defined by a measure that describes the data distribution of $\varepsilon$ and allows an acceptable division between groups.

Among state-of-the-art methods for threshold calculation, the Otsu method stands out [20]. It produces the threshold with the best separability over the data. Some indexes and the separability obtained are shown in Figure 7.

Figure 7.

CVPPP color index pixel-generated dataset separation. a) B channel from the RGB space: there is no way to separate the data; b) S channel from the HSV space: there is medium separability; c) a channel from the Lab space: there is an acceptable separation. Green: plant, red: background, histogram: 64 levels.
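For concreteness, a compact NumPy version of Otsu's criterion is sketched below, written as a histogram search that maximizes the between-class variance [20]; it assumes a float channel in [0, 1] with a 256-bin histogram and is our reconstruction, not the authors' MATLAB code.

```python
import numpy as np

def otsu_threshold(channel, bins=256):
    """Return the level that maximizes the between-class variance [20]."""
    hist, edges = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()                     # gray-level probabilities
    levels = (edges[:-1] + edges[1:]) / 2.0   # bin centers
    w0 = np.cumsum(p)                         # class-0 weight per candidate cut
    mu = np.cumsum(p * levels)                # cumulative mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)               # guard against empty classes
    sigma_b = np.zeros_like(w0)
    sigma_b[valid] = (mu[-1] * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return levels[int(np.argmax(sigma_b))]
```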

In order to measure the quality of the data isolation, the information gain is used. Information gain, widely used for decision tree generation, is a data homogeneity indicator [21, 22, 23]. It provides a class probability measure over a dataset distribution, where an entropy equal to one indicates that all classes have the same probability.

Information gain is calculated by Eq. (13):

$$G(X, T_n) = H(X) - \sum_{v \in T_n} \frac{|X_{T_v}|}{|X|}\,H(X_{T_v}) \tag{13}$$

where $v$ is each possible value of the random variable for the analyzed color index $T_n$, $X_{T_v}$ is the sub-dataset generated for that value, $|X|$ is the dataset cardinality, and $H(X)$ is the dataset entropy.

For the threshold case,

$$G(X, T_n) = H(X) - \left( \frac{|X_{T<}|}{|X|}\,H(X_{T<}) + \frac{|X_{T\geq}|}{|X|}\,H(X_{T\geq}) \right) \tag{14}$$

Entropy is a measure of the uncertainty of a random variable and is the most widely used information measure in this kind of process [24, 25, 26]. In normalized form, the Shannon entropy is calculated as follows:

$$H(X) = -\sum_{k=1}^{C} P_k \log_C(P_k) \tag{15}$$

where $C$ is the number of available classes in the dataset and $P_k$ is the corresponding probability of each class.

Considering $M$ pixels where each class has $M/C$ elements, Eq. (15) becomes

$$H(X) = -\sum_{k=1}^{C} \frac{1}{C} \log_C\!\left(\frac{1}{C}\right) \tag{16}$$

For the binary case, C is replaced by 2, then

$$H(X) = -\sum_{k=1}^{2} \frac{1}{2} \log_2\!\left(\frac{1}{2}\right) = 1 \tag{17}$$

If $H(X) = 1$ with $C > 1$, then all classes have the same probability, meaning that the dataset distribution for feature selection is uniform.

If the conditional entropies $H(X_{T<}) = H(X_{T\geq}) = 0$, then each sub-dataset contains elements that belong to only one class $k$; this indicates total separability.
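The supervised search suggested by Eqs. (14) and (15) can be sketched as follows, assuming a labeled pixel dataset; the helper names are ours.

```python
import numpy as np

def entropy(labels, C=2):
    """Normalized Shannon entropy of Eq. (15), logarithm in base C."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p) / np.log(C)))

def information_gain(feature, labels, T):
    """Information gain of Eq. (14) for the split feature < T."""
    g = entropy(labels)
    for part in (labels[feature < T], labels[feature >= T]):
        if part.size:
            g -= part.size / labels.size * entropy(part)
    return g

def best_threshold(feature, labels, candidates):
    """Supervised entropy method: keep the candidate with the largest gain."""
    gains = [information_gain(feature, labels, T) for T in candidates]
    k = int(np.argmax(gains))
    return candidates[k], gains[k]
```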


2. Methodology

The proposed method consists of two main stages: feature selection and classification. Feature selection takes the features that best separate pixels into foreground and background. Classification uses a similarity measure to compare each pixel with the plant description and assign it a class. Both processes are described below.

To select the features that best describe plants, the method requires a similarity measure that makes it possible to compare features and identify which ones provide an acceptable separation of the data. Considering a dataset with the same number of samples per class, the dataset has the form described in Figure 8.

Figure 8.

Pixel dataset representation, where $i_n$ is a feature that describes the object under study. The dataset contains $N$ features, and Class gives the meaning of the feature vector, in this case foreground or background.

Each feature $i_n$ is considered a continuous random variable. In order to use simple tools in the generation of the classification model, the random variable is discretized to a binary one. The resulting dataset is presented in Figure 9.

Figure 9.

Discretized variable dataset representation, where $T_n$ indicates whether the threshold for feature $i_n$ is exceeded.

For data classification, the minimum distance between the pixel and the object of interest is sought. This analysis requires the features that describe the object to be compared with the features that describe the pixel. For plant image processing, such features are color indexes such as NCIVE, MNGRDI, GMR, etc. The feature selection depends on which of them maximize the data separation.

Once the features are defined, independent per-feature statistics are calculated. This information allows an orientation correction and the computation of the distance threshold magnitude. In this analysis, all components are considered independent from one another; thus, the distance measure takes the third scenario (Ω = W). This weights the features with more variability and provides an eccentricity adjustment. An orientation correction should be applied to the data because the weighted Euclidean distance is rotation variant.

For the threshold calculation, the standard deviations $\sigma = (\sigma_{11}, \sigma_{22}, \ldots, \sigma_{nn}) \in \mathbb{R}^n$ are considered as a component vector. Its magnitude is calculated with the weighted Euclidean distance and assigned to the threshold. The standard deviation is used because the variance is not expressed in the same units as the data. This idea is expressed as follows:

$$Th = d_\Omega(0, \sigma), \quad \Omega = W \tag{18}$$

Before calculating distances, an orientation correction with the angle described by the data is needed. Finally, thresholding defines whether a pixel is plant or not with the following rule:

$$Plant_{rc} = \begin{cases} \text{True}, & d_W(p_{rc}, R) < Th \\ \text{False}, & \text{otherwise} \end{cases} \tag{19}$$

where $Plant_{rc}$ is the plant classification for pixel $p_{rc}$ and $R$ is the reference feature vector defined by the data in the plant class. The resulting classification model is shown in Figure 10.

Figure 10.

Pixel classification map. a) Data classification with the decision boundary at Th; b) normalized classification map with decision boundaries at [Th, 2Th, 3Th]. Green: plant, red: background, black: decision boundary. Th: computed threshold.
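Putting Eqs. (11), (18), and (19) together, a sketch of the classifier is given below. It assumes the reference $R$ is the mean of the plant-class pixels, which the text does not state explicitly, and it omits the orientation-correction step mentioned above; note that with Ω = W, Eq. (18) works out to $Th = \sqrt{n}$ in these normalized units.

```python
import numpy as np

def fit_plant_model(plant_pixels):
    """Fit R, W^{-1} and Th from pixels known to be plant.

    plant_pixels: (M, n) array; rows are samples, columns are color indexes.
    """
    R = plant_pixels.mean(axis=0)           # reference feature vector R
    std = plant_pixels.std(axis=0)          # sigma_11, ..., sigma_nn
    W_inv = np.diag(1.0 / std**2)           # inverse of W, Eq. (11)
    Th = float(np.sqrt(std @ W_inv @ std))  # Eq. (18); equals sqrt(n)
    return R, W_inv, Th

def classify_pixels(pixels, R, W_inv, Th):
    """Rule of Eq. (19): True where d_W(p, R) < Th."""
    d = pixels - R
    dist = np.sqrt(np.einsum('ij,jk,ik->i', d, W_inv, d))
    return dist < Th

# Toy usage with two synthetic indexes (e.g., a and NCIVE).
rng = np.random.default_rng(1)
plant = rng.normal(size=(200, 2)) * [0.05, 0.08] + [0.40, 0.44]
R, W_inv, Th = fit_plant_model(plant)
is_plant = classify_pixels(plant, R, W_inv, Th)
```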


3. Results

For the following experiments, MATLAB 2020b was used on a laptop with a Xeon E3-1505M processor and 32 GB of RAM. Only matrix operations were used for data processing and image representation.

According to the study in [27], the optimal color indexes for data separation are the L and a channels of the Lab color space. Experiments were carried out on the CVPPP dataset [28], which is the test dataset, taking its variability into account. On the one hand, comparing both results shows that the a channel matches [27] as the best index to isolate plant pixels. On the other hand, other indexes, NCIVE and MNGRDI, provide an acceptable separability, as shown in Tables 1 and 2. Comparatively, the Otsu thresholding yields lower separability than the supervised entropy.

Index     Threshold   Information gain
a         0.3843      0.8421
NCIVE     0.4196      0.8036
MNGRDI    0.5294      0.7537
G         0.3882      0.6913
b         0.6705      0.6729
L         0.3843      0.6601
V         0.4078      0.6387
S         0.5294      0.6014
R         0.3137      0.4325
H         0.5098      0.1597
B         0.2470      0.0074

Table 1.

Information gain of the indexes: Otsu method.

Index     Threshold   Information gain
a         0.3968      0.8471
NCIVE     0.4444      0.8428
MNGRDI    0.5238      0.7467
G         0.3174      0.7265
V         0.4078      0.6894
b         0.6507      0.6786
L         0.3333      0.6601
S         0.4920      0.6055
R         0.2857      0.4489
H         0.1904      0.3629
B         0.0630      0.0748

Table 2.

Information gain of the indexes: supervised entropy.

Another observation is the similarity in the ranking of separability quality. Moreover, since the supervised entropy knows the meaning of the data, its quality measure is better in most cases.

Figure 11 shows some examples of the data separability obtained with the index thresholds. Two of the best cases, with acceptable separability, and two of the worst cases, with no apparent separability, are shown.

Figure 11.

Thresholds and information gains for some color indexes. Left: Supervised entropy method, right: Otsu method.

Finally, the classification map for plant pixels based on the a and NCIVE indexes is shown in Figure 12.

Figure 12.

Classification map for plant pixel segmentation. Th: Computed threshold.

According to this model, pixels can be classified as plant or not. Some visual results are shown in Figure 13.

Figure 13.

Visual results in plant image segmentation for CVPPP dataset.


4. Conclusions

Thresholding methods are effective when the problem data are well defined and the overlap between groups is minimal. They are simple methods that provide acceptable results in the segmentation problem, and combining methods makes it possible to raise the quality of the models.

Entropy, used in a supervised way, can improve the data separability. Whereas the Otsu method only minimizes the within-group variance, the quality of the results using supervised entropy is improved as a consequence of considering the meaning of the data. Supervised methods know the expected response and separate the data into the corresponding classes; Otsu can only cluster data that do not yet have a well-defined meaning.

Distance measures express the similarity of the compared data and only need a reference description of the studied object. In this case, the plant pixels are described statistically as a reference against which new pixels are compared and classified. Thus, segmentation keeps the pixels that belong to the plant and discards those that do not.

From Tables 1 and 2, the best indexes appear in the same order in both results. Furthermore, the calculated threshold improves the data separability in the supervised entropy case. This allows the development of classification maps like the one in Figure 12, considering the indexes that achieve the best pixel separation. Figure 11 shows some separation scenarios between pixels that are plant (green distribution) and those that are not (red distribution). Finally, the classification map in Figure 12 illustrates the best classifier obtained with the method for selecting pixels that belong to the plant class.


Acknowledgments

To M. Minervini et al. for providing their dataset. This work was supported by the Instituto Politécnico Nacional de México (IPN) (Grant ID: 20201681) and the Consejo Nacional de Ciencia y Tecnología (CONACyT), project 240820.


Conflict of interest

The authors declare no conflict of interest.

References

  1. Hunt K. Introduction. In: The Art of Image Processing with Java. United States: A K Peters/CRC Press; 2010. pp. 1-12. Available from: https://doi.org/10.1201/9781439865590
  2. Castillo Martínez MA, Gallegos Funes FJ, Rosales Silva AJ, Ramos Arredondo RI. Preprocesamiento de imágenes dermatoscopicas para extracción de características. Research in Computing Science. 2016;114(1):59-70. Available from: http://rcs.cic.ipn.mx/2016_114/Preprocesamiento de imagenes dermatoscopicas para extraccion de caracteristicas.pdf
  3. Pérez LM, Rosales AJ, Gallegos FJ, Barba AV. LSM static signs recognition using image processing. In: 14th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE). Mexico City: IEEE; 2017. pp. 1-5. Available from: http://ieeexplore.ieee.org/document/8108885/
  4. Chávez-González AF, Aguila-Munoz J, Perez-Benitez JA, Espina-Hernandez JH. Finite differences software for the numeric analysis of a non-destructive electromagnetic testing system. In: 23rd International Conference on Electronics, Communications and Computing. Cholula: IEEE; 2013. pp. 82-86. Available from: http://ieeexplore.ieee.org/document/6525764/
  5. Burger W, Burge MJ. Color images. In: Digital Image Processing. 2nd ed. London: Springer; 2016. pp. 291-328. Available from: http://link.springer.com/10.1007/978-1-4471-6684-9_12
  6. Sundararajan D. Color image processing. In: Digital Image Processing. Singapore: Springer Singapore; 2017. pp. 407-438. Available from: http://link.springer.com/10.1007/978-981-10-6113-4_14
  7. Gonzalez RC, Woods RE. Color image processing. In: Digital Image Processing. 4th ed. England: Pearson; 2018. pp. 399-461
  8. Wang Y, Wang D, Zhang G, Wang J. Estimating nitrogen status of rice using the image segmentation of G-R thresholding method. Field Crops Research. 2013;149:33-39. DOI: 10.1016/j.fcr.2013.04.007
  9. Hamuda E, Glavin M, Jones E. A survey of image processing techniques for plant extraction and segmentation in the field. Computers and Electronics in Agriculture. 2016;125:184-199. DOI: 10.1016/j.compag.2016.04.024
  10. Kataoka T, Kaneko T, Okamoto H, Hata S. Crop growth estimation system using machine vision. In: Proceedings of the 2003 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2003). 2003;2:1079-1083
  11. Flach P. Distance-based models. In: Machine Learning. Cambridge: Cambridge University Press; 2012. pp. 231-261. Available from: https://www.cambridge.org/core/product/identifier/CBO9780511973000A014/type/book_part
  12. Aldridge M. Clustering: An overview. In: Lecture Notes in Data Mining. London: World Scientific; 2006. pp. 99-107. Available from: http://www.worldscientific.com/doi/abs/10.1142/9789812773630_0009
  13. Zhou ZH. Clustering. In: Machine Learning. Singapore: Springer Singapore; 2021. pp. 211-240. Available from: https://link.springer.com/10.1007/978-981-15-1967-3_9
  14. Zhang D. Image ranking. In: Fundamentals of Image Data Mining. Cham: Springer; 2019. pp. 271-287. Available from: http://link.springer.com/10.1007/978-3-030-17989-2_12
  15. Zhang Y, Li Z, Cai J, Wang J. Image segmentation based on FCM with Mahalanobis distance. In: International Conference on Information Computing and Applications. Berlin: Springer; 2010. pp. 205-212. Available from: http://link.springer.com/10.1007/978-3-642-16167-4_27
  16. Flach P. Machine Learning. 2nd ed. Cambridge: Cambridge University Press; 2012. Available from: http://ebooks.cambridge.org/ref/id/CBO9780511973000
  17. Sundararajan D. Digital Image Processing. Singapore: Springer Singapore; 2017. Available from: https://link.springer.com/book/10.1007/978-981-10-6113-4
  18. Burger W, Burge MJ. Point operations. In: Digital Image Processing. 2nd ed. London: Springer; 2016. pp. 57-88. Available from: https://link.springer.com/chapter/10.1007/978-1-4471-6684-9_4
  19. Gonzalez RC, Woods RE, Eddins SL. Image segmentation I. In: Digital Image Processing Using MATLAB. 3rd ed. United States of America: Gatesmark Publishing; 2020. pp. 633-721
  20. Otsu N. A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics. 1979;9(1):62-66. Available from: http://ieeexplore.ieee.org/document/4310076/
  21. Rokach L, Maimon O. Splitting criteria. In: Data Mining with Decision Trees. Singapore: World Scientific Publishing; 2014. pp. 61-68. Available from: http://www.worldscientific.com/doi/abs/10.1142/9789814590082_0005
  22. Masud MM, Khan L, Thuraisingham B. Email worm detection using data mining. In: Techniques and Applications for Advanced Information Privacy and Security. Hershey, PA: IGI Global; 2009. pp. 20-34. Available from: http://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/978-1-60566-210-7.ch002
  23. Omitaomu OA. Decision trees. In: Lecture Notes in Data Mining. London: World Scientific; 2006. pp. 39-51. Available from: http://www.worldscientific.com/doi/abs/10.1142/9789812773630_0004
  24. Addison PS. The Illustrated Wavelet Transform Handbook. Boca Raton: CRC Press; 2017. p. 163. Available from: https://www.taylorfrancis.com/books/9781315372556
  25. Rubinstein RY, Kroese DP. The Cross-Entropy Method. (Information Science and Statistics). New York, NY: Springer New York; 2004. Available from: http://link.springer.com/10.1007/978-1-4757-4321-0
  26. Ito S, Sagawa T. Information flow and entropy production on Bayesian networks. In: Mathematical Foundations and Applications of Graph Entropy. Weinheim, Germany: Wiley-VCH Verlag GmbH & Co. KGaA; 2016. pp. 63-99. Available from: http://doi.wiley.com/10.1002/9783527693245.ch3
  27. Hernández-Hernández JL, García-Mateos G, González-Esquiva JM, Escarabajal-Henarejos D, Ruiz-Canales A, Molina-Martínez JM. Optimal color space selection method for plant/soil segmentation in agriculture. Computers and Electronics in Agriculture. 2016;122:124-132
  28. Minervini M, Fischbach A, Scharr H, Tsaftaris SA. Finely-grained annotated datasets for image-based plant phenotyping. Pattern Recognition Letters. 2016;81:80-89. DOI: 10.1016/j.patrec.2015.10.013
