
Image Processing for Spider Classification

Written By

Jaime R. Ticay-Rivas, Marcos del Pozo-Baños, Miguel A. Gutiérrez-Ramos, William G. Eberhard, Carlos M. Travieso and Alonso B. Jesús

Published: 29 August 2012

DOI: 10.5772/50341

From the Edited Volume

Biodiversity Conservation and Utilization in a Diverse World

Edited by Gbolagade Akeem Lameed


1. Introduction

As is defined by UNESCO [22]: "Biological diversity or biodiversity is defined as the diversity of all living forms at different levels of complexity: genes, species, ecosystems and even landscapes and seascapes. Biodiversity is shaped by climatic conditions, the properties of soils and sediments, evolutionary processes and human action. Biodiversity can be greatly enhanced by human activities; however, it can also be adversely impacted by such activities due to unsustainable use or by more profound causes linked to our development models."

It is clear that climate change and biodiversity are interconnected. Biodiversity is impacted by climate change, but it also makes an important contribution to both climate-change mitigation and adaptation through the ecosystem services it supports. Therefore, conserving and sustainably managing biodiversity is crucial to addressing climate change.

Biodiversity conservation is an urgent environmental issue that deserves special attention. It is as critical to humans as it is to the other life forms on Earth. Countries around the world acknowledge that species research is crucial in order to obtain and develop the right methods and tools to understand and protect biodiversity. Thus, biodiversity conservation has become a top priority for researchers [1].

In this sense, a great effort is being carried out by the scientific community in order to study the huge biodiversity present on the planet. Sadly, spiders have been one of the most neglected groups in conservation biology [2]. These arachnids are plentiful and ecologically crucial in almost every terrestrial and semi-terrestrial habitat [3] [4] [5]. Moreover, they present a series of extraordinary qualities, such as the ability to react to environmental changes and anthropogenic impacts [5] [6].

Several works have studied spiders' behavior. Some of them focus on the way spiders build their webs as a source of information for species identification [7] [8]. Artificial intelligence systems have proven to be of incalculable value for these studies. [9] proposed a model of spider behavior that simulates how a specific spider species builds its web. [10] recorded how spiders build their webs in a controlled scenario for further spatiotemporal analysis.

Because spider webs carry an incredible amount of information, this chapter presents a study of their usage for the automatic classification of spider species. In particular, computer vision and artificial intelligence techniques are used for this aim. Moreover, it will be shown that the webs alone carry enough information to perform spider species classification. To the authors' knowledge, this is a novel approach to this problem.

The remainder of this chapter is organized as follows. First, the database is briefly presented. Section 3 explains how the images were preprocessed in order to extract the spider webs from the background. The feature extraction and classification techniques are introduced in Sections 4 and 5. Next, the experiments and results are presented in detail. Finally, the conclusions derived from the results are presented.


2. Database

The database contains spider web images of four different species: Allocyclosa, Anapisona simoni, Micrathena duodecimspinosa and Zosis geniculata. The classes contain 28, 41, 39 and 42 images respectively, which makes a total of 150 images. Some examples can be seen in Figure 1. Since the web images were taken in both controlled and uncontrolled environments, the lighting conditions and background differ between classes.

Figure 1.

Spider web samples of the four species: a) Allocyclosa. b) Anapisona simoni. c) Micrathena duodecimspinosa. d) Zosis geniculata.

The images that correspond to Allocyclosa were assigned to Class 1 (C1). These were taken in a natural night time environment. The flash of the camera in conjunction with dark background enhanced the spider webs, resulting in a set of images with good quality for the processing stage.

Class | Number of samples | Size (pixels) | Bits (color/gray)
1 | 28 | 1024x768 | 24 (True color)
2 | 41 | 2240x1488 | 8 (Gray scale)
3 | 39 | 2240x1488 | 24 (True color)
4 | 42 | 2216x2112 | 24 (True color)

Table 1.

Technical features of the database.

Class 2 (C2) corresponds to the Anapisona simoni images. These spider webs were built in a controlled environment, with the singularity that they were built as tents, causing overlapping threads and light reflections. This made the processing stage far more complex than in C1.

Class 3 (C3) is composed of Micrathena duodecimspinosa images. These images were also taken in a natural environment during the day. This scenario, with the presence of sunlight and natural elements such as leaves and tree branches, required a more complex treatment as well.

Finally, Class 4 (C4) corresponds to the Zosis geniculata images. Again, they were taken in a controlled environment, allowing the capture of images with a black background and uniform lighting.

The technical features of the images from each class are summarized in Table 1.


3. Preprocessing

As can be seen in Figure 1, spider web images were taken in both controlled and uncontrolled environments. Thus, the preprocessing step was vital in order to isolate the spider webs and remove possible effects of background in the system.

Image processing techniques were employed in order to isolate the spider webs from light reflections and from background elements such as leaves or tree branches. Once this processing was applied, a new normalized database was obtained.

3.1. Spider web selection

Since the image collection was not taken for this research, it contains information that does not provide valid data for the study of the spiders, namely any element external to the spider webs. This is why the region of interest of each image was selected manually, as represented in Figure 2.

Figure 2.

Spider web selection.

Once the spider web has been selected, an adjustment of the aspect ratio is necessary in order to obtain a square image. This will be explained in detail in the following section.
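As a minimal sketch of this adjustment and of the later size normalization (Section 6 works with 10x10 pixel images), the snippet below crops the selected region to a central square and resizes it. The central-square crop and the use of scikit-image are assumptions, since the chapter does not detail the exact procedure.

```python
# Hypothetical sketch: make the manually selected region square and normalize
# its size.  The central crop and the 10x10 target are assumptions taken from
# the description in Section 6, not the chapter's exact method.
import numpy as np
from skimage.transform import resize

def to_square(image: np.ndarray, out_size: int = 10) -> np.ndarray:
    side = min(image.shape[:2])                       # length of the shorter side
    top = (image.shape[0] - side) // 2
    left = (image.shape[1] - side) // 2
    square = image[top:top + side, left:left + side]  # central square crop
    return resize(square, (out_size, out_size), anti_aliasing=True)
```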

3.2. Image contrast

To enhance the contours of the spider web's threads, an increase of the color contrast was first applied. Spatial filtering was used to enhance or attenuate details in order to obtain a better visual interpretation and to prepare the data for the next preprocessing step. With this filtering, the value of each pixel is modified according to its neighbors' values, transforming the original gray levels so that they become more similar to, or more different from, the corresponding neighboring pixels.

In general, the convolution of an image f of dimensions M×N with a mask h of dimensions m×n is given by Equation (1):

g(x, y) = \sum_{s=-a}^{a} \sum_{t=-b}^{b} f(x+s,\, y+t)\, h(s, t)   (1)

where f(x+s, y+t) are the pixel values of the selected block, h(s, t) are the mask coefficients and g(x, y) is the filtered image. The block dimensions are defined by m = 2a+1 and n = 2b+1. The effect of applying contrast-enhancement filtering to a gray image can be observed in Figure 3.

Figure 3.

a) Original image. b) Contrast-enhanced image.
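A minimal sketch of the spatial filtering of Equation (1) is shown below. The chapter does not report the mask coefficients, so a standard 3x3 sharpening kernel is assumed, and scipy is used for the convolution.

```python
# Minimal sketch of Eq. (1): each output pixel is a weighted sum of its
# neighbourhood.  The 3x3 sharpening mask h is an assumption, not the exact
# kernel used in the chapter.
import numpy as np
from scipy.ndimage import convolve

def enhance_contrast(gray_image: np.ndarray) -> np.ndarray:
    h = np.array([[ 0, -1,  0],
                  [-1,  5, -1],
                  [ 0, -1,  0]], dtype=float)          # centre-weighted sharpening mask
    g = convolve(gray_image.astype(float), h, mode="reflect")
    return np.clip(g, 0, 255)                          # keep values in the 8-bit range
```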

3.3. Image binarization

The binarization process transforms the image into a black and white format in a way that does not change the essential properties of the image. Equation (2) defines the binarization process, where f(x, y) is the original image and g(x, y) the obtained image:

g(x, y) = \begin{cases} 0, & f(x, y) < \text{threshold} \\ 1, & f(x, y) \ge \text{threshold} \end{cases}   (2)

This threshold is computed using the well-known Otsu's method [25], which assigns each pixel to a given group by computing the optimal value from which to carry out that assignment.
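The binarization of Equation (2) with Otsu's threshold can be sketched as follows; the scikit-image implementation of Otsu's method is assumed here.

```python
# Minimal sketch of Eq. (2): pixels below Otsu's threshold become 0 (background),
# the rest become 1 (web threads).
import numpy as np
from skimage.filters import threshold_otsu

def binarize(gray_image: np.ndarray) -> np.ndarray:
    threshold = threshold_otsu(gray_image)              # Otsu's optimal threshold [25]
    return (gray_image >= threshold).astype(np.uint8)
```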

3.4. Image denoising

Once the image has been binarized, a denoising process was used aiming to eliminate any irrelevant information. To achieve this goal, two specific techniques were applied: Wiener Filtering and Morphological Operations.

The Wiener filter applies spatial filtering using statistical methods in order to reduce noise and smooth shapes. It gradually smooths the image by changing the areas where the noise is very apparent, while keeping the areas where details are present and the noise is less apparent. The Wiener filter adapts to the local image variance.

In this work, the algorithm wiener2 [21] was used to compute the local mean and the variance around each pixel in the image a:

\mu = \frac{1}{NM} \sum_{\eta_1, \eta_2 \in \eta} a(\eta_1, \eta_2)   (3)

\sigma^2 = \frac{1}{NM} \sum_{\eta_1, \eta_2 \in \eta} a^2(\eta_1, \eta_2) - \mu^2   (4)

where η is the N×M local neighborhood of each pixel in the image a. A 2x2 block was chosen heuristically, i.e. this was the configuration that provided the best visual effect. wiener2 then filters the image using these estimates, where b is the resulting image:

b(n_1, n_2) = \mu + \frac{\sigma^2 - \nu^2}{\sigma^2} \left( a(n_1, n_2) - \mu \right)   (5)

where ν² is the noise variance.

If the noise variance is not given, wiener2 uses the average of all the local estimated variances.
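A minimal sketch of this adaptive Wiener filtering, using scipy's equivalent of wiener2 and the 2x2 neighborhood reported above, might look as follows.

```python
# Minimal sketch of the adaptive Wiener denoising of Eqs. (3)-(5).  scipy's
# wiener mirrors MATLAB's wiener2: local mean and variance are estimated in a
# small window and, if the noise power is not given, it is estimated as the
# average of the local variances.
import numpy as np
from scipy.signal import wiener

def wiener_denoise(image: np.ndarray) -> np.ndarray:
    return wiener(image.astype(float), mysize=(2, 2))    # 2x2 window chosen heuristically
```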

On the other hand, morphological operations are transformations that modify the structure or shape of the objects in the image based on their geometry and shape, simplifying the image. These techniques can be used to denoise an image, for feature extraction, or to process specific regions.

An illustrative example of these operations is shown in Figures 4 and 5, where noise and projections (in the inner circle) are removed, obtaining more uniform boundaries.

The resulting image after denoising can be observed in Figure 6.
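The morphological clean-up illustrated in Figures 4 to 6 could be sketched as follows; the structuring element and the minimum object size are assumptions, since the chapter does not report them.

```python
# Hypothetical sketch of the morphological denoising: isolated pixels are removed
# and thread contours are smoothed.  Parameter values (min_size, disk radius) are
# assumptions.
import numpy as np
from skimage.morphology import remove_small_objects, binary_opening, disk

def clean_binary_web(binary_image: np.ndarray) -> np.ndarray:
    web = binary_image.astype(bool)
    web = remove_small_objects(web, min_size=8)       # drop isolated noise pixels
    web = binary_opening(web, footprint=disk(1))      # smooth jagged contours
    return web.astype(np.uint8)
```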

Figure 4.

Example of applying morphological operations: elimination of isolated pixels.

Figure 5.

Example of applying morphological operations: smoothing of contours.

Figure 6.

Image after denoising. a) Original image. b) Binarized image.

3.5. Center of the spiderwebs

Finally, the center of the spider webs was used as the source of discriminative information to classify the spiders. Thus, once the images were preprocessed, this central area was selected to form the experimental database. Figures 7 and 8 show the center of the spider webs for each species.
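A minimal sketch of selecting the central area is given below, assuming a fixed fraction of the preprocessed image is kept; the fraction is an assumption, since the chapter selects the hub visually.

```python
# Hypothetical sketch: crop the central region of the preprocessed web image.
# The fraction kept is an assumption; the chapter selects the hub manually.
import numpy as np

def crop_center(image: np.ndarray, fraction: float = 0.25) -> np.ndarray:
    rows, cols = image.shape[:2]
    h, w = max(1, int(rows * fraction)), max(1, int(cols * fraction))
    top, left = (rows - h) // 2, (cols - w) // 2
    return image[top:top + h, left:left + w]
```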

Figure 7.

Selecting the center of the spiderwebs.

Figure 8.

Resulting center for each species.


4. Feature extractors

In general, feature extraction refers to the process of obtaining numerical measures of images such as area, radius, perimeter, etc. It also refers to the process of transforming a set of original features of dimension m into another set of features, usually of dimension n < m; such methods are termed transformed-domain techniques. This is the concept used in the present work.

Two well-known techniques were used: the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT). These were selected as they have been successfully used in other biometric studies. Besides reducing the dimensionality of the data and the computational requirements of the classifier stage, these techniques can improve the generalization of the information and the system's success rate.

Figure 9.

Algorithm for the Discrete Wavelet Transform

4.1. Wavelet transform

The discrete wavelet transform (DWT) is based on the idea of decomposing a signal in terms of displaced and dilated versions of a finite wave called the mother wavelet. The wavelet transform is a preprocessing and feature extraction technique that can be directly applied to the spider web images. The DWT is defined in [24] as follows:

C[j, k] = \sum_{n} f[n] \, \psi_{j,k}[n]   (6)

where ψj,k is the transform function:

\psi_{j,k}[n] = 2^{j/2} \, \psi[2^{j} n - k]   (7)

In wavelet analysis it is common to evaluate the results as approximations and details. The approximations are the low-frequency components of the signal and the details are the high-frequency components. For many signals the most important information is contained in the low frequencies; this content is what gives the signal its identity. The filtering process used to obtain the approximations and details of the one-dimensional DWT employed in this work is shown in Figure 9.

Applying different mother wavelet families in the preprocessing (artifact elimination) and in the feature extraction yields sets of good, discriminative parameters. Unlike the Fourier transform, the wavelet transform can be implemented on many bases. The different categories of wavelets (continuous, discrete, orthogonal, etc.) and the various types of wavelet functions within each category provide a wide number of options for analyzing a signal. This allows the selection of the basis functions whose shape better approximates the characteristics of the signal to be represented or analyzed. In this work, the families Daubechies 1 (db1), Biorthogonal 3.7 (bior3.7) and Discrete Meyer (dmey) were used.
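As a sketch of how such a DWT feature vector could be computed with PyWavelets for the three families above (keeping only the approximation sub-band, as described in Section 6.2):

```python
# Minimal sketch of the 2-D DWT feature extraction: only the approximation
# (low-frequency) sub-band is kept; horizontal, vertical and diagonal details
# are discarded.  Wavelet names ('db1', 'bior3.7', 'dmey') follow PyWavelets.
import numpy as np
import pywt

def dwt_features(image: np.ndarray, wavelet: str = "db1") -> np.ndarray:
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), wavelet)
    return cA.ravel()                     # approximation coefficients as feature vector
```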

4.2. Discrete Cosine Transform

The Discrete Cosine Transform (DCT) was applied to eliminate noise and high-frequency details [23]. Besides, this transform has a good energy compaction property and produces uncorrelated coefficients, where the base vectors of the DCT depend only on the order of the selected transformation and not on the statistical properties of the input data.

Another important aspect of the DCT is its capacity to quantize the coefficients using quantization values chosen in a visual way. This transformation has been widely adopted in digital image processing, as there is a high correlation among the elements of a conventional image.
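A minimal sketch of the DCT feature extraction follows: the 2-D DCT of the size-normalized image is computed and the first M coefficients are kept. Ordering the coefficients by increasing spatial frequency (a zig-zag-like scan) is an assumption, since the chapter only states that the first M coefficients are used.

```python
# Minimal sketch: 2-D DCT of the 10x10 web image, keeping the first M
# low-frequency coefficients as the feature vector.  The anti-diagonal ordering
# used here is an assumption.
import numpy as np
from scipy.fft import dctn

def dct_features(image: np.ndarray, m: int = 60) -> np.ndarray:
    coeffs = dctn(image.astype(float), norm="ortho")      # 10x10 image -> 100 coefficients
    freq = np.add.outer(np.arange(coeffs.shape[0]),
                        np.arange(coeffs.shape[1]))       # i + j ~ spatial frequency
    order = np.argsort(freq.ravel(), kind="stable")
    return coeffs.ravel()[order][:m]                      # keep the first M coefficients
```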


5. Classification: Support Vector Machine

Once the images were transformed into a set of features, the classification stage tried to produce an answer to the spider identification problem. In this work, the well-known Support Vector Machine (SVM) technique has been used.

The SVM is a method of structural risk minimization (SRM) derived from the statistical learning theory developed by Vapnik and Chervonenkis [17]. It is enclosed in the group of supervised learning methods of pattern recognition, and it is used for classification and regression analysis.

Based on characteristic points called Support Vectors (SVs), the SVM uses a hyperplane or a set of hyperplanes to divide the space into zones enclosing a common class. By labeling these zones, the system is able to identify the membership of a test sample. The interesting aspect of the SVM is that it is able to do so even when the problem is not linearly separable. This is achieved by projecting the problem into a higher-dimensional space where the classes are linearly separable. The projection is performed by an operator known as a kernel, and this technique is called the kernel trick [18] [19]. The use of hyperplanes to divide the space gives rise to margins, as shown in Figure 10.

In this work, Suykens et al.'s LS-SVM [20] was used along with the Radial Basis Function kernel (RBF kernel). The regularization parameter and the bandwidth of the RBF function were automatically optimized using the validation results obtained from 10 iterations of a Hold-Out cross-validation process. Two samples from each class (from the training set) were used for validation and the remaining ones for training, since the number of training samples was observed to have a big impact on the optimal LS-SVM parameters. Once the optimal parameters were found, they were used to retrain the LS-SVM using all available training samples.
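The sketch below illustrates this training scheme. Since the original work used Suykens' LS-SVMlab in MATLAB, a standard RBF-kernel SVM from scikit-learn is used here as a stand-in, and the hyperparameter grid is an assumption.

```python
# Hypothetical sketch of the classifier training: repeated Hold-Out validation on
# the training set to pick the regularization parameter and RBF bandwidth, then
# retraining on all training samples.  SVC stands in for the LS-SVM; the C/gamma
# grid is an assumption.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def train_rbf_svm(X_train, y_train, n_iterations: int = 10, seed: int = 0):
    rng = np.random.RandomState(seed)
    n_val = 2 * len(np.unique(y_train))               # roughly two samples per class
    best_score, best_params = -np.inf, None
    for C in (0.1, 1, 10, 100):
        for gamma in (0.001, 0.01, 0.1, 1):
            scores = []
            for _ in range(n_iterations):              # repeated Hold-Out validation
                X_tr, X_val, y_tr, y_val = train_test_split(
                    X_train, y_train, test_size=n_val,
                    stratify=y_train, random_state=rng)
                model = SVC(kernel="rbf", C=C, gamma=gamma).fit(X_tr, y_tr)
                scores.append(model.score(X_val, y_val))
            if np.mean(scores) > best_score:
                best_score, best_params = np.mean(scores), dict(C=C, gamma=gamma)
    # Retrain on all available training samples with the selected parameters.
    return SVC(kernel="rbf", **best_params).fit(X_train, y_train)
```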

Figure 10.

Example of a separating hyperplane, its Support Vectors and the margin for a linear problem.


6. Experiments and results

To sum up, the proposed system normalized all images to 10x10 pixels. It used the first M features obtained from the DCT projection of the spider web images, and separately the outcome of the DWT transformation of the spider web images, as inputs for an RBF-kernel LS-SVM with regularization and kernel parameters. The former parameter (the number of features) was varied during the experimentation, while the latter two parameters (the regularization and kernel parameters) were automatically optimized by iteration using validation results. To obtain more reliable results, the available samples were divided into training and test sets, so that the system was trained and tested with totally different samples.

The well-known K-Fold cross-validation and Hold-Out cross-validation techniques were used to obtain the final results. In particular, experiments with K equal to 3, 5, 7 and 10 were run. The percentage of training samples in the Hold-Out cross-validation was 50, 40, 30, 20 and 10, respectively. It is worth mentioning that the training and testing sets were computed for each class individually, taking into account that each class has a different number of samples. These experiments were performed for both datasets, i.e. using the whole spider webs and using only the center area.
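A minimal evaluation sketch using stratified splits (which approximate computing the partitions per class) might look as follows; the SVC classifier is again a stand-in for the LS-SVM, and the number of Hold-Out repetitions follows Section 6.1.

```python
# Hypothetical sketch of the evaluation protocol: stratified K-Fold and repeated
# Hold-Out splits keep the per-class proportions despite the unequal class sizes.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, StratifiedShuffleSplit

def success_rates(X, y, k: int = 10, train_fraction: float = 0.5, repeats: int = 30):
    make_clf = lambda: SVC(kernel="rbf")               # placeholder classifier
    kfold = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    kfold_scores = [make_clf().fit(X[tr], y[tr]).score(X[te], y[te])
                    for tr, te in kfold.split(X, y)]
    holdout = StratifiedShuffleSplit(n_splits=repeats, train_size=train_fraction,
                                     random_state=0)
    holdout_scores = [make_clf().fit(X[tr], y[tr]).score(X[te], y[te])
                      for tr, te in holdout.split(X, y)]
    return ((np.mean(kfold_scores), np.std(kfold_scores)),
            (np.mean(holdout_scores), np.std(holdout_scores)))
```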

6.1. DCT results

In order to obtain the optimal number of coefficients, 30 experiments were performed using the Hold-Out cross-validation technique. As the image size was normalized to 10x10 pixels, the total number of features was 100; therefore, in this phase, the number of coefficients was swept from 1 to 100. Figure 11 represents the mean of those 30 experiments. It can be observed that 60 is the optimal number of coefficients.

Figure 11.

Evolution of the success rate as the number of coefficients is increased for the DCT-based system.

Table 2 shows the results reached for K-Fold cross-validation and Hold-Out cross-validation using the optimal number of coefficients.

K-Fold cross-validation (K) | Success rate (%) | Hold-Out cross-validation (% of training samples) | Success rate (%)
10 | 98.75 ± 0.18 | 50 | 98.69 ± 1.20
7 | 98.56 ± 0.58 | 40 | 98.45 ± 1.38
5 | 98.44 ± 0.76 | 30 | 96.89 ± 2.06
3 | 98.07 ± 0.52 | 20 | 94.29 ± 4.07
- | - | 10 | 79.75 ± 6.54

Table 2.

Results obtained for K-Fold cross-validation and Hold-Out cross-validation using DCT

6.2. DWT results

In this case, the length of the feature vector depends on the wavelet family. The db1 and bior3.7 wavelets return an image with half the size of the original image, while dmey returns an image with the same dimensions. In all cases, the reconstructed image was used for classification, ignoring the horizontal, vertical and diagonal components.

Tables 3, 4 and 5 show the results reached for K-Fold cross-validation and Hold-Out cross-validation for each DWT family.

K-Fold cross-validation (K) | Success rate (%) | Hold-Out cross-validation (% of training samples) | Success rate (%)
10 | 99.40 ± 0.53 | 50 | 98.42 ± 1.74
7 | 99.31 ± 0.27 | 40 | 96.89 ± 1.93
5 | 99.18 ± 0.31 | 30 | 96.26 ± 2.40
3 | 98.47 ± 0.12 | 20 | 94.38 ± 2.89
- | - | 10 | 91.08 ± 4.28

Table 3.

Results obtained for different Ks of a K-Fold cross-validation procedure and for Hold-Out cross-validation using wavelet db1

K-Fold cross-validation (K) | Success rate (%) | Hold-Out cross-validation (% of training samples) | Success rate (%)
10 | 99.40 ± 0.53 | 50 | 98.42 ± 1.74
7 | 99.31 ± 0.27 | 40 | 96.89 ± 1.93
5 | 99.18 ± 0.31 | 30 | 96.26 ± 2.40
3 | 98.47 ± 0.12 | 20 | 94.38 ± 2.89
- | - | 10 | 91.08 ± 4.28

Table 4.

Results obtained for different Ks of a K-Fold cross-validation procedure using wavelet bior3.7

K-Fold cross-validation (K) | Success rate (%) | Hold-Out cross-validation (% of training samples) | Success rate (%)
10 | 98.62 ± 0.36 | 50 | 97.70 ± 2.10
7 | 98.30 ± 0.50 | 40 | 97.12 ± 1.54
5 | 98.55 ± 0.25 | 30 | 95.66 ± 2.29
3 | 97.89 ± 0.35 | 20 | 94.77 ± 2.28
- | - | 10 | 89.14 ± 6.28

Table 5.

Results obtained for different Ks of a K-Fold cross-validation procedure using wavelet dmey


7. Discussion and conclusions

This work has addressed the problem of spider web recognition, improving the results obtained in previous work [11]. It is important to note that, to the authors' knowledge, these are the only published works using the proposed technique.

Images were preprocessed to isolate the center of the spider web and remove the effects of the background on the system. The resulting images were then transformed using the DCT and the DWT. For the former, the optimal number of DCT coefficients M was found heuristically, while the families db1, bior3.7 and dmey were tested for the DWT. Finally, the resulting features were classified using an LS-SVM. In this case, the regularization and kernel parameters were automatically optimized by dividing the training samples into training and validation sets and then retraining the system with the optimal configuration using all available training data.

The results confirmed the improvement compared to [11], where only three species (versus the four species used in this work) were classified with a maximum success rate of 95%. Tables 2, 3, 4 and 5 show that the new system reached performances of around 99% with K-Fold cross-validation and 98% with Hold-Out cross-validation. Moreover, the obtained standard deviations were significantly low, although, as expected, slightly higher for Hold-Out as the number of training samples decreased. All in all, the standard deviations achieved in both the K-Fold and Hold-Out procedures are smaller than those obtained in [11].

When comparing the DCT and the DWT, the DWT provided a better behavior for this problem. It is worth emphasizing that the images were normalized to a size of 10x10 pixels, that is, quite compressed considering the spatial distribution of the threads in the spider webs.

The results achieved by this work support the conclusion derived from [11] that the center of the spider web provides enough discriminative information to recognize different species of spiders. However, it is still necessary to run more experiments with a larger database and to carry out a more detailed study of which parts of the spider web provide the most discriminative information before drawing stronger conclusions. On the other hand, this will allow testing the system's performance with larger training sets, which will be interesting taking into account that the system clearly improved when the number of training samples increased.

References

1. Sytnik, K. M. (2010). Preservation of biological diversity: Top-priority tasks of society and state. Ukrainian Journal of Physical Optics, 11 (Suppl. 1), S2–S10.
2. Carvalho, J. C., Cardoso, P., Crespo, L. C., Henriques, S., Carvalho, R., Gomes, P. (2011). Biogeographic patterns of spiders in coastal dunes along a gradient of mediterraneity. Biodiversity and Conservation: 1–22.
3. Johnston, J. M. (2000). The contribution of microarthropods to aboveground food webs: A review and model of belowground transfer in a coniferous forest. American Midland Naturalist, 143: 226–238.
4. Peterson, A. T., Osborne, D. R., Taylor, D. H. (1989). Tree trunk arthropod faunas as food resources for birds. Ohio Journal of Science, 89(1): 23–25.
5. Cardoso, P., Arnedo, M. A., Triantis, K. A., Borges, P. A. V. (2010). Drivers of diversity in Macaronesian spiders and the role of species extinctions. Journal of Biogeography, 37: 1034–1046.
6. Finch, O.-D., Blick, T., Schuldt, A. (2008). Macroecological patterns of spider species richness across Europe. Biodiversity and Conservation, 17: 2849–2868.
7. Eberhard, W. G. (1982). Behavioral characters for the higher classification of orb-weaving spiders. Evolution, 36(5): 1067–1095. Society for the Study of Evolution.
8. Eberhard, W. G. (1990). Early stages of orb construction by Philoponella vicina, Leucauge mariana, and Nephila clavipes (Araneae, Uloboridae and Tetragnathidae), and their phylogenetic implications. Journal of Arachnology, 18(2): 205–234. American Arachnological Society.
9. Eberhard, W. G. (1969). Computer simulation of orb-web construction. American Zoologist, 9: 229–238.
10. Suresh, P. B., Zschokke, S. (2000). A computerised method to observe spider web building behaviour in a semi-natural light environment. European Colloquium of Arachnology, Aarhus, Denmark.
11. Ticay-Rivas, J. R., del Pozo-Baños, M., Eberhard, W. G., Alonso, J. B., Travieso, C. M. (2011). Spider recognition by biometric web analysis. IWINAC 2011, Part II, LNCS 6687, pp. 409–417.
12. Jing Hu, Si, J., Olson, B. P., Jiping He (2005). Feature detection in motor cortical spikes by principal component analysis. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 13(3): 256–262.
13. Qingfu Zhang, Yiu-Wing Leung (2000). A class of learning algorithms for principal component analysis and minor component analysis. IEEE Transactions on Neural Networks, 11(1): 200–204.
14. Langley, P., Bowers, E. J., Murray, A. (2010). Principal component analysis as a tool for analyzing beat-to-beat changes in ECG features: Application to ECG-derived respiration. IEEE Transactions on Biomedical Engineering, 57(4): 821–829.
15. Haibo Yao, Lei Tian (2003). A genetic-algorithm-based selective principal component analysis (GA-SPCA) method for high-dimensional data feature extraction. IEEE Transactions on Geoscience and Remote Sensing, 41(6): 1469–1478.
16. Nan Liu, Han Wang (2006). Feature extraction with genetic algorithms based nonlinear principal component analysis for face recognition. 18th International Conference on Pattern Recognition (ICPR 2006), vol. 3, pp. 461–464.
17. Vapnik, V. (1995). The Nature of Statistical Learning Theory. Springer Verlag, New York.
18. Kecman, V. (2001). Learning and Soft Computing: Support Vector Machines, Neural Networks, and Fuzzy Logic Models. The MIT Press.
19. Schölkopf, B., Smola, A. J. (2002). Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. The MIT Press.
20. Suykens, J. A. K., Van Gestel, T., De Brabanter, J., De Moor, B., Vandewalle, J. (2002). Least Squares Support Vector Machines. World Scientific, Singapore. ISBN 981-238-151-1.
21. Lim, Jae S. (1990). Two-Dimensional Signal and Image Processing. Englewood Cliffs, NJ: Prentice Hall, p. 548.
22. UNESCO Biodiversity Initiative. http://www.unesco.org/new/en/natural-sciences/special-themes/biodiversity-initiative/. Last visited March 2011.
23. Ahmed, N., Natarajan, T., Rao, K. R. (1974). Discrete Cosine Transform. IEEE Transactions on Computers, C-23: 90–93.
24. Mallat, S. (1989). A theory for multiresolution signal decomposition: the wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(7): 674–693.
25. Otsu, N. (1979). A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, 9(1): 62–66.
