Open access peer-reviewed chapter

Machine Vision Systems – A Tool for Automatic Color Analysis in Agriculture

Written By

Ernesto Martínez Sandoval, Miguel Enrique Martínez Rosas, Jesús Raúl Martínez Sandoval, Manuel Moises Miranda Velasco and Humberto Cervantes De Ávila

Submitted: 12 June 2017 Reviewed: 25 October 2017 Published: 20 December 2017

DOI: 10.5772/intechopen.71935

From the Edited Volume

Automation in Agriculture - Securing Food Supplies for Future Generations

Edited by Stephan Hussmann


Abstract

Machine vision research began in the early 1960s, and since then researchers and developers have worked on building machines that perform tasks of acquisition, processing, and analysis of images in a wide range of applications across different areas. Currently, new technological advances in electronics, computer systems, image processing, pattern recognition, and mechatronics have created the opportunity to develop machine vision systems with affordable implementations at lower cost. A machine vision system is the combination of several high-tech techniques, including both hardware and software, used to acquire, process, and analyze images on a machine. It contributes a set of tools for the extraction of features such as color and dimension parameters, texture, chemical components, disease detection, and freshness assessment, as well as for modeling and control, among others. On this basis, we could say that machine vision systems are well suited to improve current agricultural systems, making them more useful, efficient, practical, and reliable.

Keywords

  • image processing
  • computer vision
  • color analysis
  • habanero chili
  • vineyards

1. Introduction

The evaluation of what surrounds us is done through light, colors, shapes, textures, and intensities, among other characteristics, which originate from different natural phenomena that give rise to spectacular, visually striking scenes. Such an enormous amount of information is perceived through the senses. Human perception is an incredible skill composed of five senses: sight, hearing, smell, taste, and touch. The human senses are complex systems within the body, formed in turn by an immense number of well-adapted and calibrated sensors with a very wide range of operation that carry out specific tasks while adapting to and interacting with each other. Some of the senses manage to regenerate partially or totally over time. The human being has another essential organ, the brain, which is the main processing unit, responsible for receiving and processing all information, making decisions, and coordinating actions in a synchronized, fast, and efficient manner. That is why trying to emulate these capabilities is a huge challenge.

The visual perception of the human being is basically composed of the interaction between the eyes and the brain. In general, the human vision system can be described as follows: the eyes are composed of the eyeball and the muscles that control its position. The cornea and lens focus light rays at the back of the eye, and the lens adjusts the focus for near and far objects by becoming more or less globular. The brain, on the other hand, is much more complex; even today, there are many unknowns about its operation and even its biological purpose. Computer vision systems (CVS) intend to emulate this sophisticated and dynamic vision system, whose operation is natural and transparent [1].

Computer vision can be understood as the science that develops theory and algorithms to extract useful information about an object or scene within an image for further analysis. Computer vision systems follow a basic structure that can be divided into the three stages illustrated in Figure 1: low, intermediate, and high level of processing. In the first stage, the appropriate image acquisition sensors must be chosen for the type of scene to be captured, and the acquisition conditions that best fit the experiment must be established in order to obtain clear, good-quality images. The second stage deals with segmentation, which is considered a fundamental process in image processing. Segmentation separates useful information from the rest of the scene; for this, there are different segmentation and filtering techniques whose objective is to define and extract desired features within the image. The third, high-level stage performs an analysis with the information resulting from the two previous stages, so its performance is closely linked to them. This analysis focuses on recognizing objects within the scene and interpreting their attributes to produce results or make decisions, based on the extracted information and a knowledge database that stores the information of the whole process and serves to train the vision system. Image processing and image analysis are the core of computer vision systems.

Figure 1.

Stages of the basic structure in which image processing is divided.

Both human beings and vision systems rely on light to appreciate their surroundings. Therefore, the illumination sources, as well as their quality, are crucial parameters for the proper functioning of both biological and artificial systems. Roughly speaking, light is the portion of the electromagnetic spectrum that can be perceived by the human eye (400–800 nm). The reflection of different wavelengths from materials is essentially what we call the color of an object. Depending on the physical characteristics of the material, light incident on an object can give rise to three phenomena: absorption, when the object converts the light's energy into heat; refraction, when light passes through the material, changing direction as a function of the material's refractive index; and reflection, when the material rejects certain wavelengths. The reflected waves are those that can be observed with the naked eye and give rise to the familiar range of colors.

Color is the result of the interaction of the light reflected by objects with the cone cells of the human vision system. There are different kinds of cones in the human eye, so the level of absorption and the nervous system's interpretation of these signals give rise to the perception of color. Thus, it can be said that color is a perception of the light reflected by the surface of an object [2].

In computer vision systems, the color attribute is very valuable and is very often used to identify and extract useful information. It should be mentioned that one of the main factors defining the quality of products in the agro-industry, particularly fruits and vegetables, is their appearance, which in turn determines the value of these products in the market. Appearance is a combination of color, shape, size, and texture, among other attributes. Color in particular makes a product attractive to consumers and helps drive the decision to buy it.


2. Machine vision systems

From a general perspective, machine vision systems (MVS) or computer vision systems (CVS) try to emulate human vision in order to gather information from an object without physical interaction [3]. A machine vision system is a complex high-tech system that includes an image sensor (usually a charge-coupled device, known as CCD), a frame grabber, and a computer with the appropriate software and algorithms [4]. In general, the procedure is as follows: a scene is captured by the image sensor, the analog electrical signal obtained from the sensor is converted to a digital format and sent to a computer, and the corresponding image is then processed and analyzed using algorithms. This procedure is frequently implemented with different setups to meet specific application requirements. The importance of this technology for the analysis of agricultural products lies in its nondestructive evaluation characteristic.

Among the several definitions of CVS offered by different authors, Timmermans states that CVS encompass the capture, processing, and analysis of 2D images [5]. Sonka mentions that the objective is to replicate human vision by perceiving and understanding an image electronically [6]. Jha holds that a vision system is one that perceives an object and its optical characteristics and interprets the results [7]. It has also been defined as a system for the automatic acquisition and analysis of images to obtain desired data for interpreting or controlling an activity, consisting of image acquisition, image processing, and interpretation [8].

CVS are suitable for agricultural applications because, when used to obtain the characteristics of fruits and vegetables, the task is done quickly, economically, hygienically, consistently, and objectively. This is the reason why the use of CVS has expanded into many sectors of industry, such as medical, automation, surveillance, remote sensing, autonomous vehicles, and robot vision, among others [9–12].

There is a broad range of applications for CVS, from routine inspection to complex vision systems for robots, which shows the flexibility of this technology. Moreover, its implementation cost is relatively low, which makes the technology attractive for still more applications.

In the case of the food industry, CVS have proved to be an alternative method for the inspection of visual attributes in pastries [13, 14], meat [15–17], and fish [18, 19]. Because this inspection method is nondestructive, it is widely used in agricultural applications, including the inspection and selection of fruits and vegetables [20]. For the agro-industry, the visual aspect of its products is particularly important, since this parameter is a determining factor of the product's value in the market.

Traditionally, personnel trained for this task carry out the inspection and selection of agricultural products manually. This implies several disadvantages, such as inconsistency in selection, time consumption, labor intensity, variability, and subjectivity. In addition, the manual process is tedious, laborious, costly, and strongly dependent on external factors.

On the other hand, the appearance of agricultural products, i.e., their size, shape, and color, and the presence of stains or bruises, strongly influences consumer perception and therefore determines the degree of acceptance prior to a purchase. The consumer also associates a certain internal quality with external characteristics (the appearance), which affects future purchase decisions [21].

Quality is a key factor that Kramer defines as:

“Quality of foods may be defined as the composite of those characteristics that differentiate individual units of a product, and have significance in determining the degree of acceptability of that unit to the user” [22].

In reference to fruits and vegetables, these attributes can be classified as follows:

  • Color and appearance are the factors that normally determine whether the product is accepted or not.

  • Flavor (taste and aroma): once the product's appearance convinces the consumer and the product is tasted, the flavors and aromas become more important. Freshness, pungency, and sweetness are some of the attributes that can be detected when consuming certain products.

  • Texture can be perceived externally not only when holding the product in our hands; when tasting it, one also gets a clear impression of the softness, firmness, or crispness of the fruit or vegetable.

  • Nutritional value is a factor that is usually hidden, but it affects our organism in ways we cannot perceive. However, it is an increasingly important parameter for consumers, scientists, and medical personnel.

These factors are closely related to the process and maturity stage of both the plant and the fruit at the time of harvest, as well as postharvest management conditions.

Shewfelt suggests “A primary disadvantage of instrumental testing is that many instrumental measurements have little relevance to consumer acceptability and thus should never be used to define quality attributes for a specific product. In other words it is better to measure what is really important than to believe something is important because you measure it really well” [23].

Machine vision systems present a viable alternative to extract certain important attributes automatically and, moreover, analyze them for making decisions. Besides, they have been diversified in distinct areas that concern agriculture. Due to its wide range of use, a summarized categorization is presented in Table 1, considering the products that are evaluated, the type of application, and the technology that was employed.

Products | Applications | Technology/technique | Reference
Rice | Panicle length measuring | Dual-camera devices | [24]
Soybean | Detection of insect-damaged soybean | Hyperspectral transmittance image | [25]
Apples | Detection | RGB-D | [26]
Cauliflower | Extraction of structural parameters to assess growth | RGB-D | [27]
Cherry tomatoes | Detection of cuticle defects | Hyperspectral fluorescence imagery | [28]
Date fruits | Determining viscoelastic characteristics of date fruits | CCD | [29]
Dried figs | Grading | CCD | [30]
Foliar | Disease spots | CCD | [31]
Orange | Occlusion recovery | CCD | [32]
Rapeseed varieties | Classification | CCD scanner | [33]

Table 1.

Machine vision systems in agriculture.

CCD: charge-coupled device; RGB-D: color depth camera.

The extraction of attributes of agricultural products presents important challenges, due in part to the variety of products that exist, and the extraction of useful data is highly dependent on a good segmentation. There are several techniques for segmenting images, three of which are illustrated in Figure 2. The first is thresholding, a fast technique that converts gray levels into a binary representation. The second is edge-based segmentation, which looks for points of change that give rise to borders or contours. Finally, region-based segmentation performs a search around a group of pixels within the image.

Figure 2.

Typical segmentation techniques: (a) thresholding, (b) edge-based, and (c) region-based.

To illustrate the performance of the segmentation techniques described above, a desktop setup was implemented as shown in Figure 4. The first step is to capture an image; clothing buttons were used in this experiment, together with a printed circular reference. The second step was to implement routines in MATLAB that process the image with the different segmentation techniques, resulting in binary images. The performance of the three techniques was satisfactory, but the processing time differed: thresholding was the fastest, followed by edge-based and then region-based segmentation. The resulting binary images are illustrated in Figure 3.

Figure 3.

Applications of segmentation techniques: (a) original image, (b) thresholding, (c) edge-based, and (d) region-based.
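
As a minimal illustration of these three techniques, the following MATLAB sketch applies toolbox equivalents to a captured image. It assumes the Image Processing Toolbox; the file name, the region-growing seed, and its tolerance are hypothetical examples, not the routines used in the original experiment.

    % Sketch of the three segmentation techniques on one image.
    I = imread('buttons.jpg');                % hypothetical captured image
    G = rgb2gray(I);

    % (a) Thresholding: Otsu's method converts gray levels to a binary image.
    bwT = imbinarize(G, graythresh(G));

    % (b) Edge-based: the Canny detector finds intensity transitions (contours).
    bwE = edge(G, 'Canny');

    % (c) Region-based: grow a region of similar gray levels from a seed pixel.
    bwR = grayconnected(G, 100, 150, 25);     % seed at row 100, column 150

    montage({bwT, bwE, bwR});                 % compare the three binary results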


3. Hardware and software requirements

In order to acquire all the useful characteristics of the image, a suitable light source is required to obtain the type and quality of illumination needed over the scene. Since the image sensor receives the light reflected from the scene, the upper level of illumination is limited by the sensitivity of the sensor. The image sensor resolution must also be taken into account, since it determines the number of pixels available to represent the captured light information. The speed of the frame grabber, in turn, is selected according to the shortest time scale of the task to be captured; for example, real-time or online applications require a faster frame grabber. Once the image is acquired, it is ready to be processed and analyzed; the algorithm used to perform the task is designed to extract specific parameters from the image and then to present them as results in the proper form.

The automatic operation of the setup involves taking timing considerations into account, assembling high-tech components, writing proper and efficient program code, and, finally, synchronizing and integrating every component into a single functional system. First of all, the capture location, the proper illumination, and the insulation conditions required to acquire useful images must be selected. The final application of the machine vision system must be defined in order to choose a suitable image sensor and a frame grabber that meet the required resolution and speed. Finally, it is necessary to write program code for the three different stages of a machine vision system: acquiring, processing, and analyzing the images. Acquisition refers to the task in which the image is taken. During the processing stage, features are extracted and the image is enhanced. Finally, once the parameters and features of the image are found, the algorithms of the analysis stage use them to produce a result in the context of the application's goal [34].
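
A minimal MATLAB sketch of these three stages is shown below, assuming file-based acquisition; camera triggering and synchronization are hardware specific and therefore omitted.

    % Acquisition stage: in a live setup this frame would come from the
    % frame grabber; a stored capture (hypothetical file) stands in for it.
    img = imread('capture.png');

    % Processing stage: enhance and segment to isolate the objects of interest.
    gray = rgb2gray(img);
    bw = imbinarize(gray, graythresh(gray));     % Otsu threshold segmentation

    % Analysis stage: measure features of the segmented objects and report.
    stats = regionprops(bw, 'Area', 'Centroid');
    fprintf('Detected %d objects\n', numel(stats));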


4. Color analysis

Our eyes are able to detect what we call visible light, which in fact is a range of electromagnetic spectrum wavelengths between 400 and 800 nanometers. What we observe in a scene is actually the light reflected from surfaces or the radiation emitted by light sources, which makes us experience the sensation of color.

In fruits and vegetables, color derives from natural pigments, many of which change as the plant proceeds through maturation and ripening. The primary pigments imparting color quality are the fat-soluble chlorophylls (green) and carotenoids (yellow, orange, and red) and the water-soluble anthocyanins (red, blue), flavonoids (yellow), and betalains (red). In addition, enzymatic and nonenzymatic browning reactions may result in the formation of water-soluble brown, gray, and black pigments. Color is one of the most important object measurements for image understanding and object description [35]. Using color attributes is more discriminating than using grayscale alone, since two image points with the same grayscale value can still be differentiated by their color attributes [36].

The level and quality of illumination from a light source affect the performance of the human eye, just as the performance of computer vision systems is affected by the illumination sources used. Sarkar found that by adjusting the illumination, the appearance of an object can be radically altered [37]. Therefore, the lighting system can greatly influence the quality of the images, so it plays an important role in the efficiency and accuracy of the CVS. Gunasekaran noted that a well-designed lighting system can help improve the performance of image analysis by improving contrast [38].

Some of the technologies used to acquire images in the food production sector are charge-coupled device (CCD) cameras, magnetic resonance imaging (MRI), ultrasound, X-ray, near-infrared spectroscopy, and computed tomography (CT), among others. Of these, the CCD camera is the most widely used in computer vision systems for quality assessment, product selection, and product classification. CCD technology converts captured light into electrical signals to create images. Depending on the characteristics of both the sensor and the optics used, images can be obtained with high resolution, low noise, and good light sensitivity (ISO); these characteristics can be adjusted, within certain limits, to adapt to different capture scenarios. Both color and monochromatic cameras have been used in the food industry for a wide variety of applications [39–42]. Several attributes are used to evaluate the quality of products in the food industry. External attributes, such as shape, size, color, texture, and defects, can be captured by CCD-based vision systems. Internal attributes that may be measured include internal structures, water content, and voids; their extraction requires technologies such as ultrasound, MRI, and CT in order to obtain images with sufficient information for processing. From the point of view of image processing, both external and internal attributes present important challenges for adequately identifying the information of interest. Moreover, each technology has different qualities, and its usefulness depends on both the type of information to be extracted and the type of application. In addition, it is advisable to keep the economic factor in mind before deciding which technology to use.

The appropriate selection of the color space helps to enhance color attributes in processed images. Table 2 presents a simple classification of agro-products. The table consists of six columns: the first classifies the products as fruits or vegetables; the second names the product; the third presents the study application; the fourth gives the technology or technique used; the fifth reports the percentage accuracy; and the last contains the reference for each of the cited works. Among fruits, apples are clearly a popular product and have been evaluated using different color spaces with an accuracy greater than 90%, as have bananas and peaches. The applications range from sorting fruits by color and size, maturity discrimination, and color classification to blemish detection and color measurement in vegetables. The RGB, HSI, HSV, and L*a*b* color spaces are used depending on the application.

Category | Products | Applications | Technology | Accuracy (%) | Reference
Fruit | Apple | Sorting by color and size | HSI | 90% | [43]
Fruit | Apple | Maturity discrimination | RGB | 95.83% | [44]
Fruit | Starfruits | Color classification | HSI | 100% | [45]
Fruit | Banana | Color measurement | RGB, HSV, L*a*b* | 97% | [46]
Fruit | Peach | Sorting by color and size | HSI | 90% | [47]
Vegetable | Potato | Color classification | RGB | 90% | [48]
Vegetable | Potato | Blemish detection | RGB | 89.6% | [49]
Vegetable | Tomato | Color classification | RGB | Not reported | [50]

Table 2.

Analysis by color attributes.

HSI: color space (hue, saturation, intensity); HSV: color space (hue, saturation, value); RGB: color space (red, green, blue); CIELAB: L*a*b* color space.
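
As a sketch of how these color spaces are obtained in practice, the MATLAB fragment below converts an RGB capture to HSV and CIELAB (HSI is not a built-in conversion, but its hue and saturation components are closely related to those of HSV). The file name is a hypothetical example.

    % Convert an RGB image to the color spaces discussed above.
    rgb = im2double(imread('fruit.png'));    % hypothetical capture
    hsv = rgb2hsv(rgb);                      % H, S, V components in [0,1]
    lab = rgb2lab(rgb);                      % L* in [0,100]; a*, b* signed

    % Mean color over the whole image in each space; a real application
    % would average only over the segmented region of interest.
    meanHSV = squeeze(mean(mean(hsv, 1), 2))
    meanLab = squeeze(mean(mean(lab, 1), 2))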


5. Applications of machine vision systems in agriculture

5.1. Machine vision system implementation

To acquire and analyze the information in an image, it is necessary to implement a machine vision system. The simplest case is a desktop setup, whose primary goal is to provide the insulation conditions needed to acquire clean, high-contrast images. Additionally, in this implementation it is easy to add sensors to monitor the ambient conditions inside the enclosure, providing complementary data for the analysis. These characteristics are quite useful in the agricultural sector for characterizing several species as well as a variety of crops under different conditions [51, 52].

A summary of technologies used in a wide range of implementations in the agriculture area is presented in Table 3. Products, applications, technologies (used as a machine vision system), the accuracy of this system, and the corresponding references are included.

Products | Applications | Technology | Accuracy (%) | Reference
Oilseeds | Measuring thermal properties | MR | Not reported | [53]
Passion fruit juice | Tracking thermal degradation | MR | Not reported | [54]
Pear | Grading by external shape | Features | 88.2% | [55]
Strawberry | Bruise detection | H-CVS | 100% | [56]
Apple | Quality grading | M-CVS | 93.5% | [57]
Apple | Chilling injury | H-CVS | 98.4% | [58]
Citrus | Rottenness detection | H-CVS | 98% | [59]
Mango | Mango grading | FD | 89.83% | [60]
Mushroom | Enzymatic browning | H-CVS | >89% | [61]
Citrus | Defect detection | T-CVS | 98.9% | [62]
Agricultural products | Internal characterization | X-ray | Not reported | [63]
Agricultural products | Quarantine scanner | X-ray/CCD | Not reported | [64]

Table 3.

Technologies applied to agriculture.

T-CVS: traditional computer vision system; H-CVS: hyperspectral computer vision system; M-CVS: multispectral computer vision system; MR: magnetic resonance; FD: Fourier descriptor.

Based on extensive bibliographical research, the advantages and disadvantages of artificial vision systems applied in agriculture are summarized in Table 4. It is worth noting that the nondestructive characteristic is one of their main advantages, since the industry wants to deliver the final product to the consumer at the best possible quality and thereby increase its profits.

Advantages | Disadvantages
Consistent | Adaptable only under specific conditions
Database | Artificial lighting required
Efficient | Environment control required
Fast | Object identification
Flexible |
Inexpensive |
Nondestructive |
Robust |

Table 4.

Advantages and disadvantages of computer vision systems.

5.2. Case study

5.2.1. Habanero chili color assessment

Setups for 2D vision systems are suitable for capturing images of fruits and vegetables. For this case study, the setup required a camera held steady at the top center for capturing the images; a commercial digital single-lens reflex (DSLR) camera was used as the capture device.

As shown in Figure 4, the capture device includes a housing that isolates the experiment from external contaminants such as light and dust. In addition, a control unit is installed to trigger the camera remotely, in order to avoid handling vibrations and to ensure that the captures are as uniform as possible between samples. Likewise, the illumination lamp is turned on a few seconds before capturing the image, which allows the lamp to stabilize and thus achieves more uniform illumination.

Figure 4.

Typical machine vision system components.

To verify the proper performance of the capture device, it is advisable to add a wireless link to the control unit for remote monitoring. The illumination lamp spreads its light over the white interior surface of the case, achieving a better distribution of light and reducing both shadows and reflections on the samples. Immediately after each capture, the lamp is turned off to avoid possible damage to the fruits from overexposure. In addition, a high-contrast background has to be used to facilitate correct segmentation.

Because the conditions and positions of the samples do not change between captures, each individual sample can be separated using specialized routines so that it can be processed for further analysis. To extract the color attribute from the images automatically, a routine can be implemented to carry out a threshold segmentation and thus obtain only the information in the region of interest (ROI). Once the camera information is obtained in RGB format, a color space conversion to CIELAB is performed to express the color changes as a function of time.

Images were taken at a sampling frequency of one image per hour for 8 days, covering 35 habanero chili specimens. This generated a total of 6720 sub-images from the 192 images taken. The type of images captured with this system can be seen in Figure 5.

Figure 5.

Type of images captured: (a) samples at the first day (hour 1) and (b) samples at the eighth day (hour 192).

The foundation for the development of the image processing algorithms was to keep them simple and based on proprietary code instead of MATLAB® image processing toolbox functions, except for the fundamental ones for loading and displaying images. The first step was to test whether the algorithms were able to capture, process, and analyze the color images of habanero chili, since these algorithms are part of a larger project that considers migrating the code to an embedded system to provide greater versatility and portability to the vision system, expanding its range of applications. The algorithm starts by loading an image from the color image database of habanero chili specimens generated in the acquisition stage. Then, a predetermined area segmentation delimits a specific region for each specimen, called a sub-image. Next, a color segmentation is performed using a threshold technique, where the background color parameters are set as the threshold value, so that everything else is color information from the specimens. Then, a color space conversion from RGB to CIELAB is carried out. After that, the color average of each specimen is calculated, generating color information in the CIELAB components L*, a*, b*, Cab*, and hab°; this stage results in a matrix with these color parameters. Finally, with this color data, a color analysis is executed, where the parameters can be plotted and observed, as shown in Figure 6.

Figure 6.

Flowchart of image processing algorithm.
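
The MATLAB sketch below mirrors that flow for a single specimen sub-image, under the assumption of a near-white backdrop; the file name and the threshold value are illustrative, not those of the original proprietary code.

    % One pass of the flowchart for a single specimen sub-image.
    sub = imread('specimen_01.png');          % hypothetical sub-image
    lab = rgb2lab(im2double(sub));

    % Threshold segmentation: very light pixels are taken as background
    % (assumed white backdrop); the rest is specimen color information.
    mask = lab(:,:,1) < 90;                   % illustrative L* threshold

    % Average CIELAB color over the region of interest.
    Lc = lab(:,:,1); ac = lab(:,:,2); bc = lab(:,:,3);
    Lm = mean(Lc(mask)); am = mean(ac(mask)); bm = mean(bc(mask));

    % Derived quantities: chroma C*ab and hue angle h_ab in degrees.
    Cab = hypot(am, bm);
    hab = mod(atan2d(bm, am), 360);
    fprintf('L*=%.1f a*=%.1f b*=%.1f C*ab=%.1f h_ab=%.1f\n', ...
            Lm, am, bm, Cab, hab);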

Figure 7 presents a color diagram of the a* vs. b* parameters from CIELAB, representing the mean values of each specimen during the 8 days of the experiment. Each point is the average of the 24 images taken per day, corresponding to the sampling rate. It can be seen that the system is able to detect color changes.

Figure 7.

Color diagram of a* vs. b* of all samples from their mean values per day.

The color information obtained from image processing and expressed in CIELAB coordinates was statistically analyzed by applying a one-way analysis of variance (ANOVA). The analysis of variance shows highly significant differences between specimens. The results are reported in Table 5 (sum of squares [SS], degrees of freedom [DF], mean square [MS], and test statistic [F]). Figure 8 presents the analysis of variance between specimens.

Source | SS | DF | MS | F > F0.05
Specimens | 56449.8 | 34 | 1660.29 | 8.77 ≫ 1.424
Error | 46407.3 | 245 | 189.42 |
Total | 102857.1 | 279 | |

Table 5.

One-way analysis of variance (ANOVA) between specimens at α = 0.05 significance.

Figure 8.

Analysis of variance results between specimens, box-and-whisker plot.

A second one-way analysis of variance (ANOVA) was performed between days. This analysis shows highly significant differences between days. The results are reported in Table 6, and Figure 9 presents the analysis of variance between days.

Source | SS | DF | MS | F > F0.05
Days | 22643.2 | 7 | 3234.74 | 10.97 ≫ 2.010
Error | 80213.9 | 272 | 294.9 |
Total | 102857.1 | 279 | |

Table 6.

One-way analysis of variance (ANOVA) between days at α = 0.05 significance.

Figure 9.

Analysis of variance results between days, box-and-whisker plot.
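
A sketch of how such tests can be run in MATLAB (Statistics and Machine Learning Toolbox) is shown below; colorData is a hypothetical 8 × 35 matrix holding the daily means of one CIELAB coordinate, with synthetic values standing in for the measured ones.

    % One-way ANOVA between specimens: each column of colorData is a group.
    colorData = randn(8, 35) + linspace(0, 5, 35);   % synthetic example data
    [p, tbl, stats] = anova1(colorData);             % also draws the box plot
    fprintf('p-value between specimens: %.4g\n', p);

    % Between days instead: transpose so that each column is one day.
    pDays = anova1(colorData');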

Highlights:

  • A color image database of habanero chilies under controlled conditions was generated for future developments and analysis.

  • Current developments in this specific kind of application explore noninvasive techniques for color assessment in habanero chili specimens; related published information was scarce, which increases the scientific value of these results.

  • The algorithms developed are capable of acquiring, processing, and analyzing images of habanero chili specimens in postharvest under controlled conditions.

  • The guideline for algorithm development was to write basic, proprietary code instead of using MATLAB® toolbox functions, with code migration in mind for future research and development.

  • The statistical analysis confirms that the image processing algorithms and the machine vision system are able to detect, quantify, and analyze color changes in digital images of habanero chili automatically.

5.2.2. Vineyards

The wine industry is one of the most interested in using vision systems to increase the quality of its crops [65]. An increasing number of wine producers recognize the advantages of understanding the biophysical characteristics and performance of their vineyards, leading to better resource management and decision-making. Wine producers commonly have a goal for the state of maturity they want to achieve for the wine they produce. Such a goal can vary, even within the same grape variety, depending on the type or style of wine to be made. As previously mentioned, color is one of the most important parameters for determining fruit ripeness. In the case of vineyards, a computer vision system can be used to individually locate grapes in clusters, as well as the grape berries used in evaluation samplings. Using segmentation algorithms, sub-images of each grape are used to extract the color parameters in a proper color space.

As mentioned by Martinez-Sandoval et al. [66] as well as Rabatel and Guizard [67], digital images of individual grapes could easily be acquired in the field as long as adequate control of the distance and light conditions over the samples is maintained. The use of computer vision systems would allow estimating the volume of grapes from their visible area, and using suitable image processing algorithms, it would be possible to obtain detailed information such as the color and size of the berries.

The development of methods based on image processing has advanced to the point of being able to automatically detect and count the grapes to accurately predict the yield of a harvest. As indicated by Nuske et al. [68], in order to capture the images of vines (in the visible light spectrum), conventional cameras are used throughout the vineyard. A typical algorithm to perform the image processing of vines can be divided into the following stages [66]:

  1. Detecting potential berry locations with a radial symmetry transform.

  2. Identifying the potential locations that have similar appearance to grape berries.

  3. Grouping neighboring berries into clusters.

A vine image consists of a set of berries with nearly identical color properties, so the information available for berry separation relies mainly on their contours, as seen in Figure 10, where a bunch of grapes is segmented from the background. Due to the smooth 3D shape of the berries, their contours appear as luminance discontinuities in the image; accordingly, the algorithm works on grayscale images. The main goal is to make a nondestructive estimation of the bunch weight of grapes at an early stage by computer vision, using an elliptical model to estimate the volume of each grape from its visible area.

Figure 10.

Image segmentation of bunch of grapes.
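
One simple way to realize this idea, sketched below, is to approximate each berry as a spheroid obtained by rotating the fitted ellipse about its major axis; this is an illustrative model with hypothetical semi-axis values, not the exact fitting procedure of [67].

    % Estimate berry volume from a fitted ellipse (semi-axes a, b in mm).
    a = 11.2;  b = 9.8;                 % hypothetical semi-axes from the fit
    areaVisible = pi * a * b;           % visible (projected) area of the berry
    volume = (4/3) * pi * a * b^2;      % prolate spheroid approximation, mm^3
    fprintf('area = %.1f mm^2, volume = %.1f mm^3\n', areaVisible, volume);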

As noted above, the maturity goal can vary, even within the same grape variety, depending on the type or style of wine desired. For the characterization of maturation, physicochemical analyses such as sugar content, acidity, and pH [69] are performed, sugar content being one of the most important factors [70]. Figure 11 shows a generalized graphical representation of changes in grape composition during development and ripening.

Figure 11.

Grape berry development and ripening.

The sugar content of the grapes is commonly monitored by periodically measuring the content of soluble solids in the ripening berries with a refractometer, which works by measuring light refracted through grape juice in a prism. The sugar level is generally expressed in degrees Brix, which represents grams of sugar per 100 grams of juice; denser juice gives a higher Brix reading on the scale.

The procedure followed by Murillo-Bracamontes et al. [3] to extract color parameters from berry images and compare them against the Brix degree (measured with a refractometer) as well as the pH value was as follows: the acquired images were converted to grayscale and equalized using contrast-limited adaptive histogram equalization (CLAHE) to improve the contrast of the grayscale images (see Figure 12) [71, 72]. A Hough transform was then applied to locate the grapes (as shown in Figure 13). After completion of the transform, three vectors corresponding to the (x, y) and radius parameters of the berries were obtained; then, the individual berries detected were segmented from the background, stored as sub-images, and converted to the CIELAB color space for analysis. Figure 14 shows a flowchart of the steps performed in this analysis.

Figure 12.

Preprocessing of the image before applying the Hough transform. (a) Original image in grayscale. (b) Results of applying CLAHE equalization.

Figure 13.

Hough transform. (a) Original image in RGB color space. (b) Results of the detected circles (marked in yellow).

Figure 14.

Algorithm applied to grape images.
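
A compact MATLAB approximation of this chain is sketched below, using toolbox equivalents: adapthisteq for CLAHE and imfindcircles, which implements a circular Hough transform. The file name, radius range, and sensitivity are assumed example values.

    % Preprocessing: grayscale conversion and CLAHE contrast enhancement.
    rgb = imread('vine.jpg');                     % hypothetical field image
    gray = adapthisteq(rgb2gray(rgb));

    % Circular Hough transform: detect berry centers and radii (in pixels).
    [centers, radii] = imfindcircles(gray, [15 40], 'Sensitivity', 0.9);

    % Extract each berry's mean CIELAB color from the region around it.
    lab = rgb2lab(im2double(rgb));
    [h, w, ~] = size(lab);
    [X, Y] = meshgrid(1:w, 1:h);
    aCh = lab(:,:,2);  bCh = lab(:,:,3);
    for k = 1:numel(radii)
        m = hypot(X - centers(k,1), Y - centers(k,2)) <= radii(k);
        fprintf('berry %d: a* = %.1f, b* = %.1f\n', ...
                k, mean(aCh(m)), mean(bCh(m)));
    end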

The analyzed data reproduce the evolution described in Figure 11, which indicates that the procedure yields promising results.


6. Conclusions

Computer vision systems are a viable option for handling agricultural products. Image processing and image analysis tools are well suited to developing applications that automatically extract external and internal attributes of agro-products. One trend is the development of vision applications for in situ use in fields or greenhouses. Being noninvasive and nondestructive makes these systems an attractive option for application development in different areas, and the agro-industry is particularly interesting due to the nature of its products and its quality standards.


Acknowledgments

The authors would like to thank the National Council of Science and Technology (CONACYT) for providing the funding to carry out this research, as well as the people involved in this research at the Autonomous University of Baja California, the Program for Strengthening Educational Quality (PFCE), the Center for Scientific Research and Higher Education at Ensenada, and the State University of Sonora.

References

  1. 1. Hubel DH. Eye, Brain and Vision. Scientific American Library. New York: W.H. Freeman/Henry Holt and Company; 1988. ISBN: 0716760096, 9780716760092
  2. 2. Acharya T, Ray AK. Image Processing: Principles and Applications. John Wiley & Sons; 2005. pp. 425. ISBN: 0471745782, 9780471745785
  3. 3. Murillo-Bracamontes EA, Martinez-Rosas ME, Miranda-Velasco MM, Martinez-Reyes HL, Martinez-Sandoval JR, Cervantes-De-Avila H. Implementation of Hough transform for fruit image segmentation. Procedia Engineering. 2012;35:230-239
  4. 4. Davies ER. Machine vision, third edition: Theory, algorithms, practicalities. Pattern Recognition Letters. 2004;7010:934
  5. 5. Timmermans AJM. Computer vision system for on-line sorting of pot plants based on learning techniques. International Society for Horticultural Science. 1998;421:91-98. DOI: 10.17660/ActaHortic.1998.421.8
  6. 6. Sonka M, Hlavac V, Boyle R. Image Processing, Analysis and Machine Vision, First Edit. Springer-Science Business Media, B.V. 1993. pp. 555. ISBN: 978-0-412-45570-4, 978-1-4899-3216-7
  7. 7. Jha SN. Nondestructive Evaluation of Food Quality. Springer US: India; 2010
  8. 8. Bechar A. Agricultural robots for field operations: Concepts and components. Biosystems Engineering. 2016;149:94-111
  9. 9. Tezmol A, Sari-Sarraf H, Mitra S, Long R, Gururajan A. Customized hough transform for robust segmentation of cervical vertebrae from X-ray images. In: Fifth IEEE Southwest Symposium on Image Analysis and Interpretation. 2002
  10. 10. van den Broek WHAM, Noordam JC, Pauli A. Multivariate Imaging for Automated Process Control in the Agro Industry. IFAC Proceedings Volumes. 2000;33(19):303-307
  11. 11. Marzotto R, Zoratti P, Bagni D, Colombari A, Murino V. A real-time versatile roadway path extraction and tracking on an FPGA platform. Computer Vision and Image Understanding. 2010;114(11):1164-1179
  12. 12. Sentouh C, Popieul JC. Human-machine interaction in automated vehicles: The ABV project. In: Proceedings of the 19th World Congress the International Federation of Automatic Control Cape Town. Vol. 47, no. 3. 2014. pp. 6344-6349
  13. 13. Davidson VJ, Ryks J, Chu T. Fuzzy models to predict consumer ratings for biscuits based on digital image features. IEEE Transactions on Fuzzy Systems. 2001;9(1):62-67
  14. 14. Li J, Tan J, Shatadal P. Classification of tough and tender beef by image texture analysis. Meat Science. 2001;57(4):341-346
  15. 15. Tan FJ, Morgan MT, Ludast LI, Forrest JC, Gerrard DE. Assessment of fresh pork color with color machine vision. Journal of Animal Science. 2000;78(12):3078-3085
  16. 16. Davenel A, Seigneurin F, Collewet G, Rémignon H. Estimation of poultry breastmeat yield: Magnetic resonance imaging as a tool to improve the positioning of ultrasonic scanners. Meat Science. 2000;56:153-158
  17. 17. Mery D et al. Automated fish bone detection using X-ray imaging. Journal of Food Engineering. 2011;105(3):485-492
  18. 18. Costa C, Loy A, Cataudella S, Davis D, Scardi M. Extracting fish size using dual underwater cameras. Aquacultural Engineering. 2006;35:218-227
  19. 19. Storbeck F, Daan B. Fish species recognition using computer vision and a neural network. Fisheries Research. Apr. 2001;51(1):11-15
  20. 20. Elmasry G, Cubero S, Moltó E, Blasco J. In-line sorting of irregular potatoes by using automated computer-based machine vision system. Journal of Food Engineering. Sep. 2012;112(1-2):60-68
  21. 21. Brosnan T, Sun D. Improving quality inspection of food products by computer vision––A review. Journal of Food Engineering. 2004;61(1):3-16
  22. 22. Kramer A. Evaluation of quality of fruits and vegetables. In: Food Quality. Irving GW, Hoover SR, editors. American Association for the Advancement of Science, Washington, DC; 1965. pp. 9-18
  23. 23. Barrett DM, Beaulieu JC, Shewfelt R. Color, flavor, texture, and nutritional quality of fresh-cut fruits and vegetables: Desirable levels, instrumental and sensory measurement, and the effects of processing. Critical Reviews in Food Science and Nutrition. May 2010;50(5):369-389
  24. 24. Huang C, Yang W, Duan L, Jiang N, Chen G, Xiong L. Rice panicle length measuring system based on dual-camera imaging. Computers and Electronics in Agriculture. 2013;98:158-165
  25. 25. Huang M, Wan X, Zhang M, Zhu Q. Detection of insect-damaged vegetable soybeans using hyperspectral transmittance image. Journal of Food Engineering. 2013;116(1):45-49
  26. 26. Nguyen TT, Vandevoorde K, Wouters N, Kayacan E, De Baerdemaeker JG, Saeys W. Robotic agriculture detection of red and bicoloured apples on tree with an RGB-D camera. Biosystems Engineering. 2016;146:33-44
  27. 27. Andújar D, Ribeiro A, Fernández-quintanilla C, Dorado J. Using depth cameras to extract structural parameters to assess the growth state and yield of cauliflower crops. Computers and Electronics in Agriculture. 2016;122:67-73
  28. 28. Cho B et al. Detection of cuticle defects on cherry tomatoes using hyperspectral fluorescence imagery. Postharvest Biology and Technology. 2013;76:40-49
  29. 29. Alirezaei M, Zare D, Nassiri SM. Application of computer vision for determining viscoelastic characteristics of date fruits. Journal of Food Engineering. 2013;118(3):326-332
  30. 30. Baigvand M, Banakar A, Minaei S, Khodaei J, Behroozi-Khazaei N. Machine vision system for grading of dried figs. Computers and Electronics in Agriculture. 2015;119:158-165
  31. 31. Ma J, Du K, Zhang L, Zheng F, Chu J, Sun Z. A segmentation method for greenhouse vegetable foliar disease spots images using color information and region growing. Computers and Electronics in Agriculture. 2017;142:110-117
  32. 32. Lu J, Sang N. Detecting citrus fruits and occlusion recovery under natural illumination conditions. Computers and Electronics in Agriculture. 2015;110:121-130
  33. 33. Kurtulmuş F, Ünal H. Discriminating rapeseed varieties using computer vision and machine learning. Expert Systems with Applications. 2015;42(4):1880-1891
  34. 34. Annadurai S. Fundamentals of Digital Image Processing. Pearson, Pearson Education India; 2007
  35. 35. Sun DW. Computer Vision Technology for Food Evaluation. Academic Press, Elsevier; 2007. ISBN: 0123736420, 9780123736420
  36. 36. Christophe C, Marchand E. Colorimetry-based visual servoing. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Oct 2009, St Louis, United States. 2009. pp. 5438-5443
  37. 37. Sarkar NR. Machine Vision for Quality Control in the Food Industry: In Instrumental Methods for Quality Assurance in Foods. Food Science and Technology. Taylor & Francis; 1991. ISBN: 082478278X, 9780824782788
  38. 38. Gunasekaran S. Computer vision technology for food quality assurance. Trends in Food Science and Technology. 1996;7(8):245-256
  39. 39. Jhawar J. Orange sorting by applying pattern recognition on colour image. Procedia Computer Science. 2016;78(December):691-697
  40. 40. Pearson TC, Slaughter DC. Machine vision detection of early split pistachio nuts. Transactions of the ASAE. 1996;39(3):1203-1207
  41. 41. Steinmetz V, Roger JM, Moltó E, Blasco J. On-line fusion of colour camera and spectrophotometer for sugar content prediction of apples. Journal of Agricultural Engineering Research. 1999;73(2):207-216
  42. 42. Chherawala Y, Lepage R, Doyon G. Food grading/sorting based on color appearance trough machine vision: The case of fresh cranberries. In: 2006 2nd International Conference on Information & Communication Technologies. vol. 1. 2006. pp. 540-545
  43. 43. Feng G, Qixin C. Study on color image processing based intelligent fruit sorting system. In: Fifth World Congr. Intell. Control Autom. (IEEE Cat. No.04EX788). Vol. 6. 2004. pp. 4802-4805
  44. 44. Garrido-Novell C, Pérez-Marin D, Amigo JM, Fernández-Novales J, Guerrero JE, Garrido-Varo A. Grading and color evolution of apples using RGB and hyperspectral imaging vision cameras. Journal of Food Engineering. 2012;113(2):281-288
  45. 45. Abdullah MZ, Fathinul-Syahir AS, Mohd-Azemi BMN. Automated inspection system for colour and shape grading of starfruit (Averrhoa carambola L.) using machine vision sensor. Transactions of the Institute of Measurement and Control. 2005;27(2):65-87
  46. 46. Cho JS, Lee HJ, Park JH, Sung JH, Choi JY, Moon KD. Image analysis to evaluate the browning degree of banana (Musa spp.) peel. Food Chemistry. 2016;194:1028-1033
  47. 47. Esehaghbeygi A, Ardforoushan M, Monajemi SAH, Masoumi AA. Digital image processing for quality ranking of saffron peach. International Agrophysics. 2010;24:115-120
  48. 48. Noordam JC, Otten GW, Timmermans TJM, van Zwol BH. High-speed potato grading and quality inspection based on a color vision system. Proceedings of SPIE. 2000;3966:206-217
  49. 49. Barnes M, Duckett T, Cielniak G, Stroud G, Harper G. Visual detection of blemishes in potatoes using minimalist boosted classifiers. Journal of Food Engineering. 2010;98(3):339-346
  50. 50. Carlos L, Antonio L, Sanches J, Maria DAL, Fabbro I. Image processing techniques for lemons and tomatoes classification. Bragantia, Campinas. 2008;67(3):785-789
  51. 51. Moreda GP, Ortiz-Cañavate J, García-Ramos FJ, Ruiz-Altisent M. Non-destructive technologies for fruit and vegetable size determination—A review. Journal of Food Engineering. May 2009;92(2):119-136
  52. 52. Brosnan T, Sun D. Inspection and grading of agricultural and food products by computer vision systems—A review. Computers and Electronics in Agriculture. 2002;36:193-213
  53. 53. Carosio MGA, Bernardes DF, Andrade FD, Moraes TB, Tosin G, Colnago LA. Measuring thermal properties of oilseeds using time domain nuclear magnetic resonance spectroscopy. Journal of Food Engineering. 2016;173:143-149
  54. 54. Soares VLM, Alves Filho EG, Silva LMA, Novotny EH, Canuto KM, Wurlitzer NJ, Narain N, de Brito ES. Tracking thermal degradation on passion fruit juice through nuclear magnetic resonance and chemometrics. Food Chemistry. 2016;219:1-6
  55. 55. Zhang Y, Wu L. Classification of fruits using computer vision and a multiclass support vector machine. Sensors. 2012;12(9):12489-12505
  56. 56. Nagata M, Tallada JG, Kobayashi T. Bruise detection using NIR hyperspectral imaging for strawberry (Fragaria × ananassa Duch.). Environmental Control in Biology. 2006;44(2):133-142
  57. 57. Unay D, Gosselin B, Kleynen O, Leemans V, Destain MF, Debeir O. Automatic grading of bi-colored apples by multispectral machine vision. Computers and Electronics in Agriculture. 2011;75(1):204-212
  58. 58. ElMasry G, Wang N, Vigneault C. Detecting chilling injury in red delicious apple using hyperspectral imaging and neural networks. Postharvest Biology and Technology. 2009;52(1):1-8
  59. 59. Gómez-Sanchis J, Martín-Guerrero JD, Soria-Olivas E, Martínez-Sober M, Magdalena-Benedito R, Blasco J. Detecting rottenness caused by Penicillium genus fungi in citrus fruits using machine learning techniques. Expert Systems with Applications. 2012;39(1):780-785
  60. 60. Khoje S, Bodhe S. Performance comparison of Fourier transform and its derivatives as shape descriptors for mango grading. International Journal of Computers and Applications. 2012;53(3):17-22
  61. 61. Taghizadeh M, Gowen AA, O’Donnell CP. The potential of visible-near infrared hyperspectral imaging to discriminate between casing soil, enzymatic browning and undamaged tissue on mushroom (Agaricus bisporus) surfaces. Computers and Electronics in Agriculture. 2011;77(1):74-80
  62. 62. Larhmam M, Mahmoudi S, Benjelloun M. Semi-automatic detection of cervical vertebrae in X-ray images using generalized Hough transform. In: Proceedings of the 3rd International Conference on Image Processing Theory, Tools and Applications. IPTA'12. 2012. pp. 396-401
  63. 63. Donis-González IR, Guyer DE, Pease A, Barthel F. Internal characterisation of fresh agricultural products using traditional and ultrafast electron beam X-ray computed tomography imaging. Biosystems Engineering. 2013;7:104-113
  64. 64. Chuang C et al. Automatic X-ray quarantine scanner and pest infestation detector for agricultural products. Computers and Electronics in Agriculture. 2011;77(1):41-59
  65. 65. Chamelat R, Rosso E, Choksuriwong A, Rosenberger C, Laurent H, Bro P. Grape detection by image processing. In: IECON 2006—32nd Annual Conference on IEEE Industrial Electronics. 2006. pp. 3-8
  66. 66. Martinez-Sandoval JR, Murillo-Bracamontes EA, Martinez-Rosas ME, Miranda-Velasco MM, Cervantes De Avila H. Image processing applied in agriculture. In: Raul AS, Edwards Block A, editors. Embedded Systems and Wireless Technology. 1st ed. Boca Raton: CRC Press; 2013. pp. 201-226. ISBN: 9781466565654
  67. 67. Rabatel G, Guizard C. Grape berry calibration by computer vision using elliptical model fitting. In: ECPA 2007, 6th Eur. Conf. Precis. Agric. 2007
  68. 68. Nuske S, Achar S, Bates T, Narasimhan S, Singh S. Yield estimation in vineyards by visual grape detection. IEEE International Conference on Intelligent Robots and System. 2011:2352-2358
  69. 69. Hellman E. How to judge grape ripeness before harvest. In: Southwest Regional Vine and Wine Conference. 2004
  70. 70. Dami I et al. Midwest Grape Production Guide. Columbus, OH: Ohio State University Extension; 2005
  71. 71. Sonia Goyal S. Region based contrast limited adaptive HE with additive gradient for contrast enhancement of medical images (MRI). International Journal Computer Science Engineering. 2011;1(4):154-157
  72. 72. Garg R, Mittal B, Garg S. Histogram equalization techniques for image enhancement. International Journal of Electronics Communication and Computer Technology. 2011;2(1):107-111
