
Understanding Color Image Processing by Machine Vision for Biological Materials

Written By

Ayman H. Amer Eissa and Ayman A. Abdel Khalik

Published: 22 August 2012

DOI: 10.5772/50796

From the Edited Volume

Structure and Function of Food Engineering

Edited by Ayman Amer Eissa


1. Introduction

Postharvest handling of fruits comprises several steps: washing, sorting, grading, packing, transporting and storage. Sorting and grading are considered the most important of these steps. Product quality and quality evaluation are important aspects of fruit and vegetable production. Sorting and grading are major processing tasks associated with the production of fresh-market fruit. Considerable effort and time have been invested in the area of automation.

Suitable postharvest handling of fruits and vegetables is the most important process for preserving fruit quality until the produce reaches the consumer, improving the quality of industrial food products, and reducing fruit losses, which are estimated at 30% of crops in Egypt (Reyad, 1999).

Sorting is a separation based on a single measurable property of raw material units, while grading is “the assessment of the overall quality of a food using a number of attributes”. Grading of fresh product may also be defined as ‘sorting according to quality’, as sorting usually upgrades the product (Brennan, 2006).

Sorting of agricultural products is accomplished based on appearance (color and absence of defects), texture, shape and size. Manual sorting is based on traditional visual quality inspection performed by human operators, which is tedious, time-consuming, slow and inconsistent. It has become increasingly difficult to hire personnel who are adequately trained and willing to undertake the tedious task of inspection. Cost-effective, consistent, high-speed and accurate sorting can be achieved with machine-vision-assisted sorting.

Over the past ten years, grading operations for fruits and vegetables have become highly automated through mechatronics and robotics technologies. Machine vision systems and near-infrared inspection systems have been introduced to many grading facilities, together with mechanisms for inspecting all sides of fruits and vegetables (Kondo, 2009).

Machine vision and image processing techniques have been found increasingly useful in the fruit industry, especially for quality inspection and defect sorting. Research in this area indicates the feasibility of using machine vision systems to improve product quality while freeing people from the traditional hand-sorting of agricultural materials.

The use of machine vision for the inspection of fruits and vegetables has increased during recent years. Nowadays, several manufacturers around the world produce sorting machines capable of pre-grading fruits by size, color and weight. Nevertheless, the market constantly requires higher quality products and, consequently, additional features have been developed to enhance machine vision inspection systems (e.g., to locate stems, to determine the main and secondary color of the skin, and to detect blemishes).

Automated sorting has undergone substantial growth in the food industries of developed and developing nations because of the availability of infrastructure. Computer applications in the agricultural and food industries include sorting and grading of fresh products and detection of defects such as cracks, dark spots and bruises on fresh fruits and seeds. The new technologies of image analysis and machine vision have not been fully explored in the development of automated machines for the agricultural and food industries (Locht et al., 1997).

Rapid advances in artificial intelligence have made automated inspection of orange and tomato fruits by computer vision feasible. An intelligent vision system to evaluate fruit quality (size, color, shape, extent of blemishes, and maturity) and assign a grade would significantly improve the economic benefits to the orange and tomato industries. It would potentially increase consumer confidence in the quality of the fruit.

Research efforts have concentrated on the implementation of machine vision to replace manual sorters.

The aim of this study is to develop machine vision techniques based on image processing for estimating the quality of orange and tomato fruits, and to evaluate the efficiency of these techniques with regard to the following quality attributes: size, color, texture and detection of external blemishes.

The specific objectives for the inspection of orange and tomato fruits are to:

  1. Quantify color,

  2. Quantify texture (homogeneity or non-homogeneity),

  3. Quantify size (projected area),

  4. Detect external blemishes (defects),

  5. Develop image processing techniques to sort orange and tomato fruits into quality classes based on size, color and texture analysis,

  6. Evaluate the performance of the system using samples of orange and tomato fruits, and

  7. Evaluate the accuracy of the techniques by comparison with manual inspection.


2. Sorting and grading of fruits and vegetables

Postharvest handling of fruits comprises several steps: washing, sorting, grading, packing, transporting and storage. Sorting is considered one of the most important of these steps.

Product quality and quality evaluation are important aspects of fruit and vegetable production. Sorting and grading are major processing tasks associated with the production of fresh-market fruit types. Considerable effort and time have been invested in the area of automation, but the complexity of fruit sorting and required sorting rates have forced the sorting of most fruit types to be performed manually. Although they currently achieve the best performance, human graders are inconsistent and represent large labor costs.

Machine vision is the study of the principles underlying human visual perception, and it attempts to provide the computer-camera system with the visual capabilities easily accomplished by humans. In the human eye-brain system, the eye receives light from an object and converts the light into electrical signals; it does not interpret these signals or make decisions based upon the nature of the image. Image interpretation and decision-making are performed by the brain. Similarly, a machine vision system has an eye, which may be a camera or a sensor, while image interpretation and decision-making are done by appropriate software and hardware. Machine vision, often referred to as computer vision, can be defined as the process of producing a description of an object from its image.

In manual inspection, a human inspector evaluates individual fruit in order to assign a grade. This process is tedious, labor intensive, and subjective. It has become increasingly difficult to hire personnel who are adequately trained and willing to undertake the tedious task of inspection (Morrow et al.,1990).

McRae (1985) mentioned that the term "grading" can be applied to two distinct operations: (1) sizing, in which grades are segregated according to their dimensions, and (2) inspection, in which grades are based on the proportion of undesirable characteristics, such as greening, cuts or other blemishes, allowed to remain with the sound tubers, and which involves the elimination of unwanted material.

Leemans and Destain (2004) mentioned that fresh-market fruits like apples are graded into quality categories according to their size, color and shape and to the presence of defects. The first two quality criteria are already automated on industrial graders, but grading fruits according to the presence of defects is not yet efficient and consequently remains a manual operation that is repetitive, expensive and unreliable.

Brennan (2006) stated that sorting and grading are terms frequently used interchangeably in the food processing industry. Sorting is a separation based on a single measurable property of raw material units, while grading is "the assessment of the overall quality of a food using a number of attributes". Grading of fresh produce may also be defined as 'sorting according to quality', as sorting usually upgrades the product.

Kondo (2009) reported that, over the past ten years, grading operations for fruits and vegetables have become highly automated through mechatronics and robotics technologies. Machine vision systems and near-infrared inspection systems have been introduced to many grading facilities, together with mechanisms for inspecting all sides of fruits and vegetables.

Sorting of agricultural products is accomplished based on appearance, texture, shape and size. Manual sorting is based on traditional visual quality inspection performed by human operators, which is tedious, time-consuming, slow and inconsistent. Cost-effective, consistent, high-speed and accurate sorting can be achieved with machine-vision-assisted sorting.

Automated sorting has undergone substantial growth in the food industries of developed and developing nations because of the availability of infrastructure. Computer applications in the agricultural and food industries include sorting and grading of fresh products and detection of defects such as cracks, dark spots and bruises on fresh fruits and seeds. The new technologies of image analysis and machine vision have not been fully explored in the development of automated machines for the agricultural and food industries. There is increasing evidence that machine vision is being adopted at the commercial level, but the slow pace of technological development in Egypt and the unavailability of infrastructure are among the factors limiting processes that require computer vision and image analysis (Locht et al., 1997).


3. Manual inspection

The method used by farmers and distributors to sort agricultural products is traditional quality inspection and handpicking, which is time-consuming, laborious and inefficient.

The maximum manual sorting rate depends on numerous factors, including the worker's experience and training, the duration of the task, and the work environment (temperature, humidity, noise levels, and ergonomics of the work station). More fundamentally, viewing conditions (illumination, defect contrast, and viewing distance) must be optimal to achieve maximum sorting rates.

Attempts to develop automatic produce sorters have been justified mostly by the inadequacies of manual sorters, but few authors provide results that demonstrate the degree of manual sorting inefficiency. Flaws were more accurately identified when the inspector knew that only one type of flaw was present in the sample. The detectability of each flaw decreased when the sample contained more than one type of flaw. The authors indicated that different flaws must be mentally processed separately in a limited amount of time, and that these separate decisions may interfere with each other when more than one flaw is present in the sample. It was also proposed that a speed-accuracy relationship existed.

Geyer and Perry (1982) showed that samples with more than one flaw required a longer inspection time to achieve accuracy similar to that for a sample with only one flaw type. It was thought that inspectors would have to search for the different types of flaws, and this may have contributed to the longer inspection time. The increased inspection time improved correct rejections. The rejection of sound items was attributed to the increased false alarm rate resulting from more decision cycles.

More than the ability to discern a defect is required for optimal defect detection. Meyers et al. (1990) indicated that inspection tasks were complicated by the fact that acceptable defect limits change periodically. Also, individuals must apply absolute limits to continuous variables, such as color. In addition to interpreting the allowable limits, inspectors must be able to see the defect if they are to reject the produce. Using a standard peach grading line with uniform spherical balls, theoretically only 88.7% of the surface area was presented to an inspector standing at the side of the conveyor. Actual tests showed that only 82% of the defects on the balls were made visible to the inspector. The amount of surface area inspected can be increased by placing multiple manual graders on both sides of a conveyor.

Many of the decisions made during manual inspection are based on qualitative measurements, and Muir et al. (1989) illustrated that individual "human sensors" are quite variable and difficult to calibrate. When qualified inspectors were asked to quantify the amount of surface defect on a potato (as a percentage of the total tuber surface), the values for a single sample ranged from 10 to 70%. The repeatability of individual inspectors was also very poor: differences between two consecutive readings were as high as 40 percentage points in some cases. Appropriate imaging sensors are more accurate, with a maximum variation of 15 percentage points.

Rehkugler and Throop (1976) indicated that a manual sorter was able to remove bruised apples from sound fruit with acceptable sorting efficiency at a rate of approximately 1 fruit/s. Similarly, Stephenson (1976) showed that rates for sorting tomatoes into immature and mature lots should not exceed 1 fruit/s per inspector. A slightly faster rate, 1.2 fruit/s, was identified as the maximum rate at which an inspector could reject 72% of serious defects in oranges. These results demonstrate the shortfalls of manual inspection and reinforce the need for a more consistent grading system. Implementation of automated sorting machines may improve accuracy, decrease labor costs, and result in a final product free of defects.

Sun et al. (2003) observed that the basis of quality assessment is often subjective, with attributes such as appearance, smell, texture and flavour frequently examined by human inspectors. Francis (1980) found that human perception can easily be fooled. It is therefore pertinent to explore faster and more accurate systems for sorting crops; one such reliable method is an automated computer vision sorting system.


4. Machine vision applications

Machine vision technology uses a computer to analyze an image and to make decisions based on that analysis. There are two basic types of machine vision applications — inspection and control. In inspection applications, the machine vision optics and imaging system enable the processor to “see” objects precisely and thus make valid decisions about which parts pass and which parts must be scrapped. In control applications, sophisticated optics and software are used to direct the manufacturing process. Machine-vision guided assembly can eliminate any operator error that might result from doing difficult, tedious, or boring tasks; can allow process equipment to be utilized 24 hours a day; and can improve the overall level of quality.

The following process steps are common to all machine vision applications:

• Image acquisition

An optical system gathers an image, which is then converted to a digital format and placed into computer memory.

• Image processing

A computer processor uses various algorithms to enhance elements of the image that are of specific importance to the process.

• Feature extraction

The processor identifies and quantifies critical features in the image (e.g., the position of holes on a printed circuit board, the number of pins in a connector, the orientation of a component on a conveyor) and sends the data to a control program.

• Decision and control

The processor’s control program makes decisions based upon the data. Are the holes within specification? Is a pin missing? How must a robot move to pick up the component? Machine vision technology is used extensively in the automotive, agricultural, consumer product, semiconductor, pharmaceutical, and packaging industries, to name but a few. Some of the hundreds of applications include vision-guided circuit-board assembly, and gauging of components, razor blades, bottles and cans, and pharmaceuticals.
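As a rough illustration of these four steps, the sketch below (in Python with the OpenCV library) acquires an image, enhances it, extracts a simple feature, and makes a pass/fail decision. The file name, threshold level, and area limits are hypothetical values chosen only for the example, not parameters from any system described here.

```python
import cv2

# 1. Image acquisition: load a frame. In production this would come
#    from a camera; "part.png" is a hypothetical file name.
image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# 2. Image processing: blur to suppress noise, then threshold to
#    separate the object from the background (128 is an arbitrary
#    illustrative level).
blurred = cv2.GaussianBlur(image, (5, 5), 0)
_, binary = cv2.threshold(blurred, 128, 255, cv2.THRESH_BINARY)

# 3. Feature extraction: measure the projected area of the largest
#    connected region, taken here as the object of interest.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
area = max(cv2.contourArea(c) for c in contours)

# 4. Decision and control: accept or reject against an area
#    specification (the limits are illustrative).
decision = "pass" if 5000 <= area <= 20000 else "reject"
print(f"area = {area:.0f} px, decision: {decision}")
```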

4.1. Use of machine vision to classify agricultural products

Machine vision is the use of a computer to analyze a picture in order to extract meaningful information from it. Using this powerful tool, accurate information about an object, such as its shape, size or appearance, can be obtained that could not easily be obtained by human observation. Several studies have looked into using machine vision to classify the shape and appearance of various agricultural products, including those by Nielsen et al. (1998), Paulus et al. (1997), and Heinemann et al. (1994).


5. Machine vision system

A grading and sorting machine vision system consists of a feeding unit; a belt conveyor to convey the fruit; a color CCD camera located in an image acquisition chamber with a lighting system for image capture; a control unit that opens and closes gates according to signals from the computer; and a computer with an image frame grabber to process the captured images.

The acquisition of an image that is both focused and illuminated is one of the most important parts of any machine vision system. Figure 1 shows the general steps required in obtaining results from an image of an object (Sun et al., 2003).

Figure 1.

Imaging flowchart

Originally, image capture and digitization were accomplished using a combination of a video camera and a frame grabber program. This method has been almost entirely replaced by CCD (charge-coupled device) and CMOS (complementary metal oxide semiconductor) chips. These chips use electrical circuits to directly convert light intensities into a digital image. They combine the video camera and frame grabber into one device that can operate faster and with less distortion of the image. These chips also have the advantage that they can produce images at a much higher resolution than the frame grabber method (Mummert, 2004).

Noise is the incorrect representation of a pixel inside an image. It is best observed as variations in the color of a uniformly colored surface. Noise can be caused by numerous electrical sources, and its removal is important because noise can cause the features of an image to appear distorted, leading to features being measured and classified incorrectly. While many algorithms have proven useful for removing noise, the simplest method is to take multiple images of the same object and average them together (Mummert, 2004). Since the noise is not the same in every image, averaging blends the noise into its surroundings, making the resulting image much clearer.

Preprocessing of an image can include thresholding, cropping, gradient analysis, and many more algorithms. All of these processes permanently change the pixel values inside an image so that it can be analyzed by a computer. For example, in grey-scale thresholding, an intensity value is selected and any pixel whose intensity is less than the selected value is set to 0 (black); if greater, it is set to 255 (white). After thresholding, the resulting grey-scale image can easily have its features classified and measured.

The outputs from a machine vision system can vary: in robotics the output might represent the location of an object to be moved, in inspection it would be a pass or fail result, and in the case of this study it is the sweet potato's size and shape.
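A minimal sketch of the two operations just described, multi-image averaging for noise suppression and grey-scale thresholding, might look as follows in Python with NumPy and OpenCV; the camera index and the threshold level of 128 are illustrative assumptions.

```python
import cv2
import numpy as np

def average_frames(frames):
    """Average several images of the same scene; random noise, which
    differs from frame to frame, blends into the background."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    return stack.mean(axis=0).astype(np.uint8)

def threshold_grey(image, level=128):
    """Grey-scale thresholding: pixels below `level` become 0 (black),
    pixels at or above it become 255 (white)."""
    return np.where(image < level, 0, 255).astype(np.uint8)

# Illustrative usage: grab a few frames from a camera (index 0 is an
# assumption) and binarize the averaged result.
cap = cv2.VideoCapture(0)
frames = []
for _ in range(8):
    ok, frame = cap.read()
    if ok:
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
cap.release()

clean = average_frames(frames)
binary = threshold_grey(clean, level=128)
```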

Machine vision and image processing techniques have been found increasingly useful in the fruit industry, especially for quality inspection and defect sorting (Tao, 1996a,b; Heinemann et al., 1995; Crowe and Delwiche, 1996; Throop et al., 1993; Yang, 1993; Upchurch et al., 1991). However, automating fruit defect sorting is still a challenging subject due to the complexity of the process. From the fruit industry's perspective, the fundamental requirements for an imaging-based fruit sorting system are: (1) 100% total inspection, so that each piece of fruit is checked; (2) high-speed on-line operation and adaptation to existing packing lines; (3) sorting accuracy comparable to human sorters; and (4) the flexibility to adapt to fruits' natural variations in shape, size, brightness, and various defects (Tao, 1998; Wen and Tao, 1997; Rigney et al., 1992).

Machine vision systems distinguish between good and defective fruit by contrasting the differences in light reflectance off the fruit surfaces (Miller, 1995; Thai et al., 1992; Guyer et al., 1994). Machine vision is increasingly used for automated inspection of agricultural commodities (Brosnan and Sun, 2004; Chen et al., 2002). Research results suggest that it is feasible to use machine vision systems to inspect fruit for quality-related problems (Bennedsen and Peterson, 2005; Brosnan and Sun, 2004). For fruit such as apples, commercial systems are available that allow sorting based on physical characteristics like weight, size, shape, and color. Automated fruit grading, in which standards are assigned to fruit based on exterior quality, is also possible with machine vision (Leemans et al., 2002).

Commercial sorters frequently use a conveyor system with either shallow cups (each cup holding one apple as it is moved) or bi-cone rollers that allow apples to rotate while moving along the conveyor (Figure 2). To be considered commercially applicable, automated systems must be able to handle fruit at rates of at least 6-10 fruit per second (Throop et al., 2001).

A camera or cameras above the conveyor are commonly used to capture images in these systems, sometimes in conjunction with mirrors below the fruit. The rotation of apples produced by bi-cone rollers allows multiple aspects of each apple's surface to be imaged by two or more cameras spaced apart along the conveyor. This approach has not proven viable for defect detection for a number of reasons, including non-uniform rotation due to differences in apple sizes and frequent bouncing due to non-uniform shapes.

Figure 2.

A Compac™ apple sorter. Courtesy of Compac, Inc., Visalia, CA.

Currently, no imaging process is used commercially to detect defects or contamination, due to the lack of a method for imaging 100% of the surface of individual fruit. Thus, manual sorting remains the primary method for removing apples with defects (Bennedsen & Peterson, 2005).

Figure 3.

A simple block diagram for a typical vision system operation.

The main components of a typical vision system are described in this study. Several tasks are involved, such as image acquisition, processing, segmentation, and pattern recognition. The role of the image-acquisition sub-system in a vision system is to transform the optical image data into an array of numerical data that may be manipulated by a computer. Fig. 3 shows a simple block diagram for such a machine vision system. It includes systems and sub-systems for different processes: the large rectangles show the sub-systems, while the parts for gathering information are presented as small rectangles. As can be seen in Fig. 3, light from a source illuminates the scene (which can be an industrial environment), and an optical image is generated by image sensors. Image arrays, a digital camera, or other means are used to convert the optical image into an electrical signal that can then be converted into a digital image.

Typically, cameras incorporating either line scan or area scan elements are used, which offer significant advantages. The camera system may use either a charge-coupled device (CCD) sensor or a vidicon for light detection. Preprocessing, segmentation, feature extraction and other tasks can then be performed on this digitized image. Classification and interpretation of the image can be done at this stage and, considering the scene description, the actuation operation can be performed in order to interact with the scene. The actuation sub-system therefore provides an interaction loop with the original scene in order to adjust or modify any given condition for better image taking (Golnabi and Asadpour, 2007).

The automated strawberry grading system (Liming and Yanchao, 2010) was developed based on three characteristics: shape, size and color. The system (Fig. 4) mainly consists of a mechanical part, an image processing part, a detection part and a control part. The mechanical part mainly consists of a conveyor belt, a platform, a leading screw, a gripper and two motors to implement strawberry transport and grading. The image processing part consists of a camera (WV-CP470, Panasonic), an image collecting card (DH-CG300, Daheng company), a closed image box and a computer (PCM9575) to implement image preprocessing, segmentation and extraction of grading characteristics, and to grade the strawberry by these characteristics.

The detection part consists of two photoelectrical sensors and two limit switches. The photoelectrical sensors are used to detect the strawberry position; the limit switches are used to protect the slider on the leading screw during detection. The control part adopts a single-chip microcomputer (SCM) to receive the signals from the photoelectrical sensors, the limit switches and the computer, and finally to control the motors.

The results show that the strawberry classification algorithm is viable and accurate. The strawberry size error is less than 5%, the color grading accuracy is 88.8%, and the shape classification accuracy is over 90%. The average time to grade one strawberry is no more than 3 s.

Blasco et al. (2009) developed an engineering solution for the automatic sorting of pomegranate arils. The prototype (Fig. 5) basically consisted of three major elements corresponding to the feeding, inspection and sorting units, described below. The prototype used two progressive scan cameras to acquire 512 × 384 pixel RGB (Red, Green and Blue) images with a resolution of 0.70 mm/pixel. Both cameras were connected to a computer, the so-called "vision computer" (Pentium 4 at 3.0 GHz), by means of a single frame grabber that digitized the images and stored them in the computer's memory.

Figure 4.

The structure of the strawberry automated grading system.

Figure 5.

Scheme of the sorting machine.

The illumination system consisted of two 40 W daylight compact fluorescent tubes located on both sides of each conveyor belt. The scene captured by each camera had a length of approximately 360 mm along the direction of movement of the objects and a width that allowed the system to inspect three conveyor belts at the same time. The entire system was housed in a stainless steel chamber.

The sorting area followed the inspection chamber. Three outlets were placed on one side of each of the conveyor belts. In front of each outlet, air ejectors were suitably placed to expel the product. The separation of the arils was monitored by the control computer, in which a board with 32 digital outputs was mounted. This board was used to manage the air ejectors. The computer tracked the movement of the objects on the conveyor belts by reading the signals produced by the optical encoder attached to the shaft of the carrier roller.

The authors concluded that the prototype for inspecting and sorting the arils was developed and successfully commissioned, and could handle a maximum throughput of 75 kg/h. The inspection unit, which had two cameras connected to a computer vision system, had enough capacity to achieve real-time specifications and enough accuracy to fulfill the commercial requirements. The sorting unit was able to classify the product into four categories.

Aleixos et al. (2002) developed a multispectral system for real-time inspection of citrus using machine vision and digital signal processors. They describe a new machine vision system for citrus inspection, including a parallel hardware and software architecture, able to determine the external quality of the fruit in real time at a speed of 10 fruits/s.

The vision system was placed on a commercial fruit sorter with four independent inspection lines. As the first step, the sorter singulates the fruit by means of bi-conic rollers before they enter the inspection site. In principle, each individual fruit is located in a space between two rollers (called a cup), although sometimes, when there is excessive loading, two or more fruits occupy the same cup or a fruit sits between two filled cups. The inspection site (Fig. 6) provides adequate lighting to the scene by fluorescent tubes, incandescent lamps and polarised filters that remove reflections from the surface of the fruit. The scene is composed of three complete fruits, imaged with a multispectral camera that simultaneously captures four bands: the three conventional color bands (R, G and B) and a fourth centred at 750 nm (near infrared, denoted I). The camera (Fig. 7) has two CCDs: a color CCD that provides the RGB information and a monochromatic one, coupled to an infrared filter centred on 750 nm (±10 nm), that provides the I information. The light coming from the scene reaches a semi-transparent mirror that refracts 50% of the light towards the infrared CCD (A) and reflects the other 50% to a second mirror (B), which reflects all the light towards the color CCD. The system guarantees at least three whole fruits in each image with a resolution of 0.7 mm/pixel.

The fruit rotates while passing below the camera due to a forced rotation of the rollers. To single the fruits and estimate their size and shape, the system uses only the I information, but for color estimation and defect detection it is necessary to work also with the color bands.

This fact has been used to set up a parallel strategy based on dividing the inspection tasks between two digital signal processors (DSPs), so that during on-line work two image analysis procedures are performed by the two DSPs running in parallel in a master/slave architecture. The master processor calculates the geometrical and morphological features of the fruit using only the I band, and the slave processor estimates the fruit color and detects the skin defects using the four RGBI bands. After the image processing, the master processor collects the information from the slave and sends the result to a control computer. The system was tested under laboratory conditions at two common sizer speeds: 300 and 600 fruits/min (5-10 fruits/s).
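The master/slave division of labour described above (geometry from the I band on one processor, colour and defects from all four bands on another) can be imitated in software. The sketch below is only an analogy of the DSP architecture, written in Python with placeholder analysis functions; the band threshold and feature names are invented for the example.

```python
from concurrent.futures import ProcessPoolExecutor

def geometry_features(i_band):
    """'Master' task: size/shape features from the infrared (I) band
    only. Placeholder logic; the threshold of 40 is invented."""
    return {"projected_area_px": int((i_band > 40).sum())}

def colour_and_defects(rgbi):
    """'Slave' task: colour estimation and defect detection from all
    four bands (R, G, B, I). Placeholder logic."""
    r, g, b, _i = rgbi
    return {"mean_rgb": (float(r.mean()), float(g.mean()),
                         float(b.mean()))}

def inspect(rgbi):
    """Run both analyses concurrently, as the master/slave DSPs do,
    then merge the two partial results into one grading record."""
    with ProcessPoolExecutor(max_workers=2) as pool:
        geom = pool.submit(geometry_features, rgbi[3])
        colr = pool.submit(colour_and_defects, rgbi)
        return {**geom.result(), **colr.result()}

# `rgbi` would be a tuple of four 2-D NumPy arrays (R, G, B, I bands);
# on some platforms this must run under `if __name__ == "__main__":`.
```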

Figure 6.

Scheme of the sorter and lighting system.

Figure 7.

Scheme of the multispectral camera.

An image-processing-based technique was developed by Omid et al. (2010) to measure the volume and mass of citrus fruits such as lemons, limes, oranges, and tangerines. The technique uses two cameras to give perpendicular views of the fruit, as shown in Figure 8. An efficient algorithm was designed and implemented in the Visual Basic (VB) language. The product volume was calculated by dividing the fruit image into a number of elementary elliptical frustums, the volume being the sum of the volumes of the individual frustums.

Figure 8.

The developed machine vision system.

The computed volumes showed good agreement with the actual volumes determined by the water displacement method. The coefficients of determination (R²) for lemon, lime, orange, and tangerine were 0.962, 0.970, 0.985, and 0.959, respectively. The Bland-Altman 95% limits of agreement for comparison of the volumes obtained with the two methods were (−1.62; 1.74), (−7.20; 7.57), (−6.54; 6.84), and (−4.83; 6.15), respectively. The results indicated that citrus fruit size has no effect on the accuracy of the computed volume. The characterization results for various citrus fruits showed that volume and mass are highly correlated. Hence, a simple procedure based on the computed volume of an assumed ellipsoidal shape was also proposed for estimating the mass of citrus fruits. This information can be used to design and develop sizing systems.
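One way to implement the frustum summation described above (here in Python rather than the authors' Visual Basic) is to measure the semi-axes of the fruit cross-section at regular heights from the two perpendicular views and treat each slice as a frustum of an elliptical cone. The standard conical-frustum rule V = (h/3)(A1 + A2 + sqrt(A1·A2)), with elliptical areas A = πab, is used below; treating this as the authors' exact discretization is an assumption.

```python
import math

def citrus_volume(a, b, slice_height):
    """Estimate fruit volume from two perpendicular silhouettes.

    a, b         : lists of cross-section semi-axes (mm) at successive
                   heights, one list from each camera view
    slice_height : vertical spacing between measurements (mm)
    Returns the volume in mm^3.
    """
    volume = 0.0
    for k in range(len(a) - 1):
        area1 = math.pi * a[k] * b[k]          # ellipse area at slice k
        area2 = math.pi * a[k + 1] * b[k + 1]  # ellipse area at slice k+1
        # Frustum of an elliptical cone between the two cross-sections.
        volume += (slice_height / 3.0) * (
            area1 + area2 + math.sqrt(area1 * area2))
    return volume

# Illustrative call with made-up radii for a small fruit.
semi_a = [5, 14, 20, 22, 20, 13, 4]   # mm, from camera 1
semi_b = [5, 13, 19, 21, 19, 12, 4]   # mm, from camera 2
print(f"volume = {citrus_volume(semi_a, semi_b, 8.0) / 1000:.1f} cm^3")
```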

Computer vision is the construction of explicit and meaningful descriptions of physical objects from images. It encompasses the capturing, processing and analysis of two-dimensional images, and aims to duplicate the effect of human vision by electronically perceiving and understanding an image. The basic principle of computer vision is described in Fig. 9. Image processing and image analysis are the core of computer vision, with numerous algorithms and methods available to achieve the required classification and measurements.

Figure 9.

Principle of computer vision system.

Computer vision systems have been used increasingly in the food and agricultural industry for inspection and evaluation purposes as they provide suitably rapid, economic, consistent and objective assessment. They have proved to be successful for the objective measurement and assessment of several agricultural products. Over the past decade advances in hardware and software for digital image processing have motivated several studies on the development of these systems to evaluate the quality of diverse and processed foods. Computer vision has long been recognized as a potential technique for the guidance or control of agricultural and food processes. Therefore, over the past 20 years, extensive studies have been carried out, thus generating many publications.

Computer vision is a rapid, economic, consistent and objective inspection technique, which has expanded into many diverse industries. Its speed and accuracy satisfy ever-increasing production and quality requirements, hence aiding in the development of totally automated processes. This non-destructive method of inspection has found applications in the agricultural and food industry, including the inspection and grading of fruit and vegetables. It has also been used successfully in the analysis of grain characteristics and in the evaluation of foods such as meats, cheese and pizza (Brosnan and Sun, 2002).

Jarimopas and Jaisin (2008) developed an efficient experimental machine vision sorting system for sweet tamarind pods based on image processing techniques. Relevant sorting parameters included shape (straight, slightly curved, and curved), size (small, medium, and large), and defects. The variables defining the shape and size of the sweet tamarind pods were the shape index and pod length. A pod was said to have defects if it contained cracks.

The sorting system involved a CCD camera adapted to work with a TV card, microcontrollers, sensors, and a microcomputer, as shown in Figure 10. The conveyor belt was 30 cm wide and 180 cm long, with four receivers for the sorted sweet tamarind. On the right side of the belt was a box with a CCD camera mounted on the top and four 14-watt energy-saving lamps at each corner of the box to give uniform light intensity with minimal shadows. The camera, which was mounted about 41 cm above the belt, had a focal length of 38-72 mm and provided a resolution of 520 vertical TV lines. A cylinder of compressed air was used to drive the three pneumatic segregators. The sorting system was designed to sort sweet tamarind into three sizes (large, medium, and small).

The defective pods were rejected at the left-hand end of the conveyor. The control unit components were assembled in a box and placed under the sorting system.

Figure 10.

An experimental machine vision system for sorting sweet tamarind pods (1 is conveyor; 2 is power drive; 3 is light source and CCD camera; 4 is pneumatic segregator and compressed air tank; 5 is control unit; and 6 is microcomputer).

The results showed that the three control factors did not significantly affect shape, size, and defects at a significance level of 5%. The average shape indices of the straight, slightly curved, and curved pods were 51.1%, 61.6%, and 75.8%, respectively. Pod length was found to be influenced by size and cultivar, with Sitong and Srichompoo pods ranging from 10.0 to 14.0 cm and 8.5 to 12.4 cm, respectively. The vision sorting system could separate Sitong tamarind pods at an average sorting efficiency (EW) of 89.8%, with a mean contamination ratio (CR) of 10.2%, at a capacity of 1517 pods/h.
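The grading logic implied by these parameters can be written as a simple rule set. In the sketch below, the shape-index and length thresholds are illustrative values placed between the reported class averages; they are not the authors' actual cut-offs.

```python
def classify_pod(shape_index, length_cm, has_crack):
    """Classify a sweet tamarind pod by defect, shape, and size.
    All thresholds are illustrative; only class averages were reported."""
    if has_crack:
        return "defective"
    # Shape: reported class averages were 51.1% (straight),
    # 61.6% (slightly curved) and 75.8% (curved).
    if shape_index < 56:
        shape = "straight"
    elif shape_index < 69:
        shape = "slightly curved"
    else:
        shape = "curved"
    # Size: illustrative length bands within the reported 8.5-14.0 cm range.
    if length_cm >= 12.0:
        size = "large"
    elif length_cm >= 10.0:
        size = "medium"
    else:
        size = "small"
    return f"{shape}, {size}"

print(classify_pod(60.2, 11.3, False))  # -> "slightly curved, medium"
```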

Orange grading operations have been mechanized for a couple of decades. At the first stage of mechanization, plates with holes of orange fruit sizes were used for sorting. Over the past ten years or so, machine vision and near-infrared (NIR) technologies have been utilized and improved, with engineering designs for conveying fruits, to detect fruit size, shape, color, sugar content and acidity. The system inspects fruit with color CCD cameras installed at six different positions on a line, with lighting devices, to provide images of all sides of the fruit. The lighting devices use halogen lamps or LEDs fitted with PL (polarizing) filters to eliminate halation on glossy fruit surfaces. The near-infrared inspection systems consist of halogen lamps and a spectrophotometer to analyze the absorption bands of light transmitted through the fruit. Furthermore, an X-ray imaging system is sometimes installed on each line to find internal defects such as rind-puffing.

Fig. 11 shows a whole inspection system on an orange grading line. After containers filled with oranges are dumped, the fruits are singulated by a singulating conveyor. Singulated fruits are sent to the NIR inspection system (transmissive type) to measure sugar content (brix equivalent) and acidity.

Figure 11.

A whole orange fruit grading system on a line manufactured by SI Seiko Co., Ltd., Japan.

In addition, it can measure the granulation level of the fruit, which indicates the internal water content. The second inspection is X-ray imaging for internal structural quality; rind-puffing, a biological defect, is detected from the image. In the external inspection stage, color images from six machine vision sets operating in random trigger mode are copied to the image grabber boards fitted on the image processing computers whenever a trigger occurs.

Four cameras acquire side images, while two cameras image the fruit from the top. A final camera acquires a top image of each fruit after it has been turned over, so that both the top and bottom sides are inspected. All the images are processed using specific algorithms for detecting image features of color, size, shape, and external defects. Output signals from image processing are transmitted to the judgment computer, where the final grading decision (usually into several grades and several sizes) is made based on fruit features and internal quality measurements.

Fig. 12 shows a fruit grading robot system installed at JA Shimoina, Japan. The robot system consists of two 3-DOF manipulators: one is a providing robot, while the other is a grading robot with 12 machine vision systems. After a container comes under the providing robot (1), 12 fruits are sucked up by suction pads at a time (2) and transported to an intermediate stage, creating space between the fruits (3).

The grading robot picks the 12 fruits up again (4), and 12 bottom images of the fruits are acquired while the manipulator moves to trays on a conveyor line (5). Just before the fruits are released onto the trays (7), 4 side images of each fruit are acquired by rotating the suction pads through 270° (6). The fruits are pushed out onto a line (8) and top images are acquired by another color camera stationed on each line. The machine vision software algorithms are similar to those of the orange grading system. Fruit color, size, shape, and defects are measured.

Figure 12.

A fruit grading robot system manufactured by SI Seiko Co., Ltd., Japan (Left: front view, Right: side view).

Kondo (2010) concluded that the roles of automated grading systems are as follows: 1) efficient sorting and labor saving; 2) uniformization of fruit quality; 3) enhancing the market value of products; 4) fair payment to producers based not only on quantity but on the quality of each product; 5) farming guidance from grading results and GIS (Geographical Information System); and 6) contribution to the traceability system for food safety and security. The most important difference between these automation systems and conventional machines is their ability to handle a large amount of precise information. To handle comprehensive data on agricultural products and foods, an understanding of the diversity and complexity of biomaterial properties is required, and sensors to collect data should often be designed based on these properties. Through a traceability system in which all the data of producers, distributors, and consumers are linked and open to them, it is expected that mutual information exchange will make procedures at each stage more effective and produce safer, higher-quality products (Kondo, 2010).

Identification of apple stem-ends and calyxes, as distinct from defects, on grading lines is a challenging task due to the complexity of the process. An in-line apple defect detection method addresses this as follows. Firstly, a computer-controlled system using three color cameras is placed on the line. In this system, the apples placed on rollers rotate while moving, and each camera captures three images of each apple; in total, nine images are obtained per apple, allowing the whole surface to be scanned. Secondly, the apple image is segmented from the black background by multi-threshold methods. The defects, including the stem-ends and calyxes, called regions of interest (ROIs), are segmented and counted in each of the nine images. Thirdly, since a calyx and a stem-end cannot appear in the same image, an apple is defective if any one of the nine images has two or more ROIs. There are no complex imaging processes or pattern recognition algorithms in this method, because it is only necessary to know how many ROIs there are in a given apple image. Good separation between normal and defective apples was obtained. The classification error of unjustified acceptance of blemished apples was reduced from 21.8% for a single camera to 4.2% for the three-camera system, at the expense of rejecting a higher proportion of good apples. Averaged over false positives and false negatives, the classification error was reduced from 15% to 11%. The disadvantage of this method is that it cannot distinguish different defect types: defects such as bruising, scab, fungal growth, and disease are treated as the same.
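The decision rule itself is compact: count the ROIs in each of the nine segmented images and flag the apple if any image contains two or more. The sketch below assumes each image has already been reduced to a binary ROI mask; the minimum-blob-area filter is an added assumption to suppress speckle.

```python
import cv2

def count_rois(mask, min_area=30):
    """Count connected regions of interest in an 8-bit binary mask.
    `min_area` (pixels) filters out speckle; the value is illustrative."""
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    # Label 0 is the background; keep blobs above the area filter.
    return sum(1 for i in range(1, n)
               if stats[i, cv2.CC_STAT_AREA] >= min_area)

def apple_is_defective(masks):
    """An apple shows at most one stem-end or calyx per image, so any
    image with two or more ROIs must contain a true defect."""
    return any(count_rois(m) >= 2 for m in masks)

# Usage: `masks` would be the nine binary ROI images of one apple.
# masks = [segment(img) for img in nine_images]
# print(apple_is_defective(masks))
```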

The lighting and image acquisition system was designed to be fitted to an existing single-row grading machine (prototype from Jiangsu Univ., China). Six lighting tubes (18 W, type 33 from Philips, Netherlands) were placed at the inner side of a lighting box, while three cameras (color 3CCD uc610 from Uniq, USA) observed the grading line in the box: two with their optical axes in a plane perpendicular to the fruit movement and inclined at 60° with respect to the vertical, and one above, as shown in Figs. 13 and 14. The lighting box is 1000 mm in length and 1000 mm in width. The distance between apple and camera is 580 mm; thus there are three apples in the field of view of each camera, with a resolution of 0.4456 mm per pixel. The images were captured using three Matrox/MeteorII digitizing frame grabbers (Matrox, Canada) loaded in three separate computers. The standard image treatment functions were based on the Matrox libraries (Matrox, Canada), with the remaining algorithms implemented in C++. A local network was built among the computers to communicate result data.

The central processing unit of each computer was a Pentium 4 (Intel, USA) clocked at 3.0 GHz. The fruits placed on corn-shaped rollers rotate while moving: friction between the rollers and the belt on the conveyor rack makes each corn-shaped roller rotate as it moves through the field of view of the cameras. This was adjusted in such a way that a spherical object with a diameter of 80 mm made one full rotation over exactly three images as it passed through the field of view of a camera. The moving speed, in the range 0-15 apples per second, could be adjusted by the stepping motor (Xiao-bo et al., 2010).

Figure 13.

Hardware system of apple in-line detection.

Figure 14.

Trigger grab of nine images for an apple by three cameras at three positions.

One of the main problems in the post-harvest processing of citrus is the detection of visual defects in order to classify the fruit by appearance. Species and cultivars of citrus present a high rate of unpredictability in texture and color, which makes it difficult to develop a general, unsupervised method able to perform this task. Fernando et al. (2010) studied the use of a general approach originally developed for the detection of defects in random color textures. It is based on a Multivariate Image Analysis (MIA) strategy and uses Principal Component Analysis (PCA) to extract a reference eigenspace from a matrix built by unfolding color and spatial data from samples of defect-free peel. Test images are also unfolded and projected onto the reference eigenspace, and the result is a score matrix which is used to compute defect maps based on the T² statistic. In addition, a multiresolution scheme is introduced into the original method to speed up the process. Unlike the techniques commonly used for the detection of defects in fruits, this is an unsupervised method that only needs a few samples to be trained. It is also a simple approach suitable for real-time compliance. Experimental work was performed on 120 samples of oranges and mandarins from four different cultivars: Clemenules, Marisol, Fortune, and Valencia. The success ratio for the detection of individual defects was 91.5%, while the classification ratio of damaged/sound samples was 94.2%. These results show that the studied method can be suitable for the task of citrus inspection.

The method performs novelty detection and is also able to identify new, unpredictable defects, by using a model of sound color textures and considering locations that do not fit this model as defective. It also needs only a few samples to carry out the unsupervised training. For this reason, it is suitable for citrus inspection, as these systems need frequent tuning to adjust to the inspection of new cultivars and even to the features of each batch of fruit within the same cultivar.

Experimental work was performed using 120 samples (images) of randomly selected oranges and mandarins belonging to four different cultivars: Marisol, Clemenules, Fortune and Valencia. First, a set of experiments was carried out to tune the parameters of the method for each cultivar. These included the number of principal eigenvectors used to define the reference eigenspace, the T² threshold (a percentile in the cumulative T² histogram) used to determine whether locations in test samples were sound or defective, and, finally, the set of scales used in the multiresolution framework. Once the parameters were tuned, the results for the detection of individual defects were compiled, achieving 91.5% correct detections and 3.5% false detections. By using chromatic and textural features, the main contribution of this method is the capability of detecting external defects in different cultivars of citrus that present different textures, carrying out only a single prior unsupervised training. The method achieved a performance rate of 94.2% successful classification of complete samples of fruit as either damaged or sound. These results show that the MIA approach studied here can be adequate for the task of citrus inspection (Fernando et al., 2010).
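A compressed sketch of the MIA idea follows: unfold small colour patches of defect-free peel into row vectors, build a PCA eigenspace from them, then project test patches and score each with the T² statistic, marking locations whose T² exceeds a percentile threshold as defective. The component count and percentile below are illustrative choices, not the published settings.

```python
import numpy as np

def fit_eigenspace(sound_patches, n_components=5):
    """PCA reference model from defect-free samples (rows = unfolded
    colour patches). Returns mean, eigenvectors and eigenvalues."""
    X = np.asarray(sound_patches, dtype=np.float64)
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]  # largest first
    return mean, eigvecs[:, order], eigvals[order]

def t2_scores(patches, mean, eigvecs, eigvals):
    """Hotelling T^2 of each patch in the reference eigenspace."""
    scores = (np.asarray(patches, np.float64) - mean) @ eigvecs
    return np.sum(scores ** 2 / eigvals, axis=1)

# Train on sound peel, then flag test locations above, say, the 99th
# percentile of the training T^2 distribution (percentile illustrative).
# mean, V, lam = fit_eigenspace(sound)
# threshold = np.percentile(t2_scores(sound, mean, V, lam), 99)
# defective = t2_scores(test, mean, V, lam) > threshold
```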

Contemporary vision and pattern recognition problems such as face recognition, fingerprint identification, image categorization, and DNA sequencing often have an arbitrarily large number of classes and properties to consider. Dealing with such complex problems using just one feature descriptor is difficult, and feature fusion may become mandatory. Although normal feature fusion is quite effective for some problems, it can yield unexpected classification results when the different features are not properly normalized and preprocessed. Besides, it has the drawback of increasing the dimensionality, which might require more training data. To cope with these problems, Anderson et al. (2010) introduced a unified approach that can combine many features and classifiers, requires less training, and is more adequate for some problems than a naïve method in which all features are simply concatenated and fed independently to each classification algorithm. In addition, the presented technique is amenable to continuous learning, both when refining a learned model and when adding new classes to be discriminated. The introduced fusion approach was validated using a multi-class fruit-and-vegetable categorization task in a semi-controlled environment, such as a distribution center or a supermarket cashier. The results show that the solution is able to reduce the classification error by up to 15 percentage points with respect to the baseline.

Oftentimes, when tackling complex classification problems, just one feature descriptor is not enough to capture the classes' separability. Therefore, efficient and effective feature fusion policies may become necessary. Although normal feature fusion is quite effective for some problems, it can yield unexpected classification results when the features are not properly normalized and preprocessed. Additionally, it has the drawback of increasing the dimensionality, which might require more training data.

The approach treats multi-class classification as a set of binary problems, in such a way that one can assemble diverse features and classifier approaches custom-tailored to parts of the problem. It presents a unified solution that can combine many features and classifiers. Such a technique requires less training and performs better than a naïve method in which all features are simply concatenated and fed independently to each classification algorithm.

The results show that the introduced solution is able to reduce the classification error by up to 15 percentage points with respect to the baseline. A second contribution of the work is the introduction to the community of a complete and well-documented fruit/vegetable image data set suitable for content-based image retrieval, object recognition, and image categorization tasks, intended as a common comparison set for researchers working in this space.

Although feature and classifier fusion can be worthwhile, it seems inadvisable to combine weak features that have high classification errors with features that have low classification errors; in such cases, the system will most likely not take advantage of the combination.

The feature and classifier fusion based on binary base learners presented in that work represents a basic framework for solving the more complex problem of determining not only the species of a produce item but also its variety. Since it requires only partial training for the added features and classifiers, its extension is straightforward, and the introduced solution is general enough to be used in other problems.

Whether or not more complex approaches, such as appearance-based descriptors, provide good results for the classification is still an open problem; it would be unfair to conclude that they do not help (Anderson et al., 2010).

Color is an important quality attribute that dictates the quality and value of many fruit products. Accurately measuring and describing heterogeneous fruit color changes during ripening is difficult with the instrumentation available (chromometer and colorimeter) due to the small viewing area of the equipment. Calibrated computer vision systems (CVS) provide another technique that allows capture and quantitative description of whole-fruit color characteristics. Published research has demonstrated errors in CVS due to product curvature. In this work, it was confirmed that, of the a* and b* color values measured on a curved surface, 55% and 69% of the values, respectively, were within the range measured for the same flat surface. This deviation results in descriptions of hue angle and chroma with average errors of 2° and 2.5, respectively. The system developed allows capture of hue angle data for whole fruit of heterogeneous colour. The usefulness of the device for capturing descriptive colour data during fruit maturation is demonstrated with 'B74' mangoes (Kang et al., 2008).
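The colour descriptors named here follow directly from the a* and b* channels: chroma is C* = sqrt(a*² + b*²) and hue angle is h = atan2(b*, a*). A minimal NumPy sketch, assuming per-pixel a* and b* arrays have already been obtained from a calibrated CVS:

```python
import numpy as np

def hue_and_chroma(a_star, b_star):
    """Per-pixel CIELAB hue angle (degrees, 0-360) and chroma from
    calibrated a* and b* images."""
    chroma = np.hypot(a_star, b_star)             # C* = sqrt(a*^2 + b*^2)
    hue = np.degrees(np.arctan2(b_star, a_star))  # h  = atan2(b*, a*)
    hue = np.where(hue < 0, hue + 360.0, hue)     # wrap to [0, 360)
    return hue, chroma

# Whole-fruit colour can then be summarized, e.g. by the mean hue of
# the fruit pixels (mask construction is assumed to happen elsewhere).
# mean_hue = hue_and_chroma(a_img[mask], b_img[mask])[0].mean()
```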

Hyperspectral images of the apples (normal and injured) were acquired using a lab-scale hyperspectral imaging system (Fig. 15) that consisted of a charge-coupled device (CCD) camera (PCO-1600, PCO Imaging, Germany) connected to a spectrograph (ImSpector V10E, Optikon Co., Canada) coupled with a standard C-mount zoom lens. The optics of this imaging system allowed fruit properties to be studied across the spectral range of 400-1000 nm.

Figure 15.

The hyperspectral imaging system: (a) a CCD camera; (b) a spectrograph with a standard C-mount zoom lens; (c) an illumination unit; (d) a light tent; and (e) a PC supported with the image acquisition software.

The camera faced downward at a distance of 400 mm from the target. The sample was illuminated through a cubic tent made of white nylon fabric to provide uniform lighting conditions. The light source consisted of two 50 W halogen lamps mounted at a 45° angle from horizontal, fixed 500 mm above the sample and spaced 900 mm apart on two opposite sides of the sample. The sample was placed in a position corresponding to the center of the camera's field of view (300 mm × 300 mm), with the calyx-stem end perpendicular to the camera lens to avoid any discrepancy between the normal surface and the stem or calyx. The camera-spectrograph assembly was provided with a stepper motor to move the unit through the camera's field of view to scan the apple line by line.

The spectral images were collected in a dark room where only the halogen light source was used. The exposure time was adjusted to 200 ms throughout the test. Each collected spectral image was stored as a three-dimensional image (x, y, λ). The spatial components (x, y) included 400 × 400 pixels, and the spectral component (λ) included 826 bands within the 400-1000 nm range. The hyperspectral imaging system was controlled by a laptop Pentium M computer (processor speed: 2.0 GHz; RAM: 2.0 GB) preloaded and configured with the Hypervisual Image Analyzer® software program (ProVision Technologies, Stennis Space Center, MO, USA). All spectral images acquired were processed and analyzed using the Environment for Visualizing Images software program (ENVI 4.2, Research Systems Inc., Boulder, CO, USA).

The hyperspectral images were calibrated with white and dark references. The dark reference was used to remove the dark-current effect of the CCD detectors, which are thermally sensitive.
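Such white/dark calibration is commonly implemented with the standard relative-reflectance correction R = (raw − dark) / (white − dark), applied band by band. The sketch below assumes the three hyperspectral cubes share the same (x, y, λ) shape; presenting this exact formula as the authors' implementation is an assumption.

```python
import numpy as np

def calibrate_reflectance(raw, white, dark, eps=1e-6):
    """Convert a raw hyperspectral cube to relative reflectance using
    white and dark reference cubes of the same (x, y, lambda) shape.
    Subtracting the dark cube removes the CCD dark-current offset."""
    raw = raw.astype(np.float64)
    white = white.astype(np.float64)
    dark = dark.astype(np.float64)
    return (raw - dark) / np.maximum(white - dark, eps)  # avoid /0
```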

Hyperspectral imaging (400-1000 nm) and artificial neural network (ANN) techniques were investigated for the detection of chilling injury in Red Delicious apples. A hyperspectral imaging system was established to acquire and pre-process apple images, as well as to extract apple spectral properties. Feed-forward back-propagation ANN models were developed to select the optimal wavelength(s), classify the apples, and detect firmness changes due to chilling injury. The five optimal wavelengths selected by the ANN were 717, 751, 875, 960 and 980 nm. The ANN models were trained, tested, and validated using different groups of fruit in order to evaluate the robustness of the models. With the spectral and spatial responses at the five selected optimal wavelengths, an average classification accuracy of 98.4% was achieved for distinguishing between normal and injured fruit. The correlation coefficients between measured and predicted firmness values were 0.93, 0.91 and 0.92 for the training, testing, and validation sets, respectively (Elmasry et al., 2009).
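As a hedged sketch of the classification stage (not the authors' exact network), a small feed-forward model can be trained on reflectance values at the five selected wavelengths. The example below uses scikit-learn's MLPClassifier with synthetic stand-in data; the layer size and all data values are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# X: one row per apple, columns = mean reflectance at the five
# selected wavelengths (717, 751, 875, 960, 980 nm); y: 0 = normal,
# 1 = chilling-injured. Random data stand in for real measurements.
rng = np.random.default_rng(0)
X = rng.random((200, 5))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)   # synthetic labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
# A small feed-forward back-propagation network, in the spirit of the
# study's approach; the hidden-layer size is an illustrative choice.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                      random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```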

Naoshi et al. (2008) mentioned that there are many types of citrus fruit grading machines with machine vision capability. While most of them sort fruit by size, shape, and color, detection of rotten fruit remains challenging because its appearance is similar to that of normal areas. The objectives of this research were to investigate whether fluorescence would be a good indicator of fruit rot, and to develop an economical solution for adding rot inspection capability to an existing machine vision fruit inspection station. A machine vision system consisting of a pair of white and ultraviolet (UV) LED lighting devices and a color CCD camera was proposed for the citrus fruit grading task. Since the time lag between the color and fluorescence image captures was short (14 ms), it was possible to inspect the color, shape, size, and rot of a fruit on the move before it left an existing industrial inspection chamber.

Cheng et al. (2003) presented a near-infrared (NIR) and mid-infrared (MIR) dual-camera imaging approach for online apple stem-end/calyx detection. How to distinguish the stem-end/calyx from a true defect is a persistent problem in apple defect sorting systems. In a single-camera NIR approach, the stem-end/calyx of an apple is usually confused with true defects, and the fruit is often mistakenly sorted. To solve this problem, a dual-camera NIR/MIR imaging method was developed. The MIR camera can identify only the stem-end/calyx parts of the fruit, while the NIR camera can identify both the stem-end/calyx portions and the true defects on the apple. A fast algorithm was developed to process the NIR and MIR images. Online test results showed that a 100% recognition rate for good apples and a 92% recognition rate for defective apples were achieved using this method. The dual-camera imaging system has great potential for reliable online sorting of apples for defects.
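The decision logic reduces to a set difference between the two detections: any region the NIR camera flags that the MIR camera does not explain as stem-end/calyx is treated as a true defect. A minimal sketch (the names are mine, and it assumes the two masks are already registered to the same view):

import numpy as np

def true_defects(nir_defect_mask, mir_stem_calyx_mask):
    # Both inputs are boolean images of the same shape.
    # Keep only NIR detections that are not stem-end/calyx in MIR.
    return nir_defect_mask & ~mir_stem_calyx_mask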

Sunil et al. (2009) noted that identification of insect damage is critical in pecan processing, since insect damage is positively linked to the production of carcinogenic toxins in many food products. Previously, X-ray images were used for pecan defect identification, but the feature extraction was done manually. The objective of this article was to automate the feature extraction. Three energy levels (30 kV and 1 mA, 35 kV and 0.5 mA, and 40 kV and 0.75 mA) were used to acquire images of good pecans, pecans with insect exit holes, and pecans with eaten nutmeat. After thresholding, three features were extracted: area ratio (ratio of the area of the nutmeat and shell to the area of the total nut), mean local intensity variation, and average pixel intensity. The local adaptive methods performed well for the selected energy levels. The results indicate that it is feasible to distinguish between good pecans and eaten-nutmeat pecans. However, the selected features were not able to distinguish between good pecans and pecans with one or two insect exit holes.
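The three features lend themselves to a few lines of array arithmetic. The sketch below is an assumption-laden simplification (the function name, the mask definitions, and the use of a plain variance in place of the article's "mean local intensity variation" are all mine):

import numpy as np

def pecan_features(gray, nut_mask, meat_mask):
    # gray:      X-ray image as a 2-D float array
    # nut_mask:  boolean mask of the whole nut
    # meat_mask: boolean mask of nutmeat-plus-shell pixels after thresholding
    area_ratio = meat_mask.sum() / max(int(nut_mask.sum()), 1)
    local_variation = float(np.var(gray[nut_mask]))  # simplified stand-in
    mean_intensity = float(gray[nut_mask].mean())
    return area_ratio, local_variation, mean_intensity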

Jun et al. (2004) developed a mobile fruit grading robot for information-added products in precision agriculture. The prototype robot consisted of a manipulator, an end-effector, a machine vision system, and a mobile mechanism. The robot could acquire five fruit images, from four sides and the top, while its manipulator transported the fruit received from the operator. A preliminary laboratory experiment was conducted with 372 samples of the sweet pepper variety 'Tosahikari'. A fruit mass prediction method was developed using the five images.

A high spatial resolution (0.5–1.0 mm) hyperspectral imaging system is presented as a tool for selecting better multispectral methods to detect defective and contaminated foods and agricultural products. Examples of direct linear or non-linear analysis of the spectral bands of hyperspectral images that resulted in more efficient multispectral imaging techniques are given. Various image analysis methods for the detection of defects and/or contaminations on the surfaces of Red Delicious, Golden Delicious, Gala, and Fuji apples are compared. Surface defects/contaminations studied include side rots, bruises, flyspecks, scabs and molds, fungal diseases (such as black pox), and soil contaminations. Differences in spectral responses within the 430–900 nm spectral range are analyzed using monochromatic images and second difference analysis methods for sorting wholesome and contaminated apples. An asymmetric second difference method using a chlorophyll absorption waveband at 685 nm and two bands in the near-infrared region is shown to provide excellent detection of the defective/contaminated portions of apples, independent of the apple color and cultivar. Simple and requiring less computation than other methods such as principal component analysis, the asymmetric second difference method can be easily implemented as a multispectral imaging technique.
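A second difference combines one chlorophyll-absorption band with two flanking bands; unequal spacing of the flanking bands is what makes it "asymmetric". The sketch below computes such an image from a calibrated cube. The two NIR band positions are illustrative assumptions; the source states only that two near-infrared bands are paired with the 685 nm band:

import numpy as np

def asymmetric_second_difference(cube, wavelengths, nir1=722.0, nir2=869.0):
    # cube: reflectance cube (rows, cols, bands); wavelengths: band centres (nm)
    def band(w):
        return cube[:, :, int(np.argmin(np.abs(wavelengths - w)))].astype(float)
    # Unequal spacing of nir1 and nir2 around 685 nm -> "asymmetric"
    return band(nir1) + band(nir2) - 2.0 * band(685.0)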

Fig. 16 is a schematic diagram of the ISL hyperspectral imaging system. It consists of a charge-coupled device (CCD) camera system, a SpectraVideo camera from PixelVision, Inc. (Tigard, OR, USA), equipped with an imaging spectrograph, a SPECIM ImSpector version 1.7 from Spectral Imaging Ltd. (Oulu, Finland). The ImSpector has a fixed-size internal slit to define the field of view for the spatial line, and a prism/grating/prism system for the separation of the spectra along the spatial line. To improve the spatial resolution of the hyperspectral images, an external adjustable slit is placed between the sample and the camera optics; this better defines the field of view and increases the spatial resolution. Image acquisition and recording are performed with a Pentium-based PC using a general-purpose imaging software package, PixelView 3.10 Beta 4.0 from PixelVision, Inc. (Tigard, OR, USA).

Figure 16.

Schematic of the hyperspectral imaging system.

A C-mount set with a focus lens and an aperture diaphragm allows focusing and aperture adjustments; the circular aperture is opened to its maximum, and the external slit is adjusted with micrometer actuators to optimize light flow and resolution. The light source consists of two 21 V, 150 W halogen lamps powered by a regulated DC voltage power supply (Fiber-Lite A-240P, Dolan-Jenner Industries, Inc., Lawrence, MA, USA). The light is transmitted through two optical fibers towards a line-light reflector. The sample is placed on a conveyor belt driven by an adjustable-speed AC motor control (Speedmaster, Leeson Electric Motors, Denver, CO, USA). The sample is scanned line by line at an adjustable scanning rate, illuminated by the two line sources as it passes through the camera's field of view (Patrick et al., 2004).

Naoshi et al. (2008) mentioned that a complete fruit quality inspection system should be able to examine two opposite sides of each fruit. In automating such an inspection system, mechanically manipulating fruits of irregular shapes and sizes is a well-known challenge. An innovatively designed rotary tray was developed for use in an eggplant fruit grading system. The rotary tray enables the presentation of two opposite sides of each fruit for inspection by machine vision systems. It was designed for handling baby eggplants and consists mainly of two cover plates and six side plates.

It is capable of performing five tasks on a fruit: receiving, presenting, holding, rotating, and releasing. The sequence of stages that a rotary tray goes through while moving along an inspection line is: 1. receiving a fruit; 2. presenting the fruit during the first image acquisition; 3. holding the fruit by closing one cover plate; 4. turning the fruit to its opposite side by rotating the entire tray; 5. opening the other cover plate; 6. presenting the opposite side of the fruit during the second image acquisition; 7. holding the fruit while the decision on its quality is being made by the machine vision algorithms; and 8. releasing the fruit to a particular location according to the inspection result.

The motions of a rotary tray are activated along a grading line by lifting guides, rotary pushers, clicks, and cams. The actions at stages 1 through 7 are performed by mechanical devices strategically placed along a motor-driven grading conveyor. The releasing action is triggered by a rotary solenoid when the fruit arrives at the proper location. Six eggplant grading lines, each containing a series of the rotary trays, are being operated at an agricultural cooperative facility in Japan.

Jiangsheng and Yibin (2006) proposed a novel approach for fruit shape detection based on a multi-scale level set framework. An image is first decomposed from coarse to fine by wavelet analysis, forming a series of images. Region homogeneity is then used in a level set approach to extract the fruit shape boundary at the coarse scale. At the finer scales, these coarse boundaries are used to initialize boundary detection and serve as a priori shape knowledge to guide contour evolution. The algorithm needs no noise-removal preprocessing and can find an accurate shape boundary in a noisy image without any additional assumptions. The proposed method was applied to fruit shape detection with more promising results than traditional methods.

Color is important in evaluating the quality and maturity level of many agricultural products. Color grading is an essential step in the processing and inventory control of fruits and vegetables that directly affects profitability. Dates are harvested at different levels of maturity that require different processing before the dates can be packed. Maturity evaluation is crucial to processing control, but conventional methods are slow and labor-intensive. Because date maturity level correlates strongly with color, automated color grading can be used. A novel and robust color space conversion and color index distribution analysis technique for automated date maturity evaluation, well suited to commercial production, is presented here. In contrast with more complex color grading techniques, the proposed method makes it easy for a human operator to specify and adjust color preference settings for different color groups representing distinct maturity levels. The performance of this robust color grading technique is demonstrated using date samples collected from field testing.

It was concluded that a new color space conversion method and color index distribution analysis technique specifically for automated date maturity evaluation had been presented. The proposed approach uses a third-order polynomial to convert 3D RGB values into a simple 1D color space. Unlike other color grading techniques, this approach makes the selection and adjustment of color preferences easy and intuitive. Moreover, it allows a more complicated distribution analysis of fruit surface colors. The user can change color and consistency cutoff points in a manner consistent with human color perception, by simply sliding a cutoff point to include fruit that is "slightly darker" or "lighter red". Changes in preferred color ranges can also be completed without reference to precise color values. Furthermore, by converting 3D colors to a linear color space, the color distribution analysis required for date maturity evaluation becomes much more straightforward. The implementation of this new color space conversion method and the results presented demonstrate the simplicity and accuracy of the proposed technique. To calibrate the system, an experienced grader specifies a set of colors of interest, each accompanied by a preferred index value on a linear scale. Provided that the selected color samples cover the complete range of expected colors, accurate color grading will result. This new technique can be applied to other color grading applications that require the setting and adjustment of color preferences.
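A third-order polynomial mapping from RGB to a one-dimensional index can be fitted by ordinary least squares once a grader has supplied calibration colors and their preferred index values. The sketch below assumes a full cubic expansion in R, G, and B; the description above does not specify the exact polynomial terms, so this is only an illustration:

import numpy as np

def fit_color_index(rgb, index):
    # rgb:   (n, 3) calibration colors chosen by the grader
    # index: (n,)   preferred 1-D index value for each color
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    terms = [np.ones_like(r), r, g, b,
             r*r, g*g, b*b, r*g, r*b, g*b,
             r**3, g**3, b**3, r*r*g, r*r*b, g*g*r, g*g*b, b*b*r, b*b*g, r*g*b]
    A = np.stack(terms, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, index, rcond=None)
    return coeffs  # dot these with the same terms to grade a new color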


6. Light

6.1. Electromagnetic spectrum

Radiation energy travels in space at the speed of light in the form of sinusoidal waves with known wavelengths. Arranged from shorter to longer wavelengths, the electromagnetic spectrum provides information on the frequency as well as the energy distribution of the electromagnetic radiation.

When electromagnetic radiation strikes an object, the resulting interaction is affected by the properties of an object such as color, physical damage, and presence of foreign material on the surface. Different types of electromagnetic radiation can be used for quality control of foods. For example, near-infrared radiation can be used for measuring moisture content, and internal defects can be detected by X-rays.

Figure 17.

The electromagnetic spectrum, comprising the visible and non-visible ranges.

Electromagnetic radiation is transmitted in the form of waves and can be classified according to wavelength and frequency. The electromagnetic spectrum is shown in Fig. 17.

Referring to Figure 17, gamma rays, with wavelengths of less than 0.1 nm, constitute the shortest wavelengths of the electromagnetic spectrum. At the other end of the spectrum, the longest waves are radio waves, which have wavelengths of many kilometers. The well-known ground-penetrating radar (GPR) and other microwave-based imaging modalities operate in this frequency range.

Traditionally, gamma radiation has been important for medical and astronomical imaging. Various anatomical imaging modalities, such as computed tomography (CT), magnetic resonance imaging (MRI), nuclear magnetic resonance (NMR), single photon emission computed tomography (SPECT), and positron emission tomography (PET), operate at shorter wavelengths, ranging from 10⁻⁸ m to 10⁻¹³ m.

Located in the middle of the electromagnetic spectrum is the visible range, a narrow portion of the spectrum with wavelengths ranging from 400 nm (blue) to 700 nm (red). The popular charge-coupled device (CCD) camera operates in this range.

Infrared (IR) light lies between the visible and microwave portions of the electromagnetic spectrum. As with visible light, infrared covers a range of wavelengths, from the near (shorter-wavelength) infrared to the far (longer-wavelength) infrared.

Ultraviolet (UV) light has a shorter wavelength than visible light. As with IR, the UV part of the spectrum can be divided into three regions: near ultraviolet (NUV, around 300 nm), far ultraviolet (FUV, around 30 nm), and extreme ultraviolet (EUV, around 3 nm). NUV is closest to the visible band, while EUV is closest to the X-ray region and is therefore the most energetic of the three types. FUV lies between the near and extreme ultraviolet regions and is the least explored of the three.

Electromagnetic waves travel at the speed of light and are characterized by their frequency (f) and wavelength (λ). In free space, these two properties are related by:

c = λf

where c is the speed of light in vacuum (3.0 × 10⁸ m/s).

Radiation can exhibit properties of both waves and particles. Visible light acts as if it is carried in discrete units called photons. Each photon has an energy, E, that can be calculated by:

E = h f

where h is Planck's constant (6.626 × 10⁻³⁴ J·s) (Sahin & Sumnu, 2005; Sun, 2008).
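As a worked example, the energy of a single photon of green light (λ = 550 nm, an arbitrary choice) follows directly from the two relations above:

h = 6.626e-34        # Planck's constant, J s
c = 3.0e8            # speed of light, m/s
wavelength = 550e-9  # 550 nm, in metres

f = c / wavelength   # ~ 5.45e14 Hz, within the visible range
E = h * f            # ~ 3.6e-19 J per photon
print(f, E)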

6.2. Illumination

In many vision applications, the provision of correct, high-quality illumination is decisive. Engineers and machine vision practitioners have long recognized lighting as an important part of a machine vision system. However, choosing the right lighting strategy remains difficult because there is no specific guideline for integrating lighting into machine vision applications.

Therefore, the illuminant is an important factor that must be taken into account when considering machine vision integration. Frequently, knowledgeable selection of an illuminant is necessary for specific vision applications.

For detection of differences in color under diffuse illumination, both natural daylight and artificial simulated daylight are commonly used. A window facing north and free of direct sunshine is the natural illuminant normally employed for visual color examination. However, natural daylight varies greatly in spectral quality with direction of view, time of day and year, weather, and geographical location. Therefore, simulated daylight is commonly used in industrial testing: artificial light sources can be standardized and remain stable in quality. In 1931, the Commission Internationale de l'Eclairage (CIE, the International Commission on Illumination) recommended three light sources reproducible in the laboratory. Illuminant A defines light typical of that from an incandescent lamp, illuminant B represents direct sunlight, and illuminant C represents average daylight from the total sky. Based on measurements of daylight, the CIE recommended a series of D illuminants in 1966 to represent daylight. These illuminants represent daylight more completely and accurately than illuminants B and C, and they are defined for a complete series of yellow-to-blue color temperatures. The D illuminants are usually identified by the first two digits of their color temperature (Sahin & Sumnu, 2005; Sun, 2008).

Traditionally, the two most common illuminants are fluorescent and incandescent bulbs, even though other light sources (such as light-emitting diodes (LEDs) and electroluminescent sources) are also useful.

Computer vision systems, like the human eye, are affected by the level and quality of illumination. The performance of the illumination system greatly influences image quality and plays an important role in the overall efficiency and accuracy of the system. The illumination system comprises the light sources, which are focused on the material being inspected. Lighting type, location, and color quality play an important role in producing a clear image of the object. Lighting arrangements are grouped into front-lighting and back-lighting: front-lighting illuminates the object for better detection of its external surface features, while back-lighting is used to enhance the object's outline against the background. Light sources used include incandescent lamps, fluorescent lamps, lasers, X-ray tubes, and infrared lamps (Narendra and Hareesh, 2010).


7. Color

Color is one of the most important quality attributes of foods. Although it does not necessarily reflect nutritional, flavor, or functional values, it determines the acceptability of a product to consumers. Sometimes color measurement may be used instead of chemical analysis, provided the color correlates with the presence of the chemical component of interest, since color measurement is simpler and quicker than chemical analysis.

It may be desirable to follow the changes in color of a product during storage, maturation, processing, and so forth. Color is often used to determine the ripeness of fruits. Color of potato chips is largely controlled by the reducing sugar content, storage conditions of the potatoes, and subsequent processing. Color of flour reflects the amount of bran. In addition, freshly milled flour is yellow because of the presence of xanthophylls.

Color is a perceptual phenomenon that depends on the observer and the conditions in which the color is observed. It is a characteristic of light, which is measurable in terms of intensity and wavelength. Color of a material becomes visible only when light from a luminous object or source illuminates or strikes the surface.

Light is defined as visually evaluated radiant energy with frequencies from about 3.9 × 10¹⁴ Hz to 7.9 × 10¹⁴ Hz in the electromagnetic spectrum. Light of different wavelengths is perceived as having different colors. Many light sources emit electromagnetic radiation that is relatively balanced across all wavelengths in the visible region, so the light appears white to the human eye. However, when light interacts with matter, only certain wavelengths within the visible region may be transmitted or reflected. The resulting radiation at different wavelengths is perceived by the human eye as different colors, some wavelengths appearing visibly more intense than others. That is, color arises from the presence of light at greater intensities at some wavelengths than at others.

The selective absorption of different amounts of the wavelengths within the visible region determines the color of the object. Wavelengths not absorbed but reflected by or transmitted through an object are visible to observers.

Physically, the color of an object is measured and represented by spectrophotometric curves, which are plots of fractions of incident light (reflected or transmitted) as a function of wavelength throughout the visible spectrum (Figure 18).

Figure 18.

Spectrophotometric curves.

7.1. Color fundamentals

The different colors we perceive are determined by two factors: the nature of the light reflected from the object and the source of the light. The reason tomatoes look red is that they absorb most of the violet, blue, green, and yellow components of the daylight, and reflect mainly the red components.

Leaves look green because they reflect mainly the green components and absorb the red and blue components. The source of light determines what colors can be reflected. Sunlight contains all visible wavelengths, so objects can show their full colors in daylight. If the light source emits only a single wavelength, objects can reflect only that wavelength and no others.

7.2. Trichromatic theory

The presence of three types of color receptors in the retinal layer confirmed the ideas that had been proposed in the trichromatic theory of human color vision.

This states that the magnitudes of three stimuli determine the perception of a color and not the detailed distribution of light energy across the visible spectrum. The concept is illustrated in Figure 19. If these stimuli are the same for two different light distributions, then the color appearance of the lights will be the same, irrespective of their spectrum. The trichromatic theory is important since it forms the basis of most methods of expressing color in terms of numbers and of the methods of reproduction of colored images.

The idea that three different types of photoreceptors participate in a population code for color is often referred to as the "trichromatic theory" of color vision.

Figure 19.

The signals from the eye cone cells.

Therefore, any light can be matched by a combination of three others. The three receptor types are cones:

  • S (Short): most receptive at 419 nm

  • M (Medium): most receptive at 531 nm

  • L (Long): most receptive at 558 nm, as shown in Figure 20.

Red and green are not only unique hues but are also psychologically opponent color sensations. A color will never be described as having both the properties of redness and greenness at the same time; there is no such color as a reddish green. In the same way, yellow and blue are an opponent pair of color perceptions.

Figure 20.

The cone absorption spectra.

The six properties can be grouped into two opponent pairs, red/green and yellow/blue, plus the luminance pair white/black. The second stage of color vision is thought to arise from the action of neurons, in particular from inhibitory synapses. Figure 21 illustrates the signal pathways and the processing required to account for the properties described in the opponent theory. The human eye has receptors for short (S), middle (M), and long (L) wavelengths, also known as blue, green, and red receptors.

The three cone types are combined to form three opponent process channels:

  • S vs (M + L) = Blue/Yellow

  • (L+S) vs M = Red/Green

  • M + L = Black/White

In addition to the existence of the three different classes of cone photopigments, considerable support for the trichromatic theory comes from observations of human color perception. For example, experiments in which subjects are shown different colors and asked to match them by mixing only three pure wavelengths of light in various proportions show that humans can, indeed, match any color using only three wavelengths of light: red, green, and blue (Colour4Free, 2010).

Figure 21.

A set of signal paths consistent with the two stages of color vision.

7.3. The CIE chromaticity system

In 1931, the International Commission on Illumination, CIE (Commission Internationale de l'Eclairage), defined three standard primary colors to be combined to produce all possible perceivable colors. The three standard primaries of the 1931 CIE, called X, Y, and Z, are imaginary colors.

The three-dimensional color space CIE XYZ is the basis for all color management systems. This color space contains all perceivable colors, the human gamut. The two-dimensional CIE chromaticity diagram xyY (Figure 22) shows a special projection of the three-dimensional CIE color space XYZ. Some interpretations are possible in xyY; others require the three-dimensional space XYZ or the related three-dimensional space CIELab.

The color-matching functions x̄(λ), ȳ(λ), and z̄(λ) have non-negative values and can be understood as weight factors. For a pure spectral color C of fixed wavelength λ, the three values can be read from the diagram, as shown in Figure 23. The color can then be mixed from the three standard primaries:

C = x̄(λ) X + ȳ(λ) Y + z̄(λ) Z

Generally we write

C = X X + Y Y + Z Z

and a given spectral color distribution P(λ) delivers the three coordinates X, Y, Z by the following integrals over the range from 380 nm to 700 nm (or 800 nm):

X = k ∫ P(λ) x̄(λ) dλ
Y = k ∫ P(λ) ȳ(λ) dλ
Z = k ∫ P(λ) z̄(λ) dλ

where k is a constant (680 lumens/watt for a CRT) and x̄(λ), ȳ(λ), and z̄(λ) are the color-matching functions.

Figure 22.

The CIE chromaticity diagram.

The chromaticity values x,y,z depend only on the hue or dominant wavelength and the saturation. They are independent of the luminance:

x = X / (X + Y + Z)
y = Y / (X + Y + Z)
z = Z / (X + Y + Z)

Obviously, x + y + z = 1. If XYZ and xyz are drawn in one diagram, all the xyz values lie on the triangle plane, projected along a line through the arbitrary color XYZ and the origin. This is a planar projection with the center of projection at the origin, as shown in Figure 24.

The vertical projection onto the xy-plane is the chromaticity diagram xyY (view direction). To reconstruct a color triple XYZ from the chromaticity values x and y, we need one additional piece of information, the luminance Y.

Figure 23.

The XYZ color-matching functions.

z = 1 − x − y
X = (x/y) Y
Z = (z/y) Y
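These relations are easy to express in code. A minimal sketch of the two conversions (the function names are mine; it assumes y > 0, since the reconstruction divides by y):

def xyz_to_xyY(X, Y, Z):
    s = X + Y + Z
    return X / s, Y / s, Y          # chromaticity (x, y) plus luminance Y

def xyY_to_xyz(x, y, Y):
    z = 1.0 - x - y                 # from x + y + z = 1
    return (x / y) * Y, Y, (z / y) * Y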

Figure 24.

Projection and chromaticity plane.

The interior and boundary of the diagram represent all visible chromaticity values. The boundary of the diagram represents the 100 percent pure spectral colors. The line joining the red and violet spectral points, called the purple line, is not part of the spectrum. The center point E of the diagram represents a standard white light, which approximates sunlight. Luminance values are not available in the chromaticity diagram because of normalization: colors with different luminance but the same chromaticity map to the same point. The chromaticity diagram is useful for the following:

  • Comparing color gamut for different sets of primaries.

  • Identifying complementary colors.

  • Determining the dominant wavelength and purity of a given color (Hoffmann, 2000).

7.4. Color gamut

Color gamuts are represented on the chromaticity diagram as straight-line segments or as polygons.

Each color model uses a different color representation. The term color gamut is used to denote the universe of colors that can be created or displayed by a given color system or technology. The colors that are perceivable by the human visual system fall within the boundaries of the horse-shoe shape derived from the CIE-XYZ color space diagram, while the RGB colors (that can be displayed on an RGB monitor) fall within the red triangle that connects the RGB primary dots.

Clearly, the full range of colors perceptible by humans is not covered by the RGB color model, and transformations from one space to another may create colors outside the target color gamut.

7.5. Color models

A color model is a method by which humans can specify, create and visualize color. A color model is a specification of a 3D color coordinate system and a visible subset in the coordinate system within which all colors in a particular color gamut lie. For example, the RGB color model is the unit cube subset of the 3D Cartesian coordinate system. There is more than one color model. The purpose of a color model is to allow convenient specification of colors within some color gamut. However, no color model can be used to specify all visible colors.

The choice of a color model is based on the application. Some equipment has limiting factors that dictate the size and type of color model that can be used; for example, the RGB color model is used with color CRT monitors, the YIQ color model is used with the broadcast TV color system, and the CMY color model is used with some color-printing devices. Unfortunately, none of these models is particularly easy to use compared with human perception. In terms of intuitive color concepts, it is easier to describe a color by shade, tint, and tone, or by hue, saturation, and brightness. Color models that attempt to describe colors in this way include HSV, HLS, CIEL*a*b*, CIEL*C*H*, and CIEL*u*v* (Shen, 2003; Fairchild, 1997; Findling, 1996).

7.5.1. RGB color model

Based on the tri-stimulus theory of the vision of human eyes, the RGB (short for “red, green, and blue”) color model describes colors as positive combinations of three appropriately defined red, green, and blue primaries in a Cartesian coordinate system; this is an example of an additive color model.

The RGB color space can be defined by mapping the red, green, and blue intensity components into the Cartesian coordinate system. The dynamic range of the intensity values is scaled from 0 to 255 counts, with each primary color represented by eight bits. The RGB color space (Figure 25) can display 16.77 million discrete colors. The red, green, and blue corners of the cube indicate 100 percent color saturation.

An imaginary line can be drawn from the origin of the cube to the farthest opposite corner. Along this line lie 256 achromatic colors representing the possible shades of gray. Black resides at the origin of the color cube, and white at the opposite corner. The RGB system enables the reproduction of any color within the color space by an additive mixture of the primary colors. For example, white is the sum of 255 counts each of red, green, and blue, usually written RGB(255, 255, 255).

7.5.2. The CMY & CMYK color models

Like the RGB color model, the CMY color space is a subspace of standard three-dimensional Cartesian space, taking the shape of a unit cube. Each axis represents one of the secondary colors cyan, magenta, and yellow. Unlike RGB, however, CMY is a subtractive color model: where the RGB origin represents pure black, the CMY origin represents pure white. In other words, increasing values of the CMY coordinates move towards darker colors, whereas increasing values of the RGB coordinates move towards lighter colors (see Figure 26). Conversion from RGB to CMY can be done using the simple formula below, where it is assumed that all color values have been normalized

[C M Y] = [1 1 1] − [R G B]

to the range [0, 1]. This equation reiterates the subtractive nature of the CMY model. Although equal parts of cyan, magenta, and yellow should produce black, it has been found that in printing applications this leads to muddy results.

Figure 25.

The RGB color model.

Thus in printing applications a fourth component of true black is added to create the CMYK color model. Four-color printing refers to using this CMYK model. As with the RGB model, point distances in the CMY space do not truly correspond to perceptual color differences.
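A common (though not the only) black-extraction rule takes K as the smallest of the three CMY components and rescales the remainder. The sketch below combines it with the RGB-to-CMY formula above; the rule and the function name are assumptions, not a statement of any particular printer's method:

def rgb_to_cmyk(r, g, b):
    # r, g, b normalized to [0, 1]
    c, m, y = 1.0 - r, 1.0 - g, 1.0 - b
    k = min(c, m, y)                 # black extraction
    if k == 1.0:                     # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k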

Figure 26.

The CMY color model.

7.5.3. YIQ color model

Developed by and for the television industry, the YIQ color system arose from the need to transmit broadcast imagery within a limited bandwidth with as little visual degradation as possible.

The YIQ model is used in U.S. commercial color television broadcasting and is closely related to color raster graphics; historically it suited monochrome as well as color CRT displays. The parameter Y is luminance, the same as in the XYZ model. Parameters I and Q carry chromaticity, with I containing orange-cyan hue information and Q containing green-magenta hue information. The YIQ color model exploits two peculiarities of human vision. The first is that the visual system is more sensitive to changes in luminance than to changes in chromaticity; the second is that for much of the color gamut, chromaticity can be specified adequately with one rather than two color dimensions. These properties are very convenient for the transmission of TV signals. An approximate linear transformation from a given set of RGB coordinates to the YIQ space is given by the following formula:

Y = 0.30 R + 0.59 G + 0.11 B
I = 0.60 R − 0.28 G − 0.32 B
Q = 0.21 R − 0.52 G + 0.31 B
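The matrix can be applied directly; the sketch below simply encodes the coefficients given above (rounded variants of these coefficients appear in different references):

import numpy as np

RGB_TO_YIQ = np.array([[0.30,  0.59,  0.11],
                       [0.60, -0.28, -0.32],
                       [0.21, -0.52,  0.31]])

def rgb_to_yiq(rgb):
    # rgb: three values in [0, 1]; returns (Y, I, Q)
    return RGB_TO_YIQ @ np.asarray(rgb, dtype=float)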

7.5.4. HSV & HSL color models

The RGB, CMY, and YIQ color models are hardware-oriented and do not provide an intuitive way to specify colors as humans perceive them. For a given color, people prefer to use tint, shade, and tone. The HSV (hue, saturation, value) and HSL (hue, saturation, lightness) color models differ from the previously explored RGB, CMY/K, and YIQ models in that both separate the overall intensity of a point from its chromaticity. The HSV and HSL models can be visualized in three dimensions as a downward-pointing hexcone. The HSV color model describes colors in a way similar to human vision, and it can be derived from the RGB cube: looking along the diagonal of the RGB cube from the origin to (1, 1, 1), a hexagonal cone is seen in the outline of the cube, as shown in Figure 27.

Figure 27.

Color hexcone for the HSV representation.

The boundary of the hexcone represents the various hues, saturation is measured along a horizontal axis, and value along the vertical axis through the centre of the hexcone. The hues vary around the wheel in a manner consistent with human perception.

Hue is represented by the angle around the vertical axis, starting with red at 0° and followed by yellow, green, cyan, blue, and magenta at 60° intervals. Any two colors 180° apart are complementary. Saturation (S) varies from 0 to 1 and is the fraction of the distance from the center to the edge of the hexcone; at S = 0 only the gray scale remains, while at S = 1 the colors are pure hues. Value (V) varies from 0 at the apex, which represents black, to 1 at the top of the hexcone, where colors have their maximum intensity.

The HSL color model is very similar to the HSV system. A double hexcone, with apexes at both pure white and pure black rather than just one at pure black, is used to visualize the subspace in three dimensions, as shown in Figure 28.

In HSL, the saturation component always runs from a fully saturated color to the corresponding gray value, whereas in HSV, with V at its maximum, saturation runs from a fully saturated color to white, which some may not consider intuitive. Additionally, in HSL the intensity component always spans the entire range from black through the chosen hue to white, while in HSV it only goes from black to the chosen hue. Because both HSV and HSL separate chromaticity from intensity, it is possible to process images based on intensity only, leaving the original color information untouched. For this reason, HSV and HSL have found widespread use in computer vision research.
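Python's standard library already provides both conversions in the colorsys module (note that HSL appears there under the name HLS, with the component order hue, lightness, saturation):

import colorsys

# colorsys expects r, g, b in [0, 1]
r, g, b = 1.0, 0.5, 0.0                  # an orange
h, s, v = colorsys.rgb_to_hsv(r, g, b)   # hue as a fraction of a full turn
hh, l, ss = colorsys.rgb_to_hls(r, g, b)
print(h * 360, s, v)                     # hue in degrees: 30.0
print(hh * 360, l, ss)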

Figure 28.

The HSL color model.

7.5.5. CIEL*a*b* color model

CIEL*a*b* (or CIELAB) is another color model that separates color information in ways that correspond to the human visual system. It is based on the CIE XYZ color model and was adopted by the CIE in 1976. CIEL*a*b* is an opponent color system (no color can involve both members of an opponent pair at the same time), based on the earlier (1942) system of Richard Hunter called L, a, b.

The CIELAB color measurement method, developed in 1976, offers advantages over the system developed in 1931: it is more perceptually uniform and is based on the widely accepted opponent theory of color.

CIEL*a*b* defines L* as lightness; a* and b* are the color axes describing hue and saturation. The color axes are based on the fact that a color cannot be both red and green, or both blue and yellow, because these colors oppose each other. The a* axis runs from red (+a) to green (−a) and the b* axis from yellow (+b) to blue (−b), as shown in Figure 29. Hue values do not have the same angular distribution in the CIEL*a*b* color model as in HSV. In fact, CIEL*a*b* is intended to mimic the logarithmic response of the human eye. CIEL*a*b* overcomes the limitations of the color gamut in the CIE chromaticity diagrams. For conversion to other color models, L* is defined from 0 (black) to 100 (white), a* from −100 (green) to 100 (red), and b* from −100 (blue) to 100 (yellow).

CIEL*C*H* has the same definition as CIEL*a*b* except that its values are expressed in a polar coordinate system: L* measures lightness, C* measures chroma (saturation), and H* measures hue. We will use this model instead of HSV, since CIEL*C*H* is based on CIEL*a*b* rather than on RGB, and hence is device-independent.
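The standard CIE formulas convert XYZ to L*a*b*, after which C* and H* are just the polar form of (a*, b*). A minimal sketch (the D65 white point shown is an assumption; any other standard illuminant can be substituted):

import math

def xyz_to_lab(X, Y, Z, Xn=95.047, Yn=100.0, Zn=108.883):
    # (Xn, Yn, Zn) is the reference white; D65, 2-degree observer here
    def f(t):
        d = 6.0 / 29.0
        return t ** (1.0 / 3.0) if t > d ** 3 else t / (3 * d * d) + 4.0 / 29.0
    fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def lab_to_lch(L, a, b):
    # chroma C* and hue angle H* (degrees) as the polar form of a*, b*
    return L, math.hypot(a, b), math.degrees(math.atan2(b, a)) % 360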

Figure 29.

The CIEL*a*b* color model.

The color models which are used in computer graphics have been traditionally designed for specific devices, such as RGB color model for CRT displays and CMY color model for printers. They are device dependent. Therefore, it becomes meaningless to compare the colors with different devices or the same device under different conditions.

CIEL*a*b* is a device-independent color model and is used for color management as the device-independent model of the ICC (International Color Consortium) device profiles (Shen, 2003; Fairchild, 1997; CIE, 1999; CIE, 1998; Snead, 2005; Findling, 1996; Braun et al., 1998).

7.5.6. sRGB color model

To avoid color differences between display systems, the IEC (International Electrotechnical Commission) introduced sRGB (IEC 61966-2-1) as a standard color model for the office, home, and web markets. The sRGB model serves the needs of PC- and Web-based color imaging systems and is based on the average performance of CRT displays. The sRGB solution is supported by the following observations:

  • Most computer displays are similar in their phosphor chromaticities (primaries) and transfer function.

  • The RGB color model is native to CRT displays, scanners and digital cameras, which are the devices with the highest performance constraints.

  • The RGB color model can be made device independent in a straightforward way. It is also possible to describe color gamuts that are large enough for all but a small number of applications.

The accurate handling of color characteristics of digital images is a non-trivial task because RGB signals generated by digital cameras are ‘device-dependent’, i.e. different cameras produce different RGB signals for the same scene. In addition, these signals will change over time as they are dependent on the camera settings and some of these may be scene dependent, such as the shutter speed and aperture diameter. In other words, each camera defines a custom device-dependent RGB color space for each picture taken. As a consequence, the term RGB (as in RGB-image) is clearly ill-defined and meaningless for anything other than trivial purposes. As measurements of colors and color differences in this paper are based on a standard colorimetric observer as defined by the CIE (Commission Internationale de l’Eclairage), the international standardizing body in the field of color science, it is not possible to make such measurements on RGB images if the relationship between the varying camera RGB color spaces and the colorimetric color spaces (color spaces based on said human observer) is not determined. However, there is a standard RGB color space (sRGB) that is fixed (device independent) and has a known relationship with the CIE colorimetric color spaces. Furthermore, sRGB should more or less display realistically on most modern display devices without extra manipulation or calibration.

The sRGB tristimulus values are linear combinations of the 1931 CIE XYZ values as measured on the faceplate of the display, which assumes the absence of any significant veiling glare. A linear portion of the transfer function of the dark end signal is integrated into the encoding specification to optimise encoding implementations.
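The IEC 61966-2-1 transfer function, including that linear dark-end segment, is short enough to quote in full; the sketch below applies to one channel with values in [0, 1]:

def linear_to_srgb(c):
    # the linear segment below 0.0031308 is the dark-end portion noted above
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1.0 / 2.4) - 0.055

def srgb_to_linear(c):
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4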

A calibrated, nonlinear standard RGB color space called sRGB has been proposed by Microsoft and Hewlett-Packard. Benefits of sRGB are easier portability of RGB color images (especially on the Internet) and faster computational performance than in the uniform CIE spaces. The white point of sRGB is D65 as in the ITU-R BT.709 standard. The phosphor chromaticities are also from BT.709. The sRGB color space is large enough to fit in most device RGB spaces.

The suggested CRT gamma of 2.2 is consistent with most monitors. The sRGB color space is computationally fast enough for interactive video and is becoming the de facto Internet standard (Shen, 2003; CIE, 1999; CIE, 1998).

References

  1. Alonge, A. F., & Adigun, Y. J. (1999). Some physical and aerodynamic properties of sorghum as related to cleaning. Paper presented at the 21st Annual Conference of the Nigerian Society of Agricultural Engineers (NSAE), Federal Polytechnic, Bauchi, Nigeria.
  2. Anderson, R., Daniel, C. H., Jacques, W., & Siome, G. (2010). Automatic fruit and vegetable classification from images. Computers and Electronics in Agriculture, 70, 96–104.
  3. Bassiou, N., & Kotropoulos, C. (2007). Color image histogram equalization by absolute discounting back-off. Computer Vision and Image Understanding, 107, 108–122.
  4. Bennedsen, B. S., Peterson, D. L., & Tabb, A. (2005). Identifying defects in images of rotating apples. Computers and Electronics in Agriculture, 48(2), 92–102.
  5. Bennedsen, B. S., & Peterson, D. L. (2005). Performance of a system for apple surface defect identification in near-infrared images. Biosystems Engineering, 90(4), 419–431.
  6. Bennedsen, B. S., & Peterson, D. L. (2004). Identification of apple stem and calyx using unsupervised feature extraction. Transactions of the ASAE, 47(3), 889–894.
  7. Braun, G. J., Ebner, F., & Fairchild, M. D. (1998). Color gamut mapping in a hue-linearized CIELAB color space. IS&T/SID 6th Color Imaging Conference, Scottsdale, 163–168.
  8. Brennan, J. G. (2006). Food Processing Handbook. Chapter 1: Postharvest Handling and Preparation of Foods for Processing. WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
  9. Brosnan, T., & Sun, D. W. (2004). Improving quality inspection of food products by computer vision: a review. Journal of Food Engineering, 61, 3–16.
  10. Castleman, K. (1996). Digital Image Processing. Englewood Cliffs, NJ: Prentice-Hall, 667 p.
  11. Chakespari, A. G., Rajabipour, A., & Mobli, H. (2010). Post harvest physical and nutritional properties of two apple varieties. Journal of Agricultural Science, 2(3).
  12. Chen, Y. R., Chao, K., & Kim, M. S. (2002). Machine vision technology for agricultural applications. Computers and Electronics in Agriculture, 33(2/3), 173–191.
  13. Cheng, X., Tao, Y., Chen, Y. R., & Luo, Y. (2003). NIR/MIR dual-sensor machine vision system for online apple stem-end/calyx recognition. Transactions of the ASAE, 46.
  14. CIE. (1998). Color Measurement and Management in Multimedia Systems and Equipment. Part 2-1: Default RGB Color Space: sRGB. http://www.colour.org/tc805/Docs/colorspace/.
  15. CIE. (1999). Multimedia Systems and Equipment Color Measurement and Management. Part 2-2: Color Management Extended RGB Color Space: sRGB64. http://www.colour.org/tc805/Docs/colorspace/.
  16. Colour4Free. (2010). H14 color vision theory. http://colour4free.org.uk/Books/H14ColourVisionTheoryV01.pdf.
  17. Crowe, T. G., & Delwiche, M. J. (1996). Real-time defect detection in fruit, part II: An algorithm and performance of a prototype system. Transactions of the ASAE, 39(6), 2309–2317.
  18. Dah, J. L., Robert, S., James, A., & Steve, M. C. (2008). Development of a machine vision system for automatic date grading using digital reflective near-infrared imaging. Journal of Food Engineering, 86, 388–398.
  19. ElMasry, G., Wang, N., & Vigneault, C. (2009). Detecting chilling injury in Red Delicious apple using hyperspectral imaging and neural networks. Postharvest Biology and Technology, 52, 1–8.
  20. Fairchild, M. D. (1997). Status of CIE Color Appearance Models. http://www.colour.org/tc801/MDF_AICpaper.pdf.
  21. Fernando, L. G., Gabriela, A. G., José, B., Nuria, A., & José, M. V. (2010). Automatic detection of skin defects in citrus fruits using a multivariate image analysis approach. Computers and Electronics in Agriculture, 71, 189–197.
  22. Findling, H. (1996). Fuzzy Algorithm for the Enhancement of Noise Degraded Images. Master of Science thesis, Florida Institute of Technology, Melbourne, Florida.
  23. Francis, F. J. (1980). Colour quality evaluation of horticultural crops. HortScience, 15(1), 14–15.
  24. Geyer, L. H., & Perry, R. F. (1982). Variation in detectability of multiple flaws with allowed inspection time. Human Factors, 24(3), 361–365.
  25. Guyer, D. G., Brown, E., Timm, R., Brook, M., & Marshall, E. (1994). Lighting systems for fruit and vegetable sorting. Extension Bulletin E-2559. East Lansing, Mich.: Cooperative Extension Service, Michigan State University.
  26. Heinemann, P. H., Hughes, R., Morrow, C. T., Sommer, H. J., Beelman, R. B., & Wuest, P. J. (1994). Grading of mushrooms using a machine vision system. Transactions of the ASAE, 37(5), 1671–1677.
  27. Heinemann, P. H., Varghese, Z. A., Morrow, C. T., Sommer, H. J. III, & Crassweller, R. M. (1995). Machine vision inspection of "Golden Delicious" apples. Applied Engineering in Agriculture, 11(6), 901–906.
  28. Hoffmann, G. (2000). CIE color space. http://www.fho-emden.de/~hoffmann/ciexyz29082000.pdf.
  29. Jiangsheng, G., & Yibin, Y. (2006). Fruit shape detection based on multi-scale level set framework. ASABE Paper 063088.
  30. Jun, Q., Akira, S., Sakae, S., & Naoshi, K. (2004). Mobile fruit grading robot: concept and prototype. American Society of Agricultural and Biological Engineers, St. Joseph, Michigan. www.asabe.org.
  31. Kang, S. P., East, A. R., & Trujillo, F. J. (2008). Colour vision system evaluation of bicolour fruit: A case study with 'B74' mango. Postharvest Biology and Technology, 49, 77–85.
  32. Kheiralipour, K., Tabatabaeefar, H., Mobli, A., Rafiee, S., Sharifi, M., Jafari, A., & Rajabipour, A. (2008). Some physical and hydrodynamic properties of two varieties of apple (Malus domestica Borkh L.). International Agrophysics, 22, 225–229.
  33. Kondo, N. (2009). Automation on fruit and vegetable grading system and food traceability. Trends in Food Science & Technology, doi:j.tifs.
  34. Kondo, N. (2010). Automation on fruit and vegetable grading system and food traceability. Trends in Food Science & Technology, 21, 145–152.
  35. Leemans, V., Magein, H., & Destain, M. F. (2002). On-line fruit grading according to their external quality using machine vision. Biosystems Engineering, 83(4), 397–404.
  36. Mathworks. (2005). Image Processing Toolbox for Use with Matlab. Natick, USA: The Mathworks Inc.
  37. Mendoza, F., & Aguilera, J. M. (2004). Application of image analysis for classification of ripening bananas. Journal of Food Science, 69(9), E471–E477.
  38. Mendoza, F., Dejmek, P., & Aguilera, J. M. (2006). Calibrated color measurements of agricultural foods using image analysis. Postharvest Biology and Technology, 41, 285–295.
  39. Mery, D., & Pedreschi, F. (2005). Segmentation of color food images using a robust algorithm. Journal of Food Engineering, 66, 353–360.
  40. Meyers, J. B., Prussia, S. E., Thai, C. N., Sadosky, T. L., & Campbell, D. T. (1990). Visual inspection of agricultural products moving along sorting conveyors. Transactions of the ASAE, 33(2), 367–372.
  41. Miller, W. M. (1995). Optical defect analysis of Florida citrus. Applied Engineering in Agriculture, 11(6), 855–860.
  42. Mirasheh, R. (2006). Designing and Making Procedure for a Machine Determining Olive Image Dimensions. Master of Science thesis, Tehran University.
  43. Mohsenin, N. N. (1970). Physical Properties of Plant and Animal Materials. Vol. 1: Structure, Physical Characteristics and Mechanical Properties. Gordon and Breach Science Publishers, New York.
  44. Mohsenin, N. N. (1986). Physical Properties of Plant and Animal Materials. Gordon and Breach Science Publishers, New York.
  45. Muir, A. Y., Shirlaw, I. D. G., & McKae, D. C. (1989). Machine vision using spectral imaging techniques. Agricultural Engineering, 44(3), 79–81.
  46. Mummert, C. N. (2004). The Development of a Machine Vision System to Measure the Shape of a Sweetpotato Root. Master of Science thesis, Faculty of Agriculture, North Carolina State University.
  47. Naoshi, K., Takahisa, N., Yoshihid, M., Peter, L., Mitsutaka, K., Makoto, K., Paolo, D. F., & Yuichi, O. (2008). A double image acquisition system with visible and UV LEDs for citrus fruit. American Society of Agricultural and Biological Engineers, ASABE.
  48. Narendra, V. G., & Hareesh, K. S. (2010). Prospects of computer vision automated grading and sorting systems in agricultural and food products for quality evaluation. International Journal of Computer Applications, 1(4).
  49. Nielsen, H. M., & Paul, W. (1998). Modeling image processing parameters and consumer aspects for tomato quality grading. In: Mathematical and Control Applications in Agriculture and Horticulture. Proceedings of the Third IFAC Workshop, Pergamon/Elsevier, Oxford, UK.
  50. Onder, K., Aziz, O., & Ibrahim, A. (2006). Physical properties of cactus pear (Opuntia ficus indica L.) grown wild in Turkey. Journal of Food Engineering, 73, 198–202.
  51. Patrick, M. M., Yud, R. C., Moon, S. K., & Diane, E. C. (2004). Development of hyperspectral imaging technique for the detection of apple surface defects and contaminations. Journal of Food Engineering, 61, 67–81.
  52. Paulus, I., De Busscher, R., & Schrevens, E. (1997). Use of image analysis to investigate human quality classification of apples. Journal of Agricultural Engineering Research, 68, 341–353.
  53. Peschel, S., Franke, R., Schreiber, L., & Knoche, M. (2007). Composition of cuticle of developing sweet cherry fruit. Phytochemistry, 68, 1017–1025.
  54. Rehkugler, G. E., & Throop, J. A. (1976). Optical-mechanical bruised apple sorters. In: J. J. Gaffney (ed.), Quality Detection in Foods, 185–188. ASAE, St. Joseph, MI.
  55. Rigney, M. P., Brusewitz, G. H., & Kranzler, G. A. (1992). Asparagus defect inspection with machine vision. Transactions of the ASAE, 35(6), 1873–1878.
  56. Sahin, S., & Sumnu, G. S. (2006). Physical Properties of Foods. Library of Congress Control Number 2005937128. Springer Science+Business Media, LLC.
  57. Shen, Z. (2003). Color Differentiation in Digital Images. Master of Science thesis, School of Computer Science and Mathematics, Victoria University of Technology, Australia.
  58. Snead, M. C. (2005). A Method of Content-Based Image Retrieval for the Generation of Image Mosaics. Master of Science thesis, School of Electrical Engineering and Computer Science, University of Central Florida.
  59. Stephenson, K. Q. (1976). Color sorting system for tomatoes. ASAE, 199–201.
  60. Sun, D. W. (2008). Computer Vision Technology for Food Quality Evaluation. Food Science and Technology, International Series. ISBN 978-0-12373-642-0.
  61. Sun, D. W., & Brosnan, T. (2003). Improving quality inspection of food products by computer vision: a review. Journal of Food Engineering, 61, 3–16.
  62. Sunil, K. M., Paul, R. W., Ning, W., Timothy, B., & Niels, O. M. (2009). Adaptive thresholding of pecan X-ray images using water flow models and feature extraction. ASABE.
  63. Tao, Y. (1996a). Methods and apparatus for sorting objects including stable color transformation. U.S. Patent 5.
  64. Tao, Y. (1996b). Spherical transform of fruit images for on-line defect extraction of mass objects. Optical Engineering, 35(2), 344–350.
  65. Tao, Y. (1998). Defective object inspection and separation system. U.S. Patent 5.
  66. Throop, J. A., Aneshansley, D. J., Upchurch, B. L., & Anger, B. (2001). Apple orientation on two conveyors: performance and predictability based on fruit shape characteristics. Transactions of the ASAE, 44(1), 99–109.
  67. Throop, J. A., Aneshansley, D. J., & Upchurch, B. L. (1993). Investigation of texture analysis features to apple bruises. ASAE Paper 93-3527.
  68. Upchurch, B. L., Affeldt, H. A., Hruschka, W. R., & Throop, J. A. (1991). Optical detection of bruises and early frost damage on apples. Transactions of the ASAE, 34(3), 1004–1009.
  69. Wen, Z., & Tao, Y. (1997). Adaptive spherical transform of fruit images for high-speed defect detection. ASAE Paper 97-3076.
  70. Xiao-bo, Z., Jie-wen, Z., Yanxiao, L., & Holmes, M. (2010). In-line detection of apple defects using three color cameras system. Computers and Electronics in Agriculture, 70, 129–134.
  71. Yang, Q. (1993). Finding stalk and calyx of apples using structured lighting. Computers and Electronics in Agriculture, 8(1), 31–42.
