Machine Vision Measurement Technology and Agricultural Applications

Many of the techniques of digital image processing, or digital picture processing as it was often called, were developed in the 1960s at the Jet Propulsion Laboratory, Massachusetts Institute of Technology, Bell Laboratories, University of Maryland, and a few other research facilities, with application to satellite imagery, wire-photo standards conversion, medical imaging, videophone, character recognition, and photograph enhancement. The cost of processing was fairly high, however, with the computing equipment of that era. That changed in the 1970s, when digital image processing proliferated as cheaper computers and dedicated hardware became available. Images could then be processed in real time. As general-purpose computers became faster, they started to take over the role of dedicated hardware for all but the most specialized and computation-intensive operations (Anonymous, 2011b). With the fast computers and signal processors available in the 2000s, digital image processing became the most common form of image processing, and it is generally used because it is not only the most versatile method but also the cheapest. Digital image processing technology for medical applications was inducted into the Space Foundation Space Technology Hall of Fame in 1994.

can be processed by a computer. Generally, the goal is to obtain a signal from an object in such a form that we know where it is (geometry) and what it is or what properties it has (Jähne & Haußecker, 2000). The human visual system is limited to a very narrow portion of the spectrum of electromagnetic radiation, called visible light. Imaging processes, however, can be sensitive to other wavelengths, and additional information might be hidden in the spectral distribution of the radiation. Using different types of radiation allows taking images from different depths or of different object properties (Nixon & Aguado, 2002).
The measurement of images is often a principal method for acquiring scientific data and generally requires that features or structure be well defined, either by edges or unique colour, texture, or some combination of these factors. The types of measurements that can be performed on entire scenes or on individual features are important in determining the appropriate processing steps (Russ, 2006).

Acquiring images

Imaging photometers and colourimeters
Colourimetry is the measurement of the wavelength and intensity of electromagnetic radiation in the visible region of the spectrum. Colourimetry can help find the concentration of substances, since the amount and colour of the light that is absorbed or transmitted depend on properties of the solution, including the concentration of particles in it. A colourimeter is an instrument that compares the amount of light getting through a solution with the amount that can get through a sample of pure solvent (Fig. 1.). A colourimeter contains a photocell that detects the amount of light passing through the solution under investigation. The more light that hits the photocell, the higher the current it produces, and the lower the indicated absorbance. A colourimeter takes three wideband (RGB) readings along the visible spectrum to obtain a rough estimate of a colour sample. Pigments absorb light at different wavelengths.
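The relationship between the photocell reading and concentration can be sketched with the Beer-Lambert law; the readings, the molar absorptivity and the path length below are illustrative values, not measurements from this chapter:

```python
import math

def absorbance(i_solvent, i_sample):
    """Absorbance from photocell readings: A = log10(I0 / I)."""
    return math.log10(i_solvent / i_sample)

def concentration(a, epsilon, path_cm):
    """Beer-Lambert law A = epsilon * l * c, solved for c (mol/L)."""
    return a / (epsilon * path_cm)

# Illustrative readings (arbitrary units): pure solvent vs. sample.
a = absorbance(100.0, 25.0)            # A = log10(4) ~= 0.602
c = concentration(a, epsilon=1200.0, path_cm=1.0)
print(round(a, 3), round(c, 6))
```

In a real colourimeter the instrument is first zeroed on the pure solvent, exactly as the calibration step in the text describes.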
The use of imaging photometers and colourimeters for fast capture of photometric and colourimetric quantities with spatial resolution has attracted increasing interest. Compared with measuring instruments without spatial resolution, such as spectrometers, this technology offers the following advantages:
- Substantial time savings through the simultaneous capture of a large number of measurements in a single image,
- Image-processing functions integrated in the software permit automated methods of analysis, e.g. calculation of homogeneity or contrast.
However, the absolute measuring precision of imaging photometers and colourimeters is not as high as that of spectroradiometers. This is because their operational principle uses a CCD sensor in combination with optical filters, which can be adapted to the sensitivity of the human eye with only limited precision.
Imaging photometers and colourimeters are the instruments of choice for:
- Measurement of the luminance and colour distribution of industrial products,
- Measurement of the homogeneity and contrast of products,
- Analysis of luminous intensity (Beyaz, 2009a).

Camera image
In recent years, a massive research and development effort has been witnessed in colour imaging technologies, in both industry and ordinary life. Colour is commonly used in television, computer displays, cinema motion pictures, print and photographs. In all these application areas, the perception of colour is paramount for the correct understanding and dissemination of visual information. Recent technological advances have reduced the complexity and the cost of colour devices, such as monitors, printers, scanners and copiers, thus allowing their use in the office and home environment. However, it is the extreme and still increasing popularity of consumer, single-sensor digital cameras that today boosts the research activities in the field of digital colour image acquisition, processing and storage. Single-sensor camera image processing methods are becoming more important due to the development and proliferation of emerging digital camera-based applications and commercial devices, such as imaging-enabled mobile phones and personal digital assistants, sensor networks, and surveillance and automotive apparatus (Lukac & Plataniotis, 2007).

Digital camera architectures
Digital colour cameras capture colour images of real-life scenes electronically using an image sensor, usually a charge-coupled device (CCD) or complementary metal oxide semiconductor (CMOS) sensor, instead of the film used in conventional analog cameras. Therefore, captured photos can be immediately viewed by the user on the digital camera's display, and immediately stored, processed, or transmitted. Without any doubt, this is one of the most attractive features of digital imaging (Lukac & Plataniotis, 2007).

Single-sensor device
This architecture reduces cost by placing a colour filter array (CFA), which is a mosaic of colour filters, on top of a conventional single CCD/CMOS image sensor to capture all three primary (RGB) colours at the same time. Each sensor cell has its own spectrally selective filter and thus stores only a single measurement. Therefore, the CFA image constitutes a mosaic-like grayscale image with only one colour element available at each pixel location. The two missing colours must be determined from the adjacent pixels using a digital processing solution called demosaicking. Such an architecture represents the most cost-effective method currently in use for colour imaging, and for this reason it is almost universally utilized in consumer-grade digital cameras (Lukac & Plataniotis, 2007).
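The demosaicking step can be illustrated with a minimal bilinear sketch; the patch values are invented, and a production pipeline would of course process every pixel and all three channels:

```python
# Minimal bilinear demosaicking sketch for one missing sample in a
# Bayer mosaic (illustrative only, not a production pipeline).

def green_at(cfa, r, c):
    """Estimate green at a red/blue site by averaging its 4 neighbours."""
    neighbours = [cfa[r-1][c], cfa[r+1][c], cfa[r][c-1], cfa[r][c+1]]
    return sum(neighbours) / len(neighbours)

# 3x3 patch of raw sensor values; the centre (1, 1) is a red site,
# so its four edge neighbours are all green samples.
patch = [
    [200,  80, 210],
    [ 90, 150,  70],
    [190, 100, 205],
]
print(green_at(patch, 1, 1))  # (80 + 100 + 90 + 70) / 4 = 85.0
```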

Three-sensor device
This architecture acquires colour information using a beam splitter to separate incoming light into three optical paths. Each path has its own red, green or blue colour filter with a different spectral transmittance, and its own sensor for sampling the filtered light. Because the camera colour image is obtained by registering the signals from three sensors, mechanical and optical alignment is necessary to maintain correspondence among the images obtained from the different channels. Besides the difficulty of maintaining image registration, the high cost of the sensors and the use of a beam splitter make the three-sensor architecture available only in some professional digital cameras (Lukac & Plataniotis, 2007).

Real-time imaging systems
Real-time systems are those systems in which there is urgency to the processing involved. This urgency is formally represented by a deadline. Because this definition is very broad, the case can be made that every system is real-time. Therefore, the definition is usually specialized. A "firm" real-time system might involve a video display system, for example, one that superimposes commercial logos. Here, a few missed deadlines might result in some tolerable flickering, but too many missed deadlines would produce an unacceptable broadcast quality. Finally, a "soft" real-time system might involve the digital processing of photographic images. Here, only quality of performance is at issue. One of the most common misunderstandings of real-time systems is that their design simply involves improving the performance of the underlying computer hardware or image processing algorithm. While this is probably the case for the mentioned display or photographic processing systems, this is not necessarily true for a target tracking system.
Here, guaranteeing that image processing deadlines are never missed is more important than the average time to process and render one frame. The reason that one cannot make performance guarantees, or even reliably measure performance, in most real-time systems is that the accompanying scheduling analysis problems are almost always computationally complex. Therefore, in order to make performance guarantees, it is imperative that bounded processing times be known for all functionality. This procedure involves guaranteeing deadline satisfaction through the analysis of various aspects of code execution and operating system interaction at the time the system is designed, not after the fact, when trial and error is the only technique available. This process is called a schedulability analysis. The first step in performing any kind of schedulability analysis is to determine, measure or otherwise estimate the execution time of specific code units using logic analyzers, the system clock, instruction counting, and similar techniques.
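As one concrete (and deliberately simple) instance of such an analysis, the classic Liu-Layland utilisation test for rate-monotonic scheduling can be sketched as follows; the task set is hypothetical, and real systems often need more elaborate response-time analysis:

```python
def rm_schedulable(tasks):
    """Liu-Layland sufficient test for rate-monotonic scheduling.

    tasks: list of (worst_case_execution_time, period) pairs.
    Returns True if total utilisation <= n * (2**(1/n) - 1).
    """
    n = len(tasks)
    utilisation = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilisation <= bound

# Hypothetical set: frame processing (10 ms every 33 ms) plus logging.
tasks = [(10.0, 33.0), (5.0, 100.0)]
print(rm_schedulable(tasks))  # utilisation ~0.353 <= bound ~0.828
```

The test is only sufficient, not necessary: a task set that fails the bound may still be schedulable, which is why exact analyses exist.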

Issues that must be considered in real-time imaging systems

Hardware and display issues
An understanding of the hardware support for colour imaging graphics is fundamental to the analysis of the real-time performance of a system. Some specialized hardware for real-time imaging applications involves high-performance computers with structural support for complex instruction sets and imaging coprocessors. Inexpensive pixel processors are also available, and scalable structures are increasingly being used for real-time imaging applications. But building systems with highly specialized processors is not always easy, particularly because of poor tool support. Therefore, many commonly deployed colour imaging systems use consumer-grade personal computers. There are many architectural issues relating to real-time performance, such as internal/external memory bus width, memory access times, speed of secondary storage devices, display hardware issues, and colour representation and storage, to name a few. Collectively, these design issues involve three trade-off problems: schedulability versus functionality, performance versus resolution, and performance versus storage requirements. Real-time design of imaging systems, then, involves making the necessary decisions that trade one quality for another, for example, speed versus resolution. One of the main performance challenges in designing real-time image processing systems is the high computational cost of image manipulation. A common deadline found in many processing systems involves screen processing and update, which must be completed at least 30 times per second for the human eye to perceive continuous motion.
Because this processing may involve more than a million pixels, with each colour pixel needing one or two words of storage, the computational load can be staggering. For the purposes of meeting this deadline, then, the real-time systems engineer can choose to forgo display resolution or algorithmic accuracy. Finding improved algorithms or better hardware will also help meet the deadlines without sacrifice, if the algorithm and hardware behavior are bounded, which, as mentioned, is not always the case. In this section, however, we confine our discussion to two important hardware issues (Lukac & Plataniotis, 2007).
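The 30 frames-per-second deadline translates into a hard per-pixel time budget, which a quick back-of-the-envelope calculation makes concrete (the frame size is an assumed round figure):

```python
# Back-of-the-envelope pixel budget for a 30 Hz display deadline.
fps = 30
frame_budget_s = 1.0 / fps            # ~33.3 ms available per frame
pixels = 1024 * 1024                  # assumed ~1-megapixel frame
ns_per_pixel = frame_budget_s / pixels * 1e9
print(round(frame_budget_s * 1000, 1), "ms/frame,",
      round(ns_per_pixel, 1), "ns/pixel")
```

Roughly 32 ns per pixel leaves room for only a handful of operations per pixel on a general-purpose processor, which is exactly why the resolution/accuracy trade-offs above arise.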

Colour representation and real-time performance
Our interest is in the appropriate representation of colour in the physical hardware because, as previously noted, this has real-time performance implications. The colour buffer may be implemented using one or more of the following storage formats:
- One byte per pixel (indexed or pseudo-colour), which allows 2^8 = 256 colours,
- Two bytes per pixel (high colour), which, using 16 bits, allows 2^16 = 65,536 colours,
- Three bytes per pixel (true or RGB colour), which yields approximately 16.8 million colours.
RGB is often considered the standard in many programming languages, and it is used in many important colour image data formats, such as JPEG and TIFF. True colour uses 24 bits of RGB colour, one byte per colour channel. A 32-bit representation is often used to enhance performance, because various hardware commands are optimized for groups of 4 bytes or more (Lukac & Plataniotis, 2007).
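The 32-bit layout mentioned above can be illustrated with a small packing sketch; the 0x00RRGGBB word layout used here is one common convention, not the only one:

```python
def pack_rgb(r, g, b):
    """Pack three 8-bit channels into one 32-bit word (0x00RRGGBB)."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(word):
    """Recover the (r, g, b) channels from a packed 32-bit word."""
    return ((word >> 16) & 0xFF, (word >> 8) & 0xFF, word & 0xFF)

w = pack_rgb(255, 128, 0)
print(hex(w), unpack_rgb(w))  # 0xff8000 (255, 128, 0)
```

The unused high byte is what makes each pixel a naturally aligned 4-byte unit; some systems instead use it for an alpha channel.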

Language issues
Modern languages for real-time imaging must provide an easy interface to hardware devices and provide a framework for maintainability, portability and reliability, among many other features. Many programming languages are commonly used to implement real-time colour imaging systems, including C, C++, C#, Java, Visual Basic, Fortran, assembly language, and even BASIC. Poor coding style is frequently the source of performance deterioration in real-time imaging systems. In many cases, the negative effects are due to performance penalties associated with object composition, inheritance, and polymorphism in object-oriented languages. But object-oriented languages are rapidly displacing lower-level languages like C and assembly language in real-time colour imaging systems, and this is probably a good thing because of the accompanying benefits.
Understanding the performance impact of various language features, particularly as they relate to image storage and manipulation, is essential to using the most appropriate construct for a particular situation. There is no clear answer, and experimentation with the language compiler in conjunction with performance measurement tools can be helpful in obtaining the most efficient implementations.
The following list summarizes key issues when implementing real-time imaging systems in a high-level language:
- Use appropriate coding standards to ensure uniformity and clarity,
- Refactor the code continuously (that is, aggressively improve its structure) with an eye to performance improvement,
- Use performance measurement tools continuously to assess the impact of changes to the hardware and software,
- Carefully document the code to enable future developers to make structural and performance enhancements,
- Adopt an appropriate life-cycle testing discipline.
Finally, be wary of code that evolved from non-object-oriented languages such as C into object-oriented versions in C++ or Java. Frequently, these conversions are made hastily and incorporate the worst of both the object-oriented and non-object-oriented paradigms simultaneously.
Java is an object-oriented language, with a syntax that is similar to C++ and C#, and to a lesser extent, C. Besides, modern object-oriented languages such as C++ and C# have quite a lot in common with Java (Lukac & Plataniotis, 2007).

AdOculos
This software provides PC-based image processing without the need for extensive programming knowledge. The complete C source code of its DLLs is part of the standard pack. Point, local and global operations, morphological operations, texture analysis, image-sequence procedures, histograms, colour transformations, automatic counting, interactive measuring and pattern recognition can be done by using this software.

IMAQ vision
Adds machine vision and image processing functionality to LabVIEW and ActiveX containers (National Instruments).

Optimas
It is a complete commercial image analysis program for Windows, used in biological and industrial measurement environments. Optimas implements hundreds of measurement, image processing, and image management operations, all available from the graphical user interface.

GLOBAL LAB image
It is an easy-to-use graphical software application for creating scientific and general-purpose imaging applications. The point-and-click scripting lets you develop applications quickly: simply configure the tools you want to use and add them to your script to create powerful imaging applications without writing any code.

Matlab Image Processing Toolbox
Image Processing Toolbox™ provides a comprehensive set of reference-standard algorithms and graphical tools for image processing, analysis, visualization, and algorithm development. You can perform image enhancement, image deblurring, feature detection, noise reduction, image segmentation, spatial transformations, and image registration. Many functions in the toolbox are multithreaded to take advantage of multicore and multiprocessor computers.

ImageJ
ImageJ is a public domain Java image processing program inspired by NIH Image. It runs, either as an online applet or as a downloadable application, on any computer with a Java 1.4 or later virtual machine. Downloadable distributions are available for Windows, Mac OS, Mac OS X and Linux.

Myriad
Myriad image processing software was also used for comparing images and measuring the surface area of imaged objects. This program has two steps for measuring images. The first step is calibration of the program; for this purpose the software needs a calibration ruler or a calibration plate. The second step is correct selection of the images.

Surface area measurement
Surface areas are described as the boundaries between structures. Surface area measurement of different parts of a plant, such as the leaf projection area and the fruit projection area, is important for process engineering of agricultural products, so scientists use different methods for measuring the areas of plant parts, one of which is image processing and analysis. For surface area measurement we either need to slice products into pieces (the sum of these piece areas gives the total surface of the agricultural product) or select the boundaries of the agricultural products in the image (Fig. 2.). Surface area measurements are used for estimating product diameter, weight, fruit volume and pesticide usage.
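A minimal sketch of calibrated projection-area measurement might look as follows; the image values, the threshold and the calibration factor are invented for illustration:

```python
# Projection-area sketch: threshold a greyscale image, count object
# pixels, and convert to mm^2 using a ruler-based calibration factor.

def projection_area(image, threshold, mm2_per_pixel):
    """Area of all pixels darker than threshold, in mm^2."""
    object_pixels = sum(1 for row in image for v in row if v < threshold)
    return object_pixels * mm2_per_pixel

# Toy 4x4 image: a dark object (values < 100) on a bright background.
image = [
    [250,  40,  50, 250],
    [240,  30,  20, 245],
    [250,  60,  45, 250],
    [255, 250, 240, 255],
]
# Assumed calibration: a known ruler length gave 0.25 mm^2 per pixel.
print(projection_area(image, threshold=100, mm2_per_pixel=0.25))  # 1.5
```

The calibration factor plays the same role as the calibration ruler or plate described for the Myriad software above.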

Length measurement
Length measurement is usually applied to objects that have one dimension much longer than the others. For small objects such as seeds, length measurement can be done by using image analysis and processing techniques. Photographic enlargement of the dimensions helps us to measure the length of agricultural products easily (Fig. 3.).

Determining number
In the area of machine vision, blob detection refers to visual modules that are aimed at detecting points and/or regions in the image that are either brighter or darker than their surroundings (Fig. 4.). There are two main classes of blob detectors: (i) differential methods based on derivative expressions and (ii) methods based on local extrema in the intensity landscape. In the more recent terminology used in the field, these operators can also be referred to as interest point operators or, alternatively, interest region operators.

Fig. 4. Counting the number of droplets by using ImageJ Software.

The determination of the number of objects is also important for agricultural applications. We can determine specific gravity, thousand-grain weight, the number of droplets in spraying, and so on. For example, the number of droplets on a leaf in a pesticide application is an important parameter for determining optimum pesticide usage on agricultural products, because pesticides affect the environment, and using them in high doses causes air, soil and water pollution.
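A counter of the second class of blob detectors (grouping of local foreground regions) can be sketched with a simple flood fill; the binary droplet mask below is illustrative:

```python
# Droplet-counting sketch: label 4-connected foreground blobs in a
# binary image with a stack-based flood fill.

def count_blobs(mask):
    """Number of 4-connected groups of 1-pixels in a binary image."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                blobs += 1
                stack = [(r, c)]
                while stack:  # flood-fill one droplet
                    y, x = stack.pop()
                    if (0 <= y < rows and 0 <= x < cols
                            and mask[y][x] and not seen[y][x]):
                        seen[y][x] = True
                        stack.extend([(y+1, x), (y-1, x),
                                      (y, x+1), (y, x-1)])
    return blobs

mask = [
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 1, 1],
]
print(count_blobs(mask))  # 3 separate droplets
```

Tools such as ImageJ's particle analysis perform essentially this labelling step, followed by per-blob measurements.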

Colour measurements
RGB and CIELab are widely used as standard spaces for comparing colours. A set of primary colours, such as the sRGB primaries, defines a colour triangle; only colours within this triangle can be reproduced by mixing the primary colours. Colours outside the colour triangle are therefore shown here as gray. The primaries and the D65 white point of sRGB are shown in Fig. 5. (Anonymous, 2011c). A Lab colour space is a colour-opponent space with dimension L for lightness and a and b for the colour-opponent dimensions, based on nonlinearly compressed colour space coordinates (Anonymous, 2011c). Fig. 6. shows the CIE 1976 (L*, a*, b*) colour space with only the colours that fit within the sRGB gamut (and can therefore be displayed on a typical computer display); each axis of each square ranges from -128 to 128 (Anonymous, 2011c).
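The standard sRGB-to-CIELAB conversion (D65 white point) that underlies such comparisons can be sketched as follows, using the usual published matrix and formulas:

```python
import math

def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to CIE L*a*b* (D65 white), standard formulas."""
    # 1. Undo the sRGB gamma to get linear RGB in [0, 1].
    def linear(u):
        u /= 255.0
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4
    rl, gl, bl = linear(r), linear(g), linear(b)
    # 2. Linear RGB -> CIE XYZ (sRGB matrix, D65).
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # 3. XYZ -> L*a*b* relative to the D65 white point.
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

L, a, b = srgb_to_lab(255, 255, 255)  # white -> L* = 100, a* = b* = 0
print(round(L), round(a), round(b))
```

This is the same L*, a*, b* system that instrument-based colour measurements in the applications below report directly.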

Determining location
The location of features is also needed to identify the results. There are several definitions of location for the pixels in an ordinary image. For instance, the x,y coordinates of the midpoint of a feature can be determined simply from the minimum and maximum limits of its pixels. Normally, pixel addressing starts from the top left corner of the array (Fig. 7.). A global coordinate system gives us information about real pixel dimension values.
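The midpoint-from-limits rule described above is straightforward to sketch; the pixel list is illustrative:

```python
def feature_midpoint(pixels):
    """Midpoint of a feature from the min/max limits of its pixels."""
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    return ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)

# Pixel (x, y) addresses, origin at the top-left corner of the array.
feature = [(10, 4), (11, 4), (12, 5), (10, 6)]
print(feature_midpoint(feature))  # (11.0, 5.0)
```

Multiplying such pixel coordinates by a calibration factor maps them into the global coordinate system of real dimensions.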

Neighbor relationships
Local coordinates and the individual features of pixels are important for some applications, and neighbour pairs are the easiest way to identify differences between products. The histogram of the distribution of these features gives the answers. Fig. 8. shows the distribution of features in a measurement of rice grains using Matlab Software.

Perimeter
The perimeter of a feature is a well-defined and familiar geometrical parameter. It yields a numerical value that genuinely describes the object, and this value is easy to determine in perimeter measurement of agricultural products. Some systems estimate the length of the boundary around the object by counting pixels, while others use a picture-selection method, based on manually selecting the objects in the images, to obtain the perimeter value (Fig. 9.).
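The pixel-counting approach can be sketched by counting exposed pixel edges; the binary mask is illustrative, and real systems usually apply a correction factor for diagonal boundaries:

```python
# Perimeter estimate by counting exposed pixel edges: every side of a
# foreground pixel that borders background (or the image edge) adds
# one pixel-length to the boundary.

def perimeter(mask):
    rows, cols = len(mask), len(mask[0])
    def background(r, c):
        return not (0 <= r < rows and 0 <= c < cols and mask[r][c])
    total = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c]:
                total += sum(background(r + dr, c + dc)
                             for dr, dc in ((1, 0), (-1, 0),
                                            (0, 1), (0, -1)))
    return total

square = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(perimeter(square))  # a 2x2 square exposes 8 pixel edges
```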

Describing shape
Shape and size are inseparable in a physical object, and both are generally necessary if the object is to be satisfactorily described. Further, in defining the shape, some dimensional parameters of the object must be measured. Seeds, grains, fruits and vegetables are irregular in shape, and because of the complexity of their forms, a complete description theoretically requires an infinite number of measurements (Fig. 10.). The number of measurements required increases with the irregularity of the shape (Mohsenin, 1970).
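One widely used dimensionless shape descriptor combines the area and perimeter measurements discussed above; this roundness measure is a standard formula offered here as an example, not one prescribed by Mohsenin:

```python
import math

# Roundness = 4*pi*A / P^2 equals 1 for a circle and decreases as
# the outline becomes more irregular.

def roundness(area, perimeter):
    return 4 * math.pi * area / perimeter ** 2

r = 5.0
circle = roundness(math.pi * r ** 2, 2 * math.pi * r)  # exactly 1.0
square = roundness(4.0 * 4.0, 4 * 4.0)                 # pi/4 ~= 0.785
print(round(circle, 3), round(square, 3))
```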

Three-dimensional measurements
3-D metrics were first developed for simplification purposes, but three-dimensional image assessment raises new challenges (Fig. 11.). There are measurement distortions between an original 3-D surface and its deformed version. The other important challenge is the analysis of 3-D views on 2-D screens. Image measurements still require identifying the pixels that are connected to each other. In two dimensions, it is necessary to determine differences between touching pixels.

Fig. 11. 3D measurement system developed jointly with the National Institute of Advanced Industrial Science and Technology (Anonymous, 2011a).
Existing 3D measurement techniques are classified into two major types: active and passive. In general, active measurement employs structured illumination (structure projection, phase shift, moire topography, etc.) or laser scanning, which is not necessarily desirable in many applications. On the other hand, passive 3D measurement techniques based on stereo vision have the advantages of simplicity and applicability, since such techniques require only simple instrumentation.
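The core geometry of passive stereo measurement can be sketched in one line: by similar triangles, depth Z = fB/d for focal length f, camera baseline B and pixel disparity d. The rig parameters below are hypothetical:

```python
# Passive stereo sketch: depth from disparity via similar triangles.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Z = f * B / d for a rectified stereo pair."""
    return f_px * baseline_m / disparity_px

# Hypothetical rig: f = 800 px, cameras 0.1 m apart, 16 px disparity.
print(depth_from_disparity(800.0, 0.1, 16.0))  # 5.0 m
```

The hard part in practice is not this formula but finding the disparity, i.e. matching corresponding pixels between the two views.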

Image measurement and computation methods in applications
A digital camera was used for determining dimensions, and the digital images were evaluated with the Myriad image processing software. The measuring and verification of digital images can be done reliably with the help of this program, and digital images can also be compared easily (Fig. 12.). This program has two steps for measuring images. The first step is calibration of the program; for this purpose the software needs a calibration ruler or a calibration plate. Hence, millimetric paper was used for calibrating and testing the software (Fig. 13.). For the calibration, a calibration plate with known dimensions was used while taking the digital images. The second step is correct selection of the images (Beyaz, 2009a). Colour measurements were made with a Minolta CR200 colourimeter, which is based on the L*a*b* measurement system (Fig. 14.). The average of three sample points was used as the colour value (Fig. 15.). The Minolta CR200 colourimeter was calibrated with a white reflective plate.

Volume determination of Kahramanmaras red pepper (Capsicum annuum L.) by using image analysis technique
The size of an agricultural product is an important parameter for determining fruit growth and quality. It can be used to determine the optimum harvest time as a maturity index. In this study, the image analysis method was tested on Kahramanmaras red pepper (Capsicum annuum L.), which may have a non-uniform shape. For this purpose, the front, top and left side of each pepper were taken into account for the evaluations and projection areas. The effect of each image and image combination was used to determine the volume of the peppers. The regression coefficients between the projection areas and the volume values were also assessed for volume estimation. The relationship between the real volumes of Kahramanmaras red peppers and the front projection area values (AF) is presented as a graph in Fig. 16. Similarly, the top projection area values are presented in Fig. 17, and the left projection area values and the estimation equation can be seen in Fig. 18 (Beyaz et al., 2009d).

Fig. 16. The regression equation and graph obtained by using the AF projection area (Beyaz et al., 2009d).
According to this assessment, the regression coefficient between the real volume values and the front projection area reached 49.9%. The regression coefficient obtained from the estimation equation using the top projection area was 66.6% (Fig. 17.), and that obtained using the left projection area was 63.6% (Fig. 18.).

Fig. 18. The regression equation and graph obtained by using the AL projection area (Beyaz et al., 2009d).

When the volume is estimated from a single projection area, the top projection area values give the highest regression coefficient (66.6%). Volume estimation was then examined using pairs of these three projection areas. Fig. 19. presents the regression graph obtained from the sum of the top and left projection areas; the estimation equation reached a regression coefficient of 74.7% in this assessment.
The most appropriate estimation formula was calculated from the top and left projection areas. As can be seen in Fig. 20., the regression coefficient was 59.9%. Fig. 21. shows the regression graph for the corresponding sum; the estimation coefficient obtained from the regression equation was 71.1%. As shown in Fig. 23., the regression coefficient between the real pepper volume values and the estimated volume values was found to be 89.7%. This volume estimation can be used before the harvest as a maturity index and after the harvest as a classification and packaging parameter. Overall, for irregularly shaped products such as peppers, a high regression coefficient between volume and weight is expected. Moreover, a classification and packaging system using image analysis techniques according to product quality is also possible. In this regard, the research has a guiding character.

According to the results, the image processing method seems to be appropriate for volume estimation of Kahramanmaras red pepper (Beyaz et al., 2009d).
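The regression fits reported above can be reproduced in principle with an ordinary least-squares line; the area-volume pairs below are illustrative, not the study's measurements:

```python
# Least-squares sketch: fit volume = a * area + b from
# (projection area, measured volume) pairs.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

areas = [20.0, 25.0, 30.0, 35.0]      # cm^2, illustrative
volumes = [41.0, 50.0, 62.0, 71.0]    # ml, illustrative
a, b = fit_line(areas, volumes)
print(round(a, 2), round(b, 2))
```

Comparing the regression coefficients of such fits for different projection areas (or sums of them) is exactly how the best estimator was selected above.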

Image analysis technique for measurements of apple and peach tree canopy volume
In this study, image analysis measurements and the real, manual measurement method were compared for apple and peach tree canopy volumes. Knowledge of tree canopy volumes is important for uniform growing, yield estimation, and fertilizer and chemical applications. This research also gives some information about tree height, skirt height and parallel diameter, and these parameters can be used for designing harvesting machines. In this research, a total of twenty trees with different canopy heights and volumes were used. The apple and peach trees were randomly selected from an apple and peach orchard belonging to Ankara University Agriculture Faculty. After taking photographs of all trees with a digital camera, the overall canopy height above the ground, the canopy diameter parallel to the row near ground level, the height to the point of maximum canopy diameter, and the height from the ground to the canopy skirt were obtained from these digital images. Apple and peach canopy volumes were calculated using the prolate-spheroid formula named after Albrigo. Both the real values and the image analysis measurements provided dimensions for computing the canopy volume, whereas the image analysis measurements also gave information that could be used to compute an image-analysis canopy volume index. The tables show manual measurements of the 10 trees of each variety and their corresponding canopy volumes, which were computed using the prolate-spheroid canopy volume formula (Tables 1-4) (Beyaz et al., 2009c).

Table 3. Tree dimensions and canopy volumes calculated using the prolate-spheroid canopy volume formula from real tree measurements of peach trees (Beyaz et al., 2009c).

Table 1. and Table 3. show that the skirt height for most trees was about 0.63 m for apple trees and 0.43 m for peach trees, respectively.
The tables also show the differences in volume across the 10 trees of each variety; these differences were small because most of the trees had almost the same parallel diameters (D). The captured image measurements may thus provide easy canopy volume estimation, because tree dimensions can be obtained quickly.
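The prolate-spheroid canopy volume used above is commonly written as V = (π/6)·Hc·D², with canopy height Hc and parallel diameter D; the tree dimensions in the example are illustrative:

```python
import math

# Prolate-spheroid canopy volume: V = (pi / 6) * Hc * D^2.

def canopy_volume(canopy_height_m, diameter_m):
    return math.pi / 6.0 * canopy_height_m * diameter_m ** 2

# Illustrative tree: 2.4 m canopy height, 1.8 m parallel diameter.
print(round(canopy_volume(2.4, 1.8), 2), "m^3")
```

The same formula is applied to both the manually measured and the image-derived dimensions, which is what makes the two volume estimates directly comparable.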

(Table column headings: Tree ID, Ht (m), Hc (m), Hs (m), D (m), PS CV (m³).)
Regressions between real tree volumes and tree image volumes are presented in Fig. 24. and Fig. 25.

Determination of sugar beet topping slice thickness by using image analysis technique
Turkey is one of the important sugar beet producers in the world; it produced 15,488,332 tonnes of sugar beet from a 3,219,806 ha production area, and 2,061,000 tonnes of sugar, in 2008. Sugar beet harvesting by machine is common in Turkey. During mechanical harvesting, several mechanical loads cause skin and tissue damage to the sugar beet, and this damage results in quantitative and qualitative losses. After harvesting, classifying the quality of sugar beets as good or bad at the factory entrance is important in terms of sugar losses. In this study, three widely used types of harvester were operated: a fully hydraulic sugar beet harvester with adjustable depth and row spacing, a semi-hydraulic sugar beet harvester, and a fully mechanical sugar beet harvester. The harvesters were tried under the same field conditions during September and October of 2009. Performance values were obtained from three different evaluation methods: topping quality determination, determination of the sugar beet injury rate, and determination of the soil removal rate of the sugar beet. Determining these factors is important for obtaining the optimum harvest performance. Image processing and analysis methods were used for the evaluations. Descriptive statistics of the image processing and measured damaged-surface-area features are given in Table 5 (Beyaz et al., 2009b).
The correlation coefficients between image-processing and measured values were 0.98 for sugar beet head diameter, 0.96 for length and 0.97 for damaged surface area. The total yield losses and topping losses for the different harvesters are given in Table 6 (Beyaz et al., 2009b). Table 6. Topping yield losses related to harvest machines (Beyaz et al., 2009b).
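Correlation coefficients of this kind can be computed directly from paired manual and image-analysis readings. A minimal Pearson correlation sketch follows; the sample head-diameter values are invented for illustration, not taken from the study:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired head-diameter readings (mm): manual vs. image analysis
manual = [62.0, 70.5, 58.2, 75.1, 66.3]
image  = [61.4, 71.0, 57.8, 74.5, 67.0]
print(round(pearson_r(manual, image), 3))
```

Values close to 1.0, as reported in the study, indicate that the image-based measurements track the manual ones closely.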

As shown in Table 6, the total topping yield loss and the corresponding image-processing measurement were, respectively, 3.87% and 3.78% for the mechanical harvester, 4.86% and 4.29% for the semi-hydraulic machine, and 4.81% and 4.66% for the fully hydraulic machine. According to the IIRB, the ideal topping quality group is 4. On the basis of the total topping yield loss values, the best machine is the mechanical sugar beet harvester (Beyaz et al., 2009b).

Assessment of mechanical damage on apples with image analysis
The aim of the study was to determine the oxidation area and skin colour change of damaged apples by using an image analysis technique. Golden Delicious, Granny Smith and Stark Crimson apple varieties were examined with a special pendulum unit developed for the impact test. Plastic and wood were used as impact surfaces, and apples were dropped onto the impact surface from different predetermined heights, with each impact perpendicular to the surface. After the impact test, the colour change of the damaged regions and the oxidation area of each sample were estimated with a digital camera. Colour measurements were made with a colorimeter based on the L*, a*, b* system. The oxidation areas could be evaluated clearly after these processes (Beyaz et al., 2010).
Measurements of the physical properties of the apples used in the tests are given in Table 7. The impact tests applied to the three apple varieties changed the colour values; the changes in colour values for each variety over five days are presented in Table 8 (Beyaz et al., 2010).
Repeated-measures ANOVA was used to evaluate the interaction between colour and the other variables. For each of the L*, a*, b* values, the interaction between day and apple variety was significant (p<0.05); the interaction values are presented in Tables 9-12. Additionally, the interaction between day and surface type was significant only for the b* value, presented in Table 12 (Beyaz et al., 2010). Table 9. The interaction between the apple varieties and days for L* value (Beyaz et al., 2010).

A review of the L*, a*, b* values of the Granny Smith variety showed that L* and a* increased while b* decreased (Fig. 11). For the Stark Crimson variety, all of L*, a* and b* decreased. For the Golden Delicious variety, L* and b* decreased while a* increased; the L* value decreased by 65%, and the b* value decreased by 60%, the largest decrease among all varieties. The total colour change of the Golden Delicious variety was significantly different from those of Granny Smith and Stark Crimson, as indicated in Table 8 (Beyaz et al., 2010).
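Colour change between an undamaged and a bruised region can also be summarised as a single number with the CIE76 colour difference ΔE between two (L*, a*, b*) triples. The study reports the raw L*, a*, b* changes rather than ΔE, so this is only an illustrative sketch with made-up readings:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference between two (L*, a*, b*) triples."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Hypothetical readings: undamaged vs. bruised region of an apple
fresh   = (72.0, -4.5, 38.0)
bruised = (58.5,  2.1, 30.4)
print(round(delta_e_cie76(fresh, bruised), 2))  # → 16.84
```

A ΔE of this magnitude would be a clearly visible difference, consistent with oxidised bruise tissue darkening over several days.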
In the variance analysis, two impact surfaces (wood and plastic), four drop heights (50, 55, 60 and 65 cm) and three apple varieties (Golden Delicious, Granny Smith, Stark Crimson) were analysed, with five replications forming the subgroups. Duncan's multiple range test was used to determine the mean differences between varieties, impact surfaces and drop heights, and impact energy was taken as a covariate (Beyaz et al., 2010).
The variance analysis showed that drop height and the 'Surface x Variety' interaction were statistically significant at the p<0.01 confidence level (Beyaz et al., 2010).
The descriptive statistics for the variance analysis are given in Table 13. Duncan's test on the mean damaged areas showed significant differences among the 50, 55 and 60 cm drop heights, while the difference between 60 and 65 cm was not significant (Beyaz et al., 2010).
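As a sketch of the underlying test statistic, a one-way ANOVA F value can be computed directly from grouped samples. This is a simplified one-way version for illustration only, not the repeated-measures design or Duncan's multiple range test used in the study, and the bruise-area groups are hypothetical:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of samples (lists of floats)."""
    k = len(groups)                               # number of groups
    n = sum(len(g) for g in groups)               # total observations
    grand = sum(sum(g) for g in groups) / n       # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical bruised areas (cm^2) at three drop heights
heights = [[1.2, 1.4, 1.3], [1.8, 1.7, 1.9], [2.4, 2.5, 2.3]]
print(round(one_way_anova_f(heights), 1))
```

A large F relative to the critical value at the chosen significance level corresponds to the "significant at p<0.01" statements in the text.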
The descriptive statistics for bruised area versus drop height are presented in Table 13, and the damaged surface areas with descriptive statistics for each apple variety in Table 14. On the plastic surface, the differences among the varieties in mean impacted surface area were not statistically significant. The results for the wood surface are also presented in Table 14: the Golden Delicious and Stark Crimson varieties did not differ significantly, but the Granny Smith variety differed significantly from the others. Table 14. Impacted surface area results for plastic and wood surfaces (Beyaz et al., 2010).

The interaction between apple variety and impact surface is compared in Table 15; the differences between the average surface areas were significant (Beyaz et al., 2010).
Image analysis of the oxidation (bruised) areas created by the impact tests is important for designing equipment and machines for classification, packaging, transportation and handling. The results of the impact tests indicated that the response to impact bruising depended on flesh firmness and on the height of the impact (Beyaz et al., 2010).
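A minimal sketch of how a bruised (oxidised) area can be estimated from a grayscale image: count the pixels darker than a threshold and convert the pixel count to physical area with the image scale. The threshold, the tiny image patch and the pixel scale below are all hypothetical, chosen only to illustrate the idea:

```python
def bruised_area_mm2(gray_image, threshold, mm_per_pixel):
    """Estimate bruised area: count pixels darker than the threshold and
    convert the pixel count to mm^2 using the image scale."""
    dark = sum(1 for row in gray_image for px in row if px < threshold)
    return dark * mm_per_pixel ** 2

# Hypothetical 4x4 grayscale patch (0-255); bruised tissue appears darker
patch = [
    [200, 198, 120, 115],
    [201,  90,  85, 110],
    [199,  88, 202, 205],
    [197, 200, 203, 201],
]
print(bruised_area_mm2(patch, 130, 0.5))  # → 1.5
```

In practice the threshold would be chosen per variety (or adaptively), and the mm-per-pixel scale obtained by imaging a reference object of known size.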
When the impacted apples were analysed with Duncan's test, the differences between the 50, 55 and 60 cm heights were significant, but the difference between 60 and 65 cm was not.

We can conclude that the differences between the average impacted surface areas of the Golden Delicious and Stark Crimson apples were significant, but this was not the case for the Granny Smith apples. This result shows that apple variety, impact surface and the tissue of the apple variety all affect the average impacted surface area (Beyaz et al., 2010).
Regarding the surface properties, the wood surface caused greater impact damage to the apples than the plastic surface because of its material properties; accordingly, plastic surfaces are gentler on apples that are harvested or handled postharvest. This research has shown that the image analysis technique can easily be used to determine impacted surface areas and changes in colour values of apples and other agricultural products (Beyaz et al., 2010).

Future of machine vision measurement technology
The goal of machine vision research is to provide computers with humanlike perception capabilities so that they can sense the environment, understand the sensed data, take appropriate actions, and learn from this experience in order to enhance future performance. The field has evolved from the application of classical pattern recognition and image processing methods to advanced techniques in image understanding like model-based and knowledge-based vision.
In recent years, there has been an increased demand for machine vision systems to address "real-world" problems. The current state-of-the-art in machine vision needs significant advancements to deal with real-world applications, such as agricultural navigation systems, target recognition, manufacturing, photo interpretation, remote sensing, etc. It is widely understood that many of these applications require vision algorithms and systems to work under partial occlusion, possibly under high clutter, low contrast, and changing environmental conditions. This requires that the vision techniques be robust and flexible enough to optimize performance in a given scenario (Sebe et al., 2005).

Acknowledgement
I wish to personally thank the following people for their contributions to my inspiration and knowledge and other help in creating this chapter: