
Machine Vision for Inspection: A Case Study

Written By

Brandon Miles and Brian Surgenor

Submitted: 15 November 2010 Published: 17 August 2011

DOI: 10.5772/21546

From the Edited Volume

Assembly Line - Theory and Practice

Edited by Waldemar Grzechca


1. Introduction

Automated inspection systems have the potential to significantly improve quality and increase production rates in the manufacturing industry. Machine vision (MV) is one example of an inspection technology that has been successfully applied to production lines. A wide variety of industrial inspection applications of MV systems can be found in the literature. For example, Lee et al. (2007) applied an MV system to bird handling for the food industry. Reynolds et al. (2004) looked at solder paste inspection for the electronics industry. Gayubo et al. (2006) developed a system to locate tearing defects in sheet metal.

There has also been a considerable amount of laboratory-based work on MV systems with practical applications. Jackman et al. (2009) used a vision system to predict the quality of beef. Kumar (2003) worked on the detection of defects in twill weave fabric samples. Garcia et al. (2006) checked for missing and misaligned electronic components. Hunter et al. (1995) confirmed circularity in brake shoes. Kwak et al. (2000) identified surface defects in leather.

Although the range of applications is broad, they all tend to adopt the same image processing pipeline with four main stages. The first is image acquisition. This is followed by preprocessing of the image, including applying various filters and selecting regions of interest. The third stage is feature extraction, where individual features are extracted from the image. Finally, a classifier is used to determine whether a given part is acceptable or not.
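
To make these four stages concrete, the following is a minimal sketch of such a pipeline in Python with OpenCV and NumPy. The function names, parameter values and the simple thresholded decision rule are illustrative assumptions only, not the system described later in this chapter.

```python
import cv2
import numpy as np

def acquire(path):
    """Stage 1: image acquisition (here, simply loading an image from disk)."""
    return cv2.imread(path)

def preprocess(img, roi):
    """Stage 2: preprocessing - crop a region of interest, convert to grey, smooth."""
    x, y, w, h = roi
    patch = cv2.cvtColor(img[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    return cv2.GaussianBlur(patch, (5, 5), 0)

def extract_features(patch):
    """Stage 3: feature extraction - e.g. mean intensity and edge density."""
    edges = cv2.Canny(patch, 50, 150)
    return np.array([patch.mean(), edges.mean()])

def classify(features, weights, bias):
    """Stage 4: classification - a placeholder linear pass/fail decision."""
    return float(features @ weights + bias) > 0.5

# Hypothetical usage (file name, region and weights are invented):
# img = acquire("part_0001.png")
# ok = classify(extract_features(preprocess(img, (100, 50, 200, 120))),
#               weights=np.array([0.002, 0.01]), bias=-0.3)
```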

The automotive industry presents a particularly challenging environment for MV based inspection. With changing lighting conditions in a dirty environment, robust classifiers are needed to perform accurate inspection. Feature selection routines can be used to improve the results of an ANFIS based classifier for automotive applications (Miles and Surgenor, 2009). Although there is potential for good results with this approach, it can take hours of processing time to compute an accurate solution (Killing et al., 2009).

This chapter presents the results of a project where six classification techniques were examined to see if development time could be reduced without sacrificing performance. As a case study, the problem of fastener insertion into an automotive part known as a cross car beam was investigated. Images taken from a production assembly line were used as the source of the data. The approaches under investigation were:

  1. a Neural Network based processor,

  2. Principal Component Analysis to reclassify the input feature set, and

  3. a direct Eigenimage approach to avoid the need to extract features from each image.

These methods were compared in terms of classification accuracy. An additional data set was also used to test the performance of these classifiers in detecting orientation defects in addition to the presence or absence of clips. The results of these investigations are presented with a comparison of the performance on the different datasets.


2. Techniques

Data for classifiers can be generated from input images in a variety of ways. The first is a feature based classifier. In this approach, features such as lines, holes and circles are extracted from an image. Numerical values are then obtained from these features, such as the x, y coordinates of a circle. These values can then be used as inputs to either a traditional Neural Network or a Neuro-Fuzzy system such as ANFIS. Principal Component Analysis (PCA) can be used to reclassify the features to generate a data set with fewer inputs.

A second method to generate input values involves the application of Eigenimages. Eigenimages are generated from a set of training images. New images can be expressed as combinations of these Eigenimages. Coefficients are assigned to all the Eigenimages used to express a given set of input images. These coefficients can then be used to train either a Neural Network or an ANFIS system.

The six specific classifiers under investigation in this study are summarized in Table 1. They are grouped into Feature Based and Eigenimage Based methods, each paired with either ANFIS or a Neural Network. This allows for a comparison between the two classification techniques, namely Neural Networks and ANFIS. More importantly, however, it compares the Eigenimage based approach, which works directly on the pixels to produce classifier inputs, with the feature based approach, where features are first identified and then a classifier is trained.

 | ANFIS | Neural Network
Feature Based Methods | Feature based with ANFIS | Feature based with a Neural Network
 | Feature based with PCA and ANFIS | Feature based with PCA and a Neural Network
Eigenimage Based Methods | Eigenimage based with ANFIS | Eigenimage based with a Neural Network

Table 1.

Summary of the six classifiers under investigation.

2.1. ANFIS

ANFIS possesses a full Fuzzy Inferencing Structure. A fuzzy structure is first established as a preliminary model of the system and can then be updated with training data; thus, it is trainable like a Neural Network. In this application, the ANFIS system was implemented using the MATLAB Fuzzy Toolbox. The reader is referred to Roger Jang (1993) for further background on this technique. The ANFIS technique was used as the benchmark in previous work reported in Killing et al. (2009).

2.2. Neural networks

The Neural Network used in this study was a Multi Layer Perceptron (MLP) network with one hidden layer. The hidden layer had a sigmoidal activation function and the output layer had a linear activation function. This approach was taken because the output value needed to be a scalar rather than the more common binary value. Twenty hidden nodes were used, to minimize the possibility of overtraining the system while giving it sufficient size to be useful.
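
As a rough illustration of this classifier configuration (a sketch with scikit-learn rather than the authors' MATLAB implementation), an equivalent network has a single hidden layer of 20 sigmoidal units feeding a linear output, trained as a regressor so that the output is a scalar score rather than a hard binary label. The data below are random placeholders standing in for extracted feature values.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# X: one row of feature values per image, z: desired output (1 = pass, 0 = fail).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
z = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float)

# One hidden layer of 20 sigmoidal ('logistic') units; MLPRegressor uses a
# linear (identity) output activation, so the output is a scalar score.
net = MLPRegressor(hidden_layer_sizes=(20,), activation='logistic',
                   max_iter=2000, random_state=0)
net.fit(X, z)

score = net.predict(X[:5])     # scalar outputs near 1 (pass) or 0 (fail)
decision = score > 0.5         # a simple threshold turns scores into pass/fail
```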

2.3. Principal component analysis

PCA can be used to re-express the input data as components ranked by variance. The input data set can then be reduced in size by keeping only the components that account for most of the variance. In this case, based on experience, only the components comprising the top 95% of the variance were kept. This typically left five to seven inputs.

To calculate the principal components, Singular Value Decomposition (SVD) was used. The sample covariance matrix S (calculated from the supplied data) was used to find the principal components:

S = \bar{X} \bar{X}^{T}        (E1)

where \bar{X} is X centred about the sample mean, i.e., \bar{X} = (X - c), and c is the mean of X. Once S is calculated, the eigenvalues and eigenvectors of S can be found, and the principal components follow directly from the eigenvectors.
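
A minimal NumPy sketch of this calculation, on placeholder data, is given below: the feature matrix is centred, the covariance matrix is decomposed with SVD, and only enough components to account for 95% of the variance are retained. The function and variable names are illustrative assumptions.

```python
import numpy as np

def pca_reduce(X, var_kept=0.95):
    """X: (n_samples, n_features) feature matrix. Returns reduced data, basis, mean."""
    c = X.mean(axis=0)                    # sample mean of each feature
    Xbar = X - c                          # centred data, as in Eq. (E1)
    S = (Xbar.T @ Xbar) / (len(X) - 1)    # sample covariance matrix
    U, s, Vt = np.linalg.svd(S)           # SVD of S; s holds the component variances
    k = np.searchsorted(np.cumsum(s) / s.sum(), var_kept) + 1
    E = U[:, :k]                          # principal components (eigenvectors of S)
    return Xbar @ E, E, c                 # projected data, basis, mean

# Example: 100 samples of 20 extracted feature values (placeholder data).
X = np.random.default_rng(1).normal(size=(100, 20))
X_reduced, E, c = pca_reduce(X)           # typically keeps only a handful of inputs
```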

2.4. Eigenimages

The Eigenimage classifier offers a more direct approach, without the need to extract features from the images. Grey scale images can be represented as vectors of pixels. In this way the dataset X can be built from the image vectors, where X = [x_1, x_2, x_3, ..., x_n] for n sample images. SVD can then be performed to identify the principal components [e_1, e_2, e_3, ..., e_n].

These Eigenimages, or principal components, are a series of images that when combined are able to represent the entire dataset of images. It is often desirable to select only the principal components that have the largest variances, E = [e_1, e_2, ..., e_k]. In this case, it is possible to project a given image onto the Eigenspace constructed from these principal components. This projection generates an Eigenpoint from the image x with a set of projection coefficients p = [p_1, p_2, ..., p_k]. The projection is calculated by:

p = E^{T} (x - c)        (E2)

Once this projection is known, the set of coefficients p can be used to classify the images. Note that p is a much smaller set of input data than the entire image, which is computationally beneficial. In this way the PCA technique acts like a feature detector, producing numerical values from an image. The theory of Eigenimages is discussed further in Sun et al. (2007) and Ohba and Ikeuchi (1997). Both a Neural Network and ANFIS can be trained on these principal components.
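
The sketch below (NumPy, with placeholder image data) illustrates the construction of the Eigenimages and the projection of Eq. (E2): training images are flattened into pixel vectors, SVD yields the Eigenimages, and each image is reduced to a short coefficient vector p that serves as the classifier input.

```python
import numpy as np

def build_eigenimages(images, k):
    """images: (n, h, w) greyscale training stack. Returns mean and k Eigenimages."""
    X = images.reshape(len(images), -1).astype(float)   # each image as a pixel vector
    c = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - c, full_matrices=False)
    E = Vt[:k]                                           # top-k Eigenimages, one per row
    return c, E

def project(image, c, E):
    """Eq. (E2): coefficients of the centred image; E holds Eigenimages as rows here."""
    return E @ (image.ravel().astype(float) - c)

# Placeholder data: 50 training images of 64 x 48 pixels, reduced to 10 coefficients.
rng = np.random.default_rng(2)
train = rng.integers(0, 256, size=(50, 48, 64))
c, E = build_eigenimages(train, k=10)
p = project(train[0], c, E)    # p is the classifier input for this image
```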

2.5. Features

Lines, holes and circles can be found in an image using a Hough transform. Additionally, large colour blobs can be located based on the positions of pixels of certain colours. Given a specific region of interest in the image, the average red, green, blue and greyscale intensity values can also be found. A radial hole method, as detailed in Miles and Surgenor (2009), has also been used. Where possible, these features are measured relative to the centre of the beam.
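
As a hedged illustration of this kind of feature extraction (OpenCV based, not the actual QVision code), circles can be located with the circular Hough transform and average channel intensities taken over a region of interest; all file names and parameter values below are placeholders.

```python
import cv2
import numpy as np

def circle_features(gray):
    """Return (x, y, r) of the strongest circle found, or zeros if none."""
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                               param1=100, param2=30, minRadius=5, maxRadius=60)
    return circles[0, 0] if circles is not None else np.zeros(3)

def intensity_features(img, roi):
    """Average red, green, blue and greyscale intensity over a region of interest."""
    x, y, w, h = roi
    patch = img[y:y + h, x:x + w]
    b, g, r = patch.mean(axis=(0, 1))            # OpenCV stores images as BGR
    grey = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY).mean()
    return np.array([r, g, b, grey])

# Hypothetical usage with an invented image file and region:
# img = cv2.imread("clip1.png")
# gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# feats = np.concatenate([circle_features(gray),
#                         intensity_features(img, (120, 80, 64, 64))])
```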

Two new features have been introduced in order to help improve the accuracy of the results: a generalized Hough rectangle feature and a PCA colour feature.

2.5.1. Hough rectangle feature

One of the extensions of the generalized Hough transform (GHT) is using it to find rectangles. The symmetry of a rectangle can be exploited to help locate it. First, the gradient direction and magnitude of all the pixels are calculated. Say the rectangle has side lengths A and B, where A > B. The sides of length A are oriented in the direction of the major axis of the rectangle, and the sides of length B in the direction of the minor axis. Assume first that the angle of the major axis is between 0 and 90º. Then, for a given edge pixel, if its gradient direction is between 0 and 90º, cast votes for a rectangle centred on a line at ±A/2 pixels away from the edge pixel in the direction of the edge pixel's gradient. Alternatively, if the gradient direction of the edge pixel is between 90º and 180º, cast votes for a rectangle centred on a line at ±B/2 from the edge pixel.

Then, in another accumulator plane, accumulate votes assuming that the angle of the major axis is between 90º and 180º. If the gradient direction of an edge pixel is between 90º and 180º, cast votes for rectangles centred on a line at ±A/2 pixels away from the edge pixel in the direction of the edge pixel's gradient. Alternatively, if the gradient direction of the edge pixel is between 0 and 90º, cast votes for a rectangle centred on a line at ±B/2 from the edge pixel.

This approach will produce two prominent peaks in one of the two accumulators: the larger corresponds to the sides along the major axis and the smaller to the sides along the minor axis. Where these two peaks intersect should be the centre of the rectangle, and the orientation is known from the slope of the voting lines. Figure 1 illustrates this technique. See Chapter 14 in Davies (2005) for further details.
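
The following is a much simplified sketch of this voting scheme, restricted for brevity to rectangles whose major axis is roughly horizontal, so that only one accumulator plane is needed. The side lengths A and B are assumed known, and all names and thresholds are illustrative.

```python
import cv2
import numpy as np

def rect_centre_votes(gray, A, B, mag_thresh=50):
    """Accumulate centre votes for an approximately axis-aligned A x B rectangle (A > B)."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    mag = np.hypot(gx, gy)
    h, w = gray.shape
    acc = np.zeros((h, w), dtype=np.int32)
    ys, xs = np.nonzero(mag > mag_thresh)               # edge pixels only
    for y, x in zip(ys, xs):
        ang = np.degrees(np.arctan2(gy[y, x], gx[y, x])) % 180.0
        if 45 <= ang < 135:
            # Gradient roughly vertical: pixel lies on a long (A) side, so the centre
            # is B/2 above or below, anywhere within +/-A/2 horizontally.
            for dy in (-B // 2, B // 2):
                cy = y + dy
                if 0 <= cy < h:
                    acc[cy, max(0, x - A // 2):min(w, x + A // 2 + 1)] += 1
        else:
            # Gradient roughly horizontal: pixel lies on a short (B) side, so the centre
            # is A/2 left or right, anywhere within +/-B/2 vertically.
            for dx in (-A // 2, A // 2):
                cx = x + dx
                if 0 <= cx < w:
                    acc[max(0, y - B // 2):min(h, y + B // 2 + 1), cx] += 1
    return acc

# Hypothetical usage with a greyscale image 'gray':
# acc = rect_centre_votes(gray, A=40, B=16)
# cy, cx = np.unravel_index(np.argmax(acc), acc.shape)   # approximate rectangle centre
```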

Figure 1.

Illustration of the Hough rectangle method.

2.5.2. PCA colour feature

As an additional source of information, it is possible to apply PCA to the red, green and blue channels of an image. The result is three new colour components, ranked in order of intensity variance (Lee et al., 2007). The image is represented as a vector of pixels, each with red, green and blue values. By applying PCA to this vector, new colour components can be generated. Applying SVD to the sample correlation matrix generates eigenvectors and eigenvalues. The eigenvectors can then be used to transform the image into its new colour components, and the eigenvalues of these new components can be used as numerical inputs.
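
A short NumPy sketch of this colour transformation is given below, with a placeholder image: the RGB pixels are treated as a three-column data matrix, PCA of their correlation structure gives three new colour axes, and each pixel is re-expressed in those axes. The function name and data are assumptions for illustration.

```python
import numpy as np

def pca_colour(img_rgb):
    """img_rgb: (h, w, 3) array. Returns the image in its PCA colour components
    and the eigenvalues of the three new components."""
    h, w, _ = img_rgb.shape
    P = img_rgb.reshape(-1, 3).astype(float)      # one row of (R, G, B) per pixel
    P -= P.mean(axis=0)
    C = np.corrcoef(P, rowvar=False)              # 3 x 3 sample correlation matrix
    U, s, Vt = np.linalg.svd(C)                   # eigenvectors / eigenvalues of C
    new_components = (P @ U).reshape(h, w, 3)     # pixels expressed in the new colour axes
    return new_components, s                      # s can serve as numerical inputs

# Example with a placeholder image:
img = np.random.default_rng(3).integers(0, 256, size=(48, 64, 3))
components, eigenvalues = pca_colour(img)
```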


3. Case study

The part in question is a cross car beam, which is the metal support located behind the dashboard of an automobile. See Figure 2. The beam is a stamped metal part and a stamped metal radio bracket is welded to this beam. As part of the assembly process four small rectangular clips must be inserted into the radio bracket. These four clips serve as locations for securing the radio unit to the cross car beam. Figure 3 illustrates the manufacturing cell used to inspect each part.

Figure 2.

Cross car beam containing the radio bracket.

This is a safety critical part. It is essential to ensure that these clips are properly installed, because of the consequences that could result from an unsecured radio unit during a collision. In order to ensure the presence of these clips, a machine vision system was developed to automatically inspect each part before it leaves the manufacturing cell that is dedicated to attaching the clips. As illustrated in Figure 3, the PLC that controls the manufacturing cell communicates with a standalone PC that controls the vision system.

3.1. Setup

A two camera system was installed on the production assembly line in order to inspect the part. This is shown in Figure 4(a), and the part is pictured in Figure 4(b). The two digital FireWire cameras capture 1024 × 768 images. For a lighting solution, an LED ring light with a diffuser was chosen; the ring lights are visible on the front of the cameras in Figure 4(a). The decision to use ring lights was based on the need for a lighting solution that could illuminate the part but did not have a large footprint, due to space restrictions in the cell. It also needed to be easily mountable. Because of this, it was not feasible to install an off angle lighting source or to use backlighting. The lights were manually aligned to illuminate the bracket and be centred on the beam.

Figure 3.

PLC controlled manufacturing cell with PC controlled vision system.

Figure 4.

Orientation of (a) cameras looking at bracket and (b) radio bracket in holder.

3.2. Software

QVision, a custom software system based on MATLAB®, was used to classify the data. This software is capable of loading library images, selecting and extracting features or regions of interest for Eigenimages, training classifiers, and performing final inspection of new images. It was integrated with the PLC running the manufacturing cell for online inspection of parts. Figure 5 shows an image of the classification results of the program.

Figure 5.

QVision software GUI used to detect the presence of clips.

3.3. Datasets

Two different data sets were used for comparing the classification techniques. They contain pass images and fail images taken from parts on the assembly line. A pass image has the clips present and a fail image has clips missing. Figures 6 and 7 illustrate sample cases of these images. Since there are two cameras looking at the top and bottom clips, there are also top and bottom images.

The first data set, which will be referred to as the “original set”, consisted of 150 pass images and 114 fail images for the top bracket, and 147 pass images and 125 fail images for the bottom bracket.

Figure 6.

Sample “clean” image from manufacturing cell for top bracket: (a) pass and (b) fail.

The second data set (which will be referred to as the “orientation set”) has been examined to determine the performance of the system under two new conditions. Firstly, the lighting has changed in this image set. Secondly, three new failure modes have been introduced. The clips do not have significant glare, but the background is much more visible in these images. Figures 8 to 11 show the four modes of failure, which are: missing clip, backwards and upside-down clip, backwards clip, and upside-down clip, respectively.

Figure 7.

Sample “clean” image from cell for bottom bracket: (a) pass and (b) fail.

Figure 8.

Sample fail (missing clip) for the orientation image set.

Figure 9.

Sample fail (backwards and upside down clip) for the orientation image set.

Figure 10.

Sample fail (backwards) for the orientation image set.

Figure 11.

Sample fail (upside down) for the orientation image set.

3.4. Performance criteria

The industrial partner has specified a performance goal in terms of false positives (FPs) and false negatives (FNs). An FP is a defective part classified as good and hence shipped to the customer. This is a safety hazard and cannot be tolerated: there must be no FPs. An FN is a good part classified as bad; it is a measure of the scrap rate. The industrial partner has set the FN rate at a maximum of 2%. It should be noted that Receiver Operating Characteristic graphs (Fawcett, 2005) can also be generated as a measure of performance.

For the purposes of this study, the root-mean-squared (RMS) error E_rms for a set of images is used as a performance measure. The root-mean-square of the output error is defined as:

E_{rms} = \sqrt{ \frac{ \sum_{i=1}^{n} ( Z_i^d - Z_i )^2 }{ n } }        (E3)

where Z_i^d is the desired (correct) classification for the i-th image, Z_i is the output of the classifier algorithm (Z_i^{nf} or Z_i^{th}) and n is the total number of images. Z = 1 is an unconditional pass (clip present) and Z = 0 is an unconditional fail (clip missing).
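
The short sketch below (NumPy, with invented example numbers) shows how the FP count, FN count and the E_rms of Eq. (E3) would be computed from a classifier's scalar outputs once a pass/fail threshold is applied.

```python
import numpy as np

def evaluate(z_desired, z_output, threshold=0.5):
    """z_desired: 1 for a good part, 0 for a defective part.
    z_output: scalar classifier outputs. Returns (FP, FN, E_rms)."""
    z_desired = np.asarray(z_desired, dtype=float)
    z_output = np.asarray(z_output, dtype=float)
    passed = z_output > threshold
    fp = int(np.sum(passed & (z_desired == 0)))    # defective parts classified as good
    fn = int(np.sum(~passed & (z_desired == 1)))   # good parts classified as bad
    e_rms = np.sqrt(np.mean((z_desired - z_output) ** 2))   # Eq. (E3)
    return fp, fn, e_rms

# Invented example: 6 images, two borderline outputs.
fp, fn, e_rms = evaluate([1, 1, 0, 0, 1, 0], [0.97, 0.88, 0.05, 0.61, 0.45, 0.02])
# Here fp = 1 (the 0.61 on a defective part) and fn = 1 (the 0.45 on a good part).
```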


4. Results

The six classifiers were trained and evaluated on the original data set. The system was trained on 40% of the images and the results were then checked on 20% of the images, with a final 40% reserved for additional testing purposes. These images were chosen randomly. The results of this classification are shown in Table 2.

Ranking | Classifier | Clip | False Positives | False Negatives | % False Negatives | RMS Error Erms
1st | Feature based with PCA and NN | Clip 1 | 0 | 0 | 0 | 0.0001
 | | Clip 2 | 0 | 0 | 0 | 0.0115
 | | Clip 3 | 0 | 0 | 0 | 0
 | | Clip 4 | 0 | 1 | 0.4 | 0.0547
 | | Totals | 0 | 1 | (0.1) | 0.066
2nd | Feature based with NN | Clip 1 | 0 | 0 | 0 | 0.0879
 | | Clip 2 | 0 | 2 | 0.8 | 0.0899
 | | Clip 3 | 0 | 0 | 0 | 0.0710
 | | Clip 4 | 0 | 2 | 0.8 | 0.0871
 | | Totals | 0 | 4 | (0.4) | 0.336
3rd | Eigenimage based with NN | Clip 1 | 0 | 7 | 2.6 | 0.1853
 | | Clip 2 | 0 | 4 | 1.5 | 0.1166
 | | Clip 3 | 0 | 3 | 1.1 | 0.1304
 | | Clip 4 | 0 | 1 | 0.4 | 0.1309
 | | Totals | 0 | 15 | (1.4) | 0.563
4th | Feature based with PCA and ANFIS | Clip 1 | 0 | 6 | 2.3 | 0.0721
 | | Clip 2 | 0 | 0 | 0 | 0.1102
 | | Clip 3 | 0 | 11 | 4.2 | 0.1748
 | | Clip 4 | 0 | 9 | 3.4 | 0.1509
 | | Totals | 0 | 26 | (2.5) | 0.508
5th | Eigenimage based with ANFIS | Clip 1 | 2 | 76 | 29 | 0.3932
 | | Clip 2 | 6 | 40 | 15 | 0.3319
 | | Clip 3 | 0 | 6 | 2.3 | 0.1512
 | | Clip 4 | 0 | 4 | 1.5 | 0.1908
 | | Totals | 8 | 126 | (12) | 1.067
6th | Feature based with ANFIS | Clip 1 | 1 | 29 | 11 | 0.9587
 | | Clip 2 | 2 | 9 | 3.4 | 0.3230
 | | Clip 3 | 10 | 12 | 4.5 | 0.5226
 | | Clip 4 | 6 | 6 | 2.3 | 0.2805
 | | Totals | 19 | 56 | (5.3) | 2.085

Table 2.

Results with original data set (264 images per clip).

To improve performance, a new set of features was used, including the PCA colour technique and the Hough rectangle technique. The results are shown in Table 3. The results of the Eigenimage based classifiers are not included in this table because they are the same as those found in Table 2.

The final set of images that was examined was the new orientation set. There are three additional modes of failure and the lighting has changed. These additional modes of failure can be caught by other methods, and the industrial partner does not need the vision system to inspect for these defects. However, the set does provide an excellent test of the robustness of the algorithms. As in the previous cases, features were defined on the images. However, these were not found relative to the centre of the beam, because of the difficulty in finding the beam with the new lighting. The results for the orientation set are presented in Table 4.

Ranking | Classifier | Clip | False Positives | False Negatives | % False Negatives | RMS Error Erms
1st | Feature based with PCA and NN | Clip 1 | 0 | 0 | 0 | 0.0047
 | | Clip 2 | 0 | 0 | 0 | 0
 | | Clip 3 | 0 | 2 | 0.8 | 0.0798
 | | Clip 4 | 0 | 0 | 0 | 0.0005
 | | Totals | 0 | 2 | (0.2) | 0.085
2nd | Feature based with NN | Clip 1 | 0 | 0 | 0 | 0.0488
 | | Clip 2 | 0 | 1 | 0.4 | 0.0647
 | | Clip 3 | 0 | 2 | 0.8 | 0.1043
 | | Clip 4 | 0 | 1 | 0.4 | 0.0908
 | | Totals | 0 | 4 | (0.4) | 0.309
3rd | Feature based with PCA and ANFIS | Clip 1 | 0 | 2 | 0.8 | 0.0621
 | | Clip 2 | 0 | 0 | 0 | 0.0688
 | | Clip 3 | 0 | 12 | 4.5 | 0.1416
 | | Clip 4 | 0 | 4 | 1.5 | 0.1284
 | | Totals | 0 | 18 | (1.7) | 0.401
4th | Feature based with ANFIS | Clip 1 | 1 | 6 | 2.3 | 0.5047
 | | Clip 2 | 1 | 3 | 1.1 | 0.1874
 | | Clip 3 | 0 | 5 | 1.9 | 0.1458
 | | Clip 4 | 1 | 26 | 10 | 0.3767
 | | Totals | 3 | 40 | (3.8) | 1.215

Table 3.

Results with original data set and additional features (264 images per clip).

For the feature based results in Table 2, features were extracted from each image. These were a combination of holes, lines, circles and colours. The features are illustrated for Clip 1 in Figure 12.

Figure 12.

Features defined on Clip 1 for original data set.

Ranking | Classifier | Clip | False Positives | False Negatives | % False Negatives | RMS Error Erms
1st | Feature based with PCA and NN | Clip 1 | 0 | 0 | 0 | 0.0047
 | | Clip 2 | 0 | 0 | 0 | 0
 | | Clip 3 | 0 | 2 | 0.7 | 0.0798
 | | Clip 4 | 0 | 0 | 0 | 0.0005
 | | Totals | 0 | 2 | (0.15) | 0.085
2nd | Feature based with NN | Clip 1 | 0 | 0 | 0 | 0.0488
 | | Clip 2 | 0 | 1 | 0.4 | 0.0647
 | | Clip 3 | 0 | 2 | 0.7 | 0.1043
 | | Clip 4 | 0 | 1 | 0.4 | 0.0908
 | | Totals | 0 | 4 | (0.3) | 0.317
3rd | Feature based with PCA and ANFIS | Clip 1 | 0 | 2 | 0.8 | 0.0621
 | | Clip 2 | 0 | 0 | 0 | 0.0688
 | | Clip 3 | 0 | 12 | 4 | 0.1416
 | | Clip 4 | 0 | 4 | 1.5 | 0.1284
 | | Totals | 0 | 18 | (1.6) | 0.401
4th | Feature based with ANFIS | Clip 1 | 1 | 6 | 2 | 0.5047
 | | Clip 2 | 1 | 3 | 1 | 0.1874
 | | Clip 3 | 0 | 5 | 2 | 0.1458
 | | Clip 4 | 1 | 26 | 10 | 0.3767
 | | Totals | 3 | 40 | (3.5) | 0.710
5th | Eigenimage based with NN | Clip 1 | 9 | 0 | 0 | 0.1835
 | | Clip 2 | 0 | 0 | 0 | 0.0662
 | | Clip 3 | 0 | 0 | 0 | 0.0646
 | | Clip 4 | 0 | 0 | 0 | 0.0548
 | | Totals | 9 | 0 | (0) | 0.369
6th | Eigenimage based with ANFIS | Clip 1 | 12 | 2 | 0.9 | 0.2253
 | | Clip 2 | 2 | 5 | 2.29 | 0.1626
 | | Clip 3 | 1 | 1 | 0.5 | 0.1228
 | | Clip 4 | 1 | 2 | 0.9 | 0.0963
 | | Totals | 16 | 10 | (0.9) | 0.607

Table 4.

Results with orientation data set (286 images per clip).

Again the best results were found with the PCA Neural Network, followed by the Neural Network. The feature-based results did better than the Eigenimage results and the Neural Network results were better than the ANFIS results.

One interesting result is that of the Eigenimage based classifier trained with a Neural Network: if Clip 1 is ignored, the results are perfect.


5. Discussion

Six different classifiers were compared for the classification of clips on an automotive assembly. Tests were done on two image sets and with two different feature sets. It was consistently seen that a Neural Network classifier, whether used on feature data, on feature data with PCA applied, or on Eigenimage coefficients, performed better than the ANFIS system. The NN results ranked higher on all of the tests, so it is reasonable to say that the Neural Network performs better than the ANFIS system. It was also consistently seen that applying PCA to the input data improves the classification results. The results with feature extraction and PCA ranked higher than the results with feature extraction and no PCA on the majority of the tests.

For the original data set, the performance of the feature based techniques was better than that of the region of interest (Eigenimage) techniques. In the case of the orientation data set, the feature based techniques in general had similar performance to the Eigenimage techniques. However, ignoring Clip 1, the Eigenimage Neural Network technique worked better than any other technique on this set of data. This suggests that the Eigenimage technique is better able than a feature based technique to distinguish multiple types of faults under brighter lighting.

Applying PCA to a dataset eliminates the need to perform feature selection, improving the results in a systematic way. The Eigenimage technique has the added benefit of not needing to extract features: a region of interest is selected and the calculations can then proceed. The greatest benefit of these techniques is their speed of training, which makes the system more flexible.


6. Conclusion

The performance of six different classifiers has been compared as applied to the detection of missing fasteners. Traditional feature based classifiers were first used to train Neural Network (NN) and Neuro-Fuzzy (ANFIS) systems, with and without Principal Component Analysis (PCA). As an alternative, a non-feature based Eigenimage classifier was used to generate the inputs for the classifiers. It was found that when there was only one type of defect, both the NN and Eigenimage based classifiers, but not the ANFIS based classifier, could achieve the required performance. On the other hand, when there was more than one type of defect, only the feature based NN and ANFIS classifiers could maintain the required level of performance. Finally, given that the Eigenimage based classifier takes much less time to set up and train, it is considered superior to the feature based classifiers for practical applications.


Acknowledgments

The authors would like to thank AUTO21 and OCE for their generous support, which has allowed this research to be carried out. They would also like to thank Queen's University for its support of this research.

References

  1. Davies, E. R. (2005). Machine Vision: Theory, Algorithms, Practicalities, 3rd edition, Morgan Kaufmann, New York, NY.
  2. Fawcett, T. (2005). “An introduction to ROC analysis,” Pattern Recognition Letters, Vol. 27, pp. 861-874.
  3. Garcia, H. C., Villalobos, J. R. and Runger, G. C. (2006). “An automated feature selection method for visual inspection systems,” IEEE Transactions on Automation Science and Engineering, Vol. 3, No. 4, pp. 394-406.
  4. Gayubo, F., Gonzalez, J. L., de la Fuente, E., Miguel and Peran, J. R. (2006). “On-line machine vision systems to detect split defects in sheet-metal forming processes,” Int. Conf. on Pattern Recognition (ICPR ’06), Hong Kong, August 20-24.
  5. Hunter, J. J., Graham, J. and Taylor, C. J. (1995). “User programmable visual inspection,” Image and Vision Computing, Vol. 13, No. 8, pp. 623-628.
  6. Jackman, P., Sun, D.-W., Du, C.-J. and Allen, P. (2009). “Prediction of beef eating qualities from colour, marbling and wavelet surface texture features using homogenous carcass treatment,” Pattern Recognition, Vol. 42, pp. 751-763.
  7. Killing, J., Surgenor, B. W. and Mechefske, C. K. (2009). “A machine vision system for the detection of missing fasteners on steel stampings,” Int. Jrnl. of Advanced Manufacturing Technology, Vol. 41, No. 7-8, pp. 808-819.
  8. Kumar, A. (2003). “Neural network based detection of local textile defects,” Pattern Recognition, Vol. 36, pp. 1645-1659.
  9. Kwak, C., Ventura, A. and Tofang-Szai, K. (2000). “A neural network approach for defect identification and classification on leather fabric,” Jrnl. of Intelligent Manufacturing, Vol. 11, pp. 485-499.
  10. Lee, K.-M., Li, Q. and Daley, W. (2007). “Effects of classification methods on color-based feature detection with food processing applications,” IEEE Transactions on Automation Science and Engineering, Vol. 4, No. 1, pp. 40-51.
  11. Miles, B. C. and Surgenor, B. W. (2009). “Industrial experience with a machine vision system for the detection of missing clip,” Changeable, Agile, Reconfigurable and Virtual Production (CARV 09), Munich, Germany, October 5-7.
  12. Ohba, K. and Ikeuchi, K. (1997). “Detectability, uniqueness, and reliability of Eigen windows for stable verification of partially occluded object,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 9, pp. 1043-1048.
  13. Reynolds, M. R., Campana, C. and Shetty, D. (2004). “Design of machine vision systems for improving solder paste inspection,” ASME International Mechanical Engineering Congress and Exposition, ASME Paper IMECE2004-62133, Anaheim, California, USA, November 13-20.
  14. Roger Jang, J.-S. (1993). “ANFIS: Adaptive-network-based fuzzy inference system,” IEEE Transactions on Systems, Man and Cybernetics, Vol. 23, No. 3, pp. 665-685.
  15. Sun, J., Sun, Q. and Surgenor, B. W. (2007). “Adaptive visual inspection for assembly line parts verification,” International Conference on Intelligent Automation and Robotics (ICIAR), San Francisco, California, USA, October 24-26.
