Open access peer-reviewed chapter

Application of Neural Networks (NNs) for Fabric Defect Classification

Written By

H. İbrahim Çelik, L. Canan Dülger and Mehmet Topalbekiroğlu

Submitted: 19 October 2015 Reviewed: 01 April 2016 Published: 19 October 2016

DOI: 10.5772/63427

From the Edited Volume

Artificial Neural Networks - Models and Applications

Edited by Joao Luis G. Rosa


Abstract

Defect classification is as important as defect detection in the fabric inspection process. During manual inspection, detected defects are classified according to their types and recorded with their names. The material selected for this study is "undyed raw denim" fabric. Four commonly occurring defect types, hole, warp lacking, weft lacking, and soiled yarn, were classified using the artificial neural network (ANN) method. The defects were classified automatically according to their texture features. A texture feature extraction algorithm was developed to acquire the required values from the defective fabric samples. The texture features were used as the network input values and the defect class was obtained as the output. The defective images were classified with an average accuracy rate of 96.3%. While the hole defect was recognized with a 100% accuracy rate, the other defect types were recognized at a rate of 95%.

Keywords

  • artificial neural network (ANN)
  • fabric defect classification
  • pattern recognition
  • texture feature extraction
  • denim fabric

1. Introduction

A woven fabric is formed by interlacing warp and weft yarns at right angles. The fabric has a unique pattern construction along its length (warp direction) and width (weft direction). Deformations that damage the appearance and performance of the fabric are called "fabric defects." In the literature, it is stated that there are 235 different fabric defect types [1]. Defects are evaluated as "major" or "minor" in relation to their size and type. After weaving, the fabric is inspected for defects by a quality-control worker (Figure 1). As the fabric is wound by passing over an illuminated surface, the quality-control worker scans a width of approximately 2 m. She/he then records the type and location of the defects. The quality of the inspected fabric is evaluated by means of the 4-point system, the Graniteville system, or a 10-point system. The main concept of all these systems is that the operator counts the major and minor defects; the count is converted into point values per square meter, and the fabric is then graded as "first" or "second" quality. The defect detection procedure is time consuming and tiring. Thus, many attempts have been made to replace the traditional inspection system with automated visual systems.

Figure 1.

Traditional fabric inspection.

Artificial intelligence methods, such as fuzzy logic (FL), neural networks (NNs), and genetic algorithms (GAs), are generally preferred for fabric defect classification problems. The neural network is the most frequently used method for defect classification. The input parameters of the neural network are obtained by using different types of feature extraction methods (FEMs). Examples of textile problems treated this way range from fiber classification and color grading to yarn and fabric property prediction. Hybrid modeling applications include neuro-fuzzy, Takagi-Sugeno fuzzy, neuro-genetic, and neuro-fuzzy-genetic methods; better results are obtained when the methods are used in combination [2]. An artificial intelligence system learns the texture features and separates them into categories, so defect classification is carried out as the solution of a pattern recognition problem. Spatial filtering methods, morphological operations, noise-removing filters, and artificial intelligence methods must be combined properly for a robust defect detection and classification algorithm [2].

In the present study, the artificial neural network (ANN) method has been applied to a fabric defect classification problem. A texture feature extraction (TFE) algorithm was developed to acquire the required values from the defective fabric samples. The defects were automatically classified according to their texture features by using the ANN method. The texture features were used as the network input values and the defect class was obtained as the output. The four most common fabric defects, hole, warp lacking, weft lacking, and soiled yarn, are classified. The success of the defect classification is finally evaluated by using statistical measures.

2. Previous studies

Artificial neural network methods are used in textile studies covering all products from fiber to fabric. Fiber identification is made by using ANNs, and fiber properties are predicted from the process parameters. As far as yarns are concerned, the thrust of research is on yarn property prediction, particularly tensile properties: researchers have tried to model the structure of a yarn with the ultimate aim of predicting its properties before the yarn is actually spun. In the area of fabric property prediction, traditionally subjective areas such as handle and drape have received considerable attention. Physical properties of the fabric, such as strength, elongation, air permeability, and stiffness, are also predicted before fabric production. Many of the studies on fabrics address the identification and classification of faults in fabrics and carpets, processes that researchers have attempted to automate. As far as dyeing is concerned, the prediction of dye concentration in the dye bath and of dye recipes has been attempted [3].

The literature survey shows that artificial intelligence techniques such as artificial neural networks, fuzzy logic, and genetic algorithms are especially preferred for fabric defect detection and classification. The texture features of the fabric samples are extracted by using different methods, and these features are used as the input. The artificial intelligence system learns the texture features and separates them into categories [2, 3].

Huang and Chen have proposed a neuro-fuzzy system by combining the FL and NN methods [4]. Nine categories, normal fabric and eight kinds of fabric defects, were classified. The results of the neuro-fuzzy and NN systems were compared, and it was concluded that the neuro-fuzzy system gives better results.

Tilocca et al. have presented a method using a different optical image acquisition system and ANN to analyze the acquired data [5]. The different light sources were used to illuminate the sample in order to acquire the different features. A three-layered Feed-Forward Neural Network (FFNN) with sigmoidal activation function and back propagation (BP) was used in this work. Four different types of defects, large knot, slub, broken thread, and knot, were classified by the given system. The percentage of correctly classified patterns was found as 92%.

Kumar has presented an approach for the segmentation of local textile defects using an FFNN, together with a fast web-inspection method using a linear neural network (LNN) [6]. A twill-weave fabric sample with a miss-pick defect was tested by using the FFNN method. For the LNN method, fabric inspection images with slack-end, dirty-yarn, miss-pick, and thin-bar defects were tested, and plain-weave fabric samples with double-weft, thin-bar, broken-end, and slack-pick defects were used for real-time defect detection.

Islam et al. have developed an automated textile defect detection system based on an adaptive NN [7]. The study mainly combined thresholding techniques and an ANN for defect classification. In this system, defect types such as hole, scratch, stretch, fly yarn, dirty spot, slub, cracked point, and color bleeding are recognized immediately; the system then triggers laser beams to display the upper and lower offsets of the faulty portion. The recognition performance was stated as 72% for hole-classified faults, 65% for scratch-classified faults, 86% for other classified faults, and 83% for fault-free samples. The total performance of the system was found to be 77%.

Liu et al. have presented an article about fabric defect classification [8]. Particle Swarm Optimization (PSO) was applied in BP-NN training. PSO-BP Neural Network was applied to the classification of fabric defect. PSO algorithm was introduced into BP-NN training to determine neural network connection weight and threshold values reasonably. Three types of fabric defects such as broken warp, broken weft, and oil stain were used in this article. As a result, it was stated that PSO-BP-NN had less hidden unit numbers, a shorter training period, and a higher accuracy of classification.

Suyi et al. have presented a study in which the fabric image was decomposed into sub-images by using the DB3 wavelet transform [9]. The energy, entropy, and variance features of both the horizontal and vertical detail coefficients were extracted and used as inputs to a PSO-BP-NN for classification. Five types of defects were considered: warp-direction, weft-direction, particle, hole, and oil stain.

Suyi et al. have proposed a defect detection algorithm by combining cellular automata theory and fuzzy theory [10]. Edge detection method was used to mark the boundary of the defective area. Broken warp, double weft, broken weft, and broken-filling type defects were chosen in this study.

Jianli and Baoqi have proposed a method consisting of Gray Level Co-occurrence Matrix (GLCM), Principle Component Analysis (PCA), and NN [11]. Denoising operation was applied to the fabric image by using wavelet thresholding. Then, Laplacian operation was applied to smooth the image. GLCM of the image was obtained and 13 different features of the matrix were extracted by using Haralick method. The feature vectors were prepared for NN input. Principle component analysis method was used to reduce the dimension of the input vector. The NN was trained for four types of fabric defects: warp lacking, weft lacking, oil stain, and hole. The defects were classified successfully by using a three-layer BP-NN.

Kuo and Su have made fabric defect classification by using GLCM and NN methods [12]. The GLCM of the fabric sample images was obtained and then the features such as energy, entropy, contrast, and dissimilarity were extracted. The features were used as input vector and the defect types were introduced to the NN. After the NN was trained, it was tested by using different fabric defect images. The NN was trained for four types of fabric defects: warp lacking, weft lacking, oil stain, and hole.

Kuo and Lee have classified the warp lacking, weft lacking, hole, and oil stain defects by training a three-layer BP-NN [13]. Plain-weave white fabric was used as the material. The images of the fabric sample were acquired via an area scan camera. The image was transmitted to a computer for filtering and thresholding. After thresholding operation, the maximum length, maximum width, and gray level of the detected region were extracted as inputs for NN. The classification was achieved with high recognition for all types of defects.

Celik et al. have developed a machine vision system. Five types of fabric defects, warp lacking, weft lacking, hole, soiled yarn, and yarn flow (knot), have been detected by using different image analysis approaches: linear filtering (LF), Gabor filters (GF), and wavelet analysis (WA) [14–18]. The defect types have then been classified automatically by using the ANN method.

3. Description and fundamentals of ANN method

An artificial neuron is a computational model inspired by natural neurons. A natural neuron consists of dendrites, a cell body, an axon, and synapses (output terminals) connected to the dendrites of other neurons. The cell body is at the center of the neuron, while the dendrites and axon branches establish connections to other neurons. Activity such as remembering, thinking, and acting in response to environmental states passes from one neuron to another in the form of electrical triggers; this signaling is an electrochemical process of voltage-gated ion exchange.

Figure 2.

ANN architecture [3].

Synapses are the branches of the axon that interface with the dendrites of other neurons through certain specialized structures. The input, a triggering signal coming from other neurons, is carried by the dendrites to the cell body, and the cell body's response is transmitted along the axon [3, 14, 19].

An ANN is a simple analog of the neural structure of the human brain. Just as the basic element of the brain is the natural neuron, the basic element of every neural network is the artificial neuron. An ANN is built by arranging neurons in layers and connecting the outputs of the neurons in one layer to the inputs of the neurons in the next layer (Figure 2). Three distinct functional operations happen in this architecture. First, the inputs $x_1$ to $x_n$ are multiplied by the corresponding weights ($w_{k1}, w_{k2}, \dots, w_{kn}$) in the layer to form the weighted products. Second, the weighted input is added to the bias $w_{k0}$; the bias can be considered as a weight applied to a constant input of 1. The weighted inputs are summed to form the parameter $u_k$ (Eq. (1)):

$u_k = \sum_{j=0}^{n} w_{kj} x_j$   (E1)

Finally, the output $y_k$ is produced by passing the parameter $u_k$ through the activation function [19–21]. The activation function modifies the input signal according to its nature. The most commonly used activation functions are the linear transfer function, the log-sigmoid function, and the tangent-sigmoid function; the choice is made for each problem by trial and error.
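The weighted sum of Eq. (1) and the three activation functions named above can be sketched as follows. This is a minimal Python illustration (the chapter's work uses the MATLAB toolbox); the function name `neuron_output` is our own:

```python
import math

def neuron_output(x, w, bias, activation="tansig"):
    """Compute one artificial neuron's output: u = sum(w_j * x_j) + bias,
    then pass u through the chosen activation function."""
    u = sum(wj * xj for wj, xj in zip(w, x)) + bias
    if activation == "linear":      # linear transfer function: y = u
        return u
    if activation == "logsig":      # log-sigmoid: 1 / (1 + e^(-u))
        return 1.0 / (1.0 + math.exp(-u))
    if activation == "tansig":      # tangent-sigmoid: tanh(u)
        return math.tanh(u)
    raise ValueError(activation)

# With zero weights and bias, u = 0: logsig gives 0.5, tansig and linear give 0.
```

The activation choice matters because it bounds the output range: log-sigmoid maps to (0, 1), tangent-sigmoid to (−1, 1), while the linear function leaves the sum unbounded.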

3.1. The architecture of artificial neural network (ANN)

The neurons are combined in layers, and a network may consist of one or more layers. The neurons in different layers are connected to each other in a particular pattern; the connections between the neurons and the number of layers are generally called the architecture of the neural network [20]. Networks are categorized into two main groups according to their architecture: feed-forward networks, which have no loops, and feedback networks, which have loops because of feedback connections. Networks are also classified into subgroups according to the layer connections: single-layer networks and multilayer networks with hidden layers [22]. The network establishes a relation between input and output values. When first built, the network is untrained: the weight and bias values are selected randomly, so the output pattern totally mismatches the desired pattern. The network uses the input pattern to produce its own output pattern, the actual output is compared with the desired output (target), and the weights are changed according to the difference. The procedure continues until the patterns match or the desired matching error is reached. This process is called "training the network." Once training is achieved, the network can not only recall the input-output patterns but also interpolate and extrapolate them; the network is then called "trained" or "learned" [3, 19]. For good performance, network parameters such as the number of hidden layers, the number of units in each hidden layer, the learning rate, and the number of training cycles (known as epochs) must be optimized.
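The training loop described above, start from random weights, compare the actual output with the target, and adjust the weights by the difference, can be illustrated with the simplest possible case: a single linear neuron trained by the delta rule. This Python sketch is our own toy illustration (hypothetical data, not the chapter's network):

```python
import random

def train_neuron(samples, targets, lr=0.1, epochs=200):
    """Minimal illustration of network training: random initial weights,
    forward pass, error = desired - actual, then a proportional weight
    update (delta rule, linear activation)."""
    random.seed(0)
    n = len(samples[0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]
    b = random.uniform(-0.5, 0.5)
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            y = sum(wj * xj for wj, xj in zip(w, x)) + b   # forward pass
            err = t - y                                    # desired - actual
            w = [wj + lr * err * xj for wj, xj in zip(w, x)]
            b += lr * err
    return w, b

# Learn y = 2*x from three hypothetical examples.
w, b = train_neuron([[1.0], [2.0], [3.0]], [2.0, 4.0, 6.0])
```

After training, the neuron can also interpolate and extrapolate: an input of 4.0 (never seen in training) yields an output close to 8.0, which is the sense in which a trained network generalizes beyond the stored patterns.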

4. Texture feature extraction (TFE)

The surface or structural property of an object is defined as texture [23]. A fabric has a regular pattern in all regions, and this uniform pattern gives the fabric a regular texture. In defective regions, the uniform texture property is deformed, and a difference arises between the defective and defect-free areas. Since every defect type causes a different change in the fabric texture, fabric defects can be distinguished and classified by applying texture analysis and pattern recognition methods.

Two main approaches, statistical and spectral, are used for measuring texture properties. The statistical approach, the one most frequently used for texture analysis and classification, is based on the statistical properties of the intensity histogram. The spectral approach is based on the Fourier spectrum and is suited to describing the directionality of periodic patterns in an image [24]. The feature vector of a fabric image is composed of the first- and second-order statistical properties of the texture, and the feature vectors of all the fabric images used in the application are extracted separately. The first- and second-order statistics are given in the following sections.

4.1. First-order statistics

The first-order statistical properties consist of the average gray level (m), average contrast (σ), smoothness (R), third moment (μ₃), uniformity (U), and entropy (e) (Table 1). These properties are derived from the intensity histogram of the gray-level image [25]. Statistical moments are used to measure several of these properties; the expression for the nth moment is given by Eq. (2):

$\mu_n = \sum_{i=0}^{L-1} (z_i - m)^n \, p(z_i)$   (E2)

where $z_i$ is a random variable indicating pixel intensity, $p(z_i)$ is the histogram of the intensity levels in the region, and L is the number of possible intensity levels [25].
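The first-order features of Table 1 all follow from the normalized histogram. A minimal Python sketch (our own illustration; the region is given as a flat list of pixel intensities):

```python
import math

def first_order_stats(pixels, L=256):
    """First-order texture features from the intensity histogram of a
    gray-level region (Table 1): mean m, standard deviation sigma,
    smoothness R, third moment mu3, uniformity U, and entropy e."""
    n = len(pixels)
    hist = [0] * L
    for z in pixels:
        hist[z] += 1
    p = [h / n for h in hist]                        # p(z_i)
    m = sum(i * p[i] for i in range(L))              # average gray level
    var = sum((i - m) ** 2 * p[i] for i in range(L))
    sigma = math.sqrt(var)                           # average contrast
    R = 1.0 - 1.0 / (1.0 + var)                      # smoothness
    mu3 = sum((i - m) ** 3 * p[i] for i in range(L)) # third moment
    U = sum(pi ** 2 for pi in p)                     # uniformity
    e = -sum(pi * math.log2(pi) for pi in p if pi > 0)  # entropy
    return m, sigma, R, mu3, U, e
```

For a perfectly uniform region (every pixel the same) the measures behave as expected: sigma = 0, smoothness R = 0, uniformity U = 1, and entropy e = 0, which is why defective regions, where these values deviate, become separable.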

4.2. Second-order statistics

The second-order statistical properties include energy (f1), contrast (f2), correlation (f3), variance (f4), inverse difference moment (f5), sum average (f6), sum variance (f7), sum entropy (f8), entropy (f9), difference variance (f10), difference entropy (f11), and Information Measure of Correlation (IMC) 1 (f12) and 2 (f13) (Table 2) [2628].

Statistical property | Formula
Average gray level (mean) | $m = \sum_{i=0}^{L-1} z_i \, p(z_i)$
Average contrast (standard deviation) | $\sigma = \sqrt{\mu_2(z)}$
Smoothness | $R = 1 - 1/(1 + \sigma^2)$
Third moment | $\mu_3 = \sum_{i=0}^{L-1} (z_i - m)^3 \, p(z_i)$
Uniformity | $U = \sum_{i=0}^{L-1} p^2(z_i)$
Entropy | $e = -\sum_{i=0}^{L-1} p(z_i) \log_2 p(z_i)$

Table 1.

First-order statistics [14].

Statistical property | Formula
Energy (angular second moment) | $f_1 = \sum_i \sum_j p(i,j)^2$
Contrast | $f_2 = \sum_{n=0}^{N-1} n^2 \, p_{x-y}(n)$
Correlation | $f_3 = \left[\sum_i \sum_j (i \cdot j)\, p(i,j) - \mu_x \mu_y\right] / (\sigma_x \sigma_y)$
Variance (sum of squares) | $f_4 = \sum_i \sum_j (i - \mu)^2 \, p(i,j)$
Inverse difference moment | $f_5 = \sum_i \sum_j \frac{1}{1 + (i-j)^2} \, p(i,j)$
Sum average | $f_6 = \sum_{i=2}^{2N} i \, p_{x+y}(i)$
Sum variance | $f_7 = \sum_{i=2}^{2N} (i - f_6)^2 \, p_{x+y}(i)$
Sum entropy | $f_8 = -\sum_{i=2}^{2N} p_{x+y}(i) \log(p_{x+y}(i))$
Entropy | $f_9 = -\sum_i \sum_j p(i,j) \log(p(i,j))$
Difference variance | $f_{10} = \sum_{i=0}^{N-1} (i - \mu_{x-y})^2 \, p_{x-y}(i)$
Difference entropy | $f_{11} = -\sum_{i=0}^{N-1} p_{x-y}(i) \log(p_{x-y}(i))$
Information measure of correlation 1 | $f_{12} = (f_9 - HXY1) / \max\{HX, HY\}$
Information measure of correlation 2 | $f_{13} = (1 - \exp[-2.0(HXY2 - f_9)])^{1/2}$

Table 2.

Second-order statistics [14].

Figure 3.

GLCM calculation.

The second-order statistics are derived from the GLCM of the images, using the methodology proposed by Haralick [26]. The GLCM is a statistical method of examining texture that considers the spatial relationship of pixels [29]: it measures how often a pixel p₁ occurs in a specific spatial relationship to a pixel p₂, as shown in Figure 3. The GLCM is a square matrix of size N × N, where N is the number of gray levels, and the statistical measures are generally made from this matrix. When a single GLCM is not enough to describe the textural features of the input image, an additional "offset" parameter can be specified to allow the detection of patterns in different directions [29]. The offsets define pixel relationships of varying direction and distance, as shown in Figure 4, and an offset array is defined to create a different GLCM for each direction.

Figure 4.

Offset directions.
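The GLCM construction just described can be sketched directly: count co-occurring gray-level pairs at a chosen offset, then normalize the counts into probabilities. A minimal Python illustration (our own helper; the chapter's implementation uses MATLAB):

```python
def glcm(image, offset=(0, 1), levels=None, normalize=True):
    """Build a gray-level co-occurrence matrix: count how often gray level
    i occurs with gray level j at the given (row, col) offset.
    Offset (0, 1) pairs each pixel with its right-hand neighbor."""
    if levels is None:
        levels = max(max(row) for row in image) + 1
    dr, dc = offset
    M = [[0.0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                M[image[r][c]][image[r2][c2]] += 1
    if normalize:  # convert counts into the probability density p(i, j)
        total = sum(sum(row) for row in M)
        M = [[v / total for v in row] for row in M]
    return M

# 2x2 image with two gray levels; offset (0, 1) yields two horizontal pairs:
# (0,1) from the top row and (1,1) from the bottom row.
P = glcm([[0, 1],
          [1, 1]], offset=(0, 1))
```

Running the same function with offsets (1, 0), (1, 1), and (1, −1) produces the vertical and diagonal co-occurrence matrices of Figure 4.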

The notations given below (Eqs. (3)–(8)) are used in the formulas for the second-order statistics of the image texture [26–28]:

$p(i,j)$: the (i,j)-th entry in a normalized GLCM, called the "probability density," with N being the number of gray levels.

$p_x(i) = \sum_{j=1}^{N} p(i,j)$   (E3)
$p_y(j) = \sum_{i=1}^{N} p(i,j)$   (E4)
$p_{x+y}(k) = \sum_{i,j:\, i+j=k} p(i,j)$  for $k = 2, 3, \dots, 2N$   (E5)
$p_{x-y}(k) = \sum_{i,j:\, |i-j|=k} p(i,j)$  for $k = 0, 1, \dots, N-1$   (E6)

where $\mu$ is the mean of $p(i,j)$; $\mu_x$ and $\mu_y$ are the means of $p_x$ and $p_y$, respectively; $\mu_{x-y} = \sum_{i=0}^{N-1} i \, p_{x-y}(i)$; $\sigma_x$ and $\sigma_y$ are the standard deviations of $p_x$ and $p_y$; and HX and HY are the entropies of $p_x$ and $p_y$, respectively. HXY1 and HXY2 are calculated as follows:

$HXY1 = -\sum_{i=1}^{N} \sum_{j=1}^{N} p(i,j) \log(p_x(i)\, p_y(j))$   (E7)
$HXY2 = -\sum_{i=1}^{N} \sum_{j=1}^{N} p_x(i)\, p_y(j) \log(p_x(i)\, p_y(j))$   (E8)
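A few of the Table 2 features can be computed directly from a normalized GLCM. The Python sketch below (our own illustration, natural logarithm assumed for entropy, as in Haralick's formulation) covers energy f1, contrast f2, inverse difference moment f5, and entropy f9, using the difference marginal p_{x−y} of Eq. (6):

```python
import math

def haralick_subset(P):
    """A subset of the second-order statistics of Table 2 from a
    normalized N x N GLCM P: energy f1, contrast f2, inverse difference
    moment f5, and entropy f9."""
    N = len(P)
    f1 = sum(P[i][j] ** 2 for i in range(N) for j in range(N))
    # p_{x-y}(n): probability that two paired pixels differ by n gray levels
    p_diff = [0.0] * N
    for i in range(N):
        for j in range(N):
            p_diff[abs(i - j)] += P[i][j]
    f2 = sum(n * n * p_diff[n] for n in range(N))
    f5 = sum(P[i][j] / (1 + (i - j) ** 2) for i in range(N) for j in range(N))
    f9 = -sum(P[i][j] * math.log(P[i][j])
              for i in range(N) for j in range(N) if P[i][j] > 0)
    return f1, f2, f5, f9

# A GLCM concentrated on the diagonal describes a perfectly uniform texture:
f1, f2, f5, f9 = haralick_subset([[0.5, 0.0], [0.0, 0.5]])
```

For this diagonal GLCM the contrast f2 is 0 and the inverse difference moment f5 is 1, the extreme values a defect-free, regular weave approaches; defects push these features away from those extremes.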

4.3. Description of texture feature extraction method

An algorithm combining the discrete wavelet transform (DWT), soft wavelet thresholding (SWT), and GLCM methods is formed to extract the required texture features of the defective fabric images. The procedure consists of seven steps:

  1. The image noises are removed by means of Wiener filter to get a smoother image.

  2. The image is then decomposed into sub-images by applying the DWT at level 2 with the "db3" wavelet base. Soft wavelet thresholding (Eq. (9)) is then applied to the approximation image [30]:

    $Y = \mathrm{sgn}(X)\,(|X| - T_0)$ if $|X| \ge T_0$;  $Y = 0$ if $|X| < T_0$   (E9)

    where Y is the thresholded wavelet coefficient and $T_0$ is the threshold level. The threshold value $T_0$ is determined according to Eq. (10):

    $T_{1,2} = \mathrm{mean}[a(i,j)] \pm w \cdot \mathrm{mean}[\mathrm{std}(a(i,j))]$   (E10)

    where $T_1$ is the upper limit and $T_2$ is the lower limit of the double-thresholding process, and w is a weighting factor determined experimentally between 2 and 4. The upper and lower thresholding limits are determined by using a defect-free fabric image as a template.

  3. To distinguish the defect boundaries, the regular texture pattern should be smoothed and the defective regions accentuated. The soft-thresholded image frame is therefore convolved with the "Laplacian" operator.

  4. The first-order statistics ( Table 1 ) are then extracted from the convolved image.

  5. The woven fabric pattern is produced by interlacing the warp and weft yarns at a perpendicular angle, with the weft yarns arranged horizontally and the warp yarns vertically in the fabric. Co-occurrence matrices with offset [0, 1] are therefore formed for the horizontal and vertical detail coefficients of the defective fabric image; basically, they represent the weft-wise and warp-wise properties of the fabric.

  6. The second-order statistics are extracted from the co-occurrence matrices by using Haralick method.

  7. The feature column vector having 32 elements is then formed by using the first- and second-order statistics.

The procedure given above is repeated for each defective fabric image.
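The thresholding at the heart of step 2 can be sketched as follows. This is a minimal Python illustration of Eqs. (9) and (10), not the chapter's MATLAB implementation; in particular, the reading of Eq. (10) as "global mean ± w times the mean row-wise standard deviation of the defect-free template" is our assumption:

```python
import statistics

def soft_threshold(coeffs, T0):
    """Soft wavelet thresholding (Eq. (9)): coefficients whose magnitude
    is below T0 are zeroed; the rest shrink toward zero by T0, flattening
    the regular background so defect regions stand out."""
    def sgn(x):
        return (x > 0) - (x < 0)
    return [sgn(X) * (abs(X) - T0) if abs(X) >= T0 else 0.0 for X in coeffs]

def double_thresholds(a, w=3.0):
    """Upper/lower limits T1, T2 of Eq. (10) from a defect-free template
    image a (list of pixel rows); w is the experimentally chosen weighting
    factor (between 2 and 4). Interpretation of the mean/std terms is an
    assumption made for this sketch."""
    overall_mean = statistics.mean(v for row in a for v in row)
    spread = w * statistics.mean(statistics.pstdev(row) for row in a)
    return overall_mean + spread, overall_mean - spread   # T1, T2
```

Note the soft (rather than hard) rule: surviving coefficients are shrunk by T0, which avoids the discontinuity a hard threshold would introduce at |X| = T0.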

5. Preparation of defect database

The material selected for defect classification is "undyed denim fabric." Denim is a strong, heavy, warp-faced cotton cloth. Classical denim is made from 100% cotton and woven from coarse indigo-dyed warp yarn and gray undyed weft yarn; the weft yarn passes under two or more warp yarns, giving a three-and-one twill construction. Generally, brown or blue yarns are used in the warp and bleached yarns in the weft [31, 32]. The name denim comes from a strong fabric called serge, originally made in Nîmes, France ("serge de Nîmes"), later shortened to denim [31, 32]. Denim was first produced as working cloth: since it is strong and durable, it was used for work clothes in the 18th century and for miners' clothes in the 19th century. Mass production of denim fabric was begun in 1853 by Levi Strauss. Over time, denim came to be used for different garments such as shorts, shirts, skirts, and jackets, and for products such as hats and bags. It is estimated that 85% of the denim fabric produced is used for trousers.

The fabric sample used in this study has the specifications given in Table 3. The material was supplied by the Prestij Weaving Company in Gaziantep, Turkey, as an undyed denim fabric woven on Picanol Gammax rapier weaving machines at a production speed of 450 rpm and a production efficiency of 85%. The fabric sample is 2000 cm long and 43 cm wide and contains four types of defects: warp lacking, weft lacking, hole, and soiled yarn (Figure 5). Since the required number of defects cannot be encountered on a fabric of this length, some of the defects were made deliberately, with random widths and lengths, on the sample [14].

Pattern 3/1 (s) twill
Warp yarn number Ne 16/1 Open end
Weft yarn number Ne 10/1 Open end
Warp sett 45 ends/cm
Weft sett 20 picks/cm
Warp crimp (%) 12
Weft crimp (%) 1.5
Weight per square meter (with sizing) 323 g/m2
Cover factor 33.8
Reed number 110 dents/10cm

Table 3.

Fabric sample specifications [14].

Figure 5.

Fabric defect samples.

The image frames of the defective fabric used for network training and testing were acquired with a prototype machine vision system [14] (Figure 6). The system consists of an industrial fabric inspection machine, a camera system, camera attachment equipment, an additional lighting unit, a rotary encoder, and a host computer. The camera system includes a charge-coupled device (CCD) line-scan camera, a frame grabber card, a lens, and a Camera Link cable. The fabric sample was placed on the fabric inspection machine; as the fabric was wound, the image frames were captured and stored on the computer. The fabric motion and the camera exposure are synchronized with the rotary encoder via the frame grabber card.

Figure 6.

Machine vision system for fabric inspection.

6. Case study: neural network architecture for fabric defect classification

Among the artificial intelligence methods used in studies on the fabric defect classification problem [9–13], the artificial neural network is the most commonly preferred. In this study, four defect types, hole, warp lacking, weft lacking, and soiled yarn, are classified by using the MATLAB® Neural Network Toolbox. The toolbox provides tools for neural fitting, neural clustering, pattern recognition, and general network design. The pattern recognition tool, which classifies inputs into a set of target categories, is used for the fabric defect classification problem. It builds a two-layer feed-forward network (Figure 7) trained with scaled conjugate gradient backpropagation; the tan-sigmoid transfer function is used in both the hidden and output layers [21].

The input and target matrices are formed for each defect type. Twenty-five defective fabric images of each defect type are used for feature extraction, resulting in an input matrix of size 32 × 100. Each feature vector of the input matrix is assigned a target vector of size 4 × 1 (a binary vector), as defined in Table 4.

Figure 7.

Pattern Recognition Tool GUI of MATLAB® NN toolbox.

Defect type Column vector
Hole [1; 0; 0; 0]
Warp lacking [0; 1; 0; 0]
Weft lacking [0; 0; 1; 0]
Soiled yarn [0; 0; 0; 1]

Table 4.

Defect type and corresponding vector definition.
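The one-hot target encoding of Table 4, and the reverse mapping from a network output back to a defect name, can be sketched in a few lines of Python (our own helpers, illustrating the scheme rather than the toolbox's internals):

```python
DEFECT_CLASSES = ["hole", "warp lacking", "weft lacking", "soiled yarn"]

def target_vector(defect):
    """Binary target column of Table 4: a 4-element one-hot vector with
    a 1 at the position of the defect class."""
    v = [0] * len(DEFECT_CLASSES)
    v[DEFECT_CLASSES.index(defect)] = 1
    return v

def decode(output):
    """Map a network output vector back to a defect name by taking the
    class with the largest activation."""
    return DEFECT_CLASSES[output.index(max(output))]
```

Decoding by the largest activation is what makes near-one outputs such as [0.9997, 0.0009, 0.0000, 0.0000] count as correct "hole" classifications even though they are not exactly binary.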

A target vector is formed for each defective fabric image, so the target matrix has a size of 4 × 100. The input and target matrices are introduced into the Neural Network Pattern Recognition Tool (nprtool) (Figure 8). The input data set is randomly divided into 80, 10, and 10% training, validation, and testing samples, respectively (Figure 9). The number of neurons is then determined for the hidden layer of the network (Figure 10). The number of neurons in the output layer is determined automatically from the number of elements in the target vector; it is four in this study, since four defect types are to be classified. After many trials, the best results are obtained with 37 neurons in the hidden layer. The network is finally trained using scaled conjugate gradient backpropagation (Figure 11), which adjusts the biases and weights so as to minimize the mean square error (MSE). The MSE of the training, testing, and validation operations, calculated by using Eq. (11), is 0.0021, 0.00014, and 0.00027, respectively (Figure 12):

$E = \sum_{i} (t_i - o_i)^2$   (E11)

where $t_i$ is the desired output and $o_i$ is the actual output of neuron i in the output layer [10].
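The error of Eq. (11) and the random 80/10/10 split can both be sketched briefly. This Python illustration is our own (the chapter uses nprtool); the MSE shown here averages the per-sample squared-error sums of Eq. (11) over the data set:

```python
import random

def mse(targets, outputs):
    """Mean square error: for each sample, sum (t_i - o_i)^2 over the
    output neurons (Eq. (11)), then average over the samples."""
    total = 0.0
    for t_vec, o_vec in zip(targets, outputs):
        total += sum((t - o) ** 2 for t, o in zip(t_vec, o_vec))
    return total / len(targets)

def split_indices(n, seed=0):
    """Randomly divide n sample indices into 80/10/10% training,
    validation, and test subsets, as done in the nprtool workflow."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    a, b = int(0.8 * n), int(0.9 * n)
    return idx[:a], idx[a:b], idx[b:]
```

The validation subset is what stops training before overfitting: its error is monitored alongside the training error, while the held-out test subset gives the unbiased figures reported above.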

Figure 8.

Input and target matrix introduction.

Figure 9.

Random division of data set.

Figure 10.

Number of neurons in the hidden layer.

Figure 11.

Network training.

6.1. Classification accuracy of the network

The classification accuracy of the network is specified by confusion matrices (Figure 12) and receiver operating characteristic (ROC) curves (Figure 13) for training, testing, validation, and the overall data (the three sets combined). In the confusion matrices, the green squares indicate correct responses and the red squares indicate incorrect responses; the lower-right blue squares give the overall accuracies. The more responses fall into the green squares, the higher the classification accuracy of the network. As seen in Figure 12, 100% correct responses are obtained for all confusion matrices with this network.

Figure 12.

Confusion matrices [14].

The receiver operating characteristic curve is useful for assessing the accuracy of predictions [33]. The ROC curve illustrates the classification performance of a binary system as its discrimination threshold is varied. It plots the true-positive rate (TPR) against the false-positive rate (FPR); TPR is also known as sensitivity, and FPR is one minus the specificity. Four possible outcomes exist:

  1. If the sample is positive and it is classified as positive, it is counted as a true positive (TP).

  2. If the sample value is positive and it is discriminated as negative, it is counted as a false negative (FN).

  3. If the sample is negative and it is detected as negative, it is counted as a true negative (TN).

  4. If the sample has a negative value and it is detected as positive, it is counted as a false positive (FP).

As the curve gets closer to the upper-left corner, the classification accuracy increases; a perfect test shows points in the upper-left corner, with 100% sensitivity and 100% specificity. When the true-positive rate is 1, the true positives are perfectly separated from the false positives. For this classification problem, an ROC value of 1 is obtained for training, testing, validation, and the overall data (Figure 13): the network performs almost perfectly [34].
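The four outcomes listed above determine one ROC point per threshold. A minimal Python sketch (our own illustration with hypothetical labels and scores) counts TP, FN, TN, and FP at a threshold and returns the (FPR, TPR) point:

```python
def roc_point(labels, scores, threshold):
    """Count TP, FN, TN, FP at one threshold and return the ROC point
    (FPR, TPR). labels are 1 (positive) / 0 (negative); a sample is
    classified positive when its score is >= threshold."""
    tp = fn = tn = fp = 0
    for y, s in zip(labels, scores):
        pred = 1 if s >= threshold else 0
        if y == 1 and pred == 1:
            tp += 1          # true positive
        elif y == 1 and pred == 0:
            fn += 1          # false negative
        elif y == 0 and pred == 0:
            tn += 1          # true negative
        else:
            fp += 1          # false positive
    tpr = tp / (tp + fn)     # sensitivity
    fpr = fp / (fp + tn)     # 1 - specificity
    return fpr, tpr
```

Sweeping the threshold from high to low moves the point from (0, 0) toward (1, 1); a classifier whose positives always score above its negatives passes through the ideal corner (0, 1), which is the "ROC value 1" behavior reported above.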

Figure 13.

Receiver Operating Characteristic (ROC) curves.

6.2. Defect classification software

Finally, a user interface is prepared for the classification of the defective fabric images. It is automatically used to determine the defect type of a selected defective image. The user interface consists of three buttons as “Exit,” “Reset Data,” and “Load-Defective Image” as shown in Figure 14 .

The exit button closes the window. After the required image samples have been classified, the counters of the defect classes can be reset to zero by using the "Reset Data" button (Figure 15), and classification can then continue with a different folder of defective images. The image to be classified is selected from a directory by using the "Load-Defective Image" button: when this button is activated, a folder browser window opens and the required fabric image is selected (Figure 16). The selected image is passed to the feature extraction algorithm, the statistical texture features of the image are extracted, and the resulting feature column vector is simulated with the network built above. The selected image is displayed on the screen with the defect type as its title, and the related defect-type counter is increased by one. These steps are shown in Figures 17–20 [14].

Figure 14.

Defect classification program user interface.

Figure 15.

Reset the counters.

Figure 16.

Selection of the defective image.

Figure 17.

Classification of hole defect.

Figure 18.

Classification of warp-lacking defect.

Figure 19.

Classification of weft-lacking defect.

Figure 20.

Classification of soiled yarn defect.
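
The button-driven workflow described above can be sketched as follows. The feature extraction step and the trained ANN are replaced here by a hypothetical stub classifier (it takes a four-element score vector directly), so only the control flow of the “Reset Data” and “Load-Defective Image” buttons is shown:

```python
# Sketch of the control flow behind the user interface buttons.
# The feature extraction and the trained network are stand-ins
# (hypothetical), so the class is runnable on its own.

DEFECT_CLASSES = ["hole", "warp lacking", "weft lacking", "soiled yarn"]

class DefectClassifierUI:
    def __init__(self):
        # One counter per defect class, as shown on the interface.
        self.counters = {name: 0 for name in DEFECT_CLASSES}

    def reset_data(self):
        """'Reset Data' button: set all defect counters back to zero."""
        for name in self.counters:
            self.counters[name] = 0

    def simulate_network(self, features):
        # Stub standing in for the trained ANN: the "features" are
        # taken directly as the four output-neuron scores.
        return features

    def load_defective_image(self, features):
        """'Load-Defective Image' button: classify one sample, title it
        with the winning class, and increment that class's counter."""
        outputs = self.simulate_network(features)
        defect = DEFECT_CLASSES[outputs.index(max(outputs))]
        self.counters[defect] += 1
        return defect

ui = DefectClassifierUI()
print(ui.load_defective_image([0.98, 0.01, 0.00, 0.01]))  # hole
```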

Advertisement

7. Statistical evaluation of network testing results

The defective fabric images are stored and then used for network training and testing. The features of 25 defective fabric images are extracted for each defect type to form the input matrix of the network. After the neural network is trained successfully, 20 samples of each defect type are used to test the network classification accuracy.

The neural network simulation results for the hole, warp-lacking, weft-lacking, and soiled yarn defects are given in Tables 5–8 , respectively. The overall accuracy rate of each defect type is then presented in Table 9 and Figure 21 [14].

The defective images are classified with an average accuracy rate of 96.3%. The hole defect is recognized with 100% accuracy, while each of the other three classes is recognized with a rate of 95%. One of the weft-lacking images is recognized as a hole because many weft yarns are removed and large spaces occur between the remaining yarns. One of the soiled yarn images is recognized as warp lacking because it contains a large vertical soil mark. Since only a small part of a defective fabric image differs from the regular pattern, classifying these defects is more difficult than classifying completely different textures.

Sample no.  Hole  Warp lacking  Weft lacking  Soiled yarn
1 1.0000 0.0090 0.0000 0.0000
2 1.0000 0.0004 0.0000 0.0000
3 1.0000 0.0000 0.0000 0.0000
4 1.0000 0.0000 0.0000 0.0000
5 0.9999 0.0001 0.0000 0.0000
6 1.0000 0.0000 0.0000 0.0000
7 0.9750 0.0000 0.0000 0.0495
8 0.9995 0.0000 0.0000 0.0009
9 0.9995 0.0000 0.0000 0.0005
10 0.9997 0.0000 0.0000 0.0022
11 0.6396 0.0033 0.0000 0.0036
12 0.8833 0.0003 0.0000 0.0311
13 1.0000 0.0009 0.0000 0.0000
14 0.9997 0.1108 0.0000 0.0000
15 0.8559 0.0087 0.0001 0.0000
16 0.9993 0.0004 0.0000 0.0000
17 1.0000 0.0001 0.0001 0.0000
18 0.9992 0.0025 0.0001 0.0000
19 0.8551 0.0020 0.3635 0.0000
20 1.0000 0.0000 0.0000 0.0001

Table 5.

Classification results of “hole” defect.
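
In Tables 5–8, each row lists the four output-neuron activations for one test image, in the class order hole, warp lacking, weft lacking, soiled yarn; the predicted class is the neuron with the largest activation. A minimal check of this decision rule on two rows copied from Table 5:

```python
# Decision rule behind Tables 5-8: the predicted class is the output
# neuron with the highest activation (argmax over the four scores).

CLASSES = ["hole", "warp lacking", "weft lacking", "soiled yarn"]

def predict(outputs):
    return CLASSES[outputs.index(max(outputs))]

# Rows copied from Table 5 (hole-defect samples): even the weakest
# responses still peak at the "hole" neuron.
sample_11 = [0.6396, 0.0033, 0.0000, 0.0036]
sample_19 = [0.8551, 0.0020, 0.3635, 0.0000]

print(predict(sample_11))  # hole
print(predict(sample_19))  # hole
```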

Sample no.  Hole  Warp lacking  Weft lacking  Soiled yarn
1 0.0084 0.9601 0.0002 0.0000
2 0.0024 0.7283 0.0012 0.0000
3 0.0000 0.8354 0.1381 0.0000
4 0.0000 0.9938 0.0033 0.0000
5 0.0000 0.9997 0.0000 0.0001
6 0.0030 0.1955 0.0035 0.0001
7 0.0369 0.9362 0.0000 0.0000
8 0.0000 0.9824 0.0005 0.0024
9 0.0000 0.8589 0.5104 0.0000
10 0.0000 0.9983 0.0000 0.0003
11 0.0000 0.7221 0.0000 0.5606
12 0.0000 0.9999 0.0000 0.0000
13 0.0000 0.8850 0.0225 0.0000
14 0.0285 0.9988 0.0000 0.0000
15 0.0013 0.9998 0.0000 0.0000
16 0.0000 0.9910 0.0098 0.0000
17 0.0010 0.0099 0.0367 0.0042
18 0.0013 0.9238 0.0000 0.0001
19 0.0000 0.9993 0.0011 0.0000
20 0.0000 0.8986 0.1990 0.0000

Table 6.

Classification results of “warp-lacking” defect.

Sample no.  Hole  Warp lacking  Weft lacking  Soiled yarn
1 0.0001 0.0015 0.9996 0.0000
2 0.0001 0.0017 0.9999 0.0000
3 0.9989 0.0003 0.0265 0.0000
4 0.0037 0.0127 0.9974 0.0000
5 0.0249 0.0467 0.9831 0.0000
6 0.0000 0.0014 1.0000 0.0000
7 0.0020 0.0094 0.2899 0.0000
8 0.0000 0.5929 0.8806 0.0000
9 0.0000 0.0001 1.0000 0.0000
10 0.0122 0.0476 0.8142 0.0000
11 0.0145 0.0004 0.9565 0.0000
12 0.0000 0.0000 1.0000 0.0000
13 0.0000 0.0001 1.0000 0.0000
14 0.0003 0.0010 1.0000 0.0000
15 0.0241 0.0846 0.5240 0.0000
16 0.0000 0.0001 1.0000 0.0000
17 0.0000 0.0036 1.0000 0.0000
18 0.0011 0.0007 0.9999 0.0000
19 0.0005 0.0006 0.9999 0.0000
20 0.0001 0.0007 0.9997 0.0000

Table 7.

Classification results of “weft-lacking” defect.

Sample no.  Hole  Warp lacking  Weft lacking  Soiled yarn
1 0.0000 0.0000 0.0000 1.0000
2 0.0001 0.0024 0.0000 0.9991
3 0.0000 0.0003 0.0000 1.0000
4 0.0000 0.0002 0.0000 1.0000
5 0.0000 0.0233 0.0000 0.9897
6 0.0033 0.0916 0.0000 0.4757
7 0.0000 0.0010 0.0000 0.9999
8 0.0000 0.1733 0.0000 0.7789
9 0.0000 0.0392 0.0000 0.8368
10 0.0000 0.0008 0.0000 0.9995
11 0.0000 0.0000 0.0000 1.0000
12 0.0000 0.0000 0.0000 1.0000
13 0.0000 0.1266 0.0000 0.8657
14 0.0064 0.0073 0.0000 0.9998
15 0.0002 0.7885 0.0000 0.1102
16 0.0009 0.0000 0.0000 1.0000
17 0.0000 0.0075 0.0000 0.9987
18 0.0004 0.0000 0.0000 1.0000
19 0.0005 0.0149 0.0000 0.9924
20 0.0000 0.0001 0.0000 1.0000

Table 8.

Classification results of “soiled yarn” defect.

Defect type Hole Warp lacking Weft lacking Soiled yarn Number of samples Classification accuracy (%)
Hole 20 0 0 0 20 100
Warp lacking 0 19 1 0 20 95
Weft lacking 1 0 19 0 20 95
Soiled yarn 0 1 0 19 20 95

Table 9.

Defect classification accuracy rates.
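
The accuracy rates in Table 9 follow directly from its confusion matrix (rows: true class, columns: predicted class); the short script below recomputes the per-class rates and the 96.3% average reported in the text:

```python
# Recomputing Table 9: per-class and average accuracy from the
# confusion matrix (rows: true class, columns: predicted class).

CLASSES = ["hole", "warp lacking", "weft lacking", "soiled yarn"]
confusion = [
    [20, 0, 0, 0],   # hole: all 20 correct
    [0, 19, 1, 0],   # warp lacking: 1 taken for weft lacking
    [1, 0, 19, 0],   # weft lacking: 1 taken for hole
    [0, 1, 0, 19],   # soiled yarn: 1 taken for warp lacking
]

per_class = [row[i] / sum(row) for i, row in enumerate(confusion)]
average = sum(per_class) / len(per_class)   # 77/80 = 96.25% ~ 96.3%

for name, rate in zip(CLASSES, per_class):
    print(f"{name}: {rate:.0%}")
print(f"average: {100 * average:.2f}%")
```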

Figure 21.

Classification accuracy rates of defects [14].


8. Conclusion

The experimental setup developed in this study has an operation speed of 7.5 m/min, and it can detect defects as small as 0.5 mm. The system was designed for the inspection of denim fabrics. The defective fabric images acquired via the developed machine vision system were classified by using the ANN method; the classification was treated as a pattern recognition problem based on the texture properties of the images. The texture features of each defective image were extracted by using an algorithm based on the DWT, SWT, and GLCM methods. Four defect types, hole, warp lacking, weft lacking, and soiled yarn, were classified. The first- and second-order statistical properties were extracted, and a feature vector was formed for each defective fabric image. The feature extraction algorithm was applied to 25 images of each defect type, so an input matrix of size 32 × 100 was obtained. The target vector, a binary vector T of size 4 × 1, indicated the class to which each input vector was assigned. The network was built by using the MATLAB® Neural Network Toolbox and its Pattern Recognition Tool.
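
As a small illustration of the target layout described above, the sketch below builds a 4 × 100 one-hot target matrix, assuming the samples are grouped by class (25 per class) in the order hole, warp lacking, weft lacking, soiled yarn:

```python
# Sketch: one-hot target matrix (4 x 100) for the four defect classes,
# 25 samples per class, grouped by class in the assumed order below.

CLASSES = ["hole", "warp lacking", "weft lacking", "soiled yarn"]
SAMPLES_PER_CLASS = 25
N = len(CLASSES) * SAMPLES_PER_CLASS   # 100 columns in total

# targets[i][j] = 1 if column j (one image) belongs to class i.
targets = [
    [1 if j // SAMPLES_PER_CLASS == i else 0 for j in range(N)]
    for i in range(len(CLASSES))
]

print(len(targets), "x", len(targets[0]))   # 4 x 100
print([row[30] for row in targets])         # column 30 -> warp lacking
```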

Two layers were included in the network. The best results were obtained with 37 neurons in the hidden layer after many trials. The number of neurons in the output layer was determined automatically from the number of elements in the target vector; it was four in this study. The network was finally trained by using the scaled conjugate gradient BP method. After the network was trained successfully, 20 samples of each defect type were used to test the classification accuracy. The defective images were classified with an average accuracy rate of 96.3%: the hole defect was recognized with a 100% accuracy rate, and the others with a rate of 95%.


Acknowledgments

This study was supported by the Gaziantep University Scientific Research Projects Management Unit under the project “Development an Intelligent System for Fabric Defect Detection” (project number MF.10.12). The authors also thank the Prestij Weaving Company in Gaziantep/Turkey for providing samples during the tests.

References

  1. Stojanovic, R., Mitropulos, P., Koulamas, C., Karayiannis, Y., Koubias, S., and Papadopoulos, G. (2001). Real-time vision-based system for textile fabric inspection. Real-Time Imaging. 7, 507–518.
  2. Guruprasad, R. and Behera, B. K. (2010). Soft computing in textiles. Indian Journal of Fibre and Textile Research. 35, 75–84.
  3. Chattopadhyay, R. and Guha, A. (2004). Artificial neural networks: applications to textiles. Textile Progress. 35(1), 1–42.
  4. Huang, C. C. and Chen, C. I. (2001). Neural-fuzzy classification for fabric defects. Textile Research Journal. 71(3), 220–224.
  5. Tilocca, A., Borzone, P., Carosio, S., and Durante, A. (2002). Detecting fabric defects with a neural network using two kinds of optical patterns. Textile Research Journal. 72(6), 745–750.
  6. Kumar, A. (2003). Neural network based detection of local textile defects. Pattern Recognition. 36, 1645–1659.
  7. Islam, A., Akhter, S., and Mursalin, T. E. (2006). Automated textile defect recognition system using computer vision and artificial neural networks. World Academy of Science, Engineering and Technology. 13, 1–6.
  8. Liu, S. Y., Zhang, L. D., Wang, Q., and Liu, J. J. (2008). BP neural network in classification of fabric defect based on particle swarm optimization. In: Proceedings of the 2008 International Conference on Wavelet Analysis and Pattern Recognition, Hong Kong. 216–220.
  9. Suyi, L., Liu Jingjing, L., and Leduo, Z. (2008). Classification of fabric defect based on PSO-BP neural network. In: Proceedings of the 2008 Second International Conference on Genetic and Evolutionary Computing, Hubei, China. 137–140.
  10. Suyi, L., Qian, W., and Heng, Z. (2009). Edge detection of fabric defect based on fuzzy cellular automata. In: Intelligent Systems and Applications, Wuhan, China. 1–3.
  11. Jianli, L. and Baoqi, Z. (2007). Identification of fabric defects based on discrete wavelet transform and back-propagation neural network. Journal of the Textile Institute. 98(4), 355–362.
  12. Kuo, C. J. and Su, T. (2003). Gray relational analysis for recognizing fabric defects. Textile Research Journal. 73(5), 461–465.
  13. Kuo, C. J. and Lee, C. J. (2003). A back-propagation neural network for recognizing fabric defects. Textile Research Journal. 73(2), 147–151.
  14. Çelik, H. I. (2013). Development of an intelligent fabric defect inspection system. Ph.D. thesis, Mechanical Engineering, University of Gaziantep Graduate School of Natural & Applied Sciences.
  15. Çelik, H. İ., Topalbekiroğlu, M., and Dülger, L. C. (2015). Real-time denim fabric inspection using image analysis. Fibers and Textiles in Eastern Europe. 3(111), 85–90.
  16. Çelik, H. İ., Dülger, L. C., and Topalbekiroğlu, M. (2014). Fabric defect detection by using linear filtering and morphological operations. Indian Journal of Fibre & Textile Research. 39, 254–259.
  17. Çelik, H. I., Dülger, L. C., and Topalbekiroğlu, M. (2013). Developing an algorithm for defect detection of denim fabric: Gabor filter method. Tekstil ve Konfeksiyon. 23(3), 255–260.
  18. Çelik, H. İ., Dülger, L. C., and Topalbekiroğlu, M. (2013). Development of a machine vision system: real-time fabric defect detection and classification with neural networks. The Journal of the Textile Institute. 105(6), 575–585.
  19. Graupe, D. (2007). Principles of artificial neural networks. 2nd edition. Advanced Series on Circuits and Systems. Vol. 6. Singapore: World Scientific Publishing Co. Pte. Ltd.
  20. Munakata, T. (2008). Fundamentals of the new artificial intelligence: neural, evolutionary, fuzzy and more. 2nd edition. London: Springer-Verlag London Limited.
  21. Beale, M. H. and Hagan, M. T. (2012). Neural network toolbox™ user’s guide. Natick, MA: The MathWorks, Inc.
  22. Jain, A. K., Mao, J., and Mohiuddin, K. M. (1996). Artificial neural networks: a tutorial. Computer. 29(3), 31–44.
  23. Sonka, M., Hlavac, V., and Boyle, R. (2008). Image processing, analysis and machine vision. International student edition. Toronto, Ontario: Thomson Corporation.
  24. Heijden, F. V., Duin, R. P. W., Ridder, D., and Tax, D. M. J. (2004). Classification, parameter estimation and state estimation: an engineering approach using MATLAB. West Sussex, UK: John Wiley & Sons, Ltd.
  25. Gonzalez, R. C., Woods, R. E., and Eddins, S. L. (2004). Digital image processing using MATLAB. Upper Saddle River, NJ: Prentice-Hall Inc.
  26. Haralick, R. M. (1979). Statistical and structural approaches to texture. Proceedings of the IEEE. 67(5), 786–804.
  27. Alam, F. I. and Uddin Faruqui, R. U. (2011). Optimized calculations of Haralick texture features. European Journal of Scientific Research. 50(4), 543–553.
  28. Haralick, R. M., Shanmugam, K., and Dinstein, I. (1973). Textural features for image classification. IEEE Transactions on Systems, Man and Cybernetics. SMC-3(6), 610–621.
  29. The MathWorks. (2009). Image processing toolbox™ 6 user’s guide. Natick, MA: The MathWorks, Inc.
  30. Misiti, M., Misiti, Y., Oppenheim, G., and Poggi, J. M. (1997). Wavelet toolbox for use with MATLAB. Natick, MA: The MathWorks, Inc.
  31. Wikipedia, the free encyclopedia. Denim. Available at: https://en.wikipedia.org/wiki/Denim. Accessed 17.02.2016.
  32. Gokarneshan, N. (2004). Fabric structure and design. New Delhi: New Age International (P) Ltd.
  33. Wikipedia, the free encyclopedia. Receiver operating characteristic. Available at: http://en.wikipedia.org/wiki/Receiver_operating_characteristic. Accessed 15.02.2016.
  34. Fawcett, T. (2006). An introduction to ROC analysis. Pattern Recognition Letters. 27, 861–874.
