
Image Recognition on Impact Perforation Test by Neural Network

Written By

Takehiko Ogawa

Published: 01 December 2009

DOI: 10.5772/7060

From the Edited Volume

Image Processing

Edited by Yung-Sheng Chen


1. Introduction

It is important to estimate the perforation characteristics of materials in the design of structures that may collide with a flying object. Concretely, the perforation limit velocity and the residual velocity of a material are evaluated by performing an impact perforation test (Backman & Goldsmith, 1978; Zukas, 1990). A method for evaluating these parameters using successive images of the perforation process captured with a super-high-speed camera system has been researched in the mechanical and material engineering fields (Kasano, 1999; Kasano et al., 2001). In this method, a steel ball is shot into the material specimen while successive images are captured, and the characteristics of the material are estimated from the location of the steel ball and the behavior of the specimen. However, it is often difficult to observe these parameters because of scattered fragments. Fragment scattering is especially pronounced in acrylic composites, ceramics, and their composite materials, which have high hardness and heat resistance. Therefore, it can be difficult to evaluate the characteristics of the material accurately.

On the other hand, neural networks are often used in image recognition (Bishop, 2006; Ripley, 2007). Because of their robustness and noise tolerance, neural networks can be expected to recognize an imperfect image correctly. A multilayered neural network, in particular, can distinguish an unknown image by developing relationships between the input image and the distinction category through its mapping ability. Neural networks have been applied to impact perforation image processing in order to estimate the position of steel balls (Ogawa et al., 2003; Ogawa et al., 2006; Ohkubo et al., 2007).

The impact perforation image includes scattered fragments; its obscurity and brightness vary with the experimental conditions, the specimen material, and the lighting. In a multilayer neural network, the input-output relationship is determined beforehand using training data, and distinction is performed using the obtained input-output model. In general, the data used in learning and those used for actual distinction differ in material and environment. A robust neural network can cope with small differences between the learning and distinction data. To improve the estimation accuracy, it is necessary to decrease the differences between the training and distinction data and to clarify the features distinguishing the groups in the training data.

In this study, we examined the addition of an image preprocessing method and an improved learning method for impact perforation image processing using a neural network. The objective was to improve the distinction ability by clarifying the features of the image and decreasing the differences between the training and distinction data. First, preprocessing of the input image by gamma correction is introduced to clarify the features distinguishing the data groups in the input image. Then, an incremental learning method, in which part of the distinction data is added to the learning data, is introduced to reduce the difference between the input data sets. The effects of these two methods are confirmed by simulations carried out using actual impact perforation images.


2. Impact perforation test

The impact perforation test is a method in which a flying object such as a steel ball is shot into a material specimen to examine the perforation and destruction process. The system for the impact perforation test includes a super-high-speed camera used to observe the perforation process. With this system, we can observe the state before and after impact perforation by examining successive images.

When a steel ball perforates a plate, the residual velocity of the steel ball after perforation is determined by several characteristics: the strength of the plate, its rigidity, initial shape, size, etc. For example, we can estimate the material properties using the impact velocity and residual velocity of the steel ball and the geometric properties of the material. Kasano et al. have researched methods for evaluating material properties by performing the impact perforation test based on the abovementioned principle. The residual velocity V_R is expressed by

V_R = F(a_1, a_2; V_i)    (1)

where a_1, a_2, and V_i represent the material properties, the geometric properties of the material, and the initial velocity of the steel ball, respectively. We can estimate the material property a_1 if we know the initial velocity V_i and residual velocity V_R of the steel ball and the geometric property a_2. The perforation limit velocity is one of the important material properties and can be estimated from the initial and residual velocities obtained in the impact perforation test (Kasano et al., 2001).

As a measuring method for the velocity of the steel ball, we used the high-speed camera method because the fragments of the perforated specimen have little influence on it. In this system, the steel ball launched from the shooting device perforates a plate of a single material in a high-temperature furnace, and pictures of the event are taken with the high-speed camera, which can capture four successive images. The experimental setup is shown in Fig. 1. We can measure the velocity of the steel ball from successive images of the perforation. However, the location of the steel ball often cannot be measured precisely because fragments of the destroyed material obscure it in the perforation image of an actual material. In the current system, image classification for the impact perforation test is done visually from successive images, which is difficult because of the material fragments. Therefore, precise classification of the steel ball, background, and fragments is necessary. We propose using a neural network to classify these components in images degraded by specimen fragments, and we show that this classification can be performed accurately.
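As a simple illustration of the velocity measurement from successive frames, the following sketch converts the ball's pixel displacement between two frames into a velocity; the pixel scale and positions here are hypothetical values, not taken from the chapter's experiments:

```python
def ball_velocity(x1_px, x2_px, mm_per_px, dt_us):
    """Velocity (m/s) of the ball from its horizontal centers in two successive frames."""
    # pixel displacement -> mm -> m, divided by the interframe time in seconds
    return (x2_px - x1_px) * mm_per_px * 1e-3 / (dt_us * 1e-6)

# hypothetical values: 100 px displacement at 0.1 mm/px over a 50 us interframe time
v = ball_velocity(100, 200, 0.1, 50)   # 200 m/s
```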

Successive images of the impact perforation test with actual materials, silicon nitride (Si3N4) and alumina (Al2O3) specimens, are shown in Fig. 2. The numbers in the figure show the photographic order. The image of the Si3N4 specimen is clear enough to classify visually. However, classifying the image of the alumina specimen is very difficult because of the scattered fragments. The aim of this study is to localize the steel ball in these images accurately. The plate size, specimen thickness, impact velocity of the steel ball, and interframe time of the successive images are shown in Table 1. The steel ball was 5 mm in diameter and 0.5 g in mass.

Figure 1.

Experimental setup of the impact perforation test.

Figure 2.

Successive images obtained using the super-high-speed camera in the impact perforation test performed with (a) silicon nitride (Si3N4) and (b) alumina (Al2O3) specimens.

Material                                    Si3N4       Al2O3
Image size (pixels)                         1172 x 770  1172 x 770
Specimen size (mm)                          80 x 50     80 x 80
Specimen thickness (mm)                     1.5         1.5
Initial velocity of the steel ball (m/s)    104.8       224.6
Interframe time of successive images (μs)   50          50

Table 1.

Parameters of the impact perforation test and its image.


3. Neural network for impact perforation image processing

In this study, we used a neural network to localize the steel ball in the impact perforation image. We used a three-layer feed-forward network. The inputs are preprocessed sub-images, and the outputs are the distinction results for those sub-images. The sub-images are sequentially extracted from the impact perforation image and provided to the network, which distinguishes whether each sub-image contains the steel ball. The network estimates the steel ball's location from this sequential distinction.

The actual input to the network is the pixel values, normalized to [0, 1], of a sub-image extracted from the whole image and compressed into 8 × 8 pixels. Image compression based on the two-dimensional wavelet transform was used in this study. The wavelet transform concentrates most of the signal energy in the coarse-scale expansion coefficients (Goswami & Chan, 1999), so it compresses images with little loss of information. With the image sizes used in this study, the effect of using wavelet compression was not large. However, because the choice of compression method affects the sub-image size and related settings, wavelet compression, with the abovementioned advantage, was used. Concretely, the Daubechies-8 basis function is used: the image is octave-decomposed in two horizontal and two vertical steps, and the lowest-frequency element (the LLLL element) is used as the compressed image. In general, when a filter is applied to an image signal, distortion at the edges of the image can be a problem. Filter processing with an extended processing area was not used for the sub-images because no remarkable distortion was seen; however, it may become necessary depending on the image size, the kind of filter, and so on.
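The compression step can be sketched as follows. For brevity this sketch uses the Haar basis rather than the Daubechies-8 basis used in the chapter; two octave steps reduce a 32 × 32 sub-image to the 8 × 8 approximation (LLLL) element, and all variable names and the random test image are illustrative:

```python
import numpy as np

def haar_ll(img, levels=2):
    """Approximation (LL...LL) element of a 2-D Haar wavelet decomposition."""
    out = img.astype(float)
    for _ in range(levels):
        # one octave step: the Haar approximation of each 2x2 block is its sum / 2
        out = (out[0::2, 0::2] + out[0::2, 1::2]
               + out[1::2, 0::2] + out[1::2, 1::2]) / 2.0
    return out

sub = np.random.default_rng(0).random((32, 32))  # stand-in 32x32 sub-image
ll = haar_ll(sub)                                # 8x8 compressed representation
x = ll / ll.max()                                # normalize to [0, 1] for the network
```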

The network architecture is shown in Fig. 3. The input-output characteristic of the neurons in the hidden and output layers is the sigmoid function expressed by

f(u) = 1 / (1 + e^(-u))    (2)

where u denotes the weighted sum of the inputs. The neurons in the hidden and output layers apply this sigmoid function to the weighted sum of their inputs. That is, the input-output relation is expressed by

y_i = f( Σ_{j=0}^{N} w_ij x_j )    (3)

where w_ij, x_j, and y_i represent the interlayer weight, the input, and the output of each neuron, respectively. The threshold value is expressed by the special weight w_i0, and the corresponding input is the constant x_0 = 1.
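Equations (2) and (3) amount to the following forward pass; the layer sizes follow Table 2, but the weights here are random placeholders, not trained values:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def layer(w, x):
    """y_i = f(sum_{j=0}^{N} w_ij x_j), with x_0 = 1 carrying the threshold w_i0."""
    xb = np.concatenate(([1.0], x))  # prepend the constant input x_0 = 1
    return sigmoid(w @ xb)

rng = np.random.default_rng(0)
w_hidden = rng.normal(0, 0.1, (12, 64 + 1))  # 64 inputs -> 12 hidden neurons
w_out = rng.normal(0, 0.1, (2, 12 + 1))      # 12 hidden -> 2 output neurons

x = rng.random(64)                           # flattened, normalized 8x8 sub-image
o1, o2 = layer(w_out, layer(w_hidden, x))    # the two output-neuron values
```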

In the learning phase, the network learns the sub-image of the steel ball and other sub-images as learning data. The input to the network is the normalized grayscale values of the compressed sub-image, prepared for the steel ball and for the other components. The output of the network is expressed by two neurons, O_1 and O_2: the outputs (O_1, O_2) = (1, 0) and (O_1, O_2) = (0, 1) are assigned to the steel ball and the other components, respectively.

Figure 3.

Neural network architecture for localizing a steel ball in impact perforation images.

In the estimation phase, the sub-image sequence obtained by horizontally scanning the part of the impact perforation image through which the steel ball passes is input to the network. The network is then expected to output (O_1, O_2) = (1, 0) for the steel ball and (O_1, O_2) = (0, 1) for other components. The network thus outputs the existence probability of the steel ball at each sub-image position. The existence probability of the steel ball is expressed by the normalized difference between the two neuron outputs,

O(x) = ( O_1(x) - O_2(x) + 1 ) / 2    (4)

where x denotes the horizontal position in the image. The position with a high existence probability is estimated to be the steel ball position. By sequential distinction of the steel ball at each position, the network estimates the steel ball location (Ogawa et al., 2003; Ogawa et al., 2006).
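Equation (4) and the scan over positions can be sketched directly; the output arrays below are hypothetical values for a four-position scan, not measured data:

```python
import numpy as np

def existence_probability(o1, o2):
    """O(x) = (O1(x) - O2(x) + 1) / 2, Eq. (4): maps the output difference to [0, 1]."""
    return (np.asarray(o1) - np.asarray(o2) + 1.0) / 2.0

# hypothetical network outputs along a horizontal scan of sub-images
o1 = np.array([0.1, 0.2, 0.9, 0.1])
o2 = np.array([0.9, 0.8, 0.1, 0.9])
prob = existence_probability(o1, o2)
ball_x = int(np.argmax(prob))  # position with the highest existence probability
```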

The purpose of this study was to estimate an indistinct steel ball position correctly by learning from a clear steel ball image. In other words, we estimated the position of a hidden steel ball based on the results for a clearly visible one. In actual estimation, the neural network distinguishes the steel ball position in the indistinct image by using the input-output relation obtained during the learning phase. Concretely, we aimed to correctly estimate the position of the steel ball in the impact perforation image of the alumina specimen by using the network trained on the silicon nitride specimen. The steel ball, background, and fragment sub-images of the silicon nitride specimen were used as learning data. The steel ball position in the impact perforation image of the alumina specimen, in which the steel ball was hidden by specimen fragments, was then estimated by the trained network. In other words, a simulation corresponding to actual use of the network was carried out using images with normal and high levels of difficulty.

When different images are used for learning and estimation, the features of the objects (color, shape, and reflection) differ between the two images. For instance, it is difficult to estimate accurately based on the features of the learned image because the features of the alumina and silicon nitride specimen images are different. To solve this problem, it is necessary to decrease the differences between the learning and estimation images and to learn the features of the steel ball, background, and fragments sufficiently. In this study, we examine image preprocessing and a corresponding improvement in the learning method.

3.1. Preprocessing by gamma correction

In the impact perforation images, the steel ball, background, and fragments in the image are distinguished by the learned neural network. If the neural network can learn the correct group classifications with clear data, making distinctions with high accuracy becomes possible.

To clarify the input image, we convert the density distribution that expresses the features of the image. The density distribution of an image is expressed by its contrast, the ratio between the light and dark parts of the image. Contrast conversion is used to equalize the contrast between different images; it includes linear and nonlinear density conversion, each used for its own purpose. With linear density conversion, the brightness of the image can be easily converted, but part of the image information may be lost outside the range of the upper and lower bounds of the density values of the original image. On the other hand, nonlinear density conversion can improve the contrast without losing image information. Gamma correction is a type of nonlinear density conversion with high contrast improvement (Pratt, 2007). The general form of gamma correction can be expressed by

Z' = Z_m ( Z / Z_m )^γ    (5)

where Z', Z, and Z_m denote the density value after processing, the density value before processing, and the maximum density value, respectively. The contrast is converted by adjusting the value of γ in the gamma correction. The image lightens when γ < 1 and darkens when γ > 1. When γ = 1, the images before and after processing are identical because Z' equals Z. In this study, we used gamma correction to make the steel ball, background, and fragments of the impact perforation image clearer. In addition, we adjusted the gamma value to reduce the difference between the learning and estimation images and thereby improve the estimation accuracy.
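Equation (5) is straightforward to implement; the sketch below assumes 8-bit density values (Z_m = 255) and the γ = 1.5 setting used later in the chapter:

```python
import numpy as np

def gamma_correct(z, z_max=255.0, gamma=1.5):
    """Z' = Z_m * (Z / Z_m) ** gamma, Eq. (5)."""
    z = np.asarray(z, dtype=float)
    return z_max * (z / z_max) ** gamma

mid = gamma_correct(128, gamma=1.0)   # gamma = 1 leaves the density unchanged
dark = gamma_correct(128, gamma=1.5)  # gamma > 1 darkens mid-tones
```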

3.2. Incremental learning by adding estimation data

In actual impact perforation image processing, the learning and estimation images differ, and the image is distinguished by applying the input-output relation obtained during learning to the estimation image, relying on the robustness of the neural network. One method to improve the distinction ability is to reduce the difference between the learning and estimation data. In this study, both the learning and estimation data were composed of four sequential photographs, so the four estimation images may include images that are easy to distinguish and images that are difficult. To improve the positional estimation accuracy, we used an incremental learning method (Sugiyama & Ogawa, 2001; Yamauchi & Ishii, 1995). Concretely, the data distinguished with high accuracy in the first estimation are added to the learning data, and the network then learns all the data again. This method aims to restructure the input-output relation of the neural network by adding part of the estimation data to the learning data, thereby reducing the differences in features between the two image sets.

In this method, the estimation image is first distinguished by a neural network trained with only the learning data. The image data to be added to the learning data are decided from the distinction results, and the network is trained again. Distinction is then carried out on the estimation images again using the neural network trained on both the learning images and the added estimation images. The procedure is as follows.

Step 1. The steel ball, fragments, and background are extracted from the learning image and used as learning data.

Step 2. The neural network learns.

Step 3. The position of the steel ball is estimated for the estimation image.

Step 4. The part of the image corresponding to maximum output is extracted and added to the learning image.

Step 5. Steps 2 to 4 are repeated.

From the viewpoint of the neural network model, the procedure feeds the obtained output result back to the input as input image data. In this study, the feedback is expressed as the addition of the training data without changing the network architecture. However, it is also possible to compose the model as a recurrent neural network with modules corresponding to the number of learning data sets.
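Steps 1 to 5 can be sketched as a toy loop. Here a single-layer logistic model stands in for the three-layer network, and random feature vectors stand in for the sub-images; all names, data, and the number of added images (four, as in the later simulation) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def train(X, y, epochs=2000, lr=0.5):
    """Step 2: gradient-descent training of a logistic stand-in classifier."""
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([np.ones((len(X), 1)), X])  # bias column
    for _ in range(epochs):
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (p - y) / len(X)
    return w

def predict(w, X):
    """Step 3: ball-existence output for each scan position."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return sigmoid(Xb @ w)

# Step 1: toy learning data, ball sub-images bright (~0.8), background dark (~0.2)
X_train = np.vstack([rng.normal(0.8, 0.05, (20, 4)), rng.normal(0.2, 0.05, (20, 4))])
y_train = np.r_[np.ones(20), np.zeros(20)]

# toy estimation scan line: one slightly dimmer ball (~0.7) at index 10
X_est = np.vstack([rng.normal(0.25, 0.05, (10, 4)),
                   rng.normal(0.7, 0.05, (1, 4)),
                   rng.normal(0.25, 0.05, (5, 4))])

# Steps 2-5: train, estimate, add the maximum-output sub-image, and repeat
for _ in range(4):
    w = train(X_train, y_train)
    k = int(np.argmax(predict(w, X_est)))      # Step 4: highest-output position
    X_train = np.vstack([X_train, X_est[k]])   # add it to the learning data
    y_train = np.r_[y_train, 1.0]

pos = int(np.argmax(predict(w, X_est)))        # estimated ball position
```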


4. Simulation

First, we examined the usual learning and estimation, without gamma correction and incremental learning, using the neural network architecture explained in the previous section. During learning, the sub-images of the steel ball, background, and fragments from the impact perforation image of the silicon nitride specimen are learned to construct the input-output relation. Then, the steel ball position in the impact perforation image of the alumina specimen is estimated by the network.

The impact perforation image of the silicon nitride specimen in Fig. 2(a) was used as the learning data. The image of the alumina specimen in Fig. 2(b) was used as the estimation data. The network used is shown in Fig. 3, and its parameters are shown in Table 2. The condition for finishing the learning was set to 15,000 iterations or an output mean squared error of 0.0001, and we confirmed that the learning finished within the maximum number of iterations.

The estimation results for the steel ball positions in the four impact perforation images of the alumina specimen are shown in Figs. 4(a)-(d). The upper row of each result shows the sub-images extracted at the vertical position used for estimation; they are used as inputs. The lower row of each result shows the estimated steel ball position, expressed as the existence probability of the steel ball. The estimated positions can be compared with the actual image by contrasting the upper and lower rows. From Fig. 4(a), it is observed that the position of the steel ball cannot be estimated from the first image. From Figs. 4(b)-(d), it is found that the other three images are not accurately distinguished either: there are many misjudgments, the output value at the steel ball position is too small, and a candidate for the steel ball position is only barely estimated. Consequently, the network that learned only the image of the silicon nitride specimen cannot satisfactorily estimate the steel ball position in the image of the alumina specimen.

Image size                                385 x 586
Sub-image size                            32 x 32
Compressed image size for network input   8 x 8
Number of input neurons                   64
Number of hidden neurons                  12
Number of output neurons                  2
Training gain                             0.01
Target mean squared error                 0.0001
Maximum training epochs                   150000

Table 2.

Network parameters

Figure 4.

Horizontally scanned images and network output indicating the ball location estimated using the usual network.

4.1. Results of gamma correction

The effect of image preprocessing by gamma correction was examined by carrying out a simulation. The second image of the silicon nitride specimen shown in Fig. 2(a) was used as the learning image. Further, the image of the alumina specimen shown in Fig. 2(b) was used as the estimation target. Gamma correction of γ = 1.5 was applied to both the learning image and the estimation image. The network architecture and the network parameters used in this simulation were the same as those used in the previous simulation, which are shown in Fig. 3 and Table 2, respectively. The condition for completing the learning was set to 15,000 iterations or an output mean squared error of 0.0001, the same as that mentioned in the previous subsection. As a result, we confirmed that learning was completed within the maximum number of iterations.

The estimation result for the position obtained using the images processed by gamma correction is shown in Fig. 5. Misjudgment is reduced overall; in particular, image preprocessing has a significant effect on the result for the fourth image, shown in Fig. 5(d). Moreover, the output corresponding to the steel ball approaches 1.0, and from the result for the third image, shown in Fig. 5(c), it is observed that the steel ball recognition level improves. Consequently, gamma correction improved the distinction performance for the steel ball and reduced misjudgment. However, it is necessary to examine the parameters of the gamma correction further, because the effect of the image preprocessing does not seem very large.

Figure 5.

Horizontally scanned images and network output indicating the ball location estimated using the network with gamma correction.

4.2. Results of incremental learning

The effect of incremental learning was examined by carrying out a simulation. As in the previous simulation, the second image of the silicon nitride specimen shown in Fig. 2(a) and the image of the alumina specimen shown in Fig. 2(b) were used as the learning and estimation images, respectively. In addition, gamma correction of γ = 1.5 was applied to the input images. The network architecture and the network parameters used in this simulation were the same as those used in the previous simulation, which are shown in Fig. 3 and Table 2, respectively. The condition for completing the learning was set to 15,000 iterations or an output mean squared error of 0.0001, the same as that mentioned in the previous subsection. As a result, we confirmed that the learning was completed within the maximum number of iterations.

We first performed the usual learning and estimation. The result of this simulation is similar to that of the previous simulation shown in Fig. 5, which includes gamma correction. The image part corresponding to the maximum network output was extracted from the estimation image and added to the learning images for incremental learning. We added four images to the learning data by repeating this procedure four times; the four added sub-images are shown in Fig. 6. Incremental learning used the same stopping condition as the usual learning, and we confirmed that the learning was completed within the maximum number of iterations.

Figure 6.

Extracted sub-images for incremental learning.

The estimation results for the network with additional learning are shown in Fig. 7. The accuracy of steel ball recognition was improved greatly, and misjudgments were almost eliminated by adding a part of the estimation image to the learning image. It became possible to accurately estimate the position of the steel ball by the incremental learning method. The effectiveness of the proposed method was confirmed by these results.


5. Conclusion

In this study, a neural network was introduced for impact perforation image processing. Moreover, an image preprocessing method and a novel learning method for the neural network were introduced to improve the distinction ability. Concretely, to clarify the features of the image, preprocessing of the input image by gamma correction was introduced. In addition, an incremental learning method, which adds part of the estimation image to the learning images and relearns, was introduced to compensate for the reduction in distinction ability caused by the difference between the learning and estimation images. As a result, the accuracy of steel ball recognition was improved by preprocessing with gamma correction. Moreover, the position of the steel ball could be estimated correctly by adding learning data for a case that had been difficult to estimate with the usual learning.

Figure 7.

Horizontally scanned images and network output indicating the ball location estimated using the network with gamma correction and additional learning.

In the future, it is necessary to determine an appropriate parameter for the gamma correction. In this study, we focused on reducing the difference between the learning and estimation images by gamma correction; as further advanced work, it is also possible to aim at converting the images into forms that are easier to distinguish. In addition, it is necessary to develop a selection method for the images used in incremental learning. In this study, an image was selected on the basis of the output value; a suitable threshold value may have to be decided. Further, it is necessary to obtain more types of learning data to improve the generality of the estimation method.

References

1. Backman, M. E. & Goldsmith, W. (1978). The mechanics of penetration of projectiles into targets. International Journal of Engineering Science, 16(1), 1-99.
2. Bishop, C. M. (2006). Pattern Recognition and Machine Learning, Springer, ISBN 978-0-387-31073-8, New York.
3. Kasano, H. (1999). Recent advances in high-velocity impact perforation of fiber composite laminates. JSME International Journal A, 42(2), 147-157.
4. Kasano, H., Okubo, T. & Hasegawa, O. (2001). Impact perforation characteristics of carbon/carbon composite laminates. International Journal of Materials and Product Technology, 16(1-3), 165-170.
5. Ogawa, T., Kanada, H. & Kasano, H. (2003). Neural network localization of a steel ball in impact perforation images. Proceedings of the SICE Annual Conference, 416-419.
6. Ogawa, T., Tanaka, S., Kanada, H. & Kasano, H. (2006). Impact perforation image processing using a neural network. Proceedings of the SICE-ICASE International Joint Conference, 3762-3765.
7. Okubo, K., Ogawa, T. & Kanada, H. (2007). Impact perforation image processing using a self-organizing map. Proceedings of the SICE Annual Conference, 1099-1103.
8. Pratt, W. K. (2007). Digital Image Processing, John Wiley & Sons, ISBN 978-0-471-76777-0, New Jersey.
9. Ripley, B. D. (2007). Pattern Recognition and Neural Networks, Cambridge University Press, ISBN 978-0-521-71770-0, New York.
10. Sugiyama, M. & Ogawa, H. (2001). Incremental projection learning for optimal generalization. Neural Networks, 14(1), 53-66.
11. Yamauchi, K. & Ishii, N. (1995). An incremental learning method with recalling interfered patterns. Proceedings of the IEEE International Conference on Neural Networks, 6, 3159-3164.
12. Zukas, J. A. (1990). High Velocity Impact Dynamics, John Wiley & Sons, ISBN 978-0-471-51444-2, New York.
