Rotation Angle Estimation Algorithms for Textures and Their Implementations on Real Time Systems

In this chapter, rotation angle estimation algorithms for textures and their real-time implementations on a custom smart camera called FU-SmartCam are introduced (Ulas et al., 2007) and improved. In the textile industry, weft-straightening is a fundamental problem that is closely related to rotation angle estimation. Earlier weft-straightening machines used simple sensors and hardware; however, with the increased complexity of fabric types and the demand for faster and more accurate machines, the industry has been switching to smart camera systems. Three basic methods, based on the FGT constellation, the polar transformation, and statistical features, are proposed and their performances are evaluated. As an improvement to the statistics-based method, we introduce a neural network based approach to choose optimum weights for the statistical features. Moreover, a comparison between the FU-SmartCam and a commercial camera, the Tattile Smart Camera, is given. Experimental results show that the introduced algorithms provide satisfactory performance and can be used in real-time systems.

There are several known pattern recognition algorithms for identifying different patterns in an image under translation and rotation; see, e.g., (Tuceryan and Jain, 1998), (Loh and Zisserman, 2005), (Josso et al., 2005), (Araiza et al., 2006), and the references therein. However, in the weft-straightening problem, we have a known texture which is subject to translation and rotation, and the problem is to estimate the rotation angle only. In a typical industrial setup, the width of a fabric is a few meters, and there are four to six equally spaced sensors, each measuring the local rotation angle. By interpolating these rotation angle measurements, it is possible to estimate mild deformations and curvatures in the fabric. Basically, we have a known 2-D periodic, or almost periodic signal if the textile irregularities are taken into account. The problem is to estimate the rotation angle from a discretized and windowed version of the rotated texture under the presence of camera noise and quantization errors.
Rotation angle estimation is not a new subject in computer vision. Li et al. proposed a robust rotation angle estimation algorithm for image sequences using an annealing M-estimator (Li et al., 1998). They call the method robust because it can deal with outliers; their aim in proposing a rotation angle estimation algorithm was to solve the motion estimation problem. In (Kim Yul and Kim Sung, 1999), another method based on Zernike moments is proposed to estimate rotation angles of circularly symmetric patterns. Since circularly symmetric objects have similar eigenvalues in both directions, the principal axes cannot be used for rotation angle estimation; therefore, they introduce a robust method which uses the phase information of Zernike moments. Recently, a rotation angle estimation algorithm based on wavelet analysis was proposed for textures (Lefebvre et al., 2011). The key point is to find the rotation angle that best concentrates the energy in a given direction of a wavelet decomposition of the original image.
A typical texture image and its 10 degree rotated version are given in Fig. 2. In Fig. 3, a hypothetical deformation of the fabric is shown, exaggerated for better illustration. Measurements of four local rotation angles can be interpolated to estimate the actual deformation. In real applications, deformations are much smaller than the exaggerated deformation curve shown in the figure.
In this study, in order to solve the rotation angle estimation problem, three algorithms are proposed, based on the "FGT Constellation", the "Polar Transformation", and "Statistical Parameters". In addition, a neural network based approach is used to choose optimum weights for the statistical parameters. All of these methods are dedicated to solving the weft-straightening problem in the textile industry.
In Section 2, the proposed methods, FGT constellation, polar transformation, and statistical parameters together with their extension to neural networks, are discussed. Their performance analysis is given in Section 3. Finally, some concluding remarks are made in Section 4.

Polar transform approach
The polar transformation approach is based on the computation of the autocorrelation, R_t(x, y), of the texture, t(x, y). For an M x N image, the autocorrelation function is also an image, and it can be resampled on a polar grid as R_t^polar(r, θ). It is easy to see that pure rotation around the origin in the Cartesian space corresponds to translation in the θ direction in the polar space. Therefore, the problem is reduced to the estimation of the shift in the θ coordinate of R_t^polar(r, θ); taking the Fourier transform along θ converts this translation into a linear phase shift. A simple graphical approach can be used to estimate the proportionality constant in the linear phase shift, and hence the rotation angle.
Preliminary tests indicate that both variations of this approach are computationally demanding, but give accurate angle estimates. For more information about rotation angle estimation based on the polar transform, see (Sumeyra, 2007).
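As an illustration, the pipeline above can be sketched in a few lines of numpy. This is a minimal sketch under our own simplifying assumptions: the autocorrelation is computed circularly via the FFT, the polar resampling uses nearest-neighbour sampling on a square image and collapses the radial coordinate by summation, and the shift along θ is found by a circular cross-correlation peak rather than the graphical phase-slope method; all function names are ours.

```python
import numpy as np

def autocorr(img):
    """Autocorrelation of img via the Wiener-Khinchin theorem (circular)."""
    f = img - img.mean()
    F = np.fft.fft2(f)
    return np.fft.fftshift(np.fft.ifft2(F * np.conj(F)).real)

def angular_profile(R, n_theta=360):
    """Sum the autocorrelation over radii for each angle (polar resampling)."""
    c = R.shape[0] // 2                    # assumes a square image
    radii = np.arange(3, c - 1)
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    ys = np.rint(c + np.outer(radii, np.sin(thetas))).astype(int)
    xs = np.rint(c + np.outer(radii, np.cos(thetas))).astype(int)
    return R[ys, xs].sum(axis=0)           # nearest-neighbour sampling

def estimate_rotation(ref, rot, n_theta=360):
    a = angular_profile(autocorr(ref), n_theta)
    b = angular_profile(autocorr(rot), n_theta)
    # circular cross-correlation: the peak location is the shift along theta
    xc = np.fft.ifft(np.fft.fft(b) * np.conj(np.fft.fft(a))).real
    deg = np.argmax(xc) * 360.0 / n_theta
    # the autocorrelation is symmetric, so the angle is only known modulo 180
    return ((deg + 90.0) % 180.0) - 90.0
```

Because the autocorrelation is symmetric, the estimate is only determined modulo 180 degrees, which is why the result is folded into (−90, 90]; this mirrors the limited working range noted for the methods in this chapter.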

FGT Constellation approach
The FGT constellation approach also involves computation of the autocorrelation, R_t(x, y), of the texture, t(x, y). However, this is followed by a thresholding step, and a "constellation"-like image is obtained. Basically, the peaks of R_t(x, y) appear as bright spots in the thresholded image. If the texture is rotated, as shown in Fig. 4, the bright points also rotate in the same way and by the same amount, as shown in Fig. 5. The problem then turns into finding the position of the brightest point in the thresholded image by searching the first quadrant of the coordinate axes (see Fig. 6). An illustrative video showing the operation of this algorithm can be found at the following link: www.fatih.edu.tr/~culas/rotationestimation/video1.avi. Preliminary tests on the .NET platform using the DirectX framework showed the feasibility of this approach, both computationally and in terms of performance. This algorithm was later implemented on the FU-SmartCam, and for a 64x64 image size we were able to get a couple of estimates per second with about 1 degree or better accuracy. However, significant improvement can be achieved if larger images are used. To avoid the computationally demanding autocorrelation computation, which is done by a floating point FFT, we tested 2-D autocorrelation computation via the GF(p) transformation (GT), where p is a large prime satisfying p > MN(2^m − 1)², in which M x N is the image size and m is the number of bits used to store the intensity information. Note that, to be able to perform a fast GT by using the well-known "divide and conquer" approach of the FFT, the prime must be of the form p = c 2^u + 1, where N = 2^u is the image size.
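Setting the GF(p) arithmetic aside for a moment, the constellation search itself can be sketched in numpy. This is a minimal illustration, not the FU-SmartCam implementation: the peak-selection rule (the bright spot nearest the origin inside a ±45 degree wedge), the 0.95 threshold, and the synthetic doubly periodic "fabric" texture are all our own choices.

```python
import numpy as np

def autocorr(img):
    f = img - img.mean()
    F = np.fft.fft2(f)
    return np.fft.fftshift(np.fft.ifft2(F * np.conj(F)).real)

def constellation_angle(R, r_min=6, thresh=0.95):
    """Angle of the bright spot nearest the origin inside the +/-45 deg wedge."""
    n = R.shape[0]
    c = n // 2
    dy, dx = np.mgrid[-c:n - c, -c:n - c]          # lag coordinates
    wedge = (dx > np.abs(dy)) & (dx**2 + dy**2 >= r_min**2)
    vals = np.where(wedge, R, -np.inf)
    bright = vals >= thresh * vals.max()           # thresholding -> constellation
    dist = np.where(bright, dx**2 + dy**2, np.inf)
    iy, ix = np.unravel_index(np.argmin(dist), dist.shape)
    return np.degrees(np.arctan2(dy[iy, ix], dx[iy, ix]))

def make_texture(angle_deg, n=64, px=16, py=12, k=3.0):
    """Doubly periodic 'fabric-like' texture rotated by angle_deg."""
    a = np.radians(angle_deg)
    y, x = np.mgrid[0:n, 0:n] - n / 2.0
    u = x * np.cos(a) + y * np.sin(a)
    v = -x * np.sin(a) + y * np.cos(a)
    return np.exp(k * (np.cos(2 * np.pi * u / px) + np.cos(2 * np.pi * v / py)))
```

The rotation estimate is then the difference `constellation_angle(autocorr(rot)) - constellation_angle(autocorr(ref))`; the accuracy is limited by the pixel quantization of the bright spot's position, which is why larger images (peaks farther from the origin) help.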
We have successfully tested this approach on the .NET platform using the DirectX framework, selecting p = 1073741953 and m = 10. Preliminary tests indicate a significant speedup, because instead of floating point operations only 64-bit integer operations are needed, and the structure of the code is very similar to that of FFT code. In principle, the FGT approach can also be applied to the computation of the autocorrelation in the polar transform method. However, there seems to be no simple and fast way of performing the Cartesian-to-polar transformation without using computationally expensive floating point operations.
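The GF(p) autocorrelation idea can be illustrated with a short number-theoretic transform (NTT) sketch in one dimension. For clarity we use a naive O(N²) transform and, as an assumption of this example, the widely used NTT-friendly prime p = 998244353 with primitive root 3, rather than the chapter's p = 1073741953; the result equals the exact circular autocorrelation provided the true correlation values stay below p.

```python
def ntt(x, p, w):
    """Naive O(N^2) number-theoretic transform: X[k] = sum_n x[n] w^(nk) mod p."""
    N = len(x)
    return [sum(x[n] * pow(w, n * k, p) for n in range(N)) % p for k in range(N)]

def circular_autocorr_ntt(x, p=998244353, g=3):
    N = len(x)                        # N must divide p - 1 (here, a power of two)
    w = pow(g, (p - 1) // N, p)       # primitive N-th root of unity mod p
    X = ntt(x, p, w)
    # spectrum of the autocorrelation: X[k] * X[-k], the GF(p) analogue of |X|^2
    R = [(X[k] * X[(N - k) % N]) % p for k in range(N)]
    # inverse transform: use w^(-1) and divide by N (both via Fermat inverses)
    r = ntt(R, p, pow(w, p - 2, p))
    n_inv = pow(N, p - 2, p)
    return [(v * n_inv) % p for v in r]
```

Only integer arithmetic is used, which is the attraction on embedded hardware; a real implementation would use the radix-2 divide and conquer form to reach O(N log N), exactly as in the FFT.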

Extraction of statistical features
In order to avoid the computational burden of the autocorrelation computation and the polar transformation, we use a completely different method based on the computation of several statistical features of the fabric. Parameters that vary significantly with rotation are considered suitable for rotation estimation. In addition, the parameter changes should preferably be linear or almost linear with rotation, and, for real-time applications, they should be easy to compute. For this purpose, five statistical features are proposed for a texture image: the 2-D model parameters, the 1-D model parameter, and
- the means of the standard deviations parallel to the x-axis,
- the means of the standard deviations parallel to the y-axis,
- the means of the standard deviations along the diagonal axes.
These features determine the overall system performance. In the performance evaluation tests, we show the benefit of using a large number of statistical parameters instead of a small set of features. The statistical parameters based approach is computationally more attractive because a look-up table is generated from the reference fabric and stored in memory. For a rotated image, the statistical features are then computed, and the best matching rotation angle is estimated using the nearest neighborhood method with a weighted 1-norm distance metric.
Computation of the statistical features is explained starting from the following subsection. Then two scenarios for rotation estimation based on statistical features are analyzed:
- using only the 2-D model parameters,
- using all features.

2-D modeling
We model each pixel value with the following equation:

t(x, y) ≈ α t(x − d_x, y) + β t(x, y − d_y),
where  and  are the model parameters. Apart from these parameters, there are also two variables which are x d and x d . These variables, x d and x d , correspond to shifts/periods in x and y directions respectively. These parameters can be determined by using a trial and www.intechopen.com Rotation Angle Estimation Algorithms for Textures and Their Implementations on Real Time Systems 99 error depending on the fabric type. Hence, there are a total of 4 parameters. One possible method to determine  and  given the values of x d and x d is the minimization of the following cost function.
To minimize this cost function, the derivatives with respect to α and β are computed and set to zero; solving the two resulting linear equations gives closed-form expressions for α and β. The variation of the model parameters with respect to the rotation angle is shown in Fig. 7: the texture image is rotated from −30 to +30 degrees with 0.5 degree steps and the corresponding 2-D model parameters are computed. We observe that these parameter variations are almost linear when the distance values are d_x = 2 and d_y = 1. Fig. 7. 2-D model parameters versus rotation angle. The rotation angle is incremented from −30 to +30 degrees with 0.5 degree steps.
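For concreteness, the least squares fit of α and β can be written in a few lines. This sketch is our own illustration: it forms the two shifted copies of the image over the valid overlap region and solves the resulting least squares problem with numpy's lstsq, which is equivalent to the 2x2 normal equations above.

```python
import numpy as np

def model2d_params(t, dx, dy):
    """Least squares fit of t(x,y) ~ alpha*t(x-dx, y) + beta*t(x, y-dy)."""
    t0 = t[dy:, dx:]                      # region where both shifts are valid
    a = t[dy:, :-dx].ravel()              # t(x - dx, y)
    b = t[:-dy, dx:].ravel()              # t(x, y - dy)
    A = np.stack([a, b], axis=1)
    coef, *_ = np.linalg.lstsq(A, t0.ravel(), rcond=None)
    return coef                           # (alpha, beta)
```

In the separable test case t(x, y) = 2^x + 3^y with d_x = d_y = 1, the model holds exactly with α = 0.8 and β = 0.6 (solve α/2 + β = 1 and α + β/3 = 1), which the routine recovers.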

1-D modeling
The 1-D model parameter approximation is similar to the 2-D case. The following 1-D equation is used for this type of modeling:

t(x, y) ≈ γ t(x − d_x, y − d_y),

where the least squares solution for γ is

γ = Σ t(x, y) t(x − d_x, y − d_y) / Σ t(x − d_x, y − d_y)².     (13)

The variation of the 1-D model parameter, γ, with respect to the rotation angle is shown in Fig. 8. The texture image is rotated from −30 to +30 degrees with 0.5 degree steps and the corresponding 1-D model parameter is plotted versus the rotation angle. As can be seen, the variation is not linear for the distance values d_x = 2 and d_y = 1, and this is a highly undesirable feature. Fig. 8. 1-D model parameter, γ, versus rotation angle. The rotation angle is incremented from −30 to +30 degrees with 0.5 degree steps.

Mean of the standard deviations along the X-Axis
The mean of the standard deviations along the x-axis can be expressed as follows:

I = (1/M) Σ_y σ_x(y),  where  σ_x(y) = sqrt( (1/N) Σ_x [t(x, y) − μ(y)]² ),

σ_x(y) is the standard deviation of row y, μ(y) is the mean of row y, and I is the mean of the standard deviations along the x-axis of the texture image. We divide I by μ, the mean of the gray-level image pixels, in order to eliminate the ambient illumination effects of the environment.

Mean of the standard deviations along Y axis
Similarly, the mean of the standard deviations along the y-axis can be expressed as follows:

J = (1/N) Σ_x σ_y(x),  where  σ_y(x) = sqrt( (1/M) Σ_y [t(x, y) − μ(x)]² )

and σ_y(x) is the standard deviation of column x. As before, J is divided by the image mean μ.

Statistical feature based on mean of the standard deviations along diagonal axes
K_1 and K_2 are the means of the standard deviations along the diagonal and off-diagonal (anti-diagonal) axes of the texture image, and D stands for the number of diagonal elements. Throughout this study M, N, and D are of the same size, since we work with square images. Fig. 11 shows the variation of these parameters with respect to the rotation angle.
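Putting the deviation-based features together, a direct numpy implementation might look as follows. This is our own reading of the definitions: the standard deviation is taken along each row, column, diagonal, or anti-diagonal, the results are averaged, each feature is divided by the image mean μ to suppress illumination effects, and diagonals shorter than two pixels are skipped.

```python
import numpy as np

def std_features(t):
    """Means of the standard deviations along x, y, and the two diagonal axes."""
    mu = t.mean()
    f_x = np.std(t, axis=1).mean() / mu         # along the x-axis (rows)
    f_y = np.std(t, axis=0).mean() / mu         # along the y-axis (columns)
    n = min(t.shape)
    diag_std = [np.diag(t, k).std() for k in range(-n + 2, n - 1)]
    adiag_std = [np.diag(np.fliplr(t), k).std() for k in range(-n + 2, n - 1)]
    k1 = np.mean(diag_std) / mu                 # diagonal axes
    k2 = np.mean(adiag_std) / mu                # off-diagonal (anti-diagonal) axes
    return f_x, f_y, k1, k2
```

For a texture that varies only along x, f_y is zero while f_x is positive, which is the kind of directional sensitivity these features are meant to capture.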

Rotation angle estimation by using statistical features and model parameters
The main problem in rotation angle estimation is to find useful statistical parameters which change significantly and linearly with rotation. To find the best candidates, the variation of each parameter is plotted as in the previous section and examined for large, linear changes; we then decide whether the parameter should be used or not. After determining the statistical features, a look-up table is generated by rotating the reference image over a range of angles and storing the corresponding statistical parameters. In the estimation process, the statistical features of the rotated image are computed and searched through the look-up table using a nearest neighborhood search (NNS); the closest parameter combination gives the rotation estimate. However, this idea works best under the assumption that the features change linearly with rotation and that all of them have the same importance. In fact, this is not true in many cases: the parameters neither change linearly nor have the same importance. For this reason, we append artificial neural networks to the proposed method to overcome this problem.
Another issue is using a sufficient number of statistical features. To show the importance of the number of parameters, the experiments are divided into two parts. In the first part, only the 2-D model parameters, which are the most effective ones, are chosen as features. In the second part, all statistical parameters are exploited to show the improvement.

Using only 2D model parameters
In this subsection, the 2-D model parameters are used as the statistical features. A look-up table is generated by rotating the reference image over a desired range and calculating the 2-D model parameters for each rotation:

T(i) = [θ_i, α_i, β_i],  i = 1, …, M,

where θ_i denotes the amount of rotation and i is the index of each rotation, from the first to the last rotation, M. After the look-up table is built, the system performance is tested over the same range with higher resolution. For example, one can generate the look-up table over the range of −30 to +30 degrees with 0.5 degree steps, and then test the method over the same range with 0.1 degree steps.
Another important point is the choice of the distance parameters, d_x and d_y. These parameters have to be chosen properly because they significantly affect the linearity of the variations. To decide which distance values are suitable for the reference texture image, we followed two approaches. The first is to draw the parameter-rotation graph for each (d_x, d_y) combination and inspect the linearity and the amount of change. The second, more systematic one, is to calculate the sum of squared errors between the actual and estimated rotations for the (d_x, d_y) combinations and accept the combination which gives the smallest error.
After the distance parameters d_x and d_y are decided, the measured model parameters α and β are searched through the look-up table, and the closest entry is used for the rotation estimate. In the general case, if we have foreknowledge about the relative importance of the parameters, we can use the weighted nearest neighborhood search

î = argmin_i Σ_j w_j | p_j − T(i, j) |,

where T(i, ·) is the i-th row of the look-up table (with the parameters placed in the columns), p_j is the j-th measured parameter, and the weight vector w emphasizes some statistical parameters over others. In this section, w is chosen as the all-ones vector.
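The table search itself is short in numpy. The sketch below is illustrative: the table rows hold the features of the artificially rotated reference image, the query is matched by the weighted 1-norm rule above, and the names and the toy quadratic feature in the test are ours.

```python
import numpy as np

def nns_estimate(features, table, angles, w=None):
    """Weighted 1-norm nearest neighborhood search over the look-up table."""
    w = np.ones(table.shape[1]) if w is None else np.asarray(w)
    dist = np.abs(table - features) @ w       # one distance per table row
    return angles[np.argmin(dist)]
```

With w set to the all-ones vector this reduces to the plain NNS used in this section; unequal weights emphasize the more reliable features.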

Using all statistical parameters
To get better results from the rotation estimation algorithm, all the statistical features explained in Section 2.3 are used. The idea is very similar to the 2-D model based rotation estimation algorithm; the first difference is that the look-up table is generated with all of these statistical parameters. The other difference is the computational cost: the processing and searching time increases with the number of parameters used. Each row of the look-up table now contains the rotation angle θ_i together with all of the statistical features computed at that angle.

Drawbacks of nearest neighborhood search (NNS)
The NNS solves a 1-norm distance problem. However, it may fail if the parameters do not have equal weights and do not change linearly with rotation; we observe that the statistical parameters explained in Section 2.3 are neither exactly linear nor of equal importance. The second problem with the NNS is the search time: if the look-up table is generated with high resolution, the table becomes very large and estimating the rotation takes a long time. The search can be accelerated from O(N) to O(log N) by using k-d tree space partitioning methods (Bentley, 1980). Another, better solution is to use artificial neural networks, which both find the weighting factors and speed up the algorithm significantly. In the neural network based solution the method becomes much faster, since no search is needed; it is sufficient to train on the look-up table.

Neural network improvement to statistical parameters based approach
Statistical parameters do not have equal importance in estimating the rotation angle; therefore, artificial neural networks are used to choose optimum weights for these parameters. To do this, we formed a graphical user interface in Matlab, shown in Fig. 12, to test the performance of various neural network training methods such as Levenberg-Marquardt (LM), BFGS quasi-Newton backpropagation, conjugate gradient backpropagation with Fletcher-Reeves updates, etc.
The statistical features based method is computationally very attractive, and the experimental results are quite affirmative: an accuracy better than 0.2 degree can easily be achieved with relatively little computational effort. In our tests, we observed that the LM training method provides better performance than the others.
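As a stand-in for the Matlab toolbox, the idea can be illustrated with a tiny numpy multilayer perceptron trained by plain batch gradient descent (not Levenberg-Marquardt); the network size, learning rate, and the toy feature set in the test are all our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_mlp(X, y, hidden=16, lr=0.1, epochs=3000):
    """One-hidden-layer tanh network mapping feature rows of X to angles y."""
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        err = (h @ W2 + b2) - y[:, None]          # prediction error
        # backpropagate the mean squared error
        gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1.0 - h ** 2)
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xq: (np.tanh(Xq @ W1 + b1) @ W2 + b2).ravel()
```

Once trained on the look-up table, estimation is a single forward pass, which is why no search is needed at run time.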

Experimental setup
Tattile Smart Camera has a StrongARM SA-1100 processor based board, and Lattice iM4A3 programmable logic device. The StrongARM SA-1100 board is a low power embedded system running a Linux port. The architecture of the board is heavily based on the LART project done at Delft University (LART project). Tattile Smart Camera also has a progressive CCD sensor, and image acquisition is done by the high speed logic circuit implemented on the Lattice iM4A3 programmable logic device. In principle, it is possible to use a desktop PC, cross compile the code for StrongARM and then upload (ftp) it to the board via network connection. However, one still has to work with cross compilers, and simulators for prototyping experiments.
On the other hand, the FU-SmartCam is an embedded system that is not as low-power, but has more processing power, memory, and flash storage. It has a Vortex86 processor (Vortex system) and runs the Intel x86 port of Linux, i.e. the Linux port that runs on regular desktop PCs. Because of this, there is no need for cross compilers or simulators: complete prototyping experiments can be performed on a desktop PC, and the generated code runs without any modification on the target board, which, being based on the Vortex86, is not as powerful as a Pentium 4. Image acquisition is performed using a low cost interlaced scan camera with a Conexant Bt848 chip. The FU-SmartCam is shown in Fig. 13.
The FU-SmartCam has VGA, keyboard, Ethernet, and RS-232 connections, is extremely flexible, and is easily reconfigurable. Currently, we are also developing a relay board with a small 8-bit microcontroller interfaced to the Vortex over RS-232. This relay board will enable direct connection of the FU-SmartCam to various pieces of industrial equipment. Fig. 13. FU-SmartCam: on the top, a low cost interlaced scan camera; in the middle, the Vortex system; at the bottom, a dual output power supply. The Vortex system itself consists of two boards. The overall system is quite small.

Performance of the FGT constellation approach
In this part, the performance of the FGT constellation approach is investigated. As seen from Fig. 14, the maximum absolute error is about 3 degrees and the average error is about 1 degree. Due to periodicity, although we expect the algorithm to work in the range of −45 to +45 degrees, we observe that the method works well in the range of −30 to +30 degrees (see Fig. 15). For the weft-straightening problem, however, this range is acceptable for rotation angle estimation.

Using only 2-D model parameters
In order to test the performance of the statistical parameters based approach, we plot the errors with respect to rotations. The look-up table is generated in the region of -30 degrees to 30 degrees with 0.5 degree steps and it is tested in the same region with 0.1 degree resolution.
First of all, to show the effect of the distance parameters, the distance values are arbitrarily taken as d_x = 1 and d_y = 1, and the error variation versus rotation is given in Fig. 16. From this figure it can be seen that the error becomes considerably high at some rotations and cannot be considered acceptable. However, as explained in Section 2.4, if the proper distance parameters d_x = 9 and d_y = 8 are chosen, the error variation versus rotation shown in Fig. 17 can be considered acceptable; the average absolute error of the estimation is less than 0.5 degree.

Using all statistical parameters with nearest neighborhood method
In this section, all the statistical feature parameters are used for rotation angle estimation. The testing region is again from −30 to +30 degrees with 0.1 degree steps. In this case the results are very attractive, and the absolute error is about 0.2 degree. The distance parameters were chosen as d_x = 9 and d_y = 8. The variation of the estimation error versus the rotation angle is shown in Fig. 18.

Using all statistical parameters with artificial neural networks
To compare the results of the neural network and nearest neighborhood based methods, we used the same texture image and the same optimum distance values over the same testing region. In Fig. 19, the error variation for only the 2-D model parameters is shown. In order to show the power of using all statistical parameters with neural networks, we increased the testing resolution from 0.1 degree to 0.01 degree and plot the result in Fig. 20. The estimation error decreases to about 0.1 degree, compared with about 0.2 degree for the nearest neighborhood method; the performance is thus improved almost twofold. Comparing computation times, the neural network is much faster than the nearest neighborhood based method.

Conclusion
In this chapter, the weft-straightening problem encountered in the textile industry is described. As a solution to rotation angle estimation, which is the fundamental part of the weft-straightening problem, three different algorithms are introduced. The first algorithm is based on the polar transform applied to the autocorrelated image, so that the translation in the θ direction gives the rotation angle. The second is the FGT constellation approach, which relies on thresholding the autocorrelation image: the FGT constellation consists of regularly distributed bright spots, and the rotation angle is estimated by finding the brightest point in the first quadrant of the coordinate axes. The third algorithm is based on statistical parameters: these are first computed for the reference image, and a look-up table is generated from its artificially rotated versions. The statistical parameters of the input image are then searched in the look-up table, and the angle of the closest entry is taken as the rotation estimate. Finally, in order to improve the statistical parameters based approach, neural networks are used to choose optimum weight factors, since not all parameters have the same importance; various neural network training methods are tested to find the best performance. The results show that the proposed methods can be successfully implemented in real-time systems.

Acknowledgements
This work is supported by the Scientific Research Fund of Fatih University under the project number P50061001_2.