Open access peer-reviewed chapter

Binarization Based on Maximum and Average Gray Values

Written By

Saúl Manuel Domínguez Nicolás

Submitted: 09 July 2021 Reviewed: 16 August 2021 Published: 20 April 2022

DOI: 10.5772/intechopen.99932

From the Edited Volume

Digital Image Processing Applications

Edited by Paulo E. Ambrósio


Abstract

Many image processing techniques use binarization for object detection in images in which the objects and background are well separated by their brightness values. If the threshold level is assigned globally, the binarization is global; if it is adaptive, the threshold level is calculated locally. This chapter shows how to determine the optimal binarization threshold of an image from its mean and extreme gray values, adjusting the mean gray value through automatic analysis based on standard histogram equalization. This approach can evaluate a wide range of image features, even when the gray values in both the object of interest and the background of the image are not uniform.

Keywords

  • image processing
  • non-uniform images
  • mean gray values
  • extreme gray values
  • histogram equalization

1. Introduction

In image processing, one often aims to extract information of interest from an image in order to obtain robust and reliable descriptors. Many image processing algorithms use segmentation to identify regions of input images, which is important because such regions may be required in the subsequent analysis. Thresholding is the most commonly used technique in image segmentation; it is a binarization method used for object detection when background and objects differ in their brightness values. Threshold values used in a binarization can be chosen manually or automatically. In the manual form, trial experiments are necessary to find appropriate threshold values. Automatic selection combines the image information to obtain the optimal threshold value. Otsu's algorithm [1] uses the image histogram to obtain the threshold values. There are also algorithms based on edges, regions, and hybrids, which define their threshold values according to the information they use. Canny edge detection [2], Sobel edge detection, and Laplacian edge detection [3] are algorithms based on edge information, since structures are depicted by edge points. These algorithms suppress the noise in the image to try to find edge pixels. For example, Laplacian edge detection uses the second-derivative information of the image intensity, whereas the Canny edge detector uses the gradient magnitude to find the edge pixels. The pixel intensities are the fundamental operands of these algorithms, so the detected boundary is made up of discrete pixels and can therefore be incomplete or discontinuous. Thus, post-processing techniques such as morphological operations are applied to connect the generated discontinuities. However, the edges of organs in medical images are not clearly defined, due to noise influence and the partial volume effect; therefore, a pre-processing step is used before the threshold-based algorithm [4, 5].

Region growing algorithms [6, 7, 8, 9, 10] quantify features inside a structure that tend to be homogeneous. The grouping is initiated by the similarity of seeds in the desired regions and grows throughout the image according to the properties of neighboring pixels. Growth of the regions of the input image can be obtained either from a seed in the desired region together with a local criterion, or from a distribution of seeds in different regions together with a global criterion. Nevertheless, due to their reliance on intensity, these algorithms have difficulty undoing the influence of the partial volume effect.

Hybrid algorithms use different image properties to complete the segmentation. A representative hybrid algorithm is the watershed, which uses morphological filters, gradient information, and image intensity [11, 12, 13]. In these algorithms, the gradient magnitude is seen as elevation, and the gray-scale images are considered as reliefs. Pixels with locally maximum gradient define the watershed lines, which enclose the pixels that define a region of the image. The complete segmentation of an image can be produced successfully by watershed algorithms. However, when the images are noisy, these algorithms tend to present over-segmentation problems. Successful segmentation experiments using the marker imposition technique have been reported on knee cartilage images [11]. The C-means algorithm [13] is used to avoid over-segmentation problems and improve the performance of the watershed algorithm.

Since the end of the 1990s, algorithms based on binarization via thresholding [14, 15, 16, 17] have been used to obtain the basic mechanical properties of materials through Vickers hardness testing [18, 19, 20, 21, 22]. In addition, morphological filters have been employed to eliminate speckles in the segmentation [14, 15, 16, 17]. Other segmentation methods have been considered to obtain mechanical properties in Vickers hardness testing, such as template edge matching [23, 24] and dual-resolution active contours segmentation [25]. These methods are suitable for indentation images with high contrast and straight edges. Moreover, high computational complexity, multiple user-specified parameters, and contour detection that may collapse in images with low contrast are challenges in algorithms based on edge- and line-oriented contour detection [26]. However, in the last three years, algorithms [27, 28] have been reported that detect objects whose edges are not exactly straight lines in low-contrast images. These algorithms use thresholding based on the extreme and mean gray values as binarization criteria, which distinguishes them from other binarization techniques [14, 17, 24] used to segment similar images.

The purpose of this chapter is to show the reader thresholding algorithms based on the extreme and mean gray values as binarization criteria, which are applied to detect objects of interest whose edges are not exactly straight lines and where the gray values in both the object and the background of the image are not uniform. These algorithms are also applicable to images with very low contrast, in contrast to the limited capabilities of other algorithms [23, 24, 27, 28, 29, 30, 31] for detecting objects in this type of image.

This chapter is organized as follows: Section 2 describes image segmentation that semi-automatically evaluates maximum and average gray values as binarization criteria. Section 3 describes the morphological filtering applied to the binarized images. Section 4 describes image segmentation that automatically evaluates maximum and average gray values as binarization criteria. Section 5 includes examples of the techniques. Finally, the conclusions are reported in Section 6.


2. Image segmentation semi-automatically evaluating maximum and average gray values as binarization criteria

At the end of 2018, an algorithm was reported that segments images using maximum and average gray values as binarization criteria [27]. The algorithm's aim was to detect corners so as to locate the vertices of the object of interest, which is called the indentation. Each input image is treated as a 2D monochromatic digital image with gray values between 0 and Max, high values corresponding to bright pixels (Max = white) and low values to dark pixels (0 = black). Each image contains exactly one indentation, of approximately rhombic shape (see Figure 1), whose size, position, and exact orientation in the image are unknown. The indentation is assumed to be a dark region on a brighter background.

Figure 1.

Example of an indentation image.

Image segmentation in the semi-automatic algorithm starts with binarization, using the average gray value of the input image as threshold and the difference to the maximum gray value as discriminant criterion. Both the average and the maximum gray value are global characteristics determined from the input image. Denoting the input image by F = f(x, y), with x = x_1, ..., x_max and y = y_1, ..., y_max (see Figure 1), its average gray value is given by f_mean = (1/(x_max · y_max)) Σ_{x,y} f(x, y), and its maximum gray value by f_max = max_{x,y} f(x, y).

Thus, the binarization criterion for every input pixel p = (x, y) is: p is considered a pixel of interest whenever f_max - f(x, y) > f_mean.

The result is a binary image G = g(x, y), with x = x_1, ..., x_max and y = y_1, ..., y_max, where each pixel of interest is represented as black and all other pixels as white. Therefore, the indentation region will be a subset of the black pixels. In this type of image, representing the indentation region as black coincides with the fact that this region is dark in the original image.

The first image of Figure 2 has values f_mean = 0.3350 and f_max = 0.5843, with which the binarization criterion is evaluated for every input pixel of the image. Similarly, the second image of the same figure presents values f_mean = 0.3202 and f_max = 0.6157. Finally, the values f_mean = 0.3102 and f_max = 0.6057 are obtained from the third image of Figure 2. The result is the binary image shown in the second column corresponding to each image of Figure 2. However, in the binary image there can be many pixels detected as false positives of the region of interest, as shown in Figure 2. Thus, a morphological filter is applied to delete these black pixels not belonging to the indentation.
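The binarization criterion above can be sketched in a few lines. The following is a minimal NumPy sketch; the function name and the use of floating-point gray values in [0, 1] are my own choices, not part of [27]:

```python
import numpy as np

def semiautomatic_binarize(f):
    """Binarize a gray-scale image with the criterion f_max - f(x, y) > f_mean.

    Returns a boolean image where True marks a pixel of interest
    (a black pixel of the binary image G, i.e. a dark pixel of F).
    """
    f = np.asarray(f, dtype=float)
    f_mean = f.mean()  # global average gray value
    f_max = f.max()    # global maximum gray value
    return (f_max - f) > f_mean
```

With the values reported for the first image of Figure 2 (f_mean = 0.3350, f_max = 0.5843), a pixel is marked as belonging to the indentation exactly when f(x, y) < 0.5843 - 0.3350 = 0.2493.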

Figure 2.

Indentation images and their binary images.


3. Morphological filtering

For the binary image G, any set A of black pixels, and a (small) pixel set S called the structuring element, the dilation of A by S is the set A ⊕ S of all pixels p = (p_x, p_y) such that x_1 ≤ p_x ≤ x_max, y_1 ≤ p_y ≤ y_max, and p = a + s = (a_x + s_x, a_y + s_y) for some a = (a_x, a_y) ∈ A and s = (s_x, s_y) ∈ S [27]. Thus, the dilation extends the set of black points in G by converting into black all pixels of A ⊕ S. The erosion of A by S is the pixel set A ⊖ S = {p ∈ A : p + s ∈ A for all s ∈ S}. Erosion reduces the set of black points in G by converting into white all pixels of A which do not belong to A ⊖ S. Erosion followed by dilation is called morphological opening, whereas morphological closing is defined as erosion of a dilated set.
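These set definitions translate directly into code. Below is a minimal NumPy sketch; the helper names are mine, and the zero padding at the image border is an implementation choice not specified in [27]:

```python
import numpy as np

def translate(A, dy, dx):
    """Translate a boolean image by (dy, dx), padding with False."""
    out = np.zeros_like(A)
    h, w = A.shape
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        A[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out

def dilate(A, S):
    """Dilation A (+) S: all pixels p = a + s with a in A and s in S."""
    cy, cx = S.shape[0] // 2, S.shape[1] // 2  # origin of S at its center
    out = np.zeros_like(A)
    for sy, sx in zip(*np.nonzero(S)):
        out |= translate(A, sy - cy, sx - cx)
    return out

def erode(A, S):
    """Erosion A (-) S: pixels p of A with p + s in A for every s in S."""
    cy, cx = S.shape[0] // 2, S.shape[1] // 2
    out = A.copy()
    for sy, sx in zip(*np.nonzero(S)):
        out &= translate(A, cy - sy, cx - sx)
    return out

def opening(A, S):
    """Morphological opening: erosion followed by dilation."""
    return dilate(erode(A, S), S)
```

For example, eroding a 3 × 3 block of black pixels by a full 3 × 3 structuring element leaves only its center pixel; dilating that pixel again restores the block, so the opening leaves the block unchanged while removing any isolated pixels.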

The filter applied in the binarization techniques reported in [27, 28] consists of a morphological opening of the set of black pixels in G by a structuring element distinct from those used in previously published works, based on the a priori knowledge that the indentation region has a rhombic shape. These binarization techniques use a structuring element S of diamond shape with a radius of 10 pixels. This morphological opening substantially reduces the structural noise of the image, making it possible to find the object of interest (the indentation region) as the largest connected region of black pixels in the next step. In addition, it preserves the size and shape of the indentation. Finally, the image segmentation is completed through region growing, a standard procedure in image processing. In a binary image, region growing consists in determining all connected components of black pixels, for which the algorithms reported in [27, 28] apply 8-connectivity.
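The filtering step can be sketched as follows. The diamond-shaped element of radius 10 and the 8-connectivity come from [27, 28]; the use of SciPy's ndimage routines and the function names are my own choices:

```python
import numpy as np
from scipy import ndimage

def diamond(radius):
    """Diamond-shaped (rhombic) structuring element of the given radius."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    return (np.abs(x) + np.abs(y)) <= radius

def filter_indentation(mask, radius=10):
    """Morphological opening with a diamond element, then keep the
    largest 8-connected component of pixels of interest."""
    opened = ndimage.binary_opening(mask, structure=diamond(radius))
    # A 3x3 structure of ones gives 8-connectivity in the labeling
    labels, n = ndimage.label(opened, structure=np.ones((3, 3), dtype=int))
    if n == 0:
        return opened
    # Area of each labeled component; keep the largest one
    sizes = ndimage.sum(opened, labels, index=np.arange(1, n + 1))
    return labels == (np.argmax(sizes) + 1)
```

The opening removes small speckles (anything that cannot contain the diamond element), and the labeling step realizes the region growing: the maximum-area 8-component is taken as the indentation.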

Figure 3 shows images together with their binary versions and the results of morphological filtering. The first example contains a slightly deformed indentation and the second presents surface imperfections. Nevertheless, many images of these types present some shading caused by strong lighting and, additionally, a light spot in the image center, both problems caused by light reflection when capturing the image through a camera. Thus, the semi-automatic algorithm reported in [27] is not suitable for indentation images with low contrast in relation to the adjacent image area, as shown in Figure 4.

Figure 3.

Indentation images (first column), binary versions (second column) and morphological filtering (third column).

Figure 4.

Indentation images with low contrast, where the binarization technique of the algorithm reported in [27] fails.


4. Image segmentation automatically evaluating maximum and average gray values as binarization criteria

The algorithm reported in [28] uses image segmentation via binarization, automatically evaluating the mean and extreme gray values by means of standard histogram equalization so as to determine the optimal binarization threshold from each input image. The binarization of the input image uses the average gray value, a standard histogram of the input image, and the difference to the maximum gray value to determine the threshold value τ for the binarization. The gray value f_h with the highest frequency of occurrence (the gray value at which the histogram h_i of the image over the i gray values reaches its maximum), the average gray value f_mean, and the maximum gray value f_max are global characteristics determined from the input image F. Unlike the semi-automatic binarization reported in [27], the image segmentation reported in [28] evaluates maximum and average gray values under the following binarization criterion:

f_max - f(x, y) > τ    (1)

The binarization using Eq. (1) with τ = τ_0 = f_mean is applied to indentation images whose gray levels are distributed along the dynamic range of h_i, as shown in Figure 5, where f_mean ≈ f_h and f_mean is located at approximately half of the dynamic range of h_i.

Figure 5.

Indentation images where f_mean ≈ f_h and f_mean is located at approximately half of the dynamic range of h_i.

Dark-field images of indentations present the highest frequency of occurrence on the left of the dynamic range, as shown in Figure 6. For these images, τ = τ_0 = f_mean prevents a good binarization. Thus, Eq. (1) is evaluated with τ = τ_0 + (f_h - f_mean) until a good binarization is obtained.

Figure 6.

Dark-field indentation images and their histograms h_i.

Indentation images with light gray levels present a histogram whose gray values slide to the right, with the highest frequency of occurrence falling on the right of the dynamic range of h_i, as shown in Figure 7. In these images, Eq. (1) is evaluated with τ = τ_0 - (f_h - f_mean) until a better image segmentation is achieved and features of the object of interest, such as the indentation vertices, can be obtained [28].

Figure 7.

Indentation images with light gray levels and their histograms h_i.

Thus, the binarization reported in [28] can be summarized as shown in the flow chart of Figure 8.

Figure 8.

Image segmentation automatically evaluating maximum and average gray values as binarization criteria.
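The threshold selection in the flow chart can be sketched roughly as follows. This is my own reading of [28]: the tolerance used to decide whether f_h and f_mean are approximately equal, and a single adjustment step instead of an iterative search, are assumptions:

```python
import numpy as np

def automatic_threshold(f, bins=256, tol=0.05):
    """Select the threshold tau of Eq. (1) from the histogram h_i.

    f_h is the gray value with the highest frequency of occurrence
    (the mode of the histogram h_i).
    """
    f = np.asarray(f, dtype=float)
    f_mean = f.mean()
    hist, edges = np.histogram(f, bins=bins, range=(0.0, 1.0))
    k = np.argmax(hist)
    f_h = 0.5 * (edges[k] + edges[k + 1])  # mode of h_i
    if abs(f_h - f_mean) <= tol:
        # Gray levels spread along the dynamic range (Figure 5)
        return f_mean
    if f_h < f_mean:
        # Mode on the left of the dynamic range: dark-field image (Figure 6)
        return f_mean + (f_h - f_mean)
    # Mode on the right of the dynamic range: light image (Figure 7)
    return f_mean - (f_h - f_mean)

def binarize(f, tau):
    """Apply the binarization criterion of Eq. (1): f_max - f(x, y) > tau."""
    f = np.asarray(f, dtype=float)
    return (f.max() - f) > tau
```

In the balanced case this reduces to the semi-automatic threshold τ_0 = f_mean; in the dark-field and light cases the threshold is shifted by the gap between the histogram mode and the mean.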

Figure 9 shows an example of improved binarization in dark-field indentation images applying the algorithm reported in [28]. Figure 10 shows an example of the algorithm reported in [28] applied to indentation images with light gray levels.

Figure 9.

Example of improved binarization in dark-field indentation images.

Figure 10.

Example of improved binarization in indentation images with light gray levels.


5. Examples of the techniques

The indentation images are generated by microdurometers, which use a diamond tip to produce the indentation. For example, the Mitutoyo HM-125 microdurometer (see Figure 11) uses a microscope with up to 100-fold magnification, whose analog video signal was converted to a digital video signal [32] so that the images can be stored as 2D indentation images in gray-scale BMP format.

Figure 11.

Microdurometer Mitutoyo HM-125.

In many indentation images obtained from samples of steel-316 with a roughly polished surface, it is enough to apply the semi-automatic segmentation technique to obtain a good binarization of the image. In Figure 2, the values of f_mean and f_max are enough to evaluate f_max - f(x, y) > f_mean and obtain a good binarization (see Figure 2). Furthermore, the morphological filter and region growing reduce the structural noise of the image, making it possible to find the indentation and other characteristics of the object of interest, such as the indentation vertices, by applying the techniques reported in [27, 28]. Images acquired from samples of steel-316 with a specular-polished surface present a light spot in the indentation center, generated by the microscope's integrated light source, so these indentation images have low contrast in relation to the adjacent image area. For these images, the semi-automatic segmentation technique yields a bad binarization, because the maximum-area 8-component does not coincide with the indentation, as shown in the first image of Figure 12. In addition, samples of steel-316 with a roughly polished surface produce dark-field images of the indentation, where the morphological filter is insufficient to eliminate pixels detected as false positives for the indentation region, as shown in the second image of the same figure.

Figure 12.

Indentation images with low contrast (first column). Bad binarization obtained by the semi-automatic segmentation technique (second column).

Thus, images with low contrast have been treated by automatic image segmentation, obtaining a better binarization than with the semi-automatic segmentation technique. Figure 13 shows the improvement in binarization when applying the automatic image segmentation technique to the same images as in Figure 12; it can be observed that in the binarization obtained by the automatic image segmentation the maximum-area 8-component coincides with the indentation footprint. In addition, by applying the techniques reported in [27, 28], the indentation vertices are obtained satisfactorily after applying the morphological filter and region growing to the final binarization.

Figure 13.

Sequence to obtain a good binarization through the automatic segmentation technique applied to indentation images with low contrast.

The first input image of Figure 13 satisfies the condition f_mean > f_max/2 of the automatic segmentation technique, while the second input image satisfies f_mean < f_max/2, from which the optimal binarization threshold of the second image of Figure 13 is obtained.


6. Conclusions

Two algorithms were presented in this chapter. The first consists of a very simple binarization, based on the average gray value of the input image as threshold and the difference to the maximum gray value as binarization criterion, which is robust against image noise and surface imperfections.

The second algorithm presented in this chapter employs the same binarization criterion for each input image as the first. However, the second algorithm adjusts the mean gray value via automatic analysis with standard histogram equalization to determine the optimal binarization threshold. Morphological filtering was applied to the binarized image, followed by a region growing segmentation. The result obtained by both algorithms is a maximum-area black 8-component of the image segmentation. Nevertheless, the second algorithm, unlike the first, can evaluate a wide range of indentation images: images in which the indentation edges are not exactly straight lines, indentation images with very low contrast in relation to the adjacent image area, and indentation images that present some shading, thereby resolving illumination problems in the image.

References

  1. Otsu N. (1979) A threshold selection method from gray-level histograms. IEEE Trans. Syst., Man, Cybern. 9, 1, pp. 62-66.
  2. Canny J. (1986) A computational approach to edge detection. IEEE Trans. Pattern Analysis and Machine Intelligence. 8, 6, pp. 679-698. https://doi.org/10.1109/TPAMI.1986.4767851
  3. Davis LS. (1975) A survey of edge detection techniques. Computer Graphics and Image Processing. 4, 3, pp. 248-270. https://doi.org/10.1016/0146-664X(75)90012-X
  4. Andreão RV, Boudy J. (2007) Combining wavelet transform and hidden Markov models for ECG segmentation. EURASIP Journal on Applied Signal Processing. 1, pp. 1-8. https://doi.org/10.1155/2007/56215
  5. Qin XJ, Jiang JH. (2007) Canny operator based level set segmentation algorithm for medical images. Proceedings of the 1st International Conference on Bioinformatics and Biomedical Engineering (Wuhan, China). https://doi.org/10.1109/ICBBE.2007.232
  6. Beucher S, Meyer F. (1990) Morphological segmentation. Journal of Visual Communication and Image Representation. 1, 1, pp. 21-46. https://doi.org/10.1016/1047-3203(90)90014-M
  7. Adams R, Bischof L. (1994) Seeded region growing. IEEE Trans. on Pattern Analysis and Machine Intelligence. 16, 6, pp. 641-647. https://doi.org/10.1109/34.295913
  8. Pohle R, Toennies KD. (2001) Segmentation of medical images using adaptive region growing. Proceedings of the SPIE Medical Imaging (San Diego, California, USA). 4322. https://doi.org/10.1117/12.431013
  9. Yi J, Ra JB. (2001) Vascular segmentation algorithm using locally adaptive region growing based on centerline estimation. Proceedings of the SPIE Medical Imaging (San Diego, California, USA). 4322. https://doi.org/10.1117/12.431012
  10. Pan ZG, Lu JF. (2007) A Bayes-based region-growing algorithm for medical image segmentation. Computing in Science & Engineering. 9, 4, pp. 32-38. https://doi.org/10.1109/MCSE.2007.67
  11. Grau V, Mewes AUJ. (2004) Improved watershed transform for medical image segmentation using prior information. IEEE Transactions on Medical Imaging. 23, 4, pp. 447-458. https://doi.org/10.1109/TMI.2004.824224
  12. Ng HP, Ong SH, Foong KWC, Goh PS, Nowinski WL. (2006) Medical image segmentation using K-means clustering and improved watershed algorithm. Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation (Denver, Colorado, USA). https://doi.org/10.1109/SSIAI.2006.1633722
  13. Hamarneh G, Li X. (2009) Watershed segmentation using prior shape and appearance knowledge. Image and Vision Computing. 27, 1, pp. 59-68. https://doi.org/10.1016/j.imavis.2006.10.009
  14. Sugimoto T, Kawaguchi T. (1997) Development of an automatic Vickers hardness testing system using image processing technology. IEEE Trans. Ind. Electron. 44, pp. 696-702. https://doi.org/10.1109/41.633474
  15. Ji Y, Xu A. (2009) A new method for automatic measurement of Vickers hardness using thick line Hough transform and least square method. 2nd Inter. Congr. Image Sign. Proc., Tianjin, China. https://doi.org/10.1109/CISP.2009.5305653
  16. Kang S, Kim J, Park C, Kim H, Kwon D. (2010) Conventional Vickers and true instrumented indentation hardness determined by instrumented indentation tests. J. Mater. Res. 25, pp. 337-343. https://doi.org/10.1557/JMR.2010.0045
  17. Filho P, Cavalcante T, de Alburquerque V, Tavares J. (2010) Brinell and Vickers hardness measurement using image processing and analysis techniques. J. Test. Eval. 38, pp. 88-94. https://doi.org/10.1520/JTE102220
  18. Barati F, Latifi M, Moayeri far E, Mosallanejad MH, Saboori A. (2019) Novel AM60-SiO2 nanocomposite produced via ultrasound-assisted casting; production and characterization. Materials. 12, 3976. https://doi.org/10.3390/ma12233976
  19. Momber AW, Irmer M, Marquardt T. (2020) Effects of polymer hardness on the abrasive wear resistance of thick organic offshore coatings. Prog. Org. Coat. 146, 105720. https://doi.org/10.1016/j.porgcoat.2020.105720
  20. Li M, Wang S, Wang Q, Zhao Z, Duan C, Zhao D, Zhang L, Wang Y. (2020) Microstructure and mechanical properties of MoAlB particles reinforced Al matrix composites by interface modification with in situ formed Al12Mo. J. Alloys Compd. 823, 153813. https://doi.org/10.1016/j.jallcom.2020.153813
  21. Almonani MA, Hayajneh MT, Al-Shrida M. Investigation of mechanical and tribological properties of hybrid green eggshells and graphite-reinforced aluminum composites. J. Braz. Soc. Mech. Sci. Eng. 42, 45. https://doi.org/10.1007/s40430-019-2130-z
  22. Wang C, Song L, Xie Y. (2020) Mechanical and electrical characteristics of WB2 synthesized at high pressure and high temperature. Materials. 13, 1212. https://doi.org/10.3390/ma13051212
  23. Gadermayr M, Maier A, Uhl A. (2012) The impact of unfocused Vickers indentation images on segmentation performance. In: Bebis G et al. (eds) Advances in Visual Computing. ISVC 2012. Lect. Notes Comp. Sci. 7432. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33191-6_46
  24. Gadermayr M, Maier A, Uhl A. (2011) Algorithms for microindentation measurement in automated Vickers hardness testing. Proc. SPIE 8000, 10th Int. Conf. Qual. Control Artif. Vis. 80000M. https://doi.org/10.1117/12.890894
  25. Gadermayr M, Uhl A. (2012) Dual-resolution active contours segmentation of Vickers indentation images with shape prior initialization. In: Elmoataz A, Mammass D, Lezoray O, Nouboud F, Aboutajdine D (eds) Image and Signal Processing (ICISP 2012). Lect. Notes Comp. Sci. 7340, pp. 362-369. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-31254-0_41
  26. Papari G, Petkov N. (2011) Edge and line oriented contour detection: State of the art. Image Vis. Comput. 29, pp. 79-103. https://doi.org/10.1016/j.imavis.2010.08.009
  27. Domínguez-Nicolás SM, Wiederhold P. (2018) Indentation image analysis for Vickers hardness testing. 15th Int. Conf. Elect. Eng. Comput. Sci. Autom. Contr. (CCE), México. https://doi.org/10.1109/ICEEE.2018.8533881
  28. Domínguez-Nicolás SM, Herrera-May AL, García-González L, Zamora-Peredo L, Hernández-Torres J, Martínez-Castillo J, Morales-González EA, Cerón-Álvarez CA, Escobar-Pérez A. (2021) Algorithm for automatic detection and measurement of Vickers indentation hardness using image processing. Measurement Science and Technology. 32, pp. 1-14. https://doi.org/10.1088/1361-6501/abaa66
  29. Gadermayr M, Maier A, Uhl A. (2012) Robust algorithm for automated microindentation measurement in Vickers hardness testing. J. Electron. Imag. 21, 021109. https://doi.org/10.1117/1.JEI.21.2.021109
  30. Maier A, Uhl A. (2012) The AreaMap operator and its application to Vickers hardness testing images. Intern. J. Fut. Gener. Commun. Netw. 5, 123967509.
  31. Maier A, Uhl A. (2013) AreaMap and Gabor filter based Vickers hardness indentation measurement. 21st Eur. Sign. Proc. Conf. (EUSIPCO), Marrakech, Morocco, pp. 1-5.
  32. Domínguez-Nicolás SM, Argüelles-Lucho P, Wiederhold P. (2016) FPGA based image acquisition and graphic interface for hardness tests by indentation. Int. J. Adv. Comput. Technol. 5, pp. 6-16.
