Open access peer-reviewed chapter

Mineral Froth Image Classification and Segmentation

Written By

Wang Weixing and Chen Liangqin

Submitted: March 30th, 2016 Reviewed: July 26th, 2016 Published: November 23rd, 2016

DOI: 10.5772/65028


Abstract

Accurate segmentation of froth images is a persistent problem in research on flotation modeling based on machine vision. Since froth images are complex and diverse, a feasible workflow is to first classify a froth image and then segment it with a segmentation algorithm designed for that type of froth image. This study proposes a new froth image classification algorithm in which texture features are extracted to complete the classification. An improved method based on the original valley‐edge detection algorithm is also proposed. First, the fractional differential is introduced to design new valley‐edge detection templates, which extract more bubble‐edge information after the weak edges are enhanced; closed bubble boundaries are then obtained by applying the improved deburring and gap‐connection algorithms. Experimental results show that the new classification method can distinguish small, middle and large bubble images, and that the improved segmentation algorithm reduces over‐segmentation and under‐segmentation while offering higher adaptability.

Keywords

  • froth image
  • bubble
  • classification
  • segmentation
  • valley‐edge detection

1. Introduction

In conventional mineral processing, froth flotation is the most widely used method [1]. Based on differences in the physical and chemical properties of mineral surfaces, flotation is a separation method in which mineral particles selectively attach to bubbles [2]. Froth flotation is a continuous physical and chemical process occurring at the solid, liquid and gas‐phase interfaces [3], in which the froth layer is a key factor. The visual appearance of the surface of the froth layer is a direct indicator of flotation process conditions and production performance [3–7]. Traditionally, the flotation operation is controlled through workers’ visual observation of the surface condition of the froth layer [8–10]. Clearly, this way of working has many disadvantages, such as strong subjectivity, randomness and large errors [11], and it seriously affects flotation efficiency and performance. Since the 1990s, machine vision has been introduced into flotation process monitoring. A machine vision flotation monitoring and control system obtains quantitative characteristics of the visible surface of the froth layer by using a computer, cameras and other industrial equipment, and by applying digital image processing, artificial intelligence and other advanced technologies [12]. Further, by studying the relationship between these features and the flotation performance, flotation modeling can be realized, and accordingly the automatic monitoring and optimization control of the flotation process can be achieved [13]. Machine vision is a nonintrusive, cost‐effective, reliable technique for monitoring and controlling flotation systems [14–17].

In recent years, with the rapid development of computer and digital image processing technology, bubble image analysis has attracted more and more attention. The University of Queensland (Australia) and research institutions and process technology companies in the United States, UK, Sweden, Finland, Italy, Chile and other countries have joined the research on monitoring the flotation process by computer vision. Efforts continue to be directed into how to measure accurately the physical and dynamic features of froth and how to link concentrate grade with these measurable attributes of the froth phase, although this remains difficult [5, 12, 18]. The physical features of froth are the bubble size distribution, bubble shapes and colors. These features can be measured directly from digitized images of the froth, in which the image is segmented in order to explicitly identify individual bubbles on the froth surface [5]. Edge detection algorithms and the watershed algorithm are commonly used for bubble image segmentation and edge delineation. Reference [19] proposed the use of valley‐edge detection and valley‐edge tracing to segment froth images. The method first uses the Otsu threshold algorithm to extract the white spot areas on bubbles, based on which the froth images are classified as large, medium, small and mix‐sized bubble images; then, based on the different gray‐scale distribution characteristics of each class, a set of valley‐edge detection algorithms is designed to extract the bubble edges. For each category, the filtering parameters and the threshold used in valley‐edge detection are set separately, which yields good segmentation results for the different classes of images. Hence, before image segmentation the froth image must be classified correctly, otherwise the segmentation result will be unsatisfactory. In the end, a cleanup procedure based on valley‐edge tracing is carried out to complete the gap connection between the valley edges.
The advantage of such edge detection based methods is their fast calculation speed, but they are sensitive to noise in large bubble images, and the rough areas of the bubble surface always produce a large amount of false boundary information, which is difficult to remove completely. Watershed‐based algorithms are morphological approaches based on a simulation of water rising from a set of markers [5]. The watershed algorithm can obtain good segmentation results when used to process froth images with a fairly uniform bubble size distribution, but for images with large variation in bubble sizes, it easily encounters over‐segmentation or under‐segmentation problems.

Around these two categories, many improved methods and other algorithms have been studied [20–25]. However, to a certain degree these improved algorithms suffer from the same problem: the balance between the accuracy of bubble extraction and the high adaptability of the algorithm is difficult to achieve in application. Measuring the bubble size distribution is an intricate process [26]. No algorithm can obtain good segmentation results for all types of froth images. Currently, there are several commercial froth image processing systems, such as FrothMaster (Outokumpu), SmartFroth (UCT) and VisioFroth (Metso). Regarding bubble size and shape measurement, each of these systems is only applicable in some special cases, not in all cases. The implementation of a long‐term, fully automated flotation control system is difficult due to the image segmentation problem [12, 18].

In this study, a new classification algorithm and a new segmentation algorithm for froth images are proposed. The classification algorithm combines the size feature of the white spots with texture features, and the segmentation algorithm improves on the original valley‐edge detection algorithm.


2. Flotation and froth image

2.1. Flotation mechanism and process

The flotation process is illustrated in Figure 1: a flotation agent is added to the pulp and mixed mechanically, while air is blown in to create bubbles. Under certain flotation operating conditions, the hydrophobic mineral particles adhere to the surfaces of the bubbles as the air bubbles float, and eventually rise to the surface of the flotation cell, where the mineral is enriched and forms a froth layer; the hydrophilic particles are primarily retained in the water and are finally discharged with the tailings. The flotation circuit includes grinding, classification, and the flotation roughing, cleaning and scavenging operations. Each flotation bank includes dozens of flotation cells. Figure 2 shows a concise flowchart of the flotation circuit. The rough concentrate from the rougher cell is processed by the subsequent flotation procedures to improve the concentrate grade. The first operation into which the slurry is fed is called the rougher operation. The froth obtained from the rougher operation is floated again, in what is called the cleaner operation; the tailing output from the rougher operation is floated again, in what is called the scavenger operation. The outputs, including the tailing from the cleaner operation and the froth from the scavenger operation, are called the middle minerals (middlings). After the slurry is fed into the rougher cell, the useful mineral particles adhere to the bubbles, float to overflow the flotation cell and are then fed into the cleaner operation through a pipe. The froth output from the cleaner operation is fed into the total fine groove and processed by settling and filtration operations to obtain the final concentrate product. The underflow slurry of the rougher cells is put into the scavenger operation. The froth from the scavenger operation is returned as middlings to the rougher cell to be processed again.
The underflow slurry of the scavenger cells converges into the total end groove, where the tailing is discharged after the final operation. The flotation process is a continuous and complex industrial production process in which the subprocesses are interrelated and interact with each other.

Figure 1.

Principle of flotation cell.

Figure 2.

Schematic of flotation circuit.

2.2. Flotation image characteristic analysis

In a flotation system based on image analysis and machine vision, a CCD video camera is mounted vertically above a flotation cell to capture froth images. The froth image is a special kind of professional image. Figure 3 shows a lead froth image. A typical flotation image has the following characteristics: (1) a large number of bubbles stick together to form the foreground of the image, and there is no background; (2) the bubble sizes differ, there are one or more highlight areas on the top of each bubble, and the bubble boundaries have low gray values; (3) the contrast is low, there is noise on the bubble surfaces, and the illumination is uneven. In addition, there are often black holes of different sizes on some bubble surfaces, and the bubble colors of different mineral flotation images differ considerably. Flotation is a dynamic and continuous process, so each bubble goes through growth, bursting and merging.

Figure 3.

A froth image and its gray characteristic analysis.

All these characteristics of froth images make bubble delineation hard. Therefore, how to achieve an efficient froth image segmentation method has become a major task in this research field. Since a flotation image contains much noise and no background at all, flotation image processing is very difficult. In order to achieve fast and efficient processing results, it is necessary to classify a froth image first and then carry out the image segmentation.


3. Froth image classification

As described above, before image segmentation a froth image should be classified into the correct category based on bubble size. Previous studies have shown that, in most cases, each bubble in a froth image includes one or more highlight areas, called white spots. The size of a white spot is proportional to the bubble size, and the average size of the white spots is inversely proportional to the number of bubbles in a froth image. The white spots are generated by the artificial illumination in a froth cell.

3.1. Classification method on the white spot

One kind of bubble classification method is based on the size of the white spot areas of the bubbles. First, the white spots are extracted by an image segmentation method, such as a threshold‐based method. Then the average size of the white spots is used to determine the bubble image category. Similarly, the category can also be determined from the number distribution of the white spots. Figure 4 presents three froth image binarization results, which can be used for image classification.
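The Otsu thresholding used to extract the white spots can be sketched as follows. This is a minimal implementation of Otsu's between‐class‐variance criterion in Python with NumPy; the gray levels and spot intensity in the toy data are illustrative, not taken from the chapter.

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Otsu's method: choose the threshold t maximizing between-class variance."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # class-0 (dark) probability
    mu = np.cumsum(p * np.arange(levels))     # cumulative mean
    mu_t = mu[-1]                             # global mean
    best_t, best_var = 0, -1.0
    for t in range(levels - 1):
        w0, w1 = omega[t], 1.0 - omega[t]
        if w0 <= 0 or w1 <= 0:
            continue
        m0, m1 = mu[t] / w0, (mu_t - mu[t]) / w1
        var = w0 * w1 * (m0 - m1) ** 2        # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Toy data: bright white spots (around 240) against darker bubble bodies (around 80).
img = np.concatenate([np.full(900, 80), np.full(100, 240)]).astype(np.uint8)
t = otsu_threshold(img)
white_spots = img > t
```

Pixels above the returned threshold form the white spot mask, from which the average spot size can then be measured by connected-component labeling.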

Figure 4.

Image classification based on the white spot. (a) Three types froth images, from left to right are mixed, middle and large bubble image. (b) Extracted white spot using Otsu threshold segmentation method.

3.2. Classification method on bubble surface texture feature

3.2.1. Bubble surface texture feature based on GLCM

In this section, a new classification method is proposed.

Based on the above‐mentioned information, to obtain correct bubble segmentation results we believe the froth images should be classified into four classes: (1) images with small‐size bubbles (or mixed with a few middle/large‐size bubbles), called “small bubble;” (2) images in which most of the bubbles have a medium size, called “middle bubble;” (3) images in which most of the bubbles have a relatively large size, called “large bubble;” and (4) images in which few white spots can be detected, or the detected white spots have a very large size, called “super‐large bubble.”

For class 1 images, a bubble consists of only a few pixels; it is difficult to detect the contours of the bubbles, but detecting the white spots is enough to estimate the bubble size distribution. For class 2, a bubble contains about 30–60 pixels; a rough contour location can be detected with a morphological segmentation algorithm, so only the size distribution and texture information can be obtained. For class 3, the contours of the bubbles are clear and a small mislocation of the bubble contours does not affect the shape analysis results; thereby, the size and shape distributions can be obtained exactly with an image segmentation algorithm based on bubble contour tracing. For class 4, the white spot information is less useful and the only valuable bubble information is the bubble edges, so the valley‐edge detection algorithm was developed and used to estimate size and texture information.

Texture concerns collections of pixels with a certain size and shape. It is used to express the properties of the surface or structure of an object, and it provides an intuitive measure of roughness, smoothness, regularity and other features of an image. The methods commonly used to extract texture fall into two categories: spatial‐domain methods and frequency‐domain methods. In a spatial‐domain method, the brightness variation, correlation and direction of adjacent pixels are calculated, and the characteristics of the image texture are then obtained with a statistical method. Power spectrum analysis is a widely applied frequency‐domain method for texture extraction: fine texture is reflected in the higher frequencies and rough texture in the low frequencies, so texture features can be extracted from the spectral distribution of an image. However, the Fourier transform must be calculated to obtain the power spectrum, which leads to a large amount of computation.

In this paper, the gray‐level co‐occurrence matrix (GLCM) is used to describe the texture features of a bubble image. GLCM is a spatial‐domain method that gives the probability of a certain gray value variation between adjacent pixels. GLCM differs from the histogram of an image: the histogram gives first‐order statistics of single pixels, whereas GLCM is a function of distance and direction. Given the direction, distance and window size, the numbers of pixel pairs that meet the condition are counted. For an image of size M×N, given a predetermined direction θ (0°, 45°, 90°, 135°, etc.) and a distance value d, the element P(i,j|d,θ) describes the probability that a pixel pair with gray values i and j appears, where the two pixels lie along the direction θ at a distance d.
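The construction of P(i,j|d,θ) can be sketched in a few lines. The NumPy implementation below is illustrative; the quantization to a small number of gray levels and the toy striped image are assumptions made for the example, not the chapter's settings.

```python
import numpy as np

def glcm(img, d=1, angle=0, levels=8):
    """Normalized gray-level co-occurrence matrix P(i, j | d, theta).

    angle is one of 0, 45, 90, 135 degrees; d is the pixel-pair distance.
    img must already be quantized to integers in [0, levels).
    """
    # Offsets use (row, col) image convention with the row axis pointing down.
    offsets = {0: (0, d), 45: (-d, d), 90: (-d, 0), 135: (-d, -d)}
    dr, dc = offsets[angle]
    P = np.zeros((levels, levels), dtype=np.float64)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[img[r, c], img[r2, c2]] += 1
    return P / P.sum()   # convert counts to probabilities

# A tiny 4-level test image: 2x2 blocks give strong same-value transitions.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
P = glcm(img, d=1, angle=0, levels=4)
```

In practice the four direction matrices (0°, 45°, 90°, 135°) are computed and their feature values averaged, as the chapter does later.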

Once the co‐occurrence matrix is calculated, the texture features can be described using the matrix. The most used ones are the following:

  1. Energy, the formula is as:

    $F_{T1} = \mathrm{ASM} = \sum_{i}\sum_{j}\left[P(i,j\mid d,\theta)\right]^{2}$ (E1)

    Energy is the sum of the squared elements of the GLCM. The more regular the texture variation, the greater the energy value.

  2. Contrast, the formula is expressed as follows:

    $F_{T2} = \mathrm{CON} = \sum_{i}\sum_{j} P(i,j\mid d,\theta)\,(i-j)^{2}$ (E2)

    The higher the clarity of an image, the greater the contrast value.

  3. Entropy, the formula is:

    $F_{T3} = -\sum_{i}\sum_{j} P(i,j\mid d,\theta)\,\log_{2} P(i,j\mid d,\theta)$ (E3)

    Entropy is a measure of the irregularity of the texture in an image. The messier and more disordered the gray value distribution of the image, the greater the entropy.

  4. Evenness, the formula can be expressed as:

    $F_{T4} = \mathrm{IDM} = \sum_{i}\sum_{j} \frac{P(i,j\mid d,\theta)}{1+(i-j)^{2}}$ (E4)

    The smaller and more uniform the local variations in an image, the greater the evenness value.

  5. Correlation, the formula is as follows:

    Correlation expresses the degree of similarity of the GLCM elements between rows and columns.

    $F_{T5} = \mathrm{COR} = \dfrac{\sum_{i}\sum_{j} ij\,P(i,j\mid d,\theta) - \mu_{x}\mu_{y}}{\sigma_{x}\sigma_{y}}$, where $\mu_{x} = \sum_{i} i \sum_{j} P(i,j\mid d,\theta)$, $\mu_{y} = \sum_{j} j \sum_{i} P(i,j\mid d,\theta)$, $\sigma_{x}^{2} = \sum_{i}(i-\mu_{x})^{2} \sum_{j} P(i,j\mid d,\theta)$, $\sigma_{y}^{2} = \sum_{j}(j-\mu_{y})^{2} \sum_{i} P(i,j\mid d,\theta)$ (E5)
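The five descriptors can be computed directly from a normalized GLCM. The sketch below follows formulas E1–E5; the uniform 2×2 test matrix is only an illustration (it represents maximal disorder for two gray levels).

```python
import numpy as np

def glcm_features(P):
    """Texture descriptors E1-E5 computed from a normalized GLCM P."""
    i, j = np.indices(P.shape)                       # row and column index grids
    energy = np.sum(P ** 2)                          # E1: ASM / energy
    contrast = np.sum(P * (i - j) ** 2)              # E2: contrast
    nz = P > 0                                       # avoid log2(0)
    entropy = -np.sum(P[nz] * np.log2(P[nz]))        # E3: entropy
    evenness = np.sum(P / (1.0 + (i - j) ** 2))      # E4: IDM / evenness
    mu_x = np.sum(i * P)
    mu_y = np.sum(j * P)
    sigma_x = np.sqrt(np.sum((i - mu_x) ** 2 * P))
    sigma_y = np.sqrt(np.sum((j - mu_y) ** 2 * P))
    correlation = (np.sum(i * j * P) - mu_x * mu_y) / (sigma_x * sigma_y)  # E5
    return energy, contrast, entropy, evenness, correlation

# A uniform 2x2 GLCM: every gray-level transition equally likely.
P = np.full((2, 2), 0.25)
energy, contrast, entropy, evenness, correlation = glcm_features(P)
```

For this uniform matrix the entropy reaches its 2-bit maximum and the correlation vanishes, matching the intuition that a fully disordered texture carries no directional structure.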

In the experiment, four froth images are chosen for the GLCM calculation. Judged by eye, the four images are classified as small, middle, large and super‐large bubble images, as shown in Figure 5. Energy, entropy, contrast and correlation are obtained from the GLCM, and the results are shown in Table 1. In the GLCM calculation, the distance d is 1, and for each texture feature the average over the four directions 0°, 45°, 90° and 135° is taken as the final value.

Figure 5.

Four types of bubble images. (a) Small, (b) middle, (c) large and (d) super‐large.

Image type Energy Entropy Contrast Correlation
Small 0.0512 3.6108 1.9535 0.1227
Middle 0.0382 3.7851 1.5868 0.1113
Large 0.0644 3.3802 1.0884 0.1366
Super‐large 0.0596 3.5659 1.6412 0.1284

Table 1.

GLCM texture features of four images in Figure 5.

Tests are carried out on 50 images covering the four types of bubble images. The statistical data show that for the small, middle and large bubble images, the contrast parameter of each type falls in a clearly separated distribution interval. The contrast value of the small bubble images is the largest, lying in the interval of about 1.9–2.1, and the contrast value of the middle bubble images is about 1.4–1.6. The only exception is the super‐large bubble image, whose contrast value is not distributed in a fixed interval.

Based on the above analysis, a method combining the size of the white spot areas and the texture feature is proposed to classify the bubble images. The input image is first segmented to extract the white spots. If the average size of the white spot areas is greater than a given threshold, the input image is classified as a super‐large bubble image. Otherwise, the GLCM of the input image is calculated to extract the contrast feature, and the image is then classified as a small, middle or large bubble image according to the distribution interval of the contrast value.
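The two‐stage decision rule can be sketched as follows. The small (≈1.9–2.1) and middle (≈1.4–1.6) contrast intervals follow the statistics above, while the white‐spot size threshold and the exact cut‐offs between intervals are hypothetical values introduced only for the example.

```python
def classify_froth(mean_white_spot_size, contrast, spot_threshold=500.0):
    """Two-stage froth image classification.

    Stage 1: very large average white-spot size -> super-large bubble image.
    Stage 2: GLCM contrast decides among small / middle / large.
    spot_threshold and the interval boundaries are illustrative, not the
    chapter's calibrated values.
    """
    if mean_white_spot_size > spot_threshold:
        return "super-large"
    if contrast >= 1.9:          # small bubble images: contrast about 1.9-2.1
        return "small"
    if contrast >= 1.4:          # middle bubble images: contrast about 1.4-1.6
        return "middle"
    return "large"               # remaining images treated as large
```

In a real system the thresholds would be fitted to the statistics of the 50-image test set described above.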

3.2.2. Classification experiments and results based on SVM

After the texture features are extracted based on GLCM, these features can be used to design and train the classifier. And the classifier can be used to classify the froth images.

The support vector machine (SVM) was proposed based on the structural risk minimization principle, meaning that the design of the SVM maximizes the accuracy on both training and testing data, i.e., it minimizes both the empirical risk and the expected risk. For traditional classifiers, insufficient samples always lead to an imbalance between the training samples and the testing samples, which affects classifier performance. The SVM can overcome this shortcoming and is one of the better machine learning algorithms. In this chapter, the SVM method is used to classify the froth images.

Figure 6.

The voting and the decision process.

In the experiment, the one‐to‐one approach (pairwise binary classifiers) is used to design the classifier. Let class A denote the large froth images, class B the middle froth images, and class C the small froth images. A classifier is designed and trained for each pair of the classes A, B and C. That is to say, there are three classifiers in total: classifier (A,B), classifier (A,C) and classifier (B,C). During the testing phase, a test sample xi is sequentially fed into these three classifiers, and voting is then used to decide the category. The specific voting procedure is described as follows:

  1. The initialization process: set the vote counters to the initial value 0, that is, vote(A) = vote(B) = vote(C) = 0;

  2. The voting process: If the test sample xi is judged as class A in the classifier (A,B), then vote(A)=vote(A) +1, otherwise vote(B) = vote(B) + 1;

    If the test sample xi is judged as class A in the classifier (A,C), then vote(A) = vote(A) + 1, otherwise vote(C) = vote(C) + 1;

    If the test sample xi is judged as class B in the classifier (B,C), then vote(B) = vote(B) + 1, otherwise vote(C) = vote(C) + 1;

  3. The final decision process: Find the category corresponding to the maximum based on the following formula, and the test sample xi is classified into the corresponding class.

    Max(vote(A),vote(B),vote(C))

If there are two maximum values, the class corresponding to the first maximum is generally taken as the selected class. Figure 6 demonstrates the voting and decision process.
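The voting scheme above can be sketched in a few lines; the dictionary of per‐pair predictions is an illustrative representation of the three classifiers' outputs.

```python
def ovo_vote(predictions):
    """One-to-one SVM voting over the classifiers (A,B), (A,C), (B,C).

    predictions maps each classifier pair to its predicted label.
    Returns the class with the most votes; on a tie, the first maximal
    class in (A, B, C) order is taken, as described in the text.
    """
    votes = {"A": 0, "B": 0, "C": 0}
    for winner in predictions.values():
        votes[winner] += 1
    # max() over the ordered list returns the first class reaching the maximum.
    return max(["A", "B", "C"], key=lambda c: votes[c])
```

For example, predictions A, A, B give two votes for A and one for B, so the sample is assigned to class A; the circular predictions A, C, B tie at one vote each and fall back to the first maximum, A.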

In the following section, we take the classifier (A,B), that is, the classifier of the large froth image and the middle froth image, as an example to describe the design procedure. The designs of the other two classifiers are similar.

We chose the radial basis function (RBF), with its strong generalization ability, as the kernel function of the SVM. The formula of the RBF is as follows:

$K(x, x_i) = \exp\left(-\gamma \left\| x - x_i \right\|^{2}\right)$ (E6)

where $\gamma = 1/\sigma^{2}$; in our experiment, γ is set to 0.07.
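E6 can be written directly as a function; this minimal sketch defaults γ to the chapter's value of 0.07, and the feature vectors used in the usage line are arbitrary.

```python
import numpy as np

def rbf_kernel(x, xi, gamma=0.07):
    """RBF kernel K(x, x_i) = exp(-gamma * ||x - x_i||^2), gamma = 1/sigma^2."""
    x, xi = np.asarray(x, dtype=float), np.asarray(xi, dtype=float)
    return np.exp(-gamma * np.sum((x - xi) ** 2))

# Identical vectors give the maximal similarity of 1; similarity decays with distance.
k_same = rbf_kernel([0.0, 0.0], [0.0, 0.0])
k_near = rbf_kernel([1.0, 0.0], [0.0, 0.0])
```

In an SVM, this kernel value plays the role of an inner product between the two samples in the implicit feature space.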

In the experiment, 35 large froth images and 35 middle froth images are chosen as the dataset. From each class, 15 samples are selected as the training set and the remaining 20 samples as the test set; that is, there are 30 training samples and 40 testing samples in total. Eight dimensions of GLCM‐based texture features are used for each sample, namely the mean and the standard deviation of the energy, entropy, moment of inertia (contrast) and correlation features.

The classifier model is obtained after training on the 30 samples, and the 40 testing samples are then fed into the classifier for prediction. Figure 7 gives the classification results, in which label 1 represents the large froth images and label -1 the middle froth images.

Figure 7.

Forty test samples classification results of classifier (A,B).

As shown in Figure 7, all 20 large froth image test samples were classified correctly, while 3 of the 20 middle froth image test samples were misclassified.

Classifier (A,C) and classifier (B,C) are designed by the above procedure. Once the three classifiers are available, each test sample is fed into the classifiers in turn and assigned to the corresponding class by voting. For the 60 test samples, the classification results are shown in Figure 8, where label 2 represents the large froth image test samples, label 1 the middle froth image test samples and label -1 the small froth image test samples. Table 2 shows the statistics corresponding to the figure. They show that the probability of misclassification is highest for the middle froth image test samples; the texture features of some middle froth images are possibly close to those of large or small froth images, which leads to a higher misclassification rate. However, the overall average correct classification rate of 83.3% can basically meet the application requirements.

Figure 8.

Sixty test samples classification results.

Froth image category  Number of test samples  Number of correct classification test samples  Correct classification rate (%) 
Large 20 18 90
Middle 20 15 75
Small 20 17 85
Total amount 60 50 83.3

Table 2.

The statistical data corresponding to Figure 8.


4. Froth image segmentation

Bubble image segmentation and delineation are the key to extracting the morphological characteristics of froth. Classical image segmentation methods, such as threshold‐based and edge detection methods, can only detect the highlighted areas of some bubbles and fail to extract the bubble edges. In practice, the better methods have proved to be those based on valley‐edge extraction and on the watershed. Around these two categories, many improved methods have been proposed.

4.1. Segmentation method based on valley‐edge extraction

The segmentation method based on valley‐edge extraction was proposed by Wang et al. [19]. The algorithm includes valley‐edge detection and valley‐edge tracing. The detection process was designed according to the gray value distribution along a cross‐section of a froth image. Each pixel is examined to see whether it is the lowest valley point in a certain direction; if it is, the pixel is taken as a valley‐edge candidate, and both its direction and location are marked. This search is performed in the four directions 0°, 45°, 90° and 135° around the current pixel. A threshold is applied to the strongest directional response, and the detected point is then marked as a valley‐edge candidate point.
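The directional valley test can be illustrated with a simplified 3×3 neighborhood check: a pixel is a candidate if, along some direction, both of its neighbors are brighter by at least a threshold. The original algorithm uses larger, class‐dependent filters and thresholds, so this is only a sketch of the idea; the toy image of a dark vertical line is an assumption for the example.

```python
import numpy as np

# Unit steps for the four search directions (row, col).
DIRS = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}

def is_valley_candidate(img, r, c, T=5):
    """Mark (r, c) as a valley-edge candidate if, along some direction,
    both neighbors exceed the center pixel's gray value by at least T."""
    v = int(img[r, c])
    for dr, dc in DIRS.values():
        a = int(img[r + dr, c + dc])   # neighbor ahead
        b = int(img[r - dr, c - dc])   # neighbor behind
        if a - v >= T and b - v >= T:
            return True
    return False

# A dark vertical line on a bright background is a valley along the 0-degree direction.
img = np.full((5, 5), 200, dtype=np.uint8)
img[:, 2] = 100
```

Running the test over all interior pixels would mark exactly the dark column as valley-edge candidates.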

Valley‐edge tracing is performed on the result image of valley‐edge detection after simple denoising. First, the significant endpoints of curves are detected, and then the direction is estimated, and finally the contour is traced according to the information of direction of each new detected point and an intensity cost function.

It should be noted that good segmentation results could be obtained only if the input froth image is classified into the exact type. Figure 9 gives the valley‐edge extraction results of the four types of bubble images in Figure 5. We can see that over‐segmentation or under‐segmentation problem cannot be avoided in the four types of bubble images.

Figure 9.

Segmentation results on the valley‐edge extraction.

4.2. Segmentation method based on watershed

Watershed is a segmentation method based on mathematical morphology. The watershed algorithm finds the local maxima (watershed lines) of an image. Vincent and Soille [27] proposed and described the algorithm in detail. An image is viewed as a topographical surface with holes pierced at the locations of the minima. As this surface is lowered into a lake, the water level starts to rise within each of the catchment basins. When the water from two catchment basins is about to merge, a dam is built to prevent this. At the end of the process, each minimum is surrounded by a dam, and the dams correspond to the watershed lines of the image. In order to obtain a good segmentation result, the minimum location of each object should be found and used as a marker; see Ref. [28]. Figure 10 shows the watershed segmentation results for the four types of bubble images in Figure 5. The same problems as with the valley‐edge extraction method can be seen in the watershed segmentation results.

Figure 10.

Segmentation results on Watershed.

4.3. New segmentation method based on valley‐edge extraction

The main reason that froth images are hard to segment accurately lies in the very weak boundaries of the bubbles. Based on the original valley‐edge extraction algorithm, we propose an improved segmentation method using the fractional integral: the valley‐edge detection mask of the improved method is designed from the fractional integral, and the new mask helps to extract more detail on the bubble edges.

The fractional integral of signal f(x) is defined as:

$I^{v}f(x) \approx f(x) + v\,f(x-1) + \frac{v(v+1)}{2}\,f(x-2) + \frac{v(v+1)(v+2)}{6}\,f(x-3) + \cdots + \frac{\Gamma(v+n)}{n!\,\Gamma(v)}\,f(x-n)$ (E7)

Each coefficient is defined as follows:

$a_{0} = 1,\quad a_{1} = v,\quad a_{2} = \frac{v(v+1)}{2},\quad a_{3} = \frac{v(v+1)(v+2)}{6},\quad \ldots,\quad a_{n} = \frac{\Gamma(v+n)}{n!\,\Gamma(v)}$ (E8)

The first three coefficients, i.e., a0, a1 and a2, are taken to define the eight detection templates of the valley‐edge detection algorithm. The eight templates are shown in Figure 11.

Figure 11.

Eight directions templates. (a) X1, (b) X2, (c) X3, (d) X4, (e) X5, (f) X6, (g) X7 and (h) X8.

Based on the eight direction templates, a convolution operation is carried out on the image. For each pixel f(i,j), eight responses are obtained, marked G1–G8. The convolution operation for the eight directions is defined as:

$G_{k} = \frac{1}{1+a_{1}+a_{2}} \sum_{s=-b}^{b} \sum_{t=-b}^{b} X_{k} * f(s,t), \quad k = 1, 2, \ldots, 8$ (E9–E16)

In the above formula, b determines the template size (the window ranges from −b to b), and in this chapter we take b = 3.
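A single directional response can be sketched as follows. The horizontal template below is hypothetical (the chapter's actual eight templates are given in Figure 11), the fractional order v = 0.5 is an illustrative choice, and a 3×3 window (b = 1) is used instead of the chapter's b = 3 to keep the example small.

```python
import math
import numpy as np

v = 0.5  # fractional order (illustrative value)
# Coefficients a_n = Gamma(v+n) / (n! * Gamma(v)), matching a0=1, a1=v, a2=v(v+1)/2.
a = [math.gamma(v + n) / (math.factorial(n) * math.gamma(v)) for n in range(3)]
a0, a1, a2 = a

# Hypothetical horizontal template built from a0, a1, a2 along one direction.
X1 = np.array([[0.0, 0.0, 0.0],
               [a2,  a1,  a0 ],
               [0.0, 0.0, 0.0]])

def direction_response(img, r, c, X):
    """G = (1 / (1 + a1 + a2)) * sum over the template window of X * f."""
    h, w = X.shape
    patch = img[r - h // 2: r + h // 2 + 1, c - w // 2: c + w // 2 + 1]
    return np.sum(X * patch) / (1.0 + a1 + a2)

# On a constant image the normalized response equals the gray value itself,
# which is the purpose of the 1/(1 + a1 + a2) normalization factor.
img = np.full((5, 5), 100.0)
g1 = direction_response(img, 2, 2, X1)
```

Valley detection then compares such responses from opposite-direction template pairs against the center pixel, as described next.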

In accordance with the detection rules of the original valley‐edge extraction algorithm, the responses are examined in pairs of opposite directions. If, for a pair, both responses exceed the gray value of the current center pixel f(i,j) by at least the given threshold T, the pixel f(i,j) is marked as a valley point in that direction. The new values of the four directions are set as follows:

$f_{m}(i,j) = \begin{cases} \dfrac{G_{m} + G_{m+4}}{2}, & G_{m} - f(i,j) \ge T \ \text{and} \ G_{m+4} - f(i,j) \ge T \\ 0, & \text{otherwise} \end{cases}$ (E17)

where m = 1, 2, 3, 4. The maximum of the four directional values is taken as the final valley‐point value g(i,j). Finally, the valley‐edge image is binarized with a selected threshold T2:

$g_{b}(i,j) = \begin{cases} 255, & g(i,j) \ge T_{2} \\ 0, & \text{otherwise} \end{cases}$ (E18)

Figure 12 shows the bubble edges obtained by the improved valley‐edge extraction method. The original images are from Figure 5.

Figure 12.

Bubble edge extracted by improved valley‐edge extraction method.

In the bubble edge image shown in Figure 12, there are many isolated noise points, short lines and gaps. A series of post‐processing functions must be applied to obtain clean and completely closed bubble boundaries. The post‐processing procedure includes denoising, burr removal and gap connection.

(1) Denoising processing

The first step is an expansion (dilation) with an eight‐neighbor structuring element, and the second step is a corrosion (erosion) with a four‐neighbor structuring element. The expansion connects small gaps, the corrosion eliminates small glitches, and the boundaries become smoother after the two operations. Figure 13 gives the structuring elements of the expansion and corrosion.
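The expansion/corrosion pair can be sketched with plain NumPy. The structuring elements below are assumed from the text's description (a full 3×3 block for the eight‐neighbor expansion, a cross for the four‐neighbor corrosion; the exact elements are shown in Figure 13), and the broken‐line image is a toy illustration.

```python
import numpy as np

SE8 = np.ones((3, 3), dtype=bool)        # eight-neighbor expansion structure
SE4 = np.array([[0, 1, 0],
                [1, 1, 1],
                [0, 1, 0]], dtype=bool)  # four-neighbor corrosion structure

def dilate(img, se):
    """Binary dilation: a pixel is set if any neighbor under the SE is set."""
    out = np.zeros_like(img)
    pad = np.pad(img, 1)                 # zero padding outside the image
    for dr in range(3):
        for dc in range(3):
            if se[dr, dc]:
                out |= pad[dr:dr + img.shape[0], dc:dc + img.shape[1]]
    return out

def erode(img, se):
    """Binary erosion: a pixel survives only if all SE neighbors are set."""
    out = np.ones_like(img)
    pad = np.pad(img, 1)
    for dr in range(3):
        for dc in range(3):
            if se[dr, dc]:
                out &= pad[dr:dr + img.shape[0], dc:dc + img.shape[1]]
    return out

# A one-pixel gap in a horizontal boundary line is closed by expansion then corrosion.
line = np.zeros((5, 7), dtype=bool)
line[2, :] = True
line[2, 3] = False                       # the gap
closed = erode(dilate(line, SE8), SE4)
```

After the two operations the gap at column 3 is filled, the spurious thickening introduced by dilation is removed, and only the smoothed boundary line remains.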

Figure 13.

Structural elements of expansion and corrosion. (a) Expansion structure on 8‐neighbor. (b) Corrosion structure on 4‐neighbor.

(2) Deburring process

For the short‐line noise, the traditional method eliminates short lines based on a given length threshold. In order to retain as much bubble boundary information as possible while removing the glitch noise, an improved deburring algorithm is designed, as shown in Figure 14.

(3) Gap connection processing

Some boundary gaps remain after the deburring process. A common method for gap connection is to first find the endpoints of the broken gaps and then search for candidate endpoints in the neighborhood of the current endpoint. If a candidate endpoint is found, the connection is carried out or not depending on the distance and angle difference between the current endpoint and the candidate endpoint.

Since bubble boundaries are complex, an improved gap connection algorithm combining a long connection and a short connection is studied. Figure 15 shows the workflow of the improved algorithm.

In the above algorithm, the short connection works as follows: when the distance between two endpoints is less than a given value (4 pixels here), the two endpoints are connected directly.
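A minimal sketch of the short connection, assuming endpoints are detected as edge pixels with a single 8‐connected neighbor and the gap is filled with a straight segment; the function names are illustrative. A full implementation would also check the angle condition described above to avoid spurious links.

```python
import numpy as np

def _endpoints(edges):
    """Edge pixels with exactly one 8-connected edge neighbor."""
    pts = []
    for r, c in zip(*np.nonzero(edges)):
        window = edges[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
        if window.sum() - 1 == 1:          # neighbors excluding the pixel itself
            pts.append((int(r), int(c)))
    return pts

def short_connect(edges, max_dist=4.0):
    """Directly join endpoint pairs closer than max_dist
    (the chapter uses a 4-pixel threshold)."""
    out = edges.astype(bool).copy()
    pts = _endpoints(out)
    for i, p in enumerate(pts):
        for q in pts[i + 1:]:
            if np.hypot(p[0] - q[0], p[1] - q[1]) <= max_dist:
                n = max(abs(q[0] - p[0]), abs(q[1] - p[1]))
                for t in range(1, n):      # fill the pixels between p and q
                    r = round(p[0] + (q[0] - p[0]) * t / n)
                    c = round(p[1] + (q[1] - p[1]) * t / n)
                    out[r, c] = True
    return out
```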

Figure 14.

Workflow of improved deburring algorithm.

The long connection is based on the maximum entropy threshold method. The processing steps are as follows:

  1. Take the original froth image as a reference image.

  2. Calculate the threshold value of the original froth image with the maximum entropy method. The gray values of bubble boundaries are generally below this threshold.

  3. Locate the current endpoint and the candidate endpoint in the original image. If there is boundary information between them (gray values below the threshold of step 2), the two endpoints are connected.
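Steps 2 and 3 of the long connection can be sketched as follows, using Kapur's classical maximum‐entropy criterion for the threshold and sampling the straight path between the two endpoints for the boundary check. The function names and the straight‐path sampling are assumptions of this sketch, not the authors' exact procedure.

```python
import numpy as np

def max_entropy_threshold(gray):
    """Kapur's maximum-entropy threshold for an 8-bit image: choose t
    maximizing the summed entropies of the below-t and above-t classes."""
    p = np.bincount(gray.ravel(), minlength=256).astype(float)
    p /= p.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, 256):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0.0 or p1 == 0.0:
            continue
        w0 = p[:t][p[:t] > 0] / p0
        w1 = p[t:][p[t:] > 0] / p1
        h = -(w0 * np.log(w0)).sum() - (w1 * np.log(w1)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

def can_connect(gray, p, q, thresh):
    """Step 3: connect two endpoints only if the straight path between
    them passes over dark (below-threshold) boundary pixels."""
    n = max(abs(q[0] - p[0]), abs(q[1] - p[1]), 1)
    path = [gray[round(p[0] + (q[0] - p[0]) * t / n),
                 round(p[1] + (q[1] - p[1]) * t / n)] for t in range(n + 1)]
    return min(path) < thresh
```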

For the four types of bubble images in Figure 5, the proposed improved segmentation method is tested. The segmentation results are shown in Figure 16. Compared with the results of the original valley‐edge extraction and watershed methods, the improved algorithm better restrains the over‐segmentation and under‐segmentation problems.

Figure 15.

Workflow of improved gap point connection algorithm.

Figure 16.

Final segmentation results on improved valley‐edge extraction algorithm.

Figure 17.

Comparison between improved algorithm and other segmentation algorithms. (a) Original froth images belonging to the four types. (b) Results of the original valley‐edge detection algorithm. (c) Results of the watershed algorithm. (d) Results of our improved algorithm.

Another experiment is carried out on four further types of froth images, as shown in Figure 17. The experimental results demonstrate again that no algorithm can always obtain a good segmentation result for all types of froth images. For froth images with a uniform bubble size distribution, the segmentation results are generally satisfactory. Black hole areas and raised mineral particle areas tend to cause over‐segmentation. For the super large bubble type, none of the compared algorithms gives a satisfactory result: noise and uneven gray value distribution become more obvious in this type, so the position deviation of the extracted bubble edges, over‐segmentation and other problems become more serious.


5. Conclusion

The classification and segmentation of froth images are discussed and analyzed in this chapter, and a new method is proposed for each problem.

  1. The existing classification and segmentation methods are reviewed first.

  2. A new froth image classification method is proposed. It adopts the gray‐level co‐occurrence matrix (GLCM) to extract the contrast texture feature, and based on the distinct distribution regions of the contrast feature, a froth image can be classified as a small, middle or large bubble image. Classification experiments based on SVM show that the proposed method is feasible in practice.

  3. An improved froth image segmentation method is suggested based on the original valley‐edge extraction algorithm. First, the fractional differential is introduced to design new eight‐direction templates for extracting bubble boundaries. Second, mathematical morphology operations (expansion and corrosion) are used for denoising. Third, an improved deburring algorithm removes burrs. Finally, an improved gap connection combining the long connection and the short connection is applied to form closed and complete bubble boundaries.

  4. The experimental results demonstrate the effectiveness of the two improved algorithms. Froth images are correctly classified by the new classification method, and the improved segmentation algorithm reduces both over‐segmentation and under‐segmentation.

However, because of the complexity, particularity, diversity, randomness and dynamics of froth images, the difficulties of froth image classification and segmentation are still not completely overcome.

References

  1. Nissinen A, Lehikoinen A, Mononen M, et al. Estimation of the bubble size and bubble loading in a flotation froth using electrical resistance tomography. Minerals Engineering. 2014; 69: 1–12.
  2. Napier‐Munn T, Wills BA. Wills’ Mineral Processing Technology: An Introduction to the Practical Aspects of Ore Treatment and Mineral Recovery. Butterworth‐Heinemann; 2011.
  3. Farrokhpay S. The significance of froth stability in mineral flotation—a review. Advances in Colloid and Interface Science. 2011; 166: 1–7.
  4. Nunez F, Cipriano A. Visual information model based predictor for froth speed control in flotation process. Minerals Engineering. 2009; 22: 366–371.
  5. Aldrich C, Marais C, Shean BJ, et al. Online monitoring and control of froth flotation systems with machine vision: a review. International Journal of Mineral Processing. 2010; 96(1–4): 1–13.
  6. Kistner M, Jemwa GT, Aldrich C. Monitoring of mineral processing systems by using textural image analysis. Minerals Engineering. 2013; 52: 169–177.
  7. Neethling SJ, Cilliers JJ. Modelling flotation froths. International Journal of Mineral Processing. 2003; 72: 267–287.
  8. Moolman DW, Eksteen JJ, Aldrich C, et al. The significance of flotation froth appearance for machine vision control. International Journal of Mineral Processing. 1996; 48: 135–158.
  9. Reddick JF, Hesketh AH, Morar SH, et al. An evaluation of factors affecting the robustness of colour measurement and its potential to predict the grade of flotation concentrate. Minerals Engineering. 2009; 22: 64–69.
  10. Tan JK, Liang L, Peng YL, et al. The concentrate ash content analysis of coal flotation based on froth images. Minerals Engineering. 2016; 92: 9–20.
  11. Zhang J, Tang Z, Liu J, et al. Recognition of flotation working conditions through froth image statistical modeling for performance monitoring. Minerals Engineering. 2016; 86: 116–129.
  12. Jovanovic I, Miljanovic L, Jovanovic T. Soft computing‐based modeling of flotation process—a review. Minerals Engineering. 2015; 84: 34–63.
  13. Morar SH, Harris MC, Bradshaw DJ. The use of machine vision to predict flotation performance. Minerals Engineering. 2012; 36–38: 31–36.
  14. Mehrabi A, Mehrshad N, Massinaei M. Machine vision based monitoring of an industrial flotation cell in an iron flotation plant. International Journal of Mineral Processing. 2014; 133: 60–66.
  15. Moolman DW, Aldrich C, Schmitz G, et al. The interrelationship between surface froth characteristics and industrial flotation performance. Minerals Engineering. 1996; 9: 837–854.
  16. Bonifazi G, Massacci P, Meloni A. Prediction of complex sulfide flotation performances by a combined 3D fractal and colour analysis of the froths. Minerals Engineering. 2000; 13: 737–746.
  17. Vanegas C, Holtham P. On‐line froth acoustic emission measurements in industrial sites. Minerals Engineering. 2008; 21: 883–888.
  18. Jahedsaravani A, Marhaban M, Massinaei M, et al. Froth‐based modeling and control of a batch flotation process. International Journal of Mineral Processing. 2016; 146: 90–96.
  19. Wang WX, Bergholm F, Yang B. Froth delineation based on image classification. Minerals Engineering. 2003; 16(11): 1183–1192.
  20. Banford AW, Aktas Z. The effect of reagent addition strategy on the performance of coal flotation. Minerals Engineering. 2004; 17: 745–760.
  21. Yang CH, Xu CH, Mu XM, et al. Bubble size estimation using interfacial morphological information for mineral flotation process monitoring. Transactions of Nonferrous Metals Society of China. 2009; 19: 694–699.
  22. Wang WX, Chen LQ. Flotation bubble delineation based on Harris corner detection and local gray value minima. Minerals. 2015; 5(2): 142–163.
  23. Vinnett L, Alvares‐Silva M. Indirect estimation of bubble size using visual techniques and superficial gas rate. Minerals Engineering. 2015; 81: 5–9.
  24. Jahedsaravani A, Marhaban MH, Massinaei M, et al. Development of a new algorithm for segmentation of flotation froth images. Minerals and Metallurgical Processing. 2014; 31(1): 66–72.
  25. Liu JP, Gui GH, Tang ZH, et al. Recognition of the operational statuses of reagent addition using dynamic bubble size distribution in copper flotation process. Minerals Engineering. 2013; 45: 128–141.
  26. Kracht W, Emery X, Paredes C. A stochastic approach for measuring bubble size distribution via image analysis. International Journal of Mineral Processing. 2013; 121: 6–11.
  27. Vincent L, Soille P. Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1991; 13(6): 583–598.
  28. Vincent L. Morphological grayscale reconstruction in image analysis: applications and efficient algorithms. IEEE Transactions on Image Processing. 1993; 2(2): 176–201.
