
Collective Solutions on Sets of Stable Clusterings

By Vladimir Vasilevich Ryazanov

Submitted: October 20th 2017. Reviewed: March 5th 2018. Published: August 1st 2018.

DOI: 10.5772/intechopen.76189


Abstract

Two clustering problems are considered. First, we consider many different clusterings of the same data for a given number of clusters. Data clustering is understood as a stable partition of the data into a given number of sets: a clustering is considered stable if the corresponding partition remains unchanged under a minimal change of the sample. How can a new clustering be created from an ensemble of clusterings? This is the second problem. A definition of committee synthesis of ensemble clustering is introduced. The sets of best and worst matrices of estimates are considered. The optimal clustering is built from the given clusterings as the one closest to the set of best estimation matrices, or as the one most distant from the set of worst-case estimation matrices. As a result, the problem of finding the best committee clustering is formulated as a discrete optimization problem on permutations.

Keywords

  • clustering
  • algorithm
  • ensemble
  • collective
  • stability
  • optimality
  • construction

1. Introduction

There are many different approaches to solving the problems of clustering multidimensional data: based on the optimization of internal criteria (indices) [1, 2], hierarchical clustering [3], centroid-based clustering [4], density-based clustering [5], distribution-based clustering [6], and many others. There are well-known books and papers on clustering [7, 8, 9, 10].

This chapter is devoted to an approach for creating stable clusterings and for processing sets of them. A natural criterion is considered that is applicable to any clustering method. In [11], various criteria (indices) are proposed whose optimization yields a clustering under a particular view of what clustering is. In this chapter, we use a criterion based on stability: if we have really obtained a clustering, that is, a solution for the whole sample, the partition should not change under a small change in the data. Criteria for the quality of the resulting partition are introduced; if a criterion value is less than one, the partition is unstable. Suppose we obtain $N$ clusterings of the same data. How can a new ensemble clustering be created from these $N$ partitions? Previously, a committee method for building ensemble clusterings was proposed [12, 13, 14, 15]. Let there be $N$ results of cluster analysis of the same data into $l$ clusters. The committee method of building ensemble clustering makes it possible to build $l$ clusters, each of which is the intersection of "many" initial clusters. In other words, we find $l$ clusters whose objects are "equivalent" to each other according to several principles. Stable clusterings can be taken as the initial $N$ clusterings. Finally, we consider a video-logical (man-machine) approach to building the initial $N$ coarse clusterings.

2. Criteria for stability of clustering

Let the sample of objects $X = \{x_i \mid i = 1, 2, \ldots, m\}$, $x_i \in R^n$, be given, and let $K = \{K_1, K_2, \ldots, K_l\}$ be the clustering of the sample into $l$ clusters obtained by some method: $K_i \subseteq X$, $i = 1, 2, \ldots, l$, $\bigcup_{i=1}^{l} K_i = X$, $K_i \cap K_j = \emptyset$ for $i \neq j$. Speaking of clustering, we mean applying a method to a sample without focusing on the method itself. Is the partition $K$ of a sample by this method a clustering, or is merely some stopping criterion satisfied here? For example, an extremum of some functional is reached, or the maximum number of operations of the iterative process is exhausted. We will use the following thesis as the main one. If the resulting partition $K$ is indeed a clustering, then it must remain the same clustering for any minimal change of the sample $X$. Let $x_i$ be arbitrary, $x_i \in K_\alpha$; then the partition $K_{x_i} = \{K_1', K_2', \ldots, K_l'\}$ of the sample $X \setminus \{x_i\}$, where $K_j' = K_j$ for $j = 1, 2, \ldots, l$, $j \neq \alpha$, and $K_\alpha' = K_\alpha \setminus \{x_i\}$, $i = 1, 2, \ldots, m$, must be a clustering. The fact of "coincidence" of the clusterings $K = \{K_1, K_2, \ldots, K_l\}$ and $K_{x_i}$ will be called identity; the clusterings themselves are called identical, and we denote this by $K_{x_i} \sim K$. In this case, it is natural to call the partition $K$ a stable clustering if the partitions $K_{x_i}$ and $K$ coincide in this sense for all $x_i$, $i = 1, 2, \ldots, m$. If some individual $K_{x_i}$ is not identical to $K$, we call $K$ a quasi-clustering.

Definition 1. The quality of a quasi-clustering (of an unstable clustering) is the quantity $\Phi(K) = |\{x_i, i = 1, 2, \ldots, m : K_{x_i} \sim K\}| / m$.

If $\Phi(K) = 1$, we speak of a stable clustering $K$, or simply a clustering. Suppose that for some $i$, $i = 1, 2, \ldots, m$, the condition $K_{x_i} \sim K$ is not satisfied, and let $K'_{x_i} = \{K_1', K_2', \ldots, K_l'\}$ be the clustering of the sample $X \setminus \{x_i\}$ obtained from the partition $K_{x_i}$, using $K_{x_i}$ as the initial approximation. Then $K'_{x_i}$ can differ significantly from $K_{x_i}$. As a proximity function between the clustering $K'_{x_i}$ and the partition $K$, we use the value $d(K'_{x_i}, K) = \max_{\langle \alpha_1, \ldots, \alpha_l \rangle} \sum_{j=1}^{l} |K_j \cap K'_{\alpha_j}| / (m - 1)$, where the maximum is taken over renumberings of clusters. Note that calculating this proximity requires finding a maximum matching in a bipartite graph, for which a polynomial algorithm exists [16]. If $K'_{x_i}$ does not exist, we set $d(K'_{x_i}, K) = 0$.

Definition 2. The quality $F_{\min}(K)$ of the quasi-clustering $K$ is the quantity $F_{\min}(K) = \min_i d(K'_{x_i}, K)$.

Definition 3. The quality $F_{avr}(K)$ of the quasi-clustering $K$ is the quantity $F_{avr}(K) = \sum_{i=1}^{m} d(K'_{x_i}, K) / m$.
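To make Definitions 1–3 concrete, here is a minimal Python sketch, added for illustration, that estimates $\Phi(K)$, $F_{\min}(K)$, and $F_{avr}(K)$ for an arbitrary re-clustering routine by the leave-one-out scheme described above. The function names are assumptions of this sketch; identity $K_{x_i} \sim K$ is approximated by the condition $d = 1$, and the proximity $d$ is computed via maximum-weight bipartite matching, as noted above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def proximity(labels_a, labels_b, l):
    """d(K', K): maximum total overlap over renumberings of the l clusters,
    normalized by the number of objects compared (m - 1 after deleting x_i)."""
    overlap = np.zeros((l, l))
    for a, b in zip(labels_a, labels_b):
        overlap[a, b] += 1
    rows, cols = linear_sum_assignment(-overlap)   # maximum-weight matching
    return overlap[rows, cols].sum() / len(labels_a)

def stability_criteria(X, labels, cluster_fn, l):
    """Phi(K), F_min(K), F_avr(K) of Definitions 1-3 by leave-one-out.
    labels: (m,) int array in 0..l-1; cluster_fn(X_sub, init_labels) must
    re-cluster X_sub into l clusters starting from init_labels (K_{x_i})."""
    m = len(X)
    identical, d_vals = 0, []
    for i in range(m):
        keep = np.arange(m) != i
        init = labels[keep]              # K_{x_i}: the partition K with x_i deleted
        new = cluster_fn(X[keep], init)  # K'_{x_i}: re-clustering started from K_{x_i}
        d = proximity(new, init, l)
        identical += (d == 1.0)          # identity approximated by d = 1
        d_vals.append(d)
    return identical / m, min(d_vals), sum(d_vals) / m
```

Any of the methods of Sections 2.1–2.3 can be plugged in as cluster_fn.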

For some clustering algorithms, there are simple and economical rules for computing $\Phi(K)$. We present them below (see also [3, 17, 18]).

2.1. Method of minimizing the dispersion criterion

It is known that, in order for a partition to minimize the dispersion criterion, it suffices that the inequalities

$$\frac{n_j}{n_j - 1} \| x^* - m_j \|^2 - \frac{n_k}{n_k + 1} \| x^* - m_k \|^2 \le 0 \qquad (1)$$

hold for any clusters $K_j$ and $K_k$ and arbitrary $x^* \in K_j$, where $n_j = |K_j|$ and $m_j = \frac{1}{n_j} \sum_{x_t \in K_j} x_t$.

Let us establish conditions for the identity $K_{x_i} \sim K$ of the partitions $K_{x_i}$ and $K$, where $x_i \in K_j$ is the deleted object and $(\cdot, \cdot)$ denotes the scalar product. In the case $x^* \in K_j$ [considering Eq. (1)], for the condition $K_{x_i} \sim K$ to hold, the inequalities

$$\frac{n_j - 1}{n_j - 2} \| x^* - m_j \|^2 + \frac{2}{n_j - 2} (x^* - m_j, x_i - m_j) + \frac{1}{(n_j - 1)(n_j - 2)} \| x_i - m_j \|^2 - \frac{n_k}{n_k + 1} \| x^* - m_k \|^2 \le 0$$

must be satisfied. In the case $x^* \in K_k$, $k \neq j$, the inequalities

$$\frac{n_k}{n_k - 1} \| x^* - m_k \|^2 - \frac{n_j - 1}{n_j} \| x^* - m_j \|^2 - \frac{2}{n_j} (x^* - m_j, x_i - m_j) - \frac{1}{n_j (n_j - 1)} \| x_i - m_j \|^2 \le 0$$

must be satisfied.

2.2. k-means method

Let the clustering $K$ be obtained by the k-means method, that is, $\| x^* - m_j \| \le \| x^* - m_k \|$ for all $j \neq k$, $x^* \in K_j$ (in the case of equality, the object is considered to belong to the cluster with the lower number). Let $x_i \in K_j$ be the deleted object. Then $K_{x_i} \sim K$ is satisfied if

$$\| x^* - m_j \|^2 + \frac{2}{n_j - 1} (x^* - m_j, x_i - m_j) + \frac{1}{(n_j - 1)^2} \| x_i - m_j \|^2 \le \| x^* - m_k \|^2$$

for $x^* \in K_j$, $x^* \neq x_i$, and

$$\| x^* - m_k \|^2 \le \| x^* - m_j \|^2 + \frac{2}{n_j - 1} (x^* - m_j, x_i - m_j) + \frac{1}{(n_j - 1)^2} \| x_i - m_j \|^2$$

for $x^* \in K_k$, $k \neq j$.
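As a minimal sketch of this economical rule (the function name and input conventions are assumptions), the following check decides $K_{x_i} \sim K$ for a deleted object $x_i \in K_j$ directly from the inequalities above, using the closed-form shift of the mean $m_j' = m_j + (m_j - x_i)/(n_j - 1)$ instead of re-running k-means:

```python
import numpy as np

def kmeans_identity_after_deletion(X, labels, i):
    """Checks K_{x_i} ~ K for a k-means partition without re-clustering (Section 2.2).
    X: (m, n) array; labels: (m,) int array; i: index of the deleted object x_i."""
    j = int(labels[i])
    n_j = int(np.sum(labels == j))
    if n_j < 2:
        return False                          # cluster K_j would disappear
    means = {int(k): X[labels == k].mean(axis=0) for k in np.unique(labels)}
    # Mean of K_j after deleting x_i: m_j' = m_j + (m_j - x_i) / (n_j - 1)
    means[j] = means[j] + (means[j] - X[i]) / (n_j - 1)
    for t in range(len(X)):
        if t == i:
            continue
        dists = {k: float(np.sum((X[t] - mk) ** 2)) for k, mk in means.items()}
        # Ties go to the cluster with the lower number, as stipulated above
        best = min(dists, key=lambda k: (dists[k], k))
        if best != labels[t]:
            return False                      # some nearest-mean assignment changed
    return True
```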

2.3. Method of agglomerative hierarchical grouping

We confine ourselves to the case of agglomerative hierarchical grouping. To find the value of the criterion $\Phi(K)$, one can compute the partition $K$, the partitions $K_{x_i}$, $i = 1, 2, \ldots, m$, and compare $K$ with each $K_{x_i}$. Here it is possible to save computation of $\Phi(K)$ without carrying the clustering through to the end for some of the $i$. Indeed, let $K^t_{x_i} = \{K_1^t, K_2^t, \ldots, K_{m_t}^t\}$ be the clustering of the sample $X \setminus \{x_i\}$ into $m_t$ clusters at step $t$ of the agglomeration, $l \le m_t \le m - 1$, and let $K$ be the partition obtained by the clustering algorithm on $X$. The main property of hierarchical grouping is that for any $k = 1, 2, \ldots, m_t$ there is $j = 1, 2, \ldots, m_{t+1}$ for which $K_k^t \subseteq K_j^{t+1}$. Consequently, if at some step $t$, for some $k$, the condition $K_k^t \subseteq K_j$ does not hold for any $j = 1, 2, \ldots, l$, then the condition $K_{x_i} \sim K$ cannot be fulfilled, and the computation for this $i$ can be stopped.
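A sketch of this early-stopping test (the data structures are assumptions of this illustration): walk the intermediate partitions produced by the agglomeration on the reduced sample and stop as soon as some intermediate cluster is contained in no final cluster of $K$.

```python
def hierarchy_consistent(merge_steps, final_clusters):
    """Early-stopping test of Section 2.3 (a sketch, not the author's code).
    merge_steps: the partitions of the sample with x_i deleted, produced by the
    agglomeration from many clusters down to l, each a list of sets of indices.
    final_clusters: the l clusters of K, as sets of object indices."""
    for partition in merge_steps:
        for cluster in partition:
            # Main property: every intermediate cluster must lie inside some
            # final cluster of K, otherwise K_{x_i} ~ K is impossible.
            if not any(cluster <= K_j for K_j in final_clusters):
                return False
    return True
```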

2.4. Examples

We give some examples illustrating the stability criteria introduced.

  1. Below are the results obtained for model samples. The clustering method based on minimization of the dispersion criterion [3] has been used. As the initial data, we used samples from a mixture of two two-dimensional normal distributions with independent features and different $a$ and $\sigma$. Examples are shown in Figures 1–3 (images of the samples in question) and in Tables 1 and 2. Figure 1 represents a sample of 200 objects for which all the criteria $\Phi(K)$, $F_{\min}(K)$, $F_{avr}(K)$ are equal to 1, and the resulting clustering into two clusters is a stable clustering. Here we used distributions with parameters $a_1 = (0, 0)$, $a_2 = (9, 9)$, and $\sigma_1 = \sigma_2 = (3, 3)$.

    Further, with the same parameters $a_1$, $a_2$, experiments were carried out for $\sigma_1 = \sigma_2 = (5, 5)$ (Figure 2); the criteria values obtained are given in Table 1.

    Then, we used distributions with parameters $a_1 = (0, 0)$, $a_2 = (9, 9)$, $\sigma_1 = \sigma_2 = (10, 10)$, $m = 200$. Here we have the case of strongly intersecting distributions. Formally, the clustering method gives a quasi-clustering, approximately corresponding to the partition of the original sample (Figure 3) into two sets by the diagonal from the upper left corner of the picture to the lower right. The criteria values in Table 2 were obtained.

  2. Clustering of the data from [19] and the criteria values $\Phi(K)$, $F_{\min}(K)$, $F_{avr}(K)$. The following data from a classification problem of electromagnetic signals (the "ionosphere" problem) were considered: $n = 34$, $m_1 = 225$, $m_2 = 126$, $l = 2$. Figure 4 shows the visualization [3] of the sample. The accuracy of supervised classification methods on these data was about 87% correct answers. However, the clustering of the data turned out to be only a quasi-clustering; the stability criteria values obtained are given in Table 3.

Figure 1.

Clustering in a task with parameters $a_1 = (0, 0)$, $a_2 = (9, 9)$, $\sigma_1 = \sigma_2 = (3, 3)$, $m = 200$.

Figure 2.

Clustering in a task with parameters $a_1 = (0, 0)$, $a_2 = (9, 9)$, $\sigma_1 = \sigma_2 = (5, 5)$, $m = 200$.

Figure 3.

Data with parameters $a_1 = (0, 0)$, $a_2 = (9, 9)$, $\sigma_1 = \sigma_2 = (10, 10)$, $m = 200$.

$\Phi(K)$: 0.995
$F_{\min}(K)$: 0.995
$F_{avr}(K)$: 0.999

Table 1.

Values of quasi-clustering criteria.

$\Phi(K)$: 0.770
$F_{\min}(K)$: 0.995
$F_{avr}(K)$: 0.998

Table 2.

Values of quasi-clustering criteria: the case of strongly intersecting distributions.

Figure 4.

Data visualization.

$\Phi(K)$: 0.966
$F_{\min}(K)$: 0.997
$F_{avr}(K)$: 0.999

Table 3.

The values of the criteria $\Phi(K)$, $F_{\min}(K)$, $F_{avr}(K)$ in the problem "ionosphere".

3. Committee synthesis of ensemble clustering

The problem is as follows. There are $N$ clusterings for the same number of clusters. How does one choose a single clustering from them, or build a new clustering from the available ones? In the supervised classification problem (with a collective solution over a set of algorithms), there is a criterion according to which one can choose an algorithm from the existing ones or build a new algorithm: the supervised classification error. This direction in the theory of classification appeared in the early 1970s [20, 21]; later, an algebraic approach was created [22], and various correctors appeared. The key point of the algebraic approach is the construction, in the form of special algebraic polynomials over a set of supervised classification algorithms, of a correct (error-free) algorithm. Algebraic operations on matrices of "degrees of belonging" of the recognized objects are used. Various types of correctors were also created [22, 23, 24, 25], in which the problem of constructing (and applying) the best algorithm is likewise solved in two stages: first the supervised classification algorithms are determined, and then the corrector. This can be, for example, the problem of approximating a given partial Boolean function by some monotonic function. In recent decades, there have been regular conferences on multiple classifier systems; these issues are reflected in the books [10, 21]. How does one choose or create the best clustering using a finite set of given solutions? Here all difficulties are connected primarily with the absence of a single generally accepted criterion. Each clustering algorithm finds "source" clusters of objects that are "equivalent" to each other. In this chapter, it is proposed to build a clustering of the initial data whose clusters have large intersections with the initial clusters.

Let a sample of objects $X = \{x_1, x_2, \ldots, x_m\}$, $x_i \in R^n$, for supervised classification with $l$ classes be given. In the theory of supervised classification, the following definition of a supervised classification algorithm exists [21]. Let $\alpha_{ij} \in \{0, 1\}$ be equal to 1 when the object $x_i$, $i = 1, 2, \ldots, m$, is classified by the algorithm $A^r$ as $x_i \in K_j$, and 0 otherwise: $A^r(X) = \| \alpha_{ij} \|_{m \times l}$. Here, intersection of classes is allowed. Unlike the supervised classification problem, when clustering a sample we have freedom in the numbering of clusters.

Definition 4. The matrices $I = \| \alpha_{ij} \|_{m \times l}$, $\alpha_{ij} \in \{0, 1\}$, and $I' = \| \alpha'_{ij} \|_{m \times l}$, $\alpha'_{ij} \in \{0, 1\}$, are said to be equivalent if they are equal up to a permutation of columns.

Clearly, this definition assigns to any matrix a class of equivalent matrices.

Definition 5. A clustering algorithm is an algorithm that maps a sample $X$ to a class of equivalent information matrices $A^c(X) = K(\| \alpha_{ij} \|_{m \times l})$.

The number of clusters and the sample size are considered given. This definition emphasizes the fact that, in an arbitrary partition of a sample into $l$ clusters, we have complete freedom in the numbering of clusters. In what follows, we shall always consider matrices of dimension $m \times l$.

Let there be given $N$ clustering algorithms $A_1^c, A_2^c, \ldots, A_N^c$ and their solutions $A_\nu^c(X) = K(\| \alpha_{ij}^\nu \|_{m \times l})$ for the sample $X$. We denote by $I_\nu = \| \alpha_{ij}^\nu \|_{m \times l}$ an arbitrary element of the class $K(\| \alpha_{ij}^\nu \|_{m \times l})$.

Therefore, we have the set $I = K(I_1) \times K(I_2) \times \cdots \times K(I_N)$, that is, $I = \{ \tilde I' = (I_1', I_2', \ldots, I_N') \mid I_\nu' \in K(I_\nu) \}$, where $I_\nu' = \| \alpha_{ij}'^\nu \|_{m \times l}$.

There are two problems.

  1. Construction of a mapping of $I$ onto $K^c$: $I \to K^c = K(\| c_{ij} \|_{m \times l})$, $c_{ij} \in \{0, 1\}$ (that is, the construction of some kind of ensemble clustering).

  2. Finding the optimal element in $K^c$ (i.e., finding the best clustering in $K^c$).

Definition 6. An operator $B(I_1', I_2', \ldots, I_N') = B = \| b_{ij} \|_{m \times l}$ is called an adder if $b_{ij} = \sum_{\nu=1}^{N} \alpha_{ij}'^\nu$.

It is clear that $0 \le b_{ij} \le N$, $b_{ij} \in \{0, 1, 2, \ldots, N\}$.

Definition 7. An operator $r$ is called a threshold decision rule if $r(B) = C = \| c_{ij} \|_{m \times l}$, where $c_{ij} = 1$ if $b_{ij} \ge \delta_i$ and $c_{ij} = 0$ otherwise, $\delta_i \in R$.

Definition 8. The committee synthesis of an information matrix $C$ on an element $\tilde I' = (I_1', I_2', \ldots, I_N')$ is the computation by the formula $C = r(B(\tilde I'))$, provided that $B$ is the adder and $r$ is the threshold decision rule.

The general scheme of collective synthesis is shown in Figure 5.

Figure 5.

Scheme of committee synthesis.
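In code, the adder and the threshold decision rule of Definitions 6–8 take only a few lines. The sketch below is an illustration with assumed names; the majority threshold $\delta_i = N/2$ is one natural choice, not prescribed by Definition 7.

```python
import numpy as np

def committee_synthesis(I_list, delta=None):
    """C = r(B(I~')): adder plus threshold decision rule (Definitions 6-8).
    I_list: N binary matrices of shape (m, l) with columns already aligned.
    delta: per-object thresholds delta_i; majority vote N/2 by default."""
    B = np.sum(I_list, axis=0)             # adder: b_ij = sum_nu alpha'_ij^nu
    N, (m, l) = len(I_list), B.shape
    if delta is None:
        delta = np.full(m, N / 2.0)        # assumed majority threshold
    C = (B >= delta[:, None]).astype(int)  # c_ij = 1 iff b_ij >= delta_i
    return B, C
```

Note that $C$ need not define a partition: a row of $C$ may contain several ones or none, so the committee clusters may intersect, in line with treating collective solutions as coverings.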

We note that the total number of possible values of $B$ is bounded from above by the quantity $(l!)^N$. Let $s$ be the operator that permutes the columns of $m \times l$ matrices according to a substitution $\langle j_1, j_2, \ldots, j_l \rangle$, and let $S = \{s\}$ be the set of all such operators. We assume that $rs = sr$ for all $s \in S$. We extend each $s \in S$ to the $N$-dimensional case: $\sigma(\tilde I') = (s(I_1'), s(I_2'), \ldots, s(I_N'))$, and denote by $\Sigma = \{\sigma\}$ the set of extensions of the operators $s$. From the definition of the adder it follows that $B(\sigma(\tilde I')) = s(B(\tilde I'))$, $\sigma \in \Sigma$. Further, for any $\tilde I' \in I$, $\sigma \in \Sigma$, we have $r(B(\sigma(\tilde I'))) = r(s(B(\tilde I'))) = s(r(B(\tilde I')))$, and finally $\{ r(B(\sigma(\tilde I'))) \mid \sigma \in \Sigma \} \subseteq \{ s(r(B(\tilde I'))) \mid s \in S \} = K(r(B(\tilde I'))) = K(\| c_{ij} \|_{m \times l})$. Therefore, the product $rB$ defines the desired mapping and specifies some ensemble clustering. It remains to determine the optimal element of $K^c$ and to find the corresponding $\tilde I'$:

$$I \xrightarrow{\; rB \;} K^c, \qquad A^c_{\tilde I'}(X) = K(r(B(\tilde I'))).$$

We introduce definitions of the potentially best and worst solutions. As the "ideal" collective solution, we consider the case when all algorithms give essentially the same partitions or coverings.

Definition 9. A numerical matrix $\| b_{ij} \|_{m \times l}$ is called contrasting if $b_{ij} \in \{0, N\}$. A numerical matrix $\| b_{ij} \|_{m \times l}$ is called blurred if $b_{ij} = \delta_i \in R$ for all $j$.

As the distance between two numerical matrices, we consider the function $\rho(B_1, B_2) = \sum_{i=1}^{m} \sum_{j=1}^{l} | b_{ij}^1 - b_{ij}^2 |$.

Denote by $M$ the set of all contrasting matrices and by $\tilde M$ the set of all blurred matrices; for a set of matrices $M$, we put $\rho(B, M) = \min_{B' \in M} \rho(B, B')$. We introduce definitions for estimating the quality of matrices.

Definition 10.

$$\Phi(B) = \rho(B, M) \to \min_B. \qquad (2)$$

Definition 11.

$$\tilde\Phi(B) = \rho(B, \tilde M) \to \max_B. \qquad (3)$$

The set $\tilde M' = \{ \tilde B \}$ (where $\tilde B = \| \tilde b_{ij} \|_{m \times l}$, $\tilde b_{ij} = N/2$) is called the mean blurred matrix.

Definition 12.

$$\tilde\Phi'(B) = \rho(B, \tilde B) \to \max_B. \qquad (4)$$

We note that the optima according to criteria (2) and (3) do not have to coincide. The sets $M$ and $\tilde M$ intersect.

Figure 6 illustrates the sets of contrasting and blurred matrices. Arrows indicate some elements of these sets.

Figure 6.

The set of contrasting matrices $M$, the set of blurred matrices $\tilde M$, and the set of matrices $B$.

Theorem 1. The sets of optimal solutions by criteria (2) and (4) coincide.

Proof. Let us show that $\Phi(B) + \tilde\Phi'(B) = Nml/2$ for any $B$. We write $\tilde\Phi'(B) = \sum_{i=1}^{m} \sum_{j=1}^{l} \tilde\alpha_{ij}$, $\tilde\alpha_{ij} = | b_{ij} - N/2 |$, and $\Phi(B) = \sum_{i=1}^{m} \sum_{j=1}^{l} \alpha_{ij}$, $\alpha_{ij} = \min(b_{ij}, N - b_{ij})$. If $b_{ij} \ge N/2$, then $\tilde\alpha_{ij} = b_{ij} - N/2$, $\alpha_{ij} = N - b_{ij}$, and $\tilde\alpha_{ij} + \alpha_{ij} = N/2$. If $b_{ij} < N/2$, then $\tilde\alpha_{ij} = N/2 - b_{ij}$, $\alpha_{ij} = b_{ij}$, and again $\tilde\alpha_{ij} + \alpha_{ij} = N/2$.

Summing over the whole set of index pairs $(i, j)$, we get $\Phi(B) + \tilde\Phi'(B) = Nml/2$. Hence minimizing $\Phi$ and maximizing $\tilde\Phi'$ are equivalent.
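The identity underlying Theorem 1 is easy to verify numerically; the snippet below (added for illustration) checks $\Phi(B) + \tilde\Phi'(B) = Nml/2$ on a random matrix of estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, l = 7, 50, 3
B = rng.integers(0, N + 1, size=(m, l))   # any matrix of estimates, 0 <= b_ij <= N

phi = np.minimum(B, N - B).sum()          # Phi(B) = rho(B, M): distance to contrast matrices
phi_tilde = np.abs(B - N / 2).sum()       # Phi~'(B) = rho(B, B~): distance to the mean blurred matrix
assert phi + phi_tilde == N * m * l / 2   # the identity proved above
```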

We consider the problem of finding optimal ensemble clusterings for criterion (2). It is clear that $\Phi(B) = \sum_{i=1}^{m} \sum_{j=1}^{l} \min(b_{ij}, N - b_{ij})$.

We introduce the notations $M = \{1, 2, \ldots, m\}$, $X_j = \{ i \mid b_{ij} \ge N/2, \; i = 1, 2, \ldots, m \}$, $Y_j = M \setminus X_j$, $j = 1, 2, \ldots, l$. Let $\pi_\nu = \langle \mu_1^\nu, \mu_2^\nu, \ldots, \mu_l^\nu \rangle$, $\nu = 1, 2, \ldots, N$, be some permutation of the set $\pi_0 = \langle 1, 2, \ldots, l \rangle$. A set of permutations $\pi = \langle \pi_1, \pi_2, \ldots, \pi_N \rangle$ uniquely determines the matrix of estimates

$$B' = \| b'_{ij} \|_{m \times l}, \qquad b'_{ij} = b_{ij}(\pi) = \sum_{\nu=1}^{N} \alpha_{ij}'^\nu.$$

We will further assume that the "initial" matrix $\| \alpha_{ij}^\nu \|_{m \times l}$ of the algorithm $A_\nu^c$ corresponds to the permutation $\pi_0$, and $\| \alpha_{ij}'^\nu \|_{m \times l}$ is the matrix of the algorithm $A_\nu^c$ corresponding to some permutation $\pi_\nu$. Then $\alpha_{ij}'^\nu = \alpha_{i \mu_j^\nu}^\nu$.

Consider $\tilde\Delta_\nu = \sum_{j=1}^{l} \left( \sum_{i \in X_j} \bar\alpha_{ij}^\nu + \sum_{i \in Y_j} \alpha_{ij}^\nu \right)$ and $\tilde\Delta'_\nu = \sum_{j=1}^{l} \left( \sum_{i \in X_j} \bar\alpha_{ij}'^\nu + \sum_{i \in Y_j} \alpha_{ij}'^\nu \right)$, where $\bar\alpha_{ij}^\nu = 1 - \alpha_{ij}^\nu$.

Then $\Delta_\nu = \tilde\Delta'_\nu - \tilde\Delta_\nu = \sum_{j=1}^{l} \left( \sum_{i \in X_j} (\alpha_{ij}^\nu - \alpha_{ij}'^\nu) + \sum_{i \in Y_j} (\alpha_{ij}'^\nu - \alpha_{ij}^\nu) \right)$. We transform this expression. The identity $\sum_{j=1}^{l} \left( \sum_{i \in X_j} \alpha_{ij}^\nu + \sum_{i \in Y_j} \alpha_{ij}^\nu \right) = \sum_{j=1}^{l} \left( \sum_{i \in X_j} \alpha_{i \mu_j^\nu}^\nu + \sum_{i \in Y_j} \alpha_{i \mu_j^\nu}^\nu \right)$ is valid, since a permutation of columns only reorders the terms. We get

$$\Delta_\nu = \sum_{j=1}^{l} \left( \sum_{i \in X_j} (\alpha_{ij}^\nu - \alpha_{i \mu_j^\nu}^\nu) + \sum_{i \in Y_j} (\alpha_{i \mu_j^\nu}^\nu - \alpha_{ij}^\nu) \right) = 2 \sum_{j=1}^{l} \sum_{i \in X_j} (\alpha_{ij}^\nu - \alpha_{i \mu_j^\nu}^\nu) = 2 \sum_{j=1}^{l} \sum_{i \in X_j} \alpha_{ij}^\nu - 2 \sum_{j=1}^{l} \sum_{i \in X_j} \alpha_{i \mu_j^\nu}^\nu.$$

Thus, minimizing $\Delta_\nu$ over $\pi_\nu$ is equivalent to maximizing the second sum $\sum_{j=1}^{l} \sum_{i \in X_j} \alpha_{i \mu_j^\nu}^\nu$ of this expression.

After applying the permutations $\pi = \langle \pi_1, \pi_2, \ldots, \pi_N \rangle$, the sets $X_j, Y_j$, $j = 1, 2, \ldots, l$, change into some $X_j', Y_j'$. We introduce the notations $M_1^j = X_j \cap Y_j' = Y_j' \setminus Y_j$, $M_2^j = Y_j' \cap Y_j$, $M_3^j = Y_j \cap X_j' = X_j' \setminus X_j$, $M_4^j = X_j' \cap X_j$.

Figure 7 schematically shows the changes in sets Xj,Yj,j=1,2,,l.

Figure 7.

The change of the sets $X_j, Y_j$, $j = 1, 2, \ldots, l$, under the permutations.

Theorem 2.

$$\Delta\Phi = \Phi(B') - \Phi(B) \le \sum_{\nu=1}^{N} \Delta_\nu - \sum_{j=1}^{l} \left( |M_2^j| \cdot \begin{cases} 2, & N \text{ even} \\ 1, & N \text{ odd} \end{cases} + |M_4^j| \cdot \begin{cases} 0, & N \text{ even} \\ 1, & N \text{ odd} \end{cases} \right).$$

The proof is given in [12, 13]. Theorem 2 is the basis for creating an effective algorithm for minimizing $\Phi$.

Since the second sum is always nonpositive, we obtain the upper bound $\Delta\Phi \le \sum_{\nu=1}^{N} \Delta_\nu$. We therefore consider the problem of minimizing each function $\Delta_\nu$. We write out all possible values of the sum $\sum_{j=1}^{l} \sum_{i \in X_j} \alpha_{i \mu_j^\nu}^\nu$ in the form of the table in Figure 8. The minimization of $\Delta_\nu$ is then reduced to finding a maximum-weight matching in a bipartite graph, for which the polynomial Hungarian algorithm can be used [16].

Figure 8.

All possible values of $\sum_{j=1}^{l} \sum_{i \in X_j} \alpha_{i \mu_j^\nu}^\nu$ for all admissible values of $j$ and $\mu_j^\nu$.
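A sketch of this reduction (names are assumptions; scipy's linear_sum_assignment plays the role of the Hungarian algorithm [16]): build the $l \times l$ table of Figure 8 and pick the permutation $\pi_\nu$ that maximizes the second sum, thereby minimizing $\Delta_\nu$.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def best_permutation(alpha_nu, X_sets):
    """Minimizes Delta_nu over column permutations pi_nu of one algorithm.
    alpha_nu: (m, l) binary matrix of A_nu^c; X_sets: the l index sets X_j.
    Returns (mu, delta): column j of the permuted matrix is column mu[j]."""
    l = alpha_nu.shape[1]
    # cost[j][k] = sum over i in X_j of alpha^nu_{ik}: the table of Figure 8
    cost = np.array([[int(alpha_nu[X_sets[j], k].sum()) for k in range(l)]
                     for j in range(l)])
    rows, cols = linear_sum_assignment(-cost)   # maximize the second sum
    fixed = sum(cost[j, j] for j in range(l))   # its value under the identity pi_0
    delta = 2 * fixed - 2 * int(cost[rows, cols].sum())
    return cols, delta
```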

It is clear that $\min_{\pi_\nu} \Delta_\nu \le 0$. Now we can propose the following heuristic steepest-descent algorithm.

Algorithm.

1. We calculate $X_j$, $j = 1, 2, \ldots, l$.

2. We find $\Delta_\nu^* = \min_{\pi_\nu} \Delta_\nu$ for each $\nu$.

If $\sum_{\nu=1}^{N} \Delta_\nu^* < 0$, then apply the found permutations $\pi_\nu = \langle \mu_1^\nu, \mu_2^\nu, \ldots, \mu_l^\nu \rangle$, $\nu = 1, 2, \ldots, N$, and go to step 1.

If $\sum_{\nu=1}^{N} \Delta_\nu^* = 0$, then END of the algorithm.

NOTE. We note that this algorithm is not guaranteed to find even a local minimum of the criterion $\Phi(B)$. Nevertheless, the algorithm is very fast; its complexity at each iteration is estimated as $O(l^5 m N)$.
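Putting the pieces together, here is a hedged sketch of the whole steepest-descent heuristic, reusing best_permutation from the previous sketch; the iteration cap is a safety assumption of this illustration, not part of the original algorithm.

```python
import numpy as np

def committee_descent(alphas, max_iter=100):
    """Steepest-descent minimization of Phi(B) over column permutations (Section 3).
    alphas: list of N binary (m, l) matrices; modified in place and returned."""
    N, l = len(alphas), alphas[0].shape[1]
    for _ in range(max_iter):
        B = np.sum(alphas, axis=0)
        X_sets = [np.flatnonzero(B[:, j] >= N / 2) for j in range(l)]   # step 1
        total = 0
        for nu in range(N):                                             # step 2
            mu, delta = best_permutation(alphas[nu], X_sets)
            if delta < 0:
                alphas[nu] = alphas[nu][:, mu]     # apply the permutation pi_nu
                total += delta
        if total == 0:                             # no Delta_nu improves: END
            break
    return alphas, np.sum(alphas, axis=0)
```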

4. The algorithm of collective k-means

Suppose the results of clustering a sample of $m$ objects into $l$ clusters by $N$ algorithms are obtained; we can write them in the form of a binary matrix $\| \alpha_{ij}^\nu \|$, $\nu = 1, 2, \ldots, N$, $i = 1, 2, \ldots, m$, $j = 1, 2, \ldots, l$. We assume that the cluster numbers in each algorithm are fixed. Then the horizontal layer number $i$ of this three-dimensional matrix describes the results of clustering the object $x_i$. As an ensemble clustering of the sample $X$, we can take the result of clustering these "new" descriptions, that is, the layers of the original matrix. As the clustering method, we take the method of minimizing the dispersion criterion. If $\| \alpha_{i_1 j}^\nu \|, \| \alpha_{i_2 j}^\nu \|, \ldots, \| \alpha_{i_t j}^\nu \|$ are the layers of the $t$ objects of some cluster, we calculate their sample mean $\| \bar\alpha_{j}^\nu \|$ as the solution of the problem $\sum_{\mu=1}^{t} \sum_{\nu, j} ( \bar\alpha_{j}^\nu - \alpha_{i_\mu j}^\nu )^2 \to \min$, whence we obtain $\bar\alpha_{j}^\nu = \frac{1}{t} \sum_{\mu=1}^{t} \alpha_{i_\mu j}^\nu$. Note that this method makes it possible to compute ensemble clusterings $K = \{K_1, K_2, \ldots, K_l\}$ such that the heuristic clustering results for the objects of one cluster of the collective solution are close to each other in the Euclidean metric. The committee synthesis of collective decisions provides more interpretable solutions: indeed, if $K^\nu = \{K_1^\nu, K_2^\nu, \ldots, K_l^\nu\}$, $\nu = 1, 2, \ldots, N$, are the separate solutions of the heuristic clustering algorithms, then a cluster of the collective solution will be the "intersection" of many of the original clusters $K_{i_1}^1, K_{i_2}^2, \ldots, K_{i_N}^N$.
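A minimal sketch of collective k-means under these conventions (assuming fixed cluster numbering across algorithms; scikit-learn's KMeans, which minimizes the same dispersion criterion in the Euclidean metric, stands in for the method of Section 2.1):

```python
import numpy as np
from sklearn.cluster import KMeans

def collective_kmeans(alphas, l, seed=0):
    """Collective k-means (Section 4): cluster the 'layers' of the 3-D matrix.
    alphas: list of N binary (m, l) matrices with fixed cluster numbering.
    Each object x_i is re-described by the N*l values alpha_ij^nu."""
    layers = np.stack(alphas, axis=0)                     # shape (N, m, l)
    m = layers.shape[1]
    features = layers.transpose(1, 0, 2).reshape(m, -1)   # layer of x_i as a row
    return KMeans(n_clusters=l, n_init=10, random_state=seed).fit_predict(features)
```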

5. Man-machine (video-logical) clustering method

In the problems of ensemble clustering synthesis considered earlier, we did not discuss the number of initial clustering algorithms, their quality, or their proximity to one another. The ensemble clustering was built to reflect only the collective opinion of the decisions we used. "Internal" indices [9] reflect a person's ideas about clustering, and one can construct examples of data for which the known internal criteria lead to degenerate solutions.

At the same time, a person has the ability to cluster visual sets of points on a plane without using any proximity functions, criteria, or indices. The following idea was realized. A person can manually cluster projections of sets of points from $R^n$ onto $R^2$. Having made such clusterings for different projections, we can construct, generally speaking, various $N$ clusterings, which we submit as input to the construction of the collective solution. The person does not "see" the objects in $R^n$ but can accurately solve clustering tasks on the plane. Thus, here we use $N$ precise solutions based on various partial information about the data. Consider this video-logical method on one model example.

A sample from two normal distributions with independent features was considered. The first feature of the first distribution (200 objects) had zero expectation; for the second distribution (200 objects), the expectation and standard deviation of the first feature were equal to 5. All the other 49 features of all objects had $a_i = 5$, $\sigma_i = 5$, $i = 2, 3, \ldots, 50$. That is, the two sets had equal distributions for 49 features and one informative feature. Clustering of the entire sample by minimizing the dispersion criterion is shown in Figure 9; black and gray points in the sample visualization represent the objects of the first and second clusters. Here the informativeness of the first feature is lost.

Figure 9.

Clustering of a sample of model objects by the method of minimizing variance.

The program of the video-logical approach worked as follows. With the help of a certain heuristic approach, all $C_n^2$ two-dimensional projections are automatically ordered in descending order of a criterion of the presence of two clusters. Next, we, as experts, examine some of the projections and select two clusters in each of them with the mouse. Figure 10 shows two such examples. Note that the first feature was present in all the projections considered; it was used "manually" as the feature defining the region of dense location of objects. Then ten "manual" clusterings were fed to the input of the program for the committee synthesis of the collective solution. Note that only two objects were erroneously clustered.

Figure 10.

Allocation of clusters by mouse in the projections onto the feature pairs (1, 4) and (1, 6).

6. Conclusion

This chapter consists of two parts. First, clustering criteria based on stability are introduced. Next, we propose an approach to processing sets of obtained partitions of the same sample; as the initial clusterings, it is better to use stable ones. It is also shown how a person can take part in the committee synthesis of an ensemble clustering.

Acknowledgments

The reported study was funded by RFBR according to the research projects No. 17-01-00634 and No. 18-01-00557.

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
