
Segmenting Images Using Hybridization of K-Means and Fuzzy C-Means Algorithms

By Raja Kishor Duggirala

Submitted: December 19th 2018 · Reviewed: April 16th 2019 · Published: July 10th 2019

DOI: 10.5772/intechopen.86374


Abstract

Image segmentation is an essential image processing technique for analyzing an image by partitioning it into non-overlapping regions, each region referring to a set of pixels. Image segmentation approaches can be divided into four categories: thresholding, edge detection, region extraction and clustering. Clustering techniques can be used for partitioning datasets into groups according to the homogeneity of data points. The present research work proposes two algorithms involving hybridization of the K-Means (KM) and Fuzzy C-Means (FCM) techniques as an attempt to achieve better clustering results. Along with the proposed hybrid algorithms, the present work also experiments with the standard K-Means and FCM algorithms. All the algorithms are tested on four images. CPU time, clustering fitness and sum of squared errors (SSE) are computed for measuring the clustering performance of the algorithms. In all the experiments, it is observed that the proposed hybrid algorithm KMandFCM consistently produces better clustering results.

Keywords

  • image segmentation
  • clustering
  • K-Means
  • Fuzzy C-Means
  • hybridization
  • sum of squared errors
  • clustering fitness

1. Introduction

Images are often the most important category of available digital data. The volume of image data has been growing in recent years and will continue to grow in the near future. Since dealing with such large amounts of image data by hand is difficult, automated tools become crucial for the various tasks connected with image data. Image processing provides a wide range of techniques for dealing with images. These techniques make the work much easier, not only now but also in the future, when there will be more data and more work to do on images.

Image segmentation is an essential image processing technique that analyzes an image by partitioning it into non-overlapping regions, each region referring to a set of pixels. The pixels in a region are similar with respect to some characteristic such as color, intensity, or texture [1], and differ significantly from the pixels in other regions with respect to the same characteristic [2, 3, 4]. Image segmentation plays an important role in a variety of applications such as robot vision, object recognition and medical imaging [5, 6, 7]. Image segmentation approaches can be divided into four categories: thresholding, edge detection, region extraction and clustering. Clustering techniques can be used for segmenting image data, as they partition large datasets into groups according to the homogeneity of data points.

In clustering, a given population of data is partitioned into groups such that objects are similar to one another within the same group and dissimilar to the objects in other groups [8, 9]. There are different categories of clustering techniques: partitional, like K-Means, PAM, CLARA and CLARANS [10, 11]; model-based, like Expectation Maximization, SOM and mixture-model clustering [12, 13]; and fuzzy, like Fuzzy C-Means [14, 15].

Partitional clustering techniques attempt to break a population of data into some predefined number of clusters such that the partition optimizes a given criterion.

Formally, clusters can be seen as subsets of the given dataset. So, clustering methods can be classified according to whether the subsets are fuzzy or crisp (hard). In hard clustering, an object either does or does not belong to a cluster. These methods partition the data into a specified number of mutually exclusive subsets. However, in fuzzy-based clustering, the objects may belong to several clusters with different degrees of membership [16].

The literature shows that many researchers have experimented with the Fuzzy C-Means (FCM) algorithm in a wide variety of ways to achieve better image segmentation results [1, 17]. In [18], a penalized FCM (PFCM) algorithm is presented for image segmentation, which handles noise by adjusting a penalty coefficient. The penalty term takes the spatial dependence of the objects into consideration and is modified according to the FCM criterion. In [19], a fuzzy rule-based technique is proposed; it employs a rule-based neighborhood enhancement system to impose spatial continuity by post-processing the clustering results obtained with the FCM algorithm. In [20], a Geometrically Guided FCM (GG-FCM) algorithm is proposed, which is based on a semi-supervised FCM technique for multivariate image segmentation. In [21], a regularization term was introduced into the standard FCM to impose the neighborhood effect. In [22], this regularization term is incorporated into a kernel-based fuzzy clustering algorithm. In [23], the same term is incorporated into the adaptive FCM (AFCM) algorithm [24] to overcome the noise sensitivity of the AFCM algorithm.

However, the literature pays relatively little attention to the hybridization of clustering techniques for partitioning datasets.

The present research work aims at developing hybrid clustering algorithms involving the K-Means and Fuzzy C-Means (FCM) techniques to achieve better clustering results. As part of the hybridization, two algorithms are developed, KMFCM and KMandFCM. The KMFCM algorithm first performs K-Means on the dataset and then performs FCM using the results of K-Means. The KMandFCM algorithm performs K-Means and FCM in alternating iterations.

All the experiments are carried out using datasets derived from four images. For performance evaluation, CPU time, clustering fitness and sum of squared errors (SSE) are taken into consideration.

The following sections provide a detailed discussion of K-Means (KM), Fuzzy C-Means (FCM), KMFCM and KMandFCM algorithms.

2. The K-Means (KM) algorithm

Partitional clustering methods are appropriate for the efficient representation of large datasets [11]. These methods determine k clusters such that the data objects in a cluster are more similar to each other than to the objects in other clusters.

The K-Means is a partitional clustering method, which partitions a given dataset into a pre-specified number, k, of clusters [25]. It is a simple iterative method. The algorithm is initialized by randomly choosing k points from the given dataset as the initial cluster centers, i.e., cluster means. The algorithm iterates through two steps until convergence:

  1. Data assignment: this step partitions the data by assigning each data object to its closest cluster center.

  2. Updating the cluster centers: update the center of each cluster based on the objects assigned to that cluster.

The algorithm for K-Means is as follows [26]. Here, k represents the number of clusters, d represents the number of dimensions or attributes, Xi represents the ith data sample, μj (j = 1, 2, …, k) represents the mean vector of cluster Cj, and t is the iteration number. As its termination condition, the algorithm computes the percentage change, Eq. (2), and terminates when Percentage change < α. Here, α is set to 3, i.e., a change below 3% is treated as negligible.

KM algorithm

  1. Select k vectors randomly from the dataset as the initial cluster centers, μj (j = 1, 2, …, k). Set the current iteration t = 0.

  2. Assign each vector, Xi, to its closest cluster center using Euclidean distance, Eq. (1).

$$d(X_i, \mu_j) = \sqrt{\sum_{l=1}^{d} \left( x_{il} - \mu_{jl} \right)^2} \tag{1}$$

  3. Update the mean vectors μj (j = 1, …, k).

  4. Compute Percentage change as follows:

$$\text{Percentage change} = \frac{\Psi_t - \Psi_{t+1}}{\Psi_t} \times 100 \tag{2}$$

where Ψt is the number of vectors assigned to new clusters in the tth iteration and Ψt+1 is the number of vectors assigned to new clusters in the (t + 1)th iteration.

  5. Stop the process if Percentage change < α; otherwise set t = t + 1 and repeat steps 2–4 with the updated parameters.

The K-Means uses Euclidean distance as a proximity measure for determining the closest cluster to which a data object is assigned [13]. The algorithm stops when the assignment of data points to the clusters no longer changes or some other criterion is satisfied. The K-Means is a widely used clustering algorithm and requires little CPU time. However, it has difficulty detecting natural clusters that have non-spherical shapes or widely different sizes or densities [25].
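To make the procedure concrete, here is a minimal sketch of the KM algorithm above in Python with NumPy. The chapter's experiments were implemented in Java; this code, including the function names kmeans and percentage_change, is an illustrative assumption rather than the authors' implementation.

```python
import numpy as np

def percentage_change(psi_t, psi_t1):
    """Eq. (2): relative drop in the number of reassigned vectors."""
    return (psi_t - psi_t1) / psi_t * 100.0 if psi_t > 0 else 0.0

def kmeans(X, k, alpha=3.0, seed=0):
    """KM algorithm sketch. X is an (N, d) array of pixel vectors."""
    rng = np.random.default_rng(seed)
    # Step 1: k random vectors from the dataset as the initial centers.
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.full(len(X), -1)
    psi_prev = None
    while True:
        # Step 2: assign each vector to its closest center, Eq. (1).
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        psi = int((new_labels != labels).sum())  # vectors that changed cluster
        labels = new_labels
        # Step 3: update the mean vector of every non-empty cluster.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
        # Steps 4-5: terminate once Eq. (2) falls below alpha (3% here).
        if psi_prev is not None and percentage_change(psi_prev, psi) < alpha:
            return labels, centers
        psi_prev = psi
```

Note that Eq. (2) compares reassignment counts from two consecutive iterations, so the sketch always runs at least two iterations before testing for termination.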

3. The Fuzzy C-Means (FCM) algorithm

Fuzzy-based clustering techniques focus on modeling the uncertain and vague information found in real-world situations. These techniques deal with clusters whose boundaries cannot be defined sharply [14, 15]. With fuzzy-based clustering, one can tell whether data objects fully or partially belong to clusters based on their memberships in the different clusters [27]. Among the fuzzy-based clustering methods, Fuzzy C-Means (FCM) is the best-known algorithm, as it has the advantage of being robust when the information about the clusters is obscure [1, 28].

In FCM, a dataset is grouped into k clusters, where every data object may relate to every cluster with some degree of membership to that cluster [16]. The membership of a data object towards a cluster can range between 0 and 1 [29]. The sum of the memberships for each data point must be unity.

The FCM iterates through two phases to converge to a solution: first, each data object is associated with a membership value for each cluster; second, each data object is assigned to the cluster with the highest membership value [2].

The algorithm for FCM is given below [30]. Here, U is the k × N membership matrix. While computing the cluster centers and updating the membership matrix at each iteration, the FCM uses the membership weight m. For most data, 1.5 ≤ m ≤ 3.0 gives good results [29]. In all our experiments, we take m = 1.25.

FCM algorithm

  1. Initialize parameters: select k vectors randomly as cluster means; set the initial membership matrix $U_{k\times N}^{(0)}$; set the current iteration t = 0.

  2. Assign each data object Xi to clusters using the membership matrix.

  3. Compute the jth cluster center as follows:

$$\mu_j^{(t+1)} = \frac{\sum_{i=1}^{N} u_{ji}^{m} X_i}{\sum_{i=1}^{N} u_{ji}^{m}} \tag{3}$$

  4. Compute the new membership matrix using

$$u_{ji}^{(t+1)} = \left[ \sum_{l=1}^{k} \left( \frac{\lVert X_i - \mu_j^{(t)} \rVert^{2}}{\lVert X_i - \mu_l^{(t)} \rVert^{2}} \right)^{\frac{1}{m-1}} \right]^{-1} \tag{4}$$

  5. Assign each data object Xi to clusters using the membership matrix.

  6. Compute Percentage change using Eq. (2).

  7. Stop the process if Percentage change < α. Otherwise, set t = t + 1 and repeat steps 3–7 with the updated parameters.

FCM is widely studied and applied in geological shape analysis [31], medical diagnosis [32], automatic target recognition [33], meteorological data [28], pattern recognition, image analysis, image segmentation and image clustering [34, 35, 36], agricultural engineering, astronomy, chemistry [37], detection of polluted sites [38], and so on.
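As with the KM sketch above, the following Python/NumPy sketch is an illustrative rendering of the FCM steps, not the authors' Java implementation. It reuses percentage_change() from the KM sketch and stores U as an N × k array (the transpose of the k × N matrix in the text) for convenience.

```python
import numpy as np

def fcm(X, k, m=1.25, alpha=3.0, seed=0):
    """FCM algorithm sketch; m is the membership weight of Eqs. (3)-(4)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.full(len(X), -1)
    psi_prev = None
    while True:
        # Eq. (4): memberships from squared distances to the current centers.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        d2 = np.fmax(d2, 1e-12)              # guard against zero distances
        U = d2 ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)    # each object's memberships sum to 1
        new_labels = U.argmax(axis=1)        # highest-membership assignment
        # Eq. (3): centers as membership-weighted means.
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # Eq. (2) as the termination test, as in the KM sketch.
        psi = int((new_labels != labels).sum())
        labels = new_labels
        if psi_prev is not None and percentage_change(psi_prev, psi) < alpha:
            return U, labels, centers
        psi_prev = psi
```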

4. Hybridization involving K-Means and FCM techniques

The partitional [11] and fuzzy-based [16] methods are widely applied clustering techniques in several areas. The partitional clustering methods do hard clustering, where the dataset is partitioned into a specified number of mutually exclusive subsets. The K-Means, as a partitional clustering method, is among the most widely applied techniques in the research literature. While clustering the data, the K-Means aims at minimizing the local distortion [39, 40]. However, K-Means is ideal only if the data objects are distributed in well-separated groups.

In fuzzy-based clustering, objects are not forced to fully belong to one cluster. Here, an object may belong to many clusters with varying degrees of membership. This membership can range between 0 and 1, indicating the partial belongingness of objects to the clusters [16]. Fuzzy clustering techniques help in understanding whether the data objects fully or partially belong to clusters depending on their memberships [27]. In FCM, each data object belongs to each cluster with some degree of membership that ranges between 0 and 1 [29]. Here, clusters are treated as fuzzy sets. In essence, introducing fuzzy logic into K-Means yields the Fuzzy C-Means algorithm [41].

The following subsections discuss two algorithms that apply hybridization of the K-Means (KM) and Fuzzy C-Means (FCM) clustering techniques [42]. These algorithms are KMFCM and KMandFCM. The KMFCM algorithm first performs K-Means on the given dataset and then performs FCM using the results of K-Means. The KMandFCM algorithm performs K-Means and FCM in alternating iterations on the given dataset.

4.1 The KMFCM algorithm

The proposed hybrid clustering algorithm KMFCM first runs the K-Means (KM) technique to completion on the given dataset. Using the resulting cluster centers of KM as cluster seeds, FCM is then performed on the dataset until termination. Here, to run the first iteration of FCM, the cluster centers and the membership matrix are calculated based on the results of KM. The remaining iterations continue as in the FCM algorithm.

The algorithm for the KMFCM is given below. Here, KM-Step is the K-Means step and FCM-Step is the Fuzzy C-Means step.

KMFCM algorithm

  1. KM-Step: select k vectors randomly from the dataset as the initial cluster centers μj (j = 1, …, k). Set the current iteration t = 0.

  2. Assign each data object Xi to its closest cluster center using Eq. (1).

  3. Update the cluster centers μj (j = 1, …, k) and set t = t + 1.

  4. Compute Percentage change using Eq. (2).

  5. If Percentage change ≥ α, repeat steps 2–4.

  6. FCM-Step: compute the membership matrix $U_{k\times N}^{(t)}$ using Eq. (4) based on the results of the KM-Step.

  7. Assign data objects to clusters using the membership matrix.

  8. For each cluster Cj, compute the center μj (j = 1, …, k) using Eq. (3).

  9. Compute Percentage change using Eq. (2).

  10. Stop the process if Percentage change < α. Otherwise, set t = t + 1 and repeat steps 6–9.
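A sketch of KMFCM under the same assumptions follows; it chains the kmeans() and percentage_change() functions from the earlier sketches, with the FCM phase seeded by the converged KM centers. The name kmfcm is again an illustrative choice.

```python
import numpy as np

def kmfcm(X, k, m=1.25, alpha=3.0, seed=0):
    """KMFCM sketch: run the KM-Step (steps 1-5) to convergence, then use
    its resulting centers to seed the FCM-Step (steps 6-10)."""
    labels, centers = kmeans(X, k, alpha=alpha, seed=seed)      # KM-Step
    psi_prev = None
    while True:                                                 # FCM-Step
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        U = np.fmax(d2, 1e-12) ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)                       # Eq. (4), step 6
        new_labels = U.argmax(axis=1)                           # step 7
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]            # Eq. (3), step 8
        psi = int((new_labels != labels).sum())                 # Eq. (2), step 9
        labels = new_labels
        if psi_prev is not None and percentage_change(psi_prev, psi) < alpha:
            return labels, centers                              # step 10
        psi_prev = psi
```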

4.2 The KMandFCM algorithm

Clustering in KMandFCM is performed by executing the K-Means and FCM techniques in alternating iterations on the given dataset until termination. The first iteration is performed using K-Means, with randomly selected data points as cluster centers. The second iteration is performed using the FCM technique; for this iteration, the cluster means and the membership matrix are calculated using the results of the first iteration. The third iteration is performed using the K-Means technique, computing the cluster means from the results of the second iteration. In this way, clustering proceeds with K-Means and FCM in alternating iterations until termination.

The proposed KMandFCM algorithm is given below. Here, KM-Step is the K-Means step and FCM-Step is the Fuzzy C-Means step.

KMandFCM algorithm

  1. Select k vectors randomly from the dataset as the initial cluster centers μj (j = 1, …, k). Set the current iteration t = 0.

  2. KM-Step: assign each vector Xi to its closest cluster center using Eq. (1).

  3. FCM-Step: set t = t + 1.

  4. For each cluster Cj, compute the center μj using Eq. (3).

  5. Compute the new membership matrix $U_{k\times N}^{(t)}$ using Eq. (4).

  6. Assign data objects to clusters using the membership matrix.

  7. Compute Percentage change using Eq. (2).

  8. Stop the process if Percentage change < α. Otherwise, set t = t + 1.

  9. KM-Step: for each cluster Cj, compute the new center μj using Eq. (3).

  10. Assign each vector Xi to its closest cluster center using Eq. (1).

  11. Compute Percentage change using Eq. (2).

  12. Stop the process if Percentage change < α. Otherwise, go to step 3.

For all the algorithms, i.e., KM, FCM, KMFCM and KMandFCM, the same termination condition, Eq. (2), is used.
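The alternation can be expressed in the same sketch style: a flag flips between an FCM-style and a KM-style iteration, and the shared percentage_change() test from the KM sketch decides termination. As before, km_and_fcm is an illustrative name, and details such as feeding the hard (one-hot) memberships of a KM-Step into Eq. (3) are our assumptions about the intended procedure.

```python
import numpy as np

def km_and_fcm(X, k, m=1.25, alpha=3.0, seed=0):
    """KMandFCM sketch: KM-style and FCM-style iterations alternate."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    # Iteration 1 (KM-Step): nearest-center assignment, Eq. (1).
    labels = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2).argmin(axis=1)
    U = np.eye(k)[labels]                    # hard memberships as one-hot rows
    psi_prev, fcm_turn = None, True
    while True:
        # Both step types update centers with Eq. (3); with one-hot U this
        # reduces to the plain cluster mean.
        W = U ** m
        centers = (W.T @ X) / np.fmax(W.sum(axis=0)[:, None], 1e-12)
        d2 = np.fmax(((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2), 1e-12)
        if fcm_turn:
            # FCM-Step: Eq. (4) memberships, highest-membership assignment.
            U = d2 ** (-1.0 / (m - 1.0))
            U /= U.sum(axis=1, keepdims=True)
            new_labels = U.argmax(axis=1)
        else:
            # KM-Step: hard nearest-center assignment, Eq. (1).
            new_labels = d2.argmin(axis=1)
            U = np.eye(k)[new_labels]
        psi = int((new_labels != labels).sum())
        labels, fcm_turn = new_labels, not fcm_turn
        if psi_prev is not None and percentage_change(psi_prev, psi) < alpha:
            return labels, centers
        psi_prev = psi
```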

5. Performance evaluation measures

For the performance evaluation, CPU time in seconds, sum of squared errors (SSE) [12] and clustering fitness [43] are computed for all the algorithms.

5.1 Sum of squared errors

The objective of clustering is to minimize the within-cluster sum of squared errors (SSE). The smaller the SSE, the better the goodness of fit. The SSE [12] for the results of each clustering algorithm is computed using Eq. (5):

$$SSE = \sum_{j=1}^{k} \sum_{X_i \in C_j} \lVert X_i - \mu_j \rVert^{2} \tag{5}$$

Here, Xi is the ith data object in the dataset, μj (j = 1, …, k) is the center of the cluster Cj, and k is the number of clusters.
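Given the assignments and centers produced by any of the algorithm sketches above, Eq. (5) reduces to a few lines of NumPy; the function name sse is, as before, an illustrative choice.

```python
import numpy as np

def sse(X, labels, centers):
    """Eq. (5): within-cluster sum of squared errors."""
    centers = np.asarray(centers, dtype=float)
    return float(sum(((X[labels == j] - centers[j]) ** 2).sum()
                     for j in range(len(centers))))
```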

5.2 Clustering fitness

The main objective of any clustering algorithm is to generate clusters with high intra-cluster similarity and low inter-cluster similarity. So, it is also important to consider inter-cluster similarity while evaluating clustering performance. In the present work, clustering fitness is therefore considered as a performance criterion; it requires the calculation of both intra-cluster similarity and inter-cluster similarity, as well as an experiential weight, λ. The clustering fitness is higher when the inter-cluster similarity is low and lower when the inter-cluster similarity is high. To keep the computation of clustering fitness unbiased, the value of λ is taken as 0.5 [43].

(a) Intra-cluster similarity for the cluster Cj: it can be quantified via a function of the reciprocals of the intra-cluster radii within each of the resulting clusters. The intra-cluster similarity [43] of a cluster Cj (1 ≤ j ≤ k), denoted as Stra(Cj), is defined in Eq. (6):

$$S_{tra}(C_j) = \frac{1 + n}{1 + \sum_{l=1}^{n} dist(I_l, Centroid)} \tag{6}$$

Here, n is the number of items in cluster Cj, Il (1 ≤ l ≤ n) is the lth item in cluster Cj, and dist(Il, Centroid) calculates the distance between Il and the centroid of Cj, which is the intra-cluster radius of Cj. To smooth the value of Stra(Cj) and allow for possible singleton clusters, 1 is added to the denominator and the numerator.

(b) Intra-cluster similarity for one clustering result C: it is denoted as Stra(C) and is defined in Eq. (7) [43]:

$$S_{tra}(C) = \frac{\sum_{j=1}^{k} S_{tra}(C_j)}{k} \tag{7}$$

Here, k is the number of resulting clusters in C and Stra(Cj) is the intra-cluster similarity of the cluster Cj.

(c) Inter-cluster similarity: it can be quantified via a function of the reciprocals of the inter-cluster radii of the clustering centroids. The inter-cluster similarity [43] for one of the possible clustering results C, denoted as Ster(C), is defined in Eq. (8):

$$S_{ter}(C) = \frac{1 + k}{1 + \sum_{j=1}^{k} dist(Centroid_j, Centroid_2)} \tag{8}$$

Here, k is the number of resulting clusters in C, Centroidj (1 ≤ j ≤ k) is the centroid of the jth cluster in C, and Centroid2 is the centroid of all the cluster centroids in C. The inter-cluster radius of Centroidj is dist(Centroidj, Centroid2), the distance between Centroidj and Centroid2. To smooth the value of Ster(C) and allow for a possible all-inclusive clustering result, 1 is added to the denominator and the numerator.

(d) Clustering fitness: the clustering fitness [43] for one of the possible clustering results C, denoted as CF, is defined in Eq. (9):

$$CF = \lambda \times S_{tra}(C) + (1 - \lambda) \times S_{ter}(C) \tag{9}$$

Here, λ (0 < λ < 1) is an experiential weight, Stra(C) is the intra-cluster similarity for the clustering result C and Ster(C) is the inter-cluster similarity for the clustering result C. To avoid bias in our experiments, λ is taken as 0.5.
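Under the same assumptions as the earlier sketches (NumPy arrays, Euclidean dist, and an illustrative function name), Eqs. (6)–(9) can be computed directly from the assignments and centers:

```python
import numpy as np

def clustering_fitness(X, labels, centers, lam=0.5):
    """Eqs. (6)-(9): CF = lam * Stra(C) + (1 - lam) * Ster(C)."""
    centers = np.asarray(centers, dtype=float)
    k = len(centers)
    stra = []
    for j in range(k):
        members = X[labels == j]
        radius = np.linalg.norm(members - centers[j], axis=1).sum()
        stra.append((1 + len(members)) / (1 + radius))        # Eq. (6)
    stra_c = sum(stra) / k                                    # Eq. (7)
    grand = centers.mean(axis=0)                              # centroid of centroids
    ster_c = (1 + k) / (1 + np.linalg.norm(centers - grand, axis=1).sum())  # Eq. (8)
    return lam * stra_c + (1 - lam) * ster_c                  # Eq. (9)
```

With lam = 0.5, intra-cluster and inter-cluster similarity contribute equally, matching the unbiased setting used in the experiments.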

6. Experiments and results

The experimental work has been carried out on a system with an Intel(R) Core(TM) i3-5005U CPU @ 2.00 GHz, 4 GB RAM and Windows 7 OS (64-bit), using JDK 1.7.0_45. Separate modules are written for each of the methods discussed above to observe the CPU time for clustering any dataset, keeping the cluster seeds the same for all methods. I/O operations are eliminated, so the CPU time observed is strictly for the clustering of the data.

Along with the newly developed hybrid algorithms, experiments are also conducted with the standard K-Means (KM) and Fuzzy C-Means (FCM) algorithms for performance comparison. All the algorithms are executed on datasets derived from four images. The details of these images are given in Table 1.

| S. No. | Image | Resolution | No. of points | No. of dimensions |
|--------|---------|-----------|---------------|-------------------|
| 1 | Heart | 341 × 367 | 125,147 | 3 |
| 2 | Kidneys | 473 × 355 | 167,915 | 3 |
| 3 | Baboon | 512 × 512 | 262,144 | 3 |
| 4 | Lena | 256 × 256 | 65,536 | 3 |

Table 1.

Images used in the experiments.

The medical images used in the present experiments are the heart image [44] and the kidneys image [45] (Figures 13 and 14). The experiments are also carried out using two benchmark images, the Baboon and Lena images [46] (Figures 15 and 16).

Figure 1.

CPU time (Heart image).

Figure 2.

Clustering fitness (Heart image).

Figure 3.

Sum of squared errors (Heart image).

Figure 4.

CPU time (Kidneys image).

Below is a brief description of the images.

The Heart is a medical image obtained from a biology data repository [44]. It is in "jpeg" format. The Kidneys is a colored MRI scan of a coronal section through a human abdomen, showing the front view of healthy kidneys and liver [45]. It is also in "jpeg" format. The Baboon and Lena are benchmark test images found frequently in the literature [46]. These are in uncompressed "tif" format.

The algorithms for standard K-Means (KM), standard Fuzzy C-Means (FCM), KMFCM and KMandFCM are executed on each image's data with a varying number of clusters (k = 10, 11, 12, 13, 14, 15). The same cluster seeds are used for all algorithms, and the same termination condition, Eq. (2), is used for all the experiments. The CPU time, clustering fitness and SSE of each algorithm for all the images are given in the following subsections (Tables 2–13). The results are also plotted in the corresponding graphs (Figures 1–12).

| k | KM | FCM | KMFCM | KMandFCM |
|----|------|------|-------|----------|
| 10 | 0.21 | 0.30 | 1.36 | 0.19 |
| 11 | 0.21 | 0.32 | 1.48 | 0.20 |
| 12 | 0.25 | 0.40 | 1.61 | 0.20 |
| 13 | 0.09 | 0.35 | 1.58 | 0.22 |
| 14 | 0.14 | 0.39 | 1.73 | 0.23 |
| 15 | 0.36 | 0.43 | 2.15 | 0.26 |

Table 2.

CPU time of each clustering technique (Heart image).

| k | KM | FCM | KMFCM | KMandFCM |
|----|-------|-------|-------|----------|
| 10 | 51.20 | 56.62 | 58.51 | 64.78 |
| 11 | 49.79 | 55.73 | 55.40 | 62.14 |
| 12 | 42.27 | 55.80 | 61.16 | 65.97 |
| 13 | 34.88 | 47.54 | 41.08 | 58.46 |
| 14 | 48.34 | 55.22 | 56.62 | 60.35 |
| 15 | 47.54 | 57.96 | 48.24 | 59.22 |

Table 3.

Clustering fitness of each clustering technique (Heart image).

| k | KM | FCM | KMFCM | KMandFCM |
|----|--------|--------|--------|----------|
| 10 | 0.0163 | 0.0152 | 0.0148 | 0.0041 |
| 11 | 0.0150 | 0.0145 | 0.0074 | 0.0036 |
| 12 | 0.0173 | 0.0163 | 0.0059 | 0.0031 |
| 13 | 0.0185 | 0.0171 | 0.0285 | 0.0037 |
| 14 | 0.0142 | 0.0139 | 0.0113 | 0.0028 |
| 15 | 0.0138 | 0.0114 | 0.0241 | 0.0024 |

Table 4.

SSE of each clustering technique (Heart image).

| k | KM | FCM | KMFCM | KMandFCM |
|----|------|------|-------|----------|
| 10 | 0.09 | 0.68 | 1.58 | 0.55 |
| 11 | 0.13 | 0.41 | 1.83 | 0.26 |
| 12 | 0.81 | 0.58 | 2.64 | 0.46 |
| 13 | 0.08 | 0.47 | 2.07 | 0.30 |
| 14 | 0.24 | 0.60 | 2.40 | 0.31 |
| 15 | 0.65 | 1.78 | 2.22 | 1.06 |

Table 5.

CPU time of each clustering technique (Kidneys image).

| k | KM | FCM | KMFCM | KMandFCM |
|----|-------|-------|-------|----------|
| 10 | 38.40 | 47.15 | 54.76 | 61.48 |
| 11 | 42.11 | 49.43 | 57.86 | 65.84 |
| 12 | 52.41 | 61.03 | 60.00 | 65.41 |
| 13 | 41.20 | 51.04 | 48.73 | 56.79 |
| 14 | 57.49 | 64.85 | 64.88 | 71.59 |
| 15 | 53.10 | 61.40 | 62.85 | 66.42 |

Table 6.

Clustering fitness of each clustering technique (Kidneys image).

| k | KM | FCM | KMFCM | KMandFCM |
|----|--------|--------|--------|----------|
| 10 | 0.0281 | 0.0215 | 0.0129 | 0.0075 |
| 11 | 0.0265 | 0.0172 | 0.0114 | 0.0054 |
| 12 | 0.0249 | 0.0109 | 0.0140 | 0.0029 |
| 13 | 0.0123 | 0.0109 | 0.0191 | 0.0112 |
| 14 | 0.0144 | 0.0090 | 0.0067 | 0.0037 |
| 15 | 0.0115 | 0.0045 | 0.0028 | 0.0011 |

Table 7.

SSE of each clustering technique (Kidneys image).

| k | KM | FCM | KMFCM | KMandFCM |
|----|------|------|-------|----------|
| 10 | 0.14 | 0.79 | 2.16 | 0.62 |
| 11 | 0.16 | 0.86 | 2.37 | 0.63 |
| 12 | 0.29 | 0.91 | 2.68 | 0.63 |
| 13 | 0.31 | 1.01 | 2.91 | 0.50 |
| 14 | 0.36 | 0.72 | 3.14 | 0.78 |
| 15 | 0.48 | 1.10 | 3.24 | 0.55 |

Table 8.

CPU time of each clustering method (Baboon image).

| k | KM | FCM | KMFCM | KMandFCM |
|----|-------|-------|-------|----------|
| 10 | 30.22 | 32.17 | 36.02 | 39.07 |
| 11 | 22.28 | 29.71 | 37.36 | 39.49 |
| 12 | 28.70 | 32.63 | 35.13 | 39.57 |
| 13 | 31.28 | 33.47 | 40.39 | 42.28 |
| 14 | 25.92 | 29.49 | 37.77 | 39.81 |
| 15 | 36.48 | 38.16 | 34.43 | 39.98 |

Table 9.

Clustering fitness of each clustering method (Baboon image).

| k | KM | FCM | KMFCM | KMandFCM |
|----|--------|--------|--------|----------|
| 10 | 0.0080 | 0.0063 | 0.0059 | 0.0030 |
| 11 | 0.0073 | 0.0068 | 0.0037 | 0.0024 |
| 12 | 0.0099 | 0.0071 | 0.0053 | 0.0029 |
| 13 | 0.0065 | 0.0058 | 0.0070 | 0.0025 |
| 14 | 0.0087 | 0.0070 | 0.0041 | 0.0022 |
| 15 | 0.0069 | 0.0056 | 0.0027 | 0.0019 |

Table 10.

SSE of each clustering method (Baboon image).

| k | KM | FCM | KMFCM | KMandFCM |
|----|------|------|-------|----------|
| 10 | 0.08 | 0.15 | 0.66 | 0.09 |
| 11 | 0.13 | 0.44 | 0.76 | 0.32 |
| 12 | 0.06 | 0.17 | 0.77 | 0.11 |
| 13 | 0.09 | 0.40 | 0.84 | 0.32 |
| 14 | 0.05 | 0.20 | 0.92 | 0.13 |
| 15 | 0.21 | 0.24 | 1.09 | 0.14 |

Table 11.

CPU time of each clustering method (Lena image).

| k | KM | FCM | KMFCM | KMandFCM |
|----|-------|-------|-------|----------|
| 10 | 25.50 | 28.80 | 30.61 | 32.79 |
| 11 | 22.97 | 25.52 | 27.95 | 31.08 |
| 12 | 20.22 | 23.38 | 25.44 | 29.97 |
| 13 | 28.71 | 30.13 | 32.74 | 34.26 |
| 14 | 26.75 | 29.83 | 31.05 | 33.27 |
| 15 | 23.70 | 30.19 | 32.79 | 34.60 |

Table 12.

Clustering fitness of each clustering method (Lena image).

| k | KM | FCM | KMFCM | KMandFCM |
|----|--------|--------|--------|----------|
| 10 | 0.0147 | 0.0127 | 0.0093 | 0.0034 |
| 11 | 0.0245 | 0.0218 | 0.0099 | 0.0041 |
| 12 | 0.0246 | 0.0178 | 0.0077 | 0.0034 |
| 13 | 0.0144 | 0.0106 | 0.0060 | 0.0027 |
| 14 | 0.0135 | 0.0110 | 0.0062 | 0.0024 |
| 15 | 0.0130 | 0.0100 | 0.0049 | 0.0022 |

Table 13.

SSE of each clustering method (Lena image).

Figure 5.

Clustering fitness (Kidneys image).

Figure 6.

Sum of squared errors (Kidneys image).

Figure 7.

CPU time (Baboon image).

Figure 8.

Clustering fitness (Baboon image).

Figure 9.

Sum of squared errors (Baboon image).

Figure 10.

CPU time (Lena image).

Figure 11.

Clustering fitness (Lena image).

Figure 12.

Sum of squared errors (Lena image).

Figure 13.

Heart image.

Figure 14.

Kidneys image.

Figure 15.

Baboon image.

Figure 16.

Lena image.

6.1 Observations with Heart image

The CPU time, clustering fitness and SSE observed for the Heart image are reported in Tables 2–4 and Figures 1–3.

6.2 Observations with Kidneys image

The corresponding results for the Kidneys image are reported in Tables 5–7 and Figures 4–6.

6.3 Observations with Baboon image

The results for the Baboon image are reported in Tables 8–10 and Figures 7–9.

6.4 Observations with Lena image

The results for the Lena image are reported in Tables 11–13 and Figures 10–12.

6.5 Original images used for present experimentation

The original Heart, Kidneys, Baboon and Lena images are shown in Figures 13–16.

6.6 Comparison of segmentation results on Baboon image

As an example of the present image segmentation experiments, the segmentation results for the Baboon image with 10 clusters are presented here. These results are generated by the proposed hybrid clustering algorithms along with the standard K-Means and standard FCM algorithms.

For segmentation, each algorithm is executed on the Baboon image data with the number of clusters set to 10, i.e., k = 10. Each cluster represents one segment, and a separate color code is assigned to each cluster. The color codes are red, yellow, green, blue, orange, black, white, gray, cyan and magenta. The segmentation results generated by all the algorithms are shown in Figure 17, along with the original Baboon image.
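The color-coding step can be sketched as a simple palette lookup; the RGB values below are illustrative assumptions for the ten named color codes, and Pillow is used only to rebuild an image from the labeled pixels.

```python
import numpy as np
from PIL import Image

# Assumed RGB values for the ten color codes named above.
PALETTE = np.array([
    (255, 0, 0), (255, 255, 0), (0, 128, 0), (0, 0, 255), (255, 165, 0),
    (0, 0, 0), (255, 255, 255), (128, 128, 128), (0, 255, 255), (255, 0, 255),
], dtype=np.uint8)  # red, yellow, green, blue, orange, black, white, gray, cyan, magenta

def render_segments(labels, height, width):
    """Color each pixel by its cluster's code (k = 10) and rebuild the image
    from the flat label vector produced by the clustering algorithms."""
    return Image.fromarray(PALETTE[labels].reshape(height, width, 3))

# For the Baboon image: render_segments(labels, 512, 512).save("baboon_seg.png")
```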

Figure 17.

Image segmentation results for the Baboon image (for 10 clusters).

In all the experiments, it is observed that the hybrid clustering algorithm KMandFCM shows better performance in terms of CPU time, clustering fitness and SSE than the other algorithms.

7. Conclusion

The present chapter studies the hybridization of two popular clustering algorithms, K-Means and FCM, and identifies the best hybridization strategy. All experiments are carried out for segmenting four images, including two medical images. For all the algorithms, CPU time, clustering fitness and sum of squared errors (SSE) are taken into consideration in the performance evaluation. In all the experiments conducted, the proposed hybrid algorithm KMandFCM exhibits better performance in terms of CPU time, clustering fitness (CF) and SSE.

In all experiments, it is also observed that the proposed hybrid clustering algorithms show better performance than the standard K-Means and FCM algorithms, and the KMandFCM algorithm in particular produces good results compared to all the other algorithms. Thus, it can be concluded that the hybrid clustering algorithm KMandFCM should find good application in other fields too.


© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
