Open access peer-reviewed chapter

Multistage Classification and Segmentation of Brain MR Image Using Modified Soft Computing Techniques

Written By

G. Sethuram Rao and D. Vydeki

Submitted: 09 May 2022 Reviewed: 17 June 2022 Published: 09 September 2022

DOI: 10.5772/intechopen.105908

From the Edited Volume

Central Nervous System Tumors - Primary and Secondary

Edited by Feyzi Birol Sarica


Abstract

Recent studies indicate that brain tumors are among the major causes of human casualties. Timely and accurate diagnosis of this life-threatening disease can reduce the casualty rate and extend the life of a patient. In this research paper, techniques for detecting malignant brain tumors from MR images using modified soft computing approaches are presented and analyzed. An automated tumor detection system using an artificial neural network (ANN) is proposed to classify each image into one of four classes: Glioblastoma multiforme, Meningioma, secondary tumor-Metastasis and No Tumor. The classified image then undergoes a segmentation process that predicts the size of the tumor in terms of pixels. Traditionally, a conventional self-organizing map (CSOM) and a conventional back propagation network (CBPN) are used for classification and segmentation, respectively. However, these methods provide less accurate results and incur high computational complexity. Moreover, due to unstable target weights, the number of iterations required is large. These drawbacks are overcome in the proposed technique by developing a modified SOM (MSOM) for classification of images and a modified BPN (MBPN) for segmentation. Simulated results show that the proposed modifications minimize the computational complexity without compromising accuracy. It is shown that MSOM increases classification accuracy by 10% compared to its conventional counterpart; similarly, segmentation accuracy is improved by 8% using MBPN.

Keywords

  • brain MR images
  • feature extraction
  • neural network
  • BPN
  • SOM
  • image classification
  • segmentation

1. Introduction

Medical image analysis supports a wide variety of healthcare applications and aids the diagnosis and treatment of ailing patients. Among the available techniques, artificial neural networks (ANNs) lead in medical imaging applications, especially in brain tumor detection and classification. Two of the main ANN approaches are the self-organizing map (SOM) and the back propagation network (BPN). Although SOM and BPN provide high accuracy, they require huge computational effort due to the many hidden layers between the input and output neurons. Thus, for any system to be efficient in terms of performance, it must achieve high accuracy at a low level of computational complexity. An extensive literature survey reveals that neural network based brain tumor detection and classification is a promising research field.

This proposed work presents a modified Self-Organizing Map (MSOM) for classification of brain tumor into different classes and a modified Back Propagation Network (MBPN) for segmentation of brain tumor, so that abnormal portion is extracted from any MR brain image.

The major contributions of this research work are as follows:

  1. The accuracy of MSOM is higher, since the weights obtained are more stabilized than those of CSOM.

  2. As the algorithm is free from iterations, the convergence rate is also improved; the computation time required is less than 1 CPU second.

  3. The independence of MBPN from iterations makes it immune to local minima.

The rest of the paper is organized as follows. Section 2 analyzes and reviews the associated research papers. The proposed research work is described in Section 3. Section 4 deals with the feature extraction method for both classification and segmentation. Sections 5 to 7 give a comparative analysis of the proposed method against the existing methods, with the simulated results and their respective analysis. Section 8 gives the conclusion and future scope of the research work.


2. Literature survey

DNN-based MR brain tumor image classification is explored by Heba M [1]. Three classes of brain tumor, benign, malignant and no tumor, are described in this work, and a conventional training method is adapted to identify the three tumor types in brain MR images. The work by El-Dahsan and Hosny T [2] used DWT for feature extraction and principal component analysis for feature reduction, and achieves up to 97% accuracy using feed forward back propagation neural network (BPNN) and k-nearest neighbor (KNN) classifiers. Zarchari [3] used GLCM, Gabor, intensity, shape and statistical feature extraction techniques to develop multiclass classification of brain tumors from 100 extracted features. Eli et al. [4] proposed algorithms for medical image processing using deep learning techniques, with a prime focus on the open-source software available for brain tumor segmentation and classification. Michael S [5] modeled human visual perception using a deep learning approach for image classification; the limitations of the conventional methods used in neural networks, along with suggestions, are available in this work. Ling-Li Z [6] worked on autoencoder neural network based MRI and fMRI brain image classification. Another survey by Chensi C [7] covers deep learning algorithms for biomedical applications. Yazhou K [8] proposed classification using a deep neural network; the limitation of this paper is that only two-level classification can be achieved, determining whether a tissue is normal or abnormal.

Hongmin L [9] proposed a modified DCNN. This algorithm is designed to use a reduced data set for training and testing; its limitation is very poor accuracy in identifying the right tumor class. Usman and Rajpoot [10] presented a paper on brain tumor categorization from multi-modality MRI by means of wavelets and machine learning, with a random forest used to find the tumor class. This method gives an accuracy of 88% in categorization compared to ground truth images. Papers [11, 12, 13, 14] are used to explore segmentation techniques. These papers exploit techniques that separate tumor portions using properties such as color, contrast, brightness and gray level. The tumor portions are then evaluated for tissues such as white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF). Youyong et al. [15] proposed an approach to segmentation based primarily on information theoretic learning; this concept mainly practices feature selection and data clustering of brain tumors at the supervoxel level. Both of these methods require iterative tuning to achieve superior segmentation results, and hence consume a long duration to obtain them. Damodharan and Raghavan [16] developed a method for brain tumor analysis using a neural network; this paper uses k-nearest neighbors (K-NN) and a Bayesian algorithm, and produces an accuracy of 83%. In all the proposals mentioned above, image processing techniques such as pre-processing, feature extraction, segmentation and classification play a vital role. The primary idea running through this literature on computer aided diagnosis (CAD) systems is to produce the highest level of accuracy with minimum computation time. Although many approaches are available, robust and efficient brain tumor classification is still an important and challenging task.


3. Proposed system

Figure 1 provides the overall block diagram of the proposed system. The image data sets used in the proposed system are MR brain images. Whole images are used for image classification, and pixels within a single image are used for image segmentation. All the images used in the framework are of 256 × 256 dimension. The major steps in the proposed system are pre-processing of the MR brain images followed by image-based and pixel-based extraction of features. The brain MR images are classified into one of four classes: Glioblastoma multiforme (GBM), Meningioma (MEN), secondary tumor-Metastasis (MET) and No Tumor (NT), using both the conventional SOM and the modified SOM. Similarly, image segmentation is done using the conventional BPN and the modified BPN to extract the abnormal portion of a single MR brain image. Finally, performance estimation and comparative analysis are done through training and testing with both the conventional and modified methods.

Figure 1.

Block diagram of proposed system.

3.1 MR image dataset

The dataset used in the proposed system comprises 59 patients, constituting 125 Glioblastoma (GBM), 120 Meningioma (MEN), 135 Metastasis (MET) and 430 No Tumor (NT) images. For each patient, T1, T2, FLAIR and post-gadolinium sequences are available. The MR images used in this proposal are simulated images collected from the Centre for Diagnostic Imaging, Tirupathi, Andhra Pradesh, India over the period August 2018 to November 2019. Ground truth images of WM, GM, CSF and tumor are also provided by the diagnostic centre. The equipment used for acquiring the MRI is a Siemens Verio (Erlangen, Germany) 3-tesla MR scanner. The number of no-tumor MR images is taken to be significantly larger than that of the malignant categories in order to represent the highly varying anatomical regions well. The ground truth images are certified by neurological surgeon Dr. D. Jagadeeswara Reddy, MBBS, MCh. (NIMS). In total, the 810-image brain MRI dataset shown in Table 1 constitutes simulated, ground truth and real-time images. 60% of the dataset is used for training and 40% for testing in this proposal.

| Category | Training Images | Testing Images |
| --- | --- | --- |
| Glioblastoma | 75 | 50 |
| Meningioma | 72 | 48 |
| Metastasis | 81 | 54 |
| No tumor | 258 | 172 |

Table 1.

Brain MR image database.

3.2 Software implementation

The proposed method is implemented in MATLAB 8.0 and trained on brain tumor MR images of size 256 × 256. The research is done on a personal computer with an Intel™ Core 2 Duo 2.0 GHz processor and 3 GB RAM.

3.3 Image pre-processing

Magnetic resonance imaging (MRI), used here for brain analysis, is a safe and painless test. MRI uses radio waves in a magnetic field to produce detailed brain images, and can detect a variety of conditions such as tumors, swelling, bleeding and other brain abnormalities. The acquired MR brain image is given as input to the initial pre-processing step. Figure 2 gives the flow diagram of image pre-processing. The input MR image is raw and needs enhancement to make it appropriate for the proposed application. Pre-processing improves properties of the MR brain image such as smoothing and edge preservation, raises the signal-to-noise ratio, and removes redundant information, unrelated noise and unwanted background information.

Figure 2.

Flow diagram of pre-processing.

3.3.1 Gray to binary image conversion

Initially, the MR brain image, which is a gray image, is converted to a binary image as shown in Figure 2. The gray-scale image, with gray values from 0 to 255, is subjected to a suitable threshold to make the image binary. The threshold value selected in this proposal is 45: pixels with values above 45 are set to 255, and pixels with values below 45 are set to 0. Thus every pixel value of the MR brain image becomes either 0 or 255.
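This thresholding step can be sketched as follows (a minimal NumPy illustration with a toy array; the chapter's own implementation is in MATLAB, and the function name is hypothetical):

```python
import numpy as np

def to_binary(gray, threshold=45):
    """Binarize a grayscale MR slice: pixels above the threshold
    become 255, all others become 0."""
    gray = np.asarray(gray)
    return np.where(gray > threshold, 255, 0).astype(np.uint8)

# Toy 2x3 "image" with values on both sides of the threshold 45
img = np.array([[10, 45, 46],
                [200, 0, 100]])
binary = to_binary(img)   # [[0, 0, 255], [255, 0, 255]]
```

The same logic applies unchanged to a full 256 × 256 slice.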

3.3.2 Skull stripping

Additional tissues in the brain MR image, such as fat, skin and skull, affect the segmentation results. To remove this unwanted information, skull stripping is done using morphological operations, which remove pixels on object boundaries. The erosion operation uses a specified neighborhood to darken tiny bright areas such as the skull boundary over the brain tissue.

3.3.3 Connected component analysis

Connected component analysis is used to detect connected regions in MR brain images. This technique labels the vertices based on connectivity, for example a 4-connected or 8-connected neighborhood. The process makes unwanted objects in an image disappear, as they are replaced with values that blend in with the background area.

3.3.4 Masking

Masking is needed to change the pixel values from the range 0–255 to the range 0–1. It is implemented by taking the product of the original image and the mask. The resulting output images, shown in Figure 3, are free from extracranial tissues.
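The masking product can be sketched as follows (an illustrative NumPy fragment; variable names are hypothetical):

```python
import numpy as np

def apply_mask(image, mask):
    """Pixel-wise product of the original image with a 0/1 mask:
    pixels under mask value 1 survive, pixels under 0 are removed."""
    return image * (mask > 0)

image = np.array([[120, 30],
                  [200, 90]])
mask = np.array([[1, 0],
                 [1, 1]])           # 1 = brain tissue, 0 = stripped skull/background
result = apply_mask(image, mask)   # [[120, 0], [200, 90]]
```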

Figure 3.

Pre-processed sample images (a) input images (b) eroded images (c) images after thresholding (d) images after connected component analysis (e) enhanced output images.

In Figure 3, experimental results of the pre-processing steps on MR brain images are shown. The MR images in Figure 3(a) are the inputs to the pre-processing stage. Figure 3(b) shows the eroded images, in which the outer skull layer has mostly disappeared from the brain tissue. In Figure 3(c), the gray images are converted to binary by setting a threshold value of 45, leaving only 0 or 1 pixel values. In Figure 3(d), a mask is obtained using connected component analysis and multiplied with the input image: each pixel of the input image from (a) is multiplied with the corresponding (0 or 1) pixel of the mask in (d), so only the white portion (equal to 1) appears in the output, while the black portion (equal to 0) does not. Finally, Figure 3(e) presents the original images without the skull portion.


4. Feature extraction

Feature extraction is a widely used technique to reduce the complexity of the classifier by capturing the characteristics of the image in terms of color, texture, size and edges. In this proposal, two types of textural features, image-based and pixel-based, are used for classification and segmentation respectively. With image-based textural features, MR images are classified into the four tumor classes Glioblastoma multiforme (GBM), Meningioma (MEN), secondary tumor-Metastasis (MET) and No Tumor (NT), as shown in Figure 4. Similarly, with pixel-based textural features, image segmentation is done to determine non-tumorous and tumorous regions.

Figure 4.

(a). Class I-glioblastoma (b). Class II-meningioma (c). Class III-metastasis (d). Class IV-No tumor.

4.1 Image classification

Image-based textural features of brain MR images are used to classify a given image into one of the four classes: Glioblastoma multiforme (GBM), Meningioma (MEN), secondary tumor-Metastasis (MET) and No Tumor (NT). These features represent the characteristics of the input image that help distinguish the different types of image. The eight textural features used in this proposal are (i) angular second moment, (ii) contrast, (iii) correlation, (iv) variance, (v) inverse difference moment, (vi) entropy, (vii) skewness and (viii) kurtosis. The following notation is used in the formulae.

p(i, j) = the MR image, of size 256 × 256.

p_x = vector of elements obtained by summing p(i, j) along the horizontal direction; size 256 × 1.

p_y = vector of elements obtained by summing p(i, j) along the vertical direction; size 1 × 256.

  1. Angular second moment (ASM): a measure of the textural uniformity of an image. Energy reaches its highest value when the gray level values of the image become constant or periodic.

     For an MR image p(i, j) of size 256 × 256,

     f_1 = \sum_i \sum_j [p(i,j)]^2   (E1)

     where i = 1, 2, …, 256; j = 1, 2, …, 256.

  2. Contrast: the degree of difference from pixel to pixel in an image.

     f_2 = \sum_{n=0}^{N_g - 1} n^2 \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} [p(i,j)]^2   (E2)

     where i = 1, 2, …, 256; j = 1, 2, …, 256.

  3. Correlation: establishes the relationship between two pixels by measuring their connection.

     f_3 = \frac{\sum_i \sum_j (i \cdot j)\, p(i,j) - \mu_x \mu_y}{\sigma_x \sigma_y}   (E3)

     where \mu_x, \mu_y, \sigma_x and \sigma_y denote the means and standard deviations of p_x and p_y.

  4. Variance: the quality of being different, divergent or inconsistent.

     f_4 = \sum_i \sum_j (i - \mu)^2\, p(i,j)   (E4)

  5. Inverse difference moment (IDM): a measure of image homogeneity.

     f_5 = \sum_i \sum_j \frac{1}{1 + (i - j)^2}\, p(i,j)   (E5)

  6. Entropy: a measure of the randomness in the information being processed.

     f_6 = -\sum_i \sum_j p(i,j) \log p(i,j)   (E6)

  7. Skewness: a measure of how far the given distribution differs from the normal distribution.

     f_7 = \frac{1}{\sigma^3} \sum_i \sum_j [p(i,j) - \mu]^3   (E7)

  8. Kurtosis: a statistical measure of how strongly the tails of the distribution differ from the tails of the normal distribution.

     f_8 = \frac{1}{\sigma^4} \sum_i \sum_j [p(i,j) - \mu]^4 - 3   (E8)

The above eight features are extracted from the brain MR image and given as input to the neural network for further classification.
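To make the formulas concrete, here is a small NumPy sketch of three of the eight features (ASM, IDM and entropy, Eqs. E1, E5 and E6), computed over a normalized matrix p whose entries sum to 1; the function name is illustrative, not from the chapter:

```python
import numpy as np

def texture_features(p):
    """ASM (f1), inverse difference moment (f5) and entropy (f6)
    of a normalized matrix p(i, j)."""
    i, j = np.indices(p.shape)
    asm = np.sum(p ** 2)                      # E1: sum of squared entries
    idm = np.sum(p / (1.0 + (i - j) ** 2))    # E5: weight falls off with |i - j|
    nz = p[p > 0]                             # avoid log(0)
    entropy = -np.sum(nz * np.log(nz))        # E6: randomness of the distribution
    return asm, idm, entropy

# Uniform 4x4 matrix: maximum entropy (ln 16) and minimum ASM (1/16)
p = np.full((4, 4), 1 / 16)
asm, idm, ent = texture_features(p)
```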

4.2 Feature values for image classification

The eight features described in Section 4.1 are extracted from the brain MR images. The MR images used are ground truth images. The values shown in Table 2 are the average values over the entire image dataset. Close inspection of the table reveals that the No Tumor, Glioblastoma, Meningioma and Metastasis categories have clearly divergent values, which makes the classification process easier.

| Features | No Tumor | Glioblastoma | Meningioma | Metastasis |
| --- | --- | --- | --- | --- |
| ASM (f1) | 1.0022 | 223,000,562 | 604,422,470 | 384,652,269 |
| Contrast (f2) | 8,763,785 | 2,177,180 | 6,457,236 | 3,700,147 |
| Correlation (f3) | 3.2290 | 2.0014 | 4.2467 | 1.6320 |
| Variance (f4) | 4.5537 | 2.3593 | 3.8371 | 2.1467 |
| IDM (f5) | 6.5250 | 3.7115 | 4.2941 | 2.1614 |
| Entropy (f6) | 3.1745 | 8.5147 | 1.3335 | 2.8693 |
| Skewness (f7) | 3.9760 | 9.5895 | 2.7925 | 1.6630 |
| Kurtosis (f8) | 5.5760 | 1.3690 | 2.7647 | 4.7109 |

Table 2.

Image based feature values for image classification.

4.3 Feature values for image segmentation

Similar to classification, image segmentation is carried out to differentiate non-tumorous and tumorous regions with the help of the eight features mentioned in Section 4.1. Here, only the pixels within a single image are used, so the data required for image segmentation depends not on the number of images but on the number of pixels. The values shown in Table 3 are average values over the entire image dataset. Only bi-level segmentation is performed on the real-time images because ground truth images are available only for the tumor region.

| Features | Non-tumorous region | Tumor region |
| --- | --- | --- |
| ASM (f1) | 132,975 | 202 |
| Contrast (f2) | 59,454,525 | 2,210,850 |
| Correlation (f3) | −45 | 73 |
| Variance (f4) | 3,786,322 | 117 |
| IDM (f5) | 547 | 20 |
| Entropy (f6) | 1268 | −160 |
| Skewness (f7) | 4443 | 60 |
| Kurtosis (f8) | 596 | 2.7022 |

Table 3.

Pixel based feature values for image segmentation.

The feature values provided in Tables 2 and 3 are sufficiently diversified, which ultimately guarantees the success of the classification and segmentation processes in detecting the brain tumor.


5. Proposed methodology for image classification using SOM

In this paper, brain MR image classification is done with a modified self-organizing map (MSOM) network and the results are compared with the existing conventional self-organizing map (CSOM) network. The four categories Glioblastoma, Meningioma, Metastasis and no tumor are detected using both CSOM and MSOM. The performance of both methods is estimated in terms of classification accuracy, sensitivity, computational complexity and convergence rate. Figure 5 shows the proposed methodology for image classification.

Figure 5.

Proposed methodology for image classification.

5.1 Conventional self-organizing map (CSOM) network based classification

Figure 6 shows the conventional self-organizing map network, a single-layered ANN. The architecture is associated with one weight matrix (w), and the training algorithm follows an unsupervised training methodology. The architecture has eight neurons in the input layer, corresponding to the eight features ASM (f1), contrast (f2), correlation (f3), variance (f4), IDM (f5), entropy (f6), skewness (f7) and kurtosis (f8) that are given as input to the ANN. The main objective of training is to obtain stabilized weight matrices (w). The steps of the training algorithm are as follows.

Figure 6.

Architecture of CSOM with 8 input and 4 output classes.

5.2 Training algorithm of CSOM

Step 1: Randomly initialize the weight vectors.

Step 2: While the stopping condition is false, repeat steps 3 to 6.

Step 3: Compute the Euclidean distance for each output layer neuron j:

D_j = \sum_i (w_{ij} - x_i)^2   (E9)

Step 4: Find the index j for which D_j is minimum.

Step 5: Update the winner neuron's weights using the rule

w_{ij}(new) = w_{ij}(old) + \alpha [x_i - w_{ij}(old)]   (E10)

where x_i denotes the intensity (feature) values obtained from the input data set and α is the learning rate.

Step 6: Test the stopping condition, which is determined by the maximum number of iterations.

5.3 Performance measure of tumor segmentation using CSOM

See Table 4.

| Performance measure | Definition |
| --- | --- |
| Sensitivity | TP / (TP + FN) |
| Specificity | TN / (TN + FP) |
| Classification accuracy | (TP + TN) / (TP + FP + TN + FN) |

Table 4.

Sensitivity, specificity and classification accuracy.

where TP, TN, FP and FN are true positives, true negatives, false positives and false negatives respectively.
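These measures compute directly from the four counts; for instance (illustrative Python, checked against the Glioblastoma row of Table 6):

```python
def classification_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity and classification accuracy (Table 4)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Glioblastoma row of Table 6: TP = 59, TN = 197, FP = 23, FN = 23
sens, spec, acc = classification_metrics(59, 197, 23, 23)
# sens ~ 0.719, spec ~ 0.895, acc ~ 0.847 (84.7%)
```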

5.4 Result analysis of CSOM

Table 5 clearly shows that the classification accuracy is very low. The main reason is the lack of a standard convergence condition: since the weight vector is randomly initialized, a large number of iterations is required to stabilize the weight matrix. For the single-layer CSOM, the computational complexity of the training algorithm is as follows:

  1. Euclidean distance calculation

     Mathematical operations = 1 subtraction and 1 multiplication.

     Total number of operations = 2ap, where 'a' is the number of input neurons and 'p' the number of output neurons.

  2. Weight update between input layer and output layer

     Mathematical operations = 1 subtraction, 2 multiplications and 1 addition.

     Total number of operations = 4ap.

  3. Total mathematical operations, (1) and (2) together = 6ap.

  4. The convergence time of CSOM for the above computation is 650 CPU seconds.

| | Class I | Class II | Class III | Class IV |
| --- | --- | --- | --- | --- |
| Class I | 59 | 9 | 8 | 7 |
| Class II | 7 | 51 | 8 | 9 |
| Class III | 7 | 7 | 45 | 6 |
| Class IV | 8 | 6 | 6 | 59 |

Table 5.

Confusion matrix of CSOM.

Thus, from the above analysis, it can be inferred that although the CSOM structure is simple, the accuracy produced is low. As accuracy is the critical performance parameter, such a low value cannot be accepted. Hence, this paper proposes a modified self-organizing map to improve the accuracy.

5.5 Modified self-organizing map (MSOM) network

The main objective of MSOM is to reduce the number of iterations required to stabilize the weight matrix. Moreover, as shown in Table 6, the classification accuracy of CSOM is only around 85%. Hence, without changing the architecture of CSOM, the algorithm is modified to reduce the computational complexity and to increase the classification accuracy.

5.6 Modified training algorithm of MSOM

Instead of randomly initializing the weights, the weight matrix is closely matched to the input vector. The weights are determined using the modified condition

X − W = 0   (E11)

i.e. X = W, where X is the input vector and W is the weight matrix.

Thus, the minimum distance value can be obtained by making X = W, and by doing this the weights are stabilized with the minimum number of iterations. The closest match is determined by finding the weight vector that gives the minimum distance to the input. Another modification is the normalization of the input vectors. Normalization reduces measurements to a neutral or standard scale, so the normalized values vary only in magnitude within the range [0, 1]. Apart from these modifications, all other steps of the CSOM training algorithm in Section 5.2 remain the same. The testing process is the same as that of the conventional SOM: the test input is assigned to the class for which the corresponding output neuron value is minimum (Table 6).
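A minimal sketch of these two modifications, normalization to [0, 1] and direct weight assignment W = X, might look like this (hypothetical Python; here each class is represented by a single representative feature vector, which is an assumption made for illustration):

```python
import numpy as np

def normalize(x):
    """Scale a feature vector into [0, 1] (the normalization step)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def msom_fit(prototypes):
    """Non-iterative weight setting (E11): W = X, i.e. each class's
    weight vector is the normalized representative input itself."""
    return np.stack([normalize(p) for p in prototypes])

def msom_classify(w, x):
    """Assign a test input to the class with minimum distance."""
    d = np.sum((w - normalize(x)) ** 2, axis=1)
    return int(np.argmin(d))

prototypes = [np.array([1, 2, 3, 4, 5, 6, 7, 8]),   # hypothetical class 0 features
              np.array([8, 7, 6, 5, 4, 3, 2, 1])]   # hypothetical class 1 features
w = msom_fit(prototypes)
```

Because no weight updates are iterated, training reduces to a single assignment, which is the source of the convergence-time improvement claimed below.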

| | TP | TN | FP | FN | Sensitivity | Specificity | Classification accuracy (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Glioblastoma | 59 | 197 | 23 | 23 | 0.719 | 0.895 | 84.7 |
| Meningioma | 51 | 203 | 23 | 23 | 0.689 | 0.898 | 84.6 |
| Metastasis | 45 | 215 | 23 | 21 | 0.681 | 0.903 | 85.5 |
| No tumor | 59 | 199 | 23 | 21 | 0.740 | 0.896 | 85.4 |
| Average value | | | | | 0.707 | 0.898 | 85 |

Table 6.

Performance metrics for sensitivity, specificity and classification accuracy using CSOM.

5.7 Performance measure of tumor segmentation using MSOM

From Tables 7 and 8 it is clear that the accuracy of the modified SOM (96%) is higher than that of the conventional SOM. The reason for the improvement is that the weights obtained using MSOM are more stabilized than those of CSOM. Also, the time requirement for MSOM is reduced from 650 CPU seconds to about 1 CPU second. The main reason for the improvement in convergence rate is that the algorithm is free from iterations.

| | Class I | Class II | Class III | Class IV |
| --- | --- | --- | --- | --- |
| Class I | 76 | 2 | 2 | 3 |
| Class II | 2 | 69 | 3 | 3 |
| Class III | 3 | 2 | 58 | 2 |
| Class IV | 2 | 2 | 4 | 71 |

Table 7.

Confusion matrix of MSOM.

| | TP | TN | FP | FN | Sensitivity | Specificity | Classification accuracy (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Glioblastoma | 76 | 210 | 6 | 6 | 0.926 | 0.972 | 96 |
| Meningioma | 69 | 217 | 5 | 7 | 0.907 | 0.977 | 96 |
| Metastasis | 58 | 226 | 8 | 6 | 0.906 | 0.965 | 95 |
| No Tumor | 71 | 213 | 7 | 7 | 0.906 | 0.968 | 96 |
| Average value | | | | | 0.912 | 0.970 | 96 |

Table 8.

Performance metrics for sensitivity, specificity and accuracy using MSOM.


6. Proposed methodology for image segmentation using BPN

Figure 7 shows the proposed methodology for image segmentation. Back propagation network (BPN) based image segmentation is used to extract the abnormal portion of a brain tumor from an MR brain image. This method is mainly used for volumetric analysis, such as finding the size and area of the tumor, from which the after-effect of treatment can be determined. In this paper, conventional back propagation network (CBPN) results are compared with the modified back propagation network (MBPN) by testing both simulated and real-time images. The performance is measured by calculating segmentation efficiency (SE), correspondence ratio (CR) and convergence rate.

Figure 7.

Proposed methodology for image segmentation.
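The two segmentation measures used throughout this section can be stated compactly (illustrative Python, with values checked against Image 1 of Table 11):

```python
def segmentation_metrics(gt, tp, fp):
    """Segmentation efficiency SE = TP/GT (as a percentage) and
    correspondence ratio CR = (TP - 0.5*FP)/GT, as used in Tables 9-14."""
    se = 100.0 * tp / gt
    cr = (tp - 0.5 * fp) / gt
    return se, cr

# Image 1 of Table 11: GT = 1598, TP = 1338, FP = 41
se, cr = segmentation_metrics(1598, 1338, 41)
# se ~ 83.7 (%), cr ~ 0.824
```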

6.1 Conventional back propagation network (CBPN)

The pixel-based features of Table 3 are used for image segmentation to discover non-tumorous and tumorous regions. In this method, the input pixels (256 × 256) are divided into a training data set and a testing data set. The training dataset contains pixels from the GM, WM, CSF and tumor regions, with ground truth (GT) images used randomly for training. The BPN is trained with the training pixels, each represented by the eight features of Table 3, together with the target vectors, to determine the stabilized set of weights. Finally, the network is tested with the testing data and the category of every pixel is determined. The category of interest is assigned a value of 255 and the other categories a value of 0.

BPN is a supervised feed-forward neural network trained by the gradient descent rule. Input vectors and corresponding target vectors are used to train the network until it classifies input pixels into the GM, WM, CSF and tumor regions proposed in this paper. Figure 8 shows the architecture of CBPN: an 8-12-4 layered network. The target vector is supplied to the output layer; the target representations for the classes GM, WM, CSF and tumor region are [1 0 0 0], [0 1 0 0], [0 0 1 0] and [0 0 0 1] respectively. Two sets of weight matrices are determined at the end of the training procedure. Figures 9 and 10 show sample results of the conventional back propagation network (CBPN).

Figure 8.

Architecture of CBPN with 8 input and 4 output classes.

Figure 9.

Sample results of CBPN (a) original image (b) gray matter (GM) phantom (c) white matter (WM) phantom (d) CSF phantom (e) tumor phantom (f) GM segment (g) WM segment (h) CSF segment (i) tumor segment.

Figure 10.

Sample results of CBPN (a) input image (b) tumor segment (c) tumor phantom.

6.2 Training algorithm of CBPN

The training algorithm of CBPN involves three stages: feed-forward of the input signals, back propagation of the associated error, and adjustment of the weights. The algorithm is given as follows.

Step 1: Initialize the weights u_{ij} and v_{jk}.

Step 2: Repeat steps 3–9 for each training pair.

Feed forward:

Step 3: Each input unit receives the input signal and transmits the signals to the hidden layer neurons.

Step 4: Each hidden unit sums its weighted input signals:

z_inj = \sum_i x_i u_{ij}   (E12)

The activation function is then applied to z_inj to compute the output signal z_j:

z_j = f(z_inj)   (E13)

The activation function used in this work is the sigmoid function. The output signal is fed to the output layer neurons.

Step 5: Each output unit sums its weighted input signals:

y_ink = \sum_j z_j v_{jk}   (E14)

The activation function is then applied to y_ink to compute the output signal y_k:

y_k = f(y_ink)   (E15)

Back propagation of error

Step 6: Each output unit receives a target pattern t_k and computes its error information term

\delta_k = (t_k - y_k)\, f'(y_ink)   (E16)

From \delta_k, the weight correction term is calculated as

\Delta v_{jk} = \alpha\, \delta_k\, z_j   (E17)

Step 7: Each hidden unit computes its error information term

\delta_j = \sum_k \delta_k v_{jk}\, f'(z_inj)   (E18)

from which the weight correction is calculated as

\Delta u_{ij} = \alpha\, \delta_j\, x_i   (E19)

Weight update:

Step 8: Each output unit updates its weights by

v_{jk}(new) = v_{jk}(old) + \Delta v_{jk}   (E20)

and each hidden unit updates its weights by

u_{ij}(new) = u_{ij}(old) + \Delta u_{ij}   (E21)

Step 9: The training stops when the weight correction terms are equal to zero or reach a predefined minimum value.
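A compact NumPy sketch of steps 3–8 for the 8-12-4 network follows (hypothetical variable names; the sigmoid derivative f'(x) = f(x)(1 − f(x)) is used for both layers, and a single training pixel is iterated for illustration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 8, 12, 4                   # the 8-12-4 network of Figure 8
U = rng.normal(scale=0.5, size=(n_in, n_hid))   # Step 1: input->hidden weights u_ij
V = rng.normal(scale=0.5, size=(n_hid, n_out))  # Step 1: hidden->output weights v_jk

def train_pair(x, t, U, V, alpha=0.5):
    """One feed-forward / back-propagation pass (steps 3-8)."""
    z = sigmoid(x @ U)                          # E12, E13: hidden activations
    y = sigmoid(z @ V)                          # E14, E15: output activations
    delta_k = (t - y) * y * (1 - y)             # E16, with f'(y_in) = y(1 - y)
    delta_j = (delta_k @ V.T) * z * (1 - z)     # E18
    V += alpha * np.outer(z, delta_k)           # E17 + E20: output weight update
    U += alpha * np.outer(x, delta_j)           # E19 + E21: hidden weight update
    return y, U, V

x = rng.random(n_in)                            # one training pixel's 8 features
t = np.array([1.0, 0.0, 0.0, 0.0])              # target pattern for the GM class
for _ in range(500):                            # Steps 2 and 9: repeat until converged
    y, U, V = train_pair(x, t, U, V)
```

The many repeated passes needed here are exactly the iteration cost that the modified network in Section 6.5 removes.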

6.3 Quantitative analysis of CBPN

See Tables 9–11.

6.4 Result analysis of CBPN

The analysis presented in Tables 9–11 indicates that the convergence time requirement is significantly high, i.e. 1650 CPU seconds on a personal computer with an Intel™ Core 2 Duo 2.0 GHz processor and 3 GB RAM (32-bit version); MATLAB 8.0 is used for the brain tumor MR images of size 256 × 256. The dependence of CBPN on iterations is also a drawback, since it can lead to local minima. For real-time images the segmentation efficiency is low, around 83–85%, and the correspondence ratio is around 0.79 to 0.82. Hence there is scope for improving the convergence rate and accuracy to increase the efficiency of the entire system. These limitations of CBPN can be overcome using the modified back propagation network (MBPN).

| Input | Category | GT | TP | FP | SE = TP/GT (%) | CR = (TP − 0.5 × FP)/GT |
| --- | --- | --- | --- | --- | --- | --- |
| Severe stage image | WM | 11,778 | 9259 | 1570 | 78.61 | 0.719 |
| | GM | 10,944 | 3173 | 1286 | 28.99 | 0.231 |
| | CSF | 6505 | 3469 | 1918 | 53.32 | 0.385 |
| Moderate stage image | WM | 11,768 | 8850 | 1099 | 75.20 | 0.705 |
| | GM | 17,940 | 5452 | 663 | 30.39 | 0.285 |
| | CSF | 6501 | 3249 | 1047 | 49.97 | 0.419 |
| Mild stage image | WM | 11,775 | 10,497 | 2274 | 89.14 | 0.794 |
| | GM | 10,941 | 2772 | 1346 | 25.33 | 0.191 |
| | CSF | 6501 | 3308 | 2025 | 50.88 | 0.353 |

Table 9.

Simulated images (for non-tumorous region).

| Input image | GT pixels | TP pixels | FP pixels | SE (%) | CR |
| --- | --- | --- | --- | --- | --- |
| Severe stage image | 297 | 277 | 90 | 93 | 0.78 |
| Moderate stage image | 86 | 69 | 31 | 80 | 0.62 |
| Mild stage image | 21 | 9 | 9 | 43 | 0.22 |

Table 10.

Simulated images (tumorous region).

| Input image | GT pixels | TP pixels | FP pixels | SE (%) | CR |
| --- | --- | --- | --- | --- | --- |
| Image 1 | 1598 | 1338 | 41 | 83.7 | 0.824 |
| Image 2 | 3840 | 3259 | 522 | 84.8 | 0.780 |
| Image 3 | 5409 | 4536 | 531 | 83.8 | 0.789 |

Table 11.

Real-time images: (for tumorous region).

6.5 Modified back propagation network (MBPN)

The modified BPN is framed to overcome the computational complexity of CBPN without compromising the accuracy. The modification in the MBPN architecture is that the target vector is also given to the hidden layer, with a different representation, and the algorithm proceeds with a fixed error value. This reduces the number of iterations required to calculate the weights. Weight estimation is first done for the hidden layer weights and then for the output layer weights. With the fixed error value, the output values of the hidden layer are determined; from these, the net value is obtained by the inverse of the sigmoid function, and the weights are then determined from the input and the net value of the hidden layer.

Figure 11 shows the architecture of MBPN, an 8-12-4 layered network. As mentioned above, the target is supplied both to the hidden layer and to the output layer. The target representations for the hidden layer are [1 1 1 0 0 0 0 0 0 0 0 0], [0 0 0 1 1 1 0 0 0 0 0 0], [0 0 0 0 0 0 1 1 1 0 0 0] and [0 0 0 0 0 0 0 0 0 1 1 1]. Two sets of weight matrices are determined at the end of the training procedure. The weight values are obtained when the error value, i.e. target − output, is equal to zero or to a predefined minimum value. The error value intended for achieving convergence in this paper is 0.01.

Figure 11.

Architecture of MBPN with 8 input and 4 output classes.

6.6 Weight matrix calculation between input and hidden layer

Step 1: Since target − output = 0.01 at convergence, the output of the hidden layer neurons is calculated using

E = target − output;  z_j = t_j − 0.01    (22)

Step 2: Once the output value is known, the net value is obtained by inverting the sigmoid function:

z_j = f(z_in_j);  z_j = 1 / (1 + e^(−z_in_j));  z_in_j = ln(z_j / (1 − z_j))    (23)

Step 3: Based on the net values, the weight matrix for the hidden layer is calculated using

z_in_j = Σ_i x_i · u_ij    (24)

Step 4: The following equations are used to determine the weight matrices between hidden layer and output layer

y_k = t_k − 0.01;  y_in_k = ln(y_k / (1 − y_k));  y_in_k = Σ_j z_j · v_jk    (25)

Step 5: Training stops when the weight correction terms are equal to zero or reach a predefined minimum value.
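The five steps above amount to a closed-form weight estimation: fix each layer's outputs at target − 0.01, invert the sigmoid to recover the net values, then solve the resulting linear systems for the weights. A minimal sketch under those assumptions (the function names and the least-squares solver are my own; the chapter does not specify how the linear systems are solved):

```python
import numpy as np

def inverse_sigmoid(z):
    # z_in = ln(z / (1 - z)), the inverse of the logistic activation (Eq. 23)
    return np.log(z / (1.0 - z))

def mbpn_weights(X, T_hidden, T_out, eps=0.01):
    """One-shot MBPN weight estimation (Steps 1-4).

    X        : (n_samples, 8)  input feature vectors
    T_hidden : (n_samples, 12) hidden-layer targets
    T_out    : (n_samples, 4)  output-layer targets
    eps      : fixed error, target - output = 0.01
    """
    # Step 1: fix the hidden outputs at z_j = t_j - eps
    # (clipped away from 0 and 1 so the inverse sigmoid stays finite)
    Z = np.clip(T_hidden - eps, 1e-6, 1.0 - 1e-6)
    # Step 2: recover the net values by inverting the sigmoid (Eq. 23)
    Z_in = inverse_sigmoid(Z)
    # Step 3: solve X @ U = Z_in for the input-to-hidden weights U (Eq. 24)
    U = np.linalg.lstsq(X, Z_in, rcond=None)[0]
    # Step 4: repeat for the hidden-to-output weights V (Eq. 25)
    Y = np.clip(T_out - eps, 1e-6, 1.0 - 1e-6)
    V = np.linalg.lstsq(Z, inverse_sigmoid(Y), rcond=None)[0]
    return U, V
```

Because the weights come from solving linear systems rather than iterative gradient descent, a forward pass through the fitted network reproduces target − 0.01 directly, which is what removes the long iteration counts of CBPN.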

6.7 Quantitative analysis of MBPN

See Tables 12–14.

| Input image | Category | GT | TP | FP | SE (%) | CR |
|---|---|---|---|---|---|---|
| Severe stage image | WM | 11,778 | 10,501 | 3572 | 89.15 | 0.739 |
| | GM | 10,944 | 3395 | 1126 | 31.02 | 0.258 |
| | CSF | 6505 | 2918 | 524 | 44.85 | 0.408 |
| Moderate stage image | WM | 11,768 | 10,068 | 3186 | 85.55 | 0.7201 |
| | GM | 17,940 | 3979 | 1938 | 22.17 | 0.1677 |
| | CSF | 6501 | 3826 | 1149 | 58.85 | 0.50 |
| Mild stage image | WM | 11,775 | 11,058 | 58 | 93.91 | 0.936 |
| | GM | 10,941 | 4282 | 3084 | 39.13 | 0.250 |
| | CSF | 6501 | 3348 | 1332 | 51.49 | 0.412 |

Table 12.

Simulated images: (for non-tumorous region).

| Input image | GT pixels | TP pixels | FP pixels | SE (%) | CR |
|---|---|---|---|---|---|
| Severe stage image | 297 | 288 | 52 | 96 | 0.88 |
| Moderate stage image | 86 | 75 | 20 | 87 | 0.75 |
| Mild stage image | 21 | 15 | 9 | 71 | 0.55 |

Table 13.

Simulated images (tumorous region).

| Input image | GT pixels | TP pixels | FP pixels | SE (%) | CR |
|---|---|---|---|---|---|
| Image 1 | 1598 | 1416 | 36 | 88.6 | 0.874 |
| Image 2 | 3840 | 3578 | 370 | 93.1 | 0.883 |
| Image 3 | 5409 | 5018 | 465 | 92.7 | 0.884 |

Table 14.

Real-time images (for tumorous region).

6.8 Result analysis of MBPN

Figure 11 shows the architecture of MBPN, in which the target is supplied to both the hidden layer and the output layer. This reduces the number of iterations, since stabilized weight values are reached much earlier than in the conventional network. As a result, the convergence time of MBPN is less than 1 CPU second, far better than the 650 CPU seconds of CBPN, while the accuracy measures are also better than those of CBPN. Because MBPN does not depend on iterative weight updates, it is immune to local minima. CBPN achieves 83% segmentation efficiency and MBPN 91%; this 8% improvement indicates that the proposed network successfully segments tumors of varying size and shape. Figures 12 and 13 show a clear improvement in identifying the GM, WM, CSF and tumor segments.
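The SE and CR columns of Tables 11 and 14 can be reproduced from the pixel counts. The chapter does not state the formulas explicitly, so the definitions below are inferred from the tabulated values: SE = TP/GT × 100 (segmentation efficiency) and CR = (TP − FP/2)/GT (the usual correspondence ratio); the tables appear to truncate, rather than round, the results.

```python
import math

def truncate(x, nd):
    # The tables truncate (rather than round) to nd decimal places.
    f = 10 ** nd
    return math.floor(x * f) / f

def segmentation_metrics(gt, tp, fp):
    """SE (%) and CR from ground-truth, true-positive and false-positive
    pixel counts; definitions inferred from the tabulated values."""
    se = 100.0 * tp / gt          # segmentation efficiency
    cr = (tp - 0.5 * fp) / gt     # correspondence ratio
    return truncate(se, 1), truncate(cr, 3)

# Image 1 with CBPN (Table 11): GT = 1598, TP = 1338, FP = 41
print(segmentation_metrics(1598, 1338, 41))  # -> (83.7, 0.824)
```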

Figure 12.

Sample results of MBPN (a) original image (b) gray matter (GM) phantom (c) white matter (WM) phantom (d) CSF phantom (e) tumor phantom (f) GM segment (g) WM segment (h) CSF segment (i) tumor segment.

Figure 13.

Sample results of MBPN (a) input image (b) tumor segment (c) tumor phantom.


7. Comparison between conventional and modified methods

Tables 15 and 16 clearly show the advantage of the proposed work over the conventional neural networks, demonstrating that the modified approach is a suitable replacement for the conventional methods and improves accuracy. Table 17 compares the performance of the proposed approach with similar works in the literature. Since the images are real-time images collected from a local diagnostic centre, they have not been used by other researchers.

| Class | CSOM Sensitivity | CSOM Specificity | CSOM Accuracy (%) | MSOM Sensitivity | MSOM Specificity | MSOM Accuracy (%) |
|---|---|---|---|---|---|---|
| Glioblastoma | 0.719 | 0.895 | 84.7 | 0.926 | 0.972 | 96 |
| Meningioma | 0.689 | 0.898 | 84.6 | 0.907 | 0.977 | 96 |
| Metastasis | 0.681 | 0.903 | 85.5 | 0.906 | 0.965 | 95 |
| No Tumor | 0.74 | 0.896 | 85.4 | 0.906 | 0.968 | 96 |

Table 15.

Comparison of CSOM and MSOM.

| Input image | Ground truth pixels | CBPN correctly classified pixels | CBPN SE (%) | MBPN correctly classified pixels | MBPN SE (%) | Improvement of MBPN over CBPN (%) |
|---|---|---|---|---|---|---|
| Severe stage image | 1598 | 1338 | 83.7 | 1416 | 88.6 | 5 |
| Moderate stage image | 3840 | 3259 | 84.8 | 3578 | 93.1 | 8.3 |
| Mild stage image | 5409 | 4536 | 83.8 | 5018 | 92.7 | 8.9 |

Table 16.

Comparison of CBPN and MBPN.

| Reference No. | Method used | Accuracy (%) |
|---|---|---|
| [17] | Particle Swarm Optimization (PSO) + feed-forward neural network | 95 |
| [17] | PSO + modified counter propagation neural network | 89 |
| [18] | GA + adaptive resonance theory neural network | 85 |
| [19] | PSO + Kohonen neural network | 86 |
| Proposed approach | Modified self-organizing map + modified back propagation neural network | 96 |

Table 17.

Comparative analysis with other methods.


8. Conclusion and future work

Design and analysis of a modified self-organizing map and a modified back propagation neural network for MR brain images are proposed and tested successfully in this research work. The proposed technique eliminates the limitations of conventional SOM and conventional BPN by reducing the number of iterations required to stabilize the weights. Using the modified SOM and modified BPN, a classification accuracy of 96% (a 10% increase) and an average segmentation efficiency of 93% (an 8% improvement) are achieved. The proposed technique gives superior results for tumors of different sizes and shapes, which will aid doctors and physicians in accurately diagnosing brain tumors and in early treatment planning, increasing the survival rate of patients. As future work, different feature sets can be used to train the neural network, and suitable modifications to the architecture and algorithm can further increase the accuracy of the system. Appropriate feature selection and modification could also make the proposed approach applicable to detecting other chronic diseases.


Acknowledgments

The authors would like to thank Dr. D. Jagadeeswara Reddy, MBBS, M.Ch. (NIMS), consultant neurosurgeon (Regd. No. 51723), Tirupathi, Andhra Pradesh, India for providing the necessary guidance and help in the analysis of the proposed work. They also thank the Centre for Diagnostic Imaging, Tirupathi for providing the brain MR images and for result validation.


Conflict of interest

The authors declare that they have no potential conflict of interest with regard to the work presented.

Informed consent

Informed consent was obtained from all the individual participants in the research.

Permissions

Permission to use the images/materials included in this study has been obtained from the institution named “Centre for Diagnostic Imaging, Tirupathi, Andhra Pradesh, India”.

References

  1. Heba M et al. Classification using deep learning neural networks for brain tumors. Future Computing and Informatics Journal. 2018;3:68-71
  2. El-Dahshan E-SA, Hosny T, Salem A-BM. Hybrid intelligent techniques for MRI brain images classification. Digital Signal Processing. 2010;20(2):433-441
  3. Zacharaki EI, Wang S, Chawla S, Yoo DS, Wolf R, Melhem ER, et al. Classification of brain tumor type and grade using MRI texture in a machine learning technique. Magnetic Resonance in Medicine. 2009;62:1609-1618
  4. Eli G et al. NiftyNet: A deep-learning platform for medical imaging. Computer Methods and Programs in Biomedicine. 2018;158:113-122
  5. Michael S, Gregory F. Using a model of human visual perception to improve deep learning. Neural Networks. 2018;104:40-49
  6. Ling-Li Z et al. Multi-site diagnostic classification of schizophrenia using discriminant deep learning with functional connectivity MRI. EBioMedicine. 2018;30:74-85
  7. Chensi C et al. Deep learning and its application in biomedicine. Genomics, Proteomics & Bioinformatics. 2018;16:17-32
  8. Yazhou K et al. Classification of autism spectrum disorder by combining brain connectivity and deep neural network classifier. Neurocomputing. 2018;26:36-52. DOI: 10.1016/j.neucom.2018.04.080
  9. Hongmin L, Guoqi L, Xiangyang J, Luping S. Deep representation via convolutional neural network for classification of spatio-temporal event streams. Neurocomputing. 2018;299:1-9
  10. Usman K, Rajpoot. Brain tumor classification from multi-modality MRI using wavelets and machine learning. Pattern Analysis and Applications. 2017;20:871-881. DOI: 10.1007/s10044-017-0597-8
  11. Aneja D, Rawat TK. Fuzzy clustering algorithms for effective image segmentation. International Journal of Intelligent Systems and Applications. 2013;11:55-61
  12. Zhao F, Liu H, Fan J. A multi-objective spatial fuzzy clustering algorithm for image segmentation. Journal of Applied Soft Computing. 2017;30:48-57
  13. Kumar SS, Moorthi M, Madh M, Amutha R. An improved method of segmentation using fuzzy-neuro logic. In: Proc. Second International Conference on Computer Research and Development. Kuala Lumpur: IEEE; 2010. pp. 671-675
  14. Wang B, Yang F, Chao S. Image segmentation algorithm based on high dimension fuzzy character and restrained clustering network. IEEE Transactions System Engineering Electronics. 2014;25(2):298-306
  15. Kong Y, Deng Y, Dai Q. Discriminative clustering and feature selection for brain MRI segmentation. IEEE Signal Processing Letters. 2015;22(5):573-577
  16. Damodharan S, Raghavan. Combining tissue segmentation and neural network for brain detection. International Arab Journal of Information Technology (IAJIT). 2015;12:42-52
  17. Hemanth DJ, Vijila CKS, Anitha J. Performance improved PSO based modified counter propagation neural network for abnormal MR brain image classification. International Journal of Advances in Soft Computing and its Applications. 2010;2(1):65-84
  18. Hemanth DJ, Selvathi D, Anitha J. Application of ART neural network for abnormal MR brain image classification. International Journal of Health Information Sciences and Informatics. 2010;5(1):61-75
  19. Hemanth DJ, Vallentina EB, Anitha J. Performance improved hybrid intelligent system for medical image classification. Proceedings of 7th Balkan Conference on Informatics. 2015;5:29-45. DOI: 10.1145/2801081.280109

Written By

G. Sethuram Rao and D. Vydeki

Submitted: 09 May 2022 Reviewed: 17 June 2022 Published: 09 September 2022