Open access peer-reviewed chapter

A Hybrid Image Fusion Algorithm for Medical Applications

Written By

Appari Geetha Devi, Surya Prasada Rao Borra and Kalapala Vidya Sagar

Submitted: 30 September 2020 Reviewed: 02 March 2021 Published: 13 April 2021

DOI: 10.5772/intechopen.96974

From the Edited Volume

Multimedia Information Retrieval

Edited by Eduardo Quevedo


Abstract

The main objective of medical imaging is to obtain a highly informative image for better diagnosis. In many cases, a single modality of medical imaging cannot offer accurate and complete information. In brain imaging, a Magnetic Resonance Imaging (MRI) image shows structural information of the brain without any functional information, whereas a Computed Tomography (CT) image describes functional information of the brain but with low spatial resolution, especially with a low dose CT scan, which is useful to reduce the radiation impact on the human body. In the field of diagnosis, image fusion plays a very vital role. Fusing the CT and MRI images provides complete information about both the soft and hard tissues of the human body. This chapter proposes a two-stage hybrid fusion algorithm. The first stage deals with the enhancement of a low dose CT scan image using different image enhancement techniques, viz. Histogram Equalization and Adaptive Histogram Equalization. In the second stage, the enhanced low dose CT scan image is fused with the MRI image using different fusion algorithms, viz. Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA). The proposed algorithm has been evaluated and compared using different quality metrics.

Keywords

  • Image fusion
  • Image Enhancement
  • MRI Imaging
  • Low dose CT
  • DWT
  • PCA

1. Introduction

In medical imaging, different modalities reflect different details of human organs and tissues. For example, Magnetic Resonance Imaging (MRI) provides low density soft tissues such as blood vessels, whereas Computed Tomography (CT) provides clear detail about bone tissue and also provides a reference for the location of a lesion [1]. As is known, dose reduction lowers the radiation exposure risks, but at the same time decreases the image quality. By its nature, CT involves larger radiation doses than the more common, conventional x-ray imaging procedures [2]. We briefly discuss the nature of CT scanning and its main clinical applications, both in symptomatic patients and in the screening of asymptomatic patients. We focus on the increasing number of CT scans being obtained, the associated radiation doses, and the consequent cancer risks in adults and particularly in children [3]. Although the risks for any one person are not large, the increasing exposure to radiation in the population may be a public health issue in the future. The use of CT has increased rapidly since the 1980s; recent surveys show that more than 62 million CT scans are currently obtained every year in the United States, as compared with about 3 million in the 1980s. The largest uses of CT, however, are in pediatric diagnosis and adult screening, and these trends are expected to continue for the next few years [4]. The rise in the use of CT in children has been driven primarily by the decrease in the time required to scan, now less than a second, and also by the elimination of the need for anesthesia to prevent the child from moving during image acquisition. The foremost growth area in the use of CT for children has been the presurgical diagnosis of appendicitis, for which CT appears to be both accurate and cost-effective.

The radiation doses from CT scanning are considerably larger than those from corresponding conventional radiography. Michael F. McNitt-Gray [3] discussed that the radiation dose to a particular organ from any given CT scan depends on a number of factors, such as the number of scans, the tube current and scanning time in milliampere-seconds (mAs), the size of the patient, the axial scan range, the scan pitch (the degree of overlap between adjacent CT slices), the tube voltage in kilovolt peaks (kVp), and the specific design of the scanner being used. Patient dosimetry and evaluation of image quality are basic aspects of any quality control program in diagnostic radiology. Image quality must be adequate for diagnosis and obtained with reasonable patient doses [5]. As per the recommendations of the International Commission on Radiological Protection, no dose limit applies to medical exposure of patients, but diagnostic reference levels or reference values have been proposed [6]. Thomas Lehnert et al. noted that the relative noise in CT images increases as the radiation dose decreases, which means that there will always be a tradeoff between the need for low-noise images and the desirability of using low doses of radiation [4]. A low dose CT scan image usually suffers from serious noise and artifacts when analytical reconstruction methods are used. It is always preferable to have standard imaging techniques that diminish the patient dose while keeping reasonable image quality [7]. As part of implementation efforts, an important clinical requirement has been that low-dose CT (LDCT) images need to be improved before being used in Electronic Health Records (EHR). Khalid et al. proposed an enhanced dynamic quadrant histogram equalization for image contrast enhancement, in which the input image histogram is divided into 8 sub-histograms using median values. Each sub-histogram is clipped at the average pixel count, a new dynamic range is assigned to each sub-histogram, and HE is applied separately. This approach preserves the mean brightness [8]. As there is no guarantee that the contrast will always be increased by histogram equalization [1], Adaptive Histogram Equalization has been applied to the low dose CT scan image to improve the contrast.

This chapter gives a comparative study of the performance of image fusion techniques. The organization of this chapter is as follows: Section 2 explains the image enhancement techniques. The principles of the PCA and DWT image fusion techniques are discussed in Section 3. In Section 4, fusion performance assessment techniques are explained. In Section 5, the results of fused images for two different data sets, obtained by applying PCA and DWT to medical images in MATLAB, are compared.


2. Image enhancement

The goal of image enhancement is to improve the visual quality of the entire image or to enhance certain information in accordance with specific needs [9].

2.1 Histogram equalization

Histogram equalization is a global processing technique used to spread the pixel values over the dynamic range of the image; the equalized histogram should be approximately uniformly distributed over that dynamic range [10]. It is a distribution function transformation method based on histogram modification.

Characteristics of Histogram of a digital image:

  1. The histogram reflects only the frequency of pixels at each grey level in the image; it does not reflect the position of each pixel.

  2. If an image is divided into non-overlapping sub-sections, the histogram of the whole image is the sum of the histograms of those sub-sections.

There is no guarantee that the contrast will always be increased by histogram equalization; there are cases in which histogram equalization makes an image worse, and the contrast may even be decreased. In general, ordinary histogram equalization uses the same transformation, derived from the image histogram, to transform all pixels. This works well when the distribution of pixel values is similar throughout the image [11]. However, when the image contains regions that are significantly lighter or darker than most of the image, the contrast in those regions will not be sufficiently increased. Adaptive histogram equalization (AHE) improves on this aspect by transforming each pixel with a transformation function derived from its neighbourhood region.
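To make the global transformation concrete, the following is a minimal MATLAB sketch of histogram equalization through the cumulative distribution function of the grey levels; the input file name is only a placeholder for an 8-bit low dose CT slice and is not part of the chapter's data.

I = imread('ct_lowdose.png');            % placeholder file name for a low dose CT slice
if size(I,3) == 3, I = rgb2gray(I); end  % ensure a grayscale image
counts = imhist(I, 256);                 % histogram over the 256 grey levels
cdf = cumsum(counts) / numel(I);         % cumulative distribution function of grey levels
map = uint8(round(255 * cdf));           % global mapping T(r), proportional to the CDF
J = map(double(I) + 1);                  % the same mapping is applied to every pixel
figure; imshowpair(I, J, 'montage');     % original versus equalized image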

2.2 Adaptive histogram equalization

Adaptive histogram equalization (AHE) is a computer image processing technique used to improve the contrast in images. It differs from ordinary histogram equalization in that the adaptive method computes several histograms, each corresponding to a distinct section of the image, and uses them to redistribute the brightness values of the image [12]. In its simplest form, each pixel is transformed based on the histogram of a square region surrounding that pixel. The transformation functions derived from the histograms are exactly the same as for ordinary histogram equalization: the transformation function is proportional to the cumulative distribution function (CDF) of pixel values in the neighbourhood. Pixels near the image boundary have to be treated specially, because their neighbourhood would not lie completely within the image [13]. AHE is therefore suitable for improving the local contrast and enhancing the definitions of edges in each region of an image. However, AHE has a tendency to over-amplify noise in relatively homogeneous regions of an image. Properties of adaptive histogram equalization:

  • The size of the neighbourhood region is a parameter of the method. It improves the contrast at smaller scales and reduces the contrast at larger scales.

  • Due to the nature of histogram equalization, the resulting value of a pixel under AHE is proportional to its rank among the pixels in its neighbourhood. This permits an efficient hardware implementation that compares the centre pixel with all other pixels in the neighbourhood [3]. An unnormalized result value can be computed by adding two for every pixel with a smaller value than the centre pixel and adding one for every pixel with an equal value.

  • When the region containing a pixel’s neighbourhood is fairly uniform, its histogram will be strongly peaked, and the transformation function will map a narrow range of pixel values to the whole range of the resulting image [14]. This causes AHE to over-amplify the small amounts of noise present in largely uniform regions of the image [4]. A minimal enhancement sketch follows this list.
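The sketch below, a minimal MATLAB example using the Image Processing Toolbox, applies global histogram equalization (histeq) and the tile-based adaptive variant (adapthisteq, a contrast-limited form of AHE) to a low dose CT slice; the file name, tile grid and clip limit are illustrative assumptions rather than values prescribed in this chapter.

I = imread('ct_lowdose.png');                % placeholder low dose CT slice
if size(I,3) == 3, I = rgb2gray(I); end      % ensure a grayscale image
Jhe  = histeq(I);                            % global histogram equalization
Jahe = adapthisteq(I, 'NumTiles', [8 8], ... % local histograms computed on an 8 x 8 tile grid
                   'ClipLimit', 0.01);       % clipping limits noise amplification in flat regions
montage({I, Jhe, Jahe}, 'Size', [1 3]);      % original, HE result, AHE result side by side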


3. Image fusion

3.1 DWT image fusion

The image fusion process is used to combine two or more images into a single image. The resulting fused image is more informative than the individual source images. The wavelet transform is a mathematical tool that can be used to detect local features in a signal. It can also be used to decompose two-dimensional (2D) signals, such as 2D grayscale image signals, into different resolution levels for multiresolution analysis. The wavelet transform has been widely used in many areas, such as data compression, texture analysis, feature detection, and image fusion.

Wavelet transforms offer a framework in which an image is decomposed, with each level corresponding to a lower-resolution band and higher-frequency bands. The DWT is a spatial-frequency decomposition that provides a flexible multiresolution analysis of an image. In general, the basic idea of image fusion based on the wavelet transform is to perform a multiresolution decomposition on each source image; the coefficients of both the low-frequency band and the high-frequency bands are then combined with a certain fusion rule [13]. The most widely used fusion rule is the maximum selection rule. This simple scheme selects the largest absolute wavelet coefficient at each location from the input images as the coefficient at that location in the fused image [15]. After that, the fused image is obtained by performing the inverse DWT (IDWT) on the corresponding combined wavelet coefficients. The detailed fusion steps based on the wavelet transform can be summarized as follows.

Step 1. The images to be fused must be registered to ensure that the corresponding pixels are aligned.

Step 2. These images are decomposed into wavelet-transformed images, respectively, based on the wavelet transformation. The transformed images with K-level decomposition include one low-frequency portion (low-low band) and three high-frequency portions (low-high, high-low, and high-high bands).

Step 3. The transform coefficients of the different portions or bands are combined with a certain fusion rule.

Step 4. The fused image is constructed by performing an inverse wavelet transform based on the combined transform coefficients from Step 3 [16].

The overall fusion process goes through preprocessing and image registration, followed by wavelet decomposition. The input images must be of the same size for fusion. For easy computation and to abstract the data, a colour image has to be converted into a grayscale image. Histogram normalization provides the tonal distribution of the whole image. The preprocessed images are split into four frequency sub-bands, namely LL, LH, HL and HH. A general fusion rule is to select the coefficients whose values are higher, so that the more dominant features at each scale are preserved in the new multiresolution representation [17]. The fused image is constructed by performing an inverse wavelet transformation. The main objective of image fusion is to combine complementary, as well as redundant, information from multiple images to produce one image that provides a more complete and accurate description. This fused image is more suitable for human vision, machine perception, or further image processing and analysis tasks. Another advantage of image fusion is that it decreases the storage space and cost by storing only the single fused image instead of the different modality images [14]. In the area of medical imaging, combining the images of different modalities of the same scene offers numerous benefits: fusing images taken at different spatial resolutions, intensities and by different methods helps medical practitioners/radiologists to easily extract or recognize features or abnormalities that may not normally be visible in a single image [18] (Figure 1).

Figure 1.

Fusion Process using Wavelet transforms.
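As a concrete illustration of the preprocessing and decomposition stage shown in Figure 1, the following minimal MATLAB sketch (Wavelet Toolbox assumed; the file names and the 'db2' wavelet are illustrative choices, not the chapter's actual data) converts both source images to grayscale, brings them to a common size, and performs a one-level DWT into the LL, LH, HL and HH sub-bands.

mri = imread('mri_set1.png');                    % placeholder MRI image of data set 1
ct  = imread('ct_set1_enhanced.png');            % placeholder enhanced low dose CT image
if size(mri,3) == 3, mri = rgb2gray(mri); end    % colour to grayscale, if needed
if size(ct,3)  == 3, ct  = rgb2gray(ct);  end
mri = im2double(mri);
ct  = im2double(imresize(ct, size(mri)));        % assumes registered images; resample to the same size
[LL1, LH1, HL1, HH1] = dwt2(mri, 'db2');         % approximation (LL) and detail sub-bands of the MRI
[LL2, LH2, HL2, HH2] = dwt2(ct,  'db2');         % corresponding sub-bands of the CT image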

3.1.1 Simple averaging rule

In the transform-based fusion algorithm, a simple “averaging rule” is adopted to fuse the low-frequency coefficients. Low-frequency coefficients contain the outline information of the image rather than specific major details, and therefore an averaging technique is applied to produce the composite low-frequency coefficients [18]. The computation is performed as follows:

F(x, y) = \frac{F_1(x, y) + F_2(x, y)}{2}   (1)

where F(x, y) denotes the low-frequency coefficients of the fused image I_F, and F_1(x, y) and F_2(x, y) are the low-frequency coefficients of the source images.

3.1.2 Maximum selection rule

The maximum selection rule is used for the high-frequency coefficients. The wavelet coefficients of the two images are compared and the coefficient with the maximum value is selected for the fusion process, as shown in Eq. (2):

W(x, y) = \begin{cases} W_1(x, y), & \text{if } I_1(x, y) > I_2(x, y) \\ W_2(x, y), & \text{if } I_1(x, y) < I_2(x, y) \end{cases}   (2)

W_1(x, y) – wavelet coefficient of image 1.

W_2(x, y) – wavelet coefficient of image 2.
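Putting the two rules together, the sketch below (MATLAB, Wavelet Toolbox assumed; the function name and wavelet are chosen only for illustration) fuses a pair of registered grayscale images by averaging the low-frequency (LL) coefficients as in Eq. (1) and, for each detail band, keeping the coefficient with the larger absolute value, a common way of applying the maximum selection rule of Eq. (2). With the variables from the earlier decomposition sketch, F = dwt_fuse(mri, ct, 'db2') would produce the fused image.

function F = dwt_fuse(I1, I2, wname)
    % I1, I2: registered grayscale images of equal size, double precision
    [A1, H1, V1, D1] = dwt2(I1, wname);           % one-level decomposition of each source
    [A2, H2, V2, D2] = dwt2(I2, wname);
    A = (A1 + A2) / 2;                            % Eq. (1): average the low-frequency (LL) band
    H = pick(H1, H2);                             % Eq. (2): maximum selection on the detail bands
    V = pick(V1, V2);
    D = pick(D1, D2);
    F = idwt2(A, H, V, D, wname);                 % inverse DWT reconstructs the fused image
end

function C = pick(C1, C2)
    % keep the coefficient with the larger absolute value at each location
    m = abs(C1) >= abs(C2);
    C = C1 .* m + C2 .* ~m;
end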

3.2 Principal component analysis

Principal Component Analysis (PCA) aims at reducing a large set of variables into a small set that still contains most of the information present in the large set. As medical image data is large, the PCA method is important for reducing this data. Principal component analysis allows us to build and use a reduced set of variables, which are called principal vectors. A reduced set is much easier to analyse and interpret. The most straightforward way to build a fused image from several input images is to perform the fusion as a weighted superposition of all input images [14]. The optimal weighting coefficients, with respect to information content and redundancy removal, can be determined by a principal component analysis (PCA) of all input intensities. By computing the PCA of the covariance matrix of the input intensities, the weights for each input image are obtained from the eigenvector corresponding to the largest eigenvalue. PCA is the simplest of the true eigenvector-based statistical procedures. Often, its operation can be thought of as revealing the internal structure of the data in a way that best explains the variance in the data [19]. If a multivariate data set is visualized as a set of coordinates in a high-dimensional data space (one axis per variable), PCA can provide the user with a lower-dimensional picture, a “shadow” of this object when viewed from its most informative viewpoint. This is done by using only the first few principal components so that the dimensionality of the transformed data is reduced. The number of principal components is less than or equal to the number of original variables [20].

PCA Algorithm:

  • Arrange the data into column vectors and determine the empirical mean of every column.

  • Subtract the empirical mean vector from each column of the data matrix X.

  • Compute the covariance matrix C of X, i.e. C = XXᵀ, whose expectation equals cov(X).

  • Compute the eigenvectors V and eigenvalues D of C and sort them by decreasing eigenvalue.

  • Consider the first column of V, which corresponds to the larger eigenvalue, to compute P1 and P2 as

  • P1 = V(1)/ΣV and P2 = V(2)/ΣV

The input images to be fused, I1(x, y) and I2(x, y), are arranged in two column vectors and their empirical means are subtracted. From the resulting vectors, the eigenvectors and eigenvalues are computed, and the eigenvector corresponding to the larger eigenvalue is selected. The normalized components P1 and P2 (i.e., P1 + P2 = 1) are computed from this eigenvector. The fused image is

I_F(x, y) = P_1 I_1(x, y) + P_2 I_2(x, y)   (3)

Where P1 and P2 are the principal components.
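A minimal MATLAB sketch of these steps is given below, assuming two registered grayscale images of equal size in double precision; the function name is illustrative and not part of the chapter. With the images prepared as in the earlier sketches, a call such as F = pca_fuse(mri, ct) would yield the PCA-fused image.

function F = pca_fuse(I1, I2)
    X = [I1(:) I2(:)];              % arrange the two images as column vectors
    X = X - mean(X, 1);             % subtract the empirical mean of each column
    C = cov(X);                     % 2 x 2 covariance matrix of the input intensities
    [V, D] = eig(C);                % eigenvectors V and eigenvalues D
    [~, k] = max(diag(D));          % eigenvector belonging to the largest eigenvalue
    w = V(:, k) / sum(V(:, k));     % normalized weights P1, P2 with P1 + P2 = 1
    F = w(1) * I1 + w(2) * I2;      % Eq. (3): weighted superposition of the inputs
end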


4. Performance analysis

In this section, the outcome of the fusion transformation is evaluated with different parameters, both quantitatively and qualitatively, and the results are compared with those of other algorithms to check the efficiency of the hybrid algorithm. Some of the quantitative parameters are listed below:

Entropy: Entropy is a measure of the information content in an image. An image with high information will have high entropy.

H = -\sum_{i=0}^{L-1} p_i \log_2 p_i   (4)

where L is the number of grey levels in the image and p_i is the probability of occurrence of the i-th grey level.

Standard Deviation: Standard deviation is used to measure the contrast in the fused image. Since it reflects both signal and noise, an image with more information will have a higher standard deviation.

\sigma = \sqrt{\sum_{i=0}^{L-1} (i - \bar{i})^2 \, h_{I_f}(i)}   (5)

where h_{I_f}(i) is the normalized histogram of the fused image and L is the number of grey levels in the image.

Mean Squared Error:

MSE = \frac{1}{M \times N} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \left( R(i, j) - F(i, j) \right)^2   (6)

Root Mean Square Error (RMSE): The error between fused image F and reference image R is given by,

RMSE = \sqrt{\frac{1}{M \times N} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \left( R(i, j) - F(i, j) \right)^2}   (7)

where R is the reference image, F is the fused image, and M × N is the image size.

Peak Signal-to-Noise Ratio (PSNR):

PSNR is the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the fidelity of its representation.

The PSNR measure is given by

PSNR = 10 \log_{10} \left( \frac{(L-1)^2}{MSE} \right)   (8)

The higher the PSNR value, the better the fusion process.
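Assuming a fused image F and a reference image R of equal size and class, the metrics above can be computed in MATLAB roughly as in the sketch below; entropy, std2, immse and psnr are Image Processing Toolbox functions, and psnr corresponds to Eq. (8) with (L − 1) taken as the peak value of the image data type.

E    = entropy(F);                        % Eq. (4): entropy of the fused image
S    = std2(F);                           % Eq. (5): standard deviation (contrast) of the fused image
mse  = immse(F, R);                       % Eq. (6): mean squared error against the reference
rmse = sqrt(mse);                         % Eq. (7): root mean squared error
p    = psnr(F, R);                        % Eq. (8): peak signal-to-noise ratio in dB
fprintf('Entropy %.4f  Std %.4f  MSE %.4f  RMSE %.4f  PSNR %.2f dB\n', E, S, mse, rmse, p);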


5. Results and discussion

The proposed algorithms are tested and compared with different fusion techniques. The first testing data set consists of two medical modality images, CT and MRI, each of size 480 × 403. The original MRI image of set 1 is shown in Figure 2(a) and the CT image of set 1 is shown in Figure 2(b).

Figure 2.

Data set-1 of the brain. (a) MRI scan image. (b) CT scan image.

Figure 3 shows the image resulting from the DWT simple averaging fusion technique. The DWT maximum selection rule is applied to data set 1 and the resulting image is shown in Figure 4, while Figure 5 shows the image obtained from the PCA fusion method.

Figure 3.

Fused image of data set 1 in DWT simple averaging method.

Figure 4.

Fused image of data set 1 in DWT maximum selection rule method.

Figure 5.

Fused image of data set-1 in PCA.

Table 1 shows the values of the different quality parametric measures, namely Entropy, Standard Deviation, Mean Squared Error, Root Mean Squared Error and PSNR, for the various fusion algorithms. The values for the proposed PCA method are better than those of the other methods with respect to these quality parametric measures.

Method | Entropy | Standard Deviation | MSE | RMSE | PSNR (dB)
CT | 4.5208 | 74.0343 | – | – | –
MRI | 5.6829 | 73.4328 | – | – | –
DWT Simple Average | 5.8438 | 71.2463 | 91.2320 | 9.5515 | 47.3835
DWT Maximum Selection Rule | 6.2348 | 69.3433 | 94.0791 | 9.6994 | 53.3932
PCA | 7.0439 | 67.3869 | 95.2059 | 9.7573 | 58.6465
Existing [20] | 6.9253 | 66.8564 | 96.6523 | 9.3254 | 56.3254

Table 1.

Comparison parameters of the output images of fusion algorithm of Dataset-1.

The second testing data set consists of two medical modality images, CT and MRI, each of size 410 × 388. The original MRI image of set 2 is shown in Figure 6(a) and the CT image of set 2 is shown in Figure 6(b).

Figure 6.

Data set-2 of the brain. (a) MRI scan image. (b) CT scan image.

The DWT maximum selection rule is applied to data set 2 and the resulting image is shown in Figure 7; Figure 8 shows the image resulting from the DWT simple averaging fusion technique and Figure 9 shows the image obtained from the PCA fusion method.

Figure 7.

Fused image of data set 2 in DWT maximum selection rule method.

Figure 8.

Fused image of data set 2 in DWT simple averaging method.

Figure 9.

Fused image of data set-2 in PCA.

Table 2 shows the values of the different quality parametric measures, namely Entropy, Standard Deviation, Mean Squared Error, Root Mean Squared Error and PSNR, for the various fusion algorithms. The values for the proposed PCA method are better than those of the other methods with respect to these quality parametric measures.

Method | Entropy | Standard Deviation | MSE | RMSE | PSNR (dB)
CT | 6.3425 | 81.1017 | – | – | –
MRI | 5.6423 | 53.3808 | – | – | –
DWT Simple Average | 6.6093 | 73.5183 | 99.8073 | 9.9904 | 28.1394
DWT Maximum Selection Rule | 6.6746 | 72.3431 | 78.7480 | 8.8740 | 29.1684
PCA | 6.6921 | 70.2923 | 72.0784 | 8.4899 | 29.5528
Existing [20] | 6.6235 | 69.6541 | 70.3512 | 7.6854 | 28.3522

Table 2.

Comparison parameters of the output images of fusion algorithm of Dataset-2.


6. Conclusion

Image fusion plays a very important role in medical diagnosis, helping doctors examine abnormalities in CT and MRI images. In this chapter, different image fusion techniques have been discussed, and three algorithms have been implemented in MATLAB for two different datasets collected from various sources; the results have been compared with existing methods in the literature. Low dose CT scan images have been enhanced using image enhancement techniques and fused with MRI images. From the results, it is observed that the PCA algorithm gives better performance in terms of PSNR, MSE and RMSE.

References

  1. Surya Prasada Rao Borra, Rajesh K. Panakala and P. Rajesh Kumar, “Qualitative Analysis of MRI and Enhanced Low Dose CT Scan Image Fusion”, International Conference on Advanced Computing and Communication Systems (ICACCS-2017), Jan. 06–07, 2017, Coimbatore, India, pp. 1752-1757
  2. David J. Brenner and Eric J. Hall, “Computed Tomography — An Increasing Source of Radiation Exposure”, The New England Journal of Medicine
  3. Michael F. McNitt-Gray, “AAPM/RSNA Physics Tutorial for Residents: Topics in CT – Radiation Dose in CT”, Volume 22, Number 6, pp. 1541-1553
  4. Thomas Lehnert, Nagy N. N. Naguib, Huedayi Korkusuz, Ralf W. Bauer, J. Matthias Kerl, Martin G. Mack and Thomas J. Vogl, “Image-Quality Perception as a Function of Dose in Digital Radiography”
  5. Eliseo Vano, Jose I. Ten, Jose M. Fernandez-Soto and Roberto M. Sanchez-Casanueva, “Experience With Patient Dosimetry and Quality Control Online for Diagnostic and Interventional Radiology Using DICOM Services”, Medical Physics and Informatics – Review, AJR: 200, April 2013, pp. 783-790
  6. Mona Selim, Hiroyuki Kudo and Essam A. Rashed, “Low-Dose CT Image Reconstruction Method With Probabilistic Atlas Prior”, IEEE, 2015
  7. Nithyananda C. R., Ramachandra A. C. and Preethi, “Survey on Histogram Equalization method based Image Enhancement techniques”, 2016 International Conference on Data Mining and Advanced Computing (SAPIENCE), Ernakulam, 2016, pp. 150-158
  8. J. M. Headlee, E. J. Balster and W. F. Turri, “A no-reference image enhancement quality metric and fusion technique”, 2015 International Conference on Image and Vision Computing New Zealand (IVCNZ), Auckland, 2015, pp. 1-6
  9. D. S. Gowri and T. Amudha, “A Review on Mammogram Image Enhancement Techniques for Breast Cancer Detection”, 2014 International Conference on Intelligent Computing Applications, Coimbatore, 2014, pp. 47-51
  10. S. S. Jarande, P. K. Kadbe and A. W. Bhagat, “Comparative analysis of image enhancement techniques”, 2016 International Conference on Electrical, Electronics, and Optimization Techniques (ICEEOT), Chennai, 2016, pp. 4586-4588
  11. A. S. Sekhar and M. N. G. Prasad, “A novel approach of image fusion on MR and CT images using wavelet transforms”, 2011 3rd International Conference on Electronics Computer Technology, Kanyakumari, 2011, pp. 172-176
  12. K. Parmar and Kher, “A Comparative Analysis of Multimodality Medical Image Fusion Methods”, Sixth Asia Modelling Symposium, Bali, 2012, pp. 93-97
  13. Jinwen Yang, Weihe Zhong and Zheng Miao, “On the Image enhancement histogram processing”, 2016 3rd International Conference on Informative and Cybernetics for Computational Social Systems (ICCSS), Jinzhou, 2016, pp. 252-255
  14. P. Mathiyalagan, “Multi-Modal Medical Image Fusion Using Curvelet Algorithm”, 2018 International Conference on Advances in Computing, Communications and Informatics (ICACCI), pp. 2453-2458, 2018
  15. M. D. Nandeesh and M. Meenakshi, “Image Fusion Algorithms for Medical Images – A Comparison”, Bonfring International Journal of Advances in Image Processing, Vol. 5, No. 3, July 2015, pp. 23-26
  16. Vani M. and Saravanakumar S., “Multi focus and multi modal image fusion using wavelet transform”, 2015 3rd International Conference on Signal Processing, Communication and Networking (ICSCN), Chennai, 2015, pp. 1-6
  17. Q. Li, J. Du, F. Song, C. Wang, H. Liu and C. Lu, “Region based multi-focus image fusion using the local spatial frequency”, 2013 25th Chinese Control and Decision Conference (CCDC), Guiyang, 2013, pp. 3792-3796
  18. V. Amala Rani and S. Lalithakumari, “A Hybrid Fusion Model for Brain Tumor Images of MRI and CT”, 2020 International Conference on Communication and Signal Processing (ICCSP), pp. 1312-1316, 2020
  19. Lilia Lazli, Mounir Boukadoum and Otmane Ait Mohamed, “A Survey on Computer-Aided Diagnosis of Brain Disorders through MRI Based on Machine Learning and Data Mining Methodologies with an Emphasis on Alzheimer Disease Diagnosis and the Contribution of the Multimodal Fusion”, Applied Sciences, vol. 10, pp. 1894, 2020
  20. Jiayin Kang, Wu Lu and Wenjuan Zhang, “Fusion of Brain PET and MRI Images Using Tissue-Aware Conditional Generative Adversarial Network With Joint Loss”, IEEE Access, vol. 8, pp. 6368-6378, 2020
