Research in Medical Imaging Using Image Processing Techniques

Written By

Yousif Mohamed Y. Abdallah and Tariq Alqahtani

Submitted: 12 September 2018 Reviewed: 13 January 2019 Published: 24 June 2019

DOI: 10.5772/intechopen.84360

From the Edited Volume

Medical Imaging - Principles and Applications

Edited by Yongxia Zhou

Abstract

Medical imaging is the procedure used to obtain images of body parts for medical purposes in order to identify or study diseases. Millions of imaging procedures are performed every week worldwide. Medical imaging is developing rapidly due to advances in image processing techniques, including image recognition, analysis, and enhancement. Image processing increases the proportion and amount of detected tissue. This chapter presents the application of both simple and sophisticated image analysis techniques in the medical imaging field. It also illustrates how image interpretation challenges can be addressed using different image processing algorithms such as k-means, ROI-based segmentation, and watershed techniques.

Keywords

  • medical
  • imaging
  • image processing technique

1. Introduction

Medical imaging is the process of producing visible images of the inner structures of the body for scientific and medical study and treatment, as well as a visible view of the function of interior tissues. This process supports disorder identification and management, and it creates a data bank of the normal structure and function of the organs to make it easy to recognize anomalies. It includes both organic and radiological imaging, which use electromagnetic energy (X-rays and gamma rays), sonography, magnetism, endoscopy, and thermal and isotope imaging. There are many other technologies used to record information about the location and function of the body; those techniques have many limitations compared with the modalities that produce images. Annually, billions of imaging procedures are performed globally for different diagnostic purposes, and about half of them use ionizing or nonionizing radiation modalities [1]. Medical imaging produces images of the internal structures of the body without invasive procedures. Those images are produced by fast processors that convert the received energies, arithmetically and logically, into signals [2]. Those signals are later converted into digital images and represent the different types of tissue inside the body.

Digital images play an essential role on a daily basis. Medical image processing refers to handling images by computer. This processing includes many types of techniques and operations, such as image acquisition, storage, presentation, and communication. An image is a function that represents a measure of a characteristic, such as brightness or color, of a viewed scene. Digital images have several benefits, such as faster and cheaper processing, easy storage and communication, immediate quality assessment, multiple copying while preserving quality, fast and cheap reproduction, and adaptable manipulation. The disadvantages of digital images are copyright exploitation, inability to resize while preserving quality, the need for large-capacity memory, and the need for faster processors for manipulation [3].

Image processing is the use of a computer to manipulate digital images. This technique has many benefits, such as flexibility, adaptability, data storage, and communication. With the growth of image resizing techniques, images can be stored efficiently. The technique applies many sets of rules to the images simultaneously, and 2D and 3D images can be processed in multiple dimensions. Image processing techniques were established in the 1960s and were used in fields such as space exploration, clinical imaging, the arts, and TV image enhancement. In the 1970s, with the development of computer systems, image processing became cheaper and faster. In the 2000s, image processing became quicker, less expensive, and simpler [4].

The human visual system is one of the most complex systems in existence. It allows living beings to organize and understand the many complex elements in their environment. The visual system comprises the eye, which transforms light into neural signals, and the related parts of the brain that process those signals and extract essential information. The human eyes are paired, roughly spherical structures located anteriorly in the skull, each about 2.5 cm in both transverse and longitudinal diameter. In the middle of the front of the eyeball there is a dark aperture called the pupil, which permits light to enter the eye. The pupil narrows when exposed to a stronger light source; this reduces the light reaching the retina and enhances the visual process. Muscles surrounding the pupil control its widening, and the eye is supported by an outer structure called the sclera. The lens is a ligament-suspended structure located behind the cornea, and its shape changes continuously due to muscle contraction [4, 5]. Figure 1 shows a cross-sectional view of the eyeball.

Figure 1.

The eyeball.

Light enters through the central part of the eye and is focused by the cornea and lens onto the retina. The fovea sharpens the image on the retina. Finally, the brain constructs the details and colors through multiple perceptual processes.

2. Classification of digital images

Digital images have two main types. A raster image is described as a rectangular arrangement of regularly sampled values known as pixels. Raster images usually involve complex color variation and have a fixed resolution determined by their pixel size, so they lose quality when resized because of missing data. They are used mainly for photographic images because of their good color shading. The image acquisition instrument controls the resolution. Raster formats include BMP (Windows Bitmap), TIFF (Tagged Image File Format), PCX (Paintbrush), PNG (Portable Network Graphics), etc. [6, 7].

A vector image is described in terms of lines and curves that are defined mathematically by the computer. A vector object has qualities such as line width, dimension, and hue. Vector images are easily scalable and can be reproduced at different sizes without any change in quality, which makes them suitable for design, line drawing, and diagrams.

3. Applications of digital image processing

Digital image processing has many applications, in the medical field and beyond, such as:

3.1 Medicine

In medicine, many techniques are used, such as segmentation and texture analysis, for the identification of cancer and other disorders. Image registration and fusion methods are widely used nowadays, especially in newer modalities such as PET-CT and PET-MRI. In the field of bioinformatics, telemedicine and lossless compression techniques are used to communicate images remotely [1, 2, 3, 4, 5].

3.2 Forensics

The common techniques used in this field are edge detection, pattern matching, denoising, and security and biometric applications such as identity, face, and fingerprint recognition. Forensics is based on database information about individuals: the input data (fingerprint, eye, photo, etc.) are matched against the database to establish a person's identity [2].

4. Medical imaging systems

Medical imaging systems use the signals received from the patient to produce images, and they rely on both ionizing and nonionizing sources.

4.1 X-ray imaging systems

Since the discovery of X-rays by the German scientist Roentgen, X-rays have been used to image body parts for diagnostic purposes. In the X-ray tube, electrons are produced at the cathode through thermionic emission and are accelerated through a potential difference of 50–150 kV. The electrons hit the anode to produce X-rays. Only about 1% of this energy is converted to X-rays; the remaining amount is changed to heat (Figure 2) [3].

Figure 2.

X-ray tube.

In X-ray machines, the images are produced as 2D planes of the examined part of the body. The fluoroscopy system is used to scan moving organs. The acquired images can be displayed, stored, and communicated between different machines. Computed radiography (CR) uses an image receptor, a screen coated with a storage phosphor, to produce the image. Mammography imaging is used to differentiate between breast tissues and different diseases. Mammography uses lower energy than bony structure imaging; the range of potential difference used is 15–40 kV (Figure 3) [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16].

Figure 3.

Mammography image.

4.2 Computed tomography (CT)

In this modality, the images are produced in multiple planes rather than the single plane of conventional radiography. The CT scanner produces multiple slices of the body tissues in different directions. In the CT scanner, the patient is placed inside the aperture and scanned by a rotating X-ray tube in all directions (Figure 4) [6].

Figure 4.

CT scanner.

4.3 Nuclear medicine

This imaging modality uses radioisotopes to produce images of the function of different structures such as the heart, kidney, and liver. The radioisotopes are labeled with pharmaceutical materials so that they are guided to certain organs. The photons emitted from the patient are received by the detectors and converted into signals, and those signals are converted into interpretable digital images. There are several types of nuclear medicine scanning modalities, such as planar, tomographic, and positron emission imaging. Planar imaging produces 2D images, while tomographic and positron emission imaging produce 3D images (Figure 5) [5].

Figure 5.

Nuclear medicine imaging.

4.4 Ultrasound

Ultrasound is a technique that uses high-frequency sound waves to produce images of the internal structures of the body from the returned echoes. It is similar to the echolocation technique used in nature by some animals, such as bats and whales. Ultrasound is transmitted into the body as high-frequency pulses using a transducer; as those waves travel through the body tissues, some are absorbed and some are reflected back. The reflected waves are received by the transducer and converted into electrical signals. Those electrical signals are digitized and passed to the computer system, which uses arithmetic and logic calculations to form a 2D image of the scanned structures. In an ultrasound system, thousands of pulses are sent every second. Many imaging techniques are used to enhance ultrasound images (Figure 6) [1, 2, 3, 4, 5, 6].

Figure 6.

Ultrasound imaging diagram.

5. Fundamentals of digital image processing

Images are classified according to different qualities such as illumination, contrast, entropy, and signal-to-noise ratio. The histogram is the simplest image processing technique, and displaying the image does not change the image quality. The grayscale histogram is considered the basic form used to evaluate and improve images. The histogram is a plot showing the pixels' values but not their locations; the gray-level histogram shows whether an image is generally dark or bright. The mean pixel value is obtained from the histogram by summing the products of each pixel value and its corresponding bin height and dividing by the total number of pixels [7, 8]. Histogram equalization is used to compare images acquired under different conditions. The technique works by redistributing the histogram so that it becomes smooth, uniform, and balanced (Figure 7).

Figure 7.

Sagittal MRI image of a head enhanced using (i) image adjustment, (ii) histogram equalization, and (iii) adaptive histogram equalization [8].
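
As an illustration of the histogram operations above, the following Python sketch (assuming NumPy and scikit-image are available; the file name chest.png is hypothetical) computes the gray-level histogram, the mean pixel value, and globally and adaptively equalized versions of an image, mirroring the comparison in Figure 7.

# Histogram inspection and equalization sketch (assumes scikit-image and NumPy).
import numpy as np
from skimage import io, exposure

image = io.imread("chest.png", as_gray=True)        # hypothetical file name
hist, bin_centers = exposure.histogram(image, nbins=256)

# Mean pixel value: sum of (bin value * bin count) divided by total pixel count.
mean_value = np.sum(bin_centers * hist) / hist.sum()
print("mean pixel value:", mean_value)

# Global and adaptive histogram equalization, as compared in Figure 7.
equalized = exposure.equalize_hist(image)
adaptive = exposure.equalize_adapthist(image, clip_limit=0.03)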

The mean pixel intensity value is taken as the ideal brightness; intensities above or below it make the image brighter or darker, respectively. The signal-to-noise ratio (SNR) of an image compares the level of the desired signal with the level of the background noise. SNR is defined as the ratio of signal power to noise power and can be calculated from the image in a straightforward manner. The mean signal power of the image can be expressed as the square of the mean pixel value (Eq. (1)).

\[ \mathrm{SNR} = \frac{P_{\mathrm{signal}}}{P_{\mathrm{noise}}} \tag{1} \]

where P denotes the average power.
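
A rough sketch of Eq. (1) under common assumptions (signal power taken as the squared mean pixel value and noise power as the pixel variance within a roughly uniform region; the region bounds are hypothetical) is shown below.

import numpy as np

def estimate_snr(image, region):
    """Estimate SNR as signal power over noise power within a region.

    Signal power is taken as the square of the mean pixel value and noise
    power as the pixel variance, one common convention for Eq. (1).
    """
    r0, r1, c0, c1 = region                          # hypothetical ROI bounds
    patch = np.asarray(image, dtype=float)[r0:r1, c0:c1]
    p_signal = patch.mean() ** 2
    p_noise = patch.var()
    return p_signal / p_noise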

5.1 Image enhancement

Image enhancement is a technique used to improve image quality and perceptibility with computer-aided software. It includes both objective and subjective enhancements and comprises point and local operations; local operations depend on the neighboring input pixel values. Image enhancement has two types: spatial-domain and transform-domain techniques. Spatial techniques work directly at the pixel level, while transform techniques work on a transform of the image, such as the Fourier transform, before returning to the spatial domain (see Figures 8 and 9) [9].
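
To make the two domains concrete, the sketch below (a minimal illustration, not the chapter's own implementation) applies a spatial-domain percentile contrast stretch and a transform-domain low-pass filter built on the Fourier transform; the percentile limits and the keep_fraction cutoff are arbitrary assumptions.

import numpy as np

def contrast_stretch(image, low_pct=2, high_pct=98):
    """Spatial-domain enhancement: linear stretch between two percentiles."""
    lo, hi = np.percentile(image, (low_pct, high_pct))
    return np.clip((image - lo) / (hi - lo + 1e-12), 0.0, 1.0)

def fourier_lowpass(image, keep_fraction=0.1):
    """Transform-domain enhancement: zero out high spatial frequencies."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    mask = np.zeros_like(spectrum)
    r, c = int(rows * keep_fraction), int(cols * keep_fraction)
    mask[rows // 2 - r:rows // 2 + r, cols // 2 - c:cols // 2 + c] = 1
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))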

Figure 8.

Edge-aware local contrast manipulation of thyroid scan images (a), (b) edge threshold, (c) original image and (d) reduced contrast −0.5 [12].

Figure 9.

Edge-aware local contrast manipulation of leukemia cell images: (a) and (c) original images, (b) edge threshold, and (d) reduced contrast −0.5 [13].

5.2 Image segmentation

Image segmentation is a technique of dividing an image into many parts. The basic aim of this division is to make the images easier to analyze and interpret while preserving their quality. The technique is also used to trace the borders of objects within images, labeling pixels according to their intensity and characteristics. The resulting parts represent the entire original image and inherit its characteristics, such as intensity and similarity. Image segmentation is used to create 3D contours of the body for clinical purposes. Segmentation is used in machine perception, malignant disease analysis, tissue volume estimation, anatomical and functional analyses, 3D rendering, virtual reality visualization, anomaly analysis, and object definition and detection (Figure 10) [12, 13, 14].

Figure 10.

Segmentation process of (a) thyroid gland and heart [3].

Image segmentation is divided into two kinds: (i) local segmentation and (ii) global segmentation. Local segmentation works on a single subdivision of the image and therefore involves fewer pixels than the global type. Global segmentation works on the whole image as one unit and has more pixels to manipulate. Segmentation can be divided into three methods:

  1. region method;

  2. boundary method; and

  3. edge method [14].

5.3 Image segmentation based on thresholding

Thresholding segmentation depends on a threshold value to convert a gray-level image into a black-and-white one [4]. There are many other techniques applied in radiology to rebuild or reslice images, such as Otsu's and k-means techniques [5, 6]. The threshold method is useful for establishing the borders of solid objects on a dark background. Threshold techniques require a difference between the object's and the background's intensities. There are three types of thresholding methods: global, adaptive, and histogram-based threshold selection. The global threshold is the broadest and is used across segmentation techniques. The global threshold (θ) is calculated using a binarization procedure as in the following equation (Eq. (2)):

\[ f(x) = \begin{cases} 1, & f(m, n) \ge \theta \\ 0, & \text{otherwise} \end{cases} \tag{2} \]

The adaptive or fixed threshold segments the image faster if the region of interest has a uniform intensity that differs from the background. The disadvantage of this method is its simplicity and its inability to process multichannel images [15].
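
A minimal sketch of the global thresholding rule in Eq. (2) follows; when no threshold θ is supplied, Otsu's method (mentioned above) is used here only as one convenient way to choose it automatically, via scikit-image.

import numpy as np
from skimage.filters import threshold_otsu

def global_threshold(image, theta=None):
    """Binarize an image per Eq. (2): 1 where f(m, n) >= theta, else 0.

    If no theta is given, Otsu's method picks one from the histogram.
    """
    image = np.asarray(image, dtype=float)
    if theta is None:
        theta = threshold_otsu(image)
    return (image >= theta).astype(np.uint8)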

5.4 Image segmentation based on edge detection

Edge detection is a segmentation technique that uses border recognition of closely connected objects or regions. This technique identifies discontinuities in the objects and is used mainly in image analysis to recognize the parts of an image where large variations in intensity arise.

5.5 Some types of edge detection

5.5.1 Roberts kernel

The Roberts kernel is a technique used for determining the difference between two neighboring pixels; precisely, it uses forward differences. This technique can find edges in highly noised images; it is calculated using a first-order fractional derivative and the cross-gradient operator (Eqs. (3) and (4)) (Figure 11) [21].

\[ f_x = f(i, j) - f(i+1, j+1) \tag{3} \]
\[ f_y = f(i+1, j) - f(i, j+1) \tag{4} \]

The fractional derivative can be expressed as two 2 × 2 matrices. In this case, the Roberts masks are calculated as in Eq. (5):

\[ G_x = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \quad \text{and} \quad G_y = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \tag{5} \]
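
The following sketch applies the Roberts masks of Eq. (5) by 2-D convolution and combines the two responses into a gradient magnitude; SciPy is assumed, and the boundary handling is an arbitrary choice.

import numpy as np
from scipy.signal import convolve2d

# Roberts cross masks from Eq. (5).
ROBERTS_GX = np.array([[1, 0], [0, -1]], dtype=float)
ROBERTS_GY = np.array([[0, 1], [-1, 0]], dtype=float)

def roberts_edges(image):
    """Return the Roberts gradient magnitude of a grayscale image."""
    gx = convolve2d(image, ROBERTS_GX, mode="same", boundary="symm")
    gy = convolve2d(image, ROBERTS_GY, mode="same", boundary="symm")
    return np.hypot(gx, gy)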

5.5.2 Prewitt kernel

This technique is based on the idea of central differences and performs better than the Roberts operator (Figure 11). Assume that the pixels around position [i, j] are arranged as in Eq. (6):

Figure 11.

Prewitt edge detection technique: (a) gradient magnitude and (b) gradient direction.

\[ \begin{pmatrix} a_0 & a_1 & a_2 \\ a_7 & [i,j] & a_3 \\ a_6 & a_5 & a_4 \end{pmatrix} \tag{6} \]

The fractional derivative of the Prewitt operator is computed as in Eq. (7):

\[ G_x = (a_2 + c\,a_3 + a_4) - (a_0 + c\,a_7 + a_6) \tag{7} \]

where c is a constant that weights the pixels close to the center of the mask, and G_x and G_y are the gradient estimates at [i, j]. When c equals 1, the Prewitt operator is calculated as in Figure 11 and Eq. (8) [15, 16]:

\[ G_x = \begin{pmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{pmatrix} \quad \text{and} \quad G_y = \begin{pmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{pmatrix} \tag{8} \]

5.5.3 Sobel kernel

This technique depends on the central difference and gives greater weight to the central pixels when averaging. It can be expressed as a 3 × 3 approximation to the first derivative of a Gaussian kernel. This technique is calculated as shown in Eqs. (9)–(11) [20, 21, 22]:

\[ G_x = (a_2 + 2a_3 + a_4) - (a_0 + 2a_7 + a_6) \tag{9} \]

and

\[ G_y = (a_6 + 2a_5 + a_4) - (a_0 + 2a_1 + a_2) \tag{10} \]

The Sobel masks are the following:

\[ G_x = \begin{pmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{pmatrix} \quad \text{and} \quad G_y = \begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix} \tag{11} \]

The Sobel operator is better than the Prewitt operator at noise reduction [18]. This technique is used in functional imaging modalities such as nuclear medicine. In the study of red blood cell images, separating closely neighboring cells is considered a difficult problem because of the background noise; this affects the interpretation process and makes the images difficult for the physician to diagnose. Segmentation can solve such problems and identify those red cells easily (see Figure 12) [17].
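
To make the masks in Eqs. (8) and (11) concrete, the sketch below convolves an image with either kernel pair and returns the gradient magnitude; SciPy is assumed, and the array name cells in the usage comment is hypothetical. In practice, library routines such as skimage.filters.sobel and skimage.filters.prewitt give equivalent results.

import numpy as np
from scipy.signal import convolve2d

SOBEL_GX = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)
SOBEL_GY = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
PREWITT_GX = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], dtype=float)
PREWITT_GY = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)

def gradient_magnitude(image, gx_mask, gy_mask):
    """Convolve with a kernel pair (Eq. (8) or (11)) and combine the responses."""
    gx = convolve2d(image, gx_mask, mode="same", boundary="symm")
    gy = convolve2d(image, gy_mask, mode="same", boundary="symm")
    return np.hypot(gx, gy)

# Example: Sobel edges of a grayscale array `cells` (hypothetical input).
# edges = gradient_magnitude(cells, SOBEL_GX, SOBEL_GY)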

Figure 12.

Red blood cell segmentation using edge detection: (a) original image, (b) Sobel, and (c) Prewitt techniques.

5.6 k-means segmentation

k-means clustering is a vector quantization technique originally from signal processing. The technique subdivides the image into k clusters in which each observation belongs to the cluster with the nearest mean, and it tends to find clusters of comparable spatial extent. Given a set of observations (x1, x2, …, xn), where each observation is a d-dimensional real vector, k-means clustering aims to partition the n observations into k (≤ n) sets S = {S1, S2, …, Sk} so as to minimize the within-cluster sum of squares (Eqs. (12) and (13)) [16].

\[ \arg\min_{S} \sum_{i=1}^{k} \sum_{x \in S_i} \lVert x - \mu_i \rVert^2 = \arg\min_{S} \sum_{i=1}^{k} |S_i| \operatorname{Var}(S_i) \tag{12} \]

where μi is the mean of points in Si.

\[ \arg\min_{S} \sum_{i=1}^{k} \frac{1}{2\,|S_i|} \sum_{x, y \in S_i} \lVert x - y \rVert^2 \tag{13} \]

The k-means technique can be applied to large databases because of its simplicity. It is used in economics, astronomy, agriculture, and computer vision (Figure 13) [17, 18, 19].
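
A minimal sketch of k-means intensity clustering for image segmentation follows, assuming scikit-learn is available; the choice of k = 3 and the fixed random seed are illustrative assumptions, not values taken from the chapter.

import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(image, k=3, seed=0):
    """Cluster pixel intensities into k groups and return a label image."""
    image = np.asarray(image, dtype=float)
    pixels = image.reshape(-1, 1)                    # one feature per pixel: intensity
    model = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(pixels)
    return model.labels_.reshape(image.shape)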

Figure 13.

k-means segmentation technique of nuclear medicine images.

6. Conclusion

Images are a way of expressing data in pictorial form. Images consist of many small elements called pixels, and each pixel has a specific position and value. A geometric image represents an image mathematically with geometric primitives such as lines. Each image is saved in a specific file format, which consists of two parts, the header and the data. Image processing techniques are a group of approaches used for handling images by computer. The objective of segmentation is to partition the images into meaningful portions. Local segmentation deals with partitioning small parts within the images, while global segmentation deals with the assembly of those partitions. Image segmentation works by three methods: region, boundary, and edge. The region method examines images through region classes of neighboring pixels. Thresholding segmentation uses the histogram and a threshold value of the pixels. Edge techniques are used to analyze the images at borders or discontinuities; those techniques include Roberts, Prewitt, Sobel, and Frei-Chen.

Acknowledgments

The authors are thankful to the Deanship of Scientific Research at Majmaah University for funding this research.

Conflict of interest

There are no conflicts of interest.

References

  1. Abdallah Y. Improvement of sonographic appearance using HAT-TOP methods. International Journal of Science and Research (IJSR). 2015;4(2):2425-2430. DOI: http://dx.doi.org/10.14738/jbemi.55.5283
  2. Abdallah Y. Increasing of edges recognition in cardiac scintigraphy for ischemic patients. Journal of Biomedical Engineering and Medical Imaging. 2016;2(6):40-48. DOI: http://dx.doi.org/10.14738/jbemi.26.1697
  3. Abdallah Y. Application of Analysis Approach in Noise Estimation, Using Image Processing Program. Germany: Lambert Publishing Press GmbH & Co. KG; 2011. pp. 123-125
  4. Abdallah Y, Yousef R. Augmentation of X-rays images using pixel intensity values adjustments. International Journal of Science and Research (IJSR). 2015;4(2):2425-2430
  5. Abdallah Y. Increasing of Edges Recognition in Cardiac Scintography for Ischemic Patients. Germany: Lambert Publishing Press GmbH & Co. KG; 2011. pp. 123-125
  6. Abdallah YM. History of medical imaging. Archives of Medicine and Health Sciences. 2017;5:275-278. DOI: 10.4103/amhs.amhs_97_17
  7. Abdallah Y. An Introduction to PACS in Radiology Service: Theory and Practice. Germany: LAP LAMBERT Academic Publishing; 2012. ISBN: 978-3846588987
  8. Abdallah Y. Application of Analysis Approach in Noise Estimation: Using Image Processing Program. LAP LAMBERT Academic Publishing; 2011. ISBN: 978-3847331544
  9. Abdallah Y. Computed Verification of Light and Radiation Field Size Superimposition on Cobalt-60 Machine: Verification of Field Size Using Image Processing Technique. Germany: LAMBERT Academic Publishing GmbH & Co. KG; 2010. ISBN: 9783838399096
  10. Abdallah Y, Mohamed E. Improvement of bone scintography image using image texture analysis. Frontiers in Biomedical Sciences. 2016;1(1):1-6
  11. Abdallah Y. Segmentation of salivary glands in nuclear medicine images using edge detection tools. Journal of Biomedical Engineering and Medical Imaging. 2016;3(2):1-6. DOI: http://dx.doi.org/10.14738/jbemi.32.1702
  12. Abdallah Y, Mohamed S. Automatic recognition of leukemia cells using texture analysis algorithm. International Journal of Advanced Research (IJAR). 2016;4(1):1242-1248
  13. Abdallah Y, Algaddal A, Alkhir M. Enrichment of ultrasound images using contrast enhancement techniques. International Journal of Science and Research (IJSR). 2015;4(1):2381-2385
  14. Abdallah Y. Increasing the precision of edges recognition in static renal scintography. Indian Journal of Applied Research (IJAR). 2015;4(7):270-273
  15. Abdallah Y, Wagiallah E, Yousef M. Improvement of nuclear cardiology images for ischemic patients using image processing techniques. SMU Medical Journal. 2015;2(2):1-9
  16. Abdallah Y. Lungs detection in ventilation and perfusion scintigraphy using watershed transform. International Journal of Electronics Communication and Computer Engineering (IJECCE). 2015;2(3):416-419
  17. Abdallah Y. An accurate liver segmentation method using parallel computing algorithm. Journal of Biomedical Engineering and Medical Imaging. 2015;3(2):15-23
  18. Abdallah, Abdallah M. Using basic morphology tools in improvement of kidneys detection. International Journal of Science and Research (IJSR). 2015;4(5):1383-1386
  19. Shapiro LG, Stockman GC. Computer Vision. New Jersey: Prentice-Hall; 2001. pp. 279-325. ISBN: 0-13-030796-3
  20. Lauren B, Lee LW. Perceptual information processing system. Paravue Inc. U.S. Patent Application 10/618,543; July 11, 2003
  21. Batenburg KJ, Sijbers J. Adaptive thresholding of tomograms by projection distance minimization. Pattern Recognition. 2009;42(10):2297-2305. DOI: 10.1016/j.patcog.2008.11.027
  22. Kashanipour A, Milani N, Kashanipour A, Eghrary H. Robust color classification using fuzzy rule-based particle swarm optimization. IEEE Congress on Image and Signal Processing. 2008;2:110-114
