Open access peer-reviewed chapter

Automatic Lesion Detection in Ultrasonic Images

Written By

Yung-Sheng Chen and Chih-Kuang Yeh

Published: December 1st, 2009

DOI: 10.5772/7058


1. Introduction

One of the promising clinical needs in acquiring ultrasonic images is the detection of possible lesions. Several methods for ultrasonic image segmentation have been proposed in the literature, such as edge-based (Aarnink et al., 1994), texture-based (Richard & Keen, 1996), active-contour-based (Hamarneh & Gustavsson, 2000; Akgul et al., 2000), and semi-automatic approaches (Ladak et al., 1999; Ladak et al., 2000). The edge- and texture-based methods usually cannot provide high-quality segmentation results due to the speckle interference in ultrasonic images. Active contour models require an initial seed point and use a closed contour that approaches the object boundary by iteratively minimizing an energy function. This scheme is usually time-consuming and converges poorly to concavities in the lesion boundary because of the inherent speckle interference; thus it cannot accurately contour irregularly shaped malignant tumors. The semi-automatic approach uses model-based initialisation and a discrete dynamic contour with a set of control points, so that an estimate of the prostate shape can be interpolated using cubic functions. A good survey can be found in the literature (Abolmaesumi & Sirouspour, 2004), in which the authors presented an integrated approach combining a probabilistic data association filter (PDAF) with an interacting multiple model (IMM) estimator to increase the accuracy of the extracted contours. Recently, we also presented a semi-automatic segmentation method for ultrasonic breast lesions (Yeh et al., 2009). The main idea of that method is that an effective scheme for removing speckle noise, an iterative disk expansion method (Chen & Lin, 1995), was applied to the segmentation of ultrasonic breast lesions.

Based on the above brief survey, automatically detecting and extracting lesion boundaries in ultrasound images is rather difficult due to the variation in lesion shape and the interference from speckle noise. Although several methods have been proposed for this goal, they usually cannot be applied to the whole image and need a selected region of interest (ROI) or an initial seed for processing. To overcome this problem, a fully automatic approach is presented in this article, which can detect possible lesions in a whole ultrasonic image.


2. Proposed approach

The main idea of our approach comes from the concept of a contour map of the earth's surface, where a mountain and the valley around it can be distinguished by a contour line. In an ultrasonic image, a possible lesion may be imaged as such a mountain. Since an ultrasonic image usually has poor quality and lesions come in various sizes, our approach first applies median filtering with an estimated circular-window size to form a smoothed image. A properly sized circular window for the median filter is necessary; that is, a small window should be used in the case of small lesions, and vice versa. Furthermore, to avoid the zigzag effect, a square window should not be used.
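The circular-window median smoothing described above can be sketched as follows. This is an illustrative pure-Python helper, not the chapter's implementation; the function name and the list-of-lists image representation are our assumptions.

```python
import statistics

def circular_median_filter(img, r):
    """Median-filter a 2-D list of grey values with a circular window
    of radius r (a disk avoids the zigzag effect of a square window)."""
    h, w = len(img), len(img[0])
    # Pixel offsets inside a disk of radius r.
    disk = [(dy, dx) for dy in range(-r, r + 1) for dx in range(-r, r + 1)
            if dy * dy + dx * dx <= r * r]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[y + dy][x + dx] for dy, dx in disk
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            out[y][x] = statistics.median(vals)
    return out
```

A single speckle spike is suppressed because the disk's median ignores isolated outliers.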

A darker region belonging to a lesion is another useful cue in ultrasonic image processing. Hence, intuitively, if an ultrasonic image is divided into two regions, darker and brighter, it is reasonable to assume a lesion is located in the darker one. To perform such segmentation, the mean or median value of the given image can be used as a threshold, referred to as a mean-cut or median-cut scheme, respectively; we therefore name it the M-cut generally in this paper. Since a lesion may be large or small, and may be located in a newly divided darker or brighter region, the M-cut scheme is performed repeatedly until a stopping condition is satisfied. In our study, the stopping condition is defined as follows.

|M_brighter − M_darker| ≤ TH        (1)

where M_brighter and M_darker represent the M-values of the brighter and darker regions, respectively, and TH is a defined threshold value. If the stopping condition occurs, then the previous undivided region may represent a possible lesion.
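One M-cut step with the stopping condition (1) can be sketched as follows, assuming the region is given as a flat list of grey values; the function name and return convention are hypothetical.

```python
import statistics

TH = 5  # threshold value used empirically in the chapter's experiments

def m_cut(pixels, use_median=True):
    """Split grey values into darker/brighter halves by the M-value.
    Returns (darker, brighter, stop), where stop is True once
    condition (1) holds, i.e. the halves are no longer distinct."""
    m_fn = statistics.median if use_median else statistics.mean
    m = m_fn(pixels)
    darker = [p for p in pixels if p <= m]
    brighter = [p for p in pixels if p > m]
    if not darker or not brighter:          # nothing left to split
        return darker, brighter, True
    # Stopping condition (1): |M_brighter - M_darker| <= TH
    stop = abs(m_fn(brighter) - m_fn(darker)) <= TH
    return darker, brighter, stop
```

A bimodal region keeps splitting, while a nearly uniform one triggers the stop.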

Because the stopping condition may occur at either the darker or the brighter region, a binary tree structure (BTS) with several nodes is adopted in our approach to check the stopping condition. Based on the BTS, each segmented part is regarded as a new image, fed into the BTS for analysis, and thus treated as a new node. A 3-level node sequence starting at node 0 is illustrated in Fig. 1. After the BTS segmentation, all possible lesion regions are located and indicated by the ending nodes. To make the final possible lesion regions easier to identify, postprocessing is useful. In the following subsections, the main parts of our approach, namely segmentation, the binary tree structure, and the postprocessings, are detailed.

Figure 1.

Illustration of a 3-level BTS.

Figure 2.

Segmentation procedure.

2.1. Segmentation

The segmentation procedure is designed simply and is depicted in Fig. 2. It is composed of three consecutive processes, median filtering, binarization, and labelling, and yields two segmented subregions, R1 and R2. Here R1 and R2 denote the darker and brighter subimages, respectively. In other words, R1 is the region containing the largest labelled area in the binarized image, whereas R2 is the rest of the input image after subtracting R1. An ultrasonic image containing a lesion is given in Fig. 3 to illustrate each step of our algorithms.

At the median-processing step, the significant concern is deciding the radius r of the circular window. In our study, we confine r between r_min and r_max to cover the range of expected lesion sizes. If a small lesion appears, r approaches r_min, and vice versa. Hence we define a size ratio to represent the lesion's size information, i.e., sr = n_o / n_t, where n_o denotes the possible object's rectangular size and n_t the half-size of the image. Here we assume a possible lesion is located in the upper region of the image; thus r_min is used for the median processing, and a rough segmentation is performed to estimate the representative object's region. In this study, sr is also confined between sr_min and sr_max. Based on these definitions, a proper r with respect to sr can be expressed by

r = r_min,                                                        if sr ≤ sr_min
r = r_max,                                                        if sr ≥ sr_max        (2)
r = r_max − (r_max − r_min)(sr_max − sr)/(sr_max − sr_min),       otherwise

In our experiments, [r_min, r_max] = [5, 30] and [sr_min, sr_max] = [0.006, 0.25] are used empirically. Based on the presented method, the estimated object's region is obtained in Fig. 3(b), where the estimated r value is 18. After median processing with r = 18, a binarized image is obtained as shown in Fig. 3(c) using the found median grey value of 57. After further applying the labelling process, the largest region is extracted, as Fig. 3(d) shows. Although the region obtained in Fig. 3(d) is nonuniform due to the speckle property of an ultrasonic image, it is easily smoothed by applying a circular-median process. Based on the segmentation procedure depicted in Fig. 2, two segmented subregions R1 and R2 are finally obtained, as shown in Fig. 3(e) and 3(f), respectively, and are further processed by our BTS scheme.
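Equation (2) together with the empirical ranges above can be sketched as a small helper (the function name is ours; the formula returns a fractional radius, and any rounding is left to the caller):

```python
def window_radius(sr, r_min=5, r_max=30, sr_min=0.006, sr_max=0.25):
    """Equation (2): map the size ratio sr to a median-window radius r.
    Small lesions (small sr) get r_min; large ones saturate at r_max."""
    if sr <= sr_min:
        return r_min
    if sr >= sr_max:
        return r_max
    # Linear interpolation between r_min (at sr_min) and r_max (at sr_max).
    return r_max - (r_max - r_min) * (sr_max - sr) / (sr_max - sr_min)
```

At the two breakpoints the interpolation meets the clamped values exactly, so r varies continuously with sr.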

2.2. Binary tree structure

From the segmented R1 and R2 subimages, we can find an obvious lesion located in the R2 image. This means the segmentation should be performed further on the R2 image to approach the target. However, in the general case we should deal with both R1 and R2, since a lesion may appear in either. Considering the BTS principle defined previously and referring to the 3-level BTS illustration in Fig. 1, the R1 and R2 images are now represented by nodes 1 and 2, respectively, at level 1, and are processed continuously. If any node satisfies the stopping condition defined in (1), it represents a detected possible lesion and the branch beyond this node is not pursued. The process continues recursively until the node number becomes smaller than 0, referring to the illustration in Fig. 1. In our experiments, TH = 5 is selected empirically. The whole BTS algorithm is detailed in Fig. 4.

Figure 3.

(a) An ultrasonic image containing a lesion. (b) The estimated object's region from the upper image. (c) Binary result. (d) The largest labelled region. (e), (f) Segmented R1 and R2 subimages.

Let k be the node number. Referring to Fig. 1, node 0 (the original image) has two son nodes, denoted node 1 and node 2. This can be formulated as: node k has two son nodes, node 2k+1 and node 2k+2. Based on this formulation, for example, the two son nodes of node 2 (k = 2) are node 5 and node 6, as addressed in Fig. 1.

Because of the recursive operations of the BTS, backward node indexing is also needed. That is, for node k, its father node is

node (k − 1)/2,    if k is odd        (3)
node (k − 2)/2,    otherwise

In the BTS algorithm of Fig. 4, tc[k] is used to count the passing times of the k-th node. Each node can be visited at most twice: once forward and once backward.
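The forward and backward node indexing can be sketched as follows (function names are ours). Note that in integer arithmetic both cases of equation (3) reduce to (k − 1) // 2.

```python
def children(k):
    """Forward indexing: node k splits into darker/brighter son nodes."""
    return 2 * k + 1, 2 * k + 2

def parent(k):
    """Backward indexing, equation (3): (k-1)/2 for odd k, (k-2)/2 for
    even k; with floor division both are simply (k - 1) // 2."""
    return (k - 1) // 2
```

For example, node 2 has sons 5 and 6, and node 12's father is node 5, matching the worked example later in the chapter.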

After the BTS processing, for the current example we obtain the following ending nodes: 3, 4, 12, 23, 24, 56, 111, 112, 57, 118, 235, 236, 30, 59, 121, 245, 246. The tree structure of these ending nodes, denoted in black, is depicted in Fig. 5. Detailed information, including the size and mean grey value of each node (subimage), is given in Table 1(a) and is further used in the postprocessings. Here, size means the total number of non-white pixels of the corresponding subimage.

Figure 4.

BTS algorithm.

Figure 5.

The tree structure of ending nodes 3, 4, 12, 23, 24, 56, 111, 112, 57, 118, 235, 236, 30, 59, 121, 245, 246, denoted by black colour.

Table 1.

(a) Size and mean grey information of each node of the tree structure in Fig. 5. Here, size means the total number of non-white pixels of the corresponding subimage. (b) Reduced node information after applying Rule 1, ordered by size in ascending order for the further use of Rule 2.

2.3. Postprocessings

In clinical application, a doctor may adjust the ultrasonic equipment to enhance an ultrasonic image so that a lesion can be observed, analyzed, and located for assessment. Similarly, based on our segmentation and BTS algorithms, a set of possible lesion regions (nodes) has been addressed so far. However, some non-lesion regions are also included. They may be an ignorable shadow or an insignificant lighting phenomenon and can be regarded as non-lesion regions. Hence we describe two rules below for filtering out these nodes.

Rule 1: For a node, if its size is smaller than a specified value, e.g., 200 in our experiments, the node is removed. In addition, if a region connects to the boundary of the original image, it is also removed. For the current example, after applying this rule, we obtain the reduced node information shown in Table 1(b). Only six nodes remain. Note that they are ordered by size in ascending order for the further processing of Rule 2.

Rule 2: For a node a, if the subimage region bounded by its outer-most convex boundary is covered by that of another node, say b, then we say a ⊆ b and node a can be removed. To explain this operation, we give the following illustration. Consider the information of nodes 23, 24, and 12; their size order is size_23 < size_24 < size_12. Their corresponding subimages are shown in Fig. 6(a), 6(b), and 6(c), respectively. Based on this rule, we check the outer-most boundary information between two subimages, and the inclusion relationship can be easily identified. After performing this procedure, we have the relationships:

node 23 ⊆ node 24
node 23 ⊆ node 12        (4)
node 24 ⊆ node 12

Thus nodes 23 and 24 can be removed and node 12 remains. This inclusion result can be easily understood by contouring all the regions in one image, as illustrated in Fig. 7. After the checking procedure of Rule 2, the total number of nodes is further reduced to four: 245, 59, 57, and 12.
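Rules 1 and 2 can be sketched as follows. Here each node's region is represented as a set of (y, x) pixel coordinates, and plain pixel-set containment stands in for the chapter's outer-most convex boundary test, a simplifying assumption of this sketch.

```python
def filter_nodes(regions, img_h, img_w, min_size=200):
    """Postprocessing sketch. `regions` maps node id -> set of (y, x) pixels.
    Rule 1: drop regions smaller than min_size or touching the image border.
    Rule 2: in ascending size order, drop a region covered by a larger one
    (pixel-set containment approximates the convex-boundary coverage test).
    Returns surviving node ids in ascending size order."""
    # Rule 1
    kept = {k: px for k, px in regions.items()
            if len(px) >= min_size
            and not any(y in (0, img_h - 1) or x in (0, img_w - 1)
                        for y, x in px)}
    # Rule 2
    order = sorted(kept, key=lambda k: len(kept[k]))
    removed = set()
    for i, a in enumerate(order):
        for b in order[i + 1:]:
            if kept[a] <= kept[b]:   # region a is covered by region b
                removed.add(a)
                break
    return [k for k in order if k not in removed]
```

With a small region nested inside a larger one, only the enclosing region survives, mirroring how nodes 23 and 24 are removed in favour of node 12.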

Figure 6.

The corresponding subimages of (a) node 23, (b) node 24, and (c) node 12.

Figure 7.

Contouring all the regions in an image for illustrating the inclusion results.

The final steps of the postprocessings are to sort the nodes by the possibility that each belongs to a lesion and to display the lesion detection result. It is reasonable that a region with a high lesion possibility should have a high contrast; otherwise identifying a lesion is somewhat difficult. Hence we define a so-called contrast ratio (cr) to index the possibility of a lesion. Given a node 1, its father node is easily addressed according to (3), which in turn indexes node 2, the complement of node 1, since nodes 1 and 2 are the two regions segmented from their father node. Let g1, g2, and gf be the mean grey values of node 1, node 2, and their father node, respectively. Three parameters, d1 = |g1 − gf|, d2 = |g2 − gf|, and avg = (g1 + g2)/2, are defined to formulate the following contrast ratios.

cr1 = d1/avg,    cr2 = d2/avg,    cr3 = (d1 + d2)/avg        (5)

Here cr1 considers the contrast between node 1 and the father node; cr2 the contrast between node 2 and the father node; and cr3 the contrast between the two son nodes and the father node. Our total contrast ratio (cr_total) combines the above three terms.

cr_total = (cr1 × cr2 × cr3) × K = d1 · d2 · (d1 + d2) / avg³ × K        (6)

Since a lesion tends to be a darker region with higher contrast, a higher numerator and a lower denominator in (6) yield a higher cr_total and thus indicate a higher lesion possibility at this node. The constant K is used to facilitate the numeric manipulation; 65536 is used in our program. Take node 12 of the current example as an illustration: its mean grey value is g1 = 63, its father node is node 5 (gf = 58), and the other son node is node 11 (g2 = 51). According to (6), we have cr_total = 148.63 for node 12; the next is cr_total = 50.02 for node 245; and so on. Following the descending order of the cr index, we show the final detected results in Fig. 8. The most significant lesion is detected and placed first.
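The contrast ratios (5)-(6) and the worked node-12 numbers can be reproduced with a small helper (the function name is ours):

```python
K = 65536  # scaling constant used in the chapter's program

def cr_total(g1, g2, gf, k=K):
    """Equations (5)-(6): total contrast ratio of the son pair (g1, g2)
    against their father node's mean grey value gf."""
    d1, d2 = abs(g1 - gf), abs(g2 - gf)
    avg = (g1 + g2) / 2
    return d1 * d2 * (d1 + d2) / avg ** 3 * k

# Node 12: g1 = 63, father node 5 (gf = 58), node 11 (g2 = 51)
# d1 = 5, d2 = 7, avg = 57  ->  cr_total = 5*7*12/57^3 * 65536 ≈ 148.63
```

The computed value matches the 148.63 reported for node 12 in the text.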

Figure 8.

The final detected results, shown in descending order of the cr index. The most significant lesion is placed first, in (a).


3. Results and discussion

So far, we have detailed our approach to automatic lesion detection in ultrasonic images with a series of illustrations. From the ordered display of results, as Fig. 8 shows, the most significant targets are placed first. This facilitates clinical applications and assists the doctor's quick assessment of a lesion. However, some quasi-lesion regions (which may not be lesions) are also listed by our approach, as in Fig. 8(b)-(d); this is a trade-off between result quality and the fully automatic detection that motivated our study. To further confirm the feasibility of our approach, other results are given in Fig. 9, where only the first-place detected region in each image is displayed.

Since our approach can detect all possible lesion regions and list them in order of significance, multiple-lesion detection can also be performed. Consider the ultrasonic image in Fig. 10(a), which contains two obvious lesions. Intuitively, detecting lesions in such an image is difficult due to the non-uniform brightness and the influence of speckle. Traditionally, a manual ROI selection is needed prior to contour detection for each lesion. After applying our approach to this image, we finally obtain 20 nodes representing all possible lesion regions; an image showing the inclusion results, like Fig. 7, is given in Fig. 10(b). Obviously, the real lesions should be in the detection list even though many non-lesion regions are also located. Because of the inherently non-uniform brightness and the influence of speckle, a real lesion may not be placed at the front of the significance order. In this case, the three most possible lesions, placed at orders 1, 8, and 20, are shown in Fig. 10(c), 10(d), and 10(e), respectively.

Figure 9.

Some other results; only the first-place detected region in each image is displayed.


4. Conclusion

In this article, we have presented a simple but effective, fully automatic segmentation method for detecting lesions in ultrasonic images without requiring ROIs or initial seeds. Based on a binary tree structure and some postprocessings, multiple lesions can be detected and displayed in order for further visualization and inspection. Since experiments have confirmed the feasibility of the proposed approach, an e-service for an ultrasonic imaging CAD system is worth developing. In addition, the strategy of reducing non-lesion regions is an interesting topic and will be further investigated and incorporated into this approach in the near future.


Acknowledgments

This work was supported in part by the National Science Council, Taiwan, Republic of China, under Grant No. NSC 96-2221-E-155-057-MY3.

References

  1. Aarnink, R.; Giesen, R.; Huynen, A.; de la Rosette, J.; Debruyne, F. & Wijkstra, H. (1994). A practical clinical method for contour determination in ultrasonographic prostate images, Ultrasound in Medicine and Biology, 20, 705-717.
  2. Abolmaesumi, P. & Sirouspour, M. R. (2004). An interacting multiple model probabilistic data association filter for cavity boundary extraction from ultrasound images, IEEE Transactions on Medical Imaging, 23(6), 772-784.
  3. Akgul, Y.; Kambhamettu, C. & Stone, M. (2000). A task-specific contour tracker for ultrasound, In: Proceedings of the IEEE Workshop on Mathematical Methods in Biomedical Image Analysis, 135-142, Hilton Head Island, South Carolina, USA, June 11-12, 2000.
  4. Chen, Y. S. & Lin, T. D. (1995). An iterative approach to removing the closure noise using disk expansion method, IEEE Signal Processing Letters, 2(6), 105-107.
  5. Hamarneh, G. & Gustavsson, T. (2000). Combining snakes and active shape models for segmenting the human left ventricle in echocardiographic images, In: Proceedings of IEEE Computers in Cardiology, 27, 115-118, Cambridge, Massachusetts, USA, September 24-27, 2000.
  6. Ladak, H.; Downey, D.; Steinman, D. & Fenster, A. (1999). Semi-automatic technique for segmentation of the prostate from 2D ultrasound images, In: Proceedings of the IEEE BMES/EMBS Conference Serving Humanity, Advanced Technology, 2, 1144, Atlanta, GA, USA, October 13-16, 1999.
  7. Ladak, H.; Mao, F.; Wang, Y.; Downey, D.; Steinman, D. & Fenster, A. (2000). Prostate segmentation from 2D ultrasound images, In: Proceedings of the International Conference on Engineering in Medicine and Biology, 4, 3188-3191, Chicago, IL, USA, July 23-28, 2000.
  8. Richard, W. & Keen, C. (1996). Automated texture-based segmentation of ultrasound images of the prostate, Computerized Medical Imaging and Graphics, 20, 131-140.
  9. Yeh, C. K.; Chen, Y. S.; Fan, W. C. & Liao, Y. Y. (2009). A disk expansion segmentation method for ultrasonic breast lesions, Pattern Recognition, 42(5), 596-606.
