Content-Based Image Feature Description and Retrieval



Introduction
With the growth in the number of color images, developing an efficient image retrieval system has received much attention in recent years. The first step in retrieving relevant information from image and video databases is the selection of appropriate feature representations (e.g. color, texture, shape) so that the feature attributes are both consistent in feature space and perceptually close to the user [1]. Many CBIR systems that adopt different low-level features and similarity measures have been proposed in the literature [2-5]. In general, perceptually similar images are not necessarily similar in terms of low-level features [6]. Hence, these content-based systems capture pre-attentive similarity rather than semantic similarity [7]. In order to achieve a more efficient CBIR system, active research currently focuses on two complementary approaches: the region-based approach [4, 8-10] and relevance feedback [6, 11-13].
Typically, the region-based approaches segment each image into several regions with homogeneous visual properties, and enable users to rate the relevant regions for constructing a new query. In general, an incorrect segmentation may result in an inaccurate representation. However, automatically extracting image objects is still a challenging issue, especially for a database containing a collection of heterogeneous images. For example, Jing et al. [8] integrate several effective relevance feedback algorithms into a region-based image retrieval system, which incorporates the properties of all the segmented regions to perform many-to-many relationships of regional similarity measurement. However, some semantic information is disregarded when similar regions in the same image are not considered. In another study [10], Vu et al. proposed a region-of-interest (ROI) technique, a sampling-based approach called SamMatch, as a matching framework. This method can prevent the visual features from being detected incorrectly.
On the other hand, the mechanism of relevance feedback is an online-learning technique that can capture the inherent subjectivity of the user's perception during a retrieval session. In Power Tool [11], the user is allowed to give relevance scores to the best matched images, and the system adjusts the weights by putting more emphasis on the specific features. Cox et al. [12] propose an alternative way to achieve CBIR that predicts the possible image targets by Bayes' rule rather than providing segmented regions of the query image. However, the feedback information in [12] could be ignored if the most likely images and the irrelevant images have similar features.
In this Chapter, a novel region-based relevance feedback system is proposed that incorporates several feature vectors. First, unsupervised texture segmentation for natural images is used to partition an image into several homogeneous regions. Then we propose an efficient dominant color descriptor (DCD) to represent the partitioned regions in an image. Next, a regional similarity matrix model is introduced to rank the images. In order to counter possible segmentation failures and to simplify the user operations, we propose a foreground assumption that separates an image into two parts: foreground and background. The background can be regarded as the irrelevant region that confuses the query semantics during retrieval. It should be noted that the main objective of this approach is to exclude irrelevant regions (background) from contributing to the image-to-image similarity model. Furthermore, the global features extracted from the entire image are used to compensate for the inaccuracy caused by imperfect segmentations. The details will be presented in the following Sections. Experimental results show that our framework improves the accuracy of relevance-feedback retrieval.
The Chapter is organized as follows. Section 2 describes the key observations that explain the basis of our algorithm. In Section 3, we first present a quantization scheme for extracting the representative colors from images, and then introduce a modified similarity measure for the DCD. In Section 4, image segmentation and region representation based on our modified dominant color descriptor and the local binary pattern are described. The image representation and the foreground assumption are explained in Section 5. Our integrated region-based relevance feedback strategies, which consider the pseudo query image and relevant images as the relevance information, are introduced in Section 6. Experimental results and discussions of the framework are given in Section 7. Finally, a short conclusion is presented in Section 8.

Problem statement
The major goal of region-based relevance feedback for image retrieval is to find perceptually similar images with good accuracy in a short response time. For natural image retrieval, conventional region-based relevance feedback systems use multiple features (e.g., color, shape, texture, size) and an updated weighting scheme. In this context, our algorithm is motivated by the following viewpoints.

1. Computational cost increases as the number of selected features increases. However, an algorithm with a large number of features does not guarantee an improvement in retrieval performance. In theory, the retrieval performance can be enhanced by choosing more compact feature vectors.

A modified dominant color descriptor
Color is one of the most widely used visual features for retrieving images from common semantic categories [12]. MPEG-7 specifies several color descriptors [17], such as dominant colors, the scalable color histogram, color structure, color layout and GoF/GoP color. The human visual system captures dominant colors in images and eliminates the fine details in small areas [18]. In MPEG-7, the DCD provides a compact color representation and describes the color distribution in an image [16]. The dominant color descriptor in MPEG-7 is defined as

F = { (c_i, p_i), i = 1, ..., N }   (1)

where N is the total number of dominant colors in the image, c_i is a 3-D dominant color vector, p_i is the percentage of each dominant color, and Σ_i p_i = 1.
In order to extract the dominant colors from an image, a color quantization algorithm has to be predetermined. A commonly used approach is the modified generalized Lloyd algorithm (GLA) [19], a color quantization algorithm with cluster merging. This method reduces the large number of colors to a small number of representative colors. However, the GLA has several intrinsic problems [20]:

1. It may give different clustering results when the number of clusters is changed.

2. A correct initialization of the cluster centroids is crucial, because some clusters may be empty if their initial centers lie far from the distribution of the data.

3. The criterion of the GLA depends on the cluster "distance"; therefore, different initial parameters for the same image may cause different clustering results.
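To make the procedure concrete, a GLA-style quantizer with cluster merging might be sketched as follows. This is a simplified toy version (fixed iteration count, naive pairwise merging of close centroids), not the exact algorithm of [19]:

```python
import random

def gla_quantize(pixels, n_clusters=4, merge_thresh=20.0, iters=10):
    """Simplified GLA sketch: k-means-style assignment/update steps,
    followed by merging of centroids closer than merge_thresh."""
    random.seed(0)  # fixed seed so the sketch is deterministic
    centers = random.sample(pixels, n_clusters)
    for _ in range(iters):
        # Assignment step: attach each pixel to its nearest centroid.
        clusters = [[] for _ in centers]
        for p in pixels:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        # Update step: recompute centroids (keep old centre if cluster empty).
        centers = [
            tuple(sum(ch) / len(cl) for ch in zip(*cl)) if cl else c
            for cl, c in zip(clusters, centers)
        ]
    # Merge step: fuse centroids whose Euclidean distance is small.
    merged = []
    for c in centers:
        for i, m in enumerate(merged):
            if sum((a - b) ** 2 for a, b in zip(c, m)) ** 0.5 < merge_thresh:
                merged[i] = tuple((a + b) / 2 for a, b in zip(m, c))
                break
        else:
            merged.append(c)
    return merged
```

Problems 2 and 3 above are visible in this sketch: the result depends on the random initial centers, and an empty cluster simply keeps its stale centroid.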
In general, conventional clustering algorithms are very time consuming [2, 21-24]. On the other hand, the quadratic-like measure [2, 17, 25] for the dominant color descriptor in MPEG-7 does not match human perception well, and it can cause incorrect ranks for images with similar color distributions [3, 20, 26]. In this Chapter, we adopt the linear block algorithm (LBA) [20] to extract the representative colors, and measure perceptually similar dominant colors with the modified similarity measure.
Considering two dominant color features F_1 = { (c_i, p_i), i = 1, ..., N_1 } and F_2 = { (b_j, q_j), j = 1, ..., N_2 }, the quadratic-like dissimilarity measure between two images F_1 and F_2 is calculated by:

D^2(F_1, F_2) = Σ_{i=1}^{N_1} p_i^2 + Σ_{j=1}^{N_2} q_j^2 − Σ_{i=1}^{N_1} Σ_{j=1}^{N_2} 2 a_{i,j} p_i q_j   (2)

where a_{i,j} is the similarity coefficient between color clusters c_i and b_j, given by

a_{i,j} = 1 − d_{i,j} / d_max  if d_{i,j} ≤ T_d;  a_{i,j} = 0  otherwise.   (3)

The threshold T_d is the maximum distance used to judge whether two color clusters are similar, and d_{i,j} is the Euclidean distance between the two color clusters c_i and b_j; d_max = αT_d, where α is a parameter that is set to 2.0 in this work.
The quadratic-like distance measure in Eq. (2) may incorrectly reflect the distance between two images. The improper results are mainly caused by two reasons. 1) If the number of dominant colors N_2 in the target image increases, it may cause incorrect results. 2) If one dominant color can be found in both the target image and the query image, a high percentage q_j of that color in the target image may cause improper results. In our earlier work [19], we proposed a modified distance measure that considers not only the similarity of dominant colors but also the difference in color percentages between images. The experimental results show that the measure in [20] matches human perception in judging image similarity better than the MPEG-7 DCD. The modified similarity measure between two images F_1 and F_2 is calculated by:

S(F_1, F_2) = Σ_{i=1}^{N_1} Σ_{j=1}^{N_2} a_{i,j} [1 − |p_q(i) − p_t(j)|] min(p_q(i), p_t(j))   (4)

where p_q(i) and p_t(j) are the percentages of the ith dominant color in the query image and the jth dominant color in the target image, respectively. The term in brackets, 1 − |p_q(i) − p_t(j)|, measures the difference between two colors in percentage, and the term min(p_q(i), p_t(j)) is the intersection of p_q(i) and p_t(j), representing the similarity between two colors in percentage. In Fig. 1, we use two real images selected from Corel as our example, where the color and percentage values are given for comparison, and evaluate both the modified measure and the quadratic-like measure. In order to properly reflect the similarity coefficient between two color clusters, the parameter α is set to 2 and T_d = 25 in Eq. (3). Since the pair-wise distances between Q and F_1 in Fig. 1 exceed T_d, the quadratic-like dissimilarity measures D^2(Q, F_1) and D^2(Q, F_2) can be computed accordingly. It can be seen that the resulting comparison D^2(Q, F_2) > D^2(Q, F_1) is not consistent with human perception.
In contrast, the dissimilarity measure in [19] yields the perceptually correct order. In MPEG-7 DCD, the quadratic-like measure produces incorrect matches due to the presence of a high percentage of the same color in the target image. For example, consider the quantized images in Fig. 2. The percentages of the dominant colors of F_1 (rose) and F_2 (gorilla) are 82.21% and 92.72%, respectively. In human perception, Q is more similar to F_2; however, the quadratic-like similarity measure gives D^2(Q, F_2) > D^2(Q, F_1). Obviously, this result causes a wrong rank. The robust similarity measure [19] captures human perception more accurately than the MPEG-7 DCD. In our experiments, the modified DCD achieves 16.7% and 3% average retrieval rate (ARR) [27] improvements over Ma [28] and Mojsilovic [29], respectively. In this Chapter, the modified dominant color descriptor is chosen to support the proposed CBIR system.
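A minimal sketch of the modified similarity measure of Eq. (4), under the assumption that each DCD feature is represented as a list of ((R, G, B), percentage) pairs whose percentages sum to 1:

```python
def dcd_similarity(q, t, Td=25.0, alpha=2.0):
    """Modified DCD similarity in the spirit of [19]: the MPEG-7
    coefficient a_ij is combined with a percentage-difference term
    and a percentage-intersection term."""
    d_max = alpha * Td
    sim = 0.0
    for (cq, pq) in q:
        for (ct, pt) in t:
            d = sum((a - b) ** 2 for a, b in zip(cq, ct)) ** 0.5
            if d <= Td:                 # colors regarded as similar
                a_ij = 1.0 - d / d_max  # similarity coefficient of Eq. (3)
                sim += a_ij * (1.0 - abs(pq - pt)) * min(pq, pt)
    return sim
```

Two identical single-color features yield a similarity of 1.0, while features with no dominant colors closer than T_d yield 0.0; higher values mean more similar images.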

Image segmentation
It has been mentioned that segmentation is necessary for region-based image retrieval systems. Nevertheless, fully automatic segmentation is still impractical for region-based image retrieval (RBIR) applications [8, 30-32]. Although many systems provide segmentation tools, they usually require complicated user interaction to achieve image retrieval, which makes the processing inefficient and time consuming for the user. In the following, a new approach is proposed to overcome this problem. In our algorithm, the user does not need to provide precisely segmented regions; instead, a boundary checking algorithm is used to refine the segmented regions. For region-based image retrieval, we adopt the unsupervised texture segmentation method of [30, 33]. In [30], Ojala et al. use the nonparametric log-likelihood-ratio test and the G statistic to compare the similarity of feature distributions. The method is efficient for finding homogeneously textured image regions. Based on this method, a boundary checking algorithm [34] has been proposed to improve the segmentation accuracy and computational cost. For more details about our segmentation algorithm, we refer the reader to [33]. In this Chapter, the weighted distributions of the global color index histogram (CIH) and the local binary pattern (LBP) are applied to measure the similarity of two adjacent regions.
An example is shown in Fig. 3. It can be seen that the boundary checking algorithm segments the test image correctly, and it costs only about 1/20 of the processing time of the method in [30]. For color image segmentation, another example is shown in Fig. 4. In Fig. 4(c) and Fig. 4(c'), we can see that the boundary checking algorithm achieves robust segmentation for the test image "Akiyo" and another natural image.

Region representation
To achieve region-based image retrieval, we use two compact and intuitive visual features to describe a segmented region: the dominant color descriptor (DCD) and texture. For the first, we use our modified dominant color descriptor of [19, 26]. The feature representation of a segmented region R is defined as

F_R = { (R_c_i, R_p_i), i = 1, ..., N_R }

where R_c_i and R_p_i are the ith dominant color and its percentage in R, respectively, and N_R is the number of dominant colors in R.
For the second, the texture feature of a region is characterized by the weighted distribution of the local binary pattern (LBP) [6, 25, 32]. The advantages of LBP include its invariance to illumination change and its low computational cost [32]. The value of the kth bin in the LBP histogram is given by:

LBP_h_k = n_k / P

where n_k represents the frequency of the LBP value at the kth bin, and P is the number of pixels in the region. The texture feature of region R is then the set of these normalized bin values, R_LBP = { LBP_h_k } over all bins. In addition, we define a feature R_poa representing the percentage of the image area occupied by region R. Two regions are considered visually similar if both their content (color and texture) and their areas are similar.
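For illustration, the normalized LBP histogram h_k = n_k / P could be computed as below with the basic 256-bin, 8-neighbour LBP operator; the choice of this basic variant is an assumption, since the chapter does not fix which LBP variant is used.

```python
def lbp_histogram(img):
    """Normalized histogram of basic 8-neighbour LBP codes, i.e.
    h_k = n_k / P (here P counts interior pixels; borders are skipped).
    img is a 2-D list of grey levels."""
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    hist = [0] * 256
    P = 0
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                # Set the bit if the neighbour is at least as bright as the centre.
                if img[y + dy][x + dx] >= img[y][x]:
                    code |= 1 << bit
            hist[code] += 1
            P += 1
    return [n / P for n in hist]
```

On a perfectly uniform region every interior pixel receives the all-ones code, so the histogram is concentrated in a single bin, which matches the intuition of a textureless region.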

Image representation and definition of the foreground assumption
For image retrieval, each image in the database is described by a set of its non-overlapping regions; an image I that contains N non-overlapping regions is represented as I = { R_1, R_2, ..., R_N }. Although the region-based approaches perform well in [9, 11], their retrieval performance depends strongly on the success of image segmentation, because segmentation techniques are still far from reliable for heterogeneous image databases. In order to address possible segmentation failures, we propose a foreground assumption to "guess" the foreground and background regions in images. For instance, we can readily find a gorilla sitting on the grass as shown in Fig. 5. If Fig. 5 is the query image, the user is likely interested in the main subject (gorilla) rather than the grass-like features (color, texture, etc.). In most cases, the user pays more attention to the main subject.
The main goal of the foreground assumption is to simply distinguish main objects from irrelevant regions in images. Assume that we can divide an image into two parts: foreground and background. In general, the foreground occupies the central region of an image; to emphasize the importance of this central region, we define the foreground window in terms of the image dimensions, where R_foreground and R_background are the occupied regions of the foreground and background, respectively, and h and w are the height and width of the image. In the region-based retrieval procedure, segmented regions are required; they can be provided by the user or generated by the system automatically. However, the criterion for similarity measurement is based on the overall distances between feature vectors. If an image in the database has background regions that are similar to the foreground object of the query image, that image will be considered similar under the similarity measure, and the accuracy of the region-based retrieval system decreases. Therefore, we modify our region representation by adding a Boolean variable BV ∈ {0, 1} that indicates whether the segmented region R belongs to the background of the image or not.
Note that the variable BV is designed to reduce the effect of segmentation errors.
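The foreground assumption might be sketched as follows. The central-window proportion (frac) and the overlap threshold are assumptions for illustration, since the chapter's exact equation is not reproduced here:

```python
def region_is_background(region_pixels, h, w, frac=0.5, overlap=0.5):
    """Sketch of the foreground assumption: the central frac*h x frac*w
    window of an h x w image is treated as foreground; a segmented region
    whose overlap with that window is below `overlap` gets BV = 1
    (background). region_pixels is a list of (y, x) coordinates."""
    y0, y1 = h * (1 - frac) / 2, h * (1 + frac) / 2
    x0, x1 = w * (1 - frac) / 2, w * (1 + frac) / 2
    inside = sum(1 for (y, x) in region_pixels if y0 <= y < y1 and x0 <= x < x1)
    return inside / len(region_pixels) < overlap   # True -> BV = 1
```

A region concentrated at an image corner is flagged as background, while a region around the image centre keeps BV = 0 and remains eligible for matching.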

Integrated region-based relevance feedback framework
In region-based image retrieval, an image is considered relevant if it contains some regions with satisfactory similarity to the query image. The retrieval system can construct a new query that includes only the relevant regions according to the user's feedback; in this way, the system captures the user's query concept automatically. For example, Jing et al. [8] suggest that information in every region could be helpful. To speed up the system, we introduce a similarity matrix model to infer the region-of-interest sets. Inspired by the query-point movement method [8, 31], the proposed system performs similarity comparisons by analyzing the salient regions in the pseudo query image and the relevant images based on the user's feedback information.

Region-based similarity measure
In order to perform region-of-interest (ROI) queries, the relevant regions are obtained by the region-based color similarity R_S_C(R, R′) and the region-based texture similarity R_S_T(R, R′) in Eq. (14) and (15), respectively. This similarity measure allows users to select their relevant regions accurately. Note that the conventional color histogram cannot be applied to the DCD directly, because images do not have a fixed number of dominant colors [12]. The region-based color similarity between two segmented regions R and R′ is computed from their dominant colors, where m and n are the numbers of dominant colors in R and R′, respectively, and R_S_C(R, R′) takes the maximum similarity between the two regions in similar color percentage. Two dominant colors are regarded as similar if the pair-wise Euclidean distance between their color vectors c_i and c_j is less than a predefined threshold T_d, which is set to 25 in our work. The notation R_S_poa(R, R′) measures the similarity of the area percentages of the region pair (R, R′). To measure the texture similarity between two regions, we define R_S_T(R, R′) as a weighted intersection of their LBP histograms, where R_Pxl and R′_Pxl represent the numbers of pixels in regions R and R′, respectively, and min(R_LBP_h_k, R′_LBP_h_k) is the intersection of the LBP histograms at the kth bin.
Theoretically, visual similarity is achieved only when both color and texture are similar; for example, two regions should be considered non-similar if they are similar in terms of color but not texture. This can be achieved by imposing thresholds on the two similarities, e.g., R_S_C(R, R′) ≥ 0.8 and R_S_T(R, R′) ≥ 0.9.
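A toy version of this conjunctive decision rule, with simplified stand-ins for Eq. (14) and (15) (percentage intersection over matched dominant colors for color, plain histogram intersection for texture), can be sketched as:

```python
def region_color_sim(R1, R2, Td=25.0):
    """Simplified stand-in for the color similarity of Eq. (14):
    percentages of dominant colors closer than Td are intersected.
    Regions are lists of ((R, G, B), percentage) pairs."""
    sim = 0.0
    for (c1, p1) in R1:
        for (c2, p2) in R2:
            if sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5 <= Td:
                sim += min(p1, p2)
    return sim

def region_texture_sim(h1, h2):
    """Intersection of two normalized LBP histograms (spirit of Eq. (15))."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def visually_similar(R1, R2, h1, h2, tc=0.8, tt=0.9):
    """Conjunctive rule from the text: similar only if BOTH color and
    texture similarities pass their thresholds."""
    return region_color_sim(R1, R2) >= tc and region_texture_sim(h1, h2) >= tt
```

Two regions with matching colors but disjoint texture histograms are rejected, which is exactly the behaviour the thresholds are meant to enforce.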

Similarity matrix model
In the following, we introduce a region-based similarity matrix model. The regions of the positive examples, which help the system find the intention of the user's query, allow irrelevant regions to be excluded flexibly. The proposed similarity matrix model is illustrated in Fig. 7, where the symbol "1" means that two regions are regarded as similar, and the symbol "0" means that two regions are non-similar in content.
To support ROI queries, we use one-to-many relationships to find a collection of similar region sets, e.g., the two sets ROI_1 and ROI_2 shown in Fig. 8, which are considered as region-of-interest sets that reflect the user's query perception. If the user is interested in many regions, a simple merging process can be used to capture the query concept. In Fig. 8, the regions grouped into ROI_1 and ROI_2 belong to the relevant images I_1 and I_2, respectively. It can be seen that the similarity matrix approach is consistent with human perception and is efficient for region-based comparison.
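The similarity matrix and the one-to-many ROI sets derived from it can be sketched generically, with the similarity predicate (e.g., the conjunctive color/texture rule above) passed in as a function:

```python
def similarity_matrix(query_regions, db_regions, similar):
    """Binary similarity matrix as in Fig. 7: entry [i][j] = 1 when
    query region i and candidate region j are rated similar by the
    given predicate."""
    return [[1 if similar(q, r) else 0 for r in db_regions]
            for q in query_regions]

def roi_sets(matrix, db_regions):
    """One-to-many relationships as in Fig. 8: each query region
    collects the candidate regions marked 1 in its row."""
    return [[r for r, flag in zip(db_regions, row) if flag]
            for row in matrix]
```

With a toy equality predicate over integer region labels, two query regions against four candidate regions produce one ROI set per query region.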

Salient region model
To improve retrieval performance, all the region-of-interest sets from the relevant image set R_s are integrated for the next step of relevance feedback. As described in the previous subsection, each region-of-interest set can be regarded as a collection of regions whose extracted information identifies the user's query concept. However, correctly capturing the semantic concept from the similar regions is still a difficult task. In this stage, we define a salient region as the merger of all similar regions within each ROI set; the features of the new region are the weighted averages of the features of the individual regions.
In order to emphasize the percentage-of-area feature, we modify the dominant color descriptor of Eq. (1). The feature representation of the salient region SR is described as

SR = { (C_i, P_i), i = 1, ..., 8; R_poa }

where C_i is the ith average dominant color of the similar regions.
All similar regions in an ROI are assigned to the eight uniformly divided partitions of the RGB color space, as shown in Fig. 9. The average dominant color of partition i is

C_i = (1 / N_c_i) Σ_{j=1}^{N_c_i} ( R_c_i^j(R), R_c_i^j(G), R_c_i^j(B) )

where N_c_i is the number of dominant colors in cluster i; R_c_i^j(R), R_c_i^j(G) and R_c_i^j(B) are the R, G and B components of the jth dominant color located within partition i; R_p_i^j is the percentage of the corresponding 3-D dominant color vector in region R_j; P_i is the average percentage of dominant color in the ith coarse partition, i.e., P_i = (1 / N_c_i) Σ_j R_p_i^j; and R_poa is the average percentage of area over all similar regions in the ROI.
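A sketch of this salient-region computation, assuming each octant of RGB space (split at 128 per channel) is one of the eight uniform partitions:

```python
def salient_region(regions):
    """Each region is (dcd, poa), where dcd is a list of
    ((R, G, B), percentage) pairs. Dominant colors of all similar
    regions are binned into the 8 RGB octants; each non-empty octant
    contributes an average color and average percentage, and the
    salient region's area feature is the mean poa over the regions."""
    bins = {}
    for dcd, poa in regions:
        for (c, p) in dcd:
            key = tuple(ch >= 128 for ch in c)   # which of the 8 octants
            bins.setdefault(key, []).append((c, p))
    sr = []
    for entries in bins.values():
        n = len(entries)
        avg_c = tuple(sum(c[i] for c, _ in entries) / n for i in range(3))
        avg_p = sum(p for _, p in entries) / n
        sr.append((avg_c, avg_p))
    avg_poa = sum(poa for _, poa in regions) / len(regions)
    return sr, avg_poa
```

Two similar regions whose single dominant colors fall in the same octant collapse to one averaged (color, percentage) entry, as in the text.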

The pseudo query image and region weighting scheme
To capture the inherent subjectivity of user perception, we define a pseudo image I+ as the set of salient regions, I+ = { SR_1, SR_2, ..., SR_n }; its feature representation can be written as in Eq. (19). During retrieval, the user chooses the best matched regions that he/she is looking for. However, the retrieval system cannot precisely capture the user's query intention in the first or second step of relevance feedback. As the number of returned positive images increases, query vectors are constructed that yield better results. Taking the average [8] of all the feedback information could introduce redundancy, i.e., information from irrelevant regions. Motivated by this observation, we suggest that each similar region in an ROI should be weighted according to the number of similar regions. For example, the salient region derived from ROI_2 in Fig. 8 is more important than that derived from ROI_1. The weights associated with the significance of the salient regions in I+ can be dynamically updated as

w_l = |ROI_l| / Σ_{k=1}^{n} |ROI_k|   (20)

where |ROI_l| represents the number of similar regions in region-of-interest set l, and n is the number of region-of-interest sets.
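Eq. (20) is straightforward to implement: each salient region's weight is the share of similar regions its ROI set contributes.

```python
def salient_region_weights(roi_sets):
    """Eq. (20): w_l = |ROI_l| / sum_k |ROI_k|, one weight per
    region-of-interest set."""
    total = sum(len(s) for s in roi_sets)
    return [len(s) / total for s in roi_sets]
```

An ROI set holding three similar regions receives three times the weight of a set holding one, so the weights always sum to 1.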

Region-based relevance feedback
In reality, inaccurate segmentation leads to poor matching results. However, it is difficult to ask users for precisely segmented regions. Based on the foreground assumption, we define three feature vectors, extracted from the entire image (i.e., the global dominant colors), the foreground and the background, respectively. The advantage of this approach is that it provides an estimation that minimizes the influence of inaccurate segmentation. To integrate the two regional approaches, we summarize our relevance feedback procedure as follows.
For the initial query, the similarity S(F_I, F_I′) between the entire-image features of the initial query image I and each target image I′ in the database is computed using Eq. (4), giving a coarse relevant image set. Then, all regions in the initial query image I and in the positive images identified by the user's feedback are merged into the relevant image set R_s = { I, I_1, I_2, ..., I_N }. The proposed region-based similarity matrix model applies Eq. (14) and (15) to find the collection of similar regions. The similar regions are determined by Eq. (16) and then merged into a salient region SR. For the next iteration, the feature representation of I+ in Eq. (19) can be regarded as an optimal pseudo query image characterized by salient regions.
It should be noted that I+ and R_s defined above both contain relevance information that reflects human semantics. The similarity measure between the pseudo query image I+ and a target image I′ is

S(I+, I′) = Σ_{l=1}^{n} w_l · max_{1≤j≤m} R_S_C(SR_l, R_j)   (21)

where n is the number of salient region sets in I+; m is the number of color/texture segmented regions in the target image I′; and w_l is the weight of salient region SR_l. In Eq. (21), the image-to-image similarity matching maximizes the region-based color similarity of Eq. (14). If the Boolean variable BV = 1 for a partitioned region in the target image, that background region is excluded from the matching in Eq. (21).
It is worth mentioning that the region-based relevance feedback approach defined above is able to reflect human semantics: the user becomes aware of relevant images from the initial query results and then provides positive images.
To capture the user's perception more precisely, the system determines the retrieval rank according to the average of the region-based image similarity measures.

Experimental results
We use a general-purpose image database (31 categories, about 3991 images) of Corel photos to evaluate the performance of the proposed framework. The database contains a variety of images, including animals, plants, vehicles, architecture, scenery, etc. It has the advantages of large size and wide coverage [11]. Table 1 lists the labels of the 31 classes. The effectiveness of our proposed region-based relevance feedback approach is evaluated on this database.
In order to compare retrieval performance, both the average retrieval rate (ARR) and the average normalized modified retrieval rank (ANMRR) [26] are applied. An ideal result has ARR values equal to 1 for all values of recall. A high ARR value represents a good retrieval rate, and a low ANMRR value indicates a good retrieval rank. The brief definitions are as follows. For a query q,

RR(q) = NF(β, q) / NG(q),   ARR = (1/NQ) Σ_q RR(q)

NMRR(q) = [AVR(q) − 0.5 − 0.5·NG(q)] / [1.25·K − 0.5 − 0.5·NG(q)],   ANMRR = (1/NQ) Σ_q NMRR(q)

where NQ is the total number of queries; NG(q) is the number of ground truth images for query q; β is a factor, and NF(β, q) is the number of ground truth images found within the first β·NG(q) retrievals. AVR(q) is the average of Rank(k) over the ground truth, where Rank(k) is the rank of the kth ground truth image in the retrieval list. In the NMRR definition, K = min(4·NG(q), 2·GTM), where GTM is max{NG(q)} over all queries. The NMRR and its average (ANMRR) are normalized to the range [0, 1].
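These metrics can be sketched as follows. The penalty rank 1.25·K assigned to ground-truth images missed within the top K follows the common MPEG-7 convention and is an assumption here, since the chapter does not spell out the penalty:

```python
def retrieval_rate(ranks, ng, beta=2):
    """RR(q): fraction of the NG ground-truth images found within the
    first beta*NG retrievals (NF / NG). `ranks` are 1-based."""
    return sum(1 for r in ranks if r <= beta * ng) / ng

def nmrr(ranks, ng, gtm):
    """MPEG-7-style NMRR for one query: 0 for a perfect ranking,
    1 when every ground-truth image is missed within the top K."""
    K = min(4 * ng, 2 * gtm)
    # Ground-truth images outside the top K are penalized with 1.25*K.
    avr = sum(r if r <= K else 1.25 * K for r in ranks) / ng
    mrr = avr - 0.5 - 0.5 * ng
    return mrr / (1.25 * K - 0.5 - 0.5 * ng)
```

Averaging these per-query values over all NQ queries gives the ARR and ANMRR reported in the tables.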
To test the performance of our integrated approach for region-based relevance feedback, we first query an image of a gorilla sitting on grass, as shown in Fig. 10(a).
As mentioned in Section 5.4, the dominant colors of the query image I and each target image I′ are used for the similarity measurement in the initial query. The retrieval results are shown in Fig. 10(b), where the top 20 matching images are arranged from left to right and top to bottom in order of decreasing similarity score. For a better understanding of the retrieval results, the DCD vectors of the query image, the rank-6 image and the rank-8 image are listed in Fig. 11. It can be seen that the query image and the image "lemon" are very similar in their first dominant color (marked by a box). If we use the global DCD as the only feature for image retrieval, the system returns only eleven correct matches. Therefore, further investigation into extracting comprehensive image features is needed. Assume that the user has selected the five best matched images, marked by red boxes, as shown in Fig. 10(a). In the conventional region-based relevance feedback approach, all regions in the initial query image I and the five positive images are merged into the relevant image set R_s; the corresponding retrieval results are shown in Fig. 12. The following are discussions.

1. The pseudo query image I+ is capable of reflecting the user's query perception. Without considering the Boolean model in Eq. (21), the similarity measure of Eq. (21) returns 16 correct matches as shown in Fig. 12.

2. Using the pseudo image I+ as the query image, the initial query image is ranked not first but fifth, as shown in Fig. 12.

3. The retrieval results include three dissimilar images (marked by red rectangle boxes), whose ranks are 7th, 8th and 12th, respectively.

4. To analyze the improper result, the dominant color vectors and percentages of area of "cucumber" and "lemon" are listed in Fig. 13. We can see that each of the images "gorilla", "cucumber" and "lemon" contains three segmented regions. For each region, the number of dominant colors, the percentage of area and the BV value are listed and colored red. For similarity matching, the dominant colors of the initial image "gorilla" (region#1, region#2 and region#3) are similar to the dominant color (marked by a red rectangle box) of the image "cucumber". In addition, the percentages of area (0.393911, 0.316813, 0.289276) of the initial image "gorilla" are close to the percentage of area of region#2 of "cucumber" (0.264008). The other similarity comparisons between the "gorilla" and "cucumber" images are not presented here because the maximum similarity between two regions in Eq. (14) is very small. In brief, without excluding irrelevant regions, the region-based image-to-image similarity model in Eq. (21) can cause improper ranks.

Figure 13. The analysis of retrieval results using the conventional region-based relevance feedback approach. Top row: dominant color distributions and percentage of area Poa for each region in the initial query image and the "cucumber" and "lemon" images. Bottom row: the corresponding segmented images.
The retrieval performance can be improved by automatically determining the user's query perception. In the following, we would like to evaluate the advantages of our proposed relevance feedback approach. For the second query, the integrated region-based relevance feedback contains not only the salient-region information, but also the "specified-region" information based on relevant images set R s . The retrieval results based on our integrated region-based relevance feedback are shown in Fig. 14. Observations and discussions are described as follows.

1. The system returns 18 correct matches as shown in Fig. 14.

2. In Fig. 13, region#1 and region#3 in the query image are two grass-like regions, which are labeled as background regions, i.e., BV = 1. On the other hand, region#2 in the image "cucumber" is a green region that is similar to the grass-like regions in the query image. In our method, this problem is solved by examining the BV value in Eq. (21). As we can see, none of the three incorrect images "cucumber", "lemon" and "carrot" in Fig. 12 appears in the top 20 images in Fig. 14.

3. In contrast, it is possible that the grass-like regions are part of the user's interest. In this case, the three feature vectors extracted from the entire image, the foreground and the background can be used to avoid this loss of generality. The retrieval results in Fig. 14 indicate that high performance is achieved using these features.

4. Our proposed relevance feedback approach captures the query concept effectively. In Fig. 14, most of the retrieval results are highly correlated; in this example, 90% of the top 20 images are correct. In general, all retrieval results look similar to a gorilla or grass. The results reveal that the proposed method improves the performance of region-based image retrieval.

In Figs. 15-17, further examples are tested to evaluate the performance of the integrated region-based relevance feedback for natural images. In Fig. 15, the query image contains a red car on a country road beside grassland. If the user is only interested in the red car, the four positive images marked by red boxes are selected as shown in Fig. 15(b). In this case, the retrieval results (RR = 0.25, NMRR = 0.7841) for the initial query are far from satisfactory. After the submission of the pseudo query image I+ and the relevant image set R_s based on the user's feedback information, the first feedback retrieval returns 10 images containing a "red car", as shown in Fig. 16. For this example, the first feedback retrieval achieves an ARR improvement of 28.6%. More precise results are achieved by increasing the number of region-of-interest sets and the size of the relevant image set in the second feedback retrieval, as shown in Fig. 17. The second feedback retrieval returns 11 images containing a "red car" and achieves an NMRR improvement of 35% compared with the initial query. Furthermore, the rank order in Fig. 17 is more reasonable than that in Fig. 16.
To show the effectiveness of the proposed region-based relevance feedback approach, quantitative results for each class and the average performance (ARR, ANMRR) are listed in Tables 2 and 3, which compare the performance for each query. It can be seen that the retrieval precision and rank are relatively poor for the initial query. As the user adds positive examples, the feedback information gains more potential to capture the user's query concept through the optimal pseudo query image I+ and the relevant image set R_s, as described in Section 5.4. In summary, the first feedback query improves ARR by 30.8% and ANMRR by 28%, and the second feedback query further improves ARR by 10.6% and ANMRR by 11% compared with the first feedback query. Although the gain in retrieval efficiency decreases progressively after two or three feedback queries, the proposed technique provides satisfactory retrieval results within a few feedback queries.

Conclusion
Existing region-based relevance feedback approaches work well in some specific applications; however, their performance depends on the accuracy of the segmentation technique. To address this problem, we have introduced a novel region-based relevance feedback scheme for image retrieval with the modified dominant color descriptor. The term "specified area", which distinguishes main objects from irrelevant regions in an image, has been defined to compensate for the inaccuracy of the segmentation algorithm. In order to construct the optimal query, we have proposed the similarity matrix model to form the salient region sets. Our integrated region-based relevance feedback approach contains relevance information, including the pseudo query image I+ and the relevant image set R_s, which is capable of reflecting the user's query perception. Experimental results indicate that the proposed technique achieves precise results on a general-purpose image database.