Texture Analysis in Magnetic Resonance Imaging: Review and Considerations for Future Applications

Written By

Andrés Larroza, Vicente Bodí and David Moratal

Submitted: 26 November 2015 Reviewed: 17 June 2016 Published: 26 October 2016

DOI: 10.5772/64641

From the Edited Volume

Assessment of Cellular and Organ Function and Dysfunction using Direct and Derived MRI Methodologies

Edited by Christakis Constantinides

Abstract

Texture analysis is a technique used for the quantification of image texture. It has been successfully used in many fields, and in the past years it has been applied to magnetic resonance imaging (MRI) as a computer-aided diagnostic tool. Quantification of the intrinsic heterogeneity of different tissues and lesions is necessary, as this heterogeneity is usually imperceptible to the human eye. In the present chapter, we describe texture analysis as a process consisting of six steps: MRI acquisition, region of interest (ROI) definition, ROI preprocessing, feature extraction, feature selection, and classification. There is a great variety of methods and techniques to choose from at each step, and all of them can somehow affect the outcome of a texture analysis application. We reviewed the literature regarding texture analysis in clinical MRI, focusing on the important considerations to be taken into account at each step of the process in order to obtain maximum benefits and to avoid misleading results.

Keywords

  • texture analysis
  • magnetic resonance imaging
  • classification
  • computer aided diagnosis
  • segmentation

1. Introduction

Magnetic resonance imaging (MRI) has become a powerful diagnostic tool by providing high quality images, thanks to new advances in technology. MRI offers excellent anatomic detail due to its high soft-tissue contrast and the possibility to enhance different types of tissues using different acquisition protocols. However, diagnosis of some pathologies remains difficult due to the restricted ability of the human eye to detect intrinsic, heterogeneous characteristics of certain tissues. For example, the visual appearance on MRI of a metastatic brain tumor can be very similar to that of a radionecrosis lesion (Figure 1), and a wrong diagnosis can lead to improper patient treatment. In these particular cases, histopathology remains the gold standard diagnostic technique. In an effort to avoid this invasive diagnostic approach, and considering that additional imaging modalities are costly and not as widely available as conventional MRI, there is great interest in identifying reliable imaging features from routine MRI scans that would help differentiate certain lesions [1].

Figure 1.

T1-weighted MRI with contrast enhancement of a brain metastatic lesion (a) and a radionecrosis lesion (b). Discrimination of these entities is crucial for patient treatment, but it is not feasible by visual inspection. Texture analysis has been shown to be a useful tool for this purpose [1, 19].

Computer-aided diagnostic tools assist the radiologist by providing quantitative measures of morphology, function, and other biomarkers in different tissues. In the past years, texture analysis has gained attention in medical applications and has proven to be a valuable computer-aided diagnostic tool [2]. There is no strict definition of image texture, but it can be described as the spatial arrangement of patterns that provides the visual appearance of coarseness, randomness, smoothness, etc. [3]. Texture analysis covers a wide range of techniques for the quantification of gray-level patterns and pixel inter-relationships within an image, providing a measure of heterogeneity. It has been shown that different image areas exhibit different textural patterns that are sometimes imperceptible to the human eye [2].

Applications of texture analysis in medical imaging include classification and segmentation of tissues and lesions. A search for papers containing the keywords “texture” and “MRI” in the title was performed in SciVerse Scopus (https://www.scopus.com) on January 19, 2016, retrieving 200 papers, of which 140 were original studies dealing with texture analysis in clinical MRI. The distribution of these studies per organ is shown in Figure 2. It is clear that interest in texture analysis has increased in recent years and that most attention has been paid to neurological applications. Some brain applications include discrimination between different types of tumors [4, 5], classification of diseases such as Alzheimer’s disease [6] or Friedreich ataxia [7], and brain segmentation [8, 9]. Following brain studies, we found applications in liver, breast, and prostate [10–12], as well as in cardiac MRI for detection of scarred myocardium and classification of patients with low and high risk of arrhythmias [13, 14].

Figure 2.

Distribution of original publications regarding texture analysis in MRI according to the studied organ.

This work is presented as a literature review of the most relevant publications regarding texture analysis in MRI. Rather than providing a detailed summary of the state of the art found in the literature search, we focus on the texture analysis process itself, giving particular attention to papers that compared different methods, so as to identify the most suitable approaches for certain applications.

2. Texture analysis process

Texture analysis applications involve a process that consists of six steps: MRI acquisition, region of interest (ROI) definition, ROI preprocessing, feature extraction, feature selection, and classification (Figure 3). No single method is prescribed for any of these steps; the techniques have to be chosen according to the application. The texture outcome can be considerably affected by the methodology used throughout the process. Herein, we present a condensed description of each step of the texture analysis process, focusing on applications that compared different scenarios or methods.

Figure 3.

Main steps for MRI classification by means of texture analysis. ROI: region of interest.

2.1. MRI acquisition

Magnetic resonance imaging is widely used nowadays because of its high soft-tissue contrast and the possibility to enhance specific tissues by varying the acquisition sequence parameters. In this respect, the outcome of texture analysis strongly relies on the image acquisition protocols, and these should be carefully selected in order to obtain maximum accuracy and reproducibility. Different measuring techniques produce different patterns in texture and these may vary among centers and manufacturers [15]. Texture analysis can be used reliably at one center with a specific imaging protocol but this does not mean that the same methodology can be directly applied to images acquired at different centers with different protocols [16].

2.1.1. Sequences

Three relevant MRI tissue parameters can be measured in a typical spin echo (SE) sequence: spin density (ρ), spin-lattice relaxation time (T1), and spin-spin relaxation time (T2), each of them yielding different image contrast and texture. Examples of MR images weighted in these three parameters are shown in Figure 4. Other imaging techniques, like the gradient echo (GE) fast low angle shot (FLASH), introduce significant effects on image texture due to their own measuring characteristics [17]. Repetition time (TR), bandwidth/echo time (BW/TE), and flip angle are the properties most likely to be altered in a clinical setting. Repetition time had the biggest impact when different foam phantoms were compared using clinical breast MRI protocols, with better texture discrimination at higher TR [18].

The choice of the MRI sequence for texture analysis depends on the application. Contrast-enhanced T1-weighted imaging is the current standard MRI protocol used by clinicians to assess brain tumors and was used for texture analysis in [1, 4]. Some studies compared different modalities, obtaining diverse results. In the study of Tiwari et al. [19], contrast-enhanced T1-weighted images provided better performance than T2-weighted and fluid-attenuated inversion recovery (FLAIR) images for discrimination of recurrent brain tumors from radiation-induced lesions. T1-weighted MRI was also notably better than FLAIR images for dementia classification [20]. T2-weighted images were more suitable for differentiation between benign and malignant tumors [21, 22] and for discrimination of posterior fossa tumors in children [23]. Texture analysis applied to diffusion-weighted images also proved to be efficient for brain tumor classification [24–26]. The texture features used in these studies differ from each other, so no definitive conclusion about which MRI sequence is best can be drawn.

Figure 4.

Spin echo images of a patient with meningioma. (a) ρ-image (TR/TE = 2000 ms/10 ms). (b) T1 image (TR/TE = 600 ms/10 ms). (c) T2 image (TR/TE = 2000 ms/100 ms). TR: repetition time, TE: spin echo time. Reproduced with the permission of the publisher (Les Laboratoires Servier ©, Suresnes, France) from [17].

2.1.2. Influence of spatial resolution and signal-to-noise ratio

Spatial resolution and signal-to-noise ratio (SNR) have been reported to be the most influential factors for texture analysis [15, 27, 28]. Image resolution is defined by slice thickness, field of view (FOV), and matrix size. Signal-to-noise ratio is defined as the ratio of the mean signal over a homogeneous region of a tissue of interest to the standard deviation of the background noise. Texture discrimination improves with higher levels of SNR, and it has been reported that an SNR > 4 is necessary to measure the real textural behavior of the human brain [17]. Discrimination based on texture analysis also improves with higher spatial resolution, as shown by Jirak et al. [29], who found the best separation of three different phantoms at a pixel resolution of 0.45 × 0.45 mm² (good separation was also found at 0.77 × 0.90 mm², whereas the worst discrimination was obtained at the lowest tested resolution of 1.53 × 1.80 mm²). Texture analysis fails if the image resolution is insufficient, since the finest textural details cannot be resolved. Texture features from higher spatial resolution images are, however, more sensitive to variations in the acquisition parameters; in [28], the least affected resolution was found to be 0.8 × 0.8 mm².
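As a simple illustration of the SNR definition used above, the following sketch computes the ratio of the mean tissue signal to the standard deviation of the background noise. The two boolean masks, the synthetic image, and all numeric values are hypothetical placeholders, not part of any study cited here.

```python
import numpy as np

def estimate_snr(image, tissue_mask, background_mask):
    """SNR as defined above: mean signal over a homogeneous tissue ROI
    divided by the standard deviation of the background noise."""
    return image[tissue_mask].mean() / image[background_mask].std()

# Hypothetical usage with a synthetic image and hand-placed masks
rng = np.random.default_rng(0)
img = rng.normal(120.0, 8.0, size=(128, 128))        # "tissue" signal
img[:, :24] = rng.normal(0.0, 15.0, size=(128, 24))  # "background" noise strip
tissue = np.zeros(img.shape, dtype=bool); tissue[40:80, 60:100] = True
background = np.zeros(img.shape, dtype=bool); background[:, :24] = True
print(f"SNR = {estimate_snr(img, tissue, background):.1f}")
```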

Although current routine MRI scanners can produce high-resolution images, such acquisitions are susceptible to motion artifacts given the long scan times and are not widespread in clinical practice. In [30], a strong correlation was found between 3D structural indices and 3D texture features of trabecular bone in osteoporosis using routine, low-resolution images (0.7 mm), indicating that these can be used to quantify bone architecture without the need for higher resolution images. These results indicate that even though high-resolution images provide better texture discrimination, their application in clinical practice is complicated because good reproducibility among centers cannot be expected. Slice thickness does not appear to significantly influence the outcome of 2D texture analysis, according to Savio et al. [31], who found only moderate differences between 1 mm and 3 mm thickness for separation of white matter tissue and multiple sclerosis plaques.

2.1.3. Influence of field strength

One important difference among MRI scanners is the field strength of the magnet, the most common values in clinical routine nowadays being 1.5T and 3T. Scanners with higher field strength provide higher SNR, thus increasing spatial and temporal resolution. On the other hand, artifacts resulting from breathing or any other type of body motion are more prominent on 3T than on 1.5T scanners, but these are generally compensated for using techniques offered by manufacturers [32]. Better texture-based discrimination is expected on the higher quality images acquired on 3T scanners, as reported for liver fibrosis [33] and breast cancer classification [34]. In [22], they found significant differences between 1.5T and 3T when squamous cell carcinoma tumors of the head and neck were compared. However, their results are in contrast with previous evidence [33, 34], since discrimination between benign and malignant tumors was better at 1.5T. In the study performed by Waugh et al. [18], texture discrimination of foam phantoms using different clinical breast MRI protocols was in general improved when a 3T scanner was used, but changes in the imaging parameters at 1.5T had less influence on the texture outcome.

2.1.4. Multicenter studies

Few multicenter studies regarding the application of texture analysis in MRI have been published. In [21], they concluded that texture analysis on MRI can discriminate between different brain tissues obtained in routine procedures at three different centers. In [16], they compared the classification performance for discriminating between bone marrow and fat tissue on T1-weighted MRI of knees from 63 patients obtained at three centers with two different field strengths: two centers at 1T and one at 3T. Texture information extracted from two centers was used to predict tissue type in data from the third center, leading to the conclusion that feature sets from one center may be used for tissue discrimination in data from other centers. Pixel size was found to be the parameter that most influences the texture outcome. In a very large multicenter study, Karimaghaloo et al. [8] analyzed 2380 scans from 247 different centers for segmentation of multiple sclerosis lesions, achieving an overall sensitivity of 95% on a separate dataset of 120 scans from 24 centers. The promising results of this study may be the consequence of extracting texture features from different MRI protocols (T1, T2, proton density, and FLAIR) and using them in combination when modeling the classifier. It should also be noted that images were corrected for nonuniformity effects and were normalized into a common spatial and intensity space, thus reducing the possible differences among multicenter scans. Opposite conclusions were reached by Fruehwald-Pallamar et al. [22], who stated that texture analysis is useful for discrimination of benign and malignant tumors when using one scanner with the same protocol, but is not recommended for multicenter studies. However, they did not mention any image normalization or inhomogeneity correction, the absence of which could have affected their results, as we discuss in Section 2.3.

2.2. Region of interest definition

Texture features are computed inside a predefined region of interest (ROI), or volume of interest (VOI) in the case of 3D texture analysis, usually placed over a homogeneous tissue or lesion area. Manual definition of ROIs is still considered the gold standard in many applications and is often the chosen option over automatic methods [35–38]. Different approaches have been used to define ROIs, and they extend to 3D texture analysis as well. One approach is the positioning of squares [39] or circles [40] of predefined size over the tissue to be analyzed. Using this approach, only information from the underlying tissue is captured, but some texture details can be lost because the ROI does not cover the entire area of interest. Another alternative is to use a bounding box defined as the smallest enclosing rectangular area of the tissue of interest [41, 42]. The latter approach has the advantage that it covers the entire tissue or lesion; however, it also includes information from adjacent parts that can affect texture quantification. Although delineation of the entire tissue or lesion can be tedious, it is a better approach since the whole area of interest is included [23, 43]. In [44], they studied the effect of lesion segmentation on the diagnostic accuracy for discriminating benign and malignant breast lesions. They concluded that, for both 2D and 3D texture analysis, delineation of the entire lesion provides better accuracy than the bounding box approach. Figure 5 shows examples of the three aforementioned ROI definition approaches.

Figure 5.

Approaches for defining a region of interest (ROI) over a brain tumor. The use of a bounding box that covers the entire lesion (a), or a small square inside the tumor (b) can be defined quickly and easily, but the delineation of the entire lesion (c) is preferred in order to capture the maximum texture information only within the area of interest.

2.2.1. Size of the region of interest

The size of the ROI should be sufficiently large to capture the texture information with statistical significance [45]. In [46], they studied the effect of ROI size on various texture features extracted from circular ROIs of 10 different sizes on brain MRI of healthy adults. They concluded that the effect of size becomes insignificant when large ROIs are used. In general, texture features were highly affected at ROI areas smaller than 80 × 80 pixels and became unaffected at ROI areas of around 180 × 180 pixels. These results hold for certain texture features but may vary across the extensive range of available texture analysis methods. In Section 2.4 we discuss the texture analysis methods most frequently applied in MRI. It is also important to note that the appropriate ROI size may depend on the MRI acquisition parameters: a ROI of 180 × 180 pixels covers a very different physical area on an image with 1.5 × 1.5 mm² resolution than on one with 0.5 × 0.5 mm² resolution. The MR images used by Sikiö et al. [46] had a pixel size of 0.5 × 0.7 mm² with a slice thickness of 4.0 mm. A good way to avoid possible influences of ROI size is to use squares or circles of the same size for all studied samples, but, as mentioned before, complete delineation of the lesion may offer better results. We recommend the delineation approach when the range of lesion sizes among samples is not too broad and when the selected texture features are not affected within that range; otherwise, ROIs of equal size may be a better choice.

2.2.2. Feature maps

Texture feature maps can be computed by defining ROIs as sliding blocks of n × n pixels centered at each pixel of the image, so that for each pixel a texture feature value is computed from its surrounding neighborhood. The block size should be large enough to capture sufficient texture information from each pixel neighborhood, but small enough to capture local characteristics allowing finer detection of regions [45]. Figure 6 shows examples of texture maps computed with sliding blocks of different sizes. Texture maps can reveal characteristics that are not visible on the original image and are mainly used for segmentation tasks [47]. Computing features over texture maps can lead to better results than using the original MR images [48].
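A minimal sketch of how such a feature map can be produced, here for local entropy over a sliding block; the block size, the 32-bin histogram, and the variable names are arbitrary choices, and SciPy's `generic_filter` is used only for the sliding-window machinery.

```python
import numpy as np
from scipy.ndimage import generic_filter

def local_entropy(window, bins=32):
    """Shannon entropy of the gray-level histogram inside one sliding block."""
    hist, _ = np.histogram(window, bins=bins, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def entropy_map(image, block_size=5, bins=32):
    """Texture feature map: one entropy value per pixel, computed over the
    block_size x block_size neighborhood centered at that pixel (cf. Figure 6)."""
    img = image.astype(np.float64)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)  # rescale to [0, 1]
    return generic_filter(img, local_entropy, size=block_size,
                          extra_keywords={"bins": bins})

# maps_5x5 = entropy_map(mr_slice, block_size=5)   # mr_slice: a 2D numpy array
# maps_9x9 = entropy_map(mr_slice, block_size=9)
```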

Figure 6.

Texture feature maps of a cardiac MR image: (a) original image, (b) entropy feature map computed with a sliding block with a size of 5 × 5 pixels, and (c) entropy feature map computed with a sliding block of 9 × 9 pixels.

2.3. Region of interest preprocessing

It is clear from Section 2.1 that MRI acquisition protocols are relevant for texture analysis. Several preprocessing techniques have been proposed to minimize the effects of the acquisition protocols; these are especially important when dealing with multicenter studies. The main purpose of these preprocessing techniques is to put all ROIs in the same condition, so that the features extracted from them essentially represent the texture being examined. Some preprocessing methods also aim to improve texture discrimination. For example, Assefa et al. [49] extracted texture features from a power map computed from the localized Hartley transform of the image, and Chen et al. [50] computed features from ROIs defined over texture maps.

2.3.1. Interpolation

Image spatial resolution is one of the most influential factors in texture analysis, and it has been demonstrated that higher resolutions tend to improve texture-based classification; however, high-resolution images are not usually available in clinical routine [29, 30]. Image interpolation is an option to enhance images with a low spatial resolution. The effect of image interpolation on texture features was analyzed by Mayerhoefer et al. [51], who compared three interpolation methods applied to T2-weighted images acquired at five different resolutions. They concluded that MR image interpolation has the potential to improve the results of texture-based classification, recommending a maximum interpolation factor of four. In their study, the most considerable improvements were found when images with an original resolution of 0.94 × 0.94 mm² and 0.47 × 0.47 mm², respectively, were interpolated by factors of two or four using the zero-fill interpolation technique at the k-space level. Image interpolation is of special interest when dealing with 3D texture analysis because in most MRI sequences the slice thickness is larger than the in-plane resolution. Re-slicing all images to obtain isotropic resolution is required for computing texture features that conserve scales and directions in all three dimensions [52].
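As an illustration of the re-slicing step, the sketch below resamples an anisotropic volume to isotropic voxels using cubic-spline interpolation from SciPy; the voxel spacing values and variable names are hypothetical, and other interpolation schemes (including the k-space zero-fill technique mentioned above) can of course be substituted.

```python
import numpy as np
from scipy.ndimage import zoom

def resample_isotropic(volume, spacing, new_spacing=1.0, order=3):
    """Re-slice an anisotropic MRI volume to isotropic voxels.
    `spacing` gives the voxel size in mm along each array axis (z, y, x)."""
    factors = np.asarray(spacing, dtype=float) / float(new_spacing)
    return zoom(volume, zoom=factors, order=order)  # order=3: cubic spline

# Hypothetical example: 3 mm slices with 0.9 x 0.9 mm in-plane resolution
# iso_volume = resample_isotropic(volume, spacing=(3.0, 0.9, 0.9), new_spacing=0.9)
```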

2.3.2. Normalization

It has been demonstrated that some features are not only dependent on texture, but also on other ROI properties, such as the mean intensity and variance [53]. To avoid the influence of such factors, ROI normalization is a recommended preprocessing step (Figure 7). In [54], they studied the effects of ROI normalization on texture classification of T2-weighted images and demonstrated that classification errors were dependent on the MR acquisition protocols if no normalization was applied. They compared three methods, and the one that yielded the best results is known as the “±3σ” normalization. In this method, image intensities are normalized between µ ± 3σ, where µ is the mean gray-level inside the ROI and σ is the standard deviation, so that gray-levels located outside the range [µ − 3σ, µ + 3σ] are not considered for further analysis. Enhancing the variations in gray-levels between neighbors in this way is a favorable factor for improving classification performance. The “±3σ” normalization technique has become the most popular choice in most publications [1, 55–57]. In another study, Loizou et al. [58] compared six MRI normalization methods applied to T2-weighted MR images from patients with multiple sclerosis and healthy volunteers. They concluded that a method based on normalization of the whole brain, in which the original histogram is stretched and shifted in order to cover a wider dynamic range, is the most appropriate for the assessment of multiple sclerosis brain lesions by means of texture analysis.
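A minimal sketch of the “±3σ” normalization described above. Here values outside the range are clipped to its limits before rescaling, which is one common variant; discarding them via a mask is equally valid, and the subsequent quantization to a fixed number of gray-levels is covered in Section 2.3.4.

```python
import numpy as np

def normalize_3sigma(roi):
    """Map ROI intensities to [0, 1] using the mu +/- 3*sigma limits;
    values outside [mu - 3*sigma, mu + 3*sigma] are clipped to the limits."""
    mu, sigma = roi.mean(), roi.std()
    lo, hi = mu - 3.0 * sigma, mu + 3.0 * sigma
    clipped = np.clip(roi.astype(np.float64), lo, hi)
    return (clipped - lo) / (hi - lo + 1e-12)
```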

Figure 7.

Example of region of interest (ROI) normalization of a cardiac MR image. The extracted ROI is shown in the original histogram and after normalization using the “±3σ” method.

2.3.3. Inhomogeneity correction

There is still another residual effect that is not eliminated by ROI normalization: the intensity variation present in MR images, mainly caused by static magnetic field inhomogeneity and imperfections of the radiofrequency coils [17]. Figure 8 shows an example of liver MRI affected by nonuniformity artifacts. Texture features depend on the local average image intensity and are therefore affected by image inhomogeneity. Correction of nonuniformity artifacts in MRI is recommended as a preprocessing step prior to ROI normalization, especially for large ROIs [59]. A review of methods for MRI inhomogeneity correction is available in [60]; the most popular method found in the texture literature [61–64] is the so-called N3 algorithm [65].
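As a hedged illustration of this preprocessing step, the sketch below applies the N4 algorithm (a later refinement of the N3 method cited above) as implemented in SimpleITK; the file paths and the Otsu-based foreground mask are assumptions, not part of any cited study.

```python
import SimpleITK as sitk

def correct_inhomogeneity(path_in, path_out):
    """Bias-field (nonuniformity) correction with the N4 algorithm,
    a successor of the N3 method, prior to ROI normalization."""
    image = sitk.Cast(sitk.ReadImage(path_in), sitk.sitkFloat32)
    mask = sitk.OtsuThreshold(image, 0, 1, 200)           # rough foreground mask (assumption)
    corrected = sitk.N4BiasFieldCorrection(image, mask)   # estimated bias field removed
    sitk.WriteImage(corrected, path_out)

# correct_inhomogeneity("liver_t1.nii.gz", "liver_t1_corrected.nii.gz")  # hypothetical paths
```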

Figure 8.

Example of a liver MRI with inhomogeneity (a); the average local image intensity in the lower left part is darker than in the upper part. The corrected image is shown in (b). Reproduced with permission from [59].

2.3.4. Quantization of gray-levels

Texture analysis methods based on matrix computation, e.g., co-occurrence and run-length matrices, require the quantization of gray-levels. A typical MR image is represented by 10 or 12 bits per pixel, that is, 1024 or 4096 levels of gray. Thus, in MRI texture analysis, quantization refers to the reduction of the number of gray-levels used to represent the image. Typical numbers of gray-levels used for texture feature computation are 16, 32, 64, 128, and 256. Reducing the number of gray-levels improves SNR and the counting statistics inherent in the matrix-based texture analysis methods, at the expense of discriminatory power [66]. Some studies reported no significant effects when different numbers of gray-levels were tested [55, 67], while in the study of Chen et al. [44] a gray-level number of 32 was reported to be an optimal choice for breast MRI. A specific study regarding the impact of the number of gray-levels on co-occurrence matrix texture features was carried out by Mahmoud-Ghoneim et al. [68]. They concluded that the number of gray-levels, or dynamic range, has a significant influence on the classification of brain white matter, obtaining an optimal number of 128 levels for both 2D and 3D texture analysis approaches. It is therefore recommended to optimize the number of gray-levels for each specific application.
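A short sketch of the gray-level quantization step, using uniform binning of the ROI intensity range into a chosen number of levels; other binning schemes (e.g., equal-probability binning) exist and may be preferable depending on the application.

```python
import numpy as np

def quantize(roi, n_levels=32):
    """Reduce a 10/12-bit ROI to n_levels gray-levels (e.g. 16, 32, 64, 128)
    before building co-occurrence or run-length matrices."""
    roi = roi.astype(np.float64)
    lo, hi = roi.min(), roi.max()
    q = np.floor((roi - lo) / (hi - lo + 1e-12) * n_levels)
    return np.minimum(q, n_levels - 1).astype(np.int32)   # integer labels 0 .. n_levels-1
```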

2.4. Feature extraction

Feature extraction is the central and most specific step in the texture analysis process and involves the computation of texture features from predefined ROIs. Many approaches have been proposed to quantify the texture of an image, allowing the computation of numerous features. In this section, we briefly describe the most popular texture analysis methods that have been successfully used to characterize tissues in MRI. A review of existing feature extraction methods can be found in [69, 70]. Although methods based on first-order statistics (histogram features) are normally used in combination with other methods, as they may improve texture-based classification or segmentation [10, 71–73], they are not presented here because they do not really describe the actual texture of the image or ROI being analyzed [70].

2.4.1. Statistical methods

Statistical methods represent texture by considering the distributions of and relationships between the gray-levels of an image. Here we briefly describe a method based on second-order statistics (the co-occurrence matrix), a method based on higher-order statistics (the run-length matrix), and a method that combines the statistical approach with structural properties of the image, known as local binary patterns (LBP).

2.4.1.1. Co-occurrence matrix

The co-occurrence matrix allows extraction of statistical information regarding the distribution of pixel pairs in the image. Pairs of pixels separated by a predefined distance and direction are counted, and the resulting values are allocated in the co-occurrence matrix. The count is based on the number of pixel pairs that have the same distribution of gray-level values [3]. Normally, co-occurrence matrices are computed in four directions (horizontal, vertical, 45°, 135°) for 2D, and in 13 directions for 3D approaches [52], using different pixel or voxel separations. The features originally proposed by Haralick et al. [74, 75] are then computed for each co-occurrence matrix. Figure 9 shows an example of the computation of a co-occurrence matrix. The pixel distance has to be chosen according to the application: a larger distance allows detection of coarser patterns, but care must be taken not to exceed the size of the ROI.

Figure 9.

Computation of a co-occurrence matrix for a given 4 × 4 pixel image (a) with three gray-levels (b). In this example, the matrix is computed in the horizontal direction for a one-pixel separation. The number of transitions between gray-levels is counted and allocated in the co-occurrence matrix (c). The circled values indicate that there are three transitions from gray-level one to gray-level two, and this count is allocated in the corresponding position of the co-occurrence matrix.
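A minimal sketch of this computation (horizontal direction, one-pixel separation, as in Figure 9, although the pixel values below are arbitrary and not those of the figure), followed by one of Haralick's features computed from the normalized matrix.

```python
import numpy as np

def cooccurrence_matrix(img, dx=1, dy=0, n_levels=3):
    """Count pairs of gray-levels separated by the offset (dy, dx)."""
    P = np.zeros((n_levels, n_levels), dtype=np.float64)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[img[r, c], img[r2, c2]] += 1
    return P

# Arbitrary 4 x 4 example image with three gray-levels coded 0, 1, 2
img = np.array([[0, 0, 1, 1],
                [0, 2, 1, 1],
                [2, 2, 2, 0],
                [1, 0, 0, 2]])
P = cooccurrence_matrix(img, dx=1, dy=0)       # horizontal, one-pixel separation
Pn = P / P.sum()                               # normalized co-occurrence matrix
i, j = np.indices(Pn.shape)
contrast = np.sum(Pn * (i - j) ** 2)           # Haralick contrast feature
```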

One main concern about matrix-based texture features is their dependence on direction, so different values may be obtained if the image is rotated. This is unacceptable for texture characterization in MRI since images from different patients may have different orientations. Rotation-invariant features can be obtained by averaging each matrix value over all directions [44, 49] or by averaging the statistical features derived from the co-occurrence matrices [47]. Texture features based on co-occurrence matrices have become the most popular method and have proven useful for classification of tissues and lesions in MRI [76–81].

2.4.1.2. Run-length matrix

Run-length matrices consider higher-order statistical information in comparison with co-occurrence matrices. Runs of a specific gray-level are counted for a chosen direction. For example, three consecutive pixels with the same gray-level value along the horizontal direction constitute one run of length three. Computation of a simple run-length matrix is shown in Figure 10. Fine textures are dominated by short runs, whereas coarse textures include longer runs [69]. The features originally proposed by Galloway [82] are usually computed for ROI characterization. Rotation invariance can be achieved by averaging over all directions, as previously mentioned for the co-occurrence matrix method.

There is one important consideration about run-length matrix features. As demonstrated by Sikiö et al. [46], the features run-length nonuniformity (RLN) and gray-level nonuniformity (GLN) are linearly dependent on the ROI size. The linear behavior of these nonuniformity features is due to their original mathematical definition, which squares the sum over the Ng gray-levels for each run length (Eq. (1)) or the sum over the Nr run lengths for each gray-level (Eq. (2)); thus, for a larger ROI there will be more runs. The normalization factor C (Eq. (3)) scales with the number of pixels and not with its square. Using the square of the normalization factor C, as proposed by Loh et al. [83], is a recommended approach to reduce the dependence on ROI size (a short sketch implementing this correction is given after Figure 10).

RLN = \frac{1}{C} \sum_{j=1}^{N_r} \left( \sum_{i=1}^{N_g} p(i,j) \right)^{2} \qquad (1)

GLN = \frac{1}{C} \sum_{i=1}^{N_g} \left( \sum_{j=1}^{N_r} p(i,j) \right)^{2} \qquad (2)

C = \sum_{i=1}^{N_g} \sum_{j=1}^{N_r} p(i,j) \qquad (3)

Figure 10.

Computation of a run-length matrix for a given 4 × 4 pixel image (a) with three different gray-levels (b). The number of runs for each gray-level and run length is allocated in the run-length matrix (c). For example, there are two runs of length two for gray-level three (circled values).
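The sketch below builds a run-length matrix in the spirit of Figure 10 (horizontal direction only; the gray-level coding is an arbitrary 0-based convention) and computes RLN following Eq. (1), optionally with the squared normalization factor C² suggested by Loh et al. [83].

```python
import numpy as np

def run_length_matrix(img, n_levels=3):
    """Horizontal run-length matrix: entry (g, l-1) counts runs of
    gray-level g having length l along the image rows."""
    R = np.zeros((n_levels, img.shape[1]), dtype=np.float64)
    for row in img:
        value, length = row[0], 1
        for v in row[1:]:
            if v == value:
                length += 1
            else:
                R[value, length - 1] += 1
                value, length = v, 1
        R[value, length - 1] += 1                # close the last run of the row
    return R

def rln(R, squared_norm=True):
    """Run-length nonuniformity, Eq. (1); with squared_norm=True the
    normalization factor C is squared, as proposed in [83]."""
    C = R.sum()
    num = np.sum(R.sum(axis=0) ** 2)             # sum over gray-levels, then square
    return num / (C ** 2 if squared_norm else C)
```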

2.4.1.3. Local binary patterns

The local binary pattern (LBP) is a texture descriptor introduced by Ojala et al. [84] that became very popular thanks to its simplicity and high discriminative power. The LBP descriptor labels each pixel in an image by comparing its gray-level with those of the surrounding pixels and then assigning a binary number. A value of one is assigned to the surrounding neighbors with gray-level greater than that of the central pixel in a predefined patch, and a value of zero otherwise. A binary number is then obtained and assigned to the central pixel. The original LBP operator considers a 3 × 3 patch, so the surrounding pixels form a binary number of 8 digits. After labeling all pixels in an image, an LBP feature map is obtained, as well as a histogram that consists of 256 bins for a 3 × 3 patch. Figure 11 summarizes the described steps. The LBP histogram can be used as a feature vector for classification, where each bin represents one feature. Another approach is to compute new features from the LBP map, as carried out by Oppedal et al. [20] and Sheethal et al. [85]. Uniform LBPs have been proposed to reduce the length of the feature histogram. An LBP binary code is uniform if it contains at most two transitions from 0 to 1 or vice versa. Examples of uniform patterns are: 00000000 (no transitions), 00001111 (one transition), and 10001111 (two transitions). Patterns with more than two transitions are labeled as nonuniform, and distinct labels are assigned to each uniform pattern. For a 3 × 3 patch, the number of bins in the uniform histogram is reduced to 59 instead of the original 256. Uniform LBP patterns function as templates for microstructures, such as spots, edges, corners, etc.

Figure 11.

Computation of a basic local binary pattern (LBP) image. For each pixel in the original image, its gray-level is compared with those of the surrounding pixels. A value of one is assigned to the pixels with gray-level greater than that of the central pixel, and a value of zero otherwise. A binary number is then obtained, and this value is assigned to the central pixel.
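A minimal sketch of the basic 3 × 3 LBP labeling just described. The neighbor ordering and bit weights are arbitrary but must be kept fixed, and ties between a neighbor and the center are here given a value of one, which is a common convention rather than something prescribed by the text above.

```python
import numpy as np

def lbp_basic(img):
    """Basic 3 x 3 local binary pattern: each interior pixel receives an
    8-bit code built from comparisons with its eight neighbors (cf. Figure 11)."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # fixed neighbor order
    rows, cols = img.shape
    codes = np.zeros((rows - 2, cols - 2), dtype=np.uint8)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            code = 0
            for bit, (dr, dc) in enumerate(offsets):
                if img[r + dr, c + dc] >= img[r, c]:
                    code |= 1 << bit
            codes[r - 1, c - 1] = code
    return codes

# 256-bin LBP histogram used as the texture feature vector
# hist = np.bincount(lbp_basic(roi).ravel(), minlength=256)
```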

The original LBP descriptor defined for a 3 × 3 patch was extended to include more neighbors by adding two parameters: a parameter P that defines the number of neighbors, and a radius R that determines the spatial resolution. Quantification at different resolutions can be obtained by varying these two parameters. The discriminatory power of the LBP descriptor can be enhanced by adding an image contrast measure that captures the local variance in the pixel neighborhood: the contrast measure is the difference between the average gray-level of the pixels with value one and that of the pixels with value zero [14, 20]. Rotation invariance is achieved by performing a bit-wise shift operation on the binary pattern P − 1 times and keeping the smallest resulting LBP value. It has been shown by Unay et al. [86] that rotation-invariant LBP is robust against some common MRI artifacts; their results show that LBP is robust to image inhomogeneity even at intensity variations of 40%. An extension of the LBP operator to three orthogonal planes, known as LBP-TOP, was proposed by Zhao et al. [87] and successfully applied to the entire brain for 3D texture classification of attention-deficit/hyperactivity disorder [88].

2.4.2. Model-based methods

Model-based texture methods attempt to represent texture through a generative image model (e.g., fractals) or a stochastic model. The parameters of the model are then estimated and used for texture analysis. The computational complexity involved in the estimation of model-based features makes them less popular than the previously described methods [70].

2.4.2.1. Autoregressive models

The autoregressive model assumes a local interaction between image pixels by considering each pixel's gray-level as a weighted sum of the gray-levels of its neighbors. Using the autoregressive model involves identifying the model parameters for a given image region and then using them for texture classification [89]. In the study by Holli et al. [40], significant differences were found especially for features derived from the autoregressive model when comparing brain hemispheres in controls and patients with mild traumatic brain injury. Application of the autoregressive model to other organs in MRI was also found beneficial when used in combination with features derived from other methods [4, 64, 90, 91].

2.4.2.2. Fractal models

Fractal models describe objects that have a high degree of irregularity. The central concept of fractal models is self-similarity, the property of an object to be decomposed into smaller, similar copies of itself. Fractal analysis methods provide a statistical measure that reflects pattern changes as a function of scale by defining a parameter called the fractal dimension. The fractal dimension describes the disorder of an object numerically; the higher the dimension, the more complicated the object. The fractal dimension is often estimated by box counting, a procedure that overlays the image with grids of decreasing size in order to capture the contour of the relevant texture. Another approach treats the image as a textured surface by plotting the gray-levels at each x and y position in the z plane [69, 89, 92]. Fractal models have been successfully used especially for segmentation of brain tissues and lesions [72, 93], and for prostate tissue classification in combination with other methods [26, 94]. For brain tumor evaluation, Yang et al. [63] achieved slightly better results using fractals in comparison with other methods such as LBP, the co-occurrence matrix, and the run-length matrix.
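A short box-counting sketch for estimating the fractal dimension of a binary structure (for example a segmented lesion boundary). This is only the simplest variant; gray-level fractal estimators that treat the image as a textured surface, as mentioned above, are more elaborate and not shown. The function assumes a non-empty 2D binary array of reasonable size.

```python
import numpy as np

def box_counting_dimension(binary_img):
    """Fit log(box count) against log(1/box size) over grids of
    decreasing size; the slope estimates the fractal dimension."""
    n = min(binary_img.shape)
    sizes = [2 ** k for k in range(1, int(np.log2(n)))]
    counts = []
    for s in sizes:
        h = (binary_img.shape[0] // s) * s
        w = (binary_img.shape[1] // s) * s
        blocks = binary_img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))   # boxes with >=1 "on" pixel
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# dim = box_counting_dimension(lesion_mask.astype(bool))  # lesion_mask: hypothetical 2D binary array
```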

2.4.3. Transform methods

Methods based on image transformation produce an image in a space whose coordinate system is related to texture characteristics, such as frequency content or spatial resolution. Gabor filters provide better spatial localization than the Fourier transform, but their usefulness is limited in practice because there is no single filter resolution at which a spatial structure can be localized [70]. In [93], they implemented Gabor features for brain tumor segmentation, but the performance was poorer than that obtained with fractal and intensity methods, and even combining Gabor features with other methods did not improve the performance. In [95], the co-occurrence matrix features outperformed the Gabor features for 3D classification of brain tumors.

2.4.3.1. The wavelet transform method

The wavelet transform is a technique that analyzes the frequency content of an image at different scales and frequency directions. The frequency is directly proportional to the gray-level variations within the image. Wavelet coefficients corresponding to different frequency scales and directions can be obtained to describe a given image: the coefficients associated with each pixel characterize the frequency content at that point over different scales [3, 89]. Figure 12 shows an example of the wavelet transform applied to a cardiac MR image (one-scale decomposition).

Figure 12.

Wavelet transform of a cardiac MR image at one-scale decomposition. The high-high (HH) subimage represents diagonal high frequencies, high-low (HL) extracts the horizontal high frequencies, low-high (LH) vertical high frequencies, and the image low-low (LL) represents the lowest frequencies.

Wavelet transform methods are popular because they offer several advantages, such as the ability to vary the spatial resolution to represent textures at the most appropriate scale and the wide range of wavelet functions that can be chosen for specific applications [70]. Wavelet-derived texture features have high discriminatory power and often provide better classification than other methods, as has been shown for the assessment of mild traumatic brain injury [40], knee tissue discrimination [16], and classification of foam phantoms [18]. It has also been demonstrated that wavelet texture features are less sensitive to changes in the MRI acquisition protocol [18]. In some studies the wavelet transform has been used as a preprocessing method to enhance texture appearance by selecting the sub-band with the maximum variance [96, 97]; features derived from other methods can then be extracted from these preprocessed images. Other approaches applied the wavelet transform over previously computed texture maps [10, 85].
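A minimal sketch of wavelet texture features using PyWavelets: the energy of each detail sub-band at each decomposition scale (cf. Figure 12) is taken as a feature. The Haar wavelet, the two scales, and the mean-square energy definition are arbitrary choices, not a prescription from the studies cited above.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_energy_features(roi, wavelet="haar", scales=2):
    """Energy of the horizontal, vertical and diagonal detail sub-bands
    at each scale of a 2D discrete wavelet transform."""
    features = {}
    current = np.asarray(roi, dtype=np.float64)
    for scale in range(1, scales + 1):
        cA, (cH, cV, cD) = pywt.dwt2(current, wavelet)   # one-scale decomposition
        for name, band in zip(("horiz", "vert", "diag"), (cH, cV, cD)):
            features[f"scale{scale}_{name}_energy"] = float(np.mean(band ** 2))
        current = cA                                     # the LL image feeds the next scale
    return features
```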

2.4.4. 3D texture analysis

Feature extraction methods were first proposed for 2D texture analysis. Advantage can be taken of the volumetric nature of MRI datasets by extending the original methods to 3D. A simple approach to capture volumetric information is to compute 2D features in all MRI slices and then average these values, as done by Assefa et al. [49], although in this case the gray-level distributions in the third dimension are not taken into account. Nevertheless, it has been shown that even this simple averaging method outperforms the typical 2D approach in which only one slice is analyzed [98]. The extension of 2D approaches to 3D is not straightforward, as factors such as translation and scaling become more complex. A review of 3D feature extraction methods is presented in [52]. Compared with 2D texture analysis, 3D approaches increase the dimensionality and the information captured from the image, thus improving the discrimination power [44, 99–101]. Implementation of 4D texture analysis is possible by including the temporal dimension available in some MRI datasets. Notable results were observed for discrimination of benign and malignant breast lesions [102] and for localization and segmentation of the heart [103] using the 4D spatio-temporal approach.

2.4.5. Feature extraction tools

The widely used software package MaZda (Institute of Electronics, Technical University of Lodz, Lodz, Poland) [104] is freely available and allows computation of texture features based on the co-occurrence matrix, run-length matrix, gradient matrix, autoregressive model, and the Haar wavelet transform. MATLAB (MathWorks Inc., Natick, MA) toolboxes can also be found for texture feature extraction, such as the one provided by Vallières et al. [55] (available from https://github.com/mvallieres/radiomics), which allows computation of features based on four matrix methods, and the implementation of the local binary pattern operator provided by Ojala et al. [84] (available from http://www.cse.oulu.fi/CMV/Downloads/LBPSoftware).

2.5. Feature selection

The vast variety of feature extraction methods for texture analysis allows us to obtain a myriad of features. This creates a problem, because the more features we have, the more complex the classification model becomes. The computed features have different discriminative power depending on the application. Redundant or irrelevant features hinder classification performance and can lead to the so-called curse of dimensionality: when the feature space is high-dimensional relative to the number of samples, classification performance decreases as more features are added to the model. Feature selection is the process of choosing the most relevant features for a specific application. Reducing the number of features speeds up the testing of new data and makes the classification problem easier to understand, but the main benefit is the increase in classification performance [105, 106]. Although methods like principal component analysis (PCA) or linear discriminant analysis (LDA) are used for feature reduction [23, 56], they are not considered feature selection methods since they still require computation of all the original features [107].

2.5.1. Filter methods

A straightforward approach to find the most discriminative features, or the combination of features that yields the best classification, is to perform an exhaustive search, as done in [26, 33, 108]. In the exhaustive search method, all possible combinations of features are tested as input to a classifier, and those that yield the best discrimination are selected. The problem with this method is that it becomes tremendously expensive to compute when the feature space is large. Filter feature selection methods instead make use of a certain parameter to measure the discriminatory power. For example, typical statistical tests, such as the Mann-Whitney U-test, can be used to find and select features with statistical significance [64]. The Fisher coefficient, defined as the ratio of the between-class variance to the within-class variance, is a popular filter method found in the literature [21, 40, 47, 109–111]. However, it has been claimed that the Fisher technique tends to select highly correlated features with the same discriminatory power. Another method that relies on both the probability of classification error (POE) and the average correlation coefficient (ACC) was reported to perform better than the Fisher method for classification of knee joint tissues [16]. Filter methods rank the features according to the measuring parameter, and usually a predefined number of features, e.g., 5 or 10, is selected for subsequent classification. The Fisher and POE/ACC feature selection methods are implemented in the B11 module, which is part of the MaZda software (Institute of Electronics, Technical University of Lodz, Lodz, Poland) [104].
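A sketch of one common two-class form of the Fisher coefficient used for filter-based ranking (ratio of between-class to within-class variance, computed feature by feature); exact definitions vary slightly across implementations, and the arrays X and y are hypothetical placeholders.

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher coefficient per feature for a two-class problem.
    X: (n_samples, n_features) texture features, y: binary labels (0/1)."""
    X0, X1 = X[y == 0], X[y == 1]
    between = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2     # between-class variance
    within = X0.var(axis=0) + X1.var(axis=0)               # within-class variance
    return between / (within + 1e-12)

# Keep, e.g., the ten highest-ranked features:
# top10 = np.argsort(fisher_scores(X, y))[::-1][:10]
```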

2.5.2. Wrapper methods

The main drawback of filter methods is that feature selection is based on the intrinsic information of the training data and does not consider the predictive capability of a given subset of features. Wrapper methods take advantage of a classification algorithm and search for the subset of features that provides optimal classification performance. The quality of the selected subset of features depends fundamentally on the search algorithm used. We mentioned earlier that an exhaustive search is not feasible for high-dimensional datasets, so an algorithm that uses some type of search strategy has to be chosen. Genetic search algorithms have been effectively applied for classification of brain tumors [95, 97] and mammogram lesions [98]. Another search algorithm, recursive feature elimination (RFE), ranks the features by recursively training a classifier, removing the feature with the smallest ranking score at each step, and selecting the subset of features that yields the best classification. Any classifier can be used in conjunction with RFE to compute the feature scores. The feature selection technique known as recursive feature elimination-support vector machine (RFE-SVM), first proposed for gene selection in cancer classification [112], has gained major attention for selecting texture features due to its good performance compared with other methods [113]; in MRI it has been used particularly for brain tumor classification [1, 4].
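A sketch of RFE wrapped around a linear SVM using scikit-learn; the number of retained features and the X_train/y_train arrays are placeholders, and the exact settings reported in the cited studies may differ.

```python
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Recursive feature elimination around a linear SVM (RFE-SVM): the classifier is
# retrained repeatedly and the lowest-ranked feature removed at each step.
selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=10, step=1)
# selector = selector.fit(X_train, y_train)    # X_train: texture features, y_train: labels
# X_train_selected = selector.transform(X_train)
```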

2.6. Classification

The main goal in texture analysis applications is the classification of different tissues and lesions to automate or aid the diagnostic decision. The results of a texture-based classification method can later be used to partition new images into regions, an approach known as texture-based segmentation [70]. Simple statistical methods can be used to determine the texture features with statistical significance for discrimination of two or more classes. However, following the feature selection step described in the previous section, we focus on more complex classification algorithms that make use of proper combinations of features to achieve the highest discrimination. The feature selection and classification steps are not specific to texture analysis, so instead of providing a full description of the existing methods, we briefly describe the two classifiers most frequently used in MRI texture analysis applications: artificial neural networks (ANN) and support vector machines (SVM).

2.6.1. Artificial neural networks

Artificial neural networks (ANN) simulate the way the human brain processes information by implementing nodes and interconnections. The discriminative power of an ANN depends on the density and complexity of these interconnections [114]. Applications of ANNs in MRI texture analysis include classification of brain tumors [23, 115], multiple sclerosis lesions [109], Alzheimer’s disease [111], and breast [102] and knee lesions [16]. While ANNs perform well in most applications, their popularity has decreased in past years with the introduction of the support vector machine (SVM), which is computationally cheaper and provides similar or even better performance than ANNs [114].

2.6.2. Support vector machines

The SVM maps the input space to a higher-dimensional space via a kernel function in order to find a hyperplane that results in maximal discrimination. Here, a kernel is a matrix that encodes the similarities between samples and can be used to achieve discrimination between classes that are not linearly separable [114]. In [4], they demonstrated better performance of the SVM classifier over ANN for differentiation of benign and malignant brain tumors. SVMs were also applied for brain tumor classification in [1, 116]. Other applications of SVMs include the staging of liver fibrosis [33], prostate cancer detection [26], assessment of osteoarthritis [117], and classification of cervical cancer [118], mammogram lesions [98], and Parkinson disease [73].

2.6.3. Classification results

Important considerations have to be made when reporting classification results. To avoid overestimated values, it is always recommended to separate the data into training and validation sets so that results on new, unseen data can be reported. When the dataset is small, resampling approaches like cross-validation or bootstrapping are recommended. For unbalanced data, i.e., data containing more normal than abnormal tissue samples, it is suggested to report results using the area under the receiver operating characteristic (ROC) curve (AUC) instead of the overall accuracy or misclassification rate [114]. Feature vector standardization is required for some classification methods to work properly and can improve accuracy in other cases [16].
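A sketch of such an evaluation setup with scikit-learn: feature standardization fitted inside each cross-validation fold and ROC AUC as the reported metric. The RBF kernel, the five folds, and the X/y arrays are placeholders rather than settings taken from any cited study.

```python
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Standardization is fitted on the training folds only, so the test fold
# never influences the scaling; AUC is averaged over the folds.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
# auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
# print(auc.mean(), auc.std())
```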

3. Summary

In this chapter, we reviewed the literature regarding the application of texture analysis in MRI. The chapter was organized around the six steps that define the texture analysis process: MRI acquisition, ROI definition, ROI preprocessing, feature extraction, feature selection, and classification. Our main goal was to provide a condensed reference of the state of the art and especially to make readers aware of the important considerations to be made in future applications in order to bring MRI texture analysis into clinical practice. Since many parameters can vary at each step, it is impossible to give a definitive guideline on what should be used; each choice has to be made in view of the specific application. Clinical applicability relies on the reproducibility of the methods with respect to the scanner and acquisition parameters. Therefore, it is necessary to conduct more multicenter studies combining different acquisition protocols and applying appropriate preprocessing steps to ensure that texture features describe the actual image characteristics and are not biased by other factors. Regarding the ROI definition step, it is recommended to carry out studies using automatic methods to guarantee user independence. Finally, we suggest computing as many texture features as possible and taking advantage of powerful feature selection and classification techniques to achieve the highest performance.

Acknowledgments

This work was supported in part by the Spanish Ministerio de Economía y Competitividad (MINECO) FEDER funds under grant BFU2015-64380-C2-2-R, by Instituto de Salud Carlos III and FEDER funds under grants FIS PI14/00271 and PIE15/00013 and by the Generalitat Valenciana under grant PROMETEO/2013/007. The first author, Andrés Larroza, was supported by grant FPU12/01140 from the Spanish Ministerio de Educación, Cultura y Deporte (MECD).

References

  1. 1. Larroza A, Moratal D, Paredes-Sánchez A, Soria-Olivas E, Chust ML, Arribas LA, Arana E. Support vector machine classification of brain metastasis and radiation necrosis based on texture analysis in MRI. Journal of Magnetic Resonance Imaging. 2015;42(5):1362–1368. DOI: 10.1002/jmri.24913
  2. 2. Castellano G, Bonilha L, Li LM, Cendes F. Texture analysis of medical images. Clinical Radiology. 2004;59:1061–1069. DOI: 10.1016/j.crad.2004.07.008
  3. 3. Materka A. What is the texture? In: Hajek M, Dezortova M, Materka A, Lerski R, Editors. Texture Analysis for Magnetic Resonance Imaging. 1st Ed. Prague, Czech Republic: Med4publishing; 2006. p. 11–40.
  4. 4. Juntu J, Sijbers J, De Backer S, Rajan J, Van Dyck D. Machine learning study of several classifiers trained with texture analysis features to differentiate benign from malignant soft-tissue tumors in T1-MRI images. Journal of Magnetic Resonance Imaging. 2010;31(3):680–689. DOI: 10.1002/jmri.22095
  5. 5. Zacharaki EI, Wang S, Chawla S, Soo Yoo D, Wolf R, Melhem ER, Davatzikos C. Classification of brain tumor type and grade using MRI texture and shape in a machine learning scheme. Magnetic Resonance in Medicine. 2009;62(6):1609–1618. DOI: 10.1002/mrm.22147
  6. 6. De Oliveira MS, Balthazar ML, D’Abreu A, Yasuda CL, Damasceno BP, Cendes F, Castellano G. MR imaging texture analysis of the corpus callosum and thalamus in amnestic mild cognitive impairment and mild Alzheimer disease. American Journal of Neuroradiology. 2011;32(1):60–66. DOI: 10.3174/ajnr.A2232
  7. 7. Santos TA, Maistro CE, Silva CB, Oliveira MS, Franca MC, Castellano G. MRI texture analysis reveals bulbar abnormalities in Friedreich ataxia. American Journal of Neuroradiology. 2015;36(12):2214–2218. DOI: 10.3174/ajnr.A4455
  8. 8. Karimaghaloo Z, Rivaz H, Arnold DL, Collins DL, Arbel T. Temporal hierarchical adaptive texture CRF for automatic detection of gadolinium-enhancing multiple sclerosis lesions in brain MRI. IEEE Transactions on Medical Imaging. 2015;34(6):1227–1241. DOI: 10.1109/TMI.2014.2382561
  9. 9. Iftekharuddin KM, Ahmed S, Hossen J. Multiresolution texture models for brain tumor segmentation in MRI. In: 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE, Boston, Massachusetts, USA; 2011. p. 6985–6988. DOI: 10.1109/IEMBS.2011.6091766
  10. 10. Yao J, Chen J, Chow C. Breast tumor analysis in dynamic contrast enhanced MRI using texture features and Wavelet transform. IEEE Journal of Selected Topics in Signal Processing. 2009;3(1):94–100. DOI: 10.1109/JSTSP.2008.2011110
  11. 11. Jirák D, Dezortová M, Taimr P, Hájek M. Texture analysis of human liver. Journal of Magnetic Resonance Imaging. 2002;15(1):68–74. DOI: 10.1002/jmri.10042 68
  12. Ghose S, Oliver A, Martí R, Lladó X, Freixenet J, Vilanova JC, Meriaudeau F. Prostate segmentation with local binary patterns guided active appearance models. In: Proceedings of SPIE 7962, Medical Imaging: Image Processing, 796218, SPIE, Lake Buena Vista, Florida, USA; 2011. DOI: 10.1117/12.877955
  13. Eftestøl T, Maløy F, Engan K, Kotu LP, Woie L, Ørn S. A texture-based probability mapping for localisation of clinically important cardiac segments in the myocardium in cardiac magnetic resonance images from myocardial infarction patients. In: IEEE International Conference on Image Processing (ICIP), IEEE, Paris, France; 2014. p. 2227–2231. DOI: 10.1109/ICIP.2014.7025451
  14. Kotu LP, Engan K, Eftestøl T, Woie L, Ørn S, Katsaggelos AK. Local binary patterns used on cardiac MRI to classify high and low risk patient groups. In: Proceedings of the 20th European Signal Processing Conference (EUSIPCO), IEEE, Bucharest, Romania; 2012. p. 2586–2590.
  15. Schad LR, Lundervold A. Influence of resolution and signal to noise ratio on MR image texture. In: Hajek M, Dezortova M, Materka A, Lerski R, Editors. Texture Analysis for Magnetic Resonance Imaging. 1st ed. Prague, Czech Republic: Med4publishing; 2006. p. 127–147.
  16. Mayerhoefer ME, Breitenseher MJ, Kramer J, Aigner N, Hofmann S, Materka A. Texture analysis for tissue discrimination on T1-weighted MR images of the knee joint in a multicenter study: Transferability of texture features and comparison of feature selection methods and classifiers. Journal of Magnetic Resonance Imaging. 2005;22(5):674–680.
  17. Schad LR. Problems in texture analysis with magnetic resonance imaging. Dialogues in Clinical Neuroscience. 2004;6(2):235–242.
  18. Waugh SA, Lerski RA, Bidaut L, Thompson AM. The influence of field strength and different clinical breast MRI protocols on the outcome of texture analysis using foam phantoms. Medical Physics. 2011;38(9):5058–5066. DOI: 10.1118/1.3622605
  19. Tiwari P, Prasanna P, Rogers L, Wolansky L, Badve C, Sloan A, Cohen M, Madabhushi A. Texture descriptors to distinguish radiation necrosis from recurrent brain tumors on multi-parametric MRI. In: Aylward S, Hadjiiski LM, Editors. SPIE Medical Imaging, SPIE, San Diego, California, USA; 2014. vol. 9035. p. 90352B. DOI: 10.1117/12.2043969
  20. Oppedal K, Eftestøl T, Engan K, Beyer MK, Aarsland D. Classifying dementia using local binary patterns from different regions in magnetic resonance images. International Journal of Biomedical Imaging. 2015;2015:1–14. DOI: 10.1155/2015/572567
  21. Herlidou-Même S, Constans JM, Carsin B, Olivie D, Eliat PA, Nadal-Desbarats L, Gondry C, Le Rumeur E, Idy-Peretti I, de Certaines JD. MRI texture analysis on texture test objects, normal brain and intracranial tumors. Magnetic Resonance Imaging. 2003;21(9):989–993.
  22. Fruehwald-Pallamar J, Hesselink J, Mafee M, Holzer-Fruehwald L, Czerny C, Mayerhoefer M. Texture-based analysis of 100 MR examinations of head and neck tumors – is it possible to discriminate between benign and malignant masses in a multicenter trial? Fortschr Röntgenstr. 2016;188(2):195–202. DOI: 10.1055/s-0041-106066
  23. Orphanidou-Vlachou E, Vlachos N, Davies NP, Arvanitis TN, Grundy RG, Peet AC. Texture analysis of T1- and T2-weighted MR images and use of probabilistic neural network to discriminate posterior fossa tumours in children. NMR in Biomedicine. 2014;27(6):632–639. DOI: 10.1002/nbm.3099
  24. Brynolfsson P, Nilsson D, Henriksson R, Hauksson J, Karlsson M, Garpebring A, Birgander R, Trygg J, Nyholm T, Asklund T. ADC texture—an imaging biomarker for high-grade glioma? Medical Physics. 2014;41(10):101903. DOI: 10.1118/1.4894812
  25. Foroutan P, Kreahling JM, Morse DL, Grove O, Lloyd MC, Reed D, Raghavan M, Altiok S, Martinez GV, Gillies RJ. Diffusion MRI and novel texture analysis in osteosarcoma xenotransplants predicts response to anti-checkpoint therapy. PLoS One. 2013;8(12):e82875. DOI: 10.1371/journal.pone.0082875
  26. Khalvati F, Wong A, Haider MA. Automated prostate cancer detection via comprehensive multi-parametric magnetic resonance imaging texture feature models. BMC Medical Imaging. 2015;15:27. DOI: 10.1186/s12880-015-0069-9
  27. Jirák D, Dezortová M, Hájek M. Phantoms for texture analysis of MR images. Long-term and multi-center study. Medical Physics. 2004;31(3):616–622.
  28. Mayerhoefer ME, Szomolanyi P, Jirak D, Materka A, Trattnig S. Effects of MRI acquisition parameter variations and protocol heterogeneity on the results of texture analysis and pattern discrimination: An application-oriented study. Medical Physics. 2009;36(4):1236–1243. DOI: 10.1118/1.3081408
  29. Jirák D, Dezortova M, Hajek M. Phantoms for texture analysis of MR images. In: Hajek M, Dezortova M, Materka A, Lerski R, Editors. Texture Analysis for Magnetic Resonance Imaging. 1st ed. Prague, Czech Republic: Med4publishing; 2006. p. 113–124.
  30. Tameem HZ, Selva LE, Sinha US. Texture measure from low resolution MR images to determine trabecular bone integrity in osteoporosis. In: 29th Annual International Conference of the IEEE on Engineering in Medicine and Biology Society, EMBS 2007, IEEE, Lyon, France; 2007. p. 2027–2030. DOI: 10.1109/IEMBS.2007.4352717
  31. Savio SJ, Harrison LCV, Luukkaala T, Heinonen T, Dastidar P, Soimakallio S, Eskola HJ. Effect of slice thickness on brain magnetic resonance image texture analysis. Biomedical Engineering Online. 2010;9:60. DOI: 10.1186/1475-925X-9-60
  32. Bradley WG. Pros and cons of 3 Tesla MRI. Journal of the American College of Radiology. 2008;5(8):871–878. DOI: 10.1016/j.jacr.2008.02.005
  33. Zhang X, Gao X, Liu BJ, Ma K, Yan W, Liling L, Yuhong H, Fujita H. Effective staging of fibrosis by the selected texture features of liver: Which one is better, CT or MR imaging? Computerized Medical Imaging and Graphics. 2015;46:227–236. DOI: 10.1016/j.compmedimag.2015.09.003
  34. Giger M, Li H, Lan L, Abe H, Newstead G. Quantitative MRI phenotyping of breast cancer across molecular classification subtypes. In: Fujita H, Hara T, Muramatsu C, Editors. Breast Imaging. Volume 8539 of the series Lecture Notes in Computer Science. Springer, Switzerland; 2014. p. 195–200. DOI: 10.1007/978-3-319-07887-8_28
  35. Sanz-Cortes M, Figueras F, Bonet-Carne E, Padilla N, Tenorio V, Bargalló N, Amat-Roldán I, Gratacós E. Fetal brain MRI texture analysis identifies different microstructural patterns in adequate and small for gestational age fetuses at term. Fetal Diagnosis and Therapy. 2013;33(2):122–129. DOI: 10.1159/000346566
  36. Harrison LCV, Nikander R, Sikiö M, Luukkaala T, Helminen MT, Ryymin P, Soimakallio S, Eskola HJ, Dastidar P, Sievänen H. MRI texture analysis of femoral neck: Detection of exercise load-associated differences in trabecular bone. Journal of Magnetic Resonance Imaging. 2011;34(6):1359–1366. DOI: 10.1002/jmri.22751
  37. Shi Z, Yang Z, Zhang G, Cui G, Xiong X, Liang Z, Lu H. Characterization of texture features of bladder carcinoma and the bladder wall on MRI: Initial experience. Academic Radiology. 2013;20(8):930–938. DOI: 10.1016/j.acra.2013.03.011
  38. Loizou CP, Petroudi S, Seimenis I, Pantziaris M, Pattichis CS. Quantitative texture analysis of brain white matter lesions derived from T2-weighted MR images in MS patients with clinically isolated syndrome. Journal of Neuroradiology. 2015;42(2):99–114. DOI: 10.1016/j.neurad.2014.05.006
  39. Liu H, Shao Y, Guo D, Zheng Y, Zhao Z, Qiu T. Cirrhosis classification based on texture classification of random features. Computational and Mathematical Methods in Medicine. 2014;2014:536308. DOI: 10.1155/2014/536308
  40. Holli KK, Harrison L, Dastidar P, Wäljas M, Liimatainen S, Luukkaala T, Ohman J, Soimakallio S, Eskola H. Texture analysis of MR images of patients with mild traumatic brain injury. BMC Medical Imaging. 2010;10:8. DOI: 10.1186/1471-2342-10-8
  41. Karimaghaloo Z, Rivaz H, Arnold DL, Collins DL, Arbel T. Adaptive voxel, texture and temporal conditional random fields for detection of Gad-enhancing multiple sclerosis lesions in brain MRI. In: Medical Image Computing and Computer-Assisted Intervention-MICCAI 2013, Springer, Nagoya, Japan; 2013. p. 543–550. DOI: 10.1007/978-3-642-40760-4_68
  42. Vignati A, Mazzetti S, Giannini V, Russo F, Bollito E, Porpiglia F, Stasi M, Regge D. Texture features on T2-weighted magnetic resonance imaging: New potential biomarkers for prostate cancer aggressiveness. Physics in Medicine and Biology. 2015;60(7):2685–2701. DOI: 10.1088/0031-9155/60/7/2685
  43. Mayerhoefer ME, Stelzeneder D, Bachbauer W, Welsch GH, Mamisch TC, Szczypinski P, et al. Quantitative analysis of lumbar intervertebral disc abnormalities at 3.0 Tesla: Value of T(2) texture features and geometric parameters. NMR in Biomedicine. 2012;25(6):866–872. DOI: 10.1002/nbm.1803
  44. Chen W, Giger ML, Li H, Bick U, Newstead GM. Volumetric texture analysis of breast lesions on contrast-enhanced magnetic resonance images. Magnetic Resonance in Medicine. 2007;58(3):562–571.
  45. Materka A. Statistical methods. In: Hajek M, Dezortova M, Materka A, Lerski R, Editors. Texture Analysis for Magnetic Resonance Imaging. 1st ed. Prague, Czech Republic: Med4publishing; 2006. p. 79–103.
  46. Sikiö M, Harrison L, Eskola H. The effect of region of interest size on textural parameters. A study with clinical magnetic resonance images and artificial noise images. In: 9th International Symposium on Image and Signal Processing and Analysis (ISPA), IEEE, Zagreb, Croatia; 2015. p. 149–153. DOI: 10.1109/ISPA.2015.7306049
  47. Antel SB, Collins DL, Bernasconi N, Andermann F, Shinghal R, Kearney RE, Arnold DL, Bernasconi A. Automated detection of focal cortical dysplasia lesions using computational models of their MRI characteristics and texture analysis. Neuroimage. 2003;19(4):1748–1759. DOI: 10.1016/S1053-8119(03)00226-X
  48. Kjaer L, Ring P, Thomsen C, Henriksen O. Texture analysis in quantitative MR imaging. Tissue characterisation of normal brain and intracranial tumours at 1.5 T. Acta Radiologica. 1995;36(2):127–135.
  49. Assefa D, Keller H, Ménard C, Laperriere N, Ferrari RJ, Yeung I. Robust texture features for response monitoring of glioblastoma multiforme on T1-weighted and T2-FLAIR MR images: A preliminary investigation in terms of identification and segmentation. Medical Physics. 2010;37(4):1722–1736. DOI: 10.1118/1.3357289
  50. Chen X, Wei X, Zhang Z, Yang R, Zhu Y, Jiang X. Differentiation of true-progression from pseudoprogression in glioblastoma treated with radiation therapy and concomitant temozolomide by GLCM texture analysis of conventional MRI. Clinical Imaging. 2015;39(5):775–780. DOI: 10.1016/j.clinimag.2015.04.003
  51. Mayerhoefer ME, Szomolanyi P, Jirak D, Berg A, Materka A, Dirisamer A, Trattnig S. Effects of magnetic resonance image interpolation on the results of texture-based pattern classification: A phantom study. Investigative Radiology. 2009;44(7):405–411. DOI: 10.1097/RLI.0b013e3181a50a66
  52. Depeursinge A, Foncubierta-Rodriguez A, Van De Ville D, Müller H. Three-dimensional solid texture analysis in biomedical imaging: Review and opportunities. Medical Image Analysis. 2014;18(1):176–196. DOI: 10.1016/j.media.2013.10.005
  53. Materka A, Strzelecki M, Lerski R, Schad L. Evaluation of texture features of test objects for magnetic resonance imaging. In: Pietikainen M, Editor. Infotech Oulu Workshop on Texture Analysis in Machine Vision. Infotech, Oulu, Finland; 1999. p. 13–19.
  54. Collewet G, Strzelecki M, Mariette F. Influence of MRI acquisition protocols and image intensity normalization methods on texture classification. Magnetic Resonance Imaging. 2004;22(1):81–91. DOI: 10.1016/j.mri.2003.09.001
  55. Vallières M, Freeman CR, Skamene SR, El Naqa I. A radiomics model from joint FDG-PET and MRI texture features for the prediction of lung metastases in soft-tissue sarcomas of the extremities. Physics in Medicine and Biology. 2015;60(14):5471–5496. DOI: 10.1088/0031-9155/60/14/5471
  56. Thornhill RE, Golfam M, Sheikh A, Cron GO, White EA, Werier J, Schweitzer ME, Di Primio G. Differentiation of lipoma from liposarcoma on MRI using texture and shape analysis. Academic Radiology. 2014;21(9):1185–1194. DOI: 10.1016/j.acra.2014.04.005
  57. Kölhi P, Järnstedt J, Sikiö M, Viik J, Dastidar P, Peltomäki T, Eskola H. A texture analysis method for MR images of airway dilator muscles: A feasibility study. Dentomaxillofacial Radiology. 2014;43(5):20130403. DOI: 10.1259/dmfr.20130403
  58. Loizou CP, Pantziaris M, Seimenis I, Pattichis CS. Brain MR image normalization in texture analysis of multiple sclerosis. In: 9th International Conference on Information Technology and Applications in Biomedicine, 2009, ITAB 2009, IEEE, Larnaca, Cyprus; 2009. p. 1–5. DOI: 10.1109/ITAB.2009.5394331
  59. Materka A, Strzelecki M. On the importance of MRI nonuniformity correction for texture analysis. In: Conference Proceedings of Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), IEEE, Poznan, Poland; 2013. p. 118–123.
  60. Belaroussi B, Milles J, Carme S, Zhu YM, Benoit-Cattin H. Intensity non-uniformity correction in MRI: Existing methods and their validation. Medical Image Analysis. 2006;10(2):234–246. DOI: 10.1016/j.media.2005.09.004
  61. Prasanna P, Tiwari P, Madabhushi A. Co-occurrence of local anisotropic gradient orientations (CoLlAGe): Distinguishing tumor confounders and molecular subtypes on MRI. In: Medical Image Computing and Computer-Assisted Intervention-MICCAI 2014, Springer, Boston, MA, USA; 2014. p. 73–78. DOI: 10.1007/978-3-319-10443-0_10
  62. Zhang Y, Traboulsee A, Zhao Y, Metz LM, Li DK. Texture analysis differentiates persistent and transient T1 black holes at acute onset in multiple sclerosis: A preliminary study. Multiple Sclerosis Journal. 2011;17(5):532–540. DOI: 10.1177/1352458510395981
  63. Yang D, Rao G, Martinez J, Veeraraghavan A, Rao A. Evaluation of tumor-derived MRI-texture features for discrimination of molecular subtypes and prediction of 12-month survival status in glioblastoma. Medical Physics. 2015;42(11):6725–6735. DOI: 10.1118/1.4934373
  64. Chuah TK, Van Reeth E, Sheah K, Poh CL. Texture analysis of bone marrow in knee MRI for classification of subjects with bone marrow lesion – data from the osteoarthritis initiative. Magnetic Resonance Imaging. 2013;31(6):930–938. DOI: 10.1016/j.mri.2013.01.014
  65. Sled JG, Zijdenbos AP, Evans AC. A nonparametric method for automatic correction of intensity nonuniformity in MRI data. IEEE Transactions on Medical Imaging. 1998;17(1):87–97. DOI: 10.1109/42.668698
  66. Gibbs P, Turnbull LW. Textural analysis of contrast-enhanced MR images of the breast. Magnetic Resonance in Medicine. 2003;50(1):92–98. DOI: 10.1002/mrm.10496
  67. Ahmed A, Gibbs P, Pickles M, Turnbull L. Texture analysis in assessment and prediction of chemotherapy response in breast cancer. Journal of Magnetic Resonance Imaging. 2013;38(1):89–101. DOI: 10.1002/jmri.23971
  68. Mahmoud-Ghoneim D, Alkaabi MK, de Certaines JD, Goettsche FM. The impact of image dynamic range on texture classification of brain white matter. BMC Medical Imaging. 2008;8:18. DOI: 10.1186/1471-2342-8-18
  69. Nailon H. Texture analysis methods for medical image characterisation. In: Mao Y, Editor. Biomedical Imaging. InTech, Rijeka, Croatia; 2010. p. 75–100. DOI: 10.5772/8912
  70. Materka A. Texture analysis methodologies for magnetic resonance imaging. Dialogues in Clinical Neuroscience. 2004;6:243–250.
  71. Ain Q, Jaffar MA, Choi TS. Fuzzy anisotropic diffusion based segmentation and texture based ensemble classification of brain tumor. Applied Soft Computing. 2014;21:330–340. DOI: 10.1016/j.asoc.2014.03.019
  72. Ahmed S, Iftekharuddin KM, Vossough A. Efficacy of texture, shape, and intensity feature fusion for posterior-fossa tumor segmentation in MRI. IEEE Transactions on Information Technology in Biomedicine. 2011;15(2):206–213. DOI: 10.1109/TITB.2011.2104376
  73. Mohanty DR, Mishra SK. MRI classification of Parkinson’s disease using SVM and texture features. In: Proceedings of the Second International Conference on Computer and Communication Technologies, Springer, India; 2016. p. 357–364. DOI: 10.1007/978-81-322-2523-2_34
  74. Haralick RM, Shanmugam K, Dinstein I. Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics. 1973;SMC-3(6):610–621. DOI: 10.1109/TSMC.1973.4309314
  75. Haralick RM. Statistical and structural approaches to texture. Proceedings of the IEEE. 1979;67(5):786–804. DOI: 10.1109/PROC.1979.11328
  76. Wibmer A, Hricak H, Gondo T, Matsumoto K, Veeraraghavan H, Fehr D, Zheng J, Goldman D, Moskowitz C, Fine SW, Reuter VE, Eastham J, Sala E, Vargas HA. Haralick texture analysis of prostate MRI: Utility for differentiating non-cancerous prostate from prostate cancer and differentiating prostate cancers with different Gleason scores. European Radiology. 2015;25(10):2840–2850. DOI: 10.1007/s00330-015-3701-8
  77. Kovalev V, Kruggel F. Texture anisotropy of the brain’s white matter as revealed by anatomical MRI. IEEE Transactions on Medical Imaging. 2007;26(5):678–685. DOI: 10.1109/TMI.2007.895481
  78. Suoranta S, Holli-Helenius K, Koskenkorva P, Niskanen E, Könönen M, Äikiä M, Eskola H, Kälviäinen R, Vanninen R. 3D texture analysis reveals imperceptible MRI textural alterations in the thalamus and putamen in progressive myoclonic epilepsy type 1, EPM1. PLoS One. 2013;8(7):e69905. DOI: 10.1371/journal.pone.0069905
  79. House MJ, Bangma SJ, Thomas M, Gan EK, Ayonrinde OT, Adams LA, Olynyk JK, St Pierre TG. Texture-based classification of liver fibrosis using MRI. Journal of Magnetic Resonance Imaging. 2015;41(2):322–328. DOI: 10.1002/jmri.24536
  80. Bonilha L, Kobayashi E, Castellano G, Coelho G, Tinois E, Cendes F, Li LM. Texture analysis of hippocampal sclerosis. Epilepsia. 2003;44(12):1546–1550. DOI: 10.1111/j.0013-9580.2003.27103.x
  81. Boutsikou K, Kostopoulos S, Glotsos D, Cavouras D, Lavdas E, Oikonomou G, Malizos K, Fezoulidis IV, Vlychou M. Texture analysis of articular cartilage traumatic changes in the knee calculated from morphological 3.0 T MR imaging. European Journal of Radiology. 2013;82(8):1266–1272. DOI: 10.1016/j.ejrad.2013.01.023
  82. Galloway MM. Texture analysis using gray level run lengths. Computer Graphics and Image Processing. 1975;4(2):172–179. DOI: 10.1016/S0146-664X(75)80008-6
  83. Loh HH, Leu JG, Luo RC. The analysis of natural textures using run length features. IEEE Transactions on Industrial Electronics. 1988;35(2):323–328. DOI: 10.1109/41.192665
  84. Ojala T, Pietikäinen M, Mäenpää T. A generalized local binary pattern operator for multiresolution gray scale and rotation invariant texture classification. In: Second International Conference ICAPR 2001, Springer, Rio de Janeiro, Brazil; 2001. p. 399–408. DOI: 10.1007/3-540-44732-6_41
  85. Sheethal MS, Kannanm DB, Varghese A, Sobha T. Intelligent classification technique of human brain MRI with Efficient Wavelet based Feature Extraction using Local Binary Pattern. In: International Conference on Control Communication and Computing (ICCC), IEEE, Thiruvananthapuram, India; 2013. p. 368–372. DOI: 10.1109/ICCC.2013.6731681
  86. Unay D, Ekin A, Cetin M, Jasinschi R, Ercil A. Robustness of local binary patterns in brain MR image analysis. In: 29th Annual International Conference on Engineering in Medicine and Biology Society, EMBS 2007, IEEE, Lyon, France; 2007. p. 2098–2101. DOI: 10.1109/IEMBS.2007.4352735
  87. Zhao G, Pietikainen M. Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2007;29(6):915–928. DOI: 10.1109/TPAMI.2007.1110
  88. Chang C-W, Ho C-C, Chen J-H. ADHD classification by a texture analysis of anatomical brain MRI data. Frontiers in Systems Neuroscience. 2012;6:66. DOI: 10.3389/fnsys.2012.00066
  89. Materka A, Strzelecki M. Texture Analysis Methods: A Review. Institute of Electronics, Technical University of Lodz, Poland; 1998. Vol. 11. p. 1–32.
  90. Skoch A, Jirák D, Vyhnanovská P, Dezortová M, Fendrych P, Rolencová E, Hájek M. Classification of calf muscle MR images by texture analysis. Magnetic Resonance Materials in Physics. 2004;16(6):259–267. DOI: 10.1007/s10334-004-0032-1
  91. Wu Z, Matsui O, Kitao A, Kozaka K, Koda W, Kobayashi S, Ryu Y, Minami T, Sanada J, Gabata T. Hepatitis C related chronic liver cirrhosis: Feasibility of texture analysis of MR images for classification of fibrosis stage and necroinflammatory activity grade. PLoS One. 2015;10(3):e0118297. DOI: 10.1371/journal.pone.0118297
  92. Alic L, Niessen WJ, Veenland JF. Quantification of heterogeneity as a biomarker in tumor imaging: A systematic review. PLoS One. 2014;9(10):e110300. DOI: 10.1371/journal.pone.0110300
  93. Islam A, Reza SMS, Iftekharuddin KM. Multifractal texture estimation for detection and segmentation of brain tumors. IEEE Transactions on Biomedical Engineering. 2013;60(11):3204–3215. DOI: 10.1109/TBME.2013.2271383
  94. Duda D, Kretowski M, Mathieu R, de Crevoisier R, Bezy-Wendling J. Multi-image texture analysis in classification of prostatic tissues from MRI. Preliminary results. In: Pietka E, Kawa J, Wieclawek W, Editors. Information Technologies in Biomedicine. Volume 3. Springer International Publishing, Switzerland; 2014. p. 139–150. DOI: 10.1007/978-3-319-06593-9_13
  95. Arunadevi B, Deepa SN. Texture analysis for 3D classification of brain tumor tissues. Przegląd Elektrotechniczny. 2013;4:342–348.
  96. Meenakshi R, Anandhakumar P. Wavelet statistical texture features with orthogonal operators tumour classification in magnetic resonance imaging brain. American Journal of Applied Sciences. 2013;10(10):1154–1159. DOI: 10.3844/ajassp.2013.1154.1159
  97. Sasikala M, Kumaravel N. A wavelet-based optimal texture feature set for classification of brain tumours. Journal of Medical Engineering & Technology. 2008;32(3):198–205. DOI: 10.1080/03091900701455524
  98. Wagner F, Gryanik A, Schulz-Wendtland R, Fasching PA, Wittenberg T. 3D characterization of texture: Evaluation for the potential application in mammographic mass diagnosis. Biomedical Engineering. 2012;57:490–493. DOI: 10.1515/bmt-2012-4240
  99. Kovalev VA, Kruggel F, Gertz HJ, Von Cramon DY. Three-dimensional texture analysis of MRI brain datasets. IEEE Transactions on Medical Imaging. 2001;20(5):424–433. DOI: 10.1109/42.925295
  100. Mahmoud-Ghoneim D, Toussaint G, Constans JM, De Certaines JD. Three dimensional texture analysis in MRI: A preliminary evaluation in gliomas. Magnetic Resonance Imaging. 2003;21(9):983–987. DOI: 10.1016/S0730-725X(03)00201-7
  101. Georgiadis P, Cavouras D, Kalatzis I, Glotsos D, Athanasiadis E, Kostopoulos S, Sifaki K, Malamas M, Nikiforidis G, Solomou E. Enhancing the discrimination accuracy between metastases, gliomas and meningiomas on brain MRI by volumetric textural features and ensemble pattern recognition methods. Magnetic Resonance Imaging. 2009;27(1):120–130. DOI: 10.1016/j.mri.2008.05.017
  102. Woods BJ, Clymer BD, Kurc T, Heverhagen JT, Stevens R, Orsdemir A, Bulan O, Knopp MV. Malignant-lesion segmentation using 4D co-occurrence texture analysis applied to dynamic contrast-enhanced magnetic resonance breast image data. Journal of Magnetic Resonance Imaging. 2007;25(3):495–501. DOI: 10.1002/jmri.20837
  103. Huang J, Huang X, Metaxas D, Axel L. Dynamic texture based heart localization and segmentation in 4-D cardiac images. In: 4th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, IEEE, Arlington, VA; 2007. p. 852–855. DOI: 10.1109/ISBI.2007.356986
  104. Szczypinski PM, Strzelecki M, Materka A, Klepaczko A. MaZda. A software package for image texture analysis. Computer Methods and Programs in Biomedicine. 2009;94(1):66–76. DOI: 10.1016/j.cmpb.2008.08.005
  105. Guyon I, Elisseeff A. An introduction to variable and feature selection. Journal of Machine Learning Research. 2003;3:1157–1182. DOI: 10.1162/153244303322753616
  106. Chu C, Hsu A-L, Chou K-H, Bandettini P, Lin C. Does feature selection improve classification accuracy? Impact of sample size and feature selection on classification using anatomical magnetic resonance images. NeuroImage. 2012;60(1):59–70. DOI: 10.1016/j.neuroimage.2011.11.066
  107. Bhalerao A, Reyes-Aldasoro CC. Volumetric texture description and discriminant feature selection for MRI. Information Processing in Medical Imaging. 2003;18:282–293. DOI: 10.1007/b13239
  108. Boniatis I, Klironomos G, Gatzounis G, Panayiotakis G. Texture-based characterization of pre- and post-operative T2-weighted magnetic resonance signals of the cervical spinal cord in cervical spondylotic myelopathy. In: IEEE International Workshop on Imaging Systems and Techniques, IEEE; 2008. p. 353–356. DOI: 10.1109/IST.2008.4660000
  109. Zhang J, Tong L, Wang L, Li N. Texture analysis of multiple sclerosis: A comparative study. Magnetic Resonance Imaging. 2008;26(8):1160–1166. DOI: 10.1016/j.mri.2008.01.016
  110. Nketiah G, Savio S, Dastidar P, Nikander R, Eskola H, Sievänen H. Detection of exercise load-associated differences in hip muscles by texture analysis. Scandinavian Journal of Medicine & Science in Sports. 2015;25(3):428–434. DOI: 10.1111/sms.12247
  111. Zhang J, Yu C, Jiang G, Liu W, Tong L. 3D texture analysis on MRI images of Alzheimer’s disease. Brain Imaging and Behavior. 2012;6(1):61–69. DOI: 10.1007/s11682-011-9142-3
  112. Guyon I, Weston J, Barnhill S, Vapnik V. Gene selection for cancer classification using support vector machines. Machine Learning. 2002;46(1–3):389–422. DOI: 10.1023/A:1012487302797
  113. Fernandez-Lozano C, Seoane JA, Gestal M, Gaunt TR, Dorado J, Campbell C. Texture classification using feature selection and kernel-based techniques. Soft Computing. 2015;19(9):2469–2480. DOI: 10.1007/s00500-014-1573-5
  114. Wang S, Summers RM. Machine learning and radiology. Medical Image Analysis. 2012;16(5):933–951. DOI: 10.1016/j.media.2012.02.005
  115. De Nunzio G, Pastore G, Donativi M, Castellano A, Falini A. A CAD system for cerebral glioma based on texture features in DT-MR images. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment. 2011;648(1):S100–S102. DOI: 10.1016/j.nima.2010.12.086
  116. Tantisatirapong S, Nigel D, Rodriguez D, Abernethy L, Auer D, Clark C, et al. Magnetic resonance texture analysis: Optimal feature selection in classifying child brain tumors. In: XIII Mediterranean Conference on Medical and Biological Engineering and Computing, MEDICON 2013, Springer, Sevilla, Spain; 2014. p. 309–312. DOI: 10.1007/978-3-319-00846-2_77
  117. Urish KL, Keffalas MG, Durkin JR, Miller DJ, Chu CR, Mosher TJ. T2 texture index of cartilage can predict early symptomatic OA progression: Data from the osteoarthritis initiative. Osteoarthritis and Cartilage. 2013;21(10):1550–1557. DOI: 10.1016/j.joca.2013.06.007
  118. Torheim T, Malinen E, Kvaal K, Lyng H, Indahl UG, Andersen EKF, Futsaether CM. Classification of dynamic contrast enhanced MR images of cervical cancers using texture analysis and support vector machines. IEEE Transactions on Medical Imaging. 2014;33(8):1648–1656. DOI: 10.1109/TMI.2014.2321024

Notes

  • https://www.scopus.com
  • Available from https://github.com/mvallieres/radiomics
  • Available from http://www.cse.oulu.fi/CMV/Downloads/LBPSoftware
