Nonlinear Image Filtering for Materials Classification

Introduction
The Ruderman-Bialek paradigm
An image, or a general pattern of (Euclidean) dimension two or three, can often be analysed and classified according to its autocorrelation properties. By means of a well-known theorem of Fourier analysis (the Wiener-Khinchin-Einstein theorem), the autocorrelation function relates to the power spectral density (of the image, of the pattern).
With reference to image analysis, D. L. Ruderman and W. Bialek, in the introductory paragraph of a paper of theirs [1], wrote that "efficient signal processing systems take advantage of statistical structure in their input signals ... to generate compact representations of seemingly complex data".
Their declared focus was on early visual processing, occurring in the central nervous system of mammals. Nonetheless, their statement was to affect subsequent research on statistics-based image analysis and classification.
More recently, progress in image analysis and in automated image understanding systems has benefitted from the formalisation of statistical learning theory [2], [3], [4] and from developments in functional analysis [5], [6].

Image classification and pattern recognition
The need for analysing and understanding (e.g., classifying) images with possibly high throughput is felt everywhere in the natural sciences, in engineering and in medicine: optical and electron microscopes and diagnostic instruments acquire and produce large amounts of images.
Two tasks can be assigned: a) classification, which is usually stated in broad terms, and b) pattern recognition, where specific features, shapes, or objects are located and matched to a library. The former task can rely on the statistical properties of the image encoded by the power spectral density function. This function replaces the original image for statistics-based classification. Pattern recognition, instead, requires processing at a higher, often cognitive, level.
The satisfactory performance of classification and, respectively, recognition is still a challenge in terms of reliability and efficiency. Namely, there are two main obstacles: at the system input, faults in the overall design of experiments; at the system output, biased judgment.
The "compact representation of seemingly complex data" requires a method to filter information out. Unlike a procedure which may have been specifically designed for some application and need not work in another case, a method reflects, and is affected by, prior knowledge about what is being processed.
In other words, a method always relates to a model. The R. C. Conant-W. R. Ashby paradigm [7] (sensu lato) on the relation between system modelling and system control comes to mind here.

Feature extraction
In the realm of this article, the Ruderman-Bialek paradigm has served as a guideline for obtaining quantitative image features (or descriptors) of statistical type. The procedure relies on the comparison between the power spectral density of the image and some model function of the spatial wavenumber or wave-vector.
Feature extraction usually complies with some requirements, dictated in turn by the materials or body organs which images represent, and by the imaging technique. The requirements which have been taken into account herewith are summarised as follows.
• Independence of the coarse (very low spatial frequency) details in the image and possibly of the absolute intensity scale. This requirement is typical in optical or epifluorescence microscopy, where background need not be uniform.
• Capability to separate image structure from image texture e.g., in terms of relative power spectral densities. Structure may correspond to spatial organisation arising from morphogenesis e.g., a cell colony, or from other processes e.g., turbulence. (The so-called "coherent structures" [8].) Separation of structure from texture is one of the fundamental tasks of image understanding, as formalised by the Osher-Rudin paradigm [6], Ch. 1.
• Sensitivity to the spatial frequencies which characterize the image or the image class. Rings in diffraction patterns (e.g., X-ray selected area diffraction of polycrystalline materials) are the most intuitive example. In 1977 the first author [10], in analyzing deposits of micron-sized particles on glass substrates illuminated by coherent light, devised a texture analysis method based on the occurrence of local maxima in the power spectral density profiles (i.e., rings) as a function of wave-number. In this frame of mind it makes sense to focus on the relative departure, e.g., a difference on a log scale, of the power spectral density from a given model, and classify images accordingly: this is the basic idea behind the non-linear filtering algorithm named spectrum enhancement.

Scope
The purpose of this article is twofold:
• to outline the principles of image feature extraction by means of the spectrum enhancement algorithm;
• to describe applications of the algorithm to morphological analysis of nano-composite materials.
The first application deals with the surface morphology of a compounded elastomer, tread rubber, worn by abrasion and corrosion, the second application with the morphology of nano-dispersions in a thermoplastic polymer.

The spectrum enhancement algorithm
Given an image g[.], its "enhanced spectrum" is a function h[.] of spatial wave-number u, obtained by a sequence of operations on the power spectral density of g[.].
Broadly speaking, by "spectrum enhancement", ση for short, one understands a sequence of linear and non-linear operations carried out in the spatial frequency domain, whereby either the complex amplitude or the power spectral density of the given image is compared to a reference function. Comparison emphasizes ("enhances") differences between the observed properties, carried by the image, and the expected ones, represented by the reference function, which plays the role of a model. The ση-algorithm of interest herewith [11] relies on the following definitions. The image is supported by a square domain Ω and is extended to a torus T by even reflection, an operation denoted by Q. Next let QΩ be discretized by a square grid of step-length ℓ. In practice, each square mesh corresponds to a pixel.
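As a minimal sketch of this setup (assuming Q acts by even mirroring of the pixel grid, one standard way to obtain a continuous periodic extension), the reflection can be written in NumPy:

```python
import numpy as np

def even_reflect(g):
    """Extend an image to one period of the torus T by even reflection
    (a sketch of the operator Q): mirror across the right edge, then
    across the bottom edge, so the periodic extension is continuous."""
    top = np.hstack([g, np.fliplr(g)])
    return np.vstack([top, np.flipud(top)])

rng = np.random.default_rng(0)
g = rng.random((64, 64))       # toy image on the discretized square grid
Qg = even_reflect(g)           # one 128 x 128 period of Qg
assert Qg.shape == (128, 128)
# opposite edges of the period agree: no jump, hence no "cross artefact"
assert np.allclose(Qg[0, :], Qg[-1, :]) and np.allclose(Qg[:, 0], Qg[:, -1])
```

Because opposite edges of the reflected period match, the periodic extension has no discontinuity along the frame, which is precisely what suppresses the cross-shaped leakage in the Fourier transform.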

Reciprocal domain
The Fourier transform G[u] of Qg[x] is supported at grid nodes in the square and is distribution-valued. In particular G[u] = Σ_{m,n} a_{m,n} δ[u − u_{m,n}], where obviously |a_{0,0}| > 0 for any non-degenerate image. As a consequence of the continuity of Qg[x] on T, the graph of G[u] exhibits no "cross artefact" ([9], Ch. 4).
Denote by |G[u]|^2 the power spectral density. Let Θ be the arc of radius u = |u| in the reciprocal domain, subtended by the angular interval θ_b ≤ θ ≤ θ_e (Eq. 2).

Obtaining functions of wave number
Def. 1 (arc-averaged spectral density). The scaled, arc-averaged spectral density profile is the function s[.] of u alone, defined for 0 ≤ u ≤ u_max (cycles/image) as the average of the power spectral density over the arc Θ of radius u, where the integral reduces to a finite sum over the grid nodes belonging to Θ.
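A hedged NumPy sketch of the arc average follows; full-circle arcs (θ_b = 0, θ_e = 2π) and nearest-integer binning of |u| are assumptions of this sketch, and the scaling of Def. 1 is replaced by a plain mean over each arc:

```python
import numpy as np

def arc_averaged_psd(g):
    """Arc-averaged power spectral density profile s[u], u in cycles/image.
    Sketch with full-circle arcs (theta_b = 0, theta_e = 2*pi assumed) and
    nearest-integer binning of |u|; the integral over each arc reduces to
    a finite mean over the grid nodes falling on it."""
    psd = np.abs(np.fft.fft2(g)) ** 2
    n = g.shape[0]
    fu = np.fft.fftfreq(n) * n                       # cycles/image
    uu, vv = np.meshgrid(fu, fu, indexing="ij")
    radius = np.rint(np.hypot(uu, vv)).astype(int)
    u_max = n // 2
    s = np.array([psd[radius == u].mean() for u in range(u_max + 1)])
    return s

rng = np.random.default_rng(1)
s = arc_averaged_psd(rng.random((128, 128)))
assert len(s) == 65 and s[0] > 0   # s[0] = |a_00|^2 > 0, as in the text
```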
Let m[u] be a model spectral density. For example, choose the continuous function parameterized by p and defined by m^(p)[u] := u^(−p), where p (> 0) is the model exponent.
Def. 2 (log-enhanced spectrum). The m^(p)-log enhanced spectrum h^(p)[u] is defined by h^(p)[u] := log( s[u] / m^(p)[u] ). As will be shown next, the enhanced spectrum is related to spatial differentiation of integer order of the image, Qg[x], followed by non-linear operations.
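Assuming the model is the power law m^(p)[u] = u^(−p) (a common choice, consistent with the fractal-analysis comparison made later on), Def. 2 reduces to a one-liner; the toy profile below follows the model exactly, so its log-enhanced spectrum vanishes:

```python
import numpy as np

def log_enhanced_spectrum(s, p):
    """m^(p)-log enhanced spectrum under the power-law model m[u] = u**(-p):
    h[u] = log(s[u] / m[u]) = log s[u] + p * log u, for u >= 1."""
    u = np.arange(1, len(s))          # skip u = 0 (coarse details / DC)
    return u, np.log(s[1:]) + p * np.log(u)

# a profile that follows the model exactly: the enhancement vanishes,
# so any non-zero h flags a departure from the model
s = np.array([10.0, 1.0, 0.25, 1.0 / 9.0])
u, h = log_enhanced_spectrum(s, p=2.0)
assert np.allclose(h, 0.0)
```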

Spectrum enhancement vs. spatial differentiation
Def. 3 (enhanced spectrum). The m^(p)-(plain) enhanced power spectral density is defined by H^(p)[u] := |u|^p |G[u]|^2, where |u|^p := (u_1^2 + u_2^2)^(p/2).
Thm. 1 (Spectrum enhancement and spatial differentiation of integer order). Assume the image is not degenerate and that all partial derivatives of g[.] up to a suitable order exist as tempered distributions.
a) If the model exponent satisfies p/2 = N (> 0), integer, then H^(p)[u] admits the representation H^(2N)[u] = (2π)^(−2N) Σ_{k=0..N} (N choose k) |F[∂_{x1}^k ∂_{x2}^{N−k} Qg][u]|^2, with F the Fourier transform and wavenumbers in cycles per unit length. b) Let p/2 = N + λ be such that N + 1 ∈ ℕ and 0 < λ < 1. Then H^(p)[u] = |u|^{2λ} H^(2N)[u]. A relation between spectrum enhancement and the spatial differentiation of fractional order when p/2 is not an integer has also been obtained ([11], Thm. 2).
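The integer-order case can be checked numerically for p = 2, i.e. N = 1: with derivatives taken spectrally on a periodic image, |u|^2 |G[u]|^2 coincides with the sum of the derivative spectra up to a (2π)^2 factor tied to the Fourier convention. A sketch, using numpy's convention with frequencies in cycles/sample:

```python
import numpy as np

# Numerical check of Thm. 1 a) for p = 2 (N = 1): |u|^2 |G[u]|^2 equals
# (2 pi)^-2 (|F[dg/dx1]|^2 + |F[dg/dx2]|^2), derivatives taken spectrally
# on a periodic image.  The (2 pi)^2 factor is tied to numpy's Fourier
# convention with frequencies in cycles/sample.
rng = np.random.default_rng(2)
g = rng.random((32, 32))
G = np.fft.fft2(g)
fu = np.fft.fftfreq(32)
u1, u2 = np.meshgrid(fu, fu, indexing="ij")

H2 = (u1**2 + u2**2) * np.abs(G) ** 2          # enhanced spectrum, p = 2

d1 = np.fft.ifft2(2j * np.pi * u1 * G)         # spectral dg/dx1
d2 = np.fft.ifft2(2j * np.pi * u2 * G)         # spectral dg/dx2
rhs = (np.abs(np.fft.fft2(d1)) ** 2
       + np.abs(np.fft.fft2(d2)) ** 2) / (2 * np.pi) ** 2

assert np.allclose(H2, rhs)
```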
The role of spectrum enhancement as a non linear image filter has thus been specified by Thm. 1.

Fractal analysis vs. spectrum enhancement
Unlike the linear regression of the log-log plot of the spectral density used in fractal analysis [9], ση serves to separate structure from texture, as suggested by the already mentioned Osher-Rudin paradigm [6]. Namely, ση can be tuned to emphasise the low-frequency behaviour of h^(p)[.], which corresponds to image structure.

Polynomial interpolation, image feature vector
The values of the raw enhanced spectrum generally form an oscillatory sequence; a polynomial q[.] of degree d is therefore fitted to it by least squares in the interval u_L ≤ u ≤ u_H.
The variables and parameters on which the graph of q[.] depends are arranged as an n-tuple, i.e., a vector ψ of n real-valued components. Only some entries of the n-tuple are listed here: the endpoints of Θ (Eq. 2), the model exponent p of Eq. 4, the polynomial degree d, as well as u_L and u_H of Eq. 11.

As a consequence one states
Def. 4 (The ση-derived feature vector). The ση-derived feature vector of an image is the M-dimensional vector w = [w_m], the m-th entry of which is read off the interpolating polynomial q[·; ψ]. The dependence of w[.] on ψ may be occasionally emphasised by writing w[ψ].
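The pipeline of Defs. 1-4 can be sketched as follows; fitting in a normalised variable and sampling the polynomial at M equally spaced points are assumptions of this sketch, not the exact entries w_m of Def. 4:

```python
import numpy as np

def feature_vector(u, h, d, u_L, u_H, M=8):
    """Sketch of the sigma-eta feature pipeline: fit a degree-d polynomial
    q[.] to the oscillatory raw enhanced spectrum over u_L <= u <= u_H,
    then read the feature vector w off q.  Fitting in the normalised
    variable x = (u - u_L)/(u_H - u_L) and sampling q at M equally spaced
    points are assumptions of this sketch."""
    band = (u >= u_L) & (u <= u_H)
    x = (u[band] - u_L) / (u_H - u_L)      # normalise for a stable fit
    q = np.poly1d(np.polyfit(x, h[band], deg=d))
    return q, q(np.linspace(0.0, 1.0, M))

u = np.arange(1.0, 257.0)
h = np.log(u) * np.sin(u / 20.0)           # toy oscillatory spectrum
q, w = feature_vector(u, h, d=9, u_L=1.0, u_H=200.0)
assert w.shape == (8,)
```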

Applications of w[ψ]
As the Reader may have already deduced, the n-tuple ψ can control the ση algorithm, hence image feature extraction.
The most straightforward application of ση consists of choosing only a few n-tuples and focussing on some properties of the corresponding polynomials, q[·; ψ]. An example is provided by the morphological analysis of worn tread rubber (Section 3).
In a more elaborate application, ψ plays the role of a (vector-valued) control variable in the training and validation of an image classifier. An example is provided by the morphological analysis of nano-dispersions (Section 4).
Still more elaborate applications include classifier training followed by image recognition in complex experiment designs. This issue is not covered herewith.

Rubber wear
The term wear referred to tire rubber, in particular tread rubber, includes a variety of processes which, according to the classification by Alan G. Veith [13] and, years earlier, by B. Briscoe [14], can be divided into wear of cohesive and interfacial type. These processes are of interest to industrial designers, product engineers and environmental scientists. Treadwear occurs between tire and road and is essential to the performance of a tire. In addition to determining the roadworthiness of a vehicle, treadwear affects fuel efficiency. It also has an impact on the environment, because it contributes to the so-called non exhaust emissions from vehicle traffic, in both gaseous and particulate form. Compound rubber, of which a tire is made, is the result of scientific research and technological advancement which have characterized the past 100 years. In essence, compounding consists of adding, under suitable conditions, sub-micrometer-sized particles of a filler, such as carbon black or amorphous silica, to an elastomer matrix, such as a styrene-butadiene block copolymer (SBR), natural rubber, . . . , or mixtures thereof.
In designing laboratory or field experiments involving tread wear, regardless of the final goal, scientists of any discipline, from chemistry to mechanical engineering to toxicology, shall bear in mind four basic facts:
• rubber, as a polymer, has a complex thermodynamical behaviour [15];
• tire rubber is a composite material originating from filler dispersion and distribution within the elastomer;
• different additives in the compound play different roles at different times in different parts of the tire during its life cycle;
• many properties and processes, including fracture, are rate-dependent.

Abrasive, corrosive and fatigue wear
The processes of interest herewith are abrasive wear and a kind of corrosive wear.
Quoting Veith [13], abrasive wear is "caused by cutting-rupture action of sharply cornered asperities on the sliding counterface" among which road pavement or "third bodies (particles)". Hence it belongs to the cohesive type, being "controlled by the rupture strength or energy (toughness) of the wearing material".
Instead, corrosive wear results "from direct chemical surface attack", albeit under very mild conditions, and as such is a type of interfacial wear, because it involves reactions originating "in very thin surface layers" [13].
During normal use, tire tread undergoes both abrasive and fatigue wear. The latter is "caused by rapid or gradual material property changes, that give rise to cracks and, with their growth, to a loss of material" [13]. As by-products, gases, aerosols and material particles in a broad range of sizes are released to the environment.
The material particles, generally called tire wear particles (TWP), carry information about the thermo-mechanical degradation of compound rubber and the interaction between tread and road. The TWP release rate of a vehicle is a function of the tread wear rate, ρ_tw, the ratio of lost tread mass over traveled distance. For a four-tire passenger car, 50 ≤ ρ_tw ≤ 200 mg/km [16].
The contribution of TWP to total suspended particulate matter is estimated to range from 2% to 10%, with minor contributions to PM10 and PM2.5 [17]. Once released, TWP undergo degradation (corrosive wear) by environmental agents: this can be regarded as a secondary process.
A thorough investigation of all processes involving treadwear is a broad scope, resource intensive project, where interdisciplinary expertise is mandatory, if the risk of gross errors in experiment design is to be kept under control.
Whenever the fracture or corrosion of materials are involved, quantitative morphological information is needed. In this Section, the morphology of particles originated from laboratory tread wear is analysed and an attempt is made at relating morphological features to elemental microanalytical data.

Tread abrasion material
In laboratory simulation experiments, tire tread abrasion particles can be obtained from different equipment, which include steel brush abraders (material labeled TrBP) and high severity abraders (material labeled hsWP). This article focuses on TrBP. However, some properties of hsWP are described for comparison.
The steel brush abrader operates in air and is used for coarse balancing newly manufactured tires. The passenger car tire rotates at v_T ≃ 9 m/s (tangential speed) against the brush, which exerts local pressure p in the range 10^5 < p < 10^6 N/m^2. (Average pressure is orders of magnitude lower.) Steel brush abrasion does not compare to tread wear on the road, because a) it causes degradation of the elastomer mass by mechanical fracture only, b) it does not cause any significant thermal degradation, because the average tread temperature never exceeds 50 °C, c) it does not involve contact between tread and pavement, hence no foreign materials contaminate rubber debris.
High severity abraders, instead, are intended to simulate the tire-road interaction. A much higher pressure is exerted on the rubber and the tread surface temperature is remarkably higher. The hsWP material was obtained by a Gent-Nah-type abrader [18], where a drum clad by tread rubber rotates at v_T = 8 · 10^−2 m/s against a steel blade. The latter exerts a force F ≃ 35 N, resulting in p in the range 10^6 < p < 10^7 N/m^2 and surface temperature T ≃ 120 °C. For this reason hsWP is believed to represent real-world TWP.
Sample images of TrBP and hsWP are shown by Figure 1.
Differences in the surface chemistry of TrBP and hsWP are best understood by means of X-ray photoelectron (XP) spectroscopy [19]. Some XP spectra of TrBP and hsWP are shown by Figure 2. The caption provides the details.

Leaching tests
From the arguments presented in the previous Subsection, TWP originated from vehicle traffic are most likely going to remain on the ground and be leached out by rain and surface waters at pH values ranging from 3 to 7.5. In principle, leaching can be simulated by laboratory tests as part of a more comprehensive experiment design.
In spite of the differences between TWP (hsWP in particular) and TrBP outlined above, a laboratory leaching test can be informative. Experiments were set up at the then Department of Environmental Sciences, University of Milan-Bicocca, Milan, IT. In order to simulate leaching by water in a pond, TrBP from the abrader were poured into half-filled, 5-litre glass flasks containing water buffered at pH = 7.5. The flasks were fastened to the steel table of a LaboShake™ apparatus, which rotated at ≃ 100 rpm in the horizontal plane, at room temperature. Samples of the reference material (fresh TrBP, labelled e0) and of the 24 h- and 48 h-leached TrBPs, respectively labelled e1 and e2, were then examined by the experimental methods described below, and the images were analysed by the algorithm outlined in Subsection 3.3.3.

Electron microscopy and microanalysis
Materials were analysed at the Electron Microscopy Laboratory, Center for Advanced Materials, University of Massachusetts-Lowell, Lowell, MA, by means of an Amray 1400 scanning electron microscope (SEM) equipped with a Tracor Northern TN 3205 energy dispersive X-ray (EDX) spectrometer. Specimen preparation complied with the laboratory standards. TrBP were directly applied to Shinto Paint™ carbon tape on top of Al studs, without depositing any conductive coating. The electron optics magnification was set at 5,000 and the resulting 640^2 pixel SEM images were saved in TIFF format. EDX spectra were acquired and saved by the DTSA software.
Sample images of the TrBP material are shown by Figure 3.

Spectrum enhancement
The ση algorithm is applied to the images of TrBP. One 640^2 pixel image per material is available, from which four 512^2 pixel tiles are obtained. Tiles represent different, overlapping portions of the original.
For reasons which will be explained below, the following n-tuple (as defined by Eq. 12) is chosen.
(Figure 3 caption) Length of square side (all panels) L/2 ≃ 40 µm. Tile size = 512^2 pixels. Coating: none. Particles are sub-millimeter sized. Before the leaching experiment (left panel) surfaces show coarse and fine features, i.e., carry structure and texture. After 24 h leaching in water at room temperature and pH = 7.5 (centre panel), and more so after 48 h (right panel), particle surfaces lose texture and become smoother.
The graphs of the corresponding polynomials q[.; ψ] − q[0; ψ] are assumed to describe surface morphology. Three such graphs, one per material, are displayed by Figure 4.

Qualitative interpretation of enhanced spectra
Qualitatively, the graphs of Figure 4 are interpreted as follows. The reference material, tile #1 of e0 (black curve), shows a broad peak in 16 ≤ u ≤ 200 cycles/L, which is due to structure (features ranging from 5 down to 2 µm) and texture (features down to 0.4 µm). In the 24 h- and 48 h-leached materials, tile #3 of e1 (magenta, lowest curve) and tile #2 of e2 (cyan, middle curve), surface texture disappears. The narrower peaks centered at 32 cycles/L come from coarse features only (5 down to 1.5 µm). The relation between wavenumber values and feature sizes is further explained by Table I.
Rem. 1 (Justification). The value of u_4 is chosen high enough to account for both surface structure and texture, but not too high, in order to leave out high frequency image noise. For the index ρ to be strictly positive, a proper maximum of q[u; ψ] has to exist in 1 < u ≤ u_4. The values of u_5 and u_6 define the left and, respectively, right endpoints of the "bell" about the maximum at u_3. The index ρ, whenever > 0, is the area under such a bell.
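Rem. 1 can be turned into a small routine; the baseline subtracted under the "bell" and the toy polynomial below are assumptions of this sketch:

```python
import numpy as np

def roughness_index(q, u4, u5, u6):
    """Surface roughness index rho as described in Rem. 1: the (discrete)
    area under the 'bell' of q[.] between u5 and u6, provided a proper
    maximum of q exists in 1 < u <= u4; otherwise rho = 0.  Subtracting
    the lower bell endpoint as a baseline is an assumption of this sketch."""
    u = np.arange(1, u4 + 1)
    if q(u).argmax() in (0, len(u) - 1):   # maximum sits at an endpoint
        return 0.0
    band = np.arange(u5, u6 + 1)
    base = min(q(u5), q(u6))
    return float(np.sum(q(band) - base))   # finite sum over grid values

# toy concave profile peaking at intermediate wavenumbers
q = np.poly1d(np.polyfit([1, 16, 32, 64, 200], [0, 2, 3, 2, 0], deg=2))
rho = roughness_index(q, u4=200, u5=16, u6=64)
assert rho > 0.0
```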

Wavenumber values and feature sizes
Some representative values of the above defined u_5, u_3 and u_6 in cycles/image (of side length L) and of ρ are shown in Table I, as well as the spatial periods ℓ_5 and ℓ_6, in µm, which correspond to u_5 and u_6, respectively, by recalling L = 80 µm.
With the exception of u_6 of material e0, wave number values vary by one or two units from tile to tile of the same material. The variation of ρ is more significant: standard deviations are displayed in Figure 5 below.
(Table I caption) Typical values of the quantities of Def. 5: u_5, u_3 and u_6 in cycles/image, spatial periods ℓ_5 and ℓ_6, in µm, and ρ in arbitrary units.
The ℓ_5 column of Table I suggests coarsening of the "dominant" feature, from 1.8 to 2.6 µm, caused by elastomer leach-out. The computed values of ρ confirm the trend, as well as the qualitative remarks about surface structure and texture given above (before Def. 5).

How to choose ψ and why
Rem. 2 (On the selection of ψ and on the dependence of ρ on ψ). The values u_H and u_L of Eq. 17 are selected in order to include the coarsest image features (u_L = 0) and to filter out image noise (u_H = 384). Since the computation of ρ is controlled by ψ, the entries θ_b, θ_e, p and d are chosen among a few 4-tuples (e.g., p = 2.2, 2.4, 2.6, 2.8; d = 7, 9, 11) in order to maximise the morphological discrimination of the materials through ρ.

Elemental microanalysis
Since the same materials had been analysed by an EDX probe, elemental microanalytical data for S, Si, and Zn were available. The microanalytical data of Figure 5 (right axis) were collected at three sampling points for each specimen. The operating conditions were: incident electron beam energy = 200 keV, detector resolution = 143 eV, take-off angle = 68 deg, analyser channel width = 10 eV. Detector live times varied from one sampling point to another, ranging from 4.9 to 9.0 s. The background count rates were estimated in the neighbourhood of the spectral lines of interest, Si Kα, S Kα, and Zn Kα, then subtracted from the corresponding peak count rates.

Surface morphology and microanalytical data
The sample-averaged values of elemental concentrations and of ρ are displayed by Figure 5 vs. leaching time.
The graphs of Figure 5 describe the corrosive wear of TrBP. As the surface becomes smoother (decreasing ρ), the relative S content decreases as well, i.e., the vulcanized elastomer loses its cross-links; at the same time SiO2, the mineral filler, expressed as Si, becomes more and more exposed, whereas Zn increases slightly.

Main result
In other words, Figure 5 correlates surface morphology to microanalysis during the corrosive wear of the compounded elastomer.

Filler dispersion and distribution
Polymer nano-composites
A polymer nano-composite is a heterogeneous medium which is characterised, from a geometrical point of view, by the same methods developed for random media [21]. The host material is a polymer and the filler consists of nano-particles having suitable shape, which shall comply with three requirements, all related to geometry: a) at least one dimension shall be smaller than 100 nm; b) filler nano-particles shall not form agglomerates, i.e., shall be dispersed; c) this property shall be exhibited everywhere in the material volume, i.e., filler distribution shall be uniform.
Interest in the quantitative assessment of dispersion and distribution arises because they alter, in fact improve, many properties of the materials as compared to the raw polymer.
Let the nano-composite occupy a domain Ω and U ⊂ Ω be a test subdomain of suitable size. Let U (= meas[U]) be the volume of U and V_F be the total volume of the filler particles in U.

Dispersion indicator
As suggested by experimental evidence, theoretical arguments and computer simulations [22], [23], polymer chains interact with the filler inside a hull of thickness t which surrounds the filler particle. Alterations of the geometry and the physical properties of the chains occurring inside the hull affect the material as a whole. Let V_H be the total volume of hulls in U. An indicator of dispersion in U is defined by Q := V_H / V_F. Let r be the smallest dimension (e.g., radius of gyration) of the particle and δ := t/r. For example, if the three assumptions hold: A1) t depends on the polymer properties, not on the size of the filler particle (typically 3 ≤ t ≤ 10 nm); A2) filler particles are spheres of radius r; A3) individual filler spheres, not aggregates, are found in U, then Q[δ] = (1 + δ)^3 − 1, and the value of Q[δ] can be assumed to represent the degree of dispersion in U.
With particles of different shape, the corresponding relation between δ and Q can be found. Filler-host interaction is stronger if dispersion is higher: hence the interest in determining, or estimating, Q by some method.
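For the spherical case of assumptions A1)-A3), and assuming Q is the hull-to-filler volume ratio V_H/V_F, each sphere of radius r contributes hull volume (4/3)π[(r + t)^3 − r^3], so Q depends on δ = t/r alone:

```python
def dispersion_indicator(t_nm, r_nm):
    """Dispersion indicator for individual filler spheres (A1-A3),
    assuming Q = V_H / V_F: each sphere of radius r carries a hull of
    thickness t, so Q[delta] = (1 + delta)**3 - 1 with delta = t / r."""
    delta = t_nm / r_nm
    return (1.0 + delta) ** 3 - 1.0

# alumina spheres of diameter 48 nm (r = 24 nm) with a hull of t = 6 nm
Q = dispersion_indicator(6.0, 24.0)
assert abs(Q - 0.953125) < 1e-12   # delta = 0.25
```

Note how sensitive Q is to particle size: halving r doubles δ and more than doubles Q, which is one way to see why fine, non-agglomerated dispersion strengthens the filler-host interaction.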

Representative elementary domain
A high Q is a necessary requirement for a good nano-composite, but not the only one. Namely, the quantities V_F and r make sense in statistical terms. If the same degree of uniformity as inside U is observed everywhere else in the nano-composite, then the values of V_F and r determined in U characterize the material and one says the distribution is good. In this case, the function Q[δ] calculated by e.g., Eq. 20 is valid everywhere, hence U is a representative elementary domain [21] of the heterogeneous medium and U its representative elementary volume (REV). Conversely, if either r or V_F fluctuates from one location to another because of non-uniform distribution, then a larger domain is needed to represent the composite. The Debye algorithm (Ch. 1 of [21]) is applied to estimate the REV. Very roughly speaking, for a fixed confidence level supplied to the Debye algorithm, a smaller REV means better distribution. The practical implication of inhomogeneity at the micrometer (or sub-millimeter) scale is a material which performs poorly or unreliably.
Since dispersion and distribution can be derived from the geometrical properties of the nano-composite, and the latter can be seen by microscopy, the role of image analysis in materials characterization should be evident.

Materials, visual scoring
The nano-composite materials [24] of interest herewith were prepared from polyethylene terephthalate and alumina nanoparticles (Al2O3, approximately spherical, of average diameter 48 nm). A Haake Rheomix 600 torque rheometer was used, hence the material is labelled H for short. Four different sets of processing parameters (temperature, torque, feed rate) gave rise to as many classes (1 to 4), hereinafter regarded as the classes of belonging.
Unfortunately, the parameter values were not disclosed. Material samples were cut by a diamond knife microtome and examined using a transmission electron microscope in bright field mode at 100 keV. Recording took place on photographic film. Hardcopies were scanned, giving rise to "master images". A number of tiles (8 to 9) can be read out of each master image. Sample tiles of the material H classes are shown by Figure 6. In the past [24], images of each class had been visually rated by dispersion and distribution and ranked by an expert according to a subjective criterion. Another expert looked for a particle scoring method which could justify visual rating. A method was found and the corresponding skewness index, β, was determined, which met expectations. As a consequence, each class of belonging comes with its value of β.

Scope
The purpose of this Subsection is to address the following question: can ση lead to an automated classifier capable of replicating the expert's β-based ranking?

Classifier flow chart
The application of the ση algorithm to material H images is more complex than the previous one (§3).
Since master images were a priori known to belong to four classes, the algorithm is requested to assign each tile to its own class with the highest possible score.
In the first place, the request can indeed be addressed, if one recalls how ση can be controlled by ψ of Eq. 12. Next, one has to design a scheme by which
• prior knowledge about the class of belonging is passed on to the algorithm,
• the class assigned by the algorithm is compared to the class of belonging and class assignment is displayed,
• the classifier success rate is measured and the best performing ψ determined.
The first requirement includes the formation of training sets. The second relies on multivariate statistics (principal components analysis). The third is based on the classifier training matrix, M_T[ψ], where the row index is the class of belonging, the column index the assigned class, and each entry is a count; exact classification corresponds to a diagonal matrix; non-zero off-diagonal entries mean misclassification.
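The success-rate measurement on the training matrix M_T[ψ] can be sketched as:

```python
import numpy as np

def success_rate(M_T):
    """Classifier success rate from the training matrix M_T[psi]: rows are
    classes of belonging, columns assigned classes, entries are counts.
    Exact classification (diagonal matrix) gives rate 1.0."""
    M_T = np.asarray(M_T, dtype=float)
    return np.trace(M_T) / M_T.sum()

# toy 4-class training matrix: 3 misclassified tiles out of 32
M_T = np.array([[7, 1, 0, 0],
                [0, 8, 0, 0],
                [1, 0, 7, 0],
                [0, 0, 1, 7]])
assert abs(success_rate(M_T) - 29 / 32) < 1e-12
```

In the feedback loop described next, this scalar is the quantity maximised over the candidate n-tuples ψ.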
The ση algorithm thus becomes part of a feedback loop [25], which includes multivariate statistical analysis and supervised training. An outline of the procedure is provided by Figure 7 and its caption.
With reference to Figure 9, the q[.] from class 1 (top left panel of Figure 6) rises steeply up to u = 64 cycles/tile, then exhibits a slowly increasing trend: this corresponds to the lack of relevant filler structure and texture. The q[.] from class 3 (bottom left panel of Figure 6) reflects agglomerates. The graphs representing the other two materials can also be easily interpreted: a graph of q[.] with a uniform trend (class 1) comes from a feature-less image; instead, local maxima in 60 ≤ u ≤ 150 as in classes 3 and 4 (bottom right panel of Figure 6) are due to particles of size ranging between 15 and 45 nm, regardless of aggregation or agglomeration.
A class with feature-poor tiles, such as class 1, is more likely to yield inconsistent training vs. validation results (Figure 8).
From the principal components plane display and from enhanced spectra, the dispersion and distribution properties of the nano-composite can be at least qualitatively inferred. Namely, dispersion can be rated by the occurrence of agglomerates (e.g., class 1 vs. classes 3 and 4). Class 1 turns out to have the poorest distribution (longest class error bars in Figure 8).

Correlation between visual scoring and automated classification
The first principal component, z_1, of the class centroids, as determined by training and validation (Figure 8), when regressed against the visual scoring index, β, yields the result of Figure 10. The index β derives from visual scoring, z_1 from spectrum enhancement classification. The centroid z_1 coordinates, as determined by both training and validation, have been included in the affine regression to make the result more realistic.
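The affine regression itself is elementary; the coordinates below are made up for illustration, since the actual centroid values are those of Figure 10:

```python
import numpy as np

# Affine (degree-1) regression of the first principal component z1 of the
# class centroids against the visual scoring index beta.  The coordinates
# are hypothetical; the actual values are those of Figure 10.
beta = np.array([1.0, 2.0, 3.0, 4.0])     # hypothetical visual scores
z1 = np.array([-1.1, -0.2, 0.9, 2.1])     # hypothetical centroid z1 values
slope, intercept = np.polyfit(beta, z1, deg=1)
residuals = z1 - (slope * beta + intercept)
# least squares with an intercept leaves zero-mean residuals
assert abs(residuals.mean()) < 1e-12
```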

General discussion and conclusion
The core of image spectrum enhancement (ση) is spatial differentiation of a suitable order, including a fractional one, followed by non-linear transformations. When the control parameters are optimized, enhanced spectra seem to adequately describe the morphology of the image by separating structure from texture, which in turn are (statistical) properties of the image, or of the image set, as a whole. Image classification based on spectrum enhancement follows accordingly. If one is interested in structure and texture, then classification most likely succeeds. Indeed, the algorithm has been shown to perform in a satisfactory way on a variety of image sets, originated from as many different processes (nanodispersion, growth of tubulin microfilaments, formation of cell colonies, light scattering by material particles). Instead, spectrum enhancement, like any frequency-domain method, is inappropriate for exactly locating isolated features or details.
The applications to materials science described in this article differ by complexity of the algorithm and by the degree of "assimilation" to other experimental data.
The morphology of TrBP has been described in a very simple way by means of the surface roughness index, ρ (Def. 5). Graphs of enhanced spectra of the investigated materials (Figure 4) have been related to surface structure and texture (Table I). Finally, ρ has been related to elemental microanalytical data from EDX spectroscopy (Figure 5).
Possible developments include: the analysis of other types of TWP and studies of fracture dynamics.
Analysis of images of other TWP materials by the ση algorithm is possible, provided the particle surfaces are visible: coarse particles from treadwear tests, for instance, are clad by minerals (from anti-smear agents or from road pavement), as shown by the right panel of Figure 1.
Micrometer sized particles are more easily imaged and analysed.
Quantitative morphology of wear debris is relevant to the characterisation of rate dependent fracture mechanisms and therefore to assess the reliability of a given product.
The application to nanodispersions has included the development of an automated classifier, capable of discriminating materials to some extent (Figure 8). Enhanced spectra of single tiles have been interpreted in relation to particle dispersion and the formation of aggregates (Figure 9). Correlation between the automated classifier and visual scoring has been obtained (Figure 10). Knowledge of the mixer parameters might have deepened the understanding of automated morphological analysis.
Developments are needed at least in two directions. In the first place a relation shall be found between the Q of Eq. 19 and enhanced spectra. Namely, Q describes dispersion if evaluated locally (i.e., estimated from a single image), whereas its statistical properties describe distribution, and shall be estimated from an image set. Another issue of interest is the estimation of the representative elementary domain, U , from ση and other experimental techniques.

Acknowledgments
The author thanks the Editor and the Publisher for the invitation and for the acceptance of this chapter.
The experimental data analysed herewith are the result of a collaboration network. Credits are given to the following people, with gratitude.