
Similarity Measures and Dimensionality Reduction Techniques for Time Series Data Mining

By Carmelo Cassisi, Placido Montalto, Marco Aliotta, Andrea Cannata and Alfredo Pulvirenti

Submitted: January 10th 2012. Reviewed: May 21st 2012. Published: September 12th 2012.

DOI: 10.5772/49941


1. Introduction

A time series is “a sequence X = (x1, x2, …, xm) of observed data over time”, where m is the number of observations. Tracking the behavior of a specific phenomenon over time can produce important information. A large variety of real-world applications, such as meteorology, geophysics and astrophysics, collect observations that can be represented as time series (Fig. 1).

A collection of time series can be defined as a Time Series Database (TSDB). Given a TSDB, most time series mining efforts address the similarity matching problem. Time series data mining can also be exploited by research areas dealing with signals, such as image processing. For example, image data can be converted to time series: from image color histograms (Fig. 2), on which image matching can be applied, to object perimeters for the characterization of shapes [39].

Figure 1.

Examples of time series data relative to a) monsoon, b) sunspots, c) ECG (ElectroCardioGram), d) seismic signal.

Time series are essentially high-dimensional data [17]. Mining high-dimensional data involves addressing a range of challenges, among them: (i) the curse of dimensionality [1], and (ii) the meaningfulness of the similarity measure in the high-dimensional space. An important task to enhance performance on time series is therefore the reduction of their dimensionality, which must preserve the main characteristics of the data and reflect the original (dis)similarity (this effect will be referred to as lower bounding [11]). When treating time series, the similarity between two sequences of the same length can be calculated by summing the ordered point-to-point distances between them (Fig. 3). In this sense, the most used distance function is the Euclidean distance [13], corresponding to the second degree of the general Lp-norm [41]. This distance measure is cataloged as a metric distance function, since it obeys the three fundamental metric properties: non-negativity, symmetry and triangle inequality [29]. In most cases, a metric function is desired, because the triangle inequality can then be used to prune the index during search, speeding up execution for exact matching [28]. Nevertheless, the Euclidean distance and its variants present several drawbacks that make their use inappropriate in certain applications. For these reasons, other distance measures were proposed to give more robustness to the similarity computation. In this sense it is worth citing the well known Dynamic Time Warping (DTW) [22], which makes distance comparisons less sensitive to signal transformations such as shifting, uniform amplitude scaling or uniform time scaling. In the literature there exist other distance measures that overcome signal transformation problems, such as the Landmarks similarity, which does not follow traditional similarity models relying on point-wise Euclidean distance [31] but, in accordance with human intuition and episodic memory, relies on the similarity of those points (times, events) of “greatest importance” (for example local maxima, local minima, inflection points).

In conjunction with this branch of research, a wide range of techniques for dimensionality reduction has been proposed. Among them we will treat only representations that have the desirable property of allowing lower bounding. By this property, after establishing a true distance measure for the raw data (in this case the Euclidean distance), the distance between two time series in the reduced space is always less than or equal to the true distance. Such a property ensures exact indexing of data (i.e. with no false negatives [13]). The following representations describe the state of the art in this field: spectral decomposition through the Discrete Fourier Transform (DFT) [1]; Discrete Wavelet Transform (DWT) [7]; Singular Value Decomposition (SVD) [24]; Piecewise Aggregate Approximation (PAA) [19]; Adaptive Piecewise Constant Approximation (APCA) [6]; Piecewise Linear Approximation (PLA) [20]; and Chebyshev Polynomials (CHEB) [29]. Many researchers have also introduced symbolic representations of time series, which transform time series measurements into a collection of discretized symbols; among them we cite the Symbolic Aggregate approXimation (SAX) [26], based on PAA, and its evolved multi-resolution version iSAX 2.0 [35]. Symbolic representations can take advantage of efforts conducted by the text-processing and bioinformatics communities, which have made available several data structures and algorithms for efficient pattern discovery on symbolic encodings [2],[25],[36].

The chapter is organized as follows. Section 2 will introduce the similarity matching problem on time series. We will note the importance of using efficient data structures to perform search, and of choosing an adequate distance measure. Section 3 will show some of the most used distance measures for time series data mining. Section 4 will review the above-mentioned dimensionality reduction techniques.

Figure 2.

An example of conversion from the RGB (Red, Green, Blue) image color histograms, to a time series.


2. Similarity matching problem

A TSDB with m objects, each of length n, is denoted by TSDB = {x1, x2, ..., xm}, where xi = (xi(1), xi(2), ..., xi(n)) is a vector denoting the ith time series and xi(j) denotes the jth value of xi with respect to time. n indicates the dimensionality of the data set.

The general representation of a TSDB is an m × n matrix:

\[ A = \begin{pmatrix} x_1(1) & x_1(2) & \cdots & x_1(j) & \cdots & x_1(n) \\ x_2(1) & x_2(2) & \cdots & x_2(j) & \cdots & x_2(n) \\ \vdots & & & & & \vdots \\ x_i(1) & x_i(2) & \cdots & x_i(j) & \cdots & x_i(n) \\ \vdots & & & & & \vdots \\ x_m(1) & x_m(2) & \cdots & x_m(j) & \cdots & x_m(n) \end{pmatrix} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{pmatrix} \tag{E1} \]

In many cases, especially when datasets get larger, databases are supported by special data structures referred to as indexing structures. Indexing consists of building a data structure I that enables efficient searching within the database [29]. Usually, it is designed to face two principal similarity queries: (i) the k-nearest neighbors (knn) query, and (ii) the range query. Given a time series Q in TSDB, and a similarity/dissimilarity measure d(T, S) defined for each pair T, S in TSDB, the former deals with the search for the k time series in TSDB most similar to Q. The latter finds the set R of time series that are within distance r from Q. Given an indexing structure I, there are two ways to post a similarity query in time series databases [29]:

  • whole matching: given a TSDB of time series, each of length n, whole matching relates to the computation of similarity among time series along their whole length.

  • subsequence matching: given a TSDB of m time series S1, S2, …, Sm, each of length ni, and a short query time series Q of length nq < ni, with 1 ≤ i ≤ m, subsequence matching relates to finding matches of Q in subsequences of every Si, starting at every position.

Indexing is crucial for reaching efficiency in data mining tasks, such as clustering or classification, especially for huge databases such as TSDBs. Clustering is the unsupervised division of data into groups (clusters) of similar objects under some similarity or dissimilarity measure. In the time series domain, a problem similar to clustering is the motif discovery problem [28], consisting of searching for the main cluster (or motif) in a TSDB. The search for clusters is unsupervised. Classification assigns unlabeled time series to predefined classes after a supervised learning process. Both tasks make massive use of distance computations.

Distance measures play an important role in similarity problems and in data mining tasks. Concerning a distance measure, it is important to understand whether it can be considered metric. A metric function on a TSDB is a function f: TSDB × TSDB → R (where R is the set of real numbers). For all x, y, z in TSDB, this function obeys four fundamental properties:

f(x, y) ≥ 0 (non-negativity)

f(x, y) = 0 if and only if x = y (identity)

f(x, y) = f(y, x) (symmetry)

f(x, z) ≤ f(x, y) + f(y, z) (triangle inequality)

If any of these is not obeyed, then the distance is non-metric. Using a metric function is desirable, because the triangle inequality can be used to index the space and speed up search. A well known framework for indexing time series is GEMINI (GEneric Multimedia INdexIng) [13], which provides fast search algorithms for locating time series that match, in an exact or approximate way, a query time series Q.

It was introduced to accommodate any dimensionality reduction method for time series, thus allowing indexing on the new representation [29]. GEMINI guarantees no false negatives on index search if two conditions are satisfied: (i) for the raw time series, a metric distance measure must be established; (ii) the reduced representation must guarantee the lower bounding property.


3. Similarity measures

A common data mining task is the estimation of similarity among objects. A similarity measure is a relation between a pair of objects and a scalar number. Common intervals used to map the similarity are [-1, 1] or [0, 1], where 1 indicates the maximum similarity.

Consider the similarity between two numbers x and y defined as:

\[ \mathrm{numSim}(x,y) = 1 - \frac{|x - y|}{|x| + |y|} \tag{E2} \]

Let X = (x1, …, xn) and Y = (y1, …, yn) be two time series. Some similarity measures built on numSim are listed below (a Python sketch of these measures is given after the list):

  • mean similarity, defined as:

\[ \mathrm{tsim}(X,Y) = \frac{1}{n} \sum_{i=1}^{n} \mathrm{numSim}(x_i, y_i) \tag{E3} \]

  • root mean square similarity:

\[ \mathrm{rtsim}(X,Y) = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \mathrm{numSim}(x_i, y_i)^2} \tag{E4} \]

  • and peak similarity [15]:

\[ \mathrm{psim}(X,Y) = \frac{1}{n} \sum_{i=1}^{n} \left[ 1 - \frac{|x_i - y_i|}{2\,\max(|x_i|, |y_i|)} \right] \tag{E5} \]
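The following is a minimal Python sketch of the measures above; the guards against a zero denominator are an assumption, since the chapter does not discuss that case.

```python
import numpy as np

def num_sim(x, y):
    """Similarity between two numbers (Eq. E2)."""
    denom = abs(x) + abs(y)
    return 1.0 if denom == 0 else 1.0 - abs(x - y) / denom

def tsim(X, Y):
    """Mean similarity (Eq. E3)."""
    return np.mean([num_sim(x, y) for x, y in zip(X, Y)])

def rtsim(X, Y):
    """Root mean square similarity (Eq. E4)."""
    return np.sqrt(np.mean([num_sim(x, y) ** 2 for x, y in zip(X, Y)]))

def psim(X, Y):
    """Peak similarity (Eq. E5)."""
    return np.mean([1.0 - abs(x - y) / (2 * max(abs(x), abs(y))) if max(abs(x), abs(y)) > 0 else 1.0
                    for x, y in zip(X, Y)])
```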

Other common similarity functions used to perform complete or partial matching between time series are the cross-correlation function (or Pearson's correlation function) [38] and the cosine of the angle between the two vectors. The cross-correlation between two time series X and Y of length n, allowing a shifted comparison of l positions, is defined as:

\[ r_{XY} = \frac{\sum_{i=1}^{n} (x_i - \bar{X})(y_{i-l} - \bar{Y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{X})^2 \sum_{i=1}^{n} (y_{i-l} - \bar{Y})^2}} \tag{E6} \]

where X̄ and Ȳ are the means of X and Y. The correlation rXY provides the degree of linear dependence between the two vectors X and Y, from perfect linear relationship (rXY = 1) to perfect negative linear relationship (rXY = -1).

Another way to evaluate the similarity between two vectors is the cosine of the angle between the two vectors X and Y, defined as:

\[ \cos(\vartheta) = \frac{X \cdot Y}{\|X\|\,\|Y\|} = \frac{\sum_{i=1}^{n} x_i y_i}{\sqrt{\sum_{i=1}^{n} x_i^2}\,\sqrt{\sum_{i=1}^{n} y_i^2}} \tag{E7} \]

This measure provides values in the range [-1, 1]. The lower boundary indicates that the X and Y vectors are exactly opposite, the upper boundary indicates that the vectors are exactly the same, and the value 0 indicates that they are independent (orthogonal).
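A small Python sketch of both measures follows; the handling of the shift l (comparing only the overlapping positions) is an assumption, since the chapter does not specify how the boundaries are treated.

```python
import numpy as np

def cross_correlation(X, Y, l=0):
    """Pearson cross-correlation (Eq. E6) aligning x_i with y_{i-l}."""
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    if l > 0:                      # keep only the overlapping positions (assumption)
        X, Y = X[l:], Y[:len(Y) - l]
    xd, yd = X - X.mean(), Y - Y.mean()
    return np.sum(xd * yd) / np.sqrt(np.sum(xd ** 2) * np.sum(yd ** 2))

def cosine_similarity(X, Y):
    """Cosine of the angle between X and Y (Eq. E7)."""
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    return np.dot(X, Y) / (np.linalg.norm(X) * np.linalg.norm(Y))
```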

3.1. Metric distances

Another way to compare time series involves the concept of distance measures. Let T and S be two time series of length n, and Ti and Si the ith values of T and S, respectively. This subsection lists the distance functions most used in Euclidean space (a Python sketch of them is given after the list).

  1. Euclidean Distance. The most used distance function in many applications. It is defined as (Fig. 3):

\[ d(T,S) = \sqrt{\sum_{i=1}^{n} (T_i - S_i)^2} \tag{E8} \]

  2. Manhattan Distance. Also called “city block distance”. It is defined as:

\[ d(T,S) = \sum_{i=1}^{n} |T_i - S_i| \tag{E9} \]

  3. Maximum Distance. It is defined to be the maximum value of the distances of the attributes:

\[ d(T,S) = \max_{1 \le i \le n} |T_i - S_i| \tag{E10} \]

  4. Minkowski Distance. The Euclidean distance, Manhattan distance and Maximum distance are particular instances of the Minkowski distance, also called Lp-norm. It is defined as:

\[ d(T,S) = \left( \sum_{i=1}^{n} |T_i - S_i|^p \right)^{1/p} \tag{E11} \]

  5. Mahalanobis Distance. The Mahalanobis distance is defined as:

\[ d(T,S) = \sqrt{(T - S)\, W^{-1} (T - S)^{T}} \tag{E12} \]

where W is the covariance matrix [12].
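A minimal Python sketch of these distance functions might be written as follows; the function names and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def minkowski(T, S, p=2):
    """Lp-norm distance (Eq. E11): p=1 gives Manhattan (E9), p=2 Euclidean (E8)."""
    return np.sum(np.abs(np.asarray(T) - np.asarray(S)) ** p) ** (1.0 / p)

def maximum(T, S):
    """Maximum distance (Eq. E10), the limit of the Lp-norm for p -> infinity."""
    return np.max(np.abs(np.asarray(T) - np.asarray(S)))

def mahalanobis(T, S, W):
    """Mahalanobis distance (Eq. E12); W is the covariance matrix."""
    diff = np.asarray(T) - np.asarray(S)
    return np.sqrt(diff @ np.linalg.inv(W) @ diff)
```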

Figure 3.

T and S are two time series of a particular variable v, along the time axis t. The Euclidean distance results in the sum of the point-to-point distances (gray lines), along all the time series.


3.2. Dynamic time warping

The Euclidean distance is cataloged as a metric distance function, since it obeys the metric properties: non-negativity, identity, symmetry and triangle inequality (Section 2). It is surprisingly competitive with other, more complex approaches, especially when the dataset size gets larger [35]. Nevertheless, the Euclidean distance and its variants present several drawbacks that make their use inappropriate in certain applications:

  • It compares only time series of the same length.

  • It doesn’t handle outliers or noise.

  • It is very sensitive with respect to six signal transformations: shifting, uniform amplitude scaling, uniform time scaling, uniform bi-scaling, time warping and non-uniform amplitude scaling [31].

Dynamic Time Warping (DTW) [22] gives more robustness to the similarity computation. With this method, time series of different lengths can also be compared, because it replaces the one-to-one point comparison used in the Euclidean distance with a many-to-one (and vice versa) comparison. The main feature of this distance measure is that it allows recognizing similar shapes, even if they present signal transformations, such as shifting and/or scaling (Fig. 4).

Given two time series T = {t1, t2, ..., tn} and S = {s1, s2, ..., sm} of length n and m, respectively, an alignment by the DTW method exploits the information contained in an n × m distance matrix:

\[ distMatrix = \begin{pmatrix} d(T_1,S_1) & d(T_1,S_2) & \cdots & d(T_1,S_m) \\ d(T_2,S_1) & d(T_2,S_2) & \cdots & \vdots \\ \vdots & & & \vdots \\ d(T_n,S_1) & \cdots & \cdots & d(T_n,S_m) \end{pmatrix} \tag{E13} \]

where distMatrix(i, j) corresponds to the distance d(Ti, Sj) between the ith point of T and the jth point of S, with 1 ≤ i ≤ n and 1 ≤ j ≤ m.

The DTW objective is to find the warping path W = {w1, w2, ..., wk, ..., wK} of contiguous elements on distMatrix (with max(n, m) ≤ K < m + n - 1, and wk = distMatrix(i, j)), such that it minimizes the following function:

Figure 4.

Difference between DTW distance and Euclidean distance (green lines represent the mapping between points of time series T and S). The former allows many-to-one point comparisons, while the Euclidean distance allows only point-to-point (one-to-one) comparisons.

\[ DTW(T,S) = \min \left( \sum_{k=1}^{K} w_k \right) \tag{E14} \]

The warping path is subject to several constraints [22]. Given wk = (i, j) and wk-1 = (i', j'), with i, i' ≤ n and j, j' ≤ m:

  1. Boundary conditions. w1 = (1,1) and wK = (n, m).

  2. Continuity. i – i’ ≤ 1 and j – j’ ≤ 1.

  3. Monotonicity. i – i’ ≥ 0 and j – j’ ≥ 0.

The warping path can be efficiently computed using dynamic programming [9]. With this method, a cumulative distance matrix γ of the same dimension as distMatrix is created to store in each cell (i, j) the following value (Fig. 5):

\[ \gamma(i,j) = d(T_i,S_j) + \min\{\gamma(i-1,j-1),\ \gamma(i-1,j),\ \gamma(i,j-1)\} \tag{E15} \]

The overall complexity of the method is relative to the computation of all distances in distMatrix, which is O(nm). The last element of the warping path, wK, corresponds to the distance calculated with the DTW method.

In many cases, this method can lead to undesired effects. An example is when a large number of points of a time series T are mapped to a single point of another time series S (Fig. 6a, 7a). A common way to overcome this problem is to restrict the warping path so that it has to follow a direction along the diagonal (Fig. 6b, 7b). To do this, we can restrict the path by enforcing the recursion to stop at a certain depth, represented by a threshold δ. The cumulative distance matrix γ will then be calculated as follows:

\[ \gamma(i,j) = \begin{cases} d(T_i,S_j) + \min\{\gamma(i-1,j-1),\ \gamma(i-1,j),\ \gamma(i,j-1)\} & \text{if } |i-j| < \delta \\ \infty & \text{otherwise} \end{cases} \tag{E16} \]

Figure 5.

Warping path computation using dynamic programming. The lavender cells correspond to the warping path. The red arrow indicates its direction. The warping distance at the (i, j) cell will consider, besides the distance between Ti and Sj, the minimum value among the adjacent cells at positions (i-1, j-1), (i-1, j) and (i, j-1). The Euclidean distance between two time series can be seen as a special case of DTW, where the path's elements belong to the γ matrix diagonal.

Figure 6.

Different mappings obtained with the classic implementation of DTW (a), and with the restricted path version using a threshold δ = 10 (b). Green lines represent the mapping between points of time series T and S.

Figure 7.

a) Classic implementation of DTW. (b) Restricted path, using a threshold δ = 10. For each plot (a) and (b): in the center, the warping path calculated on matrix γ; on the top, the alignment of time series T and S, represented by the green lines; on the left, the time series T; on the bottom, the time series S; on the right, the color bar relative to the distance values in matrix γ.

Figure 7b shows the computation of a restricted warping path, using a threshold δ = 10. This constraint, besides limiting extreme or degenerate mappings, allows speeding up the DTW distance calculation, because we need to store only the distances that are at most δ positions away (in the horizontal and vertical directions) from the distMatrix diagonal. This reduces the computational complexity to O((n + m)δ). The above proposed constraint is also known as the Sakoe-Chiba band (Fig. 8a) [34], and it is classified as a global constraint. Another very common global constraint is the Itakura parallelogram (Fig. 8b) [18].
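A minimal Python sketch of DTW with an optional Sakoe-Chiba band is given below; the use of the absolute difference as point-to-point distance and the infinity padding of the first row and column are implementation assumptions.

```python
import numpy as np

def dtw(T, S, delta=None):
    """DTW distance (Eqs. E14-E15), optionally restricted to a Sakoe-Chiba band
    of half-width delta (Eq. E16)."""
    n, m = len(T), len(S)
    gamma = np.full((n + 1, m + 1), np.inf)   # cumulative distance matrix
    gamma[0, 0] = 0.0
    for i in range(1, n + 1):
        lo = 1 if delta is None else max(1, i - delta)
        hi = m if delta is None else min(m, i + delta)
        for j in range(lo, hi + 1):
            d = abs(T[i - 1] - S[j - 1])                 # point-to-point distance d(Ti, Sj)
            gamma[i, j] = d + min(gamma[i - 1, j - 1],   # diagonal
                                  gamma[i - 1, j],       # vertical
                                  gamma[i, j - 1])       # horizontal
    return gamma[n, m]

# Example: similar shapes shifted in time still yield a small DTW distance.
T = np.sin(np.linspace(0, 6, 100))
S = np.sin(np.linspace(0.5, 6.5, 120))
print(dtw(T, S, delta=20))
```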

Local constraints are a subject of research and are different from global constraints [22], because they provide local restrictions on the set of alternative steps of the recurrence function (Eq. 15). For example, we can replace Eq. 15 with:

\[ \gamma(i,j) = d(T_i,S_j) + \min\{\gamma(i-1,j-1),\ \gamma(i-1,j-2),\ \gamma(i-2,j-1)\} \tag{E17} \]

to define a new local constraint.

Figure 8.

Examples of global constraints: (a) Sakoe-Chiba band; (b) Itakura parallelogram.


3.3. Longest Common SubSequence

Another well known method that takes advantage of dynamic programming to allow one-to-many point comparisons is the Longest Common SubSequence (LCSS) similarity measure [37]. An interesting feature of this method is that it is more resilient to noise than DTW, because it allows some elements of the time series to be unmatched (Fig. 9). This solution builds a matrix LCSS similar to γ, but considering similarities instead of distances. Given the time series T and S of length n and m, respectively, the recurrence function is expressed as follows:

\[ LCSS(i,j) = \begin{cases} 0 & \text{if } i = 0 \text{ or } j = 0 \\ 1 + LCSS[i-1, j-1] & \text{if } T_i = S_j \\ \max(LCSS[i-1, j],\ LCSS[i, j-1]) & \text{otherwise} \end{cases} \tag{E18} \]

with 1 ≤ i ≤ n and 1 ≤ j ≤ m. Since exact matching between Ti and Sj can be too strict for numerical values (Eq. 18 is best suited to string distance computation, such as the edit distance), a common way to relax this definition is to apply the following recurrence function:

\[ LCSS(i,j) = \begin{cases} 0 & \text{if } i = 0 \text{ or } j = 0 \\ 1 + LCSS[i-1, j-1] & \text{if } |T_i - S_j| < \varepsilon \\ \max(LCSS[i-1, j],\ LCSS[i, j-1]) & \text{otherwise} \end{cases} \tag{E19} \]

The cell LCSS(n, m) contains the similarity between T and S, because it corresponds to the length l of the longest common subsequence of elements between the time series T and S. To define a distance measure, we can compute [32]:

\[ LCSSdist(T,S) = \frac{n + m - 2l}{m + n} \tag{E20} \]

Also for LCSS the time complexity is O(nm), but it can be improved to O((n + m)δ) if a restriction is used (i.e. when |i - j| < δ).
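A Python sketch of the relaxed LCSS recurrence and the derived distance might look as follows (the default tolerance value ε is an arbitrary example).

```python
import numpy as np

def lcss_distance(T, S, eps=1.0):
    """LCSS similarity (Eq. E19) and derived distance (Eq. E20)."""
    n, m = len(T), len(S)
    L = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(T[i - 1] - S[j - 1]) < eps:           # relaxed match within tolerance eps
                L[i, j] = 1 + L[i - 1, j - 1]
            else:
                L[i, j] = max(L[i - 1, j], L[i, j - 1])
    l = L[n, m]                                          # length of the longest common subsequence
    return (n + m - 2 * l) / (n + m)
```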

Figure 9.

Alignment using LCSS. Time series T (red line) is obtained from S (blue line) by adding a fixed value = 5, and further “noise” at positions from 20 to 30. In these positions there is no mapping (green lines).


4. Dimensionality reduction techniques

We have already noted that a key aspect in achieving efficiency, when mining time series data, is to work with a data representation that is lighter than the raw data. This can be done by reducing the dimensionality of the data while maintaining its main properties. An important feature to be considered, when choosing a representation, is the lower bounding property.

By this property, given two time series T and S, after establishing a true distance measure dtrue for the raw data (such as the Euclidean distance), the distance dfeature between their reduced representations R(T) and R(S) has to be always less than or equal to dtrue:

\[ d_{feature}(R(T), R(S)) \le d_{true}(T, S) \tag{E21} \]

If a dimensionality reduction technique ensures that the reduced representation of a time series satisfies such a property, we can assume that similarity matching in the reduced space maintains its meaning. Moreover, we can take advantage of indexing structures such as GEMINI (Section 2) [13] to speed up search while avoiding false negative results. In the following subsections, we will review the main dimensionality reduction techniques that preserve the lower bounding property.

4.1. DFT

The dimensionality reduction based on the Discrete Fourier Transform (DFT) [1] was the first to be proposed for time series. The DFT decomposes a signal into a sum of sine and cosine waves, called the Fourier series. Each wave is represented by a complex number known as a Fourier coefficient (Fig. 10) [29],[32]. The most important feature of this method is data compression, because the original signal can be reconstructed by means of the information carried by the waves with the highest Fourier coefficients. The rest can be discarded with no significant loss.

Figure 10.

The raw data is in the top-left plot. In the first row, the central plot (“Fourier coefficients” plot) shows the magnitude for each wave (Fourier coefficient). Yellow points are drawn for the top ten highest values. The remaining plots (in order from first row to last, and from left to right) represent the waves corresponding to the top ten highest coefficients in decreasing order, respectively of index {2, 100, 3, 99, 98, 4, 93, 9, 1, 97}, in the “Fourier coefficients” plot.

More formally, given a signal x = {x1, x2, ..., xn}, the n-point Discrete Fourier Transform of x is a sequence X = {X1, X2, ..., Xn} of complex numbers. X is the representation of x in the frequency domain. Each wave/frequency XF is calculated as:

\[ X_F = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} x_i\, e^{-\frac{2\pi j i F}{n}} \qquad (j = \sqrt{-1}),\quad F = 1, \dots, n \tag{E22} \]

The original representation of x, in the time domain, can be recovered by the inverse function:

\[ x_i = \frac{1}{\sqrt{n}} \sum_{F=1}^{n} X_F\, e^{\frac{2\pi j i F}{n}} \qquad (j = \sqrt{-1}),\quad i = 1, \dots, n \tag{E23} \]

The energy E(x) of a signal x is given by:

\[ E(x) = \|x\|^2 = \sum_{i=1}^{n} |x_i|^2 \tag{E24} \]

A fundamental property of the DFT is guaranteed by Parseval's Theorem, which asserts that the energy calculated in the time domain for the signal x is preserved in the frequency domain, and then:

\[ E(x) = \sum_{i=1}^{n} |x_i|^2 = \sum_{F=1}^{n} |X_F|^2 = E(X) \tag{E25} \]

Figure 11.

The raw data is in the top-left plot. In the first row, the central plot (“Fourier coefficients” plot) shows the magnitude (Fourier coefficient) of each wave. Yellow points are drawn for the top ten highest values. The remaining plots (in order from first row to last, and from left to right) represent the reconstruction of the raw data using first the wave with the highest coefficient (of index 2), then adding the wave relative to the second highest coefficient (of index 100), and so on.

By this property, if we use the Euclidean distance (Eq. 8), the distance d(x, y) between two signals x and y in the time domain is the same as that calculated in the frequency domain, d(X, Y), where X and Y are the respective transforms of x and y. The reduced representation X' = {X1, X2, ..., Xk} is built by keeping only the first k coefficients of X to reconstruct the signal x (Fig. 11).

By Parseval's Theorem we can be sure that the distance calculated in the reduced space is always less than or equal to the distance calculated in the original space, because k ≤ n, and then the distance measured using Eq. 8 will produce:

\[ d(X', Y') \le d(X, Y) = d(x, y) \tag{E26} \]

which satisfies the lower bounding property defined in Eq. 21.

The computational complexity of the DFT is O(n2), but it can be reduced by means of the FFT algorithm [8], which computes the DFT in O(n log n) time. The main drawback of the DFT reduction technique is the choice of the best number of coefficients to keep for a faithful reconstruction of the original signal.
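A small Python sketch of the DFT-based reduction is shown below; using NumPy's FFT with orthonormal scaling (so that Parseval's theorem holds as in Eq. E25) and keeping the first k coefficients are the assumptions made here.

```python
import numpy as np

def dft_reduce(x, k):
    """Reduced representation X' = first k Fourier coefficients of x (orthonormal DFT)."""
    return np.fft.fft(x, norm="ortho")[:k]   # 1/sqrt(n) scaling preserves energy (Eq. E25)

def dft_distance(X1, X2):
    """Euclidean distance between reduced spectra; lower-bounds d(x, y) (Eq. E26)."""
    return np.sqrt(np.sum(np.abs(X1 - X2) ** 2))

x = np.sin(np.linspace(0, 10, 128))
y = np.cos(np.linspace(0, 10, 128))
print(dft_distance(dft_reduce(x, 16), dft_reduce(y, 16)))   # <= np.linalg.norm(x - y)
```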

4.2. DWT

Another technique for decomposing signals is the Wavelet Transform (WT). The basic idea of the WT is the representation of data in terms of sums and differences of prototype functions, called wavelets. The discrete version of the WT is the Discrete Wavelet Transform (DWT). Unlike the Fourier coefficients of the DFT, which always represent global contributions to the signal over all the time, wavelet coefficients give local contributions to the reconstruction of the signal [32].

The Haar wavelet is the simplest possible wavelet. Its formal definition is given in [7]. An example of DWT based on the Haar wavelet is shown in Table 1.

The general Haar transform HL(T) of a time series T of length n can be formalized as follows:

\[ A_{L'+1}(i) = \frac{A_{L'}(2i-1) + A_{L'}(2i)}{2} \tag{E27} \]
\[ D_{L'+1}(i) = \frac{A_{L'}(2i-1) - A_{L'}(2i)}{2} \tag{E28} \]
\[ H_L(T) = (A_L, D_L, D_{L-1}, \dots, D_2) \tag{E29} \]

where 0 < L’ ≤ L, and 1 ≤ in.

Level (L) | Averages coefficients (A) | Wavelet coefficients (D)
1 | 10, 4, 8, 6 | –
2 | 7, 7 | 3, 1
3 | 7 | 0

Table 1.

The Haar transform of T = {10, 4, 8, 6} depends on the chosen level, and corresponds to concatenating the Averages coefficients (column 2) at the chosen level with all the Wavelet coefficients (column 3) in decreasing level order from the chosen level. At level 1 the representation is the same as the time series: H1(T) = {10, 4, 8, 6} = T. At level 2 it is H2(T) = {7, 7} + {3, 1} = {7, 7, 3, 1}. At level 3 it is H3(T) = {7} + {0} + {3, 1} = {7, 0, 3, 1}.

The main drawback of this method is that it is well defined only for time series whose length n is a power of 2 (n = 2m). The computational complexity of the DWT using the Haar wavelet is O(n). Chan and Fu [7] demonstrated that the Euclidean distance between two time series T and S, d(T, S), is bounded from below by the distance between their Haar transforms, d(H(T), H(S)), preserving the lower bounding property in Eq. 21:

\[ d(H(T), H(S)) \le d(T, S) \tag{E30} \]
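The following Python sketch computes the full Haar transform described by Eqs. E27–E29; the pairwise averaging via NumPy reshaping is only one possible implementation.

```python
import numpy as np

def haar(T):
    """Haar transform H_L(T) (Eqs. E27-E29); len(T) must be a power of 2."""
    A = np.asarray(T, dtype=float)
    details = []
    while len(A) > 1:
        pairs = A.reshape(-1, 2)
        details.append((pairs[:, 0] - pairs[:, 1]) / 2.0)   # wavelet (difference) coefficients
        A = (pairs[:, 0] + pairs[:, 1]) / 2.0               # averages coefficients
    # coarsest average first, then detail coefficients from coarsest to finest
    return np.concatenate([A] + details[::-1])

print(haar([10, 4, 8, 6]))   # -> [7. 0. 3. 1.], i.e. H3(T) of Table 1
```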

Figure 12.

DWT using the Haar wavelet with the MATLAB Wavelet Toolbox™ GUI tools. T is a time series of length n = 256 and it is shown in the top-left plot (Original Signal). In the bottom-left plot (Original Coefficients) there are all the AL coefficients, represented by blue stems, and the DL' coefficients (L' < L = 7), represented by green stems (stem length is proportional to the coefficient value). The top-right plot shows the Synthesized Signal obtained by selecting only the 64 biggest coefficients, as reported in the bottom-right plot (Selected Coefficients): black points represent unselected coefficients.

4.3. SVD

As we have seen in Section 2, a TSDB with m time series, each of length n, can be represented by an m × n matrix A (Eq. 1). An important result from linear algebra is that A can always be written in the form [16]:

\[ A = U W V^{T} \tag{E31} \]

where U is an m × n matrix, and W and V are n × n matrices. This is called the Singular Value Decomposition (SVD) of the matrix A, and the elements of the n × n diagonal matrix W are the singular values wi:

\[ W = \begin{pmatrix} w_1 & 0 & \cdots & 0 \\ 0 & w_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & w_n \end{pmatrix} \tag{E32} \]

V is orthonormal, because VVT = VTV = In, where In is the identity matrix of size n. So, we can multiply both sides of Eq. 31 by V to get:

\[ AV = U W V^{T} V \implies AV = U W \tag{E33} \]

UW represents a set of n-dimensional vectors AV = {X1, X2, ..., Xm} which are rotated from the original vectors A = {x1, x2, ..., xm} [29]:

\[ \begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_m \end{pmatrix} = \begin{pmatrix} U_1 \\ U_2 \\ \vdots \\ U_m \end{pmatrix} \begin{pmatrix} w_1 & 0 & \cdots & 0 \\ 0 & w_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & w_n \end{pmatrix} \tag{E34} \]

Figure 13.

SVD for a TSDB of m = 7 time series of length n = 50. It is possible to note in the transformed data plot how only the first k < 10 singular values are significant. In this example we heuristically choose to store only the first k = 5 singular values of W, and their relative entries in U and V, because they represent about 95% of the total variance. This permits reducing the space complexity from n to k, while keeping the information almost unchanged (see the reconstruction in the bottom-left plot).

Similarly to the sine/cosine waves for the DFT (Section 4.1) and to the wavelets for the DWT (Section 4.2), the U vectors represent a basis for AV, and their linear combination with W (which represents their coefficients) can reconstruct AV.

We can perform dimensionality reduction by selecting the k biggest singular values, and their relative entries in U and V, to obtain a new k-dimensional dataset that best fits the original data.

SVD is an optimal transform if we aim to reconstruct the data, because it minimizes the reconstruction error, but it has two important drawbacks: (i) it needs a collection of time series to perform the dimensionality reduction (it cannot operate on a single time series), because it examines the whole dataset prior to transforming it; moreover, the computational complexity is O(min(m2n, mn2)); (ii) the transformation is not incremental, because a new data insertion requires a new global computation.
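A short Python sketch of SVD-based reduction using NumPy follows; the choice of k and the interpretation of UW as the reduced coordinates follow the text above.

```python
import numpy as np

def svd_reduce(A, k):
    """Keep the k biggest singular values of the m x n dataset A (Eq. E31)."""
    U, w, Vt = np.linalg.svd(A, full_matrices=False)    # A = U diag(w) V^T
    reduced = U[:, :k] * w[:k]             # k-dimensional representation of each time series (UW)
    reconstruction = reduced @ Vt[:k, :]   # best rank-k approximation of A
    return reduced, reconstruction
```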

4.4. Dimensionality reduction via PAA

Given a time series T of length n, PAA divides it into w equal-sized segments ti (1 ≤ i ≤ w) and records the values corresponding to the mean of each segment, mean(ti) (Fig. 14), in a vector PAA(T) = {mean(t1), mean(t2), …, mean(tw)}, using the following formula:

\[ mean(t_i) = \frac{w}{n} \sum_{j=\frac{n}{w}(i-1)+1}^{\frac{n}{w}\, i} t_j \tag{E35} \]

When n is a power of 2, each mean(ti) essentially represents an Averages coefficient AL(i), as defined in Section 4.2, and w corresponds in this case to:

\[ w = \frac{n}{2^{L}} \tag{E36} \]

Figure 14.

An approximation via the PAA technique of a time series T of length n = 256, with w = 8 segments.

The time complexity to calculate the mean values of Eq. 35 is O(n). The PAA method is very simple and intuitive; moreover, it is strongly competitive with other, more sophisticated transforms such as DFT and DWT [21],[41]. Much data mining research makes use of the PAA reduction for its simplicity. It is simple to demonstrate that the distance on the raw representation is bounded from below by the distance calculated on the PAA representation (using the Euclidean distance as reference point), satisfying Eq. 21. A limitation of such a reduction, in some contexts, can be the fixed size of the obtained segments.
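A Python sketch of PAA and of the distance usually computed on it is given below; it assumes that w divides n exactly, and the √(n/w) scaling of the distance is the standard choice used to guarantee the lower bound of Eq. E21.

```python
import numpy as np

def paa(T, w):
    """PAA of T with w equal-sized segments (Eq. E35); assumes len(T) % w == 0."""
    T = np.asarray(T, dtype=float)
    return T.reshape(w, len(T) // w).mean(axis=1)

def paa_distance(P, Q, n):
    """Distance between two PAA representations; lower-bounds the Euclidean distance."""
    w = len(P)
    return np.sqrt(n / w) * np.sqrt(np.sum((np.asarray(P) - np.asarray(Q)) ** 2))
```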

4.5. APCA

In Section 4.2 we noticed that not all the Haar coefficients of the DWT are important for the time series reconstruction. The same holds for PAA in Section 4.4, where not all segment means are equally important for the reconstruction; or rather, we sometimes need an approximation with segments of non-fixed size. APCA is an adaptive model that, differently from PAA, allows defining segments of variable size. This can be useful when a time series contains areas of low variance and areas of high variance, for which we want, respectively, few segments for the former and many segments for the latter.

Given a time series T = {t1, t2, ..., tn} of length n, the APCA representation of T is defined as [6]:

\[ C = \{\langle cv_1, cr_1 \rangle, \dots, \langle cv_M, cr_M \rangle\}, \qquad cr_0 = 0 \tag{E37} \]

where cri is the last index of the ith segment, and

\[ cv_i = mean(t_{cr_{i-1}+1}, \dots, t_{cr_i}) \tag{E38} \]

To find an optimal representation through the APCA technique, dynamic programming can be used [14],[30]. This solution requires O(Mn2) time. A better solution was proposed by Chakrabarti et al. [6], which finds the APCA representation in O(n log n) time and defines a distance measure for this representation satisfying the lower bounding property of Eq. 21. The proposed method is based on the Haar wavelet transformation. As we have seen in Section 4.2, the original signal can be reconstructed by selecting only the biggest coefficients and truncating the rest. The segments in the reconstructed signal may have approximate mean values (due to the truncation) [6], so these values are replaced by the exact mean values of the original signal. Two aspects must be considered before performing APCA:

  1. Since the Haar transformation deals only with time series of length n = 2p, we need to add zeros to the end of the time series until it reaches the desired length.

  2. If we keep the biggest M Haar coefficients, we are not sure that the reconstruction will return an APCA representation of length M. We only know that the number of segments will vary between M and 3M [6]. If the number of segments is more than M, we will iteratively merge the most similar adjacent pairs of segments until we reach M segments (a simplified sketch of this merging step is given below).

The algorithm to compute the APCA representation can be found in [6].
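The following is only a simplified Python sketch of the adjacent-segment merging idea: it starts from single-point segments and greedily merges the most similar adjacent pair (here measured by the difference of segment means, which is an assumption) until M segments remain; the method of [6] instead derives the initial segments from the biggest Haar coefficients.

```python
import numpy as np

def apca_sketch(T, M):
    """Greedy bottom-up merge into M segments, returned as (cv_i, cr_i) pairs (Eq. E37)."""
    segments = [[float(t)] for t in T]            # start with one segment per point
    while len(segments) > M:
        means = [np.mean(s) for s in segments]
        gaps = [abs(means[i] - means[i + 1]) for i in range(len(segments) - 1)]
        i = int(np.argmin(gaps))                  # most similar adjacent pair
        segments[i:i + 2] = [segments[i] + segments[i + 1]]
    C, cr = [], 0
    for s in segments:
        cr += len(s)
        C.append((float(np.mean(s)), cr))         # (cv_i, cr_i)
    return C
```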

4.6. Time series segmentation using PLA

As with most computer science problems, the representation of the data is the key to efficient and effective solutions. A suitable representation of a time series may be the Piecewise Linear Approximation (PLA), where the original points are reduced to a set of segments.

PLA refers to the approximation of a time series T, of length n, using K consecutive segments, with K much smaller than n (Fig. 15). This representation makes the storage, transmission and computation of the data more efficient [23]. In light of this, PLA may be used to support clustering, classification, indexing and association rule mining of time series data (e.g. [10]).

The process of time series approximation using PLA is known as segmentation and is related to the clustering process, where each segment can be considered as a cluster [33].

There are several techniques to segment a time series and they can be distinguished into off-line and on-line approaches. In the former approach an error threshold is fixed by the user, while the latter uses a dynamic error threshold that changes, according to specific criteria, during the execution of the algorithm.

Figure 15.

The trend approximation (red line) of the original time series (black line) obtained by PLA.

Although off-line algorithms are simple to realize, they are less effective than the on-line ones. The classic approaches to time series segmentation are the sliding window, bottom-up and top-down algorithms. Sliding window is an on-line algorithm and works by growing a segment until the error for the potential segment is greater than the user-specified threshold; the subsequence is then transformed into a segment, and the process repeats until the entire time series has been approximated by its PLA [23]. A way to estimate the error is to take the mean of the sum of the squares of the vertical differences between the best-fit line and the actual data points. Another commonly used measure of goodness of fit is the distance between the best-fit line and the data point furthest away in the vertical direction [23].
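A Python sketch of the sliding-window segmentation just described might look as follows; the mean squared residual of a least-squares line is used as the error measure, and letting consecutive segments share their boundary point is an implementation assumption.

```python
import numpy as np

def sliding_window_pla(T, max_error):
    """On-line PLA segmentation: grow each window until the best-fit line's error
    exceeds max_error, then close the segment. Returns (start, end) index pairs."""
    T = np.asarray(T, dtype=float)
    segments, start = [], 0
    while start < len(T) - 1:
        end = start + 1
        while end + 1 < len(T):
            x = np.arange(start, end + 2)
            slope, intercept = np.polyfit(x, T[start:end + 2], 1)    # best-fit line
            residuals = T[start:end + 2] - (slope * x + intercept)
            if np.mean(residuals ** 2) > max_error:                  # error of the candidate segment
                break
            end += 1
        segments.append((start, end))
        start = end
    return segments
```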

In the top-down approach a segment that represents the entire time series is recursively split until the desired number of segments or an error threshold is reached. Dually, the bottom-up algorithm starts from the finest approximation of the time series, using n/2 segments, and merges the two most similar adjacent segments until the desired number of segments or an error threshold is reached.

However, an open question is the choice of the best number k of segments. This problem involves a trade-off between compression and accuracy of the time series representation. As suggested by Salvador et al. [33], the appropriate number of segments may be estimated using an evaluation graph. It is defined as a two-dimensional plot where the x-axis is the number of segments, while the y-axis is a measure of the segmentation error. The best number of segments is provided by the point of maximum curvature, also called the “knee”, of the evaluation graph (Fig. 16).

Figure 16.

Evaluation graph. The best number of segments is provided by the knee of the curve.

4.7. Chebyshev Polynomials approximation

With this technique, the reduction problem is solved by considering the values of the time series T as values of a function f, and approximating it with a polynomial function of degree n which fits f well:

\[ P_n(x) = \sum_{i=0}^{n} a_i x^{i} \tag{E39} \]

where the ai are the coefficients and the xi the terms of degree i.

There are many possible ways to choose the polynomial: Fourier transforms (Section 4.1), splines, non-linear regressions, etc. Ng and Cai [29] hypothesized that one of the best approaches is the polynomial that minimizes the maximum deviation from the true value, which is called the minimax polynomial. It has been shown that the Chebyshev approximation is almost identical to the optimal minimax polynomial, and is easy to compute [27]. Thus, Ng and Cai [29] explored how to use the Chebyshev polynomials (of the first kind) as a basis for approximating and indexing n-dimensional (n ≥ 1) time series. The Chebyshev polynomial CPm(x) of the first kind is a polynomial of degree m (m = 0, 1, …), defined as:

\[ CP_m(x) = \cos(m \cdot \arccos(x)), \qquad x \in [-1, 1] \tag{E40} \]

It is possible to compute every CPm(x) using the following recurrence function [29]:

\[ CP_m(x) = 2x \cdot CP_{m-1}(x) - CP_{m-2}(x) \tag{E41} \]

for all m ≥ 2, with CP0(x) = 1 and CP1(x) = x. Since the Chebyshev polynomials form a family of orthogonal functions, a function f(x) can be approximated by the following Chebyshev series expansion:

\[ S_{CP}(f(x)) = \sum_{i=0}^{\infty} c_i\, CP_i(x) \tag{E42} \]

where the ci are the Chebyshev coefficients. We refer the reader to the paper [29] for the conversion of a time series, which represents a discrete function, into the interval function required for the computation of the Chebyshev coefficients. Given two time series T and S, and their corresponding vectors of Chebyshev coefficients, C1 and C2, the key feature of their work is the definition of a distance function dCheb between the two vectors that guarantees the lower bounding property defined in Eq. 21. Since it holds that:

\[ d_{Cheb}(C_1, C_2) \le d_{true}(T, S) \tag{E43} \]

the indexing with Chebyshev coefficients admits no false negatives. The computational complexity of the Chebyshev approximation is O(n), where n is the length of the approximated time series.
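As an illustration, the sketch below fits a truncated Chebyshev series to a time series with NumPy; sampling the series at equally spaced points of [-1, 1] and using a least-squares fit are assumptions of this sketch, not the exact interval-function conversion of [29].

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

def cheb_coefficients(T, k):
    """First k+1 Chebyshev coefficients of a time series seen as a function on [-1, 1]."""
    x = np.linspace(-1.0, 1.0, len(T))
    return cheb.chebfit(x, T, deg=k)      # least-squares fit in the Chebyshev basis

def cheb_reconstruct(coeffs, n):
    """Evaluate the truncated Chebyshev series (Eq. E42) at n points of [-1, 1]."""
    return cheb.chebval(np.linspace(-1.0, 1.0, n), coeffs)
```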

Figure 17.

An example of the approximation of a time series T of length n = 10000 with a Chebyshev series expansion (Eq. 42), where i goes from 0 to k = 100, using the chebfun toolbox for MATLAB (http://www2.maths.ox.ac.uk/chebfun/).

4.8. SAX

Many symbolic representations of time series have been introduced over the past decades. The challenge in this field is to create a real correlation between the distance measure defined on the symbolic representation and that defined on the original time series. SAX is the best known symbolic representation technique in time series data mining; it ensures both a considerable dimensionality reduction and the lower bounding property, allowing the enhancement of time performance of most data mining algorithms.

Given a time series T of length n, and an alphabet of arbitrary size a, SAX returns a string of arbitrary length w (typically w << n). The alphabet size a is an integer, with a > 2. The SAX method is PAA-based, since it transforms the PAA means into symbols, according to a defined transformation function.

To give significance to the symbolic transformation, it is necessary to deal with a system producing symbols with equal probability, or with a Gaussian distribution. This can be achieved by normalizing the time series, since normalized time series generally have a Gaussian distribution [26]. This is the first assumption to consider about this technique. However, for data not obeying this property, the efficiency of the reduction is slightly deteriorated. Given the Gaussian distribution, it is simple to determine the “breakpoints” that will produce a equal-sized areas of probability under the Gaussian curve. What follows gives the principal definitions needed to understand the SAX representation.

Definition 1. Breakpoints: a sorted list of numbers B = β1, ..., βa−1 such that the area under an N(0, 1) Gaussian curve from βi to βi+1 is 1/a (β0 and βa are defined as −∞ and ∞, respectively) (Table 2). For example, if we want to obtain the breakpoints for an alphabet of size a = 4, we have to compute the first (q1), the second (q2), and the third (q3) quartiles of the inverse cumulative Gaussian distribution, corresponding to 25%, 50% and 75% of the cumulative frequency: β1 = q1, β2 = q2, β3 = q3.

Definition 2. Alphabet: a collection of symbols alpha = α1, α2, …, αa of size a, used to transform mean frames into symbols.

βi \ a | 3 | 4 | 5 | 6 | 7 | 8
β1 | -0.43 | -0.67 | -0.84 | -0.97 | -1.07 | -1.15
β2 | 0.43 | 0 | -0.25 | -0.43 | -0.57 | -0.67
β3 | | 0.67 | 0.25 | 0 | -0.18 | -0.32
β4 | | | 0.84 | 0.43 | 0.18 | 0
β5 | | | | 0.97 | 0.57 | 0.32
β6 | | | | | 1.07 | 0.67
β7 | | | | | | 1.15

Table 2.

A look-up table for breakpoints used for alphabet of size 2 < a < 9.

Definition 3. Word: a PAA approximation PAA(T) = {mean(t1), mean(t2), …, mean(tw)} of length w can be represented as a word SAX(T) = {sax(t1), sax(t2), …, sax(tw)}, with respect to the following mapping function:

\[ sax(t_i) = \alpha_j \iff \beta_{j-1} \le mean(t_i) < \beta_j \qquad (0 < i \le w,\ 1 \le j \le a) \tag{E44} \]

Lin et al. [26] defined a distance measure for this representation such that the real distance calculated on the original representation is bounded from below by it. An extension of the SAX technique, iSAX, was proposed by Shieh and Keogh [35]; it allows obtaining different resolutions for the same word by using several combinations of the parameters a and w.
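A compact Python sketch of the SAX conversion (normalization, PAA, breakpoint lookup) is given below; the use of scipy.stats.norm.ppf to generate the breakpoints of Table 2 and the assumption that w divides n exactly are implementation choices.

```python
import numpy as np
from scipy.stats import norm

def sax(T, w, a):
    """SAX word of length w over an alphabet of size a (Definitions 1-3, Eq. E44)."""
    T = np.asarray(T, dtype=float)
    T = (T - T.mean()) / T.std()                     # z-normalization (Gaussian assumption)
    means = T.reshape(w, len(T) // w).mean(axis=1)   # PAA means, assumes w divides len(T)
    breakpoints = norm.ppf(np.arange(1, a) / a)      # beta_1 ... beta_{a-1} (Table 2)
    symbols = np.searchsorted(breakpoints, means)    # index j such that beta_{j-1} <= mean < beta_j
    return "".join(chr(ord("a") + s) for s in symbols)

print(sax(np.sin(np.linspace(0, 4 * np.pi, 128)), w=8, a=8))
```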

Figure 18.

An example of the conversion of a time series T (blue line) of length n = 128 into a word of length w = 8, using an alphabet alpha = {a, b, c, d, e, f, g, h} of size a = 8. The left plot refers to the Gaussian distribution divided into equal areas of size 1/a. PAA mean frames falling between two consecutive cutlines (gray lines) will be mapped into the corresponding plotted symbol (colored segments). The PAA plot shows the PAA representation (red line), while the SAX plot shows the conversion of PAA(T) into the word SAX(T) = {c, g, e, g, f, g, a, a}. Images generated by MATLAB and code provided by the SAX authors [26].

© 2012 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
