Open access

Detrended Fluctuation Analysis and Higuchi's Windowing Method Applied to an Analysis of Southern California Seismicity

Written By

A. Muñoz-Diosdado

Submitted: 24 April 2014 Published: 20 May 2015

DOI: 10.5772/59660


1. Introduction

Analysing the statistics of earthquakes can contribute to a better understanding of such processes; examples include the Omori law for the decay of aftershock activity [1] and the Gutenberg-Richter law for the relationship between the frequency and magnitude of earthquakes [2]. These and other approaches allow for assessing the spatial distribution of stress on a seismic fault or estimating seismic risk following earthquakes of great magnitude [3]. It is important to study the three quantities that characterize a seismic process, i.e., magnitude, space and time, as opposed to only studying magnitudes. The interevent intervals, i.e., the intervals between consecutive earthquakes in a region, have been extensively studied using a range of different approaches [4, 5]. Furthermore, the statistics of the distances between consecutive events (or jumps) are equally important for risk analysis, but have been much less studied [6].

Recent studies have shown that many complex natural systems are characterized by long-range correlations [7]; however, identifying and quantifying these correlations with spectral analysis is inadequate, because the data are non-stationary. The detrended fluctuation analysis (DFA) method, in contrast, enables the detection of correlations in non-stationary time series while avoiding spurious detections [8, 9]. Telesca et al. [10] described the scaling behaviour of seismic sequences in Southern California from 1981 to 1998 (Figure 1), represented as a two-dimensional sequence of jumps and interevent intervals. They used the catalogue of Richards-Dinger and Shearer (RDS) [11], which lists 284,925 events for those years. The catalogue is complete from a magnitude of M ≥ 1.5, meaning that no earthquake of magnitude 1.5 or greater is missing (Figures 2 and 3). They discussed the long-range properties of the entire catalogue by applying the DFA method to the time series of jumps and interevent intervals, and found long-range correlations in both the temporal and spatial domains, because the DFA exponents were close to 1 for both sequences. They then conducted a more general analysis consisting of a joint study of both series; essentially, however, the results were the same as those obtained when the series were tested separately. They varied the threshold magnitude from 1.5 to 3.8 and found 1/f behaviour for small to intermediate thresholds (1.5 to 3.0), with an almost constant scaling exponent ranging from 1.1 to 1.2 in all cases. For threshold magnitudes greater than 3.0, the scaling exponent decreased linearly, showing a tendency of the seismic process towards Poisson behaviour.

The results of the work by Telesca et al. are in some ways surprising, because such correlations were generally not expected; it turned out that the recurrence time series (or interevent intervals) and the series of jumps (distances between epicentres) show long-term correlations. The objectives of the current paper are to reproduce and supplement the results of Telesca et al. with an extended catalogue of Southern California seismicity, then to compare these results with time series of synthetic earthquakes obtained from a model, and finally, to look for patterns in Higuchi's fractal dimension, calculated with a windowing method, that could be used as seismic precursors.

Figure 1.

The zone considered in the study by Telesca et al. and in the present work.


2. The results of Telesca et al. obtained with an earthquake catalogue from 1981 to 2014

First, the results of Telesca et al. were checked and validated. It was then checked whether these results would still hold if an expanded catalogue of Southern California seismicity, updated to 2014, was used. The types of correlations present in the time series of earthquake magnitudes, which Telesca et al. did not examine, were then investigated.

Figure 2.

There are many registered events, but only events with a magnitude greater or equal to 1.5 were considered, because the catalogue was complete for these magnitudes.

The DFA method proposed in 1993 by Peng et al. [7, 9] begins with a series of length N, which is integrated and then divided into boxes of size n; in each box, a straight line is fitted to the points, called the local trend yn(k). The points of this line are subtracted from the integrated series y(k) in each box. The root-mean-square fluctuation of the detrended integrated series is calculated by:

$$F(n) = \sqrt{\frac{1}{N}\sum_{k=1}^{N}\left[y(k) - y_n(k)\right]^{2}}. \tag{1}$$

This is done on various timescales (box sizes) to check whether there is power-law behaviour between F(n) and n, F(n) ~ n^γ; if this is the case, γ reflects the self-similarity properties of the signal. γ, the scaling exponent or DFA exponent, provides information about the type of signal: γ = 0.5 corresponds to white noise, γ = 1 corresponds to 1/f noise and γ = 1.5 represents Brownian noise.
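
To make the procedure concrete, the following is a minimal Python sketch of the DFA calculation described above (assuming numpy). It is an illustration, not the code used for the analyses in this chapter; the series length, box sizes and the deg parameter are arbitrary choices, and deg anticipates the discussion of DFA1/DFA2/DFA3 below.

```python
import numpy as np

def dfa(x, box_sizes, deg=1):
    """Minimal DFA: returns the fluctuation F(n) for each box size n.
    deg is the order of the local polynomial trend (1 -> DFA1, 3 -> DFA3)."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                  # integrated (profile) series
    F = []
    for n in box_sizes:
        n_boxes = len(y) // n                    # drop the tail that does not fill a box
        segments = y[:n_boxes * n].reshape(n_boxes, n)
        t = np.arange(n)
        residuals = []
        for seg in segments:
            trend = np.polyval(np.polyfit(t, seg, deg), t)   # local trend y_n(k)
            residuals.append(seg - trend)
        F.append(np.sqrt(np.mean(np.concatenate(residuals) ** 2)))
    return np.array(F)

# Scaling exponent gamma = slope of log F(n) versus log n
x = np.random.randn(20000)                       # white noise, so gamma should be near 0.5
ns = np.unique(np.logspace(1, 3, 20).astype(int))
F = dfa(x, ns, deg=3)                            # deg=3 mimics the DFA3 choice discussed below
gamma = np.polyfit(np.log(ns), np.log(F), 1)[0]
print(round(gamma, 2))
```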

A catalogue of seismicity was used as a time series; when the DFA method is applied to a time series, crossovers can appear, i.e., the graph has two or more linear regions, each with a different slope, so that two or more DFA exponents are needed to describe the dynamics of the correlations. In the present case, Telesca et al. found crossovers in the DFA plots, but they only considered the last part of the plots, i.e., only the large box sizes. In order for the results to be comparable, in this part of the work the fit was likewise performed on the final part of the graph to obtain the correlation exponent, as shown in Figure 4.

Figure 3.

The time series with magnitudes greater than or equal to a threshold clearly shows fewer events.

An important aspect is the elimination of trends; this is usually done by fitting the integrated series in each box to a polynomial of degree 1, although in fact any polynomial degree can be used. Apparently, Telesca et al. suspected the possible presence of linear and quadratic trends, and therefore fitted polynomials of degree 3 to eliminate these tendencies. The method is called DFA1 if only constant trends are to be eliminated, in which case polynomials of degree 1 are used; it is called DFA2 if polynomials of degree 2 are used, and so on. Telesca et al. used DFA3. So that the results would be completely comparable, the present study also applied the DFA3 method. However, the choice of Telesca et al. was excessive, because the same results are obtained with the DFA1 and DFA2 methods; in other words, there are no linear or quadratic trends in these data. In practice, it is in fact better to use the DFA1 and DFA2 methods, because the calculations involved are quicker; the DFA3 method involves considerably more computation, because the data are fitted to polynomials of a higher degree.

The procedure to build the series was as follows. First, the RDS catalogue was obtained [11]. This catalogue is freely available; however, for some reason, six months of data were missing from the website from which it is downloaded. It was therefore necessary to complete the catalogue. This work was also carried out by Telesca et al. [10], but they do not mention it in their article; instead, they simply state that the catalogue is complete for magnitudes of 1.5 and higher. Without the missing six months of data this statement is not true; therefore, the data were completed using the catalogue and associated data from the Southern California Seismic Network (SCSN). Missing data were obtained to complete the RDS catalogue, and additional data were obtained to extend it up to the present. The extended catalogue, like the original one, is complete for seismic events of magnitude greater than or equal to 1.5. It is useful to work with a catalogue that is complete from so small a magnitude, because the statistical results are more significant when the catalogue contains many events. For the same period, Mexican catalogues are only complete from about magnitude 4.0 and provide much poorer statistics; as a result, statistical analyses of those data must be conducted in a different way.

Figure 4.

Plot of the DFA method for a seismic series. In this part of the work, the least-squares fit was carried out on the last part of the graph. As the figure shows, this was done in order to avoid the crossover effect, in which case it would have been necessary to use two correlation exponents, one before the crossover and one after it. This approach was used because one of the objectives of the present research was to compare the results with those reported by Telesca et al. [10]. It must be emphasized that the straight-line fit applies only to long timescales; the first part of the graph is not included in this part of the work but is considered later. This means that the results are valid for low and middle frequencies.

Once the catalogue had been extended, time series of magnitudes, recurrence times and distances between epicentres were constructed for different thresholds. The thresholds were almost the same as those of Telesca et al., i.e., from 1.5 to 4.0 at intervals of 0.1; the maximum threshold magnitude was increased because there were more events.

The subseries contained fewer data as the threshold increased, and the magnitude series was the easiest to build. Once the magnitude time series for each threshold had been built, the occurrence time of each earthquake was known, and the differences between consecutive occurrence times were taken as the elapsed times between events. As the threshold was increased, the recurrence times became larger, because a longer period of time is required to produce an earthquake of the same magnitude or greater. To obtain the series of distances between epicentres, the following procedure was used: the angular distance between two events was calculated with the formula [14],

$$r = \cos^{-1}\left[\cos\theta_1\cos\theta_2 + \sin\theta_1\sin\theta_2\cos(\phi_1 - \phi_2)\right] \tag{2}$$

where (θ1, θ2) and (ϕ1, ϕ2) are the latitudes and longitudes of the two events, respectively. The angular distance r, expressed in degrees, is multiplied by 111 km to obtain the distance between the two events.
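
As an illustration, the following sketch (assuming numpy) computes the epicentral distance. It uses the standard spherical law of cosines written directly in terms of latitudes, which is equivalent to Eq. (2) when θ is taken as the colatitude, and applies the 111 km per degree conversion mentioned above; the function name is an assumption made for illustration.

```python
import numpy as np

def epicentral_distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two epicentres (coordinates in degrees, result in km)."""
    la1, la2 = np.radians(lat1), np.radians(lat2)
    dlon = np.radians(lon1 - lon2)
    # spherical law of cosines (Eq. 2 with theta taken as colatitude)
    c = np.sin(la1) * np.sin(la2) + np.cos(la1) * np.cos(la2) * np.cos(dlon)
    r_deg = np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
    return 111.0 * r_deg   # roughly 111 km per degree of arc

# jumps between consecutive events in a catalogue, given arrays lat and lon:
# jumps = [epicentral_distance_km(lat[i], lon[i], lat[i+1], lon[i+1]) for i in range(len(lat)-1)]
```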


3. Results using the DFA method, part one

The first point to be highlighted is that the results of Telesca et al. were reproduced for the data of the RDS catalogue, i.e., for the recurrence time series and the series of distances between epicentres. The correlations are of 1/f type for small thresholds and become weaker as the threshold is increased. The results for the expanded catalogue are summarized in Figures 5, 6 and 7. The first shows the correlation exponent as a function of the threshold for the recurrence times. As can be seen, this series shows long-term correlations, or 1/f noise, for small thresholds, which weaken as the threshold is increased. For larger thresholds, the behaviour is much closer to white noise (absence of correlations) than to 1/f noise. Figure 6 shows the magnitude series; in this case, the 1/f noise found for small thresholds was not maintained, as it was for the recurrence time series. The DFA exponents quickly began to decrease, although for high thresholds they reached the same correlation values as the recurrence time series.

Figure 5.

Plot of the correlation exponents as the threshold is increased for the series of recurrence times. Up to a threshold of about magnitude 3.0, the 1/f-type behaviour remains essentially constant; the exponent then decreases towards white noise. The DFA exponent is dimensionless.

Figure 6.

Plot of the correlation exponents obtained with the DFA method for the series of magnitudes. The dynamics differed from those of the recurrence times and distances between epicentres: 1/f behaviour was again present for low thresholds and white noise for high thresholds, but the 1/f noise was not maintained up to the threshold of 3.0; instead, the exponent decreased immediately, in an almost linear fashion.

Finally, Figure 7 shows the results for the series of distances between epicentres. The dynamics of the correlations are practically the same as for the recurrence time series, i.e., 1/f noise for small thresholds, which is maintained up to a threshold of almost 3.0 and then decreases until it approaches white noise.

Figure 7.

Plot of the correlation exponents as the threshold is increased for the series of distances between epicentres. Up to a threshold of about magnitude 3.0, the 1/f behaviour remains; the exponent then decreases until it approaches white noise. The dynamics are practically the same as for the recurrence times.


4. Results with the DFA method, part two

Since the plots used to calculate the correlation exponents present a crossover, in this part of the work a complete analysis of such plots was carried out, including the calculation of the correlation exponents before and after the crossover, as well as the behaviour of the crossover points before and after events of greater magnitude. A crossover divides the plot into two regions, each of which can be fitted with a straight line; the values of the slope before the crossover (high frequencies) are lower than the values of the slope after the crossover (low frequencies) (see Figure 8).
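
One simple way to locate such a crossover and obtain the two slopes, sketched below in Python (assuming numpy), is to try every admissible breakpoint on the log-log curve and keep the one that minimizes the combined residual of the two straight-line fits. This is an illustrative procedure under that assumption, not necessarily the one used to produce Figures 8-12.

```python
import numpy as np

def fit_with_crossover(log_n, log_F, min_pts=4):
    """Fit two straight lines to the log-log DFA curve and locate the crossover
    as the breakpoint that minimizes the total squared residual of the two fits."""
    log_n, log_F = np.asarray(log_n), np.asarray(log_F)
    best = None
    for b in range(min_pts, len(log_n) - min_pts):
        p1 = np.polyfit(log_n[:b], log_F[:b], 1)          # high-frequency (small boxes) region
        p2 = np.polyfit(log_n[b:], log_F[b:], 1)          # low-frequency (large boxes) region
        err = (np.sum((np.polyval(p1, log_n[:b]) - log_F[:b]) ** 2)
               + np.sum((np.polyval(p2, log_n[b:]) - log_F[b:]) ** 2))
        if best is None or err < best[0]:
            best = (err, b, p1[0], p2[0])
    _, b, slope_before, slope_after = best
    return log_n[b], slope_before, slope_after            # crossover position and both exponents
```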

Figure 9 shows the correlation exponents as the threshold is increased for the recurrence time, distance between epicentres and magnitude series. As can be seen, after the crossover, the correlations of the recurrence time series and the distance between epicentres series showed 1/f behaviour up to a threshold of 3.0 and then dropped to almost white noise, as previously mentioned. The correlations after the crossover of the magnitude series also began as 1/f noise, but they decreased more quickly than in the previous cases. The three curves in the lower part of the figure are the DFA exponents before the crossover; at first they were very close to white noise, and their values then grew towards the values of the curves after the crossover. The fact that the curves meet means that there is no crossover for large thresholds.

Figure 8.

The DFA method showing the crossover and the two regions for high and low frequencies.

Figure 9.

Correlation exponents before and after the crossover. The upper plots are the recurrence time, distance between epicentres and magnitude series after the crossover. The lower plots are the same but prior to the crossover; at the end, they cross and as such, for large thresholds there were no crossovers.

Figures 10 and 11 expand on the results for the DFA exponents before and after the crossover. Red dots correspond to the recurrence time series, blue dots to the series of distances between epicentres and black dots to the series of magnitudes. Before the crossover, the correlations are not important (the behaviour is essentially white noise); when the threshold is increased, the series become long-term correlated, with the exception of the magnitude series, whose DFA exponents remain close to white noise. After the crossover, the magnitude series showed 1/f noise for low thresholds and decreased essentially in a linear fashion. The correlations of the recurrence time and distance between epicentres series also decreased, but not as quickly as those of the magnitude series.

The position of the crossover is shown in Figure 12; the behaviour for the three types of time series was the same, and the crossover moved to the left as the threshold increased.

Figure 10.

Correlation exponents before the crossover for the time series of recurrence times (red), distance between epicentres (blue) and magnitude (black).

Figure 11.

Correlation exponents after the crossover for the time series of recurrence times (red), distance between epicentres (blue) and magnitude (black).

Figure 12.

Position of the crossover.

The following section explores the results of Telesca et al. by using a synthetic seismicity model, namely the model created by Olami, Feder and Christensen (OFC), which has qualitatively reproduced many of the properties of real seismicity.


5. Analysis using synthetic seismicity — The OFC model

In 1987, Bak et al. [15, 16] introduced the new concept of self-organized criticality (SOC). This concept was introduced as a principle for describing the behaviour of complex dynamic systems. Bak et al. affirmed that these open systems, with many elements that have nonlinear interactions, organize themselves into a critical and stationary state. Spatial and temporal power laws characterize this critical state. When a system reaches this state, any event can begin a chain reaction or “avalanche” that can produce a catastrophe. The presence of 1/f noise is the temporal “fingerprint” of the SOC state and the appearance of a fractal structure is the spatial signature. Fractal behaviour is a frequent property of many geological phenomena and structures, and is reflected in empirical power laws [17]. The Earth's crust can be seen as a hierarchical set of shapes and sizes suitable for a fractal description. The so-called Gutenberg-Richter (GR) law for the size distribution of earthquakes is a typical power law of seismology [2]. Bak asserted [18] that any numerical or theoretical model based on the SOC concept must reproduce the Gutenberg-Richter law as proof of its consistency.

Many authors [19-27] have reported SOC versions of the Burridge-Knopoff (BK) spring-block model [28] for earthquakes. This model mimics the behaviour of a seismic fault by using a linear spring-block array, the dynamics of which are obtained by solving the system of differential equations that describes it. Subsequent BK-type models were solved using cellular automata. These models have been solved in two and three dimensions and have been very successful in the qualitative reproduction of the GR law, as well as several other properties of real seismicity [24-25, 29-32]. These results have strengthened the notion that the Earth's crust is a SOC system.

The OFC earthquake model is a two-dimensional version of the BK spring-block model. It was the first attempt at obtaining self-organized criticality in a non-conservative model. The model has many blocks located on a rectangular plate (Figure 13). Each block in this array is joined to its four neighbouring blocks by harmonic springs, and all of them are pulled by other springs fastened to an upper plate, which moves at a very low and constant speed. The movement of the upper plate causes the stress (or force) on each block to increase until the force equals a previously established threshold (the stress at which the fault breaks); the block then slips into a residual force state. Each sliding block transfers stress to its neighbours, and when these neighbours reach the threshold, they also slip, and so on. In this way, a chain reaction or synthetic earthquake can be generated that will not stop until all the stress values in the blocks are less than the threshold.

Figure 13.

Spring block model geometry. The relative movement of the two plates increases the stress on the blocks.

As noted, in this model each block has four nearest neighbours and is connected to them. According to Olami et al. [21-22], once a block relaxes, the force on it is set equal to zero. In the present study, the arrangement was a square of L×L blocks. The position of each block is indicated by (i, j), where i and j are integers between 1 and L. If the displacement of the block (i, j) from its relaxed position on the lattice is xi,j, then the total force exerted on it by the springs is given by,

$$F_{i,j} = k_1\left[2x_{i,j} - x_{i-1,j} - x_{i+1,j}\right] + k_2\left[2x_{i,j} - x_{i,j-1} - x_{i,j+1}\right] + k_L x_{i,j} \tag{3}$$

where k1, k2 and kL are the elastic constants. The force redistribution in the position (i, j) is given by the following relationship,

$$F_{i\pm1,j} \rightarrow F_{i\pm1,j} + \delta F_{i\pm1,j};\qquad F_{i,j\pm1} \rightarrow F_{i,j\pm1} + \delta F_{i,j\pm1};\qquad F_{i,j} \rightarrow 0, \tag{4}$$

where the force increment in the nearest neighbours is given by,

$$\delta F_{i\pm1,j} = \frac{k_1}{2k_1 + 2k_2 + k_L}\,F_{i,j} = \alpha_1 F_{i,j} \tag{5}$$

$$\delta F_{i,j\pm1} = \frac{k_2}{2k_1 + 2k_2 + k_L}\,F_{i,j} = \alpha_2 F_{i,j}, \tag{6}$$

where α1 and α2 are called the elastic ratios. The force redistribution is not conservative. This model is labelled homogeneous because the same values of α1 and α2 are used over the entire modelled fault; when α1 = α2 the model is isotropic. In references [21, 22], Olami et al. mapped the spring-block model onto a continuous, non-conservative cellular automaton; the algorithm is described in the same references.
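
The following is a minimal Python sketch (assuming numpy) of such a cellular automaton under the stated assumptions: homogeneous, isotropic coupling (α1 = α2 = α), open boundaries, uniform random initial forces and a unit threshold. It illustrates the rules of Eqs. (4)-(6) but is not the code used to generate the catalogues analysed below; the lattice size, number of events and random seed are arbitrary.

```python
import numpy as np

def ofc_catalogue(L=50, alpha=0.20, f_th=1.0, n_events=5000, seed=0):
    """Minimal homogeneous, isotropic OFC automaton with open boundaries.
    Returns earthquake sizes (number of relaxed blocks) and epicentre coordinates."""
    rng = np.random.default_rng(seed)
    F = rng.uniform(0.0, f_th, size=(L, L))
    sizes, epicentres = [], []
    for _ in range(n_events):
        # slow drive: raise every force until the most loaded block reaches the threshold
        F += f_th - F.max()
        epicentres.append(np.unravel_index(np.argmax(F), F.shape))   # block where the event starts
        relaxed = np.zeros((L, L), dtype=bool)
        while True:
            over = np.argwhere(F >= f_th)
            if over.size == 0:
                break
            for i, j in over:
                df = alpha * F[i, j]                 # force passed to each of the four neighbours
                F[i, j] = 0.0                        # the relaxed block drops to zero force
                relaxed[i, j] = True
                for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    if 0 <= ni < L and 0 <= nj < L:  # open boundaries: force leaving the edge is lost
                        F[ni, nj] += df
        sizes.append(int(relaxed.sum()))
    return sizes, epicentres
```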

Olami et al. used open boundary conditions and reported robust SOC behaviour for the earthquake size probability distribution. They found a power law in qualitative agreement with the GR law and discovered that the earthquake frequency was related to the magnitude m by means of,

$$\log_{10} N(M > m) = a - bm, \tag{7}$$

where a and b are constants and N(M > m) is the number of earthquakes greater than m in the time interval. The values of a and b depend on each region, although the GR relationship is universal; a specifies the regional level of seismicity. They reported b values approximately between 0.75 and 1.54. From the calculated values of b, Olami et al. concluded that the α values that best approximated real seismicity were around 0.2. This seems logical, because if all the elastic constants are supposed to be almost the same (k1 ≈ k2 ≈ kL), then α ≈ 0.20. Olami et al. also showed that the same behaviour was obtained in the isotropic and anisotropic cases. It has been reported that the OFC model has other interesting properties that seem to be related to real seismicity [24-25, 29-32]. As such, it has been proposed that the OFC model can be used as a basic model for describing earthquake occurrence on a seismic fault, because it has many properties that correspond to those observed in real seismicity.
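
For completeness, a small sketch (assuming numpy) of a least-squares estimate of a and b from Eq. (7); the function name and arguments are illustrative assumptions, and m_max must be chosen so that every threshold still has events above it.

```python
import numpy as np

def gr_fit(magnitudes, m_min, m_max, dm=0.1):
    """Least-squares estimate of the Gutenberg-Richter a and b values (Eq. 7)."""
    magnitudes = np.asarray(magnitudes)
    m = np.arange(m_min, m_max, dm)
    logN = np.log10([np.sum(magnitudes > mi) for mi in m])   # cumulative counts N(M > m)
    slope, intercept = np.polyfit(m, logN, 1)
    return intercept, -slope                                  # a, b

# e.g., with a catalogue of magnitudes: a, b = gr_fit(magnitudes, 2.0, 6.0)
```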

To study the results of the OFC model in relation to the results of Telesca et al., the procedure was as follows: for different values of the conservation level α and a network of 100 x 100, i.e., 10,000 blocks, representing the fault, synthetic seismicity catalogues were obtained, each with 1x10^6 events. In each synthetic earthquake, the number of blocks that relaxed was counted; this number can be associated with a magnitude value: when only a few blocks relax, the earthquake has a small magnitude; when many blocks relax, the synthetic earthquake has a large magnitude. For example, Figure 14 shows one synthetic earthquake; each circle represents one of the relaxed blocks, and the magnitude of the synthetic earthquake is determined by the total number of blocks that relaxed. The epicentre, denoted by a small x, is the block in which the earthquake started. As can be seen, it is very easy to obtain the coordinates of such epicentres.

Figure 14.

A synthetic earthquake that occurred in a region of the network representing the seismic fault. In this case, the network size was 100 x 100 and the level of conservation was α = 0.24. The site where the earthquake began, i.e., the epicentre, is indicated by X.

The synthetic seismicity catalogue contains the magnitude, the coordinates of the epicentre and the time of occurrence for each of the synthetic earthquakes. Figure 15 shows a section of one of these catalogues, where magnitude is represented by the number of blocks that were relaxed in each event.

The magnitude M was defined as M = log3(N), where N is the number of blocks that relaxed; for example, an earthquake of magnitude 4.0 is obtained when 81 blocks relax and an earthquake of magnitude 7.0 is obtained when 2187 blocks relax. In this way, a grid with 10,000 blocks can produce earthquakes with a maximum magnitude of about 8.4, and only when almost all 10,000 blocks relax.

Figure 15.

A section of one of the synthetic seismicity catalogues. In this case, the synthetic earthquakes were generated in a network of size 100 x 100 with a conservation level of α = 0.2. Magnitude in this case was the number of blocks that were relaxed.

Once a threshold had been selected, the number of earthquakes was reduced; for example, if a threshold value of 100 was chosen, only earthquakes in which 100 or more blocks relaxed remained. The magnitude series is easy to determine, because it is formed by the magnitudes of the earthquakes that remain. Since the epicentre coordinates of each earthquake are known, the distances between epicentres can be determined, and therefore also the corresponding series of jumps. The time between events, i.e., the recurrence times, can also be determined. For example, Figure 16 shows the epicentres of the earthquakes that remain when a threshold of 243 is applied; as log3(243) = 5, these earthquakes have magnitudes greater than or equal to 5.0. As each event is perfectly identified, the distances between epicentres and the recurrence times are easy to determine. For example, Figure 17 shows a series of recurrence times when the threshold was set to 3; as can be seen, of the initial 1,000,000 synthetic earthquakes only approximately 300,000 remain. Figure 18 shows a series of jumps when the threshold is set to 729; in this case, just over 4000 events have a magnitude greater than or equal to 6.0.
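
A small sketch (assuming numpy) of how the three series can be extracted from a synthetic catalogue for a given threshold; the function name and the representation of the catalogue (occurrence times, epicentre coordinates and number of relaxed blocks) are assumptions made for illustration.

```python
import numpy as np

def series_for_threshold(times, xs, ys, n_blocks, threshold_blocks):
    """Build the magnitude, recurrence-time and jump series for one threshold
    from a synthetic catalogue (occurrence time, epicentre x, epicentre y, relaxed blocks)."""
    times, xs, ys, n_blocks = map(np.asarray, (times, xs, ys, n_blocks))
    keep = n_blocks >= threshold_blocks                  # e.g. 243 keeps magnitudes >= 5.0
    t, x, y, n = times[keep], xs[keep], ys[keep], n_blocks[keep]
    magnitudes = np.log(n) / np.log(3)                   # M = log3(N)
    recurrence = np.diff(t)                              # interevent intervals (program steps)
    jumps = np.hypot(np.diff(x), np.diff(y))             # plane-geometry distance on the lattice
    return magnitudes, recurrence, jumps
```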

For each level of conservation that was used (0.20, 0.21, 0.22, 0.23 and 0.24), one million events were obtained. A subseries for each threshold was then obtained from each catalogue. The thresholds used were 2.0 to 7.0 at intervals of 0.1; the procedure that Telesca et al. applied to the seismicity of Southern California was then repeated, i.e., the DFA method was applied to the series of magnitudes, recurrence times and distances between epicentres in order to determine the type of correlations present. Considering that this model has replicated properties of real seismicity in a qualitative way, it was hypothesized that the results would be very similar to those obtained for real seismicity.

For the sake of brevity, the results are shown for only one value of the conservation level, as the results for the other conservation levels were virtually the same. Figure 19 shows the correlations for different thresholds of the three types of series considered. The study of the time series of synthetic earthquakes obtained from the OFC model reproduced the results observed for real seismicity by Telesca et al. with regard to the series of recurrence times: for low thresholds there were long-range correlations (1/f), and these correlations decreased for larger thresholds. For the series of magnitudes and the series of jumps, the results did not coincide; for low thresholds, the series showed an absence of correlations, and there were significant correlations only for large thresholds. These results are not disappointing, although they are somewhat contradictory; it was not expected that a model as simple as the OFC model would be able to reproduce all the properties of the correlations found in real seismicity.

Figure 16.

Coordinates of the epicentres remaining when the threshold was 243, i.e., earthquakes that had magnitudes greater than or equal to 5.0. As the model mimics a flat fault, the distance between epicentres was calculated as the distance between two points in plane geometry. The coordinates are not real and thus have no units.

Figure 17.

The remaining earthquakes when the threshold was set equal to 3.0. This meant that all the eliminated earthquakes had magnitudes less than 1.0. In this case, recurrence times are shown, that is, the elapsed time between each of the events. The interevent interval was a number of steps in the program and therefore did not have physical units.

Figure 18.

Series of jumps or distances between epicentres when the threshold was 729, i.e., only earthquakes with magnitudes greater than or equal to 6.0 remained. As the grid dimension was 100 x 100, the minimum distance was 1.0, while the maximum possible distance was about 140.

Figure 19.

DFA exponent for the time series of recurrence times (triangles), magnitude (circles) and jumps or distances between the epicentres of consecutive events (points), with a level of conservation of 0.24, n = 100.


6. Higuchi’s method

Complex systems such as seismic zones generate time series that combine fractal and periodic components. Over the past two decades, the so-called Higuchi's method [12, 13] for calculating the fractal dimension of complex time series has been applied to investigate correlations and non-linear dynamic properties embedded in non-stationary time series. For example, this method has been used for analysing electroseismic time series [33]. Recently, Higuchi's method has been used to detect periodic components mixed with fractal signals [34-36].

In this part of the work, the seismicity of Southern California is studied using Higuchi's fractal dimension method. The idea is to apply a windowing scheme to Higuchi's method in order to study whether there is a pattern that can be identified as a possible precursor to events of great magnitude. The results of the windowing presented here suggest that some months prior to an earthquake there is little variation in Higuchi's fractal dimension, while closer to the main event this pattern changes and the fractal dimension decreases.

A time series can be expressed as x(i), i = 1,..., N, where each datum is taken at equally spaced time intervals, with a uniform sampling interval denoted by δ. Usually δ is set to δ = 1 because, in principle, this parameter does not alter the data analysis. The following describes how to apply Higuchi's method [12, 13] to a time series.

  1. From the time series x(i), the new series x_k^m(i) is obtained as follows:

$$x_k^m:\; x(m),\; x(m+k),\; x(m+2k),\; \ldots,\; x\!\left(m + \left[\frac{N-m}{k}\right]k\right), \qquad (m = 1, 2, 3, \ldots, k) \tag{8}$$

where k and m are integers: m indicates the initial time, k indicates the time interval width and [ ] denotes the integer part.

  2. The length of the series x_k^m(i) is defined as:

$$L_m(k) = \frac{1}{k}\left\{\left(\sum_{i=1}^{\left[\frac{N-m}{k}\right]} \left|x(m+ik) - x(m+(i-1)k)\right|\right)\frac{N-1}{\left[\frac{N-m}{k}\right]k}\right\} \tag{9}$$

The term (N−1)/{[(N−m)/k]·k} represents the normalization factor for the length of the subseries.

  3. The length of the series L(k) for x(i) is obtained by averaging all the subseries lengths Lm(k) obtained for a given value of k.

  4. If L(k) ∝ k^(−D), that is, if it behaves as a power law, the exponent D is the fractal dimension of the series. Applying these equations requires a proper choice of the maximum value of k, for which the relationship between log L(k) and log k is approximately linear (Figure 20); a minimal computational sketch is given below.
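
The following Python sketch (assuming numpy) implements steps 1 to 4 above; it also returns the y-intercept of the log-log fit, which is used in the windowing analysis of Section 7. It is an illustrative implementation, and the choice of k_max is left to the user, as discussed above.

```python
import numpy as np

def higuchi_fd(x, k_max):
    """Higuchi's fractal dimension: slope of log L(k) versus log(1/k), plus the y-intercept."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    ks, Ls = [], []
    for k in range(1, k_max + 1):
        Lm = []
        for m in range(1, k + 1):
            idx = np.arange(m - 1, N, k)                 # x(m), x(m+k), x(m+2k), ... (0-based)
            n_int = (N - m) // k
            if n_int < 1:
                continue
            length = np.sum(np.abs(np.diff(x[idx]))) * (N - 1) / (n_int * k)
            Lm.append(length / k)                        # Eq. (9)
        ks.append(k)
        Ls.append(np.mean(Lm))                           # step 3: average over m
    # L(k) ~ k^(-D): fit log L(k) against log(1/k) so the slope is +D
    D, intercept = np.polyfit(np.log(1.0 / np.array(ks)), np.log(Ls), 1)
    return D, intercept
```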

Figure 20.

Evaluation of the fractal dimension of Brownian noise using Higuchi's method. In this case, the slope is approximately 1.5 and β = 5 − 2D = 2, that is, the β value corresponds to Brownian noise.

In the case of self-affine curves, this fractal dimension is related to the spectral exponent β by means of β = 5 − 2D; if D is in the range 1 < D < 2, then 1 < β < 3. Higuchi showed that this method provides an accurate estimate of the fractal dimension, even for a small number of data. Higuchi developed his method as an alternative to spectral analysis because, although there is a relationship between D and β, the standard deviation of the fractal dimension obtained by using the fast Fourier transform (FFT) is greater than the standard deviation obtained by calculating the fractal dimension with Higuchi's method. Because the FFT method requires averaging many power spectra to obtain a stable spectrum, many such averages are needed to obtain values as precise and stable as those afforded by Higuchi's technique. Additionally, Higuchi's method allows for clearly defining the two or more regions into which the graph of log L(k) vs. log k is divided when it has crossovers, i.e., the points that separate scaling regions with different values of the fractal dimension D.
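
As a quick consistency check of the relation β = 5 − 2D (and of Figure 20), one can apply the higuchi_fd sketch above to numerically generated Brownian noise; the expected result is D ≈ 1.5 and therefore β ≈ 2. This is only an illustrative check, with an arbitrary series length and k_max.

```python
import numpy as np

# Brownian noise: the cumulative sum of white noise; Higuchi's D should be close
# to 1.5, so beta = 5 - 2D should come out close to 2 (cf. Figure 20).
rng = np.random.default_rng(0)
brownian = np.cumsum(rng.standard_normal(20000))
D, intercept = higuchi_fd(brownian, k_max=100)   # higuchi_fd from the sketch in Section 6
print(D, 5.0 - 2.0 * D)
```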


7. Implementation of Higuchi's method

The catalogue containing data compiled by the Southern California Seismic Network (SCSN) was used for the calculation of Higuchi's dimension. The catalogue contains all data from 1981 up to 2014 (between 32° and 37° north latitude and 114° and 122° west longitude); events of magnitude less than 1.5 were not considered in the analysis, but thousands of events still remained with which to make the calculations. When Higuchi's method is applied to the whole series, long-range correlations are always found, because the obtained D values oscillate around 2.0; as a result, the spectral exponent β is around 1.0, which corresponds to 1/f noise, i.e., long-range correlations. The objective was to analyse the seismicity around the main events that have taken place in this region: the first of magnitude 7.3, which occurred in 1992, 9 km N of Yucca Valley, CA; the second of magnitude 7.1, which occurred in 1999, 51 km N of Joshua Tree, CA; and the third of magnitude 7.2, which occurred in 2010, 54 km SSE of Calexico, CA. Windowing based on Higuchi's method was performed around all three events in the way described below.

For each of the three above-mentioned earthquakes, a period of six years was analysed, three years prior to the earthquake and three years following it. If one of the three earthquakes was the j-th event, windows were moved forward and backward from it. Each window contained 1000 data. For example, the first window to the right contained the data from j + 1 to j + 1001, the second from j + 101 to j + 1101, the third from j + 201 to j + 1201 and so forth, until it was no longer possible to complete a window. As can be seen, consecutive windows are shifted by 100 data. The backward windowing was performed in the same manner; for example, the first backward window comprised the data from j − 1001 up to j − 1. The slope was calculated for each window as described above, and the values obtained were plotted for each of the windows. Additionally, the value of the y-intercept was calculated, as it has been shown that this y-intercept also carries important information [37].
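
A sketch of this windowing scheme is given below (assuming numpy and the higuchi_fd function from Section 6). The window length of 1000 data and the shift of 100 data follow the text, while the function name, the k_max value and the exact handling of the window edges are illustrative assumptions.

```python
import numpy as np

def higuchi_windowing(series, j, window=1000, step=100, k_max=50):
    """Slide windows of `window` points, shifted by `step` points, forward and
    backward from the position j of a major event; return (start index, D, intercept)."""
    series = np.asarray(series, dtype=float)
    results = []
    start = j + 1                                    # forward windows: j+1, j+101, ...
    while start + window <= len(series):
        D, b = higuchi_fd(series[start:start + window], k_max)   # from Section 6
        results.append((start, D, b))
        start += step
    end = j                                          # backward windows: ..., up to j-1
    while end - window >= 0:
        D, b = higuchi_fd(series[end - window:end], k_max)
        results.append((end - window, D, b))
        end -= step
    return sorted(results)
```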


8. Results using Higuchi's method

As already noted, Higuchi's method was not applied directly to the whole time series. The three events with the highest magnitudes in the catalogue were selected, i.e., events with magnitudes greater than 7.0. The first had a magnitude of 7.3 and occurred on 28 June, 1992; the second had a magnitude of 7.1 and occurred on 16 October, 1999; the third, whose epicentre was located on the Mexican side of the border, had a magnitude of 7.2 and occurred on 4 April, 2010. These three events were removed from the catalogue, subsets of the catalogue with a duration of six years were chosen, and Higuchi's method was applied to them in moving windows (hence the term 'Higuchi's windowing'), which were overlapped so as to contain an adequate number of data. Higuchi's dimension and the y-intercept were calculated before and after the earthquake for each window, and the graphs obtained are shown below. For each earthquake, three figures are shown: the first indicates the location of the earthquake in the catalogue over the six-year period (except for the last event, for which the period was almost five years), the second shows the variation of the fractal dimension in the windows before and after the earthquake and the third shows the y-intercept for each window before and after the earthquake. In general, for all three events there was a variation of the fractal dimension D before and after the quake; moreover, a decrease in the fractal dimension prior to the earthquake was evident in all three events. In the graphs of the y-intercept, an increase in its value prior to the earthquake was observed in all three cases. Figure 22 also shows, on the right, another peak of significant variation in the fractal dimension; this can likewise be observed in Figure 23 as another peak in the y-intercept. However, Figure 21 shows that this peak corresponds to an event of magnitude 6.7.

Figure 21.

The earthquake of magnitude 7.3 on 28 June, 1992; the graphic shows six years of events higher or equal to 1.5, three years before and three years after the earthquake.

Figure 22.

Higuchi’s windowing, implemented over a period of six years around the event on 28 June 1992. EQ indicates when the aforementioned earthquake occurred. Note that there is significant variation in Higuchi’s fractal dimension and a decrease therein prior to the earthquake.

Figure 23.

The y-intercept for each of the windows in Higuchi’s windowing method implemented over a period of six years around the event on 28 June, 1992. Note that the calculated maximum values of the y-intercept occurred prior to the earthquake.

Figure 24.

The earthquake of magnitude 7.1 on 16 October, 1999. The graphic shows six years of events higher than or equal to 1.5, three years before and three years after the earthquake.

Figure 25.

Higuchi’s windowing method, implemented over a period of six years around the event on 16 October, 1999. Qualitatively, similar behaviour was observed to the 1992 event shown in Figure 22.

Figure 26.

The y-intercept for each of the windows in Higuchi’s windowing method, implemented over a period of six years around the event on 16 October, 1999. Note that the structure of maximum values that occurred prior to the earthquake was also repeated in this case.

Figure 27.

The earthquake of magnitude 7.2 on 4 April, 2010. The graphic shows almost five years of events higher than or equal to 1.5, three years before and after the earthquake.

Figure 28.

Higuchi’s windowing method, implemented over a period of almost five years around the event on 4 April, 2010. Note again the decrease in the fractal dimension prior to the earthquake.

Figure 29.

Note how it is once again clear that the earthquake occurred after the rise in the values of the y-intercept.


9. Conclusions

Analysis of the seismic series obtained for Southern California during the period 1981 to 2014 showed that the results of Telesca et al. could be reproduced with this expanded catalogue, i.e., for the recurrence time series and for the series of distances between epicentres. Long-range correlated behaviour was obtained for low thresholds and maintained for intermediate thresholds; for high thresholds, behaviour close to white noise was obtained. In contrast, for the magnitude series, although there was also a decrease in the correlations, the 1/f-type behaviour was not maintained; the correlations began to decrease immediately, approaching almost linearly the values obtained for the other series at high thresholds, i.e., values closer to white noise. An attempt was made to verify these results using synthetic seismicity catalogues obtained from the spring-block model of Olami, Feder and Christensen.

Interesting results were found that showed anomalous behaviour in the fractal dimension, possibly indicating the imminence of an earthquake of great magnitude. These results were found using Higuchi's windowing method and by calculating the fractal dimension and the value of the y-intercept in each window. A decreasing pattern was observed in the fractal dimension prior to three earthquakes with magnitudes greater than 7.0, together with an increase in the value of the y-intercept prior to the same three earthquakes. More calculations under different conditions are necessary, but the two patterns observed for the three earthquakes suggest a possible earthquake precursor based on the results obtained with Higuchi's windowing method.


Acknowledgments

The author acknowledges EDI, SIP and COFFA-IPN for partial support.

References

  1. Utsu T, Ogata Y, Matsu'ura RS. The Centenary of the Omori Formula for a Decay Law of Aftershock Activity. J. Phys. Earth 1995; 43: 1-33.
  2. Gutenberg B, Richter CF. Frequency of earthquakes in California. Bull. Seism. Soc. Am 1944; 34: 185-188.
  3. Schorlemmer D, Wiemer S. Earth science: Microseismicity data forecast rupture area. Nature 2005; 434: 1086.
  4. Bak P, Christensen K, Danon L, Scanlon T. Unified Scaling Law for Earthquakes. Phys. Rev. Lett 2002; 88: 178501.
  5. Corral A. Long-Term Clustering, Scaling, and Universality in the Temporal Occurrence of Earthquakes. Phys. Rev. Lett 2004; 92: 108501.
  6. Ito K. Punctuated-equilibrium model of biological evolution is also a self-organized-criticality model of earthquakes. Phys. Rev. E 1995; 52: 3232.
  7. Peng CK, Havlin S, Stanley HE, Goldberger AL. Quantification of scaling exponents and crossover phenomena in nonstationary heartbeat time series. Chaos 1995; 5: 82.
  8. Matsoukas C, Islam S, Rodriguez-Iturbe I. Detrended fluctuation analysis of rainfall and streamflow time series. J. Geophys. Res 2000; 105: 29165-29172.
  9. Peng CK, Buldyrev SV, Goldberger AL, Havlin S, Simons M, Stanley HE. Finite-size effects on long-range correlations: Implications for analyzing DNA sequences. Phys. Rev. E 1993; 47: 3730.
  10. Telesca L, Lovallo M, Lapenna V, Macchiato M. Long-range correlations in two-dimensional spatio-temporal seismic fluctuations. Physica A 2007; 377: 279-284.
  11. http://www.data.scec.org/catalog_search/date_mag_loc.php.
  12. Higuchi T. Approach to an irregular time series on the basis of the fractal theory. Physica D 1988; 31: 277-283.
  13. Higuchi T. Relationship between the fractal dimension and the power-law index for a time series: a numerical investigation. Physica D 1990; 46: 254-264.
  14. Hirata T. A correlation between the b value and the fractal dimension of earthquakes. J. Geophys. Res 1989; 94: 7507-7514.
  15. Bak P, Tang C, Wiesenfeld K. Self-organized criticality: An explanation of 1/f noise. Phys. Rev. Lett 1987; 59: 381-384.
  16. Bak P, Tang C, Wiesenfeld K. Self-organized criticality. Phys. Rev. A 1988; 38(1): 364-374.
  17. Turcotte DL. Fractals and Chaos in Geology and Geophysics. Cambridge University Press, Cambridge; 1997.
  18. Bak P. How Nature Works. Springer Verlag, New York; 1996.
  19. Carlson JM, Langer JS. Mechanical model of an earthquake fault. Phys. Rev. A 1989; 40: 6470-6484.
  20. Brown SR, Scholz CH, Rundle JB. A simplified spring-block model of earthquakes. Geophys. Res. Lett 1991; 18: 215-218.
  21. Christensen K, Olami Z. Scaling, phase transitions, and nonuniversality in a self-organized critical cellular automaton model. Phys. Rev. A 1992; 46: 1829-1838.
  22. Olami Z, Feder HJS, Christensen K. Self-organized criticality in a continuous, nonconservative cellular automaton model. Phys. Rev. Lett 1992; 68: 1244-1247.
  23. Ferguson CD, Klein W, Rundle JB. Long-range earthquake fault models. Comp. Phys 1999; 12: 34-40.
  24. Muñoz-Diosdado A, Angulo-Brown F. Patterns of synthetic seismicity and recurrence times in a spring-block earthquake model. Rev. Mex. Fís 1999; 45(4): 393-400.
  25. Angulo-Brown F, Muñoz-Diosdado A. Further seismic properties of a spring-block earthquake model. Geophys. J. Int 1999; 139(2): 410-418.
  26. Bach M, Wissel F, Drossel B. Olami-Feder-Christensen model with quenched disorder. Phys. Rev. E 2008; 77: 067101 (4 pages).
  27. Yamamoto T, Yoshino H, Kawamura H. Simulation study of the inhomogeneous Olami-Feder-Christensen model of earthquakes. Eur. Phys. J. B 2010; 77: 559-564.
  28. Burridge R, Knopoff L. Model and theoretical seismicity. Bull. Seism. Soc. Am 1967; 57: 341-371.
  29. Helmstetter A, Hergarten S, Sornette D. Properties of foreshocks and aftershocks of the nonconservative self-organized Olami-Feder-Christensen model. Phys. Rev. E 2004; 70: 046120 (13 pages).
  30. Kawamura H, Yamamoto T, Kotani T, Yoshino H. Asperity characteristics of the Olami-Feder-Christensen model of earthquakes. Phys. Rev. E 2010; 81: 031119 (10 pages).
  31. Kotani T, Yoshino H, Kawamura H. Periodicity and criticality in the Olami-Feder-Christensen model of earthquakes. Phys. Rev. E 2008; 77: R010102 (4 pages).
  32. Muñoz-Diosdado A, Rudolf-Navarro AH, Angulo-Brown F. Simulation and properties of a non-homogeneous spring-block earthquake model with asperities. Acta Geophysica 2012; DOI: 10.2478/s11600-012-0027-7.
  33. Guzmán-Vargas L, Ramírez-Rojas A, Hernández-Pérez R, Angulo-Brown F. Correlations and variability in electrical signals related to earthquake activity. Physica A 2009; 388: 4218-4228.
  34. Peralta JA, Muñoz-Diosdado A, Reyes-López P, Delgado-Lezama R, Gálvez-Coyt C, Del Río-Correa JL. Criteria based on the index of physical work by unitary distance and multifractal analysis of the slow movement of the forefinger to determinate neurological lesion. AIP Conference Proceedings 2006; 854: 218-220.
  35. Muñoz-Diosdado A, Gálvez-Coyt G, Pérez-Uribe BM, Arellanes-González J. Analysis of RR intervals time series of congestive heart failure patients with Higuchi's fractal dimension. Proceedings IEEE EMBS Annual International Conference 2009; 31: 3453-3456.
  36. Muñoz-Diosdado A, Gálvez-Coyt G, Pérez-Uribe BM. Oscillations in the evaluation of fractal dimension of RR intervals time series. Proceedings IEEE EMBS Annual International Conference 2010; 32: 4570-4573.
  37. Gálvez-Coyt G, Muñoz-Diosdado A, Peralta JA, Balderas-López JA, Angulo-Brown F. Parameters of Higuchi's Method to Characterize Primary Waves in Some Seismograms from the Mexican Subduction Zone. Acta Geophysica 2012; DOI: 10.2478/s11600-012-0033-9.
