This chapter is dedicated to the analysis of several aftershock sequences that occurred in the Algeria-Morocco region, namely those of the Al Hoceima earthquakes of May 26, 1994 (Mw6.0) and February 24, 2004 (Mw6.1), which occurred in northern Morocco, the October 10, 1980 El Asnam earthquake (Mw7.3), the May 21, 2003 Zemouri earthquake (Mw6.9) and the March 20, 2006 Laalam earthquake (Mw5.2) in northern Algeria.
Aftershock sequences are usually attributed to the strain energy not released by the mainshock. The statistical properties of aftershocks have been extensively studied for a long time. Most studies dealt with the distribution of aftershocks in the time, space and magnitude domains. Several authors have noted the importance of the systematic investigation of aftershock sequences for earthquake prediction, and a number of statistical models have been proposed to describe seismicity characteristics in time, space and magnitude (Utsu, 1961; Utsu et al., 1995; Guo and Ogata, 1997; Ogata, 1999). It is also well known that aftershock sequences offer a rich source of information about the Earth's crust and can provide an understanding of the mechanism of earthquakes, because tens of thousands of earthquakes can occur over a short period in a small area. The tectonic setting and the mode of faulting are factors other than fault surface properties that might control the behavior of the sequence (Kisslinger and Jones, 1991; Tahir et al., 2012). It is widely accepted that aftershock sequences present a unique opportunity to study the physics of earthquakes. At the same time, important questions concern the fundamental origin of some widely applicable scaling laws, such as the Gutenberg-Richter frequency-magnitude relationship, the modified Omori law or Omori-Utsu law for aftershock decay, and Bath's law for the difference between the magnitudes of the largest aftershock and the mainshock.
The frequency-magnitude distribution (Gutenberg and Richter, 1944), examined first, describes the relation between the frequency of occurrence and the magnitude of earthquakes

log10 N(M) = a - bM

where N(M) is the cumulative number of events with magnitude larger than or equal to M, and a and b are constants. The parameter a, showing the activity level of seismicity, exhibits significant variation from region to region because it depends on the period of observation and the area of investigation. The parameter b describes the size distribution of events and is related to the tectonic characteristics of the region under investigation. It is shown that the estimated b coefficient varies mostly from 0.6 to 1.4 (Wiemer and Katsumata, 1999). Many factors can cause perturbations of the b value. For example, it is established that the least squares procedure introduces a systematic bias in the estimation of the b parameter (Marzocchi and Sandri, 2003; Sandri and Marzocchi, 2005). The b value of a region not only reflects the relative proportion of the number of large and small earthquakes in the region, but is also related to the stress condition over the region. The physical implication of the b value, however, is not as obvious and is still under investigation.
Characterizing the temporal decay of aftershocks is an important task because it contains information about the seismogenic process and the physical conditions in the source region. It is well known that the occurrence rate of aftershock sequences in time is empirically well described by the Omori-Utsu law as proposed by Utsu (Utsu, 1969). The power law decay represented by the Omori-Utsu relation is an example of temporal self-similarity of the earthquake source process. The variability of the p-value is related to the structural heterogeneity, stress and temperature in the crust (Mogi, 1962; Kisslinger and Jones, 1991). The importance of deriving a reliable p-value is due to the fact that it contains information about the mechanism of stress relaxation and the friction law in seismogenic zones (Mikumo and Miyatake, 1979), but this information cannot be derived without a precise characterization of the empirical relations that best fit the data. In this study we examine the temporal patterns of the aftershock distributions of earthquakes that occurred in the Algeria-Morocco region. The parameters of the Omori-Utsu law are estimated by the maximum likelihood method, assuming that the seismicity follows a non-stationary Poisson process. Two statistical models, a first stage Omori-Utsu model, i.e. a model without secondary aftershocks, and a two stage Omori-Utsu model including the existence of secondary aftershock activity, are tested for goodness of fit to the aftershock data. We adopt the Akaike Information Criterion, denoted AIC (Akaike, 1974), as a measure for selecting the best among competing models, using fixed data.
A model describing the temporal behavior of the cumulative moment release in aftershock sequences is analyzed as an alternative approach with respect to the Omori-Utsu law. In this model, static fatigue is assumed to be the principal explanation of the aftershock temporal behavior. Following Marcellini (1997), the different aftershock sequences are analyzed using this model. The most important result shows that, globally, the model explains and fits the data correctly.
The modified Bath's law is analyzed for each aftershock sequence; it allows us to derive some important properties related to the energy partitioning. This approach is used to derive the energy released during the mainshock.
The fractal dimension deduced from the correlation integral is used to carry out the spatial analysis of the studied sequences. An inter-event distance analysis is performed for the different aftershock sequences, and the probability density of the inter-event distances is derived using a non-parametric approach, specifically the kernel density estimation technique (Silverman, 1986).
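As an illustration of this non-parametric approach, the sketch below estimates a probability density with a Gaussian kernel and Silverman's rule-of-thumb bandwidth. The inter-event distances used here are synthetic, not taken from the studied sequences.

```python
import numpy as np

def gaussian_kde_silverman(x, grid):
    """Gaussian kernel density estimate with Silverman's rule-of-thumb bandwidth."""
    x = np.asarray(x, dtype=float)
    n = x.size
    # Silverman (1986): h = 0.9 * min(std, IQR/1.34) * n^(-1/5)
    p75, p25 = np.percentile(x, [75, 25])
    h = 0.9 * min(x.std(ddof=1), (p75 - p25) / 1.34) * n ** (-0.2)
    # Sum of Gaussian kernels centred on each observation
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (n * h * np.sqrt(2.0 * np.pi))

# Synthetic inter-event distances (km), for illustration only
rng = np.random.default_rng(0)
dists = rng.gamma(shape=2.0, scale=5.0, size=500)
grid = np.linspace(0.0, dists.max(), 200)
density = gaussian_kde_silverman(dists, grid)
```

The resulting curve integrates to approximately one over the support of the data, apart from the small kernel mass leaking outside the grid.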
2. Tectonic sketch and seismicity overview
In this section we give a broad overview of the tectonic sketch and the seismicity of the studied region (Pelaéz et al., 2012). The Morocco-Algeria region, namely the Maghrebian region (Fig. 1), occupies the NW part of the African (Nubia) Plate in what is referred to as its continental crust. Its oceanic crust continues to the area of the Azores Islands. Immediately to the North is situated the Eurasian Plate, although between the Gibraltar Arc and the South of Italy an intermediate complex domain is intercalated. This domain is formed by some oceanic basins, such as the Algero-Provençal Basin and the Tyrrhenian Basin, and by a former region, presently disintegrated, now forming the Betic-Rifean Internal Zone, the Kabylias (in Algeria), the Peloritani Mountains (Sicily) and the Calabrian area in Italy (the AlKaPeCa domain). From the early Miocene this area underwent a northwards subduction of Africa, which opened the small oceanic basins quoted above, accompanied by the disintegration of the AlKaPeCa domain.
Presently, the convergence between the Nubia Plate and Iberia has an approximately NNW-SSE direction, with values of the order of 3 to 5 mm/year, depending on the location. This compression is accompanied, at least in the area of the Gibraltar Arc (in the Alboran Sea), by a noticeable ENE-WSW tension, in some cases even more important than the compression. For this reason, in the Alboran area, some extensional movements can be important. The Maghrebian region is also a complex area in which the Saharan Shield, affected by the Pan-African Orogeny (Precambrian to early Cambrian), is in contact with the Atlasic Mountains of mainly Alpine age (Fig. 1). To the North of the Atlas are situated the Moroccan Meseta, to the West, and the High Plateaus in Algeria, which to the North are in contact with the Rif and Tell mountains, typically Alpine chains. The Saharan Shield forms part of the Precambrian areas of Africa, clearly cratonized and generally not affected by later important deformations. In fact, in the Maghrebian area it corresponds to a clearly stable area. In Morocco, the so-called Antiatlas corresponds to a Precambrian and, mainly, Paleozoic area, making a tectonic transition between the shield and the Atlas. The Atlasic Mountains correspond to an intracontinental chain. To the West, in Morocco, the High Atlas reaches the coast in the Agadir area and continues to the Northeast and East, passing, although with lesser heights, to the Saharan Atlas, which crosses Algeria and reaches the central part of Tunisia. They can be considered as aulacogens bordering the northern part of the Saharan Shield. To the North, the Middle Atlas in Morocco has a different direction, NE-SW, separating the Moroccan Meseta and the High Plateaus, both formed by Paleozoic rocks, although with a Mesozoic and Tertiary cover, well developed in some areas.
On the whole, the Atlasic Mountains have been tectonically unstable since Triassic times, suffered important deformations along the Alpine orogeny and, more recently, also important volcanism, reaching the Quaternary. The Rif and Tell thrust southwards over the Moroccan Meseta and the High Plateaus, and even, in some places, over part of the Atlasic Mountains. They are formed by sedimentary External zones (only slightly affected by metamorphism in some Moroccan places) and by Internal zones. Most Internal zones (divided into several tectonic complexes) are affected by Alpine metamorphism, in addition to previous Paleozoic and even older deformations. In any case, their present structure was formed during the Alpine Orogeny. They appear mainly to the E of Tetuan, in Morocco, and in the Kabylias, in Algeria. These Alpine chains were structured from the Cretaceous to the Oligocene-early Miocene. Later, numerous Neogene basins were formed, in many cases clearly cutting previous structures. In this time, particularly from the late Miocene to the present, a nearly N-S compression provoked the existence of strike-slip faults (NE-SW, sinistral, and NW-SE, dextral), as well as reverse faults, many of which have a N70ºE to E-W direction. In many cases, the cited strike-slip faults moved mainly as normal faults, releasing in this way the regional tension, practically perpendicular to the compression.
In this study, we analyze the aftershock sequences triggered by some important and damaging earthquakes which occurred in the Morocco-Algeria region. The seismic activity in this region is mainly characterized by moderate to destructive magnitude events. It has been the site of numerous historical earthquakes, and was therefore subject to extensive damage and several thousands of casualties in the past. Among the largest recorded earthquakes in Morocco are the February 29, 1960 Agadir earthquake (Ms6.0) and the May 26, 1994 and February 24, 2004 Al Hoceima earthquakes, with Mw6.0 and Mw6.1 respectively (Pelàez et al., 2007). In northern Algeria, earthquakes up to magnitude Ms 7.3 have been recorded, namely the October 10, 1980 El Asnam earthquake (Ms 7.3) (Ouyed et al., 1981) and the May 21, 2003 Zemouri earthquake (Mw6.9) (Hamdache et al., 2004). It is important to point out that these two events were the most damaging events that occurred in northern Algeria, and that the region of El Asnam had already experienced a damaging earthquake on September 9, 1954 (Hamdache et al., 2010). The original epicentral database and the arrival time readings of the different aftershock sequences used were obtained from the Spanish National Geographical Institute (IGN) earthquake catalog. For the sequences triggered by the May 21, 2003 Zemouri earthquake and by the March 20, 2006 Laalam event, we used the data recorded by the portable seismological station network monitored by CRAAG. It is well known that the process of identifying aftershocks is related to the seismicity declustering process, a crucial step in separating an earthquake catalog into foreshocks, aftershocks and mainshocks. This process is widely used in seismology, in particular for seismic hazard assessment and in earthquake prediction models. Several declustering algorithms have been proposed. Up to now, most users have applied either the algorithm of Gardner and Knopoff (1974) or that of Reasenberg (1985).
Gardner and Knopoff (1974) introduced a procedure for identifying aftershocks within a seismicity catalog using inter-event distances in time and space. They also provided specific space-time distances as a function of mainshock magnitude to identify aftershocks. This method is known as a window method and is one of the most used. Reasenberg's algorithm (1985) allows the linking up of aftershock triggering within an earthquake cluster: if A is the mainshock of B, and B is the mainshock of C, then A, B and C are all considered to belong to one common cluster. When defining a cluster, only the largest earthquake is finally kept as the cluster's mainshock. In this study we have used the single-link cluster algorithm (Frohlich and Davis, 1990). We first identify large magnitude events within the database. Earthquakes occurring within a certain distance in km and period in days after these potential mainshocks were extracted from the database. Aftershocks were then selected from this subset of events using the single-link cluster algorithm (Frohlich and Davis, 1990). A space-time metric is used to define the proximity of events relative to one another. The metric is defined as
d = [D^2 + (C T)^2]^(1/2)

where D is the distance between two epicenters, T is the temporal separation between the two events and C is a constant relating time to distance. Aftershocks are then defined as events within a critical-size cluster containing the mainshock. The single-link cluster analysis has been used with the parameters defined by Frohlich and Davis (1990); it is a natural choice for aftershock selection because it avoids subjective decision-making with regard to the spatial distribution and duration of the sequence, and it requires no assumption as to the efficiency of event triggering, as is necessary when applying other clustering algorithms (e.g. Reasenberg, 1985).
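A minimal sketch of this selection scheme is given below, assuming C = 1 km/day and a hypothetical critical linking distance; the event coordinates and times are illustrative only, not taken from the catalogs used in this study.

```python
import numpy as np

def space_time_metric(r_km, t_days, c_km_per_day=1.0):
    """Space-time separation between two events:
    d = sqrt(D^2 + (C*T)^2), with D the epicentral distance in km,
    T the time separation in days and C a constant in km/day."""
    return np.hypot(r_km, c_km_per_day * t_days)

def single_link_clusters(xy_km, t_days, d_crit_km, c_km_per_day=1.0):
    """Single-link clustering: two events closer than d_crit in the
    space-time metric are linked, and linked events share a cluster label."""
    n = len(t_days)
    labels = np.arange(n)  # each event starts as its own cluster
    for i in range(n):
        for j in range(i + 1, n):
            d_space = np.linalg.norm(xy_km[i] - xy_km[j])
            d = space_time_metric(d_space, abs(t_days[i] - t_days[j]),
                                  c_km_per_day)
            if d <= d_crit_km:
                # merge the two clusters by relabelling
                old, new = labels[j], labels[i]
                labels[labels == old] = new
    return labels

# Toy example: two events close in space-time, one far away
xy = np.array([[0.0, 0.0], [1.0, 0.0], [100.0, 0.0]])
t = np.array([0.0, 0.5, 0.2])
labels = single_link_clusters(xy, t, d_crit_km=5.0)
```

The aftershock set would then be the cluster containing the mainshock; the pairwise loop is quadratic in the number of events, which is acceptable for sequences of a few thousand events.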
The May 26, 1994 Al Hoceima earthquake (Mw 6.0) took place at 12 km depth; the focal mechanism indicates the presence of a main set of sinistral faults with a N-S trend, which may involve several parallel surfaces (Calvert et al., 1997). Analysis of the aftershock sequence highlighted a NNE-SSW distribution of the seismicity (El Alami et al., 1998). The sequence used covers a period of about one year from the mainshock, including 318 events with magnitudes ranging from 2.0 to 6.0. The February 24, 2004 Al Hoceima earthquake (Mw 6.1) took place at a depth of 10 to 14 km. The focal mechanism suggests that the active nodal plane corresponds to a sinistral strike-slip fault oriented N11ºE and dipping 70º toward the E (Stich et al., 2005). The aftershocks were aligned preferentially NNE-SSW, in the same way as one of the nodal planes of the focal mechanism. As pointed out by Galindo-Zaldivar et al. (2009), the main stresses for the two aftershock sequences triggered by the May 26, 1994 and February 24, 2004 events have a trend with NW-SE compression (P-axis) and orthogonal NE-SW extension (T-axis), compatible with the present convergence of the African and Eurasian plates. The aftershock sequence includes 1233 events with magnitudes ranging between 0.6 and 6.1. The time span of the sequence is about 1 year.
The October 10, 1980 El Asnam earthquake (Mw 7.3) is one of the most important and most damaging events that occurred in northern Algeria. This event took place on the Oued-Fodda reverse fault. The latter is divided into three segments, and ruptured along 26 km. This fault is located in the highly seismogenic Quaternary Cheliff basin, considered very active. It is important to point out that almost all the seismicity in northern Algeria is located around the Plio-Quaternary intermountain active basins (Meghraoui et al., 1996). The aftershock sequence used includes the 130 largest events, with magnitudes ranging between 2.4 and 7.3. In the same way, the May 21, 2003 earthquake (Mw 6.9) was located in the NE continuation of the southern reverse fault system of the Quaternary Mitidja basin (Maouche et al., 2011). The aftershock sequence we used includes 1555 events, with magnitudes ranging between 0.9 and 6.9, recorded during the first 40 days from the mainshock (Hamdache et al., 2004). The March 20, 2006 Laalam earthquake (Mw 5.2) was located in the Babor chain, in the 'Petite Kabylie' south of Bejaia city. This chain belongs to the Tell Atlas, which is a portion of the Alpine belt in northern Africa. As pointed out by Beldjoudi et al. (2009), the region is affected by several faults. The regional seismicity analysis shows that the Babors chain seems to belong to a "transition zone" between a large belt of reverse faulting along the western and central parts of northern Algeria and a more distributed zone where deformation is mainly accommodated through strike-slip faulting (Beldjoudi et al., 2009). The aftershock sequence includes 111 of the best recorded events. The event magnitudes vary between 1.3 and 5.2.
3. Magnitude-frequency relationship
The frequency-size distribution (Gutenberg and Richter, 1944) describes the relation between the frequency of occurrence and the magnitude of earthquakes

log10 N(M) = a - bM

where N(M) is the cumulative number of events with magnitude larger than or equal to M, whereas a and b are positive constants. The parameter a, which exhibits significant variations in space, measures the activity level of seismicity. This parameter is sensitive to the period of observation and the area of investigation. The parameter b describes the size distribution of events and is related to the tectonic characteristics of the region under investigation. It is shown that the estimated b coefficient varies mostly from 0.6 to 1.4 (Wiemer and Katsumata, 1999). Many factors can bias the b value estimation. It is established, for example, that the least squares procedure introduces a systematic error in the estimation of the parameters (Marzocchi and Sandri, 2003; Sandri and Marzocchi, 2005). Locally, the b value not only reflects the relative proportion of the number of large and small earthquakes in the region, but it is also related to the stress condition over the region (Mogi, 1962). The physical meaning of the b value is, however, not clear and still under investigation.
The first step in the analysis of the Gutenberg-Richter law is the determination of the threshold magnitude of completeness Mc. It is defined as the lowest magnitude for which all the events are reliably detected (Rydelek and Sacks, 1989). There are many approaches to the estimation of Mc; the most popular are:
The maximum curvature (MAXC) method (Wiemer and Wyss, 2000) defines the completeness magnitude as the magnitude for which the first derivative of the frequency-magnitude curve is maximum (being the maximum of the non-cumulative frequency-magnitude distribution). The goodness of fit (GFT) method (Wiemer and Wyss, 2000) compares the observed frequency-magnitude distribution with a synthetic distribution; the goodness of fit is calculated as the absolute difference in the number of earthquakes in each magnitude bin between the observed and synthetic distributions. The synthetic distribution is calculated using the a- and b-values estimated from the observed dataset by increasing the cutoff magnitude. The completeness magnitude is given by the magnitude for which 90% of the data are fitted by a straight line. The entire magnitude range (EMR) method was developed by Ogata and Katsura (1993) and modified later by Woessner and Wiemer (2005). The maximum likelihood estimation method is used to estimate the power law G-R parameters a and b. The same method is applied to estimate the mean and standard deviation of the Normal distribution considered for the incomplete part of the frequency-magnitude distribution: μ is the magnitude at which 50% of the earthquakes are detected, and σ is the standard deviation describing the width of the range where earthquakes are partially detected.
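The MAXC procedure can be sketched as follows. The catalog below is synthetic, with detection thinned below the true completeness level; the optional additive correction is an assumption sometimes applied in practice, not part of the original method.

```python
import numpy as np

def mc_maxc(mags, bin_width=0.1, correction=0.0):
    """Completeness magnitude by the maximum curvature (MAXC) method:
    Mc is taken at the magnitude bin with the highest non-cumulative
    event count. A small positive `correction` is sometimes added."""
    mags = np.asarray(mags)
    bins = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts, edges = np.histogram(mags, bins=bins)
    return edges[np.argmax(counts)] + correction

# Synthetic catalog: G-R distributed magnitudes (b = 1) with detection
# probability dropping off below about M = 2.5
rng = np.random.default_rng(1)
m = rng.exponential(scale=1.0 / np.log(10.0), size=20000) + 1.0
p_detect = 1.0 / (1.0 + np.exp(-(m - 2.5) / 0.2))
detected = rng.random(m.size) < p_detect
mc = mc_maxc(np.round(m[detected], 1))
```

For this synthetic catalog the non-cumulative histogram peaks near the roll-off magnitude, so the estimate lands close to 2.5.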
The frequency-magnitude distribution, as shown previously, is defined as (Gutenberg and Richter, 1944)

log10 N(M) = a - bM

where N(M) is the cumulative number of earthquakes with magnitudes greater than or equal to magnitude M, and a and b are constants. This relationship holds for global earthquake catalogs and is applicable to aftershock sequences as well. The estimation of the b-value has been the subject of considerable research, and various methods exist. The most statistically appealing of these is the maximum likelihood method first used independently by Aki (1965) and Utsu (1965), which gives the estimate of the b-value as
b = log10(e) / (Mmean - Mmin)

where Mmin is the minimum magnitude up to which the Gutenberg-Richter law can accurately represent the cumulative number of earthquakes larger than or equal to a given magnitude, and Mmean is the average magnitude. The definition of b originates from the definition of the probability density function for magnitudes, f(M), consistent with the mathematical form of the Gutenberg-Richter law. This is given as (Bender, 1983)
f(M) = β exp(-β(M - Mmin)) / (1 - exp(-β(Mmax - Mmin))), with β = b ln(10),

where Mmax is the maximum magnitude up to which f(M) describes the distribution of magnitudes. There are two problems with equation (4) when applied to real earthquake catalogs. Firstly, it considers reported earthquake magnitudes as a continuous variable, which is not accurate, as most earthquake catalogs report magnitudes up to a precision of one decimal place for each event. Therefore, magnitude should be considered as a grouped or binned variable with a finite non-zero bin length ΔM, generally, and in our case, taken equal to 0.1. This means that instead of observing Mmin physically in the catalog, we observe a different minimum magnitude Mc, which is the aforementioned completeness magnitude. This is the smallest observed magnitude in the catalog above which the cumulative number of earthquakes, above a given magnitude, is accurately described by the Gutenberg-Richter law. It is very important to note that the minimum magnitude of completeness Mc, physically observed in the catalog, is not present in equation (4), and hence we need to express Mmin in terms of Mc to be able to use the formula for a real catalog. Assuming the two to be equal would lead to a bias in the b estimate. The other source of inaccuracy in equation (4) is the fact that it considers the maximum magnitude value for the dataset to be infinite, which is never the case. In fact, for aftershock sequences, the maximum magnitude in the sequence can be rather small. This too introduces a bias in the estimate of the b-value obtained using equation (4) (Bender, 1983; Tinti and Mulargia, 1987; Guttorp and Hopkins, 1986). This problem was considered by Bender (1983), who gave a method for obtaining the b-value of grouped, finite maximum magnitude data as a function of the root of the following equation
where and .
(Tinti and Mulargia, 1987; Guttorp and Hopkins, 1986). It has been observed, however, that for a difference of about 3.0 in magnitude between Mmax and Mc, the value of b obtained from equation (6) agrees closely with the asymptotic value of b obtained from equation (7) (Bender, 1983).
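A sketch of the maximum likelihood b-value estimate, including the half-bin correction for grouped magnitudes discussed above, is given below; the catalog is synthetic, with a true b of 1.0, and the standard error follows Aki (1965).

```python
import numpy as np

def b_value_mle(mags, mc, dm=0.1):
    """Aki-Utsu maximum likelihood b-value for binned magnitudes.
    The half-bin term dm/2 corrects for magnitude grouping (Utsu, 1965;
    see also Bender, 1983); the standard error is Aki's asymptotic form."""
    m = np.asarray(mags)
    m = m[m >= mc - dm / 2.0]  # keep events at or above completeness
    b = np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
    b_err = b / np.sqrt(m.size)
    return b, b_err

# Synthetic complete catalog with true b = 1.0; magnitudes are generated
# above 2.45 so that 2.5 is the centre of the lowest complete 0.1-bin
rng = np.random.default_rng(2)
mags = np.round(rng.exponential(scale=np.log10(np.e), size=5000) + 2.45, 1)
b, db = b_value_mle(mags, mc=2.5)
```

Dropping the dm/2 term in this setting biases the estimate low by roughly 10%, which is exactly the grouping effect described above.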
For the two aftershock sequences triggered by the mainshocks that occurred around Al Hoceima city in Morocco in 1994 and 2004, the threshold magnitude has been examined in detail using the different procedures introduced previously, especially the maximum curvature procedure MAXC (Wiemer and Wyss, 2000) and the changing point procedure introduced by Amores (2007). The results obtained for these two aftershock sequences are shown in Fig. 2.
In Fig. 2, the frequency-magnitude relations for the 1994 and 2004 aftershock series of Al Hoceima (Morocco) are displayed. Based on the maximum curvature procedure (MAXC), the magnitude of completeness was taken equal to 2.8 for the 1994 aftershock sequence and 3.4 for the 2004 sequence. It is important to point out that the changing point procedure (Amores, 2007) gives the same results. Using these threshold magnitudes, we derive the b-value of the Gutenberg-Richter relationship and its standard deviation using the maximum likelihood procedure. For both sequences the b-value obtained is close to 1.0, the typical value for aftershock sequences. The results obtained for the aftershock sequences triggered by the events that occurred in northern Algeria, namely the October 10, 1980 El Asnam earthquake (Mw 7.3), the May 21, 2003 Zemouri earthquake (Mw 6.9) and the March 20, 2006 Laalam earthquake (Mw 5.2), are displayed in Fig. 3.
For the sequences of the October 10, 1980 El Asnam earthquake and the March 20, 2006 Laalam earthquake, we used the maximum curvature procedure to derive the threshold magnitude. The higher value obtained for the El Asnam series reflects the quality of the data included in it; it seems that the file does not include all the events. It is, nonetheless, up to now one of the "best" files, including the mainshock and almost all the aftershocks that occurred just after the mainshock. The b-values obtained for these two series, the El Asnam series and the Laalam aftershock series, are close to 1.0, a typical value for aftershock sequences. For the Zemouri aftershock sequence we used the best combination between the goodness of fit at the 90% confidence level and the maximum curvature procedure (Wiemer and Wyss, 2000) to obtain the threshold magnitude. To improve the b-value analysis for this sequence, we also derived a threshold magnitude using the procedure based on the stabilization of the b-value and its uncertainty derived by Shi and Bolt (1982), as implemented by Wiemer and Zuñiga (1994) in the ZMAP software under Matlab. The difference between the values obtained with the two procedures is of the order of 0.20.
4. Decay rate of aftershock activity — Omori-Utsu law
The third studied scaling law is related to the decay rate of aftershock activity. It is well known that the decay rate of aftershock activity with time is governed by the modified Omori law, or Omori-Utsu law (Utsu et al., 1995),
n(t) = K / (t + c)^p

where n(t) is the rate of occurrence of aftershocks per unit time, at time t after the mainshock. The parameters K, c and p are positive constants. K depends on the total number of events in the sequence, c on the rate of activity in the earliest part of the sequence, and p is related to the power law decay of aftershocks. It is generally accepted that the number of aftershocks cannot be counted completely at the beginning of the sequence, when small shocks are often obscured by large ones due to overlapping; thus an overly large value of the parameter c is obtained. After Utsu (1971), the parameter c must be zero if all shocks could be counted. Two opinions constitute the current debate around the c value: one is that the c value is essentially zero and all reported positive values result from incompleteness in the early stage of an aftershock sequence; the second point of view is that positive c values do exist (Enescu and Ito, 2002; Kagan, 2004; Shcherbakov et al., 2004a, b; 2005, 2006; Enescu et al., 2009). The constant c is a controversial quantity (Utsu et al., 1995; Enescu et al., 2009) and is mainly influenced by the incomplete detection of small aftershocks in the early stage of the sequence (Kisslinger and Jones, 1991). According to Olsson (1999), p values generally vary in the interval 0.5 to 1.8; this index, as shown by Utsu et al. (1995), differs from sequence to sequence and varies according to the tectonic conditions of the region, although it is not clear which factor is the most significant in controlling the p value. Although attention was paid in the estimation of the p value to the recommendations of Nyffenegger and Frohlich (1998, 2000), we have used the standard deviation obtained by maximum likelihood, which could over- or underestimate the uncertainty (Nyffenegger and Frohlich, 1998; 2000).
In this study, aftershock sequences are modeled using a point process defined by the following conditional intensity,
λ(t|Ht) = K / (t + c)^p

where Ht is the history of observation until time t, i.e. the σ-algebra generated by the events that occurred before time t; thus, following Daley and Vere-Jones (2005), the family (Ht) is called the natural filtration of the point process, or the internal history. The conditional intensity in equation 4 is independent of the internal history and depends only on the current time t, i.e. λ(t|Ht) = λ(t). It defines a non-stationary (nonhomogeneous) Poisson process. We assume the occurrence times of the aftershock sequence, namely t1, t2, ..., tN, are distributed according to a non-stationary Poisson process with conditional intensity function defined by equation 4. The parameters K, c and p are estimated by maximizing the following likelihood function
L(θ) = [ Πi λ(ti; θ) ] exp( - ∫S^T λ(t; θ) dt )

where θ = (K, c, p) are the model parameters and [S, T] is the observation interval.
The log likelihood is then given by

ln L(θ) = Σi ln λ(ti; θ) - ∫S^T λ(t; θ) dt
It follows that the maximum likelihood estimate (MLE) of the parameter vector θ is the solution of the equation ∇θ ln L(θ) = 0.
Following Ogata (1983), the maximum likelihood estimates of the parameters are obtained by using the Davidon-Fletcher-Powell optimization algorithm (Press et al., 1986, pages 277, 308) applied to equation 14. The standard deviations of the parameters estimated through the maximum likelihood procedure are derived using the inverse of the Fisher information matrix J. In our case, the Fisher information matrix is given by

J(θ)ij = E[ - ∂² ln L(θ) / ∂θi ∂θj ]
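The maximum likelihood fit can be sketched as follows. Instead of the Davidon-Fletcher-Powell routine used in the study, this illustration maximizes the same Omori-Utsu log likelihood with a generic simplex optimizer; the synthetic sequence is generated by thinning, and the parameter values are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

def omori_neg_loglik(theta, times, t_start, t_end):
    """Negative log likelihood of the Omori-Utsu intensity
    lambda(t) = K / (t + c)^p for a non-stationary Poisson process
    observed on [t_start, t_end]."""
    k, c, p = theta
    if k <= 0.0 or c <= 0.0:
        return np.inf
    if abs(p - 1.0) < 1e-9:
        integral = np.log(t_end + c) - np.log(t_start + c)
    else:
        integral = ((t_end + c) ** (1 - p) - (t_start + c) ** (1 - p)) / (1 - p)
    loglik = times.size * np.log(k) - p * np.log(times + c).sum() - k * integral
    return -loglik

# Synthetic sequence drawn from lambda(t) = K/(t+c)^p by thinning
rng = np.random.default_rng(3)
K_true, c_true, p_true = 30.0, 0.05, 1.1
lam_max = K_true / c_true ** p_true  # intensity is maximal at t = 0
t, events = 0.0, []
while t < 100.0:
    t += rng.exponential(1.0 / lam_max)
    if t < 100.0 and rng.random() < (K_true / (t + c_true) ** p_true) / lam_max:
        events.append(t)
times = np.array(events)

res = minimize(omori_neg_loglik, x0=[10.0, 0.1, 1.0],
               args=(times, 0.0, 100.0), method="Nelder-Mead")
K_hat, c_hat, p_hat = res.x
```

The curvature of the log likelihood at the optimum (the observed Fisher information) would then give the standard errors, as described above.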
The results obtained using this procedure for the aftershock sequences are shown in Fig. 4.
It is often observed that a sequence of aftershocks contains secondary aftershocks, which are aftershocks of a major aftershock (Utsu, 1970). Secondary aftershocks are typically detected as a changing point in the activity rate of the sequence using the Akaike information criterion (AIC) (Akaike, 1974). In this study, the changing point in the activity rate of the sequence is detected using the plot of the cumulative number of events vs time from the mainshock. Assuming one secondary aftershock sequence started at time T2, we test four different hypotheses. H1: no secondary aftershocks; H2: a secondary aftershock series does exist, with the same p-value and c-value as the original series; H3: a secondary aftershock series does exist, with the same p-value as the original series; and H4: a secondary aftershock series does exist, with Omori law parameters different from those of the original series. Four point process models are tested. The conditional intensity of such a point process is given by
and the cumulative function is given by;
S is the occurrence time of the mainshock, and T is the occurrence time of the last event. All the occurrence times are counted as the number of days elapsed from the mainshock. Among the models corresponding to the hypotheses H1, H2, H3 and H4, the model with the lowest AIC is selected as the best model. The AIC is used as a measure to select which model fits the observations better. It is a measure of which model most frequently reproduces features similar to the given observations, and is defined by
AIC = -2 max ln L(θ) + 2k

where k is the number of model parameters; a model with a smaller value of AIC is considered to better fit the observations. The fit of the data by the Omori-Utsu law is obtained both for the minimum magnitude in the aftershock sequence and for the threshold magnitude discussed and derived in the previous section. The results of the fit are shown in the following figure.
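The AIC-based selection among the four hypotheses can be sketched as follows; the maximized log likelihoods and parameter counts below are purely illustrative, not the values obtained for the studied sequences.

```python
def aic(max_loglik, n_params):
    """Akaike Information Criterion: AIC = -2 * max log likelihood + 2 * k."""
    return -2.0 * max_loglik + 2.0 * n_params

# Hypothetical fits of the four hypotheses H1..H4: (max log likelihood, k).
# H1 has (K, c, p); H2 adds only K2; H3 adds K2 and c2; H4 frees all six.
fits = {
    "H1: single Omori-Utsu": (205.3, 3),
    "H2: secondary, same p and c": (206.0, 4),
    "H3: secondary, same p": (207.1, 5),
    "H4: secondary, all parameters free": (208.9, 6),
}
scores = {name: aic(ll, k) for name, (ll, k) in fits.items()}
best = min(scores, key=scores.get)
```

Note that a more complex model is selected only when its gain in log likelihood exceeds the penalty of one unit per extra parameter; with the illustrative numbers above, the fully free two-stage model wins.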
The Omori-Utsu parameters obtained for each aftershock sequence, for magnitudes above the threshold magnitude Mc, are given in Table 1.
|Sequence|p|c|K|
|Al Hoceima 1994|0.76 ± 0.03|0.0040 ± 0.0008|12.47 ± 1.51|
|Al Hoceima 2004|0.85 ± 0.02|0.034 ± 0.016|47.38 ± 3.81|
|El Asnam 1980|0.84 ± 0.06|0.025 ± 0.045|4.48 ± 3.81|
As pointed out previously, it is often observed that a sequence of aftershocks contains secondary aftershocks, i.e. aftershocks of a major aftershock (Utsu, 1970). If a secondary aftershock sequence starts at time T2, then from Eqs. 15 and 16 the conditional intensity of the point process is given as
where H(t) denotes the Heaviside unit step function. The occurrence of secondary aftershocks is analyzed using the procedure for detecting a change point in aftershock activity introduced by Ogata (1999), Ogata (2001) and Ogata et al. (2003). Applying the change-point procedure to the Al Hoceima 1994 aftershock sequence, a change in activity is suspected at a time t2 (in days after the mainshock). Fitting the data with two-stage Omori-Utsu models shows that the smallest AIC value, equal to -403.6618, is obtained for the simple Omori-Utsu model. The results are shown in Fig. 5; those for the two-stage Omori-Utsu models are displayed in Fig. 5(b), (c) and (d), with the AIC values included on the graphs.
The analysis of the Al Hoceima 2004 aftershock sequence indicates a suspected point of activity change at a time t2 (in days after the mainshock); the lowest AIC, equal to -3787.1548, is obtained for the simple Omori-Utsu model, as shown in Fig. 6. The results obtained for the two-stage Omori-Utsu models are displayed in Fig. 6(b), (c) and (d), including the AIC values. The principal reason for this, as seen from the comparison of the predicted and observed cumulative curves, would be that the secondary aftershocks triggered by large aftershocks are not frequent enough relative to their magnitudes. It is well established that the residual analysis of a point process is a good tool to evaluate the goodness of fit of the selected model to the data (Ogata, 1988; Ogata, 1999). The integral
of the non-negative conditional intensity function produces a one-to-one transformation of the occurrence times t1, ..., tn into transformed times τ1, ..., τn. It is well known that, if the model is correct, the τi are distributed as a standard Poisson process. If λ(t) is the intensity function of the process actually generating the data, then, using the maximum likelihood estimate of the conditional intensity, the corresponding transformed process, called the residual process (Ogata, 1988; 1999), provides a measure of the deviation of the data from the hypothesized model. The residual analysis performed for the two aftershock sequences triggered by the Al Hoceima earthquakes of 1994 and 2004 is shown in Fig. 6. The graphs display the Omori-Utsu model fit and the residual process for the Al Hoceima 1994 series and for the Al Hoceima 2004 series.
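The time transformation underlying the residual analysis can be sketched as follows; the closed form of the integrated intensity follows directly from the Omori-Utsu law (the function name and interface are illustrative).

```python
import numpy as np

def omori_transformed_times(t, K, c, p):
    """Residual-analysis time change tau_i = Lambda(t_i), the integrated
    Omori-Utsu intensity up to t_i; for a well-fitting model the tau_i
    behave like a unit-rate (standard) Poisson process."""
    t = np.asarray(t, dtype=float)
    if abs(p - 1.0) < 1e-9:
        return K * np.log((t + c) / c)
    return K * ((t + c)**(1.0 - p) - c**(1.0 - p)) / (1.0 - p)
```

Plotting the cumulative event count i against τi should then follow the 45-degree line; systematic departures from that line flag a misfit of the hypothesized model.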
From the graphs of the residual process we can deduce, as shown previously using the AIC criterion, a global agreement with the Omori-Utsu model. We proceeded in the same way for the other aftershock sequences, using the AIC to select the most appropriate model fitting the data. Table 2 gives the results obtained.
|Akaike Information Criterion (AIC)|
| ||Model 1||Model 2||Model 3||Model 4||Best Model|
|Al Hoceima 1994||-639.1429||-609.7559||-609.276||-633.7404||Model 1|
|Al Hoceima 2004||-3616.8231||-3602.7106||-3609.2007||-3610.6351||Model 1|
|El Asnam 1980||-47.1127||-32.118||-31.2841||-42.9912||Model 1|
|Zemouri 2003||-8973.1703||-9244.9968||-9342.0748||-9353.1883||Model 4|
|Laalam 2006||-293.5383||-274.8197||-275.9812||-288.698||Model 1|
The temporal aftershock decay is also analyzed using the approach introduced by Marcellini (1997), which describes the temporal behavior of the cumulative seismic moment released in aftershock sequences. It is an alternative to the Omori-Utsu law analyzed previously. Static fatigue is assumed to be the principal explanation of the temporal behavior of aftershocks. Under the condition that the mainshock causes a redistribution of stress, the initial stress condition of the aftershock sequence at the mainshock origin time can be considered as the superposition of the stress before the mainshock and the stress step caused by the dynamic rupture of the mainshock. It has been shown by Marcellini (1995) that
where t is the time from the mainshock to the aftershock, T the absolute temperature, R the universal gas constant, A a constant, and Δσ the cumulative stress drop. Since the seismic moment may be defined through the stress drop and the focal volume V (Madariaga, 1979), the previous equation can be written as follows
Here Vc is the cumulative focal volume, Vm the focal volume of the mainshock, Va the focal volume of the aftershock, and M0m and M0a are the seismic moments of the mainshock and the aftershock, respectively.
Following Marcellini (1997), we consider the aftershock zone as characterized by barriers that break after an elapsed time proportional to the stress intensity factor and therefore to the stress. The fit of the data by Eq. 22 characterizes the stress distribution: the better the data are explained by Eq. 22, the more likely the stress distribution is close to the uniform distribution. This property is partially influenced by the spatial variation of the stress change produced by the mainshock, given that in the present static fatigue model a barrier breaks at a stress that is the superposition of the static stress before the mainshock and the stress step induced by the mainshock. In this study, as in Marcellini (1997), the seismic moment is evaluated using the Thatcher and Hanks (1973) relation, given by
To test whether the static fatigue model, as represented by Eq. 22, holds, two conditions must be checked:
(a) The validity of Eq. 22, which is fitted to the different aftershock data sets and plotted in Figure 8. The same figure shows the estimated coefficients a and b, their respective standard deviations σa and σb, and the determination coefficient R² of the linear fit on the semi-logarithmic scale. Solid curves are plots of the fitted function; dashed curves correspond to the 95% confidence limits a ± t(1-α/2; n-2)·σa and b ± t(1-α/2; n-2)·σb of a and b, respectively, where t(1-α/2; n-2) is the (1-α/2)-quantile of the Student distribution with n-2 degrees of freedom (α = 0.05 in our case), with
where the ei are the regression residuals and the ŷi the predicted values.
The confidence limits of the derived parameters are calculated from those of the regression coefficients using the corresponding functional relation.
Taking into account the determination coefficients given on each graph, the model introduced by Marcellini (1997) fits our data very well.
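Condition (a) above amounts to an ordinary least-squares fit with Student-t confidence limits on the regression coefficients. A minimal sketch follows, with illustrative names (a, b for the intercept and slope); it is not the authors' code.

```python
import numpy as np
from scipy import stats

def linfit_with_ci(x, y, level=0.95):
    """Least-squares fit y = a + b*x with two-sided confidence half-widths
    tq*se_a and tq*se_b from the Student t distribution with n - 2 degrees
    of freedom, plus the determination coefficient R^2."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    s2 = (resid**2).sum() / (n - 2)              # residual variance
    sxx = ((x - x.mean())**2).sum()
    se_b = np.sqrt(s2 / sxx)                     # standard error of the slope
    se_a = np.sqrt(s2 * (1.0 / n + x.mean()**2 / sxx))
    tq = stats.t.ppf(0.5 + level / 2.0, n - 2)   # two-sided Student quantile
    r2 = 1.0 - (resid**2).sum() / ((y - y.mean())**2).sum()
    return (a, tq * se_a), (b, tq * se_b), r2
```

The dashed confidence curves of Figure 8 are then obtained by evaluating the fitted function at the limits a ± tq·se_a and b ± tq·se_b.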
5. Energy partitioning
The other scaling law examined in this study is the modified Bath's law. In its original form, Bath's law states that the difference Δm between the magnitude of a given mainshock and the magnitude of its largest aftershock is approximately constant, independent of the mainshock magnitude (Bath, 1965). That is,
where Δm is typically around 1.2.
Extensive studies of the statistical variability of Δm have been carried out by several authors, e.g., Vere-Jones (1969), Kisslinger and Jones (1991) and Console et al. (2003). However, the law remains an open problem (Vere-Jones, 1969; Vere-Jones et al., 2005). In this study, we use the modified Bath's law proposed by Shcherbakov and Turcotte (2004a). It is based on an extrapolation of the Gutenberg-Richter relationship for aftershocks: the magnitude m* of the largest aftershock consistent with the Gutenberg-Richter relationship is obtained by formally setting the cumulative number of aftershocks with magnitude greater than or equal to m* to one. If Bath's law is applicable to the inferred magnitude m*, the Gutenberg-Richter relationship takes the following form:
Combining this last relation with the empirical relation for the energy dissipated,
we derive the partitioning of the energy in the following way. It is well established that the magnitude distribution of aftershocks exhibits a near-universal scaling relative to the mainshock magnitude. To explore this relation for our aftershock sequences, we determine the ratio of the total energy radiated by the aftershock sequence to the seismic energy radiated by the mainshock. The energy radiated during an earthquake is related empirically to its moment magnitude by (Utsu, 2002)
This relation is applied directly to link the energy radiated by the mainshock to the moment magnitude of the mainshock,
Following Shcherbakov et al. (2004a), the total energy radiated during the aftershock sequence is obtained by integrating over the magnitude distribution of aftershocks, taking as upper bound of the integration the largest aftershock magnitude m* inferred from the Gutenberg-Richter relationship:
where m* is the largest aftershock magnitude inferred from the Gutenberg-Richter law. Carrying out the integration,
In addition, a different version of the above equation (Eq. 35) is obtained if we use the equation giving m* as a function of the a- and b-values; we get
Taking into account the modified Bath's law, we obtain
The ratio of the total energy radiated by the aftershocks to the total energy radiated by the mainshock is then given by
We assume that all earthquakes have the same seismic efficiency, i.e., the same ratio of radiated energy to the drop in stored elastic energy; this ratio is then also the ratio of the drop in stored elastic energy due to the aftershocks to the drop in stored elastic energy due to the mainshock. Finally, the following relation is derived
Applying Eq. 39 to the studied aftershock sequences gives the results shown in the following table.
|Ratio of the elastic energy released|
| ||b ± σb||Δm*||E_as/E_ms|
|Al Hoceima 1994||1.07 ± 0.07||1.10||0.05|
|Al Hoceima 2004||1.13 ± 0.05||0.50||0.35|
|El Asnam 1980||0.82 ± 0.10||1.20||0.02|
From the results shown in Table 3, we deduce the percentage of the total energy radiated during the mainshock. From this point of view, 95% of the total energy was radiated during the Al Hoceima 1994 mainshock and 65% during the Al Hoceima 2004 mainshock. The El Asnam 1980 mainshock radiated about 98% of the total energy, with 2% radiated by the aftershock sequence. We should point out that these results depend directly on the quality of the data used: whatever aftershock sequence is used, it is incomplete, especially at the beginning, just after the occurrence of the mainshock. For the sequence triggered by the El Asnam earthquake of 1980, the catalogue appears truncated owing to the delay in deploying the seismological network after the mainshock. Nevertheless, the results shown in Table 3 give a broad overview of the partition of the total radiated energy between the mainshock and the aftershocks. Thus, 86% of the total energy was radiated during the Zemouri 2003 mainshock and 99% during the Laalam 2006 mainshock.
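These percentages follow from the ratio of radiated energies. The sketch below assumes the Shcherbakov-Turcotte form of that ratio, with the factor 1.5 coming from the energy-magnitude relation; the exact expression of Eq. 39 is not reproduced in the text, so this form is an assumption, and the function names are illustrative.

```python
def aftershock_energy_ratio(b, dm_star):
    """E_aftershocks / E_mainshock under the modified Bath law
    (Shcherbakov and Turcotte, 2004): b/(1.5 - b) * 10**(-1.5 * dm_star),
    valid for b < 1.5; 1.5 is the slope of the energy-magnitude relation."""
    if not b < 1.5:
        raise ValueError("the relation requires b < 1.5")
    return b / (1.5 - b) * 10.0**(-1.5 * dm_star)

def mainshock_energy_fraction(b, dm_star):
    """Fraction of the total radiated energy carried by the mainshock."""
    return 1.0 / (1.0 + aftershock_energy_ratio(b, dm_star))
```

With the Table 3 values for Al Hoceima 1994 (b = 1.07, Δm* = 1.10) this gives a mainshock fraction of about 0.95, consistent with the 95% quoted above.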
6. Spatial aftershock distribution
It is well known that seismicity is a classical example of a complex phenomenon that can be quantified using fractal theory (Turcotte, 1997). In particular, fault networks and epicenter distributions have fractal properties (Goltz, 1998). Thus, a natural way to analyze the spatial distribution of seismicity is to determine the fractal dimension D. The D-value is an extension of the Euclidean dimension and measures the degree of clustering of earthquakes. In two-dimensional space, D can take non-integer values and ranges from 0 to 2.0. Therefore, distributions characterized by different fractal dimensions define different clusterings of events in space. In two-dimensional space, as D approaches 1, the distribution of events approaches a line (Euclidean dimension equal to 1); the same occurs for events distributed along a fault. Alternatively, as D approaches 2, the distribution of events tends to be uniform on the plane (Euclidean dimension equal to 2). When the D-value approaches 0, the distribution is concentrated at a single point (Beauval et al., 2006; Spada et al., 2011). In this study, the fractal dimension is estimated using the correlation dimension (Grassberger and Procaccia, 1983)
where r is the radius of the sphere considered in the study and C(r) is the correlation integral, given by
where N is the number of points in the analysis window, x1, ..., xN are the coordinates of the epicenters, and H is the Heaviside step function.
The quantity C(r) is equivalent to the probability that two points are separated by a distance less than r. The correlation integral is theoretically proportional to a power of r, i.e., C(r) ∝ r^D2, where the correlation dimension D2 equals the second generalized Renyi dimension (Molchan and Kronrod, 2009). To estimate D2 from the correlation integral (Spada et al., 2011), we plot C(r) versus r on a log-log scale (Fig. 6) and use the least-squares method to fit the data over the region where the plot is linear; D2 corresponds to the gradient of the resulting straight line of log C(r) against log r.
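The slope estimation just described can be sketched as follows; the scaling range [r_min, r_max] must be chosen to avoid the saturation and depopulation effects discussed next, and the function name is illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(xy, r_min, r_max, n_r=25):
    """Grassberger-Procaccia estimate of D2: the slope of log C(r)
    versus log r over a chosen scaling range [r_min, r_max], where
    C(r) is the fraction of epicentre pairs separated by less than r."""
    d = pdist(np.asarray(xy, float))        # all pairwise distances
    r = np.logspace(np.log10(r_min), np.log10(r_max), n_r)
    C = np.array([(d < ri).mean() for ri in r])
    mask = C > 0                            # avoid log(0) at very small r
    slope, _ = np.polyfit(np.log(r[mask]), np.log(C[mask]), 1)
    return slope
```

As a sanity check, points scattered uniformly over a plane yield a slope near 2, while points confined to a line yield a slope near 1, matching the interpretation of D given above.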
In practice, however, for large values of r the gradient is artificially low, whereas for small values of r the gradient is artificially high. These two cases have been called "saturation" and "depopulation" (Nerenberg and Essex, 1990). Moreover, in estimating the fractal dimension it is common to fit a straight line to a subjectively chosen straight part of the curve. Nerenberg and Essex (1990) give formulas for determining the depopulation and saturation distances, given by
where d is the dimensionality of the data cluster and L is the approximate length of the side of the area containing the data. As pointed out by Eneva (1996), it is safe to start the scaling range at small values of r, but in this study we have used the slope method, which, as explained previously, consists in estimating the slope of the double-logarithmic plot of the correlation integral versus distance. The stability of the scaling range is verified with the slope: the first derivative between each pair of points of the correlation-integral curve is calculated and plotted versus the logarithmic distance. To eliminate the depopulation and saturation effects, the scaling range is defined within the region where the slope is most nearly constant. Using this procedure, as for the Al Hoceima 1994 and 2004 series, the results obtained are displayed in the following figure.
The results obtained are close to 2.0, which allows us to deduce that the spatial distribution of the epicenters tends to be uniform on the plane.
The ratio of the slip on the primary fault to the total slip over the fault system is given by (Khattri, 1995)
where Sp is the slip over the primary fault and S represents the total slip over the fault system.
The results obtained are shown in the table below. We deduce that during the Al Hoceima earthquake of 1994, 62% of the total slip was accommodated by the primary rupture, and 60% during the Al Hoceima 2004 earthquake. During the El Asnam earthquake of 10 October 1980, 59% of the total slip was accommodated by the primary fault segment; for the Zemouri earthquake of 2003 this ratio is estimated at 57%, and at 45% for the Laalam earthquake of 2006. It is important to point out that in each case the remainder of the slip is distributed over the secondary ruptures.
|Fractal dimension and ratio of the slip|
| ||D2 ± σD2||Range||Sp/S|
|Al Hoceima 1994||1.60 ± 0.05||1.79 - 12.56||0.62|
|Al Hoceima 2004||1.67 ± 0.04||2.16 - 17.27||0.60|
|El Asnam 1980||1.70 ± 0.09||12.90 - 31.35||0.59|
|Zemouri 2003||1.79 ± 0.02||1.60 - 10.00||0.57|
|Laalam 2006||2.13 ± 0.02||1.57 - 8.90||0.45|
In this section we analyze the probability distribution of the inter-event distances. We use a non-parametric approach, namely kernel density estimation; this methodology is clearly presented in Silverman (1986). We used it in the following way. Given a sample X1, ..., Xn of observations with unknown probability density function f, the kernel density estimate of f is given by
where K is a positive function called the kernel; a typical example is the Gaussian kernel, defined by
In Eq. 44, h is a smoothing parameter (the bandwidth) and its determination is crucial. There are different methods to estimate h; in the following lines we summarize the least-squares cross-validation presented by Silverman (1986, p. 48). The kernel density estimate of f is given, as shown previously, by
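In code, the Gaussian kernel estimate can be written compactly as follows (an illustrative sketch, not the authors' implementation).

```python
import numpy as np

def gaussian_kde_1d(x_grid, sample, h):
    """Kernel density estimate f_hat(x) = (1/(n*h)) * sum_i K((x - X_i)/h)
    with the Gaussian kernel K(u) = exp(-u^2/2) / sqrt(2*pi)."""
    x_grid = np.asarray(x_grid, float)
    sample = np.asarray(sample, float)
    u = (x_grid[:, None] - sample[None, :]) / h          # (grid, sample) pairs
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)       # Gaussian kernel values
    return k.sum(axis=1) / (len(sample) * h)
```

The estimate is non-negative by construction and integrates to one, which can be checked numerically on a sufficiently wide grid.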
Consider a sample X1, ..., Xn of observations; the quadratic error of the estimate is then given by
The last term on the right-hand side of the last equality is independent of the parameter h; thus the optimal value of h is obtained by minimizing the two other terms. The problem then consists in finding the h minimizing the score function M0(h), defined as follows
Furthermore, the score function can be written in the following form
Assuming that the minimum of M0(h) is in the vicinity of the minimum of the quadratic error, and noting that for large n the leave-one-out estimates are close to the full-sample estimate, the problem finally consists in minimizing a simplified score function M1(h), given by
We observe that in the case of the Gaussian kernel, the convolution of the kernel with itself is a centered Gaussian density with variance 2. In this case the simplified score function is written as follows
The parameter h minimizing the simplified score function is then obtained as the solution of the following equation
The parameter h obtained using the cross-validation method is then given by
where f̂ denotes the kernel density estimate calculated from the sample X1, ..., Xn. We use Silverman's multimodality test to estimate the number of true bumps. The test is as follows: test the null hypothesis H0, "f has k bumps", against the alternative H1, "f has more than k bumps". Under the hypothesis H0, the critical smoothing parameter h_k of the Gaussian kernel density estimate is given by
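The least-squares cross-validation selection summarized above can be sketched as a grid search. The closed-form score uses the N(0, 2) density arising from the convolution of the Gaussian kernel with itself; this is an illustrative implementation, not the authors' code.

```python
import numpy as np

def lscv_score(h, x):
    """Least-squares cross-validation score M1(h) for the Gaussian kernel
    (Silverman, 1986, Sec. 3.4.3); K2 is the N(0, 2) density that arises
    as the convolution of the Gaussian kernel with itself."""
    x = np.asarray(x, float)
    n = len(x)
    d = (x[:, None] - x[None, :]) / h
    K2 = np.exp(-0.25 * d**2) / np.sqrt(4.0 * np.pi)     # N(0, 2) density
    K = np.exp(-0.5 * d**2) / np.sqrt(2.0 * np.pi)       # Gaussian kernel
    # the double sum runs over all i, j; the final term is the
    # leave-one-out correction 2*K(0)/(n*h)
    return (K2 - 2.0 * K).sum() / (n**2 * h) + 2.0 / (n * h * np.sqrt(2.0 * np.pi))

def lscv_bandwidth(x, h_grid):
    """Grid search for the bandwidth minimising the cross-validation score."""
    scores = [lscv_score(h, x) for h in h_grid]
    return h_grid[int(np.argmin(scores))]
```

The score diverges for very small h and tends to zero from below for very large h, so the minimizer lies in the interior of any reasonable grid.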
The test is based on the fact that large values of the critical parameter h_k lead to rejection of the null hypothesis H0. The following theorem by Silverman (1981) gives a characterisation.
Theorem (Silverman, 1981)
The kernel density estimate with Gaussian kernel has more than k bumps if and only if the bandwidth h is smaller than the critical value h_k.
The test is constructed by simulating the statistic from smoothed bootstrap samples of size n obtained from the kernel density estimate.
Proof. The proof of this theorem is given in detail in Silverman (1981).
Under the null hypothesis, samples can be simulated from the kernel density estimate by using Efron's formula,
where the starred values form a bootstrap sample of size n drawn with replacement from the sample X1, ..., Xn; the mean and variance appearing in the formula are those of the sample; and the ε terms are independent draws from the standard normal distribution.
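Efron's smoothed bootstrap draw can be sketched as follows; the rescaling keeps the variance of the simulated sample equal to that of the data, as required by the formula above (names and interface are illustrative).

```python
import numpy as np

def smoothed_bootstrap(x, h, size, rng):
    """Smoothed bootstrap sample drawn from the Gaussian kernel estimate
    with bandwidth h, rescaled (Efron-style, as used in Silverman's test)
    so the simulated sample keeps the mean and variance of the data."""
    x = np.asarray(x, float)
    m, var = x.mean(), x.var()
    pick = x[rng.integers(0, len(x), size=size)]   # resample the data
    y = pick + h * rng.standard_normal(size)       # draw from the KDE
    # shrink towards the mean so the variance is not inflated by h**2
    return m + (y - m) / np.sqrt(1.0 + h**2 / var)
```

Repeating this draw, recomputing the critical bandwidth on each simulated sample, and comparing with the observed critical bandwidth yields the bootstrap p-value of the test.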
If h*_k is the critical parameter obtained from a bootstrap sample, then the p-value of the test is given by
where # denotes the number of elements in a set. In practice, we apply the series of tests for increasing k until the null hypothesis is no longer rejected. We used the preceding methods to derive the distribution of the inter-event distances. The results obtained for the aftershock sequence triggered by the Al Hoceima 1994 earthquake are shown in Fig. 11.
Fig. 11(a) gives the estimated density obtained using the rule of thumb. The density estimated with the cross-validation optimal bandwidth is shown in Fig. 11(b), and the estimated density obtained with the one-mode critical bandwidth is shown in Fig. 11(c). All three panels show a concentration of the inter-event distances around the value 8.9 km, which corresponds to the mode of the estimated probability distribution. The results obtained for the Al Hoceima 2004 aftershock sequence are shown in Fig. 12. The estimated density using the rule of thumb, shown in Fig. 12(a), displays a concentration of the inter-event distances around 9.45 km, and the density estimated with the cross-validation optimal bandwidth displays a concentration around 7.5 km, whereas the estimated density with the one-mode critical bandwidth shows a concentration of the inter-event distances around 14.14 km.
For the aftershock sequence triggered by the 21 May 2003 Zemouri earthquake (Mw 6.9), the results obtained are shown in Fig. 13.
The density estimated with the cross-validation optimal bandwidth displays a concentration of the inter-event distances around 4.29 km, as shown in Fig. 13(b). The estimated density with the one-mode critical bandwidth displays a concentration of the inter-event distances around 9.0 km.
Aftershock sequences in the Algeria-Morocco region have been analyzed in order to estimate accurately the parameters of the most important scaling laws in statistical seismology. For each aftershock sequence the completeness threshold magnitude Mc has been derived using different approaches; we chose the one giving the most stable fit of the cumulative number of events by a straight line. For magnitudes above Mc, we used the maximum likelihood method to estimate the b-value. The results obtained are close to 1.0, the typical value for aftershock sequences. We then examined the Omori-Utsu law for aftershock decay and Bath's law for the difference between the mainshock magnitude and the largest aftershock magnitude, as modified by Shcherbakov et al. (2004a, b). Using the AIC as a measure to select the most appropriate model between the single-stage and two-stage Omori-Utsu models, the rate of decay of aftershocks was found to follow the single-stage Omori-Utsu law, i.e., the model without secondary aftershocks, in all cases except the May 21, 2003 Zemouri sequence. The decay of the aftershocks triggered by the May 21, 2003 Zemouri earthquake is better fitted by a two-stage Omori-Utsu model, denoted Model 4 in the text, which allows us to include the secondary aftershocks in the modelling. The study of the Omori-Utsu law has been complemented by the analysis of the temporal behavior of the cumulative seismic moment released by the aftershock sequences, following Marcellini (1997); this procedure is an alternative to the Omori-Utsu law based on the physics of the static fatigue mechanism. The study of the Gutenberg-Richter relationship and the modified Bath's law allowed us to perform the partitioning of the energy released during each sequence between the mainshock and the aftershocks. The spatial analysis has been performed using the fractal dimension derived from the correlation integral.
Using a non-parametric estimation approach, namely kernel density estimation, we derived an estimate of the probability density of the inter-event distances. The mode of this density highlights how the events cluster in space.
This study is a first attempt to perform a systematic analysis of the aftershock sequences triggered by main events in the Algeria-Morocco region.