EARTHQUAKE PREDICTION

Among natural disasters, earthquakes can inflict vast devastation on large numbers of buildings and structures in the blink of an eye. Lack of knowledge and awareness about earthquakes and their recurrence is conspicuous and leads to disaster and bitter memories. Earthquake forecasting has therefore been a polemical research theme that has defied even the most intelligent minds. In this chapter, an extensive overview of the earthquake prediction field is attempted, classifying the work into the main strategies of short-, intermediate-, and long-term prediction. An example of each strategy is presented, together with its corresponding approaches/algorithms, such as ΔCFS, CN, MSc, M8, ANN, FFBPANN, KNN, GRNN, RBF, and LMBP, depending on the importance of each strategy. On this basis, it is concluded that, after the M9.0 Tohoku-Oki earthquake, the current position of the Headquarters for Earthquake Research Promotion of MEXT in Japan is that its mission is the long-term statistical forecasting of seismicity; it is even stated that short-term forecasting is not emphasized. Intermediate-term estimations cannot be used to prevent all damage and protect all human life, but they may be utilized to undertake certain affordable activities to decrease damage and losses and to improve postdisaster relief. And although long-term prediction receives the most attention from researchers, no satisfactory level of certainty has been reached. De facto, the covenant made in 1970, that investigators would be able to forecast/predict ground excitations within a decade, still remains unmet.


Introduction
Seismologists try to help human beings across the globe by adopting different strategies and techniques to raise awareness of the most dangerous disaster, the earthquake (EQ). The first use of seismology, as a recorder of EQs, dates back about 100 years. An earthquake is the result of fault movement and the fault failure process. For a fault to fail, a massive amount of energy has to accumulate to the level where friction breaks down. Seismologists have recently come to understand the source of this energy: the activity of the flexible plates at the Earth's surface, which contribute to cooling the globe by heat transfer. Physically, the degree to which a phenomenon is understood is frequently evaluated by how accurately it can be predicted. Consequently, the question is not how EQs can be detected; it is rather how accurately they can be predicted. No scientific estimation is feasible without a precise description of the predicted phenomenon and of the rules that specify, evidently and in advance, whether the anticipation is confirmed or not [1]. Perhaps the simplest adaptive formulation for reporting future incidences was produced by the eminent Panel on Earthquake Prediction [2] in 1976. According to it, the prediction of an EQ must specify the geographical zone, the time period in which it would occur with sufficient precision, and the expected magnitude range, so that the final failure or success of the prediction can readily be judged. Furthermore, scientists should also assign a level of confidence to each prediction.
In this matter, researchers have studied and examined various techniques and approaches, such as numerical and mathematical models [3][4][5][6][7][8][9][10][11], to predict EQs within a desired range. A good example of finite meshing of a complex earthquake was evaluated by Landry and Barbot [8], who presented a new numerical approach for solving the elastostatic equations with embedded discontinuities. The method was implemented in a new earth modeling code, Gamra. In particular, recent studies encourage the use of several seismicity indicators, or features comprising geophysical information related to EQ incidence, in order to predict EQs [12][13][14]. A binary-class analysis of such indicators showed that their information content varies widely [15]. However, some investigators [16] tried to go one step further, since all of those indicators had been used within a baseline arrangement with only their standard values, neglecting the fact that changes in arrangements or configurations might lead to contrary results or, in some conditions, to better outcomes.
Review of the abovementioned studies, as representatives of investigations in the field of EQ prediction, shows serious and substantial work; but, according to the studies done, to predict a potent motion the estimation of the pertaining parameters is vital for the purposes of seismic analysis, seismic design, and seismic retrofitting. This study attempts to present current methods for the prediction of EQs in seismic regions. The EQ prediction problem, as mentioned, is a very difficult task which has been broadly addressed by means of various techniques, but precise results are still lacking and there is a long way to go. Therefore, to provide an overview of the approaches used, a categorization has been carried out. Herein, an effort is made to set out the strategies used, namely short-term (SHT hereinafter), intermediate-term (IMT hereinafter), and long-term (LT hereinafter) prediction, in order to familiarize readers with this field. Moreover, this chapter goes briefly through the artificial intelligence branches that are being comprehensively implemented by scientists across the globe. Figure 1 demonstrates the general flow of the chapter.

Earthquake prediction strategies
Prediction of earthquakes is routinely categorized into three divisions: SHT, IMT, and LT prediction. They differ in the methods used, purpose, and accuracy. SHT prediction requires precursors. Even though some encouraging precursors have been reported, the dominant view in Japan and elsewhere is excessively pessimistic. The cynicism basically roots in the fact that SHT precursors are commonly nonseismic, and the tools developed in the seismology field are not designed to find them. Regrettably, SHT predictions that led to practical actions have rarely been accomplished, and doubtful views are prevalent in the seismological community. Both IMT and LT predictions are in nature predictions of the actuarial possibility of earthquakes. LT prediction deals with the possibility of EQ incidence on a time scale of 10-100 years and is mainly based on geological investigations of faults and on historical seismicity records. IMT prediction, unlike LT prediction, mainly concerns a time scale of 1-10 years and makes more use of current instrumental seismology and geodesy data. Despite negative opinions such as that of Geller et al. [17], substantial progress has been made in the study of precursory changes of seismicity, e.g., [18][19][20], and the IMT estimation of strong EQs around the globe has already been proven at the statistical level [21]. More lately, attempts to shorten the lead time to the SHT range are being carried out [1,22]. Therefore, this chapter attempts to present the aforesaid strategies in the EQ prediction field to update and familiarize readers with the current progress of the area.

Short-term (SHT) prediction
Historically, a broad variety of methods have been practiced to predict EQs. Nowadays, in addition to attempts at IMT and LT prediction, SHT forecasting is in progress too. This would permit warnings at the very early signs of a momentous EQ or tsunami. EQ prediction has to determine the epicenter, the EQ size and domain, and the time with high precision. Among short-, intermediate-, and long-term predictions, SHT prediction may be the most significant, but, as mentioned before, it has unfortunately not been achieved yet. Precursors are thoroughly required for SHT detection. Several types of EQ precursors have been identified, e.g., by Uyeda et al. [23]. Seismological events such as foreshocks can be considered precursors. Nevertheless, most precursors cannot be considered seismological. The national EQ prediction scheme launched in Japan in 1965 had no success, not even a single achievement; the explanation of this failure was that it failed to understand those precursors, as explained by Uyeda [24]. In 1995, the M7.3 Kobe EQ occurred with no prediction (Figure 2), during the seventh 5-year plan. With no prediction, the national project was violently criticized. After long deliberations at different points and stages, it was decided to drop SHT forecasting, since precursors were highly knotty to cope with (see Swinbanks [25]), and efforts had to focus on fundamental study, that is, on seismology proper [26]. With this exercise, the mission not only outlived the criticism, but investment was augmented too. After 1995, the no-SHT-prediction strategy was stepped up, even to the point of "deciding" that precursors do not exist and that their investigation is not science. A few years later, in 2011, the M9.0 Tohoku-Oki EQ hit Japan (see Figure 3) and produced a destructive tsunami, which caused huge explosions and meltdowns at Fukushima Nuclear Plant No. 1 [27], and over 20,000 people were killed. Such an EQ had been described by severity models; however, none of them even envisioned that an M9.0 EQ could
occur. With this broad disappointment, researchers now even discuss eliminating a well-disciplined working group (of the Seismological Society of Japan) in the field of EQ forecast/prediction. The current official attitude of the Headquarters for EQ Research Promotion of MEXT (Ministry of Education, Culture, Sports, Science, and Technology) of Japan declares that its mission is the long-term statistical forecast of seismicity; it even claims not to emphasize SHT forecasting. These are strange positions for accountable authorities, when the public urgently needs all conceivable information on coming ground motions, for example, for the high-potential Ryukyu trench shown in Figure 4, particularly after the ruinous M9.0 Tohoku EQ of 2011. This is inoperative. Recently, even the national project title has been altered to "Promotion of EQ and Volcano Observation Research Project to Contribute to Disaster Mitigation", eliminating the term "prediction". Prediction of such ground excitations is no longer a simple funding source; the ship has shifted its helm to "catastrophe prevention". Herein, SHT prediction has been described as an untrusted strategy compared to the other prediction techniques. Therefore, an attempt is made to present an example of this strategy briefly, just for acquaintance. In this regard, the Chiayi zone (southwestern Taiwan) is selected, an area with a significant deformation rate that has been hit by several devastating EQs, e.g., the Chiayi EQ of 1792 [28]. That EQ can be attributed to rupture of the Meishan fault. Moreover, the Meishan EQ sequence (between 1904 and 1906) caused widespread fatalities, and numerous buildings collapsed. A SHT probabilistic seismic hazard assessment was applied, as shown in Figure 5, to the Chiayi zone covering the Meishan EQ sequence. Figures 6 and 7 show the 1904 Touliu, 1906 Meishan, and 1906 Yangshuigang EQs, which are used as the source events in order to calculate ΔCFS. Here, the rate and
state friction model [29] is considered for assessing the SHT seismicity-rate evolution. The Coulomb failure stress changes, ΔCFS, caused by an EQ are computed for the model application. Based on the constant apparent friction model, the general expression for ΔCFS can be written as ΔCFS = Δτ + μ′Δσn, in which Δτ is the change of shear stress resolved along the slip direction of the receiver fault, μ′ is the coefficient of apparent friction, and Δσn is the change of normal stress perpendicular to the receiver fault. Former studies, e.g., [30], have found that a positive stress change promotes subsequent events, while a negative stress change inhibits future seismicity. Additionally, the focal mechanisms, or receiver fault mechanisms, should be used as key information for the calculation of ΔCFS. Following those previous investigations [30,31], a temporally fixed fault is assumed. The ΔCFS should be estimated on each grid cell (e.g., 0.01° × 0.01°), resolving the spatially varying receiver faults by means of an appropriate program such as Coulomb 3.3.
The ΔCFS values divulged by the Meishan EQ sequence have been assessed in the Chiayi zone (Figure 8). The results showed notable stress increases in the neighborhood of each event. The M6.9 Meishan EQ (Figure 8(b)) produced a stress increase with a vaster range and a severer magnitude; in contrast, the M6.1 Touliu EQ (Figure 8(a)) generated a less considerable stress disturbance. Such an inconsistency can be ascribed to the magnitude difference between the EQs. In short, the outcomes demonstrated that a large EQ close to the epicentral zone of the Yangshuigang EQ could be predicted. It is worth mentioning that some areas with the highest increases of Coulomb stress were not subsequently hit by devastating EQs (Figure 8). Such consequences can be attributed to a low rate of cumulative stress (far from the next coming rupture, or related to tectonics not found to be seismogenic).
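The constant apparent friction relation, ΔCFS = Δτ + μ′Δσn, can be sketched as a few lines of code. This is a minimal illustration, not the Coulomb 3.3 implementation; the stress values are hypothetical, and μ′ = 0.4 is only a commonly assumed friction value.

```python
def delta_cfs(d_tau, d_sigma_n, mu_prime=0.4):
    """Coulomb failure stress change under the constant apparent friction
    model: dCFS = d_tau + mu' * d_sigma_n.
    d_tau:     shear stress change along the receiver fault slip direction (MPa)
    d_sigma_n: normal stress change, positive for unclamping (MPa)
    mu_prime:  apparent friction coefficient (0.4 is a common assumption)
    """
    return d_tau + mu_prime * d_sigma_n

# A positive dCFS promotes subsequent events; a negative dCFS inhibits them.
print(delta_cfs(0.10, 0.05))    # 0.10 + 0.4 * 0.05 = 0.12 MPa -> promoted
print(delta_cfs(-0.08, -0.10))  # negative -> inhibited
```

In an actual application this scalar would be evaluated on every grid cell (e.g., 0.01° × 0.01°) after resolving the coseismic stress tensor onto the receiver fault geometry.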

Intermediate-term (IMT) prediction
In the last few decades, the critical EQ model, which is based on considerations pertaining to accelerating seismic deformation and the dynamic concepts of the critical point, has been suggested by several seismologists as an advantageous tool for IMT EQ prediction. IMT EQ prediction is divided into various algorithms, such as CN, MSc (Mendocino Scenario), M8, and M8S. The methodology common to the different algorithms makes use of generic concepts of pattern identification that allow dealing with several sets of EQ precursors, and permits regular seismicity monitoring as well as extensive testing of the predictions.
In fact, predictions may be helpful if their precision is known, even if it is not necessarily high; this is standard practice in all forecasting problems, including national defense. For instance, the MSc algorithm reduces the estimation area to a narrow range (2-3 times the EQ source zone size) and, in some cases, to an approximately precise location (within the source zone dimension). The M8 algorithm forecasts EQs with an accuracy of several years (intermediate term) and middle range (5-10 times the EQ source zone size) [32]. Although the long-running global testing of IMT and middle-range EQ forecasts supports the theory that the M8 algorithm can be efficient in globally decreasing the effect of strong EQs (M ≥ 8.0) [32,33], there is no link to implementing actions and improving EQ readiness in reaction to them. This is not finished so far; the EQ predictions might have been used to fulfill measures and amend EQ preparedness beforehand, but unfortunately this was not achieved, in part owing to the limited distribution of predictions and the absence of current approaches for using IMT detections to take action and make decisions [32]. Overall, IMT estimations cannot be used to prevent all damage and protect all human life, but they may be utilized to undertake certain affordable activities to decrease damage and losses and to modify postdisaster relief. Davis et al.
[34] proposed examples and methodologies on how actions may be taken in response to predictions. Davis [35] explained how to employ economic parameters in otherwise ignored equations to optimize the activities that a forecast may affect. Such activities are fruitful complements to the normal LT seismic risk reduction approaches, such as building codes or standard disaster preparation methods. As an example, these processes were identified [34], and models were prepared for the Tohoku EQ on how prudent, cost-effective, and reasonable decisions can be made to decrease destructive EQ influences. Information provided through the M8-MSc algorithms may reduce the worldwide effects of the strongest EQs. Retrospectively, there were many precursor-like anomalies before the Wenchuan EQ of 2008 (see [36] for a review), but no anomaly was decisive enough for an SHT or IMT alarm to be issued. Indeed, as long as seismologists deal with SHT to IMT EQ forecasts, which carry many uncertainties and are frequently linked to emergency management countermeasures, e.g., evacuation activities, this public stress becomes even larger.
A simple explanation is given herein to express some possibilities of how EQ forecasting may be used by means of the IMT strategy, e.g., EQ prediction algorithms using CN, M8, and MSc, real-time forecasts using the M8-MSc algorithms, and other applications of the M8 algorithm. In brief, the CN, MSc, and M8 algorithms, as the main algorithms for IMT prediction of EQs, are presented below.

CN algorithm
The CN algorithm is structured in accordance with a pattern identification scheme to permit analysis of the times of increased probability (TIP) of strong EQs. It represents the possible occurrence (within a specified time window and region) of an event with a magnitude larger than a fixed threshold, based on a quantitative analysis of the seismicity. The seismicity patterns are obtained using a set of empirical time functions (assessed from the sequence of events in the analyzed zone) describing the level of seismic activity, seismic quiescence, and the space-time clustering of events. Therefore, the CN algorithm makes use of the information carried by minor and moderate EQs, which have more or less good statistics within the delimited region, to forecast the stronger EQs, which are infrequent events.
Since the duration of TIPs varies from a few months to a few years, CN estimations are characterized by a time uncertainty of years and a space uncertainty of several kilometers, the so-called medium-range predictions, relative to the entire single monitored area. According to CN, once a TIP is declared, the strong EQ could take place at any location within the alerted region; accordingly, the defined regions ought to be small. Nonetheless, the algorithm is based on precursors, which can be hosted in a region with linear dimensions considerably larger than the dimension of the anticipated source. Finally, taking into account the accuracy of the CN algorithm in global tests and the low rate of occurrence of strong EQs, the conditional probability of a strong EQ given a TIP is about 40%. So, as revealed by Peresan et al. [37], a declared TIP has roughly a 60% probability of being a false alarm, whereas, if no TIP is declared, with 96% probability no potent EQ will happen.
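The reported CN figures (about 40% chance of a strong EQ given a TIP, hence about 60% false alarms, and about 96% of non-TIP periods correctly quiet) are conditional probabilities over alarm bookkeeping. A minimal sketch, using hypothetical counts chosen only to match those rates:

```python
# Hypothetical alarm bookkeeping consistent with the rates reported for CN.
tips_with_eq, tips_total = 8, 20     # declared TIPs followed by a strong EQ
quiet_no_eq, quiet_total = 96, 100   # non-TIP periods with no strong EQ

p_eq_given_tip = tips_with_eq / tips_total          # P(EQ | TIP)
p_false_alarm = 1 - p_eq_given_tip                  # P(no EQ | TIP)
p_no_eq_given_no_tip = quiet_no_eq / quiet_total    # P(no EQ | no TIP)

print(p_eq_given_tip)        # 0.4
print(p_false_alarm)         # 0.6
print(p_no_eq_given_no_tip)  # 0.96
```

The counts here are illustrative; the published probabilities come from the global retrospective and forward tests of CN, not from these numbers.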

MSc algorithm
The MSc algorithm was introduced and calibrated by retrospective analysis of the local seismic catalog before the 1980 M7.2 Eureka EQ in California, near Cape Mendocino. Given a TIP diagnosed for a territory (U) at time (T), the algorithm is designed to find, within U, a smaller area (V) in which the predicted EQ can be expected. An application of this algorithm needs a reasonably complete catalog of EQs with magnitudes M ≥ 4.0, which is lower than the minimum threshold normally used by the M8 algorithm. The essence of the MSc algorithm is briefly described as follows: a. The coarse-grained territory U is divided into fine squares of dimension s × s. Let (i, j) be the center coordinates of a square. For each center (i, j), the number of EQs n_ij(k), aftershocks included, is computed for consecutive short time windows, u months long, starting from an initial time 6 years back and onward, to capture the EQs associated with the TIP's diagnosis. The considered time-space is thereby divided into small boxes (i, j, k) of dimensions (s × s × u).
b. "Quiet" boxes are singled out within each small square with coordinates (i, j); they are distinguished by n_ij(k) falling below the Q-th percentile.
c. The clusters of quiet boxes connected in space or in time are identified. The area (V) is the territorial projection of these clusters.
The adjusted values of the parameters were standardized on the Eureka EQ, with D the circle diameter used in the M8 algorithm. Note that the phenomenon utilized in the MSc algorithm may reflect a shorter-term (second) stage of the premonitory rise of seismic activity near the incipient source of the main-shock.
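Steps a-c above amount to gridding the territory, counting events per box, flagging boxes below a percentile as "quiet", and taking V from the quiet clusters. A toy sketch under stated assumptions: the grid, counts, percentile choice, and the crude percentile rule are all hypothetical, not the calibrated MSc parameters.

```python
# Toy sketch of MSc quiet-box selection (hypothetical 2x2 grid, 3 windows).
# counts[(i, j)][k] = number of EQs in square (i, j) during time window k.
counts = {
    (0, 0): [5, 4, 6], (0, 1): [1, 0, 2],
    (1, 0): [3, 3, 3], (1, 1): [0, 1, 0],
}

def quiet_cases(counts, q_percentile=25):
    """Return the (i, j, k) boxes whose count falls below the q-th
    percentile of all counts -- the 'quiet' boxes of step (b)."""
    all_counts = sorted(c for series in counts.values() for c in series)
    # crude percentile: value at the q% position of the sorted counts
    threshold = all_counts[int(len(all_counts) * q_percentile / 100)]
    return {(i, j, k)
            for (i, j), series in counts.items()
            for k, c in enumerate(series) if c < threshold}

print(sorted(quiet_cases(counts)))  # [(0, 1, 1), (1, 1, 0), (1, 1, 2)]
```

Step (c) would then cluster these boxes in space-time and project the clusters onto the map to obtain V.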

M8 algorithm
The M8 algorithm was introduced and calibrated by retrospective analysis of the seismicity preceding the greatest (M8+) EQs worldwide, hence its name. It is based on a simple physical scheme of prediction, which is briefly presented in the following. M8 is a program written in Fortran 77 that allows prediction of EQs in the IMT range by means of the M8 algorithm. The algorithm evaluates time series of integer counts derived from the transient seismicity within a region.

Prediction targets EQs with magnitude M ≥ M0
Several values of M0, with a step of 0.5, are considered. Overlapping circles of diameter D(M0) scan the seismic region. Within each circle, the EQ sequence is considered with aftershocks eliminated, {ti, hi, mi, bi(e)}, where i = 1, 2, ....
Here, ti is the origin time, with ti ≤ ti+1; hi and mi stand for focal depth and magnitude; and bi(e) represents the number of aftershocks within the first e days. The sequence is normalized by a lower magnitude cutoff, chosen so that C, the annual average number of EQs in the sequence, takes a standard value. The magnitude scale used must reflect the EQ source size.

Calculation of several running averages in moving time windows
In this case, they describe different measures of the intensity of the EQ flow, its deviation from the long-term trend, and the clustering of EQs. These averages include: the number of main-shocks, N(t); the deviation of N(t) from the long-term trend, L(t); and the linear concentration of the main-shocks, Z(t), estimated as the ratio of the average source diameter, l, to the average distance, r, between them. The EQ sequence, {i}, is considered within the time window. The functions N, L, and Z are calculated for C = 20 and C = 10. Hence, the EQ sequence is given a robustly averaged description by seven functions: N, L, and Z (twice each) and B (the maximum number of aftershocks). Figure 9 shows the criterion of the M8 algorithm in the extended standardized phase space of seismicity.

A TIP, or alarm (time of increased probability)
A TIP is declared for 5 years once at least six of the seven aforesaid functions, including B, become "too large" within a narrow time window (t - u, t). To stabilize the prediction, this condition is required at two successive moments, t and t + 0.5 years.
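The declaration rule above can be sketched directly: count how many of the seven functions exceed their "too large" thresholds, and raise a 5-year alarm only when at least six do so at both t and t + 0.5 years. The function values and thresholds below are hypothetical, purely to illustrate the voting logic.

```python
def functions_too_large(values, thresholds):
    """Count how many of the seven M8 functions (N, L, Z twice each, and B)
    exceed their 'too large' thresholds at one evaluation time."""
    return sum(v >= t for v, t in zip(values, thresholds))

def declare_tip(vals_t, vals_t_half, thresholds, need=6):
    """A TIP (5-year alarm) is declared only when at least `need` of the
    seven functions are too large at both t and t + 0.5 years."""
    return (functions_too_large(vals_t, thresholds) >= need and
            functions_too_large(vals_t_half, thresholds) >= need)

# Hypothetical function values and thresholds (illustrative only):
thr = [10, 10, 10, 10, 10, 10, 10]
print(declare_tip([12, 11, 13, 10, 15, 11, 9],   # 6 of 7 large at t
                  [11, 12, 10, 10, 14, 12, 8],   # 6 of 7 large at t + 0.5
                  thr))                           # True
print(declare_tip([12, 11, 13, 10, 15, 11, 9],
                  [11, 12, 10, 9, 14, 8, 7],     # only 4 large
                  thr))                           # False
```

In the real algorithm the thresholds are defined as high percentiles of each function's own history, not fixed constants.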
To sum up, the above EQ prediction algorithms follow the comprehensive general scheme demonstrated in Figure 10. A seismically active region is considered, sampled by areas that are generally circles of investigation (CIs) (a), each of which has its own "history" of seismic events of various magnitudes (b). Each seismic activity "history" is described by a set of exactly defined moving counts (c), whose synthesis is subjected to the pattern diagnosis of "precursors", indicating whether the coming period is a TIP or not for the expected target EQ occurrence (d).

Long-term (LT) prediction
Despite the serious effort expended and the several models developed [38], no successful technique has been found yet. Because of the random behavior of EQs, it may not be possible to determine the exact location, magnitude, and time of the next fatal EQ. Most recently, Tohoku, Japan, was hit by an M9.1 EQ on March 11, 2011. This excitation caused approximately 20,000 deaths and more than $300 billion (USD) in damage, and it proved impossible to predict (it ranks among the most devastating natural hazards ever recorded). Given that it occurred in a well-prepared and well-organized country, where great efforts have been made to reduce seismic hazard, it was a stupendous lesson. Long-term procedures are occasionally defined as time-independent historical hazard assessment, whereas SHT pre-EQ processes mean a detected process occurring minutes to months before an EQ. Considering long-term seismicity properties is much more problematic than similar investigations of SHT and IMT changes of EQ event rates [39]. An obvious reason for this is the lack of good documents and comparably long EQ catalogs. The existing historical catalogs are not strongly homogeneous in either space or time. In fact, early investigations of active faults and their LT seismicity distribution disclosed that, for the majority of active regions, instrumental seismicity catalogs do not contain the full seismic sequence. Even in active areas where the historical seismicity covers more than 1000 years, e.g., the Dead Sea Fault, paleoseismology has provided substantial results on the return period of fatal EQs, together with fault properties and the corresponding magnitudes of past seismic events, that complement old scripts [40,41]. Investigations of the LT behavior of seismogenic faults and the correlated paleoseismic statistics are today often mandatory in the seismic hazard characterization of applications for building services and as input to national and international requirements for seismic safety [42]. More
significantly, current works question the credibility of seismic hazard maps, citing the weakly constrained EQ parameters that result from overlooking paleoseismic data in the risk analysis [43]. Prospective EQ predictions make scientific theories of EQ occurrence refutable, transparent, and testable. A main step in this direction was undertaken by the Regional Earthquake Likelihood Models (RELM) working group, which solicited LT (5-year) predictions for California in a particular format to ease rigorous testing [44,45]. The aim of this section is to present the current progress in LT EQ prediction and its effect on seismic risk assessment in zones with long historical records of EQ excitation. Herein, LT forecasting is described as a strategy that still has a long path toward completing its mission, namely EQ prediction in a secure and accurate manner. Therefore, an effort is made to present an example of this prediction, in the particular application of neural networks, in a short manner, in the subsequent writings (Section 2.3.1). Although the Bat-ANN algorithm (a combination of the Bat Algorithm, BA, with an Artificial Neural Network, ANN) was used in one of the most recent investigations [46] to predict EQs in regions of Pakistan, that study concedes in conclusion that, owing to the greater diversity provided by the method and its stochastic approach, the algorithm used has "further chances" of reaching global goals (still waveringly). In a nutshell, EQ predictions of location, magnitude range, and time can be arranged into the categories listed in Table 1, based on their spatial and temporal accuracy [47].

Artificial neural networks (ANN)
A number of EQ forecast approaches, including artificial intelligence (AI), have been used. One of these techniques is the ANN, which has shown a good ability to find solutions in different fields. Many algorithmic variations have been designed and proposed to improve ANN accuracy. The ANN approach attempts to model complex problems, in software or in specialized hardware, through several layers of processing elements, or neurons. In the human brain, the most fundamental element is a particular kind of neuron that furnishes us with the capacities to think, remember, and apply past experience to every practice. The benefit of this framework is that the ANN delivers a black-box method, and the operator does not need to know much about the nature of the process being simulated. With this in mind, ANNs have been promising methods for predicting and detecting locally imminent EQs from reliable seismic information. For these reasons, the ANN has recently been broadly applied in different areas to overcome problems of exclusive and nonlinear relationships.
Herein, this section briefly investigates the methodical possibilities of EQ forecasting by means of neural networks (NNs). Moreover, this short investigation provides a layout of different precursors, namely PGA and main-shock detection; other precursors, such as radon detection, liquefaction, and aftershocks, should also be taken into consideration. In addition, it is discussed how these precursors are used by NNs for EQ detection and prediction. A corresponding network analysis is stated for each seismic precursor, along with the type of NN used.

Prediction of peak ground acceleration (PGA)
PGA is a measure of EQ acceleration due to extensive ground motion; it is induced by the powerful energy released by an EQ, leading to earth deformation such as landslides, surface ruptures, and liquefaction. In this regard, Derras and Bekkouche [48] presented a comprehensive method for assessing the maximum PGA by means of a feed-forward back propagation neural network (FFBPANN). The result was compared with the ground motion prediction equations (GMPEs) demonstrated by Ambraseys and Douglas [49]. The GMPEs were utilized as a substitute method for the prediction of the PGA where accelerogram monitoring stations do not exist. Such an approach needs a wide dataset of PGA values as well as the site coefficients.
An FFBPANN was designed with an overall budget of 1000 epochs and a hyperbolic tangent sigmoid activation function, using five input parameters: the focal depth at which an EQ was activated, the Japanese Meteorological Agency magnitude (M_JMA), the epicentral distance, the resonant frequency, and the sedimentary layers' thickness (a shear wave velocity of 800 m/s serves as the fixed reference in defining both site parameters). Figure 11 presents a visual view of the site seismic parameters used for the PGA estimation [48]. The ANN configuration in Figure 11 uses 1850 testing records taken from the Kiban Kyoshin network (KiK-net) data and 326 training records. An evaluation between the FFBPANN and the selected GMPE model [49] indicated that the FFBPANN performance is far better than that of the GMPEs. The coefficient of determination (R²) achieved by the NN was 0.94, compared with 0.82 and 0.76 for the two GMPE methods. For the same site, the normalized root mean square error (NRMSE) for the NN model was noticeably smaller, 0.11%, compared with the selected GMPE approaches, which gave corresponding NRMSEs of 0.17% and 0.25%. This confirms that PGA estimation by means of an NN is much better than the GMPE approaches from both the accuracy and the performance point of view. Specifically, the outcomes showed that the epicentral distance parameter heavily affects the resulting PGA value, yielding the best mean square error (MSE) and R values of 0.075/0.076 and 0.51/0.48 for the testing and training phases, respectively. In contrast, the site parameters and focal depth have the least effect on the PGA outcome. In addition, the combination of all five parameters was seen to return the optimum results, with MSE values of 0.0205 and 0.0203 and R scores (correlation coefficients) of 0.84 and 0.85 for the testing and training phases. The R score shows the degree of dependence between an output and an input. An R coefficient closer to ±1
indicates a strong correlation, whereas R = 0 amounts to an uninformative prediction.
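The forward pass of such an FFBPANN, a tanh hidden layer feeding a linear output, can be sketched in a few lines. This is only a structural illustration: the network size, weights (random here), and input values are hypothetical, not the trained KiK-net model of [48].

```python
import math
import random

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One forward pass of a tiny feed-forward net: tanh hidden layer,
    linear output -- the shape used for PGA regression."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_hidden, b_hidden)]
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out

random.seed(0)
n_in, n_hidden = 5, 3  # five source/site inputs, hypothetical hidden size
w_hidden = [[random.uniform(-1, 1) for _ in range(n_in)]
            for _ in range(n_hidden)]
b_hidden = [0.0] * n_hidden
w_out = [random.uniform(-1, 1) for _ in range(n_hidden)]
b_out = 0.0

# Hypothetical normalized inputs: focal depth, M_JMA, epicentral distance,
# resonant frequency, sediment thickness.
x = [0.2, 0.7, 0.4, 0.1, 0.3]
print(forward(x, w_hidden, b_hidden, w_out, b_out))  # a scalar PGA estimate
```

Training by back propagation would adjust `w_hidden`, `b_hidden`, `w_out`, and `b_out` to minimize the MSE against recorded PGA values.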
For prediction of PGA values at a specific site, a different direction was explored [50] using an ANN algorithm and microtremor measurements. Microtremor measurements are an empirical method to collect all ground vibrations in a very short time interval, thereby having an obvious advantage over a traditional record database. The ANN method was developed by applying predefined data for training and network validation using a BP scheme. In particular, three types of input neurons were assessed: the focal depth, the epicentral distance, and the EQ magnitude. The results revealed that using all three input parameters gave an R score of 0.972, markedly superior to two input parameters (on average 0.6-0.9) and one input parameter (on average under 0.6). A comparative study of this issue has made clear that microtremor measurements are literally efficient when time is limited, but lack performance capability; conversely, an ANN is noteworthy in performance but, as a weak point, relies heavily on predefined data. The models of the three neural networks are shown in Figure 12, and a comparison between microtremor measurements and the ANN used for the PGA approximation is shown in Figure 13 [50]. Additionally, several novel methods have been examined to estimate PGA values, giving different accuracy rates and performance levels. Günaydın and Günaydın [51] presented a comparative approach to PGA forecasting by assessing three ANN types: a generalized regression neural network (GRNN), feed-forward back propagation (FFBP), and a radial basis function (RBF) network. All these ANN types were trained by means of a back propagation (BP) procedure with a single hidden layer and were assessed using four input parameters: the EQ moment magnitude, focal depth, site conditions, and hypocentral distance. These input parameters, recorded by 15 accelerometers for 95 records, were taken from
three wave directions including east-west, north-south, and up-down).In summary, for forecasting horizontal and vertical values of PGA the FFBPANN and GRNNANN were the ideal options.
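As a rough illustration of the FFBP configuration described above, with one hidden layer, four input neurons (moment magnitude, focal depth, site condition, hypocentral distance), and a single PGA output, the following is a minimal sketch trained on synthetic data. The attenuation-like relation used to generate the targets, the hidden-layer size, and the learning rate are all invented for illustration and are not the settings used in [51]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: columns are moment magnitude, focal depth (km),
# site condition (rock=0, soil=1), and hypocentral distance (km). The target
# is an invented attenuation-like PGA so the network has something to fit.
X = np.column_stack([
    rng.uniform(4.0, 8.0, 200),     # moment magnitude
    rng.uniform(5.0, 50.0, 200),    # focal depth
    rng.integers(0, 2, 200),        # site condition
    rng.uniform(10.0, 200.0, 200),  # hypocentral distance
])
y = (0.1 * X[:, 0] - 0.02 * np.log(X[:, 3]) + 0.05 * X[:, 2]).reshape(-1, 1)

# Normalize inputs so back propagation behaves well.
Xn = (X - X.mean(axis=0)) / X.std(axis=0)

# One hidden layer of 8 tanh units with a linear output neuron.
W1 = rng.normal(0.0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(2000):
    H = np.tanh(Xn @ W1 + b1)            # forward pass
    pred = H @ W2 + b2
    err = pred - y                       # gradient of 0.5 * MSE
    gW2 = H.T @ err / len(y); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)     # back-propagated error
    gW1 = Xn.T @ dH / len(y); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

pred = np.tanh(Xn @ W1 + b1) @ W2 + b2
r = np.corrcoef(pred.ravel(), y.ravel())[0, 1]
print(f"training R-score: {r:.3f}")
```

The R-score printed at the end is the same correlation measure the studies above use to compare network variants.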

Prediction of large-magnitude main shocks
Panakkat and Adeli [13] performed an analysis using three different types of NNs, namely a Levenberg-Marquardt back propagation (LMBP) neural network, a recurrent neural network (RNN), and a radial basis function (RBF) neural network, to forecast the strongest EQ one month in advance. The input parameters were similar to the neurons described in [52].
Comparative results among the three networks indicated that, for magnitudes ranging from M5.0 to M6.0, the RNN achieved R-scores of 0.20-0.51, while the LMBP and RBF networks obtained R-scores of 0.01-0.14 and 0.12-0.37, respectively. Consequently, it was apparent that an RNN can determine large EQs (M6.0+) more accurately and faster than both the RBF and LMBP networks. Figure 14 shows an RNN layout. Even so, the presented EQ forecast cannot be made with a satisfactory degree of certainty. Meanwhile, Reyes et al. [12] recently developed an approach for determining EQs in Chile by employing a three-layer BPANN using the b-value, the Omori-Utsu law, and Bath's law. The network output is twofold [53]: first, it gives the probability of a magnitude exceeding a defined threshold; second, it produces a possible magnitude that might occur within an interval of 5 days. A comparison of different classifiers, namely a KNN [54], an ANN, K-means clustering [55], and an SVM [56], indicated that, with the exception of the SVM, the KNN, ANN, and K-means clustering give better forecast precision. For validation of the network, 500 events were utilized in four different zones of Chile. The comparison among the NNs demonstrated that the overall performance, measured by sensitivity and specificity values, was highly location dependent. Specifically, sensitivities of 35.7, 42.9, and 50% were recorded for the ANN, KNN, and K-means clustering, respectively. Inconclusive results were obtained for the SVM.
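Sensitivity and specificity, the metrics used in that comparison, can be computed from a binary confusion matrix. A minimal sketch follows, with illustrative observation/prediction vectors that are not data from [12]:

```python
def sensitivity_specificity(observed, predicted):
    """observed/predicted are sequences of 0/1 flags, where 1 means an EQ
    above the magnitude threshold occurred (or was forecast) in an interval.
    Sensitivity = TP / (TP + FN): fraction of real events that were forecast.
    Specificity = TN / (TN + FP): fraction of quiet intervals correctly called.
    """
    tp = sum(1 for o, p in zip(observed, predicted) if o == 1 and p == 1)
    fn = sum(1 for o, p in zip(observed, predicted) if o == 1 and p == 0)
    tn = sum(1 for o, p in zip(observed, predicted) if o == 0 and p == 0)
    fp = sum(1 for o, p in zip(observed, predicted) if o == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative 10-interval record: 4 real events, 6 quiet intervals.
observed  = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
predicted = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(observed, predicted)
print(sens, spec)  # 3 of 4 events caught; 5 of 6 quiet intervals correct
```

Because the two metrics trade off against each other, a forecaster that alarms constantly reaches perfect sensitivity at zero specificity, which is why the studies above report both.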

Conclusions
A brief description of the anatomy of EQ causes has been given, and the existing problems in predicting and detecting this fatal phenomenon have been stated. The prediction strategies have been classified into three main scenarios, namely short-term (SHT) prediction, intermediate-term (IMT) prediction, and long-term (LT) prediction. For each strategy, an attempt has been made to present either an example or a general trend, to familiarize readers with the corresponding field and to give an overall background of the research done by several investigators worldwide. Based on these explanations, the following conclusions are drawn.
In the field of EQ prediction, numerous researchers have carried out serious investigations using diverse techniques to improve seismic reliability, recognize EQ warnings in a specific zone, and ultimately decrease the negative effects of this natural hazard on human life. Advances in science and equipment allow experts to develop earthquake prediction strategies more accurately. Based on practical, numerical, and analytical studies, it can be seen that the majority of SHT predictions are not fruitful. Indeterminate and random surface displacement is normally seen in LT predictions, and practical applications require knowledge of ground shaking, especially in liquefaction analyses. In addition, AI branches/algorithms such as the ANN, KNN, SVM, RBF, LMBP, and RNN have proven more accurate and are used intensively in the EQ estimation field because they obtain more satisfactory results; regrettably, however, no generally beneficial approach to precisely predict EQs has been found yet. As a matter of fact, it may not be possible to determine the exact time at which a devastating EQ will take place, because, as soon as sufficient strain has accumulated, a fault may become inherently unstable, and any tiny background EQ may or may not keep rupturing and grow into a large EQ. In particular, the major novelty of these efforts is the development of systems capable of forecasting EQ occurrence over periods of time from a statistical point of view, in order to permit administrations to organize precautionary policies.

Figure 1.
Figure 1. The general process of EQ prediction used by researchers, investigated in this chapter to clarify the current progress in the relevant field.

Figure 4.
Figure 4. An EQ probability along the Ryukyu trench, Japan.

Figure 5.
Figure 5. Propagation of three EQs with M ≥ 6.0 for the Meishan EQ sequence between 1904 and 1906. The cities of Xingang, Chiayi, and Meishan, considered for seismic hazard assessments, are depicted as black squares. Stars display the EQ epicenters. The location of the illustrated region is given in Figure 7.

Figure 6.
Figure 6. The flowchart for an SHT prediction approach.

Figure 7.
Figure 7. The repartition of declustered EQs with M ≥ 5.0 between 1940 and 2010. Note that some interface events in northeastern Taiwan, occurring at longitude 122° east, are not shown in this figure.

Figure 9.
Figure 9. The parameters N, L, Z, and B for the M8 algorithm in the extended standardized seismic phase space.

Figure 10.
Figure 10. A general design of an EQ detection tool.

Figure 11.
Figure 11. Site seismic parameters in the assessment of PGA.

Figure 14.
Figure 14. Structure of the back propagation NN to predict an EQ occurrence [13].

Table 1.
Table 1. Precision classification of EQ prediction.