Open access

Inter-Comparison of an Evolutionary Programming Model of Suspended Sediment Time-Series with Other Local Models

Written By

M. A. Ghorbani, R. Khatibi, H. Asadi and P. Yousefi

Submitted: 24 November 2011 Published: 18 October 2012

DOI: 10.5772/47801

From the Edited Volume

Genetic Programming - New Approaches and Successful Applications

Edited by Sebastian Ventura


1. Introduction

The experience of applying evolutionary computing to time series describing local physical problems has benefited the modelling culture by showing that many different mathematical formulae can be produced to describe the same problem. This experience brings into focus the role of pluralism in the modelling culture, as opposed to searching for the best model, where physical problems provide relevance and context to the choice of modelling techniques. Both of these roles are often overlooked and do not directly influence research agendas. Although the focus of this chapter is on evolutionary computing, it also promotes a pluralistic modelling culture by studying other modelling techniques, as well as by keeping the role of physical problems in the foreground.

Estimating suspended sediment loads is a problem of practical importance and includes such problems as changing courses in rivers, loss of fertile soil, filling of reservoirs and impacts on water quality. The study of these problems in the short run is referred to as sediment transport, and as erosion for those in the long run. Past empirical capabilities remain invaluable but are not sufficient on their own, as management and engineering solutions often require an insight into the problem. Empirical knowledge has been incorporated into the body of distributed modelling techniques, giving rise to sophisticated modelling software tools, but their applications require a great deal of resources. There remains a category of problems, often referred to as time series analysis, which uses the sequence of time variations to predict future values. This category of models provides useful information for the management of local problems; for instance, such models may be used to schedule dredging requirements or other maintenance activities. Time series analysis is developing into a local management tool and is the focus of this chapter.

The aim of this chapter is to predict the suspended sediment load of a river into the future. Besides the traditional empirical Sediment Rating Curve (SRC), there are several strategies for analysing such time series, and evolutionary computing is one of the Artificial Intelligence (AI) approaches, which broadly include capabilities for searching and recognising patterns, among others. This chapter also employs the Artificial Neural Network (ANN), another AI approach. Yet another strategy is to regard time series as the outcome of many random drivers, an assumption supported by a whole body of probabilistic approaches; this chapter uses Multi-Linear Regression (MLR) analysis to model the same data. Over the past few decades, research has increasingly focused on the application of deterministic chaos (or chaos theory, or dynamical systems), showing that many apparently randomly varying system behaviours can be explained by deterministic chaos. The concept behind this modelling strategy is that the particular data can largely be explained by deterministic behaviour, where in time the system evolves asymptotically towards an attractor. Its random-looking variations are assumed to be an internal feature of the system and, depending on its initial conditions, its state under a certain range may become highly erratic but with predictable behaviour. Evidently, none of these strategies are identical and different models rarely produce identical results. This chapter therefore compares the performance of these modelling strategies for solving an engineering problem.

The study employs 26 years of Mississippi River data recorded at Tarbert + RR Landings, involving both flows and suspended sediment load. The river discharges about 200 million metric tons of suspended sediment per year to the Gulf of Mexico, where it ranks about sixth in the world today.

2. Literature review

The Sediment Rating Curve (SRC) is an empirical approach used by practitioners in engineering studies of sediment and erosion problems. The log-linear rating curve method has been used widely, and Sivakumar and Wallender (2005) outline the many flaws associated with this technique, including the lack of fit due to missing variables (e.g. Miller, 1951), retransformation bias (e.g. Ferguson, 1986), and non-normality of the error distribution (e.g. Thomas, 1988). According to Sivakumar and Wallender (2005), the technique has been modified including, among others, the use of separate curves for different seasons (Miller, 1951), stratifying the data according to the magnitude of flow and applying a separate curve for each stratum (Glysson, 1987), and the use of a single multivariate model instead of multiple rating curves (Cohn et al., 1992). Sivakumar and Wallender (2005) argue that there is not a simple (and universal) ‘water discharge-suspended sediment concentration-suspended sediment load’ relationship. A brief overview of past studies is as follows.

Kisi et al. (2008) review the application of ANN and neuro-fuzzy techniques to time series analysis of sediment loads at various timescales and the uncertainty in the data. Variations of these techniques have also been reported by Jain (2001), Tayfur (2002), Cigizoglu (2004), Kisi (2004), Raghuwanshi et al. (2006) and Cigizoglu & Kisi (2006). Other studies on the application of ANN to suspended sediment include that by Wang et al. (2008), who applied ANN to derive the coefficients of regression analysis for their SRC model.

Aytek and Kisi (2008) used the GP approach to model suspended sediment for two stations on the Tongue River in Montana, USA, and indicate that the GP formulation performs quite well compared with sediment rating curves and multi-linear regression models.

Chaotic signals have also been identified in time series of suspended sediment loads by Sivakumar and Jayawardena (2002, 2003) and Farmer and Sidorowich (1987). The outcomes revealed the usefulness of these methods towards an effective prediction capability.

Overall, a general understanding of the analysis of suspended sediment load is yet to emerge, and one way to gain an insight into the problem is to carry out inter-comparison studies of the performance of a host of models applied to a diversity of rivers of different shapes and sizes.

3. Study area and data

3.1. Understanding the problem

Sediment transport is concerned with entrained soil materials carried in water by erosion on the catchment and within channels. Sediment particles are categorised as follows: (i) the saltation load (not discussed here); (ii) bedload (not discussed here); and (iii) suspended load, including clay and silt (< 62 μm in particle diameter) and sand. Suspended load (both as “fine-grained sediment” and “wash load”) is directly a result of the turbulence in water and forms a large proportion of the transported load, where the turbulence is a measure of the energy in the water to carry the load.

Sediment discharge is a measure of the mass rate of sediment transport at any point in space and time and determines whether the load is being transported or deposited. The whole process comprises soil erosion, sediment transport and sediment yield, where the load delivered to a point in the catchment is referred to as sediment yield and is expressed in tons per unit area of the basin per year, measured at that point. Estimation of sediment yield (and soil erosion) is essential for management, and mathematical models are also used to gain an insight into the underlying processes. Sediment yield is estimated by (i) direct measurement; (ii) using local time series models to predict future states; or (iii) using mathematical models to study jointly both erosion and sediment processes.

Suspended sediment forms most of the transported load and can be affected by many parameters including rainfall, land use pattern, slope and soil characteristics (e.g. soil moisture content), but their consideration leads to distributed models, which are complex. Recorded suspended sediment drives distributed models by serving as boundary conditions or input sources, but the information inherent in the records is not otherwise tapped. There is a case for local models that study the information contained in recorded sediment loads alone, in terms of flow and sediment hydrographs. This chapter is concerned with the study of the suspended load of a river, as discussed below.

Figure 1.

Mississippi River Station at Tarbert + RR Landings (http://pubs.usgs.gov/circ/circ1133/geosetting.html).

3.2. Study area

The flow-sediment time series data of a Mississippi River station are used in the study, the location of which is shown in Figure 1. The gauge is situated at Tarbert + RR Landings, LA (USGS Station no. 07373291, latitude 30°57′40″, longitude 91°39′52″) and is operated by the US Geological Survey (USGS). The Mississippi River discharges an average of about 200 million metric tons of suspended sediment per year to the Gulf of Mexico.

3.3. Review of data records

Daily suspended sediment measurements for the above station have been made available by the USGS from April 1949. The data used herein span a period of about 26 years (amounting to 9496 datapoints) starting on October 1, 1949. Figure 2 shows the variation of the daily suspended sediment and stream flow series observed at the above station.

Of the 26 water-years of daily records of flow and suspended sediment (9496 datapoints), the first 25 water-years (9131 datapoints) were used to train the models and the remaining 365 datapoints of daily records were used for testing. The statistical parameters of the stream flow and sediment data are shown in Table 1. These results show that the overall contribution of the datapoints in the test period is average; its individual characteristics in terms of kurtosis show the annual hydrographs to be less peaked and flatter, but at the same time the suspended sediment load during the year is significantly high. Thus, the minimum values during this year were significantly above the average, persistent and less dynamic.

Figure 2.

Variation of Daily Suspended Sediment and Flow Data in the Mississippi River Basin

| Statistic | Training: Susp. Sediment (ton/day) | Training: Discharge (m3/s) | Testing: Susp. Sediment (ton/day) | Testing: Discharge (m3/s) | All data: Susp. Sediment (ton/day) | All data: Discharge (m3/s) |
| Datapoints | 9131 | 9131 | 365 | 365 | 9496 | 9496 |
| Mean | 6.37E5 | 1.27E4 | 4.52E5 | 1.58E4 | 6.30E5 | 1.28E4 |
| St dev | 6.30E5 | 7.60E3 | 2.53E5 | 7.80E3 | 6.21E5 | 7.60E3 |
| Max | 4.97E6 | 4.25E4 | 1.22E6 | 3.45E4 | 4.97E6 | 4.25E4 |
| Min | 4.00E3 | 2.80E3 | 6.70E4 | 5.40E3 | 4.00E3 | 2.80E3 |
| CV | 1.0 | 0.60 | 0.56 | 0.49 | 0.99 | 0.59 |
| Skew | 1.78 | 0.95 | 0.10 | 0.51 | 1.82 | 0.93 |
| Kurt | 3.98 | 0.34 | -0.98 | -0.85 | 4.20 | 0.27 |

Table 1.

Statistical Parameters for Dataset from the Mississippi River Basin

3.4. Overview of the models

The sediment rating curve method is the traditional method for converting measured flows into predictions of suspended sediment load, and this chapter aims to test the performance of evolutionary computing models while using a host of other techniques for inter-comparison purposes. These models are outlined in this section, with evolutionary computing explained in more detail. Their underlying notion is that past values contain a sufficient amount of information to predict future values, and a systematic way of representing this notion is set out in Table 2 in terms of a selection of models. These models, in essence, are reminiscent of regression analysis, but the GEP, ANN and MLR models approach the problem in their own individual ways to unearth the structure of the information inherent in the time series. Notably, the SRC model is expressed by Model 1 and the deterministic chaos model by Model 0. These models are all evaluated by using the Coefficient of Correlation (CC), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE).

| Model | Input variables | Output variable | The Structure |
| Model 0 | St-1, St-2, ... | St | Chaos |
| Model 1 | Qt | St | ANN, SRC |
| Model 2 | Qt, St-1 | St | GEP, ANN, MLR |
| Model 3 | Qt, Qt-1 | St | GEP, ANN, MLR |
| Model 4 | Qt, Qt-1, St-1 | St | GEP, ANN, MLR |
| Model 5 | Qt, Qt-1, Qt-2 | St | GEP, ANN, MLR |
| Model 6 | Qt, Qt-1, Qt-2, St-1 | St | GEP, ANN, MLR |

Table 2.

Modelling Structures of the Selected Modelling Techniques

3.4.1. Sediment rating curve

The sediment rating curve is a flux-averaged relationship between suspended sediment, S, and water discharge, Q, expressed as a power law of the form $S = aQ^{b}$, where a and b are coefficients. Values of a and b for a particular stream are determined from data via a linear regression between log S and log Q. The SRC model is represented as Model 1 in Table 2. For more critical views on this model, reference may be made to Kisi (2005) and Walling (1977), among others.
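As an illustration of how the rating curve coefficients can be obtained, the following minimal Python sketch fits a and b by least squares in log-log space; the function names are assumptions for illustration, and no retransformation-bias correction (one of the flaws noted in the literature review) is applied.

```python
import numpy as np

def fit_rating_curve(Q, S):
    """Fit log S = log a + b log Q by least squares and return (a, b)."""
    b, log_a = np.polyfit(np.log(Q), np.log(S), 1)   # slope, intercept
    return np.exp(log_a), b

def rating_curve(Q, a, b):
    """Predict suspended sediment load from discharge with S = a * Q**b."""
    return a * np.asarray(Q) ** b

# e.g. a, b = fit_rating_curve(Q_train, S_train); S_est = rating_curve(Q_test, a, b)
```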

3.4.2. Evolutionary computing

Evolutionary computing techniques apply optimisation algorithms as a tool to facilitate the mimicking of natural selection. A building block approach to generalised evolution driven by natural selection is yet to be presented, although Khatibi (2011) has outlined a rationale for it. Traditional understanding of natural selection for biological species is well developed, according to which the process takes place at the gene level of all individuals of all species, which carry hereditary material for reproduction by inheriting from their parents and by passing on a range of their characteristics to their offspring. The process of reproduction is never a perfect copying process, as mutation may occur from time to time in biological reproduction, involving the random process of reshuffling the genes during sexual reproduction. The chapter assumes preliminary knowledge of genes, chromosomes, gene pools, DNA and RNA, where the environment also has a role to play. The environment for the production of proteins and for sexual reproduction is different from the outer environment in which the individual entities supported by the proteins, or produced by sexual reproduction, perform. The outer environment is characterised by (i) being limited in resources; (ii) having no foresight; (iii) organisms tending to produce more offspring than can be supported, a process driven by positive feedback loops; and (iv) a process of competition and selection. Some of these details are normally overlooked or simplified in evolutionary computing, and therefore the chapter stresses the point that natural selection takes place at the gene level and this is not directly applicable to selection at the social level.

Facts on natural selection are overwhelming but there are myths as well, e.g. the myth of “the survival of the fittest”, a term widely used in evolutionary computing. Although the fittest has a selective advantage to survive, this is not a guarantee of survival in the natural world. An overview of the dynamics of natural selection in an environment is that (i) the environment can only support a maximum population of a certain size, but there is also a lower size, the critical mass, below which a population is at risk of losing its viability; (ii) there is a process of reproduction, during which natural selection operates at the gene level, although there are further processes operating at the individual level beyond the direct reach of natural selection (e.g. interactions among the individuals catered for by other mechanisms, or each individual being under selection pressure from the environment); and (iii) the process of reproduction is associated with mutation, which gives rise to the production of gene pools.

A great deal of the above overview has been adopted in evolutionary computing, the history of which goes back to the 1960s when Rechenberg (1973) introduced evolution strategies. The variants of this approach include the genetic algorithm (Holland, 1975), evolutionary programming (Fogel et al., 1966), genetic programming (Koza, 1992) and Gene Expression Programming (GEP) (Ferreira, 2001a). This chapter uses the latter approach, which in simple terms is a variation of GP, although each of these techniques differs from the others. These techniques have the capability for deriving a set of mathematical expressions to describe the relationship between the independent and dependent variables using such operations as mutation, recombination (or crossover) and evolution.

This chapter is concerned with GEP, and one of the important preliminary decisions in its implementation is to establish the models represented in Table 2 (Models 2-6). There is no prior knowledge of the appropriateness of any of these models and therefore this is normally fixed in a preliminary modelling task through a trial-and-error procedure; a sketch of assembling such lagged inputs is given below. Whatever the model choice (Models 2-6 or similar), each implementation of GEP builds up the model in terms of the values of the coefficients (referred to as terminals) and the operations (functions) through the procedure broadly outlined in Figure 3.
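The sketch below shows one way the lagged input matrices of Table 2 could be assembled in Python before any of the models (GEP, ANN or MLR) are trained; the helper name build_model_inputs and its arguments are illustrative assumptions, not part of the software used in the study.

```python
import numpy as np

def build_model_inputs(Q, S, q_lags, use_s_lag):
    """Stack the lagged series of Table 2 into an input matrix X and target y.
    q_lags is the number of discharge terms (Qt, Qt-1, ...); use_s_lag adds St-1."""
    Q, S = np.asarray(Q, float), np.asarray(S, float)
    max_lag = max(q_lags - 1, 1 if use_s_lag else 0)
    cols = [Q[max_lag - k: len(Q) - k] for k in range(q_lags)]   # Qt, Qt-1, ...
    if use_s_lag:
        cols.append(S[max_lag - 1: -1])                          # St-1
    X = np.column_stack(cols)
    y = S[max_lag:]                                              # St
    return X, y

# e.g. Model 4 (Qt, Qt-1, St-1): X, y = build_model_inputs(Q, S, q_lags=2, use_s_lag=True)
```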

Figure 3.

Simplified Outline of Implementation of Evolutionary Programming Models

The working of a gene expression program depicted in Figure 3 is outlined as follows. A chromosome in GEP is composed of genes and each gene is composed of (i) terminals and (ii) functions. The gene structures and chromosomes in GEP are illustrated for the solution that is obtained for the dataset used in this study (see Section 4.2). The terminals as their names suggest are composed of constants and variables and the functions comprise mathematical operations, as shown by (4.a)-(4.f).

Figure 4.

Expression Trees – (a) typical expression tree; (b) the selected GEP model in this study

As the term terminal suggests, it comprises a set of values at the tail-ends of the genes of the chromosomes, and these are made meaningful by the functions making up the other component of the genes. In GEP, these are represented by a bilingual notation called the Karva language, comprising (i) genetic codes, which are not deemed necessary for a description here, and (ii) expression trees (or parse trees), as illustrated by Figure 4.a; the recommended solution is shown in Figure 4.b, which is transcribed as Equation (4) in Section 4.2. The chromosomes of the initial population are no different in form from the solution shown in Figure 4.b, but the composition of each initial chromosome is often selected at random and GEP is then expected to improve them through evolution by the strategy of selection, replication and mutation; there are further facilities, not described here, that help produce a more robust solution, including inversion, transposition and recombination. The improvements are carried out through selection from one generation to another and this is why this modelling strategy is called evolutionary computation. The main strength of this approach is that it does not set up any system of equations to predict the future but evaluates the fitness of each chromosome and selects from those a new population with better performance traits.

The GEP employed in this study is based on evolving computer programs of different sizes and shapes encoded in linear chromosomes of fixed length (Ferreira, 2001a, 2001b). The chromosomes are composed of multiple genes, each gene encoding a smaller subprogram. Furthermore, the structural and functional organisation of the linear chromosomes allows the unconstrained operation of important genetic functions, such as mutation, transposition and recombination. It has been reported that GEP is 100-10,000 times more efficient than GP systems (Ferreira, 2001a, 2001b) for a number of reasons, including: (i) the chromosomes are simple entities: linear, compact, relatively small and easy to manipulate genetically (replicate, mutate, recombine, etc.); (ii) the parse trees or expression trees are exclusively the expression of their respective chromosomes; they are the entities upon which selection acts and, according to fitness, they are selected to reproduce with modification.
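For readers who wish to experiment, the following sketch uses the gplearn library, which implements tree-based genetic programming rather than Ferreira's GEP; it is included only to illustrate the evolve-evaluate-select loop described above. The parameter values echo Table 11 where possible but are otherwise illustrative assumptions.

```python
from gplearn.genetic import SymbolicRegressor

# symbolic regression as a GP stand-in for GEP; X, y would come from one of Models 2-6
est = SymbolicRegressor(
    population_size=30,                            # cf. chromosome population of 30 (Table 11)
    generations=1000,                              # cf. 1000 generations (Table 11)
    function_set=('add', 'sub', 'mul', 'sqrt'),    # roughly function set (4.b)
    metric='rmse',
    p_crossover=0.7, p_subtree_mutation=0.05,
    p_hoist_mutation=0.05, p_point_mutation=0.05,
    random_state=0)

# est.fit(X_train, y_train)        # evolve expressions against the training data
# print(est._program)              # the fittest evolved expression
```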

3.4.3. Artificial Neural Networks (ANNs)

Whilst evolutionary programming emulates the workings of Nature, ANNs emulate the workings of neurons in the brain. Both the brain and ANNs are parallel information processing systems consisting of a set of neurons or nodes arranged in layers, but this is where the parallel ends. The actual process of information processing in the brain is a topical research issue, but the drivers of ANNs are polynomial algebra, and there is no evidence that the brains of humans, monkeys or any other animals employ algebraic computations such as optimisation methods. Although there is a great incentive to understand the working of the brain, it is not imperative to be constrained by it; the use of algebra in ANNs is not criticised here, but awareness is raised that these two processes are not identical.

The ANN theory has been described in many books, including the text by Rumelhart et al. (1986), and the application of ANNs has been the subject of a large number of papers in the recent literature. There are various implementations of ANNs, but the type used in this study is a Multi-Layer feedforward Perceptron (MLP) trained with the back-propagation learning algorithm, with the following functions: (i) the input layer accepts the data; (ii) the intermediate (hidden) layer processes them; and (iii) the output layer displays the resultant outputs. The number of hidden layers and neurons is decided in a preliminary modelling process by finding the most efficient structure through a trial-and-error procedure. Each neuron in a layer is connected to all the neurons of the next layer, and the neurons in one layer are not connected among themselves. All the nodes within a layer act synchronously.

This study implements the ANN models in terms of Models 1-6 of Table 2, and Figure 5 shows one of the implementations selected. For each of these models, the data passing through the connections from one neuron to another are multiplied by weights that control the strength of the passing signal. When these weights are modified, the data transferred through the network change; consequently, the network output also changes. The signal emanating from the output node(s) is the network's solution to the input problem.

In the back-propagation algorithm, a set of inputs and outputs is selected from the training set and the network calculates the output based on the inputs. This output is subtracted from the actual output to find the output-layer error. The error is back-propagated through the network, and the weights are suitably adjusted. This process continues for the number of prescribed sweeps or until a prescribed error tolerance is reached. The mean square error over the training samples is the typical objective function to be minimised. After training is complete, the ANN performance is validated; depending on the outcome, either the ANN has to be retrained or it can be implemented for its intended use. A minimal sketch of this training loop is given below.
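The sketch below is a bare-bones NumPy illustration of back-propagation for a single hidden layer (batch gradient descent on the mean square error); it is not the MATLAB trainlm network actually used in the study, and the layer size, learning rate and epoch count are illustrative assumptions.

```python
import numpy as np

def train_mlp(X, y, hidden=6, lr=0.01, epochs=100):
    """Minimal one-hidden-layer MLP trained by back-propagation on the MSE."""
    rng = np.random.default_rng(0)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.1, (hidden, 1)); b2 = np.zeros(1)
    y = y.reshape(-1, 1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)          # hidden layer
        out = h @ W2 + b2                 # output layer
        err = out - y                     # output-layer error
        # back-propagate the error and adjust the weights
        dW2 = h.T @ err / n;  db2 = err.mean(0)
        dh = (err @ W2.T) * (1.0 - h ** 2)
        dW1 = X.T @ dh / n;   db1 = dh.mean(0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
    return W1, b1, W2, b2

def predict_mlp(X, W1, b1, W2, b2):
    return (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
```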

Figure 5.

Implementation of the ANN Models and its Various Layers

3.4.4. Multi Linear Regression (MLR)

An overview of the data presented in Figure 2 invokes the thought that, other than the annual trend within the data, the underlying process is probably random, and a more rational way of explaining the data would be through probabilistic approaches. One such method applied to the selected data is the Multi-Linear Regression (MLR) model. It fits a linear combination of the components of a multiple input signal x (e.g. recorded flow and suspended sediment time series, as defined by Models 2-6 in Table 2) to a single output signal y (e.g. the predicted suspended sediment load), as defined by (1.a), and returns the residual, r, i.e. the difference signal, as defined by (1.b):

$$y = b + \sum_{i=1}^{N} a_i x_i \qquad (1.a)$$

$$r = y - a_1 x_1 - a_2 x_2 - \dots - b \qquad (1.b)$$

where the xi are defined in Table 2 in terms of the various models and the ai values are the regression coefficients, which are estimated by the least squares or any other similar method. In this study, the coefficients of the regression were determined using the least squares method.
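Equations (1.a)-(1.b) can be fitted with an ordinary least-squares solve; the short sketch below is a generic illustration and not the Mathematica routine used in Section 4.4.

```python
import numpy as np

def fit_mlr(X, y):
    """Least-squares estimate of the intercept b and coefficients a in y = b + sum a_i x_i."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[0], coef[1:]            # b, a

def predict_mlr(X, b, a):
    return b + X @ a

# e.g. b, a = fit_mlr(X_train, y_train); S_est = predict_mlr(X_test, b, a)
```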

3.4.5. Chaos theory

A cursory view of the suspended sediment record of the Mississippi River in Figure 2 provides no clue to a strategy for its underlying patterns, if any, although an annual trend superimposed on random variation may be the immediate reaction of a hydrologist. Another strategy to explore such possible patterns is through the application of chaos theory, more specifically through the “phase-space diagram”, as shown in Figure 6 for this river data. A point in the phase-space represents the state of the system at a given time. The narrow dark band in the figure signifies strong determinism, whereas its scattered band signifies the presence of noise, and therefore there is a possibility of explaining this set of data by chaos theory. The dark band signifies convergence of the trajectories of the phase-space, with a fractal dimension, towards the attractor of the data, where the dynamics of the system can be reduced to a set of deterministic laws enabling the prediction of its future states.

Figure 6.

Phase-space Diagram of Daily Suspended Sediment Data in the Mississippi River Basin

Chaos theory is a method of nonlinear time series analysis and involves a host of methods, essentially based on the phase-space reconstruction of a process from scalar or multivariate measurements of physical observables. This method is implemented in terms of Model 0 of Table 2. It is largely based on the representation of the underlying dynamics through reconstruction of the phase-space, originally given by Takens (1981). It is implemented in terms of two parameters, the delay time and the embedding dimension, according to which, given a set of physical variables and an analytical model describing their interactions, the dynamics of the system can be represented geometrically by a single point moving on a trajectory, where each of its points corresponds to a state of the system. The phase-space diagram is essentially a co-ordinate system whose coordinates represent the variables necessary to completely describe the state of the system at any moment.

One difficulty in its construction is that in most practical situations, information on every variable influencing the system may not be available. However, a time series of a single variable may be available, which may allow the construction of a (pseudo) phase-space. The idea behind such a reconstruction is that a non-linear system is characterized by self-interaction, and a time series of a single variable can carry the information about the dynamics of the entire multi-variable system. The trajectories of the phase-space diagram describe the evolution of the system from some initial state, and hence represent the history of the system.

This chapter applies chaos theory to analyse the suspended sediment load of the Mississippi River data in a similar fashion to the other modelling strategies described above. It uses the local prediction method for training and testing, as outlined below, but it is traditional practice to apply several methods to build evidence for the existence of chaotic signals in particular data. These techniques employ the delay-embedding parameters τ and m, which are unknown a priori. The following methods are used in this chapter:

  1. Average Mutual Information (AMI) is used to estimate τ, and the minimisation of False Nearest Neighbours is used to estimate the optimal value of the embedding dimension, m.

AMI (Fraser and Swinney, 1986) defines how the measurements X(t) at time t are related, from an information theoretic point of view, to the measurements X(t+τ) at time t+τ. The average mutual information is defined as:

$$I(\tau) = \sum_{X(i),\,X(i+\tau)} P\bigl(X(i), X(i+\tau)\bigr)\,\log\!\left[\frac{P\bigl(X(i), X(i+\tau)\bigr)}{P\bigl(X(i)\bigr)\,P\bigl(X(i+\tau)\bigr)}\right]$$

where the sum is taken over the i samples. P(X(i)) and P(X(i+τ)) are the marginal probabilities of the measurements X(i) and X(i+τ), respectively, whereas P(X(i), X(i+τ)) is their joint probability. The optimal delay time is the value of τ at the first minimum of I(τ), at which X(i+τ) adds the maximum new information to X(i).
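A simple histogram-based estimate of I(τ) can be written as follows; the bin count and the first-minimum search are illustrative assumptions (the study itself used the TISEAN package for this step, as noted in Section 4.5).

```python
import numpy as np

def average_mutual_information(x, max_lag=100, bins=64):
    """I(tau) estimated from a 2-D histogram of (x_t, x_{t+tau}) pairs."""
    ami = []
    for tau in range(1, max_lag + 1):
        a, b = x[:-tau], x[tau:]
        pxy, _, _ = np.histogram2d(a, b, bins=bins)
        pxy = pxy / pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)     # marginal P(X(i))
        py = pxy.sum(axis=0, keepdims=True)     # marginal P(X(i+tau))
        nz = pxy > 0
        ami.append(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
    return np.array(ami)

def first_minimum(ami):
    """Return the lag of the first local minimum of I(tau); lags start at 1."""
    for i in range(1, len(ami) - 1):
        if ami[i] < ami[i - 1] and ami[i] < ami[i + 1]:
            return i + 1
    return None
```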

The False Nearest Neighbours procedure (Kennel et al., 1992) is a method to obtain the optimum embedding dimension for phase-space reconstruction. By checking the neighbourhood of points embedded in projection manifolds of increasing dimension, the algorithm eliminates 'false neighbours': points that appear to lie close together owing to the projection but are separated in higher embedding dimensions. When the ratio between the number of false neighbours at dimension m+1 and at dimension m falls below a given threshold, generally smaller than 5%, each m' ≥ m+1 is an optimal embedding. If m' is chosen too large, a poor reconstruction is obtained (few embedding states with many components) and the subsequent analyses should not be performed. A sketch of the false nearest neighbours count is given below.
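The sketch below counts false nearest neighbours for a given dimension m using a k-d tree; the tolerance r_tol and the omission of the second (attractor-size) test of Kennel et al. are simplifying assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def fnn_fraction(x, tau, m, r_tol=15.0):
    """Fraction of false nearest neighbours when going from dimension m to m+1."""
    x = np.asarray(x, float)
    n = len(x) - m * tau                 # vectors that also have an (m+1)-th coordinate
    if n < 2:
        return np.nan
    # delay vectors Y_i = [x_i, x_{i+tau}, ..., x_{i+(m-1)tau}]
    Y = np.column_stack([x[i * tau: i * tau + n] for i in range(m)])
    extra = x[m * tau: m * tau + n]      # coordinate added in dimension m+1
    dist, idx = cKDTree(Y).query(Y, k=2) # k=2: the first hit is the point itself
    d_m, nn = dist[:, 1], idx[:, 1]
    valid = d_m > 0
    ratio = np.abs(extra[valid] - extra[nn[valid]]) / d_m[valid]
    return np.mean(ratio > r_tol)

# e.g. fractions = [fnn_fraction(s, tau=94, m=m) for m in range(1, 41)]
```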

  2. Correlation Dimension (CD) method: this is a nonlinear measure of the correlation between pairs of points lying on the attractor. For a time series whose underlying dynamics are chaotic, the correlation dimension takes a finite fractional value, whereas for stochastic systems it is infinite. For an m-dimensional phase-space, the correlation function Cm(r) is defined as the fraction of pairs of states closer than r (Grassberger and Procaccia, 1983; Theiler, 1986):

$$C(r) = \lim_{N\to\infty}\frac{2}{N(N-1)}\sum_{i<j}^{N} H\bigl(r - \lVert Y_i - Y_j\rVert\bigr)$$

where H is the Heaviside step function, Yi is the i-th state vector, and N is the number of points on the reconstructed attractor. The number w is called the Theiler window and is the correction needed to avoid spurious results due to temporal correlations instead of dynamical ones. For stochastic time series Cm(r) ∝ r^m holds, whereas for chaotic time series the correlation function scales with r as:

$$C_m(r) \propto r^{D_2}$$

where D2, the correlation exponent, quantifies the degrees of freedom of the process and is defined by:

$$D_2 = \lim_{r\to 0}\frac{\ln C(r)}{\ln r}$$

and can be reliably estimated as the slope in the plot of ln Cm(r) versus ln r.
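The correlation sum and the slope estimate of D2 can be sketched as follows; note that, for brevity, no Theiler window w is applied and the scaling region is taken as all radii with non-zero C(r), both of which are simplifying assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_sum(x, m, tau, radii):
    """C_m(r): fraction of pairs of delay vectors closer than r."""
    x = np.asarray(x, float)
    n = len(x) - (m - 1) * tau
    Y = np.column_stack([x[i * tau: i * tau + n] for i in range(m)])
    d = pdist(Y)                                   # all pairwise distances
    return np.array([np.mean(d < r) for r in radii])

def correlation_exponent(x, m, tau, radii):
    """D2 estimated as the slope of ln C_m(r) versus ln r."""
    C = correlation_sum(x, m, tau, radii)
    ok = C > 0
    slope, _ = np.polyfit(np.log(radii[ok]), np.log(C[ok]), 1)
    return slope
```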

  3. Local Prediction Model: the authors' implementation of the local prediction method for deterministic chaos is detailed in Khatibi et al. (2011), but the overview is that a correct phase-space reconstruction in a dimension m facilitates an interpretation of the underlying dynamics in the form of an m-dimensional map, fT, according to:

$$Y_{j+T} = f_T(Y_j)$$

where Yj and Yj+T are vectors of dimension m describing the state of the system at times j (i.e. the current state) and j+T (i.e. the future state), respectively. The problem then is to find an appropriate expression for fT (i.e. FT). Local approximation entails the subdivision of the domain of fT into many subsets (neighbourhoods), each of which identifies an approximation FT valid only in that same subset. In other words, the dynamics of the system are described step-by-step, locally in the phase-space. In this m-dimensional space, prediction is performed by estimating the change with time of Xi, the observed values of the discrete scalar time series, with delay coordinates in the m-dimensional phase space. The relation between the points Xt and Xt+p (the behaviour at a future time p on the attractor) is approximated by a function F as:

$$X_{t+p} \approx F(X_t)$$

In this prediction method, the change of Xt with time on the attractor is assumed to be the same as that of nearby points (X_{T_h}, h = 1, 2, ..., n). Herein, Xt+p is determined by the d-th order polynomial F(Xt) as follows (Itoh, 1995):

$$x_{t+p} \approx f_0 + \sum_{k_1=0}^{m-1} f_{1k_1}\,X_{t-k_1\tau} + \sum_{k_1=0}^{m-1}\sum_{k_2=k_1}^{m-1} f_{2k_1k_2}\,X_{t-k_1\tau}X_{t-k_2\tau} + \dots + \sum_{k_1=0}^{m-1}\sum_{k_2=k_1}^{m-1}\cdots\sum_{k_d=k_{d-1}}^{m-1} f_{dk_1k_2\dots k_d}\,X_{t-k_1\tau}X_{t-k_2\tau}\cdots X_{t-k_d\tau}$$

Using the n points X_{T_h} and X_{T_h+p} for which the values are already known, the coefficients f are determined by solving the following equation:

$$X \approx A\,f$$

where

$$X = \bigl(X_{T_1+p},\, X_{T_2+p},\, \dots,\, X_{T_n+p}\bigr)$$

and

$$f = \bigl(f_0,\, f_{10},\, f_{11},\, \dots,\, f_{1(m-1)},\, f_{200},\, \dots,\, f_{d(m-1)(m-1)\dots(m-1)}\bigr)$$

and A is the n × [(m+d)!/(m! d!)] Jacobian matrix, which in its explicit form is:

$$A = \begin{bmatrix} 1 & X_{T_1} & X_{T_1-\tau} & \cdots & X_{T_1-(m-1)\tau} & X_{T_1}^{2} & \cdots & X_{T_1-(m-1)\tau}^{d} \\ 1 & X_{T_2} & X_{T_2-\tau} & \cdots & X_{T_2-(m-1)\tau} & X_{T_2}^{2} & \cdots & X_{T_2-(m-1)\tau}^{d} \\ \vdots & & & & & & & \vdots \\ 1 & X_{T_n} & X_{T_n-\tau} & \cdots & X_{T_n-(m-1)\tau} & X_{T_n}^{2} & \cdots & X_{T_n-(m-1)\tau}^{d} \end{bmatrix}$$

In order to obtain a stable solution, the number of rows in the Jacobian matrix A must satisfy:

$$n \ge \frac{(m+d)!}{m!\,d!}$$

As stated by Porporato and Ridolfi (1997), even though the F-values are first-degree polynomials, the prediction is nonlinear, because during the prediction procedure every point x(t) belongs to a different neighbourhood and is therefore defined by different expressions for f (Koçak, 1997). A sketch of this nearest-neighbour local fitting is given below.
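A first-degree (d = 1) version of this local prediction scheme can be sketched as follows: the current state is embedded, its n nearest neighbours are found, a local linear map is fitted to their known futures by least squares (the relation X ≈ A f above), and the map is applied to the current state. The neighbour count and the one-step horizon are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_prediction(x, tau, m, n_neighbours=20, horizon=1):
    """Predict the next value of the series by a local first-degree map."""
    x = np.asarray(x, float)
    n = len(x) - (m - 1) * tau - horizon
    # delay vectors (states) and the known future value of each state
    Y = np.column_stack([x[i * tau: i * tau + n] for i in range(m)])
    target = x[(m - 1) * tau + horizon: (m - 1) * tau + horizon + n]
    current = Y[-1]                               # the state we predict from
    _, idx = cKDTree(Y[:-1]).query(current, k=n_neighbours)
    # local least-squares fit of [1, state] -> future value over the neighbours
    A = np.column_stack([np.ones(n_neighbours), Y[:-1][idx]])
    f, *_ = np.linalg.lstsq(A, target[:-1][idx], rcond=None)
    return float(np.r_[1.0, current] @ f)         # n_neighbours should exceed m + 1
```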

4. Setting up models and preliminary results

4.1. Performance of sediment rating curve

The SRC model was implemented by using a simple least squares method leading to

$$S = 13.2\,Q^{1.14} \qquad (3)$$

The performance of this model is summarised in Table 3 and shown in Figure 7. Evidently, its performance is poor and the concern raised in the literature about this model is confirmed. This is a sufficient justification to search for more reliable models.

| Model Input | Training: CC | Training: MAE | Training: RMSE | Testing: CC | Testing: MAE | Testing: RMSE |
| Model 1: Qt | 0.76 | 2.62E5 | 4.11E5 | 0.82 | 3.89E5 | 4.86E5 |

Table 3.

Statistical Performance of the Sediment Rating Curve for the Training and Test Periods

Figure 7.

Comparison of Observed Suspended Sediment with that Modelled by SRC; (a) hydrograph, (b) cumulative values

4.2. Implementation of GEP

The preliminary investigation for the construction of a relationship between flows and suspended sediment in GEP requires: (i) the setting of the function set, as discussed below; (ii) the fitness function; and (iii) a range of other parameters, for which the default values given in Table 11 were sufficient in this study. The following function sets were investigated:

(4.a) {+, −, ×}
(4.b) {+, −, ×, √x}
(4.c) {+, −, ×, x²}
(4.d) {+, −, ×, x³}
(4.e) {+, −, ×, e^x}
(4.f) {+, −, ×, ln(x)}

The performance of each function set was investigated in terms of CC, MAE and RMSE, and the results are shown in Table 4.a for the training period. The results show that (i) the model performances are more sensitive to the choice of independent variables than to the function choices; (ii) the models not including the suspended sediment time series perform poorly; and (iii) the model performance is not overly sensitive to the choice of the function set. Appendix I, Table 11 specifies the fitness function to be the Root Relative Squared Error (RRSE). A sketch of the performance measures used throughout the chapter is given below.
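For completeness, the three performance measures (and the RRSE fitness named in Table 11) can be computed as in the short sketch below; the function names are illustrative.

```python
import numpy as np

def cc(obs, mod):
    """Coefficient of correlation."""
    return np.corrcoef(obs, mod)[0, 1]

def mae(obs, mod):
    """Mean absolute error."""
    return np.mean(np.abs(obs - mod))

def rmse(obs, mod):
    """Root mean square error."""
    return np.sqrt(np.mean((obs - mod) ** 2))

def rrse(obs, mod):
    """Root relative squared error (the GEP fitness function named in Table 11)."""
    return np.sqrt(np.sum((obs - mod) ** 2) / np.sum((obs - obs.mean()) ** 2))
```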

| Function set | Metric | Qt | Qt, St-1 | Qt, Qt-1 | Qt, Qt-1, St-1 | Qt, Qt-1, Qt-2 | Qt, Qt-1, Qt-2, St-1 |
| (4.a) {+, −, ×} | CC | 0.77 | 0.99 | 0.78 | 0.99 | 0.78 | 0.99 |
| | MAE | 2.79E5 | 3.88E4 | 2.78E5 | 3.25E4 | 2.79E5 | 3.37E4 |
| | RMSE | 4.13E5 | 8.43E4 | 4.07E5 | 7.74E4 | 4.05E5 | 7.98E4 |
| (4.b) {+, −, ×, √x} | CC | 0.77 | 0.99 | 0.77 | 0.99 | 0.77 | 0.99 |
| | MAE | 2.82E5 | 3.87E4 | 2.81E5 | 3.80E4 | 2.78E5 | 3.27E4 |
| | RMSE | 4.15E5 | 8.42E4 | 4.14E5 | 8.36E4 | 4.10E5 | 7.75E4 |
| (4.c) {+, −, ×, x²} | CC | 0.77 | 0.99 | 0.78 | 0.99 | 0.78 | 0.99 |
| | MAE | 2.82E5 | 3.89E4 | 2.78E5 | 3.25E4 | 2.76E5 | 3.45E4 |
| | RMSE | 4.15E5 | 8.43E4 | 4.08E5 | 7.74E4 | 4.05E5 | 8.02E4 |
| (4.d) {+, −, ×, x³} | CC | 0.77 | 0.99 | 0.77 | 0.99 | 0.77 | 0.99 |
| | MAE | 2.43E5 | 3.89E4 | 2.76E5 | 3.21E4 | 2.76E5 | 3.41E4 |
| | RMSE | 4.05E5 | 8.43E4 | 4.12E5 | 7.76E4 | 4.13E5 | 8.03E4 |
| (4.e) {+, −, ×, e^x} | CC | 0.77 | 0.99 | 0.77 | 0.99 | 0.77 | 0.99 |
| | MAE | 2.81E5 | 3.88E4 | 2.81E5 | 3.56E4 | 2.42E5 | 3.64E4 |
| | RMSE | 4.15E5 | 8.43E4 | 4.14E5 | 8.16E4 | 4.00E5 | 8.23E4 |
| (4.f) {+, −, ×, ln(x)} | CC | 0.76 | 0.99 | 0.76 | 0.99 | 0.78 | 0.99 |
| | MAE | 2.56E5 | 3.89E4 | 2.64E5 | 3.25E4 | 2.60E5 | 3.21E4 |
| | RMSE | 4.09E5 | 8.42E4 | 4.10E5 | 7.72E4 | 4.02E5 | 7.72E4 |

Table 4a.

Statistical Performance of a Selection of Functions for the Training Period

The performance of the GEP model on the test period is presented in Table 4.b, according to which there is not much difference between the performances of a number of the alternative models, but (4.e) is selected in this study for prediction purposes (its expression tree is given in Figure 4.b) and is given below as Equation (4):

$$S_t = S_{t-1} + 16.77\,Q_t - 16.77\,Q_{t-1} - 13.87 \qquad (4)$$
| Function set | Metric | Qt | Qt, St-1 | Qt, Qt-1 | Qt, Qt-1, St-1 | Qt, Qt-1, Qt-2 | Qt, Qt-1, Qt-2, St-1 |
| (4.a) {+, −, ×} | CC | 0.83 | 0.99 | 0.84 | 0.99 | 0.84 | 0.99 |
| | MAE | 3.99E5 | 2.34E4 | 4.01E5 | 2.32E4 | 4.08E5 | 2.08E4 |
| | RMSE | 4.72E5 | 3.87E4 | 4.73E5 | 4.06E4 | 4.79E5 | 3.75E4 |
| (4.b) {+, −, ×, √x} | CC | 0.83 | 0.99 | 0.84 | 0.99 | 0.84 | 0.99 |
| | MAE | 3.99E5 | 2.33E4 | 4.01E5 | 2.28E4 | 3.96E5 | 2.34E4 |
| | RMSE | 4.69E5 | 3.87E4 | 4.70E5 | 3.08E4 | 4.65E5 | 4.04E4 |
| (4.c) {+, −, ×, x²} | CC | 0.83 | 0.99 | 0.84 | 0.99 | 0.84 | 0.99 |
| | MAE | 4.00E5 | 2.35E4 | 4.01E5 | 2.32E4 | 3.79E5 | 2.31E4 |
| | RMSE | 4.69E5 | 3.86E4 | 4.71E5 | 4.06E4 | 4.67E5 | 3.96E4 |
| (4.d) {+, −, ×, x³} | CC | 0.83 | 0.99 | 0.83 | 0.99 | 0.83 | 0.99 |
| | MAE | 3.87E5 | 2.35E4 | 4.00E5 | 2.10E4 | 3.96E5 | 2.06E4 |
| | RMSE | 4.94E5 | 3.87E4 | 4.75E5 | 3.81E4 | 4.69E5 | 3.69E4 |
| (4.e) {+, −, ×, e^x} | CC | 0.83 | 0.99 | 0.84 | 0.99 | 0.84 | 0.99 |
| | MAE | 3.98E5 | 2.35E4 | 3.97E5 | 2.06E4 | 3.80E5 | 2.17E4 |
| | RMSE | 4.67E5 | 3.86E4 | 4.66E5 | 3.64E4 | 4.84E5 | 3.73E4 |
| (4.f) {+, −, ×, ln(x)} | CC | 0.83 | 0.99 | 0.83 | 0.99 | 0.83 | 0.99 |
| | MAE | 3.90E5 | 2.35E4 | 3.90E5 | 2.40E4 | 3.81E5 | 2.29E4 |
| | RMSE | 4.93E5 | 3.86E4 | 4.81E5 | 4.18E4 | 4.69E5 | 4.05E4 |

Table 4b.

Statistical analysis of the estimated values for the test period

Figure 8 compares the modelled suspended sediment against the observed values, according to which the improvement by GEP over SRC is remarkable. Overall, the GEP modelling results follow the observed values rather faithfully, both in values and in pattern, although there are still discrepancies in the predicted values.

Figure 8.

Comparison of Observed Suspended Sediment with that Modelled by GEP; (a) hydrograph, (b) cumulative values

4.3. Implementation of ANN

ANN implements another AI approach to the data represented in Figure 2, as described in Section 3.4.3. A preliminary investigation was carried out to make decisions on the choice of the models given in Table 2 (Models 1-6) and on the ANN structure, in terms of the number of neurons in the various layers. The preliminary modelling task also included a normalisation function for the data. In this study, MATLAB was employed to develop the ANN model and its mapstd function was selected for the normalisation (further default values are given in Table 11); a sketch of an equivalent normalisation and train/test split is given below. The investigated ANN model structures are defined in Table 5 and their results for both the training and testing periods are presented in Table 6.
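The sketch below reproduces, in Python, the kind of preprocessing and single-hidden-layer network described above, using scikit-learn as a stand-in for the MATLAB toolbox: StandardScaler plays the role of mapstd, 'lbfgs' replaces trainlm (which scikit-learn does not provide) and the 9131/365 split follows Section 3.3. All of these substitutions, and the helper names, are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

def train_ann(X, y, hidden=6, n_train=9131):
    """Normalise (mapstd-style), train on the first n_train days, predict the rest."""
    xs, ys = StandardScaler(), StandardScaler()
    X_tr = xs.fit_transform(X[:n_train])
    y_tr = ys.fit_transform(y[:n_train].reshape(-1, 1)).ravel()
    net = MLPRegressor(hidden_layer_sizes=(hidden,), activation='tanh',
                       solver='lbfgs', max_iter=5000, random_state=0)
    net.fit(X_tr, y_tr)
    y_hat = net.predict(xs.transform(X[n_train:]))
    return ys.inverse_transform(y_hat.reshape(-1, 1)).ravel()

# e.g. Model 4 (4-6-1 in Table 5), using the lagged-input helper sketched in Section 3.4.2:
#   X, y = build_model_inputs(Q, S, q_lags=2, use_s_lag=True)
#   S_test_pred = train_ann(X, y, hidden=6)
```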

| Model Identifier | Model Inputs | Training structure | Testing structure |
| Model 1 | Qt | 2-5-1 | 2-5-1 |
| Model 2 | Qt, St-1 | 3-5-1 | 3-5-1 |
| Model 3 | Qt, Qt-1 | 3-7-1 | 3-7-1 |
| Model 4 | Qt, Qt-1, St-1 | 4-6-1 | 4-6-1 |
| Model 5 | Qt, Qt-1, Qt-2 | 4-9-1 | 4-9-1 |
| Model 6 | Qt, Qt-1, Qt-2, St-1 | 5-12-1 | 5-12-1 |

Table 5.

ANN Structure (number of nodes in layers)

The performances of Models 1-6 are shown in Table 6 in terms of the values of the three statistical indices CC, MAE and RMSE. The performance of the different models in terms of CC is remarkably high, but Model 4 (Qt, Qt-1, St-1) produces smaller deviations and is selected for the final run.

| Model Inputs | Training: CC | Training: MAE | Training: RMSE | Testing: CC | Testing: MAE | Testing: RMSE |
| Qt | 0.999 | 2.32E4 | 2.70E4 | 0.999 | 2.16E4 | 2.30E4 |
| Qt, St-1 | 0.999 | 2.59E4 | 3.12E4 | 0.996 | 2.17E4 | 2.64E4 |
| Qt, Qt-1 | 0.999 | 2.00E4 | 2.79E4 | 0.981 | 4.19E4 | 4.84E4 |
| Qt, Qt-1, St-1 | 0.999 | 2.01E4 | 2.51E4 | 0.998 | 1.18E4 | 1.37E4 |
| Qt, Qt-1, Qt-2 | 0.991 | 7.57E4 | 8.42E4 | 0.942 | 8.47E4 | 8.41E4 |
| Qt, Qt-1, Qt-2, St-1 | 0.995 | 5.66E4 | 6.42E4 | 0.976 | 4.63E4 | 5.47E4 |

Table 6.

Statistical Performance of the Selected Model Structure for the Training and Testing periods

4.4. Implementation of the MLR model

The MLR modelling strategy was implemented using Mathematica to derive regression coefficients for both the period of model fitting (training, in the AI terminology) and the testing period, using the statistical indices (CC, MAE and RMSE) given in Table 7, which shows that Model 2 (Qt, St-1) performs relatively better than the others. The regression equation suggested by this technique is given by Equation (6):

$$S_t = 0.24\,Q_t + 30.99\,S_{t-1} \qquad (6)$$
| Model Inputs | Training: CC | Training: MAE | Training: RMSE | Testing: CC | Testing: MAE | Testing: RMSE |
| Qt | 0.77 | 2.41E5 | 4.05E5 | 0.833 | 3.85E5 | 4.96E5 |
| Qt, St-1 | 0.994 | 3.90E4 | 8.40E4 | 0.988 | 2.40E4 | 3.90E4 |
| Qt, Qt-1 | 0.78 | 2.40E5 | 3.97E5 | 0.837 | 3.85E5 | 4.98E5 |
| Qt, Qt-1, St-1 | 0.993 | 3.30E4 | 7.60E4 | 0.987 | 2.50E4 | 4.10E4 |
| Qt, Qt-1, Qt-2 | 0.779 | 2.40E5 | 3.95E5 | 0.840 | 3.84E5 | 4.96E5 |
| Qt, Qt-1, Qt-2, St-1 | 0.993 | 3.30E4 | 7.70E4 | 0.987 | 2.50E4 | 4.10E4 |

Table 7.

Statistical analysis of the estimated values for the training and test periods

4.5. Implementation of the deterministic chaos model

A visual assessment of the existence of chaotic behaviour in the suspended sediment time series was presented in Figure 6; although not conclusive evidence, it invoked the possibility of the existence of low-dimensional chaos. Traditionally, several techniques are employed to show the existence of low-dimensional chaos, and the results of the determination of the dimensions of the phase-space diagram are given below:

  1. Using the AMI method, the delay time is estimated for the data by plotting the values of the AMI, evaluated by the TISEAN package (Hegger et al., 1999), against delay times progressively increased from 1 to 100 days; the delay time is taken as the first (local) minimum of this curve. The results are shown in Figure (9.a), signifying a well-defined first minimum at a delay time of 94 days. The delay time is then used in the determination of the sufficient embedding dimension using the percentage of false nearest neighbours for the time series. Figure (9.b) shows the results of the false nearest neighbours method for the embedding dimension m, allowed to vary from 1 to 40, and hence its value is estimated as 28.

  2. The presence of chaotic signals in the data is further confirmed by the correlation dimension method. Figure (10.a) shows the relationship between the correlation function C(r) and the radius r (i.e. ln C(r) versus ln r) for increasing m, whereas Figure (10.b) shows the relationship between the correlation dimension values D2(m) and the embedding dimension values m. It can be seen from Figure (10.b) that the value of the correlation exponent increases with the embedding dimension up to a certain value and then saturates beyond it. The saturation of the correlation exponent is an indication of the existence of deterministic dynamics. The saturated correlation dimension is 3.5 (D2 = 3.5). The value of the correlation dimension also suggests the possible presence of chaotic behaviour in the dataset. The nearest integer above the correlation dimension value (D2 = 4) is taken as the minimum dimension of the phase space.

  3. The local prediction algorithm is used to predict the suspended sediment time series. The procedure involves varying the value of the embedding dimension over a range, say 3-8, and estimating the CC and RMSE; the embedding dimension with the highest coefficient of correlation is selected as the solution. These are given in Table 8 for the Mississippi River basin dataset with a daily time interval. It shows that the best predictions are achieved when the embedding dimension is m = 3.

Figure 9.

Analysis of the Phase-Space Diagram of Suspended Sediment Data in the Mississippi River basin; (9.a): Average Mutual Information; (9.b): percentage of false nearest neighbours

| m | CC | RMSE |
| 3 | 0.988 | 4.00E4 |
| 4 | 0.988 | 4.10E4 |
| 5 | 0.986 | 4.30E4 |
| 6 | 0.985 | 4.60E4 |
| 7 | 0.986 | 4.40E4 |
| 8 | 0.987 | 4.20E4 |

Table 8.

Local Prediction Using Different Embedding Dimension for the Mississippi River Dataset

Figure 10.

Correlation Dimension Method to Identify the Presence of Chaotic Signals in the Dataset; (10.a): convergence of log C(r) versus log(r); (10.b): saturation of the correlation dimension D2(m) with embedding dimension m, signifying chaotic signals in the dataset

5. Inter-comparison of the models and discussion of results

Table 9 summarises the performance and main features of all the modelling strategies. The results presented so far confirm the experience that the traditional SRC model performs poorly and may only be used for rough-and-ready assessments. However, the results of the GEP model show that considerable improvements are possible by using it. This section also analyses the relative performance of the various modelling strategies. An overall visual comparison of all the results is presented in Figure 11, according to which the GEP, ANN, MLR and local prediction models perform remarkably well and similarly to one another.

| Model | Performance | Model Structure | Outcome | Comments |
| SRC | Poor | Model 1 | Eq. (3) | For rough-and-ready estimates |
| GEP | Good | Model 4 | Eq. (4), from function set (4.e) | |
| ANN | Good | Model 4 | | The model is bound to its software platform |
| MLR | Good | Model 2 | Eq. (6) | |
| Chaos | Good | Model 0 | | Needs expertise to implement |

Table 9.

Qualitative Overview of the Performances of Various Modelling Strategies

Figure 11.

Model Predictions for Suspended Sediment – Performances of GEP, ANN, MLR, Chaos (closest to observed), and SRC (poor)

Scatter diagrams are also a measure of performance. These are presented in Figure 12, which provides a quantitative basis for concluding that (i) SRC performs poorly and (ii) there is little to choose between the other models, although the performance of ANN stands out.

Figure 12.

Scatter between Modelled and Observed Suspended Sediment Load

The relative performances of the GEP, ANN, MLR and local prediction models are still not discernible from Figure 12 and therefore attention is focused on the differences between the GEP and ANN models with respect to their corresponding observed values. Figure 13 shows the respective results for both the GEP and ANN models; that of ANN is remarkable, as the differences are nearly zero. It may be reported that those of the local prediction model and MLR are very close to that of GEP.

Figure 13.

Performances of the ANN and GEP Models – y-ordinates: observed – modelled values

Due to the importance of the volume of transported sediment, the total predicted values are also compared with the observed values for the testing period and the results are presented in Table 10. The table shows that the traditional SRC model is in error by as much as nearly 50%, but the other models perform well, among which the error in the performance of ANN is the lowest. It is also noted that, despite the good performance of the ANN models, they are not transferrable in the way the GEP models are. The implementation of both the ANN and deterministic chaos models requires considerable expertise.

| Model | Actual Value (ton/year) | Estimated Value (ton/year) | Difference (%) |
| SRC | 1.65E8 | 3.06E8 | +46% |
| GEP | 1.65E8 | 1.65E8 | -0.4% |
| ANN | 1.65E8 | 1.64E8 | -0.3% |
| MLR | 1.65E8 | 1.66E8 | +0.6% |
| Chaos | 1.65E8 | 1.66E8 | +0.7% |

Table 10.

Total Volume of Suspended Sediment Predicted by each of the Models at the Gauging Station for the Mississippi River basin

The chapter presents the performance of the GEP model, as a variation of evolutionary programming, in forecasting the suspended sediment load of the Mississippi River, USA. GEP is just one modelling strategy, and any other relevant strategy is just as valid if its performance is satisfactory. The overall results show that the information contained in the observed data can be treated by the following modelling strategies:

Evolutionary computing: this produced a formula to forecast the future values in terms of recorded values of flows and suspended sediment. The results show that the strategy can be successful in identifying a number of different formulae.

Emulation of the working of the brain: this successfully fitted an inbuilt polynomial to the data. It performs better than the other tested models but is not readily transferrable as it resides in particular software applications.

Regression analysis: this produced a regression equation, according to which the future values would regress towards average recorded values, in spite of the presence of noise.

Deterministic chaos: this produced future values of suspended sediment load by identifying an attractor towards which the system performance would converge even when the internal system behaves erratically.

The only common feature in the above modelling strategies is their use of optimisation techniques. Otherwise, they are greatly different from one another but remarkably, they produce models fit for purpose and can explain the data. Undoubtedly, the data can be explained by many more sets of equations or by other possible strategies. This emphasises that models are just tools and the modelling task is to test the performance of the various models to add confidence to the results. Yet the poor performance of the traditional SRC underlines the fact that a good performance cannot be taken for granted.

A review of the data (in Section 3) shows that the overall contribution of the datapoints in the test period is average; its individual characteristics in terms of kurtosis show that the annual hydrographs are less peaked and flatter, but at the same time the suspended sediment load during the year was significantly high. Thus, the minimum values during this year were significantly above the average, persistent and less dynamic. However, all four modelling strategies coped well with these data peculiarities. If the data during the test period had a more pronounced feature not common during the training period, the various local modelling strategies would be likely to perform poorly, each in its own unique way, and one of the greatest tasks of research in modelling should be investigations to understand these unique features and not to sweep them under the carpet.

A general view projected by the investigation in this chapter is that the performance of modelling techniques must not be the only basis of practical applications. Equal attention must also be paid to the quality of the data used. If the data suffers from inherent uncertainties, no good model will compensate for the inherent shortfalls.

6. Conclusion

This chapter presents an investigation of the performance of Gene Expression Programming (GEP) models of the suspended sediment load of the Mississippi River, USA. The study employs Mississippi River data spanning 26 years, involving both flows and suspended sediment load, of which the first 25 years are used for training and the remaining year for prediction one year into the future. This investigation concurs with past findings that the performance of the sediment rating curve, an empirical technique used widely in practice, can lead to gross errors. This alone underlines the value of other modelling techniques capable of producing reliable results with less than 1% error.

The chapter promotes a pluralist culture of modelling and, although it presents the GEP model as the focus, it also presents the application of other techniques to model the same data. The other models comprise artificial neural networks, multi-linear regression analysis and deterministic chaos. The chapter outlines the modelling strategy underlying each of these techniques and the results show that, in spite of their differences, they produce similar results with less than 1% error. The lowest errors are associated with the artificial neural networks for this set of data, but each of these techniques may be considered reliable. The volume of sediment load is an important management parameter and the associated error was estimated for each model. The results show that the traditional SRC model suffers from gross errors, by as much as 50%, but the other tested models perform well, among which the error in the performance of ANN is the lowest. ANN is noted for its good performance but with the drawback that these models are not transferrable in the way the GEP models are. It is noted that the implementation of ANN requires an ANN platform for further modelling and the deterministic chaos models require considerable expertise.

7. Appendix

Symbols

SRC: Sediment Rating Curve
MLR: Multi-Linear Regression
Qt: discharge series
St: suspended sediment series

MLR:
xi: input terms of the various models (Table 2)
ai: regression coefficients

Chaos:
τ: delay time
m: dimension of the phase-space (embedding dimension)
Cm(r): correlation function for an m-dimensional phase-space (fraction of states closer than r)
C(r): correlation function
H: Heaviside step function
N: number of points on the reconstructed attractor
D2: correlation exponent
Yj: state vectors of dimension m
A: Jacobian matrix
x(t): point in the phase-space (each belonging to a neighbourhood)
r: radius

Appendix I

Table 11. Default Values Employed in Implementing the GEP and ANN Models

| GEP training parameters | Values | ANN training parameters | Values |
| Crossover rate | 0.1 | Goal | Mean Square Error |
| Mutation rate | 0.044 | Epochs | 10-100 |
| Inversion | 0.1 | Training algorithm | trainlm |
| IS Transposition | 0.1 | | |
| RIS Transposition | 0.1 | | |
| 1-point Recombination | 0.3 | | |
| 2-point Recombination | 0.3 | | |
| Gene Recombination | 0.1 | | |
| Gene Transposition | 0.1 | | |
| Population (chromosome) size | 30 | | |
| Head size | 7 | | |
| Number of genes | 3 | | |
| Linking function | Addition | | |
| Random numerical constants | Yes | | |
| Number of generations | 1000 | | |
| Arithmetic functions | (4.a)-(4.f) | | |
| Fitness function | RRSE (Root Relative Squared Error) | | |

Table 11.

Default Parameter Values Used by the Model

References

  1. Aytek, A. & Kisi, O. (2008). A genetic programming approach to suspended sediment modelling. Journal of Hydrology, 351, 288-298.
  2. Cigizoglu, H.K. & Kisi, O. (2006). Methods to improve the neural network performance in suspended sediment estimation. Journal of Hydrology, 317, 221-238.
  3. Cigizoglu, H.K. (2004). Estimation and forecasting of daily suspended sediment data by multi-layer perceptrons. Advances in Water Resources, 27, 185-195.
  4. Cohn, T.A., Caulder, D.L., Gilroy, E.J., Zynjuk, L.D. & Summers, R.M. (1992). The validity of a simple statistical model for estimating fluvial constituent loads: an empirical study involving nutrient loads entering Chesapeake Bay. Water Resources Research, 28(9), 2353-2363.
  5. Farmer, D.J. & Sidorowich, J.J. (1987a). Predicting chaotic time series. Physical Review Letters, 59, 845-848.
  6. Farmer, J.D. & Sidorowich, J.J. (1987b). Exploiting chaos to predict the future and reduce noise. In: Lee, Y.C. (Ed.), Evolution, Learning and Cognition. World Scientific, River Edge, NJ, 277-330.
  7. Ferguson, R.I. (1986). River loads underestimated by rating curves. Water Resources Research, 22(1), 74-76.
  8. Ferreira, C. (2001a). Gene expression programming in problem solving. In: 6th Online World Conference on Soft Computing in Industrial Applications (invited tutorial).
  9. Ferreira, C. (2001b). Gene expression programming: a new adaptive algorithm for solving problems. Complex Systems, 13(2), 87-129.
  10. Fraser, A.M. & Swinney, H.L. (1986). Independent coordinates for strange attractors from mutual information. Physical Review A, 33, 1134-1140.
  11. Ghorbani, M.A., Khatibi, R., Aytek, A., Makarynskyy, O. & Shiri, J. (2010). Sea water level forecasting using genetic programming and comparing the performance with artificial neural networks. Computers & Geosciences, 36(5), 620-627.
  12. Glysson, G.D. (1987). Sediment transport curves. US Geological Survey Open-File Report 87-218.
  13. Grassberger, P. & Procaccia, I. (1983). Characterization of strange attractors. Physical Review Letters, 50(5), 346-349.
  14. Hegger, R., Kantz, H. & Schreiber, T. (1999). Practical implementation of nonlinear time series methods: the TISEAN package. Chaos, 9, 413-435.
  15. Itoh, K. (1995). A method for predicting chaotic time-series with outliers. Electronics and Communications in Japan, 78(5), 44.
  16. Jain, S.K. (2001). Development of integrated sediment rating curves using ANNs. Journal of Hydraulic Engineering, ASCE, 127(1), 30-37.
  17. Kennel, M., Brown, R. & Abarbanel, H.D.I. (1992). Determining embedding dimension for phase-space reconstruction using a geometrical construction. Physical Review A, 45, 3403-3411.
  18. Khatibi, R. (2011). Evolutionary systemic modelling for flood risk management practices. Journal of Hydrology, 401(1-2), 36-52. (http://dx.doi.org/10.1016/j.jhydrol.2011.02.006)
  19. Kisi, O. (2004b). Multi-layer perceptrons with Levenberg-Marquardt optimization algorithm for suspended sediment concentration prediction and estimation. Hydrological Sciences Journal, 49(6), 1025-1040.
  20. Kisi, O. (2005). Daily river flow forecasting using artificial neural networks and auto-regressive models. Turkish Journal of Engineering and Environmental Sciences, 29.
  21. Kisi, O. (2005b). Suspended sediment estimation using neuro-fuzzy and neural network approaches. Hydrological Sciences Journal, 50(4), 683-696.
  22. Koza, J.R. (1992). Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, Cambridge.
  23. Miller, C.R. (1951). Analysis of flow-duration, sediment-rating curve method of computing sediment yield. US Bureau of Reclamation Report, Denver, Colorado.
  24. Porporato, A. & Ridolfi, L. (1997). Nonlinear analysis of river flow time sequences. Water Resources Research, 33, 1353-1367.
  25. Rumelhart, D.E. & McClelland, J.L. (Eds.) (1986). Parallel Distributed Processing: Explorations in the Microstructure of Cognition. MIT Press, Cambridge.
  26. Sivakumar, B. (2002). A phase-space reconstruction approach to prediction of suspended sediment concentration in rivers. Journal of Hydrology, 258, 149-162.
  27. Sivakumar, B. & Jayawardena, A.W. (2002). An investigation of the presence of low-dimensional chaotic behaviour in the sediment transport phenomenon. Hydrological Sciences Journal, 47(3), 405-416.
  28. Sivakumar, B. & Jayawardena, A.W. (2003). Sediment transport phenomenon in rivers: an alternative perspective. Environmental Modelling and Software, in press.
  29. Sivakumar, B. & Wallender, W. (2005). Predictability of river flow and suspended sediment transport in the Mississippi River basin: a non-linear deterministic approach. Earth Surface Processes and Landforms, 30, 665-677.
  30. Takens, F. (1981). Detecting strange attractors in turbulence. In: Rand, D.A. & Young, L.S. (Eds.), Dynamical Systems and Turbulence, Lecture Notes in Mathematics 898, Springer-Verlag, Berlin, 366-381.
  31. Tayfur, G. & Guldal, V. (2006). Artificial neural networks for estimating daily total suspended sediment in natural streams. Nordic Hydrology, 37, 69-79.
  32. Tayfur, G. (2002). Artificial neural networks for sheet sediment transport. Hydrological Sciences Journal, 47(6), 879-892.
  33. Theiler, J. (1986). Spurious dimension from correlation algorithms applied to limited time-series data. Physical Review A, 34, 2427-2432.
  34. Thomas, R.B. (1988). Monitoring baseline suspended sediment in forested basins: the effects of sampling on suspended sediment rating curves. Hydrological Sciences Journal, 33(5), 499-514.
  35. Ustoorikar, K. & Deo, M.C. (2008). Filling up gaps in wave data with genetic programming. Marine Structures, 21, 177-195.
  36. Wilks, D.S. (1991). Representing serial correlation of meteorological events and forecasts in dynamic decision-analytic models. Monthly Weather Review, 119, 1640-1662.
  37. Wang, Y.-M., Traore, S. & Kerh, T. (2008). Monitoring event-based suspended sediment concentration by artificial neural network models. WSEAS Transactions on Computers, 7(5), May 2008.
  38. Walling, D.E. (1977). Limitations of the rating curve technique for estimating suspended sediment loads, with particular reference to British rivers. In: Erosion and Solid Matter Transport in Inland Waters (Proceedings of the Paris Symposium, July 1977), IAHS Publ. 122, 34-38.
