Derivation of Sediment Transport Models for Sand Bed Rivers from Data-Driven Techniques

Hydraulic engineers and geologists have studied sediment transport in natural streams and rivers for centuries due to its importance in understanding river hydraulics. Erosion and deposition of sediment alter the hydraulic geometry of the channel and may cause an increase in flood frequency, as well as navigation problems from excessive deposition. Moreover, discharge of industrial and agricultural residuals makes sediment particles the primary transporters of toxic substances that contaminate aquatic systems. High sediment discharge peaks may be destructive for fish habitats and ecosystems, and long-term sediment yield affects the design and function of structures such as dams and reservoirs, as well as coastal erosion at the basin outlet.


Introduction
Hydraulic engineers and geologists have studied sediment transport in natural streams and rivers for centuries due to its importance in understanding river hydraulics. Erosion and deposition of sediment alter the hydraulic geometry of the channel and may cause an increase in flood frequency, as well as navigation problems from excessive deposition. Moreover, discharge of industrial and agricultural residuals makes sediment particles the primary transporters of toxic substances that contaminate aquatic systems. High sediment discharge peaks may be destructive for fish habitats and ecosystems, and long-term sediment yield affects the design and function of structures such as dams and reservoirs, as well as coastal erosion at the basin outlet.
Sediment transport in sand bed rivers and natural streams is a complex process. For its quantification, numerous sediment transport functions have been introduced over the years, based on different concepts. There are four basic approaches used in the derivation of sediment transport formulae (Yang, 1977): 1) the deterministic approach, which obeys the laws of physics and is usually based on an independent variable such as slope, shear stress, stream power, or unit stream power; 2) the regression approach, which emerged from the view that sediment transport is so complex a phenomenon that it cannot be described by a single dominant variable; 3) the pioneering probabilistic approach of Einstein (1942), which highlighted the complexity and the stochastic nature of sediment transport in a manner rather too laborious for common engineering usage; and 4) the regime approach, which was developed as a result of long-term measurements under equilibrium conditions.
The results emerging from all these concepts usually differ drastically from each other and from the measured data. Consequently, none of the published sediment transport equations has gained universal acceptance in confidently predicting sediment transport rates, especially in rivers. An alternative approach may be the use of data-driven modeling, which is especially attractive for modeling processes in which knowledge of the physics of the problem is inadequate. The scope of this chapter is the utilization of some widely used data-driven techniques, namely artificial neural networks (ANNs) and symbolic regression based on genetic programming (GP), in order to determine the dominant dimensionless variables that can be used as inputs in such schemes and to generate sediment transport models for natural streams and rivers that are based solely on the data, without presuming anything about their structure or their degree of nonlinearity.
For the proper training of a data-driven scheme, data of good quality are needed. Since field measurements accommodate the peculiarities of the considered streams and the inclusion of noise in the measurement process is inevitable, the training data comprise solely laboratory flume measurements. The testing data, however, comprise exclusively field measurements, in order to apply the models to actual problems. With this setup, the basic trend of the function can be approximated, and the derived model will be applicable to the data range for which it is trained. Regarding the efficiency of scaling in the sediment transport context, model-prototype comparisons have shown that correspondence of behavior is often well beyond expectations, as attested by the successful operation of many structures designed from model tests (Pugh, 2008). This study exhibits the potential of machine learning in capturing functions with physical meaning, since the training and testing sets have significant differences in their statistical distributions. The determination of the input variables that best define the problem is accomplished by assessing some common independent dimensionless variables based on their correlation with sediment concentration, with the aid of ANNs, on the basis of a tentative trial-and-error procedure. Subsequently, ANNs and symbolic regression are utilized in order to derive equations from the selected input combinations.

Data mining and data-driven techniques in the context of sediment transport
The recorded observations of a system can be further analyzed in the search for the information they encode. Such automated search for models accurately describing data constitutes a direction that can be identified as that of data mining. Data mining and knowledge discovery aim at providing tools to facilitate the conversion of data into a number of forms, such as equations, which provide a better understanding of the process generating or producing these data. These models, combined with the already available understanding of the physical processes, result in an improved understanding and novel formulations of physical laws and improved predictive capability (Babovic, 2000). Data-driven modeling (DDM) and machine learning techniques used for predictions are essentially modernized regression schemes with the significant advantage over classical regression schemes that they do not have to presume the structure of the nonlinear model they attempt to fit. They are based on simple ideas, usually inspired by the way nature works, and their only prerequisite is a good, although usually large, data set. The data are usually divided into three sets, namely the training, validation and testing sets. The training set trains the scheme on the basis of a minimization criterion, and the validation set is used as a stopping criterion for training to avoid overfitting to the data used for training. The test set is used to evaluate the generated model. The minimization criterion, on the basis of which the training process takes place, is usually a sum of errors between the computed outputs and the actual measured data. The optimization model that is used for the minimization depends on the data-driven scheme and may be deterministic as well as stochastic.
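The three-way split described above can be sketched as follows; the fractions and the random shuffle are illustrative assumptions (the chapter itself splits by data origin, with flume data for training and field data for testing):

```python
import random

def split_dataset(records, train_frac=0.6, val_frac=0.2, seed=0):
    """Shuffle and split records into training, validation and test sets.
    The training set drives the minimization, the validation set serves as
    a stopping criterion, and the test set evaluates the final model."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_train = int(train_frac * len(shuffled))
    n_val = int(val_frac * len(shuffled))
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

records = list(range(100))
train, val, test = split_dataset(records)
print(len(train), len(val), len(test))  # 60 20 20
```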
Inferring models from data is an activity of deducing a closed-form explanation based solely on observations. These observations, however, represent a limited source of information. The question emerges as to how this limited flow of information from a physical system to the observer can result in the formation of a model that is complete, in the sense that it can account for the entire range of phenomena encountered within the physical system in question and describe even data outside the range of previously encountered observations. The present efforts are characterized by the search for a model that is capable of acquiring semantics from syntax. Clearly, every model has its own syntax. Artificial neural networks have the syntax of a network of interconnected neurons, whereas genetic programming has the syntax of treelike networks of symbolic expressions in reverse Polish notation. The question is whether such a syntax can capture the semantics of the system it attempts to model (Babovic, 2000). Witten et al. (2011) argued that the universal learner is an idealistic fantasy, since experience has shown that no single machine learning scheme is appropriate to all data mining problems. Certain classes of model syntax may be inappropriate as a representation of a physical system. One may choose the model whose representation is complete, in the sense that a sufficiently large model can capture the data's properties to a degree of error that decreases with an increase in the model size. For example, one may decide to expand Taylor or Fourier series and decrease the error by adding terms to the series. However, in these cases, the semantics almost certainly would not be captured (Babovic, 2000).

Artificial neural networks
The ANN is the most widely used data-driven method. Since abundant information on ANNs is available in the literature [e.g. Haykin (2009)], only a brief description is provided here, with regard only to the methodology applied herein. ANN is a broad term covering a large variety of network architectures and structures. The most common of them, and the one utilized herein, is the multilayer feedforward network. This type of network is a parallel distributed information processing system that consists of the input layer, the hidden layer(s), and the output layer, and the information flows only in the forward direction. Each layer comprises a number of neurons, each of which is connected with those in the successive layer through synaptic weights that determine the strength of the connections. The hidden and output layer neurons have an inherent activation function, which accommodates the nonlinear transformation of the input data to the targets. In this study, the neurons of the hidden layer(s) have the hyperbolic tangent activation function, which squashes the data into (-1, 1), and the single neuron of the output layer has the linear activation function, which simply returns the value passed to it. The input data are scaled to the range (-0.9, 0.9) because, if the values are scaled to the extreme limits of the transfer function, the size of the weight updates is extremely small and flat-spots in training are likely to occur (Maier and Dandy, 2000).
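A minimal sketch of such a network's forward pass, with a tanh hidden layer, a linear output neuron, and input scaling to (-0.9, 0.9); the 4-6-1 architecture and random weights are arbitrary placeholders, and no training is performed here:

```python
import numpy as np

def scale_inputs(x, lo=-0.9, hi=0.9):
    """Linearly map each input column to (lo, hi), short of the tanh
    saturation limits, to avoid flat-spots during training."""
    xmin, xmax = x.min(axis=0), x.max(axis=0)
    return lo + (hi - lo) * (x - xmin) / (xmax - xmin)

def forward(x, w1, b1, w2, b2):
    """One hidden layer with tanh activation, single linear output neuron."""
    h = np.tanh(x @ w1 + b1)   # hidden layer: squashes values into (-1, 1)
    return h @ w2 + b2         # output layer: identity activation

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 5.0, size=(10, 4))          # 10 samples, 4 input variables
xs = scale_inputs(x)
w1 = rng.normal(size=(4, 6)); b1 = np.zeros(6)   # placeholder 4-6-1 architecture
w2 = rng.normal(size=(6, 1)); b2 = np.zeros(1)
y = forward(xs, w1, b1, w2, b2)
print(y.shape)  # (10, 1)
```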
The training process of an ANN may be viewed as a "curve fitting" problem, and the network itself may be considered simply as a nonlinear input-output mapping (Haykin, 2009). Supposing that a deterministic relation between sediment load concentration and some specific independent variables exists, a multilayer feedforward ANN is able to approximate this function if it includes at least one hidden layer with a sufficient number of neurons (Hornik et al., 1989). However, this universal approximation theorem does not specify whether a single hidden layer is optimal in the sense of learning time, ease of implementation, or (more importantly) generalization (Haykin, 2009). As a result, several network architectures are tested in order to determine the optimal one. Although the implementation of ANNs is extensive and successful in water resources applications [e.g. Maier and Dandy (2000)] and in the prediction of daily suspended sediment data [e.g. Cigizoglu (2004)], it is quite sparse in the prediction of sediment concentration from other independent hydraulic variables. Nagy et al. (2002) reviewed some widely used sediment discharge equations and selected some of the dominant dimensionless variables of the problem as input neurons for an ANN that was trained and tested with field data. Bhattacharya et al. (2005) used dimensionless parameters obtained from the Engelund and Hansen (1967) formula in order to train and test an ANN with a mixture of flume and field data, whilst in similar studies Bhattacharya et al. (2004, 2007) scrutinized further the possible input parameters based on the same data. Yang et al. (2009) chose as input variables combinations of dimensional quantities and applied them to field data. All of these works used the back-propagation algorithm (Rumelhart et al., 1986) for training the ANNs and compared the results with some of the most popular sediment transport formulae. In all cases, the ANNs generated superior results.

Symbolic regression based on genetic programming
Many seemingly different problems in artificial intelligence, symbolic processing and machine learning can be viewed as requiring the discovery of a computer program that produces some desired outputs for particular inputs. The process of solving these problems can be reformulated as a search for a highly fit individual computer program in the space of possible programs. GP extends the concept of genetic algorithms and provides a way to search for this fittest individual computer program (Koza, 1992). GP works by randomly generating a population of computer programs (represented by tree structures), and each individual program in the population is measured in terms of how well it performs in the particular problem environment. This measure is called the fitness measure (Koza, 1992) and usually is a sum of errors between the outputs predicted by the program and the actual ones. Initially, the generated computer programs will have exceedingly poor fitness. Nonetheless, some individuals in the population will turn out to be somewhat fitter than others. These differences in performance are subsequently exploited. The Darwinian principle of reproduction and survival of the fittest and the genetic operations of sexual recombination (crossover) and mutation are used to create a new offspring population of individual programs from the current population. The reproduction principle involves the selection, in proportion to fitness, of a computer program from the current population that survives the generation by being copied into the new population. The genetic process of sexual recombination is used to create new offspring programs from two parental programs selected in proportion to fitness. The parental programs are typically of different sizes and shapes. The offspring programs are composed of subexpressions from their parents and are, typically, of different sizes and shapes as well. Intuitively, if two programs are somewhat effective in solving a problem, then some of their parts probably have some merit. By recombining randomly chosen parts of somewhat effective programs, the result may be the production of new programs that are even fitter in solving the problem (Koza, 1992). Mutation serves the potentially important role of restoring lost diversity in a population by replacing random subtrees of variable length with other random ones. Its purpose is to prevent premature convergence to unsatisfactory solutions. After the operations of reproduction, crossover and mutation are performed on the current population, the offspring population replaces the old one. Each individual in the new population of programs is then measured for fitness, and the process is iterated for a predetermined number of generations. This algorithm produces populations of programs which, over many generations, tend to exhibit increasing average fitness in dealing with their environment. The individual computer program that performs best across the evolved generations is considered to be the fittest.
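The generate-evaluate-recombine loop described above can be sketched for a toy symbolic regression problem; the function set, tree depths, population size and rates below are illustrative choices, not those of any study cited here:

```python
import random, operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def random_tree(depth, rng):
    """Grow a random expression tree over terminals {x, constants} and {+, -, *}."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(['x', 1.0, 2.0, 3.0])
    return (rng.choice(list(OPS)), random_tree(depth - 1, rng), random_tree(depth - 1, rng))

def evaluate(tree, x):
    """Execute a program tree for a given input value x."""
    if isinstance(tree, tuple):
        op, left, right = tree
        return OPS[op](evaluate(left, x), evaluate(right, x))
    return x if tree == 'x' else tree

def fitness(tree, data):
    """Sum of squared errors; lower values mean fitter programs."""
    return sum((evaluate(tree, x) - y) ** 2 for x, y in data)

def nodes(tree):
    """Preorder traversal of all subtrees."""
    yield tree
    if isinstance(tree, tuple):
        yield from nodes(tree[1])
        yield from nodes(tree[2])

def replace(tree, idx, repl, i=None):
    """Copy of tree with the subtree at preorder index idx swapped for repl."""
    i = [0] if i is None else i
    if i[0] == idx:
        i[0] += 1
        return repl
    i[0] += 1
    if isinstance(tree, tuple):
        op, left, right = tree
        return (op, replace(left, idx, repl, i), replace(right, idx, repl, i))
    return tree

def tournament(pop, data, rng, k=3):
    """Fitness-proportionate-style selection via a small random tournament."""
    return min(rng.sample(pop, k), key=lambda t: fitness(t, data))

rng = random.Random(1)
data = [(x, x * x + x + 1.0) for x in range(-5, 6)]    # target: x^2 + x + 1
pop = [random_tree(3, rng) for _ in range(30)]
best = min(pop, key=lambda t: fitness(t, data))
for generation in range(15):
    offspring = []
    for _ in range(len(pop)):
        a = tournament(pop, data, rng)
        b = tournament(pop, data, rng)
        donor = rng.choice(list(nodes(b)))             # crossover: take a subtree of b...
        child = replace(a, rng.randrange(len(list(nodes(a)))), donor)  # ...graft into a
        if rng.random() < 0.1:                         # mutation: random subtree replacement
            child = replace(child, rng.randrange(len(list(nodes(child)))), random_tree(2, rng))
        offspring.append(child)
    pop = offspring                                    # offspring replace the old population
    champion = min(pop, key=lambda t: fitness(t, data))
    if fitness(champion, data) < fitness(best, data):
        best = champion                                # track the fittest program so far
print(fitness(best, data))
```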
A multigene individual consists of multiple genes, each of which is a GP-evolved tree. In multigene symbolic regression, each prediction ŷ of the output variable y is formed linearly by the weighted output of each of the genes plus a bias term (Searson, 2009). Each tree is a function of the input variables. Mathematically, a multigene regression model can be written as

ŷ = d₀ + d₁ × tree₁ + d₂ × tree₂ + … + d_M × tree_M (1)

where d₀ is the bias (offset) term, d₁, …, d_M are the gene weights, and M is the number of genes comprising the current individual. The gene weights are automatically determined by a least squares procedure for each multigene individual. The number and structure of the trees is evolved automatically during a run (subject to user-defined constraints) using the training data. Hence, multigene symbolic regression combines the power of classical linear regression with the ability to capture nonlinear behavior without needing to pre-specify the structure of the nonlinear model. During a run, genes are acquired and deleted using a tree crossover operator called two-point high level crossover. This allows the exchange of genes between individuals, and it is used in addition to the "standard" GP recombination operators (Searson et al., 2010).
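Given the outputs of the evolved gene trees, the bias d₀ and gene weights of Eq. (1) can be found by ordinary least squares, as this sketch illustrates with two stand-in "genes" (the gene functions and target are invented for the example):

```python
import numpy as np

# Suppose two evolved gene trees produce outputs g1(x), g2(x) for inputs x.
x = np.linspace(0.1, 2.0, 20)
g1 = np.sin(x)                                # stand-ins for GP-evolved gene outputs
g2 = x ** 2
y = 3.0 + 1.5 * np.sin(x) - 0.5 * x ** 2      # target, known here for illustration

# Least-squares fit of bias d0 and gene weights d1, d2:  y ~ d0 + d1*g1 + d2*g2
G = np.column_stack([np.ones_like(x), g1, g2])
d, *_ = np.linalg.lstsq(G, y, rcond=None)
print(np.round(d, 3))  # ~ [3.0, 1.5, -0.5]
```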
GP has been implemented in hydraulic engineering in recent years with very good results. Babovic and Abbott (1997) applied GP to some representative problems, while Babovic and Keijzer (2000) highlighted the usage of GP as a data mining tool in which the human expert interprets models suggested by the computer, aiming at knowledge discovery. Minns (2000) suggests that the symbolic expressions obtained from GP may be less accurate than ANNs in mapping the experimental data; however, these expressions may be more easily examined in order to provide insights into the processes that created the data. In the context of sediment transport, Zakaria et al. (2010) applied gene-expression programming, which is similar to multigene symbolic regression, to predict the total bed material load for rivers using dimensional quantities from field data, and outperformed some of the traditional sediment load formulae. Azamathulla et al. (2010) utilized GP in order to predict the scour depth at bridge piers and obtained results superior to those of ANNs and regression equations.

Sediment transport
Sediment load is the material being transported, and it can be divided into wash load and bed material load. The wash load is the fine material of sizes not found in appreciable quantities on the bed; it is not considered to be dependent on the local hydraulics of the flow, but instead depends on the upstream supply. As a practical definition, the wash load is considered to be the fraction of the sediment load finer than 0.062 mm. The bed material load is the material of sizes found in appreciable quantities on the bed, and it can be conceptually divided into the bed load (the portion of the load that moves near the bed) and the suspended load (the portion of the load that moves in suspension), although the division is not precise. The consequent difficulty of separating bed load from turbulence-dominated suspended load leads to a total load definition for the quantification of sediment transport in sand bed rivers. A dimensionless, commonly used measure for sediment quantification is concentration by weight in parts per million (ppm), which is the ratio of the sediment discharge to the discharge of the water-sediment mixture, both expressed in terms of mass per unit time, here called C_t. This can be given as

C_t = 10⁶ × (sediment mass discharge) / (mass discharge of the water-sediment mixture) (2)

For practical reasons, the density of the water-sediment mixture is taken to be approximately equivalent to the density of water. This approximation will cause errors of less than one percent for concentrations less than 16000 ppm (Brownlie, 1981a).
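Brownlie's density approximation can be checked numerically; assuming quartz sediment (density 2650 kg/m³), the error of taking the mixture density equal to that of water stays below one percent up to about 16000 ppm:

```python
RHO_W, RHO_S = 1000.0, 2650.0      # densities of water and quartz, kg/m^3

def mixture_density(c_ppm):
    """Density of a water-sediment mixture for a concentration by weight in ppm."""
    c = c_ppm / 1e6                                  # mass fraction of sediment
    return 1.0 / ((1.0 - c) / RHO_W + c / RHO_S)

for c_ppm in (1000, 16000):
    rho_m = mixture_density(c_ppm)
    error_pct = 100.0 * (rho_m - RHO_W) / rho_m      # error of using RHO_W instead
    print(c_ppm, round(rho_m, 1), round(error_pct, 2))
```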
The parameters governing a sediment transport process can be described by (Yalin, 1977)

C_t = f(V, h, S, d₅₀, ρ, ρ_s, ν, g) (3)

Since the data-driven schemes are trained and validated with flume data but tested with field data, and in order to ensure dimensional consistency in the derived models, the input and output variables should be dimensionless. Instead of applying dimensional analysis and Buckingham's π theorem, the independent variables of Eq. (3) will be introduced through some common and well-known dimensionless variables that have physical meaning and have been utilized for the creation of various sediment transport formulae. These variables are directly related to quantities the engineer can readily visualize and measure; they are listed as follows and summarized in Table 1.
Froude number, Fr = V/√(gh), which gives a measure of the ratio of inertial forces to gravitational forces of the flow. For the flume data, the depth is taken as the hydraulic radius of the bed, which is equivalent to the mean depth of an infinitely wide channel with the same slope, velocity and bed friction as the flume, and is calculated according to the sidewall correction of Vanoni and Brooks (1957). This elaboration is needed because in flume experiments the sand-covered bed will generally be much rougher than the flume walls, and thus will be subjected to higher shear stresses.
Reynolds number, Re = Vh/ν, which gives a measure of the ratio of inertial forces to viscous forces of the flow. Shear Reynolds number, Re* = U*d₅₀/ν, the physical meaning of which is the ratio of particle size to the thickness of the viscous sublayer δ, because δ is proportional to ν/U*.
Dimensionless shear stress or Shields number, τ* = U*²/[(ρ_s/ρ − 1)g d₅₀].

Dimensionless grain diameter. It is a dimensionless expression for grain diameter that can be derived by eliminating shear stress from the two Shields parameters (Shields, 1936); or from the drag coefficient and Reynolds number of a settling particle, by eliminating the settling velocity; or dimensionally, with the immersed weight of an individual grain, fluid density, and viscosity as the variables (Ackers and White, 1973). The dimensionless grain diameter is, therefore, generally applicable to coarse, transitional, and fine sediments and is the cube root of the ratio of immersed weight to viscous forces. Thus

d* = d₅₀[(ρ_s/ρ − 1)g/ν²]^(1/3)

Dimensionless stream power. The power equation appears first to have been applied to sediment transport by Rubey (1933) and later by Velikanov (1955). It was again suggested by Knapp (1938), and was later introduced by Bagnold (1956) in a paper wherein the flowing fluid was regarded as a transporting machine. The available power supply, or time rate of energy supply, to unit length of a stream is the time rate of liberation in kinetic form of the liquid's potential energy as it descends the gravity slope S.
Denoting this power by Ω, Bagnold (1966) derived the formula

Ω = ρgQS

The mean available power supply to the column of fluid over unit bed area, to be denoted by ω, is therefore

ω = Ω/b = ρgqS = τ₀V

In order to define a dimensionless transport parameter that encapsulates Bagnold's view of sediment transport as a stream power related phenomenon, Eaton and Church (2011) developed the following formula

ω* = ω/[ρ((ρ_s/ρ − 1)g d₅₀)^(3/2)]

Dimensionless unit stream power. Yang (1972) reviewed the basic assumptions used in the derivation of conventional sediment transport equations. He concluded that the assumption that sediment transport rate could be determined from water discharge, average flow velocity, energy slope, or shear stress is questionable. Consequently, the generality and applicability of any equation derived from one of these assumptions is also questionable. The rate of energy per unit weight of water, available for transporting water and sediment in an open channel with reach length x and total drop Y, is

dY/dt = (dx/dt)(dY/dx) = VS

Yang (1972) defines the unit stream power as the velocity-slope product VS and argues that the rate of work being done by a unit weight of water in transporting sediment must be directly related to the rate of work available to a unit weight of water. Thus, total sediment concentration or total bed material load must be directly related to unit stream power. While Bagnold (1966) emphasized the power that applies to a unit bed area, Yang (1972, 1973) emphasized the power available per unit weight of fluid to transport sediments. The fact that sediment discharge or concentration is dominated by the unit stream power has been confirmed by Vanoni (1978). Yang (1977, 2003) argued that total sediment discharge correlates best with unit stream power based on the plots of Figure 1. Nonetheless, equations based on the other hydraulic variables have been used successfully as well.
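The dimensionless variables above can be computed for a flow record as in the following sketch; the numerical values of viscosity, the quartz relative density, and the sample flow conditions are illustrative assumptions:

```python
import math

G = 9.81          # gravitational acceleration, m/s^2
NU = 1.14e-6      # kinematic viscosity of water at about 15 degrees C, m^2/s
S_REL = 2.65      # relative density of quartz sediment

def dimensionless_variables(V, h, S, d50, w_s):
    """Candidate input variables of Table 1 for one flow record.
    V: mean velocity (m/s), h: depth or hydraulic radius (m),
    S: energy slope (-), d50: median grain size (m), w_s: fall velocity (m/s)."""
    u_star = math.sqrt(G * h * S)                            # shear velocity
    return {
        'Fr':       V / math.sqrt(G * h),                    # Froude number
        'Re':       V * h / NU,                              # Reynolds number
        'Re_star':  u_star * d50 / NU,                       # shear Reynolds number
        'tau_star': u_star ** 2 / ((S_REL - 1) * G * d50),   # Shields number
        'd_star':   d50 * ((S_REL - 1) * G / NU ** 2) ** (1 / 3),  # dimensionless grain diameter
        'VS_ws':    V * S / w_s,                             # dimensionless unit stream power (Yang)
    }

vals = dimensionless_variables(V=1.0, h=1.5, S=0.0005, d50=0.0004, w_s=0.05)
for name, v in vals.items():
    print(f'{name}: {v:.4g}')
```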

Data preparation and determination of the inputs
Since data-driven techniques require a large number of quality data representing a wide spectrum of the considered problem in order to be trained efficiently, the database assembled by Brownlie (1981b) is utilized. Brownlie's (1981b) database contains 7027 records (5263 laboratory records and 1764 field records) in 77 data files. These data were subjected to a screening process similar to the one Brownlie (1981a) used for the derivation of his formula. Firstly, the measurements that were not verified by Brownlie, or were incorrect or incomplete, were removed. Secondly, because only flows with sand beds were considered, median particle sizes were limited to values between 0.062 mm and 2.0 mm. To avoid samples with large amounts of gravel or fine, cohesive material, geometric standard deviations were restricted to values smaller than 5, and some other constraints were imposed in order to reduce sidewall effects, eliminate shallow water effects, and overcome accuracy problems associated with low sediment concentration. In addition, only flume measurements with uniform flows were considered, and supercritical flows were removed, since subcritical flows usually prevail in natural sand bed rivers. Finally, the measurements with specific gravity outside the quartz density range were neglected, as were measurements with extreme temperature values. Wherever the temperature was missing, a value of 15 °C was used for the calculation of kinematic viscosity. For the laboratory data, the sidewall correction of Vanoni and Brooks (1957) was utilized to adjust the hydraulic radius to eliminate the effects of the flume walls. If sediment concentration is correlated with velocity, however, the sidewall correction will be of little use. These restrictions are shown in Table 2.
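A sketch of such a screening pass, using hypothetical record fields and only a subset of the restrictions (the thresholds follow the text; the field names are invented, and Brownlie's compilation uses its own codes):

```python
records = [
    {'d50_mm': 0.30, 'sigma_g': 1.6, 'Fr': 0.5, 'sg': 2.65, 'temp_C': None},
    {'d50_mm': 2.50, 'sigma_g': 1.4, 'Fr': 0.4, 'sg': 2.65, 'temp_C': 20.0},  # gravel: out
    {'d50_mm': 0.50, 'sigma_g': 6.0, 'Fr': 0.6, 'sg': 2.65, 'temp_C': 18.0},  # poorly sorted: out
    {'d50_mm': 0.20, 'sigma_g': 2.0, 'Fr': 1.3, 'sg': 2.65, 'temp_C': 15.0},  # supercritical: out
]

def passes_screening(r):
    """Subset of the Table 2 restrictions: sand sizes only, limited gradation,
    subcritical flow, quartz density range."""
    return (0.062 <= r['d50_mm'] <= 2.0
            and r['sigma_g'] < 5.0
            and r['Fr'] < 1.0
            and 2.6 <= r['sg'] <= 2.7)

kept = [r for r in records if passes_screening(r)]
for r in kept:
    if r['temp_C'] is None:
        r['temp_C'] = 15.0    # default temperature for the viscosity calculation
print(len(kept))  # 1
```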
Since measurements in natural streams and rivers are notoriously difficult, and sometimes inaccurate, and since the inclusion of field data in the training set would result in a model applicable only to rivers similar to those from which the data were obtained, field data are excluded from the training set. Consequently, the training set consists solely of laboratory flume data, so that the noise embedded in the training set is minimized. The testing set, however, comprises exclusively field data in order to test the derived mathematical models on actual problems that occur in nature. With this technique, the generated models will have general applicability to the data range for which they are trained. The final database consists of 984 laboratory records and, due to the data-sensitive nature of DDM, 600 field records that lie within the range of the laboratory records constituting the training set.

Further pruning of the outliers in the training dataset and the subsequent increase of data homogeneity would be beneficial for the training procedure; however, this would be at the expense of the amount of training data, which is already significantly reduced from the screening process. Since most DDM methods perform well when the data have a distribution that is close to normal (Bhattacharya et al., 2005), a log-transformation of the input and output variables of all datasets was applied, so that the distributions of the transformed variables were closer to normal. Figure 2 depicts the distribution of the flume sediment concentrations for the original and the log-transformed values.

For the creation of the training and validation sets, the available 984 laboratory measurements were placed in descending order with respect to sediment concentration, and for every three successive measurements that were picked for the training set, the fourth one was selected for the validation set. This procedure was iterated for all the laboratory data, and the resulting training and validation sets comprise 739 and 245 measurements, respectively. The 600 field measurements constitute the test set. Table 3 shows some statistical measures of the potential variables of these sets. Table 4 shows the datasets from which the data used in this study were obtained and some representative values of each set. The abbreviations used in Table 4 are the same as those Brownlie (1981b) used in his data compilation; consequently, all the references to the original datasets may be obtained from that study.

Data-driven techniques can be used for data mining, since the only prerequisite for their function is the determination of the input parameters, without the need to predefine the structure of the model or the degree of nonlinearity. The determination of the input parameters for the data-driven schemes will be made with a tentative assessment through a trial-and-error procedure. The correlation coefficient

r = Σ(Xᵢ − X̄)(Yᵢ − Ȳ) / √[Σ(Xᵢ − X̄)² Σ(Yᵢ − Ȳ)²]

has been employed in order to reveal any existing linear dependence in log-log plots between sediment concentration and any of the variables listed in Table 1, where Y denotes sediment concentration and X denotes the independent variable. Table 5 shows the correlation coefficient for log-log plots for the flume and field data of Tables 4a and 4b.
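The descending-order, three-to-one interleaving can be sketched as below; note that a strict 3:1 cycle over 984 records yields counts of 738 and 246, so the chapter's reported 739/245 evidently handles the final cycle slightly differently:

```python
def interleaved_split(records, key, train_per_cycle=3):
    """Order records by descending `key` and, in each cycle of four,
    send three to the training set and the fourth to the validation set."""
    ordered = sorted(records, key=key, reverse=True)
    train, val = [], []
    for i, rec in enumerate(ordered):
        if i % (train_per_cycle + 1) < train_per_cycle:
            train.append(rec)
        else:
            val.append(rec)
    return train, val

concentrations = [float(c) for c in range(984)]   # stand-in for 984 flume records
train, val = interleaved_split(concentrations, key=lambda c: c)
print(len(train), len(val))  # 738 246
```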
From the proposed techniques, the trial-and-error process will be accomplished with the aid of ANNs, due to their speed; after the determination of the most promising combinations that may serve as an input layer, the other data-driven techniques will be implemented as well. The findings shown in Table 5 partially agree with the diagrams depicted in Figure 1, since sediment discharge is best correlated with unit stream power and stream power, both for laboratory and for field data.

After the tentative assessment, based on ANNs, of several input combinations, the most potent ones, which will be applied to the data-driven schemes, seem to be those listed in Table 6. These combinations include the independent variables of Eq. (3) and others that are relatively easily measured and commonly used in engineering. It is noteworthy that all combinations comprise the dimensionless grain diameter and the Froude number, among others. Whilst the Froude number gives a measure of the ratio of inertial forces to gravitational forces of the flow and is a commonly used variable in hydraulic engineering, the potential usage of the dimensionless grain diameter is twofold. Firstly, it introduces kinematic viscosity and median grain diameter, and secondly it provides homogeneity in the input data. The necessity for the provided homogeneity can be seen from combination (a), where the shear Reynolds number, which essentially includes the dimensionless grain diameter, is included as well. The absence of either of these two terms from combination (a) has detrimental effects on the predictive capability of the generated model. The other variables for the combinations examined herein are those on which most sediment transport formulae rely heavily, namely dimensionless unit stream power, dimensionless stream power and dimensionless shear stress, and they are best correlated with sediment concentration, as shown in Table 5. For combination (a), Yang's dimensionless unit stream power was preferred to Vanoni's because, despite the fact that the calculation of fall velocity may be problematic, it reduced significantly the sum of errors between calculated and observed values. The other two combinations, (b) and (c), comprise just three variables, because shear is embedded in the dimensionless stream power and the dimensionless shear stress, respectively. Furthermore, there seems to be no other potential input combination besides those listed in Table 6, since any other combination tested gave results that declined by orders of magnitude.

Table 6. Input combinations that will be applied to the data-driven schemes.

Applications and results
The potential of training a DDM scheme solely with flume data and subsequently applying it to a test set comprising exclusively field data has been shown in Kitsikoudis et al. (2012a, 2012b), where ANNs and symbolic regression, respectively, were utilized for the prediction of sediment concentration in sand bed rivers. In these studies, however, the data were not subjected to elaboration and screening, in order to demonstrate the potential modeling ability of this technique with crude data. As a result, the input data were kept in large numbers, and the generated models yielded very good results, better than those obtained from the common sediment transport formulae. However, it is known that the incorporation of knowledge can prove beneficial to the predictive capability of DDM schemes, as long as this is accomplished by transformation and elaboration of the fundamentals. Sediment transport and open channel hydraulics rely heavily on empirical equations and ideal flows; therefore, data transformation based on such assumptions does not guarantee the enhancement of the predictive capabilities of the DDM scheme. Nevertheless, the sidewall correction of Vanoni and Brooks (1957) was applied for the proper calculation of the shear stress in flume measurements, and additionally the restrictions of Table 2 were imposed on the data for the removal of various biases, resulting in a significantly reduced amount of data. By contrast, a criterion for the initiation of motion was omitted, due to the stochastic character of turbulence, and it was left to the DDM scheme to define the effective portion of the flow that quantifies the transport rate.
Since every data-driven technique has its own syntax, the three possible input combinations of Table 6 are tested individually with the aid of both ANNs and symbolic regression. The evaluation of the modeled results Pi with respect to the observed ones Oi will be made on the basis of the root mean square error,

RMSE = [(1/N) Σ (Pi − Oi)²]^1/2

the coefficient of determination (R²) or Nash-Sutcliffe model efficiency coefficient (Nash and Sutcliffe, 1970),

E = 1 − Σ (Oi − Pi)² / Σ (Oi − Omean)²

where Omean is the mean of the observed values, and the discrepancy ratio (DR). The latter is the percentage of calculated concentrations that lie between one half and two times the respective measured concentrations.
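These performance measures are standard; as a quick reference, the following Python sketch (function names are ours, not taken from the chapter's MATLAB code) computes RMSE, the Nash-Sutcliffe efficiency and the discrepancy ratio for arrays of observed and predicted concentrations. The DR bounds are parameterized so that both ranges used in the chapter (0.5-2 and 0.25-4) can be evaluated.

```python
import numpy as np

def rmse(obs, pred):
    """Root mean square error between observed and predicted values."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return np.sqrt(np.mean((pred - obs) ** 2))

def nash_sutcliffe(obs, pred):
    """Nash-Sutcliffe model efficiency E; E = 1 for a perfect model."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

def discrepancy_ratio(obs, pred, lo=0.5, hi=2.0):
    """Fraction of predictions lying within [lo, hi] times the observation."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    r = pred / obs
    return np.mean((r >= lo) & (r <= hi))
```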

ANNs application
This study was implemented in MATLAB with the aid of the Neural Network Toolbox (Demuth et al., 2009). Since the usage of the Levenberg-Marquardt training function gave the best results in a similar study (Kitsikoudis et al., 2012a), it was utilized for training in this application as well. Due to the importance of the initial values of the synaptic weights in the search for local minima of the error function, which is the mean square error between calculated and observed values, a MATLAB code was written which determines the most efficient ANN within 5000 training executions, for each network architecture, with random initial weights for every repetition. The most efficient ANN is taken to be the one that yields only positive sediment concentrations, in order for the results to have physical meaning, and that after training provides the highest DR in the test set. For this evaluation, DR is preferred over RMSE, because the latter emphasizes large concentrations. Models that derived slightly worse results than others but had a much simpler structure were preferred, due to the principle of parsimony. Figures 3-5 depict the scatter plots of the best derived models, for each input combination of Table 6, for the field data of the test set. The models that perform best are described in Table 7. Table 8 shows the best models and their performance measures for the training, validation and test sets. Finally, Table 10 shows a comparison between the ANN-induced models and some of the commonly used sediment transport functions for the river data constituting the test set. It should be mentioned that several of these formulae are calibrated with part of the data used for the comparison (especially the Brownlie formula) and, despite that significant advantage, they still generate inferior results to those of the ANNs.
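The restart-and-select logic described above can be sketched as follows. This is an illustrative stand-in, not the chapter's MATLAB code: plain gradient descent replaces the Levenberg-Marquardt algorithm, the network is a minimal one-hidden-layer model, and all names and parameter values are hypothetical.

```python
import numpy as np

def train_once(X, y, hidden, rng, epochs=300, lr=0.05):
    """One gradient-descent training run of a one-hidden-layer network
    (a simple stand-in for MATLAB's Levenberg-Marquardt trainlm)."""
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)           # hidden layer activations
        P = H @ W2 + b2                    # linear output layer
        err = P - y[:, None]               # backpropagate the MSE gradient
        gW2 = H.T @ err / n; gb2 = err.mean(0)
        dH = (err @ W2.T) * (1 - H ** 2)
        gW1 = X.T @ dH / n; gb1 = dH.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Z: (np.tanh(Z @ W1 + b1) @ W2 + b2).ravel()

def best_of_restarts(Xtr, ytr, Xte, yte, hidden=4, restarts=20, seed=0):
    """Keep the restart whose test predictions are all positive (physical
    meaning) and whose discrepancy ratio (within 0.5-2x) is highest."""
    rng = np.random.default_rng(seed)
    best, best_dr = None, -1.0
    for _ in range(restarts):
        model = train_once(Xtr, ytr, hidden, rng)
        p = model(Xte)
        if np.any(p <= 0):                 # discard physically meaningless runs
            continue
        dr = np.mean((p / yte >= 0.5) & (p / yte <= 2.0))
        if dr > best_dr:
            best, best_dr = model, dr
    return best, best_dr
```

The same selection criterion as in the chapter is applied (positivity first, then DR on the test set), just with far fewer than 5000 restarts.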

Table 7. Best performing models for each possible input combination (input combination from Table 6; network architecture given as neurons in the input, hidden and output layers)

From Table 8 it can be inferred that any of the three combinations listed in Table 6 has its own merit and that sediment transport can be quantified by physical quantities that can be either vectors or scalars.

Symbolic regression application
The basic computation tool for the implementation of symbolic regression is provided by GPTIPS (Searson, 2009), an open source MATLAB toolbox. Since every problem has its own peculiarities, proper adjustments must be made to the GPTIPS parameters in order to obtain good results. The most important parameters are the population size, the number of generations, the function set, the maximum number of genes and the maximum tree depth. Searson et al. (2010) have found that enforcing stringent tree depth restrictions often allows the evolution of relatively compact models that are linear combinations of low order nonlinear transformations of the input variables. After several runs, only input combination (b) gave results superior to those of the classical formulae. The GPTIPS-derived formula for this combination is given in Eq. (18). Figure 6 depicts the scatter plot of measured and calculated, from Eq. (18), sediment concentrations for the field data of the test set, whilst Table 9 and Table 10 show its performance for the training, validation and testing sets, and the comparison with other formulae, respectively. The results obtained from ANNs for all the combinations are superior to those of the classical sediment transport formulae in terms of DR, RMSE and R². Combination (a) performed best in all evaluation measures, besides the second DR criterion in the range 0.25-4, where combination (b) gave better results. The third combination came up third with respect to all evaluation measures. However, these results can by no means be considered conclusive, since it is essentially unknown whether they are the best results derivable from the ANN or just results obtained from trapping in a local minimum of the minimization process in the network's training algorithm. From the results generated by symbolic regression, only combination (b) managed to surpass the classical sediment transport functions. The other two combinations gave results inferior to those of the Engelund and Hansen and Brownlie formulae, but superior to those of the others. In addition, symbolic regression derived its best results without utilizing the log-transformation of the input data. Regarding the other sediment transport functions, the formula of Engelund and Hansen performed best. The small values of the coefficient of determination R² in Table 10 reflect the difficulty of predicting sediment transport rates in natural streams and rivers, due to random turbulent bursts that accentuate the stochastic nature and exacerbate the complexity of the problem.
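As a rough illustration of the kind of model GPTIPS favors under tight tree-depth limits (linear combinations of low-order nonlinear transforms of the inputs, per Searson et al., 2010), the following non-evolutionary Python sketch enumerates small sets of candidate transforms and fits their coefficients by least squares. It is a conceptual analogue only, not GPTIPS, and all names are ours; inputs are assumed positive, as dimensionless hydraulic variables are.

```python
import itertools
import numpy as np

# Candidate low-order transforms of a single input column, in the spirit of
# the compact multigene models GPTIPS evolves under tree-depth restrictions.
# Inputs are assumed strictly positive (log and 1/x would fail otherwise).
TRANSFORMS = [
    ("x",    lambda v: v),
    ("x^2",  lambda v: v ** 2),
    ("sqrt", np.sqrt),
    ("log",  np.log),
    ("1/x",  lambda v: 1.0 / v),
]

def fit_pseudo_multigene(X, y, n_terms=3):
    """Exhaustively try combinations of per-column transforms and keep the
    least-squares fit with the lowest RMSE (a stand-in for GP evolution)."""
    n, d = X.shape
    cols = [(f"{name}(x{j})", f(X[:, j]))
            for j in range(d) for name, f in TRANSFORMS]
    best = None
    for combo in itertools.combinations(range(len(cols)), n_terms):
        A = np.column_stack([cols[i][1] for i in combo] + [np.ones(n)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        fit_rmse = np.sqrt(np.mean((A @ coef - y) ** 2))
        if best is None or fit_rmse < best[0]:
            best = (fit_rmse, [cols[i][0] for i in combo], coef)
    return best  # (rmse, term names, coefficients incl. intercept)
```

Unlike genetic programming, this brute-force search scales poorly with the number of inputs and transforms, but it makes the structure of the resulting models explicit.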
Although these results cannot be considered conclusive, it seems that the ANNs yield better results. GPTIPS sometimes (usually when only a few input variables are involved) lags behind a neural network model in terms of raw predictive performance, but the equivalent GP models are often simpler, shorter and may be open to physical interpretation (Searson, 2009). This is partially due to the fact that ANNs are much faster than the time-consuming GP and, for a given amount of time, they can run multiple times compared to GP. Moreover, since the testing set comes from a database with different statistical distributions than the one from which the training set originates, the exploration of as many local minima of the training function as possible may prove beneficial to the training process. ANNs have this property, whilst GP is based on a stochastic concept seeking the global minimum. This may be one reason for the superiority of ANNs in this study, where the training data comprise flume measurements, whilst the testing data consist of field measurements.

Conclusions
This study utilized two widely used data-driven techniques, namely ANNs and symbolic regression, in a novel way, since the data used for training and those used for testing came from datasets with different statistical distributions. This difference is owed to the fact that the training and validation sets comprise exclusively laboratory flume data, while the testing set consists solely of field data. Based on this concept, the noise emanating from the field measurements is not embedded in the training data and, additionally, the generated models have general applicability, since the inclusion of field data in the training set would confine them to the specific streams from which the data were obtained. The determination of the input parameters was accomplished by a tentative assessment of some of the widely used dimensionless parameters in sediment transport and open channel hydraulics. This assessment showed that three combinations had the potential to serve as inputs, and all of them were involved in this application, in which they yielded very good results, better than those obtained from the commonly used formulae on the basis of the root mean square error and the ratio of computed to measured transport rates. Unit stream power, stream power, and shear stress were the dominant independent variables of the three combinations, respectively, and the results have shown that each one of these widely used variables, in the context of sediment transport, has its own merit. The results generated by the ANNs were better than those obtained from symbolic regression; however, the explicit equation that was derived from the latter can be more easily interpreted. Finally, the results obtained in this study may enhance the confidence in using data-driven techniques, despite their black-box nature, because, in order to perform well on a dataset from a different system than the one on which they were trained, the induced equations must have physical meaning.

Notation
The following symbols are used in this chapter:

Appendix A
In flume experiments, the sand covered bed will generally be much rougher than the flume walls, and thus will be subjected to a higher shear stress. Separation of the shear force exerted on the bed from that on the lateral boundaries was first proposed by Einstein (1950). The line of analysis pursued in the following is that proposed by Johnson (1942) and modified by Vanoni and Brooks (1957). The principal assumption is that the cross-sectional area can be divided into two parts, A_b and A_w, in which the streamwise component of the gravity force is resisted by the shear force exerted on the bed and walls, respectively. It is further assumed that the mean velocity and energy gradient are the same for A_b and A_w, and that the Darcy-Weisbach relation can be applied to each part of the cross section as well as to the whole, i.e.

V²/(8gS) = A/(p f) = A_b/(p_b f_b) = A_w/(p_w f_w)    (19)
in which p = the wetted perimeter, and the subscripts b and w refer to the bed and wall sections, respectively. For a rectangular channel p = 2D + W, p_w = 2D and p_b = W. Introducing the geometrical requirement A = A_b + A_w into Eq. (19) results in

f_b = f + (p_w/p_b)(f − f_w)    (20)

The wall friction factor f_w is further related to the ratio Re/f, where Re = 4VR/ν and f can be calculated from the experimental data. This relationship, which was originally given as a graph of f_w against Re/f by Vanoni and Brooks (1957), can also be described by an explicit function obtained by curve fitting (Cheng and Chua, 2005). Finally, f_b is calculated from Eq. (20) and R_b = A_b/p_b from Eq. (19). R_b is subsequently used for the calculation of the bed shear velocity and bed shear stress.
Despite its several obvious deficiencies (division of the cross section into two noninteracting parts, determination of friction factors for section components on the basis of a pipe friction diagram, use of the same mean velocity for each subsection, etc.), the side-wall correction procedure appears to yield fairly reliable estimates of the friction factors for flow over sand beds with no flume walls present (Vanoni, 2006).
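A minimal sketch of the sidewall correction procedure for a rectangular flume is given below. It assumes the wall friction factor f_w has already been obtained from the f_w versus Re/f relationship (the Cheng and Chua curve fit is not reproduced here, so f_w is taken as an input), and all function names are ours.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def sidewall_correction(V, S, D, W, f_w):
    """Vanoni-Brooks sidewall correction for a rectangular flume.

    V mean velocity (m/s), S energy slope, D flow depth (m), W flume
    width (m), f_w wall friction factor supplied by the caller (e.g. from
    the Vanoni-Brooks chart or a curve fit of f_w against Re/f).
    Returns (f_b, R_b): bed friction factor and bed hydraulic radius."""
    p, p_w, p_b = 2 * D + W, 2 * D, W        # wetted perimeters
    R = (D * W) / p                          # hydraulic radius, full section
    f = 8 * G * R * S / V ** 2               # Darcy-Weisbach factor, whole section
    f_b = f + (p_w / p_b) * (f - f_w)        # partition: p*f = p_b*f_b + p_w*f_w
    R_b = f_b * V ** 2 / (8 * G * S)         # Darcy-Weisbach applied to bed part
    return f_b, R_b

def bed_shear(R_b, S, rho=1000.0):
    """Bed shear stress tau_b = rho*g*R_b*S and shear velocity sqrt(g*R_b*S)."""
    return rho * G * R_b * S, math.sqrt(G * R_b * S)
```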
form; they apply to plane, rippled, and duned configurations. Their mobility number for sediment is

F_gr = u*^n / [g d (s − 1)]^1/2 × { V / [32^1/2 log(10D/d)] }^(1−n)

Coefficients C, A, m and n are related to the dimensionless grain diameter d_gr, based on best-fit curves of laboratory data with sediment sizes greater than 0.04 mm and Froude numbers less than 0.8. They are shown in Table 11.
Finally, they related the bed material load to the mobility number as follows

G_gr = C (F_gr/A − 1)^m,   X = G_gr s d / [D (u*/V)^n]

where X = rate of sediment transport in terms of mass flux per unit mass flow rate, and q_t = total sediment discharge by weight per unit width. Strictly speaking, the Engelund and Hansen formula should be applied to those flows with dune beds, in accordance with the similarity principle. However, many tests have shown that it can be applied to the upper flow regime with particle sizes greater than 0.15 mm without serious deviation from the theory.
where q_s = volumetric total sediment discharge per unit width.
Molinas and Wu formula: This empirical relation is based on Velikanov's gravitational power theory, which assumes that the power available in flowing water is equal to the sum of the power required to overcome flow resistance and the power required to keep sediment in suspension against gravitational forces. Molinas and Wu (2001) argued that the predictors of Ackers and White, Engelund and Hansen, and Yang have been developed with flume experiments representative of shallow flows and cannot be applied to large rivers having deep flow conditions. Motivated by the need for a total bed material load predictor applicable to large sand bed rivers, they used stream power and energy considerations, together with data from large rivers (e.g., Amazon, Atchafalaya, Mississippi, Red River), to obtain an empirical fit for the total bed material load concentration in ppm

C_t = 1430 (0.86 + Ψ^1/2) Ψ^1.5 / (0.016 + Ψ)    (38)

where Ψ = universal stream power, which is defined as

Ψ = V³ / [(s − 1) g D ω50 (log(D/d50))²]

One advantage of this approximation is that the energy slope does not have to be measured directly, which is always a challenge in large alluvial rivers. On the other hand, since Molinas and Wu (2001) do not mention how the wash load was separated from the bed material load, and the same large river data were used both to develop and to test their formulation, Eq. (38) might overestimate bed material load concentrations when applied to other large rivers not included in the calibration (Garcia, 2008).
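A direct transcription of the Molinas and Wu predictor into code follows, using the coefficients as reported in Garcia (2008); they should be checked against Molinas and Wu (2001) before any serious use.

```python
import math

def molinas_wu_concentration(V, D, d50, omega50, sg=2.65, g=9.81):
    """Total bed material load concentration in ppm by weight
    (Molinas and Wu, 2001, as reported in Garcia, 2008).

    V mean velocity (m/s), D flow depth (m), d50 median grain size (m),
    omega50 fall velocity of the d50 size (m/s), sg specific gravity."""
    # universal stream power Psi
    psi = V ** 3 / ((sg - 1) * g * D * omega50 * math.log10(D / d50) ** 2)
    return 1430.0 * (0.86 + math.sqrt(psi)) * psi ** 1.5 / (0.016 + psi)
```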
Yang formula: To determine the total sediment concentration, Yang (1973) used Buckingham's π theorem and the concept of unit stream power, which is given by the product of mean flow velocity and energy slope. The coefficients in Yang's equation were determined by running a multiple regression analysis on 463 sets of laboratory data. The equation obtained is

log C_t = 5.435 − 0.286 log(ω d50/ν) − 0.457 log(u*/ω) + [1.799 − 0.409 log(ω d50/ν) − 0.314 log(u*/ω)] log(VS/ω − V_cr S/ω)    (40)

The critical dimensionless unit stream power V_cr S/ω is the product of the dimensionless critical velocity V_cr/ω and the energy slope S, where

V_cr/ω = 2.5 / [log(u* d50/ν) − 0.06] + 0.66  for 1.2 < u* d50/ν < 70,   V_cr/ω = 2.05  for u* d50/ν ≥ 70    (41)
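Yang's equation can be transcribed as follows; the coefficients are the commonly published ones and should be verified against Yang (1973) before use, and the function name is ours.

```python
import math

def yang_1973_concentration(V, S, ustar, omega, d50, nu=1.0e-6):
    """Total sand concentration (ppm by weight) from Yang's (1973)
    unit stream power equation (coefficients as commonly published).

    V mean velocity (m/s), S energy slope, ustar shear velocity (m/s),
    omega particle fall velocity (m/s), d50 median diameter (m),
    nu kinematic viscosity (m^2/s)."""
    R_star = ustar * d50 / nu                 # shear Reynolds number
    # dimensionless critical velocity V_cr/omega (valid for 1.2 < R* )
    if R_star < 70.0:
        Vcr = omega * (2.5 / (math.log10(R_star) - 0.06) + 0.66)
    else:
        Vcr = 2.05 * omega
    excess = (V * S - Vcr * S) / omega        # effective unit stream power
    if excess <= 0.0:
        return 0.0                            # below incipient motion
    logCt = (5.435 - 0.286 * math.log10(omega * d50 / nu)
             - 0.457 * math.log10(ustar / omega)
             + (1.799 - 0.409 * math.log10(omega * d50 / nu)
                - 0.314 * math.log10(ustar / omega))
             * math.log10(excess))
    return 10.0 ** logCt
```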

Figure 3. Scatter plot for the field data of the test set, of measured sediment concentration and computed from ANN, based on input combination (a)

Figure 4. Scatter plot for the field data of the test set, of measured sediment concentration and computed from ANN, based on input combination (b)

Figure 5. Scatter plot for the field data of the test set, of measured sediment concentration and computed from ANN, based on input combination (c)

In the original Ackers and White formulation, the coefficients are given by n = 1 − 0.56 log d_gr, A = 0.23 d_gr^(−1/2) + 0.14, m = 9.66 d_gr^(−1) + 1.34 and log C = 2.86 log d_gr − (log d_gr)² − 3.53, while in the Brownlie formula the coefficient c_f equals 1.0 for laboratory flumes and 1.268 for field channels.

Engelund and Hansen formula: Using Bagnold's stream power concept and the similarity principle, Engelund and Hansen (1967) established the following sediment transport formula

f Φ = 0.1 θ^(5/2)

where f = 2gDS/V² is the friction factor, θ the dimensionless shear stress and Φ = q_s/[(s − 1) g d³]^1/2 the dimensionless sediment discharge.

Karim and Kennedy formula: Karim and Kennedy (1990) applied nonlinear multiple regression analysis to derive relations among flow velocity, sediment discharge, bed form geometry, and friction factor of alluvial rivers. A database comprising 339 river flows and 608 flume flows was used in their analysis. The obtained sediment load predictor is given below.
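For the Engelund and Hansen predictor, a sketch in the commonly quoted form f·Φ = 0.1·θ^(5/2) is given below; the coefficients come from the literature rather than from this chapter's tables, a wide channel is assumed (hydraulic radius ≈ depth), and the function name is ours.

```python
import math

def engelund_hansen_qs(V, D, S, d50, sg=2.65, g=9.81):
    """Engelund-Hansen (1967) total load in the commonly quoted form
    f * Phi = 0.1 * theta^(5/2); returns volumetric transport per unit
    width q_s (m^2/s), assuming a wide channel (R ~ D)."""
    Rs = sg - 1.0                                 # submerged specific gravity
    f = 2.0 * g * S * D / V ** 2                  # friction factor
    theta = D * S / (Rs * d50)                    # Shields stress (wide channel)
    phi = 0.1 * theta ** 2.5 / f                  # Einstein transport number
    return phi * math.sqrt(Rs * g * d50 ** 3)
```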

Table 1. Dimensionless variables assessed for the determination of the dominant ones: dimensionless shear stress τ*; dimensionless grain diameter d_gr; dimensionless stream power ω*; Yang's dimensionless unit stream power VS/ω_s; Vanoni's dimensionless unit stream power VS/(g d50)^1/2

Yang divided the unit stream power VS by the fall velocity ω_s to obtain a dimensionless variable, while Vanoni (1978) divided the product VS by (g d50)^1/2. Both d50 and ω_s are commonly used for describing the size of sediment particles. However, d50 can only reflect the physical size of sediment particles, while ω_s can also reflect the interaction between sediment particles and water, which is affected by particle shape, water viscosity and temperature. On the other hand, the computation of fall velocity is problematic and a common source of errors. The emerging variables expressing dimensionless unit stream power according to Yang and Vanoni are, respectively, VS/ω_s and VS/(g d50)^1/2.

Table 2 .
Restrictions imposed on data

Table 3 .
Statistical measures of the training, validation and test sets

Table 4. (a) Range of laboratory variables; (b) range of field variables

Table 5 .
Correlation between sediment concentration and independent dimensionless variables of the flume and field data of Table 4, in log-log plots

Table 8 .
Performance measures of the optimal ANNs

Table 9 .
Performance measures of symbolic regression, based on combination (b)

Table 10 .
Comparison of the ANNs of input combinations (a), (b) and (c), and of Eq. (18) derived from symbolic regression for combination (b), with sediment transport formulae, based on the river data of the test set

Table 11. Coefficients of the Ackers and White formula

Brownlie formula: The Brownlie (1981a) relations are based on regressions of over 1000 experimental and field data points. For normal or quasi-normal flow, the transport relation takes the form

C_ppm = 7115 c_f (F_g − F_g0)^1.978 S^0.6601 (R_b/d50)^(−0.3301)

where F_g = V/[(s − 1) g d50]^1/2 is the grain Froude number, F_g0 its critical value, and c_f a coefficient equal to 1.0 for laboratory flumes and 1.268 for field channels.

Derivation of Sediment Transport Models for Sand Bed Rivers from Data-Driven Techniques http://dx.doi.org/10.5772/53432