Open access peer-reviewed chapter

Nonlinear Evapotranspiration Modeling Using Artificial Neural Networks

By Sirisha Adamala

Submitted: April 22nd, 2018 | Reviewed: September 7th, 2018 | Published: April 3rd, 2019

DOI: 10.5772/intechopen.81369

Abstract

Reference evapotranspiration (ETo) is an important component of the hydrologic cycle and one of the most difficult to quantify accurately. Estimating or measuring ETo is not simple, as a number of climatic parameters affect the process. Numerous conventional (direct and indirect) and non-conventional/soft-computing (artificial neural network, ANN) methods exist for estimating ETo. Direct methods suffer from measurement errors, high cost, and the impracticality of acquiring point measurements for spatially variable locations, whereas indirect methods are limited by the unavailability of all necessary climate data and by a lack of generalizability (they need local calibration). In contrast to conventional methods, soft-computing models can estimate ETo accurately with minimum climate data, overcoming these limitations. This chapter reviews the application of ANN methods for estimating ETo accurately at 15 locations in India using six climatic variables as input. The performance of the ANN models was compared with that of multiple linear regression (MLR) models in terms of root mean squared error, coefficient of determination, and the ratio of average output to target ETo values. The results suggest that the ANN models performed better than MLR for all locations.

Keywords

  • evapotranspiration
  • ANN
  • climate
  • data
  • Gaussian
  • lysimeter

1. Introduction

Evapotranspiration (ET) is the combined process of evaporation and transpiration losses. Almost 62% of the precipitation that falls on the continents is returned to the atmosphere through ET [1]. ET plays a significant role in the hydrological cycle, and its estimation is very important in various fields of water resources. A common procedure for estimating actual crop evapotranspiration (ETcrop) is to first estimate reference evapotranspiration (ETo) and then apply an appropriate crop coefficient (kc). ETo is an important component of the hydrologic cycle and one of the most difficult to quantify accurately. ETo is defined for a hypothetical reference crop of uniform height (12 cm) that is actively growing (with a surface resistance of 70 s m−1), completely shades the ground (albedo of 0.23), and has an unlimited supply of water [2]. The Food and Agriculture Organization (FAO) considers the above definition as the standard and sole basis for estimating ETo when sufficient climatic data are available [3, 4].

Estimation of ETo is complex due to the influence of various climatic variables (maximum and minimum air temperature, wind speed, solar radiation, and maximum and minimum relative humidity) and the nonlinearity between the climatic data and ETo. Although users have a number of methods for measuring or estimating ETo directly or indirectly, most have limitations regarding data availability or regional applicability. In addition, to use these methods, users must make reasonable estimates for some of the parameters in the employed ETo models, which involves some uncertainty and might not yield reliable ETo estimates [5]. Further, it is difficult to develop accurate, representative, physically based models for complex nonlinear hydrological processes such as ETo, because the physical relationships involved in such a system can be too complicated to represent accurately in a physically based manner.

The above limitations lead to the need for techniques that can accurately estimate ETo with minimum input data and are easy to apply without knowledge of the physical processes inside the system. Artificial neural network (ANN) techniques, which provide a model that can predict and investigate a process without a complete understanding of it, can be a useful tool for this purpose. These techniques are also attractive because of their knowledge-discovery property. In contrast to conventional methods, ANNs can estimate ETo accurately with minimum climate data; their advantages include low cost, independence from specific climatic conditions, no need for explicit physical relations, and precise modeling of nonlinear complex systems. In the last decade, many researchers have used ANN techniques for modeling the ETo process [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25].

2. Review of literature

This section discusses some of the significant contributions made by various researchers in applying different ANN techniques to model ETo or pan evaporation (Ep). A radial basis function (RBF) neural network was developed in the C language to estimate daily soil water evaporation [26]. The input layer of the network consisted of average relative air humidity, air temperature, wind speed (Ws), and soil water content; the results of the RBF networks were compared with multiple linear regression (MLR) techniques. A feed-forward back propagation (FFBP) ANN model was developed to estimate daily Ep from measured weather variables [27]. The authors used different input combinations to model Ep and compared the developed ANN models with the Priestley-Taylor and MLR models. An RBF neural network model was developed to estimate the FAO Blaney-Criddle b factor [28]. The input layer of the RBF model consisted of minimum daily relative humidity (RHmin), daytime Ws, and the mean ratio of actual to possible sunshine hours (n/N). The b values estimated by the RBF models were compared with the appropriate b values produced using regression equations. FFBP ANN models were implemented for the estimation of daily ETo using six basic climatic parameters as inputs [16]. The ANNs were trained using three learning methods (with different learning rates and momentum coefficients), different numbers of processing elements in the hidden layers, and different numbers of hidden layers; the results of the developed models were compared with the Penman-Monteith (PM) method and with lysimeter-measured ETo. ANN-based back propagation models for estimating Class A Ep with minimum climate data (four input combinations) were developed and compared with existing conventional methods [22].

The potential of RBF neural networks for estimating daily rice crop ET using limited climatic data was demonstrated [23]. Six RBF networks, each using a different input combination of climatic variables, were trained and tested, and the model estimates were compared with measured lysimeter ET. A sequentially adaptive RBF network was applied to forecasting monthly ETo [29]. Sequential adaptation of parameters and structure was achieved using an extended Kalman filter; the criterion for network growing was obtained from the Kalman filter's consistency test, and the criteria for neuron/connection pruning were based on a statistical parameter significance test. The results showed that the developed network learned to forecast ETo,t+1 (the current or next month) based on ETo,t−11 (at a lag of 12 months) and ETo,t−23 (at a lag of 24 months) with high reliability. Another study examined whether it is possible to obtain reliable estimates of ETo on the basis of temperature data alone [24]; the author developed an RBF neural network for estimating ETo and compared it with temperature-based empirical models.

ANN-based daily ETo models were trained on different categories of conventional ETo estimation methods: temperature based (FAO-24 Blaney-Criddle), radiation based (FAO-24 Radiation for arid regions and Turc for humid regions), and combination methods (FAO-56 PM) [14]. The Hargreaves and ANN methods were compared for estimating monthly ETo on the basis of temperature data alone [19]; ANN models were developed with data from six stations and tested with data from six other stations not used in model development. The capability of ANNs for converting Ep to ETo was studied using temperature data [18], with the conventional method that uses a pan coefficient (Kp) to convert Ep to ETo serving as the benchmark. Generalized ANN (GANN)-based ETo models corresponding to the FAO-56 PM, FAO-24 Radiation, Turc, and FAO-24 Blaney-Criddle methods were developed [15]. These models were trained using pooled data from four California Irrigation Management Information System (CIMIS) stations with FAO-56 PM computed values as targets, and the developed GANN models were tested on stations not used in training. Multilayer perceptron (MLP) neural networks were developed for estimating daily Ep using maximum and minimum air temperature and extraterrestrial radiation as inputs [20]. The potential of ANNs to estimate ETo from air temperature data was also examined [21], comparing the estimates provided by the ANNs and by the Hargreaves equation against the FAO-56 PM model as the reference.

3. Study area and data collected

For the purpose of this study, 15 meteorological stations in India were chosen. Figure 1 shows the geographical locations of the selected stations and their agro-ecological regions (AERs). Daily meteorological data from 2001 to 2005 are available for these stations for the following variables: minimum air temperature (Tmin), maximum air temperature (Tmax), minimum relative humidity (RHmin), maximum relative humidity (RHmax), mean wind speed (Ws), and solar radiation (Sra). Table 1 lists the 15 climatic stations of India along with their altitudes and the duration of available data. The study area is bounded by longitudes 68° 7′ and 97° 25′ E and latitudes 8° 4′ and 37° 6′ N. The annual potential evapotranspiration of India is 1771 mm; it varies from a minimum of 1239 mm in Jammu and Kashmir to a maximum of 2100 mm in Gujarat [30].

Figure 1.

Geographical locations of study sites in India.

AER | Location | Alt. (m) | Period | Tmax (°C) | Tmin (°C) | RHmax (%) | RHmin (%) | Ws (km h−1) | Sra (MJ m−2 day−1)
Semi-arid | Parbhani | 423 | 2001–2005 | 33.75 | 18.32 | 71.13 | 41.02 | 5.04 | 20.87
Semi-arid | Solapur | 25 | 2001–2005 | 34.15 | 20.14 | 73.28 | 45.09 | 6.15 | 18.96
Semi-arid | Bangalore | 930 | 2001–2005 | 28.90 | 17.70 | 89.15 | 47.30 | 8.68 | 18.95
Semi-arid | Kovilpatti | 90 | 2001–2005 | 35.11 | 23.37 | 80.36 | 48.52 | 6.60 | 19.30
Semi-arid | Udaipur | 433 | 2001–2005 | 31.81 | 16.33 | 72.36 | 36.44 | 3.74 | 19.45
Arid | Anantapur | 350 | 2001–2005 | 34.43 | 21.78 | 73.32 | 33.91 | 9.64 | 20.27
Arid | Hissar | 215 | 2001–2005 | 31.17 | 16.23 | 81.00 | 44.27 | 5.20 | 17.26
Sub-humid | Raipur | 298 | 2001–2005 | 32.60 | 19.91 | 80.62 | 44.08 | 5.33 | 17.80
Sub-humid | Faizabad | 133 | 2001–2005 | 31.56 | 18.18 | 87.02 | 52.11 | 3.51 | 17.88
Sub-humid | Ludhiana | 247 | 2001–2005 | 30.06 | 17.42 | 83.97 | 49.14 | 4.26 | 18.10
Sub-humid | Ranichauri | 1600 | 2001–2005 | 20.08 | 9.66 | 81.15 | 61.55 | 4.99 | 16.23
Humid | Palampur | 1291 | 2001–2005 | 24.41 | 13.24 | 69.70 | 57.88 | 5.56 | 16.35
Humid | Jorhat | 86 | 2001–2005 | 27.97 | 19.23 | 92.70 | 75.27 | 3.00 | 14.68
Humid | Mohanpur | 10 | 2001–2005 | 32.20 | 21.04 | 96.18 | 61.48 | 1.27 | 18.06
Humid | Dapoli | 250 | 2001–2005 | 31.13 | 18.87 | 93.77 | 69.22 | 4.92 | 18.02

Table 1.

Station locations and period of records.

4. Theoretical consideration

The concept of neural networks was introduced in [31]. The neural-network approach, also referred to as 'connectionism' or 'parallel distributed processing', adopts a 'brain metaphor' of information processing. Information processing in a neural network occurs through interactions involving a large number of simulated neurons. A neural network (NN) is a simplified model of the human brain's nervous system, consisting of interconnected neurons in a parallel distributed system that can learn and memorize information. In an NN, the interneuron connection strengths, known as 'synaptic weights', are used to store the acquired knowledge [32]. In other words, an ANN discovers the relationship between a set of inputs and desired outputs without any information about the actual processes involved; it is in essence based on pattern recognition. ANNs consist of a number of interconnected processing elements, or neurons, and the arrangement of the inter-neuron connections determines the topology of a network. A neuron is the fundamental unit of the human brain's nervous system; it receives and combines signals from other neurons through input paths called 'dendrites'. Each signal coming to a neuron along a dendrite passes through a junction called a 'synapse', which is filled with neurotransmitter fluid that produces electrical signals; these signals reach the soma, or cell body, where processing occurs [16]. If the combined input signal after processing is stronger than the threshold value, the neuron activates, producing an output signal that is transferred through the axon to other neurons. Similarly, an ANN consists of a large number of simple processing units called neurons (or nodes) linked by weighted connections. A comprehensive description of neural networks was presented in a series of papers [33, 34, 35], which provide valuable information for researchers.

4.1 Model of a neuron

The main function of an artificial neuron is to generate an output by applying a nonlinear activation function to the weighted sum of all inputs. Figure 2 illustrates a nonlinear model of a neuron, which forms the basis for designing ANNs. The input layer neurons receive the input signals (xi), and these signals are passed to the cell body through the synapses. Each synapse, or connecting link, is characterized by its own weight or strength: a signal at the input of synapse 'i' connected to neuron 'k' is multiplied by the synaptic weight 'wki'. The input signals, weighted by the respective synapses of the neuron, are added by a linear combiner. An activation (or squashing) function limits the permissible amplitude range of the neuron's output to some finite value. An external bias (bk) increases or decreases the net input of the activation function depending on whether it is positive or negative, respectively.

Figure 2.

A nonlinear model of a neuron.

In the mathematical form, a neuron k may be described by the following equations:

u_k = \sum_{i=1}^{n} w_{ki} x_i    (1)

y_k = \varphi(u_k + b_k)    (2)

where x_1, x_2, …, x_n = input signals; w_{k1}, w_{k2}, …, w_{kn} = synaptic weights of neuron k; u_k = linear combiner output due to the input signals; b_k = bias; \varphi(\cdot) = activation function; y_k = output signal of neuron k.

Let vk be the induced local field or activation potential, which is given as:

v_k = u_k + b_k    (3)

Now, Eqs. (1), (2) and (3) can be written as:

v_k = \sum_{i=0}^{n} w_{ki} x_i    (4)

y_k = \varphi(v_k)    (5)

In Eq. (4), a new synapse with input x_0 = +1 and weight w_{k0} = b_k is added to account for the effect of the bias.
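The neuron model of Eqs. (1)-(5) can be sketched in a few lines of Python (an illustrative sketch, not code from the chapter; the function name and the default tanh activation are our assumptions):

```python
import math

def neuron_output(x, w, b, phi=math.tanh):
    # Eqs. (1) and (3): induced local field v_k = sum_i w_ki * x_i + b_k
    v = sum(w_i * x_i for w_i, x_i in zip(w, x)) + b
    # Eq. (5): y_k = phi(v_k)
    return phi(v)

# With a linear activation the neuron reduces to a weighted sum plus bias.
y = neuron_output([1.0, 2.0, 3.0], [0.5, -0.25, 0.1], b=0.2, phi=lambda v: v)
```

Passing a different `phi` swaps in any of the activation functions discussed below.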

4.2 Neural network architecture parameters

Determining an appropriate neural network architecture is one of the most important tasks in the model-building process. Various types of neural networks are analyzed to find the most appropriate architecture for a particular problem. Multilayer feed-forward networks have been found to outperform the others; although they are among the most fundamental models, they are the most popular type of ANN structure for practical applications.

4.3 Number of hidden layers

There is no fixed rule for selecting the number of hidden layers of a network; therefore, a trial-and-error method was used. Even a single hidden layer of neurons (with a sigmoid activation function) can be sufficient to model any solution surface of practical interest [36].

4.4 Number of hidden neurons

The ability of an ANN to generalize to data not included in training depends on selecting a sufficient number of hidden neurons to store the higher-order relationships necessary for adequately abstracting the process. There is no direct and precise way of determining the most appropriate number of neurons to include in a hidden layer, and the problem becomes more complicated as the number of hidden layers increases. Some studies indicate that more hidden neurons provide a solution surface that fits the training patterns more closely. In practice, however, too many hidden neurons produce a solution surface that deviates significantly from the trend of the surface at intermediate points, or interprets the training points too literally, which is called 'overfitting'. A large number of hidden neurons also reduces the speed of the network during training and testing. Conversely, too few hidden neurons yield an inaccurate model whose solution surface deviates from the training patterns. Choosing the optimum number of hidden neurons is therefore one of the important training parameters in ANNs. To solve this problem, several neural networks with different numbers of hidden neurons are calibrated/trained, and the one with the best performance together with a compact structure is accepted.

4.5 Types of activation functions

The activation function, or transfer function, denoted by \varphi(v), defines the output of a neuron in terms of the induced local field v. It is valuable in ANN applications because it introduces a degree of nonlinearity between inputs and outputs. The logistic sigmoid, hyperbolic tangent, and linear functions are among the most widely used transfer functions in ANN modeling.

Logistic sigmoid function: This function is a continuous function that reduces the output into the range of 0–1 and is defined as [32]:

\varphi(v) = \frac{1}{1 + \exp(-v)}    (6)

Hyperbolic tangent function: It is used when the desired range of output of a neuron is between −1 and 1 and is expressed as [32]:

\varphi(v) = \tanh(v) = \frac{1 - e^{-2v}}{1 + e^{-2v}}    (7)

Linear function: It calculates the neuron’s output by simply returning the value passed to it. It can be expressed as:

\varphi(v) = v    (8)
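The three transfer functions of Eqs. (6)-(8) translate directly into code (a minimal sketch; the function names are ours):

```python
import math

def logistic(v):
    # Eq. (6): continuous, squashes the output into (0, 1)
    return 1.0 / (1.0 + math.exp(-v))

def hyperbolic_tangent(v):
    # Eq. (7): squashes the output into (-1, 1)
    return (1.0 - math.exp(-2.0 * v)) / (1.0 + math.exp(-2.0 * v))

def linear(v):
    # Eq. (8): simply returns the value passed to it
    return v
```

Note that Eq. (7) is algebraically identical to `math.tanh`, so either form can be used.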

4.6 Neural network architectures

The manner in which the neurons of a neural network are structured is intimately linked with the learning algorithm used to train the network; this leads to the formation of network architectures. Neural network architectures are classified into distinct classes depending on the information flow. The different network architectures are: (a) multilayer perceptrons, (b) recurrent networks, (c) RBF networks, (d) Kohonen self-organizing feature maps, etc.

4.7 Multilayer perceptrons (MLPs)

MLPs are layered (single-layer or multi-layer) feed-forward networks typically trained with static back-propagation (Figure 3); they are therefore also called FFBP neural networks. These networks have found their way into countless applications requiring static pattern classification. The architecture consists of an input layer, an output layer, and one or more hidden layers. The input signal moves only in the forward direction, from the input nodes to the output nodes through the hidden nodes. The function of the hidden layer is to perform intermediate computations between the input and output layers through weights. The major advantage of FFBP networks is that they are easy to handle and can approximate any input-output map [37].

Figure 3.

Types of neural network architectures [37]. (a) Multilayer perception; (b) recurrent neural network; (c) radial basis function network.

4.8 Recurrent neural networks (RNN)

RNNs may be fully recurrent networks (FRNs) or partially recurrent networks (PRNs). An FRN feeds the outputs of the hidden layer back to the layer itself, whereas a PRN starts from a feed-forward network and adds feedback connections (Figure 3). A simple RNN can be constructed by modifying a multilayered feed-forward network with the addition of a 'context layer'. At the first epoch, new inputs are sent to the RNN and the previous contents of the hidden layer are passed to the context layer; at the next epoch, this information is fed back to the hidden layer. Similarly, weights are calculated from the hidden layer to the context layer and vice versa. An RNN can have an infinite memory depth and thus find relationships through time as well as through the instantaneous input space. Recurrent networks are the state of the art in nonlinear time series prediction, system identification, and temporal pattern classification [37, 38, 39].

4.9 Radial basis function (RBF) networks

An RBF network is a three-layer feed-forward network with a nonlinear Gaussian transfer function between the input and hidden layers and a linear transfer function between the hidden and output layers (Figure 3). RBF networks require more hidden neurons than a standard FFBP network, but they tend to learn much faster than MLPs [37]. The most common basis function used is the Gaussian, given by:

R_i = \exp\left(-\sum_{j=1}^{n} \frac{(x_j - c_{ij})^2}{2\sigma_{ij}^2}\right)    (9)

where R_i = Gaussian basis function; c_{ij} = cluster center; \sigma_{ij} = width of the Gaussian function. The centers and widths of the Gaussians are set by unsupervised learning rules, and supervised learning is applied to the output layer. After the centers are determined, the connection weights between the hidden layer and output layer can be determined through ordinary back-propagation (gradient-descent) training. The output layer performs a simple weighted sum with a linear output, while the weights of the hidden-layer basis units (input to hidden layer) are set using clustering techniques.

y = \sum_{i=1}^{n} w_i R_i(x) + w_0    (10)

where w_i = connection weight between hidden neuron i and the output neuron; w_0 = bias; x = input vector.
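A forward pass through this Gaussian-basis architecture (Eqs. (9) and (10)) can be sketched as follows (an illustration under our own naming, not the chapter's code):

```python
import math

def gauss_basis(x, center, sigma):
    # Eq. (9): Gaussian response of one hidden unit
    d2 = sum((xj - cj) ** 2 for xj, cj in zip(x, center))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def rbf_output(x, centers, sigmas, weights, w0):
    # Eq. (10): weighted linear sum of the hidden responses plus bias
    return w0 + sum(w * gauss_basis(x, c, s)
                    for w, c, s in zip(weights, centers, sigmas))
```

When the input coincides with a center, that unit's response is exactly 1 and the unit contributes its full weight to the output.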

4.10 ANN learning paradigms

Broadly speaking, there are two types of learning processes: supervised and unsupervised. In supervised learning, the network is presented with examples of known input-output data pairs, after which it starts to mimic the presented input-output behavior or pattern. In unsupervised learning, the network learns on its own, in a kind of self-study without a teacher.

Supervised learning: also called 'associative learning', it involves providing the network with a set of inputs and desired outputs. It is like learning with the help of a teacher: the so-called teacher has knowledge of the environment, represented by a set of input-output examples, while the environment itself is unknown to the neural network. The network parameters (i.e., synaptic weights) are adjusted iteratively in a step-by-step fashion under the combined influence of the training vector and the error signal. After the completion of training, the neural network is able to deal with the environment entirely by itself [32]. Among supervised networks, the FFBP NN is the most popular. In FFBP NNs, neurons are organized into layers and information is passed from the input layer to the final output layer in a unidirectional manner. The network consists of neurons (nodes, or parallel processing elements) that interconnect each layer through weights (W). A three-layer (input (i), hidden (j), and target/output (k)) FFBP NN with weights Wij and Wjk is shown in Figure 4. During training of the FFBP NN, the initial (randomized) weight values are corrected or adjusted according to the calculated error between the output and target values, and these errors are back-propagated (from right to left in Figure 4) until the minimum error criterion is achieved.

Figure 4.

A three layer feed-forward ANN model [7].

Unsupervised learning: the network is provided with inputs but not with desired outputs; the system itself must decide what features it will use to group the input data. This is often referred to as self-organization or adaptation. Provision is made for a task-independent measure of the quality of the representation that the network is required to learn, and the free parameters of the network are optimized with respect to that measure [32]. The most widely used unsupervised neural network is the Kohonen self-organizing map (KSOM).

4.11 Kohonen self-organizing map (KSOM)

The KSOM maps the input data onto a two-dimensional discrete output map by clustering similar patterns. It consists of two interconnected layers, namely, a multi-dimensional input layer and a competitive output layer with 'w' neurons (Figure 5).

Figure 5.

Kohonen self organizing map [40].

Each node or neuron 'i' (i = 1, 2, …, w) is represented by an n-dimensional weight or reference vector w_i = [w_{i1}, …, w_{in}]. The 'w' nodes can be ordered so that similar neurons are located together and dissimilar neurons are located far apart on the map. The topology of the network is indicated by the number of output neurons and their interconnections; the general network topology of a KSOM is either a rectangular or a hexagonal grid. The number of neurons (map size), w, may vary from a few dozen up to several thousand, and it affects the accuracy and generalization capability of the KSOM. The optimum number of neurons (w) can be determined by the following equation [41].

w = 5\sqrt{N}    (11)

where N = total number of data samples or records. Once ‘w’ is known, the number of rows and columns in the KSOM can be determined as:

\frac{l_1}{l_2} = \frac{e_1}{e_2}    (12)

where l_1 and l_2 = number of rows and columns, respectively; e_1 = largest eigenvalue of the training data set; e_2 = second largest eigenvalue.
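Eqs. (11) and (12) can be combined into a small helper that suggests a map size (a hypothetical helper of our own; the rounding choices are assumptions, since the equations only fix the total count and the row/column ratio):

```python
import math

def ksom_map_size(n_samples, e1, e2):
    # Eq. (11): total number of neurons w = 5 * sqrt(N)
    w = 5.0 * math.sqrt(n_samples)
    # Eq. (12): pick rows/columns so that l1/l2 ~ e1/e2 and l1*l2 ~ w
    l1 = max(1, round(math.sqrt(w * e1 / e2)))  # rows
    l2 = max(1, round(w / l1))                  # columns
    return l1, l2
```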

4.12 Training the KSOM

The KSOM is trained iteratively. Initially, the weights are randomly assigned. When an n-dimensional input vector x is presented to the network, the Euclidean distance between the input and the weight vector of each SOM neuron is computed as:

\|x - w\| = \sqrt{\sum_{i=1}^{n} (x_i - w_i)^2}    (13)

where x_i = ith component of the data sample; w_i = corresponding component of the prototype (weight) vector; \|\cdot\| denotes the Euclidean distance.

The best matching unit (BMU), also called the 'winning neuron', is the neuron whose weight vector most closely matches the input. Learning takes place between the BMU and its neighboring neurons at each training iteration 't', with the aim of reducing the distance between the weights and the input:

w(t+1) = w(t) + \alpha(t) \, h_{lm} \, (x - w(t))    (14)

where \alpha = learning rate; l and m = positions of the winning neuron and a neighboring output node, respectively; h_{lm} = neighborhood function of the BMU l at iteration t.

The most commonly used neighborhood function is the Gaussian which is expressed as:

h_{lm} = \exp\left(-\frac{\|l - m\|^2}{2\sigma(t)^2}\right)    (15)

where \|l - m\| = distance between neurons l and m on the map grid; \sigma = width of the topological neighborhood.

The training steps are repeated until convergence. After the KSOM network is constructed, homogeneous regions, that is, clusters, are defined on the map. The performance of the trained KSOM network is evaluated using two errors: the total topographic error (te) and the quantization error (qe).

The topographic error, te, is an indication of the degree of preservation of the topology of the data when fitting the map to the original data set.

te = \frac{1}{N} \sum_{i=1}^{N} u(x_i)    (16)

where u(x_i) = binary indicator equal to 1 if the first and second best matching units of the map for x_i are not adjacent, and zero otherwise.

The quantization error, qe, is an indication of the average distance between each data vector and its BMU at convergence, that is, the quality of the map fitting to the data.

qe = \frac{1}{N} \sum_{i=1}^{N} \|x_i - w_{li}\|    (17)

where wli = prototype vector of the best matching unit for xi.
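The training loop of Eqs. (13)-(15) and the quantization error of Eq. (17) can be sketched for a one-dimensional map as follows (a minimal illustration with our own simplifications: a fixed neighborhood width and a linearly decaying learning rate, neither of which is prescribed by the chapter):

```python
import math
import random

def euclid(a, b):
    # Eq. (13): Euclidean distance between an input and a weight vector
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def train_ksom(data, n_neurons, epochs=50, alpha=0.5, sigma=1.0):
    dim = len(data[0])
    w = [[random.random() for _ in range(dim)] for _ in range(n_neurons)]
    for t in range(epochs):
        a = alpha * (1.0 - t / epochs)  # decaying learning rate
        for x in data:
            # BMU: neuron whose weight vector is closest to the input
            l = min(range(n_neurons), key=lambda i: euclid(x, w[i]))
            for m in range(n_neurons):
                # Eq. (15): Gaussian neighborhood on the 1-D map grid
                h = math.exp(-((l - m) ** 2) / (2.0 * sigma ** 2))
                # Eq. (14): pull the BMU and its neighbors toward the input
                w[m] = [wj + a * h * (xj - wj) for wj, xj in zip(w[m], x)]
    return w

def quantization_error(data, w):
    # Eq. (17): mean distance between each sample and its BMU
    return sum(min(euclid(x, wi) for wi in w) for x in data) / len(data)
```

Training on two well-separated clusters should leave one prototype near each cluster, driving the quantization error toward zero.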

4.13 Type of ANN training algorithms

Training basically involves feeding training samples as input vectors through a neural network, calculating the error at the output layer, and then adjusting the weights of the network to minimize the error. The different methods for adjusting the weights are called 'training algorithms'. The objective of a training algorithm is to minimize the difference between the predicted output values and the measured output values [6]. Different training algorithms include: (i) gradient descent with momentum back propagation (GDM), (ii) Levenberg-Marquardt (LM), (iii) Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton, (iv) resilient back propagation (RBP), (v) conjugate gradient, (vi) one-step secant (OSS), (vii) cascade correlation (CC), and (viii) Bayesian regularization (BR) algorithms. Only the training algorithms used in this study are briefly described below.

4.14 Gradient descent with momentum back propagation (GDM) algorithm

This method uses back-propagation to calculate the derivatives of the performance cost function with respect to the weight and bias variables of the network. Each variable is adjusted according to gradient descent with momentum. The weight and bias update equation is:

\Delta w_{ji}(n) = \alpha \, \Delta w_{ji}(n-1) - \eta \frac{\partial E}{\partial w_{ji}}    (18)

where \Delta w_{ji}(n) = correction applied to the synaptic weight connecting neuron i to neuron j; \alpha = momentum; \eta = learning-rate parameter; E = error function. This equation is known as the generalized delta rule and is probably the simplest and most common way to train a network [37].
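One update step of Eq. (18) can be written out explicitly (an illustrative sketch; the names and default hyperparameter values are our assumptions):

```python
def gdm_update(w, grad, prev_delta, eta=0.1, alpha=0.9):
    # Eq. (18): delta_w(n) = alpha * delta_w(n-1) - eta * dE/dw
    delta = [alpha * d - eta * g for d, g in zip(prev_delta, grad)]
    new_w = [wi + di for wi, di in zip(w, delta)]
    return new_w, delta
```

The momentum term alpha * delta_w(n-1) carries over a fraction of the previous step, which smooths the descent trajectory across iterations.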

4.15 Levenberg-Marquardt (LM) algorithm

This method is a modification of the classic Newton algorithm for finding an optimum solution to a minimization problem. In particular, the LM algorithm uses the so-called Gauss-Newton approximation, which keeps the Jacobian matrix and discards second-order derivatives of the error. The LM algorithm interpolates between the Gauss-Newton algorithm and the method of gradient descent. To update the weights, it uses an approximation of the Hessian matrix:

W_{k+1} = W_k - \left[J^T J + \lambda I\right]^{-1} J^T e    (19)

where W = weights; e = errors; I = identity matrix; \lambda = learning parameter; J = Jacobian matrix (first derivatives of the errors with respect to the weights and biases); J^T = transpose of J; J^T J = approximation of the Hessian matrix. For \lambda = 0 the algorithm becomes the Gauss-Newton method; for very large \lambda it becomes the steepest descent algorithm. The \lambda parameter governs the step size and is automatically adjusted (based on the behavior of the error) at each iteration in order to secure convergence: if the error decreases between weight updates, \lambda is decreased by a factor \lambda−; conversely, if the error increases, \lambda is increased by a factor \lambda+. Both \lambda− and \lambda+ are user-defined. With the LM algorithm, the training process converges quickly as the solution is approached, because the Hessian does not vanish at the solution. The LM algorithm has large computational and memory requirements and hence can only be used for small networks. It is often characterized as more stable and efficient, and it is faster and less easily trapped in local minima than other optimization algorithms [37].
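A single LM step (Eq. (19)) is easy to express with NumPy (a sketch under our own naming; a full implementation would also adapt \lambda between iterations as described above):

```python
import numpy as np

def lm_step(w, jacobian, errors, lam):
    # Eq. (19): w_{k+1} = w_k - (J^T J + lam * I)^{-1} J^T e
    J = np.asarray(jacobian, dtype=float)
    e = np.asarray(errors, dtype=float)
    H = J.T @ J + lam * np.eye(J.shape[1])  # damped Gauss-Newton Hessian
    return np.asarray(w, dtype=float) - np.linalg.solve(H, J.T @ e)
```

For a model that is linear in its parameters, the step with \lambda = 0 reaches the least-squares solution in one iteration, which makes a convenient sanity check.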

4.16 Online and batch modes of training

Online learning updates the weights after the presentation of each exemplar, whereas batch learning updates the weights after the presentation of the entire training set. When the training data are highly redundant, the online mode can take advantage of this redundancy and provides effective solutions to large and difficult problems. On the other hand, the batch mode of training provides an accurate estimate of the gradient vector, so convergence to a local minimum is guaranteed under simple conditions [23].

4.17 Multiple linear regression (MLR)

The MLR technique attempts to model the relationship between two or more explanatory (independent) variables and a response (dependent) variable by fitting a linear equation to the observed data. The general form of an MLR model is [42]:

Y_i = \beta_0 + \beta_1 X_{1,i} + \beta_2 X_{2,i} + \cdots + \beta_k X_{k,i} + \varepsilon_i    (20)

where Y_i = ith observation of the dependent variable Y; X_{1,i}, X_{2,i}, …, X_{k,i} = ith observations of the independent variables X_1, X_2, …, X_k, respectively; \beta_0, \beta_1, \beta_2, …, \beta_k = fixed (but unknown) parameters; \varepsilon_i = random error, assumed normally distributed.

The task of regression modeling is to estimate the unknown parameters (\beta_0, \beta_1, \beta_2, …, \beta_k) of the MLR model [Eq. (20)]. The pragmatic form of the statistical regression model obtained after applying the least squares method is as follows [42]:

Y_i = b_0 + b_1 X_{1,i} + b_2 X_{2,i} + \cdots + b_k X_{k,i} + e_i    (21)

where i = 1, 2, …, n; b_0, b_1, b_2, …, b_k = estimates (unstandardized regression coefficients) of \beta_0, \beta_1, \beta_2, …, \beta_k, respectively; e_i = estimated error (or residual) for the ith observation.

Therefore, the estimate of Y is:

\hat{Y} = b_0 + b_1 X_{1,i} + b_2 X_{2,i} + \cdots + b_k X_{k,i}    (22)

The difference between the observed Y and the estimated \hat{Y} is called the residual (or residual error).

The purpose of developing MLR models is to establish a simple equation that is easy to use and interpret. MLR modeling is very useful, especially in the case of limited field data. Moreover, it is versatile, as it can accommodate any number of independent variables [43].
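Estimating the coefficients b_0, …, b_k of Eq. (21) is a standard least-squares problem; one possible sketch with NumPy (the function name is ours):

```python
import numpy as np

def fit_mlr(X, y):
    # Prepend a column of ones so b[0] plays the role of the intercept b_0
    A = np.column_stack([np.ones(len(X)), np.asarray(X, dtype=float)])
    # Least-squares estimates of the coefficients in Eq. (21)
    b, *_ = np.linalg.lstsq(A, np.asarray(y, dtype=float), rcond=None)
    return b  # b[0] = intercept, b[1:] = slopes
```

Fitting exact data y = 1 + 2x recovers the coefficients b_0 = 1 and b_1 = 2.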

4.18 The FAO-56 Penman-Monteith method

The FAO-56 PM method is recommended as the standard method for estimating ETo at locations where measured lysimeter data are not available. The equation for the estimation of daily ETo can be written as [3]:

$$ET_o = \frac{0.408\,\Delta\,(R_n - G) + \gamma\,\dfrac{900}{T + 273}\,W_s\,(e_s - e_a)}{\Delta + \gamma\,(1 + 0.34\,W_s)} \qquad (23)$$

where ETo = reference evapotranspiration calculated by the FAO-56 PM method (mm day−1); Rn = daily net solar radiation (MJ m−2 day−1); γ = psychrometric constant (kPa °C−1); Δ = slope of the saturation vapor pressure versus air temperature curve (kPa °C−1); es and ea = saturation and actual vapor pressure (kPa), respectively; T = average daily air temperature (°C); G = soil heat flux (MJ m−2 day−1); Ws = daily mean wind speed (m s−1).

Owing to the unavailability of lysimeter measurements, the ETo values obtained from the above equation are used as target data for the ANN models.
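Eq. (23) translates directly into code. The function below is an illustrative sketch (the function and argument names are assumptions); it expects the intermediate terms (Δ, γ, es, ea) to have been computed beforehand as described in FAO-56:

```python
def fao56_pm_eto(delta, rn, g, gamma, t, ws, es, ea):
    """Daily reference evapotranspiration (mm/day) from Eq. (23).

    delta : slope of saturation vapor pressure curve (kPa/degC)
    rn    : daily net radiation (MJ m-2 day-1);  g : soil heat flux (MJ m-2 day-1)
    gamma : psychrometric constant (kPa/degC);   t : mean daily air temperature (degC)
    ws    : daily mean wind speed (m/s); es, ea : saturation/actual vapor pressure (kPa)
    """
    num = 0.408 * delta * (rn - g) + gamma * (900.0 / (t + 273.0)) * ws * (es - ea)
    den = delta + gamma * (1.0 + 0.34 * ws)
    return num / den
```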

5. Methodology

For the purpose of this study, 15 different climatic locations distributed over four agro-ecological regions (AERs) were selected: Parbhani, Kovilpatti, Bangalore, Solapur, and Udaipur (semi-arid); Anantapur and Hissar (arid); Raipur, Faizabad, Ludhiana, and Ranichauri (sub-humid); and Palampur, Jorhat, Mohanpur, and Dapoli (humid). Daily climate data of Tmin, Tmax, RHmin, RHmax, Ws, and Sra for a period of 5 years (January 1, 2001 to December 31, 2005) were collected from the All India Coordinated Research Project on Agrometeorology (AICRPAM), Central Research Institute for Dryland Agriculture (CRIDA), Hyderabad, Telangana, India. These data were used for the development and testing of various ANN-based ETo models. Owing to the unavailability of lysimeter-measured ETo values for these stations, ETo was estimated by the FAO-56 PM method, which has been adopted as the standard equation for computing ETo and calibrating other equations [10]. A normalization technique was applied to both the input and target data before training and testing so that all data points lie between 0 and 1; the normalization process removes the cyclicity of the data. Each variable, Xi, in the data set was normalized (Xi,norm) between 0 and 1 by dividing its value by the upper limit of the data set, Xi,max. The resulting data were then used for mapping.

$$X_{i,\mathrm{norm}} = X_i / X_{i,\max} \qquad (24)$$

ANN-simulated ETo was converted back to its original form by a denormalization procedure. The data from 2001 to 2005 were split into training (70% of 2001–2004), validation (30% of 2001–2004), and testing (2005) sets. The ANN models, consisting of one hidden layer (sigmoid transfer function) and one output layer (linear transfer function), were trained with the LM algorithm. The parameters fixed after a number of trials were: RMSE tolerance = 0.0001, learning rate = 0.65, momentum rate = 0.5, epochs = 500, and initial weight range = −0.5 to 0.5. The developed ANN models were compared with basic statistical MLR models and evaluated using the error functions described in Table 2. The training window of the model contains general information used for training the networks, such as the error tolerance, the Levenberg parameter (lambda), and the maximum number of simulation cycles. For weight selection there are two options: the weights can be randomized, or they can be read from an existing weight file from a previous training run.
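The normalization of Eq. (24), its inverse, and the chronological data split described above can be sketched as follows. The helper names are illustrative, and since the chapter does not state whether the 70/30 training/validation split is sequential or random, a sequential split is assumed:

```python
import numpy as np

def normalize(x):
    """Scale each column to (0, 1] by dividing by its maximum (Eq. 24)."""
    x = np.asarray(x, dtype=float)
    xmax = x.max(axis=0)
    return x / xmax, xmax

def denormalize(x_norm, xmax):
    """Recover original units from normalized ANN output."""
    return np.asarray(x_norm) * xmax

def split_by_year(years, train_frac=0.7):
    """Index arrays: 70%/30% of 2001-2004 for training/validation, 2005 for testing."""
    years = np.asarray(years)
    devel = np.flatnonzero(years <= 2004)     # development period 2001-2004
    n_train = int(train_frac * len(devel))
    return devel[:n_train], devel[n_train:], np.flatnonzero(years == 2005)
```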

| Evaluation criteria | Formula |
|---|---|
| Root mean squared error (RMSE) | $\mathrm{RMSE} = \sqrt{\dfrac{1}{n}\sum_{i=1}^{n}(T_i - O_i)^2}$ |
| Coefficient of determination (R²) | $R^2 = \dfrac{\left[\sum_{i=1}^{n}(O_i - \bar{O})(T_i - \bar{T})\right]^2}{\sum_{i=1}^{n}(O_i - \bar{O})^2\,\sum_{i=1}^{n}(T_i - \bar{T})^2}$ |
| Ratio of average output and target ETo values (Rratio) | $R_{\mathrm{ratio}} = \bar{O}/\bar{T}$ |

Table 2.

Performance evaluations of ANN and MLR models.

where $T_i$ and $O_i$ = target (FAO-56 PM ETo) and output (ETo resulting from MLR or ANN models) values at the ith step, respectively; n = number of data points; $\bar{T}$ and $\bar{O}$ = averages of the target (FAO-56 PM ETo) and output (ETo from MLR or ANN models) values, respectively.
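The three evaluation criteria of Table 2 can be implemented directly. The sketch below is illustrative, not the chapter's own code:

```python
import numpy as np

def rmse(t, o):
    """Root mean squared error between target T and output O (Table 2)."""
    t, o = np.asarray(t), np.asarray(o)
    return np.sqrt(np.mean((t - o) ** 2))

def r_squared(t, o):
    """Coefficient of determination: squared correlation between T and O."""
    t, o = np.asarray(t), np.asarray(o)
    num = np.sum((o - o.mean()) * (t - t.mean())) ** 2
    den = np.sum((o - o.mean()) ** 2) * np.sum((t - t.mean()) ** 2)
    return num / den

def r_ratio(t, o):
    """Ratio of mean output to mean target; >1 indicates average overestimation."""
    return np.mean(o) / np.mean(t)
```

Note that a model can have R² = 1 (perfect correlation) yet still over- or under-estimate ETo, which is why Rratio and RMSE are reported alongside it.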

6. Results and discussion

6.1 Development of ANN models for daily ETo estimation

An ANN model with six climatic variables (Tmax, Tmin, RHmax, RHmin, Ws, and Sra) was trained and tested to evaluate the feasibility of ANN models corresponding to the FAO-56 PM conventional ETo method for 15 individual locations in India. To highlight the necessity of using the more complex ANN models, the results obtained using MLR models are also presented.

6.2 Training of ANN models for daily ETo estimation

All the ANN models were trained following the procedure described in the methodology, and after each training run three performance indices (RMSE, R², and Rratio) were calculated to find the optimum neural network. Several runs with different architectural configurations were used to determine the optimal number of hidden neurons. The optimum network was selected as the model with the minimum RMSE and maximum R² values; it is worth mentioning that Rratio is used only to determine whether a model overestimated or underestimated ETo. Training with a higher number of hidden nodes might increase the performance of ANN models, but it requires more computation time and complicates the architecture, as the training has to complete a number of epochs [7]. Therefore, to avoid this difficulty, the search for the optimum node count was limited to a trial of 1–15 hidden nodes (i.e., configurations beyond 15 hidden nodes were not tried). Figure 6 shows the relationship between RMSE and the number of hidden nodes of the ANN models for four locations (Parbhani, Hissar, Faizabad, and Dapoli) during training. These locations were chosen randomly, one from each agro-ecological region, such that Parbhani, Hissar, Faizabad, and Dapoli represent semi-arid, arid, sub-humid, and humid climates, respectively.

Figure 6.

RMSE variations with number of hidden nodes for ANN models.

For the ANN models, the best network resulted at i + 1 hidden nodes (where i = number of nodes in the input layer) for most of the locations. Thus, i + 1 hidden nodes are sufficient to model the ETo process using ANN models [13, 14, 15, 16, 44, 45, 46]. Table 3 shows the performance statistics of the ANN models for the 15 locations during training; only the results for the optimal network structure, obtained at i + 1 hidden nodes, are summarized there.
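To illustrate the trial-and-error search over hidden node counts, the sketch below trains a minimal one-hidden-layer network (sigmoid hidden layer, linear output layer) with plain full-batch gradient descent and scores each configuration by training RMSE. This is a simplified stand-in for the Levenberg-Marquardt training actually used in the chapter; all names and hyperparameters are illustrative:

```python
import numpy as np

def train_ann(X, y, hidden, epochs=500, lr=0.5, seed=0):
    """Train a tiny MLP (sigmoid hidden, linear output) by gradient descent."""
    rng = np.random.default_rng(seed)
    w1 = rng.uniform(-0.5, 0.5, (X.shape[1], hidden))  # initial weights in [-0.5, 0.5]
    b1 = np.zeros(hidden)
    w2 = rng.uniform(-0.5, 0.5, hidden)
    b2 = 0.0
    for _ in range(epochs):
        h = 1.0 / (1.0 + np.exp(-(X @ w1 + b1)))  # sigmoid hidden activations
        o = h @ w2 + b2                            # linear output
        err = o - y
        gz = np.outer(err, w2) * h * (1 - h)       # backprop through sigmoid
        w1 -= lr * X.T @ gz / len(y)
        b1 -= lr * gz.mean(axis=0)
        w2 -= lr * h.T @ err / len(y)
        b2 -= lr * err.mean()
    return w1, b1, w2, b2

def predict(X, params):
    w1, b1, w2, b2 = params
    h = 1.0 / (1.0 + np.exp(-(X @ w1 + b1)))
    return h @ w2 + b2

def best_hidden(X, y, max_nodes=15):
    """Trial runs over 1..max_nodes hidden nodes; return the count with lowest RMSE."""
    scores = {}
    for n in range(1, max_nodes + 1):
        params = train_ann(X, y, n)
        scores[n] = np.sqrt(np.mean((predict(X, params) - y) ** 2))
    return min(scores, key=scores.get), scores
```

In practice the selection would be made on validation rather than training RMSE, as described in the methodology.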

| AER | Location | RMSE | R² | Rratio |
|---|---|---|---|---|
| Semi-arid | Parbhani | 0.141 | 0.991 | 0.997 |
| | Solapur | 0.271 | 0.969 | 1.000 |
| | Bangalore | 0.296 | 0.972 | 1.005 |
| | Kovilpatti | 0.254 | 0.991 | 1.000 |
| | Udaipur | 0.391 | 0.952 | 1.003 |
| Arid | Anantapur | 0.363 | 0.972 | 0.986 |
| | Hissar | 0.052 | 0.999 | 1.000 |
| Sub-humid | Raipur | 0.255 | 0.981 | 0.982 |
| | Faizabad | 0.060 | 0.999 | 1.001 |
| | Ludhiana | 0.289 | 0.977 | 0.999 |
| | Ranichauri | 0.909 | 0.411 | 1.004 |
| Humid | Palampur | 0.177 | 0.988 | 0.999 |
| | Jorhat | 0.615 | 0.943 | 1.001 |
| | Mohanpur | 0.377 | 0.904 | 1.002 |
| | Dapoli | 0.150 | 0.990 | 1.000 |

Table 3.

Performance of ANN based ETo models during training.

RMSE = mm day−1; R2 and Rratio = dimensionless.

6.3 FAO-56 PM-based ANN models

The ETo process is a function of various climatic factors (Tmax, Tmin, RHmax, RHmin, Ws, and Sra); it is therefore pertinent to account for the combined influence of all the climatic parameters in ETo estimation. The ANN models corresponding to the FAO-56 PM method were developed with Tmax, Tmin, RHmax, RHmin, Ws, and Sra as inputs and the FAO-56 PM ETo as target. Table 4 shows the performance statistics of the ANN and MLR models for the 15 locations during testing. Comparison of the results indicated that the ANN models performed better than the MLR models for all locations except Bangalore, as confirmed by the lower RMSE (mm day−1) and higher R² values of the ANN models.

| AER | Location | MLR RMSE | MLR R² | MLR Rratio | ANN RMSE | ANN R² | ANN Rratio |
|---|---|---|---|---|---|---|---|
| Semi-arid | Parbhani | 0.308 | 0.963 | 1.002 | 0.115 | 0.995 | 0.994 |
| | Solapur | 0.313 | 0.959 | 1.003 | 0.228 | 0.979 | 0.988 |
| | Bangalore | 0.159 | 0.980 | 1.000 | 0.201 | 0.968 | 0.994 |
| | Kovilpatti | 0.233 | 0.977 | 0.999 | 0.200 | 0.984 | 1.004 |
| | Udaipur | 0.295 | 0.975 | 1.001 | 0.119 | 0.996 | 0.992 |
| Arid | Anantapur | 0.275 | 0.977 | 1.000 | 0.222 | 0.984 | 0.998 |
| | Hissar | 0.434 | 0.951 | 0.999 | 0.280 | 0.980 | 1.000 |
| Sub-humid | Raipur | 0.420 | 0.943 | 1.002 | 0.296 | 0.972 | 1.005 |
| | Faizabad | 0.357 | 0.957 | 1.002 | 0.286 | 0.973 | 1.011 |
| | Ludhiana | 0.348 | 0.971 | 0.999 | 0.279 | 0.981 | 1.000 |
| | Ranichauri | 0.265 | 0.961 | 0.999 | 0.137 | 0.989 | 1.005 |
| Humid | Palampur | 0.313 | 0.952 | 1.003 | 0.228 | 0.979 | 1.031 |
| | Jorhat | 0.151 | 0.978 | 1.000 | 0.137 | 0.985 | 1.019 |
| | Mohanpur | 0.170 | 0.983 | 1.001 | 0.123 | 0.991 | 1.007 |
| | Dapoli | 0.177 | 0.973 | 1.001 | 0.152 | 0.981 | 1.009 |

Table 4.

Performance of ANN and MLR based ETo models during testing.

RMSE = mm day−1; R2 and Rratio = dimensionless.

The Rratio values of the MLR models for the 15 locations are close to one, which indicates that, on average, these models neither over- nor under-estimated ETo. However, the nonzero RMSE values indicate that on a daily basis these models both over- and under-estimated individual ETo values. Although the ANN models performed better than the MLR models, in some locations the ANN models also over- or under-estimated ETo; for example, the ANN models overestimated (Rratio > 1) ETo at Palampur. The over- and under-estimation by the ANN models at these locations was less than 3%, which is negligible. The overall ranking was ANN > MLR for most locations, except Bangalore, where MLR > ANN. The results suggest that the non-linearity of the ETo process can be adequately modeled using ANN models.

The scatter plots of the FAO-56 PM ETo and the ETo estimated with the ANN models for the 15 climatic locations in India are shown in Figure 7; they confirm the statistics given in Table 4. Regression analysis was performed between the FAO-56 PM ETo and the ANN-estimated ETo, and the best-fit lines are shown in Figure 7. The R² values for the ANN models were found to be >0.968, and the fit-line equations (y = a0x + a1) in Figure 7 gave a0 and a1 coefficients close to one and zero, respectively. Owing to the superior performance of the ANN models over the MLR models, the time series plots of these models for 1 year of testing data at four selected locations (Parbhani, Hissar, Faizabad, and Dapoli) are shown in Figure 8. The time series plots indicate that ETo estimated using the ANN models matched the FAO-56 PM ETo well, except for a few peak values at Faizabad.

Figure 7.

Scatter plots of ANN models estimated ETo with respect to FAO-56 PM ETo for 15 climatic locations in India.

Figure 8.

Time series plots of ANN and FAO-56 PM ETo for (a) Parbhani, (b) Hissar, (c) Faizabad, and (d) Dapoli locations.

7. Summary and conclusions

Evapotranspiration is an important and one of the most difficult components of the hydrologic cycle to quantify accurately. Prior to designing any irrigation system, information on crop water requirements or crop evapotranspiration is needed, which can be calculated using reference evapotranspiration. There exist direct measurement methods (lysimeters) and indirect estimation procedures (physically based and empirical) for modeling ETo. Direct methods are arduous and costly and require skilled manpower to collect accurate measurements. The difficulty in estimating ETo with the indirect physically based methods lies in the unavailability of all necessary climate data, whereas the application of empirical methods is limited because these methods are not suitable for all climatic conditions and require local calibration. ANNs are efficient in modeling complex processes without formulating any mathematical relationships for the physical process. This study was undertaken to develop ANN models corresponding to the FAO-56 PM conventional ETo method for 15 individual stations in India.

The potential of ANN models corresponding to the FAO-56 PM method was evaluated for 15 locations. The ANN models were developed with six inputs (Tmax, Tmin, RHmax, RHmin, Ws, and Sra) and the FAO-56 PM ETo as target. The optimum number of hidden neurons was finalized with a trial of 1–15 hidden nodes; the ANN models gave the lowest RMSE values at i + 1 (i = number of inputs) hidden nodes. Comparison of the MLR and ANN models indicated that the ANN models performed better for all locations. However, on average, the over- and under-estimation of ETo by the MLR models (<3%, which is negligible) was smaller than that of the ANN models. In brief, based on the above discussion of ETo modeling, the following specific conclusions are drawn:

  • For estimating ETo using ANN model, a network of single hidden layer with i + 1 (i = number of input nodes) number of hidden nodes was found as adequate.

  • ANN-based ETo estimation models performed better than the MLR models for all locations.

However, it should be noted that only climate data from different agro-ecological regions of India was used in this analysis and the results might be different for various climates in other countries.

How to cite and reference


Sirisha Adamala (April 3rd 2019). Nonlinear Evapotranspiration Modeling Using Artificial Neural Networks, Advanced Evapotranspiration Methods and Applications, Daniel Bucur, IntechOpen, DOI: 10.5772/intechopen.81369. Available from:
