Climate Change Detection and Modeling in Hydrology

Written By

Saeid Eslamian, Kristin L. Gilroy and Richard H. McCuen

Submitted: 08 December 2010 Published: 06 September 2011

DOI: 10.5772/24550

From the Edited Volume

Climate Change - Research and Technology for Adaptation and Mitigation

Edited by Juan Blanco and Houshang Kheradmand

1. Introduction

Detection of change is defined as the process of demonstrating that climate, or a system affected by climate, has changed in some defined statistical sense, without providing a reason for that change. Attribution is defined as the process of evaluating the relative contributions of multiple causal factors to a change or event, with an assignment of statistical confidence. Attribution, however, presupposes that the observed change can first be detected (IPCC 2010).

Attribution to a change in climatic conditions includes assessments that attribute an observed change in a variable of interest to a specific observed change in climate, based on process knowledge and on the relative importance of that change in climate in determining the observed impacts (Hao et al. 2008; Liu and Xia 2011). The confidence associated with the data, models, methods, and other factors used in the study should also be evaluated (IPCC 2010).

Seibert et al. (2010) used three different change detection modeling approaches, employing a modified version of the HBV (Hydrologiska Byråns Vattenbalansavdelning) model (Bergstrom 1976, 1992), and concluded that catchment-scale runoff increases following severe wildfire. Applying the HBV model as a change detection tool indicated increases in peak flows following severe wildfire and the related road building and harvesting of dead and damaged forest vegetation.

Parameter uncertainty is well known in hydrologic and climatologic modeling, yet it is seldom addressed in modeling approaches for detecting change (Pappenberger and Beven 2006; Seibert and McDonnell 2010). Employing a large number of parameter sets rather than a single set of parameter values facilitates assessment of the associated uncertainty.

The detection of climate change impacts on the observed climate and on elements of the hydrological cycle has progressed considerably in recent years (Amiri and Eslamian, 2010). Based on climate model simulations, optimal detection methods have been used to separate the observed responses to greenhouse gas emissions from those of other external forcings at large spatial scales. At present, the detection of anthropogenic influence is not yet possible for all climate variables, and it remains difficult to attribute observed changes in climate or in variables of interest at spatial scales smaller than about five thousand kilometers and temporal scales of less than about fifty years. For basin aquifers recharged by precipitation or surplus irrigation and strongly influenced by human activities, detection studies have focused mainly on analytical approaches that link physical impacts to changes in temperature or precipitation. For basins with better observational data and greater sensitivity to climate change, the use of formal detection methods to identify the pattern responses of the hydrological cycle to external forcing is a valuable and promising area of further research (Liu and Xia 2011).

Xoplaki et al. (2008) investigated the data requirements for climate change detection and modeling research in the Mediterranean. They indicated that adequate data availability allows the validation of scientific results on climate change detection and attribution.

Long records of climate and hydrologic data are essential for studies of climate variability and change. New et al. (1999, 2000) have developed fields for many climate variables; it is essential to develop these further and to extend them to hydrological variables such as discharge and runoff, both for climate variability and change studies and for climate model validation.

Most investigations of climate variability and change detection have focused only on temperature, because temperature is well represented by the available network and temperature measurements exhibit relatively long correlation decay lengths. Both future impacts and those of past events are, however, much more dependent on changes in precipitation. These changes are not only vital for hydrology, but are also much more important than temperature for many other sectors, such as agriculture and range management. Studies of large-scale changes in precipitation are hampered by the need to obtain access to considerably more precipitation data than is conventionally available, and a similar situation exists for runoff data. Climate change detection studies need to be undertaken on a global scale, yet the available networks of both runoff and precipitation data are inadequate. At present, the best that can be achieved are investigations at the regional and catchment scales (Cihlar et al. 2000).

An alternative to the paired catchment approach for detecting the impacts of disturbance on catchment-scale hydrology combines rainfall-runoff modeling to account for natural fluctuations in daily streamflow, uncertainty analysis using the generalized likelihood uncertainty estimation (GLUE) method to identify and separate hydrologic model uncertainty from unexplained variation, and generalized least squares (GLS) regression change detection models to provide a formal experimental framework for detecting changes in daily streamflow relative to variations in daily hydrologic and climatic data (Zégre et al. 2010).

Precipitation data indicate both increasing and decreasing trends for different regions of the world. Zhang et al. (2007) detected a human influence on twentieth-century precipitation trends.

The main objective of this study is to describe statistical techniques for detecting changes in hydrological events, with flood records selected for this purpose. Statistical tests and distributions, significance levels and confidence intervals, risk and uncertainty, and nonstationarity are discussed in detail for flood series.

2. Detection of change in flood records

Graphical analysis is generally the first attempt at detecting change in a flood record. Unfortunately, the natural year-to-year variation in flooding greatly exceeds the variation expected from climate change in recent years, so the latter would likely not be visually evident in a graphical portrayal of a flood record. Similarly, it takes a considerable level of urban development before the hydrologic effects of urbanization can be detected graphically, especially if the trend is gradual rather than abrupt. Whether change is due to global warming or urbanization, we cannot be certain whether the nonstationarity will alter the probability distribution of floods or just its moments. Thus, more sophisticated methods of detection are needed.

Commonly, the next step in detection of change is with statistical methods. Some of the problems with statistical detection of the effects of climate change include uncertainty in the distribution from which the sample was drawn, outliers, poorly measured values, no knowledge as to when the climate change began to significantly influence flooding, and the compounding effects of land cover change such as deforestation. In addition to these factors, identifying a statistically significant change requires the specification of a statistical level of significance. The value selected is a central factor in statistical decision making, yet a systematic way of identifying the optimum level of significance is not known. The selection of a level of significance is not a trivial decision as the power of the test will depend on the level selected.

Eslamian and Hasanzadeh (2009) investigated the existence of trends in wind speed and evaluated the effect of climate change on the frequency analysis of wind speed in Iran. The purpose of their study was to present recent trends and variations in measured wind speed at twenty-two gauging stations across Iran. In addition, the effect of climate change on frequency analysis and heterogeneity was evaluated. To understand wind behavior over time, trend tests and frequency analyses were performed to evaluate wind magnitude and duration.

3. Selection of statistical method to detect trend

Statistical methods are generally designed to be most sensitive to one type of change, such as a change in central tendency, a change in dispersion, or a change in the statistical distribution. Change can be gradual or abrupt, and different statistical methods should be applied to each type of data. A change in a data set characterized by gradually varying flows may not be detected if the statistical method applied is more sensitive to abrupt change. Evidence does not currently exist as to the distributional effect that climate change introduces into a flood record. For example, will prolonged climate change cause annual maximum floods to follow a Generalized Extreme Value (GEV) distribution rather than the commonly accepted log-Pearson type III (LP3) distribution? Bulletin 17B, which was developed to estimate flood frequencies and, therefore, flood risk, assumes that hydrologic data follow an LP3 distribution (Interagency 1982). However, many recent studies of precipitation data are based on other distributions. For example, Koutsoyiannis (2004), Martins and Stedinger (2000), Gellens (2002), and Kharin and Zwiers (2005) selected the GEV distribution to model extreme events, while Wilby and Wigley (2002) and Semenov and Bengtsson (2002) chose the gamma distribution to model daily precipitation events. Therefore, if agreement does not currently exist on the appropriate distribution to represent hydrologic data, it will be difficult to determine the appropriate distribution once the concept of nonstationarity is introduced. To compound the problem, climate change is expected to be gradual, so the distribution may be subject to continual change.

In addition to the appropriate distribution, the effects of climate change on the moments of a distribution are unknown. Studies have suggested that climate change will increase the more intense rainfalls but have little effect on total annual rainfall (Hennessy et al. 1997; Karl and Knight 1998; Wilby and Wigley 2002). Kharin and Zwiers (2005) found a significant change in the location and scale parameters of the GEV distribution in a global analysis of precipitation extremes; however, the effects of climate change are expected to vary regionally, so global analyses may not be applicable at the regional level. Since the expected changes in the statistical distribution and in its moments are not known and must therefore be assumed, the ability of statistical tests to decide effectively whether or not change has occurred is reduced. Without this knowledge, the best statistical method to detect climate change cannot be selected without recognizing the importance of this type of uncertainty.

4. The assumption of a start time

Before the problem of modeling can be solved, the first issue that must be addressed is detection of change. The most obvious question is: when did the effect of climate change begin to significantly influence the hydrologic variable of interest, e.g., annual maximum peaks? For example, Olsen et al. (1999) varied the start and end dates of flood records analyzed for gauges in the Missouri and Mississippi River basin. Based on the linear regression results, they found that different record lengths within the same flood record influenced the significance of the trend detected. Therefore, knowledge of the time at which nonstationarity began is necessary in order to correctly identify trends.

Identifying the start time of nonstationarity is important because the time at which climate change is assumed to have become influential will affect the model used to represent the hydrologic change. If incorrectly selected, the model can greatly affect projected changes in hydrologic data. For example, assume that climate change is known to introduce a linear trend in the hydrologic variable. If the start date is quite uncertain, then the slope of the linear trend will be biased depending on the assumed start time. An incorrectly assumed early start time will lead to an underpredicted slope; likewise, assuming a late start time would result in a relatively steep slope and long-term overprediction. Figure 1 shows the mean annual discharge (cfs) for USGS gauge 05464500 at Cedar Rapids, Iowa. A linear trend was fit to the data with two start times, 1903 and 1960, represented by the solid and dashed regression lines, respectively. If extrapolated to the year 2050, the model based on a 1960 start time projects a mean discharge that is 12% greater than the model based on the earlier 1903 start time. Therefore, the uncertainty in predictions due to inaccurate start times can be significant. A statistical test that is sensitive to the start date needs to have high statistical power; otherwise, incorrect start times will result, with subsequent impacts on models, future projections, and risk estimation.
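A minimal sketch of this start-time sensitivity is shown below. The annual series, start years, and extrapolation target are assumptions for illustration (the Cedar Rapids record itself is not reproduced here); the point is only that the fitted slope, and hence the extrapolated value, depends on the assumed start year.

```python
import numpy as np
from scipy import stats

# Hypothetical annual mean discharge series (cfs), 1903-2010, with a mild
# upward drift plus large year-to-year variability (illustrative only).
rng = np.random.default_rng(1)
years = np.arange(1903, 2011)
flow = 3000 + 8.0 * (years - 1903) + rng.normal(0, 900, years.size)

def extrapolate(start_year, target_year):
    """Fit a linear trend to the record beginning at start_year and
    extrapolate the fitted line to target_year."""
    mask = years >= start_year
    slope, intercept, *_ = stats.linregress(years[mask], flow[mask])
    return intercept + slope * target_year

q_1903 = extrapolate(1903, 2050)
q_1960 = extrapolate(1960, 2050)
print(f"2050 projection, 1903 start: {q_1903:.0f} cfs")
print(f"2050 projection, 1960 start: {q_1960:.0f} cfs")
print(f"relative difference: {100 * (q_1960 - q_1903) / q_1903:.1f}%")
```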

The Anacostia River at Hyattsville, MD, can be used to illustrate the effect of start time on the trend of peak discharge rates. Figure 2 shows the annual maximum peak discharge from 1939 to 1988. In the late 1950’s, urban development influenced flood flows, with the effect apparent in Figure 2. Linear models were used to model the trend in flood peaks as a function of time:

1955-1988:  qp = 3444 + 54.11 t,   t = 1 in 1955   (1)
1960-1988:  qp = 4178 + 29.57 t,   t = 1 in 1960   (2)

Figure 1.

Variation in Trend Modeled based on Different Start Times for the Mean Annual Discharge (cfs) for the USGS Cedar Rapids Gauge (05464500) with the Dashed and Solid Line Representing a 1960 and 1903 Start Time, Respectively

While the record lengths are similar (34 and 29 years, respectively), the equations are quite different. The estimated floods for the year 2011 would be 6528 cfs (185 m3/s) and 5716 cfs (162 m3/s) for Eqs. 1 and 2, respectively. This represents a difference of 13% based solely on the start time. This example illustrates the sensitivity of estimated flood magnitudes to the start time for modeling time trends.
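A quick arithmetic check of the two trends follows; the intercept of Eq. 1 is taken as 3444, reconstructed so that the equation reproduces the 6528 cfs estimate quoted above.

```python
# Worked check of Eqs. 1 and 2 for the year 2011 (t = 1 in 1955 and in 1960, respectively).
year = 2011
t1 = year - 1955 + 1          # t = 57 for Eq. 1
t2 = year - 1960 + 1          # t = 52 for Eq. 2
q1 = 3444 + 54.11 * t1        # Eq. 1 -> about 6528 cfs
q2 = 4178 + 29.57 * t2        # Eq. 2 -> about 5716 cfs
print(q1, q2, q1 - q2)        # the roughly 13% difference cited in the text
```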

Figure 2.

Peak Discharge Data for Anacostia River at Hyattsville, Maryland.

Assuming that the start time can be reasonably estimated, the next question of interest is: What effect will climate change have on the physical processes that determine the nature or characteristics of the hydrologic response? Will the climatic change influence the statistics of the hydrologic variable, e.g., increase the mean or the variance, or will it change the distribution, e.g., from an LP3 to a GEV? The accuracy of projected discharges will greatly depend on the change assumed, which will subsequently influence the accuracy of risk estimates.

5. Selection of statistical distributions

As mentioned in the discussion of statistical method selection, the appropriate distribution for hydrologic data is unknown. While this leads to difficulties in trend detection, it also influences the projection of hydrologic events, such as flooding. Bulletin 17B currently recommends the LP3 distribution; however, the GEV distribution is recommended by many studies as well (Martins and Stedinger 2000). Both distributions represent extreme data, yet the extreme events projected by each can differ. For example, Figure 3 compares the frequency curves fit to the Cedar Rapids annual maximum peak discharges at USGS gauge 05464500 with both the LP3 and GEV distributions. The GEV distribution projects a 100-yr flood that is 12.6% greater than that of the LP3 distribution. Therefore, depending on the distribution selected, engineers may over- or underestimate the 100-yr event when designing flood management structures. It is therefore important that the correct probability distribution be selected for hydrologic data used in design and policy development.

Figure 3.

Frequency Curve for LP3 and GEV Distribution of Cedar Rapids Annual Maximum Peak Discharge Data.
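A rough sketch of how such a comparison might be made is given below. It uses scipy's maximum-likelihood fits (not the Bulletin 17B moment procedure) and a synthetic record as a stand-in for the Cedar Rapids data, so the resulting percentage difference is illustrative only.

```python
import numpy as np
from scipy import stats

# Synthetic annual maximum peak discharges (cfs) stand in for the gauge record.
rng = np.random.default_rng(7)
peaks = 10 ** rng.normal(4.0, 0.25, 80)   # ~80 years of lognormal-like maxima

# Log-Pearson Type III: fit a Pearson Type III distribution to log10(Q).
logs = np.log10(peaks)
skew, loc, scale = stats.pearson3.fit(logs)
q100_lp3 = 10 ** stats.pearson3.ppf(0.99, skew, loc=loc, scale=scale)

# Generalized Extreme Value: fit directly to the untransformed maxima.
shape, loc_g, scale_g = stats.genextreme.fit(peaks)
q100_gev = stats.genextreme.ppf(0.99, shape, loc=loc_g, scale=scale_g)

print(f"LP3 100-yr flood: {q100_lp3:,.0f} cfs")
print(f"GEV 100-yr flood: {q100_gev:,.0f} cfs")
print(f"relative difference: {100 * (q100_gev - q100_lp3) / q100_lp3:.1f}%")
```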

6. Selection of statistical models

In addition to the start time and distribution selection, uncertainties in future predictions will result from the model form selected to represent the change being analyzed. For example, evidence points to a nonlinear trend in hydrologic data as a result of climate change, but some global models suggest an increasing function while other models suggest a decreasing function because of policies that control CO2 emissions. Likewise, even when urbanization is known to influence measured flood magnitudes, it has been difficult to identify the model structure that best approximates the temporal effects of changes in the physical processes associated with urban land cover change. Linear trends are often assumed because more complex functions do not lead to greater accuracy. Yet the assumed model structure will dictate the magnitude of floods projected for future land cover conditions. Assuming an incorrect functional form to represent the trend of increasing flood discharge rates will influence the peak discharges estimated for the future. This is another source of uncertainty, as an incorrect model structure can lead to overprediction or underprediction of design floods and their associated risks.

To illustrate the potential effect of model structure on floods estimated for future times, the flood series for the Anacostia River (see Figure 2) was fitted for the 1955-1988 period using a linear model (Eq. 1) and the following power or log-linear model:

1955-1988:  qp = 2384 t^0.153   (3)

Based on this power model form, the 2011 estimated discharge would be 4425 cfs (125 m3/s), which differs from the discharge estimated using Eq. 1 by 2103 cfs (59.5 m3/s), or 38.4%. The effect of model structure is significant, and this issue must be considered in any attempt to model the effects of climate change on hydrologic data.
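The same check can be made for the power model against the linear model (the Eq. 1 intercept again taken as 3444); the 38.4% figure appears to be expressed relative to the mean of the two estimates, which is how it is computed here.

```python
t = 2011 - 1955 + 1                 # t = 57, with t = 1 in 1955
q_linear = 3444 + 54.11 * t         # Eq. 1 -> about 6528 cfs
q_power = 2384 * t ** 0.153         # Eq. 3 -> about 4425 cfs
diff = q_linear - q_power           # about 2103 cfs
print(q_linear, q_power, diff)
print(100 * diff / (0.5 * (q_linear + q_power)))   # about 38%, relative to the mean of the two
```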

This same problem will influence the accuracy of modeling the effects of hydrologic nonstationarity due to a changing climate. It is difficult even to detect whether or not climate change has introduced systematic variation into a flood record, let alone to identify the structural form of the temporal change that it induces. Much effort will need to be expended on the detection of change, and identifying the best model structure will be a central modeling issue. The model structure finally adopted will greatly influence assessments of future flood risk and the design of hydrologic and hydraulic infrastructure with design lives that will span the period of climate change.

7. Confidence intervals under changing conditions

The third issue important to the modeler and to policy makers is: How can confidence intervals be computed on projected discharges when the distribution and parameters of future discharges are unknown? Given the lack of certainty in the distribution of climate-affected discharges, the most obvious choice of methods for computing confidence intervals would be those used for linear regression analysis. This approach requires a minimum of inputs, such as the standard error of estimate, the sample size of the existing record, and the standard deviation of the time variable. One problem with this approach is the lack of stationarity. Confidence intervals computed using traditional methods assume stationarity. A new approach will be needed. Uncertainty associated with the nonstationarity will likely lead to much wider confidence intervals on hydrologic variables such as peak discharge rates. An approach based on Monte Carlo simulation for different levels of nonstationarity may be necessary to produce more accurate assessments of the confidence of projected discharges.
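A minimal sketch of the Monte Carlo idea follows, with an assumed linear trend whose slope is itself uncertain; the numerical values are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_projection(n_years=50, n_sims=10_000, base=5000.0,
                        trend=20.0, trend_sd=10.0, noise_sd=800.0):
    """Monte Carlo projection of a peak discharge n_years ahead when the
    trend itself (the nonstationarity) is uncertain."""
    # Each simulation draws its own trend slope and its own random error.
    slopes = rng.normal(trend, trend_sd, n_sims)
    errors = rng.normal(0.0, noise_sd, n_sims)
    return base + slopes * n_years + errors

proj = simulate_projection()
lower, upper = np.percentile(proj, [2.5, 97.5])
print(f"95% interval on the projected discharge: {lower:.0f} to {upper:.0f} cfs")
```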

With the current state of the art, nonparametric methods are the generally accepted approach to detection of change. Numerous tests are available, but many of them lack statistical power. For example, the Runs Test, which was designed to assess the presence or absence of randomness, i.e., independence, could be applied over portions of a flood record to identify the portions of the record that were not homogeneous. If other causative factors, such as urbanization, can be ruled out, then the test may detect an approximate time at which nonstationarity began. Given the low statistical power of the test, however, the best estimate of the start date will likely be very imprecise.

The Kendall Tau Test is one of the more commonly used tests for detecting nonrandomness. This test is often preferred because it is designed for data with a monotonically increasing trend, as opposed to an episodic change. It can be applied to either long or short flood records, although the accuracy of the decision will depend on the record length.

Tests for serial independence, such as the Pearson Test and the nonparametric Spearman Test, can be effective for identifying the existence of trends. However, when the Spearman Test is applied to hydrologic data ordered by year of occurrence, the critical values generally available do not apply. Some analyses have correlated the hydrologic variable with the time index, i.e., the integers 1 to n, used as the second variable. The Spearman test statistic has a different distribution function when the time index is used as one of the variables rather than the adjacent discharge value (Conley and McCuen 1997). Both of these tests should therefore use only the peak discharge sequence rather than correlating discharge with time.
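To make the distinction concrete, a short sketch using scipy's Spearman correlation on a hypothetical annual peak series is given below: case (a) correlates discharge with the time index, case (b) with the adjacent (lag-one) discharge. The two framings do not share the same null distribution, so the appropriate critical values differ (Conley and McCuen 1997); the p-values reported by scipy should not be taken at face value here.

```python
import numpy as np
from scipy import stats

# Hypothetical annual peak discharge series (cfs), illustrative only.
rng = np.random.default_rng(3)
q = 4000 + 15 * np.arange(60) + rng.normal(0, 600, 60)

# (a) Spearman correlation of discharge against the time index 1..n.
rho_time, _ = stats.spearmanr(np.arange(1, q.size + 1), q)

# (b) Spearman correlation of each discharge against the adjacent (lag-one) value.
rho_serial, _ = stats.spearmanr(q[:-1], q[1:])

print(f"rho vs. time index : {rho_time:.3f}")
print(f"rho vs. lagged flow: {rho_serial:.3f}")
# Note: standard tabulated Spearman critical values do not apply equally to
# both framings; modified critical values are needed (Conley and McCuen 1997).
```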

8. Level of significance for detection decisions

The above are all important issues for those involved in assessing the effects of climate change, yet they may not be the most important issue. Regardless of the distribution assumed or the statistical test selected, the significance of an effect will depend on the level of significance adopted for decision making. Karl and Knight (1998) used the 5% level of significance to determine whether increases in precipitation within the United States were significant in the 20th century. Burn and Elnur (2002) reported hydrologic trends detected based on a 10% level of significance. Olsen et al. (1999) identified trends detected in flood records with both 1% and 5% levels of significance. Traditionally, a 5% level is used, but whether this is really appropriate for hydrologic variables has not been addressed. It is unlikely that a 5% level of significance would lead to detection of hydrologic change due to a changing climate, as sampling variation is generally quite dominant and would overwhelm the effect of climate change. Additionally, use of a 5% level will likely result in a test having low statistical power. For example, Figures 3a and 3b display two time series simulated based on normally distributed errors and the same intercept and slope coefficients; however, the standard error was increased by 100% from the Figure 3a to the Figure 3b time series. The Kendall Tau Test was applied to each data set, and Z-values equal to 2.85 and 1.31 were calculated for the data in Figures 3a and 3b, respectively. Therefore, at the 5% level of significance, the null hypothesis was rejected for Figure 3a and accepted for Figure 3b; the null hypothesis was, however, rejected at the 10% level of significance for Figure 3b. Therefore, the variation within the data influences the level of significance at which a trend will be detected, which is a concern when dealing with variables that contain high variation, such as hydrologic data. Before any statistical test is adopted, the issues of statistical power and the level of significance need to be studied.

Figure 3a,b.

a and b. Simulated Time Series with Intercept = 1000, Slope = 1.25, and Se = 100 and 200, respectively, with Resulting Kendall Tau Z Statistics Equal to 2.85 and 1.31, Respectively.
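The simulation experiment described above can be sketched as follows. The record length and random seed are assumptions, so the computed Z-values will not match 2.85 and 1.31 exactly; the expectation is simply that the noisier series typically yields a weaker statistic.

```python
import numpy as np

def mann_kendall_z(x):
    """Normal-approximation Z statistic for the Kendall tau (Mann-Kendall) trend test."""
    n = x.size
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # ties assumed absent
    if s > 0:
        return (s - 1) / np.sqrt(var_s)
    if s < 0:
        return (s + 1) / np.sqrt(var_s)
    return 0.0

# Two simulated series: same intercept (1000) and slope (1.25), normally
# distributed errors with Se = 100 and Se = 200 (50-year length assumed).
rng = np.random.default_rng(0)
t = np.arange(1, 51)
series_a = 1000 + 1.25 * t + rng.normal(0, 100, t.size)
series_b = 1000 + 1.25 * t + rng.normal(0, 200, t.size)

for name, y in (("Se = 100", series_a), ("Se = 200", series_b)):
    z = mann_kendall_z(y)
    print(f"{name}: Z = {z:.2f}, reject at 5%? {abs(z) > 1.96}, at 10%? {abs(z) > 1.645}")
```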

The null hypothesis of interest to this issue is: climate change has not introduced nonstationarity into the flood series. The alternative hypothesis is that the flood series is nonstationary. Using a small level of significance of 5% or 1% gives some assurance that a true null hypothesis will not be falsely rejected, but it also increases the chance of failing to identify an effect and accepting the null hypothesis when, in fact, it is false. A 5% level of significance will then likely lead to not adjusting the flood series for nonstationarity, whereas use of a higher level of significance would dictate making such an adjustment. It seems that adjusting a series with a minimal effect of climate change would be preferable to failing to make a needed adjustment, even if the adjustment is small. Thus, the proper level of significance to be used in climate change analyses needs to be investigated.

9. Nonstationarity and flood risk

Engineering designs are commonly based on an estimate of the 100-yr discharge, where the discharge is based on an analysis of the historic flood record or on a regression model fitted using regional flood records. Assuming that global climate change models are correct and that extreme rainfalls are expected to increase over time, it is reasonable to assume that the time series of annual maximum discharges will increase over the next century. Under these conditions, storms of the size on which a design was based will occur more frequently. Thus, the 100-yr discharge of future times will be larger than the current 100-yr discharge. Likewise, under nonstationary conditions the current 100-yr flood event will occur more frequently (Olsen et al. 1998). The likelihood that a hydraulic structure, such as a bridge opening, can pass these runoff magnitudes will decrease, which means that the risk of failure will continuously increase with time.

Engineering design and risk assessments need to consider this nonstationarity of the T-yr discharge, where T is the return period used in design, e.g., T = 100 yrs. A design made in 2010 based on the 100-yr discharge assessed using current information and knowledge will not have the same risk of failure as the climate changes. This should be considered in the design. If the design life for the 2010 project is 50 years, it may not be appropriate to design for the 100-yr event estimated for 2010 meteorological and hydrological conditions, as the project would then be underdesigned. Similarly, designing for 2060 climate conditions would result in overdesign for each of the 49 years between 2010 and 2059. The optimal design discharge under these nonstationary global climate conditions would need to account for the rate of change of discharge over time. As many climate change scenarios show an increasing trend with time, the nonlinearity of the nonstationarity would require a temporally adjusted risk analysis.

Analyses have shown that the location and scale parameters of annual maximum flood series are expected to increase with increasing global climate change. These changes would raise the frequency curve and increase the exceedence probability of a given flood magnitude. This is easily shown using a binomial risk analysis. Consider the case of a site where the 2010 conditions indicate a flood skew of 0.3, with the log moments shown in Table 1 for the decades that define the design life of the project. Assume that a project is designed to control the 100-yr flood magnitude of 1411 cms (49821 cfs). Assuming climate change causes the log moments to increase as shown, the return period of the design discharge decreases over the 30-year period from the current 100 years to 41 years. The binomial risk for each decade would change from 9.56% in the first decade, to 14% in the second decade, and to 19.1% in the third decade. Therefore, over the design life of the project, the project as designed has an increasing likelihood of being exceeded. This change in the expected exceedence probability would yield a benefit-cost ratio for the project that is less than the ratio that a design based on stationary conditions would provide. Failure to account for the effect of climate change in the design would lead to long-term underdesign.
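The decade risks in Table 1 can be checked with the binomial (at-least-one-exceedance) formula. The sketch below assumes the annual exceedence probability rises linearly within each decade, which is one plausible reading of how the tabulated values were computed and reproduces them closely.

```python
import numpy as np

def decade_risk(p_start, p_end, n=10):
    """Risk of at least one exceedance over a decade in which the annual
    exceedence probability rises (assumed linearly) from p_start to p_end."""
    p = np.linspace(p_start, p_end, n)
    return 1.0 - np.prod(1.0 - p)

print(f"{1 - (1 - 0.01)**10:.4f}")          # 0.0956: stationary 100-yr design, first decade
print(f"{decade_risk(0.0126, 0.0177):.4f}") # ~0.14: second decade of Table 1
print(f"{decade_risk(0.0177, 0.0242):.4f}") # ~0.19: third decade of Table 1
```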

This conclusion is not intended to suggest that the project should be designed to the 100-yr flood condition at the end of the design life, as this would reflect a long-term design that would exceed the 100-yr protection. For example, if the project were designed for the 2040 flood moments, with a discharge of 2733 cms (96521 cfs), then the annual exceedence probability for current conditions would suggest a design exceedence probability of 0.00383, which reflects a return period of 261 years. Except in the last year of the design life of the project, the facility would have a protection that exceeds the required value. Using this value in a project benefit-cost ratio would provide a value that would exceed the long-term ratio that could be expected over the design life.

Decade   Log mean   Log sd   Flow (cfs)   Flow (cms)   K       p        T (yrs)   Prob.
2010     3.12       0.62     49821        1411         2.544   0.0100   100       0.0956
2020     3.15       0.63     56605        1603         2.456   0.0126   79        0.1074
2030     3.21       0.65     73070        2068         2.288   0.0177   57        0.1416
2040     3.28       0.67     96521        2733         2.116   0.0242   41        0.1908

Table 1.

Binomial Risk over Time. Flow is the 100-yr flood under each decade's log moments; K is the LP3 frequency factor corresponding to the 49821 cfs design discharge, p its annual exceedence probability, T the corresponding return period, and Prob. the binomial risk of exceedance within the decade.

If the intent is to provide, on average, 100-yr protection, then an integrated procedure is needed. Such a method would need to consider the temporally changing flood potential at the site, as indicated by the changing moments. The continually changing flood risk would need to be estimated. A method developed to integrate the effect of the changing flood risk would be expected to account for these changes in flood potential.
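A sketch of this integrated idea is given below, assuming (for illustration) that the annual exceedence probability of the 2010 design discharge rises linearly from 0.010 to 0.0242 over the 30-year design life, consistent with the endpoints of Table 1.

```python
import numpy as np

def design_life_risk(annual_p):
    """Probability of at least one exceedance over the design life, given a
    sequence of (possibly changing) annual exceedence probabilities."""
    annual_p = np.asarray(annual_p, dtype=float)
    return 1.0 - np.prod(1.0 - annual_p)

# Annual exceedence probability of the 2010 design discharge rising from
# 0.010 to 0.0242 over a 30-year design life (linear rise assumed).
p_t = np.linspace(0.010, 0.0242, 30)
print(f"lifetime exceedance risk: {design_life_risk(p_t):.3f}")
# For comparison, a stationary 100-yr design over 30 years carries
# 1 - 0.99**30, or about 0.26, so the nonstationary case is riskier; an
# "on average 100-yr" design would need a larger design discharge whose
# lifetime risk is pulled back toward that stationary value.
```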

10. Conclusions and recommendations

An important benefit of the modeling approach is that, in addition to quantifying the change resulting from a disturbance, comparison of model parameters between pre- and post-event periods provides an indication of how hydrological processes are altered by a severe event.

Detecting the effect of climate change in measured hydrologic data is a difficult, but important, task (Eslamian 2006). It has ramifications for assessing flood risk, for the design of water resource infrastructure, and for avoiding the assignment of too much weight to other nonstationary factors, such as urbanization, that contribute to hydrologic change. If the effect of climate change is even marginally significant, but not accounted for when attempting to assess the effects of urbanization in hydrologic data, then the effects of urbanization will likely be overstated. The results of such analyses will lead to biased designs. Infrastructure design that fails to account for climate change can be inadequate to meet the safety needs of a community, as the likelihood of severe flooding will increase because of climate change. Thus, floods that occur over the design life will likely be larger and more frequent than designed for. These situations are central to the issue of assessing flood risk, which has obvious implications for public safety, resource allocation, and the disruption of facility use.

The implications of global warming are significant, as policies will be made to address the issue and climate change may have significant economic effects. Therefore, uncertainties in projections of the effects of climate change must be considered in designing infrastructure, establishing public policies, and in economic decisions. A few of the uncertainties have been discussed in this paper, with an emphasis on uncertainties related to climate change modeling. Projecting to the future represents extrapolation, and given the uncertainties in modeling procedures, data, and our theoretical knowledge of the underlying processes, decision makers must consider these uncertainties. Record lengths of data used to calibrate climate models are short and contain very significant levels of nonsystematic variation. Such uncertainty will be carried over to projections made to the year 2100. The nonsystematic variation often reflects our lack of a full understanding of causal factors. Efforts through research need to be made to reduce these uncertainties in order to increase the accuracy of projections into the future.

Given the uncertainties of and interactions between the different model parameters, such explanations need to be approached with caution. Nevertheless, these suggestions of altered processes can direct further investigation and hypothesis formulation.

The following observational needs are especially important for further climate variability investigations:

  • Daily runoff series for a few hundred smaller natural catchments (about 1000 km2 in catchment area), maintained for research purposes and distributed over the globe.

  • Monthly runoff series for the few hundred largest catchments around the world, where possible with natural flows.

  • Long time series of hydrological records

Impacts of climate change on water quality are also largely determined by hydrological changes and by whether pollutants are flushing- or dilution-controlled. The most significant impact of urban development on water resources is an increase in overall surface runoff and in the flashiness of the associated storm hydrograph. The increase in impervious surface area associated with urban development also contributes to degradation of water quality as a result of non-point source pollution. Modelling studies of the combined impacts of climate change and urban development have found that either change may be more significant, depending on scenario assumptions and basin characteristics, and that each type of change may amplify or ameliorate the effects of the other (Praskievicz and Chang 2009).

References

  1. Amiri, M. J. and Eslamian, S. S. (2010). Investigation of climate change in Iran. Journal of Environmental Science and Technology, 3(4): 208-216.
  2. Bergstrom, S. (1976). Development and application of a conceptual runoff model for Scandinavian catchments. SMHI Reports RHO, No. 7, Norrkoping, Sweden, 134 pp.
  3. Bergstrom, S. (1992). The HBV model - its structure and applications. SMHI Reports RH, No. 4, Norrkoping, Sweden.
  4. Burn, D. H. and Elnur, M. A. H. (2002). Detection of hydrologic trends and variability. Journal of Hydrology, 255: 107-122.
  5. Cihlar, J., Grabs, W. and Landwehr, J. (2000). Establishment of a Global Hydrological Observation Network for Climate. Report of the GCOS/GTOS/HWRP Expert Meeting, Geisenheim, Germany, 26-30 June. Report GCOS-63, Report GTOS-26, Secretariat of the World Meteorological Organization, WMO/TD-No. 1047.
  6. Conley, L. C. and McCuen, R. H. (1997). Modified critical values for Spearman's test of serial correlation. Journal of Hydrologic Engineering, 2(3): 133-135.
  7. Eslamian, S. S. and Hasanzadeh, H. (2009). Detecting and evaluating climate change effect on frequency analysis of wind speed in Iran. International Journal of Global Energy Issues, Special Issue on Wind Modelling and Frequency Analysis (WMFA), 32(3).
  8. Eslamian, S. S. (2006). Detection of hydrologic changes. International Symposium on Drylands Ecology and Human Security, Dubai, United Arab Emirates.
  9. Hao, X., Chen, Y., Xu, C. et al. (2008). Impacts of climate change and human activities on the surface runoff in the Tarim River Basin over the last fifty years. Water Resources Management, 22: 1159-1171.
  10. Hennessy, K. J., Gregory, J. M. and Mitchell, J. F. B. (1997). Changes in daily precipitation under enhanced greenhouse conditions. Climate Dynamics, 13: 667-680.
  11. Interagency Advisory Committee on Water Data (1982). Guidelines for Determining Flood Flow Frequency. Bulletin 17B of the Hydrology Committee, USGS, Office of Water Data Coordination, Reston, VA.
  12. IPCC (2010). Meeting Report of the Intergovernmental Panel on Climate Change Expert Meeting on Detection and Attribution Related to Anthropogenic Climate Change. IPCC Working Group I Technical Support Unit, University of Bern, Switzerland, 55 pp.
  13. Karl, T. R. and Knight, R. W. (1998). Secular trends of precipitation amount, frequency, and intensity in the United States. Bulletin of the American Meteorological Society, 79(2): 231-241.
  14. Kharin, V. V. and Zwiers, F. W. (2005). Estimating extremes in transient climate change simulations. Journal of Climate, 18: 1156-1173.
  15. Koutsoyiannis, D. (2004). Statistics of extremes and estimation of extreme rainfall: II. Empirical investigation of long rainfall records. Hydrological Sciences Journal, 49(4): 591-610.
  16. Liu, C. and Xia, J. (2011). Detection and attribution of observed changes in the hydrological cycle under global warming. Advances in Climate Change Research, 2(1): 31-37.
  17. Martins, E. S. and Stedinger, J. R. (2000). Generalized maximum-likelihood generalized extreme-value quantile estimators for hydrologic data. Water Resources Research, 36(3): 737-744.
  18. New, M., Hulme, M. and Jones, P. D. (1999). Representing twentieth-century space-time climate variability, Part I: Development of a 1961-90 mean monthly terrestrial climatology. Journal of Climate, 12: 829-856.
  19. New, M., Hulme, M. and Jones, P. D. (2000). Representing twentieth-century space-time climate variability, Part II: Development of 1901-1996 monthly grids of terrestrial surface climate. Journal of Climate, 13: 2217-2238.
  20. Olsen, J. R., Lambert, J. H. and Haimes, Y. Y. (1998). Risk of extreme events under nonstationary conditions. Risk Analysis, 18(4): 497-510.
  21. Olsen, J. R., Stedinger, J. R., Matalas, N. C. and Stakhiv, E. Z. (1999). Climate variability and flood frequency estimation for the Upper Mississippi and Lower Missouri Rivers. Journal of the American Water Resources Association, 35(6): 1509-1524.
  22. Pappenberger, F. and Beven, K. J. (2006). Ignorance is bliss: Or seven reasons not to use uncertainty analysis. Water Resources Research, 42(5): W05302.
  23. Praskievicz, S. and Chang, H. (2009). A review of hydrological modelling of basin-scale climate change and urban development impacts. Progress in Physical Geography, 33(5): 650-671.
  24. Seibert, J., McDonnell, J. J. and Woodsmith, R. D. (2010). Effects of wildfire on catchment runoff response: A modeling approach to change detection. Hydrology Research, 41(5): 378-390.
  25. Seibert, J. and McDonnell, J. J. (2010). Land-cover impacts on streamflow: A change detection modeling approach that incorporates parameter uncertainty. Hydrological Sciences Journal, 55(3): 316-332.
  26. Semenov, V. A. and Bengtsson, L. (2002). Secular trends in daily precipitation characteristics: Greenhouse gas simulation with a coupled AOGCM. Climate Dynamics, 19: 123-140.
  27. Wilby, R. L. and Wigley, T. M. L. (2002). Future changes in the distribution of daily precipitation totals across North America. Geophysical Research Letters, 29: 1135, doi:10.1029/2001GL013048.
  28. Xoplaki, E., Toreti, A., Kuglitsch, F. G. and Luterbacher, J. (2008). Data availability in the Mediterranean: Requirements for climate change detection and modeling research. Proceedings of the International Workshop on Rescue and Digitization of Climate Records in the Mediterranean Basin (MEDARE), University of Bern, Switzerland, June.
  29. Zégre, N., Skaugset, A. E., Som, N. A., McDonnell, J. J. and Ganio, L. M. (2010). In lieu of the paired catchment approach: Hydrologic model change detection at the catchment scale. Water Resources Research, 46: W11544, doi:10.1029/2009WR008601.
  30. Zhang, X., Zwiers, F. W., Hegerl, G. C. et al. (2007). Detection of human influence on twentieth-century precipitation trends. Nature, 448: 461-465.
