
Wind Speed Regionalization Under Climate Change Conditions

By Masoomeh Fakhry, Mohammad Reza Farzaneh, Saeid Eslamian and Rouzbeh Nazari

Submitted: April 24th, 2012. Reviewed: January 28th, 2013. Published: March 13th, 2013.

DOI: 10.5772/55985


1. Introduction

Green and renewable energy sources are among the most essential foundations for the future development of countries all over the world. The depletion of fossil fuels in the near future makes the use of renewable energies almost inevitable, and evaluating the capability of using these types of energy is an essential issue at present. Mankind's extreme dependence on limited energy resources is both dangerous and unsustainable, and finding a way out is one of the most important challenges we face. Achieving an unlimited energy source has long been a human dream. The daily increase in energy demand and the limitation of fossil energy resources on one side, and the increase in environmental pollutants caused by using these resources on the other, have made the application of renewable energies more essential and widespread.

The wind, as a climatological factor, has wide effects on agriculture, transport, pollution, energy, manufacturing, and industrial planning. Wind power is one of the first energy sources discovered by humans. It has been applied for different purposes, such as powering ships and driving irrigation and milling, in countries such as Iran and China. However, the construction of wind power plants has made this type of energy far more applicable. Nowadays, coal, oil, and gas are the main resources used to provide energy. In recent years, an increasing trend in the price of these materials has become evident due to globalization, political events, and water crises. These increasing prices compel us to find an appropriate solution to decrease expenses and increase stability. Furthermore, renewable energies are considered available, exchangeable, and inexhaustible resources. Wind energy is used in two ways: directly, where the wind is applied for drying and ventilation, and indirectly, where it is utilized for milling grain, pumping water to fields, and generating electricity. The evidence shows that in countries such as Iran, Iraq, Egypt, China, Italy, and Spain, wind energy has long been used for milling and irrigation. According to the IPCC special report on renewable energy sources and climate change mitigation, wind turbines have grown from 17 meters in height and 75 kilowatts in the 1980s to 80 meters and 1800 kilowatts in 2005-2010, and it is predicted that future generators will reach 250 meters and 20000 kilowatts, which clarifies the increasing application of this green energy.

An important challenge facing this kind of power plant is the uncertainty in the accessible capacity of electrical power. This problem is caused by the random nature of the effective factors, such as random variation in the mechanical forces generating wind power. In other words, due to continuous variations in meteorological and climatological conditions, wind speed, duration, density, and power change randomly. Thus, to use this power, it is necessary to study the wind conditions in the area together with the statistical data recorded by meteorological centers. However, analyzing such a large amount of recorded data to estimate the mechanical power input of wind power plants is not feasible without applied statistical methods. Obviously, studying the behavior and speed of winds will lead to a more accurate estimation of the accessible capacity of wind power plants.
Moreover, spatial analysis of this climatological phenomenon provides essential knowledge about areas with the potential capacity for constructing wind power plants, and frequency analysis is an effective tool for this purpose. This chapter describes these methods and their effects on the regionalization of wind speed under climate change.

2. Previous investigations on wind prediction

Several studies on wind speed in future periods are reported in the literature, some of which are summarized in Table 1.

| References | Paper type | Location | Scenarios | Model | Results |
| Harmsen et al. (2009) | - | - | - | - | Historical wind time series for future application |
| Doria et al. (2006) | - | - | - | - | Historical wind time series for future application |
| Bogardi and Matyasovszky (1996) | Case study | Nebraska | 2 scenarios | MPI | Wind speed change is stronger than the changes obtained for the same region in daily precipitation and temperature |
| Hoyme and Zieleke (2001) | Case study | Germany | 2 scenarios | ECHAM | Dynamical downscaling |
| Breslow and Sailor (2002) | Case study | USA | 2 scenarios | HADCM2, CGCM1 | As GCM climate change models become more reliable and tools are refined for improving results at regional scales, it will be desirable to include improved estimates of vulnerabilities in the wind power site selection decision process |
| Sailor et al. (2000) | Case study | USA | A1b, A2 | 4 GCMs | ANN and statistical downscaling; A2 is preferred to A1b; changes in daily mean wind speeds at each location are presented and discussed with respect to potential implications for wind power generation |
| Sailor et al. (2008) | Case study | Texas and California | 2 scenarios | CCM | Impact of the climate change scenarios on wind power may be as high as a 40% reduction in summertime generation potential; a method is developed for mapping daily-resolution downscaled GCM output to the hourly level to create scenarios of changes in power density; results for the two SRES scenarios were similar, with A2 impacts slightly larger than A1B, but these inter-scenario differences were smaller than the inter-model differences, even after downscaling |
| Peltola et al. (2010) | Case study | Finland | A2 | HWIND and SIMA | The average wind speed will increase significantly towards the end of this century under the changing climate |
| Deepthi and Deo (2010) | Case study | India | A2 | CGCM3 | Use of NCEP and ANN for downscaling; for the two locations considered, the increase in the 100-year wind was found to vary from 44% to 74% |
| Pryor and Barthelmie (2010) | Review paper | - | - | - | By the end of the twenty-first century, there is evidence for small-magnitude changes in the wind resource, increases in extreme wind speeds, and declines in sea ice and icing frequencies; some changes associated with climate evolution will likely benefit the wind energy industry while others may negatively affect wind energy developments, with such 'gains and losses' depending on the region under consideration; the Gumbel distribution is presented for the probability distribution of extreme wind speeds |
| Pereira et al. (2012) | Case study | Brazil | A1B | HADCM3 | 15-30% wind speed growth |

Table 1.

Wind speed investigations for future periods

3. Climate change impacts

The fast development of industry, and the resulting increase in greenhouse gas emissions, has disturbed the climatic equilibrium of the earth. This phenomenon is called "climate change" (IPCC 2007; Leander et al. 2006). Research indicates the negative impacts of this phenomenon on different systems such as water resources, agriculture, the environment, health, industry, and the economy. The importance and hazards of climate change have been emphasized in international forums such as the Group of Eight (G8), a forum for the governments of eight of the world's largest economies, and solutions for protecting water resources, agriculture, and environmental resources have been suggested there. As water is an important resource that is strongly affected by climate change, analyzing its changes in future years can provide a very useful key to future droughts, floods, evapotranspiration, and so on.

The first step in studying climate change impacts on future resources is to simulate the behavior of climatological factors under the effect of greenhouse gases. A general circulation model (GCM) is a three-dimensional mathematical model of the general circulation of a planetary atmosphere or ocean. Atmospheric and oceanic GCMs (AGCMs and OGCMs) are key components of global climate models, which are systems of differential equations. Using such models, scientists divide the atmosphere, hydrosphere, geosphere, cryosphere, and biosphere of the planet into a three-dimensional grid, apply the basic equations, and evaluate the results. Atmospheric models calculate winds, heat transfer, radiation, relative humidity, and surface hydrology within each grid cell and evaluate the interactions with neighboring points. Different greenhouse gas emission scenarios, such as A1, B1, A2, and B2, are used during the simulation process.

4. Generation of climatic scenarios

The aforementioned three-dimensional coupled atmosphere-ocean general circulation models (AOGCMs) are used in this study among the different available methods for the generation of climatic scenarios. GCM models have a physical basis expressed by mathematical relations, which are solved on a three-dimensional grid covering the planet. In order to simulate the planetary climate, the fundamental climatic processes in the atmosphere, hydrosphere, geosphere, cryosphere, and biosphere are simulated in separate sub-models, and these atmospheric and oceanic sub-models are then coupled to form AOGCMs. To study the climate of past periods, the observed values of greenhouse gases, solar radiation changes, and volcanic eruption aerosols up to the year 2000 are entered as input to the GCM models, and the climatic variables are simulated as time series. After simulating these variables for the past periods, future greenhouse gas conditions must be introduced to simulate the variables for future periods. For this purpose, the amounts of emitted greenhouse gases given by the emission scenarios (which generally extend to 2100) are first converted to concentrations and then to radiative forcing, and these values become the input of the GCM models. The results obtained from the GCM models under the emission scenarios form the time series of climatic variables up to 2100.

5. Downscaling

One of the main challenges in using the output of AOGCM models is the coarse spatial scale of their computational cells, and downscaling methods are used to overcome it. These methods generally fall into two main groups, dynamical and statistical, and in both the downscaling procedure relies on observed meteorological data. A considerable point in applying the final outputs is the different sources of uncertainty, which can be evaluated at any confidence level using the bootstrap method (Efron, 1993).
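As a concrete illustration of that last point, the sketch below applies a simple nonparametric bootstrap to a downscaled wind speed sample; the synthetic data, sample size, and number of resamples are all assumptions made only for this example.

```python
# A minimal bootstrap sketch (Efron, 1993): resample the downscaled series
# with replacement and read a 95% confidence interval for its mean off the
# distribution of resampled means. The data below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(42)
downscaled = rng.weibull(2.0, size=300) * 6.0    # hypothetical downscaled wind speeds (m/s)

boot_means = np.array([
    rng.choice(downscaled, size=downscaled.size, replace=True).mean()
    for _ in range(2000)                         # 2000 bootstrap resamples (assumed)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])  # 95% confidence interval
print(f"95% CI for the mean wind speed: [{lo:.2f}, {hi:.2f}] m/s")
```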

6. Frequency analysis

The magnitude of an extreme event is inversely related to its frequency: the greater the magnitude of an event, the lower its frequency of occurrence. The primary objective of frequency analysis is to relate the magnitude of extreme events to their frequency of occurrence through the application of probability distributions (Chow et al., 1988). The first assumption here is that the data under study are independent and identically distributed and that the underlying system is random and spatially and temporally independent, which holds when there is no correlation between observations. In practice, these conditions can be achieved by using annual maximum values, noting the independence of events between years. However, among meteorological parameters, wind speed has rarely been examined by this method, so studies in this field are still in their early stages.
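A short sketch of that annual-maximum sampling step is given below; the CSV file name and column names are assumptions for illustration only.

```python
# Illustrative only: build an annual-maximum wind speed series, the usual way
# to obtain approximately independent, identically distributed values for
# frequency analysis. File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("station_wind.csv", parse_dates=["date"])   # assumed daily records
annual_max = df.set_index("date")["wind_speed"].resample("YS").max()
print(annual_max.head())
```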

7. Probability distribution function and frequency formula

To describe the probability distribution of a random variable X, a cumulative distribution function (CDF) is used. The value of this function, F(x), is simply the probability P of the event that the random variable takes on a value less than or equal to the argument:

F(x) = P(X \le x) \qquad (1)

This is the probability that the random variable X will not exceed x, known as the non-exceedance probability F(x).

Extreme events do not occur according to a constant regime or with a fixed magnitude, and the time interval between two such events is variable. Thus, the return period, defined as the average inter-arrival time between two extreme events, is an applicable tool in such cases. An extreme event x_T with return period T can occur more than once in a year, and its exceedance probability can be expressed by:

P(X > x_T) = \frac{1}{T} \qquad (2)

Thus, the above-mentioned non-exceedance probability can be presented as follows:

F(x_T) = P(X \le x_T) = 1 - P(X > x_T) = 1 - \frac{1}{T} \qquad (3)

Equation (3) relates the magnitude of an extreme event to its return period T. For a preselected return period and the corresponding non-exceedance probability p = 1 - 1/T, the quantile x_p can be determined using the inverse function of F:

x_p = F^{-1}(p) = F^{-1}\left(1 - \frac{1}{T}\right) \qquad (4)

which gives the value of x_p corresponding to any particular value of p or T.
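In code, Eq. (4) is a single call to the inverse CDF. The sketch below uses a Gumbel distribution with hypothetical parameters; both the distribution choice and the parameter values are assumptions for illustration.

```python
# Eq. (4) in practice: map a return period T to the quantile x_p through the
# inverse CDF (scipy's percent-point function). Distribution and parameters
# are illustrative assumptions, not values from this chapter.
from scipy.stats import gumbel_r

T = 50                       # return period in years (assumed)
p = 1 - 1 / T                # non-exceedance probability, Eq. (3)

x_p = gumbel_r.ppf(p, loc=18.0, scale=3.5)   # hypothetical fitted parameters (m/s)
print(f"{T}-year wind speed quantile: {x_p:.2f} m/s")
```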

8. Selection of probability distribution

A probability distribution is a function describing the probability of occurrence of a random event. By fitting a probability distribution to a set of hydrologic data, a large amount of statistical information is summarized in the distribution and its parameters. The most important and widely used methods for estimating distribution parameters from data samples are the method of moments, L-moments (linear moments), and maximum likelihood, which are described in the following sections.

9. Method of moments

The method of moments was first introduced by Pearson (1902). He found that appropriate estimates of the parameters of a probability distribution are those for which the moments of the distribution best match the corresponding sample moments. In this method, the general formula for the moment of order r of the distribution f(x) around the mean is:

\mu_r = \int_{-\infty}^{+\infty} (x - \mu)^r f(x)\, dx \qquad (5)

The method of moments describes the relation between moments and distribution parameters. The most important moments are the mean, variance, skewness, and kurtosis, which correspond to the first- to fourth-order moments, respectively.
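The sketch below applies this idea to a Gumbel distribution, whose mean and variance have closed forms (mean = loc + γ·scale, variance = π²/6·scale²); solving these for the parameters gives the moment estimators. The synthetic data and parameter values are assumptions.

```python
# Method of moments for a Gumbel distribution: equate the sample mean and
# variance to their theoretical expressions and solve for loc and scale.
# The "observations" are synthetic, generated only for this sketch.
import numpy as np

EULER_GAMMA = 0.5772156649   # Euler-Mascheroni constant

rng = np.random.default_rng(0)
sample = rng.gumbel(loc=18.0, scale=3.5, size=200)

scale_hat = np.sqrt(6.0 * sample.var(ddof=1)) / np.pi   # from var = pi^2/6 * scale^2
loc_hat = sample.mean() - EULER_GAMMA * scale_hat       # from mean = loc + gamma*scale
print(f"loc ~ {loc_hat:.2f}, scale ~ {scale_hat:.2f}")
```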

10. Maximum likelihood method

Maximum-likelihood estimation was recommended, analyzed, and greatly popularized by R. A. Fisher between 1912 and 1922 (Aldrich, 1997). He argued that the best value of a parameter of a probability distribution is the one that maximizes the likelihood, or joint probability of occurrence, of the sample. Assume that the sample space is divided into intervals of length dx and that x_i belongs to the independent and identically distributed observations x_1, x_2, ..., x_n. The probability density at x_i can be denoted by f(x_i), so the probability that a random event occurs in an interval containing x_i is equal to f(x_i) dx. As the samples are independent, the joint density function for all observations can be calculated by:

f(x_1)\,dx \cdot f(x_2)\,dx \cdots f(x_n)\,dx = \left( \prod_{i=1}^{n} f(x_i) \right) (dx)^n \qquad (6)

Since the interval dx is constant, maximizing the above joint density function is equivalent to maximizing the likelihood function, defined as:

L(x_1, x_2, \ldots, x_n) = f(x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} f(x_i) \qquad (7)

The maximum likelihood method is theoretically the most accurate method for estimating the parameters of probability distributions; in fact, it estimates those parameters with the minimum average error with respect to the true parameters.
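In practice the product in Eq. (7) is maximized numerically, usually via its logarithm. The sketch below leans on scipy's generic `fit`, which performs exactly this numerical maximization; the Weibull distribution and the synthetic data are assumptions.

```python
# Numerical maximum likelihood: scipy.stats fit() maximizes the (log-)
# likelihood of Eq. (7). Distribution choice and data are illustrative.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(1)
sample = rng.weibull(2.0, size=200) * 6.0    # synthetic wind speeds (m/s)

# floc=0 pins the location at zero, a common convention for wind speed
shape, loc, scale = weibull_min.fit(sample, floc=0)
print(f"shape ~ {shape:.2f}, scale ~ {scale:.2f} m/s")
```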

11. L-moments method

Probability weighted moments (PWMs) are defined by Greenwood et al. (1979) as

\beta_r = E\left[ x \, \{F(x)\}^r \right] \qquad (8)

which can be rewritten as:

\beta_r = \int_0^1 x(F)\, F^r \, dF, \qquad r = 0, 1, \ldots, s \qquad (9)

where F = F(x) is the CDF of x and x(F) is its inverse. For r = 0, β_0 equals the mean of the distribution, μ = E[x]. PWMs are precursors of L-moments and were developed further in the works of Hosking (1986, 1990) and Hosking and Wallis (1991, 1993, 1997). Hosking defined L-moments as:

\lambda_{r+1} = \sum_{k=0}^{r} (-1)^{r-k} \binom{r}{k} \binom{r+k}{k} \beta_k \qquad (10)

The first four L-moments can be calculated as follows:

\lambda_1 = \beta_0 \qquad (11)
\lambda_2 = 2\beta_1 - \beta_0 \qquad (12)
\lambda_3 = 6\beta_2 - 6\beta_1 + \beta_0 \qquad (13)
\lambda_4 = 20\beta_3 - 30\beta_2 + 12\beta_1 - \beta_0 \qquad (14)

L-moment ratios, which are analogous to conventional moment ratios, are defined by Hosking (1990) as:

\tau = \lambda_2 / \lambda_1 \qquad (15)
\tau_r = \lambda_r / \lambda_2, \qquad r \ge 3 \qquad (16)

where λ_1 is a measure of location, τ is a measure of scale and dispersion (L-CV), τ_3 is a measure of skewness (L-skew), and τ_4 is a measure of kurtosis (L-kurt). Analogous to conventional (product) moments, the L-moments of orders one to four characterize location, scale, skewness, and kurtosis, respectively (Karvanen, 2006). To estimate distribution parameters with the method of L-moments, as in other methods, sample L-moment ratios are calculated by replacing the distribution L-moments λ_r with their sample estimates.

L-moments have significant advantages over PWMs, especially their ability to summarize a statistical distribution in a more meaningful way. Since L-moment estimators are linear functions of the ordered data values, they are virtually unbiased and less influenced by outliers; they also have relatively small sampling variance, and the bias of their small-sample estimates remains quite small. L-moments have become popular tools for solving various problems related to parameter estimation, distribution identification, and regionalization in fields such as hydrology and water resources, especially in regional analysis of rainfall and floods.
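A compact sketch of the sample versions of Eqs. (11)-(16) is given below, using the standard unbiased PWM estimator b_r computed from the ordered sample; the test data are synthetic.

```python
# Sample L-moments from unbiased probability weighted moments. b_r uses the
# standard estimator with weights (j-1)(j-2)...(j-r) / ((n-1)(n-2)...(n-r))
# over the ordered sample; lambda_1..lambda_4 then follow Eqs. (11)-(14).
import numpy as np

def sample_lmoments(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    j = np.arange(1, n + 1)
    b = np.zeros(4)
    b[0] = x.mean()
    for r in range(1, 4):
        w = np.ones(n)
        for k in range(r):
            w *= (j - 1 - k) / (n - 1 - k)
        b[r] = np.mean(w * x)
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2 / l1, l3 / l2, l4 / l2    # lambda1, L-CV, L-skew, L-kurt

rng = np.random.default_rng(2)
print(sample_lmoments(rng.gumbel(18.0, 3.5, size=100)))  # synthetic sample
```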

12. At-station goodness-of-fit tests

After estimating the parameters of the candidate distributions, the usual question is which fitted distribution best matches the observed sample. For this purpose, goodness-of-fit tests are used to compare fitted theoretical distributions with observations. Two very common goodness-of-fit tests with wide application in the literature are the Chi-square and Kolmogorov-Smirnov tests. Another extensively used measure is the root mean square error (RMSE), the square root of the mean of the squared differences between observed and calculated values. Among theoretical probability distributions, those selected by researchers for describing wind speed are presented in Table 2.

| Year | Distribution function | Scientists |
| 1940 to 1945 | Pearson Type III | Putnum* |
| 1951 | Pearson Type III | Sherlock* |
| 1950 to 1970 | Bivariate distributions of two components | Essenwanger* |
| - | Two-parameter normal | Crutcher and Bear* |
| 1970s | Isotropic Gaussian model of McWilliams et al. | Justus and Koeppl* |
| 1974, 1976, 1977 | Three-parameter log-normal | Luna and Church*, Kaminsky, Justus et al. |
| 1976 to 1977 | Square-root normal model | Winger* |
| 1978 | Three-parameter Weibull | Stewart and Essenvanger* |
| 1980 | Three-parameter generalized gamma | Auwera* |
| 1980 | Inverse Gaussian | Bardsley* |
| 1983 | Pearson Type I (Beta) | Lavagnini et al.* |
| 1994 | Weibull | Stelios Pasardes* |
| 1996 | Log-normal | Bogardi and Matyasovski |
| 2009 | Beta | Carta et al. |
| 2009 | Two-component Weibull | Akdog et al.* |
| 2010 | Gumbel and Weibull | Deepthi and Deo |

Table 2.

Probability distribution functions of wind speed in the literature
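As a worked illustration of the tests named above, the sketch below fits a Weibull distribution, runs a Kolmogorov-Smirnov test against it, and computes an RMSE between the empirical and fitted CDFs; the data and the plotting-position formula are assumptions.

```python
# Goodness-of-fit checks on a fitted Weibull: Kolmogorov-Smirnov test plus an
# RMSE between empirical and theoretical CDFs. Data are synthetic.
import numpy as np
from scipy.stats import weibull_min, kstest

rng = np.random.default_rng(3)
sample = rng.weibull(2.0, size=150) * 6.0

params = weibull_min.fit(sample, floc=0)
ks_stat, p_value = kstest(sample, "weibull_min", args=params)

x = np.sort(sample)
ecdf = np.arange(1, x.size + 1) / (x.size + 1)   # Weibull plotting position (assumed)
rmse = np.sqrt(np.mean((ecdf - weibull_min.cdf(x, *params)) ** 2))
print(f"KS = {ks_stat:.3f} (p = {p_value:.2f}), RMSE = {rmse:.4f}")
```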

13. Regional frequency analysis

One of the main problems in frequency analysis is the lack of sufficiently long records at the locations under study, which, together with the limited accuracy of at-site estimates, has made regional frequency analysis more applicable in such studies. The steps of regional frequency analysis are presented below.

14. Discordancy test

For the screening of similar sites, a discordancy measure in terms of the sample L-moment ratios (L-CV, L-skew, and L-kurt) of the gauging sites' observed data was suggested by Hosking and Wallis (1997). The aim of data screening using the L-moment-based discordancy measure D_i is to identify data that are grossly discordant with the group as a whole in a regional frequency analysis. Considering N sites in the sample, let u_i = [t^(i), t_3^(i), t_4^(i)]^T be the vector containing the sample L-moment ratios for site i, i = 1, 2, ..., N (Hosking and Wallis, 1993, 1997). The discordancy measure for any gauging site i is calculated as follows (Hosking and Wallis, 1997):

D_i = \frac{1}{3} (u_i - \bar{u})^T S^{-1} (u_i - \bar{u}) \qquad (17)
\bar{u} = \frac{1}{N} \sum_{i=1}^{N} u_i \qquad (18)
S = \frac{1}{N-1} \sum_{i=1}^{N} (u_i - \bar{u})(u_i - \bar{u})^T \qquad (19)

where N is the number of stations, u_i is the vector of L-moment ratios for site i, ū and S are the sample mean vector and covariance matrix, respectively, and T denotes the transposition of a vector or matrix. Large values of D_i flag the sites that are most discordant from the group as a whole and therefore the most suitable candidates for investigating the existence of data errors, so the ith site should be examined carefully (Hosking and Wallis, 1993). As a general rule, when the number of sites in a region is greater than 15, a site's data are considered discordant from the rest of the regional data if its D statistic exceeds three (Hosking and Wallis, 1997).
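Eqs. (17)-(19) translate directly into a few lines of linear algebra, sketched below for a made-up matrix of per-site L-moment ratios in which one site is deliberately unusual.

```python
# Discordancy measure D_i (Eqs. 17-19) from per-site L-moment ratio vectors
# u_i = (L-CV, L-skew, L-kurt). The matrix U is fabricated for illustration,
# with the last site made deliberately discordant.
import numpy as np

U = np.array([
    [0.20, 0.10, 0.12],
    [0.22, 0.12, 0.13],
    [0.19, 0.09, 0.11],
    [0.21, 0.11, 0.14],
    [0.35, 0.30, 0.28],   # suspicious site
])

u_bar = U.mean(axis=0)                     # Eq. (18)
S = np.cov(U, rowvar=False)                # Eq. (19): 1/(N-1) normalization
diff = U - u_bar
D = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(S), diff) / 3.0   # Eq. (17)
print(np.round(D, 2))
```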

15. Heterogeneity test

L-moment heterogeneity tests allow assessing whether a group of sites may reasonably be treated as a homogeneous region. The heterogeneity measure compares the between-site variations in sample L-moments for a group of sites with what would be expected for a homogeneous region (Hosking and Wallis, 1993).

The homogeneity test used here is the one proposed by Hosking and Wallis (1993), based on sample L-moment ratios of various orders. It works at three levels of variability: a test based on the L-CV only; a test based on the L-CV and L-skew; and a test based on the L-skew and L-kurt. These statistics, called the V-statistics, are respectively defined as:

V_1 = \left\{ \sum_{i=1}^{N} n_i \left( \tau^{(i)} - \tau^R \right)^2 \Big/ \sum_{i=1}^{N} n_i \right\}^{1/2} \qquad (20)
V_2 = \sum_{i=1}^{N} n_i \left\{ \left( \tau^{(i)} - \tau^R \right)^2 + \left( \tau_3^{(i)} - \tau_3^R \right)^2 \right\}^{1/2} \Big/ \sum_{i=1}^{N} n_i \qquad (21)
V_3 = \sum_{i=1}^{N} n_i \left\{ \left( \tau_3^{(i)} - \tau_3^R \right)^2 + \left( \tau_4^{(i)} - \tau_4^R \right)^2 \right\}^{1/2} \Big/ \sum_{i=1}^{N} n_i \qquad (22)

The heterogeneity measure is then defined as:

H = \frac{V_i - \mu_{V_i}}{\sigma_{V_i}}, \qquad i = 1, 2, 3 \qquad (23)

where μ_{V_i} and σ_{V_i} are the mean and standard deviation of the simulated values of V_i, respectively. The regional average L-moment ratios are determined by the following equations:

\tau^R = \sum_{i=1}^{N} n_i \tau^{(i)} \Big/ \sum_{i=1}^{N} n_i \qquad (24)
\tau_3^R = \sum_{i=1}^{N} n_i \tau_3^{(i)} \Big/ \sum_{i=1}^{N} n_i \qquad (25)
\tau_4^R = \sum_{i=1}^{N} n_i \tau_4^{(i)} \Big/ \sum_{i=1}^{N} n_i \qquad (26)

where N is the number of sites, n_i is the record length at site i, and τ^(i), τ_3^(i), and τ_4^(i) are the sample L-moment ratios at site i. After fitting a Kappa distribution to the regional average L-moment ratios, a large number N_sim of realizations of a region with N sites, each following the Kappa distribution, is simulated. The simulated regions are homogeneous and have no cross-correlation or serial correlation.

The region is declared heterogeneous if H is sufficiently large. Hosking and Wallis (1993) suggested that the region be regarded as "acceptably homogeneous" if H < 1, "possibly heterogeneous" if 1 ≤ H < 2, and "definitely heterogeneous" if H ≥ 2.

To achieve reliable estimates of μ_V and σ_V, Hosking and Wallis (1993) found that N_sim = 500 is usually adequate, though larger values may be needed to resolve H values very close to 1 or 2.

Hosking and Wallis (1993) showed that H statistics based on V_2 and V_3 lack the power to distinguish between homogeneous and heterogeneous regions, whereas the H statistic based on V_1 has much better discriminatory power and is therefore suggested as the heterogeneity measure.
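The sketch below assembles these pieces for the V_1-based H statistic. To stay self-contained it simulates the homogeneous region from a Gumbel distribution rather than the four-parameter Kappa distribution that Hosking and Wallis prescribe, and the record lengths and site L-CVs are invented; treat it as a structural outline only.

```python
# Simplified H statistic (Eqs. 20, 23, 24): compare the observed between-site
# L-CV dispersion V1 with its distribution over Nsim simulated homogeneous
# regions. A Gumbel stand-in replaces the Kappa distribution of Hosking and
# Wallis (1993); all inputs are assumed values.
import numpy as np

def v1(tau, n):
    tau_r = np.sum(n * tau) / np.sum(n)                         # Eq. (24)
    return np.sqrt(np.sum(n * (tau - tau_r) ** 2) / np.sum(n))  # Eq. (20)

def sample_lcv(x):
    # L-CV from the first two sample PWMs (see the L-moment sketch above)
    x = np.sort(x)
    j = np.arange(1, x.size + 1)
    b0, b1 = x.mean(), np.mean((j - 1) / (x.size - 1) * x)
    return (2 * b1 - b0) / b0

rng = np.random.default_rng(4)
n = np.array([30, 42, 25, 35, 28])                  # record lengths (assumed)
tau_obs = np.array([0.18, 0.22, 0.20, 0.25, 0.19])  # observed site L-CVs (assumed)

v_obs = v1(tau_obs, n)
v_sim = np.array([
    v1(np.array([sample_lcv(rng.gumbel(18.0, 3.5, ni)) for ni in n]), n)
    for _ in range(500)                             # Nsim = 500
])
H = (v_obs - v_sim.mean()) / v_sim.std(ddof=1)      # Eq. (23)
print(f"H = {H:.2f} ({'acceptably homogeneous' if H < 1 else 'heterogeneous?'})")
```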

16. Estimation of the parameters of regional frequency distribution

The first four L-moments for each site inside a homogeneous region are made dimensionless by dividing them by the mean of the data at that site. Weighted averages of these dimensionless L-moments are then used to calculate the standardized regional L-moments:

\lambda_r^R = \sum_{i=1}^{N} n_i \lambda_r^{(i)} \Big/ \sum_{i=1}^{N} n_i \qquad (27)

where λ_r^R is the regional standardized L-moment of order r, λ_r^(i) is the standardized L-moment of order r at site i, n_i is the number of years of record at site i, and N is the number of sites in the homogeneous region. The parameters of the best-fitted distribution are estimated using the relations between distribution parameters and L-moments presented by Hosking (1989). The quantile values corresponding to different return periods are then estimated for the variable under study as regional quantiles. The quantiles for the sites in each sub-region are determined by multiplying the regional quantile by the site's mean; at any site i, the quantiles are calculated using Eq. (28):

Q_i(F) = \lambda_1^{(i)} q(F) \qquad (28)

where Q_i(F) and q(F) are the at-site and regional quantiles with non-exceedance probability F, respectively.
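Eq. (28) is the index-value step of regional frequency analysis: the dimensionless regional growth curve q(F) is scaled by each site's mean. A sketch with invented regional Weibull parameters and site means follows.

```python
# Eq. (28) in code: scale the dimensionless regional quantile q(F) by each
# site's mean. Regional Weibull parameters (approximately unit-mean growth
# curve) and site means are illustrative assumptions.
from scipy.stats import weibull_min

T = 50
F = 1 - 1 / T                                   # non-exceedance probability
q_F = weibull_min.ppf(F, c=2.0, scale=1.13)     # assumed regional growth curve

site_means = {"station_A": 5.8, "station_B": 7.4, "station_C": 6.1}  # m/s (assumed)
for name, mean in site_means.items():
    print(f"{name}: Q({T}-yr) ~ {mean * q_F:.2f} m/s")
```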

17. Spatial interpolation

In classical statistics, the samples drawn from a population are usually treated as random sets, and the recorded value of a variable in one sample tells us nothing about its value in another sample at a given distance. In geostatistics, it is possible to link the values of a variable across a population to the distance and direction between samples. Furthermore, while classical statistics assumes that the variable changes randomly, geostatistics treats part of the variability as random and part as structured, a function of distance and direction. Thus, in geostatistics, the existence or absence of a spatial structure between data is examined first, and only in the presence of a spatial structure are the data analyzed further. Adjacent data may be spatially dependent within a certain distance; where such structure exists, variations at one location have more influence on nearby locations than on distant ones, so variables tend to be more similar in closer samples.

Geostatistics is a branch of statistics based on the theory of regionalized variables. Any variable distributed in three-dimensional space with spatial dependence is called a regionalized variable and can be studied with geostatistical methods. Common methods include Inverse Distance Weighting (IDW), Global Polynomial (GP), Local Polynomial (LP), Radial Basis Functions (RBF), Kriging (simple, ordinary, universal, and disjunctive), and CoKriging.
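Of the interpolators just listed, IDW is the simplest to write out, so a minimal sketch is given below; the station coordinates, quantile values, and power parameter are assumptions, and a real study would typically use a GIS environment such as ArcMap instead.

```python
# Minimal inverse distance weighting (IDW): each grid point gets a weighted
# average of station values, with weights 1/d^power. All inputs are invented.
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=2)
    d = np.maximum(d, 1e-12)           # avoid division by zero at station points
    w = 1.0 / d ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

stations = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])  # assumed coordinates (km)
q50 = np.array([9.2, 11.5, 10.1])                           # assumed 50-yr quantiles (m/s)
grid = np.array([[4.0, 3.0], [8.0, 6.0]])                   # query points
print(idw(stations, q50, grid))
```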

Applying these methods requires spatial and statistical analysis in parallel, which is possible only in environments such as ArcMap. To visualize the above process, the steps are summarized in Figure 1.

Figure 1.

Spatial interpolation steps

18. Comparison of different methods

In the sections above, the methods were presented separately, but comparing them is essential for choosing the most appropriate one for analysis: a wrong selection at this step introduces a large amount of uncertainty into the output results.

| Model | Type | Output surfaces | Speed | Exact interpolation | Flexibility | Advantages | Disadvantages | Assumptions |
| IDW | Deterministic | Prediction | Fast | Yes | Little flexibility, few parameter decisions | Few decisions | No assessment of prediction errors; bull's-eyes around data locations | None |
| GP | Deterministic | Prediction | Fast | No | Little flexibility, few parameter decisions | Few decisions | No assessment of prediction errors; may be too smooth; edge points have large influence | None |
| LP | Deterministic | Prediction | Fairly fast | No | Some flexibility, more parameter decisions | Flexible | No assessment of prediction errors; may be hard to choose a good local neighborhood | None |
| RBF | Deterministic | Prediction | Fairly fast | Yes | Flexibility, more parameter decisions | Flexible | No assessment of prediction errors; may be too automatic | None |
| Kriging | Stochastic | Prediction, standard error, probability, quantile | Fairly fast | Yes without measurement error; no with measurement error | Very flexible; assesses spatial autocorrelation; obtains standard errors; many parameter decisions | Flexible, with modeling tools and prediction standard errors | Many decisions on transformations, trends, models, parameters, and neighborhoods | Stationarity; some methods require a normal data distribution |
| CoKriging | Stochastic | Prediction, standard error, probability, quantile | Fairly fast | Yes without measurement error; no with measurement error | Very flexible; assesses spatial autocorrelation; obtains standard errors; very many decisions | Flexible, with modeling tools and prediction standard errors | Many decisions on transformations, trends, models, parameters, and neighborhoods | Stationarity; some methods require a normal data distribution |
* Source: compus.esri.com

Table 3.

Summarized properties of the interpolators

19. A case study

To make the above discussion clearer, a case study covering all the steps needed to assess climate change effects on the siting of wind power plants is briefly presented. Figure 2 shows the steps applied in the assessment.

Figure 2.

Methodology of wind power positioning under climate change conditions

For this purpose, a region in Southern Khorasan, Iran, was chosen, and five synoptic stations were considered as reference stations, as displayed in Figure 3.

Figure 3.

Under study region

Considering the high sensitivity of the results to different sources of uncertainty, an uncertainty analysis was applied to the downscaled results with the bootstrap method at the 95% confidence level (Efron, 1993; Khan, 2006; Fakhry, 2012a; Fakhry, 2012b). Figure 4 shows the results of the uncertainty analysis of the downscaled data.

Figure 4.

Uncertainty band before and after downscaling

After preparing the average wind speed series for each station, frequency analysis was performed; the Weibull distribution was selected, with parameters estimated by the L-moments method, and the quantile values corresponding to each return period were derived. Geostatistical methods were then applied for spatial interpolation, and the final map for the historical period was prepared.

As average wind speed was the only parameter used, a single map was drawn. Had several maps existed, final positioning could have been carried out by weighting them according to their importance. Figure 5 ranks the potential locations for installing wind power stations in the region according to the long historical records. It is clear from the figure that the regions with the highest and lowest potential for wind power plant construction are located in the south-eastern and north-western parts, respectively.

Figure 5.

Final historical wind map

To investigate the effect of climate change on the positioning process, the output of the HADCM3 model under the A2 emission scenario was first obtained from the IPCC website and downscaled using statistical techniques. In this study, the linear regression method was used; after constructing wind speed time series for the study stations for the near-future period (2010-2039), the frequency analysis was repeated and the future map was produced.
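For orientation, a heavily simplified sketch of such regression-based statistical downscaling is shown below; the predictor arrays are random placeholders standing in for large-scale HADCM3 and observed fields, and the whole pipeline is an assumption about one plausible setup, not a reproduction of this chapter's procedure.

```python
# Statistical downscaling by linear regression (sketch): calibrate a model
# between large-scale predictors and observed station wind speed over the
# historical period, then apply it to GCM output for 2010-2039. All arrays
# are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
X_hist = rng.normal(size=(360, 3))                 # 30 years of monthly predictors (assumed)
y_hist = 6 + X_hist @ np.array([1.2, -0.4, 0.7]) + rng.normal(0, 0.5, 360)

model = LinearRegression().fit(X_hist, y_hist)     # calibration on history
X_future = rng.normal(0.3, 1.0, size=(360, 3))     # stand-in for HADCM3 A2 output
wind_future = model.predict(X_future)              # downscaled future series
print(f"Mean downscaled wind, 2010-2039: {wind_future.mean():.2f} m/s")
```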

As presented in Figure 6, the impact of climate change on wind power plant construction in the low-priority regions will be negligible, but among the high-priority regions the best points become concentrated in the north-eastern part.

Figure 6.

Final future wind map under climate change conditions

20. Conclusions

This chapter has attempted to initiate discussion on the impact of climate change on the construction of wind farms through the various cases presented. Climate change impact studies on wind farms should follow these principal steps.

This study suggests that the main focus should be on using observational data, time series, and frequency analysis, together with spatial interpolation methods, to create initial maps depicting wind velocity and direction, from which the average power generation can be calculated. Decision making on wind farm locations can begin from these initial maps and a power analysis that takes various factors into account. The following step involves downscaling GCM output for the initially selected sites and comparing the modeled future conditions with the historical maps drawn from existing data. This process will be instrumental in creating a path to cope with a changing climate and its impact on wind farms.
