Towards Developing a Decision Support System for Electricity Load Forecast

Written By

Connor Wright, Christine W. Chan and Paul Laforge

Submitted: 18 June 2012 Published: 17 October 2012

DOI: 10.5772/51306

From the Edited Volume

Decision Support Systems

Edited by Chiang Jao

1. Introduction

Short-term load forecasting (STLF) is an essential procedure for effective and efficient real-time operations planning and control of generation within a power system. It provides the basis for unit-commitment and power system planning procedures, maintenance scheduling, system security assessment, and trading schedules. It establishes the generation, capacity, and spinning reserve schedules which are posted to the market. Without optimal load forecasts, additional expenses due to uneconomic dispatch, over/under purchasing, and reliability uncertainty can cost a utility millions of dollars [1].

Many approaches have been considered for STLF. Increased computational power and data storage have enhanced the capabilities of artificial intelligence methods for data analysis within the power industry [1]. Yet, even with these technological advances, industry forecasts are often based on a traditional similar day forecasting methodology or on rigid statistical models with a reduced set of variable modifiers for forecasting aggregated system load.

Power systems with a large spatial presence provide an increased challenge for load forecasters, as they often face large diversity within their load centres as well as diverse weather conditions. These geographically separated load centres often behave independently and add considerable complexity to the system dynamics and forecasting procedure.

This chapter investigates forecasting of electrical demand at an electric utility in the province of Saskatchewan, Canada and proposes a multi-region load forecasting system based on weather-related demand variables. The control area examined consists of over 157,000 kilometres of power lines with transmission voltages of 72, 138, and 230 kV. The control area was apportioned into twelve load centres consisting primarily of conforming loads: cities and rural load clusters, excluding large industrial customers, whose demand profiles conform to seasonal and weather influences and may therefore be referred to as conforming loads.

The chapter is organized as follows: Section 2 introduces the current load forecasting system used at this utility and describes challenges associated with developing a new decision support system. Section 3 discusses the weather diversity of the load centres and the load-weather patterns observed. Section 4 presents the load diversity analysis of the load centres. Section 5 identifies the methodology of the research as well as the load forecasting models examined, which consist of (1) a similar day aggregate model developed in conjunction with the utility’s load forecasting experts, (2) an ANN aggregate model, and (3) an ANN multi-region model. Section 6 describes the modelling processes and the performance evaluation methods. Section 7 presents the case study of predicting the hourly energy consumption throughout the 2011 year. The predicted results generated from each of the three models are also presented, which demonstrate the superior performance of the proposed multi-region load forecasting system over the aggregate load forecasting models. Section 8 presents some conclusions on the models examined.

2. Development and Implementation Challenges

Electric demand forecasting is a daily procedure required for efficient and effective grid operations, North American Electric Reliability Corporation (NERC) compliance, and planning procedures. The current practice of making forecasts typically involves system operators manually generating the forecast using similar day-based methodologies. The forecast methods vary depending on the individual operator, and hence are highly inconsistent. There is a tacit reluctance among operators to embrace new technology, and new tools such as decision support systems will not be adopted unless they have passed stringent benchmarking criteria and scrutiny. The challenges in developing and implementing a new forecasting system are described in this section.

2.1. Load Forecasting Overview

Load forecasting is the science of predicting human energy usage in response to externalities. It is an essential procedure for effective and efficient real-time operations planning and control of generation within a power system. It provides the basis for unit-commitment and power system planning procedures (such as generation commitment schemes, contingency planning procedures, and temporary operating guidelines), maintenance scheduling, system security assessment (identifying stability concerns with unit-commitment and maintenance schemes), and trading schedules (market-posted generation/interconnection plans). It establishes the generation, capacity, and spinning reserve schedules which are posted to the market. Without optimal load forecasts, additional expenses due to uneconomic dispatch, over/under purchasing, and reliability uncertainty can cost a utility millions of dollars. A one percent reduction in load forecast uncertainty can mean the difference between forecasting an energy emergency and efficient system operation. Therefore, a reduction in load forecast uncertainty provides considerable economic, reliability, and planning benefits [2].

Electric load is the demand for electricity by a population, which results from cultural and economic biases and is influenced by externalities [3]. Common drivers for electric loads include: end use relationships (appliances, industries, etc.); time of day; weather; and econometric data.

Inaccurate demand predictions can adversely affect a variety of power system operations:

  • The dispatch plan of generation units may not be optimal, resulting in economic losses.

  • Energy trading schedules may miss advantageous purchasing or selling options.

  • Maintenance scheduling may suffer missed opportunities for preventative maintenance.

  • System security may misidentify system stability on prospective generation plans.

Since electrical energy cannot yet be efficiently stored in bulk quantities, reliable forecasts are essential to provide efficient scheduling for an electric utility. The increasing regulatory presence in the electricity industry places increased importance on the need for accurate and efficient demand forecasting [4].

Load forecasting is traditionally divided into three categories: long term forecasts, predicting several months to several years into the future; medium term forecasts, predicting one or more weeks into the future; and short term forecasts, predicting several minutes to one week into the future. The focus of this chapter is on short term load forecasting.

Short term load forecasting is conducted not only for efficient operations and planning, but also to comply with regulations imposed by NERC. These forecasts predict either power demand, for real-time forecasting or peak forecasting in megawatts, or energy demand, for hourly or daily forecasting in megawatt-hours. Regardless of the class of load forecasting model utilized, understanding the relationship between electric demand and forecast drivers is essential for providing accurate and reproducible load forecasts.

Recent findings from NERC’s Load Forecasting Working Group have identified substantial inconsistencies in forecasting methodologies, such that the reported data are not comparable [4]. While it is difficult to standardize forecasting methods across all regions, NERC has encouraged utilities to collect and report load data in greater detail with respect to demand-side management, regional diversity factors, and non-member loads in forecasts. These suggestions indicate weaknesses in current practices of load forecast reporting; in particular, consideration of regional diversity and mixed aggregation methods were acknowledged as high-priority issues.

2.2. Existing Industry Model

Within the utility examined, load forecasting has been conducted based on a similar day aggregate load model, which was constructed based on expert knowledge elicited from operators, engineers, and/or analysts. The model supports forecasting future demand by comparing the demand of historically similar days. It exploits common electric load periodicity of three fundamental frequencies: (1) diurnal, in which the minima are found in the early morning and midday and maxima at mid-morning and evening; (2) weekly, in which demand is lowest during the weekends and approximately the same from Tuesday to Thursday; and (3) seasonal, in which heating and cooling needs increase electrical demand during the winter and summer months [5]. Furthermore, holidays and special events are treated as aberrant occurrences and modelled separately. These periodic load behaviours were analyzed and provided the basis for a sequence of representative days, which are adjusted for load growth and predicted weather phenomena.

The model consists of three components: base load, weather-influenced load, and special load. Base load considers the minimal load experienced throughout a day: an example of a base load application is lighting systems, whose usage is determined by the time of day. Weather-influenced load is the specific deviation from the averaged climatic conditions of the historically representative days. Common weather-influenced load typically pertains to heating and cooling devices such as furnaces and air conditioners. Special load includes the residual load use unaccounted for in the other load categories and is the most difficult to model. For example, special loads include the use of Christmas lights in winter or the outage of a major industrial customer. The weather-influenced loads usually rely on temperature variables, but a variety of forecast approaches exist from utility to utility [4].

Similar day models for load forecasting are often preferred for their simplicity. Operators are able to construct a forecast without a custom-made interface and manipulate data to examine the sensitivity of the system to simulated changes in weather. The models tend to be intuitive to even the most novice operators and can be easily adjusted when unforeseen weather changes occur.

While simple and easy to construct, similar day models are often poor at reflecting diversity among regional load forecasts. The diversity of metering infrastructure complicates the model’s accuracy, as behind-the-meter generation often involves an aggregate estimate which ignores specific load details that may be unavailable. Furthermore, similar day models tend to use actual loads instead of weather-normalized loads, thereby reducing the ability of the model to predict electrical usage in a diverse weather environment. An additional weakness of the approach is that load uncertainty is difficult to model: uncertainty in metering cannot be assessed, as system load is an aggregate calculation that masks individual metering errors.

Despite massive advancements in technology, many power utilities have yet to embrace the new opportunities available in enhanced metering, process automation, and control schemes based on artificial intelligence techniques. Within the grid control centre, short term load forecasting remains a manual procedure, which makes use of similar day-based models for aggregate load forecasting. Every day the tagging desk operator manually prepares a day-ahead forecast, which consists of hourly electric demand forecasts. The operator can use weather forecasts for multiple regions, forecasting as many as four major load regions, to produce an aggregate system load. When a complete forecast has been generated, it is compared against similar competing models. The power system supervisor selects the best model based on his or her subjective criteria, and the forecast is posted to the market. Figure 1 illustrates the procedure of STLF based on the traditional similar day model. The operator accesses the historical loads database, runs a pre-processing command to normalize the loads (e.g. by eliminating factors of load growth year-over-year), filters the dataset according to weather variables and load modifiers, and applies data transforms and regression analysis to the dataset filtered by operator-selected input dates. Finally, the output forecast is produced.

Figure 1.

Traditional Similar Day Procedure.
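
The normalization step referenced above is not specified in detail in the chapter. As an illustrative sketch only, year-over-year load growth might be removed by scaling each year's loads to a base year, assuming hourly loads held in a pandas DataFrame (the function and column names below are hypothetical):

    import pandas as pd

    def normalize_load_growth(df: pd.DataFrame, base_year: int) -> pd.Series:
        # df: hourly history with a DatetimeIndex and a 'load_mw' column
        yearly_mean = df['load_mw'].groupby(df.index.year).mean()
        # scale factor of each year's mean load relative to the base year
        growth = (yearly_mean / yearly_mean.loc[base_year]).to_dict()
        factors = pd.Series(df.index.year, index=df.index).map(growth)
        return df['load_mw'] / factors  # loads expressed in base-year terms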

Grid personnel tend to be reluctant to adopt new tools or models due to the long-standing process for creation and evaluation of forecasts. A surprisingly high degree of statistical accuracy and simulation evidence is required for grid personnel to consider implementing new models. Hence, a central challenge in developing a new load forecasting model is to supply substantial evidence to support its claims of accuracy, which is not limited to model performance. In this chapter, we propose that empirical evidence to support the case for multi-region forecasts is essential for grid operations personnel to adopt new modelling techniques. This evidence for the weather and load diversity in the control area will be presented in sections 3 and 4, while section 7 will provide evidence in terms of assessment results on model performance for the case study.

3. Analysis of Regional Weather

Given its large geographic area, northerly latitude, and distance from any major body of water, the province of Saskatchewan is prone to considerable weather diversity. The province’s climate transitions from humid continental in the south to subarctic in the north. Precipitation patterns vary considerably, typically decreasing from northeast to southwest. The summers are hot and dry while the winters are frigid.

Wind chill statistics for each of the twelve load centres examined in this chapter were analyzed and compared. Regional weather data were recorded and analyzed from January 2005 to November 2011. Sufficient weather diversity was identified within the analysis for all the load centres to warrant adopting a multi-region approach to modelling. Thirteen weather variables were analyzed, which included: temperature (°C), relative humidity (%), pressure (millibars), wind direction (compass degrees), wind speed (km/hr), wind gust (km/hr), cloud cover (%), normal cloud cover (%), cloud ceiling (metres), visibility (km), low-lying cloud coverage (%), middle-lying cloud coverage (%), and high-lying cloud coverage (%). Based on sensitivity and statistical analysis conducted independently on the thirteen quantitative weather variables, only the variables of temperature, humidity, and wind speed were identified as statistically significant factors for explaining load variation due to weather. These three variables were included in the prediction models and the other weather-related variables were ignored because insignificant improvement was found from their inclusion in the model.

3.1. The Weather of Saskatchewan

Saskatchewan is a land-locked prairie province, bordered east and west by the provinces of Manitoba and Alberta, respectively. Its northern border connects with that of the Northwest Territories and its southern border is divided between the American states of Montana and North Dakota. Saskatchewan is a land of geographic diversity.

With an area of 651,900 square kilometres [6], Saskatchewan is immense. Much of the province lies within the Great Plains and Interior Plains regions of North America, which comprise nearly half of its area, while the Canadian Shield dominates the northern half.

Over 52% of Saskatchewan is covered by boreal forest, largely in the north, while arable land in the south represents roughly half of Saskatchewan’s total land area [6]. Due to its geography and location, Saskatchewan is further differentiated by its climate.

The dominant climates of Saskatchewan include semi-arid in the southwest, humid continental in the south and central regions, and subarctic in the north [6]. The south is typically drier and the north is typically colder. Due to its northern location, distance from any major bodies of water, and relatively flat topography, Saskatchewan has an extreme climate.

Summers are short but hot: daytime temperatures exceeding 32°C are not uncommon, while nights may quickly cool to near freezing. Humidity decreases from northeast to southwest due to the Pacific westerlies. Winters are cold and long; temperatures often do not exceed -17°C for weeks at a time. Average summer temperatures for the cities of Saskatchewan see highs of 25°C and lows of 11°C, while the average winter highs and lows are -12°C and -23°C respectively [6].

3.2. Weather-Diversity Analysis

To assess weather diversity across Saskatchewan, climatic differences across the regions examined were empirically identified. Wind chill statistics for each of the twelve load centres were analyzed and compared. Table 1 contrasts the mean, maximum, and minimum temperatures observed in each of the load centres throughout the period of investigation, which is from January 2005 to December 2011.

Region Code Mean (°C) Maximum (°C) Minimum (°C)
Area01 -1.48 36.66 -53.88
Area02 -1.59 36.89 -52.77
Area03 -1.48 36.53 -53.89
Area04 -1.52 34.46 -51.66
Area05   0.11 38.33 -50.23
Area06 -1.58 36.61 -52.85
Area07 -1.67 35.09 -51.11
Area08  2.56 41.66 -46.66
Area09 -2.68 33.89 -52.22
Area10 -2.65 33.88 -52.28
Area11 -1.51 34.44 -52.77
Area12 -2.04 32.77 -52.23
Regional Average -1.29 35.93 -51.87

Table 1.

Wind Chill Temperature Statistics Across Load Centres.

Summer Average Daily Wind Chill Winter Average Daily Wind Chill
Region Code Max (°C) Min (°C) Max (°C) Min (°C)
Area01 18.77 5.05 -11.11 -23.92
Area02 19.66 4.68 -11.21 -24.75
Area03 19.87 5.02 -11.38 -24.95
Area04 17.71 4.83 -10.51 -23.21
Area05 20.35 5.69   -8.56 -21.47
Area06 20.71 4.14 -11.17 -24.53
Area07 18.38 5.98 -11.87 -23.92
Area08 22.84 7.84   -5.26 -18.89
Area09 16.70 7.04 -13.32 -25.16
Area10 17.16 6.88 -13.45 -25.05
Area11 19.52 5.85 -11.81 -24.91
Area12 17.06 4.28 -11.25 -23.01
Regional Average 19.06 5.61 -10.89 -23.64

Table 2.

Average Daily Wind Chill Temperatures During Summer and Winter Months (Jan. 2005 – Nov. 2011).

It can be observed from the dataset that the regions experience different weather conditions at different times such that the temperature distributions and the variances in temperature are not the same. Table 2 lists the seasonal average daily variation of wind chill temperatures experienced by each of the twelve load centres during the period of investigation from January 2005 to December 2011. Significant temperature variation exists among the twelve regions and individual load centres experience a considerable range of temperatures in an average day.

Thus, it can be seen from Tables 1 and 2 that the weather experienced in each of the twelve regions varies considerably. Weather diversity was evidenced by the seasonal differences, daily wind chill ranges, and distribution of wind chill temperatures among the regions. The evidence for weather diversity supports our proposal for the development of a multi-region model.
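
The chapter does not state which wind chill formula underlies Tables 1 and 2. Assuming the standard Environment Canada wind chill index (a common choice for Canadian weather data), the statistic could be computed as in the following sketch:

    def wind_chill(temp_c: float, wind_kmh: float) -> float:
        # Environment Canada wind chill index; defined for air temperatures
        # at or below 10 C and wind speeds of at least 4.8 km/h.
        if temp_c > 10.0 or wind_kmh < 4.8:
            return temp_c  # outside the index's validity range
        v16 = wind_kmh ** 0.16
        return 13.12 + 0.6215 * temp_c - 11.37 * v16 + 0.3965 * temp_c * v16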

4. Analysis of Load-Diversity

Since load centres in diverse regions experience different weather conditions throughout a day, the electrical demands of these load centres, which are dependent on weather, also vary. Hence, the electricity demand of the load centres cannot be analyzed with a single aggregate model. Instead, the aggregate demand for electricity is best explained using multi-region modelling. This section presents an analysis of the twelve conforming load centres and weather data of the control region.

4.1. Aggregate and Multi-Region Load Modelling

Two approaches exist for developing load forecasting models: aggregate models and multi-region models. An aggregate model does not differentiate between load sectors or physical locations. The strength of this approach is that it provides better analysis of load growth trends and is easier to use. An aggregate model performs well for a small geographic area with dense and undifferentiated load categories, such as a suburb. This model type does not support assessing where and when electrical demand will occur throughout the system. As a consequence, aggregate models do not provide adequate inputs for analysis of grid integrity, and statistical modifiers are often applied to the model outputs so as to provide an average assessment of system response [7].

A multi-region model offers a more discrete analysis of distinct loads or load clusters. This model type is useful for large geographic areas where regionalized load profile trends differ considerably, often as a result of economic or weather diversity within the forecast area. The benefit of these models is their ability to provide higher resolution prediction results within the grid, contributing to analysis of grid integrity. However, multi-region models are more difficult to construct and operate, as the number of inputs grows with each region added to the model. Irrespective of whether the aggregate or multi-region approach is adopted, constructing a load forecasting model involves considerations of four aspects: trend, cyclicality, seasonality, and a random white noise error [8].

To illustrate the load diversity among the regions, the region code, average load, and peak load for the twelve load centres from the period of January 2005 to December 2011 are listed in Table 3.

Region Code Average Load (MW, hourly average) Peak Load (MW, one-minute peak)
Area01 205.73   345
Area02 181.46   349
Area03   35.59     60
Area04   36.54     71
Area05   36.48     67
Area06   71.33   221
Area07   25.34     45
Area08   23.86     65
Area09   17.04     74
Area10   14.23     65
Area11   43.93   154
Area12     9.44     16
Aggregate System Load 700.97 1154

Table 3.

Region Code, Average Load, and Peak Load (January 2005 to December 2011).

It can be seen from Table 3 that the peak load of most load centres is roughly twice the average load, and in some cases considerably more (for example, Area01 peaks at 345 MW against an average of about 206 MW, while Area10 peaks at 65 MW against an average of about 14 MW), which indicates that considerable load swings are possible within each load centre. An aggregate load model would not be able to represent the possible load swings within each centre.

To demonstrate the seasonal trends in electricity demand of the load centres, the hourly aggregate electricity demands of the load centres over four years are shown in Figure 2, in which the dark black line indicates the seasonal trend of the system. Peaks are found in the winter and summer months, while troughs are found in the spring and autumn months; these seasonal patterns correspond to periodic daily, weekly, and monthly variations. Limited load growth is found during this period, but considerable variance is possible within each season, which usually results from significant weather diversity.

Figure 2.

Hourly Aggregate Electricity Demands of Load Centres from January 1, 2005 to December 31, 2008.

4.2. Regional Peaking Responses Versus System Peaking Response

Demand for electricity is not static and varies according to a multitude of variables. Load centres will peak at certain periods during the day, usually conforming to business cycle and weather-related influences. The system peak response is the aggregate peak of the values of all the load centres within a control area, which may occur at a different time from the peak response of an individual load centre. A significant difference in peak response between regions and the system constitutes evidence for a diverse load environment. This evidence can provide motivation for the development of multi-region models.

To determine whether the observed load swings of the studied load centres occurred within the same time frame of the aggregate system response, the coincidence factor C [7, 9] is adopted, which is defined as,

C = \frac{\sum_{i} P_i}{P_A} \qquad (E1)

where P_i is the peak load of load centre i, and P_A is the system peak load.

The coincidence factor describes the degree of discrepancy between the regional peaking responses and the system peaking response. Because the system peak can never exceed the sum of the regional peaks, C is always at least 1. If C is greater than 1 and continues to grow as the calculation window lengthens, the load centres peak at different times than the aggregate system load, which provides evidence for the existence of load diversity among the regions. If C is greater than 1 but remains consistent, an aggregate model can be used to accurately predict load swings. In a consistent, non-diverse system, in which the regions peak together, C will remain close to 1 and a multi-region model is likely to be of little value in predicting load swings.
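
A minimal sketch of the coincidence factor calculation of equation (E1), assuming hourly regional loads held in a NumPy array of shape (regions, hours):

    import numpy as np

    def coincidence_factor(loads: np.ndarray) -> float:
        # loads: shape (n_regions, n_hours) over one calculation window
        regional_peaks = loads.max(axis=1).sum()  # each region's own peak
        system_peak = loads.sum(axis=0).max()     # peak of the aggregate load
        return regional_peaks / system_peak       # C >= 1 by construction

    def c_over_windows(loads: np.ndarray, max_days: int = 31):
        # Evaluate C over windows of 1 to max_days days, as in Figure 3
        return [coincidence_factor(loads[:, : d * 24])
                for d in range(1, max_days + 1)]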

The load diversity among the twelve load centres was calculated by comparing the peak load of each load centre to the system peak load across an increasing time interval: beginning with a daily peak to a thirty-one day peak for the period of January 1, 2011 to January 31, 2011. The results of this calculation are shown in Figure 3.

Figure 3.

Average Load Diversity Factor Applied to an Increasing Time Interval (January 1, 2011 to January 31, 2011).

It can be seen from Figure 3 that the coincidence factor is greater than 1 and increases over longer calculation intervals. Both facts provide evidence for the existence of load diversity amongst the load centres examined. Considering the data presented in Table 3 and Figure 3, it is reasonable to conclude that significant load diversity exists throughout the control region. Therefore, both the weather and load diversity observed within the control area provide justification for the development of multi-region forecasting models. The performance of both aggregate and multi-region models will be statistically benchmarked to identify the best model type and structure for STLF in Saskatchewan.

5. Load Forecasting Models

Three load forecasting models were developed: (1) Similar Day Aggregate Load Model, (2) ANN Aggregate Load Model, and (3) ANN Multi-Region Load Model. The similar day aggregate load model provides the industry benchmark. The ANN aggregate load model serves as the baseline to show the performance enhancement achieved by the ANN approach. The ANN multi-region load model demonstrates the performance enhancement achieved by the multi-region approach. All models were evaluated according to the same performance evaluation methods, which will be described in section 6. The models were tested with the same case study, which will be presented in section 7. A comparison of the characteristics of the models is presented in Table 4 and a comparison of the input variables to the models is presented in Table 5. The research methodology and modelling process for each of the three models will be described in this section.

Model Name                   Model Type   Methodology   Model Output                 Training Type
Aggregate Similar Day Model  Similar Day  Aggregate     Aggregate electrical demand  Knowledge Discovery in Databases
Aggregate ANN Model          ANN          Aggregate     Aggregate electrical demand  Supervised training
Multi-Region ANN Model       ANN          Multi-Region  Aggregate electrical demand  Supervised training

Table 4.

Summary of Model Properties and Methodologies.

Aggregate Similar Day Model
  Past Hour Load: System
  Air Temp.: Area01, Area02
  Rel. Humidity: not used
  Wind Speed: not used

Aggregate ANN Model
  Past Hour Load: System
  Air Temp.: Area01, Area02
  Rel. Humidity: Area01, Area02
  Wind Speed: Area01, Area02

Multi-Region ANN Model
  Past Hour Load: Area01 through Area12
  Air Temp.: Area01 through Area12
  Rel. Humidity: Area01 through Area12
  Wind Speed: Area01 through Area12

Table 5.

Summary of Model Inputs.

5.1. Development of a Similar Day Model

The domain expertise for this research project was drawn from the grid control operating staff of the Saskatchewan utility, including the power system supervisors, capacity management engineers, and system operators. They were consulted to identify load patterns, select predictive parameters, and assist in development and pre-processing of both the load and weather datasets.

The load history of the control area and temperature variables from Area01 and Area02 are the generalized inputs to the automated similar day model. These variables are used as indices for searching a Supervisory Control and Data Acquisition (SCADA) database to obtain the best-fit days, weighted according to the temporal distance from the forecasted day. Most similar day models are aggregated load models, driven by one or more regional temperature forecasts, typically corresponding to the weather of the largest load centres. They do not require training data in the sense of a model that learns automatically; instead, the system combines historical data with expert predictions. Figure 1 illustrates the traditional forecast procedure based on the similar day model.
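
The utility's exact best-fit weighting scheme is not published. As a conceptual sketch only, candidate days might be scored by hourly temperature similarity and discounted by temporal distance from the forecast day (the dictionary keys, weights, and function names below are hypothetical):

    def score_candidate(cand: dict, target: dict,
                        w_temp: float = 1.0, decay: float = 0.01) -> float:
        # cand/target: {'date': datetime.date, 'temps': 24 hourly temperatures}
        # Lower score = better fit.
        weather_dist = sum(abs(c - t) for c, t in
                           zip(cand['temps'], target['temps'])) / 24.0
        days_apart = abs((target['date'] - cand['date']).days)
        return w_temp * weather_dist + decay * days_apart

    def best_fit_days(candidates: list, target: dict, k: int = 5) -> list:
        # Keep the k most similar historical days for the weighted average
        return sorted(candidates, key=lambda c: score_candidate(c, target))[:k]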

After consultation with experts including system operators and capacity management engineers, a similar day-based load forecasting model was developed. An examination of the hourly observations of system load over the period of 2005 – 2010 revealed that the data patterns can be summarized into four day types. The four day types with their associated electric demand behaviour and external influencing variables were represented in a parameterized rule base (an example is shown in Figure 5). This rule base can be used in conjunction with a database that consists of parameter values derived from the SCADA database so as to obtain the best-fit days, weighted according to the temporal distance from the target day for which the load is predicted. The weather variables were further subjected to sensitivity analysis to quantify each parameter’s influence on the aggregate load. The sensitivity analysis served to confirm the significance of the parameters identified by the experts, and the less significant ones were omitted from the input dataset.

Data Module:
  • Updates and processes knowledge databases;
  • Communicates with the SCADA database, updating data entries and database filtering;
  • Responds to data requests issued by the controller module with an ADO Recordset; and
  • Pre-filters data requests initiated by the controller module.

Controller Module:
  • Provides the interface between the user and the data module;
  • Accepts input from the user;
  • Translates user forecast queries into well-formatted SQL, which is then sent to the data module;
  • Instructs the data module and view module to perform actions based on user input;
  • Initiates data requests to the data module;
  • Retrieves data from the data module;
  • Negotiates weighting of best-fit days with the data module through the parameterized rule base;
  • Transforms data from the data module and sends results to the view module; and
  • Creates, opens, closes, and deletes projects.

View Module:
  • Receives data and commands from the controller module;
  • Modifies the user interface to accommodate new data or applications requested by the user; and
  • Acts as the graphical user interface, while hiding non-essential information from the user.

Table 6.

Functions of Similar Day System Modules.

The input variables filtered the dataset to select normalized aggregate load which was further modified to correspond with the user-defined load pattern logic. The load pattern logic was generalized from the parameterized rule base. The implemented similar day model consists of three modules: data, control, and view. The functions of the similar day modules are listed in Table 6, while Figure 4 provides a screenshot of the application as viewed by the user through the view module. An example of the similar day model rule base is provided in Figure 5.

Figure 4.

Screenshot of Similar Day Load Forecast Application.

The data module leverages a database of normalized aggregated loads, pre-filtered to correspond with identified hourly and weekday groups. The module encapsulates the data storage and interface between the application and the database. The responsibilities of this module are: responding to data requests issued by the control module, updating and processing data entries within the database, and performing data filtering.

The control module provides the interface between the user and the data. The controller accepts input from the user and instructs the data and view modules to perform actions based on those inputs. Its responsibilities include: initiating data requests to the data module on behalf of the user, retrieving data from the data module, and outputting the data to the view module. The controller translates the user’s input into a well-formatted SQL query, which is then applied to the database. The database responds with an ActiveX Data Object (ADO) Recordset, which is then translated by the control module and output to the view module. The control module also creates, opens, closes, and deletes projects, including load pattern changes instigated by the user.

The view module receives data and commands from the control module, which directs the view module to modify the user interface to accommodate new data or applications requested by the user. All data transactions outside of this module are opaque to the user. The view module constitutes the graphical user interface of the aggregate similar day forecasting system, outputting data to the user for consideration.

In order to evaluate and modify the pattern set chosen by the user, a training dataset was used as initial testing data for model tuning. Initial results obtained during preliminary testing approximated those of the experts. To further improve predictive accuracy, however, experts can be given the option to modify the model if a weather phenomenon such as a heat wave or a cold snap is forecasted. When the pattern set has been configured and stored in the data module, the user can view the results of the pattern set against the test set.

Figure 5.

Example of Similar Day Model Represented in a Decision Tree Structure.

5.2. Limitations of a Similar Day Model

While extensively used, similar day models are susceptible to the following limitations:

  • As they are based on expert knowledge, similar day models may be difficult to develop given the possibility of expert contradictions and bias [10];

  • Similar day models rely on the expert to be correct in the knowledge engineering and training stages;

  • Similar day models tend to produce linear models that do not account for dynamic environments [10];

  • The prediction capabilities of a similar day model are only as good as the historical data and degree of specificity in the operators’ reasoning knowledge, which has been captured and represented in the similar day model; and

  • For the same reason, similar day models tend to be restricted to aggregate models due to the extensive knowledge acquisition required for developing a multi-region model [10].

5.3. Development of ANN Models

In order to deal with the considerable load diversity presented in Table 3 and Figure 3, as well as the weather diversity presented in Tables 1 and 2, a new modelling structure was developed, consisting of individual load centre models fed by region-specific weather variables. The multi-region model forecasts regional loads individually; the results may then be aggregated to forecast system load.

The ANN models were all created and evaluated using the Weka data mining software package. Weka (the Waikato Environment for Knowledge Analysis), version 3.7.5, is a data mining tool written in Java and produced by the University of Waikato. It is a collection of machine learning algorithms, statistical tools, and data transforms for data mining tasks, including data pre-processing, classification, regression, clustering, association rules, and visualization.

Figure 6.

(Left) Topology of Aggregate ANN Model.

Figure 7.

(Right) Topology of Multi-Region ANN Model.

The ANN models utilize three weather variable categories: ambient air temperature, relative humidity, and wind speed. Each implemented ANN model has a unique topology, which was manually configured using the Weka Perceptron GUI. Figure 6 shows the topology of the Aggregate ANN model and Figure 7 shows the topology of the Multi-Region ANN model.

Each ANN model utilized a common training history and the same weather inputs; however, the Aggregate ANN model only used weather variables from the two largest regions, whereas the multi-region model used weather variables from all twelve regions.

In this research, a multi-layer (three-layer) perceptron classifier was chosen for the ANN models. This network architecture was chosen for its conceptual simplicity, its computational efficiency, and its ability to train by both supervised and unsupervised learning. The classifier uses backpropagation, binary classification, and a sigmoid activation function. The Aggregate ANN model used 7 inputs, while the multi-region model used a total of 48 inputs. The ANN model inputs are summarized in Table 5.

After the architecture and topology of the neural networks were determined, optimization of model coefficients was achieved by systematically varying model parameters and observing the response of each network. Both the Aggregate ANN model and the Multi-Region ANN model provided the single output of the forecasted system load, and the inputs were the conditional variables of weather and system load. Through a process of trial and error, the model configurations were updated until optimum values were realized.
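
The models above were built in Weka rather than in code. As an illustrative stand-in only, the multi-region topology (48 inputs, one hidden layer, a single system load output, sigmoid activation, backpropagation-style training) could be sketched in Python with scikit-learn; the hidden-layer size and training parameters below are hypothetical choices, not taken from the chapter:

    from sklearn.neural_network import MLPRegressor

    # 48 inputs: past-hour load, air temperature, relative humidity, and
    # wind speed for each of the twelve regions; one output: system load.
    model = MLPRegressor(hidden_layer_sizes=(24,),   # hypothetical size
                         activation='logistic',      # sigmoid units
                         solver='sgd',               # backpropagation-style
                         learning_rate_init=0.01,
                         max_iter=2000,
                         early_stopping=True,        # guards against over-training
                         random_state=0)
    # X_train: (n_samples, 48) lagged load and weather inputs
    # y_train: (n_samples,) aggregate system load one step ahead
    # model.fit(X_train, y_train); y_pred = model.predict(X_test)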

5.4. Limitations of an ANN Model

Despite their learning capabilities, ANNs are subject to a number of limitations, including:

  • Prediction accuracy tends to scale with the size of the training set, so a large training set is required. Since the network can become over-trained, care must be taken by the designer to tune the network and terminate training at the appropriate moment.

  • The training set must cover the range of all possible events which the network is expected to predict. Common events may dampen the response to critical, yet rare, scenarios, and in order to respond appropriately to these critical scenarios, transformations of the dataset may be required. Unfortunately, insufficient exposure to scenarios is only revealed after the network has been trained and tested. Therefore the designer must be cognizant of the contents of the training set.

  • Benchmarking efforts for neural network performance are difficult since the model may be optimized to locate local, rather than global, minima/maxima [8].

  • Network layers and connections are often implemented on a trial and error basis. While domain knowledge is an important aspect of any modelling efforts, neural networks often expose unconventional connections which lead to significant performance enhancements. Linear connections are often redundant when using an ANN [8].

The ANN models developed during this research were subject to the aforementioned limitations; however, efforts were taken to mitigate these impediments. The training set met or exceeded the size of similar STLF ANN models [1, 3, 7, 9, 10] and contained a number of scenarios, both common and diverse with respect to weather conditions and load response. The benchmarking process considered a case study of the 2011 year across all hours and weekdays, which exceeded the evaluation events used in similar STLF ANN models [1, 3, 7, 9], and utilized five statistical measures for model benchmarking, further described in section 6. Finally, a systematic analysis of model optimization was conducted: parameters were changed methodically and performance was noted. Ultimately, the best modelling parameters were chosen based upon the analytical review of the model configurations.

6. Modelling Process and Performance Evaluation Methods

The three models were supplied with datasets for training/knowledge elicitation. Statutory holidays and adjacent days were excluded from both the training and testing processes of the model application. The testing dataset was not included in the modelling process and was kept entirely separate from the training sets. The weather data were the actual hourly-averaged weather recordings rather than forecast data, so as to minimize error due to weather forecasts.
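
A minimal sketch of this dataset hygiene, assuming hourly records in a pandas DataFrame and a hypothetical list of statutory holiday dates:

    import pandas as pd

    def split_datasets(df: pd.DataFrame, holidays: list):
        # df: hourly records with a DatetimeIndex;
        # holidays: pd.Timestamp dates (hypothetical statutory holiday list)
        excluded = set()
        for h in holidays:
            for offset in (-1, 0, 1):  # the holiday and its adjacent days
                excluded.add(h + pd.Timedelta(days=offset))
        keep = ~df.index.normalize().isin(list(excluded))
        clean = df[keep]
        train = clean[clean.index.year < 2011]   # training history
        test = clean[clean.index.year == 2011]   # held-out 2011 test year
        return train, test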

Energy and peak load forecasting was performed for weather-induced demand and profile conformance. Aggregated and individual load forecasting approaches were evaluated and contrasted. Load and weather trends were identified and amalgamated into load forecasting methods for further optimization.

Databases of weather and electric loads were constructed and backfilled to January 1, 2005. Load calculations were created, monitored, and evaluated for integrity. A total of 12 conforming load centres were analyzed. Real time and historical weather reports were stored and updated for 10 weather stations.

Load variables were assessed according to their weather-sensitivity and profile-conformance. Sensitivity analysis combined with statistical methods was used to identify weather-induced demand variables. Load forecasting was evaluated with an expert-based Aggregate Similar Day model, an aggregate artificial neural network model, and a multi-region artificial neural network model.

For the purposes of this research, performance was assessed using the following statistical methods: correlation, Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Relative Absolute Error (RAE), and Root Relative Squared Error (RRSE). These methods assess the forecasting models’ overall prediction accuracy and consistency.

Correlation performance was calculated by computing the correlation coefficient, which is a measurement of the statistical similarity of the predictor to the prediction. The coefficient ranges from 1, indicating perfectly correlated results, through 0, indicating no correlation, to -1, indicating perfectly negatively correlated results. Correlation assesses errors differently than the other methods used in this research for benchmarking: its scale is independent and untransformed, even if the output is scaled, and its assessment tracks the behaviour of the model rather than its error [11]. Thus, a large correlation value is desirable, just as a low error value is desirable.

Correlation is defined as:

\mathrm{Correlation} = \frac{S_{PA}}{S_P S_A}, \quad \text{where} \quad S_{PA} = \frac{\sum_{i=1}^{n} (p_i - \bar{p})(a_i - \bar{a})}{n-1}, \quad S_P = \sqrt{\frac{\sum_{i=1}^{n} (p_i - \bar{p})^2}{n-1}}, \quad S_A = \sqrt{\frac{\sum_{i=1}^{n} (a_i - \bar{a})^2}{n-1}} \qquad (E2)

where a_i is the actual value; p_i is the predicted value; ā is the mean of the actual values; p̄ is the mean of the predicted values; and n is the total number of values predicted.

Accuracy performance of the three load forecasting systems was established by comparing MAE and RMSE results.

MAE is defined as:

\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} |p_i - a_i| \qquad (E3)

where a_i is the actual value; p_i is the predicted value; and n is the total number of values predicted. MAE measures the magnitude of individual errors, irrespective of their sign. MAE does not exaggerate the effect of outliers, treating all errors equally according to their magnitude; however, it masks the tendency of a model to over- or under-predict values.

RMSE is defined as:

\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n} (p_i - a_i)^2}{n}} \qquad (E4)

where a_i is the actual value; p_i is the predicted value; and n is the total number of values predicted.

RMSE, like MAE, does not exaggerate large errors to the extent of squared error and root squared error measurements. By computing the square root in RMSE, the dimensionality of the prediction is reduced to that of the predictor [11]. These two methods consider all prediction errors equally.

In order to evaluate the consistency of the predictions, RAE and RRSE are utilized. RAE normalizes the total absolute error of the predictor against the average results to provide a distance-weighted result.

RAE is defined as:

\mathrm{RAE} = \frac{\sum_{i=1}^{n} |p_i - a_i|}{\sum_{i=1}^{n} |a_i - \bar{a}|} \qquad (E5)

where a_i is the actual value; p_i is the predicted value; ā is the mean of the actual values; and n is the total number of values predicted.

The RRSE, like RAE, evaluates the relative distance of magnitude errors. Outliers are emphasized and, like RMSE, the dimensionality of the prediction equals that of the predictor.

RRSE is defined as:

\mathrm{RRSE} = \sqrt{\frac{\sum_{i=1}^{n} (p_i - a_i)^2}{\sum_{i=1}^{n} (a_i - \bar{a})^2}} \qquad (E6)

where a_i is the actual value; p_i is the predicted value; ā is the mean of the actual values; and n is the total number of values predicted.

The best model is the one with the highest correlation and the lowest error rates. The success rate must be evaluated according to each of the aforementioned benchmarking methods. Consistency is equally important to accuracy: a highly variable model may sometimes be correct, but it carries considerable uncertainty for future planning efforts. In the next section, the case study and the performance results are discussed with reference to the statistical performance indicators of correlation, MAE, RMSE, RAE, and RRSE.
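
For concreteness, the five benchmarking measures of equations (E2) to (E6) can be computed directly; a compact NumPy sketch:

    import numpy as np

    def benchmarks(a: np.ndarray, p: np.ndarray) -> dict:
        # a: actual loads; p: predicted loads (equal-length 1-D arrays)
        corr = np.corrcoef(p, a)[0, 1]                                  # (E2)
        mae = np.mean(np.abs(p - a))                                    # (E3)
        rmse = np.sqrt(np.mean((p - a) ** 2))                           # (E4)
        rae = np.sum(np.abs(p - a)) / np.sum(np.abs(a - a.mean()))      # (E5)
        rrse = np.sqrt(np.sum((p - a) ** 2)
                       / np.sum((a - a.mean()) ** 2))                   # (E6)
        return {'Correlation': corr, 'MAE': mae, 'RMSE': rmse,
                'RAE': rae, 'RRSE': rrse}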

7. Case Study

Each of the models was evaluated according to the same testing dataset consisting of non-holiday loads from January 2nd, 2011 to December 30th, 2011. Each model had access to a training dataset (see Table 5 for a summary of model inputs) using the same hourly and weekday groups for modelling. The aggregate models were restricted to historical aggregate loads rather than regional loads, and the weather from the two largest load centres, whereas the multi-region model had full access to the training dataset.

This section presents the case study of predicting the hourly energy consumption throughout the 2011 year and an analysis of the prediction results generated from each of the three models. For the purposes of evaluation, historically recorded weather variables were used, rather than predicted weather variables such as a forecaster would use in reality. A summary of the prediction results, assessed with the benchmarking methods identified in section 6, and grouped by classification period (next day and next hour) are presented in Tables 7 and 8.

Similar Day Aggregate ANN Multi-Region ANN
Correlation 0.7282 0.7819 0.8131
MAE (MWhr) 33.1111 31.3465 32.2332
RMSE (MWhr) 43.8135 41.0459 41.2891
RAE (%) 49.15% 46.98% 47.91%
RRSE (%) 46.36% 49.61% 49.44%

Table 7.

Average Model Performance – Next Day.

It can be seen from Table 7 that for next day predictions, the performance of the aggregate models closely approximated the multi-region model. Of the two aggregate models, the Aggregate ANN model outperformed the Similar Day model in all categories except the RRSE. The Aggregate ANN model demonstrated a greater ability to track the behaviour of the load, produced more accurate predictions, and had greater consistency than the Similar Day model. However, the Similar Day model produced a better RRSE, which indicates it is slightly better at modelling load behaviour than the Aggregate ANN model.

Similar Day Aggregate ANN Multi-Region ANN
Correlation 0.7697 0.9359 0.9469
MAE (MWhr) 24.6104 16.9821 15.8962
RMSE (MWhr) 31.4624 21.7404 20.5349
RAE (%) 38.11% 26.30% 24.49%
RRSE (%) 39.45% 27.22% 25.50%

Table 8.

Average Model Performance – Next Hour.

It can be seen from Table 8 that for next hour predictions the performance of the Multi-Region ANN model, as compared to the aggregate models, was superior across all metrics. According to all the metrics, the Multi-Region ANN model was the most accurate and consistent for next hour predictions, and the Similar Day model was the least accurate and consistent.

The Multi-Region ANN model performed best overall for next hour intervals, but was second to the Aggregate ANN model for next day intervals. This is because the perceptron generalizes the data it receives into a single model; if this generalization is not achieved, the model becomes over-trained and relies too heavily on the training set. Since weather is much more dynamic from one day to the next than from hour to hour, the Multi-Region ANN model was able to generalize weather/load responses for next hour conditions. However, for next day conditions, the Multi-Region ANN model was unable to sufficiently generalize the impact from all its weather inputs and, consequently, was over-trained. The Aggregate ANN model was best able to generalize relationships for next day forecasts, as its reduced input set enables it to better reflect changes in the major load centres, which significantly affect system demand. The Multi-Region ANN model was more dynamic in its response to varying weather conditions, but this only applies for same day forecasts.

The Similar Day model performed worst overall for both next day and next hour intervals. Since the Similar Day model operates by finding comparable days for inclusion into a weighted average, its performance deteriorates during abnormal load/weather days. Notably, its RRSE during next day predictions was the best among the three models (Table 7). As the Similar Day model is predicated upon the assumption that the past may be used to predict the future, the model relies significantly on direct load modelling; that is, the simple predictor of aggregate system load has a greater influence on the model’s calculations than in the ANN models, which model weather and load equally.

Comparing next day and next hour performance shows that model performance across all benchmark metrics improved when the time interval was shortened. This was expected, as the previous hour’s energy demand is highly correlated with the next hour’s energy demand. Next hour predictions require a high ability to adapt to weather and load changes, and the ANN models performed better than the Similar Day method across all metrics. Next day predictions require greater generalization of behaviour, as the previous day’s load is not as strongly correlated with the target as the previous hour’s load.

When considering the performance of individual hour groups, the situation becomes more complicated. The multi-region model, in general, resulted in the lowest MAE and highest correlation; however, during next day predictions, the aggregate models often had better MAE and RMSE performance. Figures 8 and 9 illustrate the behaviour of the models for next day and next hour predictions within specific hourly groupings.

Figure 8.

STLF Model MAE Performance in Next Hour Predictions.

Figure 9.

STLF Model MAE Performance in Next Day Predictions.

It can be seen from Figure 8 that the prediction accuracies of the Aggregate ANN model and the Multi-Region ANN model are very similar, likely because their topologies are similar. In addition, these models are consistent in their errors across all hours. The similar accuracy profile across the three models indicates that certain hour groups are more difficult to forecast than others. The performance of the Similar Day model is best during off-peak periods, as the greatest error associated with the model is found during the hour group of 17 – 21. This observation may be generalized to all the models, as peak error was often found during the morning or evening peak periods, which suggests the impact of temperature on electrical demand is weakest during peak periods. When these results were shown to the experts, they noted that the demands during the peak periods are often attributed to the business cycle, so temperature would likely exert a less significant influence. In general, the experts identified the results of the hour group of 22 – 23 to be the most accurate. They suggested this hour group should be extended to include hours 22 – 23 and 0 – 3, as these periods typically have high baseload and temperature-dependency. A comparison of the models’ abilities in describing the behaviours of loads is shown in Figures 10 and 11.

The Aggregate ANN model and the Multi-Region ANN model are similar in both their correlation coefficients and predictive accuracy. The Similar Day model has the lowest correlation across all hours, and demonstrates low correlation at both peak periods.

In conclusion, the multi-region model proved to be the best overall model in terms of predictive accuracy and consistency. The Similar Day model was the easiest to build and offered operators an intuitive explanation of load behaviour; however, it also performed the worst of the three models analyzed. The performances of the Aggregate ANN and Multi-Region ANN models were similar due to their topological similarities. These results suggest that forecast environments with considerable weather and load diversity should adopt a multi-region model for load prediction instead of grouping the regions into a single ANN model. It can also be observed that peak periods were the most difficult for the models to predict, and the forecast results for those periods have the lowest accuracy.

Figure 10.

STLF Model Correlation Performance in Next Day Predictions.

Figure 11.

STLF Model Correlation Performance in Next Hour Predictions.

8. Conclusions and Future Work

Load forecasting continues to grow in importance within the electric utility industry. To date, no known study has been published which examines load forecasting within the province of Saskatchewan and/or within the control area examined. The increased importance of energy and environmental concerns, coupled with enhanced regulatory presence, has renewed interest in developing an accurate and easy-to-use load forecasting system within the control area.

The general objective of this research is to conduct load forecasting for a large geographic area which has considerable weather and load diversity. The specific research objective is to develop data-driven hourly prediction models for multi-region short term load forecasting (STLF) for twelve conforming load centres within the control area in the province of Saskatchewan, Canada. Since the load centres experience considerable diversity in terms of both weather and load, a multi-region based approach is needed and the ANN modelling approach was adopted for developing the models.

Due to their simplicity, ease of analysis, and long adoption history, many load forecasting systems currently used are based on a similar day methodology. However, the research results show that the multi-region ANN model improved prediction performance over the aggregate-based short term load forecasting ANN model and the similar day aggregate model in forecasting short term aggregate loads in next hour forecasts as well as next day forecasts. All models examined were weather-driven forecasting systems. The performance of the models was evaluated using the dataset from the 2011 year. Based on the measurements of Correlation, MAE, RMSE, RAE, and RRSE, it can be concluded that the ANN-based models provide superior prediction performance over existing similar-day forecasting systems. The developed models are able to reduce STLF inaccuracies and may be applicable for modelling other system concerns, such as system reliability.

Operational staff of grid control centres often adopt similar day models due to their simplicity and intuitive development, while paying less attention to the impacts of weather changes on electricity demand. This chapter has demonstrated the superior performance of the ANN-based models over the similar day models. This finding suggests that artificial-intelligence-based methods can potentially be used for enhancing performance of load forecasting in the operational environment. Future efforts in developing artificial intelligence-based forecasting systems can include building more intuitive user interfaces, so as to promote greater user adoption.

We believe that merging the ANN models with other methods such as fuzzy logic, support vector regression, and time series considerations can provide enhanced consistency for modelling reduced load interval datasets. Further analysis of heat wave theory and other weather-trend electricity demand drivers is necessary for these methods to become applicable for conducting both short and medium term load forecasts. The results and methods of this work will be compared against other artificial intelligence models and statistical methods to identify further areas of improvement. Future work in this field is required to decrease forecast time intervals in order to provide a real-time operating model for intelligent automated unit-commitment algorithms, which operate at 15 minute intervals. Further efforts in weather trend analysis, such as heat wave theory, will be investigated in order to quantitatively describe other weather-load trends.

Acknowledgments

We are grateful for the generous support of Research Grants from Natural Sciences and Engineering Research Council (NSERC) and the Canada Research Chair Program.

References

  1. Salim N. A. (2009) Case Study of Short Term Load Forecasting for Weekends. In: Proceedings of the 2009 IEEE Student Conference on Research and Development, November 16 – 18, 2009, UPM Serdang, Malaysia.
  2. Wood A. (1996) Power Generation, Operation, and Control. 2nd ed. New York: Wiley-Interscience, John Wiley & Sons, Inc.
  3. Sargent A. (1994) Estimation of diversity and kWHR-to-peak-kW factors from load research data. IEEE Transactions on Power Systems 9(3): 1450 – 1456.
  4. North American Electric Reliability Corporation (2011) Load Forecasting Survey and Recommendations. http://www.nerc.com/docs/docs/pubs/NERC_Load_Forecasting_Survey_LFWG_Report_111907.pdf (accessed September 17, 2011).
  5. SaskPower (2010) Load Forecast. http://www.saskpower.com/ (accessed July 21, 2011).
  6. Ward N. (2011) Saskatchewan – The Canadian Encyclopedia. http://www.thecanadianencyclopedia.com/articles/saskatchewan (accessed October 27, 2011).
  7. Fan S. (2008) Multi-Area Load Forecasting for System with Large Geographical Area. In: Industrial and Commercial Power Systems Technical Conference, IEEE, May 4 – 8, 2008, Clearwater Beach, Florida.
  8. Feinberg E. (2005) Applied Mathematics for Restructured Electric Power Systems. New York: Springer Publishing.
  9. Fan S. (2007) Short-term Multi-Region Load Forecasting Based on Weather and Load Diversity Analysis. In: 39th North American Power Symposium, IEEE, September 30 – October 2, 2007, Las Cruces, New Mexico.
  10. Witten I. (2005) Data Mining: Practical Machine Learning Tools and Techniques. San Francisco: Morgan Kaufmann.
  11. Nau R. (2011) What’s the bottom line? How to compare models. http://www.duke.edu/~rnau/compare.htm (accessed November 10, 2011).
