## 1. Introduction

Certain regions are subject to frequent adverse weather events intense enough to damage the electrical networks, requiring human intervention to verify and eliminate the causes of the disruption and thus leading to the creation of emergency service orders. These orders, marked by a high degree of uncertainty and urgency, together with commercial orders, which follow previously defined routes, become a task of great relevance for the network operation centre of electric distribution systems and make the operating environment even more dynamic. The key issue for resource planning, with regard to the service crews, is an estimation of service time that allows the most assertive planning possible, improving crew throughput.

Considering the need to define a typical behaviour for these orders, this chapter proposes a predictive modelling of emergency services for resource planning that takes into account the geographic dispersion of such services and the time windows that comprise the amount of service time demanded. The resource-planning methodology presented in this chapter is based on the predictive modelling of emergency services, defining random variables to denote how much service time is demanded in each portion of the geographic area considered and how important such demands may be with respect to reliability indexes, such as the number of customers affected and the amount of power not supplied.

Following this definition of random variables, one may obtain the service time needed to attend emergency services, stratified by region and restricted to a certain time window, that is, a 1-h interval. With this information, a decision-making process may be conducted to define the number of crew hours needed to attend all the services in the 1-h interval. The whole picture for 1 day of demand is obtained by collecting all these 1-h intervals, furnishing the geographic location and the on-site service time to a possible proactive-routing approach.

To show how the methodology may be applied, a case study is developed considering a horizon of emergency service occurrences from a Brazilian power utility. The purpose of this case study is not only to analyse the past but also to show how to update the model with actual occurrences, so that they may be considered in future estimations.

The process of conducting statistical analyses to construct the time series for each 1-h interval is depicted in a graphical user interface, which also allows a timeline description of the estimated service time dispersed over the geographic area considered, followed by the decision-making process of selecting the most relevant random variables to denote the service time demanded.

## 2. Service restoration by repair crews

Although the advent of smart grids has led to increasing levels of automation [1], human intervention remains necessary in cases of extreme weather events or collisions involving overhead networks, which are exposed by nature and may cause loss of power supply.

This interruption refers to the switching from the normal to the emergency operating condition [2], in the whole network or in a restricted part of it, leaving customers without power supply and affecting continuity indexes. In addition, the companies' billing process is affected by not serving the load demand for a certain period.

The use of repair crews becomes imperative precisely to overcome the limitations of remote-control equipment: the isolation of the fault can often be performed automatically, but its correction requires human intervention.

From this crew management arises the well-known problem of dispatching emergency orders to repair crews already scheduled with pre-programmed services, which forces a change of course and characterizes the problem of dynamic vehicle routing [3]. The problem addressed in this work, in the context of service operations in electric power distribution utilities, is defined in [4].

The optimization problem is rooted in minimizing the waiting time between the detection of the defect and the arrival of crews at the service location: the shorter this time, the shorter the time without supply, thus contributing to the reliability of the electric power distribution system.

Another aspect worth mentioning is the random nature of the events, namely the occurrence of emergency orders. In a reactive system, each new pending emergency order, or a set of them, gives rise to a dynamic routing problem [5]. In addition, the routing problem associated with this work presents an important particularity: the service time at each service location, which is exactly the estimated time for resolution. It follows that the dispatch problem is closely related to the minimum-latency problem [6].

From the conclusions of [7], it can be observed that a completely reactive decision process, responding to the disturbances generated by the creation of emergency orders, can lead to a sequence of mistaken decisions due to the loss of the 'global vision' of the whole problem, which emerges only after analysing all the events of the day.

This chapter aims to anticipate the occurrence of emergency services in order to provide a proactive approach to dispatch, in which emergency occurrences can be assumed to be probable and therefore likely to be included in routing solutions of programmed orders.

## 3. Demand forecasting

The routing problem is usually addressed by assuming that its attributes (customer demands, travel and service times, order criticality) are known in advance. However, most real cases present some level of uncertainty [8]. Anticipating these uncertainties already in the planning phase is important because it allows a more precise routing plan, favouring crew management and operational cost reduction, avoiding penalties from regulatory agencies through the improvement of reliability indicators, and increasing the revenue from energy sales by reducing supply disruptions.

According to [9], the data of vehicle-routing problems can be 'static and deterministic', 'static and stochastic', 'dynamic and deterministic' or 'dynamic and stochastic'. This last class, often referred to as partially dynamic, is the one related to this work: it assumes that some of the unknown data are available in stochastic form. Proactive planning can then be realized by transforming this stochastic knowledge into dummy consumers with an expected service time and a temporal and spatial position.
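The transformation of stochastic knowledge into dummy consumers can be sketched as follows. This is a minimal illustration only: the `DummyCustomer` fields, the `to_dummy_customers` helper and the sample coordinates are assumptions for the sketch, not structures used by the utility.

```python
from dataclasses import dataclass

@dataclass
class DummyCustomer:
    """A forecast emergency occurrence treated as a planned stop."""
    lat: float            # quadrant centre latitude
    lon: float            # quadrant centre longitude
    hour: int             # start of the 1-h time window
    service_time: float   # expected on-site service time (minutes)

def to_dummy_customers(forecast):
    """Turn a {(lat, lon, hour): expected_minutes} forecast into dummy stops."""
    return [DummyCustomer(lat, lon, hour, minutes)
            for (lat, lon, hour), minutes in forecast.items()]

# Example: one forecast quadrant with 42.5 min of expected service at 19 h
stops = to_dummy_customers({(-29.68, -53.81, 19): 42.5})
```

Each such stop can then be fed to a routing solver exactly like a real customer, which is what makes the planning proactive.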

With such definitions, the next section describes how to address the uncertainty by performing demand-forecasting techniques.

### 3.1. Demand forecasting

Considering the external service orders of electric power utilities, we treat two different types of orders: commercial and emergency. The commercial ones have a typical input behaviour and a known deadline and, as a consequence, can be planned in advance, without technical or legal problems involving power failures. By contrast, the emergency orders are randomly generated, with the corresponding dynamic character [10].

As to the attendance capacity, every available crew handles both commercial and emergency demand during the workday, with the two types overlapping each other while respecting attendance priority. This behaviour can be seen in **Figure 1**, where there is a considerable difference between the planned route and the route actually performed, mainly due to the occurrence of emergency orders: they have higher priority than commercial orders and modify the route every time they are considered.

These constant changes to the planned route, as customers and their attributes become known, progressively classify the problem addressed as dynamic and stochastic [7]. One alternative, adopted in this work, is to attempt to predict the emergency occurrences, answering when and where they may occur [11].

Indispensable to planning and to decision-making strategies, demand forecasting oriented towards a future period should rely on precise techniques [11]. Among the techniques commonly used for forecasting, the exponential-smoothing approach can be applied to predict both continuous and discrete variables.

The exponential smoothing provides the service time forecast over the analysis horizon, using Eq. (1), according to [12]:

$$F_{t+1} = \alpha x_t + (1 - \alpha) F_t \tag{1}$$

where *t* = current period; *F*_{t+1} = forecast for the next period; *x*_{t} = demand in period *t*; *F*_{t} = forecast for period *t*; and α (0 < α < 1) is a smoothing constant.

With these definitions, one has the theoretical basis to produce forecasts of service time from the available time series.
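The smoothing recursion of Eq. (1) can be sketched as below. The function name, the default α and the initialisation choice `f0 = series[0]` are assumptions for this sketch, not part of the chapter's implementation:

```python
def exp_smooth_forecast(series, alpha=0.3, f0=None):
    """Simple exponential smoothing: F[t+1] = alpha*x[t] + (1-alpha)*F[t].

    `series` is the historical service-time series of one stratum
    (quadrant, weekday, 1-h interval); returns the one-step-ahead
    forecast after consuming the whole series.
    """
    if not 0 < alpha < 1:
        raise ValueError("alpha must lie in (0, 1)")
    # A common initialisation is the first observation.
    f = series[0] if f0 is None else f0
    for x in series:
        f = alpha * x + (1 - alpha) * f
    return f
```

For a constant series the forecast converges to that constant, e.g. `exp_smooth_forecast([10, 10, 10])` returns `10.0`; larger α weights recent observations more heavily.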

However, forecasting the service time alone is not enough to answer the questions posed by this work, namely the service time related to a given part of the geographic area considered and to a defined time interval [13, 14]. Exactly these challenges are addressed in the next section, which describes the methodological procedures employed.

## 4. Proposed methodology

Following the contribution of Ferrucci et al. [7], inspired by the paper of Ichoua et al. [15], the approach proposed in this work also classifies the information of orders that have already occurred, based on the service time of each request, as pointed out in **Figures 2** and **3**.

There is a well-defined pattern in the requested service time for each day of the week: business days are very similar to one another, unlike non-business days. In addition, **Figure 2** also shows that request demand is higher from 8 am to 8 pm. The last feature influencing future requests is the geographical location: **Figure 3** depicts several layers, each referring to one time interval, with their corresponding surfaces composed of cuboids; each cuboid's height indicates the level of service time required in the respective area.

The division of the geographical area and the identification of the time windows that define the required service time are performed by stratifying the historical data, so as to identify a particular time series for each part of the geographical area and for each 1-h interval of each day of the week.

### 4.1. Demand-forecasting methodology

A natural way to deal with emergency-order uncertainties would be to optimize expected values. Instead, the proposed methodology adopts a chance-constrained model [16], where each feasible solution must stay within an acceptable, predetermined level of failure. In the presented model (see Algorithm 1), this constraint is represented by a coefficient corresponding to the maximum variation allowed. This artifice prevents exceptional disturbances from distorting the statistical analysis.

The relationship between the scale of the geographical area considered and the coefficient of variation is handled both temporally and spatially. To increase the forecasting accuracy, the historical data are stratified by quadrant of the area considered, by day of the week and by hour of the day. The methodology for this stratification is detailed in the algorithm of **Table 1**.
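The stratification and filtering step can be sketched as follows, assuming illustrative field names (`x_km`, `y_km`, `weekday`, `hour`, `service_time`) and planar kilometre coordinates; the actual algorithm of **Table 1** may differ in its data layout:

```python
from statistics import mean, pstdev
from collections import defaultdict

def stratify(orders, side_km, beta=0.5):
    """Group service times by (quadrant, weekday, hour) and keep only the
    strata whose coefficient of variation (std/mean) does not exceed beta.

    Returns {stratum_key: expected service time} for the kept strata.
    """
    strata = defaultdict(list)
    for o in orders:
        # Quadrant index: integer division of coordinates by the side size.
        quad = (int(o["x_km"] // side_km), int(o["y_km"] // side_km))
        strata[(quad, o["weekday"], o["hour"])].append(o["service_time"])
    kept = {}
    for key, times in strata.items():
        m = mean(times)
        cv = pstdev(times) / m if m > 0 else float("inf")
        if cv <= beta:
            kept[key] = m
    return kept
```

A stratum with highly dispersed service times (CV above β) is simply discarded from the forecast, which is the chance-constraint filter described above.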

Beta (β) corresponds to the maximum coefficient of variation allowed, normally 0.5, disregarding regions without emergency-order occurrences. The coefficient of variation, the ratio of the standard deviation to the mean of the stratum's service times, is therefore bounded as follows:

$$0 < CV = \frac{\sigma}{\mu} \leq \beta$$

Instead of manually testing multiple values, the algorithm minimizes a nonlinear function, obtained by interpolating the coefficient of variation over the sampled area sizes, in order to determine the best value for that coefficient. This procedure aims to minimize the size of the quadrants obtained while respecting the constraint on the coefficient of variation.
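One simple way to realize this procedure is a grid scan over the interpolated curve. This sketch assumes a callable `cv_of_side` (e.g. the interpolating polynomial) and illustrative bounds and step count; it is not the chapter's Octave implementation:

```python
def smallest_feasible_side(cv_of_side, beta=0.5, lo=0.4, hi=1.0, steps=200):
    """Return the smallest quadrant side in [lo, hi] whose predicted
    coefficient of variation does not exceed beta, scanning from small
    to large sides, or None if no side on the grid is feasible.
    """
    for i in range(steps + 1):
        side = lo + (hi - lo) * i / steps
        if cv_of_side(side) <= beta:
            return side  # first (smallest) feasible side found
    return None
```

Because the scan starts at the smallest side, the first feasible value is also the minimizer of quadrant size subject to the CV constraint.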

The main contribution of this work is the stratification of the data in order to have a more accurate forecast, since the coefficient of variation is calculated for each defined quadrant.

## 5. Case study

This section describes the computational study performed to evaluate the proposed model. The model presented in the algorithm of **Table 1** was implemented in the computational tool *GNU Octave*, chosen for its efficient numerical processing and matrix computation, as well as for being open-source software.

### 5.1. Instance

The model was tested using an actual historical data set, provided by a power distribution utility from southern Brazil. Constituted of 9408 emergency orders over a period of 12 months, the sample includes several attributes, such as latitude, longitude, year, month, day, weekday, hour of the day and service time, which were used in the analysis performed.

### 5.2. Parameters setting

*Size:* Vector with the values of the side length of the areas analysed in the calculation, measured in kilometres. The more values analysed, the greater the accuracy of the function obtained by the interpolation.

Another point to be considered is the lowest value reported in the size vector: the minimum area side size may vary depending on the study region, that is, urban or rural. The following vector was therefore chosen, measured in kilometres: sizeVector = [0.4 0.6 0.8 1].

*Beta (β)*: As previously informed, it corresponds to the maximum value to the coefficient of variation allowed to ensure the lowest acceptable variance. This value was set to 0.5.

*Interpolation*: To obtain a polynomial function that best represents the analysed data set, Lagrange interpolation is used. It is worth highlighting that the interpolation can be performed with any other function that better describes the data set. The Lagrange method constructs a polynomial of degree *n* that coincides with a given function at *n* + 1 points, according to the description of the method presented next, following [17].

Lagrange interpolation theorem: Given *n* + 1 distinct points *z*_{0}, *z*_{1}, …, *z*_{n} and *n* + 1 values *w*_{0}, *w*_{1}, …, *w*_{n}, there exists a unique polynomial *p*_{n}(*z*) ∈ *P*_{n} for which

$$p_n(z_j) = w_j, \qquad j = 0, 1, \ldots, n \tag{2}$$

Let us introduce the following polynomials of degree *n*:

$$l_k(z) = \prod_{\substack{j=0 \\ j \neq k}}^{n} \frac{z - z_j}{z_k - z_j} \tag{3}$$

where *k* = 0,1,2,…,*n*.

Now let

$$p_n(z) = \sum_{k=0}^{n} c_k\, l_k(z) \tag{4}$$

Then

$$l_k(z_k) = 1 \tag{5}$$

$$l_k(z_j) = 0, \qquad j \neq k \tag{6}$$

and hence it follows from Eq. (4) that

$$p_n(z_j) = c_j, \qquad j = 0, 1, \ldots, n \tag{7}$$

Imposing the interpolation conditions of Eq. (2), that is, *c*_{j} = *w*_{j}, Eq. (7) becomes

$$p_n(z) = \sum_{k=0}^{n} w_k\, l_k(z) \tag{8}$$

Eq. (8) is known as the *Lagrange Interpolation Formula*, and the polynomials *l*_{k}(*z*) are called the *interpolating* or *sampling* polynomials of the Lagrange interpolation. In this problem, the *z*_{k} points are the *size* samples, and the *w*_{k} values the corresponding *coefficients of variation*.
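A direct implementation of Eq. (8) follows. The sample points pair each tested area side size with a hypothetical coefficient of variation; the values are illustrative, not those of **Table 2**:

```python
def lagrange(points, z):
    """Evaluate the Lagrange interpolating polynomial, Eq. (8), at z.

    `points` is a list of (z_k, w_k) pairs; here z_k is an area side
    size and w_k the sampled coefficient of variation.
    """
    total = 0.0
    for k, (zk, wk) in enumerate(points):
        lk = 1.0  # build l_k(z) of Eq. (3) as a running product
        for j, (zj, _) in enumerate(points):
            if j != k:
                lk *= (z - zj) / (zk - zj)
        total += wk * lk
    return total

# Illustrative samples: sizeVector sides vs. hypothetical CV values
samples = [(0.4, 0.45), (0.6, 0.38), (0.8, 0.34), (1.0, 0.32)]
```

By construction the polynomial passes through every sample, so evaluating it at a sampled side returns that side's CV, while intermediate sides yield interpolated CV estimates for the minimization.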

### 5.3. Results

**Table 2** presents the optimization result, with the cell values corresponding to the coefficient of variation and the area side size.

Note that the NaN indicates that there was no coefficient of variation within the established limits.

The results presented in **Table 2** show that not all hours of the day respect the coefficient-of-variation constraint. Periods with more incidences of emergency orders (e.g. from 19 to 23 h) presented more precise data, whereas in periods with fewer occurrences (e.g. from 1 to 6 h) the data did not present the necessary reliability.

**Figure 4** presents one solution, with the service time required for emergency-order attendance in the corresponding geographical area, restricted to a given hour of the day, in this case a Tuesday evening at 19 h. From **Table 2**, one notes that the side size of each quadrant is 0.4 km, with a coefficient of variation equal to 0.33325. It is worth noting that the dispersion depicted in **Figure 4** covers only the quadrants that respect the coefficient-of-variation constraint.

From the service-time results in **Figures 4**–**8**, one may note that the finer the stratification, with the consequent reduction of the area side size (down to 0.4 km), the greater the dispersion of the areas with the highest demand. Such behaviour makes the definition of the highest-demand areas more precise.

**Figures 9**–**12** depict the number of areas selected for service-time forecasting, considering a Tuesday and varying the area side size from 0.4 to 1 km. An important relationship between area side size and the number of selected areas emerges: as the side size increases, the number of areas decreases, and so does the accuracy in defining the dummy nodes to be used later in a routing solution approach.

## 6. Conclusion

An inherent issue of the vehicle-routing problem is the uncertainty of real-time planning attached to emergency orders, where minimizing the latency of an order generally conflicts with the goal of minimizing travel time, since the pre-established route must be changed.

This work has presented an approach to support proactive real-time vehicle routing, using an algorithm that estimates the service-time demand of emergency orders while considering geographical attributes and time windows. From the results obtained, the dummy nodes produced by the proposed forecast method may be successfully included in a subsequent real-time routing solution. The main purpose is to support that solution in minimizing waiting time and latency, by decreasing the back-and-forth displacements along the route over the day.

Computational results indicate that the forecast of service demand can be performed with great precision when the geographical area considered is optimized, showing that integrating the derived stochastic knowledge results in a more accurate dimensioning of the service crews and may yield significant reductions in the discrepancy between planned and executed routes.