## 1. Introduction

One of the main problems encountered in the modeling of power system components is the inclusion of frequency-dependent effects in a time domain simulation [1]. In practice, these effects are described by discrete frequency domain responses that are obtained via calculations or measurements [1]. For time domain simulation, implementing numerical convolutions is possible but computationally inefficient. Instead, the frequency-dependent effects are usually modeled in the frequency domain via complex-curve fitting, leading to rational function-based models which can be expressed in pole-zero form, pole-residue form, or polynomial form [2]. Once obtained, these models must be converted into a form suitable for representation in an Electromagnetic Transients Program (EMTP)-type circuit simulator [3]. Commonly, such representations take the form of network synthesis or of ordinary differential equations (ODEs). Some methods have been adapted to particular problems due to the difficulty of developing a general methodology. Rational fitting techniques for model synthesis based on frequency response data have been developed since the 1950s. Some antecedents are presented below.

In 1959, Levy [4] presented a linearized Least Squares (LS) procedure for model synthesis based on the polynomial form. Some years later, in 1963, Sanathanan and Koerner [5] improved the Levy method by introducing an iterative weighting scheme to reduce the bias in the approximation caused by the linearization. Also based on the Levy method, Lawrence and Rogers [6] presented in 1979 a sequential algorithm that allows the fitting of a transfer function to progress point-by-point without matrix inversion.

In addition, a technique based on the pole-zero form that quickly became popular for the modeling of overhead lines was presented by Marti [7] in 1982. This method is a numerical implementation of the well-known Bode diagram technique. More recently, in 2017, Marti and Tavighi [8] presented an investigation based on the same rational approximation technique for the modeling of transmission lines. In this work, this fitting technique is referred to as Asymptotic Approximation or Bode.

In 1993, Soysal and Semlyen [9] used the Gauss–Seidel optimization method to improve the results given by Levy and, six years later, Gustavsen and Semlyen [1] developed the Vector Fitting (VF) method, which has become one of the most widely used techniques.

In 2006, Gustavsen [10] proposed a modification of his algorithm in order to improve the ability of VF to relocate poles to better positions. This is achieved by replacing the high-frequency asymptotic requirement of the VF scaling function with a more relaxed condition. He called this method Relaxed VF (RVF).

The VF method is closely related to the Universal Line Model (ULM) [11], which is formulated in terms of rational approximation of the line parameters through VF. In a recent publication from 2016 [12], Bañuelos et al. propose the use of only real poles and zeros in the ULM to improve its numerical efficiency.

In 2005, Noda [13] presented an iterative algorithm that partitions the entire frequency range in order to avoid ill-conditioning of the system when the Levy method is used. More recently, in 2015, Bañuelos-Cabral et al. [2] proposed the implementation of Damped Gauss-Newton (DGN) to increase the accuracy of this technique.

The aim of this book is to provide MATLAB algorithms and a detailed description of the rational approximation techniques that are most commonly used to approximate functions in the frequency domain. These techniques are: Bode’s Asymptotic Approximation (Bode), the Levy (Levy or LS) method, Iteratively Reweighted Least Squares (IRLS), the Sanathanan-Koerner (SK) method, the Noda (Noda) method, Vector Fitting (VF), the Levenberg–Marquardt (LM) method and the Damped Gauss-Newton (DGN) method.

In this chapter, the methodology for determining the state-space (SS) representation consistent with the model used in the approximation is presented first. Next, the abovementioned fitting techniques are described in detail. Then, the techniques are tested by approximating a synthetic frequency domain response; they are implemented in the MATLAB environment. Finally, the advantages of the methods are demonstrated in the rational approximation of the frequency-dependent parameters of a single-phase transmission line.

## 2. State-space model from a transfer function

The SS representation of a given system can be determined from its frequency response on rational form. Basically, there are three different rational forms (models) that can represent a measured or calculated frequency response: pole-zero form, polynomial form, and pole-residue form. In this section, a methodology is presented for determining the SS representation according to the given model type.

### 2.1. Pole-zero form

Usage of pole-zero form or series realization leads to a rational function-based model of *n*th-order given by (1), which is a ratio between products of first-order transfer functions,

The generalized graphic representation of pole-zero form with *n* = *m* is shown in **Figure 1** [14].

From **Figure 1**, it is possible to obtain the state equations and the output equation as

(2)

Using algebraic manipulation in (2) and (3) to remove

where

### 2.2. Polynomial form

In this case, polynomial form or direct form realization leads to a rational function-based model of *n*th-order given by (6), which is a ratio of two polynomials,

A graphical representation of the polynomial form with *b _{m}* = 1 and *n* = *m* is shown in **Figure 2** [14].

From **Figure 2**, it is possible to obtain the state equations and output equation as

To remove

where

### 2.3. Pole-residue form

Finally, pole-residue form or parallel realization leads to a rational function-based model of *n*th-order given by (11), which is a sum of partial fractions,

The generalized graphic representation of the pole-residue form is shown in **Figure 3** [14].

From **Figure 3**, it is possible to obtain the state equations and output equation as

In matrix form it yields

Similarly to the pole-zero form and the polynomial form, the SS representation can be obtained through inspection. It should be mentioned that the pole-residue form produces an uncoupled SS system.
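As an illustration of the uncoupled (parallel) realization, the following is a minimal Python sketch (the book's own routines are in MATLAB; function names and test values here are ours). It builds **A** = diag(*p _{n}*), **B** as a column of ones, **C** from the residues, and **D** = *d*, then checks the state-space frequency response against the partial-fraction sum:

```python
import numpy as np

def pole_residue_to_ss(poles, residues, d=0.0):
    """Parallel (uncoupled) SS realization of F(s) = sum_n c_n/(s - p_n) + d.
    A is diagonal, so the state equations are decoupled from each other."""
    A = np.diag(poles).astype(complex)
    B = np.ones((len(poles), 1), dtype=complex)
    C = np.asarray(residues, dtype=complex).reshape(1, -1)
    D = np.array([[d]], dtype=complex)
    return A, B, C, D

def ss_response(A, B, C, D, s):
    """Evaluate C (sI - A)^{-1} B + D at one complex frequency s."""
    n = A.shape[0]
    return (C @ np.linalg.solve(s * np.eye(n) - A, B) + D)[0, 0]

poles = [-1.0, -10.0]
residues = [2.0, 5.0]
A, B, C, D = pole_residue_to_ss(poles, residues, d=0.5)

# The SS response must match the partial-fraction form at any s = jw
s = 1j * 3.0
direct = sum(c / (s - p) for c, p in zip(residues, poles)) + 0.5
assert np.isclose(ss_response(A, B, C, D, s), direct)
```

The same check can be repeated at several frequencies; because **A** is diagonal, this realization is obtained by inspection, as noted above.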

## 3. Fitting methods

This section provides a description of the most widely used techniques for rational approximation of frequency domain responses within the power systems area.

### 3.1. Asymptotic approximation (Bode) technique

The Asymptotic Approximation technique consists of a numerical implementation of the graphical technique known as Bode diagrams [15]. The Asymptotic Approximation was first implemented by Marti [7] to include the frequency dependence in transmission line modeling. This technique considers the pole-zero form with real poles and zeros only,

Considering *s* = *jω*, we get

Eq. (17) can be expressed in a standard form as

and in polar form

where *K*_{0} = *kz*_{1}*z*_{2}⋯*z _{n}*/*p*_{1}*p*_{2}⋯*p _{m}*. The Asymptotic Approximation technique approximates only the magnitude of the frequency response, which can be expressed as

Finally, by using logarithmic properties in Eq. (20), one obtains:

In Bode diagrams, each term of (21) is plotted individually in accordance with its already known asymptotic behavior and combined in order to obtain the desired diagram.

In the numerical implementation, the function to be fitted is compared with the sum of the line segments produced by adding a pole or a zero to the model (21); the precision of the fitting depends on the criterion used to decide where to place these poles and zeros in the model.
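To make the asymptotic idea concrete, the following Python sketch (our own illustration, not one of the book's MATLAB routines; the standard-form function and its corner frequencies are hypothetical) decomposes the log-magnitude as in (21) and replaces each first-order factor by its straight-line asymptote: 0 dB below the corner frequency and ±20 dB/decade above it.

```python
import numpy as np

# Hypothetical standard-form model: F(jw) = K0 (1 + jw/z1) / ((1 + jw/p1)(1 + jw/p2))
K0, z1, p1, p2 = 10.0, 100.0, 10.0, 1000.0
w = np.logspace(-1, 5, 601)

exact_db = 20 * np.log10(np.abs(K0 * (1 + 1j * w / z1)
                                / ((1 + 1j * w / p1) * (1 + 1j * w / p2))))

def asymptote(w, corner):
    # Straight-line Bode asymptote of 20*log10|1 + jw/corner|:
    # 0 dB below the corner, +20 dB/decade above it
    return np.where(w < corner, 0.0, 20 * np.log10(w / corner))

# Sum of asymptotes per Eq. (21): constant + zero term - pole terms
approx_db = (20 * np.log10(K0) + asymptote(w, z1)
             - asymptote(w, p1) - asymptote(w, p2))

# Each first-order corner contributes at most ~3 dB of deviation
assert np.max(np.abs(exact_db - approx_db)) < 3.5
```

The numerical Bode routine works in the same spirit: it keeps adding poles or zeros wherever the asymptote sum deviates from the data beyond a chosen sensitivity threshold.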

### 3.2. Levy (Levy) method

This method was introduced by Levy [4] for complex-curve fitting. The Levy method identifies the polynomial coefficients in (22) in a LS sense. It is also known simply as Least Squares (LS).

The numerical difference between the frequency response to be fitted and the model represents the error in the approximation, that is,

Multiplying both sides of Eq. (23) by *D*(*s*), we obtain

Considering that *ε*′ tends to zero, (24) can be expressed as

The unknown coefficients in (22) can now be solved by formulating (25) as a LS problem (**Ax** = **b**). Frequently, the sample size of the frequency response is greater than the number of polynomial coefficients to be calculated, so an overdetermined system is obtained, as follows:

where *k* denotes the *k*-th data sample. The objective function to be minimized is

This is called a linear least squares problem. The solution **x** satisfies the normal equations [16]:

Finally, the LS solution is obtained by
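The Levy formulation can be sketched in a few lines of Python (an illustration of the linearized LS idea; the book's routines are MATLAB, and the test function below is ours). The columns of **A** contain powers of *s* for the numerator and −*F*(*s*)·*s^k* terms for the denominator, with *a*_{0} fixed to 1; stacking real and imaginary parts keeps the unknowns real.

```python
import numpy as np

def levy_fit(s, F, n, m):
    """Linearized LS (Levy): fit F(s) ~ N(s)/D(s) with
    N(s) = b0 + b1 s + ... + bm s^m and D(s) = 1 + a1 s + ... + an s^n.
    Solves N(s_k) - F(s_k)(D(s_k) - 1) = F(s_k) in a LS sense."""
    cols = [s**j for j in range(m + 1)]            # numerator columns
    cols += [-F * s**j for j in range(1, n + 1)]   # denominator columns
    A = np.column_stack(cols)
    # Stack real and imaginary parts so the solution vector is real
    Ar = np.vstack([A.real, A.imag])
    br = np.concatenate([F.real, F.imag])
    x, *_ = np.linalg.lstsq(Ar, br, rcond=None)
    b, a = x[:m + 1], np.concatenate([[1.0], x[m + 1:]])
    return b, a

# Synthetic noise-free test: F(s) = (2 + 3s) / (1 + 0.5s)
w = np.logspace(-2, 2, 200)
s = 1j * w
F = (2 + 3 * s) / (1 + 0.5 * s)
b, a = levy_fit(s, F, n=1, m=1)
assert np.allclose(b, [2.0, 3.0]) and np.allclose(a, [1.0, 0.5])
```

With exact rational data the linearized system is consistent and the coefficients are recovered exactly; with real data, the multiplication by *D*(*s*) introduces the high-frequency bias discussed next.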

### 3.3. Weighted least squares (WLS)

The LS solution of a given system of equations assumes that each equation (row in **A**) is equally important. However, the system of equations itself may be biased, e.g., in the Levy method, which is biased due to the multiplication of *F*(*s*) by *D*(*s*) in (24). In the rational approximation of frequency domain responses, Weighted Least Squares (WLS) is used to mitigate this bias by giving more weight to specific equations in order to overcome the inherent deficiency of the technique in use. Iteratively Reweighted Least Squares (IRLS), the Sanathanan-Koerner (SK) method and the Noda (Noda) method are techniques that implement the concept of WLS. These techniques are described below.

#### 3.3.1. Iteratively reweighted least squares (IRLS) iteration

A robust regression procedure is an alternative to the LS solution when data are contaminated with outliers or influential observations (measurement and/or computation errors) [16, 17]. The idea of robust regression is to weight these observations less, based on suitably chosen weighting functions. IRLS can be considered a robust regression procedure.

Here, IRLS is proposed for the rational approximation of frequency domain responses using the weighting functions in inverse form; that is, the data are not considered to be contaminated with outliers or influential observations. Instead, the fitting error at each iteration is used to weight more heavily the frequency response data that have not yet been approximated correctly.

In WLS, the objective function to be minimized is

where **W** is a diagonal weighting matrix. Eq. (32) is called a linear weighted least squares problem and **x** the linear weighted least squares solution of the system. This solution satisfies the normal equations [16]:

Then, the WLS solution is given by

It is possible to solve (34) through an iterative process (35), where *i* is the iteration number.

Eq. (35) is the IRLS method. According to (35), **W** must be updated iteratively; the error in the approximation is used for this purpose, which can be expressed as

where **W** is updated according to

with *ψ*(*ε*) being the weighting function. At the beginning of the process, **W** is an identity matrix.

**Table 1** shows a list of weighting functions proposed for use in the implementation of IRLS. Weighting functions 1, 2, and 3 are the inverses of functions used in robust regression [16, 17], while functions 4, 5, and 6 are proposed in this work.

Finally, **Figure 4** shows the behavior of the weighting functions for *ε* ∈ [−2, 2].
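The iteration (35)–(37) can be sketched as follows in Python (illustrative only; the weighting function *w* = |*ε*| used here is a simple stand-in in the spirit of the inverse functions of **Table 1**, and the test system is ours, not from the text). Solving the weighted normal equations is equivalent to scaling each row of the LS system by the square root of its weight.

```python
import numpy as np

def irls(A, b, n_iter=20):
    """Iteratively Reweighted LS: re-solve the weighted LS problem,
    emphasizing rows whose residual is still large. Illustrative
    weighting function: w_k = |eps_k| (weight MORE where the fit is
    worse, the inverse of the robust-regression usage)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # first pass: W = I
    for _ in range(n_iter):
        eps = b - A @ x                        # error of current fit
        w = np.abs(eps) + 1e-12                # avoid exactly zero weights
        sw = np.sqrt(w)[:, None]               # row scaling = sqrt of weight
        x = np.linalg.lstsq(sw * A, sw[:, 0] * b, rcond=None)[0]
    return x

# Overdetermined linear system with an exact solution
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
assert np.allclose(irls(A, b), x_true)
```

In the rational-fitting application, **A** and **b** are the Levy-type system, so the reweighting progressively corrects the bias of the plain LS solve.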

#### 3.3.2. Sanathanan-Koerner (SK) iteration

This method is based on the polynomial form:

The objective is to identify the coefficients for the polynomials *N*(*s*) and *D*(*s*) in (38) that minimize the error function

Sanathanan and Koerner [5] proposed an iterative procedure where (39) is divided by the denominator from the previous iteration; thus, (39) can be expressed as

The subscript *l* denotes the iteration number, and *D*(*s*) is considered 1 in the first iteration.
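A compact Python sketch of the SK iteration is given below (our illustration; the book's routines are MATLAB and the test function is hypothetical). Each equation of the Levy-type system is divided by |*D*(*s*)| from the previous iteration, which progressively removes the bias introduced by the multiplication with *D*(*s*).

```python
import numpy as np

def sk_fit(s, F, n, m, n_iter=10):
    """Sanathanan-Koerner iteration: repeat the linearized (Levy) LS
    solve, dividing each equation by |D(s)| from the previous iteration."""
    cols = [s**j for j in range(m + 1)] + [-F * s**j for j in range(1, n + 1)]
    A = np.column_stack(cols)
    D_prev = np.ones_like(s)                   # D(s) = 1 on the first iteration
    for _ in range(n_iter):
        W = 1.0 / np.abs(D_prev)               # SK weight
        Aw = np.vstack([(W[:, None] * A).real, (W[:, None] * A).imag])
        bw = np.concatenate([(W * F).real, (W * F).imag])
        x, *_ = np.linalg.lstsq(Aw, bw, rcond=None)
        a = np.concatenate([[1.0], x[m + 1:]])
        D_prev = sum(ak * s**k for k, ak in enumerate(a))
    return x[:m + 1], a

w = np.logspace(-1, 3, 300)
s = 1j * w
F = (5 + 2 * s) / (1 + 0.1 * s + 0.01 * s**2)  # known rational function
b_c, a_c = sk_fit(s, F, n=2, m=1)
assert np.allclose(b_c, [5.0, 2.0], atol=1e-5)
assert np.allclose(a_c, [1.0, 0.1, 0.01], atol=1e-5)
```

With noise-free data of matching order the coefficients are recovered on the first pass; on real data the reweighting is what distinguishes SK from plain Levy.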

#### 3.3.3. Noda (Noda) iteration

This algorithm proposed by Noda [13] for the rational fitting identification is based on the function:

This technique corresponds to IRLS using the weighting function number 1 in **Table 1**. Additionally, Noda also proposed a procedure to prevent the ill-conditioning of the system by partitioning the given frequency response data into sections along the frequency axis. The technique is then applied to each section of the frequency response in order to identify the poles, and finally, the corresponding residues are obtained by means of a standard LS procedure, using the entire frequency response.

### 3.4. Vector Fitting (VF) method

Vector Fitting performs the fitting by replacing a set of heuristically calculated initial poles with a set of relocated ones through an iterative procedure [1]. VF works in two stages: first, it improves the positions of the initial poles iteratively; second, it calculates the residues in a single step. This method is based on the pole-residue form

where *c _{n}* are the residues, *p _{n}* are the poles, *d* is the constant term and *h* is the proportional part.

#### 3.4.1. Pole identification

Considering the initial poles as *a _{n}*, multiplying *F*(*s*) by an auxiliary function *σ*(*s*) gives

where *σ*(*s*) is defined as

with *σF _{fit}*(*s*) being the fitting of *σ*(*s*)*F*(*s*) and *σ _{fit}*(*s*) being the fitting of *σ*(*s*). From (43) one can obtain,

Substituting (43) and (44) into (45) we obtain,

Eq. (46) indicates that the zeros of *σ*(*s*) are an approximation of the poles of *F*(*s*). To obtain the zeros one multiplies (44) by *F*(*s*) which results in

Equating Eqs. (43) and (47) yields

Algebraic manipulation in (48) gives

Eq. (49) can now be formulated as a LS problem (**Ax** = **b**). Usually, the sample size of the frequency response is greater than the number of coefficients to be calculated, so an overdetermined system is obtained:

The LS solution of the system (**Ax** = **b**) delivers the residues of *σ _{fit}*(*s*); the zeros of this function are the poles *a _{n}* for the next iteration, which correspond to the eigenvalues of [1],

where **G** is a diagonal matrix containing the poles and **k** is a unit column vector.

#### 3.4.2. Residue identification

Once the poles *p _{n}* for the rational approximation in (42) have been obtained, the residues can be found by solving (42) as a LS problem. The new overdetermined system is solved similarly as (50), (51) and (52).
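Both VF stages can be illustrated with a deliberately simplified Python sketch (our own: real poles only, no *d* or *h* terms, no stability enforcement; the production implementation is vectfit3.m, used later in this chapter). The pole-relocation stage solves one LS problem for the residues of *σF* and *σ*, then takes the zeros of *σ* as the eigenvalues of the matrix **G** − **k** **c̃**^T described above.

```python
import numpy as np

def vf_step(s, F, poles):
    """One VF iteration (real poles, no d/h terms): solve for the
    residues of sigma(s)F(s) and sigma(s), then relocate the poles
    to the zeros of sigma(s)."""
    N = len(poles)
    P = 1.0 / (s[:, None] - poles[None, :])        # partial-fraction basis
    # Unknowns: [c_1..c_N of sigma*F, then ctilde_1..ctilde_N of sigma]
    A = np.hstack([P, -F[:, None] * P])
    Ar = np.vstack([A.real, A.imag])
    br = np.concatenate([F.real, F.imag])
    x, *_ = np.linalg.lstsq(Ar, br, rcond=None)
    c_sigma = x[N:]
    # Zeros of sigma(s) = eigenvalues of diag(poles) - ones * c_sigma^T
    H = np.diag(poles) - np.outer(np.ones(N), c_sigma)
    return np.linalg.eigvals(H)

def vf_residues(s, F, poles):
    """Second stage: with poles fixed, the residues follow from one LS solve."""
    P = 1.0 / (s[:, None] - poles[None, :])
    Ar = np.vstack([P.real, P.imag])
    br = np.concatenate([F.real, F.imag])
    return np.linalg.lstsq(Ar, br, rcond=None)[0]

# Synthetic response with known real poles and residues
true_poles = np.array([-2.0, -50.0, -400.0])
true_res = np.array([3.0, 10.0, 25.0])
w = np.logspace(-2, 4, 400)
s = 1j * w
F = (true_res[None, :] / (s[:, None] - true_poles[None, :])).sum(axis=1)

poles = np.array([-1.0, -100.0, -1000.0])          # heuristic initial poles
for _ in range(5):
    poles = np.real(vf_step(s, F, poles))          # poles stay real here
res = vf_residues(s, F, poles)
Ffit = (res[None, :] / (s[:, None] - poles[None, :])).sum(axis=1)
assert np.allclose(np.sort(poles), np.sort(true_poles), atol=1e-6)
assert np.max(np.abs(F - Ffit)) < 1e-8
```

For noise-free data of exactly matching order, a single relocation step already lands on the true poles; the full method additionally handles complex-conjugate pairs, pole flipping for stability, and the *d*, *h* terms.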

#### 3.4.3. Relaxation (RVF)

The scaling function *σ*(*s*) (44) is observed to approach unity at high frequencies. This asymmetry with respect to the frequency gives a tendency to relocate poles in the direction of lower frequencies. In addition to this biasing, it also reduces the convergence speed and often leads to a reduced accuracy for the final model. This problem is effectively alleviated by the introduction of Relaxed Vector Fitting (RVF) [10], where the scaling function (44) is replaced by

where the sum of the real part of *σ*(*s*) over the frequency samples is fixed to a nonzero value. This relaxed criterion allows *σ*(*s*) to vary freely in shape.

### 3.5. Levenberg-Marquardt (LM) method

The Levenberg–Marquardt (LM) method is a technique used to iteratively refine parameter values in order to reduce the sum of the squares of the errors between the model and the calculated or measured data [18, 19]. The method is a combination of two minimization methods: the gradient descent (GD) method and the Gauss-Newton (GN) method.

Consider a model of *s* with *n* unknown parameters contained in the vector **x**. The model is to be fitted to a set of frequency response data *F*(*s*) as follows:

where **x** = [*x*_{1}, *x*_{2}, ⋯*x _{n}*] is a vector that contains the coefficients to be fitted. The LS error between the fitting of the model and the frequency response data is

The aim of the LM method is to iteratively find a perturbation **h** of the parameters **x** that reduces the objective function *γ*(**x**) [18].

#### 3.5.1. Gradient descent (GD) method

The gradient of a function indicates the direction of the maximum rate of change at a specific point of the function. GD updates the parameter values in the direction opposite to the gradient of the objective function. Thus, the gradient of *γ*(**x**) is:

where **J** is the Jacobian matrix, which represents the local sensitivity of the model to variations in the parameters **x**. Finally, the perturbation **h** that moves the parameters in the direction of steepest descent is given by

#### 3.5.2. Gauss-Newton (GN) method

The Gauss-Newton (GN) method assumes that the objective function *γ*(**x**) is approximately quadratic in the parameters near the optimal solution. Approximating the model with a first-order Taylor series,

Substituting (62) into (57) gives

The perturbation **h** that minimizes *γ*(**x**) is reached when

Taking the derivative for (63) gives

and applying condition (64), we obtain the perturbation **h** calculated by the GN method

#### 3.5.3. Levenberg-Marquardt method (LM)

LM works more like the GD method when the parameters are far from their optimal value, and more like the GN method when the parameters are close to their optimal value. Levenberg [18] and Marquardt [19] proposed this method:

where *λ* is known as the Levenberg–Marquardt parameter. For large values of *λ*, the LM method results in a GD update, which is appropriate when the solution is still far from the minimum; the GD method provides steady, convergent progress toward the solution. As the solution approaches the minimum, *λ* is adaptively decreased, the LM method approaches the GN method, and the solution typically converges rapidly to the local minimum.

The following modification has also been suggested:

There are different methods for updating the Levenberg-Marquardt parameter *λ*; a complete review is presented in Ref. [17]. Nevertheless, a simple method consists of reducing this parameter as the solution approaches the optimal value. The numerical implementation consists of iterating the following steps:

1. Calculate the objective function *γ*(**x**), (57).
2. Calculate the Jacobian **J**, taking into account the implemented model (1), (6) or (11).
3. Calculate the perturbation **h**, (67).
4. If *γ*(**x** + **h**) ≤ *γ*(**x**), then **x** is replaced by **x** + **h** and *λ* is reduced.
5. If *γ*(**x** + **h**) > *γ*(**x**), then **x** is not replaced and *λ* is increased.
6. Convergence is achieved when the parameters reach a tolerance value or when the iteration count exceeds a pre-specified limit.
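This update loop can be sketched in Python on a small illustrative problem (the exponential model, starting point, and update factors below are ours, not from the text; the book applies the same loop to the rational models (1), (6), (11)):

```python
import numpy as np

def lm_fit(t, y, x0, n_iter=100):
    """Levenberg-Marquardt loop for the illustrative model
    f(t; x) = x1 * exp(x2 * t)."""
    def model(x):
        return x[0] * np.exp(x[1] * t)
    def jac(x):
        e = np.exp(x[1] * t)                    # local sensitivities df/dx1, df/dx2
        return np.column_stack([e, x[0] * t * e])
    x, lam = np.asarray(x0, float), 1e-2
    for _ in range(n_iter):
        r = y - model(x)                        # current error
        J = jac(x)
        JtJ = J.T @ J
        # Marquardt variant: damp with the diagonal of J^T J
        h = np.linalg.solve(JtJ + lam * np.diag(np.diag(JtJ)), J.T @ r)
        if np.sum((y - model(x + h))**2) <= np.sum(r**2):
            x, lam = x + h, lam / 10            # accept step, move toward GN
        else:
            lam *= 10                           # reject step, lean on GD
        if np.linalg.norm(h) < 1e-12:           # convergence on step size
            break
    return x

t = np.linspace(0, 2, 50)
y = 2.5 * np.exp(-1.3 * t)                      # noise-free synthetic data
x_hat = lm_fit(t, y, x0=[1.0, -0.5])
assert np.allclose(x_hat, [2.5, -1.3], atol=1e-6)
```

The accept/reject test on *γ* and the adaptive growth or shrinkage of *λ* correspond directly to steps 4 and 5 above.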

#### 3.5.4. Damped Gauss-Newton (DGN)

The DGN method can be applied to any of the alternative function-based models: pole-zero form (1), polynomial form (6), and pole-residue form (11). In the DGN method, a damping factor *α* is introduced

The DGN method (69) always takes a descent direction that satisfies a line search condition. It converges slowly when the parameters to be fitted are far from the solution, but performs at its best when they are close to it [16]. For that reason, DGN should be used when the parameters are close to their optimal values.

Clearly, *α* defines a fraction of the step taken from GN to update the **x** parameters. In this work, we select *α* by a backtracking strategy, whereby *α* is reduced from unity until an acceptable value for *γ*(**x**) is found.
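A single DGN step with the backtracking choice of *α* can be sketched as follows in Python (illustrative model and values are ours; the chapter's DGN routine operates on the rational models instead):

```python
import numpy as np

def dgn_step(x, r_func, jac_func):
    """One Damped Gauss-Newton step: full GN direction, with the damping
    factor alpha backtracked from 1 until the squared error decreases."""
    r, J = r_func(x), jac_func(x)
    h = np.linalg.solve(J.T @ J, J.T @ r)        # undamped GN direction
    alpha = 1.0
    while np.sum(r_func(x + alpha * h)**2) > np.sum(r**2) and alpha > 1e-8:
        alpha /= 2                               # backtracking line search
    return x + alpha * h

# Fit the illustrative saturation model y = x1 * t / (x2 + t)
t = np.linspace(0.1, 5, 40)
y = 4.0 * t / (0.8 + t)
r_func = lambda x: y - x[0] * t / (x[1] + t)     # residual vector
jac_func = lambda x: np.column_stack([t / (x[1] + t),
                                      -x[0] * t / (x[1] + t)**2])
x = np.array([1.0, 1.0])
for _ in range(50):
    x = dgn_step(x, r_func, jac_func)
assert np.allclose(x, [4.0, 0.8], atol=1e-6)
```

Halving *α* until *γ*(**x** + *α***h**) < *γ*(**x**) is the backtracking strategy mentioned above; near the solution *α* = 1 is accepted and the step reduces to plain GN.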

## 4. MATLAB algorithms and applications

In this section, algorithms for the rational approximation of frequency domain responses in the MATLAB environment are provided. Rational fitting of synthetic functions through Bode, DGN, LS or Levy, IRLS, LM and VF is presented. Finally, the techniques are applied to the rational approximation of the frequency-dependent parameters of a single-phase transmission line.

### 4.1. Bode and DGN algorithms

Bode and DGN routines are presented in this section. The concept is simple: first, Bode is implemented for the curve fitting of a synthetic function; then, DGN is used to improve the accuracy of the rational approximation given by Bode. A synthetic function is selected in the main program “Fitting_Bode_DGN.m” presented in **Table 2**. After application of the Bode routine “Bode_process.m,” shown in **Table 3**, a set of poles, residues and a constant term are obtained. These results are the initial values for the DGN routine “Damped_Gauss_Newton.m” presented in **Table 4**.

**Figure 5** shows the asymptotic behavior of the Bode routine, and **Figure 6** presents the final rational approximation given by Bode. Next, the DGN routine improves the curve fitting, as presented in **Figure 7**. Moreover, the RMS error at each iteration is shown in **Figure 8**.

### 4.2. LS or Levy algorithm

In this section the LS method or Levy method routine is introduced. In the main program “Fitting_OLS.m,” **Table 5**, a synthetic function can be selected by the user. Then the LS routine “Least_Squares_Method.m,” **Table 6**, is implemented for the rational approximation of this function.

**Figure 9** shows the synthetic frequency response data and the approximation given by the LS routine; additionally, the deviation in terms of absolute error is presented.

### 4.3. IRLS algorithm

Because in practice the Noda method and the SK method correspond to specific weighting functions within the IRLS technique, only the IRLS routine is presented. At the beginning of the implementation, a synthetic function is selected in the main program “Fitting_IRLS.m,” presented in **Table 7**. Afterwards, the routine “IRLS.m,” **Table 8**, performs the rational approximation of the selected function.

In **Figure 10** the synthetic frequency response behavior and the rational approximation given by IRLS are presented, together with the deviation in terms of absolute error. Moreover, the RMS error at each iteration is shown in **Figure 11**.

### 4.4. LM algorithm

In this section the LM routine is introduced. First, a synthetic function can be selected in the main program “Fitting_LM_Polynomials.m,” presented in **Table 9**. Afterwards, the algorithm of the method “LM_method.m,” **Table 10**, is implemented for the rational approximation of the selected function.

In **Figure 12** the synthetic frequency response behavior and the rational approximation given by LM are presented, together with the deviation in terms of absolute error. Moreover, the RMS error at each iteration is shown in **Figure 13**.

### 4.5. VF algorithm

VF has become one of the main methodologies for the rational approximation of frequency domain responses; the MATLAB routine “vectfit3.m” is available online [20]. Here, we present an example of its application to the rational approximation of a synthetic function.

At the beginning of the implementation, a synthetic function is selected in the main program “Fitting_VF.m,” presented in **Table 11**. Afterwards, the initial poles for the fitting are established in “InitialPoles.m,” **Table 12**, for the VF routine. Finally, in “VF.m,” **Table 13**, VF is used (via vectfit3.m) to perform the rational approximation of the selected function; the parameter opts.relax is set to 1, which means that “relaxation” is being used.

```matlab
%=====================================================================
% Main program of VECTOR FITTING
% (vectfit3 available in: https://www.sintef.no/projectweb/vectfit/
%  Author: Bjørn Gustavsen)
% Authors: Eduardo Salvador Bañuelos Cabral
%          José Alberto Gutiérrez Robles
%          Bjørn Gustavsen
%=====================================================================
clc
clear all
close all
%% Initial Settings
Ns = 500;               % Number of samples
f = logspace(-2,6,Ns);  % Frequency (Hz)
w = 2.*pi.*f;           % Frequency (rad/seg)
s = 1i*w;               % Complex frequency
%% Synthetic functions
choice = menu('CHOOSE A FUNCTION','Function F1(s)',...
    'Function F2(s)','Function F3(s)','Function F4(s)');
if choice == 1
    Np = 12;
    Fs = (0.83*s.^12-0.35*s.^11+0.32*s.^10+60000*s.^9+600.36*s.^8+650.23*s.^7+...
        54.5*s.^6+45.6*s.^5+0.1*s.^4-1240*s.^3+80025*s.^2+5547.*s+5008)./...
        (s.^12-0.89*s.^11-0.23*s.^10-4565.*s.^9+34569.02*s.^8-55450*s.^7+60*s.^6+...
        1005*s.^5+8050*s.^4+7502*s.^3+10*s.^2+1014.*s-50);
end
if choice == 2
    Np = 9;
    Fs = (6000*s.^9+600*s.^8+650*s.^7+54*s.^6+457*s.^5+100*s.^4+1240*s.^3+8000*s.^2+...
        55400.*s+5000)./(450*s.^9+3500*s.^8+500*s.^7-600*s.^6+1000*s.^5+...
        700*s.^4-7500*s.^3+1000*s.^2+500.*s);
end
if choice == 3
    Np = 7;
    Fs = (514*s.^7-364.25*s.^6-0.35*s.^5+635*s.^4-20802*s.^3+5304*s.^2+4520.*s-...
        18020.025)./(241*s.^7-32.02*s.^6+538.23*s.^5-2588*s.^4-22560*s.^3-...
        604*s.^2+4150.*s-21052.31);
end
if choice == 4
    Np = 18;
    Ps = 2*pi*[-4500; -41000; -100+1i*5000; -100-1i*5000; -120+1i*15000; ...
        -120-1i*15000; -3000+1i*35000; -3000-1i*35000; -200+1i*45000; -200-1i*45000; ...
        -1500+1i*45000; -1500-1i*45000; -500+1i*70000; -500-1i*70000; -1000+1i*73000; ...
        -1000-1i*73000; -2000+1i*90000; -2000-1i*90000];
    Rs = [-3000; -83000; -5+1i*7000; -5-1i*7000; -20+1i*18000; -20-1i*18000; ...
        6000+1i*45000; 6000-1i*45000; 40+1i*60000; 40-1i*60000; 90+1i*10000; ...
        90-1i*10000; 50000+1i*80000; 50000-1i*80000; 1000+1i*45000; 1000-1i*45000; ...
        -5000+1i*92000; -5000-1i*92000];
    Ks = 0.2;
    Fs = zeros(1,Ns);
    for k = 1:length(Ps)
        Fs = Fs + (Rs(k)./(s.' - Ps(k))).';
    end
    Fs = Fs + Ks;
end
%% Vector Fitting Method
[Ps] = InitialPoles(f,Np);            % Initial poles subroutine
[P,C,D,E,fVF] = VF(Ps,Ns,Fs,s,20,3);  % Vector Fitting method
Fsfit = zeros(1,Ns);
for k = 1:length(P)
    Fsfit = Fsfit + (C(k)./(s.' - P(k))).';
end
Fsfit = Fsfit + D + E.*s;             % Fitting result
eVF = abs(Fs - Fsfit);                % Deviation
figure(1)
loglog(f,abs(Fs),'-k',f,abs(Fsfit),'--r',f,abs(eVF),'b','LineWidth',2);
xlabel('Frequency [Hz]'); ylabel('Magnitude [p.u.]')
legend('Data','VF','Deviation')
```

```matlab
%=====================================================================
% Function to use VECTOR FITTING
% (vectfit3.m available in: https://www.sintef.no/projectweb/vectfit/
%  Author: Bjørn Gustavsen)
% Authors: Eduardo Salvador Bañuelos Cabral
%          José Alberto Gutiérrez Robles
%          Bjørn Gustavsen
%=====================================================================
% Inputs
%  --- Ps, initial poles
%  --- Ns, number of samples
%  --- Fs, function to be fitted
%  --- s, complex frequency (rad/s)
%  --- Ni, iterations
%  --- Ka, kind of fitting
% Outputs
%  --- P, poles
%  --- C, residues
%  --- D, constant term
%  --- E, proportional term
%  --- RMS, RMS error per iteration
function [P,C,D,E,RMS] = VF(Ps,Ns,Fs,s,Ni,Ka)
weight = ones(1,Ns);  % Vector of weights
opts.relax = 1;       % Use vector fitting with relaxed non-triviality constraint
opts.stable = 0;      % Do NOT enforce stable poles
opts.asymp = Ka;      % Include D and E in the fitting
opts.skip_pole = 0;   % Do NOT skip pole identification
opts.skip_res = 0;    % Do NOT skip identification of residues (C,D,E)
opts.cmplx_ss = 1;    % Create complex state-space model
opts.spy1 = 0;        % No plotting for first stage of vector fitting
opts.spy2 = 0;        % No magnitude plot for fitting of f(s)
opts.logx = 1;        % Use logarithmic abscissa axis
opts.logy = 1;        % Use logarithmic ordinate axis
opts.errplot = 1;     % Include deviation in magnitude plot
opts.phaseplot = 0;   % Do NOT produce plot of phase angle
opts.legend = 1;      % Include legends in plots
for k = 1:Ni          % Loop to make Ni iterations
    [SER,Ps,rmserr,fit] = vectfit3(Fs,s,Ps,weight,opts);
    RMS(k) = rmserr;
end
P = Ps;       % Poles
C = SER.C;    % Residues
D = SER.D;    % Constant term
E = SER.E;    % Proportional term
```

**Figure 14** shows the synthetic frequency response behavior and the rational approximation given by VF, together with its deviation in terms of absolute error.

### 4.6. Single-phase transmission line modeling

Bode, Levy or LS, IRLS, VF, and LM are applied to the rational approximation of the frequency-dependent parameters of a single-phase transmission line. **Figure 15a** shows the diagram of the equivalent circuit for the test, with *L* = 70 mH and *Ri* = 1.058 Ω. The current source is considered ideal at 60 Hz. The line length (*l*) is 100 km, with a conductor diameter of 2.7 cm. The conductor is placed horizontally at a height of 20 m, with a ground resistivity of 100 Ωm. The termination impedance at the end of the line (*R _{l}*) is set to 1e6 Ω, 1e-6 Ω, and 462 Ω to evaluate the different line termination cases.

The modeling consists of calculating rational approximations for the line series impedance **Z** and line shunt admittance **Y** by applying the aforementioned rational fitting techniques. **Figure 15b** shows the behavior of **Z** and **Y** as a function of frequency, calculated with 16,384 linearly spaced samples and **Figure 16** shows a flowchart for the implementation of this test case.

**Figure 17a** shows the results for the fitting of **Y** with *Np* = 3, in terms of model deviations (relative error) by Bode, Levy, IRLS, VF, and LM. It can be observed that the best approximations are obtained with Levy, IRLS, LM and VF which converge to practically the same deviation.

**Figure 17b** shows the results for the fitting of **Z** with *Np* = 14, in terms of the model deviations (relative error) by Bode, Levy, IRLS, VF, and LM. It can be seen that the best approximation is obtained with VF, IRLS, and LM.

Then, the effect of the modeling errors in **Y** and **Z** on a time domain response is evaluated. The line terminal nodal admittance matrix **Y*** _{n}* is established with respect to the two line ends 2 and 3 of **Figure 15a** as

where **Y*** _{n}* is given by

and **Y*** _{c}*, **A** and **B** by

By using the Numerical Inversion of the Laplace Transform (NLT) [2, 12], the voltage responses in the time domain (*v*_{1}, *v*_{2} and *v*_{3}) are calculated for the cases of open-ended, short-circuited, and perfectly matched line ends. These results are considered the reference solution.

The rational function-based models obtained with Bode, Levy, IRLS, VF, and LM are used to calculate **Z** and **Y**. Using a similar procedure as mentioned for each technique, the voltage responses (*v*_{1}, *v*_{2} and *v*_{3}) are calculated through the NLT.

The objective is to calculate the error in the time domain introduced by the rational approximations of **Z** and **Y** in frequency domain, taking into account the reference solution.

The comparative results are shown in **Figures 18**–**20** for the short-circuited, perfectly matched, and open-ended cases, respectively. The absolute errors are consistent with the deviations in the rational approximation of **Z**.

## 5. Conclusions

In this book, the Bode, Levy, Noda, SK, IRLS, VF, LM, and DGN methods have been described in detail for complex-curve fitting, and a routine implemented in the MATLAB environment has been presented for each technique. Rational approximations of synthetic frequency responses have been used to show the operation of the programs. Moreover, the techniques are used to approximate the series line impedance (**Z**) and the shunt line admittance (**Y**) of a single-phase transmission line. The main conclusions are:

Asymptotic Approximation or Bode has proved to be a reliable technique for the modeling of overhead transmission lines. This method performs the fitting on the magnitude of the function only and uses only real poles and zeros. However, the error level in the approximation can be substantial, and it depends on the sensitivity criterion used when inserting new poles and zeros in the fitting.

The Levy method pioneered complex-curve fitting, and many techniques have been developed based on this methodology. Nevertheless, the method is biased, because it weights the fitting too heavily at high frequencies; this fact is demonstrated in the test cases.

The IRLS method is an accurate technique for the rational fitting of frequency responses. This method permits the implementation of different weighting functions in order to reduce the error level in the approximation. Its disadvantage lies in the numerical ill-conditioning encountered in approximations over a wide frequency range.

The VF method is a robust and accurate technique. This fact has positioned this methodology as one of the most important in this field.

The LM method shows good results in terms of error level. An advantage is that it can be implemented in pole-zero form, pole-residue form, and polynomial form.

The main disadvantage of optimization techniques such as LM and DGN is that they can get stuck in a local minimum.

The rational approximation of **Z** and **Y** in the single-phase transmission line modeling shows that the same technique does not always deliver the best result. Levy, LM and IRLS deliver more accurate results for the fitting of **Y**, while VF reaches the best result for the fitting of **Z**. The error levels obtained in the time domain simulations are consistent with the fitting of **Z**, because these parameters are more relevant in the modeling of overhead transmission lines; therefore, VF reaches the best error level.