Open access peer-reviewed chapter

Solution Methods of Large Complex-Valued Nonlinear System of Equations

Written By

Robson Pires

Submitted: 09 December 2019 Reviewed: 06 May 2020 Published: 13 August 2020

DOI: 10.5772/intechopen.92741

From the Edited Volume

Advances in Complex Analysis and Applications

Edited by Francisco Bulnes and Olga Hachay


Abstract

Nonlinear systems of equations in the complex plane are frequently encountered in applied mathematics, e.g., power systems, signal processing, control theory, neural networks, and biomedicine, to name a few. The solution of these problems often requires a first- or second-order approximation of nonlinear functions to generate a new step or descent direction to approach the solution iteratively. However, such methods cannot be applied to functions of complex and complex conjugate variables because they are necessarily nonanalytic. To overcome this problem, the Wirtinger calculus allows an expansion of nonlinear functions in their original complex and complex conjugate variables, since they are analytic in their arguments as a whole. Thus, the goal is to apply this methodology to solve nonlinear systems of equations that emerge from applications in industry. For instance, the complex-valued Jacobian matrix that emerges from the power flow analysis model, which is solved by the Newton-Raphson method, can be exactly determined. Similarly, overdetermined Jacobian matrices can be dealt with, e.g., through the Gauss-Newton method in the complex plane, aiming to solve power system state estimation problems. Finally, the factorization of the aforementioned Jacobian matrices is addressed through the fast Givens transformation algorithm, that is, the square-root-free Givens rotations method in the complex plane.

Keywords

  • solution of large nonlinear systems of equations in the complex plane
  • complex-valued Newton-Raphson and Gauss-Newton iterative algorithms
  • Cartesian coordinates

1. Introduction

This work is a tribute to Steinmetz’s contribution [1]. The reasons and motivations are stated throughout the whole document, since the numerical solutions of power system applications are typically carried out in the real domain. For instance, power flow analysis and power system state estimation are well-known tools, among others. It turns out that these solutions are not well suited for modeling voltage and current phasors. To overcome this difficulty, the proposal described in this chapter aims to model the aforementioned applications in a unified system of coordinates, i.e., the complex domain. Nonetheless, the solution methods of these problems often require a first- or second-order approximation of the set of power flow equations; such methods cannot be applied to nonlinear functions of complex variables because they are nonanalytic in their arguments. Consequently, for these functions Taylor series expansions do not exist. Hence, for many decades this problem has been solved by redefining the nonlinear functions as separate functions of the real and imaginary parts of their complex arguments so that standard methods can be applied. Although not widely known, it is also possible to construct an extended nonlinear function that includes not only the original complex state variables but also their complex conjugates, and then the Wirtinger calculus can be applied [2]. This property lies in the fact that if a function is analytic in the space spanned by ℜ(x) and ℑ(x) over ℝ, it is also analytic in the space spanned by x and x* over ℂ. In complex analysis of one and several complex variables, Wirtinger operators are first-order partial differential operators which behave in a very similar manner to the ordinary derivatives with respect to one real variable when applied to holomorphic functions, non-holomorphic functions, or simply differentiable functions on complex domains. These operators allow the construction of a differential calculus for such functions that is entirely analogous to the ordinary differential calculus for functions of real variables [2, 3]. Then, taking into account the Wirtinger calculus, this chapter shows how the Jacobian matrix patterns emerge in the complex plane corresponding to the steady-state models of power flow analysis and power system state estimation, respectively.

In this chapter the classical Newton-Raphson and Gauss-Newton methods in the complex plane, aiming at the numerical solution of power flow analysis and power system state estimation, are derived, respectively. Moreover, the factorization methods addressed to deal with the Jacobian matrices that emerge from these approaches are included [4].

This chapter is organized as follows. The theoretical foundation, which is based on the Wirtinger calculus, is summed up in Section 2. Section 3 describes two algorithms suggested to factorize the Jacobian matrix in the complex plane, regardless of whether it is exactly determined or overdetermined. In Section 4, the complex-valued static model solution by the Newton-Raphson method is derived, whereas in Section 5 the Gauss-Newton method developed in the complex plane is equally presented. Finally, in Section 6 some conclusions are gathered and the next issues to be investigated in the near future are stated.


2. Theoretical foundation

2.1 Complex differentiability

A complex function is defined as

$$f(x) = u(a,b) + j\,v(a,b), \qquad (E1)$$

where $x = a + jb$ and $u(a,b)$, $v(a,b)$ are real functions, $u, v: \mathbb{R}^2 \to \mathbb{R}$. Functions like Eq. (1) are in general complex but may be real-valued in special cases, e.g., the squared error cost function $J = |e|^2$. The definition of complex differentiability requires that the derivative, defined as the limit below, be independent of the direction in which $\Delta x$ approaches 0 in the complex plane:

$$f'(x_0) = \lim_{\Delta x \to 0} \frac{f(x_0 + \Delta x) - f(x_0)}{\Delta x}. \qquad (E2)$$

This requires that the Cauchy-Riemann equations be satisfied, i.e.,

$$\frac{\partial u}{\partial a} = \frac{\partial v}{\partial b}, \qquad \frac{\partial v}{\partial a} = -\frac{\partial u}{\partial b}. \qquad (E3)$$

These conditions are necessary for $f(x)$ to be complex differentiable. If the partial derivatives of $u(a,b)$ and $v(a,b)$ are continuous on their entire domain, then they are sufficient as well. Therefore, the complex function $f(x)$ is called an analytic or holomorphic function [2]. As an example, let $f(x) = x^2$ be a complex function with $x = a + jb$. Then,

$$f(x) = x^2 = \underbrace{a^2 - b^2}_{u} + j\,\underbrace{2ab}_{v} = y, \qquad (E4)$$

which under the differentiation rule leads to

$$\frac{\partial u}{\partial a} = 2a = \frac{\partial v}{\partial b}; \qquad \frac{\partial u}{\partial b} = -2b = -\frac{\partial v}{\partial a}. \qquad (E5)$$

These results show that the Cauchy-Riemann equations hold, and hence $f(x) = y = x^2$ is a holomorphic function.
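A quick numerical confirmation of Eq. (5) can be sketched as follows (an illustrative Python/NumPy snippet added here for clarity; the sample point and step size are arbitrary choices and not part of the original text):

```python
import numpy as np

# Numerical check of the Cauchy-Riemann conditions for f(x) = x^2,
# with u(a,b) = a^2 - b^2 and v(a,b) = 2ab.
a, b, h = 1.3, -0.7, 1e-6

u = lambda a, b: a**2 - b**2
v = lambda a, b: 2 * a * b

du_da = (u(a + h, b) - u(a - h, b)) / (2 * h)   # ~  2a
dv_db = (v(a, b + h) - v(a, b - h)) / (2 * h)   # ~  2a
du_db = (u(a, b + h) - u(a, b - h)) / (2 * h)   # ~ -2b
dv_da = (v(a + h, b) - v(a - h, b)) / (2 * h)   # ~  2b

print(np.isclose(du_da, dv_db), np.isclose(du_db, -dv_da))  # True True
```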

2.2 CR-Calculus or Wirtinger calculus

Introduced by Wilhelm Wirtinger in 1927 [2], the CR-Calculus, also known as the Wirtinger calculus, provides a way to differentiate nonanalytic functions of complex variables. Specifically, this calculus is applicable to a function $f(x)$ given by Eq. (1) if $u(a,b)$ and $v(a,b)$ have continuous partial derivatives with respect to $a$ and $b$, yielding

$$\frac{\partial f}{\partial x} = \frac{\partial f}{\partial a}\,\frac{\partial a}{\partial x} + \frac{\partial f}{\partial b}\,\frac{\partial b}{\partial x}. \qquad (E6)$$

Since we have

$$a = \frac{x + x^{*}}{2}, \qquad \frac{\partial a}{\partial x} = \frac{1}{2}\left(1 + \frac{\partial x^{*}}{\partial x}\right), \qquad (E7)$$
$$b = -j\,\frac{x - x^{*}}{2}, \qquad \frac{\partial b}{\partial x} = -\frac{j}{2}\left(1 - \frac{\partial x^{*}}{\partial x}\right), \qquad (E8)$$

and by setting $\partial x^{*}/\partial x$ to zero, it follows that

$$\frac{\partial f}{\partial x} = \frac{1}{2}\left(\frac{\partial f}{\partial a} - j\,\frac{\partial f}{\partial b}\right). \qquad (E9)$$

Note that the Cauchy-Riemann conditions for $f$ to be analytic in $x$ can be expressed compactly using the gradient as $\partial f/\partial x^{*} = 0$, i.e., $f$ is a function of $x$ only.

Similarly, we take the derivative of $f$ with respect to $x^{*}$, that is,

$$\frac{\partial f}{\partial x^{*}} = \frac{\partial f}{\partial a}\,\frac{\partial a}{\partial x^{*}} + \frac{\partial f}{\partial b}\,\frac{\partial b}{\partial x^{*}}. \qquad (E10)$$

By setting $\partial x/\partial x^{*}$ to zero, we get

$$\frac{\partial f}{\partial x^{*}} = \frac{1}{2}\left(\frac{\partial f}{\partial a} + j\,\frac{\partial f}{\partial b}\right). \qquad (E11)$$

Again, the Cauchy-Riemann conditions for $f$ to be analytic in $x^{*}$ can be expressed compactly using the conjugate gradient as $\partial f/\partial x = 0$, i.e., $f$ is a function of $x^{*}$ only.

In other words, the gradient (respectively, conjugate gradient) operator acts as a partial derivative with respect to $x$ (respectively, $x^{*}$), treating $x^{*}$ (respectively, $x$) as a constant. Formally, we have

$$\frac{\partial f_c(x)}{\partial x} = \left.\frac{\partial f(x, x^{*})}{\partial x}\right|_{x^{*} = \mathrm{const}} = \frac{1}{2}\left(\frac{\partial f}{\partial a} - j\,\frac{\partial f}{\partial b}\right), \qquad (E12)$$
$$\frac{\partial f_c(x)}{\partial x^{*}} = \left.\frac{\partial f(x, x^{*})}{\partial x^{*}}\right|_{x = \mathrm{const}} = \frac{1}{2}\left(\frac{\partial f}{\partial a} + j\,\frac{\partial f}{\partial b}\right). \qquad (E13)$$

As an example, let $f_c(x) = f(x, x^{*}) = x\,x^{*} = |x|^2 = a^2 + b^2$ be a real function of a complex variable, namely the squared Euclidean distance to the origin, with $x = a + jb$. Then,

$$f_c(x) = f(x, x^{*}) = x\,x^{*} = \underbrace{a^2 + b^2}_{u} + j\,\underbrace{(ab - ab)}_{v} = y. \qquad (E14)$$

Since $v = 0$, the Cauchy-Riemann equations clearly do not hold, and hence $f_c(x) = f(x, x^{*}) = x\,x^{*}$ is not analytic, i.e., it is a non-holomorphic function. To overcome this apparent difficulty, applying the CR-Calculus leads to

$$\frac{\partial f_c(x)}{\partial x} = x^{*}; \qquad \frac{\partial f_c(x)}{\partial x^{*}} = x, \qquad (E15)$$

which suggests the geometric interpretation shown in Figure 1. Its analysis allows us to infer that the direction of the maximum rate of change of the objective function is given by the conjugate gradient defined in Eq. (13). Observe that its positive direction refers to a maximization problem (dotted arrow), whereas the opposite direction concerns the cost function minimization.

Figure 1.

Contour plot of the real function of complex variable.

Hereafter, a real-valued or complex-valued function and its argument are provided with a subscript c if it is a function in the complex conjugate coordinates, i.e., $(x, x^{*})$. Moreover, when the CR-Calculus is extended to the vector case, it is termed the multivariate CR-Calculus, and the basic rules for the scalar case remain unchanged.
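To make Eqs. (12), (13), and (15) concrete, the sketch below (Python/NumPy, assuming the scalar case only; not part of the original text) evaluates both Wirtinger derivatives of $f(x) = x\,x^{*}$ numerically and checks them against $x^{*}$ and $x$:

```python
import numpy as np

# Wirtinger derivatives of f(x) = x * conj(x) = a^2 + b^2, evaluated
# numerically via Eqs. (12)-(13); expected results are conj(x) and x.
x = 0.8 - 0.35j
a, b, h = x.real, x.imag, 1e-6

f = lambda a, b: a**2 + b**2              # f expressed in (a, b) coordinates

df_da = (f(a + h, b) - f(a - h, b)) / (2 * h)
df_db = (f(a, b + h) - f(a, b - h)) / (2 * h)

df_dx      = 0.5 * (df_da - 1j * df_db)   # Eq. (12): expected conj(x)
df_dx_conj = 0.5 * (df_da + 1j * df_db)   # Eq. (13): expected x

print(np.isclose(df_dx, np.conj(x)), np.isclose(df_dx_conj, x))  # True True
```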


3. Solution of the problem: $A_{m\times n}\,\bar{x}_{n\times 1} = \bar{y}_{m\times 1}$, $A \in \mathbb{C}^{m\times n}$

As in the real domain, there are two classes of methods for the numerical solution of large linear systems of equations in the complex plane:

  • Direct methods: Which produce the exact solution assuming the absence of truncation and round-off errors, by performing a finite number of flops in a finite known number of steps. These methods are usually recommended when most of the entries in the coefficient matrix are nonzero and the dimension of the system is not too large, for instance, the Gaussian elimination, the LU decomposition and QR factorization, to cite a few.

  • Iterative methods: This class of methods is beyond the scope of this work. Notice that the solution provided by these methods is approximate and the accuracy is imposed by the user. The number ν of iterations performed depends on the given precision or convergence criterion. This class of methods is preferred when the majority of the coefficients are equal to zero and the number of unknowns is very large. The methods of Jacobi and Gauss-Seidel are good examples, besides the classical conjugate gradient method [5].

3.1 Three-angle complex rotation algorithm

The first studied algorithm is the three-angle complex rotations (TACR) algorithm, which is derived in polar coordinates [6]. Nonetheless, the key idea behind fast QR decomposition is to eliminate the square roots needed for the computation of the cosines and sines, which represent a bottleneck in real-time applications. Consequently, this algorithm is of little interest for the solution of large linear systems of equations.

3.2 Complex-valued fast Givens rotations

On the other hand, the fast plane rotation [7], which is derived in the complex plane and rectangular coordinates, is a square-root- and division-free Givens rotation [8]. In this sense, the fast plane rotation, also referred to as the complex-valued fast Givens rotations (CVFGR), is a very efficient algorithm aiming at a QR decomposition of matrices in which the computations are performed incrementally, i.e., as the data arrive sequentially in time. Thus, it allows us to reduce the overall latency and hardware resources drastically. In a forthcoming contribution, the CVFGR performance will be compared with the well-known approaches that were successfully applied to power system state estimation [9, 10, 11], once they are converted accordingly from the real to the complex domain. Further proposals in the updated state of the art will be equally considered, e.g., [12] and [13], to cite a few.

The complex fast Givens transformation $M$ is computed using Algorithm 1 such that the second component of $M^{H}\bar{x}$ is zero and $M^{H}DM$ is a diagonal matrix, as shown below:

$$M = \begin{bmatrix} \beta & 1 \\ 1 & \alpha \end{bmatrix} \;(\text{type} = 1) \quad \text{or} \quad M = \begin{bmatrix} 1 & \alpha \\ \beta & 1 \end{bmatrix} \;(\text{type} = 2), \qquad (E16)$$
$$M^{H}\bar{x} = \begin{bmatrix} r \\ 0 \end{bmatrix} \quad \text{and} \quad M^{H}\begin{bmatrix} d_f & 0 \\ 0 & d_g \end{bmatrix} M = M^{H} D\, M = \begin{bmatrix} d_f^{new} & 0 \\ 0 & d_g^{new} \end{bmatrix} = D^{new}. \qquad (E17)$$

Notice the superscript H denotes the Hermitian operation, i.e., complex conjugate transpose.

Algorithm 1. Complex fast Givens transform.

[α, β, r, type, d_f^new, d_g^new] = fast_givens(f, g, d_f, d_g);

if f = 0 then

  type = 1; α = β = 0; r = g;

  d_f^new = d_g; d_g^new = d_f;

else if g = 0 then

  type = 2; α = β = 0; r = f;

  d_f^new = d_f; d_g^new = d_g;

else if |f|² ≤ |g|² then

  type = 1; i = f/g; s = d_g/d_f;

  α = −i*; β = s·i;

  γ = s·|i|²; r = g·(1 + γ);

  d_f^new = (1 + γ)·d_g; d_g^new = (1 + γ)·d_f;

else

  type = 2; i = g/f; s = d_f/d_g;

  α = −i*; β = s·i;

  γ = s·|i|²; r = f·(1 + γ);

  d_f^new = (1 + γ)·d_f; d_g^new = (1 + γ)·d_g;

end if

The matrices $Q$, $M$, and $D$ are related by the following equation:

$$Q = M\left(D^{new}\right)^{-1/2} = M\,\mathrm{diag}\!\left(1/\sqrt{d_i^{new}}\right). \qquad (E18)$$
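A minimal sketch of Algorithm 1 in Python/NumPy is given below. It is illustrative only: the conjugation and sign conventions for α and β are reconstructed here from the requirements of Eq. (17), namely that the second component of M^H x̄ vanishes and that M^H D M is diagonal; the original algorithm may state an equivalent convention differently.

```python
import numpy as np

def fast_givens(f, g, df, dg):
    """Complex fast Givens transform (sketch of Algorithm 1).

    Returns (alpha, beta, r, type, df_new, dg_new) so that, with M built as
    in Eq. (16), M^H [f, g]^T = [r, 0]^T and M^H diag(df, dg) M is diagonal."""
    if f == 0:
        return 0.0, 0.0, g, 1, dg, df
    if g == 0:
        return 0.0, 0.0, f, 2, df, dg
    if abs(f) ** 2 <= abs(g) ** 2:
        t, s = f / g, dg / df                      # t plays the role of "i"
        alpha, beta = -np.conj(t), s * t           # reconstructed convention
        gamma = s * abs(t) ** 2
        return alpha, beta, g * (1 + gamma), 1, (1 + gamma) * dg, (1 + gamma) * df
    t, s = g / f, df / dg
    alpha, beta = -np.conj(t), s * t
    gamma = s * abs(t) ** 2
    return alpha, beta, f * (1 + gamma), 2, (1 + gamma) * df, (1 + gamma) * dg

# Sanity check: M^H x has a zero second component and M^H D M is diagonal.
f, g, df, dg = 1.0 + 2.0j, 0.5 - 1.0j, 2.0, 3.0
alpha, beta, r, typ, dfn, dgn = fast_givens(f, g, df, dg)
M = np.array([[beta, 1], [1, alpha]]) if typ == 1 else np.array([[1, alpha], [beta, 1]])
print(M.conj().T @ np.array([f, g]))          # ~ [r, 0]
print(M.conj().T @ np.diag([df, dg]) @ M)     # ~ diag(dfn, dgn)
```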

In the sequence the QR-decomposition using complex fast Givens transformations is presented as Algorithm 2.

Algorithm 2. Sequential fast Givens QRD decomposition.

[Q_α, Q_β, type, R] = fast_givens_QRD(A);

[m, n] = size(A);

R = zeros(n); D = I_n;

R(1, 1:n) = A(1, 1:n);

for i = 2:m do

  new = A(i, 1:n); d_new = 1;

  if i <= n then

    k = i;

  else

    k = n + 1;

  end if

  for j = 1:k−1 do

    Step 1: Get α and β using Algorithm 1:

    [α, β, R(j, j), type, d(j), d_new] = fast_givens(R(j, j), new(1, j), d(j), d_new);

    Step 2: Update the remaining elements based on type:

    if j < n and type = 1 then

      [R(j, j+1:n); new(1, j+1:n)] = [β 1; 1 α]^H · [R(j, j+1:n); new(1, j+1:n)]

    else if j < n and type = 2 then

      [R(j, j+1:n); new(1, j+1:n)] = [1 α; β 1]^H · [R(j, j+1:n); new(1, j+1:n)]

    end if

  end for

  if k <= n then

    R(k, 1:n) = new(1, 1:n); d(k) = d_new;

  end if

end for

Note that the complex fast Givens QRD does not require any square root operation and, during each incremental QRD-update step, the incoming input-data row vector is stored, i.e., the vector new. In the sequel, the input-data row-vector elements are zeroed out (inner for loop) in order to update the upper triangular matrix R. The new vector is overwritten each time until the QRD algorithm has exhausted all the input data, i.e., the upper triangular matrix R is entirely updated.
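The sketch below mirrors Algorithm 2 in Python/NumPy, reusing the fast_givens sketch given after Eq. (18); it is an illustrative reconstruction and not the chapter's MATLAB implementation. The usage example at the end compares the rescaled triangular factor with NumPy's dense QR, in the spirit of Eq. (18).

```python
import numpy as np

def fast_givens_qrd(A):
    """Row-sequential QR via complex fast Givens rotations (sketch of Algorithm 2).

    Returns the scaled triangular factor R and the scaling vector d such that
    diag(d)**(-1/2) @ R is a usual triangular QR factor of A (cf. Eq. (18))."""
    A = np.asarray(A, dtype=complex)
    m, n = A.shape
    R = np.zeros((n, n), dtype=complex)
    d = np.ones(n)
    R[0, :] = A[0, :]
    for i in range(1, m):
        new = A[i, :].copy()
        d_new = 1.0
        k = i if i < n else n                     # entries to annihilate
        for j in range(k):
            alpha, beta, R[j, j], typ, d[j], d_new = fast_givens(
                R[j, j], new[j], d[j], d_new)
            new[j] = 0.0
            if j < n - 1:                          # update trailing row segments
                M = (np.array([[beta, 1], [1, alpha]]) if typ == 1
                     else np.array([[1, alpha], [beta, 1]]))
                block = np.vstack((R[j, j + 1:], new[j + 1:]))
                R[j, j + 1:], new[j + 1:] = M.conj().T @ block
        if k < n:                                  # the row becomes a new pivot row
            R[k, :] = new
            d[k] = d_new
    return R, d

# Usage sketch: compare with NumPy's QR up to the diagonal scaling D^(-1/2).
A = np.random.randn(6, 4) + 1j * np.random.randn(6, 4)
R, d = fast_givens_qrd(A)
R_std = np.diag(1.0 / np.sqrt(d)) @ R
print(np.allclose(np.abs(R_std), np.abs(np.linalg.qr(A)[1])))  # True (up to row phases)
```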


4. Complex-valued Newton-Raphson method

Aiming at the solution of any exactly determined set of equations in the complex plane, the vector of unknowns entering the iterative algorithm is arranged as follows:

$$\bar{x}_c = \begin{bmatrix} x_1 & x_2 & \cdots & x_{N-1} & x_1^{*} & x_2^{*} & \cdots & x_{N-1}^{*} \end{bmatrix}^{T}, \qquad (E19)$$

and the residual vector, hereafter termed the mismatch vector, is

$$\bar{M}(\bar{x}_c) = \begin{bmatrix} M_1 & M_2 & \cdots & M_{N-1} & M_1^{*} & M_2^{*} & \cdots & M_{N-1}^{*} \end{bmatrix}^{T}. \qquad (E20)$$

Nonetheless, here the goal is to calculate $\bar{x}_c$ that satisfies

$$\bar{M}(\bar{x}_c) = \bar{Y}_e(\bar{x}_c) - \bar{Y}_s = 0, \qquad (E21)$$

where, in Eq. (21), $\bar{Y}_s$ is a vector of specified quantities, i.e., the constant term, and $\bar{Y}_e(\bar{x}_c)$ is the vector of quantities calculated at each iteration. Consequently, the linearization of Eq. (21) from one step to the next leads to

$$\bar{M}\!\left(\bar{x}_c^{\,\nu-1}\right) + J^{\nu-1}\,\Delta\bar{x}_c^{\,\nu} = 0, \qquad (E22)$$

or

$$\Delta\bar{x}_c^{\,\nu} = -\left(J^{\nu-1}\right)^{-1}\bar{M}\!\left(\bar{x}_c^{\,\nu-1}\right), \qquad (E23)$$

where $J$ is the complex-valued Jacobian matrix, whose dimension is $2(N-1)\times 2(N-1)$. This means that at least one complex-valued state variable has to be specified, i.e., is known.

As a further advantage provided by the Wirtinger calculus [2, 3], the Jacobian matrix that emerges in Cartesian coordinates requires less algebraic work as well as a smaller implementation (coding) effort than the former procedure in the real domain [14]. Thereby, the Jacobian matrix in expanded form may be represented through a four-partition matrix, yielding

$$J = \begin{bmatrix}
\dfrac{\partial \bar{M}_1}{\partial \bar{x}_1} & \cdots & \dfrac{\partial \bar{M}_1}{\partial \bar{x}_{N-1}} & \dfrac{\partial \bar{M}_1}{\partial \bar{x}_1^{*}} & \cdots & \dfrac{\partial \bar{M}_1}{\partial \bar{x}_{N-1}^{*}} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
\dfrac{\partial \bar{M}_{N-1}}{\partial \bar{x}_1} & \cdots & \dfrac{\partial \bar{M}_{N-1}}{\partial \bar{x}_{N-1}} & \dfrac{\partial \bar{M}_{N-1}}{\partial \bar{x}_1^{*}} & \cdots & \dfrac{\partial \bar{M}_{N-1}}{\partial \bar{x}_{N-1}^{*}} \\
\dfrac{\partial \bar{M}_1^{*}}{\partial \bar{x}_1} & \cdots & \dfrac{\partial \bar{M}_1^{*}}{\partial \bar{x}_{N-1}} & \dfrac{\partial \bar{M}_1^{*}}{\partial \bar{x}_1^{*}} & \cdots & \dfrac{\partial \bar{M}_1^{*}}{\partial \bar{x}_{N-1}^{*}} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
\dfrac{\partial \bar{M}_{N-1}^{*}}{\partial \bar{x}_1} & \cdots & \dfrac{\partial \bar{M}_{N-1}^{*}}{\partial \bar{x}_{N-1}} & \dfrac{\partial \bar{M}_{N-1}^{*}}{\partial \bar{x}_1^{*}} & \cdots & \dfrac{\partial \bar{M}_{N-1}^{*}}{\partial \bar{x}_{N-1}^{*}}
\end{bmatrix}. \qquad (E24)$$

For instance, Figure 2 displays the pattern for the IEEE 57-bus system in the R-domain and in the complex plane; the latter is given by Eq. (24).

Figure 2.

Sparsity structure of (a) real-valued Jacobian matrix; (b) complex-valued Jacobian matrix of the IEEE 57-bus system.

4.1 Jacobian matrix factorization

In order to factorize the Jacobian matrix required in Eq. (23), the recommended procedure is to operate the factorization on the augmented Jacobian matrix, yielding

$$J_a = \left[\; J \;\;\; \bar{M}\!\left(\bar{x}_c^{\,\nu-1}\right) \;\right], \qquad (E25)$$

whose dimension is $2n \times (2n + 1)$ (with $n = N - 1$), resulting in

$$\tilde{J}_a = \left[\; T_c \;\;\; \tilde{\bar{M}}\!\left(\bar{x}_c^{\,\nu-1}\right) \;\right], \qquad (E26)$$

where $T_c$ is an upper triangular matrix of dimension $(2n \times 2n)$ and $\tilde{\bar{M}}(\bar{x}_c^{\,\nu-1})$ comprises the corresponding rows of the updated rhs vector, of dimension $(2n \times 1)$. Finally, Eq. (23) is solved by performing a back-substitution via

$$\Delta\bar{x}_c^{\,\nu} = -\,T_c^{-1}\,\tilde{\bar{M}}\!\left(\bar{x}_c^{\,\nu-1}\right). \qquad (E27)$$

On the other hand, it is recommended to check convergence through the infinity norm of two vectors: the corrections to be applied to the state variables and, simultaneously, the mismatch vector. The latter is included as a safeguard against ill-conditioned systems [14], yielding

$$\left\|\Delta\bar{x}_c^{\,\nu}\right\|_{\infty} \le \mathrm{tol} \quad \text{and} \quad \left\|\bar{M}\!\left(\bar{x}_c^{\,\nu}\right)\right\|_{\infty} \le \mathrm{tol}, \qquad \text{e.g., } 10^{-12}. \qquad (E28)$$

If Eq. (28) is satisfied, stop and print out the results. Otherwise, the state vector is updated as shown below:

$$\bar{x}_c^{\,\nu} = \bar{x}_c^{\,\nu-1} + \Delta\bar{x}_c^{\,\nu}, \qquad (E29)$$

and the iteration counter is increased, followed by the updating of the mismatch vector and the factorization of the Jacobian matrix. This latter task may or may not be mandatory, since the Jacobian matrix may be kept constant throughout the iterative process (approximate, instead of full, gain), a decision very often adopted after the second iteration in order to lighten the computational burden. Further details can be found in [14]; in the sequel, a small example is presented, followed by simulations carried out on large systems. However, as in any other application in industry, the power flow model relies on the solution of a system of linear algebraic equations, which is summarized hereafter.
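A minimal skeleton of the iterative process of Eqs. (22)–(29) is sketched below in Python/NumPy. The mismatch and Jacobian builders are application-supplied placeholders (e.g., from the power flow model of the next subsection), and a dense solver stands in for the sparse factorization of Eqs. (25)–(27); this is an illustrative sketch, not the chapter's implementation.

```python
import numpy as np

def newton_raphson_complex(x_c0, mismatch, jacobian, tol=1e-12, max_iter=30):
    """Complex-valued Newton-Raphson loop (sketch of Eqs. (22)-(29)).

    x_c0     : initial stacked state vector [x, conj(x)]
    mismatch : callable x_c -> M(x_c), Eq. (21)
    jacobian : callable x_c -> J(x_c), Eq. (24)
    Both callables are hypothetical, application-supplied placeholders."""
    x_c = np.asarray(x_c0, dtype=complex).copy()
    for nu in range(1, max_iter + 1):
        M = mismatch(x_c)
        J = jacobian(x_c)
        # Eq. (23): dense solve used here instead of the sparse
        # fast Givens factorization of the augmented matrix, Eq. (25).
        dx_c = -np.linalg.solve(J, M)
        x_c += dx_c                                   # Eq. (29)
        # Eq. (28): convergence check on both infinity norms.
        if max(np.abs(dx_c).max(), np.abs(mismatch(x_c)).max()) <= tol:
            return x_c, nu
    raise RuntimeError("Newton-Raphson did not converge")
```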

4.2 Nodal equation

This approach requires building the nodal admittance matrix, e.g.,

$$\bar{I} = Y_{bus}\,\bar{V}, \qquad (E30)$$

thus the complex nodal power can be expressed as

$$\bar{S} = \mathrm{diag}(\bar{V})\,\bar{I}^{*}, \qquad (E31)$$

or

$$\bar{S} = \mathrm{diag}(\bar{V})\,Y_{bus}^{*}\,\bar{V}^{*}. \qquad (E32)$$

Then, the nodal complex power at bus $k$, i.e., $S_k$, is

$$S_k = V_k\,y_{kk}^{*}\,V_k^{*} + V_k \sum_{\substack{m=1 \\ m \ne k}}^{N} y_{km}^{*}\,V_m^{*}, \qquad (E33)$$

where $N$ is the number of network nodes. Therefore, the unknowns to be determined are the voltages at each node or bus of the system, and the general power flow equations that model any type of branch in an electrical network, i.e., transmission lines and in-phase and phase-shifting transformers, can be written, yielding

$$S_{km} = V_k\left(y_{km}^{*}\,t_{km}\,t_{km}^{*} - j\,b_{km}^{sh}\right)V_k^{*} - V_k\,y_{km}^{*}\,t_{km}\,V_m^{*}, \qquad (E34)$$
$$S_{mk} = V_m\left(y_{km}^{*} - j\,b_{km}^{sh}\right)V_m^{*} - V_m\,y_{km}^{*}\,t_{km}^{*}\,V_k^{*}, \qquad (E35)$$

and their complex conjugate counterparts are

$$S_{km}^{*} = V_k^{*}\left(y_{km}\,t_{km}^{*}\,t_{km} + j\,b_{km}^{sh}\right)V_k - V_k^{*}\,y_{km}\,t_{km}^{*}\,V_m, \qquad (E36)$$
$$S_{mk}^{*} = V_m^{*}\left(y_{km} + j\,b_{km}^{sh}\right)V_m - V_m^{*}\,y_{km}\,t_{km}\,V_k. \qquad (E37)$$

In Eqs. (34)–(37), $t_{km} = a_{km}\,e^{j\varphi_{km}}$ is the general off-nominal tap transformer model, which is composed of an ideal transformer with complex turns ratio $t_{km}:1$ in series with its admittance or impedance. Thus, the corresponding branch reduces to one of the following special cases (a small computational sketch is given after this list):

1. Off-nominal tap transformer: $b_{km}^{sh} = 0$ and $\varphi_{km} = 0$.

2. Pure shifter: $b_{km}^{sh} = 0$ and $a_{km} = 1$.

3. Phase shifter: $b_{km}^{sh} = 0$.

4. π transmission line: $a_{km} = 1$ and $\varphi_{km} = 0$.
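As an illustration of Eqs. (34)–(37) and of the special cases above, the snippet below evaluates the sending- and receiving-end complex power flows of a single branch. It is a hedged sketch: the function and variable names (branch_flows, v_k, v_m, y_km, t_km, b_sh) are assumptions made for this example, and the grouping of the shunt term follows the reconstruction of Eqs. (34)–(35) given earlier.

```python
import numpy as np

def branch_flows(v_k, v_m, y_km, t_km=1.0 + 0.0j, b_sh=0.0):
    """Sending- and receiving-end complex power flows, cf. Eqs. (34)-(35).

    v_k, v_m : complex bus voltages (pu)
    y_km     : series branch admittance (pu)
    t_km     : off-nominal complex tap, t_km = a_km * exp(j*phi_km)
    b_sh     : half of the line-charging susceptance (pu)"""
    s_km = (v_k * (np.conj(y_km) * t_km * np.conj(t_km) - 1j * b_sh) * np.conj(v_k)
            - v_k * np.conj(y_km) * t_km * np.conj(v_m))
    s_mk = (v_m * (np.conj(y_km) - 1j * b_sh) * np.conj(v_m)
            - v_m * np.conj(y_km) * np.conj(t_km) * np.conj(v_k))
    return s_km, s_mk

# Example call for the pi-line special case (a_km = 1, phi_km = 0),
# using the series parameters of branch 1-2 of Table 1 and assumed voltages.
y = 1.0 / (0.0012 + 0.0021j)
print(branch_flows(1.0 + 0.0j, 0.98 * np.exp(-1j * np.deg2rad(2.0)), y, b_sh=0.196))
```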

4.3 Small example

The power flow model in the complex plane as detailed in [14] is applied to a small example system whose diagram is shown in Figure 3, while the corresponding branch parameters and bus data, both in pu ($V_{base} = 230$ kV; $S_{base} = 100$ MVA), are presented in Table 1.

Figure 3.

Small 3-bus system.

Branch i-j | Series R (pu) | Series X (pu) | Charging (MVAr) | Shunt Y/2 (pu)
1-2 | 0.0012 | 0.0021 | 39.2 | 0.196
1-3 | 0.0150 | 0.0400 | | 
2-3 | 0.3000 | 1.6000 | | 

Bus (type) | Pg (pu) | V (pu) | Pload (pu) | Qload (pu)
PV-2 | 1.0000 | 1.0000 | 0.2160 | 0.0918
PQ-3 | | | 2.700 | 1.620

Table 1.

Branch and bus data.

The nodal admittance matrix $Y_{bus}$ is

$$Y_{bus} = \begin{bmatrix}
+213.3474 - j380.8922 & -205.1282 + j358.9744 & -8.2192 + j21.9178 \\
-205.1282 + j358.9744 & +205.2414 - j359.3821 & -0.1132 + j0.6037 \\
-8.2192 + j21.9178 & -0.1132 + j0.6037 & +8.3324 - j22.3256
\end{bmatrix}. \qquad (E38)$$
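For illustration, a generic sketch of how a nodal admittance matrix may be assembled from a branch list is given below (Python/NumPy, assumed data layout). The allocation of the line-charging shunt uses the usual half-at-each-terminal convention, which may differ from the convention used to obtain Eq. (38), so the resulting numbers need not match it exactly.

```python
import numpy as np

# branches: (from_bus, to_bus, R, X, y_shunt_half) in pu, 1-based bus numbers
branches = [(1, 2, 0.0012, 0.0021, 0.196),
            (1, 3, 0.0150, 0.0400, 0.0),
            (2, 3, 0.3000, 1.6000, 0.0)]
n_bus = 3

Ybus = np.zeros((n_bus, n_bus), dtype=complex)
for k, m, r, x, bsh in branches:
    k, m = k - 1, m - 1                 # convert to 0-based indices
    y = 1.0 / (r + 1j * x)              # series admittance
    Ybus[k, k] += y + 1j * bsh          # half of the charging at each terminal
    Ybus[m, m] += y + 1j * bsh
    Ybus[k, m] -= y
    Ybus[m, k] -= y

print(np.round(Ybus, 4))
```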

The whole set of intermediary results throughout the iterative process is presented in the sequence.

As in the real domain, the elements of the complex-valued Jacobian matrix remain practically unchanged after the second iteration, which suggests that we may keep them constant thereafter. Moreover, the computation of some entries can be avoided because they are the complex conjugates of other entries; it turns out that these elements are related to the PV-nodes.

Interestingly, the numerical values of the state variable corrections, state variables, and mismatch vectors calculated in the complex plane are displayed in Tables 2–4, respectively.

Δx̄ | ν = 0 | ν = 1 | ν = 2 | ν = 3
Δx2 | 0.0023·e^(+j90.00) | 0.0003·e^(−j90.39) | 0.0127·e^(−j90.08) | 0.1708·e^(+j89.92)
Δx3 | 0.1278·e^(−j138.75) | 0.0185·e^(−j174.56) | 0.4267·e^(−j179.44) | 0.2326·e^(−j179.18)
Δx2* | 0.0023·e^(−j90.00) | 0.0003·e^(+j90.37) | 0.0127·e^(+j90.07) | 0.1708·e^(−j89.91)
Δx3* | 0.1278·e^(+j138.75) | 0.0203·e^(+j178.81) | 0.4795·e^(+j173.22) | 0.2615·e^(+j173.47)
‖Δx̄‖∞ | 0.127809 | 0.020316 | 4.795255×10⁻⁴ | 2.614490×10⁻⁷

Table 2.

Unknown correction vector.

Convergence criterion: ‖Δx̄‖∞ < tol. = 10⁻⁴.


x̄ | ν = 0 | ν = 1 | ν = 2 | ν = 3
x2 | 1.000·e^(j0.0) | 1.000·e^(+j0.132) | 1.000·e^(+j0.115) | 1.000·e^(+j0.115)
x3 | 1.000·e^(j0.0) | 0.907·e^(−j5.326) | 0.889·e^(−j5.548) | 0.889·e^(−j5.552)
x2* | 1.000·e^(j180) | 1.000·e^(−j0.132) | 1.000·e^(−j0.115) | 1.000·e^(−j0.115)
x3* | 1.000·e^(j180) | 0.907·e^(+j5.326) | 0.887·e^(+j5.421) | 0.887·e^(+j5.420)

Table 3.

Unknown vector.

M̄ | ν = 0 | ν = 1 | ν = 2 | ν = 3 (×10⁻³)
M2 | −1.5680 + j0.0000 | +0.2178 + j0.0000 | +0.0093 + j0.0000 | +0.1229 + j0.0000
M3 | +2.7000 + j1.4240 | +0.1363 + j0.3648 | +0.0021 + j0.0087 | +0.0011 + j0.0047
M2* | +0.0000 + j0.0000 | +0.0000 − j0.0000 | +0.0000 − j0.0000 | +0.0000 − j0.0000
M3* | +2.7000 − j1.4240 | +0.1363 − j0.3648 | +0.0021 − j0.0087 | +0.0011 − j0.0047

Table 4.

Mismatch vector.

Convergence criterion: ‖M̄‖∞ < tol. = 10⁻⁴.

4.4 Performance in larger systems

The algorithm described earlier was encoded in MATLAB by using sparsity techniques and the column approximate minimum degree (colamd) ordering scheme. The numerical tests were executed on an Intel® Core™ i5-4200 CPU at 1.60 GHz–2.30 GHz, with 6 GB of RAM and a 64-bit operating system. A flat start condition is assigned to the state variables in all simulations.

Thereby, Table 5 presents the performance on larger systems of the Newton-Raphson method in the complex plane, which is highlighted in bold. The corresponding performance is compared with those of the former Newton-Raphson method developed in polar and rectangular coordinates, both in the real domain. In all simulations the convergence criterion of 1×10⁻¹² is assumed.

CV-Jacobian matrix dimension: 2(N−1) × 2(N−1).

System (CV-Jacobian dimension) | Algorithm | Number of iterations | Time/iteration (s) | Total time (s)
IEEE-14 (26×26) | 1. RVNRM(p) | 5 | 0.0184 | 0.1647
IEEE-14 (26×26) | 2. RVNRM(r) | 5 | 0.0150 | 0.0942
IEEE-14 (26×26) | 3. CVNRM(r) | 5 | 0.0098 | 0.0767
IEEE-30 (58×58) | 1. RVNRM(p) | 5 | 0.0206 | 0.1791
IEEE-30 (58×58) | 2. RVNRM(r) | 6 | 0.0105 | 0.1121
IEEE-30 (58×58) | 3. CVNRM(r) | 6 | 0.0091 | 0.0645
IEEE-57 (112×112) | 1. RVNRM(p) | 6 | 0.0132 | 0.1653
IEEE-57 (112×112) | 2. RVNRM(r) | 6 | 0.0137 | 0.1303
IEEE-57 (112×112) | 3. CVNRM(r) | 6 | 0.0110 | 0.0810
IEEE-118 (234×234) | 1. RVNRM(p) | 5 | 0.0162 | 0.1575
IEEE-118 (234×234) | 2. RVNRM(r) | 6 | 0.0196 | 0.1840
IEEE-118 (234×234) | 3. CVNRM(r) | 5 | 0.0121 | 0.1056
SIN-1916 (3830×3830) | 1. RVNRM(p) | 7 | 0.4642 | 3.4732
SIN-1916 (3830×3830) | 2. RVNRM(r) | 8 | 0.3095 | 2.6430
SIN-1916 (3830×3830) | 3. CVNRM(r) | 7 | 0.7699 | 4.5561

Table 5.

Performance in larger systems—squared matrices.

(p)—polar coordinates; (r)—rectangular coordinates; tol. = 1×10⁻¹².

The results presented in the aforementioned table allow us to infer that the Newton-Raphson method in the complex plane has a very good performance. Except for the SIN-1916 bus system, the time required to reach the solution is lower than that of the other approaches.

In the next section, the Gauss-Newton method is presented. The goal is to solve overdetermined systems of equations, i.e., the classical nonlinear least-squares method, now in the complex plane.


5. Complex-valued weighted-least-squares method (CV-WLS)

As shown in [3], the complex-valued WLS state estimator minimizes an objective function defined as

$$\underset{\bar{x}_c}{\arg\min}\; J(\bar{x}_c) = \frac{1}{2}\left[\bar{z}_c - \bar{h}_c(\bar{x}_c)\right]^{H}\Omega_c^{-1}\left[\bar{z}_c - \bar{h}_c(\bar{x}_c)\right], \qquad (E39)$$

where $\bar{z}_c = [\bar{z}\;\; \bar{z}^{*}]^{T}$ is a vector of specified complex values of dimension $(2m\times 1)$, $\bar{x}_c = [\bar{x}\;\; \bar{x}^{*}]^{T}$ is the vector of complex-valued state variables of dimension $(2n\times 1)$, $\bar{h}_c(\bar{x}_c)$ is a vector of nonlinear functions of dimension $(2m\times 1)$ that relates $\bar{x}_c$ to $\bar{z}_c$, $\bar{\omega}_c$ is a vector of random errors in the complex plane of dimension $(2m\times 1)$, and $\Omega_c = E[\bar{\omega}_c\,\bar{\omega}_c^{H}]$ is the Hermitian positive-definite covariance matrix of $\bar{\omega}_c$, of dimension $(2m\times 2m)$. The superscript H stands for the Hermitian operator, i.e., the complex conjugate transpose operation. Thus, the necessary condition of optimality is given by

$$\frac{\partial J(\bar{x}_c)}{\partial \bar{x}_c} = H(\bar{x}_c)^{H}\,\Omega_c^{-1}\left[\bar{z}_c - \bar{h}_c(\bar{x}_c)\right] = 0. \qquad (E40)$$

By applying a first-order Taylor series expansion of $\bar{h}_c(\bar{x}_c)$ about $\bar{x}_c^{\,\nu}$, we get

$$\bar{h}_c(\bar{x}_c) = \bar{h}_c\!\left(\bar{x}_c^{\,\nu}\right) + H\!\left(\bar{x}_c^{\,\nu}\right)\left(\bar{x}_c - \bar{x}_c^{\,\nu}\right). \qquad (E41)$$

By replacing Eq. (41) into Eq. (40), we obtain

$$H\!\left(\bar{x}_c^{\,\nu}\right)^{H}\Omega_c^{-1}\left[\bar{z}_c - \bar{h}_c\!\left(\bar{x}_c^{\,\nu}\right) - H\!\left(\bar{x}_c^{\,\nu}\right)\left(\bar{x}_c - \bar{x}_c^{\,\nu}\right)\right] = 0, \qquad (E42)$$

yielding the updated estimated state vector expressed as

$$\bar{x}_c^{\,\nu+1} = \bar{x}_c^{\,\nu} + G\!\left(\bar{x}_c^{\,\nu}\right)^{-1} H\!\left(\bar{x}_c^{\,\nu}\right)^{H}\Omega_c^{-1}\,\Delta\bar{z}_c^{\,\nu} = \bar{x}_c^{\,\nu} + \Delta\bar{x}_c^{\,\nu}, \qquad (E43)$$

where

$$\Delta\bar{x}_c^{\,\nu} = G\!\left(\bar{x}_c^{\,\nu}\right)^{-1} H\!\left(\bar{x}_c^{\,\nu}\right)^{H}\Omega_c^{-1}\,\Delta\bar{z}_c^{\,\nu}. \qquad (E44)$$

Notice that $G(\bar{x}_c^{\,\nu}) = H(\bar{x}_c^{\,\nu})^{H}\,\Omega_c^{-1}\,H(\bar{x}_c^{\,\nu})$ and $\Delta\bar{z}_c^{\,\nu} = \bar{z}_c - \bar{h}_c(\bar{x}_c^{\,\nu})$. Thus, the iterations are stopped when

$$\left\|\Delta\bar{x}_c^{\,\nu}\right\|_{\infty} \le \mathrm{tol}, \qquad \text{e.g., } 10^{-3}, \qquad (E45)$$

where $\|\cdot\|_{\infty}$ is the infinity norm and ν is the iteration counter.

Note that in Eq. (40), $H(\bar{x}_c)$ is the Jacobian matrix of dimension $(2m\times 2n)$ defined in the complex domain, yielding

$$H(\bar{x}_c) \overset{\Delta}{=} \frac{\partial \bar{h}_c(\bar{x}_c)}{\partial \bar{x}_c} = \begin{bmatrix} \dfrac{\partial \bar{h}(\bar{x},\bar{x}^{*})}{\partial \bar{x}} & \dfrac{\partial \bar{h}(\bar{x},\bar{x}^{*})}{\partial \bar{x}^{*}} \\[2mm] \dfrac{\partial \bar{h}^{*}(\bar{x},\bar{x}^{*})}{\partial \bar{x}} & \dfrac{\partial \bar{h}^{*}(\bar{x},\bar{x}^{*})}{\partial \bar{x}^{*}} \end{bmatrix}. \qquad (E46)$$

Let $J_h = \partial\bar{h}(\bar{x}_c)/\partial\bar{x}$ and $J_{hd} = \partial\bar{h}(\bar{x}_c)/\partial\bar{x}^{*}$ be Jacobian submatrices of dimension $(m\times n)$. They are obtained through the Wirtinger partial derivatives with respect to the complex and the complex conjugate state variables using the chain differentiation rule. Now, let us define the Jacobian matrix as

$$J_c(\bar{x}_c) = \begin{bmatrix} J_h & J_{hd} \end{bmatrix}. \qquad (E47)$$

In the important special case given by Eq. (39), where $J(\bar{x}_c)$ is a real-valued function of complex variables, the following property holds:

$$J(\bar{x}_c)\in\mathbb{R} \;\Rightarrow\; \frac{\partial \bar{h}^{*}(\bar{x}_c)}{\partial \bar{x}} = \left(\frac{\partial \bar{h}(\bar{x}_c)}{\partial \bar{x}^{*}}\right)^{*} = J_{hd}^{*}, \qquad \frac{\partial \bar{h}^{*}(\bar{x}_c)}{\partial \bar{x}^{*}} = \left(\frac{\partial \bar{h}(\bar{x}_c)}{\partial \bar{x}}\right)^{*} = J_{h}^{*}. \qquad (E48)$$

Therefore, taking into account the chain rule differentiation mentioned earlier and the property stated in Eq. (48), Eq. (46) becomes

$$H(\bar{x}_c) = \begin{bmatrix} J_h & J_{hd} \\ J_{hd}^{*} & J_{h}^{*} \end{bmatrix} = \begin{bmatrix} J_c(\bar{x}_c) \\ J_c^{*}(\bar{x}_c)\,S \end{bmatrix}, \qquad (E49)$$

where $S$ is a swap operator that permutes blocks of $m$ rows or blocks of $n$ columns, depending upon whether $S$ pre-multiplies or post-multiplies a matrix, respectively. Moreover, this operator is an isomorphism from $\mathbb{C}$ to the dual space $\mathbb{C}^{*}$, which obeys the properties $S^{-1} = S^{T} = S$. It shows that $S$ is symmetric and equal to its own inverse, that is, $S^{2} = I$. For instance, as shown in [2], this matrix is defined as

$$S \overset{\Delta}{=} \begin{bmatrix} 0 & I_n \\ I_n & 0 \end{bmatrix}, \qquad (E50)$$

where In is the (n×n)-identity matrix.

Now, the complex-valued gain matrix $G(\hat{\bar{x}}_c)$ in expanded form can be expressed as

$$G(\hat{\bar{x}}_c) = \begin{bmatrix} G_{\bar{x}\bar{x}} & G_{\bar{x}\bar{x}^{*}} \\ G_{\bar{x}^{*}\bar{x}} & G_{\bar{x}^{*}\bar{x}^{*}} \end{bmatrix}, \qquad (E51)$$

where $G_{\bar{x}^{*}\bar{x}^{*}} = G_{\bar{x}\bar{x}}^{*}$ and $G_{\bar{x}^{*}\bar{x}} = G_{\bar{x}\bar{x}^{*}}^{*}$, all of dimension $(n\times n)$. Then, it follows from Eq. (47) that

$$G_{\bar{x}\bar{x}} = \frac{1}{2}\left[\left(\frac{\partial\bar{h}}{\partial\hat{\bar{x}}}\right)^{H}\Omega_c^{-1}\,\frac{\partial\bar{h}}{\partial\hat{\bar{x}}} + \left(\frac{\partial\bar{h}}{\partial\hat{\bar{x}}^{*}}\right)^{H}\Omega_c^{-1}\,\frac{\partial\bar{h}}{\partial\hat{\bar{x}}^{*}}\right] = \frac{1}{2}\left(J_h^{H}\,\Omega_c^{-1}\,J_h + J_{hd}^{H}\,\Omega_c^{-1}\,J_{hd}\right), \qquad (E52)$$

Similarly, we get

$$G_{\bar{x}\bar{x}^{*}} = \frac{1}{2}\left[\left(\frac{\partial\bar{h}}{\partial\hat{\bar{x}}}\right)^{H}\Omega_c^{-1}\,\frac{\partial\bar{h}}{\partial\hat{\bar{x}}^{*}} + \left(\frac{\partial\bar{h}}{\partial\hat{\bar{x}}^{*}}\right)^{H}\Omega_c^{-1}\,\frac{\partial\bar{h}}{\partial\hat{\bar{x}}}\right] = \frac{1}{2}\left(J_h^{H}\,\Omega_c^{-1}\,J_{hd} + J_{hd}^{H}\,\Omega_c^{-1}\,J_{h}\right). \qquad (E53)$$

The investigation of the sparsity structure of the Jacobian matrix in the complex plane given by Eq. (49), e.g., Figure 4, reveals that the Jacobian matrices in the C-domain are sparser than their counterparts in the R-domain [15]. Figure 4 refers to the Brazilian equivalent 730-bus system, whose Jacobian matrix in the C-domain has 10,396 nonzero elements, while in the R-domain it has 18,403 nonzero elements; i.e., it is about 45% sparser.

Figure 4.

Sparsity structure of (a) real-valued Jacobian matrix; (b) real-valued gain matrix; (c) complex-valued Jacobian matrix; and (d) complex-valued hessian matrix of the Brazilian equivalent 730-bus system.

Nonetheless, the recommended numerical procedure aiming to solve the complex-valued power system state estimation problem is to work with the weighted form of the right-hand side (rhs) of Eq. (40) instead of Eq. (44), yielding

$$\tilde{H}(\hat{\bar{x}}_c)\,\Delta\bar{x}_c^{\,\nu} = \Delta\tilde{\bar{z}}_c, \qquad (E54)$$

where $\tilde{H}(\hat{\bar{x}}_c) = \Omega_c^{-1/2}\,H(\hat{\bar{x}}_c)$ is of dimension $(2m\times 2n)$ and $\Delta\tilde{\bar{z}}_c = \Omega_c^{-1/2}\,\Delta\bar{z}_c$ is of dimension $(2m\times 1)$. Thus, the incremental changes in the state vector are calculated via

$$\Delta\bar{x}_c^{\,\nu} = \tilde{H}(\hat{\bar{x}}_c)^{\dagger}\,\Delta\tilde{\bar{z}}_c, \qquad (E55)$$

where the operator $(\cdot)^{\dagger}$ denotes the Moore-Penrose pseudoinverse [3].

Aiming to avoid explicitly storing the Q-matrix, we apply the QR transformation to the augmented matrix $H_a(\hat{\bar{x}}_c)$, given by

$$H_a(\hat{\bar{x}}_c) = \left[\; \tilde{H}(\hat{\bar{x}}_c) \;\;\; \Delta\tilde{\bar{z}}_c \;\right]. \qquad (E56)$$

By storing the rotations in compact form, the complex-valued Jacobian matrix can be kept constant and only the right-hand-side vector is updated throughout the final iterations. The solution of the state vector increment given by Eq. (55) is found by executing a simple back-substitution of Eq. (56) after performing unitary transformations to the latter matrix, resulting in

$$\tilde{H}_a(\hat{\bar{x}}_c) = \left[\; T_c \;\;\; \Delta\tilde{\tilde{\bar{z}}}_c \;\right]. \qquad (E57)$$

Here, $T_c$ is an upper triangular matrix of dimension $(2n\times 2n)$, and $\Delta\tilde{\tilde{\bar{z}}}_c$ comprises the corresponding rows of the updated rhs vector, of dimension $(2n\times 1)$. Finally, Eq. (55) is solved by performing a back-substitution via

$$\Delta\bar{x}_c^{\,\nu} = T_c^{-1}\,\Delta\tilde{\tilde{\bar{z}}}_c. \qquad (E58)$$
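A compact sketch of the solution of Eqs. (54)–(58) is given below in Python/NumPy. It is illustrative only: a dense QR factorization of the weighted augmented matrix stands in for the sparse complex fast Givens QRD, and the measurement model h_c, Jacobian H_c, and covariance Omega_c are hypothetical, application-supplied placeholders. An overdetermined system (2m ≥ 2n + 1) is assumed.

```python
import numpy as np

def cv_wls_gauss_newton(x_c0, z_c, h_c, H_c, Omega_c, tol=1e-3, max_iter=20):
    """Complex-valued WLS via Gauss-Newton (sketch of Eqs. (54)-(58)).

    x_c0    : initial stacked state [x, conj(x)], shape (2n,)
    z_c     : stacked measurements [z, conj(z)], shape (2m,)
    h_c     : callable x_c -> h_c(x_c), shape (2m,)
    H_c     : callable x_c -> Jacobian, shape (2m, 2n), Eq. (49)
    Omega_c : measurement covariance, shape (2m, 2m)"""
    x_c = np.asarray(x_c0, dtype=complex).copy()
    # Whitening with L^{-1}, where Omega_c = L L^H (plays the role of Omega_c^(-1/2)).
    L = np.linalg.cholesky(Omega_c)
    for nu in range(1, max_iter + 1):
        dz = np.linalg.solve(L, z_c - h_c(x_c))       # weighted residual
        Hw = np.linalg.solve(L, H_c(x_c))             # weighted Jacobian, Eq. (54)
        # Dense QR of the augmented matrix [Hw | dz], Eqs. (56)-(57);
        # the chapter uses the complex fast Givens QRD instead.
        R = np.linalg.qr(np.column_stack((Hw, dz)), mode="r")
        Tc, rhs = R[:-1, :-1], R[:-1, -1]
        dx_c = np.linalg.solve(Tc, rhs)               # back-substitution, Eq. (58)
        x_c += dx_c
        if np.abs(dx_c).max() <= tol:                 # Eq. (45)
            return x_c, nu
    raise RuntimeError("Gauss-Newton did not converge")
```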

5.1 Small example

The one-line diagram of a 2-bus system is depicted in Figure 5. The system is provided with two PMU measurements that meter the nodal voltage magnitudes and phase angles and two real and reactive power flow measurements, which are identified by means of black bullets and red triangles, respectively. In Table 6, the transmission line parameters are given in pu.

Figure 5.

2-Bus power system.

Branch i-j | Series R (pu) | Series X (pu) | Charging (MVAr) | Shunt Y/2 (pu)
1-2 | 0.0203 | 0.1318 | 62.62 | 0.3131

Table 6.

Branch data.

From Figure 5, the complex power injections at the sending and receiving ends, respectively, are written, yielding

$$S_1 = V_1\left(y_{12}^{*} - j\,b_{12}^{sh}\right)V_1^{*} - V_1\,y_{12}^{*}\,V_2^{*}, \qquad (E59)$$
$$S_2 = V_2\left(y_{12}^{*} - j\,b_{12}^{sh}\right)V_2^{*} - V_2\,y_{12}^{*}\,V_1^{*}. \qquad (E60)$$

Applying the Wirtinger calculus to Eqs. (59) and (60) leads to the Jacobian matrix given by Eq. (47) at each iteration as shown in the sequence. Here, the complex power injection measurement at each end, i.e., S1 and S2, is equal to the corresponding power flow measurement, i.e., S12 and S21, respectively.

As observed in the above expressions of $J_c$, the submatrix $J_h$ is sparser than the submatrix $J_{hd}$, which affects the sparsity structure of the complex-valued Jacobian matrix as shown earlier in Figure 4. The state variables at each iteration are provided in Table 7, while the estimated measured quantities are given in Table 8. The residual vector and the chi-squared index throughout the iterations are presented in Table 9. Notice that the estimated values obtained in the C-domain are exactly equal to those extracted from the power flow report in the R-domain. Furthermore, the Gauss-Newton iterative algorithm converges in three iterations in both vector spaces, i.e., in the real and in the complex domain.

x̄ | ν = 0 | ν = 1 | ν = 2 | ν = 3
V1 | 1.0000·e^(+j0.0) | 1.0097·e^(−j0.1015) | 1.0000·e^(+j0.0018) | 1.0001·e^(+j0.0004)
V2 | 1.0000·e^(+j0.0) | 0.8973·e^(−j14.9708) | 0.8954·e^(−j15.0924) | 0.8956·e^(−j15.0904)
V1* | 1.0000·e^(+j0.0) | 1.0097·e^(+j0.1015) | 1.0000·e^(−j0.0018) | 1.0001·e^(−j0.0004)
V2* | 1.0000·e^(+j0.0) | 0.8973·e^(+j14.9708) | 0.8954·e^(+j15.0924) | 0.8956·e^(+j15.0904)

Table 7.

Estimated state variables.

ẑ | ν = 0 | ν = 1 | ν = 2 | ν = 3
S12 | 0.0000 − j0.3131 | +1.4747 + j0.1745 | +1.8873 + j0.4815 | +1.8827 + j0.4248
S21 | 0.0000 − j0.3131 | −1.4014 − j0.0707 | −1.8036 − j0.5094 | −1.7997 − j0.4500
S1 | 0.0000 − j0.3131 | +1.4747 + j0.1745 | +1.8873 + j0.4815 | +1.8827 + j0.4248
S2 | 0.0000 − j0.3131 | −1.4014 − j0.0707 | −1.8036 − j0.5094 | −1.7997 − j0.4500

Table 8.

CV-estimated quantities.

r̄ | ν = 0 | ν = 1 | ν = 2 | ν = 3
rV1 | 0.0000 + j0.0000 | 0.1991 + j0.0000 | −0.0097 + j0.0018 | 0.0000 − j0.0000
rV2 | −0.1349 − j0.2333 | −0.1634 − j0.0000 | −0.0017 − j0.0015 | −0.0006 − j0.0001
rS12 | +1.8827 + j0.7375 | +0.4080 + j0.2499 | −0.0046 − j0.0571 | 0.0000 − j0.0004
rS21 | −1.7998 − j0.1366 | −0.3984 − j0.3790 | +0.0038 + j0.0597 | +0.0001 + j0.0003
rS1 | +1.8827 + j0.7375 | +0.4080 + j0.2499 | −0.0046 − j0.0571 | 0.0000 − j0.0004
rS2 | −1.7998 − j0.1366 | −0.3984 − j0.3790 | +0.0038 + j0.0597 | +0.0001 + j0.0003
J(x̂̄) | 14.7654 | 1.1288 | 0.013825 | 8.8×10⁻⁷

Table 9.

CV-residual vector.

5.2 Performance in larger systems

In the sequel, Table 10 presents the performance on larger systems of the Gauss-Newton method in the complex plane, which is highlighted in bold. In all simulations the convergence criterion of 1×10⁻³ is assumed.

System | CV-Jacobian dimension (2m×2n) | Algorithm | Number of iterations | Time/iteration (s) | Total time (s)
IEEE-14 | 60×28 | 3. CVNRM(r) | 3 | 0.045303 | 0.108547
IEEE-30 | 124×60 | 3. CVNRM(r) | 3 | 0.067118 | 0.158415
IEEE-118 | 368×236 | 3. CVNRM(r) | 8 | 0.150205 | 1.064942
SIN-340 | 1704×680 | 3. CVNRM(r) | 7 | 0.992872 | 6.724038
SIN-1916 | 9642×3832 | 3. CVNRM(r) | 8 | 29.856246 | 242.695989

Table 10.

Performance in larger systems—overdetermined matrices.

(r)—rectangular coordinates; tol. = 1×10⁻³.

The comparative analysis of the results presented in the aforementioned table allows us to infer that the Gauss-Newton method in the complex plane has a very good performance. Except for the SIN-1916 bus system, the time required to reach the solution is lower than that of the remaining approaches.


6. Conclusions and future developments

The chapter’s prime goal is to present advances aimed at solving nonlinear systems of equations in the complex plane, regardless of whether they are exactly determined or overdetermined. Firstly, the classical Newton-Raphson method is developed, addressed to the exactly determined problem. Thereafter, the weighted-least-squares (WLS) estimator is developed through the Gauss-Newton method. In both cases the applications are carried out on small examples and on larger systems. All results demonstrate the advantages of the algorithms developed in the complex plane over the former procedures, although the computational burden is still a bottleneck for larger systems. It should be highlighted that there are many enhancements that can be pursued to mitigate this problem. To cite a few:

  1. Ordering schemes based on row orderings overlapped by column orderings [12, 13].

  2. Suppression of the calculation and storage of complex conjugate entries during the building of the Jacobian matrices.


Nomenclature

x̄_c: Vector of the state variables in the conjugate coordinate system
x, x*: Complex and complex conjugate state variables
t, t*: Complex and complex conjugate tap positions
ℜ(⋅), ℑ(⋅): Real and imaginary parts of a complex variable
J: Complex-valued Jacobian matrix for the exactly determined system of equations
H: Complex-valued Jacobian matrix for the overdetermined system of equations
M̄: Complex-valued mismatch vector
(⋅)_c: Quantity in the conjugate coordinate system
‖⋅‖²: Squared Euclidean norm
‖⋅‖∞: Infinity norm
ν: Iteration counter

References

  1. Steinmetz CP. Complex quantities and their use in electrical engineering. In: Proceedings of the International Electrical Congress. Chicago, IL: AIEE Proceedings; 1893. pp. 33-74
  2. Kreutz-Delgado K. The Complex Gradient Operator and the CR-Calculus. San Diego, USA: Electrical and Computer Engineering—Jacobs School of Engineering; University of California; 2009. pp. 1-74. ArXiv e-print, arXiv:0906.4835v1 [math.OC]
  3. Sorber L, Van Barel M, De Lathauwer L. Unconstrained optimization of real functions in complex variables. Society for Industrial and Applied Mathematics—SIAM. 2012;22(3):879-898
  4. Pires R. Complex-valued steady-state models as applied to power flow analysis and power system state estimation [PhD dissertation]. Itajuba, Brazil: Institute of Electrical Systems and Energy—ISEE; Federal University of Itajuba—UNIFEI; 2018. Available from: https://repositorio.unifei.edu.br/xmlui/handle/123456789/55
  5. David AHJ. A generalization of the conjugate-gradient method to solve complex systems. IMA Journal of Numerical Analysis. 1986;6(4):447-452
  6. Maltsev A, Pestretsov V, Maslennikov R, Khoryaev A. Triangular systolic array with reduced latency for QR-decomposition of complex matrices. In: IEEE International Symposium on Circuits and Systems—ISCAS. 2006. pp. 385-388
  7. Awasthi A, Guttal R, Al-Dhahir N, Balsara PT. Complex QR decomposition using fast plane rotations for MIMO applications. IEEE Communications Letters. 2014;18(10):1743-1746
  8. Gentleman WM. Least squares computations by Givens transformations without square roots. Journal of Mathematical Analysis and Applications. 1974;12:329-336
  9. Simões-Costa AJA, Quintana VH. An orthogonal row processing algorithm for power sequential state estimation. IEEE Transactions on Power Apparatus and Systems. 1981;100(8):3791-3800
  10. Wang JW, Quintana VH. A decoupled orthogonal row processing algorithm for power system state estimation. IEEE Transactions on Power Apparatus and Systems. 1984;PAS-103(8):2337-2344
  11. Vempati N, Slutsker IW, Tinney WF. Enhancement to Givens rotations for power system state estimation. IEEE Transactions on Power Systems. 1991;6(2):842-849
  12. Catalyurek UV, Aykanat CT, Kayaaslan E. Hypergraph partitioning-based fill-reducing ordering for symmetric matrices. SIAM Journal on Scientific Computing. 2011;4:1996-2023
  13. Kaya O, Kayaaslan E, Uçar B, Duff I. Fill-in reduction in sparse matrix factorizations using hypergraphs [Research Report—8448]. 2014
  14. Pires R, Mili L, Chagas G. Robust complex-valued Levenberg-Marquardt algorithm as applied to power flow analysis. International Journal of Electrical Power & Energy Systems. 2019;113:383-392. DOI: 10.1016/j.ijepes.2019.05.032
  15. Pires RC, Mili L, Lemos FAB. Constrained robust estimation of power system state variables and transformer tap positions under erroneous zero-injections. IEEE Transactions on Power Systems. 2014;29(3):1144-1152. ISSN: 0885-8950
