
Numerical Algorithms of Finding the Branching Lines and Bifurcation Points of Solutions for One Class of Nonlinear Integral Equations

Written By

B. M. Podlevskyi

Submitted: 27 April 2012 Published: 24 October 2012

DOI: 10.5772/48735

From the Edited Volume

Nonlinearity, Bifurcation and Chaos - Theory and Applications

Edited by Jan Awrejcewicz and Peter Hagedorn


1. Introduction

When investigating nonlinear equations of the form in which the operator depends nonlinearly both on the parameter and on the function, a formal approach based on linearization is applied. This approach shows that the branching points of the equation can only be those values of the parameter for which unity is an eigenvalue of the corresponding linearized equation (see, e.g., [20])

with an operator-valued function (whose values are linear operators of the spectral parameter) that depends nonlinearly on the parameter. If the linearized equation depends on the parameter linearly, then its eigenvalues are the branching points of the initial equation. In the general case, curves of eigenvalues appear, and the branching points are then those values of the parameter of the problem

for which

E1

The theory of branching of solutions of nonlinear equations arose in close connection with applied problems, and its development continues to be driven by new applied problems. Some of these problems are reflected in the monographs [3, 4, 20], as well as in several articles (see, e.g., [5, 6] and the references therein).

Application of the above-cited approach to the nonlinear integral operator arising in the synthesis of antenna systems according to a given amplitude directivity pattern leads to the nonlinear two-parameter eigenvalue problem

with an integral operator depending analytically on the two spectral parameters.

The essential difference between two-parameter problems and one-parameter ones is that a two-parameter problem may have no solutions at all or, on the contrary, may have a continuum of them, which in the case of real parameters form curves of eigenvalues.

Such problems remain insufficiently investigated: many open questions are connected with them, for example the existence of solutions and their number, as well as the development of numerical methods for solving such spectral problems for algebraic, differential, and integral equations.

In this work, an algorithm is proposed for finding the branching lines of the integral equation arising in the variational statement of the antenna array synthesis problem for a given amplitude directivity pattern, as, for example, in [2].


2. Preliminary. Nonlinear synthesis problem

We consider a radiating system consisting of identical and identically oriented radiators with a directivity pattern (DP) that is the same for all radiators, whose phase centers are located in the plane (grid plane) of a Cartesian coordinate system. We assume that the coordinates of the radiators form a rectangular equidistant grid, oriented along the axes and symmetric with respect to them. Then the function that describes the DP (plane array factor) of the equidistant plane system of radiators (plane array) has the form [2]

E2

where the complex currents on the radiators appear together with the angular coordinates of a spherical coordinate system whose center coincides with the center of the Cartesian coordinate system, and an integer function sets the number of elements in each row of the array. The total number of elements in the array is determined by this function.

We introduce the generalized variables

and denote, respectively, the distances between adjacent radiators along the two axes. Then the coordinates of the radiators are calculated as

and the plane array factor (1) can be represented as

E3

where

E4

Note that this function is periodic in each of the two variables, with a corresponding period in each. Denote the region that corresponds to one period, and assume that the required amplitude directivity pattern is given in some region and is described by a function that is continuous and nonnegative there and equal to zero outside it.
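For illustration, the Python sketch below (an assumption of this presentation, not code from the chapter) evaluates a plane array factor of the generic form f(psi1, psi2) = sum over n, m of I_nm exp(i(n psi1 + m psi2)) in the generalized variables. The exact expression (2), with the symmetric row numbering of elements, is not reproduced here; the sketch only illustrates the structure of the sum and the periodicity noted above, and all names (array_factor, I, psi1, psi2) are placeholders.

import numpy as np

def array_factor(I, psi1, psi2):
    # I is the complex N1 x N2 matrix of currents on the radiators; psi1, psi2
    # are the generalized angular variables.  Simplified stand-in for (2):
    # f(psi1, psi2) = sum_{n,m} I_nm * exp(1j * (n*psi1 + m*psi2)),
    # which is 2*pi-periodic in each variable.
    N1, N2 = I.shape
    n = np.arange(N1) - N1 // 2        # integer element indices around the array center
    m = np.arange(N2) - N2 // 2
    phase = np.exp(1j * (n[:, None] * psi1 + m[None, :] * psi2))
    return np.sum(I * phase)

For example, array_factor(np.ones((5, 5), dtype=complex), 0.3, -0.7) gives the pattern of a uniformly excited 5 x 5 array at one point of the period.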

We must find currents on the radiators such that the directivity pattern they create approximates, in amplitude, the given directivity pattern in the best way. To this end, we consider the variational statement of the problem as, for example, in [2] or [18].

2.1. Variational statement of the synthesis problem

Thus, we formulate the synthesis problem as the problem of minimizing the functional [18]

E5

on the space, i.e.

which characterizes the mean-square deviation of the moduli of the given directivity pattern and the synthesized one in the region.

From the necessary condition for a minimum of the functional, we obtain a nonlinear system of equations for the optimum currents on the radiators

E6

or the equation for the optimum directivity pattern, which is equivalent to (4)

E7

where

E8

is the kernel, which depends essentially on the coordinates of the antenna array.

Next, consider a rectangular grid with geometric center at the origin, consisting of a given number of elements. We also assume that the amplitude directivity pattern is given in the region. Denote the intervals of variation of the angular coordinates in this region and introduce the new variables

Then the kernel in equation (5) is real and takes the form [2]

E9

where the two main parameters of the problem appear. Thus, equation (5) for the optimal DP takes the form

E10

and equation (4) for optimal currents takes the form

E11

The equivalence of equations (7) and (8) means that there is a one-to-one correspondence between the solutions of these equations, i.e., to each solution of equation (7) there corresponds a solution of equation (8) and vice versa. In particular, if a function is a solution of equation (8), the corresponding solution of equation (7) is determined by the formula

E12

and if a function is a solution of equation (7), the corresponding solution of equation (8) is determined by the relation

E13

Since equations (7) and (8) are nonlinear equations of Hammerstein type, they may have nonunique solutions, whose number and properties depend on the number of elements in the antenna array and their placement, as well as on the properties of the given amplitude directivity pattern.

It is easy to see that one of the possible solutions of equation (7) (we call it trivial) is

E14

Numerical synthesis experiments for the directivity pattern at different values of the parameters show that, as the parameters grow, other solutions appear that branch off from the trivial solution and are more effective in terms of the values of functional (3), from 0% to %.

In particular, for the given directivity pattern, the values that functional (3) takes on the optimal solution for different values of the main parameters (0.739543, 0.719989, 0.644291, 0.559552 and 0.493709 for the parameter values 0.57, 0.60, 0.65, 0.70 and 0.75, respectively) are smaller than the values of functional (3) for the trivial solution at the same parameter values (0.739769, 0.741211, 0.734128, 0.707903 and 0.661929), by 0.03%, 2.86%, 12.24%, 20.95% and 25.41%, respectively.

Numerical examples of the trivial and branching solutions for the given directivity pattern and the basic parameters are shown in Fig. 1 - Fig. 4.

Fig. 1 shows the trivial solution, which is created by a symmetric in-phase current distribution on the radiators of the array (Fig. 2). The amplitude of the synthesized directivity pattern that branches off from the trivial one is shown in Fig. 3, and the optimal current on the radiators that creates it is asymmetric and shifted toward the first quadrant relative to the center of the array (Fig. 4).

Figure 1.

Amplitude directivity pattern of a trivial solution

Figure 2.

In-phase current distribution on the radiators, which creates the pattern

Figure 3.

Amplitude directivity pattern of the solution branched off from the trivial one

Figure 4.

Optimal current distribution on radiators, which creates the branched solution

Figure 5.

Given amplitude directivity pattern

The branching solutions are even more effective for directivity patterns that do not have central symmetry. For example, for the given directivity pattern and main parameters shown in Fig. 5, numerical examples of the trivial and branching solutions are shown in Fig. 6 - Fig. 9. These figures show that the branching solution (whose amplitude directivity pattern is shown in Fig. 8, with the optimal current distribution on the radiators that generates it shown in Fig. 9) approximates the given directivity pattern more accurately than the trivial solution (11) (whose amplitude directivity pattern, shown in Fig. 6, is created by the symmetric in-phase current of Fig. 7), not only in the mean-square sense (the values of functional (3) are 0.050109 versus 0.185960, an improvement of about 73%) but also with respect to its form.

Thus, in most cases it is the nontrivial solution, which branches off from the trivial one as the parameters grow, that is of practical interest.

Figure 6.

Amplitude directivity pattern of trivial solution

Figure 7.

In-phase current distribution on the radiators, which creates the pattern

Figure 8.

Amplitude directivity pattern of the solution branched off from the trivial one

Figure 9.

Optimal distribution of current on the radiators, which creates the branching solution

2.2. Problem of finding the branching lines

The points of possible branching of solutions of integral equation (7) are those values of the real physical parameters at which the homogeneous integral equation [18]

E15

obtained by linearization of equation (7), has solutions distinct from the identical zero [20]. Thus, we obtain the two-parameter eigenvalue problem, nonlinear with respect to the parameters,

E16

It is easy to verify that, at arbitrary finite values of the parameters, the corresponding function is an eigenfunction of equation (12). From this it follows that the operator has a spectrum that coincides with the first quadrant of the parameter plane.

The problem consists in finding the range of the real parameters of problem (13) for which solutions different from this eigenfunction appear.

It should be noted that in the special case when the variables in the given function can be separated, i.e., it can be represented as a product of one-variable factors, equation (12), provided that the kernel function admits the same representation, decomposes into two independent one-parameter equations with the corresponding operators.

Such equations are studied in [2, 18], and, for example, the algorithms of [11, 13, 15] can be applied to their solution.

In this work, numerical algorithms are proposed for solving the more complicated problem in which the variables are not separated.


3. Basic equations

First we show that, as in the one-dimensional case [18], the kernel is, in the region and for arbitrary parameter values, a positive kernel of the integral operator

To this end, consider the scalar product

Substituting the expression (6) for the kernel, we obtain

Obviously, the last inequality turns into an equality only in the trivial case. From this it follows that the kernel is positive, and the positive operator leaves invariant the cone of continuous nonnegative functions. As a result, the operator is positive on the corresponding function. Taking this into account, we reduce the operator in (12) to a self-adjoint form by a standard method. Introducing a new function

E17

where, we obtain the integral equation

E18

with a symmetric kernel

Since, at arbitrary values of the parameters, this function is an eigenfunction of equation (12), then, taking (14) into account, the eigenfunction of equation (15) at arbitrary parameter values is the function

which corresponds to a spectrum of the operator in (15) coinciding with the first quadrant of the parameter plane.

To find the solutions distinct from this function, we eliminate it from the kernel; then equation (15) reduces to the integral equation

E19

with a symmetric kernel

E20

From Schmidt's lemma [20] it follows that this function is no longer an eigenfunction of equation (16). That is, we have eliminated from the spectrum of the operator in (16) the continuum set of eigenvalues, coinciding with the first quadrant of the parameter plane, that corresponds to this function. So we obtain the self-adjoint generalized eigenvalue problem

E21

with an operator that is continuously differentiable with respect to the parameters. The existence of the partial Frechet derivatives at arbitrary points follows from the continuity of the kernel in all of its variables in the region and from the existence and continuity of the corresponding derivatives of the kernel, which are not presented here because of their bulky form.

Using the degeneracy of the kernel, we reduce equation (16) to an equivalent system of algebraic equations.

Using formula (6), we write the kernel as a finite sum, where

Then equation (16) takes the form

where

E22

and the unknown coefficients are determined as solutions of a homogeneous system of linear algebraic equations where

So we have obtained a two-parameter matrix eigenvalue problem, nonlinear with respect to the spectral parameters, equivalent to (18)

E23

with a symmetric matrix and the identity matrix of the corresponding dimension.

Thus, the problem of finding the branching lines of solutions of equation (7) is reduced to finding the eigenvalue curves of the nonlinear two-parameter spectral problem (19). Obviously, in order for problem (19) to have a nonzero solution it is necessary that

E24

i.e., the eigenvalues of problem (19) are zeros of this determinant function.


4. Algorithm of finding the eigenvalue curves

The main computational part of the proposed algorithm is the method proposed in [14, 15] for computing all eigenvalues of the nonlinear matrix spectral problem

E25

belonging to some given range of the spectral parameter at a given value of the second parameter. In problem (21), the matrix is real and its elements depend nonlinearly on both parameters. In order to show in detail how the method of [15] is applied to the problem considered in this paper, we present the necessary results from [15].

Thus, in problem (21), we replace, for example, one of the parameters by a fixed expression and consider the corresponding one-parameter problem

E26

at the given fixed values. Then, obviously, the eigenvalues of problem (21) are zeros of the function

E27

where the matrix is real and its elements depend nonlinearly on the parameter.

One should determine how many zeros of this function, and therefore eigenvalues of the problem, lie in some given range of the parameter, and calculate each of them.

4.1. The argument principle of meromorphic function

The algorithm for finding the number of zeros lying in a given region, and their approximations, is based on the following statement, which follows from the argument principle for meromorphic functions.


Statement. Let a meromorphic function have zeros (counted with their multiplicity) in the region and no zeros on its boundary. Then their number is determined in accordance with the argument principle

E28

and relations

E29

are true, where

E30

Thus, knowing these quantities, from the system (24) we can find the zeros of the function that lie in the region.

By enclosing the interval in a region, such as a circle with a given center and radius, and applying the above statement to the meromorphic function, one can find all eigenvalues of problem (22) belonging to the given region, i.e., to the given interval. The integrals in (23) and (25) can be replaced by approximate quadrature formulas, such as the rectangle rule at points on the boundary; since the boundary is a circle, for the calculation of the required quantities we obtain the relation

E31

The system (24) itself is solved by Newton's method, with the initial approximation chosen on the boundary of the region:

E32
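As an illustration of formulas (23)-(26), the following Python sketch (an assumption of this presentation, not the author's code) approximates the contour sums s_k = (1/(2*pi*i)) * integral of lambda^k f'(lambda)/f(lambda) d lambda by the rectangle rule on a circle with center z0 and radius r, and then recovers the zeros from the power sums via Newton's identities in the Delves-Lyness manner, rather than by the Newton solve of system (24) described in the text. The names f, df, z0, r, n are placeholders.

import numpy as np

def contour_moments(f, df, z0, r, kmax, n=64):
    # s_k ~ (1/(2*pi*i)) * sum over the nodes of z^k * f'(z)/f(z) * dz  (rectangle rule)
    s = np.zeros(kmax + 1, dtype=complex)
    for j in range(n):
        t = 2.0 * np.pi * (j + 0.5) / n
        z = z0 + r * np.exp(1j * t)                       # quadrature node on the circle
        dz = 1j * r * np.exp(1j * t) * (2.0 * np.pi / n)  # contour element
        ratio = df(z) / f(z)
        for k in range(kmax + 1):
            s[k] += z**k * ratio * dz / (2.0j * np.pi)
    return s

def zeros_in_circle(f, df, z0, r, n=64):
    # number of zeros inside the circle = s_0, by the argument principle
    m = int(round(contour_moments(f, df, z0, r, 0, n)[0].real))
    if m == 0:
        return np.array([])
    s = contour_moments(f, df, z0, r, m, n)
    # Newton's identities give the elementary symmetric functions e_k of the zeros
    e = np.zeros(m + 1, dtype=complex)
    e[0] = 1.0
    for k in range(1, m + 1):
        e[k] = sum((-1) ** (i - 1) * e[k - i] * s[i] for i in range(1, k + 1)) / k
    coeffs = [(-1) ** k * e[k] for k in range(m + 1)]     # monic polynomial with the zeros as roots
    return np.roots(coeffs)

The approximations returned by zeros_in_circle play the role of the initial values that the text refines by Newton's method (28) or its bilateral analogue (29).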

The eigenvalues found can be refined by using them as initial approximations for Newton's method

E33

or for one of the bilateral analogues of Newton's method [15], for example,

E34

The argument principle (23) and formulas of argument-principle type (24), (25) have been applied repeatedly to various spectral problems (see, for example, [1, 7, 9] and the references therein); the peculiarity of the proposed algorithm is that the values of the function and its derivatives are computed on the basis of the LU-decomposition of the matrix.

4.2. Numerical procedure of calculating the first and second derivatives of the matrix determinant

Theorem. If the elements of a square matrix are differentiable functions of the parameter, then for any value of the parameter the derivatives of the determinant of the matrix satisfy the relations

E35
E36

where the corresponding quantities are, respectively, the elements of the upper triangular matrices in the decompositions

E37
E38
E39

and the remaining factor is a lower triangular matrix with unit diagonal elements.

Proof. It is known that a matrix whose leading principal minors of all orders are different from zero can be written, by means of the LU-decomposition, in the form (32), where the first factor is a lower triangular matrix with unit diagonal elements and the second is an upper triangular matrix. Then

Since the elements of the square matrix (and hence of its factors) are differentiable functions of the parameter, for any value of the parameter we obtain

E40
E41

To find the required values, we differentiate (32) with respect to the parameter and obtain (33), i.e.

where the corresponding quantities are the elements of the factor matrices. Now, differentiating the last equality with respect to the parameter once more, we obtain (34). Thus, from (33) and (34) we obtain (35) and (36), i.e., (30) and (31). The theorem is proved.
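For illustration only, the following Python sketch computes delta(lambda) = det F(lambda) together with its first and second derivatives from one LU factorization of F(lambda). Instead of the factor-level recursions of the theorem, it uses the equivalent trace identities delta' = delta * tr(F^{-1}F') and delta'' = delta * [(tr F^{-1}F')^2 - tr((F^{-1}F')^2) + tr(F^{-1}F'')] (Jacobi's formula); the callables F, dF, d2F are assumptions of the sketch and must be supplied by the user.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def det_and_derivatives(F, dF, d2F, lam):
    # F, dF, d2F return the matrix and its first and second derivatives at lam
    # (real or complex).  Returns (delta, delta', delta'').
    A = np.asarray(F(lam))
    lu, piv = lu_factor(A)                              # pivoted LU factorization of F(lam)
    sign = (-1) ** np.count_nonzero(piv != np.arange(A.shape[0]))
    delta = sign * np.prod(np.diag(lu))                 # det F = +/- product of U's diagonal

    B = lu_solve((lu, piv), np.asarray(dF(lam)))        # B = F^{-1} F'
    C = lu_solve((lu, piv), np.asarray(d2F(lam)))       # C = F^{-1} F''
    t1 = np.trace(B)
    d_delta = delta * t1                                # Jacobi's formula
    dd_delta = delta * (t1 ** 2 - np.trace(B @ B) + np.trace(C))
    return delta, d_delta, dd_delta

A Newton step for delta(lambda) = 0, as in (28), is then simply lam_next = lam - delta / d_delta.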

Therefore, to calculate these quantities it is necessary to compute

E42

at a fixed value of the parameter, from which

E43

The elements of the matrices in the decompositions (37) can be calculated using recurrence relations.

If some of the leading principal minors of the matrix are zero, then the decomposition (32) may not exist or, if it exists, may not be unique.

In practice, the best way to establish whether the LU-decomposition is possible is to try to compute it. It may happen that some leading principal minor of the matrix is zero. To avoid this, during the decomposition one may use a series of permutations of rows (and/or columns) of the matrix with pivoting. In this case the decomposition (37) can be written as

E44
E45

where the first factor is a permutation matrix whose determinant is determined by the number of permutations (for example, of rows) performed. In this case relations (38) take the form

E46

Since in relations (38) the value of the function and its derivative are calculated only on the boundary of the region, i.e., at the given points, the same numerical procedure of decomposition of the matrices (37) is used for their calculation. As a result, for calculating these values we obtain the relation

E47

where the quantities are the elements of the matrices in decomposition (37) at the fixed points.

Now, if some approximation to an eigenvalue is known, the correction used to construct the successive approximations of Newton's method (28) takes the form

E48

and the bilateral analogue of Newton's method (29) takes the form

E49

where the quantities involved are the elements of the matrices in the decompositions (37) at the corresponding fixed values of the parameter.

Thus, the algorithm for finding the eigenvalue curves of problem (19) consists of the following steps.

Algorithm 1.

Step 1. Determine the interval in which the eigenvalues of problem (21) are sought. This can be a single interval or a sequence of intervals covering it. For this purpose we enclose the interval in a circle (region), setting the center and radius of the circle, as well as the number of partition points of the boundary of the region, i.e., of the circle.

Step 2. Determine the value of the second parameter by assigning it the next value from its range.

Step 3. Using the decomposition (37) for complex values, determine the number of eigenvalues lying in the selected region by the corresponding formula, and find their approximate values by solving the system of equations (24) after calculating the right-hand sides by formula (42).

Step 4. Using the decomposition (37) for real values, refine all eigenvalues that fall in the region using Newton's method or the bilateral analogue of Newton's method (44). As initial approximations, take the approximate values obtained in Step 3.

Step 5. Go to Step 2.

Step 6. If necessary, correct the region by changing its center and/or radius and go to Step 2; otherwise go to Step 7.

Step 7. The end.

Application of a modification of the algorithm to linear two-parameter problems was considered in [16].
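The outer loop of Algorithm 1 can be sketched as follows, building on the illustrative routines zeros_in_circle and det_and_derivatives given above (all names, and the assumption that problem (19) has the form (A(c, q) - E)x = 0 with E the identity, are placeholders of this sketch, not the author's code): for each fixed value of the second parameter q, the determinant of A(c, q) - E is treated as a one-parameter function of c, its zeros inside a circle covering the interval of interest are located by the argument principle, and each zero is then refined by Newton's method.

import numpy as np

def eigenvalue_curves(A, dA_dc, q_values, c_center, c_radius, n_nodes=64, tol=1e-10):
    # A(c, q) and dA_dc(c, q) are user-supplied routines returning the matrix of
    # problem (19) and its derivative with respect to c.
    curves = []
    for q in q_values:
        n = A(c_center, q).shape[0]
        E = np.eye(n)
        zero = np.zeros((n, n))
        f = lambda c: np.linalg.det(A(c, q) - E)
        df = lambda c: det_and_derivatives(lambda x: A(x, q) - E,
                                           lambda x: dA_dc(x, q),
                                           lambda x: zero, c)[1]
        for c0 in zeros_in_circle(f, df, c_center, c_radius, n_nodes):
            c = c0.real                                 # the eigenvalues sought are real
            for _ in range(50):                         # Newton refinement, as in (28)
                d0, d1, _ = det_and_derivatives(lambda x: A(x, q) - E,
                                                lambda x: dA_dc(x, q),
                                                lambda x: zero, c)
                step = d0 / d1
                c -= step
                if abs(step) < tol:
                    break
            curves.append((q, c))
    return curves

The list of pairs (q, c) accumulated over all values of q traces out the eigenvalue curves, i.e., the branching lines of equation (7).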


5. Algorithm of finding the bifurcation points of eigenvalue curves

Note that if two eigenvalue curves intersect at some point, this point is called a bifurcation point (or branch point). A sufficient criterion for the existence of such points has been known for a long time (see, for example, [8]) and states that a point is a bifurcation point of the equation

E50

if the corresponding conditions are satisfied and the second-order partial derivatives are different from zero. However, this criterion has rarely been used in practical calculations, because it requires the calculation of derivatives of the determinant of a matrix.

Using the algorithm for computing the derivatives of the determinant of a matrix proposed in Section 4.2, this criterion can be used effectively to calculate the bifurcation points of equation (45).

Thus, the problem consists in determining the parameter values that are solutions of the two nonlinear algebraic equations

E51

Note that, given some approximation to the solution of (46), the iterative process of Newton's method can be applied to solve it, as in [12]:

E52

where

E53

Further, we assume that the determinant of the matrix of second derivatives (48), whose elements are calculated at the current point, is different from zero.

Thus, at each step of the iterative process it is necessary to compute the function and its first and second partial derivatives only at fixed values of the parameters. This can be realized in a numerical procedure using the LU-decomposition of the matrix, namely via the diagonal elements of the matrices in the decompositions that are obtained from the decomposition

Here, , , , ,.

From this it follows that, in order to calculate these quantities, it is necessary, for fixed parameter values, to compute

E54

from which

E55

The elements of the matrices in decomposition (49) can be calculated directly using the recurrence relations

which are a generalization of the recurrence relations of Section 4.2.

Thus, the algorithm for finding the bifurcation points of the eigenvalue curves of the two-parameter spectral problem consists of the following steps.

Algorithm 2.

Step 1. Set the accuracy of the calculations: with respect to the parameters and with respect to the function.

Step 2. Initialize the initial approximation to the bifurcation point.

Step 3. For m = 1, 2, … until the required accuracy is achieved, do

Step 4. Calculate the matrix and its first and second derivatives with respect to the parameters at the current approximation.

Step 5. Using the decomposition (49) and relations (50), calculate the required derivatives and construct the matrix of second derivatives (48).

Step 6. Compute the next approximations to the parameters by formula (47).

Step 7. end for m

Step 8. If the required accuracy with respect to the function is achieved, then go to Step 10.

Step 9. Otherwise initialize a different initial approximation to the bifurcation point and go to Step 3.

Step 10. The end
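A minimal sketch of Algorithm 2 is given below, under the following assumptions of this presentation: system (46) is read as the two equations dPhi/dc = 0, dPhi/dq = 0 for the scalar function Phi(c, q) = det(A(c, q) - E), matrix (48) is the Hessian of Phi, the acceptance test of Step 8 is |Phi| <= eps_fun, and the second derivatives are approximated by central differences instead of the LU-based relations (49)-(50). All names are placeholders, not the author's code.

import numpy as np

def find_bifurcation_point(Phi, c0, q0, eps_par=1e-8, eps_fun=1e-8, h=1e-5, max_iter=50):
    # Newton iteration (47) for grad Phi = 0; Phi(c, q) is a scalar callable.
    x = np.array([c0, q0], dtype=float)

    def grad(c, q):
        return np.array([(Phi(c + h, q) - Phi(c - h, q)) / (2 * h),
                         (Phi(c, q + h) - Phi(c, q - h)) / (2 * h)])

    def hessian(c, q):
        pcc = (Phi(c + h, q) - 2 * Phi(c, q) + Phi(c - h, q)) / h ** 2
        pqq = (Phi(c, q + h) - 2 * Phi(c, q) + Phi(c, q - h)) / h ** 2
        pcq = (Phi(c + h, q + h) - Phi(c + h, q - h)
               - Phi(c - h, q + h) + Phi(c - h, q - h)) / (4 * h ** 2)
        return np.array([[pcc, pcq], [pcq, pqq]])       # matrix of second derivatives (48)

    for _ in range(max_iter):
        step = np.linalg.solve(hessian(*x), grad(*x))   # Newton step (47)
        x = x - step
        if np.linalg.norm(step) < eps_par:
            break
    if abs(Phi(*x)) > eps_fun:
        # stationary point of Phi that does not lie on the curve Phi = 0:
        # restart from a different initial guess, as in Step 9
        raise RuntimeError("restart Algorithm 2 from a different initial approximation")
    return x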


6. Analysis of numerical results

In a series of numerical experiments on the synthesis of antenna arrays, Algorithms 1 and 2 were used to find the eigenvalue curves of the two-parameter eigenvalue problem, which are the branching lines of solutions of the nonlinear synthesis equation (7), and their bifurcation points. Numerical calculations were carried out both for problems in which the variables in the function describing the given directivity pattern of the array are separated and for problems in which they are not.

Fig. 10 - Fig. 13 show the eigenvalue curves for four problems, in which the given directivity patterns are defined by the corresponding formulas.

In the numerical experiments, the interval of variation of the parameter was divided into a sequence of subintervals, each of which was enclosed in a circle with the corresponding center and radius. The number of partition points on the boundary of each circle was kept constant. In Step 2 of the algorithm, the values of the second parameter were selected from its interval with a constant step as well.

Table 1 presents the bifurcation points for four directivity patterns in which the variables are separated and three directivity patterns in which the variables are not separated. For the first four patterns, a bifurcation point obtained by other methods, applicable when the variables in the function are separated, is also shown.

The table shows that when the variables in the functions are separated, the results obtained by different approaches (reduction of the one-parameter problems to transcendental equations and solving them [2], descent methods [18], and also the bilateral methods proposed in [11, 13, 15]) coincide.

Note that the bifurcation points (or at least rough estimates of them) may be obtained graphically from Fig. 10 - Fig. 13 and then refined by Algorithm 2.

Figure 10.

Eigenvalue curves for

Figure 11.

Eigenvalue curves for

Figure 12.

Eigenvalue curves for

Figure 13.

Eigenvalue curves for

Table 1.

Bifurcation points for different types of the given directivity pattern


7. Concluding remarks

Numerical experiments on the calculation of eigenvalues and eigenvectors, carried out for certain specified types of directivity pattern with the proposed algorithms, and their comparison with existing results obtained by other methods, show their efficiency (in terms of bilateral approximations and convergence rate). The developed and implemented algorithms for the numerical finding of the branching lines of nonlinear integral equations whose kernels depend nonlinearly on two spectral parameters, and of their bifurcation points, yielded new results, namely:

We have found all real solutions (the eigenvalue curves) of problem (19) that fall within the interval of the modified parameters of interest.

For the problems in which the function admits separation of variables, another solution of problem (19) has been found (examples are shown in Fig. 10 and Fig. 11, respectively), which corresponds to synthesized directivity patterns in which the variables are not separated.

For the problems in which the function does not admit separation of variables, we have found solutions of problem (19) (examples are shown in Fig. 12 and Fig. 13, respectively) of the kind previously supposed to exist only for patterns in which the variables are separated.

We have calculated the bifurcation points of the eigenvalue curves for problems in which the function does not admit separation of variables. For these patterns there are no previously known results; these results have been obtained for the first time.

Since the spectral parameters are geometric and electromagnetic characteristics of radiating systems, the solution of this problem makes it possible to obtain the necessary information at the design stage and to choose parameters that are optimal with respect to the size and electrodynamic characteristics of the radiating system.

Note that such a two-dimensional problem was also studied in [10, 17, 19], but the numerical results obtained there for some directivity patterns are not reliable.

In conclusion, we note that the proposed algorithm for calculating the derivatives of a matrix determinant can also be used in the approach based on the implicit function theorem. In that approach it is necessary to solve the Cauchy problem

E56

for which the right-hand side of equation (51) can be calculated by the algorithm for computing the derivatives of the matrix determinant. Moreover, the algorithm given in this paper makes it possible to determine numerically the number of eigenvalues, and therefore of eigenvalue curves, lying in the given range of the spectral parameters, and to calculate the initial value of the Cauchy problem for each curve.
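The Cauchy problem (51) itself is not reproduced above, but under the implicit function theorem one natural form of it is dc/dq = -(dPhi/dq)/(dPhi/dc) along a curve Phi(c, q) = 0. The hedged sketch below (an assumption of this presentation, not the author's code) traces one eigenvalue curve in this way with a standard ODE integrator, using finite differences where the determinant derivatives of Section 4.2 would be used in the author's approach.

import numpy as np
from scipy.integrate import solve_ivp

def trace_curve(Phi, c_start, q_start, q_end, h=1e-5):
    # Trace c(q) with Phi(c(q), q) = 0, starting from a point (c_start, q_start)
    # found, e.g., by the argument-principle algorithm of Section 4.
    def rhs(q, c):
        dPhi_dc = (Phi(c[0] + h, q) - Phi(c[0] - h, q)) / (2 * h)
        dPhi_dq = (Phi(c[0], q + h) - Phi(c[0], q - h)) / (2 * h)
        return [-dPhi_dq / dPhi_dc]                     # implicit function theorem

    sol = solve_ivp(rhs, (q_start, q_end), [c_start], rtol=1e-8, atol=1e-10)
    return sol.t, sol.y[0]                              # q values and the corresponding c(q)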

References

  1. Abramov A. A., Ul'yanova V. I., Yukhno L. F. (1998). The Argument Principle in a Spectral Problem for Systems of Ordinary Differential Equations with Singularities, Comput. Math. Math. Phys., 38.
  2. Andriychuk M. I., Voitovich N. N., Savenko P. O., Tkachuk V. P. (1993). The Antenna Synthesis According to Prescribed Amplitude Radiation Pattern: Numerical Methods and Algorithms, Naukova Dumka, Kiev. [in Russian]
  3. Awrejcewicz J. (1989). Bifurcation and Chaos in Simple Dynamical Systems, World Scientific, Singapore.
  4. Awrejcewicz J. (1991). Bifurcation and Chaos in Coupled Oscillators, World Scientific, Singapore.
  5. Awrejcewicz J. (1990). Numerical investigations of the constant and periodic motions of the human vocal cords including stability and bifurcation phenomena, Dynamics and Stability of Systems Journal, 5.
  6. Cliffe K. A., Spence A., Tavener S. J. (2000). The numerical analysis of bifurcation problems with application to fluid mechanics, Acta Numerica, 9.
  7. Delves L. M., Lyness J. N. (1967). A numerical method for locating the zeros of an analytic function, Math. Comput., 21.
  8. Goursat E. (1930). Cours d'analyse mathematique. Tome I, Gauthier-Villars, Paris.
  9. Kravanja P., Sakurai T., van Barel M. (1998). On locating clusters of zeros of analytic functions, BIT, 38.
  10. Kravchenko V. F., Protsakh L. P., Savenko P. O., Tkach M. D. (2010). The Mathematical Features of the Synthesis of Plane Equidistant Antenna Arrays According to the Prescribed Amplitude Directivity Pattern, Antennas, 333445. [in Russian]
  11. Podlevskyi B. M. (2007). On certain two-sided analogues of Newton's method for solving nonlinear eigenvalue problems, Comput. Math. Math. Phys., 47.
  12. Podlevskyi B. M. (2008). Newton's Method as a Tool for Finding the Eigenvalues of Certain Two-Parameter (Multiparameter) Spectral Problems, Comput. Math. Math. Phys., 48.
  13. Podlevskyi B. M. (2009). Bilateral analog of the Newton method for determination of eigenvalues of nonlinear spectral problems, J. Mathematical Sciences, 160.
  14. Podlevskyi B. M. (2010). On one approach to finding the branching lines and bifurcation points of solutions of nonlinear integral equations whose kernels depend analytically on two spectral parameters, J. Mathematical Sciences, 171.
  15. Podlevskyi B. M. (2011). The Numerical Methods and Algorithms of Solution of the Generalized Spectral Problems. Thesis for a Doctor's Degree in Physics and Mathematics, Institute of Mathematics NAS of Ukraine, Kyiv. [in Ukrainian]
  16. Podlevskyi B. M., Khlobystov V. V. (2010). On one approach to finding eigenvalue curves of linear two-parameter spectral problems, J. Mathematical Sciences, 167.
  17. Protsakh L. P., Savenko P. O., Tkach M. D. (2006). Method of implicit function for solving eigenvalue problem with nonlinear two-dimensional spectral parameter, Math. Methods and Physicomechanical Fields, 49. [in Ukrainian]
  18. Savenko P. O. (2002). Nonlinear Problems of Radiating Systems Synthesis (Theory and Methods of Solution), IAPMM NASU, Lviv. [in Ukrainian]
  19. Savenko P., Tkach M. (2010). Numerical Approximation of Real Finite Nonnegative Function by Modulus of Discrete Fourier Transform, Applied Math., 1.
  20. Vainberg M. M., Trenogin V. A. (1969). Branching Theory of the Solution of Nonlinear Equations, Nauka, Moscow. [in Russian]
