Open access peer-reviewed chapter

Investigation of Fuzzy Inductive Modeling Method in Forecasting Problems

Written By

Yu. Zaychenko and Helen Zaychenko

Submitted: 05 November 2018 Reviewed: 15 April 2019 Published: 27 May 2019

DOI: 10.5772/intechopen.86348

From the Edited Volume

Introduction to Data Science and Machine Learning

Edited by Keshav Sud, Pakize Erdogmus and Seifedine Kadry


Abstract

This paper is devoted to the investigation and application of the fuzzy inductive modeling method based on the group method of data handling (GMDH) in forecasting problems in the financial sphere. The GMDH method belongs to self-organizing methods and allows discovering internal hidden laws in the corresponding object domain. The advantage of GMDH algorithms is the possibility of constructing optimal models. As a generalization of GMDH to the case of uncertainty, a new method, fuzzy GMDH, is described which enables constructing fuzzy models almost automatically. The algorithms of fuzzy GMDH for different membership functions are considered. Extensions of fuzzy GMDH for different partial descriptions, namely orthogonal Chebyshev polynomials and trigonometric Fourier polynomials, are considered. The problem of adaptation of fuzzy models obtained by FGMDH is considered, and the corresponding adaptation algorithm is described. Experimental investigations of the suggested FGMDH in the problem of forecasting macroeconomic indicators of Ukraine are carried out, and a comparison with classic GMDH and the backpropagation neural network is performed.

Keywords

  • fuzzy GMDH
  • orthogonal partial descriptions
  • model adaptation
  • forecasting

1. Introduction

One of the most important problems in the sphere of economy and finance is the problem of forecasting economic and financial processes. The distinguishing properties of these processes are the following:

  1. The form of the functional dependence is unknown, and only the model class is determined.

  2. Short data samples.

  3. The time series $x_i(t)$ is, in the general case, nonstationary.

In this case the application of traditional methods of statistical analysis (e.g., regression analysis) is impossible, and it is necessary to apply methods based on computational intelligence (CI). To this class belongs the group method of data handling (GMDH) developed by Ivakhnenko [1, 2] and extended by his colleagues. The GMDH method belongs to self-organizing methods and allows discovering hidden laws in the corresponding object domain. The advantage of GMDH algorithms is the capability of constructing optimal models.

But classic GMDH has the following shortcomings:

  1. GMDH utilizes the least squares method (LSM) for finding the model coefficients, but the matrix of the linear equation system may be close to degenerate, and the corresponding solution may be unstable. Therefore, special methods for its regularization should be used.

  2. GMDH doesn’t work in the case of qualitative or fuzzy input data.

Therefore, in the last 10 years a new variant of GMDH, fuzzy GMDH, was developed and extended; it can work with fuzzy input data and is free of the drawbacks of classical GMDH [3, 4, 5].

Fuzzy GMDH is based on the same principles as classical GMDH but constructs fuzzy models.

The main goals of this paper are to investigate different modifications of FGMDH, analyze their properties, and investigate their efficiency in forecasting problems as compared with classical GMDH.


2. Problem formulation

A set of initial data is given, including input variables $X_1, X_2, \dots, X_N$ and output variables $Y_1, Y_2, \dots, Y_N$, where $X = (x_1, x_2, \dots, x_n)$ is an $n$-dimensional vector, $N$ is the number of observations, and the input data may be incomplete or fuzzy, in particular given in interval form. The task is to construct an adequate fuzzy forecasting model $Y = F(x_1, x_2, \dots, x_n)$; besides, the obtained model should have minimal complexity.

2.1 Principal ideas of GMDH: fuzzy model construction

As is well known, the drawbacks of GMDH are the following [3, 4]:

  • GMDH utilizes LSM for finding the model coefficients, but the matrix of the linear equation system may be close to degenerate, and the corresponding solution may be unstable and very volatile. Therefore, special regularization methods should be applied.

  • After application of GMDH, point-wise estimates are obtained, but in many cases it is desirable to find interval estimates of the coefficients.

  • GMDH doesn’t work in the case of incomplete, qualitative, or fuzzy input data.

Therefore, in the last 10 years a new variant of GMDH, fuzzy GMDH, was developed and improved; it can work with fuzzy and qualitative input data and is free of the drawbacks of classical GMDH [3].

As it is well known, GMDH method is based on the following principles [1, 2, 3]:

  1. The principle of multiplicity of models

  2. The principle of the external complement, which means that the whole sample should be divided into two parts: a training subsample and a test subsample

  3. The principle of self-organization

  4. The principle of freedom of choice

Fuzzy GMDH is also based on these principles but constructs fuzzy models. Let us consider its main ideas.

In works [3, 4, 5], the linear interval regression model was considered:

$$Y = A_0 Z_0 + A_1 Z_1 + \dots + A_n Z_n$$ (E1)

where $A_i$ is a fuzzy number of triangular form described by a pair of parameters $A_i = (\alpha_i, c_i)$, where $\alpha_i$ is the interval center, $c_i$ is its width, $c_i \ge 0$, and $Z_i$ are the input variables.

Then Y is a fuzzy number, parameters of which are determined as follows:

The interval center

$$\alpha_y = \sum_i \alpha_i z_i = \alpha^T z$$ (E2)

The interval width

$$c_y = \sum_i c_i z_i = c^T z$$ (E3)
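For illustration, consider a small hypothetical numeric example (not taken from the original study) of how the output interval follows from (2) and (3). With fuzzy coefficients $A_0 = (1.0, 0.2)$, $A_1 = (0.5, 0.1)$, $A_2 = (2.0, 0.3)$ and input vector $z = (1, 4, 2)^T$:

$$\alpha_y = 1.0 \cdot 1 + 0.5 \cdot 4 + 2.0 \cdot 2 = 7.0, \qquad c_y = 0.2 \cdot 1 + 0.1 \cdot 4 + 0.3 \cdot 2 = 1.2,$$

so the estimated interval for $Y$ is $[\alpha_y - c_y, \alpha_y + c_y] = [5.8, 8.2]$.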

For example, for the partial description (PD) of the kind

$$f(x_i, x_j) = A_0 + A_1 x_i + A_2 x_j + A_3 x_i x_j + A_4 x_i^2 + A_5 x_j^2$$ (E4)

it’s necessary to substitute in the general model (1)

$z_0 = 1,\ z_1 = x_i,\ z_2 = x_j,\ z_3 = x_i x_j,\ z_4 = x_i^2,\ z_5 = x_j^2$.

Let the training sample be $z_1, z_2, \dots, z_M$, $y_1, y_2, \dots, y_M$. Then for the model (1) to be adequate, it is necessary to find such parameters $(\alpha_i, c_i)$, $i = \overline{1, n}$, which satisfy the following inequalities:

$$\alpha^T z_k - c^T z_k \le y_k, \qquad \alpha^T z_k + c^T z_k \ge y_k, \qquad k = \overline{1, M}$$ (E5)

Let’s formulate the basic requirements for the linear interval model of a kind (4).

It is necessary to find such values of the parameters $(\alpha_i, c_i)$ of the fuzzy coefficients for which:

  1. The real values of the observed outputs $y_k$ should fall within the estimated interval for $Y_k$.

  2. The total width of the estimated interval for all sample points should be minimal.

These requirements lead to the following linear programming (LP) problem [3, 4]:

$$\min \left( C_0 M + C_1 \sum_{k=1}^{M} x_{ki} + C_2 \sum_{k=1}^{M} x_{kj} + C_3 \sum_{k=1}^{M} x_{ki} x_{kj} + C_4 \sum_{k=1}^{M} x_{ki}^2 + C_5 \sum_{k=1}^{M} x_{kj}^2 \right)$$ (E6)

under constraints

$$a_0 + a_1 x_{ki} + a_2 x_{kj} + a_3 x_{ki} x_{kj} + a_4 x_{ki}^2 + a_5 x_{kj}^2 - \left( C_0 + C_1 x_{ki} + C_2 x_{kj} + C_3 x_{ki} x_{kj} + C_4 x_{ki}^2 + C_5 x_{kj}^2 \right) \le y_k$$ (E7)
$$a_0 + a_1 x_{ki} + a_2 x_{kj} + a_3 x_{ki} x_{kj} + a_4 x_{ki}^2 + a_5 x_{kj}^2 + \left( C_0 + C_1 x_{ki} + C_2 x_{kj} + C_3 x_{ki} x_{kj} + C_4 x_{ki}^2 + C_5 x_{kj}^2 \right) \ge y_k, \qquad C_p \ge 0, \; p = \overline{0, 5}, \; k = \overline{1, M}$$ (E8)

where $k$ is the index of a sample point.

As one can easily see, problem (6)–(8) is an LP problem. However, the inconvenience of the model (6)–(8) for the application of standard LP methods is that there are no non-negativity constraints on the variables $a_i$. Therefore, for its solution it is reasonable to pass to the dual LP problem by introducing dual variables $\delta_k$ and $\delta_{k+M}$, $k = \overline{1, M}$. After finding the optimal solution of the dual problem by the simplex method, the optimal solutions $a_i$, $c_i$ of the initial direct problem are also found.
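As a minimal illustration of the primal problem (6)–(8), the following Python sketch (not part of the original chapter; function and variable names are illustrative, and the data are synthetic) fits one fuzzy partial description with scipy.optimize.linprog. Modern LP solvers handle free variables directly, so the primal form can be solved as is; the dual formulation described above serves the same purpose.

```python
import numpy as np
from scipy.optimize import linprog

def fit_fuzzy_pd(x_i, x_j, y):
    """Fit one fuzzy partial description (E4) by solving the LP (6)-(8).

    Decision vector: [a_0..a_5, C_0..C_5]; a_p are free, C_p >= 0.
    Returns (a, C): the centers and widths of the fuzzy coefficients.
    """
    M = len(y)
    # Regressor matrix Z: columns 1, x_i, x_j, x_i*x_j, x_i^2, x_j^2
    Z = np.column_stack([np.ones(M), x_i, x_j, x_i * x_j, x_i**2, x_j**2])

    # Objective (E6): minimize the total interval width sum_k C^T z_k
    cost = np.concatenate([np.zeros(6), Z.sum(axis=0)])

    # Constraints (E7): a^T z_k - C^T z_k <= y_k  (interval lower bound below y_k)
    A1 = np.hstack([Z, -Z])
    # Constraints (E8): -(a^T z_k + C^T z_k) <= -y_k  (interval upper bound above y_k)
    A2 = np.hstack([-Z, -Z])
    A_ub = np.vstack([A1, A2])
    b_ub = np.concatenate([y, -y])

    bounds = [(None, None)] * 6 + [(0, None)] * 6  # a free, C >= 0
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:6], res.x[6:]

# Example usage with synthetic data
rng = np.random.default_rng(0)
xi, xj = rng.uniform(0, 1, 30), rng.uniform(0, 1, 30)
y = 1.0 + 2.0 * xi - 0.5 * xj + 0.1 * rng.standard_normal(30)
a, C = fit_fuzzy_pd(xi, xj, y)
print("centers:", np.round(a, 3), "widths:", np.round(C, 3))
```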


3. Description of fuzzy GMDH algorithm

Let us present a brief description of the FGMDH algorithm [3, 4].

  1. Choose the general model type by which the sought dependence will be described.

  2. Choose the external criterion of optimality (criterion of regularity or non-biasedness).

  3. Choose the type of partial descriptions (e.g., linear or quadratic one).

  4. Divide the sample into a training subsample $N_{train}$ and a test subsample $N_{test}$.

  5. Set the model counter $k$ and the iteration counter $r$ to zero.

  6. Generate a new partial model $f_k$ of the form (4) using the training sample. Solve the LP problem (6)–(8), and find the values of the parameters $a_i$, $c_i$.

  7. Calculate the value of the external criterion ($N_{ub\,k}(r)$ or $\delta_k^2(r)$) on the test sample.

  8. Set $k = k + 1$. If $k > C_N^2$ for $r = 1$ or $k > C_F^2$ for $r > 1$, then set $k = 1$, $r = r + 1$, and go to step 9; otherwise go to step 6.

  9. Calculate the best value of the criterion among the models of the $r$th iteration, $\delta^2(r)$ or $N_{ub}(r)$. If $r = 1$, then go to step 6; otherwise go to step 10.

  10. If $\left| N_{ub}(r) - N_{ub}(r-1) \right| \le \varepsilon$ or $\delta^2(r) \ge \delta^2(r-1)$, then go to step 11; otherwise select the $F$ best models, set $r = r + 1$ and $k = 1$, go to step 6, and execute the $(r+1)$th iteration.

  11. Select the best model of the previous iteration using the external criterion; moving back along its connections and successively passing through all the previous rows, find the analytical form of the constructed model. A schematic implementation of this layer-wise procedure is sketched below.
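The following Python sketch is an illustrative simplification of steps 4–10 (it assumes the fit_fuzzy_pd helper from the previous sketch is available and uses the interval center $a^T z$ as the point forecast). The reconstruction of the analytical model form (step 11) is omitted.

```python
import numpy as np
from itertools import combinations

def regularity(y_true, y_pred):
    """External regularity criterion: normalized squared error on the test set."""
    return np.sum((y_true - y_pred) ** 2) / np.sum(y_true ** 2)

def fgmdh(X_train, y_train, X_test, y_test, F=6, eps=1e-4, max_rows=10):
    """Layer-wise FGMDH generation-and-selection skeleton (steps 4-10)."""
    def pd_features(x_i, x_j):
        return np.column_stack([np.ones(len(x_i)), x_i, x_j,
                                x_i * x_j, x_i**2, x_j**2])

    Ztr, Zte = X_train, X_test
    best_prev = np.inf
    for r in range(max_rows):
        candidates = []
        for i, j in combinations(range(Ztr.shape[1]), 2):   # all variable pairs
            a, C = fit_fuzzy_pd(Ztr[:, i], Ztr[:, j], y_train)
            pred_te = pd_features(Zte[:, i], Zte[:, j]) @ a
            pred_tr = pd_features(Ztr[:, i], Ztr[:, j]) @ a
            candidates.append((regularity(y_test, pred_te), pred_tr, pred_te))
        candidates.sort(key=lambda t: t[0])
        best_now = candidates[0][0]
        if best_now >= best_prev - eps:          # criterion stopped improving
            return best_prev
        best_prev = best_now
        top = candidates[:F]                     # freedom of choice: keep F best
        Ztr = np.column_stack([c[1] for c in top])   # outputs feed the next row
        Zte = np.column_stack([c[2] for c in top])
    return best_prev
```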


4. Analysis of different membership functions

In the first papers devoted to fuzzy GMDH [3], triangular membership functions (MFs) were considered. But as fuzzy numbers may also have other kinds of MF, it is important to consider other classes of MF in modeling problems solved by FGMDH. In paper [4] fuzzy models with Gaussian and bell-shaped MFs were investigated.

Consider a fuzzy set with Gaussian MF:

$$\mu_B(x) = \exp \left( -\frac{1}{2} \frac{(x - a)^2}{c^2} \right)$$ (E9)

Let the linear interval model for partial description of FGMDH take the form (4). Then the problem is formulated as follows:

Find such fuzzy numbers $B_i$ with parameters $(a_i, c_i)$ that:

  • The observation $y_k$ belongs to the estimated interval for $Y_k$ with membership degree not less than $\alpha$, $0 < \alpha < 1$.

  • The total width of the estimated interval of level $\alpha$ is minimal.

In [4, 6] it was shown that the problem of finding the optimal fuzzy model is finally transformed into the following LP problem:

$$\min \left( C_0 M + C_1 \sum_{k=1}^{M} x_{ki} + C_2 \sum_{k=1}^{M} x_{kj} + C_3 \sum_{k=1}^{M} x_{ki} x_{kj} + C_4 \sum_{k=1}^{M} x_{ki}^2 + C_5 \sum_{k=1}^{M} x_{kj}^2 \right)$$ (E10)

under constraints

$$a_0 + a_1 x_{ki} + \dots + a_5 x_{kj}^2 + \left( C_0 + C_1 x_{ki} + \dots + C_5 x_{kj}^2 \right) \sqrt{-2 \ln \alpha} \ge y_k,$$
$$a_0 + a_1 x_{ki} + \dots + a_5 x_{kj}^2 - \left( C_0 + C_1 x_{ki} + \dots + C_5 x_{kj}^2 \right) \sqrt{-2 \ln \alpha} \le y_k, \qquad k = \overline{1, M}$$ (E11)
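The factor $\sqrt{-2 \ln \alpha}$ in (11) comes from the $\alpha$-level set of the Gaussian MF (9); a short derivation step (added here for clarity) is:

$$\mu_B(x) \ge \alpha \;\Longleftrightarrow\; \exp \left( -\frac{1}{2} \frac{(x - a)^2}{c^2} \right) \ge \alpha \;\Longleftrightarrow\; |x - a| \le c \sqrt{-2 \ln \alpha},$$

so the estimated interval of level $\alpha$ for $Y_k$ has center $a^T z_k$ and half-width $\left( C^T z_k \right) \sqrt{-2 \ln \alpha}$, which gives exactly the two inequalities of (11).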

To solve this problem, as in the case of the triangular MF, it is reasonable to pass to the dual LP problem of the form

$$\max \left( \sum_{k=1}^{M} y_k \delta_{k+M} - \sum_{k=1}^{M} y_k \delta_k \right)$$ (E12)

with constraints of equalities and inequalities

$$\sum_{k=1}^{M} \delta_{k+M} - \sum_{k=1}^{M} \delta_k = 0, \qquad \sum_{k=1}^{M} x_{ki} \delta_{k+M} - \sum_{k=1}^{M} x_{ki} \delta_k = 0, \qquad \dots, \qquad \sum_{k=1}^{M} x_{kj}^2 \delta_{k+M} - \sum_{k=1}^{M} x_{kj}^2 \delta_k = 0$$ (E13)
$$\sum_{k=1}^{M} \delta_k + \sum_{k=1}^{M} \delta_{k+M} \le M \sqrt{-2 \ln \alpha}, \qquad \sum_{k=1}^{M} x_{ki} \delta_{k+M} + \sum_{k=1}^{M} x_{ki} \delta_k \le \sqrt{-2 \ln \alpha} \sum_{k=1}^{M} x_{ki}, \qquad \dots, \qquad \sum_{k=1}^{M} x_{kj}^2 \delta_{k+M} + \sum_{k=1}^{M} x_{kj}^2 \delta_k \le \sqrt{-2 \ln \alpha} \sum_{k=1}^{M} x_{kj}^2$$ (E14)
$$\delta_k \ge 0, \qquad k = \overline{1, 2M}$$ (E15)

Analyzing the dual LP problem (12)–(15), it is easy to notice that it is always solvable, since the trivial feasible solution $\delta_k = 0$, $k = \overline{1, 2M}$, always exists. Therefore, the initial problem (10)–(11) also always has a solution for any data.

Thus, fuzzy GMDH allows constructing fuzzy models and has the following advantages:

  1. The problem of optimal model determination is transformed to the problem of linear programming, which is always solvable.

  2. As a result of the method's operation, an interval regression model is built.


5. Fuzzy GMDH with different partial descriptions: orthogonal polynomials

As is well known from the general GMDH theory, candidate models are generated on the basis of so-called partial descriptions, that is, elementary models of two variables. Usually, linear or quadratic polynomials are used as partial descriptions. An alternative to this class of models is the application of orthogonal polynomials. The choice of orthogonal polynomials as partial descriptions is determined by the following advantages:

  • Due to the orthogonality property, the determination of the polynomial coefficients is faster than for non-orthogonal polynomials.

  • The coefficients of the approximating polynomial do not depend on the actual degree of the underlying polynomial model. Hence, if the true polynomial degree is not known a priori, one may compute polynomials of various degrees, and the coefficients obtained for lower-degree polynomials remain unchanged when passing to higher degrees. This property is most important when investigating the true degree of the approximating polynomial.

5.1 Chebyshev’s orthogonal polynomials

Chebyshev's orthogonal polynomials in the general case have the following form [5]:

$$F_\nu(\xi) = T_\nu(\xi) = \cos(\nu \arccos \xi), \qquad -1 \le \xi \le 1$$ (E16)

These polynomials have the following orthogonality property:

$$\int_{-1}^{1} \frac{T_\mu(\xi) T_\nu(\xi)}{\sqrt{1 - \xi^2}} \, d\xi = \begin{cases} 0, & \mu \ne \nu; \\ \pi/2, & \mu = \nu \ne 0; \\ \pi, & \mu = \nu = 0. \end{cases}$$ (E17)

where $1/\sqrt{1 - \xi^2}$ is the weighting function $\omega(\xi)$ in Eq. (17).

The approximating Chebyshev orthogonal polynomial $\bar{y}$ is obtained on the basis of minimization of the functional $S$:

$$S = \int_{-1}^{1} \omega(\xi) \left[ y(\xi) - \sum_{i=0}^{m} b_i T_i(\xi) \right]^2 d\xi$$ (E18)

whence from (18) we obtain the following expressions for the coefficients:

$$b_k = \begin{cases} \dfrac{1}{\pi} \displaystyle\int_{-1}^{1} \dfrac{y(\xi)}{\sqrt{1 - \xi^2}} \, d\xi, & k = 0; \\[2ex] \dfrac{2}{\pi} \displaystyle\int_{-1}^{1} \dfrac{y(\xi) T_k(\xi)}{\sqrt{1 - \xi^2}} \, d\xi, & k \ne 0. \end{cases}$$ (E19)

Hence, the approximating equation takes the form

$$\bar{y}(\xi) = \sum_{k=0}^{m} b_k T_k(\xi)$$ (E20)

As may be readily seen from the presented expressions, the coefficient $b_k$ in Eq. (19) does not depend on the choice of the degree $m$. Thus, changing $m$ does not require recalculation of $b_j$, $j \le m$, whereas such recalculation is necessary for non-orthogonal approximation.

The best degree $m$ of approximation may be found on the basis of the hypothesis that the observed results $y_i$, $i = 1, 2, \dots, r$, are independently Gaussian distributed about some polynomial function $\bar{y}$ of a definite degree, for example, $m + \mu$, where

$$\bar{y}_{m+\mu}(x_i) = \sum_{j=0}^{m+\mu} b_j x_i^j$$ (E21)

and the variance $\sigma^2$ of the distribution of $y - \bar{y}$ does not depend on $\mu$.

It is clear that for small $m$ ($m = 0, 1, 2, \dots$), $\sigma_m^2$ decreases as $m$ grows.

In accordance with the previously formulated hypothesis, the variance does not depend on $\mu$; therefore, the best degree is the minimal $m$ for which $\sigma_m \approx \sigma_{m+1}$.

For determining $m$, it is necessary to calculate the approximating polynomials of various degrees. As the coefficients $b_j$ in Eq. (20) do not depend on $\mu$, the determination of the best polynomial degree is accelerated.
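A minimal numerical sketch of formula (19) is given below (not from the chapter; function names are illustrative). The integrals with the Chebyshev weight are evaluated by Gauss-Chebyshev quadrature, and, since each $b_k$ is computed independently, the lower-degree coefficients do not change when the degree grows.

```python
import numpy as np

def chebyshev_coeffs(y_func, m, n_nodes=200):
    """Chebyshev expansion coefficients b_0..b_m of y(xi) on [-1, 1] (Eq. 19).

    With nodes xi_l = cos(theta_l), the weight 1/sqrt(1 - xi^2) is absorbed
    and each integral becomes a plain average over theta (Gauss-Chebyshev rule).
    """
    theta = (np.arange(n_nodes) + 0.5) * np.pi / n_nodes
    xi = np.cos(theta)
    y = y_func(xi)
    b = np.empty(m + 1)
    b[0] = y.mean()                                   # (1/pi) * integral, k = 0
    for k in range(1, m + 1):
        b[k] = 2.0 * np.mean(y * np.cos(k * theta))   # (2/pi) * integral, k != 0
    return b

# The lower-degree coefficients do not change when the degree grows:
f = lambda xi: np.exp(xi) + 0.3 * xi**3
print(np.round(chebyshev_coeffs(f, 3), 6))
print(np.round(chebyshev_coeffs(f, 6)[:4], 6))   # first four coefficients coincide
```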

Let $Y$ be the forecasted variable and $x_1, x_2, \dots, x_n$ the input variables. Let us search for the relation between them in the following form:

$$Y = A_1 f_1(x_1) + A_2 f_2(x_2) + \dots + A_n f_n(x_n)$$ (E22)

where A i is a fuzzy number of triangular type given as A i = α i c i ,

and the functions $f_i$ are determined as follows [5, 6]:

$$f_i(x_i) = \sum_{j=0}^{m_i} b_{ij} T_j(x_i)$$ (E23)

The degree $m_i$ of the function $f_i$ is determined using the hypothesis defined above. Thus, if we denote $z_i = f_i(x_i)$, we obtain the linear interval model in its classical form.

5.2 Investigation of trigonometric polynomials as partial descriptions

Let a function $f(x)$ be periodic with period $2\pi$, defined on the interval $[-\pi, \pi]$, and let its derivative $f'(x)$ also be defined on $[-\pi, \pi]$. Then the following equality holds:

$$S(x) = f(x), \qquad x \in [-\pi, \pi]$$ (E24)

where

$$S(x) = \frac{a_0}{2} + \sum_{j=1}^{\infty} \left( a_j \cos jx + b_j \sin jx \right)$$ (E25)

The coefficients $a_j$ and $b_j$ are calculated by the Euler formulas:

$$a_j = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos jx \, dx; \qquad b_j = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin jx \, dx$$ (E26)

5.3 Definition

A trigonometric polynomial of degree $M$ is a polynomial of the following form:

$$T_M(x) = \frac{a_0}{2} + \sum_{j=1}^{M} \left( a_j \cos jx + b_j \sin jx \right)$$ (E27)

The following theorem holds: there exists such $M$, $2M < N$, which minimizes the following criterion:

$$\sum_{i=1}^{N} \left[ f(x_i) - T_M(x_i) \right]^2$$ (E28)

Hence, the coefficients of the corresponding trigonometric polynomial are determined by the formulas

$$a_j = \frac{2}{N} \sum_{i=1}^{N} f(x_i) \cos jx_i; \qquad b_j = \frac{2}{N} \sum_{i=1}^{N} f(x_i) \sin jx_i$$ (E29)

Let $Y$ be the forecasted variable and $x_1, x_2, \dots, x_n$ the input variables. Let us search for the dependence among them in the form

$$Y = A_1 f_1(x_1) + A_2 f_2(x_2) + \dots + A_n f_n(x_n)$$ (E30)

where $A_i$ is a fuzzy number of triangular type given as $A_i = (\alpha_i, c_i)$, and the functions $f_i$ are determined as

$$f_i(x_i) = T_{M_i}(x_i)$$ (E31)

The degree $M_i$ of the function $f_i$ is determined by the theorem described above. Therefore, if we assign $z_i = f_i(x_i)$, the linear interval model is obtained in its classical form.
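Below is a small Python sketch (illustrative, not from the chapter, and assuming the input variable has already been scaled to $[-\pi, \pi]$) of computing the coefficients by formulas (29) and evaluating the trigonometric polynomial (27), which then plays the role of $z_i = f_i(x_i)$ in the linear interval model.

```python
import numpy as np

def trig_poly_fit(x, y, M):
    """Coefficients of the trigonometric polynomial (27) by formulas (29).

    x is assumed to be scaled to [-pi, pi]; returns (a, b) with a[0..M], b[1..M].
    """
    N = len(x)
    a = np.array([2.0 / N * np.sum(y * np.cos(j * x)) for j in range(M + 1)])
    b = np.array([2.0 / N * np.sum(y * np.sin(j * x)) for j in range(1, M + 1)])
    return a, b

def trig_poly_eval(x, a, b):
    """Evaluate T_M(x) = a_0/2 + sum_j (a_j cos jx + b_j sin jx)."""
    M = len(b)
    result = a[0] / 2.0 * np.ones_like(x)
    for j in range(1, M + 1):
        result += a[j] * np.cos(j * x) + b[j - 1] * np.sin(j * x)
    return result

# Example: fit a noisy periodic signal with M = 3 harmonics
x = np.linspace(-np.pi, np.pi, 60, endpoint=False)
rng = np.random.default_rng(1)
y = 1.5 + np.sin(x) - 0.4 * np.cos(2 * x) + 0.05 * rng.standard_normal(60)
a, b = trig_poly_fit(x, y, M=3)
z = trig_poly_eval(x, a, b)   # z_i = f_i(x_i) then enters the linear interval model
```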


6. Adaptation of fuzzy GMDH models

While forecasting by self-organizing methods (fuzzy GMDH in particular), the problem of adaptation arises when the training sample size increases and the obtained model needs to be corrected in accordance with newly available data. Adaptation that takes into account new information obtained while forecasting may be carried out by two approaches. The first one is to correct the parameters of the forecasting model with new data, assuming that the model structure has not changed. The second approach consists in adaptation not only of the model parameters but of its optimal structure as well. This way demands repetitive runs of the full GMDH algorithm and is connected with a huge volume of calculations.

The second approach is used if adaptation of the parameters does not provide a good forecast and the new real output values do not fall within the calculated interval of their estimates.

In our work the first approach is used, based on adaptation of the FGMDH model parameters with new available data. Here recursive identification methods are preferable, especially the recursive LSM. In this method the parameter estimates at the next step are determined on the basis of the estimates at the previous step, the model error, and an information matrix which is modified during the whole estimation process and therefore contains data that may be used at the next steps of the adaptation process [5].

Hence, model coefficient adaptation is simplified substantially. If we store the information matrix obtained while identifying the optimal model by fuzzy GMDH, then for model parameter adaptation it is enough to perform only one iteration of the recursive LSM.

6.1 The application of recurrent LSM for model coefficients adaptation

Consider the following model:

$$y(k) = \theta^T \Psi(k) + v(k)$$ (E32)

where $y(k)$ is the dependent (output) variable, $\Psi(k)$ is the measurement vector, $v(k)$ are random disturbances, and $\theta$ is the parameter vector to be estimated.

The parameter estimate $\theta$ at step $N$ is computed by the following formula [5, 6]:

$$\theta(N) = \theta(N-1) + \gamma(N) \left[ y(N) - \theta^T(N-1) \Psi(N) \right]$$ (E33)

where $\gamma(N)$ is a coefficient (gain) vector determined by the formula

$$\gamma(N) = \frac{P(N-1) \Psi(N)}{1 + \Psi^T(N) P(N-1) \Psi(N)}$$ (E34)

where $P(N-1)$ is the so-called information matrix, determined by the formula

$$P(N-1) = P(N-2) - \frac{P(N-2) \Psi(N-1) \Psi^T(N-1) P(N-2)}{1 + \Psi^T(N-1) P(N-2) \Psi(N-1)}$$ (E35)

As one can easily see from (35), the information matrix may be obtained independently of the parameter estimation process and in parallel with it. The adaptation of the two parameter vectors $\theta_1^T = (\alpha_1, \dots, \alpha_m)$ and $\theta_2^T = (C_1, \dots, C_m)$ is performed using formulas (33)–(35) as follows:

$$\theta_1(N) = \theta_1(N-1) + \gamma_1(N) \left[ y(N) - \theta_1^T(N-1) \Psi_1(N) \right], \qquad \theta_2(N) = \theta_2(N-1) + \gamma_2(N) \left[ y_c(N) - \theta_2^T(N-1) \Psi_2(N) \right]$$ (E36)
$$y_c(N) = y(N) - \theta_1^T(N-1) \Psi_1(N)$$

where $\Psi_1^T = (z_1, \dots, z_m)$; $\Psi_2^T = (z_1, \dots, z_m)$.
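A minimal Python sketch of one adaptation step by formulas (33)–(36) is given below (illustrative names, not from the chapter; it assumes the information matrices obtained during model identification are stored, as described above).

```python
import numpy as np

def rlsm_step(theta, P, psi, y):
    """One recursive LSM update of a parameter vector (Eqs. 33-35)."""
    denom = 1.0 + psi @ P @ psi
    gamma = P @ psi / denom                       # Eq. (34)
    theta = theta + gamma * (y - theta @ psi)     # Eq. (33)
    P = P - np.outer(P @ psi, psi @ P) / denom    # Eq. (35)
    return theta, P

def adapt_fuzzy_model(theta1, theta2, P1, P2, psi, y):
    """Adapt interval centers (theta1) and widths (theta2) per Eq. (36)."""
    y_c = y - theta1 @ psi            # residual w.r.t. previous centers, Eq. (36)
    theta1, P1 = rlsm_step(theta1, P1, psi, y)
    theta2, P2 = rlsm_step(theta2, P2, psi, y_c)
    theta2 = np.maximum(theta2, 0.0)  # keep widths nonnegative (practical
                                      # safeguard, not part of Eq. (36))
    return theta1, theta2, P1, P2

# Example usage with m = 3 regressors
m = 3
theta1, theta2 = np.zeros(m), 0.1 * np.ones(m)
P1, P2 = 100.0 * np.eye(m), 100.0 * np.eye(m)
psi = np.array([1.0, 0.5, 0.25])       # new measurement vector z
theta1, theta2, P1, P2 = adapt_fuzzy_model(theta1, theta2, P1, P2, psi, y=2.0)
```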


7. Experimental investigations of FGMDH in forecasting

The goal of the experiments was forecasting macroeconomic indicators of Ukraine and estimating the efficiency of the suggested FGMDH. In the experiments, a database was utilized which contains monthly values of 24 macroeconomic indicators of the Ukrainian economy from July 1995 to 2013. The consumer price index (CPI) and gross national product (GNP) were chosen as the forecasted variables.

While constructing the forecasting models, the sliding-window technique was utilized, with the window size determined automatically by regression analysis. Regression analysis methods were also used to determine the input variables significant for forecasting.

The following experiments were performed:

  1. Construction of forecasting models with application of different membership functions: triangular, Gaussian, and bell-shaped.

  2. Construction of forecasting models for the macroeconomic indicators (CPI and GNP) using different partial descriptions: classic quadratic polynomials, Chebyshev's polynomials, and trigonometric polynomials.

  3. Application of the stochastic approximation algorithm and the recurrent least squares method (RLSM) for adaptation of the models.

  4. Comparative analysis of the suggested algorithms with classic GMDH and neural networks (NN), in particular the backpropagation neural network.

7.1 Comparison of different membership functions

The experimental investigations of the fuzzy forecasting models were carried out with the following MFs: triangular, Gaussian, and bell-shaped. RMSE was chosen as the accuracy criterion. The RMSE values for CPI forecasting are presented in Figure 1.

Figure 1.

Forecasting accuracy of CPI with different MFs.

As one can see, the most efficient approach for constructing linear interval models is the application of bell-shaped membership functions for the fuzzy coefficients; Gaussian MFs take second place, and the worst forecasting accuracy was achieved with the triangular MF. In the next experiment, the task was to forecast GNP values.

In Figure 2 the obtained RMSE values for forecasting GNP are presented.

Figure 2.

Forecasting accuracy of GNP with different MF.

As one can see, the results are practically the same as in the previous experiment. The best accuracy was attained with the bell-shaped MF.

7.2 Comparison of different partial descriptions

In the next series of experiments, FGMDH models with the following partial descriptions were investigated: quadratic polynomials, Chebyshev's polynomials, trigonometric polynomials, and ARIMA models. In Figure 3 the accuracy of CPI forecasting is presented for different PDs.

Figure 3.

Forecasting accuracy of CPI for different PDs.

As we can see, the best results are obtained with models which use trigonometric polynomials as PDs. Somewhat worse are the results with classic quadratic polynomials, and the worst turned out to be ARIMA models as PDs. This may be explained by the fact that an ARIMA model is a function of one variable, which is a serious drawback of such models.

7.3 Comparison of crisp and fuzzy GMDH

For a more comprehensive efficiency comparison of crisp and fuzzy GMDH, the existing GMDH implementation was extended by the inclusion of new types of PDs, orthogonal polynomials (Chebyshev's and trigonometric), as well as ARIMA models as PDs. Stochastic approximation and the recurrent LSM were implemented as adaptation algorithms. In Figures 4 and 5, the mean RMSE values of crisp and fuzzy GMDH over the whole range of data variation are presented for different types of PDs, without adaptation and with the adaptation algorithms.

Figure 4.

Forecasting accuracy of classical and fuzzy GMDH for CPI.

Figure 5.

Forecasting accuracy of classical and fuzzy GMDH for GNP.

As one can easily see from the presented results, the fuzzy GMDH algorithm shows better forecasting accuracy than classic GMDH for all adaptation algorithms.

So, the results of the experiments have confirmed the indisputable advantages of fuzzy GMDH over classic GMDH in the problem of forecasting macroeconomic indicators. In the next experiments, fuzzy GMDH was compared with the results of the backpropagation neural network (NN). The final results, MSE values over five forecasting points for CPI and GNP forecasting, are presented in Table 1.

| Forecasting model | Without adaptation (CPI / GNP) | Stochastic approximation (CPI / GNP) | RLSM (CPI / GNP) |
| --- | --- | --- | --- |
| Triangular MF + quadratic polynomial | 0.308 / 530.3 | 0.184 / 330.0 | 0.173 / 311.9 |
| Gaussian MF + quadratic polynomial | 0.294 / 531.3 | | |
| Bell-shaped MF + quadratic polynomial | 0.268 / 497.9 | | |
| Triangular MF + Chebyshev's polynomial | 0.403 / 621.4 | 0.341 / 458.1 | 0.337 / 377.2 |
| Triangular MF + Laguerre polynomial | 0.372 / 589.5 | 0.264 / 442.9 | 0.293 / 378.5 |
| Triangular MF + trigonometric polynomial | 0.261 / 537.7 | 0.185 / 347.9 | 0.165 / 331.9 |
| Triangular MF + ARIMA model | 0.862 / 704.3 | 0.683 / 513.5 | 0.597 / 472.6 |
| GMDH + quadratic polynomial | 0.343 / 596.7 | 0.204 / 428.2 | 0.192 / 369.2 |
| GMDH + Chebyshev's polynomial | 0.425 / 641.4 | 0.351 / 473.2 | 0.347 / 398.4 |
| GMDH + Laguerre polynomial | 0.396 / 598.5 | 0.292 / 459.0 | 0.274 / 376.4 |
| GMDH + trigonometric polynomial | 0.291 / 574.8 | 0.182 / 349.5 | 0.177 / 332.2 |
| GMDH + ARIMA model | 0.902 / 728.4 | 0.749 / 518.7 | 0.714 / 498.3 |
| NN backpropagation¹ | 0.954 / 792.3 | | |
| NN backpropagation² | 0.741 / 668.6 | | |

Table 1.

Forecasting accuracy (MSE) for different forecasting methods.

¹ Neural network constructed with Neural Networks Toolbox 4.0.6 (MathWorks).

² Neural network constructed with Alyuda Forecaster 1.6 (Alyuda Research).


Summing up the experimental results, the following conclusions were made:

  1. The forecasting accuracy of fuzzy GMDH algorithms is, on the whole, better than that of non-fuzzy GMDH.

  2. The forecasting accuracy of both non-fuzzy and fuzzy GMDH algorithms is better than that of the backpropagation NN. Modification of the membership functions does not lead to significant changes in forecasting quality, but the best results were obtained with bell-shaped and Gaussian MFs.

  3. The best forecasting accuracy for the considered problems was obtained with fuzzy GMDH models using quadratic and trigonometric partial descriptions.

  4. The best adaptation algorithm for fuzzy GMDH models is the recurrent LSM (RLSM).

In [7, 8] the generalization of fuzzy GMDH to the case when the input data are also fuzzy was considered. Then the linear interval regression model takes the following form:

$$Y = A_0 Z_0 + A_1 Z_1 + \dots + A_n Z_n,$$

Consider the case of a symmetrical membership function for the parameters $A_i$, so that they can be described by the pair of parameters $(a_i, c_i)$, where

$\underline{A}_i = a_i - c_i$, $\overline{A}_i = a_i + c_i$, $c_i$ is the interval width, $c_i \ge 0$,

and $Z_i$ is an input variable which is also a fuzzy number of triangular shape, defined by three parameters $(\underline{Z}_i, Z_i, \overline{Z}_i)$, where $\underline{Z}_i$ is the lower border, $Z_i$ is the center, and $\overline{Z}_i$ is the upper border of the fuzzy number.

It was shown that the corresponding model is also an LP problem, and the corresponding FGMDH algorithm was developed for this case [7, 8].


8. Conclusions

In this paper the fuzzy inductive modeling method FGMDH has been considered.

The algorithms of FGMDH with different membership functions and different partial descriptions, including orthogonal polynomials, were presented and analyzed.

Experimental investigations of GMDH and fuzzy GMDH in problems of forecasting macroeconomic indices of the Ukrainian economy were carried out.

The comparative investigations of FGMDH with ARIMA and neural network backpropagation were performed.

The analysis of the experimental results has confirmed the high accuracy of fuzzy GMDH in macroeconomic forecasting problems.

References

  1. Ivakhnenko AG, Mueller IA. Self-Organization of Forecasting Models. Kiev: Publ. House "Technika"; 1985
  2. Ivakhnenko AG, Zaychenko YP, Dimitrov VD. Decision-Making on the Basis of Self-Organization. Moscow: Publ. House "Soviet Radio"; 1976. p. 363
  3. Zgurovsky MZ, Zaychenko YP. Inductive modeling method (GMDH) in problems of intellectual data analysis and forecasting. In: Studies in Computational Intelligence. Switzerland: Springer International Publishing AG; 2016. p. 406
  4. Zaychenko YP, Kebkal AG, Krachkovsky VF. Fuzzy group method of data handling and its application for macro-economic indicators forecasting. Scientific Papers of NTUU "KPI". 2000;2:18-26
  5. Zaychenko YP, Zayets IO. Synthesis and adaptation of fuzzy forecasting models on the basis of self-organization method. Scientific Papers of NTUU "KPI". 2001;3:34-41
  6. Zaychenko Y, Zayets IO, Kamotsky OV, Pavlyuk OV. The investigations of different membership functions in Fuzzy Group Method of Data Handling. Control Systems and Machines. 2003;2:56-67
  7. Zaychenko YP. Fuzzy Models and Methods in Intellectual Systems. Kiev: Publ. House "Slovo"; 2008. p. 354
  8. Zaychenko Y. Fuzzy Group Method of Data Handling under fuzzy input data. System Research and Information Technologies. 2007;3:100-112
