## 1. Introduction

Artificial neural network (NN) models have been widely adopted in the field of time series forecasting over the last two decades. As a kind of soft-computing method, neural forecasting systems can be built more easily, thanks to their learning algorithms, than traditional linear or nonlinear models, which must be constructed with advanced mathematical techniques and a long process of finding optimized model parameters. The good function-approximation ability and strong sample-learning performance of NNs became well known through the error back-propagation learning algorithm (BP) applied to a feed-forward multi-layer NN called the multi-layer perceptron (MLP) (Rumelhart et al., 1986), and since this milestone of neural computing there have been more than 5,000 publications on NNs for forecasting (Crone & Nikolopoulos, 2007).

To simulate complex phenomena, chaos models have been studied since the middle of the last century (Lorenz, 1963; May, 1976). Among NN models, the radial basis function network (RBFN) was employed early on for chaotic time series prediction (Casdagli, 1989). To design the structure of the hidden layer of an RBFN, a cross-validated subspace method was proposed, and the system was applied to predict noisy chaotic time series (Leung & Wang, 2001). A two-layered feed-forward NN, with hyperbolic tangent activation functions in all hidden units and a linear final output unit, gave highly accurate predictions for the Lorenz system and the Hénon and Logistic maps (Oliveira et al., 2000).

For real-world time series, NN and advanced NN models (Zhang, 2003) are reported to provide more accurate forecasting results than traditional statistical models (e.g., the autoregressive integrated moving average (ARIMA) model (Box & Jenkins, 1970)), and the performance of different NNs on financial time series was confirmed by Kodogiannis & Lolis (2002). Furthermore, in several time series forecasting competitions held on benchmark data over the past decades, many kinds of NN methods showed powerful predictive ability against other techniques, e.g., vector quantization, fuzzy logic, Bayesian methods, Kalman filters and other filtering techniques, support vector machines, etc. (Lendasse et al., 2007; Crone & Nikolopoulos, 2007).

Meanwhile, reinforcement learning (RL), a kind of goal-directed learning, has been widely applied in control theory, autonomous systems, and other fields of intelligent computation (Sutton & Barto, 1998). When the environment of an agent is a Markov decision process (MDP) or a partially observable Markov decision process (POMDP), exploratory behaviours let the agent obtain reward or punishment from the environment, and the action policy is then modified to acquire more reward. When the prediction error for a time series is considered as reward or punishment from the environment, RL can be used to train predictors constructed from neural networks.

In this chapter, two kinds of neural forecasting systems using RL are introduced in detail: a self-organizing fuzzy neural network (SOFNN) (Kuremoto et al., 2003) and a multi-layer perceptron (MLP) predictor (Kuremoto et al., 2005). The results of experiments using Lorenz chaos show the efficiency of the method compared with a conventional learning method (BP).

## 2. Architecture of neural forecasting system

The flow chart of neural forecasting processing generally follows Fig. 1. The $t$-th step of the time series data $y(t)$ is embedded, together with its delayed values, into an $n$-dimensional input vector

$$x(t) = \big(y(t),\ y(t-\tau),\ \ldots,\ y(t-(n-1)\tau)\big)$$

following Takens' embedding theorem (Takens, 1981); a 3-dimensional reconstruction is shown in Fig. 2. The output layer of a neural forecasting system usually has one neuron, whose output $\hat{y}(t+1)$ is the predicted value of the next step.
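As a concrete sketch, the delay-coordinate reconstruction can be written as follows (a minimal illustration; the function name, the delay parameter `tau`, and the toy series are our own assumptions):

```python
import numpy as np

def embed(series, n=3, tau=1):
    """Reconstruct n-dimensional input vectors from a scalar time series
    by delay-coordinate embedding (Takens, 1981).
    Returns inputs X[t] = (y(t), y(t-tau), ..., y(t-(n-1)tau))
    and the one-step-ahead targets y(t+1)."""
    start = (n - 1) * tau
    X = np.array([[series[t - k * tau] for k in range(n)]
                  for t in range(start, len(series) - 1)])
    y = series[start + 1:]
    return X, y

# toy usage on a ramp series 0, 1, ..., 9
X, y = embed(np.arange(10.0), n=3, tau=1)
# X[0] corresponds to t = 2: (y(2), y(1), y(0)), with target y(3)
```

Each row of `X` is one input pattern for the network, and the matching entry of `y` is the supervised (or reward-defining) target.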

There are various architectures of NN models, including the MLP, RBFN, recurrent neural network (RNN), autoregressive recurrent neural network (ARNN), neuro-fuzzy hybrid networks, ARIMA-NN hybrid models, the SOFNN, and so on. The training rules of NNs are also very different, including not only well-known methods, i.e., BP, orthogonal least squares (OLS) and fuzzy inference, but also evolutionary computation, i.e., the genetic algorithm (GA), particle swarm optimization (PSO), genetic programming (GP), RL, and so on.

### 2.1. MLP with BP

The MLP, a feed-forward multi-layer network, is one of the most famous classical neural forecasting systems; its structure is shown in Fig. 3. BP is commonly used as its learning rule, and the system performs efficiently in function approximation and nonlinear prediction.

For the hidden layer, let the number of neurons be $K$; the output of hidden neuron $k$ and the output of the network are given by Eq. (2) and Eq. (3):

$$h_k = f\Big(\sum_{i=1}^{n} w_{ki}\, x_i(t)\Big), \qquad \hat{y}(t+1) = f\Big(\sum_{k=1}^{K} v_k\, h_k\Big)$$

where $w_{ki}$ and $v_k$ denote the connection weights of the $k$-th hidden neuron with the input neurons and the output neuron, respectively. The activation function $f(u)$ is a sigmoid function (or hyperbolic tangent function) given by Eq. (4):

$$f(u) = \frac{1}{1 + e^{-\beta u}}$$

The gradient parameter $\beta$ is usually set to 1.0 and, to match the range of $f(u)$, the scale of the time series data should be adjusted into (0.0, 1.0).

BP is a supervised learning algorithm: using sample data, it trains the NN to produce more correct outputs by modifying all connection weights between layers. Conventionally, the error function is the mean square error of Eq. (5).

$$E(W) = \frac{1}{S} \sum_{t=1}^{S} \big(y(t+1) - \hat{y}(t+1)\big)^2$$

Here $S$ is the size of the training data set, $\hat{y}(t+1)$ is the output of the network, and $y(t+1)$ is the actual data of the time series. The error is minimized by adjusting the weights according to the gradient-descent updates of Eq. (6) and Eq. (7), applied through Eq. (2) and Eq. (3):

$$v_k \leftarrow v_k - \eta \frac{\partial E}{\partial v_k}, \qquad w_{ki} \leftarrow w_{ki} - \eta \frac{\partial E}{\partial w_{ki}}$$

Here $\eta$ denotes the learning rate.
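A minimal numpy sketch of this training scheme (layer sizes follow the 3 : 6 : 1 experiment of Section 4.2, but the learning rate, weight ranges, and toy data are our own illustrative choices, not the chapter's exact settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u, beta=1.0):            # Eq. (4), gradient parameter beta = 1.0
    return 1.0 / (1.0 + np.exp(-beta * u))

# 3 : 6 : 1 MLP
n, K = 3, 6
W = rng.uniform(-0.5, 0.5, (K, n))   # input -> hidden weights
v = rng.uniform(-0.5, 0.5, K)        # hidden -> output weights

def forward(x):
    h = sigmoid(W @ x)               # hidden outputs, Eq. (2)
    return sigmoid(v @ h), h         # network output, Eq. (3)

def train_step(x, target, eta=0.5):
    """One BP update reducing the squared error of Eq. (5)."""
    global W, v
    out, h = forward(x)
    delta_o = (out - target) * out * (1 - out)   # output-layer error term
    delta_h = delta_o * v * h * (1 - h)          # hidden-layer error terms
    v -= eta * delta_o * h                       # update, as in Eq. (6)
    W -= eta * np.outer(delta_h, x)              # update, as in Eq. (7)
    return (out - target) ** 2

# toy usage: learn to map one fixed input pattern to the target 0.8
x, t = np.array([0.2, 0.5, 0.9]), 0.8
errs = [train_step(x, t) for _ in range(2000)]
```

With real data, `train_step` would be applied over all embedded input vectors of the training set each epoch until $E(W)$ falls below the finish threshold.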

### 2.2. MLP with RL

One important feature of RL is its stochastic action policy, which brings about exploration of adaptive solutions. Fig. 4 shows an MLP whose output layer is a neuron with a Gaussian function; a hidden layer consisting of the variables of the distribution function, $\mu$ and $\sigma$, is added. The activation function of the units in each hidden layer is still the sigmoid function (or hyperbolic tangent function) (Eq. (8)-(10)).

The prediction value is then given according to Eq. (11): $\hat{y}(t+1)$ is drawn from the Gaussian policy

$$\pi\big(\hat{y}(t+1) \mid W, x(t)\big) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\Big(-\frac{(\hat{y}(t+1)-\mu)^2}{2\sigma^2}\Big)$$

Here the connection weights link the $k$-th hidden neurons with the neurons $\mu$, $\sigma$ of the statistical hidden layer and with the input neurons, respectively. The modification of these weights is executed by the SGA learning algorithm described in Section 3.
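A minimal sketch of this stochastic output (the layer sizes and weight ranges here are our own illustrative choices, not the 3 : 60 : 2 : 1 configuration of the experiments):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# small MLP whose third layer holds the distribution parameters (mu, sigma);
# the single output is a random draw from N(mu, sigma)
n, K = 3, 4
W  = rng.uniform(-0.5, 0.5, (K, n))  # input -> hidden weights
Wm = rng.uniform(-0.5, 0.5, K)       # hidden -> mu unit
Ws = rng.uniform(-0.5, 0.5, K)       # hidden -> sigma unit

def predict(x):
    h = sigmoid(W @ x)               # hidden layer, as in Eq. (8)
    mu = sigmoid(Wm @ h)             # mean of the policy, as in Eq. (9)
    sigma = sigmoid(Ws @ h)          # deviation of the policy, as in Eq. (10)
    y_hat = rng.normal(mu, sigma)    # stochastic prediction, as in Eq. (11)
    return y_hat, mu, sigma

y_hat, mu, sigma = predict(np.array([0.2, 0.5, 0.9]))
```

Because both `mu` and `sigma` pass through a sigmoid, they stay in (0, 1), matching the scaling of the time series data; the randomness of `y_hat` is what gives the RL agent its exploratory behaviour.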

### 2.3. SOFNN with RL

A neuro-fuzzy hybrid forecasting system, the SOFNN, trained by the RL algorithm, is shown in Fig. 5. A hidden layer consists of fuzzy membership functions $B_{ij}(x_i(t))$, $t = 1, 2, \ldots, S$, given by Gaussian functions (Eq. (12)):

$$B_{ij}(x_i) = \exp\Big(-\frac{(x_i - c_{ij})^2}{2\sigma_{ij}^2}\Big)$$

The fuzzy inference is given by the product of the membership grades selected for each rule (Eq. (13)):

$$\lambda_k = \prod_{i=1}^{n} B_{ij}(x_i)$$

where $i = 1, 2, \ldots, n$ indexes the inputs, $j$ denotes the number of membership functions of each input (which is 1 initially), $B_{ij}$ is the $j$-th membership function for input $x_i$, and $c_{ij}$, $\sigma_{ij}$ are the center and width of the membership function connected with the $k$-th rule, respectively. The centers $c_j$, $j = 1, 2, \ldots, l$, are generated in a data-driven manner, where $l$ is the maximum number of membership functions: if no existing membership function fires above an adaptive threshold for a new input, a new membership function and the corresponding rules are added.

The outputs of the rule neurons are combined, through connection weights, to give the parameters $\mu$ and $\sigma$ of the stochastic output unit, and the prediction value is produced in the same way as in Section 2.2.

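The structure-growing mechanism can be sketched as follows (our own simplification for a single input dimension; the fixed width and threshold stand in for the adaptive ones described above):

```python
import numpy as np

class FuzzyLayer:
    """Gaussian membership functions for one input dimension; a new
    function is added when no existing one fires above a threshold."""
    def __init__(self, sigma=0.1, threshold=0.5):
        self.centers = []            # starts with no membership functions
        self.sigma = sigma           # width (adaptive in the original SOFNN)
        self.threshold = threshold   # firing threshold (likewise simplified)

    def grades(self, x):
        # membership grades B_j(x) of all existing functions, Eq. (12)
        return np.array([np.exp(-(x - c) ** 2 / (2 * self.sigma ** 2))
                         for c in self.centers])

    def observe(self, x):
        g = self.grades(x)
        if len(g) == 0 or g.max() < self.threshold:
            self.centers.append(x)   # grow the structure around new data
            g = self.grades(x)
        return g

layer = FuzzyLayer()
for x in [0.1, 0.12, 0.9, 0.5]:
    layer.observe(x)
# a rule's firing strength is the product over input dimensions of the
# grades selected from layers like this one, as in Eq. (13)
```

Note how 0.12 is absorbed by the function centered at 0.1, while 0.9 and 0.5 each trigger a new membership function; this is the data-driven growth confirmed in Fig. 17 and Fig. 18 of the experiments.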
## 3. SGA of RL

### 3.1. Algorithm of SGA

An RL algorithm, stochastic gradient ascent (SGA), was proposed by Kimura and Kobayashi (Kimura et al., 1996; Kimura & Kobayashi, 1998) to deal with POMDPs and continuous action spaces. Experimental results showed that the SGA learning algorithm was successful for cart-pole control and maze problems. In the case of time series forecasting, the output of the predictor can be considered as an action of the agent, and the prediction error can be used as reward or punishment from the environment, so SGA can be used to train a neural forecasting system by renewing the internal variable vector of the NN (Kuremoto et al., 2003, 2005).

The SGA algorithm is given below.

Step 1. Observe an input $x(t)$ from the time series.

Step 2. Predict a future data $\hat{y}(t+1)$ according to the probability policy $\pi\big(\hat{y}(t+1) \mid W, x(t)\big)$.

Step 3. Receive the immediate reward $r_t$ by evaluating the prediction error.

Step 4. Calculate the characteristic eligibility $e_i(t)$ and its discounted accumulation $\bar{D}_i(t)$:

$$e_i(t) = \frac{\partial}{\partial w_i} \ln \pi\big(\hat{y}(t+1) \mid W, x(t)\big), \qquad \bar{D}_i(t) = e_i(t) + \gamma\,\bar{D}_i(t-1)$$

Here $w_i$ is the $i$-th component of the internal variable vector $W$, and $\gamma$ ($0 \le \gamma < 1$) is a discount factor.

Step 5. Calculate $\Delta w_i(t) = (r_t - b)\,\bar{D}_i(t)$. Here $b$ denotes the reinforcement baseline.

Step 6. Improve the policy by renewing its internal variables: $w_i \leftarrow w_i + \alpha\,\Delta w_i(t)$, where $\alpha$ is a learning rate.

Step 7. For the next time step $t+1$, return to Step 1.

The characteristic eligibility $e_i(t)$ expresses how a change of the internal variable $w_i$ influences the stochastic policy; its concrete forms for the MLP and the SOFNN are given in Sections 3.2 and 3.3.

### 3.2. SGA for MLP

For the MLP forecasting system described in Section 2.2 (Fig. 4), the characteristic eligibility is obtained by differentiating the logarithm of the Gaussian policy with respect to each internal variable. For the distribution parameters this yields

$$\frac{\partial \ln \pi}{\partial \mu} = \frac{\hat{y}(t+1) - \mu}{\sigma^2}, \qquad \frac{\partial \ln \pi}{\partial \sigma} = \frac{\big(\hat{y}(t+1) - \mu\big)^2 - \sigma^2}{\sigma^3}$$

and the eligibilities of the connection weights follow by the chain rule through Eq. (8)-(10), respectively.

The initial values of the parameters, including the reward and the connection weights, were determined empirically.

### 3.3. SGA for SOFNN

For the SOFNN forecasting system described in Section 2.3 (Fig. 5), the characteristic eligibility is calculated in the same way, by differentiating the logarithm of the stochastic policy with respect to the parameters of the membership functions and the connection weights of the rules. Here the parameters of the membership functions, the reward $r$, and the threshold of the evaluation error were determined empirically.

## 4. Experiments

A chaotic time series generated by the Lorenz equations was used as a benchmark in forecasting experiments with three systems: MLP using BP, MLP using SGA, and SOFNN using SGA. Prediction precision was evaluated by the mean square error (MSE) between the forecasted values and the time series data.

### 4.1. Lorenz chaos

The butterfly-like attractor generated by the three ordinary differential equations of Eq. (28) is one of the most famous objects of the early stage of chaos studies (Lorenz, 1963):

$$\frac{dx}{dt} = \sigma\,(y - x), \qquad \frac{dy}{dt} = x\,(\rho - z) - y, \qquad \frac{dz}{dt} = xy - \beta z$$

The observed time series $o(t)$ of Eq. (29) used in the forecasting experiments was obtained from the numerical solution of Eq. (28), with the data scaled into (0.0, 1.0).

The size of the sample data for training was 1,000, and the following 500 data were used as unknown data for evaluating the accuracy of short-term (i.e., one-step-ahead) prediction.
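A sketch of generating such a benchmark series (the integration scheme, step size, initial condition, and the use of the x-component are our assumptions; the parameter values are the conventional ones):

```python
import numpy as np

def lorenz_series(length=1500, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """Integrate the Lorenz equations (Eq. (28)) with a 4th-order
    Runge-Kutta scheme and return the x-component, scaled into [0, 1]."""
    def f(v):
        x, y, z = v
        return np.array([s * (y - x), x * (r - z) - y, x * y - b * z])
    v = np.array([1.0, 1.0, 1.0])
    xs = []
    for _ in range(length):
        k1 = f(v)
        k2 = f(v + dt / 2 * k1)
        k3 = f(v + dt / 2 * k2)
        k4 = f(v + dt * k3)
        v = v + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        xs.append(v[0])
    xs = np.array(xs)
    return (xs - xs.min()) / (xs.max() - xs.min())   # scale to [0, 1]

series = lorenz_series()
train, test = series[:1000], series[1000:1500]       # 1,000 train / 500 test
```

The split mirrors the experimental setup: 1,000 samples for training and the following 500 for evaluating one-step-ahead prediction.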

### 4.2. Experiment of MLP using BP

It is very important and difficult to construct a good architecture of an MLP for nonlinear prediction. An experimental study (Oliveira et al., 2000) showed different prediction results for the Lorenz time series with architectures of $n : 2n : n : 1$, where $n$ denotes the embedding dimension; the cases of $n$ = 2, 3, 4 were investigated for predictions over different terms (including long-term prediction).

For short-term prediction here, a three-layer MLP using BP with the $3 : 6 : 1$ structure shown in Fig. 3 was used in the experiment; the time delay was fixed in advance, and training finished when $E(W)$ fell below a preset threshold.

### 4.3. Experiment of MLP using SGA

A four-layer MLP forecasting system with SGA and the $3 : 60 : 2 : 1$ structure shown in Fig. 4 was used in the experiment; the time delay and the constants of the sigmoid functions were fixed in advance, and the finish condition of training was set to 30,000 iterations, at which the convergence of $E(W)$ could be observed. The prediction results after 0, 5,000 and 30,000 iterations of training are shown in Fig. 9, Fig. 10 and Fig. 11, respectively. The change of the prediction error during training is shown in Fig. 12, and the one-step-ahead prediction results are shown in Fig. 13. The 500-step MSE of one-step-ahead forecasting by the MLP using SGA was 0.0112, a forecasting accuracy 13.2% higher than that of the MLP using BP.

### 4.4. Experiment of SOFNN using SGA

A five-layer SOFNN forecasting system with SGA and the structure shown in Fig. 5 was used in the experiment; the time delay $r$ was set by Eq. (31), and the finish condition of training was also set to 30,000 iterations, at which the convergence of $E(W)$ could be observed. The prediction results after training are shown in Fig. 14, where the number of input neurons was 4 and the data scale of the results was adjusted into (0.0, 1.0). The change of the prediction error during training is shown in Fig. 15, and the one-step-ahead prediction results are shown in Fig. 16. The 500-step MSE of one-step-ahead forecasting by the SOFNN using SGA was 0.00048, a forecasting accuracy 95.7% and 96.3% higher than that of the MLP using BP and the MLP using SGA, respectively.

One advanced feature of the SOFNN is its data-driven structure building. The numbers of membership-function neurons and rules increased with the samples (1,000 steps per training pass in the experiment) and iterations (30,000 in the experiment), as can be confirmed in Fig. 17 and Fig. 18. The numbers of membership-function neurons for the 4 input neurons were 44, 44, 44 and 45, respectively, and the number of rules was 143 when the training finished.

## 5. Conclusion

Although RL has been developed into one of the most important methods of machine learning, it is still seldom adopted in forecasting theory and prediction systems. Two kinds of neural forecasting systems using SGA learning were described in this chapter, and the experiments of training and short-term forecasting showed their successful performance compared with the conventional NN prediction method. Although MLP with SGA and SOFNN with SGA needed more training iterations than MLP with BP, the computation time of both was no more than a few minutes on a computer with a 3.0 GHz CPU.

One problem of these RL forecasting systems is that the value of the reward in the SGA algorithm seriously influences learning convergence; the optimal reward should be searched for experimentally for each time series. Another problem of SOFNN with SGA is how to tune the initial value of the deviation parameter of the membership functions and the threshold; these were also adjusted by observing the prediction error in training experiments. In fact, when SOFNN with SGA was applied to the neural forecasting competition "NN3", where 11 time series sets were used as benchmarks, it did not work sufficiently well in long-term prediction compared with the results of other methods (Kuremoto et al., 2007; Crone & Nikolopoulos, 2007). All these problems remain to be resolved, and it is expected that RL forecasting systems will be developed considerably in the future.

## Acknowledgments

We would like to thank Mr. A. Yamamoto and Mr. N. Teramori for their early work on the experiments. Part of this study was supported by MEXT-KAKENHI (15700161) and JSPS-KAKENHI (18500230).

## References

1. Box, G. E. P. & Jenkins, G. (1970). Time Series Analysis: Forecasting and Control, Holden-Day.
2. Casdagli, M. (1989). Nonlinear prediction of chaotic time series.
3. Crone, S. & Nikolopoulos, K. (2007). Results of the NN3 neural network forecasting competition.
4. Engle, R. F. (1982). Autoregressive conditional heteroscedasticity with estimates of the variance of U.K. inflation.
5. Kimura, H., Yamamura, M. & Kobayashi, S. (1996). Reinforcement learning in partially observable Markov decision processes: A stochastic gradient ascent (in Japanese).
6. Kimura, H. & Kobayashi, S. (1998). Reinforcement learning for continuous action using stochastic gradient ascent.
7. Kodogiannis, V. & Lolis, A. (2002). Forecasting financial time series using neural network and fuzzy system-based techniques.
8. Kuremoto, T., Obayashi, M., Yamamoto, A. & Kobayashi, K. (2003). Predicting chaotic time series by reinforcement learning.
9. Kuremoto, T., Obayashi, M. & Kobayashi, K. (2005). Nonlinear prediction by reinforcement learning.
10. Kuremoto, T., Obayashi, M. & Kobayashi, K. (2007). Forecasting time series by SOFNN with reinforcement learning.
11. Lendasse, A., Oja, E., Simula, O. & Verleysen, M. (2007). Time series prediction competition: The CATS benchmark.
12. Leung, H., Lo, T. & Wang, S. (2001). Prediction of noisy chaotic time series using an optimal radial basis function.
13. Lorenz, E. N. (1963). Deterministic nonperiodic flow.
14. May, R. M. (1976). Simple mathematical models with very complicated dynamics.
15. Oliveira, K. A., Vannucci, A. & Silva, E. C. (2000). Using artificial neural networks to forecast chaotic time series.
16. Rumelhart, D. E., Hinton, G. E. & Williams, R. J. (1986). Learning representations by back-propagating errors.
17. Sutton, R. S. & Barto, A. G. (1998). Reinforcement Learning: An Introduction, MIT Press.
18. Takens, F. (1981). Detecting strange attractors in turbulence.
19. Williams, R. J. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning.
20. Zhang, G. P. (2003). Time series forecasting using a hybrid ARIMA and neural network model.