## 1. Introduction

During the last few decades, fractional calculus has gained massive attention from physicists and mathematicians because of its numerous interdisciplinary applications. Many recent studies have demonstrated the significance of fractional-order differential equations as valuable instruments for modelling physical phenomena: the non-linear oscillation of earthquakes and fluid-dynamic traffic flow, for example, can be elegantly modelled with fractional derivatives [1, 2]. Various physical processes show fractional-order behaviour that may change with respect to time or space. The adoption of fractional calculus concepts is well known in many scientific areas such as physics, diffusion and wave propagation, heat transfer, viscoelasticity and damping, electronics, robotics, electromagnetism, signal processing, telecommunications, control systems, traffic systems, system identification, chaos and fractals, biology, genetic algorithms, filtration, modelling and identification, chemistry, irreversibility, as well as economy and finance [3–5].

Modelling of different physical phenomena gave rise to a special differential equation known as the Riccati differential equation, named after the Italian mathematician Count Jacopo Francesco Riccati. Because of the many applications of fractional Riccati differential equations, such as in stochastic control and pattern formation, many researchers have studied them to obtain exact or approximate solutions. The fractional variational iteration method was applied in [6] to give an approximate analytical solution of the non-linear fractional Riccati differential equation. A modified homotopy perturbation method (MHPM) was used on the quadratic Riccati differential equation of fractional order [7]. Results for the fractional Riccati differential equation were also obtained on the basis of the Taylor collocation method [8]. Fractional Riccati differential equations were solved by means of the variational iteration method and the homotopy perturbation Padé technique [9, 10], and numerical results were attained by using the Chebyshev finite difference method [11]. The Adomian decomposition method was presented for the fractional Riccati differential equation [12], the problem was treated by means of the Bernstein collocation method [13], and an enhanced homotopy perturbation method (EHPM) was used to study this problem [14]. Recently, artificial neural networks and sequential quadratic programming have been utilized to obtain the solution of the Riccati differential equation [15]. The problem was also solved by the Legendre wavelet operational matrix method [16], and results for the fractional Riccati differential equation were obtained by a new homotopy perturbation method (NHPM) [17].

In recent years, the artificial neural network (ANN) has been one of the methods attracting massive attention from researchers in mathematics as well as in the physical sciences. The concept of the ANN started to develop in 1943, when a neurophysiologist and a young mathematician [18] described the working of a neuron with the help of an electric circuit. Later, in 1949, a book [19] was written to clarify the working of neurons. Bernard Widrow and Marcian Hoff then developed the MADALINE model, which was used to study the first real-world problem with a neural network. Researchers continued to study single-layered neural networks, but in 1975 the concept of the multilayer perceptron (MLP) was introduced, which was computationally exhaustive due to its multilayer architecture. The excessive training time and high computational complexity of the MLP gave rise to the functional link neural network, in which the complexity of multiple layers is overcome by introducing variable functions [20]. The functional link neural network has been applied to several problems, such as a modified functional link neural network for image denoising [21], active control of non-linear noise processes [22], and the problem of channel equalization in a digital communication system [23]. Due to its low computational effort and easy-to-implement procedure, the functional link neural network has also been used to solve differential equations [24, 25].

## 2. Definitions and preliminaries

The Riemann-Liouville, Grünwald-Letnikov and Caputo definitions of fractional derivatives of order *α* > 0 are the ones most commonly used; the Riemann-Liouville and Caputo definitions are recalled below.

**Definition 1:** The Riemann-Liouville fractional derivative operator can be defined as follows:

$$D^{\alpha} f(t) = \frac{1}{\Gamma(m - \alpha)} \, \frac{d^{m}}{dt^{m}} \int_{0}^{t} \frac{f(\tau)}{(t - \tau)^{\alpha - m + 1}} \, d\tau,$$

where $m - 1 < \alpha \le m$, $m \in \mathbb{N}$, and $\Gamma(\cdot)$ is the gamma function.

**Definition 2:** The fractional differential operator introduced by Caputo in the late 1960s can be defined as follows [27]:

$$D^{\alpha} f(t) = \frac{1}{\Gamma(m - \alpha)} \int_{0}^{t} \frac{f^{(m)}(\tau)}{(t - \tau)^{\alpha - m + 1}} \, d\tau,$$

where $m - 1 < \alpha \le m$, $m \in \mathbb{N}$.

The Caputo fractional derivative satisfies the important attribute of being zero when applied to a constant. In addition, it is obtained by computing an ordinary derivative followed by a fractional integral, while the Riemann-Liouville derivative is obtained in the reverse order.
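Both properties can be checked numerically. The sketch below is not part of the original chapter and its function name is illustrative; it approximates the Caputo derivative of order 0 < *α* < 1 with the standard L1 finite-difference scheme, which returns zero for a constant and reproduces the known value $t^{1-\alpha}/\Gamma(2-\alpha)$ for $f(t) = t$:

```python
import math

def caputo_l1(f, t, alpha, n=2000):
    """Approximate the Caputo derivative of order 0 < alpha < 1 at t
    using the standard L1 finite-difference scheme on n subintervals."""
    h = t / n
    coef = h ** (-alpha) / math.gamma(2 - alpha)
    total = 0.0
    for k in range(n):
        # quadrature weight w_k = (k+1)^(1-alpha) - k^(1-alpha)
        w = (k + 1) ** (1 - alpha) - k ** (1 - alpha)
        # backward difference of f, newest point first
        total += w * (f(t - k * h) - f(t - (k + 1) * h))
    return coef * total
```

For a linear function the scheme is exact up to round-off, so `caputo_l1(lambda s: s, 1.0, 0.5)` agrees with $1/\Gamma(3/2)$, while any constant input yields zero.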

## 3. ChSANN and LSANN

### 3.1. Methodology

The functional mapping of the Legendre simulated annealing neural network (LSANN) and the Chebyshev simulated annealing neural network (ChSANN) is shown in **Figure 1**, which demonstrates the structure of both methods; for the convenience of the reader, a stepwise explanation of both methods is also presented.

The steps for both methods are explained together because of their structural similarity; they differ only in the polynomial basis, which affects the accuracy of the results.

Step 1: The summation of the products of the network adaptive coefficients (NAC) and the Chebyshev or Legendre polynomials is calculated for the independent variable of the fractional differential equation, for an arbitrary number of polynomials, as shown in **Figure 1**.

Step 2: The activation function is applied to this summation, as shown in **Figure 1**.

Step 3: The trial solution is calculated by using the initial conditions, as in the study of Lagaris and Fotiadis [28].

Step 4: The required derivatives of the trial solution are calculated.

Step 5: The optimization of the mean square error (MSE), that is, the learning of the NAC, is executed by the thermal minimization methodology known as simulated annealing. The equation used to calculate the MSE is discussed in the next section. Before optimization, the independent variable is discretized into an array of training points.

Step 6: If the value of the MSE is in an acceptable range, the values of the training points and the NAC are substituted into the trial solution to obtain the output. Otherwise, the procedure is repeated from Step 1 with a different number of polynomials.
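The steps above can be sketched in code. The following is a minimal illustration only, not the chapter's actual implementation: it assumes the classical case *α* = 1 of the benchmark equation y′ = 1 − y², a tanh activation, and a central-difference derivative in place of the analytic Caputo derivative; all function names are illustrative.

```python
import math
import random

def chebyshev_t(j, t):
    """T_j(t), Chebyshev polynomial of the first kind, by recurrence (Step 1 basis)."""
    prev, cur = 1.0, t
    if j == 0:
        return prev
    for _ in range(j - 1):
        prev, cur = cur, 2.0 * t * cur - prev
    return cur

def trial(nac, t, y0=0.0):
    """Steps 1-3: activate the weighted basis sum, then form a trial solution
    that satisfies the initial condition y(0) = y0 by construction."""
    z = sum(c * chebyshev_t(j, t) for j, c in enumerate(nac))
    return y0 + t * math.tanh(z)

def mse(nac, pts, h=1e-5):
    """Steps 4-5: mean square residual of y' = 1 - y**2 over the training points,
    with y' taken by central differences for simplicity."""
    total = 0.0
    for t in pts:
        dy = (trial(nac, t + h) - trial(nac, t - h)) / (2.0 * h)
        y = trial(nac, t)
        total += (dy - (1.0 - y * y)) ** 2
    return total / len(pts)

def anneal(n_nac=5, n_pts=10, iters=5000, seed=1):
    """Steps 5-6: simulated annealing over the NAC with a linear cooling schedule."""
    rng = random.Random(seed)
    pts = [(i + 1) / n_pts for i in range(n_pts)]
    cur = [0.0] * n_nac
    cur_e = mse(cur, pts)
    best, best_e = cur[:], cur_e
    for k in range(iters):
        temp = max(1e-3, 1.0 - k / iters)
        cand = [c + rng.gauss(0.0, 0.05 + 0.2 * temp) for c in cur]
        e = mse(cand, pts)
        # accept improvements always, worse moves with Boltzmann probability
        if e < cur_e or rng.random() < math.exp(-(e - cur_e) / temp):
            cur, cur_e = cand, e
            if e < best_e:
                best, best_e = cand[:], e
    return best, best_e
```

Calling `anneal()` returns the trained coefficients and the final MSE; by construction the trial solution satisfies the initial condition exactly, and the annealing loop only has to reduce the residual at the training points.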

### 3.2. Implementation on fractional Riccati differential equation

In this section, the ChSANN and LSANN are employed for the fractional Riccati differential equation of the type:

$$D^{\alpha} y(t) = P(t) + Q(t)\, y(t) + R(t)\, y^{2}(t), \qquad 0 < \alpha \le 1, \qquad y(0) = A, \qquad (1)$$

where $D^{\alpha}$ denotes the Caputo fractional derivative and $P$, $Q$ and $R$ are known functions. For implementing both methodologies, Eq. (1) can be written in residual form, with all terms collected on one side, so that the residual can be driven to zero.

For the ChSANN, the network input is the weighted sum of Chebyshev polynomials,

$$z(t) = \sum_{j=0}^{N} c_{j} \, T_{j}(t),$$

where the $c_{j}$ are the NAC and $T_{j}$ denotes the Chebyshev polynomial of the first kind of degree $j$. For the LSANN, the activation function is applied to the same construction with the Legendre polynomials $P_{j}(t)$ in place of $T_{j}(t)$. In both cases, the number of polynomials $N$ is chosen according to the method.

The trial solution, constructed so that it satisfies the initial condition as in the study of Lagaris and Fotiadis [28], can be written in expanded form for the ChSANN as

$$y_{t}(t) = A + t \, F\big(z(t)\big), \qquad (10)$$

where $F$ is the activation function and $z(t)$ is the weighted sum of the polynomial basis. The fractional derivative of Eq. (10) in the Caputo sense (11) is obtained by applying Definition 2 term by term to the power-series expansion of the trial solution.

The mean square error (MSE) of Eq. (1) is calculated from the following:

$$\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \Big( D^{\alpha} y_{t}(t_{i}) - P(t_{i}) - Q(t_{i})\, y_{t}(t_{i}) - R(t_{i})\, y_{t}^{2}(t_{i}) \Big)^{2},$$

where $t_{i}$, $i = 1, \ldots, n$, are the training points.
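Since the trial solution can be expanded in powers of $t$, its Caputo derivative follows term by term from the standard rule $D^{\alpha} t^{n} = \frac{\Gamma(n+1)}{\Gamma(n+1-\alpha)}\, t^{n-\alpha}$, with the constant term contributing zero. A minimal sketch, with an illustrative function name:

```python
import math

def caputo_poly(coeffs, alpha, t):
    """Caputo derivative of p(t) = sum_n coeffs[n] * t**n for 0 < alpha <= 1,
    using D^a t^n = Gamma(n+1)/Gamma(n+1-alpha) * t**(n-alpha)."""
    total = 0.0
    for n, a in enumerate(coeffs):
        if n == 0:
            continue  # Caputo derivative of a constant is zero
        total += a * math.gamma(n + 1) / math.gamma(n + 1 - alpha) * t ** (n - alpha)
    return total
```

For `alpha = 1` the rule collapses to the ordinary derivative, which gives a quick sanity check of the coefficients.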

*Example 1:*

Consider the following Riccati differential equation with initial condition:

$$D^{\alpha} y(t) = 1 - y^{2}(t), \qquad 0 < \alpha \le 1, \qquad y(0) = 0.$$

The exact solution for $\alpha = 1$ is $y(t) = \dfrac{e^{2t} - 1}{e^{2t} + 1}$.
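For *α* = 1 this is the classical benchmark problem y′ = 1 − y², y(0) = 0, whose closed-form solution equals tanh(*t*); the following quick check (added here for illustration, with a hypothetical function name) verifies the ODE residual numerically:

```python
import math

def exact_example1(t):
    # closed-form solution of y' = 1 - y**2, y(0) = 0; equals tanh(t)
    return (math.exp(2.0 * t) - 1.0) / (math.exp(2.0 * t) + 1.0)

# the ODE residual vanishes (up to finite-difference error) at sample points
h = 1e-6
for t in (0.2, 0.5, 1.0):
    dy = (exact_example1(t + h) - exact_example1(t - h)) / (2.0 * h)
    assert abs(dy - (1.0 - exact_example1(t) ** 2)) < 1e-6
```

At *t* = 1 this gives `exact_example1(1.0)` ≈ 0.761594, the reference value against which the absolute errors in the tables are measured.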

The above fractional Riccati differential equation is solved by implementing the ChSANN and LSANN algorithms for various values of *α*. **Figure 2** shows the combined results of ChSANN for different values of *α*. **Table 1** depicts the comparison of the results obtained from both methods with the exact solution, together with the absolute error values for both methods. The absolute error (AE) values for ChSANN and LSANN can be viewed in **Table 1** but are better visualized in **Figure 3**. **Table 2** shows a further numerical comparison, and **Tables 3** and **4** demonstrate the numerical comparison of the proposed methods with the methods in [7, 13, 14] for fractional values of *α*. The effect of varying the number of NAC and training points is reported in **Table 5**.

| *t* | ChSANN | LSANN | IABMM [14] | EHPM [14] | MHPM [7] | Bernstein [13] |
|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 0.2 | 0.30018 | 0.29937 | 0.3117 | 0.3214 | 0.3138 | 0.30997 |
| 0.4 | 0.47512 | 0.47486 | 0.4885 | 0.5077 | 0.4929 | 0.48163 |
| 0.6 | 0.59334 | 0.59320 | 0.6045 | 0.6259 | 0.5974 | 0.59778 |
| 0.8 | 0.67572 | 0.67571 | 0.6880 | 0.7028 | 0.6604 | 0.67884 |
| 1.0 | 0.73748 | 0.73430 | 0.7478 | 0.7542 | 0.7183 | 0.73683 |

| *t* | ChSANN | LSANN | MHPM [7] |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 0.1 | 0.299658 | 0.299416 | 0.273875 |
| 0.2 | 0.422837 | 0.422808 | 0.454125 |
| 0.3 | 0.494204 | 0.494145 | 0.573932 |
| 0.4 | 0.545856 | 0.545773 | 0.644422 |
| 0.5 | 0.585660 | 0.585619 | 0.674137 |
| 0.6 | 0.616648 | 0.616647 | 0.671987 |
| 0.7 | 0.641558 | 0.641543 | 0.648003 |
| 0.8 | 0.662486 | 0.662452 | 0.613306 |
| 0.9 | 0.681101 | 0.681237 | 0.579641 |
| 1.0 | 0.702813 | 0.703857 | 0.558557 |

| *t* | ChSANN | LSANN | IABMM [14] | EHPM [14] | MHPM [7] | Bernstein [13] |
|---|---|---|---|---|---|---|
| 0 | 0 | 0 | | | | |
| 0.2 | 0.234602 | 0.236053 | 0.2393 | 0.2647 | 0.2391 | 0.23878 |
| 0.4 | 0.419229 | 0.419898 | 0.4234 | 0.4591 | 0.4229 | 0.42258 |
| 0.6 | 0.563627 | 0.564474 | 0.5679 | 0.6031 | 0.5653 | 0.56617 |
| 0.8 | 0.672722 | 0.673241 | 0.6774 | 0.7068 | 0.6740 | 0.67462 |
| 1.0 | 0.753188 | 0.755002 | 0.7584 | 0.7806 | 0.7569 | 0.75458 |

| No. of NAC | No. of training points | Mean square error | Approximate solution at *t* = 1 | Absolute error |
|---|---|---|---|---|
| 4 | 10 | 9.7679 × 10^{−5} | 0.760078 | 1.51570 × 10^{−3} |
| 5 | 20 | 2.3504 × 10^{−7} | 0.761644 | 5.02121 × 10^{−5} |
| 6 | 20 | 5.5016 × 10^{−9} | 0.761584 | 9.94216 × 10^{−6} |

*Example 2:*

Consider the non-linear Riccati differential equation with the following initial condition:

$$D^{\alpha} y(t) = 2\, y(t) - y^{2}(t) + 1, \qquad 0 < \alpha \le 1, \qquad y(0) = 0.$$

The exact solution when $\alpha = 1$ is

$$y(t) = 1 + \sqrt{2} \, \tanh\!\left( \sqrt{2}\, t + \frac{1}{2} \log \frac{\sqrt{2} - 1}{\sqrt{2} + 1} \right).$$
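For *α* = 1 this example is the classical benchmark y′ = 2y − y² + 1, y(0) = 0; its well-known closed-form solution can be verified numerically (illustrative check, hypothetical function name):

```python
import math

S = math.sqrt(2.0)

def exact_example2(t):
    # closed-form solution of y' = 2*y - y**2 + 1, y(0) = 0
    c = 0.5 * math.log((S - 1.0) / (S + 1.0))
    return 1.0 + S * math.tanh(S * t + c)

# the initial condition holds, and the ODE residual vanishes
# (up to finite-difference error) at sample points
assert abs(exact_example2(0.0)) < 1e-12
h = 1e-6
for t in (0.3, 0.7, 1.0):
    dy = (exact_example2(t + h) - exact_example2(t - h)) / (2.0 * h)
    assert abs(dy - (2.0 * exact_example2(t) - exact_example2(t) ** 2 + 1.0)) < 1e-6
```

The integration constant inside the tanh is exactly what forces y(0) = 0, since tanh(½ log r) = (r − 1)/(r + 1) evaluates to −1/√2 at r = (√2 − 1)/(√2 + 1).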

The ChSANN and LSANN algorithms are executed on the above test problem for various values of *α*. **Table 6** shows the absolute errors and the numerical comparison with the exact values for both methods, while the graphical comparison can be better envisioned through **Figure 4**. **Tables 7** and **8** display the numerical comparison of the proposed methods with the results obtained in [7] and [13] for fractional values of *α*, and the corresponding mean square errors are reported in **Table 9**. The effect on the accuracy of the results of varying the NAC and training points can be understood through **Table 10**.

| *t* | ChSANN | LSANN | MHPM [7] |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 0.1 | 0.22983 | 0.21885 | 0.216866 |
| 0.2 | 0.46136 | 0.45018 | 0.482292 |
| 0.3 | 0.69478 | 0.68545 | 0.654614 |
| 0.4 | 0.92279 | 0.91423 | 0.891404 |
| 0.5 | 1.13531 | 1.12664 | 1.132763 |
| 0.6 | 1.32357 | 1.31532 | 1.370240 |
| 0.7 | 1.48314 | 1.47660 | 1.594278 |
| 0.8 | 1.61485 | 1.61045 | 1.794879 |
| 0.9 | 1.72401 | 1.71972 | 1.962239 |
| 1.0 | 1.81844 | 1.80882 | 2.087384 |

| *t* | ChSANN | LSANN | Reference [13] |
|---|---|---|---|
| 0.2 | 0.31018 | 0.30567 | 0.314869 |
| 0.4 | 0.69146 | 0.68661 | 0.697544 |
| 0.5 | 0.89758 | 0.89230 | 0.903673 |
| 0.6 | 1.10220 | 1.09708 | 1.107866 |
| 0.8 | 1.47288 | 1.46889 | 1.477707 |
| 1.0 | 1.76276 | 1.76355 | 1.765290 |

## 4. Results and discussion

In this chapter, two new algorithms, based on functional link neural networks, Chebyshev and Legendre polynomials, and simulated annealing, have been developed and verified for the Riccati differential equation of fractional order. Validation of the methods is carried out by examining two benchmark examples that have already been solved by several well-known methods. The numerical comparison with previously obtained results for fractional-order derivatives demonstrates the effectiveness of the proposed methods.

For test example 1, good results with low mean square error were obtained for each method. Comparison of the mean square errors in **Table 1** and **Figure 2** shows that ChSANN gave better results despite a slightly higher mean square error than LSANN. It can be noted from **Table 5** that better results can be attained by varying the number of NAC and training points, while the trend witnessed in **Table 5** indicates that for ChSANN, at *α* = 1, the decreasing mean square error is directly proportional to the absolute error.

Test example 2 showed trends quite similar to those of example 1. **Tables 6** and **9** show that for *α* = 1, a lower mean square error was noted for ChSANN than for LSANN, due to which more accurate results were achieved by ChSANN at *α* = 0.9 compared with LSANN, as can be viewed in **Figure 4**. The results obtained for fractional values of the derivative are compared with MHPM for *α* = 0.75 and with a collocation-based method of Bernstein polynomials for *α* = 0.9, as presented in **Tables 7** and **8**. The comparison shows that the results achieved by ChSANN and LSANN are quite similar to those obtained by MHPM and the collocation-based Bernstein method. However, according to the observations from the case of *α* = 1, it can be assumed that the results obtained for *α* = 0.75 will be accurate up to 2-3 decimal places, given the order of the detected MSE.

The methods proposed in this study are capable of handling highly non-linear systems. Both the proposed neural architectures are computationally less exhaustive than the MLP. With this ease of computation, the suggested activation function has made it possible to solve fractional differential equations. Training of the NAC by simulated annealing with the Chebyshev and Legendre neural architectures minimized the MSE to a tolerable level, which leads to a more accurate numerical approximation. Simulated annealing is a probabilistic procedure that is largely independent of initial values and, unlike many other methods, can easily escape from local optima towards the global optimum. It can also successfully optimize functions with crests and plateaus. The methods could be further enhanced by introducing more advanced optimization techniques. The motivation behind this work is the successful implementation of neural algorithms in the field of calculus, which gives the solution of fractional differential equations a new direction with ease of implementation.

## 5. Conclusion

In this chapter, the ChSANN and LSANN have been developed for fractional differential equations and successfully employed on two benchmark examples of Riccati differential equations. The proposed methods gave excellent numerical approximations of the Riccati differential equation of fractional order. The most remarkable advantage of the proposed methods is the accurate prediction of the result for fractional values of the derivative. These procedures are easy to implement and can be used to approximate the solution for fractional values of the derivative. ChSANN displayed more accurate results than LSANN under similar applied conditions. Both the proposed algorithms are non-iterative and can be implemented through mathematical software; Mathematica 10 was used in this study to obtain all the results displayed in **Tables 1**–**10** and **Figures 2**–**4**.