Open access peer-reviewed chapter

Reactive Distillation Modeling Using Artificial Neural Networks

Written By

Francisco J. Sanchez-Ruiz

Submitted: 18 September 2021 Reviewed: 18 October 2021 Published: 31 August 2022

DOI: 10.5772/intechopen.101261


Abstract

The use of artificial intelligence techniques in process design has generated a line of research of interest in chemical engineering, especially in the so-called separation processes. In this chapter, the combination of artificial neural networks (ANN) and dynamic fuzzy artificial neural networks (DFANN) is presented, applied to the calculation of thermodynamic properties and the design of reactive distillation columns. ANN and DFANN are mathematical models that resemble the behavior of the human brain. The proposed models do not require linearization of the thermodynamic equations or of the mass- and energy-transfer models, which provides an approximate yet tight solution compared with rigorous reactive distillation column design models. Although network models must generally be trained on a dimensionless model, no dimensionless model is required for the design of a reactive column; it is observed that the proposed models give better results than those calculated with a commercial simulator such as Aspen Plus®. It is worth mentioning that this chapter only shows the application of neural network models; the full simulation and implementation are not presented, mainly because this is a specialized area whose explanation requires more than a single chapter. It is shown that a neural network with 16 inputs, 2 hidden layers, and 16 outputs generates a calculation system as robust as the rigorous thermodynamic models contained in the same commercial simulator; a characteristic of the network presented is the minimization of overlearning, which is low by the very nature of the network. In addition, it is shown to be a dynamic model whose fit as a function of time agrees to within 96–98% with commercial simulator models such as Aspen Plus®. The DFANN is a viable alternative for implementation in separation processes, but one disadvantage of these techniques is the experience required of the programmer, both in artificial intelligence and in separation processes.

Keywords

  • reactive distillation
  • neural networks
  • dynamic fuzzy neural network
  • thermodynamics properties
  • design column
  • azeotropic mix

1. Introduction

Reactive distillation is a separation process implemented for the separation of complex mixtures because it combines a chemical reaction and separation in a single piece of equipment; that is, one or more stages of the separation column function as a chemical reactor in which a catalyzed or uncatalyzed reaction is carried out. This type of process is implemented for mixtures that present azeotropes or components with very close boiling points, whose separation can be complex or require an excess of energy; on some occasions, the process is implemented for the purification of substances through a thermally integrated process. Reactive distillation proceeds by mass transfer both in the liquid phase and in the vapor phase, or on the surface of the catalyst [1].

The calculation and design of a reactive distillation system introduce a reaction term into the mass balances of the stages, which turns them into reactive stages; the stage balance $M_{i,j}$ thus becomes Eq. (1).

$$M_{i,j} = n_{L,j-1}\,x_{i,j-1} + n_{V,j+1}\,y_{i,j+1} + n_{F,j}\,x_{F,i,j} - \left(n_{L,j} + n_{SL,j}\right)x_{i,j} - \left(n_{V,j} + n_{SV,j}\right)y_{i,j} + \left(V_L H\right)_j \sum_{n=1}^{n_{Rx}} \nu_{i,n}\, r_{j,n} = 0 \tag{1}$$

where $(V_L H)_j$ is the volumetric liquid holdup at stage j, $\nu_{i,n}$ is the stoichiometric coefficient of component i in reaction n, $r_{j,n}$ is the rate of reaction n on stage j, and $n_{Rx}$ is the number of chemical reactions.

The modification of the stage energy balance lies in the definition of $Q_j$ in Eq. (2), where the heat of reaction is included.

$$H_j = n_{L,j-1}\,h_{L,j-1} + n_{V,j+1}\,h_{V,j+1} + n_{F,j}\,h_{F,j} - \left(n_{L,j} + n_{SL,j}\right)h_{L,j} - \left(n_{V,j} + n_{SV,j}\right)h_{V,j} - Q_j = 0 \tag{2}$$

In these equations, n represents mole flow, x mole fraction in the liquid phase, y mole fraction in the vapor phase, K equilibrium constant, h molar enthalpy, and Q heat flow. The subscript i represents a component, j stage, L liquid, V vapor, SV side vapor, SL side liquid, F feed, and N last stage, respectively [1].
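As an illustration, the following minimal Python sketch (not part of the chapter's code) evaluates the residual of the reactive-stage mass balance, Eq. (1), for a single component on a single stage; the function name and all arguments are illustrative assumptions.

```python
import numpy as np

def stage_mass_residual(nL_prev, x_prev, nV_next, y_next, nF, xF,
                        nL, nSL, x, nV, nSV, y, VLH, nu, r):
    """Residual of Eq. (1) for one component on one reactive stage.

    nu : stoichiometric coefficients of this component in each reaction (len nRx)
    r  : reaction rates on this stage (len nRx)
    """
    # (V_L H)_j * sum_n nu_{i,n} r_{j,n}: generation term of the reactive stage
    reaction_term = VLH * np.dot(nu, r)
    return (nL_prev * x_prev + nV_next * y_next + nF * xF
            - (nL + nSL) * x - (nV + nSV) * y + reaction_term)
```

A rigorous column model drives this residual to zero on every reactive stage simultaneously with the energy balance of Eq. (2).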

A first approximation is carried out using a mathematical model based on steady-state equations; these equations are taken as the basis for modeling in the dynamic state, which is necessary for the implementation of fuzzy dynamic artificial neural networks. Therefore, the models presented in this chapter are those implemented for artificial intelligence systems.

Dynamic fuzzy neural networks have been implemented to solve nonlinear mathematical models; in process engineering, they have been implemented in temperature control systems. In this chapter, the use of artificial intelligence techniques to calculate the temperature of a reactive column, where azeotropes are present in a ternary mixture, is shown.


2. Artificial neural networks

Artificial neural networks arise from the analogy between the human brain and computer processing, dating from the first analyses of the human brain carried out by Ramón y Cajal [2]. This analogy extends from aspects of the neural structure to processing capacity.

Artificial neural networks are mathematical models that attempt to mimic the capabilities and characteristics of their biological counterparts. Neural networks are made up of simple calculation elements, all of them interconnected with a certain topology or structure; the simplest such elements are neurons called perceptrons. The basic model of a neuron is formed by the following elements (Figure 1) [3, 4]:

  • A set of synapses, which are the inputs of the neuron, each weighted by a synaptic weight.

  • An adder that simulates the body of the neuron and obtains the level of excitation.

  • The activation function generates the output if the excitation level is reached and restricts the output level, thus avoiding network saturation.

  • The output of the neuron is given by the expression:

Figure 1.

Elementary neuron.

$$y_i = \varphi\left(\sum_{j=1}^{n} w_{ij}\, s_j + w_{i0}\right) \tag{3}$$

where n indicates the number of inputs to neuron i and $\varphi$ denotes the excitation function [5, 6]. The argument of the activation function is the linear combination of the inputs of the neuron. If we consider the set of inputs and the weights of neuron i as vectors of dimension (n + 1), the expression is written as follows:

$$y_i = \varphi\left(\mathbf{w}_i^{T}\mathbf{s}\right) \tag{4}$$

where

$$\mathbf{s} = \left[1\;\; s_1\;\; s_2\;\; \cdots\;\; s_n\right]^{T} \tag{5}$$
$$\mathbf{w}_i = \left[w_{i0}\;\; w_{i1}\;\; \cdots\;\; w_{in}\right]^{T} \tag{6}$$
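The neuron model of Eqs. (3)–(6) can be stated in a few lines of code; the sketch below is a hypothetical illustration assuming a tanh excitation function and NumPy arrays.

```python
import numpy as np

def neuron_output(w, s, phi=np.tanh):
    """Eq. (4): y_i = phi(w_i^T s), with s augmented by a leading 1 (bias input)."""
    s_aug = np.concatenate(([1.0], s))   # Eq. (5): prepend the bias input
    return phi(np.dot(w, s_aug))         # Eqs. (3)-(4): weighted sum, then excitation

# example: weight vector [w_i0, w_i1, w_i2] and two inputs
y = neuron_output(w=np.array([0.1, 0.5, -0.3]), s=np.array([2.0, 1.0]))
```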

Neural networks are classified into static and dynamic networks. The former have a broader field of application, mainly because they do not change over time; dynamic networks are applied more specifically to problems that change over time [7, 8].

Static and dynamic neural networks share similar mathematical structures, training procedures, and architectural principles. The most commonly used networks are the so-called multilayer neural networks, mainly because they resemble structures of the human brain; they can be networks with forward propagation but also networks with backward propagation, and the selection depends on the type of system under study and the application of the network [9, 10]. For systems that predict breakthrough curves in adsorption processes, multilayer neural networks with forward propagation are generally used, because a backward propagation of information as a means of comparison is not necessary; the latter are most commonly applied in control processes [11, 12, 13].

2.1 Multilayer networks

A multilayer network has a defined structure: it consists of an input layer, hidden layers, and an output layer (Figure 2). Defining the structure of a multilayer neural network avoids problems with the training of the network, which generally result in prediction problems. Establishing the architecture of the neural network is mainly based on trial and error, although for an expert programmer this effort is significantly reduced, mainly because there are already established mechanisms to determine the architecture. Hecht-Nielsen (1989) [14], based on Kolmogorov's theorem [15, 16, 17], stated that "the number of neurons in the hidden layer does not need to be greater than twice the number of inputs"; using this theorem, the approximate number of neurons in the hidden layer is established [18, 19] by Eq. (7).

Figure 2.

Multilayer network.

$$h = \frac{2}{3}\left(n + m\right) \tag{7}$$

where h represents the number of neurons in the hidden layer, n the number of inputs, and m the number of hidden layers. Using this rule, a stopping parameter is established, which means that the number of neurons in the hidden layer will never be required to exceed twice the number of inputs, h < 2n. For a multilayer network with a single hidden layer, it is recommended that the number of neurons be 2/3 of the number of inputs [20, 21].
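A small helper can encode this heuristic; the sketch below assumes the grouping $h = \frac{2}{3}(n + m)$ for Eq. (7), which is one plausible reading of the rule, and enforces the cap h < 2n.

```python
def hidden_neurons(n_inputs, n_hidden_layers):
    """Sizing heuristic of Eq. (7), capped at h < 2n per Hecht-Nielsen's rule."""
    h = round(2.0 / 3.0 * (n_inputs + n_hidden_layers))
    return min(h, 2 * n_inputs - 1)

# example: 16 inputs and 2 hidden layers, as in the network used later
print(hidden_neurons(16, 2))  # -> 12
```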

The next step in structuring a neural network is establishing the excitation functions; these functions propagate the information and are used in the training of the network. The information introduced into the network is weighted by synaptic weights, alluding to the synapses of biological neurons [22, 23]. The excitation functions are of different types, and their choice depends on the type of process to be modeled; an excitation function is found in each of the neurons, in the hidden layers as well as in the inputs and outputs. The most commonly used functions are the tangential sigmoid, Eq. (8), the logarithmic sigmoid, Eq. (9), and radial-basis-type functions, Eq. (10); this last function is one of the more complex ones, generally used for dynamic systems. It can be used in non-dynamic processes, but this increases the computing time and information processing, mainly because it becomes more specific in its application, Eqs. (11)–(16) [24, 25, 26, 27].

$$\varphi = \frac{e^{w_i} - e^{-w_i}}{e^{w_i} + e^{-w_i}} \tag{8}$$
$$\varphi = \frac{1}{1 + e^{-w_i}} \tag{9}$$
$$\varphi = \sum_{i=1}^{N} w_i\, \Phi\!\left(\left\| w - c_i \right\|\right) \tag{10}$$

Gaussian function

$$\Phi(w) = e^{-w_i^2} \tag{11}$$

Multiquadratic function

$$\Phi(w) = \sqrt{1 + w_i^2} \tag{12}$$

Reciprocal multiquadratic function

$$\Phi(w) = \frac{1}{\sqrt{1 + w_i^2}} \tag{13}$$

Polyharmonic function

$$\Phi(w) = w_i^k, \quad k = 1, 3, 5, \ldots \tag{14}$$
$$\Phi(w) = w_i^k \ln\left(w_i\right), \quad k = 2, 4, 6, \ldots \tag{15}$$

Quadratic function

$$\Phi(w) = w_i^2 \tag{16}$$
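The excitation and radial basis functions of Eqs. (8)–(16) translate directly into code; the following sketch assumes the conventional square-root forms for the multiquadratic functions, since the flattened notation above does not show the radicals.

```python
import numpy as np

def tanh_sigmoid(v):            # Eq. (8): tangential sigmoid
    return (np.exp(v) - np.exp(-v)) / (np.exp(v) + np.exp(-v))

def log_sigmoid(v):             # Eq. (9): logarithmic (logistic) sigmoid
    return 1.0 / (1.0 + np.exp(-v))

def gaussian_rbf(r):            # Eq. (11)
    return np.exp(-r**2)

def multiquadric(r):            # Eq. (12), assuming the usual sqrt form
    return np.sqrt(1.0 + r**2)

def inv_multiquadric(r):        # Eq. (13)
    return 1.0 / np.sqrt(1.0 + r**2)

def polyharmonic(r, k):         # Eqs. (14)-(15): odd k plain, even k with log
    return r**k if k % 2 == 1 else r**k * np.log(np.maximum(r, 1e-12))
```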

Once the excitation function, also called the transfer function, has been selected, the neural network is trained. There are different types of training and, as with the selection of the architecture, the training is also selected by trial and error, although an experienced programmer can initiate the selection with a training suited to a certain type of neural structure. The most commonly used training is backward propagation (BP) [28, 29]; other widely used types are Levenberg-Marquardt (LM) and Broyden-Fletcher-Goldfarb-Shanno (BFGS). Backward propagation training is the basis for all other training, and for that reason only this type of training will be discussed [30, 31].

The error signal at the output of neuron j in iteration k is defined by:

$$e_j(k) = d_j(k) - y_j(k) \tag{17}$$

The instantaneous value of the error is defined for neuron j; the sum of the instantaneous squared errors is formulated as:

$$\varepsilon(k) = \frac{1}{2} \sum_{j \in h_{out}} e_j^2(k) \tag{18}$$

where $h_{out}$ is the set of output neurons, $h_{out} = \{1, 2, \ldots, l\}$. The average error $\varepsilon_{av}$ is obtained by averaging the instantaneous errors corresponding to the N training pairs.

$$\varepsilon_{av} = \frac{1}{N} \sum_{k=1}^{N} \varepsilon(k) \tag{19}$$

The objective is to minimize $\varepsilon_{av}$ with respect to the weights. To obtain the weight correction $\Delta w_{ji}(k)$, the gradient of Eq. (20) must be calculated via the chain rule, Eq. (21), using the definitions of Eqs. (22) and (23):

$$\frac{\partial \varepsilon(k)}{\partial w_{ji}(k)} \tag{20}$$
$$\frac{\partial \varepsilon(k)}{\partial w_{ji}(k)} = \frac{\partial \varepsilon(k)}{\partial e_j(k)} \frac{\partial e_j(k)}{\partial y_j(k)} \frac{\partial y_j(k)}{\partial v_j(k)} \frac{\partial v_j(k)}{\partial w_{ji}(k)} \tag{21}$$
$$v_j(k) = \sum_{i=0}^{p} w_{ji}(k)\, y_i(k) \tag{22}$$
$$y_j(k) = \varphi\left(v_j(k)\right) \tag{23}$$

The components to calculate the error are defined as follows.

$$\frac{\partial \varepsilon(k)}{\partial e_j(k)} = e_j(k) \tag{24}$$
$$\frac{\partial e_j(k)}{\partial y_j(k)} = -1 \tag{25}$$
$$\frac{\partial y_j(k)}{\partial v_j(k)} = \varphi_j'\left(v_j(k)\right) \tag{26}$$
$$\frac{\partial v_j(k)}{\partial w_{ji}(k)} = y_i(k) \tag{27}$$

The gradient of the error is determined with Eq. (28).

$$\frac{\partial \varepsilon(k)}{\partial w_{ji}(k)} = -e_j(k)\, \varphi_j'\left(v_j(k)\right) y_i(k) \tag{28}$$
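The output-layer gradient of Eq. (28) can be computed for all weights at once; the sketch below is an illustrative vectorization, assuming logistic units for the derivative $\varphi'$.

```python
import numpy as np

def output_layer_gradient(d, y, v, y_in, dphi):
    """Gradient of Eq. (28): dE/dw_ji = -e_j * phi'(v_j) * y_i, as an outer product.

    d, y, v : desired outputs, actual outputs, and net inputs, shape (n_out,)
    y_in    : activations feeding this layer, shape (n_in,)
    """
    e = d - y                     # Eq. (17): error signal
    delta = e * dphi(v)           # e_j * phi'_j(v_j)
    return -np.outer(delta, y_in) # one row of weight gradients per output neuron j

# example with logistic units: phi'(v) = phi(v) * (1 - phi(v))
phi = lambda v: 1.0 / (1.0 + np.exp(-v))
dphi = lambda v: phi(v) * (1.0 - phi(v))
```

A gradient-descent step then updates the weights in the direction opposite to this gradient, scaled by the learning rate.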

2.2 Dynamic fuzzy artificial neural network (DFANN)

The DFANN uses an excitation function based on an asymmetric radial-type function, which implies that the system behaves like a Takagi-Sugeno (T-S) model, whose characteristic pulse is the bias of a radial function. For the inputs of the fuzzy neural network, it is necessary to establish the limits of the inputs within a known interval; when this type of network is applied to the determination of properties, the inputs must be defined within known ranges to avoid overlearning of the artificial neural network. The structure of the DFANN is shown in Figure 3; it is similar to traditional artificial neural network models, with the difference of the propagation of the synaptic weights in the radial-basis excitation function, which can be biased or unbiased. The structure is defined below [32]:

Figure 3.

Dynamic fuzzy artificial neural network (DFANN).

Layer 1: Each node represents an input linguistic variable.

Layer 2: Each node represents a membership function (MF), which takes the form of the Gaussian function of Eq. (29).

$$MF_{ij} = \exp\left(-\frac{\left(x_i - c_{ij}\right)^2}{\sigma_j^2}\right), \quad i = 1, \ldots, r \;\text{ and }\; j = 1, \ldots, u \tag{29}$$

where $MF_{ij}$ is the jth membership function of $x_i$, $c_{ij}$ is the center of the jth Gaussian membership function of $x_i$, $\sigma_j$ is the width of the jth Gaussian membership function of $x_i$, r is the number of input variables, and u is the number of membership functions [32].

Layer 3: Each node represents a possible IF-part of the fuzzy rules. For the jth rule $R_j$, its output is:

$$O_{R_j} = \exp\left(-\sum_{i=1}^{r} \frac{\left(x_i - c_{ij}\right)^2}{\sigma_j^2}\right), \quad j = 1, \ldots, u \tag{30}$$
$$O_{R_j} = \exp\left(-\frac{\left\| X - C_j \right\|^2}{\sigma_j^2}\right) \tag{31}$$

where X = (x₁, …, x_r) and $C_j$ is the center of the jth Radial Basis Function (RBF) unit.

Layer 4: Nodes in this layer are N (normalized) nodes. The number of N nodes is equal to that of layer 3; the output of $N_j$ is:

$$O_{N_j} = \frac{O_{R_j}}{\sum_{k=1}^{u} O_{R_k}} = \frac{\exp\left(-\left\| X - C_j \right\|^2 / \sigma_j^2\right)}{\sum_{k=1}^{u} \exp\left(-\left\| X - C_k \right\|^2 / \sigma_k^2\right)} \tag{32}$$

Layer 5: Each node in this layer represents an output variable, which is the weighted sum of the incoming signals. We have:

$$y(x) = \sum_{k=1}^{u} O_{N_k} w_{2k} = \frac{\sum_{k=1}^{u} w_{2k} \exp\left(-\left\| X - C_k \right\|^2 / \sigma_k^2\right)}{\sum_{k=1}^{u} \exp\left(-\left\| X - C_k \right\|^2 / \sigma_k^2\right)} \tag{33}$$

where y is the value of an output variable and $w_{2k}$ is the connection weight of each rule.

For the TSK (Takagi, Sugeno, and Kang) model:

$$w_{2k} = k_{j0} + k_{j1} x_1 + \cdots + k_{jr} x_r, \quad j = 1, 2, \ldots, u \tag{34}$$
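A forward pass through layers 1–5 (Eqs. (29)–(34)) can be written compactly; the following sketch assumes one shared width σ per rule and TSK consequent coefficients stored row-wise, both illustrative choices.

```python
import numpy as np

def dfann_forward(x, C, sigma, K):
    """Forward pass through the five DFANN layers (Eqs. (29)-(34)).

    x     : input vector, shape (r,)            -- layer 1
    C     : RBF centers, shape (u, r)
    sigma : rule widths, shape (u,)
    K     : TSK consequent coefficients, shape (u, r + 1)
    """
    # Layer 3: firing strength of each rule, Eq. (31) (layer-2 MFs folded in)
    o_rule = np.exp(-np.sum((x - C) ** 2, axis=1) / sigma ** 2)
    # Layer 4: normalization, Eq. (32)
    o_norm = o_rule / np.sum(o_rule)
    # TSK consequents, Eq. (34): w2_k = k_0 + k_1 x_1 + ... + k_r x_r
    w2 = K[:, 0] + K[:, 1:] @ x
    # Layer 5: weighted sum, Eq. (33)
    return np.dot(o_norm, w2)
```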

3. Methodology

3.1 Reactive distillation using artificial neural networks

Reactive distillation is implemented to separate mixtures of components that generally present more than one azeotrope. Artificial neural networks can be implemented to determine thermodynamic properties and to solve the differential equations of mass and energy transfer. In the case study presented, artificial neural networks were implemented to determine the thermodynamic properties, as well as to solve the mass transfer equations and compute the output compositions at the top and the bottom of the column (Figure 4).

Figure 4.

Schematic of reactive distillation [33].

3.2 Case study

A multicomponent mixture of ethanol, water, ethyl acetate, acetic acid, and butanol is studied, the latter in a small proportion. This multicomponent mixture forms two binary azeotropes and an azeotrope in the ternary mixture water-ethyl acetate-ethanol, which means that it is a complex reaction system, both for the determination of thermodynamic properties and for the design of a conventional distillation column.

Figure 5 shows the binary azeotrope between ethanol and water, Figure 6 the azeotrope between ethyl acetate and water, and Figure 7 that between ethanol and ethyl acetate, at different temperatures; this implies that a conventional distillation column would have to be large, both in number of plates and in geometry. Figure 8 shows the ternary diagram of azeotrope formation.

Figure 5.

Azeotrope ethanol-water.

Figure 6.

Azeotrope ethyl acetate-water.

Figure 7.

Azeotrope ethanol-ethyl acetate.

Figure 8.

Ternary diagram ethanol-ethyl acetate-water.

In the compartment model (CM), each compartment is defined to consist of multiple single stages. Without loss of generality, the balance equations for one single stage, the so-called sensitivity stage, can be replaced by the overall compartment balances. Assuming that stages $N_{c,1}$ to $N_{c,2}$ form compartment c (Figure 9), we obtain [1]:

Figure 9.

Compartment model (CM). Dashed lines depict the compartment boundaries [33].

$$M_c = \sum_{i=N_{c,1}}^{N_{c,2}} M_i \tag{35}$$
$$x_{c,j} = \frac{1}{M_c} \sum_{i=N_{c,1}}^{N_{c,2}} M_i\, x_{i,j}, \quad j \in \text{components} \tag{36}$$
$$h_c^L = \frac{1}{M_c} \sum_{i=N_{c,1}}^{N_{c,2}} M_i\, h_i^L \tag{37}$$

Assuming the compartments to be sufficiently large, single-stage dynamics can be neglected compared to overall compartment dynamics. Consequently, the single-stage balance equations are assumed stationary. Thus, the entire equation system for compartment c consists of the dynamic compartment balances together with the steady-state versions of the stage balances for j = $N_{c,1}$ … $N_{c,2}$, except for the sensitivity stage [1].
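The compartment lumping of Eqs. (35)–(37) amounts to holdup-weighted averaging; a minimal sketch, with all array shapes assumed, is:

```python
import numpy as np

def compartment_states(M, x, hL):
    """Lumped compartment quantities of Eqs. (35)-(37).

    M  : stage holdups within the compartment, shape (n_stages,)
    x  : stage liquid mole fractions, shape (n_stages, n_components)
    hL : stage liquid molar enthalpies, shape (n_stages,)
    """
    Mc = M.sum()                            # Eq. (35): total compartment holdup
    xc = (M[:, None] * x).sum(axis=0) / Mc  # Eq. (36): holdup-weighted composition
    hc = np.dot(M, hL) / Mc                 # Eq. (37): holdup-weighted enthalpy
    return Mc, xc, hc
```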

We derive the proposed model from the representation of the original compartment model as:

$$\dot{\hat{x}}(t) = \hat{f}\left(\hat{x}(t), \hat{u}(t), \hat{y}(t)\right) \tag{38}$$
$$0 = \hat{g}\left(\hat{x}(t), \hat{u}(t), \hat{y}(t), \hat{z}(t)\right) \tag{39}$$

In Eqs. (38) and (39), we introduce the following notation: differential compartment states are denoted by $\hat{x}(t)$. Compartment inputs, which are handed over by the neighboring compartments, are denoted by $\hat{u}(t)$, corresponding to the states of column stages $N_{c,1} - 1$ and $N_{c,2} + 1$. Compartment outputs, which are required in the equation systems of neighboring compartments, are denoted by $\hat{y}(t)$, corresponding to the states of column stages $N_{c,1}$ and $N_{c,2}$. The remaining (algebraic) compartment states are denoted by $\hat{z}(t)$.

When solving Eqs. (38) and (39), the main computational effort is spent on the solution of the highly nonlinear algebraic system that mainly originates from the thermodynamic relations. To reduce the computational effort, Linhart and Skogestad [34] propose interpolation between tabulated solved solutions, Eq. (40).

$$\begin{bmatrix} \hat{y}(t) \\ \hat{z}(t) \end{bmatrix} = \hat{g}^{-1}\left(\hat{x}(t), \hat{u}(t)\right) \tag{40}$$

Sophisticated computer codes are readily available for efficient training of the ANNs. In particular, ANNs can also be fitted very efficiently to large data sets, which arise from a sampling of the high-dimensional input space.

$$\dot{\hat{x}}(t) = \hat{f}\left(\hat{x}(t), \hat{u}(t), \hat{y}(t)\right) \tag{41}$$
$$\hat{y}(t) = \hat{g}_{ANN}\left(\hat{x}(t), \hat{u}(t)\right) \tag{42}$$

We highlight that the model formulation of Eqs. (41) and (42) is only one possibility of an ANN-based compartmentalization approach. The choice of this system, however, seems appealing, as it shows an analogy to the dynamic modeling of simple flash units; that is, the model outputs can be calculated as an explicit function of the model inputs (Tx-flash or single stage). Other possibilities for using ANNs exist as well, for instance, using a surrogate model for the ordinary differential equation (ODE) form of the thermodynamic system; the ANN could also be used to replace specific parts of the mapping or search of thermodynamic properties.
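As a hedged illustration of Eq. (42), one could fit an off-the-shelf regressor as the surrogate $\hat{g}_{ANN}$; the sketch below uses scikit-learn's MLPRegressor as a stand-in, and the data are random placeholders where, in practice, tabulated solutions of the algebraic (thermodynamic) subsystem would be used.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
XU = rng.random((2000, 4))                  # sampled (x_hat, u_hat) input points
Y = np.sin(XU).sum(axis=1, keepdims=True)   # stand-in for solved outputs y_hat

# fit y_hat = g_ANN(x_hat, u_hat); the layer sizes here are illustrative
surrogate = MLPRegressor(hidden_layer_sizes=(12, 12), max_iter=2000)
surrogate.fit(XU, Y.ravel())
```

During integration of Eq. (41), evaluating the fitted surrogate replaces the repeated solution of the nonlinear algebraic system.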

The combination of ANN and CM results in a new model of reactive distillation, in which the new mathematical model shows the relationship between stoichiometry and mass transfer; this relation is given by Eqs. (43) and (44).

$$T_i(t) = \sum_{i=N_{c,1}}^{N_{c,2}} O_{M_{c,i}} T_{2,i} = \frac{\sum_{k=1}^{m} \sum_{i=N_{c,1}}^{N_{c,2}} M_i\, T_{2,k} \exp\left(-\frac{(T - T_i)^2}{\varphi_k^2}\right) \exp\left(-\frac{(M - M_i)^2}{\varphi_k^2}\right)}{\sum_{k=1}^{m} \sum_{i=N_{c,1}}^{N_{c,2}} \exp\left(-\frac{(T - T_i)^2}{\varphi_k^2}\right) \exp\left(-\frac{(M - M_i)^2}{\varphi_k^2}\right)} \tag{43}$$
$$P_i(t) = \sum_{i=N_{c,1}}^{N_{c,2}} O_{K_{c,i}} P_{2,i} = \frac{\sum_{k=1}^{m} \sum_{i=N_{c,1}}^{N_{c,2}} K_i\, P_{2,k} \exp\left(-\frac{(P - P_i)^2}{\varphi_k^2}\right) \exp\left(-\frac{(K - K_i)^2}{\varphi_k^2}\right)}{\sum_{k=1}^{m} \sum_{i=N_{c,1}}^{N_{c,2}} \exp\left(-\frac{(P - P_i)^2}{\varphi_k^2}\right) \exp\left(-\frac{(K - K_i)^2}{\varphi_k^2}\right)} \tag{44}$$

Subsequently, for reactive distillation, an approach to the reaction kinetics of the studied mixture is necessary; this is established by Eq. (45).

$$r(t) = k_1 \bar{x}_2 \bar{x}_0 - k_2 \bar{x}_1 \bar{x}_3 \tag{45}$$

where $\bar{x}_0$ represents the liquid fraction of acetic acid, $\bar{x}_1$ that of water, $\bar{x}_2$ the ethanol fraction, and $\bar{x}_3$ the ethyl acetate fraction.
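Combining Eq. (45) with the Arrhenius constants listed later in Table 1 gives a one-function rate evaluation; the component ordering follows the definitions above, and the function name is an illustrative assumption.

```python
import numpy as np

def esterification_rate(x, T):
    """Eq. (45) with the rate constants of Table 1.

    x : liquid mole fractions [acetic acid, water, ethanol, ethyl acetate]
    T : temperature in K
    """
    k1 = 2900.0 * np.exp(-7150.0 / T)   # forward rate constant (Table 1)
    k2 = 7380.0 * np.exp(-7150.0 / T)   # reverse rate constant (Table 1)
    return k1 * x[2] * x[0] - k2 * x[1] * x[3]
```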

Starting from the reaction kinetics equation, balances are established for each of the components as a function of time, for each stage of the separation process, that is, for each plate in the reactive column with the DFANN.

$$M_j \frac{d\bar{x}_{i,j}}{dt} = L\bar{x}_{i,j+1} + V\bar{y}_{i,j-1} - L\bar{x}_{i,j} - V\bar{y}_{i,j} + \zeta M_j R \tag{46}$$
$$\sum_{i,j}^{n} w_{i,j} \sum_{i,j}^{m} \frac{\partial \bar{x}_{i,j}}{\partial t} M_j = \sum_{i}^{n} \sum_{j}^{m} \left( L\bar{x}_{i,j+1} + V\bar{y}_{i,j-1} - L\bar{x}_{i,j} - V\bar{y}_{i,j} + \zeta M_j R \right) \tag{47}$$

where i = 0, …, 3 indexes the components and j = 1, …, n the plates; $\bar{y}_{i,j}$ is the vapor fraction in the column and ζ the stoichiometric coefficient.

In the condenser with the DFANN:

$$M_n \frac{d\bar{x}_{i,n}}{dt} = V\bar{y}_{i,n-1} - L\bar{x}_{i,n} - D_h\bar{x}_{i,n} + \zeta M_n R \tag{48}$$
$$\sum_{i,j}^{n} w_{i,j} \sum_{i,j}^{m} \frac{\partial \bar{x}_{i,n}}{\partial t} M_n = \sum_{i}^{n} \sum_{j}^{m} \left( V\bar{y}_{i,n-1} - L\bar{x}_{i,n} - D_h\bar{x}_{i,n} + \zeta M_n R \right) \tag{49}$$

In the reboiler with the DFANN:

$$\frac{d\left(M_0 \bar{x}_{i,0}\right)}{dt} = L\bar{x}_{i,0} - V\bar{y}_{i,0} + \zeta M_0 R \tag{50}$$
$$\sum_{i,j}^{n} w_{i,j} \sum_{i,j}^{m} \frac{\partial \bar{x}_{i,0}}{\partial t} M_0 = \sum_{i}^{n} \sum_{j}^{m} \left( L\bar{x}_{i,0} - V\bar{y}_{i,0} + \zeta M_0 R \right) \tag{51}$$

where $dM_0/dt = L - V$ and $\bar{x}_{D_c,j} = 1 - \sum_{i=Ac}^{Cc} \bar{x}_{i,j}$ for j = 0, 1, …, n; the vapor mole fractions satisfy the constraint:

$$\sum_{i=Ac}^{Cc} \bar{y}_{i,j} = \sum_{i=Ac}^{Cc} \frac{P_i\left(T_j\right)\bar{x}_{i,j}}{P_j} = 1 \tag{52}$$

where $D_{c,j}$ represents the amount of bottom distillate; likewise, the partial pressures of the vapor and liquid phases are determined, and $M_j$ represents the holdup of stage j. Table 1 shows the constants for the simulation.

| M₀ (kg/s) | 4.798  | k₁ | 2900·exp(−7150/T(K)) |
| Mⱼ (kg/s) | 0.0125 | k₂ | 7380·exp(−7150/T(K)) |

Table 1.

Constants for simulation.
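A minimal sketch of the plate dynamics, Eq. (46), for one component on one plate: the neighboring-stage states are frozen and a constant equilibrium ratio y = Kx is assumed purely for illustration, with $M_j$ taken from Table 1 and all other values hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

L, V, Mj, K, zeta, R = 0.8, 1.0, 0.0125, 1.8, 1.0, 0.002
x_above, y_below = 0.40, 0.30            # fixed neighboring-stage states

def dxdt(t, x):
    y = K * x                            # vapor leaving the plate (assumed y = Kx)
    # Eq. (46) rearranged for dx/dt: (in - out + reaction) / holdup
    return (L * x_above + V * y_below - L * x - V * y + zeta * Mj * R) / Mj

sol = solve_ivp(dxdt, (0.0, 50.0), [0.2], max_step=0.5)
print(sol.y[0, -1])                      # plate composition approaching steady state
```

A full column model stacks one such balance per component per plate, plus the condenser and reboiler balances of Eqs. (48)–(51).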

The density of the mixture with the DFANN is calculated by Eq. (53).

$$\rho_l = \sum_{i=1}^{m} \sum_{d=1}^{n} A_d\, B_d^{-\left(1 - T_r\right)^{2/7}} w_{d_i,d_j}, \quad i \neq j \tag{53}$$
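Eq. (53) is built on a Rackett-type correlation; assuming the standard form ρ = A·B^(−(1−T_r)^(2/7)) (the radical structure is not fully visible in the flattened equation above), a per-component sketch is:

```python
def rackett_density(A, B, T, Tc):
    """Liquid density from the Rackett-type correlation assumed to underlie
    Eq. (53): rho = A * B**(-(1 - Tr)**(2/7)), with Tr = T/Tc.
    A, B are component-specific constants; a mixture value would then be a
    weighted sum over components, as in the double sum of Eq. (53)."""
    Tr = T / Tc
    return A * B ** (-(1.0 - Tr) ** (2.0 / 7.0))
```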

The case study simulation was performed through modular programming in Aspen Plus®, using code linking with Matlab®. The simulation parameters in Aspen Plus are shown in Figures 10–16, which summarize the parameters entered in an Aspen RadFrac column; the calculation of properties was carried out by binding Matlab® and Aspen, using the NRTL method in Aspen as well as the DFANN method in Matlab®.

Figure 10.

Global design.

Figure 11.

Configuration.

Figure 12.

Stages of feed.

Figure 13.

Pressure design.

Figure 14.

Condenser design.

Figure 15.

Reboiler design.

Figure 16.

Reaction equilibrium.

3.3 Training dynamic fuzzy artificial neural network

The neural network has a structure of 16 inputs, two hidden layers with 12 neurons in each layer, and 16 output neurons; the training is based on unsupervised training of the quasi-Newton function (QSF) type, without backward propagation.

It is worth mentioning that the structure of the neural network was optimized using a supervised algorithm based on the heuristic rule of 2n, where n is the number of inputs to the neural network, which implies that the minimum number of neurons is sought in order to avoid overlearning. Figures 17 and 18 show the learning settings.
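Quasi-Newton training as described here can be sketched with a generic optimizer; the toy network and data below are stand-ins, using SciPy's BFGS in place of the chapter's Matlab implementation, and the 2-12-1 architecture is illustrative (the chapter's 16-12-12-16 DFANN follows the same pattern with more parameters).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X, T = rng.random((50, 2)), rng.random((50, 1))   # toy training pairs

def unpack(p):
    """Split the flat parameter vector into layer weights and biases."""
    W1 = p[:24].reshape(12, 2);   b1 = p[24:36]
    W2 = p[36:48].reshape(1, 12); b2 = p[48:49]
    return W1, b1, W2, b2

def loss(p):
    """Mean squared error of the 2-12-1 network, cf. Eqs. (18)-(19)."""
    W1, b1, W2, b2 = unpack(p)
    H = np.tanh(X @ W1.T + b1)
    Y = H @ W2.T + b2
    return np.mean((Y - T) ** 2)

# quasi-Newton (BFGS) minimization of the training error
res = minimize(loss, rng.normal(scale=0.1, size=49), method="BFGS")
```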

Figure 17.

Training surface.

Figure 18.

Training response.

3.4 Results and discussion case study

Figure 19 schematically shows the distillation column in the commercial Aspen Plus® simulator. Two streams are introduced, the acetic acid stream being separated from the azeotropic mixture to facilitate the transfer of mass and energy; one stream is fed at an upper stage and the other at a lower stage to promote these phenomena.

Figure 19.

Schematic of reactive distillation.

Simulations are performed using the same configuration with 17 separation stages (Table 2); the feeds enter at stages 4 and 5, respectively. The compositions of the dome and the bottom of the column are compared to determine the purity of the components, where $y_i$ is the output composition in mole fraction. It is observed that the column simulated with artificial intelligence provides better results, based on its ability to displace the azeotropes present.

| Component | yᵢ Dome (Aspen) | yᵢ Bottom (Aspen) | yᵢ Dome (ANN) | yᵢ Bottom (ANN) | yᵢ Dome (DFANN) | yᵢ Bottom (DFANN) |
|---|---|---|---|---|---|---|
| Ethanol | 0.9698 | 0.0302 | 0.9854 | 0.0146 | 0.9978 | 0.0022 |
| Water | 0.9519 | 0.0481 | 0.9723 | 0.0277 | 0.9822 | 0.0178 |
| Acetic acid | 0.1692 | 0.8308 | 0.2056 | 0.7944 | 0.3328 | 0.6672 |
| Ethyl acetate | 0.9288 | 0.0712 | 0.9523 | 0.0477 | 0.9855 | 0.0145 |

Table 2.

Composition reactive distillation.

The reaction that takes place in the reactive stages is as follows:

Ethanol + Acetic Acid ⇌ Ethyl Acetate + Water

Figure 20 shows the behavior of the liquid in each of the stages of the column (Figure 21 shows the Aspen Plus® counterpart). It is observed that in the initial stages there is a transfer of both mass and energy; this can be verified in Figure 22, which shows the temperature profile across the reactive column. Figures 23 and 24 represent the vapor and liquid profiles in each of the stages.

Figure 20.

Composition profile per stage using DFANN.

Figure 21.

Composition profile per stage using Aspen plus®.

Figure 22.

Temperature profile per stage using DFANN.

Figure 23.

Profile per stage vapor flow using DFANN.

Figure 24.

Profile per stage liquid flow using DFANN.

In the same simulations, comparisons were made between thermodynamic properties such as enthalpy and entropy, as used in the thermodynamic models employed.

In Figure 25, the thermodynamic properties calculated using the artificial intelligence algorithm are shown.

Figure 25.

Profile enthalpy DFANN.


4. Conclusion

The implementation of artificial intelligence techniques such as artificial neural networks in separation processes provides promising results in terms of feasibility and process dynamics, with a significant decrease in computational time for the neural network models compared with the use of robust models for the calculation of properties. Likewise, in prediction-based fitting and in the separation of azeotropes based on variables such as temperature and pressure, neural networks provide better results than the robust thermodynamic models of Aspen Plus®, models that in some cases implement statistical molecular mechanics. Fuzzy artificial neural networks adjust to the dynamics of the reactive column process, where a separation of 99% is obtained, which implies that the azeotrope is displaced in comparison with traditional models, adjusting the parameters according to the change in stoichiometry. One of the advantages is the ability to predict the displacement of the azeotrope as a function of the temperature and pressure of the system, as well as the ability to keep the variables within permissible limits and to limit the number of stages, without requiring designs as large as those the robust models mentioned can give.

References

  1. 1. Haydary J. Chemical Process Design and Simulation: Aspen Plus and Aspen Hysys Applications. John Wiley & Sons; 2019
  2. 2. Ramon Y, Cajal S. Textura del Sistema Nervioso del Hombre y de los Vertebrados. Vol. 2. Madrid: Nicolas Moya; 1904
  3. 3. Yegnanarayana B. Artificial Neural Networks. PHI Learning Pvt Ltd; 2009
  4. 4. Hopfield JJ. Artificial neural networks. Circuits and Devices Magazine, IEEE. 1988;4(5):3-10. DOI: 10.1109/101.8118
  5. 5. Drew PJ, Monson JR. Artificial neural networks. Surgery. 2000;127(1):3-11. DOI: 10.1067/msy.2000.102173
  6. 6. Abraham A. Artificial neural networks. In: Handbook of Measuring System Design. 2005
  7. 7. Mäkisara K, Simula O, Kangas J, Kohonen T. Artificial Neural Networks. Vol. 2. Elsevier; 2014
  8. 8. Narendra KS, Parthasarathy K. Identification and control of dynamical systems using neural networks. IEEE Transactions on Neural Networks. 1990;1(1):4-27. DOI: 10.1109/72.572089
  9. 9. Gupta M, Jin L, Homma N. Static and Dynamic Neural Networks: From Fundamentals to Advanced Theory. John Wiley & Sons; 2004. DOI: 10.1002/0471427950
  10. 10. Chiang YM, Chang LC, Chang FJ. Comparison of static-feedforward and dynamic-feedback neural networks for rainfall–runoff modeling. Journal of Hydrology. 2004;290(3):297-311. DOI: 10.1016/j.jhydrol.2003.12.033
  11. 11. Pearlmutter BA. Learning state space trajectories in recurrent neural networks. Neural Computation. 1989;1(2):263-269. DOI: 10.1162/neco.1989.1.2.263
  12. 12. Basheer IA, Hajmeer M. Artificial neural networks: Fundamentals, computing, design, and application. Journal of Microbiological Methods. 2000;43(1):3-31. DOI: 10.1016/S0167-7012(00)00201-3
  13. 13. Miller WT, Werbos PJ, Sutton RS. Neural Networks for Control. MIT Press; 1995. Available from: https://dl.acm.org/doi/abs/10.5555/104204
  14. 14. Hecht-Nielsen R. Theory of the backpropagation neural network. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN). IEEE; June 1989. pp. 593-605. DOI: 10.1016/B978-0-12-741252-8.50010-8
  15. 15. Hecht-Nielsen R. Neurocomputing: picking the human brain. IEEE Spectrum. 1988;25(3):36-41. DOI: 10.1109/6.4520
  16. 16. Hecht-Nielsen R. On the algebraic structure of feed forward network weight spaces. Advanced Neural Computers. 1990:129-135. DOI: 10.1016/B978-0-444-88400-8.50019-4
  17. 17. Kůrková V. Kolmogorov’s theorem and multilayer neural networks. Neural Networks. 1992;5(3):501-506. DOI: 10.1016/0893-6080(92)90012-8
  18. 18. Hornik K, Stinchcombe M, White H. Multilayer feedforward networks are universal approximators. Neural Networks. 1989;2(5):359-366. DOI: 10.1016/0893-6080(89)90020-8
  19. 19. Kolmogorov AN. The representation of continuous functions of many variables by superposition of continuous functions of one variable and addition. Doklady Akademii Nauk SSSR. 1957;114(5):953-956. Available from: http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=dan&paperid=22050&option_lang=eng
  20. 20. Yager RR, Kacprzyk, J. (Eds.). The Ordered Weighted Averaging Operators: Theory and Applications. Springer Science & Business Media; 2012. Available from: https://link.springer.com/chapter/10.1007/978-94-009-0125-4_44
  21. 21. Chen AM, Hecht-Nielsen R. On the geometry of feedforward neural network weight spaces. In: Artificial Neural Networks, 1991, Second International Conference on IET. 1991. pp. 1-4
  22. 22. Kohonen T. Self-organized formation of topologically correct feature maps. Biological Cybernetics. 1982;43(1):59-69
  23. 23. Mehrotra K, Mohan CK, Ranka S. Elements of Artificial Neural Networks. MIT Press; 1997
  24. 24. Rojas R. Neural Networks: A Systematic Introduction. Springer Science & Business Media; 2013
  25. 25. Cybenko G. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems. 1989;2(4):303-314. DOI: 10.1007/BF02134016
  26. 26. Hashem S. Sensitivity analysis for feedforward artificial neural networks with differentiable activation functions. In: Neural Networks, 1992 IJCNN, International Joint Conference on IEEE. Vol. 1. 1992. pp. 419-424. DOI: 10.1109/IJCNN.1992.287175
  27. 27. Shen W, Guo X, Wu C, Wu D. Forecasting stock indices using radial basis function neural networks optimized by artificial fish swarm algorithm. Knowledge-Based Systems. 2011;24(3):378-385. DOI: 10.1016/j.knosys.2010.11.001
  28. 28. White H. Artificial neural networks: Approximation and learning theory. Blackwell Publishers Inc; 1992
  29. 29. Haykin SS. Neural Networks and Learning Machines. Vol. 3. Upper Saddle River: Pearson Education; 2009
  30. 30. Leonard J, Kramer MA. Improvement of the backpropagation algorithm for training neural networks. Computers & Chemical Engineering. 1990;14(3):337-341. DOI: 10.1016/0098-1354(90)87070-6
  31. 31. Goh ATC. Back-propagation neural networks for modeling complex systems. Artificial Intelligence in Engineering. 1995;9(3):143-151. DOI: 10.1016/0954-1810(94)00011-S
  32. 32. Wu S, Er MJ, Liao J. A novel learning algorithm for dynamic fuzzy neural networks. In: Proceedings of the 1999 American Control Conference (Cat. No. 99CH36251). Vol. 4. IEEE; 1999. pp. 2310-2314. DOI: 10.1109/ACC.1999.786445
  33. 33. Schäfer P, Caspari A, Kleinhans K, Mhamdi A, Mitsos A. Reduced dynamic modeling approach for rectification columns based on compartmentalization and artificial neural networks. AICHE Journal. 2019;65(5):e16568. DOI: 10.1002/aic.16568
  34. 34. Linhart O, Gela D, Rodina M, Kocour M. Optimization of artificial propagation in European catfish, Silurus glanis L. Aquaculture. 2004;235(1–4):619-632. DOI: 10.1016/j.aquaculture.2003.11.031
