1. Introduction
Artificial intelligence (AI) is a branch of computer science that seeks to create intelligent behavior in machines. While humans have long used computers to simplify many tasks, AI provides new ways to use them. For instance, voice recognition software uses AI to transform sounds into the equivalent text. AI includes several techniques; an artificial neural network (ANN) is one of them.
Humans use their intelligence to solve complex problems and perform daily tasks. Human intelligence is provided by the brain. Small processing units called neurons are the main components of the human brain. ANNs try to partially imitate the behavior of the human brain. Thus, artificial neurons are designed to mimic the activities of biological neurons.
Humans learn by experience: they are exposed to events that encourage their brains to acquire knowledge. Similarly, ANNs extract information from a data set; this set is typically called the training set and is organized much as schools organize the content of their courses. ANNs also provide an excellent way to better understand biological neurons. In practice, some problems may be described by a data set, and an ANN is then trained using that set. For other problems, building a data set may be very difficult or even impossible, because the set would have to capture all possible cases of the experiment.
Simulated annealing (SA) is a method that can be used to solve an ample set of optimization problems. SA is a very robust technique as it is not deceived by local minima. Additionally, a mathematical model is not required to apply SA to solve most optimization problems.
This chapter explores the use of SA to train an ANN without the requirement of a data set. The chapter ends with a computer simulation where an ANN is used to drive a car. Figure 1 shows the system architecture. SA is used to provide a new set of weights to the ANN. The ANN controls the acceleration and rotation speed of the car. The car provides feedback by sending vision information to the ANN. The distance traveled along the road from the starting point is used to measure the driving performance of the ANN.
2. Artificial neural networks
An ANN is a computational method inspired by biological processes to solve problems that are very difficult for computers or humans. One of the key features of ANNs is that they can adapt to a broad range of situations. They are typically used where a mathematical equation or model is missing, see [4]. The purpose of an ANN is to extract, map, classify or identify some sort of information that is allegedly hidden in the input, [13].
2.1. Neuron
The human brain is composed of processing units called neurons. Each neuron is connected to other neurons to form a neural network. Similarly, the basic components of an ANN are the neurons. Neurons are arranged in layers inside the ANN. Each layer has a fixed number of neurons, see [5]. For instance, the ANN shown in Figure 2 has three layers: the input layer, the hidden layer, and the output layer.
An ANN accepts any kind of input that can be expressed as a set of numeric values; typical inputs may include: an image, a sound, a temperature value, etc. The output of an ANN is always dependent upon its input. That is, a specific input will produce a specific output. When a set of numeric values is applied to the input of an ANN, the information flows from one neuron to another until the output layer generates a set of values.
2.2. Activation function
The internal structure of an artificial neuron is shown in Figure 3(a). The output value y of the neuron is computed by applying the activation function to the weighted sum of the inputs,

y = f(w1 x1 + w2 x2 + ... + wn xn),

where each xi is an input value, each wi is the weight of the respective connection, and f is the activation function of Equation 2.
The activation function of Equation 2 is plotted in Figure 3(b). The activation function (in this figure) is real, continuous, bounded, and has a positive derivative.
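The properties listed above are satisfied by the logistic sigmoid, a common choice for this kind of activation function. Since Equation 2 itself is not reproduced in this text, the following C++ sketch assumes that form; the function names are illustrative:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Logistic sigmoid: real, continuous, bounded to (0, 1), and with a
// strictly positive derivative -- the properties the text lists for
// the activation function (the exact form of Equation 2 is assumed).
double activation(double x) {
    return 1.0 / (1.0 + std::exp(-x));
}

// Output of one artificial neuron: the activation function applied to
// the weighted sum of the inputs (cf. the structure of Figure 3(a)).
double neuronOutput(const std::vector<double>& w,
                    const std::vector<double>& x) {
    double sum = 0.0;
    for (std::size_t i = 0; i < w.size(); ++i) sum += w[i] * x[i];
    return activation(sum);
}
```

With all weights at zero the weighted sum is zero and the output is exactly 0.5, the midpoint of the sigmoid.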
A neuron can be active or inactive: when the neuron is active its output is 1; when it is inactive its output value is 0. Some input values activate some neurons, while other values may activate other neurons. For instance, in Figure 4, the sound of a spoken word activates a specific group of neurons inside the network.
3. Learning
Before an ANN can be used for any practical purpose, it must be trained. An ANN learns during its training: for the duration of the learning process, the ANN weights are repeatedly adjusted.
3.1. Structured learning
In some instances, ANNs may learn from a data set. This set is typically known as the training set, and it is used on a new ANN (as its name indicates) for training. The training set has two parts: the input and the target. The input contains the set of inputs that must be applied to the network. The target includes the set of desired values at the output of the ANN. In other words, each sample (case) in the training set completely specifies all inputs, as well as the outputs that are desired when those inputs are presented, see [7]. During the training, each case in the training set is presented to the network, and the output of the network is compared with the desired output. After all cases in the training set have been processed and the weights of the network updated, an epoch (or iteration) is complete. There are several methods for updating the weights at each epoch or iteration. All these methods update the weights in such a way that the error (measured between the actual output of the network and the desired output) is reduced at each epoch, see [7].
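The text does not specify the exact error measure compared at the end of each epoch; the mean squared error below is a common choice, and the function and variable names are illustrative:

```cpp
#include <cstddef>
#include <vector>

// Mean squared error between the actual network outputs and the
// desired (target) outputs over all cases of one epoch. Training
// methods adjust the weights so that this value decreases from
// epoch to epoch.
double meanSquaredError(const std::vector<std::vector<double>>& actual,
                        const std::vector<std::vector<double>>& target) {
    double sum = 0.0;
    std::size_t count = 0;
    for (std::size_t c = 0; c < actual.size(); ++c)
        for (std::size_t k = 0; k < actual[c].size(); ++k) {
            double d = actual[c][k] - target[c][k];
            sum += d * d;
            ++count;
        }
    return count ? sum / count : 0.0;
}
```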
Some training methods are based on the gradient of the error (they are called gradient-based methods). These methods quickly converge to the closest local minimum. The most typical gradient-based methods to train an ANN are: the variable metric method, the conjugate gradient method, and the Levenberg-Marquardt method. To make the training of an ANN robust, it is recommended to combine gradient-based methods with other optimization methods such as SA.
When an ANN is trained using a data set, typically the set includes many training cases, and the training is done all at once. This kind of training is called structured learning because the knowledge is organized in the data set for the network to learn. One of the main disadvantages of structured learning is that the training set has to be prepared to describe the problem at hand. Another disadvantage of structured learning is that if more cases are added to the training set, the ANN has to be trained again from scratch. As ANN training is time consuming, structured learning may be inadequate for problems that require continuous adaptation.
3.2. Continuous learning
In continuous learning, an ANN does not require a data set for training; the ANN learns by experience and is able to adapt progressively by incorporating knowledge gradually. Because some problems cannot be appropriately described by a data set, and because training using a data set can be time consuming, continuous learning is important for real-time computing (RTC) where there is a "real-time constraint".
3.3. Validation
After the ANN training has been completed, the network performance has to be validated. When using ANNs, the validation process is extremely important; as a matter of fact, the validation is as important as the training, see [7]. The purpose of the validation is to predict how well the ANN will behave in other conditions in the future. The validation process may be performed using a data set called the validation set. The validation set is similar to, but not equal to, the training set. Under normal circumstances (when the ANN is properly used), the errors obtained during training and during validation should be similar.
4. Simulated annealing
SA is an optimization method that can be used to solve a broad range of problems, [11]. SA is recommended for complex optimization problems. The algorithm begins at a specific temperature; as time passes, the temperature gradually decreases following a cooling schedule as shown in Figure 5. The solution is typically described by a set of variables, but it can be described by other means. Once the algorithm has started, the solution approaches progressively the global minimum that presumably exists in a complex error surface, see [16] and [15]. Because of its great robustness, SA has been used in many fields including the training of ANNs with structured learning, [9].
One of the key features of SA is that it always provides a solution, even though the solution may not be optimal. For some optimization problems that cannot be easily modeled, SA may provide a practical option to solve them.
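As a sketch only (not the chapter's implementation), the following C++ fragment shows the skeleton of SA minimizing a one-variable cost function. The exponential cooling, temperature-sized steps, and Metropolis acceptance rule are the standard choices; all names are illustrative:

```cpp
#include <cmath>
#include <random>

// Minimal SA sketch for a one-variable problem. The chapter's actual
// cost is the driving error of an ANN; here cost() is any function
// of one variable.
double simulatedAnnealing(double (*cost)(double), double x0,
                          double t0, double tf, int steps, unsigned seed) {
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> unit(0.0, 1.0);
    double x = x0, best = x0;
    for (int k = 0; k < steps; ++k) {
        // Exponential cooling from t0 down to tf.
        double t = t0 * std::pow(tf / t0, (double)k / (steps - 1));
        std::uniform_real_distribution<double> step(-t, t);
        double cand = x + step(gen);
        double dE = cost(cand) - cost(x);
        // Metropolis rule: always accept improvements; sometimes accept
        // worse solutions (this is what lets SA escape local minima).
        if (dE <= 0.0 || unit(gen) < std::exp(-dE / t)) x = cand;
        if (cost(x) < cost(best)) best = x;
    }
    return best;
}
```

At high temperatures the candidate steps are large and bad moves are often accepted (exploration); as the temperature falls, the steps shrink and the rule becomes nearly greedy.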
5. Simulated annealing evolution
Simulated annealing evolution includes the use of: ANNs, continuous learning and SA. In simulated annealing evolution, an ANN does not require a training set; instead the ANN gradually learns new skills or improves existing ones by experience. Figure 5 shows how simulated annealing evolution works. In this figure, a typical cooling schedule used in SA is displayed. Suppose that there is a 2D landscape with valleys and hills as shown in this figure. Suppose also that it is desired to find the deepest valley on this landscape. Each of the balls in this figure represents an ANN. At the beginning of the simulation, the high initial temperature produces a state of high energy; the balls shake powerfully and are able to pass easily over the high hills of the 2D terrain. In other words, each ANN is exploring; that is, the ANN is in the initial step of learning. As the temperature decreases, the energy of the balls decreases, and the movement of the balls is more restricted than at high temperatures, see [1]. Thus, as the temperature diminishes, the ANN has less freedom to explore new information, as the network has to integrate new knowledge with previous knowledge. By the end of the cooling schedule, it is desired that one of the balls has reached the deepest valley in the landscape; in other words, that one ANN has learned a set of skills.
At each temperature, an ANN (in the set) has the chance to use its knowledge to perform a specific task. As the temperature decreases, each ANN has the chance to improve its skills. If the ANNs are required to incorporate new skills, temperature cycling can be used, see [6]. Specifically, an ANN may learn by a combination of SA and some sort of hands-on experience. Thus, simulated annealing evolution is the training of an ANN using SA without a training set.
Each ANN may be represented by a set of coefficients or weights. For illustrative purposes only, suppose that an ANN may be described by a set of two weights; the error of the network can then be visualized as a surface over the plane defined by these two weights.
6. Problem description
To illustrate how to use simulated annealing evolution, this section presents a simple learning problem. The problem consists of using an ANN to drive a car on a simple road. Clearly, the problem has two objects: the road and the car. The road includes a set of straight segments and turning points.
The simulation was performed using object oriented programming (OOP); the respective UML diagrams for the simulation are shown in Figures 8 and 9. The two basic classes are shown in the diagram of Figure 8. This diagram includes two classes: Point and Bounds. The Point class in the diagram represents a point in a 2-Dimensional space; the diagram indicates that this structure includes only two floating point variables: x and y.
6.1. The object class
Figure 9 shows the respective UML diagram for the Object class. This class represents a static object in a 2-Dimensional space. The class name is displayed in italics, indicating that this class is abstract. As it can be seen, the Object class provides the basic properties and methods that are common to all objects in the simulation.
6.2. The mobile class
This class represents a moving object in a 2-Dimensional space. Each mobile object has a position, a velocity, and an acceleration. At each period of time Δt, the position of the object is updated using

p ← p + v Δt,

where p is the position and v is the velocity of the object. Similarly, the velocity is updated using

v ← v + a Δt,

where the symbol a represents the acceleration of the object.
In some cases, the object may be rotating at a constant speed and, hence, the rotation angle has also to be updated at each period of time.
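The per-period state update described above can be sketched as follows; the class and member names are illustrative, not the chapter's actual Mobile class:

```cpp
struct Vec2 { double x, y; };

// Sketch of a moving object in 2D: at each simulation period dt the
// position is advanced by the velocity, the velocity by the
// acceleration, and the heading by a constant rotation speed.
struct Mobile {
    Vec2 pos{0, 0}, vel{0, 0}, acc{0, 0};
    double angle = 0.0, rotSpeed = 0.0;

    void update(double dt) {
        pos.x += vel.x * dt;      // p <- p + v*dt
        pos.y += vel.y * dt;
        vel.x += acc.x * dt;      // v <- v + a*dt
        vel.y += acc.y * dt;
        angle += rotSpeed * dt;   // constant-rate rotation
    }
};
```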
During the experiments, the methods of this class were used to update the position, velocity, and rotation angle of the car at each period of time.
6.3. The road
The simulation experiments were performed using only two classes: the Road class and the Car class. The UML diagram of the Road class is shown in Figure 9. The Road class is derived directly from the Object class.
The straight segment from one point to the next is represented by its two end points.
6.4. The car
The car used for the simulation is shown in Figure 10. The car has a position represented by a point in the 2-Dimensional space.
Figure 11 illustrates how the car is able to receive information about its surroundings. The car had seven vision points illustrated by the arrows in the figure. To prevent the ANN from driving backwards, no vision lines were placed in the back of the car. Each vision line measures the distance from the car to the closest object in that direction; these distances are applied to the input of the ANN.
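Computing the value of a vision line reduces to a ray-segment intersection test, the kind of geometric intersection problem treated in reference [14]. A sketch, with illustrative names:

```cpp
#include <cmath>

// Distance along a vision ray (origin ox,oy, direction dx,dy) to a
// wall segment (x1,y1)-(x2,y2). Returns a large value when the ray
// misses the segment, so the ANN input saturates for distant walls.
double rayToSegment(double ox, double oy, double dx, double dy,
                    double x1, double y1, double x2, double y2) {
    const double MISS = 1.0e9;
    double ex = x2 - x1, ey = y2 - y1;              // segment direction
    double denom = dx * ey - dy * ex;
    if (std::fabs(denom) < 1e-12) return MISS;      // parallel: no hit
    // Solve o + t*d = a + s*e for t (along the ray) and s (along the
    // segment) using Cramer's rule.
    double t = ((x1 - ox) * ey - (y1 - oy) * ex) / denom;
    double s = ((x1 - ox) * dy - (y1 - oy) * dx) / denom;
    if (t < 0.0 || s < 0.0 || s > 1.0) return MISS; // behind or off-end
    return t;                                       // distance to wall
}
```

A vision line's value is the minimum of this distance over all wall segments of the road.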
In real life, a car driver is not able to modify directly the position or velocity of the car; the driver only controls the acceleration and the turning speed. As mentioned before, each car in the simulation has an ANN to do the driving. At each period of time, the ANN receives the vision information from the surroundings and computes the acceleration and the rotation speed of the car. Figure 12 shows the ANN of the car; the ANN has 8 inputs and two outputs. As it can be seen from this figure, the speed of the car is also applied to the input of the network; this is very important because the ANN needs to react differently depending on the current speed of the car. For instance, if the car moves at high speed and faces a turn, it needs to appropriately reduce its speed before turning. As a matter of fact, the ANN needs to be ready for an unexpected turn and may regulate the speed of the car constantly.
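A minimal sketch of such a network's forward pass follows. The hidden-layer size and the bias convention are assumptions, since the chapter's figures are not reproduced here:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

double activation(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// One layer: out[j] = f(sum_i w[j][i]*in[i] + bias); by convention
// here, the last weight of each row is the bias (an assumption).
Vec layer(const Mat& w, const Vec& in) {
    Vec out(w.size());
    for (std::size_t j = 0; j < w.size(); ++j) {
        double sum = w[j].back();                       // bias term
        for (std::size_t i = 0; i < in.size(); ++i) sum += w[j][i] * in[i];
        out[j] = activation(sum);
    }
    return out;
}

// Sketch of the car's network: 7 vision distances plus the current
// speed (8 inputs) feed a hidden layer, whose outputs feed 2 output
// neurons (acceleration and rotation speed).
Vec drive(const Mat& wHidden, const Mat& wOut, const Vec& input8) {
    return layer(wOut, layer(wHidden, input8));
}
```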
7. Experimental results
This section explains how SA was used to train the ANN. The implementation of SA was divided into three steps: initialization, perturbation and error computation.
The ANN training process using SA is illustrated in Figure 13. The simulation starts by randomly setting the network weights using a uniform probability distribution from −30 to 30.
The cooling schedule used in the simulations is described by Equation 7,

T_k = T_0 (T_f / T_0)^(k / (N − 1)),

where T_0 is the initial temperature, T_f is the final temperature, N is the number of temperatures, and k = 0, 1, ..., N − 1 is the temperature index. This schedule decreases the temperature exponentially from T_0 to T_f.
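The results section reports an exponential schedule with an initial temperature of 30, a final temperature of 0.1, and 10 temperatures; a sketch of such a schedule (the exact form of Equation 7 is assumed here):

```cpp
#include <cmath>

// Exponential cooling: temperature number k of n, decaying from t0
// down to tf. With t0 = 30, tf = 0.1 and n = 10 this reproduces the
// parameters reported in the experimental results.
double temperature(int k, int n, double t0, double tf) {
    return t0 * std::pow(tf / t0, (double)k / (n - 1));
}
```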
Observe that each time the ANN weights are perturbed, the ANN is allowed to drive the car. Then, the error is computed and the oracle makes a decision about whether the new set of weights is accepted or rejected using the probability of acceptance of Equation 8, see [5] and [11]. Some implementations of SA accept a new solution only if the new solution is better than the old one, i.e., they accept the solution only when the error decreases; see [7] and [10]. The probability of acceptance is defined as
P = 1 when the error decreases (ΔE ≤ 0), and P = exp(−ΔE / T) otherwise, where ΔE is the change in error produced by the new set of weights and T is the current temperature.
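Assuming the standard Metropolis rule of reference [10], the oracle's decision can be sketched as:

```cpp
#include <cmath>

// Probability of accepting a candidate set of weights: accept
// unconditionally when the error decreases; otherwise accept with
// probability exp(-dError / t), which shrinks as the temperature
// t cools (standard Metropolis form, assumed here).
double acceptanceProbability(double dError, double t) {
    if (dError <= 0.0) return 1.0;
    return std::exp(-dError / t);
}
```

Comparing this probability against a uniform random number in [0, 1) yields the accept/reject decision.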
A uniform probability distribution was used to generate states for subsequent consideration. At high temperatures, the algorithm may frequently accept an ANN (a set of weights) even if the ANN does not drive better than the previous one. During this phase, the algorithm explores in a very wide range looking for a globally optimal ANN, and it is not concerned too much with the quality of the ANN. As the temperature decreases, the algorithm is more selective, and it accepts a new ANN only if its error is less than or very similar to the previous solution error, following the decision made by the oracle.
7.1. SA initialization
Because of the properties of the activation function of Equation 2, the output of an ANN is limited. As mentioned before, an ANN is trained by adjusting the weights that connect the neurons. The training of an ANN can be simplified if the input applied to the ANN is limited. Specifically, if the input values are limited from −1 to 1, then the ANN weights are limited to approximately from −30 to 30, [8] and [12]. To simplify the simulation, the input values of the ANN were scaled from −1 to 1. Therefore, the SA initialization consisted in simply assigning a random value from −30 to 30, using a uniform probability distribution, to each of the ANN weights as shown in the C++ code of Figure 14. Observe that the random number generator uses the (−30, 30) interval.
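Since the code of Figure 14 is not reproduced here, the following sketch shows an equivalent initialization using the C++ standard library (the original figure may use a different random number generator):

```cpp
#include <cstddef>
#include <random>
#include <vector>

// SA initialization as described in the text: every ANN weight is
// drawn uniformly from -30 to 30, the range derived from the
// bounded inputs. The seed parameter is illustrative.
std::vector<double> initializeWeights(std::size_t count, unsigned seed) {
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> dist(-30.0, 30.0);
    std::vector<double> w(count);
    for (double& wi : w) wi = dist(gen);
    return w;
}
```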
7.2. SA perturbation
The code of Figure 15 shows the implementation of the SA perturbation using the C++ language. First, each ANN weight was perturbed by adding a random value from −T to T, where T is the current temperature.
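The code of Figure 15 is likewise not reproduced here; the sketch below perturbs each weight by a uniform random value whose range is tied to the current temperature, an assumption consistent with the SA parameters reported in the results:

```cpp
#include <random>
#include <vector>

// SA perturbation sketch: each weight is displaced by a uniform
// random value in [-t, t], so moves are large while the temperature
// t is high and small once it has cooled.
void perturbWeights(std::vector<double>& w, double t, std::mt19937& gen) {
    std::uniform_real_distribution<double> dist(-t, t);
    for (double& wi : w) wi += dist(gen);
}
```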
7.3. SA error computation
In order to measure the driving performance of the ANN, an error function was used. The error decreases as the distance traveled by the car along the road increases; thus, the ANN that drives the car the farthest produces the smallest error.
The code of Figure 16 illustrates the implementation of the error function. The function starts by setting the ANN weights; it then simulates the drive and computes the error from the distance traveled by the car.
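The exact error formula of Figure 16 is not reproduced here; a hypothetical measure consistent with the description (error shrinking as the distance traveled grows) could look like this:

```cpp
// Hypothetical driving-error measure: the farther the car travels
// along the road before the run ends, the smaller the error; a car
// that completes the whole road has zero error. This is NOT the
// chapter's exact formula, only an illustration of its behavior.
double drivingError(double distanceTraveled, double roadLength) {
    double e = (roadLength - distanceTraveled) / roadLength;
    return e < 0.0 ? 0.0 : e;
}
```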
7.4. Results
Several experimental simulations were performed using different configurations to analyze the behavior of the ANN and the car.

In the first simulation, the speed of the car was not applied at the input of the ANN; in all cases, the ANN was not able to turn at point B. Because the ANN was able to see the approaching turn at point B only when the turn was already close, it did not have enough time to reduce the speed of the car.
In the second simulation, the number of neurons in the hidden layer was varied from 0 to 5. When the number of neurons in the hidden layer was zero, the ANN was able to drive the car to the end of the road.
The third experiment consisted in varying the number of vision lines described in Figure 11. The number of vision lines was varied from 3 to 7. When using 3 vision lines, the ANN was able to reach point B only 50% of the time.
The SA parameters were set to be compatible with the ANN weights. The initial temperature was 30; the final temperature was 0.1. Some experiments were performed using lower final temperatures, but there were not any noticeable changes in the performance of the ANN. The number of temperatures was set to 10, using 20 iterations per temperature. Some tests were performed using more iterations, but there were no improvements. All the simulations were run using an exponential cooling schedule.
To validate the training of the ANN, another road similar to the one shown in Figure 7 was built. In all cases, the ANN behaved similarly on both roads: the road for training and the road for validation.
8. Conclusion
An ANN is a method inspired by biological processes. An ANN can learn from a training set; in this case, the problem has to be described by a training set. Unfortunately, some problems cannot be easily described by a data set. This chapter proposes the use of SA to train an ANN without a training set. We call this method simulated annealing evolution because the ANN learns by experience during the simulation.
Simulated annealing evolution can be used to train an ANN in an ample set of cases. Because human beings learn by experience, simulated annealing evolution is similar to human learning.
An optimization problem was designed to illustrate how to use SA to train an ANN. The problem included a car and a road. An ANN was used to drive the car on a simple road. The road had several straight segments and turning points. The objective of the ANN was to drive the car from the beginning of the road to its end.
During the simulation, the car had a set of vision lines to compute the distance to the closest objects. The distance from each vision line (measured from the car to the closest object) was applied to the input of an ANN. It was noticed that the ANN performed much better when the speed of the car was also applied to the input of the ANN.
The number of neurons in the hidden layer of the ANN was varied during the simulations. It was observed that when the number of neurons in the hidden layer was increased, the ANN was able to reach the end of the road more quickly.
The car vision consisted of a set of lines. Experimental simulations were performed varying the number of vision lines from 3 to 7. The experimental results indicated that when 3 vision lines are used, the ANN does not have enough information and cannot drive successfully to the end of the road.
References
- 1. Bandyopadhyay S., Saha S., Maulik U., Deb K. A Simulated Annealing-Based Multiobjective Optimization Algorithm: AMOSA. IEEE Transactions on Evolutionary Computation 2008;12(3):269-283.
- 2. Buckland M. AI Techniques for Game Programming. Ohio, USA: Premier Press; 2002.
- 3. Ingber L. Simulated annealing: Practice versus theory. Mathematical and Computer Modelling 1993;18(11):29-57.
- 4. Jones M.T. AI Application Programming. Massachusetts, USA: Charles River Media; 2005.
- 5. Jones M.T. Artificial Intelligence: A Systems Approach. Massachusetts, USA: Infinity Science Press LLC; 2008.
- 6. Ledesma S., Torres M., Hernandez D., Avina G., Garcia G. Temperature Cycling on Simulated Annealing for Neural Network Learning. In: Gelbukh A., Kuri Morales A.F. (eds.) MICAI 2007: Proceedings of the Mexican International Conference on Artificial Intelligence, LNAI 4827, 4-10 November 2007, Aguascalientes, Mexico. Berlin Heidelberg: Springer-Verlag; 2007.
- 7. Masters T. Practical Neural Network Recipes in C++. San Diego, USA: Academic Press Inc.; 1993.
- 8. Masters T. Signal and Image Processing with Neural Networks: A C++ Sourcebook. John Wiley & Sons Inc.; 1994.
- 9. Masters T. Advanced Algorithms for Neural Networks: A C++ Sourcebook. John Wiley & Sons Inc.; 1995.
- 10. Metropolis N., Rosenbluth A.W., Rosenbluth M.N., Teller A.H., Teller E. Equation of State Calculations by Fast Computing Machines. Journal of Chemical Physics 1953;21(6):1087-1092.
- 11. Press W.H., Teukolsky S.A., Vetterling W.T., Flannery B.P. Numerical Recipes in C++: The Art of Scientific Computing (Third Edition). Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore and Sao Paulo: Cambridge University Press; 2007.
- 12. Reed R.D., Marks R.J. Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks. Cambridge, Massachusetts, USA: The MIT Press; 1999.
- 13. Russell S.J., Norvig P. Artificial Intelligence: A Modern Approach (3rd edition). Upper Saddle River, NJ, USA: Prentice Hall; 2009.
- 14. Shamos M.I., Hoey D. Geometric intersection problems. In: 17th Annual Symposium on Foundations of Computer Science, October 1976, Houston, Texas, USA: IEEE; 1976.
- 15. Smith K.I., Everson R.M., Fieldsend J.E., Murphy C., Misra R. Dominance-Based Multi-objective Simulated Annealing. IEEE Transactions on Evolutionary Computation 2008;12(3):323-342.
- 16. Stefankovic D., Vempala S., Vigoda E. Adaptive simulated annealing: A near-optimal connection between sampling and counting. Journal of the ACM 2009;56(3).
ComputationSmith K. I. Everson R. M. Fieldsend J. E. Murphy C. Misra . Dominance-Based R. Multi-objective Simulated. Annealing I. E. E. E. Transactions on. Evolutionary 2008 12 3 323 342 - 16. Stefankovic D., Vempala S. &Vigoda E. Adaptive simulated annealing: A near-optimalconnection between sampling and counting, Journal of the ACM 2009;56(3).