Open access

A Simulated Annealing Algorithm for the Satisfiability Problem Using Dynamic Markov Chains with Linear Regression Equilibrium

Written By

Felix Martinez-Rios and Juan Frausto-Solis

Submitted: 23 November 2011 Published: 29 August 2012

DOI: 10.5772/46175

From the Edited Volume

Simulated Annealing - Advances, Applications and Hybridizations

Edited by Marcos de Sales Guerra Tsuzuki


1. Introduction

Since its appearance, the Simulated Annealing algorithm has shown itself to be an efficient method for solving combinatorial optimization problems such as the Boolean Satisfiability problem. New algorithms based on two cycles have emerged: an external cycle for temperatures and an internal one, named the Metropolis cycle. These algorithms usually use the same Markov chain length in the Metropolis cycle at every temperature. In this chapter we propose a method based on linear regression to detect the Metropolis equilibrium. Experimentation shows that the proposed method is more efficient than the classical one, since it obtains the same quality of final solution with less processing time.

Today there is considerable interest in developing new and efficient algorithms to solve hard problems, mainly those considered in complexity theory (NP-complete or NP-hard) [8]. The Simulated Annealing algorithm proposed by Kirkpatrick et al. [18] and Cerny [5, 6] is an extension of the Metropolis algorithm [23], used for the simulation of the physical annealing process, and is specially suited to NP-hard problems where it is very difficult to find the optimal solution or even near-to-optimum solutions.

The efficiency and efficacy of the Simulated Annealing algorithm are determined by the cooling scheme, which consists of the initial (Ti) and final (Tf) temperatures, the cooling function f(Tk), and the length of the Markov chain (Lk) established by the Metropolis algorithm. For each value of the control parameter Tk (the temperature), the Simulated Annealing algorithm carries out a certain number of Metropolis decisions. In this regard, in order to get better performance from the Simulated Annealing algorithm, a relation between the temperature and the Metropolis cycles may be enforced [13].

The Simulated Annealing algorithm can obtain optimal solutions in an efficient way only if its cooling scheme parameters are correctly tuned. Because of this, experimental and analytical parameter tuning strategies are currently being studied; one of them, known as ANDYMARK [13], is an analytical method that has been shown to be more efficient. The objective of these methods is to find better ways to reduce the required computational resources and to increase the quality of the final solution. This is done by applying different acceleration techniques, such as variations of the cooling scheme [3, 27], variations of the neighborhood scheme [26], and parallelization techniques [12, 26].

In this chapter an analytic adaptive method to establish the length of each Markov chain dynamically in the Simulated Annealing algorithm is presented; the method detects the equilibrium in the Metropolis cycle using the Linear Regression Method (LRM). LRM is applied to satisfiability problem instances and is compared against the classical ANDYMARK tuning method.


2. Background

In complexity theory, the satisfiability problem is a decision problem. The question is: given a Boolean expression, is there some assignment of TRUE and FALSE values to the variables that will make the entire expression true? A formula of propositional logic is said to be satisfiable if logical values can be assigned to its variables in a way that makes the formula true.

The propositional satisfiability problem, which decides whether a given propositional formula is satisfiable, is of critical importance in various areas of computer science, including theoretical computer science, algorithmics, artificial intelligence, hardware design, electronic design automation, and verification. The satisfiability problem was the first problem proven to be NP-complete [7] and is fundamental to the analysis of the computational complexity of many problems [28].

2.1. Boolean satisfiability problem (SAT)

An instance of SAT is a Boolean formula which consists of the following components:

  • A set S of n variables x1, x2, x3,..., xn.

  • A set L of literals; a literal li is a variable xi or its negation ¬xi.

  • A set of m clauses: C1, C2, C3,..., Cm where each clause consists of literals li linked by the logical connective OR (∨).

This is:

$$\Phi = C_1 \wedge C_2 \wedge C_3 \wedge \cdots \wedge C_m \qquad (1)$$

where Φ, in Equation 1, is the SAT instance. Then we can enunciate the SAT problem as follows:

Definition 1. Given a finite set {C1, C2, C3,..., Cm} of clauses, determine whether there is an assignment of truth-values to the literals appearing in the clauses which makes all the clauses true.

The NP-completeness of the SAT problem only refers to the run-time on worst case instances. Many of the instances that occur in practical applications can be solved much faster. For example, SAT is easier if the formulas are restricted to those in disjunctive normal form, that is, disjunctions (OR) of terms, where each term is a conjunction (AND) of literals. Such a formula is satisfiable if and only if at least one of its terms is satisfiable, and a term is satisfiable if and only if it does not contain both x and ¬x for some variable x; this can be checked in polynomial time, as the sketch below illustrates.
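As an illustration of that check, here is a small Python sketch; the signed-integer encoding of literals is our assumption, not notation from the chapter:

```python
def dnf_term_satisfiable(term):
    """A conjunction of literals is satisfiable iff it does not contain
    both a variable and its negation. Literals are signed integers:
    3 stands for x3 and -3 for the negation of x3."""
    literals = set(term)
    return all(-lit not in literals for lit in literals)

def dnf_satisfiable(terms):
    """A DNF formula is satisfiable iff at least one of its terms is."""
    return any(dnf_term_satisfiable(term) for term in terms)

# (x1 AND NOT x2) OR (x2 AND NOT x2): the first term is satisfiable.
print(dnf_satisfiable([[1, -2], [2, -2]]))  # True
```

Each term is scanned once, so the whole test runs in time linear in the size of the formula.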

SAT is also easier if the number of literals in a clause is limited to 2, in which case the problem is called 2-SAT; this problem can be solved in polynomial time [2, 10]. One of the most important restrictions of SAT is HORN-SAT, where the formula is a conjunction of Horn clauses (a Horn clause is a clause with at most one positive literal). This problem is solved by the polynomial-time Horn-satisfiability algorithm [9].

3-satisfiability (3-SAT) is the special case of k-satisfiability (k-SAT) in which each clause contains exactly k = 3 literals. 3-SAT is NP-complete and is used as a starting point for proving that other problems are also NP-hard [31]. This is done by polynomial-time reduction from 3-SAT to the other problem [28].


3. Simulated Annealing algorithm

Simulated Annealing improves on simple local search through the introduction of two elements. The first is the Metropolis algorithm [23], in which some states that do not improve the energy are accepted, since they allow the solver to explore more of the possible space of solutions. Such "bad" states are allowed by the Boltzmann criterion: e^(−ΔJ/T) > rnd(0, 1), where ΔJ is the change of energy, T is a temperature, and rnd(0, 1) is a random number in the interval [0, 1). J is called a cost function and corresponds to the free energy in the case of annealing a metal. If T is large, many "bad" states are accepted, and a large part of the solution space is explored.

The second element is, again by analogy with the annealing of a metal, to lower the temperature. After visiting many states and observing that the cost function declines only slowly, one lowers the temperature, and thus limits the size of the allowed "bad" states. After lowering the temperature several times to a low value, one may then "quench" the process by accepting only "good" states in order to find the local minimum of the cost function.

The elements of Simulated Annealing are:

  • A finite set S.

  • A cost function J defined on S. Let S* ⊂ S be the set of global minima of J.

  • For each i ∈ S, a set S(i) ⊂ S − {i}, called the set of neighbors of i.

  • For every i, a collection of positive coefficients qij, j ∈ S(i), such that Σj∈S(i) qij = 1. It is assumed that j ∈ S(i) if and only if i ∈ S(j).

  • A nonincreasing function T : N → (0, ∞), called the cooling schedule. Here N is the set of positive integers, and T(t) is called the temperature at time t.

  • An initial state x(0) ∈ S.

The Simulated Annealing algorithm consists of a discrete-time inhomogeneous Markov chain x(t) [4]. If the current state x(t) is equal to i, choose a neighbor j of i at random; the probability that any particular j ∈ S(i) is selected is equal to qij. Once j is chosen, the next state x(t + 1) is determined as follows:

$$x(t+1) = \begin{cases} j & \text{if } J(j) \le J(i) \\ j & \text{with probability } e^{-(J(j)-J(i))/T(t)} \text{ if } J(j) > J(i) \\ i & \text{otherwise} \end{cases} \qquad (2)$$

In a formal way:

$$P[x(t+1)=j \mid x(t)=i] = \begin{cases} q_{ij}\, e^{-\frac{1}{T(t)}\max\{0,\; J(j)-J(i)\}} & j \ne i,\ j \in S(i) \\ 0 & j \ne i,\ j \notin S(i) \end{cases} \qquad (3)$$
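To make the transition rule concrete, the following minimal Python sketch implements Equations 2 and 3; the function name and the callables J (the cost function) and neighbors (returning the list S(i)) are our illustrative assumptions:

```python
import math
import random

def metropolis_step(i, J, neighbors, T):
    """One transition of the chain in Equation 2: propose a uniformly
    random neighbor j of the current state i and accept it with the
    Boltzmann probability when it worsens the cost J."""
    j = random.choice(neighbors(i))      # uniform q_ij over S(i)
    delta = J(j) - J(i)
    if delta <= 0:                       # improving move: always accept
        return j
    if random.random() < math.exp(-delta / T):
        return j                         # worsening move accepted
    return i                             # move rejected, stay at i
```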

In the Simulated Annealing algorithm we consider a homogeneous Markov chain xT(t) whose temperature T(t) is held at a constant value T. Let us assume that the Markov chain xT(t) is irreducible and aperiodic and that qij = qji ∀i, j; then xT(t) is a reversible Markov chain, and its invariant probability distribution is given by:

$$\pi_T(i) = \frac{1}{Z_T}\, e^{-J(i)/T} \qquad (4)$$

In Equation 4, ZT is a normalizing constant, and it is evident that as T → 0 the probability πT concentrates on the set S* of global minima of J; this property remains valid if the condition qij = qji is relaxed [11].

In the optimization context we can generate an optimal element with high probability if we produce a random sample according to the distribution πT, known as the Gibbs distribution. Generating an element of S by simulating the Markov chain xT(t) until it reaches equilibrium is precisely the Metropolis algorithm [23].

The Simulated Annealing algorithm can also be viewed as a local search method that occasionally moves to higher values of the cost function J; these moves help Simulated Annealing escape from local minima. A proof of convergence of the Simulated Annealing algorithm can be found in [4].

3.1. Traditional Simulated Annealing algorithms

Figure 1 shows the classic simulated annealing algorithm. In the algorithm we can see the cycle of temperatures between steps 2 and 5. Within this temperature cycle are steps 3 and 4, which correspond to the Metropolis algorithm.

As described in the simulated annealing algorithm, the Metropolis cycle is repeated until thermal equilibrium is reached. We now use the formalism of Markov chains to estimate how many times it is necessary to repeat the Metropolis cycle so that we ensure (with some probability) that all solutions of the search space are explored.

Similarly we can estimate very good values for the initial and final temperatures of the temperature cycle. All these estimates are made prior to running the simulated annealing algorithm, using information from the SAT instance being solved.

Figure 1.

Simulated Annealing algorithm
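A compact Python sketch of the structure in Figure 1, reusing the metropolis_step function above; the default values of alpha and the fixed chain length L are illustrative choices of ours, not values prescribed by the chapter:

```python
def simulated_annealing(x0, J, neighbors, Ti, Tf, alpha=0.95, L=100):
    """Classic SA: a fixed Markov chain length L at every temperature."""
    x, T = x0, Ti
    while T > Tf:                  # temperature cycle (steps 2-5)
        for _ in range(L):         # Metropolis cycle (steps 3-4)
            x = metropolis_step(x, J, neighbors, T)
        T = alpha * T              # geometric cooling, Equation 24
    return x
```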

It is well known that Simulated Annealing requires a well defined neighborhood structure and other parameters such as the initial and final temperatures Ti and Tf. In order to determine these parameters we follow the method proposed in [30]; following the analysis made there, we give the basis of this method.

Let PA(Sj) be the accepting probability of one proposed solution Sj generated from a current solution Si, and PR(Sj) the rejecting probability. The probability of rejecting Sj can be established in terms of PA(Sj) as follows:

$$P_R(S_j) = 1 - P_A(S_j) \qquad (5)$$

Accepting or rejecting Sj depends only on the size of the cost deterioration that this change would produce in the current solution, that is:

$$P_A(S_j) = g\left[J(S_i) - J(S_j)\right] = g(\Delta J_{ij}) \qquad (6)$$

In Equation 6, J(Si) and J(Sj) are the costs associated to Si and Sj respectively, and g(ΔJij) is the probability of accepting the cost difference ΔJij = J(Si) − J(Sj).

The solution selected from Si may be any solution Sj defined by the following neighborhood scheme:

Definition 2. Let {∀Si ∈ S, ∃ a set VSi ⊂ S | VSi = V : S → S} be the neighborhood of a solution Si, where VSi is the neighborhood set of Si, V : S → S is a mapping and S is the solution space of the problem being solved.

It can be seen from Definition 2 that the neighbors of a solution Si depend only on the neighborhood structure V established for a specific problem. Once V is defined, the maximum and minimum cost deteriorations can be written as:

$$\Delta J_{V\max} = \max\left[J(S_i) - J(S_j)\right], \quad \forall S_j \in V_{S_i},\ \forall S_i \in S \qquad (7)$$
$$\Delta J_{V\min} = \min\left[J(S_i) - J(S_j)\right], \quad \forall S_j \in V_{S_i},\ \forall S_i \in S \qquad (8)$$

where ΔJVmax and ΔJVmin are the maximum and minimum cost deteriorations of the objective function J, respectively.

3.2. Markov Chains and Cooling Function

The Simulated Annealing algorithm can be seen as a sequence of homogeneous Markov chains, where each Markov chain is constructed for descending values of the control parameter T > 0 [1]. The control parameter is set by a cooling function such as:

$$T_{k+1} = f(T_k) \qquad (9)$$

and Tk must satisfy the following property:

$$\lim_{k \to \infty} T_k = 0, \qquad T_k \ge T_{k+1} \quad \forall k \ge 1 \qquad (10)$$

At the beginning of the process Tk has a high value and the probability to accept one proposed solution is high. When Tk decreases this probability also decreases and only good solutions are accepted at the end of the process. In this regard every Markov chain makes a stochastic walk in the solution space until the stationary distribution is reached. Then a strong relation between the Markov chain length (Lk) and the cooling speed of Simulated Annealing exists: when Tk → ∞, Lk → 0 and when Tk → 0, Lk → ∞.

Because the Markov chains are built through a neighborhood sampling method, the maximum number of different solutions rejected at Tf, when the current solution Si is the optimal one, is the neighborhood size |VSi|. In this regard the maximum Markov chain length is a function of |VSi|. In general Lk can be established as:

$$L_k \le L_{\max} = g(|V_{S_i}|) \qquad (11)$$

In Equation 11, Lmax is the Markov chain length when Tk = Tf, and g(|VSi|) is a function that gives the maximum number of samples that must be taken from the neighborhood VSi in order to evaluate an expected fraction of different solutions at Tf. The value of Lmax depends only on the number of elements of VSi that will be explored at Tf.

Usually a Simulated Annealing algorithm uses a uniform probability distribution function G(Tk), given by a random sampling method with replacement, to explore VSi at any temperature Tk, where G(Tk) is established as follows:

$$G(T_k) = \begin{cases} \frac{1}{|V_{S_i}|} & S_j \in V_{S_i} \\ 0 & S_j \notin V_{S_i} \end{cases} \qquad (12)$$

In this regard, the probability to get the solution Sj in N samples is:

$$P_A(S_j) = 1 - e^{-N/|V_{S_i}|} \qquad (13)$$

Notice in Equation 13 that PA(Sj) may be understood as the expected fraction of different solutions obtained when N samples are taken. From Equation 13, N can be obtained as:

$$N = -\ln\left(1 - P_A(S_j)\right) |V_{S_i}| \qquad (14)$$

In Equation 14, we define:

$$C = -\ln\left(1 - P_A(S_j)\right) = -\ln\left(P_R(S_j)\right) \qquad (15)$$

Recall that PR(Sj) = 1 − PA(Sj) is the rejection probability. The constant C establishes the level of exploration to be done; in this way different levels of exploration can be applied. For example, if 99% of the solution space is going to be explored, the rejection probability will be PR(Sj) = 0.01, so from Equation 15 we obtain C = 4.61.

Definition 3. The exploration set of the search space, ΦC, is defined as follows:

  • Given the set of acceptance probabilities (as percentages) ΦPA = {70, 75, 80, 85, 90, 95, 99, 99.9, 99.99, 99.999,...}

  • Using Equation 15: ΦC = {1.20, 1.39, 1.61, 1.90, 2.30, 3.00, 4.61, 6.91, 9.21, 11.51,...} (the snippet below reproduces these constants)
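These constants follow directly from Equation 15; a quick check in Python (the variable names are ours):

```python
import math

phi_PA = [70, 75, 80, 85, 90, 95, 99, 99.9, 99.99, 99.999]  # percentages
phi_C = [-math.log(1 - p / 100.0) for p in phi_PA]          # Equation 15
print([round(c, 2) for c in phi_C])
# [1.2, 1.39, 1.61, 1.9, 2.3, 3.0, 4.61, 6.91, 9.21, 11.51]
```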

Then in any Simulated Annealing algorithm the maximum Markov chain length (when Tk = Tf ) may be set as:

$$L_{\max} = N = C\,|V_{S_i}| \qquad (16)$$

Because a high percentage of the solution space should be explored, C varies in the range 1 ≤ C ≤ 4.6, which guarantees a good level of exploration of the neighborhood at Tf.

When the process is at the beginning, the temperature Ti is very high. This is because in the Boltzmann distribution the acceptance probability is directly related to the cost increment, PA = e^(−ΔJ/Tk), where Tk is the temperature parameter; therefore:

$$T_k = \frac{-\Delta J}{\ln(P_A)} \qquad (17)$$

At the beginning of the process, PA is close to one (normally 0.99 [21]) and the temperature is extremely high. Almost any solution is accepted at this temperature; as a consequence the stochastic equilibrium of a Markov cycle is reached with the first guess solution. Similarly, when the process is ending, the acceptance probability is very small (typically 0.01) and the temperature is close to zero, but the Metropolis cycle is very long.

For SAT instances, the values ΔJVmax and ΔJVmin of the energy change between states can be estimated at the beginning of the execution of the simulated annealing algorithm. To estimate these values, we can count the number of clauses containing each variable of the problem: the largest number of clauses that can change when we flip the value of a variable is an upper bound on the maximum energy change, and:

$$T_i = \frac{-\Delta J_{V\max}}{\ln(P_A)} = \frac{-\max \text{ number of clauses}}{\ln(0.99)} \qquad (18)$$

Similarly, the minimum energy change can be estimated by counting the clauses that change when creating a new neighbor and taking the lowest of these values:

$$T_f = \frac{-\Delta J_{V\min}}{\ln(P_A)} = \frac{-\min \text{ number of clauses}}{\ln(0.01)} \qquad (19)$$

A common criticism of Simulated Annealing is the long execution time of standard Boltzmann-type Simulated Annealing, which has many times driven projects to use a temperature schedule too fast to satisfy the sufficiency conditions required to establish a true ergodic search. In this chapter we use an exponential temperature schedule that is consistent with the Boltzmann algorithm:

$$T_k = T_0\, e^{(\alpha - 1)k}, \qquad 0 < \alpha < 1 \qquad (20)$$

From Equation 20 we can obtain:

$$\frac{\Delta T}{\Delta k} = T_k(\alpha - 1), \quad k \ge 1 \qquad (21)$$
and
$$\Delta T = T_k(\alpha - 1)\Delta k, \quad k \ge 1 \qquad (22)$$

If in the previous expression Δk is equal to 1, we obtain the equation for two successive values of the temperature:

$$T_{k+1} = \alpha T_k, \qquad 0 < \alpha < 1, \quad k \ge 1 \qquad (23)$$

where Tk is the “temperature,” k is the “time” index of annealing [16, 17].

3.3. Simulated Annealing algorithm with dynamic Markov chain length

In [13, 20, 21] the authors showed that a strong relation exists between the cooling function and the length of the Markov chain. For the Simulated Annealing algorithm, the stationary distribution for each Markov chain is given by the Boltzmann probability distribution, which is a family of curves that vary from a uniform distribution to a pulse function.

At the very beginning of the process (with Tk = Ti), Simulated Annealing has a uniform distribution, hence any guess would be accepted as a solution. Likewise, any neighbor of the current solution is also accepted as a new solution. In this way, when Simulated Annealing is just at the beginning, the Markov chain length is really small, Lk = Li ≈ 1. While running the temperature cycle of simulated annealing, for values of k greater than 1, the value of Tk is decremented by the cooling function [16] until the final temperature is reached (Tk = Tf):

$$T_{k+1} = \alpha T_k \qquad (24)$$

In Equation 24, α is normally in the range [0.7, 0.99] [1].

In this regard, the length of each Markov chain must be incremented at every temperature cycle in a way similar, but inverse, to the decrement of Tk. This means that Lk must be incremented until Lmax is reached at Tf, by applying a Markov chain increment factor (β). The cooling function given by Equation 24 is applied many times until the final temperature Tf is reached. Because the Metropolis cycle is finished when stochastic equilibrium is reached, it can also be modeled as a Markov chain as follows:

$$L_{k+1} = \beta L_k \qquad (25)$$

In Equation 25, Lk represents the length of the current Markov chain at a given temperature, that is, the number of iterations of the Metropolis cycle at temperature Tk; Lk+1 then represents the length of the next Markov chain. In this Markov model, β represents an increment in the number of iterations of the next Metropolis cycle.

If the cooling function given by Equation 24 is applied over and over, n times, until Tk = Tf, the following geometric relation is easily obtained:

$$T_f = \alpha^n T_i \qquad (26)$$

Knowing the initial (Ti) and final (Tf) temperatures and the cooling coefficient (α), the number of times the Metropolis cycle is executed can be calculated as:

$$n = \frac{\ln T_f - \ln T_i}{\ln \alpha} \qquad (27)$$

If we carry out a similar process for the Markov chain length equation, another geometric relation is obtained:

$$L_{\max} = \beta^n L_1 \qquad (28)$$

Once n is known by Equation 27, the value of the increment coefficient (β) is calculated as:

$$\beta = e^{\frac{\ln L_{\max} - \ln L_1}{n}} \qquad (29)$$

Once Lmax (calculated from Equation 16), L1 and β are known, the length of each Markov chain for each temperature cycle can be calculated using Equation 25. In this way Lk is computed dynamically from L1 = 1 at Ti until Lmax at Tf. First the algorithm obtains Ti from Equation 18 and Tf from Equation 19; with both values it computes n from Equation 27, and with Equation 29 it can calculate β [30].
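A sketch of this tuning step, combining Equations 16, 27 and 29; the function and argument names are ours:

```python
import math

def tune_markov_chains(Ti, Tf, alpha, neighborhood_size, C=4.61, L1=1.0):
    """Compute the number of temperature steps n (Equation 27), the
    maximum chain length Lmax (Equation 16) and the chain-growth
    factor beta (Equation 29)."""
    n = (math.log(Tf) - math.log(Ti)) / math.log(alpha)    # Equation 27
    Lmax = C * neighborhood_size                           # Equation 16
    beta = math.exp((math.log(Lmax) - math.log(L1)) / n)   # Equation 29
    return n, Lmax, beta
```

Since Tf < Ti and 0 < α < 1, both logarithmic quotients are positive, so n > 0 and β > 1, as the growth of Lk requires.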

In Figure 2 we can see the modifications to the simulated annealing algorithm using the Markov chains described above. Below we explain how we use linear regression to make the simulated annealing algorithm run more efficiently without losing quality in the solution.

Figure 2.

Simulated Annealing algorithm with dynamic Markov chain length
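A hedged sketch of the dynamic-chain variant shown in Figure 2, reusing metropolis_step and the β produced by the tuning step above; the rounding of Lk to an iteration count is our implementation detail:

```python
def simulated_annealing_dynamic(x0, J, neighbors, Ti, Tf, alpha, beta, Lmax):
    """SA with dynamic Markov chain lengths: Lk grows geometrically
    (Equation 25) while Tk decays geometrically (Equation 24)."""
    x, T, L = x0, Ti, 1.0          # L1 = 1 at the initial temperature
    while T > Tf:
        for _ in range(int(round(L))):
            x = metropolis_step(x, J, neighbors, T)
        T = alpha * T              # Equation 24
        L = min(beta * L, Lmax)    # Equation 25, capped at Lmax
    return x
```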


4. Linear Regression Method (LRM)

We explained, in Section 3.2, how to estimate the initial and final temperatures for the SAT instances that are provided to the simulated annealing algorithm to determine whether they are satisfiable or not.

As shown in Figure 3, the Metropolis algorithm visits various configurations with different energies at a given temperature.

The typical behavior of the energy for a given temperature can be observed in Figure 3. We set out to determine when the Metropolis cycle reaches equilibrium even though not all of the iterations required by the Markov chain have been executed. In order to detect this zone adaptively, we fit a straight line by least squares and stop the Metropolis cycle if the slope of this line is equal to or smaller than zero. Linear regression is a well known method, but it had never been applied to detect Metropolis equilibrium in Simulated Annealing.

Suppose that the data set consists of the points:

$$(x_i, y_i), \quad i = 1, 2, 3, \ldots, n \qquad (30)$$

We want to find a function f such that f (xi) ≈ yi. To attain this goal, we suppose that the function f is of a particular form containing some parameters (a1, a2, a3,...., am) which need to be determined.

Figure 3.

Energy of different states, explored in the Metropolis cycle, for a fixed temperature

In our problem:

$$y_i \approx f(x_i, a, b) = a x_i + b \qquad (31)$$

In Equation 31 a and b are not yet known. In our problem f (xi, a, b) = f (i, a, b) = Ji.

As usual, we now seek the values of a and b that minimize the sum of the squares of the residuals, as follows:

$$S = \sum_{i=1}^{n} \left[y_i - (a x_i + b)\right]^2 \qquad (32)$$

As is well known, the regression equations are obtained by differentiating S in Equation 32 with respect to each parameter a and b, giving the following system of linear equations:

$$a\sum_{i=1}^{n} x_i^2 + b\sum_{i=1}^{n} x_i = \sum_{i=1}^{n} x_i y_i \qquad (33)$$
$$a\sum_{i=1}^{n} x_i + b\sum_{i=1}^{n} 1 = \sum_{i=1}^{n} y_i \qquad (34)$$

In Equation 33 and Equation 34 we can define the following constants:

$$A = \sum_{i=1}^{n} x_i^2, \quad B = \sum_{i=1}^{n} x_i, \quad C = \sum_{i=1}^{n} x_i y_i, \quad D = \sum_{i=1}^{n} y_i \qquad (35)$$

Then the system of equations (Equation 33 and 34) can be rewritten as:

$$aA + bB = C \qquad (36)$$
$$aB + bn = D \qquad (37)$$

We recall that the parameter a, the slope of Equation 31, is:

$$a = \frac{Cn - BD}{An - B^2} \qquad (38)$$

Since in our data xi = i for i = 1, 2, 3,..., n, we can write:

$$A = \sum_{i=1}^{n} x_i^2 = \sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6} \qquad (39)$$
and
$$B = \sum_{i=1}^{n} x_i = \sum_{i=1}^{n} i = \frac{n(n+1)}{2} \qquad (40)$$

in the same way:

$$C = \sum_{i=1}^{n} i\, J_i \qquad (41)$$
and
$$D = \sum_{i=1}^{n} J_i \qquad (42)$$

By substituting Equations 39, 40, 41 and 42 into Equation 38, and noting that An − B² = n(n³ − n)/12, we finally get:

$$a = \frac{12(Cn - BD)}{n(n^3 - n)} \qquad (43)$$
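Equation 43 translates directly into a short routine; a sketch using the closed forms of Equations 40-42 (the naming is ours):

```python
def lrm_slope(energies):
    """Least-squares slope of the energies J_1..J_n against i = 1..n,
    using the closed forms of Equations 40-43; O(n) time."""
    n = len(energies)
    if n < 2:
        return 0.0
    B = n * (n + 1) / 2.0                                    # Equation 40
    C = sum(i * J for i, J in enumerate(energies, start=1))  # Equation 41
    D = sum(energies)                                        # Equation 42
    return 12.0 * (C * n - B * D) / (n * (n**3 - n))         # Equation 43

print(lrm_slope([1.0, 2.0, 3.0]))   # 1.0, as expected for a unit slope
```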

In order to apply LRM to traditional Simulated Annealing, we apply the following strategy:

  1. The Metropolis cycle runs as usual, just as explained in Section 3.3, using the maximum Markov chain length Lmax^C calculated by Equation 28, where C is obtained from an acceptance probability PAi ∈ ΦPA.

  2. When the number of Metropolis repetitions, L, is equal to Lmax^(C−1) (Lmax^(C−1) is calculated by Equation 28 with PAi−1 ∈ ΦPA), the slope a of the fitted line is computed.

  3. If the value of the slope a in the equation of the line (Equation 31), found by Equation 43, is close to zero, then the Metropolis cycle is stopped even though it has not reached the value Lmax.

Notice from Equation 43 and from Figure 4 that the computation of LRM is O(n), where n is the number of points taken to compute the slope, so the complexity of Simulated Annealing with LRM is not affected [19, 22].
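Putting the three steps together, here is a hedged sketch of the Metropolis cycle with LRM early stopping. Lmax_C and Lmax_C1 stand for the chain lengths obtained with two consecutive constants of ΦC, and the tolerance eps for "close to zero" is our illustrative choice; keeping the sums C and D as running totals makes each slope test O(1):

```python
def metropolis_with_lrm(x, J, neighbors, T, Lmax_C, Lmax_C1, eps=1e-3):
    """Metropolis cycle that stops early once L >= Lmax_C1 and the
    least-squares slope of the energy trace is close to zero or
    negative (the LRM equilibrium test)."""
    C_sum = 0.0    # running C = sum of i * J_i  (Equation 41)
    D_sum = 0.0    # running D = sum of J_i      (Equation 42)
    for L in range(1, int(Lmax_C) + 1):
        x = metropolis_step(x, J, neighbors, T)
        energy = J(x)
        C_sum += L * energy
        D_sum += energy
        if L >= Lmax_C1 and L >= 2:
            B = L * (L + 1) / 2.0                               # Equation 40
            slope = 12.0 * (C_sum * L - B * D_sum) / (L * (L**3 - L))
            if slope <= eps:                                    # Equation 43
                break          # equilibrium detected: abort the cycle early
    return x
```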

Figure 4.

Simulated Annealing algorithm with dynamic Markov chain length and LRM


5. Experimental results

In order to test the LRM algorithm we used the SAT instances in Table 1 and Table 2. Some of these instances were generated using the programs proposed by Horie et al. in 1997 [15] and others are from SATLIB [14]. We generated several instances that had the same clause-to-variable ratio σ [24, 25].

The measurement of efficiency of this algorithm was based on the execution time; we also obtained a solution quality measure (SQM), taken as the number of "true" clauses in an instance at the end of the program execution.

SAT problem | Id | Variables | Clauses | σ | SAT?
aim-100-1_6-yes1-1 | a1 | 100 | 160 | 1.60 | Yes
aim-50-1_6-yes1-3 | a2 | 50 | 80 | 1.60 | Yes
aim-200-1_6-no-1 | a7 | 200 | 320 | 1.60 | No
aim-50-1_6-no-2 | a8 | 50 | 80 | 1.60 | No
g2_V100_C200_P2_I1 | g1 | 100 | 200 | 2.00 | Yes
aim-50-2_0-no-4 | a10 | 50 | 100 | 2.00 | No
aim-50-2_0-yes1-1 | a3 | 50 | 100 | 2.00 | Yes
aim-50-2_0-no-3 | a9 | 50 | 100 | 2.00 | No
dubois21 | d2 | 63 | 168 | 2.67 | No
dubois26 | d1 | 78 | 208 | 2.67 | No
dubois27 | d3 | 81 | 216 | 2.67 | No
BMS_k3_n100_m429_161 | b1 | 100 | 283 | 2.83 | Yes
g2_V300_C900_P3_I1 | g15 | 300 | 900 | 3.00 | Yes
g2_V50_C150_P3_I1 | g17 | 50 | 150 | 3.00 | Yes
BMS_k3_n100_m429_368 | b2 | 100 | 308 | 3.08 | Yes
hole6 | h2 | 42 | 133 | 3.17 | No
par8-1 | p1 | 350 | 1149 | 3.28 | Yes
aim-50-3_4-yes1-2 | a4 | 50 | 170 | 3.40 | Yes
hole7 | h3 | 56 | 204 | 3.64 | No
par8-3-c | p2 | 75 | 298 | 3.97 | Yes
par8-5-c | p3 | 75 | 298 | 3.97 | Yes

Table 1. SAT instances for testing the algorithms.

Both algorithms, the Simulated Annealing algorithm with dynamic Markov chain length (SA_C) and Simulated Annealing with the Linear Regression Method (SA_LRM), were implemented on a Dell Latitude with 1 GB of RAM and a Pentium 4 processor running at 2.13 GHz.

5.1. Experiment design

The experiments with these algorithms require considerable run-time, because each SAT instance is solved several times to obtain average performance values.

Another important element to consider is guaranteeing that the execution conditions of the various algorithms are similar (because we measure execution time). In this regard, the first thing we did was to evaluate a set of computers with similar hardware and software conditions.

SAT problem | Id | Variables | Clauses | σ | SAT?
g2_V100_C400_P4_I1 | g3 | 100 | 400 | 4.00 | Yes
hole8 | h4 | 72 | 297 | 4.13 | No
uuf225-045 | u4 | 225 | 960 | 4.27 | No
RTI_k3_n100_m429_150 | r1 | 100 | 429 | 4.29 | Yes
uf175-023 | u1 | 175 | 753 | 4.30 | Yes
uuf100-0789 | u8 | 100 | 430 | 4.30 | No
uf50-01 | u7 | 50 | 218 | 4.36 | Yes
uuf50-01 | u9 | 50 | 218 | 4.36 | No
ii8a2 | i3 | 180 | 800 | 4.44 | Yes
g2_V50_C250_P5_I1 | g19 | 50 | 250 | 5.00 | Yes
hole10 | h1 | 110 | 561 | 5.10 | No
ii32e1 | i2 | 222 | 1186 | 5.34 | Yes
anomaly | a6 | 48 | 261 | 5.44 | Yes
aim-50-6_0-yes1-1 | a5 | 50 | 300 | 6.00 | Yes
g2_V50_C300_P6_I1 | g20 | 50 | 300 | 6.00 | Yes
jnh201 | j1 | 100 | 800 | 8.00 | Yes
jnh215 | j3 | 100 | 800 | 8.00 | No
medium | m1 | 116 | 953 | 8.22 | Yes
jnh301 | j2 | 100 | 900 | 9.00 | Yes

Table 2. SAT instances for testing the algorithms (continuation).
Computer | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
Floating Point Math (MOS) | 168.1 | 168.3 | 168.0 | 168.0 | 168.2 | 168.1 | 167.9 | 168.2 | 168.2
Integer Maths (MOS) | 33.61 | 33.56 | 33.60 | 33.59 | 33.65 | 33.62 | 33.56 | 33.65 | 33.62
Search for prime numbers (MOS) | 126.6 | 126.6 | 126.4 | 126.5 | 126.7 | 126.7 | 126.3 | 126.6 | 126.6
String Sorting (MSS) | 415.6 | 416.0 | 415.3 | 408.3 | 415.9 | 415.8 | 415.7 | 415.6 | 415.7
Memory blocks transfer (MbTS) | 470.9 | 476.6 | 475.9 | 465.7 | 466.4 | 467.4 | 468.7 | 469.0 | 459.7
Cache memory read (MbTS) | 1141 | 1142 | 1141 | 1141 | 1142 | 1141 | 1141 | 1142 | 1088
Non cache memory read (MbTS) | 831.8 | 831.7 | 832.0 | 831.2 | 830.7 | 831.1 | 830.6 | 831.6 | 831.2
Memory write (MbTS) | 377.3 | 379.7 | 378.2 | 378.3 | 378.0 | 379.1 | 378.1 | 378.1 | 377.0
Overall calculation speed | 232.5 | 232.6 | 232.1 | 232.0 | 232.4 | 232.5 | 232.1 | 232.5 | 232.6
Overall memory speed | 209.9 | 210.8 | 210.4 | 209.6 | 209.5 | 210.0 | 209.8 | 209.8 | 209.9

Table 3. Performance of the computers used for the experiments.

To carry out this task, we used programs available on the Internet that perform different computer tests and report the results of the evaluation [29, 32, 33].

In Table 3, MOS means millions of operations per second, MSS millions of strings per second, and MbTS millions of bytes transferred per second. As we can see in Table 3, the differences between computers are at most 1.6 percent, so we can infer that we obtain similar results on any of them.

Each SAT instance was executed 100 times with a slow cooling function (α = 0.99 in Equation 20), and we obtained the average time of the executions and the average quality of the solution.

The SQM is established by the following expression:

$$SQM = \frac{\text{true clauses}}{\text{total clauses}} \times 100 \qquad (44)$$

Both results, SA_C and SA_LRM, were compared using two quotients, which we call the time improvement Qtime and the quality improvement Qquality, defined by:

$$Q_{time} = \frac{AverageTime_{SA\_LRM}}{AverageTime_{SA\_C}} \times 100 \qquad (45)$$
$$Q_{quality} = \frac{SQM_{SA\_LRM}}{SQM_{SA\_C}} \times 100 \qquad (46)$$

If Qquality is close to 100%, both algorithms find solutions of the same quality; the Qtime factor, on the other hand, should decrease, which implies that the new algorithm SA_LRM is faster than SA_C.

Column pairs, left to right: Lmax computed with C = 4.61 and C−1 = 3.00; with C = 3.00 and C−1 = 2.30; with C = 2.30 and C−1 = 1.96.

Instance | Qquality | Qtime | Qquality | Qtime | Qquality | Qtime
a1 | 99.4 | 13.0 | 99.3 | 12.9 | 99.2 | 11.7
a10 | 99.0 | 11.0 | 99.2 | 11.6 | 99.6 | 9.7
a2 | 99.5 | 11.4 | 98.7 | 11.2 | 99.0 | 9.8
a3 | 99.9 | 11.3 | 99.4 | 11.6 | 99.2 | 9.6
a4 | 99.2 | 9.8 | 99.6 | 10.4 | 98.9 | 8.8
a5 | 99.5 | 10.1 | 99.1 | 10.6 | 99.5 | 9.0
a6 | 99.0 | 34.5 | 99.3 | 32.1 | 99.2 | 27.8
a7 | 98.7 | 13.3 | 99.1 | 13.9 | 98.8 | 11.8
a8 | 99.6 | 11.3 | 99.2 | 12.2 | 99.4 | 9.8
a9 | 99.2 | 10.9 | 98.7 | 11.6 | 99.4 | 9.5
b1 | 99.6 | 54.0 | 99.7 | 51.0 | 99.9 | 46.0
b2 | 100.2 | 57.7 | 99.8 | 53.5 | 99.9 | 49.2
d1 | 99.2 | 11.8 | 99.4 | 11.6 | 99.2 | 10.1
d2 | 99.5 | 11.6 | 99.5 | 11.5 | 99.4 | 10.1
d3 | 100.0 | 11.5 | 99.1 | 11.3 | 99.1 | 9.5
g1 | 99.8 | 71.7 | 99.9 | 69.7 | 99.8 | 64.7
g15 | 99.7 | 28.7 | 99.7 | 28.3 | 99.9 | 29.0
g17 | 99.3 | 11.7 | 99.2 | 12.7 | 99.5 | 10.5
g19 | 99.6 | 11.5 | 98.6 | 11.5 | 99.3 | 9.8
g20 | 99.5 | 10.5 | 99.5 | 11.1 | 99.4 | 9.5

Table 4. Experimental results.
Column pairs, left to right: Lmax computed with C = 4.61 and C−1 = 3.00; with C = 3.00 and C−1 = 2.30; with C = 2.30 and C−1 = 1.96.

Instance | Qquality | Qtime | Qquality | Qtime | Qquality | Qtime
g3 | 99.8 | 20.8 | 99.6 | 21.2 | 99.6 | 18.4
h1 | 100.1 | 54.5 | 99.6 | 53.5 | 99.9 | 47.1
h2 | 99.7 | 22.0 | 99.3 | 23.2 | 99.5 | 19.1
h3 | 100.1 | 30.5 | 99.7 | 29.7 | 99.5 | 24.9
h4 | 98.9 | 41.0 | 99.3 | 38.5 | 99.0 | 32.7
i2 | 99.1 | 55.0 | 98.9 | 54.4 | 99.7 | 45.7
i3 | 100.0 | 65.0 | 100.2 | 65.4 | 100.3 | 58.7
j1 | 99.7 | 11.6 | 99.7 | 12.8 | 99.6 | 11.1
j2 | 99.7 | 11.6 | 99.8 | 12.3 | 99.8 | 10.4
j3 | 99.5 | 11.3 | 99.7 | 12.3 | 99.8 | 10.2
m1 | 100.5 | 74.5 | 99.9 | 70.9 | 99.7 | 66.8
p1 | 99.9 | 66.0 | 99.9 | 65.4 | 99.9 | 59.7
p2 | 99.4 | 13.8 | 99.0 | 13.3 | 99.3 | 11.8
p3 | 99.3 | 14.0 | 99.5 | 12.4 | 99.5 | 11.0
r1 | 99.4 | 24.0 | 99.7 | 22.6 | 99.5 | 21.1
u1 | 99.6 | 49.6 | 100.0 | 49.8 | 100.0 | 44.5
u4 | 99.7 | 53.3 | 100.0 | 50.9 | 99.3 | 48.1
u7 | 99.9 | 14.7 | 100.0 | 15.2 | 99.5 | 13.1
u8 | 99.7 | 26.0 | 100.0 | 23.9 | 99.6 | 20.0
u9 | 99.1 | 13.8 | 99.5 | 14.0 | 99.2 | 11.9

Table 5. Experimental results (continuation).

From the experimental results in Table 4 and Table 5 we can obtain the average values of Qtime and Qquality, as shown in Table 6.

As can be seen in Table 6, the quality factor of the solutions, Qquality, is very close to 100%, which implies that the SA_LRM algorithm finds solutions as good as the SA_C algorithm. It is important to note that 37% of the SAT instances used for the experiments are not satisfiable, which implies that their respective SQM cannot be equal to 100% and therefore the quality factor must be less than 100%.

Also in Table 6, we see that the Qtime factor falls below 30%, showing that our algorithm, SA_LRM, is roughly 70% faster than the SA_C algorithm while maintaining the same quality of solution.

As shown in Figure 5, there are some instances for which the reduction of run time is only 25%, while others reduce the runtime by up to 90%.

Lmax^C | Lmax^(C−1) | Qquality (%) | Qtime (%)
4.61 | 3.00 | 99.6 | 27.3
3.00 | 2.30 | 99.5 | 26.8
2.30 | 1.96 | 99.5 | 23.8

Table 6. Qtime and Qquality averages for all instances tested.

Figure 5.

Qtime for SAT instances


6. Conclusions

In this chapter a new adaptive Simulated Annealing algorithm named SA_LRM, which uses the least squares method to find the equilibrium zone in the Metropolis cycle, has been presented. When this zone is found our algorithm aborts the Metropolis cycle, even though the iterations calculated with the dynamic Markov chains have not been completed. Through experimentation we showed that SA_LRM is more efficient than an algorithm tuned using only an analytical method, obtaining the same quality of solution in less processing time.

References

  1. Aarts, E. & Korst, J. (1989). Simulated Annealing and Boltzmann Machines: A Stochastic Approach to Combinatorial Optimization and Neural Computing, John Wiley and Sons.
  2. Aspvall, B., Plass, M. F. & Tarjan, R. E. (1979). A linear-time algorithm for testing the truth of certain quantified boolean formulas, Information Processing Letters 8(3).
  3. Atiqullah, M. (2004). An efficient simple cooling schedule for simulated annealing, Lecture Notes in Computer Science 3045: 396-404.
  4. Bertsimas, D. & Tsitsiklis, J. (1993). Simulated annealing, Statistical Science 8(1): 10-15.
  5. Cerny, V. (1982). A thermodynamical approach to the travelling salesman problem: An efficient simulation algorithm, Comenius University.
  6. Cerny, V. (1985). Thermodynamical approach to the traveling salesman problem: an efficient simulation algorithm, Journal of Optimization Theory and Applications 45(1).
  7. Cook, S. A. (1971). The complexity of theorem proving procedures, Proceedings of the Third Annual ACM Symposium on Theory of Computing.
  8. Crescenzi, P. & Kann, V. (1998). How to find the best approximation results - a follow-up to Garey and Johnson, ACM SIGACT News 29(4): 90-97.
  9. Dowling, W. F. & Gallier, J. H. (1984). Linear-time algorithms for testing the satisfiability of propositional Horn formulae, Journal of Logic Programming 1(3): 267-284.
  10. Even, S., Itai, A. & Shamir, A. (1976). On the complexity of timetable and multicommodity flow problems, SIAM Journal on Computing 5(4): 691-703. URL: http://link.aip.org/link/?SMJ/5/691/1
  11. Faigle, U. & Kern, W. (1991). Note on the convergence of simulated annealing algorithms, SIAM Journal on Control and Optimization 29(1): 153-159. URL: http://e-archive.informatik.uni-koeln.de/67/
  12. Fleischer, M. A. (1996). Cybernetic optimization by simulated annealing: Accelerating convergence by parallel processing and probabilistic feedback control, Journal of Heuristics 1(2): 225-246.
  13. Frausto-Solis, J., Sanvicente, H. & Imperial, F. (2006). ANDYMARK: An analytical method to establish dynamically the length of the Markov chain in simulated annealing for the satisfiability problem, Springer Verlag.
  14. Hoos, H. H. & Stutzle, T. (2000). SATLIB: An online resource for research on SAT, SAT 2000, 283-292. Available at http://www.satlib.org/.
  15. Horie, S. & Watanabe, O. (1997). Hard instance generation for SAT, ISAAC '97: Proceedings of the 8th International Symposium on Algorithms and Computation, Springer-Verlag, 22-31.
  16. Ingber, L. (1993). Simulated annealing: Practice versus theory, Mathematical and Computer Modelling 18(11): 29-57.
  17. Ingber, L. (1996). Adaptive simulated annealing (ASA): Lessons learned, Control and Cybernetics 25: 33-54.
  18. Kirkpatrick, S., Gelatt, C. D. & Vecchi, M. P. (1983). Optimization by simulated annealing, Science 220(4598): 671-680.
  19. Martinez-Rios, F. & Frausto-Solis, J. (2007). A hybrid simulated annealing threshold accepting algorithm for satisfiability problems using dynamically cooling schemes, Electrical and Computer Engineering Series, WSEAS, 282-286.
  20. Martinez-Rios, F. & Frausto-Solis, J. (2008a). Golden annealing method for job shop scheduling problem, Mathematics and Computers in Science and Engineering, ISSN 1790-2769.
  21. Martinez-Rios, F. & Frausto-Solis, J. (2008b). Golden ratio annealing for satisfiability problems using dynamically cooling schemes, Lecture Notes in Computer Science 4994: 215-224.
  22. Martinez-Rios, F. & Frausto-Solis, J. (2008c). Simulated annealing for SAT problems using dynamic Markov chains with linear regression equilibrium, MICAI 2008, IEEE, 182-187.
  23. Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H. & Teller, E. (1953). Equation of state calculations by fast computing machines, The Journal of Chemical Physics 21: 1087-1092.
  24. Mezard, M., Parisi, G. & Zecchina, R. (2002). Analytic and algorithmic solution of random satisfiability problems, Science 297(5582): 812-815.
  25. Mezard, M. & Zecchina, R. (2002). The random K-satisfiability problem: from an analytic solution to an efficient algorithm, Phys. Rev. E 66: 056126.
  26. Miki, M., Hiroyasu, T. & Ono, K. (2002). Simulated annealing with advanced adaptive neighborhood, Second International Workshop on Intelligent Systems Design and Application, Dynamic Publishers, Inc., 113-118.
  27. Munakata, T. & Nakamura, Y. (2001). Temperature control for simulated annealing, Physical Review E 64.
  28. Papadimitriou, C. H. (1994). Computational Complexity, Addison-Wesley.
  29. PassMark (2007). CPU BurnInTest. URL: http://www.passmark.com/download/index.htm
  30. Sanvicente-Sanchez, H. & Frausto-Solis, J. (2004). A method to establish the cooling scheme in simulated annealing like algorithms, Lecture Notes in Computer Science.
  31. Schaefer, T. J. (1978). The complexity of satisfiability problems, Proceedings of the Tenth Annual ACM Symposium on Theory of Computing, STOC '78, ACM, 216-226. URL: http://doi.acm.org/10.1145/800133.804350
  32. van Wandelen, C. J. (2007). CpuBench. URL: http://cpubench.softonic.com/eie/21660
  33. Vorte, B. (2007). CpuMathMark. URL: http://files.aoaforums.com/code.php?file=931
