
Engineering » Chemical Engineering » "Recent Advances in Thermo and Fluid Dynamics", book edited by Mofid Gorji-Bandpy, ISBN 978-953-51-2239-5, Published: December 21, 2015 under CC BY 3.0 license. © The Author(s).

Chapter 6

Information Thermodynamics and Halting Problem

By Bohdan Hejna
DOI: 10.5772/61900


1. Introduction

Formulations of the undecidability of the Halting Problem assume that the observed computing process, the description of which is given on the input of the ’observing’ Turing Machine, is an exact copy of the computing process running in the observing machine itself (the Cantor diagonal argument in Minsky’s proof [18]). In this way an Auto-Reference is created: an infinite cycle in the computing sense, a Self-Observation in the information sense, an analogue of the stationary (equilibrium) state in the thermodynamic sense. This Auto-Reference shields what is to be discovered - the infinite cycle in the observed computing process - even for its normal input. This shield is the real product of the Cantor diagonal argument, which has been used for solving the Halting Problem but, on the contrary, creates it [18]. The shield is also a certain image of the sought infinite cycle. From the thermodynamic point of view, this shield could be removed either if Perpetuum Mobile functionality were possible, which it is not (it would require, e.g., the equation x = x + 1 to be solvable), or by an external activity or approach.

This situation is recognizable and, as such, is decidable and solvable in all cases of its realizations by a certain ’step-aside’. For this, we use the previously studied [5, 6, 9, 11] congruence between a cyclical thermodynamic process, represented by the Carnot Cycle, and a repeatable information transfer, represented by the Shannon Transfer Chain; we now enrich this effort by their further congruence with the computing process running in a Turing Machine. Considerations of Gibbs Paradox [7, 8, 11] illustrate the main idea of the term ’step-aside’, which is our main methodological tool for looking for an infinite cycle in a Turing computing process and which enables us to avoid the traditional attempts at solving the Halting Problem. The gap in their formulations is due to the fact that they assume that the computing process observes itself by following itself within the same ’time-click’ of its activity. For this case the claim of the unsolvability of such a situation is, of course, valid. But with a time delay, or with the ’step-aside’ [or a memory (a storage)] considered, the observer works with ’other’ data and ’is able to look’ at its own previous activity as at ’normal’ input data. The idea of the ’step-aside’ is based just on the result giving the solution to Gibbs Paradox and its information meaning [7, 8, 11, 12, 14], and it enables us to remain in compliance with the II. Principle of Thermodynamics during such a process.

Thus, we show that it is possible to recognize the infinite cycle, but with a time delay or a staging (instead of the time delay a staging is usable, lasting a longer time interval in each of its repetitions) in evaluating the trace of the observed computing process. The trace is a message or a record, both about the input data and about the structure of the computing process being observed (the listing, the cross-references and the memory dump in the language of programmers). In this phase the observing Turing Machine (we ourselves) raises the question: "Is there an infinite cycle?" Following the trace, the observing machine gains the answer. In our case, the trace is a recorded sequence of configurations of the observed Turing Machine. These configurations can be simplified to their general configuration types, which now create a word of a regular language [12, 14]. Furthermore, the control unit of any Turing Machine is a finite automaton. Both these facts enable the Pumping Lemma to be used in the observing Turing Machine. In compliance with the Pumping Lemma, we know (the observing Turing Machine knows) that certain general configuration types must be periodically repeated in the case of the infinite length of the word of their regular language. This fact enables us (the observing Turing Machine) to decide that the observed computing process has entered an infinite cycle. This decision is performed in a finite time and the infinite cycle is, by this way, recognizable in a finite time, too. When the described method is used, it becomes an instance of observation. By application to ’itself’ it becomes a higher instance of observation, now observing the trace of its previous instances. Thus the method of staging of the observed process is used again. This method is applicable to all computing processes and, as such, it represents a contribution to the deadlock indication problem.

This sequence of ideas differs from the Cantor diagonal argument. The difference is achieved mainly by the physical, especially thermodynamic and information (structural), type of our consideration, respecting the II. Principle of Thermodynamics as the principal root of methodological approaches of all types, both excluding and also enriching mere empty logic.

2. Notion of Auto-Reference

Paradoxical claims (paradoxes, noetic paradoxes, contradictions, antinomies) have two parts - both parts are true, but the truth of one part denies the truth of the other.

They can arise by not respecting the metalanguage (semantic) level - the higher level of our thinking about problems - and the language (syntactic) level - the lower level of formulations of our ’higher’ thoughts. They also arise by not respecting a double-level organization and description of measuring - by not respecting the need of a ’step-aside’ of the observer from the observed. And they also arise by not respecting different time clicks in time sequences; in this latter case they contradict the causality principle. The common feature of all these cases is the Auto-Reference construction which itself, solved by itself, always states the requirement for ceasing the II. Principle of Thermodynamics and all its equivalents [8-14].

Let us introduce Russell’s criterion for removing paradoxes[1]: within the flow of our thinking and speech we must distinguish between two levels of our thinking and expressing in order not to fall into a paradoxical claim by mutually mixing and interchanging them.

These levels are the higher one, the metalanguage (semantic) level, and the lower one, the language (syntactic) level. Being aware of the existence of these two levels, we prevent ourselves from mixing and interchanging them; we prevent ourselves from applying our metalanguage claims to themselves, but now on the language level, or vice versa.

We must be aware that our claims about properties of considered objects are created on the higher level, which is rather richer, both semantically and syntactically, than the lower one on which we really express ourselves about these objects. The words and meanings of this lower (and ’narrower’) level are common to both of them. Our speech is formulated and performed on the lower level, describing there our ’higher’ thoughts; on this level the objects themselves have also been described and defined - of course from the higher level, but with the necessary (lower) limitations. (As such they are thought over on the higher level.) From this point of view we understand the various meanings (levels) of the same words. Then any mutual mixing and interchanging of the metalanguage and language levels, i.e., the auto-reference (paradox, noetic paradox, contradiction, antinomy), is excluded.

2.1. Auto-Reference in Information Transfer, Self-Observation

In any information transfer channel K the channel equation

H(X) + H(Y|X) = H(Y) + H(X|Y)

is valid [23]. This equation describes the mutual relations among the information entropies [(average) information amounts] in the channel K.

The quantities H(X), H(Y), H(X|Y) and H(Y|X) are the input, the output, the loss and the noise entropy.

The difference H(X) − H(X|Y), or the difference H(Y) − H(Y|X), defines the transinformation T(X;Y), or the transinformation T(Y;X), respectively:

H(X) − H(X|Y) = T(X;Y) = T(Y;X) = H(Y) − H(Y|X)

When the channel K transfers the information (entropy) H(X), but now just at the value of the loss entropy H(X|Y), H(X) = H(X|Y), then, necessarily, it must be valid that

T(X;Y) = 0  [= H(Y) − H(Y|X)]
  • For H(Y|X) = 0, we have T(X;Y) = H(Y) = 0.

  • For H(Y|X) > 0, we have H(Y) = H(Y|X) > 0.

In both these cases the channel K operates as interrupted (with absolute noise): the output H(Y) is without any relation to the input H(X), and it also does not relate to the structure of K. This structure is expressed by the value of the quantity H(X|Y). We assume, for simplicity, that H(Y|X) = 0.

From (1)-(3) it follows that the channel K cannot transfer (within the same step p of its transfer process) information which describes its own inner structure and, thus, it cannot transfer - observe (copy, measure) - itself. This is valid both for a concrete information value and for the average information value.

No channel K can transfer its own states considered as the input messages (within the same steps p). Such an attempt is the information analogy of the Auto-Reference known from Logic and Computing Theory. Thus a certain ’step-aside’ leading to a nonzero transfer output, H(Y) = H(X) − H(X|Y) > 0, is needed.
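As a small numeric illustration of the relations above, the sketch below computes the channel entropies from a joint input-output distribution; the function name and the example distribution are our own, not the author’s.

```python
import math

def entropies(joint):
    """Compute H(X), H(Y), H(X|Y), H(Y|X) and T(X;Y) from a joint
    distribution p(x, y) given as a nested list of probabilities."""
    px = [sum(row) for row in joint]                 # marginal p(x)
    py = [sum(col) for col in zip(*joint)]           # marginal p(y)
    h = lambda ps: -sum(p * math.log2(p) for p in ps if p > 0)
    hx, hy = h(px), h(py)
    hxy = h([p for row in joint for p in row])       # joint entropy H(X,Y)
    hx_y = hxy - hy                                  # loss  H(X|Y)
    hy_x = hxy - hx                                  # noise H(Y|X)
    return hx, hy, hx_y, hy_x, hx - hx_y             # last value: T(X;Y)

# An 'interrupted' channel: output independent of input, so T(X;Y) = 0
hx, hy, hx_y, hy_x, t = entropies([[0.25, 0.25],
                                   [0.25, 0.25]])
print(t)   # 0.0 -- the output carries no information about the input
```

The channel equation H(X) + H(Y|X) = H(Y) + H(X|Y) holds for any joint distribution fed to this helper; the degenerate example shows the ’interrupted’ case with zero transinformation.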

2.2. Auto-Reference and Thermodynamic Stationarity

The transfer process running in an information transfer channel K can be comprehended (modeled or even constructed) as the direct Carnot Cycle O [6, 8]. The relation O ≅ K is postulated. Further, we can imagine its observing method, equivalent to its ’mirror’ O′ ≅ K′. This mirror O′ is, in this case, the direct Carnot Cycle O as for its structure, but functioning in the indirect, reverse mode [6, 8].

Let us connect them together into a combined heat cycle OO′ in such a way that the mirror (the reverse cycle O′) gains the message about the structure of the direct cycle O. This message is (carrying) the information H(X|Y) about the structure of the transformation (transfer) process (O ≅ K) being ’observed’. The mirror O′ ≅ K′ gains this information H(X|Y) on its noise ’input’ H′(Y|X) [while H′(X) = H(Y) is its input entropy].

The quantities ΔQW, ΔA and ΔQ0, or the quantities ΔQ′W, ΔA′ and ΔQ′0 respectively, define the information entropies of the information transfer realized (thermodynamically) by the direct Carnot Cycle O, or by the reverse Carnot Cycle O′ (the mirror), respectively (the combined cycle OO′ is created):

H(X) = ΔQW/kTW, resp. H′(Y) = ΔQ′W/kT′W;  H(Y) = ΔA/kTW, resp. H′(X) = ΔA′/kT′W;  H(X|Y) = ΔQ0/kTW, resp. H′(Y|X) = ΔQ′0/kT′W

Our aim is to gain nonzero output mechanical work ΔA* of the combined heat cycle OO′, ΔA* > 0. We want to gain nonzero information H*(Y*) = ΔA*/kTW > 0.

To achieve this aim, for the efficiencies ηmax and η′max of the two connected cycles O and O′ (with the working temperatures TW = T′W and T0 = T′0, TW − T0 > 0), it would have to be valid that ηmax > η′max; we want the validity of the relation[2]


If ΔQ′0 = ΔQ0 were valid, then it would have to be that ΔQ′W < ΔQW [(ηmax > η′max)], and thus it would hold that

ΔA* = ΔQWηmax − ΔQ′Wη′max > 0, but ΔQWηmax − ΔQ′Wη′max = (ΔQW − ΔQ′W) − (ΔQ0 − ΔQ′0) = ΔQW − ΔQ′W

Thus the output work ΔA* > 0 would be generated without any lost heat, by the direct conversion of the whole heat ΔQW − ΔQ′W, within the cycle OO′. For ηmax < η′max, the same heat ΔQ′W − ΔQW would have to be pumped from the cooler with the temperature T0 to the heater with the temperature TW directly, without any compensation by mechanical work. We see that ΔA* = 0 is the reality.

Our combined machine OO′ would be the II. Perpetuum Mobile in both of these cases. Thus ηmax = η′max must be valid (the heater with the temperature TW and the cooler with the temperature T0 are common), so that

ηmax = η′max < 1 and then ΔQW = ΔQ′W

We must be aware that for ηmax = η′max < 1 the whole information entropy of the environment, in which our (reversible) combined cycle OO′ is running, changes on one hand by the value

H(X)ηmax = (ΔQW/kTW)(1 − β) > 0,  β = 1 − ηmax = T0/TW

and on the other hand it is also changed by the value −H′(Y)η′max = −(ΔQ′W/kT′W)(1 − β′).

Thus it must be changed by the zero value

H(X)ηmax − H′(Y)η′max = 0
The whole combined machine, or the thermodynamic system with the cycle OO′, is, when the cycle OO′ is seen as a whole, in thermodynamic equilibrium. (It can be seen as a unit, analogous to an uninterruptible operation in computing.)
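The equilibrium of the combined cycle under common working temperatures can be checked with a few lines of arithmetic; this is a minimal numeric sketch, with the temperatures and heat value chosen by us for illustration.

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximal (Carnot) efficiency of a cycle working between t_hot and t_cold."""
    return 1.0 - t_cold / t_hot

# The direct cycle O and its mirror share the heater T_W and the cooler T_0,
# so their maximal efficiencies coincide and the combined cycle yields no work.
T_W, T_0 = 600.0, 300.0                    # example temperatures (our choice), K
eta = carnot_efficiency(T_W, T_0)          # direct cycle
eta_mirror = carnot_efficiency(T_W, T_0)   # reverse mirror cycle

dQ_W = 1000.0     # heat drawn by the direct cycle, J; the mirror returns the same
dA_star = dQ_W * eta - dQ_W * eta_mirror   # net work of the combined cycle
print(dA_star)    # 0.0 -- equilibrium: self-observation produces no output
```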


Figure 1. Stationarity of the double cycle OO′

Thus, the observation of the observed process O by the observing reverse process O′ with the same structure (by itself), i.e., the Self-Observation, is impossible in a physical sense and, consequently, in a logical sense too (see the Auto-Reference in computing).

Nevertheless, the construction of the Auto-Reference is describable and, as such, is recognizable and decidable just as a construction sui generis. It leads, necessarily, to the requirement of the II. Perpetuum Mobile functionality when the requirements (5) and (6) are sustained.

(Note, that the Carnot Machine itself is, by its definition, a construction of the infinite cycle of the states of its working medium and as such is identifiable and recognizable.)

2.3. Gibbs Paradox and Auto-Reference in Observation and Information Transfer

Merely by a (thought) ’dividing’ of an equilibrium system A with diaphragms [20], without any influence on its thermodynamic (macroscopic) properties, a non-zero difference of its entropy, before and after the ’dividing’, is evidenced.

Let us consider a thermodynamic system A in volume V and with n matter units of ideal gas in the thermodynamic equilibrium. The state equation of A is pV=nRΘ. For an elementary change of the internal energy U of A we have dU=ncvdΘ.

From the state equation of A, and from the general law of energy conservation [for a (substitute) reversible exchange of heat δq between the system and its environment] we formulate the I. Principle of Thermodynamics, δq=dU+pdV

From this principle, and from the Clausius equation ΔS =Def Δq/Θ, Δq = cvΔΘ + RΘ·ΔV/V, Θ > 0, it follows that


Let us ’divide’ the equilibrium system A in a volume V and at a temperature Θ - better said, the whole volume V (or its whole state space) occupiable by, and just now occupied by, all its constituents (particles, matter units) - with diaphragms (infinitely thin, or ’thought’ only), supposedly not affecting the thermodynamic properties of A, into m parts Ai, i ∈ {1, ..., m}, m ≥ 1, with volumes Vi and with matter units ni. Evidently n = Σi=1m ni and V = Σi=1m Vi.

Let now S0(n) = 0 and S0i(ni) = 0 for all i. For the entropies Si of the Ai considered individually, and for the change ΔS, when the volumes V, Vi are expressed from the state equations, and for p = pi, Θ = Θi, it is gained that σ[i] = Rn[i] ln n[i]. Then, since Si = σi = ni(cv ln Θ + R ln Vi) is valid, we have that


Let us denote the last sum as B further on; B < 0. The quantity B expressed in (11) is the information entropy of a source of messages with an alphabet [n1, n2, ..., nm] and probability distribution [ni/n]i=1m. Such a division of the system into m parts defines an information source whose information entropy has the maximum ln m.

The result (11), ΔS = nRB, is a paradox - a contradiction with our presumption of not influencing the thermodynamic state of A by diaphragms - and leads to the result that the heat entropy S (of a system in equilibrium) is not an extensive quantity. But, by the definition of the differential dS, this is not true.
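The mixing term B and the magnitude of the paradoxical entropy difference can be computed directly; a minimal sketch, with the partition sizes chosen by us and the usual mixing-entropy sign convention assumed.

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

def mixing_term(parts):
    """B = sum_i (n_i/n) ln(n_i/n) for a partition of n matter units;
    B < 0 for any nontrivial partition, and -B is at most ln(m)."""
    n = sum(parts)
    return sum((ni / n) * math.log(ni / n) for ni in parts)

# 'Dividing' 4 mol of gas into two equal halves by thought diaphragms
parts = [2.0, 2.0]
B = mixing_term(parts)
dS_magnitude = -sum(parts) * R * B   # |Delta S| = nR|B| = nR ln 2 here
print(B < 0, abs(-B - math.log(2)) < 1e-12)   # True True
```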

Due to this contradiction we must consider non-zero integration constants S0(n), S0i(ni), in such a way that the equation ΔS = (σ + S0) − Σi=1m(σi + S0i) = 0 is solvable for the system A and all its parts Ai, with the solutions S0[i](n[i]) = n[i]R ln(n[i]/γ[i]).

Then S[i] ≡ S[i]Claus, and we write and derive that

SClaus = Σi=1m SiClaus = Σi=1m niR ln γi = nR ln γ,  γ = γi;  ΔS = 0.

Now let us observe an equilibrium; S = SClaus = SBoltz = kNB* = kN ln N.

Let, in compliance with the solution of Gibbs Paradox, the integration constant S0 be the (change of) entropy ΔS which is added to the entropy σ to obtain the measured entropy SClaus of the equilibrium state of the system A (the final state of the Gay-Lussac experiment) at a temperature Θ. We have shown that without such a correction, the lesser entropy σ is evidenced: σ = SClaus − ΔS, ΔS = S0.

Following the previous definitions and results we have

ΔS = ΔQ0/Θ = nR ln(γ/n),  ln γ = ΔS/(knNA) + ln n = ΔS/kN + ln N − ln NA,  γ = N ⟺ ΔS/kN = ln NA

By the entropy ΔS the ’lost’ heat ΔQ0 (at the temperature Θ) is defined.

Thus, our observation can be understood as an information transfer T in an information channel K with entropies H(X), H(Y), H(X|Y) and H(Y|X) in (4) but now bound physically; we have these information entropies per one particle of the observed system A :

input H(X) =Def S/kN = ln γ = B* = ln N = rB*(r);  output H(Y) =Def σ/kN = BGibbs = BBoltz = B*(r);  loss H(X|Y) =Def S0/kN;  noise H(Y|X) =Def 0 for the simplicity;  H(X|Y) = rB*(r) − [B*(r)] = B*(r)(r − 1) = (B*)·(r − 1)/r,  r ≥ 1;  1/r = ηmax.

For a number m of cells of our railing in the volume V with A, m ≤ N, or for the accuracy r of this description of the ’inner structure’ of A (a thought structure of V with A), and for the number q of diaphragms creating our railing of cells, constructed in such a way that q ∈ ⟨1, m−1⟩, we have that r = (N−1)/q.

Our observation of the equilibrium system A, including the mathematical correction for Gibbs Paradox, is then describable by the Shannon transfer scheme [X, K, Y] where

H(X) = SClaus/kN,  H(X|Y) = S0/kN,  H(Y) = SClaus/kN,  H(Y|X) = ΔS/kN.

(However, a real observation process described in (15), equivalent to that one with r=1, is impossible [6].)

We conclude that the diminishing of the measured entropy value by ΔS against the awaited S, evidenced by Gibbs Paradox, does not originate in the watched system itself. Understood this way, it is a contradiction of a gnoseological character, based on not respecting the real properties of any observation [6]. And this means the following.

The minimal accuracy of our description of the watched system A is obtained for r = ∞. In this case we place q = 0 diaphragms; no railing is laid, m = 1, q = 0. ’We think nothing’ about the ’inner structure’ of the system A, for in this case we are not outside of it (the ’structure’ measured-considered is ’created’ by 0 diaphragms ’laid down’ just from the outside). Thus we define an output information source Y, which is bound physically, and for which its (bound) information entropy is H(Y) = BGibbs = 0. Then the result of such an ’observation’ is 0, and the loss information entropy is


If we consider m > 1 as the number of ’windows’ laid down over the measured (observed) system A from the outside, but now m = 1 and thus q = 0, we resign the possibility of saying anything about the system A other than that it exists. The whole system A is now ’viewed’ in just one ’window’, created by the volume V which the system A itself occupies. This only observation ’window’ is ’pressed’ on us by the system A itself and, by this way, ’we ourselves are identical’ with it: ’we ourselves are’ the system A. We ’can move’ within the volume V, but ’undivided’ this time, and thus our measuring could be ’organized’ only by the mediation of this one window, not laid down from the outside (by us). We are inside this system (of the volume V) and our measuring is organized with the parameter r = ∞. We say that we are identical with the system, for we now have no ’step-aside’ from it [H(Y) = H(X) − H(X|Y) = 0]. This is the reason why we do not see it (from the outside), and what we can think about it is nothing - in the sense that we cannot ’lay down’ the diaphragms over the system (q ≠ 0 is needed) and create on it our ’windows’ (m > 1). Because we do not rule our measuring of the system A, we do not ’divide’ it by our diaphragms laid just from the outside (now q = 0) and, for this reason, we are not able to organize its measuring with a parameter r < ∞. ’We ourselves are’ the system A, and for this we cannot see ourselves from the outside, which means we cannot see the system. (Or we can evidence only its, or our, bare existence in this window.) The Auto-Reference is not possible; the measuring of the system A (from the outside!) would be intended to be organized in the inside (!) of the system A itself.[3]

Our measuring also represents an observation of a certain phenomenon with the degenerated probability distribution [ni/n]i=11. The information amounts i(X) and H(X) of this phenomenon are equal to 0 and, due to this, our measuring, organized by the measuring method with the parameter m = 1 or r = ∞ respectively, will end with the result H(Y) = 0 [H(Y) = ηmaxH(X); H(X) = 0, ηmax = 1/∞].

For all this we need a certain ’step-aside’ from the system A, expressed by a value r < ∞; nevertheless, when the properties of this ’step-aside’ are respected, we do not fall into Gibbs Paradox [even when our railing of diaphragms in the given mode r ∈ (1, ∞) is laid down].

The ideal ’step-aside’ is expressed by the value r=1.

The maximal accuracy of our ’description’, the accuracy of our watching of the system A, is achieved for r = 1. Then B*(r) = B* and for the output, the input and the loss information entropies it is valid that

H(Y) = H(X) = B*,  H(X|Y) = 0

Then we have the ideal ’step-aside’ from the system A. ’We have insight into its inside’ and we can draw a layout of measuring its state without the distortion given by the (at least partial) transfer of its state (energy) into the organization of our measuring [in the form of the nonzero value of the information loss H(X|Y) when r ∈ (1, ∞)].

This ’step-aside’ enables us to set our measuring with the parameters q = N − 1, r = 1. Now we know exactly what we measure: we know that we measure the status of the equilibrium thermodynamic system A and, by our ’step-aside’ from it, we are able to check the precision (the organization) of the measuring method. Just this enables us to organize the measuring of the status of the system A without the nonzero information loss H(X|Y) otherwise generated by the method itself (for r > 1); see the gnoseological defect mentioned above.
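The role of the accuracy parameter r can be tabulated with a small helper; the function and its argument names are our own, with b_star standing for the per-particle output term B*(r) from the text.

```python
def observation_entropies(b_star, r):
    """Entropies of observing an equilibrium system with accuracy r >= 1:
    input rB*(r), output B*(r), loss (r-1)B*(r), efficiency 1/r.
    Here b_star stands for B*(r), the per-particle output term."""
    hx = r * b_star          # input  H(X)   = rB*(r)
    hy = b_star              # output H(Y)   = B*(r)
    loss = (r - 1) * b_star  # loss   H(X|Y) = (r-1)B*(r)
    eta = 1.0 / r            # efficiency eta_max = 1/r
    return hx, hy, loss, eta

# Ideal 'step-aside' r = 1: no loss, H(Y) = H(X)
print(observation_entropies(2.0, 1.0))   # (2.0, 2.0, 0.0, 1.0)
# Coarser observation r = 4: the loss grows, the efficiency drops
print(observation_entropies(2.0, 4.0))   # (8.0, 2.0, 6.0, 0.25)
```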

[But note that the realization of the measuring method, in which information losses are inevitable as a result of the validity of the II. Principle of Thermodynamics (and its equivalents [8]), is something quite different.]

3. Information Thermodynamic Concept Removing Auto-Reference

In the Auto-Reference case, the whole combined machine OO′ is a system in the equilibrium status. For this status we can introduce the term (quasi)stationary status, in which an (infinitesimal) part of heat is circulating. Any round of this circulation lasts a time interval Δt: infinite, Δt → ∞, for a non-ideal model, or finite, Δt < ∞, when the ideal model is used; then the part of heat cannot be infinitesimal. With the exception of the II. Perpetuum Mobile functionality of this combined machine, which is not possible, see (5) and (6), only the opening of the system and an external activity - a certain ’step-aside’ between the cycles O and O′ - moves it away (prevents it) from this status.

Nevertheless, we sustain our want of gaining the information H(X|Y) about the structure of the observed O (the transfer channel K); we want the nonzero value ΔA*, the nonzero information H*(Y*) = ΔA*/kTW > 0.

Then, necessarily, the mirror - the reverse Carnot Cycle O′ (the transfer channel K′) - is to be constructed with that ’step-aside’ (excluding that stationarity) from the observed O ≅ K. Now we mean that the ’step-aside’ of the observing process O′ from the observed process O is realized by the difference TW − T′W > 0. Within this thermodynamic point of view it is valid that ΔA′ < ΔA for T′0 = T0, T′W ≡ T0*


Then, for the whole information amount ΔA*/kTW of our combined cycle it is valid that


The structure H(X|Y) of the observed transfer (channel, process) O ≅ K is measurable with the ’step-aside’ only, created now by the different temperatures (TW > T′W). The result is

ΔA*/kTW > 0,  [ΔA*/kTW = f[H(X|Y)] > 0]

Following (5), (6) and (9), the Auto-Reference arises just when

TW = T′W  [⟹ H*(Y*) = 0]
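The dependence of the output information on the temperature ’step-aside’ can be sketched numerically under the equal-lost-heat condition ΔQ0 = ΔQ′0 used in the text; the temperatures and heat value below are chosen by us for illustration.

```python
k = 1.380649e-23  # Boltzmann constant, J/K

def output_information(dQ_W, T_W, T_W_prime, T_0):
    """Whole output information H*(Y*) = dA*/kT_W of the combined cycle
    when the mirror works between T'_W and T_0, under the equal-lost-heat
    condition dQ'_0 = dQ_0; with T'_W = T_W the output vanishes."""
    dQ_0 = dQ_W * (T_0 / T_W)              # lost heat of the direct cycle
    dQ_W_prime = dQ_0 * T_W_prime / T_0    # mirror heat giving dQ'_0 = dQ_0
    dA = dQ_W - dQ_0                       # work of the direct cycle
    dA_prime = dQ_W_prime - dQ_0           # work consumed by the mirror
    return (dA - dA_prime) / (k * T_W)     # H*(Y*) = dA*/kT_W

print(output_information(1e-20, 600.0, 500.0, 300.0) > 0)  # True: step-aside
print(output_information(1e-20, 600.0, 600.0, 300.0))      # 0.0: Auto-Reference
```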

Now we will describe, in the information thermodynamic way, the problem of a Self-Observation or the Auto-Reference problem and will draw a method of its ceasing.

Since we want to have the information H(X|Y) describing the structure of the transfer process O ≅ T which we observe, we need to gain a nonzero value (difference) ΔA* and, consequently, a nonzero information H*(Y*),


Then we need a ’mirror’: the reverse Carnot Cycle O′ ≅ T′ (or the relevant transfer channel K′) is to be constructed in such a way that the mentioned ’step-aside’ from the observed transfer channel K is respected. [It is the ’step-aside’ of the observing process (O′, T′) from the observed process (O, T); we can also consider a computing process κ and its description - the program η - and its observation, see later.]

Now the required ’step-aside’ is realized by the temperature difference TW − T′W > 0. Thus, within the frame of our thermodynamic approach, we consider that ΔA′ < ΔA for T′0 = T0; then, under the condition ΔQ′0 = ΔQ0, it will be valid for the cycles O, O′ and OO′ that


but also it is valid that


From the proposition ΔQ′0 = ΔQ0 [from relations (21) and (22)] it follows for ΔQ′W that


With the denotation β = (1 − ηmax), β′ = (1 − η′max) we write


Within the cycles O and O′ the following relations are valid,


Figure 2. The concept for ceasing the Auto-Reference

By (23) and (24) for ΔA it is valid in the cycle O that


and, further, for ΔA′ in the cycle O′ we have


and thus, for the cycles O and O′ it is valid that


For the whole work ΔA* of the combined cycle OO′ we have


Then, for the whole change of the thermodynamic entropy within the combined cycle OO′ (measured in information units: Hartley, nat, bit), and thus for the change of the whole information entropy H*(Y*) [HC], following the relation (29), it is valid that


This is valid, for ΔA* is the residual work after the work ΔA′ has been performed at the temperature T′W. Evidently, the sense of the symbol T′W (within the double cycle OO′ and when ΔQ′0 = ΔQ0) is expressible by the symbol T0*, which is possible, for the working temperatures of the whole cycle OO′ are TW and T′W = T0*. The relation (30) expresses the fact that the double cycle OO′ is the direct Carnot Cycle just with its working temperatures TW > T′W = T0*. In the double cycle OO′ it is valid that

β′ = ΔQ′0/ΔQ′W = (ΔQ′0/T′W)/(ΔQ′W/T′W) = H′(Y|X)/H′(Y) = T′0/T′W,  T′W = T0*,  cycle O′
β = ΔQ0/ΔQW = (ΔQ0/TW)/(ΔQW/TW) = H(X|Y)/H(X) = T0/TW,  cycle O
β/β′ = T′W/TW = T0*/TW ≝ β*

and then, by (30) and (31), it is writable that


This is ensured by the propositions TW > T′W, T′0 = T0 and also by the fact that the loss entropy H(X|Y) is described and given by the heat ΔQ′0 = ΔQ0. But for our combined cycle OO′ it is valid too that


and by both parts of (4) we have


For the whole information entropy ΔA*/kTW (the whole thermodynamic entropy SC in information units), and following the previous relations, it is also valid that


And thus, the structure of the information transfer channel K [expressed by the quantity H(X|Y)] is measurable by the value H*(Y*) from (20), (32) and (35). Symbolically, we can write, using a growing function f,

H*(Y*) = ΔA*/kTW = f[H(X|Y)] > 0

The cycles O, O′ and OO′ are Carnot Cycles and thus, by their definition and construction, they are, imaginatively[4], in principle infinite cycles; in each of them the following criterion of an infinite cycle (see [14]) is valid inevitably,

T(X[′]; Y[′]) = H(X[′]) − H(X[′]|Y[′]) = H(Y[′]) > 0 and ΔSL[′] = 0

The construction of the cycle OO′ enables us to recognize that the infinite cycle O is running; the unsolvable case of the Auto-Reference in OO′ occurs just when, and only when,

TW = T′W,  [[TW = T′W] ⟹ [H*(Y*) = 0]]

(Thus, when we simultaneously await anything else than the zero output, or an output which is not relevant to the given input, by this very awaiting we require a construction of the Perpetuum Mobile functionality.)

Our observation of the process O - now considered as a model or as the realization of a computing process κ in a certain Turing Machine TM, or of an information transfer T in a certain channel K - i.e., the measuring of a transferring structure expressed by the value of the quantity H(X|Y), is thus possible, but only by another process with a certain in-built ’step-aside’ from the observed process O, with a certain ’mirror’ O′ ≅ T′, with the aid of another transferring structure expressed by the value of the quantity H′(X|Y).

From both processes, the cycles O and O′, the whole combined cycle OO′ is created, modeling the whole transfer channel KK′ in which the observed, measured property - the value H(X|Y) in the given ’time click’ p (the click, the interval), expressing the structure of the channel K - is one of the ingoing (input) messages (informations). The resulting double cycle OO′ performs as a direct Carnot Cycle with the working temperatures TW and T0*, TW > T0*, T′W = T0*.

The required ’step-aside’ is created by the difference TW − T0* > 0. The whole information entropy (the thermodynamic entropy in information units) of the whole isolated system in which our double cycle OO′ is running - the whole output information H*(Y*) of this double cycle OO′ - is then a certain function f(·) of the measure H(X|Y) of the structure being measured (observed), as the value of the argument H(X|Y) in the relation (36).

Remark: The following consideration is also possible: we use the change of the output entropy H(Y) of the observed process O as its reaction to a change of the input entropy H(X), just with the evidenced H(X|Y). Our measuring is then characterized, in the same sense as in (36) and (37), by the whole value


Now it is obvious that the transferring (copying, measuring, observing) of the structure of the channel K is possible if and only if a certain structural ’step-aside’ between the observed object (the transferred or measured structure - here the structure of the transfer channel K) and the observing process (the channel K′ with the transferring process T′), expressed by nonzero and positive values of the difference (2), is possible.

By the term ’step-aside’ of the observing computing process (let us denote it κ′) from the observed computing process (let us denote it κ) we understand a time delay between them; better said, it is a tracing[5] of the observed computing process κ.

4. Turing Computing, Auto-Reference and Halting Problem

Turing Machine (TM) is driven by a program which is interpreted by its Control Unit (CUTM). The Control Unit CUTM is a finite automaton (Mealy’s or Moore’s sequential machine). The program for the TM consists of the finite sequence η of instructions η[],

η = (ηq)q=1|η| = [(si, xk, sj, yl, D)q]q=1|η|,  |η| ∈ N

Each of these instructions describes an overwriting rule of a regular grammar [15, 19, 21],

si(xk, yl, D)sj

performed in the given step (time, moment) p, p, of the TM ’s activity;

  • si is the i -th nonterminal symbol of the regular grammar, or, respectively, it is a status of the CUTM within the actual step p of the TM ’s activity,

  • xk is an input terminal symbol being read from the input-output tape of the TM within the actual step p of the TM ’s activity,

  • yl is an output terminal symbol by which the CUTM overwrites the symbol xk which has been read (in the actual step p of the TM ’s activity),

  • sj is the successive status of the CUTM, given by the instruction for the following step p+1 of the CUTM.

Within the actual step p of the TM ’s activity the CUTM is changing its status to sj [this change is based on the status si and the symbol xk which has been read (si xk → sj)], and is performing the transformation xk → yl on the scanned (actual) position of the input-output tape;

  • D determines the moving direction of the read-write head of the CUTM after the symbol yl has been recorded [the status sjp (sjp denotes sj for the step p) is used further on as the following one, sjp =Def sip+1], D ∈ {L, R}.

The value L or R of the symbol D determines the left shift or the right shift from the actual position on the input-output tape to its (left or right) neighbor after the transformation of xk to yl has been performed.

The oriented edge of the transition graph of the CUTM (the finite automaton) is described by the symbol si → (xk, yl, D) sj. The TM ’s activity generates the sequence of the instructions having been performed in the steps p, [(sip, xkp, sjp, ylp, Dp)]p=1p=plast,

sip → (xkp, ylp, Dp) sjp, further on sjp = sip+1

(the edge of the oriented transition graph of the CUTM in the step p), through which the computing process (κ) has gone (from the first step p=1 till, for the time being, the last step p=plast of the TM ’s activity). They are also the overwriting rules of the regular grammar, performed within each step p, p ≥ 1,

sip → (xkp, ylp, Dp) sjp, sjp = sip+1

In this way a regular language of the words (xkp, ylp, Dp) or, respectively, a regular language of the performed instructions (sip, xkp, sjp, ylp, Dp) is defined. This second regular language is describable by the rules (of a regular grammar)

Sip → (sip, xkp, sjp, ylp, Dp) Sjp, Sjp =Def Sip+1

applied in each step p ≥ 1 of the TM ’s activity. Thus, this language is acceptable by a certain finite automaton with n states S[].

When this language is infinite[6] (an infinite chain of instructions of finite length), such a word

[(si1, xk1, sj1, yl1, D1), ..., (sip, xkp, sjp, ylp, Dp)], p = l

of the length l exists that for that finite automaton [with n states S[] and the transition rules (42) or (44)] the Pumping Lemma [19, 21] is valid.
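The Pumping Lemma argument used throughout this chapter is, at bottom, the pigeonhole principle: a run of length at least n through an automaton with n states must revisit some state, and the segment between the two visits can be repeated ("pumped") indefinitely. A minimal sketch with invented names:

```python
# Follow a word through a deterministic automaton and report the first
# revisited state, i.e. the indices (i, j) such that word[i:j] is pumpable.

def find_pumpable_loop(delta, start, word):
    """delta: dict (state, symbol) -> state. Returns (i, j) with i < j
    marking a pumpable segment, or None if no state repeats."""
    seen = {start: 0}
    state = start
    for k, sym in enumerate(word, 1):
        state = delta[(state, sym)]
        if state in seen:
            return seen[state], k
        seen[state] = k
    return None

# A 3-state automaton cycling on 'a': any run of length >= 3 repeats a state.
delta = {("q0", "a"): "q1", ("q1", "a"): "q2", ("q2", "a"): "q0"}
print(find_pumpable_loop(delta, "q0", "aaaa"))   # -> (0, 3)
```

This is exactly why the bound card S ≤ L below suffices: a word of that length forces a loop through the automaton's state set.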

4.1. Auto-Reference in Turing Computing

Although any instruction of the Turing Machine TM describes one step of the computing process in this TM, it can be considered as a description (of one step) of an information transfer process running in a certain transfer channel K; we postulate the relation TM ≅ K. The computing process in the TM is thus also a transfer process in a channel K. For K ≅ O it is then valid that TM ≅ O.

In each step p>1 of its activity, the TM ≅ K accepts its own configuration from the previous step p−1 as its input, includes its contemporary status (sip = sjp−1) and generates its status [sip+1] and the configuration for the next step p+1, etc.[7]

Similarly it is valid for the configurations (denoted now by Xp and Yp), see further on.

For each p ≥ 1 we consider the actual instances of the stochastic quantities[8] X, Y,

X ≡ Xp, Y ≡ Yp; X|Y ≡ Xp|Yp, Y|X ≡ Yp|Xp; Yp = Xp+1,
Xp|Xp+1 ≡ Xp′ ≡ Xp ∩ Xp+1, Xp+1|Xp = ΔXp+1 ≡ (Xp ∩ Xp+1)−1Xp+1

In any step p of the TM ’s activity its own configurations (σp, sp, ρp), the members of the sequence of the computing process κ =Def [(σp, sp, ρp)], p = 1, 2, ..., can be considered as follows;

  • let now the stochastic quantity Xp be realized by the chain

(σp, sp, ρp) ∈ T*×S×T*; for p=1: σ1=ε and ρ1=ξ, s1=s0
  • let now the stochastic quantity Yp be realized by the chain

(σp+1, sp+1, ρp+1) ∈ T*×S×T*

Then, the computing process in the TM ≅ K is describable informationally,[9],[10]

H(X) ≡ H(Xp) = H(σp, sp, ρp)
H(Y) ≡ H(Yp) = H(Xp+1) = H(σp+1, sp+1, ρp+1)
H(X|Y) ≡ H(Xp|Yp) = H(Xp|Xp+1), H(Y|X) ≡ H(Yp|Xp) = H(Xp+1|Xp)
T(X;Y) ≡ T(Xp;Yp) = H(Xp) − H(Xp|Xp+1) = H(σp, sp, ρp) − H[(σp, sp, ρp)|(σp+1, sp+1, ρp+1)]
T(Y;X) ≡ T(Yp;Xp) = H(Xp+1) − H(Xp+1|Xp) = H(σp+1, sp+1, ρp+1) − H[(σp+1, sp+1, ρp+1)|(σp, sp, ρp)]

The Auto-Reference arises with the following description of the computing (transfer, observation) process when, e.g., for a certain p ≥ p* ≥ 1,

H(Xp) − H(Xp|Xp+1) = H(Xp+1) − H(Xp+1|Xp), p ≥ 1
H(Xp) = H(Xp|Xp+1); Xp = Xp ∩ Xp+1, ΔXp+1 = ε
H(Xp+1) = H(Xp+1|Xp), [H(Xp+1|Xp) = 0]

This way of consideration ’constructs’ the TM ’s infinite cycle from the programmer’s point of view, the Self-Observation from the information point of view and a stationary status from the thermodynamic point of view.

In any case a ’step-aside’ to gain something else than the zero output is required.

By the ’step-aside’ of the observing computing process from the observed computing process we mean a time delay between those two processes or, better said, a staging of the trace of the observed process.
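As a programmer's caricature of this 'step-aside', the observed process may only append its configurations to a trace (a listing), and the observer works exclusively with that already-recorded past, never with the step currently being computed. A schematic sketch; the process and all names are placeholders of our own:

```python
# The observed process records itself; it never reads its own record.
trace = []

def observed_step(x):
    x = x + 1                  # any computation of the observed machine
    trace.append(x)            # leave a record -- the listing, the staging
    return x

x = 0
for _ in range(3):
    x = observed_step(x)

# The observer inspects the recorded past -- 'another' data than the step
# being computed -- which is what breaks the Auto-Reference.
for past_value in trace:
    pass                       # any analysis of the past configurations
```

The point is purely structural: the observer's input is a memory (a storage) of earlier steps, not the step in progress.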

4.2. Halting Problem as Auto-Reference

Now we consider a certain TM (the observed machine) driven by a program η and working with a certain input word ξ. Let this activity be described by the word [d(TM)].

Let us consider that the TM with the input word ξ

  • halts, HALTTM, no matter whether the word ξ is accepted or rejected,

  • does not halt, LOOPTM (the TM ’s infinite cycle)

Let us construct the three new Turing Machines M1, M2 and M3 as follows [18][11] -

  • M1 works with the input word [d(TM)*ξ] in such a way that it

  • halts, HALTM1Accept,

  • halts, HALTM1Reject,

  • M2 modifies the activity of the M1 in such a way that the input word being worked with is [d(TM)*ξ] and it

  • halts, HALTM2,

  • does not halt, LOOPM2,

  • M3 is an ’extension’ of the M2 : it doubles its own input word [d(TM)] into [d(TM)*d(TM)], gives it to the input of (its sub-machine) M2 and

  • halts, HALTM3,

  • does not halt, LOOPM3,

  • But when the machine M3 ≡ TM accepts the description d(M3), and thus it is valid that d(M3) ≡ d(TM), then


This result (56) is a contradiction. It is the consequence of the Cantor diagonal argument having been used, carrying the Auto-Reference into the sequence of the machines (TM, M1, M2, M3), or, respectively, into the sequence of the machines (TM, M3),

(TM, M3 ≡ TM)

and it leads to the opinion that the proposition of the decidability of the Halting Problem (recognizing the LOOPTM state) is not right.

In any given step p ≥ 1 the machine TM decides about itself (it works with the description of its own actual status); it is the channel ’transferring’ its own structure, it is the Self-Observer. Thus, from the thermodynamic point of view, it is in a stationary status.

Within this point of view, we can envisage two identical but mutually reversed ideal Carnot Cycles connected together. In this sense these two machines O and O′ create the equilibrium system OO′, in which we introduce the term stationary status.

Within such a system an (infinitesimal) part of heat circulates through the whole combined machine OO′. Let this fact now be the thermodynamic model of the infinite cycle started by the Self-Observation, by the Auto-Reference action (56), (57); one run is like an uninterruptible operation.

The recursive call of the function d(TM) (of the machine TM) by the same function d(TM) (by the machine TM) with the same argument [d(TM)*d(TM)] is now given. The Auto-Reference (56), (57) is then the generative function for the infinite sequences, nevertheless only thought ones, as the consequence of the stationarity concept built into this type of consideration,

(TM, TM, ..., TM, ...) ≡ [(TM)Δtp]p=1∞
[d(TM), d(TM), ..., d(TM), ...] ≡ [[d(TM)]Δtp]p=1∞
[HALTTM, HALTTM, ..., HALTTM, ...] ≡ [(HALTTM)Δtp]p=1∞
[LOOPTM, LOOPTM, ..., LOOPTM, ...] ≡ [(LOOPTM)Δtp]p=1∞

We envisage, within this time-expansion (58) of the (56) or (57) [this possibility follows from the (quasi)stationarity concept], that the infinite cycle in the observed machine TM arises, based on its Self-Description [ξ] ≡ [d(TM)]. But, following the Auto-Reference construction, it ’runs’ in the double-machine (TM, M3 ≡ TM) ≅ OO′.[12]

The Auto-Reference step that is to solve the Halting Problem proves only its own unusability; it creates just a certain image of what is to be possibly discovered, the infinite cycle, in the form of the infinite constant time sequences [when the time expansion (58) for p ≥ 1 is considered].

As is valid for any stationary status, this one too can be ceased or excluded by an external action, by the ’step-aside’, by the staging, as follows.

5. Concept Removing Halting Problem

We suppose that in the case of a computing process running in a TM its status LOOPTM (the infinite cycle) is decidable within the Observing Turing Machine (OTM) by using the TM ’s trace. By this trace, the machine OTM generates and controls the ’combined observing process’ for the process in the TM.

We will show and use the fact that certain regular sequences are generated. If they are infinite, they are inevitably periodical; as such, their infinity is decidable [1].
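The decidability of such eventually periodic sequences can be illustrated by a standard cycle-detection algorithm. Floyd's tortoise-and-hare (a textbook substitute, not the author's construction) finds, for any deterministic step function over finitely many configurations, the length λ of the cycle and the length μ of the aperiodic prefix:

```python
# Floyd's cycle detection on an iterated function f starting from x0.
# A deterministic process over a finite configuration space is eventually
# periodic, so (mu, lam) always exist.

def floyd(f, x0):
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:                  # phase 1: find a meeting point
        tortoise, hare = f(tortoise), f(f(hare))
    mu, tortoise = 0, x0
    while tortoise != hare:                  # phase 2: length of the prefix
        tortoise, hare, mu = f(tortoise), f(hare), mu + 1
    lam, hare = 1, f(tortoise)
    while tortoise != hare:                  # phase 3: length of the cycle
        hare, lam = f(hare), lam + 1
    return mu, lam

# A toy step function on 10 'configurations' (illustrative only):
f = lambda x: (x * x + 1) % 10
print(floyd(f, 3))    # prefix of length 1, cycle of length 6
```

The OTM method below plays an analogous role, but it works on the staged trace of the observed TM rather than on a directly iterable function.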

We will use the alphabet of terminal symbols T={I, B} and these structures:

  • (si, xk, sj, yl, D) is the instruction

  • (σ, s[], ρ) is the configuration

  • (ε[σ, s[], ρ]ε) is the configuration type

Further, we introduce the general configuration type X ;

  • X=(σ, s[], ρ)(Bσ, s[], ρB).

By this general configuration type the chains, e.g. BIB s[] BIB, are meant,


The computing process in the observed TM generates the grammar of a regular language of instructions and, especially, also of general configuration types; these languages are possibly infinite and thus cyclical. As such, their infinity is decidable.

These grammars are given by the initial input ξ, or, respectively, by the initial configuration (ε, s0, ξ), and also by the instructions ηqp of the program η, generated in the sequence of the steps p of the TM ’s activity in which the TM instructions are interpreted. This sequence itself is expressed by the configurations having been generated (see Appendix).

The Auto-Reference arises when, e.g., for a certain p ≥ p* ≥ p0 ≥ 1,

H(Xp) − H(Xp|Xp+1) = H(Xp+1) − H(Xp+1|Xp) = 0
Xp ≡ (Xp*, Xp*+1, ..., Xp), Xp+1 ≡ (Xp, Xp+1, ..., X2p+2−p*), (Xp+1 ≡ Xp)

We can also write [something similar can be written for (22), (23); see also the remark 7],


where Pr() denotes probability.

5.1. Method - the OTM

1 Let the Turing Machine, driven by the program η, perform a certain number e·P of instructions η[] (of the program η), beginning from the initial configuration (ε, s0, ξ). Let, e.g., P=2l+1, l=|ξ|, e ≥ 1; e ∈ ℕ is the number of the stage (for each new stage we set, in a programmer style, e:=e+1)

  • The step 1 generates the table of nine-partite structures with the length of e·P rows,[13]

[p, q, (si, xk, sj, yl, D), [σ, s[], ρ], C, (ε[σ, s[], ρ]ε), g, (σ, s[], ρ), m]p=1p=e·P

where the denotation used is

  • C is the number of the configuration (σ, s[], ρ)

  • g is the number of the configuration type (ε(σ, s[], ρ)ε)

  • m is the number of the general configuration type G, (σ, s[], ρ)
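Step 1 above can be sketched as follows. This is our own hedged re-implementation, not the author's code: only the columns used later are kept (the step p, the instruction number q, and the general configuration type), and the collapsing of I-runs and outer blanks into the general type is an assumption about the intended abstraction.

```python
import re

def general_type(tape, head, state):
    """General configuration type: runs of I collapsed to 'i', the infinite
    B-padding reduced to a single B on each side of (sigma, s, rho)."""
    cells = [k for k, v in tape.items() if v == "I"] + [head]
    lo, hi = min(cells), max(cells)
    squeeze = lambda s: re.sub("I+", "i", re.sub("B+", "B", s))
    sigma = squeeze("B" + "".join(tape.get(i, "B") for i in range(lo, head)))
    rho = squeeze("".join(tape.get(i, "B") for i in range(head, hi + 1)) + "B")
    return (sigma, state, rho)

def trace_table(eta, xi, steps):
    """Run the observed TM for at most `steps` steps, one row per step."""
    tape = {i: c for i, c in enumerate(xi)}
    head, state, rows = 0, "s0", []
    for p in range(1, steps + 1):
        x = tape.get(head, "B")
        q = next((n for n, r in enumerate(eta, 1)
                  if r[0] == state and r[1] == x), None)
        if q is None:                        # no rule: the TM halts naturally
            break
        rows.append((p, q, general_type(tape, head, state)))
        _, _, sj, yl, D = eta[q - 1]         # interpret instruction eta_q
        tape[head] = yl
        head += -1 if D == "L" else 1
        state = sj
    return rows

# Example I/II program, input xi = IIIII, one stage of 13 steps:
eta = [("s0", "I", "s0", "I", "R"), ("s0", "B", "s1", "I", "L"),
       ("s1", "I", "s1", "I", "L"), ("s1", "B", "s0", "B", "R")]
rows = trace_table(eta, "IIIII", 13)
```

On this input the rows for p=1 and p=13 coincide in (q, general type), which is the kind of repetition that steps 2-8 below look for.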

2 In the table (1) we seek two successive blocks of rows limited by those rows having identical values in the columns (the identical six-partite structures)

[q, (si, xk, sj, yl, D), (ε[σ, s[], ρ]ε), g, (σ, s[], ρ), m]

Thus, we seek (a sequence of) three rows identical in those columns, while the last row of the first block is the first row of the second block [this second block ends with the third row (identical in the six columns considered)]. The numbers of these separating rows, the first, the second and the third one, are the numbers of steps p[], p[] and p[] of the observed computing process. These rows are separated by numbers


of rows lying between them. (They can follow immediately, Δp[]=0, Δp[]=0.)

3 If the three separating rows are not found within the given stage e (2), we start the computing process driven by the program η, and its tracing, again from the beginning [(ε, s0, ξ), (1)], and let it run e·P steps, where e:=e+1 is set.

4 If those three separating rows are found within the given stage e (2) (those two blocks covering the rows p[], ..., p[] where p[]=p[]+Δp[]+1 and p[]=p[]+Δp[]+1), we check both blocks, each of them from its beginning (p[], or p[] respectively) till its end (p[], or p[] respectively), seeking the rows with identical values within their six columns (62),[14]

[q, (si, xk, sj, yl, D), (ε[σ, s[], ρ]ε), g, (σ, s[], ρ), m]

5 Two or more such identical six-partite structures on successive row positions (denoted by symbols z and m) are considered as only one row

1) z[], ..., z[], p[] = z[] ≤ z[], m = 1 =Def m[]
2) z[], ..., z[], z[] > z[], z[] ≤ z[], m = 2
3) z[], ..., z[], z[] > z[], z[] ≤ z[], m = 3
...) z[]..., ..., z[]..., p[] = z[]... ≤ z[]..., m = ... =Def m[]

(the first of them is considered only).[15] -

6 We check whether, in this way, two new identical successive blocks of unique six-partite structures are created [the first block is between the (newly numbered) rows m[] ÷ m[] and the second is (newly) between the rows m[] ÷ m[]].

Their lengths are Δm[] and Δm[] [ for Δm[]: m[]:=(m[])mod(m[]1)],

Δm[] = m[] − m[] + 1 ≤ Δp[], Δm[] = m[] − m[] + 1 ≤ Δp[].

If NOT, we continue with 2, p*=p[...]; or with 1 when e·P is exhausted, setting e:=e+1;

If YES, the distances (Δm[] [2] and Δm[] [2]) between the marginal rows of the new blocks (5) are constant, Δm[] = Δm[],

[the distances are counted in the number m of the unique six-partite structures (5); the last one of the first block is the first one of the second block], Δm[]=Δm[].

7 If the distance Δp[] between the first and the last row of the first block (2) is less than or equal to the distance Δp[] between the first and the last row of the second block, Δp[] ≤ Δp[], we have discovered the infinite cycle driven by the program η [now the distances are counted in the number of steps p].

We continue further but within the first block and with the Δm[] only.

From each unique six-partite structure (5, 6) only the three columns

[q G m]

are now taken, interpreted as the rules of a regular grammar (accepted by a finite automaton)

Sq* → Gm Sqm, q* ∈ {0} ∪ {1, ..., |η|}, qm ∈ {1, ..., |η|}, m = m[], ..., m[]−1

with the set S of nonterminal symbols and the set T of terminal symbols,

S* =Def {S0} ∪ {Sqm}m=m[]m[]−1, card S* = n, T =Def {Gqm}m=m[]m[]−1

having the starting nonterminal symbol S0 ∈ S*.[16]

8 For the first block of the unique six-partite structures (5, 6, 7) the sequence of rules of a regular grammar is generated (accepted by the finite automaton with the states S0, Sqm[], ...)

S0 → Gm[] Sqm[]
Sqm[] → Gm[]+1 Sqm[]+1
...
Sqm[]−2 → Gm[]−1 Sqm[]−1
Sqm[]−1 → ε S0, [or Sqm[]−1 → ε SHALT, SHALT ∉ S]

where the denotation Sqm[]Sz[],,z[], Sqm[]+1Sz[],, z[], Sqm[]Sz[] ... ,, z[] ... is used.

We have described the activity of a finite automaton which accepts the infinite regular language of the general configuration types (of the configurations of the observed machine TM). They are words of infinite length, having the form

[(σm, sm, ρm)m=m[]m[]−1]+ = [X]+ [≡ [d(TM*)]+]

As soon as the second round of the observed TM through the infinite cycle has been finished, the Pumping Lemma is usable and valid for the length L of the relevant word of this infinite language, [card S* ≤ L < 2·card S*].

We can then generate the status (the signal) SHALT to halt the whole machine OTM and, consequently, the TM.

We can say briefly: if such three identical bi-partite structures [η[], (σ, s, ρ)], following each other, exist that for their distances Δp[] (measured by the number of steps of the observed process) between the first and the second structure and Δp[] between the second and the third structure it is valid that Δp[] ≤ Δp[], then the observed TM is going through the infinite cycle.
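The criterion just stated has a compact sketch: find a structure that occurs three times in the trace and compare the two distances between its occurrences. The trace format and all names here are illustrative, not the author's:

```python
# Flag a suspected infinite cycle: some per-step structure recurs with
# non-decreasing gaps (Dp1 <= Dp2 in the notation of the text).

def infinite_cycle_suspected(keys):
    """keys: one hashable structure per step, e.g. (q, general config type)."""
    positions = {}
    for p, key in enumerate(keys):
        positions.setdefault(key, []).append(p)
        if len(positions[key]) >= 3:
            p1, p2, p3 = positions[key][-3:]
            if (p2 - p1) <= (p3 - p2):
                return True            # the observed TM is cycling
    return False

# A strictly periodic trace is flagged, a non-repeating one is not:
assert infinite_cycle_suspected([1, 2, 3] * 4) is True
assert infinite_cycle_suspected(list(range(12))) is False
```

Shrinking gaps (Δp[] > Δp[]), by contrast, indicate a finite, contracting repetition, which is the case of footnote [16].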

The expression (70) means that we have discovered, within the dynamical system OO′, the dynamical subsystem O*O*′ (TM*) which is in a limit cycle. It means the thermodynamic equilibrium within the double cycle O*O*′; thus for its temperatures it is valid that T*W=T*'W, T*0=T*'0. For the sequences of the general configuration types (G[]), Xp and Xp+1 from (59), (60), it is valid, for a certain p ≥ p* ≥ p0 ≥ 1, that

H(Xp) − H(Xp|Xp+1) = 0, where Xp = (Xp+1, ..., X2p+2−p*) = (Xp*, ..., Xp)

which represents the zero change of the information and thermodynamic entropy within the working medium of a reversible Carnot Cycle; for the sequences Xp and Xp+1 of the observed TM ’s configurations X[] from (47) (TM ≅ O) it is valid that

H(Xp) − H(Xp|Xp+1) = H(Yp) > 0, X[] ∈ {Xp*, ..., X[]}, p* ≥ 1

which represents the nonzero output and also the growth of the thermodynamic and information entropy within the whole isolated system in which that reversible Carnot Cycle is running, see [6, 8]. Generally, any Carnot Cycle is, by its very construction, an infinite cycle, and thus both the relations (71) and (72) represent the information-thermodynamic criterion of the infinite cycle existence.

The following section gives the examples of this method.

6. Examples

Example I

η = (η1, η2, η3, η4);
η1 = (s0, I, s0, I, R)
η2 = (s0, B, s1, I, L)
η3 = (s1, I, s1, I, L)
η4 = (s1, B, s0, B, R)

This program conserves the given string ξ

ξ=[IIII...I] resp. B[IIII...I]B resp. B[IIII...I]B

or BIB respectively. Let the input ξ=IIIII be given. From the table Tab. 1 it is obvious that

p[] = 1, p[] = 13, p[] = 25, m[] = 1, m[] = 7, and thus
Δp[] = 11, Δp[] = 13, Δp[] = 15, Δm[] = Δm[] = Δm[] = ... = 7

Then we can write down the regular grammar of the (regular) language of the general configuration types given by the computing process driven by the program η,

S0 → B s0 IB S1
S1 → BI s0 IB S2 3 4 5
S2 3 4 5 → BI s0 B S6
S6 → BI s1 IB S7 8 9 10
S7 8 9 10 → B s1 IB S11
S11 → B s1 BIB S12
S12 → ε S0

This grammar is with the set S of its nonterminal symbols,

S = {S0, S1, S2 3 4 5, S6, S7 8 9 10, S11, S12}, card S = 7

(let us recall (66)-(69) and that S1 ≡ Sq1, S2 3 4 5 ≡ Sq2, S6 ≡ Sq3, S7 8 9 10 ≡ Sq4, S11 ≡ Sq5, S12 ≡ Sq6)

and generates (the computing process described by it generates) the infinite word [X]+,

[X]+=[B s0 IB, BI s0 IB, BI s0 B, BI s1 IB, B s1 IB, B s1 BIB]+=[d(TM*)]+

For the generation of the signal which halts the whole combined machine we can add and use the rule


After the second round through the indicated infinite cycle the word of the general configuration types of the length l=12 is generated and thus the Pumping Lemma is valid,

card S* ≤ l < 2(card S*), card S* = 7: 7 ≤ 12 < 2·7

See the following table.


Table 1.

Tracing and staging for Example I

[We ’idle’ for the same number of clicks Δp in each of the blocks Δm. The sequence of Δp is (11, 11, 11, ...), the sequence of Δm is (5, 5, 5, ...). The sequence for p is (13, 13, 13, ...) and for m is (7, 7, 7, ...).]

After the observed (sub)machine has entered the infinite cycle, which takes place in a finite time, and has gone through this cycle twice, it is halted by the signal from the observing machine.

Example II

η = (η1, η2, η3, η4);
η1 = (s0, I, s0, I, R)
η2 = (s0, B, s1, I, L)
η3 = (s1, I, s1, I, L)
η4 = (s1, B, s0, B, R)

This program generates the expanding sequence

[IIIII] resp. B[IIIII...I]B, , B[IIII...I...]B,

or BIB respectively. Let the input ξ=IIIII be given. From the following table Tab. 2 it is obvious that

p[] = 1, p[] = 13, p[] = 27, m[] = 1, m[] = 7, and thus
Δp[] = 11, Δp[] = 13, Δp[] = 15, Δm[] = Δm[] = Δm[] = ... = 7

and we can write down the regular grammar (of the regular) language of the general configuration types generated by the computing process driven by η

S0 → B s0 IB S1
S1 → BI s0 IB S2 3 4 5
S2 3 4 5 → BI s0 B S6
S6 → BI s1 IB S7 8 9 10
S7 8 9 10 → B s1 IB S11
S11 → B s1 BIB S12
S12 → ε S0

where, after the relations (66)-(69), S1Sq1, S2 3 4 5Sq2, S6Sq3, S7 8 9 10Sq4, S11Sq5, S12=Sq6.

This grammar is with the set of nonterminal symbols

S* = {S0, S1, S2 3 4 5, S6, S7 8 9 10, S11, S12}, card S* = 7

and generates (the computing process described by it generates) the infinite word [X]+,

[X]+=[B s0 IB, BI s0 IB, BI s0 B, BI s1 IB, B s1 IB, B s1 BIB]+=[d(TM*)]+

For the generation of the signal which halts the whole combined machine we can add and use the rule


After the second round through the indicated infinite cycle the word of the general configuration types of the length l=12 is generated and thus the Pumping Lemma is valid,

card S* ≤ l < 2(card S*), card S* = 7: 7 ≤ 12 < 2·7

See the following table.

We go through the seven states repeatedly, and each such pass lasts longer than the previous one.

We ’idle’ in each such pass for a longer time (for the growing number of clicks p of the CUTM).

[The growing sequence of Δp, (11, 13, 15, ...), is evidenced, while the sequence of Δm is (5, 5, 5, ...). The sequence for p is (13, 15, 17, ...) and for m is (7, 7, 7, ...).]

After the observed (sub)machine has entered the infinite cycle, which occurs in a finite time, and has gone through this cycle twice, it is halted by the signal from the observing machine.


Table 2.

Tracing and staging for Example II
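The run just tabulated can be reproduced end to end. The sketch below is our own hedged re-implementation (the collapsing of I-runs into a general configuration type is an assumption about the intended abstraction): it runs η on ξ = IIIII, collapses consecutive equal general configuration types (the 'm' numbering of the method), and checks that the collapsed sequence cycles through the six types of the word [X]+ above.

```python
import re

ETA = [("s0", "I", "s0", "I", "R"), ("s0", "B", "s1", "I", "L"),
       ("s1", "I", "s1", "I", "L"), ("s1", "B", "s0", "B", "R")]

def general_type(tape, head, state):
    """Collapse runs of I to 'i' and outer blanks to a single B."""
    cells = [k for k, v in tape.items() if v == "I"] + [head]
    lo, hi = min(cells), max(cells)
    squeeze = lambda s: re.sub("I+", "i", re.sub("B+", "B", s))
    sigma = squeeze("B" + "".join(tape.get(i, "B") for i in range(lo, head)))
    rho = squeeze("".join(tape.get(i, "B") for i in range(head, hi + 1)) + "B")
    return sigma + " " + state + " " + rho

def run(eta, xi, steps):
    tape = {i: c for i, c in enumerate(xi)}
    head, state, types = 0, "s0", []
    for _ in range(steps):
        types.append(general_type(tape, head, state))
        x = tape.get(head, "B")
        rule = next(r for r in eta if r[0] == state and r[1] == x)
        tape[head] = rule[3]
        head += -1 if rule[4] == "L" else 1
        state = rule[2]
    return types

types = run(ETA, "IIIII", 60)
# the 'm' numbering: consecutive identical types count as one row
collapsed = [t for i, t in enumerate(types) if i == 0 or t != types[i - 1]]
print(collapsed[:6])          # the six general configuration types of [X]+
assert collapsed[:6] == collapsed[6:12] == collapsed[12:18]
```

Although each pass through the six types takes more and more steps p (the growing Δp above), the collapsed sequence is strictly periodic with period 6, which is exactly what licenses the Pumping Lemma argument and the SHALT signal.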

Example III

η = (η1, η2, η3, η4, η5, η6, η7, η8, η9, η10, η11, η12, η13);
η1 = (s0, I, s1, B, R)
η2 = (s1, I, s1, I, R)
η3 = (s1, B, s2, B, R)
η4 = (s2, I, s2, I, R)
η5 = (s2, B, s3, B, L)
η6 = (s3, I, s4, B, L)
η7 = (s4, B, sH, B, R)
η8 = (s4, I, s5, I, L)
η9 = (s5, I, s5, I, L)
η10 = (s5, B, s6, B, L)
η11 = (s6, I, s6, I, L)
η12 = (s6, B, s0, B, R)
η13 = (s0, B, s0, B, L)

This program computes the difference between the numbers 3 and 4, in this order, without checking the negativity of the result. Thus it enters the infinite cycle in which it generates the left-expanding sequence


Let now the input word ξ=IIIBIIII be given.

From the following table Tab. 3 it is obvious that, after the generation of the beginning part G of finite length of the infinitely long word of the general configuration types numbered by values of m,


the part [22]+ follows. From the table it is obvious that the cycle has the values

p[] = 41, p[] = 42, p[] = 43, m[] = 41, m[] = 42

We can write down the regular grammar of the (regular) language of the general configuration types generated by the computing process driven by η,

S0 → G S140
S140 → B s0 BIB S41 42 43
S41 42 43 → B s0 BIB S41 42 43

This grammar is with the set of nonterminal symbols

S = {S0, S140, S41 42 43}, card S = 3

and generates the infinite word

After the second round through the indicated infinite cycle the subword of the length l = 3 is generated,


and thus the Pumping Lemma is valid,

card S* ≤ l < 2(card S*), card S* = 3: 3 ≤ 3 < 2·3

For the generation of the signal which halts the whole combined machine we can use (or add) the rule


Here again it is valid that, for the finite number of tracing steps, each lasting a finitely long time, both halt states of our double Turing Machine, which is a Turing Machine too, occur in a finite time.


Table 3.

Tracing and staging for Example III

[The distance Δp of p for the general configuration numbered 22 is constant (Δp=0).]

Example IV

η = (η1, η2, η3, η4, η5, η6, η7, η8, η9, η10, η11, η12, η13);
η1 = (s0, I, s1, B, R)
η2 = (s1, I, s1, I, R)
η3 = (s1, B, s2, B, R)
η4 = (s2, I, s2, I, R)
η5 = (s2, B, s3, B, L)
η6 = (s3, I, s4, B, L)
η7 = (s4, B, sH, B, R)
η8 = (s4, I, s5, I, L)
η9 = (s5, I, s5, I, L)
η10 = (s5, B, s6, B, L)
η11 = (s6, I, s6, I, L)
η12 = (s6, B, s0, B, R)
η13 = (s0, B, s0, B, L)

This program computes the difference 4 − 3; the input is ξ = IIIBIIII.

From the following table it follows that during our staging the whole double-machine halts itself. In the numbers m of the general configuration types the following word G is generated,


We can write the regular grammar of the (regular) language of the general configuration types being generated by the process driven by η,

S0 → G1 S12345678910111213
S12345678910111213 → G2 S1415161718192021

This grammar has the set S of the nonterminal symbols,

S = {S0, S12345678910111213, S1415161718192021, SHALT}, card S = 3

The whole staging is ended by the natural end of the process in the observed machine.


Table 4.

Tracing and staging for Example IV

[All computing and tracing is stopped by the end of the observed process.]

All the previous examples have shown that after a finite number of steps, each interpreted in a finite time (and thus in finite time only), the signal halting the whole double machine is generated in all cases. The reason for generating the halting signal, and with it the recognition of the finite or infinite cycle in the observed machine, is quite visible from all our tracing tables. Our double machine is a Turing Machine, too.

7. Conclusion

The unsolvable decision problems are of two types. The first type of problem is solvable, but not with the objects and decision-computing methods we have at our disposal. An example is the unsolvability of binomic equations on the real axis. With the Complex Number Theory, describing the physical reality, they are solvable; the help is that the imaginary axis (a new dimension) has been introduced. Another example is Fermat's Last Theorem and its solution (Andrew Wiles 1993 and Ann. Math. 1995).

The second type of unsolvable or undecidable problems are those which are posed mistakenly, having an Auto-Reference embedded. They are the paradoxes, which invoke infinite cycles when they are ’solved’, and just for this they are reducible to the Halting Problem; their solution without a certain ’step-aside’ would require the Perpetuum Mobile functionality. This does not mean that a computation following their description is not performed physically, but it is not resultative in a finite time.

(Nevertheless, we can want the infinite cycle for technological purposes, e.g. the push-pull circuit. Here the infinite cycle’s functionality is created intentionally and, as such, the push-pull circuit is an example of recurring but not recursive computing. Here the Auto-Reference is introduced intentionally, as a successive computing method, creating an infinite sequence of wanted values.)

Generally, a sequence of states or computing steps of solving a problem can be divergent, constant or convergent. The divergent and constant cases are felt as the real examples of the infinite cycle in the very sense of this term. We can say, rather jokingly, that the convergent computation halts, even if only in the infinity (e.g. the Newton method), and that the divergent computation does not halt even in the infinity, including now the constant sequence too; the model is the information transfer in an interrupted information transfer channel. When, in a recurring computation, the number of computing steps is not given explicitly, the results from the successive steps must be compared. When this comparison is set badly, or is not set at all, the infinite cycle occurs and, by the algorithm’s definition requiring resultativeness, such a computation is erroneous. The flagrant example of a badly set task is the way in which the Gibbs Paradox arises: here the Auto-Reference is embedded by not respecting the difference between what is measured (observed) and what is measuring (observing). We used it, in the extreme case (with the complete mixing of these two levels), as the physical model of all noetic paradoxes.

The aim of this paper was to detect the infinite cycle from its own characteristics. Our vision is that computation itself is of a physical character and, as such, is subjected to physical laws, especially to the II. Principle of Thermodynamics. The infinite cycle is viewed as a certain type of equilibrium state.[17] To await the finite-time end of such a state is a paradoxical and, as such, unachievable wish. Nevertheless, all these cycles are representable by the Carnot Cycle (it is the infinite cycle conceptually), used as the thermodynamic model of a cyclic information transfer [6, 8, 10]. We see here the growth of the thermodynamic entropy within the whole isolated system in which the cycle, or the information transfer, is running, and we see the constant or decreasing thermodynamic entropy within its working medium, or within the transfer channel in the information-thermodynamic representation. From this point of view the aim to recognize any infinite cycle, to decide the Halting Problem, is solvable. The information-thermodynamic considerations were expressed in terms of the Automata Theory: the general configuration types of the observed Turing Machine were generated and the Pumping Lemma was used. The author believes he has shown that problems posed paradoxically, erroneously as for being resultative, have the Auto-Reference embedded, whether in the objective of the problem or in the way of solving it; the Auto-Reference can be in the solving method while the very objective of the problem can be solvable. The author’s wish is that the following claim could be considered as the theorem for recognizing, deciding, any infinite cycle:

Due to the facts that any infinite cycle starts at a finite time, that the Control Unit of any Turing Machine is a finite-state automaton, and that the Pumping Lemma is valid for the regular, infinite and thus periodical language of the general configuration types of the observed Turing Machine, the Halting Problem is decidable.

8. Appendix

We consider the basic types of chains of the terminal symbols on the input-output TM ’s tape,


BB; B=B,B=BB,B=BBB, ..., Bε_

Further types are











The following equivalences of chains of the terminal symbols are considerable,
















This work was supported by the grant of the Ministry of Education of the Czech Republic MSM 6046137307.


1 - Ajzerman, M. A. a kol. Logika automaty a algoritmy; Academia: Praha, 1971.
2 - Hejna, B. Generování tabulek přechodů konečných automatů. Ing. Diplomová práce, Praha, FEL ČVUT, Praha, 1973.
3 - Hejna, B.; Vajda, I. Information transmission in stationary stochastic systems. AIP Conf. Proc. 1999, 465 (1), 405–418. DOI: 10.1063/1.58272.
4 - Hejna, B. Informační kapacita stacionárních fyzikálních systémů. Ph.D. Dissertation, ÚTIA AV ČR, Praha, FJFI ČVUT, Praha, 2000.
5 - Hejna, B. Tepelný cyklus a přenos informace. Matematika na vysokých kolách: Determinismus a chaos; JČMF, ČVUT: Praha, 2005; pp 83–87.
6 - Hejna, B. Thermodynamic Model of Noise Information Transfer. In AIP Conference Proceedings, Computing Anticipatory Systems: CASYS’07 – Eighth International Conference; Dubois, D., Ed.; American Institute of Physics: Melville, New York, 2008; pp 67–75. ISBN 978-0-7354-0579-0. ISSN 0094-243X.
7 - Hejna, B. Informační význam Gibbsova paradoxu. In Matematika na vysokých kolách: Variačn principy; JČMF: Praha, 2007; pp 25–31.
8 - Hejna, B. Gibbs Paradox as Property of Observation, Proof of II. Principle of Thermodynamics. In AIP Conf. Proc., Computing Anticipatory Systems: CASYS‘09: Ninth International Conference on Computing, Anticipatory Systems, 3–8 August 2009; Dubois, D., Ed.; American Institute of Physics: Melville, New York, 2010; pp 131–140. ISBN 978-0-7354-0858-6. ISSN 0094-243X.
9 - Hejna, B. Informační termodynamika I.: Rovnovážná termodynamika přenosu informace; VŠCHT Praha: Praha, 2010. ISBN 978-80-7080-747-7.
10 - Hejna, B. Informační termodynamika II.: Fyzikální systémy přenosu informace; VŠCHT Praha: Praha, 2011. ISBN 978-80-7080-774-3.
11 - Hejna, B. Information Thermodynamics, Thermodynamics - Physical Chemistry of Aqueous Systems, Juan Carlos Moreno-Piraján (Ed.), ISBN: 978-953-307-979-0, InTech, 2011. Available from:
12 - Hejna, B. Recognizing the Infinite Cycle: A Way of Looking at the Halting Problem. Lecture on CASYS’11 Conference, August 8-13, 2011, Liège, Belgium. In Proceedings of the Tenth International Conference CASYS’11 on Computing Anticipatory Systems, Liège, Belgium, August 8-13, 2011; Dubois, D. M., Ed.; CHAOS, 2012. ISSN 1373-5411.
13 - Hejna, B. Information Capacity of Quantum Transfer Channels and Thermodynamic Analogies, Thermodynamics - Fundamentals and its Application in Science, Ricardo Morales-Rodriguez (Ed.), ISBN: 978-953-51-0779-8, InTech, 2012. Available from:
14 - Hejna, B. Informační Termodynamika III.: Automaty, termodynamika, přenos informace, výpočet a problém zastavení; VŠCHT Praha, 2013, ISBN 978-80-7080-851-1.
15 - Hopcroft, J. E.; Ullman J. D. Formálne jazyky a automaty; Alfa: Bratislava, 1978.
16 - Horák, Z.; Krupka, F. Technická fyzika; SNTL/ALFA: Praha, 1976.
17 - Kalčík, J.; Sýkora, K. Technická termomechanika; Academia: Praha, 1973.
18 - Manna, Z. Matematická teorie programů; SNTL: Praha, 1981.
19 - Mareš, J. Technická kybernetika; ČVUT: Praha, 1992, 1997.
20 - Maršák, Z. Termodynamika a statistická fyzika; ČVUT: Praha, 1995.
21 - Melichar, Z. Gramatiky a jazyky; ČVUT: Praha, 1979, 1982.
22 - Moisil, Br. C. Algebraická teorie automatů; ČSAV: Praha, 1964.
23 - Prchal, J. Signály a soustavy; SNTL/ALFA: Praha, 1987.
24 - Kobrinskij N. E.; Trachtěnbrot B. A. Úvod do teorie konečných automatů; SNTL: Praha, 1967.


[1] - B. Russel, L. Whitehead, Principia Mathematica, 1910, 1912, 1913 and 1927.

[2] - We follow the proof of physical and thus logical impossibility of the construction and functionality of the Perpetuum Mobile of the II. and, equivalently [8], of the I. type.

[3] - Our eye cannot see itself by itself only and, even more, it cannot see inside itself by itself only.

[4] - If an infinite reserve of energy existed.

[5] - In the programmers’ manner or language: listing, cross-reference, memory dump.

[6] - Better said, having the arbitrary (but finite) length.

[7] - (sip, xkp, sjp, ylp, Dp)(σp, sp, ρp)(σp+1, sip+1, ρp+1), see further.

[8] - ’ now denotes the substring from the beginning of the string.

[9] - The transitions are given by the ηqp called by Xp; Xp|Xp+1ηqp, [ηqp1(Xp+1)=Xp].

[10] - ηqp1(Xp+1)=Xp - comparison of the structures of the Xp and the Xp+1 (in Xp|Xp+1).

[11] - Minski’s proof for the undecidability of the Halting Problem (Entscheidungsproblem type).

[12] - Generally, the cycle OO is considered as the reversible only.

[13] - The symbols '[' and ']' in the tables denote the range of the input (its limits) and, also, the ’operating space’ for the CUTM in each step.

[14] - If these blocks are identical the infinite cycle is discovered, but we continue uniformly with 5.

[15] - The situation in the first block is described only by (64).

[16] - 7a If the distance Δp[] between the first and the last row of the first block (5) is greater than the distance Δp[] between the first and the last row of the second block, Δp[] > Δp[], we have discovered the finite cycle driven by the program η.

[17] - It is interesting that the stability of an equilibrium state and that of an atomic structure are similar. Without the natural radioactivity the end of atoms would seem to ’be in the infinity’, too.