## 1. Introduction

Perhaps the most troubling enigma in modern natural science is the principal contradiction between quantum mechanics and relativity theory (Greene, 2003). Indeed, this principal incompatibility propelled Einstein to relentlessly pursue a 'Unified Field Theory' (Einstein, 1929, 1931, 1951) and subsequently prompted an intensive search for a 'Theory of Everything' (TOE) (Bagger & Lambert, 2007; Brumfiel, 2006; Ellis, 1986; Hawking, 2002; Polchinski, 2007). The principal contradictions between quantum mechanics and relativity theory are:

Probabilistic vs. deterministic models of physical reality:

Relativity theory is based on a positivistic model of ‘space-time’ in which an object or an event possesses clear, definitive ‘space-time’ and ‘energy-mass’ properties, and which therefore gives rise to precise predictions regarding the prospective ‘behavior’ of any such object or event (e.g., given an accurate description of the system’s initial state). In contrast, the probabilistic interpretation of quantum mechanics posits the existence of only a ‘probability wave function’ which describes physical reality in terms of complementary ‘energy-space’ or ‘temporal-mass’ uncertainty wave functions (Born, 1954; Heisenberg, 1927). This means that at any given point in time all we can determine (e.g., at the subatomic quantum level) is the statistical likelihood that a given particle or event possesses certain ‘spatial-energetic’ and ‘temporal-mass’ complementary values. Moreover, the purely probabilistic nature of quantum mechanics dictates that this statistical uncertainty is almost ‘infinite’ prior to our measurement of the particle’s physical properties and ‘collapses’, upon our interactive measurement, into a relatively defined (complementary) physical state. Hence, quantum mechanics may only provide us with a probabilistic prediction regarding the physical features of any given subatomic event – as opposed to the relativistic positivistic (deterministic) model of physical reality.

“Simultaneous-entanglement” vs. “non-simultaneous-causality” features:

Quantum and relativistic models also differ in their (a-causal) *‘simultaneous-entanglement’* vs. *‘non-simultaneous-causal’* features. In relativity theory the speed of light represents the ultimate constraint imposed upon the transmission of any physical signal (or effect), whereas quantum mechanics advocates the existence of a ‘simultaneous-entanglement’ of quantum effects (e.g., which are not bound by the speed-of-light constraint). Hence, whereas the relativistic model is based on strict causality – i.e., separating any spatial-temporal ‘cause’ and ‘effect’ through the (non-simultaneous) speed-of-light signal barrier – quantum entanglement allows for ‘a-causal’ *simultaneous effects* that are independent of any light-speed constraint (Horodecki et al., 2007).

Single vs. multiple spatial-temporal modeling:

Finally, whereas relativity theory focuses on the conceptualization of only a *single* spatial point at any given time instant – i.e., one which therefore possesses well-defined spatial position, mass, energy, and temporal measures – quantum mechanics allows for the measurement (and conceptualization) of multiple spatial-temporal points (simultaneously), giving rise to a (probability) ‘*wave* function’. Indeed, it is hereby hypothesized that this principal distinction between a *single spatial-temporal* quantum *‘particle’* (or localized relativistic object or event) and a *multi-spatial-temporal* quantum *‘wave’* (function) may both shed light on some of the key conceptual differences between quantum and relativistic modeling and potentially assist us in bridging the apparent gap between these two models of physical reality (based on a conceptually higher-ordered computational framework).

## 2. The ‘Duality Principle’: Constraining quantum and relativistic 'Self-Referential Ontological Computational System' (SROCS) paradigms

However, despite these (apparent) principal differences between the quantum and relativistic models of physical reality, it is hypothesized that both theories share a basic *‘materialistic-reductionistic’* assumption underlying their (theoretical) computational structure: it is suggested that common to both quantum and relativistic theoretical models is a fundamental ‘Self-Referential-Ontological-Computational-System’ (SROCS) structure (Bentwich, 2003a, 2003b, 2003c, 2004, 2006), which assumes that it is possible to determine the ‘existence’ or ‘non-existence’ of a certain ‘y’ factor solely based on its *direct physical interaction* (PR{x,y}/di1) with another ‘x’ factor (e.g., at the same ‘di1’ computational level), thus:

PR{x,y}/di1 → ['y' or 'not y']/di1

But, a strict computational-empirical analysis indicates that such a (quantum or relativistic) SROCS computational structure may inevitably lead to *‘logical inconsistency’* and consequent *‘computational indeterminacy’* – i.e., an in-principle inability of the (hypothesized) SROCS computational structure to determine whether the particular ‘y’ element “exists” or “doesn’t exist”. Indeed (as will be shown below), such ‘logical inconsistency’ and subsequent ‘computational indeterminacy’ occur in the specific case in which the direct physical interaction between the ‘x’ and ‘y’ factors leads to a situation in which the ‘y’ factor “*doesn’t* exist” – termed a ‘Self-Referential-Ontological-Negative-Computational-System’ (SRONCS). However, since there exists ample empirical evidence that both quantum and relativistic computational systems *are capable* of determining whether a particular ‘y’ element (e.g., state/s or value/s) “exists” or “doesn’t exist”, this contradicts the SRONCS’s (above-mentioned) inevitable ‘computational indeterminacy’, thereby calling for a reexamination of the currently assumed quantum and relativistic SROCS/SRONCS computational structure.

Indeed, this analysis (e.g., delineated below) points at the existence of a (new) computational *‘Duality Principle’* which asserts that the computation of any hypothetical quantum or relativistic (x,y) relationship/s must take place at a *conceptually higher-ordered computational framework ‘D2’* – e.g., that is (in principle) irreducible to any direct (or even indirect) physical interaction between the (quantum or relativistic) ‘x’ and ‘y’ factors (but which can nevertheless determine the association between any two given ‘x’ and ‘y’ factors) (Bentwich, 2003a, 2003b, 2003c, 2004, 2006a, 2006b).
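The self-referential structure just described can be caricatured with a small toy model. This is purely an illustration under hypothetical names and encodings (a boolean standing in for "existence"), not part of the paper's own formalism:

```python
# Toy sketch of the SROCS/SRONCS structure: a single computational level
# ('di1') tries to determine whether 'y' exists solely from the outcome of
# its direct physical interaction with 'x'. All names are illustrative.

def srocs_di1(interaction_outcome: bool) -> bool:
    """At 'di1', the verdict on y's 'existence' is nothing but the
    outcome of the direct physical interaction PR{x, y}."""
    return interaction_outcome

# SROCS case: the interaction affirms y -- no inconsistency arises.
assert srocs_di1(True) is True

# SRONCS case: the interaction *negates* y. But y had to take part in
# that very interaction (so 'di1' also treats y as existing), leaving
# 'di1' asserting both "y exists" and "y doesn't exist":
y_took_part_in_interaction = True
verdict = srocs_di1(False)                       # "y doesn't exist"
inconsistent = y_took_part_in_interaction and not verdict
assert inconsistent                              # the 'logical inconsistency'
```

The point of the sketch is only that a negating verdict produced at the same level as the interaction contradicts that level's own premise that 'y' participated in the interaction.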

In the case of relativity theory, this basic SROCS computational structure pertains to the computation of any spatial-temporal or energy-mass value/s of any given event (or object) – solely based on its *direct physical interaction* with any hypothetical (differential) relativistic observer. We can represent any such (hypothetical) spatial-temporal or energy-mass value/s (of any given event or object) as a particular *‘Phenomenon’*: ‘P[s-t *(i...n)*, e-m *(i...n)*]’. Based on the (above) relativistic ‘materialistic-reductionistic’ assumption – whereby any (spatial-temporal or energy-mass) ‘Phenomenon’ value is computed solely based on its direct physical interaction (‘di1’) with a specific (hypothetical differential) relativistic observer – we obtain the (above-mentioned) SROCS computational structure:

(2) PR{O-diff, P[s-t(i...n), e-m(i...n)]}/di1 → ['P[s-t(i), e-m(i)]' or 'not P[s-t(i), e-m(i)]']/di1

Hence, according to the above-mentioned SROCS computational structure, the relativistic SROCS computes the “existence” or “non-existence” of any particular ‘Phenomenon’ (e.g., specific ‘spatial-temporal’ or ‘energy-mass’ *‘i’* value/s of any given object/event) – solely based upon the direct physical interaction (PR.../di1) between the potential (exhaustive hypothetical) values of this Phenomenon (‘P[s-t *(i...n)*, e-m *(i...n)*]’) and any hypothetical differential relativistic observer. Note that this structure assumes that it is solely through the direct physical interaction between any series of (hypothetical differential) relativistic observer/s and the Phenomenon’s (entire spectrum of possible spatial-temporal or energy-mass) value/s that a particular ‘Phenomenon’ (spatial-temporal or energy-mass) value is computed. But a thorough analysis of this SROCS computational structure indicates that in the specific case in which the direct physical interaction between any hypothetical differential relativistic observer/s and the *Phenomenon’s* whole spectrum of potential values leads to the *“non-existence”* of all of the other ‘space-time’ or ‘energy-mass’ values that were not measured by a particular relativistic observer (‘O-*i*’) (at the same ‘di1’ computational level), it constitutes a SRONCS:

PR{O-i, P[s-t(i...n), e-m(i...n)]}/di1 → 'not P[{s-t ≠ i} or {e-m ≠ i}]'/di1

However, this SRONCS computational structure inevitably leads to the (above mentioned) ‘logical inconsistency’ and *‘computational indeterminacy’*:

This is because, according to this SRONCS computational structure, all of the other ‘Phenomenon’ values (e.g., ‘space-time’ or ‘energy-mass’ values) which do not correspond to the specifically measured ‘space-time’ or ‘energy-mass’ {i} values (i.e., those measured by a particular corresponding ‘O-*diff-i*’ relativistic observer) – P[{s-t ≠ *i*} or {e-m ≠ *i*}] – are necessarily computed by the SRONCS paradigmatic structure to both “*exist*” AND *“not exist”* at the same ‘di1’ computational level:

'P[{s-t ≠ i} or {e-m ≠ i}]' → ["exists" AND "doesn't exist"]/di1

But, given the SROCS/SRONCS strong ‘materialistic-reductionistic’ working assumption – i.e., that the computation of the “existence” or “non-existence” of the particular P[{s-t ≠ *i*}, {e-m ≠ *i*}] values solely depends on their direct physical interaction with the series of (potential) differential observers at the ‘di1’ computational level – the above SRONCS assertion that the particular P[{s-t ≠ *i*}, {e-m ≠ *i*}] values both “exist” and “don’t exist” at the same ‘di1’ computational level inevitably leads to both ‘logical inconsistency’ and a closely linked ‘computational indeterminacy’ – i.e., a conceptual computational inability of such a ‘di1’ computational level to determine whether the P[{s-t ≠ *i*}, {e-m ≠ *i*}] values “exist” or “don’t exist”.

But, since there exists ample relativistic empirical evidence pointing at the capacity of any relativistic observer to determine whether a particular ‘P[{s-t ≠ *i*}, {e-m ≠ *i*}]’ “exists” or “doesn’t exist”, a novel (hypothetical) computational *‘Duality Principle’* asserts that the determination of the “existence” or “non-existence” of any given P[s-t *(i...n)*, e-m *(i...n)*] can only be computed at a conceptually higher-ordered ‘D2’ computational level – i.e., one that is in principle irreducible to any direct or even indirect physical interaction between the full range of possible ‘Phenomenon’ values ‘P[s-t *(i...n)*, e-m *(i...n)*]’ and any one of the potential range of (differential) relativistic observers:

(6) D2: {'O-diff(i)', 'P[s-t(i), e-m(i)]'}, where D2 is irreducible to PR{O-diff, P[s-t(i...n), e-m(i...n)]}/di1

Note that the computational constraint imposed by the Duality Principle is *conceptual* in nature – i.e., it asserts the conceptual computational inability to determine the “existence” or “non-existence” of any (hypothetical) ‘Phenomenon’ (e.g., ‘space-time’ event/s or ‘energy-mass’ object value/s) from within its direct physical interaction with any (hypothetical) differential relativistic observer. Indeed, a closer examination of the above-mentioned SROCS/SRONCS relativistic computational structure indicates that this constraint is not limited to the *direct physical interaction* between any ‘Phenomenon’ (e.g., space-time or energy-mass value/s) and any (hypothetical) differential relativistic observer/s, but rather extends to any direct *or indirect physical interaction* between them. In order to prove this broader applicability of the computational Duality Principle – as negating the possibility of determining the “existence” or “non-existence” of any such ‘Phenomenon’ from within its direct *or indirect* physical interaction/s with any (hypothetical) differential relativistic observer/s (e.g., rather than only from a conceptually higher-ordered hypothetical computational level ‘D2’) – let us assume that it *is possible* to determine the precise value/s of any given ‘Phenomenon’ based on its *indirect interaction* with another intervening variable (or computational level) ‘di2’ (which may receive any information, input/s, or effect/s etc. from any direct physical interaction/s between the given ‘Phenomenon’ and any hypothetical differential relativistic observer at the ‘di1’ level):

(7) PR{O-diff, P[s-t(i...n), e-m(i...n)]}/di1 → ['P[s-t(i), e-m(i)]' or 'not P[s-t(i), e-m(i)]']/di2

But a closer analysis of this hypothetical ‘di2’ (second) intervening computational level (or factor/s) – as possibly being able to determine whether any particular space-time event or energy-mass object “exists” or “doesn’t exist” – indicates that it precisely replicates the same (‘problematic’) SROCS/SRONCS computational structure which has been shown to be constrained by the (novel) computational ‘Duality Principle’. This is because, despite the (new) assumption whereby the “existence” or “non-existence” of any particular ‘Phenomenon’ (e.g., space-time or energy-mass) value is computed at a different ‘di2’ computational level (or factor/s etc.), the SROCS/SRONCS *intrinsic ‘materialistic-reductionistic’* computational structure still assumes that the determination of the “existence”/”non-existence” of any particular Phenomenon value/s is *‘solely caused’* (or ‘determined’) *by the direct physical interaction* between that ‘Phenomenon’ and any hypothetical (differential) relativistic observer/s – which is represented by the *causal arrow “→”* embedded within the relativistic SROCS/SRONCS computational structure:

(8) PR{O-diff, P[s-t(i...n), e-m(i...n)]}/di1 → ['P[s-t(i), e-m(i)]' or 'not P[s-t(i), e-m(i)]']/di2

Thus, even though the direct physical interaction between the ‘Phenomenon’ and the differential relativistic observer seems to take place at the ‘di1’ computational level, whereas the determination of the “existence”/”non-existence” of a particular Phenomenon value appears to be carried out at a different ‘di2’ computational level, the actual (embedded) computational structure still represents a SROCS/SRONCS paradigm. This is because even this new SROCS/SRONCS computational structure maintains the strict ‘materialistic-reductionistic’ working assumption whereby it is solely the direct physical interaction between the Phenomenon and the differential relativistic observer that determines the “existence”/”non-existence” of a particular Phenomenon value. An alternate way of proving that the SROCS/SRONCS computational structure remains unaltered (e.g., even when we assume that the computation of the “existence” or “non-existence” of the particular ‘Phenomenon’ value takes place at another ‘di2’ computational level) rests on the fact that, due to its (above-mentioned) ‘materialistic-reductionistic’ working hypothesis, the determination of the “existence” or “non-existence” of the particular ‘Phenomenon’ value is computed solely from the information obtained through the direct physical interaction between the ‘Phenomenon’ and the series of potential differential observers (at the ‘di1’ computational level). Hence, there is in effect a total contingency of the determination of the “existence”/”non-existence” of the particular ‘Phenomenon’ value (at the hypothetical ‘di2’ computational level) upon the direct physical interaction between this ‘Phenomenon’ and any differential relativistic observer (at the ‘di1’ level), which therefore does not alter the ‘di1’ SROCS computational structure and may be expressed thus:

PR{O-diff, P[s-t(i...n), e-m(i...n)]} → ['P[s-t(i), e-m(i)]' or 'not P[s-t(i), e-m(i)]']/di1 (or equivalently /di2)

(Note: precisely due to this total contingency of the determination of the “existence”/”non-existence” of the particular Phenomenon value (*i*) at ‘di2’ upon input from the Phenomenon’s direct physical interaction with any differential relativistic observer at ‘di1’, it may be more convenient to formally represent this SROCS computational structure as occurring altogether at either the ‘di1’ or ‘di2’ computational level, as presented above.)
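The "total contingency" just noted can be sketched as a toy model (hypothetical names and encoding, for illustration only): interposing a 'di2' level whose verdict is a pure function of the 'di1' interaction leaves the SROCS structure unchanged.

```python
# Illustrative sketch: adding an intervening 'di2' level that depends
# *solely* on the 'di1' interaction merely relabels the same SROCS
# computational structure. All names are hypothetical.

def di1_interaction(value: str, measured: str = "i") -> bool:
    """Direct physical interaction at 'di1': only the measured value
    'exists'; every other Phenomenon value is negated."""
    return value == measured

def di2_determination(value: str) -> bool:
    """'di2' receives nothing beyond the di1 outcome, so its verdict on
    "existence"/"non-existence" is totally contingent upon 'di1'."""
    return di1_interaction(value)

# For every candidate Phenomenon value, di2 merely relays di1's verdict:
for v in ["i", "j", "k"]:
    assert di2_determination(v) == di1_interaction(v)
```

Since `di2_determination` adds no information of its own, the two-level description collapses into the single-level one, which is the point the note above makes.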

However, given the ample empirical evidence indicating the capacity of relativistic (computational) systems to determine whether a particular ‘Phenomenon’ (space-time or energy-mass) value “exists” or “doesn’t exist”, the broader extension of the Duality Principle evinces that it is not possible (e.g., in principle) to determine such “existence” or “non-existence” of any particular ‘Phenomenon’ value from *within* any direct or indirect physical interaction/s between any such ‘Phenomenon’ and any (hypothetical) series of differential relativistic observer/s. Instead, the ‘Duality Principle’ postulates that any ‘Phenomenon’ (e.g., ‘space-time’ or ‘energy-mass’) values can only be determined by a conceptually higher-ordered ‘D2’ computational level which is capable of determining the *‘co-occurrence/s’* of specific Phenomenon values and corresponding differential relativistic observers’ measurements (e.g., and which is irreducible to any direct or indirect physical interactions between such differential relativistic observer/s and any Phenomenon value/s):

(10) D2: {'O-diff(1)', 'P[s-t(1), e-m(1)]'; ... ; 'O-diff(n)', 'P[s-t(n), e-m(n)]'}

Hence, a thorough reexamination of relativity’s SROCS computational structure (e.g., which assumes that any ‘space-time’ or ‘energy-mass’ Phenomenon value is determined solely by that particular event’s or object’s direct or indirect physical interaction/s with any one of a series of potential relativistic observers) has led to the recognition of a (novel) computational ‘Duality Principle’. This ‘Duality Principle’ proves that it is not possible (in principle) to determine any such space-time or energy-mass ‘Phenomenon’ value based on any hypothetical direct or indirect physical interaction between such a ‘Phenomenon’ and any hypothetical series of (differential) relativistic observer/s. Rather, according to this novel computational Duality Principle, any space-time or energy-mass relativistic value can only be computed at a conceptually higher-ordered ‘D2’ computational level (e.g., which is, again, in principle irreducible to any hypothetical direct or indirect physical interaction between any differential relativistic observer/s and any space-time or energy-mass Phenomenon). Such a conceptually higher-ordered ‘D2’ computational level is also postulated to compute the ‘co-occurrences’ of any ‘differential relativistic observer/s’ and corresponding ‘Phenomenon’ (e.g., space-time or energy-mass value/s).

Intriguingly, it is also hypothesized that the same precise SROCS/SRONCS computational structure may underlie the quantum probabilistic interpretation of the ‘probability wave function’ and the ‘uncertainty principle’. Indeed, it is hereby hypothesized that precisely the same SROCS/SRONCS computational structure pertains to the quantum-mechanical computation of the physical properties of any given subatomic ‘target’ (*‘t’*) (e.g., assumed to be dispersed all along a probability wave function), which is hypothesized to be determined solely through its direct physical interaction with another complementary subatomic ‘probe’ entity (P(‘e/s’ or ‘t/m’)), thus:

(11) PR{P('e/s' or 't/m'), t[s/e(i...n), t/m(i...n)]}/di1 → ['t[s/e(i), t/m(i)]' or 'not t[s/e(i), t/m(i)]']/di1

In a nutshell, it is suggested that this SROCS/SRONCS computational structure accurately represents the (current) probabilistic interpretation of quantum mechanics, in that it describes the basic working hypothesis of quantum mechanics wherein it is assumed that the determination of the particular (complementary) ‘spatial-energetic’ or ‘temporal-mass’ values of any given subatomic ‘target’ particle – i.e., one assumed to be dispersed probabilistically all along the probability wave function’s (complementary) spatial-energetic and temporal-mass values – occurs through the direct physical interaction of such a probability-wave-function-dispersed ‘target’ entity with another subatomic measuring ‘probe’ element. Moreover, it is assumed that this direct physical interaction between the probability-wave-function-dispersed ‘target’ element and the subatomic ‘probe’ element constitutes the sole (computational) means for the “collapse” of the target’s probability wave function to a singular complementary target value. This inevitably produces a SROCS computational structure which possesses the potential of expressing a SRONCS condition, thus:

PR{P('e/s' or 't/m'), t[s/e(i...n), t/m(i...n)]}/di1 → 'not t[s/e(≠i), t/m(≠i)]'/di1

wherein the probabilistically distributed ‘target’ element (e.g., spread all along the complementary ‘spatial-energetic’ or ‘temporal-mass’ probability wave function), which possesses the whole possible spectrum of such ‘spatial-energetic’ or ‘temporal-mass’ values – *t*[s/e *(i...n)*, t/m *(i...n)*] – “collapses” solely as a result of its direct physical interaction with another subatomic ‘probe’ element (which also possesses complementary ‘spatial-energetic’ and ‘temporal-mass’ properties). Indeed, it is this assumed direct physical interaction between the subatomic ‘probe’ and the probabilistically distributed ‘target’ wave function which “collapses” the target’s (complementary) wave function – i.e., produces only a *single* (complementary) spatial-energetic or temporal-mass *measured* value (e.g., *t*[e-s *(i)*, t-m *(i)*]) – and which therefore *negates all of the other “non-collapsed”* spatial-energetic or temporal-mass complementary *values* (e.g., ‘not *t*[e-s *(≠i)*, t-m *(≠i)*]’) of the target’s (‘pre-collapsed’) wave function.

But, as we have seen earlier (in the case of the relativistic SRONCS), such a SRONCS computational structure invariably leads to both ‘logical inconsistency’ and subsequent ‘computational indeterminacy’. This is because the above-mentioned SRONCS condition essentially asserts that all of the “non-collapsed” complementary ‘target’ values (i.e., *t*[s-e ≠ *i* or t-m ≠ *i*]) both “exist” *AND* “don’t exist” at the same ‘di1’ computational level – thereby constituting a *‘logical inconsistency’*. And since the basic ‘materialistic-reductionistic’ working hypothesis underlying the SROCS/SRONCS computational structure also assumes that any particular complementary (spatial-energetic or temporal-mass) target value can *only* be determined based on the direct physical interaction between the target’s probability-wave-function distribution and a subatomic ‘probe’ element – e.g., *at the same ‘di1’ computational level* – the above-mentioned ‘logical inconsistency’ invariably also leads to *‘computational indeterminacy’*, i.e., an in-principle inability to determine whether any such “non-collapsed” complementary ‘target’ values “exist” or “don’t exist”. However, as noted above, since there exists ample empirical evidence indicating the *capacity* of quantum (computational) systems to determine whether any such *t*[e-s *(i…n)*, t-m *(i…n)*] quantum target value “exists” or “doesn’t exist”, the Duality Principle once again asserts the need to place the computation of any pair/s of subatomic complementary ‘probe’ and ‘target’ values at a conceptually higher-ordered ‘D2’ level (e.g., one that is in principle *irreducible to any direct physical interactions between them*).
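The "collapse" SRONCS described above can be caricatured numerically. This is a minimal sketch under hypothetical names, with a uniform four-valued discrete distribution standing in for the full complementary spectrum of the wave function; it illustrates only the *structure* of the argument, not any physical model:

```python
# Toy "collapse" of a discretized target distribution: the direct
# probe-target interaction keeps the measured value i and negates every
# value != i at the same 'di1' level -- the SRONCS condition.

target_wave = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}  # stand-in for t[s/e(i...n), t/m(i...n)]

def collapse(wave: dict, measured_i: int) -> dict:
    """Every value in the pre-collapse wave took part in the interaction
    (so it 'exists'), yet all values != measured_i are asserted not to
    exist by that very same interaction."""
    return {i: (1.0 if i == measured_i else 0.0) for i in wave}

after = collapse(target_wave, measured_i=3)
assert after == {1: 0.0, 2: 0.0, 3: 1.0, 4: 0.0}
```

In the text's terms, the values negated by `collapse` are precisely the 'not t[e-s(≠i), t-m(≠i)]' values whose simultaneous "existence" (in the pre-collapse wave) and "non-existence" (in the outcome) constitutes the alleged inconsistency at 'di1'.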

Finally, as shown in the case of the relativistic SROCS/SRONCS paradigm, the conceptual computational constraint imposed by the Duality Principle further expands to include not only strictly ‘*direct*’ physical interaction/s between the subatomic ‘probe’ and ‘target’ elements but also any other hypothetical ‘*indirect’* interaction/s, elements, effects, or even light-signals, information, etc. – that may mediate between these subatomic ‘probe’ and ‘target’ elements;

This is because even if we assume that the determination of the “existence” or “non-existence” of any particular subatomic ‘target’ (spatial-energetic or temporal-mass) value can occur through a (second intervening or mediating) ‘di2’ computational interaction, entity, process, or signal/s transfer, we still obtain the same SROCS/SRONCS computational structure which has been shown to be constrained by the computational Duality Principle:

(13) PR{P('e/s' or 't/m'), t[s/e(i...n), t/m(i...n)]}/di1 → ['t[s/e(i), t/m(i)]' or 'not t[s/e(i), t/m(i)]']/di2

The rationale for asserting that this (novel) computational instance precisely replicates the same SROCS/SRONCS computational structure (e.g., noted above) arises, once again, from the strict *‘materialistic-reductionistic’ “causal”* connection that is assumed to exist between the direct physical interaction of the subatomic ‘probe’ and ‘target’ elements (e.g., taking place at the ‘di1’ level) and the hypothetical ‘di2’ computational level – which is assumed to *solely determine* whether a particular target value ‘*t*[s/e *(i)*, t/m *(i)*]’ “exists” or “doesn’t exist”. Since, on this basic materialistic-reductionistic causal assumption, the ‘di2’ determination of the “existence”/”non-existence” of any specific (spatial-energetic or temporal-mass) ‘target’ value is *solely determined by the direct ‘probe-target’ physical interaction* at the ‘di1’ level, the logical-computational structure of the (above-mentioned) SROCS/SRONCS is replicated. Specifically, the SRONCS case postulates the “existence” of the entire spectrum of possible target values *t*[e-s *(i…n)*, t-m *(i…n)*] at the ‘di1’ direct physical interaction between the ‘probe’ and ‘target’ entities – but also asserts the “non-existence” of all the “non-collapsed” target values at the ‘di2’ computational level (e.g., ‘not *t*[e/s *(≠i)*, t/m *(≠i)*]’). This intrinsic contradiction obviously constitutes the above-mentioned ‘logical inconsistency’ and ensuing ‘computational indeterminacy’ (which are contradicted by known empirical findings).

Indeed, this SRONCS structure is computationally equivalent to the above-mentioned SRONCS – PR{P('e-s' or 't-m'), t[e-s(i...n), t-m(i...n)]} → 'not t[e-s(i), t-m(i)]'/di1 – since the determination of ['t[e-s(i), t-m(i)]' or 'not t[e-s(i), t-m(i)]'] is made solely on the basis of the direct physical interaction at ‘di1’. Therefore, the same ‘logical inconsistency’ and ‘computational indeterminacy’ (mentioned above) ensue – contradicted by robust empirical evidence – which inevitably leads to the Duality Principle’s assertion that any (hypothetical) ‘probe-target’ pair/s must be determined at a conceptually higher-ordered ‘D2’ computational level:

(14) D2: {'P('e-s' or 't-m')', 't[e-s(i), t-m(i)]'}

Therefore, an analysis of the basic SROCS/SRONCS computational structure underlying both the relativistic and quantum computational paradigms has led to the identification of a novel computational ‘Duality Principle’, which constrains each of these quantum and relativistic SROCS/SRONCS computational paradigms and ultimately points at the inevitable existence of a conceptually higher-ordered ‘D2’ computational level. Based upon the Duality Principle’s identification of such a conceptually higher-ordered ‘D2’ computational level (which alone can determine any relativistic ‘Phenomenon’ or any quantum spatial-energetic or temporal-mass target value), it also postulates the computational products of this ‘D2’ level – namely, the determination of the “co-occurrence” of any relativistic Phenomenon–relativistic-observer pair/s or of any quantum (complementary) ‘probe-target’ pair/s. Thus, the first step towards the hypothetical unification of the quantum and relativistic theoretical frameworks within a singular (conceptually higher-ordered) model is the identification of a singular computational ‘Duality Principle’ constraining both quantum and relativistic (underlying) SROCS paradigms, together with its emerging conceptually higher-ordered singular ‘D2’ computational level (which produces ‘co-occurring’ quantum ‘probe-target’ or relativistic ‘observer-Phenomenon’ pairs) – as the only feasible computational level (or means) capable of determining any quantum (space-energy or temporal-mass) ‘probe-target’ relationship/s or any relativistic ‘observer-Phenomenon’ relationship/s.

## 3. ‘D2’: A singular 'a-causal' computational framework

There are two (key) questions that arise in connection with the discovery of the Duality Principle’s conceptually higher-ordered (novel) ‘D2’ computational framework:

1. Is there a *singular* (mutual) ‘D2’ computational level that underlies *both* quantum and relativistic (basic) SROCS paradigms?
2. What may be the D2 *‘a-causal’ computational framework* – which transcends the SROCS computational constraints imposed by the Duality Principle?

In order to answer the first question, let us apply once again the conceptual proof of the ‘Duality Principle’ regarding the untenable computational structure of the SROCS – which (it is suggested) applies once again when we try to determine the physical relationship/s between two potential quantum and relativistic ‘D2’ computational frameworks. Specifically, the Duality Principle proves that it is not possible (e.g., in *principle*) to maintain two such “independent” (conceptually higher-ordered) ‘D2’ computational frameworks; rather, there can only exist a *singular conceptually higher-ordered ‘D2’ computational framework* which coalesces the above-mentioned quantum and relativistic ‘D2’ computational levels. Let us suppose there exist two “*separate*” such conceptually higher-ordered computational frameworks – ‘D2(1)’ and ‘D2(2)’ – underlying and constraining quantum and relativistic modeling, respectively (e.g., as derived above through the application of the Duality Principle to the two principal SROCS/SRONCS computational paradigms underlying current quantum and relativistic modeling). Then, according to the Duality Principle, in order to be able to determine any hypothetical physical relationship between quantum [‘qi{*1*}’] and relativistic [‘ri{*2*}’] entities or processes (i.e., existing at the above-mentioned hypothetical corresponding ‘D2(1)’ quantum and ‘D2(2)’ relativistic computational levels) we would necessarily need a conceptually higher-ordered ‘D3’ level that is (again, in principle) irreducible to the lower-ordered D2(1)(‘qi{*1*}’) and D2(2)(‘ri{*2*}’) physical interactions at the D2 computational level. This is because otherwise the determination of the “existence” or “non-existence” of any such hypothetical quantum or relativistic phenomena would be carried out at the same computational level (‘D2’) as the direct physical interaction between these (hypothetical) quantum and relativistic entities (or processes), thereby precisely replicating the SROCS structure (which was shown to be constrained by the Duality Principle), thus:

(15) PR{qi{1}, ri{2}}/D2 → ['ri{2}' or 'not ri{2}']/D2

But since the Duality Principle proves the conceptual computational inability to carry out such a higher-ordered computation at the same computational level (e.g., in this case termed ‘D2’) as the direct physical interaction between any two given elements, we are forced once again to conclude that there can only exist *one singular conceptually higher-ordered ‘D2’* computational framework underlying (and constraining) both quantum and relativistic relationships.

A critical element arising from the computational Duality Principle is therefore the recognition that it is not possible (in principle) to determine (or compute) any quantum or relativistic relationship based on any ‘direct’ physical relationship (at ‘di2’) or indirect physical relationship/s (‘di3’) (as shown above) that may exist between any hypothetical differential relativistic observer and any hypothetical ‘Phenomenon’, or between any complementary subatomic ‘probe’ measurement and the target’s (assumed) probability ‘wave-function’. Hence, the untenable SROCS/SRONCS computational structure evident in the attempt to determine the (direct or indirect) physical relationship/s between the conceptually higher-ordered ‘D2₁’ quantum and ‘D2₂’ relativistic computational frameworks once again points at the Duality Principle’s conceptual computational constraint, which allows for only a *singular conceptually higher-ordered ‘D2’ computational framework* – as underlying *both* quantum and relativistic phenomena (which constitutes the answer to the first theoretical question, above).

Next, we consider the second (abovementioned) theoretical question: provided that (according to the Duality Principle) there can only be a *singular* conceptually higher-ordered ‘D2’ computational framework underlying both quantum and relativistic phenomena, what may be its computational characteristics? It is suggested that the recognition of this singular conceptually higher-ordered ‘D2’ computational framework also makes it possible to answer this second question. Specifically, the Duality Principle’s (above) proof indicates that rather than any direct (or indirect) ‘materialistic-reductionistic’ physical interaction between any hypothetical differential relativistic ‘observer’ and any corresponding ‘Phenomenon’, or between any complementary subatomic ‘probe’ element and probability wave function ‘target’, there exists a singular conceptually higher-ordered ‘D2’ computational framework which simply computes the “co-occurrences” of any of these quantum or relativistic (differential) ‘observer/s’ and corresponding ‘Phenomenon’ value/s, or of any quantum subatomic ‘probe’ and ‘target’ elements...

Therefore, the singular conceptually higher-ordered ‘D2’ computational framework produces an *“a-causal”* computation which computes the ‘co-occurrences’ of any range of quantum ‘probe-target’ or relativistic ‘observer-Phenomenon’ pairs thus:

(17) D2: [{P*i*, T*i*}; ... {P*n*, T*n*}] – i.e., the “co-occurring” quantum ‘probe-target’ pairs

(18) D2: [{O*i*, Ph*i*}; ... {O*n*, Ph*n*}] – i.e., the “co-occurring” relativistic ‘observer-Phenomenon’ pairs

The key point to be noted (within this context) is that such ‘a-causal’ computation negates and precludes the possibility of any “real” ‘material-causal’ interaction taking place at either the quantum or the relativistic level! In other words, the Duality Principle’s negation of the fundamental quantum or relativistic SROCS/SRONCS computational structure (e.g., as invariably leading to both ‘logical inconsistency’ and ‘computational indeterminacy’, which are contradicted by robust quantum and relativistic empirical data) also necessarily negates the existence of any (real) *‘causal-material’* interaction between or within any quantum or relativistic phenomena – e.g., at the conceptually higher-ordered ‘D2’ computational level. In order to show that the Duality Principle’s constraint on the basic (materialistic-reductionistic) SROCS/SRONCS computational structure also implies the conceptual computational inability of such SROCS/SRONCS paradigms to determine the existence of any (real) ‘causal-material’ interactions (e.g., between any exhaustive series of ‘x’ and ‘y’ factors, interactions etc.), let us reexamine (once again) the SROCS/SRONCS working hypothesis wherein it *is* possible to determine whether a certain ‘x’ factor ‘causes’ the ‘existence’ or ‘non-existence’ of the particular ‘y’ factor:

Let us suppose it is possible for the SROCS/SRONCS direct physical (quantum or relativistic) interaction between the ‘x’ and ‘y’ (exhaustive series) factors to causally determine the ‘existence’ or ‘non-existence’ of the ‘y’ factor. In its most general formulation this would imply that:

SROCS: PR{x{1...n}, y} → [‘y’ or ‘not y’] /di1...din

But, as we have already seen, such a SROCS computational structure invariably also contains the special case of a SRONCS of the form:

SRONCS: PR{x{1...n}, y} → ‘not y’ /di1...din

However, this SRONCS structure inevitably leads to both ‘logical inconsistency’ and ‘computational indeterminacy’, which are contradicted by empirical findings (e.g., in the case of quantum and relativistic phenomena).

Therefore, the Duality Principle incontrovertibly proves that the basic materialistic-reductionistic SROCS/SRONCS paradigmatic structure underlying the current quantum and relativistic theoretical models must be replaced by a conceptually higher-ordered (singular) ‘D2’ computation which cannot (in principle) contain any SROCS/SRONCS *‘causal-material’* relationships – e.g., wherein any hypothetical ‘y’ element is “caused” by its direct (or indirect) physical interaction with another (exhaustive) ‘x{1...n}’ series. As pointed out above, the only such possible conceptually higher-ordered ‘D2’ computation consists of an ‘a-causal association’ between pairs: D2: {(‘x*i*’, ‘y*i*’)... (‘x*n*’, ‘y*n*’)}.

The essential point to be noted is that the Duality Principle thereby proves the conceptual computational infeasibility of the currently assumed ‘materialistic-reductionistic’ SROCS/SRONCS structure – including the existence of any hypothetical ‘causal-material’ interaction between any exhaustive ‘x’ and ‘y’ series! This means that in both the quantum and relativistic domains – whether we seek to determine any hypothetical (exhaustive) spatial-temporal event or energy-mass object, or any complementary spatial-energetic or temporal-mass subatomic target – there cannot (in principle) exist any ‘causal-material’ interaction between the relativistic event and any differential relativistic observer, or between the subatomic probe and target elements... Instead, the Duality Principle proves that the only viable means for determining any such exhaustive hypothetical relativistic or quantum relationship is through the conceptually higher-ordered singular ‘a-causal’ D2 association of pairs of spatial-temporal or energy-mass values and corresponding relativistic observer frameworks, or of pairs of subatomic probe values and corresponding complementary spatial-energetic or temporal-mass target values...

However, if indeed the entire range of quantum and relativistic phenomena must necessarily be based upon a singular conceptually higher-ordered ‘D2’ computational level – which can only compute the “co-occurrences” of quantum ‘probe-target’ or relativistic ‘observer-Phenomenon’ pairs, but which precludes the possibility of any “real” ‘material-causal’ relationship/s existing between any such quantum (‘probe-target’) or relativistic (‘observer-Phenomenon’) pairs – then this necessitates a potentially significant reformulation of both quantum and relativistic theoretical models based on the Duality Principle’s asserted conceptually higher-ordered singular ‘D2’ ‘a-causal’ computational framework. This is because the current formulation of both quantum and relativistic theoretical frameworks is deeply anchored in- and dependent upon- precisely such direct (or indirect) physical interactions between a differential relativistic observer and any hypothetical (range of) ‘Phenomenon’ (e.g., as defined earlier), or between any subatomic (complementary) ‘probe’ element and a probabilistically dispersed ‘target’ wave function. Thus, for instance, the entire theoretical structure of Relativity Theory rests upon the assumption that the differential physical measurements of observers travelling at different speeds relative to any given object (or event) arise from a direct physical interaction between a (constant-velocity) speed-of-light signal and the differentially mobilized observer/s...
In contrast, the (novel) Duality Principle proves the conceptual computational inability to determine any such relativistic differential Phenomenon values based on any direct or indirect physical interaction between any (hypothetical) differential relativistic observer and any given ‘Phenomenon’ (at their ‘di1’ or even ‘di2’ computational levels) – such values can only be determined from the conceptually higher-ordered ‘D2’ computational level, through an ‘a-causal’ computation of the “co-occurrences” of any (differential) relativistic observer and (corresponding) Phenomenon! Hence, to the extent that we accept the Duality Principle’s conceptual computational proof for the existence of a singular higher-ordered ‘a-causal D2’ computational framework – as underlying both quantum and relativistic theoretical models – then Relativity’s well-validated empirical findings must be reformulated based on such a higher-ordered ‘D2 a-causal computation’ framework...

Likewise, in the case of Quantum Mechanical theory, it is suggested that the current formalization critically depends on the ‘collapse’ of the target ‘wave-function’ upon its direct physical interaction with the (complementary) probe element – which is contradicted by the (earlier demonstrated) Duality Principle’s proof of the conceptual computational inability to determine any (complementary) ‘target’ values based on the target’s direct (or even indirect) physical interactions with another subatomic (complementary) ‘probe’ element. Instead, the Duality Principle asserts that all quantum (complementary) ‘probe-target’ values may only be computed ‘a-causally’, based on the conceptually higher-ordered ‘D2’ computation of the “co-occurrences” of any hypothetical complementary ‘probe-target’ elements... Therefore, it becomes clear that both quantum and relativistic theoretical models have to be reformulated based on the Duality Principle’s (proven) singular conceptually higher-ordered ‘a-causal D2’ computational framework. A key possible guiding principle in searching for such an alternative singular conceptually higher-ordered ‘D2 a-causal’ computational formulation of the (well-validated) quantum and relativistic empirical findings is Einstein’s dictum regarding the fate of a “good theory” (Einstein, 1916) – namely, that it can become a special case within a broader, more comprehensive framework.
More specifically, based on the Duality Principle’s (abovementioned) negation of the currently existing quantum and relativistic theoretical interpretations of these well-validated empirical findings – including the quantum ‘probabilistic interpretation of the uncertainty principle’ (and its corresponding probabilistic ‘wave function’), ‘particle-wave duality’, ‘quantum entanglement’, and the relativistic constancy of the speed of light (and the corresponding speed limit on the transfer of any object or signal) – there arises a growing need for an alternative reformulation of each and every one of these physical phenomena (e.g., separately and conjointly) which may “fit in” within this singular (conceptually higher-ordered) ‘D2 a-causal’ computational framework. Indeed, what follows is a ‘garland’ of these quantum and relativistic empirical findings, reformulated based upon the Duality Principle as fitting within a singular ‘a-causal D2’ computational mechanism. In fact, it is this assembly of the Duality Principle’s (motivated) theoretical reformulations of the (above) well-validated empirical dictums which lays down the foundations for the hypothetical ‘Computational Unified Field Theory’. Fortunately (as we shall witness), this piecemeal work of assembling all the quantum and relativistic Duality-Principle-reformulated ‘garlands’ may not only lead to the discovery of such a singular conceptually higher-ordered ‘D2’ ‘Computational Unified Field Theory’ (CUFT), but may also resolve all known (apparent) theoretical contradictions between quantum and relativistic models (as well as predict as-yet unknown empirical phenomena, and possibly open new theoretical frontiers in Physics and beyond)...

### 3.1. Single- multiple- and exhaustive- spatial-temporal measurements

Perhaps a direct ramification of the abovementioned critical difference between empirical facts and theoretical interpretation – one which may have a direct impact on the current (apparent) schism between Relativity Theory and Quantum Mechanics – is the distinction between *single- vs. multiple- spatial-temporal* empirical measurements and the corresponding “particle” vs. “wave” theoretical constructs. It is hypothesized that if we put aside (for the time being) the ‘positivistic’ vs. ‘probabilistic’ characteristics of Relativity Theory and Quantum Mechanics, then we may characterize *both* relativistic and quantum empirical data as representing ‘single’- vs. ‘multiple’- spatial-temporal measurements. Thus, for instance, it is suggested that a (subatomic) “*particle*” or (indeed) any well-localized relativistic object (or event) can be characterized as indicating a *single* (localized) *spatial-temporal measurement*, such that the given object or event is measured at a particular (single) spatial point {s*i*} at any given temporal point {t*i*}. In contrast, the “*wave*” characteristics of quantum mechanics represent a *multiple spatial-temporal measurement*, wherein there are at least *two* separate spatial-temporal measurements for each temporal point {s*i*t*i*, s*(i+n)*t*(i+n)*}.
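The single- vs. multiple- measurement distinction drawn above can be illustrated with a small toy sketch. This is my own illustrative rendering, not part of the text's formalism, and all names in it are assumptions: a set of measurements at a temporal point is 'particle'-like if it contains exactly one spatial point, and 'wave'-like if it contains two or more.

```python
# Toy illustration of the text's distinction between "particle" and "wave"
# characterizations: a 'particle' corresponds to a single spatial point
# measured at a given temporal point, whereas a 'wave' involves two or more
# simultaneous spatial measurements. All names here are illustrative.

def characterize(measurements):
    """measurements: dict mapping a temporal point t_i to the set of
    spatial points {s_i, ...} registered at that instant."""
    return {t: ("particle" if len(points) == 1 else "wave")
            for t, points in measurements.items()}

# A well-localized object: one spatial point per temporal point.
localized = {1: {(0.0, 0.0)}, 2: {(0.1, 0.0)}}
# Two slits open: two spatial points registered at the same temporal point.
two_slit = {1: {(0.0, 0.0), (0.5, 0.0)}}

print(characterize(localized))  # {1: 'particle', 2: 'particle'}
print(characterize(two_slit))   # {1: 'wave'}
```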

Indeed, I hypothesize that precisely such a distinction between single- and multiple- spatial-temporal measurement (and conceptualization) may stand at the basis of some of the (apparent) quantum ‘conundrums’ such as the ‘particle-wave duality’, the ‘double-slit experiment’, and ‘quantum entanglement’. Specifically, I suggest that if (indeed) the primary difference between the ‘particle’ and ‘wave’ characterizations is *single-* vs. *multiple-* spatial-temporal measurements, then this can account, for instance, for the (apparently) “strange” empirical phenomena observed in the ‘double-slit’ experiment. This is because it may be the case that the opening of a *single* slit only allows for a single spatial-temporal measurement at the interference detector surface (e.g., due to the fact that a single slit opening only allows for the measurement of the change in a single photon’s impact on the screen). In contrast, opening two slits allows the interference detector surface to measure *two* spatial-temporal points simultaneously, thereby revealing the ‘wave’ (interference) pattern. Moreover, I hypothesize that if indeed the key difference between the ‘particle’ and ‘wave’ characteristics is their respective single- vs. multiple- spatial-temporal measurements, then it may also be the case that any *“particle”* measurement (e.g., or for that matter any single spatial-temporal *relativistic* measurement) is *embedded* within the broader *multi- spatial-temporal ‘wave’* measurement... In this case, the current probabilistic interpretation of quantum mechanics (which has been challenged earlier by the Duality Principle) may give way to a *hierarchical-dualistic* computational interpretation which regards any ‘particle’ measurement as merely a localized (e.g., single spatial-temporal) segment of a broader multi- spatial-temporal ‘wave’ measurement.

One further potentially significant computational step – e.g., beyond the ‘single’ spatial-temporal “particle” (or object) as potentially *embedded* within the ‘multiple’ spatial-temporal “wave” measurement – may be to ask: is it possible for both the single spatial-temporal “particle” and the multi- spatial-temporal “wave” measurements to be embedded within a conceptually higher-ordered ‘D2’ computational framework?

This hypothetical question may be important, as it may point the way towards a formal physical representation of the Duality Principle’s asserted singular conceptually higher-ordered ‘D2 a-causal computational framework’: this is because the Duality Principle’s asserted singular higher-ordered D2 ‘a-causal’ computation can encompass *all single-, multiple-, or even the entire range of spatial pixels* {s*i*...s*n*} that exist at any point/s in time {t*i*...t*n*}, which are *computed as “co-occurring” pairs* of relativistic ‘observer-Phenomenon’ or subatomic ‘probe-target’ elements (e.g., as computed at this singular conceptually higher-ordered ‘D2’ computational level). This implies that, since there cannot be any “real” ‘material-causal’ interactions between any of these relativistic ‘observer-Phenomenon’ or quantum ‘probe-target’ pairs, all such hypothetical ‘spatial pixels’ {s*i*...s*n*} occurring at any hypothetical temporal point/s {t*i*...t*n*} must necessarily form an exhaustive ‘pool’ of the entire corpus of spatial-temporal points – which, according to the Duality Principle, can only exist as the abovementioned quantum (subatomic) ‘probe-target’ or relativistic (differential) ‘observer-Phenomenon’ computational pairs at the singular conceptually higher-ordered ‘D2 A-Causal Computational Framework’.

### 3.2. The ‘Universal Simultaneous Computational Frames’ (USCF’s)

Indeed, additional empirical support for the existence of such a (hypothetical) singular conceptually higher-ordered ‘D2’ exhaustive pool of all “co-occurring” quantum or relativistic pairs may be given by the well-validated empirical phenomenon of ‘quantum entanglement’. In a nutshell, ‘quantum entanglement’ refers to the finding whereby a subatomic measurement of one of two formerly connected “particles” – which may be separated (e.g., at the time of measurement) by a distance greater than a light signal could traverse – can ‘instantaneously’ affect the measured outcome of the other (once interrelated) ‘entangled’ particle...

The reason that ‘quantum entanglement’ may further constrain the operation of the hypothetical higher-ordered ‘D2 A-Causal’ computational framework is that it points at an empirical dictum which asserts that even in those computational instances in which two spatial-temporal events seem to be physically “separated” (e.g., by a distance greater than could possibly be travelled within Relativity’s speed-of-light limit), the higher-ordered ‘D2 A-Causal Computation’ occurs ‘instantaneously’! Therefore, this ‘quantum entanglement’ empirical dictum indicates that the ‘D2 a-causal’ computation of all spatial pixels in the universe must be carried out “at the same time” – i.e., “simultaneously” – at the D2 computational mechanism. In other words, the abovementioned ‘D2 a-causal computation’ mechanism must consist of the entirety of all possible quantum ‘probe-target’ or relativistic ‘observer-Phenomenon’ pairs, occupying an exhaustive three-dimensional ‘picture’ of the entire corpus of all spatial pixels in the universe – for any given (minimal) ‘time-point’;

Therefore, if (indeed), due to the empirical-computational constraint imposed by ‘quantum entanglement’, we reach the conclusion that all spatial pixels in the (subatomic as well as relativistic) universe must necessarily exist “simultaneously” (e.g., for any minimal ‘temporal point’) at the ‘D2 a-causal computation’ level; and based on the Duality Principle’s earlier proven conceptual computational irreducibility of the determination of any quantum or relativistic relationship to any direct or indirect *physical* interaction between any hypothetical subatomic ‘probe’ and ‘target’ elements or between any relativistic differential ‘observer’ and any ‘Phenomenon’ – but only from this singular higher-ordered ‘D2 A-Causal Computational Framework’ – then:

It is hereby hypothesized that the ‘D2 A-Causal Computational’ processing consists of a series of ‘*Universal Simultaneous Computational Frames’* (USCF’s) which comprise the entirety of the (quantum and relativistic) ‘spatial-pixels’ in the physical universe (i.e., at any given “minimal time-point”)... Moreover, it is hypothesized that in order for this singular conceptually higher-ordered ‘A-Causal Computational Framework’ to produce all known quantum and relativistic physical phenomena there must necessarily exist a series of (extremely rapid) such ‘Universal Simultaneous Computational Frames’ (USCF’s) that give rise to three distinct *‘Computational Dimensions’* – which include: ‘Computational *Framework’*, ‘Computational *Consistency’* and ‘Computational *Locus’*;

### 3.3. Computational- framework, consistency and locus

Based on the Duality Principle’s asserted singular conceptually higher-ordered ‘D2’ computational framework – comprising an ‘A-Causal Computation’ of a rapid series of ‘Universal Simultaneous Computational Frames’ (USCF’s) – it is hypothesized that three interrelated computational dimensions arise as different computational measures relating to: the *‘Framework’* of computation (e.g., relating to the entire USCF ‘*frame/s*’ or to a particular ‘*object*’ within the USCF/s), the degree of *‘Consistency’* across a series of USCF’s (e.g., ‘consistent’ vs. ‘inconsistent’), and the *‘Locus’* of the computational measure/s (e.g., whether the computation is carried out ‘locally’, from within any particular ‘reference system’, or ‘globally’, that is, externally to a particular reference system). It is further suggested that the combination of these three independent computational factors gives rise not only to all the basic relativistic and quantum physical features of ‘space’, ‘time’, ‘energy’, ‘mass’ etc., but may in fact exhaustively replicate, coalesce and harmonize all apparently existing theoretical contradictions between quantum and relativistic theories of physical reality...

First, the (four) basic features of physical reality are defined as the product of the interaction between the two Computational Dimensions of *‘Framework’* (‘frame’ vs. ‘object’) and *‘Consistency’* (‘consistent’ vs. ‘inconsistent’): thus, for instance, it is hypothesized that a computational index of the degree of ‘frame-consistent’ presentations across a series of USCF’s gives us a measure of the “spatial” value of any given object. In contrast, the computation of the degree of ‘frame-inconsistent’ measure/s of any given object gives rise to the “energy” value (of any measured object or event). Conversely, the computational measure of the degree of ‘object-consistent’ presentations (e.g., across a series of USCF’s) produces the object’s “mass” value, whereas the measure of an object’s (or event’s) ‘object-inconsistent’ presentations computes that object’s/event’s temporal value... A (partial) rationale for these hypothetical computational measures may be derived from glancing at their computational “equivalences” – within the context of an analysis of the apparent physical features arising from the dynamics of a (two-dimensional) *cinematic* film;

A quick review of the analogous cinematic measure of (any given object’s) “spatial” or “energetic” value/s indicates that a (stationary) object’s ‘spatial’ measure – or the ‘spatial’ distance a moving object traverses (e.g., across a certain number of cinematic film frames) – depends on the number of pixels that the object occupies “*consistently*”, or the number of traversed pixels that remained constant (e.g., consistent) across a given number of cinematic frames. Thus, the cinematic computation of ‘spatial’ distance/s is given through an analysis of the number of pixels (e.g., relative to the entire frame’s reference system) that were either traversed by an object or which that object occupies (e.g., its “spatial” dimensions). In either case, the ‘spatial value’ (e.g., of the object’s consistent dimensions or of its travelled distance) is computed based on the number of consistent pixels that the object has travelled through or occupied (across a series of cinematic frames). In contrast, an object’s “energetic” value is computed through a measure of the number of pixels that the object has ‘displaced’ across a series of frames – i.e., its “energy” value can be computed as the number of ‘inconsistent’ pixels that the object has displaced (across a series of frames)... Note that in both cases – the ‘spatial’ value of an object or its ‘energetic’ value – the computation can only be carried out with reference to the (entire, or certain segments of the) ‘frame/s’, since we have to ascertain the number of ‘consistent’ or ‘inconsistent’ pixels relative to that frame-based reference system;

In contrast, it is suggested that the analogous cinematic measures of “mass” and “time” involve a computation of the number of “*object*-related” (i.e., in contrast to the abovementioned “frame-related”) “consistent” vs. “inconsistent” presentations. Thus, for instance, a special cinematic condition can be created in which any given object is presented at-, below- or above- a certain ‘psychophysical threshold’ – i.e., such that the “appearance” or “disappearance” of any given object critically depends on the number of times that object is presented ‘consistently’ (across a certain series of cinematic frames). Such a psychophysical-object cinematic condition necessarily produces a situation in which the number of consistent object presentations (across a series of frames) determines whether or not that object will be perceived to “exist” or “not exist”. Indeed, a further extension of the same psychophysical cinematic scenario can produce a condition in which there is a direct correlation between the number of times an object is presented ‘consistently’ (across a given series of cinematic frames) and its perceived “mass”: whereas the given object would seem to “not exist” below a certain number of presentations (out of a given number of frames), and would begin to “exist” once its number of presentations exceeds the particular psychophysical threshold, a further increase in the number of presentations (e.g., out of a given number of frames) will increase that object’s perceived “mass”... Perhaps somewhat less ‘intuitive’ is the cinematic computational equivalence of “time” – which is computed as the number of ‘object-related’ “inconsistent” presentations. It is a well-known fact that when a cinematic film’s rate of projection is slowed down (“slow-motion”), the sense of time is significantly ‘slowed down’...
This is due to the fact that far fewer changes take place relative to the object/s of interest that we are focusing on... Indeed, it is a scientifically validated fact that our perception of time depends (among other factors) on the number of stimuli presented to us within a given time-interval (the *‘filled-duration’ illusion*). Therefore, it is suggested that the cinematic computation (equivalence) of “time” is derived from the number of ‘object-related *inconsistent*’ presentations (across a given number of cinematic frames): the greater the number of object-related inconsistent presentations, the more time has elapsed – as becomes apparent in the case of ‘slow-motion’ (e.g., in which the number of object-related inconsistent presentations is small and very little ‘time’ seems to elapse) as opposed to ‘fast-motion’ (e.g., in which the number of object-related inconsistent presentations is larger and, subsequently, a significant ‘time’ period seems to pass)...
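The four cinematic computational equivalences described above ('space' as frame-consistent pixels, 'energy' as frame-inconsistent displaced pixels, 'mass' as object-consistent presentations, 'time' as object-inconsistent presentations) can be sketched as a toy computation over film frames. This is a hypothetical illustration under my own simplifying assumptions (an object is modeled as the set of pixels it occupies in each frame), not the author's formal definition:

```python
# Toy sketch of the cinematic measures discussed above. Each frame is the
# set of pixels the object occupies in that frame (None = object absent).
# The mapping of each measure to a set operation is my own illustrative guess.

def cinematic_measures(frames):
    present = [f for f in frames if f is not None]
    space = len(set.intersection(*present)) if present else 0    # frame-consistent pixels
    energy = sum(len(a ^ b)                                      # frame-inconsistent (displaced) pixels
                 for a, b in zip(present, present[1:]))
    mass = len(present)                                          # object-consistent presentations
    time_ = sum(1 for a, b in zip(frames, frames[1:]) if a != b) # object-inconsistent presentations
    return {"space": space, "energy": energy, "mass": mass, "time": time_}

# An object occupying two pixels, stationary for two frames, then shifted right:
frames = [{(0, 0), (1, 0)}, {(0, 0), (1, 0)}, {(1, 0), (2, 0)}]
print(cinematic_measures(frames))
# {'space': 1, 'energy': 2, 'mass': 3, 'time': 1}
```

Note how the 'space' and 'energy' counts require the whole frame's pixel grid as a reference system, while 'mass' and 'time' only count the object's own presentations, mirroring the frame- vs. object- distinction in the text.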

Obviously, there are significant differences between the two-dimensional cinematic metaphor and the hypothetical Computational Unified Field Theory’s postulated rapid series of three-dimensional ‘Universal Simultaneous Computational Frames’ (USCF’s): apart from the *two-dimensional* vs. *three-dimensional* nature of the frames, these include various factors such as the (differing) rate of projection, the universal simultaneous computation (e.g., across the entire scope of the physical universe), and other factors (which will be delineated below). Nevertheless, utilizing at least certain (relevant) aspects of the cinematic-film metaphor may still assist us in better understanding some of the potential dynamics of the rapid USCF series. Hence, it is suggested that we can perhaps learn from the (abovementioned) ‘object’ vs. ‘frame’ and ‘consistent’ vs. ‘inconsistent’ computational features characterizing the cinematic equivalents of “space” (‘frame-consistent’), “energy” (‘frame-inconsistent’), “mass” (‘object-consistent’) and “time” (‘object-inconsistent’) – with reference to the CUFT’s hypothesized two (abovementioned) Computational Dimensions of ‘Computational Framework’ (e.g., ‘frame’ vs. ‘object’) and ‘Computational Consistency’ (e.g., ‘consistent’ vs. ‘inconsistent’).
The third (and final) hypothesized computational dimension of ‘Computational Locus’ does not correlate with the cinematic metaphor, but can be understood by taking certain aspects of the cinematic metaphor and combining them with certain known features of Relativity Theory. As outlined (earlier), this third ‘Computational Locus’ dimension refers to the particular frame of reference from which either of the two other Computational Dimensions (e.g., ‘Framework’ or ‘Consistency’) is being measured: thus, for instance, it is suggested that the measurement of any of the abovementioned (four) basic physical features of ‘space’ (‘frame-consistent’), ‘energy’ (‘frame-inconsistent’), ‘mass’ (‘object-consistent’) and ‘time’ (‘object-inconsistent’) can be computed from *within* the ‘local’ frame of reference of the particular object (or observer) being measured, or from an external ‘global’ frame of reference (e.g., one different from that of the particular object or observer).

## 4. The ‘Computational Unified Field Theory’ (CUFT)

Based on the three abovementioned basic theoretical postulates – the ‘Duality Principle’ (e.g., including the existence of a conceptually higher-ordered ‘D2 A-Causal’ computational framework), the existence of a rapid series of ‘Universal Simultaneous Computational Frames’ (USCF’s), and their accompanying three Computational Dimensions of ‘Framework’ (‘frame’ vs. ‘object’), ‘Consistency’ (‘consistent’ vs. ‘inconsistent’) and ‘Locus’ (‘global’ vs. ‘local’) – a (novel) ‘Computational Unified Field Theory’ (CUFT) is hypothesized (and delineated);

First, in order to fully outline the theoretical framework of this (new) hypothetical CUFT, let us try to closely follow the (abovementioned) ‘cinematic-film’ metaphor (e.g., while keeping in mind the earlier mentioned limitations of such an analogy in the more complicated three-dimensional universal case of the CUFT): It is hypothesized that in the same manner that a cinematic film consists of a series of (rapid) ‘still-frames’ which produce an ‘illusion’ of objects (and phenomena) being displaced in ‘space’ and ‘time’ and possessing apparent ‘mass’ and ‘energy’ values – the CUFT’s hypothesized rapid series of ‘Universal Simultaneous Computational Frames’ (USCF’s) gives rise to the apparent ‘physical’ features of ‘space’, ‘time’, ‘energy’ and ‘mass’... It is further hypothesized that (following the cinematic-film analogy) the minimal (possible) degree of ‘change’ across any two (subsequent) USCF’s is given by Planck’s constant ‘*h*’ (e.g., for the various physical features of ‘space’, ‘time’, ‘energy’ or ‘mass’)... Likewise, the maximal degree of (possible) change across two such (subsequent) USCF’s may be given by ‘*c²*’. Note that both of these (quantum and relativistic) computational constraints – arising from the ‘mechanics’ of the rapid (hypothetical) series of USCF’s – exist as basic computational characteristics of the conceptually higher-ordered ‘D2’ (a-causal) computational framework, rather than as part of the ‘di1’ physical interaction (apparently) taking place within any (single or multiple) USCF’s... Indeed, it is further hypothesized that a measure of the actual rate of presentation (or computation) of the series of USCF’s may be given precisely by the ratio of these (‘D2’) computational constraints: the maximal degree of (inter-frame) change (‘*c²*’) divided by the minimal degree of (inter-frame) change (‘*h*’): ‘*c²/h*’!

Specifically, the CUFT hypothesizes that the computational measures of ‘space’, ‘energy’, ‘mass’ and ‘time’ (and “causation”) are derived from the ‘object’ vs. ‘frame’ and ‘consistent’ vs. ‘inconsistent’ computational combinations;

Thus, it is hypothesized that the ‘space’ measure of an object (e.g., whether it is the spatial dimensions of an object or event, or whether it relates to the spatial location of an object) is computed based on the number of ‘*frame-consistent*’ points (i.e., cross-USCF’s constant points, or “universal-pixels”) which that object possesses across subsequent USCF’s, divided by Planck’s constant ‘h’ multiplied by the number of USCF’s across which the object’s spatial values have been measured.

such that:

where the ‘space’ measure of a given object (or event) is computed based on a *frame-consistent* computation that adds the specific USCF (x,y,z) localizations across a series of USCF’s [1...n] – which nevertheless do *not exceed* the threshold of Planck’s constant per each (‘n’) number of frames (e.g., thereby providing the CUFT’s definition of “space” as the ‘frame-consistent’ USCF’s measure).
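The verbal definition above can be sketched in code. The pixel-set representation, the h = 1 normalization, and the function name below are illustrative assumptions of mine rather than the paper’s formalism:

```python
# Toy sketch of the CUFT's verbal 'space' definition: the count of
# 'frame-consistent' pixels (pixels an object occupies at the same (x, y, z)
# coordinates across all USCF's), divided by h times the number of frames.
def space_measure(frames, h=1.0):
    """frames: list of sets of (x, y, z) 'universal pixel' coordinates, one per USCF."""
    n = len(frames)
    consistent = set.intersection(*frames)   # pixels unchanged across all frames
    return len(consistent) / (h * n)

frames = [{(0, 0, 0), (1, 0, 0), (2, 0, 0)},
          {(0, 0, 0), (1, 0, 0), (3, 0, 0)},
          {(0, 0, 0), (1, 0, 0), (4, 0, 0)}]
print(space_measure(frames))   # 2 consistent pixels over 3 frames
```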

Conversely, the ‘energy’ of an object (e.g., whether it is the spatial dimensions of an object or event or whether it relates to the spatial location of an object) is computed based on the *frame’s differences* of a given object’s location/s or size/s across a series of USCF’s, divided by the speed of light 'c' multiplied by the number of USCF's across which the object's energy value has been measured:

such that:

wherein the energetic value of a given object, event etc. is computed based on the subtraction of that object’s “universal pixels” location/s across a series of USCF’s, divided by the speed of light multiplied by the number of USCF's.
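A matching sketch of the ‘frame-inconsistent’ energy measure, under the same toy pixel-set representation and with c normalized to 1 (again, all names here are my assumptions):

```python
# Toy sketch of the CUFT's verbal 'energy' definition: the frame-to-frame
# differences in an object's 'universal pixels', divided by c times the number
# of USCF's. The pixel sets and the c = 1 normalization are illustrative only.
def energy_measure(frames, c=1.0):
    """frames: list of sets of (x, y, z) pixel coordinates, one per USCF."""
    n = len(frames)
    changed = sum(len(a ^ b) for a, b in zip(frames, frames[1:]))  # pixels that differ
    return changed / (c * n)

drifting = [{(0, 0, 0)}, {(1, 0, 0)}, {(2, 0, 0)}]   # one pixel of displacement per frame
print(energy_measure(drifting))   # 4 changed pixels over 3 frames
```

Note that a stationary object (identical pixel sets in every frame) yields an energy measure of zero, matching the text’s reading of ‘energy’ as inter-frame displacement.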

In contrast, the ‘mass’ of an object is computed based on a measure of the number of times an *‘object’* is presented *‘consistently’* across a series of USCF’s, divided by Planck’s constant (e.g., representing the minimal degree of inter-frame change):


where the measure of *‘mass’* is computed based on a comparison of the number of instances in which an object’s (or event’s) ‘universal-pixels’ measures (e.g., along the three axes ‘x’, ‘y’ and ‘z’) are identical across a series of USCF’s (e.g., ∑o*i*{x,y,z} [USCF(*n*)] = o*j*{(x+m),(y+m),(z+m)} [USCF(*1...n*)]), divided by Planck’s constant.

Again, the measure of ‘mass’ represents an *object-consistent* computational measure – e.g., regardless of any changes in that object’s spatial (frame) position across these frames.

Finally, the *‘time’* measure is computed based on an *‘object-inconsistent’* computation of the number of instances in which an ‘object’ (i.e., corresponding to only a particular segment of the entire USCF) changes across two subsequent USCF’s (e.g., ∑ o*i*{x,y,z} [USCF(*n*)] ≠ o*j*{(x+m),(y+m),(z+m)} [USCF(*1...n*)]), divided by ‘c’:

such that:

Hence, the measure of ‘time’ represents a computational measure of the number of *‘object-inconsistent’* presentations any given object (or event) possesses across subsequent USCF’s (e.g., once again – regardless of any changes in that object’s ‘frame’ spatial position across this series of USCF’s).
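The ‘object-consistent’ mass and ‘object-inconsistent’ time measures can likewise be sketched together. The translation-invariant ‘shape’ comparison mirrors the text’s condition ∑o*i*{x,y,z} = o*j*{(x+m),(y+m),(z+m)}, while the h = c = 1 normalizations and all names are my assumptions:

```python
# Toy sketch of the CUFT's 'mass' (object-consistent) and 'time' (object-inconsistent)
# measures: counts of unchanged vs. changed object presentations across subsequent
# USCF's, divided by h and c respectively (both normalized to 1 here).
def _shape(obj):
    """Translation-invariant pixel pattern (object relative to its minimal corner)."""
    x0, y0, z0 = (min(p[i] for p in obj) for i in range(3))
    return frozenset((x - x0, y - y0, z - z0) for x, y, z in obj)

def mass_measure(objects, h=1.0):
    return sum(_shape(a) == _shape(b) for a, b in zip(objects, objects[1:])) / h

def time_measure(objects, c=1.0):
    return sum(_shape(a) != _shape(b) for a, b in zip(objects, objects[1:])) / c

objects = [{(0, 0, 0), (1, 0, 0)},    # two-pixel object
           {(5, 5, 5), (6, 5, 5)},    # same shape, displaced -> 'consistent'
           {(9, 9, 9)}]               # shape changed -> 'inconsistent'
print(mass_measure(objects), time_measure(objects))   # 1.0 1.0
```

On this toy reading the ‘consistent’ and ‘inconsistent’ counts exhaustively partition the frame-to-frame transitions, which anticipates the ‘computational exhaustiveness’ the paper invokes for its quantum complementarity account.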

Interestingly, when viewed from the perspective of the (abovementioned) ‘D2 A-Causal Computation’ (rapid) series of USCF’s, the concept of “causality” replaces the (apparent) ‘di1’ “material-causal” physical relationship/s between any given ‘x’ and ‘y’ objects, factors, or phenomena with the existence of *apparent* (quantum or relativistic) spatial-temporal or energy-mass relationships across a series of USCF’s. Thus, for instance, according to the CUFT’s higher-ordered ‘D2 A-Causal Computation’ theoretical interpretation (e.g., as well as based on the earlier outlined ‘Duality Principle’ proof), in both the relativistic (assumed SROCS) direct physical interaction (‘di1’) between any hypothetical (differential) relativistic observer and any (corresponding) spatial-temporal or energy-mass ‘Phenomenon’, and in the quantum (assumed SROCS) direct physical interaction (‘di1’) between any subatomic ‘probe’ particle and any possible ‘target’ element – there *does not exist* any ‘direct’ (‘di1’) *material-causal* relationship between the relativistic observer and (measured) Phenomenon, or between the quantum subatomic ‘probe’ and ‘target’ entities, which results in the determination of the particular spatial-temporal value of any given Phenomenon (e.g., for a particular differential observer) or in the ‘collapse’ of the (assumed) probability wave function into only one (complementary) spatial-energetic or temporal-mass target value...
Instead, according to the CUFT’s stipulated conceptually higher-ordered singular (quantum and relativistic) ‘D2 A-Causal’ Computational Framework, these apparently ‘material-causal’ subatomic probe-target or relativistic differential observer-Phenomenon pair/s are in fact replaced by a hypothetical *‘Universal Computational Principle’s’ (“י”)* D2 A-Causal Computation of the ‘co-occurrence’ of a particular set of such relativistic ‘observer-Phenomenon’ or quantum subatomic ‘probe-target’ pairs (e.g., appearing across a series of USCF’s!). Indeed, a thorough understanding of the CUFT’s replacement of any (hypothetical quantum or relativistic) ‘material-causal’ relationship/s with the conceptually higher-ordered (singular) ‘D2 A-Causal’ Computation (‘י’), which simply co-presents a series of particular relativistic ‘observer-Phenomenon’ or subatomic ‘probe-target’ pairs across the series of given USCF’s, may also open the door for a fuller appreciation of the lack of any (continuous) “physical” or “material” relativistic or quantum object/s, event/s or phenomena etc. “in-between” USCF’s frames – except for the (abovementioned) ‘Universal Computational Principle’ (‘י’ – at ‘D2’). In other words, when viewed from the perspective of the CUFT’s conceptually higher-ordered (singular) ‘י’ computational stance of the series of (rapid) USCF’s, all of the known quantum and relativistic phenomena (and laws) of ‘space’, ‘time’, ‘energy’, ‘mass’ and ‘causality’, ‘space-time’, ‘energy-mass’ equivalence, ‘quantum entanglement’, ‘particle-wave duality’, the “collapse” of the ‘probability wave function’ etc. – are replaced by an ‘a-causal’ (D2) computational account (which will be explicated below);

### 4.1. The CUFT’s replication of quantum & relativistic findings

As shown above, the Computational Unified Field Theory postulates that the various combinations of the ‘Framework’ and ‘Consistency’ computational dimensions produce the known ‘physical’ features of: ‘space’ (‘frame-consistent’), ‘energy’ (‘frame-inconsistent’), ‘mass’ (‘object-consistent’) and ‘time’ (‘object-inconsistent’). The next step is to explicate the various possible relationships that exist between each of these four basic ‘physical’ features and the two levels of the third Computational Dimension of ‘Locus’ – e.g., ‘global’ vs. ‘local’: It is suggested that each of these four basic physical features can be measured either from the computational framework of the entire USCF’s perspective (e.g., a ‘global’ framework) or from the computational perspective of a particular segment of those USCF’s (e.g., a ‘local’ framework). Thus, for instance, the spatial features of any given object can be measured from the computational perspective of the (series of the) entire USCF’s, or from the computational perspective of only a segment of those USCF’s – i.e., such as from the perspective of that object itself (or from the perspective of another object travelling alongside, or in some other specific relationship to, that object). In much the same manner, all other (three) physical features of ‘energy’, ‘mass’ and ‘time’ (e.g., of any given object) can be measured from the ‘global’ computational perspective of the entire (series of) USCF’s or from a ‘local’ computational perspective of only a particular USCF’s segment (e.g., from that object’s perspective or from another travelling frame of reference).

One possible way of formalizing these two different ‘global’ vs. ‘local’ computational perspectives (e.g., for each of the four abovementioned basic physical features) is through attaching a ‘global’ {‘*g*’} vs. ‘local’ {‘*l*’} subscript to each of the two possible (e.g., ‘global’ vs. ‘local’) measurements of the four physical features. Thus, for instance, in the case of ‘mass’ the ‘global’ (computational) perspective measures the number of times that a given object has been presented consistently (i.e., unchanged) – when measured across the (entire) USCF’s pixels (e.g., across a series of USCF’s); In contrast, the ‘local’ computational perspective of ‘mass’ measures the number of times that a given object has been presented consistently (e.g., unchanged) when measured from within that object’s frame of reference;

such that

such that

What is to be noted is that these (hypothesized) different measurements of the ‘global’ vs. ‘local’ computational perspectives – i.e., as measured externally to a particular object’s pixels (‘global’), as opposed to only the pixels constituting the particular segment of the USCF’s which comprises the given object or frame of reference (‘local’) – may in fact replicate Relativity’s known phenomenon of the increase in an object’s mass associated with a (relativistic) increase in its velocity (e.g., as well as all other relativistic phenomena, such as the dilation of time, the shrinkage of length etc.); This is due to the fact that the ‘global’ measurement of an object’s mass critically depends on the *number of times* that object has been presented (consistently) across a series of USCF’s: e.g., the greater the number of (consistent) presentations, the higher its mass. However, since the computational measure of ‘mass’ is computed relative to Planck’s (‘*h*’) constant (e.g., computed as a given object’s number of consistent presentations across a specific number of USCF’s frames), and since the spatial measure of any such object is contingent upon that object’s consistent presentations (across the series of USCF’s) such that the object does not differ (‘spatially’) across frames by more than the number of USCF’s multiplied by Planck’s constant – it follows that the higher an object’s energy (i.e., the greater the displacement of its pixels across a series of USCF’s), the greater the number of pixels that object has travelled, and also the greater the number of times that object has been presented across the series of USCF’s – which constitutes that object’s ‘global’ mass measure!
In other words, when an object’s mass is measured from the ‘global’ perspective we obtain a measure of that object’s (number of external) global pixels (reference) which increases as its relativistic velocity increases, thereby also increasing the number of times that object is presented (e.g., from the global perspective) and hence increasing its globally measured ‘mass’ value. In contrast, when that object’s mass is measured from the ‘local’ computational perspective – such a ‘local mass’ measurement only takes into account the number of times that object has been presented (across a given series of USCF’s) as measured from within that object’s frame of reference; Therefore, even when an object increases its velocity – if we set out to measure its mass from within its own frame of reference we will not be able to measure any increase in its ‘mass’ (e.g., since, when measured from within its local frame of reference, there is no change in the number of times that object has been presented across the series of USCF’s)...
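One (admittedly loose) way to operationalize this verbal claim in code is to assume, as the passage suggests, that from the ‘global’ perspective a faster object sweeps more USCF pixels per frame and is therefore ‘presented’ proportionally more often, whereas the ‘local’ presentation count is velocity-independent. The counting rule and all names below are my assumptions, not formulas given in the text:

```python
# Loose toy of the 'global' vs. 'local' mass measures: globally, presentations are
# assumed to scale with the pixels swept per frame (a stand-in for velocity);
# locally, the object's own frame of reference registers no extra presentations.
def global_mass(n_frames, pixels_swept_per_frame, h=1.0):
    return n_frames * max(1, pixels_swept_per_frame) / h

def local_mass(n_frames, pixels_swept_per_frame, h=1.0):
    return n_frames / h   # velocity-independent from within the object's frame

print(global_mass(10, 1), global_mass(10, 5))   # grows with velocity
print(local_mass(10, 1), local_mass(10, 5))     # does not
```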

Likewise, it is hypothesized that if we apply the ‘global’ vs. ‘local’ computational measures to the physical features of ‘space’, ‘energy’ and ‘time’ we will also replicate the well-known relativistic findings of the shortening of an object’s length (in the direction of its travelling), and the dilation of time (as measured by a ‘global’ observer): Thus, for instance, it is suggested that an application of the same ‘global’ computational perspective to the physical feature of ‘space’ brings about an inevitable shortening of its spatial length (e.g., in the direction of its travelling):

such that:

It is hypothesized that this is due to the global computational definition of an object’s spatial dimensions, which computes a given object’s spatial (length) value based on its consistent ‘spatial’ pixels (across a series of USCF’s) – such that any changes in that object’s spatial dimensions must not exceed Planck’s (‘h’) spatial constant multiplied by the number of USCF’s; This is because, given such a minimal Planck ‘spatial threshold’ computational constraint, the faster a given relativistic object travels (e.g., from a global computational perspective) the fewer ‘consistent’ spatial ‘pixels’ that object possesses across frames – and hence the shorter its spatial dimensions become (i.e., in the direction of its travelling); in contrast, measured from a ‘local’ computational perspective there is obviously no such “shrinkage” in an object’s spatial dimensions – since, based on such a ‘local’ perspective, all of the spatial ‘pixels’ comprising a given object remain unchanged across the series of USCF’s.

such that:

Somewhat similar is the case of the ‘global’ computation of the physical feature of ‘time’ which is computed based on the number of measured changes in the object’s spatial ‘pixels’ constitution (across frames):

such that:

The temporal value of an event (or object) is computed based on the number of times that a given object or event has changed – relative to the speed of light (e.g., across a certain number of USCF’s); However, the *measurement* of such temporal changes (e.g., taking place at an object or event) differs significantly when computed from the ‘global’ or ‘local’ perspectives: This is because from a *‘global’* perspective, the faster an object travels (e.g., relative to the speed of light) the fewer changes are exhibited in that object’s or event’s presentations (across the relevant series of USCF’s). In contrast, from a ‘local’ perspective, there is no change in the number of measured changes in the given object (e.g., as its velocity increases relative to the speed of light) – since the local (computational) perspective does not encompass globally measured changes in the object’s displacement (relative to the speed of light)…

Note also that we can begin appreciating the fact that, from the CUFT’s (D2 USCF’s) computational perspective, there seem to be inexorable (computational) interrelationships between the eight computational products of the three postulated Computational Dimensions of ‘Framework’, ‘Consistency’ and ‘Locus’; Thus, for instance, we find that an acceleration in an object’s velocity increases the number of times that object is presented (e.g., ‘globally’, across a given number of USCF frames) – which in turn also increases its ‘mass’ (e.g., from the ‘global Locus’ computational perspective), and (inevitably) also decreases its (global) ‘temporal’ value (due to the decreased number of instances in which that object changes across that given number of frames, e.g., globally – relative to the speed-of-light maximal-change computational constraint)... Indeed, over and beyond the hypothesized capacity of the CUFT to replicate and account for all known relativistic and quantum empirical findings, its conceptually higher-ordered ‘D2’ USCF’s emerging computational framework may point at the unification of all apparently “distinct” physical features of ‘space’, ‘time’, ‘energy’ and ‘mass’ (and ‘causality’), as well as at a complete harmonization of the apparent disparity between quantum (microscopic) and relativistic (macroscopic) phenomena and laws;

Towards that end, we next consider the applicability of the CUFT to known quantum empirical findings: Specifically, we consider the CUFT’s account of the quantum (computational) complementary properties of ‘space’ and ‘energy’ or ‘time’ and ‘mass’; of an alternative CUFT account of the “collapse” of the probability wave function; and of the ‘quantum entanglement’ and ‘particle-wave duality’ subatomic phenomena; It is also hypothesized that these alternative CUFT theoretical accounts may pave the way for the (long-sought-for) unification of quantum and relativistic models of physical reality. First, it is suggested that the quantum complementary ‘physical’ features of ‘space’ and ‘energy’, or ‘time’ and ‘mass’, may be due to a ‘*computational exhaustiveness*’ (or ‘complementarity’) of each of the (two) levels of the Computational Dimension of ‘Framework’. It is hypothesized that both the ‘*frame*’ and ‘*object*’ (‘D2-USCF’s’) computational perspectives are *exhaustively* comprised of their ‘consistent’ and ‘inconsistent’ computational aspects (e.g., the ‘space’ and ‘energy’, or ‘mass’ and ‘time’ physical features, respectively): Thus, whether we choose to examine the USCF’s (D2) computation of a ‘frame’ – which is exhaustively comprised of its ‘space’ (‘consistent’) and ‘energy’ (‘inconsistent’) computational perspectives – or whether we choose to examine the ‘object’ perspective of the USCF’s (D2) series – which is exhaustively comprised of its ‘mass’ (‘consistent’) and ‘time’ (‘inconsistent’) computational aspects – in both cases the (D2) USCF’s series is exhaustively comprised of these ‘consistent’ and ‘inconsistent’ computational aspects (e.g., of the ‘frame’ or ‘object’ perspectives)...

This means that the computational definitions of each of these pairs – ‘frame’: ‘space’ (consistent) and ‘energy’ (inconsistent), or ‘object’: ‘mass’ (consistent) and ‘time’ (inconsistent) – are ‘exhaustive’ in comprising the USCF’s ‘Framework’ (i.e., ‘frame’ or ‘object’) Dimension:

Indeed, note that the computational definitions of ‘space’ and ‘energy’ exhaustively define the USCF’s (D2) Framework computational perspective of a ‘*frame’*:

such that:

and such that:

Likewise, note that the computational definitions of ‘mass’ and ‘time’ exhaustively define the USCF’s (D2) Framework computational perspective of an ‘*object’*:

such that

and such that:

Thus, it is hypothesized that it is the *computational exhaustiveness* of the Framework Computational Dimension’s (two) levels (e.g., of the ‘frame’ or ‘object’ perspectives) which gives rise to the known quantum complementary ‘physical’ features of ‘space’ and ‘energy’ (e.g., the *frame’s* ‘consistent’ and ‘inconsistent’ perspectives) or of ‘mass’ and ‘time’ (e.g., the *object’s* ‘consistent’ and ‘inconsistent’ perspectives). However, since this hypothetical ‘computational exhaustiveness’ of the Framework Dimension’s (two) levels arises as an integral part of the USCF’s (D2) Universal Computational Principle’s operation – it manifests both through the (abovementioned) computational definitions of ‘space’ and ‘energy’, ‘mass’ and ‘time’, and through a singular ‘Universal Computational Formula’, postulated below:

### 4.2. The ‘Universal Computational Formula’

Based on the abovementioned three basic postulates of the ‘Duality Principle’ (e.g., including the existence of a conceptually higher-ordered ‘D2 A-Causal’ Computational framework), the existence of a rapid series of ‘Universal Simultaneous Computational Frames’ (USCF’s – e.g., which are postulated to be computed at an incredible rate of ‘*c^{2}*’/‘*h*’) and their accompanying three Computational Dimensions of ‘Framework’ (‘frame’ vs. ‘object’), ‘Consistency’ (‘consistent’ vs. ‘inconsistent’) and ‘Locus’ (‘global’ vs. ‘local’) – a singular ‘Universal Computational Formula’ is postulated which may underlie all (known) quantum and relativistic phenomena:

wherein the left side of this singular hypothetical Universal Computational Formula represents the (abovementioned) universal rate of computation by the hypothetical Universal Computational Principle, whereas the right side of this Universal Computational Formula represents the ‘integrative-complimentary’ relationships between the four basic physical features of ‘space’ (s), ‘time’ (t), ‘energy’ (e) and ‘mass’ (m), e.g., as comprising different computational combinations of the three (abovementioned) Computational Dimensions of ‘Framework’, ‘Consistency’ and ‘Locus’;
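The formula itself survives in this text only through the in-text derivation ‘t x m x (c^{2}/h x י) = s x e’ quoted later in this section; rearranged so that the rate of computation stands on the left, as the paragraph above describes, it may be rendered as:

```latex
\frac{c^{2}}{h}\cdot\text{י} \;=\; \frac{s \cdot e}{t \cdot m}
```

i.e., the universal rate of computation (together with the Universal Computational Principle ‘י’) on the left, and the ‘integrative-complementary’ space-energy and time-mass combination on the right.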

Note that on both sides of this Universal Computational Formula there is a coalescing of the basic quantum and relativistic computational elements – such that the rate of Universal Computation is given by the maximal degree of (inter-USCF’s relativistic) change ‘c^{2}’ divided by the minimal degree of (inter-USCF’s quantum) change ‘*h*’; Likewise, the right side of this Universal Computational Formula meshes together both quantum and relativistic computational relationships – such that it combines the relativistic ratios of space and time (s/t) and of energy and mass (e/m) with the quantum (computational) complementary relationships between ‘space’ and ‘energy’, and ‘time’ and ‘mass’;

More specifically, this hypothetical Universal Computational Formula fully integrates two sets of (quantum and relativistic) computations, which can be expressed through two of its derivations:

The first of these equations indicates that there is a computational equivalence between the (relativistic) relationships of ‘space and time’ and ‘energy and mass’; specifically, that the computational ratio of ‘space’ (e.g., which according to the CUFT is a measure of the ‘frame-consistent’ feature) and ‘time’ (e.g., which is a measure of the ‘object-inconsistent’ feature) is equivalent to the computational ratio of ‘mass’ (e.g., a measure of the ‘object-consistent’ feature) and ‘energy’ (e.g., the ‘frame-inconsistent’ feature)... Interestingly, this (first) derivation of the CUFT’s Universal Computational Formula incorporates (and broadens) key (known) relativistic laws – such as (for instance) the ‘E = mc^{2}’ equation, as well as the basic concepts of ‘space-time’ and its curvature by the ‘mass’ of an object (which in turn also affects that object’s movement – i.e., ‘energy’).

The second equation explicates the (abovementioned) quantum ‘computational exhaustiveness’ (or ‘complementarity’) of the Computational Framework Dimension’s two levels – ‘*frame*’: ‘space’ (‘consistent’) and ‘energy’ (‘inconsistent’), and ‘*object*’: ‘mass’ (‘consistent’) and ‘time’ (‘inconsistent’) ‘physical’ features – as part of the singular integrated (quantum and relativistic) Universal Computational Formula...

## 5. Unification of quantum and relativistic models of physical reality

Thus, the three (abovementioned) postulates of the ‘Duality Principle’, the existence of a rapid series of ‘Universal Simultaneous Computational Frames’ (USCF’s – computed by the ‘Universal Computational Principle’ {‘י’} at the incredible hypothetical rate of ‘c^{2}/h’), and the three Computational Dimensions of ‘Framework’, ‘Consistency’ and ‘Locus’ have resulted in the formulation of the (hypothetical) new ‘Universal Computational Formula’:

It is (finally) suggested that this (novel) CUFT and (embedded) Universal Computational Formula can offer a satisfactory harmonization of the existing quantum and relativistic models of physical reality, e.g., precisely through their integration within the (above) broader higher-ordered singular ‘D2’ Universal Computational Formula;

In a nutshell, it is suggested that this Universal Computational Formula embodies the singular higher-ordered ‘D2’ series of (rapid) USCF’s, thereby integrating quantum and relativistic effects (laws and phenomena) and resolving any apparent ‘discrepancies’ or ‘incongruities’ between these two apparently distinct theoretical models of physical reality:

Therefore, it is suggested that the three (abovementioned, apparent) principal differences between quantum and relativistic theories, namely: *‘probabilistic’ vs. ‘positivistic’* models of physical reality, *‘simultaneous-entanglement’ vs. ‘non-simultaneous causality’*, and *‘single-’ vs. ‘multiple-’ spatial-temporal modeling*, can be explained (in a satisfactory manner) based on the new (hypothetical) CUFT model (represented by the Universal Computational Formula);

As suggested earlier, the apparent ‘probabilistic’ characteristics of quantum mechanics – e.g., wherein an (apparent) multi spatial-temporal “probability wave function” ‘collapses’ upon its assumed ‘SROCS’ direct (‘di1’) physical interaction with another ‘probe’ element – are replaced by the CUFT’s hypothesized (singular) conceptually higher-ordered ‘D2’ rapid series of USCF’s (e.g., governed by the above Universal Computational Formula); Specifically, the Duality Principle’s conceptual proof of the in-principle inability of the SROCS computational structure to compute the “collapse” of (an assumed) “probability wave function” (‘target’ element) based on its direct physical interaction (at ‘di1’) with another ‘probe’ measuring element has led to a reformalization of the various subatomic quantum effects – including the “collapse” of the “probability wave function”, the “particle-wave duality”, the “Uncertainty Principle’s” computational complementary features, and “quantum entanglement” – as arising from the (singular higher-ordered ‘D2’) rapid USCF’s series:

Thus, instead of Quantum theory’s (currently assumed) “collapse” of the ‘probability wave function’, the CUFT posits that there exists a rapid series of ‘Universal Simultaneous Computational Frames’ (USCF’s) that can be looked at from a ‘single’ spatial-temporal perspective (e.g., a subatomic ‘particle’ or a well-localized relativistic ‘object’ or ‘event’) or from a ‘multiple’ spatial-temporal perspective (e.g., a subatomic ‘wave’ measurement or conceptualization). Moreover, the CUFT hypothesizes that both the subatomic ‘single spatial-temporal’ “particle” and ‘multiple spatial-temporal’ “wave” measurements are embedded within an exhaustive series of USCF’s (e.g., that are governed by the abovementioned Universal Computational Formula). In this way, it is suggested that the CUFT is able to resolve all three abovementioned (apparent) conceptual differences between quantum and relativistic models of physical reality: This is because, instead of the ‘collapse’ of the assumed ‘quantum probability wave function’ through its (SROCS-based) direct physical interaction with another subatomic probe element, the CUFT posits the existence of the rapid series of USCF’s that can give rise to ‘single spatial-temporal’ (subatomic “particle” or relativistic ‘object’ or ‘event’) or to ‘multiple spatial-temporal’ (subatomic or relativistic) “wave” phenomena; Hence, instead of the current “probabilistic-quantum” vs. “positivistic-relativistic” (apparently disparate) theoretical models, the CUFT coalesces both quantum and relativistic theoretical models as constituting integral elements within a singular rapid series of USCF’s. Thereby, the CUFT can explain all of the (apparently incongruent) quantum and relativistic phenomena (and laws) – such as, for instance, the (abovementioned) ‘particle’ vs. ‘wave’ and ‘quantum entanglement’ phenomena – e.g., which is essentially a representation of the fact that all single-, multiple- (or exhaustive) measurements are embedded within the series of ‘Universal *Simultaneous* Computational Frames’ (USCF’s), and therefore that two apparently “distinct” ‘single spatial-temporal’ measured “particles” that are embedded within the ‘multiple spatial-temporal’ “wave” measurement necessarily constitute integral parts of the same singular simultaneous USCF’s (which therefore gives rise to the apparent ‘quantum entanglement’ phenomenon). Nevertheless, due to the abovementioned ‘computational exhaustiveness’ (or ‘complementarity’), the computation of such apparently ‘distinct’ “particles” embedded within the same “wave” and USCF’s (series) leads to the known quantum (‘uncertainty principle’) complementary computational (e.g., simultaneous) constraints applying to the measurement of ‘space’ and ‘energy’ (e.g., ‘frame’: consistent vs. inconsistent features), or of ‘mass’ and ‘time’ (e.g., ‘object’: consistent vs. inconsistent features). Such a USCF’s-based theoretical account of the empirically validated “quantum entanglement” phenomenon is also capable of resolving the apparent contradiction that seems to exist between such “simultaneous action at a distance” (to quote Einstein’s famous objection) and Relativity’s constraint upon the transmission of any signal at a velocity exceeding the speed of light: this is because the CUFT postulates that the “entangled particles” are computed simultaneously (along with the entire physical universe) as part of the same USCF/s (e.g., and more specifically as part of the same multi spatial-temporal “wave” pattern) – such that no ‘physical’ signal needs to be transmitted between these “entangled particles” across space, and hence no violation of the relativistic speed-of-light constraint is entailed.

Another important aspect of this (hypothetical) Universal Computational Formula’s representation of the CUFT may be its capacity to replicate Relativity’s curvature of ‘space-time’ based on the existence of certain massive objects (which in turn also affects their own space-time pathway etc.): Interestingly, the CUFT points at the existence of USCF’s regions that may constitute: “high-space, high-time; high-mass, low-energy” vs. other regions which may be characterized as: “low-space, low-time; low-mass, high-energy” based on the computational features embedded within the CUFT (and its representation by the above Universal Computational Formula). This is based on the Universal Computational Formula’s (integrated) representation of the CUFT’s basic computational definitions ‘space’, ‘time’, ‘energy’ and ‘mass’ as:

which represents: ‘space’ – as the number of (accumulated) USCF’s ‘consistent-frame’ pixels that any given object occupies, and its (converse) computational definition of ‘time’ – as the number of ‘inconsistent-object’ pixels; and likewise the computational definition of ‘mass’ – as the number of ‘consistent-object’ USCF’s pixels, and of ‘energy’ – as the number of ‘inconsistent-frame’ USCF’s pixels.

Hence, General Relativity may represent a ‘special case’ embedded within the CUFT’s Universal Computational Formula’s integrated relationships between the ‘space’, ‘time’, ‘energy’ and ‘mass’ (computational definitions): This is because General Relativity describes the specific dynamics between the “mass” of relativistic objects (e.g., a ‘global-object-consistent’ computational measure), their curvature of “space-time” (i.e., based on ‘frame-consistent’ vs. ‘object-inconsistent’ computational measures) and its relationship to the ‘energy-mass’ equivalence (e.g., reflecting ‘frame-inconsistent’ and ‘object-consistent’ computational measures); This is because, from the (abovementioned) ‘global’ computational measurement perspective, there seem to exist those USCF’s regions which are displaced significantly across frames (e.g., possess a high ‘global-frame-inconsistent’ energy value) – and which therefore also exhibit an increased ‘global-object-consistent’ mass value, and moreover are necessarily characterized by their (apparent) curvature of ‘space-time’ (i.e., an alteration of the ‘global-frame-consistent’ space values and the associated ‘global-object-inconsistent’ time values)…

Therefore, in the special CUFT's case described by General Relativity we obtain those "massive" objects, i.e., which arise from high 'global-frame-inconsistent' energy values (e.g., which are therefore presented many times consistently across frames – yielding a high 'global-object-consistent' mass value); These objects also produce low (dilated) global temporal values since the high 'global-object-consistent' (mass) value is inevitably linked with a low 'global-object-inconsistent' (time) value; Finally, such a high 'global-frame-inconsistent' (energy) object also invariably produces low 'global-frame-consistent' spatial measures (e.g., in the vicinity of such 'high-energy-high-mass' object). Thus, it may be the case that General Relativity’s described mechanical dynamics between the mass of objects and their curvature of ‘space-time’ (which interacts with these objects’ charted space-time pathway) represents a particular instance embedded within the more comprehensive (CUFT) Universal Computational Formula’s outline of a (singular) USCF’s-series based D2 computation (e.g., comprising the three above mentioned ‘Framework’, Consistency’ and ‘Locus’ Computational Dimensions) of the four basic ‘physical’ features of ‘space’, ‘time’, ‘energy’ and ‘mass’ interrelationships (e.g., as ‘secondary’ emerging computational products of this singular Universal Computational Formula driven process)...

Indeed, the CUFT's hypothesized rapid series of USCF's (governed by the above mentioned 'Universal Computational Formula') integrates (perfectly) between: the essential quantum complementary features of 'space and energy' or 'time and mass' (e.g., which arise as a result of the abovementioned 'computational exhaustiveness' of each of the Computational Framework Dimension's 'frame' and 'object' levels, represented earlier by one of the derivations of the Universal Computational Formula); "quantum entanglement", the "uncertainty principle" and the "particle-wave duality" (e.g., which arise from the existence of the postulated 'Universal Simultaneous Computational Frames' [USCF's] that compute the entire spectrum of the physical universe simultaneously per each given USCF, and which embed within each of these USCF's any 'single spatial-temporal' measurements of "entangled particles" as constituting integral parts of 'multiple spatial-temporal' "wave" patterns); quantum mechanics' minimal degree of physical change represented by Planck's 'h' constant (e.g., which signifies the CUFT's 'minimal degree of inter-USCF's change' for all four 'physical' features of 'space', 'time', 'energy' and 'mass'); the well validated relativistic physical laws and phenomena of the "equivalence of energy and mass" (e.g., the famous "E = Mc^{2}", which arises as a result of the transformation of any given object's or event's 'frame-inconsistent' to 'object-consistent' computational measures based on the maximal degree of change, but which also involves the more comprehensive and integrated Universal Computational Formula derivation: t x m x (c^{2}/h x י) = s x e); and Relativity's 'space-time' and 'energy-mass' relationships, expressed in terms of their constitution of an integrated singular USCF's series, which is given through an alternate derivation of the same Universal Computational Formula: s/t = (m/e) x (c^{2}/h) x י.

Indeed, this last derivation of the Universal Computational Formula seems to encapsulate General Relativity's proven dynamic relationships that exist between the curvature of space-time by mass and its effect on the space-time pathways of any such (massive) object/s – through the complete integration of all four physical features within a singular (conceptually higher-ordered 'D2') USCF's series... Specifically, this (last) derivation of the (abovementioned) Universal Computational Formula seems to integrate 'space-time' – i.e., as a ratio of a 'frame-consistent' computational measure divided by an 'object-inconsistent' computational measure – as equal to the computational ratio that exists between 'mass' (e.g., 'object-consistent') divided by 'energy' (e.g., 'frame-inconsistent'), multiplied by the Rate of Universal Computation (R = c^{2}/h) and by the Universal Computational Principle's operation ('י'); Thus, the CUFT (represented by the Universal Computational Formula) may supply us with an elegant, comprehensive and fully integrated account of the four basic 'physical' features constituting the physical universe (e.g., or indeed any set of computational object/s, event/s or phenomena etc.):

c^{2}/h x י = (s x e) / (t x m)
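The algebraic consistency of the two derivations quoted above can be checked directly; dividing both sides of the 'energy-mass' form by the product t·e yields the 'space-time' form:

```latex
% Equivalence of the two quoted derivations of the
% Universal Computational Formula (divide the first by t*e):
t \cdot m \cdot \left(\frac{c^{2}}{h}\cdot \text{י}\right) = s \cdot e
\quad\Longleftrightarrow\quad
\frac{s}{t} = \frac{m}{e}\cdot\frac{c^{2}}{h}\cdot \text{י}
```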

The Universal Computational Formula thus fully integrates Relativity's maximal degree of inter-USCF's change (e.g., represented as 'c^{2}') with quantum mechanics' minimal degree of inter-USCF's change (e.g., represented by Planck's constant 'h'), producing the 'Rate' {R} of this rapid series of USCF's as: R = c^{2}/h, which is computed by the Universal Computational Principle 'י' and gives rise to all four 'physical' features of 'space', 'time', 'energy' and 'mass' as integral aspects of the same rapid USCF's universal computational process.
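Taken at face value (and setting aside the question of physical units, which the text does not address), the proposed rate R = c²/h can be evaluated numerically; the constants below are the standard SI values, not figures given in the text:

```python
# Order-of-magnitude evaluation of the CUFT's proposed universal
# computation rate R = c^2 / h (standard SI values; the resulting
# units are left uninterpreted, as in the text).
c = 2.99792458e8      # speed of light in vacuum, m/s
h = 6.62607015e-34    # Planck's constant, J*s

R = c**2 / h          # on the order of 1e50 in these units
print(f"R = c^2/h ~ {R:.3e}")
```

On this reading, the hypothesized USCF presentation rate is of the order of 10^50 per unit time, although the text itself assigns no units to R.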

Thus, we can see that the discovery of the hypothetical Computational Unified Field Theory’s (CUFT’s) rapid series of USCF’s fully integrates between hitherto validated quantum and relativistic empirical phenomena and natural laws, while resolving all of their apparent contradictions.

## 6. CUFT: Theoretical ramifications

Several important theoretical ramifications may follow from the CUFT; First, the CUFT's (novel) definition of 'space', 'time', 'energy' and 'mass' – as emerging computational properties which arise as a result of different combinations of the three Computational Dimensions (e.g., of 'Framework', 'Consistency' and 'Locus') – transforms these apparently "physical" properties into (secondary) 'computational properties' of a D2 series of USCF's... This means that instead of 'space', 'energy', 'mass' and 'time' existing as "independent-physical" properties in the universe, they arise as *'secondary integrated computational properties'* (e.g., 'object'/'frame' x 'consistent'/'inconsistent' x 'global'/'local') of a singular conceptually higher-ordered 'D2' computed USCF's series…

Second, such CUFT delineation of the USCF's arising (secondary) computational features of 'space', 'time', 'energy' and 'mass' is also based on one of the (three) postulates of the CUFT, namely the 'Duality Principle': i.e., recognizing the computational constraint set upon the determination of any "causal-physical" relationship between any two (hypothetical) interacting 'x' and 'y' elements (at any direct 'di1' or indirect '…din' computational level/s), and instead asserting only the existence of a conceptually higher-ordered 'D2' computational level which can compute only the "co-occurrences" of any two or more hypothetical spatial-temporal events or phenomena etc. This means that the CUFT's hypothesized 'D2' computation of a series of (extremely rapid) USCF's does not leave any room for the existence of any (direct or indirect) "causal-physical" 'x→y' relationship/s. Instead, the hypothesized D2 A-Causal Computation calls for the computation of the co-occurrences of certain related phenomena, factors or events – which lack any "real" 'causal-physical' relationship/s (phenomena or laws)...

Third, the Duality Principle's above mentioned necessity to replace any (direct or indirect) causal-physical relationship (or scientific paradigm), e.g., "x→y", by the CUFT's hypothesized D2 A-Causal Computation of the "co-occurrence" of particular spatial-temporal factors, events, phenomena etc. that constitute certain 'spatial-pixels' within a series of USCF's may have significant theoretical ramifications for several other key scientific paradigms (across the different scientific disciplines); Specifically, it is suggested that an application of the Duality Principle's identified- and constrained- SROCS computational structure (e.g., of the general form: PR{x,y}/di1→['y' or 'not y']/di1) to key existing scientific paradigms such as 'Darwin's Natural Selection Principle', 'Gödel's Incompleteness Theorem' (e.g., and Hilbert's failed 'Mathematical Program'), and Neuroscience's (currently assumed) 'materialistic-reductionistic' working hypothesis may open the door for a potential reformalization of these scientific paradigms in a way that is compatible with the novel computational Duality Principle and its ensuing CUFT.
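As a purely illustrative toy model (my construction, not the paper's formalism), the self-referential SROCS form PR{x,y}/di1 → ['y' or 'not y']/di1 can be caricatured in code: when the outcome of the 'di1'-level interaction is fed back as the existence-value of 'y' at that same level, the system can fail to settle on either 'y' or 'not y', mirroring the 'logical inconsistency' and 'computational indeterminacy' the Duality Principle identifies:

```python
# Toy caricature of a SROCS: the di1-level interaction PR{x, y} is fed
# back as y's own existence-value at the same di1 level. Here PR is
# assumed (purely for illustration) to negate y, so the self-referential
# determination never stabilizes on 'y' or 'not y'.
def pr_di1(y_exists: bool) -> bool:
    """Hypothetical 'physical relation' at di1 that determines y from y."""
    return not y_exists

y = True
history = []
for _ in range(4):          # iterate the self-referential determination
    y = pr_di1(y)
    history.append(y)

print(history)              # oscillates; no stable verdict is reached
```

A 'D2'-style alternative would instead record the co-occurring (x, y) pairs at a conceptually higher level, without feeding the outcome back into y's own determination.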

Hence, to the extent that the hypothesized CUFT may replicate (adequately) all known quantum and relativistic empirical phenomena and moreover offer a satisfactory (conceptually higher-ordered ‘D2’) USCF’s series based computational framework that may harmonize- and bridge the gap- between quantum and relativistic models of physical reality, the CUFT may constitute a potential candidate to integrate (and replace) both quantum and relativistic theoretical models; However, in order for such (potentially) serious theoretical consideration to occur, the next required step will be to identify those particular (empirical) instances in which the CUFT’s predictions may differ (significantly) from those of quantum mechanics or Relativity theory.

## 7. Conclusion

In order to address the principle contradiction that exists between quantum mechanics and Relativity Theory (e.g., comprising: Probabilistic vs. deterministic models of physical reality, *"Simultaneous-entanglement"* vs. *"non-simultaneous-causality"* features and Single vs. multiple spatial-temporal modeling) a computational-empirical analysis of a fundamental 'Self-Referential Ontological Computational System' (SROCS) structure underlying both theories was undertaken; It was suggested that underlying both quantum and relativistic modeling of physical reality there is a mutual 'SROCS' which assumes that it is possible to determine the 'existence' or 'non-existence' of a certain 'y' factor solely based on its *direct physical interaction* (PR{x,y}/di1) with another 'x' factor (e.g., at the same 'di1' computational level), thus:

In the case of Relativity theory, such basic SROCS computational structure pertains to the computation of any spatial-temporal or energy-mass value/s of any given event (or object) – based (solely) on its *direct physical interaction* with any hypothetical (differential) relativistic observer:

(54) SROCS: PR{O, 'Phenomenon'}/di1 → ['Phenomenon' or 'not Phenomenon']/di1

In the case of quantum mechanics, it is hypothesized that precisely the same SROCS/SRONCS computational structure may pertain to the quantum mechanical computation of the physical properties of any given subatomic 'target' (e.g., assumed to be dispersed all along a probability wave function) that is hypothesized to be determined solely through its direct physical interaction with another subatomic complementary 'probe' entity, thus:

(55) SROCS: PR{'probe', 'target'}/di1 → ['target' or 'not target']/di1

However, the computational-empirical analysis indicated that such SROCS computational structure (which underlies both quantum and relativistic paradigms) inevitably leads to both *'logical inconsistency'* and ensuing *'computational indeterminacy'* (i.e., an apparent inability of these quantum or relativistic SROCS systems to determine whether a particular spatial-temporal or energy-mass 'Phenomenon', or a particular spatial-energetic or temporal-mass target value, "exists" or "doesn't exist"). But since there exists ample empirical evidence indicating the capacity of these quantum or relativistic computational systems to determine the "existence" or "non-existence" of any particular relativistic 'Phenomenon' or quantum complementary target value, a novel computational 'Duality Principle' asserts that the currently assumed SROCS computational structure is invalid; Instead, the Duality Principle points at the existence of a conceptually higher-ordered ('D2') *"a-causal"* computational framework which computes the "co-occurrences" of any range of quantum 'probe-target' or relativistic 'observer-Phenomenon' pairs, thus:

(56) D2: [{O, 'Phenomenon'}st-1; {O, 'Phenomenon'}st-2; … {O, 'Phenomenon'}st-n]

(57) D2: [{'probe', 'target'}st-1; {'probe', 'target'}st-2; … {'probe', 'target'}st-n]

Indeed, a further application of this (new) hypothetical computational Duality Principle indicated that there cannot exist "multiple D2" computational levels, but rather only one singular, conceptually higher-ordered 'D2' computational framework underlying both quantum and relativistic (abovementioned) 'co-occurring' phenomena.

Next, an examination of the potential characteristics of such conceptually higher-ordered (singular) ‘a-causal D2’ computational framework indicated that it may embody ‘single’- ‘multiple’- and ‘exhaustive’ spatial-temporal measurements as embedding all hypothetical ‘probe-target’ subatomic pairs or all hypothetical (differential) observer/s – ‘Phenomenon’ pairs; It was suggested that such D2 (singular ‘a-causal’) arrangement of all hypothetical quantum ‘probe-target’ or relativistic ‘observer-Phenomenon’ pairs may give rise to all known *single* spatial-temporal (quantum) “*particle*” or (relativistic) “*object*” or “*event*” measurements or all *multiple* spatial-temporal “*wave*” measurements. Moreover, when we broaden our computational analysis beyond the scope of such ‘single-’ or ‘multiple’ spatial-temporal measurements (or conceptualizations) to the entire corpus of all hypothetical possible spatial-temporal points- e.g., as 'co-occurring' at the Duality Principle’s asserted conceptually higher-ordered ‘D2’ computational framework, then this may point at the existence of a series of *‘Universal Simultaneous Computational Frames’* (USCF’s). 
The existence of such a (series of) hypothetical conceptually higher-ordered 'D2' USCF's – which constitute the entirety of all hypothetical (relativistic) spatial-temporal or energy-mass phenomena and all hypothetical (quantum complementary) spatial-energetic or temporal-mass "pixels" – was suggested by the well-validated empirical phenomenon of 'quantum entanglement' (e.g., relating to a 'computational linkage' between two spatial-temporal "entangled particles" across distances whose traversal would exceed the speed of light); This is because, given that two such disparate 'entangled' quantum "particles" (e.g., which could hypothetically comprise a probability wave function spanning tremendous cosmic distances) exhibit such a simultaneous 'computational linkage', we may infer that the entirety of all (hypothetical) cosmic quantum (complementary) 'probe-target' pairs or all (hypothetical) relativistic 'observer-Phenomenon' pairs may be computed as "co-occurring" simultaneously as part of such (hypothetical) 'D2' 'Universal Simultaneous Computational Frames' (USCF's).

This hypothetical (rapid series of) 'Universal Simultaneous Computational Frames' (USCF's) was further stipulated to possess three basic (interrelated) 'Computational Dimensions': Computational *'Framework'* (e.g., relating to the entire USCF/s *'frame/s'* or to a particular *'object'* within the USCF/s); Computational *'Consistency'* (which refers to the degree of *'consistency'* of an object, or of segments of the frame, across a series of USCF's, e.g., 'consistent' vs. 'inconsistent'); and Computational *'Locus'* (e.g., whether the computation is carried out 'locally' – from within any particular object or 'reference system' – or 'globally', i.e., externally to a particular reference system, from the perspective of the entire frame or segments of the frame). Interestingly, (partially) by using a *'cinematic film metaphor'* it was possible to derive and formalize each of the four basic physical features of 'space', 'time', 'energy' and 'mass' as emerging (secondary) computational properties arising from the singular 'D2' computation of a series of USCF's – through a combination of the two Computational Dimensions of 'Framework' and 'Consistency': Thus, a combination of the 'object' level (e.g., within the 'Framework' Dimension) with the 'consistent' vs. 'inconsistent' levels (of the 'Consistency' Dimension) produced the physical properties of 'mass' and 'time' (correspondingly); On the other hand, a combination of the 'frame' level (within the 'Framework' Dimension) and the 'consistent' vs. 'inconsistent' levels ('Consistency' Dimension) yielded the two other basic physical features of 'space' and 'energy'.
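The 'cinematic film metaphor' and the Framework x Consistency combinations just described can be sketched as a toy computation (the frame encoding and all names below are illustrative assumptions, not the paper's formalism): frames are tuples of 'pixel' values, and each of the four features is read off as a consistency count across consecutive frames:

```python
# Toy sketch of the 'cinematic film metaphor': the four physical
# features as Framework x Consistency counts across consecutive
# "frames" (each frame is a tuple of pixel values; an "object" is
# identified, for simplicity, with a single tracked pixel index).
def feature_measures(frames, obj_idx):
    """Tally frame/object (in)consistency across consecutive frame pairs."""
    space = energy = mass = time = 0
    for prev, curr in zip(frames, frames[1:]):
        # 'frame' level: compare every pixel of the two frames
        same = sum(p == q for p, q in zip(prev, curr))
        space += same                  # 'frame-consistent'   -> space
        energy += len(curr) - same     # 'frame-inconsistent' -> energy
        # 'object' level: compare only the tracked object's pixel
        if prev[obj_idx] == curr[obj_idx]:
            mass += 1                  # 'object-consistent'  -> mass
        else:
            time += 1                  # 'object-inconsistent'-> time
    return {"space": space, "energy": energy, "mass": mass, "time": time}

frames = [(1, 0, 0), (1, 0, 1), (1, 0, 1)]   # three toy "USCF's"
print(feature_measures(frames, obj_idx=2))
```

Here the tracked pixel changes once and then persists, so the toy 'object' accrues both a 'time' count and a 'mass' count, while the frame-level comparisons yield the 'space' and 'energy' tallies.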
It was further hypothesized that (following the cinematic-film analogy) the minimal (possible) degree of 'change' across any two (subsequent) 'Universal Simultaneous Computational Frames' (USCF's) is given by Planck's '*h*' constant (e.g., for the various physical features of 'space', 'time', 'energy' or 'mass'), whereas the maximal (possible) degree of change across two such (subsequent) USCF's is given by '*c^{2}*'; Finally, the 'rate' at which the series of USCF's may be computed (or presented) was hypothesized to be given by: c^{2}/h.

Hence, based on the above mentioned three basic theoretical postulates of the 'Duality Principle' (e.g., including the existence of a conceptually higher-ordered 'D2 A-Causal' computational framework), the existence of a rapid series of 'Universal Simultaneous Computational Frames' (USCF's), and their accompanying three Computational Dimensions of 'Framework' ('frame' vs. 'object'), 'Consistency' ('consistent' vs. 'inconsistent') and 'Locus' ('global' vs. 'local'), a (novel) 'Computational Unified Field Theory' (CUFT) was hypothesized; Based on a computational formalization of each of the four basic physical features of 'space' and 'energy', 'mass' and 'time' (e.g., which arise as secondary computational measures of the singular D2 rapid series of USCF's, through the Computational Dimensions' combinations of *'frame'*: 'consistent' vs. 'inconsistent' and *'object'*: 'consistent' vs. 'inconsistent', correspondingly), the hypothesized CUFT can account for all known quantum and relativistic empirical findings, and seems to 'bridge the gap' between quantum and relativistic modeling of physical reality: Specifically, the various relativistic phenomena were shown to arise from the interaction between the ('global' vs. 'local') 'Locus' and ('consistent' vs. 'inconsistent') 'Consistency' Computational Dimensions.
Conversely, a key quantum complementary feature that characterizes the probabilistic interpretation of the 'uncertainty principle' (e.g., as well as the currently assumed "collapse" of the probability wave function) was explained based on the *'computational exhaustiveness'* arising from the computation of both the 'consistent' and 'inconsistent' aspects (or levels) of the Computational Framework Dimension's *'frame'* or *'object'* levels; Thus, for instance, both the 'consistent' and 'inconsistent' aspects of the (Framework Dimension's) 'frame' level (e.g., which comprise the quantum measurements of 'space' and 'energy', respectively) exhaustively describe the entire spectrum of this 'frame' computation: if we opt to increase the accuracy of the subatomic 'spatial' ('frame-consistent') measurement, then we necessarily decrease the computational accuracy of its converse (exhaustive) 'energy' (e.g., 'frame-inconsistent') measure, etc.
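The 'computational exhaustiveness' trade-off described above can be illustrated with a deliberately minimal numerical toy (the fixed accuracy 'budget' is an assumption of this sketch, not a quantity defined in the text):

```python
# Toy sketch of 'computational exhaustiveness': the 'consistent' and
# 'inconsistent' measures of the 'frame' level are assumed to exhaust
# a fixed accuracy budget, so sharpening the 'space' (frame-consistent)
# measure necessarily blurs the 'energy' (frame-inconsistent) measure.
BUDGET = 1.0  # hypothetical total accuracy per 'frame' computation

def energy_accuracy(spatial_accuracy: float) -> float:
    """Accuracy left for the 'energy' measure once 'space' takes its share."""
    assert 0.0 <= spatial_accuracy <= BUDGET
    return BUDGET - spatial_accuracy

print(energy_accuracy(0.9))   # high spatial accuracy leaves little for energy
```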

Indeed, such CUFT reformalization of the key quantum and relativistic laws and empirical phenomena as arising from the singular (rapid series of) USCF's interrelated (secondary) computational measures (e.g., of the four basic quantum and relativistic physical features of 'space', 'time', 'energy' and 'mass') has led to the formulation of a singular *'Universal Computational Formula'* which was hypothesized to underlie- harmonize- and broaden- the current quantum and relativistic models of physical reality:

c^{2}/h x י = (s x e) / (t x m)

wherein the left side of this singular hypothetical Universal Computational Formula represents the (abovementioned) universal rate of computation by the hypothetical Universal Computational Principle, whereas the right side represents the 'integrative-complementary' relationships between the four basic physical features of 'space' (s), 'time' (t), 'energy' (e) and 'mass' (m) (e.g., as comprising different computational combinations of the three (abovementioned) Computational Dimensions of 'Framework', 'Consistency' and 'Locus');

Note that on both sides of this Universal Computational Formula there is a coalescing of the basic quantum and relativistic computational elements – such that the rate of Universal Computation is given by the maximal degree of (inter-USCF's relativistic) change 'c^{2}' divided by the minimal degree of (inter-USCF's quantum) change '*h*'; Likewise, the right side of this Universal Computational Formula meshes together both quantum and relativistic computational relationships – such that it combines the relativistic ratios of space and time (s/t) and energy and mass (e/m) together with the quantum (computational) complementary relationships between 'space' and 'energy', and 'time' and 'mass'; Significantly, it was suggested that the three (above mentioned apparent) principle differences between quantum and relativistic theories, namely: *'probabilistic' vs. 'positivistic'* models of physical reality, *'simultaneous-entanglement' vs. 'non-simultaneous causality'* and *'single-' vs. 'multiple-' spatial-temporal modeling*, can be explained (in a satisfactory manner) based on the new (hypothetical) CUFT model (represented by the Universal Computational Formula);

Finally, there may be important theoretical implications to this (new) hypothetical CUFT;

First, instead of ‘space’, ‘energy’, ‘mass’ and ‘time’ existing as *"independent-physical”* properties in the universe they may arise as *'secondary integrated computational properties'* (e.g., ‘object’/’frame’ x ‘consistent’/’inconsistent’ x ‘global’/’local’) of a singular conceptually higher-ordered 'D2' computed USCF’s series…

Second, based on the 'Duality Principle' postulate underlying the CUFT, which proves the conceptual computational constraint set upon the determination of any "causal-physical" relationship between any two (hypothetical) 'x' and 'y' elements (at the 'di1' computational level), we are forced to recognize the existence of a conceptually higher-ordered 'D2' computational level which can compute only the "co-occurrences" of any two or more hypothetical spatial-temporal events or phenomena etc. This means that the CUFT's hypothesized 'D2' computation of a series of (extremely rapid) USCF's does not leave any room for the existence of any (direct or indirect) "causal-physical" 'x→y' relationship/s, but instead points at the singular conceptually higher-ordered D2 A-Causal Computation which computes the co-occurrences of certain related phenomena, factors or events…

Third, an application of one of the three theoretical postulates underlying this novel CUFT, namely the 'Duality Principle', to other potential 'Self-Referential Ontological Computational Systems' (SROCS) – including 'Darwin's Natural Selection Principle', 'Gödel's Incompleteness Theorem' (e.g., and Hilbert's "failed" 'Mathematical Program'), and Neuroscience's (currently assumed) 'materialistic-reductionistic' working hypothesis (Bentwich, 2003a, 2003b, 2003c, 2004, 2006a, 2006b) – may open the door for a potential reformalization of these scientific paradigms in a way that is compatible with the novel computational Duality Principle and its ensuing CUFT.