The Monte Carlo Techniques and the Complex Probability Paradigm

The concept of mathematical probability was established in 1933 by Andrey Nikolaevich Kolmogorov, who defined a system of five axioms. This system can be enhanced to encompass the set of imaginary numbers by adding three novel axioms. As a result, any random experiment can be executed in the complex probability set C, which is the sum of the real probability set R and the imaginary probability set M. We aim here to incorporate supplementary imaginary dimensions into the random experiment occurring in the "real" laboratory in R and therefore to compute all the probabilities in the sets R, M, and C. Accordingly, the probability in the whole set C = R + M is always equal to one, independently of the distribution of the input random variable in R; subsequently, the output of the stochastic experiment in R can be determined absolutely in C. This is a consequence of the fact that the probability in C is computed after subtracting the chaotic factor from the degree of our knowledge of the nondeterministic experiment. We will apply this innovative paradigm to the well-known Monte Carlo techniques and to their random algorithms and procedures in a novel way.


Introduction
"Thus, joining the rigor of the demonstrations of science to the uncertainty of fate, and reconciling these two seemingly contradictory things, it can, taking its name from both, appropriately arrogate to itself this astonishing title: the geometry of chance." (Blaise Pascal)

"You believe in the God who plays dice, and I in complete law and order." (Albert Einstein, Letter to Max Born)

"Chance is the pseudonym of God when He did not want to sign." (Anatole France)

"There is a certain Eternal Law, to wit, Reason, existing in the mind of God and governing the whole universe." (Saint Thomas Aquinas)

Regarding some applications of the novel established model, and as subsequent work, it can be applied to any nondeterministic experiment using Monte Carlo algorithms, whether in the continuous or in the discrete case.
Moreover, compared with existing literature, the major contribution of the current chapter is to apply the innovative complex probability paradigm to the techniques and concepts of the probabilistic Monte Carlo simulations and algorithms.
The next figure displays the major aims and purposes of the complex probability paradigm (CPP) (Figure 1).

The original Andrey Nikolaevich Kolmogorov system of axioms
The simplicity of Kolmogorov's system of axioms may be surprising [1-14]. Let E be a collection of elements {E_1, E_2, ...} called elementary events, and let F be a set of subsets of E called random events. The five axioms for a finite set E are:

Axiom 1: F is a field of sets.
Axiom 2: F contains the set E.
Axiom 3: A nonnegative real number Prob(A), called the probability of A, is assigned to each set A in F. We always have 0 ≤ Prob(A) ≤ 1.
Axiom 4: Prob(E) equals 1.
Axiom 5: If A and B have no elements in common, the number assigned to their union is Prob(A ∪ B) = Prob(A) + Prob(B).

We also have the conditional probability Prob(A ∩ B) = Prob(A) × Prob(B/A) = Prob(B) × Prob(A/B). If A and B are independent, then Prob(A ∩ B) = Prob(A) × Prob(B). Moreover, we can generalize and say that for N disjoint (mutually exclusive) events A_1, A_2, ..., A_j, ..., A_N (for 1 ≤ j ≤ N), we have the following additivity rule: Prob(A_1 ∪ A_2 ∪ ... ∪ A_N) = Σ_{j=1}^{N} Prob(A_j). And for N independent events A_1, A_2, ..., A_j, ..., A_N (for 1 ≤ j ≤ N), we have the following product rule: Prob(A_1 ∩ A_2 ∩ ... ∩ A_N) = Π_{j=1}^{N} Prob(A_j).

Adding the imaginary part M
Now, we can add to this system of axioms an imaginary part such that:

Axiom 6: Let P_m = i × (1 − P_r) be the probability of an associated complementary event in M (the imaginary part) to the event A in R (the real part). It follows that P_r + P_m/i = 1, where i is the imaginary number with i = √(−1), that is, i² = −1.
Axiom 7: We construct the complex number or vector Z = P_r + P_m = P_r + i(1 − P_r) having a norm |Z| such that |Z|² = P_r² + (P_m/i)².
Axiom 8: Let Pc denote the probability of an event in the complex probability universe C, where C = R + M. We say that Pc is the probability of an event A in R with its associated event in M such that Pc² = (P_r + P_m/i)² = |Z|² − 2iP_rP_m, and it is always equal to 1.

We can see that, by taking into consideration the set of imaginary probabilities, we have added three new and original axioms, and consequently the system of axioms defined by Kolmogorov has been expanded to encompass the set of imaginary numbers.
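To make Axioms 6-8 concrete, here is a minimal numeric sketch in Python (the chapter's own computations use MATLAB and Visual C++; Python is used here purely for illustration). It checks that, for any real probability P_r, the complex probability Pc obtained from |Z|² − 2iP_rP_m always equals 1.

```python
def cpp_parameters(pr):
    """CPP quantities for a real probability pr in [0, 1] (Axioms 6-8)."""
    pm = 1j * (1 - pr)                     # Axiom 6: Pm = i * (1 - Pr)
    z = pr + pm                            # Axiom 7: Z = Pr + Pm
    dok = pr**2 + (pm / 1j).real**2        # DOK = |Z|^2 = Pr^2 + (Pm/i)^2
    chf = (2j * pr * pm).real              # Chf = 2i*Pr*Pm = -2*Pr*(1 - Pr)
    pc_squared = dok - chf                 # Axiom 8: Pc^2 = |Z|^2 - 2i*Pr*Pm
    return z, dok, chf, pc_squared

# Whatever Pr is, Pc^2 (and hence Pc) is always 1.
for pr in (0.0, 0.25, 0.5, 0.75, 1.0):
    z, dok, chf, pc2 = cpp_parameters(pr)
    assert abs(pc2 - 1.0) < 1e-12
```

For instance, at P_r = 0.25 the sketch gives DOK = 0.625 and Chf = −0.375, whose difference is exactly 1.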

A brief interpretation of the novel paradigm
To summarize the novel paradigm: in the real probability universe R, our degree of certain knowledge is undesirably imperfect and hence unsatisfactory; we thus extend our analysis to the set C of complex numbers, which incorporates the contributions of both the set of real probabilities R and the complementary set of imaginary probabilities M. This yields an absolute and perfect degree of our knowledge in the probability universe C = R + M, because Pc = 1 constantly. As a matter of fact, working in the universe C of complex probabilities gives way to a sure forecast of any stochastic experiment, since in C we remove and subtract the measured chaotic factor from the computed degree of our knowledge. This generates in the universe C a probability equal to 1. Many applications, taking into consideration numerous continuous and discrete probability distributions in my 14 previous research papers, confirm this hypothesis and innovative paradigm. The extended Kolmogorov axioms (EKA) or the complex probability paradigm (CPP) can be shown and summarized in the next illustration (Figure 2).

The divergence and convergence probabilities
Let R_E be the exact result of the stochastic phenomenon or of a simple or multidimensional integral that is not always possible to compute by the ordinary procedures of probability theory, by deterministic numerical means, or by calculus [1-14]. And let R_A be the approximate result of the phenomenon or integral calculated by the Monte Carlo techniques. The relative error of the Monte Carlo method is Rel. Error = |R_A − R_E| / R_E and is always between 0% and 100%; therefore, the relative error is always between 0 and 1. Hence, we define the real probability in the set R by P_r = 1 − |R_A − R_E| / R_E = the probability of Monte Carlo method convergence in R. Moreover, P_m = i × |R_A − R_E| / R_E = the probability of Monte Carlo method divergence in the imaginary complementary probability set M, since it is the imaginary complement of P_r. Consequently, P_m/i = |R_A − R_E| / R_E = the relative error of the Monte Carlo method = the probability of Monte Carlo method divergence in R, since it is the real complement of P_r.
In the case where 0 ≤ R_A ≤ 2R_E, we have 0 ≤ P_r ≤ 1, and we deduce also that 0 ≤ P_m/i ≤ 1. If R_A = 0 or R_A = 2R_E, that means before the beginning of the simulation, then P_r = 0 and P_m = i. And if R_A = R_E, that means at the end of the Monte Carlo simulation, then P_r = 1 and P_m = 0. Hence, P_r = Re(Z) is the real part of Z and P_m = i × Im(Z) is the imaginary part of Z. That means that the complex random vector Z is the sum in C of the real probability of convergence in R and of the imaginary probability of divergence in M.

If R_A = 0 or R_A = 2R_E (before the simulation begins), then P_r = 0, P_m/i = 1, and Z = i. If R_A = R_E/2 or R_A = 3R_E/2 (at the middle of the simulation), then P_r = 0.5, P_m/i = 0.5, and Z = 0.5 + 0.5i.
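The mapping from the Monte Carlo results R_A and R_E to the convergence and divergence probabilities can be sketched as follows (a hypothetical Python helper for illustration, not part of the chapter's own programs):

```python
def convergence_probabilities(ra, re):
    """Real convergence probability Pr and divergence probability Pm/i,
    given the approximate result ra and the exact result re,
    assuming re > 0 and 0 <= ra <= 2*re."""
    rel_error = abs(ra - re) / re     # relative error, between 0 and 1
    pr = 1 - rel_error                # probability of convergence in R
    pm_over_i = rel_error             # probability of divergence (real complement of Pr)
    return pr, pm_over_i

assert convergence_probabilities(0.0, 10.0) == (0.0, 1.0)    # before the simulation
assert convergence_probabilities(5.0, 10.0) == (0.5, 0.5)    # middle of the simulation
assert convergence_probabilities(10.0, 10.0) == (1.0, 0.0)   # convergence reached
assert convergence_probabilities(20.0, 10.0) == (0.0, 1.0)   # RA = 2*RE also gives Pr = 0
```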

The degree of our knowledge, DOK
We have DOK = |Z|² = P_r² + (P_m/i)² = 1 − 2P_r(1 − P_r). From CPP we have 0.5 ≤ DOK ≤ 1. If DOK = 0.5, then P_r = 0.5, and solving the second-degree equation for R_A/R_E gives R_A = R_E/2 or R_A = 3R_E/2. That means that DOK is minimum when the approximate result R_A is equal to half of the exact result R_E if 0 ≤ R_A ≤ R_E, or when the approximate result is equal to three halves of the exact result if R_E ≤ R_A ≤ 2R_E, which means at the middle of the simulation.
In addition, if DOK = 1, then P_r = 0 or P_r = 1, that is, R_A = 0, R_A = 2R_E, or R_A = R_E, and vice versa. That means that DOK is maximum when the approximate result R_A is equal to 0 or to 2R_E (before the beginning of the simulation) and when it is equal to the exact result R_E (at the end of the simulation). We can deduce that we have perfect and total knowledge of the stochastic experiment before the beginning of the Monte Carlo simulation, since no randomness has been introduced yet, as well as at the end of the simulation, after the convergence of the method to the exact result.

The chaotic factor, Chf
We have Chf = 2iP_rP_m. Since P_m = i(1 − P_r), this gives Chf = −2P_r(1 − P_r). From CPP we have −0.5 ≤ Chf ≤ 0, and if Chf = −0.5, then P_r = 0.5, that is, R_A = R_E/2 or R_A = 3R_E/2. That means that Chf is minimum when the approximate result R_A is equal to half of the exact result R_E if 0 ≤ R_A ≤ R_E, or when the approximate result is equal to three halves of the exact result if R_E ≤ R_A ≤ 2R_E, which means at the middle of the simulation.
In addition, if Chf = 0, then P_r = 0 or P_r = 1, that is, R_A = 0, R_A = 2R_E, or R_A = R_E, and conversely. That means that Chf is equal to 0 when the approximate result R_A is equal to 0 or to 2R_E (before the beginning of the simulation) and when it is equal to the exact result R_E (at the end of the simulation).

The magnitude of the chaotic factor, MChf
We have MChf = |Chf| = 2P_r(1 − P_r), with 0 ≤ MChf ≤ 0.5. If MChf = 0.5, then P_r = 0.5, that is, R_A = R_E/2 or R_A = 3R_E/2, and vice versa. That means that MChf is maximum when the approximate result R_A is equal to half of the exact result R_E if 0 ≤ R_A ≤ R_E, or when the approximate result is equal to three halves of the exact result if R_E ≤ R_A ≤ 2R_E, which means at the middle of the simulation. This implies that the magnitude of the chaos (MChf) introduced by the random variables used in the Monte Carlo method is maximum at the halfway point of the simulation.
In addition, if MChf = 0, then P_r = 0 or P_r = 1, and conversely. That means that MChf is minimum and equal to 0 when the approximate result R_A is equal to 0 or to 2R_E (before the beginning of the simulation) and when it is equal to the exact result R_E (at the end of the simulation). We can deduce that the magnitude of the chaos in the stochastic experiment is null before the beginning of the Monte Carlo simulation, since no randomness has been introduced yet, as well as at the end of the simulation, after the convergence of the method to the exact result, when randomness has finished its task in the stochastic Monte Carlo method and experiment.
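The behavior of DOK, Chf, and MChf described in the last three sections can be checked numerically. The sketch below (an illustrative Python helper, not from the chapter's programs) expresses all three as functions of P_r, using DOK = P_r² + (1 − P_r)², Chf = −2P_r(1 − P_r), and MChf = |Chf|.

```python
def dok_chf_mchf(pr):
    """DOK, Chf, and MChf as functions of the real convergence probability pr."""
    dok = pr**2 + (1 - pr)**2        # degree of our knowledge, |Z|^2
    chf = -2 * pr * (1 - pr)         # chaotic factor
    mchf = abs(chf)                  # magnitude of the chaotic factor
    return dok, chf, mchf

# Before the simulation (pr = 0) and at convergence (pr = 1): DOK = 1, Chf = MChf = 0.
for pr in (0.0, 1.0):
    dok, chf, mchf = dok_chf_mchf(pr)
    assert dok == 1.0 and chf == 0.0 and mchf == 0.0

# Middle of the simulation (pr = 0.5): DOK is minimal, the chaos is maximal.
dok, chf, mchf = dok_chf_mchf(0.5)
assert dok == 0.5 and chf == -0.5 and mchf == 0.5

# DOK - Chf = DOK + MChf = 1 holds at every point of the simulation.
for pr in (0.0, 0.1, 0.5, 0.9, 1.0):
    dok, chf, mchf = dok_chf_mchf(pr)
    assert abs((dok - chf) - 1.0) < 1e-12 and abs((dok + mchf) - 1.0) < 1e-12
```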

The probability Pc in the probability set C = R + M
We have Pc² = (P_r + P_m/i)² = |Z|² − 2iP_rP_m = DOK − Chf = DOK + MChf = 1; hence, Pc = 1 always. This is due to the fact that in C we have subtracted, in the equation above, the chaotic factor Chf from our knowledge DOK, and therefore we have eliminated the chaos caused and introduced by all the random variables and stochastic fluctuations that lead to approximate results in the Monte Carlo simulation in R. Therefore, since in C we always have R_A = R_E, the Monte Carlo simulation, which is a stochastic method by nature in R, becomes after applying the CPP a deterministic method in C, since the probability of convergence of any random experiment in C is constantly and permanently equal to 1 for any number of iterations N.

The rates of change of the probabilities in R, M, and C
Since P_r = 1 − |R_A − R_E|/R_E, we have dP_r/dR_A = 1/R_E if 0 ≤ R_A ≤ R_E and dP_r/dR_A = −1/R_E if R_E ≤ R_A ≤ 2R_E.

That means that the slope of the probability of convergence in R, or its rate of change, is constant and positive if 0 ≤ R_A ≤ R_E and constant and negative if R_E ≤ R_A ≤ 2R_E, and it depends only on R_E. Hence, we have a constant increase in P_r (the convergence probability) as a function of the number of iterations N as R_A increases from 0 to R_E and as R_A decreases from 2R_E to R_E, until P_r reaches the value 1, that is, until the random experiment converges to R_E.

Similarly, d(P_m/i)/dR_A = −1/R_E if 0 ≤ R_A ≤ R_E and d(P_m/i)/dR_A = 1/R_E if R_E ≤ R_A ≤ 2R_E. That means that the slopes of the probabilities of divergence in R and M, or their rates of change, are constant and negative if 0 ≤ R_A ≤ R_E and constant and positive if R_E ≤ R_A ≤ 2R_E, and they depend only on R_E. Hence, we have a constant decrease in P_m/i and P_m (the divergence probabilities) as functions of the number of iterations N as R_A increases from 0 to R_E and as R_A decreases from 2R_E to R_E, until P_m/i and P_m reach the value 0, that is, until the random experiment converges to R_E.

Additionally, the modulus of the slope of the complex probability vector Z in C, or of its rate of change, is constant and depends only on R_E. Hence, we have a constant increase in Re(Z) and a constant decrease in Im(Z) as functions of the number of iterations N as Z goes from (0, i) at N = 0 to (1, 0) at the end of the simulation, that is, until Re(Z) = P_r reaches the value 1 and the random experiment converges to R_E.

Furthermore, since Pc² = DOK − Chf = 1, Pc is constantly equal to 1 for every value of R_A, of R_E, and of the number of iterations N, that is, for any stochastic experiment and for any Monte Carlo simulation. So we conclude that in C = R + M we have complete and perfect knowledge of the random experiment, which has now become a deterministic one, since the extension into the complex probability plane C defined by the CPP axioms has changed all stochastic variables into deterministic ones.
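The piecewise-constant slopes of P_r can be demonstrated numerically; the sketch below (illustrative Python, with an arbitrary R_E = 4 chosen as an assumption) checks that finite-difference slopes equal +1/R_E on the rising branch and −1/R_E on the falling branch.

```python
# Pr as a function of RA is piecewise linear:
# slope +1/RE on [0, RE], slope -1/RE on [RE, 2*RE].
def pr_of_ra(ra, re):
    return 1 - abs(ra - re) / re

re = 4.0
h = 0.5  # finite-difference step

# Rising branch (0 <= RA <= RE): slope is constantly +1/RE.
slopes_up = [(pr_of_ra(ra + h, re) - pr_of_ra(ra, re)) / h for ra in (0.0, 1.0, 2.0, 3.0)]
assert all(abs(s - 1 / re) < 1e-12 for s in slopes_up)

# Falling branch (RE <= RA <= 2*RE): slope is constantly -1/RE.
slopes_down = [(pr_of_ra(ra + h, re) - pr_of_ra(ra, re)) / h for ra in (4.0, 5.0, 6.0, 7.0)]
assert all(abs(s + 1 / re) < 1e-12 for s in slopes_down)
```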

The new paradigm parameter evaluation
We can infer from what has been developed earlier the following: the real probability of convergence P_r(N) increases from P_r(0) = 0 to P_r(N_C) = 1 as the number of iterations N grows. The degree of our knowledge DOK(N) is equal to 1 when P_r(N) = P_r(0) = 0 and when P_r(N) = P_r(N_C) = 1. The chaotic factor Chf(N) and its magnitude MChf(N) are null when P_r(N) = P_r(0) = 0 and when P_r(N) = P_r(N_C) = 1. At any iteration number N, with 0 ≤ N ≤ N_C, the probability calculated in the set C of complex probabilities is Pc²(N) = DOK(N) − Chf(N) = DOK(N) + MChf(N) = 1, hence Pc(N) = 1. Thus, the prediction in the set C of the probabilities of convergence of the random Monte Carlo methods is always certain.
Let us consider afterward a multidimensional integral and a stochastic experiment to simulate the Monte Carlo procedures and to quantify, to draw, as well as to visualize all the prognostic and CPP parameters.

The flowchart of the prognostic model of Monte Carlo techniques and CPP
The flowchart that follows illustrates all the procedures of the elaborated prognostic model of CPP.

Simulation of the new paradigm
Note that all the numerical values found in the simulations of the new paradigm, for any number of iteration cycles N, were computed using the 64-bit MATLAB version 2020 software and compared with the values found by Microsoft Visual C++ programs. Additionally, the reader should be aware of truncation and rounding errors, since we represent all numerical values with at most five significant digits and since we are using Monte Carlo techniques of simulation and integration, which yield approximate results under the influence of stochastic fluctuations. We have considered for this purpose a high-capacity computer system: a workstation with parallel microprocessors, a 64-bit operating system, and 64 GB of RAM.
DOI: http://dx.doi.org/10.5772/intechopen.93048

The continuous random case: a four-dimensional multiple integral
The Monte Carlo technique of integration can be summarized by the following equation: the integral of a function f over a region of volume V is approximated by I ≈ V × (1/N) × Σ_{j=1}^{N} f(X_j), where X_1, ..., X_N are N uniform random points drawn in that region. Let us consider here the multidimensional integral of the following function:

Forecasting in Mathematics -Recent Advances, New Perspectives and Applications
R_A = (V/N) × Σ_{j=1}^{N} x_j y_j z_j w_j, with 1 ≤ N ≤ N_C, after applying the Monte Carlo method. Furthermore, the four figures (Figures 3-6) illustrate and prove the increasing convergence of the Monte Carlo simulation and technique to the exact result R_E.
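As an illustrative sketch of this continuous case, the following Python routine estimates a four-dimensional integral by the Monte Carlo averaging formula above. The chapter does not restate the integration domain at this point, so the sketch assumes the integrand f(x, y, z, w) = xyzw over the unit hypercube [0, 1]^4 (volume V = 1), whose exact value is (1/2)^4 = 0.0625; the sample count and seed are also illustrative assumptions.

```python
import random

def monte_carlo_integral_4d(f, n, seed=0):
    """Estimate the integral of f over the unit hypercube [0,1]^4 by averaging
    f at n uniform random points (the hypercube has volume V = 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x, y, z, w = rng.random(), rng.random(), rng.random(), rng.random()
        total += f(x, y, z, w)
    return total / n   # R_A = (V/n) * sum of the sampled values

f = lambda x, y, z, w: x * y * z * w   # exact integral over [0,1]^4 is (1/2)^4
r_e = 0.0625
r_a = monte_carlo_integral_4d(f, 200_000)
rel_error = abs(r_a - r_e) / r_e
p_r = 1 - rel_error       # CPP probability of convergence at this iteration count
assert rel_error < 0.05   # with 200,000 samples the estimate is within a few percent
```

Increasing n drives R_A toward R_E, so P_r climbs toward 1, exactly the convergence behavior the CPP parameters track.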

The discrete random case: the matching birthday problem
An interesting problem that can be solved using simulation is the famous birthday problem. Suppose that in a room of n persons, each of the 365 days of the year (not a leap year) is equally likely to be someone's birthday. It can be proved from the theory of probability and contrary to intuition that only 23 persons need to be present for the probability to be better than fifty-fifty that at least two of them will have the same birthday.
Many people are interested in checking the theoretical proof of this statement, so we will demonstrate it briefly before simulating the problem. After someone is asked about his or her birthday, the probability that the next person asked will not have the same birthday is 364/365. The probability that the third person's birthday will not match those of the first two people is 363/365. It is well known that the probability of two independent and successive events happening is the product of the probabilities of the separate events. In general, the probability that all n persons asked have different birthdays is (364/365) × (363/365) × ... × ((365 − n + 1)/365) = 365! / ((365 − n)! × 365^n). The probability of at least one match among the n persons is 1 minus this value: P(n) = 1 − 365! / ((365 − n)! × 365^n), which shows that with 23 persons the chances are 50.7%, and with 55 persons the chances are 98.6%, that is, it is almost theoretically certain that at least two out of 55 people will have the same birthday. The table gives the theoretical probabilities of matching birthdays for a selected number of people n (Table 1).
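The theoretical probability just derived can be computed directly; this short Python routine (an illustrative sketch) multiplies the successive no-match factors (365 − k)/365 and reproduces the 50.7% and 98.6% figures quoted above.

```python
def birthday_match_probability(n):
    """Theoretical probability that at least two of n people share a birthday
    (365 equally likely days, no leap year)."""
    p_no_match = 1.0
    for k in range(n):
        p_no_match *= (365 - k) / 365   # person k+1 avoids the k previous birthdays
    return 1 - p_no_match

assert birthday_match_probability(23) > 0.5            # 23 people already suffice
assert round(birthday_match_probability(23), 3) == 0.507
assert round(birthday_match_probability(55), 3) == 0.986
assert birthday_match_probability(366) == 1.0          # pigeonhole principle
```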
Without using probability theory, we can write a routine that uses the random number generator to compute the approximate chances for groups of n persons. Obviously, what is needed here is to choose n random integers from the set of integers {1, 2, 3, ..., 365} and to check whether there is a match. When we repeat this experiment a large number of times, we can afterward calculate the probability of at least one match in any gathering of n persons. Note that if n ≥ 366, then P(matching birthdays) = 1 by the famous pigeonhole principle. Furthermore, the four figures (Figures 8-11) illustrate and prove the increasing convergence of the Monte Carlo simulation to the theoretical results.
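The simulation routine described above can be sketched in Python as follows (the chapter's own simulations use MATLAB and Visual C++; the trial count and seed here are illustrative assumptions):

```python
import random

def simulate_birthday_match(n, trials=100_000, seed=42):
    """Estimate by Monte Carlo the probability that at least two of n people
    share a birthday, drawing n random integers from {1, ..., 365} per trial."""
    rng = random.Random(seed)
    matches = 0
    for _ in range(trials):
        birthdays = [rng.randint(1, 365) for _ in range(n)]
        if len(set(birthdays)) < n:   # a duplicate means at least one match
            matches += 1
    return matches / trials

estimate = simulate_birthday_match(23)
assert abs(estimate - 0.507) < 0.02   # close to the theoretical 50.7%
```

As the number of trials grows, the estimate R_A converges to the theoretical value R_E, which is exactly the convergence the CPP parameters quantify.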

The cubes of complex probability
Table 1. Some theoretical probabilities of matching birthdays for n people, where 1 ≤ n ≤ 365.

In Figure 13 and in the first cube, we simulate the degree of our knowledge DOK(N) and the chaotic factor Chf(N) as functions of the iterations N for the problem of matching birthdays. We can notice that point K (DOK = 0.5, Chf = −0.5, N = 375,000 iterations) is the minimum of all these curves. We can notice also that point L has the coordinates (DOK = 1, Chf = 0, N = N_C = 750,000 iterations). Additionally, the three points J, K, and L correspond to the same points that exist in Figure 12.

In Figure 14 and in the second cube, we simulate the probability of convergence P_r(N) and its complementary real probability of divergence P_m(N)/i as functions of the iterations N for the problem of matching birthdays. If we project Pc²(N) = P_r(N) + P_m(N)/i = 1 = Pc(N) onto the plane N = 0 iterations, we get the line in cyan. The starting point of this line is the point (P_r = 0, P_m/i = 1), and the final point is (P_r = 1, P_m/i = 0). The graph of P_r(N) in the plane P_r(N) + P_m(N)/i = 1 is represented by the red curve, and the graph of P_m(N)/i in the same plane is represented by the blue curve. We can notice how important point K is; it is the intersection of the blue and red graphs, where P_r(N) = P_m(N)/i = 0.5 at N = 375,000 iterations. Additionally, the three points J, K, and L correspond to the same points that exist in Figure 12.
In Figure 15 and in the third cube, we simulate the vector of complex probabilities Z(N) in C as a function of the real probability of convergence P_r(N) = Re(Z) in R and of its complementary imaginary probability of divergence P_m(N) = i × Im(Z) in M, and as a function of the iterations N for the problem of matching birthdays. The graph of P_r(N) in the plane P_m(N) = 0 is represented by the red curve, and the graph of P_m(N) in the plane P_r(N) = 0 is represented by the blue curve. The graph of the vector of complex probabilities Z(N) = P_r(N) + P_m(N) = Re(Z) + i × Im(Z) in the plane P_r(N) = iP_m(N) + 1 is represented by the green curve. The graph of Z(N) has point J (P_r = 0, P_m = i, N = 0 iterations) as its starting point and point L (P_r = 1, P_m = 0, N = N_C = 750,000 iterations) as its end point. If we project the Z(N) curve onto the plane of complex probabilities whose equation is N = 0 iterations, we get the line in cyan, which is P_r(0) = iP_m(0) + 1. This projected line has point J (P_r = 0, P_m = i, N = 0 iterations) as its starting point and the point (P_r = 1, P_m = 0, N = 0 iterations) as its end point. We can notice how important point K is; it corresponds to P_r = 0.5 and P_m = 0.5i when N = 375,000 iterations. Additionally, the three points J, K, and L correspond to the same points that exist in Figure 12.

Perspectives and conclusion
In the current chapter, the extended and original Kolmogorov model of eight axioms (EKA) was connected and applied to the classical random Monte Carlo techniques. Thus, we have bonded Monte Carlo algorithms to the novel CPP paradigm. Accordingly, the paradigm of "complex probability" was expanded further, beyond the scope of my 14 earlier studies on this topic.
Also, as was proved and demonstrated in the original paradigm, when N = 0 (before the beginning of the random simulation) and when N = N_C (after the convergence of the Monte Carlo algorithm to the exact result), the chaotic factor (Chf and MChf) is 0 and the degree of our knowledge (DOK) is 1, since the stochastic fluctuations have either not commenced yet or have terminated their job on the random phenomenon. During the course of the nondeterministic phenomenon (N > 0), we have 0 < MChf ≤ 0.5, 0.5 ≤ DOK < 1, and −0.5 ≤ Chf < 0, and it can be noticed that throughout this entire process we continually have Pc² = DOK − Chf = DOK + MChf = 1 = Pc. This means that the simulation that seemed random and nondeterministic in the set R is now deterministic and certain in the set C = R + M, after adding the contributions of M to the experiment happening in R and thus after subtracting the chaotic factor from the degree of our knowledge. Additionally, the probabilities of convergence and divergence of the random Monte Carlo procedure that correspond to each iteration cycle N have been determined in the three probability sets C, M, and R by Pc, P_m, and P_r, respectively. Subsequently, at each value of N, the novel Monte Carlo and CPP parameters DOK, Chf, MChf, R_E, R_A, P_r, P_m, P_m/i, Pc, and Z are perfectly and surely predicted in the set of complex probabilities C, with Pc kept equal to 1 continuously. Also, referring to all the simulations and graphs shown throughout the chapter, we can visualize and quantify both the system's chaos and stochastic influences (expressed by Chf and MChf) and the certain knowledge (expressed by DOK and Pc) of Monte Carlo algorithms.
This is definitely fruitful and fascinating, and it demonstrates once again the advantages of extending the five axioms of probability of Kolmogorov and thus the benefits and novelty of this original theory in applied mathematics and prognostics, which can verily be called "the complex probability paradigm." Moreover, it is important to mention that one essential and very well-known probability distribution was considered in the current chapter, namely the discrete uniform distribution, together with a specific generator of uniform random numbers, knowing that the original CPP model can be applied to any generator of uniform random numbers that exists in the literature. This will certainly yield analogous results and conclusions and will confirm the success of the innovative theory.
As prospective and future research, we intend to develop the novel prognostic paradigm further and to apply it to a diverse set of nondeterministic events, such as other stochastic phenomena in the classical theory of probability and in stochastic processes. Additionally, we will apply CPP to the first-order reliability method (FORM) in the field of prognostics in engineering, as well as to random walk problems, which have huge consequences when applied to economics, chemistry, physics, and pure and applied mathematics.

Nomenclature

Prob: probability of any event
P_r: the probability in the real set R = the probability of convergence in R
P_m: the probability in the complementary imaginary set M that corresponds to the real probability set R = the probability of divergence in M
Pc: the probability in R of the event with its associated event in M = the probability in the set C = R + M of complex probabilities
R_E: the exact result of the random experiment
R_A: the approximate result of the random experiment
Z: complex probability number = complex random vector = sum of P_r and P_m
DOK = |Z|²: the degree of our knowledge of the stochastic experiment or system; it is the square of the norm of Z
Chf: the chaotic factor of Z
MChf: the magnitude of the chaotic factor of Z
N: the number of iteration cycles = number of random vectors
N_C: the number of iteration cycles until the convergence of the Monte Carlo method to R_E = the number of random vectors until convergence