Open access peer-reviewed chapter

Differences between Universal-Deterministic and Probabilistic Hypotheses in Operations Management Research

Written By

Roberto Sarmiento

Submitted: 26 May 2022 Reviewed: 24 June 2022 Published: 12 July 2022

DOI: 10.5772/intechopen.1000218

From the Edited Volume

Operations Management and Management Science

Edited by Fausto Pedro Garcia Marquez


Abstract

Very few papers in the operations management (OM) field have taken the themes of universal-deterministic (UD) and probabilistic hypotheses as their main topics of investigation and discussion. Our investigation continues a recent line of research that focuses on a better understanding of these critical issues. Specifically, we attempt to respond to some pointed criticisms that experts in the field have made when the topic of UD and probabilistic hypotheses has emerged in academic settings/discussions. A detailed analysis of those criticisms shows that they lack merit, thereby reinforcing our argument that distinguishing between the two types of scientific hypotheses is most important in order to advance the rigor of theoretical and empirical OM research. Ideas for future research are outlined.

Keywords

  • universal-deterministic hypotheses
  • probabilistic hypotheses
  • case study research
  • quantitative studies
  • qualitative studies

1. Introduction

Previous investigations (e.g., [1, 2]) have attempted to provide a better understanding and awareness vis-à-vis the two types of hypotheses that exist in all fields of empirical science: universal-deterministic (UD) and probabilistic. In particular, those papers emphasize the potential problems of not acknowledging the theoretical/empirical differences between these two types of scientific propositions. Continuing with this line of research, our paper deals with some objections that have been made by experts in the operations management (OM) field1. More specifically, our paper discusses why those criticisms are not justified. This article strengthens the argument about the importance of acknowledging the distinct characteristics that UD and probabilistic hypotheses possess. This topic is important, not only in operations management research, but in all areas of empirical science.

In Sections 2, 3, and 4, we present the aforementioned criticisms and explain why those objections lack justification. In Section 5, we further explore (based on Popperian logic and methodology) how falsified hypotheses can still be of practical use. Section 6 offers some concluding remarks.

2. Criticism (a): “measurement variability messes up deterministic hypotheses. This problem turns a deterministic relationship into a probabilistic one”

There is no question that variability in the measurement of individual observations is an important issue in the empirical sciences. Nevertheless, it is not accurate to say that measurement variability transforms a UD hypothesis into a probabilistic one. These are two separate issues: 1) the problems with the measurement of empirical evidence (e.g., individual observations) when testing a hypothesis of interest, and 2) the logical form and implications of UD hypotheses.

Popper (e.g., [7], p. 63) acknowledges the potential problems that exist with respect to measurement variability. His recommendation is that there should be “methodological rules” by which decisions regarding empirical evidence in support of or against a given hypothesis may be agreed upon: “… Inter-subjectively testable experiments are either to be accepted, or to be rejected in the light of counter-experiments.” Or as Dienes ([8], p. 20, his italics) puts it,

According to Popper, observation statements are finally accepted only by decision or agreement. Finally, there comes a point where everyone concerned feels the observation statement is sufficiently motivated that no one wishes to deny it. Considerable work may be needed to reach that point; and even then the decision to accept an observation may be overturned by new considerations. […] In the end we must decide which observation statements we will accept. The decision is fallible and amounts to tentatively accepting a low-level empirical hypothesis which describes the effect: For example, accepting an observation statement amounts to accepting a hypothesis that “Peter is an extrovert”, or “This extrovert was asleep at 7 am” and so on.

To illustrate Popper’s point, we recall the story of the experiments and events that corroborated Einstein’s general theory of relativity. According to Folsing ([9], p. 443),

The numerical values obtained were 1.98 ± 0.12 seconds of arc for the Sobral pictures and 1.61 ± 0.30 seconds of arc for the pictures from Principe (which had been impaired by clouds). Both results ruled out the “Newtonian” value; their mean value was almost exactly equal to Einstein’s prediction of 1.74 seconds of arc.

This famous episode in the history of science shows that irrespective of the variability/errors that sometimes are inevitable when measuring empirical evidence, scientists can still develop methodological rules/agreements in order to judge whether the predictions/implications of a UD hypothesis have been corroborated or refuted. Put differently, the variability in the measurement of empirical evidence does not turn a UD hypothesis (e.g., Einstein’s) into a probabilistic one.
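
To make this decision procedure concrete, the sketch below (Python) applies a simple two-sigma rule to the eclipse figures quoted above. The rule itself, and the Newtonian half-deflection value of about 0.87 seconds of arc, are illustrative assumptions on our part rather than the expeditions' actual procedure; the point is only that, once such a methodological convention is agreed upon, noisy measurements can still deliver a clear verdict on a UD prediction.

```python
# Illustrative decision rule (our assumption): a point prediction is treated as
# compatible with a measurement if it lies within K standard errors of it.
measurements = [
    ("Sobral", 1.98, 0.12),     # value, standard error (seconds of arc), from the quote above
    ("Principe", 1.61, 0.30),
]
predictions = {
    "Einstein": 1.74,           # prediction quoted in the text
    "Newtonian": 0.87,          # commonly cited half-deflection value (our addition)
}
K = 2  # the agreed "methodological rule"

for site, value, sigma in measurements:
    for theory, predicted in predictions.items():
        verdict = "compatible" if abs(value - predicted) <= K * sigma else "ruled out"
        print(f"{site}: {theory} ({predicted}) -> {verdict} "
              f"(deviation {abs(value - predicted):.2f} vs {K}*sigma = {K * sigma:.2f})")
```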

We now offer a more recent example. Overbye [10] reported that in September of 2011, a group of scientists announced that they had measured a batch of subatomic particles (neutrinos) traveling faster than the speed of light. This appeared to falsify the UD law that negates that specific occurrence (“no object can travel faster than the speed of light”). However, Overbye then quoted a scientist as saying that “[T]he evidence is beginning to point toward the (results showing falsifying evidence of the UD law) being an artifact of the measurement” (see also [11] for more reports about the same episode).

The neutrino story was very interesting to follow because the whole process – from the initial claims of falsification to the conclusion that errors had been made during the experiments – was consistent with the logic and methodology that Popper proposed in order to test a UD law. It also serves to illustrate once more that regardless of the potential difficulties that can be present when testing a UD hypothesis, scientists can still come to the conclusion (based on methodological rules/conventions) that a given UD proposition has been refuted or corroborated. This reinforces our point that the difficulties that might appear when testing a UD hypothesis do not turn it into a probabilistic one.

Response to criticism (a): potential measurement problems that exist when testing the implications of UD hypotheses do not turn them into probabilistic ones. Instead, scientists can/should arrive at methodological rules/agreements upon which decisions are made about empirical evidence that appears to corroborate or refute a hypothesis of interest.

3. Criticism (b): “the probabilistic versus deterministic distinction is basically a red herring,” “something that is impossible according to a deterministic hypothesis is basically the same as something that has a one-in-a-billion chance of happening according to a probabilistic hypothesis”

These criticisms arguably reflect the prevalent thinking among scientists who are familiar with the frequentist approach to probability and statistical hypothesis testing2. For the sake of argument, we will assume that this approach is an adequate way to apply Popperian falsificationism when testing a probabilistic hypothesis3. In this approach, researchers usually have to formulate a null hypothesis (Ho) and an alternative hypothesis (H1). We will use these assumptions and conventions to explain one of the theoretical/empirical differences between UD and probabilistic hypotheses.

We first discuss a situation where researchers are interested in testing whether a variable “A” (the cause) has an effect on variable “B” (the effect). We model this relationship using the usual principles and assumptions of standard linear regression analysis. We state Ho as a general proposition: “A has no noticeable effect on B” (e.g., Ho: β1 = 0). We also establish H1 as “A has a noticeable (e.g., “statistically significant”) effect on B” (e.g., H1: β1 ≠ 0). Let us also suppose that two different investigations are performed: one to test a probabilistic relationship between A and B, and the other to test its UD version. To make clear that we are dealing with two different and separate situations, we will rename the variables Ap/Bp in the case of the probabilistic relationship and Ad/Bd in the case of its UD version. We can now restate the two H1’s as “Ap is likely to have a positive and significant effect on Bp” and “the effect of Ad on Bd is positive and linear,” respectively. Figures 1 and 2 illustrate the results of the hypothetical investigations of the probabilistic and UD relationships:

Figure 1. Evidence supporting a probabilistic relationship between Ap and Bp.

Figure 2. Evidence supporting a UD relationship between Ad and Bd.

These figures show evidence supporting the alternative hypothesis (e.g., H1: “A has a noticeable and positive effect on B”) in both its probabilistic and UD forms. Based on this, the assertion that there are no practical differences between these two types of hypotheses would appear to be true. Put differently, since Figures 1 and 2 support the idea that Ho (“A has no noticeable effect on B”) in its probabilistic and UD forms has been “practically falsified” and “falsified,” respectively, it does seem as if arguing that UD and probabilistic hypotheses are different in nature is indeed a red herring.
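
To make the hypothetical investigations behind Figures 1 and 2 concrete, the following sketch (Python, with made-up data; the variable names and the use of scipy.stats.linregress are our own illustrative choices) fits a regression to a noisy, probabilistic version of the relationship and to an exact, UD version, testing both against Ho: β1 = 0. Under these assumptions, Ho is rejected in both cases, which is why the two figures, taken on their own, seem to support the "red herring" objection.

```python
# A minimal sketch (hypothetical data): both the probabilistic and the UD form
# of "A has a positive effect on B" are tested against Ho: beta1 = 0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 30)

# Probabilistic relationship (Figure 1-style data): positive trend plus noise.
y_prob = 2.0 + 1.0 * x + rng.normal(scale=2.0, size=x.size)

# UD relationship (Figure 2-style data): every observation lies exactly on the line.
y_ud = 2.0 + 1.0 * x

for label, y in [("Ap -> Bp (probabilistic)", y_prob), ("Ad -> Bd (UD)", y_ud)]:
    fit = stats.linregress(x, y)
    print(f"{label}: slope = {fit.slope:.3f}, p-value for Ho: beta1 = 0 is {fit.pvalue:.3g}")
```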

We now proceed to explain that, in spite of what was discussed previously, there are clear theoretical/empirical differences between UD and probabilistic hypotheses. To accomplish our objective, we again model a causal relationship between two different variables: C (“the cause”) and D (“the effect”). We make the same assumptions as before, with only one difference: we now suppose that there is prior corroborating evidence showing a noticeable and positive effect of C on D. We state the probabilistic version of this relationship as “Cp is likely to have a positive and noticeable effect on Dp” (i.e., Ho: β1 ≠ 0). The UD form of this relationship could be phrased along the lines of “Cd has a linear and positive effect on Dd” (e.g., Ho: β1 = 1). This is a different situation from the one analyzed in the previous paragraphs (Ho: β1 = 0). Let us also suppose that two investigations are run to examine these two relationships. Figures 3 and 4 show the results of these hypothetical investigations.

Figure 3. Evidence that “practically falsifies” a probabilistic relationship between Cp and Dp.

Figure 4. Evidence that falsifies (logically and strictly) a UD relationship between Cd and Dd.

These illustrations show one of the theoretical/empirical differences between UD and probabilistic hypotheses. Figure 3 shows evidence that the probabilistic relationship between Cp and Dp has been “practically falsified.” Likewise, Figure 4 shows evidence that the UD relationship between Cd and Dd has been falsified (logically and strictly). In short, Ho for both the probabilistic and UD forms of the causal relationship between C and D has been “practically falsified” and “falsified,” respectively.

While in both cases there appears to be undermining/falsifying evidence against Ho (when Ho establishes a noticeable and positive relationship between the variables), the theoretical/empirical implications differ. Researchers who find evidence that “practically falsifies” a probabilistic hypothesis cannot continue to affirm that there is a noticeable and positive effect of Cp on Dp; Figure 3 shows evidence of this non-relationship. Therefore, for all practical and theoretical purposes, researchers should be inclined to conclude that there is no evidence of a relationship between Cp and Dp. On the other hand, researchers who find falsifying evidence against the UD relationship between Cd and Dd should be inclined to say that even though the UD relationship has been (logically and strictly) falsified, there still appears to be a noticeable and positive relationship between Cd and Dd. In other words, for all practical and theoretical purposes, the relationship between these two variables can still be considered positive and noticeable. In short, the falsified UD relationship can now be restated as a probabilistic one: Cd is likely to have a noticeable and positive effect on Dd.
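
The asymmetry just described can be illustrated with a small sketch for the UD side of Figure 4 (hypothetical data of our own construction): a single observation off the hypothesised line is enough to falsify the UD claim logically and strictly, yet an ordinary regression fit on the same data still shows a clearly positive, noticeable relationship, which is what licenses the probabilistic restatement.

```python
# A minimal sketch (hypothetical data) of the Figure 4 situation.
import numpy as np
from scipy import stats

x = np.linspace(0.0, 10.0, 30)
d = 2.0 + 1.0 * x      # every observation on the hypothesised UD line ...
d[15] += 4.0           # ... except one, which strictly falsifies the UD claim

on_line = np.isclose(d, 2.0 + 1.0 * x)
print(f"Observations consistent with the UD hypothesis: {on_line.sum()} of {x.size}")
print(f"UD hypothesis logically/strictly falsified: {not on_line.all()}")

# The falsified UD hypothesis can still be restated probabilistically:
# the fitted slope remains clearly positive and noticeable.
fit = stats.linregress(x, d)
print(f"slope = {fit.slope:.3f}, p-value for Ho: beta1 = 0 is {fit.pvalue:.3g}")
```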

To complement the current discussion, we now offer an example of a relationship where the dependent and independent variables can be present or absent. To avoid repetition, we will now only model the situation where Ho states a prior noticeable effect of E (the cause) on F (the effect). The probabilistic form of this relationship can be phrased along the lines of “if E occurs/is present, then F is more likely to occur/be present than not.” Its UD form can be stated as “if E occurs/is present, then F will always occur/be present.” Once more, we use Ep and Fp for the probabilistic relationship and Ed and Fd for the UD one.

Let us suppose that we collect two different samples to test the two distinct hypotheses. In the case of the probabilistic relationship, let us assume that we found two cases where Ep and Fp are present, and three cases where Ep is present but Fp is absent. Figure 5 illustrates these scenarios.

Figure 5. Evidence that practically falsifies a probabilistic relationship between Ep and Fp.

In the case of a UD hypothesis, let us suppose that we found four cases where Ed and Fd are present, and one case where Ed is present but Fd is absent.

Similar to the scenario discussed before, when researchers find evidence that “practically falsifies” a hypothesis stating that “if Ep occurs/is present, then Fp is more likely to occur/be present than not,” for all pragmatic purposes such a hypothesis should be discarded, as the evidence in Figure 5 shows that in most cases it does not hold. Therefore, researchers should be inclined to conclude that there is no evidence supporting the assertion that Ep and Fp are in some way associated. On the other hand, Figure 6 shows that although the UD relationship “if Ed occurs/is present, then Fd will always occur/be present” has been logically and strictly falsified (i.e., one observation falsified this hypothesis), for all practical and theoretical purposes researchers could still affirm that Ed is likely to be a causal agent for Fd, as the evidence shows that in most cases (four to one) the expected relationship does hold. Therefore, the falsified UD hypothesis between Ed and Fd can now be restated as a probabilistic hypothesis: “if Ed occurs/is present, then Fd is more likely to occur/be present than not.”

Figure 6. Evidence that falsifies (logically and strictly) a UD relationship between Ed and Fd.
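
A short sketch (Python, using the hypothetical counts described above) summarises Figures 5 and 6: for each sample it checks whether a single counter-example falsifies the UD reading and whether the probabilistic reading ("more likely than not") still holds.

```python
# Hypothetical samples, exactly as described in the text:
# counts of F present / F absent among the cases where E is present.
samples = {
    "Ep -> Fp (probabilistic)": {"present": 2, "absent": 3},
    "Ed -> Fd (UD)":            {"present": 4, "absent": 1},
}

for label, counts in samples.items():
    total = counts["present"] + counts["absent"]
    share = counts["present"] / total
    ud_falsified = counts["absent"] > 0      # one counter-example suffices for strict falsification
    probabilistic_holds = share > 0.5        # "F is more likely to be present than not"
    print(f"{label}: F present in {share:.0%} of cases with E present; "
          f"UD form falsified: {ud_falsified}; probabilistic reading holds: {probabilistic_holds}")
```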

The previous discussions show a clear difference between UD and probabilistic hypotheses: a “practically falsified” probabilistic hypothesis (when Ho establishes a noticeable and positive relationship between the variables) has no theoretical/empirical utility. On the other hand, a (strictly and logically) “falsified” UD hypothesis could still have pragmatic/empirical utility. In Section 5, we elaborate on the potential utility of hypotheses that have been falsified.

Response to criticism (b): a “practically falsified” probabilistic hypothesis (when Ho establishes a noticeable and positive relationship between the variables), for all theoretical/practical purposes, does not have predictive power/utility. On the other hand, a falsified UD hypothesis can still have predictive power/practical utility. Therefore, it is not a red herring to say that UD and probabilistic hypotheses are different in nature.

4. Criticism (c): “Few interesting relationships in operations management can be formulated as deterministic hypotheses”

The Cambridge Dictionary online4 defines the word “interesting” as:

Someone or something that is interesting keeps your attention because he, she, or it is unusual, exciting, or has a lot of ideas.

It could be validly argued that a UD relationship (e.g., something that always occurs) is far more interesting than something that takes place, for example, in most cases (e.g., a probabilistic relationship). Nonetheless, irrespective of why a researcher may deem a causal relationship interesting, we think that all scientists should be knowledgeable about the theoretical/empirical differences between UD and probabilistic hypotheses. This is important in order to avoid inaccuracies/omissions when testing hypotheses and also when offering advice about research methodology (see [2] for a more detailed discussion about these issues in OM research).

Furthermore, while it is clear that the OM field (and arguably most of the Social Sciences5) has been dominated by what could be called “significance testing” [13], recently some scientists have also attempted to emphasize the importance and utility of UD hypotheses, for example, in the Business/Operations Management field [1, 14]. Likewise, Dienes ([8], p. 26, his italics) affirms that:

Maybe many theories in psychology could effectively be written in the form, “In certain contexts, people always use this mechanism”: “When my experimental procedure is set up in this way, all learning involves this sort of neural network.”

Therefore, if UD hypotheses are to become more prominent in the Social Sciences, then it follows that scientists should be aware of their differences vis-à-vis probabilistic hypotheses, regardless of whether scientists (in their subjective views) think them interesting or not.

Response to criticism (c): whether a scientific proposition appears to be interesting or not, it is important for all scientists to be knowledgeable vis-à-vis the differences in the logical form and theoretical/empirical implications of UD and probabilistic hypotheses.

5. To be clear: false theories can still be of use!

In Section 3, we explained that when a probabilistic hypothesis has been “practically falsified,” researchers should be inclined to conclude that there is no evidence at all of a causal relationship between an independent and a dependent variable. We also explained that when a UD hypothesis has been logically and strictly falsified, it could still be of practical use, as a noticeable causal relationship between two variables could still be identified (see Figures 4 and 6).

We cannot be certain why expert researchers (see footnote 1) still opine that the differences between UD and probabilistic hypotheses are just a red herring and unimportant. Nevertheless, we think that a contributing factor might be a lack of awareness of the fact that falsified hypotheses can still be of practical utility. It would appear as if researchers tend to equate the terms “falsified/false” with “useless,” “meaningless,” or “pointless.” To support our contention that falsified hypotheses can still be of use, we quote Popper ([15], p. 74):

… false theories often serve well enough: most formulae used in engineering or navigation are known to be false, although they may be excellent approximations and easy to handle; and they are used with confidence by people who know them to be false.

Even though Newtonian physics has been “recognized as literally false” ([8], p. 5), “… scientists needed nothing more than Newton’s equations to plot the course of the rocket that landed men on the moon” [16].

More specifically, in the OM field, the influential strategic trade-offs model [17] was put forward as a UD theory [2]. Even if empirical evidence were to falsify – logically and strictly – this proposition in the future, it would be difficult for researchers to affirm that Skinner’s core argument (“no product/service can be the best at everything”) does not reflect phenomena that occur in the marketplace6. In short, even if Skinner’s model were to be falsified, we argue that it would still be useful/have practical utility in understanding – at least in some circumstances/situations – the differences observed in the features/performance between pairs of rival products/services.

The fact that strictly and logically false/falsified (UD) hypotheses can still be of practical/empirical use should serve to strengthen our argument about the importance of acknowledging the differences that exist between UD and probabilistic hypotheses in empirical science.

6. Some concluding remarks

Investigations that deal directly with the topics of universal-deterministic and probabilistic hypotheses are not common in business management research in general, and operations management science in particular. Our hope is that this paper can contribute to a better understanding of the theoretical/empirical differences that exist between these two types of hypotheses.

References

  1. Dul J, Hak T. Case Study Methodology in Business Research. Oxford, UK and Burlington, MA: Butterworth-Heinemann; 2008
  2. Whelan G, Sarmiento R, Sprenger J. Universal-deterministic and probabilistic hypotheses in operations management research: A discussion paper. Production Planning & Control. 2018;29(16):1306-1320
  3. Ross D. I don’t Understand Quantum Physics. Southampton, UK: University of Southampton; 2018. Available from: https://www.southampton.ac.uk/~doug/quantum_physics/quantum_physics.pdf [Accessed: May 20, 2022]
  4. Eisenhardt KM. Building theories from case study research. Academy of Management Review. 1989;14(4):532-550
  5. Voss C, Tsikriktsis N, Frohlich M. Case research in operations management. International Journal of Operations and Production Management. 2002;22(2):195-219
  6. Yin RK. Case Study Research: Design and Methods. London: Sage; 2014
  7. Popper K. The Logic of Scientific Discovery. London and New York: Routledge Classics, Taylor & Francis Group; 2002
  8. Dienes Z. Understanding Psychology as a Science: An Introduction to Scientific and Statistical Inference. Basingstoke and New York: Palgrave Macmillan; 2008
  9. Folsing A. Albert Einstein: A Biography. London: Viking/Penguin Books Ltd; 1997
  10. Overbye D. The Trouble with Data That Outpaces a Theory. 2012. Available from: https://www.nytimes.com/2012/03/27/science/the-trouble-with-neutrinos-that-outpaced-einsteins-theory.html [Accessed: May 18, 2022]
  11. Cartlidge E. BREAKING NEWS: Error Undoes Faster-Than-Light Neutrino Results. 2012. Available from: https://www.science.org/content/article/breaking-news-error-undoes-faster-light-neutrino-results [Accessed: May 18, 2022]
  12. Miller D. Popper’s contributions to the theory of probability and its interpretation. In: Shearmur J, Stokes G, editors. The Cambridge Companion to Popper. Cambridge: Cambridge University Press; 2016. pp. 208-229
  13. Hartmann S, Sprenger J. Mathematics and statistics in the social sciences. In: Jarvie IC, Zamora Bonilla J, editors. The SAGE Handbook of the Philosophy of Social Sciences. London: SAGE; 2011. pp. 594-612
  14. Dul J, Hak T, Goertz G, Voss C. Necessary condition hypotheses in operations management. International Journal of Operations & Production Management. 2010;30(11):1170-1190
  15. Popper K. Conjectures and Refutations: The Growth of Scientific Knowledge. London and New York: Routledge Classics, Taylor & Francis Group; 2002
  16. Greene B. The Elegant Universe: Episode 1 - Einstein’s Dream. Available from: https://www.youtube.com/watch?v=5vUzzCoArCo [Accessed: May 20, 2022]
  17. Skinner W. Manufacturing-missing link in corporate strategy. Harvard Business Review. 1969;47(3):136-145
  18. Sarmiento R, Thürer M, Whelan G. Rethinking Skinner’s model: Strategic trade-offs in products and services. Management Research Review. 2016;39(10):1199-1213
  19. Singh PJ, Wiengarten F, Nand AA, Betts T. Beyond the trade-off and cumulative capabilities models: Alternative models of operations strategy. International Journal of Production Research. 2015;53(13):4001-4020
  20. Sarmiento R, Whelan G, Thürer M. A note on ‘beyond the trade-off and cumulative capabilities models: Alternative models of operations strategy’. International Journal of Production Research. 2018;56(12):4368-4375

Notes

  1. Ross [3] says “[I]t is often stated that, unlike classical physics, Quantum Physics is not deterministic. This statement is not really correct …”. With this assertion, Ross begins a detailed explanation of a concept that he calls “probabilistic determinism.” Later in his treatise, Ross writes that “[T]his means we do indeed have determinism, but only determinism of probability distributions of positions and momentum, as opposed to determinism of their exact values…”. “Thus, although Quantum Physics does not allow us to determine where a particular photon will land, it does allow us to determine where we will find dense and sparse regions – and in this sense it is deterministic,” Ross concludes. To reinforce his point, Ross explains that “probabilistic determination” is a viewpoint/approach that is tacitly/implicitly accepted/used in other disciplines, such as the Social Sciences. In the process of publishing OM papers that deal with the topics of UD and probabilistic hypotheses, we have received criticisms from anonymous referees that are similar to Ross’ arguments, e.g., “the probabilistic versus deterministic distinction is basically a red herring,” “something that is impossible according to a deterministic hypothesis is basically the same as something that has a one-in-a-billion chance of happening according to a probabilistic hypothesis.” Referees are supposed to have considerable expertise on the subjects that are presented in the investigations they evaluate. Therefore, it is reasonable to conclude that if expert reviewers are not aware of the differences between UD and probabilistic hypotheses, then it is possible that the rest of the field is also unaware of these themes. We also note that highly influential works that offer guidance about case study research methodology (e.g., [4, 5, 6]) do not offer a detailed discussion of the differences between UD and probabilistic hypotheses. Moreover, when these topics have been brought up in informal gatherings and discussions with colleagues in other areas of business management research, a lack of awareness/understanding of these types of scientific propositions has also been identified. We argue that these situations justify our investigation. We are quoting or paraphrasing the objections that are dealt with in the paper.
  2. For a more technical discussion on those topics, we recommend Popper [7] and Miller [12].
  3. Popper ([7], p. 183n) affirms that due to their logical form, probabilistic hypotheses are not directly falsifiable. However, he also says that scientists can arrive at “…the adoption of a methodological rule…, which makes probability hypotheses falsifiable.” In this way, probabilistic hypotheses can attain a scientific status, because once a methodological rule has been adopted, their implications can be tested empirically (i.e., their predictions can be contradicted by empirical evidence).
  4. See https://dictionary.cambridge.org/dictionary/english/interesting [accessed 20 May 2022].
  5. See for example Hartmann and Sprenger [13] for a more detailed discussion on what they call “the mathematization of the social sciences.”
  6. See [18] for a detailed discussion of strategic trade-offs between two competing products. Moreover, some authors (e.g., [19]) claim that in practice, the trade-offs model is not used. Sarmiento et al. [20] provide a detailed response to Singh et al.’s assertion.
