The primary objective of this paper is to make a case that R.A. Fisher’s objections to the decision‐theoretic framing of frequentist inference are not without merit. It is argued that this framing is congruent with the Bayesian but incongruent with the frequentist approach; it provides the former with a theory of optimal inference but misrepresents the optimality theory of the latter. Decision‐theoretic and Bayesian rules are considered optimal when they minimize the expected loss “for all possible values of θ in Θ” [∀θ∈Θ], irrespective of what the true value θ∗ [the state of Nature, i.e., the value that gave rise to the data] happens to be. In contrast, the theory of optimal frequentist inference is framed entirely in terms of the capacity of the procedure to pinpoint θ∗. The inappropriateness of the quantifier ∀θ∈Θ calls into question the relevance of admissibility as a minimal property for frequentist estimators. As a result, the pertinence of Stein’s paradox, as it relates to the capacity of frequentist estimators to pinpoint θ∗, needs to be reassessed. The paper also contrasts loss‐based errors with traditional frequentist errors, arguing that the former are attached to θ, but the latter to the inference procedure itself.
- decision theoretic inference
- Bayesian vs. frequentist inference
- Stein’s paradox
- James‐Stein estimator
- loss functions
- error probabilities
- risk functions
- complete class theorem
Wald’s decision‐theoretic framework is widely viewed as providing a broad enough perspective to accommodate and compare the frequentist and Bayesian approaches to inference, despite their well‐known differences. It is perceived as offering a neutral framing of inference that brings into focus their common features and tones down their differences; see Refs. [2–4].
“The problem in this formulation is very general. It contains the problems of testing hypotheses and of statistical estimation treated in the literature.” (p. 340)
Among the frequentist pioneers, Jerzy Neyman enthusiastically accepted this broader perspective, primarily because the concepts of decision rules and action spaces seemed to provide a better framing for his behavioristic interpretation of Neyman‐Pearson (N‐P) testing based on the accept/reject rules; see Refs. [7, 8]. Neyman’s attitude towards Wald’s framing was also adopted wholeheartedly by some of his most influential students/colleagues at Berkeley, including [9, 10]. In a foreword to a collection of Neyman’s early papers, his students/editors described Wald’s framing as follows (p. vii):
“A natural but far reaching extension of their [N‐P formulation] scope can be found in Abraham Wald’s theory of statistical decision functions.”
At the other end of the argument, Fisher rejected Wald’s framing on the grounds that it seriously distorts his rendering of frequentist statistics:
“The attempt to reinterpret the common tests of significance used in scientific research as though they constituted some kind of acceptance procedure and led to “decisions” in Wald’s sense, originated in several misapprehensions and has led, apparently, to several more.” (p. 69)
With a few exceptions, such as Refs. [13–15], Fisher’s viewpoint has been inadequately discussed and evaluated in the subsequent statistics literature. The primary aim of this paper is to revisit Fisher’s minority view by taking a closer look at the decision‐theoretic framework, with a view to reevaluating the claim that it provides a neutral framework for comparing the frequentist and Bayesian approaches. It is argued that Fisher’s view, that the decision‐theoretic framing is germane to “acceptance sampling” but misrepresents frequentist inference, is not without merit. The key argument of the discussion that follows is that the decision‐theoretic notions of loss function and admissibility are congruent with the Bayesian approach, but incongruent with both the primary objective and the underlying reasoning of the frequentist approach.
Section 2 introduces the basic elements of the decision‐theoretic setup with a view to bringing out its links to the Bayesian and frequentist approaches, calling into question the conventional wisdom concerning its neutrality. Section 3 takes a closer look at the Bayesian approach and argues that, had the decision‐theoretic apparatus not existed, Bayesians would have been forced to invent it in order to establish a theory of optimal Bayesian inference. Section 4 discusses critically the notions of loss functions and admissibility, focusing primarily on their role in giving rise to Stein’s paradox and their incompatibility with the frequentist approach. It is argued that the frequentist dimension of the notions of a loss function and admissibility is more apparent than real. Section 5 makes a case that the decision‐theoretic framework misrepresents both the primary objective and the underlying reasoning of the frequentist approach. Section 6 revisits the notion of a loss function and its dependence on “information other than the data.” It is argued that loss‐based errors are both different from and incompatible with the traditional frequentist errors, because they are attached to the unknown parameters instead of to the inference procedures themselves, as the traditional frequentist errors (type I, type II, and coverage) are.
2. The decision theoretic set‐up
2.1. Basic elements of the decision‐theoretic framing
The current decision‐theoretic set‐up has three basic elements:
1. A prespecified (parametric) statistical model Mθ(x), generically specified by:
Mθ(x) = {f(x; θ), θ∈Θ}, x∈ℝⁿ, (1)
where f(x; θ) denotes the (joint) distribution of the sample X:=(X₁, X₂, ..., Xₙ), ℝⁿ denotes the sample space, and Θ the parameter space. This model represents the stochastic mechanism assumed to have given rise to data x₀:=(x₁, x₂, ..., xₙ).
2. A decision space D, containing all mappings d(·): ℝⁿ → A, where A denotes the set of all actions available to the statistician.
3. A loss function L(·, ·): Θ×A → ℝ, which assigns a numerical loss L(θ, a) to each combination of a state of Nature θ∈Θ and an action a∈A.
The basic idea is that when the decision‐maker selects action a∈A, he/she does not know the “true” state of Nature, represented by θ∗∈Θ. However, contingent on each action a∈A, the decision maker “knows” the losses (gains and utilities) resulting from different choices of θ∈Θ. The decision maker observes data x₀, which provides some information about θ∗, and then maps each x∈ℝⁿ to a certain action a∈A, guided solely by L(θ, a).
2.2. The original Wald framing
It is important to bring out the fact that the original Wald framing was much narrower than basic elements 2 and 3 above, owing to its original objective of formalizing the Neyman‐Pearson (N‐P) approach; see Ref. . What were the key differences?
The decision (action) space A was defined exclusively in terms of subsets of the parameter space Θ. For estimation purposes, A is the set of all singleton points of Θ, and for testing, A = {Θ₀, Θ₁}, the null and alternative regions of Θ, respectively.
The original loss (weight) function was a zero‐positive function, with zero loss at the true value:
L(θ∗, a) = 0 when a = θ∗, and L(θ∗, a) > 0 when a ≠ θ∗, (2)
where θ∗ is the true value of θ in Θ. For the discussion that follows, it is important to note that Eq. (2) is nonoperational in practice because θ∗ is unknown.
The more general framing, introduced by Wald [1, 20] and broadened by Le Cam, extended the scope of the original set‐up by generalizing the notions of loss functions and decision spaces. In what follows, it is argued that these extensions created serious incompatibilities with both the objective and the underlying reasoning of frequentist inference.
In addition, it is of both historical and methodological interest to note that Wald introduced the notion of a prior distribution, π(θ), into the original decision‐theoretic machinery reluctantly, and justified it as a useful tool for proving certain technical results:
“The situation regarding the introduction of an a priori probability distribution of θ is entirely different. First, the objection can be made against it, as Neyman has pointed out, that θ is merely an unknown constant and not a variate, hence it makes no sense to speak of the probability distribution of θ. Second, even if we may assume that θ is a variate, we have in general no possibility of determining the distribution of θ and any assumptions regarding this distribution are of hypothetical character. The reason why we introduce here a hypothetical probability distribution of θ is simply that it proves to be useful in deducing certain theorems and in the calculation of the best system of regions of acceptance.” (p. 302)
2.3. A shared neutral framework?
The frequentist, Bayesian, and decision‐theoretic approaches share the notion of a statistical model Mθ(x), by viewing data x₀ as a realization of the sample X from Eq. (1).
The key differences between the three approaches are as follows:
The frequentist approach relies exclusively on Mθ(x).
The Bayesian approach adds a prior distribution, π(θ), for all θ∈Θ.
The decision‐theoretic framing revolves around a loss (gain or utility) function: L(·, ·): Θ×A → ℝ, defined for all θ∈Θ and a∈A.
The claim that the decision‐theoretic perspective provides a neutral ground is often justified on account of the loss function being a function of the sample and parameter spaces through the two universal quantifiers:
(i) “∀x∈ℝⁿ,” associated with the distribution of the sample, f(x; θ);
(ii) “∀θ∈Θ,” associated with the posterior distribution, π(θ|x₀).
The idea is that allowing for all values of x in ℝⁿ goes beyond the Bayesian perspective, which relies exclusively on the single point x = x₀. What is not obvious is whether that is sufficient to do justice to the frequentist approach. A closer scrutiny suggests that frequentist inference is misrepresented by the way both quantifiers are employed in the decision‐theoretic framing of inference.
First, the quantifier ∀x∈ℝⁿ plays only a minor role in transforming a loss function, say L(θ, θ̂(x)), into a risk function:
R(θ, θ̂) = E[L(θ, θ̂(X))] = ∫ L(θ, θ̂(x)) f(x; θ) dx, for all θ∈Θ.
This is the only place where the distribution of the sample, f(x; θ), enters the decision‐theoretic framing, and the only relevant part of the behavior of f(x; θ) is how it affects the risk function for different values of θ in Θ. In frequentist inference, however, the distribution of the sample takes center stage in the theory of optimal frequentist inference. It determines the sampling distribution of any statistic Yₙ = g(X₁, X₂, ..., Xₙ) (estimator, test statistic, or predictor) through:
F(y; θ) = P(Yₙ ≤ y) = ∫{x: g(x) ≤ y} f(x; θ) dx, for all y∈ℝ, (7)
and that, in turn, yields the relevant error probabilities that determine optimal inference procedures.
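To make the first point concrete, the following minimal Python sketch, assuming a simple Normal model with a hypothetical true value μ∗ and arbitrary numerical settings, illustrates how the distribution of the sample determines the sampling distribution of a statistic (here the sample mean) and, through it, the relevant error probabilities.

```python
import numpy as np

# Hypothetical settings: simple Normal model X_k ~ NIID(mu_star, sigma^2)
rng = np.random.default_rng(0)
mu_star, sigma, n, reps = 1.0, 2.0, 25, 100_000

# Draw 'reps' samples of size n from f(x; mu_star) and compute the statistic of interest
samples = rng.normal(mu_star, sigma, size=(reps, n))
xbar = samples.mean(axis=1)                      # sampling distribution of the sample mean

# The sampling distribution of xbar yields the relevant error probabilities,
# e.g. the probability that xbar misses mu_star by more than 0.5
print("mean of xbar:", xbar.mean())              # ~ mu_star (unbiasedness)
print("var  of xbar:", xbar.var())               # ~ sigma^2 / n = 0.16
print("P(|xbar - mu*| > 0.5):", np.mean(np.abs(xbar - mu_star) > 0.5))
```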
Second, the decision‐theoretic notion of optimality revolves around the universal quantifier “∀θ∈Θ,” rendering it congruent with the Bayesian but incongruent with the frequentist approach. To be more specific, since different risk functions often intersect over Θ, an optimal rule is usually selected after the risk function is reduced to a scalar. Two such choices of risk are the maximum risk and the Bayes risk:
sup{R(θ, θ̂): θ∈Θ} and r(π, θ̂) = ∫ R(θ, θ̂) π(θ) dθ. (8)
Hence, an obvious way to choose among different rules is to find the one that minimizes the relevant risk with respect to all possible estimators θ̂(X). In the case of Eq. (8), this gives rise to two corresponding decision rules: the minimax rule, which minimizes the maximum risk sup{R(θ, θ̂): θ∈Θ}, and the Bayes rule, which minimizes the Bayes risk r(π, θ̂).
In this sense, a decision or a Bayes rule will be considered optimal when it minimizes the relevant risk, no matter what the true state of Nature θ∗ happens to be. The last clause, “irrespective of θ∗,” constitutes a crucial caveat that is often ignored in discussions of these approaches. When viewed as a game against Nature, the decision maker selects action a from A irrespective of what value θ∗ Nature has chosen. That is, θ∗ plays no role in selecting the optimal rules, since the latter have nothing to do with the true value of θ. To avoid any misreading of this line of reasoning, it is important to emphasize that “the true value θ∗” is shorthand for saying that “data x₀ constitute a typical realization of the sample X with distribution f(x; θ∗)”; see Ref. .
This should be contrasted with the notion of optimality in frequentist inference, which gives θ∗ center stage, in the sense that it evaluates the capacity of the inference procedure to inform the modeler about θ∗; no other value of θ is relevant. According to Reid:
“A statistical model is a family of probability distributions, the central problem of statistical inference being to identify which member of the family generated the data of interest.” (p. 418)
3. The Bayesian approach
To shed further light on the affinity between the decision‐theoretic framework and the Bayesian approach, let us take a closer look at the latter.
3.1. Bayesian inference and its primary objective
A key argument in favor of the Bayesian approach is often its simplicity, in the sense that all forms of inference revolve around a single function, the posterior distribution: π(θ|x₀) ∝ π(θ)·f(x₀; θ), for all θ∈Θ. Hence, an outsider looking at the Bayesian approach might naturally surmise that its primary objective is to yield “a probabilistic ranking” (ordering) of all values of θ in Θ. According to O’Hagan:
“Having obtained the posterior density π(θ|x), the final step of the Bayesian method is to derive from it suitable inference statements. The most usual inference question is this: After seeing the data x, what do we now know about the parameter θ? The only answer to this question is to present the entire posterior distribution.” (p. 6)
The idea is that the modeling begins with an a priori probabilistic ranking based on π(θ), which is revised after observing data x₀ to derive π(θ|x₀); hence the key role of the quantifier ∀θ∈Θ. O’Hagan, echoing earlier views in Refs. [24, 25], contrasts the frequentist (classical) inferences with Bayesian inference, arguing:
“Classical inference theory is very concerned with constructing good inference rules. The primary concern of Bayesian inference, …, is entirely different. The objective is to extract information concerning θ from the posterior distribution, and to present it helpfully via effective summaries. There are two criteria in this process. The first is to identify interesting features of the posterior distribution. … The second criterion is good communication. Summaries should be chosen to convey clearly and succinctly all the features of interest. … In Bayesian terms, therefore, a good inference is one which contributes effectively to appropriating the information about θ which is conveyed by the posterior distribution.” (p. 14)
Clearly, O’Hagan’s attempt to define what a “good” Bayesian inference is begs the question: what does “effective appropriation of information about θ” mean, beyond the probabilistic ranking? That is, the issue of optimality is inextricably bound up with what the primary objective of Bayesian inference is. If the primary objective of Bayesian inference is not the revised probabilistic ranking, what is it? The answer is that the ranking is only half the story. The other half is concerned with optimality for Bayesian inference, which cannot be framed exclusively in terms of the posterior distribution. The decision‐theoretic perspective provides the Bayesian approach with a theory of optimal inference as well as a primary objective: minimize expected losses for all values of θ in Θ.
In his attempt to defend his stance that the entire posterior distribution is the inference, O’Hagan argues that criteria for “optimal” Bayesian inferences are only parasitical on the Bayesian approach and enter the picture through the decision‐theoretic perspective:
“… a study of decision theory has two potential benefits. First, it provides a link to classical inference. It thereby shows to what extent classical estimators, confidence intervals and hypotheses tests can be given a Bayesian interpretation or motivation. Second, it helps identify suitable summaries to give Bayesian answers to stylized inference questions which classical theory addresses.” (p. 14)
Both of the above‐mentioned potential benefits to the Bayesian approach are questionable, for two reasons. First, the link between decision‐theoretic and classical (frequentist) inference is more apparent than real, because it is fraught with misleading definitions and unclarities pertaining to the reasoning and objectives of the latter. As argued in the sequel, the quantifier “∀θ∈Θ” used to define “optimal” decision‐theoretic or Bayes rules is at odds with, and misrepresents, frequentist inference. Second, the claim concerning Bayesian answers to frequentist questions of interest is misplaced, because the former provide no real answers to the frequentist primary question of interest, which pertains to learning about θ∗. An optimal Bayes rule offers very little, if anything, relevant for learning about the value θ∗ that gave rise to data x₀. Let us unpack this answer in some more detail.
3.2. Optimality for Bayesian inference
Consider the derivation of the Bayes risk:
r(π, θ̃) = ∫ R(θ, θ̃) π(θ) dθ = ∫∫ L(θ, θ̃(x)) f(x; θ) π(θ) dx dθ = ∫ [∫ L(θ, θ̃(x)) π(θ|x) dθ] m(x) dx, (10)
where m(x) = ∫ f(x; θ) π(θ) dθ; see Ref. . The second and third equalities presume that one can reverse the order of integration (a technical issue) and treat f(x; θ)·π(θ) as the joint distribution of X and θ, so that the following equalities hold:
f(x; θ)·π(θ) = π(θ|x)·m(x), for all θ∈Θ and x∈ℝⁿ.
In this case, these equalities are questionable due to the blurring of the distinction between a generic value x of the sample and the particular value x₀; see Ref. .
In light of Eq. (10), a Bayesian estimate θ̃(x₀) is “optimal” relative to a particular loss function L(θ, θ̃) when it minimizes the Bayes risk r(π, θ̃), or equivalently the posterior expected loss ∫ L(θ, θ̃) π(θ|x₀) dθ. This makes it clear that what constitutes an “optimal” Bayesian estimate is primarily determined by the choice of L(θ, θ̃) (see the sketch after the list below):
When L(θ, θ̃) = (θ − θ̃)², the Bayes estimate is the mean of π(θ|x₀).
When L(θ, θ̃) = |θ − θ̃|, the Bayes estimate is the median of π(θ|x₀).
When L(θ, θ̃) = 0 for |θ − θ̃| ≤ ε and L(θ, θ̃) = 1 otherwise, for small ε > 0, the Bayes estimate is the mode of π(θ|x₀).
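The following sketch illustrates these three standard results numerically on a hypothetical grid posterior (the posterior shape and grid are arbitrary assumptions chosen purely for illustration): minimizing the posterior expected loss under the square, absolute, and (approximate) 0–1 loss recovers the posterior mean, median, and mode, respectively.

```python
import numpy as np

# Hypothetical grid posterior pi(theta | x0) over Theta = [0, 1]
theta = np.linspace(0.0, 1.0, 2001)
post = theta**3 * (1.0 - theta)**7               # Beta(4, 8)-shaped posterior (unnormalized)
post /= post.sum()                               # normalize on the grid

def bayes_estimate(loss):
    """Return the point estimate minimizing the posterior expected loss on the grid."""
    risks = [np.sum(loss(theta, a) * post) for a in theta]
    return theta[int(np.argmin(risks))]

mean_hat   = bayes_estimate(lambda t, a: (t - a) ** 2)        # square loss   -> posterior mean
median_hat = bayes_estimate(lambda t, a: np.abs(t - a))       # absolute loss -> posterior median
mode_hat   = bayes_estimate(lambda t, a: (np.abs(t - a) > 5e-4).astype(float))  # ~0-1 loss -> mode

print(mean_hat, np.sum(theta * post))            # both close to the posterior mean
print(median_hat)                                # close to the posterior median
print(mode_hat, theta[int(np.argmax(post))])     # both close to the posterior mode
```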
In practice, the most widely used loss function is the square:
L(θ, θ̃) = (θ̃ − θ)², for all θ∈Θ,
whose risk function is the decision‐theoretic Mean Square Error (MSE1):
MSE1(θ̃; θ) = E[(θ̃(X) − θ)²], for all θ∈Θ. (13)
Surprisingly, however, this definition of the MSE, denoted by MSE1, is different from the frequentist MSE, which is defined by:
MSE(θ̃; θ∗) = E[(θ̃(X) − θ∗)²], where θ∗ denotes the true value of θ. (14)
The key difference is that Eq. (14) is defined at the point θ = θ∗, as opposed to for all θ∈Θ. Unfortunately, statistics textbooks adopt one of the two definitions of the MSE, either at θ∗ or ∀θ∈Θ, and ignore (or seem unaware of) the other. At first sight, this difference might appear pedantic, but it turns out that it has very serious implications for the relevant theory of optimality for frequentist vs. Bayesian inference procedures. Indeed, reliance on ∀θ∈Θ undermines completely the relevance of admissibility as a minimal property for estimators in frequentist inference.
Admissibility. An estimator θ̂(X) is inadmissible if there exists another estimator θ̃(X) such that:
R(θ, θ̃) ≤ R(θ, θ̂) for all θ∈Θ,
and the strict inequality (<) holds for at least one value of θ in Θ. Otherwise, θ̂(X) is said to be admissible with respect to the loss function L(θ, ·).
The objective of minimizing losses weighted by π(θ) for all values of θ in Θ is in direct contrast to the frequentist primary objective, which is to learn from data x₀ about the true value θ∗ underlying the generation of x₀. Hence, the question that naturally arises is: what does an optimal Bayes rule, stemming from Eq. (17), convey about the underlying data generating mechanism in Eq. (1)? It is not obvious why the highest ranked value (mode), or some other feature of the posterior distribution, has any value in pinpointing θ∗, knowing that the rule is selected irrespective of the true state of Nature.
3.3. The duality between loss functions and priors
The derivation in Eq. (10) brings out the built‐in affinity between the decision‐theoretic framing of inference and the Bayesian approach. As shown above, minimizing the Bayes risk:
r(π, θ̃) = ∫ R(θ, θ̃) π(θ) dθ,
is equivalent to minimizing the integral:
∫ L(θ, θ̃(x₀)) π(θ|x₀) dθ.
This result brings out two important features of optimal Bayesian inference.
First, it confirms the minor role played by the quantifier ∀x∈ℝⁿ in both the Bayesian and decision‐theoretic optimality theory of inference.
Second, it indicates that the loss function L(θ, θ̃) and the prior π(θ) are perfect substitutes with respect to any weight function w(θ) in the derivation of Bayes rules. Modifying the loss function or the prior by the same weight yields the same result:
“… the problem of estimating θ with a modified (weighted) loss function is identical to the problem with a simple loss but with modified hyperparameters of the prior distribution while the form of the prior distribution does not change.” (, p. 522)
This implies that, in practice, a Bayesian could derive a particular Bayes rule by attaching the weight w(θ) either to the loss function or to the prior distribution, depending on which derivation is easier; see Refs. [18, 28].
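A small numerical sketch of this duality, using an arbitrary grid, likelihood, and weight function w(θ) (all hypothetical), shows that attaching the weight to the square loss or folding the same weight into the prior yields the same Bayes estimate.

```python
import numpy as np

# Hypothetical grid, likelihood of some data x0, and weight function w(theta)
theta = np.linspace(0.01, 0.99, 981)
prior = np.ones_like(theta)                      # flat prior over the grid
lik   = theta**6 * (1.0 - theta)**4              # likelihood of a hypothetical x0
w     = theta**2                                 # weight attached to the loss (or the prior)

def bayes_estimate(loss_weight, prior_weight):
    """Bayes estimate under weighted square loss loss_weight*(theta - a)^2 and the given prior."""
    post = lik * prior_weight
    post = post / post.sum()
    risks = [np.sum(loss_weight * (theta - a) ** 2 * post) for a in theta]
    return theta[int(np.argmin(risks))]

# (a) attach the weight to the loss, keep the flat prior
est_weighted_loss = bayes_estimate(loss_weight=w, prior_weight=prior)
# (b) attach the same weight to the prior, keep the simple square loss
est_weighted_prior = bayes_estimate(loss_weight=1.0, prior_weight=w * prior)
print(est_weighted_loss, est_weighted_prior)     # the two estimates coincide (up to grid error)
```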
3.4. Revisiting the complete class theorem
The issue of contrasting objectives highlights the key built‐in tension between the frequentist and Bayesian approaches to optimality, which in turn undermines several important results, including the complete class theorem, first proved in Ref. :
“Wald showed that under fairly general conditions the class of Bayes decision functions forms an essentially complete class; in other words, for any decision function that is not Bayesian, there exists one that is Bayes and is at least as good no matter what the true state of Nature may be.” (, p. 341)
As argued in the sequel, it should come as no surprise to learn that Bayes rules dominate all other rules when admissibility is given center stage. The key result is that a Bayes rule with respect to a prior distribution π(θ) is:
Admissible, under certain regularity conditions, including when the Bayes rule is unique up to equivalence relative to the same risk function R(θ, ·).
Ignoring the contrasting objectives, these results have been interpreted as evidence for the superiority of the Bayesian perspective, and have led to the intimation that an effective way to generate optimal frequentist procedures is to find the Bayes solution using a reasonable prior and then examine its frequentist properties to see whether it is satisfactory from the latter viewpoint; see Refs. [29, 30].
As argued next, even if one were to agree that Bayes rules and admissible estimators largely coincide, the importance of such a result hinges on the relevance of admissibility as a key property for frequentist estimators.
4. Loss functions and admissibility revisited
The claim to be discussed in this section is that the notions of a “loss function” and “admissibility” are incompatible with the theory of optimal frequentist estimation as framed by Fisher; see Ref. .
4.1. Admissibility as a minimal property
The following example brings out the inappropriateness of admissibility as a minimal property for optimal frequentist estimators.
Example. In the context of the simple Normal model:
Mθ(x): Xₖ ~ NIID(μ, σ²) (Normal, Independent, and Identically Distributed), k = 1, 2, ..., n, (18)
consider the decision‐theoretic notion of MSE1 in Eq. (13) to compare two estimators of μ:
The maximum likelihood estimator (MLE): μ̂ₙ(X) = X̄ₙ = (1/n)Σₖ₌₁ⁿ Xₖ.
The “crystal‐ball” estimator: μ̃(X) = a, for some arbitrarily chosen constant a∈ℝ.
When compared on admissibility grounds, both estimators are admissible and thus equally acceptable. Common sense, however, suggests that if a particular criterion of optimality cannot distinguish between μ̂ₙ(X) [a strongly consistent, unbiased, fully efficient, and sufficient estimator] and an arbitrarily chosen real number that ignores the data altogether, it is not much of a minimal property.
A moment’s reflection suggests that the inappropriateness of admissibility stems from its reliance on the quantifier “∀θ∈Θ.” The admissibility of μ̃(X) = a arises from the fact that, for values of μ close enough to a, say (μ − a)² < σ²/n, μ̃ is “better” than μ̂ₙ on MSE1 grounds:
MSE1(μ̃; μ) = (μ − a)² < (σ²/n) = MSE1(μ̂ₙ; μ). (19)
Given that the primary objective of a frequentist estimator is to pinpoint μ∗ [the true value of μ], the result in Eq. (19) seems totally irrelevant as a gauge of its capacity to achieve that!
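The following simulation sketch, with arbitrary values for σ, n, and the crystal‐ball constant a, illustrates Eq. (19): the constant estimator “wins” on MSE1 grounds only for values of μ close to a, so neither estimator dominates the other, even though the crystal‐ball estimator ignores the data altogether.

```python
import numpy as np

# Hypothetical settings for Eq. (19): simple Normal model with sigma treated as known
rng = np.random.default_rng(1)
sigma, n, a, reps = 1.0, 20, 0.0, 50_000         # a is the crystal-ball constant

for mu in np.linspace(-1.0, 1.0, 9):             # a range of possible 'true' values of mu
    x = rng.normal(mu, sigma, size=(reps, n))
    mse_mle = np.mean((x.mean(axis=1) - mu) ** 2)   # MSE1 of the sample mean, ~ sigma^2/n
    mse_cb  = (a - mu) ** 2                         # MSE1 of the crystal-ball estimator
    better = "crystal-ball" if mse_cb < mse_mle else "MLE"
    print(f"mu={mu:+.2f}  MSE(MLE)={mse_mle:.4f}  MSE(a)={mse_cb:.4f}  better: {better}")
# The crystal-ball estimator 'wins' only when (mu - a)^2 < sigma^2/n,
# so neither estimator dominates the other and both are admissible.
```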
This example indicates that admissibility is totally ineffective as a minimal property because it does not filter out the worst possible estimator! Instead, it excludes potentially good estimators, like the sample median; see Ref. . This highlights the “extreme relativism” of admissibility to the particular loss function, the square (μ̃ − μ)² in this case. For the absolute loss function |μ̃ − μ|, however, the sample median would have been the optimal estimator. Despite his wholehearted embrace of the decision‐theoretic framing, Lehmann warned statisticians about the perils of arbitrary loss functions:
“It is argued that the choice of a loss function, while less crucial than that of the model, exerts an important influence on the nature of the solution of a statistical decision problem, and that an arbitrary choice such as squared error may be badly misleading as to the relative desirability of the competing procedures.” (p. 425)
A strong case can be made that the key minimal property (necessary but not sufficient) for frequentist estimation is consistency, an extension of the Law of Large Numbers (LLN) to estimators more generally. For instance, consistency would have eliminated μ̃(X) = a from consideration because it is inconsistent. This makes intuitive sense because if an estimator cannot pinpoint θ∗ even with an infinite amount of data information, it should be considered irrelevant for learning about θ∗. Indeed, there is nothing in the notion of admissibility that advances learning from data about θ∗.
Beyond relative efficiency (defined with respect to particular loss functions) being a dubious property for frequentist estimators, the pertinent measure of finite sample precision for frequentist estimators is full efficiency, which is defined relative to the assumed statistical model in Eq. (1).
4.2. Stein’s paradox and admissibility
The quintessential example that has bolstered the appeal of the Bayesian claims concerning admissibility is the James‐Stein estimator, which gave rise to an extensive literature on shrinkage estimators; see Ref. .
Let X:=(X₁, X₂, ..., Xₙ) be an independent sample from a Normal distribution:
Xₖ ~ NI(μₖ, σ²), k = 1, 2, ..., n, (20)
where σ² is known. Using the notation μ:=(μ₁, μ₂, ..., μₙ) and Σ:=diag(σ², σ², ..., σ²), this can be denoted by X ~ N(μ, Σ).
Find an optimal estimator of μ with respect to the square “overall” loss function:
L(μ, μ̂) = Σₖ₌₁ⁿ (μ̂ₖ − μₖ)² = ‖μ̂ − μ‖². (21)
Stein astounded the statistical world by showing that for n ≤ 2 the least‐squares (LS) estimator μ̂_LS(X) = X is admissible, but for n ≥ 3 it is inadmissible. Indeed, James and Stein were able to come up with a nonlinear estimator:
μ̂_JS(X) = (1 − (n − 2)σ²/‖X‖²)·X,
that became known as the James‐Stein estimator, which dominates μ̂_LS in MSE1 terms, by demonstrating that:
E‖μ̂_JS − μ‖² < E‖μ̂_LS − μ‖² = nσ², for all μ∈ℝⁿ, when n ≥ 3.
It turns out that μ̂_JS is also inadmissible for n ≥ 3, and is dominated by a modified James‐Stein estimator that is admissible; see Ref. .
The traditional interpretation of this result is that, for the Normal, Independent model in Eq. (20), the James‐Stein estimator of μ for n ≥ 3 reduces the overall MSE1 in Eq. (21). This result seems to imply that one will “do better” (in overall MSE1 terms) by using a combined nonlinear (shrinkage) estimator, instead of estimating these means separately. What is surprising about this result is that there is no statistical reason (due to independence) to connect the inferences pertaining to the different individual means, and yet the obvious estimator (LS) is inadmissible.
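The following simulation sketch, with an arbitrary configuration of true means, illustrates the result: the James‐Stein estimator attains a smaller overall MSE1 than the least‐squares estimator for n ≥ 3, while its component‐wise accuracy in pinpointing individual means (especially those far from zero) can deteriorate.

```python
import numpy as np

# Hypothetical settings for Eqs. (20)-(21): one observation X_k per unknown mean mu_k
rng = np.random.default_rng(2)
n, sigma, reps = 10, 1.0, 100_000
mu = np.linspace(-2.0, 2.0, n)                   # an arbitrary configuration of true means

X = rng.normal(mu, sigma, size=(reps, n))        # least-squares estimator: mu_hat_LS = X
shrink = 1.0 - (n - 2) * sigma**2 / np.sum(X**2, axis=1, keepdims=True)
JS = shrink * X                                  # James-Stein estimator

overall_ls = np.mean(np.sum((X - mu) ** 2, axis=1))    # overall MSE1 of LS, ~ n*sigma^2
overall_js = np.mean(np.sum((JS - mu) ** 2, axis=1))   # overall MSE1 of JS, smaller for n >= 3
print("overall MSE1  LS:", round(overall_ls, 3), " JS:", round(overall_js, 3))

# Component-wise accuracy: JS trades bias for overall risk, so individual means
# (especially those far from zero) can be estimated worse than with LS.
print("per-mean MSE LS:", np.mean((X - mu) ** 2, axis=0).round(3))
print("per-mean MSE JS:", np.mean((JS - mu) ** 2, axis=0).round(3))
```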
As argued next, this result calls into question the appropriateness of the notion of admissibility with respect to a particular loss function, and not the judiciousness of frequentist estimation.
5. Frequentist inference and learning from data
The objectives and underlying reasoning of frequentist inference are inadequately discussed in the statistics literature. As a result, some of its key differences with Bayesian inference remain beclouded.
5.1. Frequentist approach: primary objective and reasoning
All forms of parametric frequentist inference begin with a prespecified statistical model Mθ(x). This model is chosen from the set of all possible models that could have given rise to data x₀, by selecting the probabilistic structure of the underlying stochastic process {Xₖ, k∈ℕ} in such a way as to render the observed data x₀ a “typical” realization thereof. In light of the fact that each value of θ in Θ represents a different element of the family of models represented by Mθ(x), the primary objective of frequentist inference is to learn from data x₀ about the “true” model:
M∗(x) = {f(x; θ∗)}, x∈ℝⁿ,
where θ∗ denotes the true value of θ in Θ. The “typicality” is testable vis‐à‐vis the data x₀ using misspecification testing; see Ref. .
The frequentist approach relies on two modes of reasoning for inference purposes: (i) factual reasoning, which evaluates sampling distributions under θ = θ∗ (estimation and prediction), and (ii) hypothetical reasoning, which evaluates them under prespecified values θ = θ₀ or θ = θ₁ (testing),
where θ∗ denotes the true value of θ in Θ, and θ₀, θ₁ denote hypothesized values of θ associated with the hypotheses H₀: θ∈Θ₀ vs. H₁: θ∈Θ₁, where Θ₀ and Θ₁ constitute a partition of Θ.
A frequentist estimator aims to pinpoint θ∗, and its optimality is evaluated by how effectively it achieves that. Similarly, a test statistic usually compares a good estimator of θ with a prespecified value θ₀, but behind θ₀ is the value θ∗ assumed to have generated data x₀. Hence, hypothetical reasoning is used in testing to learn about θ∗, and has nothing to do with all possible values of θ in Θ.
This contradicts misleading claims by Bayesian textbooks (, p. 61):
“The frequentist paradigm relies on this criterion [risk function] to compare estimators and, if possible, to select the best estimator, the reasoning being that estimators are evaluated on their long‐run performance for all possible values of the parameter θ.”
Contrary to this claim, the only relevant value of θ in Θ when evaluating the “optimality” of an estimator is θ∗. Such misleading claims stem from an apparent confusion between the existential and universal quantifiers in framing certain inferential assertions.
The existence of θ∗ can be formally defined using the existential quantifier:
∃θ∗∈Θ such that data x₀ constitute a typical realization of the sample X with distribution f(x; θ∗).
This introduces a potential conflict between the existential quantifier ∃θ∗∈Θ and the universal quantifier “∀θ∈Θ,” because neither the decision‐theoretic nor the Bayesian approach explicitly invokes θ∗. Decision‐theoretic and Bayesian rules are considered optimal when they minimize the expected loss no matter what θ∗ happens to be.
Any attempt to explain away the crucial differences between the two quantifiers can be easily scotched using elementary logic. The two quantifiers could not be more different since, using the logical connective for negation (¬), the equivalence between the two involves double negations:
∀θ∈Θ, P(θ) ⟺ ¬[∃θ∈Θ: ¬P(θ)], and ∃θ∈Θ: P(θ) ⟺ ¬[∀θ∈Θ, ¬P(θ)].
Similarly, invoking intuition to justify the quantifier ∀θ∈Θ as innocuous and natural, on the grounds that one should care about the behavior of an estimator for all possible values of θ in Θ, is highly misleading. The behavior of θ̂(X) for all θ∈Θ, although relevant, is not what determines how effective a frequentist estimator is at pinpointing θ∗; what matters is its sampling behavior around θ∗. Assessing its effectiveness calls for evaluating (deductively) the sampling distribution of θ̂(X) under the factual (θ = θ∗) or hypothetical (θ = θ₀, θ = θ₁) values, and not for all possible values of θ in Θ. Let us unpack the details of this claim.
5.2. Frequentist estimation
The underlying reasoning for frequentist estimation is factual, in the sense that the optimality of an estimator θ̂(X) is appraised in terms of its generic capacity to zero in on (pinpoint) the true value θ∗, whatever the sample realization x∈ℝⁿ. Optimal properties like consistency, unbiasedness, full efficiency, sufficiency, etc., calibrate this generic capacity using the sampling distribution of θ̂(X) evaluated under θ = θ∗, i.e., in terms of f(θ̂(x); θ∗), for x∈ℝⁿ. For instance, strong consistency asserts that, as n → ∞, θ̂(X) will zero in on θ∗ almost surely:
P(limₙ→∞ θ̂(X) = θ∗) = 1.
Similarly, unbiasedness asserts that the mean of θ̂(X), evaluated under θ = θ∗, is the true value θ∗:
E[θ̂(X); θ = θ∗] = θ∗.
In this sense, both of these optimal properties are defined at the point θ = θ∗. This is achieved by using factual reasoning, i.e., evaluating the sampling distribution of θ̂(X) under the true state of Nature (θ = θ∗), without having to know θ∗. This is in contrast to using loss functions, such as Eq. (2), which are defined in terms of θ∗ but are rendered nonoperational without knowing θ∗.
Example. In the case of the simple Normal model in Eq. (18), the point estimator μ̂ₙ(X) = X̄ₙ = (1/n)Σₖ₌₁ⁿ Xₖ is consistent, unbiased, fully efficient, and sufficient, with a sampling distribution:
X̄ₙ ~ N(μ, σ²/n).
What is not usually appreciated sufficiently is that the evaluation of that distribution is factual, i.e., under μ = μ∗, and should formally be denoted by:
X̄ₙ ~ N(μ∗, σ²/n), for μ = μ∗. (33)
When X̄ₙ is standardized, it yields the pivotal function:
d(X; μ∗) = √n(X̄ₙ − μ∗)/σ ~ N(0, 1),
whose distribution only holds for the true μ∗ and no other value. This provides the basis for constructing a (1 − α) confidence interval (CI):
P(X̄ₙ − c·(σ/√n) ≤ μ∗ ≤ X̄ₙ + c·(σ/√n)) = 1 − α, where c is the (1 − α/2) quantile of N(0, 1),
which asserts that the random interval [X̄ₙ − c·(σ/√n), X̄ₙ + c·(σ/√n)] will cover (overlay) the true mean μ∗, whatever that happens to be, with probability (1 − α), or equivalently, the error of coverage is α. Hence, the frequentist evaluation of the coverage error probability depends only on the sampling distribution of X̄ₙ and is attached to the random interval itself, for all values x∈ℝⁿ, without requiring one to know μ∗.
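The following sketch, assuming σ is known and using arbitrary values of μ∗, illustrates that the coverage error probability is attached to the random interval itself: the empirical coverage stays at (1 − α) whatever the unknown μ∗ happens to be.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical settings: coverage of the (1 - alpha) CI for the mean, sigma known
rng = np.random.default_rng(3)
sigma, n, alpha, reps = 1.0, 30, 0.05, 100_000
c = norm.ppf(1 - alpha / 2)                      # critical value, the (1 - alpha/2) quantile

for mu_star in (-3.0, 0.0, 7.5):                 # whatever the true mean happens to be
    x = rng.normal(mu_star, sigma, size=(reps, n))
    xbar = x.mean(axis=1)
    lower = xbar - c * sigma / np.sqrt(n)
    upper = xbar + c * sigma / np.sqrt(n)
    coverage = np.mean((lower <= mu_star) & (mu_star <= upper))
    print(f"mu* = {mu_star:+.1f}  empirical coverage = {coverage:.3f}")   # ~ 0.95 each time
```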
The evaluation at μ = μ∗ calls into question the decision‐theoretic definition of unbiasedness:
E[θ̂(X)] = θ, for all θ∈Θ,
in the context of frequentist estimation, since this assertion makes sense only when defined at θ = θ∗. Similarly, the appropriate frequentist definition of the MSE for an estimator, initially proposed by Fisher, is defined at the point θ = θ∗:
MSE(θ̂; θ∗) = E[(θ̂(X) − θ∗)²].
Indeed, the well‐known decomposition:
MSE(θ̂; θ∗) = Var(θ̂) + [Bias(θ̂; θ∗)]²,
is meaningful only when defined at the point θ = θ∗ (the true value), since by definition:
Var(θ̂) = E{[θ̂(X) − E(θ̂(X))]²}, Bias(θ̂; θ∗) = E(θ̂(X)) − θ∗,
and thus the variance and the bias involve only two values of θ in Θ, E(θ̂(X)) and θ∗, and only one when the estimator is unbiased. This implies that the apparent affinity between the MSE1 defined in Eq. (13) and the variance of an estimator is more apparent than real, because the latter makes frequentist sense only when θ is a single point.
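A quick simulation check of this decomposition, using an arbitrary, deliberately biased (shrunk) estimator of the mean, confirms that the MSE defined at the point θ = θ∗ equals the variance plus the squared bias.

```python
import numpy as np

# Hypothetical check of MSE(theta_hat; theta*) = Var(theta_hat) + Bias(theta_hat; theta*)^2
rng = np.random.default_rng(5)
mu_star, sigma, n, reps = 2.0, 1.0, 20, 200_000

x = rng.normal(mu_star, sigma, size=(reps, n))
theta_hat = 0.9 * x.mean(axis=1)                 # a deliberately biased (shrunk) estimator

mse  = np.mean((theta_hat - mu_star) ** 2)       # MSE defined at the point theta = theta*
var  = np.var(theta_hat)
bias = np.mean(theta_hat) - mu_star              # involves only E(theta_hat) and theta*
print(round(mse, 5), round(var + bias**2, 5))    # the two sides agree up to simulation noise
```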
5.3. James‐Stein estimator from a frequentist perspective
For a proper frequentist evaluation of the above James‐Stein result, it is important to bring out the conflict between the overall MSE (14) and the factual reasoning underlying frequentist estimation. From the latter perspective, the James‐Stein estimator raises several issues of concern.
First, both the least‐squares and the James‐Stein estimators are inconsistent estimators of μ, since the underlying model suffers from the incidental parameter problem: there is essentially one observation (Xₖ) for each unknown parameter (μₖ), and as n → ∞ the number of unknown parameters increases at the same rate. To bring out the futility of comparing these two estimators more clearly, consider the following simpler example.
Example. Let X:=(X₁, X₂, ..., Xₙ) be a sample from the simple Normal model in Eq. (18). Comparing two inconsistent estimators, say μ̂₁(X) = X₁ and μ̂₂(X) = ½(X₁ + X₂), and inferring that μ̂₂ is relatively more efficient than μ̂₁ relative to a square loss function, i.e.,
E[(μ̂₂ − μ)²] = σ²/2 < σ² = E[(μ̂₁ − μ)²],
is totally uninteresting because both estimators are inconsistent!
Second, to be able to discuss the role of admissibility in the Stein result, we need to consider a consistent James‐Stein estimator, by extending the original data to panel (longitudinal) data where the sample is:
Xᵢₜ ~ NI(μᵢ, σ²), i = 1, 2, ..., n, t = 1, 2, ..., T.
In this case, the consistent least‐squares and James‐Stein estimators are:
μ̂ᵢ(X) = X̄ᵢ = (1/T)Σₜ₌₁ᵀ Xᵢₜ, i = 1, 2, ..., n, and μ̂_JS(X) = (1 − (n − 2)(σ²/T)/‖X̄‖²)·X̄, where X̄:=(X̄₁, X̄₂, ..., X̄ₙ).
This enables us to evaluate the notion of “relatively better” more objectively.
Admissibility relative to the overall loss function in Eq. (21) introduces a trade‐off between the accuracy of the estimators of the individual parameters and the “overall” expected loss. The question is: “In what sense does the overall MSE among a group of mean estimates provide a better measure of “error” in learning about the true values (μ₁∗, μ₂∗, ..., μₙ∗)?” The short answer is: it does not. Indeed, the overall MSE will be irrelevant when the primary objective of estimation is to learn from data about μ∗:=(μ₁∗, μ₂∗, ..., μₙ∗). This is because the particular loss function penalizes the estimator’s capacity to pinpoint μ∗ by trading an increase in bias for a decrease in the overall MSE in Eq. (21), when the latter is misleadingly evaluated over all values of μ. That is, the James‐Stein estimator flouts the primary objective of pinpointing μ∗ in favor of reducing the overall MSE.
In summary, the above discussion suggests that there is nothing paradoxical about Stein’s original result. What is problematic is not the least‐squares estimator, but the choice of “better” in terms of admissibility relative to an overall MSE in evaluating the accuracy of the estimators of μ.
5.4. Frequentist hypothesis testing
Another frequentist inference procedure one can employ to learn from data x₀ about θ∗ is hypothesis testing, where the question posed is whether θ∗ is close enough to some prespecified value θ₀. In contrast to estimation, the reasoning underlying frequentist testing is hypothetical in nature.
5.4.1. Legitimate frequentist error probabilities
For testing the hypotheses:
H₀: μ ≤ μ₀ vs. H₁: μ > μ₀,
one utilizes the same sampling distribution, X̄ₙ ~ N(μ, σ²/n), but transforms the pivot d(X; μ∗) into the test statistic d(X) by replacing μ∗ with the prespecified value μ₀, yielding d(X) = √n(X̄ₙ − μ₀)/σ. However, instead of evaluating it under the factual μ = μ∗, it is now evaluated under various hypothetical scenarios associated with H₀ and H₁, to yield two types of (hypothetical) sampling distributions:
(I) d(X) ~ N(0, 1), under H₀ (μ = μ₀); (II) d(X) ~ N(δ₁, 1), under H₁ (μ = μ₁), where δ₁ = √n(μ₁ − μ₀)/σ.
In both cases, (I) and (II), the underlying reasoning is hypothetical, in the sense that the factual μ = μ∗ in Eq. (33) is replaced by hypothesized values of μ, and the test statistic d(X) provides a standardized distance between the hypothesized values (μ₀ or μ₁) and the true μ∗ assumed to underlie the generation of the data x₀. Using the sampling distribution in (I), one can define the following legitimate error probabilities: the type I error probability, α = P(d(X) > cα; μ = μ₀), where cα is the (1 − α) quantile of N(0, 1), and the p‐value, p(x₀) = P(d(X) > d(x₀); μ = μ₀).
Using the sampling distribution in (II), one can define the type II error probability, β(μ₁) = P(d(X) ≤ cα; μ = μ₁), and the power of the test, π(μ₁) = 1 − β(μ₁), for all μ₁ > μ₀.
It can be shown that the test defined by the test statistic d(X) and the rejection region C₁(α) = {x: d(x) > cα} constitutes a uniformly most powerful (UMP) test for significance level α; see Ref. . The type I [II] error probability is associated with the test erroneously rejecting [accepting] H₀. The type I and II error probabilities evaluate the generic capacity [whatever the sample realization x∈ℝⁿ] of a test to reach correct inferences. Contrary to Bayesian claims, these error probabilities have nothing to do with the temporal or the physical dimension of the long‐run metaphor associated with repeated samples. The relevant feature of the long‐run metaphor is the repeatability (in principle) of the DGM represented by Mθ(x); this feature can be easily operationalized using computer simulation; see Ref. .
The key difference between the significance level α and the p‐value is that the former is a pre‐data and the latter a post‐data error probability. Indeed, the p‐value can be viewed as the smallest significance level at which H₀ would have been rejected with data x₀. The legitimacy of post‐data error probabilities underlying the hypothetical reasoning can be used to go beyond the N‐P accept/reject rules and provide an evidential interpretation pertaining to the discrepancy from the null warranted by data x₀; see Ref. .
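The following sketch, assuming σ is known and using arbitrary values for μ₀, μ₁, n, and the observed sample mean, illustrates the distinction: the type I and II error probabilities are pre‐data quantities attached to the test procedure, whereas the p‐value is a post‐data error probability computed from the observed data.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical settings: one-sided test of H0: mu <= mu0 vs H1: mu > mu0, sigma known
mu0, sigma, n, alpha = 10.0, 2.0, 25, 0.05
c_alpha = norm.ppf(1 - alpha)                    # rejection region: d(x) > c_alpha

# Pre-data error probabilities (attached to the test procedure, not to theta)
type_I = 1 - norm.cdf(c_alpha)                   # = alpha, evaluated under mu = mu0
mu1 = 10.8                                       # a hypothetical alternative value
delta1 = np.sqrt(n) * (mu1 - mu0) / sigma        # noncentrality of d(X) under mu = mu1
type_II = norm.cdf(c_alpha - delta1)             # P(accept H0; mu = mu1)
print("type I:", round(float(type_I), 4), " type II at mu1:", round(float(type_II), 4))

# Post-data error probability: the p-value for a hypothetical observed sample mean
xbar_obs = 10.9
d_obs = np.sqrt(n) * (xbar_obs - mu0) / sigma
p_value = 1 - norm.cdf(d_obs)                    # smallest alpha at which H0 is rejected
print("observed d(x0):", round(float(d_obs), 3), " p-value:", round(float(p_value), 4))
```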
Despite the fact that frequentist testing uses hypothetical reasoning, its main objective is also to learn from data x₀ about the true model M∗(x). This is because a test statistic like d(X) constitutes nothing more than a scaled distance between the value μ∗ behind the generation of x₀ and a hypothesized value μ₀, with μ∗ being replaced by its “best” estimator X̄ₙ.
6. Revisiting loss and risk functions
The above discussion raises serious doubts about the role of loss functions and admissibility in evaluating learning from data about θ∗. To understand why the decision‐theoretic framing misrepresents the frequentist approach, one needs to consider the role of loss functions in statistical inference more generally.
6.1. Where do loss functions come from?
A closer scrutiny of the decision‐theoretic setup reveals that the loss function needs to invoke “information from sources other than the data,” which is usually not readily available. Indeed, such information is available only in very restrictive situations, such as acceptance sampling in quality control. In light of that, a proper understanding of the intended scope of statistical inference calls for distinguishing the special cases where the loss function is part and parcel of the available substantive information from those where no such information is either relevant or available.
“Now it is undoubtedly true that, on the one hand, situations exist where the loss function is at least approximately known (for example, certain problems in business) and sampling inspection are of this sort. … On the other hand, a vast number of inferential problems occur, particularly in the analysis of scientific data, where there is no way of knowing in advance to what use the results of research will subsequently be put.”
Cox went further and questioned this framing even in cases where the inference might involve a decision:
“The reasons that the detailed techniques [decision‐theoretic] seem of fairly limited applicability, even when a fairly clear cut decision element is involved, may be (i) that, except in such fields as control theory and acceptance sampling, a major contribution of statistical technique is in presenting the evidence in incisive form for discussion, rather than in providing mechanical presentation for the final decision. This is especially the case when a single major decision is involved. (ii) The central difficulty may be in formulating the elements required for the quantitative analysis, rather than in combining these elements via a decision rule.” (p. 45)
Another important aspect of using loss functions in inference is that, in practice, they seem to be an add‐on to the inference itself, since they bring to the problem information other than the data. In particular, the same statistical inference problem can give rise to very different decisions/actions depending on one’s loss function. To illustrate this, consider an example from Ref. :
“… consider the case of a new drug whose effects are studied by a research scientist attached to the laboratory of a pharmaceutical company. The conclusion of the study may have different bearings on the action to be taken by (a) the scientist whose line of further investigation would depend on it, (b) the company whose business decisions would be determined by it, and (c) the Government whose policies as to health care, drug control, etc., would take shape on that basis.” (p. 72)
In practice, each one of these different agents is likely to have a very different loss function, but their inferences should have a common denominator: the scientific evidence pertaining to the true θ∗ that stems solely from the observed data x₀.
6.2. Decisions vs. inferences
The above discussion brings out the crucial distinction between a “decision” and an “inference” stemming from data x₀. Even before Wald introduced the decision‐theoretic perspective, Fisher perceptively argued:
“In the field of pure research no assessment of the cost of wrong conclusions, or of delay in arriving at more correct conclusions can conceivably be more than a pretence, and in any case such an assessment would be inadmissible and irrelevant in judging the state of the scientific evidence.” (pp. 25–26)
Tukey (1960) echoed Fisher’s view by contrasting decisions vs. inferences:
“Like any other human endeavor, science involves many decisions, but it progresses by the building up of a fairly well established body of knowledge. This body grows by the reaching of conclusions — by acts whose essential characteristics differ widely from the making of decisions. Conclusions are established with careful regard to evidence, but without regard to consequences of specific actions in specific circumstances.” (p. 425)
Hacking brought out the key difference between an “inference pertaining to evidence” for or against a hypothesis, and a “decision to do something” as a result of an inference:
“… to conclude that an hypothesis is best supported is, apparently, to decide that the hypothesis in question is best supported. Hence it is a decision like any other. But this inference is fallacious. Deciding that something is the case differs from deciding to do something. … Hence deciding to do something falls squarely in the province of decision theory, but deciding that something is the case does not.” (p. 31)
This issue was elaborated upon by Birnbaum (p. 19):
“Two contrasting interpretations of the decision concept are formulated: behavioral, applicable to “decisions” in a concrete literal sense as in acceptance sampling; and evidential, applicable to “decisions” such as “reject H₀” in a research context, where the pattern and strength of statistical evidence concerning statistical hypotheses is of central interest.”
6.3. Loss functions vs. inherent distance functions
The notion of a loss function stemming from “information other than the data” raises another source of potential conflict. This stems from the fact that within each statistical model there exists an inherent statistical distance function, often relating to the log‐likelihood and the score function, which constitutes information contained in the data; see Ref. .
It is well known that when the distribution underlying Mθ(x) is Normal, the inherent distance function for comparing estimators of the mean (μ) is the square:
(μ̂ − μ)².
On the other hand, when the distribution is Laplace, the relevant statistical distance function is the absolute distance (see Ref. ):
|μ̂ − μ|.
Similarly, when the distribution underlying Mθ(x) is uniform, the inherent distance function takes yet another form.
Note that these distance functions are defined at the point θ = θ∗ and not for all θ in Θ, as traditional loss functions are.
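The following simulation sketch, with arbitrary settings, illustrates the point: under Normal data the sample mean (the MLE, tied to the square distance) outperforms the sample median, whereas under Laplace data the ordering reverses in favor of the median (the MLE, tied to the absolute distance).

```python
import numpy as np

# Hypothetical settings: the 'inherent' estimator changes with the assumed distribution
rng = np.random.default_rng(4)
theta_star, n, reps = 0.0, 101, 20_000

x_norm = rng.normal(theta_star, 1.0, size=(reps, n))     # Normal data: MLE of the mean is the sample mean
x_lap  = rng.laplace(theta_star, 1.0, size=(reps, n))    # Laplace data: MLE of the location is the sample median

for name, x in (("Normal", x_norm), ("Laplace", x_lap)):
    mse_mean   = np.mean((x.mean(axis=1) - theta_star) ** 2)
    mse_median = np.mean((np.median(x, axis=1) - theta_star) ** 2)
    print(f"{name:8s} MSE(mean)={mse_mean:.5f}  MSE(median)={mse_median:.5f}")
# Under the Normal the sample mean dominates the median; under the Laplace the ordering reverses.
```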
The dilemma facing a Bayesian or a decision‐theoretic statistician is to decide when it makes sense to override the MLE and select the optimal rule stemming from an externally given loss function. The dilemma is not as trivial as it might seem at first sight, for two reasons. First, the key difference between the two is that the assumptions underlying the likelihood function are testable vis‐à‐vis the data, but those underlying the loss function are not. Second, the likelihood function renders the notion of efficiency “global,” in the sense of full efficiency defined in terms of Fisher’s information:
Iₙ(θ∗) := E[(∂ ln f(X; θ)/∂θ)²], evaluated at θ = θ∗, with full efficiency attained when Var(θ̂(X)) = 1/Iₙ(θ∗).
Hence, the optimality of an estimator can be affirmed using testable information comprising the statistical model Mθ(x). This is in direct contrast with admissibility, which is a property defined in terms of “local” efficiency, relative to a loss function, based on external (nontestable) information.
6.4. Acceptance sampling vs. learning from data
Let us bring out the key features of a situation where the above decision‐theoretic setup makes perfectly good sense. This is the situation Fisher called acceptance sampling, such as an industrial production process where the objective is quality control, i.e., to make a decision pertaining to shipping sub‐standard products (e.g., nuts and bolts) to a buyer using the expected loss/gain as the ultimate criterion.
In an acceptance sampling context, the MSE1, or some other risk function, is relevant because it evaluates genuine losses associated with a decision related to the choice of an estimate θ̂(x₀), say the cost of the observed percentage of defective products; but that has nothing to do with type I and II error probabilities.
Acceptance sampling differs from a scientific enquiry in two crucial respects:
The primary aim is to use statistical rules to minimize the expected loss associated with “a decision.”
The sagacity of all actions is determined by the respective “losses” stemming from “relevant information other than the data (, p. 251).”
The trade‐off between the two types of error probabilities is determined by the risk function itself and not by any endeavor to learn from data about θ∗. Indeed, learning is deliberately undermined by certain loss functions, such as the overall MSE (14), that favor biased estimators of the James‐Stein type.
The key difference between acceptance sampling and a scientific inquiry is that the primary objective of the latter is not to minimize the expected loss (costs and utilities) associated with different values of θ, but to use data x₀ to learn about the “true” model (17). The two situations are drastically different mainly because the key notion of a “true θ∗” calls into question the above acceptance sampling setup. Indeed, the loss function, being defined “∀θ∈Θ,” will penalize pinpointing θ∗, since there is no reason to expect that the highest ranked value of θ would coincide with θ∗, unless by accident.
The extreme relativism of loss function optimality renders decision‐theoretic and Bayes rules highly vulnerable to abuse. In practice, one can justify any estimator as optimal, however lame in terms of other criteria, by selecting an “appropriate” loss function.
Example 1. Consider a manufacturer of high‐precision bolts and nuts who has information that the buyer only checks the first and last box for quality control when accepting an order. This suggests that, to minimize losses stemming from the return of its products as defective, an appropriate loss function might be one based solely on the first and last observations, say:
L(θ̂(X)) = ½[(θ̂(X) − X₁)² + (θ̂(X) − Xₙ)²].
From the acceptance sampling perspective, the resulting “optimal” estimator, based only on the first and last observations, is excellent because it minimizes the expected losses, but it is a terrible estimator for pinpointing θ∗ because it is inconsistent!
Consider a more general case where acceptance sampling resembles hypothesis testing, in so far as final products are randomly selected for inspection during the production process. In such a situation, the main objective can be viewed as operationalizing the probabilities of false acceptance/rejection with a view to minimizing the expected losses. The conventional wisdom has been that this situation is similar enough to Neyman‐Pearson (N‐P) testing to render the latter the appropriate framing for the decision to ship a particular batch or not. However, a closer look at some of the examples used to illustrate such a situation reveals that the decisions are driven exclusively by the risk function, and not by any quest to learn from data about the true θ∗. For instance, the N‐P way of addressing the trade‐off between the two types of error probabilities, fixing α to a small value and seeking a test that minimizes the type II error probability, seems utterly irrelevant in such a context. One can easily think of a loss function where the “optimal” trade‐off calls for a much larger type I than type II error probability. As argued in Ref. :
“Wald’s decision theory … has given up fixed probability of errors of the first kind, and has focused on gains, losses or regrets.” (p. 433)
Indeed, Wald was the first to highlight that the decision‐theoretic notion of “optimality” revolves around a particular loss function:
“The “best” system of regions of acceptance … will depend only on the weight function of the errors.” (, p. 302)
Given the crucial differences in [a]–[c], one can make a strong case that the objectives and the underlying reasoning of acceptance sampling are drastically different from those pertaining to learning from data in a scientific context.
6.5. Is expected loss a legitimate frequentist error?
The key question is: what do expected losses and traditional frequentist errors, such as bias, MSE and the type I–II errors, have in common, if anything?
First, they stem directly from the statistical model Mθ(x), since the underlying sampling distributions of estimators, test statistics, and predictors are derived exclusively from the distribution of the sample, f(x; θ), through Eq. (7). In this sense, the relevant error probabilities are directly related to the statistical information pertaining to the data x₀, as summarized by the statistical model itself.
Second, they are attached to a particular frequentist inference procedure, as they relate to a relevant inferential claim. These error probabilities calibrate the effectiveness of inference procedures in learning from data about the true statistical model M∗(x).
In light of these features, the question is: “In what sense could a risk function potentially represent relevant frequentist errors?” The argument that the risk function represents legitimate frequentist errors because it is derived by taking expectations with respect to f(x; θ) is misguided for two reasons.
The relevant errors in estimation, including the bias and the MSE, are evaluated with respect to θ∗, by invoking factual reasoning; θ∗ denotes the true state of Nature. Wald’s original loss function in Eq. (2) represents an interesting case because it is defined in terms of θ∗, which renders it nonoperational when evaluated for all θ in Θ, since θ∗ is unknown in practice. In contrast, the errors associated with the bias and the MSE are rendered operational by the factual reasoning fashioned to forgo knowing θ∗.
The expected losses stemming from the risk function are attached to particular values of θ in Θ. Such an assignment is in direct conflict with all the above legitimate error probabilities, which are attached to the inference procedure itself, and never to particular values of θ in Θ. The expected loss assigned to each value of θ in Θ has nothing to do with learning from data about θ∗. Indeed, the risk function will penalize a procedure for pinpointing θ∗, since the latter is unknown in practice. This is in direct conflict with the main objective of frequentist estimation, but in sync with “acceptance sampling,” where the objective of the inference has everything to do with expected losses.
7. Summary and conclusions
The paper makes a case for Fisher’s [12, 42] assertions concerning the appropriateness of the decision‐theoretic framing for “acceptance sampling” and its inappropriateness for frequentist inference. A closer look at this framing reveals that it is congruent with the Bayesian approach because it supplements the posterior distribution with a theory of optimal inference. Decision‐theoretic and Bayesian rules are considered optimal when they minimize the expected loss for all possible values of θ in Θ, irrespective of what the true value θ∗ happens to be. In contrast, the theory of optimal frequentist inference revolves around the true value θ∗, since it depends entirely on the capacity of the procedure to pinpoint θ∗. The frequentist approach relies on factual (estimation and prediction), as well as hypothetical (testing), reasoning, both of which revolve around the existential quantifier ∃θ∗∈Θ. The inappropriateness of the quantifier ∀θ∈Θ calls into question the relevance of admissibility as a minimal property for frequentist estimators. A strong case can be made that the relevant minimal property for frequentist estimators is consistency. In addition, full efficiency provides the relevant measure of an estimator’s finite sample efficiency (accuracy) in pinpointing θ∗. Both of these properties stem from the underlying statistical model Mθ(x), in contrast to admissibility, which relies on loss functions based on information other than the data.
It is argued that Stein’s result stems from the fact that admissibility introduces a trade‐off between the accuracy of the estimator in pinpointing μ∗ and the “overall” expected loss. That is, the James‐Stein estimator achieves a lower overall MSE by blunting the capacity of a frequentist estimator to pinpoint μ∗. Why would a frequentist care about the overall MSE defined for all values of μ? After all, expected losses are not legitimate errors similar to the bias and the MSE (when properly defined), as well as the coverage, type I, and type II errors. The latter are attached to the frequentist procedures themselves to calibrate their capacity to achieve learning from data about θ∗. In contrast, expected losses are assigned to different values of θ in Θ, using information other than the data.