The Bayesian Posterior Estimators under Six Loss Functions for Unrestricted and Restricted Parameter Spaces

In this chapter, we investigate six loss functions. In particular, the squared error loss function and the weighted squared error loss function, which penalize overestimation and underestimation equally, are recommended for the unrestricted parameter space (−∞, ∞); Stein's loss function and the power-power loss function, which penalize gross overestimation and gross underestimation equally, are recommended for the positive restricted parameter space (0, ∞); the power-log loss function and Zhang's loss function, which penalize gross overestimation and gross underestimation equally, are recommended for (0, 1). Among the six Bayesian estimators that minimize the corresponding posterior expected losses (PELs), there exist three strings of inequalities. However, a string of inequalities among the six smallest PELs does not exist. Moreover, we summarize three hierarchical models where the unknown parameter of interest belongs to (0, ∞), namely, the hierarchical normal and inverse gamma model, the hierarchical Poisson and gamma model, and the hierarchical normal and normal-inverse-gamma model. In addition, we summarize two hierarchical models where the unknown parameter of interest belongs to (0, 1), namely, the beta-binomial model and the beta-negative binomial model. For empirical Bayesian analysis of the unknown parameter of interest of these hierarchical models, we use two common methods to obtain estimators of the hyperparameters: the moment method and the maximum likelihood estimator (MLE) method.


Introduction
In Bayesian analysis, there are four basic elements: the data, the model, the prior, and the loss function. A Bayesian estimator minimizes some posterior expected loss (PEL) function. We confine our interest to six loss functions in this chapter: the squared error loss function (well known), the weighted squared error loss function ([1], p. 78), Stein's loss function [2][3][4][5][6][7][8][9][10], the power-power loss function [11], the power-log loss function [12], and Zhang's loss function [13]. It is worth noting that among the six loss functions, the first and second are defined on Θ = (−∞, ∞), and they penalize overestimation and underestimation equally. The third and fourth are defined on Θ = (0, ∞), and they penalize gross overestimation and gross underestimation equally, that is, an action a will suffer an infinite loss when it tends to 0 or ∞. The fifth and sixth are defined on Θ = (0, 1), and they penalize gross overestimation and gross underestimation equally, that is, an action a will suffer an infinite loss when it tends to 0 or 1.
The squared error loss function and the weighted squared error loss function have been used by many authors for the problem of estimating the variance, σ², based on a random sample from a normal distribution with unknown mean μ (see, for instance, [14,15]). As pointed out by [16], these two loss functions penalize overestimation and underestimation equally, which is fine for the unrestricted parameter space Θ = (−∞, ∞). For Θ = (0, ∞), the positive restricted parameter space, where 0 is a natural lower bound and the estimation problem is not symmetric, we should not choose the squared error loss function or the weighted squared error loss function, but rather a loss function that penalizes gross overestimation and gross underestimation equally, that is, one under which an action a suffers an infinite loss as it tends to 0 or ∞. Stein's loss function has this property, and thus it is recommended for Θ = (0, ∞) by many researchers (e.g., see [2][3][4][5][6][7][8][9][10]). Moreover, [11] proposes the power-power loss function, which not only penalizes gross overestimation and gross underestimation equally but also has balanced convergence rates, or penalties, for its argument too large and too small. Therefore, Stein's loss function and the power-power loss function are recommended for Θ = (0, ∞). Analogously, for the restricted parameter space Θ = (0, 1), where 0 and 1 are two natural bounds and the estimation problem is not symmetric, we should not select the squared error loss function or the weighted squared error loss function, but rather a loss function that penalizes gross overestimation and gross underestimation equally, that is, one under which an action a suffers an infinite loss as it tends to 0 or 1. It is worth noting that Stein's loss function and the power-power loss function are also not appropriate in this case. The power-log loss function proposed by [12] has this property.
Moreover, they propose six properties for a good loss function on Θ = (0, 1). Specifically, the power-log loss function is convex in its argument, attains its global minimum at the true unknown parameter, and penalizes gross overestimation and gross underestimation equally. Apart from these six properties, [13] proposes a seventh property, namely, balanced convergence rates, or penalties, for the argument too large and too small, for a good loss function on Θ = (0, 1). Therefore, the power-log loss function and Zhang's loss function are recommended for Θ = (0, 1). The rest of the chapter is organized as follows. In Section 2, we obtain two Bayesian estimators for θ ∈ Θ = (−∞, ∞) under the squared error loss function and the weighted squared error loss function. In Section 3, we obtain two Bayesian estimators for θ ∈ Θ = (0, ∞) under Stein's loss function and the power-power loss function. In Section 4, we obtain two Bayesian estimators for θ ∈ Θ = (0, 1) under the power-log loss function and Zhang's loss function. In Section 5, we summarize three strings of inequalities in a theorem. Some conclusions and discussions are provided in Section 6.

Squared error loss function
The Bayesian estimator under the squared error loss function (well known), δ_2^π(x), minimizes the posterior expected squared error loss (PESEL), E[L_2(θ, a)|x], where a = a(x) ∈ (−∞, ∞) is an action (estimator), L_2(θ, a) = (θ − a)² is the squared error loss function, and θ ∈ (−∞, ∞) is the unknown parameter of interest. The PESEL is easy to obtain (see [16]). It is found in [16] that δ_2^π(x) = E(θ|x) by taking the partial derivative of the PESEL with respect to a and setting it to 0.
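The fact that the posterior mean minimizes the PESEL is easy to check numerically. In the sketch below, a hypothetical N(2, 1) posterior stands in for π(θ|x) (an illustrative assumption, not a model from this chapter); a grid scan over candidate actions recovers the posterior mean as the minimizer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior: pretend pi(theta | x) is N(2, 1) and draw from it.
theta = rng.normal(loc=2.0, scale=1.0, size=100_000)

def pesel(a, theta):
    """Monte Carlo estimate of E[(theta - a)^2 | x], the PESEL at action a."""
    return np.mean((theta - a) ** 2)

# Scan candidate actions; the minimizer should match the posterior mean.
grid = np.linspace(0.0, 4.0, 401)
a_star = grid[int(np.argmin([pesel(a, theta) for a in grid]))]
print(a_star, theta.mean())
```

The same grid-scan recipe works for any of the loss functions in this chapter once posterior draws are available.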

Weighted squared error loss function
The Bayesian estimator under the weighted squared error loss function, δ_w2^π(x), minimizes the posterior expected weighted squared error loss (PEWSEL) (see [1]), where A = {a(x) : a(x) ∈ (−∞, ∞)} is the action space, a = a(x) ∈ (−∞, ∞) is an action (estimator), L_w2(θ, a) is the weighted squared error loss function, and θ ∈ (−∞, ∞) is the unknown parameter of interest. The PEWSEL is easy to obtain (see [1]). It is found in [1] that the Bayesian estimator follows by taking the partial derivative of the PEWSEL with respect to a and setting it to 0.
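For a weighted squared error loss of the form L_w2(θ, a) = w(θ)(θ − a)², setting the derivative of the PEWSEL to zero gives a = E[w(θ)θ|x]/E[w(θ)|x]. A minimal sketch, assuming the weight w(θ) = 1/θ² and a hypothetical gamma posterior (both illustrative choices, not the chapter's definitions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Gamma(5, scale 1) posterior draws for a positive parameter.
theta = rng.gamma(shape=5.0, scale=1.0, size=200_000)

def delta_w2(theta, w):
    """Minimizer of E[w(theta) * (theta - a)^2 | x]:
    a = E[w(theta) * theta | x] / E[w(theta) | x]."""
    return np.mean(w(theta) * theta) / np.mean(w(theta))

w = lambda t: 1.0 / t**2  # assumed illustrative weight
est = delta_w2(theta, w)
print(est, theta.mean())
```

With this weight the estimator (about 3 here) sits below the posterior mean (about 5), because small values of θ receive large weight.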

Bayesian estimation for θ ∈ (0,∞)
There are many hierarchical models where the parameter of interest is θ ∈ Θ = (0, ∞). As pointed out in the introduction, we should calculate and use the Bayesian estimator of the parameter θ under Stein's loss function or the power-power loss function, because they penalize gross overestimation and gross underestimation equally. We list several such hierarchical models as follows.

Model (a) (hierarchical normal and inverse gamma model).
This hierarchical model has been investigated by [10,16,17]. Suppose that we observe X_1, X_2, …, X_n from the hierarchical normal and inverse gamma model, where −∞ < μ < ∞, α > 0, and β > 0 are known constants, θ is the unknown parameter of interest, N(μ, θ) is the normal distribution, and IG(α, β) is the inverse gamma distribution. It is worth noting that finding the Bayesian rule under a conjugate prior is a standard problem, treated in almost every text on mathematical statistics. The idea of selecting an appropriate prior from the conjugate family was put forward by [18]. Specifically, Bayesian estimation of θ under the prior IG(α, β) is studied in Example 4.2.5 (p. 236) of [17] and in Exercise 7.23 (p. 359) of [16]. However, they only calculate the Bayesian estimator with respect to the IG(α, β) prior under the squared error loss, δ_2^π(x) = E(θ|x).
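The standard conjugate update for Model (a) can be sketched as follows: if θ ~ IG(α, β) and X_i|θ ~ N(μ, θ) with μ known, the posterior is IG(α + n/2, β + Σ(x_i − μ)²/2), so the squared error loss estimator δ_2^π(x) = E(θ|x) is available in closed form (the data values below are hypothetical):

```python
import numpy as np

def ig_posterior(x, mu, alpha, beta):
    """Conjugate update: theta ~ IG(alpha, beta), X_i | theta ~ N(mu, theta).
    Returns the posterior IG parameters and E(theta | x), the Bayesian
    estimator under squared error loss (valid when alpha + n/2 > 1)."""
    x = np.asarray(x, dtype=float)
    a_post = alpha + len(x) / 2.0
    b_post = beta + 0.5 * np.sum((x - mu) ** 2)
    return a_post, b_post, b_post / (a_post - 1.0)

# Hypothetical data and hyperparameters, for illustration only.
print(ig_posterior([1.2, 0.7, 2.1, 1.5, 0.9], mu=1.0, alpha=3.0, beta=2.0))
```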

Model (b) (hierarchical Poisson and gamma model).
This hierarchical model has been investigated by [1,16,19,20]. Suppose that X_1, X_2, …, X_n are observed from the hierarchical Poisson and gamma model, where α > 0 and β > 0 are hyperparameters to be determined, P(θ) is the Poisson distribution with unknown mean θ > 0, and G(α, β) is the gamma distribution with unknown shape parameter α and unknown rate parameter β. The gamma prior G(α, β) is a conjugate prior for the Poisson model, so the posterior distribution of θ is also a gamma distribution. The hierarchical Poisson and gamma model (10) has been considered in Exercise 4.32 (p. 196) of [4]. It has been shown that the marginal distribution of X is a negative binomial distribution if α is a positive integer. Bayesian estimation of θ under the gamma prior is studied in [19] and in Tables 3.3.1 (p. 121) and 4.2.1 (p. 176) of [1]. However, they only calculated the Bayesian posterior estimator of θ under the squared error loss function.
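The conjugate update for Model (b) is equally direct: with θ ~ G(α, β) in the rate parameterization and X_i|θ ~ P(θ), the posterior is G(α + Σx_i, β + n). A minimal sketch with hypothetical counts:

```python
def gamma_posterior(x, alpha, beta):
    """Conjugate update for the Poisson-gamma model:
    theta ~ Gamma(alpha, rate beta), X_i | theta ~ Poisson(theta),
    giving the posterior Gamma(alpha + sum(x), rate beta + n)."""
    a_post = alpha + sum(x)
    b_post = beta + len(x)
    return a_post, b_post, a_post / b_post  # last entry is E(theta | x)

# Hypothetical counts and hyperparameters, for illustration only.
print(gamma_posterior([2, 0, 3, 1, 2], alpha=2.0, beta=1.0))
```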

One-dimensional case
The Bayesian estimator under Stein's loss function, δ_s^π(x), minimizes the posterior expected Stein's loss (PESL) (see [1,10,16]), where L_s(θ, a) = a/θ − log(a/θ) − 1 is Stein's loss function and θ > 0 is the unknown parameter of interest. The PESL is easy to obtain (see [10]). It is found in [10] that δ_s^π(x) = 1/E(θ⁻¹|x) by taking the partial derivative of the PESL with respect to a and setting it to 0. The PESLs evaluated at the Bayesian estimators are given in [10], where δ_2^π(x) = E(θ|x) is the Bayesian estimator under the squared error loss function.
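A short Monte Carlo check of the Stein's loss estimator, using a hypothetical Gamma(6, rate 2) posterior (an illustrative assumption, not a model from the chapter):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical Gamma(6, rate 2) posterior for a positive parameter theta.
theta = rng.gamma(shape=6.0, scale=0.5, size=200_000)

# For Stein's loss L(theta, a) = a/theta - log(a/theta) - 1, setting the
# derivative of the posterior expected loss to zero gives
# a = 1 / E(1/theta | x).
delta_s = 1.0 / np.mean(1.0 / theta)
delta_2 = theta.mean()  # squared error loss estimator E(theta | x)

print(delta_s, delta_2)
```

Here δ_s ≈ 2.5 and δ_2 ≈ 3, so δ_s < δ_2, consistent with the inequality between the two estimators reported in the chapter.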
For the variance parameter θ of the hierarchical normal and inverse gamma model (9), [10] recommends and analytically calculates the Bayesian estimator δ_s^π(x) with respect to the IG(α, β) prior under Stein's loss function. This estimator minimizes the PESL. [10] also analytically calculates the Bayesian estimator with respect to the IG(α, β) prior under the squared error loss, and the corresponding PESL. [10] notes that E(log θ|x), which is essential for the calculation of the PESLs, depends on the digamma function ψ(·). Finally, the numerical simulations exemplify that PESL_s(π, x) and PESL_2(π, x) depend only on α and n and do not depend on μ, β, and x; that the estimators δ_s^π(x) are unanimously smaller than the estimators δ_2^π(x); and that PESL_s(π, x) are unanimously smaller than PESL_2(π, x).

For the hierarchical Poisson and gamma model (10), [20] first calculates the posterior distribution of θ, π(θ|x), and the marginal pmf of x, π(x), in Theorem 1 of their paper. [20] then calculates the Bayesian posterior estimators δ_s^π(x) and δ_2^π(x), and the PESLs PESL_s(π, x) and PESL_2(π, x), which satisfy two inequalities. After that, the estimators of the hyperparameters of the model (10) by the moment method, α_1(n) and β_1(n), are summarized in Theorem 2 of their paper. Moreover, the estimators of the hyperparameters of the model (10) by the maximum likelihood estimator (MLE) method, α_2(n) and β_2(n), are summarized in Theorem 3 of their paper. Finally, the empirical Bayesian estimators of the parameter of the model (10) under Stein's loss function by the moment method and the MLE method are summarized in Theorem 4 of their paper. In the numerical simulations of [20], they illustrate the two inequalities of the Bayesian posterior estimators and the PESLs, the consistency of the moment estimators and the MLEs as estimators of the hyperparameters, and the goodness of fit of the model to the simulated data.
The numerical results indicate that the MLEs are better than the moment estimators when estimating the hyperparameters. Finally, [20] exploits the attendance data on 314 high school juniors from two urban high schools to illustrate their theoretical studies.
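As a sketch of the moment method for the Poisson-gamma hyperparameters (one standard moment-matching scheme, not claimed to be the exact construction of Theorem 2 in [20]): the marginal mean of the counts is α/β and the marginal variance is α/β + α/β², so matching sample moments gives β̂ = x̄/(s² − x̄) and α̂ = x̄β̂ whenever the counts are overdispersed:

```python
import numpy as np

def moment_estimates(x):
    """Match the marginal mean alpha/beta and variance alpha/beta + alpha/beta^2
    of the Poisson-gamma model to the sample mean and variance."""
    x = np.asarray(x, dtype=float)
    m, v = x.mean(), x.var(ddof=1)
    if v <= m:
        raise ValueError("no overdispersion: moment estimates undefined")
    beta_hat = m / (v - m)
    return m * beta_hat, beta_hat  # (alpha_hat, beta_hat)

rng = np.random.default_rng(3)
theta = rng.gamma(shape=4.0, scale=0.5, size=5_000)  # true alpha = 4, beta = 2
x = rng.poisson(theta)
print(moment_estimates(x))  # should land near (4, 2)
```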
For the variance parameter θ of the normal distribution with a normal-inverse-gamma prior (11), [23] recommends and analytically calculates the Bayesian posterior estimator, δ_s^π(x), with respect to the conjugate normal-inverse-gamma prior under Stein's loss function, which penalizes gross overestimation and gross underestimation equally. This estimator minimizes the PESL. As comparisons, the Bayesian posterior estimator, δ_2^π(x) = E(θ|x), with respect to the same conjugate prior under the squared error loss function, and the PESL at δ_2^π(x), are calculated. The calculations of δ_s^π(x), δ_2^π(x), PESL_s(π, x), and PESL_2(π, x) depend only on E(θ|x), E(θ⁻¹|x), and E(log θ|x). The numerical simulations exemplify their theoretical studies: the PESLs depend only on v_0 and n, but do not depend on μ_0, κ_0, σ_0, and especially x; the estimators δ_2^π(x) are unanimously larger than the estimators δ_s^π(x); and PESL_2(π, x) are unanimously larger than PESL_s(π, x). Finally, [23] calculates the Bayesian posterior estimators and the PESLs of the monthly simple returns of the Shanghai Stock Exchange (SSE) Composite Index, which also exemplify the theoretical studies of the two inequalities of the Bayesian posterior estimators and the PESLs.

Power-power loss function
The Bayesian estimator under the power-power loss function, δ_p^π(x), minimizes the posterior expected power-power loss (PEPL) (see [11]), where L_p(θ, a) is the power-power loss function and θ > 0 is the unknown parameter of interest. The PEPL is easy to obtain (see [11]). It is found in [11] that the Bayesian estimator follows by taking the partial derivative of the PEPL with respect to a and setting it to 0; the PEPLs evaluated at the Bayesian estimators are also given in [11]. The power-power loss function is proposed in [11], and it has all seven properties proposed in that paper. More specifically, it penalizes gross overestimation and gross underestimation equally, is convex in its argument, and has balanced convergence rates, or penalties, for its argument too large and too small. Therefore, it is recommended for the positive restricted parameter space Θ = (0, ∞).
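Since the exact definition of the power-power loss is given in [11], the sketch below assumes a representative form L_p(θ, a) = (a/θ)^k + (θ/a)^k − 2 with k = 1, purely for illustration, and minimizes the Monte Carlo PEPL over a grid of actions:

```python
import numpy as np

rng = np.random.default_rng(4)
theta = rng.gamma(shape=6.0, scale=0.5, size=100_000)  # hypothetical posterior

def pepl(a, theta, k=1.0):
    """Monte Carlo posterior expected loss for the assumed power-power form
    (a/theta)^k + (theta/a)^k - 2, which blows up as a -> 0 or a -> infinity."""
    r = a / theta
    return np.mean(r**k + r ** (-k) - 2.0)

grid = np.linspace(1.0, 5.0, 801)
a_star = grid[int(np.argmin([pepl(a, theta) for a in grid]))]
print(a_star)
```

For k = 1 the minimizer has the closed form sqrt(E(θ|x)/E(θ⁻¹|x)), which for this posterior is about 2.74, strictly between the Stein's loss estimator (2.5) and the squared error estimator (3) under this assumed form.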

Bayesian estimation for θ ∈ (0, 1)
There are some hierarchical models where the unknown parameter of interest is θ ∈ Θ = (0, 1). As pointed out in the introduction, we should calculate and use the Bayesian estimator of the parameter θ under the power-log loss function or Zhang's loss function, because they penalize gross overestimation and gross underestimation equally. We list two such hierarchical models as follows.
Model (d) (beta-binomial model). This hierarchical model has been investigated by [1,12,13,16,32,33]. Suppose that X_1, X_2, …, X_n are from the beta-binomial model, where α > 0 and β > 0 are known constants, m is a known positive integer, θ ∈ (0, 1) is the unknown parameter of interest, Be(α, β) is the beta distribution, and Bin(m, θ) is the binomial distribution. Specifically, Bayesian estimation of θ under the prior Be(α, β) is studied in Example 7.2.14 (p. 324) of [16] and in Tables 3.3.1 (p. 121) and 4.2.1 (p. 176) of [1]. However, they only calculate the Bayesian estimator with respect to the Be(α, β) prior under the squared error loss, δ_2^π(x) = E(θ|x). Moreover, they only consider one observation. The beta-binomial model has been investigated recently. For instance, [32] uses the beta-binomial to draw the random removals in progressive censoring; [12,13] use the beta-binomial to model some magazine exposure data for the monthly magazine Signature; [33] develops an estimation procedure for the parameters of a zero-inflated overdispersed binomial model in the presence of missing responses.
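The conjugate update for Model (d) can be sketched as follows: with θ ~ Be(α, β) and X_i|θ ~ Bin(m, θ), the posterior is Be(α + Σx_i, β + nm − Σx_i) (the counts below are hypothetical):

```python
def beta_posterior(x, m, alpha, beta):
    """Conjugate update for the beta-binomial model:
    theta ~ Be(alpha, beta), X_i | theta ~ Bin(m, theta),
    giving the posterior Be(alpha + sum(x), beta + n*m - sum(x))."""
    s, n = sum(x), len(x)
    a_post = alpha + s
    b_post = beta + n * m - s
    return a_post, b_post, a_post / (a_post + b_post)  # E(theta | x)

# Hypothetical counts out of m = 6 trials each, for illustration only.
print(beta_posterior([3, 5, 2, 4], m=6, alpha=2.0, beta=2.0))
```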
Model (e) (beta-negative binomial model). This hierarchical model has been investigated by [1,34]. Suppose that X_1, X_2, …, X_n are from the beta-negative binomial model, where α > 0 and β > 0 are known constants, m is a known positive integer, and θ ∈ (0, 1) is the unknown parameter of interest.
In Table 1, property (a) means that any action a of the parameter θ should incur a nonnegative loss. Property (b) means that when x = a/θ = 1, or a = θ, that is, a correctly estimates θ, the loss is 0. Property (c) means that when x = a/θ → (1/θ)⁻, that is, a is moving away from θ and tends to 1⁻, it will incur an infinite loss. Property (d) means that when x = a/θ → 0⁺, that is, a is moving away from θ and tends to 0⁺, it will also incur an infinite loss. Properties (c) and (d) together mean that the loss function penalizes gross overestimation and gross underestimation equally. Property (e) is useful in the proofs of some propositions on the minimaxity and admissibility of the Bayesian estimator (see [1]). Property (f) means that 1 and θ are the local extrema of L(x) and L(a|θ), respectively. Property (f) also implies that L(θ + Δa|θ) = o(Δa), that is, the loss incurred by an action a = θ + Δa near θ (Δa ≈ 0) is very small compared to Δa. It is easy to check (see the supplement of [12]) that L_pl(θ, a) = L_pl(a|θ) = L_pl(x)|_{x=a/θ}, which is called the power-log loss function, satisfies all six properties listed in Table 1. Consequently, the power-log loss function is a good loss function for Θ = (0, 1), and thus it is recommended for Θ = (0, 1). We remark that the power-log loss function on Θ = (0, 1) is an analog of the corresponding loss function on Θ = (0, ∞), which is the popular Stein's loss function.
The Bayesian estimator under the power-log loss function, δ_pl^π(x), minimizes the posterior expected power-log loss (PEPLL) (see [12]), E[L_pl(θ, a)|x], where A = {a(x) : a(x) ∈ (0, 1)} is the action space, a = a(x) ∈ (0, 1) is an action (estimator), L_pl(θ, a) given by (34) is the power-log loss function, and θ ∈ (0, 1) is the unknown parameter of interest. The PEPLL is easy to obtain (see [12]), and the Bayesian estimator follows by taking the partial derivative of the PEPLL with respect to a and setting it to 0. The PEPLLs evaluated at the Bayesian estimators are also given in [12]. Finally, the numerical simulations and a real data example of some monthly magazine exposure data (see [35]) exemplify the theoretical studies of two size relationships between the Bayesian estimators and the PEPLLs in [12].
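Whatever the exact form of a loss on (0, 1), the Bayesian action can always be approximated by minimizing the Monte Carlo posterior expected loss over a grid, as in the generic sketch below. It is illustrated with the squared error loss, whose known minimizer (the posterior mean) validates the recipe, and a hypothetical Be(8, 4) posterior; the power-log loss of [12] can be plugged in the same way:

```python
import numpy as np

rng = np.random.default_rng(5)
theta = rng.beta(8.0, 4.0, size=100_000)  # hypothetical posterior on (0, 1)

def bayes_action(loss, theta, grid):
    """Estimate the posterior expected loss on a grid of actions from
    posterior draws and return the minimizing action."""
    pel = [np.mean(loss(theta, a)) for a in grid]
    return grid[int(np.argmin(pel))]

# Squared error loss is used here only to validate the recipe:
# its minimizer is the posterior mean.
grid = np.linspace(0.01, 0.99, 981)
a_star = bayes_action(lambda t, a: (t - a) ** 2, theta, grid)
print(a_star, theta.mean())
```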

Zhang et al. [12] proposed six properties for a good loss function
Apart from the six properties, [13] proposes a seventh property (balanced convergence rates, or penalties, for the argument too large and too small) for a good loss function on Θ = (0, 1). The seven properties for a good loss function on Θ = (0, 1) are summarized in Table 1 of [13]. The explanations of the first six properties in Table 1 of [13] can be found in the previous subsection (see also [12]). In Table 1 of [13], L(k_1(θ)(1/n)) and L((1/θ)(1 − 1/n)) are said to be asymptotically equivalent. Similarly, L(k_2(θ)(1/n)|θ) and L(1 − 1/n|θ) are said to be asymptotically equivalent. They also say that L(x) (or L(a|θ)) has balanced convergence rates, or penalties, for x (or a) too large and too small. It is worth noting that k_1(θ)(1/n) → 0 and (1/θ)(1 − 1/n) → 1/θ at the same order O(1/n). Analogously, k_2(θ)(1/n) → 0 and 1 − 1/n → 1 at the same order O(1/n). Finally, property (g) may hold only when properties (c) and (d) hold. It is easy to check (see the supplement of [13]) that L_z(θ, a) = L_z(a|θ) = L_z(x)|_{x=a/θ}, which is called Zhang's loss function, satisfies all seven properties listed in Table 1 of [13]. Consequently, Zhang's loss function is a good loss function, and thus it is recommended for Θ = (0, 1). The Bayesian estimator under Zhang's loss function, δ_z^π(x), minimizes the posterior expected Zhang's loss (PEZL) (see [13]), E[L_z(θ, a)|x], where A = {a(x) : a(x) ∈ (0, 1)} is the action space, a = a(x) ∈ (0, 1) is an action (estimator), L_z(θ, a) given by (43) is Zhang's loss function, and θ ∈ (0, 1) is the unknown parameter of interest. The PEZL is easy to obtain (see [13]), and it is found in [13] that the Bayesian estimator follows by taking the partial derivative of the PEZL with respect to a and setting it to 0. The PEZLs evaluated at the Bayesian estimators are also given in [13]. Zhang et al. [13] consider an example of some magazine exposure data for the monthly magazine Signature (see [12,35]) and compare the numerical results with those of [12].
For the probability parameter θ of the beta-negative binomial model (31), [34] recommends and analytically calculates the Bayesian estimator δ_z^π(x) with respect to the Be(α, β) prior under Zhang's loss function, which penalizes gross overestimation and gross underestimation equally. This estimator minimizes the PEZL. They also calculate the usual Bayesian estimator δ_2^π(x) = E(θ|x), which minimizes the PESEL. Moreover, they obtain the PEZLs evaluated at the two Bayesian estimators, PEZL_z(π, x) and PEZL_2(π, x). After that, they show two theorems about the estimators of the hyperparameters of the beta-negative binomial model (31), when m is known or unknown, by the moment method (Theorem 1 in [34]) and the MLE method (Theorem 2 in [34]). Finally, the empirical Bayesian estimator of the probability parameter θ under Zhang's loss function is obtained with the hyperparameters estimated by the moment method or the MLE method from the two theorems.
In the numerical simulations of [34], they illustrate three things: the two inequalities of the Bayesian posterior estimators and the PEZLs; that the moment estimators and the MLEs are consistent estimators of the hyperparameters; and the goodness of fit of the beta-negative binomial model to the simulated data. The numerical simulations show that the MLEs are better than the moment estimators when estimating the hyperparameters in terms of the goodness of fit of the model to the simulated data. However, the MLEs are very sensitive to the initial estimators, and the moment estimators usually prove to be good initial estimators.
In the real data section of [34], they consider an example of some insurance claim data, which are assumed to come from the beta-negative binomial model (31). They consider four cases to fit the real data. In the first case, they assume that m = 6 is known for illustrative purposes (of course, one can assume another known m value). In the other three cases, they assume that m is unknown, and they provide three approaches to handle this scenario. The first two approaches consider a range of m values, for instance, m = 1, 2, …, 20. The first approach is to maximize the log-likelihood function. The second approach is to maximize the p-value of the goodness of fit of the model (31) to the real data. The third approach is to determine the hyperparameters α, β, and m from Theorems 1 and 2 in [34] by the moment method and the MLE method, respectively, when m is unknown. Four tables, which show the number of claims, the observed frequencies, the expected probabilities, and the expected frequencies of the insurance claims data, are provided to illustrate the four cases.

Inequalities among Bayesian posterior estimators
For the six loss functions, we have the corresponding six Bayesian estimators δ_w2^π(x), δ_pl^π(x), δ_s^π(x), δ_p^π(x), δ_2^π(x), and δ_z^π(x). Interestingly, for the six Bayesian estimators, we discover three strings of inequalities, which are summarized in Theorem 1 (see Theorem 1 in [36]). To our surprise, an order between the two Bayesian estimators δ_w2^π(x) and δ_pl^π(x) on Θ = (0, 1) does not exist. It is worth noting that the three strings of inequalities depend only on the loss functions. Moreover, the inequalities are independent of the chosen models and priors, provided the Bayesian estimators exist, and thus they hold in a general setting, which makes them quite interesting.
It is easy to see that all six loss functions are well defined on Θ = (0, 1), where there exists a string of inequalities among the six Bayesian estimators. Moreover, for Θ = (0, ∞), there exists a string of inequalities among the four Bayesian estimators. Finally, for Θ = (−∞, ∞), there exists an inequality between the two Bayesian estimators. The proof of Theorem 1 exploits a key, important, and unified tool, the covariance inequality (see Theorem 4.7.9 (p. 192) in [16]), and the proof can be found in the supplement of [36].
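Under assumed forms of the weighted squared error loss (weight 1/θ²) and the power-power loss (exponent k = 1), both illustrative choices rather than the chapter's definitions, the four estimators on (0, ∞) have closed forms given posterior draws, and a Monte Carlo check on a hypothetical gamma posterior exhibits a string of inequalities among them:

```python
import numpy as np

rng = np.random.default_rng(6)
theta = rng.gamma(shape=6.0, scale=0.5, size=300_000)  # hypothetical posterior

# Closed-form minimizers under assumed loss forms (illustration only):
d_w2 = np.mean(theta**-1) / np.mean(theta**-2)      # weighted SE, weight 1/theta^2
d_s = 1.0 / np.mean(theta**-1)                      # Stein's loss
d_p = np.sqrt(np.mean(theta) / np.mean(theta**-1))  # power-power, k = 1
d_2 = np.mean(theta)                                # squared error loss

print(d_w2, d_s, d_p, d_2)
```

For this posterior the four values come out near 2, 2.5, 2.74, and 3, an increasing string, which is the kind of ordering the covariance inequality delivers in general.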
It is worth noting that the six Bayesian estimators and the six smallest PELs are all functions of π, x, and the loss function. Because there exist three strings of inequalities among the six Bayesian estimators, one may wonder whether there exists a string of inequalities among the six smallest PELs, namely, PEWSEL_w2(π, x), PEPLL_pl(π, x), PESL_s(π, x), PEPL_p(π, x), PESEL_2(π, x), and PEZL_z(π, x). The answer to this question is no. The numerical simulations of the smallest PELs exemplify this fact (see Table 2 and [36]).

Table 2 (Table 1 in [36]). The six Bayesian estimators, the PELs, and the smallest PELs.

Conclusions and discussions
In this chapter, we have investigated six loss functions: the squared error loss function, the weighted squared error loss function, Stein's loss function, the power-power loss function, the power-log loss function, and Zhang's loss function. We now give some suggestions on the conditions for using each of the six loss functions. It is worth noting that among the six loss functions, the first two are defined on Θ = (−∞, ∞) and penalize overestimation and underestimation equally on (−∞, ∞); thus we recommend using them when the parameter space is (−∞, ∞). Moreover, the middle two loss functions are defined on Θ = (0, ∞) and penalize gross overestimation and gross underestimation equally on (0, ∞); thus we recommend using them when the parameter space is (0, ∞). In particular, if one prefers the loss function to have balanced convergence rates, or penalties, for its argument too large and too small, then we recommend the power-power loss function on (0, ∞). Furthermore, the last two loss functions are defined on Θ = (0, 1) and penalize gross overestimation and gross underestimation equally on (0, 1); thus we recommend using them when the parameter space is (0, 1). In particular, if one prefers the loss function to have balanced convergence rates, or penalties, for its argument too large and too small, then we recommend Zhang's loss function on (0, 1). For each of the six loss functions, we can find a corresponding Bayesian estimator, which minimizes the corresponding posterior expected loss. Among the six Bayesian estimators, there exist three strings of inequalities, summarized in Theorem 1 (see also Theorem 1 in [36]). However, a string of inequalities among the six smallest PELs does not exist.
We summarize three hierarchical models where the unknown parameter of interest is θ ∈ Θ = (0, ∞), namely, the hierarchical normal and inverse gamma model (9), the hierarchical Poisson and gamma model (10), and the hierarchical normal and normal-inverse-gamma model (11). In addition, we summarize two hierarchical models where the unknown parameter of interest is θ ∈ Θ = (0, 1), namely, the beta-binomial model (30) and the beta-negative binomial model (31). We now give some suggestions on the selection of the hyperparameters. One way to select the hyperparameters is through empirical Bayesian analysis, which relies on conjugate prior modeling, where the hyperparameters are estimated from the observations and the "estimated prior" is then used as a regular prior in the later inference. The marginal distribution can then be used to recover the prior distribution from the observations. For empirical Bayesian analysis, two common methods are used to obtain the estimators of the hyperparameters: the moment method and the MLE method. Numerical simulations show that the MLEs are better than the moment estimators when estimating the hyperparameters in terms of the goodness of fit of the model to the simulated data. However, the MLEs are very sensitive to the initial estimators, and the moment estimators usually prove to be good initial estimators.