
"Advances in Statistical Methodologies and Their Application to Real Problems", book edited by Tsukasa Hokimoto, ISBN 978-953-51-3102-1, Print ISBN 978-953-51-3101-4, published April 26, 2017 under the CC BY 3.0 license. © The Author(s).

Chapter 6

Gamma-Kumaraswamy Distribution in Reliability Analysis: Properties and Applications

By Indranil Ghosh and Gholamhossein G. Hamedani
DOI: 10.5772/66821




In this chapter, a new generalization of the Kumaraswamy distribution, namely the gamma-Kumaraswamy distribution, is defined and studied. Several distributional properties are discussed, including the limiting behavior, mode, quantiles, moments, skewness, kurtosis, Shannon's entropy, and order statistics. Under the classical approach, maximum likelihood estimation is proposed for inference about this distribution. We provide the results of analyses of two real data sets fitted with the gamma-Kumaraswamy distribution to exhibit the utility of this model.

Keywords: gamma-Kumaraswamy distribution, Renyi’s entropy, reliability parameter, stochastic ordering, characterizations

1. Introduction

Over the years, generalizing a distribution by mixing it with another distribution has provided a mathematically based way to model a wide variety of random phenomena statistically. These generalized distributions are effective and flexible models for analyzing and interpreting random durations in possibly heterogeneous populations. In many situations, observed data may be assumed to come from such a mixture of two or more distributions.

The two-parameter gamma and the two-parameter Kumaraswamy distributions are among the most popular distributions for analyzing lifetime data. The gamma distribution is well known, and it has several desirable properties [1].

A serious limitation of the gamma distribution, however, is that its distribution function (or survival function) is not available in closed form when the shape parameter is not an integer, so numerical methods are required to evaluate these quantities. As a consequence, the gamma distribution is less attractive than the Kumaraswamy distribution [2], which has tractable distribution, survival, and hazard functions. In this chapter, we consider a four-parameter gamma-Kumaraswamy distribution. It has many properties that are quite similar to those of a gamma distribution, but it possesses explicit expressions for the distribution and survival functions. The major motivation of this chapter is to introduce a new family of distributions, to make a comparative study of this family with respect to the Kumaraswamy and gamma families, and to provide the practitioner with an additional option, in the hope that it may provide a 'better fit' than a gamma or Kumaraswamy family in certain situations. It is noteworthy that the gamma-Kumaraswamy distribution is a generalization of the Kumaraswamy distribution that can exhibit various shapes ( Figure 1 ). This gives the gamma-Kumaraswamy distribution more flexibility than the Kumaraswamy distribution in modeling different data sets. The property of left-skewness is a rare characteristic, not shared by several other generalizations of the Kumaraswamy distribution. Our proposed model is different from that of Ref. [3], where the authors proposed a generalized gamma-generated distribution with an extra positive parameter for an arbitrary continuous baseline distribution G.


Figure 1. GK density plot for some specific parameter values.

The rest of the chapter is organized as follows. In Section 2, we propose the gamma-Kumaraswamy distribution [GK(α, β, a, b)]. In Section 3, we study various properties of the GK(α, β, a, b) distribution, including the limiting behavior, transformation, and the mode. In Section 4, the moment generating function, the moments, the mean deviations from the mean and the median, and Renyi's entropy are studied. In Section 5, we consider maximum likelihood estimation for the GK(α, β, a, b) distribution. In Section 6, we provide an expression for the reliability parameter for two independent GK(α, β, a, b) random variables with different choices of the parameters α and β but a common choice of the two shape parameters of the Kumaraswamy distribution. Section 7 discusses the moment generating function of the r-th order statistic and the limiting distributions of the sample minimum and maximum for a random sample of size n drawn from GK(α, β, a, b). An application of the GK(α, β, a, b) distribution is discussed in Section 8. Certain characterizations of the GK(α, β, a, b) distribution are presented in Section 9. In Section 10, some concluding remarks are made.

2. The gamma-Kumaraswamy distribution

We consider the following gamma-X class of distributions, whose parent model is

f(x) = \frac{1}{\Gamma(\alpha)\beta^{\alpha}}\,\frac{g(x)}{[\bar{G}(x)]^{2}}\left[\frac{G(x)}{\bar{G}(x)}\right]^{\alpha-1}\exp\!\left(-\frac{G(x)}{\beta\,\bar{G}(x)}\right), \qquad (1)
where α and β are positive parameters, g(x) and G(x) are the density and cumulative distribution functions of the random variable X, and Ḡ(x) = 1 − G(x) is the corresponding survival function.

If X has the density in Eq. (1), then the random variable W = G(X)/Ḡ(X) has a gamma distribution with parameters α and β; the converse is also true. Here, we take G(·) to be the cdf of a Kumaraswamy distribution with parameters a and b, that is, G(x) = 1 − (1 − x^a)^b for 0 < x < 1. The cdf of the gamma-Kumaraswamy (hereafter GK) distribution then reduces to

F(x) = \gamma_{1}\!\left(\alpha,\ \frac{1-(1-x^{a})^{b}}{\beta(1-x^{a})^{b}}\right), \quad 0 < x < 1, \qquad (2)
where γ1(α, z) = Γ(α, z)/Γ(α), with Γ(α, x) = ∫_0^x u^(α−1) e^(−u) du, denotes the regularized incomplete gamma function. The density and hazard functions corresponding to Eq. (2) are then given, respectively, by

f(x) = \frac{ab}{\Gamma(\alpha)\beta^{\alpha}}\, x^{a-1}(1-x^{a})^{-(b\alpha+1)}\left[1-(1-x^{a})^{b}\right]^{\alpha-1}\exp\!\left(-\frac{1-(1-x^{a})^{b}}{\beta(1-x^{a})^{b}}\right), \quad 0<x<1, \qquad (3)
h_{F}(x) = \frac{f(x)}{1-F(x)} = \frac{f(x)}{1-\gamma_{1}\!\left(\alpha,\ \frac{1-(1-x^{a})^{b}}{\beta(1-x^{a})^{b}}\right)}, \quad 0<x<1.

The percentile function of the GK distribution: The p-th percentile x_p is defined by F(x_p) = p. From Eq. (2), we have γ1(α, [1 − (1 − x_p^a)^b]/[β(1 − x_p^a)^b]) = p. Define Z_p = [1 − (1 − x_p^a)^b]/[β(1 − x_p^a)^b]; then Z_p = γ1^(−1)(α, p), where γ1^(−1) is the inverse of the regularized incomplete gamma function. Hence, x_p = (1 − (1 + βZ_p)^(−1/b))^(1/a).

In the density equation (3), a, b, and α are shape parameters and β is the scale parameter. It can be immediately verified that Eq. (3) is a density function. Plots of the GK density and hazard rate function for selected parameter values are given in Figures 1 and 2, respectively.


Figure 2. GK hazard rate function plot for some specific parameter values.
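For readers who wish to reproduce plots such as those in Figures 1 and 2 numerically, the density (3) and cdf (2) can be coded directly from the gamma-X construction. The following is a minimal R sketch; the helper names dgk and pgk are illustrative (not from the chapter), and the gamma component is taken as shape α and scale β, as stated above.

```r
## GK(alpha, beta, a, b) density and cdf via the gamma-X construction:
## W = G(X)/(1 - G(X)) ~ Gamma(shape = alpha, scale = beta), G = Kumaraswamy(a, b) cdf.
dgk <- function(x, alpha, beta, a, b) {
  g    <- a * b * x^(a - 1) * (1 - x^a)^(b - 1)     # Kumaraswamy density
  Gbar <- (1 - x^a)^b                               # Kumaraswamy survival function
  dgamma((1 - Gbar) / Gbar, shape = alpha, scale = beta) * g / Gbar^2
}
pgk <- function(x, alpha, beta, a, b) {
  Gbar <- (1 - x^a)^b
  pgamma((1 - Gbar) / Gbar, shape = alpha, scale = beta)
}
## Quick checks: the density integrates to 1 and agrees with the cdf.
integrate(dgk, 0, 1, alpha = 2, beta = 0.5, a = 3, b = 2)$value      # approximately 1
c(pgk(0.7, 2, 0.5, 3, 2),
  integrate(dgk, 0, 0.7, alpha = 2, beta = 0.5, a = 3, b = 2)$value) # two matching values
curve(dgk(x, 2, 0.5, 3, 2), from = 0.01, to = 0.99)                  # a density plot as in Figure 1
```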

If X~GK(a, b, α, β), then the survival function of X, S(x) will be

S(x) = 1 - \gamma_{1}\!\left(\alpha,\ \frac{1-(1-x^{a})^{b}}{\beta(1-x^{a})^{b}}\right), \quad 0 < x < 1.
We simulate the GK distribution by solving the nonlinear equation

\gamma_{1}\!\left(\alpha,\ \frac{1-(1-x^{a})^{b}}{\beta(1-x^{a})^{b}}\right) = u, \qquad \text{that is,} \qquad x = \left(1-\left(1+\beta\,\gamma_{1}^{-1}(\alpha,u)\right)^{-1/b}\right)^{1/a},
where u has the uniform (0,1) distribution. Some facts regarding the GK distribution are as follows:

  • If X~GK(a, b, α, β), then X^m ~ GK(a/m, b, α, β) for any m > 0.

  • Also, we have the following important result: if X~GK(1, b, α, β), then X^(1/a) ~ GK(a, b, α, β) for any a > 0.

  • The GK distribution does not possess the reproductive property: if X_1 ~ GK(a_1, b_1, α_1, β_1) and X_2 ~ GK(a_2, b_2, α_2, β_2) are independent, then the sum S = X_1 + X_2 does not follow a GK distribution.

The first result shows that the GK distribution is closed under power transformation, an important property for information analysis. The second result is equally important because it provides a simple way to generate random variables following the GK distribution, as illustrated in the sketch below.
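Because the quantile function derived above is available in closed form, inverse-transform sampling reduces to one call to the gamma quantile function. A minimal R sketch follows; the names qgk and rgk are illustrative, not from the chapter.

```r
## Quantile function and random generation for GK(alpha, beta, a, b) via inverse transform.
## Uses x_p = (1 - (1 + beta * Z_p)^(-1/b))^(1/a), where beta * Z_p is the Gamma(alpha, scale = beta) quantile.
qgk <- function(p, alpha, beta, a, b) {
  w <- qgamma(p, shape = alpha, scale = beta)   # w = G(x) / (1 - G(x)), G = Kumaraswamy cdf
  (1 - (1 + w)^(-1 / b))^(1 / a)
}
rgk <- function(n, alpha, beta, a, b) qgk(runif(n), alpha, beta, a, b)

set.seed(1)
x <- rgk(1e4, alpha = 2, beta = 0.5, a = 3, b = 2)
mean(x <= qgk(0.75, 2, 0.5, 3, 2))   # should be close to 0.75
```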

3. Properties of GK distribution

The following lemma establishes the relation between GK(α, β, a, b) distribution and gamma distribution.

Lemma 1. (Transformation): If a random variable X follows a gamma distribution with parameters α and β, then Y = (1 − (1 + X)^(−1/b))^(1/a) follows the GK(α, β, a, b) distribution.

Proof. The proof follows immediately by using the transformation technique. ∎

The limiting behaviors of the GK pdf and its hazard function are given in the following theorem.

Theorem 1. The limits of the GK density function f(x) and of the hazard function h_F(x) are given by

\lim_{x\to 0^{+}} f(x) = \lim_{x\to 0^{+}} h_{F}(x) = \begin{cases} \infty, & a\alpha < 1,\\[4pt] \dfrac{a\,b^{\alpha}}{\Gamma(\alpha)\beta^{\alpha}}, & a\alpha = 1,\\[4pt] 0, & a\alpha > 1, \end{cases} \qquad \lim_{x\to 1^{-}} f(x) = 0, \qquad \lim_{x\to 1^{-}} h_{F}(x) = \infty.
Proof. Straightforward and hence omitted. ∎

Theorem 2. The mode of the GK distribution is the solution of the equation k(x)=0, where

k(x) = \frac{a-1}{x} + \frac{a(b\alpha+1)\,x^{a-1}}{1-x^{a}} + \frac{ab(\alpha-1)\,x^{a-1}(1-x^{a})^{b-1}}{1-(1-x^{a})^{b}} - \frac{ab\,x^{a-1}}{\beta(1-x^{a})^{b+1}}.
Proof. The derivative of f(x) in Eq. (3) can be written as

f'(x) = f(x)\,k(x). \qquad (10)
The critical values of Eq. (10) are the solutions of k(x) = 0. ∎

Next, we discuss the IFR and/or DFR property of the hazard function of the GK distribution. For this, we use the result of Lemma 1: if X~GK(a, b, α, β), then Y = [1 − (1 − X^a)^b]/(1 − X^a)^b ~ Gamma(α, β). In this case, the hazard rate function of Y can be written as

r(t) = \frac{t^{\alpha-1}\,e^{-t/\beta}}{\int_{t}^{\infty} y^{\alpha-1}\,e^{-y/\beta}\,dy}.
Therefore, r(t) = [∫_0^∞ (1 + u/t)^(α−1) exp(−u/β) du]^(−1). If α > 1, then (1 + u/t)^(α−1) is decreasing in t, so r(t) is increasing and Y has an increasing failure rate (IFR). If 0 < α < 1, then
(1 + u/t)^(α−1) is increasing in t, so r(t) decreases and Y has a decreasing failure rate (DFR). Now, since X is a one-to-one function of Y, the hazard rate function of X will also follow the same pattern.
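The integral representation of r(t) used above is easy to verify numerically. The short R sketch below (an illustrative check, not code from the chapter) compares it with the gamma hazard computed directly from dgamma and pgamma.

```r
## Check of 1 / r(t) = integral_0^Inf (1 + u/t)^(alpha - 1) * exp(-u / beta) du
## for the Gamma(alpha, scale = beta) hazard rate.
alpha <- 2.5; beta0 <- 1.3; t0 <- 0.8
direct <- dgamma(t0, shape = alpha, scale = beta0) /
          pgamma(t0, shape = alpha, scale = beta0, lower.tail = FALSE)
viaInt <- 1 / integrate(function(u) (1 + u / t0)^(alpha - 1) * exp(-u / beta0), 0, Inf)$value
c(direct, viaInt)   # the two values agree
```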

Let X and Y be two random variables. X is said to be stochastically greater than or equal to Y, denoted by X ≥_st Y, if P(X > x) ≥ P(Y > x) for all x in the support of X.

Theorem 3. Suppose X~GK(a_1, b_1, α, β_1) and Y~GK(a_2, b_2, α, β_2). If β_1 > β_2, a_1 > a_2, and b_1 < b_2, then X ≥_st Y for integer values of a_1 and a_2.

Proof. First, note that the incomplete gamma function Γ(α, x) is an increasing function of x for fixed α. For any real number x ∈ (0, 1), β_1 > β_2, a_1 > a_2, and b_1 < b_2, we have

\frac{(1-x^{a_{1}})^{-b_{1}}-1}{\beta_{1}} \;\le\; \frac{(1-x^{a_{2}})^{-b_{2}}-1}{\beta_{2}}.
This implies that Γ(α, β_1^(−1)[(1 − x^(a_1))^(−b_1) − 1]) ≤ Γ(α, β_2^(−1)[(1 − x^(a_2))^(−b_2) − 1]), or equivalently, P(X > x) ≥ P(Y > x), and this completes the proof. ∎

Note: For fractional choices of a 1 and a 2, the reverse of the above inequality will hold.
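Theorem 3 can be checked numerically by comparing the two cdfs over a grid of points. The R sketch below uses an illustrative pgk helper (shape/scale gamma parameterization as before) and parameter values satisfying the hypotheses β_1 > β_2, a_1 > a_2, b_1 < b_2 with integer a_1 and a_2.

```r
## Numerical check of Theorem 3: F_X(x) <= F_Y(x), i.e. P(X > x) >= P(Y > x), on (0, 1).
pgk <- function(x, alpha, beta, a, b) {
  Gbar <- (1 - x^a)^b
  pgamma((1 - Gbar) / Gbar, shape = alpha, scale = beta)
}
xx <- seq(0.01, 0.99, by = 0.01)
FX <- pgk(xx, alpha = 2, beta = 1.5, a = 3, b = 1)   # X ~ GK(a1 = 3, b1 = 1, alpha = 2, beta1 = 1.5)
FY <- pgk(xx, alpha = 2, beta = 0.5, a = 2, b = 2)   # Y ~ GK(a2 = 2, b2 = 2, alpha = 2, beta2 = 0.5)
all(FX <= FY)   # TRUE: X is stochastically larger than Y
```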

4. Moments and mean deviations

For any r ≥ 1,

E(X^{r}) = \int_{0}^{1} x^{r} f(x)\,dx
= \frac{1}{\Gamma(\alpha)} \int_{0}^{\infty} e^{-u}\, u^{\alpha-1}\left(1-(1+\beta u)^{-1/b}\right)^{r/a} du \qquad \text{(on substituting } u = \tfrac{1-(1-x^{a})^{b}}{\beta(1-x^{a})^{b}}\text{)}
= \frac{1}{\Gamma(\alpha)} \sum_{j=0}^{\infty} (-1)^{j}\binom{r/a}{j} \int_{0}^{\infty} e^{-u}\, u^{\alpha-1}(1+\beta u)^{-j/b}\, du
= \frac{1}{\Gamma(\alpha)} \sum_{j=0}^{\infty}\sum_{k=0}^{\infty} (-1)^{j+k}\binom{r/a}{j}\binom{j/b+k-1}{k}\, \beta^{k} \int_{0}^{\infty} e^{-u}\, u^{\alpha+k-1}\, du
= \frac{1}{\Gamma(\alpha)} \sum_{j=0}^{\infty}\sum_{k=0}^{\infty} (-1)^{j+k}\binom{r/a}{j}\binom{j/b+k-1}{k}\, \beta^{k}\, \Gamma(\alpha+k). \qquad (13)
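Before turning to bounds and the characteristic function, the moments can be cross-checked against direct numerical evaluation. The R sketch below (illustrative helper names; shape/scale gamma parameterization) computes E(X^r) by quadrature and by simulation through the gamma representation of Lemma 1.

```r
## E(X^r) for GK(alpha, beta, a, b): numerical quadrature versus Monte Carlo.
dgk <- function(x, alpha, beta, a, b) {
  Gbar <- (1 - x^a)^b
  dgamma((1 - Gbar) / Gbar, shape = alpha, scale = beta) *
    a * b * x^(a - 1) * (1 - x^a)^(b - 1) / Gbar^2
}
mom_gk <- function(r, alpha, beta, a, b)
  integrate(function(x) x^r * dgk(x, alpha, beta, a, b), 0, 1)$value

set.seed(1)
w <- rgamma(1e5, shape = 2, scale = 0.5)       # W = G(X)/(1 - G(X)) ~ Gamma(2, scale = 0.5)
x <- (1 - (1 + w)^(-1 / 2))^(1 / 3)            # back-transform with a = 3, b = 2
c(mom_gk(1, 2, 0.5, 3, 2), mean(x))            # first moment: quadrature vs simulation
c(mom_gk(2, 2, 0.5, 3, 2), mean(x^2))          # second moment
```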

Upper bound for the r-th order moment: Since \binom{n}{k} \le \frac{n^{k}}{k!} for 1 \le k \le n, from Eq. (13) one can write
E(X^{r}) \le \frac{1}{\Gamma(\alpha)} \sum_{j=0}^{\infty}\sum_{k=0}^{\infty} \frac{(r/a)^{j}}{j!}\,\frac{(j/b+k-1)^{k}}{k!}\,\beta^{k}\,\Gamma(\alpha+k),
provided r/a and j/b + k − 1 are both integers. Employing successively the generalized series expansion of (1 − (1 + βu)^(−1/b))^(j/a), the characteristic function of X~GK(a, b, α, β) is given by [from Eq. (3)]

\varphi_{X}(t) = \int_{0}^{1} e^{itx} f(x)\,dx
= \frac{1}{\Gamma(\alpha)} \int_{0}^{\infty} u^{\alpha-1} e^{-u} \exp\!\left(it\left(1-(1+\beta u)^{-1/b}\right)^{1/a}\right) du \qquad \text{(on substituting } u = \tfrac{1-(1-x^{a})^{b}}{\beta(1-x^{a})^{b}}\text{)}
= \frac{1}{\Gamma(\alpha)} \sum_{j=0}^{\infty} \frac{(it)^{j}}{j!} \int_{0}^{\infty} \left(1-(1+\beta u)^{-1/b}\right)^{j/a} u^{\alpha-1} e^{-u}\, du
= \frac{1}{\Gamma(\alpha)} \sum_{j=0}^{\infty}\sum_{k_{1}=0}^{\infty}\sum_{k_{2}=0}^{\infty} (-1)^{k_{1}+k_{2}}\,\beta^{k_{2}}\,\frac{(it)^{j}}{j!}\binom{j/a}{k_{1}}\binom{k_{1}/b+k_{2}-1}{k_{2}}\,\Gamma(\alpha+k_{2}). \qquad (14)

If j/a and k_1/b are integers, then in Eq. (14) the second and third summations will stop at j/a and k_1/b, respectively.

If we denote the median by T, then the mean deviation from the mean, D(μ) , and the mean deviation from the median, D(T), can be written as

D(\mu) = \int_{0}^{1}|x-\mu|\,f(x)\,dx = 2\mu F(\mu) - 2\int_{0}^{\mu} x f(x)\,dx, \qquad D(T) = \int_{0}^{1}|x-T|\,f(x)\,dx = \mu - 2\int_{0}^{T} x f(x)\,dx.
Now, consider

I(q) = \int_{0}^{q} x f(x)\,dx, \qquad 0 < q < 1. \qquad (17)
Using the substitution u = [1 − (1 − x^a)^b]/[β(1 − x^a)^b] in Eq. (17), we obtain


where we used the binomial series expansion successively.

By using Eqs. (2) and (18), the mean deviation from the mean and the mean deviation from the median are, respectively, given by


4.1. Entropy

One useful measure of diversity for a probability model is given by Renyi's entropy. It is defined as I_R(ρ) = (1 − ρ)^(−1) log(∫ f^ρ(x) dx), where ρ > 0 and ρ ≠ 1. If a random variable X has a GK distribution, then we have

f^{\rho}(x) = \left(\frac{ab}{\Gamma(\alpha)\beta^{\alpha}}\right)^{\rho} x^{\rho(a-1)}(1-x^{a})^{-\rho(b\alpha+1)}\left[1-(1-x^{a})^{b}\right]^{\rho(\alpha-1)}\exp\!\left(-\frac{\rho\left[1-(1-x^{a})^{b}\right]}{\beta(1-x^{a})^{b}}\right).
Next, consider the integral

\int_{0}^{1} f^{\rho}(x)\,dx = \left(\frac{ab}{\Gamma(\alpha)\beta^{\alpha}}\right)^{\rho} \rho^{-1} \int_{0}^{\infty} u^{\alpha-1} e^{-u}\left(1-(1+\beta u^{1/\rho})^{-1/b}\right)^{1/a} du \qquad \text{(on substituting } u = \left(\tfrac{1-(1-x^{a})^{b}}{\beta(1-x^{a})^{b}}\right)^{\rho}\text{)}. \qquad (21)

Now, using successive application of the generalized binomial expansion, we can write


Hence, the integral in Eq. (21) reduces to


Therefore, the expression for the Renyi’s entropy will be


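Independently of the series representation, Renyi's entropy for given parameter values can also be obtained by numerically integrating f^ρ over (0, 1). A minimal R sketch follows (renyi_gk and dgk are illustrative names, not from the chapter).

```r
## Renyi entropy I_R(rho) = (1 - rho)^(-1) * log( integral of f^rho over (0, 1) ).
dgk <- function(x, alpha, beta, a, b) {
  Gbar <- (1 - x^a)^b
  dgamma((1 - Gbar) / Gbar, shape = alpha, scale = beta) *
    a * b * x^(a - 1) * (1 - x^a)^(b - 1) / Gbar^2
}
renyi_gk <- function(rho, alpha, beta, a, b) {
  stopifnot(rho > 0, rho != 1)
  log(integrate(function(x) dgk(x, alpha, beta, a, b)^rho, 0, 1)$value) / (1 - rho)
}
renyi_gk(2,   alpha = 2, beta = 0.5, a = 3, b = 2)   # rho = 2 (collision entropy)
renyi_gk(0.5, alpha = 2, beta = 0.5, a = 3, b = 2)   # rho = 1/2
```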
5. Maximum likelihood estimation

In this section, we address the parameter estimation of the GK(α, β, a, b) distribution under the classical setup. Let X_1, X_2, …, X_n be a random sample of size n drawn from the density in Eq. (3). The log-likelihood function is given by

\ell(\alpha,\beta,a,b) = n\log a + n\log b - n\log\Gamma(\alpha) - n\alpha\log\beta + (a-1)\sum_{i=1}^{n}\log x_{i} - (b\alpha+1)\sum_{i=1}^{n}\log(1-x_{i}^{a}) + (\alpha-1)\sum_{i=1}^{n}\log\!\left[1-(1-x_{i}^{a})^{b}\right] - \frac{1}{\beta}\sum_{i=1}^{n}\frac{1-(1-x_{i}^{a})^{b}}{(1-x_{i}^{a})^{b}}.
The derivatives of the log-likelihood function with respect to α, β, a, and b are given by


where Ψ(α) = ∂ log Γ(α)/∂α is the digamma function,


The MLEs α^, β^, a^, and b^ are obtained by setting Eqs. (26)–(29) to zero and solving them simultaneously.

To estimate the model parameters, numerical iterative techniques must be used to solve these equations. We may investigate the global maximum of the log-likelihood by trying different starting values for the parameters. The information matrix is required for interval estimation. The elements of the 4 × 4 total observed information matrix (used because the expected values are difficult to calculate), J(θ) = {J_{r,s}(θ)} for r, s ∈ {α, β, a, b}, with θ = (α, β, a, b), can be obtained from the authors upon request. Under the usual regularity conditions, the asymptotic distribution of (θ^ − θ) is N_4(0, K(θ)^(−1)), where K(θ) = E[J(θ)] is the expected information matrix and J(θ^) is the observed information matrix. The multivariate normal N_4(0, K(θ)^(−1)) distribution can be used to construct approximate confidence intervals for the individual parameters.
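As a concrete illustration of this estimation procedure, the sketch below fits the four parameters by numerically minimizing the negative log-likelihood built from Eq. (3) with R's optim(). The function name negloglik_gk and the simulated data are illustrative, and the numerically differentiated Hessian stands in for the observed information matrix J(θ^).

```r
## Maximum likelihood estimation of GK(alpha, beta, a, b) by direct numerical optimisation.
negloglik_gk <- function(par, x) {
  if (any(par <= 0)) return(Inf)                       # keep all four parameters positive
  alpha <- par[1]; beta <- par[2]; a <- par[3]; b <- par[4]
  Gbar <- (1 - x^a)^b
  w    <- (1 - Gbar) / Gbar                            # gamma-distributed transform of the data
  -sum(dgamma(w, shape = alpha, scale = beta, log = TRUE) +
         log(a * b) + (a - 1) * log(x) + (b - 1) * log(1 - x^a) - 2 * log(Gbar))
}
set.seed(1)
w <- rgamma(500, shape = 2, scale = 0.5)
x <- (1 - (1 + w)^(-1 / 2))^(1 / 3)                    # sample of size 500 from GK(2, 0.5, 3, 2)
fit <- optim(c(1, 1, 1, 1), negloglik_gk, x = x,
             hessian = TRUE, control = list(maxit = 5000))
fit$par                                                # MLEs of (alpha, beta, a, b)
sqrt(diag(solve(fit$hessian)))                         # approximate standard errors
```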

5.1. Simulation study

In order to assess the performance of the MLEs, a small simulation study is performed using the statistical software R (package stats4, function mle). The number of Monte Carlo replications is 20,000. For maximizing the log-likelihood function, we use the MaxBFGS subroutine with analytical derivatives. The estimates are evaluated, for each sample size, through the empirical biases and mean squared errors (MSEs) computed from the Monte Carlo replications. The MLEs are determined for each simulated data set, say (α^_i, β^_i, a^_i, b^_i) for i = 1, 2, …, 20,000, and the biases and MSEs are computed by

\widehat{\mathrm{Bias}}_{h}(n) = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{h}_{i}-h\right) \qquad \text{and} \qquad \widehat{\mathrm{MSE}}_{h}(n) = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{h}_{i}-h\right)^{2}, \qquad N = 20{,}000,

for h = α, β, a, b. We consider the sample sizes n = 100, 200, and 500 and different values for the parameters. The empirical results are given in Table 1. The figures in Table 1 indicate that the estimates are quite stable and, more importantly, close to the true values for these sample sizes. Furthermore, as the sample size increases, the MSEs decrease as expected.

n (sample size)   Actual values: α, β, a, b   Bias: α^, β^, a^, b^   MSE: α^, β^, a^, b^

Table 1.

Bias and MSE of the estimates under the maximum likelihood method.
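A small-scale sketch of this experiment (200 replications of n = 100 rather than 20,000, with the negative log-likelihood restated so the block is self-contained, and illustrative true parameter values) shows how the biases and MSEs reported in Table 1 are obtained.

```r
## Monte Carlo bias and MSE of the MLEs (illustrative, small scale).
nll <- function(par, x) {
  if (any(par <= 0)) return(Inf)
  Gbar <- (1 - x^par[3])^par[4]; w <- (1 - Gbar) / Gbar
  -sum(dgamma(w, shape = par[1], scale = par[2], log = TRUE) + log(par[3] * par[4]) +
         (par[3] - 1) * log(x) + (par[4] - 1) * log(1 - x^par[3]) - 2 * log(Gbar))
}
true <- c(alpha = 2, beta = 0.5, a = 3, b = 2); B <- 200; n <- 100
set.seed(2)
est <- t(replicate(B, {
  w <- rgamma(n, shape = true[1], scale = true[2])
  x <- (1 - (1 + w)^(-1 / true[4]))^(1 / true[3])      # sample from GK(true)
  optim(true, nll, x = x, control = list(maxit = 2000))$par
}))
bias <- colMeans(est) - true
mse  <- colMeans(sweep(est, 2, true)^2)
round(rbind(bias = bias, MSE = mse), 3)
```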

6. Reliability parameter

The reliability parameter R is defined as R = P(X > Y), where X and Y are independent random variables. For a detailed study of possible applications of the reliability parameter, the interested reader is referred to Refs. [4, 5]. If X and Y are continuous and independent random variables with cdfs F_1(x) and F_2(y) and pdfs f_1(x) and f_2(y), respectively, then the reliability parameter R can be written as

R = P(X > Y) = \int_{0}^{1} F_{2}(t)\, f_{1}(t)\, dt.
Theorem 4. Let X~GK(a, b, α_1, β_1) and Y~GK(a, b, α_2, β_2); then


Proof: From Eqs. (2) and (3), we have


Using the series expansion of the incomplete gamma function, γ_1(k, x) = (x^k/Γ(k)) Σ_{p=0}^{∞} (−x)^p/[p!(k + p)], and the substitution u = [1 − (1 − t^a)^b]/[β_1(1 − t^a)^b], Eq. (34) reduces to


Hence the proof. ∎
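The reliability parameter can also be approximated without the series of Theorem 4, either by numerically integrating R = ∫ F_2(t) f_1(t) dt or by Monte Carlo through the gamma representation of Lemma 1. An illustrative R sketch (helper names are not from the chapter):

```r
## R = P(X > Y) for independent X ~ GK(a, b, alpha1, beta1) and Y ~ GK(a, b, alpha2, beta2).
pgk <- function(x, alpha, beta, a, b) {
  Gbar <- (1 - x^a)^b
  pgamma((1 - Gbar) / Gbar, shape = alpha, scale = beta)
}
dgk <- function(x, alpha, beta, a, b) {
  Gbar <- (1 - x^a)^b
  dgamma((1 - Gbar) / Gbar, shape = alpha, scale = beta) *
    a * b * x^(a - 1) * (1 - x^a)^(b - 1) / Gbar^2
}
a <- 2; b <- 3
## Quadrature: R = integral over (0, 1) of F_Y(x) * f_X(x).
R_quad <- integrate(function(x) pgk(x, 1.5, 0.7, a, b) * dgk(x, 2.5, 1.0, a, b), 0, 1)$value
## Monte Carlo via the gamma back-transform.
set.seed(3)
wx <- rgamma(1e5, shape = 2.5, scale = 1.0); x <- (1 - (1 + wx)^(-1 / b))^(1 / a)
wy <- rgamma(1e5, shape = 1.5, scale = 0.7); y <- (1 - (1 + wy)^(-1 / b))^(1 / a)
c(R_quad, mean(x > y))   # the two estimates approximately agree
```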

7. Order statistics

Here, we derive the general r-th order statistic and the large sample distribution of the sample minimum and the sample maximum based on a random sample of size n from the GK(α, β, a, b) distribution. The corresponding density function of the r-th order statistic, Xr:n, from Eq. (3) will be

f_{r:n}(x) = \frac{1}{B(r,\,n-r+1)}\, f(x)\,[F(x)]^{r-1}\,[1-F(x)]^{n-r}, \qquad 0 < x < 1.
Using the series expression for the incomplete gamma function, γ_1(α, x) = Σ_{k=0}^{∞} e^(−x) x^(α+k)/[Γ(α) α(α+1)⋯(α+k)], the pdf of X_{r:n} can be written as

f_{r:n}(x) = \frac{f(x)}{B(r,\,n-r+1)} \sum_{j=0}^{r-1} (-1)^{j}\binom{r-1}{j}\left(\sum_{k=0}^{\infty} \frac{e^{-z(x)}\,[z(x)]^{\alpha+k}}{\Gamma(\alpha)\,\alpha(\alpha+1)\cdots(\alpha+k)}\right)^{n-r+j}, \qquad z(x) = \frac{1-(1-x^{a})^{b}}{\beta(1-x^{a})^{b}},
= \frac{f(x)}{B(r,\,n-r+1)} \sum_{j=0}^{r-1}\ \sum_{k_{1},\ldots,k_{n-r+j}=0}^{\infty} (-1)^{j+s_{k}}\binom{r-1}{j}\exp\!\left(-(n-r+j)\,z(x)\right)\frac{\left[\frac{1-(1-x^{a})^{b}}{(1-x^{a})^{b}}\right]^{s_{k}+(n-r+j)\alpha}}{[\Gamma(\alpha)]^{n-r+j}\,\beta^{s_{k}+(n-r+j)\alpha}\,p_{k}}
= \frac{1}{B(r,\,n-r+1)} \sum_{j=0}^{r-1}\ \sum_{k_{1},\ldots,k_{n-r+j}=0}^{\infty} (-1)^{j+s_{k}}\binom{r-1}{j}\frac{\Gamma\!\left(s_{k}+(n-r+j)\alpha\right)}{[\Gamma(\alpha)]^{n-r+j}\,p_{k}}\; f\!\left(x \mid s_{k}+(n-r+j)\alpha,\ \beta,\ a,\ b\right), \qquad (37)

where s_{k} = \sum_{i=1}^{n-r+j} k_{i} and p_{k} = \prod_{i=1}^{n-r+j} (\alpha + k_{i}).

From Eq. (37), it is interesting to note that the pdf of the r-th order statistic X_{r:n} can be expressed as an infinite sum of GK pdfs.
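Rather than evaluating the series in Eq. (37), the order-statistic density can be computed directly from f_{r:n}(x) = f(x) F(x)^(r−1) [1 − F(x)]^(n−r)/B(r, n − r + 1). The R sketch below (illustrative helper names) checks that it integrates to one.

```r
## Density of the r-th order statistic from a GK(alpha, beta, a, b) sample of size n.
pgk <- function(x, alpha, beta, a, b) {
  Gbar <- (1 - x^a)^b
  pgamma((1 - Gbar) / Gbar, shape = alpha, scale = beta)
}
dgk <- function(x, alpha, beta, a, b) {
  Gbar <- (1 - x^a)^b
  dgamma((1 - Gbar) / Gbar, shape = alpha, scale = beta) *
    a * b * x^(a - 1) * (1 - x^a)^(b - 1) / Gbar^2
}
dgk_order <- function(x, r, n, ...) {
  Fx <- pgk(x, ...)
  dgk(x, ...) * Fx^(r - 1) * (1 - Fx)^(n - r) / beta(r, n - r + 1)   # beta() = beta function, not the scale parameter
}
## e.g. the sample maximum (r = n) in samples of size n = 5 from GK(2, 0.5, 3, 2):
integrate(dgk_order, 0, 1, r = 5, n = 5, alpha = 2, beta = 0.5, a = 3, b = 2)$value   # ~ 1
```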

8. Application

Here, we consider two well-known illustrative data sets to show the efficacy of the GK distribution; for details on these data sets, see Refs. [6, 7]. The second data set, given in Table 2, is from Ref. [8] and represents the fatigue life of 6061-T6 aluminum coupons cut parallel to the direction of rolling and oscillated at 18 cycles per second. The GK distribution is fitted to the first data set, and the results are compared with those of the Kumaraswamy, gamma-uniform [9], and beta-Pareto [10] distributions; these results are reported in Table 3. The results show that the gamma-uniform and GK distributions provide an adequate fit to the data. Figure 3 displays the empirical and fitted cumulative distribution functions and supports the results in Table 3. A close look at Figure 3 indicates that the GK distribution provides a better fit to the left tail than the gamma-uniform distribution, because the GK distribution can have a longer left tail ( Figure 3 ).


Table 2.

Fatigue life of 6061-T6 aluminum data.

Distribution           Kumaraswamy     Gamma-uniform    Beta-Pareto     Gamma-Kumaraswamy
Parameter estimates    a^ = 0.653      α^ = 7.528       c^ = 5.048      α^ = 7.891
                       b^ = 1.1182     β^ = 2.731       β^ = 0.401      β^ = 0.785
                                       a^ = 6.49        θ^ = 6.417      a^ = 5.352
                                       b^ = 0.932                       b^ = 1.735
Log-likelihood         −162.34         −116.58          −113.36         −113.25
A0*                    12.164          0.5951           1.3125          0.4282
W0*                    2.8937          0.0931           0.1317          0.04893
K-S p-value            0.0000          0.9140           0.8374          0.9978

Table 3.

Goodness of fit of deep-groove ball bearings data.


Figure 3. cdf for fitted distributions of the endurance of deep-groove ball bearings data.

In addition, to check the goodness of fit of all the statistical models, several other goodness-of-fit statistics are used; they are computed with the computational package Mathematica. The MLEs are obtained with the NMaximize routine, together with the following measures of goodness of fit: the log-likelihood function evaluated at the MLEs (ℓ), the Akaike information criterion (AIC), the corrected Akaike information criterion (AICC), the consistent Akaike information criterion (CAIC), the Anderson-Darling (A*), the Cramer-von Mises (W*), and the Kolmogorov-Smirnov (K-S) statistics with their p values. These statistics are used to evaluate how closely a specific distribution with cdf (2) fits the corresponding empirical distribution for a given data set. The better-fitting distribution is the one with the smallest values of these statistics and the largest K-S p value; equivalently, the distribution attaining the smallest value of each criterion (AIC, AICC, K-S, etc.) is the most suitable one. The mathematical definitions of these statistics are given by

  • AIC = −2ℓ(θ^) + 2q

  • AICC = AIC + 2q(q + 1)/(n − q − 1)

  • CAIC = −2ℓ(θ^) + 2qn/(n − q − 1)

  • A0* = (2.25/n² + 0.75/n + 1) [ −n − (1/n) Σ_{i=1}^{n} (2i − 1) log( z_i (1 − z_{n−i+1}) ) ]

  • W0* = (0.5/n + 1) [ Σ_{i=1}^{n} ( z_i − (2i − 1)/(2n) )² + 1/(12n) ]

  • KS = max_i ( i/n − z_i , z_i − (i − 1)/n ),

where ℓ(θ^) denotes the log-likelihood function evaluated at the maximum likelihood estimates, q is the number of parameters, n is the sample size, and z_i = F(y_i) is the fitted cdf evaluated at the i-th ordered observation y_i.
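These statistics depend on the fitted model only through the ordered values z_i and the maximized log-likelihood, so they are straightforward to compute once a model has been fitted. A direct R transcription of the definitions above (the chapter used Mathematica; gof_stats is an illustrative name, and the inputs shown are placeholders):

```r
## Goodness-of-fit summaries from ordered fitted probabilities z, log-likelihood and q parameters.
gof_stats <- function(z, loglik, q) {
  n <- length(z); z <- sort(z); i <- seq_len(n)
  A2 <- -n - mean((2 * i - 1) * (log(z) + log(1 - rev(z))))       # Anderson-Darling A^2
  W2 <- sum((z - (2 * i - 1) / (2 * n))^2) + 1 / (12 * n)         # Cramer-von Mises W^2
  c(AIC  = -2 * loglik + 2 * q,
    AICC = -2 * loglik + 2 * q + 2 * q * (q + 1) / (n - q - 1),
    CAIC = -2 * loglik + 2 * q * n / (n - q - 1),
    A0   = A2 * (1 + 0.75 / n + 2.25 / n^2),
    W0   = W2 * (1 + 0.5 / n),
    KS   = max(i / n - z, z - (i - 1) / n))
}
## Usage: z are the fitted cdf values at the ordered observations, e.g. z <- pgk(sort(y), ...).
set.seed(4)
gof_stats(z = sort(runif(50)), loglik = -100, q = 4)   # illustrative placeholder inputs
```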

Mahmoudi [8] proposed a five-parameter beta generalized Pareto distribution, fitted it to the data in Table 4, and compared the results with the beta-Pareto and other known distributions. The results of fitting the beta generalized Pareto and beta-Pareto distributions from Ref. [8] are reported in Table 4, along with the results of fitting the Pareto (IV) and GK distributions to the data. The K-S values in Table 4 indicate that the GK distribution provides the best fit. The fact that the GK distribution has fewer parameters than the beta generalized Pareto and beta-Pareto distributions gives it an additional advantage. Figure 4 displays the empirical and fitted cumulative distribution functions and supports the results in Table 4.

Distribution           Kumaraswamy     Beta-Pareto      Beta generalized Pareto   Gamma-Kumaraswamy
Parameter estimates    δ^ = 0.235      α^ = 485.470     α^ = 12.112               α^ = 4.87
                       γ^ = 2.4926     β^ = 162.060     β^ = 1.702                β^ = 3.352
                                       k^ = 0.3943      μ^ = 40.564               a^ = 1.1722
                                       θ^ = 3.910       k^ = 0.273                b^ = 2.0154
Log-likelihood         −572.39         −517.33          −457.85                   −417.36
A0*                    4.1745          1.0083           0.6584                    0.4921
W0*                    0.6739          0.2827           0.1195                    0.0822
K-S p-value            0.000           0.248            0.537                     0.736

Table 4.

Parameter estimates for the fatigue life of 6061-T6 aluminum coupons data.


Figure 4. cdf for fitted distributions of the fatigue life of 6061-T6 Aluminum data.

9. Characterization of GK distribution

In this section, we present characterizations of the GK distribution in terms of the ratio of two truncated moments. For previous work in this direction, we refer the interested reader to Glänzel [11–14] and Hamedani [15–17]. For our characterization results, we employ a theorem due to Glänzel [11]; see that reference for further details. An advantage of the characterizations given here is that the cdf F need not have a closed form. We present a corollary as a direct application of the theorem discussed in detail in Ref. [11].

Corollary 1. Let X : Ω → (0, 1) be a continuous random variable, and let h(x) = β^(α−1) (1 − x^a)^(b(α−2)+1) [1 − (1 − x^a)^b]^(1−α) and g(x) = h(x) exp(−[1 − (1 − x^a)^b]/[β(1 − x^a)^b]) for x ∈ (0, 1). Then X has pdf (3) if and only if the function η defined in Theorem 5 has the form


Proof. Suppose X has pdf (3); then




and finally


Conversely, if η is given as above, then


and hence


Now, in view of Theorem 5, X has pdf (3).

Corollary 2. Let X:Ω(0,1) be a continuous random variable and let h(x) be as in Proposition 1. Then, X has pdf (3) if and only if there exist functions g and η defined in Theorem 5 satisfying the differential equation


Remarks 1. (a) The general solution of the differential equation in Corollary 1 is


for 0 < x < 1, where D is a constant. One set of appropriate functions is given in Proposition 1 with D = 0.

(b) Clearly, there are other triplets of functions (h, g, η) satisfying the conditions of Theorem 5. We presented one such triplet in Proposition 1.

10. Concluding remarks

A special case of the gamma-generated family of distributions, the gamma-Kumaraswamy distribution, has been defined and studied. Various properties of the gamma-Kumaraswamy distribution are investigated, including the moments, the hazard function, and the reliability parameter. The new model includes the gamma and Kumaraswamy distributions as special sub-models. We also provide various characterizations of the gamma-Kumaraswamy distribution. Applications to real data sets show that the fit of the new model is superior to the fits of its main sub-models. As future work related to this univariate GK model, we will consider the following:

  • A natural bivariate extension to the model in Eq. (1) would be


    In this case, exact evaluation of the normalizing constant would be difficult to obtain, even for a simple analytic expression of a baseline bivariate distribution function, G(x, y). Numerical methods such as Monte Carlo methods of integration might be useful here. We will study and discuss structural properties of such a bivariate GK model.

  • Extension of the proposed univariate GK model to multivariate GK models, along with a discussion of the associated inferential issues. It is noteworthy that classical methods of estimation, such as maximum likelihood, might not be a good strategy because of the large number of model parameters; an appropriate Bayesian treatment might be the only remedy. In that case, we will separately study two different estimation settings: (a) non-informative priors and (b) full conditional conjugate priors (Gibbs sampling). Since the GK distribution is in the one-parameter exponential family, a reasonable choice of priors for α and β might well be gamma priors with an appropriate choice of hyper-parameters. For the parameters coming from the baseline G(·) distribution function, a data-driven prior approach will be more suitable.

  • A discrete analog of the univariate GK model with a possible application in modeling rare events.

  • Construction of a new class of GK mixture models by adopting Marshall-Olkin method of obtaining new distribution.


1 - Johnson, N.L., Kotz, S., and Balakrishnan, N.: Continuous Univariate Distributions. John Wiley and Sons; New York, 1994. Vol. 2.
2 - Kumaraswamy, P.: Generalized probability density-function for double-bounded random-processes. Journal of Hydrology, 1980; 46: 79–88.
3 - Zografos, K., and Balakrishnan, N.: On families of beta and generalized gamma-generated distributions and associated inference. Statistical Methodology, 2009; 4: 344–362.
4 - Hall, I.J.: Approximate one-sided tolerance limits for the difference or sum of two independent normal variates. Journal of Qualitative Technology, 1984; 16: 15–19.
5 - Weerahandi, S., and Johnson, R.A.: Testing reliability in a stress-strength model when X and Y are normally distributed. Technometrics, 1992; 38: 83–91.
6 - Lieblein, J., and Zelen, M.: Statistical investigation of the fatigue life of deep-groove ball bearings. Journal of Research of the National Bureau of Standards, 1956; 57: 273–316.
7 - Alzaatreh, A., and Ghosh, I.: A study of the Gamma-Pareto (IV) distribution and its applications. Communications in Statistics – Theory and Methods, 2016; 45: 636–654. doi:10.1080/03610926.2013.834453
8 - Mahmoudi, E.: The beta generalized Pareto distribution with application to lifetime data. Mathematics and Computers in Simulation, 2011; 81: 2414–2430.
9 - Torabi, H., and Hedesh, N.M.: The gamma-uniform distribution and its applications. Kybernetika-Praha, 2011; 48: 16–30.
10 - Akinsete, A., Famoye, F., and Lee, C.: The beta-Pareto distribution. Statistics, 2008; 42: 547–563.
11 - Glänzel, W.: A characterization theorem based on truncated moments and its application to some distribution families. Mathematical Statistics and Probability Theory, 1987: 75–84.
12 - Glänzel, W.: Some consequences of a characterization theorem based on truncated moments. Statistics, 1990; 21: 613–618.
13 - Glänzel, W., Telcs, A., and Schubert, A.: Characterization by truncated moments and its application to Pearson-type distributions. Zeitschrift fur Wahrscheinlichkeitstheorie und verwandte Gebiete, 1984; 66: 173–183.
14 - Glänzel, W., and Hamedani, G.G.: Characterizations of univariate continuous distributions. Studia Scientiarum Mathematicarum Hungarica, 2001; 37: 83–118.
15 - Hamedani, G.G.: Characterizations of univariate continuous distributions. Studia Scientiarum Mathematicarum Hungarica, 2002; 39: 407–424.
16 - Hamedani, G.G.: Characterizations of univariate continuous distributions. Studia Scientiarum Mathematicarum Hungarica, 2006; 43: 361–385.
17 - Hamedani, G.G.: Characterizations of continuous univariate distributions based on the truncated moments of functions of order statistics. Studia Scientiarum Mathematicarum Hungarica, 2010; 47: 462–484.