Open access peer-reviewed chapter

Multiconvolutional Approach to Treat the Main Probability Distribution Functions Used to Assess the Uncertainties of Metallurgical Tests

By Ion Pencea

Submitted: December 19th 2011. Reviewed: June 6th 2012. Published: September 19th 2012.

DOI: 10.5772/50511


1. Introduction

The quality of a metallic product is validated by test results. Thus, the product quality is depicted by the quality of the testing results, which in turn depends on several factors such as: the incomplete knowledge about the measurand, the adequacy of the testing method in relation to the measurand, the adequacy of the equipment for the method, the human factor, the statistical inference, etc. The influence factors of a measurement process, whether known or unknown, may alter the result of a measurement in an unpredictable way. Thus, a test result bears an intrinsic doubt about its closeness to the conventional true value of the measurand, a fact which is commonly perceived as uncertainty about the test result. One of the most important tasks for the experimentalist is to specify an interval about the estimated value of the measurand (x̄), [x̄ − U; x̄ + U], in which the true (conventional) value (µ) could be found with a specified confidence level, i.e. the probability (p) that µ ∈ [x̄ − U; x̄ + U]. The practice exigency requires p ≥ 95% [1, 2, 3]. EN ISO 17025 [1] stipulates that the quality of a numeric test result is quantified by the expanded uncertainty U(p%), where p is the level of confidence (p ≥ 95%). An alternative specification of U is its level of significance, expressed as 1 − p. In order to obtain a higher quality of the test result, the experimentalist should perform a set of at least 30 repeated measurements [2, 4]. In the field of metallurgy this is quite impossible for technical and economical reasons. Therefore, the quality of the test result should be guaranteed by advanced knowledge about the testing method and by other means such as: equipment calibration, usage of Certified Reference Materials (CRMs) and, last but not least, correct uncertainty estimation based on proper knowledge about the probabilistic behavior of the compound random variables derived from a set of repeated measurements (arithmetic mean, standard deviation, etc.).
This chapter is intended to provide the knowledge needed for a better statistical evaluation of the outcomes of metallurgical tests, based on proper selection of the probability density function, mainly for compound variables such as the arithmetic sample mean, standard deviation, t-variable, etc. The reader will find in this chapter the derivations of the Gauss, Student and Fisher-Snedecor distributions, and also of several compound ones. The derivations are intended to provide the information necessary to select the appropriate distribution for a measurand. Furthermore, the chapter addresses uncertainty estimation based on a multiconvolutional approach to the measurand by presenting a case study of the Rockwell C hardness test, which reveals the superiority of the statistical inference based on the approach proposed in this chapter.


2. Probability density functions for experimental data analysis

2.1. Elements of Kolmogorov’s theory of probability

According to Kolmogorov’s theory [4, 5], the behavior of a random experiment or phenomenon can be mathematically modeled in the frame of class theory using the sample space, the event class and the probability function [5, 6]. Kolmogorov’s theory addresses experiments, phenomena or classes of entities having likelihood behavior that can be repeated under the same conditions as many times as needed. The testing of the occurrence of an event will be considered generically as being an experiment or probe. The sample space or sample universe (E) is the entire class of outcomes e_i, i = 1, …, n, of an experiment that are mutually exclusive events, respectively


An event A is a part of E, i.e. A ∈ P(E). The probability of an event is a function defined on P(E), i.e. P: P(E) → [0; 1], which satisfies the following axioms:


P(E) = 1 (3)


2.2. Discrete and continuous random variables

A discrete random variable is associated to an experiment that gives finite or countable elementary outcomes, having well defined probabilities. The sum of the probabilities over the discrete sample space must be one. To a finite set of outcomes of an experiment, a discrete random variable X is assigned, which is represented as:


For a countable set of outcomes, a discrete random variable X is expressed as:


The relationships in (5) and (6) are called the probability distribution functions (pdf) of the discrete variable. As is well known, there are experiments giving numeric continuous outcomes. To such experiments, continuous variables are associated. For a continuous random variable X, the probability assigned to any specific value is zero, whereas the probability that X takes values in an interval [a, b] is positive. The probability that a continuous random variable X takes values in the [a, b] interval is expressed as P(a < X < b). The probability that a continuous random variable X is less than or equal to a value x is


which is called cumulative distribution function (cdf).

F_X(x) should be a continuous and derivable function to fulfill the condition that for any infinitesimal interval [x, x+dx] one can estimate the probability that X ∈ [x, x+dx] as:


where p(x) = dF_X(x)/dx is the density distribution function of X. Evidently, F_X(x) = 0 for x ≤ a, while F_X(x) = 1 for x ≥ b.

2.3. Independent and conditional events

2.3.1. Conditional probabilities

Let E be a discrete sample space, containing n elementary events of an experiment with probabilistic outcomes. Let A and B be two events of E that contain k, respectively l, elementary events, so that A ∩ B contains m. Assuming a trial is done and B occurs, then the probability that A occurs simultaneously is the ratio of the favorable outcomes for A contained in B to the possible outcomes of B. Thus, the probability of the event A, knowing that the compatible event B occurred, is named the conditional probability of A given B and denoted as P_B(A) or P(A|B). In the above example, P_B(A) is:


The probability of B given A is:


From Eqs.(9) and (10) it can be derived that


Event A is independent of B if the conditional probability of A given B is the same as the unconditional probability of A, P(A), i.e. P_B(A) = P(A). According to Eq.(11), the probability of two independent events of E, say A and B, is:


2.3.2. Pairwise and global independence

If the events A_i, i = 1, 2, 3, are such that any pair is independent, i.e. P(A_i ∩ A_j) = P(A_i)·P(A_j) for i ≠ j, the events A_i, i = 1, 2, 3, may still not be totally independent, i.e. P(A₁ ∩ A₂ ∩ A₃) ≠ P(A₁)·P(A₂)·P(A₃). The classical example of 3 events pairwise independent but not totally independent was given by S. Bernstein [5, 6]. The events A_i ∈ E, i = 1, …, n, are totally independent if for any selection of k events of E, written as {A_s1, A_s2, …, A_sk}, the following statement is true


2.3.3. Geometric probabilities

The probability of an event related to the location of a geometric figure placed randomly on a specific planar or spatial domain is called a geometric probability. A representative example is that of a disk (D) of radius “r” that is thrown randomly onto a planar domain A (a square of edge length a) which includes a subdomain B (a square of edge length b), as shown in Figure 1.a. The addressed problem is to estimate the probability that the disk center falls into the domain B. This is the ratio between the area of domain B and the area of A, i.e. P(C ∈ B) = (b/a)². If the event consists of the disk touching B, i.e. D ∩ B ≠ ∅, then the probability is P(D ∩ B ≠ ∅) = (b² + 4br + πr²)/a². The examples could be extended to the micro hardness test, e.g. if a field of a steel specimen of area A contains compounds of total area B (Figure 1.b), then the probability that at a random indentation the indenter tip impinges on a compound is p_B = B/A. The best-known example of geometric probability is Buffon’s needle problem [7]. Thus, if a needle of length “2l” is dropped randomly on a horizontal surface ruled with parallel lines a distance h > 2l apart, what is the probability that the needle intersects one of the lines?

Figure 1.

Schematic representation of the sample space for: a) disk thrown; b) micro-indentation; c) Buffon’s needle problem

The probability that a needle thrown randomly crosses a line is the sum of all favorable probabilities which, in fact, is the integral:


The criterion for estimating the correctness of a Buffon’s needle experiment is the closeness of 4l/(hP) to the value of π. The geometric probabilities are used on a large scale in metallography for grain size and grain shape estimation [7]. Their usage may be extended to micro- and nano-indentation tests, etc.
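Buffon’s needle experiment can be checked numerically. The sketch below is a minimal Monte Carlo simulation (not from the chapter; the function name and parameter values are illustrative), assuming a needle of length 2l thrown on lines a distance h > 2l apart, with crossing probability P = 4l/(πh):

```python
import math
import random

def buffon_pi(n_throws, half_len=0.25, spacing=1.0, seed=1):
    """Monte Carlo Buffon's needle: a needle of length 2l (2*half_len)
    is thrown on a floor ruled with parallel lines `spacing` apart."""
    random.seed(seed)
    hits = 0
    for _ in range(n_throws):
        y = random.uniform(0.0, spacing / 2)       # center-to-nearest-line distance
        theta = random.uniform(0.0, math.pi / 2)   # acute angle with the lines
        if y <= half_len * math.sin(theta):        # needle crosses a line
            hits += 1
    p_cross = hits / n_throws
    # P = 4l/(pi*h), hence pi is estimated as 4l/(h*P)
    return 4 * half_len / (spacing * p_cross)

pi_est = buffon_pi(200_000)
```

With 200 000 throws the estimate typically lands within a few hundredths of π, which illustrates the slow 1/√n convergence of such geometric-probability experiments.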

2.4. Discrete probability density functions

Discrete probability deals with events that occur in a countable sample space [1-4]. If the discrete sample space contains “n” elementary events {e_i; i = 1, …, n}, then an “intrinsic” probability value is assigned to each e_i, while to any event X is attributed a probability f(X) which satisfies the following properties:

f(X) ∈ [0, 1] for all X ⊂ E (14)

The function f(e_i), i = 1, …, n, that maps an elementary event to a “probability value” in the [0, 1] interval is called the probability mass function, abbreviated as pmf. The probability theory does not deal with assessing f(e_i), but builds methods for the probability calculation of every event assuming prior knowledge of f(e_i), i = 1, …, n. The pmf is synonymous with pdf, therefore pdf will be used as the abbreviation for the probability density of X, which is:


where x_i = X(e_i), p_i = f(e_i) ≡ f(x_i); i = 1, …, n

Since the most used pdfs (“pmfs”) in the field of metallurgy are the Poisson and Bernoulli distributions, only these distributions will be addressed in this chapter.

2.4.1. Poisson scheme

Let us consider a set of n experiments denoted as E = E₁ × E₂ × … × E_n. Each experiment has two elementary events, i.e. the variable X_i attached to the sample space of E_i, i = 1, …, n, is:


where A_i is the event of interest of E_i, having the probability “p_i”, while Ā_i is the contrary one, having the probability q_i = 1 − p_i.

Poisson’s scheme is designed to help estimate the probability of occurrence of k expected outcomes when each experiment E_i, i = 1, …, n, of E is performed once. In this instance, assuming that “k” expected events occurred, one can renumber the events starting with these “k” expected ones and, next, with the (n−k) unexpected ones, as follows:


The probability of the event E_j = A_j1 ∩ A_j2 ∩ … ∩ A_jk ∩ Ā_jk+1 ∩ … ∩ Ā_jn is the product of the probabilities of the individual events:


The event E_n(k) is realized for any combination of k events A_i, i = 1, …, n, i.e. for C_n^k combinations. Thus, to calculate P(E_n(k)) ≡ P_n(k), one must sum all the different products consisting of “k” terms of p_i, i = 1, …, n, multiplied by the rest of the q_i probabilities. This calculation is identical to calculating the coefficient of X^k of the polynomial:


This approach of calculating P_n(k) as the coefficient of X^k of P_n(x) is known as Poisson’s scheme.

2.4.2. Bernoulli or binomial distribution

The Bernoulli distribution addresses the case of an array of “n” identical experiments, i.e. the particular case of the Poisson scheme when p_i = p and q_i = q, i = 1, …, n. In this case, the probability of occurrence of “k” events from “n” trials is:


where n! = 1·2·3·…·n

The mean value of “k”, denoted as ⟨k⟩, could be calculated as:


The term dispersion is many times used instead of variance.

The variance of the Bernoulli distribution is:


2.4.3. Poisson distribution

The Poisson distribution is a particular case of the Bernoulli distribution when the number of tested events is very large but the probability of the experimental outcome is close to zero, i.e. it is the distribution of rare events. In this instance, the mean µ = np is considered a constant quantity that characterizes the distribution, as will be shown below. According to the Bernoulli distribution, the probability of the realization of k events in a series of n is:


The limit of P_n(k) for n → ∞, denoted as P(k), is:


The dispersion of the Poisson distribution is D² = µ, i.e. equal to its mean.

The Poisson distribution is often assigned to quantum particle counting statistics because the standard deviation (SD) is simply expressed as the square root of the mean number ⟨n⟩, where ⟨n⟩ is the mean of a set of measurements. Many times the SD is estimated as sqrt(n) using a single test result. But this approach is often inappropriate to the real situation, because the detection probability of a quantum particle is not close to zero.
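The rare-event limit above can be checked numerically. The following sketch (illustrative values n = 10 000, p = 0.0004, so µ = np = 4) compares the Bernoulli pmf P_n(k) with its Poisson limit P(k):

```python
import math

def binom_pmf(n, p, k):
    """Bernoulli (binomial) probability P_n(k) of k events in n trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(mu, k):
    """Poisson probability P(k) with mean mu = n*p."""
    return mu**k * math.exp(-mu) / math.factorial(k)

# Rare-event limit: large n, small p, constant mean mu = n*p = 4
n, p = 10_000, 0.0004
diffs = [abs(binom_pmf(n, p, k) - poisson_pmf(n * p, k)) for k in range(15)]
max_diff = max(diffs)
```

For these values the two pmfs agree to within about 10⁻³ for every k, while for a detection probability p that is not small the agreement degrades, which is the point made in the paragraph above.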

There are other discrete pdfs of interest for metallurgists, such as the hypergeometric distribution, the negative binomial distribution and the multinomial distribution [1-5], but they are beyond the scope of this chapter.

2.5. Continuous probability density function

2.5.1. Introduction

The variable X is called a continuous random variable if its values are real numbers, as are the outcomes of a large class of measurements dealing with continuous measurands such as the temperature of a furnace, grain size, etc. The sample space for a continuous variable X may be an interval [a, b] on the real number axis or even the entire R space. The probability that X takes a value in the interval [x₁, x₂], denoted as P(x₁ < X < x₂), is directly proportional to the interval length x₂ − x₁ and depends on the intrinsic nature of the experiment to which X was assigned. The local specificity of X is assessed by P(x < X < x+dx), where dx is of infinitesimal length. The probability that X < x is P(X < x) = F(x), which is called the cumulative distribution function, abbreviated cdf. By definition, the cdf must be derivable, because P(x < X < x+dx) = F(x+dx) − F(x) and for any dx the ratio (F(x+dx) − F(x))/dx should be finite and continuous, i.e.


where p(x) is the probability density function (pdf) over the range a ≤ x ≤ b.

The main statistical parameters assigned to a variable X on the basis of its pdf are: mean, median, mode, sample variance or dispersion, standard deviation and central moments. The mean of the X variable is:


where [a, b] is the domain of the X variable.

The median value of X (x_m) is the value which divides the number of outcomes into two equal parts, i.e. F(x_m) = 0.5. The mode of a pdf is the value x₀ for which p(x₀) reaches its maximum, i.e. dp(x₀)/dx = 0.

The variance V of X is expressed as:


The standard deviation is defined as:


SD is a measure of the spreading of the X values about µ: the smaller the dispersion, the more the measurement results are centered around µ. The central moment of order r is defined as:


where r is a natural number, usually r > 2.

The central moments are used for assessing the skewness and kurtosis of the pdfs [1, 4].

2.5.2. Continuous uniform probability distribution function

A continuous random variable X has a uniform pdf if it takes values with the same probability in an interval [a, b]. The pdf of a uniform variable, abbreviated as updf, is given as:


The cdf of X is:


Usually, an updf is transformed into a standard uniform distribution by the coordinate transformation:


which leads to:


The cdf of a standard updf is:


The graphs of the standard updf, f(x), and of its cdf, F(x), are shown in Figure 2.

The main parameters of an updf are: µ = (a+b)/2; SD = (b−a)/(2√3); µ_2k+1 = 0, while for a standard updf they are µ = 0, SD = √3/3. In testing practice, an updf is assigned to an experimental measurand when there is no information or experimental evidence that its values have a clustering tendency. This is the case of a measuring device for which only the tolerance range is specified, as for a pyrometer, i.e. ±5 °C.

Figure 2.

The graphs of the standard updf, f(x), and of its cdf, F(x)
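The updf parameters µ = (a+b)/2 and SD = (b−a)/(2√3) can be verified by a small sampling sketch (the interval [2, 5] and sample size are illustrative):

```python
import math
import random
import statistics

# Sample a uniform variable on [a, b] and compare the empirical mean and SD
# with the theoretical values mu = (a+b)/2 and SD = (b-a)/(2*sqrt(3)).
random.seed(9)
a, b = 2.0, 5.0
x = [random.uniform(a, b) for _ in range(200_000)]

mean_x = statistics.fmean(x)     # theory: 3.5
sd_x = statistics.stdev(x)       # theory: 3/(2*sqrt(3)) ~ 0.866
```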

2.5.3. Trapezoidal probability distribution function

A trapezoidal pdf is ascribed to a continuous random measurand if the distribution of its values around the mean is likely to be uniform, while at the extremities of the interval the frequency of its occurrence vanishes linearly to zero, as shown in Figure 3.

Figure 3.

The graphs of the trapezoidal symmetric pdf, f(x), and of its cdf, F(x).

The length of the larger base of the trapeze is usually denoted as 2a, while the lesser one is 2b = 2aβ. The height of the trapeze (h) is determined by the normalization condition, i.e. the area between f(x) and the Ox axis is 1. Thus, for a trapezoidal pdf, h = [a(1+β)]⁻¹. In this instance, the isosceles trapezoidal pdf is expressed as:


The F(x) of a trapezoidal pdf is obtained by integrating f(x) over the interval [µ−a; x].

The variance of f(x) given by Eq.(37) is:


The trapezoidal distribution could be considered the general form for the class of distributions whose graphs are made of linear branches. Thus, for β = 1 the trapezoidal pdf degenerates into a uniform distribution and for β = 0 into a triangular one. Therefore, the triangular distribution will be considered a particular case of the trapezoidal one, with the parameters µ and V = a²/6. The triangular distribution is appropriate for measurands whose values cluster around the mean value. The triangular distribution of width 2a may be considered a twofold convolution of uniform distributions of identical length a. Likewise, the trapezoidal distribution can be seen as a convolution of two different uniform distributions. The triangular pdf is mostly used for type B uncertainty estimation for an instrument whose tolerance limits are specified.
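The statement that the triangular pdf is a twofold convolution of identical uniform pdfs can be checked numerically. The sketch below (grid step and interval are illustrative) discretizes a uniform pdf on [0, 1) and convolves it with itself; the result approximates the triangular pdf on [0, 2], peaking at 1:

```python
# Discrete self-convolution of a sampled uniform density f(x) = 1 on [0, 1).
dx = 0.001
n = int(1.0 / dx)
f = [1.0] * n

conv = [0.0] * (2 * n - 1)            # (f * f)(x_k) on the grid x_k = k * dx
for i in range(n):
    for j in range(n):
        conv[i + j] += f[i] * f[j] * dx

peak = max(conv)                      # triangular pdf peaks at x = 1 with height 1
value_at_half = conv[n // 2]          # triangular pdf value at x = 0.5 is 0.5
```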

2.6. The normal or Gaussian probability distribution function

Here a derivation of the normal pdf will be presented, to emphasize the circumstances of its application. The normal or Gaussian pdf was formulated by C. F. Gauss in 1809 and since then it has become the most prominent pdf encountered in testing practice. For example, the error of a test result is usually assumed to follow a normal distribution. The Gaussian function is defined on the entire axis of real numbers as [4-6]:


where µ is the mean and σ² is the variance of the continuous random variable X.

The Gaussian distribution with µ = 0 and σ² = 1 is called the standard normal distribution, denoted by Φ(x) or N(0,1), which is expressed as:


N(µ, σ²) can be expressed as:

f(x; µ, σ²) = (1/σ)·Φ((x − µ)/σ) (41)

The cdf of the standard normal distribution is:


The integral (40) cannot be expressed in terms of elementary functions, but it can be written by means of the error function as:


The cdf of N(µ, σ²), F(x; µ, σ²), can be expressed as:


The normally distributed variable has a symmetric distribution about µ, which is at the same time the median and the mode. The probability that X ~ N(µ, σ²) takes values far from µ (i.e. more than a few standard deviations away) drops off extremely rapidly.

From the perspective of test result analysis, it is important to derive f(x; µ, σ²). An easy and meaningful approach to deriving f(x; µ, σ²) is based on the binomial pdf with p not close to zero and a very large n. In such a case, P_n(k) has a maximum for k̂ = np and drops off very rapidly as k departs from k̂. When n is very large, P_n(k) varies smoothly with k and practically k can be considered a continuous variable. Because P_n(k) takes significant values only in the vicinity of k̂, its values will be well approximated by a formally constructed Taylor series for ln(P_n(k)) around k̂, as follows:


The second term of the right side of the equation is null because d ln P_n(k̂)/dk = 0.

For deriving ln(P_n(k)) it is assumed that Stirling’s first approximation is valid, i.e.


Based on Eq.(46), the derivative of ln P_n(k) of order r, r ≥ 2, was deduced as:


Thus, the terms containing derivatives of order r > 2 can be neglected in the Taylor expression of P_n(k). With these considerations, and taking into account that d²ln P_n(k̂)/dk² = −1/(npq), Eq.(45) can be written as:


where µ is the mean and σ² is the variance of the binomial pdf.

The next assumption, based on n being sufficiently large, is to consider k as a continuous variable X.

The X variable may be extended to R based on the exponential decrease of the probability that X takes values far from µ. In this instance, Eq.(48) can be written as:


where the constant is determined from the normalization condition.

Deducing the mathematical expression of the Gauss distribution, f(x; µ, σ²), on the basis of the binomial pdf clearly shows that the Gaussian pdf is valid for a very large number of experiments. Therefore, by analogy, it is evident that the Gaussian pdf addresses experiments having a large number of influence factors that give rise to random, unbiased errors. Usually, an uncertainty budget comprises fewer than 20 influence factors. Thus, at first glance, a normal pdf is not sufficiently justified in such a case. On the other hand, if each influence factor has its own influence factors, so that the number of the overall contributors to the uncertainty of the measurand exceeds 30, then assigning a normal pdf to the measurand is justified.
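The binomial-to-Gaussian argument above can be illustrated numerically. The sketch below (n = 1000, p = 0.5 are illustrative values) compares the binomial pmf near its mode with the Gaussian density of the same mean µ = np and variance σ² = npq:

```python
import math

def binom_pmf(n, p, k):
    """Binomial probability P_n(k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def gauss_pdf(x, mu, sigma):
    """Gaussian density with mean mu and standard deviation sigma."""
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

n, p = 1000, 0.5
mu, sigma = n * p, math.sqrt(n * p * (1 - p))

# Within ~3 sigma of the mode the two agree to a few parts in 10^5
errs = [abs(binom_pmf(n, p, k) - gauss_pdf(k, mu, sigma)) for k in range(450, 551)]
max_err = max(errs)
peak = binom_pmf(n, p, 500)
```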

2.7. Continuous probability distribution functions used in metallurgy practice

In principle, any continuous function defined on an interval [a, b] ⊂ R can be used as a pdf on condition that:


For metallurgists, the most useful pdfs other than the normal one are the log-normal, Weibull, Cauchy (Cauchy-Lorentz) and exponential pdfs [6, 7]. The log-normal pdf is used mainly for grain-size data analysis [6]. The mathematical expression of the log-normal distribution is derived from the normal one by substituting ln(x) for x, as follows:


where µ_g is the “geometric mean” and σ_g is the “geometric standard deviation”.

The log-normal pdf is justified by empirical facts. Thus, the intended use of a log-normal pdf for a specific sample remains at the discretion of the experimentalist.

In the field of metallurgy, the Weibull pdf is used mostly for failure rate or hazard rate estimation. The pdf of a Weibull random variable X is [5, 8]:


where k > 0 is the shape parameter and λ > 0 is the scale parameter of the distribution.

The Cauchy (Cauchy-Lorentz) function is used successfully to describe X-ray diffraction or spectral line profiles which are subject to homogeneous broadening. The Cauchy function is often used as a pdf of an X variable defined on R, as follows [5, 6, 8]:


where x₀ is the peak location and γ is the half-width at half-maximum.

As for the normal pdf, f(x; 0, 1) is called the standard Cauchy distribution, whose pdf is:


The exponential distribution is suited to describe the random behavior of a process in which events occur independently at a constant average rate. The pdf of the exponential distribution is:


where λ>0 is the rate parameter.

The mean of an exponential pdf is µ = 1/λ, while its standard deviation is σ = 1/λ = µ.
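The equality of mean and standard deviation for the exponential pdf can be checked by sampling (the rate λ = 2.5 and the sample size are illustrative):

```python
import random
import statistics

# For the exponential distribution both the mean and the SD equal 1/lambda.
random.seed(4)
lam = 2.5
x = [random.expovariate(lam) for _ in range(200_000)]

mean_x = statistics.fmean(x)     # theory: 1/lam = 0.4
sd_x = statistics.stdev(x)       # theory: 1/lam = 0.4 as well
```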

The exponential, Cauchy, log-normal and Weibull pdfs were presented only briefly, for the sake of the chapter’s completeness; the normal and uniform distributions will be used extensively in the next subchapters.

3. The probability distribution function of the compound variables

A random variable Y is defined as a function of other variables X_i, i = 1, …, n, denoted as Y = f(X_i). In this chapter only the functions met in testing practice are considered, expressed as: Y = aX + b; Y = (X₁ + … + X_n)/n; Y = X² and Y = √X; Y = X₁/X₂ and Y = X₁²/X₂².

3.1. The probability distribution function of the variable Y=aX+b

The simplest case of a compound variable is that of the variable Y = aX + b, where a and b are two positive real numbers. Assuming that the pdf of X is defined on R, the probability P_Y(Y ≤ y), with y = ax + b, is equal to P_X(X ≤ x). In this instance, the cdfs of Y and X for Y = aX + b have the same value, i.e.


Substituting the variable t = av + b for v in Eq.(56):




3.2. The probability density function of linear compound variables

Consider two random variables X₁ and X₂: R → R. If a variable X is defined on an interval, it can be considered defined on R, because its pdf can be extended to R as follows:


The extension to R of f_X1(x) and f_X2(x) is necessary in order to make their convolution possible. The cdf of the variable Y is:


where the event P_Y(Y ≤ y) is conditioned by X₁ + X₂ ≤ y. If the pdfs of the X₁, X₂, Y variables are used, then Eq.(60) becomes:



Substituting the variable u for x₁ and v for x₁ + x₂ in Eq.(61):




where f_Y(y) is the convolution of the f_X1 and f_X2 pdfs.

The convolution of two functions is a mathematical operator which has specific properties such as commutativity and associativity, but its most important property lies in the fact that the Fourier transform of f_X1 ∗ f_X2 is the product of the Fourier transforms of the respective functions.
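The commutativity and associativity mentioned above, together with the conservation of total probability, can be illustrated with a small discrete-convolution sketch (the mass values are illustrative):

```python
def conv(f, g):
    """Discrete convolution of two sampled pdfs (sequences of weights)."""
    out = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

f = [0.2, 0.5, 0.3]          # three discrete probability masses summing to 1
g = [0.1, 0.9]
h = [0.6, 0.4]

fg = conv(f, g)
gf = conv(g, f)
lhs = conv(conv(f, g), h)
rhs = conv(f, conv(g, h))

commutative = all(abs(a - b) < 1e-12 for a, b in zip(fg, gf))
associative = all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
total_mass = sum(fg)         # convolution preserves the total probability
```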

Based on Eq.(63), one may suppose that the pdf of the variable Y_n = X₁ + X₂ + … + X_n is:


where f_Xi, i = 1, …, n, are the pdfs of the X_i variables.

The validity of Eq.(64) is proved by the mathematical induction method. Thus, the above assumption is valid on condition that the pdf of the variable Y_n+1 = X₁ + X₂ + … + X_n + X_n+1 is of the same form as that given by Eq.(64). To prove that, Y_n+1 is written as:

Y_n+1 = X₁ + X₂ + … + X_n + X_n+1 = Y_n + X_n+1 (65)



Consequently, Eq.(66) proves that Eq.(64) is valid for any n. In the case f_Xi = f_X, i = 1, …, n, the pdf of the Y_n variable is the convolution product of n identical functions, denoted as:


The pdf of a Y_n variable defined as Y_n = a₁X₁ + a₂X₂ + … + a_nX_n, where X_i, i = 1, …, n, are random variables and a_i, i = 1, …, n, are real numbers, can be calculated in two steps. In the first step, the variables Z_i = a_iX_i are introduced and their pdfs are calculated as:


Next, f_Yn(y) is calculated using Eq.(64):


Note: the variables Y_n = Σ_{i=1}^{n} X_i, with X_i = X, i = 1, …, n, and Y_n = nX are different.

The variable X̄_n, assigned to the mean of n numerical results obtained in repeatability conditions, also called the sample mean variable, is the typical variable to which the theory of linear compound variables is applied. Thus, X̄_n has the expression:


where X_i, i = 1, …, n, are the variables assigned to each measurement.

The pdf of X̄ is:


If f_X(x) is known, then the experimental mean distribution can be calculated and, subsequently, the dispersion of the experimental mean around the conventional true mean µ of X can be estimated as well.

3.3. The probability density function of the Y = X² variable

Consider a random variable X with f_X: R → R and a variable Y = X². By its definition, Y has the following cdf:


where f_Y(y) is the pdf of Y for y ≥ 0.

The condition u ≤ y implies that x² ≤ y, i.e. X ∈ [−√y, √y]. Accordingly, the probability that u ∈ [0, y] is equal to the probability that X ∈ [−√y, √y], i.e.


Eq.(73) may be expressed as:




Substituting the variable −√t for x in I₁:


Likewise, if in I₂ the substitution of √t for x is applied, then


From Eqs. (74), (75) and (76) it is deduced that:


The repartition density of the X² variable differs significantly from that of X. For example, if X is a variable with the N(0,1) pdf, then the pdf of the variable Y = X² is:


which is the chi-square distribution with one degree of freedom.
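The derived density of Y = X² for X ~ N(0,1) can be checked by simulation: the chi-square distribution with one degree of freedom has mean 1 and variance 2 (sample size is illustrative):

```python
import random
import statistics

# Simulate Y = X^2 for X ~ N(0,1) and compare the first two moments
# with those of the chi-square distribution with 1 degree of freedom.
random.seed(7)
y = [random.gauss(0.0, 1.0) ** 2 for _ in range(100_000)]

mean_y = statistics.fmean(y)      # theory: 1
var_y = statistics.variance(y)    # theory: 2
```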

3.4. The probability distribution function of the Y = √X variable

Consider X a continuous and positive random variable with the pdf f_X(x): R → R. The Y = √X variable has the value y = √x when X = x. The cdf of Y is:


On the other hand, P_Y(Y ≤ y) is equal to P_X(X ≤ y²), i.e.


If on the right-hand side of Eq.(80) one substitutes v² for t, then




Generally, Eq.(79) is used for estimating the pdf of the standard deviation when the pdf of the sample dispersion (variance) is known.

3.5. The probability distribution function of the ratio of two distributions

Let X₁ and X₂ be two random and independent variables with their pdfs f_X1(x₁) and f_X2(x₂), respectively, defined on R. The variable Y = X₁/X₂ has the repartition function:


Substituting u for x₂ and uv for x₁ in Eq.(83):


where D(x₁, x₂)/D(u, v) = |∂x₁/∂u ∂x₁/∂v; ∂x₂/∂u ∂x₂/∂v| = |v u; 1 0| = −u is the Jacobian of the coordinate transformation.

From Eqs.(83) and (84):


The pdf of the Y = X₁/X₂ variable with X₁ and X₂ of N(0,1) type is:


Eq.(86) shows that the pdf of the ratio of two variables with standard normal distributions is the standard Cauchy distribution.
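This result can be checked by simulation: the standard Cauchy distribution has its quartiles exactly at −1 and +1, so the empirical quartiles of the ratio of two independent N(0,1) samples should land there (sample size is illustrative):

```python
import random

# Ratio of two independent N(0,1) variables ~ standard Cauchy,
# whose first and third quartiles are -1 and +1.
random.seed(3)
n = 200_000
ratios = sorted(random.gauss(0.0, 1.0) / random.gauss(0.0, 1.0) for _ in range(n))

q1 = ratios[n // 4]          # theory: -1
q3 = ratios[3 * n // 4]      # theory: +1
```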

3.6. General approach for deriving the pdf of the sample mean variable

As reported in the literature [2, 8], the sample mean of a sample of size “n”, i.e. {x₁, x₂, …, x_n}, is an estimator for the population mean. On the other hand, the mean has two different meanings: 1) the numeric value of the sample mean calculated from the observed values of the sample and 2) a function of random variables from a random sample. This subchapter addresses the pdf assigned to the mean of the outcomes of n repeated tests. Therefore, the mean as a function is a sum of n identical functions divided by n. The pdf of X̄, as derived in § 3.1, is:


In the next three paragraphs, f_X̄ will be deduced for the Gaussian, uniform and Cauchy pdfs.

3.7. The probability distribution function of mean of “n” Gaussian variables

Let Y_n be a compound variable of n identical Gaussian variables X, defined as Y_n = X₁ + X₂ + … + X_n, where X_i ≡ X, i = 1, …, n. The pdf of X is:


As proven in § 3.1.2, the pdf of Y₂ = X₁ + X₂ is:


Eq.(89) shows that the pdf of Y₂ is of Gaussian type, with the mean µ₂ = 2µ and the standard deviation σ₂ = σ√2. Based on the above result, let us assume that the Y_n variable has a pdf of the form


According to the complete induction method, the above assumption is true if, on the basis of Eq.(90), it can be proven that the pdf of the Y_n+1 variable is:


The variable Y_n+1 = Y_n + X_n+1, therefore the pdf of Y_n+1 may be written as:


Eq.(92) proves that Eq.(90) is true and permits to state that the pdf of a sum of “n” identical Gaussian variables is of Gaussian type, having the mean µ_n = nµ and the standard deviation σ_n = σ√n.

As proven in § 3.1, the pdf of the sample mean variable X̄ is:


where σ_x̄ = σ/√n is the standard deviation of the mean when the sample size is n.

Eq.(93) shows that the mean values (x̄) are centered on the population mean µ and that their standard deviation is √n times smaller than the standard deviation of the population.
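The σ/√n rule for Gaussian sample means can be verified by a short simulation sketch (the values µ = 10, σ = 2 and n = 25 repeated measurements are illustrative):

```python
import math
import random
import statistics

# The SD of means of n_rep repeated Gaussian measurements should be sigma/sqrt(n_rep).
random.seed(11)
mu, sigma, n_rep, n_means = 10.0, 2.0, 25, 4000

means = [statistics.fmean(random.gauss(mu, sigma) for _ in range(n_rep))
         for _ in range(n_means)]

sd_of_means = statistics.stdev(means)
expected = sigma / math.sqrt(n_rep)      # = 0.4
```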

The mean and the variance of X̄ could also be easily estimated by applying the mean operator (M) and the variance operator (V) to a vector of statistical variables, i.e. [6]


where a_i = 1/n, i = 1, …, n; but these operations have no direct meaning for the experimentalist.

3.8. The probability distribution function of mean of some uniform variable

In real-world testing, situations are often found where there is no knowledge about the pdf of the measurand. In such cases, the experimentalists have to consider that the pdf assigned to the measurand is of the uniform type. Likewise, metallurgical practice implies statistical modeling using additive uniform variables. Simple examples are: the weight or length of a chain with n links, the strength resistance of a series of n bars, the reliability assessment of a product composed of n parts. According to § 2.5.2, any updf may be related to a so-called standard updf having the width 2, µ = 0 and SD = √3/3.

Based on the above considerations, this section addresses only the standard updf given by Eq.(34).

As derived in § 3.2, the pdf assigned to the arithmetic mean of n outcomes is:


where X_i ≡ X, i = 1, …, n. The first step in deriving the pdf of X̄ is to calculate f(x)*n. The expression of f(x)*n will be derived by a recurrence approach, i.e. f*(k+1) = f*k ∗ f(x). Thus, the twofold convolution of f is:


where f_X(u−x) is equivalent to a translation with x of the graph of f_X2 relative to the origin of coordinates. The value of the above integral is proportional to the overlapping area of the graphs of the X₁ and X₂ pdfs, which is:


The f*2 is known as the Simpson distribution [9]. The convolution of two identical uniform pdfs gives a triangular distribution with the same height as the convolved pdfs but a base two times larger.

The f(x)*3 is calculated as:


f*3 is very important because it is the keystone from which the pdf of a sum of identical uniform distributions takes a piecewise-polynomial shape (Figure 4). The first-order derivative of f*3 is continuous at x = ±1, but the second-order one is not.

Figure 4.

The graphs of the sample means’ pdfs for n = 1, …, 5

Thus, at x₁,₂ = ±1 there are two inflection points. In the same way as for f(x)*3, f*4 can be calculated as:


The f*5 pdf is deduced as:


The expressions of the k-convolved standard updfs are difficult to calculate for k > 5, but one can get help from the general form of a sum of n identical standard uniform variables given by Renyi [9]:


where ñ(n, x) = ⌊(x + n)/2⌋ is the largest integer not exceeding (x + n)/2

The pdf of the sample mean of 5 uniformly distributed outcomes obtained in repeatability or reproducibility conditions, denoted f_M5(m) = 5·f*5(5m), is


The graphs of the sample means’ pdfs for n = 2, …, 5 are given in Figure 4. Special attention is drawn to f_M5(m) because it is appropriate for the hardness test, where the standard recommends five reproducibility measurements. Based on f_M5 there were calculated: P(|x̄| < 1/5) ≈ 54.5%; P(1/5 ≤ |x̄| < 3/5) = 44.8%; P(3/5 ≤ |x̄| < 1) = 0.7%. On the other side, the probability that the mean lies in the interval (−1/10, 1/10) is about 29.5%, while that for (−2/10, −1/10) ∪ (1/10, 2/10) is 25.5%. Thus, the probability that the mean departs from zero decreases relatively slowly, not as rapidly as is argued elsewhere [10]. Based on the convolution approach or on Renyi’s formula, the experimentalist can calculate the pdf of the sample mean for his own number n of repeated tests.
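These figures can be cross-checked by Monte Carlo. For the mean of 5 outcomes uniform on [−1, 1], the SD is (1/√3)/√5 = 1/√15 ≈ 0.258, and the Irwin-Hall cdf for a sum S of 5 U(0,1) variables gives P(|mean| < 1/5) = P(2 < S < 3) = 66/120 = 0.55, close to the value quoted above (the sample size below is illustrative):

```python
import math
import random

# Monte Carlo check for the mean of 5 outcomes uniform on [-1, 1].
random.seed(5)
n_samples = 200_000
means = [sum(random.uniform(-1.0, 1.0) for _ in range(5)) / 5
         for _ in range(n_samples)]

sd = math.sqrt(sum(m * m for m in means) / n_samples)       # theory: 1/sqrt(15)
p_inner = sum(abs(m) < 0.2 for m in means) / n_samples      # theory: 0.55
```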

3.9. The pdf of the mean of Cauchy distributed variables

Let us consider two Cauchy variables X1 and X2 whose pdfs, defined on R, are:

f_Xi(x) = (1/π) · a_i / (a_i² + x²), i = 1, 2
The pdf of the variable Y2 = X1 + X2 is:

f_Y2(y) = (1/π) · (a1 + a2) / ((a1 + a2)² + y²)
In testing practice X1 and X2 are the same, i.e. a1 = a2 = a; therefore f_Y2 is:

f_Y2(y) = (1/π) · 2a / ((2a)² + y²)
The expression of the n-fold convolved Cauchy pdf is:

f_Yn(y) = (1/π) · na / ((na)² + y²)
The pdf assigned to the mean of a set of n outcomes having a Cauchy distribution is:

f_M(m) = n · f_Yn(nm) = (1/π) · a / (a² + m²) (108)
Eq.(108) shows that the pdf of the mean is identical to that of the measurand. Paradoxically, repeating a test many times on a measurand whose pdf is of Cauchy type is of no use. Since the dispersion of the Cauchy distribution is infinite, it should be avoided when assigning a pdf to a measurand.
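This paradoxical property is easy to demonstrate by simulation. In the illustrative sketch below (my own; standard Cauchy samples are drawn via the inverse cdf), the interquartile range of the mean of five Cauchy outcomes equals that of a single outcome.

```python
# Illustration (not from the chapter): the mean of five Cauchy outcomes is
# spread exactly like a single outcome, so averaging does not help. Standard
# Cauchy samples are drawn via the inverse cdf: x = a*tan(pi*(u - 1/2)), a = 1.
import math, random

random.seed(1)

def cauchy():
    return math.tan(math.pi * (random.random() - 0.5))

def iqr(values):
    """Interquartile range; for a standard Cauchy the quartiles sit at -1, +1."""
    v = sorted(values)
    return v[3 * len(v) // 4] - v[len(v) // 4]

N = 100_000
singles = [cauchy() for _ in range(N)]
means5  = [sum(cauchy() for _ in range(5)) / 5 for _ in range(N)]
print(iqr(singles), iqr(means5))   # both ≈ 2
```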

3.10. Student’s or t-distribution

The Student's distribution, also referred to as the t-distribution, is widely used to test the exactness of a set of numerical outcomes of repeated tests when µ and σ are not known. The power of the t-tests consists in using µ and σ as hidden or implicit variables for statistical inference while they remain unknown. Thus, let there be a set of n outcomes {x1, x2, …, xn} whose mean x̄ and sample dispersion s² are, respectively:

x̄ = (1/n) Σ_{i=1}^{n} x_i ;  s² = [1/(n−1)] Σ_{i=1}^{n} (x_i − x̄)²
The experimentalist is concerned about the exactness of x̄, i.e. the deviation x̄ − µ related to the standard deviation of the mean, s_x̄ = s/√n. Thus, the parameter t = (x̄ − µ)/s_x̄ was found to be the best estimator for the case of a test with unknown µ and σ. Fortunately, the t-parameter can be written as:


The pdf of the variable T assigned to the t values will be derived as the ratio of the variable Z, assigned to (x̄ − µ)/σ_x̄, and the variable R, assigned to s_x̄/σ_x̄. The Z variable has a pdf of N(0,1) type, while the pdf of R will be defined a little later. Before deriving the expression of the pdf of R, let us reconsider the usual way of deriving the pdf of S². Thus, on the basis of the well-known Eq.(110), the variable S² assigned to s² is considered as:




Here it is considered that the above approach is quite unproductive, because the (n−1) factor replaces the n factor in Eq.(107) just so that s_n² better approximates the sample variance related to µ, i.e.


Therefore the variable attached to S2 should be:


where ζ_i = N(0,1), i = 1, …, n and χ_n = Σ_{i=1}^{n} ζ_i².

The pdf of χ_n, as shown in §3.3, is:


The pdf of the S_n² variable is:


The pdf of the S_n = √(S_n²) variable is:


The pdf of the variable R = S_n/σ is:


The pdf of T = Z/R, according to §3.1, can be calculated as:


Substituting x with [u²(t² + n)]/2 in Eq.(119) gives:


The expression of f_T(t) is consistent with those given in different works [6, 8] if n is replaced by ν, the so-called number of degrees of freedom (ndf) of the variable under consideration.

The derivation of f_T(t), frequently denoted f(t), is important for two reasons: a) the Student's pdf must be applied only to a set of identical Gaussian variables, or to n repeated tests when the pdf of the measurand is of normal (Gaussian) type; b) it allows the ndf to be established correctly.

The cdf of T, F_T(t), cannot be expressed as an analytical primitive of f_T(t), but its values are tabulated and can easily be found in the open literature [5, 6, 8]. The argument that ν = n is very important when n is 2, 3 or 4, because t_n(0.05) varies significantly in this range. It is a matter of evidence that, many times, 3 repeated measurements are considered enough. In this case, using t_2(0.05) = 4.303 instead of t_3(0.05) = 3.182 increases the expanded uncertainty significantly.
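The sensitivity of the critical value to the ndf can be seen without tables. In the Monte Carlo sketch below (illustrative, my own, using the usual (n−1)-denominator sample standard deviation), the empirical two-sided 95% critical value of the t statistic built from n = 3 Gaussian outcomes lands near the tabulated t(0.05, ν = 2) = 4.303, well above t(0.05, ν = 3) = 3.182.

```python
# Monte Carlo sketch (illustrative): for n = 3 Gaussian outcomes, the statistic
# t = (xbar - mu)/(s/sqrt(n)), with s the usual (n-1)-denominator standard
# deviation, has a two-sided 95% critical value near t(0.05, nu = 2) = 4.303.
import math, random, statistics

random.seed(2)
N = 200_000
abs_t = []
for _ in range(N):
    xs = [random.gauss(0.0, 1.0) for _ in range(3)]   # true mu = 0
    xbar = sum(xs) / 3
    s = statistics.stdev(xs)                          # (n - 1) denominator
    abs_t.append(abs(xbar) / (s / math.sqrt(3)))

abs_t.sort()
t_crit = abs_t[int(0.95 * N)]   # empirical two-sided 95% critical value
print(t_crit)                   # ≈ 4.3
```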

3.11. Fisher-Snedecor distribution

Proficiency testing (PT) is a well-defined procedure used for estimating the performance of collaborating laboratories [11, 12]. ANOVA (ANalysis Of VAriance) is one of the methods used for the analysis of the sample variances reported by the laboratories. ANOVA is based on the Fisher-Snedecor distribution of two sample variances of a measurand X. Thus, if for the same sample a laboratory labeled A gives a sample dispersion:


while laboratory B gives:


where x_i, i = 1, …, n1 and x_j, j = 1, …, n2 are the outcomes obtained for the same measurand X by the laboratories A and B, respectively.

As argued in §3.4, to s_A² and s_B² may be assigned the variables S_A² and S_B², whose pdfs are, respectively:


The pdf of the ratio variable F = S_A²/S_B² is:


If the variable y is replaced by u = y(n1·x + n2)/(2σ²) in Eq.(125), then:


The derived expression for f_F(x) is identical to that given by Cuculescu, I. [5]. Here, again, arises the question of whether the ndf is n−1 or n. In the author's opinion, ν = n. The cdf of the Fisher-Snedecor distribution does not have an elementary analytical form, but its values for different significance levels (sl) and different n1, n2, F(sl, n1, n2), are tabulated and can easily be found in the open literature [5, 6, 8]. Using the Fisher-Snedecor distribution as an F-test consists of comparing the obtained value f_e = s_A²/s_B² with the value F(sl; n1, n2) taken from a Fisher cdf table. If f_e ≤ F(sl; n1, n2), then s_A² and s_B² are consistent; otherwise one of them is a straggler or an outlier.

Note: a straggler should be considered a correct item, but a statistical outlier is discarded [11].
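The F-test decision rule above can be sketched as follows. This is illustrative only: the critical value is estimated by Monte Carlo rather than read from a printed F table (which is what the chapter describes), and the two reported variances are hypothetical.

```python
# Sketch of the F-test decision rule (illustrative: the critical value is
# estimated by Monte Carlo instead of being read from a printed F table, and
# the two reported variances are hypothetical).
import random, statistics

random.seed(3)

def variance_ratio(n1, n2):
    """s_A^2 / s_B^2 for two labs measuring the same Gaussian measurand."""
    a = [random.gauss(0.0, 1.0) for _ in range(n1)]
    b = [random.gauss(0.0, 1.0) for _ in range(n2)]
    return statistics.variance(a) / statistics.variance(b)

def f_critical(sl, n1, n2, trials=50_000):
    """Upper sl-quantile of the null variance ratio."""
    ratios = sorted(variance_ratio(n1, n2) for _ in range(trials))
    return ratios[int((1.0 - sl) * trials)]

f_crit = f_critical(0.05, 10, 10)     # compare with tabulated F(0.05; 9, 9) = 3.18
s2_a, s2_b = 0.0225, 0.0100           # hypothetical reported sample variances
consistent = (s2_a / s2_b) <= f_crit
print(round(f_crit, 2), consistent)
```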

4. Measurement uncertainty estimation

4.1. General concepts regarding measurement uncertainty estimation

Apparently the terms "test result" and "measurement result" have the same meaning, i.e. a numeric outcome of a measurement process. But in metrology a measurement is defined as a "set of operations having the object of determining a value of a quantity" [13, 14], while a test is defined as a "technical operation that consists of the determination of one or more characteristics of a given product, process or service according to a specific procedure" [14]. Thus, a test is a measurement process that is well documented, fully implemented and permanently supervised. In a test process, the environmental and operational conditions are either maintained at standard values or measured in order to apply correction factors and to express the result in standardized conditions. Besides the rigorous control of the "test conditions", a test result bears an intrinsic inaccuracy depending on the nature of the measurand, on the performance of the method and on the performance of the equipment. Thus, the entire philosophy of metrology is based on the fact that the true value of the measurand remains unknown to a certain extent when it is estimated based on a set of test outcomes. In this sense, the measurement uncertainty (MU) is defined as an interval about the test result which may be expected to encompass a large fraction of the values that could reasonably be attributed to the measurand. The half-width of this interval is called the expanded uncertainty, denoted by U or U(xx), where xx is the index of the level of confidence associated with the interval. The confidence level is frequently 95%, but may be about 99% or 68%, depending on the client requirements or on the test performances. According to EN ISO/IEC 17025, a testing laboratory shall apply a procedure to estimate the MU [1]. When estimating the MU, all relevant uncertainty components in a given situation should be taken into account using the appropriate method of analysis.
The sources of uncertainty include, but are not limited to, the reference standards and reference materials used, the method and equipment used, the environmental conditions, the properties and condition of the item being tested, and the operator. The degree of rigor of the uncertainty estimation depends on many factors such as those above, but the test result quality is better as the MU is smaller. Thus, the MU is the most appropriate quantitative parameter for test result quality evaluation. As a consequence, the approaches to MU estimation have been the subject of different committees of renowned organizations such as ISO, ILAC, EA, EURACHEM, IAEA, etc. Among the MU estimation approaches, the ISO one given in the GUM [2] may be considered the master one. Accordingly, consistency with the GUM is generally required for any particular MU estimation procedure. The basic concepts of the GUM are:

  1. The knowledge about any influence factor of the measurand is in principle incomplete; therefore its action shall be considered stochastic, following a pdf;

  2. The expected value of the pdf is taken as the best estimate of the influence factor;

  3. The standard deviation of the pdf is taken as the best estimate of the standard uncertainty associated with that estimate;

  4. The type and the parameter(s) of the pdf are obtained based on prior knowledge about the influence factors or by repeated trials of the test process.

The MU estimation procedure consists of two main steps. The first step, called formulation, consists of the measurand description (physical and mathematical modeling), statistical modeling, input-output modeling and, finally, assigning a pdf to the measurand. The second step, called calculation, consists of deriving the pdfs for the test result estimators (mean, standard deviation, etc.) and the formulas or the algorithm for estimating the MU attributed to the test result.

4.2. Uncertainty estimation according to GUM

4.2.1. Formulation step

The formulation begins with the measurand definition. The measurand X may be classified as directly measurable or indirectly measured. Directly measurable measurands such as length, mass, temperature, etc. are not addressed here, only indirectly measured ones such as elemental concentrations, Young's modulus, hardness, etc. In any case, an SI unit of measure shall be assigned to each measurand whenever possible, together with a reference for traceability. An indirect measurand may have a deterministic mathematical model, as for the Young's modulus:

E = (F · l₀)/(S · Δl)

where E is the Young's modulus, F is the applied force (N), S is the area (m²), l₀ is the initial length of the specimen (m), and Δl is the elongation (m).

Generally, the deterministic model of a measurand Y which is estimated based on the values of a set of measurands X_i, i = 1, …, n is a function that describes the relationship between the outcome and the inputs, expressed as:

Y = f(X1, X2, …, Xn)
Eq.(125) is called the input-output model of the test process. A particular value y of Y is calculated based on the values x_i of X_i, i = 1, …, n, as:

y = f(x1, x2, …, xn)
But, from the statistical point of view, the mathematical (statistical) model of the measurand Y is [11, 13]:

y = µ + b + e (130)

where µ is the general mean or the true value of Y, b is the systematic error or bias, and e is the random error occurring in every measurement.

The bias b may be detected and corrected by statistical means. The random error e is caused by the whole set of influence factors of the input measurands, which form the so-called uncertainty budget. The measurands X_i, i = 1, …, n may or may not be directly accessible to measurement, but each X_i has its own uncertainty budget. Thus, the statistical models of the X_i, i = 1, …, n are of the form:

x_i = µ_i + b_i + e_i

where µ_i is the true value of X_i, b_i is the bias of X_i and e_i is the random error of X_i.

Thus, the systematic and random errors of the input measurands X_i, i = 1, …, n are incorporated in the overall uncertainty of the output measurand Y. Each X_i, i = 1, …, n has an uncertainty budget (UB) containing n_i influence factors F_ij, denoted as UB_Xi = {F_ij ; j = 1, …, n_i}. In this instance, the uncertainty budget of Y, denoted UB_Y, could be expressed as:

UB_Y = ∪_{i=1}^{n} UB_Xi = {F_ij ; i = 1, …, n ; j = 1, …, n_i} (132)
Eq.(132) shows that the design of an accurate uncertainty budget for a measurand needs advanced knowledge and extensive data about the input measurands which, many times, are quite impossible to obtain. In this context, the pdf of Y remains the most appropriate target for the experimentalist. The pdf assigned to Y may be of Gaussian type if the influence factors are all of Gaussian type, or if its uncertainty budget contains more than 30 uncorrelated influence factors. Otherwise, Y shall be assigned a pdf having a lesser clustering tendency than the normal one. Some pdfs having a lesser clustering tendency, in order, are: uniform, trapezoidal, Cauchy, etc. Assigning an appropriate pdf to a measurand is a difficult task which has to be solved by the experimentalist. One of the arguments underpinning the previous affirmation is the evidence that each measurand has its own variability, which is enlarged by the testing process.

4.2.2. The calculation step

Suppose that the pdf of a measurand X, f_X(x), is established. As shown in §3.1, the arithmetic mean of a set of test results obtained in repetitive or reproducible conditions is the best estimator of the conventional true value of the measurand. The variable assigned to the mean, M, is described by a linear model:

M = (X1 + X2 + … + Xn)/n

where X_i, i = 1, …, n are the statistical variables assigned to each repeated measurement.

The mean is a statistic [2, 5, 6] with a pdf obtainable as a convolution product, as shown in §3.2:

f_M(m) = n · f_n(nm), where f_n = f_X * f_X * … * f_X (n-fold)
Once f_M(m) is available, it is quite easy to calculate the probability P(|x̄ − µ| ≤ ε), i.e. to estimate the level of confidence that µ lies in [x̄ − ε; x̄ + ε].

On the other hand, the design of the mathematical model of the measurand and, subsequently, of the sample mean needs substantial scientific effort and costs that may be prohibitive. As a consequence, laboratories often adopt an alternative approach based on prior, empirically obtained information.

4.3. The empirical estimation of MU

If a testing laboratory does not have a mathematical model as a basis for the evaluation of the MU of the test results, then it has to implement an empirical procedure for MU estimation. The flowchart of such a procedure is a stepped one [3]. The first step consists of listing those quantities and parameters that are expected to have a significant influence on the test result. Subsequently, the contribution of each influence factor to the overall MU is assessed. Based on its level of contribution to the overall MU, each factor shall be classified as significant or irrelevant. The irrelevant influence factors are discarded from the list. The equipment, the CRMs and the operator are among the factors most frequently considered significant. If there is a lack of knowledge or prior data about an influence factor, then the laboratory is strongly recommended to perform specific measurements to evaluate the contribution of that factor. If the contribution of an influence factor to the overall MU is estimated based on a set of outcomes whose variability is assigned exclusively to that factor, then its contribution to the overall MU is considered of type A. Otherwise, its contribution is of type B.

4.4. Combined uncertainty calculation

The combined uncertainty incorporates the contributions of the influence factors F_i, i = 1, …, n to the overall MU, i.e. the uncertainties of type A and of type B. The calculation of the combined uncertainty is based on the error propagation law [2, 4, 8]. Metallurgical testing practice shows that the influence factors are almost always considered independent. Thus, the combined uncertainty is calculated as:

u_c = √(u_A² + u_B²)

where u_A and u_B are the combined uncertainties of type A and of type B, respectively.

The combined uncertainty of type A is calculated as:

u_A = √( Σ_{i=1}^{n} u_Ai² )

where u_Ai is the type A uncertainty assigned to the influence factor F_i, i = 1, …, n.

The combined uncertainty of type B is calculated as:

u_B = √( Σ_{j=1}^{m} u_Bj² )

where u_Bj is the type B uncertainty assigned to the influence factor F_j, j = 1, …, m.
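The root-sum-of-squares combinations above can be sketched as follows; all component values (u_Ai, u_Bj) are hypothetical, for illustration only.

```python
# Root-sum-of-squares combination of independent uncertainty components, as
# described above; the component values u_Ai and u_Bj are hypothetical.
import math

def combined(components):
    """u = sqrt(sum of squared components) -- error propagation, independent factors."""
    return math.sqrt(sum(u * u for u in components))

u_A = combined([0.12, 0.05])   # type A components (e.g. repeatability terms)
u_B = combined([0.15, 0.03])   # type B components (e.g. CRM certificate, resolution)
u_c = combined([u_A, u_B])     # combined standard uncertainty
print(u_A, u_B, round(u_c, 3))   # ≈ 0.13, 0.153, 0.201
```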

4.5. Estimation of the expanded uncertainty

The degree of confidence associated with the combined uncertainty (u_c) is often considered insufficient; therefore the uncertainty attributed to the measurand is increased to a certain extent in order to comply with a previously stated confidence level, usually 95%. The increased uncertainty is called the expanded uncertainty and is usually obtained by multiplying u_c by a factor k, called the coverage factor. For example, the k factor for a measurand with a Gaussian pdf is 2 for a 95% confidence level, i.e. the expanded uncertainty for the 95% confidence level is U = 2u_c. The value of k for a specific confidence level depends on the pdf type of the measurand and on the number of test results that were used to calculate the sample mean and sample standard deviation. If the pdf of the measurand is Gaussian, or if the number of test results exceeds 30, then it is justified to consider the mean normally distributed. If the pdf of the measurand is Gaussian but n < 30, then the Student's t-distribution shall be used.

According to [10, 14], in many cases the uniform distribution may be assigned to the measurand. In this case, U(95) is calculated as:

U(95) = 1.65 · u_c

where u_c is the combined uncertainty of the test result.
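The coverage factor for a rectangular (uniform) pdf can be derived in a few lines: on [−a, a] the standard uncertainty is u = a/√3, and the symmetric interval containing 95% of the probability has half-width 0.95a, hence k = 0.95·√3 ≈ 1.645, consistent with the 1.65 used in practice. A minimal check:

```python
# Derivation sketch of the coverage factor for a rectangular (uniform) pdf on
# [-a, a]: the standard uncertainty is u = a/sqrt(3), and the symmetric interval
# containing 95% of the probability has half-width 0.95*a, so k = 0.95*sqrt(3).
import math

a = 1.0                       # half-width; k is dimensionless, the value is irrelevant
u = a / math.sqrt(3.0)        # standard deviation of the uniform pdf on [-a, a]
half_width_95 = 0.95 * a      # P(|X| <= 0.95a) = 0.95 for this pdf
k = half_width_95 / u
print(round(k, 3))            # 1.645, commonly rounded to 1.65
```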

4.6. Reporting the test result

The test process yields a value as the estimate of the conventional true value of the measurand. In principle, this value is the sample mean ȳ, or simply y. As the standards strongly recommend [1, 2, 4], the y value must be reported together with its expanded (extended) uncertainty U for a specific confidence level (typically 95%) as follows:

Y = y ± U (p = 95%)
The laboratory may specify the pdf type and the k value as substitutes for the confidence level specification. If, in a testing situation, a laboratory cannot evaluate a metrologically sound numerical value for each component of the MU, then the laboratory may report the standard uncertainty of repeatability, but this shall be clearly stated, as recommended in [3, 14].

Laboratory practice shows that MU estimation is in many cases a time-consuming and costly task. This endeavor shall be justified by the advantages of MU evaluation for testing laboratories. EA 4/16 argues that the MU assists, in a quantitative manner, important issues such as risk control, credibility of the test results, and competitiveness by adding value and meaning to the test result. But, in the author's opinion, MU estimation also imposes on the operator an increased awareness of the influence factors and a higher interest in the means to improve the quality of the test results.

5. Uncertainty estimation for Rockwell C hardness test. Case study

5.1. Hardness testing-general principles

Hardness is the measure of how resistant a solid bulk material is to penetration by spherical, pyramidal or conical tips. Hardness depends on the mechanical properties of the sample, such as ductility, elasticity, stiffness, plasticity, strain, strength, toughness, etc. [15]. On the other hand, the quantity "hardness" cannot be expressed as a numeric function of a combination of some of these properties. Therefore, hardness is a good example of a measurand that cannot be mathematically modeled without referring to a method of measurement [4, 15]. In this view, there is a large number of hardness testing methods available (e.g. Vickers, Brinell, Rockwell, Meyer and Leeb) [15, 16]. As a consequence, there is no unit of measure for hardness independent of its measurement method. In time, based mainly on empirical data, standard hardness testing methods appropriate to the material grades were developed. In the metallurgical testing field, the Brinell, Rockwell and Vickers methods are the most frequently used. For this reason, this case study addresses the well-known Rockwell C hardness, obtained by measuring the depth of penetration of an indenter under a specified load. There are different Rockwell-type tests, denoted with capital letters A to H [16, 17], but only the C type is referred to herein.

5.2. Description of the measurand

The Rockwell C hardness is the measure of how resistant a solid bulk material is to penetration by a conical diamond indenter, having a tip angle of 120°, which impinges on the flat surface of the sample with a prescribed force.

The Rockwell C hardness scale is defined as:

HRC = 100 − d/0.002

where d is the penetration depth (mm).

The most meaningful measure for metallurgists is the Rockwell C index, defined as:


5.3. Description of the test method

The method used for Rockwell C is given in the ISO 6508 standard [16]. The Rockwell C test is performed on a bulk sample whose geometrical dimensions comply with the ISO 6508 specifications. The indenter is a diamond cone having a tip angle of 120° ± 0.5°. The tip ends in a hemisphere 0.2 mm in radius. The indentation is a stepwise process: in the first step the indenter is brought into contact with the sample surface and loaded with a preliminary test force F0 = 98 N with a dwell time of 1-3 s. Subsequently, during a period of 1 s - 8 s, an additional force F1 = 1373 N is applied. The resulting force F = 1471 N is applied for 4 ± 2 s. In the third step the F1 force is released and the penetration depth h is measured within 3 s - 5 s. ISO 6508 recommends that the number of tests carried out in repetitive (reproducible) conditions be a multiple of 5. In this instance, 5 indentations are the established practice for estimating the conventional true value of the HRC of the specimen undergoing the test. The recommended environmental conditions of the test are [16]: temperature 10-35 °C, protection against vibrations and against magnetic and electric fields, and avoidance of soiling of the sample.

5.4. The mathematical model of the measurand

The mathematical model given by the GUM for estimating the MU of hardness reference blocks is the most appropriate for MU estimation in industrial hardness testing. The GUM model takes into account the correction factors coming from the equipment calibration as follows:


where d̄ is the mean of the penetration depths; c is the correction assigned to the equipment; b is the difference between the hardness of the areas indented by the reference equipment and the calibrated area; e is the correction assigned to the uncertainty of the reference equipment.

c is assigned to the equipment by comparing its penetration depths, made in a primary hardness reference block, with those of a reference machine ("national standard machine").

Although b and e are negligible corrections, they are introduced in the expression of the mathematical model so that they are taken into account as contributors to the MU budget. According to ISO 6508-1:2005 [16], a "hardness reference block" may be considered a CRM (Certified Reference Material). Based on the above model, it was considered that the h_RCX of a specimen X may be expressed as:

h_RCX = d̄_X + c
where d̄_X is the mean of the penetration depths given by the testing machine on specimen X, and c is the correction assigned to the equipment, obtained by comparing the mean penetration depth made on a CRM with that specified by its certificate, i.e.:

c = d_oCRM − d̄_CRM
where d̄_CRM is the mean of the penetration depths given by the test machine on the CRM, and d_oCRM is the expected penetration depth, derived as:

d_oCRM = 0.002 · (100 − HRC_CRM) (mm)

where HRC_CRM is the certified Rockwell C hardness of the CRM.

5.5. Estimation of the contribution of the influence factors to the overall MU

5.5.1. Contribution assigned to the mean indentation depth on the specimen (d̄_X)

The contribution of d̄_X to the overall MU is assessed by the standard deviation of the mean, which is a standard uncertainty (type A). In addition to the data spreading around d̄_X, the uncertainty of the resolution of the depth measuring system will be considered. The contribution of d̄_X to the overall MU is:


where S_X² is the sample variance of the indentation depths, n is the number of indentations, t is the Student factor, and δ²/12 is the uncertainty assigned to the depth reading, having a triangular pdf [2].

S_X² can be calculated as:

S_X² = [1/(n−1)] Σ_{i=1}^{n} (d_i − d̄_X)² (147)

where d_i, i = 1, …, n are the indentation depths and n is the number of indentations.

5.5.2. Contribution assigned to c

The uncertainty assigned to c has two contributors, i.e. the standard uncertainty assigned to the measurement process on the CRM and that assigned to the CRM itself, which is of type B. The MU attributed to d̄_CRM is of the same type as that assigned to d̄_X; therefore it may be expressed as:


where S_CRM² is the sample variance of the depths given by the indentations carried out on the CRM, δ²/12 has the same significance as above, and t is the Student factor.

S_CRM² can be calculated using Eq.(147).

The uncertainty assigned to the CRM is taken from its certificate for k = 1 and is denoted u_CRM. In this instance, the combined uncertainty assigned to c is:

u(c) = √( u²(d̄_CRM) + u_CRM² )
Common practice in MU assessment is to also take into account the uncertainties assigned to the machine calibration and that attributed to the operator. But these contributions are already included in the MU assigned to d̄_X and to c. Thus, adding the mentioned MU again would overestimate the combined uncertainty of the hardness test.

5.6. Combined uncertainty calculation

It is a matter of evidence that the contributions to the overall uncertainty of the Rockwell C hardness test are mutually independent and, consequently, the combined uncertainty u_c assigned to h_RC is:

u_c(h_RC) = √( u²(d̄_X) + u²(c) )
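The uncertainty budget of §5.5-5.6 can be assembled as in the sketch below. All depth values, the reading resolution δ and the CRM certificate uncertainty are hypothetical; t = 1.14 for n = 5 and the δ²/12 reading term follow the text, and the final conversion to HRC units assumes 0.002 mm of depth per HRC point.

```python
# Assembly of the uncertainty budget of 5.5-5.6 (sketch). The depth data (mm),
# the reading resolution delta and the CRM certificate uncertainty u_crm_cert
# are hypothetical; t = 1.14 for n = 5 and the delta^2/12 reading term follow
# the text; the HRC conversion assumes 0.002 mm of depth per HRC point.
import math, statistics

def u_mean_depth(depths, t=1.14, delta=0.001):
    """Type A uncertainty of the mean depth plus the reading-resolution term."""
    n = len(depths)
    s2 = statistics.variance(depths)            # sample variance of the depths
    return math.sqrt(t * t * s2 / n + delta * delta / 12.0)

d_x   = [0.1150, 0.1162, 0.1159, 0.1148, 0.1166]   # indentations on specimen X
d_crm = [0.1195, 0.1201, 0.1198, 0.1203, 0.1199]   # indentations on the CRM
u_crm_cert = 0.00022                               # CRM certificate, k = 1 (mm)

u_c = math.sqrt(u_mean_depth(d_x) ** 2
                + u_mean_depth(d_crm) ** 2 + u_crm_cert ** 2)
U95_mm  = 2.0 * u_c                                # coverage factor k = 2
U95_HRC = U95_mm / 0.002                           # expressed in HRC units
print(U95_mm, round(U95_HRC, 2))
```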
5.7. Extended uncertainty calculation

The reference documents such as [2, 17] do not specify a calculation method for the extended MU of h_RC; they only specify the combined uncertainty of the measurement. ISO 6508 gives clear clues that a Gaussian pdf is assigned to d̄_X, namely: the t factor, e.g. t = 1.14 for n = 5, and k = 2 for U_CRM (Eq.(B.6) in [16]). Based on ISO 6508, one can therefore assign a Gaussian pdf to d̄_X. Thus, the extended uncertainty can be calculated by multiplying u_c(h_RC) by a coverage factor k, for instance k = 2 for a confidence level of 95%. Thus, the expanded uncertainty assigned to h_RC is:

U(h_RC) = 2 · u_c(h_RC)
According to metallurgical practice, the expanded MU of the test result is:

(152)

Thus, the expanded uncertainty of the HRC index is calculated as:


5.8. Reporting the result

If the bias of the test is corrected, then the result in HRC units is:


In this case, the result shall be reported as:

HRC_corr ± U (155)

on the condition that the level of confidence of U or the k value is specified.

If the HRC value is not corrected, then the HRC index is:

(156)

and the expanded MU assigned to the test result is estimated as:


In this case, the result shall be reported as:


5.9. Numerical example

5.9.1. General data about the Rockwell C hardness test

A Rockwell C hardness test was performed in reproducible conditions on a steel specimen using a Balanta Sibiu Rockwell machine, indirectly calibrated on a CRM, i.e. a hardness reference block having a certified hardness of 40.1 HRC with U(95) = 0.22 HRC. The calibration data obtained on CRM SN 48126 are given in Table 1.


Table 1.

The calibration data obtained on CRM SN 48126

Subsequently, according to [16], 5 indentations were carried out on the specimen, denoted X. The hardness test data for the X specimen are given in Table 2.


Table 2.

Test data for the X specimen

From Table 1 and Table 2 it results that c = −0.15 HRC.

5.9.2. MU estimation for the test result according to the classical approach

The u_c assigned to the test result is estimated based on the data provided by the testing, the calibration procedure, the certificate of the CRM and the specifications of the operating manual. The data used for the u_c calculation are given in Table 3.

No. | Uncertainty source | Symbol | Uncertainty [HRC] | Evaluation type | Assigned pdf
1 | Indirect calibration | S_M-MRC | 0.15 | A | Gaussian¹
4 | Displaying resolution | δ | 0.05 | B | Triangular¹

Table 3.

The input data used for the u_c calculation. ¹The pdf assignments are taken after [16], but there is no evidence for these assignments.

The combined uncertainty, calculated by replacing the values given in Table 3 in Eq.(147), is:


According to [16], the corrected HRC (see Eq.(154)) is:


where U = 0.62 HRC is the expanded uncertainty for the 95% confidence level, calculated with a coverage factor k = 2 according to Annex B of ISO 6508-2.

5.9.3. MU for the test result according to the multiconvolutional approach

Whatever the way of reporting the results, it is quite impossible to assign a confidence level to the expanded uncertainty of the result based on irrefutable arguments. The alternative approach is to consider that the measurand has a uniform pdf. The worst case is a uniform pdf having a width equal to the interval of the obtained values, i.e.:

f_d(x) = 1/(d_max − d_min) for x ∈ [d_min; d_max], and 0 otherwise (161)

where d_max and d_min are the maximum and the minimum penetration depths among the five test data. The mean of f_d(x) is µ = (d_max + d_min)/2, while the half-width of the rectangular distribution is a = (d_max − d_min)/2. According to the data given in Table 2, µ = 0.1159 mm, c = 0.003 mm and a = 0.0125 mm. As shown in §3.2.3, the probability that the mean m of 5 reproducible outcomes, each one uniformly distributed in [−a, a], belongs to the interval [m − a/10; m + a/10] is 29.4%, while for [m − a/5; m + a/5] it is 57.64%. Thus, the conventional true value of the penetration depth lies in the interval [0.1157; 0.1161] mm with a confidence level of 55%. In this instance, the HRC index of the specimen could be reported as:


for the 55% confidence level, calculated on the basis of the pdf of the mean.

The confidence level for |m − µ| ≤ 2a/5 is about 99%. In this case, the Rockwell C index could be reported as:


The above result should be interpreted as follows: the conventional true HRC value of the specimen lies in the interval [40.79, 42.99] HRC with a 99% confidence level. For the conventional case adopted by ISO 6508 and presented above, the 99% confidence level corresponds to a coverage factor k = 3, i.e. to U(99%) = 0.93 HRC. By comparison, the test result reported according to [16], HRC_corr = 41.96 ± 0.93 HRC with a 99% confidence level, and the one reported according to the multiconvolutional approach, HRC_corr = 41.89 ± 1.10 HRC, are quite the same, to an extent depending on the rigor claimed by the client. But, from the scientific and mathematical statistics point of view, one may feel more comfortable using a well-founded test result with a somewhat larger assigned uncertainty than using a doubtful one.

5.10. Discussion

The most important finding of this case study is that, when dealing with a measurand whose assigned pdf is not known or is insufficiently documented, the best approach is to consider that it has a uniform distribution. The interval of variation of the outcomes may be considered, at first glance, as the distribution width. Another important issue is that for estimating the MU using the multiconvolutional approach, only data provided by the testing process are used, while the classical approach uses supplementary data. A sound question regarding the multiconvolutional approach is how to decrease the MU, i.e. how to improve the test result quality. The classical solution is for the experimentalist to increase the number of reproducible or repetitive tests. The common practice of five tests appears insufficient, but ten tests should be acceptable. Using a ten-fold convolved uniform pdf, one can describe quite accurately the probability of the mean's displacement about the conventional true value (µ) with at least a 0.1a increment.


6. Summary

This chapter addresses a more meaningful approach to measurement uncertainty estimation, particularly in the metallurgical testing field. The chapter contributes to the state of the art through the development of a consistent approach for calculating the pdfs of the sample mean statistic, of the variance and of the t parameter. To this end, the concepts of probability theory and the derivations of the main pdfs used for measurement uncertainty evaluation, such as the Poisson and Gauss distributions, are presented briefly. The theoretical background is presented with the aim of making clear to an experimentalist the specific pdfs to be assigned to a measurand. A considerable part of the chapter addresses the derivation of the pdfs of the compound variables Y = aX + b, Y = (X1 + … + Xn)/n, Y = X², Y = √X and Y = X1/X2, because these form the basis for deriving the probability density functions of the sample mean, of the variance, and of the Student and Fisher-Snedecor statistics. The pdfs of the sample mean and of the Chi distribution are derived in extenso following the convolution approach, because it provides an easy-to-use and intuitive way to understand how these distributions should be applied for measurement uncertainty estimation. Thus, this approach allows an in-depth understanding of the mathematical formulas, in order to avoid their usage based exclusively on the mathematical literature, without understanding of, or concern about, their appropriateness to the case addressed. An important contribution of the chapter is the argument for using the number of repeated tests, n, as the number of degrees of freedom, and not n−1, as is common practice. The last part of the chapter deals with measurement uncertainty estimation using the GUM method, because it is required by EN ISO/IEC 17025. The GUM approach was applied to the uncertainty estimation of the Rockwell C hardness test according to the ISO 6508 standard. As underlined in the chapter, this standard does not provide clear evidence for assigning a Gaussian distribution to the HRC hardness.
An alternative approach to estimating the uncertainty of the Rockwell C hardness test result is given, based on the pdf of the sample mean obtained as the 5-fold convolution product of the uniform distribution assigned to the measurand.

The entire chapter is designed to emphasize the risk of wrongly estimating the test result uncertainty due to erroneous assumptions regarding the pdf attributed to the measurand or to the influence factors of the uncertainty budget.

© 2012 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

How to cite and reference


Ion Pencea (September 19th 2012). Multiconvolutional Approach to Treat the Main Probability Distribution Functions Used to Assess the Uncertainties of Metallurgical Tests. In: Yogiraj Pardhi (ed.), Metallurgy - Advances in Materials and Processes. IntechOpen. DOI: 10.5772/50511.
