Open access peer-reviewed chapter

Classifying by Bayesian Method and Some Applications

By Tai Vovan

Submitted: December 6th, 2016. Reviewed: June 8th, 2017. Published: November 2nd, 2017.

DOI: 10.5772/intechopen.70052



This chapter summarizes and extends some results related to the classification problem solved by the Bayesian method. We present the classification principle and the Bayes error, and establish the relationship of the Bayes error with other measures. The computation of the Bayes error in practice, for one dimension and for several dimensions, is also considered. Based on the training set and the object to be classified, we propose an algorithm for determining the prior probabilities that can reduce the Bayes error. The algorithm is implemented as a MATLAB procedure that can be applied to real data. It is applied to specific problems in three domains: biology, medicine, and economics. For data sets with quite different characteristics, the proposed algorithm consistently gives better results than the existing approaches. The examples also show the feasibility and the practical potential of the studied problem.


Keywords: Bayesian method, classification, error, prior, application

1. Introduction

The classification problem is one of the main subdomains of discriminant analysis and is closely related to many fields of statistics. Classification assigns an element to the appropriate population among a set of known populations, based on certain observed variables. It is an important branch of multivariate statistics and has applications in many different fields [25, 27]. Recently, this problem has attracted the interest of many statisticians, in both theory and applications [14–18, 22–25]. According to Tai [22], there are four main methods to solve the classification problem: the Fisher method [6, 12], the logistic regression method [8], the support vector machine (SVM) method [3], and the Bayesian method [17]. The Bayesian method has many advantages because it does not require normality of the data and can classify into two or more populations [22–25]. Therefore, it has been used by many scientists in their research.

Given k populations {wi} with probability density functions (pdfs) {fi} and prior probabilities {qi}, i = 1, 2, …, k, where qi ∈ (0;1) and ∑_{i=1}^k qi = 1, Pham–Gia et al. [17] used the maximum function of the pdfs as a tool to study the Bayesian method and obtained important results. The classification principle and the Bayes error were established based on gmax(x) = max{q1f1(x), q2f2(x), …, qkfk(x)}. Relationships between the upper and lower bounds of the Bayes error, the L1-distance of the pdfs, and the overlap coefficient of the pdfs were established. The function gmax(x) plays a very important role in classification by the Bayesian method, and Pham–Gia et al. [17] continued to study it. Using the MATLAB software, Pham–Gia et al. [18] succeeded in identifying gmax(x) for some cases of the bivariate normal distribution. In a similar direction, Tai [22] proposed the L1-distance of {qifi(x)} and established its relationship with the Bayes error. This distance is also used to calculate the Bayes error as well as to classify new elements, and the research was applied to classifying the debt-repayment ability of bank customers. However, we think that this line of Bayesian research is not yet complete: there remain further relations between the Bayes error and other statistical measures.

The Bayesian method has many advantages. However, to our knowledge, its field of practical applications is narrower than that of the other methods. We can find many applications of the Fisher, SVM, and logistic methods in banking and medicine [1, 3, 8, 12]. Nowadays, statistical software can process the classification of large, multivariate data sets effectively and quickly with any of the three methods mentioned above, whereas the Bayesian method does not yet have this advantage. The reasons are the ambiguity in determining the prior probabilities, the difficulty of estimating the pdfs, and the complexity of calculating the Bayes error. Although all these issues have been discussed by many authors, optimal methods have yet to be found [22, 25]. In this chapter, we consider the estimation of the pdfs and the computation of the Bayes error for real applications, and we address the determination of the prior probabilities. When no information is available, the prior probabilities are normally chosen by the uniform distribution. If some past data or a training set is available, the prior probabilities are estimated either by the Laplace method, qi = (ni + n/k)/(N + n), or by the sample frequencies, qi = ni/N, where ni and N are the numbers of elements in the ith population and in the training set, respectively, n is the number of dimensions, and k is the number of groups. These approaches have been studied and applied by many authors [14, 15, 22, 25]. We also propose an algorithm that determines the prior probabilities from the training set, the object to be classified, and fuzzy cluster analysis. The proposed algorithm is applied to specific problems in biology, medicine, and economics and shows advantages over the existing approaches. All calculations are performed by MATLAB procedures.

The rest of this chapter is structured as follows. Section 2 presents the classification principle and the Bayes error; some results on the Bayes error are also established in that section. Section 3 resolves problems that arise in real applications of the Bayesian method: the estimation of the pdfs and the determination of the Bayes error in the one-dimensional and multi-dimensional cases. This section also proposes an algorithm to determine the prior probabilities. Section 4 applies the proposed algorithm to real problems and compares its results with those of existing approaches. Section 5 concludes the chapter.


2. Classifying by Bayesian method

The classification problem by the Bayesian method has been presented in many documents [15, 16, 27], where the classification principle and the Bayes error are established from Bayes' theorem. In this section, we present them via the maximum of the functions qifi(x), i = 1, 2, …, k, which has advantages over the existing presentations in real applications [17, 18, 21–25]. This section also establishes the upper and lower bounds of the Bayes error and its relationships with other measures in statistical pattern recognition.

2.1. Classification principle and Bayes error

Given k populations w1, w2, …, wk, where qi ∈ (0;1) and fi(x) are the prior probability and the pdf of the ith population, respectively, i = 1, 2, …, k. According to Pham–Gia et al. [17], an element x0 is assigned to wi if

gi(x0) = gmax(x0),   (1)

where gi(x) = qifi(x) and gmax(x) = max{q1f1(x), q2f2(x), …, qkfk(x)}.

The Bayes error is given by the formula

Pe1,2,…,k(q) = 1 − ∑_{i=1}^k ∫_{R_i^n} qifi(x)dx,   (2)

where R_i^n = {x : qifi(x) > qjfj(x), j ≠ i, i, j = 1, 2, …, k} and (q) = (q1, q2, …, qk).

From Eq. (2), we can prove the following result:

Pe1,2,…,k(q) = 1 − ∫_{R^n} gmax(x)dx.   (3)

The correct classification probability is determined by Ce1,2,…,k(q) = 1 − Pe1,2,…,k(q).

For k = 2, we have

Pe1,2(q, 1−q) = λ1,2(q, 1−q) = (1/2)(1 − ||qf1, (1−q)f2||1),   (4)

where λ1,2(q, 1−q) is the overlap area measure of qf1(x) and (1−q)f2(x), and ||qf1, (1−q)f2||1 = ∫_{R^n} |qf1(x) − (1−q)f2(x)|dx.
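The chapter's procedures are written in MATLAB; purely as an illustration, the k = 2 quantities above can be checked numerically with a short Python sketch (the two normal populations and the prior q below are hypothetical):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# Hypothetical example: two univariate normal populations.
f1 = norm(loc=0.0, scale=1.0).pdf
f2 = norm(loc=2.0, scale=1.5).pdf
q = 0.5  # prior probability of population 1

# Bayes error Pe = integral of min(q f1, (1-q) f2) over R,
# equivalently 1 - integral of gmax (Eq. (3)).
pe, _ = quad(lambda x: min(q * f1(x), (1 - q) * f2(x)), -np.inf, np.inf)

# Equivalent form of Eq. (4): Pe = (1 - ||q f1, (1-q) f2||_1) / 2.
l1, _ = quad(lambda x: abs(q * f1(x) - (1 - q) * f2(x)), -np.inf, np.inf)
assert abs(pe - (1 - l1) / 2) < 1e-6
```

The two expressions agree because min(a, b) = (a + b − |a − b|)/2 and the two weighted pdfs integrate to 1 in total.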

2.2. Some results about Bayes error

Theorem 1. Let fi(x), i = 1, 2, …, k, k ≥ 3, be k pdfs defined on R^n, n ≥ 1, and let qi ∈ (0;1). The Bayes error satisfies the following relationships with other measures:

  1. Pe1,2,…,k(q) ≤ 1 − (1/(k−1))(1 − ∏_{j=1}^k qj^{αj} · DT(f1, f2, …, fk)^{(α)}),   (5)

  2. Pe1,2,…,k(q) ≤ ∑_{i<j} qi^β qj^{1−β} DT(fi, fj)^{(β,1−β)},   (6)

  3. [(k−1) − ∑_{i<j} ||gi, gj||1]/k ≤ Pe1,2,…,k(q) ≤ 1 − (1/2)max_{i<j}{||gi, gj||1} − min_i{qi},   (7)

  4. 0 ≤ Pe1,2,…,k(q) ≤ 1 − max_i{qi},   (8)

  where

    α = (α1, α2, …, αk); αj, β ∈ (0,1); ∑_{j=1}^k αj = 1; i, j = 1, 2, …, k; and

    DT(f1, f2, …, fk)^{(α)} = ∫_{R^n} ∏_{j=1}^k [fj(x)]^{αj} dx is the affinity of Toussaint [26].


1. For each j = 1, 2, …, k, we have




      On the other hand,






      Combining Eqs. (9) and (10), we obtain


Because ∑_{j=1}^k qjfj − min_{1≤j≤k}{qjfj} includes (k−1) terms, we have




      Integrating the above relation, we obtain:


Using ∫_{R^n} gmax(x)dx = 1 − Pe1,2,…,k(q) in Eq. (11), we obtain Eq. (5).

    2. From Eq. (2), we have






      Integrating the above inequality, we obtain:


    3. We have


      On the other hand,




We also have ∑_{j=1}^k [max{g1, g2, …, gk} − gj] = k·max{g1, g2, …, gk} − ∑_{j=1}^k gj ≤ ∑_{i<j} |gi − gj|.




Since ∫_{R^n} gi(x)dx = qi and ∑_{i=1}^k qi = 1, the inequality Eq. (13) becomes:


Substituting ∫_{R^n} gmax(x)dx = 1 − Pe1,2,…,k(q) into Eqs. (12) and (14), we obtain Eq. (7).

    4. We have

qifi(x) ≤ max{q1f1(x), q2f2(x), …, qkfk(x)} ≤ ∑_{i=1}^k qifi(x) for all i = 1, …, k.

      Integrating the above relation, we obtain:


The above inequality holds for all i = 1, …, k, so


Substituting ∫_{R^n} gmax(x)dx = 1 − Pe1,2,…,k(q) into the above relation, we obtain Eq. (8).

From the results of Eqs. (5) and (6), with α1 = α2 = … = αk = 1/k, we obtain the relationship between the Bayes error and the affinity of Matusita [11]. In particular, when k = 2, we obtain the relationship between Pe1,2(q, 1−q) and Hellinger's distance.

In addition, we also have relations between the Bayes error and the overlap coefficients as well as the L1-distance of {g1(x), g2(x), …, gk(x)} (see Ref. [22]). For the special case q1 = q2 = … = qk = 1/k, expressions relating the Bayes error to the L1-distance of {f1(x), f2(x), …, fk(x)}, and relating Pe1,2,…,k(1/k) to Pe1,2,…,k+1(1/(k+1)), were established in Ref. [17].
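To illustrate the bound in item 4 of Theorem 1, the following Python sketch checks 0 ≤ Pe ≤ 1 − max_i{qi} numerically; the three normal populations and priors below are hypothetical (the chapter itself works in MATLAB):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# Hypothetical: three univariate normal populations with unequal priors.
q = [0.2, 0.3, 0.5]
pdfs = [norm(0, 1).pdf, norm(1.5, 1).pdf, norm(4, 2).pdf]

# Pe = 1 - integral of gmax (Eq. (3)).
gmax_int, _ = quad(lambda x: max(qi * fi(x) for qi, fi in zip(q, pdfs)),
                   -np.inf, np.inf)
pe = 1 - gmax_int

# The bound of Eq. (8): 0 <= Pe <= 1 - max_i q_i.
assert 0 <= pe <= 1 - max(q)
```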


3. Related problems in applying the Bayesian method

To apply the Bayesian method in practice, we have to resolve three main problems: (i) determining the prior probabilities, (ii) computing the Bayes error, and (iii) estimating the pdfs. In this section, we propose an algorithm for (i), based on fuzzy cluster analysis and the object to be classified, that can reduce the Bayes error in comparison with the traditional approaches. For (ii), the Bayes error is given by a closed expression in the general case and, in the one-dimensional case, is determined by an algorithm that finds the maximum of the functions gi(x), i = 1, 2, …, k; the quasi-Monte Carlo method is proposed for computing the Bayes error in the multi-dimensional case. For (iii), we review the estimation of pdfs by the kernel function method, where the bandwidth parameter and the kernel function are specified.

    3.1. Prior probability

In the n-dimensional space, given N objects {W1(0), W2(0), …, WN(0)} with data set Z = [zij]_{n×N}, let U = [μik]_{c×N} be the partition matrix, where μik is the probability that the kth element belongs to wi. Each μik ∈ [0,1], and U satisfies the following conditions:


    We call


be the fuzzy partitioning space of the c populations, and

DikA² = ||zk − vi||²_A = (zk − vi)^T A (zk − vi) is the element of the distance matrix whose entry dik² is the square of the distance from the object zk to the representative of the ith population. This representative is computed by the following formula:


    where m∈ [1,∞) is the fuzziness parameter.

Given the data set Z comprising c known populations w1, w2, …, wc, assume x0 is an object that we need to classify. To identify the prior probabilities for classifying x0, we propose the following prior probability by fuzzy clustering (PPC) algorithm:

Algorithm 1. Determining prior probability by fuzzy clustering (PPC)
Input: The data set Z = [zij]_{n×N} of c populations {w1, w2, …, wc}; x0; ε; m; and the initial partition matrix U = U(0) = [μij]_{c×(N+1)}, where μij = 1 if the jth object belongs to wi and μij = 0 otherwise, i = 1, …, c; j = 1, …, N; and μij = 1/c for j = N + 1.
Output: The prior probabilities μi(N+1), i = 1, 2, …, c.
Repeat
    Find the representative object of wi: vi = ∑_{k=1}^N (μik)^m zk / ∑_{k=1}^N (μik)^m, 1 ≤ i ≤ c.
    Compute the matrix [Dik]_{c×(N+1)} of distances between the objects and the representative objects.
    Update the partition matrix U(new) by the following principle:
        If Dik > 0 for all i = 1, 2, …, c; k = 1, 2, …, N + 1, then
            μik(new) = 1 / ∑_{j=1}^c (Dik/Djk)^{2/(m−1)};
        else, μik(new) = 0.
    Compute S = ||U(new) − U|| = max_{ik}(|μik(new) − μik|) and set U = U(new).
Until S < ε.
Return the prior probabilities μi(N+1), i = 1, 2, …, c (the final column of the matrix U).

    In the above algorithm, we have:

1. ε is a small positive number chosen arbitrarily; the smaller ε is, the more iterations are needed. In the examples of this chapter, we choose ε = 0.001.

2. The distance matrix Dik depends on the norm-inducing matrix A. When A = I, Dik is the matrix of Euclidean distances. There are also other choices of A, such as a diagonal matrix or the inverse of the covariance matrix. In this chapter, we choose the Euclidean distance in the numerical examples and applications.

3. m is the fuzziness parameter. When m = 1, fuzzy clustering becomes nonfuzzy clustering; when m → ∞, the partition becomes completely fuzzy, μik = 1/c. Determining this parameter, which affects the analysis result, is difficult. Although Yu et al. [28] proposed two rules to determine the supremum of m for clustering problems, the search for a specific m is done by the meshing method (see [2, 4, 5, 9] for more details): the best m among several given values is chosen. In this chapter, m is also identified by the meshing method for the classification problem; the best integer m between 2 and 10 is used.

At the end of the PPC algorithm, we obtain the prior probabilities of x0 from the last column of the partition matrix U (μi(N+1), i = 1, 2, …, c). The PPC algorithm determines the prior probabilities via the degree of closeness between the classified object and the populations, so each object receives its own suitable prior probabilities.
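The PPC algorithm is implemented as a MATLAB procedure in this chapter; the following Python sketch of the same iteration (the function name, the Euclidean distance, and the toy data in the usage note are our own illustrative choices) shows how the last column of U yields the priors:

```python
import numpy as np

def ppc_prior(Z, labels, x0, c, m=2, eps=1e-3, max_iter=100):
    """Sketch of the PPC algorithm: prior probabilities for x0 as its
    fuzzy memberships relative to the c known populations."""
    X = np.vstack([Z, x0])             # N training points plus x0
    N = len(Z)
    U = np.zeros((c, N + 1))
    U[labels, np.arange(N)] = 1.0      # crisp start for the training data
    U[:, N] = 1.0 / c                  # non-informative start for x0
    for _ in range(max_iter):
        W = U ** m
        V = (W @ X) / W.sum(axis=1, keepdims=True)                 # representatives v_i
        D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2)  # Euclidean distances
        D = np.maximum(D, 1e-12)       # guard the D > 0 branch of the update
        U_new = 1.0 / (D ** (2 / (m - 1)) * np.sum(D ** (-2 / (m - 1)), axis=0))
        if np.max(np.abs(U_new - U)) < eps:
            U = U_new
            break
        U = U_new
    return U[:, N]                     # last column of U = priors for x0
```

For instance, with two well-separated 1-D groups and x0 near the first one, the returned vector sums to 1 and puts the larger prior on the nearer population.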

    In this chapter, Bayesian method with prior probabilities calculated by the uniform distribution approach, the ratio of samples approach, the Laplace approach, and the proposed PPC algorithm approach are respectively called BayesU, BayesR, BayesL, and BayesC.

Example 1. Given the marks (on a 10-point grading scale) of 20 students: nine students have marks lower than 5 (w1: fail the exam) and 11 students have marks higher than 5 (w2: pass the exam). The data are given in Table 1.


    Table 1.

    The studied marks of 20 students and the actual classifications.

Assume that we need to classify the ninth object, x0 = 4.3, into one of the two populations. Using the PPC algorithm, we have the following final partition matrix:


This matrix shows that the prior probabilities for assigning the ninth object to w1 and w2 are 0.724 and 0.276, respectively. Meanwhile, the prior probabilities determined by BayesU, BayesR, and BayesL are (0.5; 0.5), (0.421; 0.579), and (0.429; 0.571), respectively.

From the data in Table 1, we estimate the pdfs f1(x) and f2(x) and compute the values q1f1(x0) and q2f2(x0), where q1 and q2 are the calculated prior probabilities. The results of classifying x0 by the four approaches BayesU, BayesR, BayesL, and BayesC are given in Table 2.

Methods    Priors            gmax(x0)    Population    Bayes error
BayesU     (0.5; 0.5)        0.0353      w2            0.0538
BayesR     (0.421; 0.579)    0.0409      w2            0.0558
BayesL     (0.429; 0.571)    0.0403      w2            0.0557
BayesC     (0.724; 0.276)    0.0485      w1            0.0241

    Table 2.

    The results when classifying the ninth object.

Because the actual population of x0 is w1, only BayesC gives the true result, and its Bayes error is also the smallest. Thus, in this example, the proposed method overcomes the drawback of the traditional methods in determining the prior probabilities.

    3.2. Determining Bayes error

Theorem 2. Let fi(x), i = 1, 2, …, k, k ≥ 3, be k pdfs defined on R^n, n ≥ 1, and let qi ∈ (0;1),


    The Bayes error is determined by



To obtain Eq. (18), we need to prove the two following results:


and ∪_{i=1}^k R_i^n = R_1^n ∪ R_2^n ∪ … ∪ R_k^n = R^n, with fmax(x) = fi(x) for all x ∈ R_i^n.

Let Ā = R^n \ A; we have


    From Eq. (17), we obtain




On the other hand, from De Morgan's laws, we have






In addition, from Eq. (17), we can directly obtain


For k = 2, q1 = q2 = 1/2, we consider the two following special cases:

1. If f1(x) and f2(x) are two one-dimensional normal pdfs (N(μi, σi), i = 1, 2), without loss of generality we suppose that μ1 < μ2 (for μ1 ≠ μ2) and σ1 < σ2 (for σ1 ≠ σ2); then




For μ1 = μ2 = μ, the above result becomes:

Pe1,2(1/2, 1/2) = 1/2 if σ1 = σ2, and
Pe1,2(1/2, 1/2) = (1/2)[∫_{−∞}^{x4} f1(x)dx + ∫_{x4}^{x5} f2(x)dx + ∫_{x5}^{+∞} f1(x)dx] if σ1 < σ2,

where x4 = μ − σ1σ2√E and x5 = μ + σ1σ2√E, with E = [2/(σ2² − σ1²)]·ln(σ2/σ1) ≥ 0.

2. If f1(x) and f2(x) are two n-dimensional normal pdfs (N(μi, Σi), n ≥ 2, i = 1, 2), then




In the case n = 2, the decision boundary d(x) can be a straight line, a parabola, an ellipse, or a hyperbola.
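As a check on special case 1 with equal means, the intersection points x4, x5 and the resulting Bayes error can be computed directly; here is a Python sketch with hypothetical parameters (the chapter's own implementation is in MATLAB):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical equal-mean, unequal-variance case: sigma1 < sigma2.
mu, s1, s2 = 0.0, 1.0, 2.0
E = 2.0 / (s2**2 - s1**2) * np.log(s2 / s1)
x4 = mu - s1 * s2 * np.sqrt(E)        # intersection points of f1 and f2
x5 = mu + s1 * s2 * np.sqrt(E)

f1, f2 = norm(mu, s1), norm(mu, s2)
# Pe = (1/2)[ integral of f1 on (-inf, x4] + f2 on (x4, x5] + f1 on (x5, inf) ]
pe = 0.5 * (f1.cdf(x4) + (f2.cdf(x5) - f2.cdf(x4)) + (1 - f1.cdf(x5)))
```

The closed form can be verified against a direct numerical integration of min(f1/2, f2/2) over the real line.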

    3.3. Maximum function in the classification problem

To classify a new element by principle (1) and to determine the Bayes error by formula (3), we must find gmax(x). Some authors, such as Pham–Gia et al. [15, 17] and Tai [21, 22], have surveyed relationships between gmax(x) and related quantities of the classification problem. A specific expression for gmax(x) has been found in some special cases [18]; however, a general expression covering all cases has not been found yet.

Given k pdfs fi(x) and qi, i = 1, 2, …, k, with q1 + q2 + … + qk = 1, let gi(x) = qifi(x) and gmax(x) = max{gi(x)}. We are now interested in determining gmax(x).

    (a) For one dimension

    In this case, we can find gmax(x) by the following algorithm:

Algorithm 2. Find the gmax(x) function
Input: gi(x) = qifi(x), where fi(x) and qi are the probability density function and the prior probability of wi, i = 1, 2, …, k, respectively.
Output: The gmax(x) function.
Find all roots of the equations gi(x) − gj(x) = 0, i = 1, …, k − 1; j = i + 1, …, k.
Let B be the set of all roots.
For xlm ∈ B (a root of the equation gl(x) − gm(x) = 0) do
     For p ∈ {1, 2, …, k} \ {l, m} do
          If gl(xlm) < gp(xlm) then B = B \ {xlm}
Arrange the elements of B in increasing order: x1 < x2 < … < xh.
(Determine the function gmax(x) on the interval (−∞, x1])
     For i = 1 to k do
          If gi(x1 − ε1) = max{g1(x1 − ε1), g2(x1 − ε1), …, gk(x1 − ε1)} then
               gmax(x) = gi(x) for all x ∈ (−∞, x1]
(Determine the function gmax(x) on the intervals (xj, xj+1], j = 1, …, h − 1)
     For i = 1 to k do
          For j = 1 to h − 1 do
               If gi(xj + ε2) = max{g1(xj + ε2), g2(xj + ε2), …, gk(xj + ε2)} then
                    gmax(x) = gi(x) for all x ∈ (xj, xj+1]
(Determine the function gmax(x) on the interval (xh, +∞))
     For i = 1 to k do
          If gi(xh + ε3) = max{g1(xh + ε3), g2(xh + ε3), …, gk(xh + ε3)} then
               gmax(x) = gi(x) for all x ∈ (xh, +∞)

In the above algorithm, ε1, ε2, ε3 are positive constants chosen small enough that the test points x1 − ε1, xj + ε2, and xh + ε3 stay inside their respective intervals; in particular, ε2 must be smaller than the length of each interval (xj, xj+1].
    From this algorithm, we have written a MATLAB code to find the gmax(x). When gmax(x) is determined, we will easily calculate Bayes error by using formula (3), as well as classify a new element by principle (1).
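Our code for Algorithm 2 is a MATLAB procedure; as an illustration only, the following Python sketch (the grid bracketing of the roots and the three example populations are our own choices) finds all roots of gi − gj, uses them as interval endpoints, and integrates gmax piecewise to obtain the Bayes error; keeping every root as a breakpoint is harmless for the integral:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm
from scipy.integrate import quad

# Hypothetical: three univariate normal populations with equal priors.
q = [1 / 3, 1 / 3, 1 / 3]
comps = [norm(0, 1), norm(2, 1), norm(5, 2)]
g = [lambda x, qi=qi, d=d: qi * d.pdf(x) for qi, d in zip(q, comps)]

def which_max(x):
    """Index i with g_i(x) = gmax(x), i.e. the Bayesian assignment of x."""
    return int(np.argmax([gi(x) for gi in g]))

# Roots of g_i - g_j, bracketed on a search grid and refined by brentq.
grid = np.linspace(-10, 15, 2000)
roots = []
for i in range(len(g)):
    for j in range(i + 1, len(g)):
        h = lambda x: g[i](x) - g[j](x)
        for a, b in zip(grid[:-1], grid[1:]):
            if h(a) * h(b) < 0:
                roots.append(brentq(h, a, b))
roots = sorted(roots)

# Bayes error: Pe = 1 - integral of gmax, integrated piecewise between roots.
pts = [-np.inf] + roots + [np.inf]
gmax_int = sum(quad(lambda x: max(gi(x) for gi in g), a, b)[0]
               for a, b in zip(pts[:-1], pts[1:]))
pe = 1 - gmax_int
```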

    Example 2. Given seven populations having univariate normal pdfs {f1, f2,…, f7} with specific parameters as follows (Figure 1):

    Figure 1.

The graph of seven one-dimensional normal pdfs, fmax(x) and gmax(x).


Using the written codes with qi = 1/7, gi(x) = qifi(x), i = 1, 2, …, 7, we have the results:


    (b) For multidimension

In multi-dimensional cases, it is very complicated to obtain a closed expression for gmax(x). The difficulty comes from the various forms of the intersection curves between the pdf surfaces. This problem has attracted the interest of many authors [17, 18, 21–25]. Pham–Gia et al. [18] attempted to find the function gmax(x); however, it has been established only for some cases of the bivariate normal distribution.

    Example 3. Given the four bivariate normal pdfs N(μi, Σi) with the following specific parameters [16]:


With q1 = 0.25, q2 = 0.2, q3 = 0.4, and q4 = 0.15, the graphs of gi(x) = qifi(x) and their intersection curves are shown in Figure 2.

    Figure 2.

The graph of the four bivariate normal pdfs and their gmax(x).

Here, we do not find an expression for gmax(x); instead, we compute the Bayes error by integrating gmax(x) with the quasi-Monte Carlo method [17]. An algorithm for this calculation has been constructed, and the corresponding MATLAB procedure is used in Section 4.
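The chapter's quasi-Monte Carlo integration is a MATLAB procedure; as an illustration, here is a Python sketch using a Halton sequence over a bounding box (the two bivariate normal populations, the box, and the sample size are hypothetical):

```python
import numpy as np
from scipy.stats import multivariate_normal, qmc

# Hypothetical: two bivariate normal populations with equal priors.
q = [0.5, 0.5]
comps = [multivariate_normal([0, 0], [[1, 0.0], [0.0, 1]]),
         multivariate_normal([2, 1], [[1, 0.3], [0.3, 1]])]

# Bounding box chosen to cover essentially all the probability mass.
lo, hi = np.array([-8.0, -8.0]), np.array([10.0, 9.0])
sampler = qmc.Halton(d=2, seed=0)
pts = qmc.scale(sampler.random(2**14), lo, hi)   # low-discrepancy points

# QMC estimate of the integral of gmax, then Pe = 1 - integral of gmax.
gmax = np.max([qi * c.pdf(pts) for qi, c in zip(q, comps)], axis=0)
volume = np.prod(hi - lo)
gmax_int = volume * gmax.mean()
pe = 1 - gmax_int
```

A low-discrepancy sequence fills the box more evenly than pseudo-random points, which typically reduces the integration error for the same sample size.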

    3.4. Estimate the probability density function

There are many parametric and nonparametric methods to estimate pdfs. In the examples and applications of Section 4, we use the kernel function method, the most popular one in current practice. The estimate has the form

f̂(x) = (1/(N·h1h2…hn)) ∑_{i=1}^N ∏_{j=1}^n Kj((xj − xij)/hj),

where xj, j = 1, 2, …, n, are the variables; xij, i = 1, 2, …, N, is the ith observation of the jth variable; hj is the bandwidth parameter for the jth variable; and Kj(·) is the kernel function of the jth variable, usually the normal, Epanechnikov, biweight, or triweight kernel. In this method, the choices of the smoothing parameter and of the kernel function play an important role and affect the result. Although Silverman [20], Martinez and Martinez [10], and other authors [7, 13, 27] have discussed this problem, an optimal choice has not been found yet. In this chapter, the smoothing parameter follows the idea of Scott [19], and the kernel function is the Gaussian one. We have written MATLAB code to estimate pdfs in the n-dimensional space using this method.
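As an illustration of this estimator with the Gaussian kernel and a Scott-type bandwidth, here is a Python sketch (the chapter's own code is in MATLAB; scipy's gaussian_kde uses Scott's rule by default, and the sample below is simulated):

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

# Hypothetical data: 500 draws from N(3, 1).
rng = np.random.default_rng(0)
sample = rng.normal(loc=3.0, scale=1.0, size=500)

# Gaussian kernel density estimate with Scott's bandwidth rule.
kde = gaussian_kde(sample, bw_method='scott')
x = np.linspace(0, 6, 61)
fhat = kde(x)   # estimated density on the grid
```

For a sample of this size, the estimate is close to the true N(3, 1) density, and its integral over the support is close to 1.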

We have written complete MATLAB code for the proposed algorithm; it is applied effectively in the examples of Section 4.


    4. Some applications

In this section, we consider applications in three domains, biology, medicine, and economics, to illustrate the presented theory and to test the established algorithms. They also show that the proposed algorithm has advantages over the existing ones.

Application 1. We consider classification for the well-known Iris flower data, presented in many documents such as Ref. [17]. These data are often used to compare new classification methods with existing ones. The three varieties of Iris, namely, Setosa (Se), Versicolor (Ve), and Virginica (Vi), have data on four attributes: X1 = sepal length, X2 = sepal width, X3 = petal length, and X4 = petal width.

In this application, the cases of one, two, three, and four variables are considered in turn to classify the three groups (Se), (Ve), and (Vi) by the Bayesian method with different prior probabilities, in order to compare the results of BayesC with BayesU, BayesR, and BayesL. Because the sizes of the three groups are equal, the results of BayesU, BayesR, and BayesL coincide. The correct classification probabilities are summarized in Table 3.

Variables         BayesU = BayesL = BayesR    BayesC
X1, X2            0.715                       0.807
X1, X3            0.893                       0.895
X1, X4            0.807                       0.850
X2, X3            0.891                       0.898
X2, X4            0.809                       0.815
X3, X4            0.843                       0.866
X1, X2, X3        0.892                       0.919
X1, X2, X4        0.764                       0.810
X1, X3, X4        0.762                       0.814
X2, X3, X4        0.736                       0.822
X1, X2, X3, X4    0.725                       0.745

    Table 3.

    The correct probability (%) in classifying Iris flower.

Table 3 shows that in almost all cases the proposed algorithm gives better results than the others, and that the case using the three variables X1, X2, and X3 gives the best result.

Application 2. This application considers thyroid gland disease (TGD). The thyroid is an important gland, the largest in the body, responsible for the metabolism and the working processes of all cells. Some common thyroid diseases are hypothyroidism, hyperthyroidism, thyroid nodules, and thyroid cancer; they are dangerous diseases, and recently the rate of thyroid disease has been increasing in some poor countries. The data include 3772 persons: 3541 in the ill group (I) and 231 in the non-ill group (NI). Details of these data are given in the thyroid-disease data set; the surveyed variables are Age (X1), Query on thyroxin (X2), Anti-thyroid medication (X3), Sick (X4), Pregnant (X5), Thyroid surgery (X6), Thyroid stimulating hormone (X7), Triiodothyronine (X8), Total thyroxin (X9), T4U measured (X10), and Referral source (X11). In this application, a random 70% of the data (2479 elements of group I and 162 elements of group NI) is used as the training set to determine the significant variables, estimate the pdfs, and find a suitable model. The remaining 30% (1062 elements of group I and 69 elements of group NI) is used as the test set. The results of the Bayesian method are also compared with those of the other methods.

To assess the effects of the independent variables on TGD, we build the logistic regression model for log(p/(1 − p)) with the variables Xi, i = 1, 2, …, 11, where p is the probability of TGD. The analytical results are summarized in Table 4.


    Table 4.

    Value Sigs of logistic regression model.

In Table 4, the three variables X1, X8, and X11 (in bold face) are statistically significant for classifying the two groups (I) and (NI) at the 5% level, so we use them to classify TGD.

Applying the PPC algorithm for the cases of one, two, and three variables with all the prior-probability choices, we obtain the results given in Table 5.

Table 5 shows that the correct classification probability is high; BayesC gives the best result in all three cases of variables and is almost exact with three variables. We also compare BayesC with the existing methods (Fisher, SVM, and logistic) for the three cases above; all of them show that BayesC is more advantageous in reducing the Bayes error.

Cases              Variables      BayesU    BayesR    BayesL    BayesC
One variable       X1             91.13     97.47     97.46     97.97
Two variables      X1, X8         98.73     98.77     98.77     99.78
                   X1, X11        98.11     98.65     97.65     99.44
                   X8, X11        98.71     98.77     98.77     99.82
Three variables    X1, X8, X11    98.35     98.89     98.89     99.96

    Table 5.

    The correct probability (%) in classifying TGD by Bayesian method from training set.

Using the best result for each method from Table 6 to classify the test set (1131 elements), we obtain the results given in Table 7.

Methods    One variable    Two variables    Three variables

    Table 6.

    The correct probability (%) for optimal models of methods in classifying TGD.

From Table 7, we see that BayesC also gives the best result on the test set.

Methods    Correct numbers    False numbers    Correct probability

    Table 7.

    Compare the correct probability (%) in classifying TGD from test set.

Application 3. This application considers the problem of repaying bank debt (RBD) by customers. In bank credit operations, determining the repayment ability of customers is very important: if lending is too easy, the bank may face bad debts; if it is too strict, the bank will miss good business. Therefore, in recent years, the classification of credit applications by ability to repay bank debt has been specially studied and remains a difficult problem in Vietnam. In this section, we appraise this ability for companies in Can Tho city (CTC), Vietnam, using the proposed approach. We collected data on 214 enterprises operating in the key sectors of agriculture, industry, and commerce, including 143 cases of good debt (G) and 71 cases of bad debt (B). The data are provided by responsible organizations of CTC. Each company is evaluated on 13 independent variables chosen by expert opinion. The specific variables are given in Table 8.

Xi     Independent variables    Detail
X1     Financial leverage       Total debt/total equity
X2     Reinvestment             Total debt/total equity
X3     Roe                      Net profit/equity
X4     Interest                 (Net income + depreciation)/total assets
X5     Floating capital         (Current assets − current liabilities)/total assets
X6     Liquidity                (Cash + short-term investments)/current liabilities
X7     Profits                  Net profit/total assets
X8     Ability                  Net sales/total assets
X9     Size                     Logarithm of total assets
X10    Experience               Years in business activity
X11    Agriculture              Agricultural and forestry sector
X12    Industry                 Industry and construction
X13    Commerce                 Trade and services

    Table 8.

    The surveyed independent variables.

Because of the sensitivity of the problem, the author has to conceal the real data and work with a training data set. The steps performed in this application are similar to those in Application 2. The training set has 100 elements of group G and 50 elements of group B, and the test set has 43 elements of group G and 21 elements of group B. With the training set, the logistic regression model shows that only the three variables X1, X4, and X7 are statistically significant at the 5% level, so we use these three variables to perform BayesU, BayesR, BayesL, and BayesC. Their results are given in Table 9.

Cases              Variables     BayesU    BayesR    BayesL    BayesC
One variable       X1            86.21     86.14     84.13     87.13
Two variables      X1, X4        87.25     88.72     87.19     89.06
                   X1, X7        88.16     88.34     83.26     89.56
                   X4, X7        89.25     89.04     89.02     91.34
Three variables    X1, X4, X7    91.15     91.53     90.17     93.18

    Table 9.

    The correct probability (%) in classifying RBD by Bayesian method from training set.

From Table 9, we see that BayesC gives the highest correct probability in all cases. We also run the logistic, Fisher, and SVM methods on the training set to find their best results; the correct probabilities are given in Table 10.

Using the best model for each method from Table 10 to classify the test set (67 elements), we obtain the results given in Table 11.

Methods    One variable    Two variables    Three variables

    Table 10.

    The correct probability (%) for optimal models of methods in classifying RBD.

Once again, Table 11 shows that BayesC also gives the best result on the test data.

Methods    Correct numbers    False numbers    Correct probability

    Table 11.

    Compare the correct probability (%) in classifying RBD from test set.


    5. Conclusion

This chapter has presented classification by the Bayesian method in both its theoretical and applied aspects. We established relations of the Bayes error with other measures and considered its computation in real applications for the one-dimensional and multi-dimensional cases. An algorithm to determine the prior probabilities, which may decrease the Bayes error, was proposed. The studied problems were applied in three different domains, biology, medicine, and economics, showing that the proposed approach has advantages over the existing ones. In addition, a complete MATLAB procedure was written and used effectively in real applications. These examples show the potential of this work for research on real problems.

    © 2017 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

How to cite: Tai Vovan (November 2, 2017). Classifying by Bayesian Method and Some Applications. In: Javier Prieto Tejedor (Ed.), Bayesian Inference. IntechOpen. DOI: 10.5772/intechopen.70052.

