
Classifying by Bayesian Method and Some Applications

By Tai Vovan

Submitted: December 6th 2016. Reviewed: June 8th 2017. Published: November 2nd 2017.

DOI: 10.5772/intechopen.70052


Abstract

This chapter summarizes and proposes some results related to the classification problem by the Bayesian method. We present the classification principle and the Bayes error, and we establish the relationships of the Bayes error with other measures. The computation of the Bayes error in practice, in one and several dimensions, is also considered. Based on the training set and the object that we need to classify, an algorithm to determine prior probabilities that can reduce the Bayes error is proposed. This algorithm has been implemented as a MATLAB procedure that can be applied well to real data. The proposed algorithm is applied in three domains (biology, medicine, and economics) through specific problems. For applied data sets with different characteristics, the proposed algorithm always gives the best results in comparison with the existing ones. Furthermore, the examples show the feasibility and the potential for real applications of the researched problem.

Keywords

  • Bayesian method
  • classification
  • error
  • prior
  • application

1. Introduction

The classification problem is one of the main subdomains of discriminant analysis and is closely related to many fields of statistics. Classification assigns an element to the appropriate population in a set of known populations based on certain observed variables. It is an important development direction of multivariate statistics and has applications in many different fields [25, 27]. Recently, this problem has attracted many statisticians in both theoretical and applied areas [14-18, 22-25]. According to Tai [22], there are four main methods to solve the classification problem: the Fisher method [6, 12], the logistic regression method [8], the support vector machine (SVM) method [3], and the Bayesian method [17]. Because the Bayesian method does not require the data to be normal and can classify two or more populations, it has many advantages [22-25]. Therefore, it has been used by many scientists in their research.

Given k populations {wi} with probability density functions (pdfs) {fi} and prior probabilities {qi}, i = 1, 2, …, k, where $q_i \in (0;1)$ and $\sum_{i=1}^{k} q_i = 1$, Pham–Gia et al. [17] used the maximum function of the pdfs as a tool to study the Bayesian method and obtained important results. The classification principle and the Bayes error were established based on gmax(x) = max{q1f1(x), q2f2(x), …, qkfk(x)}. The relationships between the upper and lower bounds of the Bayes error, the L1-distance of the pdfs, and the overlap coefficient of the pdfs were established. The function gmax(x) plays a very important role in the classification problem by the Bayesian method, and Pham–Gia et al. [17] continued to study it. Using the MATLAB software, Pham–Gia et al. [18] succeeded in identifying gmax(x) for some cases of the bivariate normal distribution. In a similar development, Tai [22] proposed the L1-distance of the {qifi(x)} and established its relationship with the Bayes error; this distance is also used to calculate the Bayes error as well as to classify new elements, and the research was applied to classifying the ability of bank customers to repay debt. However, we think that this line of Bayesian-approach research is not yet complete: there remain relations between the Bayes error and other statistical measures to be established.

The Bayesian method has many advantages. However, to our knowledge, its field of practical applications is narrower than that of the other methods. We can find many applications in banking and medicine using the Fisher, SVM, and logistic methods [1, 3, 8, 12]. Nowadays, statistical software can effectively and quickly classify large and multivariate data sets using any of the three methods mentioned above, whereas the Bayesian method does not have this advantage. The causes are the ambiguity in determining the prior probabilities, the difficulty of estimating the pdfs, and the complexity of calculating the Bayes error. Although all these issues have been discussed by many authors, optimal methods have yet to be found [22, 25]. In this chapter, we consider the estimation of the pdfs and the calculation of the Bayes error for real applications, and we address how to determine the prior probabilities. In the case of no information, we normally choose the prior probabilities by the uniform distribution. If we have some past data or a training set, the prior probabilities are estimated either by the Laplace method, qi = (ni + n/k)/(N + n), or by the frequencies of the sample, qi = ni/N, where ni and N are the numbers of elements in the ith population and in the training set, respectively, n is the number of dimensions, and k is the number of groups. These approaches have been studied and applied by many authors [14, 15, 22, 25]. We also propose an algorithm to determine the prior probabilities based on the training set, the object to be classified, and fuzzy cluster analysis. The proposed algorithm is applied to specific problems in biology, medicine, and economics and has advantages over the existing approaches. All calculations are performed by MATLAB procedures.
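As a small illustration of the two training-set prior estimates mentioned above, the following MATLAB sketch computes both from hypothetical group sizes (the numbers are made up for the example and are not taken from any data set used below):

```matlab
% Prior probabilities from a training set: frequency vs. Laplace estimate.
ni = [9; 11];                       % elements per population (illustrative)
N  = sum(ni);                       % size of the training set
n  = 1;                             % number of dimensions
k  = numel(ni);                     % number of groups

q_freq    = ni / N                  % qi = ni/N
q_laplace = (ni + n/k) / (N + n)    % qi = (ni + n/k)/(N + n)
```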

The rest of this chapter is structured as follows. Section 2 presents the classification principle and the Bayes error; some results on the Bayes error are also established there. Section 3 resolves the problems arising in real applications of the Bayesian method, namely the estimation of the pdfs and the computation of the Bayes error in one and several dimensions; this section also proposes an algorithm to determine the prior probabilities. Section 4 applies the proposed algorithm to real problems and compares its results with those obtained using existing approaches. Section 5 concludes the chapter.

2. Classifying by Bayesian method

The classification problem by the Bayesian method has been presented in many documents [15, 16, 27], where the classification principle and the Bayes error are established from Bayes' theorem. In this section, we present them via the maximum function of the qifi(x), i = 1, 2, …, k, an approach that has advantages over the existing ones in real applications [17, 18, 21-25]. This section also establishes the upper and lower bounds of the Bayes error and its relationships with other measures in statistical pattern recognition.

2.1. Classification principle and Bayes error

Given k populations w1, w2, …, wk, where qi ∈ (0;1) and fi(x) are the prior probability and the pdf of the ith population, respectively, i = 1, 2, …, k. According to Pham–Gia et al. [17], an element x0 will be assigned to wi if

$$g_i(x_0) = g_{\max}(x_0), \tag{1}$$

where $g_i(x) = q_i f_i(x)$ and $g_{\max}(x) = \max\{q_1 f_1(x), q_2 f_2(x), \dots, q_k f_k(x)\}$.

The Bayes error is given by the formula

$$Pe_{1,2,\dots,k}^{(q)} = \sum_{i=1}^{k}\int_{R^n\setminus R_i^n} q_i f_i(x)\,dx = 1 - \sum_{i=1}^{k}\int_{R_i^n} q_i f_i(x)\,dx, \tag{2}$$

where $R_i^n = \{x \mid q_i f_i(x) > q_j f_j(x),\ j \ne i,\ i, j = 1, 2, \dots, k\}$ and $(q) = (q_1, q_2, \dots, q_k)$.

From Eq. (2), we can prove the following result:

$$Pe_{1,2,\dots,k}^{(q)} = \sum_{j=1}^{k}\int_{R^n\setminus R_j^n} q_j f_j(x)\,dx = \sum_{j=1}^{k}\left[\int_{R^n} q_j f_j(x)\,dx - \int_{R_j^n}\max_{1\le l\le k}\{q_l f_l(x)\}\,dx\right]$$

$$= \int_{R^n}\sum_{j=1}^{k} q_j f_j(x)\,dx - \sum_{j=1}^{k}\int_{R_j^n}\max_{1\le l\le k}\{q_l f_l(x)\}\,dx = 1 - \int_{R^n}\max_{1\le l\le k}\{q_l f_l(x)\}\,dx,$$

or

$$Pe_{1,2,\dots,k}^{(q)} = 1 - \int_{R^n} g_{\max}(x)\,dx. \tag{3}$$

The correct classification probability is $Ce_{1,2,\dots,k}^{(q)} = 1 - Pe_{1,2,\dots,k}^{(q)}$.

For k = 2, we have

$$Pe_{1,2}^{(q,1-q)} = \int_{R^n}\min\{q f_1(x), (1-q) f_2(x)\}\,dx = \lambda_{1,2}^{(q,1-q)} = \frac12\Big[1 - \|q f_1 - (1-q) f_2\|_1\Big], \tag{4}$$

where $\lambda_{1,2}^{(q,1-q)}$ is the overlap area measure of $q f_1(x)$ and $(1-q) f_2(x)$, and $\|q f_1 - (1-q) f_2\|_1 = \int_{R^n}|q f_1(x) - (1-q) f_2(x)|\,dx$.
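As a quick numerical illustration of Eq. (4), the following MATLAB sketch evaluates both forms of the two-population Bayes error for a pair of normal densities; the parameters are ours, chosen for illustration, and `normpdf` requires the Statistics and Machine Learning Toolbox:

```matlab
% Bayes error for k = 2 via Eq. (4): integral of the pointwise minimum,
% and the equivalent L1-distance form. Parameters are illustrative.
q  = 0.4;                                  % prior probability of w1
f1 = @(x) normpdf(x, 0, 1);                % pdf of w1
f2 = @(x) normpdf(x, 2, 1.5);              % pdf of w2

Pe = integral(@(x) min(q*f1(x), (1 - q)*f2(x)), -Inf, Inf);
L1 = integral(@(x) abs(q*f1(x) - (1 - q)*f2(x)), -Inf, Inf);
fprintf('Pe = %.4f, (1 - L1)/2 = %.4f\n', Pe, (1 - L1)/2);   % the two agree
```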

2.2. Some results about Bayes error

Theorem 1. Let $f_i(x)$, i = 1, 2, …, k, k ≥ 3, be k pdfs defined on $R^n$, $n \ge 1$, and let $q_i \in (0;1)$. We have the following relationships between the Bayes error and other measures:

1. $$Pe_{1,2,\dots,k}^{(q)} \le 1 - \frac{1}{k-1}\Big(1 - \prod_{j=1}^{k} q_j^{\alpha_j}\, DT(f_1, f_2, \dots, f_k)_{\alpha}\Big), \tag{5}$$

2. $$Pe_{1,2,\dots,k}^{(q)} \le \sum_{i<j} q_i^{\beta} q_j^{1-\beta}\, DT(f_i, f_j)_{(\beta, 1-\beta)}, \tag{6}$$

3. $$\frac{1}{k}\Big[(k-1) - \sum_{i<j}\|g_i - g_j\|_1\Big] \le Pe_{1,2,\dots,k}^{(q)} \le 1 - \frac12\max_{i<j}\{\|g_i - g_j\|_1\} - \min_i\{q_i\}, \tag{7}$$

4. $$0 \le Pe_{1,2,\dots,k}^{(q)} \le 1 - \max_i\{q_i\}, \tag{8}$$

where $\alpha = (\alpha_1, \alpha_2, \dots, \alpha_k)$; $\alpha_j, \beta \in (0,1)$; $\sum_{j=1}^{k}\alpha_j = 1$; i, j = 1, 2, …, k; and

$$DT(f_1, f_2, \dots, f_k)_{\alpha} = \int_{R^n}\prod_{j=1}^{k}[f_j(x)]^{\alpha_j}\,dx$$

is the affinity of Toussaint [26].

Proof:

1. For each i = 1, 2, …, k, we have

$$\Big(\sum_{j=1}^{k} q_j f_j\Big)^{\alpha_i} \ge (q_i f_i)^{\alpha_i}.$$

Therefore,

$$\Big(\sum_{j=1}^{k} q_j f_j\Big)^{\alpha_1+\alpha_2+\cdots+\alpha_k} \ge \prod_{j=1}^{k}(q_j f_j)^{\alpha_j} \;\Longrightarrow\; \sum_{j=1}^{k} q_j f_j \ge \prod_{j=1}^{k}(q_j f_j)^{\alpha_j}. \tag{9}$$

On the other hand,

$$\Big(\min_{1\le j\le k}\{q_j f_j\}\Big)^{\alpha_1} \le (q_1 f_1)^{\alpha_1},\ \dots,\ \Big(\min_{1\le j\le k}\{q_j f_j\}\Big)^{\alpha_k} \le (q_k f_k)^{\alpha_k},$$

so

$$\Big(\min_{1\le j\le k}\{q_j f_j\}\Big)^{\alpha_1+\cdots+\alpha_k} \le \prod_{j=1}^{k}(q_j f_j)^{\alpha_j},$$

or

$$\min_{1\le j\le k}\{q_j f_j\} \le \prod_{j=1}^{k}(q_j f_j)^{\alpha_j}. \tag{10}$$

Combining Eqs. (9) and (10), we obtain

$$0 \le \sum_{j=1}^{k} q_j f_j - \prod_{j=1}^{k}(q_j f_j)^{\alpha_j} \le \sum_{j=1}^{k} q_j f_j - \min_{1\le j\le k}\{q_j f_j\}.$$

Because $\sum_{j=1}^{k} q_j f_j - \min_{1\le j\le k}\{q_j f_j\}$ is a sum of (k − 1) terms, each bounded by the maximum, we have

$$\sum_{j=1}^{k} q_j f_j - \min_{1\le j\le k}\{q_j f_j\} \le (k-1)\max_{1\le j\le k}\{q_j f_j\}.$$

Thus,

$$0 \le \sum_{j=1}^{k} q_j f_j - \prod_{j=1}^{k}(q_j f_j)^{\alpha_j} \le (k-1)\max_{1\le j\le k}\{q_j f_j\}.$$

Integrating the above relation over $R^n$, we obtain

$$1 - \prod_{j=1}^{k} q_j^{\alpha_j}\, DT(f_1, f_2, \dots, f_k)_{\alpha} \le (k-1)\int_{R^n} g_{\max}(x)\,dx. \tag{11}$$

Using $\int_{R^n} g_{\max}(x)\,dx = 1 - Pe_{1,2,\dots,k}^{(q)}$ in Eq. (11), we obtain Eq. (5).

2. From Eq. (2), since $q_j f_j(x) = \min\{q_i f_i(x), q_j f_j(x)\}$ on $R_i^n$ for $i \ne j$, we have

$$Pe_{1,2,\dots,k}^{(q)} = \sum_{j=1}^{k}\int_{R^n\setminus R_j^n} q_j f_j(x)\,dx = \sum_{j=1}^{k}\sum_{i\ne j}\int_{R_i^n}\min\{q_i f_i(x), q_j f_j(x)\}\,dx \le \sum_{i<j}\int_{R^n}\min\{q_i f_i(x), q_j f_j(x)\}\,dx.$$

Since

$$\big[\min\{q_i f_i(x), q_j f_j(x)\}\big]^{\beta} \le (q_i f_i)^{\beta}\quad\text{and}\quad \big[\min\{q_i f_i(x), q_j f_j(x)\}\big]^{1-\beta} \le (q_j f_j)^{1-\beta},$$

then

$$\min\{q_i f_i(x), q_j f_j(x)\} \le (q_i f_i)^{\beta}(q_j f_j)^{1-\beta}.$$

Integrating the above inequality, we obtain

$$Pe_{1,2,\dots,k}^{(q)} \le \sum_{i<j}\int_{R^n}\big(q_i f_i(x)\big)^{\beta}\big(q_j f_j(x)\big)^{1-\beta}\,dx = \sum_{i<j} q_i^{\beta} q_j^{1-\beta}\, DT(f_i, f_j)_{(\beta, 1-\beta)},$$

which is Eq. (6).

3. We have

$$\int_{R^n}\max\{g_1(x), g_2(x), \dots, g_k(x)\}\,dx \ge \max_{i<j}\int_{R^n}\max\{g_i(x), g_j(x)\}\,dx.$$

On the other hand, since $\max\{g_i, g_j\} = \tfrac12|g_i - g_j| + \tfrac12(g_i + g_j)$,

$$\max_{i<j}\Big\{\int_{R^n}\max\{g_i(x), g_j(x)\}\,dx\Big\} = \max_{i<j}\Big\{\tfrac12\|g_i - g_j\|_1 + \tfrac12(q_i + q_j)\Big\} \ge \max_{i<j}\Big\{\tfrac12\|g_i - g_j\|_1\Big\} + \min_{i<j}\Big\{\tfrac12(q_i + q_j)\Big\} \ge \max_{i<j}\Big\{\tfrac12\|g_i - g_j\|_1\Big\} + \min\{q_1, q_2, \dots, q_k\}.$$

Hence,

$$\int_{R^n} g_{\max}(x)\,dx \ge \frac12\max_{i<j}\{\|g_i - g_j\|_1\} + \min\{q_1, q_2, \dots, q_k\}. \tag{12}$$

We also have

$$\sum_{i<j}|g_i - g_j| \ge \sum_{j=1}^{k}\big[\max\{g_1, g_2, \dots, g_k\} - g_j\big] = k\max\{g_1, g_2, \dots, g_k\} - \sum_{j=1}^{k} g_j.$$

Therefore,

$$\max\{g_1, g_2, \dots, g_k\} \le \frac1k\sum_{i<j}|g_i - g_j| + \frac1k\sum_{j=1}^{k} g_j. \tag{13}$$

Since $\int_{R^n} g_i(x)\,dx = q_i$ and $\sum_{i=1}^{k} q_i = 1$, integrating inequality (13) gives

$$\int_{R^n} g_{\max}(x)\,dx \le \frac1k\sum_{i<j}\|g_i - g_j\|_1 + \frac1k. \tag{14}$$

Replacing $\int_{R^n} g_{\max}(x)\,dx = 1 - Pe_{1,2,\dots,k}^{(q)}$ in Eqs. (12) and (14), we obtain Eq. (7).

4. We have

$$q_i f_i(x) \le \max\{q_1 f_1(x), q_2 f_2(x), \dots, q_k f_k(x)\} \le \sum_{i=1}^{k} q_i f_i(x)\quad\text{for all } i = 1, \dots, k.$$

Integrating the above relation, we obtain

$$q_i \le \int_{R^n} g_{\max}(x)\,dx \le 1.$$

The above inequality holds for all i = 1, …, k, so

$$\max_i\{q_i\} \le \int_{R^n} g_{\max}(x)\,dx \le 1.$$

Replacing $\int_{R^n} g_{\max}(x)\,dx = 1 - Pe_{1,2,\dots,k}^{(q)}$ in the above relation, we obtain Eq. (8).

From Eqs. (5) and (6) with α1 = α2 = … = αk = 1/k, we obtain the relationship between the Bayes error and the affinity of Matusita [11]. In particular, when k = 2, we obtain the relationship between $Pe_{1,2}^{(q,1-q)}$ and the Hellinger distance.

In addition, we also have relations between the Bayes error and the overlap coefficients as well as the L1-distance of {g1(x), g2(x), …, gk(x)} (see Ref. [22]). For the special case q1 = q2 = … = qk = 1/k, expressions relating the Bayes error to the L1-distance of {f1(x), f2(x), …, fk(x)}, and relating $Pe_{1,2,\dots,k}^{(1/k)}$ to $Pe_{1,2,\dots,k+1}^{(1/(k+1))}$, were established (see Ref. [17]).

3. Related problems in applying the Bayesian method

To apply the Bayesian method in reality, we have to resolve three main problems: (i) determine the prior probabilities, (ii) compute the Bayes error, and (iii) estimate the pdfs. In this section, we propose an algorithm for (i) based on fuzzy cluster analysis and the object to be classified, which can reduce the Bayes error in comparison with traditional approaches. For (ii), the Bayes error is given by a closed expression in the general case and is computed by an algorithm that finds the maximum function of the gi(x), i = 1, 2, …, k, in the one-dimensional case; the quasi-Monte Carlo method is proposed for computing the Bayes error in higher dimensions. For (iii), we review the estimation of pdfs by the kernel method, specifying the bandwidth parameter and the kernel function.

    3.1. Prior probability

In n-dimensional space, given a data set $Z = [z_{ij}]_{n\times N}$ of N objects, let $U = [\mu_{ik}]_{c\times N}$ be the partition matrix, where $\mu_{ik}$ is the probability that the kth element belongs to $w_i$. We have $\mu_{ik} \in [0,1]$ satisfying the following conditions:

$$\sum_{i=1}^{c}\mu_{ik} = 1,\qquad 0 < \sum_{k=1}^{N}\mu_{ik} < N,\qquad 1 \le i \le c,\ 1 \le k \le N.$$

We call

$$M_{zc} = \Big\{U = [\mu_{ik}]_{c\times N}\ \Big|\ \mu_{ik}\in[0,1],\ \forall i,k;\ \sum_{i=1}^{c}\mu_{ik} = 1,\ \forall k;\ 0 < \sum_{k=1}^{N}\mu_{ik},\ \forall i\Big\} \tag{15}$$

the fuzzy partitioning space of the c populations, and

$$D_{ikA}^2 = \|z_k - v_i\|_A^2 = (z_k - v_i)^T A (z_k - v_i)$$

the squared distance from the object $z_k$ to the representative $v_i$ of the ith population. This representative is computed by the following formula:

$$v_i = \frac{\sum_{k=1}^{N}(\mu_{ik})^m z_k}{\sum_{k=1}^{N}(\mu_{ik})^m},\quad 1 \le i \le c, \tag{16}$$

where $m \in [1,\infty)$ is the fuzziness parameter.

Given a data set Z containing c known populations w1, w2, …, wc, assume x0 is an object that we need to classify. To identify the prior probabilities when classifying x0, we propose the following prior probability by fuzzy clustering (PPC) algorithm:

Algorithm 1. Determining prior probability by fuzzy clustering (PPC)
Input: The data set Z = [zij]n×N of the c populations {w1, w2, …, wc}, the object x0, ε, m, and the initial partition matrix U = U(0) = [μij]c×(N+1), where μij = 1 if the jth object belongs to wi and μij = 0 otherwise, for i = 1, …, c and j = 1, …, N, and μij = 1/c for j = N + 1.
Output: The prior probabilities μi(N+1), i = 1, 2, …, c.
Repeat:
    Find the representative object of each wi: vi = Σk=1..N (μik)^m zk / Σk=1..N (μik)^m, 1 ≤ i ≤ c
    Compute the matrix [Dik]c×(N+1) of distances between the objects and the representative objects.
    Update the new partition matrix U(new) by the following principle:
        If Dik > 0 for all i = 1, 2, …, c and k = 1, 2, …, N + 1, then
             μik(new) = 1 / Σj=1..c (Dik/Djk)^(2/(m−1)), i = 1, 2, …, c
        Else, for each k with some Dik = 0, set μik(new) = 0 where Dik > 0 and share the membership among the indices with Dik = 0
        End;
    Compute S = ||U(new) − U|| = max over i, k of |μik(new) − μik|
    U = U(new)
Until S < ε;
Return the prior probabilities μi(N+1), i = 1, 2, …, c (the final column of the matrix U).

In the above algorithm:

1. ε is a very small number chosen arbitrarily; the smaller ε is, the more iterations are needed. In the examples of this chapter, we choose ε = 0.001.

2. The distance matrix [Dik] depends on the norm-inducing matrix A. When A = I, Dik is the Euclidean distance. There are other choices of A, such as a diagonal matrix or the inverse of the covariance matrix. In this chapter, we chose the Euclidean distance for the numerical examples and applications.

3. m is the fuzziness parameter: when m = 1, fuzzy clustering becomes nonfuzzy clustering, and when m → ∞, the partition becomes completely fuzzy (μik = 1/c). Determining this parameter, which affects the analysis result, is difficult. Although Yu et al. [28] proposed two rules to determine the supremum of m for clustering problems, the search for a specific m is done by a meshing (grid search) method (see [2, 4, 5, 9] for more details), in which the best m among several given values is chosen. In this chapter, m is also identified by the meshing method for the classification problem; the best integer m between 2 and 10 is used.

At the end of the PPC algorithm, we obtain the prior probabilities of x0 from the last column of the partition matrix U (μi(N+1), i = 1, 2, …, c). The PPC algorithm determines the prior probabilities via the closeness between the classified object and the populations, so each object receives prior probabilities suited to it.
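As a concrete illustration, the following MATLAB sketch implements Algorithm 1 under the choices used in this chapter (Euclidean distance, i.e., A = I). The function name ppc_prior and its interface are ours, and the tie-handling branch (sharing the membership among zero-distance populations) is one common convention rather than part of the original algorithm:

```matlab
function q = ppc_prior(Z, labels, x0, m, epsilon)
% A minimal sketch of Algorithm 1 (PPC), assuming Euclidean distance (A = I)
% and m > 1. Z: n-by-N training data (columns are objects); labels: 1-by-N
% row vector with values in 1..c; x0: n-by-1 object to classify.
c  = max(labels);
N  = size(Z, 2);
X  = [Z, x0];                              % the object x0 is column N+1
U  = zeros(c, N + 1);
U(sub2ind(size(U), labels, 1:N)) = 1;      % hard memberships for training objects
U(:, N + 1) = 1/c;                         % uniform start for x0
S = Inf;
while S >= epsilon
    W = U(:, 1:N).^m;                      % representatives use the N training objects
    V = (Z * W') ./ sum(W, 2)';            % n-by-c matrix of representatives, Eq. (16)
    D = sqrt(max(sum(V.^2, 1)' + sum(X.^2, 1) - 2*(V'*X), 0));  % c-by-(N+1)
    Unew = zeros(c, N + 1);
    for kk = 1:N + 1
        d = D(:, kk);
        if all(d > 0)
            Unew(:, kk) = 1 ./ sum((d ./ d').^(2/(m - 1)), 2);  % FCM update
        else                               % object coincides with a representative
            Unew(d == 0, kk) = 1 / nnz(d == 0);
        end
    end
    S = max(abs(Unew(:) - U(:)));          % S = ||U(new) - U||
    U = Unew;
end
q = U(:, end);                             % prior probabilities of x0
end
```

For Example 1 below, calling q = ppc_prior(Z, labels, 4.3, 2, 0.001), with Z holding the training marks as a 1-by-19 row and labels their group indices, would return the two prior probabilities of x0.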

In this chapter, the Bayesian method with prior probabilities calculated by the uniform distribution, by the sample ratios, by the Laplace approach, and by the proposed PPC algorithm is called BayesU, BayesR, BayesL, and BayesC, respectively.

Example 1. Given the studied marks (on a 10-point grading scale) of 20 students: nine students have marks lower than 5 (w1: fail the exam) and 11 students have marks higher than 5 (w2: pass the exam). The data are given in Table 1.

Objects   Marks   Groups      Objects   Marks   Groups
1         0.6     w1          11        5.6     w2
2         1.0     w1          12        6.1     w2
3         1.2     w1          13        6.4     w2
4         1.6     w1          14        6.4     w2
5         2.2     w1          15        7.3     w2
6         2.4     w1          16        8.4     w2
7         2.4     w1          17        9.2     w2
8         3.9     w1          18        9.4     w2
9         4.3     w1          19        9.6     w2
10        5.5     w2          20        9.8     w2

Table 1. The studied marks of 20 students and the actual classifications.

Assume that we need to classify the ninth object, x0 = 4.3, into one of the two populations. Using the PPC algorithm (with the 19 remaining objects as the training set and x0 as the final column), we obtain the following final partition matrix:

Row 1 (w1): 0.957  0.973  0.981  0.993  1  0.997  0.997  0.830  0.321  0.290  0.158  0.1  0.1  0.01  0.009  0.037  0.045  0.054  0.062  0.724
Row 2 (w2): 0.043  0.027  0.019  0.007  0  0.003  0.003  0.170  0.679  0.710  0.842  0.9  0.9  0.99  0.991  0.963  0.955  0.946  0.938  0.276

This matrix shows that the prior probabilities when assigning the ninth object to w1 and w2 are 0.724 and 0.276, respectively. Meanwhile, the prior probabilities determined by BayesU, BayesR, and BayesL are (0.5; 0.5), (0.421; 0.579), and (0.429; 0.571), respectively.

From the data in Table 1, we estimate the pdfs f1(x) and f2(x) and compute the values q1f1(x0) and q2f2(x0), where q1 and q2 are the calculated prior probabilities. The results of classifying x0 by the four approaches BayesU, BayesR, BayesL, and BayesC are given in Table 2.

Methods   Priors            gmax(x0)   Populations   Bayes errors
BayesU    (0.5; 0.5)        0.0353     2             0.0538
BayesR    (0.421; 0.579)    0.0409     2             0.0558
BayesL    (0.429; 0.571)    0.0403     2             0.0557
BayesC    (0.724; 0.276)    0.0485     1             0.0241

Table 2. The results of classifying the ninth object.

Because the actual population of x0 is w1, only BayesC gives the correct result, and its Bayes error is also the smallest. Thus, in this example, the proposed method overcomes the drawback of the traditional methods in determining the prior probabilities.

    3.2. Determining Bayes error

Theorem 2. Let $f_i(x)$, i = 1, 2, …, k, k ≥ 3, be k pdfs defined on $R^n$, n ≥ 1, let $q_i \in (0;1)$, and define

$$\begin{cases} R_1^n = \{x \in R^n : q_1 f_1(x) > q_j f_j(x),\ 2 \le j \le k\},\\ R_k^n = \{x \in R^n : q_k f_k(x) > q_j f_j(x),\ 1 \le j \le k-1\},\\ R_l^n = \{x \in R^n : q_l f_l(x) > q_i f_i(x),\ 1 \le i \le k,\ i \ne l\},\quad 2 \le l \le k-1. \end{cases} \tag{17}$$

The Bayes error is determined by

$$Pe_{1,2,\dots,k}^{(q)} = 1 - \int_{R_1^n} q_1 f_1(x)\,dx - \sum_{l=2}^{k-1}\int_{R_l^n} q_l f_l(x)\,dx - \int_{R_k^n} q_k f_k(x)\,dx. \tag{18}$$

Proof:

To obtain Eq. (18), we need to prove two facts: the regions are pairwise disjoint, $R_i^n \cap R_j^n = \emptyset$ for $1 \le i \ne j \le k$, and they cover $R^n$ up to a null set, with $g_{\max}(x) = g_i(x)$ for $x \in R_i^n$.

For the disjointness, let $\bar A = R^n\setminus A$ and, for $1 \le i, j \le k$, write

$$R_{ij} = \{x \in R^n : q_i f_i(x) > q_j f_j(x)\},\qquad \bar R_{ij} = \{x \in R^n : q_i f_i(x) \le q_j f_j(x)\}.$$

From Eq. (17), $R_i^n \subseteq R_{ij}$ and $R_j^n \subseteq \bar R_{ij}$ whenever $i \ne j$, so

$$R_i^n \cap R_j^n \subseteq R_{ij} \cap \bar R_{ij} = \emptyset,\quad 1 \le i \ne j \le k.$$

For the covering, let $x \in R^n$ and let $i$ be an index attaining $\max_{1\le j\le k}\{q_j f_j(x)\}$. If the maximum is attained strictly, then $x \in R_i^n$. The remaining points, where $q_i f_i(x) = q_j f_j(x)$ for some $j \ne i$, form a set of Lebesgue measure zero for the continuous densities considered here and do not affect the integrals. Hence,

$$\bigcup_{i=1}^{k} R_i^n = R^n \quad\text{(up to a null set)}.$$

In addition, from Eq. (17), we can directly find that

$$g_{\max}(x) = g_i(x),\quad x \in R_i^n,\ 1 \le i \le k.$$

Substituting this partition into Eq. (2) (equivalently, Eq. (3)) gives Eq. (18).

For k = 2 and q1 = q2 = 1/2, we consider the following two special cases:

1. If f1(x) and f2(x) are two one-dimensional normal pdfs (N(μi, σi), i = 1, 2) and, without loss of generality, μ1 < μ2 (when μ1 ≠ μ2) and σ1 < σ2 (when σ1 ≠ σ2), then

$$Pe_{1,2}^{(1/2,1/2)} = \begin{cases} \dfrac12\Big[\displaystyle\int_{-\infty}^{x_1} f_2(x)\,dx + \int_{x_1}^{+\infty} f_1(x)\,dx\Big], & \text{if } \sigma_1 = \sigma_2,\\[8pt] \dfrac12\Big[\displaystyle\int_{-\infty}^{x_2} f_1(x)\,dx + \int_{x_2}^{x_3} f_2(x)\,dx + \int_{x_3}^{+\infty} f_1(x)\,dx\Big], & \text{if } \sigma_1 < \sigma_2, \end{cases}$$

where

$$x_1 = \frac{\mu_1 + \mu_2}{2},\qquad x_{2,3} = \frac{(\mu_1\sigma_2^2 - \mu_2\sigma_1^2) \mp \sigma_1\sigma_2\sqrt{(\mu_1 - \mu_2)^2 + K}}{\sigma_2^2 - \sigma_1^2},\qquad K = 2(\sigma_2^2 - \sigma_1^2)\ln\Big(\frac{\sigma_2}{\sigma_1}\Big) \ge 0.$$

For μ1 = μ2 = μ, the above result becomes (when σ1 = σ2 the two densities coincide, so the Bayes error is 1/2):

$$Pe_{1,2}^{(1/2,1/2)} = \begin{cases} \dfrac12, & \text{if } \sigma_1 = \sigma_2,\\[4pt] \dfrac12\Big[\displaystyle\int_{-\infty}^{x_4} f_1(x)\,dx + \int_{x_4}^{x_5} f_2(x)\,dx + \int_{x_5}^{+\infty} f_1(x)\,dx\Big], & \text{if } \sigma_1 < \sigma_2, \end{cases}$$

where $x_4 = \mu - \sigma_1\sigma_2 E$ and $x_5 = \mu + \sigma_1\sigma_2 E$, with $E = \sqrt{\dfrac{2\ln(\sigma_2/\sigma_1)}{\sigma_2^2 - \sigma_1^2}} \ge 0$.

2. If f1(x) and f2(x) are two n-dimensional normal pdfs (N(μi, Σi), n ≥ 2, i = 1, 2), then

$$Pe_{1,2}^{(1/2,1/2)} = \frac12\Big[\int_{R_1^n} f_2(x)\,dx + \int_{R_2^n} f_1(x)\,dx\Big],$$

where $R_1^n = \{x : d(x) > 0\}$, $R_2^n = \{x : d(x) \le 0\}$, and

$$d(x) = \big[\mu_1^T\Sigma_1^{-1} - \mu_2^T\Sigma_2^{-1}\big]x - \frac12 x^T\big[\Sigma_1^{-1} - \Sigma_2^{-1}\big]x - m,\qquad m = \frac12\Big[\ln\frac{|\Sigma_1|}{|\Sigma_2|} + \mu_1^T\Sigma_1^{-1}\mu_1 - \mu_2^T\Sigma_2^{-1}\mu_2\Big].$$

In the case n = 2, the boundary d(x) = 0 can be a straight line, a parabola, an ellipse, or a hyperbola.
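To make the one-dimensional formulas concrete, the following MATLAB sketch evaluates the σ1 < σ2 case with normal cdfs; the parameters are illustrative, and `normcdf` belongs to the Statistics and Machine Learning Toolbox:

```matlab
% Closed-form Bayes error for two univariate normals, q1 = q2 = 1/2,
% sigma1 < sigma2, using the roots x2 and x3 given above.
mu1 = 0; s1 = 1;                       % parameters of f1 (illustrative)
mu2 = 1; s2 = 2;                       % parameters of f2 (illustrative)

K  = 2*(s2^2 - s1^2)*log(s2/s1);
r  = s1*s2*sqrt((mu1 - mu2)^2 + K);
x2 = ((mu1*s2^2 - mu2*s1^2) - r) / (s2^2 - s1^2);
x3 = ((mu1*s2^2 - mu2*s1^2) + r) / (s2^2 - s1^2);

% Pe = (1/2)[F1(x2) + (F2(x3) - F2(x2)) + (1 - F1(x3))]
Pe = 0.5*(normcdf(x2, mu1, s1) ...
        + normcdf(x3, mu2, s2) - normcdf(x2, mu2, s2) ...
        + 1 - normcdf(x3, mu1, s1));
fprintf('Bayes error = %.4f\n', Pe);
```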

    3.3. Maximum function in the classification problem

To classify a new element by principle (1) and to determine the Bayes error by formula (3), we must find gmax(x). Some authors, such as Pham–Gia et al. [15, 17] and Tai [21, 22], have surveyed relationships between gmax(x) and related quantities of the classification problem. A specific expression for gmax(x) has been found in some special cases [18], but a general expression for all cases is a complex problem that has not been solved yet.

Given k pdfs fi(x) and qi, i = 1, 2, …, k, with q1 + q2 + … + qk = 1, let gi(x) = qifi(x) and gmax(x) = max{gi(x)}. We now consider how to determine gmax(x).

    (a) For one dimension

    In this case, we can find gmax(x) by the following algorithm:

Algorithm 2. Find the gmax(x) function
Input: gi(x) = qifi(x), where fi(x) and qi are the pdf and the prior probability of wi, i = 1, 2, …, k.
Output: The gmax(x) function.
Find all roots of the equations gi(x) − gj(x) = 0, i = 1, …, k − 1, j = i + 1, …, k.
Let B be the set of all roots.
For each xlm ∈ B (a root of the equation gl(x) − gm(x) = 0) do
     For p ∈ {1, 2, …, k}\{l, m} do
          If gl(xlm) < gp(xlm) then B = B\{xlm}
          End
     End
End
Arrange the elements of B in increasing order:
       B = {x1, x2, …, xh}, x1 < x2 < … < xh
(Determine the function gmax(x) on the interval (−∞, x1])
     For i = 1 to k do
          If gi(x1 − ε1) = max{g1(x1 − ε1), g2(x1 − ε1), …, gk(x1 − ε1)} then
               gmax(x) = gi(x), for all x ∈ (−∞, x1]
          End
     End
(Determine the function gmax(x) on the intervals (xj, xj+1], j = 1, …, h − 1)
     For i = 1 to k do
          For j = 1 to h − 1 do
               If gi(xj + ε2) = max{g1(xj + ε2), g2(xj + ε2), …, gk(xj + ε2)} then
                    gmax(x) = gi(x), for all x ∈ (xj, xj+1]
               End
          End
     End
(Determine the function gmax(x) on the interval (xh, +∞))
     For i = 1 to k do
          If gi(xh + ε3) = max{g1(xh + ε3), g2(xh + ε3), …, gk(xh + ε3)} then
               gmax(x) = gi(x), for all x ∈ (xh, +∞)
          End
     End

In the above algorithm, ε1 and ε3 are arbitrary positive constants, and ε2 is a positive constant small enough that each test point stays inside its interval, that is, xj + ε2 < xj+1 for j = 1, …, h − 1.

From this algorithm, we have written MATLAB code to find gmax(x). Once gmax(x) is determined, we can easily calculate the Bayes error by formula (3), as well as classify a new element by principle (1).
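A simple grid-based variant of this idea can be sketched in a few lines of MATLAB: evaluate every gi on a fine grid, keep the index attaining the maximum, and read off the approximate switch points; the Bayes error then follows from formula (3) by numerical integration. The parameters below are our own illustrative values, and the grid only approximates the exact roots that Algorithm 2 computes:

```matlab
% Grid approximation of gmax(x) and of the Bayes error, Eq. (3).
mu    = [0 3 6];                     % illustrative means
sigma = [1 1.5 2];                   % illustrative standard deviations
k = numel(mu);
q = ones(1, k)/k;                    % uniform prior probabilities

x = linspace(-15, 25, 40001);        % fine grid covering the effective support
G = zeros(k, numel(x));
for i = 1:k
    G(i, :) = q(i)*normpdf(x, mu(i), sigma(i));   % g_i(x) = q_i f_i(x)
end
[gmax, idx] = max(G, [], 1);         % gmax(x) and the index attaining it

sw = find(diff(idx) ~= 0);           % approximate switch points of gmax
disp([x(sw)' idx(sw)' idx(sw + 1)']);% location, index before, index after

Pe = 1 - trapz(x, gmax);             % Bayes error by Eq. (3)
fprintf('Bayes error ~ %.4f\n', Pe);
```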

Example 2. Given seven populations having univariate normal pdfs {f1, f2, …, f7} with the following parameters (Figure 1):

Figure 1. The graph of seven one-dimensional normal pdfs, fmax(x) and gmax(x).

$$\mu_1 = 0.3,\ \mu_2 = 4.0,\ \mu_3 = 9.1,\ \mu_4 = 1.9,\ \mu_5 = 5.3,\ \mu_6 = 8.0,\ \mu_7 = 4.8,$$
$$\sigma_1 = 1.0,\ \sigma_2 = 1.3,\ \sigma_3 = 1.4,\ \sigma_4 = 1.6,\ \sigma_5 = 2.0,\ \sigma_6 = 1.9,\ \sigma_7 = 2.3.$$

Running the MATLAB code with $q_i = 1/7$ and $g_i(x) = q_i f_i(x)$, $i = 1, 2, \dots, 7$, we obtain:

$$g_{\max}(x) = \begin{cases} g_1, & \text{if } -1.28 < x \le 0.99,\\ g_2, & \text{if } 2.58 < x \le 4.89,\\ g_3, & \text{if } 8.30 < x \le 12.52,\\ g_4, & \text{if } \{-7.86 < x \le -1.28\} \cup \{0.99 < x \le 2.58\},\\ g_5, & \text{if } 4.89 < x \le 6.65,\\ g_6, & \text{if } \{6.65 < x \le 8.30\} \cup \{12.52 < x \le 23.33\},\\ g_7, & \text{if } \{x \le -7.86\} \cup \{x > 23.33\}. \end{cases}$$

(b) For multiple dimensions

In the multidimensional case, it is very complicated to obtain a closed expression for gmax(x). The difficulty comes from the various forms of the intersection curves between the pdf surfaces. This problem has interested many authors [17, 18, 21-25]. Pham–Gia et al. [18] attempted to find the function gmax(x), but it was established only for some cases of the bivariate normal distribution.

Example 3. Given four bivariate normal pdfs N(μi, Σi) with the following parameters [16]:

$$\mu_1 = \begin{bmatrix}40\\20\end{bmatrix},\ \mu_2 = \begin{bmatrix}48\\24\end{bmatrix},\ \mu_3 = \begin{bmatrix}43\\32\end{bmatrix},\ \mu_4 = \begin{bmatrix}38\\28\end{bmatrix},$$

$$\Sigma_1 = \begin{pmatrix}35 & 18\\18 & 20\end{pmatrix},\ \Sigma_2 = \begin{pmatrix}28 & 20\\20 & 25\end{pmatrix},\ \Sigma_3 = \begin{pmatrix}15 & 25\\25 & 65\end{pmatrix},\ \Sigma_4 = \begin{pmatrix}5 & 10\\10 & 7\end{pmatrix}.$$

With q1 = 0.25, q2 = 0.2, q3 = 0.4, and q4 = 0.15, the graphs of gi(x) = qifi(x) and their intersection curves are shown in Figure 2.

Figure 2. The graph of four bivariate normal pdfs and their gmax(x).

Here, we do not look for the expression of gmax(x); instead, we compute the Bayes error by integrating gmax(x) with the quasi-Monte Carlo method [17]. An algorithm for this computation has been constructed, and the corresponding MATLAB procedure is used in Section 4.
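A minimal sketch of this computation, with our own illustrative parameters (not those of Example 3), could look as follows; `haltonset`, `net`, and `mvnpdf` belong to the Statistics and Machine Learning Toolbox, and the integration box must be wide enough to contain essentially all of the mass of the densities:

```matlab
% Bayes error by quasi-Monte Carlo integration of gmax, Eq. (3), in 2D.
mu = {[0; 0], [3; 1], [1; 4]};                 % illustrative mean vectors
Sg = {eye(2), [2 0.5; 0.5 1], [1 0; 0 2]};     % illustrative covariances
q  = [0.3 0.4 0.3];                            % prior probabilities
k  = numel(q);

lo = [-6 -6];  hi = [10 12];                   % integration box
P  = net(haltonset(2, 'Skip', 1000), 2e5);     % low-discrepancy points in [0,1]^2
X  = lo + P.*(hi - lo);                        % mapped into the box, 2e5-by-2

G = zeros(size(X, 1), k);
for i = 1:k
    G(:, i) = q(i)*mvnpdf(X, mu{i}', Sg{i});   % g_i at the quasi-random points
end
Pe = 1 - prod(hi - lo)*mean(max(G, [], 2));    % QMC estimate of Eq. (3)
fprintf('Bayes error ~ %.4f\n', Pe);
```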

    3.4. Estimate the probability density function

There are many parametric and nonparametric methods to estimate pdfs. In the examples and applications of Section 4, we use the kernel method, the most popular one in practice nowadays. It has the following form:

$$\hat f(x) = \frac{1}{N h_1 h_2 \cdots h_n}\sum_{i=1}^{N}\prod_{j=1}^{n} K_j\Big(\frac{x_j - x_{ij}}{h_j}\Big), \tag{19}$$

where $x_j$, j = 1, 2, …, n, are the variables, $x_{ij}$ is the ith observation of the jth variable, $h_j$ is the bandwidth parameter for the jth variable, and $K_j(\cdot)$ is the kernel function of the jth variable, usually the normal, Epanechnikov, biweight, or triweight kernel. In this method, the choice of the smoothing parameters and of the kernel function plays an important role and affects the result. Although Silverman [20], Martinez and Martinez [10], and some other authors [7, 13, 27] have discussed this problem, an optimal choice has not been found yet. In this chapter, the smoothing parameter follows the idea of Scott [19], and the kernel function is the Gaussian one. We have written MATLAB code to estimate pdfs in n-dimensional space by this method.
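As one possible reading of Eq. (19), the sketch below implements the product Gaussian kernel with Scott's bandwidth $h_j = \hat\sigma_j N^{-1/(n+4)}$ (our reading of the rule in [19]; the function name kde_product is ours):

```matlab
function f = kde_product(x, data)
% Product-kernel density estimate, Eq. (19), with Gaussian kernels and
% Scott's bandwidths. x: 1-by-n point; data: N-by-n sample; f: estimate at x.
[N, n] = size(data);
h = std(data, 0, 1)*N^(-1/(n + 4));    % one bandwidth per variable (Scott's rule)
u = (x - data)./h;                     % N-by-n standardized differences
f = mean(prod(exp(-u.^2/2)./(sqrt(2*pi)*h), 2));   % average of kernel products
end
```

For instance, kde_product(5.0, marks) would estimate the density at the point 5.0, where marks is the 20-by-1 vector of marks from Table 1.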

We have written complete MATLAB code for the proposed algorithm; it is applied effectively in the examples of Section 4.

    4. Some applications

In this section, we consider three applications in three domains (biology, medicine, and economics) to illustrate the presented theory and to test the established algorithms. They also show that the proposed algorithm has advantages over the existing ones.

Application 1. We consider the classification of the well-known Iris flower data, which has been presented in many documents, such as Ref. [17]. These data are often used to compare new classification methods with existing ones. The three varieties of Iris, namely Setosa (Se), Versicolor (Ve), and Virginica (Vi), have data on four attributes: X1 = sepal length, X2 = sepal width, X3 = petal length, and X4 = petal width.

In this application, the cases of one, two, three, and four variables are considered in turn to classify the three groups (Se), (Ve), and (Vi) by the Bayesian method with different prior probabilities, in order to compare the results of BayesC with those of BayesU, BayesR, and BayesL. Because the three groups have equal sample sizes, the results of BayesU, BayesR, and BayesL coincide. The correct classification probabilities are summarized in Table 3.

Variables          BayesU = BayesR = BayesL   BayesC
X1                 0.667                      0.679
X2                 0.668                      0.579
X3                 0.903                      0.916
X4                 0.815                      0.827
X1, X2             0.715                      0.807
X1, X3             0.893                      0.895
X1, X4             0.807                      0.850
X2, X3             0.891                      0.898
X2, X4             0.809                      0.815
X3, X4             0.843                      0.866
X1, X2, X3         0.892                      0.919
X1, X2, X4         0.764                      0.810
X1, X3, X4         0.762                      0.814
X2, X3, X4         0.736                      0.822
X1, X2, X3, X4     0.725                      0.745

Table 3. The correct probability in classifying the Iris flower data.

Table 3 shows that in almost all cases the proposed algorithm gives better results than the other algorithms, and that the case using the three variables X1, X2, and X3 gives the best result.

Application 2. This application considers thyroid gland disease (TGD). The thyroid is an important gland, the largest in the body, responsible for the metabolism and the working of all cells. Some common thyroid diseases are hypothyroidism, hyperthyroidism, thyroid nodules, and thyroid cancer; they are dangerous diseases, and recently the rate of TGD has been increasing in some poor countries. The data include 3772 persons, 3541 in the ill group (I) and 231 in the non-ill group (NI). Details of these data are given at http://www.cs.sfu.ca/wangk/ucidata/dataset/thyroid-disease, where the surveyed variables are Age (X1), Query on thyroxin (X2), Anti-thyroid medication (X3), Sick (X4), Pregnant (X5), Thyroid surgery (X6), Thyroid stimulating hormone (X7), Triiodothyronine (X8), Total thyroxin (X9), T4U measured (X10), and Referral source (X11). In this application, a random 70% of the data (2479 elements of group I and 162 elements of group NI) is used as the training set to determine the significant variables, to estimate the pdfs, and to find a suitable model. The remaining 30% (1062 elements of group I and 69 elements of group NI) is used as the test set. The result of the Bayesian method is also compared with the others.

To assess the effect of the independent variables on TGD, we build the logistic regression model for log(p/(1 − p)) with the variables Xi, i = 1, 2, …, 11, where p is the probability of TGD. The analytical results are summarized in Table 4.

Variable   Sig.      Variable   Sig.
X1         0.000     X7         0.304
X2         0.279     X8         0.000
X3         0.998     X9         0.995
X4         0.057     X10        0.999
X5         0.997     X11        0.000
X6         0.997     Const      0.992

Table 4. Sig. values of the logistic regression model.

In Table 4, the three variables X1, X8, and X11 have Sig. values below the 5% level, so they are statistically significant in discriminating the two groups (I) and (NI), and we use them to classify TGD.

Applying the PPC algorithm for the cases of one, two, and three variables with all choices of prior probabilities, we obtain the results given in Table 5.

Table 5 shows that the correct probability is high, and that BayesC gives the best result in all three cases; with three variables, BayesC is almost exact. We also compare BayesC with the existing methods (Fisher, SVM, and logistic) for all the above cases. In every case, BayesC is more advantageous than the others in reducing the error.

Cases             Variables      BayesU   BayesR   BayesL   BayesC
One variable      X1             91.13    97.47    97.46    97.97
                  X8             90.72    98.51    98.50    98.65
                  X11            90.53    97.48    97.47    98.19
Two variables     X1, X8         98.73    98.77    98.77    99.78
                  X1, X11        98.11    98.65    97.65    99.44
                  X8, X11        98.71    98.77    98.77    99.82
Three variables   X1, X8, X11    98.35    98.89    98.89    99.96

Table 5. The correct probability (%) in classifying TGD by the Bayesian method on the training set.

Using the best model of each method from Table 6 to classify the test set (1131 elements), we obtain the results given in Table 7.

Methods    One variable   Two variables   Three variables
Logistic   93.90          93.90           93.90
Fisher     72.30          73.60           71.70
SVM        93.87          93.87           93.87
BayesC     98.65          99.82           99.96

Table 6. The correct probability (%) for the optimal models of the methods in classifying TGD.

From Table 7, we see that BayesC also gives the best result on the test set.

Methods    Correct numbers   False numbers   Correct probability (%)
Logistic   835               296             73.8
Fisher     835               296             73.8
SVM        1062              69              90.9
BayesC     1062              69              93.9

Table 7. Comparison of the correct probability (%) in classifying TGD on the test set.

Application 3. This application considers the problem of repaying bank debt (RBD) by customers. In bank credit operations, determining the repayment ability of customers is very important: if lending is too easy, the bank may face bad debts; if it is too strict, the bank will miss good business. Therefore, in recent years, the classification of credit applications by the ability to repay bank debt has been studied intensively and remains a difficult problem in Vietnam. In this section, we appraise this ability for companies in Can Tho city (CTC), Vietnam, using the proposed approach. We collected data on 214 enterprises operating in the key sectors of agriculture, industry, and commerce, including 143 cases of good debt (G) and 71 cases of bad debt (B). The data were provided by responsible organizations of CTC. Each company is evaluated on 13 independent variables chosen by expert opinion; the specific variables are given in Table 8.

Xi    Independent variables   Detail
X1    Financial leverage      Total debt/total equity
X2    Reinvestment            Total debt/total equity
X3    Roe                     Net profit/equity
X4    Interest                (Net income + depreciation)/total assets
X5    Floating capital        (Current assets − current liabilities)/total assets
X6    Liquidity               (Cash + short-term investments)/current liabilities
X7    Profits                 Net profit/total assets
X8    Ability                 Net sales/total assets
X9    Size                    Logarithm of total assets
X10   Experience              Years in business activity
X11   Agriculture             Agricultural and forestry sector
X12   Industry                Industry and construction
X13   Commerce                Trade and services

Table 8. The surveyed independent variables.

Because of the sensitivity of the problem, the author has to conceal the real data and work with a coded training data set. The steps performed in this application are similar to those in Application 2. The training set has 100 elements of group G and 50 elements of group B, and the test set has 43 elements of group G and 21 elements of group B. With the training set, the logistic regression model shows that only the three variables X1, X4, and X7 are statistically significant at the 5% level, so we use these three variables to perform BayesU, BayesR, BayesL, and BayesC. Their results are given in Table 9.

Cases             Variables     BayesU   BayesR   BayesL   BayesC
One variable      X1            86.21    86.14    84.13    87.13
                  X4            81.12    82.91    86.16    88.19
                  X7            83.21    84.63    83.14    84.52
Two variables     X1, X4        87.25    88.72    87.19    89.06
                  X1, X7        88.16    88.34    83.26    89.56
                  X4, X7        89.25    89.04    89.02    91.34
Three variables   X1, X4, X7    91.15    91.53    90.17    93.18

Table 9. The correct probability (%) in classifying RBD by the Bayesian method on the training set.

From Table 9, we see that BayesC gives the highest correct probability in all cases. We also apply the logistic, Fisher, and SVM methods to the training set to find their best models; the correct probabilities are given in Table 10.

Using the best model of each method from Table 10 to classify the test set (64 elements), we obtain the results given in Table 11.

Methods    One variable   Two variables   Three variables
Logistic   84.04          88.29           88.69
Fisher     84.73          80.73           79.32
SVM        82.34          82.03           83.07
BayesC     88.19          91.34           93.18

Table 10. The correct probability (%) for the optimal models of the methods in classifying RBD.

Once again, Table 11 shows that BayesC also gives the best result on the test data.

Methods    Correct numbers   False numbers   Correct probability (%)
Logistic   53                11              82.81
Fisher     52                12              81.25
SVM        53                11              82.81
BayesC     57                7               89.06

Table 11. Comparison of the correct probability (%) in classifying RBD on the test set.

    5. Conclusion

This chapter presents classification by the Bayesian method in both its theoretical and applied aspects. We establish relations of the Bayes error with other measures and consider its computation in real applications in one and several dimensions. An algorithm to determine the prior probabilities, which may decrease the Bayes error, is proposed. The results are applied in three different domains (biology, medicine, and economics) and show that the proposed approach has more advantages than the existing ones. In addition, a complete MATLAB procedure has been written and is used effectively in real applications. These examples show the potential of the studied problems for research on real applications.
