
This chapter summarizes and proposes some results related to the classification problem solved by the Bayesian method. We present the classification principle and the Bayes error, and establish its relationship with other measures. The determination of the Bayes error in practice, for one and for multiple dimensions, is also considered. Based on the training set and the object to be classified, we propose an algorithm to determine the prior probability that can reduce the Bayes error. This algorithm has been implemented as a MATLAB procedure that works well with real data. The proposed algorithm is applied in three domains: biology, medicine, and economics, through specific problems. For applied data sets with different characteristics, the proposed algorithm always gives the best results in comparison with the existing ones. Furthermore, the examples show the feasibility and the practical potential of the researched problem.

Keywords: Bayesian method, classification, error, prior, application

Author

Tai Vovan*

College of Natural Sciences, Can Tho University, Can Tho City, Vietnam

1. Introduction

The classification problem is one of the main subdomains of discriminant analysis and is closely related to many fields of statistics. Classification assigns an element to the appropriate population in a set of known populations based on certain observed variables. It is an important development direction of multivariate statistics and has applications in many different fields [25, 27]. Recently, this problem has attracted many statisticians, in both theoretical and applied areas [14–18, 22–25]. According to Tai [22], there are four main methods to solve the classification problem: the Fisher method [6, 12], the logistic regression method [8], the support vector machine (SVM) method [3], and the Bayesian method [17]. Because the Bayesian method does not require normality of the data and can classify into two or more populations, it has many advantages [22–25]. Therefore, it has been used by many scientists in their research.

Given k populations {w_i}, with probability density functions (pdfs) {f_i} and prior probabilities {q_i}, i = 1, 2, …, k, where q_i ∈ (0, 1) and ∑_{i=1}^{k} q_i = 1, Pham–Gia et al. [17] used the maximum function of the pdfs as a tool to study the Bayesian method and obtained important results. The classification principle and the Bayes error were established based on g_max(x) = max{q_1f_1(x), q_2f_2(x), …, q_kf_k(x)}. Relationships between the upper and lower bounds of the Bayes error, the L^1-distance of the pdfs, and the overlap coefficient of the pdfs were established. The function g_max(x) plays a very important role in classification by the Bayesian method, and Pham–Gia et al. [17] continued to study it. Using MATLAB, Pham–Gia et al. [18] succeeded in identifying g_max(x) for some cases of the bivariate normal distribution. In a similar development, Tai [22] proposed the L^1-distance of the {q_if_i(x)} and established its relationship with the Bayes error. This distance is also used to calculate the Bayes error as well as to classify new elements, and has been applied to classifying the ability of bank customers to repay debt. However, we believe the survey of research relevant to these two Bayesian approaches is not yet complete: there remain relations between the Bayes error and other statistical measures.

The Bayesian method has many advantages. However, to our knowledge, its field of practical applications is narrower than that of the other methods. We can find many applications in banking and medicine using the Fisher, SVM, and logistic methods [1, 3, 8, 12]. Recently, most statistical software can effectively and quickly classify large, multivariate data sets using any of the three methods mentioned above, whereas the Bayesian method does not have this advantage. The causes are the ambiguity in determining the prior probability, the difficulty of estimating the pdfs, and the complexity of calculating the Bayes error. Although all these issues have been discussed by many authors, optimal methods have yet to be found [22, 25]. In this chapter, we consider the estimation of the pdfs and the calculation of the Bayes error for real applications, and we address how to determine the prior probability. In the noninformative case, we normally choose the prior probabilities from the uniform distribution. If we have past data or a training set, the prior probabilities are estimated either by the Laplace method, q_i = (n_i + n/k)/(N + n), or by the sample frequencies, q_i = n_i/N, where n_i and N are the numbers of elements in the ith population and in the training set, respectively, n is the number of dimensions, and k is the number of groups. These approaches have been studied and applied by many authors [14, 15, 22, 25]. We also propose an algorithm to determine the prior probability based on the training set, the object to be classified, and fuzzy cluster analysis. The proposed algorithm is applied to specific problems in biology, medicine, and economics and has advantages over the existing approaches. All calculations are performed by MATLAB procedures.
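The two training-set prior estimates above can be sketched as follows (a Python sketch; the chapter's own procedures are written in MATLAB, and the group sizes below are hypothetical):

```python
# Sketch of the two training-set prior estimates described above:
# sample frequencies q_i = n_i / N and the Laplace estimate
# q_i = (n_i + n/k) / (N + n).

def frequency_priors(counts):
    """q_i = n_i / N from the group sizes n_i in the training set."""
    N = sum(counts)
    return [n_i / N for n_i in counts]

def laplace_priors(counts, n_dims):
    """q_i = (n_i + n/k) / (N + n), with n the number of dimensions
    and k the number of groups."""
    N, k = sum(counts), len(counts)
    return [(n_i + n_dims / k) / (N + n_dims) for n_i in counts]

counts = [8, 11]                  # hypothetical group sizes
print(frequency_priors(counts))   # [8/19, 11/19]
print(laplace_priors(counts, 1))  # also sums to 1
```

Both estimates reduce to the uniform choice only when the groups have equal sizes, which is why the chapter later compares them against the proposed PPC priors.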

The rest of this chapter is structured as follows. Section 2 presents the classification principle and the Bayes error; some results on the Bayes error are also established there. Section 3 resolves problems arising in real applications of the Bayesian method: the estimation of pdfs and the determination of the Bayes error in one and in multiple dimensions. This section also proposes an algorithm to determine the prior probability. Section 4 applies the proposed algorithm to real problems and compares its results with those of existing approaches. Section 5 concludes the chapter.

2. Classifying by Bayesian method

The classification problem by the Bayesian method has been presented in many documents [15, 16, 27], where the classification principle and the Bayes error are established from Bayes' theorem. In this section, we present them via the maximum function of the q_if_i(x), i = 1, 2, …, k, which has advantages over existing approaches in real applications [17, 18, 21–25]. This section also establishes the upper and lower bounds of the Bayes error and its relationships with other measures in statistical pattern recognition.

2.1. Classification principle and Bayes error

Given k populations w_1, w_2, …, w_k, where q_i ∈ (0, 1) and f_i(x) are the prior probability and the pdf of the ith population, respectively, i = 1, 2, …, k. According to Pham–Gia et al. [17], an element x_0 will be assigned to w_i if

g_i(x_0) = g_max(x_0), i = 1, 2, …, k,  (1)

where g_i(x) = q_if_i(x) and g_max(x) = max{q_1f_1(x), q_2f_2(x), …, q_kf_k(x)}.
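A minimal sketch of principle (1), assuming two hypothetical univariate normal populations (the chapter's own procedures are in MATLAB):

```python
import math

def normal_pdf(x, mu, sigma):
    """Univariate normal density N(mu, sigma^2)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def bayes_classify(x0, priors, pdfs):
    """Assign x0 to the population whose g_i(x0) = q_i * f_i(x0) is maximal."""
    g = [q * f(x0) for q, f in zip(priors, pdfs)]
    return max(range(len(g)), key=lambda i: g[i]) + 1  # 1-based index of w_i

# Two hypothetical populations: N(2, 1) and N(7, 1.5^2), equal priors.
pdfs = [lambda x: normal_pdf(x, 2.0, 1.0), lambda x: normal_pdf(x, 7.0, 1.5)]
print(bayes_classify(3.0, [0.5, 0.5], pdfs))  # → 1 (closer to the first population)
```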

Here λ_{1,2}(q, 1 − q) is the overlap area measure of qf_1(x) and (1 − q)f_2(x), and ‖qf_1 − (1 − q)f_2‖_1 = ∫_{R^n} |qf_1(x) − (1 − q)f_2(x)| dx is their L^1-distance.

2.2. Some results about Bayes error

Theorem 1. Let f_i(x), i = 1, 2, …, k, k ≥ 3, be k pdfs defined on R^n, n ≥ 1, and let q_i ∈ (0, 1). We have the following relationships between the Bayes error and other measures:

q_if_i(x) ≤ max{q_1f_1(x), q_2f_2(x), …, q_kf_k(x)} ≤ ∑_{i=1}^{k} q_if_i(x) for all i = 1, …, k.

Integrating the above relation, we obtain:

q_i ≤ ∫_{R^n} g_max(x) dx ≤ 1.  (33)

The above inequality holds for all i = 1, …, k, so

max_i{q_i} ≤ ∫_{R^n} g_max(x) dx ≤ 1.  (34)

Substituting ∫_{R^n} g_max(x) dx = 1 − Pe_{1,2,…,k}(q) into the above relation, we obtain Eq. (8).
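The bound (34) and the identity Pe = 1 − ∫ g_max can be checked numerically; a sketch under an assumed pair of univariate normal populations with priors 0.3 and 0.7:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

q = [0.3, 0.7]
f = [lambda x: normal_pdf(x, 0.0, 1.0), lambda x: normal_pdf(x, 3.0, 1.0)]

# Trapezoidal approximation of the integral of g_max over a wide interval.
a, b, n = -10.0, 13.0, 20001
h = (b - a) / (n - 1)
xs = [a + i * h for i in range(n)]
gmax = [max(qi * fi(x) for qi, fi in zip(q, f)) for x in xs]
integral = h * (sum(gmax) - 0.5 * (gmax[0] + gmax[-1]))
bayes_error = 1.0 - integral

assert max(q) <= integral <= 1.0  # inequality (34)
print(round(bayes_error, 4))      # roughly 0.06 for this configuration
```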

From the results of Eqs. (5) and (6), with α_1 = α_2 = … = α_k = 1/k, we have the relationship between the Bayes error and Matusita's affinity [11]. In particular, when k = 2, we have the relationship between Pe_{1,2}(q, 1 − q) and Hellinger's distance.

In addition, we also have relations between the Bayes error and the overlap coefficients, as well as the L^1-distance of {g_1(x), g_2(x), …, g_k(x)} (see Ref. [22]). For the special case q_1 = q_2 = … = q_k = 1/k, we have established expressions relating the Bayes error to the L^1-distance of {f_1(x), f_2(x), …, f_k(x)}, and relating Pe_{1,2,…,k}(1/k) to Pe_{1,2,…,k+1}(1/(k + 1)) (see Ref. [17]).

3. Related problems in applying of Bayesian method

To apply the Bayesian method in practice, we have to resolve three main problems: (i) determine the prior probability, (ii) compute the Bayes error, and (iii) estimate the pdfs. In this section, we propose an algorithm for (i) based on fuzzy cluster analysis and the object to be classified, which can reduce the Bayes error in comparison with the traditional approaches. For (ii), the Bayes error is given by a closed expression in the general case and is determined, in the one-dimensional case, by an algorithm that finds the maximum of the functions g_i(x), i = 1, 2, …, k; the quasi-Monte Carlo method is proposed to compute the Bayes error in higher dimensions. For (iii), we review the estimation of pdfs by the kernel method, for which the bandwidth parameter and the kernel function are specified.

3.1. Prior probability

In the n-dimensional space, given N objects with data set Z = [z_ij]_{n×N}, let U = [μ_ik]_{c×N} be the partition matrix, where μ_ik is the probability of the kth element belonging to w_i. We have μ_ik ∈ [0, 1], satisfying the following conditions:

D_{ikA}^2 = ‖z_k − v_i‖_A^2 = (z_k − v_i)^T A (z_k − v_i) is the square of the distance from the object z_k to the representative of the ith population. This representative is computed by the following formula:

Given the data set Z containing c known populations w_1, w_2, …, w_c, assume x_0 is an object that we need to classify. To identify the prior probabilities when classifying x_0, we propose the following prior probability by fuzzy clustering (PPC) algorithm:

Algorithm 1. Determining the prior probability by fuzzy clustering (PPC)

Input: the data set Z = [z_ij]_{n×N} of c populations {w_1, w_2, …, w_c}; x_0; ε; m; and the initial partition matrix U = U^(0) = [μ_ij]_{c×(N+1)}, where μ_ij = 1 if the jth object belongs to w_i and μ_ij = 0 otherwise, i = 1, …, c, j = 1, …, N, and μ_ij = 1/c for j = N + 1.

Output: the prior probabilities μ_{i(N+1)}, i = 1, 2, …, c.

Repeat:

1. Find the representative object of w_i: v_i = ∑_{k=1}^{N} (μ_ik)^m z_k / ∑_{k=1}^{N} (μ_ik)^m, 1 ≤ i ≤ c.

2. Compute the matrix [D_ik]_{c×(N+1)} of pairwise distances between the objects and the representative objects.

3. Update the partition matrix U^(new) by the following principle: if D_ik > 0 for all i = 1, 2, …, c and k = 1, 2, …, N + 1, then μ_ik^(new) = 1 / ∑_{j=1}^{c} (D_ik/D_jk)^{2/(m−1)}; else μ_ik^(new) = 0.

4. Compute S = ‖U^(new) − U‖ = max_{ik} |μ_ik^(new) − μ_ik| and set U = U^(new).

Until S < ε.

Return the prior probabilities μ_{i(N+1)}, i = 1, 2, …, c (the final column of the matrix U).

In the above algorithm, we have:

ε is a small positive number chosen arbitrarily; the smaller ε is, the more iterations are needed. In the examples of this chapter, we choose ε = 0.001.

The distance matrix D_ik depends on the norm-inducing matrix A. When A = I, D_ik is the matrix of Euclidean distances. There are other choices of A, such as a diagonal matrix or the inverse of the covariance matrix. In this chapter, we use the Euclidean distance in the numerical examples and applications.

m is the fuzziness parameter: when m = 1, fuzzy clustering reduces to nonfuzzy (hard) clustering, and when m → ∞, the partition becomes completely fuzzy, μ_ik = 1/c. Determining this parameter, which affects the analysis result, is difficult. Although Yu et al. [28] proposed two rules to determine the supremum of m for clustering problems, the search for a specific m is done by the meshing method (see [2, 4, 5, 9] for details), in which the best m among several given values is chosen. In this chapter, m is also identified by the meshing method for the classification problem; the best integer m between 2 and 10 is used.

At the end of the PPC algorithm, we obtain the prior probabilities of x_{0} based on the last column of the partition matrix U(μi(N+1),i=1,2,…c). The PPC algorithm helps us determine the prior probabilities via the closeness degree between the classified object and the populations. Each object will receive its suitable prior probabilities.
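The PPC algorithm can be sketched for one-dimensional data as follows (a Python sketch of Algorithm 1 under Euclidean distance; the chapter's implementation is a MATLAB procedure, and the marks below echo the training data of Example 1 with the classified object held out):

```python
def ppc_priors(Z, labels, x0, c, m=2, eps=1e-3, max_iter=100):
    """Sketch of Algorithm 1 (PPC) for one-dimensional data.
    Z: training observations, labels: 0-based group index of each,
    x0: object to classify, c: number of groups. Returns the priors of x0."""
    data = Z + [x0]
    n_obj = len(data)
    # Initial partition: crisp memberships for labeled objects, 1/c for x0.
    U = [[1.0 if labels[j] == i else 0.0 for j in range(len(Z))] + [1.0 / c]
         for i in range(c)]
    for _ in range(max_iter):
        # Representative (weighted mean) of each group.
        v = []
        for i in range(c):
            w = [U[i][k] ** m for k in range(n_obj)]
            v.append(sum(wk * zk for wk, zk in zip(w, data)) / sum(w))
        # Membership update; on squared distances d, (d_i/d_j)^(1/(m-1))
        # equals (D_i/D_j)^(2/(m-1)) from step 3 of the algorithm.
        Unew = [[0.0] * n_obj for _ in range(c)]
        for k in range(n_obj):
            d = [(data[k] - v[i]) ** 2 for i in range(c)]
            for i in range(c):
                if min(d) == 0.0:
                    Unew[i][k] = 1.0 if d[i] == 0.0 else 0.0
                else:
                    Unew[i][k] = 1.0 / sum((d[i] / d[j]) ** (1.0 / (m - 1))
                                           for j in range(c))
        S = max(abs(Unew[i][k] - U[i][k]) for i in range(c) for k in range(n_obj))
        U = Unew
        if S < eps:
            break
    return [U[i][n_obj - 1] for i in range(c)]  # last column: priors of x0

marks = [0.6, 1.0, 1.2, 1.6, 2.2, 2.4, 2.4, 3.9,
         5.5, 5.6, 6.1, 6.4, 6.4, 7.3, 8.4, 9.2, 9.4, 9.6, 9.8]
labels = [0] * 8 + [1] * 11
priors = ppc_priors(marks, labels, 4.3, 2)
print([round(p, 3) for p in priors])  # the first prior is the larger one
```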

In this chapter, Bayesian method with prior probabilities calculated by the uniform distribution approach, the ratio of samples approach, the Laplace approach, and the proposed PPC algorithm approach are respectively called BayesU, BayesR, BayesL, and BayesC.

Example 1. Consider the marks (on a 10-point grading scale) of 20 students. Among them, nine students have marks lower than 5 (w_1: fail the exam) and 11 students have marks higher than 5 (w_2: pass the exam). The data are given in Table 1.

| Objects | Marks | Groups | Objects | Marks | Groups |
|---|---|---|---|---|---|
| 1 | 0.6 | w_1 | 11 | 5.6 | w_2 |
| 2 | 1.0 | w_1 | 12 | 6.1 | w_2 |
| 3 | 1.2 | w_1 | 13 | 6.4 | w_2 |
| 4 | 1.6 | w_1 | 14 | 6.4 | w_2 |
| 5 | 2.2 | w_1 | 15 | 7.3 | w_2 |
| 6 | 2.4 | w_1 | 16 | 8.4 | w_2 |
| 7 | 2.4 | w_1 | 17 | 9.2 | w_2 |
| 8 | 3.9 | w_1 | 18 | 9.4 | w_2 |
| 9 | 4.3 | w_1 | 19 | 9.6 | w_2 |
| 10 | 5.5 | w_2 | 20 | 9.8 | w_2 |

Table 1.

The studied marks of 20 students and the actual classifications.

Assume that we need to classify the ninth object, x_{0} = 4.3, to one in two populations. Using the PPC algorithm, we have the following final partition matrix:

This matrix shows the prior probabilities when assigning the ninth object to w_{1} and w_{2} are 0.724 and 0.276, respectively. Meanwhile, the prior probabilities determined by BayesU, BayesR, and BayesL are (0.5; 0.5), (0.421; 0.579), and (0.429; 0.571), respectively.

From the data in Table 1, we estimate the pdfs f_1(x) and f_2(x) and compute the values q_1f_1(x_0) and q_2f_2(x_0), where q_1 and q_2 are the calculated prior probabilities. The results of classifying x_0 by the four approaches BayesU, BayesR, BayesL, and BayesC are given in Table 2.

| Methods | Priors | g_max(x_0) | Populations | Bayes errors |
|---|---|---|---|---|
| BayesU | (0.5; 0.5) | 0.0353 | 2 | 0.0538 |
| BayesR | (0.421; 0.579) | 0.0409 | 2 | 0.0558 |
| BayesL | (0.429; 0.571) | 0.0403 | 2 | 0.0557 |
| BayesC | (0.724; 0.276) | 0.0485 | 1 | 0.0241 |

Table 2.

The results when classifying the ninth object.

Because the actual population of x_0 is w_1, only BayesC gives the correct result. The Bayes error of BayesC is also the smallest. Thus, in this example, the proposed method remedies the drawback of the traditional methods in determining the prior probabilities.

3.2. Determining Bayes error

Theorem 2. Let f_{i}(x), i =1, 2, …, k, k ≥ 3 be k pdfs defined on R^{n}, n ≥ 1 and let q_{i} ∈ (0;1),

In addition, from Eq. (17), we can directly find out

g_max(x) = g_i(x), ∀x ∈ R_i^n (1 ≤ i ≤ k).  (44)

For k = 2, q_{1} = q_{2} = 1/2, we consider the two following special cases:

If f_1(x) and f_2(x) are two one-dimensional normal pdfs (N(μ_i, σ_i), i = 1, 2), without loss of generality we suppose that μ_1 < μ_2 (when μ_1 ≠ μ_2) and σ_1 < σ_2 (when σ_1 ≠ σ_2); then

In the case n = 2, d(x) can be a straight line, a parabola, an ellipse, or a hyperbola.
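For the one-dimensional special case with q_1 = q_2 = 1/2, the boundary points where the two weighted pdfs intersect are the roots of a quadratic obtained by equating the log-densities; a sketch (the parameters in the usage lines are illustrative):

```python
import math

def normal_boundary(mu1, s1, mu2, s2):
    """Intersection points of the (1/2)N(mu1, s1^2) and (1/2)N(mu2, s2^2) pdfs:
    roots of a*x^2 + b*x + c = 0 derived from log f1(x) = log f2(x)."""
    a = 1.0 / s2 ** 2 - 1.0 / s1 ** 2
    b = 2.0 * (mu1 / s1 ** 2 - mu2 / s2 ** 2)
    c = mu2 ** 2 / s2 ** 2 - mu1 ** 2 / s1 ** 2 + 2.0 * math.log(s2 / s1)
    if abs(a) < 1e-12:                      # equal variances: a single midpoint
        return [-c / b]
    disc = math.sqrt(b ** 2 - 4.0 * a * c)
    return sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])

print(normal_boundary(0.0, 1.0, 2.0, 1.0))  # equal variances → [1.0]
print(normal_boundary(0.0, 1.0, 3.0, 2.0))  # unequal variances → two roots
```

With equal variances the boundary is the single midpoint between the means; with unequal variances there are two crossing points, which matches the two-root case discussed above.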

3.3. Maximum function in the classification problem

To classify a new element by principle (1) and to determine the Bayes error by formula (3), we must find g_max(x). Some authors, such as Pham–Gia et al. [15, 17] and Tai [21, 22], have surveyed relationships between g_max(x) and related quantities of the classification problem. A specific expression for g_max(x) has been found in some special cases [18]; however, a general expression covering all cases remains an open and complex problem.

Given k pdfs f_i(x) and priors q_i, i = 1, 2, …, k, with q_1 + q_2 + … + q_k = 1, let g_i(x) = q_if_i(x) and g_max(x) = max{g_i(x)}. We are now interested in determining g_max(x).

(a) For one dimension

In this case, we can find g_{max}(x) by the following algorithm:

Algorithm 2. Find the g_max(x) function

Input: g_i(x) = q_if_i(x), where f_i(x) and q_i are the probability density function and the prior probability of w_i, i = 1, 2, …, k, respectively.

Output: the g_max(x) function.

1. Find all roots of the equations g_i(x) − g_j(x) = 0, i = 1, …, k − 1, j = i + 1, …, k. Let B be the set of all roots.

2. For each x_lm ∈ B (the root of g_l(x) − g_m(x) = 0) and each p ∈ {1, 2, …, k}\{l, m}: if g_l(x_lm) < g_p(x_lm), then set B = B\{x_lm}.

3. Arrange the elements of B in increasing order: B = {x_1, x_2, …, x_h}, x_1 < x_2 < … < x_h.

4. (Determine g_max(x) on (−∞, x_1]) For i = 1 to k: if g_i(x_1 − ε_1) = max{g_1(x_1 − ε_1), g_2(x_1 − ε_1), …, g_k(x_1 − ε_1)}, then g_max(x) = g_i(x) for all x ∈ (−∞, x_1].

5. (Determine g_max(x) on (x_j, x_{j+1}], j = 1, …, h − 1) For i = 1 to k and j = 1 to h − 1: if g_i(x_j + ε_2) = max{g_1(x_j + ε_2), g_2(x_j + ε_2), …, g_k(x_j + ε_2)}, then g_max(x) = g_i(x) for all x ∈ (x_j, x_{j+1}].

6. (Determine g_max(x) on (x_h, +∞)) For i = 1 to k: if g_i(x_h + ε_3) = max{g_1(x_h + ε_3), g_2(x_h + ε_3), …, g_k(x_h + ε_3)}, then g_max(x) = g_i(x) for all x ∈ (x_h, +∞).

In the above algorithm, ε_{1}, ε_{2}, ε_{3} are the positive constants such that:

x_1 + ε_1 < x_2, x_h − ε_3 > x_{h−1}, x_i − ε_2 > x_{i−1}, and x_i + ε_2 < x_{i+1}.  (50)

From this algorithm, we have written a MATLAB code to find the g_{max}(x). When g_{max}(x) is determined, we will easily calculate Bayes error by using formula (3), as well as classify a new element by principle (1).
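A grid-based Python sketch of the idea behind Algorithm 2: locate where the maximizing index changes and record which g_i attains the maximum on each interval (the chapter's MATLAB code works from the exact roots instead; the two populations here are assumptions):

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def gmax_pieces(gs, a, b, n=10000):
    """Approximate the piecewise structure of g_max on [a, b]: detect where
    the maximizing index changes on a fine grid and return (left, right, index)
    triples, one per interval on which a single g_i attains the maximum."""
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    winner = [max(range(len(gs)), key=lambda i: gs[i](x)) for x in xs]
    pieces, start = [], xs[0]
    for j in range(1, n + 1):
        if winner[j] != winner[j - 1]:
            pieces.append((start, xs[j], winner[j - 1]))
            start = xs[j]
    pieces.append((start, xs[-1], winner[-1]))
    return pieces

gs = [lambda x: 0.5 * normal_pdf(x, 0.0, 1.0),
      lambda x: 0.5 * normal_pdf(x, 3.0, 1.0)]
print(gmax_pieces(gs, -5.0, 8.0))  # g_1 wins left of x = 1.5, g_2 to the right
```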

Example 2. Given seven populations having univariate normal pdfs {f_{1}, f_{2},…, f_{7}} with specific parameters as follows (Figure 1):

In multidimensional cases, it is very complicated to obtain a closed expression for g_max(x). The difficulty comes from the various forms of the intersection curves between the pdf surfaces. This problem has attracted the interest of many authors [17, 18, 21–25]. Pham–Gia et al. [18] attempted to find the function g_max(x); however, it has only been established for some cases of the bivariate normal distribution.

Example 3. Given the four bivariate normal pdfs N(μ_{i}, Σ_{i}) with the following specific parameters [16]:

With q_{1} = 0.25, q_{2} = 0.2, q_{3} = 0.4, and q_{4} = 0.15, we have the graphs of g_{i}(x) = q_{i}f_{i}(x) and their intersection curves as shown in Figure 2.

Here, we do not find the expression of g_max(x); instead, we compute the Bayes error by integrating g_max(x) with the quasi-Monte Carlo method [17]. An algorithm for this computation has been constructed, and the corresponding MATLAB procedure is used in Section 4.
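A sketch of the quasi-Monte Carlo idea for the Bayes error in two dimensions, using a Halton point set over a bounding box; the two bivariate normal populations (with diagonal covariances) and the box are assumptions for illustration:

```python
import math

def halton(i, base):
    """i-th element of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def normal2_pdf(x, y, mu, s):
    """Bivariate normal pdf with independent components (diagonal covariance)."""
    z = (x - mu[0]) ** 2 / (2 * s[0] ** 2) + (y - mu[1]) ** 2 / (2 * s[1] ** 2)
    return math.exp(-z) / (2 * math.pi * s[0] * s[1])

def bayes_error_qmc(gs, box, n=20000):
    """Estimate Pe = 1 - integral of g_max over box = ((x0, x1), (y0, y1))
    with a 2-D Halton point set (bases 2 and 3)."""
    (x0, x1), (y0, y1) = box
    vol = (x1 - x0) * (y1 - y0)
    total = 0.0
    for i in range(1, n + 1):
        x = x0 + (x1 - x0) * halton(i, 2)
        y = y0 + (y1 - y0) * halton(i, 3)
        total += max(g(x, y) for g in gs)
    return 1.0 - vol * total / n

gs = [lambda x, y: 0.5 * normal2_pdf(x, y, (0.0, 0.0), (1.0, 1.0)),
      lambda x, y: 0.5 * normal2_pdf(x, y, (3.0, 0.0), (1.0, 1.0))]
print(round(bayes_error_qmc(gs, ((-6.0, 9.0), (-6.0, 6.0))), 3))
```

The box must be wide enough that the probability mass outside it is negligible; low-discrepancy points then converge faster than plain Monte Carlo for this smooth integrand.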

3.4. Estimate the probability density function

There are many parametric and nonparametric methods to estimate pdfs. In the examples and applications of Section 4, we use the kernel method, the most popular one in practice. It has the following formula:

f̂(x) = (1/(N h_1 h_2 … h_n)) ∑_{i=1}^{N} ∏_{j=1}^{n} f_j((x_j − x_ij)/h_j),  (19)

where x_j, j = 1, 2, …, n are the variables, x_ij, i = 1, 2, …, N is the ith observation of the jth variable, h_j is the bandwidth for the jth variable, and f_j(·) is the kernel function of the jth variable, usually the normal, Epanechnikov, biweight, or triweight kernel. In this method, the choices of the smoothing parameter and of the kernel function play an important role and affect the result. Although Silverman [20], Martinez and Martinez [10], and other authors [7, 13, 27] have discussed this problem, an optimal choice has not yet been found. In this chapter, the smoothing parameter follows the idea of Scott [19] and the kernel is the Gaussian one. We have also written MATLAB code to estimate pdfs in n-dimensional space using this method.
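A sketch of estimator (19) with the Gaussian kernel and a Scott-style bandwidth h_j = σ_j N^(−1/(n+4)) per variable (the data points are hypothetical; the chapter's own estimator is a MATLAB procedure):

```python
import math

def scott_bandwidths(data):
    """Rule-of-thumb bandwidth per variable: h_j = sigma_j * N^(-1/(n+4))."""
    N, n = len(data), len(data[0])
    hs = []
    for j in range(n):
        col = [row[j] for row in data]
        mu = sum(col) / N
        sd = math.sqrt(sum((v - mu) ** 2 for v in col) / (N - 1))
        hs.append(sd * N ** (-1.0 / (n + 4)))
    return hs

def kde(x, data, hs):
    """Product Gaussian kernel estimate of the pdf at x, as in Eq. (19)."""
    N, n = len(data), len(data[0])
    total = 0.0
    for row in data:
        p = 1.0
        for j in range(n):
            u = (x[j] - row[j]) / hs[j]
            p *= math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
        total += p
    return total / (N * math.prod(hs))

data = [(1.0, 2.0), (1.5, 1.8), (2.1, 2.4), (0.9, 2.2), (1.7, 1.9)]
hs = scott_bandwidths(data)
print(kde((1.5, 2.0), data, hs))  # density near the center of the sample
```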

We have written the complete code for the proposed algorithm by MATLAB software. It is applied effectively for the examples of Section 4.

4. Some applications

In this section, we consider three applications, in biology, medicine, and economics, to illustrate the presented theory and to test the established algorithms. They also show that the proposed algorithm has advantages over the existing ones.

Application 1. We consider classification for well-known Iris flower data, which have been presented in many documents like in Ref. [17]. These data are often used to compare the new method and existing ones in classifying. The three varieties of Iris, namely, Setosa (Se), Versicolor (Ve), and Virginica (Vi), have data in four attributes: X1 = sepal length, X2 = sepal width, X3 = petal length, and X4 = petal width.

In this application, the cases of one, two, three, and four variables are considered in turn to classify the three groups (Se), (Ve), and (Vi) by the Bayesian method with different prior probabilities. The purpose is to compare the results of BayesC with those of BayesU, BayesR, and BayesL. Because the three groups have equal sizes, the results of BayesU, BayesR, and BayesL coincide. The correct classification probabilities are summarized in Table 3.

| Variables | BayesU = BayesL = BayesR | BayesC |
|---|---|---|
| X1 | 0.667 | 0.679 |
| X2 | 0.668 | 0.579 |
| X3 | 0.903 | 0.916 |
| X4 | 0.815 | 0.827 |
| X1, X2 | 0.715 | 0.807 |
| X1, X3 | 0.893 | 0.895 |
| X1, X4 | 0.807 | 0.850 |
| X2, X3 | 0.891 | 0.898 |
| X2, X4 | 0.809 | 0.815 |
| X3, X4 | 0.843 | 0.866 |
| X1, X2, X3 | 0.892 | 0.919 |
| X1, X2, X4 | 0.764 | 0.810 |
| X1, X3, X4 | 0.762 | 0.814 |
| X2, X3, X4 | 0.736 | 0.822 |
| X1, X2, X3, X4 | 0.725 | 0.745 |

Table 3.

The correct probability (%) in classifying Iris flower.

Table 3 shows that in almost all cases the proposed algorithm gives better results than the other algorithms, and the case using the three variables X1, X2, and X3 gives the best result.

Application 2. This application considers thyroid gland disease (TGD). The thyroid is an important gland, the largest in the body, responsible for the metabolism and the working of all cells. Some common thyroid diseases are hypothyroidism, hyperthyroidism, thyroid nodules, and thyroid cancer; they are dangerous diseases, and recently the rate of thyroid disease has been increasing in some poor countries. The data include 3772 persons: 3541 in the ill group (I) and 231 in the nonill group (NI). Details of these data are given at http://www.cs.sfu.ca/wangk/ucidata/dataset/thyroid–disease; the surveyed variables are Age (X1), Query on thyroxin (X2), Anti-thyroid medication (X3), Sick (X4), Pregnant (X5), Thyroid surgery (X6), Thyroid Stimulating Hormone (X7), Triiodothyronine (X8), Total thyroxin (X9), T4U measured (X10), and Referral source (X11). In this application, a random 70% of the data (2479 elements of group I and 162 elements of group NI) is used as the training set to determine the significant variables, to estimate the pdfs, and to find a suitable model. The remaining 30% (1062 elements of group I and 69 elements of group NI) is used as the test set. The results of the Bayesian method are also compared with the others.

To assess the effect of the independent variables on TGD, we build the logistic regression model for log(p/(1 − p)) with the variables X_i, i = 1, 2, …, 11, where p is the probability of TGD. The analytical results are summarized in Table 4.

| Variable | Sig. | Variable | Sig. |
|---|---|---|---|
| X1 | 0.000 | X7 | 0.304 |
| X2 | 0.279 | X8 | 0.000 |
| X3 | 0.998 | X9 | 0.995 |
| X4 | 0.057 | X10 | 0.999 |
| X5 | 0.997 | X11 | 0.000 |
| X6 | 0.997 | Const | 0.992 |

Table 4.

Value Sigs of logistic regression model.

In Table 4, the three variables X1, X8, and X11 are statistically significant for classifying the two groups (I) and (NI) at the 5% level, so we use them to classify TGD.

Applying the PPC algorithm for cases of one variable, two variables, and three variables with all prior probabilities, we obtain the results given in Table 5.

Table 5 shows that the correct classification probability is high, and BayesC gives the best result in all three cases. With three variables, BayesC is almost exact. We also compare BayesC with the existing methods (Fisher, SVM, and logistic) for the three cases above; in all cases, BayesC has the advantage in reducing the error.

| Cases | Variables | BayesU | BayesR | BayesL | BayesC |
|---|---|---|---|---|---|
| One variable | X1 | 91.13 | 97.47 | 97.46 | 97.97 |
| | X8 | 90.72 | 98.51 | 98.50 | 98.65 |
| | X11 | 90.53 | 97.48 | 97.47 | 98.19 |
| Two variables | X1, X8 | 98.73 | 98.77 | 98.77 | 99.78 |
| | X1, X11 | 98.11 | 98.65 | 97.65 | 99.44 |
| | X8, X11 | 98.71 | 98.77 | 98.77 | 99.82 |
| Three variables | X1, X8, X11 | 98.35 | 98.89 | 98.89 | 99.96 |

Table 5.

The correct probability (%) in classifying TGD by Bayesian method from training set.

Using the best result for each method from Table 6 and classifying the test set (1131 elements), we obtain the results given in Table 7.

| Methods | One variable | Two variables | Three variables |
|---|---|---|---|
| Logistic | 93.90 | 93.90 | 93.90 |
| Fisher | 72.30 | 73.60 | 71.70 |
| SVM | 93.87 | 93.87 | 93.87 |
| BayesC | 98.65 | 99.82 | 99.96 |

Table 6.

The correct probability (%) for optimal models of methods in classifying TGD.

From Table 7, we see that with the test set, BayesC also gives the best result.

| Methods | Correct numbers | False numbers | Correct probability |
|---|---|---|---|
| Logistic | 835 | 296 | 73.8 |
| Fisher | 835 | 296 | 73.8 |
| SVM | 1062 | 69 | 90.9 |
| BayesC | 1062 | 69 | 93.9 |

Table 7.

Compare the correct probability (%) in classifying TGD from test set.

Application 3. This application considers the problem of repaying bank debt (RBD) by customers. In bank credit operations, determining the repayment ability of customers is very important: if lending is too easy, the bank may face bad debts; if it is too strict, the bank will miss good business. Therefore, in recent years, the classification of credit applications by assessing the ability to repay bank debt has been specially studied, and it remains a difficult problem in Vietnam. In this section, we appraise this ability for companies in Can Tho City (CTC), Vietnam, using the proposed approach. We collected data on 214 enterprises operating in the key sectors of agriculture, industry, and commerce, including 143 cases of good debt (G) and 71 cases of bad debt (B). The data are provided by the responsible organizations of CTC. Each company is evaluated by 13 independent variables chosen by expert opinion. The specific variables are given in Table 8.

| Xi | Independent variables | Detail |
|---|---|---|
| X1 | Financial leverage | Total debt/total equity |
| X2 | Reinvestment | Total debt/total equity |
| X3 | Roe | Net profit/equity |
| X4 | Interest | (Net income + depreciation)/total assets |
| X5 | Floating capital | (Current assets − current liabilities)/total assets |

Because the data are sensitive, the author conceals the real data and uses a training data set. The steps performed in this application are similar to those in Application 2. The training set has 100 elements of group G and 50 elements of group B, and the test set has 43 elements of group G and 21 elements of group B. With the training set, the logistic regression model shows that only the three variables X1, X4, and X7 are statistically significant at the 5% level, so we use these three variables to perform BayesU, BayesR, BayesL, and BayesC. Their results are given in Table 9.

| Cases | Variables | BayesU | BayesR | BayesL | BayesC |
|---|---|---|---|---|---|
| One variable | X1 | 86.21 | 86.14 | 84.13 | 87.13 |
| | X4 | 81.12 | 82.91 | 86.16 | 88.19 |
| | X7 | 83.21 | 84.63 | 83.14 | 84.52 |
| Two variables | X1, X4 | 87.25 | 88.72 | 87.19 | 89.06 |
| | X1, X7 | 88.16 | 88.34 | 83.26 | 89.56 |
| | X4, X7 | 89.25 | 89.04 | 89.02 | 91.34 |
| Three variables | X1, X5, X7 | 91.15 | 91.53 | 90.17 | 93.18 |

Table 9.

The correct probability (%) in classifying RBD by Bayesian method from training set.

From Table 9, we see that BayesC gives the highest correct probability in all cases. We also use the logistic, Fisher, and SVM methods with the training set to find their best results. The correct probabilities are given in Table 10.

Using the best model for each method from Table 10 to classify the test set (64 elements), we obtain the results given in Table 11.

| Methods | One variable | Two variables | Three variables |
|---|---|---|---|
| Logistic | 84.04 | 88.29 | 88.69 |
| Fisher | 84.73 | 80.73 | 79.32 |
| SVM | 82.34 | 82.03 | 83.07 |
| BayesC | 88.19 | 91.34 | 93.18 |

Table 10.

The correct probability (%) for optimal models of methods in classifying RBD.

Once again, Table 11 shows that BayesC also gives the best result on the test data.

| Methods | Correct numbers | False numbers | Correct probability |
|---|---|---|---|
| Logistic | 53 | 11 | 82.81 |
| Fisher | 52 | 12 | 81.25 |
| SVM | 53 | 11 | 82.81 |
| BayesC | 57 | 7 | 89.06 |

Table 11.

Compare the correct probability (%) in classifying RBD from test set.

5. Conclusion

This chapter has presented the classification problem by the Bayesian method in both its theoretical and applied aspects. We established relations between the Bayes error and other measures and considered how to compute it in real applications in one and in multiple dimensions. An algorithm to determine the prior probabilities, which can decrease the Bayes error, was proposed. The researched problems were applied in three different domains: biology, medicine, and economics; the applications show that the proposed approach has advantages over existing ones. In addition, a complete MATLAB procedure was written and used effectively in the real applications. These examples show the potential of this work for research on real problems.

Tai Vovan (November 2, 2017). Classifying by Bayesian Method and Some Applications. In: Javier Prieto Tejedor (Ed.), Bayesian Inference. IntechOpen. DOI: 10.5772/intechopen.70052.
