1. Introduction
A training set is a special set of labeled data providing known information that is used in supervised learning to build a classification or regression model. We can imagine each training instance as a feature vector together with an appropriate output value (label, class identifier). A supervised learning algorithm deduces a classification or regression function from the given training set; the deduced function should predict an appropriate output value for any input vector. The goal of the training phase is to estimate the parameters of a model so that it predicts output values with good predictive performance in real use.
When a model is built, we need to evaluate it in order to compare it with other models or parameter settings, or in order to estimate its predictive performance. Strategies and measures for model evaluation are described in section 2.
For a reliable prediction of the future error we need to evaluate the model on an independent, identically distributed set that is different from the set used to build the model. In the absence of such an independent dataset, we can split the original dataset into several subsets to simulate the effect of having more datasets. Some splitting algorithms proposed in the literature are described in section 3.
During the learning process most learning algorithms use all instances of the given training set to estimate the parameters of a model, but commonly many instances in the training set are useless. These instances cannot improve the predictive performance of the model and can even degrade it. There are several reasons to discard such useless instances. The first one is noise reduction, because many learning algorithms are noise sensitive [31], so noise-reducing methods are applied before the learning phase. The second reason is to speed up the model response by reducing computation. This is especially important for instance-based learners such as k-nearest neighbours, which classify instances by finding the most similar instances in a training set and assigning them the dominant class. These types of learners are commonly called lazy learners, memory-based learners or case-based learners [14]. Reduction of training sets can also be necessary simply because the sets are huge. The size and structure of a training set needed to correctly estimate the parameters of a model differ from problem to problem and depend on the chosen instance selection method [14]. Moreover, the chosen instance selection method is closely related to the classification or regression method. The process of instance reduction is also called instance selection in the literature. A review of instance selection methods is given in section 4.
Most learning algorithms assume that the training sets used to estimate the parameters of a model or to evaluate it have proportionally the same representation of classes. However, in many domains some classes are represented by only a few instances while other classes have a large number of representatives. Methods that deal with this class imbalance problem are described in section 5.
1.1. Basic notations
In this section we set up basic notation and definitions used in the document.
A population is the set of all existing feature vectors (features).
The term representativeness is closely related to the previous definition. We can define a representative set as a subset of the population with the following properties:
It is significantly smaller in size compared to the original dataset.
It captures most of the information of the original dataset compared to any other subset of the same size.
It has low redundancy among the representatives it contains.
In the idealized case a training set is a representative set of the population. None of the methods mentioned below would be needed if we had a representative subset of the population, but in practice we never have one. We usually have a random sample of the population, and we use various methods to make it as representative as possible.
In connection with representative sets we can also define the minimal consistent subset of a training set: the smallest subset of the training set that, when used in place of the original set, preserves the behaviour of the model (for a nearest neighbour classifier, the smallest subset that still classifies every instance of the training set correctly).
The sets used for the evaluation of a model are the validation set and the testing set.
2. Model evaluation
Model evaluation is an important but often underestimated part of model building and assessment. When we have prepared and preprocessed data, we want to build a model with the ability to accurately predict future observations. We do not want a model that perfectly fits the training data; we need a model that is reliable after deployment in real use. For this purpose we should have two phases of model evaluation. In the first phase we evaluate a model in order to estimate its parameters during the learning phase. This is a part of model selection, when we select the model with the best results. This phase is also called the validation phase. It does not necessarily mean that we choose the model that best fits a particular set of data: a well-learned model captures only the underlying phenomenon, not the noise. A model that also captures the noise is called over-fitted [47]. In the second phase we evaluate the selected model in order to assess its real performance on new, unseen data. The process steps are shown below.
Model selection
Model learning (Training phase)
Model validation (Validation phase)
Model assessment (Testing phase)
2.1. Evaluation methods
While building a model we need to evaluate its performance in order to validate or assess it, as mentioned earlier. There are several ways to check a model, but not all of them are sufficient or applicable in every situation. We should always choose the most appropriate and reliable method for our purpose. Some common evaluation methods are [83]:
Comparison of the model with a physical theory - A comparison of the model with a physical theory is the first and probably the easiest way to check the model. For example, if the model predicts a negative quantity or parameters outside of a possible range, it points to a poorly estimated model. However, a comparison with a physical theory is not always possible, nor is it sufficient as a quality indicator.
Comparison of the model with a theoretical or empirical model - Sometimes a theoretical model exists but may be too complicated for practical use. In this case, the theoretical model can be used for comparison or for evaluating the accuracy of the built model.
Collecting new data for evaluation - The use of data collected in an independent experiment is the best and most preferred way of model evaluation. It is the only way that gives a real estimate of the model performance on new data, and only newly collected data can reveal a bias in the previous sampling process. This is the easiest way if the experiment and sampling process can be repeated cheaply. Unfortunately, there are situations when we are not able to collect new independent data, either because of the high cost of the experiment or because the process cannot be repeated.
Using the same data as for model building - Using the same data for evaluation and for model building usually leads to an optimistic estimate of the real performance due to a positive bias. This method is not recommended, and whenever another option exists it should not be used for model evaluation at all.
Reserving part of the learning data for evaluation - Reserving part of the learning data is in practice the most common way to deal with the absence of an independent dataset for model evaluation. Since selecting the reserved part of the data is usually not a simple task, many methods have been proposed; their usage depends on the particular domain. Splitting the data is intended to have the same effect as having two independent datasets. However, this is not quite true: only newly collected data can reveal a bias in the training dataset.
2.1.1. Evaluation measures
There is a large variety of performance measures for evaluating a classifier or predictor. However, a measure that is good for evaluating a model in one domain may be inappropriate in another domain, and vice versa. The choice of an evaluation measure depends on the domain of use and the given problem. Moreover, different measures are used for classification and for regression problems. The measures described briefly below are the basics of model evaluation. For more details see [46, 100].
Measures for classification evaluation - The basis for analysing classifier performance is a confusion matrix. The confusion matrix describes how well a classifier can recognize different classes. For a problem with m classes, the confusion matrix is an m x m table whose entry in row i and column j gives the number of instances of class i that were classified as class j (Figure 1); in the two-class case its entries are the counts of true positives (TP), false negatives (FN), false positives (FP) and true negatives (TN).

Figure 1. Confusion matrix
The first and most commonly used measure is the accuracy, the proportion of correctly classified instances,
$$\mathrm{accuracy} = \frac{\text{number of correctly classified instances}}{\text{total number of instances}},$$
or, in the two-class case,
$$\mathrm{accuracy} = \frac{TP + TN}{TP + TN + FP + FN}.$$
Having defined the accuracy, we can define the error rate of a classifier as $\mathrm{error\ rate} = 1 - \mathrm{accuracy}$,
which is the percentage of incorrectly classified instances.
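As an illustration, the following sketch (with made-up labels, not data from this chapter) builds a two-class confusion matrix and derives the accuracy and error rate from its entries.

```python
# Minimal sketch: a binary confusion matrix and the measures derived from it.
import numpy as np

def confusion_matrix_2x2(y_true, y_pred, positive=1):
    """Return (TP, FN, FP, TN) for a two-class problem."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == positive) & (y_pred == positive))
    fn = np.sum((y_true == positive) & (y_pred != positive))
    fp = np.sum((y_true != positive) & (y_pred == positive))
    tn = np.sum((y_true != positive) & (y_pred != positive))
    return tp, fn, fp, tn

y_true = [1, 1, 0, 0, 1, 0, 0, 1]      # assumed example labels
y_pred = [1, 0, 0, 0, 1, 1, 0, 1]      # assumed example predictions
tp, fn, fp, tn = confusion_matrix_2x2(y_true, y_pred)
accuracy = (tp + tn) / (tp + tn + fp + fn)
error_rate = 1.0 - accuracy
print(tp, fn, fp, tn, accuracy, error_rate)
```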
If the costs of making a wrong classification are known, we can assign a different cost or benefit to each correct or incorrect classification. This simple method is known as costs and benefits, or risks and gains. The cost matrix then has the structure shown in Figure 2, where $\lambda_{ij}$ denotes the cost of deciding for class $i$ when the true class is $j$.

Figure 2. Cost matrix
Using the accuracy measure fails when classes are significantly imbalanced (the class imbalance problem is discussed in section 5). A good example is medical data, where we can have a large number of negative instances and only a few positive ones; a classifier that assigns every instance to the majority (negative) class then achieves a high accuracy although it never detects the positive instances we are interested in.
Alternatives for the accuracy measure are:
Sensitivity (also called True Positive Rate or Recall) - the percentage of truly positive instances that were classified as positive, $\mathrm{sensitivity} = \frac{TP}{TP + FN}$,
Specificity (also called True Negative Rate) - the percentage of truly negative instances that were classified as negative, $\mathrm{specificity} = \frac{TN}{TN + FP}$,
Precision - the percentage of positively classified instances that are truly positive, $\mathrm{precision} = \frac{TP}{TP + FP}$.
It can be shown that the accuracy is a function of the sensitivity and specificity:
$$\mathrm{accuracy} = \mathrm{sensitivity} \cdot \frac{TP + FN}{TP + TN + FP + FN} + \mathrm{specificity} \cdot \frac{TN + FP}{TP + TN + FP + FN}.$$
The F-measure combines precision and recall. It is generally defined as
$$F_{\beta} = \frac{(1 + \beta^{2})\,\mathrm{precision}\cdot\mathrm{recall}}{\beta^{2}\,\mathrm{precision} + \mathrm{recall}},$$
where $\beta$ controls the relative weight of recall; for $\beta = 1$ the F-measure is the harmonic mean of precision and recall.
Another measure is the geometric mean of the class-wise accuracies, in the two-class case $g = \sqrt{\mathrm{sensitivity} \cdot \mathrm{specificity}}$. The geometric mean puts all classes on an equal footing; unfortunately, there is no way to overweight any class [22].
The evaluation measure should be appropriate to the domain of use. If it is possible, the best way to write a report is usually to provide the whole confusion matrix, so that the reader can calculate the measure he or she is most interested in.
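Continuing the earlier sketch, the measures above can be computed directly from the confusion-matrix counts; the counts used below are assumed example values.

```python
# Sketch: imbalance-aware measures derived from TP, FN, FP, TN counts.
import math

def imbalance_measures(tp, fn, fp, tn, beta=1.0):
    sensitivity = tp / (tp + fn)          # recall / true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    precision   = tp / (tp + fp)
    f_measure   = ((1 + beta**2) * precision * sensitivity /
                   (beta**2 * precision + sensitivity))
    g_mean      = math.sqrt(sensitivity * specificity)
    return sensitivity, specificity, precision, f_measure, g_mean

print(imbalance_measures(tp=90, fn=10, fp=30, tn=870))
```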
Measures for regression evaluation - The measures described above are used mainly for classification problems rather than for regression problems. For regression problems more appropriate error measures are used. They focus on how close the actual model is to the ideal model, instead of looking at whether the predicted value is correct or incorrect. The difference between the known value $y$ and the predicted value $\hat{y}$ is measured by a loss function.
The squared loss is one of the most common measures used for regression purposes; it is defined as
$$L(y, \hat{y}) = (y - \hat{y})^{2}.$$
A disadvantage of this measure is its sensitivity to outliers (squaring the error scales the loss quadratically), so data should be filtered for outliers before it is used. Another measure commonly used in regression is the absolute loss, defined as
$$L(y, \hat{y}) = |y - \hat{y}|.$$
It avoids the problem of outliers by scaling the loss linearly. Closely related to the absolute loss is the ε-insensitive loss; the difference is that it does not penalize errors within some defined range $\varepsilon$,
$$L(y, \hat{y}) = \max(0, |y - \hat{y}| - \varepsilon).$$
The average of the loss over a dataset is called the generalization error or error rate. On the basis of the loss functions described above we can define the mean absolute error and the mean squared error as
$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} |y_i - \hat{y}_i| \quad\text{and}\quad \mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^{2},$$
respectively. An often used measure is also the root mean squared error, $\mathrm{RMSE} = \sqrt{\mathrm{MSE}}$, which has the same scale as the quantity being estimated. Like the squared loss, the mean squared error is sensitive to outliers, while the mean absolute error is not. When a relative measure is more appropriate, we can use the relative absolute error
$$\mathrm{RAE} = \frac{\sum_{i=1}^{n} |y_i - \hat{y}_i|}{\sum_{i=1}^{n} |y_i - \bar{y}|}$$
or the relative squared error
$$\mathrm{RSE} = \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^{2}}{\sum_{i=1}^{n} (y_i - \bar{y})^{2}},$$
where $\bar{y}$ denotes the mean of the known values $y_i$.
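The regression measures above can be computed in a few lines; the arrays y and y_hat below are assumed illustrative values, not data from this chapter.

```python
# Sketch of the regression error measures on assumed known/predicted values.
import numpy as np

y     = np.array([3.0, 5.0, 2.5, 7.0, 4.0])   # known values (assumed)
y_hat = np.array([2.8, 5.4, 2.0, 6.5, 4.3])   # predicted values (assumed)

mae  = np.mean(np.abs(y - y_hat))
mse  = np.mean((y - y_hat) ** 2)
rmse = np.sqrt(mse)
rae  = np.sum(np.abs(y - y_hat)) / np.sum(np.abs(y - y.mean()))
rse  = np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(mae, mse, rmse, rae, rse)
```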
2.1.2. Bias and variance
The bias, the variance and the bias/variance dilemma are directly related to the most important performance measure, the mean squared error, and are described thoroughly in the literature. Due to the importance of these characteristics, it is worth describing them in more detail.
Consider a statistical model characterized by a parameter vector $\theta$ that is estimated from the data by an estimator $\hat{\theta}$. The quality of the estimate can be measured by the mean squared error (MSE), the expected squared difference between the estimate and the true parameter, $\mathrm{MSE}(\hat{\theta}) = E[(\hat{\theta} - \theta)^{2}]$.
The MSE is equal to the sum of the variance and the squared bias of the estimate, formally
$$\mathrm{MSE}(\hat{\theta}) = \mathrm{Var}(\hat{\theta}) + \mathrm{Bias}(\hat{\theta})^{2}.$$
Thus either bias or variance can contribute to poor performance of the estimator.
The bias of an estimator is defined as the difference between the expected value of the estimator and the true value of the parameter, formally
$$\mathrm{Bias}(\hat{\theta}) = E[\hat{\theta}] - \theta.$$
In other words, the bias says whether the estimator is correct on average. If the bias is equal to zero, the estimator is said to be unbiased. An estimator can be biased for many reasons, but the most common source of an optimistic bias is using the training data (or data not independent of the training data) to estimate predictive performance.
The variance describes how much the estimate varies around its expected value. For an unbiased estimator the MSE is equal to the variance; this means that even an unbiased estimator may have a large MSE if the variance is large.
Since the MSE can be decomposed into the sum of the variance and the squared bias, both characteristics need to be minimized to achieve good predictive performance. It is common to trade off some increase in the bias for a larger decrease in the variance [41].
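The decomposition can be checked numerically; the Monte Carlo sketch below (an assumed simulation setup, not taken from this chapter) compares two variance estimators, one biased and one unbiased, and verifies that MSE is approximately the variance plus the squared bias in both cases.

```python
# Monte Carlo sketch of the bias/variance decomposition of the MSE.
import numpy as np

rng = np.random.default_rng(0)
true_var, n, runs = 4.0, 10, 100_000
samples = rng.normal(0.0, np.sqrt(true_var), size=(runs, n))

for ddof in (0, 1):                       # 0: divide by n (biased), 1: by n-1 (unbiased)
    est = samples.var(axis=1, ddof=ddof)  # one estimate per simulated sample
    bias = est.mean() - true_var
    var = est.var()
    mse = np.mean((est - true_var) ** 2)
    print(f"ddof={ddof}: bias={bias:.3f} var={var:.3f} "
          f"mse={mse:.3f} bias^2+var={bias**2 + var:.3f}")
```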
2.2. Comparing algorithms
When we have learned several models and need to select the best one, we usually use one of the described measures to estimate the performance of each model and then simply choose the one with the highest performance. This is often a sufficient way of model selection. A different problem arises when we need to prove an improvement in model performance, especially if we want to show that one model really outperforms another on a particular learning task. In that case we have to use a test of statistical significance and verify the hypothesis of improved performance.
The best known and most popular tests in machine learning are the paired t test and its improved version, the k-fold cross-validated paired t test. In the paired t test, the original set is repeatedly split at random into a training and a testing set, both models are trained on each training set and their error rates on the corresponding testing set are recorded; the differences of the error rates are assumed to be drawn independently from a normal distribution, and a t statistic computed from these differences is used to test the null hypothesis that both models perform equally well.
Unfortunately, this assumption is less than true: the individual error rates and their differences are not independent, because the training and testing sets of the individual iterations overlap. The k-fold cross-validated paired t test mentioned above is built on the same basis; the difference is in the splitting into training and testing sets, which replaces the random division. The original set is divided into k disjoint folds, and in each of the k iterations one fold is used for testing and the remaining folds for training. The training sets still overlap, so the independence assumption remains violated.
The improved version, the 5x2cv paired t test proposed in [28], performs 5 replications of 2-fold cross-validation. In each replication the original dataset is randomly divided into two subsets of equal size; each model is trained on one subset and tested on the other and vice versa, and the error rate differences from all replications are used to compute the test statistic.
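As a sketch of the k-fold cross-validated paired t test (subject to the caveats about overlapping training sets noted above), the following code compares two learners on an assumed benchmark dataset; the dataset and the two classifiers are placeholders, not prescribed by this chapter.

```python
# Sketch: paired t test on per-fold error differences of two classifiers.
import numpy as np
from scipy.stats import ttest_rel
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
err_a, err_b = [], []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    for model, errs in ((KNeighborsClassifier(), err_a),
                        (DecisionTreeClassifier(random_state=0), err_b)):
        model.fit(X[train_idx], y[train_idx])
        errs.append(1.0 - model.score(X[test_idx], y[test_idx]))

t_stat, p_value = ttest_rel(err_a, err_b)   # paired t test on the error differences
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```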
Other approaches described in the literature are McNemar's test [33], the test for the difference of two proportions [82] and many others.
The methods described above consider a comparison over one dataset; for a comparison of classifiers over multiple datasets see [26].
2.3. Dataset comparison
In some cases we need to compare two datasets to find out whether they have the same distribution. For example, if we split the original dataset into a training and a testing set, we expect that each subset will contain a representative sample and that the distributions of the two sets will be the same (within a specific tolerance). When assessing splitting algorithms, one of the criteria is therefore the capability of the algorithm to divide the original dataset into two identically distributed subsets.
For comparing the distributions of datasets we should use a statistical test under the null hypothesis that the distributions are the same. These tests are usually called goodness-of-fit tests and are widely described in the literature [2, 8, 59, 85]. In the univariate case we can compare distributions relatively easily using one of the numerous graphical or statistical tests, e.g. histograms, PP and QQ plots, the chi-square test for a discrete multinomial distribution, or the non-parametric Kolmogorov-Smirnov test. For more details see [87].
The multivariate case is more complicated because the generalization to more dimensions is not so straightforward. Generalizations of the most cited goodness-of-fit test, the Kolmogorov-Smirnov test, are described in [10, 35, 54].
In the case of comparing two subsets of one set, we use a naive approach: we consider the two sets to be approximately the same when their basic multivariate data characteristics are close. We believe that for our purpose this naive approach is sufficient; its advantages are simplicity and low computational complexity in comparison with the goodness-of-fit tests. A description of the commonly used multivariate data characteristics follows.
The first characteristic is the mean vector. Let $X = \{x_1, x_2, \ldots, x_n\}$ be a set of $n$ instances, where each instance $x_i$ is a vector of $p$ features. The sample mean vector is defined as
$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i,$$
i.e. the vector of the means of the individual features. Therefore, the mean vector describes the location of the data in the feature space.
The second characteristic is the sample covariance matrix,
$$S = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^{\top},$$
where the diagonal elements are the sample variances of the individual features and the off-diagonal elements are the sample covariances between pairs of features. Because the matrix is symmetric, the covariance matrix contains $p(p+1)/2$ distinct values, which is impractical for a quick comparison; a single summarizing number is the generalized sample variance, defined as the determinant $|S|$.
The geometric interpretation of the generalized sample variance is a volume: $|S|$ is proportional to the square of the volume of the $p$-dimensional region spanned by the deviation vectors $x_i - \bar{x}$, so a small value indicates that the data are concentrated near the mean or near a lower-dimensional subspace.
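The naive comparison of two subsets can be sketched as follows; the random data below are an assumption for illustration only.

```python
# Sketch: comparing two subsets via mean vector, covariance matrix and
# generalized sample variance |S|.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 3))          # assumed data with p = 3 features
rng.shuffle(data)
part1, part2 = data[:100], data[100:]     # a simple random split

for name, part in (("part1", part1), ("part2", part2)):
    mean_vec = part.mean(axis=0)
    cov = np.cov(part, rowvar=False)      # unbiased sample covariance matrix
    gen_var = np.linalg.det(cov)          # generalized sample variance
    print(name, mean_vec.round(2), round(gen_var, 3))
```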
More details about the multivariate data characteristic can be found in [77].
3. Data splitting
In the ideal situation we have collected several independent datasets, or we can simply and inexpensively repeat an experiment to collect new ones. We can then use independent datasets for learning, model selection and even the assessment of the prediction performance, and there is no reason to split any particular dataset. But when only one dataset is available and we are not able to collect new data, we need some strategy to perform the particular tasks described earlier. In this section we review several data splitting strategies and data splitting algorithms which try to deal with the absence of independent datasets.
3.1. Data splitting strategies
When only one dataset is given, several ways of using the available data come into consideration for performing the tasks described in section 2 (training, validation, testing). We can split the available data into two or more parts and use each part for a particular task. Common practice is to split the data into two or three sets:

Figure 3. Two- and three-way splitting
Training set - a set used for learning and estimating parameters of the model.
Validation set - a set used to evaluate the model, usually for model selection.
Testing set - a set of examples used to assess the predictive performance of the model.
Let us define the following data splitting strategies according to how the available data are used in the process of model building.
The null strategy (Strategy 0) uses all available data for all tasks. Training, selecting and assessing a model on the same data usually leads to over-fitting of the model and to an over-optimistic estimate of the predictive accuracy. The error estimated on the same set on which the model was trained is known as the re-substitution error.
The strategy motivated by the arrival of new data (Strategy 1) uses one set for training and a second set, containing the first set together with newly collected data, for the assessment. Merging the newly collected data with the old data loses the independence of model selection and assessment, which can lead to an over-optimistic estimate of the model performance.
The most commonly used strategy is to split data into two sets, a training set and a testing set. The training set (also called the estimation set) is used to estimate the parameters of the model and also for model selection (validation). The testing set is then used to assess the prediction performance of the model (Strategy 2).
Another strategy (Strategy 3) which splits data into two sets uses one set for learning and the second for model selection and to assess its predictive performance.
Using an independent set for each task is generally recommended. This strategy (Strategy 4) splits the available data into three sets.
Strategy | Training | Validation | Testing
0 | All data | All data | All data
1 | Part 1 | All data | All data
2 | Part 1 | Part 1 | Part 2
3 | Part 1 | Part 2 | Part 2
4 | Part 1 | Part 2 | Part 3
Table 1. Data usage in different splitting strategies
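A three-way split as in Strategy 4 can be sketched as follows; the 60/20/20 proportions are an assumption for illustration, not a recommendation from this chapter.

```python
# Sketch of Strategy 4: random split into training, validation and testing sets.
import numpy as np

def three_way_split(n_instances, fractions=(0.6, 0.2, 0.2), seed=0):
    idx = np.random.default_rng(seed).permutation(n_instances)
    n_train = int(fractions[0] * n_instances)
    n_val = int(fractions[1] * n_instances)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = three_way_split(1000)
print(len(train_idx), len(val_idx), len(test_idx))   # 600 200 200
```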
3.2. Data splitting algorithms
Many data splitting algorithms have been proposed. Their quality and complexity differ, and no approach is superior in general. Data splitting methods and algorithms and their comparison can be found in the literature [15, 68, 83, 86]. Some commonly used algorithms are described below.
The holdout method described in [67] is the simplest method: it takes the original dataset and splits it randomly into two sets. Common practice is to use one third of the data for testing and the rest for training, or to split the data in half. Since the performance of a model generally increases with the number of instances seen during training, keeping instances apart from the training set increases the bias and decreases the performance; in addition, the two subsets may end up with different distributions. Moreover, if a dataset is not large enough, and it usually is not, the holdout method is inefficient in its use of data. For example, in a classification problem one or more classes might be missing in one of the subsets, which leads to a poor estimation of the model as well as to a poor evaluation. To deal with this, some advanced versions use so-called stratification. Stratified sampling is a probability sampling in which the original dataset is divided into non-overlapping groups called strata, and instances are selected from each stratum proportionally to the appropriate probability. This ensures that each class is represented with the same frequency in both subsets, but it still does not prevent bias in the training and testing sets. For a more reliable error estimate, the method is repeated and the resulting accuracy is calculated as an average over all iterations, which can reduce the bias of the estimate. The repeated holdout method is also known as Monte Carlo cross-validation, random sub-sampling or repeated evaluation sets.
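A repeated stratified holdout can be sketched as follows; the dataset and the 1-NN learner are placeholders, not prescribed by this chapter.

```python
# Sketch of the repeated (stratified) holdout estimate with a 1-NN learner.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
splitter = StratifiedShuffleSplit(n_splits=10, test_size=1/3, random_state=0)

accuracies = []
for train_idx, test_idx in splitter.split(X, y):
    model = KNeighborsClassifier(n_neighbors=1).fit(X[train_idx], y[train_idx])
    accuracies.append(model.score(X[test_idx], y[test_idx]))

print(f"repeated holdout accuracy: {np.mean(accuracies):.3f} +/- {np.std(accuracies):.3f}")
```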
The most popular resampling method is cross-validation. In k-fold cross-validation, the original dataset is split into k disjoint subsets (folds) of approximately equal size. In each of the k iterations one fold is used for testing and the remaining k-1 folds for training; the final error estimate is the average over all k iterations (Figure 4).

Figure 4. Cross-validation
Leave-one-out cross-validation (LOOCV) is the special case of k-fold cross-validation in which k is equal to the number of instances in the dataset: in each iteration a single instance is used for testing and all the remaining instances for training.
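Both schemes can be sketched in a few lines; the dataset and the 3-NN learner below are assumptions chosen only for illustration.

```python
# Sketch of k-fold cross-validation and LOOCV error estimation.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score, KFold, LeaveOneOut
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
model = KNeighborsClassifier(n_neighbors=3)

kfold_acc = cross_val_score(model, X, y,
                            cv=KFold(n_splits=10, shuffle=True, random_state=0))
loo_acc = cross_val_score(model, X, y, cv=LeaveOneOut())

print(f"10-fold CV: {kfold_acc.mean():.3f}, LOOCV: {loo_acc.mean():.3f}")
```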
The bootstrap method was introduced in [89]. The main idea of the method is as follows. Given a dataset of n instances, a bootstrap sample is created by sampling n instances uniformly at random with replacement; the instances that do not appear in the bootstrap sample are used for testing (Figure 5).

Figure 5. Bootstrap
The best known and most commonly used approach is the .632 bootstrap. The number 0.632 is the expected fraction of distinct instances of the original dataset that appear in the training (bootstrap) set: each instance has a probability of $1/n$ of being picked in one draw, so the probability that it is never picked in $n$ draws is $(1 - 1/n)^{n} \approx e^{-1} \approx 0.368$, and hence it appears in the bootstrap sample with probability approximately 0.632. The resulting error estimate combines the error on the instances left out of the bootstrap sample with the re-substitution error,
$$err_{.632} = 0.632 \cdot err_{\text{test}} + 0.368 \cdot err_{\text{train}},$$
where $err_{\text{test}}$ is the error measured on the unseen (out-of-bag) instances and $err_{\text{train}}$ is the error measured on the training instances.
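A single .632 bootstrap replication can be sketched as follows (in practice the estimate is averaged over many bootstrap samples); the dataset and learner are assumed placeholders.

```python
# Sketch of one .632 bootstrap replication of the error estimate.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
n = len(X)

boot = rng.integers(0, n, size=n)                # sample n indices with replacement
oob = np.setdiff1d(np.arange(n), boot)           # out-of-bag (unseen) instances

model = KNeighborsClassifier(n_neighbors=3).fit(X[boot], y[boot])
err_train = 1.0 - model.score(X[boot], y[boot])  # re-substitution error
err_test = 1.0 - model.score(X[oob], y[oob])     # error on unseen instances

err_632 = 0.632 * err_test + 0.368 * err_train
print(f"err_632 = {err_632:.3f}")
```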
Kennard-Stone's algorithm (CADEX) [25, 55] is used for splitting datasets into two distinct subsets which cover approximately the same region of the factor space defined by the original dataset. Instead of measuring coverage by an explicit criterion, the algorithm follows two guidelines: no instance of one set should be too far from any instance of the other set, and the coverage should start on the boundary of the factor space. The instances are chosen sequentially, and the aim is to select in each iteration the instance that keeps the selected instances uniformly distributed over the space defined by the original dataset. The algorithm works as follows. Let $D$ be the original dataset and $S$ the set of selected instances, initially empty. The algorithm starts by adding to $S$ the two most distant instances of $D$. In each following step it selects the instance of $D \setminus S$ whose distance to the nearest already selected instance is maximal (a max-min criterion), moves it to $S$, and repeats this until the required number of instances has been selected. In other words, for each instance remaining in the dataset the distance to its closest already selected instance is computed, and the instance for which this distance is largest is selected next (Figure 6).

Figure 6. CADEX
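A minimal sketch of the Kennard-Stone selection follows; it uses plain NumPy, a quadratic-time pairwise distance matrix, and assumed random two-dimensional data.

```python
# Sketch of Kennard-Stone (CADEX) subset selection with a max-min criterion.
import numpy as np

def kennard_stone(X, n_select):
    X = np.asarray(X, dtype=float)
    # pairwise Euclidean distance matrix
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    selected = list(np.unravel_index(np.argmax(dist), dist.shape))  # two most distant
    while len(selected) < n_select:
        remaining = [i for i in range(len(X)) if i not in selected]
        # distance of each remaining instance to its nearest selected instance
        min_dist = dist[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining[int(np.argmax(min_dist))])
    return np.array(selected)

X = np.random.default_rng(0).uniform(size=(100, 2))   # assumed data
subset_idx = kennard_stone(X, n_select=20)            # indices of the covering subset
print(subset_idx)
```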
Other methods can be considered when we take into account the following assumption: we suppose that two sets have approximately the same distribution when every instance of one set has a close counterpart in the other set.
Finding the optimal splitting into two such sets is computationally very expensive, so two heuristic approaches come to mind. The first is a method based on the nearest neighbour rule. This simple method splits the original dataset into two or more subsets by finding the nearest instance (nearest neighbour) of a randomly chosen instance and putting each instance of the pair into a different subset. The second heuristic finds the closest pair of instances (described in [88]) in the original dataset, puts one instance of the pair into the first subset and the other into the second subset, and repeats this until all instances are distributed.
4. Instance selection
As mentioned earlier, instance selection is the process of reducing the original dataset. Many instance selection methods have been described in the literature. In [14] it is argued that instance selection methods are problem dependent and that none of them is superior to the others over many problems. In this section we review several instance selection methods.
According to the strategy used for selecting instances, we can divide instance selection methods into two groups [71]:
Wrapper methods - The selection criterion is based on the predictive performance or the error of a model (commonly, instances that do not contribute to the predictive performance are discarded from the training set).

Figure 7. Wrapper method
Filter methods - The selection criterion is a function that is not based on the algorithm used for prediction but rather on features of the instance vector.

Figure 8. Filter method
Other categorizations are also used in the literature. A division of instance selection methods according to the type of application is proposed in [49]: noise filters focus on discarding useless (noisy) instances, while prototype selection builds a set of representatives (prototypes). The last categorization presented here is based on how instance selection methods create the final dataset: incremental methods start with an empty set and add the selected instances to it, whereas decremental methods start with the whole training set and remove the instances that do not satisfy the selection criterion.
A good review of instance selection methods can be found in [65, 71]. A comparison of instance selection algorithms on several benchmark databases is presented in [50]. Some instance selection algorithms are described below.
4.1. Wrapper methods
The first published instance selection algorithm is probably the Condensed Nearest Neighbour (CNN) [23]. It is an incremental method starting with a new set seeded with one instance per class. Every instance of the training set is then classified using only the instances in the new set; when an instance is misclassified, it is added to the new set. The pass over the training set is repeated until no instance is added. Because misclassified instances tend to lie close to the decision boundary, the resulting set consists mainly of border instances (Figure 9).

Figure 9. CNN - selected instances
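A sketch of the CNN rule with a 1-NN classifier follows; seeding with one randomly chosen instance per class matches the description above, while the helper name and the use of scikit-learn are assumptions.

```python
# Sketch of Condensed Nearest Neighbour (CNN) instance selection.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def cnn_select(X, y, seed=0):
    rng = np.random.default_rng(seed)
    keep = [rng.choice(np.where(y == c)[0]) for c in np.unique(y)]  # one per class
    changed = True
    while changed:                                   # repeat passes until stable
        changed = False
        clf = KNeighborsClassifier(n_neighbors=1).fit(X[keep], y[keep])
        for i in range(len(X)):
            if i not in keep and clf.predict(X[i:i + 1])[0] != y[i]:
                keep.append(i)                       # add the misclassified instance
                clf = KNeighborsClassifier(n_neighbors=1).fit(X[keep], y[keep])
                changed = True
    return np.array(keep)                            # indices of selected instances
```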
Reduced Nearest Neighbour (RNN) is a modification of the CNN introduced in [39]. The RNN is a decremental method that starts with the whole training set and removes an instance whenever its removal does not cause any other instance of the training set to be misclassified by the instances that remain.
Selective Nearest Neighbour (SNN) [79] is based on the CNN. It finds a subset such that every instance of the training set is closer to some instance of the subset of the same class than to any instance of the training set of a different class, and among all subsets with this property it selects the smallest one.
Generalized Condensed Nearest Neighbour (GCNN) [21] is another instance selection decision rule based on the CNN. The GCNN works in the same way as the CNN, but it also defines the following absorption criterion: an instance is absorbed (not added to the selected set) only if the difference between its distance to the nearest selected instance of a different class and its distance to the nearest selected instance of the same class is greater than a given threshold; the CNN corresponds to a zero threshold.
Edited Nearest Neighbour (ENN), described in [98], is a decremental algorithm starting with the whole training set and discarding every instance that is misclassified by its k nearest neighbours (typically with k = 3). Because such instances are mostly noisy, the ENN works as a noise filter and, in contrast to the CNN, it keeps internal instances rather than border instances (Figure 10).

Figure 10. ENN - discarded instances (3-NN)
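A corresponding sketch of the ENN filter follows; again the helper name and the scikit-learn classifier are assumptions.

```python
# Sketch of Edited Nearest Neighbour (ENN) noise filtering.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def enn_filter(X, y, k=3):
    keep = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i               # leave the instance itself out
        clf = KNeighborsClassifier(n_neighbors=k).fit(X[mask], y[mask])
        if clf.predict(X[i:i + 1])[0] == y[i]:
            keep.append(i)                          # correctly classified -> keep
    return np.array(keep)                           # indices of retained instances
```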
Instance Based (IB1-3) methods were proposed in [1]. The IB2 selects the instances misclassified by the IB1 (the IB1 is the same as the 1-NN algorithm). It is quite similar to the CNN, but the IB2 does not seed the selected set with one instance per class and does not repeat the process after the first pass through the training set as the CNN does. The last version, the IB3, is an incremental algorithm extending the IB2. The IB3 uses a significance test and accepts an instance only if its accuracy is statistically significantly greater than the frequency of its class; similarly, an instance is rejected if its accuracy is statistically significantly lower than the frequency of its class. Confidence intervals are used to determine the impact of the instance (0.9 to accept, 0.7 to reject).
Decremental Reduction Optimization Procedures (DROP1-5) are instance selection algorithms presented in [99]. These methods use the notion of an associate: the associates of an instance are the instances that have it among their k nearest neighbours. DROP1 removes an instance if its associates in the selected set can still be classified correctly without it; DROP2 checks the associates in the whole original set; DROP3-5 additionally apply a noise filter similar to the ENN and differ in the order in which instances are considered for removal.
Iterative Case Filtering (ICF) is described in [14]. It defines two properties of an instance: its reachability, the set of instances inside its local neighbourhood (the largest hypersphere centred at the instance that contains only instances of the same class), and its coverage, the set of instances that have it in their reachability set.
The algorithm is based on these two properties. At first the ICF uses the ENN to filter noisy instances; then it repeatedly computes the two properties and in each iteration removes the instances whose reachability set is larger than their coverage set, until no such instance remains.
Many other methods were proposed in literature. Some of them are based on evolutionary algorithms (EA)[38, 64, 84, 91], other methods use the support vector machine (SVM) [9, 17, 61, 62] or tabu search (TS) [18, 42, 103].
4.2. Filter methods
The Pattern by Ordered Projections (POP) method [78] is a heuristic approach to finding representative patterns. The main idea of the algorithm is to select only some border instances and to eliminate the instances that are not on the boundaries of the regions to which they belong. It uses a weakness function that counts, for each instance, the number of attribute projections in which the instance is not on the border of its class interval; instances whose weakness equals the number of attributes are discarded.
Another method based on finding border instances is the Pair Opposite Class-Nearest Neighbour (POC-NN) [75]. The POC-NN calculates the mean of all instances in each class and finds a border instance of one class as the instance closest to the mean of the opposite class; the border instances found in this way are kept as prototypes.
The Maxdiff kd trees method described in [69] is based on kd trees [37]. The algorithm builds a binary tree from the original dataset. All instances are placed in the root node, and child nodes are constructed by splitting a node by a pivot, which is the feature with the maximum difference between consecutively ordered values. The process is repeated until no node can be split; the leaves of the tree form the output condensed set.
Several methods are based on clustering. They split the original dataset into a number of clusters and use the cluster centres (or the instances nearest to them) as prototypes.
Some prototype filtering methods have also been proposed in the literature. The first described here is the Weighting Prototypes (WS) [73] method. The WS method assigns a weight to each prototype (an instance of the training set) and discards the prototypes whose weight falls below a given threshold.
5. Class balancing
A dataset is well balanced when all classes are represented in the same proportion, but in practice many classification domains are characterized by a small proportion of positive instances and a large proportion of negative instances, where the positive instances are usually our points of interest. This problem is commonly known as the class imbalance problem.
Although the overall performance of a classifier can be high, we are usually interested mainly in the classification of positive instances (the true positive rate), where the classifier often fails because it tends to classify all instances into the majority class. To avoid this problem, some strategy should be used when a dataset is imbalanced.
Class-balancing methods can be divided into three main groups according to the strategy of their use. Data-level methods are used in preprocessing and usually utilize various ways of re-sampling. Algorithm-level methods modify a classifier or the learning process to compensate for the imbalance. The last strategy is based on combining various methods to increase the performance.
This chapter gives an overview of class balancing strategies and some particular methods. Two good and detailed reviews were published in [44, 57].
5.1. Data-level methods
The aim of these methods is to change distributions of classes by increasing the number of instances of the minority class (over-sampling), decreasing the number of instances of the majority class (under-sampling), by combinations of these methods or using other advanced sampling ways.
5.1.1. Under-sampling
The first and the most naive under-sampling method is random under-sampling [52]. The random under-sampling method balances the class distributions by discarding, at random, instances of the majority class. Because of the randomness of elimination, the method discards potentially useful instances, which can lead to a decrease of the model performance.
Several heuristic under-sampling methods have been proposed in the literature; some of them are linked with the instance selection methods mentioned in section 4. The first such algorithm is the Condensed Nearest Neighbour (CNN) [23] and the second is Wilson's Edited Nearest Neighbour (ENN) [98]; both discard instances that are considered useless or noisy.
A method based on the ENN, the Neighbourhood Cleaning Rule (NCL) [63], treats instances of the minority and majority class separately. If an instance belongs to the majority class and is misclassified by its three nearest neighbours (the nearest neighbour rule [23]), the instance is discarded. If an instance of the minority class is misclassified in the same way, then its neighbours that belong to the majority class are discarded.
Another method based on the nearest neighbour rule is One-sided Sampling (OSS) [60]. It is based on the idea of discarding majority class instances distant from the decision border, since these instances can be considered less useful for learning. The OSS applies the 1-NN rule over a set initially containing all minority class instances and one randomly chosen majority class instance: all majority class instances misclassified by this rule are added to the set, and majority class instances that form Tomek links (described below) are then discarded as borderline or noisy.
The Tomek links method [90] focuses on instances near the decision border. Let $x$ and $y$ be two instances of different classes and let $d(x, y)$ be the distance between them. The pair $(x, y)$ forms a Tomek link if there is no instance $z$ such that $d(x, z) < d(x, y)$ or $d(y, z) < d(x, y)$, i.e. if $x$ and $y$ are each other's nearest neighbours. Instances participating in Tomek links are either borderline or noisy; in under-sampling, the majority class instances of such links are discarded.
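Tomek link detection can be sketched as follows; the quadratic distance matrix and the helper name are assumptions made for brevity.

```python
# Sketch: find Tomek links (mutual nearest neighbours of different classes).
import numpy as np

def tomek_links(X, y):
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    nn = dist.argmin(axis=1)                        # nearest neighbour of each instance
    links = [(i, j) for i, j in enumerate(nn)
             if nn[j] == i and y[i] != y[j] and i < j]
    return links                                    # majority members would be removed
```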
5.1.2. Over-sampling
Random over-sampling is a naive method that balances class distributions by replicating, at random, instances of the minority class. Two disadvantages of this method have been described in the literature: first, instance replication increases the likelihood of over-fitting [19]; second, enlarging the training set by over-sampling can lead to a longer learning phase and a slower model response [60], mainly for lazy learners.
The best known over-sampling method is the Synthetic Minority Over-sampling Technique (SMOTE) [19]. The SMOTE does not over-sample with replacement; instead, it generates "synthetic" instances of the minority class. The minority class is over-sampled by taking each minority class instance and one of its nearest minority class neighbours and placing the "synthetic" instance at a random point along the line joining the two (Figure 11). This approach avoids over-fitting and causes the classifier to create larger and less specific decision regions rather than smaller and more specific ones. A method based on the SMOTE, the Borderline-SMOTE, which over-samples only the borderline instances of the minority class, reported better experimental results in terms of TP-rate and F-measure [45].

Figure 11. SMOTE - synthetic instances
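The generation of SMOTE-style synthetic instances can be sketched as follows; the function name and parameters are assumptions, and X_min stands for the minority class instances only.

```python
# Sketch of SMOTE-style synthetic minority instance generation.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_like(X_min, n_synthetic, k=5, seed=0):
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)   # +1 to skip the point itself
    _, neigh = nn.kneighbors(X_min)
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(len(X_min))                       # a random minority instance
        j = neigh[i][rng.integers(1, k + 1)]               # one of its k minority neighbours
        gap = rng.random()                                 # random point on the joining line
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)
```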
5.1.3. Advanced sampling
Some advanced re-sampling methods are based on re-sampling of results of the preliminary classification [44].
Over-sampling Algorithm Based on Preliminary Classification (OSPC) was proposed in [46]. It was reported that the OSPC can outperform under-sampling methods and the SMOTE in terms of classification performance [44].
The heuristic method proposed in [96, 97], the budget-sensitive progressive sampling algorithm, iteratively enlarges the training set on the basis of the performance results from the previous iteration.
A combination of over-sampling and under-sampling methods to improve generalization features of learners was proposed in [45, 58, 63]. A comparison of various re-sampling strategies is presented in [7].
5.2. Algorithm level methods
Another approach to dealing with imbalanced datasets is to modify a classifier or the learning process rather than change the class distributions by discarding or replicating instances. These methods are mainly based on overweighting the minority class, discriminating against the majority class, penalization for misclassification, or biasing the learning algorithm. A short description of published methods follows.
5.2.1. Algorithm modification
The ineffectiveness of over-sampling when the C4.5 decision tree learner with default settings is used was reported in [30]. It was noted that under-sampling produces a reasonable sensitivity to changes in misclassification costs and class distribution, whereas over-sampling produces little or no change in performance. It was also noted that modifying the C4.5 parameters together with under/over-sampling does have a strong effect on the overall performance.
A method that deals with imbalanced datasets by internally biasing the discrimination procedure is proposed in [6]. This method uses a weighted distance function in the classification phase of the k-NN. Weights are assigned to classes such that the majority class has a greater weighting factor than the minority class. This weighting makes the distance to minority class instances lower than the distance to instances of the majority class, so instances of the minority class are used more often when classifying a new instance.
Different approaches using an SVM biased in various ways to deal with imbalanced datasets have been published. The method proposed in [102] modifies the kernel function for this purpose. In [95], two schemes for controlling the balance between false positives and false negatives are proposed.
5.2.2. One-class learning
One-class learning is an alternative to discriminative approaches for dealing with imbalanced datasets. In one-class learning, a model is built using only the instances of the target class and is trained to recognize these instances, which can under certain conditions be superior to discriminative approaches [51]. Two one-class learning algorithms have been studied in the literature, namely the SVM [66, 81] and auto-encoders [51, 66]. An experimental comparison of these two methods can be found in [66]. The usefulness of one-class learning on extremely unbalanced datasets composed of high-dimensional noisy features is shown in [76].
5.2.3. Cost-sensitive learning
Cost-sensitive learning is another commonly used approach in the context of imbalanced datasets. A classification model is extended with a cost model in the form of a cost matrix. Given the cost matrix shown in Figure 2 in section 2, we can define the conditional risk of deciding for class $\alpha_i$ given an instance $x$ as
$$R(\alpha_i \mid x) = \sum_{j} \lambda_{ij}\, P(\omega_j \mid x),$$
where $P(\omega_j \mid x)$ is the posterior probability of class $\omega_j$ given $x$ and $\lambda_{ij}$ is the cost of deciding for class $i$ when the true class is $j$. The goal is to minimize the overall cost by choosing, for each instance, the decision with the lowest conditional risk.
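A minimum-risk decision under an assumed cost matrix and assumed posterior probabilities can be sketched as follows; note that the cost-sensitive decision may differ from the most probable class.

```python
# Sketch: choosing the decision with minimum conditional risk.
import numpy as np

cost = np.array([[0.0, 10.0],     # deciding "negative": costly if truly positive
                 [1.0,  0.0]])    # deciding "positive": small cost of a false alarm
posterior = np.array([0.7, 0.3])  # assumed P(negative | x), P(positive | x)

risk = cost @ posterior           # R(alpha_i | x) = sum_j cost[i, j] * P(omega_j | x)
decision = int(np.argmin(risk))   # here the risk favours "positive" despite P(neg|x)=0.7
print(risk, decision)
```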
A method which makes classifier cost sensitive, the MetaCost, is proposed in [29]. The MetaCost learns an internal cost-sensitive model, then estimates class probabilities and re-labels training instances with their minimum expected cost classes. A new model is built using the relabelled dataset.
The AdaCost [34] method, based on AdaBoost [36], is made cost-sensitive by over-weighting the misclassified instances of the minority class. Empirical experiments have shown that the AdaCost achieves lower cumulative misclassification costs than the AdaBoost.
5.3. Ensemble learning methods
Ensemble methods combine several models with the aim of achieving better results. The two best known ensemble methods are bagging and boosting. Bagging (bootstrap aggregating), proposed in [12], first generates a number of bootstrap samples of the training set, builds a model on each of them, and combines their outputs by voting for a classification task or by averaging for a regression task.
Boosting, first described in [80], is based on the idea that a powerful model can be created using a set of weak models. The method is quite similar to bagging: like bagging, boosting uses voting for a classification task or averaging for a regression task to predict the output value. However, boosting is an iterative method; in each iteration a newly built model is influenced by the performance of those built previously. By assigning greater weights to the instances that were misclassified in previous iterations, the method makes the model pay more attention to these instances.
Another, less widely used method in comparison with bagging and boosting is stacking, proposed in [101]. In the stacking method the original dataset is split into two disjoint sets, a training set and a validation set. Several base models are learned on the training set and then applied to the validation set. Using the predictions on the validation set as inputs and the correct values as outputs, a higher-level model is built. In comparison with bagging and boosting, stacking can be used to combine models of different types.
Ensemble methods such as bagging, boosting and stacking often outperform other methods; therefore, they have been widely studied in recent years and many approaches have been proposed. Besides the earlier mentioned AdaBoost [36] and AdaCost [34], other methods that use boosting are RareBoost [53] and SMOTEBoost [20]. A method combining bagging and stacking to identify the best combination of classifiers is used in [74]. Three learners (Naive Bayes, C4.5, 5-NN) are combined in the approach proposed in [58]. There are many other methods utilizing the mentioned approaches.
6. Conclusion
Several methods for training set re-sampling, instance selection and class balancing published in the literature have been reviewed. All of these methods are important in the process of constructing training and testing sets. Re-sampling methods allow a dataset to be split into several subsets when an independent set for model validation or for the assessment of prediction performance is not available. Instance selection methods reduce the training set by removing instances that are useless for estimating the parameters of a model, which can speed up the learning phase and the model response time, especially for lazy learners. Class balancing algorithms deal with the problem of unequal class distributions.
Acknowledgement
This work was supported by the Institute of Computer Science of the Czech Academy of Sciences RVO: 67985807.
The work was supported by Ministry of Education of the Czech Republic under INGO project No. LG 12020.