Open access peer-reviewed chapter

The Usage of Statistical Learning Methods on Wearable Devices and a Case Study: Activity Recognition on Smartwatches

Written By

Serkan Balli and Ensar Arif Sağbas

Submitted: 12 April 2016 Reviewed: 07 October 2016 Published: 26 April 2017

DOI: 10.5772/66213

From the Edited Volume

Advances in Statistical Methodologies and Their Application to Real Problems

Edited by Tsukasa Hokimoto

Abstract

The aim of this study is to explore the usage of statistical learning methods on wearable devices and to carry out an experimental study on the recognition of human activities using smartwatch sensor data. To achieve this objective, mobile applications that run on a smartwatch and a smartphone were developed to acquire training data and to detect human activity in real time; 500 patterns were obtained at 4‐second intervals for each activity (walking, typing, stationary, running, standing, writing on board, brushing teeth, cleaning and writing). The created dataset was tested with five different statistical learning methods (Naive Bayes, k nearest neighbour (kNN), logistic regression, Bayesian network and multilayer perceptron) and their performances were compared.

Keywords

  • statistical learning
  • activity recognition
  • wearable devices
  • smartwatch
  • Bayesian networks

1. Introduction

The usage of wearable technology is increasing rapidly, and its effects on user healthcare are enormous. Today's smart devices have more built‐in sensors than before. Wearable sensors are small devices that are carried by people while they perform daily activities. These sensors, such as the accelerometer, microphone, GPS and barometer, record the physical state of the person, such as location change, moving direction and moving speed. The latest smartphones and smartwatches have many such sensors built in [1, 2]. Because they are equipped with various on‐board sensors, smartphones and wrist‐worn devices such as smartwatches have been used extensively for activity recognition in recent studies [3]. With the popularity of smartwatches, wrist‐worn sensor devices will become an increasingly important tool in personal health monitoring [4]. Statistical learning methods are generally used in activity recognition studies. Statistical learning refers to a set of tools for modelling and understanding complex datasets. It is a recently developed area in statistics that blends with parallel developments in computer science and, in particular, machine learning [5].

The aim of this chapter is to investigate the usage of statistical learning methods on wearable devices and to carry out a case study on the recognition of human activities from smartwatch accelerometer data using statistical learning methods. This chapter is organized as follows: related works are described in detail in Section 2. Then, an overview of statistical learning is given in Section 3. Next, human activity recognition with smartwatches is explained in Section 4. Finally, Section 5 concludes the chapter.

2. Related works

When the literature is examined, various studies that apply statistical learning methods to wearable devices are found. Wang et al. [6] considered a user typing on a laptop keyboard while wearing a smartwatch. The accelerometer and gyroscope data, obtained from a Samsung Gear Live, were used as training data and processed through a sequence of steps, including key‐press detection, hand‐motion tracking, character point cloud computation and Bayesian modelling and inference. Shoaib et al. [3] carried out recognition of different daily living activities by using a smartphone and a smartwatch simultaneously and evaluated their effectiveness in recognizing human activities. They used J48, kNN and SVM (support vector machines) to recognize 13 different activities. da Silva and Galeazzo [7] presented the development of a system based on computational intelligence techniques and on an accelerometer to recognize, in a comfortable and non‐intrusive manner, the basic movements of a person's routine. Three different computational intelligence techniques were evaluated in order to find the best performance in recognizing the movements executed by the watch user. Chernbumroong et al. [8] studied the classification of five human activities by using only accelerometer data and two learning algorithms: artificial neural networks and the C4.5 decision tree.

Scholl and van Laerhoven [9] presented a feasibility study in which smokers wore an accelerometer device on their wrist over the course of a week to detect their smoking habits, based on detecting typical gestures carried out while smoking a cigarette. A Gaussian method was used as the classifier. Dong et al. [10] described a new method that uses a watch‐like configuration of sensors to continuously track wrist motion throughout the day and automatically detect periods of eating. Accelerometer and gyroscope sensor data were used in this study. Ramos‐Garcia and Hoover [11] developed a Hidden Markov model (HMM) and compared its recognition performance against a non‐sequential classifier (kNN), using a set of four actions (rest, utensiling, bite and drink). Trost et al. [12] compared the activity recognition rates of an activity classifier trained on acceleration signals collected on the wrist and on the hip. Features were extracted from 10‐second windows and fed into a regularized logistic regression model. Guiry et al. [1] investigated the role that smart devices, including smartphones and smartwatches, can play in identifying activities of daily living. The activities examined include walking, running, cycling, standing, sitting, elevator ascents, elevator descents, stair ascents and stair descents. Data from this study were used to train and test five well‐known statistical machine learning algorithms: C4.5, CART, naïve Bayes, multilayer perceptrons and support vector machines. Mortazavi et al. [4] introduced a framework for platform creation (e.g. an accelerometer‐only system versus accelerometer and gyroscope) and machine learning of several exercises, which can be especially useful in the emerging market of smartwatches. Random forests, decision trees, Naive Bayes and SVM methods were compared. Khan et al. [13] implemented a smartphone‐based human activity recognition (HAR) scheme in accordance with these requirements. Time domain features were extracted from only three smartphone sensors, and a nonlinear discriminatory approach was employed to recognize 15 activities with high accuracy. Evaluations were performed in both offline and online settings.

Dadashi et al. [14] carried out automatic detection of important breaststroke swimming events by using an HMM and wearable sensors. Parkka et al. [15] used accelerometers and gyroscopes attached to the ankle, wrist and hip to estimate the intensity of physical activity. Data from common everyday tasks and exercise were collected from 11 subjects. Shen et al. [16] tracked the 3D posture of the entire arm (both wrist and elbow) using the motion and magnetic sensors on smartwatches. Bieber and Peter [17] studied behaviour analysis using 3D sensor data and learning techniques and obtained sufficient results. Bao and Intille [18] developed and evaluated an algorithm to detect physical activities from data acquired using five small biaxial accelerometers worn simultaneously on different parts of the body. Kim et al. [19] developed an application using sensor signals from a smartphone and a smartwatch. A summary of the literature is given in Table 1.

Ref. | Author | Year | Detection | Device | Sensors | Methods
[16] | Shen et al. | 2016 | Arm posture | Smartwatch | Accelerometer, gyroscope, compass | Hidden Markov model
[6] | Wang et al. | 2015 | Typing on a laptop keyboard | Samsung Gear Live | Accelerometer, gyroscope | Bayesian inference
[3] | Shoaib et al. | 2015 | Smoking, eating, typing, writing, drinking coffee, talking, walking, jogging, biking, walking upstairs and downstairs, sitting, standing | Smartphone and smartwatch | Accelerometer, gyroscope | Support vector machine, k nearest neighbour, J48 decision trees
[12] | Trost et al. | 2014 | Lying down, sitting, standing, walking, running, basketball and dancing | ActiGraph GT3X+ | Accelerometer | Logistic regression
[1] | Guiry et al. | 2014 | Walking, running, cycling, standing, sitting, elevator ascents, elevator descents, stair ascents, stair descents | Samsung Galaxy Nexus smartphone, Motorola MotoActv smartwatch | Accelerometer, magnetometer, gyroscope, GPS, light, pressure (smartwatch: accelerometer only) | C4.5, CART, Naive Bayes, multilayer perceptron and support vector machine
[4] | Mortazavi et al. | 2014 | Bicep curls, crunches, jumping jacks, push‐ups, shoulder lateral raises | Samsung Galaxy Gear | Accelerometer, gyroscope | Random forests, decision trees, SVM and Naive Bayes
[13] | Khan et al. | 2014 | 15 different activities | LG Nexus 4 smartphone | Accelerometer, pressure, microphone | Artificial neural network, support vector machines and Gaussian mixture model
[10] | Dong et al. | 2014 | Periods of eating | iPhone 4 | Accelerometer, gyroscope | Naive Bayes
[11] | Ramos‐Garcia and Hoover | 2013 | Gesture recognition | Wrist‐worn accelerometer and gyroscope | Accelerometer, gyroscope | Hidden Markov model, k nearest neighbour
[14] | Dadashi et al. | 2013 | Breaststroke swimming temporal phases | IMU wearable sensor | Accelerometer, gyroscope | Hidden Markov model
[7] | da Silva and Galeazzo | 2013 | Walking, running, sitting, standing, lying, climbing stairs, coming down stairs and working on computer | eZ430‐Chronos | Accelerometer | Multilayer perceptron, k nearest neighbour, support vector machine
[9] | Scholl and van Laerhoven | 2012 | Cigarette smoking | Hedgehog | Accelerometer | Gaussian classifier
[8] | Chernbumroong et al. | 2011 | Sitting, standing, lying, walking, running | eZ430‐Chronos | Accelerometer | Artificial neural network, decision tree
[17] | Bieber and Peter | 2008 | Walking, running, cycling and resting | Bosch 3D‐acceleration sensor | Accelerometer | SVM, Bayesian nets and decision trees, J48
[15] | Parkka et al. | 2007 | Ironing, vacuuming, walking, running, cycling on exercise bicycle | Kionix accelerometer, XV‐3500 gyroscope | Accelerometer, gyroscope | Pearson linear correlation
[18] | Bao and Intille | 2004 | 20 different activities | ADXL210E accelerometers (on body) | Accelerometer | Decision table, IBL, C4.5, naïve Bayes

Table 1.

Summary of the studies.

3. Overview of statistical learning

Statistical learning comprises a large number of unsupervised and supervised tools for making inferences from data. In general terms, supervised statistical learning employs a statistical model to estimate or predict an output from relevant inputs in various areas such as public policy, medicine, astrophysics and business. In unsupervised statistical learning, relationships and the structure of the data can be learned without supervising an output [5]. In this chapter, supervised statistical learning methods (Naive Bayes, logistic regression, Bayesian network, k nearest neighbour (kNN) and multilayer perceptron) are used for activity recognition.

The Naive Bayes method learns and represents probabilistic information from data in a clear and easily understandable way; it is used in supervised learning tasks in which the classes are known in the training phase and prediction of classes is performed in the test phase [20]. The multilayer perceptron is a feedforward artificial neural network structure, because the output of the input layer and of every intermediate layer is passed only to the next higher layer. Here, 'layer' means a layer of perceptrons, and neither the number of hidden layers nor the number of perceptrons in each hidden layer is limited [21]. In kNN, the whole calibration dataset is used as the classification model; in other words, kNN does not build a separate model from the calibration dataset due to its non‐parametric nature. A test object is placed in the same multidimensional hyperspace as the calibration set, and the k nearest neighbours of the new test object among the calibration objects are computed, where 'nearest' means the smallest distance under a chosen norm [22]. Logistic regression is used to describe and test hypotheses about associations between a class variable and other related predictor variables by estimating probabilities with a logistic function; it can be binomial, ordinal or multinomial [23]. Bayesian networks are one kind of probabilistic graphical model. In a Bayesian network, knowledge about an uncertain domain is represented as a graphical structure: variables are represented as nodes in the graph, whereas probabilistic dependencies among the variables are represented as edges. The values associated with the edges can be calculated using known computational and statistical methods [24]. The model structure of the Bayesian network used in the case study is shown in Figure 1; its variables are the standard deviations and averages of the x‐, y‐ and z‐axes of the accelerometer sensor.
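To make the link between these features and the probabilistic classifiers concrete, the following is a minimal sketch (in standard notation, not taken from the chapter) of the Naive Bayes decision rule for this setting, where the six features are the means and standard deviations of the accelerometer axes shown in Figure 1:

\hat{c} = \arg\max_{c} \; P(c) \prod_{f \in \{\mu_x,\, \mu_y,\, \mu_z,\, \sigma_x,\, \sigma_y,\, \sigma_z\}} p(f \mid c)

Naive Bayes treats the six features as conditionally independent given the activity class c, whereas the Bayesian network of Figure 1 encodes explicit dependencies among the feature nodes.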

Figure 1.

The model structure of the Bayesian network.

4. Case study: activity recognition on smartwatches using statistical learning methods

In this study, activity recognition is performed by using accelerometer sensor data. The accelerometer measures the acceleration force in m/s² applied to the device along all three physical axes (x, y and z), shown in Figure 2, including the force of gravity [25, 26].

Figure 2.

Smartwatch accelerometer axes.

Figure 3 shows the amplitude change of the accelerometer x‐axis for nine different daily activities (typing, writing, writing on board, walking, running, cleaning, standing, brushing teeth and stationary).

Figure 3.

Amplitude change of accelerometer x‐axis.

The accelerometer signals of the smartwatch are utilized for activity detection by using statistical learning methods. Figure 4 shows the flowchart of activity recognition, which includes the data collection, feature selection, classification and smartwatch application development steps. Information about these steps is given in the following sub‐sections.

Figure 4.

Flowchart of activity recognition.

4.1. Collecting data and feature selection

Hardware: A Motorola Moto 360 [27] smartwatch (Figure 5) is used. This device has a quad core 1.2 GHz processor, 512 MB RAM and built‐in accelerometer, pedometer, ambient light and optical heart rate monitor sensors. In this chapter, only the accelerometer sensor is used to detect human activities. While coding the smartwatch application, SENSOR_DELAY_UI is set as the sampling rate parameter, which allows a sampling rate of 50 Hz. With its 400 mAh battery, the device is capable of tracking daily life activities for about a full day. It runs the Android Wear operating system.

Figure 5.

Smartwatch used in this case study.

Software: For collecting the dataset, two Android‐based applications are developed using the Java programming language. One of these applications runs on the smartwatch (Figure 6b) and the other runs on the smartphone (Figure 6a) paired with the smartwatch, because the collected sensor data are sent to the smartphone to be kept in its internal storage.

Figure 6.

(a) Smartphone dataset application, (b) smartwatch dataset application.

The smartwatch application has only one push button, which serves to start and stop collecting the sensor data. Figure 7 shows the structure of storing sensor data in the smartphone's internal storage. The collected sensor data are transferred to the connected smartphone and stored in its internal memory in CSV format with the desired label name. To start the data collection process, the user enters the name of the activity being performed into the mobile phone application and presses the 'Begin' button in the smartwatch application. During data collection, the smartwatch must be worn on the wrist.

Figure 7.

Structure of dataset application.
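As an illustration of the smartwatch side of this data collection, a minimal sketch of reading the accelerometer at the SENSOR_DELAY_UI rate is given below. The standard Android sensor API calls are real; the class name and the forwardToPhone hook are illustrative assumptions, not the authors' actual code.

import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

public class DataCollectionActivity extends Activity implements SensorEventListener {

    private SensorManager sensorManager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Obtain the sensor service and the built-in accelerometer of the smartwatch.
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        Sensor accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        // SENSOR_DELAY_UI is the delay setting used in this study.
        sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_UI);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Acceleration in m/s^2 along the x-, y- and z-axes, including gravity.
        float x = event.values[0];
        float y = event.values[1];
        float z = event.values[2];
        forwardToPhone(x, y, z); // illustrative hook: send the reading to the paired smartphone
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Not used in this sketch.
    }

    private void forwardToPhone(float x, float y, float z) {
        // Placeholder: in the study, readings are transferred to the smartphone
        // and appended to a CSV file named after the activity label.
    }
}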

For training the statistical methods, raw sensor data are collected for nine different human activities (running, walking, typing, writing, standing, writing on board, stationary, cleaning and brushing teeth), which consist of 900,000 lines (100,000 samples for each activity). The data are then split into segments of 200 lines (4‐second intervals) to form patterns, so each activity has 500 patterns. Features are extracted from the raw accelerometer data: the standard deviations and average values of the x‐, y‐ and z‐axes, as given in Figure 1.
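A minimal sketch of this feature extraction step is given below: each window of 200 raw readings is reduced to six features, the mean and the standard deviation of each axis. The method name is illustrative.

// Reduce one window of 200 accelerometer readings (each reading = {x, y, z})
// to six features: mean of x, y, z followed by standard deviation of x, y, z.
static double[] extractFeatures(float[][] window) {
    double[] features = new double[6];
    for (int axis = 0; axis < 3; axis++) {
        double sum = 0.0;
        for (float[] reading : window) {
            sum += reading[axis];
        }
        double mean = sum / window.length;
        double squaredDiff = 0.0;
        for (float[] reading : window) {
            squaredDiff += (reading[axis] - mean) * (reading[axis] - mean);
        }
        features[axis] = mean;                                       // average value of the axis
        features[axis + 3] = Math.sqrt(squaredDiff / window.length); // standard deviation of the axis
    }
    return features;
}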

4.2. Classification with statistical learning methods

Experimental study: The extracted features are tested with five different statistical learning methods (Naive Bayes, kNN, logistic regression, Bayesian network and multilayer perceptron) using the WEKA toolkit [28], which implements these methods. Half of the data is used for training and the remainder for testing; the training and testing data are split randomly. Table 2 compares evaluation metrics of the statistical learning methods, such as accuracy rate, F‐measure, ROC area and root mean squared error (RMSE). The F‐measure is a measure of a test's accuracy; its formulation is given in Eq. (1), where FN, FP, TP and TN represent the numbers of false negatives, false positives, true positives and true negatives, respectively [29].
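A minimal sketch of how such an evaluation could be run with the WEKA Java API is shown below; the dataset file name (activity_features.arff) and the random seed are assumptions for illustration, not details taken from the study.

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.BayesNet;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class EvaluateActivityClassifier {
    public static void main(String[] args) throws Exception {
        // Load the extracted features; the activity label is the last attribute.
        Instances data = DataSource.read("activity_features.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Random 50/50 split into training and test sets.
        data.randomize(new Random(1));
        int trainSize = data.numInstances() / 2;
        Instances train = new Instances(data, 0, trainSize);
        Instances test = new Instances(data, trainSize, data.numInstances() - trainSize);

        // Train a Bayesian network classifier (the best method in Table 2);
        // the other four methods can be substituted here in the same way.
        BayesNet classifier = new BayesNet();
        classifier.buildClassifier(train);

        // Evaluate on the held-out half and print the metrics reported in Table 2.
        Evaluation evaluation = new Evaluation(train);
        evaluation.evaluateModel(classifier, test);
        System.out.println("Accuracy (%): " + evaluation.pctCorrect());
        System.out.println("F-measure:    " + evaluation.weightedFMeasure());
        System.out.println("ROC area:     " + evaluation.weightedAreaUnderROC());
        System.out.println("RMSE:         " + evaluation.rootMeanSquaredError());
    }
}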

Methods | Accuracy rate (%) | F-measure | ROC area | RMSE
Naive Bayes | 81.33 | 0.819 | 0.974 | 0.1644
Bayesian network | 91.55 | 0.916 | 0.993 | 0.1242
kNN (k = 3) | 89.68 | 0.896 | 0.971 | 0.135
Logistic regression | 85.55 | 0.854 | 0.977 | 0.1507
Multilayer perceptron | 74.57 | 0.734 | 0.957 | 0.1937

Table 2.

Accuracy rates, F‐measure, ROC area and root mean squared error values of the statistical methods.

F\text{-measure} = \frac{2 \cdot \frac{TP}{TP+FP} \cdot \frac{TP}{TP+FN}}{\frac{TP}{TP+FP} + \frac{TP}{TP+FN}} \quad (1)

The RMSE of a model prediction with respect to the estimated variable X_{model} is defined as the square root of the mean squared error, given in Eq. (2):

RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(X_{obs,i} - X_{model,i}\right)^{2}} \quad (2)

where X_{obs,i} denotes the observed value and X_{model,i} the modelled value at time/place i [30].

The ROC (receiver operating characteristic) area, also known as the area under the curve (AUC), is calculated as in Eq. (3):

AUC = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\mathbf{1}_{p_i > p_j} \quad (3)

Here, i runs over all m data points with true label 1, and j runs over all n data points with true label 0; p_i and p_j denote the probability scores assigned by the classifier to data points i and j, respectively. \mathbf{1} is the indicator function: it outputs 1 if the condition is satisfied and 0 otherwise [31].
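A direct, illustrative implementation of Eq. (3) is sketched below: it counts the fraction of positive-negative pairs in which the positive example receives the higher score. The method name is an assumption for illustration.

// AUC as in Eq. (3): the fraction of (positive, negative) pairs for which
// the classifier assigns the positive example a higher score.
static double areaUnderCurve(double[] positiveScores, double[] negativeScores) {
    long favourablePairs = 0;
    for (double pi : positiveScores) {
        for (double pj : negativeScores) {
            if (pi > pj) {
                favourablePairs++;
            }
        }
    }
    long totalPairs = (long) positiveScores.length * negativeScores.length;
    return (double) favourablePairs / totalPairs;
}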

According to Table 2, the Bayesian network method has the best accuracy rate (91.55%) and the minimum RMSE value. The best values of both ROC area and F‐measure also belong to the Bayesian network method. The confusion matrix for the Bayesian network is given in Table 3, and the ROC curves of the five methods (Bayesian network, kNN, naïve Bayes, logistic regression and multilayer perceptron) are given in Figures 8–12.

Classified as | A | B | C | D | E | F | G | H | I
A = Typing | 249 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
B = Cleaning | 1 | 185 | 0 | 14 | 13 | 3 | 0 | 1 | 39
C = Writing | 0 | 1 | 253 | 0 | 0 | 0 | 0 | 0 | 6
D = Running | 0 | 1 | 0 | 253 | 0 | 0 | 0 | 0 | 0
E = Walking | 0 | 10 | 0 | 0 | 233 | 0 | 0 | 0 | 0
F = Writing board | 0 | 9 | 2 | 0 | 0 | 207 | 0 | 0 | 29
G = Standing | 0 | 9 | 0 | 0 | 2 | 7 | 235 | 0 | 0
H = Stationary | 9 | 2 | 2 | 0 | 0 | 0 | 0 | 240 | 0
I = Brushing teeth | 1 | 19 | 0 | 0 | 1 | 8 | 0 | 1 | 205

Table 3.

Confusion matrix of Bayesian network.

Figure 8.

ROC curve for classification by Bayesian network.

Figure 9.

ROC curve for classification by kNN.

Figure 10.

ROC curve for classification by naïve Bayes.

Figure 11.

ROC curve for classification by logistic regression.

Figure 12.

ROC curve for classification by multilayer perceptron.

According to Table 3, the recognition accuracy for cleaning is about 75%. This activity does not have simple characteristics and is easily confused with other activities. For example, 19 of the 235 brushing‐teeth patterns are misclassified as cleaning, and 39 of the 256 cleaning patterns are misclassified as brushing teeth. In addition, writing‐on‐board activities are confused with brushing teeth, and cleaning activities are confused with running and walking, because the cleaning activity involves walking.

Development of the mobile application: According to the results shown in Table 2, the Bayesian network method is used in the Android Wear‐based classification application for recognizing human activities (Figure 13).

Figure 13.

Detected human activities.

The developed mobile application for smartwatches collects sensor data and converts it into a pattern at 4‐second intervals. Then it classifies the pattern by using the trained Bayesian network model and the WEKA API and shows the detected activity on the smartwatch screen (Figure 14b). At this step, the smartwatch application does not need the smartphone. It is also possible to report the detected activities on the smartphone screen via the application developed for Android smartphones (Figure 14a).

Figure 14.

(a) Detected activities reporting application for smartphone, (b) activity recognition application for smartwatch.
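A minimal sketch of this classification step with the WEKA API is given below; the class name, the model and header file names are illustrative assumptions, and the feature order must match the order used for training.

import weka.classifiers.Classifier;
import weka.core.DenseInstance;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.SerializationHelper;

public class ActivityClassifier {
    private final Classifier model;
    private final Instances header; // empty dataset describing the attributes and class labels

    public ActivityClassifier(String modelPath, String headerPath) throws Exception {
        model = (Classifier) SerializationHelper.read(modelPath);   // trained Bayesian network
        header = (Instances) SerializationHelper.read(headerPath);  // attribute/class definitions
        header.setClassIndex(header.numAttributes() - 1);
    }

    // features: mean x, mean y, mean z, std x, std y, std z of one 4-second window.
    public String classify(double[] features) throws Exception {
        double[] values = new double[header.numAttributes()];
        System.arraycopy(features, 0, values, 0, features.length);
        Instance instance = new DenseInstance(1.0, values);
        instance.setDataset(header);
        int classIndex = (int) model.classifyInstance(instance);
        return header.classAttribute().value(classIndex); // e.g. "walking"
    }
}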

The steps of the algorithm and sample Java code used in the activity detection application are given in Figure 15.

Figure 15.

Steps of algorithm and sample Java codes.

5. Conclusion

In this chapter, human activity recognition on smartwatches by using statistical learning methods is studied. It is found that the Bayesian network method is the best method for the dataset used in the study. Through this work, it is possible to understand how to classify human activities by using statistical learning methods and sensor data. Only accelerometer sensor data are used for nine different activities. Future studies on human activity recognition could be improved by using the other sensors that smartwatches provide (heart rate monitor, ambient light, GPS and gyroscope), by increasing the number of classes to detect more activities (handshake, smoking, drinking, etc.) or by separating out more complex variants of activities (e.g. walking with hands in pockets, walking hand in hand).

Nowadays, smartwatches and wrist‐worn sensors are used in daily activity monitoring and healthy‐lifestyle applications. These devices can also help to warn the user in daily life in order to build healthy, sportive habits. For example, a smartwatch can send a reminder to the user after staying stationary for a long time. Such devices and applications can give people information such as how much they walk, how long they sleep and how many calories they burn. In addition, this kind of work also contributes to virtual reality applications.

Acknowledgments

This study was supported by Muğla Sıtkı Koçman University Scientific Research Projects under grant number 016‐061.

References

  1. John J. Guiry, Pepijn van de Ven and John Nelson. Multi‐Sensor Fusion for Enhanced Contextual Awareness of Everyday Activities with Ubiquitous Devices. Sensors. 2014;14(3):5687–5701. DOI: 10.3390/s140305687
  2. Ensar Arif Sağbaş and Serkan Ballı. Transportation Mode Detection by Using Smartphone Sensors and Machine Learning. Pamukkale University Journal of Engineering Sciences. 2016;22(5):376–383. DOI: 10.5505/pajes.2015.63308
  3. Muhammad Shoaib, Stephan Bosch, Hans Scholten, Paul J. M. Havinga and Ozlem Durmaz Incel. Towards Detection of Bad Habits by Fusing Smartphone and Smartwatch Sensors. In: Pervasive Computing and Communication Workshops; 23–27 March; St. Louis, MO. IEEE; 2015. pp. 591–596. DOI: 10.1109/PERCOMW.2015.7134104
  4. Bobak Jack Mortazavi, Mohammad Pourhomayoun, Gabriel Alsheikh, Nabil Alshurafa, Sunghoon Ivan Lee and Majid Sarrafzadeh. Determining the Single Best Axis for Exercise Repetition Recognition and Counting on Smartwatches. In: Wearable and Implantable Body Sensor Networks; 16–19 June; Zurich. IEEE; 2014. pp. 33–38. DOI: 10.1109/BSN.2014.21
  5. Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani. An Introduction to Statistical Learning. New York: Springer; 2013. 426 p. DOI: 10.1007/978‐1‐4614‐7138‐7
  6. He Wang, Ted Tsung‐Te Lai and Romit Roy Choudhury. MoLe: Motion Leaks through Smartwatch Sensors. In: Conference on Mobile Computing and Networking; 7–11 September; Paris, France. ACM; 2015. pp. 155–166. DOI: 10.1145/2789168.2790121
  7. Fernando Ginez da Silva and Elisabete Galeazzo. Accelerometer Based Intelligent System for Human Movement Recognition. In: Advances in Sensors and Interfaces; 13–14 June; Bari. IEEE; 2013. pp. 20–24. DOI: 10.1109/IWASI.2013.6576063
  8. Saisakul Chernbumroong, Anthony S. Atkins and Hongnian Yu. Activity Classification Using a Single Wrist‐Worn Accelerometer. In: Software, Knowledge Information, Industrial Management and Applications; 8–11 September; Benevento. IEEE; 2011. pp. 1–6. DOI: 10.1109/SKIMA.2011.6089975
  9. Philipp M. Scholl and Kristof van Laerhoven. A Feasibility Study of Wrist‐Worn Accelerometer Based Detection of Smoking Habits. In: Innovative Mobile and Internet Services in Ubiquitous Computing; 4–6 July; Palermo. IEEE; 2012. pp. 886–891. DOI: 10.1109/IMIS.2012.96
  10. Yujie Dong, Jenna Scisco, Mike Wilson, Eric Muth and Adam Hoover. Detecting Periods of Eating During Free‐Living by Tracking Wrist Motion. Journal of Biomedical and Health Informatics. 2014;18(4):1253–1260. DOI: 10.1109/JBHI.2013.2282471
  11. Raul I. Ramos‐Garcia and Adam W. Hoover. A Study of Temporal Action Sequencing During Consumption of a Meal. In: Bioinformatics, Computational Biology and Biomedical Informatics; 22–25 September; Washington DC. New York: ACM; 2013. 68 p. DOI: 10.1145/2506583.2506596
  12. Stewart G. Trost, Yonglei Zheng and Weng‐Keen Wong. Machine Learning for Activity Recognition: Hip versus Wrist Data. Physiological Measurement. 2014;35(11):2183.
  13. Adil Mehmood Khan, Ali Tufail, Asad Masood Khattak and Teemu H. Laine. Activity Recognition on Smartphones via Sensor-Fusion and KDA-Based SVMs. International Journal of Distributed Sensor Networks. 2014;10(5):1–14. DOI: 10.1155/2014/503291
  14. Farzin Dadashi, Arash Arami, Florent Crettenand, Gregoire P. Millet, John Komar, Ludovic Seifert and Kamiar Aminian. A Hidden Markov Model of the Breaststroke Swimming Temporal Phases Using Wearable Inertial Measurement Units. In: Body Sensor Networks; 6–9 May; Cambridge, MA, USA. IEEE; 2013. pp. 1–6. DOI: 10.1109/BSN.2013.6575461
  15. Juha Parkka, Mikka Ermes, Kari Antila, Mark Van Gils, Ari Manttari and Heikki Nieminen. Estimating Intensity of Physical Activity: A Comparison of Wearable Accelerometer and Gyro Sensors and 3 Sensor Locations. In: Engineering in Medicine and Biology Society; 22–26 August; Lyon. IEEE; 2007. pp. 1511–1514. DOI: 10.1109/IEMBS.2007.4352588
  16. Sheng Shen, He Wang and Romit Roy Choudhury. I Am a Smartwatch and I Can Track My User's Arm. In: Mobile Systems, Applications and Services; 25–30 June; Singapore. Singapore: ACM; 2016. DOI: 10.1145/2906388.2906407
  17. Gerald Bieber and Christian Peter. Using Physical Activity for User Behavior Analysis. In: Pervasive Technologies Related to Assistive Environments; 15–19 July; Athens, Greece. New York: ACM; 2008. DOI: 10.1145/1389586.1389692
  18. Ling Bao and Stephen S. Intille. Activity Recognition from User-Annotated Acceleration Data. In: Pervasive Computing; 18–23 April; Linz. Berlin: Springer-Verlag; 2004. pp. 1–17.
  19. Ki‐Hoon Kim, Mi‐Young Jeon, Ju‐Young Lee, Ji‐Hoon Jeong and Gu‐Min Jeong. A Study on the App Development Using Sensor Signals from Smartphone and Smartwatch. Advanced Science and Technology Letters. 2014;62:66–69. DOI: 10.14257/astl.2014.62.17
  20. George H. John and Pat Langley. Estimating Continuous Distributions in Bayesian Classifiers. In: Uncertainty in Artificial Intelligence; 18–20 August; Quebec. San Francisco: Morgan Kaufmann; 1995. pp. 338–345.
  21. Ludmila I. Kuncheva. Combining Pattern Classifiers: Methods and Algorithms. New Jersey: John Wiley & Sons, Inc.; 2004. 350 p.
  22. Bjørn Kåre Alsberg, Royston Goodacre, Jem J. Rowland and Douglas Kell. Classification of Pyrolysis Mass Spectra by Fuzzy Multivariate Rule Induction: Comparison with Regression, K-Nearest Neighbour, Neural and Decision-Tree Methods. Analytica Chimica Acta. 1997;348(1):389–407. DOI: 10.1016/S0003-2670(97)00064-0
  23. Chao‐Ying Joanne Peng, Kuk Lida Lee and Gary M. Ingersoll. An Introduction to Logistic Regression Analysis and Reporting. The Journal of Educational Research. 2002;96(1):3–14. DOI: 10.1080/00220670209598786
  24. Irad Ben-Gal. Bayesian Networks. In: Fabrizio Ruggeri, Ron Kenett and Frederick Faltin, editors. Encyclopedia of Statistics in Quality and Reliability. Chichester, UK: Wiley; 2007.
  25. Rahul Ravindran, Riya Suchdev, Yash Tanna and Sridhar Swamy. Context Aware and Pattern Oriented Machine Learning Framework (CAPOMF) for Android. In: Advances in Engineering and Technology Research; 1–2 August; Unnao. IEEE; 2014. pp. 1–7. DOI: 10.1109/ICAETR.2014.7012912
  26. Android. Sensors Overview [Internet]. Available from: https://developer.android.com/guide/topics/sensors/sensors_overview.html [Accessed: 15.06.2016]
  27. Motorola. Moto 360 [Internet]. Available from: http://www.motorola.com/us/products/moto‐360 [Accessed: 15.06.2016]
  28. Stephen R. Garner. WEKA: The Waikato Environment for Knowledge Analysis. In: Computer Science Research Students Conference; April; New Zealand; 1995. pp. 57–64.
  29. Aytuğ Onan, Serdar Korukoğlu and Hasan Bulut. Ensemble of Keyword Extraction Methods and Classifiers in Text Classification. Expert Systems with Applications. 2016;57:232–247.
  30. Root Mean Square Error (RMSE) [Internet]. Available from: http://www.ctec.ufal.br/professor/crfj/Graduacao/MSH/Model%20evaluation%20methods.doc [Accessed: 09.09.2016]
  31. Calculating ROC Curves and AUC Scores [Internet]. Available from: http://www.cs.ru.nl/~tomh/onderwijs/dm/dm_files/roc_auc.pdf [Accessed: 09.09.2016]
