Abstract
Current benchmark reports of classification algorithms generally cover common classifiers and their variants but omit many algorithms introduced in recent years. Moreover, important properties such as the dependency on the number of classes and features, as well as CPU running time, are typically not examined. In this paper, we carry out a comparative empirical study of both established and more recently proposed classifiers on 71 data sets originating from different domains and publicly available in the UCI and KEEL repositories. The list of 11 algorithms studied includes Extreme Learning Machine (ELM), Sparse Representation based Classification (SRC), and Deep Learning (DL), which have not been thoroughly investigated in existing comparative studies. We find that Stochastic Gradient Boosting Trees (GBDT) matches or exceeds the prediction performance of Support Vector Machines (SVM) and Random Forests (RF), while being the fastest algorithm in terms of prediction efficiency. ELM also yields good accuracy results, ranking in the top five alongside GBDT, RF, SVM, and C4.5, but its performance varies widely across data sets. Unsurprisingly, the top accuracy performers have average or slow training times. DL is the worst performer in terms of accuracy but the second fastest in prediction efficiency. SRC shows good accuracy performance but is the slowest classifier in both training and testing.
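For readers who want to run a comparison of this kind themselves, the sketch below benchmarks a few of the studied classifier families (GBDT, RF, SVM) for accuracy, training time, and prediction time using scikit-learn. This is an illustrative setup only, not the paper's protocol: the data set, train/test split, and default hyperparameters are all assumptions made for the example.

```python
# Minimal benchmarking sketch, assuming a scikit-learn environment.
# NOT the authors' experimental protocol: data set, split, and default
# hyperparameters here are illustrative assumptions.
import time

from sklearn.datasets import load_digits
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# A stand-in multi-class data set; the paper uses 71 UCI/KEEL data sets.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

classifiers = {
    "GBDT": GradientBoostingClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
}

for name, clf in classifiers.items():
    t0 = time.perf_counter()
    clf.fit(X_train, y_train)        # training time
    t1 = time.perf_counter()
    y_pred = clf.predict(X_test)     # prediction (testing) time
    t2 = time.perf_counter()
    print(f"{name}: accuracy={accuracy_score(y_test, y_pred):.3f}  "
          f"train={t1 - t0:.2f}s  predict={t2 - t1:.2f}s")
```

On any single data set the relative ordering of accuracy and timing can differ from the paper's aggregate ranking over 71 data sets, since both depend strongly on the number of classes, the number of features, and the implementation used.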
| Original language | English (US) |
|---|---|
| Pages (from-to) | 128-150 |
| Number of pages | 23 |
| Journal | Expert Systems with Applications |
| Volume | 82 |
| DOIs | |
| State | Published - Apr 5 2017 |
Bibliographical note
KAUST Repository Item: Exported on 2020-10-01. Acknowledgements: This work is partially funded by the National Science Foundation of China (NSFC) under Grant nos. 41401466 and 61300215, as well as the Henan Science and Technology Project under Grant no. 132102210188. It is also supported by Henan University under Grant nos. xxjc20140005 and 2013YBZR014. The authors acknowledge the help of Ms. Jingjun Bi in reorganizing the experimental results.