Abstract
Herein, theoretical results are presented to provide insights into the effectiveness of subsampling methods in reducing the number of instances required in the training stage when applying support vector machines (SVMs) for classification in big data scenarios. Our main theorem states that, under some conditions, there exists with high probability a feasible solution to the SVM problem for a randomly chosen training subsample, with the corresponding classifier as close as desired (in terms of classification error) to the classifier obtained from training with the complete dataset. The main theorem also reflects the curse of dimensionality in that the assumptions made for the results are much more restrictive in large dimensions; thus, subsampling methods will perform better in lower dimensions. Additionally, we propose an importance sampling and bagging subsampling method that expands the nearest-neighbors ideas presented in previous work. On several benchmark examples, the proposed method solves the SVM problem faster than available state-of-the-art techniques without a significant loss in accuracy.
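To make the general approach concrete, the following is a minimal sketch of subsampling-plus-bagging for SVM training in the spirit of the abstract. It is not the paper's exact method: the function names (`boundary_weights`, `bagged_subsample_svm`), the neighbor-disagreement weighting, and all parameter values are hypothetical illustrations of an importance-sampling scheme based on nearest neighbors, assuming scikit-learn and NumPy.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC


def boundary_weights(X, y, k=10):
    """Hypothetical importance weights: points whose k nearest neighbors
    disagree more with their own label (i.e., points near the decision
    boundary) receive larger sampling probability."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    # Drop each point itself (first neighbor) and count class disagreements.
    disagree = (y[idx[:, 1:]] != y[:, None]).mean(axis=1)
    w = disagree + 1e-3  # small floor so every point remains reachable
    return w / w.sum()


def bagged_subsample_svm(X, y, n_models=5, sample_size=500, k=10, seed=0):
    """Train several SVMs, each on an importance-weighted random subsample."""
    rng = np.random.default_rng(seed)
    p = boundary_weights(X, y, k)
    models = []
    for _ in range(n_models):
        idx = rng.choice(len(X), size=sample_size, replace=False, p=p)
        models.append(SVC(kernel="rbf").fit(X[idx], y[idx]))
    return models


def predict_majority(models, X):
    """Bagging step: majority vote over the subsample-trained SVMs."""
    votes = np.stack([m.predict(X) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)


if __name__ == "__main__":
    X, y = make_classification(n_samples=20000, n_features=10, random_state=0)
    models = bagged_subsample_svm(X, y)
    acc = (predict_majority(models, X) == y).mean()
    print(f"Training accuracy of bagged subsample SVMs: {acc:.3f}")
```

Each SVM here is fit on only a few hundred points, which is where the computational savings over training on the full dataset would come from under the abstract's assumptions.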
| Original language | English (US) |
|---|---|
| Pages (from-to) | 3776 |
| Journal | Mathematics |
| Volume | 10 |
| Issue number | 20 |
| DOIs | |
| State | Published - Oct 13 2022 |
Bibliographical note
KAUST Repository Item: Exported on 2022-11-07. Acknowledgements: This research has been supported by King Abdullah University of Science and Technology (KAUST). This work was partly performed while RB visited the Departamento de Matemáticas, Universidad de los Andes, Colombia, as a visiting graduate student supported by the Mixed Scholarship CONACYT, Mexico. Their hospitality and support are gratefully acknowledged. The work of AJQ was supported, in part, by the STAI program of Universidad de los Andes. We would like to thank the Science Faculty at Universidad de los Andes. The support of King Abdullah University of Science and Technology is also gratefully acknowledged.