Abstract
In high dimension, low sample size (HDLSS) settings, classifiers based on Euclidean distances, such as the nearest neighbor classifier and the average distance classifier, perform quite poorly when differences between the locations of the underlying populations are masked by scale differences. To rectify this problem, several modifications of these classifiers have been proposed in the literature. However, existing methods are confined to location and scale differences only, and they often fail to discriminate among populations that differ beyond the first two moments. In this article, we propose some simple transformations of these classifiers that improve performance even when the underlying populations have the same location and scale. We further propose a generalization of these classifiers based on the idea of grouping variables. The high-dimensional behavior of the proposed classifiers is studied theoretically. Numerical experiments on a variety of simulated examples, together with an extensive analysis of benchmark data sets from three different databases, demonstrate the advantages of the proposed methods.
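The failure mode described above can be seen in a minimal sketch (not the authors' method; all names and parameters here are illustrative): a plain average distance classifier assigns a point to the class whose training points are, on average, closest in Euclidean distance. When two populations share the same mean but differ in scale, the larger-scale class's own average distance dominates in high dimension, so even its own points are assigned to the smaller-scale class.

```python
import numpy as np

def avg_dist_classify(x, class_samples):
    """Assign x to the class with the smallest average Euclidean distance
    from x to that class's training points (plain, unadjusted rule)."""
    avg = [np.mean(np.linalg.norm(S - x, axis=1)) for S in class_samples]
    return int(np.argmin(avg))

rng = np.random.default_rng(0)
d, n = 500, 20                               # HDLSS-style: dimension >> sample size
wide = rng.normal(0.0, 2.0, size=(n, d))     # class 0: mean 0, larger scale
narrow = rng.normal(0.0, 1.0, size=(n, d))   # class 1: mean 0, smaller scale

# Test points drawn from the larger-scale class 0.
test_wide = rng.normal(0.0, 2.0, size=(10, d))
preds = [avg_dist_classify(x, [wide, narrow]) for x in test_wide]
# Distances concentrate: roughly sqrt(d*(4+4)) to class 0 vs sqrt(d*(4+1))
# to class 1, so every class-0 point is misassigned to class 1.
```

Scale-adjusted variants subtract each class's internal spread from these average distances, which is the kind of correction the modifications discussed in the abstract build on.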
| Original language | English (US) |
|---|---|
| Journal | Journal of Machine Learning Research |
| Volume | 23 |
| State | Published - 2022 |
Bibliographical note
Funding Information: The first and third authors have been partially supported by the DST-SERB grant ECR/2017/000374. The authors would like to thank the Action Editor for his encouragement, and the three anonymous reviewers for their constructive comments and suggestions that substantially improved the paper.
Publisher Copyright:
© 2022 Sarbojit Roy, Soham Sarkar, Subhajit Dutta and Anil K. Ghosh.
Keywords
- Block covariance structure
- Convergence in probability
- HDLSS asymptotics
- Hierarchical clustering
- Mean absolute difference of distances
- Robustness
- Scale-adjusted average distances
ASJC Scopus subject areas
- Control and Systems Engineering
- Software
- Statistics and Probability
- Artificial Intelligence