Robust high dimensional expectation maximization algorithm via trimmed hard thresholding

Di Wang, Xiangyu Guo, Shi Li, Jinhui Xu

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

In this paper, we study the problem of estimating latent variable models from arbitrarily corrupted samples in high-dimensional space (i.e., d ≫ n), where the underlying parameter is assumed to be sparse. Specifically, we propose a method called Trimmed (Gradient) Expectation Maximization, which adds a gradient-trimming step to the Expectation step (E-step) and a hard-thresholding step to the Maximization step (M-step). We show that, under some mild assumptions and with an appropriate initialization, the algorithm is robust to corruption and converges geometrically to the (near-)optimal statistical rate when the fraction of corrupted samples ϵ is bounded by Õ(1/√n). Moreover, we apply our general framework to three canonical models: mixture of Gaussians, mixture of regressions, and linear regression with missing covariates. Our theory is supported by thorough numerical results.
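As a concrete illustration, the following is a minimal sketch of Trimmed (Gradient) EM for the symmetric two-component mixture of Gaussians, one of the three canonical models above: the E-step computes per-sample gradients of the Q-function and aggregates them with a coordinate-wise trimmed mean, and the M-step performs a gradient update followed by hard thresholding to the top s coordinates. The tanh soft-label rule is the standard gradient-EM update for this model; the step size, trimming level, and warm-start initialization here are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v and zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

def trimmed_mean(G, k):
    """Coordinate-wise trimmed mean: in each column of the n-by-d matrix G,
    drop the k smallest and k largest entries, then average the remainder."""
    n = G.shape[0]
    return np.sort(G, axis=0)[k : n - k].mean(axis=0)

def trimmed_gradient_em(Y, beta0, s, eps, sigma2=1.0, eta=0.5, n_iters=100):
    """Sketch of trimmed-gradient EM for a symmetric two-component GMM with
    means +/- beta and covariance sigma2 * I; the rows of Y are samples."""
    n, _ = Y.shape
    k = int(np.ceil(eps * n))      # samples trimmed from each tail
    beta = beta0.copy()
    for _ in range(n_iters):
        # E-step: soft labels E[z_i | y_i] = tanh(<beta, y_i> / sigma2)
        w = np.tanh(Y @ beta / sigma2)
        # per-sample gradients of the Q-function at the current beta
        G = w[:, None] * Y - beta
        # trimming step: robust (trimmed-mean) aggregation of the gradients
        g = trimmed_mean(G, k)
        # M-step: gradient update followed by hard thresholding to s entries
        beta = hard_threshold(beta + eta * g, s)
    return beta

# Toy usage: sparse beta*, d >> n, with a small fraction of gross outliers.
rng = np.random.default_rng(0)
n, d, s = 500, 2000, 10
beta_star = np.zeros(d)
beta_star[:s] = 2.0
z = rng.choice([-1.0, 1.0], size=n)
Y = z[:, None] * beta_star + rng.standard_normal((n, d))
Y[:15] = 50.0                      # adversarially corrupt 3% of the rows
beta0 = hard_threshold(beta_star + 0.5 * rng.standard_normal(d), s)  # warm start
beta_hat = trimmed_gradient_em(Y, beta0, s=s, eps=0.05)
print(np.linalg.norm(beta_hat - beta_star))
```

Because the mixture is symmetric, the method can equally converge to -beta*; the warm start above sidesteps that sign ambiguity, standing in for the appropriate initialization the theory assumes.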
Original language: English (US)
Pages (from-to): 2283-2311
Number of pages: 29
Journal: Machine Learning
Volume: 109
Issue number: 12
State: Published - Dec 1, 2020
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-15

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software

