Differentially Private Empirical Risk Minimization with Non-convex Loss Functions

Di Wang, Changyou Chen, Jinhui Xu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

12 Scopus citations

Abstract

We study the problem of Empirical Risk Minimization (ERM) with (smooth) non-convex loss functions under the differential privacy (DP) model. We first study the expected excess empirical (or population) risk, which was previously used as the utility measure for convex loss functions. Specifically, we show that the excess empirical (or population) risk can be upper bounded by (Equation presented) in the (ε, δ)-DP setting, where n is the data size and d is the dimensionality of the space. The 1/log n term in the empirical risk bound can be further improved to 1/n^{Ω(1)} (when d is a constant) by a highly non-trivial analysis of the time-average error. To obtain more efficient solutions, we also consider the connection between achieving differential privacy and finding an approximate local minimum. In particular, we show that when the sample size n is large enough, there are (ε, δ)-DP algorithms that can find an approximate local minimum of the empirical risk with high probability, in both the constrained and unconstrained settings. These results indicate that one can escape saddle points privately.
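To illustrate the setting, the following is a minimal sketch of gradient perturbation (DP-GD), the standard template in this line of work: each full-gradient step is released with Gaussian noise calibrated to the gradient's sensitivity. The noise scale, the `T`, `eta`, and Lipschitz-constant parameters, and the simplified composition constants are illustrative assumptions, not the paper's exact algorithm or bounds.

```python
import numpy as np

def dp_gd(grad_fn, theta0, n, epsilon, delta, L=1.0, T=100, eta=0.1, rng=None):
    """Sketch of differentially private gradient descent via Gaussian
    gradient perturbation (illustrative; constants simplified).

    grad_fn: gradient of the empirical risk (average over n samples),
             assumed L-Lipschitz in each sample, so each step has
             sensitivity roughly L/n.
    """
    rng = np.random.default_rng(rng)
    d = theta0.shape[0]
    # Noise std for T adaptive gradient releases under (eps, delta)-DP,
    # following the usual composition-based calibration (constants omitted).
    sigma = L * np.sqrt(T * np.log(1.0 / delta)) / (n * epsilon)
    theta = theta0.copy()
    for _ in range(T):
        # Perturb the full gradient with isotropic Gaussian noise.
        g = grad_fn(theta) + rng.normal(0.0, sigma, size=d)
        theta = theta - eta * g
    return theta
```

With smooth non-convex losses, running such noisy gradient steps (or the noisy SGD variants the paper analyzes) converges to a point with small gradient norm rather than a global minimum, which is why the utility is stated via excess risk bounds and approximate local minima.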
Original language: English (US)
Title of host publication: 36th International Conference on Machine Learning, ICML 2019
Publisher: International Machine Learning Society (IMLS)
Pages: 11334-11343
Number of pages: 10
ISBN (Print): 9781510886988
State: Published - Jan 1 2019
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-15
