Abstract
We study the problem of Empirical Risk Minimization (ERM) with (smooth) non-convex loss functions under the differential-privacy (DP) model. We first study the expected excess empirical (or population) risk, the utility measure primarily used to assess quality in the convex case. Specifically, we show that the excess empirical (or population) risk can be upper bounded by Õ(√(d log(1/δ)) / (nε log n)) in the (ε, δ)-DP setting, where n is the data size and d is the dimensionality of the space. The 1/log n factor in the empirical risk bound can be further improved to 1/n^{Ω(1)} (when d is a constant) by a highly non-trivial analysis of the time-average error. To obtain more efficient solutions, we also consider the connection between achieving differential privacy and finding an approximate local minimum. In particular, we show that when the data size n is large enough, there are (ε, δ)-DP algorithms which, with high probability, find an approximate local minimum of the empirical risk in both the constrained and unconstrained settings. These results indicate that one can escape saddle points privately.
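As a concrete illustration of the gradient-perturbation idea underlying such (ε, δ)-DP guarantees, the sketch below runs gradient descent on the empirical risk with Gaussian noise added at every step. This is a minimal sketch, not the paper's algorithm: the names (`dp_gradient_descent`, `grad_fn`), the step count `T`, the step size `eta`, and the noise-calibration constants are illustrative assumptions; the noise scale follows the standard Gaussian-mechanism-plus-advanced-composition calibration under the assumption that per-example gradients have ℓ2 norm at most L.

```python
# A minimal sketch of (eps, delta)-DP noisy gradient descent for ERM.
# Assumptions (not from the paper): per-example gradients have L2 norm at
# most L, so the average gradient has sensitivity 2L/n; the per-step noise
# scale uses indicative constants from advanced composition.
import numpy as np

def dp_gradient_descent(grad_fn, theta0, n, L, eps, delta, T=100, eta=0.1):
    """grad_fn(theta) -> average gradient of the empirical risk over n examples."""
    theta = np.array(theta0, dtype=float)
    # Per-step Gaussian noise so that T adaptive steps compose to (eps, delta)-DP.
    sigma = (2.0 * L / n) * np.sqrt(T * np.log(1.0 / delta)) / eps
    rng = np.random.default_rng(0)
    for _ in range(T):
        theta = theta - eta * (grad_fn(theta) + rng.normal(0.0, sigma, size=theta.shape))
    return theta

# Toy usage: a smooth non-convex empirical risk f(x) = mean_i (x^2 - a_i)^2 in 1-D.
data = np.array([1.0, 1.2, 0.8])
grad = lambda x: np.mean(4.0 * x * (x**2 - data))  # average gradient over the data
print(dp_gradient_descent(grad, np.array([2.0]), n=len(data), L=50.0, eps=1.0, delta=1e-5))
```

With noise of this magnitude, the iterates can still decrease the empirical risk when n is large, which is the regime in which the abstract's local-minimum guarantees apply.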
Original language | English (US) |
---|---|
Title of host publication | 36th International Conference on Machine Learning, ICML 2019 |
Publisher | International Machine Learning Society (IMLS) |
Pages | 11334-11343 |
Number of pages | 10 |
ISBN (Print) | 9781510886988 |
State | Published - Jan 1 2019 |
Externally published | Yes |