Parallel majorization minimization with dynamically restricted domains for nonconvex optimization

Yan Kaganovsky, Ikenna Odinaka, David Carlson, Lawrence Carin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


We propose an optimization framework for nonconvex problems based on majorization-minimization that is particularly well-suited for parallel computing. It reduces the optimization of a high-dimensional nonconvex objective function to successive optimizations of locally tight and convex upper bounds that are additively separable into low-dimensional objectives. The original problem is thus broken into simpler parallel sub-problems while still guaranteeing the monotonic reduction of the original objective function and convergence to a local minimum. Due to the low dimensionality of each sub-problem, second-order optimization methods become computationally feasible and can be used to accelerate convergence. In addition, the upper bound can be restricted to a local dynamic convex domain, so that it is better matched to the local curvature of the objective function, resulting in accelerated convergence.
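The core idea of majorization-minimization with a separable surrogate can be illustrated with a minimal sketch. The objective, bound constant, and step rule below are illustrative assumptions, not the paper's method: we take a simple nonconvex function whose per-coordinate curvature is bounded by L = 2, and majorize it at each iterate with the standard separable quadratic upper bound, so each coordinate's sub-problem decouples and could be solved in parallel.

```python
import numpy as np

# Hypothetical nonconvex objective for illustration (not from the paper):
#   f(x) = sum_i (x_i^2 / 2 - cos(x_i))
# Its Hessian is diag(1 + cos(x_i)), bounded above by L = 2, so the
# separable quadratic g(z | x) = f(x) + grad_f(x)^T (z - x) + (L/2)||z - x||^2
# is a locally tight convex upper bound on f.
def f(x):
    return np.sum(x**2 / 2.0 - np.cos(x))

def grad_f(x):
    return x + np.sin(x)

L = 2.0  # global curvature bound used by the majorizer

def mm_step(x):
    # Minimizing g(z | x) decouples across coordinates; each 1-D
    # sub-problem has the closed-form solution z_i = x_i - grad_f(x)_i / L
    # and could be dispatched to a separate worker.
    return x - grad_f(x) / L

x = np.full(5, 3.0)
values = [f(x)]
for _ in range(50):
    x = mm_step(x)
    values.append(f(x))

# Majorization-minimization guarantees monotone descent of the objective.
assert all(values[k + 1] <= values[k] + 1e-12 for k in range(len(values) - 1))
```

Because g(z | x) upper-bounds f and is tight at x, minimizing g can never increase f, which is the monotonicity property the abstract refers to; the paper's contribution is in building tighter, dynamically domain-restricted bounds than the fixed global quadratic used here.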
Original language: English (US)
Title of host publication: Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016
Number of pages: 9
State: Published - Jan 1 2016
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2021-02-09

