LEARNING SPARSE GRAPHS UNDER SMOOTHNESS PRIOR

Sundeep Prabhakar Chepuri, Sijia Liu, Geert Leus, Alfred O. Hero

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

101 Scopus citations

Abstract

In this paper, we are interested in learning the underlying graph structure behind training data. Solving this basic problem is essential for carrying out any graph signal processing or machine learning task. To this end, we assume that the data is smooth with respect to the graph topology, and we parameterize the graph topology using an edge sampling function. That is, the graph Laplacian is expressed in terms of a sparse edge selection vector, which provides an explicit handle to control the sparsity level of the graph. We solve the sparse graph learning problem given some training data in both the noiseless and noisy settings. Given the true smooth data, the posed sparse graph learning problem can be solved optimally via simple rank ordering. Given noisy data, we show that the joint sparse graph learning and denoising problem reduces to designing only the sparse edge selection vector, which can be done using convex optimization.
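For the noiseless setting, the abstract states that the sparse graph learning problem is solved optimally by simple rank ordering. A minimal sketch of that idea in NumPy follows, assuming unit edge weights on an undirected graph; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def learn_sparse_graph(X, K):
    """Hedged sketch of the noiseless rank-ordering solution.

    X : (N, T) array, each column a smooth graph signal on N nodes.
    K : number of edges to keep (the sparsity level).
    Returns the selected edges and the resulting graph Laplacian.
    """
    N = X.shape[0]
    # Smoothness cost of each candidate edge (i, j): ||x_i - x_j||^2,
    # accumulated over the T training signals.
    edges = [(i, j) for i in range(N) for j in range(i + 1, N)]
    costs = np.array([np.sum((X[i] - X[j]) ** 2) for i, j in edges])
    # Rank ordering: with a binary edge selection vector, tr(X^T L X)
    # is a sum of per-edge costs, so the K cheapest edges are optimal.
    keep = np.argsort(costs)[:K]
    selected = [edges[k] for k in keep]
    # Assemble the combinatorial Laplacian L = D - A from the selection.
    L = np.zeros((N, N))
    for i, j in selected:
        L[i, j] = L[j, i] = -1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    return selected, L
```

The key point is that the smoothness objective decouples across candidate edges, so no iterative optimization is needed in the noiseless case; the noisy case, by contrast, couples the variables and is handled via convex optimization in the paper.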
Original language: English (US)
Title of host publication: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Publisher: IEEE
Pages: 6508-6512
Number of pages: 5
ISBN (Print): 9781509041176
DOIs
State: Published - 2017
Externally published: Yes

Bibliographical note

KAUST Repository Item: Exported on 2022-06-23
Acknowledged KAUST grant number(s): OSR-2015-Sensors-2700
Acknowledgements: This work is supported in part by the KAUST-MIT-TUD consortium under grant OSR-2015-Sensors-2700 and the US Army Research Office under grant W911NF-15-1-0479.
This publication acknowledges KAUST support, but has no KAUST affiliated authors.
