Recurrent attention walk for semi-supervised classification

Uchenna Thankgod Akujuobi, Qiannan Zhang, Han Yufei, Xiangliang Zhang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

6 Scopus citations

Abstract

In this paper, we study graph-based semi-supervised learning for classifying nodes in attributed networks, where the nodes and edges possess content information. Recent approaches such as graph convolutional networks and attention mechanisms have been proposed to aggregate information from first-order neighbors and attend to the most relevant ones. However, considering all neighbors without prior differentiation is costly, especially in memory. We propose to explore the neighborhood in a reinforcement learning setting and find a walk path well-tuned for classifying the unlabelled target nodes. We let an agent (for the node classification task) walk over the graph and decide where to move in order to maximize classification accuracy. We formulate the graph walk as a partially observable Markov decision process (POMDP). The proposed method works flexibly in both transductive and inductive settings. Extensive experiments on four datasets demonstrate that our proposed method outperforms several state-of-the-art methods. Several case studies also illustrate the meaningful movement trajectories made by the agent.
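The core idea above, an agent that walks the graph node-by-node and carries a running context used to pick the next neighbor, can be illustrated with a toy sketch. This is not the paper's recurrent POMDP agent (which is trained with reinforcement learning); it is a minimal greedy stand-in, assuming a dense adjacency matrix `adj`, a node-feature matrix `feats`, and a simple dot-product attention score, all hypothetical names:

```python
import numpy as np

def attention_walk(adj, feats, start, steps=3):
    """Toy greedy attention walk (illustrative only, not the paper's agent).

    At each step the agent moves to the neighbor whose features score
    highest (dot product) against a running context vector, then blends
    that neighbor's features into the context, loosely mimicking a
    recurrent state update."""
    path = [start]
    context = feats[start].astype(float)
    node = start
    for _ in range(steps):
        nbrs = np.flatnonzero(adj[node])      # indices of current neighbors
        if nbrs.size == 0:
            break                              # dead end: stop the walk
        scores = feats[nbrs] @ context         # attention-style scores
        node = int(nbrs[np.argmax(scores)])    # greedy move (the RL agent
                                               # would sample from a policy)
        context = 0.5 * context + 0.5 * feats[node]  # update running state
        path.append(node)
    return path, context

# Tiny 4-node cycle graph with one-hot node features.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]])
feats = np.eye(4)
path, ctx = attention_walk(adj, feats, start=0)
```

The final `context` could then feed a classifier for the target node; the paper instead learns where to move via a reward tied to classification accuracy, rather than this fixed greedy rule.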
Original language: English (US)
Title of host publication: Proceedings of the 13th International Conference on Web Search and Data Mining
Publisher: ACM
Pages: 16-24
Number of pages: 9
ISBN (Print): 9781450368223
DOIs
State: Published - Jan 22 2020

Bibliographical note

KAUST Repository Item: Exported on 2020-10-01
Acknowledged KAUST grant number(s): FCC/1/1976-19-01
Acknowledgements: This work was partially supported and funded by King Abdullah University of Science and Technology (KAUST), under award number FCC/1/1976-19-01, and NSFC No. 61828302.
