Abstract
In this paper, we study graph-based semi-supervised learning for classifying nodes in attributed networks, where nodes and edges carry content information. Recent approaches such as graph convolutional networks and attention mechanisms aggregate first-order neighbors and incorporate the most relevant ones. However, it is costly (especially in memory) to consider all neighbors without prior differentiation. We propose to explore the neighborhood in a reinforcement learning setting and find a walk path well tuned for classifying the unlabelled target nodes. We let an agent (for the node classification task) walk over the graph and decide where to move in order to maximize classification accuracy. We define the graph walk as a partially observable Markov decision process (POMDP). The proposed method is flexible enough to work in both transductive and inductive settings. Extensive experiments on four datasets demonstrate that our proposed method outperforms several state-of-the-art methods. Several case studies also illustrate the meaningful movement trajectories made by the agent.
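To make the POMDP framing concrete, the following is a minimal sketch (not the authors' implementation) of an agent that walks over a toy attributed graph, observes only the current node's features, and is rewarded when the aggregated walk classifies the target node correctly. The names `walk_episode` and `observe`, the sum aggregation, the reward definition, and the tabular Q-learning update are all assumptions made for illustration; the paper's actual observation space, policy, and training procedure may differ.

```python
"""Illustrative POMDP-style graph walk for node classification (sketch only)."""
import random
from collections import defaultdict

# Toy attributed graph: node -> feature vector, node -> label, adjacency lists.
features = {0: (1.0, 0.0), 1: (0.9, 0.1), 2: (0.1, 0.9), 3: (0.0, 1.0)}
labels   = {0: 0, 1: 0, 2: 1, 3: 1}
adj      = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}

def observe(node):
    """Partial observation: only the current node's features are visible."""
    return features[node]

def classify(aggregated):
    """Stand-in classifier: arg-max over the aggregated feature vector."""
    return 0 if aggregated[0] >= aggregated[1] else 1

Q = defaultdict(float)            # Q[(node, next_node)] -> estimated return
alpha, gamma, eps = 0.5, 0.9, 0.2

def walk_episode(start, steps=3):
    """Walk from `start`, aggregate observations, reward a correct prediction."""
    node, agg = start, list(observe(start))
    trajectory = []
    for _ in range(steps):
        nbrs = adj[node]
        # Epsilon-greedy action: move to one of the current node's neighbors.
        nxt = random.choice(nbrs) if random.random() < eps else \
              max(nbrs, key=lambda n: Q[(node, n)])
        trajectory.append((node, nxt))
        agg = [a + o for a, o in zip(agg, observe(nxt))]  # simple sum aggregation
        node = nxt
    # Terminal reward: +1 if the aggregated walk classifies the target correctly.
    reward = 1.0 if classify(agg) == labels[start] else -1.0
    # Back up the discounted reward along the trajectory (tabular update).
    g = reward
    for s, a in reversed(trajectory):
        Q[(s, a)] += alpha * (g - Q[(s, a)])
        g *= gamma
    return reward

for _ in range(200):
    walk_episode(start=random.choice(list(adj)))
print({k: round(v, 2) for k, v in Q.items()})
```

After a few hundred episodes the learned Q-values favor moves toward neighbors whose features reinforce the target node's class, which is the intuition behind letting the agent choose its own walk path rather than aggregating all neighbors indiscriminately.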
| Original language | English (US) |
|---|---|
| Title of host publication | Proceedings of the 13th International Conference on Web Search and Data Mining |
| Publisher | ACM |
| Pages | 16-24 |
| Number of pages | 9 |
| ISBN (Print) | 9781450368223 |
| DOIs | |
| State | Published - Jan 22 2020 |
Bibliographical note
KAUST Repository Item: Exported on 2020-10-01
Acknowledged KAUST grant number(s): FCC/1/1976-19-01
Acknowledgements: This work was partially supported and funded by King Abdullah University of Science and Technology (KAUST), under award number FCC/1/1976-19-01, and NSFC No. 61828302.