Deep Multi-type Objects Multi-view Multi-instance Multi-label Learning

Yuanlin Yang, Guoxian Yu, Carlotta Domeniconi, Xiangliang Zhang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Multi-view multi-instance multi-label learning (M3L) can model complex objects (bags) that are composed of multiple instances, represented with heterogeneous feature views, and annotated with multiple related semantic labels. Although significant progress has been made on M3L tasks, current solutions still focus on a single type of complex object and cannot effectively mine the widely witnessed interconnected objects of multiple types. To bridge this gap, we propose a Deep Multi-type objects Multi-view Multi-instance Multi-label Learning solution (DeepM4L) based on heterogeneous network embedding. DeepM4L first encodes the inter- and intra-relations among multi-type objects using a heterogeneous network and performs instance neighbor embedding to learn the representation vectors of instances. Next, it obtains the instance-label score tensor for each view and applies a max pooling operation to induce the bag-label score tensor for each bag. After that, it combines the bag-label scores by multi-view learning to guarantee semantic consistency between bags across views. Our empirical study on benchmark datasets shows that DeepM4L significantly outperforms recent advanced baselines.
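To make the instance-to-bag aggregation step concrete, below is a minimal sketch (not the authors' code) of the max pooling described in the abstract: per-instance label scores are pooled into a bag-label score for each view, and the per-view bag scores are then fused. The tensor shapes, variable names (n_views, n_instances, n_labels, view_weights), and the uniform-weight fusion are illustrative assumptions, not DeepM4L's actual multi-view learning scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n_views, n_instances, n_labels = 3, 5, 4

# Hypothetical instance-label score tensor for one bag:
# instance_scores[v, i, l] = score of label l for instance i under view v.
instance_scores = rng.random((n_views, n_instances, n_labels))

# Max pooling over instances: a bag carries a label if at least one of its
# instances does, so the per-view bag-label score is the instance-wise maximum.
bag_scores_per_view = instance_scores.max(axis=1)      # shape: (n_views, n_labels)

# Illustrative fusion of views with uniform weights; the paper's multi-view
# learning step that enforces cross-view semantic consistency may differ.
view_weights = np.full(n_views, 1.0 / n_views)
bag_scores = view_weights @ bag_scores_per_view        # shape: (n_labels,)

print("per-view bag-label scores:\n", bag_scores_per_view)
print("fused bag-label scores:", bag_scores)
```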
Original language: English (US)
Title of host publication: Proceedings of the 2021 SIAM International Conference on Data Mining (SDM)
Publisher: Society for Industrial and Applied Mathematics
Pages: 486-494
Number of pages: 9
ISBN (Print): 9781611976700
DOIs
State: Published - Apr 26 2021

Bibliographical note

KAUST Repository Item: Exported on 2021-05-04
Acknowledgements: Supported by NSFC (61872300, 62031003 and 62072380)
