Multi-typed objects multi-view multi-instance multi-label learning

Yuanlin Yang, Guoxian Yu, Jun Wang, Carlotta Domeniconi, Xiangliang Zhang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Scopus citations

Abstract

Multi-typed objects Multi-view Multi-instance Multi-label Learning (M4L) deals with interconnected multi-typed objects (or bags) that are made of diverse instances, represented with heterogeneous feature views and annotated with a set of non-exclusive but semantically related labels. M4L is more general and powerful than the typical Multi-view Multi-instance Multi-label Learning (M3L), which accommodates only single-typed bags and lacks the power to jointly model the naturally interconnected multi-typed objects of the physical world. To address this novel and challenging learning task, we develop a joint matrix factorization based solution (M4L-JMF). Specifically, M4L-JMF first encodes the diverse attributes and the multiple intra- and inter-associations among multi-typed bags into respective data matrices, and then jointly factorizes these matrices into low-rank ones to explore the composite latent representation of each bag and its instances (if any). In addition, it incorporates a dispatch and aggregation term to distribute the labels of bags to individual instances and, conversely, aggregate the labels of instances back to their affiliated bags in a coherent manner. Experimental results on benchmark datasets show that M4L-JMF achieves significantly better results than straightforward adaptations of existing M3L solutions on this novel problem.
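The abstract's core idea, coupling several data matrices through a shared low-rank bag representation, can be illustrated with a minimal sketch. The snippet below is not the authors' M4L-JMF implementation; it only shows, under assumed names (X, A, U, V, rank, lr, lam) and synthetic data, how a bag attribute matrix and an inter-bag association matrix can be jointly factorized so that both share the same latent bag factors. The dispatch and aggregation term for labels described in the abstract is omitted here.

```python
# Illustrative sketch of a joint matrix factorization (assumed setup, not M4L-JMF itself):
# X ~ U V^T couples bag attributes to a shared low-rank representation U,
# A ~ U U^T couples inter-bag associations to the same U.
import numpy as np

rng = np.random.default_rng(0)
n_bags, n_feats, rank = 50, 20, 5

X = rng.random((n_bags, n_feats))   # bag attribute matrix (one feature view)
A = rng.random((n_bags, n_bags))    # inter-bag association matrix
A = (A + A.T) / 2                   # symmetrize the associations

U = rng.random((n_bags, rank))      # shared low-rank bag representation
V = rng.random((n_feats, rank))     # feature loadings

lr, lam = 1e-3, 0.1                 # step size and association-term weight
for _ in range(500):
    Rx = U @ V.T - X                # residual of the attribute factorization
    Ra = U @ U.T - A                # residual of the association factorization
    # gradient steps on 0.5*||U V^T - X||^2 + 0.5*lam*||U U^T - A||^2
    U -= lr * (Rx @ V + 2 * lam * Ra @ U)
    V -= lr * (Rx.T @ U)

print("attribute reconstruction error:", np.linalg.norm(U @ V.T - X))
print("association reconstruction error:", np.linalg.norm(U @ U.T - A))
```

In this toy setup, the composite representation U is the only factor shared across the two matrices, which is what lets heterogeneous sources inform one another; the full method additionally handles multiple bag types, multiple views, and the bag-instance label coupling.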
Original language: English (US)
Title of host publication: 2020 IEEE International Conference on Data Mining (ICDM)
Publisher: IEEE
Pages: 1370-1375
Number of pages: 6
ISBN (Print): 9781728183169
DOIs
State: Published - Nov 2020

Bibliographical note

KAUST Repository Item: Exported on 2021-03-01
