Multi-modal Network Representation Learning

Chuxu Zhang, Meng Jiang, Xiangliang Zhang, Yanfang Ye, Nitesh V. Chawla

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

In today's information-driven and computational society, complex systems are often modeled as multi-modal networks that combine heterogeneous structural relations, unstructured attributes/content, temporal context, or combinations thereof. The abundance of information in multi-modal networks means that feature engineering for customized intelligent solutions demands both deep domain understanding and a large exploratory search space. Automating feature discovery through representation learning in multi-modal networks has therefore become essential for many applications. In this tutorial, we systematically review the area of multi-modal network representation learning, covering a series of recent methods and applications. These methods are categorized and introduced from the perspectives of unsupervised, semi-supervised, and supervised learning, each accompanied by corresponding real-world applications. We conclude the tutorial with open questions for discussion. The authors of this tutorial are active and productive researchers in this area.
Original language: English (US)
Title of host publication: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining
Publisher: ACM
Pages: 3557-3558
Number of pages: 2
ISBN (Print): 9781450379984
DOIs:
State: Published - Aug 20 2020
