Cross-modal similarity learning via pairs, preferences, and active supervision

Yi Zhen, Piyush Rai, Hongyuan Zha, Lawrence Carin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

17 Scopus citations

Abstract

We present a probabilistic framework for learning pairwise similarities between objects belonging to different modalities, such as drugs and proteins, or text and images. Our framework learns a binary-code representation for the objects in each modality and has three key properties: (i) it can leverage both pairwise and easy-to-obtain relative-preference cross-modal constraints; (ii) the probabilistic formulation naturally allows querying for the most useful/informative constraints, enabling an active learning setting (existing methods for cross-modal similarity learning lack such a mechanism); and (iii) the binary code length is learned from the data. We demonstrate the effectiveness of the proposed approach on two problems that require computing pairwise similarities between cross-modal object pairs: cross-modal link prediction in bipartite graphs, and hashing-based cross-modal similarity search.
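To make the three ingredients concrete, the following is a minimal, self-contained Python sketch, not the authors' actual model: it assumes linear hash functions per modality with a tanh relaxation of the binary codes, logistic likelihoods for pairwise and preference constraints, and uncertainty sampling as the active query rule. The code length K is fixed here, whereas the paper learns it from data; all names and hyperparameters below are illustrative assumptions.

# A minimal sketch of cross-modal binary-code similarity learning with
# pairwise labels, preference triples, and an active query rule.
import numpy as np

rng = np.random.default_rng(0)
K, ALPHA, LR = 8, 4.0, 0.05        # code length, logit sharpness, step size

def codes(W, X):
    # Relaxed binary codes in (-1, 1); np.sign(X @ W) gives the hard codes.
    return np.tanh(X @ W)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def score(bx, by):
    # Scaled code agreement, used as the logit of the similarity probability.
    return ALPHA * (bx @ by) / K

def pair_grad(Wx, Wy, x, y, s):
    # Gradient of -log P(s | x, y) with P(s=1) = sigmoid(score), s in {0, 1}.
    bx, by = np.tanh(x @ Wx), np.tanh(y @ Wy)
    err, c = sigmoid(score(bx, by)) - s, ALPHA / K
    gx = c * err * np.outer(x, by * (1 - bx**2))   # chain rule through tanh
    gy = c * err * np.outer(y, bx * (1 - by**2))
    return gx, gy

def pref_grad(Wx, Wy, x, y_pos, y_neg):
    # Preference triple: x should be more similar to y_pos than to y_neg,
    # modeled as -log sigmoid(score(x, y_pos) - score(x, y_neg)).
    bx = np.tanh(x @ Wx)
    bp, bn = np.tanh(y_pos @ Wy), np.tanh(y_neg @ Wy)
    err = sigmoid(score(bx, bp) - score(bx, bn)) - 1.0
    c = ALPHA / K
    gx = c * err * np.outer(x, (bp - bn) * (1 - bx**2))
    gyp = c * err * np.outer(y_pos, bx * (1 - bp**2))
    gyn = -c * err * np.outer(y_neg, bx * (1 - bn**2))
    return gx, gyp, gyn

def most_informative_pair(Wx, Wy, X, Y, candidates):
    # Active query: the unlabeled pair whose predicted similarity probability
    # is closest to 0.5, i.e. the maximum-entropy (most uncertain) pair.
    Bx, By = codes(Wx, X), codes(Wy, Y)
    probs = np.array([sigmoid(score(Bx[i], By[j])) for i, j in candidates])
    return candidates[int(np.argmin(np.abs(probs - 0.5)))]

# Toy data: two modalities generated from a shared latent cause, so that
# object i in modality X is genuinely related to object i in modality Y.
n, d1, d2 = 60, 10, 12
Z = rng.standard_normal((n, 3))
X = Z @ rng.standard_normal((3, d1)) + 0.1 * rng.standard_normal((n, d1))
Y = Z @ rng.standard_normal((3, d2)) + 0.1 * rng.standard_normal((n, d2))
Wx = 0.1 * rng.standard_normal((d1, K))
Wy = 0.1 * rng.standard_normal((d2, K))

pairs = [(i, i, 1) for i in range(0, n, 2)] + \
        [(i, (i + 7) % n, 0) for i in range(0, n, 2)]      # (i, j, label)
prefs = [(i, i, (i + 3) % n) for i in range(1, n, 2)]      # (i, j_pos, j_neg)

for _ in range(200):
    for i, j, s in pairs:
        gx, gy = pair_grad(Wx, Wy, X[i], Y[j], s)
        Wx -= LR * gx
        Wy -= LR * gy
    for i, jp, jn in prefs:
        gx, gyp, gyn = pref_grad(Wx, Wy, X[i], Y[jp], Y[jn])
        Wx -= LR * gx
        Wy -= LR * (gyp + gyn)

candidates = [(i, j) for i in range(5) for j in range(5)]
print("next pair to query:", most_informative_pair(Wx, Wy, X, Y, candidates))

After training, the hard codes np.sign(codes(Wx, X)) and np.sign(codes(Wy, Y)) can be compared by Hamming distance, which is the hashing-based retrieval setting the abstract refers to; the query rule above simply operationalizes the "most informative constraint" idea with uncertainty sampling.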
Original language: English (US)
Title of host publication: Proceedings of the National Conference on Artificial Intelligence
Publisher: AI Access Foundation
Pages: 3203-3209
Number of pages: 7
ISBN (Print): 9781577357025
State: Published - Jun 1 2015
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2021-02-09

