Multimodal similarity-preserving hashing

Jonathan Masci, Michael M. Bronstein, Alexander M. Bronstein, Jürgen Schmidhuber

Research output: Contribution to journal › Article › peer-review

153 Scopus citations

Abstract

We introduce an efficient computational framework for hashing data belonging to multiple modalities into a single representation space where they become mutually comparable. The proposed approach is based on a novel coupled siamese neural network architecture and allows unified treatment of intra- and inter-modality similarity learning. Unlike existing cross-modality similarity learning approaches, our hashing functions are not limited to binarized linear projections and can assume arbitrarily complex forms. We show experimentally that our method significantly outperforms state-of-the-art hashing approaches on multimedia retrieval tasks. © 2014 IEEE.
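The abstract describes a coupled siamese architecture trained with both intra- and inter-modality similarity terms. The sketch below is not the authors' implementation; it only illustrates the general idea in PyTorch, and the branch sizes, code length, contrastive margin, loss weights, and the toy neighbor pairing used for the intra-modal terms are all illustrative assumptions.

```python
# A minimal PyTorch sketch of a coupled-siamese cross-modal hashing setup.
# Layer sizes, code length, margin, loss weights, and the toy pairing below
# are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HashBranch(nn.Module):
    """Modality-specific hashing function mapping features to relaxed codes."""

    def __init__(self, in_dim: int, code_bits: int = 32, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, code_bits),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # tanh is a differentiable surrogate for the binary code;
        # sign() is applied only at retrieval time.
        return torch.tanh(self.net(x))


def contrastive_loss(h1, h2, similar, margin: float = 2.0):
    """Pull similar pairs together, push dissimilar pairs beyond `margin`."""
    dist = (h1 - h2).pow(2).sum(dim=1).clamp_min(1e-12).sqrt()
    pos = similar * dist.pow(2)
    neg = (1.0 - similar) * F.relu(margin - dist).pow(2)
    return (pos + neg).mean()


# One branch per modality, e.g. image descriptors and text descriptors.
img_net = HashBranch(in_dim=512)
txt_net = HashBranch(in_dim=300)

# Toy batch: 0/1 similarity labels for cross-modal pairs (image_i, text_i)
# and for within-modality neighbor pairs (sample_i, sample_{i-1}).
x_img, x_txt = torch.randn(8, 512), torch.randn(8, 300)
sim_cross = torch.randint(0, 2, (8,)).float()
sim_img = torch.randint(0, 2, (8,)).float()
sim_txt = torch.randint(0, 2, (8,)).float()

h_img, h_txt = img_net(x_img), txt_net(x_txt)

# Unified objective: the inter-modal term couples the two branches, while the
# intra-modal terms preserve similarity within each modality.
loss = (contrastive_loss(h_img, h_txt, sim_cross)
        + 0.5 * contrastive_loss(h_img, h_img.roll(1, 0), sim_img)
        + 0.5 * contrastive_loss(h_txt, h_txt.roll(1, 0), sim_txt))
loss.backward()

# Binary codes for retrieval: Hamming-comparable across both modalities.
codes_img = torch.sign(img_net(x_img))
codes_txt = torch.sign(txt_net(x_txt))
```

The point emphasized in the abstract is that each hashing function is a full neural network rather than a binarized linear projection, so each branch can in principle take an arbitrarily complex form.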
Original language: English (US)
Pages (from-to): 824-830
Number of pages: 7
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 36
Issue number: 4
DOIs
State: Published - Jan 1 2014
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-14

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computational Theory and Mathematics
  • Software
  • Applied Mathematics
  • Computer Vision and Pattern Recognition
