Efficiently Disentangle Causal Representations

Yuanpeng Li*, Joel Hestness, Mohamed Elhoseiny, Liang Zhao, Kenneth Church

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

Abstract

This paper proposes an efficient approach to learning disentangled representations with causal mechanisms, based on the difference of conditional probabilities between the original and new distributions. We approximate this difference with the models' generalization abilities, so that it fits in the standard machine learning framework and can be computed efficiently. In contrast to the state-of-the-art approach, which relies on the learner's adaptation speed to the new distribution, the proposed approach only requires evaluating the model's generalization ability. We provide a theoretical explanation for the advantage of the proposed method, and our experiments show that the proposed technique is 1.9-11.0× more sample efficient and 9.4-32.4× quicker than the previous method on various tasks. The source code is available at https://github.com/yuanpeng16/EDCR.
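The abstract's starting point — that the conditional along the true causal direction stays (nearly) invariant under a distribution shift, while the anti-causal conditional changes — can be illustrated on a toy two-variable system. The code below is an illustrative sketch, not the paper's implementation: the distributions, variable names, and the use of empirical conditionals are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(p_a, p_b_given_a, n):
    """Draw n samples from a binary system A -> B."""
    a = rng.random(n) < p_a
    pb = np.where(a, p_b_given_a[1], p_b_given_a[0])
    b = rng.random(n) < pb
    return a.astype(int), b.astype(int)

def conditionals(a, b):
    """Empirical P(B=1 | A=v) and P(A=1 | B=v) for v in {0, 1}."""
    p_b_given_a = np.array([b[a == v].mean() for v in (0, 1)])
    p_a_given_b = np.array([a[b == v].mean() for v in (0, 1)])
    return p_b_given_a, p_a_given_b

# Original distribution: A -> B with mechanism P(B|A).
a0, b0 = sample(0.3, (0.2, 0.9), 50_000)
# New distribution: intervene on P(A) only; the mechanism P(B|A) is unchanged.
a1, b1 = sample(0.8, (0.2, 0.9), 50_000)

pba0, pab0 = conditionals(a0, b0)
pba1, pab1 = conditionals(a1, b1)

# The causal conditional P(B|A) barely moves; the anti-causal P(A|B) shifts a lot.
shift_causal = np.abs(pba1 - pba0).max()
shift_anticausal = np.abs(pab1 - pab0).max()
print(f"shift of P(B|A): {shift_causal:.3f}, shift of P(A|B): {shift_anticausal:.3f}")
```

A learner whose factorization matches the causal direction therefore generalizes better to the shifted data without retraining its mechanism, which is the quantity the proposed method evaluates in place of adaptation speed.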

Original language: English (US)
Pages: 54-71
Number of pages: 18
State: Published - 2024
Event: 1st Conference on Parsimony and Learning, CPAL 2024 - Hong Kong, China
Duration: Jan 3 2024 - Jan 6 2024

Conference

Conference: 1st Conference on Parsimony and Learning, CPAL 2024
Country/Territory: China
City: Hong Kong
Period: 01/3/24 - 01/6/24

Bibliographical note

Publisher Copyright:
© 2024 Proceedings of Machine Learning Research

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
