arxiv:1802.04942

Isolating Sources of Disentanglement in Variational Autoencoders

Published on Feb 14, 2018
Authors: Ricky T. Q. Chen, Xuechen Li, Roger Grosse, David Duvenaud

Abstract

The paper introduces $\beta$-TCVAE, a refinement of the $\beta$-VAE objective that encourages disentangled representations, and proposes a principled, classifier-free measure of disentanglement called the mutual information gap (MIG).
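Concretely, the $\beta$-TCVAE objective (in notation assumed here: $n$ indexes training examples, $z$ is the latent code with dimensions $z_j$, $q$ is the encoder, and $q(z)$ is the aggregate posterior) is roughly of the form

$$\mathcal{L} = \mathbb{E}_{p(n)}\mathbb{E}_{q(z|n)}\big[\log p(n|z)\big] - \alpha\, I_q(z;n) - \beta\, \mathrm{KL}\Big(q(z)\,\Big\|\,\prod_j q(z_j)\Big) - \gamma \sum_j \mathrm{KL}\big(q(z_j)\,\|\,p(z_j)\big),$$

with $\alpha = \gamma = 1$ in practice, so only the total-correlation penalty is scaled by $\beta$; this is how the refinement introduces no hyperparameters beyond the $\beta$ already present in $\beta$-VAE.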

AI-generated summary

We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate our $\beta$-TCVAE (Total Correlation Variational Autoencoder), a refinement of the state-of-the-art $\beta$-VAE objective for learning disentangled representations that requires no additional hyperparameters during training. We further propose a principled classifier-free measure of disentanglement called the mutual information gap (MIG). We perform extensive quantitative and qualitative experiments, in both restricted and non-restricted settings, and show a strong relation between total correlation and disentanglement when the latent variable model is trained using our framework.
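To make the decomposition concrete, the averaged KL term of the evidence lower bound splits into three parts (a sketch in notation assumed here, with $q(z) = \mathbb{E}_{p(n)}[q(z|n)]$ the aggregate posterior):

$$\mathbb{E}_{p(n)}\big[\mathrm{KL}\big(q(z|n)\,\|\,p(z)\big)\big] = I_q(z;n) + \mathrm{KL}\Big(q(z)\,\Big\|\,\prod_j q(z_j)\Big) + \sum_j \mathrm{KL}\big(q(z_j)\,\|\,p(z_j)\big),$$

where the first term is the index-code mutual information, the middle term is the total correlation that $\beta$-TCVAE penalizes, and the last term is the dimension-wise KL to the prior.

The mutual information gap can likewise be sketched in a few lines of Python. This is a minimal illustration, not the paper's exact estimator: it assumes discretized latent codes and discrete ground-truth factors and uses scikit-learn's mutual_info_score, whereas the paper estimates the mutual information from the encoder distribution.

# Hypothetical MIG sketch: for each ground-truth factor, take the gap between
# the largest and second-largest mutual information over latent dimensions,
# normalize by the factor's entropy, and average over factors.
import numpy as np
from sklearn.metrics import mutual_info_score

def entropy(labels):
    # Shannon entropy (in nats) of a discrete label array.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def mig(latents, factors):
    # latents: (N, D) array of discretized latent codes
    # factors: (N, K) array of discrete ground-truth factors
    D, K = latents.shape[1], factors.shape[1]
    mi = np.zeros((D, K))
    for j in range(D):
        for k in range(K):
            mi[j, k] = mutual_info_score(factors[:, k], latents[:, j])
    gaps = []
    for k in range(K):
        top_two = np.sort(mi[:, k])[::-1][:2]  # best and runner-up latent dimensions
        h = entropy(factors[:, k])
        if h > 0:
            gaps.append((top_two[0] - top_two[1]) / h)
    return float(np.mean(gaps))

A MIG near 1 indicates each factor is captured by a single latent dimension, while values near 0 indicate the information about each factor is spread across several dimensions.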

