Sequential Variational Autoencoder with Adversarial Classifier for Video Disentanglement.

Takeshi Haga, Hiroshi Kera, Kazuhiko Kawamoto
Published in: Sensors (Basel, Switzerland) (2023)
In this paper, we propose a sequential variational autoencoder for video disentanglement, which is a representation learning method that can be used to separately extract static and dynamic features from videos. Building sequential variational autoencoders with a two-stream architecture induces inductive bias for video disentanglement. However, our preliminary experiment demonstrated that the two-stream architecture is insufficient for video disentanglement because static features frequently contain dynamic features. Additionally, we found that dynamic features are not discriminative in the latent space. To address these problems, we introduced an adversarial classifier using supervised learning into the two-stream architecture. The strong inductive bias through supervision separates dynamic features from static features and yields discriminative representations of the dynamic features. Through a comparison with other sequential variational autoencoders, we qualitatively and quantitatively demonstrate the effectiveness of the proposed method on the Sprites and MUG datasets.
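The two-stream idea described above can be sketched minimally: a static latent is computed once per video, a dynamic latent per frame, and a supervised classifier on the dynamic stream supplies the inductive bias that keeps motion information out of the static stream. This is a toy illustration, not the paper's implementation; all shapes, weight matrices, and function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper):
# T frames, D-dim frame features, ZS static / ZD dynamic latent sizes.
T, D, ZS, ZD = 8, 16, 4, 4

def encode_two_stream(frames, W_s, W_d):
    """Two-stream encoder sketch: one static latent shared across the
    video, one dynamic latent per frame."""
    z_static = np.tanh(frames.mean(axis=0) @ W_s)   # (ZS,) time-pooled
    z_dynamic = np.tanh(frames @ W_d)               # (T, ZD) per frame
    return z_static, z_dynamic

def classifier_logits(z_dynamic, W_c):
    """Auxiliary action classifier on the dynamic stream; training it
    with labels is the supervised inductive bias the abstract describes."""
    return z_dynamic.mean(axis=0) @ W_c             # (num_classes,)

frames = rng.normal(size=(T, D))
W_s = rng.normal(size=(D, ZS))
W_d = rng.normal(size=(D, ZD))
W_c = rng.normal(size=(ZD, 3))

z_s, z_d = encode_two_stream(frames, W_s, W_d)
logits = classifier_logits(z_d, W_c)
print(z_s.shape, z_d.shape, logits.shape)  # → (4,) (8, 4) (3,)
```

In the full model, the adversarial part would additionally penalize the *static* latent for being classifiable, pushing class-discriminative (dynamic) information into `z_dynamic` only; that adversarial loss is omitted here for brevity.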