Diffusion Probabilistic Modeling for Video Generation.

Ruihan Yang, Prakhar Srivastava, Stephan Mandt
Published in: Entropy (Basel, Switzerland) (2023)
Denoising diffusion probabilistic models are a promising new class of generative models that mark a milestone in high-quality image generation. This paper showcases their ability to sequentially generate video, surpassing prior methods in perceptual and probabilistic forecasting metrics. We propose an autoregressive, end-to-end optimized video diffusion model inspired by recent advances in neural video compression. The model successively generates future frames by correcting a deterministic next-frame prediction using a stochastic residual generated by an inverse diffusion process. We compare this approach against six baselines on four datasets involving natural and simulation-based videos. We find significant improvements in terms of perceptual quality and probabilistic frame forecasting ability for all datasets.
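The generation scheme described above (a deterministic next-frame prediction corrected by a stochastic residual drawn via reverse diffusion) can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: `predict_next`, `denoise_step`, and `sample_residual` are hypothetical stand-ins, and the reverse-diffusion loop here is a trivial shrinkage rather than a learned denoising network.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_next(frame):
    # Stand-in deterministic predictor: a real model would use a learned
    # network; here we simply carry the previous frame forward.
    return frame

def denoise_step(x, t, cond):
    # Toy reverse-diffusion update. A real model would apply a learned
    # denoiser conditioned on the timestep t and the prediction `cond`;
    # here we just shrink the sample toward a zero-mean residual.
    return 0.9 * x

def sample_residual(cond, steps=10, shape=(8, 8)):
    x = rng.standard_normal(shape)        # start from pure Gaussian noise
    for t in reversed(range(steps)):
        x = denoise_step(x, t, cond)      # run the inverse diffusion chain
    return x

def generate_video(first_frame, n_frames=4):
    # Autoregressive rollout: each new frame is the deterministic
    # prediction plus a stochastic residual from reverse diffusion.
    frames = [first_frame]
    for _ in range(n_frames - 1):
        mu = predict_next(frames[-1])
        r = sample_residual(mu)
        frames.append(mu + r)
    return np.stack(frames)

video = generate_video(np.zeros((8, 8)))
print(video.shape)  # (4, 8, 8)
```

The key design point reflected here is the split of each frame into a deterministic component and a stochastic residual, so the diffusion process only has to model the uncertainty the predictor cannot capture.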