ASMNet: Action and Style-Conditioned Motion Generative Network for 3D Human Motion Generation.

Zongying Li, Yong Wang, Xin Du, Can Wang, Reinhard Koch, Mengyuan Liu
Published in: Cyborg and Bionic Systems (Washington, D.C.) (2024)
Extensive research has explored human motion generation, yet the generated sequences are strongly shaped by motion style: walking with joy and walking with sorrow, for instance, produce distinctly different character motions. Because capturing motion data with annotated styles is difficult, the data available for style research are also limited. To address these problems, we propose ASMNet, an action- and style-conditioned motion generative network. ASMNet ensures that the generated human motion sequences not only comply with the provided action label but also exhibit distinctive stylistic features. To extract motion features from human motion sequences, we design a spatial-temporal extractor. Moreover, we use an adaptive instance normalization (AdaIN) layer to inject style into the target motion. Our results are comparable to state-of-the-art approaches and show advantages in both quantitative and qualitative evaluations. The code is available at https://github.com/ZongYingLi/ASMNet.git.
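
The style-injection step named in the abstract relies on adaptive instance normalization, which normalizes content features and then rescales them with per-channel statistics predicted from a style code. Below is a minimal PyTorch sketch of that mechanism; the tensor layout (batch, channels, frames), the style_dim value, and the AdaIN module name are illustrative assumptions, not the released ASMNet implementation.

import torch
import torch.nn as nn

class AdaIN(nn.Module):
    """Adaptive instance normalization: normalize content features,
    then rescale them with statistics predicted from a style code."""
    def __init__(self, style_dim: int, num_features: int):
        super().__init__()
        # affine=False: scale and shift come from the style code instead.
        self.norm = nn.InstanceNorm1d(num_features, affine=False)
        self.fc = nn.Linear(style_dim, num_features * 2)

    def forward(self, content: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        # content: (batch, channels, frames) motion feature map
        # style:   (batch, style_dim) style embedding
        gamma, beta = self.fc(style).chunk(2, dim=1)  # per-channel scale / shift
        gamma = gamma.unsqueeze(-1)                   # (batch, channels, 1)
        beta = beta.unsqueeze(-1)
        return (1 + gamma) * self.norm(content) + beta

# Example: inject a style embedding into a 48-channel motion feature map.
adain = AdaIN(style_dim=64, num_features=48)
features = torch.randn(2, 48, 120)   # two sequences, 120 frames each
style = torch.randn(2, 64)
out = adain(features, style)         # same shape as `features`

Because the normalization strips the content features of their own channel statistics before the style-derived scale and shift are applied, the same motion content can be re-rendered under different styles simply by swapping the style embedding.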