
VI-Net: View-Invariant Quality of Human Movement Assessment

Faegheh Sardari, Adeline Paiement, Sion Hannuna, Majid Mirmehdi
Published in: Sensors (Basel, Switzerland) (2020)
We propose a view-invariant method for assessing the quality of human movement that does not rely on skeleton data. Our end-to-end convolutional neural network consists of two stages: first, a view-invariant trajectory descriptor for each body joint is generated from RGB images; then, the collection of trajectories for all joints is processed by an adapted, pre-trained 2D convolutional neural network (CNN) (e.g., VGG-19 or ResNeXt-50) to learn the relationships amongst the different body parts and deliver a score for the movement quality. We release the only publicly available, multi-view, non-skeleton, non-mocap rehabilitation movement dataset (QMAR), and provide results for both cross-subject and cross-view scenarios on this dataset. We show that VI-Net achieves an average rank correlation of 0.66 on cross-subject evaluation and 0.65 on unseen views when trained on only two views. We also evaluate the proposed method on the single-view rehabilitation dataset KIMORE and obtain 0.66 rank correlation against a baseline of 0.62.
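To make the second stage concrete, below is a minimal PyTorch sketch of the idea described in the abstract: a pre-trained 2D CNN (here ResNeXt-50) adapted so that its input accepts a stack of per-joint trajectory descriptors and its head regresses a single movement-quality score. The input layout (one channel per body joint, a 2D trajectory-descriptor map per joint), the number of joints, and the map size are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn
from torchvision import models


class QualityScorer(nn.Module):
    """Adapted pre-trained 2D CNN that maps stacked per-joint trajectory
    descriptors to a movement-quality score (illustrative sketch)."""

    def __init__(self, num_joints: int = 17):
        super().__init__()
        backbone = models.resnext50_32x4d(weights="IMAGENET1K_V1")
        # Adapt the first convolution to take `num_joints` channels
        # (one trajectory-descriptor map per joint) instead of RGB.
        backbone.conv1 = nn.Conv2d(num_joints, 64, kernel_size=7,
                                   stride=2, padding=3, bias=False)
        # Replace the classification head with a single-output regressor
        # that predicts the quality score.
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)
        self.backbone = backbone

    def forward(self, descriptors: torch.Tensor) -> torch.Tensor:
        # descriptors: (batch, num_joints, H, W) stack of trajectory maps
        return self.backbone(descriptors).squeeze(-1)


if __name__ == "__main__":
    model = QualityScorer(num_joints=17)
    fake_batch = torch.randn(2, 17, 224, 224)  # two clips, assumed map size
    print(model(fake_batch).shape)             # -> torch.Size([2])
```

Predicted scores of this kind are typically compared against ground-truth quality labels with a rank correlation coefficient, which matches the evaluation numbers reported in the abstract.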
Keyphrases
  • convolutional neural network
  • deep learning
  • artificial intelligence