A multi-camera and multimodal dataset for posture and gait analysis.
Manuel Palermo, João M. Lopes, João André, Ana C. Matias, João José Cerqueira, Cristina Peixoto Santos
Published in: Scientific Data (2022)
Monitoring gait and posture while using assistive robotic devices is relevant to attaining effective assistance and to assessing the user's progression over time. This work presents a multi-camera, multimodal, and detailed dataset involving 14 healthy participants walking with a wheeled robotic walker equipped with a pair of affordable cameras. Depth data were acquired at 30 fps and synchronized with inertial data from Xsens MTw Awinda sensors and with kinematic data from the segments of the Xsens biomechanical model, both acquired at 60 Hz. Participants walked with the robotic walker at three different gait speeds, across three different walking scenarios/paths, in three different locations. In total, the dataset provides approximately 92 minutes of recording time, corresponding to nearly 166,000 samples of synchronized data. This dataset may contribute to scientific research by allowing the development and evaluation of: (i) vision-based pose estimation algorithms, exploring classic or deep learning approaches; (ii) human detection and tracking algorithms; (iii) movement forecasting; and (iv) biomechanical analysis of gait/posture when using a rehabilitation device.
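Because the depth stream (30 fps) and the Xsens inertial/kinematic streams (60 Hz) run at different rates, users of the dataset will typically need to align them by timestamp. Below is a minimal sketch of nearest-timestamp alignment; the timestamp arrays and the `align_nearest` helper are illustrative assumptions, not the dataset's actual file layout or API.

```python
"""Minimal sketch: aligning 30 fps depth frames with 60 Hz Xsens samples.

All names here are hypothetical; consult the paper's data records for the
actual file layout and timestamp format.
"""
import numpy as np


def align_nearest(depth_ts: np.ndarray, xsens_ts: np.ndarray) -> np.ndarray:
    """For each depth timestamp, return the index of the nearest Xsens sample.

    Both inputs are 1-D arrays of timestamps in seconds, assumed sorted.
    """
    # searchsorted gives the insertion point; compare the two neighboring
    # samples and keep whichever is closer in time.
    idx = np.searchsorted(xsens_ts, depth_ts)
    idx = np.clip(idx, 1, len(xsens_ts) - 1)
    left = xsens_ts[idx - 1]
    right = xsens_ts[idx]
    idx -= depth_ts - left < right - depth_ts  # step back when left is nearer
    return idx


# Example with synthetic timestamps: 30 fps depth vs 60 Hz Xsens streams.
depth_ts = np.arange(0, 10, 1 / 30)   # ~300 depth frames over 10 s
xsens_ts = np.arange(0, 10, 1 / 60)   # ~600 inertial/kinematic samples
matches = align_nearest(depth_ts, xsens_ts)
print(matches[:5])  # Xsens indices paired with the first five depth frames
```

Nearest-neighbor matching keeps every depth frame and pairs it with the closest higher-rate sample, a common choice when one stream's rate is an exact multiple of the other's, as is the case here (60 Hz vs 30 fps).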
Keyphrases
- deep learning
- machine learning
- convolutional neural network
- robot assisted
- artificial intelligence
- data analysis
- high resolution
- lower limb