Viewpoint-Agnostic Taekwondo Action Recognition Using Synthesized Two-Dimensional Skeletal Datasets.
Chenglong Luo, Sung-Woo Kim, Hun-Young Park, Kiwon Lim, Hoeryoung Jung. Published in: Sensors (Basel, Switzerland) (2023)
Issues of fairness and consistency in Taekwondo poomsae evaluation have often arisen owing to the lack of an objective evaluation method. This study proposes a three-dimensional (3D) convolutional neural network-based action recognition model for the objective evaluation of Taekwondo poomsae. The model exhibits robust recognition performance regardless of variations in viewpoint by reducing the discrepancy between the training and test images. It uses 3D skeletons of poomsae unit actions, collected with a full-body motion-capture suit, to generate synthesized two-dimensional (2D) skeletons from desired viewpoints. The 2D skeletons obtained from diverse viewpoints form the training dataset, on which the model is trained to ensure consistent recognition performance regardless of the viewpoint. The performance of the model was evaluated against various test datasets, including projected 2D skeletons and RGB images captured from diverse viewpoints. Comparison with previously reported action recognition models demonstrated the superiority of the proposed model, underscoring its effectiveness in recognizing and classifying Taekwondo poomsae actions.
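The core data-augmentation idea, synthesizing 2D skeletons from captured 3D skeletons at arbitrary viewpoints, can be sketched as a simple rotate-then-project step. The following is a minimal illustrative sketch, not the paper's actual pipeline: the orthographic camera model, the 17-joint layout, and the 30-degree azimuth sampling are all assumptions for demonstration.

```python
import numpy as np

def project_skeleton(joints_3d, azimuth_deg, elevation_deg=0.0):
    """Rotate a 3D skeleton (J x 3 array of joint positions) to simulate
    a camera at the given azimuth/elevation, then orthographically
    project it to 2D (J x 2). Hypothetical camera model for illustration.
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    # Rotation about the vertical (y) axis controls the azimuth viewpoint
    Ry = np.array([[np.cos(az), 0.0, np.sin(az)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(az), 0.0, np.cos(az)]])
    # Rotation about the x axis controls the camera elevation
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(el), -np.sin(el)],
                   [0.0, np.sin(el), np.cos(el)]])
    rotated = joints_3d @ (Rx @ Ry).T
    # Orthographic projection: drop the depth (z) coordinate
    return rotated[:, :2]

# Build a multi-viewpoint training set from one captured 3D skeleton
# (toy data: 17 joints with random coordinates, sampled every 30 degrees)
skeleton = np.random.rand(17, 3)
views = [project_skeleton(skeleton, a) for a in range(0, 360, 30)]
```

Training on such viewpoint-diverse 2D projections is what lets the recognizer generalize to test footage captured from camera angles never seen during data collection.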