AI enabled sign language recognition and VR space bidirectional communication using triboelectric smart glove.

Feng Wen, Zixuan Zhang, Tianyiyi He, Chengkuo Lee
Published in: Nature Communications (2021)
Sign language recognition, especially sentence recognition, is of great significance for lowering the communication barrier between the hearing/speech impaired and non-signers. General glove solutions, which detect the motions of our dexterous hands, only recognize discrete single gestures (i.e., numbers, letters, or words) rather than sentences, far from meeting the needs of signers' daily communication. Here, we propose an artificial intelligence enabled sign language recognition and communication system comprising sensing gloves, a deep learning block, and a virtual reality interface. Non-segmentation and segmentation-assisted deep learning models achieve the recognition of 50 words and 20 sentences. Significantly, the segmentation approach splits entire sentence signals into word units. The deep learning model then recognizes all word elements and, in reverse, reconstructs and recognizes the sentences. Furthermore, new/never-seen sentences created by recombining word elements in new orders can be recognized with an average correct rate of 86.67%. Finally, the sign language recognition results are projected into virtual space and translated into text and audio, enabling remote and bidirectional communication between signers and non-signers.
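The segmentation-assisted pipeline described above can be illustrated with a minimal sketch: split a multichannel glove signal into word-level segments (here by a simple short-time-energy threshold, an assumed heuristic, not the authors' method), classify each segment with a small 1-D CNN, and join the predicted word labels into a sentence. All names (segment_by_energy, WordClassifier, VOCAB) and parameters (channel count, sampling rate, thresholds) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of segmentation-assisted sentence recognition (illustrative only).
import numpy as np
import torch
import torch.nn as nn

VOCAB = ["I", "you", "thank", "help", "need"]  # placeholder word labels


def segment_by_energy(signal, fs=100, win=0.2, thresh=0.1):
    """Split a (channels, samples) glove signal into word segments where the
    short-time energy exceeds a threshold (a simple assumed heuristic)."""
    energy = np.convolve((signal ** 2).mean(axis=0),
                         np.ones(int(win * fs)) / (win * fs), mode="same")
    active = energy > thresh
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            segments.append(signal[:, start:i])
            start = None
    if start is not None:
        segments.append(signal[:, start:])
    return segments


class WordClassifier(nn.Module):
    """1-D CNN over the sensor channels of a fixed-length word segment.
    The channel count (15) is an assumption for illustration."""

    def __init__(self, channels=15, n_words=len(VOCAB)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, n_words),
        )

    def forward(self, x):  # x: (batch, channels, length)
        return self.net(x)


def recognize_sentence(signal, model, length=100):
    """Classify every word segment and join the predicted labels into a sentence."""
    words = []
    for seg in segment_by_energy(signal):
        # Pad or crop each segment to a fixed length for the CNN.
        seg = np.pad(seg, ((0, 0), (0, max(0, length - seg.shape[1]))))[:, :length]
        x = torch.tensor(seg, dtype=torch.float32).unsqueeze(0)
        words.append(VOCAB[model(x).argmax(dim=1).item()])
    return " ".join(words)
```

Under this sketch, recognizing unseen sentences built from known words needs no retraining, since the classifier only ever sees word-level segments and the sentence is reconstructed from their predicted order.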
Keyphrases
  • deep learning
  • artificial intelligence
  • convolutional neural network
  • machine learning
  • big data
  • virtual reality