Abstract:
Gesture recognition is critical in the field of human-computer interaction, especially in healthcare, rehabilitation, and sign language translation. Conventionally, the gesture data collected by inertial measurement unit (IMU) sensors are relayed to the cloud or to a remote device with higher computing power, where the recognition models are trained; this is inconvenient for remote follow-up of movement rehabilitation training. In this paper, based on a field-programmable gate array (FPGA) accelerator and a Cortex-M0 IP core, we propose a wearable deep learning system that processes data locally on the end device. With a pre-stage processing module and a serial-parallel hybrid method, the device achieves low power consumption and low latency at the microcontroller unit (MCU) level while meeting or exceeding the performance of single-board computers (SBCs); for example, its performance is more than twice that of a Cortex-A53 (the core commonly used in the Raspberry Pi). Moreover, a convolutional neural network (CNN) and a multilayer perceptron neural network (NN) are used in the recognition model to extract features and classify gestures, achieving a recognition accuracy of 97%. Finally, this paper offers a software-hardware co-design method that can serve as a reference for the design of edge devices in other scenarios.
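To make the recognition model concrete, the following is a minimal, hypothetical sketch of a CNN feature extractor followed by a multilayer-perceptron classifier for windowed IMU data, written in PyTorch. The window length (128 samples), the 6 IMU channels, the layer widths, and the number of gesture classes (8) are illustrative assumptions; the abstract does not specify the actual topology or its fixed-point mapping onto the FPGA accelerator.

# Hypothetical sketch only; layer sizes, window length, and class count are
# assumptions for illustration and may differ from the paper's actual design.
import torch
import torch.nn as nn

class GestureNet(nn.Module):
    def __init__(self, in_channels: int = 6, window: int = 128, num_classes: int = 8):
        super().__init__()
        # 1-D convolutions extract temporal features from the raw IMU window.
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Multilayer perceptron classifies the flattened feature map.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (window // 4), 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, channels, samples), e.g. (N, 6, 128).
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = GestureNet()
    dummy = torch.randn(4, 6, 128)   # four synthetic IMU windows
    logits = model(dummy)
    print(logits.shape)              # torch.Size([4, 8])

In an edge deployment of the kind the paper describes, a model of this shape would typically be trained offline and then quantized so that the convolution and fully connected layers can be executed by the FPGA accelerator under control of the Cortex-M0 core.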
Source:
MATHEMATICAL BIOSCIENCES AND ENGINEERING
ISSN: 1547-1063
Year: 2020
Issue: 1
Volume: 18
Page: 132-153
Impact Factor: 2.08 (JCR 2020); 2.600 (JCR 2022)
ESI Discipline: MATHEMATICS
ESI HC Threshold: 50
JCR Journal Grade: 3
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 13
ESI Highly Cited Papers on the List: 0