Abstract:
Hand gesture recognition (HGR) in A-mode ultrasound human-machine interfaces (HMI-A) based on traditional machine learning relies on intricate feature-reduction methods: researchers need prior knowledge and multiple rounds of validation to reach the optimal combination of features and learning algorithms. Furthermore, anatomical differences in the forearm muscles across subjects prevent subject-specific models from transferring to unseen subjects, necessitating repeated retraining; this increases users' time cost and limits the real-world application of HMI-A. Hence, this article presents a lightweight 1-D four-branch squeeze-and-excitation convolutional neural network (4-branch SENet) that outperforms traditional machine learning methods in both feature extraction and gesture classification. Building on this, a transfer-learning weight fine-tuning strategy enables rapid gesture recognition across subjects and across time. Comparative analysis shows that freezing the feature layers and fine-tuning the fully connected (FC) layers yields an average accuracy of 96.35% ± 3.04% and an average runtime of 4.8 ± 0.15 s, 52.9% faster than training subject-specific models. This method further extends the application scenarios of HMI-A in fields such as medical rehabilitation and intelligent prosthetics.
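The transfer-learning strategy described above (freeze the feature-extraction branches, retrain only the FC head on a new subject's data) can be sketched in PyTorch. This is a minimal illustration, not the paper's implementation: the branch structure, channel counts, input length, and class count below are all assumptions, since the record gives no architectural details.

```python
import torch
import torch.nn as nn

class SEBranch(nn.Module):
    """One 1-D conv branch with a squeeze-and-excitation block (illustrative sizes)."""
    def __init__(self, ch=8):
        super().__init__()
        self.conv = nn.Conv1d(1, ch, kernel_size=3, padding=1)
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),                   # squeeze: global pooling
            nn.Conv1d(ch, ch // 2, 1), nn.ReLU(),      # excitation: bottleneck
            nn.Conv1d(ch // 2, ch, 1), nn.Sigmoid(),   # per-channel weights in (0, 1)
        )

    def forward(self, x):
        y = torch.relu(self.conv(x))
        return y * self.se(y)                          # channel re-weighting

class FourBranchSENet(nn.Module):
    """Hypothetical 4-branch SENet: one branch per A-mode ultrasound channel."""
    def __init__(self, n_classes=8, length=100):
        super().__init__()
        self.branches = nn.ModuleList(SEBranch() for _ in range(4))
        self.fc = nn.Linear(4 * 8 * length, n_classes)

    def forward(self, x):                              # x: (batch, 4, length)
        feats = [b(x[:, i:i + 1, :]) for i, b in enumerate(self.branches)]
        return self.fc(torch.cat(feats, dim=1).flatten(1))

def freeze_features_finetune_fc(model):
    """Transfer-learning step: freeze all feature layers, train only the FC head."""
    for p in model.parameters():
        p.requires_grad = False
    for p in model.fc.parameters():
        p.requires_grad = True
    return model
```

After `freeze_features_finetune_fc`, an optimizer would be built over `model.fc.parameters()` only, so adapting to a new subject updates a small fraction of the weights, which is consistent with the reported runtime reduction versus retraining a full subject-specific model.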
Source:
IEEE Sensors Journal
ISSN: 1530-437X
Year: 2024
Issue: 10
Volume: 24
Page: 17183-17192
Impact Factor: 4.300 (JCR@2023)
Cited Count:
SCOPUS Cited Count: 4
ESI Highly Cited Papers on the List: 0