Abstract:
To address the difficulty of deploying convolutional neural networks on small embedded devices, caused by their high computational complexity and large storage requirements, this paper proposes a binarization-based FPGA accelerator architecture for convolutional neural networks. Grayscale processing, binarization, and threshold setting are used to reduce the number of parameters, and parallel structures over convolution kernels, feature maps, and matrix blocks are designed to accelerate computation. The resulting architecture can be deployed on the resource-constrained AX7103 FPGA development platform. Experimental results show that, with the data bit width reduced from 32 bits to 8 bits, the parallel-accelerated network achieves a recognition accuracy of 98.73% at a recognition speed of about 0.21 seconds per inference. © 2020 Published under licence by IOP Publishing Ltd.
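The preprocessing steps named in the abstract (grayscale conversion followed by threshold-based binarization) are not detailed in this record. As a rough illustration only, the sketch below shows in Python/NumPy how an input image could be reduced to binary values before being fed to such a network; the luma weights, the threshold value of 127, and the function names are assumptions, not taken from the paper.

import numpy as np

def rgb_to_gray(img_rgb: np.ndarray) -> np.ndarray:
    # Convert an HxWx3 RGB image (uint8) to an HxW grayscale image.
    # Uses the common ITU-R BT.601 luma weights; the paper's exact
    # conversion is not specified in this record.
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
    return img_rgb.astype(np.float32) @ weights

def binarize(gray: np.ndarray, threshold: float = 127.0) -> np.ndarray:
    # Binarize a grayscale image with a fixed threshold.
    # The threshold of 127 is an assumed placeholder; the paper sets its
    # own threshold as part of reducing parameter and data size.
    return (gray > threshold).astype(np.uint8)

if __name__ == "__main__":
    # Dummy 28x28 RGB input, e.g. a handwritten-digit style image.
    img = np.random.randint(0, 256, size=(28, 28, 3), dtype=np.uint8)
    gray = rgb_to_gray(img)
    binary = binarize(gray)
    print(binary.shape, binary.dtype, binary.min(), binary.max())

This kind of 1-bit representation of inputs and weights is what allows the on-chip storage and multiply-accumulate cost to shrink enough for a small FPGA such as the AX7103.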
Source:
Journal of Physics: Conference Series
ISSN: 1742-6588
Year: 2020
Issue: 1
Volume: 1621
Language: English
Cited Count:
SCOPUS Cited Count: 2
ESI Highly Cited Papers on the List: 0