
Author:

Wang, Renping [1] (Scholar: 王仁平) | Li, Shun [2] | Tang, Enhao [3] | Lan, Sen [4] | Liu, Yajing [5] | Yang, Jing [6] | Huang, Shizhen [7] (Scholar: 黄世震) | Hu, Hailong [8] (Scholar: 胡海龙)

Indexed by:

Scopus SCIE

Abstract:

Graph convolution networks (GCN) have demonstrated success in learning graph structures; however, they are limited in inductive tasks. Graph attention networks (GAT) were proposed to address the limitations of GCN and have shown high performance in graph-based tasks. Despite this success, GAT faces challenges in hardware acceleration: 1) the GAT algorithm is difficult to map to hardware; 2) sparse matrix multiplication (SPMM) is hard to implement efficiently; and 3) irregular memory accesses cause complex addressing and pipeline stalls. To this end, this paper proposes SH-GAT, an FPGA-based GAT accelerator that achieves more efficient GAT inference. The proposed approach employs several optimizations to enhance GAT performance. First, this work optimizes the GAT algorithm using split weights and softmax approximation to make it more hardware-friendly. Second, a load-balanced SPMM kernel is designed to fully exploit the available parallelism and improve data throughput. Lastly, data preprocessing prefetches each source node and its neighbor nodes, effectively addressing the pipeline stalls and complex addressing caused by irregular memory accesses. SH-GAT was evaluated on the Xilinx Alveo U280 FPGA accelerator card with three popular datasets. Compared with existing CPU, GPU, and state-of-the-art (SOTA) FPGA-based accelerators, SH-GAT achieves speedups of up to 3283x, 13x, and 2.3x, respectively.
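
As a rough illustration of two of the algorithm-level ideas named in the abstract (a hardware-friendly softmax approximation and sparse neighbor aggregation over a graph in CSR form), the following NumPy sketch shows how a single GAT attention head could be expressed. This is not the authors' implementation: the power-of-two softmax approximation, the function names (approx_softmax, gat_layer), and the parameters (a_src, a_dst, alpha) are assumptions made for illustration; the paper's exact approximation and kernel design may differ.

```python
# Minimal sketch (assumed, not the paper's code) of one GAT attention head with
# an approximate softmax and CSR-based neighbor aggregation (row-wise SPMM).
import numpy as np

def approx_softmax(x):
    # Illustrative hardware-friendly approximation: replace e**x with 2**x,
    # which maps to shifts/LUTs; subtract the max first for numerical stability.
    x = x - x.max()
    p = np.exp2(x)
    return p / p.sum()

def gat_layer(features, indptr, indices, W, a_src, a_dst, alpha=0.2):
    """One GAT head over a graph stored in CSR form (indptr, indices)."""
    h = features @ W                       # dense feature transform
    e_src = h @ a_src                      # per-node source attention terms
    e_dst = h @ a_dst                      # per-node destination attention terms
    out = np.zeros_like(h)
    for v in range(len(indptr) - 1):       # iterate destination nodes
        nbrs = indices[indptr[v]:indptr[v + 1]]
        if nbrs.size == 0:
            continue
        e = e_dst[v] + e_src[nbrs]         # raw edge scores for this neighborhood
        scores = np.where(e > 0, e, alpha * e)   # LeakyReLU
        att = approx_softmax(scores)       # approximate softmax over neighbors
        out[v] = att @ h[nbrs]             # sparse aggregation (one SPMM row)
    return out
```

In this sketch the per-destination loop corresponds to the row-wise sparse aggregation that an SPMM kernel would perform; the load balancing and node prefetching described in the abstract would, on the FPGA, distribute and schedule these neighborhood rows across processing elements to avoid pipeline stalls.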

Keyword:

accelerator; co-design; FPGA; graph; graph attention networks

Community:

  • [ 1 ] [Wang, Renping]Fuzhou Univ, Coll Phys & Informat Engn, Fuzhou 350108, Peoples R China
  • [ 2 ] [Li, Shun]Fuzhou Univ, Coll Phys & Informat Engn, Fuzhou 350108, Peoples R China
  • [ 3 ] [Tang, Enhao]Fuzhou Univ, Coll Phys & Informat Engn, Fuzhou 350108, Peoples R China
  • [ 4 ] [Liu, Yajing]Fuzhou Univ, Coll Phys & Informat Engn, Fuzhou 350108, Peoples R China
  • [ 5 ] [Yang, Jing]Fuzhou Univ, Coll Phys & Informat Engn, Fuzhou 350108, Peoples R China
  • [ 6 ] [Huang, Shizhen]Fuzhou Univ, Coll Phys & Informat Engn, Fuzhou 350108, Peoples R China
  • [ 7 ] [Hu, Hailong]Fuzhou Univ, Coll Phys & Informat Engn, Fuzhou 350108, Peoples R China
  • [ 8 ] [Lan, Sen]Shantou Univ, Coll Sci, Shantou, Peoples R China

Reprint Address:

  • Hu, Hailong (胡海龙), Fuzhou Univ, Coll Phys & Informat Engn, Fuzhou 350108, Peoples R China



Source:

ELECTRONIC RESEARCH ARCHIVE

ISSN: 2688-1594

Year: 2024

Issue: 4

Volume: 32

Page: 2310-2322

Impact Factor: 1.000 (JCR@2023)

Cited Count:

WoS CC Cited Count: 1

SCOPUS Cited Count: 1

ESI Highly Cited Papers on the List: 0

