Author:

Fu, Yingkai [1] | Li, Meng [6] | Liu, Wenxi (刘文犀) [7] | Wang, Yuanchen [2] | Zhang, Jiqing [3] | Yin, Baocai [8] | Wei, Xiaopeng [4] | Yang, Xin [5]

Indexed by:

EI SCIE

Abstract:

Event cameras, or dynamic vision sensors, have recently achieved success in tasks ranging from fundamental vision to high-level vision research. Due to their ability to asynchronously capture changes in light intensity, event cameras have an inherent advantage in capturing moving objects under challenging conditions, including low light, high dynamic range, and fast motion. Event cameras are therefore a natural fit for visual object tracking. However, current event-based trackers derived from RGB trackers simply replace the input images with event frames and still follow the conventional tracking pipeline, which mainly relies on object texture for target distinction. As a result, these trackers may not be robust in challenging scenarios such as moving cameras and cluttered foregrounds. In this paper, we propose a distractor-aware event-based tracker, named DANet, that introduces transformer modules into a Siamese network architecture. Specifically, our model is mainly composed of a motion-aware network and a target-aware network, which simultaneously exploit both motion cues and object contours from event data, so as to discover moving objects and identify the target object by removing dynamic distractors. Our DANet can be trained in an end-to-end manner without any post-processing and runs at over 80 FPS on a single V100 GPU. We conduct comprehensive experiments on two large event-tracking datasets to validate the proposed model, and we demonstrate that our tracker outperforms state-of-the-art trackers in terms of both accuracy and efficiency.
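
The abstract notes that existing event-based trackers "simply replace the input images with event frames". As a rough illustration of that preprocessing step, the sketch below accumulates an asynchronous event stream into a two-channel count image; the (x, y, t, polarity) array layout, the counting representation, and the function name are assumptions made for illustration, not the representation used by DANet.

    import numpy as np

    def events_to_frame(events: np.ndarray, height: int, width: int) -> np.ndarray:
        """Accumulate an (N, 4) event array [x, y, t, polarity] into a
        2-channel count image (channel 0 = OFF events, channel 1 = ON).
        Illustrative only; the event representation used in the paper
        may differ."""
        frame = np.zeros((2, height, width), dtype=np.float32)
        xs = events[:, 0].astype(np.intp)
        ys = events[:, 1].astype(np.intp)
        ps = (events[:, 3] > 0).astype(np.intp)  # polarity -> channel index
        np.add.at(frame, (ps, ys, xs), 1.0)      # per-pixel event counts
        return frame

Slicing the stream into fixed time windows (for example, a few milliseconds each) and building one such frame per window yields the frame sequence that a conventional Siamese tracking pipeline can consume.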

Keyword:

deep neural network; event camera; vision transformer; visual object tracking

Community:

  • [ 1 ] [Fu, Yingkai]Dalian Univ Technol, Key Lab Social Comp & Cognit Intelligence, Minist Educ, Dalian 116024, Peoples R China
  • [ 2 ] [Wang, Yuanchen]Dalian Univ Technol, Key Lab Social Comp & Cognit Intelligence, Minist Educ, Dalian 116024, Peoples R China
  • [ 3 ] [Zhang, Jiqing]Dalian Univ Technol, Key Lab Social Comp & Cognit Intelligence, Minist Educ, Dalian 116024, Peoples R China
  • [ 4 ] [Wei, Xiaopeng]Dalian Univ Technol, Key Lab Social Comp & Cognit Intelligence, Minist Educ, Dalian 116024, Peoples R China
  • [ 5 ] [Yang, Xin]Dalian Univ Technol, Key Lab Social Comp & Cognit Intelligence, Minist Educ, Dalian 116024, Peoples R China
  • [ 6 ] [Li, Meng]HiSilicon Shanghai Technol Co Ltd, Shanghai 201799, Peoples R China
  • [ 7 ] [Liu, Wenxi]Fuzhou Univ, Coll Comp & Data Sci, Fuzhou 350108, Peoples R China
  • [ 8 ] [Yin, Baocai]Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China


Source:

IEEE TRANSACTIONS ON IMAGE PROCESSING

ISSN: 1057-7149

Year: 2023

Volume: 32

Page: 6129-6141

Impact Factor: 10.8 (JCR@2023)

JCR Journal Grade: 1

CAS Journal Grade: 1

Cited Count:

WoS CC Cited Count: 2

SCOPUS Cited Count: 5

ESI Highly Cited Papers on the List: 0

