
Author:

Ding, Jianchuan (Ding, Jianchuan.) [1] | Gao, Lingping (Gao, Lingping.) [2] | Liu, Wenxi (Liu, Wenxi.) [3] | Piao, Haiyin (Piao, Haiyin.) [4] | Pan, Jia (Pan, Jia.) [5] | Du, Zhenjun (Du, Zhenjun.) [6] | Yang, Xin (Yang, Xin.) [7] | Yin, Baocai (Yin, Baocai.) [8]

Indexed by:

EI

Abstract:

Deep reinforcement learning has achieved great success in laser-based collision avoidance because a laser can sense accurate depth information without much redundant data, which helps the algorithm remain robust when it is migrated from the simulation environment to the real world. However, high-cost laser devices are not only difficult to deploy across a large number of robots but also show unsatisfactory robustness towards complex obstacles, including irregular obstacles, e.g., tables, chairs, and shelves, as well as complex ground and special materials. In this paper, we propose a novel monocular camera-based complex obstacle avoidance framework. In particular, we transform the captured RGB images into pseudo-laser measurements for efficient deep reinforcement learning. Compared with a traditional laser measurement captured at a certain height, which contains only one-dimensional distance information to neighboring obstacles, our proposed pseudo-laser measurement fuses the depth and semantic information of the captured RGB image, which makes our method effective for complex obstacles. We also design a feature extraction guidance module to weight the input pseudo-laser measurement so that the agent attends more reasonably to the current state, which improves the accuracy and efficiency of the obstacle avoidance policy. In addition, we adaptively add synthesized noise to the laser measurement during the training stage to decrease the sim-to-real gap and increase the robustness of our model in the real environment. Finally, the experimental results show that our framework achieves state-of-the-art performance in several virtual and real-world scenarios. © 1991-2012 IEEE.
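
The abstract outlines a pipeline in which a monocular RGB image is converted, via its depth and semantic information, into a 1-D pseudo-laser range vector, and synthetic noise is injected during training to narrow the sim-to-real gap. The sketch below is only an illustration of that idea, assuming NumPy; the function names and parameters (pseudo_laser_from_rgb, add_synthetic_noise, num_beams, max_range, noise_std, dropout_prob) are hypothetical and are not taken from the paper or its released code.

import numpy as np

def pseudo_laser_from_rgb(depth_map, obstacle_mask, num_beams=180, max_range=5.0):
    """Collapse a per-pixel depth map (H x W, metres) and a binary semantic
    obstacle mask (H x W) into num_beams range readings, taking the nearest
    obstacle pixel within each vertical image band (illustrative only)."""
    h, w = depth_map.shape
    # Treat non-obstacle pixels (e.g., free floor) as out of range.
    masked_depth = np.where(obstacle_mask > 0, depth_map, max_range)
    # Split the image horizontally into num_beams angular bands.
    bands = np.array_split(np.arange(w), num_beams)
    ranges = np.array([masked_depth[:, cols].min() for cols in bands])
    return np.clip(ranges, 0.0, max_range)

def add_synthetic_noise(ranges, noise_std=0.05, dropout_prob=0.02, max_range=5.0):
    """Perturb the pseudo-laser vector during training: Gaussian range noise
    plus occasional dropped beams, so the learned policy tolerates imperfect
    real-world depth estimates (one plausible noise model, not the paper's)."""
    noisy = ranges + np.random.normal(0.0, noise_std, size=ranges.shape)
    dropped = np.random.rand(*ranges.shape) < dropout_prob
    noisy[dropped] = max_range
    return np.clip(noisy, 0.0, max_range)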

Keyword:

Cameras; Collision avoidance; Deep learning; Laser beams; Reinforcement learning; Robot vision; Semantics

Community:

  • [ 1 ] [Ding, Jianchuan]Dalian University of Technology, School of Computer Science, Dalian; 116024, China
  • [ 2 ] [Ding, Jianchuan]Hebei University of Water Resources and Electric Engineering, School of Computer Science and Information Engineering, Cangzhou; 061016, China
  • [ 3 ] [Gao, Lingping]Dalian University of Technology, School of Computer Science, Dalian; 116024, China
  • [ 4 ] [Gao, Lingping]Alibaba Group, Hangzhou; 310000, China
  • [ 5 ] [Liu, Wenxi]Fuzhou University, College of Mathematics and Computer Science, Fuzhou; 350108, China
  • [ 6 ] [Piao, Haiyin]Northwestern Polytechnical University, Xi'an; 710072, China
  • [ 7 ] [Pan, Jia]The University of Hong Kong, Department of Computer Science, Hong Kong, Hong Kong
  • [ 8 ] [Du, Zhenjun]Siasun Robot & Automation Company Ltd., Shenyang; 110168, China
  • [ 9 ] [Yang, Xin]Dalian University of Technology, Dalian; 116024, China
  • [ 10 ] [Yin, Baocai]Dalian University of Technology, Dalian; 116024, China

Reprint's Address:

Email:


Related Keywords:

Related Article:

Source:

IEEE Transactions on Circuits and Systems for Video Technology

ISSN: 1051-8215

Year: 2023

Issue: 2

Volume: 33

Page: 756-770

8.3

JCR@2023

8.300

JCR@2023

ESI HC Threshold: 35

JCR Journal Grade: 1

CAS Journal Grade: 1

Cited Count:

WoS CC Cited Count: 0

SCOPUS Cited Count: 4

ESI Highly Cited Papers on the List: 0

WanFang Cited Count:

Chinese Cited Count:


Affiliated Colleges:
