
author:

Fang, Lina (方莉娜) [1] | Shen, Guixi [2] | You, Zhilong [3] | Guo, Yingya (郭迎亚) [4] | Fu, Huasheng [5] | Zhao, Zhiyuan (赵志远) [6] | Chen, Chongcheng (陈崇成) [7]

Indexed by:

EI PKU

Abstract:

Accurately identifying roadside objects such as trees, cars, and traffic poles from mobile LiDAR point clouds is of great significance for applications such as intelligent transportation systems, navigation and location services, autonomous driving, and high-precision maps. In this paper, we propose a point-group-view network (PGVNet) that classifies roadside objects into trees, cars, traffic poles, and others by fusing the global features of multi-view images with the spatial geometric information of point clouds. To reduce redundant information between similar views and highlight salient view features, PGVNet employs a hierarchical view-group-shape architecture that splits the views into groups according to their discriminative level, using a pre-trained VGG network as the backbone. In this view-group-shape architecture, globally significant features are then generated from the group descriptors and their weights. Moreover, an attention-guided fusion network fuses the global features from the multi-view images with the local geometric features from the point clouds. In particular, the global features from the multi-view images are quantified and leveraged as an attention mask to further refine the intrinsic correlation and discriminability of the local geometric features from the point clouds, which contributes to recognizing roadside objects. We evaluated the proposed method on five test datasets covering different urban scenes acquired by different mobile laser scanning systems. On these datasets, the four accuracy metrics (precision, recall, quality, F-score) for trees, cars, and traffic poles reach (99.19%, 94.27%, 93.58%, 96.63%), (94.20%, 97.56%, 92.02%, 95.68%), and (91.48%, 98.61%, 90.39%, 94.87%), respectively.
Experimental results and comparisons with state-of-the-art methods demonstrate that PGVNet can effectively identify roadside objects from mobile LiDAR point clouds and can provide data support for element construction and vectorization in high-precision map applications. © 2021, Surveying and Mapping Press. All rights reserved.
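The attention-guided fusion described in the abstract can be sketched as follows. This is a minimal NumPy illustration of the general idea only (a global multi-view feature is turned into a soft attention mask that reweights per-point local geometric features before fusion), not the authors' implementation; the array shapes, the sigmoid mask, the residual reweighting, and the max-pool-then-concatenate fusion are all assumptions.

```python
import numpy as np

def attention_guided_fusion(local_feats, global_feat, w, b):
    """Refine per-point local geometric features with an attention mask
    derived from the global multi-view feature, then fuse the two.

    local_feats : (N, D) per-point features from the point-cloud branch
    global_feat : (G,)   global feature from the multi-view image branch
    w, b        : (D, G) and (D,) projection mapping global feature -> mask
    """
    # Project the global feature into the local feature dimension and
    # squash it to (0, 1) so it acts as a channel-wise soft attention mask.
    mask = 1.0 / (1.0 + np.exp(-(w @ global_feat + b)))   # shape (D,)
    # Reweight the local features channel-wise, keeping a residual path
    # so un-attended channels are not zeroed out entirely.
    refined = local_feats * mask + local_feats            # shape (N, D)
    # Fuse: max-pool the refined per-point features into one shape
    # descriptor and concatenate it with the global view feature.
    fused = np.concatenate([refined.max(axis=0), global_feat])
    return fused                                          # shape (D + G,)
```

In this sketch the fused vector would feed a small classification head that outputs the four classes (trees, cars, traffic poles, others).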

Keyword:

Classification (of information); Deep learning; Forestry; Geometry; Laser applications; Optical radar; Poles; Quality control; Roadsides

Community:

  • [ 1 ] [Fang, Lina] Academy of Digital China (Fujian), Fuzhou University, Fuzhou 350002, China
  • [ 2 ] [Shen, Guixi] Academy of Digital China (Fujian), Fuzhou University, Fuzhou 350002, China
  • [ 3 ] [You, Zhilong] Academy of Digital China (Fujian), Fuzhou University, Fuzhou 350002, China
  • [ 4 ] [Guo, Yingya] College of Computer and Data Science, Fuzhou University, Fuzhou 350002, China
  • [ 5 ] [Fu, Huasheng] Design & Research Institute of Water Conservancy & Hydropower, Fuzhou 350002, China
  • [ 6 ] [Zhao, Zhiyuan] Academy of Digital China (Fujian), Fuzhou University, Fuzhou 350002, China
  • [ 7 ] [Chen, Chongcheng] Academy of Digital China (Fujian), Fuzhou University, Fuzhou 350002, China


Source:

Acta Geodaetica et Cartographica Sinica

ISSN: 1001-1595

CN: 11-2089/P

Year: 2021

Issue: 11

Volume: 50

Page: 1558-1573

Cited Count:

WoS CC Cited Count: 0

SCOPUS Cited Count: 6

ESI Highly Cited Papers on the List: 0

