
Author:

Liu, Xuanguang [1] | Li, Yujie [2] | Dai, Chenguang [3] | Zhang, Zhenchao [4] | Ding, Lei [5] | Li, Mengmeng [6] | Wang, Hanyun [7]

Indexed by:

Scopus SCIE

Abstract:

Building extraction from very high-resolution remote-sensing images still faces two main issues: (1) small buildings are severely omitted, and the extracted building shapes show low consistency with ground truths; (2) supervised deep-learning methods perform poorly in few-shot scenarios, limiting their practical application. To address the first issue, we propose an asymmetric Siamese multitask network integrating adversarial edge learning, called ASMBR-Net, for building extraction. It contains an efficient asymmetric Siamese feature extractor comprising pre-trained convolutional neural network and Transformer backbones under the pre-training and fine-tuning paradigm. This extractor balances local and global feature representation and reduces training costs. Adversarial edge learning automatically integrates edge constraints and strengthens the modeling of small and complex building-shape patterns. To address the second issue, we introduce a self-training framework and design an instance transfer strategy to generate reliable pseudo-samples. We evaluated the proposed method on the WHU and Massachusetts (MA) datasets and a self-constructed Dongying (DY) dataset, comparing it with state-of-the-art methods. The experimental results show that our method achieves the highest F1-scores of 96.06%, 86.90%, and 84.98% on the WHU, MA, and DY datasets, respectively. Ablation experiments further verify the effectiveness of the proposed method. The code is available at: https://github.com/liuxuanguang/ASMBR-Net
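The F1-scores reported in the abstract are the standard pixel-wise metric for binary building masks. A minimal illustrative sketch (not the authors' code; the function name and toy masks are hypothetical) of how such a score is computed:

```python
import numpy as np

def pixel_f1(pred: np.ndarray, truth: np.ndarray) -> float:
    """Pixel-wise F1-score between binary masks (1 = building, 0 = background)."""
    tp = np.sum((pred == 1) & (truth == 1))  # correctly detected building pixels
    fp = np.sum((pred == 1) & (truth == 0))  # false alarms
    fn = np.sum((pred == 0) & (truth == 1))  # omitted building pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy 2x3 predicted and ground-truth masks
pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
truth = np.array([[1, 0, 0],
                  [0, 1, 1]])
print(f"F1 = {pixel_f1(pred, truth):.4f}")  # → F1 = 0.6667
```

Because F1 is the harmonic mean of precision and recall, it penalizes exactly the failure mode the abstract highlights: omitted small buildings raise the false-negative count and depress recall.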

Keyword:

Adversarial learning; Building extraction; Multitask learning; Self-learning; VHR remote-sensing image

Community:

  • [ 1 ] [Liu, Xuanguang]Informat Engn Univ, Inst Geospatial Informat, Zhengzhou, Peoples R China
  • [ 2 ] [Dai, Chenguang]Informat Engn Univ, Inst Geospatial Informat, Zhengzhou, Peoples R China
  • [ 3 ] [Zhang, Zhenchao]Informat Engn Univ, Inst Geospatial Informat, Zhengzhou, Peoples R China
  • [ 4 ] [Ding, Lei]Informat Engn Univ, Inst Geospatial Informat, Zhengzhou, Peoples R China
  • [ 5 ] [Wang, Hanyun]Informat Engn Univ, Inst Geospatial Informat, Zhengzhou, Peoples R China
  • [ 6 ] [Liu, Xuanguang]Key Lab Smart Earth, Beijing, Peoples R China
  • [ 7 ] [Dai, Chenguang]Key Lab Smart Earth, Beijing, Peoples R China
  • [ 8 ] [Zhang, Zhenchao]Key Lab Smart Earth, Beijing, Peoples R China
  • [ 9 ] [Wang, Hanyun]Key Lab Smart Earth, Beijing, Peoples R China
  • [ 10 ] [Li, Yujie]Fuzhou Univ, Acad Digital China Fujian, Fuzhou, Peoples R China
  • [ 11 ] [Li, Mengmeng]Fuzhou Univ, Acad Digital China Fujian, Fuzhou, Peoples R China

Reprint Author's Address:

  • [Dai, Chenguang]Informat Engn Univ, Inst Geospatial Informat, Zhengzhou, Peoples R China


Source:

INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION

ISSN: 1569-8432

Year: 2025

Volume: 136

Impact Factor: 7.600 (JCR@2023)

