
CrossGeoNet: A Framework for Building Footprint Generation of Label-Scarce Geographical Regions

Li, Qingyu and Mou, LiChao and Hua, Yuansheng and Shi, Yilei and Zhu, Xiao Xiang (2022) CrossGeoNet: A Framework for Building Footprint Generation of Label-Scarce Geographical Regions. International Journal of Applied Earth Observation and Geoinformation, 111, p. 102824. Elsevier. doi: 10.1016/j.jag.2022.102824. ISSN 1569-8432.

PDF - Published version (16 MB)

Official URL: https://www.sciencedirect.com/science/article/pii/S1569843222000267

Abstract

Building footprints are essential for understanding urban dynamics. Planet satellite imagery, with daily revisit frequency and high resolution, has opened new opportunities for building mapping at large scales. However, suitable building mapping methods are scarce for less developed regions, as these regions lack the massive annotated samples needed to provide strong supervisory information. To address this problem, we propose to learn cross-geolocation attention maps in a co-segmentation network, which improves the discriminability of buildings within the target city and provides a more general building representation across different cities. In this way, the limited supervisory information resulting from insufficient training examples in target cities can be compensated. Our method, termed CrossGeoNet, consists of three elemental modules: a Siamese encoder, a cross-geolocation attention module, and a Siamese decoder. More specifically, the encoder learns feature maps from a pair of images taken at two different geolocations. The cross-geolocation attention module learns a similarity measure from these two feature maps and provides a global overview of common objects (e.g., buildings) across cities. The decoder predicts building segmentation masks from the learned cross-geolocation attention maps and the original convolved images. The proposed method is evaluated on two datasets with different spatial resolutions, i.e., the Planet dataset (3 m/pixel) and the Inria dataset (0.3 m/pixel), collected from various locations around the world. Experimental results show that CrossGeoNet extracts buildings of different sizes well and alleviates false detections, significantly outperforming competing methods.
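The cross-geolocation attention described in the abstract can be sketched as a non-local affinity between two encoder feature maps: every location in one city's image attends over all locations in the other city's image. The sketch below is a minimal, illustrative NumPy version only; the shapes, dot-product affinity, and softmax normalization are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def cross_location_attention(feat_a, feat_b):
    """Illustrative cross-geolocation attention between two feature maps.

    feat_a, feat_b: (C, H, W) feature maps from a shared (Siamese) encoder,
    one per city. Returns features of image B routed to the spatial layout
    of image A according to pairwise similarity. This is a hypothetical
    simplification of the module described in the abstract.
    """
    C, H, W = feat_a.shape
    a = feat_a.reshape(C, H * W)                 # (C, N) flatten spatial dims
    b = feat_b.reshape(C, H * W)                 # (C, N)
    affinity = a.T @ b                           # (N, N) dot-product similarity
    # Numerically stable softmax over locations of image B
    affinity -= affinity.max(axis=1, keepdims=True)
    weights = np.exp(affinity)
    weights /= weights.sum(axis=1, keepdims=True)
    attended = b @ weights.T                     # (C, N) weighted sum of B's features
    return attended.reshape(C, H, W)

rng = np.random.default_rng(0)
fa = rng.standard_normal((8, 4, 4))
fb = rng.standard_normal((8, 4, 4))
out = cross_location_attention(fa, fb)
print(out.shape)  # (8, 4, 4)
```

In a full network, the attended map would be concatenated or fused with the original convolved features before the Siamese decoder predicts the building mask; that fusion step is omitted here.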

Item URL in elib: https://elib.dlr.de/186569/
Document Type: Article
Title: CrossGeoNet: A Framework for Building Footprint Generation of Label-Scarce Geographical Regions
Authors: Li, Qingyu; Mou, LiChao; Hua, Yuansheng; Shi, Yilei; Zhu, Xiao Xiang (institution/email, ORCID iD, and ORCID put code unspecified for all authors)
Date: July 2022
Journal or Publication Title: International Journal of Applied Earth Observation and Geoinformation
Refereed publication: Yes
Open Access: Yes
Gold Open Access: Yes
In SCOPUS: Yes
In ISI Web of Science: Yes
Volume: 111
DOI: 10.1016/j.jag.2022.102824
Page Range: p. 102824
Editors: Li, Jonathan (email, ORCID iD, and ORCID put code unspecified)
Publisher: Elsevier
ISSN: 1569-8432
Status: Published
Keywords: Building footprint, Semantic segmentation, Convolutional neural network, Co-segmentation, Planet satellite
HGF - Research field: Aeronautics, Space and Transport
HGF - Program: Space
HGF - Program Themes: Earth Observation
DLR - Research area: Spaceflight (Raumfahrt)
DLR - Program: R EO - Earth Observation
DLR - Research theme (Project): R - Artificial Intelligence
Location: Oberpfaffenhofen
Institutes and Institutions: Remote Sensing Technology Institute > EO Data Science
Deposited By: Beuchert, Tobias
Deposited On: 30 May 2022 11:32
Last Modified: 19 Oct 2023 12:49

