
Deep Relearning in the Geospatial Domain for Semantic Remote Sensing Image Segmentation

Geiß, Christian and Zhu, Yue and Qiu, Chunping and Mou, LiChao and Zhu, Xiao Xiang and Taubenböck, Hannes (2021) Deep Relearning in the Geospatial Domain for Semantic Remote Sensing Image Segmentation. IEEE Geoscience and Remote Sensing Letters, pp. 1-5. IEEE - Institute of Electrical and Electronics Engineers. doi: 10.1109/LGRS.2020.3031339. ISSN 1545-598X. (In Press)

PDF - Preprint version (submitted draft), 4MB

Official URL: https://ieeexplore.ieee.org/document/9247397

Abstract

We present a classification postprocessing (CPP) technique based on fully convolutional neural networks (CNNs) for semantic remote sensing image segmentation. Conventional CPP techniques aim to enhance the classification accuracy by imposing smoothness priors in the image domain. In contrast, we propose a relearning strategy in which the initial classification outcome of a CNN model is provided to a subsequent CNN model via an extended input space, guiding the learning of discriminative feature representations in an end-to-end fashion. This deep relearning CNN (DRCNN) explicitly accounts for the geospatial domain by taking the spatial alignment of preliminary class labels into account. We evaluate learning the DRCNN in both a cumulative and a noncumulative way, i.e., extending the input space during an iterative procedure based on all previous model outputs or solely the preceding one, respectively. Moreover, the DRCNN can be conveniently coupled with alternative CPP techniques such as object-based voting (OBV). The experimental results obtained from two test sites of WorldView-II imagery underline the beneficial performance properties of the DRCNN models. They increase the accuracies of the initial CNN models on average from 72.64% to 76.01% and from 92.43% to 94.52% in terms of the κ statistic. An additional increase of 1.65 and 2.84 percentage points can be achieved when combining the DRCNN models with an OBV strategy. From an epistemological point of view, our results underline that CNNs can benefit from the consideration of preliminary model outcomes and that conventional CPP techniques can profit from an upstream relearning strategy.
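The core mechanism described in the abstract — feeding preliminary class-label maps back into the next model via an extended input space, either cumulatively or noncumulatively — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `extend_input` and the array layout (H, W, channels) are assumptions for demonstration purposes.

```python
import numpy as np

def extend_input(image, class_maps, cumulative=True):
    """Build the extended input for the next relearning iteration.

    image      : (H, W, C) array of spectral bands.
    class_maps : list of (H, W, K) per-class probability maps produced by
                 previous model iterations, ordered oldest first.
    cumulative : if True, stack ALL previous model outputs onto the input;
                 if False, use only the immediately preceding output.
    """
    extra = class_maps if cumulative else class_maps[-1:]
    return np.concatenate([image] + extra, axis=-1)

# Toy example: an 8x8 image with 4 spectral bands, 3 classes,
# and two prior model iterations.
img = np.random.rand(8, 8, 4)
out1 = np.random.rand(8, 8, 3)  # output of the initial CNN
out2 = np.random.rand(8, 8, 3)  # output of the first relearning step

x_cum = extend_input(img, [out1, out2], cumulative=True)   # 4 + 3 + 3 = 10 channels
x_non = extend_input(img, [out1, out2], cumulative=False)  # 4 + 3 = 7 channels
```

In the cumulative variant the input space grows with every iteration, so each relearning model can exploit the spatial alignment of all preliminary label maps; the noncumulative variant keeps the input size fixed per iteration.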

Item URL in elib: https://elib.dlr.de/137428/
Document Type: Article
Title: Deep Relearning in the Geospatial Domain for Semantic Remote Sensing Image Segmentation
Authors:
Author | Institution or Email | ORCID iD
Geiß, Christian | Christian.Geiss (at) dlr.de | UNSPECIFIED
Zhu, Yue | yz591 (at) cam.ac.uk | UNSPECIFIED
Qiu, Chunping | Technical University München | UNSPECIFIED
Mou, LiChao | LiChao.Mou (at) dlr.de | UNSPECIFIED
Zhu, Xiao Xiang | xiao.zhu (at) dlr.de | UNSPECIFIED
Taubenböck, Hannes | Hannes.Taubenboeck (at) dlr.de | UNSPECIFIED
Date: 2021
Journal or Publication Title: IEEE Geoscience and Remote Sensing Letters
Refereed publication: Yes
Open Access: Yes
Gold Open Access: No
In SCOPUS: Yes
In ISI Web of Science: Yes
DOI: 10.1109/LGRS.2020.3031339
Page Range: pp. 1-5
Publisher: IEEE - Institute of Electrical and Electronics Engineers
ISSN: 1545-598X
Status: In Press
Keywords: Classification postprocessing (CPP), convolutional neural networks (CNNs), deep learning, relearning
HGF - Research field: Aeronautics, Space and Transport
HGF - Program: Space
HGF - Program Themes: Earth Observation
DLR - Research area: Raumfahrt
DLR - Program: R EO - Earth Observation
DLR - Research theme (Project): R - Remote Sensing and Geo Research, R - Geoscientific remote sensing and GIS methods
Location: Oberpfaffenhofen
Institutes and Institutions: German Remote Sensing Data Center > Geo Risks and Civil Security; Remote Sensing Technology Institute > EO Data Science
Deposited By: Geiß, Christian
Deposited On: 19 Nov 2020 11:22
Last Modified: 14 Jan 2021 10:13
