
StfNet: A Two-Stream Convolutional Neural Network for Spatiotemporal Image Fusion

Liu, Xun and Deng, Chenwei and Chanussot, Jocelyn and Hong, Danfeng and Zhang, Baojun (2019) StfNet: A Two-Stream Convolutional Neural Network for Spatiotemporal Image Fusion. IEEE Transactions on Geoscience and Remote Sensing, 57 (9), pp. 6552-6564. IEEE - Institute of Electrical and Electronics Engineers. DOI: 10.1109/TGRS.2019.2907310 ISSN 0196-2892

PDF (postprint, accepted manuscript) - Registered users only until July 2020 - 7 MB

Official URL: https://ieeexplore.ieee.org/document/8693668

Abstract

Spatiotemporal image fusion is considered a promising way to provide Earth observations with both high spatial resolution and frequent coverage, and learning-based solutions have recently been receiving broad attention. However, these algorithms treat spatiotemporal fusion as a single-image super-resolution problem and therefore generally suffer from significant spatial information loss in the coarse images, owing to the large upscaling factors in real applications. To address this issue, in this paper we exploit the temporal information in fine image sequences and solve the spatiotemporal fusion problem with a two-stream convolutional neural network called StfNet. The novelty of this paper is twofold. First, considering the temporal dependence among image sequences, we incorporate the fine image acquired at the neighboring date to super-resolve the coarse image at the prediction date. In this way, our network predicts a fine image not only from the structural similarity between coarse and fine image pairs but also by exploiting the abundant texture information in the available neighboring fine images. Second, instead of estimating each output fine image independently, we consider the temporal relations among time-series images and formulate a temporal constraint. This temporal constraint aims to guarantee the uniqueness of the fusion result, encourages temporally consistent predictions during learning, and thus leads to more realistic final results. We evaluate the performance of StfNet using two actual Landsat-MODIS (Moderate Resolution Imaging Spectroradiometer) data sets, and both visual and quantitative evaluations demonstrate that our algorithm achieves state-of-the-art performance.
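The abstract's two ingredients — a difference-based stream anchored at a neighboring fine image, and a temporal-consistency term between the two streams — can be sketched numerically. Everything below is illustrative only: the function names are invented, and a simple pixel-replication upsampling stands in for the learned CNN mapping; this is not the paper's actual implementation.

```python
import numpy as np

def predict_fine(fine_neighbor, coarse_neighbor, coarse_pred, upscale):
    """One hypothetical 'stream': super-resolve the coarse temporal change
    and add it to the fine image from the neighboring date.

    Here np.kron with a ones block does naive pixel-replication upsampling
    as a stand-in for the CNN that StfNet would learn."""
    diff_coarse = coarse_pred - coarse_neighbor
    diff_fine = np.kron(diff_coarse, np.ones((upscale, upscale)))
    return fine_neighbor + diff_fine

def temporal_consistency_loss(pred_from_before, pred_from_after):
    """Sketch of a temporal constraint: the streams anchored at the earlier
    and later neighboring dates should agree on the predicted fine image."""
    return float(np.mean((pred_from_before - pred_from_after) ** 2))

# Toy usage: 2x2 coarse images, upscaling factor 4 -> 8x8 fine prediction.
fine_t1 = np.zeros((8, 8))
coarse_t1 = np.zeros((2, 2))
coarse_t2 = np.ones((2, 2))
pred = predict_fine(fine_t1, coarse_t1, coarse_t2, upscale=4)
```

The loss is zero exactly when both streams produce the same fine image, which mirrors the abstract's "uniqueness of the fusion result" motivation.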

Item URL in elib: https://elib.dlr.de/128212/
Document Type: Article
Title: StfNet: A Two-Stream Convolutional Neural Network for Spatiotemporal Image Fusion
Authors (institution or email; ORCID iD unspecified for all):
Liu, Xun (Beijing Institute of Technology)
Deng, Chenwei (Beijing Institute of Technology)
Chanussot, Jocelyn (Institut National Polytechnique de Grenoble)
Hong, Danfeng (Danfeng.Hong (at) dlr.de)
Zhang, Baojun (Beijing Institute of Technology)
Date: April 2019
Journal or Publication Title: IEEE Transactions on Geoscience and Remote Sensing
Refereed publication: Yes
Open Access: No
Gold Open Access: No
In SCOPUS: Yes
In ISI Web of Science: Yes
Volume: 57
DOI: 10.1109/TGRS.2019.2907310
Page Range: pp. 6552-6564
Publisher: IEEE - Institute of Electrical and Electronics Engineers
ISSN: 0196-2892
Status: Published
Keywords: Convolutional neural network, spatiotemporal image fusion, super-resolution, temporal consistency, temporal dependence (TD)
HGF - Research field: Aeronautics, Space and Transport
HGF - Program: Space
HGF - Program Themes: Earth Observation
DLR - Research area: Space
DLR - Program: R EO - Earth Observation
DLR - Research theme (Project): R - Project on high-resolution remote sensing methods
Location: Oberpfaffenhofen
Institutes and Institutions: Remote Sensing Technology Institute > EO Data Science
Deposited By: Hong, Danfeng
Deposited On: 05 Jul 2019 10:24
Last Modified: 04 Dec 2019 17:52

