
Semantic Extraction from TerraSAR-X Dataset - Different Perspectives

Yao, Wei (2019) Semantic Extraction from TerraSAR-X Dataset - Different Perspectives. TerraSAR-X Science Team Meeting 2019, 21-24 Oct 2019, Oberpfaffenhofen, Germany.

Full text not available from this repository.

Official URL: https://tandemx-science.dlr.de/cgi-bin/wcm.pl?page=Tdm-Science-Team-Meeting


With the purpose of semantic extraction from a TerraSAR-X dataset, this paper analyzes the problem from three different perspectives: a visualization that helps users better understand and interpret the dataset; a semi-automated method for hierarchical clustering and classification; and a fully convolutional network applied to the same objective, with OpenStreetMap data serving as ground truth.

First, we propose a visualization tool that enhances the understanding of datasets up to big-data scale. In contrast to classic data models that rely on computed features (color, texture, etc.), this tool is fully feature-free, as it operates directly on the data files. The Fast Compression Distance (FCD) and t-distributed Stochastic Neighbor Embedding (t-SNE) are applied to visualize a large TerraSAR-X dataset annotated with up to three layers of hierarchical semantic labels, as well as a Sentinel-1 dataset with 10 annotated classes in VV and VH polarization modes. We analyze the visualization results in the manifold space and interpret them with the available semantic labels. The interpretation is supported by a Vega-based interactive tool that allows users to zoom in and out when processing large numbers of data points.

Secondly, we propose a semi-automated hierarchical clustering and classification framework for Synthetic Aperture Radar (SAR) image annotation. Our implementation of the framework allows the classification and annotation of image data ranging from single scenes up to large satellite data archives. The framework comprises three stages: first, each image is cut into patches, and each patch is transformed into a texture feature vector; second, similar feature vectors are grouped into clusters, where the number of clusters is determined by repeated cluster splitting that optimizes their Gaussianity; finally, the most appropriate class (i.e., a semantic label) is assigned to each image patch.
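The feature-free visualization rests on compression-based similarity: the FCD belongs to the same family as the Normalized Compression Distance (NCD), which approximates how much two byte streams share by how well they compress together. The following is an illustrative sketch of that idea, not the authors' implementation; the toy byte strings stand in for raw patch files, and the resulting distance matrix could be fed to a manifold embedding such as t-SNE with a precomputed metric.

```python
import zlib
import numpy as np

def compressed_size(data: bytes) -> int:
    """Size of the zlib-compressed byte stream (a compressor stands
    in for the Kolmogorov-complexity estimate used by NCD/FCD)."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance between two byte streams."""
    cx, cy, cxy = compressed_size(x), compressed_size(y), compressed_size(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy stand-ins for raw image-patch files (no feature extraction needed).
patches = [
    b"water water water water water",
    b"urban block urban block roof",
    b"water water ripple water wave",
]

n = len(patches)
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        D[i, j] = ncd(patches[i], patches[j])

# D can now be embedded into 2-D for visualization, e.g. with
# sklearn.manifold.TSNE(metric="precomputed", init="random").
```

Because the distance is computed directly on the files, no color or texture descriptors are ever extracted, which is what makes the approach feature-free.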
This last step is accomplished by semi-supervised learning. For the testing and validation of the implemented framework, a concept for two-level hierarchical semantic image content annotation was designed and applied to a manually annotated reference dataset consisting of various TerraSAR-X image patches with meter-scale resolution. Here, the upper level contains general classes, while the lower level provides more detailed sub-classes for each parent class. For a quantitative and visual evaluation of the proposed framework, we compared the relationships between the clustering results, the semi-supervised classification results, and the two-level annotations. It turned out that the proposed method obtains reliable results for the upper-level (i.e., general) semantic classes; however, because there are many detailed sub-classes but only few instances of each, it generates inferior results for the lower level. The most important contributions of this paper are the integration of modified Gaussian-means and modified cluster-then-label algorithms for the purpose of large-scale SAR image annotation, as well as the measurement of the clustering and classification performance of various distance metrics.

Thirdly, semantic segmentation of SAR imagery is a rarely touched area, owing to the specific image characteristics of SAR data. We therefore propose a dataset consisting of three data sources: TerraSAR-X images, Google Earth images, and OpenStreetMap data, with the purpose of performing both SAR and optical image semantic segmentation. Using fully convolutional networks and deep residual networks with pre-trained weights, we investigate the accuracy and mean IoU values of semantic segmentation for both SAR and optical image patches. The best segmentation accuracies for SAR and optical data are around 74% and 82%, respectively.
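The "repeated cluster splitting to optimize Gaussianity" follows the Gaussian-means (G-means) idea: a cluster is split in two, its points are projected onto the axis between the two child centroids, and the split is kept only if the projection fails a statistical test of normality. The sketch below illustrates that mechanism under simplifying assumptions (Anderson-Darling test at the 1% level, a small cluster budget); it is not the authors' modified algorithm.

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.stats import anderson

def gmeans(X, max_clusters=8, min_size=8):
    """G-means-style clustering: keep splitting a cluster while the
    1-D projection of its points onto the axis between the two child
    centroids fails an Anderson-Darling test of Gaussianity."""
    clusters = [X]
    changed = True
    while changed and len(clusters) < max_clusters:
        changed, nxt = False, []
        for C in clusters:
            if len(C) < min_size:
                nxt.append(C)
                continue
            centers, labels = kmeans2(C, 2, minit="++", seed=0)
            v = centers[1] - centers[0]
            proj = (C - C.mean(0)) @ v / (np.linalg.norm(v) + 1e-12)
            res = anderson(proj)
            # critical_values[-1] is the 1% significance level
            if res.statistic > res.critical_values[-1]:
                nxt += [C[labels == 0], C[labels == 1]]  # keep the split
                changed = True
            else:
                nxt.append(C)  # projection looks Gaussian: stop splitting
        clusters = nxt
    return clusters

# Two well-separated Gaussian blobs: the test should accept one split
# and then stop, so the number of clusters is discovered, not preset.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (150, 2)), rng.normal(8, 1, (150, 2))])
parts = gmeans(X)
```

Determining the cluster count from the data in this way is what makes the framework usable on archives where the number of semantic classes per scene is unknown in advance.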
Moreover, we study SAR models that combine multiple data sources: Google Earth images and OpenStreetMap data.
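The mean IoU (intersection over union) reported for the segmentation experiments averages, over all classes, the overlap between predicted and reference label masks. A minimal sketch of the metric, assuming integer label maps:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union of two integer label maps,
    averaged over the classes that appear in either map."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Tiny worked example: class 0 -> IoU 1/2, class 1 -> IoU 2/3.
pred = np.array([0, 0, 1, 1])
gt = np.array([0, 1, 1, 1])
score = mean_iou(pred, gt, num_classes=2)  # (1/2 + 2/3) / 2 = 7/12
```

Unlike plain pixel accuracy, mean IoU is not inflated by large easy classes, which is why segmentation work typically reports both.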

Item URL in elib:https://elib.dlr.de/131131/
Document Type:Conference or Workshop Item (Speech)
Title:Semantic Extraction from TerraSAR-X Dataset - Different Perspectives
Authors:Yao, Wei
Institution or Email of Authors:Wei.Yao (at) dlr.de
Authors ORCID iD:UNSPECIFIED
Refereed publication:No
Open Access:No
Gold Open Access:No
In ISI Web of Science:No
Keywords:Semantic Extraction, Synthetic Aperture Radar, Fast Compression Distance
Event Title:TerraSAR-X Science Team Meeting 2019
Event Location:Oberpfaffenhofen, Germany
Event Type:International Conference
Event Dates:21-24 Oct 2019
HGF - Research field:Aeronautics, Space and Transport
HGF - Program:Space
HGF - Program Themes:Earth Observation
DLR - Research area:Raumfahrt (Space)
DLR - Program:R EO - Erdbeobachtung (Earth Observation)
DLR - Research theme (Project):R - Vorhaben hochauflösende Fernerkundungsverfahren (high-resolution remote sensing methods)
Location: Oberpfaffenhofen
Institutes and Institutions:Remote Sensing Technology Institute > EO Data Science
Deposited By: Karmakar, Chandrabali
Deposited On:04 Dec 2019 15:04
Last Modified:04 Dec 2019 15:04

