Yao, Wei (2019) Semantic Extraction from TerraSAR-X Dataset - Different Perspectives. TerraSAR-X Science Team Meeting 2019, 2019-10-21 - 2019-10-24, Oberpfaffenhofen, Germany.
The full text of this item is not available from this archive.
Official URL: https://tandemx-science.dlr.de/cgi-bin/wcm.pl?page=Tdm-Science-Team-Meeting
Abstract
With the purpose of semantic extraction from the TerraSAR-X dataset, this paper analyzes the problem from three different perspectives: visualization, which helps to better understand and interpret the dataset; a semi-automated method for hierarchical clustering and classification; and, with OpenStreetMap data as ground truth, semantic segmentation with fully convolutional networks.
First, we propose a visualization tool that enhances the understanding of data sets up to big-data scale. Compared to classic data models, which rely on computed features (color, texture, etc.), this tool is fully feature-free, as it operates directly on the data files. The Fast Compression Distance (FCD) and t-distributed Stochastic Neighbor Embedding (t-SNE) have been applied to visualize a large TerraSAR-X dataset annotated with up to three layers of hierarchical semantic labels, and a Sentinel-1 dataset with 10 annotated classes in VV and VH polarization modes. We analyze the visualization results in the manifold space and interpret them with the available semantic labels. The interpretation is based on a Vega-style interactive tool that allows users to zoom in and out when exploring large numbers of data points.
Secondly, we propose a semi-automated hierarchical clustering and classification framework for Synthetic Aperture Radar (SAR) image annotation. Our implementation of the framework allows the classification and annotation of image data ranging from single scenes up to large satellite data archives. The framework comprises three stages: first, each image is cut into patches and each patch is transformed into a texture feature vector; second, similar feature vectors are grouped into clusters, where the number of clusters is determined by repeated cluster splitting to optimize their Gaussianity; finally, the most appropriate class (i.e., a semantic label) is assigned to each image patch. This last step is accomplished by semi-supervised learning. For the testing and validation of the implemented framework, a concept for a two-level hierarchical semantic image content annotation was designed and applied to a manually annotated reference data set consisting of various TerraSAR-X image patches with meter-scale resolution. Here, the upper level contains general classes, while the lower level provides more detailed sub-classes for each parent class. For a quantitative and visual evaluation of the proposed framework, we compared the relationships between the clustering results, the semi-supervised classification results, and the two-level annotations. It turned out that our proposed method obtains reliable results for the upper-level (i.e., general) semantic classes; however, due to the large number of detailed sub-classes and the few instances of each sub-class, it generates inferior results for the lower level. The most important contributions of this paper are the integration of modified Gaussian-means and modified cluster-then-label algorithms for large-scale SAR image annotation, as well as the measurement of the clustering and classification performance of various distance metrics.
Thirdly, semantic segmentation of SAR imagery is a rarely touched area due to the specific image characteristics of SAR. We propose a dataset consisting of three data sources: TerraSAR-X images, Google Earth images, and OpenStreetMap data, with the purpose of performing semantic segmentation on both SAR and optical imagery. Using fully convolutional networks and deep residual networks with pre-trained weights, we investigate the accuracy and mean IoU values of semantic segmentation for both SAR and optical image patches. The best segmentation accuracies for SAR and optical data are around 74% and 82%, respectively. Moreover, we study SAR models that combine multiple data sources: Google Earth images and OpenStreetMap data.
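The feature-free distance computation behind the visualization could be sketched as follows. As a minimal illustration, the Fast Compression Distance is approximated here by the closely related Normalized Compression Distance (NCD) computed with zlib; the paper's exact FCD formulation is not reproduced, and all function names are hypothetical. The resulting distance matrix could then be embedded in 2-D with a t-SNE implementation that accepts precomputed distances (e.g., scikit-learn's `TSNE` with `metric="precomputed"`).

```python
import zlib
import numpy as np

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed byte string (highest compression level)."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance, a stand-in for the paper's FCD:
    small when x and y share structure, close to 1 when they do not."""
    cx, cy = compressed_size(x), compressed_size(y)
    cxy = compressed_size(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def distance_matrix(patches):
    """Pairwise feature-free distances between raw patch byte strings.
    No features are extracted; the compressor operates on the data directly."""
    n = len(patches)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = ncd(patches[i], patches[j])
    return d
```

In practice the patch byte strings would come straight from the image files, which is what makes the approach feature-free.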
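The semi-supervised labeling stage of the clustering framework follows a cluster-then-label scheme: each cluster inherits the majority semantic label of its few labeled members, and that label is propagated to every patch in the cluster. A minimal sketch of the plain (unmodified) algorithm, with hypothetical variable names:

```python
import numpy as np
from collections import Counter

def cluster_then_label(cluster_ids, labeled_idx, labels):
    """Propagate labels from a few annotated patches to whole clusters.

    cluster_ids -- cluster assignment per patch (array of ints)
    labeled_idx -- indices of the patches that carry a manual annotation
    labels      -- semantic label of each annotated patch, same order
    Returns one label per patch; None if a cluster has no labeled member.
    """
    cluster_label = {}
    for c in np.unique(cluster_ids):
        # collect labels of the annotated patches falling into cluster c
        members = [labels[k] for k, i in enumerate(labeled_idx)
                   if cluster_ids[i] == c]
        if members:
            cluster_label[c] = Counter(members).most_common(1)[0][0]
    return [cluster_label.get(c) for c in cluster_ids]
```

The Gaussianity-driven cluster splitting that precedes this step (the modified Gaussian-means stage) is omitted here for brevity.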
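The reported mean IoU is the average of per-class intersection-over-union scores between a predicted and a ground-truth label map; a common convention, assumed here, is to skip classes absent from both. A minimal sketch:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union between two integer label maps.
    Classes that appear in neither map are excluded from the average."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union > 0:
            ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```

Over a test set, the per-class intersections and unions would typically be accumulated across all patches before dividing.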
elib URL of this entry: https://elib.dlr.de/131131/
Document type: Conference contribution (talk)
Title: Semantic Extraction from TerraSAR-X Dataset - Different Perspectives
Authors: Yao, Wei
Date: 2019
Refereed publication: No
Open Access: No
Gold Open Access: No
In SCOPUS: No
In ISI Web of Science: No
Status: published
Keywords: Semantic Extraction, Synthetic Aperture Radar, Fast Compression Distance
Event title: TerraSAR-X Science Team Meeting 2019
Event location: Oberpfaffenhofen, Germany
Event type: international conference
Event start: 21 October 2019
Event end: 24 October 2019
HGF - Research field: Aeronautics, Space and Transport
HGF - Program: Space
HGF - Program theme: Earth Observation
DLR - Research area: Space
DLR - Research field: R EO - Earth Observation
DLR - Research theme (project): R - High-Resolution Remote Sensing Methods (old)
Location: Oberpfaffenhofen
Institutes & facilities: Remote Sensing Technology Institute > EO Data Science
Deposited by: Karmakar, Chandrabali
Deposited on: 04 Dec 2019 15:04
Last modified: 24 Apr 2024 20:34