
Multi-Label Guided Soft Contrastive Learning for Efficient Earth Observation Pretraining

Wang, Yi and Albrecht, Conrad M and Zhu, Xiao Xiang (2024) Multi-Label Guided Soft Contrastive Learning for Efficient Earth Observation Pretraining. IEEE Transactions on Geoscience and Remote Sensing. IEEE - Institute of Electrical and Electronics Engineers. ISSN 0196-2892.

PDF - Preprint version (submitted draft), 5MB

Official URL: https://doi.org/10.48550/arXiv.2405.20462

Abstract

Self-supervised pretraining on large-scale satellite data has raised great interest in building Earth observation (EO) foundation models. However, many important resources beyond pure satellite imagery, such as land-cover-land-use products that provide free global semantic information, as well as vision foundation models that hold strong knowledge of the natural world, are not widely studied. In this work, we show that these free additional resources not only help resolve common contrastive learning bottlenecks but also significantly boost the efficiency and effectiveness of EO pretraining. Specifically, we first propose soft contrastive learning, which optimizes cross-scene soft similarity based on land-cover-generated multi-label supervision, naturally solving the issue of multiple positive samples and overly strict positive matching in complex scenes. Second, we revisit and explore cross-domain continual pretraining for both multispectral and SAR imagery, building efficient EO foundation models from the strongest vision models such as DINOv2. Adapting simple weight-initialization and Siamese masking strategies into our soft contrastive learning framework, we demonstrate impressive continual pretraining performance even when the input modalities are not aligned. Without prohibitive training, we produce multispectral and SAR foundation models that achieve significantly better results on 10 out of 11 downstream tasks than most existing SOTA models. For example, our ResNet50/ViT-S achieve 84.8/85.0 linear probing mAP scores on BigEarthNet-10%, better than most existing ViT-L models; under the same setting, our ViT-B sets a new record of 86.8 in multispectral and 82.5 in SAR, the latter even better than many multispectral models.
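The core idea behind the soft contrastive objective described above can be illustrated with a short sketch. Assuming each scene carries a binary multi-label vector derived from a land-cover product, the usual one-hard-positive InfoNCE target can be replaced by a soft target distribution proportional to label overlap, so scenes sharing land-cover classes count as partial positives. This is a simplified illustration of the general idea, not the authors' released implementation; the function name and the choice of cosine label overlap are placeholders.

```python
import numpy as np

def soft_contrastive_loss(z, y, tau=0.1):
    """Illustrative soft contrastive loss guided by multi-label overlap.

    z : (B, D) array of embeddings for a batch of scenes.
    y : (B, C) binary multi-label vectors (e.g. land-cover classes present).
    Instead of a single hard positive per anchor, the target distribution
    over the batch is derived from label similarity (a simplifying
    assumption; the paper's exact soft-similarity definition may differ).
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)          # unit-normalize embeddings
    yn = y / (np.linalg.norm(y, axis=1, keepdims=True) + 1e-8)
    sim = z @ z.T / tau                                        # temperature-scaled cosine similarities
    lab = yn @ yn.T                                            # label-overlap similarities in [0, 1]
    B = len(z)
    mask = ~np.eye(B, dtype=bool)                              # exclude self-pairs
    loss = 0.0
    for i in range(B):
        logits = sim[i][mask[i]]
        p = np.exp(logits - logits.max())
        p /= p.sum()                                           # softmax over the other samples
        t = lab[i][mask[i]]
        t /= t.sum() + 1e-8                                    # soft targets from label overlap
        loss += -(t * np.log(p + 1e-12)).sum()                 # cross-entropy to soft targets
    return loss / B
```

With identical labels the targets reduce to a uniform distribution over all matching scenes, which addresses the multiple-positives issue the abstract mentions; with partially overlapping labels the attraction is graded rather than all-or-nothing.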

elib URL of this entry: https://elib.dlr.de/207106/
Document type: Journal article
Title: Multi-Label Guided Soft Contrastive Learning for Efficient Earth Observation Pretraining
Authors:
  Wang, Yi (Yi4.Wang (at) tum.de), ORCID: not specified
  Albrecht, Conrad M (Conrad.Albrecht (at) dlr.de), ORCID: https://orcid.org/0009-0009-2422-7289
  Zhu, Xiao Xiang (xiaoxiang.zhu (at) tum.de), ORCID: not specified
Date: 2024
Published in: IEEE Transactions on Geoscience and Remote Sensing
Refereed publication: Yes
Open Access: Yes
Gold Open Access: No
In SCOPUS: Yes
In ISI Web of Science: Yes
Publisher: IEEE - Institute of Electrical and Electronics Engineers
ISSN: 0196-2892
Status: accepted paper
Keywords: weakly supervised learning, contrastive self-supervised learning, multispectral, SAR, geospatial foundation model
HGF research field: Aeronautics, Space and Transport
HGF program: Space
HGF program topic: Earth Observation
DLR focus area: Space
DLR research area: R EO - Earth Observation
DLR subtopic (project): R - SAR Methods, R - Optical Remote Sensing, R - Artificial Intelligence
Location: Oberpfaffenhofen
Institutes & facilities: Institut für Methodik der Fernerkundung > EO Data Science
Deposited by: Albrecht, Conrad M
Deposited on: 07 Oct 2024 10:19
Last modified: 11 Oct 2024 14:02

