Chaudhuri, Ushashi; Banerjee, Biplab; Bhattacharya, Avik; Datcu, Mihai (2020) CMIR-NET: A deep learning based model for cross-modal retrieval in remote sensing. Pattern Recognition Letters, 131, pp. 456-462. Elsevier. doi: 10.1016/j.patrec.2020.02.006. ISSN 0167-8655.
The full text is not available from this repository.
Official URL: https://www.sciencedirect.com/science/article/abs/pii/S0167865520300453
Abstract
We address the problem of cross-modal information retrieval in the domain of remote sensing. In particular, we are interested in two application scenarios: i) cross-modal retrieval between panchromatic (PAN) and multi-spectral imagery, and ii) multi-label image retrieval between very high resolution (VHR) images and speech-based label annotations. These multi-modal retrieval scenarios are more challenging than traditional uni-modal retrieval given the inherent differences in distributions between the modalities. However, with the increasing availability of multi-source remote sensing data and the scarcity of semantic annotations, the task of multi-modal retrieval has recently become extremely important. To this end, we propose a novel deep neural network based architecture that learns a discriminative shared feature space for all the input modalities, suitable for semantically coherent information retrieval. Extensive experiments are carried out on the benchmark large-scale PAN/multi-spectral DSRSID dataset and the multi-label UC-Merced dataset. To complement the UC-Merced dataset, we generate a corpus of speech signals corresponding to its labels. Superior performance with respect to the current state of the art is observed in all cases.
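The core idea of the abstract, projecting each modality into a shared feature space and retrieving across modalities by similarity in that space, can be sketched minimally as follows. This is an illustrative stand-in only: the dimensions, the linear projections, and the random weights below are assumptions for the sketch, not the paper's actual CMIR-NET encoders or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimensions (illustrative, not from the paper).
D_PAN, D_MS, D_SHARED = 64, 32, 16

# One projection per modality into the shared space. In CMIR-NET these
# would be learned deep encoders; random matrices stand in for them here.
W_pan = rng.standard_normal((D_PAN, D_SHARED))
W_ms = rng.standard_normal((D_MS, D_SHARED))

def embed(x, W):
    """Project features into the shared space and L2-normalize them."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# A small gallery of multi-spectral features and one panchromatic query.
gallery_ms = rng.standard_normal((100, D_MS))
query_pan = rng.standard_normal(D_PAN)

gallery_z = embed(gallery_ms, W_ms)
query_z = embed(query_pan, W_pan)

# Cross-modal retrieval: rank gallery items by cosine similarity to the
# query, which is just a dot product between unit-norm shared embeddings.
scores = gallery_z @ query_z
ranking = np.argsort(-scores)
print(ranking[:5])
```

With trained, discriminative encoders in place of the random matrices, semantically matching items from the other modality would rank first under the same dot-product retrieval step.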
| elib URL of the record: | https://elib.dlr.de/130883/ |
|---|---|
| Document type: | Journal article |
| Title: | CMIR-NET: A deep learning based model for cross-modal retrieval in remote sensing |
| Authors: | Chaudhuri, Ushashi; Banerjee, Biplab; Bhattacharya, Avik; Datcu, Mihai |
| Date: | March 2020 |
| Published in: | Pattern Recognition Letters |
| Refereed publication: | Yes |
| Open Access: | No |
| Gold Open Access: | No |
| In SCOPUS: | Yes |
| In ISI Web of Science: | Yes |
| Volume: | 131 |
| Page range: | pp. 456-462 |
| DOI: | 10.1016/j.patrec.2020.02.006 |
| Publisher: | Elsevier |
| ISSN: | 0167-8655 |
| Status: | published |
| Keywords: | cross-modal information retrieval, panchromatic imagery, multi-spectral imagery |
| HGF research field: | Aeronautics, Space and Transport |
| HGF program: | Space |
| HGF program theme: | Earth Observation |
| DLR focus area: | Space |
| DLR research area: | R EO - Earth Observation |
| DLR subarea (project): | R - High-resolution remote sensing methods project (old) |
| Location: | Oberpfaffenhofen |
| Institutes & facilities: | Institut für Methodik der Fernerkundung > EO Data Science |
| Deposited by: | Karmakar, Chandrabali |
| Deposited on: | 09 Mar 2020 13:12 |
| Last modified: | 16 Jun 2023 10:18 |