
CrossATNet - a novel cross-attention based framework for sketch-based image retrieval

Chaudhuri, Ushasi and Banerjee, Biplab and Bhattacharya, Avik and Datcu, Mihai (2020) CrossATNet - a novel cross-attention based framework for sketch-based image retrieval. Image and Vision Computing, 104, 104003. Elsevier. doi: 10.1016/j.imavis.2020.104003. ISSN 0262-8856.

PDF - Preprint version (submitted draft), 1 MB

Official URL: https://www.sciencedirect.com/science/article/abs/pii/S0262885620301359

Abstract

We propose a novel framework for cross-modal zero-shot learning (ZSL) in the context of sketch-based image retrieval (SBIR). Conventionally, the SBIR schema mainly considers simultaneous mappings between the two image views and the semantic side information. It is therefore desirable to handle fine-grained classes, mainly in the sketch domain, using a highly discriminative and semantically rich feature space. However, existing deep generative modeling based SBIR approaches focus mainly on bridging the gap between the seen and unseen classes by generating pseudo-unseen-class samples. Besides, violating the ZSL protocol by not utilizing any unseen-class information during training, such techniques do not pay explicit attention to modeling the discriminative nature of the shared space. We also note that learning a unified feature space for both views of the visual data is a tedious task, given the significant domain difference between sketches and color images. As a remedy, we introduce a novel framework for zero-shot SBIR. While we define a cross-modal triplet loss to ensure the discriminative nature of the shared space, an innovative cross-modal attention learning strategy is also proposed to guide feature extraction in the image domain by exploiting information from the respective sketch counterpart. To preserve the semantic consistency of the shared space, we consider a graph-CNN-based module which propagates the semantic class topology to the shared space. To ensure an improved response time during inference, we further explore the possibility of representing the shared space in terms of hash codes. Experimental results obtained on the benchmark TU-Berlin and Sketchy datasets confirm the superiority of CrossATNet in yielding state-of-the-art results.
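
For illustration, the following is a minimal sketch (not the authors' released implementation) of a cross-modal triplet loss of the kind described above, assuming a PyTorch setting in which sketch embeddings act as anchors and the positive and negative samples come from the image modality; the margin value and the 64-dimensional shared space are placeholder assumptions.

    import torch
    import torch.nn.functional as F

    def cross_modal_triplet_loss(sketch_anchor, image_positive, image_negative, margin=0.2):
        # Distance between the sketch anchor and an image of the same class.
        d_pos = F.pairwise_distance(sketch_anchor, image_positive)
        # Distance between the sketch anchor and an image of a different class.
        d_neg = F.pairwise_distance(sketch_anchor, image_negative)
        # Hinge loss: same-class images should be closer than different-class
        # images by at least the margin, across the two modalities.
        return F.relu(d_pos - d_neg + margin).mean()

    # Example with random embeddings in an assumed 64-dimensional shared space.
    sketches = torch.randn(8, 64)
    images_same_class = torch.randn(8, 64)
    images_other_class = torch.randn(8, 64)
    loss = cross_modal_triplet_loss(sketches, images_same_class, images_other_class)

Because anchors and positives/negatives are drawn from different modalities, minimizing such a loss pulls sketches and matching images together in the shared space while keeping non-matching images at a margin, which is the discriminative property the abstract refers to.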

elib URL of the record: https://elib.dlr.de/138086/
Document type: Journal article
Title: CrossATNet - a novel cross-attention based framework for sketch-based image retrieval
Authors:
Author | Institution or e-mail address | Author ORCID iD | ORCID Put Code
Chaudhuri, Ushasi | Indian Institute of Technology Bombay, India | NOT SPECIFIED | NOT SPECIFIED
Banerjee, Biplab | Indian Institute of Technology Bombay | NOT SPECIFIED | NOT SPECIFIED
Bhattacharya, Avik | Indian Institute of Technology Bombay | NOT SPECIFIED | NOT SPECIFIED
Datcu, Mihai | Mihai.Datcu (at) dlr.de | NOT SPECIFIED | NOT SPECIFIED
Date: December 2020
Published in: Image and Vision Computing
Refereed publication: Yes
Open Access: Yes
Gold Open Access: No
In SCOPUS: Yes
In ISI Web of Science: Yes
Volume: 104
DOI: 10.1016/j.imavis.2020.104003
Page range: 104003
Publisher: Elsevier
ISSN: 0262-8856
Status: published
Keywords: Neural networks, Sketch-based image retrieval, Cross-modal retrieval, Deep learning, Cross-attention network, Cross-triplets
HGF - Research field: Aeronautics, Space and Transport
HGF - Program: Space
HGF - Program theme: Earth Observation
DLR - Research area: Space
DLR - Research field: R EO - Earth Observation
DLR - Subfield (project, research activity): R - Project high-resolution remote sensing methods (old)
Location: Oberpfaffenhofen
Institutes & facilities: Institut für Methodik der Fernerkundung > EO Data Science
Deposited by: Karmakar, Chandrabali
Deposited on: 25 Nov 2020 16:37
Last modified: 24 Oct 2022 09:30

