
CrossATNet - a novel cross-attention based framework for sketch-based image retrieval

Chaudhuri, Ushasi and Banerjee, Biplab and Bhattacharya, Avik and Datcu, Mihai (2020) CrossATNet - a novel cross-attention based framework for sketch-based image retrieval. Image and Vision Computing, 104, p. 104003. Elsevier. doi: 10.1016/j.imavis.2020.104003. ISSN 0262-8856.

PDF - Preprint version (submitted draft), 1MB

Official URL: https://www.sciencedirect.com/science/article/abs/pii/S0262885620301359

Abstract

We propose a novel framework for cross-modal zero-shot learning (ZSL) in the context of sketch-based image retrieval (SBIR). Conventionally, the SBIR schema mainly considers simultaneous mappings between the two visual views and the semantic side information, so it is desirable to handle fine-grained classes, particularly in the sketch domain, using a highly discriminative and semantically rich feature space. However, existing deep generative modeling based SBIR approaches focus mainly on bridging the gap between the seen and unseen classes by generating pseudo-unseen-class samples. Besides violating the ZSL protocol by exploiting unseen-class information during training, such techniques pay no explicit attention to modeling the discriminative nature of the shared space. We also note that learning a unified feature space for both visual modalities is challenging given the significant domain difference between sketches and color images. As a remedy, we introduce a novel framework for zero-shot SBIR. We define a cross-modal triplet loss to ensure the discriminative nature of the shared space, and we propose an innovative cross-modal attention learning strategy that guides feature extraction in the image domain by exploiting information from the corresponding sketch. To preserve the semantic consistency of the shared space, we employ a graph CNN based module that propagates the semantic class topology to the shared space. To improve response time during inference, we further explore representing the shared space in terms of hash codes. Experimental results on the benchmark TU-Berlin and Sketchy datasets confirm the superiority of CrossATNet in yielding state-of-the-art results.
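
To make the two key ingredients of the framework more concrete, the following PyTorch-style sketch illustrates one plausible form of the cross-modal triplet loss described above. Function names, the margin value, and the batch-hard mining strategy are illustrative assumptions and are not taken from the paper's implementation.

    import torch
    import torch.nn.functional as F

    def cross_modal_triplet_loss(sketch_emb, image_emb, labels, margin=0.2):
        # Hypothetical cross-modal triplet loss: sketch embeddings act as anchors,
        # same-class image embeddings as positives, other-class ones as negatives.
        s = F.normalize(sketch_emb, dim=1)   # (B, D) sketch anchors
        v = F.normalize(image_emb, dim=1)    # (B, D) image embeddings
        dist = torch.cdist(s, v)             # (B, B) pairwise Euclidean distances
        same = labels.unsqueeze(1).eq(labels.unsqueeze(0))            # (B, B) class-match mask
        pos = (dist * same.float()).max(dim=1).values                 # hardest positive per anchor
        neg = dist.masked_fill(same, float("inf")).min(dim=1).values  # hardest negative per anchor
        return F.relu(pos - neg + margin).mean()

A minimal version of sketch-guided cross-attention over a convolutional image feature map might look as follows; the single-head dot-product formulation, layer names, and feature dimension are again assumptions for illustration only.

    import torch
    import torch.nn as nn

    class SketchGuidedAttention(nn.Module):
        # Illustrative cross-attention pooling: a global sketch descriptor
        # re-weights the spatial cells of an image feature map before pooling.
        def __init__(self, dim=512):
            super().__init__()
            self.query = nn.Linear(dim, dim)    # projects the sketch descriptor
            self.key = nn.Conv2d(dim, dim, 1)   # projects each image feature cell

        def forward(self, sketch_feat, image_map):
            # sketch_feat: (B, D) global sketch embedding; image_map: (B, D, H, W)
            B, D, H, W = image_map.shape
            q = self.query(sketch_feat).unsqueeze(1)                  # (B, 1, D)
            k = self.key(image_map).flatten(2).transpose(1, 2)        # (B, H*W, D)
            attn = torch.softmax((q * k).sum(-1) / D ** 0.5, dim=1)   # (B, H*W) spatial weights
            v = image_map.flatten(2).transpose(1, 2)                  # (B, H*W, D)
            return (attn.unsqueeze(-1) * v).sum(dim=1)                # (B, D) attended image feature

In a zero-shot retrieval pipeline, shared-space embeddings produced this way could additionally be binarized (for example via their sign) to obtain the compact hash codes mentioned in the abstract; that step is likewise an assumption about one possible realization, not the paper's exact procedure.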

Item URL in elib: https://elib.dlr.de/138086/
Document Type: Article
Title: CrossATNet - a novel cross-attention based framework for sketch-based image retrieval
Authors:
Author | Institution or Email | ORCID iD | ORCID Put Code
Chaudhuri, Ushasi | Indian Institute of Technology Bombay, India | UNSPECIFIED | UNSPECIFIED
Banerjee, Biplab | Indian Institute of Technology Bombay | UNSPECIFIED | UNSPECIFIED
Bhattacharya, Avik | Indian Institute of Technology Bombay | UNSPECIFIED | UNSPECIFIED
Datcu, Mihai | UNSPECIFIED | UNSPECIFIED | UNSPECIFIED
Date: December 2020
Journal or Publication Title: Image and Vision Computing
Refereed publication: Yes
Open Access: Yes
Gold Open Access: No
In SCOPUS: Yes
In ISI Web of Science: Yes
Volume: 104
DOI: 10.1016/j.imavis.2020.104003
Page Range: p. 104003
Publisher: Elsevier
ISSN: 0262-8856
Status: Published
Keywords: Neural networks, Sketch-based image retrieval, Cross-modal retrieval, Deep-learning, Cross-attention network, Cross-triplets
HGF - Research field: Aeronautics, Space and Transport
HGF - Program: Space
HGF - Program Themes: Earth Observation
DLR - Research area: Raumfahrt
DLR - Program: R EO - Earth Observation
DLR - Research theme (Project): R - Vorhaben hochauflösende Fernerkundungsverfahren (old)
Location: Oberpfaffenhofen
Institutes and Institutions: Remote Sensing Technology Institute > EO Data Science
Deposited By: Karmakar, Chandrabali
Deposited On: 25 Nov 2020 16:37
Last Modified: 24 Oct 2022 09:30
