Unknown Object Segmentation from Stereo Images

Durner, Maximilian and Boerdijk, Wout and Sundermeyer, Martin and Friedl, Werner and Marton, Zoltan-Csaba and Triebel, Rudolph (2021) Unknown Object Segmentation from Stereo Images. In: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2021. International Conference on Intelligent Robots and Systems, 2021-09-27 - 2021-10-01, Prague (online). doi: 10.1109/IROS51168.2021.9636281. ISBN 978-1-6654-1714-3. ISSN 2153-0858.

Full text: PDF (5 MB)

Abstract

Although instance-aware perception is a key prerequisite for many autonomous robotic applications, most methods solve the problem only partially by focusing solely on known object categories. For robots interacting in dynamic and cluttered environments, however, this assumption is not realistic and severely limits the range of potential applications. Therefore, we propose a novel object instance segmentation approach that does not require any prior semantic or geometric information about the objects. In contrast to existing works, we do not use depth data explicitly as input, but rely on the insight that slight viewpoint changes, such as those provided by stereo image pairs, are often sufficient to determine object boundaries and thus to segment objects. Focusing on the versatility of stereo sensors, we employ a transformer-based architecture that maps directly from the pair of input images to the object instances. This has the major advantage that, instead of computing the segmentation on a noisy and potentially incomplete depth map, we use the original image pair to infer both the object instances and a dense depth map. In experiments across several application domains, we show that our Instance Stereo Transformer (INSTR) algorithm outperforms current state-of-the-art methods based on depth maps. Training code and pretrained models are available at https://github.com/DLR-RM/instr.
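To make the stereo-to-instances idea from the abstract concrete, below is a minimal, hypothetical PyTorch sketch: features from both views are flattened into one token sequence, a transformer decodes a fixed set of learned instance queries against it (in the style of query-based segmenters such as DETR), and each query yields one soft instance mask alongside a coarse dense disparity map. All class, layer, and variable names here are illustrative assumptions, not the actual INSTR implementation; see the linked repository for the real architecture and pretrained models.

    import torch
    import torch.nn as nn

    class StereoInstanceSketch(nn.Module):
        """Toy stand-in for a stereo-to-instance-masks mapping (not INSTR itself)."""

        def __init__(self, dim=256, num_queries=20):
            super().__init__()
            # Shared backbone applied to both views (weights shared across the pair).
            self.backbone = nn.Conv2d(3, dim, kernel_size=7, stride=8, padding=3)
            # The transformer correlates left/right tokens; the slight viewpoint
            # change between the two views carries the object-boundary signal.
            self.transformer = nn.Transformer(d_model=dim, nhead=8,
                                              num_encoder_layers=2,
                                              num_decoder_layers=2,
                                              batch_first=True)
            # Learned instance queries: each query proposes one object candidate.
            self.queries = nn.Parameter(torch.randn(num_queries, dim))
            self.mask_head = nn.Linear(dim, dim)   # per-query mask embedding
            self.disp_head = nn.Conv2d(dim, 1, 1)  # dense disparity (depth) head

        def forward(self, left, right):
            f_l = self.backbone(left)              # (B, C, H', W')
            f_r = self.backbone(right)
            # Flatten both views into a single token sequence for the encoder.
            tokens = torch.cat([f_l, f_r], dim=3).flatten(2).transpose(1, 2)
            q = self.queries.unsqueeze(0).expand(left.shape[0], -1, -1)
            dec = self.transformer(tokens, q)      # (B, num_queries, C)
            # Dot product of query embeddings with left-view features gives one
            # soft instance mask per query.
            masks = torch.einsum('bqc,bchw->bqhw', self.mask_head(dec), f_l)
            return masks.sigmoid(), self.disp_head(f_l)

    left, right = torch.rand(1, 3, 64, 128), torch.rand(1, 3, 64, 128)
    masks, disparity = StereoInstanceSketch()(left, right)
    print(masks.shape, disparity.shape)  # (1, 20, 8, 16) and (1, 1, 8, 16)

Details a trained model would additionally need, such as positional encodings, multi-scale features, and matching-based training losses, are omitted here for brevity.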

Item URL in elib: https://elib.dlr.de/145858/
Document Type: Conference or Workshop Item (Speech)
Title: Unknown Object Segmentation from Stereo Images
Authors:
Author               | Institution or Email | ORCID iD                              | ORCID Put Code
Durner, Maximilian   | UNSPECIFIED          | https://orcid.org/0000-0001-8885-5334 | UNSPECIFIED
Boerdijk, Wout       | UNSPECIFIED          | https://orcid.org/0000-0003-0789-5970 | UNSPECIFIED
Sundermeyer, Martin  | UNSPECIFIED          | https://orcid.org/0000-0003-0587-9643 | UNSPECIFIED
Friedl, Werner       | UNSPECIFIED          | https://orcid.org/0000-0003-3002-7274 | UNSPECIFIED
Marton, Zoltan-Csaba | UNSPECIFIED          | https://orcid.org/0000-0002-3035-493X | UNSPECIFIED
Triebel, Rudolph     | UNSPECIFIED          | https://orcid.org/0000-0002-7975-036X | UNSPECIFIED
Date: 2021
Journal or Publication Title: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2021
Refereed publication: Yes
Open Access: Yes
Gold Open Access: No
In SCOPUS: Yes
In ISI Web of Science: Yes
DOI: 10.1109/IROS51168.2021.9636281
ISSN: 2153-0858
ISBN: 978-1-6654-1714-3
Status: Published
Keywords: instance segmentation, unknown object segmentation, stereo-vision
Event Title: International Conference on Intelligent Robots and Systems
Event Location: Prague (online)
Event Type: International Conference
Event Start Date: 27 September 2021
Event End Date: 1 October 2021
HGF - Research field: Aeronautics, Space and Transport
HGF - Program: Space
HGF - Program Themes: Robotics
DLR - Research area: Space (Raumfahrt)
DLR - Program: R RO - Robotics
DLR - Research theme (Project): R - Multisensory World Modelling (RM) [RO]
Location: Oberpfaffenhofen
Institutes and Institutions: Institute of Robotics and Mechatronics (since 2013) > Perception and Cognition
Institute of Robotics and Mechatronics (since 2013)
Deposited By: Durner, Maximilian
Deposited On: 22 Nov 2021 09:55
Last Modified: 24 Apr 2024 20:44
