
Multi-Path Learning for Object Pose Estimation Across Domains

Sundermeyer, Martin and Durner, Maximilian and Puang, En Yen and Marton, Zoltan-Csaba and Vaskevicius, Narunas and Arras, Kai O. and Triebel, Rudolph (2020) Multi-Path Learning for Object Pose Estimation Across Domains. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, pp. 13916-13925. IEEE. IEEE Conference on Computer Vision and Pattern Recognition, 2020-06-14 - 2020-06-19, Seattle, USA. doi: 10.1109/CVPR42600.2020.01393. ISBN 978-172817168-5. ISSN 1063-6919.

Full text: PDF (2 MB)

Official URL: https://openaccess.thecvf.com/content_CVPR_2020/html/Sundermeyer_Multi-Path_Learning_for_Object_Pose_Estimation_Across_Domains_CVPR_2020_paper.html

Abstract

We introduce a scalable approach for object pose estimation trained on simulated RGB views of multiple 3D models together. We learn an encoding of object views that does not only describe an implicit orientation of all objects seen during training, but can also relate views of untrained objects. Our single-encoder-multi-decoder network is trained using a technique we denote multi-path learning: While the encoder is shared by all objects, each decoder only reconstructs views of a single object. Consequently, views of different instances do not have to be separated in the latent space and can share common features. The resulting encoder generalizes well from synthetic to real data and across various instances, categories, model types and datasets. We systematically investigate the learned encodings, their generalization, and iterative refinement strategies on the ModelNet40 and T-LESS dataset. Despite training jointly on multiple objects, our 6D Object Detection pipeline achieves state-of-the-art results on T-LESS at much lower runtimes than competing approaches.
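The core idea described in the abstract (a shared encoder with one reconstruction decoder per object, where each training view is only reconstructed by the decoder of its own object) can be illustrated with a toy sketch. This is a minimal NumPy illustration of the multi-path reconstruction loss, not the paper's implementation: the actual method uses a convolutional augmented autoencoder, and all dimensions and linear maps below are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only; the paper uses a CNN encoder
# over rendered RGB crops, not tiny linear maps over flat vectors).
n_objects, view_dim, latent_dim = 3, 16, 4

# One encoder shared by all objects; one decoder per object.
W_enc = rng.normal(size=(latent_dim, view_dim))
W_dec = [rng.normal(size=(view_dim, latent_dim)) for _ in range(n_objects)]

def multipath_loss(views, obj_ids):
    """Mean reconstruction error where each view is routed through
    the decoder of its own object only, so the shared latent space
    never has to separate object instances."""
    z = views @ W_enc.T                      # shared encoding for every view
    total = 0.0
    for x, z_i, k in zip(views, z, obj_ids):
        x_hat = W_dec[k] @ z_i               # object-specific decoder path
        total += np.mean((x - x_hat) ** 2)
    return total / len(views)

views = rng.normal(size=(6, view_dim))       # stand-ins for rendered views
obj_ids = [0, 1, 2, 0, 1, 2]                 # which object each view depicts
loss = multipath_loss(views, obj_ids)
```

Because the loss never compares decoders of different objects against each other, views of distinct instances are free to share latent features, which is what lets the encoder generalize to untrained objects.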

Item URL in elib: https://elib.dlr.de/135550/
Document Type: Conference or Workshop Item (Poster)
Title: Multi-Path Learning for Object Pose Estimation Across Domains
Authors:
  Sundermeyer, Martin (ORCID: https://orcid.org/0000-0003-0587-9643)
  Durner, Maximilian (ORCID: https://orcid.org/0000-0001-8885-5334)
  Puang, En Yen
  Marton, Zoltan-Csaba (ORCID: https://orcid.org/0000-0002-3035-493X)
  Vaskevicius, Narunas
  Arras, Kai O.
  Triebel, Rudolph (ORCID: https://orcid.org/0000-0002-7975-036X)
Date: June 2020
Journal or Publication Title: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020
Refereed publication: Yes
Open Access: Yes
Gold Open Access: No
In SCOPUS: Yes
In ISI Web of Science: Yes
DOI: 10.1109/CVPR42600.2020.01393
Page Range: pp. 13916-13925
Publisher: IEEE
ISSN: 1063-6919
ISBN: 978-172817168-5
Status: Published
Keywords: Object Pose Estimation, Encodings, Multi Object, Synthetic Data, Symmetries, Autoencoder, Embedding, 6D Object Detection, T-LESS, Relative Pose Estimation
Event Title: IEEE Conference on Computer Vision and Pattern Recognition
Event Location: Seattle, USA
Event Type: International Conference
Event Start Date: 14 June 2020
Event End Date: 19 June 2020
HGF - Research field: Aeronautics, Space and Transport
HGF - Program: Space
HGF - Program Themes: Space System Technology
DLR - Research area: Raumfahrt
DLR - Program: R SY - Space System Technology
DLR - Research theme (Project): R - Vorhaben Multisensorielle Weltmodellierung (old)
Location: Oberpfaffenhofen
Institutes and Institutions: Institute of Robotics and Mechatronics (since 2013) > Perception and Cognition
Deposited By: Sundermeyer, Martin
Deposited On: 22 Jul 2020 18:48
Last Modified: 04 Jun 2024 15:06

