
Learning a Generic Robot Representation from Motion

Baljeet Singh, Yogesh Kumar (2021) Learning a Generic Robot Representation from Motion. DLR Internal Report DLR-IB-RM-OP-2021-241. Master's thesis, Saarland University.

This is the most recent version of this item.

PDF (14MB) - accessible only within DLR

Abstract

The interaction of a robot with its environment is one of the most important robotic skills required in a wide range of applications. Besides estimating the poses of objects for manipulation, the robot's own state also has to be estimated. While visual sensors are used to determine the state of an object, kinematics or sensors in the joint motors are used to measure the state of the robot. Traditionally, a calibration has to be performed to estimate the pose of an object in the robot's coordinate system, during which both the camera and the base of the robot must remain stationary. The main limitation is that if either of these components (camera or robot base) is moved, the calibration becomes void and must be repeated, which usually takes a significant amount of time. To address this limitation, this thesis proposes incorporating an additional robot representation from a separate sensor - the vision sensor. The purpose of this thesis is to estimate the joints of the robot from camera images and to use them for fast camera calibration. Specifically, the main contributions of this thesis are as follows: First, an estimator is developed that predicts the robot's 2D keypoints from pairs of images based on the motion of a specific joint of the robot. To obtain training data for this estimator, a plugin for BlenderProc (a synthetic dataset generation pipeline) is created that generates photo-realistic images of various robots in different environments and thus bridges the simulation-to-real gap, avoiding the time-consuming, labor-intensive generation and annotation of real-world training data. Second, since the estimator can only detect joints that have been moved, it is integrated into a transformer-based framework that tracks all 2D keypoints across consecutive frames. Finally, these models can also be applied to previously unseen, similar robots and generalize to real-world data. The effectiveness of these methods is validated through extensive experimental evaluation on various synthetic robot models, real-world data, and ablation studies.
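The calibration step the abstract refers to (using estimated joint keypoints for fast camera calibration, cf. the perspective-n-point keyword) can be illustrated with a minimal Python sketch. This is not code from the thesis; all names and values are hypothetical placeholders. Given 2D joint keypoints predicted from an image and the corresponding 3D joint positions obtained from the robot's forward kinematics, OpenCV's solvePnP recovers the pose of the camera with respect to the robot base:

    import numpy as np
    import cv2

    # Hypothetical inputs: in the thesis setting these would come from the
    # learned keypoint estimator and from forward kinematics; the numbers
    # below are placeholders only.
    keypoints_2d = np.array([            # predicted joint keypoints (pixels)
        [320.5, 240.1], [410.2, 255.7], [455.9, 310.4],
        [500.3, 380.8], [530.0, 450.2], [560.7, 510.9]], dtype=np.float64)

    joints_3d = np.array([               # joint positions in the robot-base frame (metres)
        [0.00, 0.00, 0.10], [0.10, 0.00, 0.40], [0.25, 0.05, 0.55],
        [0.40, 0.05, 0.60], [0.55, 0.10, 0.55], [0.70, 0.10, 0.50]], dtype=np.float64)

    K = np.array([[600.0, 0.0, 320.0],   # intrinsic camera matrix (assumed known)
                  [0.0, 600.0, 240.0],
                  [0.0,   0.0,   1.0]])
    dist_coeffs = np.zeros(5)            # assume an undistorted image

    # Perspective-n-Point: pose of the robot-base frame in the camera frame
    ok, rvec, tvec = cv2.solvePnP(joints_3d, keypoints_2d, K, dist_coeffs)
    assert ok, "PnP failed - check the 2D/3D correspondences"

    R, _ = cv2.Rodrigues(rvec)           # rotation vector -> 3x3 rotation matrix
    T_cam_base = np.eye(4)               # homogeneous base-to-camera transform
    T_cam_base[:3, :3] = R
    T_cam_base[:3, 3] = tvec.ravel()

    # Inverting gives the camera pose expressed in the robot-base frame,
    # i.e. the quantity a classical hand-eye calibration would provide.
    T_base_cam = np.linalg.inv(T_cam_base)
    print(T_base_cam)

In such a setup, re-calibrating after the camera or the robot base has been moved reduces to re-running the keypoint estimator and this PnP step, rather than repeating a full stationary hand-eye calibration.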

elib URL of this item: https://elib.dlr.de/147272/
Document type: Report series (DLR Internal Report, Master's thesis)
Title: Learning a Generic Robot Representation from Motion
Authors:
Author | Institution or e-mail address | Author ORCID iD | ORCID Put Code
Baljeet Singh, Yogesh Kumar | German Aerospace Center, DLR | NOT SPECIFIED | NOT SPECIFIED
Date: 2021
Refereed publication: No
Open Access: No
Status: published
Keywords: robot segmentation, hand-eye calibration, keypoint detection, joint pose estimation, perspective-n-point, deep learning, computer vision, transformers
Institution: Saarland University
Department: Faculty of Mathematics and Computer Science
HGF - Research area: Aeronautics, Space and Transport
HGF - Program: Space
HGF - Program theme: Robotics
DLR - Research focus: Space
DLR - Research field: R RO - Robotics
DLR - Sub-area (project): R - Multisensorielle Weltmodellierung (RM) [RO]
Location: Oberpfaffenhofen
Institutes & facilities: Institute of Robotics and Mechatronics (since 2013) > Perception and Cognition
Deposited by: Boerdijk, Wout
Deposited on: 13 Dec 2021 10:04
Last modified: 12 Jan 2022 09:05

Available versions of this item

  • Learning a Generic Robot Representation from Motion. (deposited 13 Dec 2021 10:04) [Currently displayed]

