
Autonomous Vision-Based Grasping for a Feedback-Free Low-Cost Robotic Arm in Planetary Exploration

Martin Enciso, Ivan Gilberto (2026) Autonomous Vision-Based Grasping for a Feedback-Free Low-Cost Robotic Arm in Planetary Exploration. Master's thesis, TU Berlin.

PDF (73 MB)

Abstract

Autonomous object grasping is essential for planetary exploration missions in which rovers must collect geological samples without continuous ground-operator control. This thesis develops and validates a complete vision-based autonomous grasping pipeline for the Lunar Rover Mini (LRM), a small-scale planetary exploration platform. The system combines a low-cost monocular camera mounted on the robotic arm's end-effector with a RealSense depth camera at the rover front, enabling object detection, pose estimation, and grasp execution without force sensors or specialized tactile feedback.

The main technical challenge of the LRM robotic arm is the lack of joint position feedback from the servo motors: once a position command is sent, the onboard computer cannot directly verify the final arm configuration. Gravity, structural elasticity, and load variations introduce positioning errors that accumulate during manipulation and prevent reliable grasp execution, and visual odometry drift from the rover's 180° rotation after initial detection compounds these errors further. This work addresses these limitations through visual feedback, allowing the system to estimate the relative pose between the end-effector and the target object and to correct positioning errors iteratively.

The system performs feature-based pose estimation between the RealSense depth image and the monocular end-effector camera image. Scale-Invariant Feature Transform (SIFT) feature extraction and matching establish 3D-2D correspondences, which are solved with Perspective-n-Point (PnP) using Random Sample Consensus (RANSAC)-based outlier rejection and Levenberg-Marquardt refinement. An object-center reprojection error metric validates each pose estimate before any arm motion is commanded.
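The thesis abstract does not include implementation details, but the object-center reprojection gate can be sketched as below. The function names and the 50 px acceptance threshold are illustrative assumptions (the reported accepted errors fell between roughly 31 and 49 px), assuming a pinhole intrinsic matrix `K` and a PnP pose `(R, t)` such as one returned by OpenCV's `solvePnPRansac`:

```python
import numpy as np

def center_reprojection_error(K, R, t, center_3d, center_px):
    """Project the 3D object center with the estimated PnP pose (R, t)
    and return its pixel distance to the detected 2D center."""
    p_cam = R @ np.asarray(center_3d, float) + np.asarray(t, float)
    uvw = K @ p_cam                       # homogeneous image coordinates
    uv = uvw[:2] / uvw[2]                 # perspective division
    return float(np.linalg.norm(uv - np.asarray(center_px, float)))

def accept_pose(K, R, t, center_3d, center_px, max_err_px=50.0):
    """Gate the pose estimate before any arm motion is commanded."""
    return center_reprojection_error(K, R, t, center_3d, center_px) <= max_err_px
```

A pose whose predicted object center lands too far from the detected center is rejected, so gross PnP failures never translate into arm motion.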
Image-based visual refinement compensates for residual positioning errors: the system iteratively adjusts the gripper position from pixel-error feedback until the object center aligns with the target pixel location. Grasp detection relies on motor current measurements, with pre-grasp baseline calibration and threshold-based stability evaluation. The complete pipeline is implemented as RMC advanced flow control (RAFCON) hierarchical state machines with modular Python components, and the Robot Operating System (ROS) transform (TF) tree handles all coordinate transformations.

Experimental validation at the German Aerospace Center (DLR) Planetary Exploration Lab evaluated both individual stages and integrated performance. Object detection achieved 100% success in the integrated runs. PnP pose estimation converged in 77.8% of runs (7/9), with accepted object-center reprojection errors between 30.96 and 48.93 px. Visual refinement and grasp detection achieved 100% success in the runs that reached those stages. End-to-end task completion reached 55.6% (5/9 runs), limited primarily by PnP convergence: once pose estimation converged and the object remained within the reachable workspace and field of view, the downstream stages completed without failure.
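The pixel-error refinement loop and the current-based grasp check described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: `detect_center` and `move_px` are hypothetical callbacks into the vision and arm-driver layers, and the 5 px tolerance, 120 mA rise threshold, and 80% stability fraction are invented placeholder values:

```python
import numpy as np

def refine_gripper(detect_center, move_px, target_px, tol_px=5.0, max_iters=20):
    """Iteratively command corrective motions until the detected object
    center aligns with the target pixel location (no joint feedback used)."""
    for _ in range(max_iters):
        err = np.asarray(detect_center(), float) - np.asarray(target_px, float)
        if np.linalg.norm(err) <= tol_px:
            return True                   # aligned within pixel tolerance
        move_px(-err)                     # corrective step toward the target pixel
    return False                          # did not converge within the budget

def grasp_detected(baseline_ma, samples_ma, rise_ma=120.0, stable_frac=0.8):
    """Declare a stable grasp when the motor current exceeds the pre-grasp
    baseline by a threshold for most of the evaluation window."""
    above = [s - baseline_ma > rise_ma for s in samples_ma]
    return sum(above) / len(above) >= stable_frac
```

Because each correction is re-measured in the image rather than trusted open-loop, the loop converges even when the arm executes commands imperfectly, which is exactly the property that makes the feedback-free hardware usable.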

elib URL of this record: https://elib.dlr.de/224038/
Document type: University thesis (Master's thesis)
Title: Autonomous Vision-Based Grasping for a Feedback-Free Low-Cost Robotic Arm in Planetary Exploration
Authors: Martin Enciso, Ivan Gilberto (DLR-RM; ORCID not specified)
DLR supervisor: Wedler, Armin (thesis advisor), Armin.Wedler (at) dlr.de, https://orcid.org/0000-0001-8641-0163
Date: 2026
Open Access: Yes
Number of pages: 165
Status: published
Keywords: Autonomous, vision, grasping, robotic arm, planetary exploration, motion
Institution: TU Berlin
HGF research area: Aeronautics, Space and Transport
HGF program: Space
HGF program theme: Robotics
DLR focus area: Space
DLR research field: R RO - Robotics
DLR subfield (project): R - Autonomous, Learning Robots [RO]
Location: Oberpfaffenhofen
Institutes & facilities: Institut für Robotik und Mechatronik (from 2013)
Deposited by: Geyer, Günther
Deposited on: 20 Apr 2026 10:00
Last modified: 20 Apr 2026 10:00
