
Multi-Modal Place Recognition in Aliased and Low-Texture Environments

Garcia Hernandez, Alberto (2023) Multi-Modal Place Recognition in Aliased and Low-Texture Environments. Masterarbeit, University of Zaragoza.

Full text: PDF, 15 MB

Abstract

Traditional place recognition systems for robots struggle in planetary environments that are unstructured, low in texture, and subject to extreme visual aliasing. Effective place recognition is essential for robust localization and mapping and therefore strongly affects the performance of Simultaneous Localization and Mapping (SLAM) systems. This research enhances existing place recognition systems by exploiting both LiDAR and visual information, improving performance in extreme environments. LiDAR is crucial because it provides geometric data that complements visual data, yielding more expressive and robust 3D-grounded global features. We evaluated our methods on the Mt. Etna dataset and on a synthetic dataset generated with the OAISYS tool. A comprehensive review of state-of-the-art place recognition systems led to the development of a novel UMF (Unifying Local and Global Multimodal Features with Transformers) model, designed specifically for place recognition under extreme aliasing. The UMF model integrates elements from the most advanced methods and improves performance in challenging environments by capturing the relationships between local and global context in both LiDAR and visual data. Two variants of the UMF model were explored, offering alternative ways of processing and exploiting fine local features. The UMF model outperforms other state-of-the-art methods on place recognition tasks, and its improved place recognition capabilities can contribute to more accurate and robust SLAM systems, enabling robots to better navigate and explore unstructured, aliased environments. This work highlights the importance of multi-modal fusion, in particular the integration of LiDAR and visual data, for place recognition in aliased and low-texture environments. It also opens a promising line of research into unified multimodal fusion approaches for robotics, computer vision, and machine learning, with direct impact on SLAM and related fields.
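The abstract describes fusing local LiDAR and visual features into a single global descriptor by modeling cross-modal relationships with transformers. The thesis itself details the UMF architecture; as a purely illustrative sketch of the general idea (not the actual UMF model), the snippet below shows single-head cross-attention between two toy sets of modality tokens, pooled into one global place descriptor. All token counts, dimensions, and the pooling scheme are made-up assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, context):
    # each query token attends over all context tokens
    # (e.g. visual tokens attending to LiDAR tokens, or vice versa)
    d_k = query.shape[-1]
    scores = query @ context.T / np.sqrt(d_k)   # (n_q, n_ctx)
    return softmax(scores) @ context            # (n_q, d_k)

# toy local descriptors: 4 visual tokens and 6 LiDAR tokens, 8-dim each
rng = np.random.default_rng(0)
visual = rng.normal(size=(4, 8))
lidar = rng.normal(size=(6, 8))

# fuse: each modality attends to the other, then mean-pool and concatenate
vis_fused = cross_attention(visual, lidar)
lid_fused = cross_attention(lidar, visual)
global_desc = np.concatenate([vis_fused.mean(axis=0), lid_fused.mean(axis=0)])
print(global_desc.shape)  # (16,)
```

In a place recognition pipeline, such a global descriptor would be compared against descriptors of previously visited places (e.g. by cosine similarity) to detect revisits; a real transformer-based model would add learned projections, multiple heads, and residual layers.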

elib URL of this entry: https://elib.dlr.de/196289/
Document type: Thesis (Master's thesis)
Title: Multi-Modal Place Recognition in Aliased and Low-Texture Environments
Authors:
  Author: Garcia Hernandez, Alberto
  Institution or e-mail address: alberto.garciahernandez (at) dlr.de
  Author ORCID iD: NOT SPECIFIED
  ORCID Put Code: NOT SPECIFIED
Date: 2023
Refereed publication: Yes
Open Access: Yes
Gold Open Access: No
In SCOPUS: No
In ISI Web of Science: No
Number of pages: 82
Status: published
Keywords: Place Recognition, LiDAR, Multi-modal, SLAM
Institution: University of Zaragoza
Department: Escuela de Ingeniería y Arquitectura
HGF research field: Aeronautics, Space and Transport
HGF programme: Space
HGF programme theme: Robotics
DLR focus area: Space
DLR research area: R RO - Robotics
DLR subarea (project): R - Planetary Exploration
Location: Oberpfaffenhofen
Institutes & facilities: Institut für Robotik und Mechatronik (from 2013) > Perzeption und Kognition
Deposited by: Giubilato, Riccardo
Deposited on: 01 Aug 2023 07:58
Last modified: 01 Aug 2023 07:58

