
Multi-Modal Place Recognition in Aliased and Low-Texture Environments

Garcia Hernandez, Alberto (2023) Multi-Modal Place Recognition in Aliased and Low-Texture Environments. Master's, University of Zaragoza.

Full text: PDF (15MB)

Abstract

Traditional place recognition systems for robots struggle in unstructured planetary environments with extreme visual aliasing and low texture. Effective place recognition is essential for robust localization and mapping, which in turn significantly impacts the performance of Simultaneous Localization and Mapping (SLAM) systems. This research aims to enhance existing place recognition systems by using both LiDAR and visual information, improving performance in extreme environments. LiDAR is crucial because it provides geometric data that complements visual data, yielding more expressive and robust 3D-grounded global features. We evaluated our methods on the Mt. Etna dataset and on a synthetic dataset generated with the OAISYS tool. Our comprehensive review of state-of-the-art place recognition systems led to the development of a novel UMF (Unifying Local and Global Multimodal Features with Transformers) model, specifically designed for place recognition in environments with extreme aliasing. The UMF model integrates elements from the most advanced methods, improving performance in challenging environments by capturing intricate relationships between local and global context in both LiDAR and visual data. Two variants of the UMF model were explored, offering alternative ways of processing and utilizing fine local features. Our UMF model outperforms other state-of-the-art methods in place recognition tasks. The improved place recognition capabilities of the UMF model can contribute to more accurate and robust SLAM systems, enabling robots to better navigate and explore unstructured and aliased environments. This research highlights the importance of multi-modal fusion, particularly the integration of LiDAR and visual data, in addressing place recognition in aliased and low-texture environments. It also opens a promising line of research into unified multimodal fusion approaches for robotics, computer vision, and machine learning, with direct impact on SLAM and related fields.
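The thesis itself details the UMF architecture; as a rough illustration of the kind of transformer-style cross-modal fusion the abstract describes, the following NumPy sketch lets visual and LiDAR feature tokens attend over each other and pools them into a single L2-normalized global place descriptor. All function names, token counts, and dimensions here are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product attention: each query attends over all keys/values."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)    # (Nq, Nk) similarity matrix
    return softmax(scores, axis=-1) @ values  # (Nq, d) attended features

def fuse_global_descriptor(visual_tokens, lidar_tokens):
    """Fuse per-modality local features into one global descriptor.

    Visual tokens attend over LiDAR tokens (and vice versa), the attended
    features are concatenated with the originals, and mean-pooling yields
    a fixed-size descriptor suitable for nearest-neighbor place retrieval.
    """
    v2l = cross_attention(visual_tokens, lidar_tokens, lidar_tokens)
    l2v = cross_attention(lidar_tokens, visual_tokens, visual_tokens)
    visual_fused = np.concatenate([visual_tokens, v2l], axis=-1)
    lidar_fused = np.concatenate([lidar_tokens, l2v], axis=-1)
    pooled = np.concatenate([visual_fused.mean(axis=0),
                             lidar_fused.mean(axis=0)])
    return pooled / np.linalg.norm(pooled)    # L2-normalize for retrieval

rng = np.random.default_rng(0)
desc = fuse_global_descriptor(rng.normal(size=(16, 64)),   # 16 visual tokens
                              rng.normal(size=(32, 64)))   # 32 LiDAR tokens
print(desc.shape)  # (256,)
```

Mean-pooling is only one of several aggregation choices; the actual model may use learned pooling, and the two UMF variants mentioned above differ precisely in how such fine local features are processed.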

Item URL in elib: https://elib.dlr.de/196289/
Document Type: Thesis (Master's)
Title: Multi-Modal Place Recognition in Aliased and Low-Texture Environments
Authors: Garcia Hernandez, Alberto
Date: 2023
Refereed publication: Yes
Open Access: Yes
Gold Open Access: No
In SCOPUS: No
In ISI Web of Science: No
Number of Pages: 82
Status: Published
Keywords: Place Recognition, LiDAR, Multi-modal, SLAM
Institution: University of Zaragoza
Department: Escuela de Ingeniería y Arquitectura
HGF - Research field: Aeronautics, Space and Transport
HGF - Program: Space
HGF - Program Themes: Robotics
DLR - Research area: Space
DLR - Program: R RO - Robotics
DLR - Research theme (Project): R - Planetary Exploration
Location: Oberpfaffenhofen
Institutes and Institutions: Institute of Robotics and Mechatronics (since 2013) > Perception and Cognition
Deposited By: Giubilato, Riccardo
Deposited On: 01 Aug 2023 07:58
Last Modified: 01 Aug 2023 07:58

