
Uncertainty-Aware Attention Guided Sensor Fusion For Monocular Visual Inertial Odometry

Shinde, Kashmira (2020) Uncertainty-Aware Attention Guided Sensor Fusion For Monocular Visual Inertial Odometry. DLR Internal Report DLR-IB-RM-OP-2020-82. Master's thesis, Technical University Dortmund (TU Dortmund).

Full text: PDF, 9 MB

Abstract

Visual-Inertial Odometry (VIO) refers to dead-reckoning-based navigation that integrates visual and inertial data. With the advent of deep learning (DL), considerable research in this area has yielded competitive performance. DL-based VIO approaches usually adopt a sensor fusion strategy of varying intricacy. However, sensor data can suffer from corruptions and missing frames and is therefore imperfect, so a strategy is needed that not only fuses the sensor data but also selects features based on their reliability. This work addresses the monocular VIO problem with a more representative sensor fusion strategy based on an attention mechanism. The proposed framework requires neither extrinsic sensor calibration nor knowledge of the intrinsic inertial measurement unit (IMU) parameters. The network, trained end-to-end, is assessed under various types of sensor data corruption and compared against popular baselines. The work highlights the complementary nature of the employed sensors in such scenarios. The proposed approach achieves state-of-the-art results, showing competitive performance against the baselines and thereby contributing to an advance in the field. We also make use of Bayesian uncertainty to obtain information about the model's certainty in its predictions. The model is cast into a Bayesian Neural Network (BNN) without any explicit changes to it, and inference is made using a simple tractable approach, the Laplace approximation. We show that the notion of uncertainty can be exploited for VIO and sensor fusion; in particular, sensor degradation results in more uncertain predictions, and the uncertainty correlates well with pose errors.
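The reliability-aware fusion idea described above can be illustrated with a minimal sketch: a gate in (0, 1) is computed per channel of the concatenated visual and inertial features and used to reweight them, so channels from a degraded sensor can be suppressed before pose regression. This is not the thesis implementation; the function name, the fixed `gate_weights`/`gate_bias` parameters (which would be learned end-to-end in the actual network), and the toy feature vectors are all hypothetical.

```python
import math


def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


def attention_fused(visual_feat, inertial_feat, gate_weights, gate_bias):
    """Soft, attention-style fusion sketch.

    A scalar gate in (0, 1) is computed for each channel of the
    concatenated feature vector and multiplied back onto it, allowing
    unreliable channels to be down-weighted. `gate_weights` (one row
    per output channel) and `gate_bias` stand in for parameters that
    a real network would learn end-to-end.
    """
    concat = list(visual_feat) + list(inertial_feat)
    gates = [
        sigmoid(sum(w * x for w, x in zip(row, concat)) + b)
        for row, b in zip(gate_weights, gate_bias)
    ]
    fused = [g * x for g, x in zip(gates, concat)]
    return gates, fused


# Toy usage with hypothetical 2-channel features per modality:
v = [0.5, -1.2]          # "visual" features
i = [0.3, 0.8]           # "inertial" features
W = [[0.1, 0.2, 0.0, -0.1] for _ in range(4)]
b = [0.0] * 4
gates, fused = attention_fused(v, i, W, b)
```

The key property is that the gating is differentiable, so the fusion weights can be trained jointly with the pose regressor rather than hand-tuned.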

elib URL of the entry: https://elib.dlr.de/137048/
Document type: Report series (DLR Internal Report, Master's thesis)
Title: Uncertainty-Aware Attention Guided Sensor Fusion For Monocular Visual Inertial Odometry
Authors: Shinde, Kashmira (Kashmira.Shinde (at) dlr.de; ORCID iD: not specified)
Date: 2 June 2020
Refereed publication: No
Open Access: Yes
Status: published
Keywords: Deep Learning, Uncertainty in Deep Learning, Camera motion estimation, Visual-Inertial Fusion
Institution: Technical University Dortmund (TU Dortmund)
HGF Research Area: Aeronautics, Space and Transport
HGF Program: Space
HGF Program Topic: Space Systems Technology
DLR Focus Area: Space
DLR Research Area: R SY - Space Systems Technology
DLR Subtopic (project): Intelligent Mobility project (old)
Location: Oberpfaffenhofen
Institutes & Facilities: Institut für Robotik und Mechatronik (ab 2013) > Perzeption und Kognition
Deposited by: Lee, Jongseok
Deposited on: 04 Nov 2020 18:15
Last modified: 04 Nov 2020 18:15

