
Uncertainty-Aware Attention Guided Sensor Fusion For Monocular Visual Inertial Odometry

Shinde, Kashmira (2020) Uncertainty-Aware Attention Guided Sensor Fusion For Monocular Visual Inertial Odometry. DLR-Interner Bericht (DLR internal report), DLR-IB-RM-OP-2020-82. Master's thesis, Technical University Dortmund (TU Dortmund).

Full text: PDF (9 MB)

Abstract

Visual-Inertial Odometry (VIO) refers to dead-reckoning-based navigation that integrates visual and inertial data. With the advent of deep learning (DL), a lot of research has been done in this realm, yielding competitive performance. DL-based VIO approaches usually adopt a sensor fusion strategy whose intricacy can vary. However, sensor data can suffer from corruptions and missing frames and is therefore imperfect. Hence, a need arises for a strategy that not only fuses sensor data but also selects features based on their reliability. This work addresses the monocular VIO problem with a more representative sensor fusion strategy involving an attention mechanism. The proposed framework requires neither extrinsic sensor calibration nor knowledge of intrinsic inertial measurement unit (IMU) parameters. The network, trained in an end-to-end fashion, is assessed under various types of sensor data corruption and compared against popular baselines. The work highlights the complementary nature of the employed sensors in such scenarios. The proposed approach achieves state-of-the-art results, showing competitive performance against the baselines and thereby contributing to an advance in the field. We also make use of Bayesian uncertainty in order to obtain information about the model's certainty in its predictions. The model is cast into a Bayesian Neural Network (BNN) without any explicit changes to it, and inference is made using a simple, tractable approach: the Laplace approximation. We show that the notion of uncertainty can be exploited for VIO and sensor fusion; in particular, sensor degradation results in more uncertain predictions, and the uncertainty correlates well with pose errors.
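As an illustration of the attention-guided fusion idea described in the abstract, the sketch below shows one possible soft-fusion layer in PyTorch: a learned channel-wise mask re-weights concatenated visual and inertial features before a 6-DoF pose regressor. The class name, feature dimensions, and network sizes are illustrative assumptions and not the thesis's actual architecture; the Bayesian (Laplace-approximation) part is omitted here.

import torch
import torch.nn as nn

class SoftFusion(nn.Module):
    """Minimal sketch of attention-guided (soft) fusion for VIO features."""

    def __init__(self, visual_dim=512, inertial_dim=256, hidden_dim=128):
        super().__init__()
        fused_dim = visual_dim + inertial_dim
        # Attention network: produces a per-channel reliability mask in (0, 1).
        self.attention = nn.Sequential(
            nn.Linear(fused_dim, fused_dim),
            nn.Sigmoid(),
        )
        # Pose regressor: 6-DoF relative pose (3 translation + 3 rotation).
        self.regressor = nn.Sequential(
            nn.Linear(fused_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 6),
        )

    def forward(self, visual_feat, inertial_feat):
        fused = torch.cat([visual_feat, inertial_feat], dim=-1)
        mask = self.attention(fused)          # channel-wise reliability weights
        return self.regressor(fused * mask)   # masked features -> relative pose

# Usage with random stand-in features for a batch of 8 frame pairs
model = SoftFusion()
pose = model(torch.randn(8, 512), torch.randn(8, 256))  # shape: (8, 6)

The mask lets the network attenuate channels coming from a degraded sensor (e.g. corrupted images or missing IMU frames), which is the reliability-based feature selection the abstract refers to.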

Item URL in elib: https://elib.dlr.de/137048/
Document Type: Monograph (DLR-Interner Bericht, Master's)
Title: Uncertainty-Aware Attention Guided Sensor Fusion For Monocular Visual Inertial Odometry
Authors: Shinde, Kashmira (Institution or Email: UNSPECIFIED; ORCID iD: UNSPECIFIED; ORCID Put Code: UNSPECIFIED)
Date: 2 June 2020
Refereed publication: No
Open Access: Yes
Status: Published
Keywords: Deep Learning, Uncertainty in Deep Learning, Camera motion estimation, Visual-Inertial Fusion
Institution: Technical University Dortmund (TU Dortmund)
HGF - Research field: Aeronautics, Space and Transport
HGF - Program: Space
HGF - Program Themes: Space System Technology
DLR - Research area: Space (Raumfahrt)
DLR - Program: R SY - Space System Technology
DLR - Research theme (Project): Intelligent Mobility project (Vorhaben Intelligente Mobilität, old)
Location: Oberpfaffenhofen
Institutes and Institutions: Institute of Robotics and Mechatronics (since 2013) > Perception and Cognition; Institute of Robotics and Mechatronics (since 2013)
Deposited By: Lee, Jongseok
Deposited On: 04 Nov 2020 18:15
Last Modified: 04 Nov 2020 18:15

