
Fusion of multimodal imaging techniques towards autonomous navigation

Sharif, Helia (2021) Fusion of multimodal imaging techniques towards autonomous navigation. Dissertation, Universität Bremen. doi: 10.26092/elib/1077.

PDF - Restricted to DLR-internal access only
115MB

Abstract

"Earth is the cradle of humanity, but one cannot live in a cradle forever." -Konstantin E. Tsiolkovsky, an early pioneer of rocketry and astronautics. Space robotics enable humans to explore beyond our home planet. Traditional techniques for tele-operated robotic guidance make it possible for a driver to direct a rover that is up to 245.55Mkm away. However, relying on manual terrestrial operators for guidance is a key limitation for exploration missions today, as real-time communication between rovers and operators is delayed by long distances and limited uplink opportunities. Moreover, autonomous guidance techniques in use today are generally limited in scope and capacity; for example, some autonomous techniques presently in use require the application of special markers on targets in order to enable detection, while other techniques provide autonomous vision-based flight navigation but only at limited altitudes in ideal visibility conditions. Improving autonomy is thus essential to expanding the scope of viable space missions. In this thesis, a fusion of monocular visible and infrared imaging cameras is employed to estimate the relative pose of a nearby target while compensating for each spectrum's shortcomings. The robustness of the algorithm was tested in a number of different scenarios by simulating harsh space environments while imaging a subject of similar characteristics to a spacecraft in orbit. It is shown that the fusion of visual odometries from two spectrums performs well where knowledge of the target's physical characteristics is limited. The result of this thesis research is an autonomous, robust vision-based tracking system designed for space applications. This appealing solution can be used onboard most spacecraft and adapted for the specific application of any given mission.

elib URL of this entry: https://elib.dlr.de/145543/
Document type: University thesis (Dissertation)
Title: Fusion of multimodal imaging techniques towards autonomous navigation
Author: Sharif, Helia
Institution or e-mail address: NOT SPECIFIED
Author ORCID iD: https://orcid.org/0000-0002-1256-1329
ORCID Put Code: NOT SPECIFIED
Date: 17 September 2021
Refereed publication: Yes
Open Access: No
DOI: 10.26092/elib/1077
Status: published
Keywords: fusion of multimodal sensors; visual odometry; monocular imaging; Extended Kalman Filter; autonomous vision-based navigation for space applications
Institution: Universität Bremen
Department: Fachbereich 03: Mathematik/Informatik (FB 03)
HGF research field: Aeronautics, Space and Transport
HGF programme: Space
HGF programme theme: Space System Technology
DLR focus area: Space
DLR research area: R SY - Space System Technology
DLR sub-area (project): R - Optical Navigation on Hybrid Avionics Architecture
Location: Bremen
Institutes & facilities: Institute of Space Systems > Navigation and Control Systems
Deposited by: Theil, Dr.-Ing. Stephan
Deposited on: 12 Nov 2021 09:06
Last modified: 12 Nov 2021 09:06
