
Automatic Image to Image Registration for Multimodal Remote Sensing Images

Suri, Sahil (2010) Automatic Image to Image Registration for Multimodal Remote Sensing Images. Dissertation, Technische Universität München.



Over the last decades, remote sensing sensors have undergone rapid development in terms of both data quantity and characteristics. With advancements in remote sensing technology, the use of satellite images in disparate fields has received a tremendous boost; examples include the generation of 3D models and topographic maps, early warning systems, urban growth monitoring, damage assessment, crisis information management and disaster mitigation. These applications normally rely on image processing techniques such as image fusion, change detection, GIS overlay operations or 3D visualization, all of which require registered images procured from different sources. Image registration is a fundamental task in remote sensing image processing: it matches two or more images taken, for example, at different times, from different sensors or from different viewpoints. A great deal of automation has been achieved in this field, but ever-evolving data quality and characteristics compel researchers to design new registration techniques and/or improve existing ones. In the literature, image registration methodologies are broadly classified into intensity-based and feature-based approaches. In this dissertation, we have evolved and combined two distinct techniques, one from each of these broad classes, to extend their applicability to contemporary challenges in remote sensing image registration. Generally, remote sensing applications need to accommodate images from different sensors or modalities, whether because of specific application demands or data availability. For example, in the case of a natural calamity, decision makers might be forced to use old archived optical data together with a newly acquired (post-disaster) SAR image. Misalignment between the procured SAR and optical imagery (both orthorectified) is a common phenomenon in such scenarios, and these registration differences need to be corrected prior to their joint application.
Considering the very high resolution (VHR) data recently available from satellites like TerraSAR-X, Risat, IKONOS, QuickBird and ALOS, registering these images manually is a mammoth task (due to data volume and scene characteristics). Intensity-based similarity metrics like mutual information (MI) and the cluster reward algorithm (CRA) have proven useful for registering SAR-optical data from satellites like Landsat, Radarsat, SPOT and IRS, but their application to high resolution data, especially data acquired over urban areas, is still limited. In this dissertation, we analyze in detail the performance of MI for very high resolution remote sensing images and evaluate different techniques (feature extraction, classification, segmentation, discrete optimization) for improving its accuracy, applicability and processing time for VHR images (mainly TerraSAR-X and IKONOS-2) acquired over dense urban areas. Further, on the basis of the proposed modifications, we also present a novel method to improve the sensor orientation of high resolution optical data (IKONOS-2) by obtaining ground control through local image matching, taking the geometrically much more accurate TerraSAR-X images as a reference. Apart from the joint application demands of SAR and optical imagery, the improved spatial resolution of SAR images from the latest and future satellites like TerraSAR-X and TanDEM-X is set to have a paramount impact on their usability. Here, the lack of any proven point feature detection and matching scheme for multisensor/multimodal SAR image matching encourages us to review the advancements in the field of computer vision and extend the applicability of the Scale Invariant Feature Transform (SIFT) operator to SAR point feature matching. We have analyzed the feature detection, identification and matching steps of the original SIFT processing chain.
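Intensity-based registration of this kind rests on maximizing a similarity metric such as MI over candidate alignments. As a minimal sketch (a generic histogram-based plug-in estimator, not the dissertation's exact formulation), MI between two equally sized image patches can be computed like this:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two co-sized image patches.

    A minimal plug-in estimator: build a joint histogram, normalize it to a
    joint probability table, and compare it against the product of marginals.
    """
    hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()      # joint probability p(a, b)
    px = pxy.sum(axis=1)               # marginal p(a)
    py = pxy.sum(axis=0)               # marginal p(b)
    nz = pxy > 0                       # skip empty bins to avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))
```

An exhaustive intensity-based search would then evaluate this score for every candidate shift of the sensed patch over the reference image and keep the offset that maximizes it; the bin count here is an illustrative choice, not a value taken from the dissertation.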
After thorough analysis, we propose steps to counter the speckle influence, which deteriorates the SIFT operator's performance on SAR images. In the feature identification stage, we evaluate different local gradient estimation techniques and highlight the fact that giving up SIFT's rotation invariance increases the potential number of matches. In the feature matching stage, we propose combining the capabilities of MI and the SIFT operator for effective results in challenging SAR image matching scenarios. Further, our results indicate that a significant speedup is achieved by incorporating the suggested changes into the original SIFT processing chain.
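For context on the matching stage, standard SIFT matching accepts a correspondence only when the nearest descriptor is clearly closer than the second nearest (Lowe's ratio test). The sketch below illustrates that baseline test over precomputed descriptor arrays; the array shapes, the 0.8 threshold, and the dissertation's additional MI-based screening of ambiguous SAR matches are assumptions, not details taken from this work:

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Lowe-style nearest-neighbour ratio test.

    desc_a, desc_b: arrays whose rows are feature descriptors (e.g. 128-D
    SIFT vectors). Returns a list of (index_in_a, index_in_b) matches that
    pass the distinctiveness test.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distance to every candidate
        j, k = np.argsort(dists)[:2]                # nearest and second nearest
        if dists[j] < ratio * dists[k]:             # keep only distinctive matches
            matches.append((i, j))
    return matches
```

In a SAR-to-SAR scenario, correspondences that fail or barely pass such a test could then be re-examined with a local MI score around the candidate locations, which is the general idea behind combining the two techniques.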

Document type: University thesis (Dissertation)
Title: Automatic Image to Image Registration for Multimodal Remote Sensing Images
Authors: Suri, Sahil (sahilsuri4u@gmail.com; ORCID iD: not specified)
In Open Access: No
In ISI Web of Science: No
Keywords: Image registration, Mutual information, Georeferencing
Institution: Technische Universität München
Department: Fakultät für Bauingenieur- und Vermessungswesen
HGF research area: Transport and Space (old)
HGF programme: Space (old)
HGF programme topic: W EO - Earth Observation
DLR focus area: Space
DLR research area: W EO - Earth Observation
DLR subtopic (project): W - Project Photogrammetry and Image Analysis (old)
Location: Oberpfaffenhofen
Institutes & facilities: Institut für Methodik der Fernerkundung > Photogrammetrie und Bildanalyse
Deposited by: Reinartz, Prof. Dr. Peter
Deposited on: 30 Mar 2011 15:59
Last modified: 12 Dec 2013 21:16

