
Automatic Image to Image Registration for Multimodal Remote Sensing Images

Suri, Sahil (2010) Automatic Image to Image Registration for Multimodal Remote Sensing Images. Dissertation, Technische Universität München.

Full text: PDF, 32MB

Abstract

During the last decades, remote sensing sensors have undergone rapid development in terms of both data quantity and characteristics. With advancements in remote sensing technology, the use of satellite images in disparate fields has received a tremendous boost. A few of these fields include the generation of 3D models and topographic maps, early warning systems, urban growth monitoring, damage assessment, crisis information management and disaster mitigation. These applications normally utilize image processing techniques like image fusion, change detection, GIS overlay operations or 3D visualization, which require registered images procured from different sources. Image registration is a fundamental task in remote sensing image processing that is used to match two or more images taken, for example, at different times, from different sensors or from different viewpoints. A great deal of automation has been achieved in this field, but ever-evolving data quality and characteristics compel innovators to design new registration techniques and/or improve existing ones. In the literature, image registration methodologies are broadly classified into intensity-based and feature-based approaches. In this dissertation, we have evolved and combined two distinct techniques, one from each of these broad classes, to extend their applicability to contemporary challenges in remote sensing image registration. Generally, remote sensing applications need to accommodate images from different sensors/modalities; the reason might be specific application demands or data availability. For example, in the case of a natural calamity, decision makers might be forced to use old archived optical data together with a newly acquired (post-disaster) SAR image. Misalignment between the procured SAR and optical imagery (both orthorectified) is a common phenomenon in such scenarios, and these registration differences need to be taken care of prior to their joint application.
Considering the very high resolution (VHR) data recently available from satellites like TerraSAR-X, Risat, IKONOS, Quickbird, ALOS etc., registering these images manually is a mammoth task (due to data volume and scene characteristics). Intensity-based similarity metrics like mutual information (MI) and the cluster reward algorithm (CRA) have been found useful for registering SAR-optical data from satellites like Landsat, Radarsat, SPOT, and IRS, but their application to high resolution data, especially data acquired over urban areas, is still limited. In this dissertation, we analyze in detail the performance of MI for very high resolution remote sensing images and evaluate different techniques (feature extraction, classification, segmentation, discrete optimization) for improving its accuracy, applicability and processing time for VHR images (mainly TerraSAR-X and IKONOS-2) acquired over dense urban areas. Further, on the basis of the proposed modifications, we also present a novel method to improve the sensor orientation of high resolution optical data (IKONOS-2) by obtaining ground control through local image matching, taking the geometrically much more accurate TerraSAR-X images as a reference. Apart from the joint application demands of SAR and optical imagery, the improved spatial resolution of SAR images from the latest and future satellites like TerraSAR-X and TanDEM-X is set to make a paramount impact on their usability. Here, the lack of any proven point feature detection and matching scheme for multisensor/multimodal SAR image matching encourages us to review the advancements in the field of computer vision and extend the applicability of the Scale Invariant Feature Transform (SIFT) operator to SAR point feature matching. We have analyzed the feature detection, identification and matching steps of the original SIFT processing chain.
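As background for the intensity-based approach discussed above, mutual information between two images is commonly estimated from their joint intensity histogram. The following is a minimal sketch in Python (NumPy only); the function name, bin count and all other details are illustrative choices, not taken from the dissertation:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two equally sized grayscale images,
    estimated from their joint intensity histogram."""
    hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()   # joint probability distribution
    px = pxy.sum(axis=1)            # marginal distribution of img_a
    py = pxy.sum(axis=0)            # marginal distribution of img_b
    px_py = np.outer(px, py)        # product of marginals
    nonzero = pxy > 0               # avoid log(0)
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / px_py[nonzero])))
```

In a registration setting, such a metric would be evaluated over a set of candidate alignments and the transformation maximizing it selected; MI is attractive for multimodal SAR-optical pairs precisely because it does not assume a linear relationship between the two images' intensities.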
After thorough analysis, we propose steps to counter the speckle influence that deteriorates the SIFT operator's performance on SAR images. In the feature identification stage, we evaluate different local gradient estimation techniques and highlight the fact that giving up SIFT's rotation invariance increases the potential number of matches. In the feature matching stage, we propose to combine the capabilities of MI and the SIFT operator for effective results in challenging SAR image matching scenarios. Further, our results indicate that a significant speedup is achieved by incorporating the above suggested changes into the original SIFT processing chain.
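The feature matching stage of a standard SIFT chain typically pairs descriptors by nearest-neighbour search with Lowe's ratio test, which keeps only distinctive matches. A minimal sketch of that generic step (plain NumPy; the 0.8 threshold and all names are illustrative assumptions, not the dissertation's implementation):

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test.

    desc_a: (n, d) array of descriptors from image A.
    desc_b: (m, d) array of descriptors from image B (m >= 2).
    Returns a list of (i, j) index pairs that pass the test.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Accept only if the best match is clearly better than the runner-up.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

On speckled SAR imagery the ratio test alone tends to discard many correct pairs, which is one motivation for backing it up with an area-based metric such as MI, as the abstract describes.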

Document Type: Thesis (Dissertation)
Title: Automatic Image to Image Registration for Multimodal Remote Sensing Images
Authors: Suri, Sahil (sahilsuri4u@gmail.com)
Date: 2010
Number of Pages: 220
Status: Published
Keywords: Image registration, Mutual information, Georeferencing
Institution: Technische Universität München
Department: Fakultät für Bauingenieur- und Vermessungswesen
HGF - Research field: Aeronautics, Space and Transport (old)
HGF - Program: Space (old)
HGF - Program Themes: W EO - Erdbeobachtung
DLR - Research area: Space
DLR - Program: W EO - Erdbeobachtung
DLR - Research theme (Project): W - Vorhaben Photogrammetrie und Bildanalyse (old)
Location: Oberpfaffenhofen
Institutes and Institutions: Remote Sensing Technology Institute > Photogrammetry and Image Analysis
Deposited By: Dr.-Ing. Peter Reinartz
Deposited On: 30 Mar 2011 15:59
Last Modified: 12 Dec 2013 21:16
