
Single-View Depth from Focused Plenoptic Cameras

Lasheras Hernández, Blanca (2024) Single-View Depth from Focused Plenoptic Cameras. Master's, Universidad de Zaragoza.

PDF (40 MB), only accessible within DLR

Abstract

In recent years, research progress in computer vision has boosted the capabilities of machines to interpret visual data, thereby expanding the complexity and range of tasks that robots can perform in fields such as autonomous driving, medicine, and industrial automation. A principal facet of computer vision is depth estimation, which is crucial for enabling robots to perceive, navigate, and interact with their environment effectively and safely. Traditional setups, such as stereo or multi-camera rigs, face challenges including calibration intricacies and computational and hardware complexity; furthermore, their accuracy is limited by the baseline between the cameras. Monocular depth estimation, which uses a single camera, offers a more compact alternative but is limited by the unobservability of scale. Light field imaging technologies represent a promising solution to these issues by capturing both the intensity and direction of light rays, not only through the main lens but also through a large number of microlenses placed within the camera. By these means, depth in front of the camera can be measured owing to depth-dependent refraction at the main lens. Despite their potential, there are few studies exploring their application to single-view dense depth estimation. This scarcity can be attributed to several factors. The technology remains relatively costly and inaccessible for widespread adoption, leading to a lack of datasets suitable for training deep neural networks. As a consequence, few projects have used light field imaging for depth estimation, and existing efforts often rely on outdated iterations of the technology. Furthermore, the lack of an open-source geometrical model impedes the development of model-based estimation. This thesis explores the potential of focused plenoptic cameras for single-view depth estimation using learning-based methods.
The proposed approach integrates techniques from image processing, deep learning, and scale alignment, achieved through foundation models and robust statistics, to generate dense metric depth maps. To support this approach, a novel real-world dataset of light field images with stereo depth labels was generated, addressing a gap in existing resources. Experimental results demonstrate that the developed pipeline reliably produces accurate metric depth predictions, setting a foundation for further research in this domain.
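The scale-alignment idea mentioned in the abstract can be illustrated with a minimal sketch: given a relative (up-to-scale) depth map, e.g. from a monocular foundation model, and sparse metric measurements, e.g. from stereo labels, a robust statistic such as the median of per-pixel ratios recovers the global scale while tolerating outlier labels. The function and toy data below are hypothetical illustrations of this general technique, not the thesis's actual pipeline.

```python
import numpy as np

def align_depth_scale(relative_depth, metric_samples, mask):
    """Scale a relative (up-to-scale) depth map to metric units using
    sparse metric measurements and a robust median-of-ratios estimator.
    Illustrative sketch only; not the method from the thesis."""
    rel = relative_depth[mask]
    met = metric_samples[mask]
    valid = rel > 1e-6                          # guard against division by zero
    scale = np.median(met[valid] / rel[valid])  # median is robust to outlier labels
    return scale * relative_depth, scale

# Toy example: a relative depth map off by an unknown factor of 2.5,
# with sparse metric labels that contain one gross outlier.
rng = np.random.default_rng(0)
true_depth = rng.uniform(0.5, 5.0, size=(4, 4))
relative = true_depth / 2.5                     # unknown global scale
sparse = true_depth.copy()
sparse[0, 0] = 100.0                            # corrupted metric label
mask = np.zeros((4, 4), dtype=bool)
mask[::2, ::2] = True                           # metric labels only at a few pixels
aligned, scale = align_depth_scale(relative, sparse, mask)
```

Despite the corrupted label, the median-based estimate recovers the true scale factor, whereas a least-squares fit over the same samples would be pulled toward the outlier.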

Item URL in elib: https://elib.dlr.de/205263/
Document Type: Thesis (Master's)
Title: Single-View Depth from Focused Plenoptic Cameras
Authors: Lasheras Hernández, Blanca (Universidad de Zaragoza; ORCID iD: UNSPECIFIED; ORCID Put Code: UNSPECIFIED)
Date: 2024
Open Access: No
Number of Pages: 78
Status: Published
Keywords: plenoptic cameras; depth estimation
Institution: Universidad de Zaragoza
Department: Escuela de Ingeniería y Arquitectura
HGF - Research field: Aeronautics, Space and Transport
HGF - Program: Space
HGF - Program Themes: Robotics
DLR - Research area: Space (Raumfahrt)
DLR - Program: R RO - Robotics
DLR - Research theme (Project): R - Impulse project SaiNSOR [RO], R - Multisensory World Modelling (RM) [RO]
Location: Oberpfaffenhofen
Institutes and Institutions: Institute of Robotics and Mechatronics (since 2013) > Perception and Cognition
Deposited By: Strobl, Dr. Klaus H.
Deposited On: 15 Jul 2024 09:17
Last Modified: 15 Jul 2024 09:17

