
Deep learning based methodology for identification of windthrows in forestry data

Jaiswal, Nishant (2022) Deep learning based methodology for identification of windthrows in forestry data. Master's, Friedrich-Alexander-Universität Erlangen-Nürnberg.

PDF (368 MB) - Only accessible within DLR

Abstract

Post-storm management is necessary to maintain the ecosystem of forests after a storm has occurred. For post-storm management to begin, a rapid survey providing a reliable estimate of the damage must be carried out. Wind damage to forests has significant economic, ecological, and social impacts. In managed forests, storms lead to an increased risk of forest fires, loss of timber, damage to soil, insect and fungal damage, and damage to civil infrastructure. At the same time, dead wood in controlled quantities is essential for the forest ecosystem. It is therefore vital to recognize wind-damaged areas, specifically trees felled by wind (windthrows), for proper management and restoration of forests. Change detection has previously been used for windthrow detection, but it requires an image from before and one from after the storm, which may be challenging to acquire due to atmospheric conditions and the revisit times of remote platforms such as satellites or Unmanned Aerial Vehicles (UAVs). Hence, there is a push towards identifying windthrows from a single image. While deep learning has already been studied for summer storms, winter storms have not been a research focus. In this master's thesis, single post-storm images from winter storms over the forests of Lower Saxony, Germany, are used, and the task of windthrow identification is studied using deep learning methods. The dataset has four channels: RGB and near-infrared (NIR). Each post-storm image has a detailed prediction map consisting of four classes: no forest, forest without windthrows, forest with windthrows, and cleared areas. Two deep learning models are compared: DeepLabv3+ and U-Net. Different channel combinations and input image tile sizes are studied to obtain the best configuration for windthrow detection. The DeepLabv3+ model outperforms the U-Net model, with a prediction accuracy of 86.27% for windthrows, a best accuracy of 95.03% across all classes, and a windthrow class IoU of 0.7440, compared to a windthrow prediction accuracy of 80.97% and a windthrow class IoU of 0.6944 for the U-Net model. The DeepLabv3+ and U-Net models process 2048 × 2048 mosaics with an input image tile size of 512 × 512 in roughly 889 ms and 875 ms, respectively. As a result, a fast and well-performing windthrow detection model based on DeepLabv3+ is developed.
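Since the full thesis is only accessible within DLR, the exact implementation is not reproduced here. The following is a minimal sketch of the setup the abstract describes, using PyTorch and torchvision's DeepLabv3 head (torchvision does not ship DeepLabv3+, so this is an approximation of the thesis model): the backbone's first convolution is widened to accept the four-channel RGB-NIR input, and a 2048 × 2048 mosaic is processed as 512 × 512 tiles. All function and variable names are illustrative assumptions, not the author's code.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

# Assumptions for illustration (torchvision >= 0.13); the thesis's exact
# architecture, training setup, and class ordering are not public.
NUM_CLASSES = 4   # no forest, forest w/o windthrows, forest w/ windthrows, cleared
IN_CHANNELS = 4   # RGB + near-infrared (NIR)
TILE = 512

# DeepLabv3 as a stand-in for the thesis's DeepLabv3+.
model = deeplabv3_resnet50(weights=None, weights_backbone=None,
                           num_classes=NUM_CLASSES)
# Widen the first ResNet convolution so the backbone accepts 4-channel input.
model.backbone.conv1 = nn.Conv2d(IN_CHANNELS, 64, kernel_size=7,
                                 stride=2, padding=3, bias=False)
model.eval()

@torch.no_grad()
def predict_mosaic(mosaic: torch.Tensor) -> torch.Tensor:
    """Split a (4, 2048, 2048) mosaic into 512 x 512 tiles, predict each
    tile, and stitch the per-pixel class labels back together."""
    _, h, w = mosaic.shape
    labels = torch.empty(h, w, dtype=torch.long)
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            tile = mosaic[:, y:y + TILE, x:x + TILE].unsqueeze(0)
            logits = model(tile)["out"]               # (1, 4, 512, 512)
            labels[y:y + TILE, x:x + TILE] = logits.argmax(dim=1)[0]
    return labels

pred = predict_mosaic(torch.rand(4, 2048, 2048))      # synthetic RGB-NIR mosaic
print(pred.shape, pred.unique())
```

Stitching non-overlapping tiles is the simplest scheme; pipelines of this kind often predict overlapping tiles and blend the seams to suppress boundary artifacts, though whether the thesis does so is not stated in the abstract.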

Item URL in elib: https://elib.dlr.de/189568/
Document Type: Thesis (Master's)
Title: Deep learning based methodology for identification of windthrows in forestry data
Authors: Jaiswal, Nishant (Friedrich-Alexander-Universität Erlangen-Nürnberg; ORCID iD: UNSPECIFIED; ORCID Put Code: UNSPECIFIED)
Date: 2022
Refereed publication: Yes
Open Access: No
Status: Published
Keywords: windthrows, deep learning, image segmentation, deeplab, forest management
Institution: Friedrich-Alexander-Universität Erlangen-Nürnberg
Department: Lehrstuhl für Multimediakommunikation und Signalverarbeitung
HGF - Research field: Aeronautics, Space and Transport
HGF - Program: Space
HGF - Program Themes: Earth Observation
DLR - Research area: Raumfahrt
DLR - Program: R EO - Earth Observation
DLR - Research theme (Project): R - Artificial Intelligence
Location: Berlin-Adlershof
Institutes and Institutions: Institute of Optical Sensor Systems
Deposited By: Bhattacharjee, Protim
Deposited On: 07 Nov 2022 16:48
Last Modified: 07 Nov 2022 16:48
