
Accelerating Neural Network Training with Distributed Asynchronous and Selective Optimization (DASO)

Coquelin, Daniel and Debus, Charlotte and Götz, Markus and von der Lehr, Fabrice and Kahn, James and Siggel, Martin and Streit, Achim (2021) Accelerating Neural Network Training with Distributed Asynchronous and Selective Optimization (DASO). [Other]


Official URL: https://arxiv.org/abs/2104.05588


With increasing data and model complexities, the time required to train neural networks has become prohibitively large. To address the exponential rise in training time, users are turning to data parallel neural networks (DPNN) to utilize large-scale distributed resources on computer clusters. Current DPNN approaches implement the network parameter updates by synchronizing and averaging gradients across all processes with blocking communication operations. This synchronization is the central algorithmic bottleneck. To combat this, we introduce the Distributed Asynchronous and Selective Optimization (DASO) method, which leverages multi-GPU compute node architectures to accelerate network training. DASO uses a hierarchical and asynchronous communication scheme comprising node-local and global networks while adjusting the global synchronization rate during the learning process. We show that DASO yields a reduction in training time of up to 34% on classical and state-of-the-art networks, compared to existing data parallel training methods.
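The hierarchical scheme described above can be illustrated with a toy sketch: gradients are averaged within each node at every step (cheap intra-node communication), while the expensive global average across nodes runs only every few steps. This is not DASO's actual implementation; the function name, parameters, and the NumPy simulation are illustrative assumptions.

```python
import numpy as np

def hierarchical_sync(grads, node_groups, step, global_interval):
    """Toy model of hierarchical gradient synchronization.

    grads: one gradient value per worker (GPU).
    node_groups: lists of worker indices sharing a compute node.
    Every step: average within each node. Every `global_interval`
    steps: also average across all workers. (In DASO the global
    interval is additionally adjusted during training; that schedule
    is omitted here.)
    """
    grads = np.asarray(grads, dtype=float)
    # Node-local averaging: fast intra-node links, done every step.
    for group in node_groups:
        grads[group] = grads[group].mean(axis=0)
    # Global averaging: slow inter-node communication, done selectively.
    if step % global_interval == 0:
        grads[:] = grads.mean(axis=0)
    return grads
```

For example, with four workers on two nodes, a non-global step leaves each node with its own local average, so the two nodes' parameters drift slightly between global synchronizations; this staleness is the price paid for skipping blocking all-reduce operations on most steps.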

Item URL in elib: https://elib.dlr.de/146819/
Document Type: Other
Title:Accelerating Neural Network Training with Distributed Asynchronous and Selective Optimization (DASO)
Authors (with institution and ORCID iD, where given):
Götz, Markus (Karlsruher Institut für Technologie (KIT)): https://orcid.org/0000-0002-2233-1041
von der Lehr, Fabrice: https://orcid.org/0009-0000-2134-6754
Siggel, Martin: https://orcid.org/0000-0002-3952-4659
Date: 12 April 2021
Journal or Publication Title: arXiv
Refereed publication: No
Open Access: Yes
Number of Pages: 12
Keywords: Computer Science, Machine Learning, Neural Network, Optimization, Distributed Training, Data Parallelism
HGF - Research field: Aeronautics, Space and Transport
HGF - Program: Space
HGF - Program Themes: Space System Technology
DLR - Research area: Spaceflight (Raumfahrt)
DLR - Program: R SY - Space System Technology
DLR - Research theme (Project): R - Tasks SISTEC
Location: Köln-Porz
Institutes and Institutions: Institute of Simulation and Software Technology > High Performance Computing
Institute for Software Technology
Institute for Software Technology > High-Performance Computing
Deposited By: von der Lehr, Fabrice
Deposited On: 09 Dec 2021 09:05
Last Modified: 16 Dec 2021 13:33

