
Accelerating Neural Network Training with Distributed Asynchronous and Selective Optimization (DASO)

Coquelin, Daniel and Debus, Charlotte and Götz, Markus and von der Lehr, Fabrice and Kahn, James and Siggel, Martin and Streit, Achim (2021) Accelerating Neural Network Training with Distributed Asynchronous and Selective Optimization (DASO). [Other]

Full text: PDF (441 kB)

Official URL: https://arxiv.org/abs/2104.05588

Abstract

With increasing data and model complexities, the time required to train neural networks has become prohibitively large. To address the exponential rise in training time, users are turning to data parallel neural networks (DPNN) to utilize large-scale distributed resources on computer clusters. Current DPNN approaches implement the network parameter updates by synchronizing and averaging gradients across all processes with blocking communication operations. This synchronization is the central algorithmic bottleneck. To combat this, we introduce the Distributed Asynchronous and Selective Optimization (DASO) method, which leverages multi-GPU compute node architectures to accelerate network training. DASO uses a hierarchical and asynchronous communication scheme comprising node-local and global networks while adjusting the global synchronization rate during the learning process. We show that DASO yields a reduction in training time of up to 34% on classical and state-of-the-art networks, as compared to existing data parallel training methods.
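The hierarchical scheme the abstract describes can be illustrated with a minimal, library-free sketch: gradients are always averaged across the GPUs within a node (the fast interconnect domain), while the expensive cross-node average is performed only every few steps. This is an assumption-laden toy model for intuition, not the authors' implementation, which uses asynchronous collectives on real multi-GPU nodes; the names `daso_step` and `global_sync_every` are invented here, and scalars stand in for gradient tensors.

```python
def local_average(node_grads):
    """Blocking average across the GPUs of one node (fast intra-node link)."""
    return sum(node_grads) / len(node_grads)

def daso_step(cluster_grads, step, global_sync_every):
    """One update: always average node-locally; only every
    `global_sync_every` steps also average across nodes over the
    slower inter-node network. Returns one gradient per node."""
    node_avgs = [local_average(g) for g in cluster_grads]
    if step % global_sync_every == 0:
        # Selective global synchronization: in DASO the rate of these
        # global steps is adjusted during training.
        global_avg = sum(node_avgs) / len(node_avgs)
        return [global_avg] * len(node_avgs)
    return node_avgs

# Example: 2 nodes with 4 GPUs each, scalar "gradients" for clarity.
cluster = [[1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0]]
print(daso_step(cluster, step=1, global_sync_every=4))  # local averages only
print(daso_step(cluster, step=4, global_sync_every=4))  # full global average
```

Skipping the global all-reduce on most steps is what removes the blocking cross-node synchronization from the critical path; the trade-off is that node-local models drift between global steps.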

Item URL in elib: https://elib.dlr.de/146819/
Document Type: Other
Title: Accelerating Neural Network Training with Distributed Asynchronous and Selective Optimization (DASO)
Authors:

Author                | Institution or Email                      | ORCID iD
Coquelin, Daniel      | daniel.coquelin (at) kit.edu              | UNSPECIFIED
Debus, Charlotte      | charlotte.debus (at) kit.edu              | UNSPECIFIED
Götz, Markus          | Karlsruher Institut für Technologie (KIT) | https://orcid.org/0000-0002-2233-1041
von der Lehr, Fabrice | Fabrice.Lehr (at) dlr.de                  | UNSPECIFIED
Kahn, James           | james.kahn (at) kit.edu                   | UNSPECIFIED
Siggel, Martin        | martin.siggel (at) dlr.de                 | https://orcid.org/0000-0002-3952-4659
Streit, Achim         | achim.streit (at) kit.edu                 | UNSPECIFIED

Date: 12 April 2021
Journal or Publication Title: arXiv
Refereed publication: No
Open Access: Yes
Gold Open Access: No
In SCOPUS: No
In ISI Web of Science: No
Number of Pages: 12
Status: Published
Keywords: Computer Science, Machine Learning, Neural Network, Optimization, Distributed Training, Data Parallelism
HGF - Research field: Aeronautics, Space and Transport
HGF - Program: Space
HGF - Program Themes: Space System Technology
DLR - Research area: Space (Raumfahrt)
DLR - Program: R SY - Space System Technology
DLR - Research theme (Project): R - Tasks SISTEC
Location: Köln-Porz
Institutes and Institutions: Institute of Simulation and Software Technology > High Performance Computing; Institute for Software Technology; Institute for Software Technology > High-Performance Computing
Deposited By: von der Lehr, Fabrice
Deposited On: 09 Dec 2021 09:05
Last Modified: 16 Dec 2021 13:33

