
Performance of the Low-Rank TT-SVD for Large Dense Tensors on Modern Multicore CPUs

Röhrig-Zöllner, Melven and Thies, Jonas and Basermann, Achim (2022) Performance of the Low-Rank TT-SVD for Large Dense Tensors on Modern Multicore CPUs. SIAM Journal on Scientific Computing, 44 (4), C287-C309. SIAM - Society for Industrial and Applied Mathematics. doi: 10.1137/21m1395545. ISSN 1064-8275.

PDF - Preprint version (submitted draft version), 606 kB

Abstract

There are several factorizations of multidimensional tensors into lower-dimensional components, known as "tensor networks." We consider the popular "tensor-train" (TT) format and ask: How efficiently can we compute a low-rank approximation from a full tensor on current multicore CPUs? Compared to sparse and dense linear algebra, kernel libraries for multilinear algebra are rare and typically not as well optimized. Linear algebra libraries like BLAS and LAPACK may provide the required operations in principle, but often at the cost of additional data movements for rearranging memory layouts. Furthermore, these libraries are typically optimized for the compute-bound case (e.g., square matrix operations), whereas low-rank tensor decompositions lead to memory-bandwidth-limited operations. We propose a "TT singular value decomposition" (TT-SVD) algorithm based on two building blocks: a "Q-less tall-skinny QR" factorization and a fused tall-skinny matrix-matrix multiplication and reshape operation. We analyze the performance of the resulting TT-SVD algorithm using the roofline performance model. In addition, we present performance results for different algorithmic variants for shared-memory as well as distributed-memory architectures. Our experiments show that commonly used TT-SVD implementations suffer severe performance penalties. We conclude that a dedicated library for tensor factorization kernels would benefit the community: Computing a low-rank approximation can be as cheap as reading the data twice from main memory. As a consequence, an implementation that achieves realistic performance will move the limit at which one has to resort to randomized methods that only process part of the data.
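
For context, the classical TT-SVD the abstract refers to proceeds as a sweep over the dimensions: at each step the remaining data is reshaped into a tall matrix, a truncated SVD is computed, and the right factor is carried on to the next dimension. Below is a minimal NumPy sketch of that classical algorithm, not the authors' optimized implementation; all names (tt_svd, max_rank) are illustrative. The paper instead builds each step from a Q-less tall-skinny QR factorization and a fused tall-skinny matrix-matrix multiplication and reshape; the plain SVD is used here for clarity only.

    import numpy as np

    def tt_svd(tensor, max_rank):
        """Classical TT-SVD sketch: sweep over the dimensions, reshaping the
        remaining data into a tall matrix and truncating its SVD."""
        dims = tensor.shape
        cores = []
        rank = 1
        remainder = tensor.reshape(dims[0], -1)
        for n, dim in enumerate(dims[:-1]):
            # remainder has shape (rank * dim, prod(dims[n+1:])).
            u, s, vt = np.linalg.svd(remainder, full_matrices=False)
            keep = min(max_rank, s.size)
            cores.append(u[:, :keep].reshape(rank, dim, keep))
            rank = keep
            # Carry S @ Vt onward and fold in the next dimension; this is the
            # tall-skinny matrix product plus reshape that the paper fuses.
            remainder = (s[:keep, None] * vt[:keep]).reshape(rank * dims[n + 1], -1)
        cores.append(remainder.reshape(rank, dims[-1], 1))
        return cores

    # Usage: a separable tensor (exact TT-rank 1) compresses to near machine
    # precision even with a small rank bound.
    g = np.linspace(0.0, 1.0, 20)
    e = np.exp(-g)
    x = np.einsum('i,j,k,l->ijkl', e, e, e, e)
    cores = tt_svd(x, max_rank=10)
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=(full.ndim - 1, 0))
    print(np.linalg.norm(full.reshape(x.shape) - x) / np.linalg.norm(x))

Note that each SVD here operates on a tall-skinny matrix whose cost is dominated by memory traffic rather than flops, which is exactly the regime the paper's roofline analysis targets.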

elib URL of the entry: https://elib.dlr.de/190125/
Document type: Journal article
Title: Performance of the Low-Rank TT-SVD for Large Dense Tensors on Modern Multicore CPUs
Authors:
Name | Institution or e-mail address | ORCID iD | ORCID Put Code
Röhrig-Zöllner, Melven | Melven.Roehrig-Zoellner (at) dlr.de | https://orcid.org/0000-0001-9851-5886 | not specified
Thies, Jonas | J.Thies (at) tudelft.nl | https://orcid.org/0000-0001-9231-9999 | not specified
Basermann, Achim | Achim.Basermann (at) dlr.de | https://orcid.org/0000-0003-3637-3231 | 161994900
Date: July 2022
Published in: SIAM Journal on Scientific Computing
Refereed publication: Yes
Open Access: Yes
Gold Open Access: No
In SCOPUS: Yes
In ISI Web of Science: Yes
Volume: 44
DOI: 10.1137/21m1395545
Page range: C287-C309
Publisher: SIAM - Society for Industrial and Applied Mathematics
ISSN: 1064-8275
Status: published
Keywords: tensor decomposition, performance modeling, high-dimensional problems, higher-order SVD, high-performance computing, TT-format
HGF - Research area: Aeronautics, Space and Transport
HGF - Program: Space
HGF - Program theme: Space System Technology
DLR - Focus area: Space
DLR - Research area: R SY - Space System Technology
DLR - Sub-area (project, activity): R - Tasks SISTEC
Location: Köln-Porz
Institutes & facilities: Institute for Software Technology
Institute for Software Technology > High-Performance Computing
Deposited by: Röhrig-Zöllner, Melven
Deposited on: 17 Nov 2022 07:45
Last modified: 20 Jun 2024 13:39

