Valls Mascaró, Esteve and Ahn, Hyemin and Lee, Dongheui (2024) A Unified Masked Autoencoder with Patchified Skeletons for Motion Synthesis. In: 38th AAAI Conference on Artificial Intelligence, AAAI 2024, 38 (6), pp. 5261-5269. The 38th Annual AAAI Conference on Artificial Intelligence, 2024-02-20, Vancouver, Canada. doi: 10.1609/aaai.v38i6.28333. ISBN 978-1-57735-887-9. ISSN 2159-5399.
This archive cannot provide the full text of this publication.
Official URL: https://ojs.aaai.org/index.php/AAAI/article/view/28333
Abstract
The synthesis of human motion has traditionally been addressed through task-dependent models that focus on specific challenges, such as predicting future motions or filling in intermediate poses conditioned on known key-poses. In this paper, we present a novel task-independent model called UNIMASK-M, which can effectively address these challenges using a unified architecture. Our model achieves performance comparable to or better than the state of the art in each field. Inspired by Vision Transformers (ViTs), our UNIMASK-M model decomposes a human pose into body parts to leverage the spatio-temporal relationships existing in human motion. Moreover, we reformulate various pose-conditioned motion synthesis tasks as a reconstruction problem with different masking patterns given as input. By explicitly informing our model about the masked joints, our UNIMASK-M becomes more robust to occlusions. Experimental results show that our model successfully forecasts human motion on the Human3.6M dataset while achieving state-of-the-art results in motion inbetweening on the LaFAN1 dataset for long transition periods.
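The following is a minimal sketch (not the authors' code) of the core idea stated in the abstract: several pose-conditioned synthesis tasks can be cast as one reconstruction problem that differs only in the masking pattern supplied with the input. The sequence shapes, body-part grouping, and all function names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

T, J, D = 50, 22, 3                 # frames, joints, 3D coordinates (assumed sizes)
motion = np.random.randn(T, J, D)   # stand-in for a real motion clip

# Assumed body-part grouping used to "patchify" a skeleton into parts,
# analogous to image patches in a ViT (joint indices are illustrative).
BODY_PARTS = {
    "torso":     [0, 1, 2, 3, 4],
    "left_arm":  [5, 6, 7, 8],
    "right_arm": [9, 10, 11, 12],
    "left_leg":  [13, 14, 15, 16],
    "right_leg": [17, 18, 19, 20, 21],
}

def prediction_mask(t_observed: int) -> np.ndarray:
    """Motion forecasting: every joint after the observed prefix is masked."""
    mask = np.zeros((T, J), dtype=bool)
    mask[t_observed:] = True
    return mask

def inbetweening_mask(keyframes: list[int]) -> np.ndarray:
    """Motion in-betweening: everything except the given key-poses is masked."""
    mask = np.ones((T, J), dtype=bool)
    mask[keyframes] = False
    return mask

def occlusion_mask(part: str) -> np.ndarray:
    """Occlusion robustness: one body part is masked over the whole clip."""
    mask = np.zeros((T, J), dtype=bool)
    mask[:, BODY_PARTS[part]] = True
    return mask

def make_model_input(motion: np.ndarray, mask: np.ndarray):
    """Zero out masked joints and return the mask alongside the poses, so a
    reconstruction model is explicitly told which joints it must fill in."""
    visible = motion * (~mask)[..., None]
    return visible, mask

# One (hypothetical) reconstruction model could then be trained on all tasks:
for mask in (prediction_mask(25), inbetweening_mask([0, 49]), occlusion_mask("left_arm")):
    visible, m = make_model_input(motion, mask)
    print(visible.shape, m.mean())   # fraction of masked joint-frames per task
```

Under these assumptions, only the mask generator changes between forecasting, in-betweening, and occlusion handling; the backbone that reconstructs the masked joints stays the same, which is what makes the architecture task-independent.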
elib URL of the record: https://elib.dlr.de/208539/
Document type: Conference contribution (talk)
Title: A Unified Masked Autoencoder with Patchified Skeletons for Motion Synthesis
Authors: Valls Mascaró, Esteve; Ahn, Hyemin; Lee, Dongheui
Date: 24 March 2024
Published in: 38th AAAI Conference on Artificial Intelligence, AAAI 2024
Refereed publication: Yes
Open Access: No
Gold Open Access: No
In SCOPUS: Yes
In ISI Web of Science: Yes
Volume: 38
DOI: 10.1609/aaai.v38i6.28333
Page range: pp. 5261-5269
ISSN: 2159-5399
ISBN: 978-1-57735-887-9
Status: published
Keywords: motion synthesis
Event title: The 38th Annual AAAI Conference on Artificial Intelligence
Event location: Vancouver, Canada
Event type: international conference
Event date: 20 February 2024
HGF research field: Aeronautics, Space and Transport
HGF program: Space
HGF program theme: Robotics
DLR focus area: Space
DLR research area: R RO - Robotics
DLR subarea (project): R - Intuitive Human-Robot Interface [RO]
Location: Oberpfaffenhofen
Institutes & facilities: Institute of Robotics and Mechatronics (from 2013)
Deposited by: Strobl, Dr.-Ing. Klaus H.
Deposited on: 14 Nov 2024 11:42
Last modified: 14 Nov 2024 11:42