
HOW DOES PRE-TRAINED WAV2VEC 2.0 PERFORM ON DOMAIN-SHIFTED ASR? AN EXTENSIVE BENCHMARK ON AIR TRAFFIC CONTROL COMMUNICATIONS

Zuluaga-Gomez, Juan Pablo and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Seyyed Saeed and Motlicek, Petr and Kleinert, Matthias and Helmke, Hartmut and Ohneiser, Oliver and Zhan, Qingran (2023) HOW DOES PRE-TRAINED WAV2VEC 2.0 PERFORM ON DOMAIN-SHIFTED ASR? AN EXTENSIVE BENCHMARK ON AIR TRAFFIC CONTROL COMMUNICATIONS. In: 2022 IEEE Spoken Language Technology Workshop, SLT 2022 - Proceedings. The 2022 IEEE Spoken Language Technology Workshop (SLT 2022), 2023-01-09 - 2023-01-12, Doha, Qatar. doi: 10.1109/SLT54892.2023.10022724. ISBN 979-835039690-4. ISSN 2639-5479.

Full text: PDF, 292 kB

Abstract

Recent work on self-supervised pre-training focuses on leveraging large-scale unlabeled speech data to build robust end-to-end (E2E) acoustic models (AMs) that can later be fine-tuned on downstream tasks, e.g., automatic speech recognition (ASR). Yet, few works have investigated the impact on performance when the data properties substantially differ between the pre-training and fine-tuning phases, termed domain shift. We target this scenario by analyzing the robustness of Wav2Vec 2.0 and XLS-R models on downstream ASR for a completely unseen domain: air traffic control (ATC) communications. We benchmark these two models on several open-source and challenging ATC databases with signal-to-noise ratios between 5 and 20 dB. Relative word error rate (WER) reductions of 20% to 40% are obtained in comparison to hybrid-based ASR baselines by fine-tuning E2E acoustic models with only a small fraction of labeled data. We also analyze WERs in the low-resource scenario and the gender bias carried by one ATC dataset.
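The pipeline the abstract describes (take a pre-trained wav2vec 2.0 CTC model, decode speech, score with WER) can be sketched with the Hugging Face transformers and jiwer libraries. This is a minimal sketch, not the authors' exact fine-tuning setup: the checkpoint name, the 16 kHz mono-audio assumption, and the ATC-style reference transcript below are all illustrative.

# A minimal sketch, assuming the `transformers` and `jiwer` libraries and a
# generic public checkpoint; it illustrates greedy CTC decoding and WER
# scoring with a pre-trained wav2vec 2.0 model, not the paper's pipeline.
import numpy as np
import torch
import jiwer
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

CHECKPOINT = "facebook/wav2vec2-large-960h"  # any wav2vec 2.0 / XLS-R CTC checkpoint

processor = Wav2Vec2Processor.from_pretrained(CHECKPOINT)
model = Wav2Vec2ForCTC.from_pretrained(CHECKPOINT)
model.eval()

def transcribe(waveform: np.ndarray, sampling_rate: int = 16_000) -> str:
    """Greedy CTC decoding of a mono waveform."""
    inputs = processor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(predicted_ids)[0]

# WER against a reference transcript; this ATC-style utterance is an invented
# example, not taken from the paper's data.
reference = "LUFTHANSA THREE TWO FIVE CONTACT RADAR ONE TWO SEVEN DECIMAL THREE"
hypothesis = transcribe(np.random.randn(16_000).astype(np.float32))  # 1 s of placeholder audio
print(f"WER: {jiwer.wer(reference, hypothesis):.2%}")

Fine-tuning on ATC data would additionally require a labeled dataset and a CTC training loop; the sketch only covers the inference and scoring side of the benchmark.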

Item URL in elib: https://elib.dlr.de/189418/
Document Type: Conference or Workshop Item (Speech)
Title: HOW DOES PRE-TRAINED WAV2VEC 2.0 PERFORM ON DOMAIN-SHIFTED ASR? AN EXTENSIVE BENCHMARK ON AIR TRAFFIC CONTROL COMMUNICATIONS
Authors (name, institution, ORCID where given):
Zuluaga-Gomez, Juan Pablo (Idiap, EPFL)
Prasad, Amrutha (Idiap, BUT)
Nigmatulina, Iuliia (Idiap)
Sarfjoo, Seyyed Saeed (Idiap)
Motlicek, Petr
Kleinert, Matthias (ORCID: https://orcid.org/0000-0002-0782-4147)
Helmke, Hartmut (ORCID: https://orcid.org/0000-0002-1939-0200)
Ohneiser, Oliver (ORCID: https://orcid.org/0000-0002-5411-691X)
Zhan, Qingran (School of Information and Electronics, Beijing Institute of Technology)
Date: 2023
Journal or Publication Title: 2022 IEEE Spoken Language Technology Workshop, SLT 2022 - Proceedings
Refereed publication: Yes
Open Access: Yes
Gold Open Access: No
In SCOPUS: Yes
In ISI Web of Science: Yes
DOI: 10.1109/SLT54892.2023.10022724
ISSN: 2639-5479
ISBN: 979-835039690-4
Status: Published
Keywords: Automatic speech recognition, Wav2Vec 2.0, self-supervised pre-training, air traffic control communications
Event Title: The 2022 IEEE Spoken Language Technology Workshop (SLT 2022)
Event Location: Doha, Qatar
Event Type: International Conference
Event Start Date: 9 January 2023
Event End Date: 12 January 2023
HGF - Research field: Aeronautics, Space and Transport
HGF - Program: Aeronautics
HGF - Program Themes: Air Transportation and Impact
DLR - Research area: Aeronautics
DLR - Program: L AI - Air Transportation and Impact
DLR - Research theme (Project): L - Integrated Flight Guidance
Location: Braunschweig
Institutes and Institutions: Institute of Flight Guidance > Controller Assistance
Deposited By: Diederich, Kerstin
Deposited On: 12 Dec 2022 09:35
Last Modified: 24 Apr 2024 20:50
