Murali, Prajval Kumar and Wang, Cong and Lee, Dongheui and Dahiya, Ravinder and Kaboli, Mohsen (2022) Deep Active Cross-Modal Visuo-Tactile Transfer Learning for Robotic Object Recognition. IEEE Robotics and Automation Letters, 7 (4), pages 9557-9564. IEEE - Institute of Electrical and Electronics Engineers. doi: 10.1109/LRA.2022.3191408. ISSN 2377-3766.
Official URL: https://ieeexplore.ieee.org/document/9830870
Abstract
We propose, for the first time, a novel, full-fledged deep active visuo-tactile cross-modal framework for object recognition by autonomous robotic systems. Our proposed network, xAVTNet, is actively trained with labelled point clouds from a vision sensor with one robot and tested with an active tactile perception strategy to recognise objects never touched before using another robot. We propose a novel visuo-tactile loss (VTLoss) to minimise the discrepancy between the visual and tactile domains for unsupervised domain adaptation. Our framework leverages the strengths of deep neural networks for cross-modal recognition along with active perception and active learning strategies for increased efficiency by minimising redundant data collection. Our method is extensively evaluated on a real robotic system and compared against baselines and other state-of-the-art approaches. We demonstrate a clear improvement in recognition accuracy over the state-of-the-art visuo-tactile cross-modal recognition method.
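The abstract does not define the VTLoss itself. As a purely illustrative sketch of the general idea of minimising a discrepancy between two feature domains for unsupervised domain adaptation, a generic maximum mean discrepancy (MMD) objective between visual and tactile feature batches could look like the following. All names, the RBF-kernel choice, and the MMD formulation are assumptions for illustration, not the paper's actual loss:

```python
import math

def rbf(x, y, sigma=1.0):
    # RBF kernel between two feature vectors (plain lists of floats).
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2 * sigma ** 2))

def mmd_loss(visual_feats, tactile_feats, sigma=1.0):
    """Squared maximum mean discrepancy between two feature batches.

    Illustrative stand-in for a cross-modal discrepancy loss; the
    paper's VTLoss is not specified in this record.
    """
    def mean_kernel(xs, ys):
        return sum(rbf(x, y, sigma) for x in xs for y in ys) / (len(xs) * len(ys))

    return (mean_kernel(visual_feats, visual_feats)
            + mean_kernel(tactile_feats, tactile_feats)
            - 2 * mean_kernel(visual_feats, tactile_feats))

# Hypothetical 2-D feature batches for demonstration only.
v = [[0.0, 1.0], [1.0, 0.0]]
t = [[5.0, 5.0], [6.0, 4.0]]
print(mmd_loss(v, v))           # 0.0 for identical batches
print(mmd_loss(v, t) > 0.0)     # True: distant batches have a larger discrepancy
```

Minimising such a discrepancy over learned embeddings is a common way to align a labelled source modality (here, vision) with an unlabelled target modality (touch) so that a classifier trained on one can transfer to the other.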
| elib URL of this record: | https://elib.dlr.de/194560/ |
|---|---|
| Document type: | Journal article |
| Title: | Deep Active Cross-Modal Visuo-Tactile Transfer Learning for Robotic Object Recognition |
| Authors: | Murali, Prajval Kumar; Wang, Cong; Lee, Dongheui; Dahiya, Ravinder; Kaboli, Mohsen |
| Date: | October 2022 |
| Published in: | IEEE Robotics and Automation Letters |
| Refereed publication: | Yes |
| Open Access: | Yes |
| Gold Open Access: | No |
| In SCOPUS: | Yes |
| In ISI Web of Science: | Yes |
| Volume: | 7 |
| DOI: | 10.1109/LRA.2022.3191408 |
| Page range: | 9557-9564 |
| Publisher: | IEEE - Institute of Electrical and Electronics Engineers |
| ISSN: | 2377-3766 |
| Status: | published |
| Keywords: | Active visuo-tactile object recognition, perception for grasping and manipulation, transfer learning, visuo-tactile cross-modal learning |
| HGF research field: | Aeronautics, Space and Transport |
| HGF programme: | Space |
| HGF programme topic: | Robotics |
| DLR focus area: | Space |
| DLR research area: | R RO - Robotics |
| DLR subarea (project): | R - Autonomous, Learning Robots [RO] |
| Location: | Oberpfaffenhofen |
| Institutes & facilities: | Institut für Robotik und Mechatronik (ab 2013) > Leitungsbereich |
| Deposited by: | Geyer, Günther |
| Deposited on: | 31 Mar 2023 12:51 |
| Last modified: | 28 Jun 2023 13:55 |