Reif, Aliza (2024) The Image Scaling Attack and Unveiling its Risks in Traffic Sign Classification. Other report. Student thesis. Radboud University. (Unpublished)
Abstract
Image scaling is a necessary preprocessing step for feeding images into a machine learning model that only accepts inputs of a fixed size (Quiring et al., 2023). If an input image is larger than this size, it is downscaled before being passed to the model (Quiring and Rieck, 2020). Scaling libraries offer only a small number of scaling methods (Xiao et al., 2019): for example, an algorithm may select a single pixel from each neighborhood of the source image, or take the average of the pixels in that neighborhood. Because of this, it is possible to determine, even in a black-box setting (Gao et al., 2022), which source pixels are most relevant for the scaled result. This enables the image scaling attack: before scaling, specific pixels can be manipulated so that the scaled output is not a smaller version of the original image but a partially or entirely different image (Quiring et al., 2023; Xiao et al., 2019). This attack has several advantages over other data poisoning attacks: a human observer cannot see the trigger, or that the data has been manipulated at all, because the input image to the model looks correct; the machine learning model, however, receives a completely different image after scaling and classifies that image instead. Image scaling attacks can be carried out at training time or at test time, depending on how the model was trained and on what the adversary can access (Quiring et al., 2023; Quiring and Rieck, 2020; Quiring et al., 2020). This work makes the following contributions: it demonstrates three representative versions of the image scaling attack on traffic sign data, with and without label manipulation; it replicates the trained local trigger with a physical trigger on test images; and it demonstrates how the image scaling attack is universally compatible with other backdoor and evasion attacks.
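The pixel-selection case described above can be illustrated with a short, self-contained sketch. The following Python code is an illustrative example, not the thesis's implementation: it defines a simple nearest-neighbor downscaler and then overwrites only the pixels that scaler will sample, so the attack image is visually almost identical to the source but downscales to an entirely different target. The function names and the 512x512 to 32x32 sizes are assumptions chosen for demonstration.

```python
import numpy as np

def nearest_neighbor_downscale(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Downscale by sampling one source pixel per output pixel."""
    in_h, in_w = img.shape[:2]
    rows = (np.arange(out_h) * in_h) // out_h  # source row sampled for each output row
    cols = (np.arange(out_w) * in_w) // out_w  # source column sampled for each output column
    return img[np.ix_(rows, cols)]

def craft_attack_image(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Overwrite only the sampled pixels, so the result looks like `source`
    at full resolution but downscales exactly to `target`."""
    out_h, out_w = target.shape[:2]
    in_h, in_w = source.shape[:2]
    rows = (np.arange(out_h) * in_h) // out_h
    cols = (np.arange(out_w) * in_w) // out_w
    attack = source.copy()
    attack[np.ix_(rows, cols)] = target  # tampers with under 0.4% of the pixels here
    return attack

# Hypothetical demo with random images standing in for two traffic signs.
source = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
target = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)

attack = craft_attack_image(source, target)
scaled = nearest_neighbor_downscale(attack, 32, 32)

assert np.array_equal(scaled, target)  # the model sees `target`, not `source`
print(f"fraction of modified pixel values: {np.mean(attack != source):.4%}")
```

Against a real library the attacker must match the victim's exact scaler: production implementations use different sampling grids, and averaging methods such as bilinear or area interpolation require solving a small optimization problem instead of direct overwriting (Xiao et al., 2019; Quiring et al., 2020).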
| elib URL of the record: | https://elib.dlr.de/211643/ |
|---|---|
| Document type: | Report series (other report, student thesis) |
| Title: | The Image Scaling Attack and Unveiling its Risks in Traffic Sign Classification |
| Authors: | Reif, Aliza |
| Date: | October 2024 |
| Open Access: | Yes |
| Status: | unpublished |
| Keywords: | Image Scaling Attack, Adversarial Attacks |
| Institution: | Radboud University |
| Department: | Faculty of Science, Digital Security Group |
| HGF - Research Field: | Aeronautics, Space and Transport |
| HGF - Program: | Transport |
| HGF - Program Theme: | Road Transport |
| DLR - Focus Area: | Transport |
| DLR - Research Area: | V ST Road Transport |
| DLR - Subarea (project): | V - KoKoVI - Coordinated Cooperative Transport with Distributed, Learning Intelligence |
| Location: | other |
| Institutes & Facilities: | Institut für KI-Sicherheit |
| Deposited by: | Stolz, Tarek |
| Deposited on: | 08 Jan 2025 18:14 |
| Last modified: | 08 Jan 2025 18:14 |