
Enhancing Trust in AI Systems through Adaptive Image Quality Compensation

Kees, Yannick and Hoemann, Elena and Hallerbach, Sven and Koester, Frank (2025) Enhancing Trust in AI Systems through Adaptive Image Quality Compensation. Helmholtz Imaging Conference 2025, 2025-06-25 - 2025-06-27, Potsdam, Germany. (unpublished)

PDF (543 kB) - accessible within DLR only

Abstract

Perception is one of the main application areas in which neural networks far outperform conventional algorithms. One example is AI systems for automated driving, which detect pedestrians from image data and avoid them accordingly. A problem with these systems is that their output depends heavily on the quality of the input images: if an image is of poor quality because it is heavily contaminated with noise or is too dark, accurate predictions are hardly feasible. In addition, different types of errors can occur that differ in their relevance to the trustworthiness of the underlying system; for example, failing to recognize an existing person may be more critical than recognizing a person where there is none. We want to show that the most critical errors can still be avoided in situations of poor image quality. To do this, we compare two approaches. In the first, we lower the network's confidence threshold based on the estimated image quality so that it perceives more cautiously in uncertain situations. In the second, we learn more cautious behavior directly during training by modifying the loss function to penalize different types of errors depending on the image quality of the training data. We also aim to demonstrate that, in practice, combining the two approaches yields the best results. In summary, we present a design strategy for AI-based systems that can cope with poor-quality input data without resorting to fallback solutions; in our example, this is achieved by making the system react with varying degrees of caution. Such measures strengthen trust in AI-based systems and increase safety under unfavorable conditions.
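The two compensation strategies described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the quality score `quality` in [0, 1] (1 = pristine image), the threshold floor `min_threshold`, and the false-negative weight `fn_weight` are all hypothetical parameters chosen for illustration.

```python
import math

def adaptive_threshold(base_threshold, quality, min_threshold=0.2):
    """Approach 1: lower the detector's confidence threshold as image
    quality degrades, so detections are accepted more cautiously.
    Interpolates between a cautious floor and the nominal threshold."""
    return min_threshold + quality * (base_threshold - min_threshold)

def quality_weighted_bce(p, y, quality, fn_weight=5.0):
    """Approach 2: binary cross-entropy whose false-negative term
    (missing an existing person, y = 1) is up-weighted more strongly
    on low-quality images; false positives keep their normal weight."""
    p = min(max(p, 1e-7), 1 - 1e-7)  # avoid log(0)
    # Miss penalty grows as quality drops (hypothetical weighting scheme).
    w_fn = 1.0 + fn_weight * (1.0 - quality)
    return -(w_fn * y * math.log(p) + (1 - y) * math.log(1 - p))
```

On a pristine image (`quality=1.0`) both functions reduce to the standard threshold and the standard cross-entropy; as `quality` drops, the threshold falls toward the floor and a missed detection costs up to `1 + fn_weight` times more than normal, which is one way to encode the asymmetric error criticality the abstract describes.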

elib URL of this entry: https://elib.dlr.de/214934/
Document type: Conference contribution (poster)
Title: Enhancing Trust in AI Systems through Adaptive Image Quality Compensation
Authors:
Author | Institution or email address | Author ORCID iD | ORCID Put Code
Kees, Yannick | yannick.kees (at) dlr.de | https://orcid.org/0009-0004-3614-7220 | not specified
Hoemann, Elena | elena.hoemann (at) dlr.de | https://orcid.org/0000-0001-9315-548X | not specified
Hallerbach, Sven | Sven.Hallerbach (at) dlr.de | not specified | not specified
Koester, Frank | frank.koester (at) dlr.de | not specified | not specified
Date: 2025
Refereed publication: No
Open Access: No
Gold Open Access: No
In SCOPUS: No
In ISI Web of Science: No
Status: unpublished
Keywords: Trustworthiness, AI Safety, Machine Learning, Object Detection
Event title: Helmholtz Imaging Conference 2025
Event location: Potsdam, Germany
Event type: national conference
Event start: 25 June 2025
Event end: 27 June 2025
Organizer: Helmholtz Imaging
HGF research field: Aeronautics, Space and Transport
HGF program: Transport
HGF program topic: Road Transport
DLR focus area: Transport
DLR research area: V ST Road Transport
DLR subfield (project): V - V&V4NGC - Methods, processes and tool chains for the validation & verification of NGC
Location: other
Institutes & facilities: Institut für KI-Sicherheit
Deposited by: Kees, Yannick
Deposited on: 30 Jun 2025 08:41
Last modified: 10 Jul 2025 11:50

