Benchmarking and deeper analysis of adversarial patch attack on object detectors - IRT SystemX
Conference paper, Year: 2022

Benchmarking and deeper analysis of adversarial patch attack on object detectors

Stéphane Herbin
Milad Leyli-Abadi

Abstract

Adversarial attacks (either norm-bounded or patch-based) have received much attention from the computer vision community over the last decade. The criticality of these attacks in the physical world, however, is questionable. Indeed, none of the attacks proposed in the literature has been demonstrated in a realistic physical implementation that simultaneously exhibits significant contextual effects and radiometric and geometric robustness in either black-box or gray-box settings. To advance this issue, we propose in this paper an evaluation framework for patch attacks against object detectors. This framework focuses on robustness and transferability properties by considering various image transformations and learning conditions. We validate our framework on three state-of-the-art patch attacks using the PASCAL VOC dataset, providing a more comprehensive view of their criticality.
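To give an idea of the kind of robustness evaluation the abstract describes, here is a minimal sketch (not the authors' framework) that pastes an adversarial patch into images under random geometric and radiometric transformations and measures how often detections are suppressed. It assumes PyTorch/torchvision; the detector choice, patch, images, thresholds, and helper names are all illustrative placeholders.

```python
# Hedged sketch: robustness of a patch attack under random transformations.
# All names and parameters below are assumptions for illustration only.
import torch
import torchvision
import torchvision.transforms.functional as TF


def paste_patch(image, patch, top, left):
    """Overwrite a region of `image` (C,H,W, values in [0,1]) with `patch`."""
    out = image.clone()
    ph, pw = patch.shape[-2:]
    out[:, top:top + ph, left:left + pw] = patch
    return out


def random_transform(patch):
    """Random rotation plus brightness/contrast jitter, mimicking the
    geometric and radiometric variability mentioned in the abstract."""
    angle = float(torch.empty(1).uniform_(-20, 20))
    patch = TF.rotate(patch, angle, fill=0.5)
    patch = TF.adjust_brightness(patch, float(torch.empty(1).uniform_(0.7, 1.3)))
    patch = TF.adjust_contrast(patch, float(torch.empty(1).uniform_(0.7, 1.3)))
    return patch.clamp(0, 1)


@torch.no_grad()
def attack_success_rate(detector, images, patch, score_thr=0.5, n_trials=10):
    """Fraction of (image, transform) trials where the patched image yields
    no detection above `score_thr` -- a crude robustness proxy."""
    detector.eval()
    successes, trials = 0, 0
    for image in images:
        for _ in range(n_trials):
            p = random_transform(patch)
            ph, pw = p.shape[-2:]
            top = int(torch.randint(0, image.shape[-2] - ph, (1,)))
            left = int(torch.randint(0, image.shape[-1] - pw, (1,)))
            patched = paste_patch(image, p, top, left)
            preds = detector([patched])[0]  # torchvision detection API
            if (preds["scores"] > score_thr).sum() == 0:
                successes += 1
            trials += 1
    return successes / trials


if __name__ == "__main__":
    # Placeholder data: a random 100x100 patch and two random 3x416x416 images.
    detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    patch = torch.rand(3, 100, 100)
    images = [torch.rand(3, 416, 416) for _ in range(2)]
    print("attack success rate:", attack_success_rate(detector, images, patch))
```

The same loop could be repeated across several detectors to probe the transferability property discussed in the paper; the actual benchmark protocol is described in the cited PDF.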
Main file
23.pdf (1.19 MB)
Origin: Publisher files authorized on an open archive

Dates and versions

hal-03806714 , version 1 (08-10-2022)

Identifiers

  • HAL Id: hal-03806714, version 1

Cite

Pol Labarbarie, Adrien Chan-Hon-Tong, Stéphane Herbin, Milad Leyli-Abadi. Benchmarking and deeper analysis of adversarial patch attack on object detectors. Workshop Artificial Intelligence Safety - AI Safety (IJCAI-ECAI conference), Jul 2022, Vienna, Austria. ⟨hal-03806714⟩
120 views
47 downloads
