Towards a Certification of Deep Image Classifiers against Convolutional Attacks - IRT SystemX
Conference Paper, Year: 2022

Towards a Certification of Deep Image Classifiers against Convolutional Attacks

Abstract

Deep learning models do not yet achieve the confidence, explainability, and transparency levels required for integration into safety-critical systems. For DNN-based image classifiers, robustness was first studied under simple image attacks (2D rotation, brightness) and subsequently under other geometrical perturbations. In this paper, we introduce a new method to certify deep image classifiers against convolutional attacks. Using abstract interpretation theory, we formulate lower and upper bounds with abstract intervals to support further classes of advanced attacks, including image filtering. We evaluate the proposed method on the MNIST and CIFAR10 databases and on several DNN architectures. The results show that convolutional neural networks are more robust against filtering attacks, whereas multilayer perceptron robustness decreases as the number of neurons and hidden layers grows. These results show that increasing the complexity of DNN models improves prediction accuracy but often degrades robustness.
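To illustrate the kind of bound propagation the abstract refers to, the sketch below pushes pixel-wise intervals through a single 2D convolution using standard interval arithmetic. This is a minimal, hypothetical example (the function name `interval_conv2d`, the mean-blur kernel, and the perturbation budget `eps` are assumptions for illustration), not the authors' exact formulation.

```python
import numpy as np

def interval_conv2d(lower, upper, kernel):
    """Propagate pixel-wise interval bounds [lower, upper] through a
    single-channel 2D convolution (valid padding, stride 1).

    Standard interval arithmetic: a positive kernel weight maps the
    lower bound to the lower bound; a negative weight swaps the roles
    of the two bounds.
    """
    kh, kw = kernel.shape
    H = lower.shape[0] - kh + 1
    W = lower.shape[1] - kw + 1
    out_l = np.zeros((H, W))
    out_u = np.zeros((H, W))
    w_pos = np.maximum(kernel, 0.0)  # positive part of the kernel
    w_neg = np.minimum(kernel, 0.0)  # negative part of the kernel
    for i in range(H):
        for j in range(W):
            patch_l = lower[i:i + kh, j:j + kw]
            patch_u = upper[i:i + kh, j:j + kw]
            out_l[i, j] = np.sum(w_pos * patch_l + w_neg * patch_u)
            out_u[i, j] = np.sum(w_pos * patch_u + w_neg * patch_l)
    return out_l, out_u

# Example: a 3x3 mean-blur (filtering) kernel applied to an image whose
# pixels are each perturbed within +/- eps.
img = np.arange(16, dtype=float).reshape(4, 4) / 16.0
eps = 0.05
k = np.full((3, 3), 1.0 / 9.0)
lo, hi = interval_conv2d(img - eps, img + eps, k)
```

Any concrete filtered image reachable under the perturbation is guaranteed to lie inside `[lo, hi]`; a certifier would then propagate these intervals through the remaining network layers and check that the predicted class cannot change.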

Dates and versions

hal-03622400, version 1 (29-03-2022)

Identifiers

Cite

Mallek Mziou-Sallami, Faouzi Adjed. Towards a Certification of Deep Image Classifiers against Convolutional Attacks. 14th International Conference on Agents and Artificial Intelligence, Feb 2022, Online Streaming, France. pp.419-428, ⟨10.5220/0010870400003116⟩. ⟨hal-03622400⟩

Collections

CEA IRT-SYSTEMX