Efficient Defenses against Adversarial Examples for Deep Neural Networks, presented at GreHack 2017

by Valentina Zantedeschi, Irina Nicolae

Summary: Following the recent adoption of deep neural networks (DNNs) in a wide range of application fields, adversarial attacks against these models have proven to be an indisputable threat. Adversarial samples are inputs crafted with the deliberate intention of producing a specific response from the system. Multiple attacks and defenses have been proposed in the literature, but the sensitivity of DNNs is still poorly understood, and adversarial samples remain an open problem. This talk presents a new defense method, based on practical observations, that is easy to integrate into existing models and performs better than state-of-the-art defenses. The proposed solution reinforces the structure of a DNN, making its predictions more stable and less likely to be fooled by adversarial samples. An extensive experimental study demonstrates the effectiveness of the method against multiple attacks, comparing it to existing defenses in both white-box and black-box setups. Additionally, the defense adds almost no overhead to the training procedure, while preserving the prediction performance of the original model on clean samples. A live demo of creating adversarial images will take place during the talk.
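
For readers unfamiliar with how adversarial images are created, here is a minimal sketch of the fast gradient sign method (FGSM) of Goodfellow et al., one common crafting technique. This is illustrative only: the talk's live demo is not necessarily based on this attack, and the model, data, and epsilon below are placeholder assumptions, written for PyTorch.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def fgsm(model, x, y, epsilon=0.03):
        # Take one step of size epsilon along the sign of the loss
        # gradient, the direction that most increases the loss on (x, y).
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        # Clip back to the valid pixel range.
        return x_adv.clamp(0.0, 1.0).detach()

    # Toy usage: a random linear classifier on 28x28 "images".
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)    # pixel values in [0, 1]
    y = torch.tensor([3])           # arbitrary true label
    x_adv = fgsm(model, x, y)
    print((x_adv - x).abs().max())  # perturbation is bounded by epsilon

The perturbation is imperceptibly small per pixel, yet it is chosen adversarially rather than at random, which is what makes such samples effective at fooling the model.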
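
The abstract does not spell out the defense mechanism itself. The associated paper by the same authors ("Efficient Defenses Against Adversarial Attacks", AISec 2017) combines a bounded ReLU activation with Gaussian data augmentation; the sketch below is a minimal illustration of that idea, with assumed hyperparameters (the bound t and noise level sigma), not the authors' exact implementation.

    import torch
    import torch.nn as nn

    class BoundedReLU(nn.Module):
        # Clip activations to [0, t]; bounding the activation limits how
        # far a small input perturbation can push the internal state.
        def __init__(self, t=1.0):
            super().__init__()
            self.t = t

        def forward(self, x):
            return x.clamp(0.0, self.t)

    def gaussian_augment(x, sigma=0.1):
        # Train on noisy copies of clean samples to smooth the model
        # around the data, stabilizing its predictions.
        return (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)

    # Drop-in usage: swap ReLU for BoundedReLU in an existing network.
    model = nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 128),
        BoundedReLU(t=1.0),
        nn.Linear(128, 10),
    )

Both ingredients are cheap to add to an existing training pipeline, which is consistent with the abstract's claim of near-zero training overhead.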