Peeking into the Pool of Input-Agnostic Adversarial Perturbations

Speaker: Dr. Konda Reddy
Date & Time: Monday, 10th February 2020, 11:30
Venue: Room No. A-414


Deep Neural Networks (DNNs) have become the driving force behind the recent success of Artificial Intelligence (AI). Adversarial perturbations are small but structured additive noise that can destabilize DNNs. In this talk, I will demonstrate simple yet highly effective approaches to crafting input-agnostic (universal) adversarial perturbations that can confuse DNN classifiers. More specifically, I will first show that it is possible to fool these models by encouraging them to behave approximately linearly. Because of the generic nature of this objective, it is effective at fooling DNNs trained for multiple Computer Vision applications, even in the data-free setting. After establishing that there are easier ways to craft input-agnostic adversarial perturbations, I will discuss our attempts to characterize the scale of such perturbations for a single DNN classifier or an ensemble of them. In particular, I will take you through the pool of such perturbations captured by our generative model. This adversarially trained, GAN-like generative framework can learn the manifold of such perturbations and paints a bigger picture of DNN vulnerability. I will conclude with the interesting research directions our findings open up in the space of model robustness. Finally, I will briefly discuss my recent work on the adaptability of DNNs.
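The idea of a single perturbation that fools a classifier on all inputs can be illustrated with a minimal sketch. This is not the speaker's actual method: it uses a toy linear "model" as a stand-in for a DNN, treats the model's own predictions as labels (loosely echoing the data-free spirit, since no ground-truth labels are needed), and takes sign-of-gradient ascent steps on the loss averaged over the whole input set, clipped to a small L-infinity ball. All names, parameters, and the setup are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for a trained DNN: a fixed linear classifier (10 features -> 3 classes)
rng = np.random.default_rng(0)
W = rng.standard_normal((10, 3))
b = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def craft_universal_perturbation(X, y, eps=0.5, steps=20, lr=0.1):
    """Craft ONE perturbation v (shape of a single input) that raises the
    cross-entropy loss averaged over ALL inputs X, so it is input-agnostic.
    v is kept inside an L-infinity ball of radius eps."""
    v = np.zeros(X.shape[1])
    for _ in range(steps):
        p = softmax((X + v) @ W + b)
        # Gradient of mean cross-entropy w.r.t. the shared perturbation v:
        # (softmax - one_hot) back-propagated through the linear layer.
        p[np.arange(len(y)), y] -= 1.0
        grad = (p @ W.T).mean(axis=0)
        v += lr * np.sign(grad)       # ascend the loss (FGSM-style sign step)
        v = np.clip(v, -eps, eps)     # keep the perturbation small
    return v

X = rng.standard_normal((100, 10))
y = (X @ W + b).argmax(1)             # model's own predictions as pseudo-labels
v = craft_universal_perturbation(X, y)

clean_acc = ((X @ W + b).argmax(1) == y).mean()
adv_acc = (((X + v) @ W + b).argmax(1) == y).mean()
```

Note that the same `v` is added to every input; a real attack would apply the analogous procedure to a deep network's gradients rather than a linear map.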

Speaker Profile:

Dr. Konda Reddy is currently a Postdoctoral Research Associate at the University of Edinburgh; he completed his PhD at IISc. His research interests are in deep learning, machine learning, and computer vision. He has published his research in premier venues such as ICML, CVPR, and ECCV, and is interested in building robust and explainable AI systems that understand information and draw useful inferences as humans do.
