How benign is benign overfitting in deep neural networks?
Title of the Talk: How benign is benign overfitting in deep neural networks?
Speaker: Amartya Sanyal
Host Faculty: Dr. Vineeth N Balasubramanian
Date & Time: Tuesday, 18th August 2020, 15:00
Venue: Google Meet
We investigate two causes of adversarial vulnerability in deep neural networks: bad data and (poorly) trained models. When trained with SGD, deep neural networks essentially achieve zero training error, even in the presence of label noise, while also exhibiting good generalization on natural test data, a phenomenon referred to as benign overfitting. However, these models are vulnerable to adversarial attacks. We identify label noise as one cause of adversarial vulnerability, and provide theoretical and empirical evidence in support of this. Surprisingly, we find several instances of label noise in standard datasets such as MNIST and CIFAR, and observe that robustly trained models incur training error on some of these examples, i.e. they do not fit the noise. However, removing noisy labels alone does not suffice to achieve adversarial robustness. Standard training procedures bias neural networks towards learning “simple” classification boundaries, which may be less robust than more complex ones. We observe that adversarial training does produce more complex decision boundaries. We conjecture that the need for complex decision boundaries arises in part from sub-optimal representation learning. By means of simple toy examples, we show theoretically how the choice of representation can drastically affect adversarial robustness.
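To make the label-noise argument concrete, here is a hypothetical toy sketch (illustrative only, not material from the talk): a one-dimensional dataset with a single flipped label, an interpolating 1-nearest-neighbour classifier that fits the noise, and a simple threshold rule that incurs one training error but stays robust near the noisy point.

import numpy as np

# Hypothetical toy setup: 1-D inputs, true label is the sign of x,
# with one deliberately flipped label (label noise) at x = 0.5.
X = np.linspace(-1.0, 1.0, 21)   # 21 training inputs, spacing 0.1
y = np.where(X >= 0, 1, -1)      # true labels
y[15] = -y[15]                   # inject label noise at X[15] = 0.5

def nn_predict(x):
    # Interpolating classifier: 1-nearest neighbour fits every training
    # label, including the noisy one (zero training error).
    return y[np.argmin(np.abs(X - x))]

def threshold_predict(x):
    # Robust rule: refuses to fit the flipped label, so it incurs one
    # training error but its decision boundary ignores the noise.
    return 1 if x >= 0 else -1

# Near the memorised noisy point, a perturbation of size 0.06 flips the
# interpolating classifier; the threshold rule is unaffected.
for x in (0.44, 0.50):
    print(f"x={x:+.2f}  1-NN: {nn_predict(x):+d}  threshold: {threshold_predict(x):+d}")

Running this prints that the 1-NN prediction flips from +1 at x = 0.44 to -1 at x = 0.50, while the threshold rule stays at +1: fitting the noisy label creates an adversarially vulnerable region around it, mirroring the abstract's claim that robust models incur training error on noisy labels rather than fit them.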
Amartya Sanyal is a doctoral candidate at the University of Oxford, supervised by Prof. Varun Kanade and Prof. Philip Torr. He graduated with a B.Tech. in Computer Science from the Indian Institute of Technology, Kanpur in 2017, with a minor in Linguistic Theory. His primary research interests span the theoretical and empirical investigation of the reliability of modern deep learning methods, particularly their robustness and generalization in the presence of noise. He also studies how proper regularization and representation learning can effectively improve these properties. His research has further looked at improving the privacy, calibration, and computational efficiency of deep learning methods. Outside Oxford and IIT Kanpur, he has spent time in various research labs, including the Montreal Institute for Learning Algorithms (MILA) with Prof. Yoshua Bengio, the Laboratory for Computational and Statistical Learning (LCSL) with Prof. Lorenzo Rosasco, Twitter Cortex with Nicolas Koumchatzky, and Facebook AI Research (FAIR) with Edward Grefenstette.
Google Meet Link: https://meet.google.com/mke-fzwb-ggv