
Finding Adversarially Robust Representations

Series: Theory Seminar

Speaker: Aravindan Vijayaraghavan, Associate Professor, CS department, Northwestern University.

Date/Time: May 12 21:00:00

Location: Microsoft Teams - ON-LINE


Adversarial robustness measures the susceptibility of a machine learning algorithm to small perturbations made to the input, either at test time or at training time. Our current theoretical understanding of adversarial robustness is limited and has mostly focused on supervised learning tasks. In this talk, I will consider a natural extension of Principal Component Analysis (PCA) where the goal is to find a low-dimensional subspace that represents the given data with minimum projection error and that, in addition, is robust to small perturbations. Unlike PCA, which is solvable in polynomial time, this formulation is computationally intractable to optimize, as it generalizes the well-studied sparse PCA problem. I will describe an efficient algorithm that finds approximately optimal solutions and show how it can be used as a robust subroutine for many downstream learning tasks, including training more certifiably robust neural networks. Based on joint works with Pranjal Awasthi, Xue Chen, Vaggos Chatziafratis, Himanshu Jain and Ankit Singh Rawat.
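To make the formulation concrete, here is a minimal sketch (in Python/NumPy, on synthetic data) of the vanilla PCA baseline the talk builds on: the top-k singular subspace minimizes total squared projection error, and perturbing a point along its residual direction shows how projection error can grow under small input perturbations. The robust variant described in the talk asks for a subspace whose error stays small under all such perturbations; this sketch only illustrates the objective, not the speaker's algorithm, and all variable names and constants here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: n points in d dimensions (illustrative only).
n, d, k = 200, 10, 3
X = rng.normal(size=(n, d))

# Vanilla PCA: the top-k right singular vectors span the subspace
# minimizing total squared projection error (solvable in polynomial time).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
V = Vt[:k].T                      # d x k orthonormal basis

total_proj_err = np.linalg.norm(Xc - Xc @ V @ V.T) ** 2

# Sensitivity check: perturb one point by a small step (eps is an assumed
# budget) along its residual direction and watch its projection error grow.
# The robust formulation in the talk requires the error to stay small under
# all such small perturbations, which is intractable to optimize in general.
eps = 0.1
x = Xc[0]
residual = x - V @ (V.T @ x)
delta = eps * residual / (np.linalg.norm(residual) + 1e-12)
x_pert = x + delta
err_before = np.linalg.norm(residual) ** 2
err_after = np.linalg.norm(x_pert - V @ (V.T @ x_pert)) ** 2
```

Since `delta` lies in the orthogonal complement of the subspace, `err_after` exceeds `err_before`, showing that even the optimal PCA subspace is not automatically robust.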


Host Faculty: Sruthi Gorantla and Rahul Madhavan