Dismantling the deep neural network black box
Series: Department Seminar (ON-LINE)
Speaker: Dr. Chandrashekar Lakshminarayanan, Indian Institute of Technology Palakkad, Kerala
Date/Time: Jan 21, 15:00
Location: Microsoft Teams - ON-LINE
Deep neural networks (DNNs) have been quite successful in a variety of supervised learning tasks. A key reason attributed to the success of DNNs is their ability to automatically learn high-level representations of the data. The standard view is that low-level representations are learnt in the initial layers and, as one proceeds in depth, progressively more sophisticated high-level representations are learnt in the deeper layers. In this talk, we will focus on DNNs with rectified linear unit (ReLU) activations (ReLU-DNNs), a widely used sub-class of DNNs. We will exploit the gating property of ReLU activations to build an alternative theory for representation learning in ReLU-DNNs. The highlights are:
1) We encode gating information in a novel neural path feature. We analytically show that the standalone role of gates is characterised by the associated neural path kernel (NPK).
2) We show via experiments (on standard datasets) that almost all useful information is stored in the gates, and that neural path features are learnt during training.
3) We show that the neural path kernel has a composite structure: in the case of fully connected DNNs, the NPK is a product of base kernels; in the case of residual networks with skip connections, the NPK has a sum-of-products (of base kernels) form; and in the case of convolutional networks, the NPK is rotationally invariant.
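The gating property mentioned above can be illustrated with a small sketch. A ReLU can be written as ReLU(a) = g · a, where the gate g = 1[a > 0] depends on the input; once the gates are fixed, the network output is linear in the pre-activations along each path. The toy two-layer network below is a hypothetical illustration (the weights, shapes, and names are not from the talk), assuming NumPy:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)

# Toy 2-layer ReLU network: 3-dim input -> 4 hidden units -> scalar output.
W1 = rng.standard_normal((4, 3))   # hidden x input
W2 = rng.standard_normal((1, 4))   # output x hidden
x = rng.standard_normal(3)

# Standard forward pass.
y = W2 @ relu(W1 @ x)

# Gating view: ReLU(a) = g * a, where the gate g = 1[a > 0]
# is determined by the input.
a = W1 @ x
g = (a > 0).astype(float)          # the "gates" for this input
y_gated = W2 @ (g * a)             # same output once gates are held fixed

assert np.allclose(y, y_gated)
```

Holding the gates fixed while varying the input is what lets their standalone role be studied analytically via the associated kernel.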
Chandrashekar Lakshminarayanan obtained his PhD from the Department of Computer Science and Automation, Indian Institute of Science (2016), was a post-doctoral research fellow at the Department of Computing Science, University of Alberta (July 2016 to June 2017), and a research scientist at DeepMind, London (August 2017 to July 2018). Prior to his PhD, he was an analog design engineer at Cosmic Circuits, Bangalore, for three years. He joined IIT Palakkad as an assistant professor in July 2018. His research interests include deep learning, reinforcement learning, and stochastic approximation algorithms. He is also interested in machine learning for human-in-the-loop systems.
Microsoft Teams link:
Host Faculty: Prof. Shalabh Bhatnagar