BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//project/author//NONSGML v1.0//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
DTEND:20200123T120000Z
UID:e06431579dced24d649ea26440cef524-52
DTSTAMP:19700101T120011Z
DESCRIPTION:In this talk\, I will give an overview of our recent progress toward understanding how we learn large-capacity machine learning models. In the modern practice of machine learning\, especially deep learning\, many successful models have far more trainable parameters than training examples. Consequently\, the optimization objective for training such models has multiple minimizers that perfectly fit the training data. More problematically\, while some of these minimizers generalize well to new examples\, most will simply overfit or memorize the training data and perform poorly on new examples. In practice though\, when such ill-posed objectives are minimized using local search algorithms like (stochastic) gradient descent ((S)GD)\, the
URL;VALUE=URI:https://www.csa.iisc.ac.in/newweb/event/52/rethinking-the-role-of-optimization-in-learning/
SUMMARY:Rethinking the role of optimization in learning
DTSTART:20200123T120000Z
END:VEVENT
END:VCALENDAR