BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//project/author//NONSGML v1.0//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
DTEND:20201012T120000Z
UID:0269111a8e94c9d3a357b1c4e50b83e0-101
DTSTAMP:19700101T120018Z
SUMMARY:Statistics\, computation and adaptation in offline and online learning
URL;VALUE=URI:https://www.csa.iisc.ac.in/newweb/event/101/statistics-computation-and-adaptation-in-offline-and-online-learning/
DESCRIPTION:In this talk\, I will address the statistical\, computational\, and adaptive aspects of learning theory\, in both offline (batch) and online settings. The first third of the talk deals with a computationally efficient and statistically sound Alternating Minimization (AM) algorithm (often called hard EM)\, typically used to solve non-convex problems. In particular\, we apply AM to a classical non-convex problem\, namely max-affine regression. Max-affine regression can be thought of as a generalization of the (real) phase retrieval problem\, and closely resembles the canonical problem of convex regression in non-parametric statistics. In the next segment of the talk\, I characterize the exact convergence speed of the AM algorithm. In particular\, super-linear convergence of AM is proved theoretically\, resolving a long-standing (1995) conjecture of Lei Xu and Michael I. Jordan. The final part of the talk deals with adaptation in a non-trivial online (bandit) setting. I will talk about my recent works on model selection in contextual bandits\, which partially solve an open problem from COLT 2020.
DTSTART:20201012T120000Z
END:VEVENT
END:VCALENDAR