Seminars
Offline Data Enhanced On-Policy Policy Gradient
Series: Bangalore Theory Seminars
Speaker: Ayush Sekhari, MIT, Boston
Date/Time: Apr 25, 18:30
Location: Online Talk (See Teams link below)
Abstract:
Hybrid RL is the setting in which an RL agent has access to both offline data and online data obtained by interacting with the real-world environment. In this work, we propose a new hybrid RL algorithm that combines an on-policy actor-critic method with offline data. On-policy methods such as policy gradient and natural policy gradient (NPG) have been shown to be more robust to model misspecification, though they are sometimes not as sample-efficient as methods that rely on off-policy learning. On the other hand, offline methods that depend on off-policy training often require strong assumptions in theory and are less stable to train in practice. Our new approach integrates a procedure of off-policy training on the offline data into an on-policy NPG framework. We show that, in theory, our approach obtains a best-of-both-worlds type of result: it achieves the state-of-the-art theoretical guarantees of offline RL when offline RL-specific assumptions hold, while at the same time maintaining the theoretical guarantees of on-policy NPG whether or not those offline RL assumptions are valid. Experimentally, in challenging rich-observation environments, we show that our approach outperforms a state-of-the-art hybrid RL baseline that relies only on off-policy policy optimization, demonstrating the empirical benefit of combining on-policy and off-policy learning.
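The abstract describes the hybrid idea only at a high level. As a rough illustration of what "integrating off-policy training on offline data into an on-policy NPG loop" can look like, below is a minimal, self-contained Python sketch on a toy tabular MDP. The toy environment, the fixed offline dataset, the TD(0) critic, and all step sizes are illustrative assumptions and are not taken from the speaker's paper.

```python
# Minimal sketch (not the speaker's implementation) of a hybrid actor-critic loop:
# the critic is fit on BOTH fresh on-policy rollouts and a fixed offline dataset,
# then the softmax policy takes a natural-policy-gradient (advantage) step.
# The toy MDP, dataset, and hyperparameters below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 2, 0.9

# Toy MDP (assumed for illustration): random transitions and rewards.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] = dist over next states
R = rng.uniform(0, 1, size=(n_states, n_actions))

def sample_episode(policy_logits, horizon=20):
    """Roll out the current softmax policy; return (s, a, r, s') transitions."""
    s, traj = rng.integers(n_states), []
    for _ in range(horizon):
        probs = np.exp(policy_logits[s]); probs /= probs.sum()
        a = rng.choice(n_actions, p=probs)
        s_next = rng.choice(n_states, p=P[s, a])
        traj.append((s, a, R[s, a], s_next))
        s = s_next
    return traj

# Fixed offline dataset: transitions from a uniform behavior policy (an assumption).
offline_data = []
for _ in range(200):
    s, a = rng.integers(n_states), rng.integers(n_actions)
    offline_data.append((s, a, R[s, a], rng.choice(n_states, p=P[s, a])))

logits = np.zeros((n_states, n_actions))   # softmax policy parameters
Q = np.zeros((n_states, n_actions))        # critic estimate

for it in range(100):
    online_data = sample_episode(logits)
    # Critic step: TD(0) fit on the union of on-policy and offline transitions
    # (this mixing of the two data sources is the "hybrid" ingredient).
    for s, a, r, s_next in online_data + offline_data:
        probs = np.exp(logits[s_next]); probs /= probs.sum()
        target = r + gamma * np.dot(probs, Q[s_next])
        Q[s, a] += 0.1 * (target - Q[s, a])
    # Actor step: for tabular softmax policies, the NPG update reduces to
    # adding a scaled advantage to the logits.
    probs = np.exp(logits); probs /= probs.sum(axis=1, keepdims=True)
    V = (probs * Q).sum(axis=1, keepdims=True)
    logits += 0.5 * (Q - V)

probs = np.exp(logits); probs /= probs.sum(axis=1, keepdims=True)
print("Learned policy (rows = states, columns = action probabilities):")
print(np.round(probs, 2))
```

The only "hybrid" ingredient in this sketch is the critic fit on the union of online and offline transitions; the actor update is a standard on-policy NPG step, which is the structure the abstract attributes to the proposed method.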
Microsoft Teams link: Link
We are grateful to the Kirani family for generously supporting the theory seminar series.
Hosts: Rameesh Paul, KVN Sreenivas, Rahul Madhavan, Debajyoti Kar