Seminars

Model-based Safe Deep Reinforcement Learning and Empirical Analysis of Safety via Attribution

Series: M.Tech (Research) Colloquium - ON-LINE

Speaker: Mr. Ashish Kumar Jayant, M.Tech (Research) student, Dept. of CSA

Date/Time: Apr 01 14:00:00

Location: Microsoft Teams - ON-LINE

Faculty Advisor: Prof. Shalabh Bhatnagar

Abstract:
During the initial iterations of training, agents in most Reinforcement Learning (RL) algorithms perform a significant number of random exploratory steps. In the real world this limits the practicality of such algorithms, since random exploration can lead to potentially dangerous behavior; safe exploration is therefore a critical issue in applying RL algorithms to real-world problems. This problem is well studied in the literature under the Constrained Markov Decision Process (CMDP) framework, in which state transitions incur single-stage costs in addition to single-stage rewards. A prescribed cost function maps undesirable behavior at any given time step to a scalar value. The aim is then to find a feasible policy that maximizes the reward return while keeping the cost return below a prescribed threshold, during training as well as deployment.
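In standard CMDP notation (a sketch of the usual formulation, with gamma the discount factor, r and c the single-stage reward and cost, and d the cost threshold; the symbols here are the conventional ones, not quoted from the talk), this objective reads:

    \max_{\pi} \; J_R(\pi) = \mathbb{E}_{\tau \sim \pi}\Big[ \sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t) \Big]
    \quad \text{subject to} \quad
    J_C(\pi) = \mathbb{E}_{\tau \sim \pi}\Big[ \sum_{t=0}^{\infty} \gamma^{t}\, c(s_t, a_t) \Big] \le d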
We propose a novel on-policy model-based Safe Deep RL algorithm that learns the transition dynamics of the environment in an online manner while finding a feasible optimal policy via Lagrangian relaxation-based Proximal Policy Optimization (PPO). This combination of transition-dynamics learning and a safety-promoting RL algorithm requires approximately 3-4 times fewer environment interactions and incurs fewer cumulative hazard violations than the model-free approach. We use an ensemble of neural networks with different initializations to tackle the epistemic and aleatoric uncertainty that arises when learning the environment model. We present our results on a challenging Safe Reinforcement Learning benchmark, the OpenAI Safety Gym.
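To make the Lagrangian relaxation concrete, the following is a minimal Python sketch of the dual update such a method typically uses; the function names, the learning rate, and the (1 + lam) normalization of the surrogate advantage are illustrative assumptions, not the speaker's exact formulation.

    def update_lagrange_multiplier(lam, cost_return, cost_limit, lr=0.01):
        # Dual ascent on the Lagrange multiplier: increase lam when the
        # observed cost return exceeds the threshold d (constraint
        # violated), decrease it otherwise, and keep it non-negative.
        return max(0.0, lam + lr * (cost_return - cost_limit))

    def penalized_advantage(adv_reward, adv_cost, lam):
        # Advantage fed to the PPO surrogate loss: reward advantages are
        # traded off against cost advantages scaled by lam; dividing by
        # (1 + lam) keeps the scale roughly constant as lam grows (a
        # common normalization, assumed here for the sketch).
        return (adv_reward - lam * adv_cost) / (1.0 + lam)

    # Example: a rollout whose cost return (32.0) exceeds the limit
    # (25.0) pushes lam up, penalizing costly actions in the next update.
    lam = update_lagrange_multiplier(lam=0.5, cost_return=32.0, cost_limit=25.0)

In the same spirit, the ensemble mentioned above would consist of several identically shaped dynamics networks trained from different random initializations, whose disagreement can serve as an uncertainty signal.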
In addition, we perform an attribution analysis of the actions taken by the deep neural network-based policy at each time step. This analysis helps us to:
1. Identify the feature in the state representation that is most responsible for the current action.
2. Empirically provide evidence of the safety-aware agent's ability to deal with hazards in the environment, provided that hazard information is present in the state representation.

To perform this analysis, we assume that the state representation carries meaningful information about hazards and goals. We then compute an attribution vector of the same dimension as the state using a well-known attribution technique, Integrated Gradients. The resulting attribution vector gives the importance of each state feature for the current action.
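As an illustration of the Integrated Gradients computation described above, here is a minimal PyTorch sketch; the policy architecture, the zero-state baseline, and the choice to attribute the norm of the action output are assumptions made for the example, not necessarily the speaker's setup.

    import torch

    def integrated_gradients(policy, state, baseline=None, steps=50):
        # Integrated Gradients: average the gradient of a scalar output
        # of the policy along a straight-line path from a baseline state
        # to the actual state, then scale by (state - baseline).
        if baseline is None:
            baseline = torch.zeros_like(state)
        alphas = torch.linspace(0.0, 1.0, steps).unsqueeze(1)  # (steps, 1)
        path = baseline + alphas * (state - baseline)          # (steps, state_dim)
        path.requires_grad_(True)
        actions = policy(path)                                 # (steps, action_dim)
        # Attribute the magnitude of the action; a single action
        # component could be attributed instead.
        scalar = actions.norm(dim=-1).sum()
        grads = torch.autograd.grad(scalar, path)[0]           # (steps, state_dim)
        avg_grad = grads.mean(dim=0)
        # The attribution vector has the same dimension as the state.
        return (state - baseline) * avg_grad

    # Hypothetical usage with a small feed-forward policy:
    policy = torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.Tanh(),
                                 torch.nn.Linear(64, 2))
    state = torch.randn(8)
    attribution = integrated_gradients(policy, state)  # per-feature importance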
Microsoft Teams link:
https://teams.microsoft.com/l/meetup-join/19%3ameeting_YzU5MGY4YTctNThmYi00NjlkLTllZmItNDc5ZjExMzY2ZTU4%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97-f6a7-41e3-b2c5-ad4193976476%22%2c%22Oid%22%3a%2271844033-661c-432d-9a6f-418de5b8c819%22%7d
