Safe Reinforcement Learning


Overview

Repository containing the code for the paper “Safe Model-Based Reinforcement Learning using Robust Control Barrier Functions”. Specifically, an implementation of SAC + Robust Control Barrier Functions (RCBFs) for safe reinforcement learning in two custom environments.

While exploring, an RL agent can take actions that drive the system into unsafe states. Here, we use a differentiable RCBF safety layer that minimally alters (in the least-squares sense) the actions taken by the RL agent to keep the agent safe.
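To make this concrete, the safety layer can be viewed as a small quadratic program solved at every step. Below is a minimal sketch using `cvxpy`; the barrier value `h`, its gradient `dh_dx`, the control-affine dynamics terms `f` and `g`, and the disturbance bound `sigma` are illustrative placeholders, not the repository's actual API:

    import numpy as np
    import cvxpy as cp

    def safe_action(u_rl, h, dh_dx, f, g, sigma, alpha=1.0):
        """Minimally alter the RL action, in the least-squares sense,
        so that it satisfies a robust CBF condition.

        Solves:  min_u ||u - u_rl||^2
                 s.t.  dh_dx . (f + g u) >= -alpha * h + sigma
        where h(x) >= 0 defines the safe set and sigma upper-bounds
        the effect of the disturbance (the "robust" part of the RCBF).
        All names here are illustrative, not the repository's API.
        """
        u = cp.Variable(len(u_rl))
        objective = cp.Minimize(cp.sum_squares(u - u_rl))
        constraints = [dh_dx @ (f + g @ u) >= -alpha * h + sigma]
        cp.Problem(objective, constraints).solve()
        return u.value

    # Toy 1-D example: dynamics x' = u, safe set h(x) = 1 - x >= 0.
    u_safe = safe_action(u_rl=np.array([2.0]), h=0.5,
                         dh_dx=np.array([-1.0]), f=np.array([0.0]),
                         g=np.array([[1.0]]), sigma=0.1)

In this toy example the constraint reduces to u <= 0.4, so the RL action of 2.0 is projected down to roughly 0.4, the closest safe action.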

Usage

Below are the commands to set up the environment and run the code:

To install Anaconda, follow the instructions on this webpage:
https://www.digitalocean.com/community/tutorials/how-to-install-the-anaconda-python-distribution-on-ubuntu-20-04-quickstart

Create a conda environment for Safe RL:

conda create --name safe_rl  

Switch to the newly created environment:

conda activate safe_rl  

Then, clone the repository on your system:

git clone https://github.com/tayalmanan28/Safe_Reinforcement_Learning.git

Navigate into the repository and install the required packages:

cd Safe_Reinforcement_Learning
pip install -r requirements.txt

Running the Experiments

The environment used in these experiments is Unicycle, in which a unicycle robot must reach a desired location while avoiding obstacles.
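For reference, the standard unicycle kinematics underlying such an environment look as follows (a sketch under the usual modeling assumptions; the environment's actual state layout, action bounds, and time step may differ):

    import numpy as np

    def unicycle_step(state, action, dt=0.02):
        """One Euler step of standard unicycle kinematics.

        state  = (x, y, theta): planar position and heading
        action = (v, omega): linear and angular velocity commands
        The time step dt is an illustrative value, not the environment's.
        """
        x, y, theta = state
        v, omega = action
        return np.array([x + v * np.cos(theta) * dt,
                         y + v * np.sin(theta) * dt,
                         theta + omega * dt])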

Training:
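A typical training invocation might look like the following; the entry-point name `main.py` and all flags shown are assumptions for illustration, so check the repository's scripts for the exact arguments:

# hypothetical entry point and flags; see the repository for the real ones
python main.py --env Unicycle --max_episodes 400 --seed 12345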

Testing:
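Similarly, evaluating a trained policy might be run with something like the command below; again, the script name, flags, and checkpoint path are illustrative assumptions:

# hypothetical: the flags and checkpoint path are placeholders
python main.py --env Unicycle --mode test --resume checkpoints/unicycle_model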

LICENSE

The code is licensed under the MIT License and is free for anyone to use without restrictions.


Created with :heart: by Manan Tayal