Testing Different Robots on the Same Reinforcement Learning Task Open Class
What you will learn in this Open Class
This live class shows how to make different robots learn the same task by training them with reinforcement learning, using ROS, Gazebo, and the openai_ros package.
Imagine that you want to compare the performance of different robots learning the same task. For example, how well do the Turtlebot2 and the ROSbot learn to move around a maze? We will show you how to use the openai_ros package to reuse the same RL algorithm and the same TaskEnvironment, changing only the RobotEnvironment (one per robot). Remember that the openai_ros package already provides the RobotEnvironment for both robots, so it is just a matter of knowing where to instantiate the class for each one.
We will see:
- An overview of the openai_ros package for training robots with RL using ROS and Gazebo
- Where to put the learning algorithm (provided to attendees)
- Where to put the TaskEnvironment (also provided to attendees)
- Where the RobotEnvironments for each robot (Turtlebot2 and ROSbot) are located inside the openai_ros package
- Where in the whole pipeline to put the RobotEnvironment
- How to connect everything so the robots learn, and how to compare the results between the two robots
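The core idea above, that one RL algorithm trains against interchangeable robot environments as long as they share the same interface, can be sketched in plain Python. This is not openai_ros code: the environment classes, their toy dynamics, and all names below are made up for illustration, and only the pattern (a gym-style `reset`/`step` interface, with the learning code unchanged across robots) mirrors what the class will do with the real Turtlebot2 and ROSbot environments.

```python
import random
from collections import defaultdict

# Hypothetical stand-ins for the RobotEnvironment classes that openai_ros
# provides for each robot. Both expose the same gym-style interface, so the
# learning algorithm below never has to change.
class FakeTurtlebot2MazeEnv:
    n_states, n_actions = 16, 4

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # Toy dynamics: only action 0 ("forward") advances through the maze.
        self.state = min(self.state + (1 if action == 0 else 0), self.n_states - 1)
        done = self.state == self.n_states - 1
        reward = 1.0 if done else -0.1  # step penalty, bonus on reaching the goal
        return self.state, reward, done, {}

class FakeROSbotMazeEnv(FakeTurtlebot2MazeEnv):
    # A second robot: same task interface, slightly different dynamics.
    def step(self, action):
        self.state = min(self.state + (1 if action in (0, 1) else 0), self.n_states - 1)
        done = self.state == self.n_states - 1
        reward = 1.0 if done else -0.1
        return self.state, reward, done, {}

def train(env, episodes=200, alpha=0.1, gamma=0.95, epsilon=0.1):
    """The RL algorithm (tabular Q-learning) -- identical for every robot."""
    q = defaultdict(lambda: [0.0] * env.n_actions)
    returns = []
    for _ in range(episodes):
        state, total, done = env.reset(), 0.0, False
        while not done:
            if random.random() < epsilon:
                action = random.randrange(env.n_actions)  # explore
            else:
                action = max(range(env.n_actions), key=lambda a: q[state][a])
            next_state, reward, done, _ = env.step(action)
            # Standard Q-learning update.
            q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
            state, total = next_state, total + reward
        returns.append(total)
    return sum(returns[-20:]) / 20  # mean return over the last 20 episodes

# Comparing robots is just a matter of swapping the environment class:
random.seed(0)
for env_cls in (FakeTurtlebot2MazeEnv, FakeROSbotMazeEnv):
    print(env_cls.__name__, round(train(env_cls()), 2))
```

In the real pipeline, the two fake classes are replaced by the RobotEnvironments that openai_ros already ships for the Turtlebot2 and the ROSbot, while `train` stands in for whatever learning script you plug in; the comparison loop at the bottom is the whole point of the class.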