Inverse Reinforcement Learning from Visual Demonstration to Train AI Systems

tanishq · Last Updated: 05 Jul, 2021
3 min read

Overview

  • Create AI systems that can learn in the real world as efficiently as people can.
  • The first system to use model-based inverse reinforcement learning (IRL) from visual demonstrations on a physical robot.
  • This research advances AI that can learn a range of tasks from just a few visual demonstrations, and further improvements can be expected now that the entire codebase has been open-sourced.

Introduction

We as humans are highly skilled at learning simple or complex tasks just by watching someone else do them. From picking up a bottle to kicking a football, these actions can be performed just by watching a demonstration. But can machines achieve the same level of skill?

Until now, it wasn’t possible for AI robots to watch a visual demonstration of a task and reenact it without being explicitly programmed or given task-specific rewards. Learning from visual demonstrations remains a very active area of research.

Rather than using pure trial and error, a robot has been trained to learn a model of its environment, observe human behavior, and then infer an appropriate reward function.

A simple example would be teaching a robot to place a bottle. The first step is to craft a reward so it learns to hold the bottle right side up over the table. Then a separate reward is designed to teach the robot to set the bottle down. As you can imagine, this is a slow and tedious process for a very simple task.
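
To make this concrete, here is a minimal sketch of what such hand-crafted, stage-by-stage reward shaping could look like. Everything here is hypothetical for illustration, including the `bottle_pose` convention; it is not code from the project.

```python
# A hypothetical sketch of two-stage reward shaping for bottle placement.
import numpy as np

def reward_hold_upright(bottle_pose):
    # Stage 1: reward holding the bottle right side up above the table.
    # bottle_pose = (x, y, z, tilt_rad) is an assumed convention.
    _, _, z, tilt = bottle_pose
    upright = np.cos(tilt)          # 1.0 when perfectly upright
    above_table = float(z > 0.0)    # 1 only while held above the surface
    return upright * above_table

def reward_place_down(bottle_pose, table_height=0.0):
    # Stage 2: a separate reward for lowering the bottle onto the table.
    _, _, z, tilt = bottle_pose
    return np.cos(tilt) - abs(z - table_height)  # penalize distance to table
```

Each stage needs its own hand-tuned reward function, which is exactly the tedium the IRL approach is designed to avoid.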

Several challenges still exist with the method. Learning a good visual predictive model is difficult, especially since the approach assumes demonstrations are given from the robot’s own perspective. One of the biggest challenges is handling varied starting configurations and generalizing the approach from one context to another.

Most research using IRL has been done in simulation, where the robot already knows its surroundings and understands how its actions will change its environment. It’s far more difficult for robots to learn and adapt to the complexities and noise of the real world.

How does it work?

A major drawback of other IRL approaches is their reliance on coupled action and state measurements, which are very costly to collect. A high-level overview of the algorithm:

  1. Train keypoint detectors that extract low-dimensional visual features from both human demonstrations and the robot’s own observations.
  2. Pre-train a model with which the robot can predict how its actions change this low-dimensional feature representation.
  3. Once the robot has observed a trajectory from the human demonstration, it can use its own model to optimize its actions to reproduce roughly the same trajectory.
  4. Apply a novel inverse reinforcement learning algorithm that builds on recent progress in gradient-based optimization, allowing for more stable and effective optimization (a schematic sketch of the full pipeline follows this list).
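
Putting the four steps together, a schematic sketch of the pipeline might look like the following. Every helper callable here is a hypothetical stand-in passed in as an argument, not an API from the released codebase:

```python
# A schematic sketch of the four-step pipeline described above.
def run_pipeline(train_keypoint_detector, pretrain_dynamics_model,
                 learn_cost_by_gradient_irl, optimize_actions,
                 human_videos, robot_videos, demo_video):
    # 1. Train a keypoint detector on human and robot footage.
    detector = train_keypoint_detector(human_videos + robot_videos)
    # 2. Pre-train a model of how actions move the keypoints.
    dynamics = pretrain_dynamics_model(detector, robot_videos)
    # 3. Extract the demonstrated keypoint trajectory.
    demo_traj = [detector(frame) for frame in demo_video]
    # 4. Gradient-based IRL: learn a cost whose optimized actions
    #    reproduce the demonstration, then optimize the actions.
    cost_fn = learn_cost_by_gradient_irl(dynamics, demo_traj)
    return optimize_actions(dynamics, cost_fn, demo_traj)
```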

The objective of IRL is to learn reward functions so that the result of the policy optimization step matches the visual demonstrations well.
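
Schematically, this can be framed as a bi-level optimization. The notation below is assumed for illustration and not quoted from the paper: theta parameterizes the cost, u_{1:T} are the robot’s actions, and tau_demo is the demonstrated keypoint trajectory.

```latex
% Outer loop: fit the cost so that optimized behavior matches the demo.
% Inner loop: action (policy) optimization under the current cost.
\min_{\theta} \; \mathcal{L}\big(\tau(u^{*}_{1:T}(\theta)),\, \tau_{\text{demo}}\big)
\quad \text{s.t.} \quad
u^{*}_{1:T}(\theta) = \arg\min_{u_{1:T}} \sum_{t=1}^{T} c_{\theta}(x_t, u_t)
```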

The proposed system comprises a keypoint detector (an autoencoder with a structural bottleneck that detects 2D keypoints, i.e., pixel positions or areas of maximum variability in the input data), which produces low-dimensional visual representations, in the form of keypoints, from RGB image inputs, as explained in step (1).
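
As a rough illustration, here is a minimal sketch of the encoder side of such a detector, assuming PyTorch; the architecture and layer sizes are illustrative, and the actual autoencoder also pairs this with a decoder trained for reconstruction:

```python
# Keypoint encoder with a structural (spatial-softmax) bottleneck: each
# heatmap is reduced to an expected 2D pixel coordinate. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointEncoder(nn.Module):
    def __init__(self, num_keypoints=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, num_keypoints, 3, padding=1),  # one heatmap per keypoint
        )

    def forward(self, rgb):                      # rgb: (B, 3, H, W)
        heatmaps = self.conv(rgb)                # (B, K, H', W')
        b, k, h, w = heatmaps.shape
        probs = F.softmax(heatmaps.view(b, k, -1), dim=-1).view(b, k, h, w)
        # Expected coordinates in [-1, 1]: the structural bottleneck that
        # forces the representation down to K (x, y) keypoints.
        ys = torch.linspace(-1, 1, h, device=rgb.device)
        xs = torch.linspace(-1, 1, w, device=rgb.device)
        kp_y = (probs.sum(dim=3) * ys).sum(dim=2)   # (B, K)
        kp_x = (probs.sum(dim=2) * xs).sum(dim=2)   # (B, K)
        return torch.stack([kp_x, kp_y], dim=-1)    # (B, K, 2)
```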

Next comes a forward model that takes in the current keypoints, joint state, and action u, and predicts the keypoints and joint state at the next time step. Finally, a gradient-based optimizer uses the trained model and a cost function to optimize the actions for a given task.
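
Below is a minimal sketch of both pieces, again assuming PyTorch; shapes, layer sizes, and the simple trajectory-tracking cost are illustrative stand-ins rather than the paper’s exact formulation:

```python
# Forward model plus gradient-based action optimization (illustrative).
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Predicts next keypoints and joint state from (keypoints, joints, action u)."""
    def __init__(self, kp_dim, joint_dim, action_dim, hidden=256):
        super().__init__()
        self.kp_dim = kp_dim
        self.net = nn.Sequential(
            nn.Linear(kp_dim + joint_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, kp_dim + joint_dim),
        )

    def forward(self, kp, joints, u):
        out = self.net(torch.cat([kp, joints, u], dim=-1))
        return out[..., :self.kp_dim], out[..., self.kp_dim:]

def optimize_actions(model, kp0, joints0, target_kps, action_dim, steps=100):
    # Roll the model forward and adjust the action sequence by gradient
    # descent so the predicted keypoints track the demonstrated ones.
    horizon = len(target_kps)
    actions = torch.zeros(horizon, action_dim, requires_grad=True)
    opt = torch.optim.Adam([actions], lr=1e-2)
    for _ in range(steps):
        kp, joints, cost = kp0, joints0, 0.0
        for t in range(horizon):
            kp, joints = model(kp, joints, actions[t])
            cost = cost + ((kp - target_kps[t]) ** 2).sum()
        opt.zero_grad()
        cost.backward()
        opt.step()
    return actions.detach()
```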

Ending notes

This is an amazing development for the future of Artificial Intelligence. A model like this could be used to build AI systems that learn a plethora of skills just by observing video examples. What’s more, if a system like this can learn from limited examples, it might lead to much smarter systems for robotic manipulation.

Cutting-edge research by Facebook.ai, combining self-supervised learning, reinforcement learning, and gradient-based optimization, has shown that it’s possible for AI systems to learn the simple task of holding and placing a bottle on a table without being explicitly told how to move.

The team at Facebook.ai has open-sourced the entire codebase, so feel free to check out the implementation and the published paper.

Do share your valuable feedback in the comments section below and let me know what you think of this development and its possible use cases in the future.
