
Published July 14, 2020 | Version v1
Conference paper | Open Access

Imitation Learning over Heterogeneous Agents with Restraining Bolts

  • Sapienza University of Rome, Italy

Description

A common problem in Reinforcement Learning (RL) is that the reward function is hard to express. This can be overcome by resorting to Inverse Reinforcement Learning (IRL), which consists of first obtaining a reward function from a set of execution traces generated by an expert agent, and then having the learning agent learn the expert's behavior; this is known as Imitation Learning (IL). Typical IRL solutions rely on a numerical representation of the reward function, which raises problems related to the adopted optimization procedures.
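
For intuition, here is a minimal sketch of the numerical-IRL pipeline this paragraph alludes to, in the style of feature-matching apprenticeship learning. The toy traces, the one-hot features, and all names are illustrative assumptions, not material from the paper: expert traces yield discounted feature expectations, a linear reward w·φ is fit to favor them over a baseline, and that numerical reward is then handed to an ordinary RL algorithm.

```python
# Illustrative sketch of numerical IRL via feature matching; the toy MDP,
# the traces, and the single projection step are assumptions for exposition.
import numpy as np

def features(state):
    # One-hot features over a 3-state toy MDP.
    phi = np.zeros(3)
    phi[state] = 1.0
    return phi

# Expert execution traces (state sequences) and a random baseline.
expert_traces = [[0, 1, 2, 2], [0, 1, 2, 2]]
random_traces = [[0, 0, 1, 0], [0, 1, 0, 0]]

def feature_expectations(traces, gamma=0.9):
    mu = np.zeros(3)
    for trace in traces:
        for t, s in enumerate(trace):
            mu += gamma**t * features(s)
    return mu / len(traces)

# One projection step: weights point from the baseline's feature expectations
# toward the expert's, so states the expert frequents score higher.
w = feature_expectations(expert_traces) - feature_expectations(random_traces)
reward = lambda s: float(w @ features(s))  # numerical reward for any RL learner
print([round(reward(s), 3) for s in range(3)])
```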

We describe an IL method where the execution traces generated by the expert agent, possibly via planning, are used to produce a logical (as opposed to numerical) specification of the reward function, to be incorporated into a device known as a Restraining Bolt (RB). The RB can be attached to the learning agent to drive the learning process and ultimately make it imitate the expert. We show that IL can be applied to heterogeneous agents, with the expert, the learner, and the RB using different representations of the environment's actions and states, without specifying mappings among their representations.
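
To make the RB mechanism concrete, below is a minimal sketch assuming the logical reward specification has already been compiled into a deterministic finite automaton (DFA); the corridor environment, the `dfa_step` function, and all names are illustrative, not the authors' implementation. The learner runs tabular Q-learning over the product of its own states and the DFA states, and the RB pays reward whenever the automaton progresses toward acceptance.

```python
# Minimal sketch of a Restraining Bolt attached to a tabular Q-learner.
# Assumption: the logical specification "reach cell 2, then cell 4" has
# been compiled into a 3-state DFA; the environment itself gives no reward.
import random

N_CELLS, ACTIONS = 5, (-1, +1)  # toy corridor: move left/right over cells 0..4
ACCEPTING = 2

def dfa_step(q, cell):
    # DFA for "visit cell 2, then cell 4": states 0 -> 1 -> 2 (accepting).
    if q == 0 and cell == 2:
        return 1
    if q == 1 and cell == 4:
        return 2
    return q

Q = {}  # Q-values over the product space (cell, dfa_state, action index)

def greedy(cell, q, eps=0.1):
    if random.random() < eps:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q.get((cell, q, a), 0.0))

for _episode in range(2000):
    cell, q = 0, 0
    for _ in range(50):
        a = greedy(cell, q)
        nxt = min(max(cell + ACTIONS[a], 0), N_CELLS - 1)
        q_nxt = dfa_step(q, nxt)
        # The RB pays reward only when the DFA makes progress.
        r = 1.0 if q_nxt != q else 0.0
        best = max(Q.get((nxt, q_nxt, b), 0.0) for b in range(len(ACTIONS)))
        key = (cell, q, a)
        old = Q.get(key, 0.0)
        Q[key] = old + 0.1 * (r + 0.9 * best - old)  # standard Q-learning update
        cell, q = nxt, q_nxt
        if q == ACCEPTING:
            break
```

Note that the learner never inspects the logical formulas themselves; it only observes the automaton state appended to its own features, which is what lets the expert, the learner, and the RB each use their own representation of states and actions.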

Notes

Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS), 2020

Files

6747-Article Text-9976-1-10-20200522 (1).pdf (596.5 kB)
md5:4db72ce91e443c29a5350077c7196619

Additional details

Funding

AI4EU – A European AI On Demand Platform and Ecosystem (Grant No. 825619)
European Commission