Published August 19, 2021
Version v0.2
Software
Open
rlberry - A Reinforcement Learning Library for Research and Education
Description
Improving interface and tools for parallel execution (#50)

- `AgentStats` renamed to `AgentManager`. `AgentManager` can handle agents that cannot be pickled.
- The `Agent` interface requires an `eval()` method instead of `policy()` to handle more general agents (e.g. reward-free, POMDPs, etc.).
- Multi-processing and multi-threading are now done with `ProcessPoolExecutor` and `ThreadPoolExecutor` (allowing nested processes, for example). Processes are created with `spawn` (jax does not work with `fork`, see #51).
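The `spawn` start method mentioned above can be requested explicitly with the standard library. This is a minimal sketch of that pattern (not rlberry's internal code); the `square` function is a hypothetical stand-in for real work:

```python
import multiprocessing
from concurrent.futures import ProcessPoolExecutor

def square(x):
    # Stand-in task; in rlberry this would be agent training/evaluation.
    return x * x

if __name__ == "__main__":
    # Request the "spawn" start method explicitly; "fork" is incompatible
    # with some libraries (e.g. jax, as noted in #51).
    ctx = multiprocessing.get_context("spawn")
    with ProcessPoolExecutor(max_workers=2, mp_context=ctx) as executor:
        results = list(executor.map(square, range(5)))
    print(results)  # [0, 1, 4, 9, 16]
```

Unlike `fork`, `spawn` starts each worker as a fresh interpreter, which avoids inheriting state (threads, GPU handles) from the parent process.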
New experimental features (see #51, #62)

- JAX implementation of DQN and of a replay buffer using reverb.
- `rlberry.network`: server and client interfaces to exchange messages via sockets.
- `RemoteAgentManager` to train agents on a remote server and gather the results locally (using `rlberry.network`).
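To illustrate the idea behind exchanging messages via sockets, here is a self-contained sketch of a JSON-over-socket round trip. This is NOT rlberry's actual `rlberry.network` protocol; the `serve_once`, `send_command`, and the `"command"`/`"result"` message fields are hypothetical names used only for illustration:

```python
import json
import socket
import threading

def serve_once(server_sock):
    # Accept a single connection, read one JSON message, reply with JSON.
    conn, _ = server_sock.accept()
    with conn:
        msg = json.loads(conn.recv(4096).decode())
        reply = {"result": "ran " + msg["command"]}
        conn.sendall(json.dumps(reply).encode())

def send_command(port, command):
    # Client side: send a JSON command and return the decoded reply.
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(json.dumps({"command": command}).encode())
        return json.loads(sock.recv(4096).decode())

if __name__ == "__main__":
    server = socket.socket()
    server.bind(("127.0.0.1", 0))  # let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]
    thread = threading.Thread(target=serve_once, args=(server,))
    thread.start()
    print(send_command(port, "fit"))  # {'result': 'ran fit'}
    thread.join()
    server.close()
```

A remote manager built on this idea would send training requests to the server side and collect the results locally, which is what `RemoteAgentManager` does at a higher level.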
Logging and rendering:

- Data logging with a new `DefaultWriter`, and improved evaluation and plot methods in `rlberry.manager.evaluation`.
- Fix rendering bug with OpenGL (bf606b44aaba1b918daf3dcc02be96a8ef5436b4).
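The core idea behind a training-data writer can be sketched in a few lines. The `ScalarWriter` class below is an illustrative toy, not rlberry's actual `DefaultWriter` API: it records `(step, value)` pairs per tag so that evaluation and plotting utilities can consume them later:

```python
from collections import defaultdict

class ScalarWriter:
    """Toy writer: stores (global_step, value) pairs keyed by tag."""

    def __init__(self):
        self._data = defaultdict(list)

    def add_scalar(self, tag, value, global_step):
        self._data[tag].append((global_step, value))

    def to_dict(self):
        return dict(self._data)

writer = ScalarWriter()
for step in range(3):
    writer.add_scalar("episode_reward", float(step), step)
print(writer.to_dict()["episode_reward"])  # [(0, 0.0), (1, 1.0), (2, 2.0)]
```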
Bug fixes.
Files

- rlberry-py/rlberry-v0.2.zip (318.9 kB, md5:126b1bfcceae6db4473f2dd5b546ff8d)
Additional details
Related works
- Is supplement to: https://github.com/rlberry-py/rlberry/tree/v0.2 (URL)