Published October 1, 2021
| Version 0.2.2-dev
rlberry - A Reinforcement Learning Library for Research and Education
Description
Release of version 0.4.0 of rlberry.
New in 0.4.0
PR #273
- Changed the default behavior of plot_writer_data: if the installed seaborn version is >= 0.12.0, a 90% percentile interval is used instead of the standard deviation.
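The difference between the two band styles can be sketched with plain NumPy; the array of per-seed curves below is hypothetical, and plot_writer_data's own implementation may compute the bands differently.

```python
import numpy as np

# Hypothetical data: one reward curve of length 100 for each of 10 seeds.
rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=0.5, size=(10, 100))

# Standard-deviation band (previous default): mean +/- one sd across seeds.
mean = data.mean(axis=0)
sd_low, sd_high = mean - data.std(axis=0), mean + data.std(axis=0)

# 90% percentile interval (new default with seaborn >= 0.12.0):
# the band spans the 5th to the 95th percentile across seeds.
pi_low = np.percentile(data, 5, axis=0)
pi_high = np.percentile(data, 95, axis=0)
```

Unlike the sd band, the percentile interval is robust to outlier seeds and never extends beyond the observed range of the data.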
PR #269
- Added rlberry.envs.PipelineEnv, a simple way to define a pipeline of wrappers.
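The "pipeline of wrappers" idea can be sketched generically as applying a list of (wrapper class, kwargs) pairs to a base environment in order. The classes and helper below are illustrative stand-ins, not PipelineEnv's actual API; see the rlberry documentation for its real constructor signature.

```python
class DummyEnv:
    """Minimal stand-in environment with a step() method."""
    def step(self, action):
        return 0, 1.0, False, {}  # obs, reward, done, info

class ScaleReward:
    """Example wrapper: multiplies every reward by a constant factor."""
    def __init__(self, env, factor=0.5):
        self.env, self.factor = env, factor
    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return obs, reward * self.factor, done, info

def make_pipeline(base_env, wrappers):
    """Apply (wrapper_class, kwargs) pairs in order to build the final env."""
    env = base_env
    for cls, kwargs in wrappers:
        env = cls(env, **kwargs)
    return env

env = make_pipeline(DummyEnv(), [(ScaleReward, {"factor": 0.5}),
                                 (ScaleReward, {"factor": 0.1})])
obs, reward, done, info = env.step(0)  # reward = 1.0 * 0.5 * 0.1
```

Declaring the pipeline as data rather than nesting constructors by hand makes the wrapper stack easy to reorder, serialize, or pass to agents as a single env constructor.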
PR #262
- PPO can now handle continuous actions.
PR #261, #264
- Implemented Munchausen DQN in rlberry.agents.torch.MDQNAgent.
- Compared MDQN with the DQN agent in the long tests.
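Munchausen DQN (Vieillard et al., 2020) augments the DQN target with a scaled, clipped log-policy bonus and a soft (entropy-regularized) next-state value. The NumPy sketch below shows the target for a single transition; the variable and hyperparameter names (tau, alpha, l0) are assumptions for illustration and may not match MDQNAgent's actual parameters.

```python
import numpy as np

def softmax(x, tau):
    """Softmax policy over q-values with temperature tau."""
    z = x / tau
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def mdqn_target(r, q_s, a, q_next, gamma=0.99, tau=0.03, alpha=0.9, l0=-1.0):
    # Munchausen term: scaled log-policy of the taken action, clipped to [l0, 0].
    pi_s = softmax(q_s, tau)
    log_pi_a = np.clip(tau * np.log(pi_s[a]), l0, 0.0)
    # Soft value of the next state: expected q minus entropy penalty.
    pi_next = softmax(q_next, tau)
    soft_v = np.sum(pi_next * (q_next - tau * np.log(pi_next)))
    return r + alpha * log_pi_a + gamma * soft_v

# Example transition with hypothetical q-values.
t = mdqn_target(r=1.0, q_s=np.array([1.0, 2.0]), a=1,
                q_next=np.array([0.5, 0.5]))
```

Setting alpha=0 recovers the soft-DQN target, which makes the Munchausen bonus easy to ablate when comparing against plain DQN.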
PR #244, #250, #253
- Compress the pickles used to save the trained agents.
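The general pattern, compressing the pickled bytes before writing them to disk, can be sketched with the standard library; this is a generic gzip+pickle illustration, and the exact compression scheme rlberry uses may differ.

```python
import gzip
import pickle

# Hypothetical agent state with highly redundant content.
obj = {"policy_weights": list(range(1000)), "episode": 42}

raw = pickle.dumps(obj)
compressed = gzip.compress(raw)

# Round-trip: decompress, then unpickle.
restored = pickle.loads(gzip.decompress(compressed))
assert restored == obj
```

For typical agent checkpoints, which contain repetitive numeric data, the compressed file is substantially smaller at the cost of a little extra CPU time on save and load.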
PR #235
- Implementation of rlberry.envs.SpringCartPole environment, an RL environment featuring two cartpoles linked by a spring.
PR #226, #227
- Improved logging; the logging level can now be changed with rlberry.utils.logging.set_level().
- Introduced smoothing in curves produced by plot_writer_data when only one seed is used.
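With a single seed there is no across-seed band to draw, so smoothing the curve itself is what makes the trend readable. A minimal sketch of one such smoother, a simple moving average, is below; plot_writer_data may use a different method, and the window size here is an arbitrary choice.

```python
import numpy as np

def smooth(y, window=10):
    """Moving average over a sliding window (one possible smoother)."""
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode="valid")

# A noisy single-seed learning curve (random walk, for illustration).
rng = np.random.default_rng(1)
noisy = np.cumsum(rng.normal(size=200))
smoothed = smooth(noisy, window=20)
```

Note that mode="valid" shortens the curve by window - 1 points, so the smoothed series must be plotted against a correspondingly shifted x-axis.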
PR #223
- Moved PPO from experimental to torch agents. Tested and benchmarked.
Files
rlberry-py/rlberry-v0.4.0.zip (3.5 MB)
md5:85ea3122881cf699d5a7b51f81ea04a3
Additional details
Related works
- Is supplement to
- https://github.com/rlberry-py/rlberry/tree/v0.4.0 (URL)