Software Open Access

Framework for Deep Reinforcement Learning with GPU-CPU Multiprocessing

Ivan Sosin; Oleg Svidchenko; Aleksandra Malysheva; Daniel Kudenko; Aleksei Shpilman

One of the main challenges in Deep Reinforcement Learning is that running simulations may be CPU-heavy, while the optimal computing device for training neural networks is a GPU. One way to overcome this problem is to build a custom machine with GPU-to-CPU proportions that avoid bottlenecking one or the other. Another is to have a GPU machine work in tandem with a CPU machine, launching one or both via a cloud computing service. We have designed a framework for such a tandem interaction.
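A minimal sketch of the tandem pattern the abstract describes, assuming a standard producer-consumer design: CPU worker processes run simulation rollouts and push trajectories onto a queue, while a trainer process (which would sit on the GPU machine) consumes them for network updates. All names (`rollout`, `worker`, `trainer`) and the toy environment are hypothetical stand-ins, not the authors' actual API; a real deployment would replace the in-process queue with a network transport between the two machines.

```python
import multiprocessing as mp
import random

def rollout(seed, steps=100):
    # Hypothetical toy environment standing in for a CPU-heavy simulator:
    # returns a list of (state, reward) pairs.
    rng = random.Random(seed)
    return [(rng.random(), rng.random()) for _ in range(steps)]

def worker(worker_id, queue, episodes=5):
    # Each CPU worker runs simulations and ships trajectories to the trainer.
    for ep in range(episodes):
        queue.put((worker_id, rollout(seed=worker_id * 1000 + ep)))
    queue.put((worker_id, None))  # sentinel: this worker is done

def trainer(num_workers=4, episodes=5):
    # The trainer (on the GPU machine in the tandem setup) consumes
    # trajectories and would perform network updates; here we merely
    # aggregate rewards to keep the sketch self-contained.
    queue = mp.Queue()
    procs = [mp.Process(target=worker, args=(i, queue, episodes))
             for i in range(num_workers)]
    for p in procs:
        p.start()
    done, total_reward, n = 0, 0.0, 0
    while done < num_workers:
        _, traj = queue.get()
        if traj is None:
            done += 1
            continue
        total_reward += sum(r for _, r in traj)
        n += len(traj)
    for p in procs:
        p.join()
    return n, total_reward / n

if __name__ == "__main__":
    steps, avg = trainer()
    print(steps, round(avg, 3))
```

Decoupling simulation from training this way lets each side scale independently: more CPU workers when rollouts are the bottleneck, a stronger GPU when training is.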


Files (20.2 kB)
                  All versions   This version
Views                      747            750
Downloads                   31             31
Data volume           625.8 kB       625.8 kB
Unique views               657            659
Unique downloads            30             30
