Software Open Access

Framework for Deep Reinforcement Learning with GPU-CPU Multiprocessing

Ivan Sosin; Oleg Svidchenko; Aleksandra Malysheva; Daniel Kudenko; Aleksei Shpilman

One of the main challenges in Deep Reinforcement Learning is that running simulations can be CPU-heavy, while the optimal computing device for training neural networks is a GPU. One way to overcome this problem is to build a custom machine whose GPU-to-CPU ratio avoids bottlenecking either resource. Another is to have a GPU machine work in tandem with one or more CPU machines, launching either or both via a cloud computing service. We have designed a framework for such tandem interaction.

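As a rough illustration of the tandem interaction described in the abstract, the sketch below uses Python multiprocessing to mimic the split: several CPU worker processes run a (toy) simulation and stream transitions into a shared queue, while a single trainer process — which in the real framework would hold the neural network on the GPU — consumes them in batches. All names and the toy environment here are illustrative assumptions, not the authors' actual API.

```python
# Minimal sketch of the CPU-worker / GPU-trainer tandem, assuming a
# queue-based producer/consumer layout. This is NOT the framework's
# actual interface, only an illustration of the architecture.
import multiprocessing as mp
import random

def cpu_worker(worker_id, queue, n_steps):
    """Run a CPU-bound toy simulation and stream transitions out."""
    state = 0.0
    for _ in range(n_steps):
        action = random.choice([-1, 1])
        next_state = state + action
        reward = -abs(next_state)  # toy reward: stay near zero
        queue.put((worker_id, state, action, reward, next_state))
        state = next_state
    queue.put((worker_id, None, None, None, None))  # done sentinel

def trainer(queue, n_workers, batch_size=8):
    """Collect transitions into batches, as a GPU-side trainer would."""
    buffer, batches, done = [], 0, 0
    while done < n_workers:
        item = queue.get()
        if item[1] is None:  # a worker has finished
            done += 1
            continue
        buffer.append(item)
        if len(buffer) >= batch_size:
            # a real trainer would run a gradient step on the GPU here
            buffer.clear()
            batches += 1
    return batches

if __name__ == "__main__":
    q = mp.Queue()
    workers = [mp.Process(target=cpu_worker, args=(i, q, 20))
               for i in range(4)]
    for w in workers:
        w.start()
    n_batches = trainer(q, n_workers=4)
    for w in workers:
        w.join()
    print(n_batches)  # 4 workers x 20 steps / batch of 8 -> prints 10
```

In an actual GPU/CPU or cloud split, the queue would be replaced by a network transport (e.g. sockets), but the producer/consumer shape stays the same.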

Files (20.2 kB)
                  All versions   This version
Views                      299            302
Downloads                   15             15
Data volume           302.8 kB       302.8 kB
Unique views               275            277
Unique downloads            15             15

