Software Open Access

Framework for Deep Reinforcement Learning with GPU-CPU Multiprocessing

Ivan Sosin; Oleg Svidchenko; Aleksandra Malysheva; Daniel Kudenko; Aleksei Shpilman

One of the main challenges in Deep Reinforcement Learning is that running simulations is often CPU-heavy, while the optimal computing device for training neural networks is a GPU. One way to overcome this problem is to build a custom machine whose GPU-to-CPU proportions avoid bottlenecking one or the other. Another is to have a GPU machine work in tandem with one or more CPU machines, launching either or both via a cloud computing service. We have designed a framework for such a tandem interaction.
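
The sketch below is a minimal illustration of this tandem setup, not the API of the released code: a CPU-side worker process steps a toy environment and streams transitions over a network connection to a trainer process that stands in for the GPU machine. All names here (run_worker, run_trainer, the address, authkey, and the toy environment) are hypothetical and chosen only to show the communication pattern.

```python
import random
import time
import multiprocessing as mp
from multiprocessing.connection import Listener, Client

ADDRESS = ("localhost", 6000)   # in a real setup, the GPU machine's host and port
AUTHKEY = b"rl-tandem"          # shared secret for the connection (illustrative)


def run_worker(num_steps=100):
    """CPU side: step a toy environment and stream transitions to the trainer."""
    conn = None
    for _ in range(50):          # retry briefly until the trainer is listening
        try:
            conn = Client(ADDRESS, authkey=AUTHKEY)
            break
        except ConnectionRefusedError:
            time.sleep(0.1)
    state = 0.0
    for _ in range(num_steps):
        action = random.choice([-1, 1])      # stand-in for querying the current policy
        next_state = state + 0.1 * action
        reward = -abs(next_state)
        conn.send((state, action, reward, next_state))
        state = next_state
    conn.send(None)                          # end-of-rollout marker
    conn.close()


def run_trainer():
    """GPU side: collect transitions; in practice this is where batches would be
    formed and the neural network would be trained on the GPU."""
    with Listener(ADDRESS, authkey=AUTHKEY) as listener:
        with listener.accept() as conn:
            buffer = []
            while True:
                item = conn.recv()
                if item is None:
                    break
                buffer.append(item)
    print(f"trainer received {len(buffer)} transitions")


if __name__ == "__main__":
    trainer = mp.Process(target=run_trainer)
    trainer.start()
    run_worker()
    trainer.join()
```

In a real deployment the two roles would typically run on separate machines or cloud instances, the trainer would batch incoming transitions and update the policy on the GPU, and updated policy weights would be sent back to the simulation workers.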
Files (20.2 kB)

Name: iasawseen/MultiServerRL-v1.0.zip
Size: 20.2 kB
md5: 569c93638c7a3485b8c74839b7f16af0