Software Open Access

Framework for Deep Reinforcement Learning with GPU-CPU Multiprocessing

Ivan Sosin; Oleg Svidchenko; Aleksandra Malysheva; Daniel Kudenko; Aleksei Shpilman

Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Ivan Sosin</dc:creator>
  <dc:creator>Oleg Svidchenko</dc:creator>
  <dc:creator>Aleksandra Malysheva</dc:creator>
  <dc:creator>Daniel Kudenko</dc:creator>
  <dc:creator>Aleksei Shpilman</dc:creator>
  <dc:description>One of the main challenges in Deep Reinforcement Learning is that running simulations can be CPU-heavy, while the optimal computing device for training neural networks is a GPU. One way to overcome this problem is to build a custom machine whose GPU-to-CPU proportions avoid bottlenecking either one. Another is to have a GPU machine work together with a CPU machine, and/or to launch one or both via a cloud computing service. We have designed a framework for such a tandem interaction.

Authors: Ivan Sosin, Oleg Svidchenko, Aleksandra Malysheva, Daniel Kudenko, Aleksei Shpilman.</dc:description>
  <dc:title>Framework for Deep Reinforcement Learning with GPU-CPU Multiprocessing</dc:title>
</oai_dc:dc>
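The tandem interaction described above can be sketched with Python's standard `multiprocessing` module: CPU worker processes run (dummy) simulations and push rollouts into a shared queue, while a single consumer, the GPU trainer in the paper's setup, pulls batches for network updates. All function and parameter names below are illustrative assumptions, not the authors' actual API.

```python
import multiprocessing as mp
import random


def simulate_episode(seed):
    """Stand-in for a CPU-heavy environment rollout (hypothetical)."""
    rng = random.Random(seed)
    # A rollout here is just a list of (observation, reward) pairs.
    return [(rng.random(), rng.random()) for _ in range(8)]


def cpu_worker(worker_id, episodes, queue):
    """Run simulations on a CPU process and ship results to the trainer."""
    for ep in range(episodes):
        queue.put(simulate_episode(worker_id * 1000 + ep))
    queue.put(None)  # sentinel: this worker is done


def run(num_workers=2, episodes_per_worker=3):
    """Collect rollouts from CPU workers; the GPU side would train on them."""
    queue = mp.Queue()
    workers = [
        mp.Process(target=cpu_worker, args=(w, episodes_per_worker, queue))
        for w in range(num_workers)
    ]
    for p in workers:
        p.start()

    batches, finished = [], 0
    while finished < num_workers:
        item = queue.get()
        if item is None:
            finished += 1          # one worker has drained
        else:
            batches.append(item)   # here a GPU trainer would run an update step

    for p in workers:
        p.join()
    return batches


if __name__ == "__main__":
    rollouts = run()
    print(len(rollouts))  # 2 workers * 3 episodes = 6 rollouts
```

The same queue-based handoff works across machines by replacing `mp.Queue` with a network transport (e.g. sockets), which is what allows one side to live on a cloud GPU instance.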
Statistics          All versions    This version
Views               587             590
Downloads           24              24
Data volume         484.5 kB        484.5 kB
Unique views        517             519
Unique downloads    24              24

