Poster Open Access

Deep Learning Inference on Commodity Network Interface Cards

Giuseppe Siracusano; Davide Sanvito; Salvator Galea; Roberto Bifulco


Citation Style Language JSON Export

{
  "publisher": "Zenodo", 
  "DOI": "10.5281/zenodo.3813152", 
  "language": "eng", 
  "title": "Deep Learning Inference on Commodity Network Interface Cards", 
  "issued": {
    "date-parts": [
      [
        2020, 
        5, 
        7
      ]
    ]
  }, 
  "abstract": "<p>Artificial neural networks&rsquo; fully-connected layers require memory-bound operations on modern processors, which are therefore forced to stall their pipelines while waiting for memory loads. Computation batching improves on the issue, but it is largely inapplicable when dealing with time-sensitive serving workloads, which lowers the overall efficiency of the computing infrastructure. In this paper, we explore the opportunity to improve on the issue by offloading fully-connected layers processing to commodity Network Interface Cards. Our results show that current network cards can already process the fully-connected layers of binary neural networks, and thereby increase a machine&rsquo;s throughput and efficiency. Further preliminary tests show that, with a relatively small hardware design modification, a new generation of network cards could increase their fully-connected layers processing throughput by a factor of 10.</p>", 
  "author": [
    {
      "family": "Giuseppe Siracusano"
    }, 
    {
      "family": "Davide Sanvito"
    }, 
    {
      "family": "Salvator Galea"
    }, 
    {
      "family": "Roberto Bifulco"
    }
  ], 
  "id": "3813152", 
  "event-place": "Vancvouver, Canada", 
  "version": "final", 
  "type": "graphic", 
  "event": "Thirty-second Conference on Neural Information Processing Systems (NeurIPS | 2018)"
}
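
The abstract above rests on the fact that a binary neural network's fully-connected layer reduces to XNOR-and-popcount operations over packed bit vectors, which is what makes it small enough to offload to a commodity NIC. The sketch below illustrates only that reduction in NumPy; it is our own illustrative assumption, not the authors' NIC implementation, and all function names are hypothetical.

import numpy as np

def binarize(x):
    # Encode the sign of each element as a bit: 1 for +1, 0 for -1 (packed 8 per byte).
    return np.packbits((x >= 0).astype(np.uint8))

def binary_dot(a_bits, w_bits, n):
    # Dot product of two {-1, +1} vectors from their packed-bit encodings:
    # matches - mismatches = 2 * popcount(XNOR(a, w)) - n.
    xnor = np.bitwise_not(np.bitwise_xor(a_bits, w_bits))
    matches = int(np.unpackbits(xnor)[:n].sum())  # popcount, ignoring padding bits
    return 2 * matches - n

def binary_fc_layer(activations, weight_matrix):
    # Fully-connected layer: one binary dot product per output neuron.
    n = activations.size
    a_bits = binarize(activations)
    return np.array([binary_dot(a_bits, binarize(w), n) for w in weight_matrix])

# Example: 256 binary inputs, 4 output neurons; result matches the float dot product.
rng = np.random.default_rng(0)
a = rng.choice([-1.0, 1.0], size=256)
W = rng.choice([-1.0, 1.0], size=(4, 256))
assert np.allclose(binary_fc_layer(a, W), W @ a)

Because each output neuron collapses to an XNOR followed by a bit count over a few hundred bits, the arithmetic itself is trivial; the interesting question the poster addresses is whether a NIC's packet-processing pipeline can perform it at line rate.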
                  All versions   This version
Views             51             51
Downloads         32             32
Data volume       27.6 MB        27.6 MB
Unique views      47             47
Unique downloads  30             30
