Poster (Open Access)

Deep Learning Inference on Commodity Network Interface Cards

Giuseppe Siracusano; Davide Sanvito; Salvator Galea; Roberto Bifulco

The fully-connected layers of artificial neural networks perform memory-bound operations on modern processors, which therefore stall their pipelines while waiting for memory loads. Computation batching mitigates the problem, but it is largely inapplicable to time-sensitive serving workloads, which lowers the overall efficiency of the computing infrastructure. In this paper, we explore the opportunity to address the issue by offloading fully-connected layer processing to commodity Network Interface Cards. Our results show that current network cards can already process the fully-connected layers of binary neural networks, and thereby increase a machine’s throughput and efficiency. Further preliminary tests show that, with a relatively small hardware design modification, a new generation of network cards could increase their fully-connected layer processing throughput by a factor of 10.
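In a binary neural network, a fully-connected layer can be evaluated with bitwise XNOR and popcount operations, which is the standard formulation for such networks and suggests why these layers are a plausible fit for fixed-function packet-processing hardware. The sketch below illustrates that formulation in plain C; the layer sizes, names, and any mapping to a NIC are assumptions of the example, not the poster's implementation.

/*
 * Minimal sketch (illustrative, not the authors' NIC implementation):
 * a binarized fully-connected layer evaluated with XNOR + popcount.
 * All sizes and names below are assumptions for the example.
 */
#include <stdint.h>
#include <stdio.h>

#define IN_WORDS    4   /* 4 x 64 = 256 binary inputs          */
#define OUT_NEURONS 8   /* 8 binary outputs, packed in a byte  */

/* Inputs and weights encode +1 as a set bit and -1 as a cleared bit. */
static uint8_t binary_fc(const uint64_t in[IN_WORDS],
                         const uint64_t w[OUT_NEURONS][IN_WORDS])
{
    uint8_t out = 0;
    for (int n = 0; n < OUT_NEURONS; n++) {
        int matches = 0;
        for (int i = 0; i < IN_WORDS; i++)
            /* XNOR counts the positions where input and weight agree;
             * __builtin_popcountll is a GCC/Clang builtin. */
            matches += __builtin_popcountll(~(in[i] ^ w[n][i]));
        /* dot product = agreements - disagreements */
        int dot = 2 * matches - 64 * IN_WORDS;
        if (dot >= 0)                      /* sign() as binary activation */
            out |= (uint8_t)(1u << n);
    }
    return out;
}

int main(void)
{
    const uint64_t in[IN_WORDS] = { 0xF0F0F0F0F0F0F0F0ULL, 0, ~0ULL, 0 };
    const uint64_t w[OUT_NEURONS][IN_WORDS] = {
        { 0xF0F0F0F0F0F0F0F0ULL, 0, ~0ULL, 0 },  /* neuron 0 matches the input */
    };
    printf("output bits: 0x%02x\n", binary_fc(in, w));
    return 0;
}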

Files (862.6 kB)
2018sysml-nips-poster.pdf (862.6 kB, md5:95c5a4a26b315b2098b827887c1655e2)