Poster · Open Access

Deep Learning Inference on Commodity Network Interface Cards

Giuseppe Siracusano; Davide Sanvito; Salvator Galea; Roberto Bifulco

Artificial neural networks’ fully-connected layers require memory-bound operations on modern processors, which are therefore forced to stall their pipelines while waiting for memory loads. Computation batching mitigates the issue, but it is largely inapplicable to time-sensitive serving workloads, which lowers the overall efficiency of the computing infrastructure. In this paper, we explore offloading fully-connected layer processing to commodity Network Interface Cards. Our results show that current network cards can already process the fully-connected layers of binary neural networks, thereby increasing a machine’s throughput and efficiency. Further preliminary tests show that, with a relatively small hardware design modification, a new generation of network cards could increase their fully-connected layer processing throughput by a factor of 10.
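The poster page itself contains no code, but the arithmetic that makes NIC offload plausible can be sketched. In the standard binary-neural-network formulation (an assumption here, not taken from the abstract), weights and activations take values in {-1, +1} and are bit-packed, so a fully-connected dot product reduces to XOR plus popcount — exactly the kind of bit-level operation commodity packet-processing hardware supports. A minimal illustration:

```python
def bin_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two {-1,+1} vectors of length n, each packed as an
    n-bit integer (bit=1 encodes +1, bit=0 encodes -1).
    Since matching bits contribute +1 and differing bits contribute -1:
    a . b = n - 2 * popcount(a XOR b)."""
    return n - 2 * bin(a_bits ^ b_bits).count("1")


def binary_fc(x_bits: int, weight_rows: list[int], n: int) -> int:
    """One binarized fully-connected layer with sign activation:
    output bit i is 1 iff the dot product with weight row i is >= 0."""
    out = 0
    for i, w in enumerate(weight_rows):
        if bin_dot(x_bits, w, n) >= 0:
            out |= 1 << i
    return out
```

For example, with n = 4, an input identical to a weight row yields the maximum dot product (+4), while its bitwise complement yields the minimum (-4); the sign activation then re-packs each row's result into a single output bit, so the next layer can consume it in the same bit-packed form.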

Files (862.6 kB)
                  All versions   This version
Views                       96             96
Downloads                   42             42
Data volume            36.2 MB        36.2 MB
Unique views                83             83
Unique downloads            40             40
