Published May 18, 2021 | Version v1
Software · Open Access

RecPipe: Co-designing Models and Hardware to Jointly Optimize Recommendation Quality and Performance

Description

Deep learning recommendation systems must provide high-quality, personalized content under strict tail-latency targets and high system loads. This paper presents RecPipe, a system that jointly optimizes recommendation quality and inference performance. Central to RecPipe is decomposing recommendation models into multi-stage pipelines, which maintains quality while reducing compute complexity and exposing distinct parallelism opportunities. RecPipe implements an inference scheduler to map multi-stage recommendation engines onto commodity, heterogeneous platforms (e.g., CPUs, GPUs). While hardware-aware scheduling improves ranking efficiency, commodity platforms suffer from limitations that call for specialized hardware. Thus, we design RecPipeAccel (RPAccel), a custom accelerator that jointly optimizes quality, tail latency, and system throughput. RPAccel is designed specifically to exploit the distinct design space opened via RecPipe. In particular, RPAccel processes queries in sub-batches to pipeline recommendation stages, and implements dual static and dynamic embedding caches, a set of top-k filtering units, and a reconfigurable systolic array. Compared to prior art, and at iso-quality, we demonstrate that RPAccel improves latency and throughput by 3x and 6x, respectively.
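The multi-stage decomposition described above can be illustrated with a minimal cascade sketch: each stage rescores the candidates that survived the previous stage and forwards only its top-k. This is an illustrative toy, not RecPipe's implementation; the function names and scoring models below are hypothetical.

```python
import numpy as np

def multi_stage_rank(candidates, stage_scorers, top_ks):
    """Run candidates through a cascade of (scorer, k) stages.

    Each stage scores the current candidate pool and keeps only the
    k highest-scoring items, so cheap early stages shrink the pool
    before expensive later stages rescore the survivors.
    """
    pool = np.asarray(candidates)
    for score_fn, k in zip(stage_scorers, top_ks):
        scores = score_fn(pool)
        # Indices of the k highest scores (descending order).
        keep = np.argsort(scores)[::-1][:k]
        pool = pool[keep]
    return pool

# Hypothetical two-stage cascade over 100 candidate item IDs:
# a coarse stage keeps the 11 items nearest 50, then a finer
# stage keeps the 3 of those nearest 42.
coarse = lambda items: -np.abs(items - 50)
fine = lambda items: -np.abs(items - 42)
survivors = multi_stage_rank(np.arange(100), [coarse, fine], [11, 3])
# survivors contains the items 45, 46, and 47
```

In a real deployment the early stage would be a small embedding-dot-product model over millions of candidates and the late stage a full DNN over a few hundred; the top-k filtering between stages is what RPAccel implements in dedicated hardware units.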

Files

RecPipe-Artifact.zip

Files (7.0 GB)

md5:bb99bf19ff2e968ba73ac1b76ff4148c — 2.2 GB
md5:139f1c03b2ded4791715b9f6e90997f5 — 4.3 GB
md5:0900a4450584766eedb29f74221520b6 — 540.3 MB
md5:3f3486a22a53a006bb6152415b22242c — 5.1 MB

Additional details

References

  • Software artifact associated with arXiv:2105.08820.