Conference paper Open Access
Marco Cococcioni; Federico Rossi; Emanuele Ruffaldi; Sergio Saponara
With deep neural networks now pervasive in scenarios with real-time requirements, there is an increasing need for optimized arithmetic on high-performance architectures. In this paper we pursue two key ideas: i) extensive use of vectorization to accelerate the computation of deep neural network kernels; ii) adoption of the compressed posit arithmetic in order to reduce memory transfers between the vector registers and the rest of the memory hierarchy. Finally, we present our first results on a real hardware implementation of the ARM Scalable Vector Extension.
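To make the compression idea concrete, below is a minimal illustrative sketch of decoding one 8-bit posit (assuming a posit&lt;8,0&gt; configuration, i.e. no exponent bits) into a float. This is an assumption for illustration only: the paper's actual kernels operate on vectors of posits via SVE rather than decoding scalars one at a time, and the exact posit size and exponent-field width used are not stated in this abstract.

```python
def decode_posit8(byte: int) -> float:
    """Decode an 8-bit posit with es=0 (illustrative sketch, not the paper's code).

    Layout after the sign bit: a run of identical "regime" bits, a terminating
    bit, then fraction bits. With es=0 the scale is simply 2**k, where k is
    derived from the regime run length.
    """
    byte &= 0xFF
    if byte == 0x00:
        return 0.0
    if byte == 0x80:
        return float("nan")          # NaR (Not a Real) in posit arithmetic
    sign = -1.0 if byte & 0x80 else 1.0
    if byte & 0x80:
        byte = (-byte) & 0xFF        # two's complement for negative posits
    bits = byte & 0x7F               # remaining 7 bits: regime + fraction
    r0 = (bits >> 6) & 1             # value of the first regime bit
    m = 0                            # regime run length
    for i in range(6, -1, -1):
        if (bits >> i) & 1 == r0:
            m += 1
        else:
            break
    k = (m - 1) if r0 else -m        # regime exponent
    fbits = max(0, 7 - m - 1)        # fraction bits left after regime + terminator
    frac = bits & ((1 << fbits) - 1)
    return sign * (1.0 + frac / (1 << fbits)) * (2.0 ** k)

# A few sample decodings:
print(decode_posit8(0x40))  # 1.0
print(decode_posit8(0x50))  # 1.5
print(decode_posit8(0xC0))  # -1.0
```

The memory-bandwidth benefit the abstract alludes to follows directly: storing activations and weights as 8-bit posits instead of 32-bit floats quarters the traffic between memory and the vector registers, at the cost of a decode step that vector hardware can amortize.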