Report Open Access

Graph Neural Network Inference on FPGA

Kazi Ahmed Asif Fuad

Graph Neural Networks show promise for track reconstruction at the Large Hadron Collider, where the detector data are high-dimensional and sparse. Field Programmable Gate Arrays (FPGAs) have the potential to speed up inference of machine learning algorithms compared to GPUs because they support pipelined operation. In our research, we used hls4ml, a machine learning inference package for FPGAs, and evaluated three architectures: Pipeline, Dataflow, and Dataflow with pipelined blocks. Results show that the Pipeline architecture is the fastest, but it has disadvantages such as large loop unrolling and a non-functioning reuse factor; due to the large loop unrolling, synthesizing the hardware architecture from the High-Level Synthesis (HLS) C++ code takes more than 100 hours. Our implementation using the Dataflow architecture, on the other hand, is too slow, and it does not solve the long synthesis time either. We therefore propose a modified Dataflow architecture in which some of the building blocks use the pipeline architecture. This architecture yields promising results, although the long synthesis time remains unsolved.

Files (1.6 MB)
Report_Kazi Ahmed Asif_Fuad.pdf (1.6 MB)
                  All versions   This version
Views                      311            311
Downloads                  240            240
Data volume           373.9 MB       373.9 MB
Unique views               296            296
Unique downloads           227            227
