Report Open Access

Graph Neural Network Inference on FPGA

Kazi Ahmed Asif Fuad

Graph Neural Networks are a promising approach to track reconstruction for the Large Hadron Collider use-case because the data are high-dimensional and sparse. Field Programmable Gate Arrays (FPGAs) have the potential to speed up inference of machine learning algorithms compared to GPUs because they support pipelined operation. In our research we used hls4ml, a machine learning inference package for FPGAs, and evaluated three architectures: Pipeline, Dataflow, and Dataflow with pipelined blocks. Results show that the Pipeline architecture is the fastest, but it has disadvantages such as large loop unrolling and a non-functioning reuse factor; because of the large loop unrolling, synthesizing the hardware architecture from the High Level Synthesis (HLS) C++ code takes more than 100 hours. Our implementation using the Dataflow architecture, on the other hand, is too slow and does not solve the large synthesis time either. We therefore proposed a modified Dataflow architecture in which some of the building blocks use the pipeline architecture. This architecture yields promising results, but the large synthesis time problem remains unsolved.
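The three architectures compared in the report differ mainly in which HLS pragmas are applied to the design. A minimal sketch of the idea, using hypothetical layer sizes and plain dense layers rather than the report's actual GNN, might look like:

```cpp
// Hypothetical sizes, for illustration only (not the report's GNN dimensions).
const int N_IN = 4;
const int N_OUT = 4;

// One dense layer. With PIPELINE, HLS fully unrolls the inner loops,
// which is what drives the very long synthesis times for large layers.
void dense(const float in[N_IN], const float w[N_IN][N_OUT], float out[N_OUT]) {
#pragma HLS PIPELINE II=1
    for (int j = 0; j < N_OUT; ++j) {
        float acc = 0;
        for (int i = 0; i < N_IN; ++i)
            acc += in[i] * w[i][j];
        out[j] = acc;
    }
}

// Dataflow top level: stages run as concurrent processes. Keeping the
// stages themselves pipelined corresponds to the report's modified
// "Dataflow with pipelined blocks" architecture.
void top(const float in[N_IN], const float w1[N_IN][N_OUT],
         const float w2[N_OUT][N_OUT], float out[N_OUT]) {
#pragma HLS DATAFLOW
    float tmp[N_OUT];
    dense(in, w1, tmp);
    dense(tmp, w2, out);
}
```

The pragmas are ignored by an ordinary C++ compiler, so the same code can be functionally tested on a host before synthesis; only the HLS tool interprets them when generating hardware.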

Files (1.6 MB)
Report_Kazi Ahmed Asif_Fuad.pdf (1.6 MB)
md5:a611f7438fd4d34329652310456d8f2a
