Generic and ML Workloads in an HPC Datacenter
Creators
- Chu, Xiaoyu (Contact person)
- Hofstätter, Daniel (Contact person)
- Ilager, Shashikant (Contact person)
- Talluri, Sacheendra (Contact person)
- Kampert, Duncan (Contact person)
- Podareanu, Damian (Contact person)
- Duplyakin, Dmitry (Contact person)
- Brandic, Ivona (Contact person)
- Iosup, Alexandru (Contact person)
Description
This archive contains hardware and workload traces from SURF Lisa, a Dutch datacenter consisting of 338 nodes, used by universities and researchers for various jobs. Around 85% of the nodes are equipped only with CPUs and handle generic compute-heavy workloads; the other 15% additionally include GPUs, which serve as accelerators for Machine Learning (ML) jobs. Individual node hardware configurations are listed in `node_hardware_info.parquet`.
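As a starting point, the hardware table can be inspected with pandas. This is a minimal sketch; no particular schema is assumed, and the printed column names depend on the actual file contents.

```python
# Minimal sketch: inspect the node hardware configurations with pandas.
# No column names are assumed; we simply list what the file provides.
import pandas as pd

hw = pd.read_parquet("node_hardware_info.parquet")
print(hw.columns.tolist())  # available hardware attributes per node
print(hw.head())            # preview a few node configurations
```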
Jobs within Lisa are submitted via the SLURM scheduler, from which we logged job start and end times, resource allocations, and exit states for roughly 10 months (December 2021 to November 2022). This data is saved in `slurm_table_cleaned.parquet`.
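The job trace can be loaded the same way. The sketch below only prints the table size and types; the commented-out follow-up assumes hypothetical column names (`start_time`, `end_time`, `exit_state`) that should be checked against the actual schema first.

```python
# Sketch: load the SLURM job trace and get a first overview.
# Exact field names for start/end time and exit state are assumptions;
# inspect jobs.dtypes before relying on them.
import pandas as pd

jobs = pd.read_parquet("slurm_table_cleaned.parquet")
print(len(jobs), "job records")
print(jobs.dtypes)

# Hypothetical follow-up, assuming "start_time"/"end_time"/"exit_state" columns:
# jobs["duration"] = jobs["end_time"] - jobs["start_time"]
# print(jobs.groupby("exit_state")["duration"].describe())
```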
Additionally, we provide detailed Prometheus monitoring logs from all nodes over a timespan of 5 months (June 2022 to November 2022) in `prom_table_cleaned.parquet`. These logs contain over 90 attributes, including CPU/GPU power and temperature, network I/O, and memory and storage usage. The metrics are sampled at 30-second intervals, resulting in almost 130 million records across all nodes.
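Because this trace is large (almost 130 million rows), it can be more practical to first read the schema and then load only the columns of interest. The sketch below does this with pyarrow and pandas; the column names in the commented call are placeholders, not the dataset's actual attribute names.

```python
# Sketch: inspect the schema of the large Prometheus trace, then load a subset.
# "node", "timestamp", and "cpu_power" below are placeholder names (assumptions);
# replace them with actual attribute names taken from the printed schema.
import pyarrow.parquet as pq

schema = pq.read_schema("prom_table_cleaned.parquet")
print(schema.names)  # list all 90+ available attributes

# Then load only the needed columns to keep memory usage manageable:
# import pandas as pd
# df = pd.read_parquet("prom_table_cleaned.parquet",
#                      columns=["node", "timestamp", "cpu_power"])
```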
Finally, job and node data are provided as a joined dataset in `prom_slurm_joined.parquet`, covering the 4 months in which the two traces overlap. This combined data provides deeper insight into the resource consumption and performance patterns of jobs.
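For example, a node-level metric could be aggregated per job from the joined table. This is a hypothetical sketch: `job_id` and `metric_of_interest` are assumed names and must be replaced with the real columns after inspecting the schema.

```python
# Hypothetical sketch: aggregate a monitoring metric per job from the joined
# table. "job_id" and "metric_of_interest" are assumed column names; substitute
# the actual names printed from joined.columns.
import pandas as pd

joined = pd.read_parquet("prom_slurm_joined.parquet")
print(joined.columns.tolist())

# per_job = joined.groupby("job_id")["metric_of_interest"].agg(["mean", "max"])
# print(per_job.head())
```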
We conducted a detailed analysis of this data, focusing on the differing characteristics of generic vs. ML workloads in a heterogeneous HPC environment. The code used for this evaluation can be found on GitHub.
| Dataset Name | Explanation |
|---|---|
| slurm_table_cleaned.parquet | Job data collected by SLURM |
| prom_table_cleaned.parquet | Node data collected by Prometheus |
| prom_slurm_joined.parquet | Joined job and node dataset |
| node_hardware_info.parquet | Hardware configuration of each node |