Running Kubernetes Workloads on HPC
- Foundation for Research and Technology Hellas
Description
Cloud and HPC increasingly converge in hardware platform capabilities and specifications, yet still differ largely in the software stack and how it manages available resources. The HPC world typically favors Slurm for job scheduling, whereas Cloud deployments rely on Kubernetes to orchestrate container instances across nodes. Running hybrid workloads is possible through bridging mechanisms that submit jobs from one environment to the other. However, such solutions require costly data movements, while operating within the constraints set by each setup's network and access policies. In this work, we explore a design that enables running unmodified Kubernetes workloads directly on HPC. With High-Performance Kubernetes (HPK), users deploy their own private Kubernetes "mini Clouds", which internally convert container lifecycle management commands to use the system-level Slurm installation for scheduling and Singularity/Apptainer as the container runtime. We consider this approach practical for deployment in HPC centers, as it requires minimal pre-configuration and retains existing resource management and accounting policies. HPK provides users with an effective way to utilize resources through a combination of well-known tools, APIs, and the more interactive, user-friendly interfaces common in the Cloud domain, and to seamlessly combine Cloud-native tools with HPC jobs in converged, containerized workflows.
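To illustrate the kind of translation the abstract describes — converting container lifecycle management commands into Slurm jobs that run under Singularity/Apptainer — the sketch below renders a minimal, simplified pod spec as a Slurm batch script. All names and fields here are illustrative assumptions for exposition; this is not HPK's actual code.

```python
# Hypothetical sketch: translate a simplified Kubernetes pod spec into a
# Slurm batch script that runs each container with Apptainer.
# The pod-spec subset and the mapping chosen here are illustrative only.

def pod_to_sbatch(pod: dict) -> str:
    """Render a Slurm batch script for a minimal pod spec."""
    meta = pod["metadata"]
    lines = [
        "#!/bin/bash",
        f"#SBATCH --job-name={meta['name']}",
    ]
    for c in pod["spec"]["containers"]:
        # Map Kubernetes resource limits onto Slurm resource directives.
        limits = c.get("resources", {}).get("limits", {})
        if "cpu" in limits:
            lines.append(f"#SBATCH --cpus-per-task={limits['cpu']}")
        if "memory" in limits:
            lines.append(f"#SBATCH --mem={limits['memory']}")
        cmd = " ".join(c.get("command", []) + c.get("args", []))
        # Run the container image via Apptainer; the docker:// prefix
        # pulls the image from an OCI registry.
        lines.append(f"apptainer exec docker://{c['image']} {cmd}".rstrip())
    return "\n".join(lines) + "\n"

# Example: a single-container pod with CPU and memory limits.
pod = {
    "metadata": {"name": "hello"},
    "spec": {"containers": [{
        "name": "main",
        "image": "alpine:3.19",
        "command": ["echo", "hi"],
        "resources": {"limits": {"cpu": "2", "memory": "1G"}},
    }]},
}
script = pod_to_sbatch(pod)
```

A real bridge would also have to handle multi-container pods, volumes, environment variables, and status reporting back to the Kubernetes control plane; the point here is only the shape of the spec-to-job translation.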
Files
chazapis-WOCC-2023-preprint.pdf
(437.4 kB)
md5:5fdd4112f2179a0a51cb1b4fbf2df872
Additional details
Related works
- Is previous version of
- Conference paper: 10.1007/978-3-031-40843-4_14 (DOI)