10.1007/978-3-319-46079-6_44
https://zenodo.org/records/803970
oai:zenodo.org:803970
Anastasios Papagiannis
Institute of Computer Science, FORTH (ICS) and Department of Computer Science, University of Crete
Giorgos Saloustros
Institute of Computer Science, FORTH (ICS)
Manolis Marazakis
Institute of Computer Science, FORTH (ICS)
Angelos Bilas
Institute of Computer Science, FORTH (ICS) and Department of Computer Science, University of Crete
User-space I/O for μs-level storage devices
Zenodo
2016
NVM
I/O
storage systems
low latency
protection
European Union
Horizon 2020
Euratom
Euratom research & training programme 2014-2018
2016-10-06
10.1145/3041710.3041713
https://zenodo.org/communities/eu
Creative Commons Attribution Non Commercial No Derivatives 4.0 International
System software overheads in the I/O path, including VFS and file system code, become more pronounced with emerging low-latency storage devices. Currently, these overheads constitute the main bottleneck in the I/O path and limit the efficiency of modern storage systems. In this paper we present Iris, a new I/O path for applications that minimizes system software overheads in the common I/O path. The main idea is the separation of the control and data planes. The control plane consists of an unmodified Linux kernel and is responsible for data plane initialization and for the normal processing path through the kernel for non-file-related operations. The data plane is a lightweight mechanism that provides direct access to storage devices with minimal overheads and without sacrificing strong protection semantics. Iris requires neither hardware support from the storage devices nor changes to user applications. We evaluate our early prototype and find that, on a single core, it achieves up to 1.7× and 2.2× better random read and write IOPS, respectively, compared to the xfs and ext4 file systems. It also scales with the number of cores; using 4 cores, Iris achieves 1.84× and 1.96× better random read and write IOPS, respectively.
This paper was presented at WOPSSS 2016: Workshop On Performance and Scalability of Storage Systems. An extended version appeared in ACM SIGOPS Operating Systems Review, Volume 50, Issue 3, December 2016, Pages 3-11.
European Commission
10.13039/501100000780
671553
European Exascale System Interconnect and Storage