Sampling in Cloud Benchmarking: A Critical Review and Methodological Guidelines
Authors/Creators
Saman Akbari, Manfred Hauswirth
Description
This replication package contains the data and code needed to replicate our critical review of sampling in cloud benchmarking.
Paper
Akbari, Saman, and Manfred Hauswirth. "Sampling in Cloud Benchmarking: A Critical Review and Methodological Guidelines." 2024 IEEE International Conference on Cloud Computing Technology and Science (CloudCom). IEEE, 2024. DOI: 10.1109/CloudCom62794.2024.00034.
@inproceedings{akbari2024sampling,
title={Sampling in Cloud Benchmarking: A Critical Review and Methodological Guidelines},
author={Akbari, Saman and Hauswirth, Manfred},
booktitle={2024 IEEE International Conference on Cloud Computing Technology and Science (CloudCom)},
pages={160--167},
year={2024},
organization={IEEE}
}
Abstract
Cloud benchmarks suffer from performance fluctuations caused by resource contention, network latency, hardware heterogeneity, and other factors along with decisions taken in the benchmark design. In particular, the sampling strategy of benchmark designers can significantly influence benchmark results. Despite this well-known fact, no systematic approach has been devised so far to make sampling results comparable and guide benchmark designers in choosing their sampling strategy for use within benchmarks. To identify systematic problems, we critically review sampling in recent cloud computing research. Our analysis identifies concerning trends: (i) a high prevalence of non-probability sampling, (ii) over-reliance on a single benchmark, and (iii) restricted access to samples. To address these issues and increase transparency in sampling, we propose methodological guidelines for researchers and reviewers. We hope that our work contributes to improving the generalizability, reproducibility, and reliability of research results.
Files
| Name | Size | MD5 |
|---|---|---|
| sampling_in_cloud_benchmarking.zip | 514.7 kB | 8be7c450908da9f2039c0b578947c15e |