Published August 2, 2022 | Version 1.0
Dataset | Open

Benchmarking on Microservices Configurations and the Impact on the Performance in Cloud Native Environments

  • Mohamed Mekki, Nassima Toumi, Adlen Ksentini (EURECOM)

Description

The peer-reviewed publication for this dataset appeared in LCN 2022, the 47th Annual IEEE Conference on Local Computer Networks. Please cite that paper when referring to the dataset: https://www.eurecom.fr/publication/6971.

Cloud-native design and containerization have changed the way applications are developed and deployed. Cloud-native rethinks the application architecture by embracing a microservice approach, where each microservice is packaged into a container to run in a centralized or an edge cloud. When deploying the container running a microservice, the tenant has to specify the computing resources needed to run the workload, namely the CPU and memory limits. However, it is not straightforward for a tenant to know in advance the amount of computing resources that allows the microservice to run optimally. This has an impact not only on service performance but also on the infrastructure provider, particularly if resource overprovisioning is used. To overcome this issue, we conducted an experimental study aiming to detect whether a tenant's configuration allows its service to run optimally. We ran several experiments on a cloud-native platform, using different types of applications under different resource configurations. The results are presented in the accepted IEEE LCN paper (https://www.eurecom.fr/publication/6971) and are shared in this dataset.
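To illustrate the configuration step the study examines, the following is a minimal sketch, assuming the official kubernetes Python client; the pod name, image, and resource values are hypothetical and not taken from the dataset:

```python
# A tenant declaring CPU and memory limits for a microservice container,
# sketched with the official `kubernetes` Python client. The pod name, image
# and limit values are illustrative and not taken from the dataset.
from kubernetes import client

container = client.V1Container(
    name="web-server",                      # hypothetical microservice
    image="example/golang-web-server:1.0",  # hypothetical container image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "500m", "memory": "256Mi"},  # guaranteed share
        limits={"cpu": "1", "memory": "512Mi"},       # hard cap enforced at runtime
    ),
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web-server"),
    spec=client.V1PodSpec(containers=[container]),
)
print(pod.spec.containers[0].resources.limits)
```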

The datasets were collected for three types of applications: web servers written in Python and Go (Golang), the RabbitMQ message broker, and the AMF (Access and Mobility Management Function) of the OpenAirInterface 5G Core network.


 

Web Servers:

files: golang-web-server-performance.csv, python-web-server-performance.csv

We used Golang- and Python-based web servers for the test. Each request to the web server returns a video of size 43 MB. For testing we used ApacheBench, a command-line program for benchmarking HTTP web servers that can issue parallel requests from multiple clients. For each web server instance we sent between 100 and 1000 requests at a concurrency level between 1 and 100, the concurrency level being the number of parallel clients performing the requests.
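A sketch of this sweep is below; the URL and the exact (n, c) grid are illustrative assumptions, since the description only gives the ranges:

```python
# For each (n, c) pair, invoke ApacheBench (`ab`) against the web server and
# capture its report, which includes the per-percentile service times
# (lat50..lat100 below).
import itertools
import subprocess

URL = "http://10.0.0.1:8080/video"  # placeholder endpoint returning the 43 MB video

for n, c in itertools.product(range(100, 1001, 100), (1, 10, 50, 100)):
    result = subprocess.run(
        ["ab", "-n", str(n), "-c", str(c), URL],  # -n total requests, -c concurrency
        capture_output=True, text=True, check=True,
    )
    print(f"n={n} c={c}")
    print(result.stdout)
```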

The fields available in the dataset are as follows (see the loading example after the list):

time: timestamp of the metrics collection.

ram_limit: the memory limit allocated to the container, in megabytes.

cpu_limit: the CPU allocated to the container.

ram_usage: the amount of memory used by the container at collection time, in bytes.

cpu_usage: the amount of CPU used by the container at collection time.

n: the number of requests sent to the container.

c: the concurrency level, i.e., the number of parallel clients issuing the requests.

lat50: the time within which the fastest 50% of requests were served (50th-percentile response time), in microseconds.

lat66: the time within which the fastest 66% of requests were served, in microseconds.

lat75: the time within which the fastest 75% of requests were served, in microseconds.

lat80: the time within which the fastest 80% of requests were served, in microseconds.

lat90: the time within which the fastest 90% of requests were served, in microseconds.

lat95: the time within which the fastest 95% of requests were served, in microseconds.

lat98: the time within which the fastest 98% of requests were served, in microseconds.

lat99: the time within which the fastest 99% of requests were served, in microseconds.

lat100: the time within which all requests were served (maximum response time), in microseconds.
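A minimal analysis sketch, assuming pandas and the column names listed above (the file name is one of the two given for this section; the aggregation choice is illustrative):

```python
# Load one of the web-server CSVs and reshape the percentile columns into
# long form for analysis.
import pandas as pd

df = pd.read_csv("python-web-server-performance.csv")

lat_cols = [f"lat{p}" for p in (50, 66, 75, 80, 90, 95, 98, 99, 100)]
long_df = df.melt(
    id_vars=["cpu_limit", "ram_limit", "n", "c"],
    value_vars=lat_cols,
    var_name="percentile",
    value_name="latency_us",
)

# Median tail latency (lat99) per resource configuration.
tail = (
    long_df[long_df["percentile"] == "lat99"]
    .groupby(["cpu_limit", "ram_limit"])["latency_us"]
    .median()
)
print(tail)
```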

 

5G Core network’s AMF:

file: amf-performance.csv

For testing we used my5G-RANTester, a tool that emulates the control and data planes of a UE and a gNB (5G base station). The number of simultaneous registration requests sent to each AMF instance varies between 10 and 400.

The fields available in the dataset are as follows (see the example after the list):

time: timestamp of the metrics collection.

ram_limit: the memory limit allocated to the container, in megabytes.

cpu_limit: the CPU allocated to the container.

ram_usage: the amount of memory used by the container at collection time, in bytes.

cpu_usage: the amount of CPU used by the container at collection time.

n: the number of parallel registration requests sent to the AMF.

mean: the mean registration time over all registration requests, in microseconds.

lat50: the median registration time (50th percentile), in microseconds.

lat75: the time within which the fastest 75% of registrations completed, in microseconds.

lat80: the time within which the fastest 80% of registrations completed, in microseconds.

lat90: the time within which the fastest 90% of registrations completed, in microseconds.

lat95: the time within which the fastest 95% of registrations completed, in microseconds.

lat98: the time within which the fastest 98% of registrations completed, in microseconds.

lat99: the time within which the fastest 99% of registrations completed, in microseconds.

lat100: the time within which all registrations completed (maximum registration time), in microseconds.
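As one simple example of what these fields support, the sketch below flags rows whose observed usage sits close to the configured limit; the 90% threshold is an arbitrary illustration, not the detection method of the LCN paper, and it assumes cpu_usage and cpu_limit are expressed in the same unit:

```python
# Flag potentially under-provisioned configurations in amf-performance.csv.
import pandas as pd

df = pd.read_csv("amf-performance.csv")

# ram_usage is reported in bytes while ram_limit is in megabytes (see the field list).
df["ram_usage_mb"] = df["ram_usage"] / (1024 * 1024)

saturated = df[
    (df["cpu_usage"] >= 0.9 * df["cpu_limit"])
    | (df["ram_usage_mb"] >= 0.9 * df["ram_limit"])
]
print(saturated[["cpu_limit", "ram_limit", "n", "lat99"]])
```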

 

RabbitMQ message broker:

file: rabbitmq-performance.csv

For testing we used RabbitMQ PerfTest, a throughput testing tool that simulates basic workloads and reports both the throughput and the time a message takes to be consumed by a consumer. For each deployed RabbitMQ server we used a number of producers and consumers ranging from 50 to 500. Each producer sent messages to the broker at a rate of 100 messages per second for 90 seconds.
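A sketch of this workload launched from Python follows; the broker URI, binary path, and client-count grid are placeholder assumptions, while -x/-y (producers/consumers), -r (per-producer publish rate), and -z (run duration in seconds) are standard PerfTest flags:

```python
# Run the PerfTest workload for increasing numbers of producers and consumers.
import subprocess

for clients in range(50, 501, 50):  # illustrative step within the 50-500 range
    subprocess.run(
        [
            "bin/runjava", "com.rabbitmq.perf.PerfTest",
            "--uri", "amqp://guest:guest@10.0.0.2:5672",
            "-x", str(clients),  # producers
            "-y", str(clients),  # consumers
            "-r", "100",         # 100 messages per second per producer
            "-z", "90",          # 90-second run
        ],
        check=True,
    )
```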

The fields available in the dataset are as follows (see the example after the list):

time: timestamp of the metrics collection.

ram_limit: the memory limit allocated to the container, in megabytes.

cpu_limit: the CPU allocated to the container.

ram_usage: the amount of memory used by the container at collection time, in bytes.

cpu_usage: the amount of CPU used by the container at collection time.

n: the number of producers sending messages to the RabbitMQ server.

Min: the minimum consumption time for the produced messages.

lat50: the median consumption time for the produced messages.

lat75: the time within which the fastest 75% of messages were consumed, in microseconds.

lat95: the time within which the fastest 95% of messages were consumed, in microseconds.

lat99: the time within which the fastest 99% of messages were consumed, in microseconds.
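A short summarization sketch for this file, again assuming pandas and the field names above:

```python
# Summarize consumption latency against client load in rabbitmq-performance.csv.
import pandas as pd

df = pd.read_csv("rabbitmq-performance.csv")

# Median 95th-percentile consumption time per configuration and producer count.
summary = (
    df.groupby(["cpu_limit", "ram_limit", "n"])["lat95"]
    .median()
    .reset_index()
    .sort_values(["cpu_limit", "ram_limit", "n"])
)
print(summary)
```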

Files (14.0 MB)

amf-performance.csv (md5:e22596d6266604093fb44f34d4443717), 3.2 MB
golang-web-server-performance.csv (md5:cfba008cc05d5aff82b46f4668c57fb6), 6.9 MB
python-web-server-performance.csv (md5:5e0c9c00c227178b9ec44c0a0b8ad65a), 1.9 MB
rabbitmq-performance.csv (md5:2d02973977f328d3067b01c37faffa4c), 1.9 MB

Additional details

Related works

Is published in
Conference paper: https://www.eurecom.fr/publication/6971 (URL)

Funding

5G!Drones – Unmanned Aerial Vehicle Vertical Applications' Trials Leveraging Advanced 5G Facilities (grant no. 857031), European Commission
MonB5G – Distributed management of Network Slices in beyond 5G (grant no. 871780), European Commission

References

  • Mohamed Mekki, Nassima Toumi, and Adlen Ksentini. "Microservices Configurations and the Impact on the Performance in Cloud Native Environments". In: LCN 2022, 47th Annual IEEE Conference on Local Computer Networks, 26-29 September 2022, Edmonton, Canada.