# Replication Package for "TriggerBench: A Performance Benchmark for Serverless Function Triggers"
## Creators

- Chalmers | University of Gothenburg
## Description
This replication package contains the code (`aws-triggers` and `azure-triggers`), data analysis scripts (`data-analysis`), and dataset (`data`) of the TriggerBench cross-provider serverless benchmark.
It also bundles a customized extension of the `serverless-benchmarker` tool to automate and analyze serverless performance experiments.
## TriggerBench
The GitHub repository [joe4dev/trigger-bench](https://github.com/joe4dev/trigger-bench) contains the latest version of TriggerBench. This replication package describes the version used for the paper "TriggerBench: A Performance Benchmark for Serverless Function Triggers".
TriggerBench currently supports three triggers on AWS and eight triggers on Microsoft Azure.
## Dataset
The `data/aws` and `data/azure` directories contain data from benchmark executions performed in April 2022.
Each execution is a separate directory with a timestamp in the format `yyyy-mm-dd_HH-MM-SS` (e.g., `2022-04-15_21-58-52`) and contains the following files:
- `k6_metrics.csv`: load generator HTTP client logs in CSV format (see the [K6 docs](https://k6.io/docs/results-visualization/csv/)).
- `sb_config.yml`: serverless benchmarker execution configuration, including the experiment label.
- `trigger.csv`: analyzer output CSV with one row per trace (see the loading sketch below). Columns:
  - `root_trace_id`: the trace id created by k6 and adopted by the invoker function.
  - `child_trace_id`: the trace id newly created by the receiver function if trace propagation is not supported (the case for most asynchronous triggers).
  - `t1`-`t4`: timestamps following the trace model (see paper).
  - `t5`-`t9`: additional timestamps for measuring timestamping overhead.
  - `coldstart_f1`, `coldstart_f2` (`True|False`): cold-start status of the invoker (f1) and receiver (f2) functions.
- `trace_ids.txt`: text file with one pair of `root_trace_id` and `child_trace_id` per line.
- `traces.json`: raw trace JSON representation as retrieved from the provider's tracing service. For AWS, see the [X-Ray segment docs](https://docs.aws.amazon.com/xray/latest/devguide/xray-api-segmentdocuments.html); for Azure, see the [Application Insights telemetry data model](https://docs.microsoft.com/en-us/azure/azure-monitor/app/data-model).
- `workload_options.json`: [K6 load scenario](https://k6.io/docs/using-k6/scenarios/) configuration.
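As a quick orientation for working with the dataset, here is a minimal loading sketch (not part of the package). It assumes pandas is available and that `trigger.csv` uses the column names listed above; the exact semantics of the timestamps follow the paper's trace model.

```python
# Minimal sketch for exploring one execution's trigger.csv.
# Assumptions: pandas is installed, timestamps parse with pd.to_datetime,
# and the coldstart_* columns are read as booleans.
from pathlib import Path

import pandas as pd

run_dir = Path("data/aws/2022-04-15_21-58-52")  # any execution directory
df = pd.read_csv(run_dir / "trigger.csv")

# t1-t4 follow the paper's trace model; deltas between timestamps
# approximate per-step latencies (exact semantics in the paper).
for col in ["t1", "t2", "t3", "t4"]:
    df[col] = pd.to_datetime(df[col])

# Example: spread of the t1-to-t4 span in milliseconds,
# excluding cold starts of the invoker function (f1).
warm = df[df["coldstart_f1"] == False]  # noqa: E712
print((warm["t4"] - warm["t1"]).dt.total_seconds().mul(1000).describe())
```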
## Replicate Data Analysis
### Installation
1. Install [Python](https://www.python.org/downloads/) 3.10+
2. Install the Python dependencies: `pip install -r requirements.txt`
### Create Plots
1. Run `python plots.py` to generate the plots and statistical summaries presented in the paper.
By default, the plots are saved into a `plots` sub-directory.
An alternative output directory can be configured through the `PLOTS_PATH` environment variable (see the sketch below).
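The following is a hypothetical sketch of how such an environment variable can be resolved; the actual logic in `plots.py` may differ:

```python
# Hypothetical sketch: resolve the plot output directory from PLOTS_PATH,
# falling back to the default "plots" sub-directory.
import os
from pathlib import Path

plots_path = Path(os.environ.get("PLOTS_PATH", "plots"))
plots_path.mkdir(parents=True, exist_ok=True)
print(f"Saving plots to {plots_path.resolve()}")
```

For example, `PLOTS_PATH=/tmp/triggerbench-plots python plots.py` would redirect the output without touching the code.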
> Hint: For interactive development, we recommend the VSCode [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) in [interactive mode](https://youtu.be/lwN4-W1WR84?t=107).
## Replicate Cloud Experiments
The following experiment plan automates benchmarking experiments with different types of workloads (constant and bursty).
This generates a new dataset in the same format as described above.
1. Set up a load generator as a vantage point following the description in [LOADGENERATOR](./serverless-benchmarker/docs/LOADGENERATOR.md).
2. Choose the `PROVIDER` (`aws` or `azure`) in the [constant.py](./experiment-plans/constant.py) experiment plan (outlined in the sketch after these steps).
3. Run the [constant.py](./experiment-plans/constant.py) experiment plan:
   - Open tmux.
   - Activate the virtualenv: `source sb-env/bin/activate`
   - Run `./constant.py 2>&1 | tee -a constant.log`
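The bundled [constant.py](./experiment-plans/constant.py) is authoritative; the hypothetical outline below only illustrates the overall shape of such a plan (a `PROVIDER` constant, a loop over workload types, and log-friendly output). The `run_experiment` helper is a stand-in for the serverless-benchmarker calls the real plan performs.

```python
#!/usr/bin/env python
# Hypothetical outline of an experiment plan such as constant.py.
# run_experiment() is a placeholder; the real plan drives the bundled
# serverless-benchmarker tooling instead of printing.
import datetime

PROVIDER = "aws"  # or "azure", as described above

WORKLOAD_TYPES = ["constant", "bursty"]  # workload types from the paper


def run_experiment(provider: str, workload_type: str) -> None:
    """Placeholder for one benchmark run via serverless-benchmarker."""
    print(f"{datetime.datetime.now().isoformat()} "
          f"running {workload_type} workload on {provider} ...")


if __name__ == "__main__":
    for workload_type in WORKLOAD_TYPES:
        run_experiment(PROVIDER, workload_type)
```

Because each run is long, piping the output through `tee -a constant.log` inside tmux (as in the steps above) keeps a persistent log even if the SSH session drops.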
## Contributors
The initial trigger implementations for AWS and Azure are based on two master's thesis projects at Chalmers University of Technology in Sweden, supervised by Joel:

- AWS + Azure: "Performance Comparison of Function-as-a-Service Triggers: A Cross-Platform Performance Study of Function Triggers in Function-as-a-Service" by Marcus Bertilsson and Oskar Grönqvist (2021).
- Azure extension: "Serverless Function Triggers in Azure: An Analysis of Latency and Reliability" by Henrik Lagergren and Henrik Tao (2022).

Joel contributed many improvements to their original source code, as documented in the import commits `a00b67a` and `6d2f5ef`, and developed TriggerBench into an integrated benchmark suite (see the commit history for a detailed changelog).
## Files

- `trigger-bench.zip` (61.1 MB, md5: `6b201206d2de6774a574e277669c9740`)