Published April 24, 2024 | Version v2
Software · Open Access

Delphi: Efficient Asynchronous Approximate Agreement for Distributed Oracles

  • Purdue University West Lafayette
  • Visa Research
  • Purdue University West Lafayette
  • Supra Research
  • Lucerne University of Applied Sciences and Arts
  • Duke University

Description

Delphi: Asynchronous Approximate Agreement for Distributed Oracles

This repository contains a Rust implementation of the following distributed oracle agreement protocols.
 
1. Delphi AAA protocol
2. FIN ACS protocol [1]
3. Abraham et al. AAA protocol [2]
 
The repository uses the libchatter networking library available [here](https://github.com/libdist-rs/libchatter-rs). This code was written as a research prototype and has not been vetted for security; it may therefore contain serious security vulnerabilities. Please use at your own risk.
 
Please consider citing our paper if you use this artifact.
Delphi: Efficient Asynchronous Approximate Agreement for Distributed Oracles
Akhil Bandarupalli, Adithya Bhat, Saurabh Bagchi, Aniket Kate, Chen-Da Liu-Zhang, and Michael K. Reiter
To appear at 54th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 2024.
 

Dataset

The repository also contains a dataset containing values of prominent cryptocurrencies polled from 12 cryptocurrency exchanges. Details are available in the `dataset` folder.
 

Quick Start

We describe the steps to run this artifact.

Hardware and OS setup

1. This artifact has been run and tested on `x86_64` and ARM architectures. However, we are unaware of any issues that would prevent this artifact from running on `x86` architectures.

 

2. This artifact has been run and tested on Ubuntu `20.04.5 LTS` and the Raspbian Linux version released on `2023-02-21`, both of which are Debian-based. However, we are unaware of any issues that would prevent this artifact from running on Red Hat-based distros like Fedora and CentOS.

 

Rust installation and Cargo setup

The repository uses the `Cargo` build tool. The compatibility between dependencies has been tested for Rust version `1.63`.
 
3. Run the following commands to install the toolchain required to compile Rust code and create binary executables.
 
$ sudo apt-get update
$ sudo apt-get -y upgrade
$ sudo apt-get -y autoremove
$ sudo apt-get -y install build-essential
$ sudo apt-get -y install cmake
$ sudo apt-get -y install curl
# Install rust (non-interactive)
$ curl --proto "=https" --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
$ source $HOME/.cargo/env
$ rustup install 1.63.0
$ rustup override set 1.63.0
 
4. Build the repository using the following command. The command should be run in the directory containing the `Cargo.toml` file.
 
$ cargo build --release
$ mkdir logs
 

 

5. Next, generate configuration files for nodes in the system using the following command. Make sure to create the directory (in this example, `testdata/hyb_4/`) before running this command.
 
$ ./target/release/genconfig --base_port 8500 --client_base_port 7000 --client_run_port 9000 --NumNodes 4 --blocksize 100 --delay 100 --target testdata/hyb_4/ --local true

 

6. After generating the configuration files, run the script `appxcon-test.sh` in the `scripts` folder with the following command-line arguments. This command starts Delphi with four nodes.
 
$ ./scripts/appxcon-test.sh {epsilon} {rho} {Delta} testdata/hyb_4/syncer
 
7. Substitute desired values of $\epsilon,\rho_0,\Delta$. For example, $\epsilon=1,\rho_0=10,\Delta=100000$ corresponds to `./scripts/appxcon-test.sh 1 10 100000 testdata/hyb_4/syncer`. The script randomly assigns an input value $v_i$ to each node; this logic can be changed to make nodes start with custom input values.
 
8. The outputs are logged into the `syncer.log` file in the `logs` directory. Each node's output is printed in JSON format, along with the amount of time the node took to terminate the protocol.
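Once `syncer.log` is available, the terminated outputs can be sanity-checked against the approximate-agreement guarantees. Below is a minimal, hypothetical sketch (the values and the helper name are ours, and the actual log format may differ):

```python
# Sketch: checking epsilon-agreement and validity on node outputs.
# Inputs/outputs below are made up for illustration.

def check_approx_agreement(inputs, outputs, eps):
    """True iff all outputs lie within eps of each other (agreement)
    and within the range of honest inputs (validity)."""
    agreement = max(outputs) - min(outputs) <= eps
    validity = all(min(inputs) <= v <= max(inputs) for v in outputs)
    return agreement and validity

inputs = [10.0, 14.0, 11.0, 13.0]    # hypothetical node inputs v_i
outputs = [12.1, 12.5, 12.3, 12.9]   # hypothetical terminated outputs
print(check_approx_agreement(inputs, outputs, eps=1.0))  # True
```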
 
9. Running the FIN ACS protocol requires additional configuration. FIN uses BLS threshold signatures to generate the common coins needed for proposal election and Binary Byzantine Agreement. This setup includes a master public key in the `pub` file, $n$ partial secret keys (one for each node) as `sec0,...,sec3` files, and the $n$ partial public keys as `pub0,...,pub3` files. We used the `crypto_blstrs` library in the [apss](https://github.com/ISTA-SPiDerS/apss) repository to generate these keys, and pregenerated key files for $n=16,64,112,160$ in the `benchmark` folder as `tkeys-{n}.tar.gz` archives. After generating these files, place them in the configuration directory (`testdata/hyb_4` in this example) and run the following command (we have already performed this step; the files are ready in the `testdata/hyb_4` folder).
 
# Kill previous processes running on these ports
$ sudo lsof -ti:7000-7015 | xargs kill -9
$ ./scripts/fin-test.sh testdata/hyb_4/syncer
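The key-file layout FIN expects can be sketched as follows (the helper name is ours; only the `pub`/`sec{i}`/`pub{i}` naming comes from the description above):

```python
# Sketch: enumerate the threshold-BLS key files FIN expects in the
# configuration directory, per the naming scheme described above.

def expected_key_files(n):
    files = ["pub"]                         # master public key
    files += [f"sec{i}" for i in range(n)]  # partial secret keys
    files += [f"pub{i}" for i in range(n)]  # partial public keys
    return files

print(expected_key_files(4))
# ['pub', 'sec0', 'sec1', 'sec2', 'sec3', 'pub0', 'pub1', 'pub2', 'pub3']
```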
 

 

10. Similarly, Abraham et al.'s Approximate Agreement protocol can be run using the following command.
 
# Kill previous processes running on these ports
$ sudo lsof -ti:7000-7015 | xargs kill -9
$ ./scripts/abraham-test.sh {epsilon} {delta} {Delta} testdata/hyb_4/syncer
 
The parameters {epsilon} and {delta} must be equal in this context to yield Abraham et al.'s protocol. {Delta} must equal the difference `M-m` between the largest and smallest honest inputs. An example run:
 
$ ./scripts/abraham-test.sh 2 2 20 testdata/hyb_4/syncer
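To illustrate the parameter choice, the sketch below computes `{Delta}` as the spread `M-m` of a set of hypothetical honest inputs; the round estimate is a rough rule of thumb for halving-style approximate agreement, not a statement about this exact implementation:

```python
import math

# {Delta} for Abraham et al. must equal the actual spread M - m of
# honest inputs; the inputs here are hypothetical.
inputs = [4, 10, 24, 16]
delta_param = max(inputs) - min(inputs)   # M - m
print(delta_param)  # 20

# Halving-style approximate agreement needs on the order of
# log2(Delta/eps) iterations to shrink the spread below eps.
eps = 2
print(math.ceil(math.log2(delta_param / eps)))  # 4
```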
 

 

Running in AWS

We use the code in the [Narwhal](https://github.com/MystenLabs/sui/tree/main/narwhal/benchmark) repository to run the artifact on AWS. It uses `fabric` to spawn AWS instances, install Rust, and build the repository on individual machines. Please refer to the `benchmark` directory for more instructions on reproducing the results in the paper.

Running in Raspberry-Pi testbed

Detailed instructions for reproducing the paper's results on the Raspberry-Pi device testbed are in the `benchmark/raspberry-pi` directory.

 

System architecture

Each node runs as an independent process, which communicates with other nodes through sockets. Apart from the $n$ nodes running the protocol, the system also spawns a process called `syncer`. The `syncer` is responsible for measuring latency of completion. It reliably measures the system's latency by issuing `START` and `STOP` commands to all nodes. The nodes begin executing the protocol only after the `syncer` verifies that all nodes are online, and issues the `START` command by sending a message to all nodes. Further, the nodes send a `TERMINATED` message to the `syncer` once they terminate the protocol. The `syncer` records both start and termination times of all processes, which allows it to accurately measure the latency of each protocol.
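The syncer's bookkeeping can be sketched as follows (timestamps and the function name are hypothetical; the real syncer's internals may differ):

```python
# Sketch of the latency measurement described above: latency runs from
# the START broadcast to each node's TERMINATED message.

def protocol_latencies(start_time, terminated_at):
    """Per-node latency plus the overall latency (slowest node)."""
    per_node = {node: t - start_time for node, t in terminated_at.items()}
    return per_node, max(per_node.values())

start = 1000.0                                       # syncer issues START
terminated = {0: 1250.0, 1: 1310.0, 2: 1275.0, 3: 1290.0}
per_node, overall = protocol_latencies(start, terminated)
print(overall)  # 310.0 -- latency until the slowest node terminates
```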

 

Dependencies

The artifact uses multiple Rust libraries for various functionalities. We give a list of all dependencies used by the artifact in the `Cargo.lock` file. `Cargo` automatically manages these dependencies and fetches the specified versions from the `crates.io` repository manager.

 

Code Organization

The artifact is organized into the following modules of code.
1. The `config` directory contains code for configuring each node in the distributed system. Each node requires the ports to use, the network addresses of other nodes, symmetric keys to establish pairwise authenticated channels between nodes, and protocol-specific configuration parameters like the values of $\epsilon,\Delta,\rho$. Code for managing and parsing these parameters is in the `config` directory. This library has been borrowed from the `libchatter` (https://github.com/libdist-rs/libchatter-rs) repository.

 

2. The `crypto` directory contains code that manages the pairwise authenticated channels between nodes. Nodes use Message Authentication Codes (MACs) to authenticate messages; this module manages the required secret keys and the mechanisms for generating MACs. This library has been borrowed from the `libchatter` (https://github.com/libdist-rs/libchatter-rs) repository.

 

3. The `crypto_blstrs` directory contains code that enables nodes to toss common coins from BLS threshold signatures. This library has been borrowed from the `apss` (https://github.com/ISTA-SPiDerS/apss) repository.

 

4. The `types` directory governs message serialization and deserialization. Each message sent between nodes is serialized into bytes before being sent over the network, and each received byte stream is deserialized back into the required message type. This library has been written on top of the library from the `libchatter` (https://github.com/libdist-rs/libchatter-rs) repository.

 

5. *Networking*: This repository uses the `libnet-rs` (https://github.com/libdist-rs/libnet-rs) networking library. A similar library is the networking module of the `narwhal` (https://github.com/MystenLabs/sui/tree/main/narwhal/) repository. Nodes communicate with each other over TCP.

 

6. The `tools` directory consists of code that generates configuration files for nodes. This library has been borrowed from the `libchatter` (https://github.com/libdist-rs/libchatter-rs) repository.

 

7. The `consensus` directory contains the implementations of the protocols: Abraham et al.'s approximate agreement protocol in the `hyb_appxcon` subdirectory, the `delphi` protocol in the `delphi` subdirectory, and the FIN protocol in the `fin` subdirectory. Each protocol has a `context.rs` file with a function named `spawn`, from which the protocol's execution starts. This function is called by the `node` library in the `node` folder, whose `main.rs` spawns a node instance running the respective protocol by invoking `spawn`.
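As a simple illustration of the serialize/deserialize round trip performed by the `types` module, here is a sketch in Python using JSON; the repository's actual message shapes and wire format (handled in Rust) differ:

```python
import json
from dataclasses import dataclass, asdict

# Generic sketch of a message round trip; the message shape and the
# JSON encoding are illustrative, not the repo's actual wire format.

@dataclass
class ProtocolMsg:
    sender: int
    round: int
    value: float

def to_wire(msg):
    return json.dumps(asdict(msg)).encode()          # message -> bytes

def from_wire(raw):
    return ProtocolMsg(**json.loads(raw.decode()))   # bytes -> message

msg = ProtocolMsg(sender=2, round=1, value=30272.4)
assert from_wire(to_wire(msg)) == msg
```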
 

Running Remote Benchmarks on AWS

Forked from [Narwhal](https://github.com/asonnino/narwhal).

This document explains how to benchmark the codebase and read the benchmarks' results. It also provides a step-by-step tutorial to run benchmarks on [Amazon Web Services (AWS)](https://aws.amazon.com) across multiple data centers (WAN).

Setup

The core protocols are written in Rust, but all benchmarking scripts are written in Python and run with [Fabric](http://www.fabfile.org/). To run the remote benchmark, install the python dependencies:

$ cd benchmark/
$ pip install -r requirements.txt

You also need to install [tmux](https://linuxize.com/post/getting-started-with-tmux/#installing-tmux) (which runs all nodes and clients in the background).

AWS Benchmarks

This repo integrates various python scripts to deploy and benchmark the codebase on [Amazon Web Services (AWS)](https://aws.amazon.com). They are particularly useful to run benchmarks in the WAN, across multiple data centers. This section provides a step-by-step tutorial explaining how to use them.

Step 1. Set up your AWS credentials

Set up your AWS credentials to enable programmatic access to your account from your local machine. These credentials will authorize your machine to create, delete, and edit instances on your AWS account programmatically. First of all, [find your 'access key id' and 'secret access key'](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-creds). Then, create a file `~/.aws/credentials` with the following content:
```
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```
Do not specify any AWS region in that file as the python scripts will allow you to handle multiple regions programmatically.

Step 2. Add your SSH public key to your AWS account

You must now [add your SSH public key to your AWS account](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html). This operation is manual (AWS exposes few APIs to manipulate keys) and needs to be repeated for each AWS region that you plan to use. Upon importing your key, AWS requires you to choose a 'name' for it; ensure you set the same name in all AWS regions. This SSH key will be used by the python scripts to execute commands and upload/download files to your AWS instances.
If you don't have an SSH key, you can create one using [ssh-keygen](https://www.ssh.com/ssh/keygen/):
$ ssh-keygen -f ~/.ssh/aws
 

Step 3. Configure the testbed

The file [settings.json](https://github.com/akhilsb/delphi-rs/blob/master/benchmark/settings.json) (located in [delphi-rs/benchmark](https://github.com/akhilsb/delphi-rs/tree/master/benchmark)) contains all the configuration parameters of the testbed to deploy. Its content looks as follows:
```json
{
    "key": {
        "name": "aws",
        "path": "/absolute/key/path"
    },
    "port": 8500,
    "client_base_port": 9000,
    "client_run_port": 9500,
    "repo": {
        "name": "delphi-rs",
        "url": "https://github.com/akhilsb/delphi-rs.git",
        "branch": "master"
    },
    "instances": {
        "type": "t2.micro",
        "regions": ["us-east-1", "us-east-2", "us-west-1", "us-west-2", "ca-central-1", "eu-west-1", "ap-southeast-1", "ap-northeast-1"]
    }
}
```
The first block (`key`) contains information regarding your SSH key:
```json
"key": {
    "name": "aws",
    "path": "/absolute/key/path"
},
```
Enter the name of your SSH key; this is the name you specified in the AWS web console in step 2. Also, enter the absolute path of your SSH private key (using a relative path won't work).


The next block of fields specifies the TCP ports to use:
```json
"port": 8500,
"client_base_port": 9000,
"client_run_port": 9500,
```
The artifact requires a number of TCP ports for communication between the processes. Note that the script will open a large port range (5000-10000) to the WAN on all your AWS instances.

The third block (`repo`) contains the information regarding the repository's name, the URL of the repo, and the branch containing the code to deploy:
```json
"repo": {
    "name": "delphi-rs",
    "url": "https://github.com/akhilsb/delphi-rs.git",
    "branch": "master"
},
```
Remember to update the `url` field to point to your repo. Modifying the branch name is particularly useful when testing new functionality without having to check out the code locally.

The last block (`instances`) specifies the [AWS instance type](https://aws.amazon.com/ec2/instance-types) and the [AWS regions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions) to use:
```json
"instances": {
    "type": "t2.micro",
    "regions": ["us-east-1", "us-east-2", "us-west-1", "us-west-2", "ca-central-1", "eu-west-1", "ap-southeast-1", "ap-northeast-1"]
}
```
The instance type selects the hardware on which to deploy the testbed. For example, `t2.micro` instances come with 1 vCPU (1 physical core), and 1 GB of RAM. The python scripts will configure each instance with 300 GB of SSD hard drive. The `regions` field specifies the data centers to use. If you require more nodes than data centers, the python scripts will distribute the nodes as equally as possible amongst the data centers. All machines run a fresh install of Ubuntu Server 20.04.
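The "as equally as possible" placement can be sketched as follows (the function is ours, not the scripts'):

```python
# Sketch: distribute nodes across regions as equally as possible,
# matching the description above.

def distribute(total_nodes, regions):
    base, extra = divmod(total_nodes, len(regions))
    # The first `extra` regions get one additional node.
    return {r: base + (1 if i < extra else 0) for i, r in enumerate(regions)}

regions = ["us-east-1", "us-east-2", "us-west-1", "us-west-2",
           "ca-central-1", "eu-west-1", "ap-southeast-1", "ap-northeast-1"]
print(distribute(16, regions)["us-east-1"])  # 2 nodes per region
print(distribute(20, regions))  # first four regions get 3, the rest 2
```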

Step 4. Create a testbed

The AWS instances are orchestrated with [Fabric](http://www.fabfile.org) from the file [fabfile.py](https://github.com/akhilsb/delphi-rs/blob/master/benchmark/fabfile.py) (located in [delphi-rs/benchmark](https://github.com/akhilsb/delphi-rs/tree/master/benchmark)); you can list all possible commands as follows:
```
$ cd delphi-rs/benchmark
$ fab --list
```
The command `fab create` creates new AWS instances; open [fabfile.py](https://github.com/akhilsb/delphi-rs/blob/master/benchmark/fabfile.py) and locate the `create` task:
```python
@task
def create(ctx, nodes=2):
    ...
```
The parameter `nodes` determines how many instances to create in *each* AWS region. That is, if you specified 8 AWS regions as in the example of step 3, setting `nodes=2` will create a total of 16 machines:
```
$ fab create

Creating 16 instances |██████████████████████████████| 100.0%
Waiting for all instances to boot...
Successfully created 16 new instances
```
You can then clone the repo and install Rust on the remote instances with `fab install`:
```
$ fab install

Installing rust and cloning the repo...
Initialized testbed of 16 nodes
```
This may take a long time as the command will first update all instances.
The commands `fab stop` and `fab start` respectively stop and start the testbed without destroying it (it is good practice to stop the testbed when not in use as AWS can be quite expensive); and `fab destroy` terminates all instances and destroys the testbed. Note that, depending on the instance types, AWS instances may take up to several minutes to fully start or stop. The command `fab info` displays a nice summary of all available machines and information to manually connect to them (for debug).
 


Step 5. Run a benchmark

 
 
After setting up the testbed, run a benchmark. Locate the task `remote` in fabfile.py:
```python
@task
def remote(ctx):
    ...
```
Run the benchmark with the following command.
```
$ fab remote
```
This command first updates all machines with the latest commit of the GitHub repo and branch specified in your [settings.json](https://github.com/akhilsb/delphi-rs/blob/master/benchmark/settings.json) file (step 3); this ensures that benchmarks always run with the latest version of the code. It then generates and uploads the configuration files to each machine and runs the benchmarks with the specified parameters. The input parameters for Delphi can be set in the `_config` function of the `remote.py` file in the `benchmark` folder.

 

Step 6: Download logs

 
The following command downloads the log file from the `syncer` titled `syncer.log`.
```
$ fab logs
```
The `syncer.log` file contains the details about the latency of the protocol and the outputs of the nodes. Note that this log file should be downloaded only after allowing the protocol sufficient time to terminate (ideally about 5 minutes). If anything goes wrong during a benchmark, you can always stop it by running `fab kill`.

Be sure to kill the prior benchmark using the following command before running a new benchmark.
```
$ fab kill
```
 

Running protocols

The `run_primary` function in the `commands.py` file specifies which protocol to run. Currently, the function runs the `Delphi` protocol, denoted by the keyword `del`, which is passed to the program using the `--vsstype` flag. Change this `del` keyword to `fin` or `hyb` to run FIN or Abraham et al., respectively.

In addition to the previous changes, the FIN protocol requires a file named `tkeys.tar.gz`: a compressed archive containing the BLS master public key as `pub`, the partial secret key shares as `sec0,...,sec{n-1}`, and the corresponding partial public keys as `pub0,...,pub{n-1}`. This repository contains these keys for `n=16,64,112,160`. Before running FIN, run the following command to copy the BLS keys where the code can access them.
```
$ cp tkeys-{n}.tar.gz tkeys.tar.gz
```
After making these changes, retrace the procedure from Step 5 to run the protocols.

Running the benchmark for different numbers of nodes

 
After running the benchmarks for a given number of nodes, destroy the testbed with the following command.
```
$ fab destroy
```
This command destroys the testbed and terminates all created AWS instances. Retrace the setup from step 4 to reestablish a testbed with a different number of nodes. 
 
 

Reproducing results in the paper

 
We ran Delphi with the configuration $\epsilon=2,\rho_0 =2, \delta=20, \Delta = 2000$ (set on line 250 in the file `remote.py`) in the Bitcoin use case at $n=16,64,112,160$ nodes in a geo-distributed testbed of `t2.micro` nodes spread across 8 regions: N. Virginia, Ohio, N. California, Oregon, Canada, Ireland, Singapore, and Tokyo (these values are pre-configured in the `settings.json` file). We also ran Delphi with a configuration of $\epsilon=2, \rho_0=2, \delta=180, \Delta = 2000$ (changed on line 250 in the file `remote.py`) to demonstrate the performance at a high difference $\delta$.

We ran FIN with the same configuration. However, FIN's runtime is independent of the inputs and input parameters. Remember to change the protocol to run by modifying the `commands.py` file on line 38 (change `del` to `fin`) before running the benchmark.

We ran Abraham et al. with $\epsilon=2, \rho_0 = 20, \delta=20, \Delta = 20$ (change these parameters on line 250 in the file `remote.py`). Note that $\delta$ in Delphi differs from Abraham et al.'s: in Abraham et al., $\Delta$ is the real difference between honest inputs, whereas in Delphi it is the maximum difference.

In summary, perform the following steps before running a protocol on a given set of values.

1. Follow steps 1 through 4 to create a testbed of $n=16$ nodes. In step 4, set `nodes=2` in the `create` function to create a testbed of 16 nodes on AWS.
2. Change the `remote.py` file on line 250. Set the number of nodes $n$, $\epsilon$ (variable name epsilon), $\rho_0$ (variable name rho_0), $\delta$ (variable name delta), and $\Delta$ (variable name Delta).
3. Change the `commands.py` file on line 38. Pass the parameter `del`, `fin`, `hyb` into the `--vsstype` parameter for running Delphi, FIN, and Abraham et al., respectively.
4. (For running FIN) Paste the `tkeys.tar.gz` file as specified in line 153 of this README.md file.
5. Run the benchmark from Step 5. Wait for 5 minutes and download the log file using the command `fab logs`.
6. Run `fab kill` to kill any previous benchmark.
7. Retrace this summary procedure from bullet point 2 to run a different benchmark on the same testbed.
8. After running all benchmarks at this $n$ value, run `fab destroy` to terminate all instances.
9. Retrace this summary procedure from bullet point 1 to run a benchmark on a testbed with a different number of nodes. To reproduce the results from the paper, run the benchmarks at $n=16,64,112,160$ nodes. The `nodes` parameter in the `create` function must be set to `2,8,14,20`, respectively, to create testbeds of these sizes in a geo-distributed manner.
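The mapping from testbed size $n$ to the `nodes` parameter (given the 8 pre-configured regions) can be sketched as:

```python
# Sketch: `nodes` passed to `fab create` for each testbed size, per the
# mapping stated above (8 regions, nodes instances per region).

REGIONS = 8

def nodes_per_region(n):
    assert n % REGIONS == 0, "paper configurations are multiples of 8"
    return n // REGIONS

for n in (16, 64, 112, 160):
    print(n, "->", nodes_per_region(n))  # 2, 8, 14, 20 respectively
```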
 
 

Cryptocurrency Price Dataset

The dataset file `aggregate_cl.json` contains price exchange data for 63 prominent cryptocurrencies, including Bitcoin, Ethereum, Dogecoin, and Algorand, from 7th July to 21st July. Readings were collected at a frequency of one reading every two minutes from the following exchanges: `Binance`, `Coinbase`, `Crypto.com`, `Gate.io`, `Huobi`, `Mexc`, `Poloniex`, `Bybit`, `Kucoin`, `okex`, and `Kraken`. The value of each cryptocurrency is reported as the corresponding equivalent of `USDT`, a digital currency pegged to the US Dollar. The data is a JSON file with the following format.
```
{
    "btc_usdt": {
        "1688737482000": {
            "bybit": 30250.2,
            "poloniex": 30269.120000000003,
            "okex": 30269.3,
            "huobi_global": 30270.999999999996,
            "coinbase_pro": 30271.81,
            "gateio": 30272.4,
            "mexc": 30273.7,
            "binance": 30273.7,
            "kraken": 30273.7,
            "kucoin": 30273.8,
            "binance_us": 30289.989999999998
        },
        ...
    },
    "eth_usdt": {
        "1688737257000": {
            "binance_us": 1864.84,
            "bybit": 1866,
            "poloniex": 1866.8999999999999,
            "huobi_global": 1867,
            "gateio": 1867.16,
            "mexc": 1867.16,
            "binance": 1867.16,
            "okex": 1867.23,
            "kucoin": 1867.4,
            "coinbase_pro": 1867.48
        },
        ...
    },
    ...
}
```
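As a sketch of consuming this format, the snippet below loads one entry and takes the median across exchanges, a common way to aggregate oracle readings (the aggregation choice is ours, not the dataset's):

```python
import json
from statistics import median

# Load one entry of the aggregate_cl.json format shown above and take
# the median price across exchanges for that timestamp.
sample = json.loads("""{
  "btc_usdt": {
    "1688737482000": {
      "bybit": 30250.2, "poloniex": 30269.120000000003, "okex": 30269.3,
      "huobi_global": 30270.999999999996, "coinbase_pro": 30271.81,
      "gateio": 30272.4, "mexc": 30273.7, "binance": 30273.7,
      "kraken": 30273.7, "kucoin": 30273.8, "binance_us": 30289.989999999998
    }
  }
}""")

prices = sample["btc_usdt"]["1688737482000"]
print(median(prices.values()))  # 30272.4 (11 exchanges -> middle reading)
```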
We thank Manoj Patil and Saaransh Jakhar at [Supra](https://www.supra.com) for collecting the raw data from these exchanges for these cryptocurrencies.

References

[1] Duan, Sisi, Xin Wang, and Haibin Zhang. "Fin: Practical signature-free asynchronous common subset in constant time." In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, pp. 815-829. 2023.
 
[2] Abraham, Ittai, Yonatan Amit, and Danny Dolev. "Optimal resilience asynchronous approximate agreement." In Principles of Distributed Systems: 8th International Conference, OPODIS 2004, Grenoble, France, December 15-17, 2004, Revised Selected Papers 8, pp. 229-239. Springer Berlin Heidelberg, 2005.

Files (184.6 MB)

  • aggregate_cl.json — 183.4 MB (md5:99efcc3132926dffabf99a645f4cace3)
  • 1.2 MB (md5:552c8e2a3ec7d74d1f6d39ec1f9e5abd)

Additional details

Dates

Available
2024-04-18

Software

Repository URL
https://github.com/akhilsb/delphi-rs
Programming language
Rust
Development Status
Active