# libp2p performance benchmarking

This project includes the following components:

- `terraform/`: Terraform scripts to provision infrastructure
- `impl/`: implementations of the [libp2p perf protocol](https://github.com/libp2p/specs/blob/master/perf/perf.md) running on top of e.g. go-libp2p, rust-libp2p, or Go's standard-library HTTPS stack
- `runner/`: a set of scripts that build and run the above implementations on the above infrastructure, reporting the results in `benchmark-results.json`

Benchmark results can be visualized with https://observablehq.com/@libp2p-workspace/performance-dashboard.

## Running via GitHub Action

1. Create a pull request with your changes on https://github.com/libp2p/test-plans/.
2. Trigger the GitHub Action for your branch on https://github.com/libp2p/test-plans/actions/workflows/perf.yml (see the _Run workflow_ button).
3. Wait for the action run to finish and push a commit to your branch.
4. Visualize the results on https://observablehq.com/@libp2p-workspace/performance-dashboard.

## Running manually

### Prerequisites

- Terraform 1.5.4 or later
- Node.js 18 or later
- [an AWS IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html)

### Provision infrastructure

1. Save your public SSH key as the file `./terraform/modules/short_lived/files/perf.pub`, or generate a new key pair with `make ssh-keygen` and add it to your SSH agent with `make ssh-add`.
2. `cd terraform/configs/local`
3. `terraform init`
4. `terraform apply`
5. `CLIENT_IP=$(terraform output -raw client_ip)`
6. `SERVER_IP=$(terraform output -raw server_ip)`

**Notes**

- While running Terraform you may encounter the following error:

  ```bash
  Error: collecting instance settings: reading EC2 Launch Template versions: couldn't find resource
  │
  │   with module.short_lived_server[0].aws_instance.perf,
  │   on ../../modules/short_lived/main.tf line 15, in resource "aws_instance" "perf":
  │   15: resource "aws_instance" "perf" {
  ```

  This means that you haven't deployed the long-lived infrastructure on your AWS account. To deploy it along with each short-lived deployment, set the Terraform variable [`long_lived_enabled`](./terraform/configs/local/terraform.tf#L42) (a *TF_VAR* environment variable) to `true`. Terraform will then spin up the long-lived resources that the short-lived resources require.

- It's best to destroy the infrastructure after you're done with your testing; you can do that by running `terraform destroy`.

### Build and run libp2p implementations

Once you have provisioned your infrastructure, you can build and run the libp2p implementations on the AWS instances.

1. `cd runner`
2. `npm ci`
3. `npm run start -- --client-public-ip $CLIENT_IP --server-public-ip $SERVER_IP`
   - Note: By default, perf runs 10 iterations; a different number can be set with the `--iterations <value>` option.

### Deprovision infrastructure

1. `cd terraform/configs/local`
2. `terraform destroy`

## Adding a new implementation or a new version

1. Add the implementation to a new subdirectory in [`impl/`](./impl/):
   - For a new implementation, create a folder `impl/<your-implementation-name>/`, e.g. `go-libp2p`.
   - For a new version of an existing implementation, create a folder `impl/<your-implementation-name>/<your-implementation-version>/`.
   - In that folder, include a `Makefile` that builds an executable and stores it next to the `Makefile` under the name `perf`.
   - Requirements for the executable:
     - Running as a libp2p-perf server:
       - The perf server must not exit on its own; it will be terminated by the test runner.
       - The executable must accept the command-line flag `--run-server`, which indicates that it is running as a server.
     - Running as a libp2p-perf client:
       - Given that perf is a client-driven set of benchmarks, performance is measured by the client.
       - Input via command line:
         - `--server-address`
         - `--transport` (see [`runner/versions.ts`](./runner/src/versions.ts#L7-L43) for possible variants)
         - `--upload-bytes`: number of bytes to upload per stream.
         - `--download-bytes`: number of bytes to download per stream.
       - Output:
         - Logging MUST go to `stderr`.
         - Measurement output is printed to `stdout` as JSON.
         - The output schema is:

           ```typescript
           interface Data {
             type: "intermediary" | "final";
             timeSeconds: number;
             uploadBytes: number;
             downloadBytes: number;
           }
           ```

         - Every second, the client must print the current progress to `stdout`. See the example below; note the `type: "intermediary"`.

           ```json
           {
             "type": "intermediary",
             "timeSeconds": 1.004957645,
             "uploadBytes": 73039872,
             "downloadBytes": 0
           }
           ```

         - Before terminating, the client must print a final summary. See the example below; note the `type: "final"`. Also note that the measurement includes the time to (1) establish the connection, (2) upload the bytes, and (3) download the bytes.

           ```json
           {
             "type": "final",
             "timeSeconds": 60.127230659,
             "uploadBytes": 4382392320,
             "downloadBytes": 0
           }
           ```

2. For a new implementation, include your implementation in the [`all` target of `impl/Makefile`](./impl/Makefile#L7).
3. For a new version, reference the version in [`runner/src/versions.ts`](./runner/src/versions.ts#L7-L43).
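The reporting contract above can be sketched as follows. This is a hypothetical TypeScript skeleton, not one of the actual implementations: the libp2p transfer itself is stubbed out, and the chunk size and byte counts are made up for illustration. It only shows the required `stdout`/`stderr` split and the `intermediary`/`final` message shape.

```typescript
// Hypothetical skeleton of a conforming perf client (illustration only).
// A real client would drive an actual libp2p connection and also parse
// --server-address, --transport, --upload-bytes, and --download-bytes.

interface Data {
  type: "intermediary" | "final";
  timeSeconds: number;
  uploadBytes: number;
  downloadBytes: number;
}

function report(data: Data): void {
  // Measurement output goes to stdout as JSON, one object per line.
  console.log(JSON.stringify(data));
}

async function run(uploadBytes: number, downloadBytes: number): Promise<void> {
  const start = process.hrtime.bigint();
  const elapsed = () => Number(process.hrtime.bigint() - start) / 1e9;

  // Stub: pretend the upload progresses in fixed-size chunks. A real client
  // would report the actual transfer progress once per second.
  let uploaded = 0;
  while (uploaded < uploadBytes) {
    uploaded = Math.min(uploaded + 73_000_000, uploadBytes);
    report({
      type: "intermediary",
      timeSeconds: elapsed(),
      uploadBytes: uploaded,
      downloadBytes: 0,
    });
  }

  // The final summary covers connection establishment, upload, and download.
  report({
    type: "final",
    timeSeconds: elapsed(),
    uploadBytes: uploadBytes,
    downloadBytes: downloadBytes,
  });
}

run(146_000_000, 0).catch((err) => {
  console.error(err); // logging and errors go to stderr, never stdout
  process.exit(1);
});
```

The key design point for the runner is that `stdout` carries nothing but the JSON measurements, so the runner can parse it line by line while diagnostics stay on `stderr`.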