# Benchmarks

The `benchmarks/` project captures runtime and throughput measurements for the
hottest Groth16 paths. This page summarises how to regenerate the data and shows
the latest artefacts.

## Running the Suite

1. Activate the benchmarks project:

   ```bash
   julia --project=benchmarks -e 'using Pkg; Pkg.instantiate()'
   ```

2. Run the benchmark harness (JSON results land under `benchmarks/`):

   ```bash
   julia --project=benchmarks benchmarks/run.jl
   ```

3. Regenerate plots from an existing JSON snapshot:

   ```bash
   julia --project=benchmarks benchmarks/plot.jl benchmarks/results_2025-09-29_121914.json
   ```
The harness writes a timestamped JSON file (raw statistics) and PNG charts covering
MSM, pairing, normalisation, and Groth16 end-to-end timings.

## Latest Snapshot (2025-09-29)

```@example
using JSON
json_path = joinpath(@__DIR__, "assets", "results_2025-09-29_121914.json")
results = JSON.parsefile(json_path)
keys(results)
```

Each entry contains per-benchmark medians, deviations, and configuration
metadata (threading, window sizes, curve parameters). Refer to
`benchmarks/results_2025-09-23_204214_env.md` for an example of the
environment capture written alongside a run.
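
The same JSON snapshot can be inspected from the REPL outside the
documentation build. A minimal sketch, assuming each entry is a dictionary of
statistics (the code only lists whatever keys are actually present rather than
relying on specific key names):

```julia
using JSON

# Path refers to the snapshot shown above; adjust to the file you generated.
results = JSON.parsefile("benchmarks/results_2025-09-29_121914.json")

for (name, entry) in results
    # Entries are expected to be dictionaries of statistics and metadata;
    # fall back to printing the raw value if the shape differs.
    println(name, " => ", entry isa AbstractDict ? collect(keys(entry)) : entry)
end
```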

## Plots

![Pairing throughput](assets/pairing.png)

![Groth16 end-to-end](assets/groth16.png)

![MSM G1 timings](assets/msg_g1.png)