This repository contains scripts to run benchmarks across multiple clients. Follow the instructions below to run the benchmarks locally.
Make sure you have the following installed on your system:
- Python 3.10
- Docker
- Docker Compose
- .NET 8.0.x
- make (for running make commands)
- Clone the repository:

  ```bash
  git clone https://github.com/nethermindeth/gas-benchmarks.git
  cd gas-benchmarks
  ```

- Install Python dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Prepare Kute dependencies (specific to Nethermind):

  ```bash
  make prepare_tools
  ```

- Create a results directory:

  ```bash
  mkdir -p results
  ```
To run the whole pipeline, you can use the `run.sh` script:

```bash
bash run.sh -t "testPath" -w "warmupFilePath" -c "client1,client2" -r runNumber -i "image1,image2"
```

Example run:

```bash
bash run.sh -t "tests/" -w "warmup/warmup-1000bl-16wi-24tx.txt" -c "nethermind,geth,reth" -r 8
```
Flags:
- `-t`: path to the directory where the tests are located.
- `-w`: path to the warmup file.
- `-c`: clients to run the benchmarks against, separated by commas.
- `-r`: number of iterations to run. It's a numeric value.
- `-i`: Docker images to use for the benchmarks, separated by commas and matched to the clients in order. Use `default` if you want to ignore the values.
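As a sketch of how the `-c` and `-i` lists pair up, the snippet below pairs each client with its image in order (this helper is purely illustrative and is not part of the repository's scripts):

```python
# Illustrative only: shows how comma-separated -c and -i values match up.
# "default" means the client's default image is kept.
def pair_clients_with_images(clients_arg: str, images_arg: str) -> dict:
    clients = clients_arg.split(",")
    images = images_arg.split(",")
    if len(images) != len(clients):
        raise ValueError("-i must list exactly one image per client in -c")
    return {c: (None if i == "default" else i) for c, i in zip(clients, images)}

print(pair_clients_with_images("nethermind,geth,reth",
                               "default,ethereum/client-go:latest,default"))
```

Here `ethereum/client-go:latest` is just an example image tag; `None` stands for "use the client's default image".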
Now you're ready to run the benchmarks locally!
After running benchmarks and generating report files, you can populate a PostgreSQL database with the results for further analysis. This process involves two main scripts: `generate_postgres_schema.py` to set up the database table, and `fill_postgres_db.py` to load the data.
The `generate_postgres_schema.py` script creates the necessary table in your PostgreSQL database to store the benchmark data.
Usage:

```bash
python generate_postgres_schema.py \
    --db-host <your_db_host> \
    --db-port <your_db_port> \
    --db-user <your_db_user> \
    --db-name <your_db_name> \
    --table-name <target_table_name> \
    --log-level <DEBUG|INFO|WARNING|ERROR|CRITICAL>
```
- You will be prompted to enter the password for the specified database user.
- `--table-name`: Defaults to `benchmark_data`.
- `--log-level`: Defaults to `INFO`.
Example:

```bash
python generate_postgres_schema.py \
    --db-host localhost \
    --db-port 5432 \
    --db-user myuser \
    --db-name benchmarks \
    --table-name gas_benchmark_results
```
This will create a table named `gas_benchmark_results` (if it doesn't already exist) in the `benchmarks` database.
Once the schema is set up, use `fill_postgres_db.py` to parse the benchmark report files (generated by `run.sh` or other means) and insert the data into the PostgreSQL table.
Usage:

```bash
python fill_postgres_db.py \
    --reports-dir <path_to_reports_directory> \
    --db-host <your_db_host> \
    --db-port <your_db_port> \
    --db-user <your_db_user> \
    --db-password <your_db_password> \
    --db-name <your_db_name> \
    --table-name <target_table_name> \
    --log-level <DEBUG|INFO|WARNING|ERROR|CRITICAL>
```
- `--reports-dir`: Path to the directory containing the benchmark output files (e.g., `output_*.csv`, `raw_results_*.csv`, and `index.html` or `computer_specs.txt`).
- `--db-password`: The password for the database user.
- `--table-name`: Should match the table name used with `generate_postgres_schema.py`. Defaults to `benchmark_data`.
- `--log-level`: Defaults to `INFO`.
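The layout that `--reports-dir` expects can be sketched as follows, using the file-name patterns listed above; the scanning logic here is illustrative and is not the script's actual implementation:

```python
from pathlib import Path
import tempfile

# Illustrative only: build a mock reports directory and list the files
# fill_postgres_db.py would look for, per the patterns documented above.
reports_dir = Path(tempfile.mkdtemp())
for name in ["output_nethermind.csv", "raw_results_nethermind.csv", "computer_specs.txt"]:
    (reports_dir / name).touch()

outputs = sorted(p.name for p in reports_dir.glob("output_*.csv"))
raw = sorted(p.name for p in reports_dir.glob("raw_results_*.csv"))
print(outputs, raw)
```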
Example:

```bash
python fill_postgres_db.py \
    --reports-dir ./results/my_benchmark_run_01 \
    --db-host localhost \
    --db-port 5432 \
    --db-user myuser \
    --db-password "securepassword123" \
    --db-name benchmarks \
    --table-name gas_benchmark_results
```
This script will scan the specified reports directory, parse the client benchmark data and computer specifications, and insert individual run records into the `gas_benchmark_results` table.
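Once the data is loaded, you can query the table for analysis. The snippet below sketches the kind of per-client aggregation you might run. It uses Python's built-in `sqlite3` instead of PostgreSQL purely so the example is self-contained, and the column names (`client`, `result`) are illustrative placeholders, not the actual schema created by `generate_postgres_schema.py`:

```python
import sqlite3

# Placeholder schema -- the real columns come from generate_postgres_schema.py
# and will differ; this only demonstrates the shape of an analysis query.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE gas_benchmark_results (client TEXT, result REAL)")
con.executemany(
    "INSERT INTO gas_benchmark_results VALUES (?, ?)",
    [("nethermind", 120.0), ("nethermind", 100.0), ("geth", 90.0)],
)
for client, avg in con.execute(
    "SELECT client, AVG(result) FROM gas_benchmark_results "
    "GROUP BY client ORDER BY client"
):
    print(client, avg)
```

The same `SELECT ... GROUP BY client` shape applies unchanged against the PostgreSQL table once it is populated.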