Performance validation, stress testing, and stability analysis for the Hamstring NDR.
This is the dedicated benchmarking suite for Hamstring (formerly heiDGAF). It is designed to validate the performance and stability of the Hamstring NDR pipeline under various load conditions.
The benchmarking suite consists of two main components:
- Controller (`src/controller`): The orchestrator of the benchmarking process.
  - Reads the `config.yaml` to determine which tests to run.
  - Manages execution parameters (data rates, durations, etc.).
  - Can trigger tests locally or on remote hosts.
  - Instructs the Test Runner to execute specific scenarios.
- Test Runner (`src/test_runner`): The execution engine.
  - Runs on the target machine (or within a Docker container).
  - Generates traffic/load according to the parameters received from the Controller.
  - Collects metrics and handles the actual interaction with the system under test.
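The Controller/Test Runner split described above can be sketched minimally as follows. All names here (`ScenarioParams`, `TestRunner.run_scenario`, `Controller.execute`) are illustrative stand-ins, not the actual Hamstring benchmarking API:

```python
# Minimal sketch of the Controller -> Test Runner hand-off.
# Every class and method name here is hypothetical.
from dataclasses import dataclass


@dataclass
class ScenarioParams:
    name: str             # e.g. "ramp_up"
    rate_logs_per_s: int  # target injection rate
    duration_s: int       # how long to sustain it


class TestRunner:
    """Execution engine: generates load and collects metrics."""

    def run_scenario(self, params: ScenarioParams) -> dict:
        # A real runner would inject traffic into the system under test;
        # here we simply echo the parameters back as a "result".
        return {
            "scenario": params.name,
            "sent": params.rate_logs_per_s * params.duration_s,
        }


class Controller:
    """Orchestrator: reads the plan and drives the runner sequentially."""

    def __init__(self, runner: TestRunner):
        self.runner = runner

    def execute(self, plan: list[ScenarioParams]) -> list[dict]:
        return [self.runner.run_scenario(p) for p in plan]


results = Controller(TestRunner()).execute(
    [ScenarioParams("ramp_up", 100, 10), ScenarioParams("burst", 1000, 2)]
)
```

In the real suite the runner lives in a separate container or on a remote host, so the hand-off goes over the network rather than a direct method call.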
The suite includes a comprehensive Reporting Feature that aggregates test results into an overview PDF.
- PDF Overview Generator: Located in `src/test_runner/plotting/pdf_overview_generator.py`.
- Capabilities:
  - Combines metadata (test dates, configuration).
  - Visualizes latency comparisons and fill levels.
  - Plots throughput (entering vs. processed logs) over time.
  - Generates a single, easy-to-read PDF report for each test run, saved in the `testing_reports` directory.
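A multi-page PDF overview of this kind can be sketched with matplotlib's `PdfPages`. This is a hedged illustration in the spirit of `pdf_overview_generator.py`, not its actual implementation; the function name, page layout, and data shapes are assumptions:

```python
# Hypothetical sketch: one metadata page plus one throughput page.
import matplotlib

matplotlib.use("Agg")  # headless rendering, no display needed
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages


def write_overview(path, timestamps, entering, processed, metadata):
    """Write a small two-page PDF report (illustrative only)."""
    with PdfPages(path) as pdf:
        # Page 1: run metadata as plain text.
        fig = plt.figure()
        fig.text(0.1, 0.9, "\n".join(f"{k}: {v}" for k, v in metadata.items()),
                 va="top")
        pdf.savefig(fig)
        plt.close(fig)

        # Page 2: throughput over time (entering vs. processed logs).
        fig, ax = plt.subplots()
        ax.plot(timestamps, entering, label="entering")
        ax.plot(timestamps, processed, label="processed")
        ax.set_xlabel("time [s]")
        ax.set_ylabel("logs/s")
        ax.legend()
        pdf.savefig(fig)
        plt.close(fig)


write_overview(
    "overview.pdf",
    [0, 1, 2],
    [100, 120, 110],
    [95, 118, 110],
    {"test": "ramp_up", "date": "2024-01-01"},
)
```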
- Docker: Ensure Docker and Docker Compose are installed.
- Python: Python 3.10+ is recommended.
- Hamstring Containers:

  > [!IMPORTANT]
  > The Hamstring containers must be running before starting any benchmarks. The benchmarking suite connects to the existing `docker_heidgaf` network to inject traffic.

  ```sh
  # In the main Hamstring project directory:
  HOST_IP=127.0.0.1 docker compose -f docker/docker-compose.yml up -d
  ```
- Clone the repository:

  ```sh
  git clone https://github.com/Hamstring-NDR/hamstring-benchmarking.git
  cd hamstring-benchmarking
  ```

- Create and activate a virtual environment:

  ```sh
  python -m venv .venv
  source .venv/bin/activate
  ```

- Install the dependencies (editable mode is recommended):

  ```sh
  pip install -e .
  ```

  Or install specific requirements via `pip install -r requirements.txt` (if available), or `sh install_requirements.sh`.
Configure your test runs in `config.yaml`. You can define:

- Remote Execution: Host details and SSH keys.
- Tests:
  - Ramp Up: Gradually increases load.
  - Burst: Simulates spikes in traffic.
  - Maximum Throughput: Tests the absolute limit of the system.
  - Long Term: Stability testing over hours or days.
- Test Runs: List of tests to execute sequentially (e.g., `["ramp_up", "burst"]`).
To start the configured benchmarks, run the controller:

```sh
python src/controller/benchmark_controller.py
```

The controller will:

- Parse the `config.yaml`.
- Connect to the `benchmark_test_runner` container (or remote host).
- Execute the defined tests sequentially.
- Generate PDF reports upon completion.
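The end-to-end session above (parse the plan, run each test in order, emit a report per test) can be sketched like this. Every name here (`load_plan`, `run_test`, `write_report`) is hypothetical, and JSON stands in for the YAML parsing to keep the sketch stdlib-only:

```python
# Illustrative end-to-end flow of a benchmark session; all helper
# names are hypothetical, not Hamstring's API.
import json
import pathlib


def load_plan(path: pathlib.Path) -> list[str]:
    # Stand-in for parsing config.yaml (JSON used to stay stdlib-only).
    return json.loads(path.read_text())["test_runs"]


def run_test(name: str) -> dict:
    # Placeholder for real test execution against the pipeline.
    return {"test": name, "status": "ok"}


def write_report(result: dict, out_dir: pathlib.Path) -> pathlib.Path:
    out_dir.mkdir(exist_ok=True)
    report = out_dir / f"{result['test']}.json"
    report.write_text(json.dumps(result))
    return report


cfg = pathlib.Path("plan.json")
cfg.write_text('{"test_runs": ["ramp_up", "burst"]}')

reports = [
    write_report(run_test(name), pathlib.Path("testing_reports"))
    for name in load_plan(cfg)
]
```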
Ranked results and PDF reports will be generated in:

- `benchmark_results/`: Raw data and individual graphs.
- `testing_reports/`: Consolidated PDF overview reports.
Distributed under the EUPL License. See LICENSE for more information.