- Configure Your Benchmarks: Create a YAML configuration file (e.g., configs/example.yaml):

algorithms:
  - name: "pagerank"
    func: "networkx.pagerank"
    params:
      alpha: 0.85
    groups: ["centrality"]

datasets:
  - name: "karate"
    source: "networkrepository"
- Start an instance of an Orion server in a separate terminal window:
export PREFECT_API_URL="http://127.0.0.1:4200/api"
export PREFECT_API_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect_user:pass@localhost:5432/prefect_db"
prefect server start
- Run Benchmarks Based on the Configuration:
nxbench --config 'nxbench/configs/example.yaml' benchmark run
- Export Results:
nxbench --config 'nxbench/configs/example.yaml' benchmark export 'results/9e3e8baa4a3443c392dc8fee00373b11_20241220002902.json' --output-format csv --output-file 'results/results.csv' # convert the results from the run with hash 9e3e8baa4a3443c392dc8fee00373b11_20241220002902 into CSV format
- View Results:
nxbench viz serve # launch the interactive results visualization dashboard
The CLI provides comprehensive management of benchmarks, datasets, and visualization.
- Download a Specific Dataset:
nxbench data download karate
- List Available Datasets by Category:
nxbench data list --category social
- Run Benchmarks with Verbose Output:
nxbench --config 'nxbench/configs/example.yaml' -vvv benchmark run
- Export Results to a PostgreSQL Database:
nxbench --config 'nxbench/configs/example.yaml' benchmark export 'results/9e3e8baa4a3443c392dc8fee00373b11_20241220002902.json' --output-format sql
- Launch the Dashboard:
nxbench viz serve
Use the provided convenience script (docker/nxbench-run.sh) instead of invoking docker-compose directly. This script automatically resolves your configuration file, switches between CPU and GPU modes, detects the desired nxbench subcommand, and mounts the host's results directory when needed.
# Download a Dataset (e.g. Karate):
docker/nxbench-run.sh --config 'nxbench/configs/example.yaml' data download karate
# List Available Datasets by Category:
docker/nxbench-run.sh --config 'nxbench/configs/example.yaml' data list --category social
# Run benchmarks
docker/nxbench-run.sh --config 'nxbench/configs/example.yaml' benchmark run
# Run benchmarks (with GPU support)
docker/nxbench-run.sh --gpu --config 'nxbench/configs/example.yaml' benchmark run
# Export Benchmark Results to CSV:
docker/nxbench-run.sh --config 'nxbench/configs/example.yaml' benchmark export 'nxbench_results/9e3e8baa4a3443c392dc8fee00373b11_20241220002902.json' --output-format csv --output-file 'nxbench_results/results.csv'
# Launch the Visualization Dashboard:
docker/nxbench-run.sh --config 'nxbench/configs/example.yaml' viz serve
# Note: The dashboard service requires that benchmark results have been generated and exported (i.e., a valid results/results.csv file exists).
Note: The following guide assumes you have a recent version of NxBench with the new BackendManager and associated tools (e.g., core.py and registry.py) already in place. It also assumes that your backend follows the guidelines for developing custom NetworkX backends.
- Install your backend via pip (or conda, etc.). For example, if your backend library is my_cool_backend, ensure that it is installed:
pip install my_cool_backend
- Check the import: NxBench’s detection system simply looks for importlib.util.find_spec("my_cool_backend"), so if your library cannot be found by Python, NxBench will conclude it is unavailable (see the quick check below).
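As a quick sanity check, you can run the same lookup by hand. This is a minimal sketch based only on the importlib.util.find_spec call described above (it is not NxBench code):

import importlib.util

# NxBench considers a backend "available" only if this lookup succeeds.
spec = importlib.util.find_spec("my_cool_backend")
if spec is None:
    print("my_cool_backend is not importable; NxBench will treat it as unavailable")
else:
    print(f"my_cool_backend found at {spec.origin}")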
In NxBench, a “backend” is simply a library or extension that converts a networkx.Graph into an alternate representation. You must define one or more conversion functions:
import networkx

def convert_my_cool_backend(nx_graph: networkx.Graph, num_threads: int):
    import my_cool_backend

    # Possibly configure multi-threading if relevant:
    # my_cool_backend.configure_threads(num_threads)

    # Convert the NetworkX graph to your library’s internal representation:
    return my_cool_backend.from_networkx(nx_graph)
If your backend has special cleanup needs (e.g., free GPU memory, close connections, revert global state, etc.), define a teardown function:
def teardown_my_cool_backend():
    import my_cool_backend
    # e.g. my_cool_backend.shutdown()
    pass
If your backend doesn’t need cleanup, skip this or simply define an empty function.
Locate NxBench’s registry.py (or a similar file where other backends are registered) and add your calls to backend_manager.register_backend(...):
from nxbench.backends.registry import backend_manager
import networkx as nx  # only if needed

def convert_my_cool_backend(nx_graph: nx.Graph, num_threads: int):
    import my_cool_backend
    # Possibly configure my_cool_backend with num_threads
    return my_cool_backend.from_networkx(nx_graph)

def teardown_my_cool_backend():
    # e.g. release resources
    pass

backend_manager.register_backend(
    name="my_cool_backend",         # The name NxBench will use to refer to it
    import_name="my_cool_backend",  # The importable Python module name
    conversion_func=convert_my_cool_backend,
    teardown_func=teardown_my_cool_backend,  # optional
)
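Before relying on a full benchmark run, you can exercise the two functions by hand on a small graph. This is a minimal sketch; my_cool_backend is the same hypothetical library used throughout this guide:

import networkx as nx

# Build a small test graph and call the conversion function directly.
G = nx.karate_club_graph()
backend_graph = convert_my_cool_backend(G, num_threads=1)
print(type(backend_graph))

# Release any resources or global state the backend may have acquired.
teardown_my_cool_backend()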
Important: name is the “human-readable” alias in NxBench, while import_name is the actual module import path. They can be the same (most common) or different, e.g., if your library’s PyPI name differs from its Python import path.
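For instance, a hypothetical backend distributed on PyPI as my-cool-backend but imported as my_cool_backend_core could be registered like this (the names here are purely illustrative):

backend_manager.register_backend(
    name="my_cool_backend",              # alias used in NxBench configs and logs
    import_name="my_cool_backend_core",  # module that Python actually imports
    conversion_func=convert_my_cool_backend,
)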
- Check NxBench logs: When NxBench runs, it will detect whether "my_cool_backend" is installed by calling importlib.util.find_spec("my_cool_backend").
- Run a quick benchmark:
nxbench --config my_config.yaml benchmark run
If you see logs like “Chosen backends: [‘my_cool_backend’ …]” then NxBench recognized your backend. If it fails with “No valid backends found,” ensure your library is installed and spelled correctly.
If you want NxBench to run your backend only when it matches a pinned version (e.g., my_cool_backend==2.1.0), add something like this to your NxBench config YAML:
environ:
  backend:
    my_cool_backend:
      - "my_cool_backend==2.1.0"
NxBench will:
- Detect the installed version automatically (via my_cool_backend.__version__ or PyPI metadata)
- Skip running if it doesn’t match 2.1.0 (see the sketch below)
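A rough sketch of that kind of version check, assuming only the __version__ / package-metadata lookup described above (illustrative, not NxBench’s actual implementation):

from importlib.metadata import PackageNotFoundError, version
from typing import Optional

def installed_version(module_name: str) -> Optional[str]:
    """Return the installed version of a backend, or None if it cannot be determined."""
    try:
        module = __import__(module_name)
    except ImportError:
        return None
    try:
        return getattr(module, "__version__", None) or version(module_name)
    except PackageNotFoundError:
        return None

# Skip benchmarking unless the pinned version is satisfied.
if installed_version("my_cool_backend") != "2.1.0":
    print("my_cool_backend==2.1.0 not satisfied; skipping this backend")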
You’ve successfully added a new backend to NxBench! Now, NxBench can detect it, convert graphs for it, optionally tear it down, and track its version during benchmarking.