
Commit d725fb7

Enhance benchmarking: update baselines, add new benchmarks for JS rendering, and improve documentation
1 parent daec3fd commit d725fb7

11 files changed

Lines changed: 1047 additions & 118 deletions

File tree

.github/workflows/benchmarks.yml

Lines changed: 78 additions & 0 deletions
@@ -0,0 +1,78 @@
+name: Benchmarks
+
+on:
+  pull_request:
+    branches: [main]
+  push:
+    branches: [main]
+  workflow_dispatch: {}
+
+# Cancel in-flight runs for the same PR / branch.
+concurrency:
+  group: benchmarks-${{ github.ref }}
+  cancel-in-progress: true
+
+jobs:
+  bench-py:
+    name: Python serialisation benchmarks
+    runs-on: ubuntu-latest
+
+    steps:
+      - name: Check out HEAD
+        uses: actions/checkout@v4
+        with:
+          path: head
+
+      # For PRs compare against the target branch; for pushes compare against
+      # the previous commit. Skipped on the first push to a new branch
+      # (all-zero before SHA) — benchmark tests will skip automatically.
+      - name: Check out BASE
+        if: >
+          github.event_name == 'pull_request' ||
+          (github.event_name == 'push' &&
+          github.event.before != '0000000000000000000000000000000000000000')
+        uses: actions/checkout@v4
+        with:
+          ref: >-
+            ${{ github.event_name == 'pull_request'
+            && github.base_ref
+            || github.event.before }}
+          path: base
+        continue-on-error: true
+
+      - uses: astral-sh/setup-uv@v5
+
+      - name: Install base dependencies
+        if: hashFiles('base/pyproject.toml') != ''
+        run: cd base && uv sync
+
+      - name: Install head dependencies
+        run: cd head && uv sync
+
+      # Both steps run on the same runner so only the ratio matters —
+      # absolute ms differences from different hardware cancel out.
+      - name: Record base branch timings
+        if: hashFiles('base/pyproject.toml') != ''
+        run: |
+          cd base
+          uv run pytest tests/test_benchmarks_py.py \
+            --update-benchmarks \
+            --baselines-path /tmp/ci_baselines.json \
+            -v
+        continue-on-error: true
+
+      - name: Run benchmarks on HEAD
+        run: |
+          cd head
+          uv run pytest tests/test_benchmarks_py.py \
+            --baselines-path /tmp/ci_baselines.json \
+            -v
+
+      - name: Upload timings
+        if: always()
+        uses: actions/upload-artifact@v4
+        with:
+          name: bench-py-${{ github.sha }}
+          path: /tmp/ci_baselines.json
+          if-no-files-found: ignore
+
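
The base/HEAD comparison in this workflow hinges on the pytest suite writing timings to a shared baselines JSON file on the base checkout (`--update-benchmarks`) and reading them back on HEAD (`--baselines-path`). The repository's actual fixture code is not part of this diff, so the sketch below is only a guess at the mechanism; the function names `record_timing` / `check_timing` and the 1.5x tolerance are hypothetical.

```python
import json
from pathlib import Path


def record_timing(baselines: Path, name: str, seconds: float) -> None:
    """Base-branch pass (--update-benchmarks): store the timing for `name`."""
    data = json.loads(baselines.read_text()) if baselines.exists() else {}
    data[name] = seconds
    baselines.write_text(json.dumps(data))


def check_timing(baselines: Path, name: str, seconds: float,
                 tolerance: float = 1.5) -> None:
    """HEAD pass: fail if `name` got more than `tolerance`x slower than base."""
    if not baselines.exists():
        return  # first push to a branch: no baseline was recorded, skip
    base = json.loads(baselines.read_text()).get(name)
    if base is None:
        return  # benchmark is new in this change; nothing to compare against
    ratio = seconds / base
    assert ratio < tolerance, (
        f"{name}: {seconds:.4f}s vs base {base:.4f}s ({ratio:.2f}x slower)"
    )
```

Because both passes execute on the same runner, only the ratio is meaningful, which is why the workflow's comment dismisses absolute millisecond differences between runner generations.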

Examples/Benchmarks/README.rst

Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
+Benchmarks
+----------
+
+Timing comparisons for the Python-side data-push pipeline in anyplotlib,
+matplotlib, Plotly, and Bokeh. All measurements capture only the
+**Python serialisation cost** — the bottleneck in a live Jupyter session
+where new data must be encoded and dispatched to the browser on every frame.
+
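
The README does not show the harness itself, so here is a minimal, hypothetical illustration of what "Python serialisation cost" means in isolation: the wall-clock time to JSON-encode one frame of data, with no rendering involved. The function name and frame shape are invented for the example.

```python
import json
import time


def serialisation_cost(points: int, repeats: int = 20) -> float:
    """Median seconds to JSON-encode one frame of `points` xy samples."""
    frame = {"x": list(range(points)), "y": [i * 0.5 for i in range(points)]}
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        json.dumps(frame)  # the per-frame cost a live session pays repeatedly
        samples.append(time.perf_counter() - t0)
    samples.sort()
    return samples[len(samples) // 2]  # median is robust to scheduler noise
```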
