Phase 34: Scaling benchmarks (1k/10k/100k rows; 5/20/50 cols)
(commit 8eb19f7534 by Ryan Malloy, tag 2026.05.05.8, 2026-05-05)
Adds tests/benchmarks/test_scaling_perf.py with parametrized
benchmarks across row-count, column-width, and type-mix axes.
Caught the NFETCH-loop bug (Phase 35) immediately on first run.

Headline numbers:

Bulk insert (executemany in transaction):
  1k rows:   23 ms   (23 µs/row)
  10k rows:  161 ms  (16 µs/row)
  100k rows: 1487 ms (15 µs/row, ~67k rows/sec sustained)

SELECT (linear scaling, near-constant per-row):
  1k rows:   2.7 ms  (2.7 µs/row)
  10k rows:  25.8 ms (2.6 µs/row)
  100k rows: 271 ms  (2.7 µs/row)

Wide-row SELECT (1k rows × N cols):
  5 cols:  2.4 ms
  20 cols: 5.1 ms
  50 cols: 10.1 ms

Type-mix SELECT (INT + VARCHAR + DECIMAL + DATE + FLOAT + SMALLINT):
  1000 rows: 4.7 ms (4.7 µs/row, ~1.7× baseline)

Per-row codec cost is essentially constant from 1k to 100k rows
(2.7 µs/row), showing that the parse_tuple_payload optimizations
(Phases 23-25) hold at 100× scale with no GC-pause amplification or
memory-pressure degradation.

Per-row insert cost actually DECREASES with scale (23 µs at 1k to
15 µs at 100k): Phase 33's pipelining amortizes prepare/release
overhead better at larger N.

10 new parametrized benchmarks. Total: 77 unit + 249 integration +
43 benchmark = 369 tests.
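
The axes above are driven by pytest parametrization. A hedged sketch of
the shape (the table name and SQL are illustrative placeholders, not the
actual contents of test_scaling_perf.py):

import pytest

@pytest.mark.benchmark
@pytest.mark.parametrize("nrows", [1_000, 10_000, 100_000])
def test_select_scaling(benchmark, bench_conn, nrows):
    # bench_conn is the long-lived connection fixture from conftest.py;
    # the scaling_* table name is an illustrative placeholder.
    cur = bench_conn.cursor()

    def fetch_all():
        cur.execute(f"SELECT * FROM scaling_{nrows}")
        return cur.fetchall()

    rows = benchmark(fetch_all)  # pytest-benchmark repeats and times this call
    assert len(rows) == nrows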

Benchmarks (Phase 21)

Performance baselines for informix-db. Two layers:

  1. Codec micro-benchmarks (test_codec_perf.py) — pure CPU, no server. These set the ceiling for what end-to-end can achieve. Run with make bench-codec. Suitable for CI's pre-merge job (a runnable sketch follows this list).
  2. End-to-end benchmarks — exercise the full PREPARE → BIND → EXECUTE → FETCH → CLOSE → RELEASE round-trip. Need an Informix container (make ifx-up). Run with make bench.
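
For a feel of layer 1's shape, here is a minimal pytest-benchmark sketch;
struct.unpack stands in for the driver's actual decode entry points, whose
signatures are not spelled out here:

import struct

import pytest

@pytest.mark.benchmark
def test_decode_int_cell(benchmark):
    # Stand-in for the decode(int) path: unpack one 4-byte integer the way
    # a wire cell might arrive. Pure CPU, no server, so it fits pre-merge CI.
    payload = struct.pack(">i", 42)
    result = benchmark(struct.unpack, ">i", payload)
    assert result == (42,)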

Headline numbers (baseline 2026-05-04, x86_64 Linux, dev container on loopback)

Operation                                  Mean      Ops/sec
decode(int) (per cell)                     181 ns    5.5M
parse_tuple_payload(5 cols) (per row)      2.87 µs   350K
encode_param(int) (per param)              103 ns    9.7M
SELECT 1 round-trip                        177 µs    5,650
Pool acquire + tiny query + release        295 µs    3,400
Cold connect + close (login handshake)     11.2 ms   89
1000-row SELECT *                          1.56 ms   640
INSERT (single, prepared)                  1.88 ms   530
executemany(100) autocommit=True           181 ms    ~550 rows/sec
executemany(1000) autocommit=True          1.72 s    ~580 rows/sec
executemany(1000) in single transaction    32 ms     ~31,000 rows/sec

What these tell you

  • Pool gives 72× speedup over cold connect. If your app opens a connection per request, fix that first.
  • Wrap bulk INSERTs in a transaction. That's a 53× speedup over running the same batch with autocommit=True. With autocommit on, each row forces the server to flush its transaction log; in transaction mode the flush happens once at COMMIT. Per-row cost drops from 1.72 ms (storage-bound) to 32 µs (pure protocol). PEP 249 specifies autocommit off by default for exactly this reason, and the driver follows it (see the sketch after this list).
  • Codec is not the bottleneck. Per-cell decode (181 ns) is roughly 1000× faster than a wire round-trip (177 µs for SELECT 1), and even a full 5-column row (2.9 µs) is ~60× faster. Network and server-side cost dominate.
  • UTF-8 support adds no driver overhead of its own. decode_varchar_utf8 runs at 216 ns vs decode_varchar_short at 170 ns — the 27% delta is the multibyte string walk inherent in UTF-8 decoding, not Phase 20 overhead.
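
A minimal sketch of the fast bulk-load path. The informix_db module name
and the connect() arguments are assumptions; cursor/executemany/commit
follow the PEP 249 surface described above:

import informix_db  # assumed import name for the informix-db package

conn = informix_db.connect(host="localhost", database="bench")  # illustrative args
try:
    cur = conn.cursor()
    rows = [(i, f"name-{i}") for i in range(10_000)]
    # One transaction: the server flushes its log once at COMMIT
    # (~31,000 rows/sec above) instead of once per row (~550 rows/sec).
    cur.executemany("INSERT INTO t (id, name) VALUES (?, ?)", rows)
    conn.commit()  # the single log flush happens here
finally:
    conn.close()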

Performance gotchas

  • autocommit=True + executemany is the slowest reasonable pattern. Use it only when each row genuinely needs to land independently. For bulk loads, keep the default autocommit=False and call conn.commit() at the end of the batch.
  • Single INSERT in a tight loop is 1.88 ms each — strictly worse than executemany (which saves PREPARE/RELEASE overhead). If you find yourself looping over cur.execute("INSERT...") hundreds of times, switch to executemany.
  • Cold connect is 11 ms. The login handshake is expensive compared to anything you'll do with the connection. Pool everything in long-lived processes.
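
The pooling pattern, sketched with assumed API names (create_pool and
acquire are placeholders; test_pool_perf.py exercises the real surface):

import informix_db  # assumed import name

# Hypothetical pool constructor: pay the ~11 ms login handshake once per slot.
pool = informix_db.create_pool(host="localhost", database="app", size=8)

def handle_request(user_id):
    # Acquiring from the pool costs a few hundred µs (295 µs including a
    # tiny query in the table above), not an 11 ms handshake per request.
    with pool.acquire() as conn:  # hypothetical context-manager API
        cur = conn.cursor()
        cur.execute("SELECT name FROM users WHERE id = ?", (user_id,))
        return cur.fetchone()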

Regression policy

baseline.json is committed and represents the dev-container baseline. Compare a current run against it with:

uv run pytest tests/benchmarks/ -m benchmark --benchmark-only \
    --benchmark-compare=tests/benchmarks/baseline.json \
    --benchmark-compare-fail=mean:25%

A 25% regression in mean fails the run. Adjust the threshold to your CI's noise profile: a loopback network on a shared runner is noisier than a dev container on a quiet box — start permissive and tighten as you collect runs.

Updating the baseline

When you intentionally change performance (an optimization, or a regression accepted for correctness), refresh the baseline:

make bench-save                                 # writes .results/0001_run.json
cp tests/benchmarks/.results/Linux-CPython-*/0001_run.json tests/benchmarks/baseline.json
git add tests/benchmarks/baseline.json

Document the change in CHANGELOG so reviewers know why the floor moved.

Files

  • test_codec_perf.py — codec dispatch (decode, encode_param, parse_tuple_payload)
  • test_select_perf.py — SELECT round-trips, single + multi-row
  • test_insert_perf.py — INSERT single + executemany throughput
  • test_pool_perf.py — cold connect vs pool acquire/release
  • test_async_perf.py — async-path latency + concurrent throughput
  • conftest.py — long-lived bench_conn and 1k-row bench_table fixtures
  • baseline.json — committed baseline for regression comparison
  • .results/ — gitignored; per-run output from make bench-save