Ryan Malloy 90ce035a00 Phase 21: Performance benchmarks (2026.05.04.5)
Adds tests/benchmarks/ with pytest-benchmark coverage of the hot codec
paths and end-to-end SELECT/INSERT/pool/async round-trips. Establishes
a committed baseline.json so PRs can be regression-checked at review
via --benchmark-compare.

* test_codec_perf.py (16): decode/encode_param/parse_tuple_payload
  micro-benchmarks - run without container, suitable for pre-merge CI.
* test_select_perf.py (4): SELECT round-trips - 1-row latency floor,
  10-row, 1k-row full fetch, parameterized.
* test_insert_perf.py (3): single-row INSERT, executemany 100 / 1000.
* test_pool_perf.py (3): cold connect, pool acquire/release, pool
  acquire + query + release.
* test_async_perf.py (2): async round-trip overhead, 10x concurrent.
* baseline.json: committed snapshot, 28 measurements.
* benchmark pytest marker, gated off by default.
* Makefile: bench / bench-codec / bench-save targets;
  test-integration excludes benchmarks for speed.
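
For context, "gated off by default" is usually done in pytest configuration; a minimal sketch follows. The repo's actual pytest.ini/pyproject.toml is not part of this diff, so the exact settings here are assumptions.

```ini
# pytest.ini (illustrative only: actual repo config not shown in this diff)
[pytest]
markers =
    benchmark: performance benchmarks (skipped unless explicitly selected)
addopts = -m "not benchmark"
```

A `make bench` target can then opt back in with `pytest -m benchmark` (a later `-m` on the command line overrides the one from addopts), optionally adding `--benchmark-compare` to check against the committed baseline.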

Headline numbers (dev container loopback):
* decode(int): 181 ns
* parse_tuple 5 cols: 2.87 µs/row
* SELECT 1 round-trip: 177 µs
* Pool acquire+query+release: 295 µs
* Cold connect: 11.2 ms (72x slower than pool)

UTF-8 decode shows no measurable cost versus ISO-8859-1, confirming
that Phase 20 didn't regress anything.
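
The shape of that comparison can be reproduced with a stand-in: the real suite benchmarks informix_db's codec decode path, but plain `bytes.decode` under both charsets illustrates the same measurement (timeit here approximates what pytest-benchmark does with proper calibration and statistics).

```python
# Illustrative stand-in for the charset-decode comparison: times plain
# bytes.decode under utf-8 and iso-8859-1. The real benchmarks exercise
# informix_db's decoders, not str.decode.
import timeit

payload = b"row_0042 some ascii payload" * 4  # ASCII, like typical row data

utf8 = timeit.timeit(lambda: payload.decode("utf-8"), number=100_000)
latin1 = timeit.timeit(lambda: payload.decode("iso-8859-1"), number=100_000)
print(f"utf-8: {utf8:.3f}s  iso-8859-1: {latin1:.3f}s")
```

For pure-ASCII payloads both paths are near-identical in CPython, which is why a well-behaved codec shows no UTF-8 penalty on such data.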

Total: 69 unit + 211 integration + 28 benchmark = 308 tests.
2026-05-04 17:21:12 -06:00


"""Benchmark fixtures — long-lived connections + populated test tables.
The end-to-end benchmark suite needs:
* A persistent connection (creating one per benchmark inflates the cost
by the login handshake, ~5-15ms — distorts micro-second measurements).
* A pre-populated test table so SELECT/UPDATE benchmarks have rows to
iterate.
Both fixtures are session-scoped so the table is created exactly once
even when the same benchmark is iterated over many rounds.
"""
from __future__ import annotations

import contextlib
from collections.abc import Iterator

import pytest

import informix_db
from tests.conftest import ConnParams

BENCH_TABLE_ROWS = 1000  # rows in the populated benchmark table


@pytest.fixture(scope="session")
def bench_conn(conn_params: ConnParams) -> Iterator[informix_db.Connection]:
    """One long-lived autocommit connection for the entire bench session."""
    conn = informix_db.connect(
        host=conn_params.host,
        port=conn_params.port,
        user=conn_params.user,
        password=conn_params.password,
        database=conn_params.database,
        server=conn_params.server,
        autocommit=True,
    )
    try:
        yield conn
    finally:
        conn.close()


@pytest.fixture(scope="session")
def bench_table(bench_conn: informix_db.Connection) -> Iterator[str]:
    """Create + populate a 1k-row table for SELECT/UPDATE benchmarks.

    Yields the table name. The table is dropped at session teardown.
    Schema covers the common type mix: INT (id), VARCHAR (name),
    INT (counter), FLOAT (value), DATE (created).
    """
    table = "p21_bench"
    cur = bench_conn.cursor()
    with contextlib.suppress(informix_db.Error):
        cur.execute(f"DROP TABLE {table}")
    cur.execute(
        f"CREATE TABLE {table} ("
        " id INT, name VARCHAR(64), counter INT,"
        " value FLOAT, created DATE)"
    )
    # Populate via executemany so setup is fast.
    rows = [
        (
            i,
            f"row_{i:04d}",
            i * 7,
            float(i) * 1.5,
            None,  # DATE NULL — keeps fixture small
        )
        for i in range(BENCH_TABLE_ROWS)
    ]
    cur.executemany(
        f"INSERT INTO {table} VALUES (?, ?, ?, ?, ?)",
        rows,
    )
    try:
        yield table
    finally:
        with contextlib.suppress(informix_db.Error):
            cur.execute(f"DROP TABLE {table}")