informix-db/tests/benchmarks/test_async_perf.py
Ryan Malloy 90ce035a00 Phase 21: Performance benchmarks (2026.05.04.5)
Adds tests/benchmarks/ with pytest-benchmark coverage of the hot codec
paths and end-to-end SELECT/INSERT/pool/async round-trips. Establishes
a committed baseline.json so PRs can be regression-checked at review
via --benchmark-compare.
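
In a reviewer's checkout, the regression check described above would take roughly this shape (the run id and failure threshold here are illustrative assumptions, not values from this commit):

```shell
# Save a named run once (roughly how a baseline like baseline.json is produced):
pytest tests/benchmarks -m benchmark --benchmark-save=baseline

# On a PR branch, compare against it and fail on large mean regressions:
pytest tests/benchmarks -m benchmark \
  --benchmark-compare=0001_baseline \
  --benchmark-compare-fail=mean:10%
```

Both flags are pytest-benchmark's documented compare interface; `0001_baseline` is the auto-numbered id a first `--benchmark-save` run would produce.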

* test_codec_perf.py (16): decode/encode_param/parse_tuple_payload
  micro-benchmarks - run without container, suitable for pre-merge CI.
* test_select_perf.py (4): SELECT round-trips - 1-row latency floor,
  10-row, 1k-row full fetch, parameterized.
* test_insert_perf.py (3): single-row INSERT, executemany 100 / 1000.
* test_pool_perf.py (3): cold connect, pool acquire/release, pool
  acquire + query + release.
* test_async_perf.py (2): async round-trip overhead, 10x concurrent.
* baseline.json: committed snapshot, 28 measurements.
* benchmark pytest marker, gated off by default.
* Makefile: bench / bench-codec / bench-save targets;
  test-integration excludes benchmarks for speed.
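
The concurrency claim behind the async benchmarks — that N thread-shim calls fanned out with `asyncio.gather` cost roughly one call of wall-clock time — can be sketched without a database or the driver (pure stdlib; the 50 ms sleep stands in for a blocking round-trip):

```python
import asyncio
import time


def blocking_roundtrip() -> int:
    """Stand-in for a sync driver call (e.g. a SELECT over a socket)."""
    time.sleep(0.05)
    return 1


async def main() -> float:
    start = time.perf_counter()
    # Ten blocking calls, each hopped to the default thread pool — the same
    # shape as an async layer built on a to_thread shim.
    rows = await asyncio.gather(
        *(asyncio.to_thread(blocking_roundtrip) for _ in range(10))
    )
    elapsed = time.perf_counter() - start
    assert rows == [1] * 10
    return elapsed


elapsed = asyncio.run(main())
# The ten 50 ms calls overlap: total wall time is far closer to one
# call (0.05 s) than to ten serial calls (0.5 s).
print(f"{elapsed:.3f}s")
```

Serial execution would take ~0.5 s; the gathered version finishes in a fraction of that, which is the effect the concurrent benchmark measures through the real pool.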

Headline numbers (dev container loopback):
* decode(int): 181 ns
* parse_tuple 5 cols: 2.87 µs/row
* SELECT 1 round-trip: 177 µs
* Pool acquire+query+release: 295 µs
* Cold connect: 11.2 ms (72x slower than pool acquire/release)

UTF-8 decode carries no measurable cost vs iso-8859-1 - confirms
Phase 20 didn't regress anything.
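
That encoding claim is easy to spot-check outside the harness with a stdlib-only sketch (plain `timeit` rather than the suite's pytest-benchmark fixtures; the payload text is illustrative):

```python
import timeit

# Same text under two wire encodings — mirrors comparing the decode paths.
text = "informix résult row × 5"
latin = text.encode("iso-8859-1")
utf8 = text.encode("utf-8")

n = 50_000
t_latin = timeit.timeit(lambda: latin.decode("iso-8859-1"), number=n)
t_utf8 = timeit.timeit(lambda: utf8.decode("utf-8"), number=n)

# Both decodes run at C level on CPython and land in the same order of
# magnitude; the interesting output is the ratio, not the absolute time.
print(f"iso-8859-1: {t_latin * 1e9 / n:.0f} ns/op  "
      f"utf-8: {t_utf8 * 1e9 / n:.0f} ns/op")
```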

Total: 69 unit + 211 integration + 28 benchmark = 308 tests.
2026-05-04 17:21:12 -06:00

"""Async-path benchmarks.
The async layer is a thin ``_to_thread`` shim over the sync codec, so
the per-call delta vs sync is the event-loop hop cost (~tens of µs).
The win is **concurrency**: running 10 SELECTs through a pool with
``asyncio.gather`` returns in roughly the same wall-clock time as 1.
These benchmarks measure both:
* ``test_async_select_one_row`` — single-call overhead delta vs sync
* ``test_async_concurrent_10_selects`` — concurrent throughput
"""
from __future__ import annotations
import asyncio
import pytest
from informix_db import aio
from tests.conftest import ConnParams
pytestmark = [pytest.mark.benchmark, pytest.mark.integration]

@pytest.fixture
def event_loop():
    """A fresh event loop per benchmark — pytest-asyncio compat shim."""
    loop = asyncio.new_event_loop()
    yield loop
    loop.close()


def test_async_select_one_row(
    benchmark, conn_params: ConnParams
) -> None:
    """Single async round-trip — measure thread-hop overhead."""
    loop = asyncio.new_event_loop()

    async def setup() -> aio.AsyncConnection:
        return await aio.connect(
            host=conn_params.host,
            port=conn_params.port,
            user=conn_params.user,
            password=conn_params.password,
            database=conn_params.database,
            server=conn_params.server,
            autocommit=True,
        )

    conn = loop.run_until_complete(setup())

    async def one_query() -> object:
        cur = await conn.cursor()
        await cur.execute("SELECT 1 FROM systables WHERE tabid = 1")
        row = await cur.fetchone()
        await cur.close()
        return row

    def run() -> object:
        return loop.run_until_complete(one_query())

    try:
        benchmark(run)
    finally:
        loop.run_until_complete(conn.close())
        loop.close()


def test_async_concurrent_10_selects(
    benchmark, conn_params: ConnParams
) -> None:
    """10 concurrent SELECTs through a pool — sub-linear vs serial."""
    loop = asyncio.new_event_loop()

    async def setup() -> aio.AsyncConnectionPool:
        return await aio.create_pool(
            host=conn_params.host,
            port=conn_params.port,
            user=conn_params.user,
            password=conn_params.password,
            database=conn_params.database,
            server=conn_params.server,
            autocommit=True,
            min_size=2,
            max_size=10,
        )

    pool = loop.run_until_complete(setup())

    async def one_through_pool() -> object:
        async with pool.connection() as conn:
            cur = await conn.cursor()
            await cur.execute("SELECT 1 FROM systables WHERE tabid = 1")
            row = await cur.fetchone()
            await cur.close()
            return row

    async def ten_concurrent() -> list:
        return await asyncio.gather(
            *(one_through_pool() for _ in range(10))
        )

    def run() -> list:
        return loop.run_until_complete(ten_concurrent())

    try:
        benchmark(run)
    finally:
        loop.run_until_complete(pool.close())
        loop.close()