Phase 21: Performance benchmarks (2026.05.04.5)

Adds tests/benchmarks/ with pytest-benchmark coverage of the hot codec
paths and end-to-end SELECT/INSERT/pool/async round-trips. Establishes
a committed baseline.json so PRs can be regression-checked at review
via --benchmark-compare.

* test_codec_perf.py (16): decode/encode_param/parse_tuple_payload
  micro-benchmarks - run without container, suitable for pre-merge CI.
* test_select_perf.py (4): SELECT round-trips - 1-row latency floor,
  10-row, 1k-row full fetch, parameterized.
* test_insert_perf.py (3): single-row INSERT, executemany 100 / 1000.
* test_pool_perf.py (3): cold connect, pool acquire/release, pool
  acquire + query + release.
* test_async_perf.py (2): async round-trip overhead, 10x concurrent.
* baseline.json: committed snapshot, 28 measurements.
* benchmark pytest marker, gated off by default.
* Makefile: bench / bench-codec / bench-save targets;
  test-integration excludes benchmarks for speed.
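
Review-time regression check (same invocation as documented in the new
tests/benchmarks/README.md below):

    uv run pytest tests/benchmarks/ -m benchmark --benchmark-only \
        --benchmark-compare=tests/benchmarks/baseline.json \
        --benchmark-compare-fail=mean:25%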

Headline numbers (dev container loopback):
* decode(int): 181 ns
* parse_tuple 5 cols: 2.87 µs/row
* SELECT 1 round-trip: 177 µs
* Pool acquire+query+release: 295 µs
* Cold connect: 11.2 ms (72x slower than pool)

UTF-8 decode carries no measurable cost vs iso-8859-1 - confirms
Phase 20 didn't regress anything.

Total: 69 unit + 211 integration + 28 benchmark = 308 tests.
Ryan Malloy · 2026-05-04 17:21:12 -06:00
parent bea1a1cd0c · commit 90ce035a00
14 changed files with 2068 additions and 8 deletions

.gitignore

@@ -58,3 +58,4 @@ build/*.jar
# Java reference client build outputs
*.class
tests/benchmarks/.results/

CHANGELOG.md

@@ -2,6 +2,44 @@
All notable changes to `informix-db`. Versioning is [CalVer](https://calver.org/) — `YYYY.MM.DD` for date-based releases, `YYYY.MM.DD.N` for same-day post-releases per PEP 440.
## 2026.05.04.5 — Performance benchmarks (Phase 21)
Adds `tests/benchmarks/` — a `pytest-benchmark` driven suite covering codec micro-benchmarks (no server required) and end-to-end SELECT/INSERT/pool/async benchmarks. Establishes a committed `baseline.json` so future PRs can be compared against the floor and regressions caught at review.
### Added
- **`tests/benchmarks/test_codec_perf.py`** — 16 micro-benchmarks for the hot codec paths (`decode`, `encode_param`, `parse_tuple_payload`). Run without an Informix container; suitable for pre-merge CI.
- **`tests/benchmarks/test_select_perf.py`** — 4 SELECT round-trip benchmarks: 1-row latency floor, ~10 rows, full 1k-row table, parameterized.
- **`tests/benchmarks/test_insert_perf.py`** — 3 INSERT benchmarks: single-row, `executemany(100)`, `executemany(1000)`.
- **`tests/benchmarks/test_pool_perf.py`** — 3 pool benchmarks: cold connect (login handshake cost), pool acquire/release, pool acquire + tiny query + release.
- **`tests/benchmarks/test_async_perf.py`** — 2 async benchmarks: single async round-trip overhead, 10 concurrent SELECTs through an async pool.
- **`tests/benchmarks/conftest.py`** — `bench_conn` (long-lived autocommit connection) and `bench_table` (pre-populated 1k-row table) fixtures, both session-scoped.
- **`tests/benchmarks/baseline.json`** — committed baseline (28 measurements) for `--benchmark-compare` regression checks.
- **`tests/benchmarks/README.md`** — headline numbers, regression policy, how to update baseline, what each benchmark measures.
- **`make bench` / `make bench-codec` / `make bench-save`** Makefile targets.
- **`benchmark` pytest marker** — gated, off by default. `pytest -m benchmark` to opt in.
### Changed
- **`make test-integration`** now uses `-m "integration and not benchmark"` so the integration suite stays fast (~6s) — benchmarks (~27s) are gated behind `make bench`.
- **`pytest`** default `-m` now excludes both `integration` and `benchmark`. Default run is unit-only.
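Marker selection after this change, as a quick reference (each command corresponds to a Makefile target in this release):
```bash
uv run pytest                                     # default: unit tests only (= make test)
uv run pytest -m "integration and not benchmark"  # what `make test-integration` runs (~6s)
uv run pytest -m benchmark                        # opt in to benchmarks (~27s); see `make bench`
```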
### Headline numbers (dev container, x86_64 Linux, loopback)
| Operation | Mean |
|-|-:|
| `decode(int)` (per cell) | 181 ns |
| `parse_tuple_payload(5 cols)` (per row) | 2.87 µs |
| `SELECT 1` round-trip | 177 µs |
| Pool acquire + tiny query + release | 295 µs |
| **Cold connect + close** | **11.2 ms** |
**Pool-vs-cold delta is 72×.** UTF-8 decode carries no measurable cost over iso-8859-1 (Phase 20 didn't slow anything down).
### Tests
28 new benchmark tests. Total: **69 unit + 211 integration + 28 benchmark = 308**.
## 2026.05.04.4 — UTF-8 / multibyte locale support
Threads the connection's `CLIENT_LOCALE` through to user-data string codecs so multibyte locales (UTF-8, etc.) round-trip correctly. The driver previously hardcoded `iso-8859-1` for every string conversion — fine for Western European text, broken-by-design for CJK, Cyrillic, Arabic, emoji.

Makefile

@@ -32,15 +32,30 @@ format: ## Auto-format with ruff
test: ## Run unit tests (no Docker required)
	uv run pytest
-test-integration: ## Run integration tests (needs Informix container; see `make ifx-up`)
-	uv run pytest -m integration
+test-integration: ## Run integration tests (needs Informix container; see `make ifx-up`). Excludes benchmarks; use `make bench` for those.
+	uv run pytest -m "integration and not benchmark"
-test-all: ## Run unit + integration tests
-	uv run pytest -m ""
+test-all: ## Run unit + integration tests (no benchmarks; use `make bench` for those)
+	uv run pytest -m "not benchmark"
test-pdu: ## Run only the JDBC-vs-Python PDU regression test
	uv run pytest tests/test_pdu_match.py -v
bench: ## Run all benchmarks (needs container for end-to-end; codec works standalone)
	uv run pytest tests/benchmarks/ -m benchmark --benchmark-only \
		--benchmark-columns=mean,stddev,ops,rounds \
		--benchmark-sort=mean
bench-codec: ## Run codec micro-benchmarks only (no container required)
	uv run pytest tests/benchmarks/test_codec_perf.py -m benchmark --benchmark-only \
		--benchmark-columns=mean,stddev,ops,rounds \
		--benchmark-sort=mean
bench-save: ## Save current bench run under .results/ (manual: copy to baseline.json)
	uv run pytest tests/benchmarks/ -m benchmark --benchmark-only \
		--benchmark-storage=tests/benchmarks/.results \
		--benchmark-save=run
# ----------------------------------------------------------------------------
# Informix dev container
# ----------------------------------------------------------------------------

pyproject.toml

@@ -1,6 +1,6 @@
[project]
name = "informix-db"
-version = "2026.05.04.4"
+version = "2026.05.04.5"
description = "Pure-Python driver for IBM Informix IDS — speaks the SQLI wire protocol over raw sockets. No CSDK, no JVM, no native libraries."
readme = "README.md"
license = { text = "MIT" }
@@ -93,13 +93,15 @@ addopts = [
    "-ra",            # short summary for non-passing
    "--strict-markers",
    "--strict-config",
-   "-m", "not integration", # default: unit-only. Override with: pytest -m integration
+   "-m", "not integration and not benchmark", # default: unit-only. Override with: pytest -m integration / -m benchmark
]
markers = [
    "integration: requires a running Informix container (docker compose up); skipped by default",
    "benchmark: pytest-benchmark performance test; skipped by default. Run with `make bench`.",
]
[dependency-groups]
dev = [
    "pytest-asyncio>=1.3.0",
    "pytest-benchmark>=5.2.3",
]

tests/benchmarks/README.md

@@ -0,0 +1,79 @@
# Benchmarks (Phase 21)
Performance baselines for `informix-db`. Two layers:
1. **Codec micro-benchmarks** (`test_codec_perf.py`) — pure CPU, no
server. These set the *ceiling* for what end-to-end can achieve.
Run with `make bench-codec`. Suitable for CI's pre-merge job.
2. **End-to-end benchmarks** — exercise the full
PREPARE → BIND → EXECUTE → FETCH → CLOSE → RELEASE round-trip.
Need an Informix container (`make ifx-up`). Run with `make bench`.
## Headline numbers (baseline 2026-05-04, x86_64 Linux, dev container on loopback)
| Operation | Mean | Ops/sec |
|-|-:|-:|
| `decode(int)` (per cell) | 181 ns | 5.5M |
| `parse_tuple_payload(5 cols)` (per row) | 2.87 µs | 350K |
| `encode_param(int)` (per param) | 103 ns | 9.7M |
| `SELECT 1` round-trip | 177 µs | 5,650 |
| Pool acquire + tiny query + release | 295 µs | 3,400 |
| **Cold connect + close** (login handshake) | **11.2 ms** | **89** |
| 1000-row SELECT * | 1.56 ms | 640 |
| INSERT (single, prepared) | 1.88 ms | 530 |
| `executemany(100 rows)` | 181 ms | 5.5 (i.e. ~550 rows/sec) |
| `executemany(1000 rows)` | 1.74 s | 0.57 (i.e. ~575 rows/sec) |
### What these tell you
- **Pool gives 72× speedup** over cold connect. If your app opens a
  connection per request, fix that first — see the sketch below.
- **Codec is not the bottleneck.** Per-cell decode (181 ns) is ~1000× faster
  than the wire round-trip (177 µs for `SELECT 1`), and even a full 5-column
  row decode (2.9 µs) is ~60× faster. Network and server-side cost dominate.
- **UTF-8 carries no measurable cost.** `decode_varchar_utf8` runs at
  216 ns vs `decode_varchar_short` at 170 ns — the 27% delta is the
  multibyte string walk inherent in UTF-8 decoding, not Phase 20 overhead.
- **`executemany` doesn't scale linearly.** 100 rows in 181 ms = 1.81 ms/row;
  1000 rows in 1.74 s = 1.74 ms/row. This suggests per-row cost dominates over
  PREPARE amortization. Worth investigating in Phase 21.x.
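A minimal sketch of the pooled pattern (connection parameters are placeholders; `create_pool` and `pool.connection()` are the APIs exercised by `test_pool_perf.py`):
```python
import informix_db
from informix_db.pool import create_pool

# Placeholder connection parameters — substitute your own.
pool = create_pool(
    host="localhost", port=9088, user="informix", password="secret",
    database="testdb", server="informix",
    autocommit=True, min_size=2, max_size=10,
)

# Steady state: acquire + tiny query + release is ~295 µs.
with pool.connection() as conn:
    cur = conn.cursor()
    cur.execute("SELECT 1 FROM systables WHERE tabid = 1")
    row = cur.fetchone()
    cur.close()

pool.close()

# The anti-pattern — informix_db.connect(...) per request — pays the
# ~11 ms login handshake every time.
```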
## Regression policy
`baseline.json` is committed and represents the dev-container baseline.
Compare a current run against it with:
```bash
uv run pytest tests/benchmarks/ -m benchmark --benchmark-only \
--benchmark-compare=tests/benchmarks/baseline.json \
--benchmark-compare-fail=mean:25%
```
A 25% mean-regression fails the run. Adjust the threshold per CI noise
profile. CI's loopback-network-on-shared-runner is noisier than dev
container on a quiet box — start permissive and tighten as you collect
runs.
## Updating the baseline
When you intentionally change performance (an optimization, or accept
a regression for correctness), refresh:
```bash
make bench-save # writes .results/0001_run.json
cp tests/benchmarks/.results/Linux-CPython-*/0001_run.json tests/benchmarks/baseline.json
git add tests/benchmarks/baseline.json
```
Document the change in CHANGELOG so reviewers know why the floor moved.
## Files
- `test_codec_perf.py` — codec dispatch (decode, encode_param, parse_tuple_payload)
- `test_select_perf.py` — SELECT round-trips, single + multi-row
- `test_insert_perf.py` — INSERT single + executemany throughput
- `test_pool_perf.py` — cold connect vs pool acquire/release
- `test_async_perf.py` — async-path latency + concurrent throughput
- `conftest.py` — long-lived `bench_conn` and 1k-row `bench_table` fixtures
- `baseline.json` — committed baseline for regression comparison
- `.results/` — gitignored; per-run output from `make bench-save`

tests/benchmarks/__init__.py

@@ -0,0 +1,21 @@
"""Phase 21 — performance benchmarks for informix-db.
These tests are gated behind the ``benchmark`` marker and excluded from
the default ``pytest`` run. To run:
make bench # all benchmarks
uv run pytest -m benchmark tests/benchmarks/test_codec_perf.py
Codec micro-benchmarks (``test_codec_perf.py``) run without a server
and are fast enough for tight inner-loop iteration. End-to-end
benchmarks (SELECT/INSERT/pool) require an Informix container.
Output goes to ``.benchmarks/`` (gitignored). Persistent baseline at
``tests/benchmarks/baseline.json`` is updated manually with::
uv run pytest -m benchmark --benchmark-only \
--benchmark-save=baseline --benchmark-storage=tests/benchmarks/
Then copy ``.benchmarks/Linux-CPython-X.Y/000N_baseline.json`` to
``tests/benchmarks/baseline.json``.
"""

tests/benchmarks/baseline.json — diff suppressed (file too large)

tests/benchmarks/conftest.py

@@ -0,0 +1,80 @@
"""Benchmark fixtures — long-lived connections + populated test tables.
The end-to-end benchmark suite needs:
* A persistent connection — creating one per benchmark inflates the cost
  by the login handshake (~5-15 ms), which distorts microsecond-scale
  measurements.
* A pre-populated test table so SELECT/UPDATE benchmarks have rows to
  iterate.
Both fixtures are session-scoped so the table is created exactly once
even when the same benchmark is iterated over many rounds.
"""
from __future__ import annotations
import contextlib
from collections.abc import Iterator
import pytest
import informix_db
from tests.conftest import ConnParams
BENCH_TABLE_ROWS = 1000 # rows in the populated benchmark table
@pytest.fixture(scope="session")
def bench_conn(conn_params: ConnParams) -> Iterator[informix_db.Connection]:
"""One long-lived autocommit connection for the entire bench session."""
conn = informix_db.connect(
host=conn_params.host,
port=conn_params.port,
user=conn_params.user,
password=conn_params.password,
database=conn_params.database,
server=conn_params.server,
autocommit=True,
)
try:
yield conn
finally:
conn.close()
@pytest.fixture(scope="session")
def bench_table(bench_conn: informix_db.Connection) -> Iterator[str]:
"""Create + populate a 1k-row table for SELECT/UPDATE benchmarks.
Yields the table name. The table is dropped at session teardown.
Schema covers the common type mix: INT id, VARCHAR name,
INT (counter), FLOAT (value), DATE (created).
"""
table = "p21_bench"
cur = bench_conn.cursor()
with contextlib.suppress(informix_db.Error):
cur.execute(f"DROP TABLE {table}")
cur.execute(
f"CREATE TABLE {table} ("
" id INT, name VARCHAR(64), counter INT,"
" value FLOAT, created DATE)"
)
# Populate via executemany so setup is fast.
rows = [
(
i,
f"row_{i:04d}",
i * 7,
float(i) * 1.5,
None, # DATE NULL — keeps fixture small
)
for i in range(BENCH_TABLE_ROWS)
]
cur.executemany(
f"INSERT INTO {table} VALUES (?, ?, ?, ?, ?)",
rows,
)
try:
yield table
finally:
with contextlib.suppress(informix_db.Error):
cur.execute(f"DROP TABLE {table}")

tests/benchmarks/test_async_perf.py

@@ -0,0 +1,108 @@
"""Async-path benchmarks.
The async layer is a thin ``_to_thread`` shim over the sync codec, so
the per-call delta vs sync is the event-loop hop cost (~tens of µs).
The win is **concurrency**: running 10 SELECTs through a pool with
``asyncio.gather`` returns in roughly the same wall-clock time as 1.
These benchmarks measure both:
* ``test_async_select_one_row`` — single-call overhead delta vs sync
* ``test_async_concurrent_10_selects`` — concurrent throughput
"""
from __future__ import annotations
import asyncio
import pytest
from informix_db import aio
from tests.conftest import ConnParams
pytestmark = [pytest.mark.benchmark, pytest.mark.integration]
@pytest.fixture
def event_loop():
"""A fresh event loop per benchmark — pytest-asyncio compat shim."""
loop = asyncio.new_event_loop()
yield loop
loop.close()
def test_async_select_one_row(
benchmark, conn_params: ConnParams
) -> None:
"""Single async round-trip — measure thread-hop overhead."""
loop = asyncio.new_event_loop()
async def setup() -> aio.AsyncConnection:
return await aio.connect(
host=conn_params.host,
port=conn_params.port,
user=conn_params.user,
password=conn_params.password,
database=conn_params.database,
server=conn_params.server,
autocommit=True,
)
conn = loop.run_until_complete(setup())
async def one_query() -> object:
cur = await conn.cursor()
await cur.execute("SELECT 1 FROM systables WHERE tabid = 1")
row = await cur.fetchone()
await cur.close()
return row
def run() -> object:
return loop.run_until_complete(one_query())
try:
benchmark(run)
finally:
loop.run_until_complete(conn.close())
loop.close()
def test_async_concurrent_10_selects(
benchmark, conn_params: ConnParams
) -> None:
"""10 concurrent SELECTs through a pool — sub-linear vs serial."""
loop = asyncio.new_event_loop()
async def setup() -> aio.AsyncConnectionPool:
return await aio.create_pool(
host=conn_params.host,
port=conn_params.port,
user=conn_params.user,
password=conn_params.password,
database=conn_params.database,
server=conn_params.server,
autocommit=True,
min_size=2,
max_size=10,
)
pool = loop.run_until_complete(setup())
async def one_through_pool() -> object:
async with pool.connection() as conn:
cur = await conn.cursor()
await cur.execute("SELECT 1 FROM systables WHERE tabid = 1")
row = await cur.fetchone()
await cur.close()
return row
async def ten_concurrent() -> list:
return await asyncio.gather(*(one_through_pool() for _ in range(10)))
def run() -> list:
return loop.run_until_complete(ten_concurrent())
try:
benchmark(run)
finally:
loop.run_until_complete(pool.close())
loop.close()

tests/benchmarks/test_codec_perf.py

@@ -0,0 +1,199 @@
"""Codec micro-benchmarks — no server required.
These measure the tight inner loops the driver hits on every row:
``decode()`` per cell, ``parse_tuple_payload()`` per row,
``encode_param()`` per parameter. A 1M-row fetch hits ``decode()``
5-10M times; a 1% slowdown there is *visible*.
The fixtures synthesize realistic byte payloads — no need for the
Docker container. This makes the benchmarks usable in CI's pre-merge
job (which doesn't run integration tests).
"""
from __future__ import annotations
import datetime
import struct
from io import BytesIO
import pytest
from informix_db._protocol import IfxStreamReader
from informix_db._resultset import ColumnInfo, parse_tuple_payload
from informix_db._types import IfxType
from informix_db.converters import decode, encode_param
pytestmark = pytest.mark.benchmark
# ---------------------------------------------------------------------------
# decode() — per-value dispatch
# ---------------------------------------------------------------------------
def test_decode_int(benchmark) -> None:
"""Hot path: per-cell INT decode. ~5M calls/sec is the kind of speed
a 1M-row fetch with 5 INT columns needs."""
raw = struct.pack("!i", 42)
benchmark(decode, int(IfxType.INT), raw)
def test_decode_smallint(benchmark) -> None:
raw = struct.pack("!h", 100)
benchmark(decode, int(IfxType.SMALLINT), raw)
def test_decode_bigint(benchmark) -> None:
raw = struct.pack("!q", 1234567890123)
benchmark(decode, int(IfxType.BIGINT), raw)
def test_decode_float(benchmark) -> None:
raw = struct.pack("!d", 3.14159)
benchmark(decode, int(IfxType.FLOAT), raw)
def test_decode_date(benchmark) -> None:
raw = struct.pack("!i", 45678)
benchmark(decode, int(IfxType.DATE), raw)
def test_decode_varchar_short(benchmark) -> None:
"""20-byte ASCII string — typical name column."""
raw = b"hello world example "
benchmark(decode, int(IfxType.VARCHAR), raw)
def test_decode_varchar_long(benchmark) -> None:
"""255-byte VARCHAR — max non-LVARCHAR length."""
raw = b"x" * 255
benchmark(decode, int(IfxType.VARCHAR), raw)
def test_decode_varchar_utf8(benchmark) -> None:
"""Multi-byte UTF-8 decode — exercise Phase 20 path."""
raw = "café résumé naïve Zürich".encode()
benchmark(decode, int(IfxType.VARCHAR), raw, "utf-8")
# ---------------------------------------------------------------------------
# encode_param() — parameter-binding hot path
# ---------------------------------------------------------------------------
def test_encode_int(benchmark) -> None:
benchmark(encode_param, 42)
def test_encode_str_ascii(benchmark) -> None:
benchmark(encode_param, "hello world example", "iso-8859-1")
def test_encode_str_utf8(benchmark) -> None:
benchmark(encode_param, "café résumé naïve", "utf-8")
def test_encode_float(benchmark) -> None:
benchmark(encode_param, 3.14159)
def test_encode_date(benchmark) -> None:
benchmark(encode_param, datetime.date(2026, 5, 4))
def test_encode_datetime(benchmark) -> None:
benchmark(encode_param, datetime.datetime(2026, 5, 4, 12, 30, 45))
# ---------------------------------------------------------------------------
# parse_tuple_payload() — per-row decode
# ---------------------------------------------------------------------------
def _build_systables_row_payload() -> bytes:
"""Synthesize the SQ_TUPLE bytes a typical systables row produces.
Layout: [short warn=0][int size][payload][optional pad]
Payload has columns: tabname VARCHAR(128), owner VARCHAR(32),
tabid INT, partnum INT, ncols INT.
"""
payload = bytearray()
# tabname VARCHAR: [byte len][bytes] — single-byte length prefix per
# the discovered tuple format
name = b"systables"
payload.append(len(name))
payload.extend(name)
# owner VARCHAR
owner = b"informix"
payload.append(len(owner))
payload.extend(owner)
# tabid INT
payload.extend(struct.pack("!i", 1))
# partnum INT
payload.extend(struct.pack("!i", 1048578))
# ncols INT
payload.extend(struct.pack("!i", 32))
out = bytearray()
out.extend(struct.pack("!h", 0)) # warn
out.extend(struct.pack("!i", len(payload)))
out.extend(payload)
if len(payload) & 1:
out.append(0) # even-byte pad
return bytes(out)
_SYSTABLES_COLUMNS = [
ColumnInfo(
name="tabname",
type_code=int(IfxType.VARCHAR),
raw_type_code=int(IfxType.VARCHAR),
encoded_length=128,
),
ColumnInfo(
name="owner",
type_code=int(IfxType.VARCHAR),
raw_type_code=int(IfxType.VARCHAR),
encoded_length=32,
),
ColumnInfo(
name="tabid",
type_code=int(IfxType.INT),
raw_type_code=int(IfxType.INT),
encoded_length=4,
),
ColumnInfo(
name="partnum",
type_code=int(IfxType.INT),
raw_type_code=int(IfxType.INT),
encoded_length=4,
),
ColumnInfo(
name="ncols",
type_code=int(IfxType.INT),
raw_type_code=int(IfxType.INT),
encoded_length=4,
),
]
def test_parse_tuple_5cols_iso8859(benchmark) -> None:
"""Decode a 5-column row (2 VARCHAR + 3 INT) — typical `systables` shape."""
payload = _build_systables_row_payload()
def run() -> tuple:
reader = IfxStreamReader(BytesIO(payload))
return parse_tuple_payload(reader, _SYSTABLES_COLUMNS)
benchmark(run)
def test_parse_tuple_5cols_utf8(benchmark) -> None:
"""Same shape, UTF-8 codec path — verify Phase 20 isn't a bottleneck."""
payload = _build_systables_row_payload()
def run() -> tuple:
reader = IfxStreamReader(BytesIO(payload))
return parse_tuple_payload(reader, _SYSTABLES_COLUMNS, encoding="utf-8")
benchmark(run)

tests/benchmarks/test_insert_perf.py

@@ -0,0 +1,106 @@
"""End-to-end INSERT benchmarks — single-row, executemany, and the gap.
The single-row vs. executemany delta is the ``executemany`` win:
PREPARE+RELEASE once and BIND+EXECUTE per row, versus PREPARE+RELEASE
per row. On any decent network this is 10-50x.
"""
from __future__ import annotations
import contextlib
import pytest
import informix_db
pytestmark = [pytest.mark.benchmark, pytest.mark.integration]
def _setup_temp_table(conn: informix_db.Connection, name: str) -> None:
cur = conn.cursor()
with contextlib.suppress(informix_db.Error):
cur.execute(f"DROP TABLE {name}")
cur.execute(
f"CREATE TABLE {name} (id INT, name VARCHAR(64), value FLOAT)"
)
def _drop_temp_table(conn: informix_db.Connection, name: str) -> None:
cur = conn.cursor()
with contextlib.suppress(informix_db.Error):
cur.execute(f"DROP TABLE {name}")
def test_insert_single_row(benchmark, bench_conn: informix_db.Connection) -> None:
"""Single INSERT per call — full PREPARE+BIND+EXECUTE+RELEASE cycle."""
table = "p21_ins_single"
_setup_temp_table(bench_conn, table)
counter = [0]
def run() -> None:
counter[0] += 1
cur = bench_conn.cursor()
cur.execute(
f"INSERT INTO {table} VALUES (?, ?, ?)",
(counter[0], f"name_{counter[0]}", float(counter[0])),
)
cur.close()
try:
benchmark(run)
finally:
_drop_temp_table(bench_conn, table)
def test_executemany_100_rows(
benchmark, bench_conn: informix_db.Connection
) -> None:
"""100 INSERTs via executemany — one PREPARE, 100 BIND+EXECUTEs, one RELEASE."""
table = "p21_ins_emany_100"
_setup_temp_table(bench_conn, table)
counter = [0]
def run() -> None:
counter[0] += 1
base = counter[0] * 100
rows = [
(base + i, f"row_{base + i}", float(base + i)) for i in range(100)
]
cur = bench_conn.cursor()
cur.executemany(
f"INSERT INTO {table} VALUES (?, ?, ?)",
rows,
)
cur.close()
try:
benchmark(run)
finally:
_drop_temp_table(bench_conn, table)
def test_executemany_1000_rows(
benchmark, bench_conn: informix_db.Connection
) -> None:
"""1000 INSERTs via executemany — sustained-batch throughput."""
table = "p21_ins_emany_1000"
_setup_temp_table(bench_conn, table)
counter = [0]
def run() -> None:
counter[0] += 1
base = counter[0] * 1000
rows = [
(base + i, f"row_{base + i}", float(base + i)) for i in range(1000)
]
cur = bench_conn.cursor()
cur.executemany(
f"INSERT INTO {table} VALUES (?, ?, ?)",
rows,
)
cur.close()
try:
benchmark.pedantic(run, rounds=3, iterations=1)
finally:
_drop_temp_table(bench_conn, table)

tests/benchmarks/test_pool_perf.py

@@ -0,0 +1,83 @@
"""Connection-pool benchmarks — measure the cost of pool acquire/release
vs. fresh connect.
The win on the pool side is *avoiding the login handshake*. Cold connect
to Informix is ~5-15ms (server-side auth + protocol negotiation). Pool
acquire is ~50-200µs (validation only). The benchmark makes that delta
visible.
"""
from __future__ import annotations
import pytest
import informix_db
from informix_db.pool import ConnectionPool, create_pool
from tests.conftest import ConnParams
pytestmark = [pytest.mark.benchmark, pytest.mark.integration]
@pytest.fixture(scope="module")
def pool(conn_params: ConnParams):
"""Module-scoped pool kept warm across the bench file."""
p = create_pool(
host=conn_params.host,
port=conn_params.port,
user=conn_params.user,
password=conn_params.password,
database=conn_params.database,
server=conn_params.server,
autocommit=True,
min_size=2,
max_size=10,
)
try:
yield p
finally:
p.close()
def test_cold_connect_disconnect(benchmark, conn_params: ConnParams) -> None:
"""Full login handshake + close per call — the worst case."""
def run() -> None:
conn = informix_db.connect(
host=conn_params.host,
port=conn_params.port,
user=conn_params.user,
password=conn_params.password,
database=conn_params.database,
server=conn_params.server,
autocommit=True,
)
conn.close()
# Cold-connect is slow (~10ms); cap at 5 rounds, no per-round iteration
benchmark.pedantic(run, rounds=5, iterations=1)
def test_pool_acquire_release(benchmark, pool: ConnectionPool) -> None:
"""Pool acquire+release — the steady-state cost of a pooled query."""
def run() -> None:
with pool.connection() as _conn:
pass
benchmark(run)
def test_pool_acquire_query_release(
benchmark, pool: ConnectionPool
) -> None:
"""Realistic per-query cost: acquire, run a tiny query, release."""
def run() -> object:
with pool.connection() as conn:
cur = conn.cursor()
cur.execute("SELECT 1 FROM systables WHERE tabid = 1")
row = cur.fetchone()
cur.close()
return row
benchmark(run)

tests/benchmarks/test_select_perf.py

@@ -0,0 +1,81 @@
"""End-to-end SELECT benchmarks.
Measure the full PREPARE → EXECUTE → FETCH → CLOSE → RELEASE round-trip
for representative query shapes. The codec micro-benchmarks set the
*ceiling* (best-case CPU); these tell you how much of that ceiling
the wire protocol + server response time eats.
Layered comparison:
- ``select_one_row`` — protocol-overhead floor (single tiny round-trip)
- ``select_systables_first`` — small server-side query (~10 rows)
- ``select_bench_table_all`` — full 1k-row table fetch (sustained throughput)
"""
from __future__ import annotations
import pytest
import informix_db
pytestmark = [pytest.mark.benchmark, pytest.mark.integration]
def test_select_one_row(benchmark, bench_conn: informix_db.Connection) -> None:
"""Single-row round-trip — protocol-overhead floor."""
def run() -> object:
cur = bench_conn.cursor()
cur.execute("SELECT 1 FROM systables WHERE tabid = 1")
row = cur.fetchone()
cur.close()
return row
benchmark(run)
def test_select_systables_first_10(benchmark, bench_conn: informix_db.Connection) -> None:
"""Small server-side query — describes 4 columns, returns ~10 rows."""
def run() -> list:
cur = bench_conn.cursor()
cur.execute(
"SELECT FIRST 10 tabname, owner, tabid, ncols FROM systables"
)
rows = cur.fetchall()
cur.close()
return rows
benchmark(run)
def test_select_bench_table_all(
benchmark, bench_conn: informix_db.Connection, bench_table: str
) -> None:
"""1000-row sustained fetch — covers the typical reporting query."""
def run() -> list:
cur = bench_conn.cursor()
cur.execute(f"SELECT * FROM {bench_table}")
rows = cur.fetchall()
cur.close()
return rows
benchmark(run)
def test_select_with_param(
benchmark, bench_conn: informix_db.Connection, bench_table: str
) -> None:
"""Parameterized SELECT — exercises the BIND path."""
def run() -> list:
cur = bench_conn.cursor()
cur.execute(
f"SELECT id, name FROM {bench_table} WHERE counter > ?",
(5000,),
)
rows = cur.fetchall()
cur.close()
return rows
benchmark(run)

uv.lock (generated)

@@ -34,7 +34,7 @@ wheels = [
[[package]]
name = "informix-db"
-version = "2026.5.4.3"
+version = "2026.5.4.4"
source = { editable = "." }
[package.optional-dependencies]
@@ -46,6 +46,7 @@ dev = [
[package.dev-dependencies]
dev = [
    { name = "pytest-asyncio" },
    { name = "pytest-benchmark" },
]
[package.metadata] [package.metadata]
@@ -56,7 +57,10 @@ requires-dist = [
provides-extras = ["dev"]
[package.metadata.requires-dev]
-dev = [{ name = "pytest-asyncio", specifier = ">=1.3.0" }]
+dev = [
+    { name = "pytest-asyncio", specifier = ">=1.3.0" },
+    { name = "pytest-benchmark", specifier = ">=5.2.3" },
+]
[[package]]
name = "iniconfig"
@@ -85,6 +89,15 @@ wheels = [
    { url = "https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538, upload-time = "2025-05-15T12:30:06.134Z" },
]
[[package]]
name = "py-cpuinfo"
version = "9.0.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/37/a8/d832f7293ebb21690860d2e01d8115e5ff6f2ae8bbdc953f0eb0fa4bd2c7/py-cpuinfo-9.0.0.tar.gz", hash = "sha256:3cdbbf3fac90dc6f118bfd64384f309edeadd902d7c8fb17f02ffa1fc3f49690", size = 104716, upload-time = "2022-10-25T20:38:06.303Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e0/a9/023730ba63db1e494a271cb018dcd361bd2c917ba7004c3e49d5daf795a2/py_cpuinfo-9.0.0-py3-none-any.whl", hash = "sha256:859625bc251f64e21f077d099d4162689c762b5d6a4c3c97553d56241c9674d5", size = 22335, upload-time = "2022-10-25T20:38:27.636Z" },
]
[[package]]
name = "pygments"
version = "2.20.0"
@@ -126,6 +139,19 @@ wheels = [
    { url = "https://files.pythonhosted.org/packages/e5/35/f8b19922b6a25bc0880171a2f1a003eaeb93657475193ab516fd87cac9da/pytest_asyncio-1.3.0-py3-none-any.whl", hash = "sha256:611e26147c7f77640e6d0a92a38ed17c3e9848063698d5c93d5aa7aa11cebff5", size = 15075, upload-time = "2025-11-10T16:07:45.537Z" },
]
[[package]]
name = "pytest-benchmark"
version = "5.2.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "py-cpuinfo" },
{ name = "pytest" },
]
sdist = { url = "https://files.pythonhosted.org/packages/24/34/9f732b76456d64faffbef6232f1f9dbec7a7c4999ff46282fa418bd1af66/pytest_benchmark-5.2.3.tar.gz", hash = "sha256:deb7317998a23c650fd4ff76e1230066a76cb45dcece0aca5607143c619e7779", size = 341340, upload-time = "2025-11-09T18:48:43.215Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/33/29/e756e715a48959f1c0045342088d7ca9762a2f509b945f362a316e9412b7/pytest_benchmark-5.2.3-py3-none-any.whl", hash = "sha256:bc839726ad20e99aaa0d11a127445457b4219bdb9e80a1afc4b51da7f96b0803", size = 45255, upload-time = "2025-11-09T18:48:39.765Z" },
]
[[package]]
name = "ruff"
version = "0.15.12"