Closes the bulk-fetch gap to within ~7-15% of IfxPy. The lever was the
buffer/I/O machinery, not the codec: Phases 37/38 had already brought
the codec close to IfxPy's C path, and the remaining gap was ~450k
read_exact calls per 100k-row fetch, each doing its own recv-loop and
bytes.join.
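For reference, a minimal sketch of the per-field pattern being replaced;
read_exact is named above, but the body is illustrative rather than the
actual legacy code:

    import socket

    def read_exact_legacy(sock: socket.socket, n: int) -> bytes:
        # One recv loop and one bytes.join per field read; at ~450k calls
        # per 100k-row fetch this dominates the profile.
        chunks = []
        remaining = n
        while remaining:
            chunk = sock.recv(remaining)   # syscall per iteration
            if not chunk:
                raise ConnectionError("connection closed mid-read")
            chunks.append(chunk)
            remaining -= len(chunk)
        return b"".join(chunks)            # fresh bytes object per field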
Architecture: IfxSocket owns a connection-scoped bytearray + integer
offset cursor; BufferedSocketReader is a thin parser-view that delegates
buffer-fill to the socket. One recv() per ~64 KB instead of per field.
This is how asyncpg (buffer.pyx) and psycopg3 (pq.PGconn) structure
their read paths.
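A hedged sketch of that shape. IfxSocket, BufferedSocketReader,
read_exact and the ~64 KB recv size come from this note; the method
names (_fill, take) and all internals are illustrative assumptions:

    import socket

    RECV_SIZE = 64 * 1024  # one recv() per ~64 KB, as described above

    class IfxSocket:
        """Owns the connection-scoped bytearray and integer offset cursor."""

        def __init__(self, sock: socket.socket) -> None:
            self._sock = sock
            self._buf = bytearray()
            self._pos = 0  # offset of the first unread byte

        def _fill(self, n: int) -> None:
            """Recv until at least n unread bytes are buffered."""
            # Periodically drop consumed bytes so the buffer stays bounded.
            if self._pos > RECV_SIZE:
                del self._buf[: self._pos]
                self._pos = 0
            while len(self._buf) - self._pos < n:
                chunk = self._sock.recv(RECV_SIZE)
                if not chunk:
                    raise ConnectionError("connection closed mid-read")
                self._buf += chunk

        def take(self, n: int) -> bytes:
            """Return the next n bytes and advance the cursor."""
            self._fill(n)
            start, self._pos = self._pos, self._pos + n
            return bytes(self._buf[start : self._pos])

    class BufferedSocketReader:
        """Thin parser view; all buffer-fill work is delegated to the socket."""

        def __init__(self, ifx_sock: IfxSocket) -> None:
            self._sock = ifx_sock

        def read_exact(self, n: int) -> bytes:
            return self._sock.take(n)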
The buffer MUST be socket-scoped, not reader-scoped: the pipelined-
executemany path (Phase 33) streams N responses back-to-back across
multiple cursor reads, and a per-reader buffer would throw away
pre-fetched bytes when one reader is destroyed. (The first iteration
of this phase tried a per-reader buffer and hung on
test_executemany_1000_rows.)
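To make the constraint concrete, a sketch of that scenario reusing the
illustrative classes above; the 4-byte length framing is purely
made up for the example:

    def drain_responses(ifx_sock: IfxSocket, count: int) -> None:
        # One recv() on the socket may pre-fetch bytes that belong to a
        # later response; they survive in the socket-scoped buffer even
        # if the reader that triggered the recv is discarded in between.
        for _ in range(count):
            reader = BufferedSocketReader(ifx_sock)
            header = reader.read_exact(4)        # illustrative framing only
            body_len = int.from_bytes(header, "big")
            _body = reader.read_exact(body_len)  # may already be buffered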
A/B vs Phase 38, same harness, warmed cache:
select_scaling_1000 2.90 -> 1.72 ms (-41%)
select_scaling_10000 24.32 -> 16.08 ms (-34%)
select_scaling_100000 250.36 -> 168.98 ms (-32%)
Head-to-head vs IfxPy 2.0.7:
select_scaling_1000 1.05x (basically tied)
select_scaling_10000 1.07x
select_scaling_100000 1.15x
IQR collapsed 9x at 100k (3.6 ms -> 0.4 ms): fewer recvs mean fewer
scheduler/jitter pulses showing up in the measurement.
Default ON. Set IFX_BUFFERED_READER=0 to fall back to the legacy reader
(still tested in CI as the escape hatch). Both paths green: 251/251
integration tests pass on each.
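For illustration, one way such an opt-out flag could be read; the env
var name is from this note, the helper name is an assumption:

    import os

    def buffered_reader_enabled() -> bool:
        # Buffered reader is on by default; IFX_BUFFERED_READER=0 falls
        # back to the legacy reader.
        return os.environ.get("IFX_BUFFERED_READER", "1") != "0"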