Changelog
All notable changes to informix-db. Versioning is CalVer — YYYY.MM.DD for date-based releases, YYYY.MM.DD.N for same-day post-releases per PEP 440.
2026.05.04.3 — Resilience tests (fault injection)
Added
- `tests/_proxy.py` — `ControlledProxy` helper: a thread-based TCP forwarder between the test client and Informix, with a `kill()` method that sends TCP RST (via `SO_LINGER=0`) to simulate a network drop or server crash. Two-thread pump model; the RST-on-close mimics a router drop rather than a clean shutdown. Used as a context manager.
- `tests/test_resilience.py` — 12 integration tests filling the resilience gap identified in the test-coverage audit:
  - Network drop mid-SELECT raises `OperationalError` cleanly (not a hang)
  - Network drop after describe but before fetch
  - Network drop during fetch iteration (already-materialized rows still readable, fresh execute fails)
  - Local socket close (yank-the-rug from the client side)
  - I/O error marks the connection unusable
  - Pool evicts a connection that died mid-`with` block
  - Pool revives after all idle connections died (health check on acquire mints fresh ones)
  - Async cancellation via `asyncio.wait_for` — pool stays usable for subsequent queries
  - Cursor reusable after SQL error
  - Connection survives cursor close after error
  - Pool sustained-load smoke (50 acquire/release cycles, no leak)
  - `read_timeout` fires on a hung connection
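The fault-injection helper can be sketched roughly as follows. This is a hypothetical reconstruction, not the actual `tests/_proxy.py`; only the described behavior (thread-based forwarding, `kill()`, `SO_LINGER=0` for RST-on-close) comes from the changelog.

```python
import socket
import struct
import threading

class ControlledProxy:
    """Hypothetical sketch of the tests/_proxy.py helper."""

    def __init__(self, upstream_host, upstream_port):
        self._upstream = (upstream_host, upstream_port)
        self._sockets = []
        self._listener = socket.socket()
        self._listener.bind(("127.0.0.1", 0))
        self._listener.listen(1)
        self.port = self._listener.getsockname()[1]
        threading.Thread(target=self._accept, daemon=True).start()

    def _accept(self):
        try:
            client, _ = self._listener.accept()
        except OSError:
            return  # listener closed by kill()
        server = socket.create_connection(self._upstream)
        self._sockets += [client, server]
        # Two-thread pump model: one thread per direction.
        for src, dst in ((client, server), (server, client)):
            threading.Thread(target=self._pump, args=(src, dst),
                             daemon=True).start()

    @staticmethod
    def _pump(src, dst):
        try:
            while chunk := src.recv(4096):
                dst.sendall(chunk)
        except OSError:
            pass  # link dropped; pump thread exits

    def kill(self):
        """Drop the link hard: SO_LINGER=0 makes close() send TCP RST,
        mimicking a router drop or server crash rather than a clean FIN."""
        for sock in self._sockets + [self._listener]:
            try:
                sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                                struct.pack("ii", 1, 0))
                sock.close()
            except OSError:
                pass

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.kill()
```

A test connects to `proxy.port` instead of the real server, runs a query, then calls `kill()` at the chosen moment and asserts the driver raises instead of hanging.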
What this catches
- Hangs (waiting forever on a dead socket)
- Silent data corruption (treating EOF as a valid tuple)
- Double-fault (one error → cleanup raises a different error)
- Pool poisoning (returning a broken connection to the pool)
- Stale cursor reuse (same cursor reused across an error boundary)
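The pool-poisoning and eviction behavior the tests pin down reduces to one invariant: a connection that fails a health check never reaches a caller and never re-enters the idle set. A minimal sketch of that invariant, assuming a `factory` callable and connections with `is_alive()`/`close()` (the real `informix_db.create_pool` is more elaborate):

```python
import queue

class TinyPool:
    """Illustrative sketch, not the shipped pool implementation."""

    def __init__(self, factory, maxsize=4):
        self._factory = factory
        self._idle = queue.Queue(maxsize)

    def acquire(self):
        while True:
            try:
                conn = self._idle.get_nowait()
            except queue.Empty:
                return self._factory()      # pool drained: mint a fresh connection
            if conn.is_alive():             # health check on acquire
                return conn
            conn.close()                    # evict the dead connection, keep looking

    def release(self, conn):
        if conn.is_alive():
            self._idle.put_nowait(conn)     # only healthy connections go back
        else:
            conn.close()                    # a broken connection never re-enters
```

The "pool revives after all idle connections died" test is exactly the `queue.Empty` branch: every stale connection is evicted in the loop, and the factory mints a replacement.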
Tests
12 new integration tests. Total: 69 unit + 203 integration = 272 tests.
The Phase 19 work fills the highest-priority gap from the test-adequacy audit. Remaining gaps from that audit (UTF-8 locale, server-version matrix, performance benchmarks) are real but lower-severity.
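One of the tests above asserts that `read_timeout` fires within bounds on a hung connection. The underlying mechanism can be sketched as a socket-timeout-to-driver-error translation; names here (`recv_with_timeout`, the local `OperationalError` stand-in) are illustrative, not the driver's internals:

```python
import socket

class OperationalError(Exception):
    """Stand-in for the driver's PEP 249 OperationalError."""

def recv_with_timeout(sock, nbytes, read_timeout):
    # A hung peer never sends; settimeout bounds the wait, and the
    # socket-level timeout is translated into the driver error class.
    sock.settimeout(read_timeout)
    try:
        return sock.recv(nbytes)
    except socket.timeout as exc:
        raise OperationalError("read timed out") from exc
```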
2026.05.04.2 — Server-side scrollable cursors
Added
- Server-side scrollable cursors (Phase 18): opt in via `conn.cursor(scrollable=True)`. The cursor opens with `SQ_SCROLL` (24) before `SQ_OPEN` (6), the result set stays materialized server-side, and each scroll method sends `SQ_SFETCH` (23) to fetch one row at a time. Use this for huge result sets where in-memory materialization would be wasteful.

The user-facing API is identical to Phase 17's in-memory scroll (`fetch_first`, `fetch_last`, `fetch_prior`, `fetch_absolute`, `fetch_relative`, `scroll`, `rownumber`); only the internal mechanism differs:

|  | Default cursor | `scrollable=True` |
| --- | --- | --- |
| Memory | All rows materialized | One row at a time |
| Network round-trips per fetch | 0 (after initial NFETCH) | 1 (one SFETCH per call) |
| Cursor lifetime | Closed after `execute()` | Open until `close()` |
| Best for | Moderate result sets, sequential iteration | Huge result sets, random access |

The implementation discovers the total row count lazily via SFETCH(LAST=4) when negative absolute indexing requires it; the result is cached in `_scroll_total_rows`. Position tracking is authoritative from the server's `SQ_TUPID` (25) tag, not client-computed.
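The lazy total-count discovery can be sketched as follows; `sfetch` stands in for one SQ_SFETCH round-trip (returning the row plus the server-reported 1-indexed position), and the class name is illustrative:

```python
SFETCH_LAST, SFETCH_ABSOLUTE = 4, 6   # scrolltype values from the wire notes

class LazyScrollPosition:
    """Hypothetical sketch of the caching behind _scroll_total_rows."""

    def __init__(self, sfetch):
        self._sfetch = sfetch
        self._total = None            # plays the role of _scroll_total_rows

    def fetch_absolute(self, n):
        if n < 0:
            if self._total is None:
                # SFETCH(LAST=4): the server-reported 1-indexed position of
                # the last row is also the total row count. Paid only once.
                _, self._total = self._sfetch(SFETCH_LAST, 0)
            n += self._total
        row, pos = self._sfetch(SFETCH_ABSOLUTE, n + 1)  # server is 1-indexed
        return row, pos - 1   # rownumber stays 0-indexed, server-authoritative
```

Note the returned position comes from the server's reply (the `SQ_TUPID` role), not from client arithmetic.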
Wire-protocol details
- `SQ_SFETCH` (23): `[short SQ_ID=4][int 23][short scrolltype][int target][int bufSize=4096][short SQ_EOT]`. scrolltype values: 1=NEXT, 4=LAST, 6=ABSOLUTE.
- `SQ_SCROLL` (24): emitted between CURNAME and SQ_OPEN to mark the cursor as scrollable.
- `SQ_TUPID` (25): server response carrying the 1-indexed row position the server just delivered: `[short 25][int rowID]`.
The trap on the way: I initially used SHORT for bufSize and the server hung silently — same SHORT-vs-INT diagnostic pattern as Phase 4.x's CURNAME+NFETCH. Captured a JDBC trace, byte-diffed against ours, found the mismatch.
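The SQ_SFETCH layout above can be packed mechanically with `struct`. This is a sketch under stated assumptions: the `SQ_EOT` numeric value (12 here) and big-endian byte order are guesses for illustration, not taken from the protocol notes; only the field sequence and widths come from the byte diagram.

```python
import struct

SQ_ID = 4
SQ_SFETCH = 23
SQ_EOT = 12          # assumed value; the diagram gives only the field name
SCROLL_NEXT, SCROLL_LAST, SCROLL_ABSOLUTE = 1, 4, 6

def encode_sfetch(scrolltype, target, buf_size=4096):
    # [short SQ_ID=4][int 23][short scrolltype][int target]
    # [int bufSize=4096][short SQ_EOT]
    # bufSize is packed as a 4-byte INT — shrinking it to a SHORT is
    # exactly the silent-hang trap described above.
    return struct.pack(">hihiih", SQ_ID, SQ_SFETCH, scrolltype,
                       target, buf_size, SQ_EOT)
```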
Tests
14 new integration tests in test_scroll_cursor_server.py. Total: 69 unit + 191 integration = 260 tests.
2026.05.04.1 — Scroll cursors
Added
- Scroll cursor API on `Cursor` (Phase 17):
  - `cur.scroll(value, mode='relative'|'absolute')` — PEP 249 compatible
  - `cur.fetch_first()` / `cur.fetch_last()` — jump to the ends
  - `cur.fetch_prior()` — backward step (SQL-standard semantics: from past-end yields the last row)
  - `cur.fetch_absolute(n)` — 0-indexed jump; negative `n` indexes from the end
  - `cur.fetch_relative(n)` — n-step from the current position
  - `cur.rownumber` — current 0-indexed position (`None` if before-first or no result set)
- In-memory implementation — no new wire protocol; the existing materialized result set in `cur._rows` is now indexed rather than iterated. For server-side scroll over huge result sets, `SQ_SFETCH` (tag 23) would be needed — Phase 18 if anyone hits the in-memory ceiling.
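The in-memory mechanism amounts to indexing a list with a position cursor. A minimal sketch (illustrative class, not the shipped `Cursor`):

```python
class InMemoryScroll:
    """Sketch of scroll over a materialized row list, like cur._rows."""

    def __init__(self, rows):
        self._rows = rows
        self._pos = -1                 # before-first

    @property
    def rownumber(self):
        return self._pos if self._pos >= 0 else None

    def fetch_absolute(self, n):
        if n < 0:
            n += len(self._rows)       # negative n indexes from the end
        if not 0 <= n < len(self._rows):
            raise IndexError("scroll target out of range")
        self._pos = n
        return self._rows[n]

    def scroll(self, value, mode="relative"):
        target = self._pos + value if mode == "relative" else value
        return self.fetch_absolute(target)

    def fetch_first(self):
        return self.fetch_absolute(0)

    def fetch_last(self):
        return self.fetch_absolute(len(self._rows) - 1)
```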
Tests
14 new integration tests in test_scroll_cursor.py. Total: 69 unit + 177 integration = 246 tests.
2026.05.04 — Library completion
The Phase 0 ambition — first pure-Python Informix SQLI driver — reaches feature completeness. Adds async, TLS, connection pool, smart-LOBs, fast-path RPC, composite UDTs.
Added
- Async API (`informix_db.aio`) — `AsyncConnection`, `AsyncCursor`, `AsyncConnectionPool` for FastAPI / aiohttp / asyncio. Each blocking I/O call is offloaded to a worker thread via `asyncio.to_thread`; the event loop never blocks.
- Connection pool (`informix_db.create_pool`) — thread-safe with min/max sizing, lazy growth, health-check on acquire, error-aware eviction.
- TLS — `tls=True` for self-signed dev servers, `tls=ssl.SSLContext` for production. Wrapping happens in `IfxSocket` so the rest of the protocol layer is unaware.
- Smart-LOBs (BLOB / CLOB) — full read/write end-to-end via `cursor.read_blob_column()` / `cursor.write_blob_column()` using the server's `lotofile` / `filetoblob` SQL functions intercepted at the `SQ_FILE` (98) protocol level.
- Legacy in-row blobs (BYTE / TEXT) — bind + read via the `SQ_BBIND` / `SQ_BLOB` / `SQ_FETCHBLOB` protocol family.
- Fast-path RPC (`Connection.fast_path_call`) — direct stored-procedure invocation bypassing PREPARE/EXECUTE; routine handles cached per-connection.
- Composite UDT recognition — `ROW`, `SET`, `MULTISET`, `LIST` columns return typed `RowValue` / `CollectionValue` wrappers exposing schema and raw bytes.
- Type codecs — `INTERVAL` (both DAY-TO-FRACTION and YEAR-TO-MONTH families), `DATETIME` (all qualifier ranges), `DECIMAL` / `MONEY` (BCD with sign+exp head byte and asymmetric base-100 complement for negatives), `DATE`, `BOOL`, all integer / float widths, `CHAR` / `VARCHAR` / `LVARCHAR`.
- Transactions — implicit `SQ_BEGIN` before each transaction in non-ANSI logged DBs; transparent no-ops on unlogged DBs.
- PEP 249 exception hierarchy — server `SQLCODE` mapped to the right exception class (`IntegrityError` for duplicate-key violations, `ProgrammingError` for syntax errors, etc.).
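The thread-offload pattern behind the async API can be sketched as follows; `AsyncConnectionSketch` and `FakeBlockingConn` are illustrative stand-ins, not the shipped classes:

```python
import asyncio

class AsyncConnectionSketch:
    """Sketch of wrapping a blocking connection for asyncio use."""

    def __init__(self, blocking_conn):
        self._conn = blocking_conn

    async def execute(self, sql):
        # Every blocking call runs in a worker thread via asyncio.to_thread,
        # so the event loop stays responsive while socket I/O is in flight.
        return await asyncio.to_thread(self._conn.execute, sql)

class FakeBlockingConn:
    def execute(self, sql):
        return f"executed: {sql}"

async def demo():
    conn = AsyncConnectionSketch(FakeBlockingConn())
    return await conn.execute("SELECT FIRST 1 * FROM systables")
```

This trades thread-pool overhead for simplicity; it is not a native async transport, which is why the known-gaps section below flags native async I/O as deferred.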
Documentation
- `README.md` — overview and quick-start
- `docs/USAGE.md` — practical recipes and migration guide
- `docs/PROTOCOL_NOTES.md` — byte-level wire-format reference
- `docs/DECISION_LOG.md` — phase-by-phase architectural decisions, with the why preserved
- `docs/JDBC_NOTES.md` — index into the decompiled IBM JDBC reference
- `docs/CAPTURES/` — annotated socat hex-dump captures
Test coverage
232 tests total: 69 unit + 163 integration. Unit tests run with no external dependencies; integration tests run against the IBM Informix Developer Edition Docker image.
Known gaps (deferred)
- Full ROW/COLLECTION recursive parsing: Phase 12 ships type recognition plus a raw-bytes wrapper. Parsing the textual representation into typed Python tuples/sets/lists is deferred — most workloads can use SQL projections (`SELECT row_col.fieldname FROM tbl`) instead.
- UDT parameter encoding for fast-path: scalar params/returns work; passing a 72-byte BLOB locator as a UDT param requires extending the `SQ_BIND` encoder with the extended_owner/extended_name preamble for type > 18.
- Native async I/O: Phase 16 ships a thread-pool wrapper that's functionally equivalent for typical FastAPI workloads. Native async (asyncpg-style transport abstraction) would be Phase 17 if a real workload needs it.
2026.05.02 — Phase 1: connection lifecycle
Initial release. `connect()` / `close()` works end-to-end. Cursor / execute / fetch arrived in Phase 2 (subsequent commits within the same session).