Fixes the dirty-pool-checkout bug surfaced by Margaret Hamilton's
system-wide audit (Critical #1).
The bug: ConnectionPool.release() returned connections with open
server-side transactions still active. Request A's uncommitted
INSERTs would be inherited by Request B reusing the same connection -
B's commit would land A's writes permanently; B's rollback would
silently lose them. Same shape as psycopg2's pre-2.5 dirty-pool bug.
The fix: pool.release() now rolls back any open transaction before
returning the connection to the idle list. The rollback runs OUTSIDE
the pool lock since it's a wire round-trip - the connection is
already off the idle list and counted in _total, so no other thread
can grab it during the rollback window. If the rollback itself fails
(dead socket, etc.), the connection is evicted rather than recycled.
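A minimal sketch of the new release path; the attribute names (_idle,
_lock, _total) and the _evict helper are illustrative assumptions, not
the driver's real internals:

    import logging

    log = logging.getLogger("informix_db.pool")

    def release(self, conn):
        # The connection is already off the idle list and still counted
        # in _total, so no other thread can grab it while we do the
        # rollback round-trip outside the lock.
        try:
            conn.rollback()  # clears any open server-side transaction
        except Exception:
            # Dead socket etc.: don't recycle. (WARNING log added in
            # the review pass described below.)
            log.warning("rollback on release failed; evicting connection")
            self._evict(conn)  # assumed helper: close + decrement _total
            return
        with self._lock:
            self._idle.append(conn)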
Async path covered automatically: AsyncConnectionPool.release()
delegates to the sync pool's release via _to_thread.
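Roughly, assuming _to_thread is a thin wrapper over asyncio.to_thread:

    async def release(self, conn):
        # run the sync rollback-then-recycle logic off the event loop
        await _to_thread(self._pool.release, conn)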
Margaret Hamilton review pass surfaced two findings, both addressed:
* Silent rollback failure: added a WARNING log via
  logging.getLogger("informix_db.pool") so evictions are debuggable.
  First logger in the project.
* Async cancellation race: the fix doesn't introduce the
asyncio.wait_for race (Critical #2, deferred to Phase 27), but it
adds a code path that can trigger it. Documented loudly in
pool.release() docstring, aio.py module docstring, and USAGE.md
async section. Recommendation: use read_timeout on the connection
instead of asyncio.wait_for until Phase 27 lands.
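A sketch of that recommendation; whether read_timeout is passed at
connect() is an assumption here, as is the aio.connect() call shape:

    # Risky until Phase 27: cancelling wait_for can abandon a reply
    # mid-wire and desynchronize the connection.
    row = await asyncio.wait_for(cur.fetchone(), timeout=5.0)

    # Safer: let the driver enforce the deadline at the socket level.
    conn = await informix_db.aio.connect(host=..., read_timeout=5.0)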
Two new regression tests in tests/test_pool.py:
* test_uncommitted_writes_invisible_to_next_acquirer (the bug)
* test_committed_writes_survive_pool_checkout (no over-correction)
Verified the regression test catches the bug: stashed the fix, ran
the test - it fails with "B sees 1 rows - leaked across pool
checkout boundary" - confirming it tests the real failure mode.
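The shape of the first test (table name and pool fixture are
illustrative; the real test differs):

    def test_uncommitted_writes_invisible_to_next_acquirer(pool):
        # Request A inserts but never commits, then releases.
        conn_a = pool.acquire()
        conn_a.cursor().execute("INSERT INTO t VALUES (1)")
        pool.release(conn_a)  # must roll the open transaction back

        # Request B gets the same connection off the idle list.
        conn_b = pool.acquire()
        cur = conn_b.cursor()
        cur.execute("SELECT COUNT(*) FROM t")
        (n,) = cur.fetchone()
        assert n == 0, f"B sees {n} rows - leaked across pool checkout boundary"
        pool.release(conn_b)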
Total tests: 72 unit + 226 integration + 28 benchmark = 326.
Deferred to Phase 27 per Hamilton audit:
* Critical #2 (concurrency / per-connection wire lock)
* High #3 (async cancellation routes to broken=True)
* High #4 (bare except in _raise_sq_err drain)
* High #5 (no cursor finalizers - server-side resource leaks)
Thread-safe connection pool with min/max sizing, lazy growth,
idle recycling, and per-acquire health-check.
API:
    pool = informix_db.create_pool(host=..., min_size=1, max_size=10)
    with pool.connection() as conn:
        ...
    pool.close()
Design choices:
* Lazy growth from min_size: pre-opens min_size connections on
  construction, grows to max_size on demand. Startup cost is bounded
  at min_size handshakes; the rest of the burst capacity is paid for
  only when demand arrives.
* Health-check on acquire, not release. Sends a trivial SELECT 1
  round-trip before yielding, so dead idle connections (server-side
  timeout, network drop) are silently replaced. Roughly 1ms per
  acquire buys "users never see a stale-connection error".
  Check-on-release would be the wrong end: idle time is when
  connections actually die.
* Eviction on OperationalError/InterfaceError only. The "with
  pool.connection()" context manager retains the connection on
  application-level errors (ValueError, IntegrityError, etc.),
  avoiding the "every constraint violation evicts a healthy
  connection" pitfall. See the sketch after this list.
* Releases the pool lock during connect(): the slow handshake
  (50-100ms) doesn't serialize other threads' acquires.
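A sketch of how acquire and eviction compose; _evict is an assumed
helper, and the exception names are the PEP 249 exports the driver
already provides:

    from contextlib import contextmanager
    from informix_db import InterfaceError, OperationalError

    @contextmanager
    def connection(self, timeout=None):
        conn = self.acquire(timeout)  # acquire runs the SELECT 1 probe;
                                      # dead idle connections are replaced
        try:
            yield conn
        except (OperationalError, InterfaceError):
            self._evict(conn)         # wire-level failure: drop it
            raise
        except Exception:
            self.release(conn)        # app-level error: keep the conn
            raise
        else:
            self.release(conn)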
Tests: 15 integration tests in test_pool.py covering:
* API & lifecycle (pre-open, lazy growth, context-manager, LIFO)
* Exhaustion (timeout when full, per-acquire override, unblock-on-release)
* Eviction (explicit broken, auto on OperationalError, retain on
application errors)
* Health-check (dead idle silently replaced)
* Shutdown (close drains, idempotent, context-manager)
* Multi-thread safety (8 workers × 3 queries each, no leaks)
Total: 69 unit + 154 integration = 223 tests.
With Phase 14 (TLS) and Phase 15 (pool), the project covers the
three things a typical Python web/API workload needs from a
database driver: PEP 249 surface, TLS transport, connection pool.
Only async (informix_db.aio) remains in the backlog.