Closes the unbounded-leak gap on long-lived pooled connections that
Phase 28's cursor finalizer left as future work. When the finalizer
can't acquire the wire lock (cross-thread GC during another thread's
op), it no longer leaks and logs a warning; instead it enqueues the
cleanup PDUs onto a per-connection deferred queue. The next normal
operation drains the queue under the wire lock, completing the cleanup
atomically before the new op.
What changed:
connections.py:
* Connection._pending_cleanup: list[bytes] + Connection._cleanup_lock
  (separate from _wire_lock - a tiny critical section for list mutation
  only, so a finalizer can enqueue without waiting on an in-flight wire op)
* _enqueue_cleanup(pdus): thread-safe append, callable from any
thread (including finalizers without lock ownership)
* _drain_pending_cleanup(): pop-the-list + send-each-PDU. Caller
must hold _wire_lock. Force-closes on wire desync (same doctrine
as _raise_sq_err)
* _send_pdu opportunistically drains the queue before sending. Cost
is one length-check when queue is empty (the common case)
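The mechanism above can be sketched roughly as follows. This is a minimal
illustration, not the real connections.py: the names (_pending_cleanup,
_cleanup_lock, _wire_lock, _enqueue_cleanup, _drain_pending_cleanup,
_send_pdu) come from the commit text, while the `sent` list and
`_write_wire` are placeholder stand-ins for the actual socket I/O, and the
force-close-on-desync path is omitted.

```python
import threading


class Connection:
    """Sketch of the deferred-cleanup queue (assumed shape, not the real class)."""

    def __init__(self) -> None:
        self._wire_lock = threading.Lock()      # guards actual wire I/O
        self._cleanup_lock = threading.Lock()   # guards list mutation only
        self._pending_cleanup: list[bytes] = []
        self.sent: list[bytes] = []             # placeholder for the real socket

    def _enqueue_cleanup(self, pdus: list[bytes]) -> None:
        # Thread-safe append; callable from any thread, including a
        # finalizer that could not take _wire_lock. Never touches the wire.
        with self._cleanup_lock:
            self._pending_cleanup.extend(pdus)

    def _drain_pending_cleanup(self) -> None:
        # Caller must hold _wire_lock. Pop-and-clear under _cleanup_lock,
        # then do the wire I/O *without* holding _cleanup_lock.
        with self._cleanup_lock:
            pending, self._pending_cleanup = self._pending_cleanup, []
        for pdu in pending:
            self._write_wire(pdu)

    def _send_pdu(self, pdu: bytes) -> None:
        with self._wire_lock:
            # Opportunistic drain: one length-check when the queue is empty.
            if self._pending_cleanup:
                self._drain_pending_cleanup()
            self._write_wire(pdu)

    def _write_wire(self, pdu: bytes) -> None:
        self.sent.append(pdu)  # placeholder for the real socket write
```

Queued PDUs go out before the new op's PDU, so the cleanup completes
atomically ahead of the next operation on the same wire.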
cursors.py:
* _finalize_cursor enqueues [_CLOSE_PDU, _RELEASE_PDU] instead of
  leaking when the lock is busy. The WARNING is demoted to DEBUG since
  the cleanup is now deferred rather than lost, so nothing accumulates.
Lock-order discipline: _cleanup_lock is held only for list extend/pop;
_wire_lock is held for the actual wire I/O. Never grab _wire_lock
while holding _cleanup_lock - the drain pops-and-clears under
_cleanup_lock, then iterates under _wire_lock (which the caller
already holds).
Two new regression tests:
* test_enqueue_cleanup_drains_on_next_send_pdu - verifies queue
mechanism end-to-end
* test_pending_cleanup_thread_safe_enqueue - 8 threads x 50 concurrent
  enqueues each, no entries lost
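The concurrency regression test can be sketched as below. The `_Conn`
stub is an assumption standing in for the real Connection; the shape of
the check (8 threads x 50 enqueues, then count and uniqueness) follows
the commit text.

```python
import threading


class _Conn:
    """Stand-in exposing only the queue and its lock."""

    def __init__(self) -> None:
        self._cleanup_lock = threading.Lock()
        self._pending_cleanup: list[bytes] = []

    def _enqueue_cleanup(self, pdus: list[bytes]) -> None:
        with self._cleanup_lock:
            self._pending_cleanup.extend(pdus)


def test_pending_cleanup_thread_safe_enqueue() -> None:
    conn = _Conn()

    def worker(i: int) -> None:
        for j in range(50):
            conn._enqueue_cleanup([f"{i}:{j}".encode()])

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # No race-loss: every enqueue landed, and each entry is distinct.
    assert len(conn._pending_cleanup) == 8 * 50
    assert len(set(conn._pending_cleanup)) == 8 * 50
```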
72 unit + 231 integration + 28 benchmark = 331 tests; ruff clean.
Hamilton audit punch list status:
0 critical, 0 high, 3 medium remaining (login errors, _send_exit
cleanup, pool acquire re-entrance) - all Phase 30 scope.