informix-db/tests/test_smart_lob_write.py
Ryan Malloy fdb9ba32d5 Phase 28: Resource leak hardening (2026.05.05.2)
Closes Hamilton audit High #4 (bare-except in error drain) and
High #5 (no cursor finalizers), plus 1 medium one-liner.

After Phases 26-28, 0 CRITICAL and 0 HIGH audit findings remain.
Driver is PRODUCTION READY.

What changed:

cursors.py:
* Cursor finalizers via weakref.finalize. A mid-fetch raise (or any
  GC without an explicit close()) now releases server-side resources
  (CLOSE + RELEASE PDUs). Static PDU bytes are pre-built at module
  load so the finalizer can run on any thread without allocating or
  calling cursor methods.
* Non-blocking lock acquire prevents cross-thread GC deadlock.
  WARNING log on lock-busy so leak accumulation is visible.
* state=[False] list pattern keeps the finalizer closure free of
  strong cursor references. The GIL dependency of the atomic single-
  element mutation is documented. (Finalizer shape sketched after
  this list.)
* _raise_sq_err near-token parse now catches (ProtocolError, OSError)
  only.
* _raise_sq_err drain: force-close the connection on the same
  exceptions (the wire is unrecoverable after a desync).
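
A minimal sketch of that finalizer shape (a reconstruction for
illustration; the PDU byte values, names like _finalize_cursor, and
the lock wiring are placeholders, not the driver's actual internals):

    import logging
    import threading
    import weakref

    log = logging.getLogger(__name__)

    # Placeholders for the static CLOSE/RELEASE PDU bytes pre-built
    # at module load.
    _CLOSE_PDU = b"<close-pdu>"
    _RELEASE_PDU = b"<release-pdu>"

    def _finalize_cursor(sock, wire_lock, state):
        # Module-level callback: the weakref.finalize callback must not
        # close over the cursor, or the cursor could never be collected.
        if state[0]:  # closed explicitly; nothing to release
            return
        # Non-blocking acquire: GC can fire this on any thread, and a
        # blocking acquire could deadlock against a thread mid-I/O.
        if not wire_lock.acquire(blocking=False):
            log.warning("finalizer skipped, wire lock busy; cursor "
                        "leaked server-side until connection close")
            return
        try:
            sock.sendall(_CLOSE_PDU + _RELEASE_PDU)
        except OSError:
            pass  # socket already dead; nothing left to leak
        finally:
            wire_lock.release()

    class Cursor:
        def __init__(self, sock, wire_lock):
            # One-element list: mutating state[0] is atomic under the
            # GIL, and the finalizer shares it without referencing self.
            self._state = [False]
            self._finalizer = weakref.finalize(
                self, _finalize_cursor, sock, wire_lock, self._state
            )

        def close(self):
            # Explicit close sends CLOSE/RELEASE through the normal
            # cursor path (elided here), then disarms the finalizer.
            self._state[0] = True
            self._finalizer.detach()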

connections.py:
* _raise_sq_err drain: same hardening as the cursor version. Force-
  close on (ProtocolError, OSError, OperationalError); the latter
  comes from _drain_to_eot raising on unknown tags. Documented inline
  and sketched below.
* Added contextlib import for force-close suppression.
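
A minimal sketch of the drain hardening, with the exception classes
and the _drain_to_eot/_force_close internals stubbed out (names as
described above; the bodies are hypothetical):

    import contextlib

    class ProtocolError(Exception): ...
    class OperationalError(Exception): ...

    class Connection:
        def _drain_to_eot(self):
            """Consume remaining PDUs up to SQ_EOT (stub)."""
            raise NotImplementedError

        def _force_close(self):
            """Tear down the socket unconditionally (stub)."""
            raise NotImplementedError

        def _raise_sq_err(self, code, message):
            try:
                self._drain_to_eot()
            except (ProtocolError, OSError, OperationalError):
                # OperationalError is _drain_to_eot raising on an
                # unknown tag. All three mean PDU framing can no longer
                # be trusted, so tear the socket down rather than hand
                # a desynced wire back to the next caller.
                with contextlib.suppress(Exception):
                    self._force_close()
                raise
            raise OperationalError(f"SQLCODE {code}: {message}")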

cursors.py write_blob_column:
* BLOB_PLACEHOLDER validation now requires EXACTLY ONE occurrence.
  Pre-Phase-28, str.replace silently substituted every occurrence,
  corrupting SQL that contained the literal string in comments etc.
  Now raises ProgrammingError with a workaround pointer (sketch
  below).
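
Sketched with a hypothetical helper name (_substitute_placeholder is
illustrative; only the exactly-once rule and the error-text shape are
from this change):

    BLOB_PLACEHOLDER = "BLOB_PLACEHOLDER"

    class ProgrammingError(Exception): ...

    def _substitute_placeholder(sql: str, lob_expr: str) -> str:
        # Counting first makes the old silent-corruption mode
        # impossible: an unconditional str.replace also rewrote
        # BLOB_PLACEHOLDER inside comments and string literals.
        n = sql.count(BLOB_PLACEHOLDER)
        if n != 1:
            raise ProgrammingError(
                f"SQL must contain BLOB_PLACEHOLDER exactly once, "
                f"found {n} times; rename literal occurrences (e.g. "
                f"in comments) or build the SQL without the token"
            )
        return sql.replace(BLOB_PLACEHOLDER, lob_expr, 1)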

_resultset.py:
* Investigated an end-of-loop bounds check for parse_tuple_payload.
  Reverted: the long-standing off-by-one in the UDTVAR(lvarchar)
  trailing-pad logic produces benign over-reads (the payload is a
  fully-extracted bytes object, so over-reads return empty slices
  through unused branches). The real silent-corruption surfaces are
  the length-prefix decoders, which need branch-local checks (see
  the sketch below). Documented as a deliberate non-fix.
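
For contrast, the kind of branch-local check a length-prefix decoder
needs (illustrative only; _read_length_prefixed and the 4-byte
big-endian prefix are assumptions, not _resultset.py's actual layout):

    class ProtocolError(Exception): ...

    def _read_length_prefixed(payload: bytes, off: int) -> tuple[bytes, int]:
        # Both checks are branch-local: a bare slice would silently
        # return a truncated value instead of failing loudly.
        if off + 4 > len(payload):
            raise ProtocolError("truncated length prefix")
        n = int.from_bytes(payload[off:off + 4], "big")
        off += 4
        if off + n > len(payload):
            raise ProtocolError(f"declared length {n} overruns payload")
        return payload[off:off + n], off + n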

The Margaret Hamilton review surfaced two blocking conditions:

* Asymmetric failure handling: _raise_sq_err force-closed the
  connection on wire desync, but the cursor finalizer silently
  swallowed identical failures. "Same wire, same failure mode,
  same response" - finalizer now matches _raise_sq_err's discipline.

* Leak visibility: wire-lock-busy log was DEBUG. Promoted to WARNING
  so leak accumulation on pooled connections is visible.

Plus three documentation improvements (GIL dependency, OperationalError
in desync taxonomy, parse_tuple non-fix rationale).

One new regression test:
* test_write_blob_column_rejects_multiple_placeholders

72 unit + 229 integration + 28 benchmark = 329 tests; ruff clean.

Phase 29 ticket (Hamilton recommended): a deferred-cleanup queue,
drained at the next _send_pdu, closes the unbounded-leak gap on
long-lived pooled connections; rough shape sketched below. Not
blocking Phase 28.
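
A minimal sketch of that queue (only _send_pdu is named in the
ticket; defer_cleanup, _write, and the deque layout are hypothetical):

    import collections
    import threading

    class Connection:
        def __init__(self):
            self._wire_lock = threading.Lock()
            # PDU byte strings left behind by finalizers that could
            # not take the wire lock; deque append/popleft are
            # thread-safe without extra locking.
            self._deferred_pdus = collections.deque()

        def defer_cleanup(self, pdu):
            self._deferred_pdus.append(pdu)

        def _send_pdu(self, pdu):
            with self._wire_lock:
                # Drain deferred cleanups first: the leak window
                # becomes the gap between two sends, not unbounded.
                while self._deferred_pdus:
                    self._write(self._deferred_pdus.popleft())
                self._write(pdu)

        def _write(self, data):  # socket I/O stub
            raise NotImplementedError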

Hamilton audit verdict:
  Pre-26:  2 critical, 3 high, 5 medium
  Post-28: 0 critical, 0 high, 4 medium
2026-05-05 03:56:24 -06:00


"""Phase 11 integration tests — smart-LOB BLOB/CLOB write via SQ_FILE / filetoblob.
Phase 10 implemented BLOB *read* by leveraging ``lotofile(...)`` and
intercepting the resulting ``SQ_FILE`` (98) protocol. Phase 11 mirrors
that pattern in the *write* direction: the user calls
``filetoblob('/sentinel', 'client')`` (or ``filetoclob``) with bytes
pre-registered in ``cursor.virtual_files``. The server's read-from-
client SQ_FILE optype=2 messages drive our handler to stream the
registered bytes up.
The high-level API is ``cursor.write_blob_column(sql, blob_data, params)``
which uses a ``BLOB_PLACEHOLDER`` token in the SQL.
This is the symmetric counterpart of Phase 10's ``read_blob_column``
and the missing piece that makes the smart-LOB read+write loop
complete entirely in pure Python — no JDBC needed for fixture seeding.
"""

from __future__ import annotations

import contextlib
from collections.abc import Iterator

import pytest

import informix_db
from tests.conftest import ConnParams

pytestmark = pytest.mark.integration


def _connect(params: ConnParams) -> informix_db.Connection:
    return informix_db.connect(
        host=params.host,
        port=params.port,
        user=params.user,
        password=params.password,
        database=params.database,
        server=params.server,
        connect_timeout=10.0,
        read_timeout=10.0,
        autocommit=True,
    )


@pytest.fixture
def blob_table(logged_db_params: ConnParams) -> Iterator[str]:
    """A fresh BLOB table per test, dropped on teardown."""
    table = "t_p11_blob"
    with _connect(logged_db_params) as conn:
        cur = conn.cursor()
        with contextlib.suppress(Exception):
            cur.execute(f"DROP TABLE {table}")
        try:
            cur.execute(f"CREATE TABLE {table} (id INT, data BLOB)")
        except informix_db.Error as e:
            pytest.skip(f"sbspace unavailable ({e!r})")
    try:
        yield table
    finally:
        with _connect(logged_db_params) as conn:
            cur = conn.cursor()
            with contextlib.suppress(Exception):
                cur.execute(f"DROP TABLE {table}")


@pytest.fixture
def clob_table(logged_db_params: ConnParams) -> Iterator[str]:
    """A fresh CLOB table per test."""
    table = "t_p11_clob"
    with _connect(logged_db_params) as conn:
        cur = conn.cursor()
        with contextlib.suppress(Exception):
            cur.execute(f"DROP TABLE {table}")
        try:
            cur.execute(f"CREATE TABLE {table} (id INT, txt CLOB)")
        except informix_db.Error as e:
            pytest.skip(f"sbspace unavailable ({e!r})")
    try:
        yield table
    finally:
        with _connect(logged_db_params) as conn:
            cur = conn.cursor()
            with contextlib.suppress(Exception):
                cur.execute(f"DROP TABLE {table}")


# -------- BLOB write+read round-trip --------
def test_write_blob_round_trip_short(
    logged_db_params: ConnParams, blob_table: str
) -> None:
    """Short payload — single SQ_FILE_READ chunk."""
    payload = b"hello phase 11 blob write"
    with _connect(logged_db_params) as conn:
        cur = conn.cursor()
        cur.write_blob_column(
            f"INSERT INTO {blob_table} VALUES (?, BLOB_PLACEHOLDER)",
            payload,
            (1,),
        )
        got = cur.read_blob_column(
            f"SELECT data FROM {blob_table} WHERE id = ?", (1,)
        )
        assert got == payload


def test_write_blob_round_trip_multichunk(
    logged_db_params: ConnParams, blob_table: str
) -> None:
    """50KB payload — spans many SQ_FILE_READ chunks (32KB cap each)."""
    payload = bytes(range(256)) * 200  # 51200 bytes
    with _connect(logged_db_params) as conn:
        cur = conn.cursor()
        cur.write_blob_column(
            f"INSERT INTO {blob_table} VALUES (?, BLOB_PLACEHOLDER)",
            payload,
            (1,),
        )
        got = cur.read_blob_column(
            f"SELECT data FROM {blob_table} WHERE id = ?", (1,)
        )
        assert got == payload
        assert len(got) == 51200


def test_write_blob_empty(
    logged_db_params: ConnParams, blob_table: str
) -> None:
    """Empty bytes round-trip cleanly."""
    with _connect(logged_db_params) as conn:
        cur = conn.cursor()
        cur.write_blob_column(
            f"INSERT INTO {blob_table} VALUES (?, BLOB_PLACEHOLDER)",
            b"",
            (1,),
        )
        got = cur.read_blob_column(
            f"SELECT data FROM {blob_table} WHERE id = ?", (1,)
        )
        assert got == b""


def test_write_blob_binary_safe(
    logged_db_params: ConnParams, blob_table: str
) -> None:
    """All-byte-values payload — no encoding artifacts."""
    payload = bytes(range(256)) * 4  # 1024 bytes covering all values
    with _connect(logged_db_params) as conn:
        cur = conn.cursor()
        cur.write_blob_column(
            f"INSERT INTO {blob_table} VALUES (?, BLOB_PLACEHOLDER)",
            payload,
            (1,),
        )
        got = cur.read_blob_column(
            f"SELECT data FROM {blob_table} WHERE id = ?", (1,)
        )
        assert got == payload


def test_write_blob_update(
    logged_db_params: ConnParams, blob_table: str
) -> None:
    """UPDATE with BLOB column replaces the prior value."""
    with _connect(logged_db_params) as conn:
        cur = conn.cursor()
        cur.write_blob_column(
            f"INSERT INTO {blob_table} VALUES (?, BLOB_PLACEHOLDER)",
            b"original",
            (1,),
        )
        cur.write_blob_column(
            f"UPDATE {blob_table} SET data = BLOB_PLACEHOLDER WHERE id = ?",
            b"replacement",
            (1,),
        )
        got = cur.read_blob_column(
            f"SELECT data FROM {blob_table} WHERE id = ?", (1,)
        )
        assert got == b"replacement"


def test_write_blob_multiple_rows(
    logged_db_params: ConnParams, blob_table: str
) -> None:
    """Distinct INSERTs round-trip independently."""
    rows = [
        (1, b"first row"),
        (2, b"second row blob"),
        (3, b"third"),
    ]
    with _connect(logged_db_params) as conn:
        cur = conn.cursor()
        for rid, payload in rows:
            cur.write_blob_column(
                f"INSERT INTO {blob_table} VALUES (?, BLOB_PLACEHOLDER)",
                payload,
                (rid,),
            )
        for rid, expected in rows:
            got = cur.read_blob_column(
                f"SELECT data FROM {blob_table} WHERE id = ?", (rid,)
            )
            assert got == expected


# -------- CLOB --------
def test_write_clob_round_trip(
    logged_db_params: ConnParams, clob_table: str
) -> None:
    """``clob=True`` routes through ``filetoclob`` (not ``filetoblob``)."""
    text = "Lorem ipsum dolor sit amet, café résumé".encode("iso-8859-1")
    with _connect(logged_db_params) as conn:
        cur = conn.cursor()
        cur.write_blob_column(
            f"INSERT INTO {clob_table} VALUES (?, BLOB_PLACEHOLDER)",
            text,
            (1,),
            clob=True,
        )
        got = cur.read_blob_column(
            f"SELECT txt FROM {clob_table} WHERE id = ?", (1,)
        )
        assert got == text


# -------- Helper validation --------
def test_write_blob_column_requires_placeholder(
    logged_db_params: ConnParams, blob_table: str
) -> None:
    """SQL without ``BLOB_PLACEHOLDER`` is rejected."""
    with _connect(logged_db_params) as conn:
        cur = conn.cursor()
        with pytest.raises(
            informix_db.ProgrammingError, match="BLOB_PLACEHOLDER"
        ):
            cur.write_blob_column(
                f"INSERT INTO {blob_table} VALUES (1, NULL)",
                b"data",
                (),
            )


def test_write_blob_column_rejects_multiple_placeholders(
    logged_db_params: ConnParams, blob_table: str
) -> None:
    """Phase 28 regression: SQL containing BLOB_PLACEHOLDER twice is rejected.

    Pre-Phase-28, ``str.replace`` silently substituted EVERY occurrence,
    corrupting any SQL that legitimately contained the literal string
    in (e.g.) a comment. Now we fail loudly so the user gets a clear
    error rather than mysterious server-side syntax errors.
    """
    with _connect(logged_db_params) as conn:
        cur = conn.cursor()
        with pytest.raises(
            informix_db.ProgrammingError,
            match=r"BLOB_PLACEHOLDER.*2 times",
        ):
            cur.write_blob_column(
                # The /* BLOB_PLACEHOLDER */ comment is the trap; in the
                # old code this would have been substituted along with
                # the real slot, producing a SQL syntax error from the
                # server with no hint that the comment was the cause.
                f"INSERT /* BLOB_PLACEHOLDER comment */ INTO {blob_table} "
                f"VALUES (?, BLOB_PLACEHOLDER)",
                b"data",
                (1,),
            )


def test_virtual_files_cleared_after_call(
    logged_db_params: ConnParams, blob_table: str
) -> None:
    """``virtual_files`` doesn't leak the registered bytes between calls."""
    with _connect(logged_db_params) as conn:
        cur = conn.cursor()
        cur.write_blob_column(
            f"INSERT INTO {blob_table} VALUES (?, BLOB_PLACEHOLDER)",
            b"some data",
            (1,),
        )
        # The default sentinel should have been removed
        assert "/tmp/_informix_db_blob_in" not in cur.virtual_files