Adds scroll/random-access methods on Cursor (usage sketch below the list):
* scroll(value, mode='relative'|'absolute') — PEP 249 compatible
* fetch_first() / fetch_last() — jump to result-set ends
* fetch_prior() — step backward (SQL-standard: from past-end yields
the last row, matching JDBC ResultSet.previous() semantics)
* fetch_absolute(n) — 0-indexed jump; negative n indexes from end
* fetch_relative(n) — n-step from current position
* rownumber property — current 0-indexed position
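
A usage sketch of the new surface (table name illustrative; assumes an open
connection conn and that each fetch_* returns the row it lands on):

    cur = conn.cursor()
    cur.execute("SELECT id FROM t ORDER BY id")
    row = cur.fetch_last()          # jump to the final row
    row = cur.fetch_first()         # back to row 0
    row = cur.fetch_absolute(-1)    # negative n indexes from the end
    row = cur.fetch_relative(2)     # two rows forward from here
    cur.scroll(0, mode='absolute')  # PEP 249 spelling of the same jump
    print(cur.rownumber)            # current 0-indexed position
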
Implementation: replaced _row_iter (single-pass iterator) with
_row_index (random-access index) on the cursor. The result set
is already materialized in _rows during execute(); scroll just
repositions the index. No new wire protocol needed.
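
A condensed sketch of that repositioning logic, assuming the attribute names
above and the exception surface from exceptions.py (not the shipped code):

    from informix_db.exceptions import ProgrammingError

    class _ScrollSketch:
        def __init__(self, rows):
            self._rows = rows       # materialized by execute()
            self._row_index = 0     # random-access position

        def scroll(self, value, mode='relative'):
            if mode == 'relative':
                target = self._row_index + value
            elif mode == 'absolute':
                target = value
            else:
                raise ProgrammingError(f'invalid scroll mode: {mode!r}')
            if not 0 <= target < len(self._rows):
                raise IndexError('scroll target out of range')
            self._row_index = target
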
For server-side scrolling over genuinely huge result sets, SQ_SFETCH
(tag 23) would be needed; the JDBC driver has executeScrollFetch
(line 3908). We only need it if someone hits the in-memory
materialization ceiling, at which point it becomes Phase 18.
Out-of-range scroll raises IndexError per PEP 249; invalid mode strings
raise ProgrammingError. fetchall() now correctly returns only the rows
from the current position to the end, not the whole result set.
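
All three behaviors in one demo (cursor state hypothetical):

    cur.scroll(3, mode='absolute')
    remaining = cur.fetchall()        # rows 3..end, not the whole set
    try:
        cur.scroll(10_000)            # default mode='relative'
    except IndexError:
        pass                          # out of range, per PEP 249
    cur.scroll(0, mode='sideways')    # raises ProgrammingError
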
14 new integration tests in test_scroll_cursor.py covering:
* fetchone advancing rownumber sequentially
* fetch_first reset
* fetch_last
* fetch_prior including the past-end-to-last-row semantics
* fetch_absolute with positive and negative indexes
* fetch_relative
* PEP 249 scroll(value, mode='relative'/'absolute')
* IndexError on out-of-range
* ProgrammingError on bad mode
* Empty-result-set edge cases
* fetchall after partial iteration
Total: 69 unit + 177 integration = 246 tests.
Version bump (2026.05.02 → 2026.05.04) reflects the library reaching
feature completeness across Phases 1-16.
Documentation:
* README.md — full rewrite. The previous README was from Phase 1
("cursor() / execute() / fetchone() arrive in Phase 2"). New
README covers: sync + async APIs, connection pool, TLS, full type
matrix, smart-LOBs, fast-path RPC, server-compatibility,
development workflow, and pointers to the protocol research docs.
* docs/USAGE.md — new practical recipe guide. Connecting, cursor
lifecycle, parameter binding, transactions (logged + unlogged),
executemany, smart-LOB read/write, connection pool, async,
TLS, error handling, fast-path RPC, server-side setup steps,
and a migration table from IfxPy / legacy informixdb.
* CHANGELOG.md — new file. Captures the v2026.05.04 release as the
Phase 1-16 completion milestone with a full feature inventory
and known-gap list. Future point-releases append here.
Classifiers updated:
* Development Status: 2 → 4 (Pre-Alpha → Beta)
* Added Framework :: AsyncIO
Keywords: added asyncio, async.
No code changes; tests still pass (69 unit + 163 integration = 232).
Ruff clean.
Ships AsyncConnection, AsyncCursor, and AsyncConnectionPool that
expose async/await versions of the sync API for use with FastAPI,
aiohttp, etc.
Strategy: thread-pool wrapping (the aioodbc/aiosqlite pattern), not
native async. Each blocking I/O call is offloaded to a worker thread
via asyncio.to_thread, so the event loop never blocks and queries run
in parallel up to the pool's max_size. Cost: ~250 lines, no changes
to the sync codebase. Native async (Phase 17) would require a
~2000-line transport abstraction refactor; deferred until a real
workload needs it.
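
A condensed sketch of the wrapping pattern (only two methods shown; the
real classes mirror the whole sync surface):

    import asyncio

    class AsyncCursor:
        def __init__(self, sync_cursor):
            self._cur = sync_cursor

        async def execute(self, operation, params=None):
            # The blocking call runs on a worker thread; the loop stays free.
            return await asyncio.to_thread(self._cur.execute, operation, params)

        async def fetchone(self):
            return await asyncio.to_thread(self._cur.fetchone)
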
For typical FastAPI/aiohttp workloads (request → one query → return),
this is functionally equivalent to native async: each await yields the
event loop while a worker thread does the I/O. The difference only
shows up at hundreds of concurrent connections.
API mirrors the sync API one-to-one:
    import asyncio
    from informix_db import aio

    async def main():
        pool = await aio.create_pool(host=..., min_size=1, max_size=10)
        async with pool.connection() as conn:
            cur = await conn.cursor()
            await cur.execute("SELECT id FROM users WHERE name = ?", (name,))
            row = await cur.fetchone()
        await pool.close()
The async pool preserves the sync pool's eviction policy: connection
errors evict, application errors retain.
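
A sketch of that release-time decision, assuming an asyncio.Queue of idle
connections (method and attribute names hypothetical):

    import asyncio
    from informix_db import InterfaceError, OperationalError

    class _PoolSketch:
        def __init__(self):
            self._idle: asyncio.Queue = asyncio.Queue()

        async def _release(self, conn, exc=None):
            if isinstance(exc, (InterfaceError, OperationalError)):
                # Connection-level failure: drop it from the pool.
                await asyncio.to_thread(conn.close)
            else:
                # Success or application-level error: connection is reusable.
                self._idle.put_nowait(conn)
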
Tests: 9 integration tests in test_aio.py covering open/close,
async-with, simple/parameterized SELECT, async-for cursor iteration,
pool acquire/release, 20-query concurrent gather (verifies parallelism
through max_size=5 pool), pool async context manager, commit/rollback.
Total: 69 unit + 163 integration = 232 tests.
Pyproject changes:
* Added pytest-asyncio>=1.3.0 as dev dep
* asyncio_mode = "auto" so async tests don't need decorators (see the
  sketch below)
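
With auto mode, an async test is just an async def (illustrative, not from
the suite; `pool` is an assumed fixture):

    import asyncio

    async def test_gather_is_parallel(pool):
        async def one():
            async with pool.connection() as conn:
                cur = await conn.cursor()
                await cur.execute("SELECT tabid FROM systables WHERE tabid = 1")
                return await cur.fetchone()
        rows = await asyncio.gather(*(one() for _ in range(20)))
        assert len(rows) == 20
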
Architectural completion: with Phase 16, every backlog item is
done. The Phase 0 ambition — first pure-Python Informix driver,
no native deps — is now genuinely complete.
This commit takes informix-db from documentation-only (Phase 0 spike)
to a functional connect() / close() against a real Informix server.
To our knowledge, this is the first pure-socket Informix client in any
language — no CSDK, no JVM, no native libraries.
Layered architecture per the plan, mirroring PyMySQL's shape:
    src/informix_db/
      __init__.py    — PEP 249 surface (connect, exceptions, paramstyle="numeric")
      exceptions.py  — full PEP 249 hierarchy declared up front
      _socket.py     — raw socket I/O (read_exact, write_all, timeouts)
      _protocol.py   — IfxStreamReader / IfxStreamWriter framing primitives
                       (big-endian, 16-bit-aligned variable payloads,
                       length-prefixed nul-terminated strings; sketch
                       after this listing)
      _messages.py   — SQ_* tags from IfxMessageTypes + ASF/login markers
      _auth.py       — pluggable auth handlers; plain-password is the
                       only Phase-1 implementation
      connections.py — Connection class: builds the binary login PDU
                       (SLheader + PFheader byte-for-byte per
                       PROTOCOL_NOTES.md §3), sends it, parses the
                       server response, wires up close()
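
A sketch of one _protocol.py string primitive, assuming the length prefix
counts the nul terminator and any pad byte (the authoritative layout is in
PROTOCOL_NOTES.md):

    import struct

    def write_string(buf: bytearray, s: str) -> None:
        # Length-prefixed, nul-terminated, padded to a 16-bit boundary.
        data = s.encode('ascii') + b'\x00'
        if len(data) % 2:
            data += b'\x00'                    # keep the stream 16-bit aligned
        buf += struct.pack('>h', len(data))    # big-endian 16-bit length prefix
        buf += data
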
Phase 1 design decisions locked in DECISION_LOG.md:
- paramstyle = "numeric" (matches Informix ESQL/C convention)
- Python >= 3.10
- autocommit defaults to off (PEP 249 implicit)
- License: MIT
- Distribution name: informix-db (verified PyPI-available)
Test coverage: 34 unit tests (codec round-trips against synthetic byte
streams; observed login-PDU values from the spike captures asserted as
exact byte literals) + 6 integration tests (connect, idempotent close,
context manager, bad-password → OperationalError, bad-host →
OperationalError, cursor() raises NotImplementedError).
    pytest                — runs 34 unit tests, no Docker needed
    pytest -m integration — runs 6 integration tests against the
                            Developer Edition container (pinned by
                            digest in tests/docker-compose.yml)
    pytest -m ""          — runs everything
ruff is clean across src/ and tests/.
One bug found during smoke testing: threading.get_ident() can exceed
signed 32-bit on some platforms, overflowing struct.pack("!i"). Fixed
the same way the JDBC reference does: send the id when it fits in
signed 32-bit, otherwise fall back to 0. The field is diagnostic only.
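
The fix, roughly (helper name hypothetical):

    import struct
    import threading

    INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

    def _diag_thread_id() -> int:
        # Diagnostic-only: use the id when it fits in signed 32-bit, else 0.
        tid = threading.get_ident()
        return tid if INT32_MIN <= tid <= INT32_MAX else 0

    packed = struct.pack('!i', _diag_thread_id())
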
One protocol-level observation that AMENDED the JDBC source reading:
the "capability section" in the login PDU is three independently
negotiated 4-byte ints (Cap_1=1, Cap_2=0x3c000000, Cap_3=0), not one
int + 8 reserved zero bytes as my reading of the CFR decompile
suggested. The server echoes all three back identically. Trust the
wire over the decompiler.
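
The corrected layout, as a sketch (constant names hypothetical; big-endian
per the rest of the PDU):

    import struct

    CAP_1, CAP_2, CAP_3 = 1, 0x3C000000, 0
    capability_section = struct.pack('>iii', CAP_1, CAP_2, CAP_3)
    assert capability_section.hex() == '000000013c00000000000000'
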
Phase 1 verification matrix (from PROTOCOL_NOTES.md §12):
- Login byte layout: confirmed (server accepts our pure-Python PDU)
- Disconnection: confirmed (SQ_EXIT round-trip works)
- Framing primitives: confirmed (34 unit tests)
- Error path: bad password → OperationalError, bad host → OperationalError
Phase 2 (Cursor / SELECT / basic types) is next. The hard
unknowns there — exact column-descriptor layout, statement-time error
format — were called out as bounded gaps in Phase 0 and have existing
captures (02-select-1.socat.log, 02-dml-cycle.socat.log) to characterize
against.