Version bump (2026.05.02 → 2026.05.04) reflects the library reaching
feature completeness across Phases 1-16.
Documentation:
* README.md — full rewrite. The previous README was from Phase 1
("cursor() / execute() / fetchone() arrive in Phase 2"). New
README covers: sync + async APIs, connection pool, TLS, full type
matrix, smart-LOBs, fast-path RPC, server-compatibility,
development workflow, and pointers to the protocol research docs.
* docs/USAGE.md — new practical recipe guide. Connecting, cursor
lifecycle, parameter binding, transactions (logged + unlogged),
executemany, smart-LOB read/write, connection pool, async,
TLS, error handling, fast-path RPC, server-side setup steps,
and a migration table from IfxPy / legacy informixdb.
* CHANGELOG.md — new file. Captures the v2026.05.04 release as the
Phase 1-16 completion milestone with a full feature inventory
and known-gap list. Future point-releases append here.
Classifiers updated:
* Development Status: 2 → 4 (Pre-Alpha → Beta)
* Added Framework :: AsyncIO
Keywords: added asyncio, async.
No code changes; tests still pass (69 unit + 163 integration = 232).
Ruff clean.
# Usage Guide
Practical recipes for common Informix patterns with informix-db. For installation and a quick overview, see the README. For protocol-level / architectural decisions, see the DECISION_LOG.
## Connecting

```python
import informix_db

conn = informix_db.connect(
    host="db.example.com",
    port=9088,
    user="informix",
    password="...",
    database="mydb",
    server="informix",       # the DBSERVERNAME from sqlhosts
    autocommit=False,        # default; opt-in with True
    connect_timeout=10.0,    # seconds; None = OS default
    read_timeout=30.0,       # seconds for each read; None = no timeout
    keepalive=False,         # SO_KEEPALIVE on the socket
    client_locale="en_US.8859-1",
)
```
`server` is not the hostname — it's the Informix DBSERVERNAME the listener identifies itself as (configured server-side via the `DBSERVERNAME` parameter in `$ONCONFIG`). For the official IBM Developer Edition Docker image, the default `"informix"` is correct.

`database` may be `None` to log in without selecting a database; the server still completes a successful login. This is useful for cross-database queries that fully qualify table names.
## Cursor lifecycle

```python
cur = conn.cursor()
cur.execute("SELECT id, name FROM users WHERE active = ?", (True,))

# Single row
row = cur.fetchone()        # tuple or None

# All rows
rows = cur.fetchall()       # list[tuple]

# Bounded batch
batch = cur.fetchmany(100)  # size defaults to cur.arraysize when omitted

# Iteration
for row in cur:
    print(row)

cur.close()
```
Using the connection as a context manager automatically closes both the connection and any open cursors:

```python
with informix_db.connect(...) as conn:
    cur = conn.cursor()
    cur.execute("SELECT 1 FROM systables WHERE tabid = 1")
    print(cur.fetchone())
# socket closed, cursor torn down
```
## Parameter binding

The driver declares `paramstyle = "numeric"` (the ESQL/C convention). Both `?` and `:1` / `:2` placeholders work:

```python
cur.execute("SELECT id FROM users WHERE name = ? AND age > ?", ("alice", 30))

cur.execute(
    "UPDATE users SET email = :2 WHERE id = :1",
    (42, "alice@example.com"),
)
```

Supported parameter types: `int`, `float`, `str`, `bool`, `None`, `datetime.date`, `datetime.datetime`, `datetime.timedelta`, `decimal.Decimal`, `informix_db.IntervalYM`, and `bytes` (BYTE/TEXT parameters).
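To illustrate how the two placeholder styles relate, a `?`-style query can be rewritten into numeric style mechanically. This is a standalone sketch, not the driver's actual rewriter (a real one must skip string literals and comments):

```python
import re

def qmark_to_numeric(sql: str) -> str:
    """Rewrite ?-style placeholders as :1, :2, ... (toy sketch only;
    ignores string literals and comments, which a real rewriter must skip)."""
    counter = 0

    def repl(_match: re.Match) -> str:
        nonlocal counter
        counter += 1
        return f":{counter}"

    return re.sub(r"\?", repl, sql)

print(qmark_to_numeric("SELECT id FROM users WHERE name = ? AND age > ?"))
# SELECT id FROM users WHERE name = :1 AND age > :2
```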
## Transactions

On logged databases, transactions are managed implicitly: in non-autocommit mode the driver sends `SQ_BEGIN` before each transaction, and `commit()` / `rollback()` close it.

```python
conn = informix_db.connect(..., autocommit=False)  # default

cur = conn.cursor()
cur.execute("INSERT INTO orders VALUES (?, ?)", (1, "..."))
cur.execute("UPDATE inventory SET qty = qty - 1 WHERE sku = ?", ("ABC",))
conn.commit()

cur.execute("INSERT INTO orders VALUES (?, ?)", (2, "..."))
conn.rollback()  # discards the second insert
```
For unlogged databases, both `commit()` and `rollback()` are silent no-ops — the connection learns it can't open a transaction (the server returns sqlcode -201 in response to `SQ_BEGIN`) and caches that state, so the same client code works with both database modes.

In autocommit mode, each statement commits independently:

```python
conn = informix_db.connect(..., autocommit=True)
cur = conn.cursor()
cur.execute("INSERT ...")  # already committed
```
## executemany

Batched DML — PREPARE once, BIND/EXECUTE per row, RELEASE at the end:

```python
cur.executemany(
    "INSERT INTO log VALUES (?, ?, ?)",
    [
        (1, "info", "started"),
        (2, "info", "loaded config"),
        (3, "warn", "missing optional setting"),
    ],
)
conn.commit()
```
## Smart-LOBs (BLOB / CLOB)

### Read

```python
# Fetch a single row's BLOB content as bytes
data = cur.read_blob_column(
    "SELECT data FROM photos WHERE id = ?", (42,)
)
# data is bytes (or None if NULL or no rows match)
```
For multi-row reads or full control, drop down to the lower-level `lotofile()` SQL form:

```python
cur.execute(
    "SELECT id, lotofile(data, '/tmp/x', 'client') FROM photos LIMIT 100"
)
for row in cur:
    photo_id, returned_filename = row
    raw_bytes = cur.blob_files[returned_filename]
    process(photo_id, raw_bytes)
```

The server returns a unique filename suffix for each row; `cur.blob_files` is a dict keyed by those names. Phase 10 in the decision log explains the protocol.
### Write

```python
cur.write_blob_column(
    "INSERT INTO photos VALUES (?, BLOB_PLACEHOLDER)",
    blob_data=jpeg_bytes,
    params=(42,),
)

# CLOB column? Pass clob=True so it routes through filetoclob:
cur.write_blob_column(
    "INSERT INTO docs VALUES (?, BLOB_PLACEHOLDER)",
    blob_data=text.encode("iso-8859-1"),
    params=(1,),
    clob=True,
)
```
### Why BLOB_PLACEHOLDER instead of ?

Plain `bytes` already maps to BYTE (legacy in-row blobs, type 11) when used as a `?`-parameter. The token approach makes it unambiguous which column receives the smart-LOB. The driver substitutes `BLOB_PLACEHOLDER` with `filetoblob('<sentinel>', 'client')` and registers the bytes for upload via the SQ_FILE protocol.
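That substitution step can be pictured with a standalone sketch (the helper name and sentinel value here are illustrative, not the driver's internals):

```python
def substitute_blob_placeholder(sql: str, sentinel: str, clob: bool = False) -> str:
    """Replace the BLOB_PLACEHOLDER token with the Informix file-transfer
    SQL function the server expects. Illustrative sketch only."""
    func = "filetoclob" if clob else "filetoblob"
    return sql.replace("BLOB_PLACEHOLDER", f"{func}('{sentinel}', 'client')")

sql = substitute_blob_placeholder(
    "INSERT INTO photos VALUES (?, BLOB_PLACEHOLDER)", "upload-0001"
)
print(sql)
# INSERT INTO photos VALUES (?, filetoblob('upload-0001', 'client'))
```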
## Connection pool

```python
pool = informix_db.create_pool(
    host="...", user="...", password="...",
    database="mydb",
    min_size=1,            # pre-opened on construction
    max_size=10,           # hard ceiling
    acquire_timeout=30.0,  # seconds to wait for a free connection
)

# Acquire / release via context manager (preferred)
with pool.connection() as conn:
    cur = conn.cursor()
    cur.execute(...)
# automatically returned to the pool

# Or manually
conn = pool.acquire(timeout=5.0)
try:
    cur = conn.cursor()
    cur.execute(...)
finally:
    pool.release(conn)

pool.close()  # drains idle connections; in-use connections close on their next release
```
The pool sends a trivial `SELECT 1` round-trip before yielding each connection (a cheap health check, roughly 1 ms on a local network). Dead connections are silently replaced. Connection-related errors (`OperationalError`, `InterfaceError`) raised inside `with pool.connection() as conn:` evict the connection rather than returning it to the pool.
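The evict-on-error semantics amount to a try/except around the yielded connection. This is a minimal standalone sketch with stand-in classes, not the library's pool implementation:

```python
from contextlib import contextmanager

class OperationalError(Exception):
    """Stand-in for informix_db.OperationalError."""

class ToyPool:
    """Minimal stand-in pool illustrating evict-on-error semantics."""
    def __init__(self):
        self.returned, self.evicted = [], []

    @contextmanager
    def connection(self):
        conn = object()  # stand-in for a real connection
        try:
            yield conn
        except OperationalError:
            self.evicted.append(conn)   # broken connection: never reused
            raise
        else:
            self.returned.append(conn)  # healthy: goes back to the pool

pool = ToyPool()
try:
    with pool.connection() as conn:
        raise OperationalError("socket reset")
except OperationalError:
    pass

print(len(pool.evicted), len(pool.returned))
# 1 0
```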
## Async (asyncio)

```python
import asyncio

from informix_db import aio

async def main():
    async with await aio.connect(
        host="...", user="...", password="...", database="mydb",
    ) as conn:
        cur = await conn.cursor()
        await cur.execute(
            "SELECT id, name FROM users WHERE active = ?", (True,)
        )
        async for row in cur:
            print(row)

asyncio.run(main())
```
Async pool:

```python
pool = await aio.create_pool(
    host="...", user="...", password="...", database="mydb",
    min_size=1, max_size=10,
)

async with pool.connection() as conn:
    cur = await conn.cursor()
    await cur.execute(...)
    rows = await cur.fetchall()

await pool.close()
```
The async API mirrors the sync API one-to-one. Each blocking I/O call is offloaded to a worker thread via `asyncio.to_thread` — the event loop never blocks, and concurrent queries issued through `asyncio.gather` genuinely run in parallel, up to `max_size`.
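The offloading mechanism can be demonstrated without a database: blocking calls dispatched through `asyncio.to_thread` and gathered together overlap in wall-clock time. The `time.sleep` here is a stand-in for a blocking driver round-trip:

```python
import asyncio
import time

def blocking_query(seconds: float) -> float:
    """Stand-in for a blocking driver call (e.g., a socket round-trip)."""
    time.sleep(seconds)
    return seconds

async def main() -> float:
    start = time.monotonic()
    # Three "queries" offloaded to worker threads run concurrently,
    # so total wall time is ~0.2 s rather than the 0.6 s a serial run takes.
    results = await asyncio.gather(
        asyncio.to_thread(blocking_query, 0.2),
        asyncio.to_thread(blocking_query, 0.2),
        asyncio.to_thread(blocking_query, 0.2),
    )
    assert results == [0.2, 0.2, 0.2]
    return time.monotonic() - start

elapsed = asyncio.run(main())
print(f"elapsed ~{elapsed:.2f}s")
```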
## TLS

```python
import ssl

# Production: caller-supplied SSLContext with full verification
ctx = ssl.create_default_context(cafile="/path/to/ca.pem")
informix_db.connect(host="db.example.com", port=9089, ..., tls=ctx)

# Dev / self-signed certs: tls=True (verification DISABLED)
informix_db.connect(host="127.0.0.1", port=9089, ..., tls=True)
```

Informix uses dedicated TLS-enabled listener ports (configured server-side in sqlhosts) — point `port` at the TLS listener (often 9089) when TLS is enabled.
## Error handling

The exception hierarchy follows PEP 249:

```text
Warning
Error
├── InterfaceError
└── DatabaseError
    ├── DataError
    ├── OperationalError
    │   ├── PoolClosedError
    │   └── PoolTimeoutError
    ├── IntegrityError
    ├── InternalError
    ├── ProgrammingError
    └── NotSupportedError
```
Server-side SQL errors carry the Informix sqlcode, isamcode, byte offset, and "near token" as attributes:

```python
try:
    cur.execute("INSERT INTO users VALUES (1, 'duplicate-name')")
except informix_db.IntegrityError as e:
    print(e.sqlcode)   # e.g., -239 (duplicate key)
    print(e.isamcode)  # e.g., -100
    print(e.near)      # e.g., "u_users_name"
```
The exception class is chosen based on the sqlcode (per the catalog in `informix_db/_errcodes.py`):
| sqlcode | Exception class |
|---|---|
| -239, -268, -391, etc. | IntegrityError |
| -201, -202, -206, etc. | ProgrammingError |
| -255, -256, -267, etc. | OperationalError |
| -329, -413, -879, etc. | NotSupportedError |
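The dispatch amounts to a lookup table from sqlcode to exception class. This is a standalone sketch using sqlcodes from the table above; the real catalog in `_errcodes.py` is larger:

```python
class DatabaseError(Exception): pass
class IntegrityError(DatabaseError): pass
class ProgrammingError(DatabaseError): pass
class OperationalError(DatabaseError): pass
class NotSupportedError(DatabaseError): pass

# Tiny stand-in for the full catalog.
SQLCODE_TO_EXC = {
    -239: IntegrityError,    # duplicate key
    -201: ProgrammingError,  # syntax error
    -255: OperationalError,
    -329: NotSupportedError,
}

def exception_for(sqlcode: int) -> type:
    """Pick the exception class for a server sqlcode, defaulting broadly."""
    return SQLCODE_TO_EXC.get(sqlcode, DatabaseError)

print(exception_for(-239).__name__)
# IntegrityError
```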
## Direct stored-procedure invocation (fast-path RPC)

For UDFs that aren't callable via plain SQL (`ifx_lo_close`, etc.), or when you want to skip PREPARE → DESCRIBE → EXECUTE overhead:

```python
result = conn.fast_path_call(
    "function informix.ifx_lo_close(integer)", lofd
)
# result is a list of return values; here, [0] on success
```
Routine handles are cached per-connection by signature — the first call resolves via `SQ_GETROUTINE`; subsequent calls skip that round-trip. UDT parameters (e.g., the 72-byte BLOB locator type) aren't yet supported on the bind side; only scalar parameters and returns work in the current MVP.
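The per-signature handle cache is essentially a memoized lookup. A standalone sketch, with a stand-in resolver in place of the real `SQ_GETROUTINE` round-trip:

```python
class FastPathCache:
    """Per-connection routine-handle cache keyed by signature (sketch only)."""

    def __init__(self, resolver):
        self._resolve = resolver  # stand-in for the SQ_GETROUTINE round-trip
        self._handles = {}
        self.round_trips = 0

    def handle_for(self, signature: str) -> int:
        if signature not in self._handles:
            self.round_trips += 1  # only the first call pays this cost
            self._handles[signature] = self._resolve(signature)
        return self._handles[signature]

cache = FastPathCache(resolver=lambda sig: hash(sig) & 0xFFFF)
sig = "function informix.ifx_lo_close(integer)"
h1 = cache.handle_for(sig)
h2 = cache.handle_for(sig)
assert h1 == h2 and cache.round_trips == 1
```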
## Server-side requirements

One-time, per-instance setup on the Informix dev image for the LOB feature set:

```shell
# Inside the container, as the informix user with INFORMIXDIR/INFORMIXSERVER set:
onspaces -c -b blobspace1 -p /opt/ibm/data/spaces/blobspace.000 -o 0 -s 50000
onspaces -c -S sbspace1 -p /opt/ibm/data/spaces/sbspace.000 -o 0 -s 50000 -Df "AVG_LO_SIZE=100"
onmode -wm SBSPACENAME=sbspace1
onmode -wm LTAPEDEV=/dev/null
onmode -wm TAPEDEV=/dev/null
onmode -l
ontape -s -L 0 -t /dev/null
```

Then create a logged database (required for BYTE/TEXT/BLOB/CLOB):

```sql
CREATE DATABASE mydb WITH LOG;
```

These steps are detailed in the DECISION_LOG §6.f and §10.
## Migration from IfxPy / legacy informixdb

The PEP 249 surface is identical — most code Just Works after switching the import:

```python
# Before
import IfxPyDbi as ifx

# After
import informix_db as ifx
```

Differences worth knowing:
| | IfxPy / legacy informixdb | informix-db |
|---|---|---|
| Native deps | IBM CSDK (libifsql.so) | None |
| Wheel size | ~50MB+ (CSDK bundled) | ~50KB |
| Connection string | DSN format | Per-keyword args (`host=`, `user=`, `password=`, `database=`, `server=`) |
| paramstyle | `qmark` | `numeric` (both `?` and `:N` work) |
| TLS | CSDK-managed | Native Python `ssl.SSLContext` |
| Async | Not supported | `informix_db.aio` |
| Pool | External (e.g., SQLAlchemy) | Built-in (`informix_db.create_pool`) |
| BLOB API | `setBytes` / `getBytes` | `cursor.read_blob_column` / `cursor.write_blob_column` with `BLOB_PLACEHOLDER` |