---
title: WTF did you build this for?
description: A pure-Python Informix driver, why it didn't exist before, and what it's good for.
---
The existing tools were not my style.
Every Informix driver in any language — IfxPy, the legacy informixdb, ODBC bridges, JPype/JDBC, Perl DBD::Informix — wraps either IBM's C Client SDK or the JDBC JAR. To our knowledge, informix-db is the first pure-socket Informix driver, in any language.
## The problem with IBM's C SDK
The IBM Informix Client SDK (CSDK), now packaged as part of OneDB Client, is a 92 MB tarball with a non-trivial install gauntlet:
- Python ≤ 3.11 (IfxPy is broken on 3.12+)
- `setuptools < 58` (legacy build system)
- Permissive `CFLAGS` for the C extension build
- Manual download of the 92 MB ODBC tarball
- Four `LD_LIBRARY_PATH` directories
- `libcrypt.so.1` — deprecated in 2018, missing on Arch, Fedora 35+, RHEL 9
For containerized deployments, ETL pipelines, FastAPI services — anywhere Python lives and IBM's C SDK doesn't — that friction compounds. informix-db's install is `pip install informix-driver` (then `import informix_db`; the distribution name dodges PyPI's 2008-vintage informixdb package, while the import name is what you'd expect). The wheel is ~50 KB, with zero runtime dependencies.
## What it does
informix-db opens a TCP socket to an Informix server's SQLI listener and speaks the wire protocol directly — the same protocol IBM's JDBC driver uses, the same protocol the CSDK speaks under the hood. No native code runs anywhere in the call path.
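The actual SQLI framing is not publicly documented, so the following is an illustrative sketch rather than informix-db's real codec: a generic length-prefixed PDU encoder/decoder built on `struct`, showing what "speaking a binary wire protocol in pure Python" looks like. The frame layout (2-byte length, 1-byte message type) is a hypothetical stand-in.

```python
import struct

# Hypothetical frame: 2-byte big-endian length (type byte + payload),
# then a 1-byte message type, then the payload. The real SQLI layout
# differs -- this only illustrates the technique.

def encode_pdu(msg_type: int, payload: bytes) -> bytes:
    return struct.pack(">HB", 1 + len(payload), msg_type) + payload

def decode_pdu(buf: bytes) -> tuple[int, bytes, bytes]:
    # Returns (msg_type, payload, bytes remaining after this PDU).
    (length,) = struct.unpack_from(">H", buf, 0)
    msg_type = buf[2]
    payload = buf[3 : 2 + length]
    return msg_type, payload, buf[2 + length :]
```

The point of a codec shaped like this is that everything the driver sends and receives is plain `bytes` over a plain `socket` — no C extension anywhere.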
The wire protocol was reverse-engineered through three sources:

- **Decompiled IBM JDBC driver** (`com.informix.jdbc.IfxConnection` and friends), used as a clean-room reference for PDU shapes and protocol semantics.
- **Annotated `socat` captures** of real client/server traffic against the IBM Informix Developer Edition Docker image.
- **Differential testing against IfxPy** — every codec path is tested against the C driver's behavior on the same data.
The result is a PEP 249-compliant driver with a sync API, an async API (FastAPI / asyncio compatible), a connection pool, TLS support, smart-LOB read/write, scrollable cursors, fast-path stored procedure invocation, and bulk-insert / bulk-fetch performance within ~10–60% of the C driver depending on workload.
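As a hedged illustration of the "async API via thread-pool wrapping" design, here is the general pattern with the standard library only. `blocking_fetchall` is a stand-in for a real synchronous cursor call, not informix_db's actual API.

```python
import asyncio

def blocking_fetchall(query: str) -> list:
    # Stand-in for a synchronous cursor.execute(...) + fetchall().
    return [(query, 1)]

async def fetchall_async(query: str) -> list:
    # asyncio.to_thread runs the blocking call on a worker thread,
    # so the event loop (e.g. a FastAPI endpoint) is never blocked.
    return await asyncio.to_thread(blocking_fetchall, query)

rows = asyncio.run(fetchall_async("SELECT 1"))
```

This is the standard way to expose a synchronous PEP 249 driver to asyncio code without rewriting the I/O layer.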
## What it's good for
The places where informix-db is unambiguously the right choice:
- **ETL and bulk-load pipelines.** Pipelined `executemany` (Phase 33) is 1.6× faster than IfxPy at scale because every BIND+EXECUTE PDU goes out before any responses are drained. IfxPy still pays one round-trip per `IfxPy.execute(stmt, tuple)` call.
- **Container deployments.** The 50 KB wheel and absent native deps mean a slim base image works. No multi-stage build to compile the CSDK.
- **Modern Python.** Works on 3.10 through 3.14 unmodified. IfxPy hasn't shipped 3.12 wheels.
- **Async / FastAPI.** Native async support via thread-pool wrapping. IfxPy is fully synchronous; using it from FastAPI requires `run_in_executor` boilerplate and gives up the connection pool's natural async semantics.
- **Anywhere `libcrypt.so.1` is missing.** Modern Linux distributions ship `libcrypt.so.2`. IfxPy refuses to load without `libcrypt.so.1`. We don't link against either.
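The pipelining claim above can be sketched in miniature. `FakeWire` and the message names are illustrative stand-ins, not informix-db internals; the sketch only shows why queueing every BIND+EXECUTE before draining replies turns N round trips into one.

```python
class FakeWire:
    """Toy connection that counts flush/drain round trips."""

    def __init__(self):
        self.round_trips = 0
        self._outbox = []

    def queue(self, msg: bytes) -> None:
        self._outbox.append(msg)          # buffered, nothing sent yet

    def flush_and_read_all(self) -> list:
        self.round_trips += 1             # one socket write + one drain
        replies = [b"OK:" + m for m in self._outbox]
        self._outbox.clear()
        return replies

def pipelined_executemany(wire: FakeWire, rows) -> list:
    # Queue every BIND+EXECUTE PDU first...
    for row in rows:
        wire.queue(b"BIND+EXECUTE:" + repr(row).encode())
    # ...then drain every reply in a single round trip.
    return wire.flush_and_read_all()
```

A per-statement driver would call `flush_and_read_all` inside the loop, paying one round trip per row — which is the latency IfxPy's `execute` loop incurs.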
## What IfxPy is still better at
Honesty matters here:
- **Large analytical fetches.** IfxPy's C-level `fetch_tuple` decoder is faster than our Python `parse_tuple_payload` (~1.1 µs/row vs ~2.0 µs/row after Phase 39). For workloads pulling 10k+ rows in a single SELECT where the per-row decode cost dominates, IfxPy is currently 5–15% faster. The gap is shrinking phase by phase.
- **Workloads built around the CSDK.** If your existing code already uses IfxPy idioms (`IfxPyDbi.connect_pooled`, IBM's specific cursor extensions), the migration to `informix-db` is straightforward but not zero-cost.
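To make the per-row decode cost concrete, here is a hypothetical, simplified row decoder. The column layout (every column a 4-byte big-endian signed int) is invented for illustration — real Informix tuples carry per-type encodings — but it shows where the fixed per-column interpreter overhead comes from that a C decoder avoids.

```python
import struct

def parse_tuple_payload(payload: bytes, ncols: int) -> tuple:
    # Hypothetical layout: ncols 4-byte big-endian signed ints.
    # Each row decoded in Python pays interpreter overhead per column;
    # a C decoder like IfxPy's fetch_tuple does the same work in
    # native code, which is why it wins on very large SELECTs.
    return struct.unpack(">%di" % ncols, payload[: 4 * ncols])
```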
The honest summary table from the comparison page:
| Workload | Winner | Margin |
|---|---|---|
| Bulk insert (`executemany`, 10k–100k rows) | informix-db | 1.6× faster |
| Bulk SELECT (10k–100k rows) | IfxPy | 1.05–1.15× faster |
| Single-row queries | tied | within noise |
| Cold connect | tied | within noise |
| Containerized deployment | informix-db | no contest |
| Python 3.12+ | informix-db | only option |
## Production-ready
Every finding from a system-wide failure-mode audit (data correctness, wire safety, resource leaks, concurrency, async cancellation) has been addressed:
- Pool no longer returns connections with open transactions
- Per-connection wire lock prevents PDU interleaving from accidental sharing
- Async cancellation cannot leak running workers onto recycled connections
- `_raise_sq_err` no longer masks wire desync via bare-except
- Cursor finalizers release server-side resources on mid-fetch raise
- 5 medium-severity hardening items resolved
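The first item above — never recycling a connection mid-transaction — can be sketched as a check-in guard. This is an illustrative stand-in, not informix-db's actual pool code; class and method names are invented.

```python
class Conn:
    """Toy connection tracking transaction state."""

    def __init__(self):
        self.in_transaction = False
        self.rolled_back = False

    def rollback(self):
        self.in_transaction = False
        self.rolled_back = True

class Pool:
    """Toy pool that refuses to recycle a dirty connection as-is."""

    def __init__(self):
        self._idle = []

    def check_in(self, conn: Conn) -> None:
        if conn.in_transaction:
            # Roll back before recycling so the next borrower never
            # inherits someone else's half-finished transaction.
            conn.rollback()
        self._idle.append(conn)
```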
0 critical, 0 high, 0 medium audit findings remain. Every architectural change went through a Margaret Hamilton-style review focused on silent-failure modes, recovery paths, and documented invariants. Each documented invariant is paired with either a runtime guard or a CI tripwire test.
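As a hedged example of "invariant paired with a runtime guard", here is one way a per-connection wire lock can fail loudly instead of silently interleaving PDUs. The class and the one-request-per-connection framing are illustrative, not informix-db's actual code.

```python
import threading

class Wire:
    """Hypothetical guard for a one-in-flight-request invariant."""

    def __init__(self):
        self._lock = threading.Lock()

    def request(self, pdu: bytes) -> bytes:
        # A non-blocking acquire turns accidental cross-thread sharing
        # into an immediate, diagnosable error rather than a silent
        # wire desync.
        if not self._lock.acquire(blocking=False):
            raise RuntimeError("connection shared across threads mid-PDU")
        try:
            return b"reply:" + pdu   # placeholder for send + recv
        finally:
            self._lock.release()
```

A CI tripwire test for this guard simply acquires the lock and asserts that a concurrent `request` raises.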
300+ tests across unit / integration / benchmark suites. Integration tests run against the official IBM Informix Developer Edition Docker image (15.0.1.0.3DE).
## Read next
- Install & first query → — five minutes from `pip install` to a real SELECT against a Docker-hosted Informix.
- Compared to IfxPy → — full head-to-head benchmarks, methodology, and reproduction.
- Architecture → — how the layers stack: socket, framing, codec, resultset, cursor.