Prose rebrand: informix-db → informix-driver across docs site

Sweep all backticked + bold + table-cell + heading mentions of the
project's brand to match the PyPI distribution name and the docs domain.
Path references (`cd informix-db`, `git.supported.systems/.../informix-db`)
stay — those reference the actual Gitea repo directory which we did NOT
rename. Same with `import informix_db` (Python module name, separate
from distribution brand).

Also flip GitHub references to Gitea throughout the docs site:
- `github.com/rsp2k/informix-db/blob/main/X` → Gitea `/src/branch/main/X`
- `github.com/rsp2k/informix-db/tree/main/X` → Gitea same path
- `github.com/rsp2k/informix-db` (plain) → Gitea
- Hero "GitHub" CTA button → Gitea source URL
- Social icon: `github` → `seti:git` (generic git icon, not octocat)

Net result: zero stale GitHub references, and the brand consistently matches
what users `pip install`.
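
The sweep itself is mechanical. A minimal sketch of the two substitutions in Python, run against a hypothetical page body (the sample text and file layout are illustrative; only the URL patterns come from this commit):

```python
import re

# Hypothetical page content; the real files live under the docs site.
page = """\
`informix-db` is on PyPI.
See https://github.com/rsp2k/informix-db/blob/main/docs/DECISION_LOG.md
Clone with: cd informix-db
"""

# 1. GitHub blob/tree URLs -> Gitea /src/branch/ URLs.
page = re.sub(
    r"github\.com/rsp2k/informix-db/(?:blob|tree)/main",
    "git.supported.systems/warehack.ing/informix-db/src/branch/main",
    page,
)

# 2. Backticked brand mentions -> distribution name. Path references like
#    `cd informix-db` survive because only the exact token `informix-db`
#    (backtick immediately before the name) is matched.
page = page.replace("`informix-db`", "`informix-driver`")

print(page)
```

The backtick anchor is what keeps repo-directory and module-name references untouched, matching the "paths stay" rule in the commit message.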
This commit is contained in:
Ryan Malloy 2026-05-08 05:43:07 -06:00
parent b67e6d008b
commit 21c47385ae
19 changed files with 62 additions and 62 deletions

View File

@@ -1,4 +1,4 @@
-# informix-db
+# informix-driver
 Pure-Python driver for IBM Informix IDS, speaking the SQLI wire protocol over raw sockets. **No IBM Client SDK. No JVM. No native libraries.** PEP 249 compliant; sync + async APIs; built-in connection pool; TLS support.
@@ -176,25 +176,25 @@ Single-connection benchmarks against the dev container on loopback:
 Head-to-head benchmarks against [IfxPy](https://pypi.org/project/IfxPy/) on identical workloads, same Informix server, matched conditions. Using **median + IQR over 10+ rounds** to resist outlier-round noise:
-| Benchmark | IfxPy 3.0.5 (C-bound) | `informix-db` (pure Python) | Result |
+| Benchmark | IfxPy 3.0.5 (C-bound) | `informix-driver` (pure Python) | Result |
 |---|---:|---:|---:|
 | Single-row SELECT round-trip | 118 µs | 114 µs | comparable |
 | ~10-row server-side query | 130 µs | 159 µs | IfxPy 22% faster |
 | Cold connect (login handshake) | 11.0 ms | 10.5 ms | comparable |
 | **`executemany(1k)` in transaction** | 23.5 ms | 23.2 ms | tied |
-| **`executemany(10k)` in transaction** | 259 ms | **161 ms** | **`informix-db` 1.6× faster** |
+| **`executemany(10k)` in transaction** | 259 ms | **161 ms** | **`informix-driver` 1.6× faster** |
-| **`executemany(100k)` in transaction** | 2376 ms | **1487 ms** | **`informix-db` 1.6× faster** |
+| **`executemany(100k)` in transaction** | 2376 ms | **1487 ms** | **`informix-driver` 1.6× faster** |
 | `SELECT` 1k rows | 1.2 ms | 2.7 ms | IfxPy 2.3× faster |
 | `SELECT` 10k rows | 11.3 ms | 25.8 ms | IfxPy 2.3× faster |
 | `SELECT` 100k rows | 112 ms | 271 ms | IfxPy 2.4× faster |
 **The honest summary:**
-- **Bulk-insert workloads: `informix-db` wins 1.6× at scale.** The pipelined `executemany` (Phase 33) sends all N BIND+EXECUTE PDUs before draining responses, eliminating per-row RTT. IfxPy still pays one round-trip per `IfxPy.execute(stmt, tuple)` call.
+- **Bulk-insert workloads: `informix-driver` wins 1.6× at scale.** The pipelined `executemany` (Phase 33) sends all N BIND+EXECUTE PDUs before draining responses, eliminating per-row RTT. IfxPy still pays one round-trip per `IfxPy.execute(stmt, tuple)` call.
 - **Large-fetch workloads: IfxPy wins 2.3× at scale.** Their C-level `fetch_tuple` decoder is genuinely faster than our Python `parse_tuple_payload` (~1.1 µs/row vs ~2.7 µs/row). At 100k rows, that 1.6 µs/row gap accumulates into a 160 ms wall-clock difference.
 - **Small queries: comparable.** Both spend ~120 µs waiting for the server; the per-call codec cost is small relative to the round-trip.
-**When to prefer `informix-db`:**
+**When to prefer `informix-driver`:**
 - ETL pipelines, log shipping, bulk writes (1.6× faster at scale)
 - Containerized / minimal-dependency environments (50 KB wheel vs IfxPy's 92 MB OneDB tarball + libcrypt.so.1 dependency hell)
 - Modern Python (works on 3.10–3.14; IfxPy is broken on Python 3.12+)
@@ -208,7 +208,7 @@ These results are reproducible from `tests/benchmarks/compare/` — the Dockerfi
 Full methodology, IQR caveats, install gauntlet, and reproduction in [`tests/benchmarks/compare/README.md`](https://git.supported.systems/warehack.ing/informix-db/src/branch/main/tests/benchmarks/compare/README.md).
-A note on IfxPy's install gauntlet: getting it to run on a modern system requires Python ≤ 3.11, setuptools <58, permissive CFLAGS, manual download of a 92 MB ODBC tarball, four `LD_LIBRARY_PATH` directories, and `libcrypt.so.1` (deprecated 2018, missing on Arch / Fedora 35+ / RHEL 9). `informix-db`'s install: `pip install informix-driver`.
+A note on IfxPy's install gauntlet: getting it to run on a modern system requires Python ≤ 3.11, setuptools <58, permissive CFLAGS, manual download of a 92 MB ODBC tarball, four `LD_LIBRARY_PATH` directories, and `libcrypt.so.1` (deprecated 2018, missing on Arch / Fedora 35+ / RHEL 9). `informix-driver`'s install: `pip install informix-driver`.
 ## Standards & guarantees

View File

@@ -34,7 +34,7 @@ export default defineConfig({
 baseUrl: 'https://git.supported.systems/warehack.ing/informix-db/_edit/branch/main/docs-site/',
 },
 social: [
-{ icon: 'github', label: 'Source (Gitea)', href: 'https://git.supported.systems/warehack.ing/informix-db' },
+{ icon: 'seti:git', label: 'Source (Gitea)', href: 'https://git.supported.systems/warehack.ing/informix-db' },
 { icon: 'seti:python', label: 'PyPI', href: 'https://pypi.org/project/informix-driver/' },
 ],
 customCss: ['./src/styles/theme.css', './src/styles/components.css'],

View File

@@ -20,7 +20,7 @@ import Default from '@astrojs/starlight/components/Footer.astro';
 <div class="ifx-ss-badge__copy">
 <h3 class="ifx-ss-badge__heading">A Supported Systems Joint</h3>
 <p class="ifx-ss-badge__body">
-<code>informix-db</code> is built and maintained by
+<code>informix-driver</code> is built and maintained by
 <span class="ifx-ss-badge__name">Supported Systems</span> &mdash; a
 boutique software studio focused on thoughtful, user-first technology.
 We take databases personally.

View File

@@ -21,7 +21,7 @@
 <div class="ifx-hero__cta">
 <a class="primary" href="/start/quickstart/">Get started →</a>
 <a class="secondary" href="/start/vs-ifxpy/">Compared to IfxPy</a>
-<a class="secondary" href="https://github.com/rsp2k/informix-db">GitHub</a>
+<a class="secondary" href="https://git.supported.systems/warehack.ing/informix-db">GitHub</a>
 </div>
 <div class="ifx-hero__install">pip install informix-driver</div>
 </div>

View File

@@ -1,13 +1,13 @@
 ---
 title: Async strategy
-description: Why informix-db wraps a sync core in a thread pool instead of going fully async — and what that costs.
+description: Why informix-driver wraps a sync core in a thread pool instead of going fully async — and what that costs.
 sidebar:
 order: 4
 ---
 import { Aside } from '@astrojs/starlight/components';
-`informix-db`'s async API (`from informix_db import aio`) is implemented by wrapping the sync core in a thread pool. Every `await conn.execute(...)` schedules the underlying sync `execute()` on the pool's executor.
+`informix-driver`'s async API (`from informix_db import aio`) is implemented by wrapping the sync core in a thread pool. Every `await conn.execute(...)` schedules the underlying sync `execute()` on the pool's executor.
 This is a deliberate architectural choice from Phase 16. Here's the reasoning.
@@ -57,5 +57,5 @@ The two scenarios where a full async I/O implementation would matter:
 If either becomes a real production concern, the layered architecture lets us swap in a fully-async lower half without changing the upper half. The cursor / connection / pool API doesn't care how the bytes get to and from the server. That's the option-2 win we explicitly preserved.
 <Aside type="note">
-This is a Phase 16 decision. The pivot from "rewrite as async-native" to "wrap the sync core" is documented in [`docs/DECISION_LOG.md`](https://github.com/rsp2k/informix-db/blob/main/docs/DECISION_LOG.md). Three years from now, if it turns out we should have gone with option 1, we have a clear path.
+This is a Phase 16 decision. The pivot from "rewrite as async-native" to "wrap the sync core" is documented in [`docs/DECISION_LOG.md`](https://git.supported.systems/warehack.ing/informix-db/src/branch/main/docs/DECISION_LOG.md). Three years from now, if it turns out we should have gone with option 1, we have a clear path.
 </Aside>
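
For readers unfamiliar with the pattern this page defends, "wrap the sync core in a thread pool" can be sketched in a few lines. `sync_execute` and `AsyncConn` here are hypothetical stand-ins, not the driver's actual internals:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

def sync_execute(sql):
    # Stand-in for the driver's blocking execute(): real code would do
    # socket I/O here.
    return f"rows for {sql}"

class AsyncConn:
    def __init__(self, pool_size=4):
        self._executor = ThreadPoolExecutor(max_workers=pool_size)

    async def execute(self, sql):
        loop = asyncio.get_running_loop()
        # Schedule the blocking call on the pool; the event loop stays free
        # while a worker thread blocks on the socket.
        return await loop.run_in_executor(self._executor, sync_execute, sql)

result = asyncio.run(AsyncConn().execute("SELECT 1"))
print(result)  # rows for SELECT 1
```

The upper half (`await conn.execute(...)`) never sees the thread; that is exactly why a fully-async lower half could be swapped in later without API changes.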

View File

@@ -34,7 +34,7 @@ The kernel was doing maybe 25–30 ms of work. The other 130 ms of the gap-vs-If
 Both `asyncpg` (in `buffer.pyx`) and `psycopg3` (in `pq.PGconn`) put a single growing read buffer on the protocol/connection object. The parser indexes into it via `struct.unpack_from(buf, offset)` rather than slicing copies. Refills happen via one large `recv(64K)` rather than many small `recv()`s for individual fields.
-Phase 39 ports that pattern to `informix-db`. The state machine:
+Phase 39 ports that pattern to `informix-driver`. The state machine:
 ```text
 ┌───────────────────────────────┐
@@ -106,7 +106,7 @@ A/B-measured against the same Docker container, warmed cache, only the env flag
 Re-running the IfxPy comparison after Phase 39:
-| Workload | IfxPy 2.0.7 (C) | informix-db Phase 39 | Ratio |
+| Workload | IfxPy 2.0.7 (C) | informix-driver Phase 39 | Ratio |
 |---|---:|---:|---:|
 | `select_scaling_1000` | 1.637 ms | 1.716 ms | **1.05×** |
 | `select_scaling_10000` | 15.07 ms | 16.08 ms | **1.07×** |

View File

@@ -1,11 +1,11 @@
 ---
 title: The phase log
-description: Phase-by-phase narrative of how informix-db got built, with notable architectural decisions called out.
+description: Phase-by-phase narrative of how informix-driver got built, with notable architectural decisions called out.
 sidebar:
 order: 6
 ---
-The driver was built across 39+ phases, each with a focused scope and a decision log. This page is a high-level index; the gory details (with rationale, alternatives considered, and rollback notes) live in [`docs/DECISION_LOG.md`](https://github.com/rsp2k/informix-db/blob/main/docs/DECISION_LOG.md).
+The driver was built across 39+ phases, each with a focused scope and a decision log. This page is a high-level index; the gory details (with rationale, alternatives considered, and rollback notes) live in [`docs/DECISION_LOG.md`](https://git.supported.systems/warehack.ing/informix-db/src/branch/main/docs/DECISION_LOG.md).
 ## Foundation (Phases 1–10)
@@ -84,4 +84,4 @@ The roadmap (loose, not committed):
 - **Phase 4x (protocol)**: Optional Cython acceleration for the codec hot loop. Would compromise "pure Python" — gated behind a build flag.
 - **Phase 5x (API)**: Native `callproc` with named parameters, IBM-specific scrollable cursor extensions for full IfxPy parity.
-The phase log is updated as work lands. The repo's [`CHANGELOG.md`](https://github.com/rsp2k/informix-db/blob/main/CHANGELOG.md) is the source of truth for shipped changes.
+The phase log is updated as work lands. The repo's [`CHANGELOG.md`](https://git.supported.systems/warehack.ing/informix-db/src/branch/main/CHANGELOG.md) is the source of truth for shipped changes.

View File

@@ -58,6 +58,6 @@ For I/O-bound workloads we're already at the ceiling. The buffered reader closed
 Pure-Python costs us ~5–15% on bulk-fetch workloads and zero (or favorable) on everything else. The deployment, async, and modern-Python wins are large and don't depend on workload.
-If the codec gap matters for your case — analytical reporting against a wide table, pulling millions of rows in a single SELECT — IfxPy is probably the right tool today. If you're doing transactional or bulk-load work, FastAPI services, or any deployment where IBM's C SDK is friction, `informix-db` is the right tool.
+If the codec gap matters for your case — analytical reporting against a wide table, pulling millions of rows in a single SELECT — IfxPy is probably the right tool today. If you're doing transactional or bulk-load work, FastAPI services, or any deployment where IBM's C SDK is friction, `informix-driver` is the right tool.
 The driver chose the goal — *first pure-socket Informix driver in any language* — over the local optimum. Phase 37 onward is a sustained effort to make that choice cost as little as possible.

View File

@@ -9,7 +9,7 @@ import { Aside } from '@astrojs/starlight/components';
 SQLI is Informix's wire protocol — the same protocol IBM's CSDK and JDBC driver speak. It's a binary, length-prefixed PDU stream over a single TCP connection.
-This page is a short tour. The byte-level reference (with hex annotations) lives in [`docs/PROTOCOL_NOTES.md`](https://github.com/rsp2k/informix-db/blob/main/docs/PROTOCOL_NOTES.md) in the repo.
+This page is a short tour. The byte-level reference (with hex annotations) lives in [`docs/PROTOCOL_NOTES.md`](https://git.supported.systems/warehack.ing/informix-db/src/branch/main/docs/PROTOCOL_NOTES.md) in the repo.
 ## PDU framing
@@ -80,5 +80,5 @@ For pipelined `executemany`, the driver sends `SQ_OPEN`+`SQ_FETCH` (or `SQ_BIND`
 This was the architectural pivot in [Phase 10/11](/explain/phase-log/) that made smart-LOBs work end-to-end in pure Python. Reading and writing GB-sized BLOBs goes through the same socket as any other query.
 <Aside type="note">
-The protocol has many more PDU types than this page covers — mostly variants for specific server features (PUT, GET-DESCRIPTOR, ROWDESC, DBINFO, COLLINFO, etc.). The complete list is in [`docs/PROTOCOL_NOTES.md`](https://github.com/rsp2k/informix-db/blob/main/docs/PROTOCOL_NOTES.md), with hex captures for each.
+The protocol has many more PDU types than this page covers — mostly variants for specific server features (PUT, GET-DESCRIPTOR, ROWDESC, DBINFO, COLLINFO, etc.). The complete list is in [`docs/PROTOCOL_NOTES.md`](https://git.supported.systems/warehack.ing/informix-db/src/branch/main/docs/PROTOCOL_NOTES.md), with hex captures for each.
 </Aside>

View File

@@ -7,7 +7,7 @@ sidebar:
 import { Aside } from '@astrojs/starlight/components';
-`informix-db` has a native async API. Use it from FastAPI by creating the pool in the app's lifespan and dependency-injecting connections per request.
+`informix-driver` has a native async API. Use it from FastAPI by creating the pool in the app's lifespan and dependency-injecting connections per request.
 ## App skeleton
@@ -56,7 +56,7 @@ async def get_user(user_id: int, conn = Depends(get_conn)):
 ## Cancellation
-If a client disconnects mid-request, FastAPI cancels the task. `informix-db` is cancellation-safe — the cancellation propagates cleanly, the in-flight worker is reaped, and the connection returns to the pool clean (transactions rolled back). You don't need to wrap anything in `try/finally`.
+If a client disconnects mid-request, FastAPI cancels the task. `informix-driver` is cancellation-safe — the cancellation propagates cleanly, the in-flight worker is reaped, and the connection returns to the pool clean (transactions rolled back). You don't need to wrap anything in `try/finally`.
 <Aside type="note">
 This is a Phase 27 invariant: async cancellation cannot leak running workers onto recycled connections. The earlier behavior was a `High` audit finding; the fix is a CI tripwire test that's been green every commit since.

View File

@@ -65,7 +65,7 @@ docker exec -it informix-dev su - informix -c '
 '
 ```
-After that, BLOBs and CLOBs work end-to-end. See [`docs/DECISION_LOG.md` §10](https://github.com/rsp2k/informix-db/blob/main/docs/DECISION_LOG.md) for the gory details.
+After that, BLOBs and CLOBs work end-to-end. See [`docs/DECISION_LOG.md` §10](https://git.supported.systems/warehack.ing/informix-db/src/branch/main/docs/DECISION_LOG.md) for the gory details.
 ## Running the integration tests

View File

@@ -7,7 +7,7 @@ sidebar:
 import { Aside } from '@astrojs/starlight/components';
-`executemany()` is the right tool for bulk inserts and updates. With `informix-db`'s pipelined implementation it's **1.6× faster than IfxPy** at 10k+ rows.
+`executemany()` is the right tool for bulk inserts and updates. With `informix-driver`'s pipelined implementation it's **1.6× faster than IfxPy** at 10k+ rows.
 ## The basic shape

View File

@@ -1,6 +1,6 @@
 ---
 title: Migrate from IfxPy
-description: API differences between IfxPy and informix-db, what's the same, what's not, and how to migrate incrementally.
+description: API differences between IfxPy and informix-driver, what's the same, what's not, and how to migrate incrementally.
 sidebar:
 order: 7
 ---
@@ -18,7 +18,7 @@ conn_str = (
 )
 conn = IfxPy.connect(conn_str, "", "")
-# informix-db
+# informix-driver
 import informix_db
 conn = informix_db.connect(
 host="db.example.com", port=9088,
@@ -27,7 +27,7 @@ conn = informix_db.connect(
 )
 ```
-The `informix-db` keyword-argument form is closer to `psycopg`/`asyncpg` shapes. Connection strings aren't supported (deliberately — they're a security and parsing footgun).
+The `informix-driver` keyword-argument form is closer to `psycopg`/`asyncpg` shapes. Connection strings aren't supported (deliberately — they're a security and parsing footgun).
 ## The DB-API surface is the same
@@ -45,7 +45,7 @@ The exception hierarchy is identical: `Error`, `Warning`, `InterfaceError`, `Dat
 - **Async API** (`from informix_db import aio`)
 - **Connection pool** (`informix_db.create_pool` / `aio.create_pool`)
-- **Type-safe annotations** — `informix-db` ships with `py.typed`
+- **Type-safe annotations** — `informix-driver` ships with `py.typed`
 - **Python 3.12+ support**
 - **Pipelined `executemany`** — 1.6× faster than IfxPy's per-row implementation
@@ -53,7 +53,7 @@ The exception hierarchy is identical: `Error`, `Warning`, `InterfaceError`, `Dat
 If your codebase has hundreds of IfxPy call sites, you can do a partial migration:
-1. Replace connection construction with `informix-db` at the application boundary.
+1. Replace connection construction with `informix-driver` at the application boundary.
 2. Where you used `IfxPy.fetch_assoc` (returns dict), wrap our cursor with a small adapter:
 ```python
 def fetch_assoc(cur):

View File

@@ -7,7 +7,7 @@ sidebar:
 import { Aside } from '@astrojs/starlight/components';
-`informix-db` reads and writes smart-LOB columns (BLOB and CLOB) end-to-end without any native machinery. The implementation uses Informix's `lotofile` and `filetoblob` SQL functions, intercepted at the `SQ_FILE` (98) wire-protocol level.
+`informix-driver` reads and writes smart-LOB columns (BLOB and CLOB) end-to-end without any native machinery. The implementation uses Informix's `lotofile` and `filetoblob` SQL functions, intercepted at the `SQ_FILE` (98) wire-protocol level.
 ## Reading a BLOB
@@ -63,7 +63,7 @@ Smart-LOBs require server-side configuration that the IBM Developer Edition Dock
 - A **level-0 archive** must be taken (`ontape -s -L 0`) before BLOBs can be created
 - The database must be created **with logging** (`CREATE DATABASE foo WITH LOG`)
-Full setup commands are in [`docs/DECISION_LOG.md` §10](https://github.com/rsp2k/informix-db/blob/main/docs/DECISION_LOG.md).
+Full setup commands are in [`docs/DECISION_LOG.md` §10](https://git.supported.systems/warehack.ing/informix-db/src/branch/main/docs/DECISION_LOG.md).
 </Aside>
 ## How it works (briefly)
@@ -72,4 +72,4 @@ The `lotofile` server function returns a smart-LOB descriptor as a regular resul
 Writing reverses the flow: `filetoblob` is invoked via a placeholder pattern in the SQL, the driver sends the bytes via `SQ_FILE` PDUs, and the server stores them in the sbspace.
-The architectural pivot from the heavier `SQ_FPROUTINE` + `SQ_LODATA` stack to this lighter `SQ_FILE` intercept is documented in [Phase 10/11 of the decision log](https://github.com/rsp2k/informix-db/blob/main/docs/DECISION_LOG.md). The result is roughly 3× smaller than originally projected.
+The architectural pivot from the heavier `SQ_FPROUTINE` + `SQ_LODATA` stack to this lighter `SQ_FILE` intercept is documented in [Phase 10/11 of the decision log](https://git.supported.systems/warehack.ing/informix-db/src/branch/main/docs/DECISION_LOG.md). The result is roughly 3× smaller than originally projected.

View File

@@ -66,7 +66,7 @@ That's it. No `IBM_DB_HOME`. No DSN file. No `libcrypt.so.1`.
 The existing tools were not my style.
-Every other Informix driver in any language wraps either IBM's C Client SDK or the JDBC JAR. `IfxPy`, the legacy `informixdb`, ODBC bridges, JPype/JDBC, Perl `DBD::Informix` — all of them. To our knowledge **`informix-db` is the first pure-socket Informix driver in any language**.
+Every other Informix driver in any language wraps either IBM's C Client SDK or the JDBC JAR. `IfxPy`, the legacy `informixdb`, ODBC bridges, JPype/JDBC, Perl `DBD::Informix` — all of them. To our knowledge **`informix-driver` is the first pure-socket Informix driver in any language**.
 The OneDB CSDK is a 92 MB tarball. It needs `libcrypt.so.1` (deprecated 2018, missing on Arch, Fedora 35+, RHEL 9). It needs four `LD_LIBRARY_PATH` entries. It needs `setuptools < 58`. And IfxPy itself is broken on Python 3.12+. For containerized deployments, ETL pipelines, FastAPI services, or anywhere a build toolchain on the runtime is friction, this driver is the alternative that didn't previously exist. Now it does.

View File

@@ -34,14 +34,14 @@ These are the steady-state numbers for the current release (`2026.05.05.12`), me
 ## vs IfxPy 3.0.5
-| Benchmark | IfxPy | informix-db | Result |
+| Benchmark | IfxPy | informix-driver | Result |
 |---|---:|---:|---:|
 | Single-row SELECT round-trip | 118 µs | 114 µs | comparable |
 | ~10-row server-side query | 130 µs | 159 µs | IfxPy 22% faster |
 | Cold connect | 11.0 ms | 10.5 ms | comparable |
 | `executemany(1k)` | 23.5 ms | 23.2 ms | tied |
-| `executemany(10k)` | 259 ms | **161 ms** | **informix-db 1.6× faster** |
+| `executemany(10k)` | 259 ms | **161 ms** | **informix-driver 1.6× faster** |
-| `executemany(100k)` | 2376 ms | **1487 ms** | **informix-db 1.6× faster** |
+| `executemany(100k)` | 2376 ms | **1487 ms** | **informix-driver 1.6× faster** |
 | `SELECT 1k` | 1.34 ms | 1.72 ms | IfxPy 1.28× |
 | `SELECT 10k` | 11.7 ms | 16.1 ms | IfxPy 1.07× |
 | `SELECT 100k` | 116 ms | 169 ms | IfxPy 1.15× |
@@ -62,7 +62,7 @@ The Phase 39 jump is documented in [The buffered reader](/explain/buffered-reade
 ## Reproducing
 ```bash
-git clone https://github.com/rsp2k/informix-db
+git clone https://git.supported.systems/warehack.ing/informix-db
 cd informix-db
 make ifx-up # starts the dev container
 make bench # runs all benchmarks

View File

@ -7,20 +7,20 @@ sidebar:
import { Aside } from '@astrojs/starlight/components'; import { Aside } from '@astrojs/starlight/components';
[IfxPy](https://pypi.org/project/IfxPy/) is IBM's official Python driver — a C extension that wraps the OneDB Client SDK (CSDK), which itself wraps the same SQLI wire protocol `informix-db` speaks directly. It's the reasonable comparison: same protocol, same server, same workload, different transport. [IfxPy](https://pypi.org/project/IfxPy/) is IBM's official Python driver — a C extension that wraps the OneDB Client SDK (CSDK), which itself wraps the same SQLI wire protocol `informix-driver` speaks directly. It's the reasonable comparison: same protocol, same server, same workload, different transport.
Numbers below are **median + IQR over 10+ rounds**, all against the same IBM Informix Developer Edition Docker container on the same host. Methodology and reproduction steps live in [`tests/benchmarks/compare/`](https://github.com/rsp2k/informix-db/tree/main/tests/benchmarks/compare) in the repo. Numbers below are **median + IQR over 10+ rounds**, all against the same IBM Informix Developer Edition Docker container on the same host. Methodology and reproduction steps live in [`tests/benchmarks/compare/`](https://git.supported.systems/warehack.ing/informix-db/src/branch/main/tests/benchmarks/compare) in the repo.
## Headline numbers

| Benchmark | IfxPy 3.0.5 (C) | informix-driver (pure Python) | Result |
|---|---:|---:|---:|
| Single-row SELECT round-trip | 118 µs | 114 µs | comparable |
| ~10-row server-side query | 130 µs | 159 µs | IfxPy 22% faster |
| Cold connect (login handshake) | 11.0 ms | 10.5 ms | comparable |
| `executemany(1k)` in transaction | 23.5 ms | 23.2 ms | tied |
| **`executemany(10k)` in transaction** | 259 ms | **161 ms** | **informix-driver 1.6× faster** |
| **`executemany(100k)` in transaction** | 2376 ms | **1487 ms** | **informix-driver 1.6× faster** |
| `SELECT 1k` rows | 1.34 ms | 1.72 ms | IfxPy 1.28× faster |
| `SELECT 10k` rows | 11.7 ms | 16.1 ms | IfxPy 1.07× faster |
| `SELECT 100k` rows | 116 ms | 169 ms | IfxPy 1.15× faster |
<Aside>
Phase 39's connection-scoped buffered reader closed the bulk-fetch gap from a steady ~2.4× to ~1.05–1.15×. The story of how that landed is in [the buffered reader page](/explain/buffered-reader/).
</Aside>
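For intuition, here is the buffering idea in generic form. This is a minimal sketch, not informix-driver's actual implementation: pull large chunks off the raw transport and serve small reads from memory, so decoding a row no longer costs one `recv()` per field.

```python
import io

class BufferedTransport:
    """Generic buffered-read sketch (illustrative only): fetch big chunks
    from the raw transport, serve small reads from an in-memory buffer."""

    def __init__(self, raw, bufsize=65536):
        self.raw = raw            # anything with a .read(n) method
        self.bufsize = bufsize
        self.buf = b""

    def read_exact(self, n):
        # Refill from the transport only when the buffer runs dry.
        while len(self.buf) < n:
            chunk = self.raw.read(self.bufsize)
            if not chunk:
                raise EOFError("transport closed mid-read")
            self.buf += chunk
        out, self.buf = self.buf[:n], self.buf[n:]
        return out

# io.BytesIO stands in for a socket here.
reader = BufferedTransport(io.BytesIO(b"abcdef" * 100))
first = reader.read_exact(4)
```

The same shape works over a real socket by wrapping `sock.recv`; the win is purely in syscall count, not protocol semantics.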
## When informix-driver wins

### Bulk inserts at scale

The clearest win is bulk insert throughput. `executemany` with 10,000 rows runs in **161 ms** vs IfxPy's **259 ms** — `informix-driver` is 1.6× faster.

The mechanism is pipelining. Phase 33 changed `executemany` to send all N BIND+EXECUTE PDUs back-to-back **before** draining any response. IfxPy's C-level `IfxPy.execute(stmt, tuple)` makes one round-trip per row — N RTTs at ~80 µs each adds up to the 100 ms gap.
```python
cur.executemany(
    "INSERT INTO t (a, b, c) VALUES (?, ?, ?)",  # illustrative statement
    rows,  # list of 10_000 tuples
)
# informix-driver: 161 ms — 10k PDUs sent, then 10k responses drained
# IfxPy:           259 ms — 10k round-trips, each blocking on response
```
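The latency intuition can be sketched with a toy cost model. Every constant below is hypothetical, chosen only to show the shape of the win, not taken from the benchmark numbers:

```python
# Toy cost model (all constants hypothetical) contrasting one blocking
# round-trip per row with send-everything-then-drain pipelining.
RTT_US = 80     # assumed network round-trip for one blocking call
WORK_US = 10    # assumed per-row serialize + response-parse cost

def per_row_ms(n_rows):
    # Sequential: every row pays a full round-trip plus its work.
    return n_rows * (RTT_US + WORK_US) / 1000

def pipelined_ms(n_rows):
    # Pipelined: per-row work still happens, but only one round-trip total.
    return (RTT_US + n_rows * WORK_US) / 1000
```

The per-row work term is identical in both; pipelining only deletes the N-1 extra round-trips, which is why the gap widens with batch size.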
### Containerized deployment

`informix-driver` ships as a 50 KB pure-Python wheel with **zero runtime dependencies**. Your Dockerfile is:
```dockerfile
FROM python:3.13-slim
RUN pip install informix-driver
```

IfxPy's deployment surface is dramatically larger:

- `libcrypt.so.1` (deprecated 2018 — missing on Arch, Fedora 35+, RHEL 9)
- C compiler in the build image
For slim images, multi-stage builds, FaaS deployments, or anywhere build-toolchain-on-the-runtime is friction, `informix-driver` is the only reasonable option.
### Modern Python

IfxPy works on Python ≤ 3.11 currently. The C extension breaks on 3.12+ (PyConfig changes, removed `_PyImport_AcquireLock`, etc.).

`informix-driver` works unmodified on **3.10, 3.11, 3.12, 3.13, and 3.14**. We've kept a CI matrix on every minor version since 3.10 from the start.

### Async

`informix-driver` ships an async API:
```python
from informix_db import aio
```

If you're running analytical reports that pull millions of rows in a single SELECT, IfxPy is currently the faster choice.
### Workloads built around CSDK extensions

If your existing code uses IBM-specific cursor extensions (`cursor.callproc` with named parameters, IBM's specific scrollable cursor semantics around `last`/`prior`/`relative`, `cursor.set_chunk_size` for fetch tuning), the migration to `informix-driver` is straightforward but not zero-cost. We support the core PEP 249 surface plus our own scrollable cursor API — see [the migration guide](/how-to/migrate-from-ifxpy/).

## Methodology
IfxPy's IQR on the 100k-row SELECT is ~21% (Docker→host loopback noise, plus …).
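Median + IQR is cheap to compute with the stdlib. A minimal sketch of the aggregation; the per-round timings below are made up, with one deliberate outlier round:

```python
import statistics

def median_iqr(samples):
    """Median plus interquartile range, robust against outlier rounds."""
    q1, q2, q3 = statistics.quantiles(samples, n=4)  # default 'exclusive' method
    return q2, q3 - q1

# Hypothetical per-round timings in ms, one slow outlier included.
rounds_ms = [116, 118, 115, 140, 117, 116, 119, 118, 117, 116]
med, iqr = median_iqr(rounds_ms)
```

The 140 ms outlier round barely moves either statistic, which is exactly why median + IQR beats mean ± stdev for benchmark reporting.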
To reproduce:

```bash
git clone https://git.supported.systems/warehack.ing/informix-db
cd informix-db/tests/benchmarks/compare
make ifx-up
make compare
```

The `Makefile` handles the IfxPy install gauntlet (Python ≤ 3.11 environment, …).
## Summary

Use `informix-driver` when:

- You're writing new code in Python ≥ 3.10
- Your workload is bulk-insert / ETL / log-shipping

---

The existing tools were not my style.

Every Informix driver in any language — `IfxPy`, the legacy `informixdb`, ODBC bridges, JPype/JDBC, Perl `DBD::Informix` — wraps either IBM's C Client SDK or the JDBC JAR. To our knowledge `informix-driver` is the **first pure-socket Informix driver in any language**.

## The problem with IBM's C SDK
The IBM Informix Client SDK (CSDK), now packaged as part of OneDB Client, is a 9…

- Four `LD_LIBRARY_PATH` directories
- `libcrypt.so.1` — deprecated in 2018, missing on Arch, Fedora 35+, RHEL 9

For containerized deployments, ETL pipelines, FastAPI services, or anywhere Python lives and IBM's C SDK is friction, the friction compounds. `informix-driver`'s install is `pip install informix-driver` (`import informix_db` — the distribution name dodges PyPI's 2008-vintage `informixdb` package, the import name is what you'd expect). The wheel is ~50 KB. There are zero runtime dependencies.
## What it does

`informix-driver` opens a TCP socket to an Informix server's SQLI listener and speaks the wire protocol directly — the same protocol IBM's JDBC driver uses, the same protocol the CSDK speaks under the hood. No native code is in the thread of execution.
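To make "speaks the wire protocol directly" concrete, here is a generic length-prefixed framing sketch. It is purely illustrative: the 2-byte prefix and layout below are NOT the actual SQLI PDU encoding.

```python
import struct

def frame(payload: bytes) -> bytes:
    # Generic 2-byte big-endian length prefix; illustrative only,
    # not the real SQLI encoding.
    return struct.pack(">H", len(payload)) + payload

def unframe(buf: bytes) -> bytes:
    # Read the prefix, then slice out exactly that many payload bytes.
    (length,) = struct.unpack_from(">H", buf)
    return buf[2 : 2 + length]

wire = frame(b"select 1")
```

Driving a real protocol is this loop plus a catalog of message types; no native code is needed at any step.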
The wire protocol was reverse-engineered through three sources:
The result is a PEP 249 compliant driver with a sync API, an async API (FastAPI…).
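PEP 249 compliance means the call shapes are the standard ones. A quick sketch using the stdlib `sqlite3` module as a stand-in PEP 249 driver (the qmark parameter style shown is sqlite3's; swap the connect call for `informix_db` against a real server):

```python
import sqlite3  # stand-in PEP 249 driver, for illustration only

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (a INTEGER, b TEXT)")
cur.executemany("INSERT INTO t (a, b) VALUES (?, ?)", [(1, "x"), (2, "y")])
conn.commit()
cur.execute("SELECT a, b FROM t ORDER BY a")
rows = cur.fetchall()
conn.close()
```

Code written against this surface moves between compliant drivers with connect-call and paramstyle changes only.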
## What it's good for

The places where `informix-driver` is unambiguously the right choice:

- **ETL and bulk-load pipelines.** Pipelined `executemany` (Phase 33) is 1.6× faster than IfxPy at scale because every BIND+EXECUTE PDU goes out before any responses are drained. IfxPy still pays one round-trip per `IfxPy.execute(stmt, tuple)` call.
- **Container deployments.** The 50 KB wheel and absent native deps mean a slim base image works. No multi-stage build to compile the CSDK.
Honesty matters here:

- **Large analytical fetches.** IfxPy's C-level `fetch_tuple` decoder is faster than our Python `parse_tuple_payload` (~1.1 µs/row vs ~2.0 µs/row after Phase 39). For workloads pulling 10k+ rows in a single SELECT where the per-row decode cost dominates, IfxPy is currently 5–15% faster. The gap is shrinking phase by phase.
- **Workloads built around the CSDK.** If your existing code already uses IfxPy idioms (`IfxPyDbi.connect_pooled`, IBM's specific cursor extensions), the migration to `informix-driver` is straightforward but not zero-cost.
The honest summary table from the [comparison page](/start/vs-ifxpy/):

| Workload | Winner | Margin |
|---|---|---|
| Bulk insert (`executemany` 10k–100k rows) | `informix-driver` | 1.6× faster |
| Bulk SELECT (10k–100k rows) | IfxPy | 1.05–1.15× faster |
| Single-row queries | tied | within noise |
| Cold connect | tied | within noise |
| Containerized deployment | `informix-driver` | no contest |
| Python 3.12+ | `informix-driver` | only option |

## Production-ready

---

/*
 * informix-driver docs theme
 * - Charcoal base (no purple gradients, ever)
 * - Amber accent: CRT-monitor nod, distinct from sibling sites' cyan
 * - Inter for body, IBM Plex Mono for technical bytes
 */