diff --git a/README.md b/README.md
index e3e667d..ac4bde7 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-# informix-db
+# informix-driver
Pure-Python driver for IBM Informix IDS, speaking the SQLI wire protocol over raw sockets. **No IBM Client SDK. No JVM. No native libraries.** PEP 249 compliant; sync + async APIs; built-in connection pool; TLS support.
@@ -176,25 +176,25 @@ Single-connection benchmarks against the dev container on loopback:
Head-to-head benchmarks against [IfxPy](https://pypi.org/project/IfxPy/) on identical workloads, same Informix server, matched conditions. Results are reported as **median + IQR over 10+ rounds** to resist outlier-round noise:
-| Benchmark | IfxPy 3.0.5 (C-bound) | `informix-db` (pure Python) | Result |
+| Benchmark | IfxPy 3.0.5 (C-bound) | `informix-driver` (pure Python) | Result |
|---|---:|---:|---:|
| Single-row SELECT round-trip | 118 µs | 114 µs | comparable |
| ~10-row server-side query | 130 µs | 159 µs | IfxPy 22% faster |
| Cold connect (login handshake) | 11.0 ms | 10.5 ms | comparable |
| **`executemany(1k)` in transaction** | 23.5 ms | 23.2 ms | tied |
-| **`executemany(10k)` in transaction** | 259 ms | **161 ms** | **`informix-db` 1.6× faster** |
-| **`executemany(100k)` in transaction** | 2376 ms | **1487 ms** | **`informix-db` 1.6× faster** |
+| **`executemany(10k)` in transaction** | 259 ms | **161 ms** | **`informix-driver` 1.6× faster** |
+| **`executemany(100k)` in transaction** | 2376 ms | **1487 ms** | **`informix-driver` 1.6× faster** |
| `SELECT` 1k rows | 1.2 ms | 2.7 ms | IfxPy 2.3× faster |
| `SELECT` 10k rows | 11.3 ms | 25.8 ms | IfxPy 2.3× faster |
| `SELECT` 100k rows | 112 ms | 271 ms | IfxPy 2.4× faster |
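The **median + IQR** aggregation behind these tables is plain stdlib work — a minimal sketch (the sample timings below are illustrative, not benchmark data):

```python
import statistics

def summarize(rounds: list[float]) -> tuple[float, float]:
    """Collapse per-round timings (ms) into (median, IQR)."""
    q1, _, q3 = statistics.quantiles(rounds, n=4)  # quartile cut points
    return statistics.median(rounds), q3 - q1

# e.g. ten rounds of an executemany(10k) benchmark, in ms
median_ms, iqr_ms = summarize(
    [161.0, 158.2, 164.9, 160.1, 159.7, 163.3, 162.0, 158.8, 165.4, 161.6]
)
```

Reporting median + IQR rather than mean ± stddev keeps one noisy round (a GC pause, a container scheduling hiccup) from moving the headline number.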
**The honest summary:**
-- **Bulk-insert workloads: `informix-db` wins 1.6× at scale.** The pipelined `executemany` (Phase 33) sends all N BIND+EXECUTE PDUs before draining responses, eliminating per-row RTT. IfxPy still pays one round-trip per `IfxPy.execute(stmt, tuple)` call.
+- **Bulk-insert workloads: `informix-driver` wins 1.6× at scale.** The pipelined `executemany` (Phase 33) sends all N BIND+EXECUTE PDUs before draining responses, eliminating per-row RTT. IfxPy still pays one round-trip per `IfxPy.execute(stmt, tuple)` call.
- **Large-fetch workloads: IfxPy wins 2.3× at scale.** Their C-level `fetch_tuple` decoder is genuinely faster than our Python `parse_tuple_payload` (~1.1 µs/row vs ~2.7 µs/row). At 100k rows, that 1.6 µs/row gap accumulates into a 160 ms wall-clock difference.
- **Small queries: comparable.** Both spend ~120 µs waiting for the server; the per-call codec cost is small relative to the round-trip.
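The send-everything-then-drain shape described in the first bullet can be sketched generically — the codec callables here are stand-ins, not the driver's real BIND+EXECUTE encoders:

```python
import socket

def pipelined_executemany(sock: socket.socket, rows, encode_row, read_response):
    # Phase 1: push every BIND+EXECUTE frame back-to-back, without
    # waiting on any reply — the server-side pipeline stays full.
    sock.sendall(b"".join(encode_row(row) for row in rows))
    # Phase 2: drain exactly one response per row; per-row errors
    # surface here instead of stalling the send loop.
    return [read_response(sock) for _ in rows]
```

The per-row-RTT alternative — send one frame, block on its reply, repeat — is exactly what the 10k/100k gap in the table above measures.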
-**When to prefer `informix-db`:**
+**When to prefer `informix-driver`:**
- ETL pipelines, log shipping, bulk writes (1.6× faster at scale)
- Containerized / minimal-dependency environments (50 KB wheel vs IfxPy's 92 MB OneDB tarball + libcrypt.so.1 dependency hell)
- Modern Python (works on 3.10–3.14; IfxPy is broken on Python 3.12+)
@@ -208,7 +208,7 @@ These results are reproducible from `tests/benchmarks/compare/` — the Dockerfi
Full methodology, IQR caveats, install gauntlet, and reproduction in [`tests/benchmarks/compare/README.md`](https://git.supported.systems/warehack.ing/informix-db/src/branch/main/tests/benchmarks/compare/README.md).
-A note on IfxPy's install gauntlet: getting it to run on a modern system requires Python ≤ 3.11, setuptools <58, permissive CFLAGS, manual download of a 92 MB ODBC tarball, four `LD_LIBRARY_PATH` directories, and `libcrypt.so.1` (deprecated 2018, missing on Arch / Fedora 35+ / RHEL 9). `informix-db`'s install: `pip install informix-driver`.
+A note on IfxPy's install gauntlet: getting it to run on a modern system requires Python ≤ 3.11, setuptools <58, permissive CFLAGS, manual download of a 92 MB ODBC tarball, four `LD_LIBRARY_PATH` directories, and `libcrypt.so.1` (deprecated 2018, missing on Arch / Fedora 35+ / RHEL 9). `informix-driver`'s install: `pip install informix-driver`.
## Standards & guarantees
diff --git a/docs-site/astro.config.mjs b/docs-site/astro.config.mjs
index 6a391d5..842b739 100644
--- a/docs-site/astro.config.mjs
+++ b/docs-site/astro.config.mjs
@@ -34,7 +34,7 @@ export default defineConfig({
baseUrl: 'https://git.supported.systems/warehack.ing/informix-db/_edit/branch/main/docs-site/',
},
social: [
- { icon: 'github', label: 'Source (Gitea)', href: 'https://git.supported.systems/warehack.ing/informix-db' },
+ { icon: 'seti:git', label: 'Source (Gitea)', href: 'https://git.supported.systems/warehack.ing/informix-db' },
{ icon: 'seti:python', label: 'PyPI', href: 'https://pypi.org/project/informix-driver/' },
],
customCss: ['./src/styles/theme.css', './src/styles/components.css'],
diff --git a/docs-site/src/components/Footer.astro b/docs-site/src/components/Footer.astro
index 38bcea8..85f2fa7 100644
--- a/docs-site/src/components/Footer.astro
+++ b/docs-site/src/components/Footer.astro
@@ -20,7 +20,7 @@ import Default from '@astrojs/starlight/components/Footer.astro';
A Supported Systems Joint
- informix-db is built and maintained by
+ informix-driver is built and maintained by
Supported Systems — a
boutique software studio focused on thoughtful, user-first technology.
We take databases personally.
diff --git a/docs-site/src/components/Hero.astro b/docs-site/src/components/Hero.astro
index 1cb1f57..4667adf 100644
--- a/docs-site/src/components/Hero.astro
+++ b/docs-site/src/components/Hero.astro
@@ -21,7 +21,7 @@
diff --git a/docs-site/src/content/docs/explain/async-strategy.mdx b/docs-site/src/content/docs/explain/async-strategy.mdx
index 6b9a368..7fc816b 100644
--- a/docs-site/src/content/docs/explain/async-strategy.mdx
+++ b/docs-site/src/content/docs/explain/async-strategy.mdx
@@ -1,13 +1,13 @@
---
title: Async strategy
-description: Why informix-db wraps a sync core in a thread pool instead of going fully async — and what that costs.
+description: Why informix-driver wraps a sync core in a thread pool instead of going fully async — and what that costs.
sidebar:
order: 4
---
import { Aside } from '@astrojs/starlight/components';
-`informix-db`'s async API (`from informix_db import aio`) is implemented by wrapping the sync core in a thread pool. Every `await conn.execute(...)` schedules the underlying sync `execute()` on the pool's executor.
+`informix-driver`'s async API (`from informix_db import aio`) is implemented by wrapping the sync core in a thread pool. Every `await conn.execute(...)` schedules the underlying sync `execute()` on the pool's executor.
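Mechanically, the wrapper is small. A sketch of the pattern (class and method names are illustrative, not the driver's internals):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

class AsyncConnection:
    """Async facade over a blocking sync connection (illustrative)."""

    def __init__(self, sync_conn, max_workers: int = 4):
        self._conn = sync_conn
        self._pool = ThreadPoolExecutor(max_workers=max_workers)

    async def execute(self, sql: str, params=()):
        # Run the blocking sync execute() on the pool's executor so the
        # event loop stays free for other tasks during the round-trip.
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(
            self._pool, self._conn.execute, sql, params
        )
```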
This is a deliberate architectural choice from Phase 16. Here's the reasoning.
@@ -57,5 +57,5 @@ The two scenarios where a full async I/O implementation would matter:
If either becomes a real production concern, the layered architecture lets us swap in a fully-async lower half without changing the upper half. The cursor / connection / pool API doesn't care how the bytes get to and from the server. That's the option-2 win we explicitly preserved.
diff --git a/docs-site/src/content/docs/explain/buffered-reader.mdx b/docs-site/src/content/docs/explain/buffered-reader.mdx
index f8303a7..0acc34a 100644
--- a/docs-site/src/content/docs/explain/buffered-reader.mdx
+++ b/docs-site/src/content/docs/explain/buffered-reader.mdx
@@ -34,7 +34,7 @@ The kernel was doing maybe 25–30 ms of work. The other 130 ms of the gap-vs-If
Both `asyncpg` (in `buffer.pyx`) and `psycopg3` (in `pq.PGconn`) put a single growing read buffer on the protocol/connection object. The parser indexes into it via `struct.unpack_from(buf, offset)` rather than slicing copies. Refills happen via one large `recv(64K)` rather than many small `recv()`s for individual fields.
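In miniature, the shared pattern looks like this — a sketch, not asyncpg's or this driver's actual code; the 64 KB refill size and the `>I` field are illustrative:

```python
import struct

class BufferedReader:
    """One growing read buffer per connection; parse by offset, not by slice."""

    def __init__(self, sock, chunk: int = 65536):
        self._sock = sock
        self._buf = bytearray()
        self._pos = 0
        self._chunk = chunk

    def _ensure(self, n: int) -> None:
        # One large recv() refill instead of many small per-field recv()s.
        while len(self._buf) - self._pos < n:
            data = self._sock.recv(self._chunk)
            if not data:
                raise ConnectionError("peer closed mid-frame")
            self._buf += data

    def read_u32(self) -> int:
        self._ensure(4)
        # unpack_from indexes into the buffer — no intermediate copy.
        (value,) = struct.unpack_from(">I", self._buf, self._pos)
        self._pos += 4
        return value
```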
-Phase 39 ports that pattern to `informix-db`. The state machine:
+Phase 39 ports that pattern to `informix-driver`. The state machine:
```text
┌───────────────────────────────┐
@@ -106,7 +106,7 @@ A/B-measured against the same Docker container, warmed cache, only the env flag
Re-running the IfxPy comparison after Phase 39:
-| Workload | IfxPy 2.0.7 (C) | informix-db Phase 39 | Ratio |
+| Workload | IfxPy 2.0.7 (C) | informix-driver Phase 39 | Ratio |
|---|---:|---:|---:|
| `select_scaling_1000` | 1.637 ms | 1.716 ms | **1.05×** |
| `select_scaling_10000` | 15.07 ms | 16.08 ms | **1.07×** |
diff --git a/docs-site/src/content/docs/explain/phase-log.md b/docs-site/src/content/docs/explain/phase-log.md
index 3c4a42b..b948e01 100644
--- a/docs-site/src/content/docs/explain/phase-log.md
+++ b/docs-site/src/content/docs/explain/phase-log.md
@@ -1,11 +1,11 @@
---
title: The phase log
-description: Phase-by-phase narrative of how informix-db got built, with notable architectural decisions called out.
+description: Phase-by-phase narrative of how informix-driver got built, with notable architectural decisions called out.
sidebar:
order: 6
---
-The driver was built across 39+ phases, each with a focused scope and a decision log. This page is a high-level index; the gory details (with rationale, alternatives considered, and rollback notes) live in [`docs/DECISION_LOG.md`](https://github.com/rsp2k/informix-db/blob/main/docs/DECISION_LOG.md).
+The driver was built across 39+ phases, each with a focused scope and a decision log. This page is a high-level index; the gory details (with rationale, alternatives considered, and rollback notes) live in [`docs/DECISION_LOG.md`](https://git.supported.systems/warehack.ing/informix-db/src/branch/main/docs/DECISION_LOG.md).
## Foundation (Phases 1–10)
@@ -84,4 +84,4 @@ The roadmap (loose, not committed):
- **Phase 4x (protocol)**: Optional Cython acceleration for the codec hot loop. Would compromise "pure Python" — gated behind a build flag.
- **Phase 5x (API)**: Native `callproc` with named parameters, IBM-specific scrollable cursor extensions for full IfxPy parity.
-The phase log is updated as work lands. The repo's [`CHANGELOG.md`](https://github.com/rsp2k/informix-db/blob/main/CHANGELOG.md) is the source of truth for shipped changes.
+The phase log is updated as work lands. The repo's [`CHANGELOG.md`](https://git.supported.systems/warehack.ing/informix-db/src/branch/main/CHANGELOG.md) is the source of truth for shipped changes.
diff --git a/docs-site/src/content/docs/explain/pure-python.md b/docs-site/src/content/docs/explain/pure-python.md
index 8786e4e..ac4c50e 100644
--- a/docs-site/src/content/docs/explain/pure-python.md
+++ b/docs-site/src/content/docs/explain/pure-python.md
@@ -58,6 +58,6 @@ For I/O-bound workloads we're already at the ceiling. The buffered reader closed
Pure-Python costs us ~5–15% on bulk-fetch workloads and zero (or favorable) on everything else. The deployment, async, and modern-Python wins are large and don't depend on workload.
-If the codec gap matters for your case — analytical reporting against a wide table, pulling millions of rows in a single SELECT — IfxPy is probably the right tool today. If you're doing transactional or bulk-load work, FastAPI services, or any deployment where IBM's C SDK is friction, `informix-db` is the right tool.
+If the codec gap matters for your case — analytical reporting against a wide table, pulling millions of rows in a single SELECT — IfxPy is probably the right tool today. If you're doing transactional or bulk-load work, FastAPI services, or any deployment where IBM's C SDK is friction, `informix-driver` is the right tool.
The driver chose the goal — *first pure-socket Informix driver in any language* — over the local optimum. Phase 37 onward is a sustained effort to make that choice cost as little as possible.
diff --git a/docs-site/src/content/docs/explain/sqli-protocol.mdx b/docs-site/src/content/docs/explain/sqli-protocol.mdx
index 26d38bc..f36569e 100644
--- a/docs-site/src/content/docs/explain/sqli-protocol.mdx
+++ b/docs-site/src/content/docs/explain/sqli-protocol.mdx
@@ -9,7 +9,7 @@ import { Aside } from '@astrojs/starlight/components';
SQLI is Informix's wire protocol — the same protocol IBM's CSDK and JDBC driver speak. It's a binary, length-prefixed PDU stream over a single TCP connection.
-This page is a short tour. The byte-level reference (with hex annotations) lives in [`docs/PROTOCOL_NOTES.md`](https://github.com/rsp2k/informix-db/blob/main/docs/PROTOCOL_NOTES.md) in the repo.
+This page is a short tour. The byte-level reference (with hex annotations) lives in [`docs/PROTOCOL_NOTES.md`](https://git.supported.systems/warehack.ing/informix-db/src/branch/main/docs/PROTOCOL_NOTES.md) in the repo.
## PDU framing
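Framing such a stream takes only a length field and an opcode. A toy sketch — this 4-byte-length / 2-byte-opcode layout is an assumption for illustration, not the documented SQLI header (see `PROTOCOL_NOTES.md` for the real one):

```python
import struct

def frame_pdu(opcode: int, payload: bytes) -> bytes:
    # [u32 length][u16 opcode][payload] — length covers opcode + payload.
    # NOTE: illustrative layout, not the actual SQLI header.
    return struct.pack(">IH", 2 + len(payload), opcode) + payload

def unframe_pdu(stream: bytes) -> tuple[int, bytes, bytes]:
    """Split one PDU off the front; return (opcode, payload, remainder)."""
    (length,) = struct.unpack_from(">I", stream, 0)
    (opcode,) = struct.unpack_from(">H", stream, 4)
    end = 4 + length
    return opcode, bytes(stream[6:end]), bytes(stream[end:])
```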
@@ -80,5 +80,5 @@ For pipelined `executemany`, the driver sends `SQ_OPEN`+`SQ_FETCH` (or `SQ_BIND`
This was the architectural pivot in [Phase 10/11](/explain/phase-log/) that made smart-LOBs work end-to-end in pure Python. Reading and writing GB-sized BLOBs goes through the same socket as any other query.
diff --git a/docs-site/src/content/docs/how-to/async-fastapi.mdx b/docs-site/src/content/docs/how-to/async-fastapi.mdx
index d89e9bf..9b8fc5f 100644
--- a/docs-site/src/content/docs/how-to/async-fastapi.mdx
+++ b/docs-site/src/content/docs/how-to/async-fastapi.mdx
@@ -7,7 +7,7 @@ sidebar:
import { Aside } from '@astrojs/starlight/components';
-`informix-db` has a native async API. Use it from FastAPI by creating the pool in the app's lifespan and dependency-injecting connections per request.
+`informix-driver` has a native async API. Use it from FastAPI by creating the pool in the app's lifespan and dependency-injecting connections per request.
## App skeleton
@@ -56,7 +56,7 @@ async def get_user(user_id: int, conn = Depends(get_conn)):
## Cancellation
-If a client disconnects mid-request, FastAPI cancels the task. `informix-db` is cancellation-safe — the cancellation propagates cleanly, the in-flight worker is reaped, and the connection returns to the pool clean (transactions rolled back). You don't need to wrap anything in `try/finally`.
+If a client disconnects mid-request, FastAPI cancels the task. `informix-driver` is cancellation-safe — the cancellation propagates cleanly, the in-flight worker is reaped, and the connection returns to the pool clean (transactions rolled back). You don't need to wrap anything in `try/finally`.
## How it works (briefly)
@@ -72,4 +72,4 @@ The `lotofile` server function returns a smart-LOB descriptor as a regular resul
Writing reverses the flow: `filetoblob` is invoked via a placeholder pattern in the SQL, the driver sends the bytes via `SQ_FILE` PDUs, and the server stores them in the sbspace.
-The architectural pivot from the heavier `SQ_FPROUTINE` + `SQ_LODATA` stack to this lighter `SQ_FILE` intercept is documented in [Phase 10/11 of the decision log](https://github.com/rsp2k/informix-db/blob/main/docs/DECISION_LOG.md). The result is roughly 3× smaller than originally projected.
+The architectural pivot from the heavier `SQ_FPROUTINE` + `SQ_LODATA` stack to this lighter `SQ_FILE` intercept is documented in [Phase 10/11 of the decision log](https://git.supported.systems/warehack.ing/informix-db/src/branch/main/docs/DECISION_LOG.md). The result is roughly 3× smaller than originally projected.
diff --git a/docs-site/src/content/docs/index.mdx b/docs-site/src/content/docs/index.mdx
index 13c5d84..768b6b8 100644
--- a/docs-site/src/content/docs/index.mdx
+++ b/docs-site/src/content/docs/index.mdx
@@ -66,7 +66,7 @@ That's it. No `IBM_DB_HOME`. No DSN file. No `libcrypt.so.1`.
The existing tools were not my style.
-Every other Informix driver in any language wraps either IBM's C Client SDK or the JDBC JAR. `IfxPy`, the legacy `informixdb`, ODBC bridges, JPype/JDBC, Perl `DBD::Informix` — all of them. To our knowledge **`informix-db` is the first pure-socket Informix driver in any language**.
+Every other Informix driver in any language wraps either IBM's C Client SDK or the JDBC JAR. `IfxPy`, the legacy `informixdb`, ODBC bridges, JPype/JDBC, Perl `DBD::Informix` — all of them. To our knowledge **`informix-driver` is the first pure-socket Informix driver in any language**.
The OneDB CSDK is a 92 MB tarball. It needs `libcrypt.so.1` (deprecated 2018, missing on Arch, Fedora 35+, RHEL 9). It needs four `LD_LIBRARY_PATH` entries. It needs `setuptools < 58`. And IfxPy itself is broken on Python 3.12+. For containerized deployments, ETL pipelines, FastAPI services, or anywhere a build toolchain on the runtime is friction, this driver is the alternative that didn't previously exist. Now it does.
diff --git a/docs-site/src/content/docs/reference/benchmarks.mdx b/docs-site/src/content/docs/reference/benchmarks.mdx
index 2d81567..55d224d 100644
--- a/docs-site/src/content/docs/reference/benchmarks.mdx
+++ b/docs-site/src/content/docs/reference/benchmarks.mdx
@@ -34,14 +34,14 @@ These are the steady-state numbers for the current release (`2026.05.05.12`), me
## vs IfxPy 3.0.5
-| Benchmark | IfxPy | informix-db | Result |
+| Benchmark | IfxPy | informix-driver | Result |
|---|---:|---:|---:|
| Single-row SELECT round-trip | 118 µs | 114 µs | comparable |
| ~10-row server-side query | 130 µs | 159 µs | IfxPy 22% faster |
| Cold connect | 11.0 ms | 10.5 ms | comparable |
| `executemany(1k)` | 23.5 ms | 23.2 ms | tied |
-| `executemany(10k)` | 259 ms | **161 ms** | **informix-db 1.6× faster** |
-| `executemany(100k)` | 2376 ms | **1487 ms** | **informix-db 1.6× faster** |
+| `executemany(10k)` | 259 ms | **161 ms** | **informix-driver 1.6× faster** |
+| `executemany(100k)` | 2376 ms | **1487 ms** | **informix-driver 1.6× faster** |
| `SELECT 1k` | 1.34 ms | 1.72 ms | IfxPy 1.28× |
| `SELECT 10k` | 11.7 ms | 16.1 ms | IfxPy 1.07× |
| `SELECT 100k` | 116 ms | 169 ms | IfxPy 1.15× |
@@ -62,7 +62,7 @@ The Phase 39 jump is documented in [The buffered reader](/explain/buffered-reade
## Reproducing
```bash
-git clone https://github.com/rsp2k/informix-db
+git clone https://git.supported.systems/warehack.ing/informix-db
cd informix-db
make ifx-up # starts the dev container
make bench # runs all benchmarks
diff --git a/docs-site/src/content/docs/start/vs-ifxpy.mdx b/docs-site/src/content/docs/start/vs-ifxpy.mdx
index 86a188e..87a3a51 100644
--- a/docs-site/src/content/docs/start/vs-ifxpy.mdx
+++ b/docs-site/src/content/docs/start/vs-ifxpy.mdx
@@ -7,20 +7,20 @@ sidebar:
import { Aside } from '@astrojs/starlight/components';
-[IfxPy](https://pypi.org/project/IfxPy/) is IBM's official Python driver — a C extension that wraps the OneDB Client SDK (CSDK), which itself wraps the same SQLI wire protocol `informix-db` speaks directly. It's the reasonable comparison: same protocol, same server, same workload, different transport.
+[IfxPy](https://pypi.org/project/IfxPy/) is IBM's official Python driver — a C extension that wraps the OneDB Client SDK (CSDK), which itself wraps the same SQLI wire protocol `informix-driver` speaks directly. It's the reasonable comparison: same protocol, same server, same workload, different transport.
-Numbers below are **median + IQR over 10+ rounds**, all against the same IBM Informix Developer Edition Docker container on the same host. Methodology and reproduction steps live in [`tests/benchmarks/compare/`](https://github.com/rsp2k/informix-db/tree/main/tests/benchmarks/compare) in the repo.
+Numbers below are **median + IQR over 10+ rounds**, all against the same IBM Informix Developer Edition Docker container on the same host. Methodology and reproduction steps live in [`tests/benchmarks/compare/`](https://git.supported.systems/warehack.ing/informix-db/src/branch/main/tests/benchmarks/compare) in the repo.
## Headline numbers
-| Benchmark | IfxPy 3.0.5 (C) | informix-db (pure Python) | Result |
+| Benchmark | IfxPy 3.0.5 (C) | informix-driver (pure Python) | Result |
|---|---:|---:|---:|
| Single-row SELECT round-trip | 118 µs | 114 µs | comparable |
| ~10-row server-side query | 130 µs | 159 µs | IfxPy 22% faster |
| Cold connect (login handshake) | 11.0 ms | 10.5 ms | comparable |
| `executemany(1k)` in transaction | 23.5 ms | 23.2 ms | tied |
-| **`executemany(10k)` in transaction** | 259 ms | **161 ms** | **informix-db 1.6× faster** |
-| **`executemany(100k)` in transaction** | 2376 ms | **1487 ms** | **informix-db 1.6× faster** |
+| **`executemany(10k)` in transaction** | 259 ms | **161 ms** | **informix-driver 1.6× faster** |
+| **`executemany(100k)` in transaction** | 2376 ms | **1487 ms** | **informix-driver 1.6× faster** |
| `SELECT 1k` rows | 1.34 ms | 1.72 ms | IfxPy 1.28× faster |
| `SELECT 10k` rows | 11.7 ms | 16.1 ms | IfxPy 1.07× faster |
| `SELECT 100k` rows | 116 ms | 169 ms | IfxPy 1.15× faster |
@@ -29,11 +29,11 @@ Numbers below are **median + IQR over 10+ rounds**, all against the same IBM Inf
Phase 39's connection-scoped buffered reader closed the bulk-fetch gap from a steady ~2.4× to ~1.05–1.15×. The story of how that landed is in [the buffered reader page](/explain/buffered-reader/).
-## When informix-db wins
+## When informix-driver wins
### Bulk inserts at scale
-The clearest win is bulk insert throughput. `executemany(10_000_rows)` runs in **161 ms** vs IfxPy's **259 ms** — `informix-db` is 1.6× faster.
+The clearest win is bulk insert throughput. `executemany(10_000_rows)` runs in **161 ms** vs IfxPy's **259 ms** — `informix-driver` is 1.6× faster.
The mechanism is pipelining. Phase 33 changed `executemany` to send all N BIND+EXECUTE PDUs back-to-back **before** draining any response. IfxPy's C-level `IfxPy.execute(stmt, tuple)` makes one round-trip per row — 10k round-trips at roughly 10 µs of avoidable per-row latency each add up to the ~100 ms gap.
@@ -44,13 +44,13 @@ cur.executemany(
rows, # list of 10_000 tuples
)
-# informix-db: 161 ms — 10k PDUs sent, then 10k responses drained
+# informix-driver: 161 ms — 10k PDUs sent, then 10k responses drained
# IfxPy: 259 ms — 10k round-trips, each blocking on response
```
### Containerized deployment
-`informix-db` ships as a 50 KB pure-Python wheel with **zero runtime dependencies**. Your Dockerfile is:
+`informix-driver` ships as a 50 KB pure-Python wheel with **zero runtime dependencies**. Your Dockerfile is:
```dockerfile
FROM python:3.13-slim
@@ -65,17 +65,17 @@ IfxPy's deployment surface is dramatically larger:
- `libcrypt.so.1` (deprecated 2018 — missing on Arch, Fedora 35+, RHEL 9)
- C compiler in the build image
-For slim images, multi-stage builds, FaaS deployments, or anywhere build-toolchain-on-the-runtime is friction, `informix-db` is the only reasonable option.
+For slim images, multi-stage builds, FaaS deployments, or anywhere a build toolchain in the runtime image is friction, `informix-driver` is the only reasonable option.
### Modern Python
IfxPy currently works only on Python ≤ 3.11. The C extension breaks on 3.12+ (PyConfig changes, the removal of `_PyImport_AcquireLock`, etc.).
-`informix-db` works unmodified on **3.10, 3.11, 3.12, 3.13, and 3.14**. We've kept a CI matrix on every minor version since 3.10 from the start.
+`informix-driver` works unmodified on **3.10, 3.11, 3.12, 3.13, and 3.14**. We've kept a CI matrix on every minor version since 3.10 from the start.
### Async
-`informix-db` ships an async API:
+`informix-driver` ships an async API:
```python
from informix_db import aio
@@ -109,7 +109,7 @@ If you're running analytical reports that pull millions of rows in a single SELE
### Workloads built around CSDK extensions
-If your existing code uses IBM-specific cursor extensions (`cursor.callproc` with named parameters, IBM's specific scrollable cursor semantics around `last`/`prior`/`relative`, `cursor.set_chunk_size` for fetch tuning), the migration to `informix-db` is straightforward but not zero-cost. We support the core PEP 249 surface plus our own scrollable cursor API — see [the migration guide](/how-to/migrate-from-ifxpy/).
+If your existing code uses IBM-specific cursor extensions (`cursor.callproc` with named parameters, IBM's specific scrollable cursor semantics around `last`/`prior`/`relative`, `cursor.set_chunk_size` for fetch tuning), the migration to `informix-driver` is straightforward but not zero-cost. We support the core PEP 249 surface plus our own scrollable cursor API — see [the migration guide](/how-to/migrate-from-ifxpy/).
## Methodology
@@ -122,7 +122,7 @@ IfxPy's IQR on the 100k-row SELECT is ~21% (Docker→host loopback noise, plus t
To reproduce:
```bash
-git clone https://github.com/rsp2k/informix-db
+git clone https://git.supported.systems/warehack.ing/informix-db
cd informix-db/tests/benchmarks/compare
make ifx-up
make compare
@@ -132,7 +132,7 @@ The `Makefile` handles the IfxPy install gauntlet (Python ≤ 3.11 environment,
## Summary
-Use `informix-db` when:
+Use `informix-driver` when:
- You're writing new code in Python ≥ 3.10
- Your workload is bulk-insert / ETL / log-shipping
diff --git a/docs-site/src/content/docs/start/wtf.md b/docs-site/src/content/docs/start/wtf.md
index 61ae4a8..346b092 100644
--- a/docs-site/src/content/docs/start/wtf.md
+++ b/docs-site/src/content/docs/start/wtf.md
@@ -8,7 +8,7 @@ sidebar:
The existing tools were not my style.
-Every Informix driver in any language — `IfxPy`, the legacy `informixdb`, ODBC bridges, JPype/JDBC, Perl `DBD::Informix` — wraps either IBM's C Client SDK or the JDBC JAR. To our knowledge `informix-db` is the **first pure-socket Informix driver in any language**.
+Every Informix driver in any language — `IfxPy`, the legacy `informixdb`, ODBC bridges, JPype/JDBC, Perl `DBD::Informix` — wraps either IBM's C Client SDK or the JDBC JAR. To our knowledge `informix-driver` is the **first pure-socket Informix driver in any language**.
## The problem with IBM's C SDK
@@ -21,11 +21,11 @@ The IBM Informix Client SDK (CSDK), now packaged as part of OneDB Client, is a 9
- Four `LD_LIBRARY_PATH` directories
- `libcrypt.so.1` — deprecated in 2018, missing on Arch, Fedora 35+, RHEL 9
-For containerized deployments, ETL pipelines, FastAPI services, or anywhere Python lives and IBM's C SDK is friction, the friction compounds. `informix-db`'s install is `pip install informix-driver` (`import informix_db` — the distribution name dodges PyPI's 2008-vintage `informixdb` package, the import name is what you'd expect). The wheel is ~50 KB. There are zero runtime dependencies.
+For containerized deployments, ETL pipelines, FastAPI services — anywhere Python lives and IBM's C SDK gets in the way — that friction compounds. `informix-driver`'s install is `pip install informix-driver` (`import informix_db` — the distribution name dodges PyPI's 2008-vintage `informixdb` package; the import name is what you'd expect). The wheel is ~50 KB, with zero runtime dependencies.
## What it does
-`informix-db` opens a TCP socket to an Informix server's SQLI listener and speaks the wire protocol directly — the same protocol IBM's JDBC driver uses, the same protocol the CSDK speaks under the hood. No native code is in the thread of execution.
+`informix-driver` opens a TCP socket to an Informix server's SQLI listener and speaks the wire protocol directly — the same protocol IBM's JDBC driver uses, the same protocol the CSDK speaks under the hood. No native code anywhere in the execution path.
The wire protocol was reverse-engineered through three sources:
@@ -37,7 +37,7 @@ The result is a PEP 249 compliant driver with a sync API, an async API (FastAPI
## What it's good for
-The places where `informix-db` is unambiguously the right choice:
+The places where `informix-driver` is unambiguously the right choice:
- **ETL and bulk-load pipelines.** Pipelined `executemany` (Phase 33) is 1.6× faster than IfxPy at scale because every BIND+EXECUTE PDU goes out before any responses are drained. IfxPy still pays one round-trip per `IfxPy.execute(stmt, tuple)` call.
- **Container deployments.** The 50 KB wheel and absent native deps mean a slim base image works. No multi-stage build to compile the CSDK.
@@ -50,18 +50,18 @@ The places where `informix-db` is unambiguously the right choice:
Honesty matters here:
- **Large analytical fetches.** IfxPy's C-level `fetch_tuple` decoder is faster than our Python `parse_tuple_payload` (~1.1 µs/row vs ~2.0 µs/row after Phase 39). For workloads pulling 10k+ rows in a single SELECT where the per-row decode cost dominates, IfxPy is currently 5–15% faster. The gap is shrinking phase by phase.
-- **Workloads built around the CSDK.** If your existing code already uses IfxPy idioms (`IfxPyDbi.connect_pooled`, IBM's specific cursor extensions), the migration to `informix-db` is straightforward but not zero-cost.
+- **Workloads built around the CSDK.** If your existing code already uses IfxPy idioms (`IfxPyDbi.connect_pooled`, IBM's specific cursor extensions), the migration to `informix-driver` is straightforward but not zero-cost.
The honest summary table from the [comparison page](/start/vs-ifxpy/):
| Workload | Winner | Margin |
|---|---|---|
-| Bulk insert (`executemany` 10k–100k rows) | `informix-db` | 1.6× faster |
+| Bulk insert (`executemany` 10k–100k rows) | `informix-driver` | 1.6× faster |
| Bulk SELECT (10k–100k rows) | IfxPy | 1.05–1.15× faster |
| Single-row queries | tied | within noise |
| Cold connect | tied | within noise |
-| Containerized deployment | `informix-db` | no contest |
-| Python 3.12+ | `informix-db` | only option |
+| Containerized deployment | `informix-driver` | no contest |
+| Python 3.12+ | `informix-driver` | only option |
## Production-ready
diff --git a/docs-site/src/styles/theme.css b/docs-site/src/styles/theme.css
index 0833da3..cbe5733 100644
--- a/docs-site/src/styles/theme.css
+++ b/docs-site/src/styles/theme.css
@@ -1,5 +1,5 @@
/*
- * informix-db docs theme
+ * informix-driver docs theme
* - Charcoal base (no purple gradients, ever)
* - Amber accent — CRT-monitor nod, distinct from sibling sites' cyan
* - Inter for body, IBM Plex Mono for technical bytes