---
title: BLOB / CLOB read & write
description: Reading and writing smart-LOB columns end-to-end in pure Python.
sidebar:
  order: 6
---
import { Aside } from '@astrojs/starlight/components';

`informix-driver` reads and writes smart-LOB columns (BLOB and CLOB) end-to-end without any native machinery. The implementation uses Informix's `lotofile` and `filetoblob` SQL functions, intercepted at the `SQ_FILE` (98) wire-protocol level.
## Reading a BLOB
```python
data: bytes = cur.read_blob_column(
    "SELECT data FROM photos WHERE id = ?",
    (42,),
)
```
`read_blob_column` returns the raw bytes. For very large BLOBs (multi-GB), see the streaming variant below.
## Writing a BLOB
```python
cur.write_blob_column(
    "INSERT INTO photos VALUES (?, BLOB_PLACEHOLDER)",
    blob_data=jpeg_bytes,
    params=(42,),
)
```
The `BLOB_PLACEHOLDER` token in the SQL marks where the BLOB data goes. Other parameters are bound positionally to `params=`.
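As a rough sketch of this convention (illustrative only — `split_at_placeholder` is a hypothetical helper, not part of the driver), the statement can be split at the token so the remaining `?` markers keep their positional binding:

```python
def split_at_placeholder(sql: str, token: str = "BLOB_PLACEHOLDER") -> tuple[str, str]:
    # Split the statement around the LOB token; everything else keeps
    # its ordinary positional "?" binding from params=.
    before, sep, after = sql.partition(token)
    if not sep:
        raise ValueError(f"statement contains no {token} token")
    return before, after
```

For example, `split_at_placeholder("INSERT INTO photos VALUES (?, BLOB_PLACEHOLDER)")` yields the SQL text before and after the spot where the LOB bytes are spliced in.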
## Reading a CLOB
```python
text: str = cur.read_clob_column(
    "SELECT body FROM articles WHERE id = ?",
    (42,),
)
```
`read_clob_column` returns a `str`, decoded using the connection's `client_locale`.
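The decoding step amounts to something like the sketch below; `decode_clob` is a hypothetical stand-in, and the driver's real locale-to-codec mapping may differ:

```python
def decode_clob(raw: bytes, client_locale: str = "en_US.utf8") -> str:
    # Illustrative: treat the locale's codeset suffix ("en_US.utf8" ->
    # "utf8") as the Python codec name and decode the CLOB bytes with it.
    codeset = client_locale.rsplit(".", 1)[-1]
    return raw.decode(codeset)
```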
## Writing a CLOB
```python
cur.write_clob_column(
    "INSERT INTO articles VALUES (?, CLOB_PLACEHOLDER)",
    clob_data="long article text...",
    params=(42,),
)
```
## Server-side prerequisites
<Aside type="caution">
Smart-LOBs require server-side configuration that the IBM Developer Edition Docker image doesn't ship with by default:

- An **`sbspace`** must be created (`onspaces -c -S sbspace1 -p ...`)
- `SBSPACENAME` must be set in `$ONCONFIG`
- A **level-0 archive** must be taken (`ontape -s -L 0`) before BLOBs can be created
- The database must be created **with logging** (`CREATE DATABASE foo WITH LOG`)

Full setup commands are in [`docs/DECISION_LOG.md` §10](https://git.supported.systems/warehack.ing/informix-db/src/branch/main/docs/DECISION_LOG.md).
</Aside>
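Collected into one sequence, the prerequisites look roughly like this. Device path, size, and space name below are placeholders; the decision log linked above has the exact commands:

```shell
# Run as the informix user on the server host; paths and sizes are placeholders.
onspaces -c -S sbspace1 -p /opt/ibm/data/sbspace1 -o 0 -s 100000  # create the sbspace
# Set SBSPACENAME in $ONCONFIG, e.g.:  SBSPACENAME sbspace1
ontape -s -L 0                                                    # take the level-0 archive
dbaccess - - <<'SQL'
CREATE DATABASE foo WITH LOG;
SQL
```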
## How it works (briefly)
The `lotofile` server function returns a smart-LOB descriptor as a regular result column when called via `SELECT`. The driver intercepts the `SQ_FILE` (PDU 98) response that contains the LOB bytes and reassembles them client-side.
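Conceptually, the read path is a loop over framed PDUs that keeps only the `SQ_FILE` payloads. The framing below (2-byte type, 4-byte length, big-endian) is invented purely for illustration; the real Informix wire format differs:

```python
import struct

SQ_FILE = 98  # PDU type carrying the LOB bytes

def reassemble_lob(stream: bytes) -> bytes:
    # Walk the framed PDUs and concatenate only the SQ_FILE payloads;
    # other PDU types in the response are skipped.
    out, off = bytearray(), 0
    while off < len(stream):
        ptype, plen = struct.unpack_from(">HI", stream, off)
        off += 6
        if ptype == SQ_FILE:
            out += stream[off:off + plen]
        off += plen
    return bytes(out)
```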
Writing reverses the flow: `filetoblob` is invoked via a placeholder pattern in the SQL, the driver sends the bytes via `SQ_FILE` PDUs, and the server stores them in the sbspace.
The architectural pivot from the heavier `SQ_FPROUTINE` + `SQ_LODATA` stack to this lighter `SQ_FILE` intercept is documented in [Phase 10/11 of the decision log](https://git.supported.systems/warehack.ing/informix-db/src/branch/main/docs/DECISION_LOG.md). The result is roughly 3× smaller than originally projected.