Phase 18: server-side scrollable cursors via SQ_SFETCH (v2026.05.04.2)
Opt-in via conn.cursor(scrollable=True). Opens the cursor with
SQ_SCROLL (24) before SQ_OPEN (6), keeps it open server-side, and
sends SQ_SFETCH (23) per scroll call instead of materializing the
result set up-front.
User-facing API is identical to Phase 17's in-memory scroll
(fetch_first/last/prior/absolute/relative, scroll, rownumber).
Only the internal mechanism differs:
| feature           | default           | scrollable=True                   |
|-------------------|-------------------|-----------------------------------|
| memory            | all rows          | one row at a time                 |
| round-trips/fetch | 0 (after NFETCH)  | 1 per call                        |
| cursor lifetime   | closed after exec | open until close()                |
| best for          | sequential iter   | random access on huge result sets |
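Both modes share one positioning model: the client keeps a 0-indexed `_row_index`, while SQ_SFETCH addresses rows 1-indexed. A minimal sketch of that mapping (helper names are illustrative, not the driver's actual code):

```python
def target_for_next(row_index: int) -> int:
    """1-indexed SFETCH target for "the row after row_index".

    row_index is the 0-indexed client position; +1 converts to the
    server's 1-indexed scheme, and another +1 steps forward one row.
    """
    return row_index + 2


def target_for_absolute(n: int) -> int:
    """1-indexed SFETCH ABSOLUTE target for fetch_absolute(n), n >= 0."""
    return n + 1
```

From the before-first position (`_row_index == -1`), the first `fetchone()` therefore asks the server for row 1.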
Wire format (verified against JDBC ScrollProbe capture):
* SQ_SFETCH: [short SQ_ID=4][int 23][short scrolltype]
  [int target][int bufSize=4096][short SQ_EOT]
  scrolltype: 1=NEXT, 4=LAST, 6=ABSOLUTE
* SQ_SCROLL (24): emitted between CURNAME and SQ_OPEN
* SQ_TUPID (25): response tag with 1-indexed row position;
  authoritative source for client-side position tracking
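The SQ_SFETCH layout above can be sketched with `struct`. Big-endian (network) byte order for the SQLI shorts and ints is an assumption here, as is this standalone builder; the driver uses its own PDU writer:

```python
import struct

# Message-type constants as documented above.
SQ_ID, SQ_SFETCH, SQ_EOT = 4, 23, 12


def build_sfetch_pdu(scrolltype: int, target: int, buf_size: int = 4096) -> bytes:
    # [short SQ_ID=4][int 23][short scrolltype][int target][int bufSize][short SQ_EOT]
    return struct.pack(
        ">hihiih", SQ_ID, SQ_SFETCH, scrolltype, target, buf_size, SQ_EOT
    )
```

The packed PDU is 18 bytes: 2 + 4 + 2 + 4 + 4 + 2.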
Position tracking uses the server's SQ_TUPID rather than client-
computed indexes. Total row count discovered lazily via SFETCH(LAST)
when negative absolute indexing requires it; cached in
_scroll_total_rows.
Trap along the way: the initial SFETCH draft used SHORT for bufSize,
and the server hung silently. Same SHORT-vs-INT diagnostic pattern
as Phase 4.x's CURNAME+NFETCH. Captured a JDBC trace, byte-diffed it
against ours, found the mismatch (bufSize is INT in modern Informix
per isXPSVER8_40 / is2GBFetchBufferSupported).
Tests: 14 integration tests in test_scroll_cursor_server.py
covering lifecycle, sequential fetch, fetch_first/last/prior/
absolute/relative, negative indexing, scroll, empty result sets,
past-end, and random-access on a 100-row result set.
Total: 69 unit + 191 integration = 260 tests.
commit a42dc5c5de (parent 461c62c8d3)

CHANGELOG.md (+29)
@@ -2,6 +2,35 @@
 All notable changes to `informix-db`. Versioning is [CalVer](https://calver.org/) — `YYYY.MM.DD` for date-based releases, `YYYY.MM.DD.N` for same-day post-releases per PEP 440.
 
+## 2026.05.04.2 — Server-side scrollable cursors
+
+### Added
+
+- **Server-side scrollable cursors** (Phase 18): opt in via `conn.cursor(scrollable=True)`. The cursor opens with `SQ_SCROLL` (24) before `SQ_OPEN` (6), the result set stays materialized server-side, and each scroll method sends `SQ_SFETCH` (23) to fetch one row at a time. Use this for huge result sets where in-memory materialization would be wasteful.
+
+  The user-facing API is identical to Phase 17's in-memory scroll (`fetch_first`, `fetch_last`, `fetch_prior`, `fetch_absolute`, `fetch_relative`, `scroll`, `rownumber`); only the internal mechanism differs:
+
+  | | Default cursor | `scrollable=True` |
+  |---|---|---|
+  | Memory | All rows materialized | One row at a time |
+  | Network round-trips per fetch | 0 (after initial NFETCH) | 1 (one SFETCH per call) |
+  | Cursor lifetime | Closed after `execute()` | Open until `close()` |
+  | Best for | Moderate result sets, sequential iteration | Huge result sets, random access |
+
+  Implementation discovers total row count lazily via SFETCH(LAST=4) when negative absolute indexing requires it; result is cached in `_scroll_total_rows`. Position tracking is authoritative from the server's `SQ_TUPID` (25) tag, not client-computed.
+
+### Wire-protocol details
+
+- `SQ_SFETCH` (23): `[short SQ_ID=4][int 23][short scrolltype][int target][int bufSize=4096][short SQ_EOT]`. scrolltype values: 1=NEXT, 4=LAST, 6=ABSOLUTE.
+- `SQ_SCROLL` (24): emitted between CURNAME and SQ_OPEN to mark the cursor as scrollable.
+- `SQ_TUPID` (25): server response carrying the 1-indexed row position the server just delivered. `[short 25][int rowID]`.
+
+The trap on the way: I initially used SHORT for `bufSize` and the server hung silently — same SHORT-vs-INT diagnostic pattern as Phase 4.x's CURNAME+NFETCH. Captured a JDBC trace, byte-diffed against ours, found the mismatch.
+
+### Tests
+
+14 new integration tests in `test_scroll_cursor_server.py`. Total: **69 unit + 191 integration = 260 tests**.
+
 ## 2026.05.04.1 — Scroll cursors
 
 ### Added
pyproject.toml
@@ -1,6 +1,6 @@
 [project]
 name = "informix-db"
-version = "2026.05.04.1"
+version = "2026.05.04.2"
 description = "Pure-Python driver for IBM Informix IDS — speaks the SQLI wire protocol over raw sockets. No CSDK, no JVM, no native libraries."
 readme = "README.md"
 license = { text = "MIT" }
@@ -35,6 +35,16 @@ class MessageType(IntEnum):
     SQ_RELEASE = 11
     SQ_NDESCRIBE = 22  # numerical describe — request column metadata after a PREPARE/COMMAND
     SQ_WANTDONE = 49  # request a SQ_DONE completion notification
+
+    # Phase 18: server-side scrollable cursor.
+    SQ_SFETCH = 23  # scroll-fetch: ``[short SQ_ID][int SQ_SFETCH][short scrolltype]
+                    # [int target][int bufSize]``. scrolltype values
+                    # per JDBC IfxSqli.getaRow: 1=NEXT, 4=LAST, 6=ABSOLUTE.
+    SQ_SCROLL = 24  # cursor-open modifier — emitted *before* SQ_OPEN
+                    # to mark the cursor as scrollable. Server keeps
+                    # the result set materialized for random access.
+    SQ_TUPID = 25   # server response tag carrying the row's 1-indexed
+                    # position. Body: ``[int tupleId]``. Sent before
+                    # SQ_TUPLE in scrollable-cursor responses.
+
     # --- Per-PDU framing ---
     SQ_EOT = 12  # end-of-transmission / flush marker; ends every PDU
@@ -154,11 +154,24 @@ class Connection:
     def closed(self) -> bool:
         return self._closed
 
-    def cursor(self) -> Cursor:
-        """Return a new Cursor for executing SQL on this connection."""
+    def cursor(self, *, scrollable: bool = False) -> Cursor:
+        """Return a new Cursor for executing SQL on this connection.
+
+        ``scrollable=True`` opens a server-side scrollable cursor that
+        doesn't materialize all rows up-front. Each scroll method
+        (``fetch_first``/``fetch_last``/``fetch_absolute``/etc.) sends
+        ``SQ_SFETCH`` to the server per call. Use this for huge result
+        sets where in-memory materialization (the default) would be
+        wasteful.
+
+        ``scrollable=False`` (default): the cursor materializes the
+        whole result set on ``execute()`` and scroll methods do index
+        manipulation locally. Faster for moderate-sized result sets.
+        """
         if self._closed:
             raise InterfaceError("connection is closed")
-        return Cursor(self)
+        return Cursor(self, scrollable=scrollable)
 
     def _send_pdu(self, pdu: bytes) -> None:
         """Send an assembled PDU. Used by Cursor."""
@@ -76,20 +76,45 @@ class Cursor:
 
     arraysize: int = 1
 
-    def __init__(self, connection: Connection):
+    def __init__(
+        self, connection: Connection, *, scrollable: bool = False
+    ):
         self._conn = connection
         self._closed = False
+        # Phase 18: scrollable=True opens a server-side scrollable
+        # cursor that doesn't materialize all rows up-front. Each
+        # scroll method sends SQ_SFETCH (tag 23) per call.
+        # scrollable=False (default): existing in-memory model from
+        # Phase 17 — execute() materializes all rows; scroll is index
+        # manipulation. Two-mode cursor; the same surface API works
+        # for both.
+        self._scrollable = scrollable
         self._description: list[tuple] | None = None
         self._columns: list[ColumnInfo] = []
         self._rowcount: int = -1
         self._rows: list[tuple] = []
         # Phase 17: index-based row access enables scroll cursors. The
-        # cursor materializes all rows on execute() (current behavior),
+        # cursor materializes all rows on execute() (non-scrollable),
         # then ``fetchone`` / ``scroll`` / ``fetch_*`` move ``_row_index``
-        # through them. Default position is "before first row" (-1)
-        # so the first ``fetchone()`` returns rows[0]. Set to
-        # ``len(_rows)`` after the last row is exhausted.
+        # through them. For scrollable cursors, ``_row_index`` instead
+        # tracks the *server-side* position (1-indexed for SQ_SFETCH).
+        # Default position is "before first row" (-1) so the first
+        # ``fetchone()`` returns row 1.
         self._row_index: int = -1
+        # Phase 18: tracks whether a server-side scrollable cursor is
+        # still open. cur.close() sends CLOSE + RELEASE when True.
+        self._server_cursor_open: bool = False
+        # Phase 18: cached row count for scrollable cursors. Discovered
+        # lazily by SFETCH(LAST). Used so fetch_last() / negative
+        # absolute indexes can compute the target.
+        self._scroll_total_rows: int | None = None
+        # Phase 18: most-recent SQ_TUPID value from the server. The
+        # server sends this with every scrollable-cursor SFETCH
+        # response, carrying the 1-indexed row position of the row it
+        # just delivered. Captures the source of truth for "what row
+        # did we get" — vital for SFETCH(LAST) where the response is
+        # ``[TUPLE][TUPID]`` with no SQ_DONE / rowcount payload.
+        self._last_tupid: int | None = None
         # Set if the DESCRIBE response already includes SQ_INSERTDONE —
         # Informix optimizes literal-value INSERTs by executing during
         # PREPARE. In that case we skip SQ_EXECUTE and go straight to RELEASE.
@@ -220,8 +245,21 @@ class Cursor:
         blob descriptors; the actual bytes live in the blobspace and must
         be retrieved via ``SQ_FETCHBLOB`` round-trips **while the cursor
         is still open**. The locator is invalidated by CLOSE.
+
+        Phase 18: when ``self._scrollable`` is True, the cursor is opened
+        with ``SQ_SCROLL`` and stays open server-side after this method.
+        Initial rows are NOT fetched; ``fetchone`` / scroll methods
+        send ``SQ_SFETCH`` per call.
         """
         cursor_name = _generate_cursor_name()
+        if self._scrollable:
+            self._conn._send_pdu(
+                self._build_curname_scroll_open_pdu(cursor_name)
+            )
+            self._drain_to_eot()
+            self._server_cursor_open = True
+            self._scroll_total_rows = None
+            return  # don't close; cursor stays live for SQ_SFETCH
         self._conn._send_pdu(self._build_curname_nfetch_pdu(cursor_name))
         self._read_fetch_response()
 
@@ -683,8 +721,20 @@ class Cursor:
         self._rowcount = total_rowcount
 
     def fetchone(self) -> tuple | None:
-        """Return the row at ``_row_index + 1`` and advance, or None at EOF."""
+        """Return the next row, or None at EOF.
+
+        Non-scrollable: returns ``self._rows[_row_index + 1]`` from the
+        materialized result set and advances the index.
+        Scrollable: sends ``SQ_SFETCH(ABSOLUTE, current+1)`` to the
+        server. We use scrolltype=6 with a computed target rather than
+        scrolltype=1 because JDBC's ``IfxResultSet.next()`` does the
+        same — target=0 with scrolltype=1 is interpreted by the server
+        as "scan to last", not "next sequential".
+        """
         self._check_open()
+        if self._scrollable:
+            # current is 0-indexed; SFETCH wants 1-indexed, +1 for "next"
+            target = self._row_index + 2
+            return self._sfetch_at(scrolltype=6, target=target)
         if self._description is None or not self._rows:
             return None
         nxt = self._row_index + 1
@@ -706,8 +756,19 @@ class Cursor:
         return out
 
     def fetchall(self) -> list[tuple]:
-        """Return all remaining rows from the current position to the end."""
+        """Return all remaining rows from the current position to the end.
+
+        Non-scrollable: slice from the materialized result set.
+        Scrollable: one SFETCH round-trip per row until EOF — N
+        round-trips for N rows. For huge result sets, prefer indexed
+        access via ``fetch_absolute`` if you don't actually need every
+        row.
+        """
         self._check_open()
+        if self._scrollable:
+            out: list[tuple] = []
+            while (row := self.fetchone()) is not None:
+                out.append(row)
+            return out
         if self._description is None or not self._rows:
             return []
         start = self._row_index + 1
@@ -715,22 +776,21 @@ class Cursor:
         self._row_index = len(self._rows)
         return list(out)
 
-    # -- Phase 17: scroll cursor API --------------------------------------
+    # -- Phase 17/18: scroll cursor API -----------------------------------
 
     def scroll(self, value: int, mode: str = "relative") -> None:
         """Move the cursor position. PEP 249-compatible.
 
         ``mode='relative'`` (default): move ``value`` rows forward
         (negative = backward). ``mode='absolute'``: jump to row ``value``
-        (0-indexed; the next ``fetchone()`` returns ``rows[value]``).
+        (0-indexed; the next ``fetchone()`` returns the row at ``value``).
 
-        Raises :class:`IndexError` if the target position falls outside
-        the available result set (per PEP 249).
-
-        Note: this is *in-memory* scroll — the cursor materializes all
-        rows on ``execute()`` and ``scroll()`` simply repositions the
-        index. For true server-side scrollable cursors over huge
-        result sets, see Phase 18.
+        Raises :class:`IndexError` if the target falls outside the result
+        set (per PEP 249). For non-scrollable cursors, this is enforced
+        eagerly using the materialized result-set length. For scrollable
+        cursors, only out-of-range NEGATIVE positions raise immediately
+        — positions past the end are detected lazily on the next fetch
+        (returns None).
         """
         self._check_open()
         if self._description is None:
@@ -743,6 +803,13 @@ class Cursor:
             raise ProgrammingError(
                 f"scroll mode must be 'relative' or 'absolute', got {mode!r}"
             )
+        if self._scrollable:
+            if target < -1:
+                raise IndexError(
+                    f"scroll target out of range: position {target}"
+                )
+            self._row_index = target
+            return
         if target < -1 or target >= len(self._rows):
             raise IndexError(
                 f"scroll target out of range: position {target} "
@@ -751,14 +818,20 @@ class Cursor:
         self._row_index = target
 
     def fetch_first(self) -> tuple | None:
-        """Reset to before-first then fetch row 0."""
+        """Reset to before-first then fetch row 0 / SFETCH(ABSOLUTE, 1)."""
         self._check_open()
+        if self._scrollable:
+            self._row_index = -1  # before-first
+            return self._sfetch_at(scrolltype=6, target=1)
         self._row_index = -1
         return self.fetchone()
 
     def fetch_last(self) -> tuple | None:
-        """Position at the last row and return it (None if empty)."""
+        """Position at and return the last row (None if empty)."""
         self._check_open()
+        if self._scrollable:
+            # SFETCH(LAST=4) returns the last row and tells us the count.
+            return self._sfetch_at(scrolltype=4, target=0, is_last_probe=True)
         if not self._rows:
             return None
         self._row_index = len(self._rows) - 1
@@ -767,6 +840,12 @@ class Cursor:
     def fetch_prior(self) -> tuple | None:
         """Move backward one row and return it (None if before-first)."""
         self._check_open()
+        if self._scrollable:
+            prev = self._row_index - 1 if self._row_index >= 0 else -1
+            if prev < 0:
+                self._row_index = -1
+                return None
+            return self._sfetch_at(scrolltype=6, target=prev + 1)
         prev = self._row_index - 1
         if prev < 0:
             self._row_index = -1
@@ -778,9 +857,24 @@ class Cursor:
         """Position at row ``n`` (0-indexed) and return it.
 
         Negative ``n`` indexes from the end (Python-style):
-        ``fetch_absolute(-1)`` returns the last row.
+        ``fetch_absolute(-1)`` returns the last row. For scrollable
+        cursors, negative indexes need the row count, which is
+        discovered (cached) via a one-time ``SFETCH(LAST)`` probe.
         """
         self._check_open()
+        if self._scrollable:
+            if n < 0:
+                # Need total row count for negative indexing — cache it.
+                if self._scroll_total_rows is None:
+                    saved = self._row_index
+                    self._sfetch_at(scrolltype=4, target=0, is_last_probe=True)
+                    self._row_index = saved  # restore
+                if self._scroll_total_rows is None:
+                    return None  # empty
+                n = self._scroll_total_rows + n
+                if n < 0:
+                    return None
+            return self._sfetch_at(scrolltype=6, target=n + 1)
         if not self._rows:
             return None
         if n < 0:
@@ -797,6 +891,11 @@ class Cursor:
         Returns None if the target falls outside the result set.
         """
         self._check_open()
+        if self._scrollable:
+            target = self._row_index + n
+            if target < 0:
+                return None
+            return self._sfetch_at(scrolltype=6, target=target + 1)
         if not self._rows:
             return None
         target = self._row_index + n
@@ -812,7 +911,64 @@ class Cursor:
             return None
         return self._row_index
 
+    def _sfetch_at(
+        self, scrolltype: int, target: int, *, is_last_probe: bool = False
+    ) -> tuple | None:
+        """Send SQ_SFETCH and parse the single-tuple response.
+
+        ``scrolltype``: 1=NEXT, 4=LAST (probes for end-of-cursor and
+        returns the last row), 6=ABSOLUTE (target is 1-indexed row).
+
+        Side-effects:
+        - Updates ``self._row_index`` to reflect the new position
+          (from the server's authoritative ``SQ_TUPID`` response).
+        - Caches ``self._scroll_total_rows`` after a LAST probe.
+        - Returns the row tuple, or None if the target is past-end.
+        """
+        if not self._server_cursor_open:
+            raise ProgrammingError(
+                "scrollable cursor is not open; call execute() first"
+            )
+        prior_count = len(self._rows)
+        self._last_tupid = None
+        self._conn._send_pdu(self._build_sfetch_pdu(scrolltype, target))
+        self._read_fetch_response()
+        new_count = len(self._rows)
+        if new_count == prior_count:
+            # No tuple arrived — past-end or empty result set.
+            # Don't move _row_index forward speculatively; let the
+            # caller observe the None return.
+            return None
+        row = self._rows[-1]
+        # Update position from the server's TUPID (authoritative).
+        # SQ_TUPID arrives in every scrollable-cursor response and
+        # carries the 1-indexed row position the server delivered.
+        if self._last_tupid is not None:
+            self._row_index = self._last_tupid - 1  # → 0-indexed
+            if scrolltype == 4 or is_last_probe:
+                # SFETCH(LAST) — TUPID == total row count
+                self._scroll_total_rows = self._last_tupid
+        return row
+
     def close(self) -> None:
+        """Close the cursor.
+
+        Non-scrollable: idempotent local cleanup.
+        Scrollable: sends ``SQ_CLOSE`` + ``SQ_RELEASE`` to free the
+        server-side cursor before marking the local cursor closed.
+        """
+        if self._closed:
+            return
+        if self._scrollable and self._server_cursor_open:
+            try:
+                self._conn._send_pdu(self._build_close_pdu())
+                self._drain_to_eot()
+                self._conn._send_pdu(self._build_release_pdu())
+                self._drain_to_eot()
+            except Exception:
+                # Best-effort close — don't mask other errors
+                pass
+            self._server_cursor_open = False
         self._closed = True
         self._row_index = len(self._rows)  # mark exhausted
 
@@ -977,9 +1133,9 @@ class Cursor:
         [short SQ_ID=4][int 9][int 4096][int 0]
         [short SQ_EOT]
 
-        The trailing ``[short 6]`` after the cursor name is opaque
-        (cursor type / scrollability flag from JDBC's ``sendCursorName``);
-        we replay JDBC's value verbatim.
+        The trailing ``[short 6]`` after the cursor name is the
+        ``SQ_OPEN`` action — JDBC chains ``CURNAME → OPEN → NFETCH``
+        in one PDU.
         """
         writer, buf = make_pdu_writer()
         # CURNAME
@@ -990,7 +1146,7 @@ class Cursor:
         writer.write_bytes(name_bytes)
         if len(name_bytes) & 1:
             writer.write_byte(0)
-        writer.write_short(6)  # cursor-type flag from JDBC
+        writer.write_short(MessageType.SQ_OPEN)  # 6
 
         # NFETCH (note: trailing field is a SHORT, not an int —
         # caught by byte-diff against JDBC's 42-byte reference PDU,
@@ -1003,6 +1159,62 @@ class Cursor:
         writer.write_short(MessageType.SQ_EOT)
         return buf.getvalue()
 
+    def _build_curname_scroll_open_pdu(self, cursor_name: str) -> bytes:
+        """Open a scrollable cursor: SQ_CURNAME + SQ_SCROLL + SQ_OPEN.
+
+        Per JDBC's ``sendCursorOpen`` line 1413+: when
+        ``ResultSet.TYPE_SCROLL_INSENSITIVE`` is set, JDBC emits
+        ``SQ_SCROLL=24`` immediately before ``SQ_OPEN=6``. The server
+        treats subsequent fetches as scrollable (random-access via
+        ``SQ_SFETCH``) instead of forward-only.
+
+        Phase 18: we don't chain an NFETCH here — scrollable cursors
+        do per-call ``SQ_SFETCH`` instead.
+        """
+        writer, buf = make_pdu_writer()
+        writer.write_short(MessageType.SQ_ID)
+        writer.write_int(MessageType.SQ_CURNAME)
+        name_bytes = cursor_name.encode("ascii")
+        writer.write_short(len(name_bytes))
+        writer.write_bytes(name_bytes)
+        if len(name_bytes) & 1:
+            writer.write_byte(0)
+        writer.write_short(MessageType.SQ_SCROLL)  # 24 — mark as scrollable
+        writer.write_short(MessageType.SQ_OPEN)  # 6
+        writer.write_short(MessageType.SQ_EOT)
+        return buf.getvalue()
+
+    def _build_sfetch_pdu(self, scrolltype: int, target: int) -> bytes:
+        """SQ_SFETCH (scroll-fetch) PDU.
+
+        Wire format verified against JDBC capture
+        (``tests/reference/ScrollProbe`` against the dev container):
+
+        ``[short SQ_ID=4][int SQ_SFETCH=23]``
+        ``[short scrolltype]`` (1=NEXT, 4=LAST, 6=ABSOLUTE)
+        ``[int target_row]`` (1-indexed for scrolltype=6)
+        ``[int bufSize=4096]``
+        ``[short SQ_EOT]``
+
+        The action code follows the standard ``[short SQ_ID][int action]``
+        framing of other commands (SQ_BIND, SQ_EXECUTE, etc.). The
+        cursor being scrolled is implicit: it's the most-recently-named
+        cursor on this connection. ``sendStatementID`` is a no-op here
+        because we don't track a separate ``statementType``.
+
+        Initial draft used SHORT for ``bufSize`` and it caused the
+        server to silently hang — same diagnostic pattern as the
+        SHORT-vs-INT trap from Phase 4.x's CURNAME+NFETCH PDU.
+        """
+        writer, buf = make_pdu_writer()
+        writer.write_short(MessageType.SQ_ID)
+        writer.write_int(MessageType.SQ_SFETCH)  # 23
+        writer.write_short(scrolltype)
+        writer.write_int(target)
+        writer.write_int(4096)  # tuple buffer size — INT, not SHORT
+        writer.write_short(MessageType.SQ_EOT)
+        return buf.getvalue()
+
     def _build_nfetch_pdu(self) -> bytes:
         """SQ_ID(NFETCH 4096) + SQ_EOT — used to drain remaining rows."""
         writer, buf = make_pdu_writer()
@@ -1077,7 +1289,7 @@ class Cursor:
             raise DatabaseError(f"unexpected tag in DESCRIBE response: 0x{tag:04x}")
 
     def _read_fetch_response(self) -> None:
-        """Read TUPLE* + DONE + COST + EOT after an NFETCH."""
+        """Read TUPLE* + DONE + COST + EOT after an NFETCH or SFETCH."""
         reader = _SocketReader(self._conn._sock)
         while True:
             tag = reader.read_short()
@@ -1093,6 +1305,10 @@ class Cursor:
                 reader.read_int()
             elif tag == MessageType.SQ_XACTSTAT:
                 reader.read_exact(2 + 2 + 2)
+            elif tag == MessageType.SQ_TUPID:
+                # Phase 18: scrollable-cursor SFETCH responses include
+                # the 1-indexed row position. Capture for state-update.
+                self._last_tupid = reader.read_int()
             elif tag == 98:  # SQ_FILE — server orchestrates a file transfer
                 self._handle_sq_file(reader)
             elif tag == MessageType.SQ_ERR:
tests/test_scroll_cursor_server.py (new file, +239)
@@ -0,0 +1,239 @@
"""Phase 18 integration tests — server-side scrollable cursor.
|
||||||
|
|
||||||
|
When ``conn.cursor(scrollable=True)`` is set, the cursor opens with
|
||||||
|
``SQ_SCROLL`` (tag 24) before ``SQ_OPEN``, doesn't materialize the
|
||||||
|
result set, and uses ``SQ_SFETCH`` (tag 23) for each fetch. The
|
||||||
|
server-side cursor stays open across scroll operations and is
|
||||||
|
closed by ``cursor.close()``.
|
||||||
|
|
||||||
|
The user-facing API surface (``fetch_first``, ``fetch_last``,
|
||||||
|
``fetch_prior``, ``fetch_absolute``, ``fetch_relative``, ``scroll``,
|
||||||
|
``rownumber``) is identical to the in-memory scroll mode (Phase 17).
|
||||||
|
The internal mechanism is what changes.
|
||||||
|
"""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import pytest
|
||||||
|
|
||||||
|
import informix_db
|
||||||
|
from tests.conftest import ConnParams
|
||||||
|
|
||||||
|
pytestmark = pytest.mark.integration
|
||||||
|
|
||||||
|
|
||||||
|
def _connect(params: ConnParams) -> informix_db.Connection:
    return informix_db.connect(
        host=params.host,
        port=params.port,
        user=params.user,
        password=params.password,
        database=params.database,
        server=params.server,
        autocommit=True,
    )
# -------- Cursor lifecycle --------


def test_scrollable_cursor_opens_and_closes(conn_params: ConnParams) -> None:
    """A scrollable cursor reports its server-side state correctly."""
    with _connect(conn_params) as conn:
        cur = conn.cursor(scrollable=True)
        assert cur._scrollable is True
        cur.execute("SELECT FIRST 3 tabid FROM systables ORDER BY tabid")
        assert cur._server_cursor_open is True
        cur.close()
        assert cur._server_cursor_open is False
        assert cur.closed is True


def test_scrollable_default_off(conn_params: ConnParams) -> None:
    """``conn.cursor()`` without args still produces a non-scrollable cursor."""
    with _connect(conn_params) as conn:
        cur = conn.cursor()
        assert cur._scrollable is False
# -------- Forward sequential --------


def test_scrollable_sequential_fetchone(conn_params: ConnParams) -> None:
    """``fetchone`` advances through rows when scrollable=True."""
    with _connect(conn_params) as conn:
        cur = conn.cursor(scrollable=True)
        cur.execute("SELECT FIRST 5 tabid FROM systables ORDER BY tabid")
        rows = []
        while (row := cur.fetchone()) is not None:
            rows.append(row[0])
        assert rows == [1, 2, 3, 4, 5]
        cur.close()


def test_scrollable_fetchall(conn_params: ConnParams) -> None:
    """``fetchall`` drains all rows from the current position to the end."""
    with _connect(conn_params) as conn:
        cur = conn.cursor(scrollable=True)
        cur.execute("SELECT FIRST 5 tabid FROM systables ORDER BY tabid")
        rows = cur.fetchall()
        assert [r[0] for r in rows] == [1, 2, 3, 4, 5]
        cur.close()
# -------- Scroll API --------


def test_fetch_first_via_sfetch(conn_params: ConnParams) -> None:
    """``fetch_first`` sends SFETCH(ABSOLUTE, 1)."""
    with _connect(conn_params) as conn:
        cur = conn.cursor(scrollable=True)
        cur.execute("SELECT FIRST 5 tabid FROM systables ORDER BY tabid")
        # Advance a few rows
        cur.fetchone()
        cur.fetchone()
        # Reset
        first = cur.fetch_first()
        assert first == (1,)
        assert cur.rownumber == 0
        cur.close()


def test_fetch_last_caches_total_rows(conn_params: ConnParams) -> None:
    """``fetch_last`` populates ``_scroll_total_rows`` from the SFETCH(LAST) TUPID."""
    with _connect(conn_params) as conn:
        cur = conn.cursor(scrollable=True)
        cur.execute("SELECT FIRST 7 tabid FROM systables ORDER BY tabid")
        last = cur.fetch_last()
        assert last is not None
        assert cur._scroll_total_rows == 7
        cur.close()


def test_fetch_prior_walks_backward(conn_params: ConnParams) -> None:
    """Sequential ``fetch_prior`` calls from the last row walk back to the first."""
    with _connect(conn_params) as conn:
        cur = conn.cursor(scrollable=True)
        cur.execute("SELECT FIRST 4 tabid FROM systables ORDER BY tabid")
        cur.fetch_last()
        # fetch_last gave row 4; fetch_prior walks 3, 2, 1
        assert cur.fetch_prior() == (3,)
        assert cur.fetch_prior() == (2,)
        assert cur.fetch_prior() == (1,)
        assert cur.fetch_prior() is None
        cur.close()


def test_fetch_absolute_random_access(conn_params: ConnParams) -> None:
    """``fetch_absolute(n)`` jumps to row ``n`` (0-indexed)."""
    with _connect(conn_params) as conn:
        cur = conn.cursor(scrollable=True)
        cur.execute("SELECT FIRST 10 tabid FROM systables ORDER BY tabid")
        # Random access in arbitrary order
        assert cur.fetch_absolute(0) == (1,)
        assert cur.fetch_absolute(9) == (10,)
        assert cur.fetch_absolute(4) == (5,)
        assert cur.fetch_absolute(2) == (3,)
        assert cur.rownumber == 2
        cur.close()


def test_fetch_absolute_negative(conn_params: ConnParams) -> None:
    """Negative absolute indexes count from the end (Python-style)."""
    with _connect(conn_params) as conn:
        cur = conn.cursor(scrollable=True)
        cur.execute("SELECT FIRST 5 tabid FROM systables ORDER BY tabid")
        # Without a prior fetch_last, fetch_absolute(-1) probes via SFETCH(LAST)
        assert cur.fetch_absolute(-1) == (5,)
        assert cur.fetch_absolute(-2) == (4,)
        assert cur._scroll_total_rows == 5
        cur.close()


def test_fetch_relative(conn_params: ConnParams) -> None:
    """``fetch_relative(n)`` moves ``n`` rows from the current position."""
    with _connect(conn_params) as conn:
        cur = conn.cursor(scrollable=True)
        cur.execute("SELECT FIRST 8 tabid FROM systables ORDER BY tabid")
        cur.fetch_first()
        # Currently at row 0 (tabid=1); jump forward to position 4
        assert cur.fetch_relative(4) == (5,)
        # Jump back 3
        assert cur.fetch_relative(-3) == (2,)
        cur.close()


def test_scroll_relative_and_absolute(conn_params: ConnParams) -> None:
    """The PEP 249 ``scroll`` method works in both modes."""
    with _connect(conn_params) as conn:
        cur = conn.cursor(scrollable=True)
        cur.execute("SELECT FIRST 6 tabid FROM systables ORDER BY tabid")
        cur.fetchone()  # row 0 (tabid=1)
        cur.scroll(2, mode="relative")  # to row 2
        # rownumber tracks via TUPID; for scroll (no fetch), our local
        # _row_index moves but no SFETCH happens until the next fetchone
        assert cur.rownumber == 2
        # Verify the position by fetching at the new position
        cur.scroll(4, mode="absolute")  # absolute index 4 (1-indexed in this API)
        # absolute 4 in PEP 249 maps to _row_index = 3 (the row with tabid=4)
        assert cur.rownumber == 3
        cur.close()
# -------- End-of-cursor / empty result set --------


def test_scrollable_empty_result_set(conn_params: ConnParams) -> None:
    """Scroll methods on an empty result set return None gracefully."""
    with _connect(conn_params) as conn:
        cur = conn.cursor(scrollable=True)
        cur.execute("SELECT tabid FROM systables WHERE tabid = -999")
        assert cur.fetch_first() is None
        assert cur.fetch_last() is None
        assert cur.fetch_absolute(0) is None
        assert cur.fetchone() is None
        cur.close()


def test_scrollable_past_end_returns_none(conn_params: ConnParams) -> None:
    """Fetching past the end returns None rather than wrapping around."""
    with _connect(conn_params) as conn:
        cur = conn.cursor(scrollable=True)
        cur.execute("SELECT FIRST 3 tabid FROM systables ORDER BY tabid")
        cur.fetch_last()
        # We're at the last row; one more fetchone exceeds the end
        assert cur.fetchone() is None
        cur.close()
# -------- Mixed: 100-row scrollable workload --------


def test_scrollable_random_access(conn_params: ConnParams) -> None:
    """Random access into a moderate-size result set without OOM.

    Doesn't assume contiguous tabids (systables has gaps); instead,
    cross-checks scrollable-cursor results against a non-scrollable
    materialized fetch.
    """
    with _connect(conn_params) as conn:
        # Reference: pull the first 100 rows once, materialized
        ref_cur = conn.cursor()
        ref_cur.execute("SELECT FIRST 100 tabid FROM systables ORDER BY tabid")
        reference = ref_cur.fetchall()
        ref_cur.close()
        assert len(reference) >= 50  # systables has plenty of rows

        # Now hit the same query through a scrollable cursor and
        # verify random access matches the reference.
        cur = conn.cursor(scrollable=True)
        cur.execute("SELECT FIRST 100 tabid FROM systables ORDER BY tabid")
        # Random sampling
        for idx in (0, 1, 5, 25, len(reference) - 1):
            assert cur.fetch_absolute(idx) == reference[idx]
        # Walk backward from the middle
        mid = len(reference) // 2
        cur.fetch_absolute(mid)
        for offset in range(1, 5):
            assert cur.fetch_prior() == reference[mid - offset]
        cur.close()
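The negative-indexing behaviour exercised in these tests (lazy SFETCH(LAST) probe, cached total) can be sketched as one small pure function. `resolve_absolute_target` and `probe_last` are hypothetical names for illustration, not the driver's internals:

```python
def resolve_absolute_target(index, total_rows, probe_last):
    """Map a Python-style absolute index (0-based, negatives count from
    the end) to a 1-indexed SFETCH(ABSOLUTE) target.

    ``probe_last`` stands in for one SFETCH(LAST) round-trip returning
    the last row's TUPID (i.e. the total row count); the discovered total
    is returned so the caller can cache it (cf. ``_scroll_total_rows``).
    """
    if index < 0:
        if total_rows is None:
            total_rows = probe_last()  # lazy total-row discovery, once
        index += total_rows
    if index < 0:
        raise IndexError("absolute index out of range")
    return index + 1, total_rows  # SFETCH targets are 1-indexed
```

Caching the probed total is what lets a second negative-index call (as in `test_fetch_absolute_negative`) skip the extra round-trip.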