Phase 2 progress: cursor scaffolding + protocol findings (SELECT path WIP)
Cursor class scaffolded with full PEP 249 surface:
src/informix_db/cursors.py — Cursor with execute, fetchone, fetchmany,
fetchall, description, rowcount, arraysize, close, iterator,
context manager. Sends SQ_COMMAND chains for parameterless SQL
(Phase 4 adds SQ_BIND/SQ_EXECUTE for params).
src/informix_db/_resultset.py — ColumnInfo, parse_describe,
parse_tuple_payload. Best-effort SQ_DESCRIBE parser; to be refined
in Phase 2.1.
src/informix_db/connections.py — Connection.cursor() now returns a
real Cursor; new _send_pdu() lets Cursor share the connection's
socket without violating encapsulation.
Protocol findings landed in PROTOCOL_NOTES.md §6:
§6a — SQ_PREPARE format with named tags (the "trailing 22, 49"
are SQ_NDESCRIBE and SQ_WANTDONE chained into the same PDU).
Confirmed against IfxSqli.sendPrepare line 1062.
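The chained layout can be sketched as follows. This is an illustration, not the shipped builder: the tag value 2 for SQ_PREPARE and the header fields ([short tag][short 0][int len][sql][pad]) are read off capture 07 by analogy with the SQ_COMMAND builder and are not yet confirmed.

```python
import struct

# Hypothetical sketch of the §6a chained PDU, modeled on capture 07.
# Tags: assumed SQ_PREPARE=2 (from the capture), SQ_NDESCRIBE=22,
# SQ_WANTDONE=49, SQ_EOT=12.
def prepare_pdu(sql: str) -> bytes:
    body = sql.encode("iso-8859-1")
    pdu = struct.pack("!hh", 2, 0)            # [short tag][short numValues=0]
    pdu += struct.pack("!i", len(body)) + body  # 4-byte length prefix + SQL
    if (4 + len(body)) & 1:                   # writeChar-style even padding
        pdu += b"\x00"
    # the "trailing 22, 49": SQ_NDESCRIBE and SQ_WANTDONE chained into
    # the same PDU, then SQ_EOT flushes it
    return pdu + struct.pack("!hhh", 22, 49, 12)
```

Built for the capture-07 statement, this reproduces that PDU's observed 54-byte length and `00 16 00 31 00 0c` trailer.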
§6c — Server requires post-login init sequence (SQ_PROTOCOLS →
SQ_INFO → SQ_ID(env vars) → SQ_DBOPEN) BEFORE any PREPARE works.
Discovered the hard way: PREPARE without this sequence gets no
response; SQ_DBOPEN without SQ_PROTOCOLS gets sqlcode=-759
("Database not available"). The login PDU's database field is
a hint, not an open.
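The last step of that sequence can be sketched from capture 08. Only the SQ_DBOPEN framing is shown; the tag 0x24 and the byte layout are a reading of that capture (the trailing zero short is unidentified — flags or padding), and the SQ_PROTOCOLS/SQ_INFO/SQ_ID payloads are not decoded yet.

```python
import struct

SQ_DBOPEN = 0x24  # tag as it appears in capture 08 (assumed name)
SQ_EOT = 0x0C     # end-of-PDU flush marker

def dbopen_pdu(database: str) -> bytes:
    """One reading of capture 08's SQ_DBOPEN PDU:
    [short 0x24][short name_len][name][NUL][short 0?][short SQ_EOT]
    """
    name = database.encode("ascii")
    return (
        struct.pack("!hh", SQ_DBOPEN, len(name))
        + name + b"\x00"
        + struct.pack("!hh", 0, SQ_EOT)
    )
```

Under this reading, `dbopen_pdu("sysmaster")` reproduces capture 08's 18-byte DBOPEN PDU byte for byte.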
§6e — SQ_TUPLE corrected: [short warn][int size][bytes payload]
(not [int 0][short payloadLen] as earlier draft claimed).
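A minimal standalone decode of the corrected layout (the stream-reader version lives in parse_tuple_payload):

```python
import struct

def read_tuple_body(buf: bytes) -> tuple[int, bytes]:
    """Split a SQ_TUPLE body (tag already consumed) per the corrected
    §6e layout: [short warn][int size][size bytes of row payload]."""
    warn, size = struct.unpack_from("!hi", buf, 0)
    payload = buf[6:6 + size]
    if len(payload) != size:
        raise ValueError(f"truncated SQ_TUPLE payload: want {size}, got {len(payload)}")
    return warn, payload
```

For example, a body carrying one 4-byte INT value of 1 decodes as `read_tuple_body(bytes.fromhex("0000" "00000004" "00000001"))` → `(0, b"\x00\x00\x00\x01")`.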
Two more constants added to _messages.MessageType:
SQ_NDESCRIBE = 22, SQ_WANTDONE = 49
Tests: 40 unit + 7 integration (added 2 new — cursor() returns a
Cursor, parameter binding raises NotSupportedError). All green, ruff
clean. Removed obsolete "cursor() raises NotImplementedError" test.
What works end-to-end now: connect, cursor(), close, parameter-attempt
gating. What doesn't yet: cursor.execute("SELECT 1") — server requires
the post-login init sequence we don't yet send.
New captures recorded (kept for next session's analysis):
docs/CAPTURES/06-py-select1-attempt.socat.log
docs/CAPTURES/07-py-replay-jdbc-prepare.socat.log
docs/CAPTURES/08-py-with-dbopen.socat.log
docs/CAPTURES/09-py-full-replay.socat.log
Three new tasks created tracking the remaining Phase 2 blockers:
post-login init sequence, proper SQ_DESCRIBE parser, SQ_ID action
vocabulary helpers.
This commit is contained in:
parent
ddac40ff0b
commit
e2c48f855e
16 docs/CAPTURES/06-py-select1-attempt.socat.log Normal file
@@ -0,0 +1,16 @@
2026/05/02 20:58:45 socat[4053706] N listening on AF=2 0.0.0.0:9090
2026/05/02 20:58:45 socat[4053706] N accepting connection from AF=2 127.0.0.1:51872 on AF=2 127.0.0.1:9090
2026/05/02 20:58:45 socat[4053706] N opening connection to 127.0.0.1:9088
2026/05/02 20:58:45 socat[4053706] N opening connection to AF=2 127.0.0.1:9088
2026/05/02 20:58:45 socat[4053706] N successfully connected from local address AF=2 127.0.0.1:37374
2026/05/02 20:58:45 socat[4053706] N successfully connected to 127.0.0.1:9088
2026/05/02 20:58:45 socat[4053706] N starting data transfer loop with FDs [6,6] and [5,5]
> 2026/05/02 20:58:45.533969 length=395 from=0 to=394
01 8b 01 3c 00 00 00 64 00 65 00 00 00 3d 00 06 49 45 45 45 4d 00 00 6c 73 71 6c 65 78 65 63 00 00 00 00 00 00 06 39 2e 32 38 30 00 00 0c 52 44 53 23 52 30 30 30 30 30 30 00 00 05 73 71 6c 69 00 00 00 01 3c 00 00 00 00 00 00 00 00 00 01 00 09 69 6e 66 6f 72 6d 69 78 00 00 07 69 6e 34 6d 69 78 00 6f 6c 00 00 00 00 00 00 00 00 00 3d 74 6c 69 74 63 70 00 00 00 00 00 01 00 68 00 0b 00 00 00 03 00 09 69 6e 66 6f 72 6d 69 78 00 00 0a 73 79 73 6d 61 73 74 65 72 00 00 00 00 00 00 00 00 00 00 6a 00 06 00 07 44 42 50 41 54 48 00 00 02 2e 00 00 0e 43 4c 49 45 4e 54 5f 4c 4f 43 41 4c 45 00 00 0d 65 6e 5f 55 53 2e 38 38 35 39 2d 31 00 00 11 43 4c 4e 54 5f 50 41 4d 5f 43 41 50 41 42 4c 45 00 00 02 31 00 00 07 44 42 44 41 54 45 00 00 06 59 34 4d 44 2d 00 00 0c 49 46 58 5f 55 50 44 44 45 53 43 00 00 02 31 00 00 09 4e 4f 44 45 46 44 41 43 00 00 03 6e 6f 00 00 6b 00 00 00 00 00 3d da f4 00 00 00 00 00 0b 72 70 6d 2d 62 75 6c 6c 65 74 00 00 00 00 29 2f 68 6f 6d 65 2f 72 70 6d 2f 63 6c 61 75 64 65 2f 69 6e 66 6f 72 6d 69 78 2f 70 79 74 68 6f 6e 2d 6c 69 62 72 61 72 79 00 00 74 00 21 00 00 00 00 00 00 00 00 00 17 69 6e 66 6f 72 6d 69 78 2d 64 62 40 70 69 64 34 30 35 33 37 34 38 00 00 7f
< 2026/05/02 20:58:45.545611 length=276 from=0 to=275
01 14 02 3c 10 00 00 64 00 65 00 00 00 3d 00 06 49 45 45 45 49 00 00 6c 73 72 76 69 6e 66 78 00 00 00 00 00 00 2f 49 42 4d 20 49 6e 66 6f 72 6d 69 78 20 44 79 6e 61 6d 69 63 20 53 65 72 76 65 72 20 56 65 72 73 69 6f 6e 20 31 35 2e 30 2e 31 2e 30 2e 33 00 00 07 73 65 72 69 61 6c 00 00 09 69 6e 66 6f 72 6d 69 78 00 00 00 01 3c 00 00 00 00 00 00 00 00 00 00 00 00 00 00 6f 6e 00 00 00 00 00 00 00 00 00 3d 73 6f 63 74 63 70 00 00 00 00 00 00 00 66 00 00 00 00 00 00 00 00 00 00 00 15 00 00 00 6b 00 00 00 00 00 00 03 1a 00 00 00 00 00 0d 32 33 32 37 63 34 33 35 34 65 61 38 00 00 00 00 0f 2f 68 6f 6d 65 2f 69 6e 66 6f 72 6d 69 78 00 00 6e 00 04 00 00 00 00 00 74 00 33 00 00 00 c8 00 00 00 c8 00 29 2f 6f 70 74 2f 69 62 6d 2f 69 6e 66 6f 72 6d 69 78 2f 76 31 35 2e 30 2e 31 2e 30 2e 33 2f 62 69 6e 2f 6f 6e 69 6e 69 74 00 00 7f
> 2026/05/02 20:58:45.545859 length=56 from=395 to=450
00 01 00 00 00 00 00 27 53 45 4c 45 43 54 20 31 20 46 52 4f 4d 20 73 79 73 74 61 62 6c 65 73 20 57 48 45 52 45 20 74 61 62 69 64 20 3d 20 31 00 00 16 00 07 00 0b 00 0c
2026/05/02 20:58:48 socat[4053706] N socket 1 (fd 6) is at EOF
2026/05/02 20:58:48 socat[4053706] N socket 2 (fd 5) is at EOF
2026/05/02 20:58:48 socat[4053706] N exiting with status 0
16 docs/CAPTURES/07-py-replay-jdbc-prepare.socat.log Normal file
@@ -0,0 +1,16 @@
2026/05/02 21:00:29 socat[4057587] N listening on AF=2 0.0.0.0:9090
2026/05/02 21:00:29 socat[4057587] N accepting connection from AF=2 127.0.0.1:34792 on AF=2 127.0.0.1:9090
2026/05/02 21:00:29 socat[4057587] N opening connection to 127.0.0.1:9088
2026/05/02 21:00:29 socat[4057587] N opening connection to AF=2 127.0.0.1:9088
2026/05/02 21:00:29 socat[4057587] N successfully connected from local address AF=2 127.0.0.1:54636
2026/05/02 21:00:29 socat[4057587] N successfully connected to 127.0.0.1:9088
2026/05/02 21:00:29 socat[4057587] N starting data transfer loop with FDs [6,6] and [5,5]
> 2026/05/02 21:00:29.913612 length=395 from=0 to=394
01 8b 01 3c 00 00 00 64 00 65 00 00 00 3d 00 06 49 45 45 45 4d 00 00 6c 73 71 6c 65 78 65 63 00 00 00 00 00 00 06 39 2e 32 38 30 00 00 0c 52 44 53 23 52 30 30 30 30 30 30 00 00 05 73 71 6c 69 00 00 00 01 3c 00 00 00 00 00 00 00 00 00 01 00 09 69 6e 66 6f 72 6d 69 78 00 00 07 69 6e 34 6d 69 78 00 6f 6c 00 00 00 00 00 00 00 00 00 3d 74 6c 69 74 63 70 00 00 00 00 00 01 00 68 00 0b 00 00 00 03 00 09 69 6e 66 6f 72 6d 69 78 00 00 0a 73 79 73 6d 61 73 74 65 72 00 00 00 00 00 00 00 00 00 00 6a 00 06 00 07 44 42 50 41 54 48 00 00 02 2e 00 00 0e 43 4c 49 45 4e 54 5f 4c 4f 43 41 4c 45 00 00 0d 65 6e 5f 55 53 2e 38 38 35 39 2d 31 00 00 11 43 4c 4e 54 5f 50 41 4d 5f 43 41 50 41 42 4c 45 00 00 02 31 00 00 07 44 42 44 41 54 45 00 00 06 59 34 4d 44 2d 00 00 0c 49 46 58 5f 55 50 44 44 45 53 43 00 00 02 31 00 00 09 4e 4f 44 45 46 44 41 43 00 00 03 6e 6f 00 00 6b 00 00 00 00 00 3d e9 ff 00 00 00 00 00 0b 72 70 6d 2d 62 75 6c 6c 65 74 00 00 00 00 29 2f 68 6f 6d 65 2f 72 70 6d 2f 63 6c 61 75 64 65 2f 69 6e 66 6f 72 6d 69 78 2f 70 79 74 68 6f 6e 2d 6c 69 62 72 61 72 79 00 00 74 00 21 00 00 00 00 00 00 00 00 00 17 69 6e 66 6f 72 6d 69 78 2d 64 62 40 70 69 64 34 30 35 37 35 39 39 00 00 7f
< 2026/05/02 21:00:29.925376 length=276 from=0 to=275
01 14 02 3c 10 00 00 64 00 65 00 00 00 3d 00 06 49 45 45 45 49 00 00 6c 73 72 76 69 6e 66 78 00 00 00 00 00 00 2f 49 42 4d 20 49 6e 66 6f 72 6d 69 78 20 44 79 6e 61 6d 69 63 20 53 65 72 76 65 72 20 56 65 72 73 69 6f 6e 20 31 35 2e 30 2e 31 2e 30 2e 33 00 00 07 73 65 72 69 61 6c 00 00 09 69 6e 66 6f 72 6d 69 78 00 00 00 01 3c 00 00 00 00 00 00 00 00 00 00 00 00 00 00 6f 6e 00 00 00 00 00 00 00 00 00 3d 73 6f 63 74 63 70 00 00 00 00 00 00 00 66 00 00 00 00 00 00 00 00 00 00 00 15 00 00 00 6b 00 00 00 00 00 00 03 1a 00 00 00 00 00 0d 32 33 32 37 63 34 33 35 34 65 61 38 00 00 00 00 0f 2f 68 6f 6d 65 2f 69 6e 66 6f 72 6d 69 78 00 00 6e 00 04 00 00 00 00 00 74 00 33 00 00 00 c8 00 00 00 c8 00 29 2f 6f 70 74 2f 69 62 6d 2f 69 6e 66 6f 72 6d 69 78 2f 76 31 35 2e 30 2e 31 2e 30 2e 33 2f 62 69 6e 2f 6f 6e 69 6e 69 74 00 00 7f
> 2026/05/02 21:00:29.925834 length=54 from=395 to=448
00 02 00 00 00 00 00 27 53 45 4c 45 43 54 20 31 20 46 52 4f 4d 20 73 79 73 74 61 62 6c 65 73 20 57 48 45 52 45 20 74 61 62 69 64 20 3d 20 31 00 00 16 00 31 00 0c
2026/05/02 21:00:32 socat[4057587] N socket 1 (fd 6) is at EOF
2026/05/02 21:00:32 socat[4057587] N socket 2 (fd 5) is at EOF
2026/05/02 21:00:32 socat[4057587] N exiting with status 0
20 docs/CAPTURES/08-py-with-dbopen.socat.log Normal file
@@ -0,0 +1,20 @@
2026/05/02 21:01:18 socat[4059416] N listening on AF=2 0.0.0.0:9090
2026/05/02 21:01:18 socat[4059416] N accepting connection from AF=2 127.0.0.1:60576 on AF=2 127.0.0.1:9090
2026/05/02 21:01:18 socat[4059416] N opening connection to 127.0.0.1:9088
2026/05/02 21:01:18 socat[4059416] N opening connection to AF=2 127.0.0.1:9088
2026/05/02 21:01:18 socat[4059416] N successfully connected from local address AF=2 127.0.0.1:40524
2026/05/02 21:01:18 socat[4059416] N successfully connected to 127.0.0.1:9088
2026/05/02 21:01:18 socat[4059416] N starting data transfer loop with FDs [6,6] and [5,5]
> 2026/05/02 21:01:18.500662 length=395 from=0 to=394
01 8b 01 3c 00 00 00 64 00 65 00 00 00 3d 00 06 49 45 45 45 4d 00 00 6c 73 71 6c 65 78 65 63 00 00 00 00 00 00 06 39 2e 32 38 30 00 00 0c 52 44 53 23 52 30 30 30 30 30 30 00 00 05 73 71 6c 69 00 00 00 01 3c 00 00 00 00 00 00 00 00 00 01 00 09 69 6e 66 6f 72 6d 69 78 00 00 07 69 6e 34 6d 69 78 00 6f 6c 00 00 00 00 00 00 00 00 00 3d 74 6c 69 74 63 70 00 00 00 00 00 01 00 68 00 0b 00 00 00 03 00 09 69 6e 66 6f 72 6d 69 78 00 00 0a 73 79 73 6d 61 73 74 65 72 00 00 00 00 00 00 00 00 00 00 6a 00 06 00 07 44 42 50 41 54 48 00 00 02 2e 00 00 0e 43 4c 49 45 4e 54 5f 4c 4f 43 41 4c 45 00 00 0d 65 6e 5f 55 53 2e 38 38 35 39 2d 31 00 00 11 43 4c 4e 54 5f 50 41 4d 5f 43 41 50 41 42 4c 45 00 00 02 31 00 00 07 44 42 44 41 54 45 00 00 06 59 34 4d 44 2d 00 00 0c 49 46 58 5f 55 50 44 44 45 53 43 00 00 02 31 00 00 09 4e 4f 44 45 46 44 41 43 00 00 03 6e 6f 00 00 6b 00 00 00 00 00 3d f1 23 00 00 00 00 00 0b 72 70 6d 2d 62 75 6c 6c 65 74 00 00 00 00 29 2f 68 6f 6d 65 2f 72 70 6d 2f 63 6c 61 75 64 65 2f 69 6e 66 6f 72 6d 69 78 2f 70 79 74 68 6f 6e 2d 6c 69 62 72 61 72 79 00 00 74 00 21 00 00 00 00 00 00 00 00 00 17 69 6e 66 6f 72 6d 69 78 2d 64 62 40 70 69 64 34 30 35 39 34 32 37 00 00 7f
< 2026/05/02 21:01:18.512357 length=276 from=0 to=275
01 14 02 3c 10 00 00 64 00 65 00 00 00 3d 00 06 49 45 45 45 49 00 00 6c 73 72 76 69 6e 66 78 00 00 00 00 00 00 2f 49 42 4d 20 49 6e 66 6f 72 6d 69 78 20 44 79 6e 61 6d 69 63 20 53 65 72 76 65 72 20 56 65 72 73 69 6f 6e 20 31 35 2e 30 2e 31 2e 30 2e 33 00 00 07 73 65 72 69 61 6c 00 00 09 69 6e 66 6f 72 6d 69 78 00 00 00 01 3c 00 00 00 00 00 00 00 00 00 00 00 00 00 00 6f 6e 00 00 00 00 00 00 00 00 00 3d 73 6f 63 74 63 70 00 00 00 00 00 00 00 66 00 00 00 00 00 00 00 00 00 00 00 15 00 00 00 6b 00 00 00 00 00 00 03 1a 00 00 00 00 00 0d 32 33 32 37 63 34 33 35 34 65 61 38 00 00 00 00 0f 2f 68 6f 6d 65 2f 69 6e 66 6f 72 6d 69 78 00 00 6e 00 04 00 00 00 00 00 74 00 33 00 00 00 c8 00 00 00 c8 00 29 2f 6f 70 74 2f 69 62 6d 2f 69 6e 66 6f 72 6d 69 78 2f 76 31 35 2e 30 2e 31 2e 30 2e 33 2f 62 69 6e 2f 6f 6e 69 6e 69 74 00 00 7f
> 2026/05/02 21:01:18.512633 length=18 from=395 to=412
00 24 00 09 73 79 73 6d 61 73 74 65 72 00 00 00 00 0c
< 2026/05/02 21:01:18.512691 length=12 from=276 to=287
00 0d fd 09 00 00 00 00 00 00 00 0c
> 2026/05/02 21:01:18.512737 length=54 from=413 to=466
00 02 00 00 00 00 00 27 53 45 4c 45 43 54 20 31 20 46 52 4f 4d 20 73 79 73 74 61 62 6c 65 73 20 57 48 45 52 45 20 74 61 62 69 64 20 3d 20 31 00 00 16 00 31 00 0c
2026/05/02 21:01:21 socat[4059416] N socket 1 (fd 6) is at EOF
2026/05/02 21:01:21 socat[4059416] N socket 2 (fd 5) is at EOF
2026/05/02 21:01:21 socat[4059416] N exiting with status 0
20 docs/CAPTURES/09-py-full-replay.socat.log Normal file
@@ -0,0 +1,20 @@
2026/05/02 21:02:02 socat[4061060] N listening on AF=2 0.0.0.0:9090
2026/05/02 21:02:02 socat[4061060] N accepting connection from AF=2 127.0.0.1:50986 on AF=2 127.0.0.1:9090
2026/05/02 21:02:02 socat[4061060] N opening connection to 127.0.0.1:9088
2026/05/02 21:02:02 socat[4061060] N opening connection to AF=2 127.0.0.1:9088
2026/05/02 21:02:02 socat[4061060] N successfully connected from local address AF=2 127.0.0.1:56860
2026/05/02 21:02:02 socat[4061060] N successfully connected to 127.0.0.1:9088
2026/05/02 21:02:02 socat[4061060] N starting data transfer loop with FDs [6,6] and [5,5]
> 2026/05/02 21:02:02.470123 length=395 from=0 to=394
01 8b 01 3c 00 00 00 64 00 65 00 00 00 3d 00 06 49 45 45 45 4d 00 00 6c 73 71 6c 65 78 65 63 00 00 00 00 00 00 06 39 2e 32 38 30 00 00 0c 52 44 53 23 52 30 30 30 30 30 30 00 00 05 73 71 6c 69 00 00 00 01 3c 00 00 00 00 00 00 00 00 00 01 00 09 69 6e 66 6f 72 6d 69 78 00 00 07 69 6e 34 6d 69 78 00 6f 6c 00 00 00 00 00 00 00 00 00 3d 74 6c 69 74 63 70 00 00 00 00 00 01 00 68 00 0b 00 00 00 03 00 09 69 6e 66 6f 72 6d 69 78 00 00 0a 73 79 73 6d 61 73 74 65 72 00 00 00 00 00 00 00 00 00 00 6a 00 06 00 07 44 42 50 41 54 48 00 00 02 2e 00 00 0e 43 4c 49 45 4e 54 5f 4c 4f 43 41 4c 45 00 00 0d 65 6e 5f 55 53 2e 38 38 35 39 2d 31 00 00 11 43 4c 4e 54 5f 50 41 4d 5f 43 41 50 41 42 4c 45 00 00 02 31 00 00 07 44 42 44 41 54 45 00 00 06 59 34 4d 44 2d 00 00 0c 49 46 58 5f 55 50 44 44 45 53 43 00 00 02 31 00 00 09 4e 4f 44 45 46 44 41 43 00 00 03 6e 6f 00 00 6b 00 00 00 00 00 3d f7 96 00 00 00 00 00 0b 72 70 6d 2d 62 75 6c 6c 65 74 00 00 00 00 29 2f 68 6f 6d 65 2f 72 70 6d 2f 63 6c 61 75 64 65 2f 69 6e 66 6f 72 6d 69 78 2f 70 79 74 68 6f 6e 2d 6c 69 62 72 61 72 79 00 00 74 00 21 00 00 00 00 00 00 00 00 00 17 69 6e 66 6f 72 6d 69 78 2d 64 62 40 70 69 64 34 30 36 31 30 37 38 00 00 7f
< 2026/05/02 21:02:02.472161 length=276 from=0 to=275
01 14 02 3c 10 00 00 64 00 65 00 00 00 3d 00 06 49 45 45 45 49 00 00 6c 73 72 76 69 6e 66 78 00 00 00 00 00 00 2f 49 42 4d 20 49 6e 66 6f 72 6d 69 78 20 44 79 6e 61 6d 69 63 20 53 65 72 76 65 72 20 56 65 72 73 69 6f 6e 20 31 35 2e 30 2e 31 2e 30 2e 33 00 00 07 73 65 72 69 61 6c 00 00 09 69 6e 66 6f 72 6d 69 78 00 00 00 01 3c 00 00 00 00 00 00 00 00 00 00 00 00 00 00 6f 6e 00 00 00 00 00 00 00 00 00 3d 73 6f 63 74 63 70 00 00 00 00 00 00 00 66 00 00 00 00 00 00 00 00 00 00 00 15 00 00 00 6b 00 00 00 00 00 00 03 1a 00 00 00 00 00 0d 32 33 32 37 63 34 33 35 34 65 61 38 00 00 00 00 0f 2f 68 6f 6d 65 2f 69 6e 66 6f 72 6d 69 78 00 00 6e 00 04 00 00 00 00 00 74 00 33 00 00 00 c8 00 00 00 c8 00 29 2f 6f 70 74 2f 69 62 6d 2f 69 6e 66 6f 72 6d 69 78 2f 76 31 35 2e 30 2e 31 2e 30 2e 33 2f 62 69 6e 2f 6f 6e 69 6e 69 74 00 00 7f
> 2026/05/02 21:02:02.472475 length=14 from=395 to=408
00 7e 00 08 ff fc 7f fc 3c 8c aa 97 00 0c
< 2026/05/02 21:02:02.480566 length=16 from=276 to=291
00 7e 00 09 bd be 9f fe 7f b7 ff ef ff 00 00 0c
> 2026/05/02 21:02:02.480722 length=8 from=409 to=416
00 51 00 06 00 26 00 0c
2026/05/02 21:02:05 socat[4061060] N socket 1 (fd 6) is at EOF
2026/05/02 21:02:05 socat[4061060] N socket 2 (fd 5) is at EOF
2026/05/02 21:02:05 socat[4061060] N exiting with status 0
Binary file not shown.
@@ -33,6 +33,8 @@ class MessageType(IntEnum):
     SQ_NFETCH = 9
     SQ_CLOSE = 10
     SQ_RELEASE = 11
+    SQ_NDESCRIBE = 22  # numerical describe — request column metadata after a PREPARE/COMMAND
+    SQ_WANTDONE = 49  # request a SQ_DONE completion notification
 
     # --- Per-PDU framing ---
     SQ_EOT = 12  # end-of-transmission / flush marker; ends every PDU
160 src/informix_db/_resultset.py Normal file
@@ -0,0 +1,160 @@
"""SQ_DESCRIBE column descriptor parser and SQ_TUPLE row decoder.

Best-effort initial implementation derived from
``com.informix.jdbc.IfxSqli.receiveDescribe`` and ``receiveTuple``
(see ``docs/JDBC_NOTES.md``). The exact byte layout of the per-column
descriptor block is intricate enough that we iterate against live
captures and the running container; this module is the integration
point for that iteration.
"""

from __future__ import annotations

from dataclasses import dataclass

from ._protocol import IfxStreamReader
from ._types import IfxType, base_type, is_nullable
from .converters import FIXED_WIDTHS, decode


@dataclass
class ColumnInfo:
    """One column in a SQ_DESCRIBE response.

    Maps to PEP 249's ``cursor.description`` row format:
    ``(name, type_code, display_size, internal_size, precision, scale, null_ok)``.
    """

    name: str
    type_code: int  # base IDS type code (high-bit flags stripped)
    flags: int  # raw flags byte (NOTNULLABLE etc.)
    length: int  # declared size on the wire
    precision: int = 0
    scale: int = 0

    @property
    def null_ok(self) -> bool:
        return is_nullable(self.flags << 8 | self.type_code)

    def to_description_tuple(self) -> tuple:
        """Build the 7-tuple PEP 249 cursor.description expects."""
        return (
            self.name,
            self.type_code,
            self.length,  # display_size
            self.length,  # internal_size
            self.precision,
            self.scale,
            self.null_ok,
        )


def parse_describe(reader: IfxStreamReader) -> tuple[list[ColumnInfo], dict]:
    """Parse a SQ_DESCRIBE response payload (the SQ_DESCRIBE tag is already consumed).

    Returns ``(columns, metadata)`` where ``metadata`` carries the
    statement-level fields (statementType, statementID, estimatedCost,
    tupleSize) for diagnostics.

    This is a best-effort initial parser per JDBC's ``receiveDescribe``.
    Field-index handling assumes 4-byte offsets (modern servers).
    """
    statement_type = reader.read_short()
    statement_id = reader.read_short()
    estimated_cost = reader.read_int()
    tuple_size = reader.read_short()
    nfields = reader.read_short()
    string_table_size = reader.read_int()  # assume is4ByteOffsetSupported

    metadata = {
        "statement_type": statement_type,
        "statement_id": statement_id,
        "estimated_cost": estimated_cost,
        "tuple_size": tuple_size,
        "nfields": nfields,
        "string_table_size": string_table_size,
    }

    if nfields <= 0:
        return [], metadata

    # Per-column descriptors. The exact layout per JDBC's loop:
    # for each field:
    #     field_index (int — offset into string table where name lives)
    #     if isUSVER: column_start_pos (int)
    #     ... more per-column data ...
    #
    # For Phase 2 MVP we read defensively: collect raw field offsets,
    # then parse the string table at the end. The per-column type/length
    # data layout needs more characterization — for now we read the
    # field_index and rely on a heuristic walk of the remaining bytes.
    field_indexes = []
    for _ in range(nfields):
        field_indexes.append(reader.read_int())

    # The remaining bytes hold per-column type info + the string table
    # (column names). We don't have the full layout decoded yet, so for
    # Phase 2 MVP we extract column names from what looks like the
    # length-prefixed name block at the end.
    columns: list[ColumnInfo] = []
    # Heuristic: skip ahead until we hit a printable ASCII byte that
    # plausibly starts a column name. This is wrong in general but works
    # for the SELECT 1 case where the only column is "(constant)".
    # Phase 2.1 replaces this with a real parser.
    raw_remaining = reader.read_exact(string_table_size)
    # The string table contains nul-terminated column names back-to-back.
    parts = raw_remaining.split(b"\x00")
    names = [p.decode("iso-8859-1") for p in parts if p]
    # Pad/truncate to nfields
    while len(names) < nfields:
        names.append(f"col{len(names)}")
    names = names[:nfields]

    # We don't yet know the per-column type code from the descriptor block.
    # For Phase 2 MVP we infer from the SQ_TUPLE payload size when it
    # arrives — see decode_tuple. Leave type_code=IfxType.INT as default.
    for name in names:
        columns.append(
            ColumnInfo(
                name=name,
                type_code=int(IfxType.INT),  # MVP placeholder — refined in Phase 2.1
                flags=0,
                length=4,
            )
        )
    return columns, metadata


def parse_tuple_payload(
    reader: IfxStreamReader,
    columns: list[ColumnInfo],
) -> tuple:
    """Parse a SQ_TUPLE payload (the SQ_TUPLE tag is already consumed).

    JDBC's ``receiveTuple`` reads:
        [short warn] [int size] [bytes payload]

    The payload is then split into per-column values by walking the
    column descriptors. For Phase 2 MVP with fixed-width types only,
    each column consumes ``FIXED_WIDTHS[type_code]`` bytes.
    """
    warn = reader.read_short()  # noqa: F841 — diagnostic, not surfaced yet
    size = reader.read_int()
    payload = reader.read_exact(size)

    values: list[object] = []
    offset = 0
    for col in columns:
        base = base_type(col.type_code)
        width = FIXED_WIDTHS.get(base)
        if width is None:
            # Variable-width type — Phase 2.1 work
            raise NotImplementedError(
                f"variable-width column type {base} not yet supported "
                f"(column {col.name!r})"
            )
        raw = payload[offset:offset + width]
        offset += width
        values.append(decode(col.type_code, raw))

    return tuple(values)
@@ -36,6 +36,7 @@ from ._messages import (
 )
 from ._protocol import IfxStreamReader, IfxStreamWriter, ProtocolError, make_pdu_writer
 from ._socket import IfxSocket
+from .cursors import Cursor
 from .exceptions import InterfaceError, OperationalError
 
 # Default capability bits the JDBC reference sends. Validated against
@@ -124,12 +125,17 @@ class Connection:
     def closed(self) -> bool:
         return self._closed
 
-    def cursor(self):
-        """Return a Cursor. NOT IMPLEMENTED in Phase 1."""
-        raise NotImplementedError(
-            "Cursor is implemented in Phase 2; "
-            "Phase 1 only validates connect() / close()."
-        )
+    def cursor(self) -> Cursor:
+        """Return a new Cursor for executing SQL on this connection."""
+        if self._closed:
+            raise InterfaceError("connection is closed")
+        return Cursor(self)
+
+    def _send_pdu(self, pdu: bytes) -> None:
+        """Send an assembled PDU. Used by Cursor."""
+        if self._closed:
+            raise InterfaceError("connection is closed")
+        self._sock.write_all(pdu)
 
     def commit(self) -> None:
         """No-op in Phase 1 (transactions land in Phase 3)."""
248 src/informix_db/cursors.py Normal file
@@ -0,0 +1,248 @@
"""DB-API 2.0 Cursor — SELECT execution and row iteration.

Phase 2 implements the simplest viable cursor: ``execute(sql)`` sends
``SQ_COMMAND`` (execute-immediate, no parameter binding) and the
response loop dispatches on tag (``SQ_DESCRIBE``, ``SQ_TUPLE``,
``SQ_DONE``, ``SQ_ERR``, ``SQ_EOT``). Parameter binding lands in
Phase 4 via ``SQ_PREPARE`` + ``SQ_BIND`` + ``SQ_EXECUTE``.
"""

from __future__ import annotations

import struct
from collections.abc import Iterator
from io import BytesIO
from typing import TYPE_CHECKING, Any

from ._messages import MessageType
from ._protocol import IfxStreamReader, make_pdu_writer
from ._resultset import ColumnInfo, parse_describe, parse_tuple_payload
from .exceptions import (
    DatabaseError,
    InterfaceError,
    NotSupportedError,
    ProgrammingError,
)

if TYPE_CHECKING:
    from .connections import Connection


class Cursor:
    """PEP 249 Cursor over a SQLI session.

    One Cursor per Connection per active query is the simplest pattern
    in Phase 2 (Phase 4's prepared-statement cache will share Cursors
    across executes of the same SQL).
    """

    arraysize: int = 1  # PEP 249 default

    def __init__(self, connection: Connection):
        self._conn = connection
        self._closed = False
        self._description: list[tuple] | None = None
        self._columns: list[ColumnInfo] = []
        self._rowcount: int = -1
        self._rows: list[tuple] = []
        self._row_iter: Iterator[tuple] | None = None

    # -- PEP 249 attributes ------------------------------------------------

    @property
    def description(self) -> list[tuple] | None:
        """Sequence of 7-tuples per PEP 249, one per result column. None pre-execute."""
        return self._description

    @property
    def rowcount(self) -> int:
        """Affected/returned row count, -1 if not applicable or unknown."""
        return self._rowcount

    @property
    def closed(self) -> bool:
        return self._closed

    # -- PEP 249 methods ---------------------------------------------------

    def execute(self, operation: str, parameters: Any = None) -> None:
        """Execute a single SQL statement.

        Phase 2 supports parameterless SQL only. Passing ``parameters``
        raises ``NotSupportedError`` — parameter binding lands in Phase 4.
        """
        self._check_open()
        if parameters is not None:
            raise NotSupportedError(
                "parameter binding lands in Phase 4; pass SQL with literals for now"
            )

        # Reset previous-execute state
        self._description = None
        self._columns = []
        self._rowcount = -1
        self._rows = []
        self._row_iter = None

        pdu = self._build_command_pdu(operation)
        self._conn._send_pdu(pdu)
        self._read_response()

        if self._description is not None:
            self._row_iter = iter(self._rows)

    def executemany(self, operation: str, seq_of_parameters: Any) -> None:
        raise NotSupportedError("executemany lands in Phase 4 (needs parameter binding)")

    def fetchone(self) -> tuple | None:
        """Return the next row, or None when exhausted."""
        self._check_open()
        if self._row_iter is None:
            return None
        return next(self._row_iter, None)

    def fetchmany(self, size: int | None = None) -> list[tuple]:
        self._check_open()
        n = size if size is not None else self.arraysize
        out: list[tuple] = []
        for _ in range(n):
            row = self.fetchone()
            if row is None:
                break
            out.append(row)
        return out

    def fetchall(self) -> list[tuple]:
        self._check_open()
        if self._row_iter is None:
            return []
        out = list(self._row_iter)
        self._row_iter = iter([])
        return out

    def close(self) -> None:
        self._closed = True
        self._row_iter = None

    def __iter__(self) -> Iterator[tuple]:
        return self

    def __next__(self) -> tuple:
        row = self.fetchone()
        if row is None:
            raise StopIteration
        return row

    def __enter__(self) -> Cursor:
        return self

    def __exit__(self, *_exc: object) -> None:
        self.close()

    # -- internals ---------------------------------------------------------

    def _check_open(self) -> None:
        if self._closed:
            raise InterfaceError("cursor is closed")
        if self._conn.closed:
            raise InterfaceError("connection is closed")

    def _build_command_pdu(self, sql: str) -> bytes:
        """Assemble a SQ_COMMAND PDU per JDBC's sendCommand:

        [short SQ_COMMAND=1]
        [short numValues=0]
        [int sqlLen]                  ← 4-byte length (modern server)
        [bytes sql]
        [byte 0 if sqlLen+4 is odd]   ← writeChar pad
        [short SQ_NDESCRIBE=22]
        [short SQ_EXECUTE=7]
        [short SQ_RELEASE=11]
        [short SQ_EOT=12]
        """
        writer, buf = make_pdu_writer()
        writer.write_short(MessageType.SQ_COMMAND)
        writer.write_short(0)  # numValues — no bind parameters in Phase 2

        sql_bytes = sql.encode("iso-8859-1")
        writer.write_int(len(sql_bytes))  # 4-byte length prefix
        writer.write_bytes(sql_bytes)
        # writeChar emits a 0x00 pad byte if total (4 + sqlLen) is odd
        if (4 + len(sql_bytes)) & 1:
            writer.write_byte(0)

        writer.write_short(MessageType.SQ_NDESCRIBE)  # 22
        writer.write_short(MessageType.SQ_EXECUTE)  # 7
        writer.write_short(MessageType.SQ_RELEASE)  # 11
        writer.write_short(MessageType.SQ_EOT)  # 12 — flush
        return buf.getvalue()

    def _read_response(self) -> None:
        """Tag-driven response loop, mirrors JDBC's receiveMessage/dispatchMsg."""
        # Wrap the connection's socket in a streaming reader that pulls
        # from the wire on demand.
        reader = _SocketReader(self._conn._sock)
        while True:
            tag = reader.read_short()
            if tag == MessageType.SQ_EOT or tag == MessageType.SQ_EXIT:
                break
            elif tag == MessageType.SQ_DESCRIBE:
                self._columns, _ = parse_describe(reader)
                self._description = (
                    [c.to_description_tuple() for c in self._columns]
                    if self._columns else None
                )
            elif tag == MessageType.SQ_TUPLE:
                row = parse_tuple_payload(reader, self._columns)
                self._rows.append(row)
            elif tag == MessageType.SQ_DONE:
                self._read_done(reader)
            elif tag == MessageType.SQ_ERR:
                self._raise_error(reader)
            elif tag == 55:  # SQ_COST — informational, ignore
                # SQ_COST payload is two ints
                reader.read_int()
                reader.read_int()
            else:
                raise DatabaseError(f"unexpected wire tag in response: 0x{tag:04x}")

    def _read_done(self, reader: IfxStreamReader) -> None:
        """SQ_DONE payload — see PROTOCOL_NOTES.md §6e (partial decode)."""
        # Observed layout: [int 0][short rowcount][int sqlcode][int 0]
        reader.read_int()  # reserved
        rc = reader.read_short()
        sqlcode = reader.read_int()  # noqa: F841 — Phase 5 surfaces this on errors
        reader.read_int()  # reserved
        # rowcount is signed; -1 means unknown
        self._rowcount = rc if rc >= 0 else -1

    def _raise_error(self, reader: IfxStreamReader) -> None:
        """SQ_ERR — Phase 5 will decode SQLSTATE; for now raise generic."""
        # Best effort: read whatever's there and surface as ProgrammingError
        # which is the right class for "bad SQL" — most common error case.
        try:
            data = reader.read_exact(min(256, 4096))
        except Exception:
            data = b""
        raise ProgrammingError(
            f"server returned SQ_ERR (full decode lands in Phase 5); "
            f"first bytes: {data[:32].hex(' ')}"
        )


class _SocketReader(IfxStreamReader):
    """IfxStreamReader backed by an IfxSocket — pulls more bytes from the wire as needed."""

    def __init__(self, sock):
        self._sock = sock
        # Initialize parent with a dummy BytesIO — we override read methods.
        super().__init__(BytesIO(b""))

    def read_exact(self, n: int) -> bytes:
        return self._sock.read_exact(n)

    def read_short(self) -> int:
        return struct.unpack("!h", self.read_exact(2))[0]

    def read_int(self) -> int:
        return struct.unpack("!i", self.read_exact(4))[0]
@@ -75,12 +75,32 @@ def test_bad_host_raises_operational_error(conn_params: ConnParams) -> None:
     )
 
 
-def test_cursor_not_yet_implemented(conn_params: ConnParams) -> None:
-    """Phase 1 declares ``cursor()`` as NotImplementedError; Phase 2 lands it."""
+def test_cursor_returns_cursor_object(conn_params: ConnParams) -> None:
+    """Phase 2: ``cursor()`` returns a Cursor; SELECT execution is partial work-in-progress."""
     with informix_db.connect(
         host=conn_params.host, port=conn_params.port,
         user=conn_params.user, password=conn_params.password,
         database=conn_params.database, server=conn_params.server,
         connect_timeout=10.0,
-    ) as conn, pytest.raises(NotImplementedError, match="Phase 2"):
-        conn.cursor()
+    ) as conn:
+        cur = conn.cursor()
+        assert cur is not None
+        assert cur.description is None  # nothing executed yet
+        assert cur.rowcount == -1
+        assert cur.fetchone() is None
+        cur.close()
+        assert cur.closed is True
+
+
+def test_cursor_with_parameters_raises(conn_params: ConnParams) -> None:
+    """Parameter binding lands in Phase 4; passing parameters must raise NotSupportedError."""
+    with informix_db.connect(
+        host=conn_params.host, port=conn_params.port,
+        user=conn_params.user, password=conn_params.password,
+        database=conn_params.database, server=conn_params.server,
+        connect_timeout=10.0,
+    ) as conn:
+        cur = conn.cursor()
+        with pytest.raises(informix_db.NotSupportedError, match="Phase 4"):
+            cur.execute("SELECT ?", (1,))
+        cur.close()