Third-pass optimization on parse_tuple_payload's hot loop. Previous
phases removed redundant work; this one removes correct-but-wasteful
work: the if/elif chain checked branches in implementation order, not
frequency order. Fixed-width types (INT, FLOAT, DATE, BIGINT - the most
common columns in real queries) sat at the bottom, paying ~7 failed
frozenset membership checks per column.
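The problem shape, sketched with hypothetical type codes and branch sets
(none of these names or values are the real ones from _resultset.py; this
only illustrates implementation-order dispatch):

```python
# Hypothetical pre-optimization dispatch: rare branches first, so the
# most frequent case (fixed-width types) pays every earlier membership
# test before matching. Type codes below are invented for illustration.
BLOB_TYPES = frozenset({0x29})
CLOB_TYPES = frozenset({0x2A})
VARCHAR_TYPES = frozenset({0x0D, 0x10})
FIXED_WIDTHS = {0x02: 4, 0x03: 8, 0x07: 4, 0x34: 8}  # e.g. INT, FLOAT, DATE, BIGINT

def dispatch_slow(type_code):
    """Implementation-order chain: the common case is tested last."""
    if type_code in BLOB_TYPES:
        return "blob"
    elif type_code in CLOB_TYPES:
        return "clob"
    elif type_code in VARCHAR_TYPES:
        return "varchar"
    # ... several more rare branches in the real code ...
    elif type_code in FIXED_WIDTHS:  # most frequent case, reached last
        return "fixed"
    return "unknown"
```

Every INT column walks the whole chain before hitting its branch; hoisting
that last check to the top is the whole optimization.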
Changes (src/informix_db/_resultset.py):
* Added _FIXED_WIDTH_TYPES = frozenset(FIXED_WIDTHS.keys()) at module
load.
* New fast-path branch at the TOP of parse_tuple_payload's loop body
that handles every _FIXED_WIDTH_TYPES column inline: one frozenset
check, one dict lookup, one decode, continue. Skips every other
branch.
* Cleaned up the bottom fall-through; it now genuinely only catches
unknown types.
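A minimal sketch of the fast-path shape described above. The decoder table,
payload layout, and type codes are assumptions for illustration; the real
module derives FIXED_WIDTHS and DECODERS from converters.py:

```python
import struct

# Assumed stand-ins: INT -> 4 big-endian bytes, FLOAT -> 8 bytes.
FIXED_WIDTHS = {0x02: 4, 0x03: 8}
DECODERS = {
    0x02: lambda b: struct.unpack(">i", b)[0],
    0x03: lambda b: struct.unpack(">d", b)[0],
}
_FIXED_WIDTH_TYPES = frozenset(FIXED_WIDTHS)  # built once at module load

def parse_tuple_payload(columns, payload):
    """columns: list of type codes; payload: concatenated column bytes."""
    out, pos = [], 0
    for type_code in columns:
        # Fast path: one frozenset check, one dict lookup, one decode,
        # continue -- every other branch is skipped entirely.
        if type_code in _FIXED_WIDTH_TYPES:
            width = FIXED_WIDTHS[type_code]
            out.append(DECODERS[type_code](payload[pos:pos + width]))
            pos += width
            continue
        # ... slower variable-width / qualifier-bearing branches here ...
        raise ValueError(f"unknown type code {type_code:#x}")
    return out
```

Building the frozenset at module load matters: constructing it inside the
loop would pay the FIXED_WIDTHS.keys() traversal on every column.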
Performance vs Phase 24 baseline:
* parse_tuple_5cols_iso8859: 1659 ns -> 1400 ns (-16%)
* parse_tuple_5cols_utf8: 1649 ns -> 1341 ns (-19%)
Cumulative vs Phase 21 baseline (before any optimization):
* parse_tuple_5cols: 2796 ns -> 1400 ns (-50%) - HALF the time
* decode_int: 230 ns -> 139 ns (-40%)
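For context, a per-call figure like the decode_int number might be
reproduced with stdlib timeit (the decoder here is a stand-in, not the
project's real decode_int, and the project uses its own bench suite):

```python
import struct
import timeit

def decode_int(b):
    """Stand-in decoder: 4-byte big-endian signed int."""
    return struct.unpack(">i", b)[0]

payload = struct.pack(">i", 12345)
calls = 1_000_000
total_s = timeit.timeit(lambda: decode_int(payload), number=calls)
print(f"{total_s / calls * 1e9:.0f} ns/call")
```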
Margaret Hamilton review surfaced one HIGH finding, addressed before
tagging:
* H: The fast-path optimization assumes every FIXED_WIDTHS key is
decodable WITHOUT qualifier inspection (encoded_length etc.). True
today, but a future contributor adding a fixed-width type that
needs qualifier bits (like DATETIME does) would silently get wrong
decode behavior - a Lauren-Bug-class failure.
Fix: added INVARIANT comment to FIXED_WIDTHS in converters.py AND
added tests/test_resultset_invariants.py with three CI tripwire
tests:
- _FIXED_WIDTH_TYPES is disjoint from every other dispatch branch
- Every FIXED_WIDTHS key has a DECODERS entry
- DECODERS keys stay < 0x100 (Phase 24 collision-free guarantee)
The tests carry instructions: if one fires, don't update the test
to match - either restore the property or refactor the optimization.
Comments rot when nobody reads them; tests fail loudly.
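The three tripwires can be sketched as plain asserts over the module
tables (branch-set names and stand-in values here are assumed; the real
tests import them from _resultset.py and converters.py):

```python
# Stand-in tables for illustration only.
FIXED_WIDTHS = {0x02: 4, 0x03: 8}
DECODERS = {0x02: int, 0x03: float, 0x0D: str}
VARCHAR_TYPES = frozenset({0x0D})
_FIXED_WIDTH_TYPES = frozenset(FIXED_WIDTHS)

def test_fixed_width_disjoint_from_other_branches():
    # If this fires, don't widen the set to match -- restore
    # disjointness or refactor the fast path.
    assert _FIXED_WIDTH_TYPES.isdisjoint(VARCHAR_TYPES)

def test_every_fixed_width_key_has_decoder():
    # The fast path decodes without qualifier inspection, so every
    # fixed-width type must have a direct decoder entry.
    assert _FIXED_WIDTH_TYPES <= DECODERS.keys()

def test_decoder_keys_below_0x100():
    # Phase 24 collision-free guarantee.
    assert all(code < 0x100 for code in DECODERS)
```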
baseline.json refreshed; 72 unit + 224 integration + 28 bench = 324
tests; ruff clean.