- Wire collision detection: `apply_batch` now tracks placed wire segments and detects collinear stubs on the same axis with overlapping ranges belonging to different nets. Colliding wires are shifted perpendicular to their axis by 1.27 mm, preventing KiCad from merging wire segments into mega-nets.
- Project-local library resolution: `apply_batch` now scans batch component `lib_id`s for unknown libraries and registers them with kicad-sch-api's `SymbolLibraryCache` via sym-lib-table parsing before component placement. This unblocks projects using Samacsys and other non-standard symbol libraries.
- Root ERC: `run_schematic_erc` accepts `root=True` to resolve to the project root schematic before running kicad-cli, enabling hierarchy-aware ERC that eliminates ~180 false-positive `global_label_dangling` warnings from sub-sheet isolation.

270/270 tests pass; ruff and mypy clean.
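The collision check described above can be sketched as follows. This is a minimal illustration, not the actual `apply_batch` implementation: the `Segment` structure, function names, and the use of 1.27 mm as the shift step are assumptions for the example.

```python
from dataclasses import dataclass

GRID_SHIFT_MM = 1.27  # assumed shift step: one standard KiCad grid unit


@dataclass
class Segment:
    net: str      # net name this wire belongs to
    axis: str     # "h" for horizontal, "v" for vertical
    level: float  # y for horizontal segments, x for vertical ones
    lo: float     # range start along the segment's axis
    hi: float     # range end along the segment's axis


def collides(a: Segment, b: Segment) -> bool:
    """Collinear segments on different nets with overlapping ranges."""
    return (
        a.net != b.net
        and a.axis == b.axis
        and a.level == b.level
        and a.lo < b.hi and b.lo < a.hi  # 1-D range overlap test
    )


def resolve(placed: list[Segment], new: Segment) -> Segment:
    """Shift a colliding segment perpendicular to its axis until clear."""
    while any(collides(new, s) for s in placed):
        new = Segment(new.net, new.axis, new.level + GRID_SHIFT_MM, new.lo, new.hi)
    return new
```

Shifting perpendicular to the axis keeps the segment collinear with nothing on a foreign net, which is what prevents KiCad from fusing touching wires into one mega-net.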
Message 028
| Field | Value |
|---|---|
| From | esp32-p4-schematic-project |
| To | mckicad-dev |
| Date | 2026-03-08T03:30:00Z |
| Re | Feature request: validate_schematic tool |
The problem
Validating a schematic currently requires orchestrating 3+ tools per sub-sheet, then manually correlating results:
```
for each of 10 sub-sheets:
    run_schematic_erc(project, sheet)     # get violations
    analyze_connectivity(project, sheet)  # get net/connection counts
```

Then:

- manually triage violations by type
- compare connectivity against baseline
- identify regressions
That is 20+ tool calls for a 10-sheet hierarchy, plus post-processing. Every pipeline change needs this same sequence. It is the most common operation in our workflow and the most error-prone to run manually.
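For concreteness, here is roughly what an agent has to script today. The two inner functions stand in for the MCP tools named above; their return shapes (a violation list, a connectivity dict) are assumptions for illustration.

```python
# Stand-ins for the real MCP tools; return shapes are assumed, not documented.
def run_schematic_erc(project: str, sheet: str) -> list[dict]:
    return []  # list of ERC violation records


def analyze_connectivity(project: str, sheet: str) -> dict:
    return {"connections": 0, "unconnected": 0}


def validate_manually(project: str, sheets: list[str], baseline: dict) -> dict:
    """The 20+ call sequence this request wants to collapse into one tool."""
    violations: list[dict] = []
    connections = unconnected = 0
    for sheet in sheets:  # two tool calls per sheet
        violations += run_schematic_erc(project, sheet)
        conn = analyze_connectivity(project, sheet)
        connections += conn["connections"]
        unconnected += conn["unconnected"]
    return {
        "violations": violations,
        "connections": connections,
        "unconnected": unconnected,
        # manual regression check against the expected baseline
        "regressed": connections < baseline.get("connections", 0),
    }
```

All of this orchestration, plus the triage afterward, is what a single `validate_schematic` call would absorb.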
Proposed tool: validate_schematic
A single tool call that runs ERC + connectivity on all sheets in a project and returns a structured health report.
Input
```json
{
  "project": "ESP32-P4-WIFI6-DEV-KIT",
  "baseline": {
    "connections": 1421,
    "unconnected": 46,
    "nets_min": 370
  },
  "fail_on": ["multiple_net_names", "label_multiple_wires"]
}
```
- `project` (required): project name
- `baseline` (optional): expected connectivity counts. If provided, flag regressions.
- `fail_on` (optional): ERC violation types that should cause a hard failure. Default: `["multiple_net_names"]` (net shorts are always fatal).
Output
```json
{
  "status": "pass",
  "sheets_checked": 10,
  "erc": {
    "total_violations": 247,
    "by_type": {
      "global_label_dangling": 180,
      "power_pin_not_driven": 47,
      "pin_to_pin": 19,
      "no_connect_connected": 1,
      "multiple_net_names": 0,
      "label_multiple_wires": 0
    },
    "fatal": []
  },
  "connectivity": {
    "total_nets": 397,
    "total_connections": 1421,
    "total_unconnected": 46,
    "baseline_delta": {
      "connections": 0,
      "unconnected": 0
    }
  },
  "per_sheet": [ ... ]
}
```
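On the consuming side, an agent would reduce that report to a go/no-go decision after each pipeline step. A hedged sketch, assuming the output shape shown above (the `gate` helper is hypothetical):

```python
def gate(report: dict) -> None:
    """Raise if the validation report indicates a failure or regression."""
    if report["status"] != "pass":
        fatal = report["erc"]["fatal"]
        raise RuntimeError(f"validation failed, fatal violations: {fatal}")
    delta = report["connectivity"].get("baseline_delta", {})
    if delta.get("connections", 0) < 0:
        raise RuntimeError(f"lost {-delta['connections']} connections vs baseline")
    if delta.get("unconnected", 0) > 0:
        raise RuntimeError(f"{delta['unconnected']} new unconnected pins vs baseline")
```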
Why this matters
- Single-call validation: One tool call replaces 20+. Agents can validate after every pipeline step without burning context on orchestration.
- Baseline regression detection: The `baseline` parameter lets us catch connectivity regressions immediately. If a post-processing script disconnects pins (like `fix_connector_pwr_stubs.py` did with the audio sheet), the delta shows it instantly.
- Fatal violation gating: `fail_on` makes the tool return `"status": "fail"` for specific violation types. This replaces the external `triage_erc.py` script for the most common check: "did we introduce any net shorts?"
- Hierarchy-aware ERC: This tool should run ERC on the root schematic (not individual sub-sheets) when possible, resolving the ~180 false-positive `global_label_dangling` warnings mentioned in message 027. If root-level ERC isn't feasible, the per-sheet approach with known-expected filtering still works.
Scope
This doesn't need to replicate the full triage logic in `triage_erc.py`. The key value is:
- Run ERC + connectivity on all sheets in one call
- Return structured, machine-parseable results
- Compare against an optional baseline
- Gate on fatal violation types
The downstream agent can still do deeper analysis (mega-wire detection, overlap root-cause) if needed. But 90% of the time, we just need "pass/fail + counts."
Prior art
Our `triage_erc.py` script does the categorization half of this. The `analyze_connectivity` tool does the connectivity half. This request is about combining them into a single atomic operation with baseline comparison.