Compare commits


3 Commits

50fdfce73a Add IDF workflow how-to guide
Covers end-to-end ESP-IDF workflows: environment check, target
setup, build, flash, monitor, and troubleshooting. Uses Starlight
Steps/Tabs components for structured walkthroughs.
2026-03-02 02:16:45 -07:00
76ff1ad46a Fix v5.x parsers, clean up CLI, bump to 2026.02.25
- Rewrite _parse_tools_list for ESP-IDF v5.x compact format
  (handles both v5.x and older verbose output)
- Archive detection runs before v5.x version matching to avoid
  false positives on filenames like *.tar.gz
- Remove dead --config and --port CLI parameters
- Add 21 new tests: v5.x parser coverage, Tier 2 tool invocations,
  resource/prompt tests (193 total)
2026-03-02 02:16:40 -07:00
44553ebcdb Delete orphaned test_middleware.py
Middleware layer was removed in 78dc7e1 but test file remained,
blocking full test suite collection.
2026-03-02 02:09:08 -07:00
7 changed files with 716 additions and 170 deletions

View File

@@ -0,0 +1,223 @@
---
title: IDF Project Workflows
description: Build, flash, and monitor ESP-IDF projects through MCP tools
sidebar:
order: 8
---
import { Steps, Aside, Tabs, TabItem, LinkCard } from '@astrojs/starlight/components';
mcesptool wraps ESP-IDF's toolchain management and project build system as MCP tools. This guide walks through the full development cycle: checking your environment, setting up toolchains for a target chip, building, flashing, and monitoring. Each section is self-contained -- jump to the step you need.
## Check your environment
Before building anything, confirm that ESP-IDF is detected and see which toolchains are installed.
<Steps>
1. Get the ESP-IDF path, version, and environment variables:
```python
result = await client.call_tool("idf_env_info", {})
```
The response includes `idf_path`, `idf_version`, and the `PATH` additions that ESP-IDF exports. If `success` is `false`, ESP-IDF is not installed or not on the system path.
2. Check which tools are installed vs missing:
```python
result = await client.call_tool("idf_tools_check", {})
```
The response lists every tool with its install status. The `installed_count` and `missing_count` fields give a quick summary.
</Steps>
<Aside type="tip">
You can also read the `esp://idf/status` resource for a quick overview that includes per-target readiness without calling any tools.
</Aside>
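For instance, a minimal readiness probe against that resource might look like this -- a sketch, where the exact client call for reading resources and the response field names (`available`, `missing_tools`) are assumptions based on the fields described in this guide:
```python
async def idf_ready(client) -> bool:
    # Read the status resource instead of calling tools individually.
    # Assumes the response is a dict with "available" and "missing_tools".
    status = await client.read_resource("esp://idf/status")
    return bool(status.get("available")) and status.get("missing_tools", 0) == 0
```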
## Set up a new target chip
When you need to build for a chip you have not used before (say, an ESP32-P4), some toolchains may be missing. The RISC-V targets need the `riscv32-esp-elf` toolchain; Xtensa targets need `xtensa-esp-elf`.
<Steps>
1. Check readiness for the specific target:
```python
result = await client.call_tool("idf_tools_check", {
"target": "esp32p4"
})
```
Look at the `target_ready` field. If it is `false`, the `missing_for_target` array tells you exactly which tools are needed.
2. Install the missing tools:
```python
result = await client.call_tool("idf_tools_install", {
"targets": ["esp32p4"]
})
```
This downloads and installs only the tools required for the specified targets. You can pass multiple targets at once (e.g. `["esp32p4", "esp32c6"]`) to install everything in a single operation.
3. Verify the installation:
```python
result = await client.call_tool("idf_tools_check", {
"target": "esp32p4"
})
# result["target_ready"] should now be true
```
</Steps>
<Aside type="caution">
Toolchain downloads can be large -- 100MB or more per architecture. The install operation has a 10-minute timeout. On slow connections, install one target at a time.
</Aside>
## Build a project
With the toolchain in place, build an ESP-IDF project for your target.
```python
result = await client.call_tool("idf_build_project", {
"project_path": "/home/user/esp/my-project",
"target": "esp32p4"
})
```
The `project_path` must point to a directory containing a `CMakeLists.txt`. The tool runs `idf.py set-target` followed by `idf.py build`.
<Tabs>
<TabItem label="Incremental build">
The default behavior is an incremental build. CMake only recompiles files that changed since the last build.
```python
result = await client.call_tool("idf_build_project", {
"project_path": "/home/user/esp/my-project",
"target": "esp32s3"
})
```
</TabItem>
<TabItem label="Clean build">
If you are switching targets or need to start fresh, set `clean` to `true`. This runs `idf.py fullclean` before building.
```python
result = await client.call_tool("idf_build_project", {
"project_path": "/home/user/esp/my-project",
"target": "esp32s3",
"clean": True
})
```
A clean build is also useful when you see stale object files causing link errors after changing `sdkconfig` options.
</TabItem>
</Tabs>
## Flash to device
Once the build completes, flash it to a connected device.
```python
result = await client.call_tool("idf_flash_project", {
"project_path": "/home/user/esp/my-project",
"port": "/dev/ttyUSB0"
})
```
The tool runs `idf.py flash` using the build artifacts already in the project's `build/` directory. It flashes the bootloader, partition table, and application binary in one operation.
<Aside type="tip">
The default baud rate is 460800. If you experience frequent flash failures, drop to a lower rate:
```python
result = await client.call_tool("idf_flash_project", {
"project_path": "/home/user/esp/my-project",
"port": "/dev/ttyUSB0",
"baud": 115200
})
```
</Aside>
## Monitor output
After flashing, capture the device's serial output to verify it booted correctly.
```python
result = await client.call_tool("idf_monitor", {
"port": "/dev/ttyUSB0",
"duration": 15
})
```
The tool captures serial output for the specified duration (default 10 seconds, max 60) and returns it as text.
### ELF-based crash decoding
When you provide `project_path`, the monitor uses the project's built ELF file to decode crash backtraces into source file names and line numbers.
```python
result = await client.call_tool("idf_monitor", {
"port": "/dev/ttyUSB0",
"project_path": "/home/user/esp/my-project",
"duration": 20
})
```
Without `project_path`, backtrace addresses appear as raw hex values. With it, you get output like `app_main.c:42` instead of `0x400d1234`.
## Troubleshoot
### ESP-IDF not detected
If `idf_env_info` returns `success: false`, the IDF path is not set. Common causes:
- ESP-IDF is not installed. Follow the [ESP-IDF Get Started guide](https://docs.espressif.com/projects/esp-idf/en/stable/esp32/get-started/) to install it.
- The `IDF_PATH` environment variable is not set in the shell where the MCP server runs. Make sure `export.sh` (or `export.bat` on Windows) has been sourced before starting the server.
### Tier 2 tools report "environment not available"
The build, flash, and monitor tools require a fully configured IDF environment. If they fail with this error:
<Steps>
1. Run `idf_tools_check` to see what is missing.
2. Run `idf_tools_install` with the appropriate targets.
3. Restart the MCP server so the updated PATH takes effect.
</Steps>
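The first two steps above can be sketched as a single helper (assuming the tool names and response fields described in this guide, with tool results treated as plain dicts; a server restart may still be needed afterward):
```python
async def ensure_target_ready(client, target: str) -> bool:
    # 1. See what is missing for the target.
    check = await client.call_tool("idf_tools_check", {"target": target})
    if check.get("target_ready"):
        return True
    # 2. Install only what that target needs.
    await client.call_tool("idf_tools_install", {"targets": [target]})
    # 3. Re-check readiness.
    recheck = await client.call_tool("idf_tools_check", {"target": target})
    return bool(recheck.get("target_ready"))
```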
### Missing tools for a target
When `idf_tools_check` shows `target_ready: false`, the toolchain for that chip's architecture is not installed. RISC-V targets (esp32c2, esp32c3, esp32c5, esp32c6, esp32c61, esp32h2, esp32p4) need `riscv32-esp-elf`; Xtensa targets (esp32, esp32s2, esp32s3) need `xtensa-esp-elf`. Run `idf_tools_install` with the target name; the correct toolchain is resolved automatically.
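The architecture split can be captured in a small lookup -- illustrative only, mirroring the target lists in this section rather than any server API:
```python
# Target-to-toolchain mapping as described above (illustrative only).
RISCV_TARGETS = {"esp32c2", "esp32c3", "esp32c5", "esp32c6", "esp32c61", "esp32h2", "esp32p4"}
XTENSA_TARGETS = {"esp32", "esp32s2", "esp32s3"}

def required_toolchain(target: str) -> str:
    """Return the GCC toolchain a target needs, or 'unknown'."""
    if target in RISCV_TARGETS:
        return "riscv32-esp-elf"
    if target in XTENSA_TARGETS:
        return "xtensa-esp-elf"
    return "unknown"
```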
### Build failures after switching targets
If you change the `target` parameter without cleaning the build directory, CMake may fail with configuration errors. Use a clean build:
```python
result = await client.call_tool("idf_build_project", {
"project_path": "/home/user/esp/my-project",
"target": "esp32c6",
"clean": True
})
```
### Flash connection failures
If flashing fails with a connection timeout:
- Confirm the device is in download mode. Most dev boards enter download mode automatically, but some require holding the BOOT button during reset.
- Check the serial port path. Use `esp_detect_ports` to list connected devices.
- Try a lower baud rate. Long USB cables or cheap adapters may not sustain 460800 baud reliably.
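Those checks can be combined into a retry helper -- a sketch in which the `esp_detect_ports` response shape (`{"ports": [{"device": ...}]}`) is an assumption:
```python
async def flash_with_retry(client, project_path: str) -> dict:
    # Find a connected device first (assumed response shape).
    detected = await client.call_tool("esp_detect_ports", {})
    ports = detected.get("ports") or []
    if not ports:
        return {"success": False, "error": "No serial device detected"}
    port = ports[0]["device"]
    # Try the default baud first, then fall back to a safer rate.
    result = {}
    for baud in (460800, 115200):
        result = await client.call_tool("idf_flash_project", {
            "project_path": project_path,
            "port": port,
            "baud": baud,
        })
        if result.get("success"):
            break
    return result
```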
<LinkCard
title="IDF Integration Reference"
description="Full parameter details for all IDF toolchain and project workflow tools."
href="/reference/idf-integration/"
/>

View File

@@ -4,7 +4,7 @@ build-backend = "hatchling.build"
 [project]
 name = "mcesptool"
-version = "2026.02.23"
+version = "2026.02.25"
 description = "FastMCP server for ESP32/ESP8266 development with esptool integration"
 readme = "README.md"
 requires-python = ">=3.10"

View File

@@ -68,7 +68,14 @@ def _validate_tool_names(names: list[str]) -> list[str]:
 def _parse_tools_list(output: str) -> list[dict[str, Any]]:
     """Parse ``idf_tools.py list`` text output into structured records.
-    The output format is roughly::
+    Handles two output formats:
+    **ESP-IDF v5.x** (compact)::
+        * xtensa-esp-elf-gdb: GDB for Xtensa
+        - 14.2_20240403 (recommended, installed)
+    **Older IDF** (verbose)::
         * xtensa-esp-elf-gdb
         - Version 14.2_20240403
@@ -83,20 +90,32 @@ def _parse_tools_list(output: str) -> list[dict[str, Any]]:
         if not stripped:
             continue
-        # Tool header: "* tool-name"
+        # Tool header: "* tool-name" or "* tool-name: Description"
         if stripped.startswith("* "):
             if current_tool is not None:
                 if current_version:
                     current_tool["versions"].append(current_version)
                 tools.append(current_tool)
-            current_tool = {"name": stripped[2:].strip(), "versions": []}
+            header = stripped[2:].strip()
+            # v5.x format: "tool-name: Description"
+            if ": " in header:
+                name, description = header.split(": ", 1)
+            else:
+                name, description = header, ""
+            current_tool = {
+                "name": name.strip(),
+                "description": description.strip(),
+                "supported_targets": [],
+                "versions": [],
+            }
             current_version = None
             continue
         if current_tool is None:
             continue
-        # Version line: "- Version X.Y.Z"
+        # Version line: "- Version X.Y.Z" (older format)
         version_match = re.match(r"^-\s+Version\s+(.+)", stripped)
         if version_match:
             if current_version:
@@ -108,17 +127,47 @@ def _parse_tools_list(output: str) -> list[dict[str, Any]]:
             }
             continue
-        # Archive line: "- filename.tar.gz (installed)" or just "- filename"
+        # Archive line: "- filename.tar.gz (installed)" (older format)
+        # Must check BEFORE v5.x version lines — archive filenames contain
+        # file extensions (.tar.gz, .zip, etc.) that distinguish them.
         if current_version and stripped.startswith("- "):
-            archive_part = stripped[2:].strip()
-            installed = "(installed)" in archive_part
-            if installed:
-                current_version["installed"] = True
-                archive_part = archive_part.replace("(installed)", "").strip()
-            current_version["archives"].append({
-                "file": archive_part,
-                "installed": installed,
-            })
+            item_text = stripped[2:].strip()
+            # Detect archive by file extension in the first token
+            first_token = item_text.split()[0] if item_text.split() else ""
+            if re.search(r"\.(tar\.gz|tar\.xz|tar\.bz2|zip|dmg|exe)$", first_token):
+                installed = "(installed)" in item_text
+                if installed:
+                    current_version["installed"] = True
+                    item_text = item_text.replace("(installed)", "").strip()
+                current_version["archives"].append({
+                    "file": item_text,
+                    "installed": installed,
+                })
+                continue
+        # v5.x version line: "- 14.2_20240403 (recommended, installed)"
+        v5_match = re.match(r"^-\s+(\S+)\s*\(([^)]+)\)", stripped)
+        if v5_match:
+            if current_version:
+                current_tool["versions"].append(current_version)
+            status_text = v5_match.group(2)
+            current_version = {
+                "version": v5_match.group(1),
+                "installed": "installed" in status_text,
+                "status": status_text.strip(),
+                "archives": [],
+            }
+            continue
+        # v5.x version line without status: "- 14.2_20240403"
+        v5_bare = re.match(r"^-\s+(\S+)\s*$", stripped)
+        if v5_bare and current_version is None:
+            current_version = {
+                "version": v5_bare.group(1),
+                "installed": False,
+                "archives": [],
+            }
+            continue
     # Flush final tool
     if current_tool is not None:

View File

@@ -126,7 +126,7 @@ class ESPToolServer:
         return {
             "server_name": "MCP ESPTool Server",
-            "version": "2026.02.23",
+            "version": "2026.02.25",
             "uptime_seconds": round(uptime, 2),
             "configuration": self.config.to_dict(),
             "components": list(self.components.keys()),
@@ -402,29 +402,23 @@ class ESPToolServer:
 # CLI interface
 @click.command()
-@click.option("--config", "-c", help="Configuration file path")
 @click.option("--debug", "-d", is_flag=True, help="Enable debug logging")
 @click.option("--production", "-p", is_flag=True, help="Run in production mode")
-@click.option("--port", default=8080, help="Server port (for future HTTP interface)")
-@click.version_option(version="2026.02.23")
-def main(config: str | None, debug: bool, production: bool, port: int) -> None:
+@click.version_option(version="2026.02.25")
+def main(debug: bool, production: bool) -> None:
     """
     FastMCP ESP Development Server
-    Provides AI-powered ESP32/ESP8266 development workflows through natural language.
+    Provides ESP32/ESP8266 development workflows through MCP.
+    Configure via environment variables (see reference docs).
     """
     # Configure logging level
     if debug:
         logging.getLogger().setLevel(logging.DEBUG)
         logger.info("🐛 Debug logging enabled")
-    # Load configuration
-    if config:
-        logger.info(f"📁 Loading configuration from: {config}")
-        # TODO: Implement configuration file loading
-        server_config = ESPToolServerConfig.from_environment()
-    else:
-        server_config = ESPToolServerConfig.from_environment()
+    # Load configuration from environment
+    server_config = ESPToolServerConfig.from_environment()
     # Override production mode if specified
     if production:
@@ -434,7 +428,7 @@ def main(config: str | None, debug: bool, production: bool, port: int) -> None:
     # Display startup banner
     console.print("\n[bold blue]🚀 FastMCP ESP Development Server[/bold blue]")
     console.print("[dim]AI-powered ESP32/ESP8266 development workflows[/dim]")
-    console.print("[dim]Version: 2026.02.23[/dim]")
+    console.print("[dim]Version: 2026.02.25[/dim]")
     console.print()
     # Create and run server

View File

@@ -166,6 +166,63 @@ class TestParseToolsList:
assert tools[0]["name"] == "lonely-tool"
assert tools[0]["versions"][0]["archives"] == []
# v5.x compact format tests (real ESP-IDF v5.3 output)
SAMPLE_V5 = textwrap.dedent("""\
* xtensa-esp-elf-gdb: GDB for Xtensa
- 14.2_20240403 (recommended, installed)
* riscv32-esp-elf-gdb: GDB for RISC-V
- 14.2_20240403 (recommended, installed)
* xtensa-esp-elf: Toolchain for 32-bit Xtensa based on GCC
- esp-13.2.0_20240530 (recommended, installed)
* cmake: CMake build system (optional)
- 3.24.0 (recommended, installed)
- 3.16.3 (supported)
* qemu-riscv32: QEMU for RISC-V (optional)
- esp_develop_8.2.0_20240122 (recommended)
""")
def test_v5_extracts_tool_names(self):
tools = _parse_tools_list(self.SAMPLE_V5)
names = [t["name"] for t in tools]
assert "xtensa-esp-elf-gdb" in names
assert "riscv32-esp-elf-gdb" in names
assert "cmake" in names
assert "qemu-riscv32" in names
def test_v5_extracts_descriptions(self):
tools = _parse_tools_list(self.SAMPLE_V5)
gdb = next(t for t in tools if t["name"] == "xtensa-esp-elf-gdb")
assert gdb["description"] == "GDB for Xtensa"
def test_v5_tool_count(self):
tools = _parse_tools_list(self.SAMPLE_V5)
assert len(tools) == 5
def test_v5_installed_status(self):
tools = _parse_tools_list(self.SAMPLE_V5)
gdb = next(t for t in tools if t["name"] == "xtensa-esp-elf-gdb")
assert gdb["versions"][0]["installed"] is True
qemu = next(t for t in tools if t["name"] == "qemu-riscv32")
assert qemu["versions"][0]["installed"] is False
def test_v5_version_extracted(self):
tools = _parse_tools_list(self.SAMPLE_V5)
xtensa = next(t for t in tools if t["name"] == "xtensa-esp-elf")
assert xtensa["versions"][0]["version"] == "esp-13.2.0_20240530"
def test_v5_multiple_versions(self):
tools = _parse_tools_list(self.SAMPLE_V5)
cmake = next(t for t in tools if t["name"] == "cmake")
assert len(cmake["versions"]) == 2
assert cmake["versions"][0]["version"] == "3.24.0"
assert cmake["versions"][1]["version"] == "3.16.3"
def test_v5_status_field(self):
tools = _parse_tools_list(self.SAMPLE_V5)
cmake = next(t for t in tools if t["name"] == "cmake")
assert cmake["versions"][0]["status"] == "recommended, installed"
assert cmake["versions"][1]["status"] == "supported"
class TestParseToolsCheck:
"""Tests for _parse_tools_check parser."""
@@ -826,3 +883,366 @@ class TestTargetArch:
riscv_targets = {k for k, v in TARGET_ARCH.items() if v == "riscv"}
expected = {"esp32c2", "esp32c3", "esp32c5", "esp32c6", "esp32c61", "esp32h2", "esp32p4"}
assert riscv_targets == expected
# ------------------------------------------------------------------ #
# 5. Mocked invocation tests (tool / resource / prompt functions)
# ------------------------------------------------------------------ #
class TestIdfBuildProject:
"""Test idf_build_project tool function execution (mocked subprocess)."""
@pytest.fixture
def project_dir(self, tmp_path):
"""Create a minimal ESP-IDF project directory with CMakeLists.txt."""
proj = tmp_path / "my_project"
proj.mkdir()
(proj / "CMakeLists.txt").write_text("cmake_minimum_required(VERSION 3.16)\n")
return proj
@pytest.mark.asyncio
async def test_build_validates_target(self, mock_app, config, mock_context, project_dir):
"""Invalid target returns error before any subprocess is spawned."""
IDFIntegration(mock_app, config)
build_fn = mock_app._registered_tools["idf_build_project"]
result = await build_fn(mock_context, project_path=str(project_dir), target="esp8266")
assert result["success"] is False
assert "Unknown target" in result["error"]
@pytest.mark.asyncio
async def test_build_checks_cmakelists_exists(self, mock_app, config, mock_context, tmp_path):
"""Missing CMakeLists.txt returns error."""
empty_dir = tmp_path / "no_cmake"
empty_dir.mkdir()
# Put the directory inside a project root so path validation passes
config.project_roots = [tmp_path]
IDFIntegration(mock_app, config)
build_fn = mock_app._registered_tools["idf_build_project"]
result = await build_fn(mock_context, project_path=str(empty_dir), target="esp32")
assert result["success"] is False
assert "CMakeLists.txt" in result["error"]
@pytest.mark.asyncio
async def test_build_calls_set_target_and_build(
self, mock_app, config, mock_context, project_dir
):
"""Mocks _run_idf_py and verifies set-target + build are called."""
config.project_roots = [project_dir.parent]
integration = IDFIntegration(mock_app, config)
build_fn = mock_app._registered_tools["idf_build_project"]
calls = []
async def fake_run_idf_py(args, timeout=300.0):
calls.append(args)
return {"success": True, "output": "ok", "stderr": ""}
integration._run_idf_py = fake_run_idf_py
result = await build_fn(mock_context, project_path=str(project_dir), target="esp32s3")
assert result["success"] is True
# Should have called set-target then build
assert len(calls) == 2
assert "set-target" in calls[0]
assert "esp32s3" in calls[0]
assert "build" in calls[1]
@pytest.mark.asyncio
async def test_build_clean_runs_fullclean(
self, mock_app, config, mock_context, project_dir
):
"""When clean=True, verifies fullclean is called before set-target and build."""
config.project_roots = [project_dir.parent]
integration = IDFIntegration(mock_app, config)
build_fn = mock_app._registered_tools["idf_build_project"]
calls = []
async def fake_run_idf_py(args, timeout=300.0):
calls.append(args)
return {"success": True, "output": "ok", "stderr": ""}
integration._run_idf_py = fake_run_idf_py
result = await build_fn(
mock_context, project_path=str(project_dir), target="esp32", clean=True,
)
assert result["success"] is True
# First call should be fullclean, then set-target, then build
assert len(calls) == 3
assert "fullclean" in calls[0]
assert "set-target" in calls[1]
assert "build" in calls[2]
@pytest.mark.asyncio
async def test_build_project_path_validation(self, mock_app, config, mock_context, tmp_path):
"""Path outside project_roots is rejected."""
allowed_root = tmp_path / "allowed"
allowed_root.mkdir()
outside_dir = tmp_path / "outside"
outside_dir.mkdir()
(outside_dir / "CMakeLists.txt").write_text("# stub")
config.project_roots = [allowed_root]
# Clear idf_path so it can't be used as fallback
config.esp_idf_path = None
IDFIntegration(mock_app, config)
build_fn = mock_app._registered_tools["idf_build_project"]
result = await build_fn(mock_context, project_path=str(outside_dir), target="esp32")
assert result["success"] is False
assert "outside" in result["error"].lower() or "roots" in result["error"].lower()
class TestIdfFlashProject:
"""Test idf_flash_project tool function execution (mocked subprocess)."""
@pytest.fixture
def built_project(self, tmp_path):
"""Create a project directory that looks like it has been built."""
proj = tmp_path / "built_project"
proj.mkdir()
(proj / "CMakeLists.txt").write_text("# stub")
(proj / "build").mkdir()
return proj
@pytest.mark.asyncio
async def test_flash_validates_baud_rate(self, mock_app, config, mock_context, built_project):
"""Invalid baud rate returns error."""
config.project_roots = [built_project.parent]
IDFIntegration(mock_app, config)
flash_fn = mock_app._registered_tools["idf_flash_project"]
result = await flash_fn(
mock_context,
project_path=str(built_project),
port="/dev/ttyUSB0",
baud=12345,
)
assert result["success"] is False
assert "Invalid baud rate" in result["error"]
@pytest.mark.asyncio
async def test_flash_calls_idf_py_flash(self, mock_app, config, mock_context, built_project):
"""Mocks _run_idf_py and verifies args include --port and flash."""
config.project_roots = [built_project.parent]
integration = IDFIntegration(mock_app, config)
flash_fn = mock_app._registered_tools["idf_flash_project"]
calls = []
async def fake_run_idf_py(args, timeout=300.0):
calls.append(args)
return {"success": True, "output": "flash ok", "stderr": ""}
integration._run_idf_py = fake_run_idf_py
result = await flash_fn(
mock_context,
project_path=str(built_project),
port="/dev/ttyUSB0",
baud=460800,
)
assert result["success"] is True
assert len(calls) == 1
cmd_args = calls[0]
assert "flash" in cmd_args
assert "-p" in cmd_args
assert "/dev/ttyUSB0" in cmd_args
assert "-b" in cmd_args
assert "460800" in cmd_args
@pytest.mark.asyncio
async def test_flash_project_path_validation(self, mock_app, config, mock_context, tmp_path):
"""Path outside project_roots is rejected."""
allowed_root = tmp_path / "allowed"
allowed_root.mkdir()
outside_dir = tmp_path / "outside"
outside_dir.mkdir()
(outside_dir / "build").mkdir()
config.project_roots = [allowed_root]
config.esp_idf_path = None
IDFIntegration(mock_app, config)
flash_fn = mock_app._registered_tools["idf_flash_project"]
result = await flash_fn(
mock_context,
project_path=str(outside_dir),
port="/dev/ttyUSB0",
)
assert result["success"] is False
assert "outside" in result["error"].lower() or "roots" in result["error"].lower()
class TestIdfMonitor:
"""Test idf_monitor tool function execution (mocked subprocess)."""
@pytest.mark.asyncio
async def test_monitor_calls_idf_py(self, mock_app, config, mock_context):
"""Mocks subprocess and verifies monitor command is issued."""
config.get_idf_available = MagicMock(return_value=True)
integration = IDFIntegration(mock_app, config)
monitor_fn = mock_app._registered_tools["idf_monitor"]
# Mock _build_idf_env to return a valid env
integration._build_idf_env = AsyncMock(
return_value={"PATH": "/usr/bin", "IDF_PATH": str(config.esp_idf_path)}
)
mock_proc = AsyncMock()
mock_proc.communicate = AsyncMock(return_value=(b"serial output here", b""))
mock_proc.returncode = 0
with patch("asyncio.create_subprocess_exec", return_value=mock_proc) as mock_exec:
result = await monitor_fn(mock_context, port="/dev/ttyUSB0", duration=5)
assert result["success"] is True
assert result["port"] == "/dev/ttyUSB0"
assert result["duration"] == 5
# Verify the subprocess was called with monitor args
exec_args = mock_exec.call_args[0]
assert "monitor" in exec_args
assert "--no-reset" in exec_args
assert "-p" in exec_args
@pytest.mark.asyncio
async def test_monitor_timeout_cleanup(self, mock_app, config, mock_context):
"""Process is terminated/killed on timeout."""
config.get_idf_available = MagicMock(return_value=True)
integration = IDFIntegration(mock_app, config)
monitor_fn = mock_app._registered_tools["idf_monitor"]
integration._build_idf_env = AsyncMock(
return_value={"PATH": "/usr/bin", "IDF_PATH": str(config.esp_idf_path)}
)
mock_proc = AsyncMock()
# First communicate raises TimeoutError, then terminate+communicate also times out
mock_proc.communicate = AsyncMock(side_effect=asyncio.TimeoutError)
mock_proc.returncode = None
mock_proc.terminate = MagicMock()
mock_proc.kill = MagicMock()
mock_proc.wait = AsyncMock()
with patch("asyncio.create_subprocess_exec", return_value=mock_proc):
with patch("asyncio.wait_for", side_effect=asyncio.TimeoutError):
result = await monitor_fn(mock_context, port="/dev/ttyUSB0", duration=1)
assert result["success"] is True
# Process should have been killed (terminate or kill called)
assert mock_proc.kill.called or mock_proc.terminate.called
class TestIdfStatusResource:
"""Test esp://idf/status resource function."""
@pytest.mark.asyncio
async def test_status_resource_returns_valid_json(self, mock_app, config):
"""Call the registered resource function; verify it returns dict with expected keys."""
integration = IDFIntegration(mock_app, config)
status_fn = mock_app._registered_resources["esp://idf/status"]
# Mock _run_idf_tools so it doesn't actually run subprocess
integration._run_idf_tools = AsyncMock(
return_value={
"success": True,
"output": "xtensa-esp-elf 14.2.0: found\ncmake 3.24.0: not found\n",
"stderr": "",
}
)
# Mock _load_tools_json to skip disk I/O
integration._load_tools_json = AsyncMock(return_value=None)
# Write a version.txt so the resource can read it
version_file = config.esp_idf_path / "version.txt"
version_file.write_text("v5.3\n")
result = await status_fn()
assert isinstance(result, dict)
assert result["available"] is True
assert "idf_path" in result
assert "idf_version" in result
assert result["idf_version"] == "v5.3"
assert "installed_tools" in result
assert "missing_tools" in result
assert "missing_tool_names" in result
@pytest.mark.asyncio
async def test_status_with_no_idf_path(self, mock_app, config):
"""config.esp_idf_path = None returns graceful response."""
config.esp_idf_path = None
IDFIntegration(mock_app, config)
status_fn = mock_app._registered_resources["esp://idf/status"]
result = await status_fn()
assert isinstance(result, dict)
assert result["available"] is False
assert "error" in result
class TestIdfSetupTargetPrompt:
"""Test idf_setup_target prompt function."""
@pytest.mark.asyncio
async def test_prompt_returns_markdown(self, mock_app, config):
"""Call registered prompt with target='esp32p4'; verify non-empty output."""
integration = IDFIntegration(mock_app, config)
prompt_fn = mock_app._registered_prompts["idf_setup_target"]
# Mock _run_idf_tools to return something parseable
integration._run_idf_tools = AsyncMock(
return_value={
"success": True,
"output": "riscv32-esp-elf 14.2.0: found\ncmake 3.24.0: found\n",
"stderr": "",
}
)
integration._load_tools_json = AsyncMock(return_value={
"tools": [
{
"name": "riscv32-esp-elf",
"description": "RISC-V compiler",
"supported_targets": ["esp32c3", "esp32c6", "esp32h2", "esp32p4"],
"versions": [{"name": "14.2.0", "status": "recommended"}],
},
]
})
result = await prompt_fn(target="esp32p4")
assert isinstance(result, str)
assert len(result) > 0
assert "esp32p4" in result
assert "riscv" in result.lower()
@pytest.mark.asyncio
async def test_prompt_invalid_target(self, mock_app, config):
"""Call with bad target; verify architecture shows 'unknown'."""
integration = IDFIntegration(mock_app, config)
prompt_fn = mock_app._registered_prompts["idf_setup_target"]
# Mock _run_idf_tools -- the prompt still runs check even for unknown targets
integration._run_idf_tools = AsyncMock(
return_value={
"success": True,
"output": "",
"stderr": "",
}
)
integration._load_tools_json = AsyncMock(return_value={"tools": []})
result = await prompt_fn(target="badchip")
assert isinstance(result, str)
assert "badchip" in result
assert "unknown" in result.lower()

View File

@@ -1,140 +0,0 @@
"""
Test middleware system
"""
from unittest.mock import AsyncMock
import pytest
from mcesptool.middleware import LoggerInterceptor, MiddlewareFactory
class MockContext:
"""Mock FastMCP context for testing"""
def __init__(self):
self.log = AsyncMock()
self.progress = AsyncMock()
self.request_user_input = AsyncMock()
self.sample = AsyncMock()
def test_middleware_factory_supported_tools():
"""Test middleware factory tool support"""
supported = MiddlewareFactory.get_supported_tools()
assert isinstance(supported, dict)
assert "esptool" in supported
assert isinstance(supported["esptool"], str)
def test_middleware_factory_tool_support_check():
"""Test tool support checking"""
assert MiddlewareFactory.is_tool_supported("esptool")
assert not MiddlewareFactory.is_tool_supported("nonexistent_tool")
def test_middleware_factory_create_esptool():
"""Test ESPTool middleware creation"""
context = MockContext()
middleware = MiddlewareFactory.create_esptool_middleware(context)
assert middleware is not None
assert middleware.context == context
assert middleware.operation_id.startswith("esptool_")
def test_middleware_factory_unsupported_tool():
"""Test error handling for unsupported tools"""
context = MockContext()
with pytest.raises(Exception): # Should raise ToolNotFoundError
MiddlewareFactory.create_middleware("unsupported_tool", context)
def test_middleware_info():
"""Test middleware information retrieval"""
info = MiddlewareFactory.get_middleware_info("esptool")
assert isinstance(info, dict)
assert info["tool_name"] == "esptool"
assert "middleware_class" in info
assert "description" in info
def test_logger_interceptor_capabilities():
"""Test logger interceptor capability detection"""
context = MockContext()
# Create a concrete implementation for testing
class TestInterceptor(LoggerInterceptor):
async def install_hooks(self):
pass
async def remove_hooks(self):
pass
def get_interaction_points(self):
return ["test_operation"]
interceptor = TestInterceptor(context, "test_op")
assert interceptor.capabilities["logging"] is True
assert interceptor.capabilities["progress"] is True
assert interceptor.capabilities["elicitation"] is True
@pytest.mark.asyncio
async def test_logger_interceptor_logging():
"""Test logger interceptor logging methods"""
context = MockContext()
class TestInterceptor(LoggerInterceptor):
async def install_hooks(self):
pass
async def remove_hooks(self):
pass
def get_interaction_points(self):
return []
interceptor = TestInterceptor(context, "test_op")
# Test logging methods
await interceptor._log_info("Test info message")
await interceptor._log_warning("Test warning")
await interceptor._log_error("Test error")
await interceptor._log_success("Test success")
# Verify context.log was called
assert context.log.call_count == 4
@pytest.mark.asyncio
async def test_logger_interceptor_progress():
"""Test logger interceptor progress tracking"""
context = MockContext()
class TestInterceptor(LoggerInterceptor):
async def install_hooks(self):
pass
async def remove_hooks(self):
pass
def get_interaction_points(self):
return []
interceptor = TestInterceptor(context, "test_op")
# Test progress update
await interceptor._update_progress(50, "Half complete")
# Verify context.progress was called
context.progress.assert_called_once()
# Check progress history
assert len(interceptor.progress_history) == 1
assert interceptor.progress_history[0]["percentage"] == 50

uv.lock generated
View File

@@ -888,7 +888,7 @@ wheels = [
 [[package]]
 name = "mcesptool"
-version = "2026.2.23"
+version = "2026.2.25"
 source = { editable = "." }
 dependencies = [
     { name = "click" },