Compare commits

No commits in common. "d5dc9c99c07e19e66b7bd43e56ab400cf6ba36cd" and "3818599b94b34174ea73cf94711432de13c94d60" have entirely different histories.

32 changed files with 2150 additions and 3596 deletions

@@ -1,6 +1,6 @@
-# mcesptool
+# MCP ESPTool Server
-FastMCP server for ESP32/ESP8266 development workflows via Model Context Protocol.
+FastMCP server providing AI-powered ESP32/ESP8266 development workflows through natural language interfaces.
## Features
@@ -18,17 +17,17 @@ FastMCP server for ESP32/ESP8266 development workflows via Model Context Protoco
```bash
# Install with uvx (recommended)
-uvx mcesptool
+uvx mcp-esptool-server
# Or install in project
-uv add mcesptool
+uv add mcp-esptool-server
```
### Claude Code Integration
```bash
# Add to Claude Code
-claude mcp add mcesptool "uvx mcesptool"
+claude mcp add mcp-esptool-server "uvx mcp-esptool-server"
```
### Development Setup
@@ -67,63 +67,6 @@ The server implements a component-based architecture with middleware for CLI too
- `Diagnostics`: Memory dumps and performance profiling
- `QemuManager`: QEMU-based ESP32 emulation with download mode, efuse, and flash support
## Flash Operations
Advanced flash management tools for efficient firmware deployment:
| Tool | Description |
|------|-------------|
| `esp_flash_firmware` | Flash a single binary to device |
| `esp_flash_multi` | Flash multiple binaries at different addresses in one operation |
| `esp_verify_flash` | Verify flash contents match a file without re-flashing |
| `esp_flash_read` | Read flash memory to a file |
| `esp_flash_erase` | Erase flash regions |
| `esp_flash_backup` | Create complete flash backup |
### Multi-File Flashing
Flash bootloader, partition table, and app in a single operation:
```python
esp_flash_multi(
files=[
{"address": "0x0", "path": "bootloader.bin"},
{"address": "0x8000", "path": "partitions.bin"},
{"address": "0x10000", "path": "app.bin"}
],
port="/dev/ttyUSB0",
verify=True
)
```
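Getting the addresses in a multi-file layout wrong silently corrupts the image, so a pre-flight overlap check is worth sketching. The `check_layout` helper below is hypothetical (not part of the server), and the region sizes are illustrative:

```python
def check_layout(files: list[dict]) -> list[tuple[str, str]]:
    """Return pairs of region start addresses whose flash regions overlap."""
    # Sort regions by start address, then compare each region's end
    # against the start of the next one.
    regions = sorted((int(f["address"], 0), f["size"]) for f in files)
    overlaps = []
    for (a_start, a_size), (b_start, _b_size) in zip(regions, regions[1:]):
        if a_start + a_size > b_start:
            overlaps.append((hex(a_start), hex(b_start)))
    return overlaps

# Illustrative sizes for the standard bootloader/partitions/app layout
layout = [
    {"address": "0x0", "size": 0x7000},       # bootloader
    {"address": "0x8000", "size": 0x1000},    # partition table
    {"address": "0x10000", "size": 0x80000},  # application
]
assert check_layout(layout) == []  # the standard layout does not overlap
```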
## RAM Loading (Development Iteration)
Test firmware changes without wearing out flash:
| Tool | Description |
|------|-------------|
| `esp_elf_to_ram_binary` | Convert ELF to RAM-loadable binary |
| `esp_load_ram` | Load and execute binary in RAM |
| `esp_serial_monitor` | Capture serial output from device |
### Workflow
```bash
# 1. Build your ESP-IDF project
idf.py build
# 2. Convert ELF to RAM binary
esp_elf_to_ram_binary(elf_path="build/my_app.elf", chip="esp32s3")
# 3. Load to RAM and execute (no flash wear!)
esp_load_ram(binary_path="my_app-ram.bin", port="/dev/ttyUSB0")
# 4. Capture output
esp_serial_monitor(port="/dev/ttyUSB0", duration_seconds=10)
```
**Note:** RAM loading requires ELFs built without secure boot (`CONFIG_SECURE_BOOT=n`). Some PlatformIO defaults may be incompatible.
## QEMU Emulation
Run virtual ESP32 devices without physical hardware. Requires [Espressif's QEMU fork](https://github.com/espressif/qemu):

@@ -3,14 +3,14 @@ requires = ["hatchling"]
build-backend = "hatchling.build"
[project]
-name = "mcesptool"
+name = "mcp-esptool-server"
version = "2025.09.28.1"
description = "FastMCP server for ESP32/ESP8266 development with esptool integration"
readme = "README.md"
requires-python = ">=3.10"
license = { text = "MIT" }
authors = [
-{ name = "Ryan Malloy", email = "ryan@supported.systems" }
+{ name = "ESP Development Team", email = "dev@example.com" }
]
keywords = [
@@ -32,6 +32,7 @@ classifiers = [
dependencies = [
"fastmcp>=2.12.4", # FastMCP framework
"esptool>=5.0.0", # ESPTool Python API
"pyserial>=3.5", # Serial communication
"pyserial-asyncio>=0.6", # Async serial support
"thefuzz[speedup]>=0.22.1", # Fuzzy string matching
@@ -71,15 +72,17 @@ production = [
]
[project.scripts]
-mcesptool = "mcesptool.server:main"
+mcp-esptool-server = "mcp_esptool_server.server:main"
+esptool-mcp = "mcp_esptool_server.cli:cli"
[project.urls]
-Homepage = "https://git.supported.systems/MCP/mcesptool"
-Repository = "https://git.supported.systems/MCP/mcesptool"
-Issues = "https://git.supported.systems/MCP/mcesptool/issues"
+Homepage = "https://github.com/yourusername/mcp-esptool-server"
+Repository = "https://github.com/yourusername/mcp-esptool-server"
+Issues = "https://github.com/yourusername/mcp-esptool-server/issues"
+Documentation = "https://yourusername.github.io/mcp-esptool-server"
[tool.hatch.build.targets.wheel]
-packages = ["src/mcesptool"]
+packages = ["src/mcp_esptool_server"]
[tool.ruff]
line-length = 100
@@ -102,12 +105,15 @@ disallow_incomplete_defs = true
check_untyped_defs = true
strict_optional = true
[[tool.mypy.overrides]]
module = "esptool.*"
ignore_missing_imports = true
[tool.pytest.ini_options]
testpaths = ["tests"]
asyncio_mode = "auto"
addopts = [
-"--cov=src/mcesptool",
+"--cov=src/mcp_esptool_server",
"--cov-report=html",
"--cov-report=term-missing",
"--cov-fail-under=85"
@@ -123,4 +129,4 @@ exclude_lines = [
"def __repr__",
"raise AssertionError",
"raise NotImplementedError",
]
]

@@ -1,684 +0,0 @@
"""
Chip Control Component
Provides ESP32/ESP8266 chip detection, connection verification,
and basic control operations using esptool CLI subprocesses.
"""
import asyncio
import logging
import os
import re
import time
from typing import Any
from fastmcp import Context, FastMCP
from ..config import ESPToolServerConfig
logger = logging.getLogger(__name__)
class ChipControl:
"""ESP32/ESP8266 chip control and management via esptool subprocess"""
def __init__(self, app: FastMCP, config: ESPToolServerConfig):
self.app = app
self.config = config
# Set by server after QemuManager initialization (avoids circular import)
self.qemu_manager = None
self._register_tools()
def _register_tools(self) -> None:
"""Register chip control tools with FastMCP"""
@self.app.tool("esp_detect_chip")
async def detect_chip(
context: Context,
port: str | None = None,
baud_rate: int | None = None,
detailed: bool = False,
) -> dict[str, Any]:
"""
Detect ESP chip type and gather comprehensive information
Args:
port: Serial port (auto-detect if not specified)
baud_rate: Connection baud rate (use config default if not specified)
detailed: Include detailed chip information and eFuse data
"""
return await self._detect_chip_impl(context, port, baud_rate, detailed)
@self.app.tool("esp_connect_advanced")
async def connect_advanced(
context: Context,
port: str | None = None,
baud_rate: int | None = None,
timeout: int | None = None,
use_stub: bool = True,
retry_count: int = 3,
) -> dict[str, Any]:
"""
Advanced ESP device connection with retry logic and stub loading
Args:
port: Serial port (auto-detect if not specified)
baud_rate: Connection baud rate
timeout: Connection timeout in seconds
use_stub: Load ROM bootloader stub for faster operations
retry_count: Number of connection attempts
"""
return await self._connect_advanced_impl(
context, port, baud_rate, timeout, use_stub, retry_count
)
@self.app.tool("esp_reset_chip")
async def reset_chip(
context: Context, port: str | None = None, reset_type: str = "hard"
) -> dict[str, Any]:
"""
Reset ESP chip using various methods
Args:
port: Serial port (use active connection if not specified)
reset_type: Type of reset (hard, soft, bootloader)
"""
return await self._reset_chip_impl(context, port, reset_type)
@self.app.tool("esp_scan_ports")
async def scan_ports(context: Context, detailed: bool = False) -> dict[str, Any]:
"""
Scan for available ESP devices on all ports
Args:
detailed: Include detailed information about each detected device
"""
return await self._scan_ports_impl(context, detailed)
@self.app.tool("esp_load_test_firmware")
async def load_test_firmware(
context: Context, port: str | None = None, firmware_type: str = "blink"
) -> dict[str, Any]:
"""
Load test firmware for chip validation
Args:
port: Serial port (auto-detect if not specified)
firmware_type: Type of test firmware (blink, hello_world, wifi_scan)
"""
return await self._load_test_firmware_impl(context, port, firmware_type)
@self.app.tool("esp_load_ram")
async def load_ram(
context: Context,
binary_path: str,
port: str | None = None,
) -> dict[str, Any]:
"""
Load and execute binary in RAM without touching flash.
Perfect for rapid development iteration: test changes without
wearing out flash or waiting for a full flash cycle. The binary
must be compiled specifically for RAM execution (no flash relocation).
Note: The binary runs until the device is reset. Execution cannot
be stopped remotely without a hardware reset.
Args:
binary_path: Path to the RAM-executable binary
port: Serial port (auto-detect if not specified)
"""
return await self._load_ram_impl(context, binary_path, port)
@self.app.tool("esp_serial_monitor")
async def serial_monitor(
context: Context,
port: str,
baud_rate: int = 115200,
duration_seconds: float = 5.0,
reset_on_connect: bool = True,
) -> dict[str, Any]:
"""
Capture serial output from ESP device.
Opens the serial port and captures output for the specified duration.
Useful for reading boot messages, debug output, or application logs
without switching to a separate terminal monitor.
Args:
port: Serial port (required; no auto-detect for monitor)
baud_rate: Serial baud rate (default: 115200)
duration_seconds: How long to capture (max 30 seconds, default: 5)
reset_on_connect: Reset device before capturing to get boot messages (default: true)
"""
return await self._serial_monitor_impl(
context, port, baud_rate, duration_seconds, reset_on_connect
)
# ------------------------------------------------------------------
# Subprocess runner
# ------------------------------------------------------------------
async def _run_esptool(
self,
port: str,
command: str,
timeout: float = 10.0,
connect_attempts: int = 3,
extra_args: list[str] | None = None,
) -> dict[str, Any]:
"""
Run an esptool command as a fully async subprocess.
Args:
port: Serial port or socket:// URI
command: esptool command (e.g. "chip-id", "flash-id")
timeout: Timeout in seconds
connect_attempts: Number of connection attempts
extra_args: Additional CLI flags inserted before the command
Returns:
dict with "success", "output", and optionally "error"
"""
cmd = [
self.config.esptool_path,
"--port", port,
"--connect-attempts", str(connect_attempts),
]
if extra_args:
cmd.extend(extra_args)
cmd.append(command)
proc = None
try:
proc = await asyncio.create_subprocess_exec(
*cmd,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout=timeout)
output = (stdout or b"").decode() + (stderr or b"").decode()
if proc.returncode != 0:
return {"success": False, "error": output.strip()}
return {"success": True, "output": output}
except asyncio.TimeoutError:
if proc and proc.returncode is None:
proc.kill()
await proc.wait()
return {"success": False, "error": f"Timeout ({timeout}s)"}
except FileNotFoundError:
return {"success": False, "error": f"esptool not found at {self.config.esptool_path}"}
except Exception as e:
if proc and proc.returncode is None:
proc.kill()
await proc.wait()
return {"success": False, "error": str(e)}
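The runner pattern above (spawn with `create_subprocess_exec`, bound with `wait_for`, kill the process on expiry) can be exercised without any ESP hardware by pointing it at an arbitrary CLI. A standalone sketch using only the standard library, with the Python interpreter standing in for esptool:

```python
import asyncio
import sys

async def run_cli(cmd: list[str], timeout: float = 10.0) -> dict:
    """Spawn a CLI, capture combined stdout+stderr, kill the process on timeout."""
    proc = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    try:
        stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout=timeout)
    except asyncio.TimeoutError:
        # communicate() never returned: reap the process before reporting
        proc.kill()
        await proc.wait()
        return {"success": False, "error": f"Timeout ({timeout}s)"}
    output = (stdout or b"").decode() + (stderr or b"").decode()
    if proc.returncode != 0:
        return {"success": False, "error": output.strip()}
    return {"success": True, "output": output}

# Exercise the pattern against the Python interpreter instead of esptool
result = asyncio.run(run_cli([sys.executable, "-c", "print('hello')"]))
```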
# ------------------------------------------------------------------
# Output parsing helpers
# ------------------------------------------------------------------
@staticmethod
def _parse_chip_output(output: str) -> dict[str, Any]:
"""Extract chip info fields from esptool chip-id / flash-id output."""
result: dict[str, Any] = {}
chip_match = re.search(r"Chip type:\s*(.+?)(?:\n|$)", output)
if not chip_match:
chip_match = re.search(r"Chip is\s+(.+?)(?:\n|$)", output)
if not chip_match:
chip_match = re.search(r"Detecting chip type[.…]+\s*(\S+)", output)
if chip_match:
result["chip_type"] = chip_match.group(1).strip()
mac_match = re.search(r"MAC:\s*([0-9a-f:]+)", output, re.IGNORECASE)
if mac_match:
result["mac_address"] = mac_match.group(1)
features_match = re.search(r"Features:\s*(.+?)(?:\n|$)", output)
if features_match:
result["features"] = [f.strip() for f in features_match.group(1).split(",")]
crystal_match = re.search(r"Crystal\s+(?:frequency:\s*|is\s+)(\d+)\s*MHz", output)
if crystal_match:
result["crystal_freq"] = f"{crystal_match.group(1)}MHz"
flash_size_match = re.search(r"Detected flash size:\s*(\S+)", output)
if flash_size_match:
result["flash_size"] = flash_size_match.group(1)
flash_mfr_match = re.search(r"Manufacturer:\s*(\S+)", output)
if flash_mfr_match:
result["flash_manufacturer"] = flash_mfr_match.group(1)
return result
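The regex extraction above can be checked against canned text. Note that esptool's exact wording varies between versions, so the sample output below is illustrative rather than authoritative; this trimmed-down copy covers only the chip-type and MAC fields:

```python
import re

def parse_chip_output(output: str) -> dict:
    """Standalone copy of the parsing logic above (chip type + MAC only)."""
    result: dict = {}
    m = re.search(r"Chip is\s+(.+?)(?:\n|$)", output)
    if m:
        result["chip_type"] = m.group(1).strip()
    m = re.search(r"MAC:\s*([0-9a-f:]+)", output, re.IGNORECASE)
    if m:
        result["mac_address"] = m.group(1)
    return result

# Illustrative output; real esptool wording differs across versions
sample = "Chip is ESP32-S3 (QFN56) (revision v0.2)\nMAC: 7c:df:a1:00:00:01\n"
parsed = parse_chip_output(sample)
assert parsed["chip_type"].startswith("ESP32-S3")
assert parsed["mac_address"] == "7c:df:a1:00:00:01"
```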
# ------------------------------------------------------------------
# Tool implementations
# ------------------------------------------------------------------
async def _detect_chip_impl(
self, context: Context, port: str | None, baud_rate: int | None, detailed: bool
) -> dict[str, Any]:
"""Detect chip type via esptool chip-id subprocess."""
if not port:
port = await self._auto_detect_port()
if not port:
return {
"success": False,
"error": "No ESP devices found on available ports",
"scanned_ports": self.config.get_common_ports(),
}
baud_rate = baud_rate or self.config.default_baud_rate
start_time = time.time()
info = await self._run_esptool(
port, "chip-id",
extra_args=["--baud", str(baud_rate)],
connect_attempts=1,
)
if not info["success"]:
return {"success": False, "error": info["error"], "port": port, "baud_rate": baud_rate}
parsed = self._parse_chip_output(info["output"])
# Optionally fetch flash details
if detailed:
flash_info = await self._run_esptool(
port, "flash-id",
extra_args=["--baud", str(baud_rate)],
)
if flash_info["success"]:
parsed.update(self._parse_chip_output(flash_info["output"]))
connection_time = time.time() - start_time
chip_data = (
{
"chip_type": parsed.get("chip_type", "Unknown"),
"mac_address": parsed.get("mac_address"),
"flash_size": parsed.get("flash_size"),
"crystal_frequency": parsed.get("crystal_freq"),
"features": parsed.get("features"),
}
if detailed
else {
"chip_type": parsed.get("chip_type", "Unknown"),
"mac_address": parsed.get("mac_address"),
}
)
return {
"success": True,
"port": port,
"baud_rate": baud_rate,
"connection_time_seconds": round(connection_time, 2),
"chip_info": chip_data,
}
async def _connect_advanced_impl(
self,
context: Context,
port: str | None,
baud_rate: int | None,
timeout: int | None,
use_stub: bool,
retry_count: int,
) -> dict[str, Any]:
"""Verify device connectivity with retries via esptool chip-id subprocess."""
if not port:
port = await self._auto_detect_port()
if not port:
return {"success": False, "error": "No ESP devices found"}
baud_rate = baud_rate or self.config.default_baud_rate
connection_timeout = float(timeout or self.config.connection_timeout)
last_error = None
for attempt in range(retry_count):
logger.info("Connection attempt %d/%d on %s", attempt + 1, retry_count, port)
info = await self._run_esptool(
port, "chip-id",
timeout=connection_timeout,
connect_attempts=1,
extra_args=["--baud", str(baud_rate)],
)
if info["success"]:
parsed = self._parse_chip_output(info["output"])
return {
"success": True,
"port": port,
"baud_rate": baud_rate,
"attempt": attempt + 1,
"stub_loaded": use_stub, # CLI loads stubs automatically
"chip_type": parsed.get("chip_type", "Unknown"),
"mac_address": parsed.get("mac_address"),
}
last_error = info["error"]
logger.warning("Attempt %d failed: %s", attempt + 1, last_error)
if attempt < retry_count - 1:
await asyncio.sleep(1)
return {"success": False, "error": last_error, "attempts": retry_count, "port": port}
async def _reset_chip_impl(
self, context: Context, port: str | None, reset_type: str
) -> dict[str, Any]:
"""Reset chip via esptool --after flag."""
if not port:
port = await self._auto_detect_port()
if not port:
return {"success": False, "error": "No ESP devices found"}
after_map = {
"hard": "hard_reset",
"soft": "soft_reset",
"bootloader": "no_reset",
}
if reset_type not in after_map:
return {
"success": False,
"error": f"Unknown reset type: {reset_type}",
"available_types": list(after_map.keys()),
}
info = await self._run_esptool(
port, "chip-id",
timeout=10.0,
connect_attempts=1,
extra_args=["--after", after_map[reset_type]],
)
if not info["success"]:
return {"success": False, "error": info["error"], "port": port, "reset_type": reset_type}
return {
"success": True,
"port": port,
"reset_type": reset_type,
"timestamp": time.time(),
}
async def _scan_ports_impl(self, context: Context, detailed: bool) -> dict[str, Any]:
"""Scan for available ESP devices using subprocess probes."""
common_esp_ports = [
"/dev/ttyUSB0", "/dev/ttyUSB1", "/dev/ttyUSB2", "/dev/ttyUSB3",
"/dev/ttyACM0", "/dev/ttyACM1", "/dev/ttyACM2", "/dev/ttyACM3",
]
usb_ports = [p for p in common_esp_ports if os.path.exists(p)]
detected_devices: list[dict[str, Any]] = []
scan_results: dict[str, Any] = {}
if not usb_ports:
# Still check QEMU before returning empty
qemu_devices = self._get_qemu_devices()
detected_devices.extend(qemu_devices)
return {
"success": True,
"detected_devices": detected_devices,
"total_scanned": len(common_esp_ports) + len(qemu_devices),
"checked_ports": common_esp_ports,
"qemu_devices": qemu_devices or None,
"scan_results": {"note": "No USB/ACM ports found on system"},
"timestamp": time.time(),
}
for port in usb_ports:
info = await self._run_esptool(port, "chip-id", connect_attempts=1)
device_info: dict[str, Any] = {"port": port, "available": info["success"]}
if info["success"]:
device_info.update(self._parse_chip_output(info["output"]))
if detailed:
flash_info = await self._run_esptool(port, "flash-id")
if flash_info["success"]:
device_info.update(self._parse_chip_output(flash_info["output"]))
detected_devices.append(device_info)
else:
device_info["error"] = info["error"]
scan_results[port] = device_info
qemu_devices = self._get_qemu_devices()
detected_devices.extend(qemu_devices)
return {
"success": True,
"detected_devices": detected_devices,
"total_scanned": len(usb_ports) + len(qemu_devices),
"checked_ports": common_esp_ports,
"available_ports": usb_ports,
"qemu_devices": qemu_devices or None,
"scan_results": scan_results if detailed else None,
"timestamp": time.time(),
}
async def _load_test_firmware_impl(
self, context: Context, port: str | None, firmware_type: str
) -> dict[str, Any]:
"""Load test firmware (stub — requires ESP-IDF integration)."""
if not port:
port = await self._auto_detect_port()
if not port:
return {"success": False, "error": "No ESP devices found"}
test_firmwares = {
"blink": "Simple LED blink test",
"hello_world": "Serial output hello world",
"wifi_scan": "WiFi network scanner",
}
if firmware_type not in test_firmwares:
return {
"success": False,
"error": f"Unknown firmware type: {firmware_type}",
"available_types": list(test_firmwares.keys()),
}
return {
"success": True,
"port": port,
"firmware_type": firmware_type,
"description": test_firmwares[firmware_type],
"note": "Test firmware loading requires ESP-IDF integration (coming soon)",
"timestamp": time.time(),
}
async def _load_ram_impl(
self, context: Context, binary_path: str, port: str | None
) -> dict[str, Any]:
"""Load and execute binary in RAM via esptool load-ram."""
from pathlib import Path
bin_path = Path(binary_path)
if not bin_path.exists():
return {"success": False, "error": f"Binary file not found: {binary_path}"}
if not port:
port = await self._auto_detect_port()
if not port:
return {"success": False, "error": "No ESP devices found"}
start_time = time.time()
file_size = bin_path.stat().st_size
# esptool load-ram command loads binary to RAM and executes it
cmd = [
self.config.esptool_path,
"--port", port,
"load-ram", str(bin_path),
]
proc = None
try:
proc = await asyncio.create_subprocess_exec(
*cmd,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout=30.0)
output = (stdout or b"").decode() + (stderr or b"").decode()
elapsed = round(time.time() - start_time, 2)
if proc.returncode != 0:
return {
"success": False,
"error": output.strip(),
"port": port,
"binary_path": binary_path,
}
return {
"success": True,
"port": port,
"binary_path": binary_path,
"file_size": file_size,
"elapsed_seconds": elapsed,
"note": "Binary loaded to RAM and executing. Reset device to stop.",
}
except asyncio.TimeoutError:
if proc and proc.returncode is None:
proc.kill()
await proc.wait()
return {"success": False, "error": "Timeout loading binary to RAM"}
except FileNotFoundError:
return {"success": False, "error": f"esptool not found at {self.config.esptool_path}"}
except Exception as e:
if proc and proc.returncode is None:
proc.kill()
await proc.wait()
return {"success": False, "error": str(e)}
async def _serial_monitor_impl(
self,
context: Context,
port: str,
baud_rate: int,
duration_seconds: float,
reset_on_connect: bool,
) -> dict[str, Any]:
"""Capture serial output using pyserial (async wrapper)."""
import serial
# Clamp duration to safe range
duration_seconds = min(max(duration_seconds, 0.1), 30.0)
if not os.path.exists(port):
return {"success": False, "error": f"Port not found: {port}"}
captured_lines: list[str] = []
start_time = time.time()
try:
# Open serial port with short timeout for non-blocking reads
ser = serial.Serial(
port=port,
baudrate=baud_rate,
timeout=0.1,
)
# Reset device if requested (toggle DTR/RTS)
if reset_on_connect:
ser.dtr = False
ser.rts = True
await asyncio.sleep(0.1)
ser.rts = False
ser.dtr = True
await asyncio.sleep(0.1)
ser.dtr = False
# Read serial output for specified duration
deadline = time.time() + duration_seconds
buffer = b""
while time.time() < deadline:
# Non-blocking read in executor to avoid blocking event loop
chunk = await asyncio.get_event_loop().run_in_executor(
None, lambda: ser.read(1024)
)
if chunk:
buffer += chunk
# Process complete lines
while b"\n" in buffer:
line, buffer = buffer.split(b"\n", 1)
try:
decoded = line.decode("utf-8", errors="replace").rstrip("\r")
captured_lines.append(decoded)
except Exception:
captured_lines.append(line.hex())
else:
# Small sleep to avoid busy-waiting
await asyncio.sleep(0.05)
# Capture any remaining partial line
if buffer:
try:
captured_lines.append(buffer.decode("utf-8", errors="replace").rstrip("\r"))
except Exception:
captured_lines.append(buffer.hex())
ser.close()
elapsed = round(time.time() - start_time, 2)
return {
"success": True,
"port": port,
"baud_rate": baud_rate,
"duration_seconds": elapsed,
"reset_performed": reset_on_connect,
"line_count": len(captured_lines),
"output": "\n".join(captured_lines),
}
except serial.SerialException as e:
return {
"success": False,
"error": f"Serial error: {e}",
"port": port,
}
except Exception as e:
return {
"success": False,
"error": str(e),
"port": port,
}
# ------------------------------------------------------------------
# Helpers
# ------------------------------------------------------------------
def _get_qemu_devices(self) -> list[dict[str, Any]]:
"""Collect running QEMU instances as device entries."""
if not self.qemu_manager:
return []
devices = []
for qemu_info in self.qemu_manager.get_running_ports():
qemu_info["available"] = True
devices.append(qemu_info)
return devices
async def _auto_detect_port(self) -> str | None:
"""Auto-detect an ESP device port via quick subprocess probes."""
for port in self.config.get_common_ports():
if not os.path.exists(port):
continue
info = await self._run_esptool(port, "chip-id", connect_attempts=1)
if info["success"]:
return port
return None
async def health_check(self) -> dict[str, Any]:
"""Component health check"""
return {
"status": "healthy",
"esptool_path": self.config.esptool_path,
}

@@ -1,356 +0,0 @@
"""
Diagnostics Component
Provides ESP device diagnostics including memory dumps, flash identification,
performance profiling, and comprehensive diagnostic reporting. All operations
shell out to esptool as async subprocesses.
"""
import asyncio
import logging
import re
import time
from typing import Any
from fastmcp import Context, FastMCP
from ..config import ESPToolServerConfig
logger = logging.getLogger(__name__)
# Size suffixes for human-friendly parsing
_SIZE_MULTIPLIERS = {"B": 1, "KB": 1024, "MB": 1024 * 1024}
def _parse_size(size_str: str) -> int:
"""Parse a human-friendly size string like '1KB', '256B', '4MB' into bytes."""
size_str = size_str.strip().upper()
for suffix, mult in sorted(_SIZE_MULTIPLIERS.items(), key=lambda x: -len(x[0])):
if size_str.endswith(suffix):
num = size_str[: -len(suffix)].strip()
return int(num) * mult
# Try as plain integer (decimal or hex)
return int(size_str, 0)
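A standalone copy of the parser shows the accepted inputs: suffixed sizes (`B`, `KB`, `MB`) plus plain decimal or hex integers via `int(s, 0)`:

```python
def parse_size(size_str: str) -> int:
    """Standalone copy of the size parser above."""
    multipliers = {"B": 1, "KB": 1024, "MB": 1024 * 1024}
    size_str = size_str.strip().upper()
    # Check longer suffixes first so "KB"/"MB" are not swallowed by "B"
    for suffix, mult in sorted(multipliers.items(), key=lambda x: -len(x[0])):
        if size_str.endswith(suffix):
            return int(size_str[: -len(suffix)].strip()) * mult
    return int(size_str, 0)  # plain decimal or hex

assert parse_size("1KB") == 1024
assert parse_size("256B") == 256
assert parse_size("0x100") == 256
```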
class Diagnostics:
"""ESP device diagnostics and analysis"""
def __init__(self, app: FastMCP, config: ESPToolServerConfig) -> None:
self.app = app
self.config = config
self._register_tools()
async def _run_esptool(
self,
port: str,
args: list[str],
timeout: float = 30.0,
) -> dict[str, Any]:
"""Run esptool as an async subprocess."""
cmd = [self.config.esptool_path, "--port", port, *args]
proc = None
try:
proc = await asyncio.create_subprocess_exec(
*cmd,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout=timeout)
output = (stdout or b"").decode() + (stderr or b"").decode()
if proc.returncode != 0:
return {"success": False, "error": output.strip()[:500]}
return {"success": True, "output": output}
except asyncio.TimeoutError:
if proc and proc.returncode is None:
proc.kill()
await proc.wait()
return {"success": False, "error": f"Timeout after {timeout}s"}
except FileNotFoundError:
return {"success": False, "error": f"esptool not found at {self.config.esptool_path}"}
except Exception as e:
if proc and proc.returncode is None:
proc.kill()
await proc.wait()
return {"success": False, "error": str(e)}
def _register_tools(self) -> None:
"""Register diagnostic tools"""
@self.app.tool("esp_memory_dump")
async def memory_dump(
context: Context,
port: str | None = None,
start_address: str = "0x0",
size: str = "1KB",
) -> dict[str, Any]:
"""Dump device memory for analysis.
Reads raw bytes from an arbitrary memory address on the ESP device
using esptool's dump-mem command. Useful for inspecting bootloader
state, peripheral registers, or RAM contents.
The output is hex-formatted for readability. For flash memory reads,
use esp_flash_read instead (faster, supports larger ranges).
Args:
port: Serial port or socket:// URI (required)
start_address: Memory address to start reading (hex string, default: "0x0")
size: Number of bytes to read (e.g. "256B", "1KB", "4KB", default: "1KB")
"""
return await self._memory_dump_impl(context, port, start_address, size)
@self.app.tool("esp_performance_profile")
async def performance_profile(
context: Context,
port: str | None = None,
duration: int = 30,
) -> dict[str, Any]:
"""Profile device communication performance.
Measures serial transport speed by timing a sequence of esptool
operations (chip-id, flash-id, small memory reads). Reports
round-trip latencies and throughput estimates. Useful for comparing
physical serial vs QEMU socket performance.
Args:
port: Serial port or socket:// URI (required)
duration: Not used for timing control; kept for API compatibility
"""
return await self._performance_profile_impl(context, port)
@self.app.tool("esp_diagnostic_report")
async def diagnostic_report(
context: Context,
port: str | None = None,
include_memory: bool = False,
) -> dict[str, Any]:
"""Generate comprehensive diagnostic report for an ESP device.
Collects chip identity, MAC address, flash information, and
optionally a small memory dump into a single structured report.
Useful for troubleshooting connectivity issues or characterizing
an unknown device.
For security-focused analysis, use esp_security_audit instead.
Args:
port: Serial port or socket:// URI (required)
include_memory: Include a 256-byte memory dump from 0x0 (default: false)
"""
return await self._diagnostic_report_impl(context, port, include_memory)
async def _memory_dump_impl(
self,
context: Context,
port: str | None,
start_address: str,
size: str,
) -> dict[str, Any]:
"""Read memory via esptool dump-mem (writes to temp file, then reads it)."""
if not port:
return {"success": False, "error": "Port is required for memory dump"}
byte_count = _parse_size(size)
if byte_count > 1024 * 1024:
return {"success": False, "error": "Maximum dump size is 1MB"}
# dump-mem writes raw bytes to a file
import tempfile
with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as tmp:
tmp_path = tmp.name
try:
result = await self._run_esptool(
port,
["dump-mem", start_address, str(byte_count), tmp_path],
timeout=60.0,
)
if not result["success"]:
return {"success": False, "error": result["error"], "port": port}
# Read the dump file and format as hex
from pathlib import Path
dump_path = Path(tmp_path)
if not dump_path.exists() or dump_path.stat().st_size == 0:
return {"success": False, "error": "Dump file is empty", "port": port}
raw = dump_path.read_bytes()
# Format as hex dump (16 bytes per line with ASCII)
hex_lines = []
for offset in range(0, len(raw), 16):
chunk = raw[offset : offset + 16]
addr = int(start_address, 0) + offset
hex_part = " ".join(f"{b:02x}" for b in chunk)
ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
hex_lines.append(f"0x{addr:08x}: {hex_part:<48s} {ascii_part}")
return {
"success": True,
"port": port,
"start_address": start_address,
"bytes_read": len(raw),
"hex_dump": "\n".join(hex_lines[:64]), # Cap at 64 lines (1KB)
"truncated": len(hex_lines) > 64,
}
finally:
import os
try:
os.unlink(tmp_path)
except OSError:
pass
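The 16-bytes-per-line hex/ASCII layout used above can be exercised on its own. A minimal standalone sketch of the same formatting loop:

```python
def hex_dump(raw: bytes, base_addr: int = 0) -> list[str]:
    """Format bytes as the 16-per-line hex+ASCII layout used above."""
    lines = []
    for offset in range(0, len(raw), 16):
        chunk = raw[offset : offset + 16]
        hex_part = " ".join(f"{b:02x}" for b in chunk)
        # Printable ASCII passes through; everything else becomes "."
        ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"0x{base_addr + offset:08x}: {hex_part:<48s} {ascii_part}")
    return lines

lines = hex_dump(b"Hello, ESP32!\x00\x01\x02\x03", base_addr=0x40000000)
assert lines[0].startswith("0x40000000:")
assert lines[0].endswith("Hello, ESP32!...")
```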
async def _performance_profile_impl(
self,
context: Context,
port: str | None,
) -> dict[str, Any]:
"""Profile serial transport by timing esptool operations."""
if not port:
return {"success": False, "error": "Port is required for profiling"}
measurements: list[dict[str, Any]] = []
# Test 1: chip-id (lightweight command)
t0 = time.time()
r = await self._run_esptool(port, ["chip-id"], timeout=15.0)
elapsed = round(time.time() - t0, 3)
measurements.append({
"operation": "chip-id",
"elapsed_seconds": elapsed,
"success": r["success"],
})
# Test 2: flash-id (reads SPI flash ID register)
t0 = time.time()
r = await self._run_esptool(port, ["flash-id"], timeout=15.0)
elapsed = round(time.time() - t0, 3)
measurements.append({
"operation": "flash-id",
"elapsed_seconds": elapsed,
"success": r["success"],
})
# Test 3: read-mac
t0 = time.time()
r = await self._run_esptool(port, ["read-mac"], timeout=15.0)
elapsed = round(time.time() - t0, 3)
measurements.append({
"operation": "read-mac",
"elapsed_seconds": elapsed,
"success": r["success"],
})
# Test 4: read 4KB of flash (throughput test)
import tempfile
with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as tmp:
tmp_path = tmp.name
try:
t0 = time.time()
r = await self._run_esptool(
port,
["read-flash", "0x0", "4096", tmp_path],
timeout=60.0,
)
elapsed = round(time.time() - t0, 3)
throughput = None
if r["success"] and elapsed > 0:
throughput = round(4096 / elapsed, 0)
measurements.append({
"operation": "read-flash (4KB)",
"elapsed_seconds": elapsed,
"success": r["success"],
"throughput_bytes_per_sec": throughput,
})
finally:
import os
try:
os.unlink(tmp_path)
except OSError:
pass
# Summary
successes = [m for m in measurements if m["success"]]
avg_latency = (
round(sum(m["elapsed_seconds"] for m in successes) / len(successes), 3)
if successes
else None
)
return {
"success": True,
"port": port,
"measurements": measurements,
"summary": {
"operations_tested": len(measurements),
"operations_succeeded": len(successes),
"average_latency_seconds": avg_latency,
},
}
async def _diagnostic_report_impl(
self,
context: Context,
port: str | None,
include_memory: bool,
) -> dict[str, Any]:
"""Generate comprehensive device diagnostic report."""
if not port:
return {"success": False, "error": "Port is required for diagnostic report"}
report: dict[str, Any] = {"port": port}
# 1. Chip identification
chip_result = await self._run_esptool(port, ["chip-id"], timeout=15.0)
if chip_result["success"]:
output = chip_result["output"]
chip_match = re.search(r"Chip is (\S+)", output)
id_match = re.search(r"Chip ID:\s*(0x[0-9a-fA-F]+)", output)
report["chip"] = chip_match.group(1) if chip_match else "unknown"
report["chip_id"] = id_match.group(1) if id_match else "unknown"
else:
return {"success": False, "error": f"Cannot reach device: {chip_result['error']}", "port": port}
# 2. MAC address
mac_result = await self._run_esptool(port, ["read-mac"], timeout=15.0)
if mac_result["success"]:
mac_match = re.search(r"MAC:\s*([0-9a-fA-F:]+)", mac_result["output"])
report["mac_address"] = mac_match.group(1) if mac_match else "unknown"
# 3. Flash info
flash_result = await self._run_esptool(port, ["flash-id"], timeout=15.0)
if flash_result["success"]:
output = flash_result["output"]
mfr_match = re.search(r"Manufacturer:\s*(0x[0-9a-fA-F]+)", output)
dev_match = re.search(r"Device:\s*(0x[0-9a-fA-F]+)", output)
size_match = re.search(r"Detected flash size:\s*(\S+)", output)
report["flash"] = {
"manufacturer": mfr_match.group(1) if mfr_match else "unknown",
"device": dev_match.group(1) if dev_match else "unknown",
"size": size_match.group(1) if size_match else "unknown",
}
# 4. Optional memory dump
if include_memory:
mem_result = await self._memory_dump_impl(context, port, "0x0", "256B")
if mem_result.get("success"):
report["memory_dump_0x0"] = mem_result.get("hex_dump", "")
report["success"] = True
return report
async def health_check(self) -> dict[str, Any]:
"""Component health check"""
return {"status": "healthy", "note": "Diagnostics ready"}
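The diagnostic report above is assembled by regex-scraping esptool's human-readable output. A standalone sketch of that parsing step, using hypothetical sample strings shaped to match the patterns in `_diagnostic_report_impl` (real esptool wording varies between versions):

```python
import re

# Hypothetical esptool output fragments; actual wording differs by version.
CHIP_OUTPUT = "Chip is ESP32-D0WD (revision 1)\nChip ID: 0x1234abcd"
FLASH_OUTPUT = "Manufacturer: 0xc8\nDevice: 0x4016\nDetected flash size: 4MB"

def parse_chip(output: str) -> dict:
    """Extract chip name and ID, mirroring _diagnostic_report_impl."""
    chip = re.search(r"Chip is (\S+)", output)
    chip_id = re.search(r"Chip ID:\s*(0x[0-9a-fA-F]+)", output)
    return {
        "chip": chip.group(1) if chip else "unknown",
        "chip_id": chip_id.group(1) if chip_id else "unknown",
    }

def parse_flash(output: str) -> dict:
    """Extract flash manufacturer/device/size, as in the flash-id step."""
    fields = {
        "manufacturer": r"Manufacturer:\s*(0x[0-9a-fA-F]+)",
        "device": r"Device:\s*(0x[0-9a-fA-F]+)",
        "size": r"Detected flash size:\s*(\S+)",
    }
    return {k: (m.group(1) if (m := re.search(p, output)) else "unknown")
            for k, p in fields.items()}

print(parse_chip(CHIP_OUTPUT))
print(parse_flash(FLASH_OUTPUT))
```

Falling back to "unknown" rather than failing keeps the report useful even when esptool's output format drifts.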


@ -1,290 +0,0 @@
"""
Firmware Builder Component
Provides firmware binary conversion and analysis using esptool's
elf2image and image-info commands.
"""
import asyncio
import logging
import re
from pathlib import Path
from typing import Any
from fastmcp import Context, FastMCP
from ..config import ESPToolServerConfig
logger = logging.getLogger(__name__)
class FirmwareBuilder:
"""ESP firmware building and compilation"""
def __init__(self, app: FastMCP, config: ESPToolServerConfig) -> None:
self.app = app
self.config = config
self._register_tools()
async def _run_cmd(
self,
cmd: list[str],
timeout: float = 30.0,
) -> dict[str, Any]:
"""Run a CLI command as an async subprocess."""
proc = None
try:
proc = await asyncio.create_subprocess_exec(
*cmd,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout=timeout)
output = (stdout or b"").decode(errors="replace") + (stderr or b"").decode(errors="replace")
if proc.returncode != 0:
return {"success": False, "error": output.strip()[:500]}
return {"success": True, "output": output}
except asyncio.TimeoutError:
if proc and proc.returncode is None:
proc.kill()
await proc.wait()
return {"success": False, "error": f"Timeout after {timeout}s"}
except FileNotFoundError:
return {"success": False, "error": f"Command not found: {cmd[0]}"}
except Exception as e:
if proc and proc.returncode is None:
proc.kill()
await proc.wait()
return {"success": False, "error": str(e)}
def _register_tools(self) -> None:
"""Register firmware building tools"""
@self.app.tool("esp_elf_to_binary")
async def elf_to_binary(
context: Context, elf_path: str, output_path: str | None = None
) -> dict[str, Any]:
"""Convert ELF file to flashable binary"""
return await self._elf_to_binary_impl(context, elf_path, output_path)
@self.app.tool("esp_elf_to_ram_binary")
async def elf_to_ram_binary(
context: Context,
elf_path: str,
output_path: str | None = None,
chip: str = "auto",
) -> dict[str, Any]:
"""Convert ELF file to RAM-loadable binary for use with esp_load_ram.
Creates a binary with RAM segments (IRAM/DRAM) placed first, suitable
for loading directly into device RAM without touching flash. Perfect
for rapid development iteration.
Workflow:
1. Build your project with ESP-IDF (disable secure boot/signed images)
2. Use this tool to convert the ELF to a RAM binary
3. Use esp_load_ram to load and execute on device
Requirements:
- ELF must NOT have embedded SHA256 digest at reserved offset
- Disable CONFIG_SECURE_BOOT and CONFIG_SECURE_SIGNED_APPS in sdkconfig
- Some PlatformIO builds may have incompatible settings
Note: Features requiring flash (OTA, NVS, SPIFFS) won't work from RAM.
Args:
elf_path: Path to the ELF file from your build
output_path: Output binary path (default: <elf_name>-ram.bin)
chip: Target chip type (auto, esp32, esp32s3, etc.)
"""
return await self._elf_to_ram_binary_impl(context, elf_path, output_path, chip)
@self.app.tool("esp_firmware_analyze")
async def analyze_firmware(context: Context, firmware_path: str) -> dict[str, Any]:
"""Analyze firmware binary structure"""
return await self._firmware_analyze_impl(context, firmware_path)
async def _elf_to_binary_impl(
self,
context: Context,
elf_path: str,
output_path: str | None,
) -> dict[str, Any]:
"""Convert ELF to flashable binary via esptool elf2image."""
elf = Path(elf_path)
if not elf.exists():
return {"success": False, "error": f"ELF file not found: {elf_path}"}
cmd = [self.config.esptool_path, "--chip", "auto", "elf2image"]
if output_path:
cmd.extend(["--output", output_path])
cmd.append(elf_path)
result = await self._run_cmd(cmd, timeout=30.0)
if not result["success"]:
return {"success": False, "error": result["error"], "elf_path": elf_path}
# Determine the output file path
if output_path:
out = Path(output_path)
else:
# esptool elf2image default: <input>-<chip>.bin or <input>.bin
# Look for likely output files
out = elf.with_suffix(".bin")
if not out.exists():
# Try common patterns
for candidate in elf.parent.glob(f"{elf.stem}*.bin"):
out = candidate
break
response: dict[str, Any] = {
"success": True,
"elf_path": elf_path,
"esptool_output": result["output"][:1000],
}
if out.exists():
response["output_path"] = str(out)
response["output_size_bytes"] = out.stat().st_size
return response
async def _elf_to_ram_binary_impl(
self,
context: Context,
elf_path: str,
output_path: str | None,
chip: str,
) -> dict[str, Any]:
"""Convert ELF to RAM-loadable binary via esptool elf2image --ram-only-header."""
elf = Path(elf_path)
if not elf.exists():
return {"success": False, "error": f"ELF file not found: {elf_path}"}
# Determine output path
if output_path:
out = Path(output_path)
else:
out = elf.with_name(f"{elf.stem}-ram.bin")
cmd = [
self.config.esptool_path,
"--chip", chip,
"elf2image",
"--ram-only-header",
"--output", str(out),
elf_path,
]
result = await self._run_cmd(cmd, timeout=30.0)
if not result["success"]:
return {
"success": False,
"error": result["error"],
"elf_path": elf_path,
}
response: dict[str, Any] = {
"success": True,
"elf_path": elf_path,
"output_path": str(out),
"chip": chip,
"ram_optimized": True,
}
if out.exists():
response["output_size_bytes"] = out.stat().st_size
# Add usage hint
response["usage_hint"] = (
f"Load to device with: esp_load_ram(binary_path='{out}', port='<your-port>')"
)
return response
async def _firmware_analyze_impl(
self,
context: Context,
firmware_path: str,
) -> dict[str, Any]:
"""Analyze firmware binary via esptool image-info."""
fw = Path(firmware_path)
if not fw.exists():
return {"success": False, "error": f"Firmware file not found: {firmware_path}"}
# image-info --version 2 gives extended output
result = await self._run_cmd(
[self.config.esptool_path, "image-info", "--version", "2", firmware_path],
timeout=15.0,
)
if not result["success"]:
return {"success": False, "error": result["error"], "firmware_path": firmware_path}
output = result["output"]
info = self._parse_image_info(output)
return {
"success": True,
"firmware_path": firmware_path,
"file_size_bytes": fw.stat().st_size,
**info,
"raw_output": output[:2000],
}
def _parse_image_info(self, output: str) -> dict[str, Any]:
"""Parse esptool image-info output into structured data."""
info: dict[str, Any] = {}
# Extract key fields using regex
patterns = {
"entry_point": r"Entry point:\s*(0x[0-9a-fA-F]+)",
"chip": r"Chip:\s*(\S+)",
"flash_mode": r"Flash mode:\s*(\S+)",
"flash_size": r"Flash size:\s*(\S+)",
"flash_freq": r"Flash freq:\s*(\S+)",
}
for key, pattern in patterns.items():
match = re.search(pattern, output)
if match:
info[key] = match.group(1)
# Parse segments
segments = []
# Pattern: Segment N: len 0xNNNNN load 0xNNNNNNNN ...
for match in re.finditer(
r"Segment\s+(\d+):\s+len\s+(0x[0-9a-fA-F]+)\s+load\s+(0x[0-9a-fA-F]+)",
output,
):
segments.append({
"index": int(match.group(1)),
"length": match.group(2),
"load_address": match.group(3),
})
if segments:
info["segments"] = segments
info["segment_count"] = len(segments)
# Check for validation status
if "valid" in output.lower():
valid_match = re.search(r"Validation\s+Hash:\s*(\S+)", output, re.IGNORECASE)
if valid_match:
info["validation_hash"] = valid_match.group(1)
return info
async def health_check(self) -> dict[str, Any]:
"""Component health check"""
return {"status": "healthy", "note": "Firmware builder ready"}
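The segment regex in `_parse_image_info` can be exercised against a made-up fragment of `image-info` output. The sample below is hypothetical and merely shaped to match the patterns above; real esptool formatting differs between versions:

```python
import re

# Hypothetical image-info fragment matching the patterns in _parse_image_info.
SAMPLE = (
    "Entry point: 0x40080688\n"
    "Segment 1: len 0x0a2b4 load 0x3ffb0000\n"
    "Segment 2: len 0x01b2c load 0x40080000\n"
)

entry = re.search(r"Entry point:\s*(0x[0-9a-fA-F]+)", SAMPLE).group(1)
segments = [
    {"index": int(m.group(1)), "length": m.group(2), "load_address": m.group(3)}
    for m in re.finditer(
        r"Segment\s+(\d+):\s+len\s+(0x[0-9a-fA-F]+)\s+load\s+(0x[0-9a-fA-F]+)",
        SAMPLE,
    )
]
print(entry, segments)
```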


@ -1,548 +0,0 @@
"""
Flash Manager Component
Provides ESP flash memory operations: write, read, erase, and backup.
All operations shell out to esptool as an async subprocess, matching
the pattern established in chip_control.py.
"""
import asyncio
import logging
import re
import time
from pathlib import Path
from typing import Any
from fastmcp import Context, FastMCP
from ..config import ESPToolServerConfig
logger = logging.getLogger(__name__)
class FlashManager:
"""ESP flash memory management and operations"""
def __init__(self, app: FastMCP, config: ESPToolServerConfig) -> None:
self.app = app
self.config = config
self._register_tools()
async def _run_esptool(
self,
port: str,
args: list[str],
timeout: float = 120.0,
) -> dict[str, Any]:
"""Run esptool with arbitrary args as an async subprocess.
Args:
port: Serial port or socket:// URI
args: esptool arguments after --port (e.g. ["write-flash", "0x0", "fw.bin"])
timeout: Timeout in seconds (flash operations can be slow)
Returns:
dict with "success", "output", and optionally "error"
"""
cmd = [
self.config.esptool_path,
"--port", port,
*args,
]
proc = None
try:
proc = await asyncio.create_subprocess_exec(
*cmd,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout=timeout)
output = (stdout or b"").decode(errors="replace") + (stderr or b"").decode(errors="replace")
if proc.returncode != 0:
return {"success": False, "error": output.strip()}
return {"success": True, "output": output}
except asyncio.TimeoutError:
if proc and proc.returncode is None:
proc.kill()
await proc.wait()
return {"success": False, "error": f"Timeout after {timeout}s"}
except FileNotFoundError:
return {
"success": False,
"error": f"esptool not found at {self.config.esptool_path}",
}
except Exception as e:
if proc and proc.returncode is None:
proc.kill()
await proc.wait()
return {"success": False, "error": str(e)}
def _register_tools(self) -> None:
"""Register flash management tools"""
@self.app.tool("esp_flash_firmware")
async def flash_firmware(
context: Context,
firmware_path: str,
port: str | None = None,
address: str = "0x0",
verify: bool = True,
) -> dict[str, Any]:
"""Flash firmware to ESP device.
Writes a binary firmware file to the device's flash memory using esptool.
Supports any port including socket:// URIs for QEMU virtual devices.
Args:
firmware_path: Path to the firmware binary (.bin) to flash
port: Serial port or socket:// URI (auto-detect if not specified)
address: Flash address to write to (hex string, default: "0x0").
Use partition offsets for non-firmware images (e.g. "0x290000" for LittleFS).
verify: Verify flash contents after writing (default: true)
"""
return await self._flash_firmware_impl(context, firmware_path, port, address, verify)
@self.app.tool("esp_flash_read")
async def flash_read(
context: Context,
output_path: str,
port: str | None = None,
start_address: str = "0x0",
size: str | None = None,
) -> dict[str, Any]:
"""Read flash memory contents to a file.
Reads raw bytes from flash and saves to the specified output path.
If size is not specified, reads the entire flash.
Args:
output_path: File path to save the flash contents
port: Serial port or socket:// URI (auto-detect if not specified)
start_address: Flash offset to start reading from (hex string, default: "0x0")
size: Number of bytes to read (hex or decimal string, reads all if not specified)
"""
return await self._flash_read_impl(context, output_path, port, start_address, size)
@self.app.tool("esp_flash_erase")
async def flash_erase(
context: Context,
port: str | None = None,
start_address: str = "0x0",
size: str | None = None,
) -> dict[str, Any]:
"""Erase flash memory regions.
Erases the entire flash when size is not given; otherwise erases the
specified region starting at start_address. Erased bytes read back as 0xFF.
Args:
port: Serial port or socket:// URI (auto-detect if not specified)
start_address: Flash offset to start erasing (hex string, default: "0x0")
size: Number of bytes to erase (hex or decimal string, erases all if not specified)
"""
return await self._flash_erase_impl(context, port, start_address, size)
@self.app.tool("esp_flash_backup")
async def flash_backup(
context: Context,
backup_path: str,
port: str | None = None,
include_bootloader: bool = True,
) -> dict[str, Any]:
"""Create complete flash backup to a file.
Reads the entire flash contents and saves to the specified path.
The resulting file can be restored with esp_flash_firmware.
Args:
backup_path: File path to save the flash backup
port: Serial port or socket:// URI (auto-detect if not specified)
include_bootloader: Start from address 0x0 to include bootloader (default: true)
"""
return await self._flash_backup_impl(context, backup_path, port, include_bootloader)
@self.app.tool("esp_flash_multi")
async def flash_multi(
context: Context,
files: list[dict],
port: str | None = None,
verify: bool = True,
compress: bool = True,
) -> dict[str, Any]:
"""Flash multiple binaries at different addresses in one operation.
Writes multiple binary files to flash in a single esptool invocation.
Faster than multiple separate flash operations since it connects once.
Common use cases:
- Flash bootloader + partition table + app in one shot
- Deploy complete firmware stack with filesystem image
Args:
files: List of dicts with "address" (hex string) and "path" (file path).
Example: [{"address": "0x0", "path": "bootloader.bin"},
{"address": "0x8000", "path": "partitions.bin"},
{"address": "0x10000", "path": "app.bin"}]
port: Serial port or socket:// URI (required)
verify: Verify flash contents after writing (default: true)
compress: Use compression for faster transfer (default: true)
"""
return await self._flash_multi_impl(context, files, port, verify, compress)
@self.app.tool("esp_verify_flash")
async def verify_flash(
context: Context,
firmware_path: str,
port: str | None = None,
address: str = "0x0",
) -> dict[str, Any]:
"""Verify flash contents match a file without re-flashing.
Reads flash memory at the specified address and compares against
the provided file. Useful for confirming successful flash operations
or checking if an update is needed.
Args:
firmware_path: Path to the binary file to compare against
port: Serial port or socket:// URI (required)
address: Flash address to verify from (hex string, default: "0x0")
"""
return await self._verify_flash_impl(context, firmware_path, port, address)
async def _flash_firmware_impl(
self,
context: Context,
firmware_path: str,
port: str | None,
address: str,
verify: bool,
) -> dict[str, Any]:
"""Write firmware to flash via esptool write-flash."""
fw_path = Path(firmware_path)
if not fw_path.exists():
return {"success": False, "error": f"Firmware file not found: {firmware_path}"}
if not port:
return {"success": False, "error": "Port is required (no auto-detect for flash operations)"}
start_time = time.time()
args = ["write-flash", address, str(fw_path)]
if not verify:
args.insert(1, "--no-verify")  # options follow the subcommand in esptool v5.x
result = await self._run_esptool(port, args, timeout=180.0)
if not result["success"]:
return {
"success": False,
"error": result["error"],
"port": port,
"firmware_path": firmware_path,
"address": address,
}
output = result["output"]
elapsed = round(time.time() - start_time, 1)
# Parse bytes written from output
bytes_written = 0
write_matches = re.findall(r"Wrote (\d+) bytes", output)
for match in write_matches:
bytes_written += int(match)
verified = "Hash of data verified" in output or "Verified" in output
return {
"success": True,
"port": port,
"firmware_path": firmware_path,
"address": address,
"firmware_size": fw_path.stat().st_size,
"bytes_written": bytes_written,
"verified": verified if verify else None,
"elapsed_seconds": elapsed,
}
async def _flash_read_impl(
self,
context: Context,
output_path: str,
port: str | None,
start_address: str,
size: str | None,
) -> dict[str, Any]:
"""Read flash contents via esptool read-flash."""
if not port:
return {"success": False, "error": "Port is required (no auto-detect for flash operations)"}
# Determine read size — if not specified, read entire flash (detect first)
if not size:
detect = await self._run_esptool(port, ["flash-id"], timeout=15.0)
if not detect["success"]:
return {"success": False, "error": f"Could not detect flash size: {detect['error']}"}
# Parse flash size from output
flash_size_match = re.search(r"Detected flash size:\s*(\d+)([KMG]B)", detect["output"])
if flash_size_match:
num = int(flash_size_match.group(1))
unit = flash_size_match.group(2)
multiplier = {"KB": 1024, "MB": 1024 * 1024, "GB": 1024 * 1024 * 1024}
size = str(num * multiplier.get(unit, 1))
else:
return {"success": False, "error": "Could not determine flash size. Specify size manually."}
# Ensure output directory exists
out = Path(output_path)
out.parent.mkdir(parents=True, exist_ok=True)
start_time = time.time()
result = await self._run_esptool(
port,
["read-flash", start_address, size, str(out)],
timeout=300.0,
)
if not result["success"]:
return {"success": False, "error": result["error"], "port": port}
elapsed = round(time.time() - start_time, 1)
return {
"success": True,
"port": port,
"output_path": str(out),
"start_address": start_address,
"bytes_read": out.stat().st_size if out.exists() else 0,
"elapsed_seconds": elapsed,
}
async def _flash_erase_impl(
self,
context: Context,
port: str | None,
start_address: str,
size: str | None,
) -> dict[str, Any]:
"""Erase flash via esptool erase-flash or erase-region."""
if not port:
return {"success": False, "error": "Port is required (no auto-detect for flash operations)"}
start_time = time.time()
if size:
# Erase specific region
result = await self._run_esptool(
port,
["erase-region", start_address, size],
timeout=60.0,
)
else:
# Erase entire flash
result = await self._run_esptool(
port,
["erase-flash"],
timeout=60.0,
)
if not result["success"]:
return {"success": False, "error": result["error"], "port": port}
elapsed = round(time.time() - start_time, 1)
return {
"success": True,
"port": port,
"erase_type": "region" if size else "full",
"start_address": start_address if size else "0x0",
"size": size,
"elapsed_seconds": elapsed,
}
async def _flash_backup_impl(
self,
context: Context,
backup_path: str,
port: str | None,
include_bootloader: bool,
) -> dict[str, Any]:
"""Read entire flash to create a backup file."""
start_address = "0x0" if include_bootloader else "0x1000"
return await self._flash_read_impl(
context,
output_path=backup_path,
port=port,
start_address=start_address,
size=None, # auto-detect full flash
)
async def _flash_multi_impl(
self,
context: Context,
files: list[dict],
port: str | None,
verify: bool,
compress: bool,
) -> dict[str, Any]:
"""Flash multiple binaries at different addresses via esptool write-flash."""
if not port:
return {"success": False, "error": "Port is required (no auto-detect for flash operations)"}
if not files:
return {"success": False, "error": "No files specified"}
# Validate all files exist and build address/path pairs
flash_args: list[str] = []
validated_files: list[dict] = []
total_size = 0
for entry in files:
if "address" not in entry or "path" not in entry:
return {
"success": False,
"error": f"Each file entry must have 'address' and 'path' keys. Got: {entry}",
}
fw_path = Path(entry["path"])
if not fw_path.exists():
return {"success": False, "error": f"File not found: {entry['path']}"}
file_size = fw_path.stat().st_size
total_size += file_size
validated_files.append({
"address": entry["address"],
"path": str(fw_path),
"size": file_size,
})
flash_args.extend([entry["address"], str(fw_path)])
start_time = time.time()
# Build esptool command: write-flash [options] addr1 file1 addr2 file2 ...
# Options come after the subcommand in esptool v5.x
args = ["write-flash"]
if compress:
args.append("--compress")
if not verify:
args.append("--no-verify")
args.extend(flash_args)
result = await self._run_esptool(port, args, timeout=300.0)
if not result["success"]:
return {
"success": False,
"error": result["error"],
"port": port,
"files": validated_files,
}
output = result["output"]
elapsed = round(time.time() - start_time, 1)
# Parse bytes written from output
bytes_written = 0
write_matches = re.findall(r"Wrote (\d+) bytes", output)
for match in write_matches:
bytes_written += int(match)
verified = "Hash of data verified" in output or "Verified" in output
return {
"success": True,
"port": port,
"files": validated_files,
"file_count": len(validated_files),
"total_size": total_size,
"bytes_written": bytes_written,
"compressed": compress,
"verified": verified if verify else None,
"elapsed_seconds": elapsed,
}
async def _verify_flash_impl(
self,
context: Context,
firmware_path: str,
port: str | None,
address: str,
) -> dict[str, Any]:
"""Verify flash contents match a file via esptool verify-flash."""
fw_path = Path(firmware_path)
if not fw_path.exists():
return {"success": False, "error": f"File not found: {firmware_path}"}
if not port:
return {"success": False, "error": "Port is required (no auto-detect for flash operations)"}
start_time = time.time()
file_size = fw_path.stat().st_size
# Use hyphenated command form (verify-flash not verify_flash)
result = await self._run_esptool(
port,
["verify-flash", address, str(fw_path)],
timeout=120.0,
)
elapsed = round(time.time() - start_time, 1)
# Check for verification success or failure in output
output = result.get("output", "") or result.get("error", "")
output_lower = output.lower()
verified = (
"verification successful" in output_lower
or "verify ok" in output_lower
or "digest matched" in output_lower
)
mismatch = (
"verify failed" in output_lower
or "mismatch" in output_lower
or "does not match" in output_lower
)
if result["success"] and verified:
return {
"success": True,
"verified": True,
"port": port,
"firmware_path": firmware_path,
"address": address,
"file_size": file_size,
"elapsed_seconds": elapsed,
}
elif mismatch:
# Extract mismatch details if available
return {
"success": True,
"verified": False,
"mismatch": True,
"port": port,
"firmware_path": firmware_path,
"address": address,
"file_size": file_size,
"elapsed_seconds": elapsed,
"details": output.strip() if output else None,
}
else:
return {
"success": False,
"error": result.get("error", "Verification failed"),
"port": port,
"firmware_path": firmware_path,
"address": address,
}
async def health_check(self) -> dict[str, Any]:
"""Component health check"""
return {"status": "healthy", "note": "Flash manager ready"}


@ -1,335 +0,0 @@
"""
OTA Manager Component
Handles Over-The-Air update operations including package creation,
deployment, rollback, and update management.
"""
import asyncio
import hashlib
import json
import logging
import time
import zipfile
from pathlib import Path
from typing import Any
from fastmcp import Context, FastMCP
from ..config import ESPToolServerConfig
logger = logging.getLogger(__name__)
class OTAManager:
"""ESP Over-The-Air update management"""
def __init__(self, app: FastMCP, config: ESPToolServerConfig) -> None:
self.app = app
self.config = config
self._register_tools()
async def _run_esptool(
self,
port: str,
args: list[str],
timeout: float = 30.0,
) -> dict[str, Any]:
"""Run esptool as an async subprocess."""
cmd = [self.config.esptool_path, "--port", port, *args]
proc = None
try:
proc = await asyncio.create_subprocess_exec(
*cmd,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout=timeout)
output = (stdout or b"").decode(errors="replace") + (stderr or b"").decode(errors="replace")
if proc.returncode != 0:
return {"success": False, "error": output.strip()[:500]}
return {"success": True, "output": output}
except asyncio.TimeoutError:
if proc and proc.returncode is None:
proc.kill()
await proc.wait()
return {"success": False, "error": f"Timeout after {timeout}s"}
except FileNotFoundError:
return {"success": False, "error": f"esptool not found at {self.config.esptool_path}"}
except Exception as e:
if proc and proc.returncode is None:
proc.kill()
await proc.wait()
return {"success": False, "error": str(e)}
def _register_tools(self) -> None:
"""Register OTA management tools"""
@self.app.tool("esp_ota_package_create")
async def create_ota_package(
context: Context, firmware_path: str, version: str, output_path: str
) -> dict[str, Any]:
"""Create OTA update package"""
return await self._package_create_impl(context, firmware_path, version, output_path)
@self.app.tool("esp_ota_deploy")
async def deploy_ota_update(
context: Context, package_path: str, target_url: str
) -> dict[str, Any]:
"""Deploy OTA update to device"""
return await self._deploy_impl(context, package_path, target_url)
@self.app.tool("esp_ota_rollback")
async def rollback_ota(context: Context, port: str | None = None) -> dict[str, Any]:
"""Rollback to previous firmware version"""
return await self._rollback_impl(context, port)
async def _package_create_impl(
self,
context: Context,
firmware_path: str,
version: str,
output_path: str,
) -> dict[str, Any]:
"""Create an OTA update package (zip with firmware + manifest).
The package contains:
- firmware.bin: The raw application binary
- manifest.json: Metadata (version, SHA-256, size, timestamp)
"""
fw = Path(firmware_path)
if not fw.exists():
return {"success": False, "error": f"Firmware file not found: {firmware_path}"}
fw_data = fw.read_bytes()
fw_sha256 = hashlib.sha256(fw_data).hexdigest()
manifest = {
"version": version,
"firmware_name": fw.name,
"firmware_size": len(fw_data),
"firmware_sha256": fw_sha256,
"created_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
}
out = Path(output_path)
try:
with zipfile.ZipFile(out, "w", compression=zipfile.ZIP_DEFLATED) as zf:
zf.writestr("firmware.bin", fw_data)
zf.writestr("manifest.json", json.dumps(manifest, indent=2))
except OSError as e:
return {"success": False, "error": f"Failed to create package: {e}"}
return {
"success": True,
"output_path": str(out),
"package_size_bytes": out.stat().st_size,
"manifest": manifest,
}
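The package format is just a zip containing `firmware.bin` plus a `manifest.json`. A round-trip sketch of the create-then-verify flow, built in memory rather than on disk (the firmware bytes are stand-ins; 0xE9 is the ESP image magic byte):

```python
import hashlib
import io
import json
import zipfile

fw_data = b"\xe9" + b"\x00" * 31  # stand-in firmware bytes for illustration
manifest = {
    "version": "1.2.3",
    "firmware_size": len(fw_data),
    "firmware_sha256": hashlib.sha256(fw_data).hexdigest(),
}

# Writer side: build the package, as _package_create_impl does.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("firmware.bin", fw_data)
    zf.writestr("manifest.json", json.dumps(manifest, indent=2))

# Reader side: extract and check the digest before deploying.
with zipfile.ZipFile(io.BytesIO(buf.getvalue()), "r") as zf:
    extracted = zf.read("firmware.bin")
    meta = json.loads(zf.read("manifest.json"))

assert hashlib.sha256(extracted).hexdigest() == meta["firmware_sha256"]
print(meta["version"], meta["firmware_size"])
```

Verifying the SHA-256 on the consumer side is what makes the manifest worth shipping: a deploy tool can reject a corrupted package before ever touching the device.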
async def _deploy_impl(
self,
context: Context,
package_path: str,
target_url: str,
) -> dict[str, Any]:
"""Deploy an OTA package to a device via HTTP POST.
Extracts firmware.bin from the package and POSTs it to the
device's OTA endpoint (e.g. http://192.168.1.100/ota/update).
The target device must be running an HTTP OTA server (like
esp_https_ota or a custom handler).
"""
pkg = Path(package_path)
if not pkg.exists():
return {"success": False, "error": f"Package not found: {package_path}"}
# Extract firmware from package
try:
with zipfile.ZipFile(pkg, "r") as zf:
if "firmware.bin" not in zf.namelist():
return {"success": False, "error": "Package missing firmware.bin"}
fw_data = zf.read("firmware.bin")
manifest = None
if "manifest.json" in zf.namelist():
manifest = json.loads(zf.read("manifest.json"))
except zipfile.BadZipFile:
return {"success": False, "error": "Invalid zip package"}
# POST firmware to device
# Using curl as an async subprocess since it's universally available
# and handles HTTP/HTTPS without Python dependency issues
import tempfile
with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as tmp:
tmp.write(fw_data)
tmp_path = tmp.name
try:
proc = await asyncio.create_subprocess_exec(
"curl",
"--silent",
"--show-error",
"--max-time", "120",
"--write-out", "%{http_code}",
"--output", "/dev/null",
"--data-binary", f"@{tmp_path}",
"--header", "Content-Type: application/octet-stream",
target_url,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout=130.0)
http_code = (stdout or b"").decode().strip()
curl_error = (stderr or b"").decode().strip()
if proc.returncode != 0:
return {
"success": False,
"error": f"HTTP request failed: {curl_error}",
"target_url": target_url,
}
status_ok = http_code.startswith("2")
result: dict[str, Any] = {
"success": status_ok,
"target_url": target_url,
"http_status": http_code,
"firmware_size_bytes": len(fw_data),
}
if manifest:
result["version"] = manifest.get("version")
if not status_ok:
result["error"] = f"Device returned HTTP {http_code}"
return result
except asyncio.TimeoutError:
return {"success": False, "error": "OTA deploy timed out (130s)", "target_url": target_url}
except FileNotFoundError:
return {"success": False, "error": "curl not found — required for OTA deploy"}
finally:
import os
try:
os.unlink(tmp_path)
except OSError:
pass
async def _rollback_impl(
self,
context: Context,
port: str | None,
) -> dict[str, Any]:
"""Rollback OTA by erasing the otadata partition.
When the otadata partition is erased (all 0xFF), the bootloader
falls back to the factory app or ota_0, effectively rolling back
to the first-flashed firmware. This works because the otadata
partition tracks which OTA slot is active.
For more precise control, use esp_partition_analyze to find the
otadata offset, then esp_flash_erase to clear just that region.
"""
if not port:
return {"success": False, "error": "Port is required for OTA rollback"}
# First, read the partition table to find the otadata partition
# We need the partition manager's analyze logic, but we can just
# read the partition table directly with esptool
import struct
import tempfile
with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as tmp:
tmp_path = tmp.name
try:
# Read partition table from 0x8000
result = await self._run_esptool(
port,
["read-flash", "0x8000", "0xC00", tmp_path],
timeout=60.0,
)
if not result["success"]:
return {"success": False, "error": f"Cannot read partition table: {result['error']}", "port": port}
raw = Path(tmp_path).read_bytes()
# Find otadata partition (type=data/0x01, subtype=ota/0x00)
otadata_offset = None
otadata_size = None
for i in range(0, len(raw) - 32 + 1, 32):
entry = raw[i : i + 32]
magic = struct.unpack_from("<H", entry, 0)[0]
if magic == 0xFFFF:
break
if magic != 0x50AA:
continue
ptype = entry[2]
subtype = entry[3]
# data type (0x01) + ota subtype (0x00)
if ptype == 0x01 and subtype == 0x00:
otadata_offset = struct.unpack_from("<I", entry, 4)[0]
otadata_size = struct.unpack_from("<I", entry, 8)[0]
break
finally:
import os
try:
os.unlink(tmp_path)
except OSError:
pass
if otadata_offset is None:
return {
"success": False,
"error": "No otadata partition found — device may not use OTA layout",
"port": port,
}
# Erase the otadata region
result = await self._run_esptool(
port,
[
"erase-region",
f"0x{otadata_offset:x}",
f"0x{otadata_size:x}",
],
timeout=30.0,
)
if not result["success"]:
return {
"success": False,
"error": f"Failed to erase otadata: {result['error']}",
"port": port,
}
return {
"success": True,
"port": port,
"otadata_offset": f"0x{otadata_offset:x}",
"otadata_size": f"0x{otadata_size:x}",
"message": (
"OTA data partition erased. On next boot, the device will "
"fall back to the factory app or ota_0 slot."
),
}
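The rollback logic above walks 32-byte partition-table entries looking for magic 0x50AA and type/subtype 0x01/0x00. Decoding one synthetic entry shows the layout being relied on (the entry bytes are fabricated for illustration; offsets follow the ESP-IDF binary partition-table format):

```python
import struct

# Fabricated 32-byte otadata entry: magic 0x50AA, type=data (0x01),
# subtype=ota (0x00), offset 0xD000, size 0x2000, 16-byte label, 4-byte flags.
entry = struct.pack(
    "<HBBII16sI",
    0x50AA, 0x01, 0x00, 0xD000, 0x2000,
    b"otadata".ljust(16, b"\x00"), 0,
)
assert len(entry) == 32

# Same field extraction as _rollback_impl.
magic = struct.unpack_from("<H", entry, 0)[0]
ptype, subtype = entry[2], entry[3]
offset = struct.unpack_from("<I", entry, 4)[0]
size = struct.unpack_from("<I", entry, 8)[0]

print(hex(magic), ptype, subtype, hex(offset), hex(size))
```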
async def health_check(self) -> dict[str, Any]:
"""Component health check"""
return {"status": "healthy", "note": "OTA manager ready"}


@ -1,446 +0,0 @@
"""
Partition Manager Component
Handles ESP partition table operations: generating OTA-capable tables,
custom partition layouts, and reading/analyzing partition tables from
connected devices.
"""
import asyncio
import logging
from pathlib import Path
from typing import Any
from fastmcp import Context, FastMCP
from ..config import ESPToolServerConfig
logger = logging.getLogger(__name__)
# ESP partition types and subtypes
PARTITION_TYPES = {
"app": 0x00,
"data": 0x01,
}
APP_SUBTYPES = {
"factory": 0x00,
"ota_0": 0x10,
"ota_1": 0x11,
"ota_2": 0x12,
"ota_3": 0x13,
"test": 0x20,
}
DATA_SUBTYPES = {
"ota": 0x00,
"phy": 0x01,
"nvs": 0x02,
"coredump": 0x03,
"nvs_keys": 0x04,
"efuse": 0x05,
"spiffs": 0x82,
"littlefs": 0x83,
"fat": 0x81,
}
# Size multipliers
_SIZE_MULT = {"K": 1024, "M": 1024 * 1024}
def _parse_size_spec(spec: str) -> int:
"""Parse a partition size like '1MB', '64K', '0x10000' into bytes."""
spec = spec.strip().upper()
for suffix, mult in _SIZE_MULT.items():
if spec.endswith(suffix + "B"):
return int(spec[: -len(suffix) - 1]) * mult
if spec.endswith(suffix):
return int(spec[: -len(suffix)]) * mult
return int(spec, 0)
def _format_size(size_bytes: int) -> str:
"""Format byte count as human-readable size."""
if size_bytes >= 1024 * 1024 and size_bytes % (1024 * 1024) == 0:
return f"{size_bytes // (1024 * 1024)}MB"
if size_bytes >= 1024 and size_bytes % 1024 == 0:
return f"{size_bytes // 1024}KB"
return f"{size_bytes}B"
class PartitionManager:
"""ESP partition table management"""
def __init__(self, app: FastMCP, config: ESPToolServerConfig) -> None:
self.app = app
self.config = config
self._register_tools()
async def _run_esptool(
self,
port: str,
args: list[str],
timeout: float = 30.0,
) -> dict[str, Any]:
"""Run esptool as an async subprocess."""
cmd = [self.config.esptool_path, "--port", port, *args]
proc = None
try:
proc = await asyncio.create_subprocess_exec(
*cmd,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout=timeout)
output = (stdout or b"").decode() + (stderr or b"").decode()
if proc.returncode != 0:
return {"success": False, "error": output.strip()[:500]}
return {"success": True, "output": output}
except asyncio.TimeoutError:
if proc and proc.returncode is None:
proc.kill()
await proc.wait()
return {"success": False, "error": f"Timeout after {timeout}s"}
except FileNotFoundError:
return {"success": False, "error": f"esptool not found at {self.config.esptool_path}"}
except Exception as e:
if proc and proc.returncode is None:
proc.kill()
await proc.wait()
return {"success": False, "error": str(e)}
def _register_tools(self) -> None:
"""Register partition management tools"""
@self.app.tool("esp_partition_create_ota")
async def create_ota_partition(
context: Context,
flash_size: str = "4MB",
app_size: str = "1MB",
) -> dict[str, Any]:
"""Create OTA-enabled partition table.
Generates a partition table CSV with two OTA app slots, NVS storage,
OTA data partition, and PHY calibration data. The layout follows
Espressif's recommended OTA structure.
The generated CSV can be converted to binary with gen_esp32part.py
(from ESP-IDF) and flashed to the partition table offset (typically 0x8000).
Args:
flash_size: Total flash size (e.g. "4MB", "8MB", "16MB", default: "4MB")
app_size: Size for each OTA app slot (e.g. "1MB", "1536K", default: "1MB")
"""
return await self._create_ota_impl(context, flash_size, app_size)
@self.app.tool("esp_partition_custom")
async def create_custom_partition(
context: Context,
partition_config: dict[str, Any],
) -> dict[str, Any]:
"""Create custom partition table from a configuration dict.
Accepts a list of partition entries and generates a valid ESP
partition table CSV. Each entry needs: name, type, subtype, size.
Offset is auto-calculated if omitted.
Example partition_config:
{
"partitions": [
{"name": "nvs", "type": "data", "subtype": "nvs", "size": "24K"},
{"name": "factory", "type": "app", "subtype": "factory", "size": "1MB"},
{"name": "storage", "type": "data", "subtype": "spiffs", "size": "512K"}
]
}
Args:
partition_config: Dict with "partitions" key containing list of entries
"""
return await self._create_custom_impl(context, partition_config)
@self.app.tool("esp_partition_analyze")
async def analyze_partitions(
context: Context,
port: str | None = None,
) -> dict[str, Any]:
"""Analyze current partition table on a connected ESP device.
Reads the partition table from flash (at offset 0x8000, 0xC00 bytes)
and parses the binary format into a human-readable table. Shows
partition names, types, offsets, sizes, and flags.
Works with physical devices and QEMU virtual devices.
Args:
port: Serial port or socket:// URI (required)
"""
return await self._analyze_impl(context, port)
async def _create_ota_impl(
self,
context: Context,
flash_size: str,
app_size: str,
) -> dict[str, Any]:
"""Generate an OTA-capable partition table."""
try:
total_bytes = _parse_size_spec(flash_size)
app_bytes = _parse_size_spec(app_size)
except (ValueError, TypeError) as e:
return {"success": False, "error": f"Invalid size: {e}"}
# Standard layout:
# 0x9000 - nvs (24KB)
# 0xf000 - otadata (8KB)
# 0x11000 - phy_init (4KB)
# 0x12000 - ota_0 (app_size)
# ota_0 + app_size - ota_1 (app_size)
nvs_size = 24 * 1024
otadata_size = 8 * 1024
phy_size = 4 * 1024
# Check it all fits (partition table at 0x8000 + 0x1000)
overhead = 0x9000 + nvs_size + otadata_size + phy_size # Before first app
needed = overhead + (2 * app_bytes)
if needed > total_bytes:
return {
"success": False,
"error": (
f"Layout requires {_format_size(needed)} but flash is {flash_size}. "
f"Reduce app_size or increase flash_size."
),
}
partitions = [
("nvs", "data", "nvs", "0x9000", _format_size(nvs_size)),
("otadata", "data", "ota", f"0x{0x9000 + nvs_size:x}", _format_size(otadata_size)),
("phy_init", "data", "phy", f"0x{0x9000 + nvs_size + otadata_size:x}", _format_size(phy_size)),
("ota_0", "app", "ota_0", f"0x{overhead:x}", _format_size(app_bytes)),
("ota_1", "app", "ota_1", f"0x{overhead + app_bytes:x}", _format_size(app_bytes)),
]
# Remaining space for storage
used = overhead + (2 * app_bytes)
remaining = total_bytes - used
if remaining >= 4096:
partitions.append(
("storage", "data", "spiffs", f"0x{used:x}", _format_size(remaining))
)
# Generate CSV
csv_lines = ["# ESP-IDF Partition Table (OTA layout)", "# Name, Type, SubType, Offset, Size, Flags"]
for name, ptype, subtype, offset, size in partitions:
csv_lines.append(f"{name}, {ptype}, {subtype}, {offset}, {size},")
csv_text = "\n".join(csv_lines) + "\n"
return {
"success": True,
"flash_size": flash_size,
"app_size": app_size,
"partition_csv": csv_text,
"partitions": [
{"name": p[0], "type": p[1], "subtype": p[2], "offset": p[3], "size": p[4]}
for p in partitions
],
"space_remaining": _format_size(remaining) if remaining >= 4096 else "0",
"note": "Convert this CSV to binary with gen_esp32part.py, then flash it to offset 0x8000",
}
async def _create_custom_impl(
self,
context: Context,
partition_config: dict[str, Any],
) -> dict[str, Any]:
"""Generate a custom partition table from config."""
partitions_input = partition_config.get("partitions", [])
if not partitions_input:
return {"success": False, "error": "partition_config must have a 'partitions' list"}
# Auto-calculate offsets starting after partition table (0x9000)
current_offset = 0x9000
partitions = []
errors = []
for i, entry in enumerate(partitions_input):
name = entry.get("name")
ptype = entry.get("type")
subtype = entry.get("subtype")
size_str = entry.get("size")
if not all([name, ptype, subtype, size_str]):
errors.append(f"Partition {i}: requires name, type, subtype, size")
continue
# Validate type
if ptype not in PARTITION_TYPES:
errors.append(f"Partition '{name}': invalid type '{ptype}' (use: {list(PARTITION_TYPES.keys())})")
continue
# Validate subtype
valid_subtypes = APP_SUBTYPES if ptype == "app" else DATA_SUBTYPES
if subtype not in valid_subtypes:
errors.append(f"Partition '{name}': invalid subtype '{subtype}' (use: {list(valid_subtypes.keys())})")
continue
try:
size_bytes = _parse_size_spec(size_str)
except (ValueError, TypeError):
errors.append(f"Partition '{name}': invalid size '{size_str}'")
continue
# Use explicit offset if provided, otherwise auto-calculate
offset = entry.get("offset")
if offset:
current_offset = int(offset, 0) if isinstance(offset, str) else offset
# App partitions must be 64KB aligned
if ptype == "app" and current_offset % 0x10000 != 0:
current_offset = (current_offset + 0xFFFF) & ~0xFFFF
partitions.append({
"name": name,
"type": ptype,
"subtype": subtype,
"offset": f"0x{current_offset:x}",
"size": _format_size(size_bytes),
"size_bytes": size_bytes,
})
current_offset += size_bytes
if errors:
return {"success": False, "errors": errors}
# Generate CSV
csv_lines = ["# ESP-IDF Partition Table (custom layout)", "# Name, Type, SubType, Offset, Size, Flags"]
for p in partitions:
csv_lines.append(f"{p['name']}, {p['type']}, {p['subtype']}, {p['offset']}, {p['size']},")
csv_text = "\n".join(csv_lines) + "\n"
return {
"success": True,
"partition_csv": csv_text,
"partitions": [{k: v for k, v in p.items() if k != "size_bytes"} for p in partitions],
"total_size": _format_size(sum(p["size_bytes"] for p in partitions)),
"note": "Convert this CSV to binary with gen_esp32part.py, then flash it to offset 0x8000",
}
async def _analyze_impl(
self,
context: Context,
port: str | None,
) -> dict[str, Any]:
"""Read and parse partition table from a connected device."""
if not port:
return {"success": False, "error": "Port is required for partition analysis"}
import tempfile
# Partition table is at 0x8000, max size 0xC00 (3KB)
with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as tmp:
tmp_path = tmp.name
try:
result = await self._run_esptool(
port,
["read-flash", "0x8000", "0xC00", tmp_path],
timeout=60.0,
)
if not result["success"]:
return {"success": False, "error": result["error"], "port": port}
raw = Path(tmp_path).read_bytes()
partitions = self._parse_partition_table_binary(raw)
if not partitions:
return {
"success": True,
"port": port,
"partitions": [],
"note": "No valid partition entries found (flash may be blank or erased)",
}
return {
"success": True,
"port": port,
"partition_count": len(partitions),
"partitions": partitions,
}
finally:
import os
try:
os.unlink(tmp_path)
except OSError:
pass
def _parse_partition_table_binary(self, raw: bytes) -> list[dict[str, Any]]:
"""Parse ESP32 binary partition table format.
Each entry is 32 bytes:
- 2 bytes: magic (bytes 0xAA 0x50 on flash, read as 0x50AA little-endian)
- 1 byte: type
- 1 byte: subtype
- 4 bytes: offset (LE)
- 4 bytes: size (LE)
- 16 bytes: name (null-terminated)
- 4 bytes: flags
"""
import struct
entry_size = 32
magic_expected = 0x50AA # Little-endian
# Reverse lookup tables
type_names = {v: k for k, v in PARTITION_TYPES.items()}
app_subtype_names = {v: k for k, v in APP_SUBTYPES.items()}
data_subtype_names = {v: k for k, v in DATA_SUBTYPES.items()}
partitions = []
for i in range(0, len(raw) - entry_size + 1, entry_size):
entry = raw[i : i + entry_size]
magic = struct.unpack_from("<H", entry, 0)[0]
if magic == 0xFFFF:
# End of table (erased flash)
break
if magic != magic_expected:
continue
ptype = entry[2]
subtype = entry[3]
offset = struct.unpack_from("<I", entry, 4)[0]
size = struct.unpack_from("<I", entry, 8)[0]
name = entry[12:28].split(b"\x00")[0].decode("ascii", errors="replace")
flags = struct.unpack_from("<I", entry, 28)[0]
type_name = type_names.get(ptype, f"0x{ptype:02x}")
if ptype == 0x00:
subtype_name = app_subtype_names.get(subtype, f"0x{subtype:02x}")
elif ptype == 0x01:
subtype_name = data_subtype_names.get(subtype, f"0x{subtype:02x}")
else:
subtype_name = f"0x{subtype:02x}"
partitions.append({
"name": name,
"type": type_name,
"subtype": subtype_name,
"offset": f"0x{offset:x}",
"size": _format_size(size),
"size_bytes": size,
"encrypted": bool(flags & 1),
})
return partitions
async def health_check(self) -> dict[str, Any]:
"""Component health check"""
return {"status": "healthy", "note": "Partition manager ready"}
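The size-spec grammar accepted by `_parse_size_spec` ("1MB", "64K", raw hex or decimal) can be exercised in isolation; a condensed standalone copy of the helper:

```python
_SIZE_MULT = {"K": 1024, "M": 1024 * 1024}

def parse_size_spec(spec: str) -> int:
    # Accepts "1MB"/"1M", "64KB"/"64K", "0x10000", or plain decimal,
    # mirroring the module-level _parse_size_spec helper above.
    spec = spec.strip().upper()
    for suffix, mult in _SIZE_MULT.items():
        if spec.endswith(suffix + "B"):
            return int(spec[: -len(suffix) - 1]) * mult
        if spec.endswith(suffix):
            return int(spec[: -len(suffix)]) * mult
    return int(spec, 0)  # base 0: handles "0x..." hex and plain decimal

print(parse_size_spec("1MB"), parse_size_spec("64K"), parse_size_spec("0x10000"))
# 1048576 65536 65536
```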


@@ -1,403 +0,0 @@
"""
Production Tools Component
Provides factory programming, batch operations, quality control,
and production line integration tools.
"""
import asyncio
import logging
import re
import time
from pathlib import Path
from typing import Any
from fastmcp import Context, FastMCP
from ..config import ESPToolServerConfig
logger = logging.getLogger(__name__)
class ProductionTools:
"""ESP production and factory programming tools"""
def __init__(self, app: FastMCP, config: ESPToolServerConfig) -> None:
self.app = app
self.config = config
self._register_tools()
async def _run_esptool(
self,
port: str,
args: list[str],
timeout: float = 30.0,
) -> dict[str, Any]:
"""Run esptool as an async subprocess."""
cmd = [self.config.esptool_path, "--port", port, *args]
proc = None
try:
proc = await asyncio.create_subprocess_exec(
*cmd,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout=timeout)
output = (stdout or b"").decode() + (stderr or b"").decode()
if proc.returncode != 0:
return {"success": False, "error": output.strip()[:500]}
return {"success": True, "output": output}
except asyncio.TimeoutError:
if proc and proc.returncode is None:
proc.kill()
await proc.wait()
return {"success": False, "error": f"Timeout after {timeout}s"}
except FileNotFoundError:
return {"success": False, "error": f"esptool not found at {self.config.esptool_path}"}
except Exception as e:
if proc and proc.returncode is None:
proc.kill()
await proc.wait()
return {"success": False, "error": str(e)}
def _register_tools(self) -> None:
"""Register production tools"""
@self.app.tool("esp_factory_program")
async def factory_program(
context: Context, program_config: dict[str, Any], port: str | None = None
) -> dict[str, Any]:
"""Program device for factory deployment"""
return await self._factory_program_impl(context, program_config, port)
@self.app.tool("esp_batch_program")
async def batch_program(
context: Context, device_list: list[str], firmware_path: str
) -> dict[str, Any]:
"""Program multiple devices in batch"""
return await self._batch_program_impl(context, device_list, firmware_path)
@self.app.tool("esp_quality_control")
async def quality_control(
context: Context, port: str | None = None, test_suite: str = "basic"
) -> dict[str, Any]:
"""Run quality control tests"""
return await self._quality_control_impl(context, port, test_suite)
async def _factory_program_impl(
self,
context: Context,
program_config: dict[str, Any],
port: str | None,
) -> dict[str, Any]:
"""Factory-program a device: erase → flash → verify.
program_config should contain:
{
"firmware_path": "/path/to/firmware.bin",
"address": "0x0", # optional, default "0x0"
"erase_before": true, # optional, default true
"verify": true, # optional, default true
"partition_table": "/path/to/partitions.bin", # optional
"partition_table_address": "0x8000", # optional
"bootloader": "/path/to/bootloader.bin", # optional
"bootloader_address": "0x1000", # optional
}
"""
if not port:
return {"success": False, "error": "Port is required for factory programming"}
firmware_path = program_config.get("firmware_path")
if not firmware_path:
return {"success": False, "error": "program_config must include 'firmware_path'"}
fw = Path(firmware_path)
if not fw.exists():
return {"success": False, "error": f"Firmware not found: {firmware_path}"}
erase_before = program_config.get("erase_before", True)
verify = program_config.get("verify", True)
address = program_config.get("address", "0x0")
steps: list[dict[str, Any]] = []
t_start = time.time()
# Step 1: Erase flash
if erase_before:
result = await self._run_esptool(port, ["erase-flash"], timeout=60.0)
steps.append({
"step": "erase_flash",
"success": result["success"],
"error": result.get("error"),
})
if not result["success"]:
return {
"success": False,
"error": f"Erase failed: {result['error']}",
"steps": steps,
"port": port,
}
# Step 2: Flash bootloader (if provided)
bootloader = program_config.get("bootloader")
if bootloader:
bl_path = Path(bootloader)
if not bl_path.exists():
return {"success": False, "error": f"Bootloader not found: {bootloader}"}
bl_addr = program_config.get("bootloader_address", "0x1000")
result = await self._run_esptool(
port,
["write-flash", bl_addr, bootloader],
timeout=120.0,
)
steps.append({
"step": "flash_bootloader",
"address": bl_addr,
"success": result["success"],
"error": result.get("error"),
})
if not result["success"]:
return {
"success": False,
"error": f"Bootloader flash failed: {result['error']}",
"steps": steps,
"port": port,
}
# Step 3: Flash partition table (if provided)
partition_table = program_config.get("partition_table")
if partition_table:
pt_path = Path(partition_table)
if not pt_path.exists():
return {"success": False, "error": f"Partition table not found: {partition_table}"}
pt_addr = program_config.get("partition_table_address", "0x8000")
result = await self._run_esptool(
port,
["write-flash", pt_addr, partition_table],
timeout=120.0,
)
steps.append({
"step": "flash_partition_table",
"address": pt_addr,
"success": result["success"],
"error": result.get("error"),
})
if not result["success"]:
return {
"success": False,
"error": f"Partition table flash failed: {result['error']}",
"steps": steps,
"port": port,
}
# Step 4: Flash main firmware
write_args = ["write-flash"]
if verify:
write_args.append("--verify")
write_args.extend([address, firmware_path])
result = await self._run_esptool(port, write_args, timeout=300.0)
steps.append({
"step": "flash_firmware",
"address": address,
"success": result["success"],
"error": result.get("error"),
})
if not result["success"]:
return {
"success": False,
"error": f"Firmware flash failed: {result['error']}",
"steps": steps,
"port": port,
}
elapsed = round(time.time() - t_start, 2)
return {
"success": True,
"port": port,
"steps": steps,
"total_time_seconds": elapsed,
"firmware_path": firmware_path,
"firmware_size_bytes": fw.stat().st_size,
}
async def _batch_program_impl(
self,
context: Context,
device_list: list[str],
firmware_path: str,
) -> dict[str, Any]:
"""Program multiple devices in parallel.
Each device gets the same firmware flashed at 0x0 with erase + verify.
Devices are programmed concurrently using asyncio.gather.
"""
if not device_list:
return {"success": False, "error": "device_list is empty"}
fw = Path(firmware_path)
if not fw.exists():
return {"success": False, "error": f"Firmware not found: {firmware_path}"}
t_start = time.time()
async def program_one(port: str) -> dict[str, Any]:
"""Program a single device."""
config = {
"firmware_path": firmware_path,
"erase_before": True,
"verify": True,
}
return await self._factory_program_impl(context, config, port)
# Run all programming tasks concurrently
results = await asyncio.gather(
*[program_one(port) for port in device_list],
return_exceptions=True,
)
device_results = []
succeeded = 0
for port, result in zip(device_list, results, strict=True):
if isinstance(result, Exception):
device_results.append({
"port": port,
"success": False,
"error": str(result),
})
else:
device_results.append({
"port": port,
"success": result.get("success", False),
"error": result.get("error"),
"time_seconds": result.get("total_time_seconds"),
})
if result.get("success"):
succeeded += 1
elapsed = round(time.time() - t_start, 2)
return {
"success": succeeded == len(device_list),
"total_devices": len(device_list),
"succeeded": succeeded,
"failed": len(device_list) - succeeded,
"total_time_seconds": elapsed,
"firmware_path": firmware_path,
"devices": device_results,
}
async def _quality_control_impl(
self,
context: Context,
port: str | None,
test_suite: str,
) -> dict[str, Any]:
"""Run quality control checks on a device.
Test suites:
- "basic": chip-id, flash-id, read-mac (fast verification)
- "extended": basic + flash read/verify + memory dump check
"""
if not port:
return {"success": False, "error": "Port is required for quality control"}
tests: list[dict[str, Any]] = []
t_start = time.time()
# Test 1: Chip identification
result = await self._run_esptool(port, ["chip-id"], timeout=15.0)
chip_info: dict[str, Any] = {"test": "chip_identification", "success": result["success"]}
if result["success"]:
output = result["output"]
chip_match = re.search(r"Chip is (\S+)", output)
id_match = re.search(r"Chip ID:\s*(0x[0-9a-fA-F]+)", output)
chip_info["chip"] = chip_match.group(1) if chip_match else "unknown"
chip_info["chip_id"] = id_match.group(1) if id_match else "unknown"
else:
chip_info["error"] = result.get("error", "")[:200]
tests.append(chip_info)
# Test 2: Flash identification
result = await self._run_esptool(port, ["flash-id"], timeout=15.0)
flash_info: dict[str, Any] = {"test": "flash_identification", "success": result["success"]}
if result["success"]:
output = result["output"]
mfr_match = re.search(r"Manufacturer:\s*(0x[0-9a-fA-F]+)", output)
size_match = re.search(r"Detected flash size:\s*(\S+)", output)
flash_info["manufacturer"] = mfr_match.group(1) if mfr_match else "unknown"
flash_info["flash_size"] = size_match.group(1) if size_match else "unknown"
else:
flash_info["error"] = result.get("error", "")[:200]
tests.append(flash_info)
# Test 3: MAC address
result = await self._run_esptool(port, ["read-mac"], timeout=15.0)
mac_info: dict[str, Any] = {"test": "mac_address", "success": result["success"]}
if result["success"]:
mac_match = re.search(r"MAC:\s*([0-9a-fA-F:]+)", result["output"])
mac_info["mac"] = mac_match.group(1) if mac_match else "unknown"
else:
mac_info["error"] = result.get("error", "")[:200]
tests.append(mac_info)
# Extended tests
if test_suite == "extended":
# Test 4: Read first 4KB of flash (checks flash connectivity)
import tempfile
with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as tmp:
tmp_path = tmp.name
try:
result = await self._run_esptool(
port,
["read-flash", "0x0", "4096", tmp_path],
timeout=60.0,
)
read_info: dict[str, Any] = {"test": "flash_read_4kb", "success": result["success"]}
if result["success"]:
data = Path(tmp_path).read_bytes()
read_info["bytes_read"] = len(data)
# Check if flash is all 0xFF (erased) or has data
non_ff = sum(1 for b in data if b != 0xFF)
read_info["has_data"] = non_ff > 0
read_info["non_erased_bytes"] = non_ff
else:
read_info["error"] = result.get("error", "")[:200]
tests.append(read_info)
finally:
import os
try:
os.unlink(tmp_path)
except OSError:
pass
elapsed = round(time.time() - t_start, 2)
# Determine overall pass/fail
passed = sum(1 for t in tests if t["success"])
all_passed = passed == len(tests)
return {
"success": True,
"port": port,
"test_suite": test_suite,
"verdict": "PASS" if all_passed else "FAIL",
"tests_run": len(tests),
"tests_passed": passed,
"tests_failed": len(tests) - passed,
"total_time_seconds": elapsed,
"tests": tests,
}
async def health_check(self) -> dict[str, Any]:
"""Component health check"""
return {"status": "healthy", "note": "Production tools ready"}
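The batch-programming fan-out above comes down to asyncio.gather with return_exceptions=True. A self-contained sketch of that pattern; program_one here is a dummy stand-in for the real per-device flashing coroutine and only simulates success:

```python
import asyncio

async def program_one(port: str) -> dict:
    # Dummy stand-in for _factory_program_impl: pretend flashing succeeded.
    await asyncio.sleep(0)
    return {"port": port, "success": True}

async def batch(ports: list[str]) -> dict:
    results = await asyncio.gather(
        *[program_one(p) for p in ports],
        return_exceptions=True,  # one failing port must not abort the rest
    )
    succeeded = sum(
        1 for r in results
        if not isinstance(r, BaseException) and r.get("success")
    )
    return {"total": len(ports), "succeeded": succeeded}

summary = asyncio.run(batch(["/dev/ttyUSB0", "/dev/ttyUSB1"]))
print(summary)  # {'total': 2, 'succeeded': 2}
```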


@@ -1,435 +0,0 @@
"""
Security Manager Component
Handles ESP security features including eFuse management, flash encryption
status, and security auditing. Operations shell out to esptool/espefuse
as async subprocesses.
"""
import asyncio
import logging
import re
from typing import Any
from fastmcp import Context, FastMCP
from ..config import ESPToolServerConfig
logger = logging.getLogger(__name__)
class SecurityManager:
"""ESP security features management"""
def __init__(self, app: FastMCP, config: ESPToolServerConfig) -> None:
self.app = app
self.config = config
self._register_tools()
async def _run_cmd(
self,
cmd: list[str],
timeout: float = 30.0,
) -> dict[str, Any]:
"""Run a CLI command as an async subprocess.
Returns:
dict with "success", "output", and optionally "error"
"""
proc = None
try:
proc = await asyncio.create_subprocess_exec(
*cmd,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout=timeout)
output = (stdout or b"").decode() + (stderr or b"").decode()
if proc.returncode != 0:
return {"success": False, "error": output.strip()[:500]}
return {"success": True, "output": output}
except asyncio.TimeoutError:
if proc and proc.returncode is None:
proc.kill()
await proc.wait()
return {"success": False, "error": f"Timeout after {timeout}s"}
except FileNotFoundError:
return {"success": False, "error": f"Command not found: {cmd[0]}"}
except Exception as e:
if proc and proc.returncode is None:
proc.kill()
await proc.wait()
return {"success": False, "error": str(e)}
def _espefuse_cmd(self, port: str, args: list[str]) -> list[str]:
"""Build an espefuse command list."""
return ["espefuse", "--port", port, *args]
def _esptool_cmd(self, port: str, args: list[str]) -> list[str]:
"""Build an esptool command list."""
return [self.config.esptool_path, "--port", port, *args]
def _register_tools(self) -> None:
"""Register security management tools"""
@self.app.tool("esp_security_audit")
async def security_audit(
context: Context,
port: str | None = None,
) -> dict[str, Any]:
"""Perform comprehensive security audit of an ESP device.
Connects to the device and gathers security-relevant information:
chip identity, flash encryption status, secure boot state, and
eFuse summary. Returns a structured report suitable for evaluating
the device's security posture.
Requires a connected device (physical or QEMU via socket:// URI).
Args:
port: Serial port or socket:// URI (required)
"""
return await self._security_audit_impl(context, port)
@self.app.tool("esp_enable_flash_encryption")
async def enable_flash_encryption(
context: Context,
port: str | None = None,
key_file: str | None = None,
) -> dict[str, Any]:
"""Enable flash encryption with optional key.
Checks current flash encryption status via eFuse summary. If
encryption is already enabled, reports the current state. Actual
eFuse burning for flash encryption requires esp_efuse_burn with
specific eFuse names (FLASH_CRYPT_CNT, etc.); this tool provides
guidance and status checking.
WARNING: Flash encryption is a one-way operation on real hardware.
Test thoroughly on QEMU first.
Args:
port: Serial port or socket:// URI (required)
key_file: Path to encryption key file (for reference only)
"""
return await self._flash_encryption_impl(context, port, key_file)
@self.app.tool("esp_efuse_read")
async def read_efuse(
context: Context,
port: str | None = None,
efuse_name: str | None = None,
) -> dict[str, Any]:
"""Read eFuse values from an ESP device.
Without efuse_name: returns full human-readable eFuse summary
(espefuse summary). With efuse_name: returns that specific eFuse's
value parsed from the summary.
eFuses are one-time-programmable bits that control chip security,
MAC address, calibration data, and more. Reading is non-destructive.
Args:
port: Serial port or socket:// URI (required)
efuse_name: Specific eFuse to read (e.g. "MAC", "FLASH_CRYPT_CNT").
If omitted, returns full summary.
"""
return await self._efuse_read_impl(context, port, efuse_name)
@self.app.tool("esp_efuse_burn")
async def burn_efuse(
context: Context,
efuse_name: str,
value: str,
port: str | None = None,
) -> dict[str, Any]:
"""Burn eFuse (DANGEROUS - requires confirmation).
Permanently programs an eFuse bit field on the ESP device. This
operation is IRREVERSIBLE on real hardware: burned bits cannot be
reset. Safe to test on QEMU virtual devices (eFuses reset when
instance is recreated).
Common eFuses: FLASH_CRYPT_CNT, ABS_DONE_0, JTAG_DISABLE,
DISABLE_DL_ENCRYPT, DISABLE_DL_DECRYPT.
Uses --do-not-confirm flag since confirmation is handled at the
MCP client level.
Args:
efuse_name: Name of the eFuse to burn (e.g. "JTAG_DISABLE")
value: Value to burn (e.g. "1", "0x1")
port: Serial port or socket:// URI (required)
"""
return await self._efuse_burn_impl(context, efuse_name, value, port)
async def _security_audit_impl(
self,
context: Context,
port: str | None,
) -> dict[str, Any]:
"""Gather security-relevant info from the device."""
if not port:
return {"success": False, "error": "Port is required for security audit"}
report: dict[str, Any] = {"port": port}
# 1. Get chip info and security info from esptool
security_result = await self._run_cmd(
self._esptool_cmd(port, ["get-security-info"]),
timeout=15.0,
)
if security_result["success"]:
report["security_info"] = self._parse_security_info(security_result["output"])
else:
# get-security-info may not be supported on all chips
report["security_info"] = {"note": security_result["error"]}
# 2. Get chip ID
chip_result = await self._run_cmd(
self._esptool_cmd(port, ["chip-id"]),
timeout=15.0,
)
if chip_result["success"]:
chip_match = re.search(r"Chip ID:\s*(0x[0-9a-fA-F]+)", chip_result["output"])
if chip_match:
report["chip_id"] = chip_match.group(1)
# 3. Get eFuse summary for security-relevant fuses
efuse_result = await self._run_cmd(
self._espefuse_cmd(port, ["summary"]),
timeout=15.0,
)
if efuse_result["success"]:
parsed = self._parse_efuse_summary(efuse_result["output"])
report["efuse_summary"] = parsed
# Extract security-relevant fields
security_fuses = {}
security_names = [
"FLASH_CRYPT_CNT", "ABS_DONE_0", "ABS_DONE_1",
"JTAG_DISABLE", "DISABLE_DL_ENCRYPT", "DISABLE_DL_DECRYPT",
"DISABLE_DL_CACHE", "FLASH_CRYPT_CONFIG",
]
for name in security_names:
if name in parsed:
security_fuses[name] = parsed[name]
report["security_fuses"] = security_fuses
# Determine security posture
flash_encrypted = security_fuses.get("FLASH_CRYPT_CNT", "0") not in ("0", "0x0", "= 0")
secure_boot = security_fuses.get("ABS_DONE_0", "0") not in ("0", "0x0", "= 0")
jtag_disabled = security_fuses.get("JTAG_DISABLE", "0") not in ("0", "0x0", "= 0")
report["posture"] = {
"flash_encryption": "enabled" if flash_encrypted else "disabled",
"secure_boot": "enabled" if secure_boot else "disabled",
"jtag": "disabled" if jtag_disabled else "enabled (vulnerable)",
}
else:
report["efuse_error"] = efuse_result["error"]
report["success"] = True
return report
async def _flash_encryption_impl(
self,
context: Context,
port: str | None,
key_file: str | None,
) -> dict[str, Any]:
"""Check flash encryption status and provide guidance."""
if not port:
return {"success": False, "error": "Port is required"}
# Read the encryption-relevant eFuses
efuse_result = await self._run_cmd(
self._espefuse_cmd(port, ["summary"]),
timeout=15.0,
)
if not efuse_result["success"]:
return {"success": False, "error": efuse_result["error"]}
parsed = self._parse_efuse_summary(efuse_result["output"])
flash_crypt_cnt = parsed.get("FLASH_CRYPT_CNT", "unknown")
flash_crypt_config = parsed.get("FLASH_CRYPT_CONFIG", "unknown")
# Determine current state
is_encrypted = flash_crypt_cnt not in ("0", "0x0", "= 0", "unknown")
result: dict[str, Any] = {
"success": True,
"port": port,
"flash_encryption_enabled": is_encrypted,
"FLASH_CRYPT_CNT": flash_crypt_cnt,
"FLASH_CRYPT_CONFIG": flash_crypt_config,
}
if is_encrypted:
result["message"] = "Flash encryption is already enabled on this device."
else:
result["message"] = (
"Flash encryption is NOT enabled. To enable, you need to: "
"1) Generate or provide an encryption key, "
"2) Burn FLASH_CRYPT_CNT and FLASH_CRYPT_CONFIG eFuses, "
"3) Flash encrypted firmware. "
"WARNING: This is irreversible on real hardware. Test on QEMU first."
)
if key_file:
result["key_file"] = key_file
return result
async def _efuse_read_impl(
self,
context: Context,
port: str | None,
efuse_name: str | None,
) -> dict[str, Any]:
"""Read eFuse values via espefuse summary."""
if not port:
return {"success": False, "error": "Port is required for eFuse read"}
result = await self._run_cmd(
self._espefuse_cmd(port, ["summary"]),
timeout=15.0,
)
if not result["success"]:
return {"success": False, "error": result["error"], "port": port}
parsed = self._parse_efuse_summary(result["output"])
if efuse_name:
# Return specific eFuse
if efuse_name in parsed:
return {
"success": True,
"port": port,
"efuse_name": efuse_name,
"value": parsed[efuse_name],
}
else:
# Try case-insensitive match
for key, val in parsed.items():
if key.upper() == efuse_name.upper():
return {
"success": True,
"port": port,
"efuse_name": key,
"value": val,
}
return {
"success": False,
"error": f"eFuse '{efuse_name}' not found",
"available_efuses": list(parsed.keys()),
"port": port,
}
# Return full summary
return {
"success": True,
"port": port,
"efuses": parsed,
"raw_output": result["output"][:2000],
}
async def _efuse_burn_impl(
self,
context: Context,
efuse_name: str,
value: str,
port: str | None,
) -> dict[str, Any]:
"""Burn an eFuse value via espefuse burn-efuse."""
if not port:
return {"success": False, "error": "Port is required for eFuse burn"}
# Read before to record the change
before = await self._run_cmd(
self._espefuse_cmd(port, ["summary"]),
timeout=15.0,
)
before_parsed = self._parse_efuse_summary(before.get("output", "")) if before["success"] else {}
before_value = before_parsed.get(efuse_name, "unknown")
# Burn the eFuse (--do-not-confirm since MCP client handles confirmation)
result = await self._run_cmd(
self._espefuse_cmd(port, ["--do-not-confirm", "burn-efuse", efuse_name, value]),
timeout=30.0,
)
if not result["success"]:
return {
"success": False,
"error": result["error"],
"port": port,
"efuse_name": efuse_name,
}
# Read after to confirm
after = await self._run_cmd(
self._espefuse_cmd(port, ["summary"]),
timeout=15.0,
)
after_parsed = self._parse_efuse_summary(after.get("output", "")) if after["success"] else {}
after_value = after_parsed.get(efuse_name, "unknown")
return {
"success": True,
"port": port,
"efuse_name": efuse_name,
"value_requested": value,
"value_before": before_value,
"value_after": after_value,
"warning": "eFuse burn is IRREVERSIBLE on real hardware",
}
def _parse_efuse_summary(self, output: str) -> dict[str, str]:
"""Parse espefuse summary output into a dict of name -> value.
espefuse v5.x summary lines look like:
ADC_VREF (BLOCK0) True ADC reference voltage = 1100 R/W (0b00000)
MAC (BLOCK0) Factory MAC address = ab:cd:ef:01:02:03 R/W (0x...)
The value sits between '= ' and the R/W permission field.
"""
efuses: dict[str, str] = {}
for line in output.splitlines():
# Match: NAME (BLOCKn) <description> = <value> R/W|R/-|-/- (<hex>)
match = re.match(
r"\s*(\w+)\s+\(BLOCK\d+\)\s+.*?=\s+(.+?)\s+[RW/-]+/[RW/-]+\s",
line,
)
if match:
name = match.group(1).strip()
value = match.group(2).strip()
efuses[name] = value
return efuses
def _parse_security_info(self, output: str) -> dict[str, str]:
"""Parse esptool get-security-info output."""
info: dict[str, str] = {}
for line in output.splitlines():
if ":" in line:
key, _, val = line.partition(":")
key = key.strip()
val = val.strip()
if key and val:
info[key] = val
return info
async def health_check(self) -> dict[str, Any]:
"""Component health check"""
return {"status": "healthy", "note": "Security manager ready"}
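The summary parser above hinges on a single regex that extracts the value between `= ` and the R/W permission field. A minimal standalone sketch shows the capture groups at work; the sample lines are hypothetical, modeled on the espefuse v5.x format described in the docstring:

```python
import re

# Pattern mirroring _parse_efuse_summary: NAME, (BLOCKn), description,
# then the value between '= ' and the R/W permission field.
EFUSE_LINE = re.compile(
    r"\s*(\w+)\s+\(BLOCK\d+\)\s+.*?=\s+(.+?)\s+[RW/-]+/[RW/-]+\s"
)

def parse_efuse_summary(output: str) -> dict[str, str]:
    """Parse espefuse-style summary text into a name -> value dict."""
    efuses: dict[str, str] = {}
    for line in output.splitlines():
        match = EFUSE_LINE.match(line)
        if match:
            efuses[match.group(1)] = match.group(2).strip()
    return efuses

# Hypothetical sample lines in the format shown in the docstring above.
sample = (
    "ADC_VREF (BLOCK0)  ADC reference voltage  = 1100 R/W (0b00000)\n"
    "MAC (BLOCK0)  Factory MAC address  = ab:cd:ef:01:02:03 R/W (0x00)\n"
)
print(parse_efuse_summary(sample))
```

The lazy `(.+?)` stops at the last run of whitespace before the permission field, so multi-word values and colon-separated MAC addresses both survive intact.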


@ -0,0 +1,756 @@
"""
Chip Control Component
Provides comprehensive ESP32/ESP8266 chip detection, connection management,
and basic control operations with production-grade reliability features.
"""
import asyncio
import logging
import time
from collections.abc import Callable
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from typing import Any, TypeVar
import esptool
from fastmcp import Context, FastMCP
from ..config import ESPToolServerConfig
from ..middleware import MiddlewareFactory
logger = logging.getLogger(__name__)
# Type variable for generic return type
T = TypeVar("T")
@dataclass
class ChipInfo:
"""Information about detected ESP chip"""
chip_type: str
chip_revision: str | None = None
mac_address: str | None = None
flash_size: str | None = None
crystal_frequency: str | None = None
features: list[str] | None = None
efuse_info: dict[str, Any] | None = None
def __post_init__(self):
if self.features is None:
self.features = []
if self.efuse_info is None:
self.efuse_info = {}
@dataclass
class ConnectionInfo:
"""Information about ESP device connection"""
port: str
baud_rate: int
connected: bool = False
connection_time: float | None = None
stub_loaded: bool = False
chip_info: ChipInfo | None = None
class ChipControl:
"""ESP32/ESP8266 chip control and management"""
def __init__(self, app: FastMCP, config: ESPToolServerConfig):
self.app = app
self.config = config
self.connections: dict[str, ConnectionInfo] = {}
# Set by server after QemuManager initialization (avoids circular import)
self.qemu_manager = None
# Register tools
self._register_tools()
def _register_tools(self) -> None:
"""Register chip control tools with FastMCP"""
@self.app.tool("esp_detect_chip")
async def detect_chip(
context: Context,
port: str | None = None,
baud_rate: int | None = None,
detailed: bool = False,
) -> dict[str, Any]:
"""
Detect ESP chip type and gather comprehensive information
Args:
port: Serial port (auto-detect if not specified)
baud_rate: Connection baud rate (use config default if not specified)
detailed: Include detailed chip information and eFuse data
"""
return await self._detect_chip_impl(context, port, baud_rate, detailed)
@self.app.tool("esp_connect_advanced")
async def connect_advanced(
context: Context,
port: str | None = None,
baud_rate: int | None = None,
timeout: int | None = None,
use_stub: bool = True,
retry_count: int = 3,
) -> dict[str, Any]:
"""
Advanced ESP device connection with retry logic and stub loading
Args:
port: Serial port (auto-detect if not specified)
baud_rate: Connection baud rate
timeout: Connection timeout in seconds
use_stub: Load ROM bootloader stub for faster operations
retry_count: Number of connection attempts
"""
return await self._connect_advanced_impl(
context, port, baud_rate, timeout, use_stub, retry_count
)
@self.app.tool("esp_reset_chip")
async def reset_chip(
context: Context, port: str | None = None, reset_type: str = "hard"
) -> dict[str, Any]:
"""
Reset ESP chip using various methods
Args:
port: Serial port (use active connection if not specified)
reset_type: Type of reset (hard, soft, bootloader)
"""
return await self._reset_chip_impl(context, port, reset_type)
@self.app.tool("esp_scan_ports")
async def scan_ports(context: Context, detailed: bool = False) -> dict[str, Any]:
"""
Scan for available ESP devices on all ports
Args:
detailed: Include detailed information about each detected device
"""
return await self._scan_ports_impl(context, detailed)
@self.app.tool("esp_load_test_firmware")
async def load_test_firmware(
context: Context, port: str | None = None, firmware_type: str = "blink"
) -> dict[str, Any]:
"""
Load test firmware for chip validation
Args:
port: Serial port (auto-detect if not specified)
firmware_type: Type of test firmware (blink, hello_world, wifi_scan)
"""
return await self._load_test_firmware_impl(context, port, firmware_type)
async def _detect_chip_impl(
self, context: Context, port: str | None, baud_rate: int | None, detailed: bool
) -> dict[str, Any]:
"""Implementation of chip detection"""
# Use middleware for operation tracking
middleware = MiddlewareFactory.create_esptool_middleware(
context, f"detect_chip_{int(time.time())}"
)
async with middleware.activate():
try:
# Auto-detect port if not specified
if not port:
await middleware._log_info("🔍 Auto-detecting ESP device port...")
port = await self._auto_detect_port(context)
if not port:
return {
"success": False,
"error": "No ESP devices found on available ports",
"scanned_ports": self.config.get_common_ports(),
}
# Use provided baud rate or config default
baud_rate = baud_rate or self.config.default_baud_rate
await middleware._log_info(
f"🔌 Connecting to ESP device on {port} at {baud_rate} baud..."
)
start_time = time.time()
try:
# Use subprocess for reliable timeout (threads can't be killed)
probe_result = await self._probe_port_subprocess(port, detailed)
if not probe_result.get("available"):
await middleware._log_error(
f"Chip detection failed: {probe_result.get('error', 'Unknown')}"
)
return {
"success": False,
"error": probe_result.get("error", "Detection failed"),
"port": port,
"baud_rate": baud_rate,
}
connection_time = time.time() - start_time
# Create ChipInfo from probe result
chip_info = ChipInfo(
chip_type=probe_result.get("chip_type", "Unknown"),
mac_address=probe_result.get("mac_address"),
flash_size=probe_result.get("flash_size"),
crystal_frequency=probe_result.get("crystal_freq"),
features=probe_result.get("features"),
)
# Store connection info
self.connections[port] = ConnectionInfo(
port=port,
baud_rate=baud_rate,
connected=True,
connection_time=connection_time,
chip_info=chip_info,
)
await middleware._log_success(f"Successfully detected {chip_info.chip_type}")
return {
"success": True,
"port": port,
"baud_rate": baud_rate,
"connection_time_seconds": round(connection_time, 2),
"chip_info": {
"chip_type": chip_info.chip_type,
"mac_address": chip_info.mac_address,
"flash_size": chip_info.flash_size,
"crystal_frequency": chip_info.crystal_frequency,
"features": chip_info.features,
}
if detailed
else {
"chip_type": chip_info.chip_type,
"mac_address": chip_info.mac_address,
},
}
except Exception as e:
await middleware._log_error(f"Chip detection failed: {e}")
return {"success": False, "error": str(e), "port": port, "baud_rate": baud_rate}
except Exception as e:
await middleware._log_error(f"Detection operation failed: {e}")
return {"success": False, "error": f"Operation failed: {e}"}
async def _connect_advanced_impl(
self,
context: Context,
port: str | None,
baud_rate: int | None,
timeout: int | None,
use_stub: bool,
retry_count: int,
) -> dict[str, Any]:
"""Implementation of advanced connection"""
middleware = MiddlewareFactory.create_esptool_middleware(
context, f"connect_advanced_{int(time.time())}"
)
async with middleware.activate():
# Auto-detect port if needed
if not port:
port = await self._auto_detect_port(context)
if not port:
return {"success": False, "error": "No ESP devices found"}
baud_rate = baud_rate or self.config.default_baud_rate
connection_timeout = float(timeout or self.config.connection_timeout)
last_error = None
for attempt in range(retry_count):
await middleware._log_info(f"🔄 Connection attempt {attempt + 1}/{retry_count}")
# Capture variables for closure
target_port = port
target_baud = baud_rate
load_stub = use_stub and self.config.enable_stub_flasher
def connect_blocking() -> dict[str, Any]:
"""Blocking function to connect and get chip info"""
esp = self._connect_to_chip(target_port, target_baud)
# Load stub if requested
stub_loaded = False
if load_stub:
esp.run_stub()
stub_loaded = True
# Test connection
chip_type = esp.get_chip_description()
mac_address = ":".join(f"{b:02x}" for b in esp.read_mac())
return {
"chip_type": chip_type,
"mac_address": mac_address,
"stub_loaded": stub_loaded,
}
try:
result = await self._run_blocking_with_timeout(
connect_blocking, timeout=connection_timeout
)
# Store successful connection
self.connections[port] = ConnectionInfo(
port=port,
baud_rate=baud_rate,
connected=True,
connection_time=time.time(),
stub_loaded=result["stub_loaded"],
chip_info=ChipInfo(
chip_type=result["chip_type"],
mac_address=result["mac_address"],
),
)
await middleware._log_success(f"Connected to {result['chip_type']} on {port}")
return {
"success": True,
"port": port,
"baud_rate": baud_rate,
"attempt": attempt + 1,
"stub_loaded": result["stub_loaded"],
"chip_type": result["chip_type"],
"mac_address": result["mac_address"],
}
except asyncio.TimeoutError:
last_error = f"Connection timeout ({connection_timeout}s)"
await middleware._log_warning(f"Attempt {attempt + 1} timed out")
except Exception as e:
last_error = str(e)
await middleware._log_warning(f"Attempt {attempt + 1} failed: {e}")
if attempt < retry_count - 1:
await asyncio.sleep(1) # Brief delay between attempts
await middleware._log_error(f"All connection attempts failed. Last error: {last_error}")
return {"success": False, "error": last_error, "attempts": retry_count, "port": port}
async def _reset_chip_impl(
self, context: Context, port: str | None, reset_type: str
) -> dict[str, Any]:
"""Implementation of chip reset"""
middleware = MiddlewareFactory.create_esptool_middleware(
context, f"reset_chip_{int(time.time())}"
)
async with middleware.activate():
try:
# Find active connection or specified port
if not port:
active_connections = [
conn for conn in self.connections.values() if conn.connected
]
if not active_connections:
return {"success": False, "error": "No active connections found"}
port = active_connections[0].port
connection = self.connections.get(port)
baud_rate = connection.baud_rate if connection else self.config.default_baud_rate
# Validate reset type
if reset_type not in ("hard", "soft", "bootloader"):
return {
"success": False,
"error": f"Unknown reset type: {reset_type}",
"available_types": ["hard", "soft", "bootloader"],
}
await middleware._log_info(f"🔄 Performing {reset_type} reset on {port}")
# Capture variables for closure
target_port = port
target_baud = baud_rate
target_reset_type = reset_type
def perform_reset_blocking() -> bool:
"""Blocking function to perform reset"""
esp = self._connect_to_chip(target_port, target_baud)
if target_reset_type == "hard":
esp.hard_reset()
elif target_reset_type == "soft":
esp.soft_reset()
elif target_reset_type == "bootloader":
# Just connecting puts it in bootloader mode
pass
return True
try:
# Use timeout wrapper - 10 seconds for reset
await self._run_blocking_with_timeout(perform_reset_blocking, timeout=10.0)
# Update connection status
if port in self.connections:
self.connections[port].connected = False
await middleware._log_success(f"Reset completed: {reset_type}")
return {
"success": True,
"port": port,
"reset_type": reset_type,
"timestamp": time.time(),
}
except asyncio.TimeoutError:
await middleware._log_error("Reset operation timed out (10s)")
return {
"success": False,
"error": "Reset timeout (10s)",
"port": port,
"reset_type": reset_type,
}
except Exception as e:
await middleware._log_error(f"Reset failed: {e}")
return {"success": False, "error": str(e), "port": port, "reset_type": reset_type}
async def _scan_ports_impl(self, context: Context, detailed: bool) -> dict[str, Any]:
"""Implementation of port scanning using subprocess for reliable timeout."""
import os
# Check common ESP device ports directly (more reliable than enumeration)
common_esp_ports = [
"/dev/ttyUSB0",
"/dev/ttyUSB1",
"/dev/ttyUSB2",
"/dev/ttyUSB3",
"/dev/ttyACM0",
"/dev/ttyACM1",
"/dev/ttyACM2",
"/dev/ttyACM3",
]
# Filter to only existing ports
usb_ports = [p for p in common_esp_ports if os.path.exists(p)]
detected_devices = []
scan_results = {}
if not usb_ports:
return {
"success": True,
"detected_devices": [],
"total_scanned": len(common_esp_ports),
"checked_ports": common_esp_ports,
"scan_results": {"note": "No USB/ACM ports found on system"},
"timestamp": time.time(),
}
for port in usb_ports:
device_info = await self._probe_port_subprocess(port, detailed)
if device_info.get("available"):
detected_devices.append(device_info)
scan_results[port] = device_info
# Include running QEMU instances
qemu_devices = []
if self.qemu_manager:
for qemu_info in self.qemu_manager.get_running_ports():
qemu_info["available"] = True
qemu_devices.append(qemu_info)
detected_devices.append(qemu_info)
return {
"success": True,
"detected_devices": detected_devices,
"total_scanned": len(usb_ports) + len(qemu_devices),
"checked_ports": common_esp_ports,
"available_ports": usb_ports,
"qemu_devices": qemu_devices if qemu_devices else None,
"scan_results": scan_results if detailed else None,
"timestamp": time.time(),
}
async def _probe_port_subprocess(self, port: str, detailed: bool = False) -> dict[str, Any]:
"""
Probe a single port using esptool as an async subprocess.
Uses asyncio.create_subprocess_exec() so it never blocks the event loop.
The subprocess can be killed on timeout, unlike Python threads.
"""
import re
try:
# Use 1 connect attempt for scanning (fast probe)
info = await self._run_esptool_async(port, "chip-id", connect_attempts=1)
if not info["success"]:
return {"port": port, "available": False, "error": info["error"]}
output = info["output"]
# Parse esptool chip-id output
result: dict[str, Any] = {"port": port, "available": True}
# Extract chip type - multiple formats:
# "Chip type: ESP32-D0WD-V3 (revision v3.1)"
# "Chip is ESP32-S3 (QFN56) (revision v0.2)"
chip_match = re.search(r"Chip type:\s*(.+?)(?:\n|$)", output)
if not chip_match:
chip_match = re.search(r"Chip is\s+(.+?)(?:\n|$)", output)
if not chip_match:
chip_match = re.search(r"Detecting chip type[.…]+\s*(\S+)", output)
if chip_match:
result["chip_type"] = chip_match.group(1).strip()
# Extract MAC address
mac_match = re.search(r"MAC:\s*([0-9a-f:]+)", output, re.IGNORECASE)
if mac_match:
result["mac_address"] = mac_match.group(1)
# Extract features if present
features_match = re.search(r"Features:\s*(.+?)(?:\n|$)", output)
if features_match:
result["features"] = [f.strip() for f in features_match.group(1).split(",")]
# Extract crystal frequency - formats:
# "Crystal frequency: 40MHz"
# "Crystal is 40MHz"
crystal_match = re.search(r"Crystal\s+(?:frequency:\s*|is\s+)(\d+)\s*MHz", output)
if crystal_match:
result["crystal_freq"] = f"{crystal_match.group(1)}MHz"
if detailed:
# Run flash-id for additional info
try:
flash_info = await self._run_esptool_async(port, "flash-id")
if flash_info["success"]:
flash_output = flash_info["output"]
flash_size_match = re.search(r"Detected flash size:\s*(\S+)", flash_output)
if flash_size_match:
result["flash_size"] = flash_size_match.group(1)
flash_mfr_match = re.search(r"Manufacturer:\s*(\S+)", flash_output)
if flash_mfr_match:
result["flash_manufacturer"] = flash_mfr_match.group(1)
else:
result["flash_info_error"] = flash_info["error"]
except Exception as e:
result["flash_info_error"] = str(e)
return result
except Exception as e:
return {"port": port, "available": False, "error": str(e)}
async def _run_esptool_async(
self,
port: str,
command: str,
timeout: float = 10.0,
connect_attempts: int = 3,
) -> dict[str, Any]:
"""
Run an esptool command as a fully async subprocess.
This is the ONLY safe way to call esptool from an async event loop:
- asyncio.create_subprocess_exec() never blocks the event loop
- asyncio.wait_for() can cancel and kill the process on timeout
- The OS sends SIGKILL if the process doesn't respond
Args:
port: Serial port
command: esptool command (e.g. "chip-id", "flash-id")
timeout: Timeout in seconds
connect_attempts: Number of connection attempts (default: 3)
Returns:
dict with "success", "output", and optionally "error"
"""
proc = None
try:
proc = await asyncio.create_subprocess_exec(
self.config.esptool_path,
"--port",
port,
"--connect-attempts",
str(connect_attempts),
command,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
# wait_for will cancel the coroutine on timeout
stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout=timeout)
output = (stdout or b"").decode() + (stderr or b"").decode()
if proc.returncode != 0:
return {"success": False, "error": output.strip()[:200]}
return {"success": True, "output": output}
except asyncio.TimeoutError:
# Kill the hung process
if proc and proc.returncode is None:
proc.kill()
await proc.wait()
return {"success": False, "error": f"Timeout ({timeout}s)"}
except FileNotFoundError:
return {
"success": False,
"error": f"esptool not found at {self.config.esptool_path}",
}
except Exception as e:
if proc and proc.returncode is None:
proc.kill()
await proc.wait()
return {"success": False, "error": str(e)}
async def _load_test_firmware_impl(
self, context: Context, port: str | None, firmware_type: str
) -> dict[str, Any]:
"""Implementation of test firmware loading"""
middleware = MiddlewareFactory.create_esptool_middleware(
context, f"load_test_firmware_{int(time.time())}"
)
async with middleware.activate():
# This would integrate with ESP-IDF to build and flash test firmware
# For now, return a placeholder that shows the architecture
await middleware._log_info(f"🧪 Loading test firmware: {firmware_type}")
# Auto-detect port if needed
if not port:
port = await self._auto_detect_port(context)
if not port:
return {"success": False, "error": "No ESP devices found"}
# Check if we have test firmware available
test_firmwares = {
"blink": "Simple LED blink test",
"hello_world": "Serial output hello world",
"wifi_scan": "WiFi network scanner",
}
if firmware_type not in test_firmwares:
return {
"success": False,
"error": f"Unknown firmware type: {firmware_type}",
"available_types": list(test_firmwares.keys()),
}
await middleware._log_info(f"📦 Test firmware: {test_firmwares[firmware_type]}")
# This is where we would integrate with ESP-IDF or pre-built binaries
# For demonstration, we'll simulate the process
return {
"success": True,
"port": port,
"firmware_type": firmware_type,
"description": test_firmwares[firmware_type],
"note": "Test firmware loading requires ESP-IDF integration (coming soon)",
"timestamp": time.time(),
}
def _connect_to_chip(self, port: str, baud_rate: int, connect_attempts: int = 3):
"""
Helper method to connect to ESP chip using correct esptool API
Args:
port: Serial port
baud_rate: Connection baud rate
connect_attempts: Number of connection attempts (default: 3)
Returns:
Connected ESP device object
"""
return esptool.get_default_connected_device(
serial_list=[port],
port=port,
connect_attempts=connect_attempts,
initial_baud=baud_rate,
chip="auto",
trace=False,
before="default_reset",
)
async def _run_blocking_with_timeout(self, func: Callable[[], T], timeout: float = 5.0) -> T:
"""
Run a blocking function in a thread pool with proper timeout handling.
This solves the issue where asyncio.wait_for() times out but the
ThreadPoolExecutor context manager blocks waiting for the thread to finish.
Args:
func: Blocking function to run
timeout: Timeout in seconds (default: 5.0)
Returns:
Result of the function
Raises:
asyncio.TimeoutError: If the operation times out
Exception: Any exception from the function
"""
loop = asyncio.get_running_loop()  # get_event_loop() is deprecated inside coroutines
executor = ThreadPoolExecutor(max_workers=1)
try:
future = loop.run_in_executor(executor, func)
result = await asyncio.wait_for(future, timeout=timeout)
return result
except asyncio.TimeoutError:
# Critical: shutdown WITHOUT waiting - abandon the hung thread
# cancel_futures=True requires Python 3.9+
executor.shutdown(wait=False, cancel_futures=True)
raise
finally:
# Always try to shutdown, but don't wait
try:
executor.shutdown(wait=False, cancel_futures=True)
except Exception:
pass # Already shutdown or other error
async def _auto_detect_port(self, context: Context) -> str | None:
"""Auto-detect ESP device port using subprocess for reliable timeout."""
import os
ports = self.config.get_common_ports()
for port in ports:
if not os.path.exists(port):
continue
# Use subprocess probe - guaranteed to not hang
result = await self._probe_port_subprocess(port, detailed=False)
if result.get("available"):
return port
return None
async def health_check(self) -> dict[str, Any]:
"""Component health check"""
return {
"status": "healthy",
"active_connections": len([c for c in self.connections.values() if c.connected]),
"total_connections": len(self.connections),
"esptool_available": True, # We imported successfully
}
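The subprocess-with-timeout approach that `_run_esptool_async` relies on reduces to a small standalone pattern: spawn with `asyncio.create_subprocess_exec`, bound `communicate()` with `asyncio.wait_for`, and kill the child on timeout. The sketch below demonstrates it with a harmless `echo` in place of esptool, since no device or `esptool_path` is assumed; `run_with_timeout` and its return shape are illustrative stand-ins, not part of the server's API:

```python
import asyncio

async def run_with_timeout(*argv: str, timeout: float = 10.0) -> dict:
    """Run a command as an async subprocess; kill it if it exceeds timeout."""
    proc = await asyncio.create_subprocess_exec(
        *argv,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    try:
        # wait_for cancels communicate() on timeout; we then kill the child
        stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout=timeout)
    except asyncio.TimeoutError:
        if proc.returncode is None:
            proc.kill()
            await proc.wait()
        return {"success": False, "error": f"Timeout ({timeout}s)"}
    output = (stdout or b"").decode() + (stderr or b"").decode()
    if proc.returncode != 0:
        return {"success": False, "error": output.strip()[:200]}
    return {"success": True, "output": output}

# Demo with a harmless command in place of `esptool --port ... chip-id`.
result = asyncio.run(run_with_timeout("echo", "hello"))
print(result["success"], result["output"].strip())
```

Unlike a thread running blocking serial I/O, the child process can actually be terminated, which is why the scan and probe paths above prefer this over `run_in_executor`.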


@ -0,0 +1,55 @@
"""
Diagnostics Component
Provides comprehensive ESP device diagnostics including memory dumps,
performance profiling, and diagnostic reporting.
"""
import logging
from typing import Any
from fastmcp import Context, FastMCP
from ..config import ESPToolServerConfig
logger = logging.getLogger(__name__)
class Diagnostics:
"""ESP device diagnostics and analysis"""
def __init__(self, app: FastMCP, config: ESPToolServerConfig):
self.app = app
self.config = config
self._register_tools()
def _register_tools(self) -> None:
"""Register diagnostic tools"""
@self.app.tool("esp_memory_dump")
async def memory_dump(
context: Context,
port: str | None = None,
start_address: str = "0x0",
size: str = "1KB",
) -> dict[str, Any]:
"""Dump device memory for analysis"""
return {"success": True, "note": "Implementation coming soon"}
@self.app.tool("esp_performance_profile")
async def performance_profile(
context: Context, port: str | None = None, duration: int = 30
) -> dict[str, Any]:
"""Profile device performance"""
return {"success": True, "note": "Implementation coming soon"}
@self.app.tool("esp_diagnostic_report")
async def diagnostic_report(
context: Context, port: str | None = None, include_memory: bool = False
) -> dict[str, Any]:
"""Generate comprehensive diagnostic report"""
return {"success": True, "note": "Implementation coming soon"}
async def health_check(self) -> dict[str, Any]:
"""Component health check"""
return {"status": "healthy", "note": "Diagnostics ready"}


@ -0,0 +1,50 @@
"""
Firmware Builder Component
Provides ESP-IDF integration for building, compiling, and managing
firmware projects with host application support.
"""
import logging
from typing import Any
from fastmcp import Context, FastMCP
from ..config import ESPToolServerConfig
logger = logging.getLogger(__name__)
class FirmwareBuilder:
"""ESP firmware building and compilation"""
def __init__(self, app: FastMCP, config: ESPToolServerConfig):
self.app = app
self.config = config
self._register_tools()
def _register_tools(self) -> None:
"""Register firmware building tools"""
@self.app.tool("esp_elf_to_binary")
async def elf_to_binary(
context: Context, elf_path: str, output_path: str | None = None
) -> dict[str, Any]:
"""Convert ELF file to flashable binary"""
return {"success": True, "note": "Implementation coming soon"}
@self.app.tool("esp_firmware_analyze")
async def analyze_firmware(context: Context, firmware_path: str) -> dict[str, Any]:
"""Analyze firmware binary structure"""
return {"success": True, "note": "Implementation coming soon"}
@self.app.tool("esp_binary_optimize")
async def optimize_binary(
context: Context, input_path: str, output_path: str
) -> dict[str, Any]:
"""Optimize firmware binary for size/performance"""
return {"success": True, "note": "Implementation coming soon"}
async def health_check(self) -> dict[str, Any]:
"""Component health check"""
return {"status": "healthy", "note": "Firmware builder ready"}


@ -0,0 +1,70 @@
"""
Flash Manager Component
Provides comprehensive ESP flash memory operations including reading, writing,
erasing, verification, and backup with production-grade safety features.
"""
import logging
from typing import Any
from fastmcp import Context, FastMCP
from ..config import ESPToolServerConfig
logger = logging.getLogger(__name__)
class FlashManager:
"""ESP flash memory management and operations"""
def __init__(self, app: FastMCP, config: ESPToolServerConfig):
self.app = app
self.config = config
self._register_tools()
def _register_tools(self) -> None:
"""Register flash management tools"""
@self.app.tool("esp_flash_firmware")
async def flash_firmware(
context: Context, firmware_path: str, port: str | None = None, verify: bool = True
) -> dict[str, Any]:
"""Flash firmware to ESP device"""
# Implementation placeholder
return {"success": True, "note": "Implementation coming soon"}
@self.app.tool("esp_flash_read")
async def flash_read(
context: Context,
output_path: str,
port: str | None = None,
start_address: str = "0x0",
size: str | None = None,
) -> dict[str, Any]:
"""Read flash memory contents"""
return {"success": True, "note": "Implementation coming soon"}
@self.app.tool("esp_flash_erase")
async def flash_erase(
context: Context,
port: str | None = None,
start_address: str = "0x0",
size: str | None = None,
) -> dict[str, Any]:
"""Erase flash memory regions"""
return {"success": True, "note": "Implementation coming soon"}
@self.app.tool("esp_flash_backup")
async def flash_backup(
context: Context,
backup_path: str,
port: str | None = None,
include_bootloader: bool = True,
) -> dict[str, Any]:
"""Create complete flash backup"""
return {"success": True, "note": "Implementation coming soon"}
async def health_check(self) -> dict[str, Any]:
"""Component health check"""
return {"status": "healthy", "note": "Flash manager ready"}


@ -0,0 +1,50 @@
"""
OTA Manager Component
Handles Over-The-Air update operations including package creation,
deployment, rollback, and update management.
"""
import logging
from typing import Any
from fastmcp import Context, FastMCP
from ..config import ESPToolServerConfig
logger = logging.getLogger(__name__)
class OTAManager:
"""ESP Over-The-Air update management"""
def __init__(self, app: FastMCP, config: ESPToolServerConfig):
self.app = app
self.config = config
self._register_tools()
def _register_tools(self) -> None:
"""Register OTA management tools"""
@self.app.tool("esp_ota_package_create")
async def create_ota_package(
context: Context, firmware_path: str, version: str, output_path: str
) -> dict[str, Any]:
"""Create OTA update package"""
return {"success": True, "note": "Implementation coming soon"}
@self.app.tool("esp_ota_deploy")
async def deploy_ota_update(
context: Context, package_path: str, target_url: str
) -> dict[str, Any]:
"""Deploy OTA update to device"""
return {"success": True, "note": "Implementation coming soon"}
@self.app.tool("esp_ota_rollback")
async def rollback_ota(context: Context, port: str | None = None) -> dict[str, Any]:
"""Rollback to previous firmware version"""
return {"success": True, "note": "Implementation coming soon"}
async def health_check(self) -> dict[str, Any]:
"""Component health check"""
return {"status": "healthy", "note": "OTA manager ready"}


@ -0,0 +1,52 @@
"""
Partition Manager Component
Handles ESP partition table operations, OTA partition management,
and custom partition configurations.
"""
import logging
from typing import Any
from fastmcp import Context, FastMCP
from ..config import ESPToolServerConfig
logger = logging.getLogger(__name__)
class PartitionManager:
"""ESP partition table management"""
def __init__(self, app: FastMCP, config: ESPToolServerConfig):
self.app = app
self.config = config
self._register_tools()
def _register_tools(self) -> None:
"""Register partition management tools"""
@self.app.tool("esp_partition_create_ota")
async def create_ota_partition(
context: Context, flash_size: str = "4MB", app_size: str = "1MB"
) -> dict[str, Any]:
"""Create OTA-enabled partition table"""
return {"success": True, "note": "Implementation coming soon"}
@self.app.tool("esp_partition_custom")
async def create_custom_partition(
context: Context, partition_config: dict[str, Any]
) -> dict[str, Any]:
"""Create custom partition table"""
return {"success": True, "note": "Implementation coming soon"}
@self.app.tool("esp_partition_analyze")
async def analyze_partitions(
context: Context, port: str | None = None
) -> dict[str, Any]:
"""Analyze current partition table"""
return {"success": True, "note": "Implementation coming soon"}
async def health_check(self) -> dict[str, Any]:
"""Component health check"""
return {"status": "healthy", "note": "Partition manager ready"}


@ -0,0 +1,52 @@
"""
Production Tools Component
Provides factory programming, batch operations, quality control,
and production line integration tools.
"""
import logging
from typing import Any
from fastmcp import Context, FastMCP
from ..config import ESPToolServerConfig
logger = logging.getLogger(__name__)
class ProductionTools:
"""ESP production and factory programming tools"""
def __init__(self, app: FastMCP, config: ESPToolServerConfig):
self.app = app
self.config = config
self._register_tools()
def _register_tools(self) -> None:
"""Register production tools"""
@self.app.tool("esp_factory_program")
async def factory_program(
context: Context, program_config: dict[str, Any], port: str | None = None
) -> dict[str, Any]:
"""Program device for factory deployment"""
return {"success": True, "note": "Implementation coming soon"}
@self.app.tool("esp_batch_program")
async def batch_program(
context: Context, device_list: list[str], firmware_path: str
) -> dict[str, Any]:
"""Program multiple devices in batch"""
return {"success": True, "note": "Implementation coming soon"}
@self.app.tool("esp_quality_control")
async def quality_control(
context: Context, port: str | None = None, test_suite: str = "basic"
) -> dict[str, Any]:
"""Run quality control tests"""
return {"success": True, "note": "Implementation coming soon"}
async def health_check(self) -> dict[str, Any]:
"""Component health check"""
return {"status": "healthy", "note": "Production tools ready"}


@ -0,0 +1,57 @@
"""
Security Manager Component
Handles ESP security features including secure boot, flash encryption,
eFuse management, and security auditing.
"""
import logging
from typing import Any
from fastmcp import Context, FastMCP
from ..config import ESPToolServerConfig
logger = logging.getLogger(__name__)
class SecurityManager:
"""ESP security features management"""
def __init__(self, app: FastMCP, config: ESPToolServerConfig):
self.app = app
self.config = config
self._register_tools()
def _register_tools(self) -> None:
"""Register security management tools"""
@self.app.tool("esp_security_audit")
async def security_audit(context: Context, port: str | None = None) -> dict[str, Any]:
"""Perform comprehensive security audit"""
return {"success": True, "note": "Implementation coming soon"}
@self.app.tool("esp_enable_flash_encryption")
async def enable_flash_encryption(
context: Context, port: str | None = None, key_file: str | None = None
) -> dict[str, Any]:
"""Enable flash encryption with optional key"""
return {"success": True, "note": "Implementation coming soon"}
@self.app.tool("esp_efuse_read")
async def read_efuse(
context: Context, port: str | None = None, efuse_name: str | None = None
) -> dict[str, Any]:
"""Read eFuse values"""
return {"success": True, "note": "Implementation coming soon"}
@self.app.tool("esp_efuse_burn")
async def burn_efuse(
context: Context, efuse_name: str, value: str, port: str | None = None
) -> dict[str, Any]:
"""Burn eFuse (DANGEROUS - requires confirmation)"""
return {"success": True, "note": "Implementation coming soon"}
async def health_check(self) -> dict[str, Any]:
"""Component health check"""
return {"status": "healthy", "note": "Security manager ready"}

View File

@ -0,0 +1,16 @@
"""
MCP Middleware System
Universal middleware for integrating CLI tools with FastMCP servers.
Provides bidirectional communication, progress tracking, and user interaction.
"""
from .esptool_middleware import ESPToolMiddleware
from .logger_interceptor import LoggerInterceptor
from .middleware_factory import MiddlewareFactory
__all__ = [
"LoggerInterceptor",
"ESPToolMiddleware",
"MiddlewareFactory",
]

View File

@ -0,0 +1,362 @@
"""
ESPTool-specific middleware implementation
Provides specialized middleware for intercepting esptool operations and redirecting
output to MCP context with intelligent progress tracking and user interaction.
"""
import asyncio
import io
import logging
import re
import sys
from re import Pattern
from typing import Any
from fastmcp import Context
from .logger_interceptor import LoggerInterceptor, MiddlewareError
class ESPToolMiddleware(LoggerInterceptor):
"""ESPTool-specific middleware for MCP integration"""
def __init__(self, context: Context, operation_id: str):
super().__init__(context, operation_id)
# ESPTool-specific state
self.original_stdout = None
self.original_stderr = None
self.captured_output = io.StringIO()
self.captured_errors = io.StringIO()
# Progress tracking patterns
self.progress_patterns = self._setup_progress_patterns()
self.stage_patterns = self._setup_stage_patterns()
# Operation tracking
self.current_operation = None
self.chip_info = {}
self.flash_info = {}
def _setup_progress_patterns(self) -> dict[str, Pattern]:
"""Set up regex patterns for progress detection"""
return {
"flash_progress": re.compile(r"Writing at 0x[0-9a-f]+\.\.\. \((\d+) %\)"),
"read_progress": re.compile(r"Reading memory at 0x[0-9a-f]+\.\.\. \((\d+) %\)"),
"erase_progress": re.compile(
r"Erasing flash \(this may take a while\)\.\.\. \((\d+) %\)"
),
"verify_progress": re.compile(r"Verifying \((\d+) %\)"),
"compress_progress": re.compile(r"Compressed (\d+) bytes to (\d+)\.\.\. \((\d+) %\)"),
}
def _setup_stage_patterns(self) -> dict[str, Pattern]:
"""Set up regex patterns for stage detection"""
return {
"chip_detection": re.compile(r"Detecting chip type\.\.\. (.+)"),
"connecting": re.compile(r"Connecting\.\.\."),
"stub_loading": re.compile(r"Running stub\.\.\."),
"flash_begin": re.compile(r"Changing baud rate to (\d+)"),
"configuring_flash": re.compile(r"Configuring flash size\.\.\."),
"erasing_flash": re.compile(r"Erasing flash \(this may take a while\)\.\.\."),
"writing_flash": re.compile(r"Writing .+ bytes at 0x[0-9a-f]+\.\.\."),
"verifying": re.compile(r"Verifying\.\.\."),
"hard_reset": re.compile(r"Hard resetting via RTS pin\.\.\."),
"leaving_download": re.compile(r"Leaving\.\.\."),
}
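The progress and stage regexes above can be exercised standalone. A minimal sketch, where the sample lines are illustrative stand-ins for esptool output rather than captured from a real run:

```python
import re

# Two of the patterns above, reproduced for a standalone check.
flash_progress = re.compile(r"Writing at 0x[0-9a-f]+\.\.\. \((\d+) %\)")
writing_stage = re.compile(r"Writing .+ bytes at 0x[0-9a-f]+\.\.\.")

sample_lines = [  # illustrative output lines
    "Writing 1048576 bytes at 0x00010000...",
    "Writing at 0x00010000... (3 %)",
    "Writing at 0x0004c000... (25 %)",
]

# Progress lines yield a percentage; the stage line matches only once.
percentages = [
    int(m.group(1))
    for line in sample_lines
    if (m := flash_progress.search(line))
]
stage_hits = [bool(writing_stage.search(line)) for line in sample_lines]

print(percentages)  # [3, 25]
print(stage_hits)   # [True, False, False]
```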
async def install_hooks(self) -> None:
"""Install middleware hooks into esptool"""
try:
# Create custom logger that redirects to MCP
mcp_logger = self._create_mcp_logger()
# Patch esptool's logging
self.original_stdout = sys.stdout
self.original_stderr = sys.stderr
# Install our custom output capture
sys.stdout = MCPOutputCapture(self, "stdout")
sys.stderr = MCPOutputCapture(self, "stderr")
# Override esptool's main logger
esptool_logger = logging.getLogger("esptool")
esptool_logger.handlers.clear()
esptool_logger.addHandler(mcp_logger)
esptool_logger.setLevel(logging.DEBUG)
await self._log_info("🔌 ESPTool middleware hooks installed")
except Exception as e:
await self._log_error(f"Failed to install ESPTool hooks: {e}")
raise MiddlewareError(f"Hook installation failed: {e}") from e
async def remove_hooks(self) -> None:
"""Remove middleware hooks from esptool"""
try:
# Restore original streams
if self.original_stdout:
sys.stdout = self.original_stdout
if self.original_stderr:
sys.stderr = self.original_stderr
# Restore esptool logging
esptool_logger = logging.getLogger("esptool")
esptool_logger.handlers.clear()
await self._log_info("🔌 ESPTool middleware hooks removed")
except Exception as e:
await self._log_warning(f"Error removing ESPTool hooks: {e}")
def get_interaction_points(self) -> list[str]:
"""Return ESPTool operations that require user interaction"""
return [
"erase_flash",
"write_flash_encrypt",
"burn_efuse",
"secure_boot_signing_key",
"flash_encryption_key_generate",
"reset_to_factory",
]
def _create_mcp_logger(self) -> logging.Handler:
"""Create a logging handler that forwards to MCP context"""
class MCPLogHandler(logging.Handler):
def __init__(self, middleware):
super().__init__()
self.middleware = middleware
def emit(self, record):
try:
message = self.format(record)
# Schedule async logging on the running event loop
# (get_event_loop() is deprecated here; a RuntimeError from
# get_running_loop() is swallowed by the outer handler)
loop = asyncio.get_running_loop()
if record.levelno >= logging.ERROR:
loop.create_task(self.middleware._log_error(message))
elif record.levelno >= logging.WARNING:
loop.create_task(self.middleware._log_warning(message))
else:
loop.create_task(self.middleware._log_info(message))
except Exception:
pass # Prevent logging errors from breaking operations
return MCPLogHandler(self)
async def process_output_line(self, line: str, stream_type: str) -> None:
"""Process a line of output from esptool"""
if not line.strip():
return
# Check for progress updates
await self._check_progress_patterns(line)
# Check for stage changes
await self._check_stage_patterns(line)
# Check for chip information
await self._extract_chip_info(line)
# Check for flash information
await self._extract_flash_info(line)
# Check for errors
await self._check_error_patterns(line)
# Log the line if it contains useful information
if self._is_useful_output(line):
await self._log_info(f"📟 {line.strip()}")
async def _check_progress_patterns(self, line: str) -> None:
"""Check line against progress patterns and update progress"""
for operation, pattern in self.progress_patterns.items():
match = pattern.search(line)
if match:
if operation == "compress_progress":
# Special handling for compression progress
original_size = int(match.group(1))
compressed_size = int(match.group(2))
percentage = int(match.group(3))
await self._update_progress(
percentage,
f"Compressing: {original_size} → {compressed_size} bytes",
current=compressed_size,
total=original_size,
)
else:
percentage = int(match.group(1))
operation_name = operation.replace("_", " ").title()
await self._update_progress(percentage, f"{operation_name}: {percentage}%")
break
async def _check_stage_patterns(self, line: str) -> None:
"""Check line against stage patterns and handle stage changes"""
for stage, pattern in self.stage_patterns.items():
match = pattern.search(line)
if match:
stage_message = self._format_stage_message(stage, match)
await self._handle_stage_start(stage_message)
# Store current operation context
self.current_operation = stage
break
def _format_stage_message(self, stage: str, match) -> str:
"""Format stage message for user display"""
stage_messages = {
"chip_detection": f"Detecting chip type: {match.group(1)}",
"connecting": "Connecting to ESP device",
"stub_loading": "Loading ROM bootloader stub",
"flash_begin": f"Setting baud rate to {match.group(1)}",
"configuring_flash": "Configuring flash parameters",
"erasing_flash": "Erasing flash memory",
"writing_flash": "Writing firmware to flash",
"verifying": "Verifying flash contents",
"hard_reset": "Performing hard reset",
"leaving_download": "Exiting download mode",
}
return stage_messages.get(stage, stage.replace("_", " ").title())
async def _extract_chip_info(self, line: str) -> None:
"""Extract chip information from esptool output"""
patterns = {
"chip_type": re.compile(r"Chip is (.+)"),
"mac_address": re.compile(r"MAC: ([0-9a-f:]{17})"),
"flash_id": re.compile(r"Detected flash size: (.+)"),
"crystal_freq": re.compile(r"Crystal is (.+)MHz"),
}
for info_type, pattern in patterns.items():
match = pattern.search(line)
if match:
self.chip_info[info_type] = match.group(1)
await self._log_info(f"📋 {info_type.replace('_', ' ').title()}: {match.group(1)}")
async def _extract_flash_info(self, line: str) -> None:
"""Extract flash information from esptool output"""
patterns = {
"flash_size": re.compile(r"Auto-detected Flash size: (.+)"),
"flash_frequency": re.compile(r"Flash frequency: (.+)"),
"flash_mode": re.compile(r"Flash mode: (.+)"),
}
for info_type, pattern in patterns.items():
match = pattern.search(line)
if match:
self.flash_info[info_type] = match.group(1)
await self._log_info(f"💾 {info_type.replace('_', ' ').title()}: {match.group(1)}")
async def _check_error_patterns(self, line: str) -> None:
"""Check for error patterns in output"""
error_patterns = [
r"Error:? (.+)",
r"Failed to (.+)",
r"Could not (.+)",
r"No such file or directory: (.+)",
r"Permission denied: (.+)",
r"Serial exception: (.+)",
]
for pattern in error_patterns:
match = re.search(pattern, line, re.IGNORECASE)
if match:
await self._log_error(f"ESPTool error: {match.group(1)}")
break
def _is_useful_output(self, line: str) -> bool:
"""Determine if output line contains useful information"""
# Skip common noise patterns
noise_patterns = [
r"^\s*$", # Empty lines
r"^Uploading stub\.\.\.",
r"^Running stub\.\.\.",
r"^Stub running\.\.\.",
r"^\.", # Progress dots
]
for pattern in noise_patterns:
if re.match(pattern, line):
return False
# Include lines with useful keywords
useful_keywords = [
"chip",
"flash",
"mac",
"crystal",
"baud",
"size",
"error",
"warning",
"failed",
"success",
"complete",
"writing",
"reading",
"erasing",
"verifying",
]
line_lower = line.lower()
return any(keyword in line_lower for keyword in useful_keywords)
async def get_operation_summary(self) -> dict[str, Any]:
"""Get summary of current operation"""
return {
"operation_id": self.operation_id,
"current_operation": self.current_operation,
"chip_info": self.chip_info,
"flash_info": self.flash_info,
"progress_history": self.progress_history[-5:], # Last 5 progress updates
"statistics": self.get_operation_statistics(),
}
class MCPOutputCapture:
"""Custom output capture that forwards to middleware"""
def __init__(self, middleware: ESPToolMiddleware, stream_type: str):
self.middleware = middleware
self.stream_type = stream_type
self.buffer = ""
def write(self, text: str) -> int:
"""Write text and process for MCP forwarding"""
self.buffer += text
# Process complete lines
while "\n" in self.buffer:
line, self.buffer = self.buffer.split("\n", 1)
# Forward complete lines to the middleware asynchronously; skip when
# no event loop is running (e.g. write() called from a worker thread)
try:
    loop = asyncio.get_running_loop()
    loop.create_task(self.middleware.process_output_line(line, self.stream_type))
except RuntimeError:
    pass
return len(text)
def flush(self):
"""Flush any remaining buffer content"""
if self.buffer.strip():
try:
    loop = asyncio.get_running_loop()
    loop.create_task(self.middleware.process_output_line(self.buffer, self.stream_type))
except RuntimeError:
    pass  # no running loop to schedule on
self.buffer = ""
def isatty(self) -> bool:
return False
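The buffering in `MCPOutputCapture.write` can be illustrated with a synchronous sketch: text arrives in arbitrary chunks and is re-assembled into complete lines before forwarding (the sample chunks below are illustrative):

```python
# Synchronous stand-in for MCPOutputCapture's line-buffering logic.
class LineBuffer:
    def __init__(self):
        self.buffer = ""
        self.lines: list[str] = []

    def write(self, text: str) -> int:
        self.buffer += text
        # Emit only complete lines; keep the partial tail buffered.
        while "\n" in self.buffer:
            line, self.buffer = self.buffer.split("\n", 1)
            self.lines.append(line)
        return len(text)

    def flush(self) -> None:
        if self.buffer.strip():
            self.lines.append(self.buffer)
        self.buffer = ""

buf = LineBuffer()
buf.write("Connecting...")
buf.write("\nChip is ESP32-D0WD\nWriting at 0x0001")
buf.write("0000... (3 %)\n")
buf.flush()
print(buf.lines)  # three complete, re-assembled lines
```

Note the split across the second and third chunks: the address is only forwarded once the trailing newline arrives.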
class ESPToolOperationError(MiddlewareError):
"""Raised when ESPTool operation fails"""
pass
class ESPToolConnectionError(MiddlewareError):
"""Raised when connection to ESP device fails"""
pass

View File

@ -0,0 +1,290 @@
"""
Logger Interceptor Base Class
Abstract base class for intercepting and redirecting CLI tool logging to MCP context.
Provides the foundation for bidirectional communication with any CLI tool.
"""
import logging
import time
from abc import ABC, abstractmethod
from contextlib import asynccontextmanager
from typing import Any
from fastmcp import Context
logger = logging.getLogger(__name__)
class LoggerInterceptor(ABC):
"""Abstract base class for CLI tool logger interception"""
def __init__(self, context: Context, operation_id: str):
"""
Initialize logger interceptor
Args:
context: FastMCP context for logging and user interaction
operation_id: Unique identifier for this operation
"""
self.context = context
self.operation_id = operation_id
self.operation_start_time = time.time()
# Detect MCP client capabilities
self.capabilities = self._detect_mcp_capabilities()
# Operation state
self.progress_history: list[dict[str, Any]] = []
self.user_confirmations: dict[str, bool] = {}
self.active_stages: list[str] = []
logger.debug(f"Logger interceptor initialized for operation: {operation_id}")
def _detect_mcp_capabilities(self) -> dict[str, bool]:
"""Detect available MCP client capabilities"""
capabilities = {
"logging": hasattr(self.context, "log") and callable(self.context.log),
"progress": hasattr(self.context, "progress") and callable(self.context.progress),
"elicitation": hasattr(self.context, "request_user_input")
and callable(self.context.request_user_input),
"sampling": hasattr(self.context, "sample") and callable(self.context.sample),
}
logger.debug(f"Detected MCP capabilities: {capabilities}")
return capabilities
@abstractmethod
async def install_hooks(self) -> None:
"""Install middleware hooks into the target tool"""
pass
@abstractmethod
async def remove_hooks(self) -> None:
"""Remove middleware hooks from the target tool"""
pass
@abstractmethod
def get_interaction_points(self) -> list[str]:
"""Return list of operations that require user interaction"""
pass
@asynccontextmanager
async def activate(self):
"""Context manager for middleware lifecycle"""
try:
await self.install_hooks()
await self._log_operation_start()
yield self
except Exception as e:
await self._log_error(f"Middleware activation failed: {e}")
raise
finally:
await self._log_operation_end()
await self.remove_hooks()
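The lifecycle guaranteed by `activate()` can be sketched with a toy stand-in: hooks are installed before the tool runs and removed afterwards even if the body raises. Names here are illustrative, not the real `LoggerInterceptor` API surface:

```python
import asyncio
from contextlib import asynccontextmanager

class MiniInterceptor:
    """Toy stand-in showing the activate() install/run/remove lifecycle."""
    def __init__(self):
        self.events: list[str] = []

    async def install_hooks(self):
        self.events.append("hooks_installed")

    async def remove_hooks(self):
        self.events.append("hooks_removed")

    @asynccontextmanager
    async def activate(self):
        try:
            await self.install_hooks()
            self.events.append("operation_start")
            yield self
        finally:
            # Runs on success or failure, mirroring the real context manager.
            self.events.append("operation_end")
            await self.remove_hooks()

async def main() -> list[str]:
    mw = MiniInterceptor()
    async with mw.activate():
        mw.events.append("tool_ran")
    return mw.events

events = asyncio.run(main())
print(events)
```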
# Enhanced logging methods
async def _log_info(self, message: str, **kwargs) -> None:
"""Log informational message to MCP context"""
if self.capabilities["logging"]:
try:
await self.context.log(level="info", message=message, **kwargs)
except Exception as e:
logger.warning(f"Failed to log info message: {e}")
async def _log_warning(self, message: str, **kwargs) -> None:
"""Log warning message to MCP context"""
if self.capabilities["logging"]:
try:
await self.context.log(level="warning", message=f"⚠️ {message}", **kwargs)
except Exception as e:
logger.warning(f"Failed to log warning message: {e}")
async def _log_error(self, message: str, **kwargs) -> None:
"""Log error message to MCP context"""
if self.capabilities["logging"]:
try:
await self.context.log(level="error", message=f"❌ {message}", **kwargs)
except Exception as e:
logger.error(f"Failed to log error message: {e}")
async def _log_success(self, message: str, **kwargs) -> None:
"""Log success message to MCP context"""
if self.capabilities["logging"]:
try:
await self.context.log(level="info", message=f"✅ {message}", **kwargs)
except Exception as e:
logger.warning(f"Failed to log success message: {e}")
async def _update_progress(
self,
percentage: float,
message: str = "",
current: int | None = None,
total: int | None = None,
) -> None:
"""Update operation progress"""
if self.capabilities["progress"]:
try:
await self.context.progress(
operation_id=self.operation_id,
progress=percentage,
total=total or 100,
current=current or int(percentage),
message=message,
)
# Store progress history
self.progress_history.append(
{
"timestamp": time.time(),
"percentage": percentage,
"message": message,
"current": current,
"total": total,
}
)
except Exception as e:
logger.warning(f"Failed to update progress: {e}")
async def _request_user_confirmation(
self, prompt: str, default: bool = True, cache_key: str | None = None
) -> bool:
"""Request user confirmation with optional caching"""
# Use cache key or prompt as key
confirmation_key = cache_key or prompt
# Check cache first
if confirmation_key in self.user_confirmations:
logger.debug(f"Using cached confirmation for: {confirmation_key}")
return self.user_confirmations[confirmation_key]
if self.capabilities["elicitation"]:
try:
response = await self.context.request_user_input(
prompt=prompt, input_type="confirmation", additional_data={"default": default}
)
confirmed = response.get("confirmed", default)
self.user_confirmations[confirmation_key] = confirmed
await self._log_info(
f"User confirmation: {prompt} -> {'Yes' if confirmed else 'No'}"
)
return confirmed
except Exception as e:
await self._log_warning(f"User confirmation failed: {e}")
return default
else:
# No elicitation support, use default
await self._log_info(
f"Auto-confirming (no elicitation): {prompt} -> {'Yes' if default else 'No'}"
)
return default
async def _handle_stage_start(self, stage_message: str) -> None:
"""Handle stage start with potential user interaction"""
self.active_stages.append(stage_message)
await self._log_info(f"🔄 Starting: {stage_message}")
# Check if this stage requires user confirmation
if self._requires_user_interaction(stage_message):
confirmed = await self._request_user_confirmation(
f"🤔 About to: {stage_message}. Continue?",
default=True,
cache_key=f"stage_{stage_message}",
)
if not confirmed:
await self._log_error(f"Operation cancelled by user: {stage_message}")
raise RuntimeError(f"User cancelled operation: {stage_message}")
async def _handle_stage_end(self, stage_message: str | None = None) -> None:
"""Handle stage completion"""
if self.active_stages:
completed_stage = stage_message or self.active_stages.pop()
await self._log_success(f"Completed: {completed_stage}")
elif stage_message:
await self._log_success(f"Completed: {stage_message}")
def _requires_user_interaction(self, operation: str) -> bool:
"""Determine if operation requires user confirmation"""
critical_keywords = [
"erase",
"burn",
"encrypt",
"secure",
"factory",
"reset",
"delete",
"remove",
"clear",
"format",
"destroy",
]
operation_lower = operation.lower()
return any(keyword in operation_lower for keyword in critical_keywords)
def _format_message(self, message: str, *args) -> str:
"""Format message with optional arguments"""
try:
return message % args if args else message
except (TypeError, ValueError):
return f"{message} {' '.join(map(str, args))}" if args else message
async def _log_operation_start(self) -> None:
"""Log operation start"""
await self._log_info(f"🔧 Operation started: {self.operation_id}")
async def _log_operation_end(self) -> None:
"""Log operation completion with statistics"""
duration = time.time() - self.operation_start_time
await self._log_info(
f"⏱️ Operation completed: {self.operation_id} "
f"(duration: {duration:.2f}s, "
f"progress_updates: {len(self.progress_history)}, "
f"confirmations: {len(self.user_confirmations)})"
)
def get_operation_statistics(self) -> dict[str, Any]:
"""Get operation statistics for analysis"""
duration = time.time() - self.operation_start_time
return {
"operation_id": self.operation_id,
"duration_seconds": round(duration, 2),
"progress_updates": len(self.progress_history),
"user_confirmations": len(self.user_confirmations),
"stages_completed": len(self.active_stages),
"capabilities_used": [cap for cap, available in self.capabilities.items() if available],
"start_time": self.operation_start_time,
"end_time": time.time(),
}
class MiddlewareError(Exception):
"""Base exception for middleware-related errors"""
pass
class ToolNotFoundError(MiddlewareError):
"""Raised when target CLI tool is not found or available"""
pass
class HookInstallationError(MiddlewareError):
"""Raised when middleware hooks cannot be installed"""
pass
class UserCancellationError(MiddlewareError):
"""Raised when user cancels an operation"""
pass

View File

@ -0,0 +1,158 @@
"""
Middleware Factory
Provides factory methods for creating appropriate middleware instances
based on target CLI tools and operation context.
"""
import logging
from typing import Any
from uuid import uuid4
from fastmcp import Context
from .esptool_middleware import ESPToolMiddleware
from .logger_interceptor import LoggerInterceptor, ToolNotFoundError
logger = logging.getLogger(__name__)
class MiddlewareFactory:
"""Factory for creating CLI tool middleware instances"""
# Registry of available middleware classes
_middleware_registry: dict[str, type[LoggerInterceptor]] = {
"esptool": ESPToolMiddleware,
}
@classmethod
def create_middleware(
cls, tool_name: str, context: Context, operation_id: str | None = None, **kwargs
) -> LoggerInterceptor:
"""
Create middleware instance for specified CLI tool
Args:
tool_name: Name of the CLI tool (e.g., 'esptool')
context: FastMCP context for logging and user interaction
operation_id: Unique identifier for this operation
**kwargs: Additional parameters for middleware initialization
Returns:
Configured middleware instance
Raises:
ToolNotFoundError: If tool is not supported
"""
if tool_name not in cls._middleware_registry:
available_tools = ", ".join(cls._middleware_registry.keys())
raise ToolNotFoundError(
f"No middleware available for tool: {tool_name}. Available tools: {available_tools}"
)
# Generate operation ID if not provided
if operation_id is None:
operation_id = f"{tool_name}_{uuid4().hex[:8]}"
# Get middleware class and create instance
middleware_class = cls._middleware_registry[tool_name]
try:
middleware = middleware_class(context, operation_id, **kwargs)
logger.info(f"Created {tool_name} middleware with operation ID: {operation_id}")
return middleware
except Exception as e:
logger.error(f"Failed to create {tool_name} middleware: {e}")
raise ToolNotFoundError(f"Failed to initialize {tool_name} middleware: {e}") from e
@classmethod
def register_middleware(cls, tool_name: str, middleware_class: type[LoggerInterceptor]) -> None:
"""
Register a new middleware class for a CLI tool
Args:
tool_name: Name of the CLI tool
middleware_class: Middleware class that extends LoggerInterceptor
"""
if not issubclass(middleware_class, LoggerInterceptor):
raise ValueError(f"Middleware class must extend LoggerInterceptor: {middleware_class}")
cls._middleware_registry[tool_name] = middleware_class
logger.info(f"Registered middleware for tool: {tool_name}")
@classmethod
def get_supported_tools(cls) -> dict[str, str]:
"""
Get list of supported CLI tools and their descriptions
Returns:
Dictionary mapping tool names to descriptions
"""
tool_descriptions = {
"esptool": "ESP32/ESP8266 programming and debugging tool",
}
return {
tool: tool_descriptions.get(tool, "CLI tool integration")
for tool in cls._middleware_registry.keys()
}
@classmethod
def is_tool_supported(cls, tool_name: str) -> bool:
"""Check if a CLI tool is supported by middleware"""
return tool_name in cls._middleware_registry
@classmethod
def create_esptool_middleware(
cls, context: Context, operation_id: str | None = None, **kwargs
) -> ESPToolMiddleware:
"""
Convenience method to create ESPTool middleware with proper typing
Args:
context: FastMCP context
operation_id: Optional operation identifier
**kwargs: Additional ESPTool-specific parameters
Returns:
Configured ESPToolMiddleware instance
"""
middleware = cls.create_middleware("esptool", context, operation_id, **kwargs)
assert isinstance(middleware, ESPToolMiddleware)  # registry maps "esptool" to this class
return middleware
@classmethod
def get_middleware_info(cls, tool_name: str) -> dict[str, Any]:
"""
Get information about a specific middleware
Args:
tool_name: Name of the CLI tool
Returns:
Dictionary with middleware information
"""
if not cls.is_tool_supported(tool_name):
return {"error": f"Tool not supported: {tool_name}"}
middleware_class = cls._middleware_registry[tool_name]
# Create temporary instance to get interaction points
# (without context, for info purposes only)
try:
# Use a dummy context for information gathering
class DummyContext:
pass
temp_instance = middleware_class(DummyContext(), "info_query")
interaction_points = temp_instance.get_interaction_points()
except Exception:
interaction_points = []
return {
"tool_name": tool_name,
"middleware_class": middleware_class.__name__,
"description": cls.get_supported_tools()[tool_name],
"interaction_points": interaction_points,
"module": middleware_class.__module__,
}
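The registry pattern used by `MiddlewareFactory` can be sketched with hypothetical stand-in classes (`BaseMiddleware` and `OpenOcdMiddleware` are illustrative, not part of the real package):

```python
class BaseMiddleware:
    def __init__(self, context, operation_id):
        self.context = context
        self.operation_id = operation_id

class ToolRegistry:
    """Minimal sketch of the class-level middleware registry."""
    _registry: dict[str, type] = {}

    @classmethod
    def register(cls, tool_name: str, middleware_class: type) -> None:
        if not issubclass(middleware_class, BaseMiddleware):
            raise ValueError(f"Must extend BaseMiddleware: {middleware_class}")
        cls._registry[tool_name] = middleware_class

    @classmethod
    def create(cls, tool_name: str, context, operation_id: str) -> BaseMiddleware:
        if tool_name not in cls._registry:
            raise KeyError(f"No middleware available for tool: {tool_name}")
        return cls._registry[tool_name](context, operation_id)

class OpenOcdMiddleware(BaseMiddleware):
    """Hypothetical second tool, registered alongside the built-ins."""

ToolRegistry.register("openocd", OpenOcdMiddleware)
mw = ToolRegistry.create("openocd", context=None, operation_id="openocd_1a2b3c4d")
print(type(mw).__name__, mw.operation_id)
```

Registering at class level means new tools become available to every caller of the factory without touching the factory module itself.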

View File

@ -5,7 +5,9 @@ This is the core server that orchestrates all ESP development components using F
Provides AI-powered ESP32/ESP8266 development workflows with production-grade capabilities.
"""
import asyncio
import logging
import signal
import sys
import time
from typing import Any
@ -167,7 +169,7 @@ class ESPToolServer:
"esp_efuse_read",
"esp_efuse_burn",
],
- "firmware": ["esp_elf_to_binary", "esp_firmware_analyze"],
+ "firmware": ["esp_elf_to_binary", "esp_firmware_analyze", "esp_binary_optimize"],
"ota": ["esp_ota_package_create", "esp_ota_deploy", "esp_ota_rollback"],
"production": ["esp_factory_program", "esp_batch_program", "esp_quality_control"],
"diagnostics": [

View File

@ -8,7 +8,7 @@ from pathlib import Path
import pytest
- from mcesptool.config import ESPToolServerConfig
+ from mcp_esptool_server.config import ESPToolServerConfig
def test_config_from_environment():

View File

@ -6,7 +6,7 @@ from unittest.mock import AsyncMock
import pytest
- from mcesptool.middleware import LoggerInterceptor, MiddlewareFactory
+ from mcp_esptool_server.middleware import LoggerInterceptor, MiddlewareFactory
class MockContext:

View File

@ -10,13 +10,13 @@ from unittest.mock import AsyncMock, MagicMock
import pytest
- from mcesptool.components.qemu_manager import (
+ from mcp_esptool_server.components.qemu_manager import (
CHIP_MACHINES,
QemuInstance,
QemuManager,
_create_blank_flash,
)
- from mcesptool.config import ESPToolServerConfig
+ from mcp_esptool_server.config import ESPToolServerConfig
@pytest.fixture

uv.lock generated
View File

@ -67,6 +67,88 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/a0/59/76ab57e3fe74484f48a53f8e337171b4a2349e506eabe136d7e01d059086/backports_asyncio_runner-1.2.0-py3-none-any.whl", hash = "sha256:0da0a936a8aeb554eccb426dc55af3ba63bcdc69fa1a600b5bb305413a4477b5", size = 12313, upload-time = "2025-07-02T02:27:14.263Z" },
]
[[package]]
name = "bitarray"
version = "3.7.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/99/b6/282f5f0331b3877d4e79a8aa1cf63b5113a10f035a39bef1fa1dfe9e9e09/bitarray-3.7.1.tar.gz", hash = "sha256:795b1760418ab750826420ae24f06f392c08e21dc234f0a369a69cc00444f8ec", size = 150474, upload-time = "2025-08-28T22:18:15.346Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/42/98/bafe556fe4d97a975fa5c31965aaa282388cc91073aca57a2de206745b11/bitarray-3.7.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:a05982bb49c73463cb0f0f4bed2d8da82631708a2c2d1926107ba99651b419ec", size = 147651, upload-time = "2025-08-28T22:14:53.043Z" },
{ url = "https://files.pythonhosted.org/packages/03/87/639c1e4d869ecd7c23d517c326bfee7ab43ade5d5bd0f6ad3373edc861a8/bitarray-3.7.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d30e7daaf228e3d69cdd8b02c0dd4199cec034c4b93c80109f56f4675a6db957", size = 143967, upload-time = "2025-08-28T22:14:55.333Z" },
{ url = "https://files.pythonhosted.org/packages/24/e9/8248a05b35f3e3667ceb103febb0d687d3f7314e4692b2048d21ed943a4e/bitarray-3.7.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:160f449bb91686f8fc9984200e78b8d793b79e382decf7eb1dc9948d7c21b36f", size = 319901, upload-time = "2025-08-28T22:14:56.742Z" },
{ url = "https://files.pythonhosted.org/packages/de/e8/47f9d8eebb793b6828baf76027b9eefc4e5e09f32b84a25821c4bc19c3c4/bitarray-3.7.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6542e1cfe060badd160cd383ad93a84871595c14bb05fb8129f963248affd946", size = 339005, upload-time = "2025-08-28T22:14:58.291Z" },
{ url = "https://files.pythonhosted.org/packages/61/73/2c4695e5acd89d9904c5b3bea7b5b06df86dea15653eee6008881d18a632/bitarray-3.7.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b723f9d10f7d8259f010b87fa66e924bb4d67927d9dcff4526a755e9ee84fef4", size = 329495, upload-time = "2025-08-28T22:14:59.722Z" },
{ url = "https://files.pythonhosted.org/packages/0f/d9/dc17b9f5b7b750dc9183db0520e197f1ca635dedd48e37ad00ca450d2fab/bitarray-3.7.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ca4b6298c89b92d6b0a67dfc5f98d68ae92b08101d227263ef2033b9c9a03a72", size = 322141, upload-time = "2025-08-28T22:15:00.829Z" },
{ url = "https://files.pythonhosted.org/packages/a7/45/8fb00265c1b0313070e0a4b09a2f585fd3ee174aaa5352d971069983c983/bitarray-3.7.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:567d6891cb1ddbfd0051fcff3cb1bb86efc82ec818d9c5f98c37d59c1d23cc96", size = 310422, upload-time = "2025-08-28T22:15:01.964Z" },
{ url = "https://files.pythonhosted.org/packages/f6/77/04cb016694ae16ffe1a146f1a764b79e71f3ddbc7b9d78069594507c9762/bitarray-3.7.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:37a6a8382864a1defb5b370b66a635e04358c7334054457bbbb8645610cd95b2", size = 314796, upload-time = "2025-08-28T22:15:04.468Z" },
{ url = "https://files.pythonhosted.org/packages/b5/4f/8e15934995c5362e645ea27d9521e6b29953dc9f8df59e74525c8022e347/bitarray-3.7.1-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:01e3ba46c2dee6d47a4ab22561a01d8ee6772f681defc9fcb357097a055e48cf", size = 311222, upload-time = "2025-08-28T22:15:05.846Z" },
{ url = "https://files.pythonhosted.org/packages/f4/d2/9cc6df1ab5b9d10904bf78820e2427cf9b373376ca82af64a0b31eff7b31/bitarray-3.7.1-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:477b9456eb7d70f385dc8f097a1d66ee40771b62e47b3b3e33406dcfbc1c6a3b", size = 339685, upload-time = "2025-08-28T22:15:06.992Z" },
{ url = "https://files.pythonhosted.org/packages/ed/6d/b79e5e545a928270445c6916cf2d7613a8a8434eee8de023c900a0a08e15/bitarray-3.7.1-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:2965fd8ba31b04c42e4b696fad509dc5ab50663efca6eb06bb3b6d08587f3a09", size = 339660, upload-time = "2025-08-28T22:15:08.068Z" },
{ url = "https://files.pythonhosted.org/packages/e9/33/8b836518ba16a85c75c177aa0d6658e843b4b0c1ec5994fb9f1b28e9440d/bitarray-3.7.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:cc76ad7453816318d794248fba4032967eaffd992d76e5d1af10ef9d46589770", size = 320079, upload-time = "2025-08-28T22:15:09.276Z" },
{ url = "https://files.pythonhosted.org/packages/7b/8e/87603ccf798c99296fdb26b9297350f44f87cb2aced76d3b8b0446ac8cd2/bitarray-3.7.1-cp310-cp310-win32.whl", hash = "sha256:d3f38373d9b2629dedc559e647010541cc4ec4ad9bea560e2eb1017e6a00d9ef", size = 141228, upload-time = "2025-08-28T22:15:10.383Z" },
{ url = "https://files.pythonhosted.org/packages/50/06/7003c5520d2bb36edb68b016b1a83ddd5946da67b9d9982b12a8ef68d706/bitarray-3.7.1-cp310-cp310-win_amd64.whl", hash = "sha256:e39f5e85e1e3d7d84ac2217cd095b3678306c979e991532df47012880e02215d", size = 147988, upload-time = "2025-08-28T22:15:11.718Z" },
{ url = "https://files.pythonhosted.org/packages/c6/0b/6fc7221d6d6508b2648f2b99dda9188dc46640023e6c2d3fb78070013901/bitarray-3.7.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ac39319e6322c2c093a660c02cea6bb3b1ae53d049b573d4781df8896e443e04", size = 147645, upload-time = "2025-08-28T22:15:12.966Z" },
{ url = "https://files.pythonhosted.org/packages/43/96/122ef83579cde311e77d5da284b71dfb5ab1c38250b6a97a4f4adae4ef5a/bitarray-3.7.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:a43f4631ecb87bedc510568fef67db53f2a20c4a5953a9d1e07457e7b1d14911", size = 143971, upload-time = "2025-08-28T22:15:14.374Z" },
{ url = "https://files.pythonhosted.org/packages/f6/f9/cd0e27f8399b930fcea8b87b36de0ba8c88e8f953dbc98e81ca322352d24/bitarray-3.7.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ffd112646486a31ea5a45aa1eca0e2cd90b6a12f67e848e50349e324c24cc2e7", size = 327521, upload-time = "2025-08-28T22:15:15.381Z" },
{ url = "https://files.pythonhosted.org/packages/35/ad/f64f4be628536404c9576a0a40b10f5304bb37a69fb6cb37987e9ae92782/bitarray-3.7.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:db0441e80773d747a1ed9edfb9f75e7acb68ce8627583bbb6f770b7ec49f0064", size = 347583, upload-time = "2025-08-28T22:15:16.708Z" },
{ url = "https://files.pythonhosted.org/packages/e6/82/98774e33b3286fd83c6e48f5fb4e362d39b531011b4e1dd5aeba9dfdd3b8/bitarray-3.7.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ef5a99a8d1a5c47b4cf85925d1420fc4ee584c98be8efc548651447b3047242f", size = 338572, upload-time = "2025-08-28T22:15:20.235Z" },
{ url = "https://files.pythonhosted.org/packages/02/cc/aadc3bf1382d9660f755d74b3275c866a20e01ad2062cc777b2378423e97/bitarray-3.7.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fdb7af369df317527d697c5bb37ab944bb9a17ea1a5e82e47d5c7c638f3ccdd6", size = 329984, upload-time = "2025-08-28T22:15:21.684Z" },
{ url = "https://files.pythonhosted.org/packages/42/ba/f9db45b9d6d01793afe62190c3f58bfe1969bd5798612663225560c24d94/bitarray-3.7.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eda67136343db96752e58ef36ac37116f36cba40961e79fd0e9bd858f5a09b38", size = 318777, upload-time = "2025-08-28T22:15:22.816Z" },
{ url = "https://files.pythonhosted.org/packages/5e/1b/18d11fe8f3192be5c2986d0faada5b3c9c0e43082ba031c12c75ebc64fd2/bitarray-3.7.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:79038bf1a7b13d243e51f4b6909c6997c2ba2bffc45bcae264704308a2d17198", size = 322772, upload-time = "2025-08-28T22:15:24.063Z" },
{ url = "https://files.pythonhosted.org/packages/dc/20/3aaf1c21af0f8dca623d06f12ce44fb45f94c10c6550e8d2e57d811b1881/bitarray-3.7.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:d12c45da97b2f31d0233e15f8d68731cfa86264c9f04b2669b9fdf46aaf68e1f", size = 318773, upload-time = "2025-08-28T22:15:25.536Z" },
{ url = "https://files.pythonhosted.org/packages/b0/80/2d066264b1f3b3c495e12c55a9d0955733e890388d63ba75c408bb936fb7/bitarray-3.7.1-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:64d1143e90299ba8c967324840912a63a903494b1870a52f6675bda53dc332f7", size = 347391, upload-time = "2025-08-28T22:15:26.646Z" },
{ url = "https://files.pythonhosted.org/packages/e6/4b/819d5614433881ae779a6b23dd74d399c790777e3f084a270851059a77b2/bitarray-3.7.1-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:c4e04c12f507942f1ddf215cb3a08c244d24051cdd2ba571060166ce8a92be16", size = 347719, upload-time = "2025-08-28T22:15:27.851Z" },
{ url = "https://files.pythonhosted.org/packages/52/63/a278c08f1e47711f71e396135c0d6d38811f551613b84af8ac7901bfaea9/bitarray-3.7.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:ddc646cec4899a137c134b13818469e4178a251d77f9f4b23229267e3da78cfb", size = 328197, upload-time = "2025-08-28T22:15:29.392Z" },
{ url = "https://files.pythonhosted.org/packages/aa/73/6a74193cf565b01747ebd7979752060128e6c1423378471b04d8ed28b6f0/bitarray-3.7.1-cp311-cp311-win32.whl", hash = "sha256:a23b5f13f9b292004e94b0b13fead4dae79c7512db04dc817ff2c2478298e04a", size = 141377, upload-time = "2025-08-28T22:15:30.471Z" },
{ url = "https://files.pythonhosted.org/packages/13/03/7bbaadf90b282c7f1bc21c3c4855ce869d3ecd444071b1dab55baaec9328/bitarray-3.7.1-cp311-cp311-win_amd64.whl", hash = "sha256:acc56700963f63307ac096689d4547e8061028a66bb78b90e42c5da2898898fb", size = 148203, upload-time = "2025-08-28T22:15:31.525Z" },
{ url = "https://files.pythonhosted.org/packages/89/27/46b5b4dabecf84f750587cded3640658448d27c59f4dd2cbaa589085f43a/bitarray-3.7.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:b99a0347bc6131046c19e056a113daa34d7df99f1f45510161bc78bc8461a470", size = 147349, upload-time = "2025-08-28T22:15:32.729Z" },
{ url = "https://files.pythonhosted.org/packages/f9/1e/7f61150577127a1540136ba8a63ba17c661a17e721e03404fcd5833a4a05/bitarray-3.7.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:d7e274ac1975e55ebfb8166cce27e13dc99120c1d6ce9e490d7a716b9be9abb5", size = 143922, upload-time = "2025-08-28T22:15:33.963Z" },
{ url = "https://files.pythonhosted.org/packages/ca/b2/7c852472df8c644d05530bc0ad586fead5f23a9d176873c2c54f57e16b4e/bitarray-3.7.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3b9a2eb7d2e0e9c2f25256d2663c0a2a4798fe3110e3ddbbb1a7b71740b4de08", size = 330277, upload-time = "2025-08-28T22:15:34.997Z" },
{ url = "https://files.pythonhosted.org/packages/7b/38/681340eea0997c48ef2dbf1acb0786090518704ca32f9a2c3c669bdea08e/bitarray-3.7.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e15e70a3cf5bb519e2448524d689c02ff6bcd4750587a517e2bffee06065bf27", size = 349562, upload-time = "2025-08-28T22:15:36.554Z" },
{ url = "https://files.pythonhosted.org/packages/c4/f4/6fc43f896af85c5b10a74b1d8a87c05915464869594131a2d7731707a108/bitarray-3.7.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c65257899bb8faf6a111297b4ff0066324a6b901318582c0453a01422c3bcd5a", size = 341249, upload-time = "2025-08-28T22:15:37.774Z" },
{ url = "https://files.pythonhosted.org/packages/89/c7/1f71164799cacd44964ead87e1fc7e2f0ddec6d0519515a82d54eb8c8a13/bitarray-3.7.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:38b0261483c59bb39ae9300ad46bf0bbf431ab604266382d986a349c96171b36", size = 332874, upload-time = "2025-08-28T22:15:38.935Z" },
{ url = "https://files.pythonhosted.org/packages/95/cd/4d7c19064fa7fe94c2818712695fa186a1d0bb9c5cb0cf34693df81d3202/bitarray-3.7.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d2b1ed363a4ef5622dccbf7822f01b51195062c4f382b28c9bd125d046d0324c", size = 321107, upload-time = "2025-08-28T22:15:40.071Z" },
{ url = "https://files.pythonhosted.org/packages/1e/d2/7d5ffe491c70614c0eb4a0186666efe925a02e25ed80ebd19c5fcb1c62e8/bitarray-3.7.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:dfde50ae55e075dcd5801e2c3ea0e749c849ed2cbbee991af0f97f1bdbadb2a6", size = 324999, upload-time = "2025-08-28T22:15:41.241Z" },
{ url = "https://files.pythonhosted.org/packages/11/d9/95fb87ec72c01169dad574baf7bc9e0d2bb73975d7ea29a83920a38646f4/bitarray-3.7.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:45660e2fabcdc1bab9699a468b312f47956300d41d6a2ea91c8f067572aaf38a", size = 321816, upload-time = "2025-08-28T22:15:42.417Z" },
{ url = "https://files.pythonhosted.org/packages/6b/3d/57ac96bbd125df75219c59afa297242054c09f22548aff028a8cefa8f120/bitarray-3.7.1-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:7b4a41dc183d7d16750634f65566205990f94144755a39f33da44c0350c3e1a8", size = 349342, upload-time = "2025-08-28T22:15:43.997Z" },
{ url = "https://files.pythonhosted.org/packages/a9/14/d28f7456d2c3b3f7898186498b6d7fd3eecab267c300fb333fc2a8d55965/bitarray-3.7.1-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:8b8e07374d60040b24d1a158895d9758424db13be63d4b2fe1870e37f9dec009", size = 350501, upload-time = "2025-08-28T22:15:45.377Z" },
{ url = "https://files.pythonhosted.org/packages/bb/a4/0f803dc446e602b21e61315f5fa2cdec02a65340147b08f7efadba559f38/bitarray-3.7.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:f31d8c2168bf2a52e4539232392352832c2296e07e0e14b6e06a44da574099ba", size = 331362, upload-time = "2025-08-28T22:15:46.577Z" },
{ url = "https://files.pythonhosted.org/packages/c9/03/25e4c4b91a33f1eae0a9e9b2b11f1eaed14e37499abbde154ff33888f5f5/bitarray-3.7.1-cp312-cp312-win32.whl", hash = "sha256:fe1f1f4010244cb07f6a079854a12e1627e4fb9ea99d672f2ceccaf6653ca514", size = 141474, upload-time = "2025-08-28T22:15:48.185Z" },
{ url = "https://files.pythonhosted.org/packages/25/53/98efa8ee389e4cbd91fc7c87bfebd4e11d6f8a027eb3f9be42d1addf1f51/bitarray-3.7.1-cp312-cp312-win_amd64.whl", hash = "sha256:f41a4b57cbc128a699e9d716a56c90c7fc76554e680fe2962f49cc4d8688b051", size = 148458, upload-time = "2025-08-28T22:15:49.256Z" },
{ url = "https://files.pythonhosted.org/packages/97/7f/16d59c041b0208bc1003fcfbf466f1936b797440e6119ce0adca7318af48/bitarray-3.7.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:e62892645f6a214eefb58a42c3ed2501af2e40a797844e0e09ec1e400ce75f3d", size = 147343, upload-time = "2025-08-28T22:15:50.617Z" },
{ url = "https://files.pythonhosted.org/packages/1a/fb/5add457d3faa0e17fde5e220bb33c0084355b9567ff9bcba2fe70fef3626/bitarray-3.7.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:3092f6bbf4a75b1e6f14a5b1030e27c435f341afeb23987115e45a25cc68ba91", size = 143904, upload-time = "2025-08-28T22:15:52.06Z" },
{ url = "https://files.pythonhosted.org/packages/95/b9/c5ab584bb8d0ba1ec72eaac7fc1e712294db77a6230c033c9b15a2de33ae/bitarray-3.7.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:851398428f5604c53371b72c5e0a28163274264ada4a08cd1eafe65fde1f68d0", size = 330206, upload-time = "2025-08-28T22:15:53.492Z" },
{ url = "https://files.pythonhosted.org/packages/f0/cd/a4d95232a2374ce55e740fbb052a1e3a9aa52e96c7597d9152b1c9d79ecc/bitarray-3.7.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:fa05460dc4f57358680b977b4a254d331b24c8beb501319b998625fd6a22654b", size = 349372, upload-time = "2025-08-28T22:15:55.043Z" },
{ url = "https://files.pythonhosted.org/packages/69/6c/8fb54cea100bd9358a7478d392042845800e809ab3a00873f2f0ae3d0306/bitarray-3.7.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9ad0df7886cb9d6d2ff75e87d323108a0e32bdca5c9918071681864129ce8ea8", size = 341120, upload-time = "2025-08-28T22:15:56.372Z" },
{ url = "https://files.pythonhosted.org/packages/bd/eb/dcbb1782bf93afa2baccbc1206bb1053f61fe999443e9180e7d9be322565/bitarray-3.7.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:55c31bc3d2c9e48741c812ee5ce4607c6f33e33f339831c214d923ffc7777d21", size = 332759, upload-time = "2025-08-28T22:15:57.984Z" },
{ url = "https://files.pythonhosted.org/packages/e2/f2/164aed832c5ece367d5347610cb7e50e5706ca1a882b9f172cb84669f591/bitarray-3.7.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:44f468fb4857fff86c65bec5e2fb67067789e40dad69258e9bb78fc6a6df49e7", size = 320992, upload-time = "2025-08-28T22:16:01.039Z" },
{ url = "https://files.pythonhosted.org/packages/35/35/fd51da63ad364d5c03690bb895e34b20c9bedce10c6d0b4d7ed7677c4b09/bitarray-3.7.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:340c524c7c934b61d1985d805bffe7609180fb5d16ece6ce89b51aa535b936f2", size = 324987, upload-time = "2025-08-28T22:16:02.327Z" },
{ url = "https://files.pythonhosted.org/packages/a3/f3/3f4f31a80f343c6c3360ca4eac04f471bf009b6346de745016f8b4990bad/bitarray-3.7.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:0751596f60f33df66245b2dafa3f7fbe13cb7ac91dd14ead87d8c2eec57cb3ed", size = 321816, upload-time = "2025-08-28T22:16:03.751Z" },
{ url = "https://files.pythonhosted.org/packages/f5/60/26ce8cff96255198581cb88f9566820d6b3c262db4c185995cc5537b3d07/bitarray-3.7.1-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:e501bd27c795105aaba02b5212ecd1bb552ca2ee2ede53e5a8cb74deee0e2052", size = 349354, upload-time = "2025-08-28T22:16:04.966Z" },
{ url = "https://files.pythonhosted.org/packages/dc/f8/e2edda9c37ba9be5349beb145dcad14d8d339f7de293b4b2bd770227c5a7/bitarray-3.7.1-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:fe2493d3f49e314e573022ead4d8c845c9748979b7eb95e815429fe947c4bde2", size = 350491, upload-time = "2025-08-28T22:16:06.778Z" },
{ url = "https://files.pythonhosted.org/packages/c0/c5/b82dd6bd8699ad818c13ae02b6acfc6c38c9278af1f71005b5d0c5f29338/bitarray-3.7.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:1f1575cc0f66aa70a0bb5cb57c8d9d1b7d541d920455169c6266919bf804dc20", size = 331367, upload-time = "2025-08-28T22:16:08.53Z" },
{ url = "https://files.pythonhosted.org/packages/51/82/03613ad262d6e2a76b906dd279de26694910a95e4ed8ebde57c9fd3f3aa7/bitarray-3.7.1-cp313-cp313-win32.whl", hash = "sha256:da3dfd2776226e15d3288a3a24c7975f9ee160ba198f2efa66bc28c5ba76d792", size = 141481, upload-time = "2025-08-28T22:16:09.727Z" },
{ url = "https://files.pythonhosted.org/packages/f1/7e/1730701a865fd1e4353900d5821c96e68695aed88d121f8783aea14c4e74/bitarray-3.7.1-cp313-cp313-win_amd64.whl", hash = "sha256:33f604bffd06b170637f8a48ddcf42074ed1e1980366ac46058e065ce04bfe2a", size = 148450, upload-time = "2025-08-28T22:16:10.959Z" },
{ url = "https://files.pythonhosted.org/packages/58/1f/80316ba4ed605d005efeb0b09c97cecde2c66ac4deae9d1d698670e1525f/bitarray-3.7.1-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:c9bf2bf29854f165a47917b8782b6cf3a7d602971bf454806208d0cbb96f797a", size = 143423, upload-time = "2025-08-28T22:17:37.879Z" },
{ url = "https://files.pythonhosted.org/packages/9e/c3/52a491e18ba41911455f145906b20898fe8e7955d0bcc5b20207bf2aba09/bitarray-3.7.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:002b73bf4a9f7b3ecb02260bd4dd332a6ee4d7f74ee9779a1ef342a36244d0cf", size = 139870, upload-time = "2025-08-28T22:17:39.266Z" },
{ url = "https://files.pythonhosted.org/packages/46/df/4674d16f39841fc71db6ecc6298390cbb91a7dd8c4eccd55248a4ddced06/bitarray-3.7.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:481239cd0966f965c2b8fa78b88614be5f12a64e7773bb5feecc567d39bb2dd5", size = 148773, upload-time = "2025-08-28T22:17:40.81Z" },
{ url = "https://files.pythonhosted.org/packages/9b/85/9cd8bc811ab446491a5bdc47a70d6d51adb21e3b005b549d2fd5e04f5c7f/bitarray-3.7.1-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f583a1fb180a123c00064fab1a3bfb9d43e574b6474be1be3f6469e0331e3e2e", size = 149609, upload-time = "2025-08-28T22:17:42.308Z" },
{ url = "https://files.pythonhosted.org/packages/ea/84/e413c51313a4093ed67f657d21519c5fc592bdb9129c0ab8c7bad226e2b8/bitarray-3.7.1-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3db0648536f3e08afa7ceb928153c39913f98fd50a5c3adf92a4d0d4268f213e", size = 151343, upload-time = "2025-08-28T22:17:43.749Z" },
{ url = "https://files.pythonhosted.org/packages/a5/4f/921176e539866a8f7428d92962861bbfa6104f2cea0cbdd578abe5768a83/bitarray-3.7.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:3875578748b484638f6ea776f534e9088cfb15eee131aac051036cba40fd5d05", size = 146847, upload-time = "2025-08-28T22:17:45.209Z" },
]

[[package]]
name = "bitstring"
version = "4.3.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "bitarray" },
]
sdist = { url = "https://files.pythonhosted.org/packages/15/a8/a80c890db75d5bdd5314b5de02c4144c7de94fd0cefcae51acaeb14c6a3f/bitstring-4.3.1.tar.gz", hash = "sha256:a08bc09d3857216d4c0f412a1611056f1cc2b64fd254fb1e8a0afba7cfa1a95a", size = 251426, upload-time = "2025-03-22T09:39:06.978Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/75/2d/174566b533755ddf8efb32a5503af61c756a983de379f8ad3aed6a982d38/bitstring-4.3.1-py3-none-any.whl", hash = "sha256:69d1587f0ac18dc7d93fc7e80d5f447161a33e57027e726dc18a0a8bacf1711a", size = 71930, upload-time = "2025-03-22T09:39:05.163Z" },
]

[[package]]
name = "black"
version = "25.9.0"
@@ -521,6 +603,22 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/de/15/545e2b6cf2e3be84bc1ed85613edd75b8aea69807a71c26f4ca6a9258e82/email_validator-2.3.0-py3-none-any.whl", hash = "sha256:80f13f623413e6b197ae73bb10bf4eb0908faf509ad8362c5edeb0be7fd450b4", size = 35604, upload-time = "2025-08-26T13:09:05.858Z" },
]

[[package]]
name = "esptool"
version = "5.1.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "bitstring" },
{ name = "click" },
{ name = "cryptography" },
{ name = "intelhex" },
{ name = "pyserial" },
{ name = "pyyaml" },
{ name = "reedsolo" },
{ name = "rich-click" },
]
sdist = { url = "https://files.pythonhosted.org/packages/c2/03/d7d79a77dd787dbe6029809c5f81ad88912340a131c88075189f40df3aba/esptool-5.1.0.tar.gz", hash = "sha256:2ea9bcd7eb263d380a4fe0170856a10e4c65e3f38c757ebdc73584c8dd8322da", size = 383926, upload-time = "2025-09-16T05:27:23.715Z" }

[[package]]
name = "exceptiongroup"
version = "1.3.0"
@@ -682,6 +780,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/2c/e1/e6716421ea10d38022b952c159d5161ca1193197fb744506875fbb87ea7b/iniconfig-2.1.0-py3-none-any.whl", hash = "sha256:9deba5723312380e77435581c6bf4935c94cbfab9b1ed33ef8d238ea168eb760", size = 6050, upload-time = "2025-03-19T20:10:01.071Z" },
]

[[package]]
name = "intelhex"
version = "2.3.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/66/37/1e7522494557d342a24cb236e2aec5d078fac8ed03ad4b61372586406b01/intelhex-2.3.0.tar.gz", hash = "sha256:892b7361a719f4945237da8ccf754e9513db32f5628852785aea108dcd250093", size = 44513, upload-time = "2020-10-20T20:35:51.526Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/97/78/79461288da2b13ed0a13deb65c4ad1428acb674b95278fa9abf1cefe62a2/intelhex-2.3.0-py2.py3-none-any.whl", hash = "sha256:87cc5225657524ec6361354be928adfd56bcf2a3dcc646c40f8f094c39c07db4", size = 50914, upload-time = "2020-10-20T20:35:50.162Z" },
]

[[package]]
name = "isodate"
version = "0.7.2"
@@ -885,12 +992,35 @@ wheels = [
]

[[package]]
name = "mcesptool"
name = "mcp"
version = "1.15.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "anyio" },
{ name = "httpx" },
{ name = "httpx-sse" },
{ name = "jsonschema" },
{ name = "pydantic" },
{ name = "pydantic-settings" },
{ name = "python-multipart" },
{ name = "pywin32", marker = "sys_platform == 'win32'" },
{ name = "sse-starlette" },
{ name = "starlette" },
{ name = "uvicorn", marker = "sys_platform != 'emscripten'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/0c/9e/e65114795f359f314d7061f4fcb50dfe60026b01b52ad0b986b4631bf8bb/mcp-1.15.0.tar.gz", hash = "sha256:5bda1f4d383cf539d3c035b3505a3de94b20dbd7e4e8b4bd071e14634eeb2d72", size = 469622, upload-time = "2025-09-25T15:39:51.995Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c9/82/4d0df23d5ff5bb982a59ad597bc7cb9920f2650278ccefb8e0d85c5ce3d4/mcp-1.15.0-py3-none-any.whl", hash = "sha256:314614c8addc67b663d6c3e4054db0a5c3dedc416c24ef8ce954e203fdc2333d", size = 166963, upload-time = "2025-09-25T15:39:50.538Z" },
]

[[package]]
name = "mcp-esptool-server"
version = "2025.9.28.1"
source = { editable = "." }
dependencies = [
{ name = "asyncio-mqtt" },
{ name = "click" },
{ name = "esptool" },
{ name = "fastmcp" },
{ name = "pydantic" },
{ name = "pyserial" },
@@ -930,6 +1060,7 @@ requires-dist = [
{ name = "asyncio-mqtt", specifier = ">=0.16.0" },
{ name = "black", marker = "extra == 'dev'", specifier = ">=23.0.0" },
{ name = "click", specifier = ">=8.0.0" },
{ name = "esptool", specifier = ">=5.0.0" },
{ name = "factory-boy", marker = "extra == 'testing'", specifier = ">=3.3.0" },
{ name = "fastmcp", specifier = ">=2.12.4" },
{ name = "gunicorn", marker = "extra == 'production'", specifier = ">=21.0.0" },
@@ -954,28 +1085,6 @@ requires-dist = [
]
provides-extras = ["dev", "idf", "testing", "production"]

[[package]]
name = "mcp"
version = "1.15.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "anyio" },
{ name = "httpx" },
{ name = "httpx-sse" },
{ name = "jsonschema" },
{ name = "pydantic" },
{ name = "pydantic-settings" },
{ name = "python-multipart" },
{ name = "pywin32", marker = "sys_platform == 'win32'" },
{ name = "sse-starlette" },
{ name = "starlette" },
{ name = "uvicorn", marker = "sys_platform != 'emscripten'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/0c/9e/e65114795f359f314d7061f4fcb50dfe60026b01b52ad0b986b4631bf8bb/mcp-1.15.0.tar.gz", hash = "sha256:5bda1f4d383cf539d3c035b3505a3de94b20dbd7e4e8b4bd071e14634eeb2d72", size = 469622, upload-time = "2025-09-25T15:39:51.995Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c9/82/4d0df23d5ff5bb982a59ad597bc7cb9920f2650278ccefb8e0d85c5ce3d4/mcp-1.15.0-py3-none-any.whl", hash = "sha256:314614c8addc67b663d6c3e4054db0a5c3dedc416c24ef8ce954e203fdc2333d", size = 166963, upload-time = "2025-09-25T15:39:50.538Z" },
]

[[package]]
name = "mdurl"
version = "0.1.2"
@@ -1680,6 +1789,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/b6/58/f515c44ba8c6fa5daa35134b94b99661ced852628c5505ead07b905c3fc7/rapidfuzz-3.14.1-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:a4f18092db4825f2517d135445015b40033ed809a41754918a03ef062abe88a0", size = 1513859, upload-time = "2025-09-08T21:08:13.07Z" },
]

[[package]]
name = "reedsolo"
version = "1.7.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f7/61/a67338cbecf370d464e71b10e9a31355f909d6937c3a8d6b17dd5d5beb5e/reedsolo-1.7.0.tar.gz", hash = "sha256:c1359f02742751afe0f1c0de9f0772cc113835aa2855d2db420ea24393c87732", size = 59723, upload-time = "2023-01-17T05:10:19.733Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/09/19/1bb346c0e581557c88946d2bb979b2bee8992e72314cfb418b5440e383db/reedsolo-1.7.0-py3-none-any.whl", hash = "sha256:2b6a3e402a1ee3e1eea3f932f81e6c0b7bbc615588074dca1dbbcdeb055002bd", size = 32360, upload-time = "2023-01-17T05:10:17.652Z" },
]

[[package]]
name = "referencing"
version = "0.36.2"
@@ -1734,6 +1852,20 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/e3/30/3c4d035596d3cf444529e0b2953ad0466f6049528a879d27534700580395/rich-14.1.0-py3-none-any.whl", hash = "sha256:536f5f1785986d6dbdea3c75205c473f970777b4a0d6c6dd1b696aa05a3fa04f", size = 243368, upload-time = "2025-07-25T07:32:56.73Z" },
]

[[package]]
name = "rich-click"
version = "1.9.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "click" },
{ name = "rich" },
{ name = "typing-extensions", marker = "python_full_version < '3.11'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/29/c2/f08b5e7c1a33af8a115be640aa0796ba01c4732696da6d2254391376b314/rich_click-1.9.1.tar.gz", hash = "sha256:4f2620589d7287f86265432e6a909de4f281de909fe68d8c835fbba49265d268", size = 73109, upload-time = "2025-09-20T22:40:35.362Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a8/77/e9144dcf68a0b3f3f4386986f97255c3d9f7c659be58bb7a5fe8f26f3efa/rich_click-1.9.1-py3-none-any.whl", hash = "sha256:ea6114a9e081b7d68cc07b315070398f806f01bb0e0c49da56f129e672877817", size = 69759, upload-time = "2025-09-20T22:40:34.099Z" },
]

[[package]]
name = "rich-rst"
version = "1.3.1"