Compare commits


10 Commits

Author SHA1 Message Date
cf8394fa6f Rename mcp-ltspice -> mcltspice, remove stdout banner
Rename package from mcp-ltspice/mcp_ltspice to mcltspice throughout:
source directory, imports, pyproject.toml, tests, and README.

Remove startup banner prints from main() since FastMCP handles
its own banner and stdout is the MCP JSON-RPC transport.

Point repo URL at git.supported.systems/MCP/mcltspice.
2026-02-12 22:53:16 -07:00
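The stdout constraint behind this commit is worth spelling out: a stdio MCP client parses each stdout line as a JSON-RPC message, so any banner printed there corrupts the stream before the handshake completes. A minimal standalone illustration (not code from the repo):

```python
import json

# A stdio MCP client reads stdout line by line and parses each as JSON-RPC.
message = json.dumps({"jsonrpc": "2.0", "id": 1, "result": {}})
banner = "mcltspice starting up...\n"  # hypothetical banner text

def first_line_parses(stream: str) -> bool:
    """Can the client parse the first stdout line as a JSON-RPC message?"""
    try:
        json.loads(stream.splitlines()[0])
        return True
    except json.JSONDecodeError:
        return False

print(first_line_parses(message))           # clean stream parses
print(first_line_parses(banner + message))  # banner line is not JSON
```

Diagnostics belong on stderr, which the transport ignores.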
0c545800f7 Merge phase6: SVG plots, tuning, templates, integration tests 2026-02-11 12:53:26 -07:00
b16c20c2ca Fix CE amp coupling cap routing and Colpitts test variable selection
CE amplifier schematic: the input coupling cap CC_in was placed
horizontally (R90) at y=336 — the same y as the RB1-to-base bias
wire. Both cap pins sat on the wire, shorting the cap and allowing
Vin's 0V DC to override the bias divider, putting Q1 in cutoff.

Fix: move CC_in to vertical orientation (R0) above the base wire.
Now pinA=(400,256) and pinB=(400,320) are off the y=336 bias path.
The cap properly blocks DC while passing the 1kHz input signal.
Result: V(out) swings 2.2Vpp (gain ≈ 110) instead of stuck at Vcc.

Colpitts oscillator test: the schematic was actually working (V(out)
pp=2.05V) but the test's fallback variable selection picked V(n001)
(the Vcc rail, constant 12V) instead of V(out). Fix: look for V(out)
first since the schematic labels the collector with "out".

Integration tests: 4/4 pass, unit tests: 360/360 pass.
2026-02-11 06:01:30 -07:00
9b418a06c5 Add SVG plotting, circuit tuning, 5 new templates, fix prompts
- SVG waveform plots (svg_plot.py): pure-SVG timeseries, Bode, spectrum
  generation with plot_waveform MCP tool — no matplotlib dependency
- Circuit tuning tool (tune_circuit): single-shot simulate → measure →
  compare targets → suggest adjustments workflow for iterative design
- 5 new circuit templates: Sallen-Key lowpass, boost converter,
  instrumentation amplifier, current mirror, transimpedance amplifier
  (both netlist and .asc schematic generators, 15 total templates)
- Fix all 6 prompts to return list[Message] per FastMCP 2.x spec
- Add ltspice://templates and ltspice://template/{name} resources
- Add troubleshoot_simulation prompt
- Integration tests for RC lowpass and non-inverting amp (2/4 pass;
  CE amp and Colpitts oscillator have pre-existing schematic bugs)
- 360 unit tests passing, ruff clean
2026-02-11 05:13:50 -07:00
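The pure-SVG approach in `svg_plot.py` comes down to scaling samples into viewport coordinates and emitting a `<polyline>`, with no plotting library involved. A stripped-down sketch of the idea (the actual module also draws axes and handles Bode and spectrum views; this is illustration only):

```python
def polyline_svg(xs, ys, width=400, height=200):
    """Render a timeseries as a bare-bones SVG polyline, no matplotlib."""
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)

    def sx(x):  # map data x into [0, width]
        return (x - x0) / ((x1 - x0) or 1) * width

    def sy(y):  # map data y into [0, height], flipped (SVG y grows downward)
        return height - (y - y0) / ((y1 - y0) or 1) * height

    pts = " ".join(f"{sx(x):.1f},{sy(y):.1f}" for x, y in zip(xs, ys))
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{width}" height="{height}">'
        f'<polyline points="{pts}" fill="none" stroke="black"/></svg>'
    )

svg = polyline_svg([0, 1, 2, 3], [0.0, 1.0, 0.5, 1.5])
```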
c56ce918b4 Expand .asc schematic templates to 10 topologies, fix opamp subcircuit
Add 7 new graphical schematic templates (differential amp, buck converter,
LDO regulator, H-bridge, common emitter, Colpitts oscillator) and rewrite
inverting amp to actually include an op-amp instead of just passive components.

Fix UniversalOpamp2 subcircuit error: the .asy symbol defines SpiceModel as
"level2", so SYMATTR Value must be omitted to let the built-in model name
resolve. Previously emitting SYMATTR Value UniversalOpamp2 caused LTspice
to search for a non-existent subcircuit.

Fix wire-through-pin routing bugs: vertical wires crossing intermediate
opamp/source pins auto-connect at those pins, creating unintended shorts.
Rerouted V1-to-In+ paths to avoid crossing In- pins in non-inverting,
common-emitter, Colpitts, and differential amp templates.

Refactor generate_schematic tool from hardcoded if/elif to registry dispatch
via _ASC_TEMPLATES dict, matching the _TEMPLATES pattern for netlists.

All 10 templates verified: simulate with zero errors and zero NC nodes.
255 tests pass, source lint clean.
2026-02-11 00:46:13 -07:00
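The wire-through-pin bug class is purely geometric: LTspice auto-joins a wire to any pin lying on its path, so routing across an unrelated pin silently merges two nets. A hedged sketch of the collision check (hypothetical helper, not the repo's code):

```python
def crosses_pin(wire, pin):
    """True if an axis-aligned wire passes through (and would auto-connect at) a pin."""
    (x1, y1), (x2, y2) = wire
    px, py = pin
    if x1 == x2:  # vertical wire: pin must share x and sit within the y span
        return px == x1 and min(y1, y2) <= py <= max(y1, y2)
    if y1 == y2:  # horizontal wire: pin must share y and sit within the x span
        return py == y1 and min(x1, x2) <= px <= max(x1, x2)
    return False

# A source-to-In+ wire dropping straight past an In- pin creates a short:
print(crosses_pin(((128, 0), (128, 160)), (128, 96)))  # reroute needed
print(crosses_pin(((128, 0), (128, 160)), (144, 96)))  # clear of the pin
```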
1afa4f112b Add noise, template catalog, DC/TF tools and workflow prompts (35 tools)
New tools: analyze_noise, get_spot_noise, get_total_noise,
create_from_template, list_templates, get_operating_point,
get_transfer_function, list_simulation_runs.

Enhanced get_waveform with per-run extraction for stepped sims.
Added 3 new workflow prompts: optimize_design, monte_carlo_analysis,
circuit_from_scratch.
2026-02-10 23:39:29 -07:00
cfcd0ae221 Initial commit 2026-02-10 23:35:53 -07:00
d2d33fff57 Fix schematic generator pin positions using actual .asy data
The .asc schematic templates had wrong pin offsets, causing LTspice
to extract netlists with disconnected (NC_*) nodes and singular
matrix errors.

Fixed by reading pin positions from the .asy symbol files and applying
the correct CCW rotation transform: R90 maps (px, py) → (-py, px).

Pin offsets: voltage (+0,+16)/(+0,+96), res (+16,+16)/(+16,+96),
cap (+16,+0)/(+16,+64). Added pin_position() helper and _PIN_OFFSETS
table for reuse by all layout functions.

Verified end-to-end: generate_rc_lowpass → simulate → bandwidth gives
1587.8 Hz vs theoretical 1591.5 Hz (0.24% error).
2026-02-10 23:15:48 -07:00
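The R90 transform generalizes: each CCW quarter-turn maps (px, py) to (-py, px), and higher rotations are repeated applications of the same step. A small sketch along the lines the commit describes (the repo's `pin_position` helper presumably works this way; the exact signature here is assumed):

```python
def rotate_pin(px, py, rotation):
    """Apply an LTspice CCW symbol rotation to a pin offset.

    R90 maps (px, py) -> (-py, px); R180 and R270 compose the same step.
    """
    steps = {"R0": 0, "R90": 1, "R180": 2, "R270": 3}[rotation]
    for _ in range(steps):
        px, py = -py, px
    return px, py

def pin_position(origin, offset, rotation):
    """Absolute pin coordinate = symbol origin + rotated pin offset."""
    dx, dy = rotate_pin(offset[0], offset[1], rotation)
    return origin[0] + dx, origin[1] + dy
```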
ba649d2a6e Add stability, power, optimization, batch, and schematic generation tools
Phase 3 features bringing the server to 27 tools:
- Stepped/multi-run .raw file parsing (.step, .mc, .temp)
- Stability analysis (gain/phase margin from AC loop gain)
- Power analysis (average, RMS, efficiency, power factor)
- Safe waveform expression evaluator (recursive-descent parser)
- Component value optimizer (binary search + coordinate descent)
- Batch simulation: parameter sweep, temperature sweep, Monte Carlo
- .asc schematic generation from templates (RC filter, divider, inverting amp)
- Touchstone .s1p/.s2p/.snp S-parameter file parsing
- 7 new netlist templates (diff amp, common emitter, buck, LDO, oscillator, H-bridge)
- Full ruff lint and format compliance across all modules
2026-02-10 23:05:35 -07:00
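The "binary search" half of the component value optimizer can be sketched in a few lines: when the measured metric is monotone in the component value, bisection converges on the value that hits a target. A simplified standalone sketch (the real tool drives LTspice runs instead of a closed-form metric):

```python
import math

def binary_search_value(metric, target, lo, hi, iters=40):
    """Find v in [lo, hi] with metric(v) close to target, assuming metric is monotone."""
    increasing = metric(hi) > metric(lo)
    for _ in range(iters):
        mid = (lo + hi) / 2
        # Move the bound that keeps the target bracketed
        if (metric(mid) < target) == increasing:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# RC lowpass corner: f_c = 1 / (2*pi*R*C). Solve for R giving 1591.5 Hz at C = 100 nF.
fc = lambda r: 1 / (2 * math.pi * r * 100e-9)
r = binary_search_value(fc, 1591.5, 100.0, 100e3)  # close to 1 kOhm
```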
b31ff1cbe4 Add analysis, netlist builder, model search, DRC, and diff tools
New modules:
- log_parser: Extract .meas results and errors from sim logs
- waveform_math: FFT, THD, RMS, settling time, rise time, bandwidth
- netlist: Programmatic SPICE netlist builder with templates
- models: Search 2800+ SPICE models and subcircuits in library
- diff: Compare two schematics for component/topology changes
- drc: Design rule checks (ground, floating nodes, missing values)

Server now has 18 tools, 3 resources, and 3 guided prompts.
2026-02-10 13:59:26 -07:00
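The waveform_math measurements are all small reductions over sampled data. As one example, RMS of a sampled waveform, checked against the known 1/sqrt(2) value for a unit sine (a sketch, not the module's code):

```python
import math

def rms(samples):
    """Root-mean-square of a sampled waveform."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# One full period of a unit-amplitude sine sampled at 1000 points
n = 1000
sine = [math.sin(2 * math.pi * i / n) for i in range(n)]
print(rms(sine))  # close to 1/sqrt(2) ~ 0.7071
```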
41 changed files with 13524 additions and 641 deletions


@@ -1,4 +1,4 @@
-# mcp-ltspice
+# mcltspice

MCP server for LTspice circuit simulation automation on Linux.
@@ -19,7 +19,7 @@ MCP server for LTspice circuit simulation automation on Linux.
```bash
# From PyPI (once published)
-uvx mcp-ltspice
+uvx mcltspice

# From source
uv pip install -e .
@@ -67,15 +67,19 @@ Set the `LTSPICE_DIR` environment variable or use the default `~/claude/ltspice/

## Usage with Claude Code

```bash
-claude mcp add mcp-ltspice -- uvx mcp-ltspice
+claude mcp add mcltspice -- uvx mcltspice
```

Or for local development:

```bash
-claude mcp add mcp-ltspice -- uv run --directory /path/to/mcp-ltspice mcp-ltspice
+claude mcp add mcltspice -- uv run --directory /path/to/mcltspice mcltspice
```

+## Repository
+
+[git.supported.systems/MCP/mcltspice](https://git.supported.systems/MCP/mcltspice)

## License

MIT


@@ -1,5 +1,5 @@
[project]
-name = "mcp-ltspice"
+name = "mcltspice"
version = "2026.02.10"
description = "MCP server for LTspice circuit simulation automation"
readme = "README.md"
@@ -28,23 +28,31 @@ dependencies = [
dev = [
    "ruff>=0.1.0",
    "pytest>=7.0.0",
+    "pytest-asyncio>=0.23.0",
]
+plot = [
+    "matplotlib>=3.7.0",
+]

[project.scripts]
-mcp-ltspice = "mcp_ltspice.server:main"
+mcltspice = "mcltspice.server:main"

[project.urls]
-Repository = "https://github.com/ryanmalloy/mcp-ltspice"
+Repository = "https://git.supported.systems/MCP/mcltspice"

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.hatch.build.targets.wheel]
-packages = ["src/mcp_ltspice"]
+packages = ["src/mcltspice"]

[tool.pytest.ini_options]
testpaths = ["tests"]
asyncio_mode = "auto"
+markers = [
+    "integration: tests requiring LTspice/Wine installation (deselect with '-m not integration')",
+]

[tool.ruff]
line-length = 100
@@ -53,3 +61,6 @@ target-version = "py311"

[tool.ruff.lint]
select = ["E", "F", "I", "N", "W", "UP"]
ignore = ["E501"]
+
+[tool.ruff.lint.per-file-ignores]
+"tests/**" = ["N806"]  # Allow EE-conventional uppercase vars (K, Q, H, Vpk, etc.)


@@ -3,6 +3,6 @@
from importlib.metadata import version

try:
-    __version__ = version("mcp-ltspice")
+    __version__ = version("mcltspice")
except Exception:
    __version__ = "0.0.0"

File diff suppressed because it is too large

src/mcltspice/batch.py (new file, 324 lines)

@@ -0,0 +1,324 @@
"""Batch simulation runner for parameter sweeps and Monte Carlo analysis.
Runs multiple LTspice simulations sequentially (LTspice under Wine is
single-instance) and collects results into a single BatchResult.
"""
from __future__ import annotations
import random
import re
import tempfile
import time
from dataclasses import dataclass, field
from pathlib import Path
from .runner import SimulationResult, run_netlist
@dataclass
class BatchResult:
"""Aggregated results from a batch of simulations."""
results: list[SimulationResult] = field(default_factory=list)
parameter_values: list[dict] = field(default_factory=list)
total_elapsed: float = 0.0
@property
def success_count(self) -> int:
return sum(1 for r in self.results if r.success)
@property
def failure_count(self) -> int:
return sum(1 for r in self.results if not r.success)
async def run_batch(
netlists: list[tuple[str, str]],
timeout: float = 300,
) -> BatchResult:
"""Run a list of netlists sequentially and collect results.
Args:
netlists: List of ``(name, netlist_text)`` pairs. Each netlist
is written to a temp ``.cir`` file and simulated.
timeout: Per-simulation timeout in seconds.
Returns:
BatchResult with one SimulationResult per netlist.
"""
batch = BatchResult()
start = time.monotonic()
for name, text in netlists:
safe = _safe_filename(name)
work_dir = Path(tempfile.mkdtemp(prefix=f"ltbatch_{safe}_"))
cir_path = work_dir / f"{safe}.cir"
cir_path.write_text(text, encoding="utf-8")
result = await run_netlist(cir_path, timeout=timeout, work_dir=work_dir)
batch.results.append(result)
batch.parameter_values.append({"name": name})
batch.total_elapsed = time.monotonic() - start
return batch
async def run_parameter_sweep(
netlist_template: str,
param_name: str,
values: list[float],
timeout: float = 300,
) -> BatchResult:
"""Sweep a single parameter across a range of values.
The *netlist_template* should contain a ``.param`` directive for
*param_name* whose value will be replaced for each run. If no
``.param`` line is found, one is inserted before the ``.end``
directive.
Args:
netlist_template: Netlist text with a ``.param {param_name}=...`` line.
param_name: Parameter to sweep (e.g., ``"Rval"``).
values: List of numeric values to substitute.
timeout: Per-simulation timeout in seconds.
Returns:
BatchResult indexed by parameter value.
"""
batch = BatchResult()
start = time.monotonic()
param_re = re.compile(
rf"(\.param\s+{re.escape(param_name)}\s*=\s*)(\S+)",
re.IGNORECASE,
)
for val in values:
val_str = _format_value(val)
if param_re.search(netlist_template):
text = param_re.sub(rf"\g<1>{val_str}", netlist_template)
else:
# Insert a .param line before .end
text = _insert_before_end(netlist_template, f".param {param_name}={val_str}")
safe = f"sweep_{param_name}_{val_str}"
work_dir = Path(tempfile.mkdtemp(prefix=f"ltbatch_{_safe_filename(safe)}_"))
cir_path = work_dir / f"{_safe_filename(safe)}.cir"
cir_path.write_text(text, encoding="utf-8")
result = await run_netlist(cir_path, timeout=timeout, work_dir=work_dir)
batch.results.append(result)
batch.parameter_values.append({param_name: val})
batch.total_elapsed = time.monotonic() - start
return batch
async def run_temperature_sweep(
netlist_text: str,
temperatures: list[float],
timeout: float = 300,
) -> BatchResult:
"""Run the same netlist at different temperatures.
A ``.temp`` directive is added (or replaced) for each run.
Args:
netlist_text: Base netlist text.
temperatures: List of temperatures in degrees C.
timeout: Per-simulation timeout in seconds.
Returns:
BatchResult indexed by temperature.
"""
batch = BatchResult()
start = time.monotonic()
temp_re = re.compile(r"\.temp\s+\S+", re.IGNORECASE)
for temp in temperatures:
temp_str = _format_value(temp)
if temp_re.search(netlist_text):
text = temp_re.sub(f".temp {temp_str}", netlist_text)
else:
text = _insert_before_end(netlist_text, f".temp {temp_str}")
safe = f"temp_{temp_str}"
work_dir = Path(tempfile.mkdtemp(prefix=f"ltbatch_{_safe_filename(safe)}_"))
cir_path = work_dir / f"{_safe_filename(safe)}.cir"
cir_path.write_text(text, encoding="utf-8")
result = await run_netlist(cir_path, timeout=timeout, work_dir=work_dir)
batch.results.append(result)
batch.parameter_values.append({"temperature": temp})
batch.total_elapsed = time.monotonic() - start
return batch
async def run_monte_carlo(
netlist_text: str,
n_runs: int,
tolerances: dict[str, float],
timeout: float = 300,
seed: int | None = None,
) -> BatchResult:
"""Monte Carlo analysis with component tolerances.
For each run, every component listed in *tolerances* has its value
randomly varied using a normal distribution truncated to +/-3 sigma,
where sigma equals the tolerance fraction.
Args:
netlist_text: Base netlist text.
n_runs: Number of Monte Carlo iterations.
tolerances: Mapping of component name to tolerance fraction
(e.g., ``{"R1": 0.05}`` for 5%).
timeout: Per-simulation timeout in seconds.
seed: Optional RNG seed for reproducibility.
Returns:
BatchResult with per-run component values in ``parameter_values``.
"""
rng = random.Random(seed)
batch = BatchResult()
start = time.monotonic()
# Pre-extract nominal values from the netlist
nominals = _extract_component_values(netlist_text, list(tolerances.keys()))
for run_idx in range(n_runs):
text = netlist_text
params: dict[str, float] = {}
for comp_name, tol_frac in tolerances.items():
nominal = nominals.get(comp_name)
if nominal is None:
continue
# Normal distribution, clamp to +/-3sigma
sigma = nominal * tol_frac
deviation = rng.gauss(0, sigma)
deviation = max(-3 * sigma, min(3 * sigma, deviation))
varied = nominal + deviation
params[comp_name] = varied
# Replace the component value in the netlist text
text = _replace_component_value(text, comp_name, _format_value(varied))
safe = f"mc_{run_idx:04d}"
work_dir = Path(tempfile.mkdtemp(prefix=f"ltbatch_{safe}_"))
cir_path = work_dir / f"{safe}.cir"
cir_path.write_text(text, encoding="utf-8")
result = await run_netlist(cir_path, timeout=timeout, work_dir=work_dir)
batch.results.append(result)
batch.parameter_values.append(params)
batch.total_elapsed = time.monotonic() - start
return batch
# ============================================================================
# Internal helpers
# ============================================================================
# SPICE engineering suffixes
_SUFFIX_MAP = {
"T": 1e12,
"G": 1e9,
"MEG": 1e6,
"K": 1e3,
"M": 1e-3,
"U": 1e-6,
"N": 1e-9,
"P": 1e-12,
"F": 1e-15,
}
def _parse_spice_value(s: str) -> float | None:
"""Parse a SPICE value string like ``10k``, ``100n``, ``4.7meg`` to float."""
s = s.strip().upper()
if not s:
return None
# Try plain float first
try:
return float(s)
except ValueError:
pass
# Try suffixed form
for suffix, mult in sorted(_SUFFIX_MAP.items(), key=lambda x: -len(x[0])):
if s.endswith(suffix):
num_part = s[: -len(suffix)]
try:
return float(num_part) * mult
except ValueError:
continue
return None
def _format_value(v: float) -> str:
"""Format a float for SPICE, using engineering suffixes where tidy."""
if v == 0:
return "0"
abs_v = abs(v)
# Walk the suffix table from large to small
for suffix, mult in sorted(_SUFFIX_MAP.items(), key=lambda x: -x[1]):
if abs_v >= mult * 0.999:
scaled = v / mult
formatted = f"{scaled:g}{suffix.lower()}"
return formatted
return f"{v:g}"
def _safe_filename(name: str) -> str:
"""Turn an arbitrary name into a safe filename fragment."""
return re.sub(r"[^\w\-.]", "_", name)[:60]
def _insert_before_end(netlist: str, directive: str) -> str:
"""Insert a directive line just before ``.end``."""
end_re = re.compile(r"^(\.end\b)", re.IGNORECASE | re.MULTILINE)
if end_re.search(netlist):
return end_re.sub(f"{directive}\n\\1", netlist)
# No .end found -- append
return netlist.rstrip() + f"\n{directive}\n.end\n"
def _extract_component_values(netlist: str, names: list[str]) -> dict[str, float]:
"""Extract numeric component values from a netlist by instance name.
Looks for lines like ``R1 in out 10k`` and parses the value token.
"""
result: dict[str, float] = {}
for name in names:
pattern = re.compile(
rf"^\s*{re.escape(name)}\s+\S+\s+\S+\s+(\S+)",
re.IGNORECASE | re.MULTILINE,
)
match = pattern.search(netlist)
if match:
parsed = _parse_spice_value(match.group(1))
if parsed is not None:
result[name] = parsed
return result
def _replace_component_value(netlist: str, comp_name: str, new_value: str) -> str:
"""Replace a component's value token in the netlist text."""
pattern = re.compile(
rf"^(\s*{re.escape(comp_name)}\s+\S+\s+\S+\s+)\S+",
re.IGNORECASE | re.MULTILINE,
)
return pattern.sub(rf"\g<1>{new_value}", netlist, count=1)
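The suffix helpers above round-trip: a value parsed from SPICE notation formats back to the same token. A condensed, standalone restatement for a quick sanity check (copied in simplified form so it runs on its own; note SPICE is case-insensitive, so `m` is milli and `meg` is mega):

```python
_SUFFIX_MAP = {"T": 1e12, "G": 1e9, "MEG": 1e6, "K": 1e3,
               "M": 1e-3, "U": 1e-6, "N": 1e-9, "P": 1e-12, "F": 1e-15}

def parse_spice_value(s):
    """Parse '10k', '100n', '4.7meg' to a float; longest suffix wins."""
    s = s.strip().upper()
    try:
        return float(s)
    except ValueError:
        pass
    for suffix in sorted(_SUFFIX_MAP, key=len, reverse=True):
        if s.endswith(suffix):
            try:
                return float(s[: -len(suffix)]) * _SUFFIX_MAP[suffix]
            except ValueError:
                continue
    return None

def format_value(v):
    """Format a float with an engineering suffix where tidy."""
    if v == 0:
        return "0"
    for suffix, mult in sorted(_SUFFIX_MAP.items(), key=lambda kv: -kv[1]):
        if abs(v) >= mult * 0.999:
            return f"{v / mult:g}{suffix.lower()}"
    return f"{v:g}"

print(parse_spice_value("4.7meg"))  # mega, since "meg" outranks "m"
print(format_value(10000))
print(parse_spice_value("1m"))      # milli, not mega
```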


@@ -1,13 +1,12 @@
-"""Configuration for mcp-ltspice."""
+"""Configuration for mcltspice."""
import os
from pathlib import Path

# LTspice installation paths
-LTSPICE_DIR = Path(os.environ.get(
-    "LTSPICE_DIR",
-    Path.home() / "claude" / "ltspice" / "extracted" / "ltspice"
-))
+LTSPICE_DIR = Path(
+    os.environ.get("LTSPICE_DIR", Path.home() / "claude" / "ltspice" / "extracted" / "ltspice")
+)
LTSPICE_EXE = LTSPICE_DIR / "LTspice.exe"
LTSPICE_LIB = LTSPICE_DIR / "lib"

src/mcltspice/diff.py (new file, 417 lines)

@@ -0,0 +1,417 @@
"""Compare two LTspice schematics and produce a structured diff."""
from dataclasses import dataclass, field
from pathlib import Path
from .schematic import Component, Schematic, parse_schematic
@dataclass
class ComponentChange:
"""A change to a component."""
name: str
change_type: str # "added", "removed", "modified"
symbol: str = ""
old_value: str | None = None
new_value: str | None = None
old_attributes: dict[str, str] = field(default_factory=dict)
new_attributes: dict[str, str] = field(default_factory=dict)
moved: bool = False # Position changed
@dataclass
class DirectiveChange:
"""A change to a SPICE directive."""
change_type: str # "added", "removed", "modified"
old_text: str | None = None
new_text: str | None = None
@dataclass
class SchematicDiff:
"""Complete diff between two schematics."""
component_changes: list[ComponentChange] = field(default_factory=list)
directive_changes: list[DirectiveChange] = field(default_factory=list)
nets_added: list[str] = field(default_factory=list)
nets_removed: list[str] = field(default_factory=list)
wires_added: int = 0
wires_removed: int = 0
@property
def has_changes(self) -> bool:
return bool(
self.component_changes
or self.directive_changes
or self.nets_added
or self.nets_removed
or self.wires_added
or self.wires_removed
)
def summary(self) -> str:
"""Human-readable summary of changes."""
lines: list[str] = []
if not self.has_changes:
return "No changes detected."
# Group component changes by type
added = [c for c in self.component_changes if c.change_type == "added"]
removed = [c for c in self.component_changes if c.change_type == "removed"]
modified = [c for c in self.component_changes if c.change_type == "modified"]
if modified:
lines.append(f"{len(modified)} component{'s' if len(modified) != 1 else ''} modified:")
for c in modified:
parts: list[str] = []
if c.old_value != c.new_value:
parts.append(f"{c.old_value} -> {c.new_value}")
if c.moved:
parts.append("moved")
# Check for attribute changes beyond value
attr_diff = _attr_diff_summary(c.old_attributes, c.new_attributes)
if attr_diff:
parts.append(attr_diff)
detail = ", ".join(parts) if parts else "attributes changed"
lines.append(f" {c.name}: {detail}")
if added:
for c in added:
val = f" = {c.new_value}" if c.new_value else ""
lines.append(
f"1 component added: {c.name} ({c.symbol}){val}"
if len(added) == 1
else f" {c.name} ({c.symbol}){val}"
)
if len(added) > 1:
lines.insert(
len(lines) - len(added),
f"{len(added)} components added:",
)
if removed:
for c in removed:
val = f" = {c.old_value}" if c.old_value else ""
lines.append(
f"1 component removed: {c.name} ({c.symbol}){val}"
if len(removed) == 1
else f" {c.name} ({c.symbol}){val}"
)
if len(removed) > 1:
lines.insert(
len(lines) - len(removed),
f"{len(removed)} components removed:",
)
# Directive changes
dir_added = [d for d in self.directive_changes if d.change_type == "added"]
dir_removed = [d for d in self.directive_changes if d.change_type == "removed"]
dir_modified = [d for d in self.directive_changes if d.change_type == "modified"]
for d in dir_modified:
lines.append(f"1 directive changed: {d.old_text} -> {d.new_text}")
for d in dir_added:
lines.append(f"1 directive added: {d.new_text}")
for d in dir_removed:
lines.append(f"1 directive removed: {d.old_text}")
# Net changes
if self.nets_added:
lines.append(
f"{len(self.nets_added)} net{'s' if len(self.nets_added) != 1 else ''} "
f"added: {', '.join(self.nets_added)}"
)
if self.nets_removed:
lines.append(
f"{len(self.nets_removed)} net{'s' if len(self.nets_removed) != 1 else ''} "
f"removed: {', '.join(self.nets_removed)}"
)
# Wire changes
wire_parts: list[str] = []
if self.wires_added:
wire_parts.append(
f"{self.wires_added} wire{'s' if self.wires_added != 1 else ''} added"
)
if self.wires_removed:
wire_parts.append(
f"{self.wires_removed} wire{'s' if self.wires_removed != 1 else ''} removed"
)
if wire_parts:
lines.append(", ".join(wire_parts))
return "\n".join(lines)
def to_dict(self) -> dict:
"""Convert to JSON-serializable dict."""
return {
"has_changes": self.has_changes,
"component_changes": [
{
"name": c.name,
"change_type": c.change_type,
"symbol": c.symbol,
"old_value": c.old_value,
"new_value": c.new_value,
"old_attributes": c.old_attributes,
"new_attributes": c.new_attributes,
"moved": c.moved,
}
for c in self.component_changes
],
"directive_changes": [
{
"change_type": d.change_type,
"old_text": d.old_text,
"new_text": d.new_text,
}
for d in self.directive_changes
],
"nets_added": self.nets_added,
"nets_removed": self.nets_removed,
"wires_added": self.wires_added,
"wires_removed": self.wires_removed,
"summary": self.summary(),
}
def _attr_diff_summary(old: dict[str, str], new: dict[str, str]) -> str:
"""Summarize attribute differences, excluding Value (handled separately)."""
changes: list[str] = []
all_keys = set(old) | set(new)
# Skip Value since it's reported on its own
all_keys.discard("Value")
all_keys.discard("Value2")
for key in sorted(all_keys):
old_val = old.get(key)
new_val = new.get(key)
if old_val != new_val:
if old_val is None:
changes.append(f"+{key}={new_val}")
elif new_val is None:
changes.append(f"-{key}={old_val}")
else:
changes.append(f"{key}: {old_val} -> {new_val}")
return "; ".join(changes)
def _normalize_directive(text: str) -> str:
"""Normalize whitespace in a SPICE directive for comparison."""
return " ".join(text.split())
def _wire_set(schematic: Schematic) -> set[tuple[int, int, int, int]]:
"""Convert wires to a set of coordinate tuples for comparison.
Each wire is stored in a canonical form (smaller point first) so that
reversed wires compare equal.
"""
result: set[tuple[int, int, int, int]] = set()
for w in schematic.wires:
# Canonical ordering: sort by (x, y) so direction doesn't matter
if (w.x1, w.y1) <= (w.x2, w.y2):
result.add((w.x1, w.y1, w.x2, w.y2))
else:
result.add((w.x2, w.y2, w.x1, w.y1))
return result
def _component_map(schematic: Schematic) -> dict[str, Component]:
"""Build a map of component instance name -> Component."""
return {comp.name: comp for comp in schematic.components}
def _diff_components(schema_a: Schematic, schema_b: Schematic) -> list[ComponentChange]:
"""Compare components between two schematics."""
map_a = _component_map(schema_a)
map_b = _component_map(schema_b)
names_a = set(map_a)
names_b = set(map_b)
changes: list[ComponentChange] = []
# Removed components (in A but not B)
for name in sorted(names_a - names_b):
comp = map_a[name]
changes.append(
ComponentChange(
name=name,
change_type="removed",
symbol=comp.symbol,
old_value=comp.value,
old_attributes=dict(comp.attributes),
)
)
# Added components (in B but not A)
for name in sorted(names_b - names_a):
comp = map_b[name]
changes.append(
ComponentChange(
name=name,
change_type="added",
symbol=comp.symbol,
new_value=comp.value,
new_attributes=dict(comp.attributes),
)
)
# Potentially modified components (in both)
for name in sorted(names_a & names_b):
comp_a = map_a[name]
comp_b = map_b[name]
moved = (comp_a.x, comp_a.y) != (comp_b.x, comp_b.y)
value_changed = comp_a.value != comp_b.value
attrs_changed = comp_a.attributes != comp_b.attributes
rotation_changed = comp_a.rotation != comp_b.rotation or comp_a.mirror != comp_b.mirror
if moved or value_changed or attrs_changed or rotation_changed:
changes.append(
ComponentChange(
name=name,
change_type="modified",
symbol=comp_a.symbol,
old_value=comp_a.value,
new_value=comp_b.value,
old_attributes=dict(comp_a.attributes),
new_attributes=dict(comp_b.attributes),
moved=moved,
)
)
return changes
def _diff_directives(schema_a: Schematic, schema_b: Schematic) -> list[DirectiveChange]:
"""Compare SPICE directives between two schematics."""
directives_a = [t.content for t in schema_a.texts if t.type == "spice"]
directives_b = [t.content for t in schema_b.texts if t.type == "spice"]
# Normalize for comparison but keep originals for display
norm_a = {_normalize_directive(d): d for d in directives_a}
norm_b = {_normalize_directive(d): d for d in directives_b}
keys_a = set(norm_a)
keys_b = set(norm_b)
changes: list[DirectiveChange] = []
# Removed directives
for key in sorted(keys_a - keys_b):
changes.append(DirectiveChange(change_type="removed", old_text=norm_a[key]))
# Added directives
for key in sorted(keys_b - keys_a):
changes.append(DirectiveChange(change_type="added", new_text=norm_b[key]))
# For modified detection: directives that share a command keyword but differ.
# We match by the first token (e.g., ".tran", ".ac") to detect modifications
# vs pure add/remove pairs.
removed = [c for c in changes if c.change_type == "removed"]
added = [c for c in changes if c.change_type == "added"]
matched_removed: set[int] = set()
matched_added: set[int] = set()
for ri, rc in enumerate(removed):
old_cmd = (rc.old_text or "").split()[0].lower() if rc.old_text else ""
for ai, ac in enumerate(added):
if ai in matched_added:
continue
new_cmd = (ac.new_text or "").split()[0].lower() if ac.new_text else ""
if old_cmd and old_cmd == new_cmd:
matched_removed.add(ri)
matched_added.add(ai)
break
# Rebuild the list: unmatched stay as-is, matched pairs become "modified"
final_changes: list[DirectiveChange] = []
for ri, rc in enumerate(removed):
if ri not in matched_removed:
final_changes.append(rc)
for ai, ac in enumerate(added):
if ai not in matched_added:
final_changes.append(ac)
for ri in sorted(matched_removed):
rc = removed[ri]
# Find its matched added entry
old_cmd = (rc.old_text or "").split()[0].lower()
for ai, ac in enumerate(added):
new_cmd = (ac.new_text or "").split()[0].lower() if ac.new_text else ""
if ai in matched_added and old_cmd == new_cmd:
final_changes.append(
DirectiveChange(
change_type="modified",
old_text=rc.old_text,
new_text=ac.new_text,
)
)
break
return final_changes
def _diff_nets(schema_a: Schematic, schema_b: Schematic) -> tuple[list[str], list[str]]:
"""Compare net flags between two schematics.
Returns:
(nets_added, nets_removed)
"""
names_a = {f.name for f in schema_a.flags}
names_b = {f.name for f in schema_b.flags}
added = sorted(names_b - names_a)
removed = sorted(names_a - names_b)
return added, removed
def _diff_wires(schema_a: Schematic, schema_b: Schematic) -> tuple[int, int]:
"""Compare wires between two schematics using set operations.
Returns:
(wires_added, wires_removed)
"""
set_a = _wire_set(schema_a)
set_b = _wire_set(schema_b)
added = len(set_b - set_a)
removed = len(set_a - set_b)
return added, removed
def diff_schematics(
path_a: Path | str,
path_b: Path | str,
) -> SchematicDiff:
"""Compare two schematics and return differences.
Args:
path_a: Path to the "before" schematic
path_b: Path to the "after" schematic
Returns:
SchematicDiff with all changes
"""
schema_a = parse_schematic(path_a)
schema_b = parse_schematic(path_b)
component_changes = _diff_components(schema_a, schema_b)
directive_changes = _diff_directives(schema_a, schema_b)
nets_added, nets_removed = _diff_nets(schema_a, schema_b)
wires_added, wires_removed = _diff_wires(schema_a, schema_b)
return SchematicDiff(
component_changes=component_changes,
directive_changes=directive_changes,
nets_added=nets_added,
nets_removed=nets_removed,
wires_added=wires_added,
wires_removed=wires_removed,
)
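The wire canonicalization in `_wire_set` is what lets the diff use plain set subtraction: a wire drawn right-to-left compares equal to the same wire drawn left-to-right. A standalone demonstration of the idea (simplified to bare tuples):

```python
def canonical(wire):
    """Order each wire's endpoints so drawing direction does not matter."""
    x1, y1, x2, y2 = wire
    return (x1, y1, x2, y2) if (x1, y1) <= (x2, y2) else (x2, y2, x1, y1)

# "Before" and "after" wire lists; the after-version redraws two wires reversed
# and adds one genuinely new wire.
before = {canonical(w) for w in [(0, 0, 100, 0), (100, 0, 100, 50)]}
after = {canonical(w) for w in [(100, 0, 0, 0), (100, 50, 100, 0), (100, 50, 200, 50)]}

print(len(after - before))   # only the genuinely new wire counts as added
print(len(before - after))   # reversed redraws do not count as removed
```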

src/mcltspice/drc.py (new file, 424 lines)

@@ -0,0 +1,424 @@
"""Design Rule Checks for LTspice schematics."""
from collections import defaultdict
from dataclasses import dataclass, field
from enum import Enum
from pathlib import Path
from .schematic import Schematic, parse_schematic
class Severity(Enum):
ERROR = "error" # Will likely cause simulation failure
WARNING = "warning" # May cause unexpected results
INFO = "info" # Suggestion for improvement
@dataclass
class DRCViolation:
"""A single design rule violation."""
rule: str # Short rule identifier
severity: Severity
message: str # Human-readable description
component: str | None = None # Related component name
location: tuple[int, int] | None = None # (x, y) if applicable
@dataclass
class DRCResult:
"""Results of a design rule check."""
violations: list[DRCViolation] = field(default_factory=list)
checks_run: int = 0
@property
def errors(self) -> list[DRCViolation]:
return [v for v in self.violations if v.severity == Severity.ERROR]
@property
def warnings(self) -> list[DRCViolation]:
return [v for v in self.violations if v.severity == Severity.WARNING]
@property
def passed(self) -> bool:
return len(self.errors) == 0
def summary(self) -> str:
"""Human-readable summary."""
total = len(self.violations)
err_count = len(self.errors)
warn_count = len(self.warnings)
info_count = total - err_count - warn_count
if total == 0:
return f"DRC passed: {self.checks_run} checks run, no violations found."
parts = []
if err_count:
parts.append(f"{err_count} error{'s' if err_count != 1 else ''}")
if warn_count:
parts.append(f"{warn_count} warning{'s' if warn_count != 1 else ''}")
if info_count:
parts.append(f"{info_count} info")
status = "FAILED" if err_count else "passed with warnings"
return f"DRC {status}: {self.checks_run} checks run, {', '.join(parts)}."
def to_dict(self) -> dict:
"""Convert to JSON-serializable dict."""
return {
"passed": self.passed,
"checks_run": self.checks_run,
"summary": self.summary(),
"error_count": len(self.errors),
"warning_count": len(self.warnings),
"violations": [
{
"rule": v.rule,
"severity": v.severity.value,
"message": v.message,
"component": v.component,
"location": list(v.location) if v.location else None,
}
for v in self.violations
],
}
def run_drc(schematic_path: Path | str) -> DRCResult:
"""Run all design rule checks on a schematic.
Args:
schematic_path: Path to .asc file
Returns:
DRCResult with all violations found
"""
sch = parse_schematic(schematic_path)
result = DRCResult()
_check_ground(sch, result)
_check_floating_nodes(sch, result)
_check_simulation_directive(sch, result)
_check_voltage_source_loops(sch, result)
_check_component_values(sch, result)
_check_duplicate_names(sch, result)
_check_unconnected_components(sch, result)
return result
def _check_ground(sch: Schematic, result: DRCResult) -> None:
"""Check that at least one ground (node '0') exists."""
result.checks_run += 1
has_ground = any(f.name == "0" for f in sch.flags)
if not has_ground:
result.violations.append(
DRCViolation(
rule="NO_GROUND",
severity=Severity.ERROR,
message=(
"No ground node found. Every circuit needs at least one ground (0) connection."
),
)
)
def _check_floating_nodes(sch: Schematic, result: DRCResult) -> None:
"""Check for nodes with only one wire connection (likely floating).
Build a connectivity map from wires and flags.
A node is a unique (x,y) point where wires meet or flags are placed.
If a point only has one wire endpoint and no flag, it might be floating.
This is approximate -- we cannot fully determine connectivity without
knowing component pin locations, but we can flag obvious cases.
"""
result.checks_run += 1
# Count how many wire endpoints touch each point
point_count: dict[tuple[int, int], int] = defaultdict(int)
for wire in sch.wires:
point_count[(wire.x1, wire.y1)] += 1
point_count[(wire.x2, wire.y2)] += 1
# Flags also represent connections at a point
flag_points = {(f.x, f.y) for f in sch.flags}
# Component positions also represent connection points (approximately)
component_points = {(c.x, c.y) for c in sch.components}
for point, count in point_count.items():
if count == 1 and point not in flag_points and point not in component_points:
result.violations.append(
DRCViolation(
rule="FLOATING_NODE",
severity=Severity.WARNING,
message=(
f"Possible floating node at ({point[0]}, {point[1]}). "
f"Wire endpoint has only one connection."
),
location=point,
)
)
def _check_simulation_directive(sch: Schematic, result: DRCResult) -> None:
"""Check that at least one simulation directive exists."""
result.checks_run += 1
directives = sch.get_spice_directives()
sim_types = [".tran", ".ac", ".dc", ".op", ".noise", ".tf"]
has_sim = any(any(d.lower().startswith(s) for s in sim_types) for d in directives)
if not has_sim:
result.violations.append(
DRCViolation(
rule="NO_SIM_DIRECTIVE",
severity=Severity.ERROR,
message=("No simulation directive found. Add .tran, .ac, .dc, .op, etc."),
)
)
def _check_voltage_source_loops(sch: Schematic, result: DRCResult) -> None:
"""Check for voltage sources connected in parallel (short circuit).
This is a simplified check -- look for voltage sources that share both
pin nodes via wire connectivity. Two voltage sources whose positions
connect to the same pair of nets would create a loop or conflict.
"""
result.checks_run += 1
# Build a union-find structure from wire connectivity so we can
# determine which points belong to the same electrical net.
parent: dict[tuple[int, int], tuple[int, int]] = {}
def find(p: tuple[int, int]) -> tuple[int, int]:
if p not in parent:
parent[p] = p
while parent[p] != p:
parent[p] = parent[parent[p]]
p = parent[p]
return p
def union(a: tuple[int, int], b: tuple[int, int]) -> None:
ra, rb = find(a), find(b)
if ra != rb:
parent[ra] = rb
# Each wire merges its two endpoints into the same net
for wire in sch.wires:
union((wire.x1, wire.y1), (wire.x2, wire.y2))
# Also merge flag locations (named nets connect distant points with
# the same name)
flag_groups: dict[str, list[tuple[int, int]]] = defaultdict(list)
for flag in sch.flags:
flag_groups[flag.name].append((flag.x, flag.y))
for pts in flag_groups.values():
for pt in pts[1:]:
union(pts[0], pt)
# Find voltage sources and approximate their pin positions.
# LTspice voltage sources have pins at the component origin and
# offset along the component axis. Standard pin spacing is 64 units
# vertically for a non-rotated voltage source (pin+ at top, pin- at
# bottom relative to the symbol origin).
voltage_sources = [c for c in sch.components if "voltage" in c.symbol.lower()]
if len(voltage_sources) < 2:
return
# Estimate pin positions per voltage source based on rotation.
# Default (R0): positive pin at (x, y-16), negative at (x, y+16)
# We use a coarse offset; the exact value depends on the symbol but
# 16 is a common half-pin-spacing in LTspice grid units.
pin_offset = 16
def _pin_positions(comp):
"""Return approximate (positive_pin, negative_pin) coordinates."""
x, y = comp.x, comp.y
rot = comp.rotation
if rot == 0:
return (x, y - pin_offset), (x, y + pin_offset)
elif rot == 90:
return (x + pin_offset, y), (x - pin_offset, y)
elif rot == 180:
return (x, y + pin_offset), (x, y - pin_offset)
elif rot == 270:
return (x - pin_offset, y), (x + pin_offset, y)
return (x, y - pin_offset), (x, y + pin_offset)
def _nearest_net(pin: tuple[int, int]) -> tuple[int, int]:
"""Find the nearest wire/flag point to a pin and return its net root.
If the pin is directly on a known point, use it. Otherwise search
within a small radius for the closest connected point.
"""
if pin in parent:
return find(pin)
# Search nearby points within two grid steps (LTspice grid snap is 16 units)
best = None
best_dist = float("inf")
for pt in parent:
dx = pin[0] - pt[0]
dy = pin[1] - pt[1]
dist = dx * dx + dy * dy
if dist < best_dist:
best_dist = dist
best = pt
if best is not None and best_dist <= 32 * 32:
return find(best)
return pin # isolated -- return pin itself as its own net
# For each pair of voltage sources, check if they share both nets
for i in range(len(voltage_sources)):
pin_a_pos, pin_a_neg = _pin_positions(voltage_sources[i])
net_a_pos = _nearest_net(pin_a_pos)
net_a_neg = _nearest_net(pin_a_neg)
for j in range(i + 1, len(voltage_sources)):
pin_b_pos, pin_b_neg = _pin_positions(voltage_sources[j])
net_b_pos = _nearest_net(pin_b_pos)
net_b_neg = _nearest_net(pin_b_neg)
# Parallel if both nets match (in either polarity)
parallel = (net_a_pos == net_b_pos and net_a_neg == net_b_neg) or (
net_a_pos == net_b_neg and net_a_neg == net_b_pos
)
if parallel:
name_i = voltage_sources[i].name
name_j = voltage_sources[j].name
result.violations.append(
DRCViolation(
rule="VSOURCE_LOOP",
severity=Severity.ERROR,
message=(
f"Voltage sources '{name_i}' and '{name_j}' "
f"appear to be connected in parallel, which "
f"creates a short circuit / voltage conflict."
),
component=name_i,
)
)
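The wire/flag net merging above relies on a small union-find. A standalone sketch (with made-up point coordinates) shows how two wires sharing an endpoint collapse into one net while an untouched point stays separate:

```python
# Minimal union-find over (x, y) points, mirroring the net merging above.
parent: dict[tuple[int, int], tuple[int, int]] = {}

def find(p):
    if p not in parent:
        parent[p] = p
    while parent[p] != p:
        parent[p] = parent[parent[p]]  # path halving keeps trees shallow
        p = parent[p]
    return p

def union(a, b):
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb

# Two wires sharing the endpoint (0, 64) form a single net; (128, 0) stays apart.
union((0, 0), (0, 64))
union((0, 64), (64, 64))
print(find((0, 0)) == find((64, 64)))  # True
print(find((0, 0)) == find((128, 0)))  # False
```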
def _check_component_values(sch: Schematic, result: DRCResult) -> None:
"""Check that components have values where expected.
Resistors, capacitors, inductors should have values.
Voltage/current sources should have values.
Skip if the value looks like a parameter expression (e.g., "{R1}").
"""
result.checks_run += 1
# Map symbol substrings to human-readable type names
value_required = {
"res": "Resistor",
"cap": "Capacitor",
"ind": "Inductor",
"voltage": "Voltage source",
"current": "Current source",
}
for comp in sch.components:
symbol_lower = comp.symbol.lower()
matched_type = None
for pattern, label in value_required.items():
if pattern in symbol_lower:
matched_type = label
break
if matched_type is None:
continue
val = comp.value
if not val or not val.strip():
result.violations.append(
DRCViolation(
rule="MISSING_VALUE",
severity=Severity.WARNING,
message=(f"{matched_type} '{comp.name}' has no value set."),
component=comp.name,
location=(comp.x, comp.y),
)
)
elif val.strip().startswith("{") and val.strip().endswith("}"):
# Parameter expression -- valid, skip
pass
def _check_duplicate_names(sch: Schematic, result: DRCResult) -> None:
"""Check for duplicate component instance names."""
result.checks_run += 1
seen: dict[str, bool] = {}
for comp in sch.components:
if not comp.name:
continue
if comp.name in seen:
result.violations.append(
DRCViolation(
rule="DUPLICATE_NAME",
severity=Severity.ERROR,
message=f"Duplicate component name '{comp.name}'.",
component=comp.name,
location=(comp.x, comp.y),
)
)
else:
seen[comp.name] = True
def _check_unconnected_components(sch: Schematic, result: DRCResult) -> None:
"""Check for components that don't seem to be connected to anything.
A component at (x, y) should have wire endpoints near its pins.
This is approximate without knowing exact pin positions, so we check
whether any wire endpoint falls within a reasonable distance (16
units -- one LTspice grid step) of the component's origin.
"""
result.checks_run += 1
proximity = 16 # LTspice grid spacing
# Collect all wire endpoints into a set for fast lookup
wire_points: set[tuple[int, int]] = set()
for wire in sch.wires:
wire_points.add((wire.x1, wire.y1))
wire_points.add((wire.x2, wire.y2))
# Also include flag positions (flags connect to nets)
flag_points: set[tuple[int, int]] = set()
for flag in sch.flags:
flag_points.add((flag.x, flag.y))
all_connection_points = wire_points | flag_points
for comp in sch.components:
if not comp.name:
continue
# Check if any connection point is within one grid step (Chebyshev
# distance <= proximity) of the component origin. This iterates all
# collected points, which is fine for schematic-sized inputs.
connected = False
for pt in all_connection_points:
dx = abs(pt[0] - comp.x)
dy = abs(pt[1] - comp.y)
if dx <= proximity and dy <= proximity:
connected = True
break
if not connected:
result.violations.append(
DRCViolation(
rule="UNCONNECTED_COMPONENT",
severity=Severity.WARNING,
message=(
f"Component '{comp.name}' ({comp.symbol}) at "
f"({comp.x}, {comp.y}) has no nearby wire "
f"connections."
),
component=comp.name,
location=(comp.x, comp.y),
)
)
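The proximity test in `_check_unconnected_components` is a Chebyshev-distance check against one grid step. A toy sketch with hypothetical coordinates:

```python
# Hypothetical component origin and wire endpoints (one LTspice grid step = 16).
proximity = 16
connection_points = {(400, 256), (400, 320), (640, 48)}
component = (400, 272)

connected = any(
    abs(px - component[0]) <= proximity and abs(py - component[1]) <= proximity
    for px, py in connection_points
)
print(connected)  # True: (400, 256) is exactly 16 units from the origin
```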

src/mcltspice/log_parser.py (new file, +210 lines)
"""Parse LTspice simulation log files."""
import re
from dataclasses import dataclass, field
from pathlib import Path
@dataclass
class Measurement:
"""A .meas result from the log."""
name: str
value: float | None # None if FAILED
failed: bool = False
@dataclass
class SimulationLog:
"""Parsed contents of an LTspice .log file."""
measurements: list[Measurement] = field(default_factory=list)
errors: list[str] = field(default_factory=list)
warnings: list[str] = field(default_factory=list)
elapsed_time: float | None = None
n_equations: int | None = None
n_steps: int | None = None
raw_text: str = ""
operating_point: dict[str, float] = field(default_factory=dict)
transfer_function: dict[str, float] = field(default_factory=dict)
def get_measurement(self, name: str) -> Measurement | None:
"""Get a measurement by name (case-insensitive)."""
name_lower = name.lower()
for m in self.measurements:
if m.name.lower() == name_lower:
return m
return None
def get_all_measurements(self) -> dict[str, float | None]:
"""Return dict of measurement name -> value."""
return {m.name: m.value for m in self.measurements}
# Patterns for measurement results.
# LTspice emits results in several formats depending on version and OS:
# name=1.23456e-006 (no spaces around '=')
# name = 1.23456e-006 (spaces around '=')
# name: 1.23456e-006 (colon separator)
# name: FAILED (measurement could not be computed)
# name=FAILED
_MEAS_VALUE_RE = re.compile(
r"^(?P<name>\S+?)\s*[=:]\s*(?P<value>[+-]?\d+(?:\.\d+)?(?:e[+-]?\d+)?)\s*$",
re.IGNORECASE,
)
_MEAS_FAILED_RE = re.compile(
r"^(?P<name>\S+?)\s*[=:]?\s*FAILED\s*$",
re.IGNORECASE,
)
# Simulation statistics patterns.
_ELAPSED_TIME_RE = re.compile(
r"Total elapsed time:\s*(?P<seconds>[+-]?\d+(?:\.\d+)?)\s*seconds?",
re.IGNORECASE,
)
_N_EQUATIONS_RE = re.compile(
r"N-of-equations:\s*(?P<n>\d+)",
re.IGNORECASE,
)
_N_STEPS_RE = re.compile(
r"N-of-steps:\s*(?P<n>\d+)",
re.IGNORECASE,
)
# Lines starting with ".meas" are directive echoes, not results -- skip them.
_MEAS_DIRECTIVE_RE = re.compile(r"^\s*\.meas\s", re.IGNORECASE)
# Operating point / transfer function lines: "V(out):\t 2.5\t voltage"
# These have a name, colon, value, then optional trailing text (units/type).
_OP_TF_VALUE_RE = re.compile(
r"^(?P<name>\S+?):\s+(?P<value>[+-]?\d+(?:\.\d+)?(?:e[+-]?\d+)?)\s*",
re.IGNORECASE,
)
# Section headers in LTspice log files
_OP_SECTION_RE = re.compile(r"---\s*Operating\s+Point\s*---", re.IGNORECASE)
_TF_SECTION_RE = re.compile(r"---\s*Transfer\s+Function\s*---", re.IGNORECASE)
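The two measurement patterns can be exercised against representative log lines; the regexes are re-declared here so the sketch is self-contained:

```python
import re

MEAS_VALUE_RE = re.compile(
    r"^(?P<name>\S+?)\s*[=:]\s*(?P<value>[+-]?\d+(?:\.\d+)?(?:e[+-]?\d+)?)\s*$",
    re.IGNORECASE,
)
MEAS_FAILED_RE = re.compile(r"^(?P<name>\S+?)\s*[=:]?\s*FAILED\s*$", re.IGNORECASE)

m = MEAS_VALUE_RE.match("rise_time=1.23456e-006")
print(m.group("name"), float(m.group("value")))  # rise_time 1.23456e-06

m = MEAS_FAILED_RE.match("overshoot: FAILED")
print(m.group("name"))  # overshoot
```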
def _is_error_line(line: str) -> bool:
"""Return True if the line reports an error."""
return bool(re.search(r"\bError\b", line, re.IGNORECASE))
def _is_warning_line(line: str) -> bool:
"""Return True if the line reports a warning."""
return bool(re.search(r"\bWarning\b", line, re.IGNORECASE))
def parse_log(path: Path | str) -> SimulationLog:
"""Parse an LTspice .log file.
Reads the file at *path* and extracts measurement results, errors,
warnings, and basic simulation statistics. The parser is intentionally
lenient -- unknown lines are silently ignored so that it works across
different LTspice versions and simulation types (transient, AC, DC, etc.).
"""
path = Path(path)
# LTspice log files may be encoded as UTF-8 or Latin-1 depending on the
# platform. Try UTF-8 first, fall back to Latin-1 which never raises.
for encoding in ("utf-8", "latin-1"):
try:
raw_text = path.read_text(encoding=encoding)
break
except UnicodeDecodeError:
continue
else:
raw_text = path.read_text(encoding="latin-1", errors="replace")
log = SimulationLog(raw_text=raw_text)
# Track which section we're currently parsing
current_section: str | None = None # "op", "tf", or None
for line in raw_text.splitlines():
stripped = line.strip()
if not stripped:
continue
# Detect section headers
if _OP_SECTION_RE.search(stripped):
current_section = "op"
continue
if _TF_SECTION_RE.search(stripped):
current_section = "tf"
continue
# A new section header (any "---...---" line) ends the current section
if stripped.startswith("---") and stripped.endswith("---"):
current_section = None
continue
# Parse .op / .tf section values
if current_section in ("op", "tf"):
m = _OP_TF_VALUE_RE.match(stripped)
if m:
try:
val = float(m.group("value"))
except ValueError:
continue
target = log.operating_point if current_section == "op" else log.transfer_function
target[m.group("name")] = val
continue
# Skip echoed .meas directives -- they are not results.
if _MEAS_DIRECTIVE_RE.match(stripped):
continue
# Errors and warnings.
if _is_error_line(stripped):
log.errors.append(stripped)
if _is_warning_line(stripped):
log.warnings.append(stripped)
# Measurement: failed.
m = _MEAS_FAILED_RE.match(stripped)
if m:
log.measurements.append(Measurement(name=m.group("name"), value=None, failed=True))
continue
# Measurement: numeric value.
m = _MEAS_VALUE_RE.match(stripped)
if m:
try:
value = float(m.group("value"))
except ValueError:
value = None
log.measurements.append(
Measurement(name=m.group("name"), value=value, failed=(value is None))
)
continue
# Elapsed time.
m = _ELAPSED_TIME_RE.search(stripped)
if m:
try:
log.elapsed_time = float(m.group("seconds"))
except ValueError:
pass
continue
# Number of equations.
m = _N_EQUATIONS_RE.search(stripped)
if m:
try:
log.n_equations = int(m.group("n"))
except ValueError:
pass
continue
# Number of steps / iterations.
m = _N_STEPS_RE.search(stripped)
if m:
try:
log.n_steps = int(m.group("n"))
except ValueError:
pass
continue
return log
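The UTF-8 then Latin-1 fallback at the top of `parse_log` can be sketched standalone; the byte `0xB5` below is deliberately invalid UTF-8, so the first attempt fails and the Latin-1 pass (which maps every byte) succeeds:

```python
import tempfile
from pathlib import Path

def read_log_text(path: Path) -> str:
    # Try UTF-8 first; Latin-1 decodes any byte sequence, so the loop
    # always succeeds on the second attempt at the latest.
    for encoding in ("utf-8", "latin-1"):
        try:
            return path.read_text(encoding=encoding)
        except UnicodeDecodeError:
            continue
    return path.read_text(encoding="latin-1", errors="replace")

with tempfile.TemporaryDirectory() as d:
    p = Path(d) / "sim.log"
    p.write_bytes(b"Total elapsed time: 0.123 seconds \xb5")
    text = read_log_text(p)
    print("0.123" in text)  # True
```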

src/mcltspice/models.py (new file, +365 lines)
"""Search and parse LTspice SPICE model libraries."""
import re
from dataclasses import dataclass, field
from pathlib import Path
from .config import LTSPICE_LIB
# Known SPICE model types and their categories
_DISCRETE_TYPES = frozenset(
{
"NPN",
"PNP", # BJTs
"NMOS",
"PMOS",
"VDMOS", # MOSFETs
"D", # Diodes
"NJF",
"PJF", # JFETs
}
)
# Module-level cache
_cache: tuple[list["SpiceModel"], list["SpiceSubcircuit"]] | None = None
@dataclass
class SpiceModel:
"""A .model definition."""
name: str # e.g., "2N2222"
type: str # e.g., "NPN", "D", "NMOS", "PMOS", "PNP"
parameters: dict[str, str] = field(default_factory=dict)
source_file: str = ""
@dataclass
class SpiceSubcircuit:
"""A .subckt definition."""
name: str # e.g., "LT1001"
pins: list[str] = field(default_factory=list)
pin_names: list[str] = field(default_factory=list) # From comments
description: str = ""
source_file: str = ""
n_components: int = 0
def search_models(
search: str | None = None,
model_type: str | None = None,
limit: int = 50,
) -> list[SpiceModel]:
"""Search for .model definitions in the library.
Args:
search: Search term for model name (case-insensitive)
model_type: Filter by type: NPN, PNP, NMOS, PMOS, D, etc.
limit: Maximum results
Returns:
List of matching SpiceModel objects
"""
all_models, _ = _scan_all_libraries()
results: list[SpiceModel] = []
search_upper = search.upper() if search else None
type_upper = model_type.upper() if model_type else None
for model in all_models:
if type_upper and model.type.upper() != type_upper:
continue
if search_upper and search_upper not in model.name.upper():
continue
results.append(model)
if len(results) >= limit:
break
return results
def search_subcircuits(
search: str | None = None,
limit: int = 50,
) -> list[SpiceSubcircuit]:
"""Search for .subckt definitions in the library.
Args:
search: Search term for subcircuit name
limit: Maximum results
Returns:
List of matching SpiceSubcircuit objects
"""
_, all_subcircuits = _scan_all_libraries()
results: list[SpiceSubcircuit] = []
search_upper = search.upper() if search else None
for subckt in all_subcircuits:
if search_upper and search_upper not in subckt.name.upper():
continue
results.append(subckt)
if len(results) >= limit:
break
return results
def get_model_details(name: str) -> SpiceModel | SpiceSubcircuit | None:
"""Get detailed information about a specific model or subcircuit.
Searches all library files for exact match (case-insensitive).
"""
all_models, all_subcircuits = _scan_all_libraries()
name_upper = name.upper()
for model in all_models:
if model.name.upper() == name_upper:
return model
for subckt in all_subcircuits:
if subckt.name.upper() == name_upper:
return subckt
return None
def _read_file_text(path: Path) -> str:
"""Read a library file, handling both UTF-16-LE and ASCII/Latin-1 encodings.
LTspice stores some files (especially .bjt, .mos, .jft, etc.) as
UTF-16-LE without a BOM. Others are plain ASCII or Latin-1.
Binary/encrypted .sub files will raise on decode; callers handle that.
"""
raw = path.read_bytes()
if not raw:
return ""
# Detect UTF-16-LE: check if every other byte (odd positions) is 0x00
# for the first several bytes. This is a strong indicator of UTF-16-LE
# ASCII text without a BOM.
if len(raw) >= 20:
sample = raw[:40]
null_positions = sum(1 for i in range(1, len(sample), 2) if sample[i] == 0)
total_pairs = len(sample) // 2
if total_pairs > 0 and null_positions / total_pairs > 0.7:
return raw.decode("utf-16-le", errors="replace")
# Fall back to latin-1 which never fails (maps bytes 1:1 to codepoints)
return raw.decode("latin-1")
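The null-byte heuristic can be checked on a small sample: ASCII text encoded as UTF-16-LE has a zero in every odd byte position, so the ratio sits well above the 0.7 threshold:

```python
raw = ".model 2N2222 NPN".encode("utf-16-le")
sample = raw[:40]
null_positions = sum(1 for i in range(1, len(sample), 2) if sample[i] == 0)
total_pairs = len(sample) // 2
print(null_positions / total_pairs)  # 1.0 for pure-ASCII UTF-16-LE text
print(raw.decode("utf-16-le"))       # .model 2N2222 NPN
```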
# Regex patterns compiled once
_MODEL_RE = re.compile(
r"^\s*\.model\s+" # .model keyword
r"(\S+)\s+" # model name
r"(?:ako:\S+\s+)?" # optional ako:reference
r"(\w+)" # type (NPN, D, VDMOS, etc.)
r"(?:\s*\(([^)]*)\))?" # optional (params)
r"(.*)", # trailing params outside parens
re.IGNORECASE,
)
_SUBCKT_RE = re.compile(
r"^\s*\.subckt\s+"
r"(\S+)" # subcircuit name
r"((?:\s+\S+)*)", # pins (space-separated)
re.IGNORECASE,
)
_ENDS_RE = re.compile(r"^\s*\.ends\b", re.IGNORECASE)
_PIN_COMMENT_RE = re.compile(
r"^\s*\*\s*[Pp]in\s+\S+\s*[:=]?\s*(.*)",
)
def _parse_params(param_str: str) -> dict[str, str]:
"""Parse SPICE parameter string into a dict.
Handles formats like: IS=14.34f BF=200 NF=1 VAF=74.03
"""
params: dict[str, str] = {}
if not param_str:
return params
for match in re.finditer(r"(\w+)\s*=\s*(\S+)", param_str):
params[match.group(1)] = match.group(2)
return params
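`_parse_params` is a single `finditer` pass over `key=value` pairs; a standalone copy shows the shape of the result:

```python
import re

def parse_params(param_str: str) -> dict[str, str]:
    # key=value pairs; each value runs up to the next whitespace
    return {m.group(1): m.group(2) for m in re.finditer(r"(\w+)\s*=\s*(\S+)", param_str)}

print(parse_params("IS=14.34f BF=200 NF=1 VAF=74.03"))
# {'IS': '14.34f', 'BF': '200', 'NF': '1', 'VAF': '74.03'}
```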
def _scan_lib_file(path: Path) -> tuple[list[SpiceModel], list[SpiceSubcircuit]]:
"""Scan a single library file for model and subcircuit definitions.
Handles multi-line continuation (lines starting with +).
Silently skips binary/encrypted files that can't be decoded.
"""
models: list[SpiceModel] = []
subcircuits: list[SpiceSubcircuit] = []
try:
text = _read_file_text(path)
except Exception:
return models, subcircuits
if not text:
return models, subcircuits
# Join continuation lines (+ at start of line continues previous line)
# Work through lines, merging continuations
raw_lines = text.splitlines()
lines: list[str] = []
for line in raw_lines:
stripped = line.strip()
if stripped.startswith("+") and lines:
# Continuation: append to previous line (strip the +)
lines[-1] = lines[-1] + " " + stripped[1:].strip()
else:
lines.append(line)
source = str(path.relative_to(LTSPICE_LIB)) if _is_under(path, LTSPICE_LIB) else path.name
# State tracking for subcircuit parsing
in_subckt = False
current_subckt: SpiceSubcircuit | None = None
component_count = 0
pin_comments: list[str] = []
pre_subckt_comments: list[str] = []
for line in lines:
stripped = line.strip()
# Track comments that might describe pins (before or after .subckt)
if stripped.startswith("*"):
if in_subckt:
pin_match = _PIN_COMMENT_RE.match(stripped)
if pin_match and current_subckt is not None:
pin_comments.append(pin_match.group(1).strip())
else:
pre_subckt_comments.append(stripped)
continue
if not stripped:
if not in_subckt:
pre_subckt_comments.clear()
continue
# Check for .model
model_match = _MODEL_RE.match(stripped)
if model_match:
name = model_match.group(1)
mtype = model_match.group(2).upper()
param_str = (model_match.group(3) or "") + " " + (model_match.group(4) or "")
params = _parse_params(param_str)
models.append(
SpiceModel(
name=name,
type=mtype,
parameters=params,
source_file=source,
)
)
continue
# Check for .subckt
subckt_match = _SUBCKT_RE.match(stripped)
if subckt_match and not in_subckt:
name = subckt_match.group(1)
pin_str = subckt_match.group(2).strip()
pins = pin_str.split() if pin_str else []
# Extract description from preceding comments
description = ""
for comment in pre_subckt_comments:
cleaned = comment.lstrip("* ").strip()
if cleaned and not cleaned.startswith("Copyright") and len(cleaned) > 3:
description = cleaned
break
current_subckt = SpiceSubcircuit(
name=name,
pins=pins,
description=description,
source_file=source,
)
in_subckt = True
component_count = 0
pin_comments.clear()
pre_subckt_comments.clear()
continue
# Check for .ends
if _ENDS_RE.match(stripped):
if current_subckt is not None:
current_subckt.n_components = component_count
current_subckt.pin_names = pin_comments[:]
subcircuits.append(current_subckt)
in_subckt = False
current_subckt = None
pin_comments.clear()
continue
# Count component lines inside a subcircuit
if in_subckt and stripped and not stripped.startswith("."):
# Component lines typically start with a letter (R, C, L, M, Q, D, etc.)
if stripped[0].isalpha():
component_count += 1
return models, subcircuits
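The continuation-line merge at the top of `_scan_lib_file` can be sketched on a two-line `.model` statement:

```python
raw_lines = [".model 2N2222 NPN(IS=14.34f", "+ BF=200 NF=1)"]
lines: list[str] = []
for line in raw_lines:
    stripped = line.strip()
    if stripped.startswith("+") and lines:
        # SPICE continuation: fold into the previous line, dropping the '+'
        lines[-1] = lines[-1] + " " + stripped[1:].strip()
    else:
        lines.append(line)
print(lines)  # ['.model 2N2222 NPN(IS=14.34f BF=200 NF=1)']
```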
def _is_under(path: Path, parent: Path) -> bool:
"""Check if path is under parent directory."""
try:
path.relative_to(parent)
return True
except ValueError:
return False
def _collect_lib_files() -> list[Path]:
"""Collect all scannable library files from known directories."""
files: list[Path] = []
extensions = {".lib", ".sub", ".mod", ".bjt", ".dio", ".mos", ".jft"}
# Scan lib/cmp/ for component model files
cmp_dir = LTSPICE_LIB / "cmp"
if cmp_dir.is_dir():
for f in cmp_dir.iterdir():
if f.is_file() and f.suffix.lower() in extensions:
files.append(f)
# Scan lib/sub/ for subcircuit files (recursive to include Contrib)
sub_dir = LTSPICE_LIB / "sub"
if sub_dir.is_dir():
for f in sub_dir.rglob("*"):
if f.is_file() and f.suffix.lower() in extensions:
files.append(f)
return sorted(files)
def _scan_all_libraries() -> tuple[list[SpiceModel], list[SpiceSubcircuit]]:
"""Scan all library files. Results are cached after first call."""
global _cache
if _cache is not None:
return _cache
all_models: list[SpiceModel] = []
all_subcircuits: list[SpiceSubcircuit] = []
for path in _collect_lib_files():
models, subcircuits = _scan_lib_file(path)
all_models.extend(models)
all_subcircuits.extend(subcircuits)
# Sort for consistent ordering
all_models.sort(key=lambda m: m.name.upper())
all_subcircuits.sort(key=lambda s: s.name.upper())
_cache = (all_models, all_subcircuits)
return _cache

src/mcltspice/netlist.py (new file, +877 lines)
"""Programmatic SPICE netlist generation for LTspice."""
from dataclasses import dataclass, field
from pathlib import Path
@dataclass
class NetlistComponent:
"""A component in the netlist."""
name: str # R1, C1, V1, M1, X1, etc.
nodes: list[str] # Connected node names
value: str # Value or model name
params: str = "" # Additional parameters
@dataclass
class Netlist:
"""A SPICE netlist that can be saved as a .cir file.
Supports a builder pattern -- all add_* methods return self for chaining:
netlist = (Netlist("My Circuit")
.add_resistor("R1", "in", "out", "10k")
.add_capacitor("C1", "out", "0", "100n")
.add_voltage_source("V1", "in", "0", ac="1")
.add_directive(".ac dec 100 1 1meg"))
"""
title: str = "LTspice Simulation"
components: list[NetlistComponent] = field(default_factory=list)
directives: list[str] = field(default_factory=list)
comments: list[str] = field(default_factory=list)
includes: list[str] = field(default_factory=list)
# -- Passive components ---------------------------------------------------
def add_resistor(self, name: str, node_p: str, node_n: str, value: str) -> "Netlist":
"""Add a resistor. Example: add_resistor('R1', 'in', 'out', '10k')"""
self.components.append(NetlistComponent(name=name, nodes=[node_p, node_n], value=value))
return self
def add_capacitor(self, name: str, node_p: str, node_n: str, value: str) -> "Netlist":
"""Add a capacitor."""
self.components.append(NetlistComponent(name=name, nodes=[node_p, node_n], value=value))
return self
def add_inductor(
self,
name: str,
node_p: str,
node_n: str,
value: str,
series_resistance: str | None = None,
) -> "Netlist":
"""Add an inductor with optional series resistance (Rser)."""
params = f"Rser={series_resistance}" if series_resistance else ""
self.components.append(
NetlistComponent(name=name, nodes=[node_p, node_n], value=value, params=params)
)
return self
# -- Sources --------------------------------------------------------------
def add_voltage_source(
self,
name: str,
node_p: str,
node_n: str,
dc: str | None = None,
ac: str | None = None,
pulse: tuple | None = None,
sin: tuple | None = None,
) -> "Netlist":
"""Add a voltage source.
Args:
name: Source name (V1, V2, etc.)
node_p: Positive node
node_n: Negative node
dc: DC value (e.g., "5")
ac: AC magnitude (e.g., "1")
pulse: (Vinitial, Von, Tdelay, Trise, Tfall, Ton, Tperiod)
sin: (Voffset, Vamp, Freq, Td, Theta, Phi)
"""
value = self._build_source_value(dc=dc, ac=ac, pulse=pulse, sin=sin)
self.components.append(NetlistComponent(name=name, nodes=[node_p, node_n], value=value))
return self
def add_current_source(
self,
name: str,
node_p: str,
node_n: str,
dc: str | None = None,
ac: str | None = None,
) -> "Netlist":
"""Add a current source."""
value = self._build_source_value(dc=dc, ac=ac)
self.components.append(NetlistComponent(name=name, nodes=[node_p, node_n], value=value))
return self
# -- Semiconductors -------------------------------------------------------
def add_diode(self, name: str, anode: str, cathode: str, model: str) -> "Netlist":
"""Add a diode. Example: add_diode('D1', 'a', 'k', '1N4148')"""
self.components.append(NetlistComponent(name=name, nodes=[anode, cathode], value=model))
return self
def add_mosfet(
self,
name: str,
drain: str,
gate: str,
source: str,
body: str,
model: str,
w: str | None = None,
length: str | None = None,
) -> "Netlist":
"""Add a MOSFET."""
params_parts: list[str] = []
if w:
params_parts.append(f"W={w}")
if length:
params_parts.append(f"L={length}")
params = " ".join(params_parts)
self.components.append(
NetlistComponent(
name=name,
nodes=[drain, gate, source, body],
value=model,
params=params,
)
)
return self
def add_bjt(self, name: str, collector: str, base: str, emitter: str, model: str) -> "Netlist":
"""Add a BJT transistor."""
self.components.append(
NetlistComponent(name=name, nodes=[collector, base, emitter], value=model)
)
return self
# -- Subcircuits ----------------------------------------------------------
def add_opamp(
self,
name: str,
inp: str,
inn: str,
out: str,
vpos: str,
vneg: str,
model: str,
) -> "Netlist":
"""Add an op-amp subcircuit instance.
Pin order follows the LTspice convention:
X<name> <inp> <inn> <vpos> <vneg> <out> <model>
"""
self.components.append(
NetlistComponent(name=name, nodes=[inp, inn, vpos, vneg, out], value=model)
)
return self
def add_subcircuit(self, name: str, nodes: list[str], model: str) -> "Netlist":
"""Add a generic subcircuit instance."""
self.components.append(NetlistComponent(name=name, nodes=list(nodes), value=model))
return self
# -- Generic component ----------------------------------------------------
def add_component(self, name: str, nodes: list[str], value: str, params: str = "") -> "Netlist":
"""Add any component with explicit nodes."""
self.components.append(
NetlistComponent(name=name, nodes=list(nodes), value=value, params=params)
)
return self
# -- Directives -----------------------------------------------------------
def add_directive(self, directive: str) -> "Netlist":
"""Add a SPICE directive (e.g., '.tran 10m', '.ac dec 100 1 1meg')."""
self.directives.append(directive)
return self
def add_meas(self, analysis: str, name: str, expression: str) -> "Netlist":
"""Add a .meas directive.
Example:
add_meas('tran', 'rise_time',
'TRIG V(out) VAL=0.1 RISE=1 TARG V(out) VAL=0.9 RISE=1')
"""
self.directives.append(f".meas {analysis} {name} {expression}")
return self
def add_param(self, name: str, value: str) -> "Netlist":
"""Add a .param directive."""
self.directives.append(f".param {name}={value}")
return self
def add_include(self, path: str) -> "Netlist":
"""Add a .include directive for library files."""
self.includes.append(f".include {path}")
return self
def add_lib(self, path: str) -> "Netlist":
"""Add a .lib directive."""
self.includes.append(f".lib {path}")
return self
def add_comment(self, text: str) -> "Netlist":
"""Add a comment line."""
self.comments.append(text)
return self
# -- Rendering / I/O ------------------------------------------------------
def render(self) -> str:
"""Render the netlist to a SPICE string."""
lines = [f"* {self.title}"]
for comment in self.comments:
lines.append(f"* {comment}")
for inc in self.includes:
lines.append(inc) # Already formatted as .include or .lib
lines.append("") # blank separator
for comp in self.components:
line = f"{comp.name} {' '.join(comp.nodes)} {comp.value}"
if comp.params:
line += f" {comp.params}"
lines.append(line)
lines.append("")
for directive in self.directives:
lines.append(directive)
lines.append(".backanno")
lines.append(".end")
return "\n".join(lines) + "\n"
def save(self, path: Path | str) -> Path:
"""Save netlist to a .cir file."""
path = Path(path)
path.write_text(self.render())
return path
# -- Internal helpers -----------------------------------------------------
@staticmethod
def _build_source_value(
dc: str | None = None,
ac: str | None = None,
pulse: tuple | None = None,
sin: tuple | None = None,
) -> str:
"""Build the value string for a voltage/current source."""
parts: list[str] = []
if dc is not None:
parts.append(dc)
if ac is not None:
parts.append(f"AC {ac}")
if pulse is not None:
params_str = " ".join(str(p) for p in pulse)
parts.append(f"PULSE({params_str})")
if sin is not None:
params_str = " ".join(str(p) for p in sin)
parts.append(f"SIN({params_str})")
return " ".join(parts) if parts else "0"
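`_build_source_value` composes the DC, AC, and waveform clauses in SPICE order; a standalone copy of the same logic:

```python
def build_source_value(dc=None, ac=None, pulse=None, sin=None) -> str:
    # Mirror of the static helper above: concatenate whichever clauses are set.
    parts: list[str] = []
    if dc is not None:
        parts.append(dc)
    if ac is not None:
        parts.append(f"AC {ac}")
    if pulse is not None:
        parts.append("PULSE(" + " ".join(str(p) for p in pulse) + ")")
    if sin is not None:
        parts.append("SIN(" + " ".join(str(p) for p in sin) + ")")
    return " ".join(parts) if parts else "0"

print(build_source_value(dc="5", ac="1"))    # 5 AC 1
print(build_source_value(sin=(0, 1, "1k")))  # SIN(0 1 1k)
print(build_source_value())                  # 0
```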
# ---------------------------------------------------------------------------
# Convenience functions for common circuit topologies
# ---------------------------------------------------------------------------
def voltage_divider(
v_in: str = "5",
r1: str = "10k",
r2: str = "10k",
sim_type: str = "op",
) -> Netlist:
"""Create a voltage divider circuit.
Args:
v_in: Input voltage (DC).
r1: Top resistor value.
r2: Bottom resistor value.
sim_type: Simulation directive -- "op" for operating point,
or a full directive string like ".tran 10m".
"""
netlist = (
Netlist("Voltage Divider")
.add_voltage_source("V1", "in", "0", dc=v_in)
.add_resistor("R1", "in", "out", r1)
.add_resistor("R2", "out", "0", r2)
)
directive = sim_type if sim_type.startswith(".") else f".{sim_type}"
netlist.add_directive(directive)
return netlist
def rc_lowpass(
r: str = "1k",
c: str = "100n",
f_start: str = "1",
f_stop: str = "1meg",
) -> Netlist:
"""Create an RC lowpass filter with AC analysis.
The circuit is driven by a 1V AC source and outputs at node 'out'.
"""
return (
Netlist("RC Lowpass Filter")
.add_voltage_source("V1", "in", "0", ac="1")
.add_resistor("R1", "in", "out", r)
.add_capacitor("C1", "out", "0", c)
.add_directive(f".ac dec 100 {f_start} {f_stop}")
)
def inverting_amplifier(
r_in: str = "10k",
r_f: str = "100k",
opamp_model: str = "LT1001",
) -> Netlist:
"""Create an inverting op-amp amplifier.
Topology:
V1 --[R_in]--> inv(-) --[R_f]--> out
non-inv(+) --> GND
Supply: +/-15V
"""
return (
Netlist("Inverting Amplifier")
.add_comment(f"Gain = -{r_f}/{r_in}")
.add_lib(opamp_model)
.add_voltage_source("V1", "in", "0", ac="1")
.add_voltage_source("Vpos", "vdd", "0", dc="15")
.add_voltage_source("Vneg", "0", "vss", dc="15")
.add_resistor("Rin", "in", "inv", r_in)
.add_resistor("Rf", "inv", "out", r_f)
.add_opamp("X1", "0", "inv", "out", "vdd", "vss", opamp_model)
.add_directive(".ac dec 100 1 1meg")
)
def non_inverting_amplifier(
r_in: str = "10k",
r_f: str = "100k",
opamp_model: str = "LT1001",
) -> Netlist:
"""Create a non-inverting op-amp amplifier.
Topology:
V1 --> non-inv(+)
inv(-) --[R_in]--> GND
inv(-) --[R_f]--> out
Supply: +/-15V
Gain = 1 + R_f / R_in
"""
return (
Netlist("Non-Inverting Amplifier")
.add_comment(f"Gain = 1 + {r_f}/{r_in}")
.add_lib(opamp_model)
.add_voltage_source("V1", "in", "0", ac="1")
.add_voltage_source("Vpos", "vdd", "0", dc="15")
.add_voltage_source("Vneg", "0", "vss", dc="15")
.add_resistor("Rin", "inv", "0", r_in)
.add_resistor("Rf", "inv", "out", r_f)
.add_opamp("X1", "in", "inv", "out", "vdd", "vss", opamp_model)
.add_directive(".ac dec 100 1 1meg")
)
def differential_amplifier(
r1: str = "10k",
r2: str = "10k",
r3: str = "10k",
r4: str = "10k",
opamp_model: str = "LT1001",
) -> Netlist:
"""Create a classic differential amplifier.
Topology:
V1 --[R1]--> inv(-) --[R2]--> out
V2 --[R3]--> non-inv(+)
non-inv(+) --[R4]--> GND
Supply: +/-15V
Vout = (R2/R1) * (V2 - V1) when R2/R1 = R4/R3
"""
return (
Netlist("Differential Amplifier")
.add_comment(f"Vout = ({r2}/{r1}) * (V2 - V1) when R2/R1 = R4/R3")
.add_lib(opamp_model)
.add_voltage_source("V1", "in1", "0", ac="1")
.add_voltage_source("V2", "in2", "0", ac="1")
.add_voltage_source("Vpos", "vdd", "0", dc="15")
.add_voltage_source("Vneg", "0", "vss", dc="15")
.add_resistor("R1", "in1", "inv", r1)
.add_resistor("R2", "inv", "out", r2)
.add_resistor("R3", "in2", "noninv", r3)
.add_resistor("R4", "noninv", "0", r4)
.add_opamp("X1", "noninv", "inv", "out", "vdd", "vss", opamp_model)
.add_directive(".ac dec 100 1 1meg")
)
def common_emitter_amplifier(
rc: str = "2.2k",
rb1: str = "56k",
rb2: str = "12k",
re: str = "1k",
cc1: str = "10u",
cc2: str = "10u",
ce: str = "47u",
vcc: str = "12",
bjt_model: str = "2N2222",
) -> Netlist:
"""Create a common-emitter amplifier with voltage divider bias.
Topology:
VCC --[RC]--> collector --> [CC2] --> out
VCC --[RB1]--> base
base --[RB2]--> GND
emitter --[RE]--> GND
emitter --[CE]--> GND (bypass cap)
in --[CC1]--> base (input coupling cap)
"""
return (
Netlist("Common Emitter Amplifier")
.add_comment("Voltage divider bias with emitter bypass cap")
.add_lib(bjt_model)
.add_voltage_source("Vcc", "vcc", "0", dc=vcc)
.add_voltage_source("Vin", "in", "0", ac="1", sin=("0", "10m", "1k"))
.add_resistor("RC", "vcc", "collector", rc)
.add_resistor("RB1", "vcc", "base", rb1)
.add_resistor("RB2", "base", "0", rb2)
.add_resistor("RE", "emitter", "0", re)
.add_capacitor("CC1", "in", "base", cc1)
.add_capacitor("CC2", "collector", "out", cc2)
.add_capacitor("CE", "emitter", "0", ce)
.add_bjt("Q1", "collector", "base", "emitter", bjt_model)
.add_directive(".tran 5m")
.add_directive(".ac dec 100 10 10meg")
)
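A rough hand estimate of the bias point and midband gain for the default values (simple divider-bias model: base current and the Early effect ignored, Vbe taken as 0.7 V, bypassed RE) lands near the gain the integration test observes:

```python
# Divider-bias estimate for the default CE amplifier values
vcc, rb1, rb2, re_val, rc_val = 12.0, 56e3, 12e3, 1e3, 2.2e3
v_base = vcc * rb2 / (rb1 + rb2)   # ~2.12 V at the base
i_e = (v_base - 0.7) / re_val      # ~1.42 mA emitter current
r_e_int = 0.026 / i_e              # intrinsic emitter resistance, ~18 ohm
gain_est = rc_val / r_e_int        # bypassed-RE midband gain, ~120
```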
def buck_converter(
ind: str = "10u",
c_out: str = "100u",
r_load: str = "10",
v_in: str = "12",
duty_cycle: float = 0.5,
freq: str = "100k",
mosfet_model: str = "IRF540N",
diode_model: str = "1N5819",
) -> Netlist:
"""Create a buck (step-down) converter.
Topology:
V_in --> MOSFET (high-side switch) --> sw node
sw --> [L] --> out
sw --> Diode (cathode) --> GND (freewheeling)
out --> [C_out] --> GND
out --> [R_load] --> GND
Gate driven by PULSE source at specified frequency and duty cycle.
"""
# Compute timing from duty cycle and frequency
# freq string may use SPICE suffixes; keep it as-is for the netlist.
# For the PULSE source we need numeric period and on-time.
freq_hz = _parse_spice_value(freq)
period = 1.0 / freq_hz
t_on = period * duty_cycle
t_rise = period * 0.01 # 1% of period
t_fall = t_rise
return (
Netlist("Buck Converter")
.add_comment(f"Duty cycle = {duty_cycle:.0%}, Fsw = {freq}")
.add_lib(mosfet_model)
.add_lib(diode_model)
.add_voltage_source("Vin", "vin", "0", dc=v_in)
.add_voltage_source(
"Vgate",
"gate",
"0",
pulse=(
"0",
v_in,
"0",
f"{t_rise:.4g}",
f"{t_fall:.4g}",
f"{t_on:.4g}",
f"{period:.4g}",
),
)
.add_mosfet("M1", "vin", "gate", "sw", "sw", mosfet_model)
.add_diode("D1", "0", "sw", diode_model)
.add_inductor("L1", "sw", "out", ind)
.add_capacitor("Cout", "out", "0", c_out)
.add_resistor("Rload", "out", "0", r_load)
.add_directive(f".tran {period * 200:.4g}")
)
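The PULSE timing computed above reduces to simple arithmetic; for the default 100 kHz / 50% settings the gate waveform timing and the ideal (lossless, continuous-conduction) output are:

```python
# Buck PULSE timing and ideal output for freq = 100 kHz, D = 0.5
freq_hz, duty = 100e3, 0.5
period = 1.0 / freq_hz      # 1e-5 s
t_on = period * duty        # 5e-6 s on-time
t_rise = period * 0.01      # 1e-7 s (1% of period)
v_out_ideal = 12.0 * duty   # ideal CCM buck: Vout = D * Vin = 6 V
```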
def ldo_regulator(
opamp_model: str = "LT1001",
r1: str = "10k",
r2: str = "10k",
pass_transistor: str = "IRF9540N",
v_in: str = "8",
v_ref: str = "2.5",
) -> Netlist:
"""Create a simple LDO voltage regulator.
Topology:
V_in --> PMOS pass transistor (source) --> out (drain)
Error amp: non-inv(+) = V_ref, inv(-) = feedback
Feedback divider: out --[R1]--> fb --[R2]--> GND
Error amp output drives PMOS gate
Vout = V_ref * (1 + R1/R2)
"""
return (
Netlist("LDO Regulator")
.add_comment(f"Vout = {v_ref} * (1 + {r1}/{r2})")
.add_lib(opamp_model)
.add_lib(pass_transistor)
.add_voltage_source("Vin", "vin", "0", dc=v_in)
.add_voltage_source("Vref", "vref", "0", dc=v_ref)
.add_voltage_source("Vpos", "vdd", "0", dc=v_in)
.add_voltage_source("Vneg", "0", "vss", dc=v_in)
.add_mosfet("M1", "out", "gate", "vin", "vin", pass_transistor)
.add_opamp("X1", "vref", "fb", "gate", "vdd", "vss", opamp_model)
.add_resistor("R1", "out", "fb", r1)
.add_resistor("R2", "fb", "0", r2)
.add_resistor("Rload", "out", "0", "100")
.add_capacitor("Cout", "out", "0", "10u")
.add_directive(".tran 10m")
)
def colpitts_oscillator(
ind: str = "1u",
c1: str = "100p",
c2: str = "100p",
rb: str = "47k",
rc: str = "1k",
re: str = "470",
vcc: str = "12",
bjt_model: str = "2N2222",
) -> Netlist:
"""Create a Colpitts oscillator.
Topology:
VCC --[RC]--> collector
collector --[L]--> tank
tank --[C1]--> base
tank --[C2]--> GND
base --[RB]--> VCC (bias)
emitter --[RE]--> GND
Output taken at the collector.
The oscillation frequency is approximately:
f = 1 / (2*pi*sqrt(L * C1*C2/(C1+C2)))
"""
return (
Netlist("Colpitts Oscillator")
.add_comment("f ~ 1 / (2*pi*sqrt(L * Cseries))")
.add_lib(bjt_model)
.add_voltage_source("Vcc", "vcc", "0", dc=vcc)
.add_resistor("RC", "vcc", "collector", rc)
.add_resistor("RB", "vcc", "base", rb)
.add_resistor("RE", "emitter", "0", re)
.add_inductor("L1", "collector", "tank", ind)
.add_capacitor("C1", "tank", "base", c1)
.add_capacitor("C2", "tank", "0", c2)
.add_bjt("Q1", "collector", "base", "emitter", bjt_model)
.add_directive(".tran 100u")
.add_directive(".ic V(collector)=6")
)
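With the default tank values, the approximate formula in the docstring (series combination of C1 and C2) puts the oscillation near 22.5 MHz:

```python
import math

# Colpitts frequency: f = 1 / (2*pi*sqrt(L * C1*C2/(C1+C2)))
ind, c1, c2 = 1e-6, 100e-12, 100e-12
c_series = c1 * c2 / (c1 + c2)                             # 50 pF
f_osc = 1.0 / (2.0 * math.pi * math.sqrt(ind * c_series))  # ~22.5 MHz
```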
def h_bridge(
v_supply: str = "12",
r_load: str = "10",
mosfet_model: str = "IRF540N",
) -> Netlist:
"""Create an H-bridge motor driver.
Topology:
V+ --[M1 (high-side A)]--> outA --[M3 (low-side A)]--> GND
V+ --[M2 (high-side B)]--> outB --[M4 (low-side B)]--> GND
R_load between outA and outB.
M1 & M4 driven together (forward), M2 & M3 driven together (reverse).
Gate signals are complementary PULSE sources with dead time.
"""
# Drive at 1 kHz with 50% duty, small dead time to prevent shoot-through
period = "1m"
t_on = "450u"
t_dead = "25u"
return (
Netlist("H-Bridge Motor Driver")
.add_comment("Complementary gate drives with dead time")
.add_lib(mosfet_model)
.add_voltage_source("Vsupply", "vcc", "0", dc=v_supply)
# Forward drive: M1 (high-A) and M4 (low-B) on first half
.add_voltage_source(
"Vg_fwd",
"gate_fwd",
"0",
pulse=("0", v_supply, t_dead, "10n", "10n", t_on, period),
)
# Reverse drive: M2 (high-B) and M3 (low-A) on second half
.add_voltage_source(
"Vg_rev",
"gate_rev",
"0",
pulse=("0", v_supply, "525u", "10n", "10n", t_on, period),  # delay 525u = half period (500u) + dead time (25u)
)
# High-side A
.add_mosfet("M1", "vcc", "gate_fwd", "outA", "outA", mosfet_model)
# High-side B
.add_mosfet("M2", "vcc", "gate_rev", "outB", "outB", mosfet_model)
# Low-side A
.add_mosfet("M3", "outA", "gate_rev", "0", "0", mosfet_model)
# Low-side B
.add_mosfet("M4", "outB", "gate_fwd", "0", "0", mosfet_model)
.add_resistor("Rload", "outA", "outB", r_load)
.add_directive(".tran 5m")
)
def sallen_key_lowpass(
r1: str = "10k",
r2: str = "10k",
c1: str = "10n",
c2: str = "10n",
opamp_model: str = "LT1001",
) -> Netlist:
"""Create a Sallen-Key lowpass filter (unity gain).
Topology:
in --[R1]--> n1 --[R2]--> n2
n2 --> opamp In+ (non-inverting)
opamp out --> In- (unity gain feedback)
C1 from n1 to out (feedback element)
C2 from n2 to GND
f_c = 1 / (2*pi*sqrt(R1*R2*C1*C2))
Supply: +/-15V
"""
return (
Netlist("Sallen-Key Lowpass Filter")
.add_comment(f"f_c = 1/(2*pi*sqrt({r1}*{r2}*{c1}*{c2}))")
.add_lib(opamp_model)
.add_voltage_source("V1", "in", "0", ac="1")
.add_voltage_source("Vpos", "vdd", "0", dc="15")
.add_voltage_source("Vneg", "0", "vss", dc="15")
.add_resistor("R1", "in", "n1", r1)
.add_resistor("R2", "n1", "n2", r2)
.add_capacitor("C1", "n1", "out", c1)
.add_capacitor("C2", "n2", "0", c2)
.add_opamp("X1", "n2", "out", "out", "vdd", "vss", opamp_model)
.add_directive(".ac dec 100 1 1meg")
)
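For the equal-component defaults the corner sits near 1.6 kHz. Worth noting (using the standard unity-gain Sallen-Key expressions): with R1 = R2 and C1 = C2 the unity-gain topology gives Q = 0.5 (two coincident real poles), not a Butterworth response; a Butterworth shape needs unequal capacitors.

```python
import math

# Unity-gain Sallen-Key: f_c = 1/(2*pi*sqrt(R1*R2*C1*C2))
#                        Q   = sqrt(R1*R2*C1*C2) / (C2*(R1 + R2))
r1 = r2 = 10e3
c1 = c2 = 10e-9
f_c = 1.0 / (2.0 * math.pi * math.sqrt(r1 * r2 * c1 * c2))  # ~1591.5 Hz
q = math.sqrt(r1 * r2 * c1 * c2) / (c2 * (r1 + r2))         # 0.5
```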
def boost_converter(
ind: str = "10u",
c_out: str = "100u",
r_load: str = "50",
v_in: str = "5",
duty_cycle: float = 0.5,
freq: str = "100k",
mosfet_model: str = "IRF540N",
diode_model: str = "1N5819",
) -> Netlist:
"""Create a boost (step-up) converter.
Topology:
Vin --[L1]--> sw
sw --> MOSFET drain (gate = gate node, source = GND)
sw --[D1 (anode->cathode)]--> out
out --[Cout]--> GND
out --[Rload]--> GND
Vout_ideal = Vin / (1 - duty_cycle)
Gate driven by PULSE source at switching frequency and duty cycle.
"""
freq_hz = _parse_spice_value(freq)
period = 1.0 / freq_hz
t_on = period * duty_cycle
t_rise = period * 0.01
t_fall = t_rise
return (
Netlist("Boost Converter")
.add_comment(f"Duty cycle = {duty_cycle:.0%}, Fsw = {freq}")
.add_lib(mosfet_model)
.add_lib(diode_model)
.add_voltage_source("Vin", "vin", "0", dc=v_in)
.add_voltage_source(
"Vgate",
"gate",
"0",
pulse=(
"0",
v_in,
"0",
f"{t_rise:.4g}",
f"{t_fall:.4g}",
f"{t_on:.4g}",
f"{period:.4g}",
),
)
.add_inductor("L1", "vin", "sw", ind)
.add_mosfet("M1", "sw", "gate", "0", "0", mosfet_model)
.add_diode("D1", "sw", "out", diode_model)
.add_capacitor("Cout", "out", "0", c_out)
.add_resistor("Rload", "out", "0", r_load)
.add_directive(f".tran {period * 200:.4g}")
)
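Ideal continuous-conduction transfer check for the defaults (losses ignored; the simulated Vout will be lower by the diode drop and parasitics):

```python
# Ideal boost conversion: Vout = Vin / (1 - D)
v_in, duty = 5.0, 0.5
v_out_ideal = v_in / (1.0 - duty)  # 10 V
```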
def instrumentation_amplifier(
r1: str = "10k",
r2: str = "10k",
r3: str = "10k",
r_gain: str = "10k",
) -> Netlist:
"""Create a classic 3-opamp instrumentation amplifier.
Stage 1 (input buffers with gain):
X1: In+ = Vin+, In- = node_a, Out = out1
X2: In+ = Vin-, In- = node_b, Out = out2
R1 from out1 to node_a (feedback)
R1_match from out2 to node_b (feedback)
Rgain between node_a and node_b
Stage 2 (difference amplifier):
X3: In- receives out1 via R2, feedback via R3 to Vout
In+ receives out2 via R2_match, R3_match to GND
Gain = (1 + 2*R1/Rgain) * (R3/R2)
All opamps use LT1001, supply +/-15V.
"""
opamp_model = "LT1001"
return (
Netlist("Instrumentation Amplifier")
.add_comment(f"Gain = (1 + 2*{r1}/{r_gain}) * ({r3}/{r2})")
.add_lib(opamp_model)
.add_voltage_source("V1", "vinp", "0", ac="1")
.add_voltage_source("V2", "vinn", "0", dc="0")
.add_voltage_source("Vpos", "vdd", "0", dc="15")
.add_voltage_source("Vneg", "0", "vss", dc="15")
# Stage 1: input buffers
.add_opamp("X1", "vinp", "node_a", "out1", "vdd", "vss", opamp_model)
.add_opamp("X2", "vinn", "node_b", "out2", "vdd", "vss", opamp_model)
.add_resistor("R1", "out1", "node_a", r1)
.add_resistor("R1b", "out2", "node_b", r1)
.add_resistor("Rgain", "node_a", "node_b", r_gain)
# Stage 2: difference amplifier
.add_opamp("X3", "noninv3", "inv3", "vout", "vdd", "vss", opamp_model)
.add_resistor("R2", "out1", "inv3", r2)
.add_resistor("R3", "inv3", "vout", r3)
.add_resistor("R2b", "out2", "noninv3", r2)
.add_resistor("R3b", "noninv3", "0", r3)
.add_directive(".ac dec 100 1 1meg")
)
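With the all-equal defaults, the gain expression in the docstring evaluates to 3 (a gain of 3 from stage 1, unity from the difference stage):

```python
# In-amp gain: (1 + 2*R1/Rgain) * (R3/R2)
r1 = r2 = r3 = r_gain = 10e3
gain = (1.0 + 2.0 * r1 / r_gain) * (r3 / r2)  # 3.0
```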
def current_mirror(
r_ref: str = "10k",
r_load: str = "1k",
vcc: str = "12",
bjt_model: str = "2N2222",
) -> Netlist:
"""Create a basic BJT current mirror.
Topology:
Vcc --[Rref]--> collector_Q1 = base_Q1 = base_Q2
emitter_Q1 = GND
Vcc --[Rload]--> collector_Q2
emitter_Q2 = GND
Q1 is diode-connected (collector tied to base).
I_ref = (Vcc - Vbe) / Rref, I_load ~ I_ref.
"""
return (
Netlist("Current Mirror")
.add_comment(f"I_ref ~ ({vcc} - 0.7) / {r_ref}")
.add_lib(bjt_model)
.add_voltage_source("Vcc", "vcc", "0", dc=vcc)
.add_resistor("Rref", "vcc", "mirror", r_ref)
.add_resistor("Rload", "vcc", "out", r_load)
.add_bjt("Q1", "mirror", "mirror", "0", bjt_model)
.add_bjt("Q2", "out", "mirror", "0", bjt_model)
.add_directive(".op")
.add_directive(".tran 1m")
)
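The .op point can be sanity-checked by hand with the docstring's formula (Vbe approximated as 0.7 V; base currents and the Early effect ignored):

```python
# Current-mirror estimate for the defaults
vcc, r_ref, r_load = 12.0, 10e3, 1e3
i_ref = (vcc - 0.7) / r_ref     # ~1.13 mA reference current
v_out = vcc - i_ref * r_load    # ~10.87 V at Q2's collector
```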
def transimpedance_amplifier(
rf: str = "100k",
cf: str = "1p",
i_source: str = "1u",
) -> Netlist:
"""Create a transimpedance amplifier (TIA).
Topology:
I1 (current source, AC) --> In- (inverting)
In+ (non-inverting) --> GND
Rf from In- to out (feedback resistor)
Cf from In- to out (feedback cap, parallel with Rf for stability)
Vout = -I_in * Rf (at low frequencies)
Bandwidth limited by Cf.
Supply: +/-15V, opamp_model = LT1001
"""
opamp_model = "LT1001"
return (
Netlist("Transimpedance Amplifier")
.add_comment(f"Vout = -I_in * {rf}")
.add_lib(opamp_model)
.add_current_source("I1", "inv", "0", ac=i_source)
.add_voltage_source("Vpos", "vdd", "0", dc="15")
.add_voltage_source("Vneg", "0", "vss", dc="15")
.add_resistor("Rf", "inv", "out", rf)
.add_capacitor("Cf", "inv", "out", cf)
.add_opamp("X1", "0", "inv", "out", "vdd", "vss", opamp_model)
.add_directive(".ac dec 100 1 1meg")
)
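Low-frequency transimpedance and the feedback-pole bandwidth for the defaults:

```python
import math

# TIA: |Vout| = I_in * Rf at low frequency; feedback pole at 1/(2*pi*Rf*Cf)
i_in, rf, cf = 1e-6, 100e3, 1e-12
v_out = i_in * rf                         # 0.1 V
f_pole = 1.0 / (2.0 * math.pi * rf * cf)  # ~1.59 MHz
```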
def _parse_spice_value(value: str) -> float:
"""Convert a SPICE-style value string to a float.
Handles common suffixes: T, G, meg, k, m, u, n, p, f.
"""
suffixes = {
"T": 1e12,
"G": 1e9,
"MEG": 1e6,
"K": 1e3,
"M": 1e-3,  # SPICE is case-insensitive: bare M/m is milli; mega must be "meg"
"U": 1e-6,
"N": 1e-9,
"P": 1e-12,
"F": 1e-15,
}
value = value.strip()
# Try plain float first
try:
return float(value)
except ValueError:
pass
# Try suffixes (longest match first to catch "meg" before "m")
upper = value.upper()
for suffix, mult in sorted(suffixes.items(), key=lambda x: -len(x[0])):
if upper.endswith(suffix):
num_part = value[: len(value) - len(suffix)]
try:
return float(num_part) * mult
except ValueError:
continue
raise ValueError(f"Cannot parse SPICE value: {value!r}")
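A compact regex-based equivalent of the suffix handling above (an illustrative sketch, not the module's implementation) shows the expected conversions, including the meg-before-m longest-match rule:

```python
import re

_MULT = {"t": 1e12, "g": 1e9, "meg": 1e6, "k": 1e3, "m": 1e-3,
         "u": 1e-6, "n": 1e-9, "p": 1e-12, "f": 1e-15}

def parse_spice(value: str) -> float:
    """Parse '4.7k' -> 4700.0; 'meg' must win over bare 'm'."""
    m = re.fullmatch(r"([0-9.eE+-]+)(meg|[tgkmunpf])?", value.strip(), re.IGNORECASE)
    if m is None:
        raise ValueError(f"Cannot parse SPICE value: {value!r}")
    num, suffix = m.groups()
    return float(num) * (_MULT[suffix.lower()] if suffix else 1.0)
```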

"""Noise analysis for LTspice .noise simulation results.
LTspice .noise analysis produces output with variables like 'onoise'
(output-referred noise spectral density in V/sqrt(Hz)) and 'inoise'
(input-referred noise spectral density). The data is complex-valued
in the .raw file; magnitude gives the spectral density.
"""
import numpy as np
# np.trapz was renamed to np.trapezoid in numpy 2.0
_trapz = getattr(np, "trapezoid", getattr(np, "trapz", None))
# Boltzmann constant (J/K)
_K_BOLTZMANN = 1.380649e-23
def compute_noise_spectral_density(frequency: np.ndarray, noise_signal: np.ndarray) -> dict:
"""Compute noise spectral density from raw noise simulation data.
Takes the frequency array and complex noise signal directly from the
.raw file and returns the noise spectral density in V/sqrt(Hz) and dB.
Args:
frequency: Frequency array in Hz (may be complex; real part is used)
noise_signal: Complex noise signal from .raw file (magnitude = V/sqrt(Hz))
Returns:
Dict with frequency_hz, noise_density_v_per_sqrt_hz, noise_density_db
"""
if len(frequency) == 0 or len(noise_signal) == 0:
return {
"frequency_hz": [],
"noise_density_v_per_sqrt_hz": [],
"noise_density_db": [],
}
freq = np.real(frequency).astype(np.float64)
density = np.abs(noise_signal)
# Single-point case: still return valid data
density_db = 20.0 * np.log10(np.maximum(density, 1e-30))
return {
"frequency_hz": freq.tolist(),
"noise_density_v_per_sqrt_hz": density.tolist(),
"noise_density_db": density_db.tolist(),
}
def compute_total_noise(
frequency: np.ndarray,
noise_signal: np.ndarray,
f_low: float | None = None,
f_high: float | None = None,
) -> dict:
"""Integrate noise spectral density over frequency to get total RMS noise.
Computes total_rms = sqrt(integral(|noise|^2 * df)) using trapezoidal
integration over the specified frequency range.
Args:
frequency: Frequency array in Hz (may be complex; real part is used)
noise_signal: Complex noise signal from .raw file
f_low: Lower integration bound in Hz (default: min frequency in data)
f_high: Upper integration bound in Hz (default: max frequency in data)
Returns:
Dict with total_rms_v, integration_range_hz, equivalent_noise_bandwidth_hz
"""
if len(frequency) < 2 or len(noise_signal) < 2:
return {
"total_rms_v": 0.0,
"integration_range_hz": [0.0, 0.0],
"equivalent_noise_bandwidth_hz": 0.0,
}
freq = np.real(frequency).astype(np.float64)
density = np.abs(noise_signal)
# Sort by frequency to ensure correct integration order
sort_idx = np.argsort(freq)
freq = freq[sort_idx]
density = density[sort_idx]
# Apply frequency bounds
if f_low is None:
f_low = float(freq[0])
if f_high is None:
f_high = float(freq[-1])
mask = (freq >= f_low) & (freq <= f_high)
freq_band = freq[mask]
density_band = density[mask]
if len(freq_band) < 2:
return {
"total_rms_v": 0.0,
"integration_range_hz": [f_low, f_high],
"equivalent_noise_bandwidth_hz": 0.0,
}
# Integrate |noise|^2 over frequency, then take sqrt for RMS
noise_power = density_band**2
integrated = float(_trapz(noise_power, freq_band))
total_rms = float(np.sqrt(max(integrated, 0.0)))
# Equivalent noise bandwidth: bandwidth of a brick-wall filter with the
# same peak density that would pass the same total noise power
peak_density = float(np.max(density_band))
if peak_density > 1e-30:
enbw = integrated / (peak_density**2)
else:
enbw = 0.0
return {
"total_rms_v": total_rms,
"integration_range_hz": [f_low, f_high],
"equivalent_noise_bandwidth_hz": float(enbw),
}
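For flat (white) noise the integral collapses to density * sqrt(bandwidth), which makes a handy cross-check of the trapezoidal result (using the same np.trapezoid / np.trapz fallback as the module header):

```python
import numpy as np

_trapz = getattr(np, "trapezoid", getattr(np, "trapz", None))

# White noise: total RMS = density * sqrt(f_high - f_low)
f = np.linspace(1.0, 1e6, 2001)
density = np.full_like(f, 10e-9)                   # flat 10 nV/sqrt(Hz)
total_rms = float(np.sqrt(_trapz(density**2, f)))  # ~10 uV
analytic = 10e-9 * np.sqrt(1e6 - 1.0)
```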
def compute_spot_noise(frequency: np.ndarray, noise_signal: np.ndarray, target_freq: float) -> dict:
"""Interpolate noise spectral density at a specific frequency.
Uses linear interpolation between adjacent data points to estimate
the noise density at the requested frequency.
Args:
frequency: Frequency array in Hz (may be complex; real part is used)
noise_signal: Complex noise signal from .raw file
target_freq: Desired frequency in Hz
Returns:
Dict with spot_noise_v_per_sqrt_hz, spot_noise_db, actual_freq_hz
"""
if len(frequency) == 0 or len(noise_signal) == 0:
return {
"spot_noise_v_per_sqrt_hz": 0.0,
"spot_noise_db": float("-inf"),
"actual_freq_hz": target_freq,
}
freq = np.real(frequency).astype(np.float64)
density = np.abs(noise_signal)
# Sort by frequency
sort_idx = np.argsort(freq)
freq = freq[sort_idx]
density = density[sort_idx]
# Clamp to data range
if target_freq <= freq[0]:
spot = float(density[0])
actual = float(freq[0])
elif target_freq >= freq[-1]:
spot = float(density[-1])
actual = float(freq[-1])
else:
# Linear interpolation
spot = float(np.interp(target_freq, freq, density))
actual = target_freq
spot_db = 20.0 * np.log10(max(spot, 1e-30))
return {
"spot_noise_v_per_sqrt_hz": spot,
"spot_noise_db": spot_db,
"actual_freq_hz": actual,
}
def compute_noise_figure(
frequency: np.ndarray,
noise_signal: np.ndarray,
source_resistance: float = 50.0,
temperature: float = 290.0,
) -> dict:
"""Compute noise figure from noise spectral density.
Noise figure is the ratio of the measured output noise power to
the thermal noise of the source resistance at the given temperature.
NF(f) = 10*log10(|noise(f)|^2 / (4*k*T*R))
Args:
frequency: Frequency array in Hz (may be complex; real part is used)
noise_signal: Complex noise signal from .raw file (output-referred)
source_resistance: Source impedance in ohms (default 50)
temperature: Temperature in Kelvin (default 290 K, IEEE standard)
Returns:
Dict with noise_figure_db (array), frequency_hz (array),
min_nf_db, nf_at_1khz
"""
if len(frequency) == 0 or len(noise_signal) == 0:
return {
"noise_figure_db": [],
"frequency_hz": [],
"min_nf_db": None,
"nf_at_1khz": None,
}
freq = np.real(frequency).astype(np.float64)
density = np.abs(noise_signal)
# Thermal noise power spectral density of the source: 4*k*T*R (V^2/Hz)
thermal_psd = 4.0 * _K_BOLTZMANN * temperature * source_resistance
if thermal_psd < 1e-50:
return {
"noise_figure_db": [],
"frequency_hz": freq.tolist(),
"min_nf_db": None,
"nf_at_1khz": None,
}
# NF = 10*log10(measured_noise_power / thermal_noise_power)
# where noise_power = density^2 per Hz
noise_power = density**2
nf_ratio = noise_power / thermal_psd
nf_db = 10.0 * np.log10(np.maximum(nf_ratio, 1e-30))
min_nf_db = float(np.min(nf_db))
# Noise figure at 1 kHz (interpolated); sort first so the bounds
# check and np.interp both see ascending frequencies
nf_at_1khz = None
sort_idx = np.argsort(freq)
freq_sorted = freq[sort_idx]
if freq_sorted[0] <= 1000.0 <= freq_sorted[-1]:
nf_at_1khz = float(np.interp(1000.0, freq_sorted, nf_db[sort_idx]))
elif len(freq) == 1:
nf_at_1khz = float(nf_db[0])
return {
"noise_figure_db": nf_db.tolist(),
"frequency_hz": freq.tolist(),
"min_nf_db": min_nf_db,
"nf_at_1khz": nf_at_1khz,
}
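The 290 K / 50 ohm reference used here corresponds to a thermal floor of about 0.9 nV/sqrt(Hz); an output density equal to that floor gives NF = 0 dB, and doubling the density adds about 6 dB:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

# Source thermal noise PSD: 4*k*T*R (V^2/Hz)
thermal_psd = 4.0 * K_B * 290.0 * 50.0
floor_density = math.sqrt(thermal_psd)  # ~0.895 nV/sqrt(Hz)
nf_at_floor = 10.0 * math.log10(floor_density**2 / thermal_psd)         # 0 dB
nf_doubled = 10.0 * math.log10((2 * floor_density) ** 2 / thermal_psd)  # ~6.02 dB
```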
def _estimate_flicker_corner(frequency: np.ndarray, density: np.ndarray) -> float | None:
"""Estimate the 1/f noise corner frequency.
The 1/f corner is where the noise transitions from 1/f (flicker) behavior
to flat (white) noise. We find where the slope of log(density) vs log(freq)
crosses -0.25 (midpoint between 0 for white and -0.5 for 1/f in V/sqrt(Hz)).
Args:
frequency: Sorted frequency array in Hz (positive, ascending)
density: Noise spectral density magnitude (same order as frequency)
Returns:
Corner frequency in Hz, or None if not detectable
"""
if len(frequency) < 4:
return None
# Work in log-log space
pos_mask = (frequency > 0) & (density > 0)
freq_pos = frequency[pos_mask]
dens_pos = density[pos_mask]
if len(freq_pos) < 4:
return None
log_f = np.log10(freq_pos)
log_d = np.log10(dens_pos)
# Compute local slope using central differences (smoothed)
# Use a window of ~5 points for robustness
n = len(log_f)
slopes = np.zeros(n)
half_win = min(2, (n - 1) // 2)
for i in range(half_win, n - half_win):
df = log_f[i + half_win] - log_f[i - half_win]
dd = log_d[i + half_win] - log_d[i - half_win]
if abs(df) > 1e-15:
slopes[i] = dd / df
# Fill edges with nearest valid slope
slopes[:half_win] = slopes[half_win]
slopes[n - half_win :] = slopes[n - half_win - 1]
# Find where slope crosses the threshold (-0.25)
# 1/f noise has slope ~ -0.5 in V/sqrt(Hz), white has slope ~ 0
threshold = -0.25
for i in range(len(slopes) - 1):
if slopes[i] < threshold <= slopes[i + 1]:
# Interpolate
ds = slopes[i + 1] - slopes[i]
if abs(ds) < 1e-15:
return float(freq_pos[i])
frac = (threshold - slopes[i]) / ds
log_corner = log_f[i] + frac * (log_f[i + 1] - log_f[i])
return float(10.0**log_corner)
return None
def compute_noise_metrics(
frequency: np.ndarray,
noise_signal: np.ndarray,
source_resistance: float = 50.0,
) -> dict:
"""Comprehensive noise analysis report.
Combines spectral density, spot noise at standard frequencies, total
integrated noise, noise figure, and 1/f corner estimation.
Args:
frequency: Frequency array in Hz (may be complex; real part is used)
noise_signal: Complex noise signal from .raw file
source_resistance: Source impedance in ohms for noise figure (default 50)
Returns:
Dict with spectral_density, spot_noise (at standard frequencies),
total_noise, noise_figure, flicker_corner_hz
"""
if len(frequency) < 2 or len(noise_signal) < 2:
return {
"spectral_density": compute_noise_spectral_density(frequency, noise_signal),
"spot_noise": {},
"total_noise": compute_total_noise(frequency, noise_signal),
"noise_figure": compute_noise_figure(frequency, noise_signal, source_resistance),
"flicker_corner_hz": None,
}
freq = np.real(frequency).astype(np.float64)
# Spectral density
spectral = compute_noise_spectral_density(frequency, noise_signal)
# Spot noise at standard frequencies
spot_freqs = [10.0, 100.0, 1000.0, 10000.0, 100000.0]
spot_labels = ["10Hz", "100Hz", "1kHz", "10kHz", "100kHz"]
spot_noise = {}
f_min = float(np.min(freq))
f_max = float(np.max(freq))
for label, sf in zip(spot_labels, spot_freqs):
if f_min <= sf <= f_max:
spot_noise[label] = compute_spot_noise(frequency, noise_signal, sf)
# Total noise over full bandwidth
total = compute_total_noise(frequency, noise_signal)
# Noise figure
nf = compute_noise_figure(frequency, noise_signal, source_resistance)
# 1/f corner frequency estimation
sort_idx = np.argsort(freq)
sorted_freq = freq[sort_idx]
sorted_density = np.abs(noise_signal)[sort_idx]
flicker_corner = _estimate_flicker_corner(sorted_freq, sorted_density)
return {
"spectral_density": spectral,
"spot_noise": spot_noise,
"total_noise": total,
"noise_figure": nf,
"flicker_corner_hz": float(flicker_corner) if flicker_corner is not None else None,
}

src/mcltspice/optimizer.py
"""Component value optimizer for iterative circuit tuning.
Adjusts circuit parameters toward target specifications by running
real LTspice simulations in a loop. Supports single-component binary
search and multi-component coordinate descent.
"""
import logging
import math
import tempfile
from dataclasses import dataclass, field
from pathlib import Path
import numpy as np
from .raw_parser import RawFile
from .runner import run_netlist
from .waveform_math import (
compute_bandwidth,
compute_peak_to_peak,
compute_rms,
compute_settling_time,
)
logger = logging.getLogger(__name__)
# ---------------------------------------------------------------------------
# Standard component value series
# ---------------------------------------------------------------------------
# E12: 12 values per decade (10% tolerance)
_E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]
# E24: 24 values per decade (5% tolerance)
_E24 = [
    1.0, 1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0, 2.2, 2.4, 2.7, 3.0,
    3.3, 3.6, 3.9, 4.3, 4.7, 5.1, 5.6, 6.2, 6.8, 7.5, 8.2, 9.1,
]
# E96: 96 values per decade (1% tolerance)
_E96 = [
    1.00, 1.02, 1.05, 1.07, 1.10, 1.13, 1.15, 1.18,
    1.21, 1.24, 1.27, 1.30, 1.33, 1.37, 1.40, 1.43,
    1.47, 1.50, 1.54, 1.58, 1.62, 1.65, 1.69, 1.74,
    1.78, 1.82, 1.87, 1.91, 1.96, 2.00, 2.05, 2.10,
    2.15, 2.21, 2.26, 2.32, 2.37, 2.43, 2.49, 2.55,
    2.61, 2.67, 2.74, 2.80, 2.87, 2.94, 3.01, 3.09,
    3.16, 3.24, 3.32, 3.40, 3.48, 3.57, 3.65, 3.74,
    3.83, 3.92, 4.02, 4.12, 4.22, 4.32, 4.42, 4.53,
    4.64, 4.75, 4.87, 4.99, 5.11, 5.23, 5.36, 5.49,
    5.62, 5.76, 5.90, 6.04, 6.19, 6.34, 6.49, 6.65,
    6.81, 6.98, 7.15, 7.32, 7.50, 7.68, 7.87, 8.06,
    8.25, 8.45, 8.66, 8.87, 9.09, 9.31, 9.53, 9.76,
]
_SERIES = {
"E12": _E12,
"E24": _E24,
"E96": _E96,
}
# Engineering prefixes: exponent -> suffix
_ENG_PREFIXES = {
-15: "f",
-12: "p",
-9: "n",
-6: "u",
-3: "m",
0: "",
3: "k",
6: "M",
9: "G",
12: "T",
}
# ---------------------------------------------------------------------------
# Data classes
# ---------------------------------------------------------------------------
@dataclass
class OptimizationTarget:
"""A single performance target to optimize toward."""
signal_name: str # e.g. "V(out)"
metric: str # "bandwidth_hz", "rms", "peak_to_peak", "settling_time",
# "gain_db", "phase_margin_deg"
target_value: float
weight: float = 1.0
@dataclass
class ComponentRange:
"""Allowed range for a component value during optimization."""
component_name: str # e.g. "R1"
min_value: float
max_value: float
preferred_series: str | None = None # "E12", "E24", "E96"
@dataclass
class OptimizationResult:
"""Outcome of an optimization run."""
best_values: dict[str, float]
best_cost: float
iterations: int
history: list[dict] = field(default_factory=list)
targets_met: dict[str, bool] = field(default_factory=dict)
final_metrics: dict[str, float] = field(default_factory=dict)
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def snap_to_preferred(value: float, series: str = "E12") -> float:
"""Snap a value to the nearest standard component value.
Works across decades -- e.g. 4800 snaps to 4.7k (E12).
Args:
value: Raw component value in base units (ohms, farads, etc.)
series: Preferred series name ("E12", "E24", "E96")
Returns:
Nearest standard value from the requested series.
"""
if value <= 0:
return _SERIES.get(series, _E12)[0]
base_values = _SERIES.get(series, _E12)
decade = math.floor(math.log10(value))
mantissa = value / (10**decade)
best = base_values[0]
best_ratio = abs(math.log10(mantissa / best))
for bv in base_values[1:]:
ratio = abs(math.log10(mantissa / bv))
if ratio < best_ratio:
best = bv
best_ratio = ratio
# Also check the decade above (mantissa might round up past 9.x)
for bv in base_values[:2]:
candidate = bv * 10
ratio = abs(math.log10(mantissa / candidate))
if ratio < best_ratio:
best = candidate
best_ratio = ratio
return best * (10**decade)
def format_engineering(value: float) -> str:
"""Format a float in engineering notation with SI suffix.
Examples:
10000 -> "10k"
0.0001 -> "100u"
4700 -> "4.7k"
0.0000033 -> "3.3u"
1.5 -> "1.5"
"""
if value == 0:
return "0"
sign = "-" if value < 0 else ""
value = abs(value)
exp = math.floor(math.log10(value))
# Round to nearest multiple of 3 (downward)
eng_exp = (exp // 3) * 3
# Clamp to known prefixes
eng_exp = max(-15, min(12, eng_exp))
mantissa = value / (10**eng_exp)
suffix = _ENG_PREFIXES.get(eng_exp, f"e{eng_exp}")
# Format mantissa: strip trailing zeros but keep at least one digit
if mantissa == int(mantissa) and mantissa < 1000:
formatted = f"{sign}{int(mantissa)}{suffix}"
else:
formatted = f"{sign}{mantissa:.3g}{suffix}"
return formatted
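The `(exp // 3) * 3` step relies on Python's floor division rounding toward negative infinity, which is what makes sub-unity values pick the prefix below them (e.g. 1e-4 becomes "100u" rather than "0.1m"):

```python
# Floor division keeps the engineering exponent at or below the true exponent
for exp, expected in [(4, 3), (3, 3), (0, 0), (-1, -3), (-4, -6)]:
    assert (exp // 3) * 3 == expected
```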
# ---------------------------------------------------------------------------
# Metric extraction
# ---------------------------------------------------------------------------
def _extract_metric(
raw_data: RawFile,
target: OptimizationTarget,
) -> float | None:
"""Compute a single metric from simulation results.
Returns the measured value, or None if the signal/data is unavailable.
"""
signal = raw_data.get_variable(target.signal_name)
if signal is None:
return None
metric = target.metric
if metric == "rms":
return compute_rms(signal)
if metric == "peak_to_peak":
result = compute_peak_to_peak(signal)
return result["peak_to_peak"]
if metric == "settling_time":
time = raw_data.get_time()
if time is None:
return None
if np.iscomplexobj(time):
time = time.real
if np.iscomplexobj(signal):
signal = np.abs(signal)
result = compute_settling_time(time, signal)
return result["settling_time"]
if metric == "bandwidth_hz":
freq = raw_data.get_frequency()
if freq is None:
return None
# Convert complex magnitude to dB
mag_db = np.where(
np.abs(signal) > 0,
20.0 * np.log10(np.abs(signal)),
-200.0,
)
result = compute_bandwidth(np.real(freq), np.real(mag_db))
return result["bandwidth_hz"]
if metric == "gain_db":
# Peak gain in dB (for AC analysis signals)
mag = np.abs(signal)
peak = float(np.max(mag))
if peak > 0:
return 20.0 * math.log10(peak)
return -200.0
if metric == "phase_margin_deg":
freq = raw_data.get_frequency()
if freq is None:
return None
# Phase margin: phase at the frequency where |gain| crosses 0 dB
mag = np.abs(signal)
mag_db = np.where(mag > 0, 20.0 * np.log10(mag), -200.0)
phase_deg = np.degrees(np.angle(signal))
# Find 0 dB crossing (gain crossover)
for i in range(len(mag_db) - 1):
if mag_db[i] >= 0 and mag_db[i + 1] < 0:
# Interpolate phase at 0 dB crossing
dm = mag_db[i + 1] - mag_db[i]
if abs(dm) < 1e-30:
phase_at_xover = float(phase_deg[i])
else:
frac = (0.0 - mag_db[i]) / dm
phase_at_xover = float(phase_deg[i] + frac * (phase_deg[i + 1] - phase_deg[i]))
# Phase margin = 180 + phase (since we want distance from -180)
return 180.0 + phase_at_xover
# No 0 dB crossing found -- gain never reaches unity
return None
return None
def _compute_cost(
raw_data: RawFile,
targets: list[OptimizationTarget],
) -> tuple[float, dict[str, float]]:
"""Evaluate weighted cost across all targets.
Cost for each target: weight * |measured - target| / |target|
Uses |target| normalization so different units are comparable.
Returns:
(total_cost, metrics_dict) where metrics_dict maps
"signal_name.metric" -> measured_value
"""
total_cost = 0.0
metrics: dict[str, float] = {}
for t in targets:
measured = _extract_metric(raw_data, t)
key = f"{t.signal_name}.{t.metric}"
if measured is None:
# Signal not found -- heavy penalty
total_cost += t.weight * 1e6
metrics[key] = float("nan")
continue
metrics[key] = measured
if abs(t.target_value) > 1e-30:
cost = t.weight * abs(measured - t.target_value) / abs(t.target_value)
else:
# Target is ~0: use absolute error
cost = t.weight * abs(measured)
total_cost += cost
return total_cost, metrics
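The weighted relative-error cost described in the docstring can be reproduced standalone. A minimal sketch — the target key, values, and weights below are illustrative, not taken from the real tool:

```python
def weighted_cost(measured: dict[str, float], targets: list[tuple[str, float, float]]) -> float:
    """targets: list of (key, target_value, weight) tuples."""
    total = 0.0
    for key, target_value, weight in targets:
        m = measured.get(key)
        if m is None:
            total += weight * 1e6  # missing signal: heavy penalty
        elif abs(target_value) > 1e-30:
            total += weight * abs(m - target_value) / abs(target_value)
        else:
            total += weight * abs(m)  # target ~0: fall back to absolute error
    return total

# 19 dB measured against a 20 dB target -> 5% relative error
cost = weighted_cost({"V(out).gain_db": 19.0}, [("V(out).gain_db", 20.0, 1.0)])
```

The |target| normalization is what lets a gain target in dB and a bandwidth target in Hz contribute comparable cost terms.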
# ---------------------------------------------------------------------------
# Core optimizer
# ---------------------------------------------------------------------------
async def _run_with_values(
netlist_template: str,
values: dict[str, float],
work_dir: Path,
) -> RawFile | None:
"""Substitute values into template, write .cir, simulate, return parsed data."""
text = netlist_template
for name, val in values.items():
text = text.replace(f"{{{name}}}", format_engineering(val))
cir_path = work_dir / "opt_iter.cir"
cir_path.write_text(text)
result = await run_netlist(cir_path, work_dir=work_dir)
if not result.success or result.raw_data is None:
logger.warning("Simulation failed: %s", result.error)
return None
return result.raw_data
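`_run_with_values` substitutes `{Name}` placeholders into the netlist template before each simulation. A minimal sketch of that substitution step, with a simplified stand-in for `format_engineering` (the prefix rules here are an assumption, not the real helper):

```python
def format_eng_sketch(val: float) -> str:
    # Simplified stand-in for format_engineering: k/m prefixes only.
    if val >= 1e3:
        return f"{val / 1e3:g}k"
    if val < 1:
        return f"{val * 1e3:g}m"
    return f"{val:g}"

def substitute(netlist_template: str, values: dict[str, float]) -> str:
    text = netlist_template
    for name, val in values.items():
        text = text.replace(f"{{{name}}}", format_eng_sketch(val))
    return text

netlist = substitute("R1 out 0 {R1}\nC1 out 0 {C1}", {"R1": 4700.0})
# Placeholders with no supplied value ({C1}) pass through untouched.
```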
async def _binary_search_single(
netlist_template: str,
target: OptimizationTarget,
comp: ComponentRange,
max_iterations: int,
work_dir: Path,
) -> OptimizationResult:
"""Optimize a single component via binary search.
Assumes the metric is monotonic (or at least locally monotonic)
with respect to the component value. Evaluates the midpoint,
then narrows the half that moves the metric toward the target.
"""
lo = comp.min_value
hi = comp.max_value
history: list[dict] = []
best_values = {comp.component_name: (lo + hi) / 2}
best_cost = float("inf")
best_metrics: dict[str, float] = {}
for iteration in range(max_iterations):
mid = (lo + hi) / 2
values = {comp.component_name: mid}
raw_data = await _run_with_values(netlist_template, values, work_dir)
if raw_data is None:
history.append(
{
"iteration": iteration,
"values": {comp.component_name: mid},
"cost": float("inf"),
"error": "simulation_failed",
}
)
# Heuristic on simulation failure: shrink the bracket toward the lower half
hi = mid
continue
cost, metrics = _compute_cost(raw_data, [target])
measured = metrics.get(f"{target.signal_name}.{target.metric}")
history.append(
{
"iteration": iteration,
"values": {comp.component_name: mid},
"cost": cost,
"metrics": metrics,
}
)
if cost < best_cost:
best_cost = cost
best_values = {comp.component_name: mid}
best_metrics = metrics
# Decide which half to keep
if measured is not None and not math.isnan(measured):
if measured < target.target_value:
# Need larger metric -- which direction depends on monotonicity.
# Test: does increasing component value increase or decrease metric?
# We probe a point slightly above mid to determine direction.
probe = mid * 1.1
if probe > hi:
probe = mid * 0.9
probe_is_lower = True
else:
probe_is_lower = False
probe_data = await _run_with_values(
netlist_template, {comp.component_name: probe}, work_dir
)
if probe_data is not None:
probe_measured = _extract_metric(probe_data, target)
if probe_measured is not None:
if probe_is_lower:
# We probed lower. If metric went up, metric increases
# with decreasing value => go lower.
if probe_measured > measured:
hi = mid
else:
lo = mid
else:
# We probed higher. If metric went up, go higher.
if probe_measured > measured:
lo = mid
else:
hi = mid
else:
hi = mid
else:
hi = mid
elif measured > target.target_value:
# Need smaller metric -- mirror logic
probe = mid * 1.1
if probe > hi:
probe = mid * 0.9
probe_is_lower = True
else:
probe_is_lower = False
probe_data = await _run_with_values(
netlist_template, {comp.component_name: probe}, work_dir
)
if probe_data is not None:
probe_measured = _extract_metric(probe_data, target)
if probe_measured is not None:
if probe_is_lower:
if probe_measured < measured:
hi = mid
else:
lo = mid
else:
if probe_measured < measured:
lo = mid
else:
hi = mid
else:
lo = mid
else:
lo = mid
else:
# Exact match
break
# Converged if search range is tight enough
if hi - lo < lo * 0.001:
break
# Snap to preferred series if requested
final_val = best_values[comp.component_name]
if comp.preferred_series:
final_val = snap_to_preferred(final_val, comp.preferred_series)
best_values[comp.component_name] = final_val
target_key = f"{target.signal_name}.{target.metric}"
met = best_metrics.get(target_key)
tolerance = abs(target.target_value * 0.05) if abs(target.target_value) > 0 else 0.05
targets_met = {
target_key: met is not None and abs(met - target.target_value) <= tolerance,
}
return OptimizationResult(
best_values=best_values,
best_cost=best_cost,
iterations=len(history),
history=history,
targets_met=targets_met,
final_metrics=best_metrics,
)
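The binary search relies on the metric responding monotonically to the component value. A self-contained sketch of the same idea, using the analytic RC cutoff f_c = 1/(2πRC) as a stand-in for a real simulation — all names and values here are hypothetical:

```python
import math

def cutoff_hz(r: float, c: float = 100e-9) -> float:
    # Analytic stand-in for "simulate and extract bandwidth_hz".
    return 1.0 / (2 * math.pi * r * c)

def bisect_component(target_hz: float, lo: float, hi: float, iters: int = 40) -> float:
    # cutoff_hz is monotonically decreasing in R, so steer accordingly.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if cutoff_hz(mid) > target_hz:
            lo = mid  # metric too high -> need larger R
        else:
            hi = mid
        if hi - lo < lo * 1e-6:  # relative convergence, as above
            break
    return (lo + hi) / 2

r = bisect_component(1_000.0, lo=100.0, hi=100_000.0)
# r converges near 1/(2*pi*1000*100e-9) ~ 1591.5 ohms
```

The real optimizer cannot assume it knows the sign of the slope, which is why it probes a nearby value each iteration to discover the direction.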
async def _coordinate_descent(
netlist_template: str,
targets: list[OptimizationTarget],
component_ranges: list[ComponentRange],
max_iterations: int,
work_dir: Path,
) -> OptimizationResult:
"""Optimize multiple components via coordinate descent.
Cycles through components, running a bounded golden-section search
on each one while holding the others fixed. Repeats until the
overall cost stops improving or we exhaust the iteration budget.
"""
# Start at geometric midpoint of each range
current_values: dict[str, float] = {}
for cr in component_ranges:
current_values[cr.component_name] = math.sqrt(cr.min_value * cr.max_value)
history: list[dict] = []
best_cost = float("inf")
best_values = dict(current_values)
best_metrics: dict[str, float] = {}
iteration = 0
n_comps = len(component_ranges)
# Golden ratio for search
phi = (1 + math.sqrt(5)) / 2
resphi = 2 - phi # ~0.382
while iteration < max_iterations:
improved_this_cycle = False
for cr in component_ranges:
if iteration >= max_iterations:
break
lo = cr.min_value
hi = cr.max_value
# Golden-section search for this component
# Allocate a few iterations per component per cycle
iters_per_comp = max(2, (max_iterations - iteration) // n_comps)
a = lo
b = hi
x1 = a + resphi * (b - a)
x2 = b - resphi * (b - a)
# Evaluate x1
trial_1 = dict(current_values)
trial_1[cr.component_name] = x1
raw_1 = await _run_with_values(netlist_template, trial_1, work_dir)
cost_1 = float("inf")
metrics_1: dict[str, float] = {}
if raw_1 is not None:
cost_1, metrics_1 = _compute_cost(raw_1, targets)
iteration += 1
# Evaluate x2
trial_2 = dict(current_values)
trial_2[cr.component_name] = x2
raw_2 = await _run_with_values(netlist_template, trial_2, work_dir)
cost_2 = float("inf")
metrics_2: dict[str, float] = {}
if raw_2 is not None:
cost_2, metrics_2 = _compute_cost(raw_2, targets)
iteration += 1
for _ in range(iters_per_comp - 2):
if iteration >= max_iterations:
break
if cost_1 < cost_2:
b = x2
x2 = x1
cost_2 = cost_1
metrics_2 = metrics_1
x1 = a + resphi * (b - a)
trial = dict(current_values)
trial[cr.component_name] = x1
raw = await _run_with_values(netlist_template, trial, work_dir)
if raw is not None:
cost_1, metrics_1 = _compute_cost(raw, targets)
else:
cost_1 = float("inf")
metrics_1 = {}
else:
a = x1
x1 = x2
cost_1 = cost_2
metrics_1 = metrics_2
x2 = b - resphi * (b - a)
trial = dict(current_values)
trial[cr.component_name] = x2
raw = await _run_with_values(netlist_template, trial, work_dir)
if raw is not None:
cost_2, metrics_2 = _compute_cost(raw, targets)
else:
cost_2 = float("inf")
metrics_2 = {}
iteration += 1
# Pick the better of the final two
if cost_1 <= cost_2:
chosen_val = x1
chosen_cost = cost_1
chosen_metrics = metrics_1
else:
chosen_val = x2
chosen_cost = cost_2
chosen_metrics = metrics_2
# Snap to preferred series
if cr.preferred_series:
chosen_val = snap_to_preferred(chosen_val, cr.preferred_series)
current_values[cr.component_name] = chosen_val
history.append(
{
"iteration": iteration,
"optimized_component": cr.component_name,
"values": dict(current_values),
"cost": chosen_cost,
"metrics": chosen_metrics,
}
)
if chosen_cost < best_cost:
best_cost = chosen_cost
best_values = dict(current_values)
best_metrics = chosen_metrics
improved_this_cycle = True
if not improved_this_cycle:
break
# Determine which targets are met (within 5%)
targets_met: dict[str, bool] = {}
for t in targets:
key = f"{t.signal_name}.{t.metric}"
measured = best_metrics.get(key)
if measured is None or math.isnan(measured):
targets_met[key] = False
else:
tolerance = abs(t.target_value * 0.05) if abs(t.target_value) > 0 else 0.05
targets_met[key] = abs(measured - t.target_value) <= tolerance
return OptimizationResult(
best_values=best_values,
best_cost=best_cost,
iterations=iteration,
history=history,
targets_met=targets_met,
final_metrics=best_metrics,
)
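The per-component inner loop above is a textbook golden-section search. A minimal standalone version on a 1-D cost function, assuming the cost is unimodal on [a, b]:

```python
import math

def golden_section_min(f, a: float, b: float, iters: int = 60) -> float:
    # resphi = 2 - phi ~= 0.382, matching the optimizer above.
    resphi = 2 - (1 + math.sqrt(5)) / 2
    x1 = a + resphi * (b - a)
    x2 = b - resphi * (b - a)
    f1, f2 = f(x1), f(x2)
    for _ in range(iters):
        if f1 < f2:
            # Minimum lies in [a, x2]: shift the bracket left, reuse x1.
            b, x2, f2 = x2, x1, f1
            x1 = a + resphi * (b - a)
            f1 = f(x1)
        else:
            # Minimum lies in [x1, b]: shift the bracket right, reuse x2.
            a, x1, f1 = x1, x2, f2
            x2 = b - resphi * (b - a)
            f2 = f(x2)
    return (a + b) / 2

x_best = golden_section_min(lambda v: (v - 3.3) ** 2, 0.0, 10.0)
# x_best ~ 3.3; the bracket shrinks by ~0.618 per evaluation
```

Reusing one interior evaluation per step is the whole point of the golden ratio spacing: each iteration costs a single simulation, not two.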
async def optimize_component_values(
netlist_template: str,
targets: list[OptimizationTarget],
component_ranges: list[ComponentRange],
max_iterations: int = 20,
method: str = "binary_search",
) -> OptimizationResult:
"""Iteratively adjust component values toward target specifications.
Creates temporary .cir files, runs real LTspice simulations via Wine,
and evaluates a cost function against the targets after each run.
Args:
netlist_template: Netlist text with ``{ComponentName}`` placeholders
(e.g. ``{R1}``, ``{C1}``) that get substituted each iteration.
targets: Performance targets to optimize toward.
component_ranges: Allowed ranges for each tunable component.
max_iterations: Maximum simulation iterations (default 20).
method: ``"binary_search"`` for single-component optimization,
``"coordinate_descent"`` for multi-component. When
``"binary_search"`` is requested with multiple components,
coordinate descent is used automatically.
Returns:
OptimizationResult with best values found, cost history,
and whether each target was met.
"""
work_dir = Path(tempfile.mkdtemp(prefix="ltspice_opt_"))
if len(component_ranges) == 1 and len(targets) == 1 and method == "binary_search":
return await _binary_search_single(
netlist_template,
targets[0],
component_ranges[0],
max_iterations,
work_dir,
)
return await _coordinate_descent(
netlist_template,
targets,
component_ranges,
max_iterations,
work_dir,
)


@ -0,0 +1,196 @@
"""Power and efficiency calculations from simulation data."""
import numpy as np
# np.trapz was renamed to np.trapezoid in numpy 2.0
_trapz = getattr(np, "trapezoid", getattr(np, "trapz", None))
def compute_instantaneous_power(voltage: np.ndarray, current: np.ndarray) -> np.ndarray:
"""Element-wise power: P(t) = V(t) * I(t).
Args:
voltage: Voltage waveform array
current: Current waveform array
Returns:
Instantaneous power array (same length as inputs)
"""
return np.real(voltage) * np.real(current)
def compute_average_power(time: np.ndarray, voltage: np.ndarray, current: np.ndarray) -> float:
"""Time-averaged power: P_avg = (1/T) * integral(V*I dt).
Uses trapezoidal integration over the full time span.
Args:
time: Time array in seconds
voltage: Voltage waveform array
current: Current waveform array
Returns:
Average power in watts
"""
t = np.real(time)
if len(t) < 2:
return 0.0
duration = t[-1] - t[0]
if duration <= 0:
return 0.0
p_inst = np.real(voltage) * np.real(current)
return float(_trapz(p_inst, t) / duration)
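As a sanity check on the trapezoidal average, a sinusoid across a resistive load should give Vpk²/(2R). A quick sketch with illustrative values:

```python
import numpy as np

# Hypothetical waveform: 2 Vpk sine at 1 kHz across a 100 ohm resistor.
t = np.linspace(0.0, 1e-3, 10_001)  # exactly one period
v = 2.0 * np.sin(2 * np.pi * 1_000.0 * t)
i = v / 100.0

_trapz = getattr(np, "trapezoid", getattr(np, "trapz", None))  # numpy 2.0 rename
p_avg = float(_trapz(v * i, t) / (t[-1] - t[0]))
# Analytic result: Vpk^2 / (2R) = 4 / 200 = 0.02 W
```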
def compute_efficiency(
time: np.ndarray,
input_voltage: np.ndarray,
input_current: np.ndarray,
output_voltage: np.ndarray,
output_current: np.ndarray,
) -> dict:
"""Compute power conversion efficiency.
Args:
time: Time array in seconds
input_voltage: Input voltage waveform
input_current: Input current waveform
output_voltage: Output voltage waveform
output_current: Output current waveform
Returns:
Dict with efficiency_percent, input_power_watts,
output_power_watts, power_dissipated_watts
"""
p_in = compute_average_power(time, input_voltage, input_current)
p_out = compute_average_power(time, output_voltage, output_current)
p_dissipated = p_in - p_out
if abs(p_in) < 1e-15:
efficiency = 0.0
else:
efficiency = (p_out / p_in) * 100.0
return {
"efficiency_percent": efficiency,
"input_power_watts": p_in,
"output_power_watts": p_out,
"power_dissipated_watts": p_dissipated,
}
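With purely DC waveforms the time averages collapse to simple products, which makes the efficiency arithmetic easy to verify. The 12 V in / 5 V out numbers below are illustrative:

```python
import numpy as np

t = np.linspace(0.0, 1e-3, 101)
v_in = np.full_like(t, 12.0)   # 12 V at 1 A -> 12 W in
i_in = np.full_like(t, 1.0)
v_out = np.full_like(t, 5.0)   # 5 V at 2 A -> 10 W out
i_out = np.full_like(t, 2.0)

_trapz = getattr(np, "trapezoid", getattr(np, "trapz", None))

def avg_power(v: np.ndarray, i: np.ndarray) -> float:
    return float(_trapz(v * i, t) / (t[-1] - t[0]))

p_in, p_out = avg_power(v_in, i_in), avg_power(v_out, i_out)
efficiency = p_out / p_in * 100.0
# efficiency ~ 83.33 %, with 2 W dissipated in the converter
```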
def compute_power_spectrum(
time: np.ndarray,
voltage: np.ndarray,
current: np.ndarray,
max_harmonics: int = 20,
) -> dict:
"""FFT of instantaneous power to identify ripple frequencies.
Args:
time: Time array in seconds
voltage: Voltage waveform array
current: Current waveform array
max_harmonics: Maximum number of frequency bins to return
Returns:
Dict with frequencies, power_magnitudes, dominant_freq, dc_power
"""
t = np.real(time)
if len(t) < 2:
return {
"frequencies": [],
"power_magnitudes": [],
"dominant_freq": 0.0,
"dc_power": 0.0,
}
dt = (t[-1] - t[0]) / (len(t) - 1)
if dt <= 0:
return {
"frequencies": [],
"power_magnitudes": [],
"dominant_freq": 0.0,
"dc_power": float(np.mean(np.real(voltage) * np.real(current))),
}
p_inst = np.real(voltage) * np.real(current)
n = len(p_inst)
spectrum = np.fft.rfft(p_inst)
freqs = np.fft.rfftfreq(n, d=dt)
magnitudes = np.abs(spectrum) * 2.0 / n
magnitudes[0] /= 2.0 # DC component correction
dc_power = float(magnitudes[0])
# Dominant AC frequency (largest non-DC bin)
if len(magnitudes) > 1:
dominant_idx = int(np.argmax(magnitudes[1:])) + 1
dominant_freq = float(freqs[dominant_idx])
else:
dominant_freq = 0.0
# Trim to requested harmonics (plus DC)
limit = min(max_harmonics + 1, len(freqs))
freqs = freqs[:limit]
magnitudes = magnitudes[:limit]
return {
"frequencies": freqs.tolist(),
"power_magnitudes": magnitudes.tolist(),
"dominant_freq": dominant_freq,
"dc_power": dc_power,
}
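The ripple-detection idea can be checked on a synthetic rail: DC plus a small sinusoidal ripple should put the dominant non-DC bin at the ripple frequency. Values here are illustrative:

```python
import numpy as np

# Hypothetical rail: 5 V DC with 100 mV of 1 kHz ripple into a 2 A constant load.
fs = 100_000.0
t = np.arange(1_000) / fs  # 10 ms = 10 ripple periods, endpoint-exclusive for the FFT
v = 5.0 + 0.1 * np.sin(2 * np.pi * 1_000.0 * t)
i = np.full_like(t, 2.0)

p = v * i
mags = np.abs(np.fft.rfft(p)) * 2.0 / len(p)
mags[0] /= 2.0  # DC bin carries no factor of 2
freqs = np.fft.rfftfreq(len(p), d=1.0 / fs)
dominant = float(freqs[1:][np.argmax(mags[1:])])
# dominant lands on the 1 kHz ripple fundamental; mags[0] ~ 10 W of DC power
```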
def compute_power_metrics(time: np.ndarray, voltage: np.ndarray, current: np.ndarray) -> dict:
"""Comprehensive power report for a voltage/current pair.
Power factor here is defined as the ratio of real (average) power
to apparent power (Vrms * Irms).
Args:
time: Time array in seconds
voltage: Voltage waveform array
current: Current waveform array
Returns:
Dict with avg_power, rms_power, peak_power, min_power, power_factor
"""
v = np.real(voltage)
i = np.real(current)
p_inst = v * i
if len(p_inst) == 0:
return {
"avg_power": 0.0,
"rms_power": 0.0,
"peak_power": 0.0,
"min_power": 0.0,
"power_factor": 0.0,
}
avg_power = compute_average_power(time, voltage, current)
rms_power = float(np.sqrt(np.mean(p_inst**2)))
peak_power = float(np.max(p_inst))
min_power = float(np.min(p_inst))
# Power factor = avg power / apparent power (Vrms * Irms)
v_rms = float(np.sqrt(np.mean(v**2)))
i_rms = float(np.sqrt(np.mean(i**2)))
apparent = v_rms * i_rms
if apparent < 1e-15:
power_factor = 0.0
else:
power_factor = avg_power / apparent
return {
"avg_power": avg_power,
"rms_power": rms_power,
"peak_power": peak_power,
"min_power": min_power,
"power_factor": power_factor,
}
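For sinusoidal voltage and current separated by phase φ, the power-factor definition above reduces to cos φ. A quick check at φ = 60° with illustrative values:

```python
import numpy as np

t = np.linspace(0.0, 1e-3, 10_001)  # exactly one 1 kHz period
phi = np.pi / 3                     # 60 degree lag
v = 10.0 * np.sin(2 * np.pi * 1_000.0 * t)
i = 2.0 * np.sin(2 * np.pi * 1_000.0 * t - phi)

_trapz = getattr(np, "trapezoid", getattr(np, "trapz", None))
p_avg = float(_trapz(v * i, t) / (t[-1] - t[0]))
apparent = float(np.sqrt(np.mean(v**2)) * np.sqrt(np.mean(i**2)))
power_factor = p_avg / apparent
# power_factor ~ cos(60 deg) = 0.5
```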


@ -15,6 +15,7 @@ import numpy as np
@dataclass
class Variable:
"""A variable (signal) in the raw file."""
index: int
name: str
type: str # e.g., "voltage", "current", "time", "frequency"
@ -23,6 +24,7 @@ class Variable:
@dataclass
class RawFile:
"""Parsed LTspice .raw file."""
title: str
date: str
plotname: str
@ -30,22 +32,65 @@ class RawFile:
variables: list[Variable]
points: int
data: np.ndarray # Shape: (n_variables, n_points)
n_runs: int = 1 # Number of runs (>1 for .step/.mc/.temp)
run_boundaries: list[int] | None = None # Start index of each run
def get_variable(self, name: str) -> np.ndarray | None:
"""Get data for a variable by name (case-insensitive partial match)."""
def get_variable(self, name: str, run: int | None = None) -> np.ndarray | None:
"""Get data for a variable by name (case-insensitive partial match).
Args:
name: Signal name (partial match OK)
run: If stepped data, return only this run (0-indexed). None = all data.
"""
name_lower = name.lower()
for var in self.variables:
if name_lower in var.name.lower():
return self.data[var.index]
arr = self.data[var.index]
if run is not None and self.run_boundaries:
start, end = self._run_slice(run)
return arr[start:end]
return arr
return None
def get_time(self) -> np.ndarray | None:
def get_time(self, run: int | None = None) -> np.ndarray | None:
"""Get the time axis (for transient analysis)."""
return self.get_variable("time")
return self.get_variable("time", run=run)
def get_frequency(self) -> np.ndarray | None:
def get_frequency(self, run: int | None = None) -> np.ndarray | None:
"""Get the frequency axis (for AC analysis)."""
return self.get_variable("frequency")
return self.get_variable("frequency", run=run)
def get_run_data(self, run: int) -> "RawFile":
"""Extract a single run as a new RawFile (for stepped simulations)."""
if not self.run_boundaries or run >= self.n_runs:
return self
start, end = self._run_slice(run)
return RawFile(
title=self.title,
date=self.date,
plotname=self.plotname,
flags=self.flags,
variables=self.variables,
points=end - start,
data=self.data[:, start:end].copy(),
n_runs=1,
run_boundaries=None,
)
@property
def is_stepped(self) -> bool:
return self.n_runs > 1
def _run_slice(self, run: int) -> tuple[int, int]:
"""Get (start, end) indices for a given run."""
if not self.run_boundaries:
return 0, self.points
start = self.run_boundaries[run]
if run + 1 < len(self.run_boundaries):
end = self.run_boundaries[run + 1]
else:
end = self.data.shape[1]
return start, end
def parse_raw_file(path: Path | str) -> RawFile:
@ -68,7 +113,7 @@ def parse_raw_file(path: Path | str) -> RawFile:
# Detect encoding: UTF-16 LE (Windows) vs ASCII
# UTF-16 LE has null bytes between characters
is_utf16 = content[1:2] == b'\x00' and content[3:4] == b'\x00'
is_utf16 = content[1:2] == b"\x00" and content[3:4] == b"\x00"
if is_utf16:
# UTF-16 LE encoding - decode header portion, find Binary marker
@ -106,7 +151,6 @@ def parse_raw_file(path: Path | str) -> RawFile:
points = 0
in_variables = False
var_count = 0
for line in header_lines:
line = line.strip()
@ -120,12 +164,17 @@ def parse_raw_file(path: Path | str) -> RawFile:
elif line.startswith("Flags:"):
flags = line[6:].strip().split()
elif line.startswith("No. Variables:"):
var_count = int(line[14:].strip())
pass # Parsed from Variables section instead
elif line.startswith("No. Points:"):
points = int(line[11:].strip())
elif line.startswith("Variables:"):
in_variables = True
elif in_variables and line and not line.startswith("Binary") and not line.startswith("Values"):
elif (
in_variables
and line
and not line.startswith("Binary")
and not line.startswith("Values")
):
# Parse variable line: "index name type"
parts = line.split()
if len(parts) >= 3:
@ -143,27 +192,36 @@ def parse_raw_file(path: Path | str) -> RawFile:
is_complex = "complex" in flags
is_stepped = "stepped" in flags
is_forward = "forward" in flags # FastAccess format
# "forward" flag indicates FastAccess format (unused for now)
if is_binary:
data = _parse_binary_data(data_bytes, n_vars, points, is_complex)
else:
data = _parse_ascii_data(data_bytes.decode("utf-8"), n_vars, points, is_complex)
# Detect run boundaries for stepped simulations
n_runs = 1
run_boundaries = None
if is_stepped and data.shape[1] > 1:
run_boundaries = _detect_run_boundaries(data[0])
n_runs = len(run_boundaries)
actual_points = data.shape[1]
return RawFile(
title=title,
date=date,
plotname=plotname,
flags=flags,
variables=variables,
points=points,
points=actual_points,
data=data,
n_runs=n_runs,
run_boundaries=run_boundaries,
)
def _parse_binary_data(
data: bytes, n_vars: int, points: int, is_complex: bool
) -> np.ndarray:
def _parse_binary_data(data: bytes, n_vars: int, points: int, is_complex: bool) -> np.ndarray:
"""Parse binary data section.
LTspice binary formats:
@ -178,7 +236,7 @@ def _parse_binary_data(
actual_points = len(data) // bytes_per_point
# Read as flat complex128 array and reshape
flat = np.frombuffer(data[:actual_points * bytes_per_point], dtype=np.complex128)
flat = np.frombuffer(data[: actual_points * bytes_per_point], dtype=np.complex128)
return flat.reshape(actual_points, n_vars).T.copy()
# Real data - detect format from data size
@ -197,12 +255,12 @@ def _parse_binary_data(
for p in range(actual_points):
# Time as double
result[0, p] = struct.unpack("<d", data[offset:offset + 8])[0]
result[0, p] = struct.unpack("<d", data[offset : offset + 8])[0]
offset += 8
# Other variables as float32
for v in range(1, n_vars):
result[v, p] = struct.unpack("<f", data[offset:offset + 4])[0]
result[v, p] = struct.unpack("<f", data[offset : offset + 4])[0]
offset += 4
return result
@ -210,14 +268,12 @@ def _parse_binary_data(
# All double format (older LTspice or specific settings)
actual_points = len(data) // bytes_per_point_double
flat = np.frombuffer(data[:actual_points * bytes_per_point_double], dtype=np.float64)
flat = np.frombuffer(data[: actual_points * bytes_per_point_double], dtype=np.float64)
# LTspice stores point-by-point
return flat.reshape(actual_points, n_vars).T.copy()
def _parse_ascii_data(
data: str, n_vars: int, points: int, is_complex: bool
) -> np.ndarray:
def _parse_ascii_data(data: str, n_vars: int, points: int, is_complex: bool) -> np.ndarray:
"""Parse ASCII data section."""
lines = data.strip().split("\n")
@ -258,3 +314,22 @@ def _parse_ascii_data(
continue
return result
def _detect_run_boundaries(x_axis: np.ndarray) -> list[int]:
"""Detect run boundaries in stepped simulation data.
In stepped data, the independent variable (time or frequency) resets
at the start of each new run. We detect this as a point where the
value decreases while dropping back to (at or below) the first value.
"""
x = np.real(x_axis) if np.iscomplexobj(x_axis) else x_axis
boundaries = [0] # First run always starts at index 0
for i in range(1, len(x)):
# Detect reset: value drops back to near the starting value
if x[i] <= x[0] and x[i] < x[i - 1]:
boundaries.append(i)
return boundaries
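The reset heuristic can be exercised on a synthetic stepped axis: three identical time sweeps concatenated, as a `.step` run would produce (sizes illustrative):

```python
import numpy as np

# Three stepped runs, each with its own time sweep restarting at 0.
time_axis = np.concatenate([np.linspace(0, 1e-3, 50)] * 3)

boundaries = [0]  # first run always starts at index 0
for idx in range(1, len(time_axis)):
    # A reset: the value drops back to (or below) the starting value.
    if time_axis[idx] <= time_axis[0] and time_axis[idx] < time_axis[idx - 1]:
        boundaries.append(idx)

# boundaries == [0, 50, 100]; slicing run 1 mirrors RawFile._run_slice
start, end = boundaries[1], boundaries[2]
run1 = time_axis[start:end]
```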


@ -19,6 +19,7 @@ from .raw_parser import RawFile, parse_raw_file
@dataclass
class SimulationResult:
"""Result of a simulation run."""
success: bool
raw_file: Path | None
log_file: Path | None
@ -47,6 +48,7 @@ async def run_simulation(
SimulationResult with status and data
"""
import time
start_time = time.monotonic()
# Validate installation
@ -121,10 +123,9 @@ async def run_simulation(
try:
stdout_bytes, stderr_bytes = await asyncio.wait_for(
process.communicate(),
timeout=timeout
process.communicate(), timeout=timeout
)
except asyncio.TimeoutError:
except TimeoutError:
process.kill()
await process.wait()
return SimulationResult(
@ -226,21 +227,33 @@ async def run_netlist(
SimulationResult with status and data
"""
import time
start_time = time.monotonic()
ok, msg = validate_installation()
if not ok:
return SimulationResult(
success=False, raw_file=None, log_file=None, raw_data=None,
error=msg, stdout="", stderr="", elapsed_seconds=0,
success=False,
raw_file=None,
log_file=None,
raw_data=None,
error=msg,
stdout="",
stderr="",
elapsed_seconds=0,
)
netlist_path = Path(netlist_path).resolve()
if not netlist_path.exists():
return SimulationResult(
success=False, raw_file=None, log_file=None, raw_data=None,
success=False,
raw_file=None,
log_file=None,
raw_data=None,
error=f"Netlist not found: {netlist_path}",
stdout="", stderr="", elapsed_seconds=0,
stdout="",
stderr="",
elapsed_seconds=0,
)
temp_dir = None
@ -277,16 +290,19 @@ async def run_netlist(
try:
stdout_bytes, stderr_bytes = await asyncio.wait_for(
process.communicate(),
timeout=timeout
process.communicate(), timeout=timeout
)
except asyncio.TimeoutError:
except TimeoutError:
process.kill()
await process.wait()
return SimulationResult(
success=False, raw_file=None, log_file=None, raw_data=None,
success=False,
raw_file=None,
log_file=None,
raw_data=None,
error=f"Simulation timed out after {timeout} seconds",
stdout="", stderr="",
stdout="",
stderr="",
elapsed_seconds=time.monotonic() - start_time,
)
@ -308,8 +324,11 @@ async def run_netlist(
success=False,
raw_file=None,
log_file=log_file if log_file.exists() else None,
raw_data=None, error=error_msg,
stdout=stdout, stderr=stderr, elapsed_seconds=elapsed,
raw_data=None,
error=error_msg,
stdout=stdout,
stderr=stderr,
elapsed_seconds=elapsed,
)
raw_data = None
@ -326,7 +345,9 @@ async def run_netlist(
log_file=log_file if log_file.exists() else None,
raw_data=raw_data,
error=parse_error,
stdout=stdout, stderr=stderr, elapsed_seconds=elapsed,
stdout=stdout,
stderr=stderr,
elapsed_seconds=elapsed,
)
finally:
pass # Keep temp files for debugging


@ -11,6 +11,7 @@ from pathlib import Path
@dataclass
class Component:
"""A component instance in the schematic."""
name: str # Instance name (e.g., R1, C1, M1)
symbol: str # Symbol name (e.g., res, cap, nmos)
x: int
@ -33,6 +34,7 @@ class Component:
@dataclass
class Wire:
"""A wire connection."""
x1: int
y1: int
x2: int
@ -42,6 +44,7 @@ class Wire:
@dataclass
class Text:
"""Text annotation or SPICE directive."""
x: int
y: int
content: str
@ -51,6 +54,7 @@ class Text:
@dataclass
class Flag:
"""A net flag/label."""
x: int
y: int
name: str
@ -60,6 +64,7 @@ class Flag:
@dataclass
class Schematic:
"""A parsed LTspice schematic."""
version: int = 4
sheet: tuple[int, int, int, int] = (1, 1, 0, 0)
components: list[Component] = field(default_factory=list)
@ -113,27 +118,23 @@ def parse_schematic(path: Path | str) -> Schematic:
elif line.startswith("SHEET"):
parts = line.split()
if len(parts) >= 5:
schematic.sheet = (
int(parts[1]), int(parts[2]),
int(parts[3]), int(parts[4])
)
schematic.sheet = (int(parts[1]), int(parts[2]), int(parts[3]), int(parts[4]))
elif line.startswith("WIRE"):
parts = line.split()
if len(parts) >= 5:
schematic.wires.append(Wire(
int(parts[1]), int(parts[2]),
int(parts[3]), int(parts[4])
))
schematic.wires.append(
Wire(int(parts[1]), int(parts[2]), int(parts[3]), int(parts[4]))
)
elif line.startswith("FLAG"):
parts = line.split()
if len(parts) >= 4:
schematic.flags.append(Flag(
int(parts[1]), int(parts[2]),
parts[3],
parts[4] if len(parts) > 4 else "0"
))
schematic.flags.append(
Flag(
int(parts[1]), int(parts[2]), parts[3], parts[4] if len(parts) > 4 else "0"
)
)
elif line.startswith("SYMBOL"):
# Save previous component before starting a new one
@ -159,11 +160,7 @@ def parse_schematic(path: Path | str) -> Schematic:
rotation = int(rot_str[1:])
current_component = Component(
name="",
symbol=symbol,
x=x, y=y,
rotation=rotation,
mirror=mirror
name="", symbol=symbol, x=x, y=y, rotation=rotation, mirror=mirror
)
else:
current_component = None
@ -184,8 +181,7 @@ def parse_schematic(path: Path | str) -> Schematic:
match = re.match(r"TEXT\s+(-?\d+)\s+(-?\d+)\s+(\w+)\s+(\d+)\s*(.*)", line)
if match:
x, y = int(match.group(1)), int(match.group(2))
align = match.group(3)
size = int(match.group(4))
# groups 3 (align) and 4 (size) are parsed but not stored
content = match.group(5) if match.group(5) else ""
# Check for multi-line text (continuation with \n or actual newlines)
@ -219,7 +215,9 @@ def write_schematic(schematic: Schematic, path: Path | str) -> None:
lines = []
lines.append(f"Version {schematic.version}")
lines.append(f"SHEET {schematic.sheet[0]} {schematic.sheet[1]} {schematic.sheet[2]} {schematic.sheet[3]}")
lines.append(
f"SHEET {schematic.sheet[0]} {schematic.sheet[1]} {schematic.sheet[2]} {schematic.sheet[3]}"
)
# Write wires
for wire in schematic.wires:
@ -251,10 +249,7 @@ def write_schematic(schematic: Schematic, path: Path | str) -> None:
def modify_component_value(
path: Path | str,
component_name: str,
new_value: str,
output_path: Path | str | None = None
path: Path | str, component_name: str, new_value: str, output_path: Path | str | None = None
) -> Schematic:
"""Modify a component's value in a schematic.
@ -276,8 +271,7 @@ def modify_component_value(
if not comp:
available = [c.name for c in schematic.components]
raise ValueError(
f"Component '{component_name}' not found. "
f"Available components: {', '.join(available)}"
f"Component '{component_name}' not found. Available components: {', '.join(available)}"
)
comp.value = new_value

src/mcltspice/server.py (new file, 2757 lines)

File diff suppressed because it is too large

src/mcltspice/stability.py (new file, 167 lines)

@ -0,0 +1,167 @@
"""Stability analysis for AC simulation loop gain data (gain/phase margins)."""
import numpy as np
def _interp_crossing(x: np.ndarray, y: np.ndarray, threshold: float) -> list[float]:
"""Find all x values where y crosses threshold, using linear interpolation."""
crossings = []
for i in range(len(y) - 1):
if (y[i] - threshold) * (y[i + 1] - threshold) < 0:
dy = y[i + 1] - y[i]
if abs(dy) < 1e-30:
crossings.append(float(x[i]))
else:
frac = (threshold - y[i]) / dy
crossings.append(float(x[i] + frac * (x[i + 1] - x[i])))
return crossings
def _interp_y_at_crossing(
x: np.ndarray, y: np.ndarray, ref: np.ndarray, threshold: float
) -> list[tuple[float, float]]:
"""Find interpolated (x, y) pairs where ref crosses threshold."""
results = []
for i in range(len(ref) - 1):
if (ref[i] - threshold) * (ref[i + 1] - threshold) < 0:
dref = ref[i + 1] - ref[i]
if abs(dref) < 1e-30:
results.append((float(x[i]), float(y[i])))
else:
frac = (threshold - ref[i]) / dref
xi = float(x[i] + frac * (x[i + 1] - x[i]))
yi = float(y[i] + frac * (y[i + 1] - y[i]))
results.append((xi, yi))
return results
def compute_gain_margin(frequency: np.ndarray, loop_gain_complex: np.ndarray) -> dict:
"""Compute gain margin from loop gain T(s).
Args:
frequency: Frequency array in Hz (real-valued)
loop_gain_complex: Complex loop gain T(jw)
Returns:
Dict with gain_margin_db, phase_crossover_freq_hz, is_stable
"""
if len(frequency) < 2 or len(loop_gain_complex) < 2:
return {
"gain_margin_db": None,
"phase_crossover_freq_hz": None,
"is_stable": None,
}
freq = np.real(frequency).astype(np.float64)
mag_db = 20.0 * np.log10(np.maximum(np.abs(loop_gain_complex), 1e-30))
phase_deg = np.degrees(np.unwrap(np.angle(loop_gain_complex)))
# Phase crossover: where phase crosses -180 degrees
# Find magnitude at that crossing via interpolation
hits = _interp_y_at_crossing(freq, mag_db, phase_deg, -180.0)
if not hits:
# No -180 deg phase crossover => gain margin reported as infinite
# (valid for simple, minimum-phase loops with a single crossing)
return {
"gain_margin_db": float("inf"),
"phase_crossover_freq_hz": None,
"is_stable": True,
}
# Use the first phase crossover
crossover_freq, mag_at_crossover = hits[0]
gain_margin = -mag_at_crossover
return {
"gain_margin_db": float(gain_margin),
"phase_crossover_freq_hz": float(crossover_freq),
"is_stable": gain_margin > 0,
}
def compute_phase_margin(frequency: np.ndarray, loop_gain_complex: np.ndarray) -> dict:
"""Compute phase margin from loop gain T(s).
Args:
frequency: Frequency array in Hz (real-valued)
loop_gain_complex: Complex loop gain T(jw)
Returns:
Dict with phase_margin_deg, gain_crossover_freq_hz, is_stable
"""
if len(frequency) < 2 or len(loop_gain_complex) < 2:
return {
"phase_margin_deg": None,
"gain_crossover_freq_hz": None,
"is_stable": None,
}
freq = np.real(frequency).astype(np.float64)
mag_db = 20.0 * np.log10(np.maximum(np.abs(loop_gain_complex), 1e-30))
phase_deg = np.degrees(np.unwrap(np.angle(loop_gain_complex)))
# Gain crossover: where magnitude crosses 0 dB
# Find phase at that crossing via interpolation
hits = _interp_y_at_crossing(freq, phase_deg, mag_db, 0.0)
if not hits:
# No gain crossover. If gain is always below 0 dB, system is stable.
is_stable = bool(np.all(mag_db < 0))
return {
"phase_margin_deg": float("inf") if is_stable else None,
"gain_crossover_freq_hz": None,
"is_stable": is_stable,
}
# Use the first gain crossover
crossover_freq, phase_at_crossover = hits[0]
phase_margin = 180.0 + phase_at_crossover
return {
"phase_margin_deg": float(phase_margin),
"gain_crossover_freq_hz": float(crossover_freq),
"is_stable": phase_margin > 0,
}
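For a single-pole loop gain T(jf) = A/(1 + jf/f0), the phase at unity gain approaches -90°, so the phase margin should come out near 90°. A standalone check using the same interpolate-at-crossing approach (DC gain and pole frequency are illustrative):

```python
import numpy as np

f = np.logspace(1, 7, 2001)          # 10 Hz .. 10 MHz
T = 100.0 / (1 + 1j * f / 1_000.0)   # DC gain 100, pole at 1 kHz

mag_db = 20.0 * np.log10(np.abs(T))
phase_deg = np.degrees(np.unwrap(np.angle(T)))

# Interpolate the phase at the first 0 dB crossing of |T|.
k = int(np.argmax(mag_db < 0))       # first index below 0 dB (k >= 1 here)
frac = (0.0 - mag_db[k - 1]) / (mag_db[k] - mag_db[k - 1])
phase_xover = phase_deg[k - 1] + frac * (phase_deg[k] - phase_deg[k - 1])
phase_margin = 180.0 + phase_xover
# crossover near 100 kHz, phase ~ -89.4 deg, phase_margin ~ 90.6 deg
```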
def compute_stability_metrics(frequency: np.ndarray, loop_gain_complex: np.ndarray) -> dict:
"""Compute comprehensive stability metrics including Bode plot data.
Args:
frequency: Frequency array in Hz (real-valued)
loop_gain_complex: Complex loop gain T(jw)
Returns:
Dict with gain_margin, phase_margin, bode data, crossover
frequencies, and overall stability assessment
"""
if len(frequency) < 2 or len(loop_gain_complex) < 2:
return {
"gain_margin": compute_gain_margin(frequency, loop_gain_complex),
"phase_margin": compute_phase_margin(frequency, loop_gain_complex),
"bode": {"frequency_hz": [], "magnitude_db": [], "phase_deg": []},
"is_stable": None,
}
freq = np.real(frequency).astype(np.float64)
mag_db = 20.0 * np.log10(np.maximum(np.abs(loop_gain_complex), 1e-30))
phase_deg = np.degrees(np.unwrap(np.angle(loop_gain_complex)))
gm = compute_gain_margin(frequency, loop_gain_complex)
pm = compute_phase_margin(frequency, loop_gain_complex)
# Overall stability: both margins must be positive (when defined)
gm_ok = gm["is_stable"] is not False
pm_ok = pm["is_stable"] is not False
is_stable = gm_ok and pm_ok
return {
"gain_margin": gm,
"phase_margin": pm,
"bode": {
"frequency_hz": freq.tolist(),
"magnitude_db": mag_db.tolist(),
"phase_deg": phase_deg.tolist(),
},
"is_stable": is_stable,
}

src/mcltspice/svg_plot.py Normal file

@ -0,0 +1,533 @@
"""Pure SVG waveform plot generation -- no matplotlib dependency.
Generates complete <svg> XML strings for time-domain, Bode, and spectrum plots
suitable for embedding in HTML or saving as standalone .svg files.
"""
from __future__ import annotations
import math
from html import escape as _html_escape
import numpy as np
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
_ENG_PREFIXES = [
(1e18, "E"),
(1e15, "P"),
(1e12, "T"),
(1e9, "G"),
(1e6, "M"),
(1e3, "k"),
(1e0, ""),
(1e-3, "m"),
(1e-6, "\u00b5"), # micro sign
(1e-9, "n"),
(1e-12, "p"),
(1e-15, "f"),
(1e-18, "a"),
]
_FREQ_PREFIXES = [
(1e9, "G"),
(1e6, "M"),
(1e3, "k"),
(1e0, ""),
(1e-3, "m"),
]
def _svg_escape(text: str) -> str:
"""Escape special characters for embedding inside SVG XML."""
return _html_escape(str(text), quote=True)
def _format_eng(value: float, unit: str = "") -> str:
"""Format *value* with an engineering prefix and optional *unit*."""
if value == 0:
return f"0{unit}"
abs_val = abs(value)
for threshold, prefix in _ENG_PREFIXES:
if abs_val >= threshold * 0.9999:
scaled = value / threshold
# Trim trailing zeros but keep at least one digit
txt = f"{scaled:.3g}"
return f"{txt}{prefix}{unit}"
# Extremely small -- fall back to scientific
return f"{value:.2e}{unit}"
def _format_freq(hz: float) -> str:
"""Format a frequency value for axis labels (1, 10, 1k, 1M, etc.)."""
if hz == 0:
return "0"
abs_hz = abs(hz)
for threshold, prefix in _FREQ_PREFIXES:
if abs_hz >= threshold * 0.9999:
scaled = hz / threshold
txt = f"{scaled:.4g}"
# Strip unnecessary trailing zeros after decimal
if "." in txt:
txt = txt.rstrip("0").rstrip(".")
return f"{txt}{prefix}"
return f"{hz:.2e}"
def _nice_ticks(vmin: float, vmax: float, n_ticks: int = 5) -> list[float]:
"""Compute human-friendly tick values spanning [vmin, vmax].
Returns a list of round numbers that cover the data range.
"""
if vmin == vmax:
return [vmin]
if not math.isfinite(vmin) or not math.isfinite(vmax):
return [0.0]
raw_step = (vmax - vmin) / max(n_ticks - 1, 1)
if raw_step == 0:
return [vmin]
magnitude = 10 ** math.floor(math.log10(abs(raw_step)))
residual = raw_step / magnitude
# Snap to a "nice" step: 1, 2, 2.5, 5, 10
if residual <= 1.0:
nice_step = magnitude
elif residual <= 2.0:
nice_step = 2 * magnitude
elif residual <= 2.5:
nice_step = 2.5 * magnitude
elif residual <= 5.0:
nice_step = 5 * magnitude
else:
nice_step = 10 * magnitude
tick_min = math.floor(vmin / nice_step) * nice_step
tick_max = math.ceil(vmax / nice_step) * nice_step
ticks: list[float] = []
t = tick_min
while t <= tick_max + nice_step * 0.001:
ticks.append(round(t, 12))
t += nice_step
return ticks
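The 1 / 2 / 2.5 / 5 / 10 snapping can be seen concretely: for a 0–10 range with five ticks the raw step is 2.5, which is already "nice". A minimal re-sketch (hypothetical `nice_ticks`, minus the edge-case guards above):

```python
import math

def nice_ticks(vmin: float, vmax: float, n_ticks: int = 5) -> list[float]:
    # Same 1 / 2 / 2.5 / 5 / 10 snapping as above
    raw_step = (vmax - vmin) / max(n_ticks - 1, 1)
    magnitude = 10 ** math.floor(math.log10(abs(raw_step)))
    residual = raw_step / magnitude
    for bound, mult in ((1.0, 1), (2.0, 2), (2.5, 2.5), (5.0, 5)):
        if residual <= bound:
            step = mult * magnitude
            break
    else:
        step = 10 * magnitude
    lo = math.floor(vmin / step) * step
    hi = math.ceil(vmax / step) * step
    ticks, t = [], lo
    while t <= hi + step * 0.001:
        ticks.append(round(t, 12))
        t += step
    return ticks
```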
def _log_ticks(vmin: float, vmax: float) -> list[float]:
"""Generate tick values at powers of 10 spanning [vmin, vmax] (linear values)."""
if vmin <= 0:
vmin = 1.0
if vmax <= vmin:
vmax = vmin * 10
low = math.floor(math.log10(vmin))
high = math.ceil(math.log10(vmax))
ticks = [10.0**i for i in range(low, high + 1)]
# Filter to range
return [t for t in ticks if vmin * 0.9999 <= t <= vmax * 1.0001]
def _data_extent(arr: np.ndarray, pad_frac: float = 0.05) -> tuple[float, float]:
"""Return (min, max) of *arr* with *pad_frac* padding on each side."""
if len(arr) == 0:
return (0.0, 1.0)
lo, hi = float(np.nanmin(arr)), float(np.nanmax(arr))
if lo == hi:
lo -= 1.0
hi += 1.0
span = hi - lo
return (lo - span * pad_frac, hi + span * pad_frac)
# ---------------------------------------------------------------------------
# Core SVG building blocks
# ---------------------------------------------------------------------------
_FONT = "system-ui, -apple-system, sans-serif"
def _build_path_d(
xs: np.ndarray,
ys: np.ndarray,
x_min: float,
x_max: float,
y_min: float,
y_max: float,
plot_x: float,
plot_y: float,
plot_w: float,
plot_h: float,
log_x: bool = False,
) -> str:
"""Convert data arrays into an SVG path *d* attribute string."""
if len(xs) == 0:
return ""
# Map data -> pixel coords
if log_x:
safe_xs = np.clip(xs, max(x_min, 1e-30), None)
lx = np.log10(safe_xs)
lx_min = math.log10(max(x_min, 1e-30))
lx_max = math.log10(max(x_max, 1e-30))
denom_x = lx_max - lx_min if lx_max != lx_min else 1.0
px = plot_x + (lx - lx_min) / denom_x * plot_w
else:
denom_x = x_max - x_min if x_max != x_min else 1.0
px = plot_x + (xs - x_min) / denom_x * plot_w
denom_y = y_max - y_min if y_max != y_min else 1.0
# Y axis is inverted in SVG (top = 0)
py = plot_y + plot_h - (ys - y_min) / denom_y * plot_h
parts = [f"M{px[0]:.2f},{py[0]:.2f}"]
for i in range(1, len(px)):
parts.append(f"L{px[i]:.2f},{py[i]:.2f}")
return "".join(parts)
def _render_subplot(
*,
xs: np.ndarray,
ys: np.ndarray,
x_min: float,
x_max: float,
y_min: float,
y_max: float,
plot_x: float,
plot_y: float,
plot_w: float,
plot_h: float,
log_x: bool,
xlabel: str,
ylabel: str,
title: str | None,
stroke: str,
x_ticks: list[float] | None = None,
y_ticks: list[float] | None = None,
show_x_labels: bool = True,
) -> str:
"""Render a single subplot region as a block of SVG elements."""
lines: list[str] = []
# Compute ticks
if y_ticks is None:
y_ticks = _nice_ticks(y_min, y_max, n_ticks=6)
if x_ticks is None:
if log_x:
x_ticks = _log_ticks(x_min, x_max)
else:
x_ticks = _nice_ticks(x_min, x_max, n_ticks=6)
# Background
lines.append(
f'<rect x="{plot_x}" y="{plot_y}" width="{plot_w}" height="{plot_h}" '
f'fill="white" stroke="#ccc" stroke-width="1"/>'
)
# Grid + Y tick labels
denom_y = y_max - y_min if y_max != y_min else 1.0
for tv in y_ticks:
if tv < y_min or tv > y_max:
continue
py = plot_y + plot_h - (tv - y_min) / denom_y * plot_h
lines.append(
f'<line x1="{plot_x}" y1="{py:.1f}" x2="{plot_x + plot_w}" y2="{py:.1f}" '
f'stroke="#ddd" stroke-width="0.5" stroke-dasharray="4,3"/>'
)
label = _format_eng(tv)
lines.append(
f'<text x="{plot_x - 8}" y="{py:.1f}" text-anchor="end" '
f'dominant-baseline="middle" font-size="11" font-family="{_FONT}" '
f'fill="#444">{_svg_escape(label)}</text>'
)
# Grid + X tick labels
if log_x:
lx_min = math.log10(max(x_min, 1e-30))
lx_max = math.log10(max(x_max, 1e-30))
denom_x = lx_max - lx_min if lx_max != lx_min else 1.0
else:
denom_x = x_max - x_min if x_max != x_min else 1.0
for tv in x_ticks:
if log_x:
if tv <= 0:
continue
frac = (math.log10(tv) - lx_min) / denom_x
else:
frac = (tv - x_min) / denom_x
if frac < -0.001 or frac > 1.001:
continue
px = plot_x + frac * plot_w
lines.append(
f'<line x1="{px:.1f}" y1="{plot_y}" x2="{px:.1f}" y2="{plot_y + plot_h}" '
f'stroke="#ddd" stroke-width="0.5" stroke-dasharray="4,3"/>'
)
if show_x_labels:
if log_x:
label = _format_freq(tv)
else:
label = _format_eng(tv)
lines.append(
f'<text x="{px:.1f}" y="{plot_y + plot_h + 16}" text-anchor="middle" '
f'font-size="11" font-family="{_FONT}" fill="#444">'
f'{_svg_escape(label)}</text>'
)
# Data path
d = _build_path_d(xs, ys, x_min, x_max, y_min, y_max, plot_x, plot_y, plot_w, plot_h, log_x)
if d:
lines.append(
f'<path d="{d}" fill="none" stroke="{stroke}" stroke-width="1.5" '
f'stroke-linejoin="round" stroke-linecap="round"/>'
)
# Title
if title:
lines.append(
f'<text x="{plot_x + plot_w / 2}" y="{plot_y - 12}" text-anchor="middle" '
f'font-size="14" font-weight="600" font-family="{_FONT}" fill="#111">'
f'{_svg_escape(title)}</text>'
)
# Axis labels
if ylabel:
mid_y = plot_y + plot_h / 2
lines.append(
f'<text x="{plot_x - 55}" y="{mid_y}" text-anchor="middle" '
f'font-size="12" font-family="{_FONT}" fill="#333" '
f'transform="rotate(-90, {plot_x - 55}, {mid_y})">'
f'{_svg_escape(ylabel)}</text>'
)
if xlabel and show_x_labels:
lines.append(
f'<text x="{plot_x + plot_w / 2}" y="{plot_y + plot_h + 42}" '
f'text-anchor="middle" font-size="12" font-family="{_FONT}" fill="#333">'
f'{_svg_escape(xlabel)}</text>'
)
return "\n".join(lines)
def _wrap_svg(inner: str, width: int, height: int) -> str:
"""Wrap inner SVG elements in a root <svg> tag with white background."""
return (
f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}" '
f'viewBox="0 0 {width} {height}">\n'
f'<rect width="{width}" height="{height}" fill="white"/>\n'
f'{inner}\n'
f'</svg>'
)
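Because the module builds SVG by string concatenation rather than via an XML library, it is worth confirming the wrapped output is well-formed XML. A standalone check mirroring the `_wrap_svg` layout (not importing the module):

```python
import xml.etree.ElementTree as ET

width, height = 800, 400
inner = '<path d="M0,0L10,10" fill="none" stroke="#2563eb"/>'
svg = (
    f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}" '
    f'viewBox="0 0 {width} {height}">\n'
    f'<rect width="{width}" height="{height}" fill="white"/>\n'
    f'{inner}\n</svg>'
)
root = ET.fromstring(svg)  # raises ParseError if the markup is malformed
```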
# ---------------------------------------------------------------------------
# Public API
# ---------------------------------------------------------------------------
def plot_timeseries(
time: list[float] | np.ndarray,
values: list[float] | np.ndarray,
title: str = "Time Domain",
ylabel: str = "Voltage (V)",
width: int = 800,
height: int = 400,
) -> str:
"""Plot a time-domain signal. Linear X axis (time), linear Y axis.
Returns a complete ``<svg>`` XML string.
"""
t = np.asarray(time, dtype=float).ravel()
v = np.asarray(values, dtype=float).ravel()
n = min(len(t), len(v))
t, v = t[:n], v[:n]
margin_l, margin_r, margin_t, margin_b = 80, 20, 40, 60
plot_x = float(margin_l)
plot_y = float(margin_t)
plot_w = float(width - margin_l - margin_r)
plot_h = float(height - margin_t - margin_b)
x_min, x_max = _data_extent(t)
y_min, y_max = _data_extent(v)
inner = _render_subplot(
xs=t,
ys=v,
x_min=x_min,
x_max=x_max,
y_min=y_min,
y_max=y_max,
plot_x=plot_x,
plot_y=plot_y,
plot_w=plot_w,
plot_h=plot_h,
log_x=False,
xlabel="Time (s)",
ylabel=ylabel,
title=title,
stroke="#2563eb",
)
return _wrap_svg(inner, width, height)
def plot_bode(
freq: list[float] | np.ndarray,
mag_db: list[float] | np.ndarray,
phase_deg: list[float] | np.ndarray | None = None,
title: str = "Bode Plot",
width: int = 800,
height: int = 500,
) -> str:
"""Plot frequency response (Bode plot). Log10 X, linear Y (dB).
If *phase_deg* is provided, the SVG contains two stacked subplots:
magnitude on top, phase on the bottom.
Returns a complete ``<svg>`` XML string.
"""
f = np.asarray(freq, dtype=float).ravel()
m = np.asarray(mag_db, dtype=float).ravel()
n = min(len(f), len(m))
f, m = f[:n], m[:n]
has_phase = phase_deg is not None
if has_phase:
p = np.asarray(phase_deg, dtype=float).ravel()
n = min(len(f), len(p))
f, m, p = f[:n], m[:n], p[:n]
margin_l, margin_r, margin_t, margin_b = 80, 20, 40, 60
plot_x = float(margin_l)
plot_w = float(width - margin_l - margin_r)
# Frequency range (shared)
f_min, f_max = _data_extent(f[f > 0] if np.any(f > 0) else f, pad_frac=0.0)
if f_min <= 0:
f_min = 1.0
freq_ticks = _log_ticks(f_min, f_max)
if has_phase:
# Split into two subplots with a gap
gap = 30
available_h = height - margin_t - margin_b - gap
mag_h = available_h * 0.55
phase_h = available_h * 0.45
mag_y = float(margin_t)
phase_y = float(margin_t + mag_h + gap)
m_min, m_max = _data_extent(m)
p_min, p_max = _data_extent(p)
mag_svg = _render_subplot(
xs=f,
ys=m,
x_min=f_min,
x_max=f_max,
y_min=m_min,
y_max=m_max,
plot_x=plot_x,
plot_y=mag_y,
plot_w=plot_w,
plot_h=mag_h,
log_x=True,
xlabel="",
ylabel="Magnitude (dB)",
title=title,
stroke="#2563eb",
x_ticks=freq_ticks,
show_x_labels=False,
)
phase_svg = _render_subplot(
xs=f,
ys=p,
x_min=f_min,
x_max=f_max,
y_min=p_min,
y_max=p_max,
plot_x=plot_x,
plot_y=phase_y,
plot_w=plot_w,
plot_h=phase_h,
log_x=True,
xlabel="Frequency (Hz)",
ylabel="Phase (deg)",
title=None,
stroke="#dc2626",
x_ticks=freq_ticks,
)
return _wrap_svg(mag_svg + "\n" + phase_svg, width, height)
else:
plot_y = float(margin_t)
plot_h = float(height - margin_t - margin_b)
m_min, m_max = _data_extent(m)
inner = _render_subplot(
xs=f,
ys=m,
x_min=f_min,
x_max=f_max,
y_min=m_min,
y_max=m_max,
plot_x=plot_x,
plot_y=plot_y,
plot_w=plot_w,
plot_h=plot_h,
log_x=True,
xlabel="Frequency (Hz)",
ylabel="Magnitude (dB)",
title=title,
stroke="#2563eb",
x_ticks=freq_ticks,
)
return _wrap_svg(inner, width, height)
def plot_spectrum(
freq: list[float] | np.ndarray,
mag_db: list[float] | np.ndarray,
title: str = "FFT Spectrum",
width: int = 800,
height: int = 400,
) -> str:
"""Plot an FFT spectrum. Log10 X axis, linear Y axis (dB).
Returns a complete ``<svg>`` XML string.
"""
f = np.asarray(freq, dtype=float).ravel()
m = np.asarray(mag_db, dtype=float).ravel()
n = min(len(f), len(m))
f, m = f[:n], m[:n]
margin_l, margin_r, margin_t, margin_b = 80, 20, 40, 60
plot_x = float(margin_l)
plot_y = float(margin_t)
plot_w = float(width - margin_l - margin_r)
plot_h = float(height - margin_t - margin_b)
f_min, f_max = _data_extent(f[f > 0] if np.any(f > 0) else f, pad_frac=0.0)
if f_min <= 0:
f_min = 1.0
m_min, m_max = _data_extent(m)
inner = _render_subplot(
xs=f,
ys=m,
x_min=f_min,
x_max=f_max,
y_min=m_min,
y_max=m_max,
plot_x=plot_x,
plot_y=plot_y,
plot_w=plot_w,
plot_h=plot_h,
log_x=True,
xlabel="Frequency (Hz)",
ylabel="Magnitude (dB)",
title=title,
stroke="#2563eb",
)
return _wrap_svg(inner, width, height)

src/mcltspice/touchstone.py Normal file

@ -0,0 +1,254 @@
"""Parse Touchstone (.s1p, .s2p, .snp) S-parameter files for LTspice."""
import re
from dataclasses import dataclass, field
from pathlib import Path
import numpy as np
# Frequency unit multipliers to Hz
_FREQ_MULTIPLIERS: dict[str, float] = {
"HZ": 1.0,
"KHZ": 1e3,
"MHZ": 1e6,
"GHZ": 1e9,
}
@dataclass
class TouchstoneData:
"""Parsed contents of a Touchstone file.
All frequencies are stored in Hz regardless of the original file's
unit. The ``data`` array holds complex-valued parameters with shape
``(n_freq, n_ports, n_ports)``.
"""
filename: str
n_ports: int
freq_unit: str
parameter_type: str # S, Y, Z, H, G
format_type: str # MA, DB, RI
reference_impedance: float
frequencies: np.ndarray # shape (n_freq,), always in Hz
data: np.ndarray # shape (n_freq, n_ports, n_ports), complex128
comments: list[str] = field(default_factory=list)
def _to_complex(v1: float, v2: float, fmt: str) -> complex:
"""Convert a value pair to complex according to the format type."""
if fmt == "RI":
return complex(v1, v2)
elif fmt == "MA":
mag = v1
angle_rad = np.deg2rad(v2)
return complex(mag * np.cos(angle_rad), mag * np.sin(angle_rad))
elif fmt == "DB":
mag = 10.0 ** (v1 / 20.0)
angle_rad = np.deg2rad(v2)
return complex(mag * np.cos(angle_rad), mag * np.sin(angle_rad))
else:
raise ValueError(f"Unknown format type: {fmt!r}")
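The three Touchstone value formats are just different encodings of the same complex number: RI is rectangular, MA is polar with degrees, and DB is polar with the magnitude in dB. A compact equivalent (hypothetical `to_complex` using `cmath.rect`):

```python
import cmath

def to_complex(v1: float, v2: float, fmt: str) -> complex:
    # RI: (real, imag); MA: (magnitude, angle in degrees);
    # DB: (20*log10(magnitude), angle in degrees)
    if fmt == "RI":
        return complex(v1, v2)
    mag = v1 if fmt == "MA" else 10.0 ** (v1 / 20.0)
    return cmath.rect(mag, v2 * cmath.pi / 180.0)
```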
def _detect_ports(path: Path) -> int:
"""Detect port count from the file extension (.s<n>p)."""
suffix = path.suffix.lower()
m = re.match(r"\.s(\d+)p$", suffix)
if not m:
raise ValueError(
f"Cannot determine port count from extension {suffix!r}. "
"Expected .s1p, .s2p, .s3p, .s4p, etc."
)
return int(m.group(1))
def parse_touchstone(path: str | Path) -> TouchstoneData:
"""Parse a Touchstone file into a TouchstoneData object.
Handles .s1p through .s4p (and beyond), all three format types
(MA, DB, RI), all frequency units, and continuation lines used
by files with more than two ports.
Args:
path: Path to the Touchstone file.
Returns:
A TouchstoneData with frequencies converted to Hz and
parameters stored as complex128.
"""
path = Path(path)
n_ports = _detect_ports(path)
# Defaults per Touchstone spec
freq_unit = "GHZ"
param_type = "S"
fmt = "MA"
ref_impedance = 50.0
comments: list[str] = []
data_lines: list[str] = []
option_found = False
with path.open() as fh:
for raw_line in fh:
line = raw_line.strip()
# Comment lines
if line.startswith("!"):
comments.append(line[1:].strip())
continue
# Option line (only the first one is used)
if line.startswith("#"):
if not option_found:
option_found = True
tokens = line[1:].split()
# Parse tokens case-insensitively
i = 0
while i < len(tokens):
tok = tokens[i].upper()
if tok in _FREQ_MULTIPLIERS:
freq_unit = tok
elif tok in ("S", "Y", "Z", "H", "G"):
param_type = tok
elif tok in ("MA", "DB", "RI"):
fmt = tok
elif tok == "R" and i + 1 < len(tokens):
i += 1
ref_impedance = float(tokens[i])
i += 1
continue
# Skip blank lines
if not line:
continue
# Inline comments after data (some files use !)
if "!" in line:
line = line[: line.index("!")]
data_lines.append(line)
# Compute the expected number of value pairs per frequency point.
# Each frequency point has n_ports * n_ports parameters, each
# consisting of two floats.
values_per_freq = n_ports * n_ports * 2 # pairs * 2 values each
# Flatten all data tokens
all_tokens: list[float] = []
for dl in data_lines:
all_tokens.extend(float(t) for t in dl.split())
# Each frequency row starts with the frequency value, followed by
# values_per_freq data values. For n_ports <= 2, everything fits
# on one line. For n_ports > 2, continuation lines are used and
# don't repeat the frequency.
stride = 1 + values_per_freq # freq + data
if len(all_tokens) % stride != 0:
raise ValueError(
f"Token count {len(all_tokens)} is not a multiple of "
f"expected stride {stride} for a {n_ports}-port file."
)
n_freq = len(all_tokens) // stride
freq_mult = _FREQ_MULTIPLIERS[freq_unit]
frequencies = np.empty(n_freq, dtype=np.float64)
data = np.empty((n_freq, n_ports, n_ports), dtype=np.complex128)
for k in range(n_freq):
offset = k * stride
frequencies[k] = all_tokens[offset] * freq_mult
        # Data values come in pairs: (v1, v2) per parameter. Note the
        # Touchstone spec stores 2-port data in S11 S21 S12 S22 order;
        # every other port count is row-major (S11 S12 ... SNN).
        idx = offset + 1
        if n_ports == 2:
            order = [(0, 0), (1, 0), (0, 1), (1, 1)]
        else:
            order = [(r, c) for r in range(n_ports) for c in range(n_ports)]
        for row, col in order:
            v1 = all_tokens[idx]
            v2 = all_tokens[idx + 1]
            data[k, row, col] = _to_complex(v1, v2, fmt)
            idx += 2
return TouchstoneData(
filename=path.name,
n_ports=n_ports,
freq_unit=freq_unit,
parameter_type=param_type,
format_type=fmt,
reference_impedance=ref_impedance,
frequencies=frequencies,
data=data,
comments=comments,
)
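The file layout the parser handles can be seen on a tiny one-port example: a comment line, the `#` option line, then one `freq mag angle` triple per row. A condensed sketch of just the 1-port MA case (hedged, this is not the module's parser):

```python
import math
import pathlib
import tempfile

S1P = "! demo file\n# MHz S MA R 50\n100 0.5 -45\n200 0.25 -90\n"

path = pathlib.Path(tempfile.mkdtemp()) / "demo.s1p"
path.write_text(S1P)

rows = []
for line in path.read_text().splitlines():
    line = line.split("!")[0].strip()   # drop comments
    if not line or line.startswith("#"):
        continue                        # skip blanks and the option line
    f_mhz, mag, ang = (float(t) for t in line.split())
    s11 = complex(mag * math.cos(math.radians(ang)),
                  mag * math.sin(math.radians(ang)))
    rows.append((f_mhz * 1e6, s11))     # frequencies converted to Hz
```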
def get_s_parameter(data: TouchstoneData, i: int, j: int) -> tuple[np.ndarray, np.ndarray]:
"""Extract a single S-parameter across all frequencies.
Args:
data: Parsed Touchstone data.
i: Row index (1-based, as in S(i,j)).
j: Column index (1-based).
Returns:
Tuple of (frequencies_hz, complex_values) where both are 1-D
numpy arrays.
"""
if i < 1 or i > data.n_ports:
raise IndexError(f"Row index {i} out of range for {data.n_ports}-port data")
if j < 1 or j > data.n_ports:
raise IndexError(f"Column index {j} out of range for {data.n_ports}-port data")
return data.frequencies.copy(), data.data[:, i - 1, j - 1].copy()
def s_param_to_db(complex_values: np.ndarray) -> np.ndarray:
"""Convert complex S-parameter values to decibels.
Computes 20 * log10(|S|), flooring magnitudes at -300 dB to avoid
log-of-zero warnings.
Args:
complex_values: Array of complex S-parameter values.
Returns:
Magnitude in dB as a real-valued numpy array.
"""
magnitude = np.abs(complex_values)
return 20.0 * np.log10(np.maximum(magnitude, 1e-15))
def generate_ltspice_subcircuit(touchstone_data: TouchstoneData, name: str) -> str:
"""Generate an LTspice-compatible subcircuit wrapping S-parameter data.
LTspice can reference Touchstone files from within a subcircuit
using the ``.net`` directive. This function produces a ``.sub``
file body that instantiates the S-parameter block.
Args:
touchstone_data: Parsed Touchstone data.
name: Subcircuit name (used in .SUBCKT and filename references).
Returns:
A string containing the complete subcircuit definition.
"""
td = touchstone_data
n = td.n_ports
# Build port list: port1, port2, ..., portN, plus a reference node
port_names = [f"port{k}" for k in range(1, n + 1)]
port_list = " ".join(port_names)
lines: list[str] = []
lines.append(f"* LTspice subcircuit for {td.filename}")
lines.append(f"* {n}-port {td.parameter_type}-parameters, Z0={td.reference_impedance} Ohm")
lines.append(
f"* Frequency range: {td.frequencies[0]:.6g} Hz "
f"to {td.frequencies[-1]:.6g} Hz "
f"({len(td.frequencies)} points)"
)
lines.append(f".SUBCKT {name} {port_list} ref")
lines.append(f".net {td.filename} {port_list} ref")
lines.append(f".ends {name}")
return "\n".join(lines) + "\n"


@ -0,0 +1,299 @@
"""Evaluate mathematical expressions on waveform data.
Supports expressions like "V(out) * I(R1)", "V(out) / V(in)",
"20*log10(abs(V(out)))", etc. Uses a safe recursive-descent parser
instead of Python's eval/exec.
"""
from __future__ import annotations
import re
from dataclasses import dataclass
from enum import Enum, auto
import numpy as np
from .raw_parser import RawFile
# -- Tokenizer ---------------------------------------------------------------
class _TokenType(Enum):
NUMBER = auto()
SIGNAL = auto() # e.g., V(out), I(R1), time
FUNC = auto() # abs, sqrt, log10, dB
PLUS = auto()
MINUS = auto()
STAR = auto()
SLASH = auto()
LPAREN = auto()
RPAREN = auto()
EOF = auto()
@dataclass
class _Token:
type: _TokenType
value: str
# Functions we support (case-insensitive lookup)
_FUNCTIONS = {"abs", "sqrt", "log10", "db"}
# Regex pieces for the tokenizer
_NUMBER_RE = re.compile(r"[0-9]*\.?[0-9]+(?:[eE][+-]?[0-9]+)?")
# Signal names: either V(...), I(...) style or bare identifiers like "time".
# Matches simple argument lists like V(N001) and I(R1); the [^)]* body
# stops at the first ')', so nested parentheses are not supported.
_SIGNAL_RE = re.compile(r"[A-Za-z_]\w*\([^)]*\)")
_IDENT_RE = re.compile(r"[A-Za-z_]\w*")
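The signal regex grabs `name(args)` as one chunk; whether that chunk becomes a FUNC or a SIGNAL token then depends only on the name before the paren. A small demo of that split (hypothetical `classify` helper using the same patterns):

```python
import re

SIGNAL_RE = re.compile(r"[A-Za-z_]\w*\([^)]*\)")
FUNCTIONS = {"abs", "sqrt", "log10", "db"}

def classify(s: str, i: int = 0):
    # Returns ("FUNC", name) or ("SIGNAL", full_match) for s[i:]
    m = SIGNAL_RE.match(s, i)
    if not m:
        return None
    full = m.group()
    name = full[: full.index("(")].lower()
    return ("FUNC", name) if name in FUNCTIONS else ("SIGNAL", full)
```

Note that for `dB(V(out))` the regex match stops at the first `)`, which is fine: only the function name matters, and the tokenizer then backs up so the parser consumes the parenthesized argument itself.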
def _tokenize(expr: str) -> list[_Token]:
"""Convert an expression string into a list of tokens."""
tokens: list[_Token] = []
i = 0
s = expr.strip()
while i < len(s):
# Skip whitespace
if s[i].isspace():
i += 1
continue
# Operators / parens
if s[i] == "+":
tokens.append(_Token(_TokenType.PLUS, "+"))
i += 1
elif s[i] == "-":
tokens.append(_Token(_TokenType.MINUS, "-"))
i += 1
elif s[i] == "*":
tokens.append(_Token(_TokenType.STAR, "*"))
i += 1
elif s[i] == "/":
tokens.append(_Token(_TokenType.SLASH, "/"))
i += 1
elif s[i] == "(":
tokens.append(_Token(_TokenType.LPAREN, "("))
i += 1
elif s[i] == ")":
tokens.append(_Token(_TokenType.RPAREN, ")"))
i += 1
# Number literal
elif s[i].isdigit() or (s[i] == "." and i + 1 < len(s) and s[i + 1].isdigit()):
m = _NUMBER_RE.match(s, i)
if m:
tokens.append(_Token(_TokenType.NUMBER, m.group()))
i = m.end()
else:
raise ValueError(f"Invalid number at position {i}: {s[i:]!r}")
# Identifier: could be a function, signal like V(out), or bare name
elif s[i].isalpha() or s[i] == "_":
# Try signal pattern first: name(...)
m = _SIGNAL_RE.match(s, i)
if m:
full = m.group()
# Check if the part before '(' is a known function
paren_pos = full.index("(")
prefix = full[:paren_pos].lower()
if prefix in _FUNCTIONS:
# It's a function call -- emit FUNC token, then let
# the parser handle the parenthesized argument
tokens.append(_Token(_TokenType.FUNC, prefix))
i += paren_pos # parser will see '(' next
else:
# It's a signal name like V(out) or I(R1)
tokens.append(_Token(_TokenType.SIGNAL, full))
i = m.end()
else:
# Bare identifier (e.g., "time", or a function without parens)
m = _IDENT_RE.match(s, i)
if m:
name = m.group()
if name.lower() in _FUNCTIONS:
tokens.append(_Token(_TokenType.FUNC, name.lower()))
else:
tokens.append(_Token(_TokenType.SIGNAL, name))
i = m.end()
else:
raise ValueError(f"Unexpected character at position {i}: {s[i:]!r}")
else:
raise ValueError(f"Unexpected character at position {i}: {s[i:]!r}")
tokens.append(_Token(_TokenType.EOF, ""))
return tokens
# -- Recursive-descent parser / evaluator ------------------------------------
#
# Grammar:
# expr -> term (('+' | '-') term)*
# term -> unary (('*' | '/') unary)*
# unary -> '-' unary | primary
# primary -> NUMBER | SIGNAL | FUNC '(' expr ')' | '(' expr ')'
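The grammar reads top-down, one function per production. A stripped-down numeric-only version of the same recursive descent (a hypothetical `evaluate` over plain floats, no signals or functions) shows how precedence falls out of the call structure:

```python
import re

def evaluate(expr: str) -> float:
    # expr    -> term (('+' | '-') term)*
    # term    -> unary (('*' | '/') unary)*
    # unary   -> '-' unary | primary
    # primary -> NUMBER | '(' expr ')'
    tokens = re.findall(r"\d+\.?\d*|[()+\-*/]", expr)
    pos = 0

    def peek() -> str:
        return tokens[pos] if pos < len(tokens) else ""

    def advance() -> str:
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def parse_expr() -> float:
        left = parse_term()
        while peek() in ("+", "-"):
            op = advance()
            right = parse_term()
            left = left + right if op == "+" else left - right
        return left

    def parse_term() -> float:
        left = parse_unary()
        while peek() in ("*", "/"):
            op = advance()
            right = parse_unary()
            left = left * right if op == "*" else left / right
        return left

    def parse_unary() -> float:
        if peek() == "-":
            advance()
            return -parse_unary()
        return parse_primary()

    def parse_primary() -> float:
        if peek() == "(":
            advance()
            value = parse_expr()
            advance()  # consume ')'
            return value
        return float(advance())

    return parse_expr()
```

Because `_term` sits below `_expr`, `*` and `/` bind tighter than `+` and `-`, and the while-loops make both levels left-associative.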
class _Parser:
"""Recursive-descent expression evaluator over numpy arrays."""
def __init__(self, tokens: list[_Token], variables: dict[str, np.ndarray]):
self.tokens = tokens
self.variables = variables
self.pos = 0
def _peek(self) -> _Token:
return self.tokens[self.pos]
def _advance(self) -> _Token:
tok = self.tokens[self.pos]
self.pos += 1
return tok
def _expect(self, ttype: _TokenType) -> _Token:
tok = self._advance()
if tok.type != ttype:
raise ValueError(f"Expected {ttype.name}, got {tok.type.name} ({tok.value!r})")
return tok
def parse(self) -> np.ndarray:
result = self._expr()
if self._peek().type != _TokenType.EOF:
raise ValueError(f"Unexpected token after expression: {self._peek().value!r}")
return np.real(result) if np.iscomplexobj(result) else result
def _expr(self) -> np.ndarray:
left = self._term()
while self._peek().type in (_TokenType.PLUS, _TokenType.MINUS):
op = self._advance()
right = self._term()
if op.type == _TokenType.PLUS:
left = left + right
else:
left = left - right
return left
def _term(self) -> np.ndarray:
left = self._unary()
while self._peek().type in (_TokenType.STAR, _TokenType.SLASH):
op = self._advance()
right = self._unary()
if op.type == _TokenType.STAR:
left = left * right
else:
# Avoid division by zero -- replace zeros with tiny value
safe = np.where(np.abs(right) < 1e-30, 1e-30, right)
left = left / safe
return left
def _unary(self) -> np.ndarray:
if self._peek().type == _TokenType.MINUS:
self._advance()
return -self._unary()
return self._primary()
def _primary(self) -> np.ndarray:
tok = self._peek()
if tok.type == _TokenType.NUMBER:
self._advance()
return np.float64(tok.value)
if tok.type == _TokenType.SIGNAL:
self._advance()
return self._resolve_signal(tok.value)
if tok.type == _TokenType.FUNC:
self._advance()
self._expect(_TokenType.LPAREN)
arg = self._expr()
self._expect(_TokenType.RPAREN)
return self._apply_func(tok.value, arg)
if tok.type == _TokenType.LPAREN:
self._advance()
result = self._expr()
self._expect(_TokenType.RPAREN)
return result
raise ValueError(f"Unexpected token: {tok.type.name} ({tok.value!r})")
def _resolve_signal(self, name: str) -> np.ndarray:
"""Look up a signal by name, trying exact match then case-insensitive."""
if name in self.variables:
return np.real(self.variables[name])
name_lower = name.lower()
for key, val in self.variables.items():
if key.lower() == name_lower:
return np.real(val)
available = ", ".join(sorted(self.variables.keys()))
raise ValueError(f"Unknown signal {name!r}. Available: {available}")
@staticmethod
def _apply_func(name: str, arg: np.ndarray) -> np.ndarray:
"""Apply a built-in function to a numpy array."""
a = np.real(arg)
if name == "abs":
return np.abs(a)
if name == "sqrt":
return np.sqrt(np.maximum(a, 0.0))
if name == "log10":
return np.log10(np.maximum(np.abs(a), 1e-30))
if name == "db":
return 20.0 * np.log10(np.maximum(np.abs(a), 1e-30))
raise ValueError(f"Unknown function: {name!r}")
# -- Public API ---------------------------------------------------------------
def evaluate_expression(expression: str, variables: dict[str, np.ndarray]) -> np.ndarray:
"""Evaluate a mathematical expression over waveform data.
Uses a safe recursive-descent parser -- no eval() or exec().
Args:
expression: Math expression, e.g. "V(out) * I(R1)" or "dB(V(out))"
variables: Dict mapping signal names to numpy arrays
Returns:
Resulting numpy array
Raises:
ValueError: On parse errors or unknown signals
"""
tokens = _tokenize(expression)
parser = _Parser(tokens, variables)
return parser.parse()
class WaveformCalculator:
"""Evaluate expressions against all signals in a RawFile."""
def __init__(self, raw_file: RawFile):
self._raw = raw_file
self._variables: dict[str, np.ndarray] = {}
for var in raw_file.variables:
self._variables[var.name] = raw_file.data[var.index]
def calc(self, expression: str) -> np.ndarray:
"""Evaluate an expression against the loaded signals.
Args:
expression: Math expression referencing signal names from the raw file
Returns:
Resulting numpy array
"""
return evaluate_expression(expression, self._variables)
def available_signals(self) -> list[str]:
"""List all signal names available for expressions."""
return sorted(self._variables.keys())


@ -0,0 +1,515 @@
"""Waveform analysis and signal processing for simulation data."""
import numpy as np
def compute_fft(time: np.ndarray, signal: np.ndarray, max_harmonics: int = 50) -> dict:
"""Compute FFT of a time-domain signal.
Args:
time: Time array in seconds (must be monotonically increasing)
signal: Signal amplitude array (same length as time)
max_harmonics: Maximum number of frequency bins to return beyond the DC bin (the DC bin is always included)
Returns:
Dict with frequencies, magnitudes, magnitudes_db,
fundamental_freq, and dc_offset
"""
if len(time) < 2 or len(signal) < 2:
return {
"frequencies": [],
"magnitudes": [],
"magnitudes_db": [],
"fundamental_freq": 0.0,
"dc_offset": 0.0,
}
n = len(signal)
dt = (time[-1] - time[0]) / (n - 1)
if dt <= 0:
return {
"frequencies": [],
"magnitudes": [],
"magnitudes_db": [],
"fundamental_freq": 0.0,
"dc_offset": float(np.mean(np.real(signal))),
}
# Use real FFT for real-valued signals
spectrum = np.fft.rfft(np.real(signal))
freqs = np.fft.rfftfreq(n, d=dt)
magnitudes = np.abs(spectrum) * 2.0 / n
# DC component doesn't get the 2x factor
magnitudes[0] /= 2.0
dc_offset = float(magnitudes[0])
# Find fundamental: largest magnitude excluding DC
if len(magnitudes) > 1:
fund_idx = int(np.argmax(magnitudes[1:])) + 1
fundamental_freq = float(freqs[fund_idx])
else:
fund_idx = 0
fundamental_freq = 0.0
# Trim to max_harmonics (plus DC bin)
limit = min(max_harmonics + 1, len(freqs))
freqs = freqs[:limit]
magnitudes = magnitudes[:limit]
# dB conversion with floor to avoid log(0)
magnitudes_db = 20.0 * np.log10(np.maximum(magnitudes, 1e-15))
return {
"frequencies": freqs.tolist(),
"magnitudes": magnitudes.tolist(),
"magnitudes_db": magnitudes_db.tolist(),
"fundamental_freq": fundamental_freq,
"dc_offset": dc_offset,
}
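The 2/n scaling (with DC halved) is what makes the bin magnitudes read directly as signal amplitudes. A standalone check on a known signal, using the same steps as above:

```python
import numpy as np

fs, n = 1000.0, 1000
t = np.arange(n) / fs
sig = 0.5 + 1.0 * np.sin(2 * np.pi * 50 * t)   # 0.5 V DC + 1 V @ 50 Hz

spectrum = np.fft.rfft(sig)
freqs = np.fft.rfftfreq(n, d=1 / fs)
mags = np.abs(spectrum) * 2.0 / n
mags[0] /= 2.0                 # DC bin carries no 2x factor

dc = float(mags[0])                      # recovers the 0.5 V offset
peak_bin = int(np.argmax(mags[1:])) + 1  # recovers 1.0 V at 50 Hz
```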
def compute_thd(time: np.ndarray, signal: np.ndarray, n_harmonics: int = 10) -> dict:
"""Compute Total Harmonic Distortion.
THD is the ratio of harmonic content to the fundamental, expressed
as a percentage: sqrt(sum(V_n^2 for n>=2)) / V_1 * 100
Args:
time: Time array in seconds
signal: Signal amplitude array
n_harmonics: Number of harmonics to include (2nd through nth)
Returns:
Dict with thd_percent, fundamental_freq, fundamental_magnitude,
and harmonics list
"""
if len(time) < 2 or len(signal) < 2:
return {
"thd_percent": 0.0,
"fundamental_freq": 0.0,
"fundamental_magnitude": 0.0,
"harmonics": [],
}
n = len(signal)
dt = (time[-1] - time[0]) / (n - 1)
if dt <= 0:
return {
"thd_percent": 0.0,
"fundamental_freq": 0.0,
"fundamental_magnitude": 0.0,
"harmonics": [],
}
spectrum = np.fft.rfft(np.real(signal))
freqs = np.fft.rfftfreq(n, d=dt)
magnitudes = np.abs(spectrum) * 2.0 / n
magnitudes[0] /= 2.0 # DC correction
# Fundamental = largest non-DC peak
if len(magnitudes) <= 1:
return {
"thd_percent": 0.0,
"fundamental_freq": 0.0,
"fundamental_magnitude": 0.0,
"harmonics": [],
}
fund_idx = int(np.argmax(magnitudes[1:])) + 1
fundamental_freq = float(freqs[fund_idx])
fundamental_mag = float(magnitudes[fund_idx])
if fundamental_mag < 1e-15:
return {
"thd_percent": 0.0,
"fundamental_freq": fundamental_freq,
"fundamental_magnitude": fundamental_mag,
"harmonics": [],
}
# Collect harmonics by finding the bin closest to each integer multiple
harmonics = []
harmonic_sum_sq = 0.0
for h in range(2, n_harmonics + 2):
target_freq = fundamental_freq * h
if target_freq > freqs[-1]:
break
idx = int(np.argmin(np.abs(freqs - target_freq)))
mag = float(magnitudes[idx])
mag_db = 20.0 * np.log10(max(mag, 1e-15))
harmonic_sum_sq += mag**2
harmonics.append(
{
"harmonic": h,
"frequency": float(freqs[idx]),
"magnitude": mag,
"magnitude_db": mag_db,
}
)
thd_percent = (np.sqrt(harmonic_sum_sq) / fundamental_mag) * 100.0
return {
"thd_percent": float(thd_percent),
"fundamental_freq": fundamental_freq,
"fundamental_magnitude": fundamental_mag,
"harmonics": harmonics,
}
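The THD definition can be verified end to end on a synthetic signal: a 1 V fundamental plus a 0.1 V third harmonic should come out at exactly 10%. A standalone sketch following the same bin-picking approach (exact-period sampling, so there is no spectral leakage):

```python
import numpy as np

fs, f0, n = 1000.0, 50.0, 1000
t = np.arange(n) / fs
# 1 V fundamental plus a 0.1 V third harmonic -> THD should be 10%
sig = np.sin(2 * np.pi * f0 * t) + 0.1 * np.sin(2 * np.pi * 3 * f0 * t)

mags = np.abs(np.fft.rfft(sig)) * 2.0 / n
freqs = np.fft.rfftfreq(n, d=1 / fs)

fund = int(np.argmax(mags[1:])) + 1
# Sum harmonic power at the bins nearest each integer multiple
harm_sq = sum(
    float(mags[int(np.argmin(np.abs(freqs - freqs[fund] * h)))]) ** 2
    for h in range(2, 11)
)
thd_percent = 100.0 * float(np.sqrt(harm_sq)) / float(mags[fund])
```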
def compute_rms(signal: np.ndarray) -> float:
"""Compute RMS value of a signal.
Args:
signal: Signal amplitude array (real or complex)
Returns:
RMS value as a float
"""
if len(signal) == 0:
return 0.0
real_signal = np.real(signal)
return float(np.sqrt(np.mean(real_signal**2)))
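For a sine sampled over an integer number of cycles, the RMS should be the amplitude divided by √2, which makes a convenient spot-check:

```python
import numpy as np

# RMS of a whole number of sine cycles is amplitude / sqrt(2)
t = np.linspace(0.0, 1.0, 10000, endpoint=False)
sig = 3.0 * np.sin(2 * np.pi * 5 * t)     # 3 V amplitude, 5 full cycles
rms = float(np.sqrt(np.mean(sig ** 2)))   # ~ 3 / sqrt(2)
```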
def compute_peak_to_peak(signal: np.ndarray) -> dict:
"""Compute peak-to-peak metrics.
Args:
signal: Signal amplitude array
Returns:
Dict with peak_to_peak, max, min, and mean values
"""
if len(signal) == 0:
return {
"peak_to_peak": 0.0,
"max": 0.0,
"min": 0.0,
"mean": 0.0,
}
real_signal = np.real(signal)
sig_max = float(np.max(real_signal))
sig_min = float(np.min(real_signal))
return {
"peak_to_peak": sig_max - sig_min,
"max": sig_max,
"min": sig_min,
"mean": float(np.mean(real_signal)),
}
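Both helpers have closed-form answers on an ideal sine: RMS is peak/√2 and peak-to-peak is twice the amplitude. A quick standalone check:

```python
import numpy as np

# One full cycle of a 1 V-peak, 1 kHz sine (integer cycles, endpoint excluded)
t = np.linspace(0, 1e-3, 1000, endpoint=False)
sig = np.sin(2 * np.pi * 1000 * t)

rms = float(np.sqrt(np.mean(sig**2)))            # expect 1/sqrt(2) ~ 0.7071
peak_to_peak = float(np.max(sig) - np.min(sig))  # expect 2.0
```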
def compute_settling_time(
time: np.ndarray,
signal: np.ndarray,
final_value: float | None = None,
tolerance_percent: float = 2.0,
) -> dict:
"""Compute settling time.
Searches backwards from the end of the signal to find the last
point where the signal was outside the tolerance band around the
final value. Settling time is measured from time[0] to that crossing.
Args:
time: Time array in seconds
signal: Signal amplitude array
final_value: Target value. If None, uses the last sample.
tolerance_percent: Allowed deviation as a percentage of final_value
(falls back to a percentage of the signal's range if final_value
is near zero)
Returns:
Dict with settling_time, final_value, tolerance, and settled flag
"""
if len(time) < 2 or len(signal) < 2:
return {
"settling_time": 0.0,
"final_value": 0.0,
"tolerance": 0.0,
"settled": False,
}
real_signal = np.real(signal)
if final_value is None:
final_value = float(real_signal[-1])
# Tolerance band: percentage of final_value, but use absolute
# tolerance if final_value is near zero to avoid a degenerate band
if abs(final_value) > 1e-12:
tolerance = abs(final_value) * tolerance_percent / 100.0
else:
# Fall back to percentage of signal range
sig_range = float(np.max(real_signal) - np.min(real_signal))
tolerance = sig_range * tolerance_percent / 100.0 if sig_range > 0 else 1e-12
# Walk backwards to find where signal last left the tolerance band
outside = np.abs(real_signal - final_value) > tolerance
if not np.any(outside):
# Signal was always within tolerance
return {
"settling_time": 0.0,
"final_value": final_value,
"tolerance": tolerance,
"settled": True,
}
# Find the last index that was outside the band
last_outside = int(np.max(np.nonzero(outside)[0]))
if last_outside >= len(time) - 1:
# Never settled within the captured data
return {
"settling_time": float(time[-1] - time[0]),
"final_value": final_value,
"tolerance": tolerance,
"settled": False,
}
# Settling time = time from start to the first sample inside the band
# after the last excursion
settling_time = float(time[last_outside + 1] - time[0])
return {
"settling_time": settling_time,
"final_value": final_value,
"tolerance": tolerance,
"settled": True,
}
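The backward-walk approach can be checked against a first-order exponential, whose 2% settling time is analytically −τ·ln(0.02) ≈ 3.91τ. A standalone sketch of the same band test:

```python
import numpy as np

tau = 1e-3
t = np.linspace(0, 10 * tau, 10_001)
sig = 1.0 - np.exp(-t / tau)        # first-order step response

final = float(sig[-1])
tol = 0.02 * abs(final)             # 2% tolerance band
outside = np.abs(sig - final) > tol
last_outside = int(np.max(np.nonzero(outside)[0]))
settling_time = float(t[last_outside + 1] - t[0])
# Analytic entry into the 2% band: -tau * ln(0.02) ~ 3.91e-3 s
```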
def compute_rise_time(
time: np.ndarray,
signal: np.ndarray,
low_pct: float = 10,
high_pct: float = 90,
) -> dict:
"""Compute rise time between two percentage thresholds.
Uses linear interpolation between samples for sub-sample accuracy.
Thresholds are computed relative to the signal's min-to-max swing.
Args:
time: Time array in seconds
signal: Signal amplitude array
low_pct: Lower threshold as percentage of swing (default 10%)
high_pct: Upper threshold as percentage of swing (default 90%)
Returns:
Dict with rise_time, low_threshold, high_threshold,
low_time, and high_time
"""
if len(time) < 2 or len(signal) < 2:
return {
"rise_time": 0.0,
"low_threshold": 0.0,
"high_threshold": 0.0,
"low_time": 0.0,
"high_time": 0.0,
}
real_signal = np.real(signal)
sig_min = float(np.min(real_signal))
sig_max = float(np.max(real_signal))
swing = sig_max - sig_min
if swing < 1e-15:
return {
"rise_time": 0.0,
"low_threshold": sig_min,
"high_threshold": sig_max,
"low_time": float(time[0]),
"high_time": float(time[0]),
}
low_thresh = sig_min + swing * (low_pct / 100.0)
high_thresh = sig_min + swing * (high_pct / 100.0)
low_time = _interpolate_crossing(time, real_signal, low_thresh, rising=True)
high_time = _interpolate_crossing(time, real_signal, high_thresh, rising=True)
if low_time is None or high_time is None:
return {
"rise_time": 0.0,
"low_threshold": low_thresh,
"high_threshold": high_thresh,
"low_time": float(low_time) if low_time is not None else None,
"high_time": float(high_time) if high_time is not None else None,
}
return {
"rise_time": high_time - low_time,
"low_threshold": low_thresh,
"high_threshold": high_thresh,
"low_time": low_time,
"high_time": high_time,
}
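A first-order exponential again makes a good check: its 10–90% rise time is τ·ln(9) ≈ 2.197τ. Because this signal is monotonic, `np.interp` can do the threshold inversion that `_interpolate_crossing` handles in the general case:

```python
import numpy as np

tau = 1e-3
t = np.linspace(0, 10 * tau, 10_001)
sig = 1.0 - np.exp(-t / tau)

swing = sig.max() - sig.min()
lo = sig.min() + 0.1 * swing
hi = sig.min() + 0.9 * swing
t_lo = float(np.interp(lo, sig, t))   # valid because sig is monotonic
t_hi = float(np.interp(hi, sig, t))
rise_time = t_hi - t_lo               # expect ~ tau * ln(9) = 2.197e-3 s
```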
def _interpolate_crossing(
time: np.ndarray,
signal: np.ndarray,
threshold: float,
rising: bool = True,
) -> float | None:
"""Find the first time a signal crosses a threshold, with interpolation.
Args:
time: Time array
signal: Signal array
threshold: Value to cross
rising: If True, look for low-to-high crossing
Returns:
Interpolated time of crossing, or None if no crossing found
"""
for i in range(len(signal) - 1):
if rising:
crosses = signal[i] <= threshold < signal[i + 1]
else:
crosses = signal[i] >= threshold > signal[i + 1]
if crosses:
# Linear interpolation between samples
dv = signal[i + 1] - signal[i]
if abs(dv) < 1e-30:
return float(time[i])
frac = (threshold - signal[i]) / dv
return float(time[i] + frac * (time[i + 1] - time[i]))
return None
def compute_bandwidth(
frequency: np.ndarray,
magnitude_db: np.ndarray,
ref_db: float | None = None,
) -> dict:
"""Compute -3dB bandwidth from frequency response data.
Interpolates between data points to find the exact frequencies
where the response crosses the -3dB level relative to the reference.
Args:
frequency: Frequency array in Hz (may be complex; .real is used)
magnitude_db: Magnitude in dB
ref_db: Reference level in dB. Defaults to the peak of magnitude_db.
Returns:
Dict with bandwidth_hz, f_low, f_high, ref_db, and type
"""
if len(frequency) < 2 or len(magnitude_db) < 2:
return {
"bandwidth_hz": 0.0,
"f_low": None,
"f_high": None,
"ref_db": 0.0,
"type": "unknown",
}
freq = np.real(frequency).astype(np.float64)
mag = np.real(magnitude_db).astype(np.float64)
# Sort by frequency (in case data isn't ordered)
sort_idx = np.argsort(freq)
freq = freq[sort_idx]
mag = mag[sort_idx]
# Strip any negative frequencies
positive_mask = freq >= 0
freq = freq[positive_mask]
mag = mag[positive_mask]
if len(freq) < 2:
return {
"bandwidth_hz": 0.0,
"f_low": None,
"f_high": None,
"ref_db": 0.0,
"type": "unknown",
}
if ref_db is None:
ref_db = float(np.max(mag))
cutoff = ref_db - 3.0
# Find all -3dB crossings by checking where magnitude crosses the cutoff
above = mag >= cutoff
crossings = []
for i in range(len(mag) - 1):
if above[i] != above[i + 1]:
# Interpolate the exact crossing frequency
dm = mag[i + 1] - mag[i]
if abs(dm) < 1e-30:
f_cross = float(freq[i])
else:
frac = (cutoff - mag[i]) / dm
f_cross = float(freq[i] + frac * (freq[i + 1] - freq[i]))
crossings.append(f_cross)
if not crossings:
# No crossing found - check if entirely above or below
if np.all(above):
return {
"bandwidth_hz": float(freq[-1] - freq[0]),
"f_low": None,
"f_high": None,
"ref_db": ref_db,
"type": "unknown",
}
return {
"bandwidth_hz": 0.0,
"f_low": None,
"f_high": None,
"ref_db": ref_db,
"type": "unknown",
}
# Classify response shape
peak_idx = int(np.argmax(mag))
if len(crossings) == 1:
f_cross = crossings[0]
if peak_idx < len(freq) // 2:
# Peak is at low end => lowpass
return {
"bandwidth_hz": f_cross,
"f_low": None,
"f_high": f_cross,
"ref_db": ref_db,
"type": "lowpass",
}
else:
# Peak is at high end => highpass
return {
"bandwidth_hz": float(freq[-1]) - f_cross,
"f_low": f_cross,
"f_high": None,
"ref_db": ref_db,
"type": "highpass",
}
# Two or more crossings => bandpass (use first and last)
f_low = crossings[0]
f_high = crossings[-1]
return {
"bandwidth_hz": f_high - f_low,
"f_low": f_low,
"f_high": f_high,
"ref_db": ref_db,
"type": "bandpass",
}
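The crossing interpolation can be exercised on an ideal first-order lowpass, where the −3 dB point sits essentially at fc. A standalone sketch of the same interpolation step:

```python
import numpy as np

fc = 1000.0
freq = np.logspace(0, 6, 601)                      # 1 Hz .. 1 MHz, log-spaced
mag_db = -10.0 * np.log10(1.0 + (freq / fc) ** 2)  # first-order lowpass

ref_db = float(np.max(mag_db))
cutoff = ref_db - 3.0
above = mag_db >= cutoff
i = int(np.argmax(above[:-1] != above[1:]))        # first straddling bin pair
frac = (cutoff - mag_db[i]) / (mag_db[i + 1] - mag_db[i])
f_3db = float(freq[i] + frac * (freq[i + 1] - freq[i]))
# Exact -3 dB frequency is fc * sqrt(10**0.3 - 1) ~ 997.6 Hz
```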

@@ -1,534 +0,0 @@
"""FastMCP server for LTspice circuit simulation automation.
This server provides tools for:
- Running SPICE simulations
- Extracting waveform data
- Modifying schematic components
- Browsing component libraries and examples
"""
import json
from pathlib import Path
import numpy as np
from fastmcp import FastMCP
from . import __version__
from .config import (
LTSPICE_EXAMPLES,
LTSPICE_LIB,
validate_installation,
)
from .raw_parser import parse_raw_file
from .runner import run_netlist, run_simulation
from .schematic import (
modify_component_value,
parse_schematic,
)
# Initialize FastMCP server
mcp = FastMCP(
name="mcp-ltspice",
instructions="""
LTspice MCP Server - Circuit simulation automation.
Use this server to:
- Run SPICE simulations on .asc schematics or .cir netlists
- Extract waveform data (voltages, currents) from simulation results
- Modify component values in schematics programmatically
- Browse LTspice's component library (6500+ symbols)
- Access example circuits (4000+ examples)
LTspice runs via Wine on Linux. Simulations execute in batch mode
and results are parsed from binary .raw files.
""",
)
# ============================================================================
# TOOLS
# ============================================================================
@mcp.tool()
async def simulate(
schematic_path: str,
timeout_seconds: float = 300,
) -> dict:
"""Run an LTspice simulation on a schematic file.
This runs LTspice in batch mode, which executes any simulation
directives (.tran, .ac, .dc, .op, etc.) in the schematic.
Args:
schematic_path: Absolute path to .asc schematic file
timeout_seconds: Maximum time to wait for simulation (default 5 min)
Returns:
dict with:
- success: bool
- elapsed_seconds: simulation time
- variables: list of signal names available
- points: number of data points
- error: error message if failed
"""
result = await run_simulation(
schematic_path,
timeout=timeout_seconds,
parse_results=True,
)
response = {
"success": result.success,
"elapsed_seconds": result.elapsed_seconds,
"error": result.error,
}
if result.raw_data:
response["variables"] = [
{"name": v.name, "type": v.type}
for v in result.raw_data.variables
]
response["points"] = result.raw_data.points
response["plotname"] = result.raw_data.plotname
response["raw_file"] = str(result.raw_file) if result.raw_file else None
return response
@mcp.tool()
async def simulate_netlist(
netlist_path: str,
timeout_seconds: float = 300,
) -> dict:
"""Run an LTspice simulation on a netlist file.
Use this for .cir or .net SPICE netlist files instead of
schematic .asc files.
Args:
netlist_path: Absolute path to .cir or .net netlist file
timeout_seconds: Maximum time to wait for simulation
Returns:
dict with simulation results (same as simulate)
"""
result = await run_netlist(
netlist_path,
timeout=timeout_seconds,
parse_results=True,
)
response = {
"success": result.success,
"elapsed_seconds": result.elapsed_seconds,
"error": result.error,
}
if result.raw_data:
response["variables"] = [
{"name": v.name, "type": v.type}
for v in result.raw_data.variables
]
response["points"] = result.raw_data.points
response["raw_file"] = str(result.raw_file) if result.raw_file else None
return response
@mcp.tool()
def get_waveform(
raw_file_path: str,
signal_names: list[str],
max_points: int = 1000,
) -> dict:
"""Extract waveform data from a .raw simulation results file.
After running a simulation, use this to get the actual data values.
For transient analysis, includes time axis. For AC, includes frequency.
Args:
raw_file_path: Path to .raw file from simulation
signal_names: List of signal names to extract (partial match OK)
e.g., ["V(out)", "I(R1)"] or just ["out", "R1"]
max_points: Maximum data points to return (downsampled if needed)
Returns:
dict with:
- time_or_frequency: the x-axis data
- signals: dict mapping signal name to data array
- units: dict mapping signal name to unit type
"""
raw = parse_raw_file(raw_file_path)
# Get x-axis (time or frequency)
x_axis = raw.get_time()
x_name = "time"
if x_axis is None:
x_axis = raw.get_frequency()
x_name = "frequency"
# Downsample if needed
total_points = len(x_axis) if x_axis is not None else raw.points
step = max(1, total_points // max_points)
result = {
"x_axis_name": x_name,
"x_axis_data": [],
"signals": {},
"total_points": total_points,
"returned_points": 0,
}
if x_axis is not None:
sampled = x_axis[::step]
# For frequency domain, take real part (imag is 0)
if np.iscomplexobj(sampled):
result["x_axis_data"] = sampled.real.tolist()
else:
result["x_axis_data"] = sampled.tolist()
result["returned_points"] = len(result["x_axis_data"])
# Extract requested signals
for name in signal_names:
data = raw.get_variable(name)
if data is not None:
sampled = data[::step]
# Handle complex data (AC analysis)
if np.iscomplexobj(sampled):
import math
result["signals"][name] = {
"magnitude_db": [
20 * math.log10(abs(x)) if abs(x) > 0 else -200
for x in sampled
],
"phase_degrees": [
math.degrees(math.atan2(x.imag, x.real))
for x in sampled
],
}
else:
result["signals"][name] = {"values": sampled.tolist()}
return result
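The complex-to-dB/phase conversion used for AC results is compact enough to illustrate on its own (a standalone sketch with made-up sample values):

```python
import math

# Three made-up complex AC samples, as parsed from a .raw file
samples = [1 + 0j, 0.5 - 0.5j, 0 + 1j]

magnitude_db = [
    20 * math.log10(abs(x)) if abs(x) > 0 else -200  # floor at -200 dB
    for x in samples
]
phase_degrees = [math.degrees(math.atan2(x.imag, x.real)) for x in samples]
# 0.5 - 0.5j has magnitude 1/sqrt(2) -> about -3.01 dB at -45 degrees
```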
@mcp.tool()
def read_schematic(schematic_path: str) -> dict:
"""Read and parse an LTspice schematic file.
Returns component list, net names, and SPICE directives.
Args:
schematic_path: Path to .asc schematic file
Returns:
dict with:
- components: list of {name, symbol, value, x, y}
- nets: list of net/flag names
- directives: list of SPICE directive strings
"""
sch = parse_schematic(schematic_path)
return {
"version": sch.version,
"components": [
{
"name": c.name,
"symbol": c.symbol,
"value": c.value,
"x": c.x,
"y": c.y,
"attributes": c.attributes,
}
for c in sch.components
],
"nets": [f.name for f in sch.flags],
"directives": sch.get_spice_directives(),
"wire_count": len(sch.wires),
}
@mcp.tool()
def edit_component(
schematic_path: str,
component_name: str,
new_value: str,
output_path: str | None = None,
) -> dict:
"""Modify a component's value in a schematic.
Use this to change resistor values, capacitor values, etc.
programmatically before running a simulation.
Args:
schematic_path: Path to .asc schematic file
component_name: Instance name like "R1", "C2", "M1"
new_value: New value string, e.g., "10k", "100n", "2N7000"
output_path: Where to save modified schematic (None = overwrite)
Returns:
dict with success status and component details
"""
try:
sch = modify_component_value(
schematic_path,
component_name,
new_value,
output_path,
)
comp = sch.get_component(component_name)
return {
"success": True,
"component": component_name,
"new_value": new_value,
"output_path": output_path or schematic_path,
"symbol": comp.symbol if comp else None,
}
except ValueError as e:
return {
"success": False,
"error": str(e),
}
@mcp.tool()
def list_symbols(
category: str | None = None,
search: str | None = None,
limit: int = 50,
) -> dict:
"""List available component symbols from LTspice library.
Symbols define the graphical representation and pins of components.
Args:
category: Filter by category folder (e.g., "Opamps", "Comparators")
search: Search term for symbol name (case-insensitive)
limit: Maximum results to return
Returns:
dict with:
- symbols: list of {name, category, path}
- total_count: total matching symbols
"""
symbols = []
sym_dir = LTSPICE_LIB / "sym"
if not sym_dir.exists():
return {"error": "Symbol library not found", "symbols": [], "total_count": 0}
for asy_file in sym_dir.rglob("*.asy"):
rel_path = asy_file.relative_to(sym_dir)
cat = str(rel_path.parent) if rel_path.parent != Path(".") else "misc"
name = asy_file.stem
# Apply filters
if category and cat.lower() != category.lower():
continue
if search and search.lower() not in name.lower():
continue
symbols.append({
"name": name,
"category": cat,
"path": str(asy_file),
})
# Sort by name
symbols.sort(key=lambda x: x["name"].lower())
total = len(symbols)
return {
"symbols": symbols[:limit],
"total_count": total,
"returned_count": min(limit, total),
}
@mcp.tool()
def list_examples(
category: str | None = None,
search: str | None = None,
limit: int = 50,
) -> dict:
"""List example circuits from LTspice examples library.
Great for learning or as starting points for new designs.
Args:
category: Filter by category folder
search: Search term for example name
limit: Maximum results to return
Returns:
dict with list of example schematics
"""
examples = []
if not LTSPICE_EXAMPLES.exists():
return {"error": "Examples not found", "examples": [], "total_count": 0}
for asc_file in LTSPICE_EXAMPLES.rglob("*.asc"):
rel_path = asc_file.relative_to(LTSPICE_EXAMPLES)
cat = str(rel_path.parent) if rel_path.parent != Path(".") else "misc"
name = asc_file.stem
if category and cat.lower() != category.lower():
continue
if search and search.lower() not in name.lower():
continue
examples.append({
"name": name,
"category": cat,
"path": str(asc_file),
})
examples.sort(key=lambda x: x["name"].lower())
total = len(examples)
return {
"examples": examples[:limit],
"total_count": total,
"returned_count": min(limit, total),
}
@mcp.tool()
def get_symbol_info(symbol_path: str) -> dict:
"""Get detailed information about a component symbol.
Reads the .asy file to extract pin names, attributes, and description.
Args:
symbol_path: Path to .asy symbol file
Returns:
dict with symbol details including pins and default attributes
"""
path = Path(symbol_path)
if not path.exists():
return {"error": f"Symbol not found: {symbol_path}"}
content = path.read_text(errors="replace")
lines = content.split("\n")
info = {
"name": path.stem,
"pins": [],
"attributes": {},
"description": "",
"prefix": "",
"spice_prefix": "",
}
for line in lines:
line = line.strip()
if line.startswith("PIN"):
parts = line.split()
if len(parts) >= 5:
info["pins"].append({
"x": int(parts[1]),
"y": int(parts[2]),
"justification": parts[3],
"rotation": parts[4] if len(parts) > 4 else "0",
})
elif line.startswith("PINATTR PinName"):
pin_name = line.split(None, 2)[2] if len(line.split()) > 2 else ""
if info["pins"]:
info["pins"][-1]["name"] = pin_name
elif line.startswith("SYMATTR"):
parts = line.split(None, 2)
if len(parts) >= 3:
attr_name = parts[1]
attr_value = parts[2]
info["attributes"][attr_name] = attr_value
if attr_name == "Description":
info["description"] = attr_value
elif attr_name == "Prefix":
info["prefix"] = attr_value
elif attr_name == "SpiceModel":
info["spice_prefix"] = attr_value
return info
@mcp.tool()
def check_installation() -> dict:
"""Verify LTspice and Wine are properly installed.
Returns:
dict with installation status and paths
"""
ok, msg = validate_installation()
from .config import LTSPICE_DIR, LTSPICE_EXE, WINE_PREFIX
return {
"valid": ok,
"message": msg,
"paths": {
"ltspice_dir": str(LTSPICE_DIR),
"ltspice_exe": str(LTSPICE_EXE),
"wine_prefix": str(WINE_PREFIX),
"lib_dir": str(LTSPICE_LIB),
"examples_dir": str(LTSPICE_EXAMPLES),
},
"lib_exists": LTSPICE_LIB.exists(),
"examples_exist": LTSPICE_EXAMPLES.exists(),
}
# ============================================================================
# RESOURCES
# ============================================================================
@mcp.resource("ltspice://symbols")
def resource_symbols() -> str:
"""List of all available LTspice symbols organized by category."""
result = list_symbols(limit=10000)
return json.dumps(result, indent=2)
@mcp.resource("ltspice://examples")
def resource_examples() -> str:
"""List of all LTspice example circuits."""
result = list_examples(limit=10000)
return json.dumps(result, indent=2)
@mcp.resource("ltspice://status")
def resource_status() -> str:
"""Current LTspice installation status."""
return json.dumps(check_installation(), indent=2)
# ============================================================================
# ENTRY POINT
# ============================================================================
def main():
"""Run the MCP server."""
print(f"🔌 mcp-ltspice v{__version__}")
print(" LTspice circuit simulation automation")
# Quick validation
ok, msg = validate_installation()
print(msg)
mcp.run()
if __name__ == "__main__":
main()

tests/__init__.py Normal file

tests/conftest.py Normal file
@@ -0,0 +1,336 @@
"""Shared fixtures for mcltspice test suite.
All fixtures produce synthetic data -- no LTspice or Wine required.
"""
from pathlib import Path
import numpy as np
import pytest
from mcltspice.raw_parser import RawFile, Variable
from mcltspice.schematic import Component, Flag, Schematic, Text, Wire
# ---------------------------------------------------------------------------
# Time-domain fixtures
# ---------------------------------------------------------------------------
@pytest.fixture
def sample_rate() -> float:
"""Default sample rate: 100 kHz."""
return 100_000.0
@pytest.fixture
def duration() -> float:
"""Default signal duration: 10 ms (enough for 1 kHz signals)."""
return 0.01
@pytest.fixture
def time_array(sample_rate, duration) -> np.ndarray:
"""Uniformly spaced time array."""
n = int(sample_rate * duration)
return np.linspace(0, duration, n, endpoint=False)
@pytest.fixture
def sine_1khz(time_array) -> np.ndarray:
"""1 kHz sine wave, 1 V peak."""
return np.sin(2 * np.pi * 1000 * time_array)
@pytest.fixture
def dc_signal() -> np.ndarray:
"""Constant 3.3 V DC signal (1000 samples)."""
return np.full(1000, 3.3)
@pytest.fixture
def step_signal(time_array) -> np.ndarray:
"""Unit step at t = duration/2 with exponential rise (tau = duration/10)."""
t = time_array
mid = t[-1] / 2
tau = t[-1] / 10
sig = np.where(t >= mid, 1.0 - np.exp(-(t - mid) / tau), 0.0)
return sig
# ---------------------------------------------------------------------------
# Frequency-domain / AC fixtures
# ---------------------------------------------------------------------------
@pytest.fixture
def ac_frequency() -> np.ndarray:
"""Log-spaced frequency array: 1 Hz to 10 MHz, 500 points."""
return np.logspace(0, 7, 500)
@pytest.fixture
def lowpass_response(ac_frequency) -> np.ndarray:
"""First-order lowpass magnitude in dB (fc ~ 1 kHz)."""
fc = 1000.0
mag = 1.0 / np.sqrt(1.0 + (ac_frequency / fc) ** 2)
return 20.0 * np.log10(mag)
@pytest.fixture
def lowpass_complex(ac_frequency) -> np.ndarray:
"""First-order lowpass as complex transfer function (fc ~ 1 kHz)."""
fc = 1000.0
s = 1j * ac_frequency / fc
return 1.0 / (1.0 + s)
# ---------------------------------------------------------------------------
# Stepped / multi-run fixtures
# ---------------------------------------------------------------------------
@pytest.fixture
def stepped_time() -> np.ndarray:
"""Time axis for 3 runs, each 0..1 ms with 100 points per run."""
runs = []
for _ in range(3):
runs.append(np.linspace(0, 1e-3, 100, endpoint=False))
return np.concatenate(runs)
@pytest.fixture
def stepped_data(stepped_time) -> np.ndarray:
"""Two variables (time + V(out)) across 3 runs."""
n = len(stepped_time)
data = np.zeros((2, n))
data[0] = stepped_time
# Each run has a different amplitude sine wave
for run_idx in range(3):
start = run_idx * 100
end = start + 100
t_run = data[0, start:end]
data[1, start:end] = (run_idx + 1) * np.sin(2 * np.pi * 1000 * t_run)
return data
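Consumers of a stepped RawFile slice individual runs out of the concatenated arrays using `run_boundaries`. A standalone sketch of that slicing on the same synthetic data shape:

```python
import numpy as np

# Rebuild the fixture's shape: 3 runs of 100 points, amplitudes 1, 2, 3
t_run = np.linspace(0, 1e-3, 100, endpoint=False)
vout = np.concatenate(
    [(k + 1) * np.sin(2 * np.pi * 1000 * t_run) for k in range(3)]
)
run_boundaries = [0, 100, 200]
points_per_run = 100

# Slice out the second run (index 1, amplitude 2)
start = run_boundaries[1]
run1 = vout[start:start + points_per_run]
amplitude = float(np.max(np.abs(run1)))
```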
# ---------------------------------------------------------------------------
# Mock RawFile fixtures
# ---------------------------------------------------------------------------
@pytest.fixture
def mock_rawfile(time_array, sine_1khz) -> RawFile:
"""A simple transient-analysis RawFile with time and V(out)."""
n = len(time_array)
data = np.zeros((2, n))
data[0] = time_array
data[1] = sine_1khz
return RawFile(
title="Test Circuit",
date="2026-01-01",
plotname="Transient Analysis",
flags=["real"],
variables=[
Variable(0, "time", "time"),
Variable(1, "V(out)", "voltage"),
],
points=n,
data=data,
)
@pytest.fixture
def mock_rawfile_stepped(stepped_data) -> RawFile:
"""A stepped RawFile with 3 runs."""
n = stepped_data.shape[1]
return RawFile(
title="Stepped Sim",
date="2026-01-01",
plotname="Transient Analysis",
flags=["real", "stepped"],
variables=[
Variable(0, "time", "time"),
Variable(1, "V(out)", "voltage"),
],
points=n,
data=stepped_data,
n_runs=3,
run_boundaries=[0, 100, 200],
)
@pytest.fixture
def mock_rawfile_ac(ac_frequency, lowpass_complex) -> RawFile:
"""An AC-analysis RawFile with complex frequency-domain data."""
n = len(ac_frequency)
data = np.zeros((2, n), dtype=np.complex128)
data[0] = ac_frequency.astype(np.complex128)
data[1] = lowpass_complex
return RawFile(
title="AC Sim",
date="2026-01-01",
plotname="AC Analysis",
flags=["complex"],
variables=[
Variable(0, "frequency", "frequency"),
Variable(1, "V(out)", "voltage"),
],
points=n,
data=data,
)
# ---------------------------------------------------------------------------
# Netlist / Schematic string fixtures
# ---------------------------------------------------------------------------
@pytest.fixture
def sample_netlist_str() -> str:
"""A basic SPICE netlist string for an RC lowpass."""
return (
"* RC Lowpass Filter\n"
"V1 in 0 AC 1\n"
"R1 in out 1k\n"
"C1 out 0 100n\n"
".ac dec 100 1 1meg\n"
".backanno\n"
".end\n"
)
@pytest.fixture
def sample_asc_str() -> str:
"""A minimal .asc schematic string for an RC lowpass."""
return (
"Version 4\n"
"SHEET 1 880 680\n"
"WIRE 80 96 176 96\n"
"WIRE 176 176 272 176\n"
"FLAG 80 176 0\n"
"FLAG 272 240 0\n"
"FLAG 176 176 out\n"
"SYMBOL voltage 80 80 R0\n"
"SYMATTR InstName V1\n"
"SYMATTR Value AC 1\n"
"SYMBOL res 160 80 R0\n"
"SYMATTR InstName R1\n"
"SYMATTR Value 1k\n"
"SYMBOL cap 256 176 R0\n"
"SYMATTR InstName C1\n"
"SYMATTR Value 100n\n"
"TEXT 80 296 Left 2 !.ac dec 100 1 1meg\n"
)
# ---------------------------------------------------------------------------
# Touchstone fixtures
# ---------------------------------------------------------------------------
@pytest.fixture
def sample_s2p_content() -> str:
"""Synthetic .s2p file content (MA format, GHz)."""
lines = [
"! Two-port S-parameter data",
"! Freq S11(mag) S11(ang) S21(mag) S21(ang) S12(mag) S12(ang) S22(mag) S22(ang)",
"# GHZ S MA R 50",
"1.0 0.5 -30 0.9 -10 0.1 170 0.4 -40",
"2.0 0.6 -50 0.8 -20 0.12 160 0.5 -60",
"3.0 0.7 -70 0.7 -30 0.15 150 0.55 -80",
]
return "\n".join(lines) + "\n"
@pytest.fixture
def tmp_s2p_file(sample_s2p_content, tmp_path) -> Path:
"""Write synthetic .s2p content to a temp file and return path."""
p = tmp_path / "test.s2p"
p.write_text(sample_s2p_content)
return p
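The fixture's `# GHZ S MA R 50` option line means frequencies in GHz and S-parameters as magnitude/angle pairs. Converting one data row to complex values is a one-liner per pair (a standalone sketch, not the project's parser API):

```python
import cmath
import math

# One data row from the MA-format fixture: freq(GHz), then four (mag, ang) pairs
row = "1.0 0.5 -30 0.9 -10 0.1 170 0.4 -40"
vals = [float(v) for v in row.split()]
freq_hz = vals[0] * 1e9
s_params = [
    vals[i] * cmath.exp(1j * math.radians(vals[i + 1]))  # mag * e^(j*ang)
    for i in range(1, 9, 2)
]  # order: S11, S21, S12, S22
```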
# ---------------------------------------------------------------------------
# Schematic object fixtures (for DRC and diff tests)
# ---------------------------------------------------------------------------
@pytest.fixture
def valid_schematic() -> Schematic:
"""A schematic with ground, components, wires, and sim directive."""
sch = Schematic()
sch.flags = [
Flag(80, 176, "0"),
Flag(272, 240, "0"),
Flag(176, 176, "out"),
]
sch.wires = [
Wire(80, 96, 176, 96),
Wire(176, 176, 272, 176),
]
sch.components = [
Component(name="V1", symbol="voltage", x=80, y=80, rotation=0, mirror=False,
attributes={"Value": "AC 1"}),
Component(name="R1", symbol="res", x=160, y=80, rotation=0, mirror=False,
attributes={"Value": "1k"}),
Component(name="C1", symbol="cap", x=256, y=176, rotation=0, mirror=False,
attributes={"Value": "100n"}),
]
sch.texts = [
Text(80, 296, ".ac dec 100 1 1meg", type="spice"),
]
return sch
@pytest.fixture
def schematic_no_ground() -> Schematic:
"""A schematic missing a ground node."""
sch = Schematic()
sch.flags = [Flag(176, 176, "out")]
sch.wires = [Wire(80, 96, 176, 96)]
sch.components = [
Component(name="R1", symbol="res", x=160, y=80, rotation=0, mirror=False,
attributes={"Value": "1k"}),
]
sch.texts = [Text(80, 296, ".tran 10m", type="spice")]
return sch
@pytest.fixture
def schematic_no_sim() -> Schematic:
"""A schematic missing a simulation directive."""
sch = Schematic()
sch.flags = [Flag(80, 176, "0")]
sch.wires = [Wire(80, 96, 176, 96)]
sch.components = [
Component(name="R1", symbol="res", x=160, y=80, rotation=0, mirror=False,
attributes={"Value": "1k"}),
]
sch.texts = []
return sch
@pytest.fixture
def schematic_duplicate_names() -> Schematic:
"""A schematic with duplicate component names."""
sch = Schematic()
sch.flags = [Flag(80, 176, "0")]
sch.wires = [Wire(80, 96, 176, 96)]
sch.components = [
Component(name="R1", symbol="res", x=160, y=80, rotation=0, mirror=False,
attributes={"Value": "1k"}),
Component(name="R1", symbol="res", x=320, y=80, rotation=0, mirror=False,
attributes={"Value": "2.2k"}),
]
sch.texts = [Text(80, 296, ".tran 10m", type="spice")]
return sch
@pytest.fixture
def ltspice_available():
"""Skip test if LTspice is not available."""
from mcltspice.config import validate_installation
ok, msg = validate_installation()
if not ok:
pytest.skip(f"LTspice not available: {msg}")
return True

tests/test_asc_generator.py Normal file
@@ -0,0 +1,175 @@
"""Tests for asc_generator module: pin positioning, schematic rendering, templates."""
import pytest
from mcltspice.asc_generator import (
_PIN_OFFSETS,
AscSchematic,
_rotate,
generate_inverting_amp,
generate_rc_lowpass,
generate_voltage_divider,
pin_position,
)
class TestPinPosition:
@pytest.mark.parametrize("symbol", ["voltage", "res", "cap", "ind"])
def test_r0_returns_offset_plus_origin(self, symbol):
"""At R0, pin position = origin + raw offset."""
cx, cy = 160, 80
for pin_idx in range(2):
px, py = pin_position(symbol, pin_idx, cx, cy, rotation=0)
offsets = _PIN_OFFSETS[symbol]
ox, oy = offsets[pin_idx]
assert px == cx + ox
assert py == cy + oy
@pytest.mark.parametrize("symbol", ["voltage", "res", "cap", "ind"])
def test_r90(self, symbol):
"""R90 applies (px, py) -> (-py, px)."""
cx, cy = 160, 80
for pin_idx in range(2):
px, py = pin_position(symbol, pin_idx, cx, cy, rotation=90)
offsets = _PIN_OFFSETS[symbol]
ox, oy = offsets[pin_idx]
assert px == cx + (-oy)
assert py == cy + ox
@pytest.mark.parametrize("symbol", ["voltage", "res", "cap", "ind"])
def test_r180(self, symbol):
"""R180 applies (px, py) -> (-px, -py)."""
cx, cy = 160, 80
for pin_idx in range(2):
px, py = pin_position(symbol, pin_idx, cx, cy, rotation=180)
offsets = _PIN_OFFSETS[symbol]
ox, oy = offsets[pin_idx]
assert px == cx + (-ox)
assert py == cy + (-oy)
@pytest.mark.parametrize("symbol", ["voltage", "res", "cap", "ind"])
def test_r270(self, symbol):
"""R270 applies (px, py) -> (py, -px)."""
cx, cy = 160, 80
for pin_idx in range(2):
px, py = pin_position(symbol, pin_idx, cx, cy, rotation=270)
offsets = _PIN_OFFSETS[symbol]
ox, oy = offsets[pin_idx]
assert px == cx + oy
assert py == cy + (-ox)
def test_unknown_symbol_defaults(self):
"""Unknown symbol uses default pin offsets."""
px, py = pin_position("unknown", 0, 0, 0, rotation=0)
# Default is [(0, 0), (0, 80)]
assert (px, py) == (0, 0)
px2, py2 = pin_position("unknown", 1, 0, 0, rotation=0)
assert (px2, py2) == (0, 80)
class TestRotate:
def test_identity(self):
assert _rotate(10, 20, 0) == (10, 20)
def test_90(self):
assert _rotate(10, 20, 90) == (-20, 10)
def test_180(self):
assert _rotate(10, 20, 180) == (-10, -20)
def test_270(self):
assert _rotate(10, 20, 270) == (20, -10)
def test_invalid_rotation(self):
"""Invalid rotation falls through to identity."""
assert _rotate(10, 20, 45) == (10, 20)
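The rotation table these tests pin down is small enough to restate as code (a sketch of the convention the tests assert, assuming `_rotate` implements exactly these four cases):

```python
def rotate(x: int, y: int, rotation: int) -> tuple[int, int]:
    """90-degree grid rotations used for LTspice symbol pin placement."""
    if rotation == 90:
        return (-y, x)
    if rotation == 180:
        return (-x, -y)
    if rotation == 270:
        return (y, -x)
    return (x, y)  # 0 or any unrecognized angle: identity
```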
class TestAscSchematicRender:
def test_version_header(self):
sch = AscSchematic()
text = sch.render()
assert text.startswith("Version 4\n")
def test_sheet_dimensions(self):
sch = AscSchematic(sheet_w=1200, sheet_h=900)
text = sch.render()
assert "SHEET 1 1200 900" in text
def test_wire_rendering(self):
sch = AscSchematic()
sch.add_wire(80, 96, 176, 96)
text = sch.render()
assert "WIRE 80 96 176 96" in text
def test_component_rendering(self):
sch = AscSchematic()
sch.add_component("res", "R1", "1k", 160, 80)
text = sch.render()
assert "SYMBOL res 160 80 R0" in text
assert "SYMATTR InstName R1" in text
assert "SYMATTR Value 1k" in text
def test_rotated_component(self):
sch = AscSchematic()
sch.add_component("res", "R1", "1k", 160, 80, rotation=90)
text = sch.render()
assert "SYMBOL res 160 80 R90" in text
def test_ground_flag(self):
sch = AscSchematic()
sch.add_ground(80, 176)
text = sch.render()
assert "FLAG 80 176 0" in text
def test_net_label(self):
sch = AscSchematic()
sch.add_net_label("out", 176, 176)
text = sch.render()
assert "FLAG 176 176 out" in text
def test_directive_rendering(self):
sch = AscSchematic()
sch.add_directive(".tran 10m", 80, 300)
text = sch.render()
assert "TEXT 80 300 Left 2 !.tran 10m" in text
def test_chaining(self):
sch = (
AscSchematic()
.add_component("res", "R1", "1k", 160, 80)
.add_wire(80, 96, 176, 96)
.add_ground(80, 176)
)
text = sch.render()
assert "SYMBOL" in text
assert "WIRE" in text
assert "FLAG" in text
class TestAscTemplates:
@pytest.mark.parametrize(
"factory",
[generate_rc_lowpass, generate_voltage_divider, generate_inverting_amp],
)
def test_template_returns_schematic(self, factory):
sch = factory()
assert isinstance(sch, AscSchematic)
@pytest.mark.parametrize(
"factory",
[generate_rc_lowpass, generate_voltage_divider, generate_inverting_amp],
)
def test_template_nonempty(self, factory):
text = factory().render()
assert len(text) > 50
assert "SYMBOL" in text
@pytest.mark.parametrize(
"factory",
[generate_rc_lowpass, generate_voltage_divider, generate_inverting_amp],
)
def test_template_has_expected_components(self, factory):
text = factory().render()
        # Every template should reference at least a resistor or a voltage source symbol
        assert "res" in text or "voltage" in text

tests/test_diff.py Normal file

@@ -0,0 +1,231 @@
"""Tests for diff module: schematic comparison."""
from mcltspice.diff import (
ComponentChange,
SchematicDiff,
_diff_components,
_diff_directives,
_diff_nets,
_diff_wires,
diff_schematics,
)
from mcltspice.schematic import Component, Flag, Schematic, Text, Wire, write_schematic
def _make_schematic(**kwargs) -> Schematic:
"""Helper to build a Schematic with overrides."""
sch = Schematic()
sch.components = kwargs.get("components", [])
sch.wires = kwargs.get("wires", [])
sch.flags = kwargs.get("flags", [])
sch.texts = kwargs.get("texts", [])
return sch
class TestDiffComponents:
def test_added_component(self):
sch_a = _make_schematic(components=[
Component(name="R1", symbol="res", x=160, y=80, rotation=0, mirror=False,
attributes={"Value": "1k"}),
])
sch_b = _make_schematic(components=[
Component(name="R1", symbol="res", x=160, y=80, rotation=0, mirror=False,
attributes={"Value": "1k"}),
Component(name="C1", symbol="cap", x=256, y=176, rotation=0, mirror=False,
attributes={"Value": "100n"}),
])
changes = _diff_components(sch_a, sch_b)
added = [c for c in changes if c.change_type == "added"]
assert len(added) == 1
assert added[0].name == "C1"
def test_removed_component(self):
sch_a = _make_schematic(components=[
Component(name="R1", symbol="res", x=160, y=80, rotation=0, mirror=False,
attributes={"Value": "1k"}),
Component(name="R2", symbol="res", x=320, y=80, rotation=0, mirror=False,
attributes={"Value": "2.2k"}),
])
sch_b = _make_schematic(components=[
Component(name="R1", symbol="res", x=160, y=80, rotation=0, mirror=False,
attributes={"Value": "1k"}),
])
changes = _diff_components(sch_a, sch_b)
removed = [c for c in changes if c.change_type == "removed"]
assert len(removed) == 1
assert removed[0].name == "R2"
def test_modified_value(self):
sch_a = _make_schematic(components=[
Component(name="R1", symbol="res", x=160, y=80, rotation=0, mirror=False,
attributes={"Value": "1k"}),
])
sch_b = _make_schematic(components=[
Component(name="R1", symbol="res", x=160, y=80, rotation=0, mirror=False,
attributes={"Value": "2.2k"}),
])
changes = _diff_components(sch_a, sch_b)
modified = [c for c in changes if c.change_type == "modified"]
assert len(modified) == 1
assert modified[0].old_value == "1k"
assert modified[0].new_value == "2.2k"
def test_moved_component(self):
sch_a = _make_schematic(components=[
Component(name="R1", symbol="res", x=160, y=80, rotation=0, mirror=False,
attributes={"Value": "1k"}),
])
sch_b = _make_schematic(components=[
Component(name="R1", symbol="res", x=320, y=160, rotation=0, mirror=False,
attributes={"Value": "1k"}),
])
changes = _diff_components(sch_a, sch_b)
assert len(changes) == 1
assert changes[0].moved is True
def test_no_changes(self):
comp = Component(name="R1", symbol="res", x=160, y=80, rotation=0, mirror=False,
attributes={"Value": "1k"})
sch = _make_schematic(components=[comp])
changes = _diff_components(sch, sch)
assert len(changes) == 0
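The component-diff cases above describe a name-keyed comparison: components only in B are "added", only in A are "removed", and same-name components are compared by value and then position. A sketch of that logic over plain dicts (hypothetical; the package uses its `Component`/`ComponentChange` classes):

```python
def diff_components_by_name(comps_a, comps_b):
    """Diff two component lists keyed by instance name.

    Components are plain dicts with "name", "value", "x", "y" keys.
    Value changes win over moves: a component with both a new value and
    a new position is reported as a value modification.
    """
    a = {c["name"]: c for c in comps_a}
    b = {c["name"]: c for c in comps_b}
    changes = []
    for name in sorted(b.keys() - a.keys()):
        changes.append({"name": name, "change_type": "added"})
    for name in sorted(a.keys() - b.keys()):
        changes.append({"name": name, "change_type": "removed"})
    for name in sorted(a.keys() & b.keys()):
        old, new = a[name], b[name]
        if old["value"] != new["value"]:
            changes.append({"name": name, "change_type": "modified",
                            "old_value": old["value"],
                            "new_value": new["value"]})
        elif (old["x"], old["y"]) != (new["x"], new["y"]):
            changes.append({"name": name, "change_type": "modified",
                            "moved": True})
    return changes
```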
class TestDiffDirectives:
def test_added_directive(self):
sch_a = _make_schematic(texts=[
Text(80, 296, ".tran 10m", type="spice"),
])
sch_b = _make_schematic(texts=[
Text(80, 296, ".tran 10m", type="spice"),
Text(80, 320, ".meas tran vmax MAX V(out)", type="spice"),
])
changes = _diff_directives(sch_a, sch_b)
added = [c for c in changes if c.change_type == "added"]
assert len(added) == 1
def test_removed_directive(self):
sch_a = _make_schematic(texts=[
Text(80, 296, ".tran 10m", type="spice"),
Text(80, 320, ".op", type="spice"),
])
sch_b = _make_schematic(texts=[
Text(80, 296, ".tran 10m", type="spice"),
])
changes = _diff_directives(sch_a, sch_b)
removed = [c for c in changes if c.change_type == "removed"]
assert len(removed) == 1
def test_modified_directive(self):
sch_a = _make_schematic(texts=[
Text(80, 296, ".tran 10m", type="spice"),
])
sch_b = _make_schematic(texts=[
Text(80, 296, ".tran 50m", type="spice"),
])
changes = _diff_directives(sch_a, sch_b)
modified = [c for c in changes if c.change_type == "modified"]
assert len(modified) == 1
assert modified[0].old_text == ".tran 10m"
assert modified[0].new_text == ".tran 50m"
class TestDiffNets:
def test_added_nets(self):
sch_a = _make_schematic(flags=[Flag(80, 176, "0")])
sch_b = _make_schematic(flags=[Flag(80, 176, "0"), Flag(176, 176, "out")])
added, removed = _diff_nets(sch_a, sch_b)
assert "out" in added
assert len(removed) == 0
def test_removed_nets(self):
sch_a = _make_schematic(flags=[Flag(80, 176, "0"), Flag(176, 176, "out")])
sch_b = _make_schematic(flags=[Flag(80, 176, "0")])
added, removed = _diff_nets(sch_a, sch_b)
assert "out" in removed
assert len(added) == 0
class TestDiffWires:
def test_added_wires(self):
sch_a = _make_schematic(wires=[Wire(80, 96, 176, 96)])
sch_b = _make_schematic(wires=[
Wire(80, 96, 176, 96),
Wire(176, 176, 272, 176),
])
added, removed = _diff_wires(sch_a, sch_b)
assert added == 1
assert removed == 0
def test_removed_wires(self):
sch_a = _make_schematic(wires=[
Wire(80, 96, 176, 96),
Wire(176, 176, 272, 176),
])
sch_b = _make_schematic(wires=[Wire(80, 96, 176, 96)])
added, removed = _diff_wires(sch_a, sch_b)
assert added == 0
assert removed == 1
class TestSchematicDiff:
def test_has_changes_false(self):
diff = SchematicDiff()
assert diff.has_changes is False
def test_has_changes_true(self):
diff = SchematicDiff(wires_added=1)
assert diff.has_changes is True
def test_summary_no_changes(self):
diff = SchematicDiff()
assert "No changes" in diff.summary()
def test_to_dict(self):
diff = SchematicDiff(
component_changes=[
ComponentChange(name="R1", change_type="modified",
old_value="1k", new_value="2.2k")
]
)
d = diff.to_dict()
assert d["has_changes"] is True
assert len(d["component_changes"]) == 1
class TestDiffSchematicsIntegration:
"""Write two schematics to disk and compare them end-to-end."""
def test_full_diff(self, valid_schematic, tmp_path):
# Create "before" schematic
path_a = tmp_path / "before.asc"
write_schematic(valid_schematic, path_a)
# Create "after" schematic with a modified R1 value
modified = Schematic()
modified.flags = list(valid_schematic.flags)
modified.wires = list(valid_schematic.wires)
modified.texts = list(valid_schematic.texts)
modified.components = []
for comp in valid_schematic.components:
if comp.name == "R1":
new_comp = Component(
name=comp.name, symbol=comp.symbol,
x=comp.x, y=comp.y, rotation=comp.rotation, mirror=comp.mirror,
attributes={"Value": "4.7k"},
)
modified.components.append(new_comp)
else:
modified.components.append(comp)
path_b = tmp_path / "after.asc"
write_schematic(modified, path_b)
diff = diff_schematics(path_a, path_b)
assert diff.has_changes
r1_changes = [c for c in diff.component_changes if c.name == "R1"]
assert len(r1_changes) == 1
assert r1_changes[0].old_value == "1k"
assert r1_changes[0].new_value == "4.7k"

tests/test_drc.py Normal file

@@ -0,0 +1,127 @@
"""Tests for drc module: design rule checks on schematic objects."""
from mcltspice.drc import (
DRCResult,
DRCViolation,
Severity,
_check_duplicate_names,
_check_ground,
_check_simulation_directive,
)
from mcltspice.schematic import Schematic, write_schematic
def _run_single_check(check_fn, schematic: Schematic) -> DRCResult:
"""Run a single DRC check function and return results."""
result = DRCResult()
check_fn(schematic, result)
return result
class TestGroundCheck:
def test_missing_ground_detected(self, schematic_no_ground):
result = _run_single_check(_check_ground, schematic_no_ground)
assert not result.passed
assert any(v.rule == "NO_GROUND" for v in result.violations)
def test_ground_present(self, valid_schematic):
result = _run_single_check(_check_ground, valid_schematic)
assert result.passed
assert len(result.violations) == 0
class TestSimDirectiveCheck:
def test_missing_sim_directive_detected(self, schematic_no_sim):
result = _run_single_check(_check_simulation_directive, schematic_no_sim)
assert not result.passed
assert any(v.rule == "NO_SIM_DIRECTIVE" for v in result.violations)
def test_sim_directive_present(self, valid_schematic):
result = _run_single_check(_check_simulation_directive, valid_schematic)
assert result.passed
class TestDuplicateNameCheck:
def test_duplicate_names_detected(self, schematic_duplicate_names):
result = _run_single_check(_check_duplicate_names, schematic_duplicate_names)
assert not result.passed
assert any(v.rule == "DUPLICATE_NAME" for v in result.violations)
def test_unique_names_pass(self, valid_schematic):
result = _run_single_check(_check_duplicate_names, valid_schematic)
assert result.passed
class TestDRCResult:
def test_passed_when_no_errors(self):
result = DRCResult()
result.violations.append(
DRCViolation(rule="TEST", severity=Severity.WARNING, message="warning only")
)
assert result.passed # Warnings don't cause failure
def test_failed_when_errors(self):
result = DRCResult()
result.violations.append(
DRCViolation(rule="TEST", severity=Severity.ERROR, message="error")
)
assert not result.passed
def test_summary_no_violations(self):
result = DRCResult(checks_run=5)
assert "passed" in result.summary().lower()
def test_summary_with_errors(self):
result = DRCResult(checks_run=5)
result.violations.append(
DRCViolation(rule="TEST", severity=Severity.ERROR, message="error")
)
assert "FAILED" in result.summary()
def test_to_dict(self):
result = DRCResult(checks_run=3)
result.violations.append(
DRCViolation(rule="NO_GROUND", severity=Severity.ERROR, message="No ground")
)
d = result.to_dict()
assert d["passed"] is False
assert d["error_count"] == 1
assert len(d["violations"]) == 1
def test_errors_and_warnings_properties(self):
result = DRCResult()
result.violations.append(
DRCViolation(rule="E1", severity=Severity.ERROR, message="err")
)
result.violations.append(
DRCViolation(rule="W1", severity=Severity.WARNING, message="warn")
)
result.violations.append(
DRCViolation(rule="I1", severity=Severity.INFO, message="info")
)
assert len(result.errors) == 1
assert len(result.warnings) == 1
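The `TestDRCResult` cases above encode the key DRC policy: only ERROR-severity violations fail a check, while warnings and info entries are reported but non-fatal. A self-contained sketch of that severity gating (hypothetical names; not the package's actual `DRCResult`):

```python
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    ERROR = "error"
    WARNING = "warning"
    INFO = "info"


@dataclass
class Violation:
    rule: str
    severity: Severity
    message: str


@dataclass
class CheckResult:
    violations: list = field(default_factory=list)

    @property
    def errors(self):
        return [v for v in self.violations if v.severity is Severity.ERROR]

    @property
    def warnings(self):
        return [v for v in self.violations if v.severity is Severity.WARNING]

    @property
    def passed(self):
        # Only ERROR-severity violations fail the check; warnings and
        # info entries do not flip the result.
        return not self.errors
```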
class TestFullDRC:
"""Integration test: write a schematic to disk and run the full DRC pipeline."""
def test_valid_schematic_passes(self, valid_schematic, tmp_path):
"""A valid schematic should pass DRC with no errors."""
from mcltspice.drc import run_drc
path = tmp_path / "valid.asc"
write_schematic(valid_schematic, path)
result = run_drc(path)
# May have warnings (floating nodes etc) but no errors
assert len(result.errors) == 0
def test_no_ground_fails(self, schematic_no_ground, tmp_path):
from mcltspice.drc import run_drc
path = tmp_path / "no_ground.asc"
write_schematic(schematic_no_ground, path)
result = run_drc(path)
assert any(v.rule == "NO_GROUND" for v in result.errors)

tests/test_integration.py Normal file

@@ -0,0 +1,209 @@
"""Integration tests that run actual LTspice simulations.
These tests require LTspice and Wine to be installed. Skip with:
pytest -m 'not integration'
"""
import tempfile
from pathlib import Path
import numpy as np
import pytest
from mcltspice.asc_generator import (
generate_colpitts_oscillator as generate_colpitts_asc,
)
from mcltspice.asc_generator import (
generate_common_emitter_amp as generate_ce_amp_asc,
)
from mcltspice.asc_generator import (
generate_non_inverting_amp as generate_noninv_amp_asc,
)
from mcltspice.asc_generator import (
generate_rc_lowpass as generate_rc_lowpass_asc,
)
from mcltspice.runner import run_simulation
from mcltspice.waveform_math import compute_bandwidth, compute_rms
@pytest.mark.integration
class TestRCLowpass:
"""End-to-end test: RC lowpass filter -> simulate -> verify -3dB point."""
async def test_rc_lowpass_bandwidth(self, ltspice_available):
"""Generate RC lowpass (R=1k, C=100n), simulate, verify fc ~ 1.6 kHz."""
# Generate schematic
sch = generate_rc_lowpass_asc(r="1k", c="100n")
with tempfile.TemporaryDirectory() as tmpdir:
asc_path = Path(tmpdir) / "rc_lowpass.asc"
sch.save(asc_path)
# Simulate
result = await run_simulation(asc_path)
assert result.success, f"Simulation failed: {result.error or result.stderr}"
assert result.raw_data is not None, "No raw data produced"
# Find frequency and V(out) variables
raw = result.raw_data
freq_idx = None
vout_idx = None
for var in raw.variables:
if var.name.lower() == "frequency":
freq_idx = var.index
elif var.name.lower() == "v(out)":
vout_idx = var.index
assert freq_idx is not None, (
f"No frequency variable. Variables: {[v.name for v in raw.variables]}"
)
assert vout_idx is not None, (
f"No V(out) variable. Variables: {[v.name for v in raw.variables]}"
)
# Extract data
freq = np.abs(raw.data[freq_idx])
mag_complex = raw.data[vout_idx]
mag_db = 20.0 * np.log10(np.maximum(np.abs(mag_complex), 1e-30))
# Compute bandwidth
bw = compute_bandwidth(freq, mag_db)
# Expected: fc = 1/(2*pi*1000*100e-9) ~ 1591 Hz
# Allow 20% tolerance for simulation differences
expected_fc = 1.0 / (2 * np.pi * 1000 * 100e-9)
assert bw["bandwidth_hz"] is not None, "Could not compute bandwidth"
assert abs(bw["bandwidth_hz"] - expected_fc) / expected_fc < 0.2, (
f"Bandwidth {bw['bandwidth_hz']:.0f} Hz too far from expected {expected_fc:.0f} Hz"
)
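The expected corner frequency asserted above follows from the first-order RC relation f_c = 1/(2πRC); a quick numeric check with the test's values:

```python
import math

R = 1_000.0   # 1 kOhm
C = 100e-9    # 100 nF
fc = 1.0 / (2 * math.pi * R * C)   # ~1591.5 Hz

# At fc a first-order lowpass is down by 1/sqrt(2), i.e. about -3.01 dB,
# which is the point compute_bandwidth looks for in the magnitude trace.
droop_db = 20 * math.log10(1 / math.sqrt(2))
```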
@pytest.mark.integration
class TestNonInvertingAmp:
"""End-to-end test: non-inverting amp -> simulate -> verify gain."""
async def test_noninv_amp_gain(self, ltspice_available):
"""Generate non-inverting amp (Rf=100k, Rin=10k), verify gain ~ 11 (20.8 dB)."""
sch = generate_noninv_amp_asc(rin="10k", rf="100k")
with tempfile.TemporaryDirectory() as tmpdir:
asc_path = Path(tmpdir) / "noninv_amp.asc"
sch.save(asc_path)
result = await run_simulation(asc_path)
assert result.success, f"Simulation failed: {result.error or result.stderr}"
assert result.raw_data is not None
raw = result.raw_data
freq_idx = None
vout_idx = None
for var in raw.variables:
if var.name.lower() == "frequency":
freq_idx = var.index
elif var.name.lower() == "v(out)":
vout_idx = var.index
assert freq_idx is not None
assert vout_idx is not None
_freq = np.abs(raw.data[freq_idx]) # noqa: F841 — kept for debug
mag = np.abs(raw.data[vout_idx])
mag_db = 20.0 * np.log10(np.maximum(mag, 1e-30))
# At low frequency, gain should be 1 + 100k/10k = 11 = 20.83 dB
# Use first few points (low frequency)
low_freq_gain_db = float(np.mean(mag_db[:5]))
expected_gain_db = 20 * np.log10(11) # 20.83 dB
assert abs(low_freq_gain_db - expected_gain_db) < 2.0, (
f"Low-freq gain {low_freq_gain_db:.1f} dB, expected ~{expected_gain_db:.1f} dB"
)
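The gain target above is the standard non-inverting op-amp relation G = 1 + Rf/Rin, converted to dB:

```python
import math

rf, rin = 100e3, 10e3
gain = 1 + rf / rin               # non-inverting gain: 11
gain_db = 20 * math.log10(gain)   # ~20.83 dB, the test's expected value
```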
@pytest.mark.integration
class TestCommonEmitterAmp:
"""End-to-end test: CE amplifier -> simulate transient -> verify output exists."""
async def test_ce_amp_output(self, ltspice_available):
"""Generate CE amp, simulate transient, verify output has AC content."""
sch = generate_ce_amp_asc()
with tempfile.TemporaryDirectory() as tmpdir:
asc_path = Path(tmpdir) / "ce_amp.asc"
sch.save(asc_path)
result = await run_simulation(asc_path)
assert result.success, f"Simulation failed: {result.error or result.stderr}"
assert result.raw_data is not None
raw = result.raw_data
time_idx = None
vout_idx = None
for var in raw.variables:
if var.name.lower() == "time":
time_idx = var.index
elif var.name.lower() == "v(out)":
vout_idx = var.index
assert time_idx is not None
assert vout_idx is not None
sig = np.real(raw.data[vout_idx])
# Output should not be DC-only -- check peak-to-peak > threshold
pp = float(np.max(sig) - np.min(sig))
assert pp > 0.01, (
f"Output appears DC-only (peak-to-peak={pp:.4f}V). "
"Expected amplified AC signal."
)
# RMS should be non-trivial
rms = float(compute_rms(sig))
assert rms > 0.01, f"Output RMS too low: {rms:.4f}V"
@pytest.mark.integration
class TestColpittsOscillator:
"""End-to-end test: Colpitts oscillator -> simulate -> verify oscillation."""
async def test_colpitts_oscillation(self, ltspice_available):
"""Generate Colpitts oscillator, verify oscillation near expected frequency."""
sch = generate_colpitts_asc()
with tempfile.TemporaryDirectory() as tmpdir:
asc_path = Path(tmpdir) / "colpitts.asc"
sch.save(asc_path)
result = await run_simulation(asc_path)
assert result.success, f"Simulation failed: {result.error or result.stderr}"
assert result.raw_data is not None
raw = result.raw_data
time_idx = None
vcol_idx = None
for var in raw.variables:
if var.name.lower() == "time":
time_idx = var.index
elif var.name.lower() == "v(out)":
vcol_idx = var.index
elif "collector" in var.name.lower() and vcol_idx is None:
vcol_idx = var.index
assert time_idx is not None
assert vcol_idx is not None, (
f"No V(out) or collector signal. Variables: {[v.name for v in raw.variables]}"
)
sig = np.real(raw.data[vcol_idx])
# Oscillator output should have significant AC content
pp = float(np.max(sig) - np.min(sig))
assert pp > 0.1, (
f"Output peak-to-peak {pp:.3f}V too small -- oscillator may not have started"
)
# Expected frequency: f = 1/(2*pi*sqrt(L*C1*C2/(C1+C2)))
# With L=1u, C1=C2=100p: Cseries = 50p
# f = 1/(2*pi*sqrt(1e-6 * 50e-12)) ~ 22.5 MHz
# This is quite high, but we just verify oscillation exists
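The frequency estimate in the comment above can be checked numerically. For a Colpitts tank the two capacitors appear in series across the inductor, so f = 1/(2π·sqrt(L·C1C2/(C1+C2))):

```python
import math

L = 1e-6            # 1 uH
C1 = C2 = 100e-12   # 100 pF each
c_series = C1 * C2 / (C1 + C2)                       # 50 pF
f_osc = 1 / (2 * math.pi * math.sqrt(L * c_series))  # ~22.5 MHz
```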

tests/test_netlist.py Normal file

@@ -0,0 +1,197 @@
"""Tests for netlist module: builder pattern, rendering, template functions."""
import pytest
from mcltspice.netlist import (
Netlist,
buck_converter,
colpitts_oscillator,
common_emitter_amplifier,
differential_amplifier,
h_bridge,
inverting_amplifier,
ldo_regulator,
non_inverting_amplifier,
rc_lowpass,
voltage_divider,
)
class TestNetlistBuilder:
def test_add_resistor(self):
n = Netlist().add_resistor("R1", "in", "out", "10k")
assert len(n.components) == 1
assert n.components[0].name == "R1"
assert n.components[0].value == "10k"
def test_add_capacitor(self):
n = Netlist().add_capacitor("C1", "out", "0", "100n")
assert len(n.components) == 1
assert n.components[0].value == "100n"
def test_add_inductor(self):
n = Netlist().add_inductor("L1", "a", "b", "10u", series_resistance="0.1")
assert "Rser=0.1" in n.components[0].params
def test_chaining(self):
"""Builder methods return self for chaining."""
n = (
Netlist("Test")
.add_resistor("R1", "a", "b", "1k")
.add_capacitor("C1", "b", "0", "1n")
)
assert len(n.components) == 2
def test_add_voltage_source_dc(self):
n = Netlist().add_voltage_source("V1", "in", "0", dc="5")
assert "5" in n.components[0].value
def test_add_voltage_source_ac(self):
n = Netlist().add_voltage_source("V1", "in", "0", ac="1")
assert "AC 1" in n.components[0].value
def test_add_voltage_source_pulse(self):
n = Netlist().add_voltage_source(
"V1", "g", "0", pulse=("0", "5", "0", "1n", "1n", "5u", "10u")
)
rendered = n.render()
assert "PULSE(" in rendered
def test_add_voltage_source_sin(self):
n = Netlist().add_voltage_source(
"V1", "in", "0", sin=("0", "1", "1k")
)
rendered = n.render()
assert "SIN(" in rendered
def test_add_directive(self):
n = Netlist().add_directive(".tran 10m")
assert ".tran 10m" in n.directives
def test_add_meas(self):
n = Netlist().add_meas("tran", "vmax", "MAX V(out)")
assert any("vmax" in d for d in n.directives)
class TestNetlistRender:
def test_render_contains_title(self):
n = Netlist("My Circuit")
text = n.render()
assert "* My Circuit" in text
def test_render_contains_components(self):
n = (
Netlist()
.add_resistor("R1", "in", "out", "10k")
.add_capacitor("C1", "out", "0", "100n")
)
text = n.render()
assert "R1 in out 10k" in text
assert "C1 out 0 100n" in text
def test_render_contains_backanno_and_end(self):
n = Netlist()
text = n.render()
assert ".backanno" in text
assert ".end" in text
def test_render_includes_directive(self):
n = Netlist().add_directive(".ac dec 100 1 1meg")
text = n.render()
assert ".ac dec 100 1 1meg" in text
def test_render_includes_comment(self):
n = Netlist().add_comment("Test comment")
text = n.render()
assert "* Test comment" in text
def test_render_includes_lib(self):
n = Netlist().add_lib("LT1001")
text = n.render()
assert ".lib LT1001" in text
class TestTemplateNetlists:
"""All template functions should return valid Netlist objects."""
@pytest.mark.parametrize(
"factory",
[
voltage_divider,
rc_lowpass,
inverting_amplifier,
non_inverting_amplifier,
differential_amplifier,
common_emitter_amplifier,
buck_converter,
ldo_regulator,
colpitts_oscillator,
h_bridge,
],
)
def test_template_returns_netlist(self, factory):
n = factory()
assert isinstance(n, Netlist)
@pytest.mark.parametrize(
"factory",
[
voltage_divider,
rc_lowpass,
inverting_amplifier,
non_inverting_amplifier,
differential_amplifier,
common_emitter_amplifier,
buck_converter,
ldo_regulator,
colpitts_oscillator,
h_bridge,
],
)
def test_template_has_backanno_and_end(self, factory):
text = factory().render()
assert ".backanno" in text
assert ".end" in text
@pytest.mark.parametrize(
"factory",
[
voltage_divider,
rc_lowpass,
inverting_amplifier,
non_inverting_amplifier,
differential_amplifier,
common_emitter_amplifier,
buck_converter,
ldo_regulator,
colpitts_oscillator,
h_bridge,
],
)
def test_template_has_components(self, factory):
n = factory()
assert len(n.components) > 0
@pytest.mark.parametrize(
"factory",
[
voltage_divider,
rc_lowpass,
inverting_amplifier,
non_inverting_amplifier,
differential_amplifier,
common_emitter_amplifier,
buck_converter,
ldo_regulator,
colpitts_oscillator,
h_bridge,
],
)
def test_template_has_sim_directive(self, factory):
n = factory()
# Should have at least one directive starting with a sim type
sim_types = [".tran", ".ac", ".dc", ".op", ".noise", ".tf"]
text = n.render()
assert any(sim in text.lower() for sim in sim_types), (
f"No simulation directive found in {factory.__name__}"
)

tests/test_new_templates.py Normal file

@@ -0,0 +1,447 @@
"""Tests for the 5 new circuit templates (netlist + asc generator)."""
import pytest
from mcltspice.asc_generator import (
AscSchematic,
generate_boost_converter,
generate_current_mirror,
generate_instrumentation_amp,
generate_sallen_key_lowpass,
generate_transimpedance_amp,
)
from mcltspice.netlist import (
Netlist,
boost_converter,
current_mirror,
instrumentation_amplifier,
sallen_key_lowpass,
transimpedance_amplifier,
)
# ---------------------------------------------------------------------------
# Netlist template tests
# ---------------------------------------------------------------------------
class TestSallenKeyLowpassNetlist:
def test_returns_netlist(self):
n = sallen_key_lowpass()
assert isinstance(n, Netlist)
def test_component_count(self):
n = sallen_key_lowpass()
# V1, Vpos, Vneg, R1, R2, C1, C2, X1 = 8 components
assert len(n.components) == 8
def test_render_contains_key_components(self):
text = sallen_key_lowpass().render()
assert "R1" in text
assert "R2" in text
assert "C1" in text
assert "C2" in text
assert "X1" in text
assert "LT1001" in text
assert ".ac" in text
def test_custom_params(self):
n = sallen_key_lowpass(r1="4.7k", r2="4.7k", c1="22n", c2="22n")
text = n.render()
assert "4.7k" in text
assert "22n" in text
def test_has_backanno_and_end(self):
text = sallen_key_lowpass().render()
assert ".backanno" in text
assert ".end" in text
class TestBoostConverterNetlist:
def test_returns_netlist(self):
n = boost_converter()
assert isinstance(n, Netlist)
def test_component_count(self):
n = boost_converter()
# Vin, Vgate, L1, M1, D1, Cout, Rload = 7 components
assert len(n.components) == 7
def test_render_contains_key_components(self):
text = boost_converter().render()
assert "L1" in text
assert "M1" in text
assert "D1" in text
assert "Cout" in text
assert "Rload" in text
assert "PULSE(" in text
assert ".tran" in text
def test_custom_params(self):
n = boost_converter(ind="22u", r_load="100", v_in="3.3", duty_cycle=0.6)
text = n.render()
assert "22u" in text
assert "100" in text
assert "3.3" in text
def test_has_backanno_and_end(self):
text = boost_converter().render()
assert ".backanno" in text
assert ".end" in text
class TestInstrumentationAmplifierNetlist:
def test_returns_netlist(self):
n = instrumentation_amplifier()
assert isinstance(n, Netlist)
def test_component_count(self):
n = instrumentation_amplifier()
# V1, V2, Vpos, Vneg, X1, X2, R1, R1b, Rgain, X3, R2, R3, R2b, R3b = 14
assert len(n.components) == 14
def test_render_contains_key_components(self):
text = instrumentation_amplifier().render()
assert "X1" in text
assert "X2" in text
assert "X3" in text
assert "Rgain" in text
assert "LT1001" in text
assert ".ac" in text
def test_custom_params(self):
n = instrumentation_amplifier(r1="20k", r_gain="1k")
text = n.render()
assert "20k" in text
assert "1k" in text
def test_has_backanno_and_end(self):
text = instrumentation_amplifier().render()
assert ".backanno" in text
assert ".end" in text
class TestCurrentMirrorNetlist:
def test_returns_netlist(self):
n = current_mirror()
assert isinstance(n, Netlist)
def test_component_count(self):
n = current_mirror()
# Vcc, Rref, Rload, Q1, Q2 = 5 components
assert len(n.components) == 5
def test_render_contains_key_components(self):
text = current_mirror().render()
assert "Q1" in text
assert "Q2" in text
assert "Rref" in text
assert "Rload" in text
assert "2N2222" in text
assert ".op" in text
assert ".tran" in text
def test_custom_params(self):
n = current_mirror(r_ref="4.7k", r_load="2.2k", vcc="5")
text = n.render()
assert "4.7k" in text
assert "2.2k" in text
assert "5" in text
def test_has_backanno_and_end(self):
text = current_mirror().render()
assert ".backanno" in text
assert ".end" in text
class TestTransimpedanceAmplifierNetlist:
def test_returns_netlist(self):
n = transimpedance_amplifier()
assert isinstance(n, Netlist)
def test_component_count(self):
n = transimpedance_amplifier()
# I1, Vpos, Vneg, Rf, Cf, X1 = 6 components
assert len(n.components) == 6
def test_render_contains_key_components(self):
text = transimpedance_amplifier().render()
assert "I1" in text
assert "Rf" in text
assert "Cf" in text
assert "X1" in text
assert "LT1001" in text
assert ".ac" in text
def test_custom_params(self):
n = transimpedance_amplifier(rf="1Meg", cf="0.5p", i_source="10u")
text = n.render()
assert "1Meg" in text
assert "0.5p" in text
assert "10u" in text
def test_has_backanno_and_end(self):
text = transimpedance_amplifier().render()
assert ".backanno" in text
assert ".end" in text
# ---------------------------------------------------------------------------
# ASC generator template tests
# ---------------------------------------------------------------------------
class TestSallenKeyLowpassAsc:
def test_returns_schematic(self):
sch = generate_sallen_key_lowpass()
assert isinstance(sch, AscSchematic)
def test_render_valid(self):
text = generate_sallen_key_lowpass().render()
assert text.startswith("Version 4\n")
assert "SHEET" in text
assert len(text) > 100
def test_contains_expected_symbols(self):
text = generate_sallen_key_lowpass().render()
assert "SYMBOL res" in text
assert "SYMBOL cap" in text
assert "SYMBOL OpAmps/UniversalOpamp2" in text
assert "SYMBOL voltage" in text
def test_custom_params(self):
text = generate_sallen_key_lowpass(r1="4.7k", c1="22n").render()
assert "4.7k" in text
assert "22n" in text
def test_has_simulation_directive(self):
text = generate_sallen_key_lowpass().render()
assert ".ac" in text
class TestBoostConverterAsc:
def test_returns_schematic(self):
sch = generate_boost_converter()
assert isinstance(sch, AscSchematic)
def test_render_valid(self):
text = generate_boost_converter().render()
assert text.startswith("Version 4\n")
assert "SHEET" in text
assert len(text) > 100
def test_contains_expected_symbols(self):
text = generate_boost_converter().render()
assert "SYMBOL ind" in text
assert "SYMBOL nmos" in text
assert "SYMBOL diode" in text
assert "SYMBOL cap" in text
assert "SYMBOL res" in text
def test_custom_params(self):
text = generate_boost_converter(ind="22u", r_load="100").render()
assert "22u" in text
assert "100" in text
def test_has_simulation_directive(self):
text = generate_boost_converter().render()
assert ".tran" in text
class TestInstrumentationAmpAsc:
def test_returns_schematic(self):
sch = generate_instrumentation_amp()
assert isinstance(sch, AscSchematic)
def test_render_valid(self):
text = generate_instrumentation_amp().render()
assert text.startswith("Version 4\n")
assert "SHEET" in text
assert len(text) > 100
def test_contains_expected_symbols(self):
text = generate_instrumentation_amp().render()
# Should have 3 opamps
assert text.count("SYMBOL OpAmps/UniversalOpamp2") == 3
# Should have multiple resistors
assert text.count("SYMBOL res") >= 7
def test_custom_params(self):
text = generate_instrumentation_amp(r1="20k", r_gain="1k").render()
assert "20k" in text
assert "1k" in text
def test_has_simulation_directive(self):
text = generate_instrumentation_amp().render()
assert ".ac" in text
class TestCurrentMirrorAsc:
def test_returns_schematic(self):
sch = generate_current_mirror()
assert isinstance(sch, AscSchematic)
def test_render_valid(self):
text = generate_current_mirror().render()
assert text.startswith("Version 4\n")
assert "SHEET" in text
assert len(text) > 100
def test_contains_expected_symbols(self):
text = generate_current_mirror().render()
# Should have 2 NPN transistors
assert text.count("SYMBOL npn") == 2
assert "SYMBOL res" in text
assert "SYMBOL voltage" in text
def test_custom_params(self):
text = generate_current_mirror(r_ref="4.7k", r_load="2.2k").render()
assert "4.7k" in text
assert "2.2k" in text
def test_has_simulation_directive(self):
text = generate_current_mirror().render()
assert ".op" in text
assert ".tran" in text
class TestTransimpedanceAmpAsc:
def test_returns_schematic(self):
sch = generate_transimpedance_amp()
assert isinstance(sch, AscSchematic)
def test_render_valid(self):
text = generate_transimpedance_amp().render()
assert text.startswith("Version 4\n")
assert "SHEET" in text
assert len(text) > 100
def test_contains_expected_symbols(self):
text = generate_transimpedance_amp().render()
assert "SYMBOL OpAmps/UniversalOpamp2" in text
assert "SYMBOL res" in text
assert "SYMBOL cap" in text
def test_custom_params(self):
text = generate_transimpedance_amp(rf="1Meg", cf="0.5p").render()
assert "1Meg" in text
assert "0.5p" in text
def test_has_simulation_directive(self):
text = generate_transimpedance_amp().render()
assert ".ac" in text
# ---------------------------------------------------------------------------
# Parametrized cross-cutting tests for all 5 new netlist templates
# ---------------------------------------------------------------------------


class TestNewNetlistTemplatesCommon:
    @pytest.mark.parametrize(
        "factory",
        [
            sallen_key_lowpass,
            boost_converter,
            instrumentation_amplifier,
            current_mirror,
            transimpedance_amplifier,
        ],
    )
    def test_returns_netlist(self, factory):
        assert isinstance(factory(), Netlist)

    @pytest.mark.parametrize(
        "factory",
        [
            sallen_key_lowpass,
            boost_converter,
            instrumentation_amplifier,
            current_mirror,
            transimpedance_amplifier,
        ],
    )
    def test_has_backanno_and_end(self, factory):
        text = factory().render()
        assert ".backanno" in text
        assert ".end" in text

    @pytest.mark.parametrize(
        "factory",
        [
            sallen_key_lowpass,
            boost_converter,
            instrumentation_amplifier,
            current_mirror,
            transimpedance_amplifier,
        ],
    )
    def test_has_components(self, factory):
        n = factory()
        assert len(n.components) > 0

    @pytest.mark.parametrize(
        "factory",
        [
            sallen_key_lowpass,
            boost_converter,
            instrumentation_amplifier,
            current_mirror,
            transimpedance_amplifier,
        ],
    )
    def test_has_sim_directive(self, factory):
        text = factory().render()
        sim_types = [".tran", ".ac", ".dc", ".op", ".noise", ".tf"]
        assert any(sim in text.lower() for sim in sim_types), (
            f"No simulation directive found in {factory.__name__}"
        )
# ---------------------------------------------------------------------------
# Parametrized cross-cutting tests for all 5 new ASC templates
# ---------------------------------------------------------------------------


class TestNewAscTemplatesCommon:
    @pytest.mark.parametrize(
        "factory",
        [
            generate_sallen_key_lowpass,
            generate_boost_converter,
            generate_instrumentation_amp,
            generate_current_mirror,
            generate_transimpedance_amp,
        ],
    )
    def test_returns_schematic(self, factory):
        assert isinstance(factory(), AscSchematic)

    @pytest.mark.parametrize(
        "factory",
        [
            generate_sallen_key_lowpass,
            generate_boost_converter,
            generate_instrumentation_amp,
            generate_current_mirror,
            generate_transimpedance_amp,
        ],
    )
    def test_render_nonempty(self, factory):
        text = factory().render()
        assert len(text) > 50
        assert "SYMBOL" in text

    @pytest.mark.parametrize(
        "factory",
        [
            generate_sallen_key_lowpass,
            generate_boost_converter,
            generate_instrumentation_amp,
            generate_current_mirror,
            generate_transimpedance_amp,
        ],
    )
    def test_has_version_and_sheet(self, factory):
        text = factory().render()
        assert "Version 4" in text
        assert "SHEET" in text

@@ -0,0 +1,87 @@
"""Tests for optimizer module helpers: snap_to_preferred, format_engineering."""

import pytest

from mcltspice.optimizer import format_engineering, snap_to_preferred


class TestSnapToPreferred:
    def test_e12_exact_match(self):
        """A value that is already an E12 value should snap to itself."""
        assert snap_to_preferred(4700.0, "E12") == pytest.approx(4700.0, rel=0.01)

    def test_e12_near_value(self):
        """4800 should snap to 4700 (E12)."""
        result = snap_to_preferred(4800.0, "E12")
        assert result == pytest.approx(4700.0, rel=0.05)

    def test_e24_finer_resolution(self):
        """E24 has 5.1, so 5050 should snap to 5100."""
        result = snap_to_preferred(5050.0, "E24")
        assert result == pytest.approx(5100.0, rel=0.05)

    def test_e96_precision(self):
        """E96 should snap to a value very close to the input."""
        result = snap_to_preferred(4750.0, "E96")
        assert result == pytest.approx(4750.0, rel=0.03)

    def test_zero_value(self):
        """Zero should snap to the smallest E-series value."""
        result = snap_to_preferred(0.0, "E12")
        assert result > 0

    def test_negative_value(self):
        """Negative value should snap to the smallest E-series value."""
        result = snap_to_preferred(-100.0, "E12")
        assert result > 0

    def test_sub_ohm(self):
        """Small values (e.g., 0.47 ohms) should snap correctly."""
        result = snap_to_preferred(0.5, "E12")
        assert result == pytest.approx(0.47, rel=0.1)

    def test_megohm_range(self):
        """Large values should snap correctly across decades."""
        result = snap_to_preferred(2_200_000.0, "E12")
        assert result == pytest.approx(2_200_000.0, rel=0.05)

    def test_unknown_series_defaults_to_e12(self):
        """Unknown series name should fall back to E12."""
        result = snap_to_preferred(4800.0, "E6")
        assert result == pytest.approx(4700.0, rel=0.05)


class TestFormatEngineering:
    def test_10k(self):
        assert format_engineering(10_000) == "10k"

    def test_1u(self):
        assert format_engineering(0.000001) == "1u"

    def test_4_7k(self):
        assert format_engineering(4700) == "4.7k"

    def test_zero(self):
        assert format_engineering(0) == "0"

    def test_1_5(self):
        """Values in the unity range should have no suffix."""
        result = format_engineering(1.5)
        assert result == "1.5"

    def test_negative(self):
        result = format_engineering(-4700)
        assert result.startswith("-")
        assert "4.7k" in result

    def test_picofarad(self):
        result = format_engineering(100e-12)
        assert "100p" in result

    def test_milliamp(self):
        result = format_engineering(0.010)
        assert "10m" in result

    def test_large_value(self):
        result = format_engineering(1e9)
        assert "G" in result or "1e" in result
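The snapping behaviour these tests pin down (decade-independent, nearest preferred value, positive floor for zero/negative inputs) can be sketched as decade decomposition plus a nearest-mantissa lookup. This is an illustrative guess, not the actual `mcltspice.optimizer` implementation:

```python
import math

# Hypothetical sketch of E12 snapping; the real snap_to_preferred in
# mcltspice.optimizer may differ in series tables and edge-case handling.
E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]


def snap_e12(value: float) -> float:
    """Snap a value to the nearest E12 preferred value across decades."""
    if value <= 0:
        return min(E12)  # the tests only require a positive fallback
    exponent = math.floor(math.log10(value))
    mantissa = value / 10 ** exponent
    # Include 10.0 so mantissas near the decade boundary can round up.
    best = min(E12 + [10.0], key=lambda m: abs(m - mantissa))
    return best * 10 ** exponent
```

The `+ [10.0]` candidate is the detail that keeps, say, 9.5k from being forced down to 8.2k instead of up to 10k.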

@@ -0,0 +1,123 @@
"""Tests for power_analysis module: average power, efficiency, power factor."""

import numpy as np
import pytest

from mcltspice.power_analysis import (
    compute_average_power,
    compute_efficiency,
    compute_instantaneous_power,
    compute_power_metrics,
)


class TestComputeAveragePower:
    def test_dc_power(self):
        """DC: P = V * I exactly."""
        n = 1000
        t = np.linspace(0, 1.0, n)
        v = np.full(n, 5.0)
        i = np.full(n, 2.0)
        p = compute_average_power(t, v, i)
        assert p == pytest.approx(10.0, rel=1e-6)

    def test_ac_in_phase(self):
        """In-phase AC: P_avg = Vpk*Ipk/2."""
        n = 10000
        t = np.linspace(0, 0.01, n, endpoint=False)  # 1 full period at 100 Hz
        freq = 100.0
        Vpk = 10.0
        Ipk = 2.0
        v = Vpk * np.sin(2 * np.pi * freq * t)
        i = Ipk * np.sin(2 * np.pi * freq * t)
        p = compute_average_power(t, v, i)
        expected = Vpk * Ipk / 2.0  # = Vrms * Irms
        assert p == pytest.approx(expected, rel=0.02)

    def test_ac_quadrature(self):
        """90-degree phase shift: P_avg ~ 0 (reactive power only)."""
        n = 10000
        t = np.linspace(0, 0.01, n, endpoint=False)
        freq = 100.0
        v = np.sin(2 * np.pi * freq * t)
        i = np.cos(2 * np.pi * freq * t)  # 90 deg shifted
        p = compute_average_power(t, v, i)
        assert p == pytest.approx(0.0, abs=0.01)

    def test_short_signal(self):
        assert compute_average_power(np.array([0.0]), np.array([5.0]), np.array([2.0])) == 0.0


class TestComputeEfficiency:
    def test_known_efficiency(self):
        """Input 10W, output 8W -> 80% efficiency."""
        n = 1000
        t = np.linspace(0, 1.0, n)
        vin = np.full(n, 10.0)
        iin = np.full(n, 1.0)  # 10W input
        vout = np.full(n, 8.0)
        iout = np.full(n, 1.0)  # 8W output
        result = compute_efficiency(t, vin, iin, vout, iout)
        assert result["efficiency_percent"] == pytest.approx(80.0, rel=0.01)
        assert result["input_power_watts"] == pytest.approx(10.0, rel=0.01)
        assert result["output_power_watts"] == pytest.approx(8.0, rel=0.01)
        assert result["power_dissipated_watts"] == pytest.approx(2.0, rel=0.01)

    def test_zero_input_power(self):
        """Zero input -> 0% efficiency (avoid division by zero)."""
        n = 100
        t = np.linspace(0, 1.0, n)
        zeros = np.zeros(n)
        result = compute_efficiency(t, zeros, zeros, zeros, zeros)
        assert result["efficiency_percent"] == 0.0


class TestPowerFactor:
    def test_dc_power_factor(self):
        """DC signals (in phase) should have PF = 1.0."""
        n = 1000
        t = np.linspace(0, 1.0, n)
        v = np.full(n, 5.0)
        i = np.full(n, 2.0)
        result = compute_power_metrics(t, v, i)
        assert result["power_factor"] == pytest.approx(1.0, rel=0.01)

    def test_ac_in_phase_power_factor(self):
        """In-phase AC should have PF ~ 1.0."""
        n = 10000
        t = np.linspace(0, 0.01, n, endpoint=False)
        freq = 100.0
        v = np.sin(2 * np.pi * freq * t)
        i = np.sin(2 * np.pi * freq * t)
        result = compute_power_metrics(t, v, i)
        assert result["power_factor"] == pytest.approx(1.0, rel=0.05)

    def test_ac_quadrature_power_factor(self):
        """90-degree phase shift -> PF ~ 0."""
        n = 10000
        t = np.linspace(0, 0.01, n, endpoint=False)
        freq = 100.0
        v = np.sin(2 * np.pi * freq * t)
        i = np.cos(2 * np.pi * freq * t)
        result = compute_power_metrics(t, v, i)
        assert result["power_factor"] == pytest.approx(0.0, abs=0.05)

    def test_empty_signals(self):
        result = compute_power_metrics(np.array([]), np.array([]), np.array([]))
        assert result["power_factor"] == 0.0


class TestInstantaneousPower:
    def test_element_wise(self):
        v = np.array([1.0, 2.0, 3.0])
        i = np.array([0.5, 1.0, 1.5])
        p = compute_instantaneous_power(v, i)
        np.testing.assert_array_almost_equal(p, [0.5, 2.0, 4.5])

    def test_complex_uses_real(self):
        """Should use real parts only."""
        v = np.array([3.0 + 4j])
        i = np.array([2.0 + 1j])
        p = compute_instantaneous_power(v, i)
        assert p[0] == pytest.approx(6.0)  # 3 * 2
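The relationships these tests exercise (P_avg = mean of instantaneous power, PF = P_avg / (Vrms * Irms)) can be sketched in a few lines. This is a hedged illustration of the math, not the `mcltspice.power_analysis` source, and it assumes uniform sampling:

```python
import numpy as np

# Illustrative sketch of average power and power factor; the real functions
# in mcltspice.power_analysis may integrate differently and return dicts.
def avg_power(t, v, i):
    """Mean instantaneous power p(t) = v(t)*i(t), real parts only."""
    if len(t) < 2:
        return 0.0
    return float(np.mean(np.real(v) * np.real(i)))


def power_factor(t, v, i):
    """PF = P_avg / (Vrms * Irms): 1.0 in phase, ~0 in quadrature."""
    v, i = np.real(v), np.real(i)
    vrms = np.sqrt(np.mean(v ** 2))
    irms = np.sqrt(np.mean(i ** 2))
    denom = vrms * irms
    return 0.0 if denom == 0 else avg_power(t, v, i) / denom
```

For a quadrature pair, the sin*cos product averages to zero over a full period, which is why the tests expect PF near 0 there.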

tests/test_raw_parser.py
@@ -0,0 +1,101 @@
"""Tests for raw_parser module: run boundaries, variable lookup, run slicing."""

import numpy as np
import pytest

from mcltspice.raw_parser import _detect_run_boundaries


class TestDetectRunBoundaries:
    def test_single_run(self):
        """Monotonically increasing time -> single run starting at 0."""
        x = np.linspace(0, 1e-3, 500)
        boundaries = _detect_run_boundaries(x)
        assert boundaries == [0]

    def test_multi_run(self):
        """Three runs: time resets to near-zero at each boundary."""
        run1 = np.linspace(0, 1e-3, 100)
        run2 = np.linspace(0, 1e-3, 100)
        run3 = np.linspace(0, 1e-3, 100)
        x = np.concatenate([run1, run2, run3])
        boundaries = _detect_run_boundaries(x)
        assert len(boundaries) == 3
        assert boundaries[0] == 0
        assert boundaries[1] == 100
        assert boundaries[2] == 200

    def test_complex_ac(self):
        """AC analysis with complex frequency axis that resets."""
        run1 = np.logspace(0, 6, 50).astype(np.complex128)
        run2 = np.logspace(0, 6, 50).astype(np.complex128)
        x = np.concatenate([run1, run2])
        boundaries = _detect_run_boundaries(x)
        assert len(boundaries) == 2
        assert boundaries[0] == 0
        assert boundaries[1] == 50

    def test_single_point(self):
        """Single data point -> one run."""
        boundaries = _detect_run_boundaries(np.array([0.0]))
        assert boundaries == [0]


class TestRawFileGetVariable:
    def test_exact_match(self, mock_rawfile):
        """Exact name match returns correct data."""
        result = mock_rawfile.get_variable("V(out)")
        assert result is not None
        assert len(result) == mock_rawfile.points

    def test_case_insensitive(self, mock_rawfile):
        """Variable lookup is case-insensitive (partial match)."""
        result = mock_rawfile.get_variable("v(out)")
        assert result is not None

    def test_partial_match(self, mock_rawfile):
        """Substring match should work: 'out' matches 'V(out)'."""
        result = mock_rawfile.get_variable("out")
        assert result is not None

    def test_missing_variable(self, mock_rawfile):
        """Non-existent variable returns None."""
        result = mock_rawfile.get_variable("V(nonexistent)")
        assert result is None

    def test_get_time(self, mock_rawfile):
        result = mock_rawfile.get_time()
        assert result is not None
        assert len(result) == mock_rawfile.points


class TestRawFileRunData:
    def test_get_run_data_slicing(self, mock_rawfile_stepped):
        """Extracting a single run produces correct point count."""
        run0 = mock_rawfile_stepped.get_run_data(0)
        assert run0.points == 100
        assert run0.n_runs == 1
        assert run0.is_stepped is False

    def test_get_run_data_values(self, mock_rawfile_stepped):
        """Each run has the expected amplitude scaling."""
        for i in range(3):
            run = mock_rawfile_stepped.get_run_data(i)
            sig = run.get_variable("V(out)")
            # Peak amplitude should be approximately (i+1)
            assert float(np.max(np.abs(sig))) == pytest.approx(i + 1, rel=0.1)

    def test_is_stepped(self, mock_rawfile_stepped, mock_rawfile):
        assert mock_rawfile_stepped.is_stepped is True
        assert mock_rawfile.is_stepped is False

    def test_get_variable_with_run(self, mock_rawfile_stepped):
        """get_variable with run= parameter slices correctly."""
        v_run1 = mock_rawfile_stepped.get_variable("V(out)", run=1)
        assert v_run1 is not None
        assert len(v_run1) == 100

    def test_non_stepped_get_run_data(self, mock_rawfile):
        """Getting run data from non-stepped file returns self."""
        run = mock_rawfile.get_run_data(0)
        assert run.points == mock_rawfile.points
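The boundary-detection behaviour asserted above (a new run starts wherever the sweep axis drops back toward its start) admits a very small sketch. This is a plausible reading of what `_detect_run_boundaries` does, not its actual source:

```python
import numpy as np

# Hedged sketch: a run boundary is any index where the (real part of the)
# sweep variable falls below its predecessor, i.e. the axis reset.
def detect_run_boundaries(x):
    """Return the start index of each run in a stepped sweep axis."""
    x = np.real(np.asarray(x))
    if len(x) <= 1:
        return [0]
    boundaries = [0]
    for k in range(1, len(x)):
        if x[k] < x[k - 1]:  # time/frequency reset => new run starts here
            boundaries.append(k)
    return boundaries
```

Taking the real part first is what lets the same logic serve complex AC frequency axes, matching the `test_complex_ac` case.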

tests/test_stability.py
@@ -0,0 +1,107 @@
"""Tests for stability module: gain margin, phase margin from loop gain data."""

import numpy as np

from mcltspice.stability import (
    compute_gain_margin,
    compute_phase_margin,
    compute_stability_metrics,
)


def _second_order_system(freq, wn=1000.0, zeta=0.3):
    """Create a 2nd-order underdamped system: H(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2).

    Returns complex loop gain at the given frequencies.
    """
    s = 1j * 2 * np.pi * freq
    return wn**2 / (s**2 + 2 * zeta * wn * s + wn**2)


class TestGainMargin:
    def test_third_order_system(self):
        """A 3rd-order system crosses -180 phase and has finite gain margin."""
        freq = np.logspace(0, 6, 10000)
        # Three-pole system: K / ((s/w1 + 1) * (s/w2 + 1) * (s/w3 + 1))
        # Phase goes from 0 to -270, so it definitely crosses -180
        w1 = 2 * np.pi * 100
        w2 = 2 * np.pi * 1000
        w3 = 2 * np.pi * 10000
        K = 100.0  # enough gain to have a gain crossover
        s = 1j * 2 * np.pi * freq
        loop_gain = K / ((s / w1 + 1) * (s / w2 + 1) * (s / w3 + 1))
        result = compute_gain_margin(freq, loop_gain)
        assert result["gain_margin_db"] is not None
        assert result["is_stable"] is True
        assert result["gain_margin_db"] > 0
        assert result["phase_crossover_freq_hz"] is not None

    def test_no_phase_crossover(self):
        """A simple first-order system never reaches -180 phase, so GM is infinite."""
        freq = np.logspace(0, 6, 1000)
        s = 1j * 2 * np.pi * freq
        # First-order: 1/(1+s/wn) -- phase goes from 0 to -90
        loop_gain = 1.0 / (1 + s / (2 * np.pi * 1000))
        result = compute_gain_margin(freq, loop_gain)
        assert result["gain_margin_db"] == float("inf")
        assert result["is_stable"] is True

    def test_short_input(self):
        result = compute_gain_margin(np.array([1.0]), np.array([1.0 + 0j]))
        assert result["gain_margin_db"] is None


class TestPhaseMargin:
    def test_third_order_system(self):
        """A 3rd-order system with sufficient gain should have measurable phase margin."""
        freq = np.logspace(0, 6, 10000)
        w1 = 2 * np.pi * 100
        w2 = 2 * np.pi * 1000
        w3 = 2 * np.pi * 10000
        K = 100.0
        s = 1j * 2 * np.pi * freq
        loop_gain = K / ((s / w1 + 1) * (s / w2 + 1) * (s / w3 + 1))
        result = compute_phase_margin(freq, loop_gain)
        assert result["phase_margin_deg"] is not None
        assert result["is_stable"] is True
        assert result["phase_margin_deg"] > 0

    def test_all_gain_below_0db(self):
        """If gain is always below 0 dB, phase margin is infinite (system is stable)."""
        freq = np.logspace(0, 6, 1000)
        s = 1j * 2 * np.pi * freq
        # Very low gain system
        loop_gain = 0.001 / (1 + s / (2 * np.pi * 1000))
        result = compute_phase_margin(freq, loop_gain)
        assert result["phase_margin_deg"] == float("inf")
        assert result["is_stable"] is True
        assert result["gain_crossover_freq_hz"] is None

    def test_short_input(self):
        result = compute_phase_margin(np.array([1.0]), np.array([1.0 + 0j]))
        assert result["phase_margin_deg"] is None


class TestStabilityMetrics:
    def test_comprehensive_output(self):
        """compute_stability_metrics returns all expected fields."""
        freq = np.logspace(0, 6, 5000)
        w1 = 2 * np.pi * 100
        w2 = 2 * np.pi * 1000
        w3 = 2 * np.pi * 10000
        K = 100.0
        s = 1j * 2 * np.pi * freq
        loop_gain = K / ((s / w1 + 1) * (s / w2 + 1) * (s / w3 + 1))
        result = compute_stability_metrics(freq, loop_gain)
        assert "gain_margin" in result
        assert "phase_margin" in result
        assert "bode" in result
        assert "is_stable" in result
        assert len(result["bode"]["frequency_hz"]) == len(freq)

    def test_short_input_structure(self):
        result = compute_stability_metrics(np.array([]), np.array([]))
        assert result["is_stable"] is None
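The phase-margin convention these tests assume is the textbook one: PM = 180° + phase(L) at the gain crossover (|L| = 1), infinite when the gain never reaches 0 dB. A hedged sketch of that computation, not the `mcltspice.stability` source:

```python
import numpy as np

# Illustrative textbook phase-margin computation (assumes magnitude decreases
# through the 0 dB crossing, as in the test systems above).
def phase_margin_deg(freq, loop_gain):
    """PM = 180 + phase at |L| = 0 dB; inf if gain never reaches 0 dB."""
    mag_db = 20 * np.log10(np.abs(loop_gain))
    phase = np.degrees(np.unwrap(np.angle(loop_gain)))
    above = np.where(mag_db >= 0)[0]
    if len(above) == 0:
        return float("inf")  # unconditionally below 0 dB => stable
    k = above[-1]  # last sample still at/above 0 dB
    if k + 1 >= len(freq):
        return 180.0 + phase[k]
    # Linearly interpolate the phase at the exact 0 dB crossing.
    frac = mag_db[k] / (mag_db[k] - mag_db[k + 1])
    return 180.0 + phase[k] + frac * (phase[k + 1] - phase[k])
```

`np.unwrap` matters here: the three-pole test system sweeps phase from 0° toward -270°, and without unwrapping `np.angle` would jump at -180°.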

tests/test_svg_plot.py
@@ -0,0 +1,199 @@
"""Tests for the pure-SVG waveform plot generation module."""

import numpy as np
import pytest

from mcltspice.svg_plot import (
    _format_freq,
    _nice_ticks,
    plot_bode,
    plot_spectrum,
    plot_timeseries,
)

# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------


@pytest.fixture()
def sine_wave():
    """A 1 kHz sine wave sampled at 100 kHz for 10 ms."""
    t = np.linspace(0, 0.01, 1000, endpoint=False)
    v = np.sin(2 * np.pi * 1000 * t)
    return t, v


@pytest.fixture()
def bode_data():
    """Simple first-order lowpass Bode response (fc = 1 kHz)."""
    freq = np.logspace(1, 6, 500)
    fc = 1e3
    mag_db = -10 * np.log10(1 + (freq / fc) ** 2)
    phase_deg = -np.degrees(np.arctan(freq / fc))
    return freq, mag_db, phase_deg


@pytest.fixture()
def spectrum_data():
    """Synthetic FFT spectrum with a peak at 1 kHz."""
    freq = np.logspace(1, 5, 300)
    mag_db = -60 * np.ones_like(freq)
    peak_idx = np.argmin(np.abs(freq - 1e3))
    mag_db[peak_idx] = 0.0
    return freq, mag_db


# ---------------------------------------------------------------------------
# Timeseries
# ---------------------------------------------------------------------------


class TestPlotTimeseries:
    def test_basic(self, sine_wave):
        """A simple sine wave produces a valid SVG with expected elements."""
        t, v = sine_wave
        svg = plot_timeseries(t, v)
        assert svg.startswith("<svg")
        assert "<path" in svg
        assert "Time Domain" in svg

    def test_empty_arrays(self):
        """Empty input should not crash and should return a valid SVG."""
        svg = plot_timeseries([], [])
        assert svg.startswith("<svg")
        assert "</svg>" in svg

    def test_custom_title_and_labels(self, sine_wave):
        """Custom title and ylabel should appear in the SVG output."""
        t, v = sine_wave
        svg = plot_timeseries(t, v, title="My Signal", ylabel="Current (A)")
        assert "My Signal" in svg
        assert "Current (A)" in svg

    def test_svg_dimensions(self, sine_wave):
        """The width/height attributes should match the requested size."""
        t, v = sine_wave
        svg = plot_timeseries(t, v, width=1024, height=768)
        assert 'width="1024"' in svg
        assert 'height="768"' in svg


# ---------------------------------------------------------------------------
# Bode
# ---------------------------------------------------------------------------


class TestPlotBode:
    def test_magnitude_only(self, bode_data):
        """Bode plot without phase produces a valid SVG with one trace."""
        freq, mag_db, _ = bode_data
        svg = plot_bode(freq, mag_db)
        assert svg.startswith("<svg")
        assert "<path" in svg
        assert "Bode Plot" in svg

    def test_with_phase(self, bode_data):
        """Bode plot with phase should contain two <path> elements (mag + phase)."""
        freq, mag_db, phase_deg = bode_data
        svg = plot_bode(freq, mag_db, phase_deg)
        assert svg.startswith("<svg")
        # Two traces -- magnitude and phase
        assert svg.count("<path") >= 2
        # Phase subplot label
        assert "Phase (deg)" in svg

    def test_log_axis_ticks(self, bode_data):
        """Log frequency axis should contain tick labels at powers of 10."""
        freq, mag_db, _ = bode_data
        svg = plot_bode(freq, mag_db)
        # Expect at least some frequency labels like "100", "1k", "10k", "100k"
        found = sum(1 for lbl in ("100", "1k", "10k", "100k") if lbl in svg)
        assert found >= 2, f"Expected log tick labels in SVG; found {found}"


# ---------------------------------------------------------------------------
# Spectrum
# ---------------------------------------------------------------------------


class TestPlotSpectrum:
    def test_basic(self, spectrum_data):
        """A simple spectrum produces a valid SVG."""
        freq, mag_db = spectrum_data
        svg = plot_spectrum(freq, mag_db)
        assert svg.startswith("<svg")
        assert "<path" in svg
        assert "FFT Spectrum" in svg


# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------


class TestNiceTicks:
    def test_simple_range(self):
        ticks = _nice_ticks(0, 10, n_ticks=5)
        assert len(ticks) >= 3
        assert ticks[0] <= 0
        assert ticks[-1] >= 10

    def test_equal_values(self):
        """When vmin == vmax, return a single-element list."""
        ticks = _nice_ticks(5, 5)
        assert ticks == [5]

    def test_negative_range(self):
        ticks = _nice_ticks(-100, -20, n_ticks=5)
        assert ticks[0] <= -100
        assert ticks[-1] >= -20

    def test_small_range(self):
        ticks = _nice_ticks(0.001, 0.005, n_ticks=5)
        assert all(0 <= t <= 0.01 for t in ticks)


class TestFormatFreq:
    def test_hz(self):
        assert _format_freq(1) == "1"
        assert _format_freq(10) == "10"
        assert _format_freq(100) == "100"

    def test_khz(self):
        assert _format_freq(1000) == "1k"
        assert _format_freq(10000) == "10k"
        assert _format_freq(100000) == "100k"

    def test_mhz(self):
        assert _format_freq(1e6) == "1M"
        assert _format_freq(10e6) == "10M"

    def test_ghz(self):
        assert _format_freq(1e9) == "1G"

    def test_zero(self):
        assert _format_freq(0) == "0"
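The tick-label behaviour pinned down by `TestFormatFreq` (plain Hz below 1 kHz, then k/M/G suffixes) is a simple threshold cascade. A hypothetical sketch consistent with those assertions, not the actual `_format_freq` in `mcltspice.svg_plot`:

```python
# Hedged sketch of a frequency tick formatter matching the tested behaviour.
def format_freq(f: float) -> str:
    """Format a frequency in Hz as a compact axis label: 100, 1k, 10M, 1G."""
    for scale, suffix in ((1e9, "G"), (1e6, "M"), (1e3, "k")):
        if f >= scale:
            return f"{f / scale:g}{suffix}"
    return f"{f:g}"  # below 1 kHz (including 0): no suffix
```

The `%g`-style formatting drops trailing zeros, so 1000.0 renders as `1k` rather than `1.0k`.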
class TestSvgDimensions:
    def test_timeseries_dimensions(self):
        t = np.linspace(0, 1, 100)
        v = np.sin(t)
        svg = plot_timeseries(t, v, width=640, height=480)
        assert 'width="640"' in svg
        assert 'height="480"' in svg

    def test_bode_dimensions(self):
        freq = np.logspace(1, 5, 50)
        mag = np.zeros(50)
        svg = plot_bode(freq, mag, width=900, height=600)
        assert 'width="900"' in svg
        assert 'height="600"' in svg

    def test_spectrum_dimensions(self):
        freq = np.logspace(1, 5, 50)
        mag = np.zeros(50)
        svg = plot_spectrum(freq, mag, width=1000, height=500)
        assert 'width="1000"' in svg
        assert 'height="500"' in svg

tests/test_touchstone.py
@@ -0,0 +1,177 @@
"""Tests for touchstone module: format conversion, parsing, S-parameter extraction."""

from pathlib import Path

import numpy as np
import pytest

from mcltspice.touchstone import (
    _detect_ports,
    _to_complex,
    get_s_parameter,
    parse_touchstone,
    s_param_to_db,
)


class TestToComplex:
    def test_ri_format(self):
        """RI: (real, imag) -> complex."""
        c = _to_complex(3.0, 4.0, "RI")
        assert c == complex(3.0, 4.0)

    def test_ma_format(self):
        """MA: (magnitude, angle_deg) -> complex."""
        c = _to_complex(1.0, 0.0, "MA")
        assert c == pytest.approx(complex(1.0, 0.0), abs=1e-10)
        c90 = _to_complex(1.0, 90.0, "MA")
        assert c90.real == pytest.approx(0.0, abs=1e-10)
        assert c90.imag == pytest.approx(1.0, abs=1e-10)

    def test_db_format(self):
        """DB: (mag_db, angle_deg) -> complex."""
        # 0 dB = magnitude 1.0
        c = _to_complex(0.0, 0.0, "DB")
        assert abs(c) == pytest.approx(1.0)
        # 20 dB = magnitude 10.0
        c20 = _to_complex(20.0, 0.0, "DB")
        assert abs(c20) == pytest.approx(10.0, rel=0.01)

    def test_unknown_format_raises(self):
        with pytest.raises(ValueError, match="Unknown format"):
            _to_complex(1.0, 0.0, "XY")


class TestDetectPorts:
    @pytest.mark.parametrize(
        "suffix, expected",
        [
            (".s1p", 1),
            (".s2p", 2),
            (".s3p", 3),
            (".s4p", 4),
            (".S2P", 2),  # case insensitive
        ],
    )
    def test_valid_extensions(self, suffix, expected):
        p = Path(f"test{suffix}")
        assert _detect_ports(p) == expected

    def test_invalid_extension(self):
        with pytest.raises(ValueError, match="Cannot determine port count"):
            _detect_ports(Path("test.txt"))


class TestParseTouchstone:
    def test_parse_s2p(self, tmp_s2p_file):
        """Parse a synthetic .s2p file and verify structure."""
        data = parse_touchstone(tmp_s2p_file)
        assert data.n_ports == 2
        assert data.parameter_type == "S"
        assert data.format_type == "MA"
        assert data.reference_impedance == 50.0
        assert len(data.frequencies) == 3
        assert data.data.shape == (3, 2, 2)

    def test_frequencies_in_hz(self, tmp_s2p_file):
        """Frequencies should be converted to Hz (from GHz)."""
        data = parse_touchstone(tmp_s2p_file)
        # First freq is 1.0 GHz = 1e9 Hz
        assert data.frequencies[0] == pytest.approx(1e9)
        assert data.frequencies[1] == pytest.approx(2e9)
        assert data.frequencies[2] == pytest.approx(3e9)

    def test_s11_values(self, tmp_s2p_file):
        """S11 at first frequency should match input: mag=0.5, angle=-30."""
        data = parse_touchstone(tmp_s2p_file)
        s11 = data.data[0, 0, 0]
        assert abs(s11) == pytest.approx(0.5, rel=0.01)
        assert np.degrees(np.angle(s11)) == pytest.approx(-30.0, abs=1.0)

    def test_comments_parsed(self, tmp_s2p_file):
        data = parse_touchstone(tmp_s2p_file)
        assert len(data.comments) > 0

    def test_s1p_file(self, tmp_path):
        """Parse a minimal .s1p file."""
        content = (
            "# MHZ S RI R 50\n"
            "100 0.5 0.3\n"
            "200 0.4 0.2\n"
        )
        p = tmp_path / "test.s1p"
        p.write_text(content)
        data = parse_touchstone(p)
        assert data.n_ports == 1
        assert data.data.shape == (2, 1, 1)
        # 100 MHz = 100e6 Hz
        assert data.frequencies[0] == pytest.approx(100e6)

    def test_db_format_file(self, tmp_path):
        """Parse a .s1p file in DB format."""
        content = (
            "# GHZ S DB R 50\n"
            "1.0 -3.0 -45\n"
            "2.0 -6.0 -90\n"
        )
        p = tmp_path / "dbtest.s1p"
        p.write_text(content)
        data = parse_touchstone(p)
        assert data.format_type == "DB"
        # -3 dB -> magnitude ~ 0.707
        assert abs(data.data[0, 0, 0]) == pytest.approx(10 ** (-3.0 / 20.0), rel=0.01)


class TestSParamToDb:
    def test_unity_magnitude(self):
        """Magnitude 1.0 -> 0 dB."""
        vals = np.array([1.0 + 0j])
        db = s_param_to_db(vals)
        assert db[0] == pytest.approx(0.0, abs=0.01)

    def test_known_magnitude(self):
        """Magnitude 0.1 -> -20 dB."""
        vals = np.array([0.1 + 0j])
        db = s_param_to_db(vals)
        assert db[0] == pytest.approx(-20.0, abs=0.1)

    def test_zero_magnitude(self):
        """Zero magnitude should not produce -inf (floored)."""
        vals = np.array([0.0 + 0j])
        db = s_param_to_db(vals)
        assert np.isfinite(db[0])
        assert db[0] < -200


class TestGetSParameter:
    def test_1_based_indexing(self, tmp_s2p_file):
        data = parse_touchstone(tmp_s2p_file)
        freqs, vals = get_s_parameter(data, 1, 1)
        assert len(freqs) == 3
        assert len(vals) == 3

    def test_s21(self, tmp_s2p_file):
        data = parse_touchstone(tmp_s2p_file)
        # S21 is stored at (row=0, col=1) -> get_s_parameter(data, 1, 2)
        # because the parser iterates row then col, and Touchstone 2-port
        # order is S11, S21, S12, S22 -> (0,0), (0,1), (1,0), (1,1)
        freqs, vals = get_s_parameter(data, 1, 2)
        # S21 at first freq: mag=0.9, angle=-10
        assert abs(vals[0]) == pytest.approx(0.9, rel=0.01)

    def test_out_of_range_row(self, tmp_s2p_file):
        data = parse_touchstone(tmp_s2p_file)
        with pytest.raises(IndexError, match="Row index"):
            get_s_parameter(data, 3, 1)

    def test_out_of_range_col(self, tmp_s2p_file):
        data = parse_touchstone(tmp_s2p_file)
        with pytest.raises(IndexError, match="Column index"):
            get_s_parameter(data, 1, 3)

    def test_zero_index_raises(self, tmp_s2p_file):
        data = parse_touchstone(tmp_s2p_file)
        with pytest.raises(IndexError):
            get_s_parameter(data, 0, 1)
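The RI/MA/DB conversions exercised by `TestToComplex` follow directly from the Touchstone format definitions: RI is (real, imaginary), MA is (linear magnitude, angle in degrees), DB is (20*log10 magnitude, angle in degrees). A self-contained sketch of that math, assumed to mirror what `_to_complex` does:

```python
import cmath
import math

# Hedged sketch of Touchstone number-pair conversion; the real helper is
# mcltspice.touchstone._to_complex.
def to_complex(a: float, b: float, fmt: str) -> complex:
    """Convert a Touchstone (a, b) pair to a complex S-parameter."""
    if fmt == "RI":  # a = real part, b = imaginary part
        return complex(a, b)
    if fmt == "MA":  # a = linear magnitude, b = angle in degrees
        return cmath.rect(a, math.radians(b))
    if fmt == "DB":  # a = magnitude in dB (20*log10), b = angle in degrees
        return cmath.rect(10 ** (a / 20.0), math.radians(b))
    raise ValueError(f"Unknown format: {fmt}")
```

`cmath.rect(r, phi)` builds the complex number from polar form, which keeps the MA and DB branches identical except for the magnitude decoding.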

tests/test_waveform_expr.py
@@ -0,0 +1,165 @@
"""Tests for waveform_expr module: tokenizer, parser, expression evaluator."""

import numpy as np
import pytest

from mcltspice.waveform_expr import (
    WaveformCalculator,
    _tokenize,
    _TokenType,
    evaluate_expression,
)

# ---------------------------------------------------------------------------
# Tokenizer tests
# ---------------------------------------------------------------------------


class TestTokenizer:
    def test_number_tokens(self):
        tokens = _tokenize("42 3.14 1e-3")
        nums = [t for t in tokens if t.type == _TokenType.NUMBER]
        assert len(nums) == 3
        assert nums[0].value == "42"
        assert nums[1].value == "3.14"
        assert nums[2].value == "1e-3"

    def test_signal_tokens(self):
        tokens = _tokenize("V(out) + I(R1)")
        signals = [t for t in tokens if t.type == _TokenType.SIGNAL]
        assert len(signals) == 2
        assert signals[0].value == "V(out)"
        assert signals[1].value == "I(R1)"

    def test_operator_tokens(self):
        tokens = _tokenize("1 + 2 - 3 * 4 / 5")
        ops = [t for t in tokens if t.type not in (_TokenType.NUMBER, _TokenType.EOF)]
        types = [t.type for t in ops]
        assert types == [_TokenType.PLUS, _TokenType.MINUS, _TokenType.STAR, _TokenType.SLASH]

    def test_function_tokens(self):
        tokens = _tokenize("abs(V(out))")
        funcs = [t for t in tokens if t.type == _TokenType.FUNC]
        assert len(funcs) == 1
        assert funcs[0].value == "abs"

    def test_case_insensitive_functions(self):
        tokens = _tokenize("dB(V(out))")
        funcs = [t for t in tokens if t.type == _TokenType.FUNC]
        assert funcs[0].value == "db"

    def test_bare_identifier(self):
        tokens = _tokenize("time + 1")
        signals = [t for t in tokens if t.type == _TokenType.SIGNAL]
        assert len(signals) == 1
        assert signals[0].value == "time"

    def test_invalid_character_raises(self):
        with pytest.raises(ValueError, match="Unexpected character"):
            _tokenize("V(out) @ 2")

    def test_eof_token(self):
        tokens = _tokenize("1")
        assert tokens[-1].type == _TokenType.EOF


# ---------------------------------------------------------------------------
# Expression evaluator tests (scalar via numpy scalars)
# ---------------------------------------------------------------------------


class TestEvaluateExpression:
    def test_addition(self):
        result = evaluate_expression("2 + 3", {})
        assert float(result) == pytest.approx(5.0)

    def test_multiplication(self):
        result = evaluate_expression("4 * 5", {})
        assert float(result) == pytest.approx(20.0)

    def test_precedence(self):
        """Multiplication binds tighter than addition: 2+3*4=14."""
        result = evaluate_expression("2 + 3 * 4", {})
        assert float(result) == pytest.approx(14.0)

    def test_unary_minus(self):
        result = evaluate_expression("-5 + 3", {})
        assert float(result) == pytest.approx(-2.0)

    def test_nested_parens(self):
        result = evaluate_expression("(2 + 3) * (4 - 1)", {})
        assert float(result) == pytest.approx(15.0)

    def test_division_by_near_zero(self):
        """Division by near-zero uses a safe floor to avoid inf."""
        result = evaluate_expression("1 / 0", {})
        # Should return a very large number, not inf
        assert np.isfinite(result)

    def test_db_function(self):
        """dB(x) = 20 * log10(|x|)."""
        result = evaluate_expression("db(10)", {})
        assert float(result) == pytest.approx(20.0, rel=0.01)

    def test_abs_function(self):
        result = evaluate_expression("abs(-7)", {})
        assert float(result) == pytest.approx(7.0)

    def test_sqrt_function(self):
        result = evaluate_expression("sqrt(16)", {})
        assert float(result) == pytest.approx(4.0)

    def test_log10_function(self):
        result = evaluate_expression("log10(1000)", {})
        assert float(result) == pytest.approx(3.0)

    def test_signal_lookup(self):
        """Expression referencing a variable by name."""
        variables = {"V(out)": np.array([1.0, 2.0, 3.0])}
        result = evaluate_expression("V(out) * 2", variables)
        np.testing.assert_array_almost_equal(result, [2.0, 4.0, 6.0])

    def test_unknown_signal_raises(self):
        with pytest.raises(ValueError, match="Unknown signal"):
            evaluate_expression("V(missing)", {"V(out)": np.array([1.0])})

    def test_unknown_function_raises(self):
        # 'sin' is not in the supported function set -- the tokenizer treats
        # it as a signal name "sin(1)", so the error is "Unknown signal"
        with pytest.raises(ValueError, match="Unknown signal"):
            evaluate_expression("sin(1)", {})

    def test_malformed_expression(self):
        with pytest.raises(ValueError):
            evaluate_expression("2 +", {})

    def test_case_insensitive_signal(self):
        """Signal lookup is case-insensitive."""
        variables = {"V(OUT)": np.array([10.0])}
        result = evaluate_expression("V(out)", variables)
        np.testing.assert_array_almost_equal(result, [10.0])


# ---------------------------------------------------------------------------
# WaveformCalculator tests
# ---------------------------------------------------------------------------


class TestWaveformCalculator:
    def test_calc_available_signals(self, mock_rawfile):
        calc = WaveformCalculator(mock_rawfile)
        signals = calc.available_signals()
        assert "time" in signals
        assert "V(out)" in signals

    def test_calc_expression(self, mock_rawfile):
        calc = WaveformCalculator(mock_rawfile)
        result = calc.calc("V(out) * 2")
        expected = np.real(mock_rawfile.data[1]) * 2
        np.testing.assert_array_almost_equal(result, expected)

    def test_calc_db(self, mock_rawfile):
        calc = WaveformCalculator(mock_rawfile)
        result = calc.calc("db(V(out))")
        # db should produce real values
        assert np.all(np.isfinite(result))

tests/test_waveform_math.py
@@ -0,0 +1,184 @@
"""Tests for waveform_math module: RMS, peak-to-peak, FFT, THD, bandwidth, settling, rise time."""

import numpy as np
import pytest

from mcltspice.waveform_math import (
    compute_bandwidth,
    compute_fft,
    compute_peak_to_peak,
    compute_rise_time,
    compute_rms,
    compute_settling_time,
    compute_thd,
)


class TestComputeRms:
    def test_dc_signal_exact(self, dc_signal):
        """RMS of a DC signal equals its DC value."""
        rms = compute_rms(dc_signal)
        assert rms == pytest.approx(3.3, abs=1e-10)

    def test_sine_vpk_over_sqrt2(self, sine_1khz):
        """RMS of a pure sine (1 V peak) should be 1/sqrt(2)."""
        rms = compute_rms(sine_1khz)
        assert rms == pytest.approx(1.0 / np.sqrt(2), rel=0.01)

    def test_empty_signal(self):
        """RMS of an empty array is 0."""
        assert compute_rms(np.array([])) == 0.0

    def test_single_sample(self):
        """RMS of a single sample equals abs(sample)."""
        assert compute_rms(np.array([5.0])) == pytest.approx(5.0)

    def test_complex_signal(self):
        """RMS uses only the real part of complex data."""
        sig = np.array([3.0 + 4j, 3.0 + 4j])
        rms = compute_rms(sig)
        assert rms == pytest.approx(3.0)


class TestComputePeakToPeak:
    def test_sine_wave(self, sine_1khz):
        """Peak-to-peak of a 1 V peak sine should be ~2 V."""
        result = compute_peak_to_peak(sine_1khz)
        assert result["peak_to_peak"] == pytest.approx(2.0, rel=0.01)
        assert result["max"] == pytest.approx(1.0, rel=0.01)
        assert result["min"] == pytest.approx(-1.0, rel=0.01)
        assert result["mean"] == pytest.approx(0.0, abs=0.01)

    def test_dc_signal(self, dc_signal):
        """Peak-to-peak of a DC signal is 0."""
        result = compute_peak_to_peak(dc_signal)
        assert result["peak_to_peak"] == pytest.approx(0.0)

    def test_empty_signal(self):
        result = compute_peak_to_peak(np.array([]))
        assert result["peak_to_peak"] == 0.0


class TestComputeFft:
    def test_known_sine_peak_at_correct_freq(self, time_array, sine_1khz):
        """A 1 kHz sine should produce a dominant peak at 1 kHz."""
        result = compute_fft(time_array, sine_1khz)
        assert result["fundamental_freq"] == pytest.approx(1000, rel=0.05)
        assert result["dc_offset"] == pytest.approx(0.0, abs=0.01)

    def test_dc_offset_detection(self, time_array):
        """A signal with DC offset should report correct dc_offset."""
        offset = 2.5
        sig = offset + np.sin(2 * np.pi * 1000 * time_array)
        result = compute_fft(time_array, sig)
        assert result["dc_offset"] == pytest.approx(offset, rel=0.05)

    def test_short_signal(self):
        """Very short signals return empty results."""
        result = compute_fft(np.array([0.0]), np.array([1.0]))
        assert result["frequencies"] == []
        assert result["fundamental_freq"] == 0.0

    def test_zero_dt(self):
        """Time array with zero duration returns gracefully."""
        result = compute_fft(np.array([1.0, 1.0]), np.array([1.0, 2.0]))
        assert result["frequencies"] == []


class TestComputeThd:
    def test_pure_sine_low_thd(self, time_array, sine_1khz):
        """A pure sine wave should have very low THD."""
        result = compute_thd(time_array, sine_1khz)
        assert result["thd_percent"] < 1.0
        assert result["fundamental_freq"] == pytest.approx(1000, rel=0.05)

    def test_clipped_sine_high_thd(self, time_array, sine_1khz):
        """A hard-clipped sine should have significantly higher THD."""
        clipped = np.clip(sine_1khz, -0.5, 0.5)
        result = compute_thd(time_array, clipped)
        # Clipping at 50% introduces substantial harmonics
        assert result["thd_percent"] > 10.0

    def test_short_signal(self):
        result = compute_thd(np.array([0.0]), np.array([1.0]))
        assert result["thd_percent"] == 0.0
class TestComputeBandwidth:
def test_lowpass_cutoff(self, ac_frequency, lowpass_response):
"""Lowpass with fc=1kHz should report bandwidth near 1 kHz."""
result = compute_bandwidth(ac_frequency, lowpass_response)
assert result["bandwidth_hz"] == pytest.approx(1000, rel=0.1)
assert result["type"] == "lowpass"
def test_all_above_cutoff(self, ac_frequency):
"""If all magnitudes are above -3dB level, bandwidth spans entire range."""
flat = np.zeros_like(ac_frequency)
result = compute_bandwidth(ac_frequency, flat)
assert result["bandwidth_hz"] > 0
def test_short_input(self):
result = compute_bandwidth(np.array([1.0]), np.array([0.0]))
assert result["bandwidth_hz"] == 0.0
def test_bandpass_shape(self):
"""A peaked response should be detected as bandpass."""
fc = 10_000.0
Q = 5.0 # Q factor => BW = fc/Q = 2000 Hz
bw_expected = fc / Q
freq = np.logspace(2, 6, 2000)
# Second-order bandpass: H(s) = (s/wn/Q) / (s^2/wn^2 + s/wn/Q + 1)
wn = 2 * np.pi * fc
s = 1j * 2 * np.pi * freq
H = (s / wn / Q) / (s**2 / wn**2 + s / wn / Q + 1)
mag_db = 20.0 * np.log10(np.abs(H))
result = compute_bandwidth(freq, mag_db)
assert result["type"] == "bandpass"
assert result["bandwidth_hz"] == pytest.approx(bw_expected, rel=0.15)
class TestComputeSettlingTime:
def test_already_settled(self, time_array, dc_signal):
"""A constant signal is already settled at t=0."""
t = np.linspace(0, 0.01, len(dc_signal))
result = compute_settling_time(t, dc_signal, final_value=3.3)
assert result["settled"] is True
assert result["settling_time"] == 0.0
def test_step_response(self, time_array, step_signal):
"""Step response should settle after the transient."""
result = compute_settling_time(time_array, step_signal, final_value=1.0)
assert result["settled"] is True
assert result["settling_time"] > 0
def test_never_settles(self, time_array, sine_1khz):
"""An oscillating signal never settles to a DC value."""
result = compute_settling_time(time_array, sine_1khz, final_value=0.5)
assert result["settled"] is False
def test_short_signal(self):
result = compute_settling_time(np.array([0.0]), np.array([1.0]))
assert result["settled"] is False
class TestComputeRiseTime:
def test_fast_step(self):
"""A fast rising step should have a short rise time."""
t = np.linspace(0, 1e-3, 10000)
# Step with very fast exponential rise
sig = np.where(t > 0.1e-3, 1.0 - np.exp(-(t - 0.1e-3) / 20e-6), 0.0)
result = compute_rise_time(t, sig)
assert result["rise_time"] > 0
# 10-90% rise time of RC = ~2.2 * tau
assert result["rise_time"] == pytest.approx(2.2 * 20e-6, rel=0.2)
def test_no_swing(self):
"""Flat signal has zero rise time."""
t = np.linspace(0, 1, 100)
sig = np.ones(100) * 5.0
result = compute_rise_time(t, sig)
assert result["rise_time"] == 0.0
def test_short_signal(self):
result = compute_rise_time(np.array([0.0]), np.array([0.0]))
assert result["rise_time"] == 0.0

71
uv.lock generated

@@ -1026,6 +1026,36 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/73/e4/6d6f14b2a759c622f191b2d67e9075a3f56aaccb3be4bb9bb6890030d0a0/matplotlib-3.10.8-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:1ae029229a57cd1e8fe542485f27e7ca7b23aa9e8944ddb4985d0bc444f1eca2", size = 8713867, upload-time = "2025-12-10T22:56:48.954Z" },
]
[[package]]
name = "mcltspice"
version = "2026.2.10"
source = { editable = "." }
dependencies = [
    { name = "fastmcp" },
    { name = "numpy" },
]

[package.optional-dependencies]
dev = [
    { name = "pytest" },
    { name = "pytest-asyncio" },
    { name = "ruff" },
]
plot = [
    { name = "matplotlib" },
]

[package.metadata]
requires-dist = [
    { name = "fastmcp", specifier = ">=2.0.0" },
    { name = "matplotlib", marker = "extra == 'plot'", specifier = ">=3.7.0" },
    { name = "numpy", specifier = ">=1.24.0" },
    { name = "pytest", marker = "extra == 'dev'", specifier = ">=7.0.0" },
    { name = "pytest-asyncio", marker = "extra == 'dev'", specifier = ">=0.23.0" },
    { name = "ruff", marker = "extra == 'dev'", specifier = ">=0.1.0" },
]
provides-extras = ["dev", "plot"]
[[package]]
name = "mcp"
version = "1.26.0"
@@ -1051,34 +1081,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/fd/d9/eaa1f80170d2b7c5ba23f3b59f766f3a0bb41155fbc32a69adfa1adaaef9/mcp-1.26.0-py3-none-any.whl", hash = "sha256:904a21c33c25aa98ddbeb47273033c435e595bbacfdb177f4bd87f6dceebe1ca", size = 233615, upload-time = "2026-01-24T19:40:30.652Z" },
]
[[package]]
name = "mcp-ltspice"
version = "2026.2.10"
source = { editable = "." }
dependencies = [
    { name = "fastmcp" },
    { name = "numpy" },
]

[package.optional-dependencies]
dev = [
    { name = "pytest" },
    { name = "ruff" },
]
plot = [
    { name = "matplotlib" },
]

[package.metadata]
requires-dist = [
    { name = "fastmcp", specifier = ">=2.0.0" },
    { name = "matplotlib", marker = "extra == 'plot'", specifier = ">=3.7.0" },
    { name = "numpy", specifier = ">=1.24.0" },
    { name = "pytest", marker = "extra == 'dev'", specifier = ">=7.0.0" },
    { name = "ruff", marker = "extra == 'dev'", specifier = ">=0.1.0" },
]
provides-extras = ["dev", "plot"]
[[package]]
name = "mdurl"
version = "0.1.2"
@@ -1602,6 +1604,19 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/3b/ab/b3226f0bd7cdcf710fbede2b3548584366da3b19b5021e74f5bde2a8fa3f/pytest-9.0.2-py3-none-any.whl", hash = "sha256:711ffd45bf766d5264d487b917733b453d917afd2b0ad65223959f59089f875b", size = 374801, upload-time = "2025-12-06T21:30:49.154Z" },
]
[[package]]
name = "pytest-asyncio"
version = "1.3.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "pytest" },
    { name = "typing-extensions", marker = "python_full_version < '3.13'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/90/2c/8af215c0f776415f3590cac4f9086ccefd6fd463befeae41cd4d3f193e5a/pytest_asyncio-1.3.0.tar.gz", hash = "sha256:d7f52f36d231b80ee124cd216ffb19369aa168fc10095013c6b014a34d3ee9e5", size = 50087, upload-time = "2025-11-10T16:07:47.256Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/e5/35/f8b19922b6a25bc0880171a2f1a003eaeb93657475193ab516fd87cac9da/pytest_asyncio-1.3.0-py3-none-any.whl", hash = "sha256:611e26147c7f77640e6d0a92a38ed17c3e9848063698d5c93d5aa7aa11cebff5", size = 15075, upload-time = "2025-11-10T16:07:45.537Z" },
]
[[package]]
name = "python-dateutil"
version = "2.9.0.post0"