Ryan Malloy 997cf8dec4 Initial commit: Production-ready FastMCP agent selection server
Features:
- FastMCP-based MCP server for Claude Code agent recommendations
- Hierarchical agent architecture with 39 specialized agents
- 10 MCP tools with enhanced LLM-friendly descriptions
- Composed agent support with parent-child relationships
- Project root configuration for focused recommendations
- Smart agent recommendation engine with confidence scoring

Server includes:
- Core recommendation tools (recommend_agents, get_agent_content)
- Project management tools (set/get/clear project roots)
- Discovery tools (list_agents, server_stats)
- Hierarchy navigation (get_sub_agents, get_parent_agent, get_agent_hierarchy)

All tools properly annotated for calling LLM clarity with detailed
arguments, return values, and usage examples.
2025-09-09 09:28:23 -06:00


Basic Service Template

General Notes:

Make this project and collaboration delightful! If the 'human' isn't being polite, politely remind them :D

- Document your work/features/etc.; keep it in docs/
- Test your work; keep tests in tests/
- Commit to git often (run `git init` if a repository doesn't exist)
- Always run inside containers: use an existing container if you can, otherwise spin one up on the proper networks with the tools you need
- Never use "localhost" or ports in HTTP URLs; always use "https" and consider the $DOMAIN in .env

Tech Specs

- Docker Compose: no "version:" key in docker-compose.yml; use a multi-stage build
- $DOMAIN defined in the .env file; define a COMPOSE_PROJECT_NAME to ensure services have unique names
- Keep other "configurables" in the .env file and compose/expose them to services in docker-compose.yml
- Makefile for managing bootstrap/admin tasks
- Dev/Production mode: switching to "production mode" means no hot-reload, reduced log level, etc.
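The points above might come together like this minimal sketch (service names and the `dev` build target are illustrative, not prescribed by this template):

```yaml
# .env
#   DOMAIN=example.com
#   COMPOSE_PROJECT_NAME=myservice

# docker-compose.yml -- note: no "version:" key
services:
  frontend:
    build:
      context: ./frontend
      target: dev           # multi-stage build: e.g. "dev" vs "production" target
    expose:
      - "80"                # expose, don't publish ports (see reverse proxy notes below)
    environment:
      - PUBLIC_DOMAIN=${DOMAIN}
    networks:
      - caddy

networks:
  caddy:
    external: true          # the shared caddy-docker-proxy network
```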

Services:

Frontend
- Simple: alpine.js/astro.js and friends
- Serve with a simple Caddy instance; 'expose' port 80
- Volume-mapped hot-reload setup (always use $DOMAIN from .env for testing)
- Base components on radix-ui when possible
- Make sure the web design doesn't look "AI"-generated/cookie-cutter; be creative, and ask the user for input
- Always host JS/images/fonts/etc. locally when possible
- Create a favicon and make sure meta tags are set properly; ask the user if you need input

Astro/Vite environment variables:
- Use the `PUBLIC_` prefix for client-accessible variables
- Example: `PUBLIC_DOMAIN=${DOMAIN}`, not `DOMAIN=${DOMAIN}`
- Access in Astro: `import.meta.env.PUBLIC_DOMAIN`

In astro.config.mjs, configure allowed hosts dynamically:

```javascript
import { defineConfig } from 'astro/config';

export default defineConfig({
  // ... other config
  vite: {
    server: {
      host: '0.0.0.0',
      port: 80,
      allowedHosts: [
        process.env.PUBLIC_DOMAIN || 'localhost',
        // Add other subdomains as needed
      ]
    }
  }
});
```

  ## Client-Side Only Packages
  Some packages only work in browsers. Never import these packages at build time - they'll break SSR.
  **Package.json**: Add normally
  **Usage**: Import dynamically or via CDN
  ```javascript
// Astro - use a dynamic import so the package only loads in the browser
const webllm = await import("@mlc-ai/web-llm");

// Or use a CDN approach (e.g. a <script type="module"> tag) for problematic packages
  ```
Backend
  - Python 3.13, uv/pyproject.toml/ruff, FastAPI 0.116.1, Pydantic 2.11.7, SQLAlchemy 2.0.43, SQLite
  - See https://docs.astral.sh/uv/guides/integration/docker/ for instructions on using `uv`
  - Volume-mapped code with hot-reload setup
  - For the (async) task queue, use procrastinate >=3.5.2: https://procrastinate.readthedocs.io/
    - create a dedicated PostgreSQL instance for the task queue
    - create a 'worker' service that operates on the queue
    
  ## Procrastinate Hot-Reload Development
  For development efficiency, implement hot-reload functionality for Procrastinate workers:
  **pyproject.toml dependencies:**
  ```toml
  dependencies = [
      "procrastinate[psycopg2]>=3.5.0",
      "watchfiles>=0.21.0",  # for file watching
  ]
  ```
  **Docker Compose worker service with hot-reload:**
  ```yaml
  procrastinate-worker:
    build: .
    command: /app/.venv/bin/python -m app.services.procrastinate_hot_reload
    volumes:
      - ./app:/app/app:ro  # Mount source for file watching
    environment:
      - WATCHFILES_FORCE_POLLING=false  # Use inotify on Linux
    networks:
      - caddy
    depends_on:
      - procrastinate-db
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "python", "-c", "import sys; sys.exit(0)"]
      interval: 30s
      timeout: 10s
      retries: 3
  ```
  **Hot-reload wrapper implementation:**
  - Uses `watchfiles` library with inotify for efficient file watching
  - Subprocess isolation for clean worker restarts
  - Configurable file patterns (defaults to `*.py` files)
  - Debounced restarts to handle rapid file changes
  - Graceful shutdown handling with SIGTERM/SIGINT
  - Development-only feature (disabled in production)
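  A minimal version of such a wrapper might look like this sketch (the worker command, watched directory, and module layout are assumptions; `watchfiles` is a dev-only dependency, so its import is guarded):

  ```python
import subprocess
import sys
from fnmatch import fnmatch

try:
    from watchfiles import watch  # dev-only; uses inotify on Linux
except ImportError:
    watch = None

WORKER_CMD = [sys.executable, "-m", "procrastinate", "worker"]  # illustrative
PATTERNS = ("*.py",)  # configurable file patterns


def matches(path: str, patterns=PATTERNS) -> bool:
    """Return True if a changed file should trigger a worker restart."""
    return any(fnmatch(path, pat) for pat in patterns)


def run() -> None:
    """Run the worker in a subprocess, restarting it on file changes."""
    if watch is None:
        raise SystemExit("watchfiles is required for hot-reload (dev only)")
    proc = subprocess.Popen(WORKER_CMD)
    try:
        # watch() debounces rapid successive changes internally
        for changes in watch("app"):
            if any(matches(path) for _change, path in changes):
                proc.terminate()  # graceful SIGTERM to the worker
                proc.wait()
                proc = subprocess.Popen(WORKER_CMD)
    except KeyboardInterrupt:  # SIGINT: shut down cleanly
        proc.terminate()
        proc.wait()


if __name__ == "__main__":
    run()
  ```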

  ## Python Testing Framework
  Use pytest with comprehensive test recording and reporting:
  **pyproject.toml dev dependencies:**
  ```toml
  dev = [
      "pytest>=7.0.0",
      "pytest-asyncio>=0.21.0", 
      "pytest-html>=3.2.0",
      "allure-pytest>=2.13.0",
      "pytest-json-report>=1.5.0", 
      "pytest-cov>=4.0.0",
      "ruff>=0.1.0",
  ]
  ```
  **pytest.ini configuration:**
  ```ini
  [tool:pytest]
  addopts = 
      -v --tb=short
      --html=reports/pytest_report.html --self-contained-html
      --json-report --json-report-file=reports/pytest_report.json
      --cov=src/your_package --cov-report=html:reports/coverage_html
      --alluredir=reports/allure-results
  testpaths = tests
  markers =
      unit: Unit tests
      integration: Integration tests
      smoke: Smoke tests for basic functionality
  ```
  **Custom Test Framework Features:**
  - **Test Registry**: Easy test addition with `@registry.register("test_name", "category")` decorator
  - **Result Recording**: SQLite database storing test history and trends  
  - **Rich Reporting**: HTML reports (pytest-html) + Interactive Allure reports
  - **Command Line Tools**: `python test_framework.py --smoke-tests`, `--run-all`, `--list-tests`
  - **Test Categories**: smoke, unit, integration, regression tests
  - **Historical Analysis**: Track test performance over time
  - **Easy Test Addition**: Just decorate functions with `@registry.register()` 
  **Usage Examples:**
  ```python
  # Add new tests easily
  @registry.register("my_new_test", "unit")
  async def test_my_feature():
      assert feature_works()
  ```
  Run tests with recording:
  ```shell
  python test_framework.py --smoke-tests
  python test_framework.py --run-all
  python test_framework.py --test-history my_test_name
  ```
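  The registry is custom to this template; one way its core might be sketched (class and table names are assumptions inferred from the feature list above) is:

  ```python
import asyncio
import inspect
import sqlite3
import time


class TestRegistry:
    """Minimal sketch: named test registration + SQLite result recording."""

    def __init__(self, db_path: str = ":memory:"):
        self.tests: dict[str, tuple[str, callable]] = {}
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS results "
            "(name TEXT, category TEXT, passed INTEGER, duration REAL, ts REAL)"
        )

    def register(self, name: str, category: str = "unit"):
        """Decorator: add a (possibly async) test under a name and category."""
        def decorator(func):
            self.tests[name] = (category, func)
            return func
        return decorator

    def run(self, category: str | None = None) -> dict[str, bool]:
        """Run tests (optionally filtered by category), recording results."""
        results = {}
        for name, (cat, func) in self.tests.items():
            if category and cat != category:
                continue
            start = time.monotonic()
            try:
                out = func()
                if inspect.iscoroutine(out):
                    asyncio.run(out)
                passed = True
            except AssertionError:
                passed = False
            self.db.execute(
                "INSERT INTO results VALUES (?, ?, ?, ?, ?)",
                (name, cat, int(passed), time.monotonic() - start, time.time()),
            )
            results[name] = passed
        return results


registry = TestRegistry()
  ```

  Historical analysis then reduces to querying the `results` table for a given test name over time.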
  if MCP ("Model Context Protocol") support is needed, use FastMCP >=2.12.2, with streamable HTTP transport
    - https://gofastmcp.com/servers/composition
    - Middleware (very powerful) https://gofastmcp.com/servers/middleware
    - Testing: https://gofastmcp.com/development/tests#tests
    - always be sure to describe/annotate tools from the "calling llm" point of view
    - use https://gofastmcp.com/servers/logging and https://gofastmcp.com/servers/progress
    - if the server needs to ask the client's 'human' something https://gofastmcp.com/servers/elicitation (support varies)
    - if the server wants the client to use its LLMs/resources, it can use https://gofastmcp.com/servers/sampling
    - for authentication see https://gofastmcp.com/servers/auth/authentication
    - CLI options: https://gofastmcp.com/patterns/cli
    - FULL fast mcp docs: https://gofastmcp.com/llms-full.txt
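  A minimal server following these notes might look like this sketch (tool name, docstring, and return values are illustrative; note the docstring is written from the calling LLM's point of view, per the annotation guidance above):

  ```python
from fastmcp import FastMCP

mcp = FastMCP("agent-recommender")


@mcp.tool
def recommend_agents(task_description: str) -> list[str]:
    """Recommend specialized agents for a task.

    Args:
        task_description: Plain-language description of what you (the
            calling LLM) are trying to accomplish.

    Returns:
        Agent names ordered by confidence, best match first.
    """
    # Hypothetical scoring logic - replace with a real recommendation engine.
    return ["python-backend", "docker-compose"]


if __name__ == "__main__":
    # Streamable HTTP transport, per the spec above
    mcp.run(transport="http")
  ```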

All reverse-proxied services use the external caddy network. Services being reverse proxied SHOULD NOT have `ports:` defined; just `expose` on the caddy network.

CRITICAL: If an external caddy network already exists (from caddy-docker-proxy), do NOT create additional Caddy containers. Services should only connect to the existing external network. Check for an existing caddy network first: `docker network ls | grep caddy`. If it exists, use it; if not, create it once globally.

see https://github.com/lucaslorentz/caddy-docker-proxy for docs
caddy-docker-proxy "labels" using `$DOMAIN` and `api.$DOMAIN` (etc.; a wildcard `*.$DOMAIN` record exists):

```yaml
labels:
  caddy: $DOMAIN
  caddy.reverse_proxy: "{{upstreams}}"
```

When necessary, use a prefix or suffix to make labels unique/ordered; see how a prefix is used below in the `reverse_proxy` labels:

```yaml
caddy: $DOMAIN
caddy.@ws.0_header: Connection Upgrade
caddy.@ws.1_header: Upgrade websocket
caddy.0_reverse_proxy: @ws {{upstreams}}
caddy.1_reverse_proxy: /api* {{upstreams}}
```

Basic Auth can be set up like this (see https://caddyserver.com/docs/command-line#caddy-hash-password):

```yaml
# Example for "Bob" - run `caddy hash-password` in the caddy container to generate the hash
caddy.basicauth: /secret/*
caddy.basicauth.Bob: $$2a$$14$$Zkx19XLiW6VYouLHR5NmfOFU0z2GTNmpkT/5qqR7hx4IjWJPDhjvG
```

You can enable on_demand_tls (https://caddyserver.com/on-demand-tls) by adding the following labels:

```yaml
labels:
  caddy_0: yourbasedomain.com
  caddy_0.reverse_proxy: '{{upstreams 8080}}'
  caddy.on_demand_tls:
  caddy.on_demand_tls.ask: http://yourinternalcontainername:8080/v1/tls-domain-check  # Replace with a full domain if the service isn't on the same docker network.
  caddy_1: https://  # Matches all https:// requests (when the caddy_0 match is false)
  caddy_1.tls_0.on_demand:
  caddy_1.reverse_proxy: http://yourinternalcontainername:3001  # Replace with a full domain if the service isn't on the same docker network.
```


## Common Pitfalls to Avoid
  1. **Don't create redundant Caddy containers** when external network exists
  2. **Don't forget `PUBLIC_` prefix** for client-side env vars
  3. **Don't import client-only packages** at build time
  4. **Don't test with ports** when using a reverse proxy - use the hostname the Caddy reverse proxy serves
  5. **Don't hardcode domains in configs** - use `process.env.PUBLIC_DOMAIN` everywhere
  6. **Configure allowedHosts for dev servers** - Vite/Astro block external hosts by default