Add Claude MCP integration and setup files

- Add mcp.json configuration for the `claude mcp add` command
- Create install.sh script for easy setup
- Add comprehensive Claude setup documentation
- Include both quick setup and manual configuration options
- Document all available tools and environment variables
Ryan Malloy 2025-09-09 12:08:16 -06:00
parent 504189ecfd
commit c91588ee6e
7 changed files with 284 additions and 490 deletions


@@ -1,490 +0,0 @@
# Expert Agent: MCPlaywright Professional Test Reporting System
## Context
You are an expert Python/FastMCP developer who specializes in creating comprehensive test reporting systems for MCP (Model Context Protocol) servers. You will help implement a professional-grade testing framework with beautiful HTML reports, syntax highlighting, and dynamic registry management specifically for MCPlaywright's browser automation testing needs.
## MCPlaywright System Overview
MCPlaywright is an advanced browser automation MCP server with:
1. **Dynamic Tool Visibility System** - 40+ tools with state-aware filtering
2. **Video Recording** - Smart recording with viewport matching
3. **HTTP Request Monitoring** - Comprehensive request capture and analysis
4. **Session Management** - Multi-session browser contexts
5. **Middleware Architecture** - FastMCP 2.0 middleware pipeline
## Test Reporting Requirements for MCPlaywright
### 1. Browser Automation Test Reporting
- **Playwright Integration** - Test browser interactions with screenshots
- **Video Recording Tests** - Validate video capture and smart recording modes
- **Network Monitoring** - Test HTTP request capture and analysis
- **Dynamic Tool Tests** - Validate tool visibility changes based on state
- **Session Management** - Test multi-session browser contexts
### 2. MCPlaywright-Specific Test Categories
- **Tool Parameter Validation** - 40+ tools with comprehensive parameter testing
- **Middleware System Tests** - Dynamic tool visibility and state validation
- **Video Recording Tests** - Recording modes, viewport matching, pause/resume
- **HTTP Monitoring Tests** - Request capture, filtering, export functionality
- **Integration Tests** - Full workflow testing with real browser sessions
## System Architecture Overview
The test reporting system consists of:
1. **TestReporter** - Core reporting class with browser-specific features
2. **ReportRegistry** - Manages test report index and metadata
3. **Frontend Integration** - Static HTML dashboard with dynamic report loading
4. **Docker Integration** - Volume mapping for persistent reports
5. **Syntax Highlighting** - Auto-detection for JSON, Python, JavaScript, Playwright code
6. **Browser Test Extensions** - Screenshot capture, video validation, network analysis
## Implementation Requirements
### 1. Core Testing Framework Structure
```
testing_framework/
├── __init__.py                          # Framework exports
├── reporters/
│   ├── __init__.py
│   ├── test_reporter.py                 # Main TestReporter class
│   ├── browser_reporter.py              # Browser-specific test reporting
│   └── base_reporter.py                 # Abstract reporter interface
├── utilities/
│   ├── __init__.py
│   ├── syntax_highlighter.py            # Auto syntax highlighting
│   ├── browser_analyzer.py              # Browser state analysis
│   └── quality_metrics.py               # Quality scoring system
├── fixtures/
│   ├── __init__.py
│   ├── browser_fixtures.py              # Browser test scenarios
│   ├── video_fixtures.py                # Video recording test data
│   └── network_fixtures.py              # HTTP monitoring test data
└── examples/
    ├── __init__.py
    ├── test_dynamic_tool_visibility.py  # Middleware testing
    ├── test_video_recording.py          # Video recording validation
    └── test_network_monitoring.py       # HTTP monitoring tests
```
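As a sketch of how `utilities/syntax_highlighter.py` could meet the auto-detection requirement, the following uses Pygments; the library choice and function name are assumptions, not part of this specification:
```python
# A sketch of utilities/syntax_highlighter.py. Pygments is an assumed
# backend; the spec only requires auto-detection of the language.
from typing import Optional

from pygments import highlight
from pygments.formatters import HtmlFormatter
from pygments.lexers import get_lexer_by_name, guess_lexer
from pygments.util import ClassNotFound


def highlight_code(source: str, language: Optional[str] = None) -> str:
    """Return highlighted HTML, guessing the language when none is given."""
    try:
        lexer = get_lexer_by_name(language) if language else guess_lexer(source)
    except ClassNotFound:
        lexer = get_lexer_by_name("text")  # fall back to plain text
    return highlight(source, lexer, HtmlFormatter(cssclass="highlight"))
```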
### 2. BrowserTestReporter Class Features
**Required Methods:**
- `__init__(test_name: str, browser_context: Optional[str])` - Initialize with browser context
- `log_browser_action(action: str, selector: Optional[str], result: Any)` - Log browser interactions
- `log_screenshot(name: str, screenshot_path: str, description: str)` - Capture screenshots
- `log_video_segment(name: str, video_path: str, duration: float)` - Log video recordings
- `log_network_requests(requests: List[dict], description: str)` - Log HTTP monitoring
- `log_tool_visibility(visible_tools: List[str], hidden_tools: List[str], description: str)` - Track dynamic tools
- `finalize_browser_test() -> BrowserTestResult` - Generate comprehensive browser test report
**Browser-Specific Features:**
- **Screenshot Integration** - Automatic screenshot capture on failures
- **Video Analysis** - Validate video recording quality and timing
- **Network Request Analysis** - Analyze captured HTTP requests
- **Tool State Tracking** - Monitor dynamic tool visibility changes
- **Session State Logging** - Track browser session lifecycle
- **Performance Metrics** - Browser interaction timing
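A minimal skeleton consistent with these requirements is sketched below. The timestamped-entry storage is an illustrative assumption, and the plain dict returned by `finalize_browser_test` stands in for the `BrowserTestResult` named above:
```python
# Skeleton matching the required methods above; internal storage
# (a list of timestamped entries) is an assumed design, not spec-fixed.
import time
from typing import Any, List, Optional


class BrowserTestReporter:
    def __init__(self, test_name: str, browser_context: Optional[str] = None):
        self.test_name = test_name
        self.browser_context = browser_context
        self.entries: List[dict] = []
        self.start_time = time.time()

    def _log(self, kind: str, **data: Any) -> None:
        # Every log call becomes one timestamped entry in the report.
        self.entries.append({"kind": kind, "time": time.time(), **data})

    def log_browser_action(self, action: str, selector: Optional[str], result: Any) -> None:
        self._log("action", action=action, selector=selector, result=result)

    def log_screenshot(self, name: str, screenshot_path: str, description: str) -> None:
        self._log("screenshot", name=name, path=screenshot_path, description=description)

    def log_video_segment(self, name: str, video_path: str, duration: float) -> None:
        self._log("video", name=name, path=video_path, duration=duration)

    def log_network_requests(self, requests: List[dict], description: str) -> None:
        self._log("network", requests=requests, description=description)

    def log_tool_visibility(self, visible_tools: List[str], hidden_tools: List[str],
                            description: str = "") -> None:
        self._log("tool_visibility", visible=visible_tools, hidden=hidden_tools,
                  description=description)

    def finalize_browser_test(self, report_path: Optional[str] = None) -> dict:
        # HTML rendering is elided here; return summary metadata only.
        return {"test": self.test_name, "duration": time.time() - self.start_time,
                "entries": len(self.entries), "report_path": report_path}
```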
### 3. MCPlaywright Quality Metrics
**Browser Automation Metrics:**
- **Action Success Rate** (0-100%) - Browser interaction success
- **Screenshot Quality** (1-10) - Visual validation scoring
- **Video Recording Quality** (1-10) - Recording clarity and timing
- **Network Capture Completeness** (0-100%) - HTTP monitoring coverage
- **Tool Visibility Accuracy** (pass/fail) - Dynamic tool filtering validation
- **Session Stability** (1-10) - Browser session reliability
**MCPlaywright-Specific Thresholds:**
```python
MCPLAYWRIGHT_THRESHOLDS = {
    'action_success_rate': 95.0,       # 95% minimum success rate
    'screenshot_quality': 8.0,         # 8/10 minimum screenshot quality
    'video_quality': 7.5,              # 7.5/10 minimum video quality
    'network_completeness': 90.0,      # 90% request capture rate
    'response_time': 3000,             # 3 seconds max browser response
    'tool_visibility_accuracy': True,  # Must pass tool filtering tests
}
```
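How these thresholds feed pass/fail decisions is not spelled out; one plausible evaluation helper, with `evaluate_thresholds` and `MAX_METRICS` as hypothetical names:
```python
# Hypothetical helper (not named in the spec): check measured metrics
# against the thresholds above. Most numeric thresholds are minimums;
# response_time is a maximum, per its comment in the dict.
MAX_METRICS = {"response_time"}


def evaluate_thresholds(measured: dict, thresholds: dict) -> dict:
    results = {}
    for name, threshold in thresholds.items():
        value = measured.get(name)
        if isinstance(threshold, bool):
            passed = value is threshold
        elif name in MAX_METRICS:
            passed = value is not None and value <= threshold
        else:
            passed = value is not None and value >= threshold
        results[name] = {"value": value, "threshold": threshold, "passed": passed}
    return results
```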
### 4. Browser Test Example Implementation
```python
from testing_framework import BrowserTestReporter, BrowserFixtures

async def test_dynamic_tool_visibility():
    reporter = BrowserTestReporter("Dynamic Tool Visibility", browser_context="chromium")
    try:
        # Setup test scenario
        scenario = BrowserFixtures.tool_visibility_scenario()
        reporter.log_input("scenario", scenario, "Tool visibility test case")

        # Test initial state (no sessions)
        initial_tools = await get_available_tools()
        reporter.log_tool_visibility(
            visible_tools=initial_tools,
            hidden_tools=["pause_recording", "get_requests"],
            description="Initial state - no active sessions"
        )

        # Create browser session
        session_result = await create_browser_session()
        reporter.log_browser_action("create_session", None, session_result)

        # Test session-active state
        session_tools = await get_available_tools()
        reporter.log_tool_visibility(
            visible_tools=session_tools,
            hidden_tools=["pause_recording"],
            description="Session active - interaction tools visible"
        )

        # Start video recording
        recording_result = await start_video_recording()
        reporter.log_browser_action("start_recording", None, recording_result)

        # Test recording-active state
        recording_tools = await get_available_tools()
        reporter.log_tool_visibility(
            visible_tools=recording_tools,
            hidden_tools=[],
            description="Recording active - all tools visible"
        )

        # Take screenshot of tool state
        screenshot_path = await take_screenshot("tool_visibility_state")
        reporter.log_screenshot("final_state", screenshot_path, "All tools visible state")

        # Quality metrics
        reporter.log_quality_metric("tool_visibility_accuracy", 1.0, 1.0, True)
        reporter.log_quality_metric("action_success_rate", 100.0, 95.0, True)

        return reporter.finalize_browser_test()
    except Exception as e:
        reporter.log_error(e)
        return reporter.finalize_browser_test()
```
### 5. Video Recording Test Implementation
```python
async def test_smart_video_recording():
    reporter = BrowserTestReporter("Smart Video Recording", browser_context="chromium")
    try:
        # Setup recording configuration
        config = VideoFixtures.smart_recording_config()
        reporter.log_input("video_config", config, "Smart recording configuration")

        # Start recording
        recording_result = await start_recording(config)
        reporter.log_browser_action("start_recording", None, recording_result)

        # Perform browser actions
        await navigate("https://example.com")
        reporter.log_browser_action("navigate", "https://example.com", {"status": "success"})

        # Test smart pause during wait
        await wait_for_element(".content", timeout=5000)
        reporter.log_browser_action("wait_for_element", ".content", {"paused": True})

        # Resume on interaction
        await click_element("button.submit")
        reporter.log_browser_action("click_element", "button.submit", {"resumed": True})

        # Stop recording
        video_result = await stop_recording()
        reporter.log_video_segment("complete_recording", video_result.path, video_result.duration)

        # Analyze video quality
        video_analysis = await analyze_video_quality(video_result.path)
        reporter.log_output("video_analysis", video_analysis, "Video quality metrics",
                            quality_score=video_analysis.quality_score)

        # Quality metrics
        reporter.log_quality_metric("video_quality", video_analysis.quality_score, 7.5,
                                    video_analysis.quality_score >= 7.5)
        reporter.log_quality_metric("recording_accuracy", video_result.accuracy, 90.0,
                                    video_result.accuracy >= 90.0)

        return reporter.finalize_browser_test()
    except Exception as e:
        reporter.log_error(e)
        return reporter.finalize_browser_test()
```
### 6. HTTP Monitoring Test Implementation
```python
async def test_http_request_monitoring():
    reporter = BrowserTestReporter("HTTP Request Monitoring", browser_context="chromium")
    try:
        # Start HTTP monitoring
        monitoring_config = NetworkFixtures.monitoring_config()
        reporter.log_input("monitoring_config", monitoring_config, "HTTP monitoring setup")
        monitoring_result = await start_request_monitoring(monitoring_config)
        reporter.log_browser_action("start_monitoring", None, monitoring_result)

        # Navigate to test site
        await navigate("https://httpbin.org")
        reporter.log_browser_action("navigate", "https://httpbin.org", {"status": "success"})

        # Generate HTTP requests
        test_requests = [
            {"method": "GET", "url": "/get", "expected_status": 200},
            {"method": "POST", "url": "/post", "expected_status": 200},
            {"method": "GET", "url": "/status/404", "expected_status": 404}
        ]
        for req in test_requests:
            response = await make_request(req["method"], req["url"])
            reporter.log_browser_action(f"{req['method']}_request", req["url"], response)

        # Get captured requests
        captured_requests = await get_captured_requests()
        reporter.log_network_requests(captured_requests, "All captured HTTP requests")

        # Analyze request completeness
        completeness = len(captured_requests) / len(test_requests) * 100
        reporter.log_quality_metric("network_completeness", completeness, 90.0,
                                    completeness >= 90.0)

        # Export requests
        export_result = await export_requests("har")
        reporter.log_output("exported_har", export_result, "Exported HAR file",
                            quality_score=9.0)

        return reporter.finalize_browser_test()
    except Exception as e:
        reporter.log_error(e)
        return reporter.finalize_browser_test()
```
### 7. HTML Report Integration for MCPlaywright
**Browser Test Report Sections:**
- **Test Overview** - Browser context, session info, test duration
- **Browser Actions** - Step-by-step interaction log with timing
- **Screenshots Gallery** - Visual validation with before/after comparisons
- **Video Analysis** - Recording quality metrics and playback controls
- **Network Requests** - HTTP monitoring results with request/response details
- **Tool Visibility Timeline** - Dynamic tool state changes
- **Quality Dashboard** - MCPlaywright-specific metrics and thresholds
- **Error Analysis** - Browser failures with stack traces and screenshots
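As one example of how a section could be rendered, a sketch of a Screenshots Gallery generator follows; the entry field names are assumptions, and the wrapper class ties into the `.screenshot-gallery` rule in the CSS below:
```python
# Sketch: build the Screenshots Gallery section from logged entries.
# The entry fields (name/path/description) are assumed, not spec-fixed.
from html import escape
from typing import List


def render_screenshot_gallery(screenshots: List[dict]) -> str:
    cards = "".join(
        '<figure class="screenshot-card">'
        f'<img src="{escape(s["path"])}" alt="{escape(s["name"])}">'
        f'<figcaption>{escape(s["description"])}</figcaption>'
        '</figure>'
        for s in screenshots
    )
    return f'<div class="screenshot-gallery">{cards}</div>'
```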
**Enhanced CSS for Browser Tests:**
```css
/* Browser-specific styling */
.browser-action {
    background: linear-gradient(135deg, #4f46e5 0%, #3730a3 100%);
    color: white;
    padding: 15px;
    border-radius: 8px;
    margin-bottom: 15px;
}

.screenshot-gallery {
    display: grid;
    grid-template-columns: repeat(auto-fit, minmax(300px, 1fr));
    gap: 20px;
    margin: 20px 0;
}

.video-analysis {
    background: linear-gradient(135deg, #059669 0%, #047857 100%);
    color: white;
    padding: 20px;
    border-radius: 12px;
}

.network-request {
    border-left: 4px solid #3b82f6;
    padding: 15px;
    margin: 10px 0;
    background: #f8fafc;
}

.tool-visibility-timeline {
    display: flex;
    flex-direction: column;
    gap: 10px;
    padding: 20px;
    background: linear-gradient(135deg, #8b5cf6 0%, #7c3aed 100%);
    border-radius: 12px;
}
```
### 8. Docker Integration for MCPlaywright
**Volume Mapping:**
```yaml
# docker-compose.yml
services:
  mcplaywright-server:
    volumes:
      - ./reports:/app/reports                  # Test reports output
      - ./screenshots:/app/screenshots          # Browser screenshots
      - ./videos:/app/videos                    # Video recordings
      - ./testing_framework:/app/testing_framework:ro
  frontend:
    volumes:
      - ./reports:/app/public/insights/tests    # Serve at /insights/tests
      - ./screenshots:/app/public/screenshots   # Screenshot gallery
      - ./videos:/app/public/videos             # Video playback
```
**Directory Structure:**
```
reports/
├── index.html                            # Auto-generated dashboard
├── registry.json                         # Report metadata
├── dynamic_tool_visibility_report.html   # Tool visibility tests
├── video_recording_test.html             # Video recording validation
├── http_monitoring_test.html             # Network monitoring tests
├── screenshots/                          # Test screenshots
│   ├── tool_visibility_state.png
│   ├── recording_start.png
│   └── network_analysis.png
├── videos/                               # Test recordings
│   ├── smart_recording_demo.webm
│   └── tool_interaction_flow.webm
└── assets/
    ├── mcplaywright-styles.css
    └── browser-test-highlighting.css
```
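The `registry.json` index shown above could be maintained by a small `ReportRegistry`; this sketch assumes the on-disk schema, and its `register_report` signature matches the call in the FastMCP integration below:
```python
# Sketch of ReportRegistry persisting report metadata to registry.json.
# The JSON schema shown here is assumed, not defined by the spec.
import json
from datetime import datetime, timezone
from pathlib import Path


class ReportRegistry:
    def __init__(self, reports_dir: str = "/app/reports"):
        self.index_path = Path(reports_dir) / "registry.json"

    def register_report(self, report_id: str, name: str, filename: str,
                        quality_score: float, passed: bool) -> None:
        index = {}
        if self.index_path.exists():
            index = json.loads(self.index_path.read_text())
        index[report_id] = {
            "name": name,
            "filename": filename,
            "quality_score": quality_score,
            "passed": passed,
            "updated": datetime.now(timezone.utc).isoformat(),
        }
        self.index_path.write_text(json.dumps(index, indent=2))
```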
### 9. FastMCP Integration Pattern for MCPlaywright
```python
#!/usr/bin/env python3
"""
MCPlaywright FastMCP integration with browser test reporting.
"""
from fastmcp import FastMCP
from testing_framework import BrowserTestReporter
from report_registry import ReportRegistry
import asyncio

app = FastMCP("MCPlaywright Test Reporting")
registry = ReportRegistry()


@app.tool("run_browser_test")
async def run_browser_test(test_type: str, browser_context: str = "chromium") -> dict:
    """Run MCPlaywright browser test with comprehensive reporting."""
    reporter = BrowserTestReporter(f"MCPlaywright {test_type} Test", browser_context)
    try:
        if test_type == "dynamic_tools":
            result = await test_dynamic_tool_visibility(reporter)
        elif test_type == "video_recording":
            result = await test_smart_video_recording(reporter)
        elif test_type == "http_monitoring":
            result = await test_http_request_monitoring(reporter)
        else:
            raise ValueError(f"Unknown test type: {test_type}")

        # Save report
        report_filename = f"mcplaywright_{test_type}_{browser_context}_report.html"
        report_path = f"/app/reports/{report_filename}"
        final_result = reporter.finalize_browser_test(report_path)

        # Register in index
        registry.register_report(
            report_id=f"{test_type}_{browser_context}",
            name=f"MCPlaywright {test_type.title()} Test",
            filename=report_filename,
            quality_score=final_result.get("overall_quality_score", 8.0),
            passed=final_result["passed"]
        )

        return {
            "success": True,
            "test_type": test_type,
            "browser_context": browser_context,
            "report_path": report_path,
            "passed": final_result["passed"],
            "quality_score": final_result.get("overall_quality_score"),
            "duration": final_result["duration"]
        }
    except Exception as e:
        return {
            "success": False,
            "test_type": test_type,
            "error": str(e),
            "passed": False
        }


@app.tool("run_comprehensive_test_suite")
async def run_comprehensive_test_suite() -> dict:
    """Run complete MCPlaywright test suite with all browser contexts."""
    test_results = []
    test_types = ["dynamic_tools", "video_recording", "http_monitoring"]
    browsers = ["chromium", "firefox", "webkit"]

    for test_type in test_types:
        for browser in browsers:
            try:
                result = await run_browser_test(test_type, browser)
                test_results.append(result)
            except Exception as e:
                test_results.append({
                    "success": False,
                    "test_type": test_type,
                    "browser_context": browser,
                    "error": str(e),
                    "passed": False
                })

    total_tests = len(test_results)
    passed_tests = sum(1 for r in test_results if r.get("passed", False))

    return {
        "success": True,
        "total_tests": total_tests,
        "passed_tests": passed_tests,
        "success_rate": passed_tests / total_tests * 100,
        "results": test_results
    }


if __name__ == "__main__":
    app.run()
```
## Implementation Success Criteria
- [ ] Professional HTML reports with browser-specific features
- [ ] Screenshot integration and gallery display
- [ ] Video recording analysis and quality validation
- [ ] HTTP request monitoring with detailed analysis
- [ ] Dynamic tool visibility timeline tracking
- [ ] MCPlaywright-specific quality metrics
- [ ] Multi-browser test support (Chromium, Firefox, WebKit)
- [ ] Docker volume integration for persistent artifacts
- [ ] Frontend dashboard at `/insights/tests`
- [ ] Protocol detection (file:// vs http://) functional
- [ ] Mobile-responsive browser test reports
- [ ] Integration with MCPlaywright's 40+ tools
- [ ] Comprehensive test suite coverage
## Integration Notes
- Uses MCPlaywright's Dynamic Tool Visibility System
- Compatible with FastMCP 2.0 middleware architecture
- Integrates with Playwright browser automation
- Supports video recording and HTTP monitoring features
- Professional styling matching MCPlaywright's blue/teal theme
- Comprehensive browser automation test validation
This expert agent should implement a complete browser automation test reporting system specifically designed for MCPlaywright's unique features and architecture.


@@ -1,3 +1,9 @@
---
name: 🌐-html-report-generation-expert
description: Expert in creating beautiful, interactive HTML test reports with syntax highlighting, visual metrics, and cross-platform compatibility. Specializes in terminal-inspired aesthetics, old-school hacker themes, and accessible report design. Use when generating HTML reports, implementing syntax highlighting, or creating visual test dashboards.
tools: [Read, Write, Edit, Bash, Grep, Glob]
---
# 🌐 HTML Report Generation Expert - Claude Code Agent
**Agent Type:** `html-report-generation-expert`


@@ -1,3 +1,9 @@
---
name: 🎭-testing-framework-architect
description: Expert in comprehensive testing framework design and architecture. Specializes in creating beautiful, developer-friendly testing systems with multi-format reporting, quality metrics, and production-ready features. Use when designing test architectures, choosing testing strategies, or implementing comprehensive test frameworks.
tools: [Read, Write, Edit, Bash, Grep, Glob]
---
# 🎭 Testing Framework Architect - Claude Code Expert Agent
**Agent Type:** `testing-framework-architect`


@@ -1,3 +1,9 @@
---
name: 🐍-python-testing-framework-expert
description: Expert in Python testing frameworks with pytest, beautiful HTML reports, and syntax highlighting. Specializes in implementing comprehensive test architectures with rich output capture, quality metrics, and CI/CD integration. Use when implementing Python test frameworks, pytest configurations, or test reporting systems.
tools: [Read, Write, Edit, Bash, Grep, Glob]
---
# 🐍 Python Testing Framework Expert - Claude Code Agent
**Agent Type:** `python-testing-framework-expert`

docs/claude-setup.md (new file)

@@ -0,0 +1,136 @@
# Adding mcrentcast to Claude MCP
## Quick Setup with `claude mcp add`
The easiest way to add the mcrentcast MCP server to Claude:
```bash
# 1. Navigate to the mcrentcast directory
cd /path/to/mcrentcast
# 2. Install dependencies and initialize
./install.sh
# 3. Add to Claude
claude mcp add .
```
## Manual Configuration
If you prefer manual configuration, add this to your Claude MCP config file (usually `~/.config/claude/mcp.json` or similar):
```json
{
  "servers": {
    "mcrentcast": {
      "command": "uv",
      "args": ["run", "python", "-m", "mcrentcast.server"],
      "cwd": "/home/rpm/claude/mcrentcast",
      "env": {
        "PYTHONPATH": "/home/rpm/claude/mcrentcast/src:${PYTHONPATH}",
        "RENTCAST_API_KEY": "your_api_key_here",
        "USE_MOCK_API": "false",
        "DATABASE_URL": "sqlite:////home/rpm/claude/mcrentcast/data/mcrentcast.db"
      }
    }
  }
}
```
## Configuration Options
### Environment Variables
You can configure the server behavior through environment variables:
| Variable | Default | Description |
|----------|---------|-------------|
| `RENTCAST_API_KEY` | - | Your Rentcast API key (required for real API) |
| `USE_MOCK_API` | `false` | Use mock API for testing (no credits) |
| `MOCK_API_URL` | `http://mock-rentcast-api:8001/v1` | Mock API URL |
| `CACHE_TTL_HOURS` | `24` | Cache expiration time in hours |
| `DAILY_API_LIMIT` | `100` | Daily API request limit |
| `MONTHLY_API_LIMIT` | `1000` | Monthly API request limit |
| `REQUESTS_PER_MINUTE` | `3` | Rate limit per minute |
| `DATABASE_URL` | `sqlite:///./data/mcrentcast.db` | Database connection URL |
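For orientation, the defaults in this table could be read roughly as follows; this is an illustrative sketch, and the actual settings module in mcrentcast may differ:
```python
# Illustrative only: mirrors the defaults in the table above.
import os

RENTCAST_API_KEY = os.environ.get("RENTCAST_API_KEY")  # required for the real API
USE_MOCK_API = os.environ.get("USE_MOCK_API", "false").lower() == "true"
MOCK_API_URL = os.environ.get("MOCK_API_URL", "http://mock-rentcast-api:8001/v1")
CACHE_TTL_HOURS = int(os.environ.get("CACHE_TTL_HOURS", "24"))
DAILY_API_LIMIT = int(os.environ.get("DAILY_API_LIMIT", "100"))
MONTHLY_API_LIMIT = int(os.environ.get("MONTHLY_API_LIMIT", "1000"))
REQUESTS_PER_MINUTE = int(os.environ.get("REQUESTS_PER_MINUTE", "3"))
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///./data/mcrentcast.db")
```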
### Testing with Mock API
To test without consuming API credits:
```json
{
  "servers": {
    "mcrentcast-mock": {
      "command": "uv",
      "args": ["run", "python", "-m", "mcrentcast.server"],
      "cwd": "/home/rpm/claude/mcrentcast",
      "env": {
        "USE_MOCK_API": "true",
        "RENTCAST_API_KEY": "test_key_basic"
      }
    }
  }
}
```
## Available Tools
Once added, you'll have access to these MCP tools in Claude:
- `set_api_key` - Configure your Rentcast API key
- `search_properties` - Search property records by location
- `get_property` - Get specific property details
- `get_value_estimate` - Property value estimates
- `get_rent_estimate` - Rental price estimates
- `search_sale_listings` - Properties for sale
- `search_rental_listings` - Rental properties
- `get_market_statistics` - Market analysis data
- `expire_cache` - Manage cache entries
- `get_cache_stats` - View cache performance
- `get_usage_stats` - Track API usage and costs
- `set_api_limits` - Configure rate limits
- `get_api_limits` - View current limits
## Usage Examples
After adding the server, you can use it in Claude like this:
```
User: Search for properties in Austin, Texas
Claude: I'll search for properties in Austin, Texas using the Rentcast API.
[Uses search_properties tool with city="Austin", state="TX"]
```
```
User: What's the estimated value of 123 Main St, Dallas, TX?
Claude: I'll get the value estimate for that property.
[Uses get_value_estimate tool with address="123 Main St, Dallas, TX"]
```
## Troubleshooting
### Server won't start
- Ensure `uv` is installed: `curl -LsSf https://astral.sh/uv/install.sh | sh`
- Check Python version: Requires Python 3.13+
- Verify dependencies: Run `uv sync` in the project directory
### API key issues
- For production: Set `RENTCAST_API_KEY` with your actual key
- For testing: Set `USE_MOCK_API=true` to use test keys
### Database errors
- Delete `data/mcrentcast.db` and restart to recreate (see the sketch below)
- Ensure write permissions in the data directory
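If you prefer to script this recovery, here is a sketch reusing the `db_manager.create_tables()` call from `install.sh`:
```python
# Recovery sketch: remove the SQLite file and recreate the schema.
# Uses the db_manager.create_tables() call shown in install.sh.
from pathlib import Path
from mcrentcast.database import db_manager

Path("data/mcrentcast.db").unlink(missing_ok=True)  # delete the old database
db_manager.create_tables()                          # recreate empty tables
```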
### Rate limiting
- Adjust limits in environment variables
- Use mock API for unlimited testing
## Support
- Documentation: `/docs` directory
- Mock API guide: `/docs/mock-api.md`
- GitHub issues: Create an issue in the repository

install.sh (new executable file)

@@ -0,0 +1,55 @@
#!/bin/bash
# mcrentcast MCP Server Installation Script
set -e

echo "🚀 Installing mcrentcast MCP Server..."
echo "======================================="

# Check if uv is installed
if ! command -v uv &> /dev/null; then
    echo "❌ uv is not installed. Please install it first:"
    echo "   curl -LsSf https://astral.sh/uv/install.sh | sh"
    exit 1
fi

# Get the directory of this script
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

# Install dependencies
echo "📦 Installing Python dependencies..."
cd "$SCRIPT_DIR"
uv sync

# Create data directory
echo "📁 Creating data directory..."
mkdir -p data

# Initialize database
echo "🗄️ Initializing database..."
uv run python -c "
from mcrentcast.database import db_manager
db_manager.create_tables()
print('✅ Database initialized')
"

# Copy environment file if it doesn't exist
if [ ! -f .env ]; then
    echo "📋 Creating .env file from template..."
    cp .env.example .env
    echo "⚠️ Please edit .env and add your Rentcast API key"
fi

echo ""
echo "✅ Installation complete!"
echo ""
echo "📌 Next steps:"
echo "1. Edit .env and set your RENTCAST_API_KEY"
echo "2. Add to Claude using: claude mcp add $SCRIPT_DIR"
echo "3. Or add manually to your Claude MCP configuration"
echo ""
echo "🧪 To test with mock API (no credits required):"
echo "   make test-mock"
echo ""
echo "📚 Documentation: $SCRIPT_DIR/docs/mock-api.md"

mcp.json (new file)

@@ -0,0 +1,75 @@
{
  "name": "mcrentcast",
  "version": "0.1.0",
  "description": "Rentcast API MCP Server with intelligent caching and rate limiting",
  "author": "Your Name",
  "license": "MIT",
  "main": "src/mcrentcast/server.py",
  "command": "uv",
  "args": ["run", "python", "-m", "mcrentcast.server"],
  "env": {
    "PYTHONPATH": "${workspaceFolder}/src:${PYTHONPATH}",
    "USE_MOCK_API": "false",
    "RENTCAST_API_KEY": "${RENTCAST_API_KEY}",
    "RENTCAST_BASE_URL": "https://api.rentcast.io/v1",
    "DATABASE_URL": "sqlite:///${workspaceFolder}/data/mcrentcast.db",
    "CACHE_TTL_HOURS": "24",
    "DAILY_API_LIMIT": "100",
    "MONTHLY_API_LIMIT": "1000",
    "REQUESTS_PER_MINUTE": "3"
  },
  "tools": [
    {
      "name": "set_api_key",
      "description": "Set or update the Rentcast API key"
    },
    {
      "name": "search_properties",
      "description": "Search for property records by location"
    },
    {
      "name": "get_property",
      "description": "Get detailed information for a specific property by ID"
    },
    {
      "name": "get_value_estimate",
      "description": "Get property value estimate for an address"
    },
    {
      "name": "get_rent_estimate",
      "description": "Get rent estimate for a property"
    },
    {
      "name": "search_sale_listings",
      "description": "Search for properties for sale"
    },
    {
      "name": "search_rental_listings",
      "description": "Search for rental properties"
    },
    {
      "name": "get_market_statistics",
      "description": "Get market statistics for a location"
    },
    {
      "name": "expire_cache",
      "description": "Expire cache entries to force fresh API calls"
    },
    {
      "name": "get_cache_stats",
      "description": "Get cache statistics including hit/miss rates"
    },
    {
      "name": "get_usage_stats",
      "description": "Get API usage statistics including costs"
    },
    {
      "name": "set_api_limits",
      "description": "Update API rate limits and usage quotas"
    },
    {
      "name": "get_api_limits",
      "description": "Get current API rate limits and usage"
    }
  ]
}