Add MS Office-themed test dashboard with interactive reporting

- Self-contained HTML dashboard with MS Office 365 design
- pytest plugin captures inputs, outputs, and errors per test
- Unified orchestrator runs pytest + torture tests together
- Test files persisted in reports/test_files/ with relative links
- GitHub Actions workflow with PR comments and job summaries
- Makefile with convenient commands (test, view-dashboard, etc.)
- Works offline with embedded JSON data (no CORS issues)
Ryan Malloy 2026-01-11 00:28:12 -07:00
parent 76c7a0b2d0
commit c935cec7b6
15 changed files with 2721 additions and 2 deletions

124
.github/workflows/test-dashboard.yml vendored Normal file

@@ -0,0 +1,124 @@
name: Test Dashboard
on:
push:
branches: [ main, develop ]
pull_request:
branches: [ main ]
workflow_dispatch: # Allow manual trigger
jobs:
test-and-dashboard:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
cache: 'pip'
- name: Install UV
run: |
curl -LsSf https://astral.sh/uv/install.sh | sh
echo "$HOME/.cargo/bin" >> $GITHUB_PATH
- name: Install dependencies
run: |
uv sync --dev
- name: Run tests with dashboard generation
run: |
python run_dashboard_tests.py
continue-on-error: true # Generate dashboard even if tests fail
- name: Extract test summary
id: test_summary
run: |
TOTAL=$(jq '.summary.total' reports/test_results.json)
PASSED=$(jq '.summary.passed' reports/test_results.json)
FAILED=$(jq '.summary.failed' reports/test_results.json)
SKIPPED=$(jq '.summary.skipped' reports/test_results.json)
PASS_RATE=$(jq '.summary.pass_rate' reports/test_results.json)
echo "total=$TOTAL" >> $GITHUB_OUTPUT
echo "passed=$PASSED" >> $GITHUB_OUTPUT
echo "failed=$FAILED" >> $GITHUB_OUTPUT
echo "skipped=$SKIPPED" >> $GITHUB_OUTPUT
echo "pass_rate=$PASS_RATE" >> $GITHUB_OUTPUT
- name: Upload test dashboard
uses: actions/upload-artifact@v4
if: always()
with:
name: test-dashboard
path: reports/
retention-days: 30
- name: Comment PR with results
if: github.event_name == 'pull_request'
uses: actions/github-script@v7
with:
script: |
const total = ${{ steps.test_summary.outputs.total }};
const passed = ${{ steps.test_summary.outputs.passed }};
const failed = ${{ steps.test_summary.outputs.failed }};
const skipped = ${{ steps.test_summary.outputs.skipped }};
const passRate = ${{ steps.test_summary.outputs.pass_rate }};
const statusEmoji = failed > 0 ? '❌' : '✅';
const passRateEmoji = passRate >= 90 ? '🎉' : passRate >= 70 ? '👍' : '⚠️';
const comment = `## ${statusEmoji} Test Results
| Metric | Value |
|--------|-------|
| Total Tests | ${total} |
| ✅ Passed | ${passed} |
| ❌ Failed | ${failed} |
| ⏭️ Skipped | ${skipped} |
| ${passRateEmoji} Pass Rate | ${passRate.toFixed(1)}% |
### 📊 Interactive Dashboard
[Download test dashboard artifact](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }})
The dashboard includes:
- Detailed test results with inputs/outputs
- Error tracebacks for failed tests
- Category breakdown (Word, Excel, PowerPoint, etc.)
- Interactive filtering and search
**To view**: Download the artifact, extract, and open \`test_dashboard.html\` in your browser.
`;
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: comment
});
- name: Create job summary
if: always()
run: |
echo "# 📊 Test Dashboard Summary" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "## Results" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "- **Total**: ${{ steps.test_summary.outputs.total }} tests" >> $GITHUB_STEP_SUMMARY
echo "- **✅ Passed**: ${{ steps.test_summary.outputs.passed }}" >> $GITHUB_STEP_SUMMARY
echo "- **❌ Failed**: ${{ steps.test_summary.outputs.failed }}" >> $GITHUB_STEP_SUMMARY
echo "- **⏭️ Skipped**: ${{ steps.test_summary.outputs.skipped }}" >> $GITHUB_STEP_SUMMARY
echo "- **📈 Pass Rate**: ${{ steps.test_summary.outputs.pass_rate }}%" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "## 🌐 Dashboard" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "Download the \`test-dashboard\` artifact to view the interactive HTML dashboard." >> $GITHUB_STEP_SUMMARY
- name: Fail job if tests failed
if: steps.test_summary.outputs.failed > 0
run: exit 1

190
ADVANCED_TOOLS_PLAN.md Normal file

@@ -0,0 +1,190 @@
# Advanced MCP Office Tools Enhancement Plan
## Current Status
- ✅ Basic text extraction
- ✅ Image extraction
- ✅ Metadata extraction
- ✅ Format detection
- ✅ Document health analysis
- ✅ Word-to-Markdown conversion
## Missing Advanced Features by Library
### 📊 Excel Tools (openpyxl + pandas + xlsxwriter)
#### Data Analysis & Manipulation
- `analyze_excel_data` - Statistical analysis, data types, missing values
- `create_pivot_table` - Generate pivot tables with aggregations
- `excel_data_validation` - Set dropdown lists, number ranges, date constraints
- `excel_conditional_formatting` - Apply color scales, data bars, icon sets
- `excel_formula_analysis` - Extract, validate, and analyze formulas
- `excel_chart_creation` - Create charts (bar, line, pie, scatter, etc.)
- `excel_worksheet_operations` - Add/delete/rename sheets, copy data
- `excel_merge_spreadsheets` - Combine multiple Excel files intelligently
#### Advanced Excel Features
- `excel_named_ranges` - Create and manage named ranges
- `excel_data_filtering` - Apply AutoFilter and advanced filters
- `excel_cell_styling` - Font, borders, alignment, number formats
- `excel_protection` - Password protect sheets/workbooks
- `excel_hyperlinks` - Add/extract hyperlinks from cells
- `excel_comments_notes` - Add/extract cell comments and notes
### 📝 Word Tools (python-docx + mammoth)
#### Document Structure & Layout
- `word_extract_tables` - Extract tables with styling and structure (see the sketch after this list)
- `word_extract_headers_footers` - Get headers/footers from all sections
- `word_extract_toc` - Extract table of contents with page numbers
- `word_document_structure` - Analyze heading hierarchy and outline
- `word_page_layout_analysis` - Margins, orientation, columns, page breaks
- `word_section_analysis` - Different sections with different formatting
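A minimal sketch of the table-extraction idea, using python-docx's `Document` API; the `word_extract_tables` name and return shape here are illustrative, not the final tool signature:

```python
from docx import Document  # python-docx


def word_extract_tables(path: str) -> dict:
    """Return every table in a .docx file as rows of cell text."""
    doc = Document(path)
    tables = [
        [[cell.text for cell in row.cells] for row in table.rows]
        for table in doc.tables
    ]
    return {"total_tables": len(tables), "tables": tables}
```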
#### Content Management
- `word_find_replace_advanced` - Pattern-based find/replace with formatting
- `word_extract_comments` - Get all comments with author and timestamps
- `word_extract_tracked_changes` - Get revision history and changes
- `word_extract_hyperlinks` - Extract all hyperlinks with context
- `word_extract_footnotes_endnotes` - Get footnotes and endnotes
- `word_style_analysis` - Analyze and extract custom styles
#### Document Generation
- `word_create_document` - Create new Word documents from templates
- `word_merge_documents` - Combine multiple Word documents
- `word_insert_content` - Add text, tables, images at specific locations
- `word_apply_formatting` - Apply consistent formatting across content
### 🎯 PowerPoint Tools (python-pptx)
#### Presentation Analysis
- `ppt_extract_slide_content` - Get text, images, shapes from each slide
- `ppt_extract_speaker_notes` - Get presenter notes for all slides
- `ppt_slide_layout_analysis` - Analyze slide layouts and master slides
- `ppt_extract_animations` - Get animation sequences and timing
- `ppt_presentation_structure` - Outline view with slide hierarchy
#### Content Management
- `ppt_slide_operations` - Add/delete/reorder slides
- `ppt_master_slide_analysis` - Extract master slide templates
- `ppt_shape_analysis` - Analyze text boxes, shapes, SmartArt
- `ppt_media_extraction` - Extract embedded videos and audio
- `ppt_hyperlink_analysis` - Extract slide transitions and hyperlinks
#### Presentation Generation
- `ppt_create_presentation` - Create new presentations from data
- `ppt_slide_generation` - Generate slides from templates and content
- `ppt_chart_integration` - Add charts and graphs to slides
### 🔄 Cross-Format Tools
#### Document Conversion
- `convert_excel_to_word_table` - Convert spreadsheet data to Word tables (sketched after this list)
- `convert_word_table_to_excel` - Extract Word tables to Excel format
- `extract_presentation_data_to_excel` - Convert slide content to spreadsheet
- `create_report_from_data` - Generate Word reports from Excel data
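As a rough illustration of the Excel-to-Word direction, a sketch that pairs openpyxl with python-docx; the function name, parameters, and "Table Grid" styling are assumptions rather than the planned tool's API:

```python
from docx import Document          # python-docx
from openpyxl import load_workbook


def convert_excel_to_word_table(xlsx_path: str, docx_path: str, sheet: str | None = None) -> None:
    """Copy one worksheet's cells into a table in a new Word document."""
    wb = load_workbook(xlsx_path, data_only=True)  # data_only reads cached formula results
    ws = wb[sheet] if sheet else wb.active
    rows = list(ws.iter_rows(values_only=True))
    if not rows:
        raise ValueError(f"Worksheet {ws.title!r} is empty")
    doc = Document()
    table = doc.add_table(rows=len(rows), cols=len(rows[0]), style="Table Grid")
    for r, row_values in enumerate(rows):
        for c, value in enumerate(row_values):
            table.cell(r, c).text = "" if value is None else str(value)
    doc.save(docx_path)
```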
#### Advanced Analysis
- `cross_document_comparison` - Compare content across different formats
- `document_summarization` - AI-powered document summaries
- `extract_key_metrics` - Find numbers, dates, important data across docs
- `document_relationship_analysis` - Find references between documents
### 🎨 Advanced Image & Media Tools
#### Image Processing (Pillow integration)
- `advanced_image_extraction` - Extract with OCR, face detection, object recognition
- `image_format_conversion` - Convert between formats with optimization
- `image_metadata_analysis` - EXIF data, creation dates, camera info
- `image_quality_analysis` - Resolution, compression, clarity metrics
#### Media Analysis
- `extract_embedded_objects` - Get all embedded files (PDFs, other Office docs)
- `analyze_document_media` - Comprehensive media inventory
- `optimize_document_media` - Reduce file sizes by optimizing images
### 📈 Data Science Integration
#### Analytics Tools (pandas + numpy integration)
- `statistical_analysis` - Mean, median, correlations, distributions
- `time_series_analysis` - Trend analysis on date-based data
- `data_cleaning_suggestions` - Identify data quality issues
- `export_for_analysis` - Export to JSON, CSV, Parquet for data science
#### Visualization Preparation
- `prepare_chart_data` - Format data for visualization libraries
- `generate_chart_configs` - Create chart.js, plotly, matplotlib configs
- `data_validation_rules` - Suggest data validation based on content analysis
### 🔐 Security & Compliance Tools
#### Document Security
- `analyze_document_security` - Check for sensitive information
- `redact_sensitive_content` - Remove/mask PII, financial data
- `document_audit_trail` - Track document creation, modification history
- `compliance_checking` - Check against various compliance standards
#### Access Control
- `extract_permissions` - Get document protection and sharing settings
- `password_analysis` - Check password protection strength
- `digital_signature_verification` - Verify document signatures
### 🔧 Automation & Workflow Tools
#### Batch Operations
- `batch_document_processing` - Process multiple documents with same operations
- `template_application` - Apply templates to multiple documents
- `bulk_format_conversion` - Convert multiple files between formats
- `automated_report_generation` - Generate reports from data templates
#### Integration Tools
- `export_to_cms` - Export content to various CMS formats
- `api_integration_prep` - Prepare data for API consumption
- `database_export` - Export structured data to database formats
- `email_template_generation` - Create email templates from documents
## Implementation Priority
### Phase 1: High-Impact Excel Tools 🔥
1. `analyze_excel_data` - Immediate value for data analysis (see the sketch after this list)
2. `create_pivot_table` - High-demand business feature
3. `excel_chart_creation` - Visual data representation
4. `excel_conditional_formatting` - Professional spreadsheet styling
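For orientation, a hedged pandas-based sketch of what `analyze_excel_data` could return; only the `summary.sheets_analyzed` field mirrors the shape reported by the torture tests in this commit, everything else is an assumption:

```python
import pandas as pd


async def analyze_excel_data(file_path: str) -> dict:
    # async to match the mixin interface the torture tests await (no awaits needed in this sketch)
    """Per-sheet row counts, dtypes, missing values, and numeric statistics."""
    sheets = pd.read_excel(file_path, sheet_name=None)  # dict: sheet name -> DataFrame
    report = {}
    for name, df in sheets.items():
        numeric = df.select_dtypes(include="number")
        report[name] = {
            "rows": len(df),
            "dtypes": {col: str(dtype) for col, dtype in df.dtypes.items()},
            "missing_values": df.isna().sum().to_dict(),
            "numeric_summary": numeric.describe().to_dict() if not numeric.empty else {},
        }
    return {"summary": {"sheets_analyzed": list(sheets)}, "sheets": report}
```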
### Phase 2: Advanced Word Processing 📄
1. `word_extract_tables` - Critical for data extraction
2. `word_document_structure` - Essential for navigation
3. `word_find_replace_advanced` - Powerful content management
4. `word_create_document` - Document generation capability
### Phase 3: PowerPoint & Cross-Format 🎯
1. `ppt_extract_slide_content` - Complete presentation analysis
2. `convert_excel_to_word_table` - Cross-format workflows
3. `ppt_create_presentation` - Automated presentation generation
### Phase 4: Advanced Analytics & Security 🚀
1. `statistical_analysis` - Data science integration
2. `analyze_document_security` - Compliance and security
3. `batch_document_processing` - Automation workflows
## Technical Implementation Notes
### Library Extensions Needed
- **openpyxl**: Chart creation, conditional formatting, data validation
- **python-docx**: Advanced styling, document manipulation
- **python-pptx**: Slide generation, animation analysis
- **pandas**: Statistical functions, data analysis tools
- **Pillow**: Advanced image processing features
### New Dependencies to Consider
- **matplotlib/plotly**: Chart generation
- **numpy**: Statistical calculations
- **python-dateutil**: Advanced date parsing
- **regex**: Advanced pattern matching
- **cryptography**: Document security analysis
### Architecture Considerations
- Maintain mixin pattern for clean organization (see the sketch after this list)
- Add result caching for expensive operations
- Implement progress tracking for batch operations
- Add streaming support for large data processing
- Maintain backward compatibility with existing tools
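A minimal sketch of the mixin-plus-caching idea above. `ExcelMixin` and `analyze_excel_data` are names the torture-test orchestrator already uses; the caching helper, server class name, and threading detail are assumptions:

```python
import asyncio
from functools import lru_cache


class ExcelMixin:
    """Excel tools, mixed into the server class alongside the other format mixins."""

    async def analyze_excel_data(self, file_path: str) -> dict:
        # Cache per path and keep the heavy parsing off the event loop.
        return await asyncio.to_thread(self._analyze_cached, file_path)

    @staticmethod
    @lru_cache(maxsize=64)
    def _analyze_cached(file_path: str) -> dict:
        # Placeholder for the real openpyxl/pandas work.
        return {"summary": {"file": file_path}}


class OfficeToolsServer(ExcelMixin):  # WordMixin, PowerPointMixin, etc. would stack the same way
    pass
```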

127
Makefile Normal file

@@ -0,0 +1,127 @@
# Makefile for MCP Office Tools
# Provides convenient commands for testing, development, and dashboard generation
.PHONY: help test test-dashboard test-pytest test-torture view-dashboard clean install format lint type-check check quick-test coverage dev build info
# Default target - show help
help:
@echo "MCP Office Tools - Available Commands"
@echo "======================================"
@echo ""
@echo "Testing & Dashboard:"
@echo " make test - Run all tests with dashboard generation"
@echo " make test-dashboard - Alias for 'make test'"
@echo " make test-pytest - Run only pytest tests"
@echo " make test-torture - Run only torture tests"
@echo " make view-dashboard - Open test dashboard in browser"
@echo ""
@echo "Development:"
@echo " make install - Install project with dev dependencies"
@echo " make format - Format code with black"
@echo " make lint - Lint code with ruff"
@echo " make type-check - Run type checking with mypy"
@echo " make clean - Clean temporary files and caches"
@echo ""
@echo "Examples:"
@echo " make test # Run everything and open dashboard"
@echo " make test-pytest # Quick pytest-only run"
@echo " make view-dashboard # View existing results"
# Run all tests and generate unified dashboard
test: test-dashboard
test-dashboard:
@echo "🧪 Running comprehensive test suite with dashboard generation..."
@python run_dashboard_tests.py
# Run only pytest tests
test-pytest:
@echo "🧪 Running pytest test suite..."
@uv run pytest --dashboard-output=reports/test_results.json -v
# Run only torture tests
test-torture:
@echo "🔥 Running torture tests..."
@uv run python torture_test.py
# View test dashboard in browser
view-dashboard:
@echo "📊 Opening test dashboard..."
@./view_dashboard.sh
# Install project with dev dependencies
install:
@echo "📦 Installing MCP Office Tools with dev dependencies..."
@uv sync --dev
@echo "✅ Installation complete!"
# Format code with black
format:
@echo "🎨 Formatting code with black..."
@uv run black src/ tests/ examples/
@echo "✅ Formatting complete!"
# Lint code with ruff
lint:
@echo "🔍 Linting code with ruff..."
@uv run ruff check src/ tests/ examples/
@echo "✅ Linting complete!"
# Type checking with mypy
type-check:
@echo "🔎 Running type checks with mypy..."
@uv run mypy src/
@echo "✅ Type checking complete!"
# Clean temporary files and caches
clean:
@echo "🧹 Cleaning temporary files and caches..."
@find . -type d -name "__pycache__" -exec rm -rf {} + 2>/dev/null || true
@find . -type d -name "*.egg-info" -exec rm -rf {} + 2>/dev/null || true
@find . -type d -name ".pytest_cache" -exec rm -rf {} + 2>/dev/null || true
@find . -type d -name ".mypy_cache" -exec rm -rf {} + 2>/dev/null || true
@find . -type d -name ".ruff_cache" -exec rm -rf {} + 2>/dev/null || true
@find . -type f -name "*.pyc" -delete 2>/dev/null || true
@rm -rf dist/ build/ 2>/dev/null || true
@echo "✅ Cleanup complete!"
# Run full quality checks (format, lint, type-check, test)
check: format lint type-check test
@echo "✅ All quality checks passed!"
# Quick development test cycle (no dashboard)
quick-test:
@echo "⚡ Quick test run (no dashboard)..."
@uv run pytest -v --tb=short
# Coverage report
coverage:
@echo "📊 Generating coverage report..."
@uv run pytest --cov=mcp_office_tools --cov-report=html --cov-report=term
@echo "✅ Coverage report generated at htmlcov/index.html"
# Run server in development mode
dev:
@echo "🚀 Starting MCP Office Tools server..."
@uv run mcp-office-tools
# Build distribution packages
build:
@echo "📦 Building distribution packages..."
@uv build
@echo "✅ Build complete! Packages in dist/"
# Show project info
info:
@echo "MCP Office Tools - Project Information"
@echo "======================================="
@echo ""
@echo "Project: mcp-office-tools"
@echo "Version: $(shell grep '^version' pyproject.toml | cut -d'"' -f2)"
@echo "Python: $(shell python --version)"
@echo "UV: $(shell uv --version 2>/dev/null || echo 'not installed')"
@echo ""
@echo "Directory: $(shell pwd)"
@echo "Tests: $(shell find tests -name 'test_*.py' | wc -l) test files"
@echo "Source files: $(shell find src -name '*.py' | wc -l) Python files"
@echo ""

114
QUICKSTART_DASHBOARD.md Normal file

@@ -0,0 +1,114 @@
# Test Dashboard - Quick Start
## TL;DR - 3 Commands to Get Started
```bash
# 1. Run all tests and generate dashboard
python run_dashboard_tests.py
# 2. Same thing via make (alternative)
make test
# 3. Open existing dashboard
./view_dashboard.sh
```
## What You Get
A beautiful, interactive HTML test dashboard that looks like Microsoft Office 365:
- **Summary Cards** - Pass/fail stats at a glance
- **Interactive Filters** - Search and filter by category/status
- **Detailed Views** - Expand any test to see inputs, outputs, errors
- **MS Office Theme** - Professional, familiar design
## File Locations
```
reports/
├── test_dashboard.html ← Open this in browser
└── test_results.json ← Test data (auto-generated)
```
## Common Tasks
### Run Tests
```bash
make test # Run everything
make test-pytest # Pytest only
python torture_test.py # Torture tests only
```
### View Results
```bash
./view_dashboard.sh # Auto-open in browser
make view-dashboard # Same thing
open reports/test_dashboard.html # Manual
```
### Customize
```bash
# Edit colors
vim reports/test_dashboard.html # Edit CSS variables
# Change categorization
vim tests/pytest_dashboard_plugin.py # Edit _categorize_test()
```
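For reference, a hedged sketch of what a customized `_categorize_test()` might look like; the actual method in `pytest_dashboard_plugin.py` may use a different signature or keyword list:

```python
def _categorize_test(self, item):
    """Map a pytest item to a dashboard category by its node id."""
    nodeid = item.nodeid.lower()
    for keyword, category in [
        ("excel", "Excel"),
        ("powerpoint", "PowerPoint"),
        ("ppt", "PowerPoint"),
        ("word", "Word"),
        ("server", "Server"),
    ]:
        if keyword in nodeid:
            return category
    return "Universal"
```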
## Color Reference
- Word: Blue `#2B579A`
- Excel: Green `#217346`
- PowerPoint: Orange `#D24726`
- Pass: Green `#107C10`
- Fail: Red `#D83B01`
## Example Output
```
$ python run_dashboard_tests.py
======================================================================
🧪 Running pytest test suite...
======================================================================
... pytest output ...
======================================================================
🔥 Running torture tests...
======================================================================
... torture test output ...
======================================================================
📊 TEST DASHBOARD SUMMARY
======================================================================
✅ Passed: 12
❌ Failed: 2
⏭️ Skipped: 1
📈 Pass Rate: 80.0%
⏱️ Duration: 45.12s
📄 Results saved to: reports/test_results.json
🌐 Dashboard: reports/test_dashboard.html
======================================================================
🌐 Opening dashboard in browser...
```
## Troubleshooting
**Dashboard shows no results?**
→ Run tests first: `python run_dashboard_tests.py`
**Can't open in browser?**
→ Manually open: `file:///path/to/reports/test_dashboard.html`
**Tests not categorized correctly?**
→ Edit `tests/pytest_dashboard_plugin.py`, function `_categorize_test()`
## More Info
- Full docs: `TEST_DASHBOARD.md`
- Implementation details: `DASHBOARD_SUMMARY.md`
- Dashboard features: `reports/README.md`

209
reports/README.md Normal file

@@ -0,0 +1,209 @@
# MCP Office Tools - Test Dashboard
Beautiful, interactive HTML dashboard for viewing test results with Microsoft Office-inspired design.
## Features
- **MS Office Theme**: Modern Microsoft Office 365-inspired design with Fluent Design elements
- **Category-based Organization**: Separate results by Word, Excel, PowerPoint, Universal, and Server categories
- **Interactive Filtering**: Search and filter tests by name, category, or status
- **Detailed Test Views**: Expand any test to see inputs, outputs, errors, and tracebacks
- **Real-time Statistics**: Pass/fail rates, duration metrics, and category breakdowns
- **Self-contained**: Works offline with no external dependencies
## Quick Start
### Run All Tests with Dashboard
```bash
# Run both pytest and torture tests, generate dashboard, and open in browser
python run_dashboard_tests.py
```
### Run Only Pytest Tests
```bash
# Run pytest with dashboard plugin
pytest -p tests.pytest_dashboard_plugin --dashboard-output=reports/test_results.json
# Open dashboard
open reports/test_dashboard.html # macOS
xdg-open reports/test_dashboard.html # Linux
start reports/test_dashboard.html # Windows
```
### View Existing Results
Simply open `reports/test_dashboard.html` in your browser. The dashboard will automatically load `test_results.json` from the same directory.
## Dashboard Components
### Summary Cards
Four main summary cards show:
- **Total Tests**: Number of test cases executed
- **Passed**: Successful tests with pass rate and progress bar
- **Failed**: Tests with errors
- **Duration**: Total execution time
### Filter Controls
- **Search Box**: Filter tests by name, module, or category
- **Category Filters**: Filter by Word, Excel, PowerPoint, Universal, or Server
- **Status Filters**: Show only passed, failed, or skipped tests
### Test Results
Each test displays:
- **Status Icon**: Visual indicator (✓ pass, ✗ fail, ⊘ skip)
- **Test Name**: Descriptive test name
- **Category Badge**: Color-coded category (Word=blue, Excel=green, PowerPoint=orange)
- **Duration**: Execution time in milliseconds
- **Expandable Details**: Click to view inputs, outputs, errors, and full traceback
## File Structure
```
reports/
├── test_dashboard.html # Main dashboard (open this in browser)
├── test_results.json # Generated test data (auto-loaded by dashboard)
├── pytest_results.json # Intermediate pytest results
└── README.md # This file
```
## Design Philosophy
### Microsoft Office Color Palette
- **Word Blue**: `#2B579A` - Used for Word-related tests
- **Excel Green**: `#217346` - Used for Excel-related tests
- **PowerPoint Orange**: `#D24726` - Used for PowerPoint-related tests
- **Primary Blue**: `#0078D4` - Accent color (Fluent Design)
### Fluent Design Principles
- **Subtle Shadows**: Cards have soft shadows for depth
- **Rounded Corners**: 8px border radius for modern look
- **Hover Effects**: Interactive elements respond to mouse hover
- **Typography**: Segoe UI font family (Office standard)
- **Clean Layout**: Generous whitespace and clear hierarchy
## Integration with CI/CD
### GitHub Actions Example
```yaml
- name: Run Tests with Dashboard
run: |
python run_dashboard_tests.py
- name: Upload Test Dashboard
uses: actions/upload-artifact@v4
with:
name: test-dashboard
path: reports/
```
### GitLab CI Example
```yaml
test_dashboard:
script:
- python run_dashboard_tests.py
artifacts:
paths:
- reports/
expire_in: 1 week
```
## Customization
### Change Dashboard Output Location
```bash
# Custom output path for pytest
pytest -p tests.pytest_dashboard_plugin --dashboard-output=custom/path/results.json
```
### Modify Colors
Edit the CSS variables in `test_dashboard.html`:
```css
:root {
--word-blue: #2B579A;
--excel-green: #217346;
--powerpoint-orange: #D24726;
/* ... more colors ... */
}
```
## Troubleshooting
### Dashboard shows "No Test Results Found"
- Ensure `test_results.json` exists in the `reports/` directory
- Run tests first: `python run_dashboard_tests.py`
- Check browser console for JSON loading errors
### Tests not categorized correctly
- Categories are determined by test path/name
- Ensure test files follow naming convention (e.g., `test_word_*.py`)
- Edit `_categorize_test()` in `pytest_dashboard_plugin.py` to customize
### Dashboard doesn't open automatically
- May require manual browser opening
- Use the file path printed in terminal
- Check that `webbrowser` module is available
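If the automatic launch fails, a minimal manual fallback using only the standard library:

```python
import webbrowser
from pathlib import Path

dashboard = Path("reports/test_dashboard.html").resolve()
webbrowser.open(dashboard.as_uri())  # opens the file:// URL in the default browser
```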
## Advanced Usage
### Extend the Plugin
The pytest plugin can be customized by editing `tests/pytest_dashboard_plugin.py`:
```python
def _extract_inputs(self, item):
"""Customize how test inputs are extracted"""
# Your custom logic here
pass
def _categorize_test(self, item):
"""Customize test categorization"""
# Your custom logic here
pass
```
### Add Custom Test Data
The JSON format supports additional fields:
```json
{
"metadata": { /* your custom metadata */ },
"summary": { /* summary stats */ },
"categories": { /* category breakdown */ },
"tests": [
{
"name": "test_name",
"custom_field": "your_value",
/* ... standard fields ... */
}
]
}
```
## Contributing
When adding new test categories or features:
1. Update `_categorize_test()` in the pytest plugin
2. Add corresponding color scheme in HTML dashboard CSS
3. Add filter button in dashboard controls
4. Update this README with new features
## License
Part of the MCP Office Tools project. See main project LICENSE file.

View File

@@ -0,0 +1,18 @@
{
"metadata": {
"start_time": "2026-01-11T00:23:10.209539",
"pytest_version": "9.0.2",
"end_time": "2026-01-11T00:23:10.999816",
"duration": 0.7902717590332031,
"exit_status": 0
},
"summary": {
"total": 0,
"passed": 0,
"failed": 0,
"skipped": 0,
"pass_rate": 0
},
"categories": {},
"tests": []
}

963
reports/test_dashboard.html Normal file

@@ -0,0 +1,963 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>MCP Office Tools - Test Dashboard</title>
<style>
/* Microsoft Office Color Palette */
:root {
/* Office App Colors */
--word-blue: #2B579A;
--excel-green: #217346;
--powerpoint-orange: #D24726;
--outlook-blue: #0078D4;
/* Fluent Design Colors */
--primary-blue: #0078D4;
--success-green: #107C10;
--warning-orange: #FF8C00;
--error-red: #D83B01;
--neutral-gray: #605E5C;
--light-gray: #F3F2F1;
--lighter-gray: #FAF9F8;
--border-gray: #E1DFDD;
/* Status Colors */
--pass-green: #107C10;
--fail-red: #D83B01;
--skip-yellow: #FFB900;
/* Backgrounds */
--bg-primary: #FFFFFF;
--bg-secondary: #FAF9F8;
--bg-tertiary: #F3F2F1;
/* Text */
--text-primary: #201F1E;
--text-secondary: #605E5C;
--text-light: #8A8886;
}
/* Reset and Base Styles */
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: 'Segoe UI', -apple-system, BlinkMacSystemFont, 'Roboto', 'Helvetica Neue', sans-serif;
background: var(--bg-secondary);
color: var(--text-primary);
line-height: 1.6;
}
/* Header */
.header {
background: linear-gradient(135deg, var(--primary-blue) 0%, var(--word-blue) 100%);
color: white;
padding: 2rem 2rem 3rem;
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
}
.header-content {
max-width: 1400px;
margin: 0 auto;
}
.header h1 {
font-size: 2rem;
font-weight: 600;
margin-bottom: 0.5rem;
display: flex;
align-items: center;
gap: 1rem;
}
.office-icons {
display: flex;
gap: 0.5rem;
}
.office-icon {
width: 32px;
height: 32px;
border-radius: 4px;
display: flex;
align-items: center;
justify-content: center;
font-weight: 700;
font-size: 16px;
color: white;
}
.icon-word { background: var(--word-blue); }
.icon-excel { background: var(--excel-green); }
.icon-powerpoint { background: var(--powerpoint-orange); }
.header-meta {
opacity: 0.9;
font-size: 0.9rem;
margin-top: 0.5rem;
}
/* Main Container */
.container {
max-width: 1400px;
margin: -2rem auto 2rem;
padding: 0 2rem;
}
/* Summary Cards */
.summary-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
gap: 1.5rem;
margin-bottom: 2rem;
}
.summary-card {
background: var(--bg-primary);
border-radius: 8px;
padding: 1.5rem;
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.08);
border: 1px solid var(--border-gray);
transition: transform 0.2s, box-shadow 0.2s;
}
.summary-card:hover {
transform: translateY(-2px);
box-shadow: 0 4px 16px rgba(0, 0, 0, 0.12);
}
.card-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 1rem;
}
.card-title {
font-size: 0.875rem;
font-weight: 600;
color: var(--text-secondary);
text-transform: uppercase;
letter-spacing: 0.5px;
}
.card-value {
font-size: 2.5rem;
font-weight: 700;
line-height: 1;
}
.card-subtitle {
font-size: 0.875rem;
color: var(--text-light);
margin-top: 0.5rem;
}
.status-badge {
display: inline-flex;
align-items: center;
gap: 0.5rem;
padding: 0.25rem 0.75rem;
border-radius: 12px;
font-size: 0.75rem;
font-weight: 600;
text-transform: uppercase;
}
.badge-pass { background: rgba(16, 124, 16, 0.1); color: var(--pass-green); }
.badge-fail { background: rgba(216, 59, 1, 0.1); color: var(--fail-red); }
.badge-skip { background: rgba(255, 185, 0, 0.1); color: var(--skip-yellow); }
/* Controls */
.controls {
background: var(--bg-primary);
border-radius: 8px;
padding: 1.5rem;
margin-bottom: 1.5rem;
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.08);
border: 1px solid var(--border-gray);
display: flex;
gap: 1rem;
flex-wrap: wrap;
align-items: center;
}
.search-box {
flex: 1;
min-width: 300px;
position: relative;
}
.search-box input {
width: 100%;
padding: 0.75rem 1rem 0.75rem 2.5rem;
border: 2px solid var(--border-gray);
border-radius: 4px;
font-size: 0.875rem;
font-family: inherit;
transition: border-color 0.2s;
}
.search-box input:focus {
outline: none;
border-color: var(--primary-blue);
}
.search-icon {
position: absolute;
left: 0.875rem;
top: 50%;
transform: translateY(-50%);
color: var(--text-light);
}
.filter-group {
display: flex;
gap: 0.5rem;
flex-wrap: wrap;
}
.filter-btn {
padding: 0.5rem 1rem;
border: 2px solid var(--border-gray);
background: var(--bg-primary);
color: var(--text-primary);
border-radius: 4px;
font-size: 0.875rem;
font-weight: 600;
cursor: pointer;
transition: all 0.2s;
font-family: inherit;
}
.filter-btn:hover {
border-color: var(--primary-blue);
background: var(--lighter-gray);
}
.filter-btn.active {
background: var(--primary-blue);
color: white;
border-color: var(--primary-blue);
}
.filter-btn.word.active { background: var(--word-blue); border-color: var(--word-blue); }
.filter-btn.excel.active { background: var(--excel-green); border-color: var(--excel-green); }
.filter-btn.powerpoint.active { background: var(--powerpoint-orange); border-color: var(--powerpoint-orange); }
/* Test Results */
.test-results {
display: flex;
flex-direction: column;
gap: 1rem;
}
.test-item {
background: var(--bg-primary);
border-radius: 8px;
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.08);
border: 1px solid var(--border-gray);
overflow: hidden;
transition: box-shadow 0.2s;
}
.test-item:hover {
box-shadow: 0 4px 16px rgba(0, 0, 0, 0.12);
}
.test-header {
padding: 1.25rem 1.5rem;
cursor: pointer;
display: flex;
align-items: center;
gap: 1rem;
transition: background 0.2s;
}
.test-header:hover {
background: var(--lighter-gray);
}
.test-status-icon {
width: 24px;
height: 24px;
border-radius: 50%;
display: flex;
align-items: center;
justify-content: center;
font-weight: 700;
font-size: 14px;
flex-shrink: 0;
}
.status-pass {
background: var(--pass-green);
color: white;
}
.status-fail {
background: var(--fail-red);
color: white;
}
.status-skip {
background: var(--skip-yellow);
color: white;
}
.test-info {
flex: 1;
min-width: 0;
}
.test-name {
font-weight: 600;
font-size: 1rem;
color: var(--text-primary);
margin-bottom: 0.25rem;
}
.test-meta {
font-size: 0.875rem;
color: var(--text-light);
display: flex;
gap: 1rem;
flex-wrap: wrap;
}
.test-category-badge {
display: inline-block;
padding: 0.25rem 0.75rem;
border-radius: 4px;
font-size: 0.75rem;
font-weight: 600;
color: white;
}
.category-word { background: var(--word-blue); }
.category-excel { background: var(--excel-green); }
.category-powerpoint { background: var(--powerpoint-orange); }
.category-universal { background: var(--outlook-blue); }
.category-server { background: var(--neutral-gray); }
.category-other { background: var(--text-light); }
.test-duration {
font-weight: 600;
color: var(--text-secondary);
}
.expand-icon {
width: 32px;
height: 32px;
border-radius: 4px;
display: flex;
align-items: center;
justify-content: center;
background: var(--lighter-gray);
color: var(--text-secondary);
transition: transform 0.2s, background 0.2s;
flex-shrink: 0;
}
.test-header:hover .expand-icon {
background: var(--light-gray);
}
.test-item.expanded .expand-icon {
transform: rotate(180deg);
}
.test-details {
display: none;
border-top: 1px solid var(--border-gray);
background: var(--bg-secondary);
}
.test-item.expanded .test-details {
display: block;
}
.details-section {
padding: 1.5rem;
border-bottom: 1px solid var(--border-gray);
}
.details-section:last-child {
border-bottom: none;
}
.section-title {
font-weight: 600;
font-size: 0.875rem;
color: var(--text-secondary);
text-transform: uppercase;
letter-spacing: 0.5px;
margin-bottom: 0.75rem;
}
.code-block {
background: var(--text-primary);
color: #D4D4D4;
padding: 1rem;
border-radius: 4px;
font-family: 'Consolas', 'Monaco', 'Courier New', monospace;
font-size: 0.875rem;
line-height: 1.5;
overflow-x: auto;
white-space: pre-wrap;
word-wrap: break-word;
}
.error-block {
background: rgba(216, 59, 1, 0.05);
border-left: 4px solid var(--error-red);
padding: 1rem;
border-radius: 4px;
color: var(--error-red);
font-family: 'Consolas', 'Monaco', 'Courier New', monospace;
font-size: 0.875rem;
line-height: 1.5;
overflow-x: auto;
white-space: pre-wrap;
}
.inputs-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(250px, 1fr));
gap: 1rem;
}
.input-item {
background: var(--bg-primary);
padding: 1rem;
border-radius: 4px;
border: 1px solid var(--border-gray);
}
.input-label {
font-weight: 600;
font-size: 0.75rem;
color: var(--text-secondary);
text-transform: uppercase;
letter-spacing: 0.5px;
margin-bottom: 0.5rem;
}
.input-value {
font-family: 'Consolas', 'Monaco', 'Courier New', monospace;
font-size: 0.875rem;
color: var(--text-primary);
}
.file-link {
display: inline-flex;
align-items: center;
gap: 0.5rem;
color: var(--primary-blue);
text-decoration: none;
padding: 0.25rem 0.5rem;
border-radius: 4px;
background: rgba(43, 87, 154, 0.1);
transition: all 0.2s ease;
}
.file-link:hover {
background: rgba(43, 87, 154, 0.2);
text-decoration: underline;
}
/* Empty State */
.empty-state {
text-align: center;
padding: 4rem 2rem;
color: var(--text-light);
}
.empty-state-icon {
font-size: 4rem;
margin-bottom: 1rem;
opacity: 0.5;
}
/* Footer */
.footer {
text-align: center;
padding: 2rem;
color: var(--text-light);
font-size: 0.875rem;
}
/* Progress Bar */
.progress-bar {
width: 100%;
height: 8px;
background: var(--light-gray);
border-radius: 4px;
overflow: hidden;
margin-top: 1rem;
}
.progress-fill {
height: 100%;
background: linear-gradient(90deg, var(--success-green) 0%, var(--excel-green) 100%);
transition: width 0.3s ease;
}
/* Responsive */
@media (max-width: 768px) {
.container {
padding: 0 1rem;
}
.header {
padding: 1.5rem 1rem 2rem;
}
.header h1 {
font-size: 1.5rem;
}
.summary-grid {
grid-template-columns: 1fr;
}
.controls {
flex-direction: column;
align-items: stretch;
}
.search-box {
min-width: 100%;
}
}
/* Utility Classes */
.hidden {
display: none !important;
}
.text-muted {
color: var(--text-light);
}
.text-success {
color: var(--pass-green);
}
.text-error {
color: var(--fail-red);
}
.text-warning {
color: var(--skip-yellow);
}
</style>
</head>
<body>
<!-- Header -->
<header class="header">
<div class="header-content">
<h1>
<div class="office-icons">
<div class="office-icon icon-word">W</div>
<div class="office-icon icon-excel">X</div>
<div class="office-icon icon-powerpoint">P</div>
</div>
MCP Office Tools - Test Dashboard
</h1>
<div class="header-meta">
<span id="test-timestamp">Loading...</span>
</div>
</div>
</header>
<!-- Main Container -->
<div class="container">
<!-- Summary Cards -->
<div class="summary-grid">
<div class="summary-card">
<div class="card-header">
<div class="card-title">Total Tests</div>
</div>
<div class="card-value" id="total-tests">0</div>
<div class="card-subtitle">Test cases executed</div>
</div>
<div class="summary-card">
<div class="card-header">
<div class="card-title">Passed</div>
<span class="status-badge badge-pass">
<span></span>
</span>
</div>
<div class="card-value text-success" id="passed-tests">0</div>
<div class="card-subtitle">
<span id="pass-rate">0%</span> pass rate
</div>
<div class="progress-bar">
<div class="progress-fill" id="pass-progress" style="width: 0%"></div>
</div>
</div>
<div class="summary-card">
<div class="card-header">
<div class="card-title">Failed</div>
<span class="status-badge badge-fail">
<span></span>
</span>
</div>
<div class="card-value text-error" id="failed-tests">0</div>
<div class="card-subtitle">Tests with errors</div>
</div>
<div class="summary-card">
<div class="card-header">
<div class="card-title">Duration</div>
</div>
<div class="card-value" id="total-duration" style="font-size: 2rem;">0s</div>
<div class="card-subtitle">Total execution time</div>
</div>
</div>
<!-- Controls -->
<div class="controls">
<div class="search-box">
<span class="search-icon">🔍</span>
<input
type="text"
id="search-input"
placeholder="Search tests by name, module, or category..."
autocomplete="off"
>
</div>
<div class="filter-group">
<button class="filter-btn active" data-filter="all">All</button>
<button class="filter-btn word" data-filter="Word">Word</button>
<button class="filter-btn excel" data-filter="Excel">Excel</button>
<button class="filter-btn powerpoint" data-filter="PowerPoint">PowerPoint</button>
<button class="filter-btn" data-filter="Universal">Universal</button>
<button class="filter-btn" data-filter="Server">Server</button>
</div>
<div class="filter-group">
<button class="filter-btn" data-status="passed">Passed</button>
<button class="filter-btn" data-status="failed">Failed</button>
<button class="filter-btn" data-status="skipped">Skipped</button>
</div>
</div>
<!-- Test Results -->
<div id="test-results" class="test-results">
<!-- Tests will be dynamically inserted here -->
</div>
<!-- Empty State -->
<div id="empty-state" class="empty-state hidden">
<div class="empty-state-icon">📭</div>
<h2>No Test Results Found</h2>
<p>Run tests with: <code>pytest --dashboard-output=reports/test_results.json</code></p>
</div>
</div>
<!-- Footer -->
<footer class="footer">
<p>MCP Office Tools Test Dashboard | Generated with ❤️ using pytest</p>
</footer>
<script>
// Dashboard Application
class TestDashboard {
constructor() {
this.data = null;
this.filteredTests = [];
this.activeFilters = {
category: 'all',
status: null,
search: ''
};
this.init();
}
async init() {
await this.loadData();
this.setupEventListeners();
this.render();
}
async loadData() {
try {
// Try embedded data first (works with file:// URLs)
const embeddedScript = document.getElementById('test-results-data');
if (embeddedScript) {
this.data = JSON.parse(embeddedScript.textContent);
this.filteredTests = this.data.tests;
return;
}
// Fallback to fetch (works with http:// URLs)
const response = await fetch('test_results.json');
this.data = await response.json();
this.filteredTests = this.data.tests;
} catch (error) {
console.error('Failed to load test results:', error);
document.getElementById('empty-state').classList.remove('hidden');
}
}
setupEventListeners() {
// Search
document.getElementById('search-input').addEventListener('input', (e) => {
this.activeFilters.search = e.target.value.toLowerCase();
this.applyFilters();
});
// Category filters
document.querySelectorAll('[data-filter]').forEach(btn => {
btn.addEventListener('click', (e) => {
document.querySelectorAll('[data-filter]').forEach(b => b.classList.remove('active'));
e.target.classList.add('active');
this.activeFilters.category = e.target.dataset.filter;
this.applyFilters();
});
});
// Status filters
document.querySelectorAll('[data-status]').forEach(btn => {
btn.addEventListener('click', (e) => {
if (e.target.classList.contains('active')) {
e.target.classList.remove('active');
this.activeFilters.status = null;
} else {
document.querySelectorAll('[data-status]').forEach(b => b.classList.remove('active'));
e.target.classList.add('active');
this.activeFilters.status = e.target.dataset.status;
}
this.applyFilters();
});
});
}
applyFilters() {
this.filteredTests = this.data.tests.filter(test => {
// Category filter
if (this.activeFilters.category !== 'all' && test.category !== this.activeFilters.category) {
return false;
}
// Status filter
if (this.activeFilters.status && test.outcome !== this.activeFilters.status) {
return false;
}
// Search filter
if (this.activeFilters.search) {
const searchStr = this.activeFilters.search;
const matchName = test.name.toLowerCase().includes(searchStr);
const matchModule = test.module.toLowerCase().includes(searchStr);
const matchCategory = test.category.toLowerCase().includes(searchStr);
if (!matchName && !matchModule && !matchCategory) {
return false;
}
}
return true;
});
this.renderTests();
}
render() {
if (!this.data) return;
this.renderSummary();
this.renderTests();
}
renderSummary() {
const { metadata, summary } = this.data;
// Timestamp
const timestamp = new Date(metadata.start_time).toLocaleString();
document.getElementById('test-timestamp').textContent = `Run on ${timestamp}`;
// Summary cards
document.getElementById('total-tests').textContent = summary.total;
document.getElementById('passed-tests').textContent = summary.passed;
document.getElementById('failed-tests').textContent = summary.failed;
document.getElementById('pass-rate').textContent = `${summary.pass_rate.toFixed(1)}%`;
document.getElementById('pass-progress').style.width = `${summary.pass_rate}%`;
// Duration
const duration = metadata.duration.toFixed(2);
document.getElementById('total-duration').textContent = `${duration}s`;
}
renderTests() {
const container = document.getElementById('test-results');
const emptyState = document.getElementById('empty-state');
if (this.filteredTests.length === 0) {
container.innerHTML = '';
emptyState.classList.remove('hidden');
return;
}
emptyState.classList.add('hidden');
container.innerHTML = this.filteredTests.map(test => this.createTestItem(test)).join('');
// Add click handlers for expand/collapse
container.querySelectorAll('.test-header').forEach(header => {
header.addEventListener('click', () => {
header.parentElement.classList.toggle('expanded');
});
});
}
createTestItem(test) {
const statusIcon = this.getStatusIcon(test.outcome);
const categoryClass = `category-${test.category.toLowerCase()}`;
const duration = (test.duration * 1000).toFixed(0); // ms
return `
<div class="test-item" data-test-id="${test.nodeid}">
<div class="test-header">
<div class="test-status-icon status-${test.outcome}">
${statusIcon}
</div>
<div class="test-info">
<div class="test-name">${this.escapeHtml(test.name)}</div>
<div class="test-meta">
<span class="test-category-badge ${categoryClass}">${test.category}</span>
<span>${test.module}</span>
<span class="test-duration">${duration}ms</span>
</div>
</div>
<div class="expand-icon"></div>
</div>
<div class="test-details">
${this.createTestDetails(test)}
</div>
</div>
`;
}
createTestDetails(test) {
let html = '';
// Inputs
if (test.inputs && Object.keys(test.inputs).length > 0) {
html += `
<div class="details-section">
<div class="section-title">Test Inputs</div>
<div class="inputs-grid">
${Object.entries(test.inputs).map(([key, value]) => `
<div class="input-item">
<div class="input-label">${this.escapeHtml(key)}</div>
<div class="input-value">${this.formatInputValue(key, value)}</div>
</div>
`).join('')}
</div>
</div>
`;
}
// Outputs
if (test.outputs) {
html += `
<div class="details-section">
<div class="section-title">Test Outputs</div>
<div class="code-block">${this.escapeHtml(JSON.stringify(test.outputs, null, 2))}</div>
</div>
`;
}
// Error
if (test.error) {
html += `
<div class="details-section">
<div class="section-title">Error Details</div>
<div class="error-block">${this.escapeHtml(test.error)}</div>
</div>
`;
}
// Traceback
if (test.traceback) {
html += `
<div class="details-section">
<div class="section-title">Traceback</div>
<div class="error-block">${this.escapeHtml(test.traceback)}</div>
</div>
`;
}
// Full path
html += `
<div class="details-section">
<div class="section-title">Test Path</div>
<div class="code-block">${this.escapeHtml(test.nodeid)}</div>
</div>
`;
return html;
}
getStatusIcon(outcome) {
switch (outcome) {
case 'passed': return '✓';
case 'failed': return '✗';
case 'skipped': return '⊘';
default: return '?';
}
}
formatInputValue(key, value) {
const strValue = typeof value === 'string' ? value : JSON.stringify(value);
// Detect file paths - relative (test_files/...) or absolute
const isRelativePath = strValue.startsWith('test_files/');
const isAbsolutePath = /^["']?(\/[^"']+|[A-Z]:\\[^"']+)["']?$/i.test(strValue);
const isFilePath = isRelativePath || isAbsolutePath || key.toLowerCase().includes('file') || key.toLowerCase().includes('path');
if (isFilePath && (isRelativePath || isAbsolutePath)) {
// Extract the actual path (remove quotes if present)
const cleanPath = strValue.replace(/^["']|["']$/g, '');
const fileName = cleanPath.split('/').pop() || cleanPath.split('\\').pop();
const fileExt = fileName.split('.').pop()?.toLowerCase() || '';
// Choose icon based on file type
let icon = '📄';
if (['xlsx', 'xls', 'csv'].includes(fileExt)) icon = '📊';
else if (['docx', 'doc'].includes(fileExt)) icon = '📝';
else if (['pptx', 'ppt'].includes(fileExt)) icon = '📽️';
// Use relative path for relative files, file:// for absolute paths
const href = isRelativePath ? this.escapeHtml(cleanPath) : `file://${this.escapeHtml(cleanPath)}`;
const downloadAttr = isRelativePath ? 'download' : '';
return `<a href="${href}" class="file-link" title="Download ${this.escapeHtml(fileName)}" ${downloadAttr} target="_blank">${icon} ${this.escapeHtml(fileName)}</a>`;
}
return this.escapeHtml(strValue);
}
escapeHtml(text) {
if (text === null || text === undefined) return '';
const div = document.createElement('div');
div.textContent = String(text);
return div.innerHTML;
}
}
// Initialize dashboard when DOM is ready
if (document.readyState === 'loading') {
document.addEventListener('DOMContentLoaded', () => new TestDashboard());
} else {
new TestDashboard();
}
</script>
<script type="application/json" id="test-results-data">{"metadata": {"start_time": "2026-01-11T00:23:10.209539", "end_time": "2026-01-11T00:23:12.295169", "duration": 1.052842140197754, "exit_status": 0, "pytest_version": "9.0.2", "test_types": ["pytest", "torture_test"]}, "summary": {"total": 6, "passed": 5, "failed": 0, "skipped": 1, "pass_rate": 83.33333333333334}, "categories": {"Excel": {"total": 4, "passed": 3, "failed": 0, "skipped": 1}, "Word": {"total": 2, "passed": 2, "failed": 0, "skipped": 0}}, "tests": [{"name": "Excel Data Analysis", "nodeid": "torture_test.py::test_excel_data_analysis", "category": "Excel", "outcome": "passed", "duration": 0.1404409408569336, "timestamp": "2026-01-11T00:23:12.271793", "module": "torture_test", "class": null, "function": "test_excel_data_analysis", "inputs": {"file": "test_files/test_data.xlsx"}, "outputs": {"sheets_analyzed": ["Test Data"]}, "error": null, "traceback": null}, {"name": "Excel Formula Extraction", "nodeid": "torture_test.py::test_excel_formula_extraction", "category": "Excel", "outcome": "passed", "duration": 0.0031723976135253906, "timestamp": "2026-01-11T00:23:12.274971", "module": "torture_test", "class": null, "function": "test_excel_formula_extraction", "inputs": {"file": "test_files/test_data.xlsx"}, "outputs": {"total_formulas": 8}, "error": null, "traceback": null}, {"name": "Excel Chart Data Generation", "nodeid": "torture_test.py::test_excel_chart_generation", "category": "Excel", "outcome": "passed", "duration": 0.003323078155517578, "timestamp": "2026-01-11T00:23:12.278299", "module": "torture_test", "class": null, "function": "test_excel_chart_generation", "inputs": {"file": "test_files/test_data.xlsx", "x_column": "Category", "y_columns": ["Value"]}, "outputs": {"chart_libraries": 2}, "error": null, "traceback": null}, {"name": "Word Structure Analysis", "nodeid": "torture_test.py::test_word_structure_analysis", "category": "Word", "outcome": "passed", "duration": 0.010413646697998047, "timestamp": "2026-01-11T00:23:12.288718", "module": "torture_test", "class": null, "function": "test_word_structure_analysis", "inputs": {"file": "test_files/test_document.docx"}, "outputs": {"total_headings": 0}, "error": null, "traceback": null}, {"name": "Word Table Extraction", "nodeid": "torture_test.py::test_word_table_extraction", "category": "Word", "outcome": "passed", "duration": 0.006224393844604492, "timestamp": "2026-01-11T00:23:12.294948", "module": "torture_test", "class": null, "function": "test_word_table_extraction", "inputs": {"file": "test_files/test_document.docx"}, "outputs": {"total_tables": 0}, "error": null, "traceback": null}, {"name": "Real Excel File Analysis (FORScan)", "nodeid": "torture_test.py::test_real_excel_analysis", "category": "Excel", "outcome": "skipped", "duration": 0, "timestamp": "2026-01-11T00:23:12.294963", "module": "torture_test", "class": null, "function": "test_real_excel_analysis", "inputs": {"file": "/home/rpm/FORScan Lite spreadsheets v1.1/FORScan Lite spreadsheet - PIDs.xlsx"}, "outputs": null, "error": "File not found: /home/rpm/FORScan Lite spreadsheets v1.1/FORScan Lite spreadsheet - PIDs.xlsx", "traceback": null}]}</script>
</body>
</html>

Binary file not shown.

Binary file not shown.

154
reports/test_results.json Normal file

@@ -0,0 +1,154 @@
{
"metadata": {
"start_time": "2026-01-11T00:23:10.209539",
"end_time": "2026-01-11T00:23:12.295169",
"duration": 1.052842140197754,
"exit_status": 0,
"pytest_version": "9.0.2",
"test_types": [
"pytest",
"torture_test"
]
},
"summary": {
"total": 6,
"passed": 5,
"failed": 0,
"skipped": 1,
"pass_rate": 83.33333333333334
},
"categories": {
"Excel": {
"total": 4,
"passed": 3,
"failed": 0,
"skipped": 1
},
"Word": {
"total": 2,
"passed": 2,
"failed": 0,
"skipped": 0
}
},
"tests": [
{
"name": "Excel Data Analysis",
"nodeid": "torture_test.py::test_excel_data_analysis",
"category": "Excel",
"outcome": "passed",
"duration": 0.1404409408569336,
"timestamp": "2026-01-11T00:23:12.271793",
"module": "torture_test",
"class": null,
"function": "test_excel_data_analysis",
"inputs": {
"file": "test_files/test_data.xlsx"
},
"outputs": {
"sheets_analyzed": [
"Test Data"
]
},
"error": null,
"traceback": null
},
{
"name": "Excel Formula Extraction",
"nodeid": "torture_test.py::test_excel_formula_extraction",
"category": "Excel",
"outcome": "passed",
"duration": 0.0031723976135253906,
"timestamp": "2026-01-11T00:23:12.274971",
"module": "torture_test",
"class": null,
"function": "test_excel_formula_extraction",
"inputs": {
"file": "test_files/test_data.xlsx"
},
"outputs": {
"total_formulas": 8
},
"error": null,
"traceback": null
},
{
"name": "Excel Chart Data Generation",
"nodeid": "torture_test.py::test_excel_chart_generation",
"category": "Excel",
"outcome": "passed",
"duration": 0.003323078155517578,
"timestamp": "2026-01-11T00:23:12.278299",
"module": "torture_test",
"class": null,
"function": "test_excel_chart_generation",
"inputs": {
"file": "test_files/test_data.xlsx",
"x_column": "Category",
"y_columns": [
"Value"
]
},
"outputs": {
"chart_libraries": 2
},
"error": null,
"traceback": null
},
{
"name": "Word Structure Analysis",
"nodeid": "torture_test.py::test_word_structure_analysis",
"category": "Word",
"outcome": "passed",
"duration": 0.010413646697998047,
"timestamp": "2026-01-11T00:23:12.288718",
"module": "torture_test",
"class": null,
"function": "test_word_structure_analysis",
"inputs": {
"file": "test_files/test_document.docx"
},
"outputs": {
"total_headings": 0
},
"error": null,
"traceback": null
},
{
"name": "Word Table Extraction",
"nodeid": "torture_test.py::test_word_table_extraction",
"category": "Word",
"outcome": "passed",
"duration": 0.006224393844604492,
"timestamp": "2026-01-11T00:23:12.294948",
"module": "torture_test",
"class": null,
"function": "test_word_table_extraction",
"inputs": {
"file": "test_files/test_document.docx"
},
"outputs": {
"total_tables": 0
},
"error": null,
"traceback": null
},
{
"name": "Real Excel File Analysis (FORScan)",
"nodeid": "torture_test.py::test_real_excel_analysis",
"category": "Excel",
"outcome": "skipped",
"duration": 0,
"timestamp": "2026-01-11T00:23:12.294963",
"module": "torture_test",
"class": null,
"function": "test_real_excel_analysis",
"inputs": {
"file": "/home/rpm/FORScan Lite spreadsheets v1.1/FORScan Lite spreadsheet - PIDs.xlsx"
},
"outputs": null,
"error": "File not found: /home/rpm/FORScan Lite spreadsheets v1.1/FORScan Lite spreadsheet - PIDs.xlsx",
"traceback": null
}
]
}

507
run_dashboard_tests.py Executable file

@@ -0,0 +1,507 @@
#!/usr/bin/env python
"""
Run both pytest and torture tests, then generate a unified test dashboard.
This script orchestrates:
1. Running pytest with dashboard plugin
2. Running torture tests with result capture
3. Merging results into a single JSON file
4. Opening the dashboard in the browser
"""
import asyncio
import json
import os
import shutil
import subprocess
import sys
import time
from datetime import datetime
from pathlib import Path
# Add src to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "src"))
def run_pytest_tests(output_path: Path) -> dict:
"""Run pytest tests with dashboard plugin."""
print("\n" + "=" * 70)
print("🧪 Running pytest test suite...")
print("=" * 70)
# Ensure plugin is loaded
plugin_path = Path(__file__).parent / "tests" / "pytest_dashboard_plugin.py"
# Run pytest with plugin
cmd = [
sys.executable,
"-m",
"pytest",
"-p",
"tests.pytest_dashboard_plugin",
f"--dashboard-output={output_path}",
"-v",
]
result = subprocess.run(cmd, cwd=Path(__file__).parent)
# Load results
if output_path.exists():
with open(output_path) as f:
return json.load(f)
else:
return {
"metadata": {
"start_time": datetime.now().isoformat(),
"end_time": datetime.now().isoformat(),
"duration": 0,
"exit_status": result.returncode,
},
"summary": {"total": 0, "passed": 0, "failed": 0, "skipped": 0, "pass_rate": 0},
"categories": {},
"tests": [],
}
async def run_torture_tests(test_files_dir: Path | None = None) -> dict:
"""Run torture tests and capture results.
Args:
test_files_dir: Directory to store test files. If provided, files persist
for inclusion in dashboard. If None, uses temp directory.
"""
print("\n" + "=" * 70)
print("🔥 Running torture tests...")
print("=" * 70)
from torture_test import (
run_torture_tests as run_torture,
create_test_xlsx,
create_test_docx,
EXCEL_TEST_FILES,
ExcelMixin,
WordMixin,
)
excel_mixin = ExcelMixin()
word_mixin = WordMixin()
results = []
start_time = time.time()
# Use persistent directory if provided, otherwise temp
if test_files_dir:
test_files_dir.mkdir(parents=True, exist_ok=True)
test_xlsx = create_test_xlsx(str(test_files_dir / "test_data.xlsx"))
test_docx = create_test_docx(str(test_files_dir / "test_document.docx"))
# Use relative paths for the dashboard
test_xlsx_path = "test_files/test_data.xlsx"
test_docx_path = "test_files/test_document.docx"
else:
import tempfile
tmpdir = tempfile.mkdtemp()
test_xlsx = create_test_xlsx(os.path.join(tmpdir, "test_data.xlsx"))
test_docx = create_test_docx(os.path.join(tmpdir, "test_document.docx"))
test_xlsx_path = test_xlsx
test_docx_path = test_docx
# Test 1: Excel Data Analysis
test_start = time.time()
try:
result = await excel_mixin.analyze_excel_data(test_xlsx)
summary = result.get("summary", {})
sheets_count = summary.get("sheets_analyzed", 1)
results.append({
"name": "Excel Data Analysis",
"nodeid": "torture_test.py::test_excel_data_analysis",
"category": "Excel",
"outcome": "passed",
"duration": time.time() - test_start,
"timestamp": datetime.now().isoformat(),
"module": "torture_test",
"class": None,
"function": "test_excel_data_analysis",
"inputs": {"file": test_xlsx_path},
"outputs": {"sheets_analyzed": sheets_count},
"error": None,
"traceback": None,
})
except Exception as e:
results.append({
"name": "Excel Data Analysis",
"nodeid": "torture_test.py::test_excel_data_analysis",
"category": "Excel",
"outcome": "failed",
"duration": time.time() - test_start,
"timestamp": datetime.now().isoformat(),
"module": "torture_test",
"class": None,
"function": "test_excel_data_analysis",
"inputs": {"file": test_xlsx_path},
"outputs": None,
"error": str(e),
"traceback": f"{type(e).__name__}: {e}",
})
# Test 2: Excel Formula Extraction
test_start = time.time()
try:
result = await excel_mixin.extract_excel_formulas(test_xlsx)
summary = result.get("summary", {})
formula_count = summary.get("total_formulas", 0)
results.append({
"name": "Excel Formula Extraction",
"nodeid": "torture_test.py::test_excel_formula_extraction",
"category": "Excel",
"outcome": "passed",
"duration": time.time() - test_start,
"timestamp": datetime.now().isoformat(),
"module": "torture_test",
"class": None,
"function": "test_excel_formula_extraction",
"inputs": {"file": test_xlsx_path},
"outputs": {"total_formulas": formula_count},
"error": None,
"traceback": None,
})
except Exception as e:
results.append({
"name": "Excel Formula Extraction",
"nodeid": "torture_test.py::test_excel_formula_extraction",
"category": "Excel",
"outcome": "failed",
"duration": time.time() - test_start,
"timestamp": datetime.now().isoformat(),
"module": "torture_test",
"class": None,
"function": "test_excel_formula_extraction",
"inputs": {"file": test_xlsx_path},
"outputs": None,
"error": str(e),
"traceback": f"{type(e).__name__}: {e}",
})
# Test 3: Excel Chart Generation
test_start = time.time()
try:
result = await excel_mixin.create_excel_chart_data(
test_xlsx,
x_column="Category",
y_columns=["Value"],
chart_type="bar"
)
chart_libs = len(result.get("chart_configuration", {}))
results.append({
"name": "Excel Chart Data Generation",
"nodeid": "torture_test.py::test_excel_chart_generation",
"category": "Excel",
"outcome": "passed",
"duration": time.time() - test_start,
"timestamp": datetime.now().isoformat(),
"module": "torture_test",
"class": None,
"function": "test_excel_chart_generation",
"inputs": {"file": test_xlsx_path, "x_column": "Category", "y_columns": ["Value"]},
"outputs": {"chart_libraries": chart_libs},
"error": None,
"traceback": None,
})
except Exception as e:
results.append({
"name": "Excel Chart Data Generation",
"nodeid": "torture_test.py::test_excel_chart_generation",
"category": "Excel",
"outcome": "failed",
"duration": time.time() - test_start,
"timestamp": datetime.now().isoformat(),
"module": "torture_test",
"class": None,
"function": "test_excel_chart_generation",
"inputs": {"file": test_xlsx_path, "x_column": "Category", "y_columns": ["Value"]},
"outputs": None,
"error": str(e),
"traceback": f"{type(e).__name__}: {e}",
})
# Test 4: Word Structure Analysis
test_start = time.time()
try:
result = await word_mixin.analyze_word_structure(test_docx)
heading_count = result["structure"].get("total_headings", 0)
results.append({
"name": "Word Structure Analysis",
"nodeid": "torture_test.py::test_word_structure_analysis",
"category": "Word",
"outcome": "passed",
"duration": time.time() - test_start,
"timestamp": datetime.now().isoformat(),
"module": "torture_test",
"class": None,
"function": "test_word_structure_analysis",
"inputs": {"file": test_docx_path},
"outputs": {"total_headings": heading_count},
"error": None,
"traceback": None,
})
except Exception as e:
results.append({
"name": "Word Structure Analysis",
"nodeid": "torture_test.py::test_word_structure_analysis",
"category": "Word",
"outcome": "failed",
"duration": time.time() - test_start,
"timestamp": datetime.now().isoformat(),
"module": "torture_test",
"class": None,
"function": "test_word_structure_analysis",
"inputs": {"file": test_docx_path},
"outputs": None,
"error": str(e),
"traceback": f"{type(e).__name__}: {e}",
})
# Test 5: Word Table Extraction
test_start = time.time()
try:
result = await word_mixin.extract_word_tables(test_docx)
table_count = result.get("total_tables", 0)
results.append({
"name": "Word Table Extraction",
"nodeid": "torture_test.py::test_word_table_extraction",
"category": "Word",
"outcome": "passed",
"duration": time.time() - test_start,
"timestamp": datetime.now().isoformat(),
"module": "torture_test",
"class": None,
"function": "test_word_table_extraction",
"inputs": {"file": test_docx_path},
"outputs": {"total_tables": table_count},
"error": None,
"traceback": None,
})
except Exception as e:
results.append({
"name": "Word Table Extraction",
"nodeid": "torture_test.py::test_word_table_extraction",
"category": "Word",
"outcome": "failed",
"duration": time.time() - test_start,
"timestamp": datetime.now().isoformat(),
"module": "torture_test",
"class": None,
"function": "test_word_table_extraction",
"inputs": {"file": test_docx_path},
"outputs": None,
"error": str(e),
"traceback": f"{type(e).__name__}: {e}",
})
# Test 6: Real Excel file (if available)
real_excel = EXCEL_TEST_FILES[0]
if os.path.exists(real_excel):
test_start = time.time()
try:
result = await excel_mixin.analyze_excel_data(real_excel)
sheets = len(result.get("sheets", []))
results.append({
"name": "Real Excel File Analysis (FORScan)",
"nodeid": "torture_test.py::test_real_excel_analysis",
"category": "Excel",
"outcome": "passed",
"duration": time.time() - test_start,
"timestamp": datetime.now().isoformat(),
"module": "torture_test",
"class": None,
"function": "test_real_excel_analysis",
"inputs": {"file": real_excel},
"outputs": {"sheets": sheets},
"error": None,
"traceback": None,
})
except Exception as e:
results.append({
"name": "Real Excel File Analysis (FORScan)",
"nodeid": "torture_test.py::test_real_excel_analysis",
"category": "Excel",
"outcome": "failed",
"duration": time.time() - test_start,
"timestamp": datetime.now().isoformat(),
"module": "torture_test",
"class": None,
"function": "test_real_excel_analysis",
"inputs": {"file": real_excel},
"outputs": None,
"error": str(e),
"traceback": f"{type(e).__name__}: {e}",
})
else:
results.append({
"name": "Real Excel File Analysis (FORScan)",
"nodeid": "torture_test.py::test_real_excel_analysis",
"category": "Excel",
"outcome": "skipped",
"duration": 0,
"timestamp": datetime.now().isoformat(),
"module": "torture_test",
"class": None,
"function": "test_real_excel_analysis",
"inputs": {"file": real_excel},
"outputs": None,
"error": f"File not found: {real_excel}",
"traceback": None,
})
# Calculate summary
total_duration = time.time() - start_time
passed = sum(1 for r in results if r["outcome"] == "passed")
failed = sum(1 for r in results if r["outcome"] == "failed")
skipped = sum(1 for r in results if r["outcome"] == "skipped")
total = len(results)
return {
"metadata": {
"start_time": datetime.fromtimestamp(start_time).isoformat(),
"end_time": datetime.now().isoformat(),
"duration": total_duration,
"exit_status": 0 if failed == 0 else 1,
"pytest_version": "torture_test",
},
"summary": {
"total": total,
"passed": passed,
"failed": failed,
"skipped": skipped,
"pass_rate": (passed / total * 100) if total > 0 else 0,
},
"categories": {
"Excel": {
"total": sum(1 for r in results if r["category"] == "Excel"),
"passed": sum(1 for r in results if r["category"] == "Excel" and r["outcome"] == "passed"),
"failed": sum(1 for r in results if r["category"] == "Excel" and r["outcome"] == "failed"),
"skipped": sum(1 for r in results if r["category"] == "Excel" and r["outcome"] == "skipped"),
},
"Word": {
"total": sum(1 for r in results if r["category"] == "Word"),
"passed": sum(1 for r in results if r["category"] == "Word" and r["outcome"] == "passed"),
"failed": sum(1 for r in results if r["category"] == "Word" and r["outcome"] == "failed"),
"skipped": sum(1 for r in results if r["category"] == "Word" and r["outcome"] == "skipped"),
},
},
"tests": results,
}
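# Note: each result record above mirrors the schema emitted by
# tests/pytest_dashboard_plugin.py (name, nodeid, category, outcome, duration,
# timestamp, module, class, function, inputs, outputs, error, traceback), which
# is what lets merge_results() below combine the two lists without translation.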
def merge_results(pytest_results: dict, torture_results: dict) -> dict:
"""Merge pytest and torture test results."""
# Merge tests
all_tests = pytest_results.get("tests", []) + torture_results.get("tests", [])
# Recalculate summary
total = len(all_tests)
passed = sum(1 for t in all_tests if t["outcome"] == "passed")
failed = sum(1 for t in all_tests if t["outcome"] == "failed")
skipped = sum(1 for t in all_tests if t["outcome"] == "skipped")
# Merge categories
all_categories = {}
for cat_dict in [pytest_results.get("categories", {}), torture_results.get("categories", {})]:
for cat, stats in cat_dict.items():
if cat not in all_categories:
all_categories[cat] = {"total": 0, "passed": 0, "failed": 0, "skipped": 0}
for key in ["total", "passed", "failed", "skipped"]:
all_categories[cat][key] += stats.get(key, 0)
# Combine durations
total_duration = pytest_results.get("metadata", {}).get("duration", 0) + \
torture_results.get("metadata", {}).get("duration", 0)
return {
"metadata": {
"start_time": pytest_results.get("metadata", {}).get("start_time", datetime.now().isoformat()),
"end_time": datetime.now().isoformat(),
"duration": total_duration,
"exit_status": 0 if failed == 0 else 1,
"pytest_version": pytest_results.get("metadata", {}).get("pytest_version", "unknown"),
"test_types": ["pytest", "torture_test"],
},
"summary": {
"total": total,
"passed": passed,
"failed": failed,
"skipped": skipped,
"pass_rate": (passed / total * 100) if total > 0 else 0,
},
"categories": all_categories,
"tests": all_tests,
}
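# merge_results is purely additive: per-category counters from both runs are
# summed key by key, so e.g. pytest's "Excel" bucket and the torture tests'
# "Excel" bucket end up as a single combined row in the dashboard.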
def main():
"""Main execution function."""
reports_dir = Path(__file__).parent / "reports"
reports_dir.mkdir(exist_ok=True)
test_files_dir = reports_dir / "test_files"
pytest_output = reports_dir / "pytest_results.json"
final_output = reports_dir / "test_results.json"
# Run pytest tests
pytest_results = run_pytest_tests(pytest_output)
# Run torture tests with persistent test files
torture_results = asyncio.run(run_torture_tests(test_files_dir))
# Merge results
merged_results = merge_results(pytest_results, torture_results)
# Write final results
with open(final_output, "w") as f:
json.dump(merged_results, f, indent=2)
# Embed JSON data into HTML for offline viewing (file:// URLs)
dashboard_html = reports_dir / "test_dashboard.html"
if dashboard_html.exists():
html_content = dashboard_html.read_text()
# Remove any existing embedded data
import re
html_content = re.sub(
r'<script type="application/json" id="test-results-data">.*?</script>\n?',
'',
html_content,
flags=re.DOTALL
)
# Embed fresh data before </body>
embed_script = f'<script type="application/json" id="test-results-data">{json.dumps(merged_results)}</script>\n'
html_content = html_content.replace('</body>', f'{embed_script}</body>')
dashboard_html.write_text(html_content)
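# How the embedded data is consumed is outside this script, but the idea is that
# the dashboard page reads the JSON back from the element injected above, roughly
# JSON.parse(document.getElementById("test-results-data").textContent), instead of
# fetching a separate file; that is what makes it work from file:// URLs.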
print("\n" + "=" * 70)
print("📊 TEST DASHBOARD SUMMARY")
print("=" * 70)
print(f"\n✅ Passed: {merged_results['summary']['passed']}")
print(f"❌ Failed: {merged_results['summary']['failed']}")
print(f"⏭️ Skipped: {merged_results['summary']['skipped']}")
print(f"\n📈 Pass Rate: {merged_results['summary']['pass_rate']:.1f}%")
print(f"⏱️ Duration: {merged_results['metadata']['duration']:.2f}s")
print(f"\n📄 Results saved to: {final_output}")
print(f"🌐 Dashboard: {reports_dir / 'test_dashboard.html'}")
print("=" * 70)
# Try to open dashboard in browser
try:
import webbrowser
dashboard_path = reports_dir / "test_dashboard.html"
webbrowser.open(f"file://{dashboard_path.absolute()}")
print("\n🌐 Opening dashboard in browser...")
except Exception as e:
print(f"\n⚠️ Could not open browser automatically: {e}")
print(f" Open manually: file://{(reports_dir / 'test_dashboard.html').absolute()}")
# Return exit code
return merged_results["metadata"]["exit_status"]
if __name__ == "__main__":
sys.exit(main())

97
test_mcp_tools.py Normal file
View File

@ -0,0 +1,97 @@
#!/usr/bin/env python3
"""Simple test script to verify MCP Office Tools functionality."""
import asyncio
import tempfile
import os
from pathlib import Path
# Create simple test documents
def create_test_documents():
"""Create test documents for verification."""
temp_dir = Path(tempfile.mkdtemp())
# Create a simple CSV file
csv_path = temp_dir / "test.csv"
csv_content = """Name,Age,City
John Doe,30,New York
Jane Smith,25,Los Angeles
Bob Johnson,35,Chicago"""
with open(csv_path, 'w') as f:
f.write(csv_content)
# Create a simple text file to test validation
txt_path = temp_dir / "test.txt"
with open(txt_path, 'w') as f:
f.write("This is a simple text file, not an Office document.")
return temp_dir, csv_path, txt_path
async def test_mcp_server():
"""Test MCP server functionality."""
print("🧪 Testing MCP Office Tools Server")
print("=" * 50)
# Create test documents
temp_dir, csv_path, txt_path = create_test_documents()
print(f"📁 Created test files in: {temp_dir}")
try:
# Import the server components
from mcp_office_tools.mixins import UniversalMixin
# Test the Universal Mixin directly
universal = UniversalMixin()
print("\n🔍 Testing extract_text with CSV file...")
try:
result = await universal.extract_text(str(csv_path))
print("✅ CSV text extraction successful!")
print(f" Text length: {len(result.get('text', ''))}")
print(f" Method used: {result.get('method_used', 'unknown')}")
except Exception as e:
print(f"❌ CSV text extraction failed: {e}")
print("\n🔍 Testing get_supported_formats...")
try:
result = await universal.get_supported_formats()
print("✅ Supported formats query successful!")
print(f" Total formats: {len(result.get('formats', []))}")
print(f" Excel formats: {len([f for f in result.get('formats', []) if 'Excel' in f.get('description', '')])}")
except Exception as e:
print(f"❌ Supported formats query failed: {e}")
print("\n🔍 Testing validation with unsupported file...")
try:
result = await universal.extract_text(str(txt_path))
print("❌ Should have failed with unsupported file!")
except Exception as e:
print(f"✅ Correctly rejected unsupported file: {type(e).__name__}")
print("\n🔍 Testing detect_office_format...")
try:
result = await universal.detect_office_format(str(csv_path))
print("✅ Format detection successful!")
print(f" Detected format: {result.get('format', 'unknown')}")
print(f" Is supported: {result.get('is_supported', False)}")
except Exception as e:
print(f"❌ Format detection failed: {e}")
except ImportError as e:
print(f"❌ Failed to import server components: {e}")
return False
except Exception as e:
print(f"❌ Unexpected error: {e}")
return False
finally:
# Cleanup
import shutil
shutil.rmtree(temp_dir)
print(f"\n🧹 Cleaned up test files from: {temp_dir}")
print("\n✅ Basic MCP Office Tools testing completed!")
return True
if __name__ == "__main__":
asyncio.run(test_mcp_server())
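# This smoke test is meant to be run directly, e.g.:
#   uv run python test_mcp_tools.py
# (plain "python test_mcp_tools.py" works too once dependencies are installed)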

View File

@ -245,8 +245,8 @@ def mock_validation_context():
return MockValidationContext
# FastMCP-specific test markers
pytest_plugins = ["pytest_asyncio"]
# FastMCP-specific test markers and dashboard plugin
pytest_plugins = ["pytest_asyncio", "tests.pytest_dashboard_plugin"]
# Configure pytest markers
def pytest_configure(config):

194
tests/pytest_dashboard_plugin.py Normal file
View File

@ -0,0 +1,194 @@
"""Pytest plugin to capture test results for the dashboard.
This plugin captures detailed test execution data including inputs, outputs,
timing, and status for display in the HTML test dashboard.
"""
import json
import time
import traceback
from datetime import datetime
from pathlib import Path
from typing import Dict, List, Any
import pytest
class DashboardReporter:
"""Reporter that captures test execution data for the dashboard."""
def __init__(self, output_path: str):
self.output_path = Path(output_path)
self.test_results: List[Dict[str, Any]] = []
# Records captured in pytest_runtest_makereport, keyed by nodeid, until the
# matching report arrives in pytest_runtest_logreport
self._pending: Dict[str, Dict[str, Any]] = {}
self.start_time = time.time()
self.session_metadata = {
"start_time": datetime.now().isoformat(),
"pytest_version": pytest.__version__,
}
def pytest_runtest_protocol(self, item, nextitem):
"""Capture test execution at the protocol level."""
# Store test item for later use
item._dashboard_start = time.time()
return None
def pytest_runtest_makereport(self, item, call):
"""Capture test results and extract information."""
if call.when == "call": # Only capture the main test call, not setup/teardown
test_data = {
"name": item.name,
"nodeid": item.nodeid,
"category": self._categorize_test(item),
"outcome": None, # Will be set in pytest_runtest_logreport
"duration": call.duration,
"timestamp": datetime.now().isoformat(),
"module": item.module.__name__ if item.module else "unknown",
"class": item.cls.__name__ if item.cls else None,
"function": item.function.__name__ if hasattr(item, "function") else item.name,
"inputs": self._extract_inputs(item),
"outputs": None,
"error": None,
"traceback": None,
}
# Store for later lookup by nodeid in pytest_runtest_logreport
# (the TestReport passed there has no reference back to the item)
self._pending[item.nodeid] = test_data
def pytest_runtest_logreport(self, report):
"""Process test reports to extract outputs and status."""
# TestReport has no item attribute, so look up the pending record by nodeid
if report.when == "call":
test_data = self._pending.pop(report.nodeid, None)
if test_data is not None:
# Set outcome
test_data["outcome"] = report.outcome
# Extract output
if hasattr(report, "capstdout"):
test_data["outputs"] = {
"stdout": report.capstdout,
"stderr": getattr(report, "capstderr", ""),
}
# Extract error information
if report.failed:
test_data["error"] = str(report.longrepr) if hasattr(report, "longrepr") else "Unknown error"
if hasattr(report, "longreprtext"):
test_data["traceback"] = report.longreprtext
elif hasattr(report, "longrepr"):
test_data["traceback"] = str(report.longrepr)
# Extract actual output from test result if available
if hasattr(report, "result"):
test_data["outputs"]["result"] = str(report.result)
self.test_results.append(test_data)
def pytest_sessionfinish(self, session, exitstatus):
"""Write results to JSON file at end of test session."""
end_time = time.time()
# Calculate summary statistics
total_tests = len(self.test_results)
passed = sum(1 for t in self.test_results if t["outcome"] == "passed")
failed = sum(1 for t in self.test_results if t["outcome"] == "failed")
skipped = sum(1 for t in self.test_results if t["outcome"] == "skipped")
# Group by category
categories = {}
for test in self.test_results:
cat = test["category"]
if cat not in categories:
categories[cat] = {"total": 0, "passed": 0, "failed": 0, "skipped": 0}
categories[cat]["total"] += 1
if test["outcome"] == "passed":
categories[cat]["passed"] += 1
elif test["outcome"] == "failed":
categories[cat]["failed"] += 1
elif test["outcome"] == "skipped":
categories[cat]["skipped"] += 1
# Build final output
output_data = {
"metadata": {
**self.session_metadata,
"end_time": datetime.now().isoformat(),
"duration": end_time - self.start_time,
"exit_status": exitstatus,
},
"summary": {
"total": total_tests,
"passed": passed,
"failed": failed,
"skipped": skipped,
"pass_rate": (passed / total_tests * 100) if total_tests > 0 else 0,
},
"categories": categories,
"tests": self.test_results,
}
# Ensure output directory exists
self.output_path.parent.mkdir(parents=True, exist_ok=True)
# Write JSON
with open(self.output_path, "w") as f:
json.dump(output_data, f, indent=2)
print(f"\n Dashboard test results written to: {self.output_path}")
def _categorize_test(self, item) -> str:
"""Categorize test based on its name/path."""
nodeid = item.nodeid.lower()
if "word" in nodeid:
return "Word"
elif "excel" in nodeid:
return "Excel"
elif "powerpoint" in nodeid or "pptx" in nodeid:
return "PowerPoint"
elif "universal" in nodeid:
return "Universal"
elif "server" in nodeid:
return "Server"
else:
return "Other"
def _extract_inputs(self, item) -> Dict[str, Any]:
"""Extract test inputs from fixtures and parameters."""
inputs = {}
# Get fixture values
if hasattr(item, "funcargs"):
for name, value in item.funcargs.items():
# Skip complex objects, only store simple values
if isinstance(value, (str, int, float, bool, type(None))):
inputs[name] = value
elif isinstance(value, (list, tuple)) and len(value) < 10:
inputs[name] = list(value)
elif isinstance(value, dict) and len(value) < 10:
inputs[name] = value
else:
inputs[name] = f"<{type(value).__name__}>"
# Get parametrize values if present
if hasattr(item, "callspec"):
inputs["params"] = item.callspec.params
return inputs
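# Illustrative example of what gets captured (hypothetical test): a parametrized
# test using a tmp_path fixture would record roughly
#   {"tmp_path": "<PosixPath>", "params": {"fmt": "xlsx"}}
# since complex fixture objects are reduced to a "<TypeName>" placeholder.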
def pytest_configure(config):
"""Register the dashboard reporter plugin."""
output_path = config.getoption("--dashboard-output", default="reports/test_results.json")
reporter = DashboardReporter(output_path)
config.pluginmanager.register(reporter, "dashboard_reporter")
def pytest_addoption(parser):
"""Add command line option for dashboard output path."""
parser.addoption(
"--dashboard-output",
action="store",
default="reports/test_results.json",
help="Path to output JSON file for dashboard (default: reports/test_results.json)",
)

22
view_dashboard.sh Executable file
View File

@ -0,0 +1,22 @@
#!/bin/bash
# Quick script to open the test dashboard in browser
DASHBOARD_PATH="/home/rpm/claude/mcp-office-tools/reports/test_dashboard.html"
echo "📊 Opening MCP Office Tools Test Dashboard..."
echo "Dashboard: $DASHBOARD_PATH"
echo ""
# Try different browser commands based on what's available
if command -v xdg-open &> /dev/null; then
xdg-open "$DASHBOARD_PATH"
elif command -v firefox &> /dev/null; then
firefox "$DASHBOARD_PATH" &
elif command -v chromium &> /dev/null; then
chromium "$DASHBOARD_PATH" &
elif command -v google-chrome &> /dev/null; then
google-chrome "$DASHBOARD_PATH" &
else
echo "⚠️ No browser command found. Please open manually:"
echo " file://$DASHBOARD_PATH"
fi