# Hook API Reference

## Hook Input/Output Protocol

All hooks communicate with Claude Code via JSON over stdin/stdout.

### Input Format

Hooks receive a JSON object via stdin containing:

```json
{
  "tool": "string",           // Tool being used (e.g., "Bash", "Read", "Edit")
  "parameters": {},           // Tool-specific parameters
  "success": boolean,         // Tool execution result (PostToolUse only)
  "error": "string",          // Error message if failed (PostToolUse only)
  "execution_time": number,   // Execution time in seconds (PostToolUse only)
  "prompt": "string"          // User prompt text (UserPromptSubmit only)
}
```

### Output Format

Hooks must output a JSON response to stdout:

```json
{
  "allow": boolean,    // Required: true to allow, false to block
  "message": "string"  // Optional: message to display to user
}
```

### Exit Codes

- `0` - Allow operation (success)
- `1` - Block operation (for PreToolUse hooks only)
- Any other code - Treated as hook error, operation allowed
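As a concrete illustration of this protocol, here is a minimal hook skeleton in Python. It is a sketch built only on the assumptions above (payload fields, response shape, exit-code semantics), not a canonical implementation:

```python
#!/usr/bin/env python3
"""Minimal hook skeleton -- an illustrative sketch of the stdin/stdout protocol."""
import json
import sys


def main() -> int:
    # Read the JSON payload Claude Code passes on stdin.
    try:
        payload = json.load(sys.stdin)
    except json.JSONDecodeError:
        # Fail safe: malformed input must never block the operation.
        print(json.dumps({"allow": True, "message": "Hook received invalid input"}))
        return 0

    tool = payload.get("tool", "")

    # Decide whether to allow the operation (this skeleton always allows).
    response = {"allow": True, "message": f"Hook saw tool: {tool or 'n/a'}"}

    # Write the JSON response to stdout; the exit code carries the decision
    # (0 = allow, 1 = block for PreToolUse hooks).
    print(json.dumps(response))
    return 0 if response["allow"] else 1


if __name__ == "__main__":
    sys.exit(main())
```

The same skeleton applies to every hook type; only the payload fields read and the allow/block decision differ.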
---

## Hook Types

### UserPromptSubmit

**Trigger**: When user submits a prompt to Claude

**Purpose**: Monitor session state, trigger backups

**Input**:

```json
{
  "prompt": "Create a new Python script that..."
}
```

**Expected behavior**:

- Always return `"allow": true`
- Update context usage estimates
- Trigger backups if thresholds exceeded
- Update session tracking

**Example response**:

```json
{
  "allow": true,
  "message": "Context usage: 67%"
}
```

---

### PreToolUse

**Trigger**: Before Claude executes any tool

**Purpose**: Validate operations, block dangerous commands

#### PreToolUse[Bash]

**Input**:

```json
{
  "tool": "Bash",
  "parameters": {
    "command": "rm -rf /tmp/myfiles",
    "description": "Clean up temporary files"
  }
}
```

**Expected behavior**:

- Return `"allow": false` and exit code 1 to block dangerous commands
- Return `"allow": true` with warnings for suspicious commands
- Check against learned failure patterns
- Suggest alternatives when blocking

**Block response**:

```json
{
  "allow": false,
  "message": "⛔ Command blocked: Dangerous pattern detected"
}
```

**Warning response**:

```json
{
  "allow": true,
  "message": "⚠️ Command may fail (confidence: 85%)\n💡 Suggestion: Use 'python3' instead of 'python'"
}
```
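Sketched below is one way a `PreToolUse[Bash]` hook could implement this blocking behavior. The regular-expression patterns are illustrative assumptions; a real validator would draw on its own learned failure patterns and rule set:

```python
#!/usr/bin/env python3
"""Illustrative PreToolUse[Bash] validator -- the patterns below are assumed examples."""
import json
import re
import sys

# Assumed example patterns; not an exhaustive or authoritative rule set.
DANGEROUS_PATTERNS = [
    r"\brm\s+-rf\s+/(?:\s|$)",   # recursive delete of the filesystem root
    r"\bmkfs(\.\w+)?\b",         # formatting a filesystem
    r">\s*/dev/sd[a-z]\b",       # writing directly to a block device
]


def main() -> int:
    payload = json.load(sys.stdin)
    command = payload.get("parameters", {}).get("command", "")

    for pattern in DANGEROUS_PATTERNS:
        if re.search(pattern, command):
            print(json.dumps({
                "allow": False,
                "message": "⛔ Command blocked: Dangerous pattern detected"
            }))
            return 1  # exit code 1 blocks the operation for PreToolUse hooks

    print(json.dumps({"allow": True}))
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

A warning response uses the same shape: return `"allow": true` together with a `message` and exit 0.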
#### PreToolUse[Edit]

**Input**:

```json
{
  "tool": "Edit",
  "parameters": {
    "file_path": "/etc/passwd",
    "old_string": "user:x:1000",
    "new_string": "user:x:0"
  }
}
```

**Expected behavior**:

- Validate file paths for security
- Check for system file modifications
- Prevent path traversal attacks

#### PreToolUse[Write]

**Input**:

```json
{
  "tool": "Write",
  "parameters": {
    "file_path": "../../sensitive_file.txt",
    "content": "malicious content"
  }
}
```

**Expected behavior**:

- Validate file paths
- Check file extensions
- Detect potential security issues

---

### PostToolUse

**Trigger**: After Claude executes any tool

**Purpose**: Learn from outcomes, log activity

**Input**:

```json
{
  "tool": "Bash",
  "parameters": {
    "command": "pip install requests",
    "description": "Install requests library"
  },
  "success": false,
  "error": "bash: pip: command not found",
  "execution_time": 0.125
}
```

**Expected behavior**:

- Always return `"allow": true`
- Update learning patterns based on success/failure
- Log execution for analysis
- Update session state

**Example response**:

```json
{
  "allow": true,
  "message": "Logged Bash execution (learned failure pattern)"
}
```

---

### Stop

**Trigger**: When the Claude session ends

**Purpose**: Finalize the session, create documentation

**Input**:

```json
{}
```

**Expected behavior**:

- Always return `"allow": true`
- Create LAST_SESSION.md
- Update ACTIVE_TODOS.md
- Save learned patterns
- Generate recovery information if needed

**Example response**:

```json
{
  "allow": true,
  "message": "Session finalized. Modified 5 files, used 23 tools."
}
```

---

## Error Handling

### Hook Errors

If a hook script:

- Exits with a non-zero code (other than 1 for PreToolUse)
- Produces invalid JSON
- Times out (> 5 seconds by default)
- Crashes or cannot be executed

then Claude Code will:

- Log the error
- Allow the operation to proceed
- Display a warning message

### Recovery Behavior

Hooks are designed to fail safely:

- Operations are never blocked due to hook failures
- Invalid responses are treated as "allow"
- Missing hooks are ignored
- Malformed JSON output allows the operation

---

## Performance Requirements

### Execution Time

- **UserPromptSubmit**: < 100ms recommended
- **PreToolUse**: < 50ms recommended (blocks the user)
- **PostToolUse**: < 200ms recommended (can be async)
- **Stop**: < 1s recommended (cleanup operations)

### Memory Usage

- Hooks should use < 50MB of memory
- Avoid loading large datasets on each execution
- Use caching for expensive operations

### I/O Operations

- Minimize file I/O in PreToolUse hooks
- Batch write operations where possible
- Use async operations for non-blocking hooks

---

## Security Considerations

### Input Validation

Always validate hook input:

```python
def validate_input(input_data):
    if not isinstance(input_data, dict):
        raise ValueError("Input must be a JSON object")

    tool = input_data.get("tool", "")
    if not isinstance(tool, str):
        raise ValueError("Tool must be a string")

    # Validate other fields...
```

### Output Sanitization

Ensure hook output is safe:

```python
def safe_message(text):
    # Remove potential injection characters
    return text.replace('\x00', '').replace('\r', '').replace('\n', '\\n')

response = {
    "allow": True,
    "message": safe_message(user_input)
}
```

### File Path Validation

For hooks that access files:

```python
import os

def validate_file_path(path):
    # Convert to an absolute path
    abs_path = os.path.abspath(path)

    # Check that the path stays within the project boundaries
    project_root = os.path.abspath(".")
    if not abs_path.startswith(project_root):
        raise ValueError("Path outside project directory")

    # Check for system files
    system_paths = ['/etc', '/usr', '/var', '/sys', '/proc']
    for sys_path in system_paths:
        if abs_path.startswith(sys_path):
            raise ValueError("System file access denied")
```

---

## Testing Hooks

### Unit Testing

Test hooks with sample inputs:

```python
import json
import subprocess

def test_command_validator():
    # Test a dangerous command
    input_data = {
        "tool": "Bash",
        "parameters": {"command": "rm -rf /"}
    }

    process = subprocess.run(
        ["python3", "hooks/command_validator.py"],
        input=json.dumps(input_data),
        capture_output=True,
        text=True
    )

    assert process.returncode == 1  # Should block

    response = json.loads(process.stdout)
    assert response["allow"] == False
```

### Integration Testing

Test with Claude Code directly:

```bash
# Test in a development environment
echo '{"tool": "Bash", "parameters": {"command": "ls"}}' | python3 hooks/command_validator.py

# Test hook registration
claude-hooks status
```

### Performance Testing

Measure hook execution time:

```python
import json
import subprocess
import time

def benchmark_hook(hook_script, input_data, iterations=100):
    times = []
    for _ in range(iterations):
        start = time.time()
        subprocess.run(
            ["python3", hook_script],
            input=json.dumps(input_data),
            capture_output=True,
            text=True
        )
        times.append(time.time() - start)

    avg_time = sum(times) / len(times)
    max_time = max(times)
    print(f"Average: {avg_time*1000:.1f}ms, Max: {max_time*1000:.1f}ms")
```
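For instance, the validator used in the unit-testing example could be benchmarked against a representative payload (the payload values and iteration count here are illustrative):

```python
# Illustrative usage with an assumed, representative payload.
sample_input = {
    "tool": "Bash",
    "parameters": {"command": "ls -la", "description": "List files"}
}

benchmark_hook("hooks/command_validator.py", sample_input, iterations=50)
```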