# MCP-Security-Proxy
## Overview
Transparent security proxy for LLM tool interactions, employing ensemble anomaly detection to classify requests as benign or malicious.
## Installation

```shell
docker compose up -d --build
```

## Environment Variables
- CLOUD_OPENAI_API_KEY (required if CLOUD_LLM_PROVIDER=openai)
- CLOUD_GOOGLE_API_KEY (required if CLOUD_LLM_PROVIDER=gemini)
- LLM_MODEL_PATH (default: /app/models/llama-2-7b-chat.Q4_K_M.gguf)
- MCP_SERVERS (comma-separated list of tool URLs, e.g., http://tool-filesystem:8080)
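A minimal `.env` sketch covering the variables above. All values are illustrative; the second tool URL and its port are assumptions, not part of the documented defaults:

```shell
# .env — illustrative values only; adjust to your deployment
CLOUD_LLM_PROVIDER=openai                 # or: gemini
CLOUD_OPENAI_API_KEY=your-openai-key      # required when CLOUD_LLM_PROVIDER=openai
# CLOUD_GOOGLE_API_KEY=your-google-key    # required when CLOUD_LLM_PROVIDER=gemini
LLM_MODEL_PATH=/app/models/llama-2-7b-chat.Q4_K_M.gguf
# hypothetical second tool URL/port shown for the comma-separated format
MCP_SERVERS=http://tool-filesystem:8080,http://tool-sqlite:8081
```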
## Security Notes
The MCP Bridge (security proxy) implements robust security features: an ensemble of rule-based, statistical, and semantic detectors; network isolation for tools (the mcp-secure internal network); and fail-safe blocking. The underlying MCP tool servers (filesystem, sqlite, time, fetch, memory), however, are intentionally vulnerable to common attacks (e.g., SQL injection via direct `cursor.execute`, command injection via the timezone parameter, path traversal when `SAFE_MODE=false`), because this is a research project testing the proxy's detection capabilities. Bypassing the proxy would expose these severe vulnerabilities. The `is_safe_to_run` verdict assumes the system runs with the proxy actively protecting these intentionally vulnerable tools.
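The ensemble-with-fail-safe pattern described above can be sketched as follows. This is a toy illustration, not the project's actual implementation: the detector functions, their signatures, and the thresholds are all hypothetical, and only the name `is_safe_to_run` comes from the documentation:

```python
# Hypothetical sketch of ensemble detection with fail-safe blocking.
# Detectors and thresholds are illustrative, not the project's real API.

def rule_based(request: str) -> bool:
    """Flag requests matching known attack signatures (toy list)."""
    signatures = ["' or 1=1", "../", "; rm -rf", "drop table"]
    return any(sig in request.lower() for sig in signatures)

def statistical(request: str) -> bool:
    """Flag requests with an anomalous ratio of special characters."""
    if not request:
        return False
    special = sum(1 for c in request if not c.isalnum() and not c.isspace())
    return special / len(request) > 0.4  # illustrative threshold

def is_safe_to_run(request: str, detectors=(rule_based, statistical)) -> bool:
    """Fail-safe: block if ANY detector flags, or if a detector itself fails."""
    for detect in detectors:
        try:
            if detect(request):
                return False   # flagged as malicious -> block
        except Exception:
            return False       # detector error -> block (fail-safe, not fail-open)
    return True

print(is_safe_to_run("read file notes.txt"))                        # True (benign)
print(is_safe_to_run("SELECT * FROM users WHERE id='' OR 1=1"))     # False (SQLi pattern)
```

The key design point is the fail-safe direction: a crashing detector blocks the request rather than letting it through, so a bypass of one detector never silently disables protection.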
## Similar Servers
### mcpo

Exposes Model Context Protocol (MCP) tools as OpenAPI-compatible HTTP servers.

### mcp-context-forge

Retrieves web content and files from any URL, converting them into high-quality Markdown with support for various content types and conversion engines.

### mcp-language-server

Proxies a Language Server Protocol (LSP) server to provide semantic code-intelligence tools to Model Context Protocol (MCP) clients, enabling LLMs to interact with codebases.

### mcp-server-code-execution-mode

Enables LLM agents to execute Python code in a highly secure, isolated container environment, facilitating complex multi-tool orchestration and data analysis with minimal LLM context-token usage.