MCP-Security-Proxy
Verified Safe by Rizwan723
Overview
A security proxy for Model Context Protocol (MCP) to test and defend against vulnerabilities in LLM tool interactions within a cloud-native Dockerized environment.
Installation
docker compose up -d --build
Environment Variables
- LLM_MODEL_PATH
- LLM_N_GPU_LAYERS
- LLM_N_CTX
- LLM_N_BATCH
- LLM_FLASH_ATTN
- LLM_VERBOSE
- LLM_CHAT_FORMAT
- LLM_TEMPERATURE
- LLM_MAX_TOKENS
- CLOUD_LLM_PROVIDER
- CLOUD_GOOGLE_API_KEY
- CLOUD_OPENAI_API_KEY
- CLOUD_MODEL_NAME
- CLOUD_TEMPERATURE
- CLOUD_MAX_TOKENS
- LOG_LEVEL
- DEBUG
- MODEL_NAME
- DETECTOR_SIGMA
- MCP_SERVERS
- RESEARCH_DATA_PATH
- TRAINING_DATA_FILE
- SEMANTIC_MODEL_PATH
- STATISTICAL_MODEL_PATH
- AUDIT_LOG_FILE
- AUDIT_LOG_PATH
- FS_SAFE_MODE
- FS_ROOT_DIR
- FS_LOG_LEVEL
- SQL_DB_PATH
- SQL_LOG_LEVEL
- TIME_LOG_LEVEL
- TIME_DEFAULT_TIMEZONE
- FETCH_LOG_LEVEL
- FETCH_SAFE_MODE
- FETCH_TIMEOUT
- MEMORY_LOG_LEVEL
- MEMORY_SAFE_MODE
- MEMORY_STORAGE_PATH
- BRIDGE_RPC_URL
- CUSTOM_LLM_URL
- LLM_CLOUD_URL
- TRAFFIC_SAMPLES
- TRAFFIC_ATTACK_RATIO
- TRAFFIC_CONCURRENCY
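For local testing, these variables can be supplied through a `.env` file that `docker compose` reads automatically. The sketch below is purely illustrative; every value is an assumption, not a documented project default:

```shell
# Illustrative .env sketch — all values are assumptions, not project defaults.
LLM_MODEL_PATH=/models/model.gguf   # hypothetical local model path
LLM_N_CTX=4096
LOG_LEVEL=INFO
DETECTOR_SIGMA=3.0                  # assumed anomaly threshold in standard deviations
MCP_SERVERS=filesystem,fetch,sqlite # assumed comma-separated server list format
FS_SAFE_MODE=false                  # intentionally vulnerable default for testing
FETCH_SAFE_MODE=false               # (see Security Notes)
```

Any variable not set here falls back to whatever default `docker-compose.yml` defines.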
Security Notes
The MCP Bridge (security proxy) is well-designed for its purpose, layering multiple defenses: a rule-based detector for known attacks, a statistical anomaly detector, a semantic (DistilBERT) prototypical-learning detector, and an optional MAML-based few-shot adaptation detector, combined in a weighted ensemble that defaults to 'ATTACK' in ambiguous cases. Network isolation is enforced through internal Docker networks for tools, preventing direct external access. Input validation uses Pydantic, and the regex-based rules include ReDoS protection.
The primary security risk lies in the intentionally vulnerable 'MCP Tools' services (e.g., filesystem, fetch, sqlite) when their 'SAFE_MODE' flags are set to 'false', which is the default in the provided `docker-compose.yml` for testing purposes. This is a deliberate feature for vulnerability testing, not a flaw in the proxy's defensive capabilities. A minor theoretical risk is the `torch.load(..., weights_only=False)` call for model loading, which could allow arbitrary code execution if the model files themselves were compromised by an attacker, though in a research context that generates its own models this is standard practice. The LLM proxy endpoint (`/v1/chat/completions`) performs basic forwarding without explicit security checks in the provided code, but it is configured to use internal LLM services by default.
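The "default to 'ATTACK' in ambiguous cases" behavior of the ensemble can be sketched as a weighted score average with a fail-closed middle band. The detector names, weights, and thresholds below are assumptions for illustration, not the project's actual values:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    score: float   # per-detector attack score: 0.0 = benign, 1.0 = attack
    weight: float  # detector's weight in the ensemble (assumed values)

def ensemble_decision(verdicts, attack_threshold=0.6, benign_threshold=0.4):
    """Weighted average of detector scores; the ambiguous band fails closed."""
    total_weight = sum(v.weight for v in verdicts)
    combined = sum(v.score * v.weight for v in verdicts) / total_weight
    if combined >= attack_threshold:
        return "ATTACK"
    if combined <= benign_threshold:
        return "BENIGN"
    return "ATTACK"  # ambiguous score: default to ATTACK (fail closed)

# Example: rule-based detector sees nothing, but the semantic and
# statistical detectors are mildly suspicious -> ambiguous -> ATTACK.
verdicts = [Verdict(0.1, 0.3), Verdict(0.8, 0.4), Verdict(0.5, 0.3)]
print(ensemble_decision(verdicts))  # -> ATTACK (combined score 0.50)
```

Failing closed on the ambiguous band means a borderline request is blocked rather than forwarded, trading some false positives for coverage of novel attacks.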
Similar Servers
mcp-language-server
Serves as an MCP (Model Context Protocol) gateway, enabling LLMs to interact with Language Servers (LSPs) for codebase navigation, semantic analysis, and code editing operations.
toolhive-studio
ToolHive is a desktop application (Electron UI) for discovering, deploying, and managing Model Context Protocol (MCP) servers in isolated containers, and connecting them to AI agents and clients.
modular-mcp
A proxy server that efficiently manages and loads large tool collections from multiple Model Context Protocol (MCP) servers on-demand for LLMs, reducing context overhead.
emceepee
A proxy server enabling AI agents to dynamically connect to and interact with multiple Model Context Protocol (MCP) backend servers, exposing the full MCP protocol via a simplified tool interface or a sandboxed JavaScript execution environment.