
MCP-Security-Proxy

Verified Safe

by Rizwan723

Overview

A security proxy for Model Context Protocol (MCP) to test and defend against vulnerabilities in LLM tool interactions within a cloud-native Dockerized environment.

Installation

Run Command
docker compose up -d --build

Environment Variables

  • LLM_MODEL_PATH
  • LLM_N_GPU_LAYERS
  • LLM_N_CTX
  • LLM_N_BATCH
  • LLM_FLASH_ATTN
  • LLM_VERBOSE
  • LLM_CHAT_FORMAT
  • LLM_TEMPERATURE
  • LLM_MAX_TOKENS
  • CLOUD_LLM_PROVIDER
  • CLOUD_GOOGLE_API_KEY
  • CLOUD_OPENAI_API_KEY
  • CLOUD_MODEL_NAME
  • CLOUD_TEMPERATURE
  • CLOUD_MAX_TOKENS
  • LOG_LEVEL
  • DEBUG
  • MODEL_NAME
  • DETECTOR_SIGMA
  • MCP_SERVERS
  • RESEARCH_DATA_PATH
  • TRAINING_DATA_FILE
  • SEMANTIC_MODEL_PATH
  • STATISTICAL_MODEL_PATH
  • AUDIT_LOG_FILE
  • AUDIT_LOG_PATH
  • FS_SAFE_MODE
  • FS_ROOT_DIR
  • FS_LOG_LEVEL
  • SQL_DB_PATH
  • SQL_LOG_LEVEL
  • TIME_LOG_LEVEL
  • TIME_DEFAULT_TIMEZONE
  • FETCH_LOG_LEVEL
  • FETCH_SAFE_MODE
  • FETCH_TIMEOUT
  • MEMORY_LOG_LEVEL
  • MEMORY_SAFE_MODE
  • MEMORY_STORAGE_PATH
  • BRIDGE_RPC_URL
  • CUSTOM_LLM_URL
  • LLM_CLOUD_URL
  • TRAFFIC_SAMPLES
  • TRAFFIC_ATTACK_RATIO
  • TRAFFIC_CONCURRENCY
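A minimal `.env` sketch wiring a handful of these variables is shown below. All values are illustrative placeholders, not the project's shipped defaults; consult the repository's `docker-compose.yml` for the authoritative configuration.

```ini
# Illustrative values only -- check docker-compose.yml for real defaults
LOG_LEVEL=INFO
DEBUG=false
LLM_N_CTX=4096
LLM_TEMPERATURE=0.2
CLOUD_LLM_PROVIDER=openai
CLOUD_OPENAI_API_KEY=your-key-here
# SAFE_MODE flags default to false for vulnerability testing;
# set them to true when hardening the tool services
FS_SAFE_MODE=true
FETCH_SAFE_MODE=true
FETCH_TIMEOUT=10
```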

Security Notes

The MCP Bridge (security proxy) is well-designed for its purpose, layering multiple defenses: a rule-based detector for known attacks, a statistical anomaly detector, a semantic (DistilBERT) prototypical-learning detector, and an optional MAML-based few-shot adaptation detector, all combined in a weighted ensemble that defaults to 'ATTACK' in ambiguous cases. Network isolation is enforced through internal Docker networks for the tool services, preventing direct external access. Input is validated with Pydantic, and the regex-based rules include ReDoS protection.

The primary security risk lies in the intentionally vulnerable 'MCP Tools' services (e.g., filesystem, fetch, sqlite) when their 'SAFE_MODE' flags are set to 'false', which is the default in the provided `docker-compose.yml` for testing purposes. This is a deliberate feature for vulnerability testing, not a flaw in the proxy's defensive capabilities.

A minor theoretical risk is the `torch.load(..., weights_only=False)` call used for model loading, which could allow arbitrary code execution if the model files themselves were compromised by an attacker; in a research context that generates its own models, this is standard practice. The LLM proxy endpoint (`/v1/chat/completions`) performs basic forwarding without explicit security checks in the provided code, but it is configured to use internal LLM services by default.
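The fail-closed ensemble behavior described above can be sketched as follows. The detector scores, weights, and thresholds here are illustrative assumptions, not values taken from the repository; the point is only the decision rule: a clear majority score yields a verdict, and anything in the ambiguous band is treated as an attack.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    score: float   # 0.0 = benign, 1.0 = attack (per-detector output)
    weight: float  # ensemble weight for this detector

def ensemble_decision(verdicts, attack_threshold=0.6, benign_threshold=0.4):
    """Combine weighted detector scores; ambiguous results fail closed as ATTACK."""
    total_weight = sum(v.weight for v in verdicts)
    combined = sum(v.score * v.weight for v in verdicts) / total_weight
    if combined >= attack_threshold:
        return "ATTACK"
    if combined <= benign_threshold:
        return "BENIGN"
    return "ATTACK"  # fail closed: ambiguous scores default to ATTACK

# Example: rule-based detector sees nothing, but the statistical and
# semantic detectors mildly disagree -- the ensemble fails closed.
verdicts = [Verdict(0.2, 1.0), Verdict(0.6, 1.0), Verdict(0.5, 1.5)]
print(ensemble_decision(verdicts))  # → ATTACK (combined ≈ 0.44, ambiguous)
```

Failing closed in the ambiguous band trades some false positives for a stronger guarantee that borderline prompt-injection attempts are blocked, which matches the defensive posture described above.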

Stats

  • Interest Score: 0
  • Security Score: 9
  • Cost Class: Medium
  • Avg Tokens: 350
  • Stars: 0
  • Forks: 0
  • Last Update: 2026-01-19

Tags

Security, AI/ML, Proxy, Docker, Vulnerability Testing