internal-ai-bridge-mcp
Verified Safe by AD-Archer
Overview
This project acts as a Model Context Protocol (MCP) bridge, enabling OpenWebUI or other MCP-capable clients (like n8n) to communicate with an internal, in-house AI platform via HTTP webhooks, providing conversation memory and OpenAI-compatible endpoints.
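Since the bridge exposes OpenAI-compatible endpoints, a client would talk to it like any OpenAI-style server. The sketch below builds such a request; the `/v1/chat/completions` path and the model name are assumptions based on the "OpenAI-compatible" description, not verified routes of this project:

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, messages: list) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for the bridge.

    The /v1/chat/completions path is an assumption drawn from the
    'OpenAI-compatible endpoints' description, not a documented route.
    """
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical usage against a locally running bridge on port 8765.
req = build_chat_request(
    "http://localhost:8765",
    "internal-model",  # placeholder for whatever MODEL_NAME is set to
    [{"role": "user", "content": "Hello"}],
)
print(req.full_url)  # http://localhost:8765/v1/chat/completions
```

Port 8765 matches the `docker run` mapping shown under Installation.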
Installation
```shell
docker run --rm -p 8765:8765 --env-file .env -v "$(pwd)/data:/app/data" ghcr.io/ad-archer/external-ai-bridge-mcp:latest
```
Environment Variables
- AI_WEBHOOK_URL
- MODEL_NAME
- CONVERSATION_DB_PATH
- CONVERSATION_HISTORY_LIMIT
- MESSAGE_RETENTION_DAYS
- AI_API_KEY
- AI_TIMEOUT
- ENABLE_BEARER_AUTH
- API_BEARER_TOKEN
- ROUTE_BEARER_TOKENS
- EXTRA_WEBHOOKS
- FRONTEND_WEBHOOK_URL
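A `.env` file passed via `--env-file .env` might look like the following. All values are illustrative; the variable names come from the list above, but their expected formats are assumptions. `ROUTE_BEARER_TOKENS` and `EXTRA_WEBHOOKS` are omitted because their value format is not documented here:

```env
# Example .env (illustrative values only)
AI_WEBHOOK_URL=https://internal-ai.example.com/webhook
MODEL_NAME=internal-model
CONVERSATION_DB_PATH=/app/data/conversations.db
CONVERSATION_HISTORY_LIMIT=20
MESSAGE_RETENTION_DAYS=30
AI_API_KEY=changeme
AI_TIMEOUT=60
ENABLE_BEARER_AUTH=true
API_BEARER_TOKEN=changeme
FRONTEND_WEBHOOK_URL=https://frontend.internal.example.com/hook
```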
Security Notes
The server validates configuration with Pydantic, uses `sqlite3` with parameterized queries to prevent SQL injection, and provides a `BearerAuthMiddleware` for API authentication. Retry logic is applied to AI webhook calls. One notable surface is the `trigger_webhook` tool: when it is not targeting a pre-configured alias, it can invoke arbitrary URLs, which poses a potential Server-Side Request Forgery (SSRF) risk if the bridge is exposed without strong authentication or if an attacker gains control of the MCP client. The author describes this as an intended feature for flexibility. No `eval` usage or obvious obfuscation/malicious patterns were found.
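Operators who want to limit the SSRF exposure could restrict `trigger_webhook` to known aliases. A minimal sketch of such an allowlist check, assuming a hypothetical alias map and function name (neither is from the project):

```python
from urllib.parse import urlparse

# Hypothetical alias map, e.g. built from an EXTRA_WEBHOOKS-style setting.
WEBHOOK_ALIASES = {
    "frontend": "https://frontend.internal.example.com/hook",
}

def resolve_webhook_target(target: str, allow_arbitrary_urls: bool = False) -> str:
    """Resolve a trigger_webhook target, rejecting SSRF-prone input.

    Known aliases always resolve; raw URLs are accepted only when the
    operator explicitly opts in via allow_arbitrary_urls.
    """
    if target in WEBHOOK_ALIASES:
        return WEBHOOK_ALIASES[target]
    parsed = urlparse(target)
    if allow_arbitrary_urls and parsed.scheme in ("http", "https") and parsed.hostname:
        return target
    raise ValueError(f"webhook target not allowed: {target!r}")
```

Defaulting `allow_arbitrary_urls` to `False` makes the flexible behavior an explicit opt-in rather than the default.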
Similar Servers
mcpo
Exposes Model Context Protocol (MCP) tools as OpenAPI-compatible HTTP servers.
MCP-connect
A lightweight bridge service that exposes local MCP servers as HTTP APIs, enabling cloud AI tools and agents to interact with various local MCP services via Streamable HTTP or a classic request/response bridge.
tmcp
A server implementation for the Model Context Protocol (MCP) to enable LLMs to access external context and tools.
mcp-rubber-duck
An MCP (Model Context Protocol) server that acts as a bridge to query multiple OpenAI-compatible LLMs, enabling multi-agent AI workflows and providing an AI 'rubber duck' debugging panel.