llm-mcp-bridge
by ramgeart
Overview
A provider-agnostic MCP server that bridges to OpenAI-compatible LLM APIs, providing tools for model analysis, benchmarking, and quality evaluation.
Installation
node dist/index.js
Environment Variables
- LLM_BASE_URL
- LLM_API_KEY
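A minimal sketch of how a server like this might resolve its connection settings, preferring explicit per-call arguments over the environment variables above. The `resolveConfig` helper and its shape are illustrative assumptions, not the bridge's actual code:

```typescript
// Hypothetical helper: resolve LLM connection settings, assuming
// explicit per-call arguments take precedence over environment variables.
interface LlmConfig {
  baseUrl: string;
  apiKey?: string;
}

function resolveConfig(overrides: Partial<LlmConfig> = {}): LlmConfig {
  const baseUrl = overrides.baseUrl ?? process.env.LLM_BASE_URL;
  if (!baseUrl) {
    throw new Error("LLM_BASE_URL must be set via env var or per-call argument");
  }
  return {
    baseUrl,
    apiKey: overrides.apiKey ?? process.env.LLM_API_KEY,
  };
}
```

For example, `resolveConfig({ baseUrl: "http://localhost:8000/v1" })` would target a local OpenAI-compatible endpoint while still picking up the API key from the environment.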
Security Notes
The server handles API keys via environment variables or explicit arguments rather than hardcoding them. Tool arguments are validated with Zod, which helps reject malformed input before it reaches the upstream API and mitigates common injection vulnerabilities. The LLM base URL and API key can be overridden per tool call; this is a powerful feature, but it means the calling client must be trusted not to redirect requests to malicious endpoints. That trust assumption is a characteristic of the MCP protocol itself, not a flaw in the bridge's implementation. No eval or other directly exploitable patterns were found in the provided source.
Similar Servers
mcpo
Exposes Model Context Protocol (MCP) tools as OpenAPI-compatible HTTP servers.
inspector
Local development and debugging platform for building and testing Model Context Protocol (MCP) clients and servers, including proxying MCP server interactions, simulating UI widgets, and facilitating OAuth flows.
mcp-omnisearch
Provides a unified interface for various search, AI response, content processing, and enhancement tools via Model Context Protocol (MCP).
tmcp
A server implementation for the Model Context Protocol (MCP) to enable LLMs to access external context and tools.