consult-llm-mcp
Verified Safe · by raine
Overview
An MCP server that allows AI agents like Claude Code to consult stronger, more capable AI models (e.g., GPT-5.2, Gemini 3.0 Pro) for complex code analysis, debugging, and architectural advice.
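To illustrate the flow, an MCP client surfaces the server's consultation capability through the standard MCP `tools/call` request. The tool name and argument names below are assumptions for illustration only, not the server's documented schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "consult_llm",
    "arguments": {
      "prompt": "Review this race condition in the job queue and suggest a fix.",
      "files": ["src/queue.ts", "src/worker.ts"]
    }
  }
}
```

The server forwards the prompt (plus the contents of any referenced files) to the configured stronger model and returns its answer as the tool result.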
Installation
npx -y consult-llm-mcp
Environment Variables
- OPENAI_API_KEY
- GEMINI_API_KEY
- DEEPSEEK_API_KEY
- CONSULT_LLM_DEFAULT_MODEL
- GEMINI_MODE
- OPENAI_MODE
- CODEX_REASONING_EFFORT
- CONSULT_LLM_ALLOWED_MODELS
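A typical way to wire this up is a stdio server entry in your MCP client's configuration that runs the npx command and passes the relevant keys through the environment. The file location and exact shape depend on the client; the entry below is an illustrative sketch, with placeholder values for keys and the default model:

```json
{
  "mcpServers": {
    "consult-llm": {
      "command": "npx",
      "args": ["-y", "consult-llm-mcp"],
      "env": {
        "OPENAI_API_KEY": "sk-...",
        "GEMINI_API_KEY": "...",
        "CONSULT_LLM_DEFAULT_MODEL": "<model-id>"
      }
    }
  }
}
```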
Security Notes
The server invokes CLI integrations via `child_process.spawn` with `shell: false`, which mitigates direct shell injection. API keys are read from environment variables rather than hardcoded. Context file paths (via `processFiles`) are resolved to absolute paths; a maliciously crafted path could in principle cause an unintended file read, although in practice these paths are supplied by the invoking AI agent. The main residual risk lies in the external CLI tools that are invoked (Gemini CLI, Codex CLI) and in how they parse the constructed prompt and file arguments.
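For context, the spawn pattern described above looks roughly like the sketch below. The helper name, CLI binary, and flags are illustrative assumptions, not consult-llm-mcp's actual code:

```typescript
import { spawn } from "node:child_process";

// Illustrative only: the binary, flags, and prompt handling are assumptions,
// not consult-llm-mcp's actual implementation.
function runCli(binary: string, args: string[], prompt: string): Promise<string> {
  return new Promise((resolve, reject) => {
    // shell: false (spawn's default) passes args verbatim to the executable,
    // so shell metacharacters in the prompt are never interpreted by a shell.
    const child = spawn(binary, args, { shell: false });

    let stdout = "";
    let stderr = "";
    child.stdout.on("data", (chunk) => (stdout += chunk));
    child.stderr.on("data", (chunk) => (stderr += chunk));

    // The prompt is written to stdin rather than interpolated into a command string.
    child.stdin.write(prompt);
    child.stdin.end();

    child.on("error", reject);
    child.on("close", (code) => {
      code === 0 ? resolve(stdout) : reject(new Error(stderr || `exit code ${code}`));
    });
  });
}

// Usage (placeholder binary and flags):
// runCli("some-cli", ["--flag", "value"], "Explain this stack trace: ...");
```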
Similar Servers
claude-code-mcp
Acts as an MCP server to enable LLMs to run Claude Code CLI in one-shot mode, bypassing permissions for complex coding, file system, Git, and terminal operations.
Delphi-MCP-Server
Implements the Model Context Protocol (MCP) in Delphi to enable AI-powered development workflows and integrate with clients like Claude Code.
tenets
Provides intelligent, token-optimized code context and automatically injects guiding principles into AI coding assistants for better understanding and more consistent interactions.
mcp-devtools-server
Standardizes development tool patterns and provides AI-powered integrations so that Claude Code can generate code more efficiently, make fewer errors, and autocorrect better across programming languages and workflows.