consult-llm-mcp

Verified Safe

by raine

Overview

An MCP server that lets AI agents such as Claude Code consult stronger AI models (e.g., GPT-5.2, Gemini 3.0 Pro) for complex code analysis, debugging, and architectural advice.

Installation

Run Command
npx -y consult-llm-mcp

Environment Variables

  • OPENAI_API_KEY
  • GEMINI_API_KEY
  • DEEPSEEK_API_KEY
  • CONSULT_LLM_DEFAULT_MODEL
  • GEMINI_MODE
  • OPENAI_MODE
  • CODEX_REASONING_EFFORT
  • CONSULT_LLM_ALLOWED_MODELS
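
For reference, a stdio MCP client configuration that wires the run command and environment variables above together might look like the sketch below. The server name `consult-llm`, the file layout (e.g. Claude Code's `.mcp.json`), and the placeholder values are assumptions; check your MCP client's documentation for the exact keys it expects.

```json
{
  "mcpServers": {
    "consult-llm": {
      "command": "npx",
      "args": ["-y", "consult-llm-mcp"],
      "env": {
        "OPENAI_API_KEY": "<your OpenAI key>",
        "GEMINI_API_KEY": "<your Gemini key>",
        "CONSULT_LLM_DEFAULT_MODEL": "<model name>"
      }
    }
  }
}
```

Only the variables relevant to your chosen provider need to be set; the remaining variables listed above are optional overrides.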

Security Notes

The server invokes its CLI integrations via `child_process.spawn` with `shell: false`, which mitigates direct shell injection. API keys are read from environment variables rather than hardcoded. File paths supplied as context (via `processFiles`) are resolved to absolute paths; a maliciously crafted path could in principle cause an unintended file read, although in practice these paths are supplied by the invoking AI agent rather than by an end user. The remaining risk lies mainly in the external CLI tools that are spawned (the Gemini CLI and Codex CLI) and in how those tools parse the constructed prompt and file arguments.
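
To illustrate the pattern described above (this is a sketch, not the server's actual source), a `spawn` call with an argument array and `shell: false` looks roughly like the following; the `runCli` helper, the `gemini` binary name, and the `--prompt` flag are placeholders:

```typescript
import { spawn } from "node:child_process";

// Illustrative sketch only: the binary name and flags are placeholders,
// not the actual arguments consult-llm-mcp passes.
function runCli(prompt: string): Promise<string> {
  return new Promise((resolve, reject) => {
    // Arguments are passed as an array and shell: false (the default) means
    // no shell interprets them, so metacharacters in `prompt` cannot break
    // out into additional commands.
    const child = spawn("gemini", ["--prompt", prompt], { shell: false });

    let stdout = "";
    let stderr = "";
    child.stdout.on("data", (chunk) => (stdout += chunk));
    child.stderr.on("data", (chunk) => (stderr += chunk));

    child.on("error", reject);
    child.on("close", (code) => {
      if (code === 0) resolve(stdout);
      else reject(new Error(`CLI exited with code ${code}: ${stderr}`));
    });
  });
}
```

Because nothing is shell-interpolated, the exposure shifts to how the invoked CLI itself handles the prompt and file arguments it receives, which is the residual risk noted above.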

Stats

  • Interest Score: 47
  • Security Score: 8
  • Cost Class: High
  • Avg Tokens: 10000
  • Stars: 49
  • Forks: 8
  • Last Update: 2026-01-12

Tags

AI, LLM, Developer Tools, MCP, Code Analysis