openai-responses-mcp
by uchimanajet7
Overview
A lightweight MCP server that lets AI clients (such as Claude Code/Desktop) leverage the OpenAI Responses API, with autonomous web search capabilities, over stdio.
Installation
npx openai-responses-mcp@latest --stdio
Environment Variables
- OPENAI_API_KEY
- MODEL_ANSWER
- ANSWER_EFFORT
- ANSWER_VERBOSITY
- OPENAI_API_TIMEOUT
- OPENAI_MAX_RETRIES
- SEARCH_MAX_RESULTS
- SEARCH_RECENCY_DAYS
- MAX_CITATIONS
- DEBUG
- MCP_LINE_MODE
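As a sketch, the server can be configured entirely through the environment variables above and then launched with the install command; every value below is a placeholder assumption, not a documented default:

```shell
# Hypothetical values -- substitute your own key and preferences.
export OPENAI_API_KEY="sk-..."      # required; never hardcode in YAML/code
export MODEL_ANSWER="gpt-4.1"       # assumed model name, adjust as needed
export OPENAI_API_TIMEOUT="60000"   # request timeout (units assumed)
export SEARCH_MAX_RESULTS="5"
export DEBUG="1"                    # metadata-only debug logging

# Start the server over stdio for an MCP client to attach to.
npx openai-responses-mcp@latest --stdio
```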
Security Notes
The server follows good security practices: it requires API keys via environment variables (no hardcoding in YAML/code) and explicitly warns against storing secrets in config files. Debug logging is designed to avoid sensitive information such as full query text, instructions, or API keys, instead emitting metadata like query length or truncated error bodies. Configurable `base_url` and external policy file paths are present; these are standard features, but a user could misconfigure them to point at malicious sources.
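As an illustration of that metadata-only logging style (a sketch, not the server's actual code), a debug line can report the query length and a masked key prefix instead of the raw values:

```shell
# Illustrative sketch: log metadata, never the raw query or API key.
QUERY="what is MCP?"
OPENAI_API_KEY="sk-abc123"   # fake key for demonstration only

# Emit the query length and a truncated key prefix only.
echo "query_len=${#QUERY} key=${OPENAI_API_KEY:0:3}..."
# -> query_len=12 key=sk-...
```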
Similar Servers
consult-llm-mcp
An MCP server that allows AI agents like Claude Code to consult stronger, more capable AI models (e.g., GPT-5.2, Gemini 3.0 Pro) for complex code analysis, debugging, and architectural advice.
claude-faf-mcp
Optimizes AI understanding of software projects by providing persistent context, fixing context-drift, and enabling bi-directional synchronization between project metadata and AI documentation.
codex-mcp
Provides a robust MCP server wrapper for Codex CLI to enable reliable session ID tracking for multi-turn AI conversations.
converse
Orchestrates and exposes various AI tools (chat, multi-model consensus, job management) over the Model Context Protocol, enabling local, persistent, and potentially asynchronous AI interactions across multiple Large Language Model (LLM) providers.