thinkingcap
Verified Safe · by Infatoshi
Overview
A multi-agent research MCP server that runs multiple LLM providers in parallel and synthesizes their responses to a given query.
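A minimal sketch of that fan-out-and-synthesize pattern, assuming hypothetical helpers (`parseSpec`, `callProvider`, `research`) rather than the server's actual internals:

```typescript
// Illustrative stand-ins only: these functions sketch the pattern of querying
// several providers in parallel, not thinkingcap's real API.

type ProviderSpec = { provider: string; model: string };

// Parse a "provider:model" argument like those on the npx command line.
function parseSpec(arg: string): ProviderSpec {
  const [provider, ...rest] = arg.split(":");
  return { provider, model: rest.join(":") };
}

// Placeholder for a real provider call (OpenRouter, Groq, Cerebras, xAI, ...).
async function callProvider(spec: ProviderSpec, query: string): Promise<string> {
  return `[${spec.provider}/${spec.model}] answer to: ${query}`;
}

// Fan the query out to every provider at once, then combine whatever succeeded.
async function research(query: string, args: string[]): Promise<string> {
  const specs = args.map(parseSpec);
  const settled = await Promise.allSettled(specs.map((s) => callProvider(s, query)));
  const answers = settled
    .filter((r): r is PromiseFulfilledResult<string> => r.status === "fulfilled")
    .map((r) => r.value);
  // In the real server a final model pass would synthesize these; here they are joined.
  return answers.join("\n---\n");
}
```

Using `Promise.allSettled` rather than `Promise.all` lets one slow or failing provider degrade the answer instead of failing the whole query.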
Installation
Pass one or more provider:model pairs as command-line arguments:

npx -y thinkingcap openrouter:moonshotai/kimi-k2-thinking groq:moonshotai/kimi-k2-instruct-0905 cerebras:zai-glm-4.6 xai:grok-4-fast

Environment Variables
- OPENAI_API_KEY
- OPENROUTER_API_KEY
- GROQ_API_KEY
- CEREBRAS_API_KEY
- XAI_API_KEY
- ANTHROPIC_API_KEY
- GOOGLE_API_KEY
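Only the keys for the providers you actually list need to be set. A sketch of a Claude Desktop-style MCP client entry, assuming the common `mcpServers` configuration layout (the exact file and schema depend on your client):

```json
{
  "mcpServers": {
    "thinkingcap": {
      "command": "npx",
      "args": [
        "-y",
        "thinkingcap",
        "openrouter:moonshotai/kimi-k2-thinking",
        "groq:moonshotai/kimi-k2-instruct-0905"
      ],
      "env": {
        "OPENROUTER_API_KEY": "sk-or-...",
        "GROQ_API_KEY": "gsk_..."
      }
    }
  }
}
```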
Security Notes
API keys are correctly managed via environment variables and are not hardcoded. The system uses LLMs to generate structured data (questions) that are then JSON-parsed; this introduces a potential risk if a model deviates maliciously from the expected format, but the prompts explicitly instruct the model to return only a JSON array of strings, which mitigates it. Web search is performed against DuckDuckGo, a legitimate search engine, via direct HTTP requests. There are no detected uses of `eval` or similar dangerous functions on untrusted inputs.
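As a sketch of the kind of defensive parsing those notes describe (a hypothetical helper, not the server's actual code), model output can be validated before use:

```typescript
// Hypothetical validator: accept the model output only if it is a JSON array of strings.
function parseQuestions(raw: string): string[] {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    throw new Error("Model output was not valid JSON");
  }
  if (!Array.isArray(parsed) || !parsed.every((q) => typeof q === "string")) {
    throw new Error("Model output was not a JSON array of strings");
  }
  return parsed;
}
```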
Similar Servers
gpt-researcher
An autonomous AI agent designed for comprehensive online and local document research, capable of generating detailed, factual, and unbiased reports. It also supports integration with AI assistants (like Claude) via the Model Context Protocol (MCP) for deep research capabilities.
deep-research
An AI-powered research assistant that generates comprehensive reports, leverages various LLMs and web search engines, and offers integration as a SaaS or MCP service.
mcp-omnisearch
Provides a unified interface for LLMs to access multiple web search, AI response, content processing, and enhancement tools from various providers through the Model Context Protocol (MCP).
mcp_massive
An AI agent orchestration server that likely interacts with LLMs and manages multi-agent workflows.