gpt-researcher
Verified Safe by assafelovic
Overview
The GPT Researcher MCP Server enables AI assistants to conduct comprehensive web research and generate detailed, factual, and unbiased reports. It supports multi-agent workflows, local document analysis, and integration with external tools via the Model Context Protocol (MCP) for a variety of research tasks.
Installation
docker-compose up --build
Environment Variables
- OPENAI_API_KEY
- TAVILY_API_KEY
- OPENAI_BASE_URL
- LANGCHAIN_API_KEY
- LOGGING_LEVEL
- NEXT_PUBLIC_GA_MEASUREMENT_ID
- NEXT_PUBLIC_GPTR_API_URL
- DISCORD_BOT_TOKEN
- DISCORD_CLIENT_ID
- DOC_PATH
- RETRIEVER
- EMBEDDING
- MCP_API_KEY
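A minimal `.env` sketch for a local run, assuming only the variables listed above (all values are placeholders; the optional entries shown commented out may not be needed for a basic setup):

```shell
# Required: LLM provider and web-search retriever credentials
OPENAI_API_KEY=your-openai-key
TAVILY_API_KEY=your-tavily-key

# Optional overrides (uncomment as needed)
# OPENAI_BASE_URL=https://your-proxy.example.com/v1
# RETRIEVER=tavily
# DOC_PATH=./my-docs
# LOGGING_LEVEL=INFO
```

Keeping keys in a `.env` file (excluded from version control) rather than hard-coding them matches the project's practice of reading secrets from environment variables.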
Security Notes
The project demonstrates good security practices regarding file path manipulation by using `sanitize_filename` and `os.path.basename` to prevent path traversal in file uploads, deletions, and report generation. Sensitive API keys are managed via environment variables. However, the default `docker-compose.yml` runs services as `user: root`, which grants excessive privileges within containers and should be mitigated for production. Additionally, the FastAPI server's CORS `allow_origins` includes `"*"` for testing purposes, which needs to be restricted to specific domains in a production environment. LLM-based components are also inherently susceptible to prompt injection risks.
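One way to address the root-user issue is a compose override that pins the container to a non-root UID. This is a sketch only: the service name `gpt-researcher` is an assumption, so check the project's actual `docker-compose.yml` for the real service names before applying it.

```yaml
# docker-compose.override.yml (sketch; service name is assumed)
services:
  gpt-researcher:
    user: "1000:1000"   # run as a non-root UID:GID instead of root
```

For the CORS issue, the same hardening principle applies: replace the wildcard `"*"` in the FastAPI server's `allow_origins` with an explicit list of trusted production domains.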
Similar Servers
deep-research
Generate comprehensive, AI-powered deep research reports, leveraging various LLMs and web search engines, with local knowledge base integration and report artifact editing.
DevDocs
DevDocs is a web crawling and content extraction platform designed to accelerate software development by converting documentation into LLM-ready formats for intelligent data querying and fine-tuning.
mcp-server
Provides a Model Context Protocol (MCP) server for AI agents to search and retrieve curated documentation for the Strands Agents framework, facilitating AI coding assistance.
Google-Search-MCP-Server
This MCP server enhances Google search with AI-powered research synthesis, content extraction, source quality assessment, and deduplication, designed to be used by large language models (LLMs) like Claude.