Stop Searching. Start Trusting.
The curated directory of MCP servers, vetted for security, efficiency, and quality.
Tired of the MCP "Marketplace" Chaos?
We built MCPScout.ai to solve the ecosystem's biggest pain points.
No Insecure Dumps
We manually analyze every server for basic security flaws.
Easy Setup
Our gotcha notes warn you about complex setups.
Avoid "Token Hogs"
We estimate token costs so you can build cost-effective agents.
Products, Not Demos
We filter out "Hello World" demos.
Vetted Servers (6,642)
UltraRAG
by OpenBMB
A low-code framework for researchers to build and iterate on complex multi-stage, multimodal Retrieval-Augmented Generation (RAG) pipelines using a Model Context Protocol (MCP) architecture.
Setup Requirements
- ⚠️Requires Node.js (version 20+) for launching remote MCP servers via `npx mcp-remote` (see the example config after this list).
- ⚠️Many functionalities (e.g., web search, OpenAI LLMs) require external API keys (Exa, Tavily, ZhipuAI, OpenAI) which incur usage costs.
- ⚠️Leverages GPU hardware extensively for performance (e.g., vLLM, FAISS-GPU, sentence-transformers, infinity-emb); specific CUDA versions (e.g., CUDA 12.x) may be required depending on chosen dependencies.
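As a rough sketch of the first requirement above: stdio-only MCP clients typically reach a remote MCP server by bridging it through `mcp-remote`. The server name and URL below are placeholders, not values published by the project; substitute the actual endpoint from UltraRAG's documentation.

```json
{
  "mcpServers": {
    "ultrarag": {
      "command": "npx",
      "args": ["mcp-remote", "https://example.com/mcp"]
    }
  }
}
```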
Verified Safe
mcp
by awslabs
Enables AI assistants to interact with AWS DocumentDB databases by providing tools for connection management, database/collection operations, document CRUD, aggregation, schema analysis, and query planning.
Setup Requirements
- ⚠️Requires Python 3.10+ and the 'uv' package manager for installation and local development.
- ⚠️Requires network access to an AWS DocumentDB cluster.
- ⚠️Requires the Amazon RDS CA certificate bundle ('global-bundle.pem') for DocumentDB connections if TLS is enabled.
- ⚠️Requires AWS credentials with appropriate permissions to access DocumentDB.
- ⚠️Manual configuration of the MCP client's JSON settings file is needed for local server or 'uvx' package usage.
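A minimal sketch of that client JSON entry, assuming the DocumentDB server is published under a name like `awslabs.documentdb-mcp-server` and that standard AWS environment variables supply credentials; both are assumptions, so confirm the exact package name and settings against the awslabs/mcp README.

```json
{
  "mcpServers": {
    "aws-documentdb": {
      "command": "uvx",
      "args": ["awslabs.documentdb-mcp-server@latest"],
      "env": {
        "AWS_PROFILE": "your-profile",
        "AWS_REGION": "us-east-1"
      }
    }
  }
}
```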
Verified Safe
osaurus
by dinoki-ai
Osaurus is a native macOS server that runs local language models behind OpenAI- and Ollama-compatible APIs, with tool calling and a plugin ecosystem for AI agents.
Setup Requirements
- ⚠️Requires macOS 15.5+ and Apple Silicon (M1 or newer) for native execution and optimized performance.
- ⚠️Users must manually download LLM models via the application's UI or CLI after installation.
- ⚠️Integration with external MCP clients (e.g., Cursor) requires adding specific JSON configuration to the client.
Verified Safe
5ire
by nanbingxyz
A desktop AI assistant client that integrates with various LLM providers and supports extensible tool and prompt functionalities via the Model Context Protocol (MCP).
Setup Requirements
- ⚠️Requires Python, Node.js, and the 'uv' Python package manager for the 'tools' feature, which complicates runtime environment setup.
- ⚠️The application downloads a large local embedding model (Xenova/bge-m3) during initial setup, requiring significant bandwidth and disk space.
- ⚠️Requires API keys for external LLM providers (e.g., OpenAI, Anthropic, Google) for core chat functionalities, which are typically paid services.
- ⚠️A custom `CRYPTO_SECRET` environment variable *must* be set for secure data encryption; otherwise, encryption is trivially broken due to a weak default.
Review Required
context7
by upstash
Context7 MCP enhances LLM prompts by injecting up-to-date, version-specific documentation and code examples directly from source code, enabling more accurate and relevant code generation.
Setup Requirements
- ⚠️Requires Node.js v18.0.0 or higher.
- ⚠️A Context7 API key is highly recommended for higher rate limits and private repository access; basic usage may be rate-limited without one.
- ⚠️Relies on an external API (`https://mcp.context7.com/mcp` or `https://context7.com/api`) for documentation content, requiring an active internet connection.
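Because the hosted endpoint above is public, clients that support remote MCP servers can usually be pointed straight at it. A minimal sketch, assuming your client accepts a `url` entry and that the optional API key is passed as a request header; the header name shown is an assumption, so check Context7's documentation for the exact mechanism.

```json
{
  "mcpServers": {
    "context7": {
      "url": "https://mcp.context7.com/mcp",
      "headers": {
        "CONTEXT7_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}
```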
Verified Safe
mcp-language-server
by isaacphi
Proxies a Language Server Protocol (LSP) server to provide semantic code intelligence tools to Model Context Protocol (MCP) clients, enabling LLMs to interact with codebases.
Setup Requirements
- ⚠️Requires a separately installed Language Server Protocol (LSP) executable (e.g., gopls, rust-analyzer).
- ⚠️Requires a specific JSON configuration in the MCP client (e.g., Claude Desktop) to define the server command, arguments, and environment variables (see the sketch after this list).
- ⚠️C/C++ projects using clangd require a `compile_commands.json` file, typically generated by build tools like `bear`.
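A minimal sketch of that client entry for a Go project, assuming the `mcp-language-server` binary is already on your PATH and accepts `--workspace` and `--lsp` flags; the flag names are recalled from the project's README, so verify them before use.

```json
{
  "mcpServers": {
    "language-server": {
      "command": "mcp-language-server",
      "args": ["--workspace", "/path/to/your/project", "--lsp", "gopls"]
    }
  }
}
```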
Verified Safe
npcpy
by NPC-Worldwide
A comprehensive Python library and framework for building, evaluating, and serving LLM-powered agents and multi-agent systems. It integrates fine-tuning, knowledge graphs, and scalable model operations, and includes a built-in Flask API server for deployment.
Setup Requirements
- ⚠️Requires Ollama for local LLM inference (e.g., `ollama pull llama3.2`).
- ⚠️Requires external API keys for non-Ollama LLM providers (e.g., `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GEMINI_API_KEY`, `DEEPSEEK_API_KEY`, `PERPLEXITY_API_KEY`, `ELEVENLABS_API_KEY`).
- ⚠️FFmpeg is required for audio/video processing capabilities.
- ⚠️PyAudio/PortAudio are required for audio (TTS/STT) functionalities.
- ⚠️A CUDA-capable GPU is highly recommended for acceptable fine-tuning and diffusion-model performance.
- ⚠️Inotify-tools is required for filesystem triggers.
- ⚠️Requires Python 3.10+ (as specified in setup.py).
Review Required
firecrawl-mcp-server
by firecrawl
Provides web scraping, crawling, search, and structured data extraction capabilities to AI models via the Model Context Protocol.
Setup Requirements
- ⚠️Requires a Firecrawl API key (paid service) for cloud API usage (see the sample config after this list).
- ⚠️Requires Node.js version 18 or higher to run.
- ⚠️If using a self-hosted Firecrawl instance (`FIRECRAWL_API_URL`), ensure LLM support is configured for extraction tools, as it might not be enabled by default.
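A minimal sketch of a client entry for the cloud API, assuming the conventional `npx -y firecrawl-mcp` launch command and the `FIRECRAWL_API_KEY` environment variable; confirm both against the project's current README.

```json
{
  "mcpServers": {
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}
```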
Verified Safe
Skill_Seekers
by yusufkaraaslan
Automatically converts documentation websites, GitHub repositories, and PDFs into Claude AI skills, with conflict detection and AI-powered enhancement.
Setup Requirements
- ⚠️Requires Python 3.10+.
- ⚠️Requires `mcp` package for MCP server functionality.
- ⚠️Requires `PyMuPDF` (and `pytesseract`/`Pillow` for OCR) for PDF features.
- ⚠️Requires `PyGithub` for GitHub features.
- ⚠️`ANTHROPIC_API_KEY` is needed for API-based AI enhancement and skill upload.
- ⚠️The `claude-code` CLI tool (part of the Claude Code Max plan) is needed for local AI enhancement.
Verified Safe
UI-TARS-desktop
by bytedance
A multimodal AI agent stack that provides a native GUI agent desktop application (UI-TARS Desktop) and a general CLI/Web UI agent (Agent TARS) for controlling computers, browsers, and mobile devices with natural language, integrating real-world tools via the Model Context Protocol (MCP).
Setup Requirements
- ⚠️Requires Node.js >= 22 and pnpm >= 9.
- ⚠️Requires API keys for various VLM models (e.g., OpenAI, Anthropic, VolcEngine/Doubao, Gemini, Perplexity, Groq, Mistral, Azure OpenAI, OpenRouter, DeepSeek, Ollama, LM Studio), which are often paid services.
- ⚠️Android automation requires ADB (Android Debug Bridge) to be installed and configured with connected devices.
- ⚠️Remote computer/browser control depends on external proxy services (set via `UI_TARS_PROXY_HOST`), which may require specific setup or payment.
Verified Safe
mcp-for-beginners
by microsoft
Automates GitHub repository cloning and VS Code integration for streamlined development workflows.
Setup Requirements
- ⚠️Requires Python 3.10+.
- ⚠️Requires Git CLI installed and configured in the environment where the server runs.
- ⚠️Requires VS Code (or VS Code Insiders) installed for the `open_in_vscode` tool to function.
Verified Safe
agentgateway
by agentgateway
A flexible API gateway designed for routing and managing network traffic, with specialized capabilities for integrating AI/LLM models, Model Context Protocol (MCP) agents, and Agent-to-Agent (A2A) communications through configurable listeners, routes, and policies.
Setup Requirements
- ⚠️Requires OpenSSL for certificate management and testing.
- ⚠️Building from source requires a Rust toolchain.
- ⚠️Specific AI/LLM backends (e.g., AWS Bedrock, Google Vertex AI) will require corresponding cloud credentials and project setup.
- ⚠️The UI is a separate Next.js application that needs to be built or run in development mode alongside the Rust backend.
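As a purely illustrative sketch of the listener/route/backend layering described above (the field names here are approximations, not the project's actual schema; consult the agentgateway documentation for the real format), a gateway fronting a local stdio MCP server might be shaped roughly like this:

```json
{
  "binds": [
    {
      "port": 3000,
      "listeners": [
        {
          "routes": [
            {
              "backends": [
                {
                  "mcp": {
                    "targets": [
                      {
                        "name": "everything",
                        "stdio": {
                          "cmd": "npx",
                          "args": ["@modelcontextprotocol/server-everything"]
                        }
                      }
                    ]
                  }
                }
              ]
            }
          ]
        }
      ]
    }
  ]
}
```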