Stop Searching. Start Trusting.

The curated directory of MCP servers, vetted for security, efficiency, and quality.

Tired of the MCP "Marketplace" Chaos?

We built MCPScout.ai to solve the ecosystem's biggest pain points.

No Insecure Dumps

We manually analyze every server for basic security flaws.

Easy Setup

Our gotcha notes warn you about complex setups.

Avoid "Token Hogs"

We estimate each server's token cost so you can keep your agents cost-effective.

Products, Not Demos

We filter out "Hello World" demos.


Vetted Servers (6,642)

100 · 2280 · High Cost

UltraRAG by OpenBMB (Sec 7)

A low-code Retrieval-Augmented Generation (RAG) framework for researchers to build and iterate on complex multi-stage, multimodal RAG pipelines using a Model Context Protocol (MCP) architecture.

Setup Requirements

  • ⚠️Requires Node.js (version 20+) for launching remote MCP servers (`npx mcp-remote`); a sample client entry is sketched after this list.
  • ⚠️Many functionalities (e.g., web search, OpenAI LLMs) require external API keys (Exa, Tavily, ZhipuAI, OpenAI) which incur usage costs.
  • ⚠️Leverages GPU hardware extensively for performance (e.g., vLLM, FAISS-GPU, sentence-transformers, infinity-emb); specific CUDA versions (e.g., CUDA 12.x) may be required depending on chosen dependencies.
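
A minimal sketch of the kind of client entry meant above, wiring `npx mcp-remote` into an MCP client's JSON settings; the server name and URL are placeholders, not values from the UltraRAG documentation.

```json
{
  "mcpServers": {
    "ultrarag-remote": {
      "command": "npx",
      "args": ["mcp-remote", "https://example.com/mcp"]
    }
  }
}
```
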
Verified Safe
The framework relies on user-provided YAML configurations for pipeline and server definitions, which, if untrusted, could lead to unexpected behavior. Running external servers or APIs (e.g., Exa, Tavily, ZhipuAI, OpenAI) requires careful security consideration for API key management and network exposure. The `/api/system/shutdown` endpoint in the UI should not be exposed publicly without strict access controls.
Updated: 2025-12-06 · GitHub
100 · 7564 · Medium Cost

mcp by awslabs (Sec 9)

Enables AI assistants to interact with AWS DocumentDB databases by providing tools for connection management, database/collection operations, document CRUD, aggregation, schema analysis, and query planning.

Setup Requirements

  • ⚠️Requires Python 3.10+ and 'uv' package manager for installation and local development.
  • ⚠️Requires network access to an AWS DocumentDB cluster.
  • ⚠️Requires a valid SSL/TLS certificate ('global-bundle.pem') for DocumentDB connections if TLS is enabled.
  • ⚠️Requires AWS credentials with appropriate permissions to access DocumentDB.
  • ⚠️Manual configuration of the MCP client's JSON settings file is needed for local server or 'uvx' package usage.
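
The last requirement above refers to an entry like the sketch below. It assumes the package is published for `uvx` as `awslabs.documentdb-mcp-server` (verify the exact name in the repository); the profile and region values are placeholders. Per the analysis below, the server stays read-only unless you add the `--allow-write` flag.

```json
{
  "mcpServers": {
    "documentdb": {
      "command": "uvx",
      "args": ["awslabs.documentdb-mcp-server@latest"],
      "env": {
        "AWS_PROFILE": "your-profile",
        "AWS_REGION": "us-east-1"
      }
    }
  }
}
```
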
Verified Safe
The server defaults to read-only mode, blocking write operations unless explicitly enabled with the `--allow-write` flag, significantly enhancing security for sensitive database operations. Input parameters are rigorously validated using Pydantic, preventing common injection vulnerabilities. Connection strings are validated to enforce DocumentDB-specific security requirements (e.g., `retryWrites=false`). Logging is handled by `loguru` for robust auditing. No hardcoded sensitive credentials or direct evaluation of user-provided code were identified.
Updated: 2025-12-06 · GitHub
100 · 1613 · Low Cost

osaurus by dinoki-ai (Sec 8)

Osaurus is a native macOS LLM server that runs local language models behind OpenAI- and Ollama-compatible APIs, enabling tool calling and a plugin ecosystem for AI agents.

Setup Requirements

  • ⚠️Requires macOS 15.5+ and Apple Silicon (M1 or newer) for native execution and optimized performance.
  • ⚠️Users must manually download LLM models via the application's UI or CLI after installation.
  • ⚠️Integration with external MCP clients (e.g., Cursor) requires adding specific JSON configuration to the client.
Verified Safe
The project demonstrates strong security practices including mandated code signing for plugins, explicit user consent for tool execution (e.g., via ToolPermissionView), and secure handling of sensitive credentials (Keychain integration for MCP tokens, environment variables for CI/CD secrets). The plugin system, while powerful, inherently introduces a surface for potential vulnerabilities if malicious plugins are installed, though this is mitigated by strict signing and permission models. Configuration details shared for inter-app communication do not include sensitive data.
Updated: 2025-12-05 · GitHub
100 · 4833 · Medium Cost

5ire by nanbingxyz (Sec 3)

A desktop AI assistant client that integrates with various LLM providers and supports extensible tool and prompt functionalities via the Model Context Protocol (MCP).

Setup Requirements

  • ⚠️Requires Python, Node.js, and the 'uv' Python package manager for the 'tools' feature, complicating the runtime environment setup.
  • ⚠️The application downloads a large local embedding model (Xenova/bge-m3) during initial setup, requiring significant bandwidth and disk space.
  • ⚠️Requires API keys for external LLM providers (e.g., OpenAI, Anthropic, Google) for core chat functionalities, which are typically paid services.
  • ⚠️A custom `CRYPTO_SECRET` environment variable *must* be set for secure data encryption; otherwise, encryption is trivially broken due to a weak default.
Review Required
The application allows installation and execution of external MCP servers, including local ones that can run arbitrary commands, posing a significant risk for arbitrary code execution. The IPC handler for network requests (`ipcMain.handle("request")`) provides broad network access from the renderer process without sufficient protocol restrictions or sandboxing for all uses, potentially enabling internal network attacks. Direct SQL execution via IPC (`db-all`, `db-run`, `db-transaction`) could be vulnerable to injection if parameters are not consistently handled as prepared statements. Critically, the `CRYPTO_SECRET` environment variable, used for encryption, defaults to an empty string if not configured, rendering encrypted data easily decipherable.
Updated: 2025-12-05 · GitHub
100 · 38704 · Medium Cost

context7 by upstash (Sec 8)

Context7 MCP enhances LLM prompts by injecting up-to-date, version-specific documentation and code examples directly from source code, enabling more accurate and relevant code generation.

Setup Requirements

  • ⚠️Requires Node.js v18.0.0 or higher.
  • ⚠️Context7 API Key is highly recommended for higher rate limits and private repository access; basic usage might be rate-limited without it.
  • ⚠️Relies on an external API (`https://mcp.context7.com/mcp` or `https://context7.com/api`) for documentation content, requiring an active internet connection.
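
One low-friction way to try it (a sketch, not the project's canonical install instructions) is to point `npx mcp-remote` at the hosted endpoint listed above; the Node.js requirement applies either way, and clients with native remote-server support can use the URL directly.

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["mcp-remote", "https://mcp.context7.com/mcp"]
    }
  }
}
```
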
Verified Safe
The `mcp/src/lib/encryption.ts` file contains a hardcoded `DEFAULT_ENCRYPTION_KEY` for AES-256-CBC encryption of client IPs. While this is primarily for internal hashing/rate limiting and can be overridden by the `CLIENT_IP_ENCRYPTION_KEY` environment variable, hardcoded default keys are generally not best practice for cryptographic operations. The server relies on an external API (`https://context7.com/api` or `https://mcp.context7.com/mcp`), so the overall security posture depends on the trustworthiness and security of this external service. No `eval` or other directly malicious patterns were found in the provided source code.
Updated: 2025-12-05 · GitHub
100 · 1291 · Medium Cost · Sec 7

Proxies a Language Server Protocol (LSP) server to provide semantic code intelligence tools to Model Context Protocol (MCP) clients, enabling LLMs to interact with codebases.

Setup Requirements

  • ⚠️Requires a separately installed Language Server Protocol (LSP) executable (e.g., gopls, rust-analyzer).
  • ⚠️Requires a specific JSON configuration in the MCP client (e.g., Claude Desktop) to define the server command, arguments, and environment variables; a placeholder sketch follows this list.
  • ⚠️C/C++ projects using clangd require a `compile_commands.json` file, typically generated by build tools like `bear`.
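
The sketch below is purely illustrative: the binary path and flag names are made up, standing in for whatever this project actually accepts (gopls is the example LSP); only the shape of the entry (command, args, env) matters here.

```json
{
  "mcpServers": {
    "lsp-proxy": {
      "command": "/path/to/lsp-mcp-proxy",
      "args": ["--workspace", "/path/to/your/project", "--lsp", "gopls"],
      "env": {
        "PATH": "/usr/local/bin:/usr/bin:/bin"
      }
    }
  }
}
```
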
Verified Safe
The server's core function involves executing an external, user-configured Language Server Protocol (LSP) binary as a child process. This LSP child process is granted extensive file system access (read, write, create, rename, delete) within the specified workspace directory to fulfill its duties. While this is inherent to LSP functionality, it means the server's security heavily relies on the trustworthiness of the *user-selected* LSP executable. If a malicious LSP server is configured, it could potentially compromise the workspace or (less directly) the host system. The server itself does not expose network ports, employ `eval`, or contain hardcoded secrets, and validates LSP command existence with `exec.LookPath` and workspace path validity. The risk is primarily in the user's choice of external LSP.
Updated: 2025-12-02 · GitHub
100 · 1100 · High Cost

npcpy by NPC-Worldwide (Sec 1)

A comprehensive Python library and framework for building, evaluating, and serving LLM-powered agents and multi-agent systems, integrating fine-tuning capabilities, knowledge graphs, and scalable model operations, with a built-in Flask API server for deployment.

Setup Requirements

  • ⚠️Requires Ollama for local LLM inference (e.g., `ollama pull llama3.2`).
  • ⚠️Requires external API keys for non-Ollama LLM providers (e.g., `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GEMINI_API_KEY`, `DEEPSEEK_API_KEY`, `PERPLEXITY_API_KEY`, `ELEVENLABS_API_KEY`).
  • ⚠️FFmpeg is required for audio/video processing capabilities.
  • ⚠️PyAudio/PortAudio are required for audio (TTS/STT) functionalities.
  • ⚠️GPU (CUDA) highly recommended for fine-tuning and diffusion models for performance.
  • ⚠️Inotify-tools is required for filesystem triggers.
  • ⚠️Python 3.10+ is a requirement (specified in setup.py).
Review Required
The server exposes `execute_llm_command` and Jinx `python` engine steps, which can execute arbitrary shell commands (`subprocess.run(..., shell=True)`) and Python code (`eval(...)`) respectively, based on LLM output. This is a critical remote code execution vulnerability if not rigorously sandboxed and input-validated. The example server setup uses `cors_origins=['*']`, which is insecure for production. Direct LLM-to-bash execution without user confirmation or strong sandboxing is extremely dangerous.
Updated: 2025-12-04 · GitHub
100 · 5006 · Low Cost · Sec 9

Provides web scraping, crawling, search, and structured data extraction capabilities to AI models via the Model Context Protocol.

Setup Requirements

  • ⚠️Requires a Firecrawl API Key (paid service) for cloud API usage; see the sample entry after this list.
  • ⚠️Requires Node.js version 18 or higher to run.
  • ⚠️If using a self-hosted Firecrawl instance (`FIRECRAWL_API_URL`), ensure LLM support is configured for extraction tools, as it might not be enabled by default.
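
A minimal sketch of a client entry, assuming the server ships as the `firecrawl-mcp` npm package (check the repository for the exact name); the key is a placeholder, and `FIRECRAWL_API_URL` only needs to be added when pointing at a self-hosted instance.

```json
{
  "mcpServers": {
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "fc-your-api-key"
      }
    }
  }
}
```
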
Verified Safe
The server uses environment variables for API keys, avoiding hardcoded secrets. Input validation is performed using `zod` schemas. A `SAFE_MODE` is explicitly enabled for cloud deployments, disabling potentially dangerous interactive web actions (like clicks, writes, JavaScript execution) for enhanced security. The core scraping functionality relies on external `firecrawl-js` and `firecrawl-fastmcp` SDKs, whose internal security is assumed. No direct `eval` or obvious malicious patterns were found in the provided source.
Updated: 2025-11-22 · GitHub
100 · 4626 · High Cost

Skill_Seekers by yusufkaraaslan (Sec 7)

Automatically converts documentation websites, GitHub repositories, and PDFs into Claude AI skills, with conflict detection and AI-powered enhancement.

Setup Requirements

  • ⚠️Requires Python 3.10+.
  • ⚠️Requires `mcp` package for MCP server functionality.
  • ⚠️Requires `PyMuPDF` (and `pytesseract`/`Pillow` for OCR) for PDF features.
  • ⚠️Requires `PyGithub` for GitHub features.
  • ⚠️`ANTHROPIC_API_KEY` is needed for API-based AI enhancement and skill upload.
  • ⚠️`claude-code` CLI tool (part of Claude Code Max plan) is needed for local AI enhancement.
Verified Safe
The server uses `subprocess.run` and `subprocess.Popen` extensively to execute CLI tools. While arguments are constructed using `sys.executable` and `Path` objects, reducing direct shell injection risks, this still relies on the security of the invoked CLI tools and careful argument sanitization. The system processes untrusted external content (web pages, GitHub repositories, PDF files), which inherently carries risks such as resource exhaustion, parsing errors, or specific data-level attack vectors, though no direct code execution from scraped content is apparent in the core logic. API keys (ANTHROPIC_API_KEY, GITHUB_TOKEN) are correctly handled via environment variables or config, not hardcoded. The `CodeAnalyzer` uses regex for some languages, which can be less robust than full parsers. Overall, it follows good security practices for its stated purpose, but the inherent risks of processing external data and relying on external subprocesses remain.
Updated: 2025-11-30 · GitHub
100 · 19698 · High Cost

UI-TARS-desktop by bytedance (Sec 8)

A multimodal AI agent stack providing a native GUI agent desktop application (UI-TARS Desktop) and a general CLI/Web UI agent (Agent TARS) for controlling computers, browsers, and mobile devices using natural language, integrating various real-world tools via the Model Context Protocol (MCP).

Setup Requirements

  • ⚠️Requires Node.js >= 22 and pnpm >= 9.
  • ⚠️Requires API keys for various VLM models (e.g., OpenAI, Anthropic, VolcEngine/Doubao, Gemini, Perplexity, Groq, Mistral, Azure OpenAI, OpenRouter, DeepSeek, Ollama, LM Studio), which are often paid services.
  • ⚠️Android automation requires ADB (Android Debug Bridge) to be installed and configured with connected devices.
  • ⚠️Remote computer/browser control depends on external proxy services (UI_TARS_PROXY_HOST) which may require specific setup or payment.
Verified Safe
The project demonstrates awareness of security best practices, utilizing `secretlint` to flag potential hardcoded secrets (resolved via environment variables), implementing JWT for authentication in remote interactions, and validating file paths to prevent directory traversal in filesystem operations. Permissions for macOS system access (e.g., accessibility, screen recording) are explicitly handled. However, reliance on external proxy services (`UI_TARS_PROXY_HOST`) introduces a dependency on the security of those third-party infrastructures. There are no immediate signs of direct `eval` usage in critical agent logic (though commented out examples exist in helper utilities), and `clipboard.setContent` for typing on Windows is a common automation technique.
Updated: 2025-12-05 · GitHub
100 · 13565 · Low Cost

mcp-for-beginners by microsoft (Sec 7)

Automates GitHub repository cloning and VS Code integration for streamlined development workflows.

Setup Requirements

  • ⚠️Requires Python 3.10+.
  • ⚠️Requires Git CLI installed and configured in the environment where the server runs.
  • ⚠️Requires VS Code (or VS Code Insiders) installed for the `open_in_vscode` tool to function.
Verified Safe
The server uses `subprocess.run` to execute external commands like `git clone` and platform-specific VS Code launch commands. The `open_in_vscode` function on Windows utilizes `shell=True` with the `start` command, which inherently carries a higher risk of command injection if input paths are not perfectly sanitized or if a malicious executable is placed in a predictable path. While the code attempts to mitigate this with path expansion, caution is advised. Additionally, cloning untrusted GitHub repositories can introduce vulnerabilities from the repository content itself.
Updated: 2025-12-04 · GitHub
100 · 1402 · Medium Cost

agentgateway by agentgateway (Sec 3)

A flexible API gateway designed for routing and managing network traffic, with specialized capabilities for integrating AI/LLM models, Model Context Protocol (MCP) agents, and Agent-to-Agent (A2A) communications through configurable listeners, routes, and policies.

Setup Requirements

  • ⚠️Requires OpenSSL for certificate management and testing.
  • ⚠️Building from source requires a Rust toolchain.
  • ⚠️Specific AI/LLM backends (e.g., AWS Bedrock, Google Vertex AI) will require corresponding cloud credentials and project setup.
  • ⚠️The UI is a separate Next.js application that needs to be built or run in development mode alongside the Rust backend.
Review Required
CRITICAL: Test private keys are committed to the repository for integration tests (crates/agentgateway/tests/common/testdata), which is highly dangerous if accidentally used in production. The UI allows direct configuration updates (including highly privileged operations like executing arbitrary `stdio` commands in MCP targets) via HTTP endpoints. While the UI uses `http://localhost:15000` by default, a production deployment without proper authentication and authorization on the `/config` endpoint (and other management endpoints) could allow remote unauthenticated configuration modifications, including remote code execution. Configuration dumps could also expose sensitive details. The 'Restart Setup Wizard' functionality allows deleting all configuration. Strong authentication and authorization must be implemented for the backend management endpoints in any non-test environment.
Updated: 2025-12-05 · GitHub
Page 3 of 554