Stop Searching. Start Trusting.

The curated directory of MCP servers, vetted for security, efficiency, and quality.

Tired of the MCP "Marketplace" Chaos?

We built MCPScout.ai to solve the ecosystem's biggest pain points.

No Insecure Dumps

We manually analyze every server for basic security flaws.

Easy Setup

Our gotcha notes flag complex setup steps before you run into them.

Avoid "Token Hogs"

We estimate each server's token cost so you can keep your agents cost-effective.

Products, Not Demos

We filter out "Hello World" demos.

Vetted Servers (9120)

100 · ★ 2443 · High Cost

UltraRAG

by OpenBMB

Sec7

An open-source framework for building, experimenting with, and evaluating complex Retrieval-Augmented Generation (RAG) pipelines with low-code YAML configurations and native multimodal support.

Setup Requirements

  • ⚠️Requires GPUs for optimal performance, especially for vLLM, FAISS-GPU, and certain embedding models; `gpu_ids` is frequently configured.
  • ⚠️Requires various API keys (e.g., OPENAI_API_KEY, EXA_API_KEY, TAVILY_API_KEY, ZHIPUAI_API_KEY) for accessing external LLM and search services, which are typically paid.
  • ⚠️External system dependencies include Node.js (version >=20 is checked) for remote MCP servers and the `mineru` executable for advanced document parsing.
  • ⚠️FAISS (faiss-cpu or faiss-gpu-cu12, specific to CUDA version) is an optional dependency for the retriever backend.
Verified Safe
The server uses `ast.literal_eval` for parsing configuration values like list delimiters, which is generally safer than `eval` but still processes string input as Python literals. Subprocess execution (`subprocess.Popen`, `asyncio.create_subprocess_exec`) is used for launching MCP servers and external tools like `mineru`, which is inherent to its architecture; parameters appear to be sanitized or derived from trusted sources. Network risks include making API calls to external LLM providers (OpenAI, ZhipuAI) and web search services (Exa, Tavily), and also supports deploying a remote retriever via a configurable URL (`retriever_url`). If `retriever_url` is user-controlled in a non-isolated environment, it could pose a Server-Side Request Forgery (SSRF) risk. Hardcoded secrets are avoided by relying on environment variables (e.g., `LLM_API_KEY`, `EXA_API_KEY`). The framework's overall security depends significantly on how users configure and deploy individual MCP servers and pipelines.
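The `ast.literal_eval` pattern the analysis mentions can be sketched in isolation; unlike `eval`, it only accepts plain Python literals and rejects anything that would execute code (example values here are illustrative, not UltraRAG's config):

```python
import ast

# literal_eval only accepts literals (numbers, strings, tuples, lists,
# dicts, sets, booleans, None) -- it cannot call functions or attributes.
delims = ast.literal_eval("[',', ';', '|']")
print(delims)  # [',', ';', '|']

# The same call rejects code that eval() would happily execute.
try:
    ast.literal_eval("__import__('os').system('id')")
except ValueError:
    print("rejected: not a literal")
```

The residual risk the analysis notes is simply that configuration strings are still parsed as Python literal syntax, so malformed values fail at parse time rather than being treated as opaque text.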
Updated: 2026-01-19 · GitHub
100 · ★ 7905 · Medium Cost

mcp

by awslabs

Sec3

Enables AI assistants to interact with AWS DocumentDB databases, providing tools for connection management, database/collection operations, document querying, aggregation pipelines, query planning, and schema analysis. It acts as a bridge for safe and efficient database operations through the Model Context Protocol (MCP).

Setup Requirements

  • ⚠️Requires network access to the DocumentDB cluster (e.g., via VPC peering, security group rules).
  • ⚠️Requires an SSL/TLS certificate (typically `global-bundle.pem`) for TLS-enabled DocumentDB clusters.
  • ⚠️The DocumentDB connection string must explicitly include `retryWrites=false`.
  • ⚠️Requires the `uv` Python package manager for installation (`uvx` command in examples).
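As a sketch (the cluster hostname and credentials are hypothetical), a connection string satisfying the requirements above combines TLS, the CA bundle, and the mandatory `retryWrites=false` flag:

```python
# Hypothetical DocumentDB endpoint; DocumentDB does not support retryable
# writes, so retryWrites=false must be set explicitly in the URI.
host = "docdb-cluster.cluster-example.us-east-1.docdb.amazonaws.com:27017"
uri = (
    f"mongodb://myuser:mypassword@{host}/"
    "?tls=true&tlsCAFile=global-bundle.pem&retryWrites=false"
)
print(uri)
# pymongo.MongoClient(uri) would then connect (connection not attempted here).
```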
Review Required
Critical: Database connection strings (containing credentials) are stored in memory for the `_idle_timeout` duration (default 30 minutes). If the server process is compromised, these credentials could be exposed. High: The server lacks inherent authentication and fine-grained authorization for incoming MCP requests, assuming the calling agent is fully trusted. It provides a `--allow-write` flag for a binary read-only/read-write mode, but not granular access control per operation or user. Medium: Queries and aggregation pipelines are passed directly to `pymongo`, which protects against classic SQL injection but allows trusted (or compromised) agents to perform potentially resource-intensive or data-exposing operations. Error logging might inadvertently expose sensitive connection details depending on `pymongo`'s exception messages.
Updated: 2026-01-19 · GitHub
100 · ★ 3056 · Low Cost

osaurus

by dinoki-ai

Sec9

Osaurus is an AI edge runtime for macOS, enabling users to run local and cloud AI models, orchestrate tools via the Model Context Protocol (MCP), and power AI applications and workflows on Apple Silicon.

Setup Requirements

  • ⚠️Requires macOS 15.5+ and Apple Silicon (M1 or newer) due to MLX Runtime optimization.
  • ⚠️Initial setup involves downloading Whisper models for voice input and LLM models from Hugging Face, requiring internet connection and several gigabytes of disk space.
  • ⚠️Voice input (WhisperKit) and Transcription Mode require granting specific macOS permissions: Microphone, Screen Recording (for system audio), and Accessibility (for global dictation).
Verified Safe
The project demonstrates strong security practices for user data and application integrity. API keys for remote providers are securely stored in the macOS Keychain. The plugin system incorporates explicit permission policies (e.g., 'ask', 'auto', 'deny') for tools, including granular macOS system permissions (Automation, Accessibility, Full Disk Access), giving users control over tool capabilities. All distributed plugins (dylibs) are required to be code-signed with a Developer ID Application certificate. The server runs locally by default and network exposure is a configurable user option. No obvious malicious patterns like obfuscation or direct `eval` usage are found in the core application logic. CI/CD scripts handle sensitive environment variables (e.g., GitHub tokens, Apple certificates) for release processes, which is standard but relies on the security of the CI environment itself.
Updated: 2026-01-19 · GitHub
100 · ★ 4961 · Medium Cost

5ire

by nanbingxyz

Sec2

A desktop AI assistant client that integrates with various LLM providers and connects to Model Context Protocol (MCP) servers for extended tool-use and knowledge base capabilities.

Setup Requirements

  • ⚠️Requires Python, Node.js, and the uv Python package manager if local MCP servers (for tools feature) are to be used.
  • ⚠️Requires API keys/credentials for external LLM providers (e.g., OpenAI, Anthropic, Google, Mistral, Grok), incurring monetary costs for API usage.
  • ⚠️Downloads embedding models (e.g., Xenova/bge-m3) from HuggingFace upon first use, which can be a significant initial download size.
Review Required
The application's deep-link handling for 'install-tool' (e.g., `app.5ire://install-tool#<base64_encoded_json>`) allows for the installation of new MCP server configurations. If an MCP server is configured with `type: 'local'`, its `command` and `args` fields (e.g., `command: 'python', args: ['-c', 'import os; os.system("rm -rf /")']`) are directly executed via `StdioTransport`. This mechanism permits arbitrary command execution on the user's system upon clicking a malicious deep link, posing a critical security vulnerability. While some input validation (`isValidMCPServer`, `isValidMCPServerKey`) exists, it doesn't prevent dangerous commands.
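The attack shape can be sketched with a hypothetical payload and a defensive allowlist check (the payload and `ALLOWED_COMMANDS` set are illustrative, not 5ire's actual validation code):

```python
import base64
import json

# Hypothetical malicious config mimicking app.5ire://install-tool#<base64>.
config = {"type": "local", "command": "python",
          "args": ["-c", "print('arbitrary code')"]}
payload = base64.urlsafe_b64encode(json.dumps(config).encode()).decode()
link = f"app.5ire://install-tool#{payload}"

# A defensive client decodes the fragment, then refuses any local command
# outside an allowlist instead of executing whatever the link names.
ALLOWED_COMMANDS = {"uvx", "npx"}
decoded = json.loads(base64.urlsafe_b64decode(link.split("#", 1)[1]))
verdict = ("blocked" if decoded["type"] == "local"
           and decoded["command"] not in ALLOWED_COMMANDS else "allowed")
print(verdict)  # blocked
```

The key point is that validating JSON *shape* (as `isValidMCPServer` does) is not the same as validating what the `command` and `args` fields will do once executed.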
Updated: 2026-01-19 · GitHub
100 · ★ 1168 · Low Cost
Sec8

A Model Context Protocol (MCP) server providing persistent, semantic memory storage and retrieval capabilities for AI agents. It supports lightweight semantic reasoning (contradiction, causal inference), content chunking, multi-backend storage (SQLite-vec, Cloudflare, Hybrid), autonomous memory consolidation (decay, association, clustering, compression, forgetting), and real-time updates via SSE. It's designed for token-efficient interaction with LLMs.

Setup Requirements

  • ⚠️Requires Python dependencies like PyTorch (or ONNX Runtime & Tokenizers for CPU-only), sentence-transformers, sqlite-vec, mcp, aiohttp, fastapi, and uvicorn. Installation might be complex due to platform-specific PyTorch/GPU setup.
  • ⚠️Initial model downloads (~300MB for 'all-MiniLM-L6-v2') can cause timeouts during first-time startup if network is slow or dependencies are not pre-cached.
  • ⚠️Cloudflare storage backend requires `CLOUDFLARE_API_TOKEN` and `CLOUDFLARE_ACCOUNT_ID` environment variables configured, alongside other D1/Vectorize/R2 specifics.
Verified Safe
The server employs good security practices, such as lazy initialization of storage, reliance on environment variables for sensitive data (e.g., Cloudflare API tokens, OAuth keys), and the generation of JWT keys rather than hardcoding. It uses `httpx` and `aiohttp` for external network calls, and `aiosqlite` with parameterized queries for database interactions, mitigating SQL injection risks. Document upload handlers attempt to prevent path traversal. `json.dump` is used for file writing, which is safer than `pickle`. Extensive use of `subprocess.run` occurs in installation and maintenance scripts, which is expected for such operations but could be a vector if those scripts are not carefully managed. Overall, no immediate critical vulnerabilities like `eval()` on untrusted input or hardcoded universal secrets were found in the core server logic, making it reasonably safe for its intended use case.
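The parameterized-query pattern credited above (shown here with the synchronous `sqlite3` module rather than `aiosqlite`, and a schema invented for the example) keeps hostile input inert:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memories (id INTEGER PRIMARY KEY, content TEXT)")

# The ? placeholder makes the driver treat user_input strictly as data,
# so a classic injection string is stored verbatim, never parsed as SQL.
user_input = "x'); DROP TABLE memories; --"
conn.execute("INSERT INTO memories (content) VALUES (?)", (user_input,))

stored = conn.execute("SELECT content FROM memories").fetchone()[0]
print(stored == user_input)  # True -- and the table still exists
```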
Updated: 2026-01-19 · GitHub
100 · ★ 42477 · Medium Cost

context7

by upstash

Sec7

Provides up-to-date, version-specific documentation and code examples to Large Language Models (LLMs) and AI coding assistants to improve code generation accuracy and relevance, preventing outdated or hallucinated information.

Setup Requirements

  • ⚠️Requires Node.js >= v18.0.0
  • ⚠️Requires an MCP client (e.g., Cursor, Claude Code, VSCode)
  • ⚠️Context7 API Key is optional but recommended for higher rate limits and private repositories.
Verified Safe
The `CLIENT_IP_ENCRYPTION_KEY` used for client IP encryption has a hardcoded default value in `packages/mcp/src/lib/encryption.ts`. While it can be overridden by an environment variable, relying on a default hardcoded key, even for non-critical data like client IPs, is generally not recommended in a security-first approach. Users are advised to provide their own `CLIENT_IP_ENCRYPTION_KEY`.
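The flagged pattern, and a stricter fail-fast alternative, can be sketched generically (the variable name comes from the analysis; the default value and function name here are invented, and the sketch is in Python even though Context7 itself is TypeScript):

```python
import os

# Flagged pattern: a hardcoded fallback means every deployment that
# forgets to set the variable silently shares the same key.
weak_key = os.environ.get("CLIENT_IP_ENCRYPTION_KEY", "insecure-default")

# Stricter alternative: refuse to start without an explicit key.
def require_encryption_key() -> str:
    key = os.environ.get("CLIENT_IP_ENCRYPTION_KEY")
    if not key:
        raise RuntimeError("CLIENT_IP_ENCRYPTION_KEY must be set")
    return key
```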
Updated: 2026-01-19 · GitHub
100 · ★ 3151 · Medium Cost

excel-mcp-server

by haris-musa

Sec8

This server allows AI agents to manipulate Excel files (create, read, update, format, chart, pivot, validate) without requiring Microsoft Excel to be installed.

Setup Requirements

  • ⚠️Requires Python 3.10 or newer.
  • ⚠️The `uvx` command implies the `uv` Python package manager must be installed.
  • ⚠️When using SSE or Streamable HTTP transports, the `EXCEL_FILES_PATH` environment variable must be set (defaults to `./excel_files`).
Verified Safe
The server uses `os.path.join` to construct file paths based on `EXCEL_FILES_PATH` and the provided `filename`. While `os.path.join` can handle some path components, it does not explicitly sanitize the `filename` parameter against directory traversal (`../`) attacks before joining, which could potentially allow access outside the intended `EXCEL_FILES_PATH` if exploited. However, the `validate_formula` function explicitly checks for and prevents potentially unsafe Excel functions like `INDIRECT`, `HYPERLINK`, `WEBSERVICE`, `DGET`, and `RTD`, which is a good security measure. No direct `eval` or unsanitized shell command execution was found, nor were hardcoded secrets apparent.
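A common mitigation for the traversal gap described above is to resolve the joined path and verify it still lies under the base directory (a generic sketch, not the server's code; `BASE_DIR` mirrors the `EXCEL_FILES_PATH` default):

```python
import os

BASE_DIR = os.path.realpath("./excel_files")

def resolve_safely(filename: str) -> str:
    # os.path.join alone lets "../../etc/passwd" escape BASE_DIR;
    # realpath + commonpath confirms the result stays inside it.
    candidate = os.path.realpath(os.path.join(BASE_DIR, filename))
    if os.path.commonpath([BASE_DIR, candidate]) != BASE_DIR:
        raise ValueError(f"path traversal attempt: {filename!r}")
    return candidate

print(resolve_safely("report.xlsx").endswith("report.xlsx"))  # True
```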
Updated: 2026-01-19 · GitHub
100 · ★ 1170 · High Cost

npcpy

by NPC-Worldwide

Sec2

Core library of the NPC Toolkit that supercharges natural language processing pipelines and agent tooling. It's a flexible framework for building state-of-the-art applications and conducting novel research with LLMs. Supports multi-agent systems, fine-tuning, reinforcement learning, genetic algorithms, model ensembling, and NumPy-like operations for AI models (NPCArray). Includes a built-in Flask server for deploying agent teams via REST APIs, and multimodal generation (image, video, audio).

Setup Requirements

  • ⚠️Requires Ollama for local LLMs (e.g., llama3.2, gemma3:4b) installed and running.
  • ⚠️Platform-specific dependencies for audio (ffmpeg, portaudio, espeak) and screenshots (pywin32 on Windows, screencapture on Mac, gnome-screenshot/scrot on Linux).
  • ⚠️Requires API keys for various cloud LLM/generation providers (e.g., OPENAI_API_KEY, ANTHROPIC_API_KEY, GEMINI_API_KEY, DEEPSEEK_API_KEY, PERPLEXITY_API_KEY, ELEVENLABS_API_KEY).
  • ⚠️Dependencies for local fine-tuning and diffusion models require PyTorch, diffusers, transformers, and sentence-transformers, which can be resource-intensive and require CUDA for optimal performance.
  • ⚠️Uses SQLite for conversation history and internal state; `psycopg2-binary` required for PostgreSQL.
  • ⚠️Jinja templates can be complex to debug without prior experience.
Review Required
The system allows execution of arbitrary Python code within Jinx steps and direct shell commands generated by LLMs via `subprocess.run(..., shell=True)`. This creates severe command injection and remote code execution vulnerabilities if not rigorously sandboxed. The Flask server exposes various endpoints for executing commands, Jinxs, ML models, and fine-tuning, often without explicit authentication mechanisms shown in examples, making it vulnerable to unauthorized access and execution. Deserialization of untrusted data via `pickle.loads` in ML functionalities also poses a risk.
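The difference between the flagged `shell=True` pattern and its safer form is small but decisive (a generic sketch, not npcpy's code):

```python
import subprocess

llm_text = "hello; touch /tmp/injected"

# Flagged pattern: shell=True hands the whole string to /bin/sh, so the
# ';' becomes live syntax and the second command would actually run:
#   subprocess.run(f"echo {llm_text}", shell=True)

# Safer: pass an argument list with shell=False (the default); every
# token is inert data to the invoked program.
result = subprocess.run(["echo", llm_text], capture_output=True, text=True)
print(result.stdout.strip() == llm_text)  # True -- nothing else executed
```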
Updated: 2026-01-18 · GitHub
100 · ★ 5286 · High Cost
Sec9

A Model Context Protocol (MCP) server for integrating Firecrawl's web scraping, crawling, search, and structured data extraction capabilities with AI agents.

Setup Requirements

  • ⚠️Requires a Firecrawl API Key (paid service, unless self-hosting your own Firecrawl instance).
  • ⚠️Requires Node.js version 18.0.0 or higher.
  • ⚠️Crawl operations can return very large amounts of data, potentially exceeding token limits or incurring high costs if processed by an LLM.
  • ⚠️Windows users running via `npx` might need to prefix the command with `cmd /c`.
Verified Safe
The server implements a 'SAFE_MODE' for cloud deployments, which disables potentially risky interactive actions (click, write, executeJavascript) and webhooks. API keys are handled securely via environment variables or headers, not hardcoded. The `skipTlsVerification` option exists but is off by default. Overall, good security practices are in place, especially for hosted environments.
Updated: 2026-01-14 · GitHub
100 · ★ 7142 · High Cost

Skill_Seekers

by yusufkaraaslan

Sec7

Automate the conversion of diverse documentation (websites, GitHub repos, PDFs, local codebases) into high-quality AI skills for various LLM coding agents like Claude Code, Gemini, and OpenAI.

Setup Requirements

  • ⚠️Requires 'mcp' Python package (pip install mcp) for core functionality.
  • ⚠️Platform-specific API keys (e.g., ANTHROPIC_API_KEY, GOOGLE_API_KEY, OPENAI_API_KEY) are mandatory for most core features and uploads.
  • ⚠️GitHub personal access token (GITHUB_TOKEN) is required for GitHub scraping and config submission.
  • ⚠️Requires local installation of 'git' for repository operations.
  • ⚠️For local AI enhancement, the 'claude' command-line tool (Claude Code CLI) must be installed and in PATH.
  • ⚠️HTTP transport mode requires 'uvicorn' and 'starlette' Python packages.
  • ⚠️PDF OCR features require 'pytesseract' and 'Pillow', and the Tesseract OCR engine installation.
  • ⚠️Python 3.8+ is required.
Verified Safe
The server leverages subprocess execution for CLI tools and interacts with external APIs (GitHub, Claude, Google, OpenAI). While robust token management (environment variables, secure config files with 600/700 permissions) and input validation are implemented, potential risks exist in command injection if inputs are not fully sanitized by upstream agents. Data sent to AI APIs for enhancement could contain sensitive information. Git token injection into URLs is a feature but must be handled with care.
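The 600-permission config files mentioned above follow a standard pattern: create the file with restrictive permissions atomically so the token is never world-readable, even briefly (a sketch with a placeholder token, not the project's code):

```python
import json
import os
import stat
import tempfile

# O_CREAT|O_EXCL with mode 0o600 creates the file owner-read/write only
# and refuses to clobber an existing file.
cfg_path = os.path.join(tempfile.mkdtemp(), "config.json")
fd = os.open(cfg_path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
with os.fdopen(fd, "w") as f:
    json.dump({"GITHUB_TOKEN": "<placeholder>"}, f)

mode = stat.S_IMODE(os.stat(cfg_path).st_mode)
print(oct(mode))  # 0o600
```

This beats the common write-then-`chmod` sequence, which leaves a window where the file exists with default permissions.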
Updated: 2026-01-18 · GitHub
100 · ★ 24014 · High Cost

UI-TARS-desktop

by bytedance

Sec4

UI-TARS-desktop is a native GUI Agent application powered by multimodal AI models, enabling users to control their computer and browser through natural language instructions.

Setup Requirements

  • ⚠️Requires Node.js >=20.x (and >=22 for multimodal workspace)
  • ⚠️Requires pnpm for package management
  • ⚠️Requires API keys for LLM providers (e.g., VolcEngine, Anthropic, OpenAI) which are paid services
  • ⚠️On macOS, requires granting 'Accessibility' and 'Screen Recording' permissions to the application
Review Required
  • Critical Vulnerability: The `NutJSOperator` (used for local computer control) directly uses `eval()` on an `expression` that can originate from LLM-generated content (`calculatorTool`). This allows arbitrary code execution, posing a severe risk if the LLM is compromised or jailbroken. The inline comment `// 注意:生产环境中使用安全的数学计算器` ("Note: use a safe calculator in production") acknowledges this risk but does not mitigate it in the provided code.
  • High-Risk GUI Automation: As a GUI agent, it can control the user's mouse, keyboard, and screen. If maliciously prompted, it could interact with critical applications, delete files, or exfiltrate sensitive data.
  • Hardcoded Private Key: `apps/ui-tars/src/main/remote/app_private.ts` (implied by name and usage patterns seen in `auth.ts`) suggests a hardcoded application private key (`appPrivateKeyBase64`). Distributing a private key in a client-side application (an Electron app) is a major security flaw, as it can be extracted and misused for impersonation or unauthorized access to remote services.
  • Supply-Chain Risk (Remote Presets): The CLI can fetch preset configurations from remote URLs (`gui-agent/cli/src/cli/start.ts`). If a remote server is compromised, it could serve malicious configurations.
  • Third-Party Trust: Relies on ByteDance's remote proxy services for some "free" and "subscription" remote computer/browser operators, introducing a dependency on external infrastructure's security and trustworthiness.
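The "safe math calculator" the inline comment asks for can be sketched with an AST whitelist instead of `eval()` (illustrative Python, not UI-TARS's TypeScript code):

```python
import ast
import operator

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_calc(expression: str) -> float:
    # Only numeric literals, the four arithmetic operators, and unary
    # minus are allowed; names, calls, and attributes all raise.
    def ev(node: ast.AST) -> float:
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -ev(node.operand)
        raise ValueError("disallowed expression")
    return ev(ast.parse(expression, mode="eval").body)

print(safe_calc("2 + 3 * 4"))  # 14
```

An input like `"__import__('os').system('id')"` parses, but the walker hits a `Call` node and raises instead of executing it.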
Updated: 2026-01-14 · GitHub
100 · ★ 14083 · Low Cost

mcp-for-beginners

by microsoft

Sec5

Building custom Model Context Protocol (MCP) servers for AI agent development, including weather data retrieval and GitHub repository automation.

Setup Requirements

  • ⚠️Requires Python 3.10+.
  • ⚠️Requires `uv` or `pip` for dependency management.
  • ⚠️Requires `Node.js` and `npm` for MCP Inspector and `@playwright/mcp` dependency.
  • ⚠️Requires `Git CLI` to be installed and in PATH for `git_clone_repo` tool.
  • ⚠️Requires `VS Code` or `VS Code Insiders` to be installed in standard paths for `open_in_vscode` tool.
  • ⚠️Requires environment variables (`AZURE_OPENAI_CHAT_DEPLOYMENT_NAME`, `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_VERSION`, `GITHUB_TOKEN`) for full client functionality and Azure OpenAI integration, which may incur costs.
Review Required
The server includes tools that execute system commands (`git clone`, `open_in_vscode`) based on user input. Specifically, the `open_in_vscode` tool on Windows uses `subprocess.run` with `shell=True` which is a critical security vulnerability if the `folder_path` contains malicious shell metacharacters, potentially leading to arbitrary code execution. There is no explicit input validation for URL formats or paths within the `git_clone_repo` and `open_in_vscode` tools themselves, relying on external command failures, which is not robust enough for untrusted inputs. However, no hardcoded secrets or direct `eval` calls were found in the provided server code snippets.
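A minimal hardening for a tool like `git_clone_repo` is to validate the URL up front and avoid the shell entirely (a generic sketch; the host allowlist and function name are assumptions, not the course's code):

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"github.com"}  # assumption for illustration

def validated_clone_args(url: str) -> list:
    # Accept only https URLs on allowlisted hosts, then build an argument
    # list; run with shell=False so metacharacters in the URL stay inert.
    parsed = urlparse(url)
    if parsed.scheme != "https" or parsed.netloc not in ALLOWED_HOSTS:
        raise ValueError(f"refusing to clone: {url!r}")
    return ["git", "clone", "--", url]
    # subprocess.run(validated_clone_args(url), check=True)  # never shell=True

print(validated_clone_args("https://github.com/microsoft/mcp-for-beginners"))
```

The `--` separator additionally prevents a crafted URL beginning with `-` from being parsed as a `git clone` option.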
Updated: 2026-01-19 · GitHub
Page 3 of 760