Stop Searching. Start Trusting.

The curated directory of MCP servers, vetted for security, efficiency, and quality.

Tired of the MCP "Marketplace" Chaos?

We built MCPScout.ai to solve the ecosystem's biggest pain points.

No Insecure Dumps

We manually analyze every server for basic security flaws.

Easy Setup

Our gotcha notes warn you about complex setups.

Avoid "Token Hogs"

We estimate token costs so your agents stay cost-effective.

Products, Not Demos

We filter out "Hello World" demos.


Vetted Servers (8554)

46
25
Medium Cost
Michael-Obele icon

shadcn-svelte-mcp

by Michael-Obele

Sec8

Provides real-time access to shadcn-svelte component documentation, Bits UI API details, and Lucide Svelte icon search via an MCP server for AI-powered code editors and CLIs.

Setup Requirements

  • ⚠️ Requires Node.js >= 20.9.0.
  • ⚠️ Requires an AI-powered code editor or CLI client (e.g., Cursor, VS Code, Claude Code) for effective use.
  • ⚠️ Caching functionality relies on a writable '.cache' directory.
Verified Safe
The server performs web scraping with `crawlee` (Playwright) against external documentation sites. While the sources (shadcn-svelte.com, bits-ui.com, svelte-sonner.vercel.app, unpkg.com) are generally trusted, running Playwright with `--no-sandbox` (as configured) slightly reduces isolation if highly malicious content were encountered. CORS is set to `*` in both development and production, which is typical for public APIs. No hardcoded secrets or `eval` calls were found in the provided code.
Updated: 2026-01-11 · GitHub
46
61
High Cost
The-AI-Alliance icon

gofannon

by The-AI-Alliance

Sec2

Rapidly prototype AI agents and web UIs, build conversational flows, preview interactions, and deploy agent-driven experiences.

Setup Requirements

  • ⚠️ Requires OpenAI, Anthropic, or Gemini API keys (paid services).
  • ⚠️ Requires Docker and Docker Compose for local setup.
  • ⚠️ Requires Python 3.10 or higher.
  • ⚠️ Requires pnpm 8 or higher.
Review Required
The system features explicit execution of user-provided or LLM-generated Python code via the `exec` function within a 'sandboxed environment'. This is a critical security vulnerability, as `exec` is notoriously difficult to secure against malicious code, potentially allowing arbitrary code execution, compromise of the host system, or data exfiltration. The sandboxed code also has access to network clients (`httpx.AsyncClient`, `RemoteMCPClient`, `GofannonClient`) enabling arbitrary network requests, which amplifies the risk of Server-Side Request Forgery (SSRF) and data exfiltration. Furthermore, hardcoded default passwords (e.g., 'password' for admin panel, 'minioadmin' for MinIO, 'admin:password' for CouchDB) are present in configuration files, posing significant vulnerabilities if not explicitly changed in production environments.
Updated: 2026-01-16 · GitHub
46
58
High Cost
shredEngineer icon

Archive-Agent

by shredEngineer

Sec9

An intelligent file indexer with powerful AI search (RAG engine), automatic OCR, and a seamless MCP interface to unlock documents with natural language.

Setup Requirements

  • ⚠️ Requires an OpenAI API key (paid) for the OpenAI provider, or a local Ollama/LM Studio setup for local models.
  • ⚠️ Requires Docker for the Qdrant vector database (unless ARCHIVE_AGENT_QDRANT_IN_MEMORY is explicitly set to 1).
  • ⚠️ Requires a system-wide installation of `pandoc`.
  • ⚠️ Requires Python >= 3.10 and `uv` for environment management.
Verified Safe
The project demonstrates robust security practices for an open-source tool. It utilizes Pydantic models with `extra='forbid'` for strict schema validation of AI responses, preventing unexpected data injection. `OPENAI_API_KEY` is correctly sourced from environment variables. `file_lock` ensures safe concurrent access to shared resources. The `mcp_server_host` is configurable to expose to LAN, giving the user control over network exposure. Arbitrary command execution via `subprocess.run` is limited to specific, justified system utilities (`nano`, `streamlit`, `docker`, `pandoc`). Overall, the architecture minimizes common attack vectors.
Updated: 2026-01-14 · GitHub
45
16
Medium Cost
robsyc icon

ld-spec-mcp

by robsyc

Sec9

Serves W3C Semantic Web specifications section-by-section and resource-by-resource to AI agents for efficient, targeted information retrieval.

Setup Requirements

  • ⚠️ Requires Python 3.11 or newer.
  • ⚠️ Requires cloning the GitHub repository locally for setup.
  • ⚠️ Requires internet access to W3C specification websites for content fetching.
Verified Safe
The server fetches content from a predefined set of trusted W3C URIs listed in `index.yaml`. User input (`spec_key`, `ns_key`) is used to look up these trusted URIs, preventing arbitrary URL fetching (SSRF). While fetching external content inherently carries some risk, robust libraries like `httpx` (with timeout) and `BeautifulSoup`/`RDFLib` are used, along with explicit sanitization in `html_to_markdown` for known `html-to-markdown` library issues. No direct command injection, use of `eval`, or hardcoded secrets were identified. Input validation relies on `FastMCP` framework's `Annotated` types.
Updated: 2026-01-15 · GitHub
45
51
High Cost
Sec8

Enables AI agents to search, download, and manage professional stock photos from Unsplash with automated attribution.

Setup Requirements

  • ⚠️ Requires an Unsplash API access key (free tier available for testing, paid for higher limits).
  • ⚠️ Requires Node.js 18.x or higher.
  • ⚠️ Windows users may encounter 'Client closed' errors due to process management; specific `mcp.json` configurations are provided in the documentation to mitigate this.
  • ⚠️ While the default `downloadMode` is 'urls_only', auto-downloading many large images or generating complex attribution files can lead to significant token usage due to verbose JSON/HTML/React outputs for LLMs.
Verified Safe
The server loads the Unsplash API key from environment variables, which is good practice. Filenames are sanitized, preventing common path traversal vulnerabilities. It primarily uses stdio for communication, reducing direct network attack surface. The use of `exiftool-vendored` for metadata processing involves spawning external binaries (perl script/executable) which is a potential, albeit common, point of exploitation if a maliciously crafted image could leverage `exiftool` vulnerabilities. No direct `eval` or similar dangerous patterns were found in the provided source code.
Updated: 2026-01-17 · GitHub
45
52
High Cost
ScrapeGraphAI icon

scrapegraph-mcp

by ScrapeGraphAI

Sec9

Provides AI-powered web scraping, structured data extraction, multi-page crawling, and agentic automation capabilities for language models.

Setup Requirements

  • ⚠️ Requires a ScrapeGraph AI API key (paid service from dashboard.scrapegraphai.com).
  • ⚠️ Requires Python 3.13+.
  • ⚠️ Requires Node.js and npm/npx for Smithery installation/usage.
Verified Safe
The server acts as a secure proxy to the ScrapeGraph AI API, handling API keys via environment variables or MCP config, and using `httpx.Client` for external requests. Input parameters are validated. No `eval` or direct shell execution observed. The `agentic_scrapper` tool's interaction capabilities are dependent on the ScrapeGraph AI backend's safeguards and user intent, and users should exercise caution with untrusted URLs.
Updated: 2026-01-10 · GitHub
45
20
High Cost
Sec2

Enables AI agents to control Autodesk Fusion 360 through its API, execute Python code directly within Fusion, and integrate with other Model Context Protocol (MCP) tools.

Setup Requirements

  • ⚠️ Requires an external 'Aura Friday MCP-Link Server' to be installed and running.
  • ⚠️ Requires Autodesk Fusion 360 to be installed and running as a host application.
  • ⚠️ AI-executed Python code has FULL and UNRESTRICTED system access; users are solely responsible for all outcomes and must fully trust the AI and its prompts.
Review Required
The server's design inherently grants 'ABSOLUTE MAXIMUM ACCESS' via `exec()` for Python code execution, allowing AI agents to run arbitrary code with full system, network, and Fusion API privileges. This is explicitly stated as a feature but represents a significant security risk if the AI agent or its prompts are untrusted or compromised. The `mcp_client.py` disables SSL certificate verification and hostname checking for its SSE connection, which is concerning, even if intended for localhost-only communication. Updates are cryptographically signed, which is a strong positive security feature.
Updated: 2026-01-16 · GitHub
45
40
High Cost
Sec9

The PagerDuty MCP Server allows MCP-enabled clients (like AI agents) to interact with a PagerDuty account to manage incidents, services, schedules, event orchestrations, and other PagerDuty resources.

Setup Requirements

  • ⚠️ Requires a PagerDuty User API Token for authentication.
  • ⚠️ Requires Python 3.12+.
  • ⚠️ Requires 'asdf-vm' and 'uv' for local development setup.
Verified Safe
The server primarily relies on a PagerDuty User API Token, which is passed via environment variables (PAGERDUTY_USER_API_KEY, PAGERDUTY_API_HOST) and is not hardcoded. The default mode is read-only, requiring an explicit '--enable-write-tools' flag for any destructive operations, which is a good security practice. No direct use of 'eval' or other highly dangerous functions was found. It uses standard PagerDuty API calls.
Updated: 2026-01-14 · GitHub
45
16
Medium Cost
mholzen icon

workflowy

by mholzen

Sec6

Connect AI assistants to Workflowy data and outlines for search, bulk operations, and reporting, or manage Workflowy via CLI.

Setup Requirements

  • ⚠️ Requires a Workflowy API key (obtained from Workflowy.com/api-key/) saved to `~/.workflowy/api.key` or set as the `WORKFLOWY_API_KEY` environment variable.
  • ⚠️ The `transform` command's `--exec` flag allows arbitrary shell command execution on the host system, posing a significant security risk if exposed to an AI assistant (via MCP) or used carelessly. The `--write-root-id` restriction does not apply to `--exec`.
  • ⚠️ For offline mode, Workflowy's Dropbox auto-backup feature must be enabled and synced locally.
Verified Safe
The `workflowy_transform` MCP tool and CLI `transform` command include an `--exec` flag that allows execution of arbitrary shell commands on the host machine. If the MCP server is run with `--expose=all` (or `--expose=transform`) and connected to an AI assistant, a malicious or poorly-constrained AI could execute arbitrary local code. While the `--write-root-id` feature provides sandboxing for Workflowy data operations, it does NOT mitigate the risk of local shell command execution via `--exec`. Users must exercise extreme caution when exposing `workflowy_transform` to AI assistants or when using the `--exec` flag directly. API keys are managed responsibly via file permissions and environment variables, not hardcoded.
Updated: 2026-01-18 · GitHub
45
15
Medium Cost
Xeron2000 icon

redBookMCP

by Xeron2000

Sec9

This server provides a Model Context Protocol (MCP) interface for generating Xiaohongshu-style graphic content, including outlines and images, by orchestrating calls to external AI services.

Setup Requirements

  • ⚠️ Requires an external AI image generation API URL, API key, and model name (e.g., an OpenAI DALL-E compatible service), which are typically paid services.
  • ⚠️ Requires Node.js and npm/pnpm to be installed.
  • ⚠️ The project is explicitly marked as 'Deprecated' in the README, which recommends using Claude Code's built-in skills instead; it may no longer be maintained.
  • ⚠️ Requires a local `DATA_DIR` to be specified and writable for storing project data and generated images.
Verified Safe
The code appears generally well-structured and avoids common critical vulnerabilities like 'eval' or direct command injection in tool arguments. API keys and data directory paths are correctly retrieved from environment variables, preventing hardcoding of secrets. File system operations are confined to the `DATA_DIR` specified via environment variable, mitigating arbitrary file access. Network calls are outgoing to configurable image generation APIs. However, the project is marked as deprecated, which means it may not receive future security updates.
Updated: 2026-01-18 · GitHub
45
45
Medium Cost
Sec7

An analytics and observability SDK for Model Context Protocol (MCP) servers, capturing user behavior and tool interactions for product development and debugging.

Setup Requirements

  • ⚠️ Requires a compatible MCP server: supports 'mcp>=1.2.0' or 'fastmcp>=2.7.0,!=2.9.*'.
  • ⚠️ Requires outbound network access to 'https://api.mcpcat.io' and any configured telemetry endpoints.
  • ⚠️ Requires initialization with either an MCPCat project ID (from mcpcat.io) or a configuration for external telemetry exporters.
Review Required
The SDK employs monkey-patching, which alters the runtime behavior of the host MCP server. It collects and transmits data (including tool call arguments, responses, and potentially full stack traces) to api.mcpcat.io and optional third-party observability platforms. Users must carefully implement the `redact_sensitive_information` callback to prevent sensitive data from being sent externally. No direct `eval()` or `exec()` calls, or obvious hardcoded secrets were found.
Updated: 2026-01-19 · GitHub
45
11
High Cost
Abbabon icon

unity-mcp-sharp

by Abbabon

Sec9

Integrates AI assistants with Unity Editor for game development automation via Model Context Protocol (MCP).

Setup Requirements

  • ⚠️ Requires Unity 2021.3+ Editor.
  • ⚠️ Requires Docker Desktop installed and running.
  • ⚠️ Requires AI assistant configuration (e.g., VS Code, Cursor, Claude Desktop .json settings).
Verified Safe
The server uses standard .NET and ASP.NET Core frameworks, runs locally or in a Docker container, and by default exposes port 8080 for HTTP and WebSocket traffic. No `eval` or malicious patterns were found in the truncated code. No hardcoded secrets are present for runtime server operation; package signing credentials are handled via `.env` for CI/CD. The default `ASPNETCORE_ENVIRONMENT=Development` in `docker-compose.yml` should be reviewed for production deployments, but does not inherently pose a critical risk for local development.
Updated: 2025-12-15 · GitHub
Page 64 of 713