Stop Searching. Start Trusting.

The curated directory of MCP servers, vetted for security, efficiency, and quality.

Tired of the MCP "Marketplace" Chaos?

We built MCPScout.ai to solve the ecosystem's biggest pain points.

No Insecure Dumps

We manually analyze every server for basic security flaws.

Easy Setup

Our gotcha notes warn you about complex setups.

Avoid "Token Hogs"

We estimate token costs so you can build cost-effective agents.

Products, Not Demos

We filter out "Hello World" demos.


Vetted Servers (9120)

54
110
Medium Cost

mcp-server

by bitwarden

Sec9

Provides AI assistants with secure access to Bitwarden password manager functionality, encompassing both personal vault management via CLI tools and organization administration via the Bitwarden Public API.

Setup Requirements

  • ⚠️ Requires Bitwarden CLI to be installed globally (e.g., `npm install -g @bitwarden/cli`).
  • ⚠️ Requires Node.js version 22+.
  • ⚠️ Mandates secure management of sensitive environment variables: `BW_SESSION` for CLI operations and `BW_CLIENT_ID`, `BW_CLIENT_SECRET` for API operations.
Verified SafeView Analysis
The server includes robust security measures to prevent command injection, API endpoint manipulation, and path traversal, using allowlists, input sanitization (removing dangerous characters, null bytes, newlines), and `child_process.spawn` with `shell: false`. API requests are authenticated with OAuth2 and data is sanitized. File path validation (`validateFilePath`) is particularly comprehensive, preventing various encoding and Unicode bypasses and enforcing an environment-variable-configurable allowlist of directories. The README provides critical warnings, explicitly stating that the server is designed for local use only and must never be exposed publicly, highlighting the inherent risks of exposing sensitive vault data to AI assistants. The high score reflects the strong implementation-level security controls given its sensitive domain, but users must strictly adhere to deployment warnings.
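As a rough sketch of the pattern described above — an allowlist of subcommands, character-level sanitization, and spawning without a shell — here is a hedged Python analog (the function names, the allowlist, and the character set are illustrative, not taken from the Bitwarden server's code):

```python
import re
import subprocess

# Illustrative allowlist of permitted CLI subcommands (not Bitwarden's actual list).
ALLOWED_COMMANDS = {"list", "get", "sync", "status"}

def sanitize(value: str) -> str:
    """Strip null bytes, newlines, and common shell metacharacters."""
    return re.sub(r"[\x00\r\n;&|`$]", "", value)

def run_cli(command: str, *args: str) -> str:
    if command not in ALLOWED_COMMANDS:
        raise ValueError(f"command not allowed: {command}")
    # An argument vector with no shell is the Python counterpart of
    # child_process.spawn with shell: false.
    result = subprocess.run(
        ["bw", command, *(sanitize(a) for a in args)],
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

Because no shell ever parses the arguments, metacharacters in a vault item name cannot chain extra commands.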
Updated: 2026-01-19 · GitHub
54
138
Medium Cost

rust-mcp-sdk

by rust-mcp-stack

Sec8

A high-performance, asynchronous Rust SDK for building Model Context Protocol (MCP) servers and clients, supporting various transports and authentication methods.

Setup Requirements

  • ⚠️ Requires a Rust toolchain (stable Rust, cargo-nextest, cargo-make) for development and running examples.
  • ⚠️ OAuth examples require an external OAuth 2.0 / OpenID Connect provider (e.g., Keycloak, WorkOS AuthKit, Scalekit) to be configured, along with specific environment variables (AUTH_SERVER, CLIENT_ID, CLIENT_SECRET, ENVIRONMENT_URL, RESOURCE_ID).
  • ⚠️ TLS/SSL support for HTTP servers requires enabling the `ssl` Cargo feature and providing paths to SSL certificates and private keys.
Verified SafeView Analysis
The SDK provides robust security features including OAuth 2.0/OpenID Connect authentication (JWKS, introspection, UserInfo), DNS rebinding protection, and TLS/SSL support. It leverages standard Rust security practices and async primitives. A potential risk lies in the `StdioTransport::create_with_server_launch` function, which executes arbitrary commands and arguments. While this is a core feature for launching external MCP servers, users of the SDK must ensure that any external input used to construct these commands/arguments is thoroughly sanitized to prevent command injection vulnerabilities. No obvious hardcoded secrets or malicious patterns were found in the provided source code snippets.
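The warning about `create_with_server_launch` generalizes to any code that launches MCP servers from externally supplied command strings. A minimal Python sketch of the mitigation — tokenize without a shell and allowlist the executable (the allowlist and function name are illustrative, not part of the SDK):

```python
import shlex
import subprocess

# Illustrative allowlist of launcher binaries the client is willing to execute.
TRUSTED_LAUNCHERS = {"npx", "uvx", "docker"}

def launch_server(command_line: str) -> subprocess.Popen:
    # shlex.split tokenizes without invoking a shell, so metacharacters in
    # arguments stay literal instead of chaining extra commands.
    argv = shlex.split(command_line)
    if not argv or argv[0] not in TRUSTED_LAUNCHERS:
        raise ValueError(f"refusing to launch: {argv[:1]}")
    return subprocess.Popen(argv)  # never shell=True
```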
Updated: 2026-01-18 · GitHub
53
134
Medium Cost
Sec9

Provides a Model Context Protocol (MCP) server to enable AI assistants to access DataForSEO's SEO data APIs through a standardized interface.

Setup Requirements

  • ⚠️ Requires active DataForSEO API credentials (a paid service).
  • ⚠️ Requires Node.js version 14 or higher.
Verified SafeView Analysis
The server uses Zod for input validation on tool parameters and explicitly loads DataForSEO API credentials from environment variables, which is a good security practice. Communication with DataForSEO is via HTTPS. The CLI uses `child_process.spawn` to launch specific internal scripts, reducing command injection risk compared to `exec`. One minor concern: if `FIELD_CONFIG_PATH` (used for field-level response filtering) were manipulated in a highly privileged environment, it could lead to loading unintended JSON files, but this is not a direct execution vulnerability and would require prior compromise.
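Loading API credentials from environment variables, as the server does, typically pairs with a fail-fast check and a Basic auth header for the DataForSEO API. A hedged Python sketch — the variable names here are illustrative, not necessarily the ones this server reads:

```python
import base64
import os

def load_auth_header() -> dict[str, str]:
    """Read credentials from the environment and build a Basic auth header.

    The DATAFORSEO_LOGIN / DATAFORSEO_PASSWORD names are illustrative.
    Failing fast beats silently sending empty credentials.
    """
    login = os.environ.get("DATAFORSEO_LOGIN")
    password = os.environ.get("DATAFORSEO_PASSWORD")
    if not login or not password:
        raise RuntimeError("DATAFORSEO_LOGIN and DATAFORSEO_PASSWORD must be set")
    token = base64.b64encode(f"{login}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}
```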
Updated: 2026-01-07 · GitHub
53
128
High Cost
Sec6

Enhances AI assistant behavior through structured prompt management, multi-step chains, quality gates, and autonomous verification loops, primarily for development tasks.

Setup Requirements

  • ⚠️ Requires Node.js (v18-24 recommended) and npm for setup/development.
  • ⚠️ Requires Python 3.x for hooks functionality.
  • ⚠️ Requires Git for checkpointing and rollback features.
  • ⚠️ Requires an LLM API key (e.g., OpenAI, Anthropic, Google Gemini) for semantic analysis and LLM-based quality gates, incurring usage costs.
  • ⚠️ Requires an MCP-compatible client (e.g., Claude Code, Cursor, Gemini CLI) to interact with the server.
Verified SafeView Analysis
The server includes 'Ralph Loops' functionality (shell verification gates) which executes arbitrary shell commands (`sh -c <command>`) provided by the LLM for autonomous task verification. While this feature is explicit and attempts to mitigate risks via environment variable whitelisting (`SAFE_ENV_ALLOWLIST`), process detachment, and timeouts, executing arbitrary commands is inherently high-risk. If the LLM is unconstrained or deployed in an untrusted environment, this could lead to unintended system modifications or privilege escalation. Other file system operations (read/write/delete prompts, configs, state) are necessary for resource management but pose standard risks. No obvious 'eval' or malicious obfuscation patterns were detected.
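Environment-variable whitelisting of the kind described — only variables named in `SAFE_ENV_ALLOWLIST` reach the spawned verification command — can be sketched in a few lines of Python (the default allowlist and the function names are illustrative, not this server's implementation):

```python
import os
import subprocess

def safe_env(allowlist_var: str = "SAFE_ENV_ALLOWLIST") -> dict:
    """Build a minimal environment containing only allowlisted variables."""
    allowed = set(os.environ.get(allowlist_var, "PATH,HOME").split(","))
    return {k: v for k, v in os.environ.items() if k in allowed}

def run_gate(command: str, timeout: float = 30.0) -> int:
    """Run a shell verification gate with a stripped environment and a timeout."""
    proc = subprocess.run(["sh", "-c", command], env=safe_env(), timeout=timeout)
    return proc.returncode
```

Stripping the environment keeps API keys and tokens out of whatever command the LLM proposes, though it does not make arbitrary command execution safe in itself.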
Updated: 2026-01-18 · GitHub
53
105
Medium Cost

aks-mcp

by Azure

Sec8

The AKS-MCP server acts as a bridge, enabling AI assistants to interact with and manage Azure Kubernetes Service (AKS) clusters and related Azure resources.

Setup Requirements

  • ⚠️ Requires Azure CLI to be installed and authenticated (`az login`) with appropriate Azure permissions.
  • ⚠️ Requires `kubectl` to be installed and a configured kubeconfig for Kubernetes-related tools.
  • ⚠️ OAuth authentication requires prior Azure AD Application Registration with specific Redirect URIs and API permissions.
  • ⚠️ Workload Identity deployments require AKS OIDC Issuer enabled, an Azure Managed Identity, and a Federated Credential configured in Azure.
Verified SafeView Analysis
The server implements a three-tier access control (readonly, readwrite, admin) for operations, enforced via command validation and Kubernetes RBAC in Helm deployments. It uses `shlex.Split` to mitigate shell injection risks during CLI command execution. OAuth 2.1 authentication with Azure AD is supported for HTTP transports, including JWT validation and dynamic client registration. Azure credentials are sourced from environment variables or Kubernetes secrets, avoiding hardcoding. Telemetry uses a default Microsoft instrumentation key unless overridden, which is a privacy consideration but not a direct security vulnerability. Strict path validation for federated tokens (`/var/run/secrets/azure/tokens/azure-identity-token`) prevents arbitrary file access. Overall, the project demonstrates a strong focus on security best practices, but improper configuration of 'admin' access can expose sensitive cluster operations.
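The three-tier access model can be reduced to a small default-deny permission check. A hedged Python sketch — the verb-to-tier mapping is illustrative, not aks-mcp's actual table:

```python
from enum import IntEnum

class AccessTier(IntEnum):
    READONLY = 0
    READWRITE = 1
    ADMIN = 2

# Illustrative mapping from operation verbs to the minimum tier they require.
REQUIRED_TIER = {
    "get": AccessTier.READONLY,
    "describe": AccessTier.READONLY,
    "apply": AccessTier.READWRITE,
    "delete": AccessTier.READWRITE,
    "drain": AccessTier.ADMIN,
}

def authorize(verb: str, tier: AccessTier) -> bool:
    # Unknown verbs default to requiring admin: deny by default.
    required = REQUIRED_TIER.get(verb, AccessTier.ADMIN)
    return tier >= required
```

The default-deny fallback is the important design choice: a verb the table forgot cannot silently run at readonly tier.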
Updated: 2026-01-16 · GitHub
53
69
High Cost

slidev-mcp

by LSTM-Kirigaya

Sec7

AI-powered tool for generating professional online presentations using natural language descriptions, built on Slidev and large language models.

Setup Requirements

  • ⚠️ Requires an OpenAI API Key or similar Large Language Model API key (likely paid).
  • ⚠️ Requires Node.js and a package manager (npm/yarn/pnpm) for Slidev runtime dependencies.
  • ⚠️ Requires Python environment setup, specifically using 'uv' for dependency management.
Verified SafeView Analysis
The project integrates large language models and includes a `websearch` utility tool that takes a URL. Without reviewing the implementation, there is a potential risk of SSRF or of fetching malicious content if URL inputs are not properly sanitized and validated. No explicit `eval` usage or obfuscation was noted.
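The SSRF risk mentioned here is conventionally mitigated by validating URLs before fetching. A hedged Python sketch of such a check (generic, not slidev-mcp's code; a production version would also resolve hostnames and re-test every returned address):

```python
import ipaddress
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject non-HTTP(S) schemes and literal private/loopback/link-local IPs."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        addr = ipaddress.ip_address(parsed.hostname)
    except ValueError:
        # A hostname, not an IP literal; a fuller check would resolve it
        # and re-test the resulting addresses before fetching.
        return True
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)
```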
Updated: 2025-11-17 · GitHub
53
37
Medium Cost

asya

by deliveryhero

Sec8

A microservices platform for orchestrating asynchronous, event-driven AI/ML workflows via an MCP JSON-RPC gateway.

Setup Requirements

  • ⚠️ Requires a Kubernetes cluster with Helm, KEDA, and the Asya Operator for full deployment.
  • ⚠️ Requires a PostgreSQL database for persistent envelope storage (in-memory is for development only).
  • ⚠️ Requires either RabbitMQ or AWS SQS message broker to be configured and accessible.
Verified SafeView Analysis
The system relies on external message brokers (RabbitMQ/SQS) and a PostgreSQL database. Default RabbitMQ credentials ('guest:guest') are used if not overridden, posing a risk in production environments. AWS SQS credentials are expected from secrets or IRSA, which is a good practice. Actor runtimes execute Python code, which implies trust in the deployed actor code. The HTTP/SSE endpoints on the gateway perform input validation and use `json.Marshal` for SSE data to mitigate XSS risks. No direct 'eval' or obfuscation found, and inter-service communication over Unix sockets is generally secure.
Updated: 2026-01-16 · GitHub
53
4
High Cost

thinkingcap

by Infatoshi

Sec8

A multi-agent research MCP server that runs multiple LLM providers in parallel and synthesizes their responses to a given query.

Setup Requirements

  • ⚠️ Requires API keys for chosen LLM providers (e.g., OPENROUTER_API_KEY, GROQ_API_KEY), which may involve paid services.
  • ⚠️ Requires Node.js and npx to run.
  • ⚠️ Changing configured agents requires restarting the server with new command-line arguments.
Verified SafeView Analysis
API keys are correctly managed via environment variables and are not hardcoded. The system uses LLMs to generate structured data (questions) that are then JSON parsed; while this introduces a potential risk if an LLM deviates maliciously from the expected format, the prompts explicitly guide the LLM to return only a JSON array of strings, mitigating the risk. Web search is performed against DuckDuckGo, a legitimate search engine, via direct HTTP requests. There are no detected uses of `eval` or similar dangerous functions on untrusted inputs.
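Constraining an LLM to "a JSON array of strings" is only as safe as the parser that enforces it. A minimal Python sketch of the strict-shape check (the function name is illustrative):

```python
import json

def parse_question_list(llm_output: str) -> list[str]:
    """Accept only a JSON array of strings, the shape the prompt demands."""
    data = json.loads(llm_output)  # raises ValueError on malformed JSON
    if not isinstance(data, list) or not all(isinstance(q, str) for q in data):
        raise ValueError("expected a JSON array of strings")
    return data
```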
Updated: 2025-11-25 · GitHub
53
40
Medium Cost

tiger-slack

by timescale

Sec7

An AI-powered Slack bot, likely integrating with Claude, designed to process and respond to messages within a Slack workspace.

Setup Requirements

  • ⚠️ Requires Docker and Docker Compose for deployment.
  • ⚠️ Requires creating and configuring a Slack application.
  • ⚠️ Requires an Anthropic Claude API key (paid service) and other environment variables for configuration.
Verified SafeView Analysis
The code could not be audited for `eval` usage or obfuscation. Standard network practices for a Slack bot integration are assumed: the server needs outbound access to the Slack and LLM APIs. The usual risks around user input handling and API key management apply. The presence of a `.env.sample` file encourages secure credential handling.
Updated: 2025-11-18 · GitHub
53
204
Medium Cost

yt-dlp-mcp

by kevinwatt

Sec8

Integrate video platform capabilities like search, metadata extraction, and content download into AI agents using yt-dlp.

Setup Requirements

  • ⚠️ Requires `yt-dlp` to be installed on the system (e.g., via `winget`, `brew`, `pip`).
  • ⚠️ Cookie authentication for private/age-restricted content, or to avoid rate limits, requires a JavaScript runtime (like Deno or Node.js with EJS) to be installed on the system.
  • ⚠️ On Linux, browser cookie extraction may require the `secretstorage` Python module (`pip install secretstorage`).
Verified SafeView Analysis
The server primarily acts as a wrapper around the `yt-dlp` command-line tool, executed via `_spawnPromise`. Critical security measures include robust URL validation (`validateUrl`), input sanitization (`sanitizeFilename` for file paths, `encodeURIComponent` for search queries), and comprehensive Zod schema validation for all tool inputs (as highlighted in the changelog for v0.7.0), which significantly mitigates command injection risks. Sensitive cookie information is handled through environment variables, with validation for file paths and browser names, and a clear priority system (file over browser). Automatic response truncation (`characterLimit`, `maxTranscriptLength`) is implemented to prevent context overflow in LLMs. The `_spawnPromise` includes error handling for spawning failures. While reliance on an external executable (`yt-dlp`) always introduces a dependency risk, the explicit input validation and sanitization efforts make this server reasonably secure for its intended purpose.
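Filename sanitization of the kind credited here usually combines path-separator removal, traversal collapsing, and a conservative character set. A hedged Python sketch — the policy is illustrative, not yt-dlp-mcp's exact rules:

```python
import re

def sanitize_filename(name: str, max_len: int = 200) -> str:
    """Remove path separators, collapse traversal sequences, keep a safe charset."""
    name = name.replace("/", "_").replace("\\", "_")
    name = re.sub(r"\.\.+", ".", name)      # collapse '..' runs
    name = re.sub(r"[^\w.\- ]", "", name)   # keep letters, digits, . - _ and space
    return name.strip()[:max_len] or "download"
```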
Updated: 2026-01-04 · GitHub
53
4
Medium Cost
Sec9

The server enables natural language querying and analysis of Weights & Biases data, specifically focusing on ML experiment tracking (W&B Models) and LLM/GenAI application observability (W&B Weave) through the Model Context Protocol.

Setup Requirements

  • ⚠️ Requires Python 3.11+.
  • ⚠️ Requires a Weights & Biases API Key for most operations, which must be provided via environment variables, .netrc, or command-line arguments (for STDIO) or as a Bearer token (for HTTP).
  • ⚠️ For server-side MCP clients (e.g., OpenAI, LeChat) connecting to a local HTTP server, a public URL (e.g., via ngrok) is required.
Verified SafeView Analysis
The server demonstrates robust security practices, particularly for multi-tenant environments. It utilizes `ContextVar` for per-request API key isolation, preventing cross-request data leakage in concurrent operations. The `create_report` tool explicitly patches the `wandb_workspaces` API client to also use `ContextVar`, addressing known singleton contamination vulnerabilities and handling markdown input carefully. The `query_wandb_tool` allows arbitrary GraphQL queries, which is a powerful but potentially risky feature; however, its usage is heavily documented with critical warnings for the LLM to manage context windows and avoid open-ended queries. Session management includes optional HMAC-SHA256 verification via a secrets resolver. Limited `subprocess.run` calls are for low-risk operations (e.g., `git rev-parse HEAD`). No direct `eval` or `os.system` for user-controlled input was found, and sensitive secrets are expected to be managed via environment variables or a secrets resolver.
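The `ContextVar` isolation the analysis highlights works because each asyncio task gets its own copy of the context, so concurrent requests cannot observe each other's keys. A minimal illustrative Python sketch (variable and function names are ours, not the server's):

```python
import asyncio
from contextvars import ContextVar

# Each asyncio task sees its own value of this variable; setting it in one
# request's task cannot leak into a concurrently running request.
api_key: ContextVar[str] = ContextVar("api_key", default="")

async def handle_request(key: str) -> str:
    api_key.set(key)
    await asyncio.sleep(0)  # yield control so tasks interleave
    return api_key.get()    # still this request's own key

async def main() -> list[str]:
    # Three "requests" run concurrently; each keeps its own key.
    return await asyncio.gather(*(handle_request(k) for k in ("k1", "k2", "k3")))
```

This is the same reason a module-level singleton (like the `wandb_workspaces` client mentioned above) is dangerous in concurrent servers: it holds one value for all requests unless patched to read from a `ContextVar`.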
Updated: 2025-11-25 · GitHub
53
100
Medium Cost

neurolink

by juspay

Sec9

NeuroLink is a comprehensive AI toolkit that unifies multiple AI providers and offers advanced orchestration, real-time services, and a Human-in-the-Loop safety system, enabling modular enhancement of AI models through an extensible MCP-compliant middleware and tool ecosystem.

Setup Requirements

  • ⚠️ Requires API keys for at least one AI provider (e.g., OPENAI_API_KEY, ANTHROPIC_API_KEY) configured as environment variables to access cloud models.
  • ⚠️ For local AI model usage, requires Ollama to be installed and running locally on the system.
  • ⚠️ For Google Vertex AI, requires Google Cloud Project ID and authentication credentials (e.g., GOOGLE_APPLICATION_CREDENTIALS path or service account keys) to be properly configured.
Verified SafeView Analysis
The project demonstrates strong security awareness by implementing explicit proxy handling, masking credentials in logs, and providing a robust Human-in-the-Loop (HITL) system for dangerous actions. Controlled use of `new Function()` for specific, sanitized internal/tool logic is present (e.g., in `src/lib/agent/directTools.ts` for a calculator tool and `src/lib/utils/schemaConversion.ts` for Zod schema compilation), which appears justified. The primary inherent risk lies in misconfiguration or the introduction of untrusted external MCP (Model Context Protocol) servers.
Updated: 2026-01-19 · GitHub
Page 50 of 760