Stop Searching. Start Trusting.

The curated directory of MCP servers, vetted for security, efficiency, and quality.

Tired of the MCP "Marketplace" Chaos?

We built MCPScout.ai to solve the ecosystem's biggest pain points.

No Insecure Dumps

We manually analyze every server for basic security flaws.

Easy Setup

Our gotcha notes warn you about complex setups.

Avoid "Token Hogs"

We estimate each server's token cost so you can build cost-effective agents.

Products, Not Demos

We filter out "Hello World" demos.


Vetted Servers (8554)

47
49
High Cost
Sec8

An MCP server that allows AI agents like Claude Code to consult stronger, more capable AI models (e.g., GPT-5.2, Gemini 3.0 Pro) for complex code analysis, debugging, and architectural advice.

Setup Requirements

  • ⚠️Requires API keys (e.g., OPENAI_API_KEY, GEMINI_API_KEY, DEEPSEEK_API_KEY) for API mode, which are typically paid services.
  • ⚠️Requires local installation and authentication of `gemini` CLI or `codex` CLI tools if using CLI mode.
  • ⚠️Requires Node.js version 18.0.0 or higher.
Verified SafeView Analysis
The server uses `child_process.spawn` for CLI integrations with `shell: false`, which mitigates direct shell injection. API keys are loaded from environment variables, preventing hardcoding. Input file paths for context (via `processFiles`) are resolved to absolute paths, but direct user input of malicious paths could theoretically lead to unintended file reads, though typically these are controlled by the invoking AI agent. The risk primarily lies with potential vulnerabilities in the external CLI tools (Gemini CLI, Codex CLI) that are invoked, or how they parse constructed prompt/file arguments.
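The mitigation described above — invoking the CLI with an argument vector rather than a shell string — is language-agnostic; this server does it in Node via `child_process.spawn` with `shell: false`. A minimal sketch of the same pattern in Python (the helper name is illustrative, not from the repository):

```python
import subprocess

def run_cli(tool: str, args: list[str]) -> str:
    """Invoke an external CLI with an argument vector, never a shell string.

    Because no shell is involved, metacharacters in the arguments
    (';', '|', '$(...)') are passed to the child as literal text,
    not interpreted as shell syntax.
    """
    result = subprocess.run(
        [tool, *args],        # argv list: each element is one literal argument
        shell=False,          # the default, stated explicitly for clarity
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout
```

A hostile input such as `; rm -rf ~` reaches the child process as a single literal argument rather than an executed command.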
Updated: 2026-01-12 · GitHub
47
39
Medium Cost

crawlbase-mcp

by crawlbase

Sec9

A Model Context Protocol (MCP) server that enables AI agents and LLMs to fetch fresh, structured, real-time web content (HTML, Markdown, screenshots) via Crawlbase's scraping infrastructure.

Setup Requirements

  • ⚠️Requires free/paid Crawlbase API tokens (CRAWLBASE_TOKEN, CRAWLBASE_JS_TOKEN).
  • ⚠️Requires Node.js version >= 18.0.0.
Verified SafeView Analysis
The server uses Zod for robust input validation on all API parameters, significantly reducing injection risks. It handles sensitive tokens by allowing them via environment variables or per-request HTTP headers, which is a good security practice. File system access is limited to reading static `package.json` and controlled debug logs. The use of `sharp` for image processing includes size limits (8000px max dimension) to prevent potential image-bomb attacks. Debug logging, if enabled in production, could potentially expose request details, but this is an opt-in configuration.
Updated: 2025-11-25 · GitHub
46
63
Medium Cost

heurist-mesh-mcp-server

by heurist-network

Sec8

Provides AI models (like Claude) access to Web3 and blockchain tools via the Heurist Mesh API for tasks such as cryptocurrency data analysis, token security review, and social media intelligence.

Setup Requirements

  • ⚠️Requires Python 3.10 or higher.
  • ⚠️Requires a Heurist API key to access most tools (free credits available with invite code 'claude').
  • ⚠️Installation requires either UV package manager (recommended) or Docker.
  • ⚠️For Claude Desktop users connecting to an SSE endpoint, `mcp-proxy` is recommended for connectivity.
Verified SafeView Analysis
No 'eval' or obfuscation found. Network calls are made to configurable Heurist Mesh API endpoints. The server relies on the `HEURIST_API_KEY` for authentication with the Heurist Mesh API, which should be managed securely by the user (e.g., via environment variables) to prevent unauthorized access. The code primarily acts as a proxy, so its security is also dependent on the security of the upstream Heurist Mesh API. Input sanitization for tool arguments is handled by the underlying Heurist Mesh API, not explicitly within this server.
Updated: 2026-01-15 · GitHub
46
55
Medium Cost

mkp

by StacklokLabs

Sec7

MKP is a Model Context Protocol (MCP) server for Kubernetes that lets LLM-powered applications interact with clusters through tools for listing, getting, applying, and deleting resources, and for executing commands in pods.

Setup Requirements

  • ⚠️Requires Go 1.24+ and a Kubernetes cluster with configured kubeconfig.
  • ⚠️Write operations (apply_resource, delete_resource, post_resource) are disabled by default and must be explicitly enabled with `--read-write=true`.
  • ⚠️The 'ExecInPod' capability, if enabled, grants significant access and requires careful Kubernetes RBAC configuration for the server's service account to mitigate security risks.
Verified SafeView Analysis
The server includes an `ExecInPod` functionality, which allows arbitrary command execution within pods. While this is a core feature, it's a high-risk operation and relies heavily on appropriate Kubernetes RBAC configurations for the server's service account to prevent abuse. The server defaults to read-only mode, and write operations must be explicitly enabled via a flag, which is a good security practice. Rate limiting is built-in and enabled by default to protect against excessive API calls. The project maintains a security policy and responsible disclosure process.
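The RBAC hardening mentioned above can be made concrete: if `ExecInPod` is not needed, the server's service account should simply never be granted the `pods/exec` verb. An illustrative read-only Role (the name, namespace, and resource list are assumptions, not from the MKP docs):

```yaml
# Illustrative Role for MKP's service account: read-only, no pods/exec.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mkp-readonly
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps"]
    verbs: ["get", "list", "watch"]
  # No rule grants "create" on "pods/exec", so ExecInPod requests are
  # denied by the Kubernetes API server even if enabled in MKP itself.
```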
Updated: 2026-01-13 · GitHub
46
21
Low Cost
Sec8

Exposes any OpenAPI documented HTTP API as a Model Context Protocol (MCP) server for AI agents, with support for mock mode and authentication.

Setup Requirements

  • ⚠️Requires Java 21 or newer.
  • ⚠️Docker is the recommended method for building and running the server locally.
  • ⚠️Requires a valid and accessible OpenAPI specification URL to function (or needs to be in mock mode for testing).
  • ⚠️If authentication is enabled, a valid `infobip.openapi.mcp.security.auth.auth-url` endpoint is required.
Verified SafeView Analysis
The framework itself appears well-engineered with explicit handling for authorization headers in core components (ToolHandler, InitialAuthenticationFilter). Authentication is delegated to a configurable external `auth-url`, which is a good security practice. However, the overall security posture heavily depends on the trustworthiness of the provided OpenAPI specification and configured API endpoints. Malicious OpenAPI specifications or API responses could potentially lead to data exposure or prompt injections into AI agents. The 'JSON double serialization mitigation' helps handle malformed LLM inputs, preventing certain types of errors but should not be seen as a replacement for robust input validation on the underlying API. There are no obvious signs of 'eval' or similar dangerous dynamic code execution patterns on untrusted inputs within the provided source.
Updated: 2026-01-19 · GitHub
46
13
High Cost

reachy-mini-mcp

by OriNachum

Sec7

Control a Reachy Mini robot through an MCP or OpenAI-compatible API, enabling dynamic execution of robot movements, gestures, and conversational interactions.

Setup Requirements

  • ⚠️Requires a Reachy Mini Robot (physical or simulated via MuJoCo).
  • ⚠️Requires the Reachy Mini Daemon running and accessible (default: http://localhost:8000).
  • ⚠️Requires Python 3.10+.
  • ⚠️For TTS functionality, requires the 'piper' executable and a compatible voice model (e.g., set `PIPER_MODEL` environment variable).
  • ⚠️Full 'Conversation Stack' including an LLM (e.g., Llama-3.2-3B-Instruct-FP8 via vLLM) requires Docker and GPU hardware for efficient inference.
Verified SafeView Analysis
The server uses dynamic loading of Python scripts from a controlled 'tools_repository/scripts' directory for tool execution via `importlib.util.spec_from_file_location` and `spec.loader.exec_module`. While this is dynamic code execution, it's safer than `eval()` or `exec()` of arbitrary strings, which the `INLINE_REMOVAL_SUMMARY.md` explicitly states have been removed. The `tts_queue.py` module utilizes `subprocess.run` and `subprocess.Popen` to interact with `piper` (TTS) and `aplay` (audio playback); inputs for these commands appear to be reasonably handled (e.g., text via stdin, temporary files for audio) to mitigate injection risks. No obvious hardcoded secrets were found, with environment variables used for configuration. If `server_openai.py` is used, it binds to `0.0.0.0` which means it can be externally accessible if the host's firewall permits, posing a standard network exposure risk. The most significant inherent security consideration is the power of the `operate_robot` tool, especially in its 'sequence mode', when controlled by an external, potentially unconstrained LLM, which could lead to unintended or potentially destructive robot actions.
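The loading pattern the analysis describes — importing tool modules from a fixed directory rather than `exec()`-ing arbitrary strings — looks roughly like this sketch (the containment check and function name are assumptions added for illustration, not taken from the repository):

```python
import importlib.util
from pathlib import Path

TOOLS_DIR = Path("tools_repository/scripts")  # controlled directory named in the analysis

def load_tool(name: str, tools_dir: Path = TOOLS_DIR):
    """Load a tool module by file name from the controlled scripts directory."""
    base = tools_dir.resolve()
    path = (tools_dir / f"{name}.py").resolve()
    # Refuse names that escape the tools directory (e.g. "../evil")
    if base not in path.parents:
        raise ValueError(f"tool outside repository: {name}")
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # executes the file's top-level code
    return module
```

This is still dynamic code execution — anyone who can write to the scripts directory controls the server — but the attack surface is a directory on disk, not every string an LLM emits.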
Updated: 2025-11-21 · GitHub
46
91
Medium Cost

opensearch-mcp-server-py

by opensearch-project

Sec9

Enables AI assistants and LLMs to interact with OpenSearch clusters by providing a standardized Model Context Protocol (MCP) interface through built-in and dynamic tools.

Setup Requirements

  • ⚠️Requires a running OpenSearch cluster to connect to.
  • ⚠️Authentication (IAM roles, AWS profiles, basic auth) needs careful configuration via environment variables or a YAML config file.
  • ⚠️Requires Python 3.10+.
  • ⚠️Recommended to install 'uv' for streamlined dependency management and execution via 'uvx'.
Verified SafeView Analysis
The server employs robust input validation through Pydantic models, structured authentication methods (IAM, Basic, AWS credentials, Header-based), configurable SSL verification, and active response size limiting to prevent memory exhaustion. Write operations via the GenericOpenSearchApiTool are protected by an explicit configuration setting (OPENSEARCH_SETTINGS_ALLOW_WRITE). No obvious 'eval' or hardcoded production secrets were found. The primary security consideration is user misconfiguration (e.g., enabling OPENSEARCH_NO_AUTH in production or allowing unrestricted write operations via GenericOpenSearchApiTool if not needed).
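The explicit write gate noted above is a pattern worth copying in any MCP server: destructive operations stay off until the operator opts in. A stdlib-only sketch (everything except the `OPENSEARCH_SETTINGS_ALLOW_WRITE` variable name is an assumption):

```python
import os

WRITE_METHODS = {"PUT", "POST", "DELETE"}  # assumed set of mutating HTTP verbs

def check_write_allowed(method: str) -> None:
    """Raise unless mutating requests are explicitly enabled by the operator."""
    allowed = os.environ.get("OPENSEARCH_SETTINGS_ALLOW_WRITE", "false").lower() == "true"
    if method.upper() in WRITE_METHODS and not allowed:
        raise PermissionError(
            f"{method} blocked: set OPENSEARCH_SETTINGS_ALLOW_WRITE=true to enable writes"
        )
```

Defaulting to "false" means a fresh deployment is read-only until someone deliberately flips the switch.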
Updated: 2026-01-05 · GitHub
46
212
Medium Cost
Sec9

A Model Context Protocol (MCP) server that exposes OpenAPI endpoints as MCP tools, along with optional support for MCP prompts and resources, enabling Large Language Models to interact with REST APIs.

Setup Requirements

  • ⚠️Requires a valid OpenAPI specification (URL, file path, stdin, or inline content) to be provided at startup.
  • ⚠️Requires an API base URL (--api-base-url or API_BASE_URL) for the target API.
  • ⚠️For APIs with complex authentication (e.g., expiring tokens, refresh tokens), a custom AuthProvider implementation is recommended/required, often involving manual token extraction from a browser session (as demonstrated in the Beatport example).
Verified SafeView Analysis
The server implements several security best practices including preventing HTTP header injection (CRLF), blocking user-controlled system headers (e.g., Host, Content-Length), and redacting sensitive data from authentication error responses (401/403). The HTTP transport validates Origin headers for localhost to prevent DNS rebinding attacks but notes that production implementations should use a whitelist, which is good practice. No 'eval' or obvious malicious patterns were found. Hardcoded secrets are explicitly placeholders.
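CRLF rejection of the kind credited above takes only a few lines in any language; a hedged Python sketch (the function name and blocklist contents are illustrative, and combine both checks the analysis mentions):

```python
BLOCKED_HEADERS = {"host", "content-length", "transfer-encoding"}  # illustrative blocklist

def validate_header(name: str, value: str) -> None:
    """Reject header injection (CRLF) and user-controlled system headers."""
    if "\r" in name or "\n" in name or "\r" in value or "\n" in value:
        raise ValueError("CR/LF characters are not allowed in header names or values")
    if name.lower() in BLOCKED_HEADERS:
        raise ValueError(f"header {name!r} is reserved and cannot be user-set")
```

Without the first check, a value like `"x\r\nSet-Cookie: ..."` would let a client smuggle extra headers into the outgoing request.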
Updated: 2025-12-30 · GitHub
46
51
Medium Cost

MediaWiki-MCP-Server

by ProfessionalWiki

Sec9

An MCP server that enables Large Language Model (LLM) clients to interact with any MediaWiki wiki.

Setup Requirements

  • ⚠️Requires a `config.json` file for private wikis or authenticated tools, which needs to be manually created and populated with wiki details and credentials.
  • ⚠️Authentication (OAuth2 or Bot Passwords) must be configured on the target MediaWiki instance for tools marked with 🔐, which involves wiki-specific setup steps.
  • ⚠️A Node.js runtime (version 18 or higher) or Docker environment is required to run the server.
Verified SafeView Analysis
The server demonstrates good security practices by externalizing sensitive credentials (OAuth2 tokens, usernames, passwords) into a `config.json` file. The `wikiService.sanitize` method explicitly prevents these credentials from being exposed in MCP resource content. The core logic relies on the `mwn` library for MediaWiki API interactions, abstracting much of the direct API handling. No 'eval' or obvious obfuscation was found. Network requests are made to configured MediaWiki instances, which is inherent to its functionality. The HTTP transport uses session IDs for request handling. Overall, the design prioritizes secure handling of sensitive data and external interactions.
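Sanitization like the `wikiService.sanitize` step described above typically means stripping credential-bearing keys before a config object is ever serialized into resource content. A Python sketch under that assumption (the key names are illustrative, not taken from the project):

```python
SENSITIVE_KEYS = {"password", "username", "oauth2token", "botpassword"}  # illustrative

def sanitize_config(config: dict) -> dict:
    """Return a copy of the wiki config with credential fields removed, recursively."""
    return {
        key: sanitize_config(value) if isinstance(value, dict) else value
        for key, value in config.items()
        if key.lower() not in SENSITIVE_KEYS
    }
```

Filtering on the way out (rather than redacting in logs only) means a new resource endpoint cannot accidentally leak credentials it never received.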
Updated: 2026-01-19 · GitHub
46
88
High Cost

sparql-llm

by sib-swiss

Sec7

An LLM-powered agent for generating, validating, and executing SPARQL queries against biomedical knowledge graphs, utilizing Retrieval-Augmented Generation (RAG) with endpoint-specific metadata and schema for improved accuracy.

Setup Requirements

  • ⚠️Requires OpenAI, OpenRouter, MistralAI, or Groq API Key (Paid)
  • ⚠️Docker required for core services (Qdrant, API)
  • ⚠️Python 3.10+ only
  • ⚠️Vector database initialization for production requires manual execution of `index_resources.py` script (`AUTO_INIT=false`)
Verified SafeView Analysis
The system employs several security measures, including `DOMPurify` for HTML sanitization in the frontend (preventing XSS), `validate_sparql_with_void` for checking generated SPARQL queries against known endpoint schemas (mitigating SPARQL injection), and environment variables for API key management. However, potential risks exist inherent to dynamic query generation and external API interactions. A sophisticated LLM jailbreak could theoretically influence the `endpoint_url` passed to `query_sparql` or craft malicious queries that bypass incomplete VoID schema validations, leading to SSRF or unintended data access on controlled endpoints. Logging of user questions and feedback (potentially sensitive information) is protected by an API key.
Updated: 2026-01-13 · GitHub
46
60
Medium Cost

Letta-MCP-server

by oculairmedia

Sec7

A Model Context Protocol (MCP) server that provides comprehensive tools for agent management, memory operations, and integration with the Letta system.

Setup Requirements

  • ⚠️Requires Node.js and npm to run directly.
  • ⚠️Requires a running Letta instance, configured via `LETTA_BASE_URL` and `LETTA_PASSWORD` environment variables.
  • ⚠️The `InMemoryEventStore` is noted as 'not suitable for production' and should be replaced with persistent storage for production deployments.
  • ⚠️If using `export_agent` with XBackbone upload, ensure `XBACKBONE_URL` and `XBACKBONE_TOKEN` are securely configured via environment variables and not overridden by untrusted input.
Verified SafeView Analysis
The server acts as a proxy for the Letta API. Core security risks related to user-provided code execution (e.g., in `upload_tool`) are primarily handled by the Letta backend, not this MCP server directly. The HTTP transport includes origin validation (CORS) to prevent certain web-based attacks. However, there are potential local file system interaction risks in `export_agent` and `import_agent` if an attacker can manipulate file paths (e.g., directory traversal), though `path.resolve` mitigates some of this. The `export_agent` tool also supports uploading to a configurable XBackbone URL, which could introduce SSRF vulnerabilities if the `xbackbone_url` argument is not strictly controlled by environment variables and can be influenced by a malicious client. `LETTA_PASSWORD` is correctly handled as an environment variable.
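The `path.resolve` mitigation called out above maps to `Path.resolve` plus a containment check in Python; a sketch of the full pattern (the export directory and function names are assumptions for illustration):

```python
from pathlib import Path

EXPORT_DIR = Path("exports")  # assumed base directory for agent files

def safe_export_path(filename: str, base: Path = EXPORT_DIR) -> Path:
    """Resolve a user-supplied filename and refuse paths that escape `base`."""
    base = base.resolve()
    candidate = (base / filename).resolve()
    # After resolution, the candidate must still live under the base directory;
    # this rejects traversal sequences like "../../etc/passwd".
    if base != candidate and base not in candidate.parents:
        raise ValueError(f"path escapes export directory: {filename}")
    return candidate
```

Resolving alone is not enough — the analysis's point is that `resolve` normalizes the path, but only the containment comparison actually blocks traversal.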
Updated: 2026-01-19 · GitHub
46
61
High Cost

This server provides AI-powered research capabilities by automating interactions with Perplexity.ai's web interface, offering web search, content extraction, chat, and developer tooling without requiring API keys.

Setup Requirements

  • ⚠️Requires Bun runtime and Node.js 18+ for TypeScript compilation.
  • ⚠️Optional Perplexity Pro account support requires a one-time manual browser login via 'bun run login' to save the session.
  • ⚠️Browser automation is resource-intensive (CPU/RAM) and performance depends on website consistency, potentially leading to instability or timeouts.
Verified SafeView Analysis
The server relies heavily on Puppeteer for browser automation, which inherently involves executing JavaScript in a browser context when visiting external websites (e.g., Perplexity.ai, or arbitrary URLs for content extraction). The codebase demonstrates good practices such as filtering unsafe URL schemes (e.g., 'javascript:') in extracted content, implementing content type checks before extensive parsing, and using static scripts for browser evasion. There are no direct 'eval' calls in the Node.js server context handling user input. The main security considerations are the inherent risks of browser automation against external, potentially untrusted sites, and the stability/integrity of Perplexity.ai's website. The use of 'gitingest.com' for GitHub content is an external dependency risk.
Updated: 2025-12-13 · GitHub
Page 62 of 713