Stop Searching. Start Trusting.

The curated directory of MCP servers, vetted for security, efficiency, and quality.

Tired of the MCP "Marketplace" Chaos?

We built MCPScout.ai to solve the ecosystem's biggest pain points.

No Insecure Dumps

We manually analyze every server for basic security flaws.

Easy Setup

Our gotcha notes warn you about complex setups.

Avoid "Token Hogs"

We estimate each server's token cost so your agents stay cost-effective.

Products, Not Demos

We filter out "Hello World" demos.


Vetted Servers (395)

57
111
Low Cost

mcp-watch

by kapilduraphe

Sec8

A comprehensive security scanner for Model Context Protocol (MCP) servers that detects various vulnerabilities in MCP implementations.

Setup Requirements

  • ⚠️ Node.js version >=16.0.0 is required.
  • ⚠️ Git must be installed and accessible in the system PATH for scanning remote repositories.
  • ⚠️ Requires network access to clone GitHub repositories.
Verified Safe
The scanner uses 'spawnSync' to execute 'git clone' for remote repository analysis. While arguments are passed safely as an array to prevent shell injection, executing arbitrary git commands on potentially untrusted repositories, even in temporary directories, inherently carries a small risk. The tool actively sanitizes detected credentials in its output to prevent self-leakage, which is a strong security practice for a security scanner.
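The list-argument pattern the analysis credits can be illustrated in Python's subprocess module (a minimal sketch of the general technique, not mcp-watch's actual code, which is Node.js):

```python
import subprocess

# An untrusted value containing shell metacharacters. Interpolated into
# a shell command string, this could inject commands; passed as a list
# element, it stays an opaque string.
untrusted = "https://example.com/repo.git; rm -rf /tmp/x"

# List form: no shell is spawned, so ';' has no special meaning and the
# whole value arrives as a single argv entry of the child process.
result = subprocess.run(["echo", untrusted], capture_output=True, text=True)
print(result.stdout.strip() == untrusted)  # True: printed literally
```

The same property holds for Node's `spawnSync(cmd, argsArray)`: the danger only appears when arguments are concatenated into a string handed to a shell.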
Updated: 2025-12-07 · GitHub
55
127
Medium Cost

logfire-mcp

by pydantic

Sec7

Enables LLMs to retrieve and analyze application telemetry data (OpenTelemetry traces and metrics) from Pydantic Logfire using SQL queries.

Setup Requirements

  • ⚠️ Requires a Pydantic Logfire read token, which must be created in the Logfire UI for the specific project.
  • ⚠️ Requires `uv` (a Python package installer and runner) to be installed.
  • ⚠️ The `arbitrary_query` tool allows arbitrary SQL execution, requiring careful LLM prompting and sandboxing to prevent unintended or malicious queries.
Verified Safe
The server provides an `arbitrary_query` tool that directly executes SQL, which is powerful and could be misused if an LLM client is not properly constrained. The `find_exceptions_in_file` tool uses f-strings for SQL query construction, which can be a SQL injection risk if `filepath` were to contain untrusted input.
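The f-string construction flagged here is the classic alternative to placeholder binding. A minimal sketch of the safe pattern, using sqlite3 as a stand-in backend (not Logfire's actual query API):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (filepath TEXT, exception TEXT)")
conn.execute("INSERT INTO records VALUES ('app.py', 'ValueError')")

# An injection attempt that would escape an f-string-built WHERE clause.
filepath = "app.py' OR '1'='1"

# UNSAFE: f"... WHERE filepath = '{filepath}'" would match every row.
# SAFE: the driver binds the value as one literal, so it matches nothing.
rows = conn.execute(
    "SELECT exception FROM records WHERE filepath = ?", (filepath,)
).fetchall()
print(rows)  # []
```

With the legitimate value `"app.py"` the same bound query returns the stored row, so parameterization costs nothing in the normal case.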
Updated: 2025-11-26 · GitHub
55
128
Medium Cost
Sec5

Acts as a bridge between AI assistants and StarRocks databases for direct SQL execution, database exploration, and data visualization.

Setup Requirements

  • ⚠️ Requires a StarRocks cluster running on localhost (default port 9030, user 'root', empty password, at least one BE node).
  • ⚠️ Arrow Flight SQL functionality is optional and requires specific StarRocks FE configuration (e.g., `arrow_flight_sql_port = 9408` in `fe.conf`) and a FE restart.
  • ⚠️ Requires Python 3.10 or higher and several Python packages (e.g., mysql-connector-python, pandas, plotly, kaleido, adbc-driver-manager, adbc-driver-flightsql, pyarrow). These are managed by `uv` during installation/run but must be resolvable.
Review Required
The server uses `eval()` for Plotly expressions, although it includes AST-based validation to restrict the expression's complexity. A significant SQL injection risk exists because many SQL queries, particularly in tools like `read_query`, `write_query`, and internal data fetching, are constructed via f-strings and sent without explicit parameterization to the database driver. This is especially problematic if user-controlled input (like query strings or database/table names from an AI agent) is not thoroughly sanitized upstream by the MCP framework or the AI agent itself. The `parse_connection_url` function has a known limitation where an `@` symbol in the password can lead to incorrect parsing, potentially causing connection failures or unintended host connections. Additionally, CORS is configured to `allow_origins=["*"]` by default for HTTP modes, which is insecure for production environments.
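The wildcard-CORS default noted above is typically fixed by echoing back only known origins. A framework-agnostic sketch of that check (the origin list is hypothetical, not this server's configuration):

```python
# Hypothetical allowlist of browser origins permitted to call the API.
ALLOWED_ORIGINS = {"https://app.example.com", "http://localhost:3000"}

def cors_origin_header(request_origin):
    """Return the Access-Control-Allow-Origin value for a request,
    or None to omit the header entirely.

    Echoing back only known origins replaces the permissive
    allow_origins=["*"] default, which lets scripts on any website
    call the HTTP endpoints from a visitor's browser.
    """
    if request_origin in ALLOWED_ORIGINS:
        return request_origin
    return None

print(cors_origin_header("https://app.example.com"))  # echoed back
print(cors_origin_header("https://evil.example"))     # None: header omitted
```

In a Starlette/FastAPI deployment the same effect comes from passing an explicit origin list to the CORS middleware instead of `["*"]`.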
Updated: 2025-11-24 · GitHub
55
1
Low Cost

Enables Claude to perform detailed code analysis, structural overview, symbol extraction, code search, and dependency mapping within a repository using the `kit` CLI.

Setup Requirements

  • ⚠️ Requires the `kit` CLI tool to be installed (Python 3.9+ is a prerequisite for `kit`).
  • ⚠️ Dependency graph visualization (`--visualize`) requires Graphviz to be installed.
  • ⚠️ Semantic search requires the `sentence-transformers` Python package, though `kit` prompts for its installation if missing.
Verified Safe
The plugin itself consists of documentation and configuration for Claude to invoke the `kit` CLI tool. It does not contain server-side code, direct `eval` calls, obfuscation, or hardcoded secrets. The primary security considerations lie in the `kit` CLI tool's own security and the sandboxing/sanitization mechanisms of the Claude Code environment executing shell commands. Assuming a secure execution environment, the plugin's instructions are safe.
Updated: 2025-12-01 · GitHub
55
65
High Cost

reddit-research-mcp

by king-of-the-grackles

Sec9

AI-powered Reddit intelligence for market research, competitive analysis, and customer discovery across 20,000+ indexed subreddits.

Setup Requirements

  • ⚠️ Requires Reddit API credentials (Client ID, Client Secret, User Agent) to be configured as environment variables for server operation.
  • ⚠️ Requires a Descope Project ID for authentication setup, provided via environment variable.
  • ⚠️ Requires the `SERVER_URL` environment variable to be correctly configured for OAuth callbacks and the server-info endpoint.
  • ⚠️ Requires Python 3.11+.
Verified Safe
All sensitive API keys and URLs are configured via environment variables. Authentication is handled by Descope OAuth2 with a multi-issuer JWT verifier, which is a robust pattern. The server makes external HTTP calls to a ChromaDB proxy for vector search and an Audience API for feed management; the security of these external services and the trustworthiness of the `AUDIENCE_API_URL` are critical considerations. No `eval` or obvious malicious patterns were found in the provided source code.
Updated: 2025-12-11 · GitHub
55
1
Low Cost

asa-starter-kit

by vibecodiq

Sec9

A deterministic Python CLI for generating and managing production-ready, slice-based FastAPI backend code, ensuring architectural standards and preserving custom logic during regeneration.

Setup Requirements

  • ⚠️ Requires Python 3.10+.
  • ⚠️ Optional `devbox` environment setup (otherwise requires manual virtual environment and `pip` management).
Verified Safe
The core ASA CLI tool focuses on deterministic code generation and architectural enforcement (e.g., boundary linting), which inherently promotes secure development practices. It does not use `eval` or other known dangerous functions. Hardcoded secrets are not present in the provided core logic or demo snippets. The generated FastAPI application's runtime security depends heavily on the user's implementation of business logic within the provided markers (e.g., for JWT generation, database interactions). The linter actively prevents cross-domain import violations.
Updated: 2025-12-05 · GitHub
55
85
Medium Cost

powerbi-mcp

by sulaiman013

Sec7

Enables AI assistants to interact with Power BI Desktop and Service through natural language: querying data, managing models, and performing safe bulk operations, with enterprise-grade security and report visual integrity preserved during refactoring.

Setup Requirements

  • ⚠️ Requires Windows 10/11 for ADOMD.NET and Power BI Desktop connectivity.
  • ⚠️ Requires Power BI Desktop installed for local model interaction and PBIP editing.
  • ⚠️ ADOMD.NET client libraries (often bundled with Power BI Desktop or SSMS) must be discoverable.
  • ⚠️ Cloud connectivity requires Azure AD App Registration with specific permissions (Dataset.Read.All, Workspace.Read.All) and a Premium Per User (PPU) or Premium Capacity workspace for XMLA endpoint access.
Verified Safe
The project integrates a robust security layer for PII detection, audit logging, and access policies, which is a significant positive. However, it relies on environment variables for sensitive cloud credentials (TENANT_ID, CLIENT_ID, CLIENT_SECRET), which is good practice but requires careful management outside the code. The use of 'eval' for .NET assembly loading in connectors, while common for .NET interop, carries inherent risks. Extensive file manipulation for PBIP projects (reading, writing, copying, deleting via `powerbi_pbip_connector.py`) and execution of arbitrary DAX queries means the tool has significant power over the local system and data. The `pbip_load_project` tool directly takes user-provided paths for PBIP projects, which necessitates trust in the input or robust path sanitization to prevent potential traversal vulnerabilities.
Updated: 2025-12-01 · GitHub
55
1
High Cost

mcp-worklog

by final0920

Sec9

Automates the generation and management of daily work reports, including collecting content from AI tool sessions for summarization and editing.

Setup Requirements

  • ⚠️ Requires a `--storage-path` argument to specify where daily reports are saved.
  • ⚠️ AI session collection (Claude Code, Kiro, Cursor) depends on the user having these tools installed and their data files present in standard locations; otherwise, no sessions will be collected.
  • ⚠️ Requires Python 3.10 or newer.
Verified Safe
The server operates locally using standard I/O and reads/writes files in a user-specified directory. AI session collectors access predefined application data paths (e.g., ~/.claude, %APPDATA%/Kiro, %APPDATA%/Cursor). The CursorCollector uses SQLite with a hardcoded query key, which reduces SQL injection risk. No explicit 'eval' or direct external network calls (beyond standard MCP communication) are apparent from the provided code, nor any hardcoded secrets. File operations are controlled and limited to expected paths for its functionality, and no arbitrary file access based on user input is observed.
Updated: 2025-12-11 · GitHub
54
88
Medium Cost

MCPcat is an analytics platform designed for MCP server owners to capture user intentions and behavior patterns, offering session replay, trace debugging, and integration with existing observability tools.

Setup Requirements

  • ⚠️ Requires @modelcontextprotocol/sdk v1.11 or higher.
  • ⚠️ An MCPcat project ID is required for full analytics functionality; otherwise, only configured telemetry exporters will receive data.
  • ⚠️ Full functionality (e.g., local log files, detailed stack traces with code context) is optimized for Node.js environments; behavior may vary in edge runtimes.
Verified Safe
The SDK performs outbound network calls to its own API (api.mcpcat.io) and configured telemetry exporters (Datadog, Sentry, OTLP). Sensitive configurations like API keys and DSNs are expected to be provided by the user, typically via environment variables, and are not hardcoded within the SDK. The SDK includes a redaction mechanism with a customizable function and a list of protected fields, which is a strong security feature for sensitive data. No 'eval' or obvious obfuscation was found. Local log files (`~/mcpcat.log`) might contain event information, but this is server-local and not exfiltrated by the SDK.
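A field-based redaction hook like the one described can be sketched as follows (the field names and event shape are illustrative, not MCPcat's actual API):

```python
# Hypothetical list of key names treated as sensitive.
PROTECTED_FIELDS = {"password", "api_key", "authorization", "token"}

def redact(event: dict) -> dict:
    """Return a copy of the event with sensitive values masked.

    Recurses into nested dicts so credentials buried in sub-objects
    (e.g. request headers) are also caught before export.
    """
    out = {}
    for key, value in event.items():
        if key.lower() in PROTECTED_FIELDS:
            out[key] = "[REDACTED]"
        elif isinstance(value, dict):
            out[key] = redact(value)
        else:
            out[key] = value
    return out

print(redact({"user": "a", "api_key": "sk-123", "meta": {"token": "t"}}))
# {'user': 'a', 'api_key': '[REDACTED]', 'meta': {'token': '[REDACTED]'}}
```

Running redaction before any payload leaves the process is what makes such a hook effective: exporters only ever see the masked copy.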
Updated: 2025-12-04 · GitHub
53
81
Medium Cost

falcon-mcp

by CrowdStrike

Sec8

The Falcon MCP (Model Context Protocol) server acts as a middleware, connecting AI agents with the CrowdStrike Falcon cybersecurity platform to enable intelligent security analysis and automation in agentic workflows.

Setup Requirements

  • ⚠️ Requires Python 3.11 or higher.
  • ⚠️ Requires CrowdStrike Falcon API credentials (Client ID, Client Secret) with appropriate scopes.
  • ⚠️ Requires a Google API Key and Google Model for the ADK agent integration.
  • ⚠️ Deployment to Google Cloud requires the `gcloud` CLI and specific GCP APIs enabled.
Verified Safe
The project is open-source and provides documentation on API scope requirements and security. Environment variables for credentials are used, which is a good practice. However, the `adk_agent_operations.sh` script uses `eval` to load environment variables from the `.env` file. While common for local development, `eval` can be a security risk if the `.env` file is compromised or untrusted, as it executes arbitrary shell code. Users are explicitly instructed to populate this file, mitigating the risk under normal usage, but it's a pattern to be aware of.
Updated: 2025-12-08 · GitHub
52
29
Medium Cost

linux-mcp-server

by rhel-lightspeed

Sec4

A Model Context Protocol (MCP) server for read-only Linux system administration, diagnostics, and troubleshooting on RHEL-based systems.

Setup Requirements

  • ⚠️ Requires Python 3.10 or later.
  • ⚠️ SSH host key verification is disabled by default, making it vulnerable to MITM attacks unless `LINUX_MCP_VERIFY_HOST_KEYS` is explicitly set to true.
  • ⚠️ Relies on Linux-specific tools (e.g., systemd, journalctl, lsblk) and is optimized for RHEL-based systems, limiting full functionality on other OS types (e.g., macOS, Windows).
  • ⚠️ The `read_log_file` tool requires the `LINUX_MCP_ALLOWED_LOG_PATHS` environment variable to be configured with a comma-separated list of permitted log file paths.
Review Required
The server defaults to `verify_host_keys=False` for SSH connections, making it vulnerable to Man-in-the-Middle (MITM) attacks if not explicitly enabled. Passphrases for SSH keys are handled via environment variables, which can pose risks in some deployment scenarios. File access for the `read_log_file` tool is controlled by a whitelist (`LINUX_MCP_ALLOWED_LOG_PATHS`), which is a good security practice. The `disallow_local_execution_in_containers` decorator is a useful safeguard to prevent unintended local execution in containerized environments.
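A path whitelist of the kind `LINUX_MCP_ALLOWED_LOG_PATHS` enables can be sketched in Python; the key detail is resolving paths before comparing them, so `..` traversal cannot slip past the check (the allowlist entries are hypothetical, not this server's code):

```python
import os

# Hypothetical allowlist, standing in for LINUX_MCP_ALLOWED_LOG_PATHS.
# Entries are resolved up front so comparisons are symlink-consistent.
ALLOWED_LOG_PATHS = {
    os.path.realpath(p) for p in ("/var/log/messages", "/var/log/secure")
}

def is_allowed_log(path: str) -> bool:
    """Resolve symlinks and '..' segments before comparing, so a
    request for '/var/log/../etc/shadow' cannot pass as a log path."""
    return os.path.realpath(path) in ALLOWED_LOG_PATHS

print(is_allowed_log("/var/log/messages"))       # True
print(is_allowed_log("/var/log/../etc/shadow"))  # False
```

Comparing resolved absolute paths against a fixed set is stricter than prefix matching, which can be bypassed by sibling directories like `/var/log-evil`.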
Updated: 2025-12-10 · GitHub
51
73
Medium Cost

TriageMCP

by eversinc33

Sec3

Enables an LLM to perform static analysis and triage of PE files using local security tools.

Setup Requirements

  • ⚠️ Requires Python 3.13 or newer.
  • ⚠️ Requires manual installation and configuration of external tools (FLOSS, UPX, CAPA, YARA rules) and updating their paths in `triage.py`.
  • ⚠️ Default tool paths are Windows-specific (e.g., C:\Tools\...).
Review Required
The server allows an LLM to execute external binaries (FLOSS, UPX, CAPA) and access the local filesystem via user-controlled file paths. Without robust input validation, sanitization, or sandboxing mechanisms, a malicious or compromised LLM could potentially: 1) analyze arbitrary system files (information leak via `list_directory`, `get_hashes`, `get_pe_metadata` etc.), 2) attempt to unpack or modify critical system binaries (`upx_unpack`), or 3) exploit command injection vulnerabilities in the external tools if crafted file paths are passed directly to `subprocess` calls. The hardcoded tool paths also mean the setup is specific and not easily adaptable to different security contexts without code modification.
Updated: 2025-12-01 · GitHub
Page 3 of 33