Stop Searching. Start Trusting.

The curated directory of MCP servers, vetted for security, efficiency, and quality.

Tired of the MCP "Marketplace" Chaos?

We built MCPScout.ai to solve the ecosystem's biggest pain points.

No Insecure Dumps

We manually analyze every server for basic security flaws.

Easy Setup

Our gotcha notes warn you about complex setups.

Avoid "Token Hogs"

We estimate each server's token cost so you can build cost-effective agents.

Products, Not Demos

We filter out "Hello World" demos.


Vetted Servers (8,554)

Medium Cost · Sec 8

Manages Google Sheets data programmatically via a server-side application, leveraging the Google Sheets API for various data manipulation tasks.

Setup Requirements

  • ⚠️Requires manual setup of a Google Cloud Project with the Google Sheets API enabled and creation of OAuth 2.0 Client IDs.
  • ⚠️Requires copying the `gcp-oauth.keys.json` file (obtained from Google Cloud Console) into the `./dist` directory after building the project.
  • ⚠️Initial run initiates an interactive browser-based authentication flow to generate and save local credentials.
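For reference, registering a locally built stdio MCP server with a client such as Claude Desktop is typically a small JSON entry pointing at the build output. A sketch under stated assumptions (the server key, install path, and entry filename are illustrative; only the `./dist` build directory comes from this listing):

```json
{
  "mcpServers": {
    "google-sheets": {
      "command": "node",
      "args": ["/path/to/the-server/dist/index.js"]
    }
  }
}
```

Per the setup notes above, `gcp-oauth.keys.json` must sit inside that same `dist` directory, and the first run opens a browser window to complete the OAuth flow.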
Verified Safe
The server uses Google OAuth2 for authentication via `@google-cloud/local-auth` and stores credentials locally in `.gsheets-server-credentials.json`. The user is responsible for securing this file and the `gcp-oauth.keys.json` file, which contains their Google Cloud project client credentials. The code includes checks for the existence of `gcp-oauth.keys.json` and handles authentication errors. No `eval` or other dynamic code execution is present, and no obvious obfuscation or malicious patterns were found. Network interactions are limited to the Google Sheets API, a trusted service.
Updated: 2026-01-19 · GitHub
Low Cost · Sec 8

Enables AI assistants to interact with and debug Kubernetes clusters by translating natural language requests into Kubernetes operations.

Setup Requirements

  • ⚠️Requires Python 3.12 or higher for the `autogen` client.
  • ⚠️Requires `kubectl` and optionally `helm`, `cilium`, `hubble` CLIs to be installed and in PATH for local server execution.
  • ⚠️Requires a valid Kubernetes cluster and `KUBECONFIG` setup.
  • ⚠️For `autogen` client, requires Azure OpenAI configuration (deployment, model, API version, endpoint) and Azure AD authentication (DefaultAzureCredential).
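The `--access-level` and `--allow-namespaces` controls noted in the analysis below are command-line flags supplied when the server is registered with a client. A hypothetical configuration sketch (the command name and flag values are assumptions for illustration; only the flag names come from this listing):

```json
{
  "mcpServers": {
    "kubernetes": {
      "command": "kube-mcp-server",
      "args": ["--access-level", "readonly", "--allow-namespaces", "dev-.*"]
    }
  }
}
```

Starting read-only and widening the namespace pattern later is the conservative path, given that the server ultimately shells out to `kubectl`.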
Verified Safe
The server's core function is executing shell commands (kubectl, helm, cilium, hubble) based on AI input, which inherently carries security risks. However, the project implements strong mitigations:

  • Explicit `--access-level` controls (readonly, readwrite, admin) filter allowed operations at registration time.
  • `--allow-namespaces` restricts operations to specific namespaces, including regex support.
  • Command parsing uses `shlex.Split` to handle quotes and prevent basic injection, though an advanced AI could still generate harmful commands within its allowed scope.
  • Validation of the CLI tools (`kubectl`, `helm`, `cilium`, `hubble`) and kubeconfig connectivity is performed at startup.
  • Telemetry collection is opt-out and sends basic invocation data.
  • The project follows Microsoft's security reporting policies.
Updated: 2025-12-30 · GitHub
memoria — by byronwade

Low Cost · Sec 9

Memoria is an MCP (Model Context Protocol) server that enhances AI developer tools by providing git-based forensic analysis, revealing hidden file dependencies, risk assessments, and historical context to prevent regressions and improve code quality.

Setup Requirements

  • ⚠️Requires Node.js 18+
  • ⚠️Requires a Git repository with commit history for core functionality
  • ⚠️Cloud features (memories, guardrails) require a team token and access to a Convex backend (paid tier), with device linking for free-tier cloud features
  • ⚠️Local execution with `npx` can be slow; global install (`npm install -g @byronwade/memoria`) is recommended for performance.
Verified Safe
The core free tier functionality runs 100% locally and operates on local git repositories, minimizing network risk. Cloud features (paid tier) involve communication with a Convex backend, requiring API keys/device IDs. The system uses standard OAuth flows for GitHub authentication and JWTs for internal service communication. There are no obvious 'eval' or direct code injection vulnerabilities from the provided snippets. `process.env` is used for sensitive credentials (e.g., GITHUB_PRIVATE_KEY, INTERNAL_API_KEY), which is a standard practice. Overall, the architecture appears to be designed with security in mind, especially for local execution. A higher score would require a deeper audit of the Convex backend and full OAuth implementation details.
Updated: 2025-12-09 · GitHub
fl-studio-mcp — by calvinw

Medium Cost · Sec 9

An MCP server enabling AI assistants to control FL Studio's piano roll with natural language commands and real-time, automatic updates.

Setup Requirements

  • ⚠️Requires macOS Accessibility permissions for Terminal/Claude Code to enable auto-trigger functionality.
  • ⚠️Windows support is 'partially implemented' and 'may require additional configuration'.
  • ⚠️Requires FL Studio (recent version) with Python scripting support.
Verified Safe
The server primarily relies on file-based communication (JSON files) within a dedicated FL Studio script directory and standard I/O for MCP client communication, which limits network exposure. It uses 'subprocess.run' and 'osascript' (macOS) or 'pyautogui' (Windows) to trigger FL Studio and manage window focus, requiring explicit Accessibility permissions from the user. While these permissions are powerful, their use is transparently documented and specific to sending a trigger keystroke ('Cmd+Opt+Y' or 'Ctrl+Alt+Y') and focusing the FL Studio application. No 'eval', 'exec', hardcoded secrets, or arbitrary command injection vectors were found. The code's functionality is well-defined and justifiable for its purpose.
Updated: 2026-01-11 · GitHub
kali-mcp-server — by rangta10

Low Cost · Sec 2

Integrate Kali Linux penetration testing tools with LLMs (e.g., Claude) via the Model Context Protocol for automated security testing and reconnaissance.

Setup Requirements

  • ⚠️Requires Docker for execution.
  • ⚠️The Docker container runs with elevated privileges (`--privileged`, `NET_ADMIN`, `NET_RAW`), posing significant security risks if not managed in a highly isolated environment.
  • ⚠️Requires manual configuration within Claude Desktop's `claude_desktop_config.json`.
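A `claude_desktop_config.json` entry for a Docker-wrapped server like this one would roughly take the following shape. This is a hedged sketch, not the project's documented config: the server key and image name are illustrative, while the capability flags are the ones called out in the warnings above:

```json
{
  "mcpServers": {
    "kali": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "--privileged",
        "--cap-add=NET_ADMIN",
        "--cap-add=NET_RAW",
        "kali-mcp-server"
      ]
    }
  }
}
```

Given the low security score and the command-injection finding below, such a container belongs in an isolated lab VM, never on a workstation with access to production networks.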
Review Required
CRITICAL: The `server.js` file directly interpolates user-supplied arguments into `exec` calls (e.g., for nmap, whois, sqlmap) without explicit input sanitization, leading to potential arbitrary command injection (Remote Code Execution) if malicious inputs are provided by the LLM or an attacker. Furthermore, the Docker container runs with `--privileged`, `--cap-add=NET_ADMIN`, and `--cap-add=NET_RAW` capabilities, granting extensive and dangerous permissions that could compromise the host system if the container is exploited.
Updated: 2025-11-22 · GitHub
discord-agent-mcp — by aj-geddes

Medium Cost · Sec 9

AI-powered management and automation of Discord servers, enabling natural language control over channels, roles, moderation, and events.

Setup Requirements

  • ⚠️Requires Node.js 20.0.0+.
  • ⚠️Requires a Discord Bot Token from the Discord Developer Portal, and specific Privileged Gateway Intents ('Server Members Intent' and 'Message Content Intent') to be enabled for the bot.
  • ⚠️Requires the bot to be invited to a Discord server where it has administrative or specific management permissions (e.g., Manage Channels, Manage Roles, Manage Messages, Moderate Members).
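The bot token is supplied through the environment rather than a file; in a typical MCP client config that maps to an `env` block. A sketch with assumed values (the server key, entry path, and placeholder token are illustrative; `DISCORD_TOKEN` is the variable this project reads):

```json
{
  "mcpServers": {
    "discord": {
      "command": "node",
      "args": ["/path/to/discord-agent-mcp/dist/index.js"],
      "env": {
        "DISCORD_TOKEN": "your-bot-token-here"
      }
    }
  }
}
```

Keep the token out of version control; the client config file is the one place it should live.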
Verified Safe
The project uses TypeScript and Zod for strong type and input validation, significantly reducing common injection vulnerabilities. Discord.js's permission handling is correctly leveraged for all operations, enforcing least privilege. Configuration (e.g., DISCORD_TOKEN) is handled via environment variables, with clear instructions against committing secrets. The `send_message_with_file` tool allows specifying an absolute file path; while validated for existence, a malicious AI could potentially exfiltrate arbitrary files if the bot's underlying OS permissions allow it. However, the Dockerfile includes `runAsNonRoot: true` and `allowPrivilegeEscalation: false`, which mitigates this risk by limiting file system access. No `eval` or similar dangerous functions were found.
Updated: 2025-12-04 · GitHub
Low Cost

Enables AI-driven banking workflows, including authentication, account access, beneficiary management, and transfers, by securely interacting with Apache Fineract/MifosX Self-Service APIs.

Setup Requirements

  • ⚠️Requires Python 3.
  • ⚠️Requires access to an Apache Fineract / MifosX API instance for full functionality.
  • ⚠️The AI client must manage and pass user credentials (username/password) as arguments for each authenticated tool call.
Verified Safe
The server does not contain 'eval' or other immediate code injection vulnerabilities. It utilizes environment variables for base URL and tenant ID, with transparent hardcoded defaults. API authentication relies on Basic Authentication, where the AI client must pass the username and password with each tool call, which are then Base64 encoded and sent over HTTPS to the Fineract API. While functional, this is generally less secure than token-based authentication methods, as raw credentials are handled by the client for every interaction. Error responses from the backend API are returned as raw text, which could potentially expose sensitive details if not handled by the client.
Updated: 2026-01-09 · GitHub
High Cost · Sec 9

A centralized Model Context Protocol (MCP) server for AI Safety research, providing knowledge base, safety evaluation, mechanistic interpretability, and governance tools for research assistants and agentic systems.

Setup Requirements

  • ⚠️LITELLM_API_KEY is required for LLM-based safety evaluations (eval.* tools), which incurs paid LLM API costs.
  • ⚠️Downloading large models (~GBs) for interpretability (interp.* tools) requires significant disk space and potentially VRAM, increasing hosting costs.
  • ⚠️Requires Python 3.10 or higher.
Verified Safe
Secrets (e.g., LITELLM_API_KEY) are managed via environment variables. The server defaults to stdio for communication (local IPC), with TCP transport planned but not yet implemented. The README provides strong warnings against exposing the server directly to the internet, explicitly recommending deployment behind an authenticated proxy and usage of VPNs or private networks. Interpretability tools load models from HuggingFace or local paths, which requires trust in the model source, a standard practice in ML development. No direct `eval()` of user input or dangerous `subprocess` calls were identified.
Updated: 2025-11-24 · GitHub
todo-mcp-server — by stilllovee

Low Cost · Sec 8

Provides autonomous task management and random string generation for AI agents via Model Context Protocol (MCP) using stdio or HTTP transports.

Setup Requirements

  • ⚠️Requires a Node.js runtime environment to run.
  • ⚠️The SQLite database file ('tasks.db') is created in the current working directory, which affects data persistence and isolation across different execution contexts.
  • ⚠️Running in HTTP mode requires an available port (default 8123); a custom port can be specified.
Verified Safe
The server primarily uses local SQLite for task persistence, which depends on host system file permissions for security. No direct 'eval' or malicious obfuscation found. Standard HTTP server risks apply if exposed publicly, but it's generally intended for local or controlled agent interaction. Input validation relies on the MCP SDK's schemas.
Updated: 2025-11-29 · GitHub
faf-mcp — by Wolfe-Jam

Low Cost · Sec 8

The server acts as a Model Context Protocol (MCP) provider to give AI assistants, like Claude, a persistent, structured, and deep understanding of a codebase, preventing context drift and optimizing AI performance.

Setup Requirements

  • ⚠️Requires `faf-cli` to be installed globally via `npm install -g faf-cli` for most MCP tools to function.
  • ⚠️The server expects and operates on real local filesystem paths (e.g., `/Users/username/projects/my-app`), not container paths like `/mnt/user-data/`.
  • ⚠️Requires configuration in the user's MCP client (e.g., `~/.cursor/mcp.json` or VS Code MCP extension settings).
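A `~/.cursor/mcp.json` entry for this server might look as follows. This is a sketch only: the server key and the `npx` invocation are assumptions, and `faf-cli` must already be installed globally per the note above:

```json
{
  "mcpServers": {
    "faf": {
      "command": "npx",
      "args": ["-y", "faf-mcp"]
    }
  }
}
```

Note that the server expects real local filesystem paths as tool arguments, not container paths, per the setup requirements above.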
Verified Safe
The `PathValidator` in `src/handlers/fileHandler.ts` proactively prevents path traversal and access to forbidden system directories, and enforces file size limits. While core functionalities are natively implemented in TypeScript (avoiding shell execution), a fallback mechanism in `src/handlers/engine-adapter.ts` uses `child_process.exec` for non-bundled commands. This fallback includes argument sanitization (`sanitizedArgs`) to mitigate injection risks, but shell execution always carries inherent risks. The `http-sse` transport defaults to listening on `0.0.0.0` with `cors` enabled for all origins, which is a permissive network configuration, common for dev tools but a consideration for broader exposure.
Updated: 2026-01-15 · GitHub
Medium-Agent — by N1KH1LT0X1N

Medium Cost · Sec 8

The Medium MCP Server scrapes, processes, and analyzes Medium articles, generates audio versions, and synthesizes research reports, exposed via MCP for integration with AI assistants like Claude Desktop or through a Gradio web interface.

Setup Requirements

  • ⚠️Requires Python 3.10+.
  • ⚠️Requires `playwright install chromium` to be run for the scraping engine.
  • ⚠️Requires a `.env` file with `ELEVENLABS_API_KEY` (for premium audio) and at least one of `GEMINI_API_KEY` or `OPENAI_API_KEY` (for AI synthesis/enhancements) for full functionality.
Verified Safe
The server uses `os.environ.get()` for API keys, preventing hardcoding. Input is handled via the FastMCP framework, which is built on FastAPI and is generally secure. Client-side HTML rendering (e.g., in UI or Claude Desktop) could present XSS risks if malicious content is scraped, though the HTML renderer attempts basic sanitization and rendering occurs in sandboxed environments (iframes). Dependency management for external tools like `yt-dlp` and `waybackpy` should be maintained to mitigate potential vulnerabilities.
Updated: 2025-12-10 · GitHub
Medium Cost · Sec 9

Provides a Model Context Protocol (MCP) server for AI assistants to access personal information, blog posts, and GitHub activity, enabling structured interaction and information retrieval.

Setup Requirements

  • ⚠️Requires a Cloudflare account for deployment (Cloudflare Workers, D1 Database, Analytics Engine bindings).
  • ⚠️Development and testing heavily rely on the Bun runtime and package manager.
  • ⚠️As an MCP server, it requires a compatible AI assistant (e.g., Claude, Cloudflare AI Playground) or a custom MCP client to be fully utilized.
Verified Safe
The server demonstrates strong security practices: explicit URL whitelisting and path-based validation (especially for GitHub URLs) in the `web-fetch` tool, robust input validation using Zod schemas, and comprehensive rate limiting for interaction tools (`send_message`, `hire_me`). Database operations are handled via Drizzle ORM, mitigating SQL injection risks, and error messages are sanitized to prevent information leakage. Content length checks are in place for fetched web content to prevent memory exhaustion. The `allow_any_domain` flag in `web-fetch` defaults to `false`, enforcing a secure-by-default posture. While `parseDuckDuckGoResults` uses regex, it operates on a trusted source, minimizing risk. The IP address logging as 'unknown' in interaction tools reduces forensic capabilities but doesn't expose sensitive information.
Updated: 2026-01-19 · GitHub
Page 165 of 713