Stop Searching. Start Trusting.

The curated directory of MCP servers, vetted for security, efficiency, and quality.

Tired of the MCP "Marketplace" Chaos?

We built MCPScout.ai to solve the ecosystem's biggest pain points.

No Insecure Dumps

We manually analyze every server for basic security flaws.

Easy Setup

Our gotcha notes warn you about complex setups.

Avoid "Token Hogs"

We estimate each server's token cost so you can build cost-effective agents.

Products, Not Demos

We filter out "Hello World" demos.

Vetted Servers (7,756)

42 · 27 · Medium Cost

This server enables Large Language Models (LLMs) to interact with Prometheus instances via its API, facilitating tasks like generating PromQL queries, analyzing metrics, and reviewing Prometheus health and configuration.

Setup Requirements

  • ⚠️ Requires a running Prometheus (or compatible backend like Thanos) instance to connect to.
  • ⚠️ Requires an LLM system/framework that supports the Model Context Protocol (MCP) for tool interaction (e.g., Ollama, Gemini-CLI with specific configurations).
  • ⚠️ Potentially destructive TSDB admin tools (e.g., delete_series) are disabled by default and require an explicit `--dangerous.enable-tsdb-admin-tools` flag to enable.
Verified Safe
The server implements Prometheus HTTP client and web configuration options, supporting TLS and authentication. It explicitly flags and warns about `--dangerous.enable-tsdb-admin-tools`, which enables potentially destructive TSDB admin API tools (e.g., `delete_series`). Users must acknowledge this risk when enabling the flag. There are no obvious signs of 'eval' or hardcoded secrets in the core application logic; configuration examples use placeholders or reference Kubernetes secrets. The overall design prioritizes transparency for security-sensitive operations.
Updated: 2025-12-11 · GitHub
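For orientation, here is a minimal Python sketch of the kind of Prometheus HTTP API call a server like this wraps; the instance URL and PromQL query are illustrative assumptions, not values taken from the project.

```python
import requests

# Query the Prometheus HTTP API directly -- the same endpoint an MCP tool
# would hit when an LLM asks for metric data.
PROMETHEUS_URL = "http://localhost:9090"  # assumption: a local Prometheus instance

resp = requests.get(
    f"{PROMETHEUS_URL}/api/v1/query",
    params={"query": 'up{job="prometheus"}'},  # illustrative PromQL query
    timeout=10,
)
resp.raise_for_status()
for result in resp.json()["data"]["result"]:
    print(result["metric"], result["value"])
```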
42 · 27 · Medium Cost

midi-mcp-server
by tubone24
Sec 9

An MCP server that enables AI models to generate MIDI files from text-based music data, allowing programmatic creation of musical compositions.

Setup Requirements

  • ⚠️ Requires Node.js runtime to build and execute.
  • ⚠️ Requires an MCP client (e.g., Cline) for interaction.
  • ⚠️ Configuration requires specifying the absolute path to the 'build/index.js' file within the MCP client.
Verified Safe
The server runs locally and communicates via stdio, significantly limiting network-based risks. Input parameters, particularly `output_path`, could pose local file system risks if not carefully handled internally, but the sandbox environment (local execution, no network exposure) mitigates the severity. No 'eval' or obfuscation is apparent.
Updated: 2025-11-17 · GitHub
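The server itself runs on Node.js; as a rough illustration of what programmatic MIDI creation looks like, here is a short Python sketch using the `mido` library (the notes and filename are arbitrary, and this is not the server's code).

```python
from mido import Message, MidiFile, MidiTrack

# Build a one-track MIDI file with a short C-major arpeggio.
mid = MidiFile()
track = MidiTrack()
mid.tracks.append(track)

track.append(Message("program_change", program=0, time=0))  # acoustic grand piano
for note in (60, 64, 67, 72):  # C4, E4, G4, C5
    track.append(Message("note_on", note=note, velocity=64, time=0))
    track.append(Message("note_off", note=note, velocity=64, time=480))

mid.save("c_major_arpeggio.mid")  # arbitrary output filename
```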
42 · 19 · High Cost

flowllm
by FlowLLM-AI
Sec 3

FlowLLM is a configuration-driven framework for building LLM-powered applications, automatically generating HTTP, MCP, and CMD services from Python operations and YAML configurations.

Setup Requirements

  • ⚠️ Requires LLM and Embedding API keys (typically paid services such as OpenAI-compatible models).
  • ⚠️ External database/vector store services (e.g., Qdrant, Elasticsearch, PostgreSQL) are required for persistence and advanced features.
  • ⚠️ Python 3.10+ is required (per the current README).
Review Required
The framework uses `eval()`, `exec()`, and direct `subprocess.run()` for dynamic flow execution, code execution (ExecuteCodeOp), and shell commands (ShellOp). If deployed as a public service without strict input sanitization, strong authentication, and authorization, these features pose severe remote code execution and system compromise risks. Path traversal vulnerabilities are also possible when loading external files. While these capabilities are inherent to agentic frameworks, exposing them without robust built-in safeguards makes the system unsafe for general public deployment unless it is extensively hardened.
Updated: 2025-12-09 · GitHub
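To make the risk class concrete, here is a generic Python sketch (not flowllm code) of why `eval()` on untrusted input amounts to remote code execution, and one common containment step: accepting literal values only.

```python
import ast

# eval() on attacker-controlled text runs arbitrary Python, e.g.:
#   eval("__import__('os').system('cat /etc/passwd')")
#
# A narrow alternative for configuration-style values is ast.literal_eval,
# which parses literals (numbers, strings, lists, dicts) but never calls code.
def parse_untrusted_expression(text: str):
    try:
        return ast.literal_eval(text)
    except (ValueError, SyntaxError) as exc:
        raise ValueError("expression rejected: only literal values are allowed") from exc

print(parse_untrusted_expression("[1, 2, 3]"))      # ok -> [1, 2, 3]
# parse_untrusted_expression("__import__('os')")    # raises ValueError
```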
42 · 1 · Medium Cost

google-drive-mcp
by domdomegg
Sec 9

Enables AI systems to interact with Google Drive for comprehensive file and folder management, including listing, searching, uploading, downloading, and managing comments and permissions.

Setup Requirements

  • ⚠️ Requires manual creation of Google OAuth credentials (Client ID, Client Secret) and enabling the Google Drive API in the Google Cloud Console.
  • ⚠️ A specific redirect URI (`http://localhost:3000/callback`) must be configured in Google OAuth credentials for the HTTP transport mode.
  • ⚠️ Requires Node.js and npm installed to run locally.
Verified Safe
The server acts as a stateless OAuth proxy, meaning it does not store user tokens, thereby minimizing data breach risks. It employs `zod` for robust input validation on all tool parameters. Access tokens are validated against Google's `tokeninfo` endpoint before processing requests, ensuring that invalid or expired tokens are rejected early, which prompts clients to refresh their tokens. Input parameters for Google Drive API calls are strongly typed, and no use of `eval` or direct command execution with user-provided input was observed in the analyzed source code.
Updated: 2025-12-13 · GitHub
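The project is TypeScript; as a language-neutral illustration of the token-validation pattern described above, here is a Python sketch that checks an access token against Google's `tokeninfo` endpoint (the environment variable name is an assumption).

```python
import os
import requests

def validate_access_token(access_token: str) -> dict:
    # Google's tokeninfo endpoint returns 200 with token metadata for valid
    # tokens and an error status for invalid or expired ones.
    resp = requests.get(
        "https://oauth2.googleapis.com/tokeninfo",
        params={"access_token": access_token},
        timeout=10,
    )
    if resp.status_code != 200:
        # Reject early so the client knows to refresh its token.
        raise PermissionError("access token rejected by Google tokeninfo")
    return resp.json()

info = validate_access_token(os.environ["GOOGLE_ACCESS_TOKEN"])  # assumed env var
print(info.get("scope"), info.get("expires_in"))
```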
42 · 1 · Medium Cost

mcp-stata
by tmonk
Sec 3

Connects AI assistants to a local Stata installation for executing commands, inspecting data, exporting graphics, and programmatically verifying results.

Setup Requirements

  • ⚠️ Requires a licensed Stata 17+ installation.
  • ⚠️ Python 3.11+ is required.
  • ⚠️ Manual configuration of the `STATA_PATH` environment variable may be needed if Stata is not auto-discovered.
Review Required
The `run_command` tool directly executes arbitrary Stata code provided by the LLM. Stata itself has capabilities to execute shell commands (e.g., `shell rm -rf /` or `! rm -rf /`), which means an uncontrolled LLM could potentially lead to arbitrary operating system command execution on the host machine. The `run_do_file` and `load_data` functions also introduce risk by executing external scripts or loading data from potentially untrusted paths/URLs, but the direct command injection via `run_command` is the most critical concern. Without strong LLM guardrails or execution within an isolated sandbox (like a container with restricted privileges), running this server is risky.
Updated: 2025-12-14 · GitHub
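As a concrete sketch of the kind of guardrail the analysis calls for, the Python snippet below rejects Stata input that reaches for the OS shell. It is not part of mcp-stata, and a deny-list like this is illustrative only; an isolated sandbox remains the stronger control.

```python
import re

# Stata can shell out via `shell ...`, `! ...`, or `winexec ...`.
SHELL_ESCAPE = re.compile(r"^\s*(shell\b|!|winexec\b)", re.IGNORECASE | re.MULTILINE)

def check_stata_command(code: str) -> str:
    """Raise if the submitted Stata code appears to invoke the OS shell."""
    if SHELL_ESCAPE.search(code):
        raise ValueError("refusing Stata code that invokes the operating system shell")
    return code

check_stata_command("summarize price mpg")    # passes through unchanged
# check_stata_command("shell rm -rf /")       # raises ValueError
```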
42 · 1 · Medium Cost

This repository provides instructions for connecting Figma designs to the Gemini CLI to enable AI-powered code generation from design contexts.

Setup Requirements

  • ⚠️ Requires a Figma Personal Access Token (PAT).
  • ⚠️ Requires the Gemini CLI to be installed globally.
  • ⚠️ Figma design files must be publicly shared ('Anyone with the link can view').
Verified Safe
The provided source code consists solely of README files, offering no server-side code to audit for common vulnerabilities like 'eval' or obfuscation. The security assessment is based on the instructions for integrating with the Figma MCP server via the Gemini CLI, which uses standard bearer token authentication. Users must securely manage their Figma Personal Access Token.
Updated: 2025-11-27 · GitHub
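To show what a Figma Personal Access Token grants, here is a minimal Python sketch calling the Figma REST API directly; the file key and environment variable name are illustrative assumptions.

```python
import os
import requests

FIGMA_TOKEN = os.environ["FIGMA_PAT"]  # assumption: PAT stored in this env var
FILE_KEY = "abc123XYZ"                 # illustrative Figma file key

# The PAT is sent in the X-Figma-Token header and grants read access to any
# file its owner can view, so it should be stored and scoped carefully.
resp = requests.get(
    f"https://api.figma.com/v1/files/{FILE_KEY}",
    headers={"X-Figma-Token": FIGMA_TOKEN},
    timeout=30,
)
resp.raise_for_status()
doc = resp.json()
print(doc["name"], doc["lastModified"])
```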
42 · 1 · Medium Cost

mcp-beget
by yasg1988
Sec 9

Manages Beget hosting services (sites, domains, databases, FTP, Cron, DNS, backups, mail) via Claude Code.

Setup Requirements

  • ⚠️ Requires Beget hosting account login and password.
  • ⚠️ Requires Python 3.10 or higher.
  • ⚠️ Requires configuration within Claude Code's `~/.claude/settings.json` file.
Verified Safe
Credentials (BEGET_LOGIN, BEGET_PASSWORD) are loaded from environment variables, which is a good practice. The server makes standard HTTPS requests to the official Beget API endpoint. No 'eval', 'exec', or direct arbitrary code execution from user input is present in the provided source code. Security primarily depends on the user's secure handling of environment variables and the Beget API's own security measures.
Updated: 2025-11-28 · GitHub
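A minimal Python sketch of the credential-handling pattern the analysis describes, with the login and password read from environment variables; the endpoint and method name below are assumptions for illustration, not verified Beget API URLs.

```python
import os
import requests

BEGET_LOGIN = os.environ["BEGET_LOGIN"]        # never hardcode credentials
BEGET_PASSWORD = os.environ["BEGET_PASSWORD"]

resp = requests.get(
    "https://api.beget.com/api/user/getAccountInfo",  # assumed endpoint/method
    params={
        "login": BEGET_LOGIN,
        "passwd": BEGET_PASSWORD,
        "output_format": "json",
    },
    timeout=15,
)
resp.raise_for_status()
print(resp.json())
```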
42 · 1 · Medium Cost
Sec 6

Manages Docker containers and compose projects in a homelab environment via an MCP client like Claude.

Setup Requirements

  • ⚠️ Requires Docker to be installed and running on the host system.
  • ⚠️ Requires access to the Docker socket (`/var/run/docker.sock`), which often necessitates elevated permissions (e.g., `root` or host `docker` group membership) and can be a common permission-related setup hurdle.
  • ⚠️ Python 3.10+ is required if not running via Docker.
Verified Safe
The server requires full access to the Docker daemon via `/var/run/docker.sock`, granting complete control over the host's Docker environment. The Docker Compose setup runs as `root` to facilitate this. The remote access mode (HTTP/SSE) operates without authentication by default, requiring careful exposure only on trusted networks or behind a reverse proxy with authentication for internet access. The README explicitly warns against insecure Docker socket permission adjustments (e.g., `chmod 666`). Users must be aware of and comfortable with the high privileges granted.
Updated: 2025-11-17 · GitHub
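To make the privilege level concrete, here is a generic Python sketch (not code from this server) using the official Docker SDK; any process that can reach `/var/run/docker.sock` can enumerate and control every container on the host.

```python
import docker  # official Docker SDK for Python (pip install docker)

client = docker.from_env()  # connects via the local Docker socket

# Listing containers is the benign end of what socket access allows.
for container in client.containers.list(all=True):
    print(container.name, container.status)

# The same handle can stop or remove containers, start privileged ones, and
# mount host paths -- which is why the README's socket-permission warnings matter.
# client.containers.get("some-container").stop()
```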
42 · 15 · Medium Cost
Sec 9

The PinMeTo Location MCP Server integrates the PinMeTo platform with AI agents for natural language interaction with location data and business insights from sources like Google, Facebook, and Apple.

Setup Requirements

  • ⚠️ Requires Node.js v22+ and npm/npx.
  • ⚠️ Requires PinMeTo API credentials (Account ID, App ID, App Secret) to be configured as environment variables or via the Claude Desktop installer.
  • ⚠️ Manual installation with Claude Desktop requires absolute paths for Node.js and the project directory in `claude_desktop_config.json`.
Verified Safe
The server uses environment variables for API credentials (PINMETO_ACCOUNT_ID, PINMETO_APP_ID, PINMETO_APP_SECRET), which is good security practice, and the app secret is explicitly marked as sensitive in the manifest. Requests to the PinMeTo API are authenticated with a bearer token obtained via the client credentials flow and cached for a limited time. Input validation for tool arguments is performed with Zod schemas. No obvious `eval` calls, direct command injection, or other critically dangerous patterns were found, and the use of `axios` for HTTP requests is standard. Failed requests are logged to `console.error`, which could expose sensitive information if production logs are not handled carefully. The custom User-Agent on axios requests is a good practice for API interaction.
Updated: 2025-12-05 · GitHub
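As a hedged illustration of the cached client-credentials flow the analysis mentions, here is a generic Python sketch; the token URL is an assumption, and the credentials come from the documented environment variables.

```python
import os
import time
import requests

TOKEN_URL = "https://api.pinmeto.com/oauth/token"  # assumed URL, for illustration only
_cache = {"token": None, "expires_at": 0.0}

def get_token() -> str:
    # Reuse the cached token until shortly before it expires.
    if _cache["token"] and time.time() < _cache["expires_at"]:
        return _cache["token"]
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials"},
        auth=(os.environ["PINMETO_APP_ID"], os.environ["PINMETO_APP_SECRET"]),
        timeout=15,
    )
    resp.raise_for_status()
    payload = resp.json()
    _cache["token"] = payload["access_token"]
    _cache["expires_at"] = time.time() + payload.get("expires_in", 3600) - 60
    return _cache["token"]
```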
42 · 1 · Medium Cost

SpringAIAlibaba
by Aiden12581
Sec 9

This repository provides a collection of Spring AI examples demonstrating various integrations with Alibaba Cloud's DashScope platform, covering chat, streaming, prompt engineering, structured output, memory, text-to-image, text-to-speech, embeddings, RAG, and tool calling.

Setup Requirements

  • ⚠️ Requires an Alibaba Cloud DashScope API Key (a paid service) configured as an environment variable `aliQwen-api` or a Spring property `spring.ai.dashscope.api-key`.
  • ⚠️ Requires a running Redis instance for chat memory (SAA-08Persistent) and vector store (SAA-11Embed2vector, SAA-12RAG4AiOps).
  • ⚠️ The SAA-02Ollama module requires a local Ollama instance running with the appropriate model.
  • ⚠️ The SAA-18TodayMenu module requires a Bailian platform App ID configured via `spring.ai.dashscope.agent.options.app-id`.
Verified Safe
The code uses environment variables or Spring's @Value annotation for API keys and other sensitive configurations (e.g., aliQwen-api, Redis host/port, Bailian App ID), which is good practice and avoids hardcoded secrets. No 'eval' or other direct code injection vulnerabilities were found. Standard web application security considerations apply.
Updated: 2025-11-28 · GitHub
42 · 31 · Medium Cost
Sec 9

Interacts with the LinkedIn API to search profiles, retrieve details, search jobs, and send messages, enabling LLMs to access professional network data.

Setup Requirements

  • ⚠️ Requires a LinkedIn Developer Account and creating an application to obtain Client ID and Client Secret.
  • ⚠️ The server currently implements the `client_credentials` OAuth 2.0 grant type. This means many of the intended tools like `send-message`, `get-my-profile`, `get-network-stats`, and `get-connections` may not function as expected, as they typically require a user-authorized token obtained via an 'Authorization Code Flow' with explicit user consent.
  • ⚠️ Node.js 16+ is required.
Verified Safe
Uses environment variables for credentials (LINKEDIN_CLIENT_ID, LINKEDIN_CLIENT_SECRET) and implements OAuth 2.0. The current authentication flow uses the `client_credentials` grant type, which is primarily for server-to-server application access and may not support all user-specific LinkedIn API endpoints (e.g., sending messages, retrieving the authenticated user's profile or network statistics) that typically require an 'Authorization Code Flow' with user consent. Zod is used for input validation on MCP tool parameters. No 'eval' or malicious patterns found. API endpoints are hardcoded, but these are official LinkedIn URLs.
Updated: 2025-12-15 · GitHub
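To clarify the grant-type caveat above, here is a minimal Python sketch of a standard OAuth 2.0 client-credentials token request against LinkedIn's token endpoint; whether an application is approved for this flow depends on its LinkedIn developer program access, and the resulting token does not cover member-specific endpoints.

```python
import os
import requests

# Two-legged (client_credentials) flow: no user consent, application-level
# access only. Member-specific tools need the Authorization Code flow instead.
resp = requests.post(
    "https://www.linkedin.com/oauth/v2/accessToken",
    data={
        "grant_type": "client_credentials",
        "client_id": os.environ["LINKEDIN_CLIENT_ID"],
        "client_secret": os.environ["LINKEDIN_CLIENT_SECRET"],
    },
    timeout=15,
)
resp.raise_for_status()
token = resp.json()["access_token"]
print("token prefix:", token[:12], "...")
```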
42 · 34 · Medium Cost
Sec 3

Enables AI assistants to seamlessly access and interact with Autodesk ShotGrid (Flow Production Tracking) data through the Model Context Protocol (MCP).

Setup Requirements

  • ⚠️ Requires Autodesk ShotGrid (Flow Production Tracking) access and a valid API script name/key.
  • ⚠️ The `download_file` function's insecure SSL/TLS fallbacks (disabling certificate verification) pose a significant security risk and could mask underlying network configuration issues.
  • ⚠️ Python 3.8+ and `uv` or `pip` are required for installation.
Review Required
The `download_file` utility function includes fallback mechanisms that explicitly disable SSL/TLS certificate verification (`requests` with `verify=False` and `urllib` with `ssl.CERT_NONE`) and attempts to force older, less secure TLS protocols (TLSv1). This is a critical security risk as it makes the server vulnerable to Man-in-the-Middle attacks when downloading files, even if intended as a fallback for robustness. API keys are handled via environment variables or HTTP headers, which is a standard practice, but the insecure download mechanism is a major concern. Deployment documentation correctly advises using HTTPS in production.
Updated: 2025-12-13 · GitHub
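To illustrate the risk flagged above, here is a generic Python sketch (not the project's code) contrasting the insecure fallback with a verified download.

```python
import requests

url = "https://example.com/attachment.bin"  # illustrative URL

# Risky fallback pattern: verify=False silences certificate errors but lets
# any on-path attacker impersonate the download host (man-in-the-middle).
# resp = requests.get(url, verify=False)

# Safer: keep verification on; if the default CA store cannot validate the
# chain, point `verify` at a trusted CA bundle instead of disabling it.
resp = requests.get(url, verify=True, timeout=30)
resp.raise_for_status()
with open("attachment.bin", "wb") as fh:
    fh.write(resp.content)
```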
Page 77 of 647