Stop Searching. Start Trusting.
The curated directory of MCP servers, vetted for security, efficiency, and quality.
Tired of the MCP "Marketplace" Chaos?
We built MCPScout.ai to solve the ecosystem's biggest pain points.
No Insecure Dumps
We manually analyze every server for basic security flaws.
Easy Setup
Our gotcha notes warn you about complex setups.
Avoid "Token Hogs"
We estimate token costs so you can build cost-effective agents.
Products, Not Demos
We filter out "Hello World" demos.
Vetted Servers (8,554)
metrc-mcp-server
by samcorl
Provides AI agents with searchable access to METRC cannabis compliance documentation across multiple legal states.
Setup Requirements
- ⚠️Requires external METRC documentation data to be imported (`bin/import`) before the server can function meaningfully. This source data is not included in the repository.
- ⚠️Requires Ruby and Bundler to be installed locally to run the server and its dependencies.
- ⚠️Relies on a local SQLite database (`data/metrc.db`) which needs to be populated via the import script.
Verified Safe
Custom-Mcp-with-Agno
by Ashrokss
Develop a custom Model Context Protocol (MCP) server and client using the Agno agent framework to integrate custom tools like web search into AI agents.
Setup Requirements
- ⚠️Requires `GROQ_API_KEY` (for the AI model) and `SERPER_API_KEY` (for web search), which typically come from paid services or have usage limits.
- ⚠️API keys are configured directly in `config/config.py`. Users must ensure this file is not committed to public repositories to prevent key exposure.
- ⚠️Python 3.8+ and specified dependencies (`agno`, `anyio`, `httpx`, `nest_asyncio`) are required, installable via `requirements.txt`.
Verified Safe
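Rather than hardcoding keys in `config/config.py` (the exposure risk flagged for Custom-Mcp-with-Agno above), a common alternative is loading them from the environment at startup. A minimal sketch; the `load_keys` helper is hypothetical, and only the `GROQ_API_KEY`/`SERPER_API_KEY` names come from the listing:

```python
import os

# Hedged sketch of an environment-based replacement for config/config.py.
# GROQ_API_KEY and SERPER_API_KEY are the names from the listing above;
# failing fast here gives a clear error instead of a failed API call later.
def load_keys() -> dict:
    keys = {}
    for name in ("GROQ_API_KEY", "SERPER_API_KEY"):
        value = os.environ.get(name)
        if not value:
            raise RuntimeError(f"Missing required environment variable: {name}")
        keys[name] = value
    return keys
```

With this pattern the keys never appear in the repository, so there is nothing to accidentally commit.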
claude-code-resources
by Grayhat76
Automates the initial setup of a Claude Code development environment by generating customized configuration files, AI agents, and commands based on user-defined project specifics.
Setup Requirements
- ⚠️Requires Bash environment (Windows users need Git Bash or WSL).
- ⚠️Requires Node.js v18+ (v20+ recommended) for Claude Code CLI installation.
- ⚠️Requires a Claude Pro, Max, Team, or Enterprise subscription, or an API key for the Claude Code CLI (paid service).
Verified Safe
mcp-oci-integration
by luigisaetta
This server provides a framework for developing and integrating Model Context Protocol (MCP) servers in Python with Oracle Cloud Infrastructure (OCI), enabling AI agents to interact with various OCI services and external tools.
Setup Requirements
- ⚠️Requires an Oracle Cloud Infrastructure (OCI) account with Generative AI, IAM, Database, Vault, and Usage API permissions.
- ⚠️Specific tools (Semantic Search, Select AI, OML Predictions) require an Oracle Database (23c/26AI for Vector Store, ADB for Select AI/OML).
- ⚠️External API keys for services like GitHub (GITHUB_TOKEN) and Brave Search (BRAVE_API_KEY) are necessary for respective MCP servers.
Verified Safe
obsidian-vault-mcp
by MarkOnFire
Provides a Model Context Protocol (MCP) server for Claude (Desktop/Code) to read, search, list, and modify notes and attachments in an Obsidian vault, with awareness of PARA methodology and task management features.
Setup Requirements
- ⚠️Requires Python 3.10+
- ⚠️Requires `OBSIDIAN_VAULT_PATH` environment variable set to an existing Obsidian vault (iCloud-synced vaults must be downloaded locally and fully synced)
- ⚠️Requires an MCP-compatible client (e.g., Claude for Desktop/Code) to function as a tool, as it runs via stdio and not as a standalone web server
Verified Safe
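The `OBSIDIAN_VAULT_PATH` requirement above lends itself to a preflight check before wiring the server into an MCP client, since a missing or half-synced vault fails silently otherwise. A sketch; `resolve_vault` is a hypothetical helper, and only the variable name comes from the listing:

```python
import os
from pathlib import Path

# Hedged preflight check for the OBSIDIAN_VAULT_PATH requirement above.
# The variable name comes from the listing; the check itself is illustrative.
def resolve_vault() -> Path:
    raw = os.environ.get("OBSIDIAN_VAULT_PATH")
    if not raw:
        raise RuntimeError("OBSIDIAN_VAULT_PATH is not set")
    vault = Path(raw).expanduser()
    if not vault.is_dir():
        raise RuntimeError(f"Vault path does not exist: {vault}")
    return vault
```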
nexonco-mcp
by 1sustgmboab
An advanced server for accessing and analyzing clinical evidence data to support precision medicine and oncology research, with flexible search options.
Setup Requirements
- ⚠️Requires 'uv' package manager for Python environment management.
- ⚠️Python 3.11 or higher is required.
- ⚠️Docker is recommended for NANDA server setup (Method 1).
- ⚠️Different transport options ('stdio' for Claude, 'sse' for NANDA) need to be considered when running.
Verified Safe
Open-MCP
by noemys-tech
Provides a Model Context Protocol (MCP) interface for interacting with S3-compatible object storage services like AWS S3 or MinIO, enabling LLMs to list buckets, list objects, download objects, and retrieve object metadata.
Setup Requirements
- ⚠️Requires Java 21 or higher and Maven.
- ⚠️Requires access to an S3-compatible object storage server (e.g., MinIO, AWS S3) with valid credentials for tool operations.
- ⚠️**Critical Security Misalignment**: The server's code (v1.0.1) explicitly lacks OAuth 2.1 authentication and allows anonymous sessions, contradicting its own README, rendering it insecure for production without an external authentication layer.
Review Required
obsidian-mcp-macros
by ianyimi
Develop a multi-agent personal knowledge management system within Obsidian, offering semantic search, voice interface, and distributed computing across local devices.
Setup Requirements
- ⚠️Requires Ollama for local LLM inference and embeddings (e.g., Llama 3.1 70B, nomic-embed-text), which demands significant RAM/VRAM (e.g., 40GB+ RAM for the 70B model).
- ⚠️Requires a running Qdrant Vector Database instance (locally or on Unraid NAS) for semantic search functionality.
- ⚠️The full distributed architecture relies on Tailscale VPN for secure inter-device communication and mobile access, which needs to be configured by the user.
Review Required
claud-grants
by seanivore
Facilitates grant application management and development roadmap tracking for a decentralized AI developer knowledge sharing protocol, while outlining its technical implementation.
Setup Requirements
- ⚠️Requires Solana development environment (Rust, Solana CLI)
- ⚠️Requires Web3 client-side libraries for interaction
Verified Safe
mcp_server_wazuh_2025
by Gitmy3
Integrates Wazuh SIEM data with AI assistants (like Claude) using the Model Context Protocol (MCP) for natural language security queries and analysis.
Setup Requirements
- ⚠️Requires a running Wazuh server (v4.12 recommended) with its API and Indexer (OpenSearch/Elasticsearch) accessible.
- ⚠️Requires an OpenAI API Key for LLM query functionality, which is a paid service.
- ⚠️Disables SSL certificate verification (`WAZUH_VERIFY_SSL=false`) by default for Wazuh connections, posing a significant security risk for production environments.
Review Required
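A safer way to handle a flag like `WAZUH_VERIFY_SSL` is to default verification on and require an explicit opt-out, inverting the risky default noted above. This sketch is illustrative and not the server's actual code; only the variable name comes from the listing:

```python
import os

# Hedged sketch of parsing a WAZUH_VERIFY_SSL-style flag. The listing notes
# the server defaults this to false; a safer default (True) is shown here,
# so certificate verification must be disabled explicitly for dev setups.
def ssl_verification_enabled(default: bool = True) -> bool:
    raw = os.environ.get("WAZUH_VERIFY_SSL")
    if raw is None:
        return default
    return raw.strip().lower() in ("1", "true", "yes", "on")
```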
weave-mcp
by maximilien
Provides a Model Context Protocol (MCP) server for managing and interacting with various vector databases, offering AI-powered tools for schema and chunking suggestions.
Setup Requirements
- ⚠️Requires OpenAI API Key (Paid) for AI features (suggest_schema, suggest_chunking).
- ⚠️Docker is required for local Weaviate setup and E2E tests.
- ⚠️The `weave` CLI binary must be installed and in PATH for AI-powered suggestion tools to function.
- ⚠️Node.js 22.7.5+ and npm are required for the optional MCP Inspector debugging tool.
Review Required
Cernji-Agents
by TerraCo89
Centralizes error analysis and automates issue creation by processing alerts from observability systems (like Kibana/Elasticsearch) and triggering workflows (e.g., in N8N or Linear). It acts as a Model Context Protocol (MCP) server exposing tools for these operations.
Setup Requirements
- ⚠️Requires `uv` (Python package manager) to run.
- ⚠️Requires an active ELK (Elasticsearch, Kibana) stack for log aggregation and alerting.
- ⚠️Requires Docker for running the ELK stack and potentially N8N.
- ⚠️Requires N8N for workflow orchestration if `trigger_n8n_workflow` tool is used.
- ⚠️Requires `FASTMCP_API_TOKEN` environment variable for MCP server authentication.
- ⚠️Requires `LINEAR_API_KEY` and `LINEAR_ORG_ID` environment variables for Linear API integration.
- ⚠️Requires `ELASTICSEARCH_URL` environment variable to connect to Elasticsearch.
- ⚠️Requires `ANTHROPIC_API_KEY` (or `OPENAI_API_KEY`) and `MODEL_NAME` environment variables for LLM-driven analysis in the associated `trigger_error_analysis.py` script.
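With this many required environment variables, a preflight check that reports everything missing at once saves repeated failed launches. A hypothetical sketch using names from the list above; the `missing_vars` helper and the choice of which variables to treat as hard requirements are assumptions:

```python
import os

# Variable names taken from the Cernji-Agents requirements list above;
# which ones are strictly required (vs. feature-gated) is an assumption.
REQUIRED = ("FASTMCP_API_TOKEN", "ELASTICSEARCH_URL",
            "LINEAR_API_KEY", "LINEAR_ORG_ID")

def missing_vars(env=None) -> list:
    """Return every required variable that is unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED if not env.get(name)]
```

Calling `missing_vars()` before starting the server yields one complete list of gaps instead of one error per launch attempt.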