Stop Searching. Start Trusting.
The curated directory of MCP servers, vetted for security, efficiency, and quality.
Tired of the MCP "Marketplace" Chaos?
We built MCPScout.ai to solve the ecosystem's biggest pain points.
No Insecure Dumps
We manually analyze every server for basic security flaws.
Easy Setup
Our gotcha notes flag tricky setup steps before you install.
Avoid "Token Hogs"
We estimate each server's token cost so your agents stay cost-effective.
Products, Not Demos
We filter out "Hello World" demos.
Vetted Servers (8,554)
orla
by dorcha-inc
Orla acts as a runtime for Model Context Protocol (MCP) servers, enabling the execution of lightweight open-source AI agents and command-line tools locally.
Setup Requirements
- ⚠️Requires Go 1.25+ to build from source.
- ⚠️Requires Ollama for local LLM inference (the installer sets it up by default; this step can be skipped if you point to a remote Ollama instance).
- ⚠️Requires Git for installing tools from the official registry.
Verified Safe
kindly-web-search-mcp-server
by Shelpuk-AI-Technology-Consulting
Provides web search with robust, LLM-optimized content retrieval from various sources (StackExchange, GitHub, Wikipedia, arXiv, and general webpages) for AI coding assistants.
Setup Requirements
- ⚠️Requires at least one search provider API key: `SERPER_API_KEY`, `TAVILY_API_KEY`, or `SEARXNG_BASE_URL`.
- ⚠️A Chromium-based browser (Chrome/Chromium/Edge/Brave) must be installed locally for universal HTML content retrieval; `KINDLY_BROWSER_EXECUTABLE_PATH` may be needed.
- ⚠️Python 3.13+ is required (Python 3.14 is supported, but advanced PDF features may be disabled due to `onnxruntime` availability).
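As a sketch of the provider configuration above (all values are placeholders; set only the provider you actually use):

```shell
# Pick one search provider and export its key (placeholder values):
export SERPER_API_KEY="your-serper-key"
# export TAVILY_API_KEY="your-tavily-key"
# export SEARXNG_BASE_URL="http://localhost:8080"

# Point at a locally installed Chromium-based browser if autodetection fails:
export KINDLY_BROWSER_EXECUTABLE_PATH="/usr/bin/chromium"
```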
Verified Safe
opentelemetry-mcp-server
by traceloop
Enables AI assistants to query and analyze OpenTelemetry traces from LLM applications for debugging, performance, and cost optimization.
Setup Requirements
- ⚠️Requires Python 3.11 or higher.
- ⚠️Requires 'pipx' or 'uv' for easy installation and execution.
- ⚠️Traceloop backend requires an API key (BACKEND_API_KEY environment variable or --api-key CLI argument).
- ⚠️Jaeger backend requires the 'service_name' parameter for 'search_traces' and 'search_spans' operations.
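A minimal sketch of the Traceloop-backend setup above; the key value is a placeholder and the exact package/CLI name is an assumption, so the launch line is left commented:

```shell
# API key for the Traceloop backend (placeholder value):
export BACKEND_API_KEY="tl-example-key"

# Typical launch via uv's tool runner; package name assumed:
# uvx opentelemetry-mcp-server --api-key "$BACKEND_API_KEY"
```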
Verified Safe
narsil-mcp
by postrv
AI-powered code analysis and understanding for developers and coding agents, providing semantic search, call graphs, security audits, and architectural insights.
Setup Requirements
- ⚠️Requires 'git' command-line tool to be installed and available in PATH.
- ⚠️Requires API keys (e.g., VOYAGE_API_KEY, OPENAI_API_KEY) for neural embeddings backend; these are typically paid services.
- ⚠️Advanced graph features (CCG, SPARQL) require building with '--features graph' and local persistence, increasing resource usage.
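A sketch of the embeddings and build configuration above (keys are placeholders; the `--features` flag suggests a Cargo build, which is an assumption, so that line stays commented):

```shell
# Embedding-backend keys (placeholders; these providers are typically paid):
export VOYAGE_API_KEY="your-voyage-key"
# export OPENAI_API_KEY="your-openai-key"

# Opt in to the advanced graph features (CCG, SPARQL) at build time:
# cargo build --release --features graph
```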
Verified Safe
line-bot-mcp-server
by line
Integrates AI agents with LINE Messaging API for automated communication and management of LINE Official Accounts.
Setup Requirements
- ⚠️Requires a LINE Official Account with Messaging API enabled and a Channel Access Token (requires LINE platform registration and API setup).
- ⚠️Requires Node.js v20+ (if not using Docker).
- ⚠️The `create_rich_menu` tool depends on Puppeteer, which downloads its own Chromium browser on first use (a sizable download) and may need an explicit executable path in constrained environments.
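A hedged sketch of the token setup above; the environment-variable and package names are assumptions, not confirmed by the listing, so the launch line is commented:

```shell
# Channel access token from the LINE Developers console (variable name assumed):
export CHANNEL_ACCESS_TOKEN="your-channel-access-token"

# Typical launch under Node.js v20+; package name assumed:
# npx @line/line-bot-mcp-server
```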
Verified Safe
seline
by tercumantanumut
A backend API server for managing and executing ComfyUI workflows, capable of dynamically generating API endpoints for workflows, building Docker containers for custom nodes and models, and providing an execution queue. It integrates with the Model Context Protocol (MCP) to expose its capabilities to client applications.
Setup Requirements
- ⚠️Requires Docker to be installed and running, as it dynamically builds and manages ComfyUI Docker containers.
- ⚠️The ComfyUI instances orchestrated by this server typically require an NVIDIA GPU with substantial VRAM (e.g., 12GB+ for models like Flux2 Klein) for effective image/video generation.
- ⚠️The server application itself requires Python 3.10 or higher.
- ⚠️Requires proper network and firewall configuration due to exposed APIs and dynamic Docker operations.
Verified Safe
claude-codex-settings
by fcakyon
A comprehensive toolkit and configuration for developing Claude Code plugins, integrating various external services and APIs, and enhancing AI-assisted coding workflows.
Setup Requirements
- ⚠️Requires several command-line tools (jq, gh, ruff, prettier) and Node.js/npm for specific components, beyond the Claude Code installation itself.
- ⚠️Many MCP integrations necessitate separate API keys or OAuth authentication flows (e.g., az login, gcloud auth) and configuration via environment variables.
- ⚠️The 'paper-search-tools' plugin specifically requires Docker to be installed and running.
- ⚠️A manual post-installation symlink (ln -s CLAUDE.md AGENTS.md) is required for cross-tool compatibility.
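The manual post-installation step above, run from the repository root:

```shell
# Make AGENTS.md resolve to CLAUDE.md so both tool conventions read one file:
ln -s CLAUDE.md AGENTS.md
```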
Verified Safe
aleph
by Hmbown
Aleph is an MCP server that provides LLMs programmatic access to gigabytes of local data without consuming context, implementing the Recursive Language Model (RLM) architecture.
Setup Requirements
- ⚠️Requires Python 3.10+.
- ⚠️Requires the `mcp` Python package, installed via `pip install "aleph-rlm[mcp]"`.
- ⚠️Using the API backend for `sub_query` (or for the main LLM loop) requires API keys (e.g., `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`) and a specified model (`ALEPH_SUB_QUERY_MODEL`), incurring costs.
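A sketch of the setup above; the install command comes from the notes, while the key and model values are placeholders (the model name here is illustrative, not a documented default):

```shell
# Install with the MCP extra (run once):
# pip install "aleph-rlm[mcp]"

# API backend for sub_query; both values are placeholders:
export ANTHROPIC_API_KEY="your-anthropic-key"
export ALEPH_SUB_QUERY_MODEL="claude-haiku"
```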
Verified Safe
dbhub
by bytebase
A database gateway that exposes SQL data sources via a RESTful API and the Model Context Protocol (MCP), enabling structured interaction with multiple database types.
Setup Requirements
- ⚠️Requires database configuration via DSN environment variable, --dsn flag, or a dbhub.toml file. No default database is configured without --demo mode.
- ⚠️SSH tunneling requires --ssh-host and --ssh-user (and either --ssh-password or --ssh-key) CLI arguments or environment variables, necessitating proper SSH setup and key management.
- ⚠️The HTTP transport operates in a stateless mode where the entire MCP server and its tools are re-initialized for every incoming /mcp request. This incurs significant overhead and impacts efficiency, especially for deployments with many configured sources and tools.
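A sketch of the DSN configuration above; the credentials are placeholders and the binary name in the commented line is an assumption:

```shell
# A PostgreSQL DSN via environment variable (placeholder credentials):
export DSN="postgres://app_user:secret@localhost:5432/appdb"

# Or pass it as a flag, optionally with the SSH-tunnel options from the notes:
# dbhub --dsn "$DSN" --ssh-host bastion.example.com --ssh-user deploy --ssh-key ~/.ssh/id_ed25519
```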
Verified Safe
Matryoshka
by yogthos
Processes large documents beyond LLM context windows using a Recursive Language Model (RLM) that executes symbolic commands for iterative document analysis.
Setup Requirements
- ⚠️Requires Node.js and npm/npx to be installed.
- ⚠️Requires a configured LLM provider; supports Ollama (needs local server running) or DeepSeek (requires API key).
- ⚠️If using DeepSeek, requires `DEEPSEEK_API_KEY` to be set in environment variables.
- ⚠️Analyzing code files may require additional `tree-sitter` npm packages to be installed dynamically.
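A sketch of the provider choice above (the key value is a placeholder; the Ollama line is illustrative and left commented):

```shell
# DeepSeek provider needs its key in the environment (placeholder value):
export DEEPSEEK_API_KEY="your-deepseek-key"

# Using Ollama instead? Make sure the local server is running first:
# ollama serve &
```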
Review Required
knowns
by knowns-dev
A CLI-first knowledge layer and task/documentation management tool that provides AI agents with persistent project context.
Setup Requirements
- ⚠️Requires Node.js or Bun runtime environment.
- ⚠️Anthropic Claude CLI is required for full AI agent integration via MCP (Model Context Protocol).
- ⚠️Web UI runs on localhost:6420 by default; ensure port is free or configure a different one.
Verified Safe
marionette_mcp
by leancodepl
Enables AI agents to inspect and interact with running Flutter applications for automated testing and runtime interaction.
Setup Requirements
- ⚠️Your Flutter application must integrate the `marionette_flutter` package and initialize `MarionetteBinding`.
- ⚠️The Flutter application must be running in debug or profile mode to expose the VM service.
- ⚠️The Dart `logging` package (`package:logging`) must be used in the Flutter app for the `get_logs` tool to function.
- ⚠️The VM Service URI typically needs to be manually provided to the AI agent for connection.