Stop Searching. Start Trusting.
The curated directory of MCP servers, vetted for security, efficiency, and quality.
Tired of the MCP "Marketplace" Chaos?
We built MCPScout.ai to solve the ecosystem's biggest pain points.
No Insecure Dumps
We manually analyze every server for basic security flaws.
Easy Setup
Our gotcha notes warn you about complex setups.
Avoid "Token Hogs"
We estimate token costs so you can build cost-effective agents.
Products, Not Demos
We filter out "Hello World" demos.
Vetted Servers (8,554)
claude-team-mcp
by guru111244
A multi-agent MCP server designed for AI development teams to facilitate collaboration between various AI models (like GPT, Claude, Gemini) on complex coding and development tasks.
Setup Requirements
- ⚠️Requires API keys for at least one large language model provider (e.g., OpenAI, Anthropic, Gemini, or a custom proxy service), which typically involve usage costs.
- ⚠️Requires Node.js version 18 or higher to run the server.
- ⚠️Designed for integration with specific IDEs (Claude Code, Windsurf, Cursor) via Model Context Protocol (MCP) configuration, necessitating initial setup in the IDE's configuration file.
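The MCP registration step called out above can be sketched as a minimal client config file. Only the `mcpServers` shape is the common client convention; the file name, package name, entry command, and env key below are illustrative guesses, not taken from the project's docs:

```shell
# Write a minimal MCP client config registering claude-team-mcp.
# The npx package name and env var are placeholders; check the project's README
# for the real command and the config path your IDE expects.
cat > claude_desktop_config.json <<'EOF'
{
  "mcpServers": {
    "claude-team": {
      "command": "npx",
      "args": ["-y", "claude-team-mcp"],
      "env": { "OPENAI_API_KEY": "sk-..." }
    }
  }
}
EOF
echo "wrote claude_desktop_config.json"
```

Claude Code, Windsurf, and Cursor each read this shape from a different location, so the path above would need adjusting per IDE.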
Verified Safe
thebrain-mcp-server
by jqlts1
Integrates TheBrain knowledge graph with AI assistants by providing a Model Context Protocol (MCP) server, a RESTful API, and a Spaced Repetition System (SRS) for efficient knowledge recall.
Setup Requirements
- ⚠️Requires 'THEBRAIN_API_KEY' and 'THEBRAIN_BRAIN_ID' from a paid TheBrain account.
- ⚠️Python 3.10 is specified for local development setup (via Conda).
- ⚠️Default weak password '123456' is active for API access if 'WEB_PASSWORD' environment variable is not explicitly configured, making it unsafe by default.
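One way to close that gap before launch, as a sketch assuming a POSIX shell with `openssl` available (the `WEB_PASSWORD` variable name is from the listing; the server launch command itself is omitted):

```shell
# Override the insecure default ('123456') with a random value before
# starting the server; 'openssl rand -hex 16' yields 32 hex characters.
export WEB_PASSWORD="$(openssl rand -hex 16)"
echo "WEB_PASSWORD length: ${#WEB_PASSWORD}"
```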
Review Required
vite-mcp
by broisnischal
Provides a Model Context Protocol (MCP) server within a Vite development environment, enabling AI agents and other MCP clients to interact with and observe the browser's state and APIs in real-time.
Setup Requirements
- ⚠️Requires Vite dev server to be running (operates only in 'development' mode).
- ⚠️Requires a browser page to be open at the Vite dev server URL for the browser bridge and adapters to function.
- ⚠️For dynamically generated HTML (e.g., React Router, Remix), `import "virtual:mcp";` must be manually added at the very top of the app's entry file, along with `tsconfig.json` updates for TypeScript support.
- ⚠️The `zod` library is a mandatory peer dependency.
Verified Safe
memory-mcp-server
by Sinhan88
Provides long-term memory and context storage/retrieval for Large Language Models (LLMs) via an API, adhering to the Model Context Protocol (MCP).
Setup Requirements
- ⚠️Requires Python and pip to be installed.
Review Required
mcp-evernote
by verygoodplugins
Seamless integration with Evernote for note management, organization, and knowledge capture via Model Context Protocol.
Setup Requirements
- ⚠️Requires Evernote API Consumer Key and Consumer Secret, which must be obtained from Evernote Developers (dev.evernote.com).
- ⚠️For Claude Desktop users, a local server is run on port 3000 (default) for the OAuth callback, which must be available.
- ⚠️Requires Node.js version 18.0.0 or higher.
Verified Safe
ckan-mcp-server
by ondata
Enables AI agents to interact with CKAN-based open data portals for searching, exploring, and querying datasets, organizations, and tabular data.
Setup Requirements
- ⚠️Requires Node.js v18.0.0 or higher.
- ⚠️Requires an MCP-compatible client (e.g., Claude Desktop, HTTP client) to interact with it.
- ⚠️To run as an HTTP server, the `TRANSPORT=http` environment variable must be set (default is `stdio`).
- ⚠️Deploying to Cloudflare Workers requires the `wrangler` CLI and a Cloudflare account.
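The transport toggle described above is just an environment variable; a minimal sketch (the server start command itself is left out):

```shell
# Select the HTTP transport for ckan-mcp-server; unset, it defaults to stdio.
export TRANSPORT=http
echo "transport: ${TRANSPORT:-stdio}"
```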
Verified Safe
mcp-doc-generator
by lukaszzychal
Generates technical documentation and various diagrams (architecture, UML, sequence, flowchart, Gantt, dependency, cloud) from code or natural language prompts, supporting Polish, and exports to PDF/DOCX. Includes optional AI image generation.
Setup Requirements
- ⚠️Requires Docker and Docker Compose for full functionality (recommended setup).
- ⚠️An OpenAI API key (`OPENAI_API_KEY` environment variable) is required for the AI image generation tools (`generate_image_openai`, `generate_icon_openai`, `generate_illustration_openai`). These tools are billed per image (e.g., $0.04 per standard 1024x1024 image), not per token.
- ⚠️External network access is used by default for Mermaid (via mermaid.ink API) and OpenAI tools. Local Mermaid CLI execution is a fallback option.
Verified Safe
scalekit-mcp-server
by scalekit-inc
This server enables AI agents to interact with Scalekit's identity platform through the Model Context Protocol (MCP) for natural language identity management.
Setup Requirements
- ⚠️Requires a Scalekit account and configured API credentials (SK_CLIENT_ID, SK_CLIENT_SECRET, etc.) to interact with the Scalekit platform.
- ⚠️Requires Node.js version 18 or greater, as indicated by various dependency `engines` fields in `package-lock.json` and ES module usage.
Verified Safe
Ghost-MCP-Server
by jgardner04
Manages a Ghost CMS instance programmatically by exposing its Admin API as an MCP Server, allowing AI agents or other systems to create, update, delete, and retrieve content (posts, pages, tags, members, newsletters, tiers) and upload images.
Setup Requirements
- ⚠️Requires `GHOST_ADMIN_API_URL` and `GHOST_ADMIN_API_KEY` environment variables.
- ⚠️The image upload tool (`ghost_upload_image`) can only download images from a whitelist of domains (imgur.com, github.com, unsplash.com, cloudinary.com, amazonaws.com) for security reasons. Other image sources will be blocked.
- ⚠️Large list operations (e.g., `get_posts` with `limit=100`) can generate very large JSON outputs, potentially incurring high token costs for AI models.
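A minimal sketch of the environment the first bullet requires; both values below are placeholders, and a real Admin API key (an `id:secret` pair from the Ghost Admin integrations screen) must be substituted:

```shell
# Both variables are mandatory; the values below are placeholders only.
export GHOST_ADMIN_API_URL="https://example-blog.ghost.io"
export GHOST_ADMIN_API_KEY="keyid:keysecret"   # Ghost Admin API id:secret pair
echo "Ghost env configured"
```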
Verified Safe
finance_mcp
by ryar001
Provides structured financial statements (Income Statement, Balance Sheet, Cash Flow) from public companies for consumption by LLMs via an MCP server.
Setup Requirements
- ⚠️Relies on `yfinance` for data, which scrapes Yahoo Finance; respect Yahoo Finance's terms of service to avoid potential IP bans or service interruptions.
- ⚠️Requires `uv` package manager (or `uvx`) for the recommended quick installation and execution method.
- ⚠️The documentation mentions a `GEMINI_API_KEY`, but the current implementation retrieves data directly via `yfinance` and incurs no LLM token costs for data retrieval; this may change in future iterations.
Verified Safe
MemCP
by ardaaltinors
MemCP provides a memory management system for AI assistants, enabling persistent context, knowledge graphs, and user profile synthesis across conversations and various AI platforms.
Setup Requirements
- ⚠️Requires paid API keys for OpenAI or Google Gemini (for embeddings and LLM inference).
- ⚠️Requires Docker and Docker Compose for the full infrastructure setup (PostgreSQL, Qdrant, RabbitMQ, Redis).
- ⚠️Requires Python 3.11+ and the 'uv' package manager for local development setup.
- ⚠️Multiple external services (PostgreSQL, Qdrant, RabbitMQ, Redis) must be running and accessible.
Review Required
refrag
by DIMANANDEZ
A Python library for Retrieval Augmented Generation (RAG) that uses micro-chunking, fast direct embedding, and query-time heuristic compression to reduce context size and improve retrieval efficiency, with optional LLM-based reranking for precision.
Setup Requirements
- ⚠️Requires OpenAI or Anthropic API Key (Paid, only if using the optional REFRAGReranker component)
- ⚠️Requires `sentence-transformers` models (downloaded on first use, can be large depending on model)
- ⚠️The official documentation (README, How-It-Works) describes a system that uses LLMs to generate representations during indexing, but the source code implements fast direct encoding ('no LLM during indexing') and heuristic compression. The core functionality is therefore LLM-free, contradicting the documentation's description of its central innovation.