qdrant-loader
Verified Safe
by martin-papy
Overview
A Model Context Protocol (MCP) server that provides advanced Retrieval-Augmented Generation (RAG) capabilities to AI development tools by bridging them to a Qdrant knowledge base for intelligent, context-aware search.
Installation
mcp-qdrant-loader
Environment Variables
- QDRANT_URL
- LLM_API_KEY
- QDRANT_COLLECTION_NAME
- LLM_PROVIDER
- LLM_BASE_URL
- LLM_EMBEDDING_MODEL
- LLM_CHAT_MODEL
- OPENAI_API_KEY
- MCP_LOG_LEVEL
- MCP_LOG_FILE
- MCP_DISABLE_CONSOLE_LOGGING
- QDRANT_API_KEY
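As an illustration, the variables above would typically be supplied through an MCP client's server registration. The following is a hypothetical configuration sketch, not taken from the project's documentation: the `command` assumes the `mcp-qdrant-loader` entry point from the Installation section, and all values are placeholders to replace with your own.

```json
{
  "mcpServers": {
    "qdrant-loader": {
      "command": "mcp-qdrant-loader",
      "env": {
        "QDRANT_URL": "http://localhost:6333",
        "QDRANT_API_KEY": "<your-qdrant-api-key>",
        "QDRANT_COLLECTION_NAME": "<your-collection>",
        "LLM_PROVIDER": "openai",
        "LLM_API_KEY": "<your-llm-api-key>",
        "MCP_LOG_LEVEL": "INFO"
      }
    }
  }
}
```

Only `QDRANT_URL` and a collection are strictly needed for a local, unauthenticated Qdrant instance; the API keys matter once you point at hosted services.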
Security Notes
The server follows good security practices for credential management: it reads secrets exclusively from environment variables and redacts sensitive information in logs. By default it uses stdio transport bound to localhost, minimizing network exposure; for HTTP transport, origin validation and CORS middleware are configured. No direct dynamic code execution from user input (eval or similar) was identified, reducing RCE risk. The server depends on external LLM APIs and Qdrant, so those services must be securely configured as well. Input validation is handled via Pydantic schemas for tool arguments; robust sanitization of all user-provided string arguments within each search tool remains a critical consideration, although the server's RAG design inherently limits execution of arbitrary code.
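The log-redaction technique mentioned above can be sketched with the standard library alone. This is a generic illustration of masking credential-like values (such as QDRANT_API_KEY or LLM_API_KEY) before log records reach a handler; the pattern and filter names are invented here and are not the project's actual implementation.

```python
import logging
import re

# Match "api_key=..." or "API-KEY: ..." style pairs; the secret value is group 2.
SECRET_PATTERN = re.compile(r"(api[_-]?key\s*[=:]\s*)(\S+)", re.IGNORECASE)


class RedactingFilter(logging.Filter):
    """Mask anything that looks like an API key in log messages."""

    def filter(self, record: logging.LogRecord) -> bool:
        # Simplistic sketch: rewrites record.msg in place (a real implementation
        # would also handle record.args and structured fields).
        record.msg = SECRET_PATTERN.sub(r"\1***REDACTED***", str(record.msg))
        return True  # never drop the record, only rewrite it


logger = logging.getLogger("redaction-demo")
handler = logging.StreamHandler()
handler.addFilter(RedactingFilter())
logger.addHandler(handler)

logger.warning("connecting with qdrant_api_key=abc123secret")
# the handler emits: connecting with qdrant_api_key=***REDACTED***
```

Attaching the filter to the handler (rather than the logger) ensures records propagated from child loggers are also scrubbed before they are written out.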
Similar Servers
context-portal
Manages structured project context for AI assistants and developer tools, enabling Retrieval Augmented Generation (RAG) and prompt caching within IDEs.
qdrant-mcp-server
This server provides semantic search over a Qdrant vector database, focused primarily on code vectorization for intelligent codebase indexing and semantic code search, with support for general document search as well.
concept-rag
This MCP server provides conceptual search, document analysis, and library exploration capabilities over a knowledge base using LanceDB and LLM-based concept extraction.
the-pensieve
The Pensieve server acts as a RAG-based knowledge management system, allowing users to store, query, and analyze their knowledge using natural language and LLM-powered insights.