rag-server-mcp
by SylphxAI
Overview
Provides Retrieval Augmented Generation (RAG) capabilities to Model Context Protocol (MCP) clients by indexing project documents and retrieving relevant content for LLMs.
Installation
docker-compose up -d --build && docker exec ollama ollama pull nomic-embed-text
Environment Variables
- CHROMA_URL
- OLLAMA_HOST
- INDEX_PROJECT_ON_STARTUP
- CHUNK_SIZE
- CHUNK_OVERLAP
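To illustrate how chunking parameters like CHUNK_SIZE and CHUNK_OVERLAP typically interact, here is a minimal sketch of sliding-window text splitting. This is an illustration of the general technique, not this server's actual implementation; the function name and defaults are assumptions.

```python
def chunk_text(text: str, chunk_size: int = 200, chunk_overlap: int = 50) -> list[str]:
    """Split text into fixed-size windows; consecutive windows share
    chunk_overlap characters so no context is lost at chunk boundaries."""
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap  # how far the window advances each time
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # last window already reached the end of the text
    return chunks

# A 500-character document with size 200 / overlap 50 yields three chunks:
# [0:200], [150:350], [300:500]
chunks = chunk_text("x" * 500, chunk_size=200, chunk_overlap=50)
print(len(chunks))  # → 3
```

Larger overlap improves retrieval recall at chunk boundaries at the cost of indexing more (partially redundant) embeddings.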
Security Notes
CRITICAL: This project is explicitly marked as DEPRECATED in its README, which recommends migrating to CodeRAG. Deprecated software is unmaintained and poses significant security risk because vulnerabilities go unpatched.

The server also interacts directly with the file system, indexing and deleting files relative to its current working directory based on input from MCP clients. While this is intended for project RAG, a malicious MCP client could manipulate or delete arbitrary files within the server's CWD.

Additionally, E2E tests indicate integration issues with the ChromaDB and Ollama plugins, which may lead to unexpected behavior.
Similar Servers
mcp-local-rag
Local RAG server for developers enabling private, offline semantic search with keyword boosting on personal or project documents (PDF, DOCX, TXT, MD, HTML).
Little_MCP
A local AI assistant leveraging Retrieval-Augmented Generation (RAG) and multi-tool agents for document Q&A, real-time information, and SQL database interaction.
mcp-memory-server
Provides semantic memory storage for AI assistants, enabling them to store and retrieve decisions, patterns, and context across sessions using vector embeddings.
mcp-rag-server
Provides a local, zero-network Retrieval-Augmented Generation server for any code repository, enabling semantic search and file access through the Model Context Protocol (MCP) for AI clients like GitHub Copilot Agent.