mcp-markdown-rag
Verified Safe by UnitVectorY-Labs
Overview
Indexing and semantic search of local Markdown documents using vector embeddings and an MCP server.
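The core idea behind the tool can be sketched in a few lines: embed each Markdown document as a vector, embed the query the same way, and rank documents by cosine similarity. This sketch substitutes a toy bag-of-words hash for the real Ollama model embeddings, and the `embed`/`search` names are illustrative, not the tool's actual API.

```python
import math
import zlib

# Toy stand-in for a model embedding: a hashed bag-of-words vector.
# The real tool requests embeddings from an Ollama model instead.
def embed(text: str, dims: int = 64) -> list[float]:
    vec = [0.0] * dims
    for word in text.lower().split():
        vec[zlib.crc32(word.encode()) % dims] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# "Index" a few Markdown snippets, then rank them against a query.
docs = {
    "notes/ollama.md": "Ollama serves local embedding models over HTTP",
    "notes/recipes.md": "A simple recipe for sourdough bread at home",
}
index = {path: embed(text) for path, text in docs.items()}

def search(query: str) -> str:
    scores = {path: cosine(embed(query), vec) for path, vec in index.items()}
    return max(scores, key=scores.get)
```

A query sharing vocabulary with a document scores highest, which is the behavior the MCP server exposes to clients (with real model embeddings capturing meaning rather than exact word overlap).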
Installation
./rag -mcp
Environment Variables
- RAG_OLLAMA_URL — Ollama API endpoint used for embeddings (defaults to http://localhost:11434/api/embeddings)
- RAG_EMBEDDING_MODEL — name of the embedding model to request from Ollama
- RAG_DB_PATH — path where the local vector database is stored
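A typical invocation sets the variables and starts the server; the endpoint below is the documented default, while the model name and database path are illustrative assumptions, not shipped defaults.

```shell
# Default per the security notes; change to point at another Ollama instance.
export RAG_OLLAMA_URL="http://localhost:11434/api/embeddings"
# Assumed values for illustration only.
export RAG_EMBEDDING_MODEL="nomic-embed-text"
export RAG_DB_PATH="$HOME/.mcp-markdown-rag/rag.db"

./rag -mcp
```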
Security Notes
The tool interacts with the local file system for indexing and database storage, and makes HTTP requests to a configurable Ollama API endpoint. By default, it uses 'http://localhost:11434/api/embeddings', limiting network exposure. Users must ensure the configured Ollama endpoint is trusted to prevent sensitive Markdown content from being sent to an untrusted embedding service. The MCP server runs over standard I/O, which is typically a secure communication channel.
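The trust concern is concrete: Ollama's /api/embeddings endpoint accepts a JSON body whose "prompt" field carries the raw text, so the Markdown content itself travels to whatever host RAG_OLLAMA_URL points at. A minimal sketch of such a request body (the helper name, model, and sample text are illustrative):

```python
import json
from urllib import request

# Build (but do not send) an embeddings request, to show exactly what
# would leave the machine: the Markdown chunk is embedded in the body.
def build_request(url: str, model: str, chunk: str) -> request.Request:
    payload = json.dumps({"model": model, "prompt": chunk}).encode()
    return request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )

req = build_request(
    "http://localhost:11434/api/embeddings",
    "nomic-embed-text",
    "# Notes\nSome private Markdown content",
)
```

Inspecting `req.data` confirms the document text is part of the outbound payload, which is why the configured endpoint must be trusted.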
Similar Servers
haiku.rag
Opinionated agentic RAG powered by LanceDB, Pydantic AI, and Docling, providing hybrid search, intelligent QA, and multi-agent research over user-provided documents; accessible via CLI, Python API, web app, TUI, or as an MCP server for AI assistants.
Context-Engine
Self-improving code search and context engine for IDEs and AI agents, providing hybrid semantic/lexical search, symbol graph navigation, and persistent memory.
pageindex-mcp
This MCP server acts as a bridge, enabling LLM-native, reasoning-based RAG on documents (local or online PDFs) for MCP-compatible agents like Claude and Cursor, without requiring a vector database locally.
apple-rag-mcp
Provides a comprehensive RAG (Retrieval-Augmented Generation) server for AI agents to search and retrieve content from Apple's developer documentation and WWDC transcripts.