mcpRAG
Verified Safe · by rajagopal17
Overview
A Retrieval-Augmented Generation (RAG) system for document-based question answering using local embeddings and a Gemini LLM.
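The retrieval half of such a pipeline can be sketched as below. This is an illustrative assumption, not the repository's actual code: the function names, the cosine-similarity ranking, and the toy vectors (standing in for Ollama embedding output) are all hypothetical.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, doc_vecs, docs, k=2):
    # Rank documents by similarity to the query embedding, return top-k
    ranked = sorted(zip(docs, doc_vecs),
                    key=lambda p: cosine(query_vec, p[1]),
                    reverse=True)
    return [doc for doc, _ in ranked[:k]]

# Toy embeddings standing in for locally computed Ollama embeddings
docs = ["cat care guide", "dog training notes", "car repair manual"]
vecs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(retrieve([1.0, 0.0], vecs, docs, k=2))
```

The top-k passages would then be concatenated into the prompt sent to the Gemini LLM.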
Installation
python ragModel.py
Environment Variables
- GEMINI_API_KEY
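Reading the key might look like the sketch below; `load_gemini_key` is a hypothetical helper for illustration, not a function from `ragModel.py` (the project presumably loads the `.env` file via a library such as python-dotenv before this point).

```python
import os

def load_gemini_key(env=os.environ):
    # Fail fast with a clear message instead of making an
    # unauthenticated Gemini API call later
    key = env.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError("GEMINI_API_KEY is not set; add it to your .env file")
    return key
```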
Security Notes
The system loads the Gemini API key from an environment variable (via `.env`), which is good practice, and Ollama embeddings are computed locally, reducing the amount of data sent to external services. There are no obvious signs of `eval`, obfuscation, or shell command injection points reachable from user input. One code-structure issue exists in `ragModel.py`: the Gemini generation call sits outside the `if __name__ == '__main__':` block while relying on variables defined inside it. This is a correctness concern rather than a security vulnerability, but it can cause `NameError`s or unintended API calls whenever the file is imported.
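The structural issue described above is avoided with the standard main-guard pattern, sketched below. The function bodies are placeholders, not the repository's code; only the guard structure is the point.

```python
def build_context(question):
    # Placeholder for the retrieval step
    # (e.g., local Ollama embeddings + similarity search)
    return f"context for: {question}"

def generate_answer(question, context):
    # Placeholder for the Gemini generation call
    return f"answer to {question!r} using {context!r}"

def main():
    question = "What is RAG?"
    context = build_context(question)
    print(generate_answer(question, context))

if __name__ == "__main__":
    # Keeping the generation call inside this guard means importing the
    # module never triggers an API call or touches undefined variables.
    main()
```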
Similar Servers
haiku.rag
Opinionated agentic RAG powered by LanceDB, Pydantic AI, and Docling to provide hybrid search, intelligent QA, and multi-agent research over user-provided documents, accessible via CLI, Python API, Web App, TUI, or as an MCP server for AI assistants.
flexible-graphrag
The Flexible GraphRAG MCP Server integrates document processing, knowledge graph building, hybrid search, and AI query capabilities via the Model Context Protocol (MCP) for clients like Claude Desktop and MCP Inspector.
mcp-local-rag
Local RAG server for developers enabling private, offline semantic search with keyword boosting on personal or project documents (PDF, DOCX, TXT, MD, HTML).
rag-server-mcp
Provides Retrieval Augmented Generation (RAG) capabilities to Model Context Protocol (MCP) clients by indexing project documents and retrieving relevant content for LLMs.