mcpRAG
Verified Safe · by rajagopal17
Overview
A Retrieval-Augmented Generation (RAG) system for document-based question answering using local embeddings and a Gemini LLM.
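The core retrieval step can be sketched as ranking document chunks by cosine similarity between embedding vectors. This is a minimal, self-contained illustration with toy vectors standing in for the local Ollama embeddings; the function names are illustrative, not taken from `ragModel.py`.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, doc_vecs, k=2):
    # Return the indices of the k chunks most similar to the query.
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine_similarity(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]

# Toy 2-D vectors standing in for real embedding output.
docs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(retrieve([1.0, 0.05], docs, k=2))  # → [0, 1]
```

The retrieved chunks would then be passed as context to the Gemini LLM for generation.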
Installation
python ragModel.py
Environment Variables
- GEMINI_API_KEY
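Reading the key might look like the following sketch. `load_gemini_key` is a hypothetical helper, not a function from `ragModel.py`; the server is described as loading the value from a `.env` file, typically via python-dotenv's `load_dotenv()` before access.

```python
import os

def load_gemini_key(env=os.environ):
    # Fetch the Gemini key from the environment; fail loudly if missing
    # so the server does not start with an unusable configuration.
    key = env.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError("GEMINI_API_KEY is not set; add it to your .env file")
    return key
```

Failing fast at startup avoids a confusing authentication error surfacing later, on the first generation call.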
Security Notes
The system loads the Gemini API key from an environment variable read from a `.env` file, which is good practice. Ollama embeddings are computed locally, reducing external data-transfer risk. There are no obvious signs of `eval`, obfuscation, or shell-command injection points reachable from user input. One code-structure issue in `ragModel.py`: the Gemini generation call sits outside the `if __name__ == '__main__':` block while relying on variables defined inside it. This is a correctness concern rather than a direct security vulnerability, but it can cause runtime errors or unexpected side effects if the file is imported as a module.
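The fix for the structure issue described above is to keep the generation call (and the variables it depends on) inside the guard, or in a `main()` function called from it. A minimal sketch, with `answer` and `main` as illustrative placeholders rather than the actual functions in `ragModel.py`:

```python
def answer(question: str, index: list) -> str:
    # Placeholder for the real retrieval + Gemini generation step.
    return f"answering {question!r} over {len(index)} chunks"

def main() -> str:
    # Variables the generation call depends on live here, not at module scope.
    index = ["chunk one", "chunk two"]  # built from local documents
    return answer("What is RAG?", index)

if __name__ == "__main__":
    # Runs only when executed directly; importing ragModel has no side effects.
    print(main())
```

With this layout, `import ragModel` neither triggers an API call nor raises a `NameError` from variables that were never defined.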
Similar Servers
haiku.rag
An opinionated agentic RAG system that uses LanceDB for vector storage, Pydantic AI for multi-agent workflows, and Docling for document processing, exposing its capabilities as MCP tools for AI assistants.
flexible-graphrag
The Flexible GraphRAG MCP Server provides a Model Context Protocol (MCP) interface that lets AI assistants (such as Claude Desktop) use a RAG and GraphRAG system for document processing, automatic knowledge-graph construction, hybrid search, and AI Q&A.
mcp-local-rag
A privacy-first, local document search server that leverages semantic search for Model Context Protocol (MCP) clients.
rag-server-mcp
Provides Retrieval Augmented Generation (RAG) capabilities to Model Context Protocol (MCP) clients by indexing local project documents and retrieving relevant information for LLMs.