UltraRAG
Verified Safe by OpenBMB
Overview
An open-source RAG framework for building, experimenting, and evaluating complex Retrieval-Augmented Generation (RAG) pipelines with low-code YAML configurations and native multimodal support.
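As a rough illustration of the low-code idea, the sketch below parses a hypothetical pipeline definition with PyYAML. The keys (`pipeline`, `step`, `retriever`, `model`) are assumptions made for this example only and do not reflect UltraRAG's actual YAML schema.

```python
import yaml  # requires PyYAML

# Purely illustrative: these keys are assumed for the sake of example and
# are not UltraRAG's real configuration schema.
PIPELINE_YAML = """
pipeline:
  - step: retrieve
    retriever: dense
    top_k: 5
  - step: generate
    model: local-llm
    prompt: rag_default
"""

def load_pipeline(text: str) -> list:
    """Parse a YAML pipeline definition into a list of step dicts."""
    config = yaml.safe_load(text)
    return config["pipeline"]

for step in load_pipeline(PIPELINE_YAML):
    print(step["step"], {k: v for k, v in step.items() if k != "step"})
```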
Installation
`ultrarag show ui`
Environment Variables
- log_level
- CUDA_VISIBLE_DEVICES
- VLLM_WORKER_MULTIPROC_METHOD
- RETRIEVER_API_KEY
- LLM_API_KEY
- EXA_API_KEY
- TAVILY_API_KEY
- ZHIPUAI_API_KEY
- ULTRARAG_LOG_TS
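A minimal sketch of how a launch script might surface the variables listed above before starting UltraRAG. Only the variable names come from this list; the helper name, defaults, and the missing-key warning are assumptions.

```python
import os

# Hypothetical helper: collects the environment variables from the list above.
# Defaults and the exact consumers inside the framework are assumptions.
def load_ultrarag_env() -> dict:
    return {
        "log_level": os.getenv("log_level", "INFO"),
        "CUDA_VISIBLE_DEVICES": os.getenv("CUDA_VISIBLE_DEVICES", "0"),
        "VLLM_WORKER_MULTIPROC_METHOD": os.getenv("VLLM_WORKER_MULTIPROC_METHOD", "spawn"),
        "RETRIEVER_API_KEY": os.getenv("RETRIEVER_API_KEY", ""),
        "LLM_API_KEY": os.getenv("LLM_API_KEY", ""),
        "EXA_API_KEY": os.getenv("EXA_API_KEY", ""),
        "TAVILY_API_KEY": os.getenv("TAVILY_API_KEY", ""),
        "ZHIPUAI_API_KEY": os.getenv("ZHIPUAI_API_KEY", ""),
        "ULTRARAG_LOG_TS": os.getenv("ULTRARAG_LOG_TS", ""),
    }

if __name__ == "__main__":
    config = load_ultrarag_env()
    missing = [k for k, v in config.items() if k.endswith("_API_KEY") and not v]
    if missing:
        print(f"Warning: unset API keys: {', '.join(missing)}")
```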
Security Notes
The server uses `ast.literal_eval` for parsing configuration values such as list delimiters, which is generally safer than `eval` but still treats string input as Python literals. Subprocess execution (`subprocess.Popen`, `asyncio.create_subprocess_exec`) is used to launch MCP servers and external tools such as `mineru`; this is inherent to the architecture, and parameters appear to be sanitized or derived from trusted sources. Network risks include API calls to external LLM providers (OpenAI, ZhipuAI) and web search services (Exa, Tavily); the framework also supports deploying a remote retriever via a configurable URL (`retriever_url`). If `retriever_url` is user-controlled in a non-isolated environment, it could pose a Server-Side Request Forgery (SSRF) risk. Hardcoded secrets are avoided by relying on environment variables (e.g., `LLM_API_KEY`, `EXA_API_KEY`). The framework's overall security depends significantly on how users configure and deploy individual MCP servers and pipelines.
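As a sketch of the mitigations these notes imply, the helpers below show literal-only parsing with `ast.literal_eval` and an allowlist check on `retriever_url`. The function names, the allowlist, and the fallback behavior are assumptions for illustration, not UltraRAG's actual code.

```python
import ast
from urllib.parse import urlparse

# Assumed deployment allowlist; adjust to the hosts your retriever actually runs on.
ALLOWED_RETRIEVER_HOSTS = {"localhost", "127.0.0.1"}

def parse_config_literal(raw: str):
    """Parse a config value (e.g. a list of delimiters) as a Python literal.

    ast.literal_eval only accepts literals (strings, numbers, lists, dicts,
    tuples, sets, booleans, None), so arbitrary code cannot execute, but
    malformed input still raises and should be handled.
    """
    try:
        return ast.literal_eval(raw)
    except (ValueError, SyntaxError):
        return raw  # fall back to treating the value as a plain string

def validate_retriever_url(url: str) -> str:
    """Reject retriever URLs outside an explicit allowlist to reduce SSRF exposure."""
    parsed = urlparse(url)
    if parsed.scheme not in {"http", "https"}:
        raise ValueError(f"unsupported scheme: {parsed.scheme!r}")
    if parsed.hostname not in ALLOWED_RETRIEVER_HOSTS:
        raise ValueError(f"retriever host {parsed.hostname!r} is not allowlisted")
    return url

print(parse_config_literal("['\\n\\n', '##']"))               # -> ['\n\n', '##']
print(validate_retriever_url("http://localhost:8080/search"))  # passes the allowlist
```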
Similar Servers
5ire
A desktop AI assistant client that integrates with various LLM providers and connects to Model Context Protocol (MCP) servers for extended tool-use and knowledge base capabilities.
context-portal
Manages structured project context for AI assistants and developer tools, enabling Retrieval Augmented Generation (RAG) and prompt caching within IDEs.
mem-agent-mcp
Provides a Model Context Protocol (MCP) server for a memory agent, enabling LLMs to interact with an Obsidian-like memory system for contextual assistance and RAG.
solon-ai
The Model Context Protocol (MCP) server provides a standardized interface for AI models to interact with external tools, resources, and prompt templates through a structured, bidirectional communication protocol.