rmcp_memex
Verified Safe by Loctree
Overview
A local Model Context Protocol (MCP) server providing Retrieval-Augmented Generation (RAG) capabilities with embedded vector storage and local embeddings/reranking.
Installation
cargo run --release -- --log-level info
Environment Variables
- DISABLE_MLX
- DRAGON_BASE_URL
- MLX_JIT_MODE
- MLX_JIT_PORT
- EMBEDDER_PORT
- RERANKER_PORT
- EMBEDDER_MODEL
- RERANKER_MODEL
- FASTEMBED_CACHE_PATH
- HF_HUB_CACHE
- LANCEDB_PATH
- PROTOC
- HOME
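As a sketch of how configuration like the list above is typically resolved, the snippet below reads a port from the environment and falls back to a default. The variable name comes from the list; the default values and the helper function are illustrative assumptions, not the project's actual behavior.

```rust
use std::env;

// Hedged sketch: resolve a port-style setting, letting an environment
// variable override a built-in default. The 8080 default below is
// illustrative only, not a documented default of rmcp_memex.
fn port_from_env(var: &str, default: u16) -> u16 {
    env::var(var)
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(default)
}

fn main() {
    // With EMBEDDER_PORT unset, the illustrative default applies.
    let embedder_port = port_from_env("EMBEDDER_PORT", 8080);
    println!("embedder port: {}", embedder_port);
}
```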
Security Notes
The server runs as a local process and communicates via stdin/stdout using JSON-RPC, which limits the direct network attack surface. It relies on well-established Rust libraries for HTTP (reqwest), data serialization (serde_json), and vector storage (lancedb). No 'eval' or similar dynamic code execution patterns were found. Configuration is supplied through environment variables, avoiding hardcoded secrets. Processing external files (PDFs via pdf-extract, arbitrary text) does introduce a potential risk if these libraries or the parsing logic are vulnerable to malformed inputs. The MLX bridge relies on a separate local MLX HTTP server that is assumed to be trusted.
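To make the stdio transport concrete, the sketch below builds one JSON-RPC 2.0 message of the kind exchanged over stdin/stdout. It is std-only and illustrative: the `initialize` method name follows the MCP convention, but the helper function and its exact payload are assumptions, not code from this server (which would normally use serde_json).

```rust
// Minimal sketch of the framing: one JSON-RPC 2.0 message per line
// over stdio. Hand-built here for illustration; a real server would
// serialize with serde_json.
fn initialize_request(id: u64) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","id":{},"method":"initialize","params":{{}}}}"#,
        id
    )
}

fn main() {
    // A client would write this line to the server's stdin and read
    // the response line from its stdout.
    println!("{}", initialize_request(1));
}
```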
Similar Servers
haiku.rag
Opinionated agentic RAG powered by LanceDB, Pydantic AI, and Docling to provide hybrid search, intelligent QA, and multi-agent research over user-provided documents, accessible via CLI, Python API, Web App, TUI, or as an MCP server for AI assistants.
Context-Engine
Self-improving code search and context engine for IDEs and AI agents, providing hybrid semantic/lexical search, symbol graph navigation, and persistent memory.
local_faiss_mcp
Provides a local FAISS-based vector database as an MCP server for Retrieval-Augmented Generation (RAG) applications, enabling document ingestion, semantic search, and prompt generation.
vector-mcp
Provides a standardized API for AI agents to manage and interact with various vector database technologies for Retrieval Augmented Generation (RAG).