VectorMind
Verified Safe by RecallFlow
Overview
A lightweight vector database service providing semantic search and Retrieval Augmented Generation (RAG) capabilities using Redis as a backend for storing embeddings.
Installation
docker compose up -d
Environment Variables
- REDIS_INDEX_NAME
- REDIS_ADDRESS
- REDIS_PASSWORD
- MCP_HTTP_PORT
- API_REST_PORT
- EMBEDDING_MODEL
- MODEL_RUNNER_BASE_URL
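As a hedged sketch, the variables above could be wired into the compose setup like this; the image name, service names, and every value shown are placeholders, not the project's actual defaults:

```yaml
services:
  vectormind:
    image: vectormind:latest            # placeholder image name
    environment:
      REDIS_INDEX_NAME: "vectormind-idx"
      REDIS_ADDRESS: "redis:6379"
      REDIS_PASSWORD: "${REDIS_PASSWORD}"  # keep the secret in the shell env, not the file
      MCP_HTTP_PORT: "9090"
      API_REST_PORT: "8080"
      EMBEDDING_MODEL: "your-embedding-model"
      MODEL_RUNNER_BASE_URL: "http://model-runner:12434/v1"  # local/self-hosted runner
    depends_on:
      - redis
  redis:
    image: redis:7
```

Passing `REDIS_PASSWORD` through from the host shell keeps the secret out of version control, in line with the security notes below.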
Security Notes
The application handles data segmentation (labels, metadata) and provides multiple text-splitting strategies (chunking, markdown sections, delimited). API requests are validated; for example, 'content', 'document', and 'chunk_size' are checked for empty or invalid values. Sensitive configuration such as the Redis password is supplied through environment variables, which is good practice. Because the embedding model is reached via 'MODEL_RUNNER_BASE_URL' (potentially a local or self-hosted runner), the risk of exposing an external API key is reduced, although an API key is still passed as an empty string to the OpenAI client. The Go application logic shows no obvious use of 'eval' or user-controlled shell command execution, and Dockerization further improves isolation.
Similar Servers
context-portal
Manages structured project context for AI assistants and developer tools, enabling Retrieval Augmented Generation (RAG) and prompt caching within IDEs.
local_faiss_mcp
Provides a local FAISS-based vector database as an MCP server for Retrieval-Augmented Generation (RAG) applications, enabling document ingestion, semantic search, and prompt generation.
vector-mcp
Provides a standardized API for AI agents to manage and interact with various vector database technologies for Retrieval Augmented Generation (RAG).
concept-rag
This MCP server provides conceptual search, document analysis, and library exploration capabilities over a knowledge base using LanceDB and LLM-based concept extraction.