vector-mcp
by Knuckles-Team
Overview
Provides a standardized API for AI agents to manage and interact with various vector database technologies for Retrieval-Augmented Generation (RAG).
Installation
docker run -d --name vector-mcp -p 8004:8004 -e HOST=0.0.0.0 -e PORT=8004 -e TRANSPORT=http -e AUTH_TYPE=none -e EUNOMIA_TYPE=none knucklessg1/vector-mcp:latest
Environment Variables
- MCP_URL
- DATABASE_TYPE
- DATABASE_PATH
- DB_HOST
- DB_PORT
- DBNAME
- USERNAME
- PASSWORD
- API_TOKEN
- PROVIDER
- OPENAI_BASE_URL
- OPENAI_API_KEY
- ANTHROPIC_API_KEY
- GOOGLE_API_KEY
- HF_TOKEN
- AUTH_TYPE
- OIDC_CONFIG_URL
- OIDC_CLIENT_ID
- OIDC_CLIENT_SECRET
- OIDC_BASE_URL
- TOKEN_JWKS_URI
- TOKEN_ISSUER
- TOKEN_AUDIENCE
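As a sketch of how these variables fit together, a minimal configuration for a server-backed database and an OpenAI-compatible embedding provider might look like the following `.env` file. All values are illustrative placeholders, and the `DATABASE_TYPE` and `PROVIDER` values shown are assumptions; consult the project documentation for the accepted values.

```shell
# Illustrative .env file for vector-mcp — every value below is a placeholder.
HOST=0.0.0.0
PORT=8004
TRANSPORT=http

# Vector database connection (for server-based backends)
DATABASE_TYPE=postgres          # assumed value; check the supported backends
DB_HOST=db.internal.example
DB_PORT=5432
DBNAME=vectors
USERNAME=vector_mcp
PASSWORD=change-me              # prefer a secrets manager over plain env vars

# Embedding/LLM provider credentials (set only those you use)
PROVIDER=openai                 # assumed value; check the supported providers
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_API_KEY=replace-with-your-key
```

A file like this can be passed to the container in one step with `docker run --env-file .env ...` instead of repeating `-e` flags.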
Security Notes
The `vector-mcp` server defaults to `AUTH_TYPE=none`, leaving the API unauthenticated; this is unsafe for any production deployment. Robust authentication mechanisms (JWT, OAuth, OIDC) are supported and configurable via environment variables, but must be enabled explicitly. Database credentials are passed via environment variables, so their safety depends on the user's environment configuration. The server binds to `0.0.0.0`, so external network isolation (a firewall, reverse proxy, or private network) is required.
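A hedged sketch of a more defensive launch than the quick-start command: authentication enabled via the OIDC variables listed above, with the published port bound to loopback so the server is only reachable through a local reverse proxy. The `AUTH_TYPE=oidc` value and all endpoint values are assumptions/placeholders, not confirmed by the project; verify the accepted `AUTH_TYPE` values against the project documentation.

```shell
# Enable OIDC auth instead of the insecure default (values are placeholders).
# Publishing on 127.0.0.1 keeps the server off external interfaces until a
# reverse proxy or firewall is in place; HOST=0.0.0.0 applies only inside
# the container.
docker run -d --name vector-mcp \
  -p 127.0.0.1:8004:8004 \
  -e HOST=0.0.0.0 \
  -e PORT=8004 \
  -e TRANSPORT=http \
  -e AUTH_TYPE=oidc \
  -e OIDC_CONFIG_URL=https://idp.example.com/.well-known/openid-configuration \
  -e OIDC_CLIENT_ID=vector-mcp \
  -e OIDC_CLIENT_SECRET=change-me \
  -e OIDC_BASE_URL=https://mcp.example.com \
  knucklessg1/vector-mcp:latest
```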
Similar Servers
haiku.rag
Opinionated agentic RAG powered by LanceDB, Pydantic AI, and Docling to provide hybrid search, intelligent QA, and multi-agent research over user-provided documents, accessible via CLI, Python API, Web App, TUI, or as an MCP server for AI assistants.
Context-Engine
Self-improving code search and context engine for IDEs and AI agents, providing hybrid semantic/lexical search, symbol graph navigation, and persistent memory.
qdrant-loader
A Model Context Protocol (MCP) server that provides advanced Retrieval-Augmented Generation (RAG) capabilities to AI development tools by bridging a QDrant knowledge base for intelligent, context-aware search.
local_faiss_mcp
Provides a local FAISS-based vector database as an MCP server for Retrieval-Augmented Generation (RAG) applications, enabling document ingestion, semantic search, and prompt generation.