Context-Engine
by m1rl0k
Overview
Self-improving code search and context engine for IDEs and AI agents, providing hybrid semantic/lexical search, symbol graph navigation, and persistent memory.
Installation
`docker compose up -d`
Environment Variables
- QDRANT_URL
- COLLECTION_NAME
- EMBEDDING_MODEL
- FASTMCP_HOST
- FASTMCP_PORT
- FASTMCP_INDEXER_PORT
- LLAMACPP_URL
- GLM_API_KEY
- GLM_API_BASE
- GLM_MODEL
- CTXCE_AUTH_ENABLED
- CTXCE_AUTH_SHARED_TOKEN
- CTXCE_AUTH_ADMIN_TOKEN
- NEO4J_GRAPH
- NEO4J_URI
- NEO4J_USER
- NEO4J_PASSWORD
- NEO4J_DATABASE
- OPENLIT_ENABLED
- OTEL_EXPORTER_OTLP_ENDPOINT
- RERANKER_MODEL
- REMOTE_UPLOAD_GIT_MAX_COMMITS
- REMOTE_UPLOAD_GIT_SINCE
- LEX_SPARSE_MODE
- PATTERN_VECTORS
- MULTI_REPO_MODE
- REFRAG_RUNTIME
- REFRAG_DECODER
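The variables above are typically supplied via a `.env` file that `docker compose` reads automatically. The snippet below is an illustrative sketch only: every value shown is a placeholder assumption, not a default documented by this project.

```shell
# .env — placeholder values for a local development stack (assumed, not project defaults)
QDRANT_URL=http://qdrant:6333
COLLECTION_NAME=context_engine
EMBEDDING_MODEL=your-embedding-model-id
FASTMCP_HOST=0.0.0.0
FASTMCP_PORT=8000
NEO4J_URI=bolt://neo4j:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=change-me            # do not ship the development default
CTXCE_AUTH_ENABLED=true
CTXCE_AUTH_SHARED_TOKEN=replace-with-a-long-random-token
```

Unset variables generally fall back to in-code defaults; check the compose file and service configuration for the authoritative values.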
Security Notes
The system makes extensive use of `subprocess.run` and `subprocess.Popen` for internal orchestration (Git commands, Python scripts, Docker operations). This creates a shell-injection risk if user-supplied inputs such as file paths and queries are not rigorously sanitized. Secrets are read from environment variables (e.g., `GITHUB_TOKEN`, `OPENAI_API_KEY`), but development configurations ship with default passwords such as `contextengine` for Neo4j; change these before any non-local deployment. Network communication between services (MCP, Qdrant, Llama.cpp) is orchestrated internally, but any exposed ports require network segmentation in production. The `ctxce` CLI, driven by the VS Code extension, is a further attack surface.
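The safe pattern for the subprocess calls described above is to pass an explicit argv list and avoid `shell=True`, so that metacharacters in user-supplied values are never interpreted by a shell. A minimal sketch (the `run_safe` helper is hypothetical, not part of this project's API):

```python
import subprocess

def run_safe(args: list[str]) -> str:
    """Run a command with an explicit argv list and no shell.

    Each element of ``args`` becomes one literal argv entry, so shell
    metacharacters in user-supplied values (paths, queries) are passed
    through as plain text rather than being interpreted.
    """
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    return result.stdout

# A hostile-looking path stays a single literal argument:
print(run_safe(["echo", "repo; rm -rf /"]))  # → repo; rm -rf /
```

By contrast, `subprocess.run(f"git -C {path} log", shell=True)` would let a crafted `path` execute arbitrary commands.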
Similar Servers
MaxKB
MaxKB (Max Knowledge Brain) is an enterprise-grade intelligent agent platform that lowers the technical barrier and deployment cost of adopting AI. It helps businesses integrate mainstream large language models, build proprietary knowledge bases, and follow a progressive upgrade path from RAG to workflow automation and advanced agents, for scenarios such as smart customer service and office assistants.
VectorCode
Indexes code repositories to generate relevant contextual information for Large Language Models (LLMs), enhancing their performance on specific or private codebases.
qdrant-loader
A Model Context Protocol (MCP) server that provides advanced Retrieval-Augmented Generation (RAG) capabilities to AI development tools by bridging a Qdrant knowledge base for intelligent, context-aware search.
codebase-RAG
A Retrieval-Augmented Generation (RAG) server designed to assist AI agents and developers in understanding and navigating codebases through semantic search.