cortexgraph
Verified Safe · by prefrontal-systems
Overview
A Model Context Protocol (MCP) server providing AI assistants with ephemeral, local short-term memory, temporal decay, reinforcement, and automatic promotion to long-term storage.
Installation
python -m cortexgraph.server
Environment Variables
- CORTEXGRAPH_STORAGE_PATH
- CORTEXGRAPH_ENABLE_EMBEDDINGS
- CORTEXGRAPH_EMBED_MODEL
- LTM_VAULT_PATH
- LTM_PROMOTED_FOLDER
- CORTEXGRAPH_STORAGE_BACKEND
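A configuration might look like the following; every value shown (paths, backend name, model name) is an illustrative assumption, not a documented default:

```shell
# Illustrative values only — consult the project docs for actual defaults.
export CORTEXGRAPH_STORAGE_PATH="$HOME/.cortexgraph"
export CORTEXGRAPH_STORAGE_BACKEND="jsonl"            # backend name is an assumption
export CORTEXGRAPH_ENABLE_EMBEDDINGS="true"
export CORTEXGRAPH_EMBED_MODEL="all-MiniLM-L6-v2"     # model name is an assumption
export LTM_VAULT_PATH="$HOME/vault"
export LTM_PROMOTED_FOLDER="promoted"
```

With these set, the server is launched with `python -m cortexgraph.server` as shown above.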
Security Notes
Security is a core concern of the project, with dedicated modules for input validation, path traversal prevention, file and directory permission hardening (0o600/0o700), and sensitive-data detection. The `detect_secrets` module actively scans content and `.env` files for common secret patterns (API keys, tokens, passwords) and warns the user. Rate limiting is applied to API endpoints. External CLI calls (`bd`, for Beads integration) go through `subprocess.run` and appear to be handled carefully, consuming `--json` output and avoiding `shell=True`. Overall, the project is robust against common vulnerabilities.
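The safe-subprocess pattern described above can be sketched as follows; `run_cli_json` and the stand-in command are illustrative, not cortexgraph's actual code:

```python
import json
import subprocess
import sys

def run_cli_json(argv: list[str]) -> dict:
    """Invoke an external CLI safely: arguments as a list (never shell=True),
    captured text output, JSON parsing, and a raised error on non-zero exit."""
    result = subprocess.run(argv, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

# Stand-in for a call along the lines of run_cli_json(["bd", "list", "--json"]):
data = run_cli_json(
    [sys.executable, "-c", "import json; print(json.dumps({'ok': True}))"]
)
```

Passing `argv` as a list keeps arguments out of the shell entirely, so untrusted input cannot be interpreted as shell metacharacters, and `check=True` surfaces CLI failures as exceptions instead of silently parsing empty output.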
Similar Servers
mcp-memory-service
A Model Context Protocol (MCP) server providing persistent, semantic memory storage and retrieval capabilities for AI agents. It supports lightweight semantic reasoning (contradiction, causal inference), content chunking, multi-backend storage (SQLite-vec, Cloudflare, Hybrid), autonomous memory consolidation (decay, association, clustering, compression, forgetting), and real-time updates via SSE. It's designed for token-efficient interaction with LLMs.
context-sync
Context Sync provides AI systems with persistent, queryable memory across all development tools, sessions, and projects, allowing AI to remember codebase details, architectural decisions, and conversation history.
post-cortex
Provides long-term, persistent memory and knowledge management for AI assistants, enabling them to store, semantically search, and retrieve conversation context, decisions, and code-related insights.
memory-mcp
Provides persistent memory and intelligent context window caching for LLM conversations within AI coding environments.