mendix-mcp-server
by jordnlvr
Overview
A self-learning Model Context Protocol (MCP) server that provides expert-level Mendix development assistance: knowledge-base queries, project and theme analysis, code-generation patterns, and troubleshooting, with support for semantic search and auto-harvesting.
Installation
`npm start`
Environment Variables
- SUPABASE_URL
- SUPABASE_ANON_KEY
- SUPABASE_SERVICE_KEY
- PINECONE_API_KEY
- PINECONE_INDEX
- PINECONE_ENVIRONMENT
- AZURE_OPENAI_API_KEY
- AZURE_OPENAI_ENDPOINT
- AZURE_OPENAI_EMBEDDING_DEPLOYMENT
- OPENAI_API_KEY
- HARVEST_INTERVAL_DAYS
- HARVEST_AUTO_RUN
- LOG_LEVEL
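A server like this typically validates its configuration at startup so that a missing variable fails loudly rather than surfacing later as a confusing runtime error. A minimal TypeScript sketch, using the variable names listed above; the `loadConfig` helper, the required/optional split, and the default values are illustrative assumptions, not taken from this project:

```typescript
// Illustrative startup validation for the environment variables listed above.
// Which variables are strictly required is an assumption for this sketch.
const REQUIRED = [
  "SUPABASE_URL",
  "SUPABASE_ANON_KEY",
  "PINECONE_API_KEY",
  "PINECONE_INDEX",
] as const;

function loadConfig(env: NodeJS.ProcessEnv = process.env) {
  // Fail fast on any missing required variable, naming all of them at once.
  const missing = REQUIRED.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
  return {
    supabaseUrl: env.SUPABASE_URL!,
    supabaseAnonKey: env.SUPABASE_ANON_KEY!,
    pineconeApiKey: env.PINECONE_API_KEY!,
    pineconeIndex: env.PINECONE_INDEX!,
    // Optional settings with assumed defaults.
    harvestIntervalDays: Number(env.HARVEST_INTERVAL_DAYS ?? "7"),
    logLevel: env.LOG_LEVEL ?? "info",
  };
}
```

Collecting all missing names before throwing spares the operator a fix-one-rerun loop when several variables are unset.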
Security Notes
- Hardcoded Pinecone API key: `src/vector/VectorStore.js` contains an obfuscated, hardcoded Pinecone API key used as a fallback when environment variables are not set. This is a critical security vulnerability, as it could expose access to a shared Pinecone index. While `flyio-secrets.sh` indicates proper environment-variable usage for deployment, a local run without explicit configuration would use the hardcoded key.
- Shelling out to git: the `SyncReminder` module uses `child_process.execSync` for git operations, which carries inherent risk if not carefully managed, although its current use for self-repo management appears controlled.
- Unauthenticated endpoints: the public REST and SSE endpoints lack user-level authentication, relying on rate limiting for abuse prevention; `ARCHITECTURE.md` notes this as a future improvement.
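The hardcoded-fallback issue is avoidable by refusing to start when the key is absent. A hypothetical sketch of the safer pattern; the function name is illustrative and this is not the actual code in `src/vector/VectorStore.js`:

```typescript
// Safer alternative to an embedded fallback key: treat a missing
// PINECONE_API_KEY as a configuration error and fail fast.
function getPineconeApiKey(env: NodeJS.ProcessEnv = process.env): string {
  const key = env.PINECONE_API_KEY;
  if (!key) {
    // No bundled key to fall back to; the operator must supply one.
    throw new Error("PINECONE_API_KEY is not set; refusing to start without it.");
  }
  return key;
}
```

This keeps local runs from silently sharing a production index and makes misconfiguration visible at startup instead of at query time.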
Similar Servers
chunkhound
Provides local-first codebase intelligence, extracting architecture, patterns, and institutional knowledge for AI assistants.
mcp-documentation-server
A local-first MCP server for document management, semantic search, and AI-powered document intelligence.
codeweaver
A code intelligence platform that provides semantically rich, context-aware code search for AI agents, aimed at reducing cognitive load and token costs for coding tasks.
bluera-knowledge
Provides a semantic knowledge base and intelligent web crawling capabilities to power coding agents, enabling them to search internal project files, Git repositories, and crawled web documentation.