emceepee
Verified Safe by eastlondoner
Overview
A proxy server enabling AI agents to dynamically connect to and interact with multiple Model Context Protocol (MCP) backend servers, exposing the full MCP protocol via a simplified tool interface or a sandboxed JavaScript execution environment.
Installation
npx emceepee
Environment Variables
- PORT
- EMCEEPEE_NO_CODEMODE
- EMCEEPEE_CONFIG
- EMCEEPEE_LOG_DIR
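The entry does not document defaults for these variables. As a hypothetical sketch only, a launcher might resolve them along these lines (the default port, config path, and log behavior below are illustrative assumptions, not taken from emceepee's docs):

```javascript
// Hypothetical startup-config reader for the documented environment
// variables. All default values here are assumptions for illustration.
function readConfig(env = process.env) {
  return {
    port: Number(env.PORT ?? 3000),                       // HTTP port (assumed default)
    codemodeEnabled: !env.EMCEEPEE_NO_CODEMODE,           // any value disables codemode
    configPath: env.EMCEEPEE_CONFIG ?? "./emceepee.json", // backend server list (assumed path)
    logDir: env.EMCEEPEE_LOG_DIR,                         // optional; unset means no log dir
  };
}

readConfig({ PORT: "8080", EMCEEPEE_NO_CODEMODE: "1" });
// → { port: 8080, codemodeEnabled: false, configPath: "./emceepee.json", logDir: undefined }
```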
Security Notes
The project demonstrates a strong commitment to security, particularly in its `codemode_execute` feature. It uses the Node.js `vm` module to create a sandboxed execution environment that blocks access to critical Node.js globals (`process`, `require`, `module`, `global`, `globalThis`, `Buffer`), network/IO APIs (`fetch`, `XMLHttpRequest`, `WebSocket`), dynamic code execution (`eval`, `Function`), and direct filesystem access. It also enforces execution timeouts (default 30s, maximum 5min) and caps the number of `mcp.*` API calls (default 100) and the code length (100KB) to prevent abuse and denial of service.

While `vm` provides meaningful isolation, it is not as strong a boundary as a separate process or `isolated-vm` (noted as a future enhancement in `CODEMODE_PLAN.md`). For its intended use case as an intermediary for trusted AI agents, however, it provides a highly secure execution model. No obvious hardcoded secrets or malicious patterns were found.
Similar Servers
mcpo
Exposes Model Context Protocol (MCP) tools as OpenAPI-compatible HTTP servers.
mcphub
An orchestration hub that aggregates, manages, and routes Model Context Protocol (MCP) servers and their tools, providing a centralized interface, user management, OAuth 2.0 authorization server capabilities, and AI-powered tool discovery and routing.
mcp-language-server
Serves as an MCP (Model Context Protocol) gateway, enabling LLMs to interact with Language Servers (LSPs) for codebase navigation, semantic analysis, and code editing operations.
aicode-toolkit
An MCP proxy server that aggregates multiple Model Context Protocol (MCP) servers, enabling on-demand tool discovery and execution, thereby significantly reducing AI agent token usage and improving context window efficiency by loading tools progressively.