long-context-mcp
Verified Safe · by wx-b
Overview
A Model Context Protocol (MCP) server that implements Recursive Language Models (RLM) for solving long-context problems by programmatically probing, recursing, and synthesizing over large codebases or documents.
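The recursive probe-and-synthesize pattern can be sketched in miniature: split an oversized context into chunks, reduce each chunk, then recurse over the joined reductions until the result fits. This is a hypothetical illustration, not the project's actual `RLM` implementation; `reduce_fn` stands in for an LLM call.

```python
def recursive_reduce(text: str, limit: int, reduce_fn) -> str:
    """Recursively shrink `text` until it fits within `limit` characters."""
    if len(text) <= limit:
        return text
    # Split into chunks no larger than the limit.
    chunks = [text[i:i + limit] for i in range(0, len(text), limit)]
    # Reduce each chunk independently (an LLM summarization in practice),
    # then recurse over the concatenated partial results.
    reduced = "\n".join(reduce_fn(c) for c in chunks)
    return recursive_reduce(reduced, limit, reduce_fn)

# Stub reducer: keep the first 20 characters of each chunk.
result = recursive_reduce("x" * 1000, 100, lambda c: c[:20])
```

In the real server the reducer is a recursive language-model call, so each level of recursion trades context length for synthesized detail.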
Installation
uv run rlm_mcp_server/server.py
Environment Variables
- OPENROUTER_API_KEY
- OPENAI_API_KEY
- RLM_DEFAULT_MODEL
- RLM_DEFAULT_RECURSION_MODEL
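A server of this shape would typically resolve its configuration from these variables at startup. The variable names above come from the README; the fallback values below are placeholders, not the project's actual defaults.

```python
import os

def load_config() -> dict:
    """Read the RLM server's configuration from the environment.
    Default model strings here are illustrative placeholders."""
    return {
        "openrouter_key": os.environ.get("OPENROUTER_API_KEY"),
        "openai_key": os.environ.get("OPENAI_API_KEY"),
        "model": os.environ.get("RLM_DEFAULT_MODEL", "<default-model>"),
        "recursion_model": os.environ.get(
            "RLM_DEFAULT_RECURSION_MODEL", "<default-recursion-model>"
        ),
    }
```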
Security Notes
The server's core functionality involves executing LLM-generated code (`rlm.core.rlm.RLM`) to probe context, which inherently carries execution risk. The project is transparent about this: it recommends and defaults to a Docker sandbox for execution (`environment: docker`), and API keys are supplied via environment variables rather than hardcoded secrets. File ingestion (`rlm_mcp_server/ingest.py`) enforces repository boundaries to prevent path traversal. LLM output is parsed with `ast.literal_eval`, which, unlike `eval`, cannot execute arbitrary expressions, though output parsing remains a point to watch.
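The two safeguards mentioned above can be illustrated with a minimal sketch. This is not the project's actual code: `resolve_inside` shows one common way to confine file access to a repository root, and `parse_llm_output` shows why `ast.literal_eval` is safer than `eval` for model output.

```python
import ast
from pathlib import Path

def resolve_inside(repo_root: str, requested: str) -> Path:
    """Resolve `requested` against `repo_root`, rejecting path traversal."""
    root = Path(repo_root).resolve()
    target = (root / requested).resolve()
    # After resolution, the target must be the root itself or live under it.
    if target != root and root not in target.parents:
        raise ValueError(f"path escapes repository root: {requested}")
    return target

def parse_llm_output(raw: str):
    """Parse a Python literal (dict/list/str/number) from model output.
    Unlike eval(), literal_eval cannot call functions or run expressions."""
    return ast.literal_eval(raw)
```

`literal_eval` still raises on malformed input, so callers need to handle `ValueError`/`SyntaxError` when the model emits non-literal text.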
Similar Servers
fastmcp
FastMCP is an ergonomic interface for the Model Context Protocol (MCP), providing a comprehensive framework for building and interacting with AI agents, tools, resources, and prompts across various transports and authentication methods.
jinni
A tool to efficiently provide Large Language Models with structured project context for code comprehension and generation tasks.
action_mcp
ActionMCP is a Ruby gem providing Model Context Protocol (MCP) server capabilities to Rails applications, enabling AI assistants to connect to external data sources and tools.
1xn-vmcp
An open-source platform for composing, customizing, and extending multiple Model Context Protocol (MCP) servers into a single logical, virtual MCP server, enabling fine-grained context engineering for AI workflows and agents.