memory-lane
Verified Safe · by robbgatica
Overview
AI-powered memory forensics analysis using Volatility 3 and an LLM.
Installation
cd /path/to/memory-forensics-mcp/examples && export MCP_LLM_PROFILE=llama70b && python ollama_client.py
Environment Variables
- VOLATILITY_PATH
- DUMPS_DIR
- MCP_LLM_PROFILE
- OLLAMA_MODEL
- MCP_SERVER_PATH
- OPENAI_API_KEY
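As a rough illustration of how these variables might be consumed at startup, here is a hypothetical configuration loader in Python. The default values shown are assumptions for the sketch, not documented project defaults:

```python
import os

# Hypothetical config loader for the environment variables listed above.
# Fallback values are illustrative assumptions, not the project's defaults.
def load_config(env=os.environ):
    return {
        "volatility_path": env.get("VOLATILITY_PATH", "vol"),
        "dumps_dir": env.get("DUMPS_DIR", "./dumps"),
        "llm_profile": env.get("MCP_LLM_PROFILE", "llama70b"),
        "ollama_model": env.get("OLLAMA_MODEL"),
        "server_path": env.get("MCP_SERVER_PATH"),
        # Only required when using an OpenAI-backed profile.
        "openai_api_key": env.get("OPENAI_API_KEY"),
    }

config = load_config({"MCP_LLM_PROFILE": "llama70b"})
print(config["llm_profile"])  # llama70b
```

Reading everything through `env.get` keeps optional settings (such as the API key) from raising when unset.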
Security Notes
The server integrates Volatility 3 for memory analysis and exposes its capabilities via the Model Context Protocol (MCP). It ingests memory dumps, which are potentially malicious data, for analysis. No direct 'eval' or 'exec' calls were found in the provided source. Network communication from the Ollama client to the Ollama server defaults to localhost, reducing external network exposure. Sensitive configuration such as API keys is expected via environment variables, and provenance tracking (`provenance.py`) provides an audit trail. The primary security considerations are the inherent risks of handling potentially malicious memory dumps in a forensic context and the security of the underlying Volatility 3 framework. Parsing LLM responses for tool calls introduces a further, though mitigated, risk if the JSON parsing or tool-argument mapping is flawed. The documentation recommends running the server in an isolated environment.
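The tool-call parsing risk mentioned above can be mitigated by validating JSON strictly and filtering arguments against an allow-list. The following is an illustrative sketch only; the tool names, JSON shape, and helper are hypothetical, not taken from this project's source:

```python
import json

# Assumed tool schema for illustration: tool name -> permitted argument keys.
ALLOWED_TOOLS = {"pslist": {"dump"}, "netscan": {"dump"}}

def parse_tool_call(raw: str):
    """Defensively parse an LLM response that may contain a tool call."""
    try:
        call = json.loads(raw)
    except json.JSONDecodeError:
        return None  # not valid JSON; treat the response as plain text
    name = call.get("tool")
    if name not in ALLOWED_TOOLS:
        return None  # unknown tool: reject rather than guess
    # Drop any argument keys the tool's schema does not permit.
    args = {k: v for k, v in call.get("arguments", {}).items()
            if k in ALLOWED_TOOLS[name]}
    return name, args

print(parse_tool_call('{"tool": "pslist", "arguments": {"dump": "mem.raw", "evil": 1}}'))
# ('pslist', {'dump': 'mem.raw'})
```

Rejecting unknown tools outright, rather than attempting a best-effort match, is what keeps a malformed or adversarial model response from reaching an arbitrary command.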
Similar Servers
jadx-mcp-server
Facilitates live, LLM-driven reverse engineering and vulnerability analysis of Android APKs by integrating JADX with the Model Context Protocol.
Reversecore_MCP
Provides a Model Context Protocol (MCP) server that wraps various reverse engineering CLI tools and libraries, enabling AI agents to perform binary analysis, malware analysis, and vulnerability research through natural language commands.
langfuse-mcp
Provides a comprehensive Model Context Protocol (MCP) server for Langfuse, enabling AI agents to debug, analyze, and manage AI traces, observations, sessions, exceptions, and prompts.
mcp-server-cortex
This server acts as a bridge, exposing Cortex threat intelligence analysis capabilities as tools consumable by Model Context Protocol (MCP) clients, such as large language models (LLMs).