mock-llm
Verified Safe · by dwmkerr
Overview
Provides a configurable mock API server compatible with OpenAI's API, the Model Context Protocol (MCP), and the Agent-to-Agent (A2A) Protocol, primarily for deterministic testing and development of AI applications.
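Because the server claims OpenAI API compatibility, clients can target it exactly as they would the real API. The sketch below builds an OpenAI-style chat completion request with the Python standard library; the base URL, port, and model name are illustrative assumptions, not mock-llm's documented defaults, and the response shape follows OpenAI's published format.

```python
import json
from urllib import request

# Assumed base URL; point this at wherever your mock-llm instance listens.
BASE_URL = "http://localhost:8080"

def build_chat_request(prompt: str) -> dict:
    """Build an OpenAI-compatible chat completion payload."""
    return {
        "model": "gpt-4o",  # a mock server typically ignores or echoes the model name
        "messages": [{"role": "user", "content": prompt}],
    }

def send_chat_request(payload: dict) -> dict:
    """POST the payload to the OpenAI-style chat completions endpoint."""
    req = request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request("Hello, mock!")
# Against a running instance, an OpenAI-style response carries the text under
# choices[0].message.content:
# reply = send_chat_request(payload)
# print(reply["choices"][0]["message"]["content"])
```

Because responses are deterministic, the same request can be replayed in CI and asserted against byte-for-byte.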
Installation
npm install -g mock-llm && mock-llm
Environment Variables
- HOST
- PORT
- AGENT_CARD_HOST
- AGENT_CARD_PORT
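The variables above configure where the server binds. A minimal sketch of launching the CLI with overrides from a test harness, assuming the `AGENT_CARD_*` pair controls the address advertised in the A2A agent card (an inference from the protocol support listed above; the values shown are illustrative, not documented defaults):

```python
import os
import subprocess

# Illustrative values; substitute the host/port layout your tests expect.
env = {
    **os.environ,
    "HOST": "127.0.0.1",             # bind address for the main API server
    "PORT": "8080",                  # port for the main API server
    "AGENT_CARD_HOST": "127.0.0.1",  # assumed: host advertised in the A2A agent card
    "AGENT_CARD_PORT": "8081",       # assumed: port advertised in the A2A agent card
}

# Launch the globally installed CLI with the overrides (uncomment to run):
# subprocess.run(["mock-llm"], env=env, check=True)
```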
Security Notes
The server's core mocking and templating functionality (JMESPath-based) does not appear to use inherently dangerous constructs such as `eval` or direct command execution on user input, making it safe for its intended purpose. Taking host and port configuration from environment variables is also good practice.

However, the `/config` API endpoints (GET, POST, PATCH, DELETE) allow runtime modification and reset of the server's rules and streaming configuration. If the mock server is reachable by unauthorized parties without an additional security layer (e.g., API gateway authentication), an attacker could reconfigure the mock responses, disrupting client testing or manipulating test data. This pattern is common for mock servers and acceptable in local or isolated testing environments, but it poses a risk in shared or production-like setups without proper access controls.
Similar Servers
mcpo
Exposes Model Context Protocol (MCP) tools as OpenAPI-compatible HTTP servers.
inspector
Local development and debugging platform for Model Context Protocol (MCP) clients and servers: it proxies MCP server interactions, simulates UI widgets, and facilitates OAuth flows.
mcp-openapi-server
A Model Context Protocol (MCP) server that exposes OpenAPI endpoints as MCP tools, along with optional support for MCP prompts and resources, enabling Large Language Models to interact with REST APIs.
tmcp
A server implementation for the Model Context Protocol (MCP) to enable LLMs to access external context and tools.