alertmanager-mcp-server
Verified Safe by ntk148v
Overview
Enables AI assistants and tools to query and manage Prometheus Alertmanager resources programmatically and securely.
Installation
docker run -e ALERTMANAGER_URL=http://your-alertmanager:9093 -e ALERTMANAGER_USERNAME=your_username -e ALERTMANAGER_PASSWORD=your_password -e ALERTMANAGER_TENANT=your_tenant_id -p 8000:8000 ghcr.io/ntk148v/alertmanager-mcp-server
Environment Variables
- ALERTMANAGER_URL
- ALERTMANAGER_USERNAME
- ALERTMANAGER_PASSWORD
- ALERTMANAGER_TENANT
- MCP_TRANSPORT
- MCP_HOST
- MCP_PORT
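The variables above can be read at startup with ordinary `os.environ` lookups. A minimal sketch of such a config loader follows; the function name and the default values (`localhost:9093`, `stdio` transport, port `8000`) are assumptions for illustration, not guaranteed to match the server's actual defaults.

```python
import os

def load_config(env=os.environ):
    """Hypothetical loader for the documented environment variables.

    Defaults here are illustrative assumptions; only the variable
    names come from the documentation above.
    """
    return {
        "url": env.get("ALERTMANAGER_URL", "http://localhost:9093"),
        "username": env.get("ALERTMANAGER_USERNAME"),
        "password": env.get("ALERTMANAGER_PASSWORD"),
        "tenant": env.get("ALERTMANAGER_TENANT"),
        "transport": env.get("MCP_TRANSPORT", "stdio"),
        "host": env.get("MCP_HOST", "0.0.0.0"),
        "port": int(env.get("MCP_PORT", "8000")),
    }

# Any mapping works, which makes the loader easy to test without
# touching the real process environment.
cfg = load_config({"ALERTMANAGER_URL": "http://am:9093", "MCP_PORT": "9000"})
```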
Security Notes
The server uses environment variables for sensitive configuration like Alertmanager URL, username, and password, avoiding hardcoded secrets. It employs basic authentication and provides multi-tenant support via the `X-Scope-OrgId` header, which is handled securely using `ContextVar` for per-request isolation. HTTP requests use a fixed timeout (60 seconds) to prevent hanging connections. There are no explicit uses of `eval` or other obvious code injection vectors in the provided source code. The primary network risk involves `requests.request` calls to a configurable `ALERTMANAGER_URL`, but this URL is typically managed via environment variables, limiting user-controlled SSRF risks. Overall, the security posture appears sound for its intended purpose.
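The per-request tenant isolation described above can be sketched with a `ContextVar` that each request handler sets before calling out to Alertmanager. The helper names below (`tenant_var`, `build_headers`, `am_request`) are illustrative, not the server's actual API; only the header name, basic auth, the `requests.request` call, and the 60-second timeout come from the description above.

```python
import os
from contextvars import ContextVar

# Per-request tenant ID; ContextVar keeps concurrent requests isolated.
tenant_var = ContextVar("tenant", default=None)

def build_headers(extra=None):
    """Attach the multi-tenant X-Scope-OrgId header, if a tenant is set."""
    headers = dict(extra or {})
    tenant = tenant_var.get() or os.environ.get("ALERTMANAGER_TENANT")
    if tenant:
        headers["X-Scope-OrgId"] = tenant
    return headers

def am_request(method, path, **kwargs):
    """Issue a request to the configured Alertmanager with auth and a fixed timeout."""
    import requests  # third-party HTTP client used by the server

    url = os.environ["ALERTMANAGER_URL"].rstrip("/") + path
    auth = None
    user = os.environ.get("ALERTMANAGER_USERNAME")
    if user:
        auth = (user, os.environ.get("ALERTMANAGER_PASSWORD", ""))
    return requests.request(
        method,
        url,
        headers=build_headers(kwargs.pop("headers", None)),
        auth=auth,
        timeout=60,  # fixed timeout to prevent hanging connections
        **kwargs,
    )
```

Because the tenant lives in a `ContextVar` rather than a module-level global, two concurrent MCP requests serving different tenants cannot leak each other's `X-Scope-OrgId` value.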
Similar Servers
inspector
A web-based client and proxy server for inspecting and interacting with Model Context Protocol (MCP) servers, allowing users to browse resources, prompts, and tools, perform requests, and debug OAuth authentication flows.
mcp-grafana
Provides a Model Context Protocol (MCP) server for Grafana, enabling AI agents to interact with Grafana features such as dashboards, datasources, alerting, incidents, and more through a structured tool-based interface.
opentelemetry-mcp-server
Enables AI assistants to query and analyze OpenTelemetry traces from LLM applications for debugging, performance, and cost optimization.
prometheus-mcp-server
Serves as an MCP (Model Context Protocol) gateway, enabling Large Language Models (LLMs) to interact with and analyze a running Prometheus instance through its API.