a2a
Verified Safe by kubestellar
Overview
An AI agent for multi-cluster Kubernetes management, enabling workload distribution and operational tasks across KubeStellar environments.
Installation
`uv run kubestellar-mcp`
Environment Variables
- KUBECONFIG
- OPENAI_API_KEY
- GEMINI_API_KEY
- DEFAULT_LLM_PROVIDER
- GEMINI_MODEL
- OPENAI_MODEL
- LLM_TEMPERATURE
- SHOW_THINKING
- SHOW_TOKEN_USAGE
- COLOR_OUTPUT
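A minimal sketch of how a server like this might read the variables above. The variable names match the list; the helper name `load_llm_config` and all default values are assumptions, not taken from the actual implementation:

```python
import os

def load_llm_config():
    # Hypothetical helper: reads the documented environment variables.
    # Defaults shown here are illustrative assumptions only.
    return {
        "provider": os.environ.get("DEFAULT_LLM_PROVIDER", "openai"),
        "gemini_model": os.environ.get("GEMINI_MODEL"),
        "openai_model": os.environ.get("OPENAI_MODEL"),
        "temperature": float(os.environ.get("LLM_TEMPERATURE", "0.0")),
        "show_thinking": os.environ.get("SHOW_THINKING", "false").lower() == "true",
    }

os.environ["DEFAULT_LLM_PROVIDER"] = "gemini"
os.environ["LLM_TEMPERATURE"] = "0.2"
cfg = load_llm_config()
print(cfg["provider"], cfg["temperature"])  # → gemini 0.2
```

Keys like `OPENAI_API_KEY` and `GEMINI_API_KEY` should stay in the environment (or a permissions-restricted file) rather than being hard-coded.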
Security Notes
The server primarily interacts with `kubectl` and `helm` via `asyncio.create_subprocess_exec`, which is generally safer than `shell=True`. However, as an LLM agent, it's inherently susceptible to prompt injection, where a malicious user could instruct the LLM to construct and execute harmful Kubernetes manifests (via `policy_yaml` in `binding_policy_management`) or fetch manifests from untrusted sources (`fetch_manifest` with `insecure_skip_tls_verify`). API keys are handled securely via environment variables or a permissions-restricted file. No obvious `eval` or code obfuscation was found.
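To illustrate why `asyncio.create_subprocess_exec` is safer than `shell=True`: arguments are passed directly to the executable as a list, so shell metacharacters in user-supplied input are never interpreted. This is a generic sketch (using `echo` as a stand-in for `kubectl`/`helm`), not code from the server itself:

```python
import asyncio

async def run_cmd(*args: str) -> tuple[int, str]:
    # No shell is involved: args go straight to exec(), so input like
    # "hello; rm -rf /" is a single literal argument, not two commands.
    proc = await asyncio.create_subprocess_exec(
        *args,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    out, _err = await proc.communicate()
    return proc.returncode, out.decode()

code, out = asyncio.run(run_cmd("echo", "hello; rm -rf /"))
print(out)  # → hello; rm -rf /   (printed verbatim, nothing executed)
```

Note this only blocks shell injection; it does not protect against prompt injection, where the LLM is tricked into choosing harmful but syntactically valid `kubectl`/`helm` arguments or manifests.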
Similar Servers
mcp-server-kubernetes
This MCP server enables AI agents to connect to and manage Kubernetes clusters by executing kubectl and Helm commands.
kubernetes-mcp-server
Facilitates AI agent interaction with Kubernetes and OpenShift clusters by exposing management and observability tools via the Model Context Protocol.
mcp-k8s-go
This MCP server enables interaction with Kubernetes clusters to list, get, apply, and execute commands on various resources through a conversational interface.
mcp_massive
An AI agent orchestration server, likely interacting with LLMs and managing multi-agent workflows.