a2a
Verified Safe by kubestellar
Overview
AI-powered agent for multi-cluster Kubernetes management and orchestration using KubeStellar.
Installation
uv run kubestellar agent
Environment Variables
- OPENAI_API_KEY
- GEMINI_API_KEY
- CLAUDE_API_KEY
- ANTHROPIC_API_KEY
- KUBECONFIG
- DEFAULT_LLM_PROVIDER
- GEMINI_MODEL
- CLAUDE_MODEL
- OPENAI_MODEL
- LLM_TEMPERATURE
- SHOW_THINKING
- SHOW_TOKEN_USAGE
- COLOR_OUTPUT
- XDG_CONFIG_HOME
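The variables above select the LLM provider and model, point the agent at a kubeconfig, and control output behavior. As a rough sketch only (the `load_settings` helper and the default values below are assumptions, not taken from the agent's code), they might be consumed along these lines:

```python
# Illustrative sketch of reading the agent's configuration from the
# environment. Helper name and defaults are assumptions for this example.
import os


def load_settings() -> dict:
    provider = os.environ.get("DEFAULT_LLM_PROVIDER", "openai").lower()
    # Provider-specific API keys and model names from the variable list above.
    api_keys = {
        "openai": os.environ.get("OPENAI_API_KEY"),
        "gemini": os.environ.get("GEMINI_API_KEY"),
        "claude": os.environ.get("CLAUDE_API_KEY") or os.environ.get("ANTHROPIC_API_KEY"),
    }
    models = {
        "openai": os.environ.get("OPENAI_MODEL"),
        "gemini": os.environ.get("GEMINI_MODEL"),
        "claude": os.environ.get("CLAUDE_MODEL"),
    }
    return {
        "provider": provider,
        "api_key": api_keys.get(provider),
        "model": models.get(provider),
        "kubeconfig": os.environ.get("KUBECONFIG", os.path.expanduser("~/.kube/config")),
        # Assumed defaults; the agent's real fallbacks may differ.
        "temperature": float(os.environ.get("LLM_TEMPERATURE", "0.7")),
        "show_thinking": os.environ.get("SHOW_THINKING", "false").lower() == "true",
        "show_token_usage": os.environ.get("SHOW_TOKEN_USAGE", "false").lower() == "true",
        "color_output": os.environ.get("COLOR_OUTPUT", "true").lower() == "true",
    }
```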
Security Notes
The agent executes `kubectl` and `helm` commands via `subprocess.create_subprocess_exec` and `subprocess.Popen`. Commands are constructed as argument lists rather than shell strings, which mitigates direct shell injection. However, because the agent derives these commands from LLM output, there is an inherent risk of executing unintended commands if the model hallucinates or misinterprets instructions; the plan-review and user-confirmation step in `agent.py` is the key safeguard against this. The `fetch_manifest` function can download content from arbitrary URLs and exposes an `insecure_skip_tls_verify` option, which is risky if used without caution. API keys are stored in a `.config/kubestellar/api_keys.json` file with restrictive permissions (0o600) or supplied via environment variables.
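For reference, the two patterns described above look roughly like the following. This is a minimal sketch, not the project's actual code: `run_kubectl` and `save_api_keys` are illustrative names, but the techniques (explicit argument lists with no shell, and an owner-only 0o600 key file) match the description.

```python
# Sketch of the safety patterns described in the security notes.
import asyncio
import json
import os
from pathlib import Path


async def run_kubectl(args: list[str]) -> str:
    """Run kubectl with an explicit argument list; no shell is involved,
    so metacharacters in arguments are not interpreted."""
    proc = await asyncio.create_subprocess_exec(
        "kubectl", *args,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    out, err = await proc.communicate()
    if proc.returncode != 0:
        raise RuntimeError(err.decode())
    return out.decode()


def save_api_keys(keys: dict[str, str]) -> Path:
    """Write API keys under $XDG_CONFIG_HOME/kubestellar/api_keys.json
    and restrict the file to owner read/write (0o600)."""
    config_home = Path(os.environ.get("XDG_CONFIG_HOME", Path.home() / ".config"))
    key_file = config_home / "kubestellar" / "api_keys.json"
    key_file.parent.mkdir(parents=True, exist_ok=True)
    key_file.write_text(json.dumps(keys, indent=2))
    key_file.chmod(0o600)  # owner read/write only
    return key_file


if __name__ == "__main__":
    # Example usage; requires kubectl on PATH and a reachable cluster.
    print(asyncio.run(run_kubectl(["get", "pods", "-A"])))
```

Note that this only addresses shell injection; it does not protect against a syntactically valid but unintended command, which is why the plan-review step matters.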
Similar Servers
mcp-server-kubernetes
Manages Kubernetes clusters by executing kubectl and Helm commands, facilitating automation and interaction through the Model Context Protocol.
kubernetes-mcp-server
Provides a Model Context Protocol (MCP) server for AI agents to interact with Kubernetes and OpenShift clusters, enabling AI-driven cluster management and diagnosis.
mcp-k8s-go
An MCP server enabling AI assistants and users to interact with and manage Kubernetes clusters by listing, getting, applying, and executing commands on Kubernetes resources.
mcp_massive
An AI agent orchestration server, likely interacting with LLMs and managing multi-agent workflows.