sisterd_lite
by rintaro-s
Overview
An AI-native OS core designed for LLMs to autonomously monitor, control, and optimize Linux systems by interacting with system services and tools.
Installation
./start-mcp.sh
Environment Variables
- SYSTERD_STATE_DIR
- SYSTERD_SOCKET
- SYSTERD_MODE_TOKEN
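A minimal setup sketch for the variables above. The paths and token value are illustrative assumptions, not documented defaults; adjust them before launching:

```shell
# Illustrative environment setup -- values below are assumptions, not defaults.
export SYSTERD_STATE_DIR="$HOME/.systerd/state"   # where the server persists state
export SYSTERD_SOCKET="/tmp/systerd.sock"         # local control socket path
export SYSTERD_MODE_TOKEN="change-me"             # token gating privileged modes

mkdir -p "$SYSTERD_STATE_DIR"
# ./start-mcp.sh   # launch once the variables above are exported
```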
Security Notes
The server exposes powerful system-management tools, including direct shell execution (`execute_shell_command`, implemented with `subprocess.run(..., shell=True)`) and self-modification capabilities (reading and writing workspace files). A permission system (`PermissionManager`) and auditing decorators (`permission_audit`) are in place, but the 'full' template explicitly grants AI agents broad, highly privileged access.

This design is inherently high-risk: a compromised or hallucinating LLM could execute arbitrary, destructive commands, including system reboots, user management, and sensitive file modifications. Exposing these tools over HTTP/JSON-RPC (ports 8089/7861 per the README, 8888/7860 per the script defaults) without strong external authentication or sandboxing constitutes a severe vulnerability. The `OllamaClient` makes outbound connections and `ContainerManager` invokes Docker commands directly, further widening the attack surface.
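The permission-and-audit pattern described above can be sketched as follows. The names `PermissionManager`, `permission_audit`, and `execute_shell_command` mirror the server's components, but the signatures and allow-list logic here are illustrative assumptions, not the project's actual implementation:

```python
# Hedged sketch of the permission/audit pattern; signatures are assumptions.
import functools
import subprocess

class PermissionManager:
    """Minimal allow-list: records which tools the agent may call."""
    def __init__(self, granted):
        self.granted = set(granted)

    def check(self, tool_name):
        return tool_name in self.granted

AUDIT_LOG = []  # every attempted call is appended here, allowed or not

def permission_audit(manager, tool_name):
    """Decorator: log each attempt, then deny the call unless permitted."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            allowed = manager.check(tool_name)
            AUDIT_LOG.append((tool_name, args, allowed))
            if not allowed:
                raise PermissionError(f"{tool_name} denied")
            return fn(*args, **kwargs)
        return wrapper
    return decorate

manager = PermissionManager(granted={"execute_shell_command"})

@permission_audit(manager, "execute_shell_command")
def execute_shell_command(cmd):
    # shell=True is the risky part flagged above: cmd is handed to /bin/sh verbatim
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout
```

Note that the audit trail alone does not mitigate the risk: the 'full' template amounts to populating `granted` with every tool, at which point the decorator only records destructive calls rather than preventing them.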
Similar Servers
schedcp
Develop and deploy eBPF-based Linux kernel schedulers to optimize performance for specific workloads, particularly those exhibiting 'long-tail' load imbalances.
mcp-kubernetes
The mcp-kubernetes server acts as a bridge for AI assistants to interact with Kubernetes clusters, translating natural language requests into kubectl, Helm, Cilium, or Hubble operations for debugging and management.
1xn-vmcp
An open-source platform for composing, customizing, and extending multiple Model Context Protocol (MCP) servers into a single logical, virtual MCP server, enabling fine-grained context engineering for AI workflows and agents.
llms
A centralized configuration and documentation management system for LLMs, providing tools for building skills, commands, agents, prompts, and managing MCP servers across multiple LLM providers.