sisterd_lite
by rintaro-s
Overview
An AI-native OS core designed for LLMs to autonomously monitor, control, and optimize Linux systems by interacting with system services and tools.
Installation
`./start-mcp.sh`
Environment Variables
- SYSTERD_STATE_DIR
- SYSTERD_SOCKET
- SYSTERD_MODE_TOKEN
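The variables above might be consumed along these lines. This is a minimal sketch, not the server's actual startup code; the default values and the "token must be set" policy are assumptions for illustration:

```python
import os

# Hypothetical defaults -- the real server may use different ones.
STATE_DIR = os.environ.get("SYSTERD_STATE_DIR", os.path.expanduser("~/.systerd"))
SOCKET_PATH = os.environ.get("SYSTERD_SOCKET", os.path.join(STATE_DIR, "systerd.sock"))
MODE_TOKEN = os.environ.get("SYSTERD_MODE_TOKEN")  # no default: the token should be explicit

if MODE_TOKEN is None:
    # An unset token means mode switching is unauthenticated.
    print("warning: SYSTERD_MODE_TOKEN is not set")

print(f"state dir: {STATE_DIR}")
print(f"socket:    {SOCKET_PATH}")
```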
Security Notes
The server exposes powerful system management tools, including direct shell command execution (`execute_shell_command`, implemented with `subprocess.run(..., shell=True)`) and self-modification capabilities (reading and writing workspace files).

A permission system (`PermissionManager`) and auditing decorators (`permission_audit`) are in place, but the 'full' template explicitly grants AI agents broad, highly privileged access. This design is inherently high-risk: a compromised or hallucinating LLM could execute arbitrary, destructive commands, including system reboots, user management, and sensitive file modifications.

Exposing these tools over HTTP/JSON-RPC (ports 8089/7861 per the README, 8888/7860 per the script defaults) without strong external authentication or sandboxing constitutes a severe vulnerability. The `OllamaClient` creates outbound connections, and `ContainerManager` directly invokes Docker commands, further widening the attack surface.
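The core risk of `shell=True` can be shown with a standalone sketch (this is illustrative, not the server's actual code): when the command is a single string handed to `/bin/sh`, shell metacharacters in untrusted input become additional commands, whereas an argument list is passed verbatim to one program.

```python
import subprocess

# Imagine an LLM hallucinating or an attacker supplying this "argument".
untrusted = "/tmp; echo INJECTED"

# shell=True: /bin/sh parses the string, so "; echo INJECTED"
# executes as a second command -- classic command injection.
risky = subprocess.run("ls " + untrusted, shell=True,
                       capture_output=True, text=True)

# shell=False with an argument list: no shell parses the string;
# the suspicious text is just a literal (nonexistent) path, so
# the injection never runs and ls fails with an error instead.
safe = subprocess.run(["ls", untrusted],
                      capture_output=True, text=True)
```

Validating commands against an allowlist, or avoiding `shell=True` entirely, is the usual mitigation when tool input may originate from a model.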
Similar Servers
schedcp
Develop, evaluate, and dynamically manage custom eBPF-based CPU schedulers for Linux, particularly focusing on optimizing long-tail and memory-intensive workloads (like AI/ML, I/O, distributed processing).
1xn-vmcp
An open-source platform for composing, customizing, and extending multiple Model Context Protocol (MCP) servers into a single logical, virtual MCP server, enabling fine-grained context engineering for AI workflows and agents.
mcp-cybersec-watchdog
A Linux server security auditing and continuous monitoring tool that provides security posture analysis and anomaly detection capabilities, designed to be integrated with AI agents.
llms
A centralized configuration and documentation management system for LLMs, providing tools for building skills, commands, agents, prompts, and managing MCP servers across multiple LLM providers.