charlie
by henriquemoody
Overview
A universal command-line interface (CLI) and Python library that generates agent-specific configurations (commands, rules, MCP servers) for various AI agents from a single YAML or Markdown specification, supporting configuration inheritance and variable templating.
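The exact specification schema is not documented here; a hypothetical minimal `charlie.yaml` might look like the following. The `extends` field and the `{{env:...}}` placeholder syntax appear in this document; the other field names (`commands`, `rules`, `mcp_servers`) are assumptions for illustration only.

```yaml
# Hypothetical charlie.yaml sketch — field names other than `extends`
# and the {{env:...}} syntax are illustrative assumptions.
extends: https://github.com/example-org/shared-agent-config.git

commands:
  - name: lint
    run: ruff check .

rules:
  - Always write tests before implementation.

mcp_servers:
  github:
    env:
      GITHUB_PERSONAL_ACCESS_TOKEN: "{{env:GITHUB_PERSONAL_ACCESS_TOKEN}}"
```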
Installation
docker run --rm -v $(pwd):/workspace ghcr.io/henriquemoody/charlie generate <agent_name>

Environment Variables
- LANG (if referenced as {{env:LANG}} in a configuration, e.g., in examples/simple/charlie.yaml)
- MY_CUSTOM_ENV_VAR (any variable referenced as {{env:MY_CUSTOM_ENV_VAR}} in the configuration must be set in the environment or in a .env file; otherwise generation fails with an error)
- Any variable defined within an MCP server's 'env' section (e.g., GITHUB_PERSONAL_ACCESS_TOKEN) is passed directly to the MCP server process
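The placeholder resolution described above can be sketched roughly as follows. This is a minimal illustration of the `{{env:VAR}}` substitution behavior, not Charlie's actual implementation; the function name and regex are assumptions.

```python
import os
import re

# Matches placeholders of the form {{env:SOME_VAR}}.
ENV_PATTERN = re.compile(r"\{\{env:([A-Za-z_][A-Za-z0-9_]*)\}\}")

def resolve_env_placeholders(text: str) -> str:
    """Replace every {{env:NAME}} with the value of $NAME, or raise."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        value = os.environ.get(name)
        if value is None:
            # Mirrors the documented behavior: an unset variable is an error.
            raise KeyError(f"environment variable {name!r} is not set")
        return value

    return ENV_PATTERN.sub(substitute, text)
```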
Security Notes
The `repository_fetcher.py` module clones arbitrary Git repositories specified in the `extends` field of the configuration without validating the source. The source code explicitly notes this as a risk: a malicious `extends` URL could introduce harmful code or configurations into the project.

The tool uses `subprocess.run` to execute git commands, a common vector for command injection when inputs are not properly sanitized. While the inputs appear controlled within Charlie's internal logic, they originate from external configurations (e.g., repository URLs) and therefore require trust in those sources. There are no obvious signs of `eval` or other direct code-execution vulnerabilities within Charlie itself, and `yaml.safe_load` is used for parsing.
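One possible mitigation for the risk described above is to validate an `extends` URL against an allowlist of trusted hosts before invoking git. This sketch is not part of Charlie; the function name, the allowlist, and the policy itself are assumptions for illustration.

```python
import subprocess
from urllib.parse import urlparse

# Hypothetical allowlist — which hosts to trust is a policy decision.
TRUSTED_HOSTS = {"github.com", "gitlab.com"}

def safe_clone(url: str, dest: str) -> None:
    """Clone `url` into `dest` only if it points at a trusted HTTPS host."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or parsed.hostname not in TRUSTED_HOSTS:
        raise ValueError(f"refusing to clone untrusted repository: {url}")
    # Passing an argument list (not a shell string) avoids shell injection;
    # "--" stops git from interpreting the URL as an option.
    subprocess.run(
        ["git", "clone", "--depth", "1", "--", url, dest],
        check=True,
    )
```

Shallow cloning (`--depth 1`) also limits how much untrusted history is fetched, which matters mainly for performance rather than security.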
Similar Servers
inspector
A web-based client and proxy server for inspecting and interacting with Model Context Protocol (MCP) servers, allowing users to browse resources, prompts, and tools, perform requests, and debug OAuth authentication flows.
Lynkr
Lynkr is an AI orchestration layer that acts as an LLM gateway, routing language model requests to various providers (Ollama, Databricks, OpenAI, etc.). It provides an OpenAI-compatible API and enables AI-driven coding tasks via a rich set of tools and a multi-agent framework, with a strong focus on security, performance, and token efficiency. It allows AI agents to interact with a defined workspace (reading/writing files, executing shell commands, performing Git operations) and leverages long-term memory and agent learning to enhance task execution.
AgentUp
A developer-first framework for building, deploying, and managing secure, scalable, and configurable AI agents, supporting various agent types (reactive, iterative) and the Model Context Protocol (MCP) for seamless interactions.
responsible-vibe-mcp
Manages conversation state and guides LLM coding agents through structured software development workflows with long-term project memory and multi-agent collaboration.