workflows-mcp
Verified Safe by qtsone
Overview
Automates, orchestrates, and manages development workflows (such as CI/CD, TDD phases, GitHub issue handling, and file operations) through AI assistants using YAML definitions. It acts as a backend server that AI assistants use to execute complex, multi-step automation tasks.
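As a rough illustration, a YAML workflow definition might look like the following sketch. The field and block names here are hypothetical (only the `Shell` block type and Jinja2-style templating are documented above), so treat this as a shape, not the actual schema:

```yaml
# Hypothetical workflow sketch; field names are illustrative only.
name: run-tests
description: Run the test suite and report the result
inputs:
  - name: test_path
    default: tests/
blocks:
  - type: Shell
    id: run_pytest
    command: "pytest {{ inputs.test_path }} --maxfail=1"
  - type: Shell
    id: report
    command: "echo 'Tests finished with exit code {{ blocks.run_pytest.exit_code }}'"
```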
Installation
`uvx workflows-mcp --refresh`
Environment Variables
- WORKFLOW_SECRET_ANY_KEY
- WORKFLOWS_TEMPLATE_PATHS
- LLM_CONFIG_PATH
- OPENAI_API_KEY
- ANTHROPIC_API_KEY
- GEMINI_API_KEY
- OLLAMA_API_KEY
- AZURE_OPENAI_KEY
- GITHUB_TOKEN
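These variables are typically supplied through the MCP client's configuration rather than exported globally. A hypothetical Claude Desktop-style entry (the server name, paths, and placeholder values are illustrative):

```json
{
  "mcpServers": {
    "workflows": {
      "command": "uvx",
      "args": ["workflows-mcp", "--refresh"],
      "env": {
        "WORKFLOWS_TEMPLATE_PATHS": "/path/to/templates",
        "GITHUB_TOKEN": "<your-token>"
      }
    }
  }
}
```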
Security Notes
The server's core functionality involves executing arbitrary shell commands (`Shell` block type) and interacting with external APIs and filesystems as defined in YAML workflows. While this is its intended purpose, it inherently carries security risks if malicious workflows are executed. The project implements several important mitigations: automatic secret redaction from outputs, audit logging for secret access, and variable resolution rules (`ForbiddenNamespaceRule`, `SecretRedactionRule`) that prevent Jinja2 templates from gaining unauthorized access to system resources or secrets.

However, one of the first-party templates (`github-create-issue.yaml`) uses `eval` within a `Shell` block for dynamic command construction. Although the variables are constructed carefully, `eval` in a shell context is generally a high-risk operation and can be a vector for shell injection if inputs are not perfectly sanitized. The `ShellExecutor` itself uses `shlex.split` for command execution (safer than `shell=True`), but the ultimate impact depends on the content of the `command` string after all template rendering.
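The distinction between `shlex.split` and a shell matters in practice. The sketch below is not the project's actual `ShellExecutor` (the `run_safe` helper is hypothetical); it only demonstrates why tokenizing a command with `shlex.split` and running it without a shell leaves metacharacters inert, whereas `shell=True` or `eval` would interpret them:

```python
import shlex
import subprocess

def run_safe(command: str) -> str:
    """Hypothetical helper: tokenize with shlex.split and run WITHOUT a shell,
    so shell metacharacters inside arguments are passed through literally."""
    result = subprocess.run(shlex.split(command), capture_output=True, text=True)
    return result.stdout

# A hostile value attempting command substitution.
hostile = "$(touch /tmp/pwned)"

# Quoted and tokenized, it reaches echo as a plain string; the $(...) is
# never interpreted because no shell is involved.
print(run_safe("echo " + shlex.quote(hostile)))
```

Note that this safety only holds for the argument boundaries: if template rendering is allowed to inject an entire command string that is later handed to `eval` or a shell, tokenization after the fact cannot undo the injection.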
Similar Servers
npcpy
Core library of the NPC Toolkit that supercharges natural language processing pipelines and agent tooling. It's a flexible framework for building state-of-the-art applications and conducting novel research with LLMs. Supports multi-agent systems, fine-tuning, reinforcement learning, genetic algorithms, model ensembling, and NumPy-like operations for AI models (NPCArray). Includes a built-in Flask server for deploying agent teams via REST APIs, and multimodal generation (image, video, audio).
arcade-mcp
Provides a framework and pre-built toolkits for integrating Large Language Models (LLMs) with various external services and databases, enabling AI agents to interact with the real world.
concierge
A framework for building and serving agentic workflows, enabling autonomous agents to interact with application services through structured stages and tasks.
claude-prompts
This server provides a hot-reloadable prompt engine with chains, quality gates, and structured reasoning for AI assistants, enhancing control over Claude's behavior in prompt workflows.