
MCP-AI-swarm

Verified Safe

by bigmonmulgrew

Overview

An AI orchestration and microservices framework designed to route, manage, and distribute LLM requests to various AI backends for tasks like data extraction from unstructured text and generating visual output.

Installation

Run Command
docker compose up --build

Environment Variables

  • OLLAMA_HOST
  • OLLAMA_PORT_E
  • OLLAMA_PORT_I
  • OLLAMA_MODEL_STORAGE
  • OLLAMA_URL
  • OLLAMA_DEFAULT_MODEL
  • AI_MAX_CONCURRENT
  • AI_QUEUE_URL
  • MCPS_HOST
  • MCPS_PORT_I
  • MCPS_PORT_E
  • MCP_DATA_HOST
  • MCP_DATA_PORT_E
  • MCP_DATA_URL
  • MCP_VISUALISER_HOST
  • MCP_VISUALISER_PORT_E
  • MCP_VISUALISER_URL
  • MCP_VERDICT_PORT_E
  • NVIDIA_VISIBLE_DEVICES
  • NVIDIA_DRIVER_CAPABILITIES
  • FRONTEND_PORT_E
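Docker Compose reads these variables from the environment, typically a `.env` file next to the compose file. A minimal sketch follows — every value is an illustrative guess, not a documented default, and the `_I`/`_E` suffixes are assumed to mean internal (container) and external (host) ports:

```
# Example .env — all values are illustrative assumptions; adjust to your deployment.
OLLAMA_HOST=ollama
OLLAMA_PORT_I=11434              # internal (container) port, assumed
OLLAMA_PORT_E=11434              # external (host) port, assumed
OLLAMA_URL=http://ollama:11434
OLLAMA_DEFAULT_MODEL=llama3      # hypothetical default model
OLLAMA_MODEL_STORAGE=./models
AI_MAX_CONCURRENT=2
MCPS_HOST=mcps
MCPS_PORT_I=8000
MCPS_PORT_E=8000
NVIDIA_VISIBLE_DEVICES=all
NVIDIA_DRIVER_CAPABILITIES=compute,utility
FRONTEND_PORT_E=3000
```

With this file in place, `docker compose up --build` will substitute the values into the compose configuration automatically.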

Security Notes

The `setup_envs.py` script uses `subprocess.run(..., shell=True)`, which is a security risk when the `cmd` or `cwd` arguments derive from untrusted input. In this specific context, however, the commands are hardcoded or constructed from internal paths, which mitigates immediate exploitability for the setup script itself. The main server components (FastAPI) use Pydantic for request-body validation, which helps prevent injection attacks. No direct `eval` calls or obvious hardcoded secrets are present in the core application logic. Inter-service communication is over plain HTTP, so a production deployment should add proper network isolation and authentication.
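The `shell=True` risk mentioned above can be illustrated with a small stdlib-only sketch (not taken from the repository): passing an argument list with the default `shell=False` keeps untrusted strings as literal arguments instead of shell syntax.

```python
import subprocess

# A value that would be dangerous if spliced into a shell command string:
# under shell=True, the `;` would terminate the first command and run a second one.
user_path = "data; echo INJECTED"

# Risky pattern (commented out): the whole string is re-parsed by the shell.
# subprocess.run(f"ls {user_path}", shell=True)   # also executes `echo INJECTED`

# Safer pattern: an argument list with shell=False (the default).
# Each element is passed verbatim; no shell ever re-parses it.
result = subprocess.run(["echo", user_path], capture_output=True, text=True)
print(result.stdout.strip())  # the string is echoed literally, not executed
```

Since `setup_envs.py` only runs hardcoded commands, this is hardening rather than a required fix, but the list form is the idiomatic default whenever any argument could ever come from outside the codebase.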

Stats

Interest Score: 0
Security Score: 7
Cost Class: Medium
Avg Tokens: 1500
Stars: 0
Forks: 0
Last Update: 2026-01-14

Tags

AI Orchestration, LLM Proxy, Microservices, Data Extraction, Distributed AI