
GeminiCLI

Verified Safe

by EPS-AI-SOLUTIONS

Overview

A lightweight MCP server for integrating with Ollama and Gemini CLI, featuring a task queue, response caching, prompt optimization, and a multi-agent swarm system for complex AI orchestration.

Installation

Run Command
node src/server.js

Environment Variables

  • API_VERSION
  • DEFAULT_MODEL
  • FAST_MODEL
  • CODER_MODEL
  • CACHE_DIR
  • CACHE_TTL
  • CACHE_ENABLED
  • CACHE_ENCRYPTION_KEY
  • QUEUE_MAX_CONCURRENT
  • QUEUE_MAX_RETRIES
  • QUEUE_RETRY_DELAY_BASE
  • QUEUE_TIMEOUT_MS
  • QUEUE_RATE_LIMIT_TOKENS
  • QUEUE_RATE_LIMIT_REFILL
  • MODEL_CACHE_TTL_MS
  • HEALTH_CHECK_TIMEOUT_MS
  • HYDRA_YOLO
  • HYDRA_RISK_BLOCKING
  • OLLAMA_HOST
  • GEMINI_API_KEY
  • LOG_LEVEL
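A minimal configuration sketch for the variables above. All values are illustrative examples, not the project's documented defaults:

```shell
# Illustrative environment setup; values are examples only.
export OLLAMA_HOST="http://localhost:11434"            # local Ollama endpoint
export GEMINI_API_KEY="your-api-key"                   # keep out of source control
export DEFAULT_MODEL="llama3.1"
export CACHE_ENABLED="true"
export CACHE_TTL="3600"                                # assumed to be in seconds
export CACHE_ENCRYPTION_KEY="$(openssl rand -hex 32)"  # enables AES-256-GCM cache encryption
export QUEUE_MAX_CONCURRENT="4"
export HYDRA_RISK_BLOCKING="true"                      # block, not just flag, risky prompts
export LOG_LEVEL="info"
# then start the server:
# node src/server.js
```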

Security Notes

The server implements active prompt risk detection (`detectPromptRisk`) to flag or block potentially malicious input, such as prompt-injection attempts, system-prompt disclosure, and data-exfiltration keywords. API keys are not hardcoded; the server reads `GEMINI_API_KEY`, `CACHE_ENCRYPTION_KEY`, and `OLLAMA_HOST` from environment variables. Cache entries are encrypted with AES-256-GCM when `CACHE_ENCRYPTION_KEY` is set; otherwise they are stored in plain text. One potential concern is the documented 'GOD MODE' capabilities of the agents: if a prompt injection were to bypass the built-in filters, it could lead to local system access or elevated privileges. The explicit risk-blocking configuration (`HYDRA_RISK_BLOCKING`) adds a layer of protection against this.

Stats

Interest Score: 0
Security Score: 8
Cost Class: High
Avg Tokens: 10000
Stars: 0
Forks: 0
Last Update: 2026-01-17

Tags

Ollama, Gemini, MCP, AI Orchestration, Prompt Engineering