ollama-mcp-server
Verified Safe · by paolodalprato
Overview
Provides a self-contained Model Context Protocol (MCP) server for local Ollama management, enabling features like listing models, chatting, server control, and intelligent model recommendations.
Installation
ollama-mcp-server
Environment Variables
- OLLAMA_HOST
- OLLAMA_PORT
- OLLAMA_TIMEOUT
- HARDWARE_ENABLE_GPU_DETECTION
- HARDWARE_GPU_MEMORY_FRACTION
- HARDWARE_ENABLE_CPU_FALLBACK
- HARDWARE_MEMORY_THRESHOLD_GB
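The variables above can be set in the environment before launching the server. A minimal sketch follows; the variable names come from the list above, but the values shown are illustrative assumptions, not documented defaults:

```shell
# Illustrative configuration only; values are examples, not documented defaults.
export OLLAMA_HOST=127.0.0.1            # address of the local Ollama daemon
export OLLAMA_PORT=11434                # Ollama's default API port
export OLLAMA_TIMEOUT=120               # request timeout, presumably in seconds
export HARDWARE_ENABLE_GPU_DETECTION=true
export HARDWARE_GPU_MEMORY_FRACTION=0.8 # fraction of GPU memory to use
export HARDWARE_ENABLE_CPU_FALLBACK=true
export HARDWARE_MEMORY_THRESHOLD_GB=8   # memory threshold in gigabytes
```

These would typically be placed in the `env` block of an MCP client's server configuration rather than exported globally.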
Security Notes
The server uses `subprocess.run` and `subprocess.Popen` to interact with the local `ollama` command-line tool and other system utilities (`nvidia-smi`, `rocm-smi`, `lspci`, `sysctl`). While the command arguments appear to be well-controlled and do not directly expose arbitrary command injection from raw user input, executing external binaries always carries an inherent risk. The primary security consideration is the integrity and security of the locally installed `ollama` executable and other system tools.
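The invocation pattern described above can be sketched as follows. This is not the server's actual code; `run_tool` is a hypothetical helper illustrating why list-form arguments without `shell=True` avoid shell injection:

```python
import shutil
import subprocess

def run_tool(binary_name: str, *args: str) -> str:
    """Run a local CLI tool (e.g. `ollama`, `nvidia-smi`) and return its stdout.

    Arguments are passed as a fixed list, never through a shell, so no string
    from user input is ever parsed by /bin/sh. The remaining risk is the one
    noted above: the integrity of the binary itself as resolved on PATH.
    """
    binary = shutil.which(binary_name)
    if binary is None:
        raise FileNotFoundError(f"{binary_name} not found on PATH")
    result = subprocess.run(
        [binary, *args],          # list form: no shell parsing of arguments
        capture_output=True,
        text=True,
        timeout=30,
    )
    result.check_returncode()     # raise on a non-zero exit status
    return result.stdout
```

For example, `run_tool("ollama", "list")` would enumerate local models; a compromised `ollama` binary on PATH would still run with the caller's privileges, which is the residual risk the paragraph above describes.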
Similar Servers
osaurus
Osaurus is an AI edge runtime for macOS, enabling users to run local and cloud AI models, orchestrate tools via the Model Context Protocol (MCP), and power AI applications and workflows on Apple Silicon.
mcp-client-for-ollama
An interactive terminal client for connecting local Ollama LLMs to Model Context Protocol (MCP) servers, enabling advanced tool use and workflow automation for local LLMs.
claude-prompts-mcp
Enhances AI assistant behavior through structured prompt management, multi-step chains, quality gates, and autonomous verification loops, primarily for development tasks.
mcp-rubber-duck
An MCP (Model Context Protocol) server that acts as a bridge to query multiple OpenAI-compatible LLMs, enabling multi-agent AI workflows and providing an AI 'rubber duck' debugging panel.