ollama-fastmcp-wrapper
by andreamoro
Overview
A proxy service that bridges Ollama with FastMCP, enabling tool-augmented reasoning with local LLMs by exposing the functionality of configured MCP servers to Ollama models.
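The bridging pattern described above can be sketched roughly as follows. This is a minimal illustration, not the wrapper's actual code: the tool registry, the `add_numbers` tool, and the hard-coded tool call are hypothetical stand-ins for what FastMCP discovery and an Ollama model would supply at runtime.

```python
# Minimal sketch of the proxy pattern: a model emits a tool call, and the
# bridge dispatches it to a locally registered tool. All names here
# (TOOL_REGISTRY, add_numbers) are illustrative, not project identifiers.
from typing import Any, Callable

# In the real wrapper, this registry would be populated by querying the
# connected MCP servers for their available tools.
TOOL_REGISTRY: dict[str, Callable[..., Any]] = {}

def register_tool(func: Callable[..., Any]) -> Callable[..., Any]:
    """Register a callable under its own name so the dispatcher can find it."""
    TOOL_REGISTRY[func.__name__] = func
    return func

@register_tool
def add_numbers(a: int, b: int) -> int:
    """A benign example tool."""
    return a + b

def dispatch(tool_call: dict[str, Any]) -> Any:
    """Route a model-emitted tool call to the matching local tool."""
    name = tool_call["name"]
    if name not in TOOL_REGISTRY:
        raise ValueError(f"unknown tool: {name}")
    return TOOL_REGISTRY[name](**tool_call.get("arguments", {}))

# A tool call shaped like what a tool-capable model might emit.
result = dispatch({"name": "add_numbers", "arguments": {"a": 2, "b": 3}})
print(result)  # 5
```

The registry-plus-dispatcher shape is what lets a single proxy expose many MCP servers' tools behind one API surface.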
Installation
uv run python ollama_wrapper.py api
Security Notes
The primary security concern stems from the LLM's ability to invoke FastMCP tools based on user input. A malicious user who can craft prompts that trick the LLM into calling dangerous tools (if any are configured beyond the benign examples), or into passing exploitable arguments to legitimate tools, could trigger vulnerabilities such as arbitrary code execution or information disclosure.

The default binding of the API server to `0.0.0.0` (accessible from any network interface), combined with the absence of built-in authentication, further increases risk by allowing unauthorized network access. Additionally, the `StdioTransport` mechanism for spawning local MCP servers could become a vector for arbitrary command execution if an administrator populates `mcp_servers_config.toml` with malicious or untrusted commands.

While the `__coerce_parameters` function attempts basic type coercion, it is not a comprehensive input sanitizer and does not defend against all forms of malicious tool arguments. The server therefore relies on careful configuration and secure LLM prompting practices.
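To make the last point concrete, a coercion step like `__coerce_parameters` can be imagined along these lines (a hedged sketch under assumed behavior, not the project's code): it casts string arguments to the types a tool's signature declares, but any well-typed value passes through unchanged, so a malicious but type-correct string survives coercion intact.

```python
# Illustrative sketch of signature-driven type coercion. The function and
# the read_file tool below are hypothetical, not the wrapper's real code.
import inspect
from typing import Any

def coerce_parameters(func, raw_args: dict[str, Any]) -> dict[str, Any]:
    """Cast string arguments to the simple types (int, float, bool)
    declared in func's signature; leave everything else untouched."""
    sig = inspect.signature(func)
    coerced: dict[str, Any] = {}
    for name, value in raw_args.items():
        param = sig.parameters.get(name)
        if (param is not None
                and param.annotation in (int, float, bool)
                and isinstance(value, str)):
            if param.annotation is bool:
                coerced[name] = value.lower() in ("true", "1", "yes")
            else:
                coerced[name] = param.annotation(value)
        else:
            coerced[name] = value
    return coerced

def read_file(path: str, max_bytes: int) -> str:
    """A hypothetical file-reading tool."""
    ...

args = coerce_parameters(read_file, {"path": "../../etc/passwd",
                                     "max_bytes": "1024"})
# "1024" becomes the int 1024, but the path-traversal string is a valid
# str and passes through untouched: coercion is not sanitization.
```

This is why tool implementations themselves must validate arguments (e.g. restrict paths to an allowed directory) rather than trusting the coercion layer.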
Similar Servers
mcpo
Exposes Model Context Protocol (MCP) tools as OpenAPI-compatible HTTP servers.
mcp-client-for-ollama
An interactive terminal client for connecting local Ollama LLMs to Model Context Protocol (MCP) servers, enabling advanced tool use and workflow automation for local LLMs.
ollama-mcp-bridge
Provides an API layer in front of the Ollama API, seamlessly adding tools from multiple MCP servers so every Ollama request can access all connected tools transparently.
modular-mcp
A proxy server that efficiently manages and loads large tool collections from multiple Model Context Protocol (MCP) servers on-demand for LLMs, reducing context overhead.