mcp
Verified Safe by trustspirit
Overview
Integrate Google Gemini API features (text generation, chat, image analysis, web search, embeddings, image/video generation) into a Model Context Protocol (MCP) server for local or remote AI client applications.
Installation
docker run -i --rm -e GEMINI_API_KEY=your-api-key gemini-mcp
Environment Variables
- GEMINI_API_KEY
- MCP_MODE
- PORT
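A minimal sketch of how a Node.js server like this one typically reads the variables listed above. The variable names come from the list; the default values ("stdio" mode, port 3000) and the `config` shape are assumptions for illustration, not taken from the server's source.

```javascript
// Hedged sketch: reading the server's configuration from the environment.
// GEMINI_API_KEY, MCP_MODE, and PORT are the documented variable names;
// the fallback defaults here are illustrative assumptions.
const config = {
  apiKey: process.env.GEMINI_API_KEY ?? "",   // required in practice
  mode: process.env.MCP_MODE ?? "stdio",      // assumed default transport
  port: Number(process.env.PORT ?? "3000"),   // assumed default port
};

console.log(`mode=${config.mode} port=${config.port}`);
```

When run via Docker as shown above, each `-e NAME=value` flag populates one of these variables inside the container.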
Security Notes
The server loads API keys from environment variables and does not appear to use 'eval' or other dynamic code execution. It relies on standard Node.js and Google Generative AI SDKs. CORS is enabled, which is acceptable for local client integration but would require stricter origin control if the server were exposed publicly. Notably, the Gemini safety settings are set to 'BLOCK_NONE' for all harm categories, so the model will not proactively block potentially harmful content. This is a configuration choice for AI output, not a vulnerability in the server's code, but users should be aware of it.
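For reference, the 'BLOCK_NONE' configuration described above generally looks like the following when passed to the Google Generative AI Node SDK. This is a sketch of the setting, not a copy of this server's code; the category names are the SDK's string enum values, and raising each threshold (e.g. to "BLOCK_MEDIUM_AND_ABOVE") would restore content filtering.

```javascript
// The four harm categories the Gemini API lets clients configure.
const HARM_CATEGORIES = [
  "HARM_CATEGORY_HARASSMENT",
  "HARM_CATEGORY_HATE_SPEECH",
  "HARM_CATEGORY_SEXUALLY_EXPLICIT",
  "HARM_CATEGORY_DANGEROUS_CONTENT",
];

// Per the security notes, blocking is disabled for every category.
// These objects would be passed as `safetySettings` in a generateContent call.
const safetySettings = HARM_CATEGORIES.map((category) => ({
  category,
  threshold: "BLOCK_NONE",
}));
```

Clients that want stricter filtering would need to fork or reconfigure the server, since the threshold is fixed server-side rather than exposed as an environment variable.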
Similar Servers
gemini-mcp-tool
A Model Context Protocol (MCP) server that enables AI assistants to interact with the Google Gemini CLI for comprehensive code and file analysis, structured edit suggestions, and creative brainstorming.
gemini-mcp-rs
A high-performance Rust MCP server that enables AI-driven tasks by wrapping the Gemini CLI, facilitating integration with MCP-compatible clients like Claude Code.
gemini-mcp
The server provides a Model Context Protocol (MCP) interface to Google Gemini AI services, enabling multimodal generation including image creation, image editing, and video production.
mcp-gemini-prompt-enhancer
A Model Context Protocol (MCP) server that provides a prompt optimization service for Large Language Models (LLMs) using Google Gemini, with advanced prompt engineering support and automatic PDF asset management.