nanobanana-mcp-server
Verified Safe · by zhongweili
Overview
AI-powered image generation and editing using Google Gemini models (Flash and Pro), with intelligent model selection, exposed via the Model Context Protocol.
Installation
uvx nanobanana-mcp-server@latest
Environment Variables
- GEMINI_API_KEY
- GOOGLE_API_KEY
- NANOBANANA_MODEL
- IMAGE_OUTPUT_DIR
- LOG_LEVEL
- LOG_FORMAT
- FASTMCP_TRANSPORT
- FASTMCP_HOST
- FASTMCP_PORT
- FASTMCP_MASK_ERRORS
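A minimal sketch of how these variables might be consumed at startup; the `ServerConfig` class, the fallback defaults, and the key-precedence order are assumptions for illustration, not the server's actual configuration module:

```python
import os
from dataclasses import dataclass


@dataclass
class ServerConfig:
    """Hypothetical view of the environment-driven configuration."""
    api_key: str
    model: str
    output_dir: str
    transport: str
    host: str
    port: int
    mask_errors: bool


def load_config() -> ServerConfig:
    # Assumption: GEMINI_API_KEY is checked first, GOOGLE_API_KEY is the fallback.
    api_key = os.environ.get("GEMINI_API_KEY") or os.environ.get("GOOGLE_API_KEY")
    if not api_key:
        raise RuntimeError("Set GEMINI_API_KEY or GOOGLE_API_KEY")
    return ServerConfig(
        api_key=api_key,
        model=os.environ.get("NANOBANANA_MODEL", "gemini-flash"),    # default is an assumption
        output_dir=os.environ.get("IMAGE_OUTPUT_DIR", "./images"),   # default is an assumption
        transport=os.environ.get("FASTMCP_TRANSPORT", "stdio"),      # stdio default per Security Notes
        host=os.environ.get("FASTMCP_HOST", "127.0.0.1"),            # loopback default per Security Notes
        port=int(os.environ.get("FASTMCP_PORT", "8000")),            # default is an assumption
        mask_errors=os.environ.get("FASTMCP_MASK_ERRORS", "false").lower() == "true",
    )
```

In practice these variables would typically be exported in the shell or set in the MCP client's server configuration before running `uvx nanobanana-mcp-server@latest`.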
Security Notes
The server reads API keys from environment variables (`GEMINI_API_KEY` or `GOOGLE_API_KEY`), avoiding hardcoded secrets. It defaults to `stdio` transport, which limits network exposure; when HTTP transport is configured, `FASTMCP_HOST` defaults to `127.0.0.1`. Input validation (`core/validation.py`) covers prompts, image counts, MIME types, base64 data, and file paths, including basic path traversal protection (`".." in path or path.startswith("/")`) for the `upload_file` tool. The `ruff.toml` explicitly enables security linting (`"S"` for flake8-bandit). `subprocess.run` is used in scripts, but mostly without `shell=True` and only for trusted `uv`/`twine` commands. Logging goes to `stderr` for MCP STDIO compatibility, preventing log output from corrupting the JSON-RPC messages on stdout. Error details can be masked in production (`mask_error_details`).
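The path traversal guard quoted above amounts to a check along these lines; this is a sketch of the described behavior, not the project's actual `core/validation.py`:

```python
def validate_upload_path(path: str) -> str:
    """Reject absolute paths and parent-directory components before upload_file touches disk.

    Mirrors the check described in the Security Notes; the function name and
    error message are illustrative.
    """
    if ".." in path or path.startswith("/"):
        raise ValueError(f"Unsafe file path rejected: {path!r}")
    return path
```

Simple string checks like this catch the common traversal patterns; resolving the path (e.g. with `os.path.realpath`) and confirming it stays under an allowed base directory would be the stricter approach.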
Similar Servers
rmcp
Serves as an AI assistant backend to perform comprehensive statistical analysis, econometric modeling, machine learning, time series analysis, and data science tasks using R through natural language conversations.
ls-mcp
A command-line tool for discovering, analyzing, and reporting on Model Context Protocol (MCP) server configurations in a local development environment, including their status, versioning, and potential credential exposures.
codebadger
Static code analysis Model Context Protocol (MCP) server utilizing Joern's Code Property Graph (CPG) technology to provide structural and security analysis for various programming languages.
seamless-agent
Enhances GitHub Copilot by providing an interactive user confirmation tool, allowing AI agents to request approval or additional input before executing actions.