boomi-mcp-server
Verified Safe by RenEra-ai
Overview
Provides a secure Model Context Protocol (MCP) server that lets Claude Code and other AI clients integrate with Boomi Platform APIs. It enables automated management of Boomi accounts, trading partners, and processes, with OAuth 2.0 authentication and cloud-native credential storage.
Installation
python server_http.py
Environment Variables
- OIDC_CLIENT_ID
- OIDC_CLIENT_SECRET
- OIDC_BASE_URL
- SESSION_SECRET
- SECRETS_BACKEND
- GCP_PROJECT_ID
- AWS_REGION
- AWS_SECRET_PREFIX
- AZURE_KEY_VAULT_URL
- AZURE_SECRET_PREFIX
- MCP_JWT_ALG
- MCP_JWT_SECRET
- MCP_JWT_JWKS_URI
- MCP_JWT_ISSUER
- MCP_JWT_AUDIENCE
- MCP_HOST
- MCP_PORT
- MCP_PATH
- LOG_LEVEL
- CORS_ORIGINS
- CREDENTIALS_DB_PATH
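The variables above can be exported in the shell before launching the server. A minimal launch sketch follows; every value is a placeholder, and the exact set you need depends on your chosen secrets backend and JWT mode (the accepted values for SECRETS_BACKEND are not documented here, so "aws" below is an assumption):

```shell
# Hypothetical values -- substitute your own deployment settings.
export OIDC_CLIENT_ID="my-client-id"
export OIDC_CLIENT_SECRET="my-client-secret"
export OIDC_BASE_URL="https://idp.example.com"
export SESSION_SECRET="$(openssl rand -hex 32)"   # fresh random value per deployment

# Cloud secrets backend (value "aws" is assumed, not confirmed by the docs).
export SECRETS_BACKEND="aws"
export AWS_REGION="us-east-1"
export AWS_SECRET_PREFIX="boomi-mcp/"

# Production JWT settings: RS256 keys fetched from a JWKS endpoint.
export MCP_JWT_ALG="RS256"
export MCP_JWT_JWKS_URI="https://idp.example.com/.well-known/jwks.json"
export MCP_JWT_ISSUER="https://idp.example.com"
export MCP_JWT_AUDIENCE="boomi-mcp-server"

# Network and logging.
export MCP_HOST="0.0.0.0"
export MCP_PORT="8080"
export LOG_LEVEL="INFO"

python server_http.py
```

For local development, MCP_JWT_ALG could instead be set to HS256 with MCP_JWT_SECRET, but as the security notes below stress, default or weak secrets must never reach production.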
Security Notes
The server is designed with a strong emphasis on security for production deployments. It uses cloud-native secret management (AWS Secrets Manager, GCP Secret Manager, Azure Key Vault) for credential storage and implements robust JWT authentication (RS256 with JWKS) with automatic key rotation and explicit issuer/audience validation. Development mode (HS256) is clearly flagged, with warnings against using default secrets. Role-based access control is enforced via JWT scopes. No 'eval' or obvious obfuscation was detected. The server relies on well-established third-party libraries (PyJWT, boto3, google-cloud-secret-manager, azure-identity), and the Starlette-based HTTP setup supports secure practices such as HTTPS-only operation and session management.
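To illustrate what the JWT checks described above amount to, here is a stdlib-only sketch of HS256 (dev-mode) signing and validation with explicit issuer/audience checks. This is not the server's actual code, which uses PyJWT; the secret, issuer, and audience values are hypothetical stand-ins for MCP_JWT_SECRET, MCP_JWT_ISSUER, and MCP_JWT_AUDIENCE:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(claims: dict, secret: str) -> str:
    """Build a compact HS256 JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

def verify_hs256(token: str, secret: str, issuer: str, audience: str) -> dict:
    """Verify signature, then enforce issuer and audience explicitly."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):  # constant-time compare
        raise ValueError("bad signature")
    padded = payload + "=" * (-len(payload) % 4)
    claims = json.loads(base64.urlsafe_b64decode(padded))
    if claims.get("iss") != issuer:      # explicit issuer validation
        raise ValueError("bad issuer")
    if claims.get("aud") != audience:    # explicit audience validation
        raise ValueError("bad audience")
    return claims

# Hypothetical values; the server would read these from its environment.
token = sign_hs256(
    {"iss": "https://idp.example", "aud": "boomi-mcp", "scope": "boomi:read"},
    "dev-secret",
)
claims = verify_hs256(token, "dev-secret", "https://idp.example", "boomi-mcp")
print(claims["scope"])  # scopes like this drive the role-based access control
```

In production the same shape of check runs with RS256: the signature is verified against a public key fetched from MCP_JWT_JWKS_URI rather than a shared secret, which is why key rotation can be automatic.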
Similar Servers
SageMCP
A scalable platform for hosting MCP servers with multi-tenant support, OAuth integration, and connector plugins for various services, deployed on Kubernetes.
fastify-mcp-server
A Fastify plugin providing a streamable HTTP transport for the Model Context Protocol (MCP), enabling AI assistants to interact with services.
fastify-mcp
Integrates Model Context Protocol (MCP) server functionality into Fastify web applications, supporting streamable HTTP and legacy HTTP+SSE transports.
fluidmcp
Orchestrates Model Context Protocol (MCP) servers and LLM inference engines (like vLLM) via a unified FastAPI gateway, enabling dynamic management, tool invocation, and multi-model LLM serving.