canvasxpress-mcp-server-main
Verified Safe by neuhausi
Overview
This server lets AI assistants generate CanvasXpress JSON configurations for data visualizations from natural-language descriptions, using Retrieval-Augmented Generation (RAG) and semantic search over CanvasXpress documentation.
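A CanvasXpress configuration pairs a data block with chart settings. The sketch below is a hand-written illustration of the general shape of such a configuration, not actual server output; the variable, sample, and title values are invented for the example.

```python
import json

# Hand-written illustration of a CanvasXpress-style configuration.
# Field values are examples, not output from this server.
config = {
    "data": {
        "y": {
            "vars": ["Gene A"],           # row (variable) names
            "smps": ["Ctrl", "Treated"],  # column (sample) names
            "data": [[1.2, 3.4]],         # one row of values per variable
        }
    },
    "config": {
        "graphType": "Bar",
        "title": "Example bar chart",
    },
}

print(json.dumps(config, indent=2))
```

The server's job is to produce JSON of roughly this shape from a request like "bar chart of Gene A in control vs. treated samples".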
Installation
docker run -d -p 8000:8000 -v "$(pwd)/vector_db:/root/.cache" --name canvasxpress-mcp-server --env-file .env canvasxpress-mcp-server:latest python -m src.mcp_server --http
Environment Variables
- AZURE_OPENAI_KEY
- AZURE_OPENAI_API_VERSION
- LLM_MODEL
- LLM_ENVIRONMENT
- LLM_PROVIDER
- EMBEDDING_PROVIDER
- GOOGLE_API_KEY
- GEMINI_MODEL
- OPENAI_EMBEDDING_MODEL
- GEMINI_EMBEDDING_MODEL
- ONNX_EMBEDDING_MODEL
- MCP_TRANSPORT
- MCP_HOST
- MCP_PORT
- PROMPT_VERSION
- ALT_WORDING_COUNT
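The `--env-file .env` flag above expects these variables in a dotenv file. A sketch of what that file might contain follows; every value here is a placeholder assumption, and only the variables relevant to your chosen provider need to be set.

```shell
# .env — all values below are illustrative placeholders
AZURE_OPENAI_KEY=your-azure-openai-key
AZURE_OPENAI_API_VERSION=2024-02-01
LLM_PROVIDER=azure
LLM_MODEL=gpt-4o
LLM_ENVIRONMENT=production
EMBEDDING_PROVIDER=onnx
ONNX_EMBEDDING_MODEL=all-MiniLM-L6-v2
MCP_TRANSPORT=http
MCP_HOST=0.0.0.0
MCP_PORT=8000
PROMPT_VERSION=1
ALT_WORDING_COUNT=3
```

If Gemini is used instead of Azure OpenAI, `GOOGLE_API_KEY`, `GEMINI_MODEL`, and `GEMINI_EMBEDDING_MODEL` would be set instead of the Azure entries.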
Security Notes
The server demonstrates good security practices by externalizing API keys to environment variables and avoiding direct execution of arbitrary user input. It relies on established external LLM and embedding APIs (Azure OpenAI, Google Gemini, Hugging Face for local models). Exposing the HTTP server mode over a network is standard, but a production deployment would still need proper authentication and network security; FastMCP provides support for adding these. The system's parsing of LLM output via regex in `_extract_json_from_response` is a common approach and does not introduce obvious vulnerabilities in this context.
Similar Servers
fastmcp
FastMCP is an ergonomic interface for the Model Context Protocol (MCP), providing a comprehensive framework for building and interacting with AI agents, tools, resources, and prompts across various transports and authentication methods.
UltraRAG
An open-source framework for building, experimenting with, and evaluating complex Retrieval-Augmented Generation (RAG) pipelines with low-code YAML configurations and native multimodal support.
context-portal
Manages structured project context for AI assistants and developer tools, enabling Retrieval Augmented Generation (RAG) and prompt caching within IDEs.
mem-agent-mcp
Provides a Model Context Protocol (MCP) server for a memory agent, enabling LLMs to interact with an Obsidian-like memory system for contextual assistance and RAG.