mcp-studies
by LAB271
Overview
Develops and demonstrates various MCP server architectures and features, ranging from basic stdio transport to Docker deployment with vector databases, primarily for learning and prototyping AI agent integrations.
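A minimal sketch of the basic stdio-transport pattern the spikes start from, assuming the `FastMCP` class from the official `mcp` Python SDK; the server name and the `echo` tool here are illustrative, not taken from the repository:

```python
from mcp.server.fastmcp import FastMCP

# Hypothetical name; the actual spikes read SERVER_NAME from the environment.
mcp = FastMCP("studies-demo")

@mcp.tool()
def echo(text: str) -> str:
    """Return the input text unchanged."""
    return text

if __name__ == "__main__":
    # stdio is the default FastMCP transport; stated explicitly for clarity.
    mcp.run(transport="stdio")
```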
Installation
uv run spikes/000_stdio/main_mcp_server.py
Environment Variables
- LOG_LEVEL
- PYTHONUNBUFFERED
- SERVER_NAME
- FASTMCP_HOST
- FASTMCP_PORT
- NEO4J_AUTH
- NEO4J_HEAP_INITIAL
- NEO4J_HEAP_MAX
- NEO4J_HOST
- NEO4J_PORT
- NEO4J_USER
- NEO4J_PASSWORD
- NEO4J_DATABASE
- MCP_TRANSPORT
- POSTGRES_HOST
- POSTGRES_PORT
- POSTGRES_USER
- POSTGRES_PASSWORD
- POSTGRES_DB
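As a rough sketch of how a spike might consume the variables above, here is one way to read them in Python; the default values shown are assumptions for illustration, not the repository's actual defaults:

```python
import os

# Server and transport configuration (illustrative defaults only).
SERVER_NAME = os.getenv("SERVER_NAME", "mcp-studies")
MCP_TRANSPORT = os.getenv("MCP_TRANSPORT", "stdio")
FASTMCP_HOST = os.getenv("FASTMCP_HOST", "127.0.0.1")
FASTMCP_PORT = int(os.getenv("FASTMCP_PORT", "8000"))

# Neo4j connection details assembled from the individual variables.
NEO4J_URI = f"bolt://{os.getenv('NEO4J_HOST', 'localhost')}:{os.getenv('NEO4J_PORT', '7687')}"
NEO4J_AUTH = (os.getenv("NEO4J_USER", "neo4j"), os.getenv("NEO4J_PASSWORD", ""))
```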
Security Notes
The `calculate` tool (`spikes/003_docker/main_server.py`) uses `eval()`, which remains a severe code injection vulnerability despite its character filtering. Hardcoded default credentials (e.g., `neo4j/neo4jpassword`, `mcp_user/mcp_password`) are present in `docker-compose.yml` files; they are configurable via environment variables, but pose a risk if not overridden in production environments.
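One way to remove the `eval()` risk from a calculator-style tool is an AST walker that accepts only numeric literals and a whitelist of operators. This is a sketch under that assumption, not the repository's implementation, and `safe_calculate` is a hypothetical name:

```python
import ast
import operator

# Whitelisted arithmetic operators; anything else is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> float:
    """Evaluate a basic arithmetic expression without eval()."""
    def _eval(node: ast.AST) -> float:
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("Unsupported expression element")

    tree = ast.parse(expression, mode="eval")
    return _eval(tree)
```

For example, `safe_calculate("2 * (3 + 4)")` returns `14.0`-style results, while `safe_calculate("__import__('os')")` raises `ValueError` instead of executing code.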
Similar Servers
keyboard-local
Enables AI clients to execute real-world tasks through connected third-party tools (APIs, CLIs, SDKs) with human approval, leveraging a secure GitHub Codespace environment.
1mcp
Orchestrates AI agent tool calls by executing JavaScript/TypeScript code in a WASM sandbox, reducing LLM context bloat and managing security policies.
company-docs-mcp
Transforms organizational documentation into an AI-powered knowledge base for semantic search, Q&A via chat interface, Claude Desktop, and Slack integration.
mcp-4get
Provides LLM clients with access to the 4get Meta Search engine API via the Model Context Protocol (MCP).