mcp-explore
By AteetAgarwal
Overview
A Streamlit chat application integrating multiple MCP servers (local and remote) to orchestrate LLM tool calls.
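The repository's actual server configuration is not shown here, but a multi-server MCP setup of this kind is typically a mapping from server name to transport details. The sketch below is illustrative only: the server names, command arguments, and URL are assumptions, not taken from the repo.

```python
# Hypothetical sketch of a mixed local/remote MCP server configuration.
# Names, commands, and the URL are illustrative placeholders.
SERVERS = {
    "expense-tracker": {            # local server launched as a subprocess
        "transport": "stdio",
        "command": "uv",
        "args": ["run", "expense_server.py"],
    },
    "web-search": {                 # remote server reached over HTTP
        "transport": "streamable_http",
        "url": "https://example.com/mcp",
    },
}

def list_local_servers(servers: dict) -> list[str]:
    """Return names of servers launched as local subprocesses (stdio transport)."""
    return [name for name, cfg in servers.items() if cfg["transport"] == "stdio"]
```

A client would iterate over this mapping at startup, connect to each server, and merge the discovered tools into one catalog offered to the LLM.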
Installation
uv run streamlit run .\client_v2.py
Environment Variables
- AZURE_OPENAI_API_KEY
- AZURE_OPENAI_ENDPOINT
- OPENAI_API_VERSION
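Since the app fails at runtime without these variables, a small startup check can surface missing ones early. This is a minimal sketch, assuming the three variables listed above; the helper name is hypothetical.

```python
import os

# The three Azure OpenAI variables the app expects (from the list above).
REQUIRED_VARS = ["AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT", "OPENAI_API_VERSION"]

def missing_env_vars(env: dict = None) -> list[str]:
    """Return the required variables that are unset or empty in the environment."""
    env = os.environ if env is None else env
    return [v for v in REQUIRED_VARS if not env.get(v)]
```

Calling this before constructing the Azure OpenAI client lets the UI show a clear configuration error instead of a stack trace.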
Security Notes
The code follows good security practices for a demo application. API keys are loaded from environment variables via a `.env` file. The expense tracker's SQL queries use parameterized statements, preventing common SQL injection vulnerabilities. The external processes launched via `uv run` in the `SERVERS` configuration use hardcoded command arguments, so user input cannot drive arbitrary command execution. Tool arguments from the LLM are expected as JSON or dicts and parsed with `json.loads()`, which is safe in this context, and no `eval` or `exec` on user-controlled input was found.
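The parameterized-statement pattern mentioned above can be sketched as follows; the table schema and function name are illustrative, not the repository's actual code.

```python
import sqlite3

# In-memory database with a hypothetical expenses table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE expenses (id INTEGER PRIMARY KEY, category TEXT, amount REAL)")

def add_expense(conn: sqlite3.Connection, category: str, amount: float) -> None:
    # '?' placeholders bind values as data; input is never spliced into the SQL text,
    # so injection attempts arrive as inert strings rather than executable SQL.
    conn.execute("INSERT INTO expenses (category, amount) VALUES (?, ?)", (category, amount))

# Even a hostile-looking string is stored verbatim, not executed.
add_expense(conn, "groceries'; DROP TABLE expenses;--", 12.5)
rows = conn.execute("SELECT category, amount FROM expenses").fetchall()
```

The contrast is with string formatting (`f"... VALUES ('{category}', ...)"`), which would let crafted input alter the query itself.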
Similar Servers
Docker_MCPGUIApp
This repository provides a starter template for building full-stack AI assistants that integrate with real-world tools using Docker MCP Gateway and a Large Language Model.
fastchat-mcp
A Python client for integrating Language Models with Model Context Protocol (MCP) servers, allowing natural language interaction with external tools, resources, and prompts.
MultiServer-Mcp
Demonstrates building and interacting with multiple Model Context Protocol (MCP) servers for math and text processing using a LangChain MCP client for direct tool invocation.
ChatBot
Develops a versatile and interactive AI chatbot using LangGraph, featuring advanced concepts like memory, persistence, tool integration (search, calculator, stock price), multi-party communication (MCP), and retrieval-augmented generation (RAG) with a Streamlit user interface and user management.