remote-mcpserver
by dalianuyou
Overview
An AI chatbot application that uses the Model Context Protocol (MCP) to interact with multiple external services (MCP servers). Its primary purpose is retrieving and managing research paper information from arXiv, with optional access to a local filesystem and other web resources through additional servers.
Installation
`uv run Chatbot_MultiMCPClient.py`
Environment Variables
- ANTHROPIC_API_KEY
- ANTHROPIC_AUTH_TOKEN
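A minimal sketch (not taken from the repo) of a startup check for the variables listed above. Which of the two credentials `Chatbot_MultiMCPClient.py` actually reads is determined by that script; the helper name `resolve_credential` is hypothetical:

```python
import os

def resolve_credential() -> str:
    """Return the first Anthropic credential found in the environment.

    Checks ANTHROPIC_API_KEY, then ANTHROPIC_AUTH_TOKEN, and fails fast
    if neither is set, so the chatbot does not start without credentials.
    """
    for name in ("ANTHROPIC_API_KEY", "ANTHROPIC_AUTH_TOKEN"):
        value = os.environ.get(name)
        if value:
            return value
    raise RuntimeError(
        "Set ANTHROPIC_API_KEY or ANTHROPIC_AUTH_TOKEN before running the client."
    )
```

Checking credentials up front gives a clear error message instead of a failed API call deep inside the chat loop.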
Security Notes
The primary `Research_MCPServer.py` code is reasonably secure and avoids direct code-injection vulnerabilities. However, the client (`Chatbot_MultiMCPClient.py`) launches additional MCP servers listed in `server_config.json` via `npx` and `uvx`, so overall security depends on the trustworthiness of those external packages (`@modelcontextprotocol/server-filesystem`, `mcp-server-fetch`). The `filesystem` server in particular could expose broad filesystem access to the LLM through tool calls if it is not carefully scoped, risking unauthorized file operations or data leakage from sensitive directories. The `search_papers` tool creates directories and writes JSON files based on LLM-provided topics; a jailbroken LLM could therefore cause excessive disk usage or unintended file creation, although the tool's path sanitization helps prevent directory traversal.
Similar Servers
mcp-server
Provides a Model Context Protocol (MCP) server for AI agents to search and retrieve curated documentation for the Strands Agents framework, facilitating AI coding assistance.
mcp-use-cli
An interactive command-line interface (CLI) tool for connecting to and interacting with Model Context Protocol (MCP) servers using natural language; it acts as an AI client that combines LLM responses with external tool calls.
MCP-Agent
An autonomous AI agent designed to discover, connect to, and utilize tools and resources from various Model Context Protocol (MCP) servers to accomplish tasks.
arXiv-mcp
Provides a Model Context Protocol (MCP) server for searching and retrieving arXiv academic papers for LLMs.