Hands_on_LLM
by Theanh130124
Overview
Develop and demonstrate LLM agents that interact with external tools via MCP (Model Context Protocol) servers, featuring prompt caching for efficiency and a Streamlit UI for a banking assistant.
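The prompt cache described in the Security Notes stores queries and responses as plain JSON on disk. A minimal sketch of that pattern, assuming an illustrative file path and function names (not the repository's actual implementation):

```python
import json
import hashlib
from pathlib import Path

CACHE_PATH = Path("prompt_cache.json")  # illustrative path, not the repo's actual file

def _load_cache() -> dict:
    """Load the JSON cache from disk, or start empty."""
    if CACHE_PATH.exists():
        return json.loads(CACHE_PATH.read_text())
    return {}

def cached_call(prompt: str, call_llm) -> str:
    """Return a cached response for a previously seen prompt, else call the LLM."""
    cache = _load_cache()
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key in cache:
        return cache[key]  # cache hit: no LLM call
    response = call_llm(prompt)
    cache[key] = response
    CACHE_PATH.write_text(json.dumps(cache))  # persist as plain JSON (see Security Notes)
    return response
```

Hashing the prompt keeps the keys fixed-length, but the values are still stored in the clear, which is the exposure risk noted below.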
Installation
python mcp/mcp_server.py

Environment Variables
- MODEL_API_KEY
- HF_API_KEY
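One way to consume the variables above is to fail fast at startup when they are missing. A hedged sketch (the helper name is illustrative, not part of the repository):

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable or raise with a clear message."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Set {name} before starting the MCP server.")
    return value

# Example usage at server startup:
# MODEL_API_KEY = require_env("MODEL_API_KEY")
# HF_API_KEY = require_env("HF_API_KEY")
```

Failing early with a named variable beats a confusing authentication error later in an LLM call.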
Security Notes
The LLM client configuration (`mcp/llm_call_mcp_sse.py` and `mcp/agent_call_mcp_sse.py`) uses hardcoded placeholder keys (`api_key="FAKEAPI"` or `api_key="ANYTHING"`) when connecting to a custom LLM endpoint (`http://103.78.3.96:8000/v1`). LLM calls are therefore effectively unauthenticated, which is a significant vulnerability if the target endpoint is exposed or not otherwise secured. Additionally, prompt caching stores queries and responses in plain JSON files on the local filesystem, which could expose sensitive information.
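A straightforward remediation for the hardcoded keys is to pull the credential from the environment instead. A minimal sketch, assuming the `MODEL_API_KEY` variable listed above and a hypothetical helper name:

```python
import os

def make_client_config(base_url: str) -> dict:
    """Build LLM client settings without hardcoding credentials.

    Instead of api_key="FAKEAPI", read the key from the environment so it
    never lands in source control. base_url is the custom OpenAI-compatible
    endpoint (e.g. the one mentioned above).
    """
    api_key = os.environ.get("MODEL_API_KEY")
    if not api_key:
        raise RuntimeError("MODEL_API_KEY is not set; refusing to fall back to a placeholder key.")
    return {"base_url": base_url, "api_key": api_key}
```

The resulting dict can be splatted into whichever OpenAI-compatible client the scripts construct; the point is that the key comes from the process environment, not the source file.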
Similar Servers
Polymcp
A comprehensive TypeScript framework for building and orchestrating Model Context Protocol (MCP) servers and AI agents, enabling LLMs to intelligently discover, select, and execute external tools.
mcpc
Build and compose agentic Model Context Protocol (MCP) servers and tools, enabling AI assistants to discover, integrate, and orchestrate other MCP servers for complex tasks.
mcp-servers
An MCP server for managing files in Google Cloud Storage, supporting CRUD operations (save, get, search, delete) and exposing files as resources.
zeromcp
A minimal, pure Python Model Context Protocol (MCP) server for exposing tools, resources, and prompts via HTTP/SSE and Stdio transports.