Hands_on_LLM
by Theanh130124
Overview
Develop and demonstrate LLM agents that interact with external tools via MCP (Model Context Protocol) servers, featuring prompt caching for efficiency and a Streamlit UI for a banking assistant.
Installation
```
python mcp/mcp_server.py
```

Environment Variables
- MODEL_API_KEY
- HF_API_KEY
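To avoid the hardcoded-key problem noted below, these variables can be read from the environment at startup and validated before the server runs. A minimal sketch (the `load_config` helper is illustrative, not part of the repository):

```python
import os


def load_config(env=os.environ):
    """Return the required API keys, failing fast when one is missing.

    MODEL_API_KEY and HF_API_KEY are the variables listed above; any
    other configuration would be added to the tuple below.
    """
    required = ("MODEL_API_KEY", "HF_API_KEY")
    missing = [name for name in required if not env.get(name)]
    if missing:
        raise RuntimeError(
            "Missing environment variables: " + ", ".join(missing)
        )
    return {name: env[name] for name in required}
```

Failing fast at startup surfaces a missing key immediately instead of letting an unauthenticated call reach the LLM endpoint.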
Security Notes
The LLM client configuration (`mcp/llm_call_mcp_sse.py` and `mcp/agent_call_mcp_sse.py`) uses hardcoded placeholder keys (`api_key="FAKEAPI"` or `api_key="ANYTHING"`) when connecting to a custom LLM endpoint (`http://103.78.3.96:8000/v1`). This means LLM calls are effectively unauthenticated, which is a significant security vulnerability if the target endpoint is exposed or not otherwise secured. Additionally, prompt caching stores queries and responses as plain JSON files on the local filesystem, which could expose sensitive information.
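The exposure described above comes from the cache layout itself: one readable JSON file per query. A minimal sketch of such a file-based prompt cache (the `PromptCache` class and its layout are illustrative assumptions, not the repository's actual implementation):

```python
import hashlib
import json
from pathlib import Path


class PromptCache:
    """Minimal file-based prompt cache: one JSON file per query hash.

    Queries and responses are written as plaintext JSON, which is
    exactly the exposure risk noted above -- anyone with filesystem
    access can read cached conversations.
    """

    def __init__(self, cache_dir="prompt_cache"):
        self.dir = Path(cache_dir)
        self.dir.mkdir(parents=True, exist_ok=True)

    def _path(self, query: str) -> Path:
        # Hash the query so arbitrary text maps to a safe filename.
        digest = hashlib.sha256(query.encode("utf-8")).hexdigest()
        return self.dir / f"{digest}.json"

    def get(self, query: str):
        """Return the cached response, or None on a cache miss."""
        path = self._path(query)
        if path.exists():
            return json.loads(path.read_text(encoding="utf-8"))["response"]
        return None

    def put(self, query: str, response: str) -> None:
        """Store the query/response pair as plaintext JSON."""
        payload = {"query": query, "response": response}
        self._path(query).write_text(json.dumps(payload), encoding="utf-8")
```

Mitigations would include encrypting the cache at rest, restricting file permissions, or excluding the cache directory from version control and backups.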
Similar Servers
agentor
Deploy scalable AI agents with tool integrations (weather, email, GitHub, etc.) and support for A2A and MCP communication protocols.
Polymcp
A comprehensive toolkit and agent framework for building Model Context Protocol (MCP) servers and orchestrating them with Large Language Models (LLMs) across Python and TypeScript environments.
mcp-servers
An MCP server for fetching, cleaning, and intelligently extracting content from web pages, designed for agent-building frameworks.
zeromcp
A minimal, pure Python Model Context Protocol (MCP) server for exposing tools, resources, and prompts via HTTP/SSE and Stdio transports.