
lmstudio-bridge-enhanced

Verified Safe

by ahmedibrahim085

Overview

Bridges local LLMs running in LM Studio with the Model Context Protocol (MCP) ecosystem, enabling autonomous AI agents to use external tools (filesystem, web, knowledge graph, GitHub, vision) with multi-model orchestration, structured JSON output, and intelligent model capability management.

Installation

Run Command
python3 /absolute/path/to/lmstudio-bridge-enhanced/main.py
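To register the bridge with an MCP client, an entry along the following lines can be added to the client's configuration. This is a minimal sketch using the common `mcpServers` convention; the path is a placeholder as above, and the `env` values shown are illustrative, not documented defaults.

```json
{
  "mcpServers": {
    "lmstudio-bridge-enhanced": {
      "command": "python3",
      "args": ["/absolute/path/to/lmstudio-bridge-enhanced/main.py"],
      "env": {
        "LMSTUDIO_HOST": "localhost",
        "LMSTUDIO_PORT": "1234"
      }
    }
  }
}
```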

Environment Variables

  • LMSTUDIO_HOST
  • LMSTUDIO_PORT
  • DEFAULT_MODEL
  • MCP_JSON_PATH
  • GITHUB_PERSONAL_ACCESS_TOKEN
  • WORKING_DIR
  • LMS_MAX_RETRIES
  • LMS_RETRY_BASE_DELAY
  • LMS_RETRY_MAX_DELAY
  • LMS_EXTRA_NUMERIC_PARAMS
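As a sketch of how these variables might be consumed, the snippet below reads a subset of them with fallback values. The defaults shown (host, port, retry settings) are illustrative assumptions, not the project's documented defaults.

```python
import os

# Illustrative configuration loader for a subset of the variables above.
# All default values here are assumptions for the sake of the example.
config = {
    "host": os.environ.get("LMSTUDIO_HOST", "localhost"),
    "port": int(os.environ.get("LMSTUDIO_PORT", "1234")),
    "default_model": os.environ.get("DEFAULT_MODEL", ""),
    "max_retries": int(os.environ.get("LMS_MAX_RETRIES", "3")),
    "retry_base_delay": float(os.environ.get("LMS_RETRY_BASE_DELAY", "1.0")),
    "retry_max_delay": float(os.environ.get("LMS_RETRY_MAX_DELAY", "30.0")),
}
```

Numeric variables are converted eagerly so that a malformed value fails at startup rather than mid-request.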

Security Notes

The server acts as an MCP client, dynamically discovering and executing other MCP servers defined in a user-configurable `.mcp.json` file. Input parameters such as `mcp_name` and `working_directory` are properly validated (a `validate_mcp_name` regex and a `validate_working_directory` path-traversal check, fixed in v3.2.1), but the `command` and `args` fields read directly from `.mcp.json` to spawn subprocesses are inherently trusted: a malicious or compromised `.mcp.json` file could still lead to arbitrary command execution on the host system.

HTML escaping (`html.escape`) is applied to LLM reasoning output to prevent XSS. `GITHUB_PERSONAL_ACCESS_TOKEN` is read from environment variables rather than hardcoded. Logging is standardized and bare `except` clauses have been replaced (also fixed in v3.2.1). Overall, the server is generally safe, with the strong recommendation to keep the `.mcp.json` configuration file secure and trusted.


Stats

  • Interest Score: 0
  • Security Score: 8
  • Cost Class: Low
  • Avg Tokens: 2000
  • Stars: 0
  • Forks: 0
  • Last Update: 2025-11-27

Tags

  • LLM Bridge
  • MCP Server
  • Autonomous Agent
  • Local LLM
  • LM Studio
  • Tool Calling
  • Multimodal
  • Python