inspector
by modelcontextprotocol
Overview
A web-based client and proxy server for inspecting and interacting with Model Context Protocol (MCP) servers, letting users browse resources, prompts, and tools, issue requests, and debug OAuth authentication flows.
Installation
`node client/bin/start.js`

Environment Variables
- PORT
- HOST
- DANGEROUSLY_OMIT_AUTH
- ALLOWED_ORIGINS
- CLIENT_PORT
- SERVER_PORT
- INSPECTOR_URL
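The proxy reads these variables at startup. The snippet below is a minimal sketch of how such configuration might be wired together; the default port, the auth header name, and the overall structure are illustrative assumptions, not the project's actual `server/src/index.ts`.

```typescript
// Illustrative sketch only: shows one plausible way the documented variables
// (HOST, PORT/SERVER_PORT, ALLOWED_ORIGINS, DANGEROUSLY_OMIT_AUTH) could drive setup.
import http from "node:http";
import { randomBytes } from "node:crypto";

const host = process.env.HOST ?? "localhost";
const port = Number(process.env.SERVER_PORT ?? process.env.PORT ?? 6277); // default is assumed
const allowedOrigins = (process.env.ALLOWED_ORIGINS ?? "").split(",").filter(Boolean);
const omitAuth = process.env.DANGEROUSLY_OMIT_AUTH === "true";
const sessionToken = omitAuth ? null : randomBytes(32).toString("hex");

const server = http.createServer((req, res) => {
  // Reject requests from origins outside the configured allow-list.
  const origin = req.headers.origin;
  if (origin && allowedOrigins.length > 0 && !allowedOrigins.includes(origin)) {
    res.writeHead(403).end("Origin not allowed");
    return;
  }
  // Require the session token unless auth was dangerously omitted
  // (header name is a hypothetical placeholder).
  if (sessionToken && req.headers["x-mcp-proxy-auth"] !== `Bearer ${sessionToken}`) {
    res.writeHead(401).end("Missing or invalid session token");
    return;
  }
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ ok: true }));
});

server.listen(port, host, () => {
  console.log(`Proxy listening on http://${host}:${port}`);
  if (sessionToken) console.log(`Session token: ${sessionToken}`);
});
```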
Security Notes
The proxy server (`server/src/index.ts`) is designed to execute arbitrary commands with arbitrary arguments and environment (the `command`, `args`, and `env` URL query parameters) when the `stdio` transport is used. If the proxy is exposed to untrusted networks or users, this amounts to a critical remote code execution vulnerability. The design is plausibly intended for local, trusted development environments, but it remains a severe risk. In addition, setting the `DANGEROUSLY_OMIT_AUTH` environment variable disables the proxy's session token authentication, further increasing exposure. The client-side OAuth implementation follows standard practices, but the proxy's inherent capability for arbitrary command execution significantly lowers the overall security score.
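To make the risk concrete, the pattern described above boils down to something like the following. This is a hypothetical, stripped-down sketch rather than the project's actual code; the endpoint path and parameter handling are assumptions chosen to illustrate why exposing such a proxy is dangerous.

```typescript
// Hypothetical sketch of the risky pattern: an HTTP handler that spawns a process
// from caller-supplied query parameters effectively hands callers a shell on the host.
import http from "node:http";
import { spawn } from "node:child_process";
import { URL } from "node:url";

http.createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  if (url.pathname === "/stdio") {
    // `command`, `args`, and `env` come straight from the request, e.g.
    // GET /stdio?command=rm&args=-rf%20/ would be executed as-is.
    const command = url.searchParams.get("command") ?? "";
    const args = (url.searchParams.get("args") ?? "").split(" ").filter(Boolean);
    const extraEnv = JSON.parse(url.searchParams.get("env") ?? "{}");

    const child = spawn(command, args, { env: { ...process.env, ...extraEnv } });
    child.stdout.pipe(res); // MCP stdio traffic would be bridged back to the client here
  } else {
    res.writeHead(404).end();
  }
}).listen(6277, "127.0.0.1"); // binding beyond localhost widens the blast radius
```

Anyone able to reach such an endpoint can run arbitrary programs with the proxy's privileges, which is why the session token, origin checks, and localhost-only binding matter.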
Similar Servers
chrome-devtools-mcp
Control and inspect a live Chrome browser programmatically via an MCP server, enabling AI coding agents to perform reliable automation, in-depth debugging, and performance analysis.
mcp-grafana
Provides a Model Context Protocol (MCP) server for Grafana, enabling AI agents to interact with Grafana features such as dashboards, datasources, alerting, incidents, and more through a structured tool-based interface.
Lynkr
Lynkr is an AI orchestration layer that acts as an LLM gateway, routing language model requests to various providers (Ollama, Databricks, OpenAI, etc.). It provides an OpenAI-compatible API and enables AI-driven coding tasks via a rich set of tools and a multi-agent framework, with a strong focus on security, performance, and token efficiency. It allows AI agents to interact with a defined workspace (reading/writing files, executing shell commands, performing Git operations) and leverages long-term memory and agent learning to enhance task execution.
llms
A centralized configuration and documentation management system for LLMs, providing tools for building skills, commands, agents, prompts, and managing MCP servers across multiple LLM providers.