agents
by inkeep
Overview
This MCP (Model Context Protocol) Server acts as a proxy for the Inkeep Agent Framework's Management API. It exposes administrative functionality (e.g., CRUD operations for agents, projects, tools, and credentials) over the MCP protocol, allowing other clients or agents to interact with the Inkeep Management API through a standardized interface.
Installation
node packages/agents-manage-mcp/dist/mcp-server.js start --port=2718
Environment Variables
- SDK_DEFAULT_BASE_URL
- SDK_DEFAULT_TIMEOUT_MS
- SDK_DEFAULT_HEADERS
- LOG_LEVEL
- NANGO_SECRET_KEY
- NANGO_SERVER_URL
- NANGO_AUTH_BASE_URL
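A minimal sketch of setting these variables before launching the server. All values are placeholders, and the JSON shape of `SDK_DEFAULT_HEADERS` is an assumption, not documented behavior:

```shell
# Illustrative placeholders only; substitute your own deployment's values.
export SDK_DEFAULT_BASE_URL="http://localhost:3002"   # upstream Management API (assumed local)
export SDK_DEFAULT_TIMEOUT_MS="30000"
export SDK_DEFAULT_HEADERS='{"Authorization":"Bearer <your-api-key>"}'  # assumed JSON format
export LOG_LEVEL="info"
export NANGO_SECRET_KEY="<nango-secret>"
export NANGO_SERVER_URL="https://api.nango.dev"       # assumed default
export NANGO_AUTH_BASE_URL="https://api.nango.dev"    # assumed default

node packages/agents-manage-mcp/dist/mcp-server.js start --port=2718
```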
Security Notes
This MCP server exposes highly privileged administrative functionality (e.g., creating, updating, and deleting projects, agents, and credentials). When configured to run as an HTTP server (`--transport=http`), the source code implements no built-in authentication or authorization for *incoming* requests; it relies entirely on external safeguards, such as being deployed as a trusted internal service or behind an API Gateway/Load Balancer that enforces authentication and authorization. If exposed directly to the internet without such safeguards, it would grant any caller full administrative control over the Inkeep Agent Framework instance it connects to, via its configured upstream API keys and URLs. This is a critical security risk if the server is not deployed within a secured ecosystem.
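One way to provide such an external safeguard is a reverse proxy that rejects unauthenticated requests before they reach the MCP port. The sketch below is entirely hypothetical: the hostname, certificate paths, token, and port are placeholders, and nothing here is built into the server itself.

```shell
# Hypothetical hardening sketch: front the MCP HTTP endpoint with an nginx
# reverse proxy that rejects requests lacking a shared bearer token.
# All names, paths, and the token value are placeholders.
cat > /etc/nginx/conf.d/agents-manage-mcp.conf <<'EOF'
server {
    listen 443 ssl;
    server_name mcp.internal.example.com;

    ssl_certificate     /etc/nginx/certs/mcp.crt;
    ssl_certificate_key /etc/nginx/certs/mcp.key;

    location / {
        # Reject any request that does not carry the expected token.
        if ($http_authorization != "Bearer change-me") {
            return 401;
        }
        proxy_pass http://127.0.0.1:2718;
    }
}
EOF
nginx -s reload
```

The key design point is that the MCP process itself stays reachable only via loopback (`127.0.0.1:2718` here), so every external caller must pass the proxy's check first.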
Similar Servers
trigger.dev
A platform for building and executing reliable, scalable background tasks and complex workflows, supporting various runtimes (Node.js, Python, Bun), including advanced AI agent orchestration, event-driven processing, and real-time data handling.
mcp-use
A comprehensive framework for building full-stack Model Context Protocol (MCP) applications, including AI agents, MCP servers with UI widgets, and integrated debugging tools in both Python and TypeScript.
volcano-sdk
A TypeScript SDK for building multi-provider AI agents that chain LLM reasoning with external tools and orchestrate multi-agent workflows.
Lynkr
Lynkr is an AI orchestration layer that acts as an LLM gateway, routing language model requests to various providers (Ollama, Databricks, OpenAI, etc.). It provides an OpenAI-compatible API and enables AI-driven coding tasks via a rich set of tools and a multi-agent framework, with a strong focus on security, performance, and token efficiency. It allows AI agents to interact with a defined workspace (reading/writing files, executing shell commands, performing Git operations) and leverages long-term memory and agent learning to enhance task execution.