Lynkr
Verified Safe by vishalveerareddy123
Overview
Lynkr is an AI orchestration layer that acts as an LLM gateway, routing language model requests to various providers (Ollama, Databricks, OpenAI, etc.). It provides an OpenAI-compatible API and enables AI-driven coding tasks via a rich set of tools and a multi-agent framework, with a strong focus on security, performance, and token efficiency. It allows AI agents to interact with a defined workspace (reading/writing files, executing shell commands, performing Git operations) and leverages long-term memory and agent learning to enhance task execution.
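Because Lynkr exposes an OpenAI-compatible API, a standard chat-completions request should work against a running instance. The sketch below is an assumption-laden example: the port (8080, via PORT), the `/v1/chat/completions` route, and the model name follow OpenAI API conventions and are not confirmed Lynkr defaults.

```shell
# Hypothetical request against a local Lynkr instance.
# Assumptions: port 8080 (PORT), OpenAI-style /v1/chat/completions route,
# and a model name your configured provider understands.
BODY='{"model":"llama3","messages":[{"role":"user","content":"Hello"}]}'
curl -sS -X POST "http://localhost:8080/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d "$BODY" || echo "request failed (is Lynkr running?)"
```

Any OpenAI SDK that accepts a custom base URL could be pointed at the same endpoint.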
Installation
npm start

Environment Variables
- MODEL_PROVIDER
- DATABRICKS_API_BASE
- DATABRICKS_API_KEY
- OLLAMA_ENDPOINT
- OLLAMA_MODEL
- OLLAMA_EMBEDDINGS_MODEL
- OPENROUTER_API_KEY
- OPENROUTER_MODEL
- OPENROUTER_EMBEDDINGS_MODEL
- OPENAI_API_KEY
- OPENAI_MODEL
- AZURE_OPENAI_ENDPOINT
- AZURE_OPENAI_API_KEY
- AZURE_OPENAI_DEPLOYMENT
- LLAMACPP_ENDPOINT
- LLAMACPP_MODEL
- LLAMACPP_EMBEDDINGS_ENDPOINT
- LMSTUDIO_ENDPOINT
- LMSTUDIO_MODEL
- AWS_BEDROCK_REGION
- AWS_BEDROCK_API_KEY
- AWS_BEDROCK_MODEL_ID
- PREFER_OLLAMA
- FALLBACK_ENABLED
- FALLBACK_PROVIDER
- PORT
- WORKSPACE_ROOT
- RATE_LIMIT_ENABLED
- WEB_SEARCH_ENDPOINT
- WEB_SEARCH_ALLOW_ALL
- MCP_SANDBOX_ENABLED
- MCP_SANDBOX_IMAGE
- MCP_SANDBOX_RUNTIME
- AGENTS_ENABLED
- MEMORY_ENABLED
- TOKEN_TRACKING_ENABLED
- PROMPT_CACHE_ENABLED
- POLICY_MAX_STEPS
- POLICY_MAX_TOOL_CALLS
- POLICY_DISALLOWED_TOOLS
- POLICY_GIT_ALLOW_PUSH
- POLICY_GIT_TEST_COMMAND
- POLICY_SAFE_COMMANDS_ENABLED
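A minimal configuration can be expressed by exporting a handful of these variables before launching the gateway. The values below are illustrative assumptions (a local Ollama backend with fallback disabled), not documented defaults.

```shell
# Illustrative values only; adjust to your deployment.
export MODEL_PROVIDER=ollama
export OLLAMA_ENDPOINT=http://localhost:11434
export OLLAMA_MODEL=llama3
export PORT=8080
export WORKSPACE_ROOT="$HOME/projects/myapp"
export FALLBACK_ENABLED=false
# npm start   # then launch the gateway
```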
Security Notes
The project demonstrates a robust focus on security. It includes explicit `SHELL_BLOCKLIST_PATTERNS` and `PYTHON_BLOCKLIST_PATTERNS`, and a `SafeCommandDSL` that evaluates shell commands against allowed flags and blocked patterns, specifically disallowing dangerous commands such as `rm`, `mv`, `cp`, `chmod`, `sudo`, `reboot`, `shutdown`, and `killall`. The `evaluateToolCall` policy enforces controls on tool usage, including file access (allowed/blocked paths) and Git operations (configurable push/pull/commit permissions). Sensitive content is sanitized using `sanitiseText`.

For process execution, a Docker-based sandbox (`src/mcp/sandbox.js`) is configurable to provide isolation, resource limits, and network control. Configuration management (`src/config/index.js`) relies on environment variables, preventing hardcoded secrets.

While designed with strong safeguards, any system allowing dynamic code execution by an AI carries inherent residual risk.
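The blocklist idea can be sketched in isolation. This is not Lynkr's `SafeCommandDSL`, just a hypothetical illustration of screening a candidate command against blocked patterns before execution; the pattern list mirrors the commands the notes above say are disallowed.

```shell
# Hypothetical blocklist check (illustration only, not Lynkr's code).
# Returns 0 and prints "blocked" when the command's first word matches
# a blocked pattern; otherwise prints "allowed".
is_blocked() {
  local cmd="$1"
  for pat in rm mv cp chmod sudo reboot shutdown killall; do
    case "$cmd" in
      "$pat"|"$pat "*) echo blocked; return 0 ;;
    esac
  done
  echo allowed
  return 1
}

is_blocked "git status"      # prints "allowed"
is_blocked "rm -rf /tmp/x"   # prints "blocked"
```

A real implementation would also normalize paths, resolve aliases, and inspect flags, which is what a DSL over allowed flags and blocked patterns buys over simple prefix matching.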
Similar Servers
inspector
A web-based client and proxy server for inspecting and interacting with Model Context Protocol (MCP) servers, allowing users to browse resources, prompts, and tools, perform requests, and debug OAuth authentication flows.
claude-prompts
This server provides a hot-reloadable prompt engine with chains, quality gates, and structured reasoning for AI assistants, enhancing control over Claude's behavior in prompt workflows.
AgentUp
A developer-first framework for building, deploying, and managing secure, scalable, and configurable AI agents, supporting various agent types (reactive, iterative) and the Model Context Protocol (MCP), bringing Docker-like consistency and operational ease to AI agent development.