
Lynkr

Verified Safe

by vishalveerareddy123

Overview

Lynkr is an AI orchestration layer that acts as an LLM gateway, routing language model requests to various providers (Ollama, Databricks, OpenAI, etc.). It provides an OpenAI-compatible API and enables AI-driven coding tasks via a rich set of tools and a multi-agent framework, with a strong focus on security, performance, and token efficiency. It allows AI agents to interact with a defined workspace (reading/writing files, executing shell commands, performing Git operations) and leverages long-term memory and agent learning to enhance task execution.

Installation

Run Command
`npm start`

Environment Variables

  • MODEL_PROVIDER
  • DATABRICKS_API_BASE
  • DATABRICKS_API_KEY
  • OLLAMA_ENDPOINT
  • OLLAMA_MODEL
  • OLLAMA_EMBEDDINGS_MODEL
  • OPENROUTER_API_KEY
  • OPENROUTER_MODEL
  • OPENROUTER_EMBEDDINGS_MODEL
  • OPENAI_API_KEY
  • OPENAI_MODEL
  • AZURE_OPENAI_ENDPOINT
  • AZURE_OPENAI_API_KEY
  • AZURE_OPENAI_DEPLOYMENT
  • LLAMACPP_ENDPOINT
  • LLAMACPP_MODEL
  • LLAMACPP_EMBEDDINGS_ENDPOINT
  • LMSTUDIO_ENDPOINT
  • LMSTUDIO_MODEL
  • AWS_BEDROCK_REGION
  • AWS_BEDROCK_API_KEY
  • AWS_BEDROCK_MODEL_ID
  • PREFER_OLLAMA
  • FALLBACK_ENABLED
  • FALLBACK_PROVIDER
  • PORT
  • WORKSPACE_ROOT
  • RATE_LIMIT_ENABLED
  • WEB_SEARCH_ENDPOINT
  • WEB_SEARCH_ALLOW_ALL
  • MCP_SANDBOX_ENABLED
  • MCP_SANDBOX_IMAGE
  • MCP_SANDBOX_RUNTIME
  • AGENTS_ENABLED
  • MEMORY_ENABLED
  • TOKEN_TRACKING_ENABLED
  • PROMPT_CACHE_ENABLED
  • POLICY_MAX_STEPS
  • POLICY_MAX_TOOL_CALLS
  • POLICY_DISALLOWED_TOOLS
  • POLICY_GIT_ALLOW_PUSH
  • POLICY_GIT_TEST_COMMAND
  • POLICY_SAFE_COMMANDS_ENABLED
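As one way to wire these together, a `.env` file could point Lynkr at a local Ollama instance with OpenRouter as fallback. All values below are illustrative placeholders, not documented defaults:

```shell
# Illustrative .env sketch; consult the Lynkr repository for actual defaults.
MODEL_PROVIDER=ollama
OLLAMA_ENDPOINT=http://localhost:11434
OLLAMA_MODEL=llama3

FALLBACK_ENABLED=true
FALLBACK_PROVIDER=openrouter
OPENROUTER_API_KEY=...              # keep secrets out of version control

PORT=3000
WORKSPACE_ROOT=/path/to/workspace   # directory agents may read/write
MEMORY_ENABLED=true
RATE_LIMIT_ENABLED=true
```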

Security Notes

The project demonstrates a robust focus on security. It includes explicit `SHELL_BLOCKLIST_PATTERNS` and `PYTHON_BLOCKLIST_PATTERNS`, and a `SafeCommandDSL` to evaluate shell commands against allowed flags and blocked patterns, specifically disallowing dangerous commands like `rm`, `mv`, `cp`, `chmod`, `sudo`, `reboot`, `shutdown`, and `killall`.

The `evaluateToolCall` policy enforces controls on tool usage, including file access (allowed/blocked paths) and Git operations (configurable push/pull/commit permissions). Sensitive content is sanitized using `sanitiseText`. For process execution, a Docker-based sandbox (`src/mcp/sandbox.js`) is configurable to provide isolation, resource limits, and network control. Configuration management (`src/config/index.js`) relies on environment variables, preventing hardcoded secrets.

While designed with strong safeguards, any system allowing dynamic code execution by an AI carries inherent residual risk.

Stats

  • Interest Score: 95
  • Security Score: 9
  • Cost Class: Medium
  • Avg Tokens: 2500
  • Stars: 225
  • Forks: 16
  • Last Update: 2026-01-18

Tags

AI Orchestration, LLM Gateway, Coding Agent, Tool Use, Hybrid Routing, Security Policy, Sandbox, Performance Optimization, Token Management, Multi-Agent System, Memory System, OpenAI Compatible API, Workspace Interaction