llm-radar
Verified Safe · by ajentsor
Overview
Provides real-time, curated intelligence about major AI models (OpenAI, Anthropic, Google) to AI assistants via the Model Context Protocol (MCP).
Installation
docker run -p 8000:8000 ghcr.io/ajentsor/llm-radar:latest
Environment Variables
- OPENAI_API_KEY
- ANTHROPIC_API_KEY
- GOOGLE_API_KEY
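Assuming the image reads these variables at startup (the standard pattern for Docker images; the exact handling is not documented here), they can be passed into the container with `-e` flags, extending the run command above:

```shell
# Pass provider API keys into the container at runtime.
# The values here reference variables exported in the host shell;
# substitute literal keys only in a trusted environment.
docker run -p 8000:8000 \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  -e ANTHROPIC_API_KEY="$ANTHROPIC_API_KEY" \
  -e GOOGLE_API_KEY="$GOOGLE_API_KEY" \
  ghcr.io/ajentsor/llm-radar:latest
```

Keeping keys in environment variables rather than baked into the image matches the practice noted in the Security Notes below.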
Security Notes
No direct use of 'eval' or obvious obfuscation detected. API keys for data fetching are managed through environment variables, which is good practice. The MCP server component loads its model data from a specific external public URL (`https://llm-radar.ajents.company/models.json`). While the server code itself is robust, its real-time data integrity and availability depend on this external service. The data generation process (which calls the OpenAI, Anthropic, and Google APIs and enriches the results with Claude) runs offline as a daily GitHub Action, reducing the live server's direct exposure to those API interactions.
Similar Servers
awesome-remote-mcp-servers
A curated directory for developers to discover, evaluate, and integrate high-quality, official remote Model Context Protocol (MCP) servers into their AI applications and LLM clients.
awesome-mcp-servers
A comprehensive collection of Model Context Protocol (MCP) servers that standardize how context is provided to AI applications.
mcp-omnisearch
Provides a unified interface for various search, AI response, content processing, and enhancement tools via Model Context Protocol (MCP).
mcp-server
Provides a Model Context Protocol (MCP) server for AI agents to search and retrieve curated documentation for the Strands Agents framework, facilitating AI coding assistance.