prompt-learning-mcp
Verified Safe · by erichowens
Overview
Stateful prompt optimization and learning from performance history for LLM-based agents.
Installation
npm run start
Environment Variables
- VECTOR_DB_URL
- REDIS_URL
- OPENAI_API_KEY
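The variables above can be supplied via the shell (or a `.env` file) before starting the server. A minimal sketch — the URLs and key below are placeholder values, not defaults shipped with the project:

```shell
# Hypothetical local endpoints — substitute your own services and key.
export VECTOR_DB_URL="http://localhost:6333"   # e.g. a locally running vector store
export REDIS_URL="redis://localhost:6379"      # local Redis for state/history
export OPENAI_API_KEY="sk-..."                 # used for evaluation/optimization calls
```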
Security Notes
The server reads API keys and database URLs from environment variables, avoiding hardcoded secrets. It makes LLM calls for evaluation and optimization, which are generally safe as long as prompt content cannot trigger code execution. The `cli.ts` and `setup.ts` scripts use `execSync` and `node -e` for system commands and configuration; while these carry inherent risks, they run only during installation and CLI operations that the user explicitly initiates. Input validation for `transcript_path` in `handleHook` is limited to an existence check, but in the context of Claude Code hooks that path is expected to be controlled by the system. Overall, the security posture is strong for the server's intended purpose.
Similar Servers
Context-Engine
Self-improving code search and context engine for IDEs and AI agents, providing hybrid semantic/lexical search, symbol graph navigation, and persistent memory.
claude-prompts
This server provides a hot-reloadable prompt engine with chains, quality gates, and structured reasoning for AI assistants, enhancing control over Claude's behavior in prompt workflows.
mcp-local-rag
Local RAG server for developers enabling private, offline semantic search with keyword boosting on personal or project documents (PDF, DOCX, TXT, MD, HTML).
LLMling
A declarative Python framework for building LLM applications, managing resources, prompts, and tools, serving as a backend for MCP servers and Pydantic-AI agents.