# Agentic-AI-LLM-Apps
by mohammadreza-mohammadi94

## Overview
A Retrieval-Augmented Generation (RAG) system for querying Alice's Adventures in Wonderland using LangChain and FAISS.
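The retrieval step that FAISS accelerates can be sketched in plain Python. This is a simplified stand-in using toy vectors and cosine similarity, not the project's actual LangChain/FAISS code; in the real system the vectors come from an embedding model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, doc_vecs, k=2):
    """Return indices of the k document vectors most similar to the query.
    FAISS performs the same ranking, but with optimized index structures."""
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]

# Toy embeddings standing in for embedded book chunks.
docs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
query = [1.0, 0.05]
print(retrieve(query, docs))  # indices of the two most similar chunks
```

The retrieved chunks would then be stuffed into the LLM prompt as context, which is the "augmented generation" half of RAG.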
## Installation

```shell
python -m alice_rag
```

## Environment Variables
- `OPENAI_API_KEY`
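A common way to supply the key, assuming a Unix-like shell (the `sk-...` value is a placeholder):

```shell
# Set the key for the current shell session:
export OPENAI_API_KEY="sk-..."

# Or keep it in a local .env file (never commit this file):
echo 'OPENAI_API_KEY=sk-...' >> .env
```

Libraries such as python-dotenv can then load the `.env` file at startup instead of reading the shell environment directly.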
Security Notes
- **Critical:** A Cerebras API key is hardcoded in `projects/multi-agent/Cerebras-Debate-Orchestrator/src/llm_interface.py`.
- The `calculator_tool` in `projects/tools/Tool-Augmented Chain with Calculator/app.py` uses `eval`, which remains high-risk despite attempts at sandboxing.
- Several email-sending tools across projects contain placeholder email addresses (`YOUR_EMAIL_ADDRESS`) that must be replaced before use.
- External API calls (OpenAI, Tavily, Pushover, SendGrid, NewsAPI, Groq, Cerebras) are made across projects; manage these keys via environment variables, not hardcoded values.
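One way to mitigate the `eval` risk noted above is to parse arithmetic with Python's `ast` module and evaluate only whitelisted node types. This is a hedged sketch of that approach, not the repository's actual `calculator_tool`:

```python
import ast
import operator

# Whitelisted operators; any other node type is rejected outright.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_calc(expr: str) -> float:
    """Evaluate a plain arithmetic expression without eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"disallowed expression: {type(node).__name__}")
    return walk(ast.parse(expr, mode="eval"))

print(safe_calc("2 * (3 + 4) ** 2"))  # 98
```

Unlike `eval`, this rejects function calls, attribute access, and names entirely, so payloads like `__import__('os')` raise `ValueError` instead of executing.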