RagThisCode
Verified Safe by ValerianRey
Overview
Set up a RAG (Retrieval-Augmented Generation) system to chat with the code of any public or private GitHub repository.
Installation
```shell
docker run -p 7070:7070 -p 9000:9000 \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  -e GITHUB_ACCESS_TOKEN=$GITHUB_ACCESS_TOKEN \
  -v $PWD/data/chroma_langchain_db:/app/data/chroma_langchain_db \
  --cpus="1.0" --memory="2g" \
  -d ragthiscode
```
Environment Variables
- OPENAI_API_KEY
- GITHUB_ACCESS_TOKEN
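For repeatable deployments, the `docker run` command above can be expressed as a Compose file. This is a sketch derived directly from those flags; the service name is arbitrary and the image tag `ragthiscode` is taken from the command:

```yaml
# Sketch of a docker-compose.yml equivalent to the docker run command above.
services:
  ragthiscode:
    image: ragthiscode
    ports:
      - "7070:7070"
      - "9000:9000"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - GITHUB_ACCESS_TOKEN=${GITHUB_ACCESS_TOKEN}
    volumes:
      - ./data/chroma_langchain_db:/app/data/chroma_langchain_db
    cpus: "1.0"       # matches --cpus="1.0"
    mem_limit: 2g     # matches --memory="2g"
```

With this file in place, `docker compose up -d` starts the server with the same ports, environment variables, volume mount, and resource limits.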
Security Notes
Secrets (API keys, tokens) are correctly handled via environment variables. CORS is configured for local development and should be adapted for production. The frontend renders LLM-generated content with `dangerouslySetInnerHTML` after converting it with `marked.parse`; since `marked` does not sanitize its HTML output, any malicious HTML/JS emitted by the model would be injected as-is, so the rendered markup should be sanitized before insertion.
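One way to reduce that risk is to sanitize or escape untrusted strings before they reach `dangerouslySetInnerHTML`. The helper below is a minimal illustrative sketch, not part of this project's code; a production frontend would instead run the HTML produced by `marked.parse` through a vetted sanitizer such as DOMPurify, which preserves safe markup rather than escaping everything:

```typescript
// Minimal sketch: escape raw untrusted text so it renders as inert
// characters instead of live HTML. Illustrative only; a real app
// should sanitize marked.parse output with a library like DOMPurify.
function escapeHtml(unsafe: string): string {
  return unsafe
    .replace(/&/g, "&amp;")   // must run first, before entities are added
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#039;");
}

// An LLM response containing a script tag becomes harmless text.
const llmOutput = '<script>alert("xss")</script>Hello';
console.log(escapeHtml(llmOutput));
// → &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;Hello
```

Escaping is the blunt option (it also neutralizes legitimate markdown-derived tags), which is why sanitizing the final HTML is usually the better fit for a markdown-rendering chat UI.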
Similar Servers
Context-Engine
Self-improving code search and context engine for IDEs and AI agents, providing hybrid semantic/lexical search, symbol graph navigation, and persistent memory.
rag-code-mcp
Provides AI-ready semantic code search and RAG capabilities for various programming languages to AI assistants, running entirely locally.
mcp-rag-server
Provides a local, zero-network Retrieval-Augmented Generation server for any code repository, enabling semantic search and file access through the Model Context Protocol (MCP) for AI clients like GitHub Copilot Agent.
tiny_chat
A RAG-enabled chat application that integrates with various LLM backends (OpenAI, Ollama, vLLM) and a Qdrant vector database, offering web search capabilities and an OpenAI-compatible API.