textrawl
Verified Safe by jeffgreendesign
Overview
Serves as a personal knowledge base that lets AI models (e.g., Claude) search and retrieve documents, emails, notes, and web pages from a user's collection, and add new items to it.
Installation
npx tsx watch --env-file=.env src/index.ts
Environment Variables
- SUPABASE_URL
- SUPABASE_SERVICE_KEY
- OPENAI_API_KEY
- OLLAMA_BASE_URL
- OLLAMA_MODEL
- API_BEARER_TOKEN
- PORT
- NODE_ENV
- LOG_LEVEL
- ALLOWED_ORIGINS
- ENABLE_MEMORY
- COMPACT_RESPONSES
- UI_PORT
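A minimal `.env` sketch covering these variables. All values below are placeholders for illustration; the exact accepted formats (e.g., for `LOG_LEVEL` or `ALLOWED_ORIGINS`) are assumptions, not confirmed by the project:

```
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_KEY=your-service-role-key
OPENAI_API_KEY=sk-your-key
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama3
API_BEARER_TOKEN=change-me
PORT=3000
NODE_ENV=development
LOG_LEVEL=info
ALLOWED_ORIGINS=http://localhost:3000
ENABLE_MEMORY=true
COMPACT_RESPONSES=false
UI_PORT=3001
```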
Security Notes
The server demonstrates strong security practices:
- Uses `timingSafeEqual` for bearer token comparison, preventing timing attacks.
- Applies robust rate limiting across API endpoints to mitigate denial-of-service.
- Carefully sanitizes user-provided filenames and output paths against path traversal and injection; the `validateOutputDir` function is a good example of defense-in-depth, ensuring paths stay within allowed base directories.
- Loads configuration from environment variables, with a production check for `API_BEARER_TOKEN`.
- Avoids leaking stack traces in production error handling.
- Validates upload inputs (file types, tag limits).
- Routes database interactions through the Supabase client and RPCs, which are generally safe against SQL injection as long as the underlying functions are parameterized.
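The two checks above can be sketched in TypeScript. This is a minimal illustration of the techniques, not the server's actual code; the helper names (`checkBearerToken`, `isInsideBase`) are hypothetical:

```typescript
import { timingSafeEqual } from "node:crypto";
import path from "node:path";

// Constant-time bearer token comparison (hypothetical helper).
// timingSafeEqual throws if buffer lengths differ, so compare lengths
// first; this leaks only the token's length, not its contents.
function checkBearerToken(provided: string, expected: string): boolean {
  const a = Buffer.from(provided);
  const b = Buffer.from(expected);
  if (a.length !== b.length) return false;
  return timingSafeEqual(a, b);
}

// Path-containment check in the spirit of validateOutputDir: resolve
// the candidate against the base and require the result to remain
// inside it, defeating "../" traversal.
function isInsideBase(baseDir: string, candidate: string): boolean {
  const base = path.resolve(baseDir);
  const target = path.resolve(base, candidate);
  return target === base || target.startsWith(base + path.sep);
}
```

A plain `===` on tokens would short-circuit at the first mismatching character, letting an attacker recover the token byte by byte from response timing; `timingSafeEqual` compares every byte regardless.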
Similar Servers
qdrant-loader
A Model Context Protocol (MCP) server that provides advanced Retrieval-Augmented Generation (RAG) capabilities to AI development tools by bridging a QDrant knowledge base for intelligent, context-aware search.
mcp-raganything
Provides a FastAPI REST API and MCP server for Retrieval Augmented Generation (RAG) capabilities, integrating with the RAG-Anything and LightRAG libraries for multi-modal document processing and knowledge graph operations.
concept-rag
This MCP server provides conceptual search, document analysis, and library exploration capabilities over a knowledge base using LanceDB and LLM-based concept extraction.
the-pensieve
The Pensieve server acts as a RAG-based knowledge management system, allowing users to store, query, and analyze their knowledge using natural language and LLM-powered insights.