openAi_MCP_server
Verified Safe · by AbhilashPoshanagari
Overview
An MCP server that acts as an AI assistant, providing RAG capabilities for deep research, interactive UI generation (tables, maps, forms, buttons), PostgreSQL database access with query validation, and handling of long-running tasks. It features an integrated OAuth 2.0 authentication server and uses an LLM agent to orchestrate tools and responses.
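As a rough, hypothetical sketch of the tool-orchestration pattern described above (assuming the MCP Python SDK's FastMCP class, which this repository may or may not use), a RAG-style tool could be registered like this; the tool name, parameters, and retrieval logic are illustrative, not code from this project:

```python
# Hypothetical sketch only: the actual openAi_MCP_server tools and internals may differ.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("openai-mcp-demo")

@mcp.tool()
def rag_search(query: str, top_k: int = 5) -> str:
    """Return the top_k most relevant document snippets for a research query."""
    # A real implementation would embed the query, search a vector store
    # (e.g. documents held in MongoDB), and let the LLM agent format the results.
    return f"(demo) retrieval backend not wired up; received query: {query!r}"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; SSE/HTTP transports are also supported
```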
Installation
python rag_mcp_server.py

Environment Variables
- REMOTE_MONGODB_SERVER
- REMOTE_MONGODB_DB
- REMOTE_MONGODB_COLLECTION
- POSTGRE_SERVER
- POSTGRE_PORT
- POSTGRE_DB
- POSTGRE_USERNAME
- POSTGRE_PASSWORD
- MCP_SERVER_NAME
- MCP_HOST
- MCP_PORT
- SENTENCE_TRANSFORMER_MODEL_PATH
- OPENAI_EMBEDDING_MODEL
- OPENAI_API_KEY
- NGROK_AUTHTOKEN
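As a hedged illustration of how these variables could be supplied when launching the server (all values below are placeholders, not defaults shipped with the project):

```python
# Illustrative launcher: exports the required environment variables, then starts the server.
import os
import subprocess

env = {
    **os.environ,
    "REMOTE_MONGODB_SERVER": "mongodb://localhost:27017",
    "REMOTE_MONGODB_DB": "rag_db",
    "REMOTE_MONGODB_COLLECTION": "documents",
    "POSTGRE_SERVER": "localhost",
    "POSTGRE_PORT": "5432",
    "POSTGRE_DB": "appdb",
    "POSTGRE_USERNAME": "postgres",
    "POSTGRE_PASSWORD": "change-me",
    "MCP_SERVER_NAME": "openai-mcp-server",
    "MCP_HOST": "0.0.0.0",
    "MCP_PORT": "8000",
    "SENTENCE_TRANSFORMER_MODEL_PATH": "/models/all-MiniLM-L6-v2",
    "OPENAI_EMBEDDING_MODEL": "text-embedding-3-small",
    "OPENAI_API_KEY": "<your-openai-key>",
    "NGROK_AUTHTOKEN": "<your-ngrok-token>",
}

subprocess.run(["python", "rag_mcp_server.py"], env=env, check=True)
```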
Security Notes
1. Hardcoded client_id ("demo_client") and client_secret ("demo_secret") in `rag_mcp_auth_server.py` belong to a demo client and should not be used in production environments.
2. `ast.literal_eval` is used in `map_layout_tool` and `RestApiHelper.safe_parse_features`. While safer than `eval`, it still parses arbitrary Python literals and could be exploited with carefully crafted inputs to consume excessive resources or trigger other unintended behavior.
3. CORS is configured with `allow_origins=["*"]` in `rag_mcp_server.py`, which is insecure for production and should be restricted to specific trusted origins.
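As a hedged sketch of the mitigation for the third note (assuming the server builds on Starlette/FastAPI, which the `allow_origins` setting suggests; the variable names below are illustrative), the wildcard could be replaced with an explicit allow-list:

```python
# Illustrative sketch: restrict CORS to trusted origins instead of "*".
from starlette.applications import Starlette
from starlette.middleware.cors import CORSMiddleware

TRUSTED_ORIGINS = ["https://app.example.com"]  # replace with your real front-end origins

app = Starlette()
app.add_middleware(
    CORSMiddleware,
    allow_origins=TRUSTED_ORIGINS,   # instead of allow_origins=["*"]
    allow_credentials=True,
    allow_methods=["GET", "POST"],
    allow_headers=["Authorization", "Content-Type"],
)
```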
Similar Servers
mcp-servers
An MCP server for fetching, cleaning, and intelligently extracting content from web pages, designed for agent-building frameworks.
agentxsuite
A unified open-source platform for connecting, managing, and monitoring AI agents and tools across various Model Context Protocol (MCP) servers.
mcp-server-llmling
mcp-server-llmling serves as a Model Context Protocol (MCP) server, providing a YAML-based system to configure and manage LLM applications, including resources, prompts, and tools.
ai-agent-mcp-server
This project implements an MCP (Model Context Protocol) server and client using AMQP (RabbitMQ) for communication, enabling an LLM-powered agent to interact with internal tools and data resources.