mcp_servers
by manish6007
Overview
A combined Model Context Protocol (MCP) server that provides tools for querying Amazon Redshift databases and performing vector-based knowledge base searches.
Installation
python -m combined_mcp_server.main
Environment Variables
- AWS_REGION
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- REDSHIFT_CLUSTER_ID
- REDSHIFT_DATABASE
- REDSHIFT_HOST
- REDSHIFT_RESULTS_BUCKET
- KNOWLEDGEBASE_S3_BUCKET
- POSTGRES_SECRET_NAME
- POSTGRES_HOST
- POSTGRES_DATABASE
- POSTGRES_USER
- POSTGRES_PASSWORD
- BEDROCK_EMBEDDING_MODEL
- MCP_TRANSPORT
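Since the server reads all of its configuration from the environment, a quick pre-flight check before launch can surface missing variables early. A minimal sketch (the `missing_env_vars` helper is illustrative, and which variables are mandatory versus optional at startup is an assumption; adjust the list to your deployment):

```python
import os

# Variables assumed mandatory for the Redshift tools; the server may
# treat some of these (or the Postgres/Bedrock ones) as optional.
REQUIRED_VARS = [
    "AWS_REGION",
    "REDSHIFT_CLUSTER_ID",
    "REDSHIFT_DATABASE",
    "REDSHIFT_HOST",
    "REDSHIFT_RESULTS_BUCKET",
]

def missing_env_vars(required=REQUIRED_VARS):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

if __name__ == "__main__":
    missing = missing_env_vars()
    if missing:
        raise SystemExit(f"missing environment variables: {', '.join(missing)}")
```

Running this check (or an equivalent one) before `python -m combined_mcp_server.main` fails fast with a clear message instead of a mid-query AWS error.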
Security Notes
- The `run_query` tool accepts raw SQL directly, a significant SQL injection vulnerability if it is exposed to untrusted user input or if an LLM generates malicious SQL.
- The `list_tables` and `describe_table` tools use f-strings to embed `schema` and `table` names directly into SQL queries, also creating SQL injection opportunities unless the calling agent or application rigorously sanitizes these parameters.
- The `query_vectorstore` tool similarly interpolates the `query` parameter into `plainto_tsquery` via an f-string, which could lead to unexpected behavior or resource exhaustion with malicious input.
- While these tools are intended for LLM agents, the patterns above pose a high risk without strict input validation or sandboxing.
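One common mitigation for the identifier-interpolation issues above is to validate `schema` and `table` names against a strict allowlist pattern before embedding them, and to pass value parameters (such as the vector-search `query` string) through the driver's placeholder mechanism instead of f-strings. A minimal sketch, not part of this server's code (the `safe_identifier` and `count_rows_sql` helpers are hypothetical):

```python
import re

# Accept only plain SQL identifiers: letters, digits, underscores,
# not starting with a digit. Anything else is rejected before it can
# reach an f-string-built query.
_IDENTIFIER_RE = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")

def safe_identifier(name: str) -> str:
    """Return `name` if it is a safe SQL identifier, else raise."""
    if not _IDENTIFIER_RE.fullmatch(name):
        raise ValueError(f"unsafe SQL identifier: {name!r}")
    return name

def count_rows_sql(schema: str, table: str) -> str:
    # Identifiers cannot be bound as query parameters, so they are
    # validated and then double-quoted before being embedded.
    return f'SELECT COUNT(*) FROM "{safe_identifier(schema)}"."{safe_identifier(table)}"'
```

For plain values the driver's placeholders do the escaping, e.g. with psycopg2: `cur.execute("SELECT plainto_tsquery(%s)", (query,))`. Neither step sandboxes `run_query` itself; raw-SQL tools still need a read-only role or an execution sandbox.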
Similar Servers
context-portal
Manages structured project context for AI assistants and developer tools, enabling Retrieval Augmented Generation (RAG) and prompt caching within IDEs.
opensearch-mcp-server-py
Enables AI assistants and LLMs to interact with OpenSearch clusters by providing a standardized Model Context Protocol (MCP) interface through built-in and dynamic tools.
enhanced-postgres-mcp-server
This server acts as a Model Context Protocol interface for PostgreSQL, enabling LLMs to query data, modify records, and manage database schema objects with read and write capabilities.
bluera-knowledge
Provides a semantic knowledge base and intelligent web crawling capabilities to power coding agents, enabling them to search internal project files, Git repositories, and crawled web documentation.