mcp-ragdocs
Enables AI assistants to enhance their responses with relevant documentation through semantic vector search, offering tools for managing and processing documentation efficiently.
An MCP server implementation that provides tools for retrieving and processing documentation through vector search, enabling AI assistants to augment their responses with relevant documentation context.
search_documentation
Search through the documentation using vector search
Returns relevant chunks of documentation with source information
list_sources
List all available documentation sources
Provides metadata about each source
extract_urls
Extract URLs from text and check if they're already in the documentation
Useful for preventing duplicate documentation
remove_documentation
Remove documentation from a specific source
Cleans up outdated or irrelevant documentation
list_queue
List all items in the processing queue
Shows status of pending documentation processing
run_queue
Process all items in the queue
Automatically adds new documentation to the vector store
clear_queue
Clear all items from the processing queue
Useful for resetting the system
add_documentation
Add new documentation to the processing queue
Queued documents are indexed into the vector store when the queue is run
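For example, a client invokes search_documentation with a natural-language query. The argument names below (query, limit) are illustrative and may not match the server's exact input schema:
{
  "name": "search_documentation",
  "arguments": {
    "query": "How do I configure the fallback embedding provider?",
    "limit": 5
  }
}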
The RAG Documentation tool is designed for augmenting AI assistant responses with relevant documentation context.
The project includes a docker-compose.yml file for easy containerized deployment. To start the services:
docker-compose up -d
To stop the services:
docker-compose down
The system includes a web interface that can be accessed after starting the Docker Compose services:
http://localhost:3030
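To confirm the stack is up, check the containers and the Qdrant REST API (this assumes the compose file exposes Qdrant on its default port 6333, matching the QDRANT_URL used in the configuration examples below):
docker-compose ps
curl http://localhost:6333/collections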
The system uses Ollama as the default embedding provider for local embeddings generation, with OpenAI available as a fallback option. This setup prioritizes local processing while maintaining reliability through cloud-based fallback.
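If you keep the default Ollama provider, make sure the embedding model you intend to use is available locally before starting the server, for example:
ollama pull nomic-embed-text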
EMBEDDING_PROVIDER: Choose the primary embedding provider ('ollama' or 'openai', default: 'ollama')
EMBEDDING_MODEL: Specify the model to use (optional)
OPENAI_API_KEY: Required when using OpenAI as provider
FALLBACK_PROVIDER: Optional backup provider ('ollama' or 'openai')
FALLBACK_MODEL: Optional model for fallback provider
Add this to your cline_mcp_settings.json:
{
"mcpServers": {
"rag-docs": {
"command": "node",
"args": ["/path/to/your/mcp-ragdocs/build/index.js"],
"env": {
"EMBEDDING_PROVIDER": "ollama", // default
"EMBEDDING_MODEL": "nomic-embed-text", // optional
"OPENAI_API_KEY": "your-api-key-here", // required for fallback
"FALLBACK_PROVIDER": "openai", // recommended for reliability
"FALLBACK_MODEL": "nomic-embed-text", // optional
"QDRANT_URL": "http://localhost:6333"
},
"disabled": false,
"autoApprove": [
"search_documentation",
"list_sources",
"extract_urls",
"remove_documentation",
"list_queue",
"run_queue",
"clear_queue",
"add_documentation"
]
}
}
}
Add this to your claude_desktop_config.json:
{
"mcpServers": {
"rag-docs": {
"command": "node",
"args": ["/path/to/your/mcp-ragdocs/build/index.js"],
"env": {
"EMBEDDING_PROVIDER": "ollama", // default
"EMBEDDING_MODEL": "nomic-embed-text", // optional
"OPENAI_API_KEY": "your-api-key-here", // required for fallback
"FALLBACK_PROVIDER": "openai", // recommended for reliability
"FALLBACK_MODEL": "nomic-embed-text", // optional
"QDRANT_URL": "http://localhost:6333"
}
}
}
}
The system uses Ollama by default for efficient local embedding generation. For optimal reliability:
{
// Ollama is used by default, no need to specify EMBEDDING_PROVIDER
"EMBEDDING_MODEL": "nomic-embed-text", // optional
"FALLBACK_PROVIDER": "openai",
"FALLBACK_MODEL": "text-embedding-3-small",
"OPENAI_API_KEY": "your-api-key-here"
}
This configuration ensures:
- Fast, local embedding generation with Ollama
- Automatic fallback to OpenAI if Ollama fails
- No external API calls unless necessary
Note: The system will automatically use the appropriate vector dimensions based on the provider:
- Ollama (nomic-embed-text): 768 dimensions
- OpenAI (text-embedding-3-small): 1536 dimensions
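If you switch providers and run into a dimension mismatch, you can inspect the existing Qdrant collection directly; the collection name below is a placeholder, so substitute the one the server actually creates:
curl http://localhost:6333/collections/documentation
The response includes the configured vector size (768 or 1536), which indicates which provider the collection was created for.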
This project is a fork of qpd-v/mcp-ragdocs, originally developed by qpd-v. The original project provided the foundation for this implementation.
Special thanks to the original creator, qpd-v, for their innovative work on the initial version of this MCP server. This fork has been enhanced with additional features and improvements by Rahul Retnan.
If the MCP server fails to start due to a port conflict, follow these steps:
1. Free up the port:
npx kill-port 3030
2. Restart the MCP server
3. If the issue persists, check for other processes using the port:
lsof -i :3030
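If a process is still holding the port after these steps, you can terminate it by the PID that lsof reports, for example (on Unix-like systems):
kill $(lsof -ti :3030)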