OpenRouter MCP Multimodal Server
Provides chat and image analysis capabilities through OpenRouter.ai's diverse model ecosystem, enabling both text conversations and powerful multimodal image processing with various AI models.
An MCP (Model Context Protocol) server that provides chat and image analysis capabilities through OpenRouter.ai's diverse model ecosystem, combining text chat functionality with powerful multimodal image analysis.
- Configurable temperature and other parameters
- Image Analysis: Support for various image sources (local files, URLs, data URLs)
- Model Selection: Support for default model configuration
- Performance Optimization: Normalized path processing for consistent behavior across platforms
- MCP Configuration Support: Flexible API key and model specification options
- Robust Error Handling: Multiple backup strategies for file reading
- Image Processing Enhancements
Install globally via npm:
npm install -g @stabgan/openrouter-mcp-multimodal
Or run directly with Docker:
docker run -i -e OPENROUTER_API_KEY=your-api-key-here stabgandocker/openrouter-mcp-multimodal:latest
Add one of the following configurations to your MCP settings file (e.g., cline_mcp_settings.json or claude_desktop_config.json):
Option 1: Using npx (Node.js)
{
"mcpServers": {
"openrouter": {
"command": "npx",
"args": [
"-y",
"@stabgan/openrouter-mcp-multimodal"
],
"env": {
"OPENROUTER_API_KEY": "your-api-key-here",
"DEFAULT_MODEL": "qwen/qwen2.5-vl-32b-instruct:free"
}
}
}
}
Option 2: Using uv (Python)
{
"mcpServers": {
"openrouter": {
"command": "uv",
"args": [
"run",
"-m",
"openrouter_mcp_multimodal"
],
"env": {
"OPENROUTER_API_KEY": "your-api-key-here",
"DEFAULT_MODEL": "qwen/qwen2.5-vl-32b-instruct:free"
}
}
}
}
Option 3: Using Docker
{
"mcpServers": {
"openrouter": {
"command": "docker",
"args": [
"run",
"--rm",
"-i",
"-e", "OPENROUTER_API_KEY=your-api-key-here",
"-e", "DEFAULT_MODEL=qwen/qwen2.5-vl-32b-instruct:free",
"stabgandocker/openrouter-mcp-multimodal:latest"
]
}
}
}
Option 4: Using Smithery
{
"mcpServers": {
"openrouter": {
"command": "smithery",
"args": [
"run",
"stabgan/openrouter-mcp-multimodal"
],
"env": {
"OPENROUTER_API_KEY": "your-api-key-here",
"DEFAULT_MODEL": "qwen/qwen2.5-vl-32b-instruct:free"
}
}
}
}
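In each option above, OPENROUTER_API_KEY is required, while DEFAULT_MODEL is optional and sets the model used whenever a request does not specify one.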
For comprehensive examples of how to use this MCP server, check out the examples directory. Each example comes with clear documentation and step-by-step instructions.
This project uses the following key dependencies:
- @modelcontextprotocol/sdk: ^1.8.0 - Latest MCP SDK for tool implementation
- openai: ^4.89.1 - OpenAI-compatible API client for OpenRouter
- sharp: ^0.33.5 - Fast image processing library
- axios: ^1.8.4 - HTTP client for API requests
- node-fetch: ^3.3.2 - Modern fetch implementation
Node.js 18 or later is required. All dependencies are regularly updated to ensure compatibility and security.
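The sharp dependency handles image preprocessing. As a rough illustration (not the server's exact internal code), the sketch below shows how a local image could be downscaled and encoded as a base64 data URL before being sent to a vision model; the helper name, size cap, and quality setting are example values, not the server's actual configuration.

import sharp from "sharp";

// Hypothetical helper: downscale a local image and encode it as a data URL.
// The 1024px width cap and JPEG quality 80 are illustrative values only.
async function toDataUrl(filePath) {
  const buffer = await sharp(filePath)
    .resize({ width: 1024, withoutEnlargement: true }) // leave smaller images unchanged
    .jpeg({ quality: 80 })
    .toBuffer();
  return `data:image/jpeg;base64,${buffer.toString("base64")}`;
}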
Send text or multimodal messages to OpenRouter models:
use_mcp_tool({
server_name: "openrouter",
tool_name: "mcp_openrouter_chat_completion",
arguments: {
model: "google/gemini-2.5-pro-exp-03-25:free", // Optional if default is set
messages: [
{
role: "system",
content: "You are a helpful assistant."
},
{
role: "user",
content: "What is the capital of France?"
}
],
temperature: 0.7 // Optional, defaults to 1.0
}
});
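When DEFAULT_MODEL is set in the MCP configuration, the model argument can be omitted and the server falls back to that default. For example:

use_mcp_tool({
  server_name: "openrouter",
  tool_name: "mcp_openrouter_chat_completion",
  arguments: {
    // No "model" specified: the server uses DEFAULT_MODEL
    // (qwen/qwen2.5-vl-32b-instruct:free in the configurations above).
    messages: [
      { role: "user", content: "Summarize the Model Context Protocol in one sentence." }
    ]
  }
});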
For multimodal messages with images:
use_mcp_tool({
server_name: "openrouter",
tool_name: "mcp_openrouter_chat_completion",
arguments: {
model: "anthropic/claude-3.5-sonnet",
messages: [
{
role: "user",
content: [
{
type: "text",
text: "What's in this image?"
},
{
type: "image_url",
image_url: {
url: "https://example.com/image.jpg"
}
}
]
}
]
}
});
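Per the feature list, image sources are not limited to remote HTTP URLs; local file paths and data URLs are also supported. The following sketch assumes that local paths and data URLs are passed through the same image_url.url field and that multiple image parts can be combined in a single message; adjust to the server's actual behavior if it differs.

use_mcp_tool({
  server_name: "openrouter",
  tool_name: "mcp_openrouter_chat_completion",
  arguments: {
    model: "qwen/qwen2.5-vl-32b-instruct:free",
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "Compare these two screenshots." },
          // Local file path, read and encoded by the server (assumed form)
          { type: "image_url", image_url: { url: "/home/user/screenshots/before.png" } },
          // Inline base64 data URL (placeholder content)
          { type: "image_url", image_url: { url: "data:image/png;base64,iVBORw0KGgo..." } }
        ]
      }
    ]
  }
});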