MCP server Deepseek R1
A Node.js/TypeScript implementation of a Model Context Protocol (MCP) server for the DeepSeek R1 language model. DeepSeek R1 is a powerful language model optimized for reasoning tasks with a context window of 8192 tokens, and this server integrates fully with Claude Desktop.
Why Node.js? This implementation uses Node.js/TypeScript as it provides the most stable integration with MCP servers. The Node.js SDK offers better type safety, error handling, and compatibility with Claude Desktop.
```shell
# Clone and install
git clone https://github.com/66julienmartin/MCP-server-Deepseek_R1.git
cd deepseek-r1-mcp
npm install

# Set up environment
cp .env.example .env  # then add your API key

# Build
npm run build
```
By default, this server uses the DeepSeek-R1 model. To use DeepSeek-V3 instead, change the model name in `src/index.ts`:
```typescript
// For DeepSeek-R1 (default)
model: "deepseek-reasoner"

// For DeepSeek-V3
model: "deepseek-chat"
```
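For illustration, here is a minimal sketch of how the chosen model name might flow into a chat-completion request body. The `buildRequest` helper and its signature are assumptions for this example (mirroring the defaults documented later in this README), not the server's actual code:

```typescript
type DeepseekModel = "deepseek-reasoner" | "deepseek-chat";

interface ChatRequest {
  model: DeepseekModel;
  messages: { role: "user"; content: string }[];
  max_tokens: number;
  temperature: number;
}

// Hypothetical helper: assembles the JSON body for a chat completion,
// applying this README's documented defaults (max_tokens 8192, temperature 0.2).
function buildRequest(
  prompt: string,
  model: DeepseekModel = "deepseek-reasoner",
  max_tokens = 8192,
  temperature = 0.2
): ChatRequest {
  return {
    model,
    messages: [{ role: "user", content: prompt }],
    max_tokens,
    temperature,
  };
}
```

Swapping in `"deepseek-chat"` as the `model` argument would target DeepSeek-V3 without touching the rest of the request.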
```
deepseek-r1-mcp/
├── src/
│   └── index.ts          # Main server implementation
├── build/                # Compiled files
│   └── index.js
├── LICENSE
├── README.md
├── package.json
├── package-lock.json
└── tsconfig.json
```
Create a `.env` file:
```
DEEPSEEK_API_KEY=your-api-key-here
```
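The server needs this key at startup, so it is worth failing fast when it is missing. The `requireApiKey` helper below is an illustrative sketch, not part of the actual implementation:

```typescript
// Illustrative guard: throw early if DEEPSEEK_API_KEY is missing, so a
// misconfigured .env surfaces at startup rather than on the first request.
function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.DEEPSEEK_API_KEY;
  if (!key || key.trim() === "") {
    throw new Error("DEEPSEEK_API_KEY is not set; add it to your .env file");
  }
  return key;
}

// Typical call site: requireApiKey(process.env)
```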
Update Claude Desktop configuration:
```json
{
  "mcpServers": {
    "deepseek_r1": {
      "command": "node",
      "args": ["/path/to/deepseek-r1-mcp/build/index.js"],
      "env": {
        "DEEPSEEK_API_KEY": "your-api-key"
      }
    }
  }
}
```
```shell
npm run dev    # Watch mode
npm run build  # Build for production
```
```json
{
  "name": "deepseek_r1",
  "arguments": {
    "prompt": "Your prompt here",
    "max_tokens": 8192,   // Maximum tokens to generate
    "temperature": 0.2    // Controls randomness
  }
}
```
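The tool's input schema (listed at the end of this README) bounds `max_tokens` to 1–8192 and `temperature` to 0–2, with `prompt` required. A sketch of how those arguments could be validated; the `validateArgs` helper is hypothetical, for illustration only:

```typescript
interface ToolArgs {
  prompt: string;
  max_tokens?: number;
  temperature?: number;
}

// Illustrative validation mirroring the published input schema:
// prompt required, max_tokens in [1, 8192], temperature in [0, 2].
function validateArgs(args: ToolArgs): string[] {
  const errors: string[] = [];
  if (!args.prompt) {
    errors.push("prompt is required");
  }
  if (args.max_tokens !== undefined && (args.max_tokens < 1 || args.max_tokens > 8192)) {
    errors.push("max_tokens must be between 1 and 8192");
  }
  if (args.temperature !== undefined && (args.temperature < 0 || args.temperature > 2)) {
    errors.push("temperature must be between 0 and 2");
  }
  return errors;
}
```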
The default value of `temperature` is 0.2. DeepSeek recommends setting `temperature` according to your specific use case:
| Use case | Temperature | Example |
|---|---|---|
| Coding / Math | 0.0 | Code generation, mathematical calculations |
| Data Cleaning / Data Analysis | 1.0 | Data processing tasks |
| General Conversation | 1.3 | Chat and dialogue |
| Translation | 1.3 | Language translation |
| Creative Writing / Poetry | 1.5 | Story writing, poetry generation |
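The table above can be expressed as a simple lookup when choosing a `temperature` per request. The key names and the `temperatureFor` helper here are illustrative assumptions:

```typescript
// Recommended temperatures from the table above (key names are illustrative).
const recommendedTemperature: Record<string, number> = {
  "coding/math": 0.0,
  "data-analysis": 1.0,
  "conversation": 1.3,
  "translation": 1.3,
  "creative-writing": 1.5,
};

// Fall back to the server default (0.2) for unrecognized use cases.
function temperatureFor(useCase: string): number {
  return recommendedTemperature[useCase] ?? 0.2;
}
```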
The server provides detailed error messages for common issues:

- API authentication errors
- Invalid parameters
- Rate limiting
- Network issues
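One way such errors could be surfaced is by mapping HTTP status codes from the DeepSeek API to the categories above. This mapping is a sketch, not the server's actual logic:

```typescript
// Hypothetical mapping from API HTTP status codes to the error
// categories listed above; the real implementation may differ.
function describeApiError(status: number): string {
  switch (status) {
    case 401:
      return "API authentication error: check your DEEPSEEK_API_KEY";
    case 422:
      return "Invalid parameters: check prompt, max_tokens, and temperature";
    case 429:
      return "Rate limited: reduce request frequency or retry later";
    default:
      return status >= 500
        ? "DeepSeek service or network issue: retry with backoff"
        : `Unexpected error (HTTP ${status})`;
  }
}
```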
Contributions are welcome! Please feel free to submit a Pull Request.
MIT
The server exposes a single tool, `deepseek_r1`:

```json
[
  {
    "name": "deepseek_r1",
    "description": "Generate text using DeepSeek R1 model",
    "inputSchema": {
      "type": "object",
      "properties": {
        "prompt": {
          "description": "Input text for DeepSeek",
          "type": "string"
        },
        "max_tokens": {
          "description": "Maximum tokens to generate (default: 8192)",
          "type": "number",
          "minimum": 1,
          "maximum": 8192
        },
        "temperature": {
          "description": "Sampling temperature (default: 0.2)",
          "type": "number",
          "minimum": 0,
          "maximum": 2
        }
      },
      "required": [
        "prompt"
      ]
    }
  }
]
```