Vibe Coder MCP
An MCP server that supercharges AI assistants with powerful tools for software development, enabling research, planning, code generation, and project scaffolding through natural language interaction.
Vibe Coder is an MCP (Model Context Protocol) server designed to supercharge your AI assistant (like Cursor, Cline AI, or Claude Desktop) with powerful tools for software development. It helps with research, planning, generating requirements, creating starter projects, and more!
Vibe Coder MCP integrates with MCP-compatible clients to provide the following capabilities:
* Workflow Execution: Runs predefined sequences of tool calls defined in workflows.json (run-workflow).
* Code Generation: Creates boilerplate code from a description and target language (generate-code-stub).
* Code Refactoring: Modifies existing code snippets according to refactoring instructions (refactor-code).
* Dependency Analysis: Parses manifest files such as package.json or requirements.txt (analyze-dependencies).
* Git Integration: Summarizes the current Git status and changes (git-summary).
* Research & Planning: Performs deep technical research (research-manager) and generates planning documents like PRDs (generate-prd), user stories (generate-user-stories), task lists (generate-task-list), and development rules (generate-rules).
* Project Scaffolding: Generates full-stack starter kits (generate-fullstack-starter-kit).
* Job Results: Results of long-running operations can be retrieved with the get-job-result tool.

(See the "Detailed Tool Documentation" and "Feature Details" sections below for more.)
Follow these micro-steps to get the Vibe Coder MCP server running and connected to your AI assistant.
Check Node.js Installation (v18+ required):
node -v
If not installed or outdated: Download from nodejs.org.
Check Git Installation:
git --version
If not installed: Download from git-scm.com.
Get an OpenRouter API Key:
Sign up at openrouter.ai if you don't have an account, then create an API key (you will add it to your .env file in Step 4).
Navigate to where you want to store the project:
cd ~/Documents # Example: Change to your preferred location
Clone the Repository:
Run:
git clone https://github.com/freshtechbro/vibe-coder-mcp.git
(Or use your fork's URL if applicable)
Navigate to Project Directory:
cd vibe-coder-mcp
Choose the appropriate script for your operating system:
For Windows: 1. In your terminal (still in the vibe-coder-mcp directory), run:
setup.bat
2. Wait for the script to complete (it will install dependencies, build the project, and create necessary directories).
3. If you see any error messages, refer to the Troubleshooting section below.
For macOS or Linux: 1. Make the script executable:
chmod +x setup.sh
2. Run the script:
./setup.sh
3. Wait for the script to complete.
4. If you see any error messages, refer to the Troubleshooting section below.
The script performs these actions:
* Checks Node.js version (v18+)
* Installs all dependencies via npm
* Creates the necessary output directories
* Builds the TypeScript project
* Creates a default .env file if one doesn't exist (you will populate this next)
* Sets executable permissions (on Unix systems)
Step 4: Configure Environment Variables (.env)

1. Locate the .env File: Find the .env file created by the setup script in the main vibe-coder-mcp directory.
2. Add Your OpenRouter API Key: Find the line OPENROUTER_API_KEY=your_openrouter_api_key_here and replace your_openrouter_api_key_here with your actual OpenRouter API key.
3. Configure Output Directory (Optional): To change where generated files are saved (default is VibeCoderOutput/ inside the project), add this line:
   VIBE_CODER_OUTPUT_DIR=/path/to/your/desired/output/directory
   Use an absolute path with forward slashes (/). If this variable is not set, the default directory will be used.
4. Review Other Settings (Optional): Check the model settings (e.g., GEMINI_MODEL, PERPLEXITY_MODEL) to ensure they're available on your OpenRouter plan. The llm_config.json file provides more granular control per task if needed. You can also set LOG_LEVEL (default: info); options include 'fatal', 'error', 'warn', 'info', 'debug', 'trace'.
5. Save the .env file.
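For reference, a filled-in .env might look like the sketch below. The key value and the commented model entries are placeholders; only the variable names come from this guide.

```
# Required: OpenRouter API key
OPENROUTER_API_KEY=sk-or-v1-xxxxxxxxxxxxxxxx

# Optional: override the default output directory (VibeCoderOutput/ inside the project)
# VIBE_CODER_OUTPUT_DIR=/absolute/path/to/output

# Optional: model overrides -- must be available on your OpenRouter plan
# GEMINI_MODEL=...
# PERPLEXITY_MODEL=...

# Optional: logging verbosity (fatal | error | warn | info | debug | trace)
LOG_LEVEL=info
```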
Step 5: Connect Vibe Coder to Your AI Assistant

This crucial step connects Vibe Coder to your AI assistant. Each environment requires slightly different configuration.
Step 5.1: Get the Absolute Path to build/index.js

You need the full, absolute path to the build/index.js file:
For Windows: 1. In your terminal, navigate to the build directory:
cd build
2. Get the absolute path:
echo %cd%\index.js
3. Copy the output (e.g., C:\Users\YourName\Projects\vibe-coder-mcp\build\index.js)
For macOS/Linux: 1. In your terminal, navigate to the build directory:
cd build
2. Get the absolute path:
pwd
3. Append /index.js to the output and copy the result (e.g., /Users/YourName/Projects/vibe-coder-mcp/build/index.js)
Step 5.2: Create Your Configuration Block
Copy this JSON template:
"vibe-coder-mcp": {
"command": "node",
"args": ["PATH_PLACEHOLDER"],
"env": {
"NODE_ENV": "production"
// API Keys and other sensitive config are now loaded via the .env file
// You can optionally set VIBE_CODER_OUTPUT_DIR here if you prefer it over .env
// "VIBE_CODER_OUTPUT_DIR": "/absolute/path/to/output"
},
"disabled": false,
"autoApprove": [
"research",
"generate-rules",
"generate-prd",
"generate-user-stories",
"generate-task-list",
"generate-fullstack-starter-kit",
"generate-code-stub",
"refactor-code",
"analyze-dependencies",
"git-summary",
"run-workflow"
]
}
Replace PATH_PLACEHOLDER with the absolute path you obtained in Step 5.1.
Important: Use forward slashes / even on Windows (e.g., C:/Users/...).
Important: Do NOT put your OPENROUTER_API_KEY directly in this configuration block anymore. It should only be in your .env file.
To add the configuration via your editor's settings.json:
1. Open the Command Palette (Ctrl+Shift+P on Windows/Linux, or Cmd+Shift+P on macOS) and select Preferences: Open User Settings (JSON).
2. Find or add the mcpServers object. If it doesn't exist, add: "mcpServers": {}
3. Add your configuration block from Step 5.2 inside the mcpServers object.
4. Save the file (Ctrl+S or Cmd+S).

Example of a complete settings.json section:
"mcpServers": {
"some-existing-server": {
// existing configuration...
},
"vibe-coder-mcp": {
"command": "node",
"args": ["C:/Users/YourName/Projects/vibe-coder-mcp/build/index.js"],
// Rest of your configuration...
}
}
Locate the MCP settings file for your platform:
* Windows: C:\Users\[YourUsername]\AppData\Roaming\Cursor\User\globalStorage\saoudrizwan.claude-dev\settings\cline_mcp_settings.json
* macOS: ~/Library/Application Support/Cursor/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json
* Linux: ~/.config/Cursor/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json
Open this file with a text editor.
Find or add the mcpServers object:
* If the file is empty, add: {"mcpServers": {}}
* If it exists but has no mcpServers key, add it at the root level
Add your configuration block inside the mcpServers object (paste the block from Step 5.2).
Save the file.
Restart VS Code completely.
Locate the Claude Desktop configuration file for your platform:
* Windows: C:\Users\[YourUsername]\AppData\Roaming\Claude\claude_desktop_config.json
* macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
* Linux: ~/.config/Claude/claude_desktop_config.json
Open this file with a text editor.
Find or add the mcpServers object at the root level:
* If it doesn't exist, add: "mcpServers": {}
* If it already has mcpServers, locate it
Add your configuration block inside the mcpServers object (paste the block from Step 5.2).
Save the file.
Close and reopen Claude Desktop.
Example of a complete claude_desktop_config.json:
{
  "theme": "system",
  "mcpServers": {
    "vibe-coder-mcp": {
      "command": "node",
      "args": ["/Users/YourName/Projects/vibe-coder-mcp/build/index.js"],
      "env": {
        "NODE_ENV": "production"
        // API keys are loaded from the .env file -- do not put OPENROUTER_API_KEY here
      },
      "disabled": false,
      "autoApprove": [
        // Your auto-approve tools...
      ]
    }
  }
}
Completely restart your AI assistant application.
Test a Simple Command:
Type a test command like: Research modern JavaScript frameworks
Check for a Proper Response: if the connection is working, the assistant should route the request to the research tool and return a structured research result rather than a generic chat answer.
The Vibe Coder MCP server follows a modular architecture centered around a tool registry pattern:
flowchart TD
subgraph Initialization
Init[index.ts] --> Config[Load Configuration]
Config --> Server[Create MCP Server]
Server --> ToolReg[Register Tools]
ToolReg --> InitEmbed[Initialize Embeddings]
InitEmbed --> Ready[Server Ready]
end
subgraph Request_Flow
Req[Client Request] --> ReqProc[Request Processor]
ReqProc --> Route[Routing System]
Route --> Execute[Tool Execution]
Execute --> Response[Response to Client]
end
subgraph Routing_System ["Routing System (Hybrid Matcher)"]
Route --> Semantic[Semantic Matcher]
Semantic --> |High Confidence| Registry[Tool Registry]
Semantic --> |Low Confidence| SeqThink[Sequential Thinking]
SeqThink --> Registry
end
subgraph Tool_Execution
Registry --> |Get Definition| Definition[Tool Definition]
Definition --> |Validate Input| ZodSchema[Zod Validation]
ZodSchema --> |Execute| Executor[Tool Executor]
Executor --> |May Use| Helper[Utility Helpers]
Helper --> |Research| Research[Research Helper]
Helper --> |File Ops| File[File I/O]
Helper --> |Embeddings| Embed[Embedding Helper]
Helper --> |Git| Git[Git Helper]
Executor --> ReturnResult[Return Result]
end
subgraph Error_Handling
ReturnResult --> |Success| Success[Success Response]
ReturnResult --> |Error| ErrorHandler[Error Handler]
ErrorHandler --> CustomErr[Custom Error Types]
CustomErr --> FormattedErr[Formatted Error Response]
end
Execute --> |Session State| State[Session State]
State --> |Persists Between Calls| ReqProc
vibe-coder-mcp/
├── .env # Environment configuration
├── mcp-config.json # Example MCP configuration
├── package.json # Project dependencies
├── README.md # This documentation
├── setup.bat # Windows setup script
├── setup.sh # macOS/Linux setup script
├── tsconfig.json # TypeScript configuration
├── vitest.config.ts # Vitest (testing) configuration
├── workflows.json # Workflow definitions
├── build/ # Compiled JavaScript (after build)
├── docs/ # Additional documentation
├── VibeCoderOutput/ # Tool output directory
│ ├── research-manager/
│ ├── rules-generator/
│ ├── prd-generator/
│ ├── user-stories-generator/
│ ├── task-list-generator/
│ ├── fullstack-starter-kit-generator/
│ └── workflow-runner/
└── src/ # Source code
├── index.ts # Entry point
├── logger.ts # Logging configuration (Pino)
├── server.ts # MCP server setup
├── services/ # Core services
│ ├── hybrid-matcher/ # Request routing orchestration
│ ├── request-processor/ # Handles incoming requests
│ ├── routing/ # Semantic routing & registry
│ │ ├── embeddingStore.ts # Tool embedding storage
│ │ ├── semanticMatcher.ts # Semantic matching
│ │ └── toolRegistry.ts # Tool registration/execution
│ ├── state/ # Session state management
│ │ └── sessionState.ts # In-memory state storage
│ └── workflows/ # Workflow execution
│ └── workflowExecutor.ts # Workflow engine
├── testUtils/ # Testing utilities
│ └── mockLLM.ts # Mock LLM for tests
├── tools/ # Tool implementations
│ ├── index.ts # Tool registration
│ ├── sequential-thinking.ts # Fallback routing
│ ├── code-refactor-generator/ # Code refactoring
│ ├── code-stub-generator/ # Code stub creation
│ ├── dependency-analyzer/ # Dependency analysis
│ ├── fullstack-starter-kit-generator/ # Project gen
│ ├── git-summary-generator/ # Git integration
│ ├── prd-generator/ # PRD creation
│ ├── research-manager/ # Research tool
│ ├── rules-generator/ # Rules creation
│ ├── task-list-generator/ # Task lists
│ ├── user-stories-generator/ # User stories
│ └── workflow-runner/ # Workflow execution
├── types/ # TypeScript definitions
│ ├── globals.d.ts
│ ├── sequentialThought.ts
│ ├── tools.ts
│ └── workflow.ts
└── utils/ # Shared utilities
├── embeddingHelper.ts # Embedding generation
├── errors.ts # Custom error classes
├── fileReader.ts # File I/O
├── gitHelper.ts # Git operations
└── researchHelper.ts # Research functionality
Vibe Coder uses a sophisticated routing approach to select the right tool for each request:
flowchart TD
Start[Client Request] --> Process[Process Request]
Process --> Hybrid[Hybrid Matcher]
subgraph "Primary: Semantic Routing"
Hybrid --> Semantic[Semantic Matcher]
Semantic --> Embeddings[Query Embeddings]
Embeddings --> Tools[Tool Embeddings]
Tools --> Compare[Compare via Cosine Similarity]
Compare --> Score[Score & Rank Tools]
Score --> Confidence{High Confidence?}
end
Confidence -->|Yes| Registry[Tool Registry]
subgraph "Fallback: Sequential Thinking"
Confidence -->|No| Sequential[Sequential Thinking]
Sequential --> LLM[LLM Analysis]
LLM --> ThoughtChain[Thought Chain]
ThoughtChain --> Extraction[Extract Tool Name]
Extraction --> Registry
end
Registry --> Executor[Execute Tool]
Executor --> Response[Return Response]
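As a rough illustration of this hybrid flow, the sketch below scores a request against precomputed tool embeddings and falls back to sequential thinking when confidence is low. The helper names, threshold value, and data shapes are assumptions for illustration, not the actual exports of src/services/routing/.

```typescript
// Illustrative sketch only -- names and threshold are assumptions.
type ToolMatch = { toolName: string; score: number };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB) || 1);
}

async function routeRequest(
  request: string,
  embedQuery: (text: string) => Promise<number[]>,              // assumed embedding helper
  toolEmbeddings: Map<string, number[]>,                        // toolName -> precomputed embedding
  sequentialThinkingFallback: (req: string) => Promise<string>, // assumed LLM-based fallback
  confidenceThreshold = 0.75                                    // assumed threshold
): Promise<string> {
  const queryEmbedding = await embedQuery(request);

  // Score every registered tool against the query embedding and rank by similarity.
  const ranked: ToolMatch[] = [...toolEmbeddings.entries()]
    .map(([toolName, emb]) => ({ toolName, score: cosineSimilarity(queryEmbedding, emb) }))
    .sort((a, b) => b.score - a.score);

  const best = ranked[0];
  if (best && best.score >= confidenceThreshold) {
    return best.toolName;                      // high confidence: use the semantic match
  }
  return sequentialThinkingFallback(request);  // low confidence: LLM-based routing
}
```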
The Tool Registry is a central component for managing tool definitions and execution:
flowchart TD
subgraph "Tool Registration (at import)"
Import[Import Tool] --> Register[Call registerTool]
Register --> Store[Store in Registry Map]
end
subgraph "Tool Definition"
Def[ToolDefinition] --> Name[Tool Name]
Def --> Desc[Description]
Def --> Schema[Zod Schema]
Def --> Exec[Executor Function]
end
subgraph "Server Initialization"
Init[server.ts] --> Import
Init --> GetAll[getAllTools]
GetAll --> Loop[Loop Through Tools]
Loop --> McpReg[Register with MCP Server]
end
subgraph "Tool Execution"
McpReg --> ExecTool[executeTool Function]
ExecTool --> GetTool[Get Tool from Registry]
GetTool --> Validate[Validate Input]
Validate -->|Valid| ExecFunc[Run Executor Function]
Validate -->|Invalid| ValidErr[Return Validation Error]
ExecFunc -->|Success| SuccessResp[Return Success Response]
ExecFunc -->|Error| HandleErr[Catch & Format Error]
HandleErr --> ErrResp[Return Error Response]
end
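A minimal sketch of this registry pattern is shown below, assuming simplified names and result shapes; the real toolRegistry.ts is more elaborate, but the flow (register at import, look up by name, validate with Zod, execute, format errors) is the same idea.

```typescript
import { z } from "zod";

// Illustrative sketch of the registry pattern; shapes are assumptions,
// not the actual definitions in src/services/routing/toolRegistry.ts.
interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: z.ZodTypeAny;
  executor: (params: unknown) => Promise<{ content: { type: "text"; text: string }[] }>;
}

const registry = new Map<string, ToolDefinition>();

export function registerTool(def: ToolDefinition): void {
  registry.set(def.name, def); // called as a side effect when the tool module is imported
}

export function getAllTools(): ToolDefinition[] {
  return [...registry.values()];
}

export async function executeTool(name: string, rawParams: unknown) {
  const tool = registry.get(name);
  if (!tool) {
    return { isError: true, content: [{ type: "text", text: `Unknown tool: ${name}` }] };
  }
  // Validate input against the tool's Zod schema before executing.
  const parsed = tool.inputSchema.safeParse(rawParams);
  if (!parsed.success) {
    return { isError: true, content: [{ type: "text", text: `Invalid input: ${parsed.error.message}` }] };
  }
  try {
    return await tool.executor(parsed.data);
  } catch (err) {
    // Catch and format errors into a consistent response shape.
    return { isError: true, content: [{ type: "text", text: `Tool failed: ${(err as Error).message}` }] };
  }
}
```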
The Sequential Thinking mechanism provides LLM-based fallback routing:
flowchart TD
Start[Start] --> Estimate[Estimate Number of Steps]
Estimate --> Init[Initialize with System Prompt]
Init --> First[Generate First Thought]
First --> Context[Add to Context]
Context --> Loop{Needs More Thoughts?}
Loop -->|Yes| Next[Generate Next Thought]
Next -->|Standard| AddStd[Add to Context]
Next -->|Revision| Rev[Mark as Revision]
Next -->|New Branch| Branch[Mark as Branch]
Rev --> AddRev[Add to Context]
Branch --> AddBranch[Add to Context]
AddStd --> Loop
AddRev --> Loop
AddBranch --> Loop
Loop -->|No| Extract[Extract Final Solution]
Extract --> End[End With Tool Selection]
subgraph "Error Handling"
Next -->|Error| Retry[Retry with Simplified Request]
Retry -->|Success| AddRetry[Add to Context]
Retry -->|Failure| FallbackEx[Extract Partial Solution]
AddRetry --> Loop
FallbackEx --> End
end
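The loop can be pictured roughly as follows; callLLM, the Thought shape, and the prompt wording are placeholders for illustration rather than the actual sequential-thinking implementation.

```typescript
// Illustrative sketch only; the LLM helper and prompt are assumptions.
interface Thought { text: string; nextThoughtNeeded: boolean }

async function selectToolViaSequentialThinking(
  request: string,
  toolNames: string[],
  callLLM: (prompt: string) => Promise<Thought>, // assumed LLM helper
  maxThoughts = 5
): Promise<string | null> {
  const context: string[] = [];
  for (let i = 0; i < maxThoughts; i++) {
    const prompt =
      `User request: ${request}\n` +
      `Available tools: ${toolNames.join(", ")}\n` +
      `Previous thoughts:\n${context.join("\n")}\n` +
      `Thought ${i + 1}: which tool best handles this request?`;
    const thought = await callLLM(prompt);
    context.push(thought.text);                // accumulate the thought chain
    if (!thought.nextThoughtNeeded) break;     // the chain decided it has enough reasoning
  }
  // Extract the final tool selection: the first known tool name mentioned in the last thought.
  const last = context[context.length - 1] ?? "";
  return toolNames.find((name) => last.includes(name)) ?? null;
}
```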
flowchart TD
Start[Client Request] --> SessionID[Extract Session ID]
SessionID --> Store{State Exists?}
Store -->|Yes| Retrieve[Retrieve Previous State]
Store -->|No| Create[Create New State]
Retrieve --> Context[Add Context to Tool]
Create --> NoContext[Execute Without Context]
Context --> Execute[Execute Tool]
NoContext --> Execute
Execute --> SaveState[Update Session State]
SaveState --> Response[Return Response to Client]
subgraph "Session State Structure"
State[SessionState] --> PrevCall[Previous Tool Call]
State --> PrevResp[Previous Response]
State --> Timestamp[Timestamp]
end
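Conceptually, the session store is little more than an in-memory map keyed by session ID. The sketch below uses assumed field names loosely based on the diagram above, not the exact contents of sessionState.ts.

```typescript
// Minimal sketch of per-session state; field names are assumptions.
interface SessionState {
  previousToolCall?: { toolName: string; params: unknown };
  previousResponse?: unknown;
  timestamp: number;
}

const sessions = new Map<string, SessionState>();

export function getSessionState(sessionId: string): SessionState {
  let state = sessions.get(sessionId);
  if (!state) {
    state = { timestamp: Date.now() };  // new session: execute without prior context
    sessions.set(sessionId, state);
  }
  return state;
}

export function updateSessionState(sessionId: string, toolName: string, params: unknown, response: unknown): void {
  // Persist the latest call and response so the next request can reuse them as context.
  sessions.set(sessionId, {
    previousToolCall: { toolName, params },
    previousResponse: response,
    timestamp: Date.now(),
  });
}
```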
The Workflow system enables multi-step sequences:
flowchart TD
Start[Client Request] --> Parse[Parse Workflow Request]
Parse --> FindFlow[Find Workflow in workflows.json]
FindFlow --> Steps[Extract Steps]
Steps --> Loop[Process Each Step]
Loop --> PrepInput[Prepare Step Input]
PrepInput --> ExecuteTool[Execute Tool via Registry]
ExecuteTool --> SaveOutput[Save Step Output]
SaveOutput --> NextStep{More Steps?}
NextStep -->|Yes| MapOutput[Map Output to Next Input]
MapOutput --> Loop
NextStep -->|No| FinalOutput[Prepare Final Output]
FinalOutput --> End[Return Workflow Result]
subgraph "Input/Output Mapping"
MapOutput --> Direct[Direct Value]
MapOutput --> Extract[Extract From Previous]
MapOutput --> Transform[Transform Values]
end
Workflows are defined in the workflows.json file located in the root directory of the project. This file contains predefined sequences of tool calls that can be executed with a single command.
The workflows.json file must be placed in the project root directory (same level as package.json).

Example structure:

{
"workflows": {
"workflowName1": {
"description": "Description of what this workflow does",
"inputSchema": {
"param1": "string",
"param2": "string"
},
"steps": [
{
"id": "step1_id",
"toolName": "tool-name",
"params": {
"param1": "{workflow.input.param1}"
}
},
{
"id": "step2_id",
"toolName": "another-tool",
"params": {
"paramA": "{workflow.input.param2}",
"paramB": "{steps.step1_id.output.content[0].text}"
}
}
],
"output": {
"summary": "Workflow completed message",
"details": ["Output line 1", "Output line 2"]
}
}
}
}
Workflow step parameters support template strings (resolved as sketched below) that can reference:
- Workflow inputs: {workflow.input.paramName}
- Previous step outputs: {steps.stepId.output.content[0].text}
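To make the template syntax concrete, here is a hedged sketch of how such references could be resolved against workflow inputs and prior step outputs; the real resolution logic in workflowExecutor.ts may differ in detail.

```typescript
// Illustrative template resolution; not the project's actual implementation.
type StepOutputs = Record<string, any>; // step id -> tool output

function resolveTemplate(template: string, workflowInput: Record<string, any>, steps: StepOutputs): string {
  return template.replace(/\{([^}]+)\}/g, (_match, pathExpr: string) => {
    // Turn a path like "steps.step1_id.output.content[0].text" into a list of keys/indices.
    const keys = pathExpr.replace(/\[(\d+)\]/g, ".$1").split(".");
    let value: any = { workflow: { input: workflowInput }, steps };
    for (const key of keys) {
      if (value == null) return "";
      value = value[key];
    }
    return value == null ? "" : String(value);
  });
}

// Example: resolveTemplate("{steps.step1_id.output.content[0].text}", {}, {
//   step1_id: { output: { content: [{ text: "PRD draft..." }] } },
// }) === "PRD draft..."
```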
Use the run-workflow tool with:
Run the newProjectSetup workflow with input {"productDescription": "A task manager app"}
Each tool in the src/tools/ directory includes comprehensive documentation in its own README.md file.
Refer to these individual READMEs for in-depth information:
src/tools/code-refactor-generator/README.md
src/tools/code-stub-generator/README.md
src/tools/dependency-analyzer/README.md
src/tools/fullstack-starter-kit-generator/README.md
src/tools/git-summary-generator/README.md
src/tools/prd-generator/README.md
src/tools/research-manager/README.md
src/tools/rules-generator/README.md
src/tools/task-list-generator/README.md
src/tools/user-stories-generator/README.md
src/tools/workflow-runner/README.md
* Code Stub Generator (generate-code-stub): Creates boilerplate code (functions, classes, etc.) based on a description and target language. Useful for quickly scaffolding new components.
* Code Refactor Generator (refactor-code): Takes an existing code snippet and refactoring instructions (e.g., "convert to async/await", "improve readability", "add error handling") and returns the modified code.
* Dependency Analyzer (analyze-dependencies): Parses manifest files like package.json or requirements.txt to list project dependencies.
* Git Summary Generator (git-summary): Provides a summary of the current Git status, showing staged or unstaged changes (diff). Useful for quick checks before committing.
* Research Manager (research-manager): Performs deep research on technical topics using Perplexity Sonar, providing summaries and sources.
* Rules Generator (generate-rules): Creates project-specific development rules and guidelines.
* PRD Generator (generate-prd): Generates comprehensive product requirements documents.
* User Stories Generator (generate-user-stories): Creates detailed user stories with acceptance criteria.
* Task List Generator (generate-task-list): Builds structured development task lists with dependencies.
* Fullstack Starter Kit Generator (generate-fullstack-starter-kit): Creates customized project starter kits with specified frontend/backend technologies, including basic setup scripts and configuration.
* Workflow Runner (run-workflow): Executes predefined sequences of tool calls for common development tasks.

By default, outputs from the generator tools are stored for historical reference in the VibeCoderOutput/ directory within the project. This location can be overridden by setting the VIBE_CODER_OUTPUT_DIR environment variable in your .env file or AI assistant configuration.
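The override logic amounts to something like the following sketch (the helper itself is illustrative; only the VIBE_CODER_OUTPUT_DIR variable name and the VibeCoderOutput/ default come from this document):

```typescript
import path from "node:path";

// Illustrative helper: resolve the output directory, preferring the env override.
function resolveOutputDir(): string {
  const override = process.env.VIBE_CODER_OUTPUT_DIR;
  return override
    ? path.resolve(override)                        // user-specified absolute path
    : path.join(process.cwd(), "VibeCoderOutput");  // default: VibeCoderOutput/ in the project
}
```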
Example structure (default location):
VibeCoderOutput/
├── research-manager/ # Research reports
│ └── TIMESTAMP-QUERY-research.md
├── rules-generator/ # Development rules
│ └── TIMESTAMP-PROJECT-rules.md
├── prd-generator/ # PRDs
│ └── TIMESTAMP-PROJECT-prd.md
├── user-stories-generator/ # User stories
│ └── TIMESTAMP-PROJECT-user-stories.md
├── task-list-generator/ # Task lists
│ └── TIMESTAMP-PROJECT-task-list.md
├── fullstack-starter-kit-generator/ # Project templates
│ └── TIMESTAMP-PROJECT/
└── workflow-runner/ # Workflow outputs
└── TIMESTAMP-WORKFLOW/
Interact with the tools via your connected AI assistant:
Research modern JavaScript frameworks
Create development rules for a mobile banking application
Generate a PRD for a task management application
Generate user stories for an e-commerce website
Create a task list for a weather app based on [user stories]
Think through the architecture for a microservices-based e-commerce platform
Create a starter kit for a React/Node.js blog application with user authentication
Generate a python function stub named 'calculate_discount' that takes price and percentage
Refactor this code to use async/await: [paste code snippet]
Analyze dependencies in package.json
Show unstaged git changes
Run workflow newProjectSetup with input { "projectName": "my-new-app", "description": "A simple task manager" }
While the primary use is integration with an AI assistant (using stdio), you can run the server directly for testing:
Production Mode (Stdio):
npm start
Development Mode (Stdio, Pretty Logs):
npm run dev
(Uses nodemon and pino-pretty for automatic restarts and human-readable logs.)
SSE Mode (HTTP Interface):
# Production mode over HTTP
npm run start:sse
# Development mode over HTTP
npm run dev:sse
Troubleshooting

* Check Server Path: Verify the path in the args array is correct. Use forward slashes / even on Windows. Run node <path-to-build/index.js> directly to test if Node can find it.
* Check Configuration Format: Verify that the mcpServers object contains your server. Ensure "disabled": false is set, and remove any // comments, as JSON doesn't support them.
* Restart the Assistant: Completely close and reopen your AI assistant after changing the configuration.
* Verify autoApprove Array: Make sure the tool names in the autoApprove array match exactly. Add "process-request" to the array if using hybrid routing. Also check that you have sufficient OpenRouter credits.
* Environment Variable Issues: Ensure your OPENROUTER_API_KEY is set correctly in the .env file (for local runs).
* Build Issues: Run npm run build to ensure the build directory exists, and check whether build output is going to a different directory (check tsconfig.json).
* File Permission Errors: Try running with LOG_LEVEL=debug in your .env file to see more detail.
* For AI Assistant Runs: Ensure "NODE_ENV": "production" is set in the env configuration, and try a more explicit request that mentions the tool name.
**Git Summary Tool