# mcp-inception
A TypeScript-based server that allows calling other MCP clients from your own MCP client, facilitating task delegation and context window offloading for enhanced multi-agent interactions.
OK, this is a difficult one. It will unfortunately take some setting up. However, if you manage to make it more straightforward, please send me PRs.
Call another MCP client from your MCP client. Delegate tasks, offload context windows. An agent for your agent!
This is a TypeScript-based MCP server that implements a simple LLM query system.
## Features

### Tools

- `execute_mcp_client` - Ask a question to a separate LLM, ignore all the intermediate steps it takes when querying its tools, and return the output.
- `execute_parallel_mcp_client` - Execute multiple AI tasks in parallel, with responses returned as JSON key-value pairs. Takes a base `prompt` and an array of `items` to process.
- `execute_map_reduce_mcp_client` - Process multiple items in parallel and then sequentially reduce the results to a single output.
  - `mapPrompt` with `{item}` placeholder for individual item processing
  - `reducePrompt` with `{accumulator}` and `{result}` placeholders for combining results
  - `items` to process
  - `initialValue` for the accumulator (optional)

## Development

### Dependencies

This server delegates to a separate MCP client. Make sure that client's `~/.llm/config.json` is set up, and create a bash wrapper that activates the client's virtualenv and runs its `llm` executable:

```bash
#!/bin/bash
source ./venv/bin/activate
llm --no-confirmations
```
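To make the map-reduce behaviour concrete, a call to `execute_map_reduce_mcp_client` could pass arguments shaped like the following (the prompts, items, and initial value here are invented for illustration):

```json
{
  "mapPrompt": "Summarize the current weather in {item} in one sentence.",
  "reducePrompt": "Merge the running summary {accumulator} with the new result {result}.",
  "items": ["London", "Tokyo", "Nairobi"],
  "initialValue": ""
}
```

Each item is substituted into `{item}` for the parallel map step, and the reduce prompt then folds the per-item results into the accumulator one at a time.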
Install dependencies:

```bash
npm install
```

Build the server:

```bash
npm run build
```

For development with auto-rebuild:

```bash
npm run watch
```
## Installation

To use with Claude Desktop, add the server config:

- On macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- On Windows: `%APPDATA%/Claude/claude_desktop_config.json`
```json
{
  "mcpServers": {
    "mcp-inception": {
      "command": "node",
      "args": ["~/Documents/Cline/MCP/mcp-inception/build/index.js"], // build/index.js from this repo
      "disabled": false,
      "autoApprove": [],
      "env": {
        "MCP_INCEPTION_EXECUTABLE": "./run_llm.sh", // bash file from Development->Dependencies
        "MCP_INCEPTION_WORKING_DIR": "/mcp-client-cli working dir"
      }
    }
  }
}
```
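The `env` block above is how the server finds the wrapped MCP client. As a rough sketch of that mechanism (not the actual implementation — `loadConfig` and `delegate` are hypothetical names), the server could read those variables and shell out like this:

```typescript
// Hypothetical sketch: reading the two environment variables from the
// config above and delegating a prompt to the wrapped MCP client.
import { execFile } from "node:child_process";

interface InceptionConfig {
  executable: string; // e.g. ./run_llm.sh from the Dependencies section
  workingDir: string; // working dir of the wrapped client
}

// Read and validate the environment variables set in claude_desktop_config.json.
function loadConfig(env: Record<string, string | undefined>): InceptionConfig {
  const executable = env.MCP_INCEPTION_EXECUTABLE;
  const workingDir = env.MCP_INCEPTION_WORKING_DIR;
  if (!executable || !workingDir) {
    throw new Error(
      "MCP_INCEPTION_EXECUTABLE and MCP_INCEPTION_WORKING_DIR must both be set"
    );
  }
  return { executable, workingDir };
}

// Run the wrapper script with the prompt as its argument and resolve
// with whatever the inner LLM printed to stdout.
function delegate(config: InceptionConfig, prompt: string): Promise<string> {
  return new Promise((resolve, reject) => {
    execFile(config.executable, [prompt], { cwd: config.workingDir }, (err, stdout) => {
      if (err) reject(err);
      else resolve(stdout.trim());
    });
  });
}
```

The intermediate tool calls of the inner client never reach the outer agent; only the final stdout comes back, which is what makes the context-window offloading work.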
### Debugging

Since MCP servers communicate over stdio, debugging can be challenging. We recommend using the MCP Inspector, which is available as a package script:

```bash
npm run inspector
```

The Inspector will provide a URL to access debugging tools in your browser.
## Tool Schemas

The server exposes the following tools:

```json
[
  {
    "description": "Offload certain tasks to AI. Used for research purposes, do not use for code editing or anything code related. Only used to fetch data.",
    "inputSchema": {
      "properties": {
        "command": {
          "description": "The MCP client command to execute",
          "type": "string"
        }
      },
      "required": ["command"],
      "type": "object"
    },
    "name": "execute_mcp_client"
  },
  {
    "description": "Execute multiple AI tasks in parallel, with responses in JSON key-value pairs.",
    "inputSchema": {
      "properties": {
        "items": {
          "description": "Array of parameters to process in parallel",
          "items": { "type": "string" },
          "type": "array"
        },
        "prompt": {
          "description": "The base prompt to use for all executions",
          "type": "string"
        }
      },
      "required": ["prompt", "items"],
      "type": "object"
    },
    "name": "execute_parallel_mcp_client"
  },
  {
    "description": "Process multiple items in parallel then sequentially reduce the results to a single output.",
    "inputSchema": {
      "properties": {
        "initialValue": {
          "description": "Initial value for the accumulator (optional).",
          "type": "string"
        },
        "items": {
          "description": "Array of items to process.",
          "items": { "type": "string" },
          "type": "array"
        },
        "mapPrompt": {
          "description": "Template prompt for processing each individual item. Use {item} as placeholder for the current item.",
          "type": "string"
        },
        "reducePrompt": {
          "description": "Template prompt for reducing results. Use {accumulator} and {result} as placeholders.",
          "type": "string"
        }
      },
      "required": ["mapPrompt", "reducePrompt", "items"],
      "type": "object"
    },
    "name": "execute_map_reduce_mcp_client"
  }
]
```
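Since MCP tools are invoked with a standard `tools/call` JSON-RPC request over stdio, a client could call the parallel tool with a message like this (the prompt and items are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "execute_parallel_mcp_client",
    "arguments": {
      "prompt": "What is the capital of this country?",
      "items": ["France", "Japan", "Kenya"]
    }
  }
}
```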