# MCP Titan
This advanced memory server facilitates neural memory-based sequence learning and prediction, enhancing code generation and understanding through state maintenance and manifold optimization, inspired by Google Research's framework.
A neural memory system for LLMs that can learn and predict sequences while maintaining state through a memory vector. This MCP (Model Context Protocol) server provides tools for Claude 3.7 Sonnet and other LLMs to maintain memory state across interactions.
## Installation

```bash
# Clone the repository
git clone https://github.com/yourusername/titan-memory.git
cd titan-memory

# Install dependencies
npm install

# Build the project
npm run build

# Start the server
npm start
```
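To make the server available to an MCP client such as Claude Desktop or Cursor, register it in the client's MCP configuration. A minimal sketch, assuming the build outputs to `dist/index.js` (adjust the path to your actual build output and install location):

```json
{
  "mcpServers": {
    "titan-memory": {
      "command": "node",
      "args": ["/path/to/titan-memory/dist/index.js"]
    }
  }
}
```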
## Available Tools

The Titan Memory MCP server provides the following tools:

### `help`

Get help about available tools.

Parameters:

- `tool` (optional): Specific tool name to get help for
- `category` (optional): Category of tools to explore
- `showExamples` (optional): Include usage examples
- `verbose` (optional): Include detailed descriptions

### `init_model`

Initialize the Titan Memory model with custom configuration.

Parameters:

- `inputDim`: Input dimension size (default: 768)
- `hiddenDim`: Hidden dimension size (default: 512)
- `memoryDim`: Memory dimension size (default: 1024)
- `transformerLayers`: Number of transformer layers (default: 6)
- `numHeads`: Number of attention heads (default: 8)
- `ffDimension`: Feed-forward dimension (default: 2048)
- `dropoutRate`: Dropout rate (default: 0.1)
- `maxSequenceLength`: Maximum sequence length (default: 512)
- `memorySlots`: Number of memory slots (default: 5000)
- `similarityThreshold`: Similarity threshold (default: 0.65)
- `surpriseDecay`: Surprise decay rate (default: 0.9)
- `pruningInterval`: Pruning interval (default: 1000)
- `gradientClip`: Gradient clipping value (default: 1.0)

### `forward_pass`

Perform a forward pass through the model to get predictions.

Parameters:

- `x`: Input vector or text
- `memoryState` (optional): Memory state to use

### `train_step`

Execute a training step to update the model.

Parameters:

- `x_t`: Current input vector or text
- `x_next`: Next input vector or text

### `get_memory_state`

Get the current memory state and statistics.

Parameters:

- `type` (optional): Optional memory type filter

### `manifold_step`

Update memory along a manifold direction.

Parameters:

- `base`: Base memory state
- `velocity`: Update direction

### `prune_memory`

Remove less relevant memories to free up space.

Parameters:

- `threshold`: Pruning threshold (0-1)

### `save_checkpoint`

Save memory state to a file.

Parameters:

- `path`: Checkpoint file path

### `load_checkpoint`

Load memory state from a file.

Parameters:

- `path`: Checkpoint file path

### `reset_gradients`

Reset accumulated gradients to recover from training issues.

Parameters: None
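The `memorySlots` and `similarityThreshold` parameters govern how stored vectors are matched at retrieval time. The actual implementation is internal to the server, but the threshold-gated matching rule can be sketched as follows (the names `MemorySlot` and `retrieve` are illustrative, not the server's API):

```typescript
// Illustrative sketch of threshold-gated memory retrieval.
// Names here are hypothetical; they are not part of the server's API.

interface MemorySlot {
  key: number[];   // stored embedding
  value: number[]; // associated content vector
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Return the best-matching slot, or null if nothing clears the threshold.
function retrieve(
  query: number[],
  slots: MemorySlot[],
  similarityThreshold = 0.65, // the init_model default
): MemorySlot | null {
  let best: MemorySlot | null = null;
  let bestScore = similarityThreshold;
  for (const slot of slots) {
    const score = cosine(query, slot.key);
    if (score >= bestScore) {
      best = slot;
      bestScore = score;
    }
  }
  return best;
}
```

Raising `similarityThreshold` makes retrieval stricter (fewer, closer matches); lowering it makes the memory more permissive.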
## Usage Example

The Titan Memory MCP server is designed to work seamlessly with Claude 3.7 Sonnet in Cursor. Here's an example of how to use it:

```typescript
// Initialize the model
const initResult = await callTool("init_model", {
  inputDim: 768,
  memorySlots: 10000,
  transformerLayers: 8,
});

// Perform a forward pass
const { predicted, memoryUpdate } = await callTool("forward_pass", {
  x: "const x = 5;", // or a numeric vector: [0.1, 0.2, ...]
  memoryState: currentMemory,
});

// Train the model
const trainResult = await callTool("train_step", {
  x_t: "function hello() {",
  x_next: " console.log('world');",
});

// Get memory state
const state = await callTool("get_memory_state", {});
```
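The checkpoint and recovery tools follow the same pattern. Continuing in the same illustrative `callTool` style (the file path below is a placeholder, not a path the server mandates):

```typescript
// Persist memory state, then restore it later
await callTool("save_checkpoint", { path: "./memory-checkpoint.json" });
await callTool("load_checkpoint", { path: "./memory-checkpoint.json" });

// Free space by dropping memories below a relevance threshold (0-1)
await callTool("prune_memory", { threshold: 0.3 });

// Recover from unstable training
await callTool("reset_gradients", {});
```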
## Memory Management

The Titan Memory MCP server includes sophisticated memory management to prevent memory leaks and keep tensor operations efficient.
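One part of that management is relevance-based pruning, exposed through `prune_memory`. The server's actual scoring is internal; the following is a simplified sketch under assumed semantics (a per-step decay loosely mirroring the `surpriseDecay` default of 0.9, with names invented for illustration):

```typescript
// Simplified sketch of relevance-decay pruning.
// The decay rule and all names here are assumptions for illustration only.

interface ScoredSlot {
  id: number;
  relevance: number; // in [0, 1], decays each step unless reinforced
}

// Each step, decay every slot's relevance score.
function decay(slots: ScoredSlot[], rate = 0.9): ScoredSlot[] {
  return slots.map((s) => ({ ...s, relevance: s.relevance * rate }));
}

// prune_memory-style sweep: drop slots whose relevance fell below the threshold.
function prune(slots: ScoredSlot[], threshold: number): ScoredSlot[] {
  return slots.filter((s) => s.relevance >= threshold);
}
```

Under this model, memories that are never reinforced decay geometrically and are eventually swept out by a periodic prune (cf. the `pruningInterval` option), keeping the fixed pool of memory slots available for newer, more relevant entries.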
## Architecture

The Titan Memory MCP server is built with a modular architecture.
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## License

This project is licensed under the MIT License - see the LICENSE file for details.