Prysm MCP Server
A Model Context Protocol server enabling AI assistants to scrape web content with high accuracy and flexibility, supporting multiple scraping modes and content formatting options.
The Prysm MCP (Model Context Protocol) Server enables AI assistants such as Claude to scrape web content with high accuracy and flexibility.
# Recommended: Install the LLM-optimized version
npm install -g @pinkpixel/prysm-mcp
# Or install the standard version
npm install -g prysm-mcp
# Or clone and build
git clone https://github.com/pinkpixel-dev/prysm-mcp.git
cd prysm-mcp
npm install
npm run build
We provide detailed integration guides for popular MCP-compatible applications.
There are multiple ways to set up Prysm MCP Server:
Create an mcp.json file in the appropriate location according to the above guides.
{
"mcpServers": {
"prysm-scraper": {
"description": "Prysm web scraper with custom output directories",
"command": "npx",
"args": [
"-y",
"@pinkpixel/prysm-mcp"
],
"env": {
"PRYSM_OUTPUT_DIR": "${workspaceFolder}/scrape_results",
"PRYSM_IMAGE_OUTPUT_DIR": "${workspaceFolder}/scrape_results/images"
}
}
}
}
The server provides the following tools:
scrapeFocused
Fast web scraping optimized for speed (fewer scrolls, main content only).
Please scrape https://example.com using the focused mode
Available Parameters:
- url (required): URL to scrape
- maxScrolls (optional): Maximum number of scroll attempts (default: 5)
- scrollDelay (optional): Delay between scrolls in ms (default: 1000)
- scrapeImages (optional): Whether to include images in results
- downloadImages (optional): Whether to download images locally
- maxImages (optional): Maximum number of images to extract
- minImageSize (optional): Minimum width/height for images in pixels
- pages (optional): Number of pages to scrape (if pagination is present)
- output (optional): Output directory for downloaded images
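For reference, this is roughly what the underlying MCP tools/call request looks like when an assistant invokes scrapeFocused; the parameter values below are illustrative, not required:
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "scrapeFocused",
    "arguments": {
      "url": "https://example.com",
      "maxScrolls": 5,
      "scrollDelay": 1000,
      "scrapeImages": true,
      "maxImages": 10
    }
  }
}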
scrapeBalanced
Balanced web scraping approach with good coverage and reasonable speed.
Please scrape https://example.com using the balanced mode
Available Parameters:
- Same as scrapeFocused, with different defaults
- maxScrolls default: 10
- scrollDelay default: 2000
- Adds a timeout parameter to limit total scraping time (default: 30000ms)
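As a sketch, the params portion of a balanced scrape request might look like this (values are illustrative), wrapped in the same tools/call envelope shown above:
{
  "name": "scrapeBalanced",
  "arguments": {
    "url": "https://example.com",
    "maxScrolls": 10,
    "scrollDelay": 2000,
    "timeout": 30000
  }
}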
scrapeDeep
Maximum extraction web scraping (slower but thorough).
Please scrape https://example.com using the deep mode with maximum scrolls
Available Parameters:
- Same as scrapeFocused, with different defaults
- maxScrolls default: 20
- scrollDelay default: 3000
- maxImages default: 100
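A deep scrape that also downloads images might be invoked with params like these (the output path is only an example):
{
  "name": "scrapeDeep",
  "arguments": {
    "url": "https://example.com",
    "maxScrolls": 20,
    "scrollDelay": 3000,
    "scrapeImages": true,
    "downloadImages": true,
    "maxImages": 100,
    "output": "scrape_results/images"
  }
}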
formatResult
Format scraped data into different structured formats (markdown, HTML, JSON).
Format the scraped data as markdown
Available Parameters:
- data (required): The scraped data to format
- format (required): Output format - "markdown", "html", or "json"
- includeImages (optional): Whether to include images in the output (default: true)
- output (optional): File path to save the formatted result
You can also save formatted results to a file by specifying an output path:
Format the scraped data as markdown and save it to "my-results/output.md"
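Behind that prompt, the params of the tools/call request would resemble the following; the data value is shown as an empty placeholder standing in for the object returned by one of the scrape tools:
{
  "name": "formatResult",
  "arguments": {
    "data": {},
    "format": "markdown",
    "includeImages": true,
    "output": "my-results/output.md"
  }
}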
By default, when saving formatted results, files are saved to ~/prysm-mcp/output/. You can customize this in two ways: by setting environment variables, or by specifying absolute paths directly in your request.
# Linux/macOS
export PRYSM_OUTPUT_DIR="/path/to/custom/directory"
export PRYSM_IMAGE_OUTPUT_DIR="/path/to/custom/image/directory"
# Windows (Command Prompt)
set PRYSM_OUTPUT_DIR=C:\path\to\custom\directory
set PRYSM_IMAGE_OUTPUT_DIR=C:\path\to\custom\image\directory
# Windows (PowerShell)
$env:PRYSM_OUTPUT_DIR="C:\path\to\custom\directory"
$env:PRYSM_IMAGE_OUTPUT_DIR="C:\path\to\custom\image\directory"
# For general results
Format the scraped data as markdown and save it to "/absolute/path/to/file.md"
# For image downloads when scraping
Please scrape https://example.com and download images to "/absolute/path/to/images"
If you're using an MCP configuration file (e.g., .cursor/mcp.json), you can set these environment variables:
{
"mcpServers": {
"prysm-scraper": {
"command": "npx",
"args": ["-y", "@pinkpixel/prysm-mcp"],
"env": {
"PRYSM_OUTPUT_DIR": "${workspaceFolder}/scrape_results",
"PRYSM_IMAGE_OUTPUT_DIR": "${workspaceFolder}/scrape_results/images"
}
}
}
}
If PRYSM_IMAGE_OUTPUT_DIR is not specified, it defaults to a subfolder named images inside PRYSM_OUTPUT_DIR.
If you provide only a relative path or filename, it will be saved relative to the configured output directory.
The formatResult tool handles paths in the following ways:
- Absolute paths (e.g., /home/user/file.md) are used exactly as given
- Relative paths with a directory component (e.g., subfolder/file.md) are resolved against the configured output directory
- Bare filenames (e.g., output.md) are saved directly in the configured output directory
To build and run the server from source:
# Install dependencies
npm install
# Build the project
npm run build
# Run the server locally
node bin/prysm-mcp
# Debug MCP communication
DEBUG=mcp:* node bin/prysm-mcp
# Set custom output directories
PRYSM_OUTPUT_DIR=./my-output PRYSM_IMAGE_OUTPUT_DIR=./my-output/images node bin/prysm-mcp
You can run the server directly with npx without installing:
# Run with default settings
npx @pinkpixel/prysm-mcp
# Run with custom output directories
PRYSM_OUTPUT_DIR=./my-output PRYSM_IMAGE_OUTPUT_DIR=./my-output/images npx @pinkpixel/prysm-mcp
License: MIT
Developed by Pink Pixel
Powered by the Model Context Protocol and Puppeteer
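For reference, the complete tool definitions exposed by the server: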
[
{
"description": "Fast web scraping optimized for speed (fewer scrolls, main content only)",
"inputSchema": {
"properties": {
"downloadImages": {
"description": "Whether to download images locally",
"type": "boolean"
},
"maxImages": {
"description": "Maximum number of images to extract",
"type": "number"
},
"maxScrolls": {
"description": "Maximum number of scroll attempts (default: 5)",
"type": "number"
},
"minImageSize": {
"description": "Minimum width/height for images in pixels",
"type": "number"
},
"output": {
"description": "Output directory for downloaded images",
"type": "string"
},
"pages": {
"description": "Number of pages to scrape (if pagination is present)",
"type": "number"
},
"scrapeImages": {
"description": "Whether to include images in the scrape result",
"type": "boolean"
},
"scrollDelay": {
"description": "Delay between scrolls in ms (default: 1000)",
"type": "number"
},
"url": {
"description": "URL of the webpage to scrape",
"type": "string"
}
},
"required": [
"url"
],
"type": "object"
},
"name": "scrapeFocused"
},
{
"description": "Balanced web scraping approach with good coverage and reasonable speed",
"inputSchema": {
"properties": {
"downloadImages": {
"description": "Whether to download images locally",
"type": "boolean"
},
"maxImages": {
"description": "Maximum number of images to extract",
"type": "number"
},
"maxScrolls": {
"description": "Maximum number of scroll attempts (default: 10)",
"type": "number"
},
"minImageSize": {
"description": "Minimum width/height for images in pixels",
"type": "number"
},
"output": {
"description": "Output directory for downloaded images",
"type": "string"
},
"pages": {
"description": "Number of pages to scrape (if pagination is present)",
"type": "number"
},
"scrapeImages": {
"description": "Whether to include images in the scrape result",
"type": "boolean"
},
"scrollDelay": {
"description": "Delay between scrolls in ms (default: 2000)",
"type": "number"
},
"timeout": {
"description": "Maximum time in ms for the scrape operation (default: 30000)",
"type": "number"
},
"url": {
"description": "URL of the webpage to scrape",
"type": "string"
}
},
"required": [
"url"
],
"type": "object"
},
"name": "scrapeBalanced"
},
{
"description": "Maximum extraction web scraping (slower but thorough)",
"inputSchema": {
"properties": {
"downloadImages": {
"description": "Whether to download images locally",
"type": "boolean"
},
"maxImages": {
"description": "Maximum number of images to extract",
"type": "number"
},
"maxScrolls": {
"description": "Maximum number of scroll attempts (default: 20)",
"type": "number"
},
"minImageSize": {
"description": "Minimum width/height for images in pixels",
"type": "number"
},
"output": {
"description": "Output directory for downloaded images",
"type": "string"
},
"pages": {
"description": "Number of pages to scrape (if pagination is present)",
"type": "number"
},
"scrapeImages": {
"description": "Whether to include images in the scrape result",
"type": "boolean"
},
"scrollDelay": {
"description": "Delay between scrolls in ms (default: 3000)",
"type": "number"
},
"url": {
"description": "URL of the webpage to scrape",
"type": "string"
}
},
"required": [
"url"
],
"type": "object"
},
"name": "scrapeDeep"
},
{
"description": "Format scraped data into different structured formats (markdown, HTML, JSON)",
"inputSchema": {
"properties": {
"data": {
"description": "The scraped data to format",
"type": "object"
},
"format": {
"description": "The format to convert the data to",
"enum": [
"markdown",
"html",
"json"
],
"type": "string"
},
"includeImages": {
"description": "Whether to include images in the formatted output (default: true)",
"type": "boolean"
},
"output": {
"description": "File path to save the formatted result. If not provided, will use the default directory.",
"type": "string"
}
},
"required": [
"data",
"format"
],
"type": "object"
},
"name": "formatResult"
}
]