Firecrawl MCP Server
A Model Context Protocol (MCP) server implementation that integrates with Firecrawl for advanced web scraping capabilities.
Big thanks to @vrknetha and @cawstudios for the initial implementation!
Running with npx:

env FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp

Installing globally:

npm install -g firecrawl-mcp
Configuring Cursor

Note: Requires Cursor version 0.45.6+.

To configure Firecrawl MCP in Cursor, add a new MCP server with the following command:
env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp
If you are using Windows and are running into issues, try:
cmd /c "set FIRECRAWL_API_KEY=your-api-key && npx -y firecrawl-mcp"
Replace your-api-key with your Firecrawl API key.
After adding the server, refresh the MCP server list to see the new tools. The Composer Agent will automatically use Firecrawl MCP when appropriate, but you can explicitly request it by describing your web scraping needs. Access the Composer via Command+L (Mac), select "Agent" next to the submit button, and enter your query.
Configuring Windsurf

Add this to your ./codeium/windsurf/model_config.json:
{
"mcpServers": {
"mcp-server-firecrawl": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "YOUR_API_KEY_HERE"
}
}
}
}
To install Firecrawl for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @mendableai/mcp-server-firecrawl --client claude
Environment variables:

- FIRECRAWL_API_KEY: Your Firecrawl API key. Required when using the cloud API; optional when your self-hosted instance does not require auth.
- FIRECRAWL_API_URL (optional): Custom API endpoint for self-hosted instances, e.g. https://firecrawl.your-domain.com
- FIRECRAWL_RETRY_MAX_ATTEMPTS: Maximum number of retry attempts (default: 3)
- FIRECRAWL_RETRY_INITIAL_DELAY: Initial delay in milliseconds before the first retry (default: 1000)
- FIRECRAWL_RETRY_MAX_DELAY: Maximum delay in milliseconds between retries (default: 10000)
- FIRECRAWL_RETRY_BACKOFF_FACTOR: Exponential backoff multiplier (default: 2)
- FIRECRAWL_CREDIT_WARNING_THRESHOLD: Credit usage warning threshold (default: 1000)
- FIRECRAWL_CREDIT_CRITICAL_THRESHOLD: Credit usage critical threshold (default: 100)

For cloud API usage with custom retry and credit monitoring:
# Required for cloud API
export FIRECRAWL_API_KEY=your-api-key
# Optional retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=5 # Increase max retry attempts
export FIRECRAWL_RETRY_INITIAL_DELAY=2000 # Start with 2s delay
export FIRECRAWL_RETRY_MAX_DELAY=30000 # Maximum 30s delay
export FIRECRAWL_RETRY_BACKOFF_FACTOR=3 # More aggressive backoff
# Optional credit monitoring
export FIRECRAWL_CREDIT_WARNING_THRESHOLD=2000 # Warning at 2000 credits
export FIRECRAWL_CREDIT_CRITICAL_THRESHOLD=500 # Critical at 500 credits
For a self-hosted instance:
# Required for self-hosted
export FIRECRAWL_API_URL=https://firecrawl.your-domain.com
# Optional authentication for self-hosted
export FIRECRAWL_API_KEY=your-api-key # If your instance requires auth
# Custom retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=10
export FIRECRAWL_RETRY_INITIAL_DELAY=500 # Start with faster retries
Configuring Claude Desktop

Add this to your claude_desktop_config.json:
{
"mcpServers": {
"mcp-server-firecrawl": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "YOUR_API_KEY_HERE",
"FIRECRAWL_RETRY_MAX_ATTEMPTS": "5",
"FIRECRAWL_RETRY_INITIAL_DELAY": "2000",
"FIRECRAWL_RETRY_MAX_DELAY": "30000",
"FIRECRAWL_RETRY_BACKOFF_FACTOR": "3",
"FIRECRAWL_CREDIT_WARNING_THRESHOLD": "2000",
"FIRECRAWL_CREDIT_CRITICAL_THRESHOLD": "500"
}
}
}
}
The server includes several configurable parameters that can be set via environment variables. Here are the default values if not configured:
const CONFIG = {
retry: {
maxAttempts: 3, // Number of retry attempts for rate-limited requests
initialDelay: 1000, // Initial delay before first retry (in milliseconds)
maxDelay: 10000, // Maximum delay between retries (in milliseconds)
backoffFactor: 2, // Multiplier for exponential backoff
},
credit: {
warningThreshold: 1000, // Warn when credit usage reaches this level
criticalThreshold: 100, // Critical alert when credit usage reaches this level
},
};
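As an illustration of how the environment variables above map onto these defaults, a hypothetical override helper might look like the sketch below. This is illustrative only, not the server's actual implementation; numFromEnv is a made-up name.

// Illustrative sketch: read a numeric env var, falling back to a default.
function numFromEnv(name: string, fallback: number): number {
  const raw = process.env[name];
  const parsed = raw ? Number(raw) : NaN;
  return Number.isNaN(parsed) ? fallback : parsed;
}

const config = {
  retry: {
    maxAttempts: numFromEnv("FIRECRAWL_RETRY_MAX_ATTEMPTS", 3),
    initialDelay: numFromEnv("FIRECRAWL_RETRY_INITIAL_DELAY", 1000),
    maxDelay: numFromEnv("FIRECRAWL_RETRY_MAX_DELAY", 10000),
    backoffFactor: numFromEnv("FIRECRAWL_RETRY_BACKOFF_FACTOR", 2),
  },
  credit: {
    warningThreshold: numFromEnv("FIRECRAWL_CREDIT_WARNING_THRESHOLD", 1000),
    criticalThreshold: numFromEnv("FIRECRAWL_CREDIT_CRITICAL_THRESHOLD", 100),
  },
};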
These configurations control:

Retry behavior
- Automatically retries requests that fail due to rate limits, using exponential backoff between attempts
- Example: with the default settings, retries are attempted after 1 second, then 2 seconds, then 4 seconds (the delay grows by the backoff factor each attempt, capped at maxDelay); see the sketch below

Credit usage monitoring
- Tracks API credit consumption for cloud API usage, raising a warning when usage reaches the warning threshold and a critical alert at the critical threshold
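To make the backoff schedule concrete, here is a minimal sketch of the delay formula under the assumption that the delay before the nth retry is initialDelay * backoffFactor^(n - 1), capped at maxDelay:

// Delay before the nth retry, using the documented defaults.
function retryDelayMs(
  attempt: number,
  initialDelay = 1000,
  backoffFactor = 2,
  maxDelay = 10000
): number {
  return Math.min(initialDelay * Math.pow(backoffFactor, attempt - 1), maxDelay);
}

// With the defaults this prints [1000, 2000, 4000], i.e. 1s, 2s, 4s.
console.log([1, 2, 3].map((n) => retryDelayMs(n)));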
The server also leverages Firecrawl's built-in rate limiting and batch processing capabilities, including automatic rate limit handling and efficient parallel processing for batch operations.
Scrape Tool (firecrawl_scrape)

Scrape content from a single URL with advanced options.
{
"name": "firecrawl_scrape",
"arguments": {
"url": "https://example.com",
"formats": ["markdown"],
"onlyMainContent": true,
"waitFor": 1000,
"timeout": 30000,
"mobile": false,
"includeTags": ["article", "main"],
"excludeTags": ["nav", "footer"],
"skipTlsVerification": false
}
}
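For reference, invoking this tool from a standalone script looks roughly like the sketch below, using the @modelcontextprotocol/sdk TypeScript client. Exact import paths and method signatures may vary between SDK versions, so treat this as a sketch rather than a definitive recipe.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the server over stdio, exactly as the editor integrations above do.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "firecrawl-mcp"],
  env: { FIRECRAWL_API_KEY: "fc-YOUR_API_KEY" },
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Same arguments as the JSON example above.
const result = await client.callTool({
  name: "firecrawl_scrape",
  arguments: { url: "https://example.com", formats: ["markdown"], onlyMainContent: true },
});
console.log(result.content);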
Batch Scrape Tool (firecrawl_batch_scrape)

Scrape multiple URLs efficiently with built-in rate limiting and parallel processing.
{
"name": "firecrawl_batch_scrape",
"arguments": {
"urls": ["https://example1.com", "https://example2.com"],
"options": {
"formats": ["markdown"],
"onlyMainContent": true
}
}
}
Response includes operation ID for status checking:
{
"content": [
{
"type": "text",
"text": "Batch operation queued with ID: batch_1. Use firecrawl_check_batch_status to check progress."
}
],
"isError": false
}
Check Batch Status Tool (firecrawl_check_batch_status)

Check the status of a batch operation.
{
"name": "firecrawl_check_batch_status",
"arguments": {
"id": "batch_1"
}
}
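Combining the two tools, a typical batch workflow queues the URLs, pulls the operation ID out of the response text, and polls until the job finishes. A hedged sketch follows: callTool stands in for whatever client invocation you use (such as client.callTool from the earlier example), and the status-wording check is an assumption to adjust against your server's actual messages.

// Queue a batch scrape and poll its status until completion.
async function runBatch(
  callTool: (
    name: string,
    args: object
  ) => Promise<{ isError?: boolean; content: { text: string }[] }>
) {
  const queued = await callTool("firecrawl_batch_scrape", {
    urls: ["https://example1.com", "https://example2.com"],
    options: { formats: ["markdown"], onlyMainContent: true },
  });

  // The response text reads "Batch operation queued with ID: batch_1. ...".
  const id = queued.content[0].text.match(/ID:\s*(\S+?)\./)?.[1];
  if (!id) throw new Error(`No batch ID found in: ${queued.content[0].text}`);

  for (;;) {
    const status = await callTool("firecrawl_check_batch_status", { id });
    console.log(status.content[0].text);
    // Assumption: an in-flight job mentions progress/processing in its text.
    if (status.isError || !/progress|processing/i.test(status.content[0].text)) return status;
    await new Promise((resolve) => setTimeout(resolve, 5000)); // wait 5s between polls
  }
}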
Search Tool (firecrawl_search)

Search the web and optionally extract content from search results.
{
"name": "firecrawl_search",
"arguments": {
"query": "your search query",
"limit": 5,
"lang": "en",
"country": "us",
"scrapeOptions": {
"formats": ["markdown"],
"onlyMainContent": true
}
}
}
Crawl Tool (firecrawl_crawl)

Start an asynchronous crawl with advanced options.
{
"name": "firecrawl_crawl",
"arguments": {
"url": "https://example.com",
"maxDepth": 2,
"limit": 100,
"allowExternalLinks": false,
"deduplicateSimilarURLs": true
}
}
Extract Tool (firecrawl_extract)

Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.
{
"name": "firecrawl_extract",
"arguments": {
"urls": ["https://example.com/page1", "https://example.com/page2"],
"prompt": "Extract product information including name, price, and description",
"systemPrompt": "You are a helpful assistant that extracts product information",
"schema": {
"type": "object",
"properties": {
"name": { "type": "string" },
"price": { "type": "number" },
"description": { "type": "string" }
},
"required": ["name", "price"]
},
"allowExternalLinks": false,
"enableWebSearch": false,
"includeSubdomains": false
}
}
Example response:
{
"content": [
{
"type": "text",
"text": {
"name": "Example Product",
"price": 99.99,
"description": "This is an example product description"
}
}
],
"isError": false
}
Options:

- urls: Array of URLs to extract information from
- prompt: Custom prompt for the LLM extraction
- systemPrompt: System prompt to guide the LLM
- schema: JSON schema for structured data extraction
- allowExternalLinks: Allow extraction from external links
- enableWebSearch: Enable web search for additional context
- includeSubdomains: Include subdomains in extraction

When using a self-hosted instance, the extraction will use your configured LLM. For the cloud API, it uses Firecrawl's managed LLM service.
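Because the schema in the example fully describes a successful extraction, consumers can mirror it as a static type. A minimal sketch follows; the interface and guard simply restate the example's JSON schema and are not part of the server.

// Mirrors the JSON schema from the firecrawl_extract example above.
interface Product {
  name: string; // required by the schema
  price: number; // required by the schema
  description?: string; // optional per the schema
}

// Runtime guard enforcing the schema's required fields on an extraction result.
function isProduct(value: unknown): value is Product {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Partial<Product>;
  return (
    typeof v.name === "string" &&
    typeof v.price === "number" &&
    (v.description === undefined || typeof v.description === "string")
  );
}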
Deep Research Tool (firecrawl_deep_research)

Conduct deep web research on a query using intelligent crawling, search, and LLM analysis.
{
"name": "firecrawl_deep_research",
"arguments": {
"query": "how does carbon capture technology work?",
"maxDepth": 3,
"timeLimit": 120,
"maxUrls": 50
}
}
Arguments:
- query (string, required): The research question or topic to explore.
- maxDepth (number, optional): Maximum recursive depth for crawling/search (default: 3).
- timeLimit (number, optional): Time limit in seconds for the research session (default: 120).
- maxUrls (number, optional): Maximum number of URLs to analyze (default: 50).
Returns:
- Final analysis generated by the LLM based on the research (data.finalAnalysis); may also include the activities and sources used during the research process.
Generate LLMs.txt Tool (firecrawl_generate_llmstxt)

Generate a standardized llms.txt (and optionally llms-full.txt) file for a given domain. This file defines how large language models should interact with the site.
{
"name": "firecrawl_generate_llmstxt",
"arguments": {
"url": "https://example.com",
"maxUrls": 20,
"showFullText": true
}
}
Arguments:
- url (string, required): The base URL of the website to analyze.
- maxUrls (number, optional): Maximum number of URLs to include.
- showFullText (boolean, optional): Whether to include llms-full.txt contents in the response.
Returns:
- Generated llms.txt file contents, and optionally the llms-full.txt contents (data.llmstxt and/or data.llmsfulltxt)
The server includes comprehensive logging of operation status, credit usage, rate limits, and error conditions. Example log messages:
[INFO] FireCrawl MCP Server initialized successfully
[INFO] Starting scrape for URL: https://example.com
[INFO] Batch operation queued with ID: batch_1
[WARNING] Credit usage has reached warning threshold
[ERROR] Rate limit exceeded, retrying in 2s...
The server provides robust error handling, with automatic retries for rate-limited requests and descriptive error messages. Example error response:
{
"content": [
{
"type": "text",
"text": "Error: Rate limit exceeded. Retrying in 2 seconds..."
}
],
"isError": true
}
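When driving the server from code, the isError flag is the reliable way to tell failures from successes. Below is a small defensive-handling sketch; the result shape follows the examples above, and unwrapResult is just an illustrative helper name.

// Unwrap a tool result, throwing if the server flagged it as an error.
interface ToolResult {
  content: { type: string; text: string }[];
  isError?: boolean;
}

function unwrapResult(result: ToolResult): string {
  const text = result.content.map((c) => c.text).join("\n");
  if (result.isError) {
    throw new Error(`Firecrawl MCP tool failed: ${text}`);
  }
  return text;
}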
Development

# Install dependencies
npm install
# Build
npm run build
# Run tests
npm test
MIT License - see LICENSE file for details
Tool Schemas

The complete input schema for each tool exposed by the server:

[
{
"description": "Scrape a single webpage with advanced options for content extraction. Supports various formats including markdown, HTML, and screenshots. Can execute custom actions like clicking or scrolling before scraping.",
"inputSchema": {
"properties": {
"actions": {
"description": "List of actions to perform before scraping",
"items": {
"properties": {
"direction": {
"description": "Scroll direction",
"enum": [
"up",
"down"
],
"type": "string"
},
"fullPage": {
"description": "Take full page screenshot",
"type": "boolean"
},
"key": {
"description": "Key to press (for press action)",
"type": "string"
},
"milliseconds": {
"description": "Time to wait in milliseconds (for wait action)",
"type": "number"
},
"script": {
"description": "JavaScript code to execute",
"type": "string"
},
"selector": {
"description": "CSS selector for the target element",
"type": "string"
},
"text": {
"description": "Text to write (for write action)",
"type": "string"
},
"type": {
"description": "Type of action to perform",
"enum": [
"wait",
"click",
"screenshot",
"write",
"press",
"scroll",
"scrape",
"executeJavascript"
],
"type": "string"
}
},
"required": [
"type"
],
"type": "object"
},
"type": "array"
},
"excludeTags": {
"description": "HTML tags to exclude from extraction",
"items": {
"type": "string"
},
"type": "array"
},
"extract": {
"description": "Configuration for structured data extraction",
"properties": {
"prompt": {
"description": "User prompt for LLM extraction",
"type": "string"
},
"schema": {
"description": "Schema for structured data extraction",
"type": "object"
},
"systemPrompt": {
"description": "System prompt for LLM extraction",
"type": "string"
}
},
"type": "object"
},
"formats": {
"description": "Content formats to extract (default: ['markdown'])",
"items": {
"enum": [
"markdown",
"html",
"rawHtml",
"screenshot",
"links",
"screenshot@fullPage",
"extract"
],
"type": "string"
},
"type": "array"
},
"includeTags": {
"description": "HTML tags to specifically include in extraction",
"items": {
"type": "string"
},
"type": "array"
},
"location": {
"description": "Location settings for scraping",
"properties": {
"country": {
"description": "Country code for geolocation",
"type": "string"
},
"languages": {
"description": "Language codes for content",
"items": {
"type": "string"
},
"type": "array"
}
},
"type": "object"
},
"mobile": {
"description": "Use mobile viewport",
"type": "boolean"
},
"onlyMainContent": {
"description": "Extract only the main content, filtering out navigation, footers, etc.",
"type": "boolean"
},
"removeBase64Images": {
"description": "Remove base64 encoded images from output",
"type": "boolean"
},
"skipTlsVerification": {
"description": "Skip TLS certificate verification",
"type": "boolean"
},
"timeout": {
"description": "Maximum time in milliseconds to wait for the page to load",
"type": "number"
},
"url": {
"description": "The URL to scrape",
"type": "string"
},
"waitFor": {
"description": "Time in milliseconds to wait for dynamic content to load",
"type": "number"
}
},
"required": [
"url"
],
"type": "object"
},
"name": "firecrawl_scrape"
},
{
"description": "Discover URLs from a starting point. Can use both sitemap.xml and HTML link discovery.",
"inputSchema": {
"properties": {
"ignoreSitemap": {
"description": "Skip sitemap.xml discovery and only use HTML links",
"type": "boolean"
},
"includeSubdomains": {
"description": "Include URLs from subdomains in results",
"type": "boolean"
},
"limit": {
"description": "Maximum number of URLs to return",
"type": "number"
},
"search": {
"description": "Optional search term to filter URLs",
"type": "string"
},
"sitemapOnly": {
"description": "Only use sitemap.xml for discovery, ignore HTML links",
"type": "boolean"
},
"url": {
"description": "Starting URL for URL discovery",
"type": "string"
}
},
"required": [
"url"
],
"type": "object"
},
"name": "firecrawl_map"
},
{
"description": "Start an asynchronous crawl of multiple pages from a starting URL. Supports depth control, path filtering, and webhook notifications.",
"inputSchema": {
"properties": {
"allowBackwardLinks": {
"description": "Allow crawling links that point to parent directories",
"type": "boolean"
},
"allowExternalLinks": {
"description": "Allow crawling links to external domains",
"type": "boolean"
},
"deduplicateSimilarURLs": {
"description": "Remove similar URLs during crawl",
"type": "boolean"
},
"excludePaths": {
"description": "URL paths to exclude from crawling",
"items": {
"type": "string"
},
"type": "array"
},
"ignoreQueryParameters": {
"description": "Ignore query parameters when comparing URLs",
"type": "boolean"
},
"ignoreSitemap": {
"description": "Skip sitemap.xml discovery",
"type": "boolean"
},
"includePaths": {
"description": "Only crawl these URL paths",
"items": {
"type": "string"
},
"type": "array"
},
"limit": {
"description": "Maximum number of pages to crawl",
"type": "number"
},
"maxDepth": {
"description": "Maximum link depth to crawl",
"type": "number"
},
"scrapeOptions": {
"description": "Options for scraping each page",
"properties": {
"excludeTags": {
"items": {
"type": "string"
},
"type": "array"
},
"formats": {
"items": {
"enum": [
"markdown",
"html",
"rawHtml",
"screenshot",
"links",
"screenshot@fullPage",
"extract"
],
"type": "string"
},
"type": "array"
},
"includeTags": {
"items": {
"type": "string"
},
"type": "array"
},
"onlyMainContent": {
"type": "boolean"
},
"waitFor": {
"type": "number"
}
},
"type": "object"
},
"url": {
"description": "Starting URL for the crawl",
"type": "string"
},
"webhook": {
"oneOf": [
{
"description": "Webhook URL to notify when crawl is complete",
"type": "string"
},
{
"properties": {
"headers": {
"description": "Custom headers for webhook requests",
"type": "object"
},
"url": {
"description": "Webhook URL",
"type": "string"
}
},
"required": [
"url"
],
"type": "object"
}
]
}
},
"required": [
"url"
],
"type": "object"
},
"name": "firecrawl_crawl"
},
{
"description": "Scrape multiple URLs in batch mode. Returns a job ID that can be used to check status.",
"inputSchema": {
"properties": {
"options": {
"properties": {
"excludeTags": {
"items": {
"type": "string"
},
"type": "array"
},
"formats": {
"items": {
"enum": [
"markdown",
"html",
"rawHtml",
"screenshot",
"links",
"screenshot@fullPage",
"extract"
],
"type": "string"
},
"type": "array"
},
"includeTags": {
"items": {
"type": "string"
},
"type": "array"
},
"onlyMainContent": {
"type": "boolean"
},
"waitFor": {
"type": "number"
}
},
"type": "object"
},
"urls": {
"description": "List of URLs to scrape",
"items": {
"type": "string"
},
"type": "array"
}
},
"required": [
"urls"
],
"type": "object"
},
"name": "firecrawl_batch_scrape"
},
{
"description": "Check the status of a batch scraping job.",
"inputSchema": {
"properties": {
"id": {
"description": "Batch job ID to check",
"type": "string"
}
},
"required": [
"id"
],
"type": "object"
},
"name": "firecrawl_check_batch_status"
},
{
"description": "Check the status of a crawl job.",
"inputSchema": {
"properties": {
"id": {
"description": "Crawl job ID to check",
"type": "string"
}
},
"required": [
"id"
],
"type": "object"
},
"name": "firecrawl_check_crawl_status"
},
{
"description": "Search and retrieve content from web pages with optional scraping. Returns SERP results by default (url, title, description) or full page content when scrapeOptions are provided.",
"inputSchema": {
"properties": {
"country": {
"description": "Country code for search results (default: us)",
"type": "string"
},
"filter": {
"description": "Search filter",
"type": "string"
},
"lang": {
"description": "Language code for search results (default: en)",
"type": "string"
},
"limit": {
"description": "Maximum number of results to return (default: 5)",
"type": "number"
},
"location": {
"description": "Location settings for search",
"properties": {
"country": {
"description": "Country code for geolocation",
"type": "string"
},
"languages": {
"description": "Language codes for content",
"items": {
"type": "string"
},
"type": "array"
}
},
"type": "object"
},
"query": {
"description": "Search query string",
"type": "string"
},
"scrapeOptions": {
"description": "Options for scraping search results",
"properties": {
"formats": {
"description": "Content formats to extract from search results",
"items": {
"enum": [
"markdown",
"html",
"rawHtml"
],
"type": "string"
},
"type": "array"
},
"onlyMainContent": {
"description": "Extract only the main content from results",
"type": "boolean"
},
"waitFor": {
"description": "Time in milliseconds to wait for dynamic content",
"type": "number"
}
},
"type": "object"
},
"tbs": {
"description": "Time-based search filter",
"type": "string"
}
},
"required": [
"query"
],
"type": "object"
},
"name": "firecrawl_search"
},
{
"description": "Extract structured information from web pages using LLM. Supports both cloud AI and self-hosted LLM extraction.",
"inputSchema": {
"properties": {
"allowExternalLinks": {
"description": "Allow extraction from external links",
"type": "boolean"
},
"enableWebSearch": {
"description": "Enable web search for additional context",
"type": "boolean"
},
"includeSubdomains": {
"description": "Include subdomains in extraction",
"type": "boolean"
},
"prompt": {
"description": "Prompt for the LLM extraction",
"type": "string"
},
"schema": {
"description": "JSON schema for structured data extraction",
"type": "object"
},
"systemPrompt": {
"description": "System prompt for LLM extraction",
"type": "string"
},
"urls": {
"description": "List of URLs to extract information from",
"items": {
"type": "string"
},
"type": "array"
}
},
"required": [
"urls"
],
"type": "object"
},
"name": "firecrawl_extract"
},
{
"description": "Conduct deep research on a query using web crawling, search, and AI analysis.",
"inputSchema": {
"properties": {
"maxDepth": {
"description": "Maximum depth of research iterations (1-10)",
"type": "number"
},
"maxUrls": {
"description": "Maximum number of URLs to analyze (1-1000)",
"type": "number"
},
"query": {
"description": "The query to research",
"type": "string"
},
"timeLimit": {
"description": "Time limit in seconds (30-300)",
"type": "number"
}
},
"required": [
"query"
],
"type": "object"
},
"name": "firecrawl_deep_research"
}
]