YT to LinkedIn MCP Server
A Model Context Protocol (MCP) server that automates generating LinkedIn post drafts from YouTube videos. This server provides high-quality, editable content drafts based on YouTube video transcripts.
Clone the repository:
git clone <repository-url>
cd yt-to-linkedin
Create a virtual environment and install dependencies:
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
Create a .env file in the project root with your API keys:
OPENAI_API_KEY=your_openai_api_key
YOUTUBE_API_KEY=your_youtube_api_key
Run the application:
uvicorn app.main:app --reload
Access the API documentation at http://localhost:8000/docs
Build the Docker image:
docker build -t yt-to-linkedin-mcp .
Run the container:
docker run -p 8000:8000 --env-file .env yt-to-linkedin-mcp
Ensure you have the Smithery CLI installed and configured.
Deploy to Smithery:
smithery deploy
Endpoint: /api/v1/transcript
Method: POST
Description: Extract transcript from a YouTube video
Request Body:
{
"youtube_url": "https://www.youtube.com/watch?v=VIDEO_ID",
"language": "en",
"youtube_api_key": "your_youtube_api_key" // Optional, provide your own YouTube API key
}
Response:
{
"video_id": "VIDEO_ID",
"video_title": "Video Title",
"transcript": "Full transcript text...",
"language": "en",
"duration_seconds": 600,
"channel_name": "Channel Name",
"error": null
}
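As a minimal sketch of calling this endpoint from Python, the helper below assembles the request body shown above and POSTs it with the standard library. The function names and the localhost base URL are assumptions for illustration, not part of the server itself.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # assumed local dev server

def build_transcript_request(youtube_url, language="en", youtube_api_key=None):
    """Assemble the JSON body for /api/v1/transcript."""
    body = {"youtube_url": youtube_url, "language": language}
    if youtube_api_key:  # optional when YOUTUBE_API_KEY is set server-side
        body["youtube_api_key"] = youtube_api_key
    return body

def fetch_transcript(youtube_url, language="en"):
    """POST the request and return the parsed response dict."""
    data = json.dumps(build_transcript_request(youtube_url, language)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/api/v1/transcript",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The response dict carries the `transcript`, `video_title`, and `channel_name` fields documented above, plus an `error` field that is null on success.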
Endpoint: /api/v1/summarize
Method: POST
Description: Generate a summary from a video transcript
Request Body:
{
"transcript": "Video transcript text...",
"video_title": "Video Title",
"tone": "professional",
"audience": "general",
"max_length": 250,
"min_length": 150,
"openai_api_key": "your_openai_api_key" // Optional, provide your own OpenAI API key
}
Response:
{
"summary": "Generated summary text...",
"word_count": 200,
"key_points": [
"Key point 1",
"Key point 2",
"Key point 3"
]
}
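A request body for this endpoint can be built the same way; the sketch below mirrors the sample request above. The client-side check that `min_length` does not exceed `max_length` is an assumption added for convenience, not documented API behavior.

```python
def build_summarize_request(transcript, video_title, tone="professional",
                            audience="general", max_length=250, min_length=150,
                            openai_api_key=None):
    """Assemble the JSON body for /api/v1/summarize, checking length bounds."""
    if min_length > max_length:
        raise ValueError("min_length must not exceed max_length")
    body = {
        "transcript": transcript,
        "video_title": video_title,
        "tone": tone,
        "audience": audience,
        "max_length": max_length,
        "min_length": min_length,
    }
    if openai_api_key:  # optional when OPENAI_API_KEY is set server-side
        body["openai_api_key"] = openai_api_key
    return body
```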
Endpoint: /api/v1/generate-post
Method: POST
Description: Generate a LinkedIn post from a video summary
Request Body:
{
"summary": "Video summary text...",
"video_title": "Video Title",
"video_url": "https://www.youtube.com/watch?v=VIDEO_ID",
"speaker_name": "Speaker Name",
"hashtags": ["ai", "machinelearning"],
"tone": "professional",
"voice": "first_person",
"audience": "technical",
"include_call_to_action": true,
"max_length": 1200,
"openai_api_key": "your_openai_api_key" // Optional, provide your own OpenAI API key
}
Response:
{
"post_content": "Generated LinkedIn post content...",
"character_count": 800,
"estimated_read_time": "About 1 minute",
"hashtags_used": ["#ai", "#machinelearning"]
}
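Note that the sample request passes hashtags without the leading `#` while `hashtags_used` in the response includes it. The sketch below assembles a request body and normalizes hashtags to bare lowercase tags accordingly; this normalization is an inference from the samples, not documented behavior.

```python
def build_post_request(summary, video_title, video_url, hashtags=None,
                       openai_api_key=None, **overrides):
    """Assemble the JSON body for /api/v1/generate-post."""
    body = {
        "summary": summary,
        "video_title": video_title,
        "video_url": video_url,
        # '#AI' -> 'ai': the request samples use bare tags; the server
        # appears to add the leading '#' on output.
        "hashtags": [t.lstrip("#").lower() for t in (hashtags or [])],
        "tone": "professional",
        "voice": "first_person",
        "audience": "technical",
        "include_call_to_action": True,
        "max_length": 1200,
    }
    if openai_api_key:  # optional when OPENAI_API_KEY is set server-side
        body["openai_api_key"] = openai_api_key
    body.update(overrides)  # e.g. tone="casual", voice="third_person"
    return body
```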
Endpoint: /api/v1/output
Method: POST
Description: Format the LinkedIn post for output
Request Body:
{
"post_content": "LinkedIn post content...",
"format": "json"
}
Response:
{
"content": {
"post_content": "LinkedIn post content...",
"character_count": 800
},
"format": "json"
}
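The four endpoints above form a natural pipeline: transcript, summary, post, formatted output. The sketch below chains them with the standard library, assuming a server on localhost and passing only required fields; `post_json` and `draft_post` are illustrative helpers, not part of the server.

```python
import json
import urllib.request

BASE = "http://localhost:8000"  # assumed local dev server

# Pipeline order, matching the endpoint docs above.
ENDPOINTS = [
    "/api/v1/transcript",
    "/api/v1/summarize",
    "/api/v1/generate-post",
    "/api/v1/output",
]

def post_json(path, body):
    """POST a JSON body to the server and return the parsed response."""
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def draft_post(youtube_url):
    """Run a YouTube URL through the full transcript -> post pipeline."""
    t = post_json("/api/v1/transcript",
                  {"youtube_url": youtube_url, "language": "en"})
    s = post_json("/api/v1/summarize",
                  {"transcript": t["transcript"], "video_title": t["video_title"]})
    p = post_json("/api/v1/generate-post",
                  {"summary": s["summary"], "video_title": t["video_title"],
                   "video_url": youtube_url})
    return post_json("/api/v1/output",
                     {"post_content": p["post_content"], "format": "json"})

if __name__ == "__main__":
    print(draft_post("https://www.youtube.com/watch?v=VIDEO_ID"))
```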
| Variable | Description | Required |
| --- | --- | --- |
| OPENAI_API_KEY | OpenAI API key for summarization and post generation | No (can be provided in requests) |
| YOUTUBE_API_KEY | YouTube Data API key for fetching video metadata | No (can be provided in requests) |
| PORT | Port to run the server on (default: 8000) | No |
Note: While environment variables for API keys are optional (as they can be provided in each request), it's recommended to set them for local development and testing. When deploying to Smithery, users will need to provide their own API keys in the requests.
MIT