MCP Integration
The Model Context Protocol (MCP) enables AI assistants to create and edit videos using natural language. Instead of writing JSON edit specifications by hand, you describe the video you want and the AI generates the edit for you.
Quick Start
Add the Shotstack MCP server to your AI assistant. For Claude Desktop, add the following to your config file:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
```json
{
  "mcpServers": {
    "shotstack": {
      "command": "npx",
      "args": ["-y", "@shotstack/shotstack-mcp-server"],
      "env": {
        "SHOTSTACK_API_KEY": "your_api_key"
      }
    }
  }
}
```
Restart Claude Desktop afterwards.
The MCP server currently supports production API keys only; sandbox/stage keys will not work.
Get your production API key from dashboard.shotstack.io, then try:
Create a 10-second video with 'Hello World' text centered on screen
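Behind the scenes, the assistant calls render_video with a Shotstack edit specification. As a rough sketch, the generated edit for the prompt above might look something like the JSON below (the exact structure and styling the AI produces can differ):

```json
{
  "timeline": {
    "tracks": [
      {
        "clips": [
          {
            "asset": {
              "type": "title",
              "text": "Hello World",
              "style": "minimal"
            },
            "start": 0,
            "length": 10,
            "position": "center"
          }
        ]
      }
    ]
  },
  "output": {
    "format": "mp4",
    "resolution": "hd"
  }
}
```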
See Platform Setup for other AI assistants.
Available Tools
| Tool | Description |
|---|---|
| render_video | Create a video from a JSON specification |
| get_render_status | Check render progress and get the output URL |
| create_template | Save a reusable video template |
| list_templates | List all saved templates |
| get_template | Retrieve a template's structure |
| render_template | Render a video from a template with merge fields |
| delete_template | Remove a template |
| inspect_media | Get metadata about a media file |
See Tools Reference for parameters and examples.
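As a rough illustration, render_template pairs a saved template ID with merge fields that replace placeholders in the template. The field names below are an assumption modelled on Shotstack's template render API; check the Tools Reference for the tool's actual parameters:

```json
{
  "id": "your-template-id",
  "merge": [
    { "find": "HEADLINE", "replace": "Summer Sale" },
    { "find": "BACKGROUND_VIDEO", "replace": "https://example.com/beach.mp4" }
  ]
}
```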
Learn More
- Platform Setup — Configuration for each AI assistant
- Tools Reference — Complete tool documentation
- Best Practices — Tips for effective AI video generation