Build videos with AI on Shotstack
Two ways to drive the Shotstack render API from an AI tool. Both follow the same Edit JSON conventions.
MCP Server — for chat clients and IDEs
A single endpoint at https://mcp.shotstack.io/ connects Shotstack to Claude, ChatGPT, Cursor, Claude Code, VS Code Copilot, Codex CLI, Gemini CLI, Windsurf, Zed, JetBrains, Goose, and Raycast. The agent composes Edit JSON, mounts the Studio canvas inline (in chat clients) or returns short share links (in terminal clients), and renders on demand.
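Client configuration varies; as a rough sketch, many MCP clients accept a remote server entry along these lines (the "mcpServers" and "url" key names follow a common client convention and are an assumption here, so check your client's own docs):

```json
{
  "mcpServers": {
    "shotstack": {
      "url": "https://mcp.shotstack.io/"
    }
  }
}
```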
CLI + Claude Code Skill — for terminal agents
A terminal-native CLI plus a Claude Code Skill that ships the same authoring conventions to coding agents (Claude Code, Cursor, Codex CLI, Gemini CLI, etc.). Install with npm i -g @shotstack/cli, or pull the skill with npx skills add shotstack/shotstack-cli.
Edit JSON conventions
The video timeline JSON has rules every agent needs to follow: track ordering is reversed (the first track in the array renders on top, not underneath), asset type names differ from CSS instinct, and fonts must come from a specific allowlist. The same conventions guide ships with the CLI Skill and is returned by the MCP server's get_shotstack_guide tool.
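As an illustration of the reversed track order, here is a minimal sketch of an Edit JSON payload: a text clip in the first track layered over a video in the second. Field names reflect the public Edit API as commonly documented, and the source URL and font family are placeholders; the conventions guide is the authority on the exact schema and the font allowlist.

```json
{
  "timeline": {
    "tracks": [
      {
        "clips": [
          {
            "asset": {
              "type": "text",
              "text": "Hello",
              "font": { "family": "Montserrat ExtraBold" }
            },
            "start": 0,
            "length": 5
          }
        ]
      },
      {
        "clips": [
          {
            "asset": {
              "type": "video",
              "src": "https://example.com/background.mp4"
            },
            "start": 0,
            "length": 5
          }
        ]
      }
    ]
  },
  "output": {
    "format": "mp4",
    "size": { "width": 1280, "height": 720 }
  }
}
```

Note that the text track comes first because first-in-array means front-of-frame, the opposite of the bottom-up layering many tools use.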
Plain text everywhere
Every doc URL on shotstack.io/docs/guide/ accepts a .md suffix and returns plain markdown. The /llms.txt and /llms-full.txt indexes give an LLM the full corpus in one fetch.
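An agent can derive the markdown variant of any guide page mechanically. A tiny helper, assuming only the .md-suffix rule above (the example path is illustrative, not a confirmed page):

```python
def markdown_url(doc_url: str) -> str:
    """Return the plain-markdown variant of a docs page URL.

    Drops any trailing slash, then appends the .md suffix that the
    docs site resolves to raw markdown.
    """
    return doc_url.rstrip("/") + ".md"


# Hypothetical page path, for illustration only:
print(markdown_url("https://shotstack.io/docs/guide/getting-started/"))
# → https://shotstack.io/docs/guide/getting-started.md
```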