# Shotstack Documentation (Full) This file is generated from markdown docs visible in the sidebar for high-recall agent retrieval. Use `llms.txt` for lightweight navigation. --- ## What is Shotstack? - Markdown URL: https://shotstack.io/docs/guide/what-is-shotstack.md - Web URL: https://shotstack.io/docs/guide/ - Description: Shotstack is a platform for editing videos using code that enables marketers, developers and designers to automate the generation of videos at scale. # What is Shotstack? [Shotstack](https://shotstack.io/) is a platform for editing videos, using code, that enables developers, marketers, and designers to automate the generation of videos at scale. Shotstack provides a REST-based API hosted in the cloud that accepts JSON data describing the arrangement of a video edit. JSON is used to arrange, trim and animate assets including images, video, titles and audio. The API also includes a number of built-in motion effects, transitions and filters to rapidly edit a professional-looking video. The cloud-based API provides the infrastructure to render thousands of videos concurrently in a matter of minutes, allowing for mass personalization and customization of content. The API can also be used to generate images and audio-only files. --- ## Edit Videos Using Code - Markdown URL: https://shotstack.io/docs/guide/getting-started/core-concepts.md - Web URL: https://shotstack.io/docs/guide/getting-started/core-concepts/ - Description: Learn the basics of how a video edit is put together using JSON # Edit Videos Using Code Shotstack is a cloud-based [video editing API](https://shotstack.io). Unlike traditional desktop video editing applications like Adobe Premiere or After Effects, Shotstack is purely code-based - you create an edit using code. The API follows a REST architectural style and uses JSON to describe how the video should be edited. 
Although code-based, the API follows many of the principles of desktop editing software such as a timeline, tracks and clips. If you have used Adobe Premiere, After Effects or even Flash, some of these concepts should be familiar, albeit without a graphical user interface. These core concepts and principles are described below: ### Edit An edit encompasses everything about the video that you want to create and is made of two components - the `timeline`, which describes the assets \(videos, images and titles\) that you want to use and how they are arranged \(edited\), and the `output` format \(gif or mp4\) and resolution \(HD, SD, mobile, etc.\). ```json { "timeline": {...}, // the main video edit description "output": {...} // the output format and resolution } ``` :::info The Edit is the JSON payload that you POST to the _render_ endpoint to create your video. ::: ### Timeline The timeline represents the time that the video will play for, using seconds and starting from 0. A timeline is made up of `tracks` which are like layers that contain clips. ```json { "timeline": { "tracks": [{...}] // array of tracks }, ... } ``` ### Tracks Tracks are like layers within the timeline which span the entire length of the timeline. A track contains one or more `clips`. Tracks can be visualized as layers that can be placed one on top of another. Clips on the topmost track will obscure those on tracks below it. This is useful when you want to layer a title over a video clip or you want to fade between clips. Tracks also help to logically organize different types of content, for example a title track which plays over the top of a video and a watermark track that plays over the top of everything. Tracks are an array within the timeline; the first element is the top track and the last element is the bottommost track: ```json { "timeline": { "tracks": [ { ... // top track }, { ... // bottom track } ] }, ... 
} ``` ### Clips Clips are where you select the asset \(video, image or title\) you want to display within a track on the timeline, when it starts, how long it plays for, and what effects, transitions and filters you want to apply. Clips are placed on a track at a specific `start` time; for example, a clip with a `start` time of 3 will appear in the video after three seconds have played. Clips also have a `length` which determines how long the clip will play for in seconds. You can choose to set the `start` or `length` property automatically by setting it to `auto` or `end`. See the [smart clips](/docs/guide/architecting-an-application/smart-clips) section for all options. Each `asset` type has a specific set of options and features, but you must specify the `type` of asset and also the `content` of the asset, such as a URL `src` to a video or image file or, for titles, a `text` string. ```json { "timeline": { "tracks": [ { "clips": [ { // place an image at the very start of the track/timeline that plays for 4 seconds. "asset":{ "type": "image", "src": "https://s3-ap-southeast-2.amazonaws.com/my-bucket/photo.jpg" }, "start": 0, "length": 4 }, { // place a video that starts on the fourth second of the track/timeline ("start": 4) that has // the first two seconds trimmed ("trim": 2) and plays for 6 seconds ("length": 6). "asset":{ "type": "video", "src": "https://s3-ap-southeast-2.amazonaws.com/my-bucket/footage.mp4", "trim": 2 }, "start": 4, "length": 6 } ] } ] }, ... } ``` :::danger Do not place clips on a track so that their start or end times overlap. This will result in the clips flickering because the API does not know which clip to display. ::: A clip can also have transitions, effects and filters applied to it. 
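To honour the no-overlap rule above, start times can be computed rather than hand-written. Here is a minimal Python sketch (the `sequence_clips` helper is illustrative, not part of any Shotstack SDK) that lays clips back-to-back on a track so no two clips overlap:

```python
def sequence_clips(assets_with_lengths):
    """Lay clips end-to-end on a track so their times never overlap.

    assets_with_lengths: list of (asset_dict, length_seconds) pairs.
    Each clip's start is the previous clip's start plus its length.
    """
    clips = []
    start = 0
    for asset, length in assets_with_lengths:
        clips.append({"asset": asset, "start": start, "length": length})
        start += length
    return clips


# Build the same two-clip track as the example above, without
# hand-computing the "start" values:
track = {"clips": sequence_clips([
    ({"type": "image",
      "src": "https://s3-ap-southeast-2.amazonaws.com/my-bucket/photo.jpg"}, 4),
    ({"type": "video",
      "src": "https://s3-ap-southeast-2.amazonaws.com/my-bucket/footage.mp4",
      "trim": 2}, 6),
])}
```

The resulting `track` dict can be dropped straight into the `tracks` array of a timeline.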
A clip may look something like this when more options are applied: ```json { "asset":{ "type": "video", "src": "https://s3-ap-southeast-2.amazonaws.com/my-bucket/footage.mp4", "trim": 2, "volume": 0.5 }, "start": 0, "length": 4, "transition": { "in": "fade", "out": "fade" }, "filter": "boost", "effect": "slideRight" } ``` From these basic building blocks you should be able to assemble a sophisticated, reusable template to create videos. Assets can be replaced or clips added using loops, conditions and other coding constructs, allowing you to build a range of video-based applications. --- ## Request API Keys - Markdown URL: https://shotstack.io/docs/guide/getting-started/request-api-keys.md - Web URL: https://shotstack.io/docs/guide/getting-started/request-api-keys/ - Description: Before you can use the API you need to register and receive your API keys # Request API Keys [Sign up](https://dashboard.shotstack.io/register) to the Shotstack service to generate API keys. Keys for both a staging sandbox and a live production environment are available in the dashboard. Find your keys in the menu under your account name (top right-hand corner): ![API keys menu link](../../static/img/keys-dropdown.png) Click on the **API Keys** item in the menu to open the keys page: ![API keys dashboard page](../../static/img/api-keys-dashboard.png) Keys are a random 40-character string that look something like: ```text HBHjHylztJqjEeIFQxNJWB5g2mPa4lI40wX8oNVD ``` The staging sandbox and production URLs are: | Sandbox URL | Production URL | | :------------------------------| :-------------------------- | | https://api.shotstack.io/stage | https://api.shotstack.io/v1 | :::tip Use the sandbox environment for testing. Only AI assets are charged. Videos are watermarked. ::: :::caution Using your production key will use up your credits and may result in being [charged](https://shotstack.io/pricing/). 
::: --- ## Hello World - Markdown URL: https://shotstack.io/docs/guide/getting-started/hello-world-using-curl.md - Web URL: https://shotstack.io/docs/guide/getting-started/hello-world-using-curl/ - Description: Create your first video in 5 minutes using the command line and Curl # Hello World This Hello World guide provides a basic understanding of how quick and easy it is to create a simple video using styled text, a soundtrack and a simple fade transition. Before you begin, ensure you have registered and received your [API keys](./request-api-keys.md). ### Requirements This tutorial uses Curl to create your first 'Hello World' video. Curl is a cross-platform command-line utility that can be used to POST data to an API. ### Create the Hello World edit We'll use a text file to define the JSON data so that it is easy to edit and format. Open your preferred text editor, copy and paste the following JSON, and save the file as **hello.json**: ```json { "timeline": { "soundtrack": { "src": "https://s3-ap-southeast-2.amazonaws.com/shotstack-assets/music/moment.mp3", "effect": "fadeOut" }, "tracks": [ { "clips": [ { "asset": { "type": "text", "text": "HELLO WORLD", "font": { "family": "Montserrat ExtraBold", "color": "#ffffff", "size": 32 }, "alignment": { "horizontal": "left" } }, "start": 0, "length": 5, "transition": { "in": "fade", "out": "fade" } } ] } ] }, "output": { "format": "mp4", "size": { "width": 1024, "height": 576 } } } ``` ### POST the edit to the Shotstack API You can now POST the **hello.json** edit data to the API for rendering. To do this we'll use the **render** endpoint. From the same directory as the **hello.json** file, run the following Curl command: ```bash curl -X POST \ -H "Content-Type: application/json" \ -H "x-api-key: ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf" \ -d @hello.json \ https://api.shotstack.io/edit/stage/render ``` :::info Replace the **x-api-key** value with your own sandbox (stage) API key. 
::: You should receive a 201 response code and a payload similar to: ```json { "success":true, "message":"Created", "response":{ "message":"Render Successfully Queued", "id":"d2b46ed6-998a-4d6b-9d91-b8cf0193a655" } } ``` :::info Take note of the **id** (it will be different from the example above); you will need it in the next step. You should keep track of the **id** in your own database as there is currently no way to retrieve a list of render IDs. ::: ### Poll the API to check the render status Your video is now either waiting in a queue to be rendered or is being processed. To check the status and know when rendering is complete and the video file is ready, we need to poll the API. Using the **id** noted in the previous step and your own key, call the staging API using the following command: ```bash curl -X GET \ -H "Content-Type: application/json" \ -H "x-api-key: ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf" \ https://api.shotstack.io/edit/stage/render/d2b46ed6-998a-4d6b-9d91-b8cf0193a655 ``` :::info Replace the UUID at the end of the render URL with the **id** output in the previous step. ::: In less than one minute you should receive a 200 response code and a payload similar to the output below. The status parameter tells you what stage the render is at; **done** means the video is ready for download. Other statuses are queued, fetching, rendering, saving and failed. ```json { "success": true, "message": "OK", "response": { "id": "d2b46ed6-998a-4d6b-9d91-b8cf0193a655", "owner": "5ca6hu7s9k", "plan": "free", "status": "done", "url": "https://shotstack-api-stage-output.s3-ap-southeast-2.amazonaws.com/5ca6hu7s9k/d2b46ed6-998a-4d6b-9d91-b8cf0193a655.mp4", "data": { "output": { ... }, "timeline": { ... } }, "created": "2024-01-16T12:02:42.148Z", "updated": "2024-01-16T12:02:51.867Z" } } ``` :::caution Warning If the status is **failed** then there has been an error and the render will not continue. Attempt to resolve the issue based on the error message, then try again. 
If the problem persists contact us for support. ::: ### Hosting and serving the rendered video When the status returned from polling equals **done**, the video is ready for [hosting](/docs/guide/serving-assets/hosting) or downloading. By default, all videos are automatically sent to our [video hosting and CDN service](/docs/guide/serving-assets/hosting). After a few seconds the video should be available at the following locations: | Staging | Production | | :--------------------------------- | :------------------------------ | | **https://cdn.shotstack.io/au/stage/** | **https://cdn.shotstack.io/au/v1/** | The video created in this guide would be available at: ``` https://cdn.shotstack.io/au/stage/5ca6hu7s9k/d2b46ed6-998a-4d6b-9d91-b8cf0193a655.mp4 ``` The [Serve API](/docs/guide/serving-assets/serve-api) provides more information on the status and availability of hosted files. ### Transferring the video to your own hosting service If you use your own hosting solution, you can [opt-out](/docs/guide/serving-assets/self-host) from the Shotstack hosting service and transfer the file to the location of your choice. Videos are stored temporarily for a 24-hour period at the following locations: | Staging | Production | | :------------------------------------------------------------------ | :------------------------------------------------------------------ | | **https://shotstack-api-stage-output.s3-ap-southeast-2.amazonaws.com/** | **https://shotstack-api-v1-output.s3-ap-southeast-2.amazonaws.com/** | Download or transfer the video using the URL returned in the get render status response: ```bash curl -O https://shotstack-api-stage-output.s3-ap-southeast-2.amazonaws.com/5ca6hu7s9k/d2b46ed6-998a-4d6b-9d91-b8cf0193a655.mp4 ``` :::warning Danger Do not serve the temporary URLs from your application as the file will be deleted after 24 hours. 
Check the [Hosting and Serving Assets](/docs/guide/serving-assets/hosting) guide for the correct way to serve assets generated by the video editing API. ::: ### Conclusion You should now have a basic understanding of the key steps needed to create a simple edit: 1. Prepare the edit data 2. POST the edit data to the API 3. Poll the API and wait for the render to complete 4. Serve the video from the Shotstack CDN 5. Download or transfer the video to your own storage --- ## Editing Videos, Images and Audio - Markdown URL: https://shotstack.io/docs/guide/architecting-an-application/guidelines.md - Web URL: https://shotstack.io/docs/guide/architecting-an-application/guidelines/ - Description: Guidelines on how to create an application incorporating the Shotstack API. # Editing Videos, Images and Audio Shotstack provides a framework to automate the editing process for your videos, images and audio, and handles the heavy lifting of rendering thousands of videos concurrently. Shotstack is useful for building media-based applications and automating workflows where a template can be used and assets created in batches or large volumes. ## Key steps to editing media ### Prepare JSON using the Studio An edit in Shotstack is defined using JSON which specifies when different assets (images, videos, titles) will appear and play on a timeline. In order to create a video you first need to prepare the JSON data so that it contains the correct assets arranged on the timeline. You can use the Shotstack Studio to help create this JSON data. ### Post JSON to API render endpoint Once you have prepared an edit using the Studio, you POST the data to the _**render**_ endpoint of our API. ### Poll render endpoint The render process takes approximately 10 seconds for a 30-second video. Your application needs to poll the API on a regular basis to check the status of the video render by calling the _**render**_ endpoint with the **id** of the current render. 
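The polling step can be sketched in Python. This is a hedged illustration, not an official SDK: the helper names (`render_status_url`, `poll_until_finished`) are invented for this sketch, while the endpoint pattern and status values come from the Hello World guide. The status fetcher is injected so the loop works with any HTTP client:

```python
import time

# Status values documented in the Hello World guide.
TERMINAL_STATUSES = {"done", "failed"}
IN_PROGRESS_STATUSES = {"queued", "fetching", "rendering", "saving"}


def render_status_url(env, render_id):
    """Build the GET render-status URL; env is 'stage' or 'v1'."""
    return f"https://api.shotstack.io/edit/{env}/render/{render_id}"


def poll_until_finished(fetch_status, render_id, interval=5, timeout=300):
    """Call fetch_status(render_id) until it returns 'done' or 'failed'.

    In production, fetch_status would GET render_status_url() with your
    x-api-key header and return response["response"]["status"].
    """
    waited = 0.0
    while True:
        status = fetch_status(render_id)
        if status in TERMINAL_STATUSES:
            return status
        if waited >= timeout:
            raise TimeoutError(f"render {render_id} not finished after {timeout}s")
        time.sleep(interval)
        waited += interval
```

Webhooks, described below, avoid this loop entirely and are preferred for production workloads.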
When the status is `done` you can proceed to downloading or transferring the video to your application. :::info Tip We recommend [using webhooks](webhooks.md) to reduce the number of calls to the API and receive a notification as soon as rendering completes. ::: ### Hosting and serving video Once a video is finished rendering, it is available at a temporary URL and is also automatically transferred to the [Shotstack hosting service](/docs/guide/serving-assets/hosting). ### Download or transfer the video Instead of using the Shotstack hosting service, the video can be fetched from our servers \(S3\) and transferred to your own application for further processing and hosting. You may wish to transfer the video to your own S3 or storage service, or upload it to Google Drive, Vimeo or any of our other destinations. :::danger Warning If you are not using our hosting service, we will store the video on our servers for 24 hours only. After that time it will be deleted. ::: --- ## Rich Text - Markdown URL: https://shotstack.io/docs/guide/architecting-an-application/rich-text.md - Web URL: https://shotstack.io/docs/guide/architecting-an-application/rich-text/ - Description: Create advanced animated text with gradients, shadows, and effects # Rich Text The rich-text asset provides advanced text rendering capabilities with support for animations, gradients, shadows, strokes, and custom styling. It's ideal for creating professional titles, animated text effects, and stylized typography. 
## Common rich-text patterns Before diving into individual properties, here are solutions to common text needs: ### Animated title Create an eye-catching title with a word-by-word ascend animation, a custom font and a linear gradient: ```json { "timeline": { "fonts": [ { "src": "https://shotstack-assets.s3.amazonaws.com/fonts/Oswald-VariableFont.ttf" } ], "tracks": [ { "clips": [ { "asset": { "type": "rich-text", "text": "Evening Night", "font": { "family": "Oswald", "size": 125, "weight": 900 }, "style": { "letterSpacing": 8, "gradient": { "type": "linear", "stops": [ { "offset": 0, "color": "#005AA7" }, { "offset": 1, "color": "#FFFDE4" } ] } }, "animation": { "preset": "ascend", "duration": 1 } }, "start": 0, "length": 5, "width": 1000, "height": 300 } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` ### Bold call-to-action text Create impactful text with stroke outline and shadow for maximum visibility: ```json { "timeline": { "background": "#000000", "tracks": [ { "clips": [ { "asset": { "type": "rich-text", "text": "SUBSCRIBE NOW", "font": { "family": "Montserrat", "size": 64, "weight": 800, "color": "#ffd700" }, "style": { "letterSpacing": 4, "textTransform": "uppercase" }, "stroke": { "width": 4, "color": "#000000", "opacity": 1 }, "shadow": { "offsetX": 6, "offsetY": 6, "blur": 12, "color": "#000000", "opacity": 0.7 }, "background": { "color": "#FF5962", "borderRadius": 25 }, "animation": { "preset": "shift", "duration": 2, "style": "character", "direction": "up" }, "align": { "horizontal": "center" } }, "start": 0, "length": 5, "width": 800, "height": 200 } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` ### Typewriter subtitle effect Create text that appears character by character: ```json { "timeline": { "background": "#000000", "fonts": [ { "src": 
"https://shotstack-assets.s3.amazonaws.com/fonts/SpecialElite-Regular.ttf" } ], "tracks": [ { "clips": [ { "asset": { "type": "rich-text", "text": "Let me tell you a story...", "font": { "family": "SpecialElite-Regular", "size": 42, "color": "#ffffff" }, "animation": { "preset": "typewriter", "duration": 1.5, "style": "character" } }, "start": 0, "length": 5, "width": 900, "height": 150 } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` ## Basic usage A simple rich-text element requires only text, dimensions, and font properties: ```json { "asset": { "type": "rich-text", "text": "Welcome to Shotstack", "font": { "family": "Roboto", "size": 48, "color": "#ffffff" } }, "start": 0, "length": 5, "width": 800, "height": 400 } ``` This creates white Roboto text at 48px within an 800×400 pixel container. The `width` and `height` define the text's maximum area, and text is centered by default. The clip appears at the beginning of the timeline and lasts 5 seconds. ## Fonts The font properties control the core typography of your text. Choose from built-in fonts or load custom fonts to match your brand. Font styling is broken down into seven properties: 1. `font` 2. `style` 3. `stroke` 4. `shadow` 5. `background` 6. `align` 7. `animation` ### Font properties Customize the appearance of your text with font styling options: ```json { "asset": { "type": "rich-text", "text": "Styled Text", "font": { "family": "Roboto", "size": 64, "weight": 800, "color": "#ff6b6b", "opacity": 0.9 } }, "start": 0, "length": 5, "width": 800, "height": 400 } ``` This creates bold text using Roboto (weight 800) at 64px with a coral red color at 90% opacity. 
**Font options:** - `family` - Font name (default: "Roboto") - `size` - Font size in pixels, 12-200 (default: 48) - `weight` - Font weight: 100-900 or "normal", "bold" (default: "400") - `color` - Hex color code (default: "#ffffff") - `opacity` - Text opacity, 0-1 (default: 1) ### Available fonts Built-in fonts: - Roboto - Montserrat - Open Sans - Work Sans - Uni Neue - Arapey - Clear Sans - Didact Gothic - Movie Letter - Permanent Marker - Sue Ellen Francisco ### Custom fonts Use your own fonts by providing a URL to an OTF or TTF font file: ```json { "timeline": { "background": "#FFD166", "fonts": [ { "src": "https://shotstack-assets.s3.amazonaws.com/fonts/PressStart2P-Regular.ttf" } ], "tracks": [ { "clips": [ { "asset": { "type": "rich-text", "text": "GAME OVER", "font": { "family": "Press Start 2P", "size": 60, "color": "#EF476F" } }, "start": 0, "length": 5, "width": 1000, "height": 400 } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` ![custom fonts example](/img/rich-text/image.png) :::info The font file must be publicly accessible. The `family` property can be set using either: 1. **The font's embedded name** (recommended) — use the font's real family name as stored in the font file, e.g. `"Press Start 2P"` for `PressStart2P-Regular.ttf`. This is the name you would see when installing the font on your computer. 2. **The filename** — use the filename without the extension, e.g. `"PressStart2P-Regular"` for `PressStart2P-Regular.ttf`. ::: ### International fonts Shotstack supports everything from Arabic calligraphy to Devanagari ligatures, as long as the font provides the necessary glyphs. 
#### Hindi ```json { "timeline": { "background": "#015C37", "fonts": [ { "src": "https://shotstack-assets.s3.amazonaws.com/fonts/Hind-Regular.ttf" } ], "tracks": [ { "clips": [ { "asset": { "type": "rich-text", "text": "जहाँ चाह, वहाँ राह।", "font": { "family": "Hind", "size": 60, "color": "#FFFFFF" } }, "start": 0, "length": 5, "width": 1000, "height": 400 } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` ![Hindi text example](/img/rich-text/hindi-font.png) #### Chinese (Simplified) ```json { "timeline": { "background": "#CD071E", "fonts": [ { "src": "https://shotstack-assets.s3.amazonaws.com/fonts/NotoSansTC-VariableFont_wght.ttf" } ], "tracks": [ { "clips": [ { "asset": { "type": "rich-text", "text": "千里之行,始于足下。", "font": { "family": "NotoSansTC-VariableFont_wght", "size": 75, "color": "#FABC3C" } }, "start": 0, "length": 5, "width": 1000, "height": 400 } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` ![Simplified Chinese example](/img/rich-text/chinese-font.png) ## Text styling Beyond fonts, text styling properties control spacing, transformations, and decorations. These properties help create modern, spacious layouts and emphasize important text. 
### Letter spacing and line height Adjust text layout with spacing and transformation options: ```json { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "rich-text", "text": "Spaced Out Text", "font": { "family": "Roboto", "size": 48, "color": "#ffffff" }, "style": { "letterSpacing": 35, "lineHeight": 1.5, "textTransform": "uppercase", "textDecoration": "underline" } }, "start": 0, "length": 5, "width": 1000, "height": 400 } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` ![letter spacing and line height example](/img/rich-text/image-2.png) This adds 35 pixels between characters, increases line height to 1.5× the font size (more space between lines), converts text to uppercase, and adds an underline. **Style options:** - `letterSpacing` - Space between characters in pixels (default: 0) - `lineHeight` - Relative line height, 0-10 (default: 1.2) - `textTransform` - "none", "uppercase", "lowercase", "capitalize" (default: "none") - `textDecoration` - "none", "underline", "line-through" (default: "none") ## Gradients Gradients add visual interest by transitioning between colors. Use linear gradients for modern, directional effects, or radial gradients for spotlight and glow effects. Gradients work best with bold, large text and override the font color property. 
### Linear gradient Create text with a smooth color transition from one color to another: ```json { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "rich-text", "text": "MILKSHAKE", "font": { "family": "Montserrat", "size": 125, "weight": 800 }, "style": { "gradient": { "type": "linear", "angle": 45, "stops": [ { "offset": 0, "color": "#C6FFDD" }, { "offset": 0.5, "color": "#FBD786" }, { "offset": 1, "color": "#f7797d" } ] } } }, "start": 0, "length": 5, "width": 900, "height": 400 } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` ![three-color gradient transition](/img/rich-text/image-1.png) A three-color gradient transition at a 45-degree diagonal. The `angle` controls the direction (0° is left to right, 90° is top to bottom), and `stops` define colors at specific positions (0 is start, 0.5 is middle and 1 is end). ### Radial gradient Create text with a color that radiates from the center outward: ```json { "timeline": { "background": "#FFFFFF", "tracks": [ { "clips": [ { "asset": { "type": "rich-text", "text": "NOVA", "font": { "family": "Montserrat", "size": 200, "weight": 800 }, "style": { "gradient": { "type": "radial", "stops": [ { "offset": 0, "color": "#ffd89b" }, { "offset": 0.5, "color": "#f6416c" }, { "offset": 1, "color": "#2d3561" } ] } } }, "start": 0, "length": 5, "width": 900, "height": 400 } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` ![radial gradient example](/img/rich-text/image-3.png) The gradient radiates from yellow at the center through pink at the midpoint to dark blue at the edges. Radial gradients create a spotlight or glow effect and work best with centered text. 
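Since a gradient needs a valid type and at least two stops with offsets between 0 and 1, a template builder can sanity-check gradients before submitting a render. The following Python check is illustrative only (the `validate_gradient` helper is not part of the API, which performs its own validation server-side):

```python
def validate_gradient(gradient):
    """Return a list of problems with a gradient definition (empty if valid).

    Rules follow the gradient options described in this guide: type is
    "linear" or "radial", at least two stops, and each stop needs an
    offset in 0..1 plus a hex color.
    """
    problems = []
    if gradient.get("type", "linear") not in ("linear", "radial"):
        problems.append("type must be 'linear' or 'radial'")
    stops = gradient.get("stops", [])
    if len(stops) < 2:
        problems.append("at least 2 color stops are required")
    for i, stop in enumerate(stops):
        if not (0 <= stop.get("offset", -1) <= 1):
            problems.append(f"stop {i}: offset must be between 0 and 1")
        if "color" not in stop:
            problems.append(f"stop {i}: missing hex color")
    return problems
```

Running this over each `style.gradient` in a template before POSTing catches malformed data early, rather than waiting for a failed render.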
**Gradient options:** - `type` - "linear" or "radial" (default: "linear") - `angle` - Gradient angle in degrees, 0-360 (default: 0, only for linear) - `stops` - Array of color stops (minimum 2 required) - `offset` - Position of stop, 0-1 (required) - `color` - Hex color code (required) :::info Gradients require at least 2 color stops. Use 2-4 stops for smooth, professional gradients. More stops can create banding effects. When using gradients, the gradient fill overrides the `font.color` property. ::: ## Stroke Add a bold outline around text for visibility and impact: ```json { "asset": { "type": "rich-text", "text": "Outlined Text", "font": { "family": "Permanent Marker", "size": 72, "color": "#ffffff" }, "stroke": { "width": 4, "color": "#000000", "opacity": 1 } }, "start": 0, "length": 5, "width": 800, "height": 400 } ``` ![outlined text](/img/rich-text/image-5.png) The 4-pixel black stroke creates a bold outline that makes white text readable over any background. A stroke width of 3-6 pixels works well for most text sizes. Combine stroke with shadow for maximum visibility. **Stroke options:** - `width` - Stroke width in pixels (default: 0) - `color` - Hex color code (default: "#000000") - `opacity` - Stroke opacity, 0-1 (default: 1) ## Shadow Add depth and improve readability with drop shadows: ```json { "timeline": { "background": "#F9DB6A", "tracks": [ { "clips": [ { "asset": { "type": "rich-text", "text": "MONDAY", "font": { "family": "Montserrat", "size": 175, "color": "#432963", "weight": 900 }, "shadow": { "offsetX": 8, "offsetY": 8, "blur": 6, "color": "#77E6F0", "opacity": 0.8 } }, "start": 0, "length": 5, "width": 1000, "height": 400 } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` ![drop shadow example](/img/rich-text/image-4.png) The shadow appears 8 pixels down and to the right with a 6-pixel blur, creating a soft depth effect. 
Lower opacity (0.5-0.8) creates subtle shadows, while higher opacity (0.9-1.0) creates dramatic shadows. **Shadow options:** - `offsetX` - Horizontal offset in pixels (default: 0) - `offsetY` - Vertical offset in pixels (default: 0) - `blur` - Blur radius in pixels (default: 0) - `color` - Hex color code (default: "#000000") - `opacity` - Shadow opacity, 0-1 (default: 0.5) ## Background Add a colored background box behind text: ```json { "asset": { "type": "rich-text", "text": "Background Text", "font": { "family": "Roboto", "size": 48, "color": "#ffffff" }, "background": { "color": "#3498db", "opacity": 0.8, "borderRadius": 20 } }, "start": 0, "length": 5, "width": 600, "height": 200 } ``` ![Lower-third with translucent blue rounded background and white text rendered over a forest scene](/img/rich-text/translucent-background.png) By default, the background fills the entire clip container (600×200 pixels) with a semi-transparent blue color. The rounded corners (`borderRadius: 20`) soften the edges. ### Wrapping the background to the text When the text length is unknown at template-authoring time — merge fields, user input, transcribed names — a fixed-width background looks awkward. Short text leaves a wide empty bar; long text overflows or feels cramped against the edges. Set `background.wrap` to `true` and the background shrinks to fit the rendered text plus the asset's padding (and stroke width, if present), producing a pill that hugs whatever the runtime value turns out to be. The render below uses an identical template for three different names. Without wrap (left), the background is always the full clip width. 
With wrap (right), the pill resizes to each name: ```json { "asset": { "type": "rich-text", "text": "Alexandra Wellington", "font": { "family": "Montserrat", "size": 48, "color": "#ffffff", "weight": 600 }, "padding": 18, "background": { "color": "#ef4444", "borderRadius": 60, "wrap": true } }, "start": 0, "length": 2, "width": 600, "height": 100 } ``` The clip stays 600×100, but the red pill only wraps the text plus 18 pixels of padding on each side. The text stays positioned within the clip per its `align` settings; the pill follows the text. Combined with `border`, the border tracks the wrapped pill in the same way. **Background options:** - `color` - Hex color code (optional) - `opacity` - Background opacity, 0-1 (default: 1) - `borderRadius` - Border radius in pixels (default: 0) - `wrap` - When `true`, the background shrinks to fit the rendered text plus padding instead of filling the clip (default: false) ## Alignment Position text within its container using horizontal and vertical alignment: ```json { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "rich-text", "text": "\"The mystery of human existence lies not in just staying alive, but in finding something to live for.\" \n \n- Fyodor Dostoevsky", "font": { "family": "Open Sans", "size": 48 }, "align": { "horizontal": "left", "vertical": "top" } }, "start": 0, "length": 5, "width": 500, "height": 600 } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` ![alignment example](/img/rich-text/image-7.png) This positions text in the top-left corner of the 500x600 container. Text is centered both horizontally and vertically by default. **Alignment options:** - `horizontal` - "left", "center", "right" (default: "center") - `vertical` - "top", "middle", "bottom" (default: "middle") ## Animations Text animations bring your text to life. Choose from preset animations like typewriter, fade-in, slide-in, and more. 
Control animation speed and duration to match your video's pacing. Animations are perfect for intros, subtitles, and attention-grabbing titles. ### Example: Fade-in animation Create text that smoothly fades into view: ```json { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "rich-text", "text": "Ascend while others stay idle", "font": { "family": "Roboto", "size": 60, "color": "#ffffff" }, "animation": { "preset": "fadeIn", "duration": 2, "style": "word" } }, "start": 0, "length": 5, "width": 800, "height": 400 } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` The text fades in over 2 seconds. The optional `duration` parameter controls how long the animation takes, with the default being the length of the clip. The animation starts when the clip begins. ### Available animation presets #### fadeIn ```json "animation": { "preset": "fadeIn", // Animation type "duration": 2, // Animation length (default: full clip) "style": "character" // "word", "character", or "full" (default) } ``` #### typewriter ```json "animation": { "preset": "typewriter", // Animation type "duration": 2, // Animation length (default: full clip) "style": "character" // "word" or "character" (default: character) } ``` #### slideIn ```json "animation": { "preset": "slideIn", // Animation type "duration": 2, // Animation length (default: full clip) "style": "character", // "word" or "character" (default) "direction": "left" // "left" (default), "right", "up", "down" } ``` #### ascend ```json "animation": { "preset": "ascend", // Animation type "duration": 2, // Animation length (default: full clip) "direction": "up" // "up" (default), "down" } ``` #### shift Text shifts from a direction with fade: ```json "animation": { "preset": "shift", // Animation type "duration": 2, // Animation length (default: full clip) "style": "character", // "word" or "character" (default) "direction": "left" // "left" (default), "right", "up", "down" } ``` ## 
Related topics - [Animations](/docs/guide/architecting-an-application/animations) - Animate clip properties over time - [Positioning](/docs/guide/architecting-an-application/positioning) - Position and scale rich-text assets --- ## Positioning - Markdown URL: https://shotstack.io/docs/guide/architecting-an-application/positioning.md - Web URL: https://shotstack.io/docs/guide/architecting-an-application/positioning/ - Description: Position assets precisely on the video canvas using position, offset, fit, and scale properties # Positioning Positioning controls where assets appear in your video. Because source assets vary in size and aspect ratio, Shotstack provides multiple properties that work together to give you precise control over placement and sizing. ## Common positioning patterns Before diving into how each property works, here are solutions to common positioning needs: ### Consistent sizing for variable images When using images of different sizes or aspect ratios (common in templates with user-provided content), set fixed dimensions with the appropriate fit mode to ensure consistent layout: ```json { "asset": { "type": "image", "src": "https://shotstack-assets.s3.amazonaws.com/images/woods1.jpg" }, "start": 0, "length": 5, "width": 400, "height": 400, "fit": "contain" } ``` ![Three images of different aspect ratios (portrait, square, landscape) all sized to 400x400 with fit contain, showing letterboxing](/img/positioning/consistent-sizing-fixed-dimensions.png) By setting fixed `width` and `height` values, every image—regardless of its original dimensions—will occupy exactly 400x400 pixels in your video. This is critical when working with user-provided content where source image sizes vary unpredictably. The `fit` property determines how the source image fills this fixed area: - `fit: "contain"` (shown above) - Scales the image proportionally to fit entirely within the 400x400 area. Maintains aspect ratio but may add letterboxing. 
Best when you must show the complete image. - `fit: "crop"` - Fills the 400x400 area by cropping the source image. Centers the crop region. - `fit: "cover"` - Scales the image to completely fill the 400x400 area. Does not maintain aspect ratio and may distort the image. ### Logo in corner Places a logo in the top-right corner with margin: ```json { "asset": { "type": "image", "src": "https://shotstack-assets.s3.amazonaws.com/logos/shotstack/small/logo-mono-white.png" }, "start": 0, "length": 5, "fit": "crop", // Default setting. Can be omitted. "scale": 0.07, "position": "topRight", "offset": { "x": -0.05, "y": -0.05 } } ``` ![Shotstack logo positioned in top-right corner with offset margin](/img/positioning/logo-corner-offset.png) The `offset` property adds spacing from the corner anchor point, while the `fit` property's value of `crop` crops the image to the size of the viewport. The `scale` property resizes the clip to 7% of the viewport size. ### Full-bleed background Fills the entire canvas with an image, cropping edges if needed: ```json { "asset": { "type": "image", "src": "https://shotstack-assets.s3.amazonaws.com/images/woods1.jpg" }, "start": 0, "length": 5, "fit": "crop", // Default setting. Can be omitted. "position": "center" // Default setting. Can be omitted. } ``` ![Forest image filling entire canvas using fit crop, centered position](/img/positioning/full-bleed-background.png) The `"fit": "crop"` property scales the image to fill the clip area while maintaining aspect ratio. ## The coordinate system Shotstack uses normalized coordinates that work regardless of your output resolution. The viewport center is `(0, 0)`, and its edges extend from `-1` to `1` in both directions. This means the same positioning values work identically for 720p, 1080p, or 4K videos—no need to recalculate for different resolutions. 
![Coordinate system diagram showing viewport with center at (0,0) and edges from -1 to 1 in both directions](/img/positioning/coordinate-system-diagram.png) ## Position property The `position` property anchors your asset to one of nine points on the viewport: - `center` (default) - `top`, `bottom`, `left`, `right` - `topLeft`, `topRight`, `bottomLeft`, `bottomRight` ```json { "asset": { "type": "image", "src": "https://shotstack-assets.s3.amazonaws.com/logos/shotstack/small/logo-mono-white.png" }, "start": 0, "length": 5, "fit": "none", "position": "topLeft" } ``` ![Shotstack logo positioned at top-left corner using position property with fit none](/img/positioning/position-top-left.png) Use `position` when you want to align content to standard locations like corners or edges. :::info Make sure you are using the correct `fit` property. The default `"fit": "crop"` will scale the asset to the viewport, allowing you to control how the asset is cropped. ::: ## Offset property The `offset` property fine-tunes placement from the `position` anchor point, relative to the total width and height of the viewport. Values range from `-1` to `1`: - `x`: Negative moves left, positive moves right - `y`: Negative moves down, positive moves up ```json { "asset": { "type": "image", "src": "https://shotstack-assets.s3.amazonaws.com/logos/shotstack/small/logo-mono-white.png" }, "start": 0, "length": 5, "fit": "none", "position": "topLeft", "offset": { "x": 0.05, "y": -0.05 } } ``` ![Shotstack logo positioned at top-left with offset creating 5% margin from edges](/img/positioning/offset-with-margin.png) This example positions the asset at `topLeft`, then moves it right by 5% of the width of the viewport and down by 5% of the height of the viewport, creating margin from the edges. :::info Offset values use a relative `-1` to `1` range, based on the `width` and `height` selected in the `output`. Values > 1 will move the image offscreen. 
::: ## Fit property The `fit` property controls how assets scale to fit the clip: ### Crop Scales the asset to fill the clip bounds while maintaining aspect ratio. This may crop the asset if aspect ratio doesn't match the viewport size: ```json { "asset": { "type": "image", "src": "https://shotstack-assets.s3.amazonaws.com/images/mountain-mist-lrg.jpg" }, "start": 0, "length": 5, "fit": "crop" } ``` ![Mountain landscape image with fit crop, filling canvas while maintaining aspect ratio](/img/positioning/fit-crop.png) ### Cover Scales the asset to fill the clip area. This does not maintain the aspect ratio and may distort the asset if the viewport aspect ratio doesn't match: ```json { "asset": { "type": "image", "src": "https://shotstack-assets.s3.amazonaws.com/images/mountain-mist-lrg.jpg" }, "start": 0, "length": 5, "fit": "cover" } ``` ![Mountain landscape image with fit cover, stretched to fill canvas without maintaining aspect ratio](/img/positioning/fit-cover.png) ### Contain Scales the asset to fit within the clip bounds while maintaining aspect ratio, but may add letterboxing if aspect ratios don't match: ```json { "asset": { "type": "image", "src": "https://shotstack-assets.s3.amazonaws.com/images/mountain-mist-lrg.jpg" }, "start": 0, "length": 5, "fit": "contain" } ``` ![Mountain landscape image with fit contain, showing letterboxing to preserve aspect ratio](/img/positioning/fit-contain.png) ### None No scaling applied. Asset uses its original dimensions: ```json { "asset": { "type": "image", "src": "https://shotstack-assets.s3.amazonaws.com/images/mountain-mist-lrg.jpg" }, "start": 0, "length": 5, "fit": "none" } ``` ![Mountain landscape image with fit none, showing original dimensions without scaling](/img/positioning/fit-none.png) Use when you want exact control over sizing, typically combined with `scale`. :::info If no `fit` property is specified, the default behavior is `crop`, which will crop assets to fill the frame. 
::: ## Width & Height Clips are set to the width and height of the viewport by default, unless you explicitly set their dimensions. :::info The `width` and `height` properties are not currently available in Shotstack Studio. These properties are only available when using the Shotstack API directly. ::: ### Implicit dimensions When `width` and `height` are **not** explicitly set, the clip will assume the dimensions of the viewport as set in the `output` property. ### Explicit dimensions You can set the exact pixel dimensions for a clip. This is helpful if you want to make sure your assets are contained within a fixed boundary: ```json { "asset": { "type": "video", "src": "https://shotstack-assets.s3.amazonaws.com/footage/city.mp4" }, "start": 0, "length": 5, "width": 480, "height": 270, "position": "bottomRight", "offset": { "x": -0.1, "y": -0.1 } } ``` ![Picture-in-picture effect with smaller video overlay positioned in bottom-right corner using fixed dimensions](/img/positioning/picture-in-picture.png) This creates a picture-in-picture effect with a fixed-size video overlay. :::info Fixed size clips are particularly useful if you are using UGC or other assets whose size and/or aspect ratio you cannot guarantee. ::: ## Scale The `scale` property resizes assets proportionally. A value of `1` is normal size, `0.5` is half size, `2` is double size: ```json { "asset": { "type": "image", "src": "https://shotstack-assets.s3.amazonaws.com/images/woods1.jpg" }, "start": 0, "length": 5, "scale": 0.2 } ``` ![Forest image scaled down to 20% of viewport size using scale property](/img/positioning/scale-example.png) Use `scale` when you want proportional resizing. Use `width` and `height` when you need exact dimensions. :::info The `scale` property is applied to the `clip` **after** `fit` scaling is applied to the asset. ::: ## How properties work together Properties are applied in this order: 1. **Width/Height** - Sets explicit dimensions of the clip 2. 
**Fit** - Scales asset to the clip 3. **Position** - Anchors to grid point 4. **Offset** - Fine-tunes from anchor relative to viewport 5. **Scale** - Final size multiplier Example using multiple properties: ```json { "asset": { "type": "image", "src": "https://shotstack-assets.s3.amazonaws.com/images/woods1.jpg" }, "start": 0, "length": 5, "fit": "contain", "position": "topLeft", "width": 500, "height": 500, "offset": { "x": 0.1, "y": -0.1 }, "scale": 0.8 } ``` ![Forest image in 500x500 clip with fit contain, positioned top-left with offset and scaled to 80%](/img/positioning/combined-properties.png) This example: 1. Sets the clip container explicitly to 500px by 500px. 2. Resizes the asset to fit within the 500px by 500px container without cropping the image, potentially with letterboxing. 3. Anchors the clip to the top-left corner of the viewport (`topLeft`). 4. Adds 10% margins (relative to the width and height of the viewport) from the corner. 5. Scales the clip to 80% of its size, 400px by 400px. ## Related topics - [Animations](/docs/guide/architecting-an-application/animations) - Animate position and offset over time --- ## SVG - Markdown URL: https://shotstack.io/docs/guide/architecting-an-application/svg.md - Web URL: https://shotstack.io/docs/guide/architecting-an-application/svg/ - Description: Create scalable vector graphics using raw SVG markup # SVG The SVG asset lets you add scalable vector graphics to your videos using raw SVG markup. Unlike the [shape asset](/docs/guide/architecting-an-application/shapes) which is limited to basic rectangles, circles, and lines, the SVG asset supports complex paths, polygons, and custom shapes defined using standard [SVG 2 markup](https://www.w3.org/TR/SVG2/). 
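As a minimal sketch of what an inline `src` string looks like with markup filled in (the circle shape and colors here are illustrative, not taken from the rendered examples below):

```json
{
  "asset": {
    "type": "svg",
    "src": "<svg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 200 200' width='200' height='200'><circle cx='100' cy='100' r='80' fill='#e94560'/></svg>"
  },
  "start": 0,
  "length": 5
}
```

Using single quotes for the XML attributes (valid in SVG) avoids having to escape double quotes inside the JSON string.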
## Common SVG patterns Before diving into individual properties, here are solutions to common SVG needs: ### Circle with fill color Create a simple filled circle: ![SVG Circle](/img/svg/svg-circle.png) ```json { "timeline": { "background": "#1a1a2e", "tracks": [ { "clips": [ { "asset": { "type": "svg", "src": "" }, "start": 0, "length": 5 } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` ### Rectangle with stroke Create a rectangle with a colored border and fill: ![SVG rectangle with stroke](/img/svg/svg-rectangle.png) ```json { "timeline": { "background": "#000000", "tracks": [ { "clips": [ { "asset": { "type": "svg", "src": "" }, "start": 0, "length": 5 } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` ### Path-based custom shape Create a custom star shape using a path: ![SVG path-based star shape](/img/svg/svg-path.png) ```json { "timeline": { "background": "#1a1a2e", "tracks": [ { "clips": [ { "asset": { "type": "svg", "src": "" }, "start": 0, "length": 5 } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` ## Basic usage A minimal SVG asset requires the `type` set to `svg` and a `src` property containing the raw SVG markup as a string: ```json { "asset": { "type": "svg", "src": "" }, "start": 0, "length": 5 } ``` The `src` property accepts any valid SVG markup as a string. The outer `<svg>` element should include the `xmlns` attribute, a `viewBox` to define the coordinate system, and `width` and `height` attributes to set the dimensions. **SVG asset properties:** - `type` - Must be set to `"svg"` - `src` - Raw SVG markup string :::caution Limitations - **Text is not supported.** The `<text>` element cannot be used within SVG markup. Use the [rich-text asset](/docs/guide/architecting-an-application/rich-text) for text rendering. - **Animated SVGs are not supported.** SVG animation elements such as `<animate>` and `<animateTransform>` are not processed. 
Use [clip animations](/docs/guide/architecting-an-application/animations) to animate the SVG asset as a whole. ::: ## Advanced usage SVG markup can go well beyond simple shapes. You can use filters, gradients, blend modes, and other SVG features to create complex visual effects like this mesh gradient background: ![svg gradient](/img/svg/svg-gradient.png) ```json { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "svg", "src": "" }, "start": 0, "length": 5 } ] } ] }, "output": { "format": "mp4", "size": { "width": 1000, "height": 500 } } } ``` ## Related topics - [Positioning](/docs/guide/architecting-an-application/positioning) - Position and scale assets on the timeline - [Animations](/docs/guide/architecting-an-application/animations) - Animate clip properties over time --- ## Rich Captions - Markdown URL: https://shotstack.io/docs/guide/architecting-an-application/rich-captions.md - Web URL: https://shotstack.io/docs/guide/architecting-an-application/rich-captions/ - Description: Add styled captions with word-level animations and karaoke effects import CollapsibleCode from '@site/src/components/CollapsibleCode'; # Rich Captions The rich-caption asset provides advanced captioning with word-level animations, active word highlighting, and rich-text styling. It supports karaoke-style effects, bounce, pop, slide, and more — all synchronized to your audio. Use it with SRT/VTT subtitle files or auto-generate captions from audio and video clips using aliases. ## Common rich-caption patterns Before diving into individual properties, here are solutions to common captioning needs: ### Automated Styled Word-by-Word Captions Create custom-styled captions where each word is highlighted as it's spoken, auto-generated from a video clip. Captions are automatically transcribed by using the `alias` property. 
Simply set the `alias` property on the clip you wish to have transcribed, and reference that clip in the `src` property of the `rich-caption` by using the `alias://` prefix. {` { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "rich-caption", "src": "alias://scott", "font": { "family": "_gP_1RrxsjcxVyin9l9n_j2RStR3qDpraA", "size": 52, "color": "#ffffff", "opacity": 1, "weight": 700 }, "animation": { "style": "highlight" }, "border": { "width": 0, "color": "#000000", "opacity": 1, "radius": 18 }, "style": { "textTransform": "uppercase" }, "padding": { "top": 0, "right": 0, "bottom": 0, "left": 0 }, "stroke": { "width": 3, "color": "#000000", "opacity": 1 }, "active": { "stroke": { "width": 3, "color": "#000000", "opacity": 1 }, "font": { "background": "#690be9" } } }, "start": 0, "length": "end", "width": 522, "height": 187, "offset": { "x": 0, "y": 0 }, "transform": { "rotate": { "angle": -7.5 } } } ] }, { "clips": [ { "asset": { "type": "video", "src": "https://shotstack-assets.s3.amazonaws.com/footage/scott-ko.mp4" }, "start": 0, "length": "auto", "alias": "scott" } ] } ], "fonts": [ { "src": "https://fonts.gstatic.com/s/luckiestguy/v25/_gP_1RrxsjcxVyin9l9n_j2RStR3qDpraA.ttf" } ] }, "output": { "size": { "width": 1024, "height": 576 }, "format": "mp4" } } `} ### Simple SRT/VTT Captions Create captions by referencing the URL of an SRT or VTT file: {` { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "rich-caption", "src": "https://shotstack-assets.s3.amazonaws.com/captions/transcript.srt", "font": { "family": "Roboto", "size": 28, "color": "#ffffff" }, "align": { "vertical": "bottom" }, "stroke": { "width": 2 } }, "start": 0, "length": "end" } ] }, { "clips": [ { "asset": { "type": "video", "src": "https://shotstack-assets.s3.amazonaws.com/footage/scott-ko.mp4" }, "start": 0, "length": "auto" } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } `} ## Basic usage ### Using an SRT or 
VTT file The simplest rich-caption uses an external subtitle file: ```json { "asset": { "type": "rich-caption", "src": "https://shotstack-assets.s3.amazonaws.com/captions/transcript.srt" }, "start": 0, "length": "end" } ``` The `src` property accepts a URL to a publicly accessible SRT or VTT file. The captions are timed and displayed automatically based on the subtitle file's timestamps. ### Auto-generated captions You can automatically generate captions from audio or video clips using [aliases](/docs/guide/architecting-an-application/aliases). Assign an alias to your media clip, then reference it with the `alias://` prefix. Shotstack automatically detects the language and generates word-level captions from the audio. We support [99 languages](https://docs.google.com/spreadsheets/d/1Fwc8lH5Vyxv_5AblFagM2dgyeAX7j-yvif0t1j41Eu0/edit?usp=sharing). {` { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "rich-caption", "src": "alias://speech" }, "start": 0, "length": "end" } ] }, { "clips": [ { "alias": "speech", "asset": { "type": "video", "src": "https://shotstack-assets.s3.amazonaws.com/footage/scott-ko.mp4" }, "start": 0, "length": "auto" } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } `} :::info Auto-captioning works with video, audio, and text-to-speech clips. See [Aliases](/docs/guide/architecting-an-application/aliases) for more details on declaring and referencing aliases. ::: ## Fonts Rich captions use the same font system as [Rich Text](/docs/guide/architecting-an-application/rich-text#fonts), including the same [built-in fonts](/docs/guide/architecting-an-application/rich-text#available-fonts), [custom font loading](/docs/guide/architecting-an-application/rich-text#custom-fonts), and [international font support](/docs/guide/architecting-an-application/rich-text#international-fonts). 
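For example, a custom font declared in the timeline's `fonts` array can be applied to a rich-caption the same way as to rich-text. This sketch reuses the font URL, family naming pattern, and auto-caption alias pattern from the examples above:

```json
{
  "timeline": {
    "fonts": [
      { "src": "https://fonts.gstatic.com/s/luckiestguy/v25/_gP_1RrxsjcxVyin9l9n_j2RStR3qDpraA.ttf" }
    ],
    "tracks": [
      {
        "clips": [
          {
            "asset": {
              "type": "rich-caption",
              "src": "alias://speech",
              "font": { "family": "_gP_1RrxsjcxVyin9l9n_j2RStR3qDpraA", "size": 48, "color": "#ffffff" }
            },
            "start": 0,
            "length": "end"
          }
        ]
      },
      {
        "clips": [
          {
            "alias": "speech",
            "asset": { "type": "video", "src": "https://shotstack-assets.s3.amazonaws.com/footage/scott-ko.mp4" },
            "start": 0,
            "length": "auto"
          }
        ]
      }
    ]
  },
  "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } }
}
```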
## Active word styling **What are active words?** The `active` property isolates the exact word currently being spoken in the audio and applies distinct styling to it (such as a color change, size increase, or background highlight). It temporarily overrides the base font style to create dynamic, real-time text tracking.
The `active` property supports `font`, `stroke` and `shadow`. The `font`, `stroke`, and `shadow` properties work the same way as the base properties — only the values you set override the base styling, everything else is inherited.
{` { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "rich-caption", "src": "alias://source_310e0028", "font": { "family": "V8mDoQDjQSkFtoMM3T6r8E7mDbZyCts0DqQ", "size": 152, "color": "#ffffff", "weight": 700 }, "align": { "vertical": "middle" }, "stroke": { "width": 2, "color": "#000000", "opacity": 1 }, "animation": { "style": "slide" }, "active": { "font": { "color": "#efbf04", "opacity": 1 } } }, "start": 0, "length": "end" } ] }, { "clips": [ { "asset": { "type": "audio", "src": "https://shotstack-video-hosting.s3.amazonaws.com/documentation/rich-captions/the-great-dictator-trimmed.m4a", "effect": "fadeOut" }, "start": 0, "length": "auto", "alias": "source_310e0028" } ] } ], "fonts": [ { "src": "https://fonts.gstatic.com/s/spacegrotesk/v22/V8mDoQDjQSkFtoMM3T6r8E7mDbZyCts0DqQ.ttf" } ] }, "output": { "format": "mp4", "size": { "width": 1080, "height": 1920 } } } `} Here the base styling sets white text with a black stroke. The `active` override changes only the color to gold (`#efbf04`) — the font family, size, weight, and stroke are all inherited from the base. ## Animations The `animation` property controls how words are highlighted or revealed as they're spoken. Choose from 8 animation styles.
### karaoke Word-by-word color fill as spoken. All words are visible from the start; the active word changes to the active color. ```json "animation": { "style": "karaoke" } ```
### highlight Similar to karaoke — the active word changes to the active color when spoken, but with a more immediate color transition. ```json "animation": { "style": "highlight" } ```
### pop Each word scales up when active, creating an energetic, punchy effect. ```json "animation": { "style": "pop" } ```
### fade Gradual opacity transition per word. Words fade in as they become active. ```json "animation": { "style": "fade" } ```
### slide Words slide in from a direction. Use the `direction` property to control where words slide from. ```json "animation": { "style": "slide", "direction": "up" } ``` **Direction options:** `left`, `right`, `up` (default), `down`
### bounce Spring animation on word appearance. Words bounce into place with a natural spring effect. ```json "animation": { "style": "bounce" } ```
### typewriter Words appear one by one and stay visible. Each word is revealed in sequence as it's spoken, building up the full caption. ```json "animation": { "style": "typewriter" } ```
### none No animation. All words are visible immediately with no highlighting or transitions. ```json "animation": { "style": "none" } ```
## Migrating from legacy captions The rich-caption asset is the successor to the [caption](/docs/guide/architecting-an-application/captions) asset. Your existing SRT/VTT files and `alias://` references work identically — change `"type": "caption"` to `"type": "rich-caption"` and you're ready to use the new styling options. ## Related topics - [Rich Text](/docs/guide/architecting-an-application/rich-text) - Static text with rich styling - [Aliases](/docs/guide/architecting-an-application/aliases) - Auto-generate captions from audio/video clips - [Positioning](/docs/guide/architecting-an-application/positioning) - Position and scale caption containers --- ## Aliases - Markdown URL: https://shotstack.io/docs/guide/architecting-an-application/aliases.md - Web URL: https://shotstack.io/docs/guide/architecting-an-application/aliases/ - Description: Reference clips by name to synchronize timing and enable auto-captioning # Aliases Aliases let you reference one clip from another using a simple naming system. Instead of manually copying timing values between clips, you can define an alias once and reference it wherever needed. This keeps clips synchronized and simplifies template maintenance. ## Common alias patterns Before diving into how aliases work, here are solutions to common needs: ### Synchronized overlay Keep a text overlay perfectly aligned with a video clip, even when the video timing changes: ```json { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "rich-text", "text": "Breaking News" }, "start": "alias://main-video", "length": "alias://main-video" } ] }, { "clips": [ { "alias": "main-video", "asset": { "type": "video", "src": "https://shotstack-assets.s3.amazonaws.com/footage/beach.mp4" }, "start": 0, "length": "auto" } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` The text overlay automatically inherits both `start` and `length` from the video. 
If the video source changes and has a different duration, the overlay stays synchronized without manual updates. ### Auto-generated captions Generate captions automatically from a video's audio track: ```json { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "caption", "src": "alias://speech" }, "start": 0, "length": "end" } ] }, { "clips": [ { "alias": "speech", "asset": { "type": "video", "src": "https://github.com/shotstack/test-media/raw/main/captioning/scott-ko.mp4" }, "start": 0, "length": "auto" } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` Shotstack transcribes the audio from the aliased clip and generates captions automatically. ## Declaring an alias Add the `alias` property to any clip to give it a referenceable name: ```json { "alias": "background", "asset": { "type": "video", "src": "https://shotstack-assets.s3.amazonaws.com/footage/beach.mp4" }, "start": 0, "length": 10 } ``` **Alias naming rules:** - Use letters, numbers, hyphens, and underscores - Must be unique within your timeline - Examples: `main-video`, `intro_clip`, `speech1` ## Referencing an alias Reference an alias using the `alias://` prefix followed by the alias name: ```text alias://name ``` You can use alias references in these properties: | Property | Description | |----------|-------------| | `start` | Inherit the start time from the referenced clip | | `length` | Inherit the length from the referenced clip | | `src` (caption asset) | Generate captions from the referenced clip's audio | ### Referencing start Use `alias://name` in the `start` property to synchronize when a clip begins: ```json { "asset": { "type": "image", "src": "https://shotstack-assets.s3.amazonaws.com/logos/logo.png" }, "start": "alias://main-video", "length": 3 } ``` The image appears at the exact moment the `main-video` clip starts. 
### Referencing length Use `alias://name` in the `length` property to match another clip's duration: ```json { "asset": { "type": "image", "src": "https://shotstack-assets.s3.amazonaws.com/images/overlay.png" }, "start": 0, "length": "alias://background" } ``` The image stays visible for exactly as long as the `background` clip plays. ### Referencing both Combine both references to perfectly synchronize two clips: ```json { "asset": { "type": "html", "html": "<p>PREVIEW</p>" }, "start": "alias://main-video", "length": "alias://main-video" } ``` The watermark appears and disappears with the main video, regardless of when or how long it plays. ### Referencing for captions Use `alias://name` in a caption asset's `src` property to auto-generate captions: ```json { "asset": { "type": "caption", "src": "alias://narrator" }, "start": 0, "length": "end" } ``` Shotstack extracts audio from the `narrator` clip, transcribes it, and generates timed captions automatically. :::info Auto-captioning works with `audio`, `video`, and `text-to-speech` clips. The language is detected automatically. ::: ## Working with auto and end Alias references resolve after `auto` and `end` values are calculated. This means you can reference clips that use these keywords: ```json { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "rich-text", "text": "Overlay" }, "start": "alias://video", "length": "alias://video" } ] }, { "clips": [ { "alias": "video", "asset": { "type": "video", "src": "https://shotstack-assets.s3.amazonaws.com/footage/beach.mp4" }, "start": 0, "length": "auto" } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` The video's `auto` length is resolved first (based on its actual duration), then the text overlay inherits that calculated value. 
## Dependency chains Aliases can reference other clips that also use alias references, creating a dependency chain: ```json { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "rich-text", "text": "Title" }, "start": "alias://overlay", "length": "alias://overlay" } ] }, { "clips": [ { "alias": "overlay", "asset": { "type": "image", "src": "https://shotstack-assets.s3.amazonaws.com/images/woods1.jpg" }, "start": "alias://base", "length": "alias://base", "scale": 0.25 } ] }, { "clips": [ { "alias": "base", "asset": { "type": "video", "src": "https://shotstack-assets.s3.amazonaws.com/footage/beach.mp4" }, "start": 0, "length": 10 } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` Shotstack resolves these in the correct order: `base` first, then `overlay`, then the text clip. :::warning Circular references are not allowed. If clip A references clip B and clip B references clip A, the render will fail with an error. ::: ## Related topics - [Captions](/docs/guide/architecting-an-application/captions) - Style and customize auto-generated captions - [Merging data](/docs/guide/architecting-an-application/merging-data) - Use merge fields for dynamic templates --- ## Smart Clips - Markdown URL: https://shotstack.io/docs/guide/architecting-an-application/smart-clips.md - Web URL: https://shotstack.io/docs/guide/architecting-an-application/smart-clips/ - Description: Automatic timing of clips # Smart Clips Smart Clips are designed to simplify video editing workflows by automating the timing settings of your clips, removing the need to manually set the `start` and `length` properties. This is particularly useful when handling a large number of clips or when precise timing is difficult to determine in advance. Smart clips provide you with options to automatically set `start` and `length` properties based on the assets you are using. 
### Automated timing By setting the `start` or `length` of a clip to `auto`, Shotstack automatically calculates and assigns the appropriate values based on the asset's duration and its position within the track. This example will automatically set the `start` and `length` properties: ```json { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "video", "src": "https://s3-ap-southeast-2.amazonaws.com/my-bucket/footage.mp4" }, "start": "auto", "length": "auto" } ] } ] } } ``` For multiple clips, the `auto` setting stitches clips sequentially according to their order in the track, maintaining their full length: ```json { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "video", "src": "https://s3-ap-southeast-2.amazonaws.com/my-bucket/footage.mp4" }, "start": "auto", "length": "auto" }, { "asset": { "type": "video", "src": "https://s3-ap-southeast-2.amazonaws.com/my-bucket/footage2.mp4" }, "start": "auto", "length": "auto" } ] } ] } } ``` :::danger Warning When using the `auto` setting for image, title or text assets (or luma assets used in conjunction with these assets), the length will be automatically set to 3 seconds. ::: ### Persistent timing To ensure a clip plays through to the end of your video timeline, set its length property to **end**. This configuration is useful for elements like watermarks or soundtracks that need to cover the entire video. If the clip's duration is shorter than the timeline's total duration, it will stop when the clip ends. 
To add a watermark that appears throughout the entire duration of the video, you can configure the clip as follows: ```json { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "image", "src": "https://shotstack-assets.s3.amazonaws.com/assets/logo.png" }, "start": "auto", "length": "end" } ] }, { "clips": [ { "asset": { "src": "https://s3-ap-southeast-2.amazonaws.com/my-bucket/footage.mp4", "type": "video" }, "start": "auto", "length": "auto" }, { "asset": { "src": "https://s3-ap-southeast-2.amazonaws.com/my-bucket/footage2.mp4", "type": "video" }, "start": "auto", "length": "auto" } ] } ] } } ``` --- ## Merging Data - Markdown URL: https://shotstack.io/docs/guide/architecting-an-application/merging-data.md - Web URL: https://shotstack.io/docs/guide/architecting-an-application/merging-data/ - Description: How to merge data and build templates with merge fields # Merging Data Shotstack makes it easy to build reusable templates with **merge fields**. Similar to what you would expect with a mail merge application or template language. Shotstack edits are described using JSON to create a template. Within the JSON you can use placeholders with double braces, like this: `{{ FIRST_NAME }}`. The `merge` parameter provides an array of key/value pairs that can be used to find and replace the placeholder value. ### Setting up the template First of all, prepare the template with a placeholder. The JSON edit below will generate a 5 second video with a text title. A placeholder will be replaced when the video is rendered: ```json { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "text", "text": "Hello {{ NAME }}", "font": { "size": 32, "align": "center", "font": "Montserrat ExtraBold", "color": "#FFFFFF" }, "alignment": { "horizontal": "center" } }, "start": 0, "length": 5 } ] } ] }, "output": { "format": "mp4", "resolution": "hd" } } ``` In the JSON above we have added a placeholder `{{ NAME }}` to the title `text` parameter. 
Our JSON is now ready to be used as a template. :::info Tip Whitespace between the braces and text is ignored. The placeholder text is case sensitive and we recommend all upper case, snake case variable names, using underscores (`_`) instead of spaces. At this time the `{{` delimiter cannot be changed. ::: ### Setting up the merge field To merge data, use the `merge` parameter with an array of key/value pairs, like this: ```json "merge": [ { "find": "NAME", "replace": "World" } ] ``` :::info Tip Note that the `find` parameter value does not include the braces delimiter. ::: Before posting the JSON to the API, add the merge array so that the final JSON body looks like: ```json { "merge": [ { "find": "NAME", "replace": "World" } ], "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "text", "text": "Hello {{ NAME }}", "font": { "size": 32, "align": "center", "font": "Montserrat ExtraBold", "color": "#FFFFFF" }, "alignment": { "horizontal": "center" } }, "start": 0, "length": 5 } ] } ] }, "output": { "format": "mp4", "resolution": "hd" } } ``` ### Checking the merge status If the render fails or you want to check that the merge was successful, you can pass the query string parameters `?data=true&merged=true` in the status request. The status request would be similar to: ```bash GET https://api.shotstack.io/{{ENV}}/render/{{RENDER_ID}}?data=true&merged=true ``` Where `{{ENV}}` is the environment you are using, for example `stage` or `v1`. The `data` parameter returns the edit JSON in the response, and `merged` merges the data with the edit JSON. ### Using multiple placeholders and merge fields You can use multiple placeholders and merge fields for the following scenarios: - Multiple placeholders with the same variable, which will be replaced by a single merge field. - Multiple placeholders with different variables, which will be replaced by multiple merge fields.
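Although the API performs the merge server-side at render time, it can be handy to preview the result locally. The sketch below is a hypothetical helper, not an official SDK function; it mimics the documented behaviour, where whitespace inside the braces is ignored and the `find` value excludes the delimiters:

```python
import re

def merge_placeholders(template_json: str, merge: list) -> str:
    """Replace {{ KEY }} placeholders, ignoring whitespace inside the braces."""
    result = template_json
    for field in merge:
        pattern = r"\{\{\s*" + re.escape(field["find"]) + r"\s*\}\}"
        result = re.sub(pattern, str(field["replace"]), result)
    return result

template = '{"text": "Hello {{ NAME }}"}'
print(merge_placeholders(template, [{"find": "NAME", "replace": "World"}]))
# → {"text": "Hello World"}
```

Both `{{NAME}}` and `{{ NAME }}` would be replaced, matching the whitespace rule noted in the tip above.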
--- ## Animations - Markdown URL: https://shotstack.io/docs/guide/architecting-an-application/animations.md - Web URL: https://shotstack.io/docs/guide/architecting-an-application/animations/ - Description: How to add animations to your video # Animations Shotstack provides a comprehensive system for creating smooth transitions (known as tweening) between keyframes in your video clips. ### Tweening with Shotstack Shotstack's tweening framework allows developers to create animations by specifying how properties change over time. The core of this system is the use of arrays to define animation keyframes. Instead of using a single start and end point, developers can create complex animations by defining multiple keyframe segments for a given property. Each segment specifies a `from` value (the initial state), a `to` value (the final state), a `start` time (when the animation begins), a `length` (how long the animation lasts), and an `interpolation` method (how the values transition from start to end). This approach enables the creation of nuanced, multi-stage animations for properties such as position and scale, all within a straightforward, array-based structure. We currently support tweening for the following properties: * Opacity: Animate the transparency of a clip. * Offset: Control the x and y position of a clip. * Rotation: Animate the rotation of a clip. * Skew: Control the horizontal and vertical shearing effect. * Volume: Animate the audio volume of a clip. ### Animating a single property You can animate movement by adding an array with tweening properties to the property you wish to animate.
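The keyframe arithmetic described above can be sketched as follows, assuming the default `linear` interpolation. `tween_value` is an illustrative helper, not part of the API; each segment mirrors the `from`/`to`/`start`/`length` shape used in the JSON examples:

```python
def tween_value(segments, t):
    """Linearly interpolate a property value at time t from keyframe segments."""
    for seg in segments:
        end = seg["start"] + seg["length"]
        if seg["start"] <= t <= end:
            progress = (t - seg["start"]) / seg["length"]
            return seg["from"] + (seg["to"] - seg["from"]) * progress
    return None  # t falls outside every segment

# An x offset moving from -1 to 0 over the first 2.5 seconds:
x_offset = [{"from": -1, "to": 0, "start": 0, "length": 2.5}]
print(tween_value(x_offset, 1.25))  # → -0.5 (halfway through the move)
```

Multi-stage animations are just additional segments in the array, each evaluated over its own `start`/`length` window.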
#### Animate position ```json { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "image", "src": "https://shotstack-assets.s3.amazonaws.com/logos/shotstack/small/icon.png" }, "start": 0, "length": 5, "fit": "none", "offset": { "x": [ { "from": -1, "to": 0, "start": 0, "length": 2.5 } ], "y": [ { "from": 0, "to": 1, "start": 2.5, "length": 2.5 } ] } } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` ### Animate multiple properties You can add multiple tweening combinations to create complex animations. ```json { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "image", "src": "https://shotstack-assets.s3.amazonaws.com/logos/shotstack/small/icon.png" }, "start": 0, "length": 5, "fit": "none", "opacity": [ { "from": 0, "to": 1, "start": 0, "length": 5 } ], "offset": { "x": [ { "from": -1, "to": 0, "start": 0, "length": 2.5 } ], "y": [ { "from": 0, "to": 1, "start": 2.5, "length": 2.5 } ] }, "transform": { "rotate": { "angle": [ { "from": 0, "to": 360, "start": 0, "length": 2.5 }, { "from": 360, "to": 0, "start": 2.5, "length": 2.5 } ] } } } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` ### Interpolation methods The default interpolation method is `linear`, producing a constant rate of change. For more complex effects, you can use a `bezier` curve with easings. 
```json { "output": { "format": "mp4", "fps": 60, "size": { "width": 1280, "height": 720 } }, "timeline": { "tracks": [ { "clips": [ { "start": 0, "length": 5, "fit": "none", "asset": { "type": "image", "src": "https://shotstack-assets.s3.amazonaws.com/logos/shotstack/small/icon.png" }, "offset": { "x": [ { "start": 0, "length": 2.5, "from": -1, "interpolation": "bezier", "to": 0, "easing": "easeInOutQuart" } ], "y": [ { "start": 2.5, "length": 2.5, "from": 0, "interpolation": "bezier", "to": 1, "easing": "easeInOutQuart" } ] } } ] } ] } } ``` #### Easings The easings available are: | Easing Type | Description | |--------------------|-----------------------------------------------------------------------------| | `ease` | Smooth transition with a gradual acceleration and deceleration. | | `easeIn` | Starts slowly, then speeds up. | | `easeOut` | Starts quickly, then slows down. | | `easeInOut` | Starts and ends slowly, with a faster middle section. | | `easeInQuad` | Speeds up at a steady rate from the start. | | `easeOutQuad` | Slows down steadily towards the end. | | `easeInOutQuad` | Gradual speed up, then gradual slow down. | | `easeInCubic` | Accelerates faster than `easeInQuad`, for a stronger start. | | `easeOutCubic` | Decelerates faster than `easeOutQuad`, for a stronger finish. | | `easeInOutCubic` | Strong acceleration at the start, smooth middle, strong deceleration at the end. | | `easeInQuart` | Even stronger acceleration at the start, for a very quick takeoff. | | `easeOutQuart` | Even stronger deceleration at the end, for a very smooth stop. | | `easeInOutQuart` | Quick start, steady middle, smooth stop. | | `easeInQuint` | Extremely fast start, used for dramatic effects. | | `easeOutQuint` | Extremely smooth stop, used for dramatic effects. | | `easeInOutQuint` | Very fast start and very smooth stop. | | `easeInSine` | Gentle acceleration, like easing into a swing. | | `easeOutSine` | Gentle deceleration, like easing out of a swing. 
| | `easeInOutSine` | Gentle start and end, like a smooth pendulum swing. | | `easeInExpo` | Starts slowly but speeds up dramatically, like an explosion. | | `easeOutExpo` | Starts fast then slows down dramatically, like something settling. | | `easeInOutExpo` | Starts and ends dramatically, for a very dynamic effect. | | `easeInCirc` | Starts slow and curves quickly into motion, like rolling a ball. | | `easeOutCirc` | Starts quickly and slows down in a circular motion, like a ball stopping. | | `easeInOutCirc` | Curves into motion, speeds up, then curves to a stop. | | `easeInBack` | Starts by pulling back slightly before speeding up. | | `easeOutBack` | Overshoots the end slightly, then settles back. | | `easeInOutBack` | Pulls back at the start, speeds up, overshoots the end, then settles back. | --- ## Masking with Luma Mattes - Markdown URL: https://shotstack.io/docs/guide/architecting-an-application/masks-luma-mattes.md - Web URL: https://shotstack.io/docs/guide/architecting-an-application/masks-luma-mattes/ - Description: How to use luma mattes to mask clips # Masking with Luma Mattes Luma mattes are used to create masks by leveraging the brightness (luminance) values of an image or video. The white areas of the matte will retain full visibility, while the black areas will be fully transparent. Grey values represent partial transparency, allowing for smooth transitions and blending. ### Static Luma Mattes Static luma mattes use a black-and-white image to define the transparent and visible areas of another clip. The white areas of the image remain fully visible, while the black areas will mask out the corresponding parts of the video or image. 
The following image is a static circular luma matte: *(Circle Mask image)* To apply this matte, reference the image as a `luma` asset in the same track as the video or image it will mask: ```json { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "luma", "src": "https://shotstack-assets.s3-ap-southeast-2.amazonaws.com/luma-mattes/static/circle-sd.jpg" }, "start": 0, "length": 10 }, { "asset": { "type": "video", "src": "https://s3-ap-southeast-2.amazonaws.com/shotstack-assets/footage/cat.hd.mp4" }, "start": 0, "length": 10 } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` In this configuration, the `luma` asset applies a circular matte to the cat video clip in the same track. The white areas of the luma image keep the video visible, while the black areas mask it out, effectively cropping the video to a circle. ### Animated Luma Mattes Instead of using a static black-and-white image, you can use a video with black-and-white animations as a luma matte. This allows for dynamic transitions and unique effects between clips and tracks. To apply an animated luma matte, reference the animated video as a `luma` asset in the same track as the video or image it will mask: ```json { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "luma", "src": "https://templates.shotstack.io/basic/asset/video/luma/double-arrow/double-arrow-down.mp4" }, "start": 3, "length": 2 }, { "start": 0, "length": 4, "asset": { "type": "video", "src": "https://shotstack-assets.s3-ap-southeast-2.amazonaws.com/footage/table-mountain.mp4" } } ] }, { "clips": [ { "asset": { "type": "video", "src": "https://shotstack-assets.s3-ap-southeast-2.amazonaws.com/footage/road.mp4" }, "start": 3, "length": 4 } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` In this setup, the `luma` asset applies an animated matte to the `video` clip.
The varying shades of black, white, and grey in the `luma` video control the transparency levels, revealing portions of the underlying video. This creates a smooth, dynamic transition between two clips by masking one clip and gradually revealing the other. --- ## Chromakey - Markdown URL: https://shotstack.io/docs/guide/architecting-an-application/chromakey.md - Web URL: https://shotstack.io/docs/guide/architecting-an-application/chromakey/ - Description: Adding a chromakey to your video # Chromakey Chroma key, commonly known as green screen, is a technique that replaces a specific color in a video with a different background image or video, enabling seamless integration of diverse environments. Shotstack allows you to set chroma key values programmatically, which helps automate background replacement and speeds up your video production process. ### Using chroma key You can apply chroma key to your video assets by adding the `chromaKey` property to the asset. You can specify the `color` you want to replace and adjust the `threshold` and `halo` settings to ensure effective background replacement. - `color`: The chroma key color as a hex value. For a green screen, use a green hex value. - `threshold`: Pixels within this distance from the key color are eliminated by setting their alpha values to zero. - `halo`: Pixels within the halo distance from the threshold boundary are given an increasing alpha value based on their distance from the threshold.
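As a rough mental model of the `threshold` and `halo` parameters (this is a conceptual sketch, not Shotstack's actual implementation), a pixel's alpha can be thought of as a function of its color distance from the key color:

```python
def chroma_alpha(distance, threshold, halo):
    """Conceptual alpha for a pixel at a given color distance from the key color."""
    if distance <= threshold:
        return 0.0  # inside the threshold: fully transparent (keyed out)
    if distance <= threshold + halo:
        return (distance - threshold) / halo  # halo zone: alpha ramps up
    return 1.0  # far from the key color: fully opaque

print(chroma_alpha(100, 150, 100))  # → 0.0 (keyed out)
print(chroma_alpha(200, 150, 100))  # → 0.5 (halo edge softening)
print(chroma_alpha(300, 150, 100))  # → 1.0 (kept)
```

The halo ramp is what softens the edge between keyed-out background and retained foreground.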
```json { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "video", "src": "https://shotstack-assets.s3.amazonaws.com/footage/avatar-chromakey.mp4", "chromaKey": { "color": "#24d590", "threshold": 150, "halo": 100 } }, "length": 7.2, "start": 0 } ] }, { "clips": [ { "asset": { "type": "image", "src": "https://shotstack-assets.s3.amazonaws.com/images/waterfall-square.jpg" }, "length": "end", "start": 0 } ] } ] }, "output": { "format": "mp4", "size": { "width": 576, "height": 576 } } } ``` --- ## Inspecting Assets - Markdown URL: https://shotstack.io/docs/guide/architecting-an-application/inspect-assets.md - Web URL: https://shotstack.io/docs/guide/architecting-an-application/inspect-assets/ - Description: Inspect and view metadata for a media file using the probe endpoint # Inspecting Assets When working with user generated content or media from third party services, you may not have details about the asset, like width, height, duration, or framerate. The Shotstack API comes with a **probe** endpoint that can be called to retrieve metadata for any asset hosted online. This endpoint is powered by [FFmpeg](https://ffmpeg.org/), the industry-standard multimedia framework. ### Calling the probe endpoint To call the **probe** endpoint you send a GET request to the following URL, with the URL of the asset to be inspected. ```bash GET https://api.shotstack.io/{{ENV}}/probe/{{ASSET_URL}} ``` The `ASSET_URL` is the file to inspect and must be URL encoded, and `{{ENV}}` is the environment you are working in, for example `stage` or `v1`.
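Since the asset URL must be URL encoded, the probe URL can be built with a standard library encoder, for example in Python:

```python
from urllib.parse import quote

asset_url = "https://github.com/shotstack/test-media/raw/main/captioning/scott-ko.mp4"

# safe="" forces ":" and "/" to be percent-encoded as well
probe_url = "https://api.shotstack.io/v1/probe/" + quote(asset_url, safe="")
print(probe_url)
```

This produces the same encoded URL shown in the example below.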
An example URL is: https://api.shotstack.io/v1/probe/https%3A%2F%2Fgithub.com%2Fshotstack%2Ftest-media%2Fraw%2Fmain%2Fcaptioning%2Fscott-ko.mp4 ### Reading the probe response The URL above returns the following payload: ```json { "success": true, "message": "ok", "response": { "metadata": { "streams": [ { "index": 0, "codec_name": "h264", "codec_long_name": "H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10", "profile": "Main", "codec_type": "video", "codec_time_base": "1001/48000", "codec_tag_string": "avc1", "codec_tag": "0x31637661", "width": 1920, "height": 1080, "coded_width": 1920, "coded_height": 1088, "closed_captions": 0, "has_b_frames": 1, "pix_fmt": "yuv420p", "level": 41, "color_range": "tv", "color_space": "bt709", "color_transfer": "bt709", "color_primaries": "bt709", "chroma_location": "left", "refs": 1, "is_avc": "true", "nal_length_size": "4", "r_frame_rate": "24000/1001", "avg_frame_rate": "24000/1001", "time_base": "1/48000", "start_pts": 0, "start_time": "0.000000", "duration_ts": 1241240, "duration": "25.859167", "bit_rate": "10044458", "bits_per_raw_sample": "8", "nb_frames": "620", "disposition": { "default": 1, "dub": 0, "original": 0, "comment": 0, "lyrics": 0, "karaoke": 0, "forced": 0, "hearing_impaired": 0, "visual_impaired": 0, "clean_effects": 0, "attached_pic": 0, "timed_thumbnails": 0 }, "tags": { "language": "eng", "handler_name": "VideoHandler" } }, { "index": 1, "codec_name": "aac", "codec_long_name": "AAC (Advanced Audio Coding)", "profile": "LC", "codec_type": "audio", "codec_time_base": "1/48000", "codec_tag_string": "mp4a", "codec_tag": "0x6134706d", "sample_fmt": "fltp", "sample_rate": "48000", "channels": 2, "channel_layout": "stereo", "bits_per_sample": 0, "r_frame_rate": "0/0", "avg_frame_rate": "0/0", "time_base": "1/48000", "start_pts": 0, "start_time": "0.000000", "duration_ts": 1242096, "duration": "25.877000", "bit_rate": "317375", "max_bit_rate": "317375", "nb_frames": "1215", "disposition": { "default": 1, "dub": 0, 
"original": 0, "comment": 0, "lyrics": 0, "karaoke": 0, "forced": 0, "hearing_impaired": 0, "visual_impaired": 0, "clean_effects": 0, "attached_pic": 0, "timed_thumbnails": 0 }, "tags": { "language": "eng", "handler_name": "SoundHandler" } } ], "chapters": [], "format": { "filename": "https://github.com/shotstack/test-media/raw/main/captioning/scott-ko.mp4", "nb_streams": 2, "nb_programs": 0, "format_name": "mov,mp4,m4a,3gp,3g2,mj2", "format_long_name": "QuickTime / MOV", "start_time": "0.000000", "duration": "25.920000", "size": "33513207", "bit_rate": "10343582", "probe_score": 100, "tags": { "major_brand": "isom", "minor_version": "512", "compatible_brands": "isomiso2avc1mp41", "encoder": "Hybrik 1.215.18" } } } } } ``` The response includes a `response.metadata` object that includes `streams` and `format` metadata. The `streams` array contains information about the media file's audio and video streams. Note that the video stream is usually index 0 but not always. Your application will need to parse the response and check the `codec_type` to be sure. The `format` object includes information about the file itself such as file size, filetype, duration and bitrate. :::info Info Under the hood the probe endpoint uses [FFprobe](https://ffmpeg.org/ffprobe.html) to read the metadata of a media file. The `metadata` value is the exact same output that FFprobe returns, converted to JSON. ::: --- ## Temporary Output Files - Markdown URL: https://shotstack.io/docs/guide/architecting-an-application/temporary-files.md - Web URL: https://shotstack.io/docs/guide/architecting-an-application/temporary-files/ - Description: All rendered files expire after 24 hours. Use the Serve API to manage hosted assets or send to a destination # Temporary Output Files All files generated by the API expire after 24 hours. Files must be downloaded, transferred to your own storage or hosting solution, or sent to one of the available [destinations](../serving-assets/destinations/index.md).
After 24 hours the file at the URL provided by the API will be deleted. By default, all rendered assets are sent to the [Shotstack destination](../serving-assets/destinations/shotstack.md), a permanent hosting service with global CDN. If you have your own storage or hosting solution and do not want assets to be saved to the Shotstack destination you can [opt out](../serving-assets/self-host.md). --- ## Templates - Markdown URL: https://shotstack.io/docs/guide/architecting-an-application/templates.md - Web URL: https://shotstack.io/docs/guide/architecting-an-application/templates/ - Description: How to use templates, placeholders and merge fields # Templates Shotstack templates provide a way to save an [edit](https://shotstack.io/docs/api/#tocs_edit) that you want to re-use at a later date. A template can also be saved with placeholders using double curly braces (handlebars), like `{{ FIRST_NAME }}`. These placeholders can be replaced with real values when the video is rendered. A template can be rendered using its ID and an array of [merge fields](merging-data.md) that replace the placeholders at render time. ### Render a template with merge fields Once you have saved a template, you can render it at any time in the future. While it is possible to render the template by ID only, it is more likely you will want to replace placeholders using merge fields.
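The request body for the template render call shown next can be assembled programmatically. `template_render_body` below is a hypothetical helper, not part of any Shotstack SDK; it builds the `id` plus `merge` payload from a dict of placeholder values:

```python
import json

def template_render_body(template_id, values):
    """Build the JSON body for a template render request from placeholder values."""
    return json.dumps({
        "id": template_id,
        "merge": [{"find": key, "replace": value} for key, value in values.items()],
    })

body = template_render_body("ca84c7c5-5fe4-66d3-839e-ee0f1d4c2641", {"DURATION": 10})
print(body)
```

The resulting string is what you would POST to the render-template endpoint.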
To render a template call the following endpoint: ```bash POST https://api.shotstack.io/{{ENV}}/templates/render ``` The following cURL request will replace the template's placeholders with the provided merge fields and render a video: ```bash curl -X POST \ -H "Content-Type: application/json" \ -H "x-api-key: ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf" \ -d ' { "id": "ca84c7c5-5fe4-66d3-839e-ee0f1d4c2641", "merge": [ { "find": "URL", "replace": "https://shotstack-assets.s3-ap-southeast-2.amazonaws.com/footage/city-timelapse.mp4" }, { "find": "TRIM", "replace": 3 }, { "find": "DURATION", "replace": 10 } ] }' \ https://api.shotstack.io/stage/templates/render ``` The following response will be returned including the render task id; `response.id`: ```json { "success": true, "message": "Created", "response": { "message": "Render Successfully Queued", "id": "b3a3034f-8827-9e33-a6a8-ba745d36fc1" } } ``` The video is now queued for rendering and can be polled like any other render request using the [GET render endpoint](../getting-started/hello-world-using-curl.md#poll-the-api-to-check-the-render-status) or using [webhooks](../architecting-an-application/webhooks.md). ### Create a template To create a template you POST an edit to the **templates** endpoint: ```bash POST https://api.shotstack.io/{{ENV}}/templates/ ``` Where `{{ENV}}` is the environment you are working with, for example `stage` or `v1`. The following cURL request will create a template in the **stage** environment. The video `src` and `length` use placeholder values `{{ URL }}` and `{{ DURATION }}`. The placeholders can be replaced when rendering using merge fields.
```bash curl -X POST \ -H "Content-Type: application/json" \ -H "x-api-key: ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf" \ -d ' { "name": "Basic test template", "template": { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "video", "src": "{{ URL }}" }, "start": 0, "length": "{{ DURATION }}" } ] } ] }, "output": { "format": "mp4", "resolution": "sd" } } }' \ https://api.shotstack.io/stage/templates/ ``` The POST request should return a JSON payload that includes the template ID; `response.id`: ```json { "success": true, "message": "Created", "response": { "message": "Template Successfully Created", "id": "ca84c7c5-5fe4-66d3-839e-ee0f1d4c2641" } } ``` ### Retrieve a template You can retrieve a template using the following endpoint: ```bash GET https://api.shotstack.io/{{ENV}}/templates/{{ID}} ``` The following cURL request retrieves the template we just created: ```bash curl -X GET \ -H "Content-Type: application/json" \ -H "x-api-key: ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf" \ https://api.shotstack.io/stage/templates/ca84c7c5-5fe4-66d3-839e-ee0f1d4c2641 ``` The response should look like: ```json { "success": true, "message": "OK", "response": { "id": "ca84c7c5-5fe4-66d3-839e-ee0f1d4c2641", "name": "Basic test template", "owner": "5ca6hu7s9k", "template": { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "video", "src": "{{ URL }}" }, "start": 0, "length": "{{ DURATION }}" } ] } ] }, "output": { "format": "mp4", "resolution": "sd" } } } } ``` ### Retrieve a list of all templates You can retrieve a list of templates using the following endpoint: ```bash GET https://api.shotstack.io/{{ENV}}/templates/ ``` The following cURL request returns all templates created in the **stage** environment: ```bash curl -X GET \ -H "Content-Type: application/json" \ -H "x-api-key: ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf" \ https://api.shotstack.io/stage/templates ``` The response should include a list of all templates: ```json { "success": true, "message":
"OK", "response": { "owner": "5ca6hu7s9k", "templates": [ { "id": "ca84c7c5-5fe4-66d3-839e-ee0f1d4c2641", "name": "Basic test template", "created": "2022-12-20T12:24:44.650Z", "updated": "2022-12-20T12:24:44.451Z" }, { "id": "17eff444b-b0f4-8a91-8fcc-7423ef711060", "name": "Another template", "created": "2022-12-19T22:37:39.287Z", "updated": "2022-12-20T03:57:59.827Z" }, { "id": "e63874c58-3a62-3c98-8942-2bb70cad3ba9", "name": "Hello world template", "created": "2022-10-03T07:10:46.125Z", "updated": "2022-12-20T03:57:25.865Z" } ] } } ``` ### Update an existing template To update and overwrite an existing template use the following endpoint: ```bash PUT https://api.shotstack.io/{{ENV}}/templates/{{ID}} ``` The following cURL request updates the template we created earlier, adding a `{{ TRIM }}` placeholder and changing the resolution to `hd`: ```bash curl -X PUT \ -H "Content-Type: application/json" \ -H "x-api-key: ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf" \ -d ' { "name": "Basic test template", "template": { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "video", "src": "{{ URL }}", "trim": "{{ TRIM }}" }, "start": 0, "length": "{{ DURATION }}" } ] } ] }, "output": { "format": "mp4", "resolution": "hd" } } }' \ https://api.shotstack.io/stage/templates/ca84c7c5-5fe4-66d3-839e-ee0f1d4c2641 ``` The response should indicate the template was updated and return the template id: ```json { "success": true, "message": "OK", "response": { "id": "ca84c7c5-5fe4-66d3-839e-ee0f1d4c2641", "message": "Template Successfully Updated" } } ``` --- ## Webhooks - Markdown URL: https://shotstack.io/docs/guide/architecting-an-application/webhooks.md - Web URL: https://shotstack.io/docs/guide/architecting-an-application/webhooks/ - Description: Receive callback POST notifications whenever the status of a render completes or fails # Webhooks A webhook is a technique used by APIs, SaaS providers and online services to send data to a user's web server or API endpoints.
It is normally used to notify a customer about a change in status or trigger an event from one system to another. The Shotstack webhook will notify your application when a render has finished successfully or failed to complete. The alternative to a webhook is polling. With polling, an application will usually retrieve data from an API endpoint at regular scheduled intervals and process the response. Polling will continue until the expected response is received. With Shotstack you might poll the render endpoint every 5 seconds until the response confirms rendering is complete or failed. ### Why you should use webhooks Because video rendering can take a relatively long time to complete, potentially several seconds or even several minutes, and because the rendering time for each video may vary in length, polling can be inefficient, requiring several unnecessary calls to our API while you wait for the rendering to complete. Also, if you are rendering multiple videos in parallel and polling for several videos at a time, you can use up your allocated request limits and requests may be throttled by the API. Instead of polling you can set up a webhook callback using a URL pointing to your server or application that will receive notifications when the render completes. This requires a single request from our API to your application instead of multiple requests from your application to our API. ### Setting up the callback via the Shotstack API Setting up a webhook callback using the Shotstack API is easy. When a render completes or fails we will POST data to your web server, in the same way that you might post data from an online form or AJAX request from a web site. Simply provide the URL of the page or endpoint in your application that will receive the POSTed data using the `callback` parameter of an edit.
Add the callback parameter as shown below: ```json { "timeline": {...}, "output": {...}, "callback": "https://my-application.com/status.php" } ``` In the example above the callback parameter will POST data to **https://my-application.com/status.php**, where **https://my-application.com/** is your website or application and **status.php** is a script that will handle the POSTed request. The `callback` parameter is also documented in our [API Reference](https://shotstack.io/docs/api/#tocsedit). ### Receiving the callback POST in your application In the example above we added a callback parameter with a value of **https://my-application.com/status.php**. Assuming your web site or application is hosted at **https://my-application.com** you would add a **status.php** script (or equivalent language) like: ```php <?php $request = json_decode(file_get_contents('php://input')); echo $request->status; ``` This PHP script will read the JSON request and output the status of the video. In a real world application you may want to confirm the status is done and then perform another action such as saving data to a database or downloading the video to your hosting provider. The data sent to the callback URL is sent as JSON encoded data and the payload looks similar to: ```json { "type": "edit", "action": "render", "id": "6d7acbe6-e7c1-4cf8-b8ea-94e7522878cc", "owner": "5ca6hu7s9k", "status": "done", "url": "https://shotstack-api-v1-output.s3-ap-southeast-2.amazonaws.com/5ca6hu7s9k/6d7acbe6-e7c1-4cf8-b8ea-94e7522878cc.mp4", "error": null, "completed": "2020-02-24T11:52:20.810Z" } ``` While the example above is in PHP you can use any language to receive the JSON and parse or decode it. If the render fails with a status of `failed` then an error reason will be provided against the `error` parameter.
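The same handler can be written in any language; below is a framework-agnostic Python sketch. The sample payload is illustrative only; real callbacks follow the JSON shape shown above:

```python
import json

def handle_callback(body: bytes) -> str:
    """Decode the webhook POST body and act on the render status."""
    payload = json.loads(body)
    if payload["status"] == "failed":
        raise RuntimeError(payload["error"] or "render failed")
    if payload["status"] == "done":
        # e.g. save payload["url"] to a database or download the file here
        return payload["url"]
    return payload["status"]

# Illustrative payload only (trimmed to the fields this sketch uses):
sample = b'{"status": "done", "url": "https://example.com/video.mp4", "error": null}'
print(handle_callback(sample))  # → https://example.com/video.mp4
```

Wire `handle_callback` into whatever web framework you use, passing it the raw request body.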
### Sending metadata You can pass through additional metadata by using query strings in your webhook URL: ```json { "timeline": {...}, "output": {...}, "callback": "https://my-application.com/status.php?var1=foo&var2=bar" } ``` When rendering from a [template](templates.md) you can use [merge fields](merging-data.md) to merge data into these parameters: ```json { "timeline": {...}, "output": {...}, "callback": "https://my-application.com/status.php?var={{PARAM}}", "merge": [ { "find": "PARAM", "replace": "foo" } ] } ``` ### Retries If the callback POST receives an error code outside the range of 200-399 or a response is not returned by your application within 10 seconds we will cancel the request and retry. An exponential back-off is used to avoid placing a burden on the server receiving the webhook callback. By default we will try to re-queue the message 10 times with a back-off exponent of 3. The retries will be as follows: | Attempt | Back-off \(sec\) | Back-off \(min\) | Cumulative \(sec\) | Cumulative \(min\) | | :------ | :--------------- | :--------------- | :----------------- | :----------------- | | 1 | - | - | 0 | 0 | | 2 | 8 | 0:08 | 8 | 0:08 | | 3 | 27 | 0:27 | 35 | 0:35 | | 4 | 64 | 1:04 | 99 | 1:39 | | 5 | 125 | 2:05 | 224 | 3:44 | | 6 | 216 | 3:36 | 440 | 7:20 | | 7 | 343 | 5:43 | 783 | 13:03 | | 8 | 512 | 8:32 | 1295 | 21:35 | | 9 | 729 | 12:09 | 2024 | 33:44 | | 10 | 900 | 15:00 | 2924 | 48:44 | ### Considerations - Ensure your application responds within 10 seconds. Make sure your architecture offloads long running processes until after you have sent the webhook a response. For example, respond with a 200 response code and then perform video downloads, database updates and email dispatching in a separate process or script. - If you queue several videos to render at once, ensure your server is ready to handle multiple callback requests at once.
In reality these may be spread over a number of seconds but you should ensure your system can handle the spike in requests and that security mechanisms such as a Web Application Firewall \(WAF\) will not blacklist the Shotstack service. - Your endpoint for receiving webhook callbacks must be publicly available over the internet. In theory anyone can post data to this endpoint. We currently do not provide signed payloads, so if you are concerned about the validity of the data you receive, you should make a request to the API using your API key and the render ID and ensure all data matches. --- ## Caching - Markdown URL: https://shotstack.io/docs/guide/architecting-an-application/caching.md - Web URL: https://shotstack.io/docs/guide/architecting-an-application/caching/ - Description: Caching assets and source footage to speed up render times and reduce bandwidth # Caching When you perform a render, any source footage, images and audio files are provided by you as URLs, which are downloaded and cached on our servers for 24 hours. By caching the files we reduce the bandwidth charges you may incur and speed up subsequent renders which rely on the same file. Caching is particularly useful for common assets that are used in multiple renders, for example logos, soundtracks and intro/outro videos. Caching is enabled by default and you do not need to do anything. If you wish to disable caching you can pass a boolean parameter with the timeline settings: ```json { "timeline": { "cache": false, ... } } ``` Disabling caching will download all files required by the render. At this time it is not possible to selectively cache files. --- ## Limitations - Markdown URL: https://shotstack.io/docs/guide/architecting-an-application/limitations.md - Web URL: https://shotstack.io/docs/guide/architecting-an-application/limitations/ - Description: System constraints and limitations # Limitations ### Render Time Rendering takes approximately **20 seconds per minute** of video.
Additional effects, filters, or transitions increase render time. ### Usage Limits - **Production API:** 10 requests per second. - **Sandbox API:** 1 request per second, 5,000 requests per month. ### Sandbox Environment - Same limits as production, except for a **maximum render duration of 10 minutes**. - All videos rendered in the sandbox environment will have a watermark applied. - You need at least one credit to make use of the sandbox environment. ### Disk Space Limits - **Individual source files:** 5GB maximum per file. - **Source footage + output video:** Must not exceed 10GB total. --- ## Creating Videos with Generative AI Assets - Markdown URL: https://shotstack.io/docs/guide/generating-assets/generative-ai.md - Web URL: https://shotstack.io/docs/guide/generating-assets/generative-ai/ - Description: Enhance your videos by incorporating generative AI assets instead of relying solely on stock footage. # Creating Videos with Generative AI Assets :::info Use AI Assistants Want to create videos using natural language? See [Build videos with AI](/docs/guide/agents/) — connect Shotstack to Claude, ChatGPT, Cursor, Claude Code, or build your own agentic app. ::: Enhance your videos by incorporating generative AI assets instead of relying solely on stock footage. Shotstack enables you to create and embed AI-generated audio and image assets directly into your video projects. ### Generative AI Assets Shotstack supports the following AI-generated assets, which can be easily integrated into your videos: - **[Text to Image](/docs/guide/generating-assets/ai-image-generation)**: Transform text prompts into high-quality images. - **[Text to Speech](/docs/guide/generating-assets/ai-speech-generation)**: Convert and translate text into speech with a variety of voices and languages. - **[Text to Video](/docs/guide/generating-assets/ai-video-generation)**: Generate video clips from text prompts.
### How to Add Generative AI Assets Integrating generative AI assets into your videos is simple. Specify the type of asset you want to create, adjust the generation settings, and define the clip properties to control exactly when and where the asset appears in your video. For example, to generate an image using the text-to-image asset: ```json { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "text-to-image", "prompt": "Landscape of Sydney harbour in bad weather, Editorial Photography, Aerial View, 32k, Optics", "width": 1280, "height": 720 }, "start": 0, "length": 5 } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` ### Using Multiple AI Assets You can combine multiple AI assets in a single video. Shotstack will generate the assets according to your specifications and automatically insert them into your video at the designated points. ```json { "timeline": { "tracks": [ { "clips": [ { "asset": { "type": "text-to-speech", "text": "Good evening, in Sydney tonight we’re tracking a developing story as unexpected storms roll in across the city, bringing with them flash flooding warnings and major disruptions to the evening commute.", "voice": "Joanna" }, "start": 0, "length": "auto" } ] }, { "clips": [ { "asset": { "type": "text-to-image", "prompt": "Landscape of Sydney harbour in bad weather, Editorial Photography, Aerial View, 32k, Optics", "width": 1280, "height": 720 }, "start": 0, "length": "end" } ] } ] }, "output": { "format": "mp4", "size": { "width": 1280, "height": 720 } } } ``` :::danger Warning Generated AI assets in the sandbox environment will incur credits. ::: --- ## Text to Speech Generation - Markdown URL: https://shotstack.io/docs/guide/generating-assets/ai-speech-generation.md - Web URL: https://shotstack.io/docs/guide/generating-assets/ai-speech-generation/ - Description: The text-to-speech feature enables you to convert written text into spoken audio with a selection of voices and languages. 
# Text to Speech Generation ## Convert Text into Speech The text-to-speech feature enables you to convert written text into spoken audio with a selection of voices and languages. You can seamlessly integrate this audio into a clip by specifying the desired voice and the text to be spoken. ```json { "asset": { "type": "text-to-speech", "text": "Good evening, in Sydney tonight we’re tracking a developing story as unexpected storms roll in across the city, bringing with them flash flooding warnings and major disruptions to the evening commute.", "voice": "Amy" }, "start": 0, "length": "auto" } ``` This request will embed the generated speech at the beginning of your video, with the duration automatically matching the length of the generated audio. For more details on optimizing timing, refer to [smart clips](/docs/guide/architecting-an-application/smart-clips). ### Translating text To create an audio file in a different language, use the `language` option. Ensure that you select a `voice` compatible with the desired `language`. ```json { "asset": { "type": "text-to-speech", "text": "Good evening, in Sydney tonight we’re tracking a developing story as unexpected storms roll in across the city, bringing with them flash flooding warnings and major disruptions to the evening commute.", "voice": "Seoyeon", "language": "ko-KR" }, "start": 0, "length": "auto" } ``` The above example creates an audio file in Korean. The English text is translated into Korean and spoken in Korean.
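Note that the snippets on this page are clip objects; to render one it must be nested inside a track on the timeline, together with an output block, as described in the core concepts guide. A minimal sketch of that wrapping (the `mp3` output format for an audio-only render is an assumption to verify against the API reference):

```python
def wrap_clip_as_edit(clip: dict, output_format: str = "mp3") -> dict:
    """Nest a single clip in a one-track timeline and attach an output block.

    The timeline/tracks/clips structure follows the core concepts guide; the
    default "mp3" output format is an assumption for an audio-only render.
    """
    return {
        "timeline": {"tracks": [{"clips": [clip]}]},
        "output": {"format": output_format},
    }

tts_clip = {
    "asset": {"type": "text-to-speech", "text": "Hello from Shotstack", "voice": "Amy"},
    "start": 0,
    "length": "auto",
}
edit = wrap_clip_as_edit(tts_clip)  # POST this JSON to the render endpoint
```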
#### Supported translations | Language | Value | |----------|-------| | Chinese (Mandarin) | cmn-CN | | Danish | da-DK | | German | de-DE | | English (Australian) | en-AU | | English (British) | en-GB | | English (Indian) | en-IN | | English (US) | en-US | | Spanish (European) | es-ES | | Spanish (Mexican) | es-MX | | Spanish (US) | es-US | | French (Canadian) | fr-CA | | French | fr-FR | | Italian | it-IT | | Japanese | ja-JP | | Hindi | hi-IN | | Korean | ko-KR | | Norwegian Bokmål | nb-NO | | Dutch | nl-NL | | Polish | pl-PL | | Portuguese (Brazilian) | pt-BR | | Portuguese (European) | pt-PT | | Swedish | sv-SE | | English (New Zealand) | en-NZ | | English (South African) | en-ZA | | Catalan | ca-ES | | German (Austrian) | de-AT | | Chinese (Cantonese) | yue-CN | | Arabic (Gulf) | ar-AE | | Finnish | fi-FI | ### Newscaster mode Shotstack’s text-to-speech service includes a `newscaster` mode, which produces audio that emulates a newsreader’s delivery. To enable this mode, set the `newscaster` option to `true`. ```json { "asset": { "type": "text-to-speech", "text": "Good evening, in Sydney tonight we’re tracking a developing story as unexpected storms roll in across the city, bringing with them flash flooding warnings and major disruptions to the evening commute.", "voice": "Joanna", "newscaster": true }, "start": 0, "length": "auto" } ``` The `newscaster` style is available with the `Matthew` and `Joanna` voices in US English, the `Lupe` voice in US Spanish, and the `Amy` voice in British English. 
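Since only these four voices support the style, it can be worth validating the combination client-side before submitting a render. A minimal sketch (the helper is illustrative and not part of the Shotstack API):

```python
# Voices that support the newscaster style, per the list above.
NEWSCASTER_VOICES = {"Matthew", "Joanna", "Lupe", "Amy"}

def validate_newscaster_voice(asset: dict) -> None:
    """Raise if a text-to-speech asset enables newscaster mode with an unsupported voice."""
    if asset.get("newscaster") and asset.get("voice") not in NEWSCASTER_VOICES:
        raise ValueError(f"Voice {asset.get('voice')!r} does not support newscaster mode")

validate_newscaster_voice({"type": "text-to-speech", "voice": "Joanna", "newscaster": True})
```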
### Voices The Shotstack text-to-speech service offers a variety of voices in different languages and genders: | Voice Name | Language | Gender | |------------|----------|--------| | Hala | Arabic (Gulf) | Female | | Lisa | Dutch (Belgian) | Female | | Arlet | Catalan | Female | | Hiujin | Chinese (Cantonese) | Female | | Zhiyu | Chinese (Mandarin) | Female | | Sofie | Danish | Female | | Laura | Dutch | Female | | Olivia | English (Australian) | Female | | Amy | English (British) | Female | | Emma | English (British) | Female | | Brian | English (British) | Male | | Arthur | English (British) | Male | | Kajal | English (Indian) | Female | | Niamh | English (Ireland) | Female | | Aria | English (New Zealand) | Female | | Ayanda | English (South African) | Female | | Ivy | English (US) | Female (child) | | Joanna | English (US) | Female | | Kendra | English (US) | Female | | Kimberly | English (US) | Female | | Salli | English (US) | Female | | Joey | English (US) | Male | | Justin | English (US) | Male (child) | | Kevin | English (US) | Male (child) | | Matthew | English (US) | Male | | Ruth | English (US) | Female | | Stephen | English (US) | Male | | Suvi | Finnish | Female | | Léa | French | Female | | Rémi | French | Male | | Gabrielle | French (Canadian) | Female | | Liam | French (Canadian) | Male | | Vicki | German | Female | | Daniel | German | Male | | Hannah | German (Austrian) | Female | | Bianca | Italian | Female | | Adriano | Italian | Male | | Takumi | Japanese | Male | | Kazuha | Japanese | Female | | Tomoko | Japanese | Female | | Seoyeon | Korean | Female | | Ida | Norwegian | Female | | Ola | Polish | Female | | Camila | Portuguese (Brazilian) | Female | | Vitória/Vitoria | Portuguese (Brazilian) | Female | | Thiago | Portuguese (Brazilian) | Male | | Inês/Ines | Portuguese (European) | Female | | Lucia | Spanish (European) | Female | | Sergio | Spanish (European) | Male | | Mia | Spanish (Mexican) | Female | | Andrés | Spanish (Mexican) | Male | 
| Lupe | Spanish (US) | Female | | Pedro | Spanish (US) | Male | | Elin | Swedish | Female | ## ElevenLabs Integration Our ElevenLabs integration is currently unavailable. :::danger Warning Generated AI assets in the sandbox environment will incur credits. ::: --- ## AI Image Generation - Markdown URL: https://shotstack.io/docs/guide/generating-assets/ai-image-generation.md - Web URL: https://shotstack.io/docs/guide/generating-assets/ai-image-generation/ - Description: You can create highly realistic imagery for use in your videos by using the text-to-image asset. # AI Image Generation ### Text to Image You can create highly realistic imagery for use in your videos by using the text-to-image asset. We make use of the latest FLUX.1 model by Black Forest Labs, providing you with state-of-the-art AI-generated imagery. ```json { "asset": { "type": "text-to-image", "prompt": "Landscape of Sydney harbour, Editorial Photography, Aerial View, 32k, Optics", "width": 1280, "height": 720 }, "start": 0, "length": 5 } ``` :::info Info Please ensure both width and height are in multiples of 256px, up to a maximum of 1280px. ::: :::danger Warning Generated AI assets in the sandbox environment will incur credits. ::: --- ## AI Video Generation - Markdown URL: https://shotstack.io/docs/guide/generating-assets/ai-video-generation.md - Web URL: https://shotstack.io/docs/guide/generating-assets/ai-video-generation/ - Description: Image to video converts an image into a 5-second video. # AI Video Generation ### Image to Video Image to video converts an image into a 5-second video. Provide a URL to an image to generate an mp4 video file. The video file can be used inside your Edit or as a standalone asset. We accept PNG and JPG images. ```json { "asset": { "type": "image-to-video", "src": "https://shotstack-assets.s3.amazonaws.com/images/handbag-flower-peaches.jpg", "prompt": "Slowly zoom out and orbit left around the object."
}, "start": 0, "length": "auto" } ``` This will generate a video clip of the supplied image. The generated video will automatically be cropped to fit the output dimensions of your video. You can change the `fit` property of the clip to change this behaviour. :::info Info Videos will always be 720p, 25fps and 6 seconds in length. ::: ### Usage limits Due to heavy usage we currently limit the number of image to video generations to 1,000 per month. Please get in touch if you require higher limits. :::danger Warning Generated AI assets in the sandbox environment will incur credits. ::: --- ## Host and Serve Video and Images - Markdown URL: https://shotstack.io/docs/guide/serving-assets/hosting.md - Web URL: https://shotstack.io/docs/guide/serving-assets/hosting/ - Description: Learn how to host and serve videos and images with Shotstack’s CDN and Serve API. Securely store, stream, and manage assets with built-in hosting and storage solutions. # Hosting and Serving Assets Assets rendered by the [Editing API](../architecting-an-application/guidelines.md) are saved in a temporary location for 24 hours before being deleted. Before they are deleted you must download the assets and store them on your hard drive, server or storage location. Similarly, files uploaded and ingested via the [Ingest API](../ingesting-footage/ingestion.md) are stored for use in an edit and are not suited for streaming or serving to customers. An alternative is to use one of the provided built in hosting or storage solutions integrated with Shotstack. The built in integrations are called [Destinations](../destinations/) and include a [Shotstack hosting and CDN](./destinations/shotstack.md) service that is provided by default to all customers.
--- ## Distribute Assets Using Destinations - Markdown URL: https://shotstack.io/docs/guide/serving-assets/destinations.md - Web URL: https://shotstack.io/docs/guide/serving-assets/destinations/index/ - Description: Distribute videos, images and audio to your own hosting, storage or social media accounts # Destinations The destinations feature lets you send rendered assets, ingested footage and renditions to built in and 3rd party services directly from the API. Once integrated, videos and images will automatically be distributed to the destinations you specify. ### Available destinations - [Shotstack](./shotstack) (default) - [S3](./s3) - [Google Cloud Storage](./google-cloud-storage) - [Azure Blob Storage](./azure-blob-storage) - [Google Drive](./google-drive) - [Akamai NetStorage](./akamai-netstorage) - [Vimeo](./vimeo) ### Adding credentials Apart from Shotstack, destinations require you to add credentials, keys or passwords. These can be added via the dashboard from the [integrations page](https://dashboard.shotstack.io/integrations/). Each destination includes integration instructions and form fields to enter credentials and any other configuration options. Example integrations page for the AWS S3 destination: ![image](/img/s3-credentials.png) ### Integrating destinations You instruct a render or ingest task to send output files to a destination using the `destinations` parameter. When editing videos you set the destinations parameter in the `output`, and for ingested source files and renditions you set the value at the root level of the request body. The `destinations` parameter is an array of one or more destinations to send assets to. You can send one asset to multiple destinations. An edit output configuration to send a video to the S3 destination might look like: ```json { "timeline": { ...
}, "output": { "format": "mp4", "resolution": "sd", "destinations": [{ "provider": "s3" }] } } ``` Using the ingest API to send a source file to the S3 destination might look like: ```json { "url": "https://example.com/video.mp4", "destinations": [{ "provider": "s3" }] } ``` ### Serve API The [Serve API](./../serve-api.md) lets you check the status of asset transfers and their availability at each destination. --- ## Shotstack Destination - Markdown URL: https://shotstack.io/docs/guide/serving-assets/destinations/shotstack.md - Web URL: https://shotstack.io/docs/guide/serving-assets/destinations/shotstack/ - Description: How to host and manage your videos and images using the Shotstack CDN and Serve API. # Serve Video with Shotstack Shotstack provides a built in asset hosting service for your generated videos, images and audio files. Assets are stored on a high availability storage system and served globally via a content delivery network (CDN). Hosting is enabled by default and all assets rendered via the editing API or ingested via the ingest API are immediately sent to the hosting service. There is a webhook that posts a callback when the asset arrives at the hosting service and an API that allows you to look up information about the asset and its availability. :::warning Shotstack CDN hosting is subject to fair use on Pay As You Go plans. Excessive bandwidth or storage usage may be throttled or require an upgrade to a higher tier plan. ::: Assets are hosted with the following URL format: **https://cdn.shotstack.io/\{\{REGION\}\}/\{\{STAGE\}\}/\{\{OWNER_ID\}\}/\{\{FILENAME\}\}** Where the following parameters need to be replaced with: | Parameter | Description | | :-----------| :-------------------------- | | REGION | The only region currently available is `au` | | STAGE | The staging environment: `stage` for the development sandbox, `v1` for live production usage. | | OWNER_ID | The owner or key ID.
This is the `owner` parameter returned by render status requests or the key ID in the dashboard | | FILENAME | The asset filename to serve to users | An example URL looks like the following: **https://cdn.shotstack.io/au/v1/msgtwx8iw6/2826ceb8-f0d4-43da-a472-42d88161d1a3.mp4** ### Setup The Shotstack hosting service is enabled by default and does not need to be set up or configured. ### Adding credentials The Shotstack hosting service is built into the Shotstack platform and does not require credentials. ### Integration All assets created by the editing API and ingest API are automatically sent to the Shotstack hosting service. You do not need to add or edit the output destinations parameter unless you want to opt out of hosting. ### Opting out from the Shotstack destination By default all generated files are sent to the Shotstack destination. If you transfer files to your own hosting or storage provider you may wish to [opt out](../self-host.md) from hosting your videos with Shotstack using: ```json "destinations": [{ "provider": "shotstack", "exclude": true }] ``` With `exclude` set to `true`, videos will not be sent to Shotstack. --- ## S3 Destination - Markdown URL: https://shotstack.io/docs/guide/serving-assets/destinations/s3.md - Web URL: https://shotstack.io/docs/guide/serving-assets/destinations/s3/ - Description: Easily transfer videos from Shotstack to your AWS S3 bucket for secure storage and fast distribution. # Host videos on AWS S3 Amazon (AWS) [S3](https://aws.amazon.com/s3/) is an object storage service offering industry-leading scalability, data availability, security, and performance. Host your assets in the cloud and serve them using CloudFront as a CDN. By using the S3 destination you can immediately send your assets to your own S3 bucket in any region, making it quick and easy to host your assets in your own AWS account. ### Setup Sign up to AWS and create an S3 bucket if you have not already set one up.
An AWS access key id and secret are needed to send files to your account. We recommend creating a new user with IAM permissions that only allow access to the buckets needed. Do not use a user with root or global permissions. The IAM user must have the `s3:PutObject` permission for the target bucket. If you plan to use ACLs (e.g. `public-read`), the user also needs the `s3:PutObjectAcl` permission. An example IAM policy: ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:PutObjectAcl" ], "Resource": "arn:aws:s3:::my-s3-bucket/*" } ] } ``` The S3 bucket policy must also allow the IAM user to put objects. If the bucket has a restrictive or custom bucket policy, ensure it does not deny `s3:PutObject` or `s3:PutObjectAcl` for the IAM user. If you are using ACLs, the bucket must have [Object Ownership](https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html) set to **Bucket owner preferred** or **Object writer** (not **Bucket owner enforced**, which disables ACLs). Once the bucket is created and permissions are configured, add the access credentials via the Shotstack [dashboard](https://dashboard.shotstack.io). ### Adding credentials Login to the dashboard and navigate to the [integrations](https://dashboard.shotstack.io/integrations) page, find the S3 destination and click [configure](https://dashboard.shotstack.io/integrations/s3). Enter your access key id and secret key as shown below. You can add different AWS keys for each Shotstack environment; sandbox and v1 (production). 
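Before saving the credentials it can be worth sanity-checking that the attached IAM policy grants the two actions listed above. A minimal sketch that inspects a policy document locally (plain JSON inspection, not an AWS API call):

```python
import json

# The two actions the S3 destination needs, per the example policy above.
REQUIRED_ACTIONS = {"s3:PutObject", "s3:PutObjectAcl"}

def policy_grants_required_actions(policy_json: str) -> bool:
    """Return True if the IAM policy document allows every required S3 action."""
    policy = json.loads(policy_json)
    allowed: set[str] = set()
    for statement in policy.get("Statement", []):
        if statement.get("Effect") == "Allow":
            actions = statement.get("Action", [])
            # "Action" may be a single string or a list of strings
            allowed.update([actions] if isinstance(actions, str) else actions)
    return REQUIRED_ACTIONS <= allowed
```

Note that this only checks action names; it does not evaluate the `Resource` scope, `Condition` keys or any `Deny` statements elsewhere in the account.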
![S3 credentials](/img/s3-credentials.png) ### Integration #### Basic integration To send files to S3 add the following to the output parameter of a render request JSON payload, JSON template or the root level of an ingest request: ```json "destinations": [{ "provider": "s3", "options": { "region": "ap-southeast-2", "bucket": "my-s3-bucket" } }] ``` This will send the generated or ingested files to the **my-s3-bucket** in the **ap-southeast-2** region. The filename will be the render id, source id or rendition id plus extension, e.g. **edc47d30-a504-4f16-8439-24c863a7a782.mp4** and will be saved to the root level of the bucket. Thumbnail and poster images will also be transferred. #### Advanced integration You can also optionally set a prefix (path), a custom file name, and an acl (the default is private): ```json "destinations": [{ "provider": "s3", "options": { "region": "ap-southeast-2", "bucket": "my-s3-bucket", "prefix": "testing", "filename": "video", "acl": "public-read" } }] ``` This will send the rendered files to the **my-s3-bucket** in the **ap-southeast-2** region. If the rendered file is an mp4, the filename will be **video.mp4** and will be saved with a prefix **testing**. The acl will be set to **public-read**, which is suitable for public hosting. When using a custom filename omit the file extension, e.g. .mp4 or .jpg. This will be added based on the rendered file type. If a poster or thumbnail is created these will have **-poster.jpg** and **-thumbnail.jpg** appended. #### Sending assets to multiple buckets It is also possible to send files to multiple buckets with one request: ```json "destinations": [{ "provider": "s3", "options": { "region": "ap-southeast-2", "bucket": "my-s3-bucket" } },{ "provider": "s3", "options": { "region": "us-east-1", "bucket": "my-us-s3-bucket" } }] ``` This will send rendered assets to the **my-s3-bucket** in the **ap-southeast-2** region and to the **my-us-s3-bucket** in the **us-east-1** region.
Buckets must be in the same AWS account and use the same IAM access credentials. ### Opting out from the Shotstack destination By default all generated files are sent to the Shotstack destination. If you are using S3 you may wish to [opt out](../self-host.md) from hosting your videos with Shotstack using: ```json "destinations": [{ "provider": "s3", "options": { "region": "ap-southeast-2", "bucket": "my-s3-bucket" } },{ "provider": "shotstack", "exclude": true }] ``` This will send the generated video to S3 but not to Shotstack. --- ## Google Cloud Storage Destination - Markdown URL: https://shotstack.io/docs/guide/serving-assets/destinations/google-cloud-storage.md - Web URL: https://shotstack.io/docs/guide/serving-assets/destinations/google-cloud-storage/ - Description: Send videos and assets from Shotstack to your Google Cloud Storage bucket for secure storage and distribution. # Host Assets on Google Cloud Storage [Google Cloud Storage](https://cloud.google.com/storage) is a managed object storage service offering high availability, global redundancy, and fine-grained access controls. Store and serve your assets from Google's infrastructure. By using the Google Cloud Storage destination you can immediately send your assets to your own GCS bucket, making it quick and easy to host your assets in your own Google Cloud account. ### Setup Sign up to Google Cloud and create a Cloud Storage bucket if you have not already set one up. Google Cloud credentials are required and must be added via the Shotstack [dashboard](https://dashboard.shotstack.io). ### Adding credentials Login to the dashboard and navigate to the [integrations](https://dashboard.shotstack.io/integrations) page, find the Google Cloud Storage destination and click [configure](https://dashboard.shotstack.io/integrations/google-cloud-storage). Enter your credentials as shown. You can add different credentials for each Shotstack environment; sandbox and v1 (production). 
### Integration #### Basic integration To send files to Google Cloud Storage add the following to the output parameter of a render request JSON payload, JSON template or the root level of an ingest request: ```json "destinations": [{ "provider": "google-cloud-storage", "options": { "bucket": "my-bucket" } }] ``` This will send the generated or ingested files to the **my-bucket** bucket. The filename will be the render id, source id or rendition id plus extension, e.g. **edc47d30-a504-4f16-8439-24c863a7a782.mp4** and will be saved to the root level of the bucket. Thumbnail and poster images will also be transferred. #### Advanced integration You can also optionally set a prefix (folder path) and custom file name: ```json "destinations": [{ "provider": "google-cloud-storage", "options": { "bucket": "my-bucket", "prefix": "my-renders", "filename": "video" } }] ``` This will send the rendered files to the **my-bucket** bucket. If the rendered file is an mp4, the filename will be **video.mp4** and will be saved with a prefix **my-renders**. When using a custom filename omit the file extension, e.g. .mp4 or .jpg. This will be added based on the rendered file type. If a poster or thumbnail is created these will have **-poster.jpg** and **-thumbnail.jpg** appended. #### Sending assets to multiple buckets It is also possible to send files to multiple buckets with one request: ```json "destinations": [{ "provider": "google-cloud-storage", "options": { "bucket": "my-bucket" } },{ "provider": "google-cloud-storage", "options": { "bucket": "my-other-bucket", "prefix": "backups" } }] ``` This will send rendered assets to **my-bucket** and to the **backups** prefix of **my-other-bucket**. Buckets must be accessible by the configured credentials. ### Opting out from the Shotstack destination By default all generated files are sent to the Shotstack destination.
If you are using Google Cloud Storage you may wish to [opt out](../self-host.md) from hosting your videos with Shotstack using: ```json "destinations": [{ "provider": "google-cloud-storage", "options": { "bucket": "my-bucket" } },{ "provider": "shotstack", "exclude": true }] ``` This will send the generated video to Google Cloud Storage but not to Shotstack. --- ## Azure Blob Storage Destination - Markdown URL: https://shotstack.io/docs/guide/serving-assets/destinations/azure-blob-storage.md - Web URL: https://shotstack.io/docs/guide/serving-assets/destinations/azure-blob-storage/ - Description: Send videos and assets from Shotstack to your Azure Blob Storage container for secure storage and distribution. # Host Assets on Azure Blob Storage [Azure Blob Storage](https://learn.microsoft.com/en-us/azure/storage/blobs/) is Microsoft's object storage solution for the cloud. Store and serve large amounts of unstructured data with high availability and global redundancy. By using the Azure Blob Storage destination you can immediately send your assets to your own Blob container, making it quick and easy to host your assets in your own Azure account. ### Setup Sign up to Microsoft Azure and create a Storage account with a Blob container if you have not already set one up. The container must exist before files can be sent. Azure credentials are required and must be added via the Shotstack [dashboard](https://dashboard.shotstack.io). ### Adding credentials Login to the dashboard and navigate to the [integrations](https://dashboard.shotstack.io/integrations) page, find the Azure Blob Storage destination and click [configure](https://dashboard.shotstack.io/integrations/azure-blob-storage). Enter your storage account name and credentials as shown. You can add different credentials for each Shotstack environment; sandbox and v1 (production). 
### Integration #### Basic integration To send files to Azure Blob Storage add the following to the output parameter of a render request JSON payload, JSON template or the root level of an ingest request: ```json "destinations": [{ "provider": "azure-blob-storage", "options": { "accountName": "mystorageaccount", "container": "my-container" } }] ``` This will send the generated or ingested files to the **my-container** container in the **mystorageaccount** storage account. The filename will be the render id, source id or rendition id plus extension, e.g. **edc47d30-a504-4f16-8439-24c863a7a782.mp4**. Thumbnail and poster images will also be transferred. #### Advanced integration You can also optionally set a virtual directory prefix and custom file name: ```json "destinations": [{ "provider": "azure-blob-storage", "options": { "accountName": "mystorageaccount", "container": "my-container", "prefix": "videos", "filename": "my-video" } }] ``` This will send the rendered files to the **my-container** container with a virtual directory prefix of **videos**. If the rendered file is an mp4, the filename will be **my-video.mp4**. When using a custom filename omit the file extension, e.g. .mp4 or .jpg. This will be added based on the rendered file type. If a poster or thumbnail is created these will have **-poster.jpg** and **-thumbnail.jpg** appended. #### Sending assets to multiple containers It is also possible to send files to multiple containers with one request: ```json "destinations": [{ "provider": "azure-blob-storage", "options": { "accountName": "mystorageaccount", "container": "my-container" } },{ "provider": "azure-blob-storage", "options": { "accountName": "mystorageaccount", "container": "my-backup-container", "prefix": "backups" } }] ``` This will send rendered assets to **my-container** and to the **backups** prefix of **my-backup-container**. Containers must be in the same Azure Storage account and accessible by the configured credentials.
### Opting out from the Shotstack destination By default all generated files are sent to the Shotstack destination. If you are using Azure Blob Storage you may wish to [opt out](../self-host.md) from hosting your videos with Shotstack using: ```json "destinations": [{ "provider": "azure-blob-storage", "options": { "accountName": "mystorageaccount", "container": "my-container" } },{ "provider": "shotstack", "exclude": true }] ``` This will send the generated video to Azure Blob Storage but not to Shotstack. --- ## Google Drive Destination - Markdown URL: https://shotstack.io/docs/guide/serving-assets/destinations/google-drive.md - Web URL: https://shotstack.io/docs/guide/serving-assets/destinations/google-drive/ - Description: Distribute videos to Google Drive # Google Drive [Google Drive](https://www.google.com/intl/en_us/drive/), a cloud-based storage service by Google, allows you to easily send your files to your Drive folders. This integration allows you to effortlessly store, organize, and share your files with colleagues or clients through Google Drive's sharing features. ### Setup To send your videos directly to your Google Drive you will have to connect Shotstack to your Google Account. This will provide Shotstack with permissions to upload files to Google Drive on your behalf. We will only have access to the files that are created by Shotstack. Login to the dashboard and navigate to the [integrations](https://dashboard.shotstack.io/integrations) page, find the Google Drive destination and click [configure](https://dashboard.shotstack.io/integrations/google-drive). Click the **Connect to Google Drive** button and follow the prompts. 
![Google Drive Credentials](/img/google-drive-credentials.jpg) ### Integration #### Basic integration To send files to Google Drive add the following to the output parameter of the render payload: ```json "destinations": [{ "provider": "google-drive" }] ``` The following will send the rendered files to a folder on your Google Drive with the ID 2t-eTY12LO8tzQLAwMug-fIrK_7AQBI6B. If the rendered file is an mp4, it will be named video.mp4: ```json "destinations": [{ "provider": "google-drive", "options": { "filename": "video", "folderId": "2t-eTY12LO8tzQLAwMug-fIrK_7AQBI6B" } }] ``` You can retrieve the folder ID from the URL e.g. https://drive.google.com/drive/u/0/folders/2t-eTY12LO8tzQLAwMug-fIrK_7AQBI6B #### Sending assets to multiple folders It is also possible to send files to multiple folders with one request: ```json "destinations": [{ "provider": "google-drive", "options": { "folderId": "2t-eTY12LO8tzQLAwMug-fIrK_7AQBI6B" } },{ "provider": "google-drive", "options": { "folderId": "1t9yYms4_Y2u8k-YovnTCZNMjU_FkkNBh" } }] ``` This will send rendered assets to the two different folders. The folders must be accessible by the account holder. ### Opting out from the Shotstack destination By default all generated files are sent to the Shotstack destination. If you are using Google Drive you may wish to [opt out](../self-host.md) from hosting your videos with Shotstack using: ```json "destinations": [{ "provider": "google-drive", "options": { "folderId": "2t-eTY12LO8tzQLAwMug-fIrK_7AQBI6B" } },{ "provider": "shotstack", "exclude": true }] ``` This will send the generated video to Google Drive but not to Shotstack. --- ## Akamai NetStorage Destination - Markdown URL: https://shotstack.io/docs/guide/serving-assets/destinations/akamai-netstorage.md - Web URL: https://shotstack.io/docs/guide/serving-assets/destinations/akamai-netstorage/ - Description: Send videos and assets from Shotstack to your Akamai NetStorage storage group for CDN distribution.
# Host Assets on Akamai NetStorage :::info Enterprise Integration Akamai NetStorage is an enterprise integration. Please contact us to enable this feature for your account. ::: [Akamai NetStorage](https://techdocs.akamai.com/netstorage/docs/welcome-to-netstorage) is a managed, cloud-based storage service that provides geographically replicated storage for your content. Upload your assets to a NetStorage storage group and serve them globally via Akamai's edge network. By using the Akamai NetStorage destination you can immediately send your assets to your own NetStorage storage group, making it quick and easy to distribute your assets via Akamai's CDN. ### Setup Sign up to Akamai and create a storage group with an upload account if you have not already set one up. You will need your NetStorage HTTP domain name (hostname) and Content Provider (CP) code. Akamai credentials are required and must be added via the Shotstack [dashboard](https://dashboard.shotstack.io). The upload account must have the **NetStorage HTTP CMS API** access method enabled. Note that changes to upload accounts can take 60-120 minutes to propagate across the NetStorage network. ### Adding credentials Login to the dashboard and navigate to the [integrations](https://dashboard.shotstack.io/integrations) page, find the Akamai NetStorage destination and click [configure](https://dashboard.shotstack.io/integrations/akamai-netstorage). Enter your credentials as shown. You can add different credentials for each Shotstack environment; sandbox and v1 (production). 
### Integration #### Basic integration To send files to Akamai NetStorage add the following to the output parameter of a render request JSON payload, JSON template or the root level of an ingest request: ```json "destinations": [{ "provider": "akamai-netstorage", "options": { "host": "example-nsu.akamaihd.net", "cpCode": "123456" } }] ``` This will send the generated or ingested files to your NetStorage upload directory using the hostname **example-nsu.akamaihd.net** and CP code **123456**. The filename will be the render id, source id or rendition id plus extension, i.e. **edc47d30-a504-4f16-8439-24c863a7a782.mp4**. Thumbnail and poster images will also be transferred. #### Advanced integration You can also optionally set a remote directory path and custom file name: ```json "destinations": [{ "provider": "akamai-netstorage", "options": { "host": "example-nsu.akamaihd.net", "cpCode": "123456", "path": "videos", "filename": "my-video" } }] ``` This will send the rendered files to the **videos** directory in your NetStorage upload directory. If the rendered file is an mp4, the filename will be **my-video.mp4**. When using a custom filename, omit the file extension (e.g. .mp4 or .jpg); it will be added based on the rendered file type. If a poster or thumbnail is created, it will have **-poster.jpg** or **-thumbnail.jpg** appended. ### Opting out from the Shotstack destination By default all generated files are sent to the Shotstack destination. If you are using Akamai NetStorage you may wish to [opt out](../self-host.md) from hosting your videos with Shotstack using: ```json "destinations": [{ "provider": "akamai-netstorage", "options": { "host": "example-nsu.akamaihd.net", "cpCode": "123456" } },{ "provider": "shotstack", "exclude": true }] ``` This will send the generated video to Akamai NetStorage but not to Shotstack.
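The destination snippets above can also be assembled programmatically before submitting a render. As an illustration only (the docs use cURL and raw JSON; the helper name below is our own, and the hostname, CP code, path and filename are the placeholder values from the examples), a minimal Python sketch of building the `output` section with an Akamai NetStorage destination and a Shotstack opt-out:

```python
# Sketch: assemble the "output" section of a render payload that sends the
# result to Akamai NetStorage and opts out of Shotstack hosting.
# Hostname and CP code are the placeholder values from the examples above.

def build_output(host, cp_code, path=None, filename=None):
    options = {"host": host, "cpCode": cp_code}
    if path:
        options["path"] = path  # remote directory inside the upload directory
    if filename:
        options["filename"] = filename  # omit the extension; it is added automatically
    return {
        "format": "mp4",
        "resolution": "sd",
        "destinations": [
            {"provider": "akamai-netstorage", "options": options},
            {"provider": "shotstack", "exclude": True},  # opt out of Shotstack hosting
        ],
    }

output = build_output("example-nsu.akamaihd.net", "123456",
                      path="videos", filename="my-video")
```

The resulting dictionary can be serialized with `json.dumps` and used as the `output` object of a render request.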
--- ## Vimeo Destination - Markdown URL: https://shotstack.io/docs/guide/serving-assets/destinations/vimeo.md - Web URL: https://shotstack.io/docs/guide/serving-assets/destinations/vimeo/ - Description: Upload videos to Vimeo # Host Videos on Vimeo [Vimeo](https://www.vimeo.com), a cloud-based video hosting platform, allows you to host and share your videos. This integration lets you upload your videos to Vimeo automatically. ### Setup To send your videos directly to your Vimeo account you will have to connect Shotstack to your Vimeo account. This will provide Shotstack with permissions to upload videos to Vimeo on your behalf. Log in to the dashboard and navigate to the [integrations](https://dashboard.shotstack.io/integrations) page, find the Vimeo destination and click [configure](https://dashboard.shotstack.io/integrations/vimeo). Click the **Connect to Vimeo** button and follow the prompts. ![Vimeo Credentials](/img/vimeo-credentials.png) ### Integration #### Basic integration To send files to Vimeo add the following to the output parameter of the render payload: ```json "destinations": [{ "provider": "vimeo" }] ``` The following will set a name and description for your video and send the rendered videos to your Vimeo account: ```json "destinations": [{ "provider": "vimeo", "options": { "name": "My Automated Video", "description": "This video was uploaded via Shotstack" } }] ``` #### Set folder Shotstack allows you to send videos to a folder inside Vimeo: ```json "destinations": [{ "provider": "vimeo", "options": { "name": "My Automated Video", "description": "This video was uploaded via Shotstack", "folderUri": "/users/{user_id}/folders/{project_id}" } }] ``` :::info Info Review the [Vimeo documentation](https://developer.vimeo.com/api/guides/folders) for more information on how to manage folders.
::: #### Setting video privacy Shotstack allows you to set the privacy settings of your videos: ```json "destinations": [{ "provider": "vimeo", "options": { "name": "My Automated Video", "description": "This video was uploaded via Shotstack", "privacy": { "view": "anybody", "embed": "public", "comments": "anybody", "download": true, "add": false } } }] ``` The available options are: | Option | Type | Values | | :--- | :--- | :--- | | `view` | string | `anybody`, `nobody`, `contacts`, `password`, `unlisted` | | `embed` | string | `public`, `private`, `whitelist` | | `comments` | string | `anybody`, `nobody`, `contacts` | | `download` | boolean | Set whether the video can be downloaded. | | `add` | boolean | Set whether other users can add the video to their collections. | ### Opting out from the Shotstack destination By default all generated files are sent to the Shotstack destination. If you are using Vimeo you may wish to [opt out](../self-host.md) from hosting your videos with Shotstack using: ```json "destinations": [{ "provider": "vimeo", "options": { "name": "My Automated Video" } },{ "provider": "shotstack", "exclude": true }] ``` This will send the generated video to Vimeo but not to Shotstack. --- ## Manage Hosted Assets with the Serve API - Markdown URL: https://shotstack.io/docs/guide/serving-assets/serve-api.md - Web URL: https://shotstack.io/docs/guide/serving-assets/serve-api/ - Description: Host and manage your videos, images, and audio with Shotstack’s built-in CDN and Serve API. Instantly deliver high-quality assets globally with secure, high-availability storage. # Serve API Use the Serve API to interact with hosted assets: look up their status, file size and URL, and delete assets. When a render or ingestion task completes, it typically takes a few extra seconds to copy the output file to the hosting service. You may wish to check the status of the import step or look up other details such as file size or the asset's URL.
### Look up an asset by render id You can look up the details of an asset by its render id. Unless you receive webhooks this is the only way you can retrieve an asset as you will not yet know its asset id. Given a render id you can fetch an asset using the following cURL command: ```bash curl -X GET \ -H "Content-Type: application/json" \ -H "x-api-key: ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf" \ https://api.shotstack.io/serve/stage/assets/render/d2b46ed6-998a-4d6b-9d91-b8cf0193a655 ``` In the above example `stage` is the sandbox environment (you can also use `v1`), `d2b46ed6-998a-4d6b-9d91-b8cf0193a655` is the render id and `ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf` is the API key found in the dashboard. You should receive a response that looks something like: ```json { "data": [ { "type": "assets", "attributes": { "id": "e4433cbf-e501-76a2-ac8b-715d26997540", "owner": "5ca6hu7s9k", "region": "au", "renderId": "d2b46ed6-998a-4d6b-9d91-b8cf0193a655", "filename": "d2b46ed6-998a-4d6b-9d91-b8cf0193a655.mp4", "url": "https://cdn.shotstack.io/au/v1/5ca6hu7s9k/d2b46ed6-998a-4d6b-9d91-b8cf0193a655.mp4", "status": "ready", "created": "2021-07-12T11:04:44.574Z", "updated": "2021-07-12T11:04:44.854Z" } } ] } ``` Note that `data` is an array of objects - each video, image, mp3, thumbnail and poster asset created by the editing API is a unique asset within the Serve API. It is possible for a render task to generate up to 3 unique assets, each with its own asset id and details. In the example response above, `e4433cbf-e501-76a2-ac8b-715d26997540` is the unique asset id. :::note Note The example above uses cURL but you can use any programming language or framework that can send requests and receive responses from a RESTful JSON API. ::: ### Look up an asset by ingestion source and rendition ids If a media file originates from the [ingest service](/ingesting-footage/ingestion/) then you can also look up the asset by source id or rendition id.
This works the same way as looking up an asset by render id, but you use `source` or `rendition` in the endpoint URL instead of `render`. To look up an asset by source id, the endpoint looks like below where `zzytey4v-32km-kq1z-aftr-3kcuqi0brad2` is the source id: ```bash https://api.shotstack.io/serve/stage/assets/source/zzytey4v-32km-kq1z-aftr-3kcuqi0brad2 ``` To look up an asset by rendition id, the endpoint looks like below where `zzyap2eg-3ae4-8p0o-xxja-5o71wq1vjat3` is the rendition id: ```bash https://api.shotstack.io/serve/stage/assets/rendition/zzyap2eg-3ae4-8p0o-xxja-5o71wq1vjat3 ``` The response is the same as the render id lookup, except that either `renditionId` or `sourceId` will be returned instead of `renderId`. ### Look up an asset by asset id Every asset has a unique asset id, which is different from the render id; remember that a render task can create a number of different files. To look up a specific asset use its asset id with the following cURL command: ```bash curl -X GET \ -H "Content-Type: application/json" \ -H "x-api-key: ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf" \ https://api.shotstack.io/serve/stage/assets/e4433cbf-e501-76a2-ac8b-715d26997540 ``` The response will be exactly the same as the response to the render id lookup: ```json { "data": [ { "type": "assets", "attributes": { "id": "e4433cbf-e501-76a2-ac8b-715d26997540", "owner": "5ca6hu7s9k", "region": "au", "renderId": "d2b46ed6-998a-4d6b-9d91-b8cf0193a655", "filename": "d2b46ed6-998a-4d6b-9d91-b8cf0193a655.mp4", "url": "https://cdn.shotstack.io/au/v1/5ca6hu7s9k/d2b46ed6-998a-4d6b-9d91-b8cf0193a655.mp4", "status": "ready", "created": "2021-07-12T11:04:44.574Z", "updated": "2021-07-12T11:04:44.854Z" } } ] } ``` ### Delete an asset by asset id To delete an asset you need its asset id. If you only know the render id, first fetch the asset by render id to find the asset id.
If you know the asset id of the asset you wish to delete you can use the following cURL command: ```bash curl -X DELETE \ -H "Content-Type: application/json" \ -H "x-api-key: ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf" \ https://api.shotstack.io/serve/stage/assets/e4433cbf-e501-76a2-ac8b-715d26997540 ``` You will receive a `204` response code, meaning the file has been deleted from storage and will expire from the CDN cache in due course. While the file is deleted, the record remains in the API with a status of `deleted`. The deleted file will no longer use up your storage allowance. :::note Note The Serve API uses the [JSON:API](https://jsonapi.org/) specification making it easy to consume using an existing [library](https://jsonapi.org/implementations/). ::: --- ## Shotstack Hosting Asset Webhook - Markdown URL: https://shotstack.io/docs/guide/serving-assets/polling-vs-webhook.md - Web URL: https://shotstack.io/docs/guide/serving-assets/polling-vs-webhook/ - Description: Receive a webhook when an asset is copied to the Shotstack CDN # Serve: Polling vs Webhook When a video or image is generated using the Shotstack editing API the file is copied to the hosting and storage location asynchronously. This process normally takes a few seconds and there are two ways to know when the file is ready. ### Polling the API You can check the [Serve API](serve-api.md) by polling the asset [endpoint by render id](serve-api.md#look-up-an-asset-by-render-id). When the `status` has a value of `ready` then the file has been copied and is ready to view. Note that this request may return up to 3 assets, each of which will need to be checked. Polling is a simple way to check the status of a file but it is not the most efficient. ### Receiving a webhook Instead of polling you can request a webhook callback. The callback notification is sent to your server with details of the transfer when it either completes or fails.
To receive a webhook you add the following to the root level of the JSON render request: ```json "callback": "https://my-server.com/webhook.php" ``` Where `https://my-server.com/webhook.php` is your own URL and webhook receiving script, or subscriber. The webhook body POSTed to your server will look like: ```json { "type": "serve", "action": "copy", "id": "e4433cbf-e501-76a2-ac8b-715d26997540", "render": "d2b46ed6-998a-4d6b-9d91-b8cf0193a655", "owner": "5ca6hu7s9k", "status": "ready", "url": "https://cdn.shotstack.io/au/v1/5ca6hu7s9k/d2b46ed6-998a-4d6b-9d91-b8cf0193a655.mp4", "error": null, "completed": "2021-07-12T11:04:44.574Z" } ``` Note that the `id` is the asset id and `render` is the render id. :::note Note You may receive up to 3 callback POSTs, one for each asset, if you create a video with a thumbnail and poster image. ::: To learn more about setting up and using webhooks check the [webhook guide](/architecting-an-application/webhooks.md) in the Architecting an Application section. ### Using the CDN URL without checking the status If you know your owner id and the name of the file (you can get this from the render endpoint status request), you can also construct the URL in the [correct format](hosting.md) and use this straight away in your application. There may be a few seconds where the file is not yet available, but if you do not need the asset immediately this approach can also work. --- ## Opt-out from Shotstack Hosting - Markdown URL: https://shotstack.io/docs/guide/serving-assets/self-host.md - Web URL: https://shotstack.io/docs/guide/serving-assets/self-host/ - Description: Use your own hosting service and opt out from Shotstack's built-in service # Host your Own Assets If you have your own hosting provider or your application does not require any kind of public hosting you can opt-out from having your videos automatically hosted.
You can still use the Editing API to edit videos and images and download the created assets, while instructing Shotstack not to move your assets to hosting. ### Opt out from hosting To opt out from hosting you need to add the following JSON to the output section of your render requests: ```json "destinations": [{ "provider": "shotstack", "exclude": true }] ``` Your final output JSON will look something like: ```json "output": { "format": "mp4", "resolution": "sd", "destinations": [{ "provider": "shotstack", "exclude": true }] } ``` :::info Tip You can set `exclude` to `false` to opt-in or omit the destinations section entirely. ::: ### Retrieve the asset directly from the Editing API If you are not using Shotstack's built-in hosting then you will still need to retrieve the asset generated by the editing API. The URL of the final asset is returned in the response of the status request or [webhooks](/architecting-an-application/webhooks.md). You will need to [download the asset](/architecting-an-application/guidelines.md#download-video-when-rendered) and move it to your own storage location within 24 hours. --- ## Ingest and Transform Source Footage - Markdown URL: https://shotstack.io/docs/guide/ingesting-footage/ingestion.md - Web URL: https://shotstack.io/docs/guide/ingesting-footage/ingestion/ - Description: How to ingest videos, images, audio files and fonts to use in video edits, and how to create and transform them into renditions. # Ingest and Transform Source Footage When you compose a video Edit you provide URLs to source footage. These URLs can be anywhere on the internet and will be ingested (downloaded) as part of the rendering process. Source footage may come from various sources including user generated content (UGC) or stock libraries. This can result in files with different formats, resolutions, aspect ratios, and encoding settings.
### Setup, bandwidth, storage and latency issues While pulling files from URLs is convenient, this approach comes with a number of disadvantages: - Latency due to the file being located in a different geographical region from the Edit API. - Latency due to slow server, storage location or network issues. - Outgoing bandwidth costs every time the file is requested. - The need to set up, maintain and manage your own storage and server. - Setting up a system to process customer uploads. - Dealing with CORS issues. The Ingest API solves these issues by providing endpoints to [fetch](../sources#fetch-a-file-from-a-url) or [upload](../sources#upload-files-directly) source assets (videos, images, audio and fonts) that can be used in your edit. ### Transforming and standardizing source footage In addition to solving the issues above, you can also [apply transformations](../renditions) to source footage and create renditions, which helps solve the following issues: - Standardize the resolution, aspect ratio, framerate and quality of source footage from different devices. - Re-encode footage from different devices to make it compatible with the Shotstack platform. - Crop and resize footage to fit different aspect ratios to allow different creative effects to be applied. - Optimize rendering performance by creating renditions that match the output resolution of your edit. --- ## Ingest Files with the Ingest API - Markdown URL: https://shotstack.io/docs/guide/ingesting-footage/ingest-api.md - Web URL: https://shotstack.io/docs/guide/ingesting-footage/ingest-api/ - Description: How to fetch and download videos, images, audio files and fonts using the Ingest API. # Ingest API Use the Ingest API to [fetch assets from URLs](../sources#fetch-a-file-from-a-url) or to add [upload functionality](../sources#upload-files-directly) to your applications. Source assets are stored by Shotstack until you delete them. Stored assets can be used directly in your edits.
The Ingest API is designed to be used with video, image, audio and font files. ## Limitations Files uploaded and ingested via the Ingest API are stored in S3 buckets in the AWS Sydney region. The primary use case is to provide a simple way to upload media files to be used in your Edits. We do not recommend serving these files directly to your users or in your own application. If you need to serve files directly to users we recommend using a CDN or a third party storage or hosting service. --- ## Ingesting and Uploading Source Files - Markdown URL: https://shotstack.io/docs/guide/ingesting-footage/sources.md - Web URL: https://shotstack.io/docs/guide/ingesting-footage/sources/ - Description: How to fetch and upload source videos, images, audio files and fonts using the Ingest API. # Ingesting and Uploading Source Files The Ingest API allows you to fetch and upload source files (videos, images, audio and fonts) to use in your edits. The examples below show you how to fetch files from URLs and upload files directly. ### Fetch a file from a URL If you have your own storage or hosting solution and want to transfer assets to Shotstack before editing, use the `sources` endpoint and include the file URL in the payload body: ```bash curl -X POST https://api.shotstack.io/ingest/stage/sources \ -H 'Content-Type: application/json' \ -H 'x-api-key: ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf' \ -d ' { "url": "https://github.com/shotstack/test-media/raw/main/captioning/scott-ko.mp4" }' ``` The example above requests the url `https://github.com/shotstack/test-media/raw/main/captioning/scott-ko.mp4` be ingested to the `stage` environment. You should receive a response similar to below: ```json { "data": { "type": "source", "id": "zzytey4v-32km-kq1z-aftr-3kcuqi0brad2" } } ``` The file will be fetched asynchronously from the URL which may take some time depending on the file size, bandwidth and geographic location.
You can check the status of the transfer by polling the individual [sources resource](#get-an-individual-source-files-details). ### Upload files directly If the file you want to ingest is not already stored online you can upload it directly from your application using a signed URL. By using direct uploads you do not need to worry about setting up or managing your own storage and can use Shotstack's infrastructure instead. To upload a file you first need to request a signed URL: ```bash curl -X POST https://api.shotstack.io/ingest/stage/upload \ -H 'Accept: application/json' \ -H 'x-api-key: ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf' ``` The request above calls the `stage` environment and returns a signed URL in the response like below: ```json { "data": { "type": "upload", "id": "zzyafe82-4tvb-ku1r-87ty-0io5sv0mtrdl", "attributes": { "id": "zzyafe82-4tvb-ku1r-87ty-0io5sv0mtrdl", "url": "https://shotstack-ingest-api-v1-sources.s3.ap-southeast-2.amazonaws.com/5ca6hu7s9k/zzyafe82-4tvb-ku1r-87ty-0io5sv0mtrdl/source?AWSAccessKeyId=ASIAWJV3NVDML6LI2ZVG&Expires=1672819007&Signature=9M76gBA%2FghV8ZYvGTp3alo5Ya%2Fk%3D&x-amz-acl=public-read&x-amz-security-token=IQoJb3JpZ2luX2VjEJ%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaDmFwLXNvdXRoZWFzdC0yIkcwRQIhAJHrqMCRk7ACXuXmJICTkADbx11e2wUP0RZ3KRdN3%2BGwAiAYt%2FIHlM8rcplCgvsvqH%2BBtSrlCW%2BUeZstwuwgq45Y3iqbAwjo%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F8BEAMaDDQzMzExNTIxMTk5MiIMtFX%2Bb1klptd8HXQvKu8Cd0xpHti7cRWkPxQz3foEWSYu1U8In64Qsi6TFK%2BmiOhVnUkHK%2BLSIwF1yQFMK2oTzVXwrEFEsyqlf%2FPZ9j3OL9eLlB7G5AqbC16hjXXR3psipp0dE2uvCV2d%2BIDYgcf1MKmzE0FDfN4wyTez%2Bd%2F3y8nfAtWB%2FCB0wU8AtKNUI7hwNbCYMgCa8QUeAH2UOrriDaN379vKXK%2B1XVplhhuvLX3aC1D0St2U6lC5yaDtZbLGEyymQPhgpp5Mam6jVzHVXXX4%2FvkQSNWbDMuMFd13fqdut9uMPkq4vhZgCmyQsibC7AnrK21QopLY%2F0vhHvPUhSkzRDKjiQou0vDrbTnT4yJLY5RCs9G65yisi6jbyUUbJTUgrME7PPPihs7kM5L%2FGjhmKqe9rNPuzKC%2FISRcmVtAPleX7tqPI7H%2BuEIobS%2FE%2B1jV4oNUFQA549prw3546FXds%2FgCLKRU%2BvxUyi2yKS8U0QC%2FNLMg2p9c81%2BaDCCqxtSdBjqdAcxGASzQwP6hHbfzC2hlnxn%2Bnf4M
ddgpIPFxvpV18Sy9vUYSU52mrsZK%2FxPcxrg1AM94v0aaW%2FaRE1ESTF2hXJrAJZkDNDPEBQBmcP3ylj4Bf5MsP%2FCspFoF6TvXZPYkH1lSlWHT8OTOugLji7%2F9qb9a6bKzFJqvcS0EiT7v5LCOMOpVA%2FAg9RM0yerN4Zot%2FREHgCSzajNII9Xio%2F0%3D", "expires": "2023-01-02T02:47:37.260Z" } } } ``` The URL at `data.attributes.url` is the signed URL that you use in the next step to upload the file. #### Uploading the file To upload a file from your desktop or server you can use the following cURL command (the signed URL must be quoted as it contains `&` characters that the shell would otherwise interpret): ```bash curl -X PUT -T video.mp4 "https://shotstack-ingest-api-v1-sources.s3.ap-southeast-2.amazonaws.com/5ca6hu7s9k/zzyafe82-4tvb-ku1r-87ty-0io5sv0mtrdl/source?AWSAccessKeyId=ASIAWJV3NVDML6LI2ZVG&Expires=1672819007&Signature=9M76gBA%2FghV8ZYvGTp3alo5Ya%2Fk%3D&x-amz-acl=public-read&x-amz-security-token=IQoJb3JpZ2luX2VjEJ%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaDmFwLXNvdXRoZWFzdC0yIkcwRQIhAJHrqMCRk7ACXuXmJICTkADbx11e2wUP0RZ3KRdN3%2BGwAiAYt%2FIHlM8rcplCgvsvqH%2BBtSrlCW%2BUeZstwuwgq45Y3iqbAwjo%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F8BEAMaDDQzMzExNTIxMTk5MiIMtFX%2Bb1klptd8HXQvKu8Cd0xpHti7cRWkPxQz3foEWSYu1U8In64Qsi6TFK%2BmiOhVnUkHK%2BLSIwF1yQFMK2oTzVXwrEFEsyqlf%2FPZ9j3OL9eLlB7G5AqbC16hjXXR3psipp0dE2uvCV2d%2BIDYgcf1MKmzE0FDfN4wyTez%2Bd%2F3y8nfAtWB%2FCB0wU8AtKNUI7hwNbCYMgCa8QUeAH2UOrriDaN379vKXK%2B1XVplhhuvLX3aC1D0St2U6lC5yaDtZbLGEyymQPhgpp5Mam6jVzHVXXX4%2FvkQSNWbDMuMFd13fqdut9uMPkq4vhZgCmyQsibC7AnrK21QopLY%2F0vhHvPUhSkzRDKjiQou0vDrbTnT4yJLY5RCs9G65yisi6jbyUUbJTUgrME7PPPihs7kM5L%2FGjhmKqe9rNPuzKC%2FISRcmVtAPleX7tqPI7H%2BuEIobS%2FE%2B1jV4oNUFQA549prw3546FXds%2FgCLKRU%2BvxUyi2yKS8U0QC%2FNLMg2p9c81%2BaDCCqxtSdBjqdAcxGASzQwP6hHbfzC2hlnxn%2Bnf4MddgpIPFxvpV18Sy9vUYSU52mrsZK%2FxPcxrg1AM94v0aaW%2FaRE1ESTF2hXJrAJZkDNDPEBQBmcP3ylj4Bf5MsP%2FCspFoF6TvXZPYkH1lSlWHT8OTOugLji7%2F9qb9a6bKzFJqvcS0EiT7v5LCOMOpVA%2FAg9RM0yerN4Zot%2FREHgCSzajNII9Xio%2F0%3D" ``` In this example a file called `video.mp4` is uploaded to Shotstack. The signed URL is appended to the end of the command.
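The same two-step flow (request a signed URL, then PUT the file to it) can also be done from application code. The following is an illustrative Python sketch using only the standard library; the API key is the placeholder from the examples above and the helper names are our own:

```python
# Sketch: direct upload in two steps - request a signed URL, then PUT the file.
# The API key below is the placeholder used throughout these docs.
import json
import urllib.request

API_KEY = "ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf"  # placeholder key

def extract_signed_url(body):
    """Pull the signed upload URL out of the upload response body."""
    return body["data"]["attributes"]["url"]

def request_signed_url():
    # Step 1: POST to the upload endpoint to receive a signed URL.
    req = urllib.request.Request(
        "https://api.shotstack.io/ingest/stage/upload",
        method="POST",
        headers={"Accept": "application/json", "x-api-key": API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_signed_url(json.load(resp))

def upload_file(signed_url, path):
    # Step 2: PUT the file bytes to the signed URL.
    with open(path, "rb") as f:
        req = urllib.request.Request(signed_url, data=f.read(), method="PUT")
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    url = request_signed_url()
    upload_file(url, "video.mp4")
```

As with the cURL version, the upload runs in the foreground; poll the sources resource afterwards to confirm the file details.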
The upload process will be performed in the foreground and should not be interrupted until it completes. If the file upload is interrupted, the command will need to be run again. Once complete you can poll the [sources resource](#get-an-individual-source-files-details) to get the file details. #### Setting the Content-Type header When uploading files you can include a `Content-Type` header to help identify the file type. For common media files like videos, images and audio this is optional as the file type is detected automatically from the file contents. For text-based files such as `.srt` subtitles, `.vtt` captions, `.csv` or `.txt` files, the content type cannot be detected automatically. For these files you should either include the `Content-Type` header when uploading or provide a `filename` with the correct extension in the initial upload request. For example, to upload an SRT subtitle file with the Content-Type header: ```bash curl -X PUT -H 'Content-Type: application/x-subrip' -T subtitles.srt "SIGNED_URL" ``` Or provide a filename when requesting the signed URL: ```bash curl -X POST https://api.shotstack.io/ingest/stage/upload \ -H 'Content-Type: application/json' \ -H 'x-api-key: ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf' \ -d '{"filename": "subtitles.srt"}' ``` :::tip Most HTTP clients and browsers automatically set the `Content-Type` header based on the file being uploaded, so this is usually handled for you without any extra configuration. ::: ### Get an individual source file's details Look up the details and ingestion status of a file by its id.
Use the id returned when you fetch a URL or perform a direct upload: ```bash curl -X GET https://api.shotstack.io/ingest/stage/sources/zzytey4v-32km-kq1z-aftr-3kcuqi0brad2 \ -H 'Accept: application/json' \ -H 'x-api-key: ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf' ``` The example above calls the `sources` endpoint in the `stage` (sandbox) environment with the id `zzytey4v-32km-kq1z-aftr-3kcuqi0brad2` which was returned by the fetch URL request. The response will look similar to: ```json { "data": { "type": "source", "id": "zzytey4v-32km-kq1z-aftr-3kcuqi0brad2", "attributes": { "id": "zzytey4v-32km-kq1z-aftr-3kcuqi0brad2", "owner": "5ca6hu7s9k", "input": "https://github.com/shotstack/test-media/raw/main/captioning/scott-ko.mp4", "source": "https://shotstack-ingest-api-stage-sources.s3.ap-southeast-2.amazonaws.com/5ca6hu7s9k/zzytey4v-32km-kq1z-aftr-3kcuqi0brad2/source.mp4", "status": "ready", "width": 1920, "height": 1080, "duration": 25.86, "fps": 23.967, "created": "2023-01-02T01:47:18.973Z", "updated": "2023-01-02T01:47:37.260Z" } } } ``` The URL at `data.attributes.source` is the file saved with Shotstack that you can use when editing videos. If the file is still being transferred or uploaded the `status` will show as `queued` or `importing`. If there is an error the status will be `failed`. ### List all ingested files You can retrieve a list of all your ingested source files. Files are returned in date order with the most recent first. 
```bash curl -X GET https://api.shotstack.io/ingest/stage/sources \ -H 'Accept: application/json' \ -H 'x-api-key: ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf' ``` The response returns an array of source files like below: ```json { "data": [ { "type": "source", "id": "zzytey4v-32km-kq1z-aftr-3kcuqi0brad2", "attributes": { "id": "zzytey4v-32km-kq1z-aftr-3kcuqi0brad2", "owner": "5ca6hu7s9k", "input": "https://github.com/shotstack/test-media/raw/main/captioning/scott-ko.mp4", "source": "https://shotstack-ingest-api-v1-sources.s3.ap-southeast-2.amazonaws.com/5ca6hu7s9k/zzytey4v-32km-kq1z-aftr-3kcuqi0brad2/source.mp4", "status": "ready", "width": 1920, "height": 1080, "duration": 25.86, "fps": 23.967, "created": "2023-01-02T01:47:18.973Z", "updated": "2023-01-02T01:47:37.260Z" } }, { "type": "source", "id": "zzyafe82-4tvb-ku1r-87ty-0io5sv0mtrdl", "attributes": { "id": "zzyafe82-4tvb-ku1r-87ty-0io5sv0mtrdl", "owner": "5ca6hu7s9k", "source": "https://shotstack-ingest-api-v1-sources.s3.ap-southeast-2.amazonaws.com/5ca6hu7s9k/zzyafe82-4tvb-ku1r-87ty-0io5sv0mtrdl/video.mp4", "status": "ready", "width": 1920, "height": 1080, "duration": 18.24, "fps": 30, "created": "2023-01-02T01:53:28.973Z", "updated": "2023-01-02T01:53:57.460Z" } } ] } ``` :::info Note In the example above the second file does not have an `input` file. The second file was uploaded using a direct upload so there is no input URL. 
::: ### Delete a file from storage To delete a file by id send a DELETE request to the sources resource with the id of the file to delete: ```bash curl -X DELETE https://api.shotstack.io/ingest/stage/sources/zzyafe82-4tvb-ku1r-87ty-0io5sv0mtrdl \ -H 'x-api-key: ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf' ``` A 204 response code is returned; the file will be deleted from storage and the status updated to `deleted`, like this: ```json { "data": { "type": "source", "id": "zzyafe82-4tvb-ku1r-87ty-0io5sv0mtrdl", "attributes": { "id": "zzyafe82-4tvb-ku1r-87ty-0io5sv0mtrdl", "owner": "5ca6hu7s9k", "source": "https://shotstack-ingest-api-v1-sources.s3.ap-southeast-2.amazonaws.com/5ca6hu7s9k/zzyafe82-4tvb-ku1r-87ty-0io5sv0mtrdl/video.mp4", "status": "deleted", "width": 1920, "height": 1080, "duration": 18.24, "fps": 30, "created": "2023-01-02T01:53:28.973Z", "updated": "2023-01-02T01:53:57.460Z" } } } ``` :::note Note All Ingest API responses follow the [JSON:API](https://jsonapi.org/) specification making it easy to consume using an existing [library](https://jsonapi.org/implementations/). ::: --- ## Creating Renditions from Source Files - Markdown URL: https://shotstack.io/docs/guide/ingesting-footage/renditions.md - Web URL: https://shotstack.io/docs/guide/ingesting-footage/renditions/ - Description: How to create renditions from source files using transformations such as resizing, cropping, changing the aspect ratio and quality. # Creating renditions from source files Shotstack provides a range of transformations that allow you to transcode your media. A rendition is a new file created from the source file with transformations applied. We allow for the following transformations: - Resizing - Changing the aspect ratio - Cropping - Changing the framerate - Changing the quality The examples below show how to ingest a source file and create renditions as soon as the source is ready.
## Create a single rendition from a source file When you transfer or upload a source file you can specify a list of renditions to create: ```bash curl -X POST https://api.shotstack.io/ingest/stage/sources \ -H 'Content-Type: application/json' \ -H 'x-api-key: ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf' \ -d ' { "url": "https://github.com/shotstack/test-media/raw/main/captioning/scott-ko.mp4", "outputs": { "renditions": [ { "resolution": "hd" } ] } }' ``` The example above requests the url `https://github.com/shotstack/test-media/raw/main/captioning/scott-ko.mp4` be ingested to the `stage` environment. Once it is transferred and saved to Shotstack, an HD rendition (1280px x 720px) will be generated, creating a scaled down version of the original (1920px x 1080px). You receive the same response as when ingesting a [source file](../sources#fetch-a-file-from-a-url) without renditions: ```json { "data": { "type": "source", "id": "zzytey4v-32km-kq1z-aftr-3kcuqi0brad2" } } ``` ## Create multiple renditions from a source file You can create multiple renditions from a source file by adding more transformations to the list of renditions: ```bash curl -X POST https://api.shotstack.io/ingest/stage/sources \ -H 'Content-Type: application/json' \ -H 'x-api-key: ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf' \ -d ' { "url": "https://github.com/shotstack/test-media/raw/main/captioning/scott-ko.mp4", "outputs": { "renditions": [ { "resolution": "hd" }, { "resolution": "mobile" } ] } }' ``` The example above will ingest the source video and create two renditions: an HD (1280px x 720px) video and a mobile (640px x 360px) video. Each video can be used for different use cases within an application.
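Because ingestion and rendition creation happen asynchronously, an application will typically poll the sources endpoint until the source and each requested rendition report `ready`. The following is an illustrative Python sketch of such a polling loop; the `fetch_status` callable stands in for a GET request to the sources endpoint and is our own construct, not part of the Shotstack API:

```python
# Sketch: poll until an ingested source and all of its renditions are "ready".
# fetch_status is any callable returning the decoded "attributes" object from
# a GET to the sources endpoint; injecting it keeps the loop easy to test.
import time

def wait_until_ready(fetch_status, attempts=30, delay=2.0):
    for _ in range(attempts):
        attrs = fetch_status()
        statuses = [attrs["status"]] + [
            r["status"] for r in attrs.get("outputs", {}).get("renditions", [])
        ]
        if any(s == "failed" for s in statuses):
            raise RuntimeError("ingestion or rendition failed")
        if all(s == "ready" for s in statuses):
            return attrs  # source and every rendition are ready
        time.sleep(delay)  # statuses may be "queued" or "importing" meanwhile
    raise TimeoutError("source not ready in time")
```

The delay and attempt count are arbitrary; tune them to your file sizes, or prefer webhooks over polling where available.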
## Other transformations As well as creating renditions with different resolutions, you can also apply other transformations to a video: ```bash curl -X POST https://api.shotstack.io/ingest/stage/sources \ -H 'Content-Type: application/json' \ -H 'x-api-key: ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf' \ -d ' { "url": "https://github.com/shotstack/test-media/raw/main/captioning/scott-ko.mp4", "outputs": { "renditions": [ { "size": { "width": 720, "height": 720 }, "fps": 30, "quality": 50 } ] } }' ``` The example above will ingest the source video and create a custom resolution: a square 720px x 720px video rendition. The video aspect ratio will be retained and the portion of the video outside the square area will be cropped. The frame rate will also be converted to 30 frames per second. The quality is set to 50 which will create a smaller file size with a small loss in visual quality. The API reference documentation lists all available [rendition transformations](https://shotstack.io/docs/api/#tocs_rendition). ### Custom file names You can create renditions with custom file names by adding a `filename` property to the rendition. This allows you to identify a rendition by its file name rather than its ID. Note that the filename extension must be omitted and will be added automatically based on the rendition format. The rendition settings below will create a rendition with a file name of `720p.mp4`: ```json { "format": "mp4", "size": { "width": 1280, "height": 720 }, "filename": "720p" } ``` ### Create renditions from a direct upload Creating renditions from a [direct upload](/ingesting-footage/sources/#upload-files-directly) is the same as creating renditions from a URL.
When you make a request for the signed URL you provide a list of renditions to create:

```bash
curl -X POST https://api.shotstack.io/ingest/stage/upload \
  -H 'Accept: application/json' \
  -H 'x-api-key: ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf' \
  -d '
{
    "outputs": {
        "renditions": [
            {
                "resolution": "hd"
            }
        ]
    }
}'
```

When the file is uploaded to the signed URL, a rendition will be created with an HD resolution (1280px x 720px).

### Get a rendition file's details

Fetching the details of a rendition uses the same endpoint used to check the [status of a source file](/ingesting-footage/sources#get-an-individual-source-files-details). Call the sources endpoint with its ID and the response will include the status of any renditions requested:

```bash
curl -X GET https://api.shotstack.io/ingest/stage/sources/zzytey4v-32km-kq1z-aftr-3kcuqi0brad2 \
  -H 'Accept: application/json' \
  -H 'x-api-key: ptmpOiyKgGYMnnONwvXH7FHzDGOazrIjaEDUS7Cf'
```

The response will look similar to:

```json
{
    "data": {
        "type": "source",
        "id": "zzytey4v-32km-kq1z-aftr-3kcuqi0brad2",
        "attributes": {
            "id": "zzytey4v-32km-kq1z-aftr-3kcuqi0brad2",
            "owner": "5ca6hu7s9k",
            "input": "https://github.com/shotstack/test-media/raw/main/captioning/scott-ko.mp4",
            "source": "https://shotstack-ingest-api-stage-sources.s3.ap-southeast-2.amazonaws.com/5ca6hu7s9k/zzytey4v-32km-kq1z-aftr-3kcuqi0brad2/source.mp4",
            "status": "ready",
            "outputs": {
                "renditions": [
                    {
                        "id": "zzyap2eg-3ae4-8p0o-xxja-5o71wq1vjat3",
                        "status": "ready",
                        "url": "https://shotstack-ingest-api-v1-renditions.s3.ap-southeast-2.amazonaws.com/5ca6hu7s9k/zzytey4v-32km-kq1z-aftr-3kcuqi0brad2/zzyap2eg-3ae4-8p0o-xxja-5o71wq1vjat3.mp4",
                        "executionTime": 9554.78,
                        "transformation": {
                            "format": "mp4",
                            "fps": 30,
                            "size": {
                                "width": 1280,
                                "height": 720
                            }
                        },
                        "width": 1280,
                        "duration": 25.87,
                        "fps": 30,
                        "height": 720
                    }
                ]
            },
            "width": 1920,
            "height": 1080,
            "duration": 25.86,
            "fps": 23.967,
            "created": "2023-01-02T01:47:18.973Z",
            "updated": "2023-01-02T01:47:37.260Z"
        }
    }
}
```
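An application consuming this response usually just wants the URLs of renditions that have finished processing. A TypeScript sketch against the response shape above (the `readyRenditionUrls` helper and the trimmed-down sample are illustrative, not part of the API):

```typescript
// Hypothetical helper: given a source-details response shaped like the
// one above, return the URLs of renditions whose status is "ready".
interface RenditionDetails {
  id: string;
  status: string;
  url?: string;
}

interface SourceResponse {
  data: {
    attributes: {
      outputs?: { renditions?: RenditionDetails[] };
    };
  };
}

function readyRenditionUrls(response: SourceResponse): string[] {
  const renditions = response.data.attributes.outputs?.renditions ?? [];
  return renditions
    .filter(r => r.status === "ready" && r.url !== undefined)
    .map(r => r.url as string);
}

// Trimmed-down sample: one ready rendition, one still importing.
const sample: SourceResponse = {
  data: {
    attributes: {
      outputs: {
        renditions: [
          { id: "zzyap2eg-3ae4-8p0o-xxja-5o71wq1vjat3", status: "ready", url: "https://example.com/rendition.mp4" },
          { id: "placeholder-rendition-id", status: "importing" }
        ]
      }
    }
  }
};

console.log(readyRenditionUrls(sample)); // only the ready rendition's URL
```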
The response includes details of the rendition in the `data.attributes.outputs.renditions` array. The URL in the array is the rendition file saved with Shotstack that you can use when editing videos.

If the file is still being transferred or uploaded, the `status` will show as `queued` or `importing`. If there is an error, the status will be `failed`. If the source file is still being ingested, the rendition status will be `waiting`.

---

## Shotstack Ingestion Webhook

- Markdown URL: https://shotstack.io/docs/guide/ingesting-footage/polling-vs-webhook.md
- Web URL: https://shotstack.io/docs/guide/ingesting-footage/polling-vs-webhook/
- Description: Receive a webhook when an asset is ingested or a rendition processed

# Ingest: Polling vs Webhook

Ingesting files from URLs, direct uploads and processing renditions can take several seconds and sometimes minutes. There are two ways to know when the file transfer or processing is complete: polling the API or receiving a webhook.

### Polling the API

You can check the status by polling the [source file details](../sources#get-an-individual-source-files-details) endpoint with the source ID. When the `status` of the source file is `ready`, the file can be used in an edit or to create a rendition. You can also see the rendition status using the same endpoint request. Polling is a simple way to check the status of files but it is not the most efficient.

### Receiving a webhook

Instead of polling you can request a webhook callback. The webhook callback is sent to your server with details of the ingested file transfer, direct upload or rendition status and whether it succeeds or fails.

To receive a webhook, add the following to the root level of the JSON ingest request:

```json
"callback": "https://my-server.com/webhook.php"
```

Where `https://my-server.com/webhook.php` is your own URL and webhook receiving script, or subscriber.
The webhook payload for the upload or transfer of a source file to Shotstack will look like this:

```json
{
    "type": "ingest",
    "action": "source",
    "id": "zzytey4v-32km-kq1z-aftr-3kcuqi0brad2",
    "owner": "5ca6hu7s9k",
    "status": "ready",
    "url": "https://shotstack-ingest-api-v1-sources.s3.ap-southeast-2.amazonaws.com/5ca6hu7s9k/zzytey4v-32km-kq1z-aftr-3kcuqi0brad2/source.mp4",
    "error": null,
    "completed": "2023-01-02T01:47:37.260Z"
}
```

To learn more about setting up and using webhooks check the [webhook guide](/architecting-an-application/webhooks.md).

---

## Build videos with AI

- Markdown URL: https://shotstack.io/docs/guide/agents.md
- Web URL: https://shotstack.io/docs/guide/agents/index/
- Description: Use Shotstack from any AI tool — connect via MCP from a chat client, or install the CLI and Claude Code Skill in your terminal.

# Build videos with AI on Shotstack

Two ways to drive the Shotstack render API from an AI tool. Both follow the same [Edit JSON conventions](/agents/conventions).

## MCP Server — for chat clients and IDEs

A single endpoint at `https://mcp.shotstack.io/` that connects Shotstack to Claude, ChatGPT, Cursor, Claude Code, VS Code Copilot, Codex CLI, Gemini CLI, Windsurf, Zed, JetBrains, Goose, and Raycast. The agent composes Edit JSON, mounts the Studio canvas inline (chat clients) or returns short share links (terminal clients), and renders on demand.

→ [MCP Server setup](/agents/mcp-server)

## CLI + Claude Code Skill — for terminal agents

A terminal-native CLI plus a [Claude Code Skill](https://docs.claude.com/en/docs/claude-code/skills) that ships the same authoring conventions to coding agents (Claude Code, Cursor, Codex CLI, Gemini CLI, etc.). Install with `npm i -g @shotstack/cli`, or pull the skill with `npx skills add shotstack/shotstack-cli`.
→ [CLI + Skill setup](/agents/cli)

## Edit JSON conventions

The video timeline JSON has rules every agent needs to follow — track ordering is reversed, asset type names differ from CSS instinct, fonts must come from a specific allowlist. The same conventions guide ships with the CLI Skill and is returned by the MCP server's `get_shotstack_guide` tool.

→ [Authoring conventions](/agents/conventions)

## Plain text everywhere

Every doc URL on `shotstack.io/docs/guide/` accepts a `.md` suffix and returns plain markdown. The [`/llms.txt`](https://shotstack.io/docs/guide/llms.txt) and [`/llms-full.txt`](https://shotstack.io/docs/guide/llms-full.txt) indexes give an LLM the full corpus in one fetch.

---

## MCP Server

- Markdown URL: https://shotstack.io/docs/guide/agents/mcp-server.md
- Web URL: https://shotstack.io/docs/guide/agents/mcp-server/
- Description: Connect Shotstack to Claude, ChatGPT, Cursor, VS Code, JetBrains, and other AI clients.

# Shotstack MCP Server

[![npm version](https://img.shields.io/npm/v/@shotstack/shotstack-mcp-server.svg)](https://www.npmjs.com/package/@shotstack/shotstack-mcp-server)

:::info Beta
The Shotstack MCP server is currently in beta. Tools and behaviour may change without notice.
:::

A [Model Context Protocol](https://modelcontextprotocol.io/) server that lets any MCP-aware AI client compose and render Shotstack videos from natural language.

```
Endpoint: https://mcp.shotstack.io/              (HTTP/streamable, OAuth)
Local:    npx -y @shotstack/shotstack-mcp-server (stdio, SHOTSTACK_API_KEY env)
```

Get an API key at [app.shotstack.io](https://app.shotstack.io).

## Add it to your client

### Claude.ai · Claude Desktop

Settings → **Connectors** → Add custom connector → URL `https://mcp.shotstack.io/` → complete OAuth.

### Claude Code

```sh
claude mcp add --transport http shotstack https://mcp.shotstack.io
```

### ChatGPT

Settings → **Connectors** → Advanced → enable Developer mode → Create → URL `https://mcp.shotstack.io/`.
Developer mode requires Business, Enterprise, or Edu.

### Cursor

`~/.cursor/mcp.json`:

```json
{
  "mcpServers": {
    "shotstack": { "url": "https://mcp.shotstack.io/" }
  }
}
```

### VS Code (GitHub Copilot)

`.vscode/mcp.json` (VS Code 1.101+):

```json
{
  "servers": {
    "shotstack": { "type": "http", "url": "https://mcp.shotstack.io/" }
  }
}
```

### Codex CLI

```sh
codex mcp add shotstack --url https://mcp.shotstack.io
```

### Gemini CLI

```sh
gemini mcp add -s user --transport http shotstack https://mcp.shotstack.io/
```

### Windsurf

`~/.codeium/windsurf/mcp_config.json`:

```json
{
  "mcpServers": {
    "shotstack": { "serverUrl": "https://mcp.shotstack.io/" }
  }
}
```

### Zed

`settings.json`:

```json
{
  "context_servers": {
    "shotstack": { "url": "https://mcp.shotstack.io/" }
  }
}
```

### JetBrains IDEs

Settings → **Tools** → MCP → Add → URL `https://mcp.shotstack.io/`.

### Goose

`goose configure` → Add extension → **Remote Extension (Streaming HTTP)** → URL `https://mcp.shotstack.io/`.

### Raycast

Extensions → MCP → Add server → URL `https://mcp.shotstack.io/`.

### Local stdio (fallback)

For offline use or if your client doesn't support HTTP transport, run the server locally:

```json
{
  "mcpServers": {
    "shotstack": {
      "command": "npx",
      "args": ["-y", "@shotstack/shotstack-mcp-server"],
      "env": { "SHOTSTACK_API_KEY": "your_api_key" }
    }
  }
}
```

## What it does

| Tool | Purpose |
|---|---|
| `studio` | Mounts the inline Studio canvas in the chat. **Default for chat clients** — user clicks Render in the iframe. |
| `render_video` | Submits an Edit JSON to the render API directly. Use for automation or when the user says "just render it". |
| `get_render_status` | Polls a render. Returns `queued`/`fetching`/`rendering`/`saving`/`done`/`failed` and the output URL. |
| `create_studio_link` | Generates a `https://shotstack.studio/s/{slug}` short link for sharing. |
| `get_shotstack_guide` | Returns the Edit JSON authoring conventions. |
| `create_template`, `list_templates`, `get_template`, `render_template`, `delete_template` | Manage reusable templates with `{{PLACEHOLDER}}` merge fields. |

## How agents should use it

1. **Call `get_shotstack_guide` before composing Edit JSON.** Track ordering, asset types, fonts, and the top-5 mistakes are non-negotiable. The full ruleset is at [`/agents/conventions`](/agents/conventions).
2. **Default to `studio`, not `render_video`.** Mount the edit inline so the user can preview and click Render. Only call `render_video` directly when explicitly asked or when there's no human in the loop.
3. **Use public HTTPS URLs for assets.** Never hallucinate URLs; ask the user when unsure.

---

## CLI + Skill

- Markdown URL: https://shotstack.io/docs/guide/agents/cli.md
- Web URL: https://shotstack.io/docs/guide/agents/cli/
- Description: Use Shotstack from a terminal — npm CLI for shell agents, Claude Code Skill for IDE agents.

# Shotstack CLI

[![View on GitHub](https://img.shields.io/badge/GitHub-View_on_GitHub-blue?logo=github)](https://github.com/shotstack/shotstack-cli) [![npm version](https://img.shields.io/npm/v/@shotstack/cli.svg)](https://www.npmjs.com/package/@shotstack/cli)

:::info Beta
The Shotstack CLI and Claude Code Skill are currently in beta. Commands and conventions may change without notice.
:::

A terminal-native CLI for the Shotstack Edit API plus a [Claude Code Skill](https://docs.claude.com/en/docs/claude-code/skills) packaging the same conventions for IDE-based coding agents.

## Install

```sh
npm install -g @shotstack/cli
export SHOTSTACK_API_KEY=...
shotstack --help
```

Get an API key at [app.shotstack.io](https://app.shotstack.io). Stage credentials are free for testing.

## Two commands

```sh
# Submit an Edit JSON and poll until done.
shotstack render edit.json --watch
# → done https://shotstack-api-...-e21df7168d9a.mp4

# Open a draft in Studio for a human to review (no render credits).
shotstack studio edit.json
# → opens browser at https://shotstack.studio/s/abc12345

# Poll an existing render.
shotstack status --watch
```

## Four CLI rules for agents

1. **Pipe → `--output json`.** Default output is human-readable; pass `--output json` when scripting.
2. **Use `--watch`, not a polling loop.** Both `render` and `status` accept `--watch` and exit on terminal state.
3. **Fetch the schema before composing JSON.** Pull [`api.edit.json`](https://shotstack.io/docs/api/api.edit.json) and the [conventions](/agents/conventions) — LLM training data is often stale.
4. **Hand off to a human via `studio` when uncertain.** Don't burn render credits iterating; let a human click Render after reviewing.

Exit codes: `0` success, `1` permanent error, `2` transient (safe to retry).

## Claude Code Skill

The CLI repo ships a [SKILL.md](https://github.com/shotstack/shotstack-cli/blob/main/SKILL.md) that teaches Claude Code how to use the CLI and how to author Edit JSON correctly.

```sh
npx skills add shotstack/shotstack-cli
```

Once installed, Claude Code reads the skill on demand — references in `references/` (timeline layering, caption presets, SVG constraints, fonts, asset library, troubleshooting) load only when relevant. The skill works with any Anthropic Skills-compatible client.

## Conventions are shared with the MCP server

The skill's `shared/agent-core.md` is the same content the [MCP server](/agents/mcp-server) returns from its `get_shotstack_guide` tool, and the same content rendered at [`/agents/conventions`](/agents/conventions).
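The CLI's exit-code contract (0 success, 1 permanent, 2 transient) lends itself to a simple retry policy when wrapping the CLI in a script. A TypeScript sketch (the `classifyExit` helper is illustrative and not part of the CLI; it only encodes the documented codes):

```typescript
// Documented shotstack CLI exit codes: 0 success, 1 permanent error,
// 2 transient error (safe to retry).
type ExitAction = "done" | "fail" | "retry";

function classifyExit(code: number): ExitAction {
  switch (code) {
    case 0:
      return "done";
    case 2:
      return "retry"; // transient: safe to retry, ideally with backoff
    default:
      return "fail"; // 1 (or anything unexpected): do not retry
  }
}

console.log(classifyExit(0)); // → done
console.log(classifyExit(2)); // → retry
console.log(classifyExit(1)); // → fail
```

A wrapper script would re-run the `shotstack` command only when `classifyExit` returns `"retry"`.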
## Reference

- [Edit JSON conventions](/agents/conventions)
- [API reference](https://shotstack.io/docs/api/)

---

## Studio SDK

- Markdown URL: https://shotstack.io/docs/guide/studio-sdk/studio-sdk.md
- Web URL: https://shotstack.io/docs/guide/studio-sdk/studio-sdk/
- Description: Embed previews and video editing capabilities into your application

# Studio SDK

[![View on GitHub](https://img.shields.io/badge/GitHub-View_on_GitHub-blue?logo=github)](https://github.com/shotstack/shotstack-studio-sdk) [![npm version](https://img.shields.io/npm/v/@shotstack/shotstack-studio.svg)](https://www.npmjs.com/package/@shotstack/shotstack-studio)

A JavaScript SDK for browser-based video editing with timeline, canvas preview, and export. It provides a fully interactive editing environment that works with the [Shotstack data model](/docs/guide/getting-started/core-concepts/#edit).

## Interactive Examples

Try the Shotstack Studio in your preferred framework:

[![TypeScript](https://img.shields.io/badge/TypeScript-StackBlitz-blue?style=for-the-badge&logo=typescript)](https://stackblitz.com/fork/github/shotstack/shotstack-studio-sdk-demos/tree/master/typescript) [![React](https://img.shields.io/badge/React-StackBlitz-blue?style=for-the-badge&logo=react)](https://stackblitz.com/fork/github/shotstack/shotstack-studio-sdk-demos/tree/master/react) [![Vue](https://img.shields.io/badge/Vue-StackBlitz-blue?style=for-the-badge&logo=vue.js)](https://stackblitz.com/fork/github/shotstack/shotstack-studio-sdk-demos/tree/master/vue) [![Angular](https://img.shields.io/badge/Angular-StackBlitz-blue?style=for-the-badge&logo=angular)](https://stackblitz.com/fork/github/shotstack/shotstack-studio-sdk-demos/tree/master/angular) [![Next.js](https://img.shields.io/badge/Next.js-StackBlitz-blue?style=for-the-badge&logo=next.js)](https://stackblitz.com/fork/github/shotstack/shotstack-studio-sdk-demos/tree/master/nextjs)

## Installation

```bash
npm install @shotstack/shotstack-studio
```

```bash
yarn add @shotstack/shotstack-studio
```

## Quick Start

```typescript
import { Edit, Canvas, Controls, Timeline, UIController } from "@shotstack/shotstack-studio";

// 1) Load a template
const response = await fetch("https://shotstack-assets.s3.amazonaws.com/templates/hello-world/hello.json");
const template = await response.json();

// 2) Create core components
const edit = new Edit(template);
const canvas = new Canvas(edit);
const ui = UIController.create(edit, canvas);

// 3) Load canvas and edit
await canvas.load();
await edit.load();

// 4) Register toolbar buttons
ui.registerButton({ id: "text", icon: ``, tooltip: "Add Text" });

// 5) Handle button clicks
ui.on("button:text", ({ position }) => {
  edit.addTrack(0, {
    clips: [
      {
        asset: {
          type: "rich-text",
          text: "Title",
          font: { family: "Work Sans", size: 72, weight: 600, color: "#ffffff", opacity: 1 },
          align: { horizontal: "center", vertical: "middle" }
        },
        start: position,
        length: 5,
        width: 500,
        height: 200
      }
    ]
  });
});

// 6) Initialize the Timeline
const timelineContainer = document.querySelector("[data-shotstack-timeline]") as HTMLElement;
const timeline = new Timeline(edit, timelineContainer);
await timeline.load();

// 7) Add keyboard controls
const controls = new Controls(edit);
await controls.load();

// 8) Add event handlers
edit.events.on("clip:selected", data => {
  console.log("Clip selected:", data);
});
```

Your HTML must include both containers:

```html
<div data-shotstack-studio></div>
<div data-shotstack-timeline></div>
```

## Main Components

### Edit

`Edit` is the runtime editing session and source of truth for document mutations.

```typescript
import { Edit } from "@shotstack/shotstack-studio";

const edit = new Edit(templateJson);
await edit.load();
await edit.loadEdit(nextTemplateJson);

// Playback (seconds)
edit.play();
edit.pause();
edit.seek(2);
edit.stop();

// Mutations
await edit.addTrack(0, { clips: [] });
await edit.addClip(0, {
  asset: { type: "image", src: "https://example.com/image.jpg" },
  start: 0,
  length: 5
});
await edit.updateClip(0, 0, { length: 6 });
await edit.deleteClip(0, 0);

// History
await edit.undo();
await edit.redo();

// Track operations
await edit.deleteTrack(0);

// Output settings
await edit.setOutputSize(1920, 1080);
await edit.setOutputFps(30);
await edit.setOutputFormat("mp4");
await edit.setOutputResolution("hd");
await edit.setOutputAspectRatio("16:9");
await edit.setTimelineBackground("#000000");

// Read state
const time = edit.playbackTime;
const playing = edit.isPlaying;
const clip = edit.getClip(0, 0);
const track = edit.getTrack(0);
const snapshot = edit.getEdit();
const durationSeconds = edit.totalDuration;
```

#### Events

Listen using string event names:

```typescript
const unsubscribeClipSelected = edit.events.on("clip:selected", data => {
  console.log("Selected clip", data.trackIndex, data.clipIndex);
});

edit.events.on("clip:updated", data => {
  console.log("Updated from", data.previous, "to", data.current);
});

edit.events.on("playback:play", () => {
  console.log("Playback started");
});

// Unsubscribe when no longer needed
unsubscribeClipSelected();
```

Available event names:

| Category | Event Names |
| --- | --- |
| Playback | `playback:play`, `playback:pause` |
| Timeline | `timeline:updated`, `timeline:backgroundChanged` |
| Clip lifecycle | `clip:added`, `clip:selected`, `clip:updated`, `clip:deleted`, `clip:restored`, `clip:copied`, `clip:loadFailed`, `clip:unresolved` |
| Selection | `selection:cleared` |
| Edit state | `edit:changed`, `edit:undo`, `edit:redo` |
| Track | `track:added`, `track:removed` |
| Duration | `duration:changed` |
| Output | `output:resized`, `output:resolutionChanged`, `output:aspectRatioChanged`, `output:fpsChanged`, `output:formatChanged`, `output:destinationsChanged` |
| Merge fields | `mergefield:changed` |

### Canvas

`Canvas` renders the current edit.

```typescript
import { Canvas } from "@shotstack/shotstack-studio";

const canvas = new Canvas(edit);
await canvas.load();

canvas.centerEdit();
canvas.zoomToFit();
canvas.setZoom(1.25);
canvas.resize();
const zoom = canvas.getZoom();
canvas.dispose();
```

### UIController

`UIController` manages built-in UI wiring and extensible button events.

```typescript
import { UIController } from "@shotstack/shotstack-studio";

const ui = UIController.create(edit, canvas, { mergeFields: true });

ui.registerButton({ id: "add-title", icon: `...`, tooltip: "Add Title" });

const unsubscribe = ui.on("button:add-title", ({ position }) => {
  console.log("Button clicked at", position, "seconds");
});

ui.unregisterButton("add-title");
unsubscribe();
ui.dispose();
```

### Timeline

`Timeline` provides visual clip editing.

```typescript
import { Timeline } from "@shotstack/shotstack-studio";

const container = document.querySelector("[data-shotstack-timeline]") as HTMLElement;
const timeline = new Timeline(edit, container);
await timeline.load();

timeline.zoomIn();
timeline.zoomOut();
timeline.dispose();
```

### Controls

`Controls` enables keyboard playback and edit shortcuts.

```typescript
import { Controls } from "@shotstack/shotstack-studio";

const controls = new Controls(edit);
await controls.load();
```

Available keyboard shortcuts:

| Key | Action |
| --- | --- |
| Space | Play / Pause |
| J | Stop |
| K | Pause |
| L | Play |
| Left Arrow | Seek backward |
| Right Arrow | Seek forward |
| Shift + Arrow | Seek larger amount |
| Comma | Step backward one frame |
| Period | Step forward one frame |

### VideoExporter

`VideoExporter` exports a timeline render from the browser runtime.

```typescript
import { VideoExporter } from "@shotstack/shotstack-studio";

const exporter = new VideoExporter(edit, canvas);
await exporter.export("my-video.mp4", 25);
```

## Custom UI Buttons

Use `UIController` to register and handle custom button actions.

```typescript
ui.registerButton({ id: "text", icon: `...`, tooltip: "Add Text", dividerBefore: true });

ui.on("button:text", ({ position, selectedClip }) => {
  console.log("Current time (seconds):", position);
  console.log("Current selection:", selectedClip);
});

ui.unregisterButton("text");
```

## API Reference

For schema-level details and type definitions, see the [Shotstack API Reference](https://shotstack.io/docs/api/#tocs_edit).

---

## API Reference

- Markdown URL: https://shotstack.io/docs/guide/api-reference.md
- Web URL: https://shotstack.io/docs/guide/api-reference/
- Description: Comprehensive API reference with JSON and code examples.

# API Reference

View our comprehensive [API reference documentation](https://shotstack.io/docs/api/).

---