Description

Twelve Labs is an AI platform focused on advanced video understanding technology. It aims to empower applications with deep semantic video analysis capabilities, enabling them to comprehend and extract meaningful insights from video content.

The platform integrates various AI models to perform tasks like object detection, action recognition, and scene understanding. By leveraging state-of-the-art machine learning techniques, Twelve Labs seeks to enhance how businesses and developers interact with video data, making it more accessible and actionable.

Supported Operations

TwelveLabs Video Understanding API

Open-ended analysis

This endpoint analyzes your videos and creates fully customizable text based on your prompts, including but not limited to tables of content, action items, memos, and detailed analyses.

<Note title="Notes">
- This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.
- This endpoint supports streaming responses. For details on integrating this feature into your application, refer to the [Open-ended analysis](/v1.3/docs/guides/analyze-videos/open-ended-analysis#streaming-responses) guide.
</Note>
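As an illustration, a request to this endpoint might be assembled as in the following Python sketch. The endpoint path (`/analyze`), base URL, header name, and field names are assumptions made here for illustration only; consult the API reference for the exact contract.

```python
# Hypothetical sketch of an open-ended analysis call. Paths, headers, and
# field names are assumptions for illustration, not the confirmed API.

BASE_URL = "https://api.twelvelabs.io/v1.3"  # assumed base URL

def build_analyze_body(video_id: str, prompt: str, stream: bool = False) -> dict:
    """Assemble the JSON body for an open-ended analysis request."""
    return {"video_id": video_id, "prompt": prompt, "stream": stream}

def analyze(api_key: str, video_id: str, prompt: str) -> dict:
    import requests  # imported lazily so the payload helper works without it
    resp = requests.post(
        f"{BASE_URL}/analyze",  # assumed endpoint path
        headers={"x-api-key": api_key},
        json=build_analyze_body(video_id, prompt),
    )
    resp.raise_for_status()
    return resp.json()
```

Setting `stream=True` in the body would request the streaming variant described in the guide linked above.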

Create embeddings for text, image, and audio

This method creates embeddings for text, image, and audio content.

Before you create an embedding, ensure that your image or audio files meet the following prerequisites:
- [Image embeddings](/v1.3/docs/guides/create-embeddings/image#prerequisites)
- [Audio embeddings](/v1.3/docs/guides/create-embeddings/audio#prerequisites)

Parameters for embeddings:
- **Common parameters**:
  - `model_name`: The video understanding model you want to use. Example: "Marengo-retrieval-2.7".
- **Text embeddings**:
  - `text`: Text for which to create an embedding.
- **Image embeddings**: Provide one of the following:
  - `image_url`: Publicly accessible URL of your image file.
  - `image_file`: Local image file.
- **Audio embeddings**: Provide one of the following:
  - `audio_url`: Publicly accessible URL of your audio file.
  - `audio_file`: Local audio file.

<Note title="Notes">
- The Marengo video understanding model generates embeddings for all modalities in the same latent space. This shared space enables any-to-any searches across different types of content.
- You can create multiple types of embeddings in a single API call.
- Audio embeddings combine generic sound and human speech in a single embedding. For videos with transcriptions, you can retrieve transcriptions and then [create text embeddings](/v1.3/api-reference/text-image-audio-embeddings/create-text-image-audio-embeddings) from these transcriptions.
</Note>
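Since multiple embedding types can be created in one call, a client might collect only the parameters that were actually provided into a single form payload, as in this sketch. The field names mirror the parameters listed above; the endpoint path (`/embed`) and base URL are assumptions for illustration.

```python
# Hypothetical sketch: one form payload for a combined text + image + audio
# embedding request. The endpoint path and base URL are assumptions.
from typing import Optional

def build_embedding_fields(model_name: str,
                           text: Optional[str] = None,
                           image_url: Optional[str] = None,
                           audio_url: Optional[str] = None) -> dict:
    """Collect only the parameters that were provided into one payload."""
    fields = {"model_name": model_name}
    if text is not None:
        fields["text"] = text
    if image_url is not None:
        fields["image_url"] = image_url
    if audio_url is not None:
        fields["audio_url"] = audio_url
    return fields

def create_embeddings(api_key: str, **kwargs) -> dict:
    import requests  # imported lazily so the helper above works without it
    resp = requests.post(
        "https://api.twelvelabs.io/v1.3/embed",  # assumed endpoint path
        headers={"x-api-key": api_key},
        data=build_embedding_fields(**kwargs),
    )
    resp.raise_for_status()
    return resp.json()
```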

List video embedding tasks

This method returns a list of the video embedding tasks in your account. The platform returns your video embedding tasks sorted by creation date, with the newest at the top of the list.

<Note title="Notes">
- Video embeddings are stored for seven days.
- When you invoke this method without specifying the `started_at` and `ended_at` parameters, the platform returns all the video embedding tasks created within the last seven days.
</Note>
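A client narrowing the listing to a time window might build the `started_at` and `ended_at` parameters like this. The ISO 8601 timestamp encoding is an assumption made for illustration; omitting both parameters falls back to the last seven days.

```python
# Hypothetical sketch: building a started_at/ended_at query window.
# The timestamp format expected by the API is an assumption here.
from datetime import datetime, timedelta, timezone

def build_list_params(days_back: int = 7) -> dict:
    """Build query parameters covering the last `days_back` days."""
    now = datetime.now(timezone.utc)
    return {
        "started_at": (now - timedelta(days=days_back)).isoformat(timespec="seconds"),
        "ended_at": now.isoformat(timespec="seconds"),
    }
```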

Create a video embedding task

This method creates a new video embedding task that uploads a video to the platform and creates one or more video embeddings.

Upload options:
- **Local file**: Use the `video_file` parameter.
- **Publicly accessible URL**: Use the `video_url` parameter.

Specify at least one option. If both are provided, `video_url` takes precedence.

<Accordion title="Video requirements">
The videos you wish to upload must meet the following requirements:
- **Video resolution**: Must be at least 360x360 and must not exceed 3840x2160.
- **Aspect ratio**: Must be one of 1:1, 4:3, 4:5, 5:4, 16:9, 9:16, or 17:9.
- **Video and audio formats**: Your video files must be encoded in the video and audio formats listed on the [FFmpeg Formats Documentation](https://ffmpeg.org/ffmpeg-formats.html) page. For videos in other formats, contact us at support@twelvelabs.io.
- **Duration**: Must be between 4 seconds and 2 hours (7,200s).
- **File size**: Must not exceed 2 GB. If you require different options, contact us at support@twelvelabs.io.
</Accordion>

<Note title="Notes">
- The Marengo video understanding model generates embeddings for all modalities in the same latent space. This shared space enables any-to-any searches across different types of content.
- Video embeddings are stored for seven days.
- The platform supports uploading video files that can play without additional user interaction or custom video players. Ensure your URL points to the raw video file, not a web page containing the video. Links to third-party hosting sites, cloud storage services, or videos requiring extra steps to play are not supported.
</Note>
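The at-least-one requirement and the `video_url` precedence rule can be enforced client-side before the upload, as in this sketch. Only the precedence rule comes from the description above; the helper itself is illustrative.

```python
# Hypothetical sketch: validate upload options and apply the documented
# precedence rule (video_url wins when both options are given).
from typing import Optional

def build_task_fields(model_name: str,
                      video_url: Optional[str] = None,
                      video_file: Optional[str] = None) -> dict:
    """Return the form fields for a video embedding task request."""
    if video_url is None and video_file is None:
        raise ValueError("Specify at least one of video_url or video_file.")
    fields = {"model_name": model_name}
    if video_url is not None:
        fields["video_url"] = video_url   # takes precedence when both are set
    else:
        fields["video_file"] = video_file
    return fields
```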

Retrieve video embeddings

This method retrieves embeddings for a specific video embedding task. Ensure the task status is `ready` before invoking this method. Refer to the [Retrieve the status of a video embedding task](/v1.3/api-reference/video-embeddings/retrieve-video-embedding-task-status) page for instructions on checking the task status.

Retrieve the status of a video embedding task

This method retrieves the status of a video embedding task. Check the status to determine when you can retrieve the embeddings. A task can have one of the following statuses:
- `processing`: The platform is creating the embeddings.
- `ready`: Processing is complete. Retrieve the embeddings by invoking the [`GET`](/v1.3/api-reference/video-embeddings/retrieve-video-embeddings) method of the `/embed/tasks/{task_id}` endpoint.
- `failed`: The task could not be completed, and the embeddings haven't been created.
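The status lifecycle above lends itself to a simple polling loop: keep checking until the task leaves `processing`. In this sketch the status fetcher is injected so the loop can be exercised without network access; in practice it would issue the `GET` request described above.

```python
# Polling sketch for the processing -> ready/failed lifecycle. The
# fetch_status callable stands in for the real status request.
import time
from typing import Callable

def wait_for_task(task_id: str,
                  fetch_status: Callable[[str], str],
                  interval_s: float = 5.0,
                  max_attempts: int = 60) -> str:
    """Poll until the task reaches `ready` or `failed`, then return that status."""
    for _ in range(max_attempts):
        status = fetch_status(task_id)
        if status in ("ready", "failed"):
            return status
        time.sleep(interval_s)
    raise TimeoutError(f"Task {task_id} still processing after {max_attempts} polls.")
```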

Titles, topics, and hashtags

This endpoint analyzes videos and generates titles, topics, and hashtags.

<Note title="Note">
This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.
</Note>

List indexes

This method returns a list of the indexes in your account. The API returns indexes sorted by creation date, with the oldest indexes at the top of the list.

Create an index

This method creates an index.

Retrieve an index

This method retrieves details about the specified index.

Update an index

This method updates the name of the specified index.

Delete an index

This method deletes the specified index and all the videos within it. This action cannot be undone.

List videos

This method returns a list of the videos in the specified index. By default, the API returns your videos sorted by creation date, with the newest at the top of the list.

Retrieve video information

This method retrieves information about the specified video.

Partial update video information

Use this method to update one or more fields of a video's metadata. You can also delete a field by setting it to `null`.
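In JSON, deleting a field this way means sending it with a `null` value, which in Python corresponds to `None`. The sketch below shows only the serialization; the `user_metadata` wrapper key is an assumption for illustration.

```python
# Hypothetical sketch of a partial metadata update body. None values
# serialise to JSON null, which deletes the field per the description above.
import json

def build_metadata_patch(updates: dict) -> str:
    """Serialise a partial update; None values become null and delete fields."""
    return json.dumps({"user_metadata": updates})  # wrapper key is assumed
```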

Delete video information

This method deletes all the information about the specified video. This action cannot be undone.

Make any-to-video search requests

Use this endpoint to search for relevant matches in an index using text or media queries.

**Text queries**:
- Use the `query_text` parameter to specify your query.

**Media queries**:
- Set the `query_media_type` parameter to the corresponding media type (example: `image`).
- Specify one of the following parameters:
  - `query_media_url`: Publicly accessible URL of your media file.
  - `query_media_file`: Local media file.

If both `query_media_url` and `query_media_file` are specified in the same request, `query_media_url` takes precedence.

<Accordion title="Image requirements">
Your images must meet the following requirements:
- **Format**: JPEG and PNG.
- **Dimensions**: Must be at least 64 x 64 pixels.
- **Size**: Must not exceed 5 MB.
</Accordion>

<Note title="Note">
This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.
</Note>
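A client can assemble either a text query or a media query from these parameters, as in the following sketch. The parameter names come from the description above; the `index_id` field and the `image` default are assumptions for illustration.

```python
# Hypothetical sketch: assemble any-to-video search parameters for either
# a text query or a media query. Field names follow the description above.
from typing import Optional

def build_search_fields(index_id: str,
                        query_text: Optional[str] = None,
                        query_media_type: Optional[str] = None,
                        query_media_url: Optional[str] = None) -> dict:
    """Return form fields for a text or media search request."""
    if query_text is None and query_media_url is None:
        raise ValueError("Provide query_text or a media query.")
    fields = {"index_id": index_id}
    if query_text is not None:
        fields["query_text"] = query_text
    else:
        fields["query_media_type"] = query_media_type or "image"  # assumed default
        fields["query_media_url"] = query_media_url
    return fields
```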

Retrieve a specific page of search results

Use this endpoint to retrieve a specific page of search results.

<Note title="Note">
When you use pagination, you will not be charged for retrieving subsequent pages of results.
</Note>
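A typical pattern is to walk pages until no further page token is returned. In this sketch the page fetcher is injected so the loop runs without network access; the `(items, next_token)` shape it returns is an assumption for illustration.

```python
# Hypothetical pagination sketch. fetch_page stands in for a GET on the
# retrieve-page endpoint and returns (items, next_page_token), with the
# token None on the last page; that shape is an assumption.
from typing import Callable, List, Optional, Tuple

def collect_all_results(first_token: str,
                        fetch_page: Callable[[str], Tuple[List[dict], Optional[str]]]) -> List[dict]:
    """Follow page tokens until exhausted, accumulating all results."""
    results: List[dict] = []
    token: Optional[str] = first_token
    while token is not None:
        items, token = fetch_page(token)
        results.extend(items)
    return results
```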

Summaries, chapters, or highlights

This endpoint analyzes videos and generates summaries, chapters, or highlights. Optionally, you can provide a prompt to customize the output.

<Note title="Note">
This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.
</Note>
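A request body for this endpoint might select one of the three output kinds and optionally attach a prompt, as in this sketch. The `type` values are inferred from the output kinds named above, and all field names are assumptions for illustration.

```python
# Hypothetical sketch of a summarization request body. The type values and
# field names are assumptions inferred from the description above.

VALID_TYPES = {"summary", "chapter", "highlight"}

def build_summarize_body(video_id: str, type_: str, prompt: str = "") -> dict:
    """Build a body selecting one output kind, with an optional prompt."""
    if type_ not in VALID_TYPES:
        raise ValueError(f"type must be one of {sorted(VALID_TYPES)}")
    body = {"video_id": video_id, "type": type_}
    if prompt:
        body["prompt"] = prompt  # optional output customization
    return body
```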

List video indexing tasks

This method returns a list of the video indexing tasks in your account. The API returns your video indexing tasks sorted by creation date, with the newest at the top of the list.

Create a video indexing task

This method creates a video indexing task that uploads and indexes a video.

Upload options:
- **Local file**: Use the `video_file` parameter.
- **Publicly accessible URL**: Use the `video_url` parameter.

<Accordion title="Video requirements">
The videos you wish to upload must meet the following requirements:
- **Video resolution**: Must be at least 360x360 and must not exceed 3840x2160.
- **Aspect ratio**: Must be one of 1:1, 4:3, 4:5, 5:4, 16:9, 9:16, or 17:9.
- **Video and audio formats**: Your video files must be encoded in the video and audio formats listed on the [FFmpeg Formats Documentation](https://ffmpeg.org/ffmpeg-formats.html) page. For videos in other formats, contact us at support@twelvelabs.io.
- **Duration**: For Marengo, must be between 4 seconds and 2 hours (7,200s). For Pegasus, must be between 4 seconds and 60 minutes (3,600s). In a future release, the maximum duration for Pegasus will be 2 hours (7,200s).
- **File size**: Must not exceed 2 GB. If you require different options, contact us at support@twelvelabs.io.

If both Marengo and Pegasus are enabled for your index, the most restrictive prerequisites will apply.
</Accordion>

<Note title="Notes">
- The platform supports video URLs that can play without additional user interaction or custom video players. Ensure your URL points to the raw video file, not a web page containing the video. Links to third-party hosting sites, cloud storage services, or videos requiring extra steps to play are not supported.
- This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.
</Note>
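The duration limits and the "most restrictive prerequisites apply" rule can be checked before uploading. This sketch encodes the limits quoted above; the model keys and the helper itself are illustrative.

```python
# Pre-upload duration check mirroring the limits quoted above. When several
# models are enabled, the most restrictive window applies.

DURATION_LIMITS_S = {           # (min, max) duration per model, in seconds
    "marengo": (4, 7_200),      # 4 s .. 2 h
    "pegasus": (4, 3_600),      # 4 s .. 60 min (current limit, per the text)
}

def duration_ok(duration_s: float, enabled_models: list) -> bool:
    """True if the duration satisfies every enabled model's window."""
    lows, highs = zip(*(DURATION_LIMITS_S[m] for m in enabled_models))
    return max(lows) <= duration_s <= min(highs)
```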

Import videos

An import represents the process of uploading and indexing all videos from the specified integration.

This method initiates an asynchronous import and returns two lists:
- Videos that will be imported.
- Videos that will not be imported, typically because they do not meet the prerequisites of all enabled video understanding models for your index. Note that the most restrictive prerequisites among the enabled models will apply.

The actual uploading and indexing of videos occur asynchronously after you invoke this method. To monitor the status of each upload, use the [Retrieve import status](/v1.3/api-reference/tasks/cloud-to-cloud-integrations/get-status) method.

<Accordion title="Video requirements">
The videos you wish to upload must meet the following requirements:
- **Video resolution**: Must be at least 360x360 and must not exceed 3840x2160.
- **Aspect ratio**: Must be one of 1:1, 4:3, 4:5, 5:4, 16:9, 9:16, or 17:9.
- **Video and audio formats**: Your video files must be encoded in the video and audio formats listed on the [FFmpeg Formats Documentation](https://ffmpeg.org/ffmpeg-formats.html) page. For videos in other formats, contact us at support@twelvelabs.io.
- **Duration**: For Marengo, must be between 4 seconds and 2 hours (7,200s). For Pegasus, must be between 4 seconds and 60 minutes (3,600s). In a future release, the maximum duration for Pegasus will be 2 hours (7,200s).
- **File size**: Must not exceed 2 GB. If you require different options, contact us at support@twelvelabs.io.

If both Marengo and Pegasus are enabled for your index, the most restrictive prerequisites will apply.
</Accordion>

<Note title="Notes">
- Before importing videos, you must set up an integration. For details, see the [Set up an integration](/v1.3/docs/advanced/cloud-to-cloud-integrations#set-up-an-integration) section.
- By default, the platform checks for duplicate files using hashes within the target index and will not upload the same video to the same index twice. However, the same video can exist in multiple indexes. To bypass duplicate checking entirely and import duplicate videos into the same index, set the `incremental_import` parameter to `false`.
- Only one import job can run at a time. To start a new import, wait for the current job to complete. Use the [`GET`](/v1.3/api-reference/tasks/cloud-to-cloud-integrations/get-status) method of the `/tasks/transfers/import/{integration-id}/logs` endpoint to retrieve a list of your import jobs, including their creation time, completion time, and processing status for each video file.
</Note>
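Because only one import job can run at a time, a client might consult the import logs before starting a new job, as in this sketch. The log fetcher is injected so the check runs without network access, and the `ended_at` field used to detect a running job is an assumption for illustration.

```python
# Hypothetical guard for the single-job constraint described above.
# fetch_logs stands in for a GET on the import logs endpoint; the
# log-entry shape (an ended_at field) is an assumption.
from typing import Callable, List

def can_start_import(integration_id: str,
                     fetch_logs: Callable[[str], List[dict]]) -> bool:
    """True when no previous import job for this integration is still running."""
    logs = fetch_logs(integration_id)
    return all(job.get("ended_at") is not None for job in logs)
```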

Retrieve import logs

This endpoint returns a chronological list of import operations for the specified integration. The list is sorted by creation date, with the oldest imports first. Each item in the list contains:
- The number of videos in each status.
- Detailed error information for failed uploads, including filenames and error messages.

Use this endpoint to track import progress and troubleshoot potential issues across multiple operations.

Retrieve import status

This method retrieves the current status for each video from a specified integration and index. It returns an object containing lists of videos grouped by status. See the [Task object](/v1.3/api-reference/tasks/the-task-object) page for details on each status.

Retrieve a video indexing task

This method retrieves a video indexing task.

Delete a video indexing task

This method deletes the specified video indexing task. This action cannot be undone. Note the following:
- You can only delete video indexing tasks whose status is `ready` or `failed`.
- If the status of your video indexing task is `ready`, you must first delete the video vector associated with your video indexing task by calling the [`DELETE`](/v1.3/api-reference/videos/delete) method of the `/indexes/videos` endpoint.
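The ordering constraint above (delete the associated video first when the task is `ready`, then delete the task) can be captured in a small helper. Here `client` is an illustrative stand-in exposing `delete_video` and `delete_task` methods, not a real SDK interface.

```python
# Hypothetical sketch of the two-step deletion described above. The client
# object and its method names are illustrative stand-ins.

def delete_indexing_task(client, task: dict) -> None:
    """Delete a video indexing task, removing its video first when required."""
    if task["status"] not in ("ready", "failed"):
        raise RuntimeError("Only `ready` or `failed` tasks can be deleted.")
    if task["status"] == "ready":
        # The associated video must be removed first (DELETE /indexes/videos).
        client.delete_video(task["index_id"], task["video_id"])
    client.delete_task(task["_id"])
```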

Details
Preview

This item is available for early access. It is still in development and may contain experimental features or limitations.

Last Update

5 days ago

Includes
twelve-labs-api-client