Sora-2 Video Generation Guide
A complete guide to creating, iterating on, and managing videos with Sora-2.
Overview
Sora-2 is Apifree’s advanced video generation model, capable of creating high-quality video content from natural language prompts or images. The model is built on multimodal diffusion technology with deep understanding of 3D space, motion, and scene continuity. The Video Generation API provides five main endpoints, each with distinct capabilities:
- Create video: Start a new render job from a prompt, with optional reference inputs or a remix ID
- Get video status: Retrieve the current state of a render job and monitor its progress
- Download video: Fetch the finished MP4 once the job is completed
- List videos: Enumerate your videos with pagination for history, dashboards, or housekeeping
- Delete videos: Remove an individual video from storage
Models
The Sora-2 model comes in two variants, each tailored for different use cases.
Sora-2
sora-2 is designed for speed and flexibility. It’s ideal for the exploration phase, when you’re experimenting with tone, structure, or visual style and need quick feedback rather than perfect fidelity.
It generates good quality results quickly, making it well suited for rapid iteration, concepting, and rough cuts. sora-2 is often more than sufficient for social media content, prototypes, and scenarios where turnaround time matters more than ultra-high fidelity.
Sora-2 Pro
sora-2-pro produces higher quality results. It’s the better choice when you need production-quality output.
sora-2-pro takes longer to render and is more expensive to run, but it produces more polished, stable results. It’s best for high-resolution cinematic footage, marketing assets, and any situation where visual precision is critical.
Generate a video
Generating a video is an asynchronous process:
- When you call the POST /videos endpoint, the API returns a job object with an id and an initial status
- You can either poll the GET /videos/{video_id} endpoint until the status transitions to completed, or – for a more efficient approach – use webhooks (see the webhooks section below) to be notified automatically when the job finishes
- Once the job has reached the completed state, you can fetch the final MP4 file with GET /videos/{video_id}/content
Start a render job
Start by calling POST /videos with a text prompt and the required parameters. The prompt defines the creative look and feel – subjects, camera, lighting, and motion – while parameters like size and seconds control the video’s resolution and length.
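As a rough sketch with the requests library – the base URL, the bearer-token header, and the default size and seconds values below are assumptions, not confirmed parameters; substitute the ones from your API reference:

```python
import os
import requests

# Assumed base URL and bearer-token auth -- substitute your real endpoint and key.
BASE_URL = "https://api.example.com/v1"
HEADERS = {"Authorization": f"Bearer {os.environ.get('API_KEY', '')}"}

def build_create_payload(prompt, model="sora-2", size="1280x720", seconds=8):
    """Assemble the JSON body for POST /videos. The size and seconds
    defaults here are illustrative; use the values your project needs."""
    return {"prompt": prompt, "model": model, "size": size, "seconds": seconds}

def create_video(prompt, **kwargs):
    """Start a render job; returns the job object with its id and status."""
    resp = requests.post(f"{BASE_URL}/videos", headers=HEADERS,
                         json=build_create_payload(prompt, **kwargs))
    resp.raise_for_status()
    return resp.json()
```

A call like `create_video("Wide shot of a child flying a red kite in a grassy park")` returns the job object whose id you then pass to the status and download endpoints.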
The response contains an id and an initial status such as queued or in_progress, which means the render job has started.
Guardrails and restrictions
The API enforces several content restrictions:
- Only content suitable for audiences under 18 (a setting to bypass this restriction will be available in the future)
- Copyrighted characters and copyrighted music will be rejected
- Real people—including public figures—cannot be generated
- Input images with faces of humans are currently rejected
Effective prompting
For best results, describe shot type, subject, action, setting, and lighting. For example:
- “Wide shot of a child flying a red kite in a grassy park, golden hour sunlight, camera slowly pans upward.”
- “Close-up of a steaming coffee cup on a wooden table, morning light through blinds, soft depth of field.”
Monitor progress
Video generation takes time. Depending on the model, API load, and resolution, a single render may take several minutes. You can poll the API to request status updates, or you can get notified via a webhook.
Poll the status endpoint
Call GET /videos/{video_id} with the id returned from the create call. The response shows the job’s current status, progress percentage (if available), and any errors.
Typical states are queued, in_progress, completed, and failed. Poll at a reasonable interval (for example, every 10–20 seconds), use exponential backoff if necessary, and provide feedback to users that the job is still in progress.
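A minimal polling loop with exponential backoff might look like the following sketch; the base URL and auth header are assumptions, and the progress field name is taken from the response description above:

```python
import time
import requests

BASE_URL = "https://api.example.com/v1"  # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

TERMINAL_STATES = {"completed", "failed"}

def next_interval(interval, factor=1.5, cap=60.0):
    """Exponential backoff: grow the polling interval up to a cap."""
    return min(interval * factor, cap)

def wait_for_video(video_id, interval=10.0):
    """Poll GET /videos/{video_id} until the job reaches a terminal state."""
    while True:
        resp = requests.get(f"{BASE_URL}/videos/{video_id}", headers=HEADERS)
        resp.raise_for_status()
        job = resp.json()
        if job["status"] in TERMINAL_STATES:
            return job
        # Surface progress to the user so they know the job is still running.
        print(f"status={job['status']} progress={job.get('progress', '?')}")
        time.sleep(interval)
        interval = next_interval(interval)
```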
Use webhooks for notifications
Instead of polling job status repeatedly with GET, register a webhook to be notified automatically when a video generation completes or fails.
Webhooks can be configured in your webhook settings page. When a job finishes, the API emits one of two event types: video.completed and video.failed. Each event includes the ID of the job that triggered it.
Example webhook payload:
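The exact payload shape is not documented here; the following is an illustrative sketch in which the `type` and `data.id` field names are assumptions (the event types and the presence of the job ID come from the description above):

```json
{
  "type": "video.completed",
  "data": {
    "id": "video_abc123"
  }
}
```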
Retrieve results
Download the MP4
Once the job reaches status completed, fetch the MP4 with GET /videos/{video_id}/content. This endpoint streams the binary video data and returns standard content headers, so you can either save the file directly to disk or pipe it to cloud storage.
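A streaming download sketch with requests (base URL and auth header assumed); streaming in chunks avoids buffering the whole MP4 in memory:

```python
import requests

BASE_URL = "https://api.example.com/v1"  # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def content_url(video_id):
    """Build the download URL for a completed video."""
    return f"{BASE_URL}/videos/{video_id}/content"

def download_video(video_id, path):
    """Stream the MP4 to disk in 1 MiB chunks."""
    with requests.get(content_url(video_id), headers=HEADERS, stream=True) as resp:
        resp.raise_for_status()
        with open(path, "wb") as f:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                f.write(chunk)
```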
Download supporting assets
For each completed video, you can also download a thumbnail and a spritesheet. These are lightweight assets useful for previews, scrubbers, or catalog displays. Use the variant query parameter to specify what you want to download. The default is variant=video for the MP4.
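A sketch of variant downloads; the base URL and auth are assumed, and the exact variant values `thumbnail` and `spritesheet` are inferred from the asset names above – confirm them against the API reference:

```python
import requests

BASE_URL = "https://api.example.com/v1"  # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def asset_url(video_id, variant="video"):
    """variant is 'video' (default) or, assumed, 'thumbnail' / 'spritesheet'."""
    return f"{BASE_URL}/videos/{video_id}/content?variant={variant}"

def download_asset(video_id, variant, path):
    """Fetch one asset variant for a completed video and save it to disk."""
    resp = requests.get(asset_url(video_id, variant), headers=HEADERS)
    resp.raise_for_status()
    with open(path, "wb") as f:
        f.write(resp.content)
```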
Use image references
You can guide a generation with an input image, which acts as the first frame of your video. This is useful if you need the output video to preserve the look of a brand asset, a character, or a specific environment. Include an image file as the input_reference parameter in your POST /videos request. The image must match the target video’s resolution (size).
Supported file formats are image/jpeg, image/png, and image/webp.
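A multipart-upload sketch, assuming the base URL, auth header, and that input_reference is sent as a form file alongside the other fields; the supported MIME types come from the list above:

```python
import requests

BASE_URL = "https://api.example.com/v1"  # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

SUPPORTED_TYPES = {"image/jpeg", "image/png", "image/webp"}

def check_reference_type(mime):
    """Reject unsupported input_reference formats before uploading."""
    if mime not in SUPPORTED_TYPES:
        raise ValueError(f"unsupported input_reference type: {mime}")
    return mime

def create_video_from_image(prompt, image_path, size="1280x720", seconds=8):
    """POST /videos with an input_reference image as a multipart upload.
    The image's resolution must match `size`."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            f"{BASE_URL}/videos",
            headers=HEADERS,
            data={"prompt": prompt, "model": "sora-2",
                  "size": size, "seconds": seconds},
            files={"input_reference": ("reference.png", f,
                                       check_reference_type("image/png"))},
        )
    resp.raise_for_status()
    return resp.json()
```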
Remix completed videos
Remix lets you take an existing video and make targeted adjustments without regenerating everything from scratch. Provide the remix_video_id of a completed job along with a new prompt that describes the change, and the system reuses the original’s structure, continuity, and composition while applying the modification. This works best when you make a single, well-defined change because smaller, focused edits preserve more of the original fidelity and reduce the risk of introducing artifacts.
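A remix sketch (base URL and auth assumed; the remix_video_id parameter name comes from the description above):

```python
import requests

BASE_URL = "https://api.example.com/v1"  # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def build_remix_payload(remix_video_id, prompt):
    """Body for POST /videos that remixes an existing completed job."""
    return {"remix_video_id": remix_video_id, "prompt": prompt}

def remix_video(remix_video_id, prompt):
    """Start a remix job; returns a new job object with its own id."""
    resp = requests.post(f"{BASE_URL}/videos", headers=HEADERS,
                         json=build_remix_payload(remix_video_id, prompt))
    resp.raise_for_status()
    return resp.json()
```

Keeping the new prompt to one focused change, e.g. `remix_video("video_abc123", "Change the kite's color to blue")`, preserves the most fidelity.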
Maintain your library
Use GET /videos to enumerate your videos. The endpoint supports optional query parameters for pagination and sorting.
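A listing sketch; the base URL and auth are assumed, and `limit` / `after` are assumed cursor-pagination parameter names – check the API reference for the exact ones:

```python
import requests

BASE_URL = "https://api.example.com/v1"  # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def list_params(limit=20, after=None):
    """Query parameters for GET /videos; `limit` and `after` are assumed
    cursor-pagination names."""
    params = {"limit": limit}
    if after is not None:
        params["after"] = after
    return params

def list_videos(limit=20, after=None):
    """Fetch one page of your video history."""
    resp = requests.get(f"{BASE_URL}/videos", headers=HEADERS,
                        params=list_params(limit, after))
    resp.raise_for_status()
    return resp.json()
```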
Call DELETE /videos/{video_id} to remove videos you no longer need from storage.
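A deletion sketch (base URL and auth assumed):

```python
import requests

BASE_URL = "https://api.example.com/v1"  # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def delete_url(video_id):
    """Build the DELETE /videos/{video_id} URL."""
    return f"{BASE_URL}/videos/{video_id}"

def delete_video(video_id):
    """Permanently remove a video from storage."""
    resp = requests.delete(delete_url(video_id), headers=HEADERS)
    resp.raise_for_status()
```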
Errors
Like other OpenAI APIs, this endpoint returns non-2xx HTTP codes with a standard error object when something goes wrong:
- invalid_request_error: The request parameters are invalid or missing
- authentication_error: Authentication failed (missing or invalid API key)
- rate_limit_error: You have hit a rate limit
- insufficient_quota: Your billing plan or credit is insufficient
- model_not_found: The specified model is not available