Transcoding API

The Qencode API is structured around REST. All API methods described below accept POST requests with parameters passed as application/x-www-form-urlencoded and return JSON-encoded responses. If a method call fails, the response contains an 'error' param set to a value greater than 0 (the error code). If it succeeds, the response contains an 'error' param set to 0.

Note: A quick overview on working with our Transcoding API
Method: POST
Params: application/x-www-form-urlencoded
Returns: JSON
Success: Error Code = 0
Failure: Error Code = (a value from our "List of Error Codes and Values" below)

Getting Access Token

POST
/v1/access_token

Qencode uses API keys and tokens to authenticate requests and launch tasks. You can view and manage the API keys associated with your projects inside of your Qencode Account.

To get started, you first need to acquire a session token in order to authenticate and access the Qencode API. You will pass this token as a parameter when you use the /v1/create_task method described below.

Caution:
To build a secure solution, we strongly recommend that you DO NOT call this method directly from any client application, as you will expose your API key publicly. We recommend you first obtain a session token from your server and then pass it to the client app.
Arguments

For transcoding, an API key is assigned to each Project created in your Qencode account. After logging into your account, you can manage your API keys on the Projects page, as well as track the usage of each Project on the Statistics page.

Returns

After API key authentication is complete, you will receive this session based token, which will be used for the next step, the /v1/create_task method.

Request Example

Replace the value below with your API key. An API key can be found for each Project in your account on qencode.com.

curl https://api.qencode.com/v1/access_token \  
   -d api_key=your_api_key
# Python
API_KEY = "your_api_key"
client = qencode.client(API_KEY)
// PHP
$apiKey = 'your_api_key';
$q = new QencodeApiClient($apiKey);
// Java
String apiKey = "your_api_key";
QencodeApiClient client = new QencodeApiClient(apiKey);
// JavaScript
const apiKey = "your_api_key";
const qencodeApiClient = new QencodeApiClient(apiKey);
// C#
var apiKey = "your_api_key";
var q = new QencodeApiClient(apiKey);
Response Example

The token returned should be passed to the /v1/create_task API method.

{
 "token": "1357924680",
 "expire": "2020-12-31 23:59:59"
}

Creating a Task

POST
/v1/create_task

You can create a transcoding task once you have received the token from the /v1/access_token method. The /v1/create_task method uses the token to receive a task_token, which will be used in the next step to define all of your transcoding parameters.

Arguments

Session token returned with /v1/access_token method

Returns

The transcoding task token (Job ID).

Uniquely identifies the transcoding job in the system. You can use this value with the /v1/status method to get the job status.

When uploading videos with the tus.io protocol, upload_url is the endpoint used to upload video files directly to the Qencode servers.
Request Example
curl https://api.qencode.com/v1/create_task \  
   -d token=76682314a86ed377730873394f8172f2
# Python
task = client.create_task()
// PHP
$task = $q->createTask();
// Java
TranscodingTask task = client.CreateTask();
// JavaScript
let task = qencodeApiClient.CreateTask();
// C#
var task = q.CreateTask();
Response Example
{
 "error": 0,
 "upload_url": "https://storage.qencode.com/v1/upload_file",
 "task_token": "471272a512d76c22665db9dcee893409"
}

Starting a Task

POST
/v1/start_encode2

Starts a transcoding job that contains all the parameters needed to transcode your videos into the formats, codecs, and resolutions you need, along with a list of fully customizable settings. This method can use URLs as well as uploaded files for the input, and supports both single and multiple input sources (source and stitch).

Arguments

The token created for this task. See /v1/create_task method.

The query attributes list below contains the majority of parameters that are used for defining the transcoding settings for the task.

Any string up to 1000 characters long that allows you to pass additional data or JSON objects (internal IDs, perhaps) which is sent back later to your server with the HTTP callback.

Returns

Right after the task is launched, the endpoint (https://api.qencode.com/v1/status) is always available to return the basic set of task status attributes. To get extended status attributes (like completion percent), please refer to the /v1/status method.

Input Objects Structure
Attributes

The query is the main part of your API Request since it contains the vast majority of the features and customizable parameters used to define your output settings.

Attributes

Defines a single video's URI, which is either a video URL or an ID returned by the upload_file method.

For direct uploads, the source uri will be in 'tus:<file_uuid>' format. See Direct video upload for more information.

Note: The source and stitch parameters cannot both be used for the same task. The source parameter is used when you have only one source (input) video. The stitch parameter is used if you want to combine several source (input) videos together to form a single output.

Use the stitch parameter in order to combine several input videos into a single one. Stitch should be a JSON list of URLs or video objects. When using the object form you can specify the start_time and duration attributes.

Either specify the "source" or the "stitch" parameter: use "source" in case you have a single file input and "stitch" in case you need to stitch several files together (see the stitch sketch below).
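
For illustration, here is a minimal query sketch combining two inputs with stitch into a single mp4 output. The url attribute name inside each stitch object is an assumption, while start_time and duration are the attributes described above; all URLs are placeholders.

{"query": {
   "stitch": [
     {"url": "https://your-server.com/part1.mp4", "start_time": 10, "duration": 30},
     {"url": "https://your-server.com/part2.mp4"}
   ],
   "format": [
     {"output": "mp4"}
   ]
 }
}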

Each of the objects in this list is used to define all the transcoding parameters for each output format. See the format object attributes description below for more details.
Attributes

Output video format. Currently supported values are mp4, webm, advanced_hls, advanced_dash, webm_dash, repack, mp3, gif, thumbnail, thumbnails, metadata.

See Supported formats section for more details.

The thumbnail & thumbnails values are used for creation of thumbnail images. See the Create thumbnails section for more details.

The repack output type is used for transmuxing, when no transcoding is done and only the media container is changed in the output. See the Transmuxing tutorial for more details.

In order to create a smaller video clip from the source video, start_time is used along with duration to define the portion of the video to be used for the output. Specifies the start time (in seconds) in the input video to begin transcoding from.

Specifies the duration of the video fragment (in seconds) to be transcoded.

Describes the output endpoint, path, credentials and permissions for the destination of the output files created as a result of the API request. You can save to multiple destinations by putting all of your destination objects into an array. Qencode offers a wide range of options for destinations, some of which are covered in our Storage Tutorials section.

Note:
If you don't specify a destination, your video will be available to download from our servers for 24 hours.
Attributes

Specifies the output url. E.g. s3://example.com/bucket/video.mp4.

For 'mp4', 'webm', 'mp3' and 'thumbnail' outputs it should contain path and name of the output file.

For HLS or MPEG-DASH, the destination url should be a path to a folder in your bucket.

For 'thumbnails' output it should be a path to a folder where the thumbnails and .vtt file are saved.

Supported storage prefixes are:

  • s3:// - for any S3-compatible storage (Qencode, AWS, GCS, DigitalOcean, etc.)
  • b2:// - for Backblaze B2
  • azblob:// - for Azure Blob Storage
  • ftp:// or ftps:// - for any FTP server
  • sftp:// - for any FTP over SSH server
Your access key for S3 bucket, or username for FTP server, etc.
Your secret key for S3 bucket, or password for FTP server, etc.

For S3 only. Specifies object access permissions. For AWS possible values are: 'private', 'public-read', 'authenticated-read', 'bucket-owner-read' and others described in Access Control List Overview. Default value is 'private'.

Specify 'public-read' value in order to make output video publicly accessible.

Only for AWS S3. Specifies storage class for the output. You can specify REDUCED_REDUNDANCY value in order to lower your storage costs for noncritical, reproducible data. See Reduced redundancy storage description.
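
As a hedged sketch of the attributes above, the following destination value saves the same output both to an S3-compatible bucket and to an FTP server; hostnames, paths and credentials are placeholders, and a single destination object can be used instead of an array.

"destination": [
  {
    "url": "s3://s3.us-east-1.amazonaws.com/yourbucket/videos/output.mp4",
    "key": "your_access_key",
    "secret": "your_secret_key",
    "permissions": "public-read"
  },
  {
    "url": "ftp://ftp.example.com/videos/output.mp4",
    "key": "your_ftp_username",
    "secret": "your_ftp_password"
  }
]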

Output video frame size in pixels ("width"x"height"). Defaults to original frame size.

For HLS or DASH output specify this parameter on stream object level.

Output video frame (or thumbnail) width in pixels. If specified without "height" attribute, frame height is calculated proportionally to original height.

For HLS or DASH output specify this parameter on stream object level.

Output video frame (or thumbnail) height in pixels. If specified without "width" attribute, frame width is calculated proportionally to original width.

For HLS or DASH output specify this parameter on stream object level.

Rotates the video by the specified number of degrees. Possible values are 90, 180, 270.

For HLS or DASH output specify this parameter on stream object level.

Examples: "4:3", "16:9", "1.33", "1.77". Defaults to input video aspect ratio.

For HLS or DASH output specify this parameter on stream object level.

Specify 'scale' in case you want to transform the frame to fit the output size. Specify 'crop' in case you want to preserve the input video aspect ratio. If the input and output aspect ratios do not match and 'crop' mode is enabled, the output video is cropped or black bars are added, depending on the output dimensions. Possible values: crop, scale. Defaults to 'scale'.

Defaults to original frame rate.

If you don't specify a framerate for a stitch job, the output framerate will be equal to the framerate of the first video in the batch.

For HLS or DASH output specify this parameter on stream object level.

Keyframe interval (in frames). Defaults to 90.

For HLS or DASH output specify this parameter on stream object level.

Also known as "Constant rate factor" (CRF). Use this parameter to produce optimized videos with variable bitrate. For H.264 the range is 0-51: where 0 is lossless and 51 is worst possible. A lower value is a higher quality and a subjectively sane range is 18-28. Consider 18 to be visually lossless or nearly so: it should look the same or nearly the same as the input but it isn't technically lossless.

For HLS or DASH output specify this parameter on stream object level.

Use two pass mode in case you want to achieve exact bitrate values.

For HLS or DASH output specify this parameter on stream object level.

Use two-pass encoding to achieve exact bitrate value for output.

Please note, two-pass encoding is almost twice as slow as one-pass encoding. The price is also doubled.

For HLS or DASH output specify this parameter on stream object level.

Possible values are yuv420p, yuv422p, yuvj420p, yuvj422p. Defaults to yuv420p.

For HLS or DASH output specify this parameter on stream object level.

Defaults to libx264. Possible values are: libx264, libx265, libvpx, libvpx-vp9.

For HLS or DASH output specify this parameter on stream object level.

x264 video codec settings profile. Possible values are high, main, baseline. Defaults to main.

For HLS or DASH output specify this parameter on stream object level.

Contains video codec parameters for advanced usage.

Attributes

x264 video codec settings profile. Possible values are high, main, baseline. Defaults to main.

Set of constraints that indicate a degree of required decoder performance for a profile. Consists of two digits. Possible values are: 30, 31, 40, 41, 42.

Context-Adaptive Binary Arithmetic Coding (CABAC) is the default entropy encoder used by x264. Possible values are 1 and 0. Defaults to 1.

Possible values are +bpyramid, +wpred, +mixed_refs, +dct8x8, -fastpskip/+fastpskip, +aud. Defaults to None.

One of x264's most useful features is the ability to choose among many combinations of inter and intra partitions. Possible values are +partp8x8, +partp4x4, +partb8x8, +parti8x8, +parti4x4. Defaults to None.

Defines motion detection type: 0 - none, 1 - spatial, 2 - temporal, 3 - auto. Defaults to 1.

Motion Estimation method used in encoding. Possible values are epzs, hex, umh, full. Defaults to None.

Sets sub pel motion estimation quality.

Sets rate-distortion optimal quantization.

Number of reference frames each P-frame can use. The range is from 0-16.

Sets full pel me compare function.

Sets limit motion vectors range (1023 for DivX player).

Sets scene change threshold.

Sets QP factor between P and I frames.

Sets strategy to choose between I/P/B-frames.

Sets video quantizer scale compression (VBR). It is used as a constant in the ratecontrol equation. Recommended range for default rc_eq: 0.0-1.0.

Sets the minimum video quantizer scale (VBR). Must be between -1 and 69; the default value is 2.

Sets the maximum video quantizer scale (VBR). Must be between -1 and 1024; the default value is 31.

Sets max difference between the quantizer scale (VBR).

Sets max bitrate tolerance (in bits/s). Requires 'bufsize' to be set.

Tells the encoder how often to calculate the average bitrate and check to see if it conforms to the average bitrate specified.

Sets the scaler flags. This is also used to set the scaling algorithm. Only a single algorithm should be selected. Default value is 'bicubic'.

Specifies the preset for matching stream(s).

Set generic flags.

Possible values: mv4, qpel, loop, qscale, pass1, pass2, gray, emu_edge, psnr, truncated, ildct, low_delay, global_header, bitexact, aic, cbp, qprd, ilme, cgop.

Sets number of frames to look ahead for frametype and ratecontrol.

For HLS or DASH output specify this parameter on stream object level.

Note:
Set this value to 'hvc1' for H.265 encodings in order to enable correct playback on Apple devices.

Enables HDR (high dynamic range) to SDR (standard dynamic range) conversion mode. Possible values: 0 or 1. Defaults to 0.

HDR to SDR conversion can slow down transcoding significantly, so the standard price is multiplied by 2.

Possible values are: aac, libfdk_aac, libvorbis. Defaults to aac.

For HLS or DASH output specify this parameter on the stream object level.

Defaults to 64.

For HLS or DASH output specify this parameter on the stream object level.

Defaults to 44100.

For HLS or DASH output specify this parameter on the stream object level.

Default value is 2.

For HLS or DASH output specify this parameter on the stream object level.

If set to 1, replaces audio in the output with a silent track.

For HLS or DASH output specify this parameter on the stream object level.

Contains a list of elements each describing a single view stream for adaptive streaming format. Use stream objects for HLS or MPEG-DASH outputs.

The stream object is used with HTTP streaming formats (HLS and MPEG-DASH) and specifies a set of attributes defining stream properties. This is a subset of the attributes that work on the format level for file-based output formats like MP4 or WebM, such as size, bitrate, framerate, etc. There are a few attributes used only with the stream object, listed below (see the sketch after these attributes).

Attributes

Specifies custom file name for HLS or DASH chunk playlist.

Segment duration to split media (in seconds). Refers to adaptive streaming formats like HLS or DASH. Defaults to 9.

If set to 1, creates an #EXT-X-I-FRAMES-ONLY playlist for HLS output. Defaults to 0.

If set to 1, creates HLS chunks in fMP4 format instead of TS.
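
To illustrate, here is a hedged sketch of an advanced_hls format object with three renditions. The name of the stream list attribute ("stream") and the bitrate values shown are assumptions; size and bitrate are the stream-level attributes mentioned above, and the destination url points to a folder, as required for HLS output.

{
  "output": "advanced_hls",
  "destination": {
    "url": "s3://s3.us-east-1.amazonaws.com/yourbucket/hls/",
    "key": "your_access_key",
    "secret": "your_secret_key"
  },
  "stream": [
    {"size": "1920x1080", "bitrate": 4000},
    {"size": "1280x720", "bitrate": 2500},
    {"size": "640x360", "bitrate": 800}
  ]
}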

Moment in the video (% of video duration) at which to create the thumbnail. Used with output: thumbnail.

Interval in seconds between thumbnail images. Used with output: thumbnails.

Specifies image format for 'thumbnail' or 'thumbnails' output. Possible values: png, jpg. Defaults to 'png'.

Note: use "quality" parameter along with "image_format": "jpg" to specify image quality.

Contains an object specifying the subtitles (closed captions) configuration. Contains sources - an optional array of subtitle objects for a closed captions stream. Each object should have source and language attributes. You can also include the optional parameter copy, specifying whether eia608 or eia708 closed captions should be copied to the output stream. Copy is set to 0 by default, which means closed captions won't be copied to the output stream.

Attributes
Attributes

URL to a file with subtitles. Supported formats are: .ass, .srt

Specifies language for subtitles.
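
As an illustration, a sketch of a subtitles configuration using the copy, sources, source and language attributes described above. The top-level attribute name ("subtitles") and the language code format are assumptions; the URLs are placeholders.

"subtitles": {
  "copy": 0,
  "sources": [
    {"source": "https://your-server.com/captions_en.srt", "language": "eng"},
    {"source": "https://your-server.com/captions_es.srt", "language": "spa"}
  ]
}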

For streaming formats like HLS or MPEG-DASH specify logo as an attribute of a stream object.

It is a good idea to have logo images of different sizes for output streams of different resolutions.

Attributes
This should be a publicly available URL.
Image X position relative to the video top left corner.
Image Y position relative to the video top left corner.
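
For example, a minimal sketch of a logo object using the x and y position attributes described above; the attribute name for the image URL ("source") is an assumption, and the URL is a placeholder.

"logo": {
  "source": "https://your-server.com/logo_720p.png",
  "x": 20,
  "y": 20
}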

Possible values: rgb, bt709, fcc, bt470bg, smpte170m, smpte240m, ycocg, bt2020nc, bt2020_ncl, bt2020c, bt2020_cl, smpte2085.

Set this to 1 in order to preserve original value.

For HLS or DASH output specify this parameter on stream object level.

MPEG vs JPEG YUV range. Possible values: tv, mpeg, pc, jpeg.

Set this to 1 in order to preserve original value.

For HLS or DASH output specify this parameter on the stream object level.

Possible values: bt709, gamma22, gamma28, smpte170m, smpte240m, linear, log, log100, log_sqrt, log316, iec61966_2_4, bt1361, iec61966_2_1, bt2020_10bit, bt2020_12bit, smpte2084, smpte428, arib-std-b67.

Set this to 1 in order to preserve original value.

For HLS or DASH output specify this parameter on stream object level.

Possible values: bt709, bt470m, bt470bg, smpte170m, smpte240m, film, bt2020, smpte428, smpte431, smpte432, jedec-p22.

Set this to 1 in order to preserve original value.

For HLS or DASH output specify this parameter on stream object level.

Limits the lowest CRF (quality) for Per-Title Encoding mode to the specified value. Possible values: from 0 to 51. Defaults to 0.

Limits the highest CRF (quality) for Per-Title Encoding mode to the specified value. Possible values: from 0 to 51. Defaults to 51.

Adjusts the best CRF predicted for each scene with the specified value in Per-Title Encoding mode. Should be an integer in the range -10..10. Defaults to 0.

The resulting CRF value can only be adjusted within the limits specified with the min_crf and/or max_crf parameters, in case they are applied.

Tag value to pass through encoding system. The value specified for a tag is available as 'user_tag' in job status response.

If specified, enables DRM encryption for Widevine and Playready.

Attributes

If specified, enables DRM encryption for Fairplay.

Attributes

Example for EZDRM: skd://fps.ezdrm.com/;<kid>

If specified, enables AES-128 encryption.

Attributes

URL, pointing to 128-bit encryption key in binary format.

Specifies the ffprobe utility version used to get video metadata. Used with 'output' set to 'metadata'. The default value is 4.1.5.

Used with repack output only. Defaults to 1. If 0 is specified, the audio stream is removed from the output.

Used with repack output only. If 0 is specified, subtitles are removed from the output.

Used with repack output only. If 0 is specified, the video stream is removed from the output.

URL of an endpoint on your server to handle task callbacks.

See Receiving Callbacks.

Send callback on each subtask event (e.g. rendition queued or completed). Possible values: 0 or 1. Defaults to 0. Recommended value is 0 unless you really need to process each rendition separately.

See Receiving Callbacks.

Instructs the encoding system to produce video streams with the exact dimensions specified in the task, even if upscaling is needed to match the specified width and height. Possible values: 0 or 1. Defaults to 0, so if you specify an output height greater than the input, those output formats or streams will be ignored. If all output dimensions are greater than the input and upscale mode is not enabled, only the first output format or stream is applied, but with the dimensions of the input video.

Specifies encoding system version for this task. Possible values are 1 and 2. Defaults to 1.

Request Example
curl https://api.qencode.com/v1/start_encode2 \  
 -d task_token=b49e034d198262f1d5d15ed9f3cb8 \  
 -d payload="12345" \  
 -d query='{"query": {  
     "source": "https://your-server.com/video.mp4",  
     "format": [  
       {  
         "output": "mp4",  
         "destination": {  
           "url":"s3://s3.us-east-1.amazonaws.com/yourbucket/output.mp4",    
           "key":"abcde12345",    
           "secret":"abcde12345",    
           "permissions": "public-read"  
         },  
     "framerate": "29.97",  
     "keyframe": "25",  
     "size": "360x240",  
     "start_time": 10,  
     "duration": 20,  
     "audio_bitrate": 64 
    } 
   ] 
  } 
 }'
params = """ 
  {"query": {  
     "source": "https://your-server.com/video.mp4", 
     "format": [  
       { 
        "output": "mp4", 
        "size": "320x240",
        "video_codec": "libx264"  
       } 
     ] 
   } 
 } 
""" 
task.custom_start(params)
// PHP
$params = '
  {"query": {  
     "source": "https://your-server.com/video.mp4", 
     "format": [  
       { 
        "output": "mp4", 
        "size": "320x240",
        "video_codec": "libx264"  
       } 
     ] 
   } 
 }'; 
$task->startCustom($params);
// Java
//TODO
// JavaScript
let query = {
     "source": "https://your-server.com/video.mp4", 
     "format": [  
       { 
        "output": "mp4", 
        "size": "320x240",
        "video_codec": "libx264"  
       } 
     ] 
}; 
task.StartCustom(query);
// C#
// Load API query from file.
var transcodingParams = CustomTranscodingParams.FromFile("query.json"); 
var started = task.StartCustom(transcodingParams);
Response Example
{
 "error": 0,
 "status_url": "https://api.qencode.com/v1/status"
}

Getting Status of Tasks

POST
/v1/status

Gets the current status of one or more transcoding jobs.

The https://api.qencode.com/v1/status endpoint is used to get quick feedback on whether the job is still running or has already completed.

The master endpoint https://<master>/v1/status lets you get a more complete set of information about a job. This endpoint URL is returned in the status_url attribute of the job's status object.

Arguments

You can use the task tokens returned from the /v1/create_task method to get the current status of several transcoding jobs at the same time.

Returns

Dictionary containing status for each requested task token. Keys are task tokens and values contain status information for each job. See status object attributes description below.
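
Request Example

A hedged request sketch: the task_tokens parameter name and the JSON-list form of its value are assumptions based on the description above, and the tokens shown are placeholders taken from the earlier examples.

curl https://api.qencode.com/v1/status \
   -d task_tokens='["471272a512d76c22665db9dcee893409", "b49e034d198262f1d5d15ed9f3cb8"]'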

Output Objects Structure
Attributes
Contains all information about task status. Example provided below.
{
  "status": "encoding",
  "videos": [
    {
      "status": "encoding",
      "profile": null,
      "url": null,
      "percent": 0.0,
      "output_format": null,
      "storage": null,
      "meta": {
        "height": 720,
        "resolution_height": 720,
        "resolution_width": 1280,
        "resolution": 720,
        "width": 1280
      },
      "error_description": null,
      "error": false,
      "duration": "None",
      "tag": "video-0-0",
      "user_tag": null,
      "size": null
    },
    {
      "status": "encoding",
      "profile": null,
      "url": null,
      "percent": 7.1428571428571423,
      "output_format": null,
      "storage": null,
      "meta": {
        "height": 240,
        "resolution_height": 240,
        "resolution_width": 352,
        "resolution": 240,
        "width": 352
      },
      "error_description": null,
      "error": false,
      "duration": "None",
      "tag": "video-0-1",
      "user_tag": null,
      "size": null
    }
  ],
  "percent": 3.5714285714285712,
  "source_size": 69916569.0,
  "audios": [],
  "images": [],
  "error": 0,
  "duration": 596.52099999999996
}
Attributes

See possible status values description below.

downloading - Video is being downloaded to the Qencode server.
queued - Task is waiting for available encoders.
encoding - Video is being transcoded.
saving - Video is being saved to the destination location.
completed - The transcoding job has completed successfully and the videos were saved to the destination.

Endpoint to get the most up-to-date job status.

You should always get the job status using the endpoint specified in the last returned status_url value.

Overall completion percent for the job. Currently refers only to 'encoding' status.

Equals 0 if there's no error and 1 in case of any error.

Contains error message.

List of objects, each containing output video status information.

See video status object attributes description below.

Attributes

Possible values are listed in the status attribute description.