Transcoding API

The Qencode API is structured around REST. All API methods described below accept POST requests with parameters encoded as application/x-www-form-urlencoded and return JSON-encoded responses. On failure, the response contains an 'error' param set to a value greater than 0 (the error code). On success, the response contains an 'error' param set to 0.

note
Note: A quick overview of working with our Transcoding API

  • Method: POST
  • Params: application/x-www-form-urlencoded
  • Returns: JSON
  • Success: Error Code = 0
  • Failure: Error Code = (a value from our "List of Error Codes and Values" below)
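Since every method signals success or failure through this error code, client code can centralize the check before reading any other response fields. A minimal Python sketch (the helper name and sample responses below are illustrative, not part of the API):

```python
# Every Qencode API response carries an "error" field:
# 0 means success, any value greater than 0 is an error code.
def check_response(response: dict) -> dict:
    """Return the response if it reports success, raise otherwise."""
    if response.get("error", 0) != 0:
        raise RuntimeError(f"Qencode API error code {response['error']}")
    return response

# Illustrative responses:
ok = {"error": 0, "token": "1357924680"}
print(check_response(ok)["token"])  # prints 1357924680
```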

Getting Access Token

POST
/v1/access_token

Qencode requires an api_key to generate a session-based token used to authenticate requests and launch tasks. You can view and manage the API keys associated with your Projects inside of your Qencode Account.

To get started, use the /v1/access_token method to acquire the session-based token, which you will use to authenticate all other requests through the Qencode API.

warning
Caution:
To build a secure solution, we strongly recommend that you DO NOT call this method directly from any client application, as doing so exposes your API key publicly. Instead, obtain a session token from your server and then pass it to the client app.
Arguments

For transcoding, an API key is assigned to each Project created in your Qencode account. After logging into your account, you can manage your API keys on the Projects page, as well as track the usage of each Project on the Statistics page.

Returns

After API key authentication is complete, you will receive this session-based token, which is used to authenticate all other requests through the Qencode API.

Request Example

Replace the value below with your API key. An API key can be found for each Project in your account on qencode.com.

curl:

curl https://api.qencode.com/v1/access_token \
   -d api_key=your_api_key

Python:

API_KEY = "your_api_key"
client = qencode.client(API_KEY)

PHP:

$apiKey = 'your_api_key';
$q = new QencodeApiClient($apiKey);

Java:

String apiKey = "your_api_key";
QencodeApiClient client = new QencodeApiClient(apiKey);

Node.js:

const apiKey = "your_api_key";
const qencodeApiClient = new QencodeApiClient(apiKey);

C#:

var apiKey = "your_api_key";
var q = new QencodeApiClient(apiKey);
Response Example

The token returned should be passed to the /v1/create_task API method.

{
 "token": "1357924680",
 "expire": "2020-12-31 23:59:59"
}
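Because the token is session-based and the response includes an expire timestamp, it is worth caching the token and refreshing it before expiry. A hedged sketch, assuming the timestamp format shown in the example above (the helper itself is illustrative):

```python
from datetime import datetime

def token_expired(response: dict, now: datetime) -> bool:
    """Check a cached /v1/access_token response against the current time."""
    # "expire" uses the "YYYY-MM-DD HH:MM:SS" format shown above.
    expire = datetime.strptime(response["expire"], "%Y-%m-%d %H:%M:%S")
    return now >= expire

resp = {"token": "1357924680", "expire": "2020-12-31 23:59:59"}
print(token_expired(resp, datetime(2021, 1, 1)))  # prints True
```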

Creating a Task

POST
/v1/create_task

Once you have received your token, use the /v1/create_task method to receive the task_token, which is used to define your transcoding parameters in the next step. You will also receive an upload_url which can be used for direct uploads via the /v1/upload_file method.

Arguments

This session token is used to get the task_token, which is required to start transcoding jobs.

Returns

The transcoding task token (also referred to as Job ID) uniquely identifies a transcoding job in the system. You can use this value to track down jobs in the UI, or as an argument to the /v1/status method to get the status of jobs.

When uploading videos with the tus.io protocol, upload_url is the endpoint used to upload video files directly to the Qencode servers.
Request Example
curl:

curl https://api.qencode.com/v1/create_task \
   -d token=76682314a86ed377730873394f8172f2

Python:

task = client.create_task()

PHP:

$task = $q->createTask();

Java:

TranscodingTask task = client.CreateTask();

Node.js:

let task = qencodeApiClient.CreateTask();

C#:

var task = q.CreateTask();
Response Example
{
 "error": 0,
 "upload_url": "https://storage.qencode.com/v1/upload_file",
 "task_token": "471272a512d76c22665db9dcee893409"
}

Starting a Task

POST
/v1/start_encode2

Starts a transcoding job that contains all the parameters needed to transcode your videos into the formats, codecs, and resolutions you need, along with a list of fully customizable settings. Pass the task_token with the /v1/start_encode2 method, along with the query JSON object containing the request with your transcoding parameters. You can also include the payload parameter to send additional data with the task that can be retrieved in the future. The source parameter within the query JSON object can also be used for uploads by specifying 'tus:<file_uuid>'. You can also use the stitch parameter to stitch multiple videos together into one. See the Tutorial for Stitching Videos.

Arguments

The token created for this task. See the /v1/create_task method.

The query JSON object contains all of the input attributes used for defining the transcoding settings for the task.

Any string up to 1000 characters long that allows you to pass additional data or JSON objects (internal IDs, perhaps) which are sent back later to your server with an HTTP callback.
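Put together, the form fields for /v1/start_encode2 are the task_token, a JSON-encoded query, and the optional payload string. A minimal sketch of assembling the request body in Python (the source URL and output settings are placeholders, not recommendations):

```python
import json

# Placeholder transcoding parameters for a single MP4 output.
query = {
    "query": {
        "source": "https://example.com/source/video.mp4",
        "format": [
            {
                "output": "mp4",
                "size": "1280x720",
                "video_codec": "libx264",
            }
        ],
    }
}

# The form-urlencoded fields sent to /v1/start_encode2.
body = {
    "task_token": "471272a512d76c22665db9dcee893409",
    "query": json.dumps(query),      # query is passed as a JSON string
    "payload": "my-internal-id-42",  # optional, up to 1000 characters
}
print(json.loads(body["query"])["query"]["format"][0]["output"])  # prints mp4
```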

Returns

Right after the task is launched, the endpoint https://api.qencode.com/v1/status is always available to receive the basic set of task status attributes. To get extended status attributes (like completion percent), please refer to the /v1/status method.

Input Objects Structure
Attributes

The query is the main part of your API Request since it contains the vast majority of the features and customizable parameters used to define your output settings.

Attributes

The URI is responsible for defining a single video's URI, whether it's a video URL or an ID returned by the upload_file method.

For direct uploads, the source URI will be in 'tus:<file_uuid>' format. See Direct video upload for more information.

Note: Both the source and the stitch parameters cannot be used for the same task. The source parameter is used for when you only have one source (input) video. The stitch parameter is used if you want to combine several source (input) videos together to form a single output.

Use the stitch parameter in order to combine several input videos into a single one. Stitch should be a JSON list of URLs or video objects. When using the object form, you can specify start_time and duration attributes.

Either specify the "source" or the "stitch" parameter: use "source" in case you have a single file input and "stitch" in case you need to stitch several files together.
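Since source and stitch are mutually exclusive, a client can validate the query object before submitting the task. A small illustrative check (the function is not part of the API):

```python
def validate_input(query: dict) -> None:
    """Require exactly one of "source" or "stitch" in a query object."""
    if ("source" in query) == ("stitch" in query):
        raise ValueError('Specify exactly one of "source" or "stitch".')

# Single input file:
validate_input({"source": "https://example.com/video.mp4"})
# Several files stitched into one output:
validate_input({"stitch": ["https://example.com/a.mp4",
                           "https://example.com/b.mp4"]})
```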

Each of the objects in this list is used to define all the transcoding parameters for each output format. See the format object attributes description below for more details.
Attributes

Output video format. Currently supported values are mp4, webm, advanced_hls, advanced_dash, webm_dash, repack, mp3, hls_audio, gif, thumbnail, thumbnails, metadata.

See Supported formats section for more details.

The thumbnail & thumbnails values are used for creation of thumbnail images. See the Create thumbnails section for more details.

Repack output type is used for transmuxing, when no transcoding is done and only the media container is changed in the output. See the Transmuxing tutorial for more details.

In order to create a smaller video clip from the source video, start_time is used along with duration to define the point of the video to be used for the output video. Specifies the start time (in seconds) in input video to begin transcoding from.

Specifies duration (in seconds) of the output audio or video clip to be transcoded.

Describes the output endpoint, path, credentials and permissions for the destination of the output files created as a result of the API request. You can save to multiple destinations by putting all of your destination objects into an array. Qencode offers a wide range of options for destinations, some of which are covered in our Storage Tutorials section.

note
Note:
If you don't specify a destination, your video will be available to download from our servers for 24 hours.
Attributes

Specifies the output url. E.g. s3://example.com/bucket/video.mp4.

For 'mp4', 'webm', 'mp3' and 'thumbnail' outputs it should contain path and name of the output file.

For HLS or MPEG-DASH the destination url should be a path to a folder in your bucket.

For 'thumbnails' output it should be a path to the folder where the thumbnails and .vtt file are saved.

Supported storage prefixes are:

  • s3:// - for any S3-compatible storage (Qencode, AWS, GCS, DigitalOcean, etc.)
  • b2:// - for Backblaze B2
  • azblob:// - for Azure Blob Storage
  • ftp:// or ftps:// - for any FTP server
  • sftp:// - for any FTP over SSH server

    Your access key for S3 bucket, or username for FTP server, etc.
    Your secret key for S3 bucket, or password for FTP server, etc.

    For S3 only. Specifies object access permissions. For AWS possible values are: 'private', 'public-read', 'authenticated-read', 'bucket-owner-read' and others described in Access Control List Overview. Default value is 'private'.

    Specify 'public-read' value in order to make output video publicly accessible.

    Only for AWS S3. Specifies storage class for the output. You can specify REDUCED_REDUNDANCY value in order to lower your storage costs for noncritical, reproducible data. See Reduced redundancy storage description.
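As JSON, a destination (or a list of them, when saving to multiple storages) might look like the sketch below. The url attribute follows the prefixes above; the key names for the credentials and permissions fields are assumptions based on the descriptions and should be checked against your account docs. All URLs and credentials are placeholders:

```python
# Saving one output to two storages at once: destination can be an array.
destination = [
    {
        "url": "s3://example.com/bucket/video.mp4",
        "key": "your_s3_access_key",     # access key for the S3 bucket
        "secret": "your_s3_secret_key",  # secret key for the S3 bucket
        "permissions": "public-read",    # assumed key name; makes output public
    },
    {
        "url": "ftp://ftp.example.com/videos/video.mp4",
        "key": "ftp_username",           # username for the FTP server
        "secret": "ftp_password",        # password for the FTP server
    },
]
```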

    Output video or image frame size in pixels ("width"x"height"). Defaults to original frame size.

    For HLS or DASH output specify this parameter on stream object level.

    Output video frame (or thumbnail) width in pixels. If specified without "height" attribute, frame height is calculated proportionally to original height.

    For HLS or DASH output specify this parameter on stream object level.

    Output video frame (or thumbnail) height in pixels. If specified without "width" attribute, frame width is calculated proportionally to original width.

    For HLS or DASH output specify this parameter on stream object level.

    Rotate video through specified degrees value. Possible values are 90, 180, 270.

    For HLS or DASH output specify this parameter on stream object level.

    Examples: "4:3", "16:9", "1.33", "1.77". Defaults to input video aspect ratio.

    For HLS or DASH output specify this parameter on stream object level.

    Specify 'scale' in case you want to transform frame to fit output size. Specify 'crop' in case you want to preserve input video aspect ratio. In case input and output aspect ratio do not match and 'crop' mode is enabled, output video is cropped or black bars are added, depending on the output dimensions. Possible values: crop, scale. Defaults to 'scale'.

    Defaults to original frame rate.

    If you don't specify framerate for a stitch job, output framerate will be equal to framerate of the first video in a batch.

    For HLS or DASH output specify this parameter on stream object level.

    Keyframe interval (in frames). Defaults to 90.

    For HLS or DASH output specify this parameter on stream object level.

    You can also specify keyframe interval in seconds, just by adding "s" character to the value, e.g. "3s"

    Also known as "Constant rate factor" (CRF). Use this parameter to produce optimized videos with variable bitrate. For H.264 the range is 0-51: where 0 is lossless and 51 is worst possible. A lower value is a higher quality and a subjectively sane range is 18-28. Consider 18 to be visually lossless or nearly so: it should look the same or nearly the same as the input but it isn't technically lossless.

    For HLS or DASH output specify this parameter on stream object level.

    Use two-pass mode in case you want to achieve exact bitrate values.

    For HLS or DASH output specify this parameter on stream object level.

    Use two-pass encoding to achieve an exact bitrate value for the output.

    Please note, two-pass encoding is almost twice as slow as one-pass encoding. The price is also doubled.

    For HLS or DASH output specify this parameter on stream object level.

    Possible values are yuv420p, yuv422p, yuvj420p, yuvj422p. Defaults to yuv420p.

    For HLS or DASH output specify this parameter on stream object level.

    Defaults to libx264. Possible values are: libx264, libx265, libvpx, libvpx-vp9, lcevc_h264, lcevc_hevc.

    For HLS or DASH output specify this parameter on stream object level.

    x264 video codec settings profile. Possible values are high, main, baseline. Defaults to main.

    For HLS or DASH output specify this parameter on stream object level.

    Contains video codec parameters for advanced usage.

    Attributes

    x264 video codec settings profile. Possible values are high, main, baseline. Defaults to main.

    Set of constraints that indicate a degree of required decoder performance for a profile. Consists of two digits. Possible values are: 30, 31, 40, 41, 42.

    Context-Adaptive Binary Arithmetic Coding (CABAC) is the default entropy encoder used by x264. Possible values are 1 and 0. Defaults to 1.

    Possible values are +bpyramid, +wpred, +mixed_refs, +dct8x8, -fastpskip/+fastpskip, +aud. Defaults to None.

    One of x264's most useful features is the ability to choose among many combinations of inter and intra partitions. Possible values are +partp8x8, +partp4x4, +partb8x8, +parti8x8, +parti4x4. Defaults to None.

    Defines motion detection type: 0 - none, 1 - spatial, 2 - temporal, 3 - auto. Defaults to 1.

    Motion Estimation method used in encoding. Possible values are epzs, hex, umh, full. Defaults to None.

    Sets sub pel motion estimation quality.

    Sets rate-distortion optimal quantization.

    Number of reference frames each P-frame can use. The range is from 0-16.

    Sets full pel me compare function.

    Sets limit motion vectors range (1023 for DivX player).

    Sets scene change threshold.

    Sets QP factor between P and I frames.

    Sets strategy to choose between I/P/B-frames.

    Sets video quantizer scale compression (VBR). It is used as a constant in the ratecontrol equation. Recommended range for default rc_eq: 0.0-1.0.

    Sets min video quantizer scale (VBR). Must be included between -1 and 69, default value is 2.

    Sets max video quantizer scale (VBR). Must be included between -1 and 1024, default value is 31.

    Sets max difference between the quantizer scale (VBR).

    Sets max bitrate tolerance. Requires 'bufsize' to be set.

    For libx264 max_rate is specified in Mbps. For other codecs - in kbps.

    Sets min bitrate tolerance (in bits/s). Most useful in setting up a CBR encode. It is of little use elsewise.

    For libx264 min_rate is specified in Mbps. For other codecs - in kbps.

    Tells the encoder how often to calculate the average bitrate and check to see if it conforms to the average bitrate specified.

    For libx264 bufsize is specified in Mbps. For other codecs - in kbps.

    Sets the scaler flags. This is also used to set the scaling algorithm. Only a single algorithm should be selected. Default value is 'bicubic'.

    Specifies the preset for matching stream(s).

    Set generic flags.

    Possible values: mv4, qpel, loop, qscale, pass1, pass2, gray, emu_edge, psnr, truncated, ildct, low_delay, global_header, bitexact, aic, cbp, qprd, ilme, cgop.

    Sets number of frames to look ahead for frametype and ratecontrol.

    Applies to lcevc video codecs enhancements only.

    There are six variants of lcevc_tune, according to the aim of the encode. Depending on the chosen tuning, the encoder will combine optimal settings and parameters according to that goal. The settings are as follows:

      • vq - optimizes for visual quality. Default.
      • vmaf - optimizes for VMAF
      • vmaf_neg - optimizes for the new VMAF NEG (No Enhancement Gain)
      • psnr - optimizes for PSNR
      • ssim - optimizes for SSIM, MS-SSIM
      • animation - an alternative to 'vq', optimizes for visual quality of animation

    Applies to lcevc video codecs enhancements only.

    Specifies the scaling mode for the base encoder picture in the LCEVC hierarchy. In combination with the associated rate control strategies, 2D, 1D and 0D influence the relative allocation of bitrate to the low-, medium- and high-frequency portions of the content.

      • 2D - two-dimensional 2:1 scaling. E.g. for a 1920x1080 video, the base layer is 960x540. Default for resolutions of 720p and above.
      • 1D - horizontal-only 2:1 scaling. E.g. for a 1920x1080 video, the base layer is 960x1080. This mode is recommended at high bits per pixel (e.g. full HD above 5 Mbps) or low resolutions (e.g. 540p or below), especially for content with high amounts of relatively low-contrast high-frequency detail. Default for resolutions lower than 720p.
      • 0D - no scaling. Currently this mode can be used exclusively for Native mode (see section 4.2.3). 0D with LCEVC (encoding_mode=enhanced) will be supported in a future release.

    Applies to lcevc video codecs enhancements only.

    Specifies whether to apply a uniform dithering algorithm.

    If None is specified, no dithering is applied. Default for lcevc_tune psnr, vmaf and ssim.

    If Uniform is specified, Uniform random dithering is applied. Default for lcevc_tune vq.

    Applies to lcevc video codecs enhancements only.

    Specifies the maximum dithering strength. Range: 0-10.

    • The default value is 4.
    • A value of 7-8 displays a more visible dither.
    • A value of 2-3 should be used for substantially imperceptible dither.

      Applies to lcevc video codecs enhancements only.

      Specifies the base QP value at which to start applying dither. Range: 0-51. Default: 24.

      Applies to lcevc video codecs enhancements only.

      Specifies the base QP value at which to saturate dither. Range: 0-51. Default: 36.

      Regardless of the base QP value, other low-level parameters adapt dithering strength based on frame luminosity (according to the contrast sensitivity function), as well as the presence of no-contrast plain graphics which would not benefit from dithering.

      Specifies the M adaptive downsampling mode.

      • disabled - M adaptive downsampling disabled. Default for lcevc_tune=psnr, lcevc_tune=ssim and lcevc_tune=vmaf_neg.
      • replace - M adaptive downsampling is applied equally to both residual surfaces. Default for lcevc_tune=vq and lcevc_tune=vmaf.
      • separate - M adaptive downsampling is applied separately to residual surfaces. Default for lcevc_tune=animation.

      Applies to lcevc video codecs enhancements only.

      Allows you to increase or decrease the energy of high frequencies, with 0 being a preference for softer details. Default values are modified adaptively by the encoder if you do not specify anything.

      Applies to lcevc video codecs enhancements only.

      Allows you to modify the way in which full resolution details are separated from the mid-to-low frequencies that are passed as low resolution to the base codec. Default values are modified adaptively by the encoder if you do not specify anything.

      For HLS or DASH output specify this parameter on stream object level.

      note
      Note:
      Set this value to 'hvc1' for H.265 encodings in order to enable correct playback on Apple devices.

      Enables HDR (high dynamic range) to SDR (standard dynamic range) conversion mode. Possible values: 0 or 1. Defaults to 0.

      HDR to SDR conversion can slow down transcoding significantly, so the standard price is multiplied by 2.

      Possible values are: aac, libfdk_aac, libvorbis. Defaults to aac.

      For HLS or DASH output specify this parameter on the stream object level.

      Defaults to 64.

      For HLS or DASH output specify this parameter on the stream object level.

      Defaults to 44100.

      For HLS or DASH output specify this parameter on the stream object level.

      Default value is 2.

      For HLS or DASH output specify this parameter on the stream object level.

      If set to 1, replaces audio in the output with a silent track.

      For HLS or DASH output specify this parameter on the stream object level.

      Contains a list of elements each describing a single view stream for adaptive streaming format. Use stream objects for HLS or MPEG-DASH outputs.

      A stream object is used with HTTP streaming formats (HLS and MPEG-DASH) and specifies a set of attributes defining stream properties. This is a subset of the attributes that work at the Format level for file-based output formats like MP4 or WEBM: size, bitrate, framerate, etc. A few attributes used only with the Stream object are listed below.

      Attributes

      Specifies custom file name for HLS or DASH chunk playlist.

      Segment duration to split media (in seconds). Refers to adaptive streaming formats like HLS or DASH. Defaults to 9.

      If set to 1, creates an #EXT-X-I-FRAMES-ONLY playlist for HLS output. Defaults to 0.

      If set to 1, creates HLS chunks in fMP4 format instead of TS.

      If set to 1, creates playlist.m3u8 file in the DASH output. Use this to generate CMAF.

      Specifies whether the audio stream is saved as a separate HLS folder or muxed into the video chunks.

      If set to 1, creates audio as a separate HLS stream. Defaults to 1.

      By default, HLS streams are put into sub-folders named video_1, video_2, etc. You can change this behavior by enabling this setting, so that all chunks and playlists are saved into the same folder. Defaults to 0.
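Combining the attributes above, an advanced_hls format object with a stream list might be sketched as follows. The stream attribute is documented above; the segment_duration and bitrate key names are assumptions based on the descriptions, and all values are placeholders:

```python
# An adaptive HLS output: per-rendition settings live on stream objects.
hls_format = {
    "output": "advanced_hls",
    "segment_duration": 6,   # assumed key name; chunk length in seconds
    "stream": [
        {"size": "1920x1080", "bitrate": 4000},
        {"size": "1280x720",  "bitrate": 2500},
        {"size": "854x480",   "bitrate": 1200},
    ],
}
```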

      Moment in the video (% of video duration) at which to create the thumbnail. Used with output: thumbnail.

      Interval in seconds between thumbnail images. Used with output: thumbnails.

      Specifies image format for 'thumbnail' or 'thumbnails' output. Possible values: png, jpg. Defaults to 'png'.

      Note: use "quality" parameter along with "image_format": "jpg" to specify image quality.

      Contains an object specifying the subtitles (closed captions) configuration. Contains sources - an optional array of subtitle objects for a closed captions stream. Each object should have source and language attributes. You can also include the optional parameter copy, specifying whether eia608 or eia708 closed captions should be copied to the output stream. Copy is set to 0 by default, which means closed captions won't be copied to the output stream.

      Attributes
      Attributes

      URL to a file with subtitles. Supported formats are: .ass, .srt

      Specifies language for subtitles.
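As JSON, a subtitles configuration using the sources, source, language, and copy attributes described above might look like this (the URLs are placeholders):

```python
subtitles = {
    "sources": [
        {"source": "https://example.com/subs/en.srt", "language": "eng"},
        {"source": "https://example.com/subs/es.srt", "language": "spa"},
    ],
    "copy": 0,  # default: eia608/eia708 captions are not copied to the output
}
```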

      For streaming formats like HLS or MPEG-DASH, specify the logo as an attribute of a stream object.

      It is a good idea to have different-size logo images for output streams of different resolutions.

      Attributes
      This should be a publicly available URL.
      Image X position relative to the video's top left corner.
      Image Y position relative to the video's top left corner.
      Specifies watermark opacity. Possible values are floats in the 0..1 range. Defaults to 1.
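A logo (watermark) object per the attributes above might be sketched as below; the exact key names (source, x, y, opacity) are assumptions based on the descriptions, and the image URLs are placeholders:

```python
# Different logo sizes for different renditions, one per stream object.
streams = [
    {"size": "1920x1080",
     "logo": {"source": "https://example.com/logo_large.png",
              "x": 40, "y": 40, "opacity": 0.8}},
    {"size": "854x480",
     "logo": {"source": "https://example.com/logo_small.png",
              "x": 20, "y": 20, "opacity": 0.8}},
]
```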

      Possible values: rgb, bt709, fcc, bt470bg, smpte170m, smpte240m, ycocg, bt2020nc, bt2020_ncl, bt2020c, bt2020_cl, smpte2085.

      Set this to 1 in order to preserve original value.

      For HLS or DASH output specify this parameter on stream object level.

      MPEG vs JPEG YUV range. Possible values: tv, mpeg, pc, jpeg.

      Set this to 1 in order to preserve original value.

      For HLS or DASH output specify this parameter on the stream object level.

      Possible values: bt709, gamma22, gamma28, smpte170m, smpte240m, linear, log, log100, log_sqrt, log316, iec61966_2_4, bt1361, iec61966_2_1, bt2020_10bit, bt2020_12bit, smpte2084, smpte428, arib-std-b67.

      Set this to 1 in order to preserve original value.

      For HLS or DASH output specify this parameter on stream object level.

      Possible values: bt709, bt470m, bt470bg, smpte170m, smpte240m, film, bt2020, smpte428, smpte431, smpte432, jedec-p22.

      Set this to 1 in order to preserve original value.

      For HLS or DASH output specify this parameter on stream object level.

      In Per-Title mode, the system runs a special analysis on each source video to find the best encoding params for each scene. This allows you to significantly decrease output bitrate without sacrificing quality. Currently available for H.264 outputs only.

      Possible values: 0 or 1. Defaults to 0.

      Enabling the optimize_bitrate option multiplies the price by 1.5x.

      Limits the lowest CRF (quality) for Per-Title Encoding mode to the specified value. Possible values: from 0 to 51. Defaults to 0.

      Limits the highest CRF (quality) for Per-Title Encoding mode to the specified value. Possible values: from 0 to 51. Defaults to 51.

      Adjusts the best CRF predicted for each scene by the specified value in Per-Title Encoding mode. Should be an integer in the range -10..10. Defaults to 0.

      The resulting CRF value can only be adjusted within the limits specified by the min_crf and/or max_crf parameters, if applied.

      Tag value to pass through encoding system. The value specified for a tag is available as 'user_tag' in job status response.

      For HLS or DASH output specify this parameter on stream object level.

      If specified, enables DRM encryption for Widevine and Playready.

      Attributes

      When getting it from a CPIX response, it should be decoded from base64 and encoded to hex.

      When getting it from a CPIX response, you need to remove the dash characters from it.

      Should be specified in case it is present in the DRM provider API response; e.g. in a CPIX response this is the explicitIV attribute in the tag.

      When getting it from a CPIX response, it should be decoded from base64 and encoded to hex.

      If specified, enables DRM encryption for Fairplay.

      Attributes

      Example for EZDRM: skd://fps.ezdrm.com/;<kid>

      If specified, enables AES-128 encryption.

      Attributes

      URL pointing to a 128-bit encryption key in binary format.